Using 10Gb adaptor in PCIe 1x slot (An Unraid 10Gb journey)


KptnKMan

Recommended Posts

Hi everyone,

I have 2 unRAID servers (details in signature), both working pretty well, and I've been looking at upgrading my networking to 10Gb for some time.

 

Trouble is, I've been all over the internet for months looking for details on the compatibility of running 10Gb adaptors in a PCIe x1 slot.

 

I have an ASUS TUF GAMING X570-PLUS (WI-FI) motherboard, with a limited number of slots (Most of them 1x).

It is a PCIe 4.0 motherboard so, in my case, my PCIe 4.0 x1 slot is capable of 1.97GB/s or 15.76Gb/s.

 

What I'm trying to figure out is:

1. What is a recommended PCIe x4 10Gb NIC I can use?
2. Can I plug a PCIe x4/x8 10Gb NIC into my PCIe 4.0 x1 slot and get full 10Gb speed?
3. Will my motherboard simply negotiate the PCIe 4.0 x1 slot down to the card's PCIe 2.0/3.0 speed, cutting the bandwidth to 985MB/s or 7.88Gb/s at PCIe 3.0? (See the quick per-lane numbers sketched below.)
4. Has anyone used a 10Gb x4 NIC in a PCIe 4.0 x1 slot, and what was your experience?
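
For reference, here's the back-of-the-envelope per-lane maths I'm working from (usable rate after encoding overhead; real-world numbers land a bit lower again due to protocol overhead). Just a rough Python sketch, nothing authoritative:

# Rough per-lane PCIe throughput: line rate minus encoding overhead.
# Gen 1/2 use 8b/10b encoding, Gen 3/4 use 128b/130b.
GENS = {  # generation: (GT/s per lane, encoding efficiency)
    "PCIe 1.0": (2.5, 8 / 10),
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),
    "PCIe 4.0": (16.0, 128 / 130),
}

for gen, (gts, eff) in GENS.items():
    gbps = gts * eff  # usable Gb/s per lane
    print(f"{gen} x1: {gbps:5.2f} Gb/s = {gbps / 8:.3f} GB/s")

# PCIe 1.0 x1:  2.00 Gb/s = 0.250 GB/s
# PCIe 2.0 x1:  4.00 Gb/s = 0.500 GB/s
# PCIe 3.0 x1:  7.88 Gb/s = 0.985 GB/s
# PCIe 4.0 x1: 15.75 Gb/s = 1.969 GB/s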

 

I'm asking this because there is a lot of THEORETICAL talk out there, but I haven't found any actual real-world cases.

Really hoping someone can help.

 

Is anyone out there running a 10Gb x4 or x8 card in an x1 slot?

What was your experience?

 

Thanks.

 

Update 2021-10-24:

This thread turned into a story of my journey to 10Gb, and I've tried to include as much information as possible about my hardware choices, purchases, performance, troubleshooting, questions, etc. I hope at least someone may benefit from this information being available later.

Feel free to chime in, comment or ask questions if you are on a similar journey or have relevant questions (Mostly surrounding 10Gb networking using Unraid).

Edited by KptnKMan
Link to comment
  • 3 weeks later...

...maybe the low response to your question isn't related to the basic nature of your question about functionality and performance in an x1 slot, but rather to the fact that PCIe 4.x cards do exist, but...damn, they are *very* expensive!

 

In fact, the only ones I know of are the latest NVIDIA Mellanox ConnectX-5 models (2x QSFP28 - 40Gbps) ...price tag over here is approx 850USD / 725EUR 🤐

Edited by Ford Prefect
Link to comment

Hey thanks for responding, but I think maybe there is a misunderstanding.

 

I'm talking about PCIe x4/x8 cards (as in a PCIe 1.0/2.0/3.0 card in an x4/x8 physical form factor), not PCIe 4.0/4.x cards (as in the PCIe 4.0 standard).

I'm definitely not hoping/looking/dreaming to find a PCIe 4.0 card anytime soon.

 

I've been looking at a card like the Mellanox ConnectX-3, as in this eBay listing.

It's a PCIe x4 card, and I figure someone on this forum has to have one, maybe even in an x1 slot. ¯\_(ツ)_/¯

 

Really just trying to find out if it will work and, if so, at what speed.

 

Any ideas?

Link to comment

Thanks @JorgeB, I've been trying to read through various manuals and such to see if the Mellanox ConnectX-3 will work in an x1 slot, but I can't find any info anywhere.

I figure I'll probably just hack one of my x1 slots open and see how it goes; I've seen a lot of guides online on how others have managed it.
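
Once a card is in, an easy way to see what link it actually negotiated is to read the standard PCIe attributes Linux exposes under sysfs (Unraid is Linux, so these paths should exist). A minimal Python sketch; the filtering on physical interfaces is just an assumption about how the NICs show up on your system:

# Report the negotiated PCIe link for each physical network interface.
# Reads the standard sysfs attributes current_link_speed/current_link_width.
from pathlib import Path

def link_info(iface: str) -> str:
    dev = Path(f"/sys/class/net/{iface}/device")  # symlink to the PCI device
    def read(name: str) -> str:
        p = dev / name
        return p.read_text().strip() if p.exists() else "n/a"
    return (f"{iface}: negotiated {read('current_link_speed')} x{read('current_link_width')} "
            f"(max {read('max_link_speed')} x{read('max_link_width')})")

for iface in sorted(Path("/sys/class/net").iterdir()):
    if (iface / "device").exists():  # skip virtual interfaces (bond0, br0, lo, ...)
        print(link_info(iface.name))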

 

@Ford Prefect I think I could be happy with 8Gbps from a 10Gbps card.

I'm currently teaming to achieve 3Gb/s, so that would be a huge improvement.

Still wondering if it will work, but from what I've seen in other places, it should.

Link to comment

From the Wikipedia page:
[Image: Wikipedia's table of per-lane PCIe throughput by generation]

 

In an ideal world, I'd be able to get 1.969 GB/s or 15.752 Gb/s.

But I think I'd be happy with 0.985 GB/s or 7.88 Gb/s.

 

Still trying to figure out if I should invest in fiber on the Mellanox ConnectX-3 like this, or just stick with Ethernet all the way through and get an Intel card like this.

 

The ConnectX-3 is clearly cheaper, but the cost of fiber cables and transceivers doubles the price of the card.

I could of course go with direct attached copper cables, but I'm unsure.

 

I'm starting fresh here, so I'm hoping for some advice on where to start.

 

Edit: There are of course Intel SFP+ cards like this as well.

Edited by KptnKMan
Link to comment
Just now, KptnKMan said:

Any advice from your experience on if going all-ethernet or mixing in SFP+ DAC?

For me SFP+ was the best option, since most servers are close together and can work with just a DAC cable. I do have 3 or 4 that are too far away for DAC, but I got fiber cables together with the transceivers pretty cheap on eBay.

  • Thanks 1
Link to comment

It does seem cheaper and more straightforward to use DAC where possible, and that's been my own experience in server rooms also.

 

But at home I'm looking at getting a MikroTik CRS305 4/5-port 10Gb switch for the DAC cables to terminate to, which would make the whole thing easier.

It also has a 1GbE uplink port, which is handy, so no transceivers needed, with the potential complications they bring.

Both my current servers are in the same room also.

 

Would be a good start, I guess.

Link to comment
4 hours ago, KptnKMan said:

But at home I'm looking at getting a MikroTik CRS305 4/5-port 10Gb switch for the DAC cables

I have two of these at home, tested and working with:

- Brocade Active 10Gbit DACs

- fs.com Brocade 10G-SFPP-LR, as I have a Brocade card in my PC

- fs.com Cisco GLC-TA Compatible 10/100/1000BASE-T SFP SGMII Copper RJ-45 100m Transceiver Module - to connect more Ethernet cable if needed

- fs.com Ubiquiti UF-SM-10G Compatible 10GBASE-LR SFP+ 1310nm 10km DOM Transceiver Module, to interconnect both MikroTik CRS305 units with an fs.com LC-LC UPC Duplex Single Mode Fibre Patch cable.

  • Thanks 1
Link to comment
  • 4 months later...

I've had to put an Intel 550 card in a PCIe 2.0 x1 slot on an old Dell T420, as it was the last slot available (it's our database server at the moment). We get a good solid 3 Gbps out of the machine, which at the end of the day is still a 3x improvement over what we had. At current prices it might not be optimal, but it's worth it.

I will note that the machine is not running Unraid, and the application that fills up the interface is PostgreSQL; for a single sshfs or rsync session, we only get just over 1 Gbps.
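
If you want to check whether a gap like that is the link itself or just the single stream, iperf3's parallel-stream mode is a quick test. A minimal Python sketch, assuming iperf3 is installed on both ends and a server is already listening (iperf3 -s) at the placeholder address below:

# Compare single-stream vs parallel-stream throughput with iperf3 (-P streams, -J JSON output).
import json
import subprocess

SERVER = "192.168.1.10"  # placeholder: the host running iperf3 -s

for streams in (1, 4):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-J"],
        capture_output=True, text=True, check=True,
    )
    bps = json.loads(out.stdout)["end"]["sum_received"]["bits_per_second"]
    print(f"{streams} stream(s): {bps / 1e9:.2f} Gbit/s")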

Edited by Shorthand
  • Thanks 1
Link to comment
  • 1 month later...

I have a https://www.startech.com/en-gb/cards-adapters/pex1to162 with an IBM ConnectX-3 flashed to the latest Mellanox firmware.

 

Mellanox Network Card:
Temperature:	39 °C
Info:	FW Version: 2.42.5000
FW Release Date: 5.9.2017
Product Version: 02.42.50.00
Rom Info: type=PXE version=3.4.752
Device ID: 4099
Description: Node Port1 Port2 Sys image
GUIDs: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
MACs: f452140eb5c0 f452140eb5c1
VSD:
PSID: MT_1080120023

 

[   37.909940] mlx4_core 0000:03:00.0: 7.876 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x1 link at 0000:00:1c.6 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)

With iperf3 I get the following:

 

root@computenode:~# iperf3 -c 192.168.254.1
Connecting to host 192.168.254.1, port 5201
[  5] local 192.168.254.2 port 35822 connected to 192.168.254.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   725 MBytes  6.09 Gbits/sec    0    345 KBytes       
[  5]   1.00-2.00   sec   726 MBytes  6.09 Gbits/sec    0    342 KBytes       
[  5]   2.00-3.00   sec   726 MBytes  6.09 Gbits/sec    0    348 KBytes       
[  5]   3.00-4.00   sec   726 MBytes  6.09 Gbits/sec    0    351 KBytes       
[  5]   4.00-5.00   sec   725 MBytes  6.08 Gbits/sec    0    348 KBytes       
[  5]   5.00-6.00   sec   725 MBytes  6.08 Gbits/sec    0    348 KBytes       
[  5]   6.00-7.00   sec   726 MBytes  6.09 Gbits/sec    0    348 KBytes       
[  5]   7.00-8.00   sec   725 MBytes  6.08 Gbits/sec    0    359 KBytes       
[  5]   8.00-9.00   sec   726 MBytes  6.09 Gbits/sec    0    359 KBytes       
[  5]   9.00-10.00  sec   725 MBytes  6.08 Gbits/sec    0    382 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  7.09 GBytes  6.09 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  7.08 GBytes  6.09 Gbits/sec                  receiver

iperf Done.
root@computenode:~# 

 

And from the other end, an IBM 40Gb card with a 10Gb adapter:

 

root@unraid:~# iperf3 -c 192.168.254.2
Connecting to host 192.168.254.2, port 5201
[  5] local 192.168.254.1 port 52776 connected to 192.168.254.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   766 MBytes  6.43 Gbits/sec    0    226 KBytes       
[  5]   1.00-2.00   sec   777 MBytes  6.52 Gbits/sec    0    223 KBytes       
[  5]   2.00-3.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes       
[  5]   3.00-4.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes       
[  5]   4.00-5.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes       
[  5]   5.00-6.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes       
[  5]   6.00-7.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes       
[  5]   7.00-8.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes       
[  5]   8.00-9.00   sec   777 MBytes  6.52 Gbits/sec    0    223 KBytes       
[  5]   9.00-10.00  sec   747 MBytes  6.26 Gbits/sec    0   5.66 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  7.55 GBytes  6.49 Gbits/sec    0             sender
[  5]   0.00-10.04  sec  7.55 GBytes  6.46 Gbits/sec                  receiver

 

Edited by SimonF
  • Thanks 2
Link to comment
  • 2 weeks later...

Hey all, not sure if this helps, but... I faced a similar situation with an ITX board where the only PCIe slot is occupied by a GPU.

First I thought about using a splitter with bifurcation, but that adds too much hassle with the case, and the GPU would no longer perform at 100%.

So I found another solution that involves an M.2 to PCIe adapter. You can get those for 20 euros or so from AliExpress. Sadly mine was just PCIe x1, so I ran into the same topic discussed here.

I don't really care in the end, because my NAS system cannot provide 10Gbit throughput, so 6-7Gbit is really fine for me. I'm using Mellanox ConnectX-3 cards in all systems. [Photo of the setup]

  • Thanks 1
Link to comment
  • 5 weeks later...
On 6/20/2021 at 11:55 AM, deveth0 said:

Hey all, not sure if this helps, but... I faced a similar situation with an ITX board where the only PCIe slot is occupied by a GPU.

First I thought about using a splitter with bifurcation, but that adds too much hassle with the case, and the GPU would no longer perform at 100%.

So I found another solution that involves an M.2 to PCIe adapter. You can get those for 20 euros or so from AliExpress. Sadly mine was just PCIe x1, so I ran into the same topic discussed here.

I don't really care in the end, because my NAS system cannot provide 10Gbit throughput, so 6-7Gbit is really fine for me. I'm using Mellanox ConnectX-3 cards in all systems. [Photo of the setup]

 

Yep, 

Here is a speed test that I carried out on a PCIe x1 port with a graphics card:

[Screenshot: speed test results]

We can see that in memory access I get about 785 MB/s, which works out to about 6Gbit/s.

Personally, for the moment I have plugged my card into an NVMe to PCIe x4 adapter, and if I ever need this port later I will switch my card to the x1 port.

My adapter:

[Image and photos of the NVMe-to-PCIe x4 adapter]

 

  • Thanks 1
Link to comment
  • 3 weeks later...

Thanks everyone for responding and sharing your experiences. It took some time, but this is what I was hoping to discover before investing in some Mellanox cards.

In the coming weeks/months I'm hoping to finally find the time to invest in the hardware.

 

@JorgeB I've looked into that CRS309-1G-8S+IN and it looks amazing. Currently my top pick.

 

@Ford Prefect From what I've seen the DAC seems to be cheaper.

Do you mean a pair of transceivers and a patch cable as the comparison?

 

@SimonF I've never seen that adaptor before. Does that convert the physical x1 to an x16 SFF? That's amazing.

The throughput of 6.09 Gbits/sec is a perfect compromise, I can do that.

Did you have any issues with the SFF fitting or does it secure down well?

You're using SFF cards for this to work? I've seen a few different models.

 

@deveth0 thanks for the numbers, this is what I'm looking for.

 

@JamesAdams it looks like that adapter could be useful, I'm gonna look into that. That's a quality suggestion.

Link to comment
3 minutes ago, KptnKMan said:

I've looked into that CRS309-1G-8S+IN and it looks amazing. Currently my top pick.

...it is. Even more so since you started this topic: MikroTik has announced that with RouterOS v7 it will fully support L3 HW offloading (the smaller brother, the CRS305, will only support it partly, and especially not for VLANs).

 

3 minutes ago, KptnKMan said:

 

@Ford Prefect From what I've seen the DAC seems to be cheaper.

Do you mean a pair of transceivers and a patch cable as the comparison?

Yes, a pair of real fiber transceivers and a fiber patch cable. More flexible in terms of the length that can be covered, and almost as easy to attach as a DAC.

A DAC is limited in length. Also, some (very few) devices will not accept a DAC, but rather need an AOC or true fiber transceiver.

When shopping for a Mellanox, look for a kit including a 2-3m DAC...this is the cheapest option, as the DAC will come for approx 9EUR extra.

When you buy a new DAC separately, the difference is marginal to non-existent, depending on the actual offer you get in a shop.

When I looked at the time of writing, a 5m DAC was approx 25EUR, whereas a fiber SR transceiver was 12EUR each and a 5m fiber patch cable was 4EUR (hence 28EUR in total for a pair of transceivers plus the cable).

  • Thanks 1
Link to comment

@Ford Prefect Yeah this all makes a lot of sense.

Looks like there's even more reason to stretch for the bigger MikroTik switch now, with the HW offloading for VLANs being something I'd like.

Are the SR transceivers you're referring to 10Gb capable? I see the 1Gb transceivers are cheaper when I look around.

Honestly, I'd rather use Ethernet if I can, as I have spare Cat6/7 cable, so this would be fine for me.

Link to comment
  • KptnKMan changed the title to Using 10Gb adaptor in PCIe 1x slot (An Unraid 10Gb journey)
