2.5Gbps and 5Gbps Network cards


aaronwt


Does unRAID support any 2.5Gbps or 5Gbps network cards?

 

With the lower prices I'm finally going to pull the trigger on some 2.5/5 Gbps Network switches and network cards. So I was curious if unRAID supports any of them.

 


Edited by aaronwt

Some 2.5 and 5 Gb NICs have not been working in Linux/unRAID.

But the 6.8.0 announcement says "Added oot: Realtek r8125: version 9.002.02". That is Realtek's 2.5 Gb controller, so just guessing, it might work now?

I think it is a matter of which drivers are included in the Linux kernel used, so this is important for all Linux OSes, not just unRAID.

Please report back if it works.

 

Anything with an Intel 1 Gb or an Aquantia 10 Gb controller works out of the box.

I have a motherboard with an Aquantia 10 Gb controller (the AQC107 chip) and two Intel 1 Gb LAN ports.

All three work out of the box in unRAID.

Here is a link to a PCIe card that works with the Aquantia AQC107 chip: https://forums.unraid.net/topic/58390-asus-xg-c100c-10gbe-nic/?do=findComment&comment=803918

As far as I know, all controller cards based on this chip work too, but I have not personally tested them, so do not quote me on that.

 

The best 10 Gb PCIe LAN card is probably the AQN-107:

https://www.anandtech.com/show/13066/aquantias-gamer-edition-aqtion-aqn107-10-gbe-adapter-now-available

but that one sold out really fast.

 

Mac/Hackintosh seems to work only with the AQC107S chipset. (I think those work on Windows too.)

https://www.tonymacx86.com/threads/high-sierra-native-support-for-10gb-ethernet.239690/page-41#post-1925622

But some people still have trouble with them, for instance losing Wake-on-LAN after a while.

https://www.insanelymac.com/forum/topic/330614-aquantia-10-gb-ethernet-support-thread-10132-upwards/?page=12&tab=comments#comment-2679821

 

Marvell acquired Aquantia, and since then the supply of these cards/chips has unfortunately been scarce.

A coincidence? I do not know.


Some say Intel 10 Gb LAN is supported. I do not know.

They talk a little about it here: https://forums.unraid.net/topic/86878-enough-pci-express-lanes/

I guess it works now, at least for the Intel X550-T2. It uses the ixgbe driver, and the 6.8.0-rc8 announcement says:

"Update Intel 10Gbit (ixgbe) to out-of-tree driver version: 5.6.5"

https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-680-rc8-available-r761/

I hope somebody can confirm whether Intel 10 Gb is working in unRAID.

 

The Aquantia 10 Gb chip can be connected to 2.5 and 5 Gb networks, and it will work.

Obviously I have not tested every AQC107-based card, but I have not seen anybody claim otherwise on the net.

 

But do note that many Intel 10 Gb controllers will not connect to 2.5 or 5 Gb networks, so Intel is "inferior" to Aquantia on that point.

The older Intel 10 Gb chips (e.g. X520/X540) do NOT support the NBASE-T standard used for 2.5 and 5 Gb speeds (it is a separate standard, which the Aquantia chips do support; the newer Intel X550 reportedly supports it as well). Both brands support falling back to a 1 Gb connection, though.
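For illustration, Ethernet auto-negotiation settles on the highest link rate both ends advertise. A minimal Python sketch of that logic, using hypothetical rate sets based on the claims above (the X540 set is my assumption for an Intel chip without NBASE-T support):

```python
# Sketch: auto-negotiation picks the highest rate common to both ends.
# Rate sets (in Mb/s) are illustrative, based on the discussion above.
AQC107 = {1000, 2500, 5000, 10000}   # Aquantia: multi-gig (NBASE-T)
X540   = {1000, 10000}               # assumed: older Intel 10 Gb, no 2.5/5 Gb

def negotiated_rate(nic_rates, switch_rates):
    """Return the highest link rate both sides support, or None."""
    common = nic_rates & switch_rates
    return max(common) if common else None

# Against a 5 Gb switch port, the Aquantia links at 5000 Mb/s,
# while the NBASE-T-less Intel chip falls back to 1000 Mb/s.
switch_5g = {1000, 2500, 5000}
print(negotiated_rate(AQC107, switch_5g))  # 5000
print(negotiated_rate(X540, switch_5g))    # 1000
```

This is why a 10 Gb card that lacks the 2.5/5 Gb rates drops all the way to 1 Gb on a multi-gig switch port.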

 

The Aquantia 5 Gb controller (based on the AQC108 chip) may not work in unRAID.

 

I think Spaceinvader One had a setup with Mellanox 10 Gb SFP+ cards. You will have to buy transceiver modules for copper or fibre cables for those (which will cost you some). For the copper ones, the standards vary by brand (voltages etc.), so you should use the same or compatible modules at both ends. If you go this route, fibre is normally suggested.

 

If I remember correctly, Spaceinvader One had to install a driver, and he also needed a 1 Gb LAN connection in parallel to get his 10 Gb Mellanox LAN working, at least for his advanced use, so I would NOT call this a works-out-of-the-box solution.

 

None of this is necessary with the 10 Gb Aquantia copper LAN; you use it exactly like any normal 1 Gb copper LAN.

With unRAID, or any not-too-old Linux kernel, it will work out of the box. No issues or workarounds needed.

 

 

Another thing: do not bond 10 Gb and 1 Gb networks together in unRAID.

https://forums.unraid.net/topic/84516-upgrade-to-68-rc3-not-possible/?do=findComment&comment=783433

Bonding (parallel cables) supports 1 to 8 Ethernet ports (cable connections), all running at the same speed.

So you can, for example, create:

- an 8 Gb link bond with 8x 1 Gb ports (at each end)
- a 20 Gb link with 8x 2.5 Gb ports
- a 40 Gb link with 8x 5 Gb ports
- an 80 Gb link with 8x 10 Gb ports
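Note that these figures are aggregates: a single transfer still runs over one port at that port's speed; the total only applies across many parallel streams. A quick sketch of the arithmetic:

```python
# Aggregate bandwidth of a bond: number of ports times per-port speed.
# A single connection still uses one port, so one transfer never
# exceeds the per-port speed; the aggregate applies to parallel streams.
def bond_capacity_gb(ports: int, port_speed_gb: float) -> float:
    assert 1 <= ports <= 8, "the bonding examples above assume 1-8 ports"
    return ports * port_speed_gb

for speed in (1, 2.5, 5, 10):
    print(f"8x {speed} Gb -> {bond_capacity_gb(8, speed):g} Gb aggregate")
```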

Edited by Alexander
Mac compatibility info

I have had no issues using the Aquantia AQtion 10G Pro NIC in my unRAID machine. The card is multi-gig, so it supports 1, 2.5, 5, and 10G depending on the connection at the other end and the length and quality of the cable. In my case it sits right next to my main switch with a Cat 7 patch cord, but is limited to 5G because it is connected to a 5G port on my switch. Still, with spinning hard drives this is more than enough speed.


Has anyone got the AQtion 10G to transfer at full speed? I have the AQC107 in my unRAID server, and it seems to only connect/transfer at 600 to 700 MB/s using iperf3. This is weird, as it is not capped at 5 Gbps. I have tried different Cat 6A cables (3 cables, lengths as short as 0.5 m) as well as Cat 5e/Cat 6 cables. They all seem to connect at only just above 5 Gbps. This happens both over a direct link to another AQC107 (XG-C100C) or an onboard Gigabyte AQC107 (X399 Aorus Extreme), and via an XG-U2008 switch.

 

[  4]   0.00-1.00   sec   613 MBytes  5.14 Gbits/sec
[  4]   1.00-2.00   sec   615 MBytes  5.16 Gbits/sec
[  4]   2.00-3.00   sec   629 MBytes  5.27 Gbits/sec
[  4]   3.00-4.00   sec   638 MBytes  5.35 Gbits/sec
[  4]   4.00-5.00   sec   674 MBytes  5.65 Gbits/sec
[  4]   5.00-6.00   sec   656 MBytes  5.50 Gbits/sec
[  4]   6.00-7.00   sec   652 MBytes  5.48 Gbits/sec
[  4]   7.00-8.00   sec   641 MBytes  5.37 Gbits/sec
[  4]   8.00-9.00   sec   632 MBytes  5.31 Gbits/sec
[  4]   9.00-10.00  sec   640 MBytes  5.37 Gbits/sec

 

It may be the cables, but then they should down-sync to 5000.
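As a side note, the iperf3 lines above are internally consistent: iperf3 reports transfer in binary MBytes but bandwidth in decimal Gbits/sec. A quick check of the first interval:

```python
# iperf3 reports transfer in binary MBytes (2**20 bytes) and bandwidth
# in decimal Gbits/sec (1e9 bits). Checking the first line above:
mbytes = 613
bits = mbytes * 2**20 * 8           # bytes -> bits
gbits_per_sec = bits / 1e9          # over a 1-second interval
print(f"{gbits_per_sec:.2f} Gbits/sec")  # 5.14 Gbits/sec, as reported
```

So ~640 MB/s really is ~5.4 Gbit/s on the wire, about half of what a 10 Gb link should do.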

Edited by Livefreak
Included IPerf3 values
On 3/1/2020 at 9:01 AM, Livefreak said:

Has anyone got the AQtion 10G to transfer at full speed. I have the AQC107 in my unraid server and it seems to only connect /transfer at 600 to 700MB/s using iperf3.  This is weird as it is not capped at 5GBps.  I have tried different cables (CAT 6A) (3 cables - length as low as 0.5m) as well as Cat 5e/Cat 6 cables. They all seem to only connect above 5GBps.  This is both direct link to another AQC107 (XG-C100C) or a onboard Gigabyte AQC107 (X399 Aorus Extreme) and via a XG-U2008 router

 

[  4]   0.00-1.00   sec   613 MBytes  5.14 Gbits/sec
[  4]   1.00-2.00   sec   615 MBytes  5.16 Gbits/sec
[  4]   2.00-3.00   sec   629 MBytes  5.27 Gbits/sec
[  4]   3.00-4.00   sec   638 MBytes  5.35 Gbits/sec
[  4]   4.00-5.00   sec   674 MBytes  5.65 Gbits/sec
[  4]   5.00-6.00   sec   656 MBytes  5.50 Gbits/sec
[  4]   6.00-7.00   sec   652 MBytes  5.48 Gbits/sec
[  4]   7.00-8.00   sec   641 MBytes  5.37 Gbits/sec
[  4]   8.00-9.00   sec   632 MBytes  5.31 Gbits/sec
[  4]   9.00-10.00  sec   640 MBytes  5.37 Gbits/sec

 

It may be cables but they should down sync to 5000.  

Maybe it's your PCIe slot? Gen 2.0?

On 1/18/2020 at 12:13 AM, Alexander said:

The aquantia 5 Gb controller (AQC108 chip based) is obviously not working (=no kernel "driver" in linux), so do not get that one either.

Are you sure the 5G Aquantia doesn't work? It has the same manufacturer support as the 10G card, Linux drivers and all; everything is available.

I already own this card, and my plan is to use it until I get 10G later if needed. Since 5G will do 500 MB/s, and at best I'll use a SATA cache SSD, it might be enough for my needs, or I may find a deal on a 10G card.

On 3/6/2020 at 1:15 PM, Hexenhammer said:

Maybe its your PCIe slot? Gen 2.0?

I don't think so. It is a Threadripper 1920X on an X399 board. I will verify on the next reboot in case something is set weirdly.

 

Additionally, I have swapped it with an identical duplicate card, and it still hits this weird limit.

 

Thanks for the suggestion :)


Running lspci -vv, it reports as Gen 3 x4 (8 GT/s):

42:00.0 Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
        Subsystem: ASUSTeK Computer Inc. Device 8741
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 50
        Region 0: Memory at 82440000 (64-bit, non-prefetchable)
        Region 2: Memory at 82450000 (64-bit, non-prefetchable)
        Region 4: Memory at 82000000 (64-bit, non-prefetchable)
        Expansion ROM at 82400000 [disabled]
        Capabilities: [40] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                        RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 512 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s unlimited, L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s (ok), Width x4 (ok)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                         AtomicOpsCtl: ReqEn-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
        Capabilities: [80] Power Management version 3
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [90] MSI-X: Enable+ Count=32 Masked-
                Vector table: BAR=2 offset=00000000
                PBA: BAR=2 offset=00000200
        Capabilities: [a0] MSI: Enable- Count=1/32 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr+ BadTLP+ BadDLLP+ Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [150 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [180 v1] Secondary PCI Express <?>
        Kernel driver in use: atlantic
        Kernel modules: atlantic

On 3/6/2020 at 4:17 AM, Hexenhammer said:

Are you sure that 5G Aquantia doesn't work? It has same manufacturer support like 10G card, linux drivers and all, everything is available.

I already own this card and my plan is to use it until i get 10G later if needed, since 5G will do 500MB/s and at best ill use SATA cache SSD it might be enough for my need, or if i find a deal for 10G card

I read that it did not work for somebody on this unRAID forum, but maybe it was this post.

But that was for an older Aquantia 5 Gb controller, the AQC111U. So it might work with the AQC108 5 Gb controller?

Here is a link to the Aquantia chips. There seem to be more of them than I knew of.

https://www.marvell.com/products/ethernet-adapters-and-controllers/aqtion-ethernet-controllers.html

 

For the Linux driver to work OOTB (out of the box), it must be included in the Linux kernel or "added" by Limetech; the latter is only possible if the driver code is compatible with the kernel version used.

 

It might work now or in the future. If it does, report back.

Since you have the card, please try it.

You can use the 30-day free trial to test it if you do not already have a license.

You cannot use the same USB key for a new trial after it expires, unless you buy a license or maybe contact Limetech for a trial extension (if possible).

 

Go into Settings / Network Settings and scroll down to Interface Rules.

If interface eth0, eth1, eth2, etc. ends with something like "PCI device 0x1234:0x1234 (atlantic)",

it will probably work. (atlantic) is the Aquantia driver used for my 10 Gb connection.

Intel 1 Gb often shows the (igb) or (e1000e) driver on recent boards.
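You can also check from the command line which kernel driver claimed the NIC by running lspci -nnk and looking at the Ethernet entries. A small sketch that pulls the driver name out of that kind of output (the sample text below is illustrative, modeled on the lspci dump earlier in this thread, not captured from a live system):

```python
import re

# Sample lspci-style output (illustrative; on a live unRAID box you
# would run `lspci -nnk` and look at the Ethernet controller entries).
sample = """\
42:00.0 Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
	Kernel driver in use: atlantic
	Kernel modules: atlantic
"""

# If "Kernel driver in use" shows a driver, the kernel has bound one
# to the card; no line here usually means no driver in this kernel.
m = re.search(r"Kernel driver in use: (\S+)", sample)
print(m.group(1))  # atlantic
```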

Edited by Alexander
On 3/1/2020 at 8:01 AM, Livefreak said:

Has anyone got the AQtion 10G to transfer at full speed. I have the AQC107 in my unraid server and it seems to only connect /transfer at 600 to 700MB/s using iperf3.  This is weird as it is not capped at 5GBps.  I have tried different cables (CAT 6A) (3 cables - length as low as 0.5m) as well as Cat 5e/Cat 6 cables. They all seem to only connect above 5GBps.  This is both direct link to another AQC107 (XG-C100C) or a onboard Gigabyte AQC107 (X399 Aorus Extreme) and via a XG-U2008 router

 

[  4]   0.00-1.00   sec   613 MBytes  5.14 Gbits/sec
[  4]   1.00-2.00   sec   615 MBytes  5.16 Gbits/sec
[  4]   2.00-3.00   sec   629 MBytes  5.27 Gbits/sec
[  4]   3.00-4.00   sec   638 MBytes  5.35 Gbits/sec
[  4]   4.00-5.00   sec   674 MBytes  5.65 Gbits/sec
[  4]   5.00-6.00   sec   656 MBytes  5.50 Gbits/sec
[  4]   6.00-7.00   sec   652 MBytes  5.48 Gbits/sec
[  4]   7.00-8.00   sec   641 MBytes  5.37 Gbits/sec
[  4]   8.00-9.00   sec   632 MBytes  5.31 Gbits/sec
[  4]   9.00-10.00  sec   640 MBytes  5.37 Gbits/sec

 

It may be cables but they should down sync to 5000.  

Interesting. Does the XG-U2008 switch show full 10 Gb speed on both links (blue lights for the 10 Gb ports)?

https://www.asus.com/Networking/XG-U2008/

See the Signal Quality Indicator in the above link.

 

Here is somebody else with the same "issue" on a specific motherboard.

 

Edited by Alexander

Hi Alexander,


I noticed that link as well. The link LED on the XG-U2008 is blue. Also, when I run with -P 2 (two parallel streams) I get 9.8 Gb/s (virtually full 10GbE), so it must be an iperf3/Windows implementation issue similar to theirs. When I get around to it I will try a Linux distro, as it may be a Windows issue (Samba gets 700 MB/s).

 

Hope this helps others.

 

