Need Help: 10GbE Transfer Rates



Since installing the 10GbE network I have only been able to achieve 3.16 Gb/s (iperf3).  I have searched through the forum and tried jumbo frames, disabling flow control, direct I/O, etc., but absolutely no change.  I have adjusted these settings on both sides with no change.  Not sure what to try next and would appreciate some help.
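For reference, this is roughly the kind of iperf3 run I've been doing between the PC and the NAS (the IP address and duration are just examples for my setup; -P adds parallel streams and -R reverses direction):

iperf3 -s                          (server side, on the NAS)
iperf3 -c 192.168.1.10 -t 30       (single stream from the PC)
iperf3 -c 192.168.1.10 -t 30 -P 4  (four parallel streams)
iperf3 -c 192.168.1.10 -t 30 -R    (reverse direction, NAS to PC)

If parallel streams together get well past 3.16 Gb/s, the limit is presumably per-connection (CPU core, TCP window) rather than the link or the switch itself.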

 

NAS

CPU-AMD Ryzen 5 3400G

Motherboard - Asus ROG Strix X570-E

RAM - Corsair Vengeance LPX DDR4 2133 MHz - 16GB

PSU - Corsair HX-850

Storage - WD Red+ 10TB drives x 6 (Cache Samsung 970 EVO plus NVMe M.2 2TB x 2)

NIC - X520-10G-1S-X8 (SFP+) - connected with fiber optic

 

PC

Windows 10

NIC - X520-10G-1S-X8 (SFP+) - connected with fiber optic

 

Switch

QNAP QSW-M408S

 

 

  • 2 weeks later...

I've got similar cards and the same switch, though on my UnRAID server it's connected to an x1 slot (old Intel DQ965GF motherboard with an LSI SAS2008/9211 in the x16 slot, so the X520 ends up in an x1 slot... well, one of those mining-rig adapters that plugs into an x1 slot and gives you a remote x16 slot on a USB-like cable, but anyway...).  I get about 150MB/s to/from cache on my UnRAID box (Pioneer 512GB SATA SSD on the LSI HBA; local storage on my desktop is a Pioneer 2TB NVMe, "G" edition, 650K IOPS, way faster than my previous Intel 660p NVMe).

 

You wouldn't think it'd be worth doing all that just for an extra 50MB/s, but the UnRAID server was only part of it.  I kept saturating one direction while working in other directions; for example, an incoming download going straight to UnRAID storage would saturate a link depending on where the "action" was happening, which could mean extra trips for a particular data stream.  So I've got 2 of these switches, which means both switches in the house are now 10G capable and linked, along with the router (pfSense) that's fed by gigabit Comcast cable.  The cable actually measures out to 1.3Gb, but the connections on the cable modem are only 1Gb.  They're bondable with LACP, but LACP has been flaky with the SB8200, so it's just single-connected for now.

 

Between other computers in the house I can move... well, it looks like about 2.9Gb/s to the other desktop across the house that's also on 10G, at least as far as NetIO says, which I'm not particularly inclined to disbelieve.  Especially since file transfers to that desktop seem to run about 300MB/s, but that could be disk limited with a Pioneer 1TB SATA SSD (yes, Pioneer again; they're cheap and they've been decent so far; there are at least 3 others in the house) that's well over 90% full (really, it runs about 300MB/s for about 2-3GB, then speed just drops to about 33MB/s; it's my son's desktop full of games).

 

Network-wise, all machines with 10Gb have Intel X520 cards (from Dell servers).  My main desktop has a short DAC cable (2 meters, probably) to the QNAP switch.  The UnRAID server is connected the same way.  This switch connects to the other switch in the house over multimode fiber (burial rated/armored, though mostly to get a black cable that increased the WAF over orange or aqua cable, though still surprisingly cheap at just 10M) using a pair of 10Gtek SFP+ transceivers (which came packaged way nicer than I would have expected, in nice little tins with foam cut to the shape of the module).  The other desktop I was testing with was also connected with a standard SFP+ DAC, though about 7 meters long.  This switch also has the router connected via 10G, X520, 2-3 meter DAC, etc.  Both switches also have Ubiquiti UniFi AC APs, so wireless can move a medium-significant amount of data, sometimes to/from the UnRAID box or another machine, but usually internet traffic.

 

I guess I want to get into the GUI of the switches now to make sure all links are actually 10Gb, since I can't directly confirm a 10Gb link on the fiber "backbone".  The end devices on SFP+ DACs are all confirmed at 10GbE.
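For the end hosts I can at least confirm the negotiated speed from the OS side; on the UnRAID box something like (the interface name here is just an example for my setup):

ethtool eth0 | grep -i speed

should report Speed: 10000Mb/s, and on the Windows machines Get-NetAdapter shows a LinkSpeed column.  It's really just the switch-to-switch fiber hop I can't see without getting into the switch GUI.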

 

While, network-wise, I really just wanted something faster than 1Gb, so it fits my needs, I'm gonna be at least a little annoyed if these end up limited to ~3Gb.  I got these based on the better name than MikroTik and on hearing that the MikroTiks can have throughput issues.

 

Oh, as an aside, FYI to anyone wondering: these switches do not support SFP+ Active Fiber DAC cables.  I learned that with a $33 cable that's now useless to me.

Edited by matguy
3 hours ago, matguy said:

it's connected to an X1

I'd expect it could be faster, but I can't confirm the PCIe version of the ICH chipset. BTW, all related I/O shares the DMI link.

[image: i965G chipset block diagram]

 

3 hours ago, matguy said:

though mostly to get a black cable that increased the WAF over orange or aqua cable, though still surprisingly cheap at just 10M

I use aqua ones from 2m to 6m, also very cheap (maybe ~3 USD each); anyway, no speed issues.

 

3 hours ago, matguy said:

hearing that the MikroTiks can have throughput issues.

Maybe on some models / firmware / different Ethernet frame sizes, but I haven't noticed an issue on mine either.

5 hours ago, Vr2Io said:

I'd expect it could be faster, but I can't confirm the PCIe version of the ICH chipset. BTW, all related I/O shares the DMI link.

[image: i965G chipset block diagram]

 

Yeah, I wasn't sure how much was being shared over the DMI link.  I do have some drives off the onboard SATA ports (slower ports, but SMR drives, so I'm not particularly concerned about the bottleneck of the port).  It's an ICH8DO(/HO), which theoretically gives me vPro, but the important part is that it should be PCIe 1.0, according to Intel's documentation for my motherboard.  I'm pretty sure the 500MB/s for the x1 slot(s) in the diagram is total bi-directional throughput, so 250MB/s each way.  I should be getting more than 150MB/s, though.  But I haven't done any investigation into it.  I literally just put in the cache drive a couple of days ago.
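When I do dig into it, the quickest check from the UnRAID console is probably to see what the slot actually negotiated (the PCI address below is just an example; I'd pull the real one from the first command):

lspci | grep -i ethernet
lspci -vv -s 01:00.0 | grep -i 'lnkcap\|lnksta'

LnkCap is what the card supports and LnkSta is what it actually trained at, so a 2.5GT/s x1 LnkSta there would line up with the 150-250MB/s range I'm seeing.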

 

5 hours ago, Vr2Io said:

I use aqua ones from 2m to 6m, also very cheap (maybe ~3 USD each); anyway, no speed issues.

Yeah, the black/armored cable was $21 for 10M, Prime shipped.  It's not like I was trying to bundle as many as possible into a run, so there was zero reason not to go with it.  It's also being run through a house, so the extra protection may help.  I do suspect, however, that the armoring is more like just a thin sheath, like wearing thin leather to a battle.  Maybe it'd help... but probably not.

 

5 hours ago, Vr2Io said:

Maybe on some models / firmware / different Ethernet frame sizes, but I haven't noticed an issue on mine either.

This was mostly from Amazon reviews.  They said it seemed great with single large streams, but seemed to slow down considerably with multiple streams, as if the effective backplane was 10Gb, so once that was saturated the switching speed dropped considerably.  With only 4 ports, though, that's probably not a huge concern.  And it's random Amazon reviews, so who knows how valid that is.

 

Though, still sounds better than what I'm currently seeing on the QNAP switches.  I just re-built my media PC, which sits right next to a switch.  I was contemplating putting a 10G adapter in it, just because it's right there and there's an empty SFP+ port on the switch.  There really wasn't a need, but if I do, I can test with that one instead of trying to wrestle my son off his computer just to test link speeds he doesn't really care about.  And when I can wrestle him off (or send him to bed for the night), I can test 2 machines off a single switch.  The only other machine connected by my desk, other than the UnRAID box, is a little general-purpose server, but it's a USFF Dell with no slots to install 10G.  I could probably put together a little test server, but all I have sitting around are old Core 2 level machines... or older.

 

If this is a switch issue, I might have been (and may still be in the future) better off getting a couple of the MikroTik switches and using the QNAP switches as edge switches for multiple 1Gb devices with a 10Gb (well, 3Gb, it seems) switch uplink.  I'd really like the APs to have as much 1Gb port speed as possible and to avoid uplink congestion.  I mean, sure, they only really push about 300Mb to devices, though they can push that to quite a few devices.  Well, there goes another $300.

 

It also looks like the MikroTik switches support Active Fiber SFP+ DACs...

 

I also keep meaning to upgrade the underlying hardware on my UnRAID box.

12 hours ago, matguy said:

It's also being run through a house, so the extra protection may help.

Agree.

 

12 hours ago, matguy said:

Though, still sounds better than what I'm currently seeing on the QNAP switches.

I don't mean MikroTik is better than QNAP, I'm just wondering about your ~3Gbps test result. BTW, please use iperf for throughput testing.

In my experience, I almost never find that switches differ in throughput in my use case, no matter whether they're cheap or expensive; the real difference is the features.

Edited by Vr2Io

What is the Mainboard of your Win10 PC?

Where is the NIC connected?

I am using two very old boards - an ASRock Z77 Extreme4 (Unraid) and an ASRock H97M Pro4 - with 2x ASUS XG-C100C.

The cards are connected to PCIe 2.0 x4 (H97) and PCIe 3.0 x8 (Z77).

Throughput is 1200MB/s, or 9600Mbit - no jumbo frames.

So there must be something wrong with your config or your hardware (mainboard/NIC/cable, etc.).

Edited by Zonediver
14 hours ago, Vr2Io said:

I don't mean MikroTik is better than QNAP, I'm just wondering about your ~3Gbps test result. BTW, please use iperf for throughput testing.

In my experience, I almost never find that switches differ in throughput in my use case, no matter whether they're cheap or expensive; the real difference is the features.

I was using LanSpeedTest for file-share tests, as well as the general Windows Resource Monitor, plus graphs from within the switches once I got into the management interface (which is nice, btw).  I used the NetIO GUI to run server/client between the machines, and pretty much every test seemed to cap out around 3Gb/s.  There's something limiting throughput.

On 10/19/2020 at 7:18 AM, matguy said:

New drivers don't seem to do much.

 

At some point (probably after the kid goes to bed) I'll try direct connecting the machines and see what happens.

Please also check the PCIe link width and speed under PowerShell with "Get-NetAdapterHardwareInfo".

 

My X520 is 5GT/s at x8 width.

 

PS C:\WINDOWS\system32> Get-NetAdapterHardwareInfo

Name                           Segment Bus Device Function Slot NumaNode PcieLinkSpeed PcieLinkWidth Version
----                           ------- --- ------ -------- ---- -------- ------------- ------------- -------
Wi-Fi                                0   8      0        0    1               5.0 GT/s             1 1.1
乙太網路 2                           0   1      0        0                    5.0 GT/s             8 1.1
乙太網路                             0  10      0        0    1               2.5 GT/s             1 1.1

 


Yup, they were all x4 per port.

 

Mine:

Name                           Segment Bus Device Function Slot NumaNode PcieLinkSpeed PcieLinkWidth Version
----                           ------- --- ------ -------- ---- -------- ------------- ------------- -------
Ethernet 6                           0   0     31        6                     Unknown
Ethernet 7                           0   5      0        0                    5.0 GT/s             4 1.1
Ethernet 8                           0   5      0        1                    5.0 GT/s             4 1.1

 

Kid:

Name                           Segment Bus Device Function Slot NumaNode PcieLinkSpeed PcieLinkWidth Version
----                           ------- --- ------ -------- ---- -------- ------------- ------------- -------
Ethernet 3                           0   2      0        1                    2.5 GT/s             4 1.1
Ethernet 2                           0   2      0        0                    2.5 GT/s             4 1.1
Ethernet 0                           0   0     25        0                     Unknown

 

So, my X520 cards are all dual-port x8 cards.  Do we know if Intel dedicates lanes to each phy?

 

Anyway, by that chart my ports should have 20GT/s; after the 8b/10b overhead, that's 16Gb/s.  Plenty.  The kid's is half that, so 8Gb/s.  Still way more than we're seeing.  And I haven't done the direct link test yet.

 

But, back to the dedicated-or-not idea: if it doesn't, that could give the kid 16Gb/s available to the populated port.  What it really makes me wonder about, though, is my UnRAID server with the card on an x1 PCIe 1.0 slot.  I mean, (upon further inspection) I'm seeing the ~200MB/s you'd expect from a single PCIe 1.0 x1 lane; I just wonder what would happen if I populated the other port (purely for science, I have no other need, that's for sure, but it also reminds me to light a fire under myself to upgrade that 14-year-old motherboard).

Edited by matguy
4 hours ago, matguy said:

Do we know if Intel dedicates lanes to each phy?

I believe not.

 

4 hours ago, matguy said:

Still way more than we're seeing.

Agree. I tested with the PCIe link set to 2.5GT/s and there was also no performance drop. (I never use jumbo frames.)

 

Name                           Segment Bus Device Function Slot NumaNode PcieLinkSpeed PcieLinkWidth Version
----                           ------- --- ------ -------- ---- -------- ------------- ------------- -------
Wi-Fi                                0   8      0        0    1               5.0 GT/s             1 1.1
乙太網路 2                           0   1      0        0                    2.5 GT/s             8 1.1
乙太網路                             0  10      0        0    1               2.5 GT/s             1 1.1
 

Edited by Vr2Io

OK, I put an X520 in the media PC (SFF OptiPlex 9020).

 

PS C:\Users\Media> Get-NetAdapterHardwareInfo

Name                           Segment Bus Device Function Slot NumaNode PcieLinkSpeed PcieLinkWidth Version
----                           ------- --- ------ -------- ---- -------- ------------- ------------- -------
Ethernet 3                           0   3      0        1    4               5.0 GT/s             4 1.1
Ethernet 2                           0   3      0        0    4               5.0 GT/s             4 1.1
Ethernet                             0   0     25        0                     Unknown


PS C:\Users\Media>
 

I just did some file transfer tests and even though it just has a SATA SSD (another Pioneer!) it still hit around 450MB/s between my machine and it.  I think my kid's machine isn't trimming the drive, plus it's so full.  And who knows what else he's doing with it these days.

 

I brought out the NetIO GUI and it got a max of 528MB/s between my machine and the media PC.  This is still all standard MTU.  And this is through 2 of the QNAP switches.

 

The 9020 (media PC) has it in an x4 slot, one without a closed back end, so the rest of the x8 card hangs out the back of the slot.

 

I just looked it up, and the x16 physical slot in my OptiPlex 7050 is electrically x4, so the 4 lanes shown in the hardware info are all the motherboard has on that slot.  That makes more sense now.

 

I still need to do a direct connection test, but it's midnight here.


Tried a direct test from my computer to the media computer using the fiber I have running (my desktop is about 3 feet from its switch and the media PC is less than a foot from its switch), just taking the switches out of the loop, but my X520 does not like the 10Gtek SFP+ optics.  Since it's the same card in the media PC and my kid's computer, I didn't even try with them.  It goes into full error mode even through a reset, though the Amazon description says "Intel" and a review does mention using it with an X520... maybe I'll look into this more.
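From what I've read, the X520 firmware/driver refuses SFP+ modules it doesn't recognize by default, and I'm not aware of a clean way around that on Windows.  On Linux the ixgbe driver has an allow_unsupported_sfp module parameter that's supposed to let unbranded optics link up, so if I ever want to try these 10Gtek modules in the UnRAID box it would be something like:

# e.g. in /boot/config/modprobe.d/ixgbe.conf on UnRAID (the path is my guess; the parameter is a standard ixgbe option)
options ixgbe allow_unsupported_sfp=1

No promises that covers these particular modules, but it's worth knowing before writing the optics off entirely.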

 

But I did run a DAC between the media PC and my kid's computer, since they're about 15 feet (as the tucked cable runs) from each other, and got about the same 500MB/s in NetIO and about 400MB/s with file transfers (both SATA SSDs, Pioneer, of course).  I wouldn't expect the (non-RAID) SATA SSD to SATA SSD transfer to be any quicker, of course, but I was hoping NetIO would move a bit more.

 

But the whole point was the switches anyway, and it doesn't sound like they're the limiting point in the data transfers as far as I can see, at least not lower than I can measure with the end equipment I'm using.

30 minutes ago, matguy said:

Between my machine and the media PC (through 2 QNAP switches) iperf3 showed 4.39Gb one direction and 5.39Gb the other.

So 5.39 Gbps seems to be the ceiling; it's hard to find out where the bottleneck is. The PC hardware config may be the issue.

I also go across two different 10G switches. FYR, my lowest-end hardware with 10G was a Ryzen 2400G.

Edited by Vr2Io

These are various generations of i3 and i7.  In this test the iperf3 process on the i3-4160 (in an OptiPlex 9020) stayed around 13% and the i7-7700 (OptiPlex 7050) barely showed 2% for the faster transfer; in the other direction the i3 sat around 20% and the i7 peaked at 3% for the slower transfer.
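Worth noting those are whole-CPU percentages; a single iperf3 stream runs on one thread, so 13% overall on a 2-core/4-thread i3 could still mean one core is doing most of the work.  Next round I'll watch per-core usage and compare against a few parallel streams, something like (the IP is just an example):

iperf3 -c 192.168.1.20 -t 30 -P 4

If the combined parallel number climbs well past what we saw, a per-core or per-connection limit looks more likely than the NICs or switches.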
