Slow transfer speed over 40GbE + NVMe



Hey Guys!

 

So quick rundown of the hardware:

  • Client
    • Mellanox 40GbE card - established 40Gb link directly to unRAID
    • NVMe SSD, tested at 3.8GB/s read, 3.1GB/s write via CrystalDiskMark
    • Windows 11
  • unRAID
    • Mellanox 40GbE card - established 40Gb link directly to Windows 11
    • NVMe cache drive - same model as the client's. Tested from unRAID with a dd write command at 1.9GB/s (see the sketch below)
    • unRAID 6.9.2
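
For reference, a rough sketch of the kind of dd write test mentioned above (the path and size are illustrative; /mnt/cache is Unraid's usual cache mount, and oflag=direct bypasses the page cache so RAM doesn't inflate the number):

# write a 16GB test file directly to the cache pool
dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=16384 oflag=direct status=progress
# clean up afterwards
rm /mnt/cache/ddtest.bin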

 

When I transfer a file over to unRAID from Windows 11, I'm only seeing around 7.2Gbps (about 900MB/s) copy speeds via Explorer/Task Manager.

 

Since my client disk can read around 3.8GB/s and the unRAID cache can write around 1.9GB/s, shouldn't I be seeing at least around 15Gbps (1.9GB/s x 8) in Windows?

 

Is this a limitation of Windows 11 Pro? Are there settings I'm missing?


Update: I installed iperf3 on my unRAID server and on my client OS.

 

Results average 18Gbit/s, which is about 2,250MB/s.
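
For anyone wanting to reproduce the test, it was along these lines (the address is a placeholder for the server's 40GbE interface):

# on the unRAID server
iperf3 -s
# on the Windows client
iperf3 -c 192.168.10.1 -t 30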

 

Issue 1: I would have expected to see around 35Gbit/s over iperf.

 

Issue 2: I now know the link can achieve roughly 2,200MB/s, and I know that my cache SSD on unRAID can write about 1,800MB/s. That brings me back to my original problem: why are my transfers not at least hitting the ~1,800MB/s my drive is capable of?
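
On Issue 1: a single iperf3 TCP stream is often limited by one CPU core well before 40Gbps, so it's worth checking whether parallel streams get closer to line rate (the stream count here is arbitrary):

iperf3 -c 192.168.10.1 -P 4 -t 30

If the aggregate climbs with -P, the NIC and link are fine and the bottleneck is in the single-stream path (CPU, interrupt handling, or the file-sharing protocol itself).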

  • 1 year later...

I'm in a similar boat.  I just upgraded the whole system yesterday --

 

* AsRock Rack ROMED8-2T

* Epyc 7B13 64-core CPU

* 256GB 3200MHz DDR4 ECC RAM

* 500GB WD SN750 Gen3

* Mellanox MCX354A-FCCT

 

The client machine (also running Unraid) hosts a Windows 11 Pro VM --

 

* MSI MEG z690 ACE

* Intel i9-12900k

* 32GB 5200MHz DDR5 RAM

* 2TB WD SN850 Gen4

* 4TB WD SN850X Gen4

* Mellanox MCX354A-FCCT

 

The Windows VM is using the `virtio` driver instead of `virtio-net` because the latter would hard-cap the bandwidth to 10Gbps.
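
For reference, the NIC model is set in the VM's XML on the Unraid host; a rough sketch, with the bridge name and addressing as placeholders (br0 is the usual Unraid default):

<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>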

 

Testing with iperf3, I'm easily getting upwards of 35Gbps across the link.

 

Copying files over SMB or iSCSI caps out at around 550MB/s to either the SSD or NVMe drive in the server, and it's basically the same speed for both read and write.

 

But 550MB/s is only about 4.4Gbps -- I could have just used the 10GbE adapters!
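
One thing that sometimes lifts single-client SMB throughput is SMB multichannel. A hedged sketch of the Samba side (added under Settings > SMB > Samba extra configuration on Unraid; worth checking the Samba docs for the version in use, since multichannel on Linux has long been flagged experimental):

server multi channel support = yes
aio read size = 1
aio write size = 1

On the Windows client, the PowerShell cmdlet Get-SmbMultichannelConnection shows whether multichannel is actually in use during a transfer.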

 

I was really expecting at least 1GB/s (8Gbps) to the NVMe drive, since it's benched at 3.6GB/s sequential write -- I was targeting several GB/s. If I can't get this going faster, then I don't see the point in getting a HyperX M.2 drive going.

 

The next test would be to run the client OS bare-metal (which I can easily do), but I'm surprised there'd be that much overhead, and the issue doesn't seem to be the client, since iperf3 can easily saturate the link.

 

Below @ 35Gbps peak -- 

 


 

Connecting to host 172.16.0.1, port 7777
[  5] local 172.16.0.3 port 17297 connected to 172.16.0.1 port 7777
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  3.30 GBytes  28.3 Gbits/sec
[  5]   1.00-2.00   sec  3.77 GBytes  32.4 Gbits/sec
[  5]   2.00-3.00   sec  4.07 GBytes  35.0 Gbits/sec
[  5]   3.00-4.00   sec  3.85 GBytes  33.1 Gbits/sec
[  5]   4.00-5.00   sec  4.09 GBytes  35.1 Gbits/sec
[  5]   5.00-6.00   sec  3.81 GBytes  32.7 Gbits/sec
[  5]   6.00-7.00   sec  3.57 GBytes  30.6 Gbits/sec
[  5]   7.00-8.00   sec  3.97 GBytes  34.1 Gbits/sec
[  5]   8.00-9.00   sec  4.06 GBytes  34.9 Gbits/sec
[  5]   9.00-10.00  sec  4.04 GBytes  34.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  38.5 GBytes  33.1 Gbits/sec                  sender
[  5]   0.00-10.00  sec  38.5 GBytes  33.1 Gbits/sec                  receiver

8 minutes ago, rootd00d said:

I'm in a similar boat.  I just upgraded the whole system yesterday --

 

...

What is the read/write speed of the disk you have on the client?


Using Robocopy, I maxed out at just over 1.3GB/s, so a multi-threaded copier definitely helps quite a bit over Windows File Explorer or TeraCopy.
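
For comparison, the run was along these lines (the paths, share name, and thread count are placeholders; /MT enables multi-threaded copying and /J uses unbuffered I/O, which tends to help with large sequential files):

robocopy D:\test \\tower\cache-share /E /MT:16 /J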

 

So it's barely faster than 10GbE -- maybe more test copies are needed.

 

Enabling the performance CPU governor didn't have any effect on the transfer speed, but I can see it's operational and hitting 3.5GHz turbo speeds.
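
For anyone repeating this, a minimal sketch of forcing the governor from the Unraid console (the Tips and Tweaks plugin can set it too; it reverts at reboot):

# set every core to the performance governor
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  echo performance > "$g"
done
# verify on one core
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor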
