
Slow transfer speeds over 10Gb network to an NVME drive


Recommended Posts

Changing the Docker apps to /mnt/cache/appdata fixed my Docker issue.  Thanks for the tip, Greg.  I am also now getting up to 1.15 GB/s transfer speed!!!! What type of drive are you writing to/from, Greg? Anything but a multi-drive SSD RAID or a very high-speed NVMe drive will be the bottleneck over a 10GbE network.
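
For anyone checking the same thing, a quick way to see which path a container is actually using is to inspect its mounts from the Unraid console (the container name below is just a placeholder):

  # list running containers, then look at where each one maps its appdata
  docker ps --format '{{.Names}}'
  docker inspect -f '{{ json .Mounts }}' <container-name>

If the mounts still point at /mnt/user/appdata rather than /mnt/cache/appdata, that container is still going through the user-share layer.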

Link to comment
1 minute ago, mattcoughlin said:

Changing the Docker apps to /mnt/cache/appdata fixed my Docker issue.  Thanks for the tip, Greg.  I am also now getting up to 1.15 GB/s transfer speed!!!! What type of drive are you writing to/from, Greg? Anything but a multi-drive SSD RAID or a very high-speed NVMe drive will be the bottleneck over a 10GbE network.

I currently have 4 Samsung 850 Pro SSDs in BTRFS RAID 10. That should equal around 1100 MB/s read/write. I am getting 350 MB/s max over the network.

Link to comment
15 minutes ago, mattcoughlin said:

They should be more than fast enough. I had three 850 EVOs for cache that gave me around 700 MB/s transfer speed. I assume you have jumbo frames enabled on both ends as well as the switch.

I have not been able to enable jumbo frames on Unraid. The NIC supports jumbo frames, but the kernel refuses any MTU over 1500. The same NIC works fine with jumbo frames on Windows, and the NIC documentation claims Linux support. I'm not sure that's the issue, though, because I've heard of other people saturating 10GbE without jumbo frames enabled.
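
For reference, the MTU can be checked and changed from the Unraid console with iproute2; if the driver really refuses jumbo frames, the second command will return an error (eth0 here is just an example interface name):

  # show the current MTU on the 10Gb interface
  ip link show eth0
  # try to raise it to 9000; the driver will reject this if it can't do jumbo frames
  ip link set dev eth0 mtu 9000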

Link to comment
  • 2 weeks later...
  • 1 month later...

I have the same problem.

I set the MTU on my Windows PC to 9000 and did the same on the 10Gb NIC in my Unraid server, and my transfers are stuck at 400-450 MB/s. It's a huge improvement, but this is a dedicated point-to-point connection, so I was hoping to completely saturate it. My first few tests were around 980 MB/s, but I haven't seen that speed since.
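
One way to separate the network from the disks is a raw throughput test with iperf3, assuming it is installed on both machines (the IP address below is just an example):

  # on the Unraid server
  iperf3 -s
  # on the Windows client: test client-to-server, then server-to-client with -R
  iperf3 -c 10.0.0.2
  iperf3 -c 10.0.0.2 -R

If iperf3 shows close to 9-10 Gbit/s in both directions, the link itself is fine and the bottleneck is storage or SMB rather than the network.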

 

My cache drives are two Kingston 240GB SSDs in RAID 0. The drive in my Windows computer is an 850 EVO 250GB SSD.

 

enabling "Tunable enable Direct IO" made no difference. I am planning to swap the cache drives with 2x Samsung Evo 850tb drives and run them in raid 0, but I do not want to buy them until I can get this working.

 

What am I missing?

 

Here are my parts:

Workstation - HP 10Gb Mellanox ConnectX-2 PCIe NIC (part 671798-001)

Unraid - HP dual-port 10Gb Ethernet PCIe card for ProLiant (parts 468349-001 / 468330-002)

Fiber optic LC UPC to LC UPC duplex cables, 98 ft and 33 ft (from fs.com)

2x HPE BladeSystem 10GBase-SR SFP+ 300m transceivers (part 455883-B21, from fs.com)
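
It is also worth confirming that the SFP+ link actually negotiated at 10Gb rather than a lower rate. On the Unraid side, something like this will show it (interface name eth0 is just an example, and this assumes ethtool is available on the system):

  # check the negotiated link speed on the 10Gb port
  ethtool eth0 | grep -i speed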

 

 

Edited by smaka510
Link to comment
I set the MTU on my Windows PC to 9000 and did the same on the 10Gb NIC in my Unraid server, and my transfers are stuck at 400-450 MB/s.

My cache drives are two Kingston 240GB SSDs in RAID 0. The drive in my Windows computer is an 850 EVO 250GB SSD.

 

Those are perfectly normal speeds for a single SSD; you may get faster speeds briefly while it's reading/writing from RAM. Depending on the models used, the same goes for the 2x Kingston drives in RAID 0, especially for writes.
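
One way to actually see that RAM buffering in action is to watch the kernel's dirty page counters while a large copy is running; they climb while the transfer is landing in RAM and drain as the data is flushed out to the SSDs:

  # on the Unraid console, during a large incoming copy
  watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'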

 

 

Link to comment
11 hours ago, smaka510 said:

I have the same problem.

I set the MTU on my Windows PC to 9000 and did the same on the 10Gb NIC in my Unraid server, and my transfers are stuck at 400-450 MB/s. It's a huge improvement, but this is a dedicated point-to-point connection, so I was hoping to completely saturate it. My first few tests were around 980 MB/s, but I haven't seen that speed since.

My cache drives are two Kingston 240GB SSDs in RAID 0. The drive in my Windows computer is an 850 EVO 250GB SSD.

Enabling "Tunable (enable Direct IO)" made no difference. I am planning to replace the cache drives with two Samsung 850 EVO drives and run them in RAID 0, but I do not want to buy them until I can get this working.

What am I missing?

Here are my parts:

Workstation - HP 10Gb Mellanox ConnectX-2 PCIe NIC (part 671798-001)

Unraid - HP dual-port 10Gb Ethernet PCIe card for ProLiant (parts 468349-001 / 468330-002)

Fiber optic LC UPC to LC UPC duplex cables, 98 ft and 33 ft (from fs.com)

2x HPE BladeSystem 10GBase-SR SFP+ 300m transceivers (part 455883-B21, from fs.com)

Actually,

I noticed that I get well over 1 GB/s when files are copied from Unraid to Windows 10, but only about 450 MB/s when Windows transfers files to Unraid. What should I do?
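
Since the slow direction is writes into Unraid, it may help to check whether the write path through the user share (shfs) is the bottleneck rather than SMB or the network. A rough local comparison, assuming a share named "testshare" that lives on the cache pool:

  # write through the cache pool directly
  dd if=/dev/zero of=/mnt/cache/testshare/ddtest1 bs=1M count=4096 conv=fdatasync
  # write the same amount through the user-share (shfs) layer
  dd if=/dev/zero of=/mnt/user/testshare/ddtest2 bs=1M count=4096 conv=fdatasync
  rm /mnt/user/testshare/ddtest1 /mnt/user/testshare/ddtest2

A large gap between the two numbers points at the user-share layer, which matches the /mnt/user vs /mnt/cache behaviour mentioned earlier in the thread.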

Link to comment
On 11/25/2017 at 9:18 AM, greg2895 said:

You are doing better than me! I'm getting 300 MB/s Windows to Unraid and about 400 MB/s Unraid to Windows. I also set up a point-to-point connection to bypass the switch and nothing changed.

 

I ended up configuring jumbo frames on the 10Gb connection at both ends (Windows 10 and Unraid), setting the MTU to 9000 on each.

Also, on Unraid, I set Tunable (enable Direct IO) to Yes.
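
A quick way to confirm jumbo frames are actually working end to end is a ping with the don't-fragment flag and a payload sized for a 9000-byte MTU (the address below is just an example for the Windows machine):

  # 8972 = 9000 minus 28 bytes of IP + ICMP headers
  ping -c 4 -M do -s 8972 192.168.1.50

If that fails while a normal ping works, something in the path is still at MTU 1500.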

Link to comment
  • 2 years later...
On 10/5/2017 at 9:09 AM, greg2895 said:

I am having the same issues here. I can't saturate 10GbE; I'm stuck at about 350 MB/s. Direct I/O is giving me call traces, and all Docker apps had to be changed from /mnt/user/appdata to /mnt/cache/appdata for them to be able to read/write. To top it off, I am still only getting 350 MB/s over 10GbE! I am out of ideas.

 

Did you guys ever figure this out? I am in the same boat.

 

I just posted a new topic regarding this issue, and I have possibly found the cause. However, I am able to fully saturate my 10GbE NIC with sustained 1 GB/s writes under one very specific scenario. Please see below, and feel free to stop by my post and saturate that, LOL. I REALLY want to get this fixed, the correct way. Thanks, everyone!

 

 

Edited by falconexe
Link to comment
