mattcoughlin Posted October 6, 2017
Changing the Docker apps to mnt/cache/appdata fixed my Docker issue. Thanks for the tip, Greg. I am also now getting up to 1.15 GB/s transfer speed! What type of drive are you writing to/from, Greg? Anything but a multi-drive SSD RAID or a very high-speed NVMe drive will be the bottleneck over a 10GbE network.
greg2895 Posted October 6, 2017
1 minute ago, mattcoughlin said:
What type of drive are you writing to/from, Greg?
I currently have four Samsung 850 Pro SSDs in BTRFS RAID 10. That should be good for around 1,100 MB/s read/write, but I am getting 350 MB/s max over the network.
mattcoughlin Posted October 6, 2017
Those should be more than fast enough. I had three 850 EVOs for cache and they gave me around 700 MB/s transfer speed. I assume you have jumbo frames enabled on both ends as well as on the switch.
greg2895 Posted October 6, 2017
15 minutes ago, mattcoughlin said:
I assume you have jumbo frames enabled on both ends as well as on the switch.
I have not been able to enable jumbo frames on Unraid. The NIC supports jumbo frames, but the kernel refuses any MTU over 1500. The same NIC works fine with jumbo frames on Windows, and the NIC documentation lists Linux support. I'm not sure that's the issue, though, because I've heard of other people saturating 10GbE without jumbo frames enabled.
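For anyone else running into the same MTU wall, a minimal sketch for checking what the kernel and driver actually report when you try to raise the MTU (the interface name eth0 is only an assumption; check yours with `ip link` first, and run as root):

```python
import subprocess

IFACE = "eth0"  # assumed name of the 10GbE interface; verify with `ip link`

def get_mtu(iface: str) -> int:
    """Read the current MTU from sysfs."""
    with open(f"/sys/class/net/{iface}/mtu") as f:
        return int(f.read().strip())

def set_mtu(iface: str, mtu: int) -> bool:
    """Try to set a new MTU; returns False if the kernel/driver rejects it."""
    result = subprocess.run(
        ["ip", "link", "set", "dev", iface, "mtu", str(mtu)],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"MTU {mtu} rejected: {result.stderr.strip()}")
        return False
    return True

if __name__ == "__main__":
    print(f"{IFACE} current MTU: {get_mtu(IFACE)}")
    if set_mtu(IFACE, 9000):
        print(f"{IFACE} new MTU: {get_mtu(IFACE)}")
```

The error text from `ip link` usually says whether it is the driver capping the MTU or something else in the stack.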
mattcoughlin Posted October 17, 2017
So now it has stopped saturating my network, and I have changed nothing...
smaka510 Posted November 25, 2017
I have the same problem. I set the MTU on my Windows PC to 9000 and did the same on the 10Gb NIC in my Unraid server, and my transfers are stuck at 400-450 MB/s. It's a huge improvement, but this is a dedicated point-to-point connection, so I was hoping to completely saturate it. My first few tests were 980 MB/s, but I haven't seen that speed since. My cache drives are two Kingston 240GB SSDs in RAID 0, and the drive in my Windows computer is a 250GB 850 EVO SSD. Enabling "Tunable (enable Direct IO)" made no difference. I am planning to swap the cache drives for two Samsung 850 EVOs in RAID 0, but I do not want to buy them until I can get this working. What am I missing?

Here are my parts:
Workstation - HP 10Gb Mellanox ConnectX-2 PCIe (part 671798-001)
Unraid - HP Dual Port 10Gb Ethernet PCIe Card for ProLiant (part 468349-001 / 468330-002)
Fiber optic LC UPC to LC UPC duplex cables, 98 ft and 33 ft (from fs.com)
2x HPE BladeSystem 10GBase-SR SFP+ 300m transceivers (part 455883-B21, from fs.com)
JorgeB Posted November 25, 2017
smaka510 said:
My transfers are stuck at 400-450 MB/s. My cache drives are two Kingston 240GB SSDs in RAID 0. The drive in my Windows computer is a 250GB 850 EVO SSD.
Those are perfectly normal speeds for a single SSD; you may only get faster speeds briefly while it's reading/writing from RAM. Depending on the models used, the same goes for the two Kingstons in RAID 0, especially for writes.
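One way to check whether the drives rather than the network are the limit is to time a large sequential write locally on the server, with no network involved. A rough sketch, assuming a hypothetical test path on the cache pool (/mnt/cache/speedtest.bin) and a 4 GiB test size:

```python
import os
import time

TARGET = "/mnt/cache/speedtest.bin"        # hypothetical test file on the cache pool
SIZE_GIB = 4                               # keep this comfortably above RAM caching effects
CHUNK = os.urandom(64 * 1024 * 1024)       # 64 MiB of incompressible data

def sequential_write_test(path: str, size_gib: int) -> float:
    """Write size_gib GiB sequentially and return throughput in MB/s."""
    total = size_gib * 1024 ** 3
    written = 0
    start = time.time()
    with open(path, "wb") as f:
        while written < total:
            f.write(CHUNK)
            written += len(CHUNK)
        f.flush()
        os.fsync(f.fileno())               # make sure data actually reaches the drives
    elapsed = time.time() - start
    os.remove(path)
    return written / elapsed / 1e6

if __name__ == "__main__":
    print(f"Sequential write: {sequential_write_test(TARGET, SIZE_GIB):.0f} MB/s")
```

If the local number is close to what you see over the network, the pool is the bottleneck, not the NICs or the switch.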
greg2895 Posted November 25, 2017
I still haven't solved my issue either. I'm transferring from an NVMe drive to two Samsung 850 Pro SSDs in RAID 0. iperf3 shows bandwidth of around 3 Gbit/s, and MTU 9014 is enabled on both sides.
JorgeB Posted November 25, 2017
19 minutes ago, greg2895 said:
iperf3 shows bandwidth of around 3 Gbit/s.
Then your problem is right there: you'll never get more speed than the iperf test achieves.
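For reference, iperf3 is the quickest way to separate the network from the disks: run `iperf3 -s` on the server and drive it from the client. A small sketch using the JSON output, assuming the Unraid box is reachable at 10.0.0.2 (address and stream count are placeholders):

```python
import json
import subprocess

SERVER = "10.0.0.2"  # assumed address of the machine running `iperf3 -s`

def iperf3_gbps(server: str, streams: int = 4, seconds: int = 10) -> float:
    """Run an iperf3 client test and return the received throughput in Gbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    result = json.loads(out)
    bps = result["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

if __name__ == "__main__":
    print(f"Raw TCP throughput: {iperf3_gbps(SERVER):.2f} Gbit/s")
```

If this reports well under 9 Gbit/s, no amount of SSD or cache tuning will get file transfers to line rate; the network stack needs fixing first.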
smaka510 Posted November 25, 2017
Actually, I noticed that I get well over 1 GB/s when files are copied from Unraid to Windows 10, but only about 450 MB/s when Windows transfers files to Unraid. What should I do?
greg2895 Posted November 25, 2017
You are doing better than me! I'm getting 300 MB/s Windows to Unraid and about 400 MB/s Unraid to Windows. I also set up a point-to-point connection to bypass the switch and nothing changed.
smaka510 Posted November 25, 2017
After doing the same test again with a RAM disk, the problem went away. I am getting 1.18 GB/s transfers to and from Windows 10.
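For anyone wanting to repeat the RAM disk test on the server side, a minimal sketch that mounts a tmpfs share to copy to and from (the mount point and size are arbitrary; keep the size well under free RAM, and run as root):

```python
import subprocess

MOUNT_POINT = "/mnt/ramtest"  # hypothetical mount point for the test share
SIZE = "8g"                   # tmpfs size; must fit comfortably in free RAM

def create_ramdisk(path: str, size: str) -> None:
    """Mount a tmpfs RAM disk that can be shared and tested like any other path."""
    subprocess.run(["mkdir", "-p", path], check=True)
    subprocess.run(
        ["mount", "-t", "tmpfs", "-o", f"size={size}", "tmpfs", path],
        check=True,
    )

def remove_ramdisk(path: str) -> None:
    """Unmount the tmpfs when done; its contents are discarded."""
    subprocess.run(["umount", path], check=True)

if __name__ == "__main__":
    create_ramdisk(MOUNT_POINT, SIZE)
    print(f"tmpfs mounted at {MOUNT_POINT}; copy files to/from it over the network, "
          "then unmount when finished.")
```

If transfers to the RAM disk hit line rate while transfers to the cache pool don't, the drives (or the filesystem on them) are the limit rather than the network.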
smaka510 Posted November 29, 2017
On 11/25/2017 at 9:18 AM, greg2895 said:
You are doing better than me! I'm getting 300 MB/s Windows to Unraid and about 400 MB/s Unraid to Windows.
I ended up configuring jumbo frames on the 10Gb connection at both ends (Windows 10 and Unraid), setting the MTU to 9000 on each. On Unraid I also set "Tunable (enable Direct IO)" to Yes.
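To confirm jumbo frames are actually working end to end (and not just configured on each NIC), the usual check is a don't-fragment ping with a payload of 8972 bytes, i.e. the 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header. A sketch that picks the right ping flags for Windows or Linux; the target address is a placeholder:

```python
import platform
import subprocess

TARGET = "10.0.0.2"  # assumed address of the machine at the other end of the link
PAYLOAD = 8972       # 9000-byte MTU minus 20-byte IP and 8-byte ICMP headers

def jumbo_ping(host: str, payload: int = PAYLOAD) -> bool:
    """Send one don't-fragment ping with a jumbo-sized payload."""
    if platform.system() == "Windows":
        cmd = ["ping", "-n", "1", "-f", "-l", str(payload), host]
    else:
        cmd = ["ping", "-c", "1", "-M", "do", "-s", str(payload), host]
    return subprocess.run(cmd, capture_output=True).returncode == 0

if __name__ == "__main__":
    if jumbo_ping(TARGET):
        print("Jumbo frames OK end to end")
    else:
        print("Ping failed -- something in the path is still at MTU 1500")
```

If the big ping fails but a normal one works, a NIC, switch port, or VLAN in the path is still at the default MTU.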
falconexe Posted May 11, 2020
On 10/5/2017 at 9:09 AM, greg2895 said:
I am having the same issues here. I can't saturate 10GbE; I'm stuck at about 350 MB/s. Direct I/O is giving me call traces, and all Docker apps had to be changed from mnt/usr/appdata to mnt/cache/appdata for them to be able to read/write. To top it off, I am still only getting 350 MB/s over 10GbE! I am out of ideas.
Did you guys ever figure this out? I am in the same boat. I just posted a new topic regarding this issue, and I may have found the cause. I am able to fully saturate my 10GbE NIC with sustained 1 GB/s writes, but only under a very specific scenario. Please see below, and feel free to stop by my post and saturate that one too, LOL. I really want to get this fixed, the correct way. Thanks, everyone!