Yousty - Posted March 30, 2020

I recently switched out my cache drive from an older 250GB Crucial SSD to a 500GB Samsung 960 EVO NVMe SSD using a PCIe-to-NVMe adapter. The weird thing is, even though the NVMe drive is faster, I'm seeing slower speeds from it, particularly when I transfer files to it over my gigabit hard-wired network. With the SATA SSD I could saturate my network every time, transferring at 113MB/s both read and write. But with the NVMe SSD the fastest I can write to it over the network is 84MB/s. I still get 113MB/s reads from it, so I know it's not the network.

Here is the hardware I'm using:

- ASRock 990FX Extreme9 w/ latest firmware - I'm using PCIe slot #1, which is a PCIe x16 slot
- This PCIe-to-NVMe adapter - which is capable of 1500+MB/s write
- Samsung 960 EVO NVMe SSD - which can easily max out the adapter

I enabled jumbo frames in Unraid's settings, but that didn't make a difference. Any suggestions would be highly appreciated. Thank you!
testdasi - Posted March 30, 2020

It would probably be useful to attach Diagnostics (Tools -> Diagnostics -> attach zip file). Have you done an actual network-only test (e.g. iperf) and/or a storage-only test (e.g. the DiskSpeed docker, or even dd / rsync)?

Also, as a side note, you've slightly misunderstood how the adapter works. Your mobo is PCIe 2.0, the M.2 device is a x4 device, and the adapter itself is basically just rewiring, so your M.2 will only run at PCIe 2.0 x4 speed - a theoretical max throughput of 2GB/s. The claimed speed numbers on the listing are meaningless. Not that it should have any impact on your 84MB/s issue here, but I thought I'd mention it.
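If you want a quick and dirty storage-only write test, something like this from the Unraid console should do it (the /mnt/cache path is an assumption - adjust it to wherever your cache is actually mounted; oflag=direct bypasses the RAM cache so you measure the drive itself, and note /dev/zero can overstate speeds on compressed filesystems):

# write 4GB directly to the cache drive, then clean up
dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=4096 oflag=direct
rm /mnt/cache/ddtest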
Yousty - Posted March 30, 2020

I have attached my Diagnostics report. Yes, I ran the DiskSpeed docker, but as far as I can tell it only benchmarks read speed, which held steady at 1,502MB/s for the whole test. I'm not terribly familiar with iperf, so I'm unsure what to run, but I figured my network wasn't the issue since I always maxed out network speed with my previous SSD.

nas-diagnostics-20200330-1001.zip
testdasi - Posted March 30, 2020

Watch the SpaceInvader One video on iperf testing below. Without an actual test, you can't completely eliminate network issues.

About the 84MB/s: what sort of data are you copying over? Did you trim the drive? What's its reported temperature? Are you sure it's being written to the cache and not to the array?
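The gist of the iperf test, if you want to skip ahead: run the server side on Unraid and the client side on your Windows machine (replace <server-ip> with your server's actual IP):

iperf3 -s                 <- on the Unraid server
iperf3 -c <server-ip>     <- on the Windows client

If it reports well under the ~940 Mbits/sec you'd expect from gigabit, the network is your bottleneck, not the drive.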
Yousty - Posted March 30, 2020

I transfer mostly video files, ranging from 1GB to 60GB, and they always max out at 84MB/s now. The screenshot shows a transfer I just did. As you can see, it hits 84MB/s right away and sits there, almost like there's a bottleneck somewhere. I am positive it's going to the cache drive. I monitored the cache drive temperature in Unraid during the transfer and it stayed at 88°F the whole time. I have the SSD trim app installed and set to run every 4 hours.

I'll watch that video and do the tests, but it's highly unlikely it's a network issue when I've been doing hardwired transfers to this server at 113MB/s for over 5 years now and the ONLY thing that changed was switching my SSD cache drive from a SATA one to an NVMe one.
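(For reference, trim can also be run manually from the console to confirm it's actually working - assuming the cache is mounted at the usual /mnt/cache:

fstrim -v /mnt/cache

The -v flag makes it print how many bytes were trimmed.)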
JorgeB - Posted March 30, 2020

14 minutes ago, Yousty said:
"As you can see it hits 84MB/s right away"

This suggests a network issue, since the first couple of GB are cached to RAM and should be transferred at max LAN speed (assuming the source is capable of that).
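You can see how much RAM Linux is allowed to use as a write buffer by checking the standard dirty-page tunables from the console (values are percentages of total RAM; these are stock Linux settings, nothing Unraid-specific is assumed here):

sysctl vm.dirty_ratio vm.dirty_background_ratio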
Yousty - Posted April 2, 2020

Finally had some time to watch the video and run iperf, and shockingly it is the network causing the slowdown.

C:\iperf3>iperf3 -c 192.168.1.3
Connecting to host 192.168.1.3, port 5201
[  4] local 192.168.1.2 port 58770 connected to 192.168.1.3 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  84.0 MBytes   704 Mbits/sec
[  4]   1.00-2.00   sec  83.9 MBytes   704 Mbits/sec
[  4]   2.00-3.00   sec  84.0 MBytes   705 Mbits/sec
[  4]   3.00-4.00   sec  83.9 MBytes   704 Mbits/sec
[  4]   4.00-5.00   sec  83.8 MBytes   703 Mbits/sec
[  4]   5.00-6.00   sec  84.0 MBytes   704 Mbits/sec
[  4]   6.00-7.00   sec  84.0 MBytes   704 Mbits/sec
[  4]   7.00-8.00   sec  84.0 MBytes   704 Mbits/sec
[  4]   8.00-9.00   sec  83.9 MBytes   704 Mbits/sec
[  4]   9.00-10.00  sec  83.5 MBytes   700 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   839 MBytes   704 Mbits/sec                  sender
[  4]   0.00-10.00  sec   839 MBytes   704 Mbits/sec                  receiver

iperf Done.

That 704 Mbits/sec works out to about 88MB/s, which lines up with the 84MB/s I see on file transfers once protocol overhead is accounted for. It just makes no sense to me, since literally nothing about my network has changed since switching from the SATA to the NVMe SSD.
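(Side note for anyone else debugging this: iperf3 can also test the opposite direction with the -R flag, which should confirm whether reads are affected too - in my case they weren't:

iperf3 -c 192.168.1.3 -R
)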
Yousty - Posted April 2, 2020

Aaaand I solved it. I decided to make sure I had the latest NIC driver installed on my Windows 10 source machine, and sure enough, after installing it I am now transferring at 113MB/s to my Unraid server. Thank you everyone for helping me troubleshoot and leading me down the right path to fix the issue!
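For anyone hitting the same thing, the NIC driver version is easy to check on the Windows side before and after updating (standard PowerShell cmdlet, nothing vendor-specific):

Get-NetAdapter | Select-Object Name, DriverVersion, DriverDate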