NinjaBreadMan Posted March 11, 2021

I've been using Unraid for a few months now, and I think this has always been an issue, but it was never a problem for me until now. I just built a second server and started trying to fix this before I transfer everything onto it. I know write speeds are a common topic, but I've done a lot of research and haven't found anything that applies to what I'm seeing.

Here's the problem: I can't get write speeds any higher than 41MB/s. I know this is about normal for non-turbo writes, but enabling turbo makes no difference to the speed at all. And the speed is exactly the same transferring to the cache, which is what seems really weird. The only thing that's faster is when the mover runs; the Main page shows a consistent 84MB/s from cache to array. I have gotten bursts of 70-80MB/s at the start, which I assume is RAM caching, but this is inconsistent and usually doesn't happen at all. Usually the transfer starts from nothing and slowly ramps up to 41, where it stays. Any read operation saturates my 1Gb network and sustains 110MB/s, so I know the network or any other connection isn't the problem.

Using large video files, I've tried:

- Over SMB to Array, with and without Turbo enabled (cache setting "No"): 41MB/s
- Over SMB to Cache (cache setting "Yes"): 41MB/s
- Over SMB to exported disk share, Disk1: 41MB/s
- Over SMB to exported disk share, Cache: 41MB/s
- Directly connected USB drive using MC to Array and Cache: 20-30MB/s (weird that this would be slower, but it could be the USB bus or something)
- Checked write cache on drives: all already enabled
- Upgraded from 6.9.0 to 6.9.1
- Directly connected my PC to the server with Ethernet, bypassing the router and everything else

I've been able to determine that the writes are actually happening at much faster speeds, but that they come in spikes with nothing happening in between, which appears to average out to 41MB/s.

The screenshots below show the netdata docker illustrating this:

- sdd: Disk1 during a write to the array. You can see where I enabled Turbo write, even though the transfer speed didn't change at all.
- sde: Disk2, during the same time period.
- sdc: Cache, during a write to a share with the cache setting set to "Yes".
- And lastly Disk1 during a read, no issues there.

Also attached a screenshot of the file transfer showing 41MB/s.

The only possible reason I can think of is that I don't have a fast enough CPU, because during a transfer it always seems to be using 100% of one random core/thread, although overall utilization never goes past around 20%. The fact that the cache also has this issue seems to rule that out though, unless there's some other kind of overhead I'm not aware of. Beyond that, I have no idea what to try next. Thanks everyone!

System Specs:

- Unraid 6.9.1
- Dell PowerEdge R510
- Dual Xeon X5570 2.93GHz quad-core CPUs
- 8GB ECC RAM
- PERC H200 HBA
- Cache: Crucial BX500 1TB SSD (temporarily connected to the HBA along with the drives, waiting for a SATA power adapter)
- Array: 3x Seagate EXOS 16TB SATA drives, 1 parity

Diagnostics zip attached (upgraded to 6.9.1 right after; didn't want to redo because I rebooted). archive-diagnostics-20210310-1540.zip
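[Editor's note: the bursty write pattern visible in the netdata graphs can also be watched from the Unraid console. This is a sketch only; the device names sdc/sdd/sde are taken from the post above and may differ on other systems, and iostat requires the sysstat package.]

```shell
# Watch per-disk throughput once per second while a transfer runs.
# -m reports MB/s, -x adds extended stats including utilization.
iostat -mx 1 sdc sdd sde

# Columns worth watching:
#   wMB/s - write throughput; bursty writes show up as alternating
#           high values and near-zero rows, averaging out lower
#   %util - how busy the device is; low %util during a slow transfer
#           points away from the disk as the bottleneck
```

If the disks alternate between fast bursts and idle rows while the SMB transfer shows a flat 41MB/s, the disks are waiting on data rather than limiting the write, which matches what the netdata graphs show.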
JorgeB Posted March 11, 2021

Run a single stream iperf test in both directions.
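[Editor's note: for anyone following along, the suggested test can be run roughly like this. This is a sketch; it assumes iperf3 is installed on both ends, and 192.168.1.10 is a placeholder for the server's actual IP.]

```shell
# On the Unraid server, start an iperf3 listener:
iperf3 -s

# On the client PC, test client -> server with a single stream:
iperf3 -c 192.168.1.10 -P 1

# Reverse the direction (server -> client) without swapping roles:
iperf3 -c 192.168.1.10 -P 1 -R
```

A single stream matters here because SMB file copies are effectively single-stream; a link that only performs well with parallel streams can still give slow SMB transfers.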
NinjaBreadMan Posted March 11, 2021 (edited)

Here's the iperf result. It does appear to be slower when the server is receiving, but still not as slow as the SMB speeds I'm getting.
JorgeB Posted March 11, 2021

8 minutes ago, NinjaBreadMan said:
> but still not as slow as the SMB speeds I'm getting.

Yes, but it's still a clear issue. I would start by trying to fix that: try a different NIC, cable, switch, source PC, etc.
NinjaBreadMan Posted March 11, 2021

Thanks for the suggestions JorgeB. I'm getting somewhere, I think. I've confirmed that the slow sending was a driver issue on my PC; after reinstalling the LAN drivers, iperf shows almost the same speed both ways. However, when I write a file, it starts out at 90-100MB/s and then drops to that same 41 after a minute or two. I'll continue to run more tests from another PC, or whatever else I can think of, to try to isolate the problem, and I'll post the results if I make further progress.
JorgeB Posted March 11, 2021

3 minutes ago, NinjaBreadMan said:
> it starts out at 90-100MB/s and then drops to that same 41 after a minute or two.

This now suggests a device limit once the RAM cache is exhausted. Enable turbo write, write directly to one of the disks, and grab the diags after it slows down, then post them here.
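[Editor's note: a direct local write is another way to take SMB and the network out of the picture entirely. A sketch, assuming the standard Unraid mount point /mnt/disk1 and enough free space for a 4GiB test file:]

```shell
# Write a 4GiB test file directly to disk1 from the Unraid console.
# oflag=direct bypasses the RAM page cache, so the reported rate
# reflects actual disk write speed rather than memory speed.
dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=4096 oflag=direct status=progress

# Remove the test file afterwards:
rm /mnt/disk1/ddtest.bin
```

If this sustains 100MB/s+ while SMB writes stay at 41MB/s, the bottleneck is upstream of the disks.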
NinjaBreadMan Posted March 11, 2021

Here's the diag; I enabled turbo and wrote to the disk1 share. archive-diagnostics-20210311-1233.zip
JorgeB Posted March 12, 2021

According to the diags there's 0 array activity.
NinjaBreadMan Posted March 12, 2021

I'm not sure why that would be. Should I have written to a user share with cache set to no instead of a disk share?
NinjaBreadMan Posted March 12, 2021

I created a fresh share with cache set to no, transferred an 8GB file to it (the speed never went higher than 41, not even at first), and when it finished I downloaded the diagnostics, attached here. Let me know if that wasn't the correct procedure. archive-diagnostics-20210312-1120.zip
JorgeB Posted March 12, 2021

35 minutes ago, NinjaBreadMan said:
> I'm not sure why that would be. Should I have written to a user share with cache set to no instead of a disk share?

Share type won't matter. If the diags were grabbed during a transfer, it suggests that the disks are writing in faster bursts than the speed you're seeing, and at the moment the diags were saved they weren't writing, they were waiting for data.

21 minutes ago, NinjaBreadMan said:
> (speed never higher than 41, not even at first)

At the time these diags were saved the disk was writing at 100MB/s+, suggesting it's not a disk problem. Try this: copy that same large file to cache, then using Windows Explorer (assuming you're on Windows 10) transfer it from the cache share to a disk share. You need to have disk shares enabled; then transfer from \\tower\cache to e.g. \\tower\disk1. The transfer won't use the network, it will be done locally. See what speed you get with that.
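[Editor's note: the same cache-to-array copy can be timed from the Unraid console instead of Windows Explorer, which also keeps SMB out of the path. A sketch; 'testfile.mkv' is a placeholder for the large file already sitting on the cache.]

```shell
# Copy locally from the cache pool to disk1 with a live progress/speed
# readout. /mnt/cache and /mnt/disk1 are the standard Unraid mounts.
rsync --progress /mnt/cache/testfile.mkv /mnt/disk1/
```

Because both paths are local mounts, the speed shown here reflects only the cache read and the array write, with no network or SMB overhead.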
NinjaBreadMan Posted March 12, 2021

Yep, that worked at full speed.
JorgeB Posted March 12, 2021

Then it suggests it's still a network problem, despite the iperf results.
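[Editor's note: with the disks ruled out, checking the server-side link negotiation and interface error counters is a reasonable next step. A sketch; 'eth0' is an assumed interface name and may differ on this system.]

```shell
# Confirm the link negotiated 1000Mb/s full duplex. A 100Mb/s link
# caps transfers around 11MB/s, but marginal negotiation or a bad
# cable can also produce erratic, stalling throughput.
ethtool eth0 | grep -E 'Speed|Duplex'

# Interface statistics; rising RX/TX errors or drops during a
# transfer point at the cable, switch port, or NIC itself.
ip -s link show eth0
```

Retransmissions from a marginal link would match the observed pattern: raw iperf mostly fine, but sustained SMB writes collapsing to a fraction of line rate.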