Network write speeds to unraid drop off - advice?


Beebs


I'm new to unraid and am just setting up my first server. I've noticed that as I transfer files (anywhere from 20 to 100 GB at a go) to unraid, network performance for the first few GB of data is pretty good (approx. 888 Mbps), then drops for a bit, increases again, then drops and stays at 320 Mbps or less. I am wondering if this is due to the hardware limitations of my unraid server or if there is some setting I should be using. I have set cache usage to "yes" so that the cache drive fills first and then writes to the array (if I understand correctly), and I am using encryption on the array drives (the CPU supports AES).

 

When I look at the dashboard, memory utilization is low and CPU load doesn't seem particularly high overall. I searched the forum and haven't found a thread similar to mine (the ones I did find discuss read performance). Thanks in advance, and I apologize if this is a bit of a noob question.

 

Unraid server: Gigabyte F2A68HM-H FM2+ mobo, AMD 760K quad-core @ 3800 MHz, 8 GB RAM, 3 x 8 TB (1 parity), 1 x 120 GB SATA SSD. I am using the onboard SATA and a Realtek NIC.

4 hours ago, Beebs said:

network performance for the first few GB of data is pretty good (approx. 888 Mbps), then drops for a bit, increases again, then drops and stays at 320 Mbps or less.

This would suggest the network is good, since the first few GB will be cached to RAM; after that, the transfer is limited by the device being written to, and a 120 GB SSD might not be that fast. You can also try writing directly to the array with turbo write enabled to see if there's any difference.
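If you want to put a number on the device itself, a timed write that forces data to disk will separate the RAM burst from the sustained rate. A minimal sketch in Python, runnable from the Unraid console; the target path /mnt/cache/speedtest.bin is just an example, point it at whatever disk or share you want to test:

```python
# Minimal write-speed test: writes several GiB and fsyncs so the kernel's
# RAM write-cache can't flatter the result. The target path is an example;
# write more data than you have RAM for a true sustained figure.
import os
import time

TARGET = "/mnt/cache/speedtest.bin"   # hypothetical test file location
BLOCK = b"\0" * (4 * 1024 * 1024)     # 4 MiB per write
TOTAL_MIB = 8 * 1024                  # 8 GiB total, more than 8 GB of RAM

start = time.monotonic()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL_MIB // 4):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())              # block until the device really has it
elapsed = time.monotonic() - start
print(f"{TOTAL_MIB / elapsed:.0f} MiB/s sustained")
os.remove(TARGET)
```

Pointing the same script at the cache, a user share, and an array disk (with and without turbo write) makes the comparison direct.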


I created a topic with the exact same issue a couple of hours ago, so I'm deleting (or trying to delete) that and posting the text here:

 

I've been struggling with this issue over the past month or two. I've recently started getting rapid drop-offs in write speeds to my cache. When pulling down data from various servers (each able to upload at 250 MB/s) over my home gigabit connection, I used to be able to sustain 100 MB/s ingest speeds as reported by rclone or a given desktop app, and that makes sense, given that I should be writing to my cache and thus bypassing parity. But as it stands (and for testing I've been pulling a single 10 GB file at a time), I'll get 90-100 MB/s download speeds, sustain that for 3-6 GB, and then rapidly drop to several MB/s and ultimately to a few KB/s. After a few minutes of that, the speeds will pick back up to ~60 MB/s, sustain for a few minutes, and then drop back down. I rebooted my server a few days ago, so I don't have much historical data, but I'm attaching a screenshot of transfer rates. I'm seeing this issue when using FileZilla on a Windows VM or the rclone plugin running directly on unRAID. Any thoughts or suggestions? System stats and diags below.

 

Dual E5-2667 v1 (12 cores at 2.9 GHz)

64 GB memory

2-disk 1 TB SSD cache pool

10 GbE NIC
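Side note in case it helps anyone reproducing this: the fast-for-a-few-GB-then-crawl shape matches the kernel's dirty-page write-back thresholds kicking in, in line with the RAM caching mentioned above. A read-only sketch to see what yours are set to; these are standard procfs entries, nothing Unraid-specific:

```python
# Read-only peek at the kernel write-back thresholds that shape the
# "fast for a few GB, then stalls" pattern. Standard Linux procfs paths.
for name in ("dirty_ratio", "dirty_background_ratio",
             "dirty_bytes", "dirty_background_bytes",
             "dirty_expire_centisecs"):
    with open(f"/proc/sys/vm/{name}") as f:
        print(f"vm.{name} = {f.read().strip()}")
```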

 

Edit: added a screencap of my rclone output (this is with turbo write on, though I don't think it should matter anyway, since I tried writing to cache specifically) and a shot from NetData.

 

Edit 2: I've isolated this to a cache-specific issue. When doing the same copy operation to an Unassigned Devices NVMe disk, I can sustain 100 MB/s write speeds. Next up is testing a write directly to the array, bypassing the cache, with turbo write turned on as described above.

 

[Screenshots attached: rclone transfer output and NetData graphs]

dexxy-diagnostics-20200116-1608.zip


Well, I solved my issue accidentally. Don't know if it'll help you all. I was trying to troubleshoot the cache drives and removed a disk from my pool (to test the remaining one). I cancelled the balance operation while running on the remaining disk, and somewhere along the line managed to break the filesystem on my cache drives. I ended up reformatting them both and rebuilding my cache pool. I've been able to sustain the old speeds since.
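For anyone who hits similar symptoms, it may be worth dumping the pool's btrfs error counters before jumping straight to a reformat. A minimal sketch, assuming the pool is mounted at /mnt/cache (the Unraid default) and the btrfs CLI is available, as it is on stock Unraid:

```python
# Dump btrfs per-device I/O error counters for the cache pool.
# Assumes the pool is mounted at /mnt/cache.
import subprocess

result = subprocess.run(
    ["btrfs", "device", "stats", "/mnt/cache"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```

Non-zero counters there would point at the filesystem or a device rather than the network path.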

 

Don't know if this will help anyone, and I wish there had been an easier way to do this (at least if I'd planned on it, I'd have backed up appdata!), but it did work. This has been bugging me for 6-8 weeks, and I'm so glad to have it fixed.

 


 

 
