[Solved] Rsync performance between two unRAID boxes



Hey all, I'm seeing some strange rsync issues between two unRAID servers, and I'm hoping someone can help me diagnose and fix it.

 

Summary: Rsync via SMB tops out at 20-25MB/s on a gigabit link, and usually sits at 10-20MB/s. More details below.

 

Rsync command:

rsync -avzhiPW --delete /mnt/disks/LENNYR4_NAS/ /mnt/user/NAS

 

My unRAID boxes are tied together via the Unassigned Devices plugin and mounted via SMB. iPerf3 single-threaded tests between the two machines top out at 970 Mbit/s, so the link is running at gigabit line speed with no network issues to be found. Ping is anywhere from 0.5-1.0 ms. Using tools like dd and hdparm, I can see 80-120 MB/s read/write speed between the two machines via SMB. Using the Diskspeed.sh script found in these forums, each disk in the destination machine can handle the same 80-120 MB/s write speed.
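For anyone wanting to repeat these baseline checks, a rough sketch of the commands involved (PEER and DISK are placeholders, not values from my setup - fill in your own peer IP and disk device):

```shell
# Baseline sanity checks, roughly as described above.
# PEER and DISK are placeholders -- substitute your own values.
PEER=${PEER:-}      # e.g. 10.0.1.2
DISK=${DISK:-}      # e.g. /dev/sdb

# Raw single-stream network throughput (gigabit should show ~940 Mbit/s):
[ -n "$PEER" ] && iperf3 -c "$PEER" -t 10

# Buffered sequential read speed of one disk (needs root):
[ -n "$DISK" ] && hdparm -t "$DISK"

# Sequential write speed through the filesystem; conv=fdatasync forces
# the data to actually hit the disk before dd reports a rate.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```

Point the dd output at the SMB mount instead of /tmp to measure the over-the-wire write path rather than local disk.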

 

The files are 99% movies in the 1-40 GB range, plus .zip and .tar.gz backups.

 

Yet for some reason, rsync likes to hang out at 10-25 MB/s, rarely higher. This has to be an issue with my rsync command, I'd imagine. Thoughts?


How is CPU utilization?

 

100% of a core - that's likely the culprit. I tried both with the source as sender (2x Xeon E5-2680) and with the destination as receiver (Core i5 3570). Both max out a single core, and both sit at around 30 MB/s now.

 

How can one reduce the CPU utilization of rsync? Or is there a better option entirely to mirror the share across SMB/NFS?


That's a good question for the rsync gurus on here. I have another box for versioned backups with CrashPlan. I'm just leery of an accidental deletion carrying over to the other copy... so what I do works for me. A little slow at times (also bound by single-core speed), but I set it and forget it.


I really appreciate your assistance thus far! And that's how I usually handle things - slow is fine when it's a nightly backup of 10GB or so, but this is a ~11TB copy after some drives were upgraded.

 

Update: I dropped -z to disable compression, since it's a "local" copy (not over the internet), and CPU usage dropped like a rock (to ~2.5% of a single core). Still sitting at 12-20 MB/s  :(


Remove the -z flag; it compresses all data before sending. That's only useful for slow connections and highly compressible data - it's much slower over Ethernet.

 

I've actually dropped the -z flag already. It solved the CPU maxing out (it's now at 3% or so), but didn't affect throughput. Looks like it bounced anywhere between 10 MB/s and 35 MB/s overnight.
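For anyone landing here later, the adjusted command is just the original minus -z (the paths are specific to my setup - substitute your own):

```shell
# Original command minus -z. Compression maxes out a CPU core and only
# pays off on slow WAN links, not a gigabit LAN. -W (whole-file) skips
# rsync's delta algorithm, which suits a fresh mirror over an SMB mount.
# These paths are from my setup -- substitute your own.
SRC=/mnt/disks/LENNYR4_NAS/
DST=/mnt/user/NAS
[ -d "$SRC" ] && rsync -avhiPW --delete "$SRC" "$DST" || echo "source $SRC not mounted"
```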


Have you tried Resilio Sync?

 

Sent from my ONEPLUS A3000 using Tapatalk

 

I haven't. I hear Midnight Commander is also one to try, but if Robocopy in Windows (with the drives mounted in Explorer) and rsync on the unRAID box itself show the same result, I'm not sure we'll see different behavior.

 

Will try and report back, though.


Just realized what Resilio was - I've been using SyncThing for so long I didn't even realize btsync spun off from BitTorrent Inc  :o

 

Anyway, I tried Midnight Commander - very cool idea and I dig the interface, but it still sits at 20-25 MB/s. I'm truly at a loss here.


Further testing:

 

Summary of infrastructure:

10.0.1.4 - source unRAID box (has parity)

10.0.1.2 - destination unRAID box (no parity)

 

I used a third computer (a Dell T3500 running Windows Server 2016, which iPerfs at 950 Mbit/s) and tried Robocopying from each unRAID box to the local data drive in Windows.

 

From 10.0.1.4 - 150-250 Mbit/s

From 10.0.1.2 - 800-900 Mbit/s

 

So it appears to be a read issue on the source unRAID box. I ran hdparm on the disks, and they all seem fine (140-160 MB/s). Anything else I can look at for read speed?
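One more read test I can run, to rule out caching effects and test the filesystem path rather than the raw device hdparm hits - a sequential read of an actual file (the path is a placeholder, any large file on an array disk works):

```shell
# Sequential read of a real file on the suspect box; the path is a
# placeholder -- point FILE at any large file on an array disk.
FILE=${FILE:-/mnt/disk1/some-large-file.mkv}
if [ -f "$FILE" ]; then
    # dd reports an effective read rate when it finishes
    dd if="$FILE" of=/dev/null bs=1M
else
    echo "set FILE to an existing file first"
fi
```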


So it appears to be a read issue on the source unRAID box. I ran hdparm on the disks, and they all seem fine (140-160 MB/s). Anything else I can look at for read speed?

Were these reads from existing data or from files newly copied to a freshly formatted disk? If it's all existing data, can you completely empty a disk and reformat it for testing?

Update:

 

I feel a bit ashamed, but it turns out there was a parity check / data rebuild running on the source server. When I checked the WebUI, it had an hour or so remaining - I let it finish, and rsync speeds jumped up to 900 Mbit/s. That's why reads from the parity-backed server were so slow.

 

Confused as to why a parity check kills read speeds as badly as it did, but that's for another time. For now, things appear to be working top notch again. Thanks all for the amazing help! This community rocks. ;D

Confused as to why a parity check kills read speeds as badly as it did, but that's for another time.

It kills the speed because you were thrashing your drives. The read head was reading one group of sectors for the parity check, then immediately seeking to a new location to read the sectors involved in your transfer. So on top of the seek time, each drive was reading twice the information. Your parity check speed was slowed down while you were reading from the disks as well.
