cinereus Posted April 18, 2020

From experience, rclone is the best way to move large amounts of data between NASes; I've had issues with rsync and other methods before. What's the best practice here: set up the old NAS as a remote, or the new NAS as a remote? Over SMB shares? Obviously transfer speed matters since we're talking tens of TB. Also, what user should I be running it as (and how)?
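For context, one common way to answer the question above is to skip the SMB layer entirely and define the old NAS as an SFTP remote, then pull from the new box. This is only a sketch, not advice from the thread: the remote name `oldnas`, the IP, the share paths, and the assumption of SSH key login to the old NAS are all placeholders.

```shell
# Sketch: register the old NAS as an SFTP remote in rclone (run on the new
# Unraid box as root so rclone can write directly to /mnt/user).
# Assumes SSH key auth is already set up for root@192.168.1.50.
rclone config create oldnas sftp host 192.168.1.50 user root

# Pull the data from the old NAS to the new array, showing progress:
rclone copy oldnas:/volume1/share /mnt/user/share --progress
```

Pulling over SFTP avoids one mount layer, though SFTP's encryption can itself become the bottleneck on low-powered NAS CPUs.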
cinereus Posted April 26, 2020 (Author)

I've just tested mounting the other server as SMB through UD. Even though iperf3 gives speeds of 0.9 Gb/s, I'm only seeing transfer rates of 40 MB/s. What's causing the slowdown, and what can I do to get everything transferred as quickly as possible?
Vr2Io Posted April 26, 2020 (edited)

There are many factors that can cause a slowdown, but the most common is storage device performance. I use rsync for all large data transfers, whether local or over the network. For network transfers (NAS to NAS, or to Unraid) I usually use rsync over an NFS mount rather than an SMB mount; with a single disk on a 1 Gbps network, throughput should be around 70-80 MB/s. I've done plenty of network transfers over both SMB and NFS, and in practice there's not much performance difference between them, so the transfer protocol is usually not the main factor. Since I don't have a network transfer to run at the moment, I'll show a local transfer with rsync instead: an array disk rsynced to a UD disk (~200 GB of data, large files) achieves a steady 145 MB/s.

Edited April 26, 2020 by Benson
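The rsync-over-NFS approach described above could look roughly like this. A sketch only: the export IP/path, mount point, and target share are placeholder assumptions, and it is run on the receiving Unraid box.

```shell
# Mount the source NAS's NFS export (IP and export path are placeholders):
mkdir -p /mnt/remotenas
mount -t nfs 192.168.1.50:/volume1/share /mnt/remotenas

# -a: archive mode (permissions, times, recursion)
# -v: verbose, --progress: show per-file transfer rate
# Trailing slash on the source copies its contents, not the directory itself.
rsync -av --progress /mnt/remotenas/ /mnt/user/share/

umount /mnt/remotenas
```

NFS has less per-operation overhead than SMB on some NAS firmwares, which is one reason it is sometimes preferred for bulk copies, though as the post says the protocol is rarely the main bottleneck.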
JorgeB Posted April 26, 2020

7 hours ago, cinereus said:
I'm only seeing transfer rates of 40 MB/s.

Make sure rsync isn't using compression (-z) and enable turbo write.
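Both checks above can be done from the Unraid console. The `mdcmd` path below is the usual location on Unraid, but treat this as a sketch; turbo write can also be toggled in Settings > Disk Settings.

```shell
# 1. No compression: the rsync command line should NOT contain -z
#    (compression costs CPU and rarely helps on a LAN), e.g.:
rsync -av --progress /mnt/remotenas/ /mnt/user/share/

# 2. Enable turbo write ("reconstruct write") on the array:
#    0 = read/modify/write (default), 1 = reconstruct write (turbo)
/root/mdcmd set md_write_method 1
```

Turbo write only matters when parity is present, since it changes how parity updates are computed during writes.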
cinereus Posted April 26, 2020 (Author)

2 hours ago, johnnie.black said:
Make sure rsync isn't using compression (-z) and enable turbo write.

No compression with either rsync or rclone, and I purposely disabled parity for data ingress, so I don't think that should be an issue?
cinereus Posted April 26, 2020 (Author) (edited)

9 hours ago, Benson said:
There are many factors that can cause a slowdown, but the most common is storage device performance. I use rsync for all large data transfers, whether local or over the network. For network transfers (NAS to NAS, or to Unraid) I usually use rsync over an NFS mount rather than an SMB mount; with a single disk on a 1 Gbps network, throughput should be around 70-80 MB/s. I've done plenty of network transfers over both SMB and NFS, and in practice there's not much performance difference between them, so the transfer protocol is usually not the main factor.

This is what I don't understand. Here's what I'm doing:

1. Mount the SMB share over 1GbE with UD.
2. Ensure no other processes are accessing the source or target disks. For example, writes to the target exceed 150 MB/s during preclear.
3. Test network performance between the two systems with iperf3: I get sustained transfers of 115 MB/s over Cat-5E.
4. Use either rsync -avPR or rclone copy to perform the transfer.

Transfer speeds are about 40 MB/s max with rclone (tested with transfers set from 1 to 10) and even worse with rsync.

Edited April 26, 2020 by cinereus
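The rclone side of step 4 above, with the parallelism being varied, might look like this. A hedged sketch: the UD mount path and share name are placeholders, and the flag values are illustrative rather than tuned.

```shell
# Copy from a UD-mounted SMB share into the array, varying parallelism.
# --transfers: number of files copied in parallel (the value the poster
#              swept from 1 to 10); mostly helps with many small files.
# --checkers:  parallel size/modtime checks.
rclone copy /mnt/remotes/oldnas_share /mnt/user/share \
    --transfers 4 --checkers 8 --progress
```

For a handful of very large files, raising `--transfers` does little, which is consistent with the poster seeing ~40 MB/s regardless of the setting and points back at per-stream throughput (disk or SMB stack) rather than concurrency.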
JorgeB Posted April 26, 2020

26 minutes ago, cinereus said:
I purposely disabled parity for data ingress so I don't think that should be an issue?

Yes, not an issue without parity; the source NAS is the likelier bottleneck.
cinereus Posted April 26, 2020 (Author) (edited)

34 minutes ago, johnnie.black said:
Yes, not an issue without parity; the source NAS is the likelier bottleneck.

A 40 MB/s read bottleneck? 😢 Wouldn't it be closer to the max read speed of a disk (WD Reds, so about 100 MB/s)?

Edited April 26, 2020 by cinereus
JorgeB Posted April 26, 2020

Try transferring a large file from your desktop to Unraid. If you get the same 40 MB/s, there's likely a problem with the array; if you get 100 MB/s+ (assuming gigabit), it's likely a problem with the source NAS.