sentein Posted January 24, 2022 What is going to be the fastest way to copy data between two Unraid boxes? I am using two 10Gb NICs, one in each system. Should I direct-connect the two systems and use rsync to get the data copied the quickest? I have tried using a switch, with a Windows and a Linux machine doing the heavy lifting, but the performance is horrible, and apparently my memory fills up and the transfer stops. Could I throw in an 8TB drive as cache to speed up transfers at all, then remove the 8TB drive after the transfer is complete? The disks on the system will be static after the transfer. Any thoughts would be helpful.
JorgeB Posted January 24, 2022 The fastest way is to run multiple disk-to-disk copy sessions with rsync or something similar, without parity of course. I can usually get around 400MB/s sustained for an initial server sync, and it could be even faster without using SSH. For a single disk copy you'll get 100 to 200MB/s depending on the disks used.
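The "multiple disk-to-disk sessions" idea can be sketched as a small script: one rsync per data disk, all running in parallel. The destination IP, disk list, and the DRY_RUN guard are placeholders for illustration, not anything from the thread; edit them for your own servers before using it.

```shell
#!/bin/sh
# Sketch: one rsync session per data disk, run in parallel,
# copying disk paths directly (not user shares), with parity
# disabled on the destination for maximum write speed.
DEST_IP="10.0.0.2"   # hypothetical direct-connect address of the new server
DRY_RUN="echo"       # set DRY_RUN="" to perform the real copies

copy_all_disks() {
    for d in disk1 disk2 disk3; do
        # each disk gets its own background rsync session
        $DRY_RUN rsync -av "/mnt/$d/" "root@$DEST_IP:/mnt/$d/" &
    done
    wait    # block until every parallel session has finished
}

copy_all_disks
```

Running one session per physical disk keeps each spindle doing sequential I/O, which is how aggregate throughput in the 400MB/s range becomes possible over a single 10Gb link.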
sentein Posted January 24, 2022 Author Just to pick your brain a bit: would you know of a write-up or similar for "copy multiple disk to disk sessions with rsync or something similar"? That sounds like exactly what I need. Also, I saw in an earlier post that the active cable limit for 10Gb was 33'. Does that limit apply to, for example, passive copper SFP+ cables, or would OM3 and OM5 fiber also be limited to that length? I have a new 30m OM3 cable here and would like to know before I run it.
JorgeB Posted January 25, 2022 With rsync you just use the disk paths instead, e.g.: rsync -av /mnt/disk1/share/ dest_ip_address:/mnt/disk1/share/ 13 hours ago, sentein said: "as an example passive copper SFP+ cables" That limit is for active copper cables; fiber cables can have much longer runs. https://en.wikipedia.org/wiki/Multi-mode_optical_fiber
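On the earlier "could be faster even without using SSH" point: one common way to take SSH encryption out of the path is to run rsync in daemon mode on the destination and copy with the double-colon syntax. This is a hypothetical sketch, not Unraid-specific advice; the module name, paths, and config location are placeholders.

```shell
#!/bin/sh
# Sketch: write a minimal rsync daemon config exposing one disk as a module.
# Daemon-mode transfers skip SSH entirely, which removes encryption overhead
# that can bottleneck a 10GbE link.
cat > /tmp/rsyncd.conf <<'EOF'
uid = root
gid = root
use chroot = no
[disk1]
    path = /mnt/disk1
    read only = false
EOF

# On the destination server (needs root, listens on TCP 873):
#   rsync --daemon --config=/tmp/rsyncd.conf
# On the source server, double-colon syntax talks to the daemon directly
# instead of tunnelling through SSH:
#   rsync -av /mnt/disk1/share/ dest_ip::disk1/share/
```

Only do this on a trusted direct link or private LAN, since daemon-mode traffic is unencrypted and unauthenticated as configured here.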
sentein Posted January 25, 2022 Author Thank you very much. That helps a lot.