teamhood Posted June 11, 2011

Hey all, I'm attempting to push a lot of data between two unRAID boxes and I can't seem to get these transfers faster than 10MB/s! I've got them both on the same gigabit switch, and I've checked that ethtool eth0 on both boxes shows 1000/full. I've disabled the parity drive in the machine that is receiving the data, and I'm copying over an smbmount. I'm not sure what is wrong, even after fully restarting the entire network and confirming that the gigabit-speed LED on the switch is lit... Thoughts?
dgaschk Posted June 11, 2011

Post the results of "ethtool eth0" for each machine. Then connect the machines directly with a Cat5e or better cable and check the speed again.
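Before pointing at disks or protocols, it can also help to measure raw TCP throughput between the boxes. A minimal sketch, assuming iperf is installed on both machines (it is not part of stock unRAID; the IP is the receiving server from this thread):

```shell
# Start "iperf -s" on the receiving box first, then run this from the sender.
# The command is echoed here rather than executed so the sketch is self-contained.
CMD="iperf -c 192.168.0.189 -t 30"
echo "$CMD"
# A clean gigabit link should report roughly 900 Mbit/s of TCP throughput;
# much less than that points at the NIC, cable, or switch rather than at
# disks, parity, or the smb/rsync layers.
```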
teamhood Posted June 11, 2011

unraid 1:

root@unraid:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
                                             1000baseT/Full
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000033 (51)
        Link detected: yes

root@unraid:~# ethtool -S eth0
NIC statistics:
        tx_packets: 1448408
        rx_packets: 2103223
        tx_errors: 0
        rx_errors: 0
        rx_missed: 0
        align_errors: 0
        tx_single_collisions: 0
        tx_multi_collisions: 0
        unicast: 2097709
        broadcast: 5514
        multicast: 0
        tx_aborted: 0
        tx_underrun: 0
teamhood Posted June 11, 2011

unraid 2:

root@Tower:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: umbg
        Wake-on: g
        Current message level: 0x00000007 (7)
        Link detected: yes

root@Tower:~# ethtool -S eth0
NIC statistics:
        rx_packets: 23693
        tx_packets: 8454
        rx_bytes: 2555871
        tx_bytes: 2043390
        rx_broadcast: 15934
        tx_broadcast: 599
        rx_multicast: 0
        tx_multicast: 0
        rx_errors: 0
        tx_errors: 0
        tx_dropped: 0
        multicast: 0
        collisions: 0
        rx_length_errors: 0
        rx_over_errors: 0
        rx_crc_errors: 0
        rx_frame_errors: 0
        rx_no_buffer_count: 0
        rx_missed_errors: 0
        tx_aborted_errors: 0
        tx_carrier_errors: 0
        tx_fifo_errors: 0
        tx_heartbeat_errors: 0
        tx_window_errors: 0
        tx_abort_late_coll: 0
        tx_deferred_ok: 4
        tx_single_coll_ok: 0
        tx_multi_coll_ok: 0
        tx_timeout_count: 0
        tx_restart_queue: 0
        rx_long_length_errors: 0
        rx_short_length_errors: 0
        rx_align_errors: 0
        tx_tcp_seg_good: 133
        tx_tcp_seg_failed: 0
        rx_flow_control_xon: 8
        rx_flow_control_xoff: 8
        tx_flow_control_xon: 0
        tx_flow_control_xoff: 0
        rx_long_byte_count: 2555871
        rx_csum_offload_good: 8751
        rx_csum_offload_errors: 0
        alloc_rx_buff_failed: 0
        tx_smbus: 0
        rx_smbus: 0
        dropped_smbus: 0
cyrnel Posted June 11, 2011

I prefer rsync on native filesystems, but even so, adding an SMB layer shouldn't cause this drastic an effect. Can we see your smb mount and rsync commands? Syslog, maybe?
WeeboTech Posted June 12, 2011

I've found in the past that to get the highest possible speed I needed to set up an rsync server. This bypasses a filesystem-protocol layer or an SSH layer. In one particular instance I had to use the option to set SO_SNDBUF= and SO_RCVBUF= to matching high values on the client and server. If you have parity enabled, 12-17MB/s over the network is what I have seen. On my local machine, copying from one drive to another, I burst at around 35MB/s but then drop to around 23MB/s depending on activity; but this is drive to drive. My best performance over a network has always been client to remote direct rsync server.
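For reference, the send/receive buffer tuning mentioned here can be passed on the rsync command line via --sockopts, which hands the options to setsockopt() for sockets rsync opens itself (such as a direct daemon connection); the daemon side has a matching "socket options" setting in rsyncd.conf. A hedged sketch; the 4 MB value and paths are illustrative, not taken from this thread:

```shell
# Illustrative buffer size only; tune and measure on your own hardware.
BUF=4194304
# The command is echoed rather than executed so the sketch is self-contained;
# the rsync://host/module form assumes an rsync daemon on the receiving box.
CMD="rsync -av --sockopts=SO_SNDBUF=$BUF,SO_RCVBUF=$BUF /mnt/disk1/ rsync://192.168.0.189/mnt/disk11/"
echo "$CMD"
```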
teamhood Posted June 12, 2011

I prefer rsync on native filesystems, but even so adding an smb layer shouldn't cause this drastic an effect. Can we see your smb mount and rsync commands? Syslog maybe?

I do the following:

mkdir /copy_disk1
smbmount //192.168.0.x/the/dir/here /copy_disk1
rsync -av --stats --progress /copy_disk1/ /mnt/diskx

I'm really surprised by how slow the speed is between the two machines when parity is disabled. I really enjoy rsync, but I'm wondering if there is something else, or another way, that I can copy between unRAID servers at top speed? It's weird that going from Win7 > unRAID I get excellent speeds using TeraCopy... I usually average around 30MB/s.
WeeboTech Posted June 12, 2011

I'm wondering if there is something else or another way that I can copy between unraid servers at top speed?

I just mentioned that setting up an rsync server bypasses other network protocol layers.

Put this in your inetd.conf:

rsync stream tcp nowait root /usr/sbin/tcpd /usr/bin/rsync --daemon

Then do a kill -1 on the PID of inetd:

root@atlas ~ #ps -ef | grep inetd
root      1754     1  0 May21 ?        00:00:00 /usr/sbin/inetd
root     10250 10240  0 09:07 pts/0    00:00:00 grep inetd

Set up an /etc/rsyncd.conf file. Here's a partial of mine:

root@atlas ~ #cat /etc/rsyncd.conf
uid = root
gid = root
use chroot = no
max connections = 4
pid file = /var/run/rsyncd.pid
timeout = 600

[mnt]
    path = /mnt
    comment = /mnt files
    read only = FALSE

Then do:

rsync (options) (sources) rsync://servername/mnt/disk1/destinationdirectory
teamhood Posted June 12, 2011

Thanks Weebo, I am going to give this a shot tomorrow. I'll try not to bug you too much if I run into any problems.
teamhood Posted June 12, 2011

Also, what if I do not want to SMB mount and just use rsync? Is there any reason that I have to smbmount? I was just thinking about this...
dgaschk Posted June 12, 2011

You do not need an SMB mount. This is exactly what WeeboTech described in his last post.
teamhood Posted June 13, 2011

Weebo: So I tried to get this set up and seem to be running into an error:

root@Tower:~# rsync -av --stats --progress /mnt/disk1/ rsync://192.168.0.189/mnt/disk11/
rsync: failed to connect to 192.168.0.189: Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(122) [sender=3.0.7]
root@Tower:~# ps -ef | grep inetd
root      1149     1  0 05:31 ?        00:00:00 /usr/sbin/inetd
root      3083     1  0 05:58 ?        00:00:00 /usr/sbin/inetd
root      4603  2202  0 06:13 pts/0    00:00:00 grep inetd

I set this up on 'Tower', which is the server that I want to copy the data from, and onto 'unraid'. Thoughts?
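As a general note, "Connection refused" means nothing was listening on rsync's port 873 on the target machine at all. A quick reachability check is to ask the daemon for its module list; the IP here is the intended rsync server from this thread:

```shell
# The command is echoed rather than executed so the sketch is self-contained.
CMD="rsync rsync://192.168.0.189/"
echo "$CMD"
# When the daemon is up, this prints the module names and comments from
# rsyncd.conf (for the config in this thread, a "mnt" module). Another
# "Connection refused" here means inetd on that box has no rsync entry,
# or was not sent a HUP after the edit.
```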
teamhood Posted June 13, 2011

FYI, I need to move about 8TB of data... I'm looking for any suggestions on how to do this as fast as possible.
dgaschk Posted June 13, 2011

Did you perform the following on 192.168.0.189? (Quoting WeeboTech's rsync-server setup from above: the inetd.conf entry, the kill -1 of inetd, and the /etc/rsyncd.conf file.)

Then on Tower you enter:

root@Tower:~# rsync -av --stats --progress /mnt/disk1/ rsync://192.168.0.189/mnt/disk11/
teamhood Posted June 13, 2011

Ahhhh... I set it up on Tower and not on unRAID (.189). I guess that would be my problem!
teamhood Posted June 13, 2011

NICE, THIS IS WORKING!!! 30MB/s+. THANK YOU WEEBO AND dgaschk!!!
teamhood Posted June 13, 2011

I figure I will help out the other non-Linux gurus and create a step-by-step on how to make this work, as I've often struggled with not knowing exactly how to do some of this.

Warning: I'm still a Linux newb. Gurus, please correct me where I'm wrong.
Warning 2: If anything breaks, all blame goes to WeeboTech, and if it works just right, all props go to WeeboTech.

1. Install vim using unMenu's Pkg Manager (it's near the bottom).

2. Telnet into the unRAID machine that you want to receive the data (in my case that is 'unRAID', ip: 192.168.0.189).

3. Type this command, which opens the inetd.conf file in the vi editor:

vim /etc/inetd.conf

4. Press the letter 'i' on your keyboard to enter insert mode.

5. Now insert the following (I placed it in the section below FTP):

rsync stream tcp nowait root /usr/sbin/tcpd /usr/bin/rsync --daemon

6. Once completed, press 'ESC', then ':', followed by 'wq'. Pressing escape leaves insert mode; ':' gives you an input line at the bottom of your screen, and 'wq' will write and quit.

7. Issue the killall command so inetd re-reads its config:

killall -HUP inetd

8. Check that inetd is loaded:

ps -ef | grep inetd

It should look like this:

root  1754      1  0 May21 ?      00:00:00 /usr/sbin/inetd
root 10250  10240  0 09:07 pts/0  00:00:00 grep inetd

9. Set up an rsyncd.conf file by doing the following:

vim /etc/rsyncd.conf

10. Press the letter 'i' to go into insert mode.

11. Enter the following into the conf file:

uid = root
gid = root
use chroot = no
max connections = 4
pid file = /var/run/rsyncd.pid
timeout = 600

[mnt]
    path = /mnt
    comment = /mnt files
    read only = FALSE

12. Press ESC, ':' and 'wq'.

13. Now open up another telnet window on the array that has the data you want to copy to the machine we just set up as the rsync server (for me that is 'Tower', or 192.168.0.199).

14. Type in the following command to begin the rsync:

rsync -av --stats --progress /mnt/disk1/ rsync://192.168.0.189/mnt/disk11/

Note that the above will copy disk1 (on Tower) to disk11 (on unRAID).

** I highly suggest that you install Screen from unMenu and add screen to the beginning of the above command:

screen rsync -av --stats --progress /mnt/disk1/ rsync://192.168.0.189/mnt/disk11/

This will allow you to close your telnet window, and it also protects the running rsync if you shut down your local PC. To check on the progress, simply log back into your array and type:

screen -x

or

screen -r
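One optional tweak for a one-off bulk copy like this, offered as a hedged suggestion rather than as part of the guide above: on a LAN transfer of large media files, rsync's delta-transfer algorithm mostly burns CPU, and -W (--whole-file) turns it off:

```shell
# The command is echoed rather than executed so the sketch is self-contained;
# it assumes the same daemon setup and paths as the guide above.
CMD="screen rsync -avW --stats --progress /mnt/disk1/ rsync://192.168.0.189/mnt/disk11/"
echo "$CMD"
# -W sends each file whole instead of computing block checksums; on a local
# network the wire is rarely the bottleneck, so skipping the checksum work
# can only help. (When both paths are local, rsync already implies -W.)
```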
WeeboTech Posted June 13, 2011

Nice HOWTO, teamhood! And 30MB/s is nothing to sneeze at!
teamhood Posted June 13, 2011

Quick question: Does anything need to be added to the go script so that rsync/inetd are good to go at reboot?
teamhood Posted June 13, 2011

Oh... I'm at 40MB/s now.
dgaschk Posted June 13, 2011

I'll take a stab at this:

First create a custom directory on your flash drive if one does not exist:

mkdir /boot/custom

Copy your new /etc/rsyncd.conf to custom:

cp /etc/rsyncd.conf /boot/custom

Add these lines to your go file:

echo "rsync stream tcp nowait root /usr/sbin/tcpd /usr/bin/rsync --daemon" >> /etc/inetd.conf
cp /boot/custom/rsyncd.conf /etc
killall -HUP inetd

That should do it.
teamhood Posted June 13, 2011

I thought there was going to be something more to it! I will give this a shot once I push over this 8TB of data!! Thank you again for your help!

Now we should clean this all up and add it to the wiki under 'Transferring Files Between 2 unRAID Servers', as that was the first place I checked...

Also, would this work if the arrays were not on the same LAN, if you updated the rsync command syntax?
WeeboTech Posted June 13, 2011

This is my scriptlet for adding the entry to inetd:

if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
    cat <<-EOF >> /etc/inetd.conf
rsync stream tcp nowait root /usr/sbin/tcpd /usr/bin/rsync --daemon
EOF
    read PID < /var/run/inetd.pid
    kill -1 ${PID}
fi

Keep in mind, you still need to copy over the rsyncd.conf file from some area in /boot to /etc.

Another way is to run rsync in daemon mode from the go script, with a specific path to the config file, as in:

/usr/bin/rsync --daemon --config=/boot/custom/etc/rsyncd.conf &

But this holds a chunk of memory for rsync all the time, rather than only when needed via inetd.
dgaschk Posted June 13, 2011

The server side does not need to be changed; that's the side with the go file. The client side:

rsync -av --stats --progress /mnt/disk1/ rsync://192.168.0.189/mnt/disk11/

The address of the server, e.g. 192.168.0.189, needs to be reachable from the client. No changes needed; performance tuning might help, but you won't know until performance is observed.
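One caveat for the non-LAN case, offered as a sketch under assumptions (sshd running on the server; the hostname is illustrative): the rsync daemon protocol sends data unencrypted, so across the open internet it is safer to let rsync run over ssh instead.

```shell
# The command is echoed rather than executed so the sketch is self-contained.
CMD="rsync -av --stats --progress -e ssh /mnt/disk1/ root@remote.example.com:/mnt/disk11/"
echo "$CMD"
# The single-colon "host:path" form runs rsync over ssh, so no rsyncd.conf or
# inetd entry is needed on the far side; just a running sshd and rsync in the
# remote PATH. Expect ssh's encryption to cost some throughput.
```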
Archived
This topic is now archived and is closed to further replies.