Recommended Posts

Try these:

 

root@Atlas:/mnt/disk2/bittorrent# whereis rsync
rsync: /usr/bin/rsync /usr/bin/X11/rsync
root@Atlas:/mnt/disk2/bittorrent# ls -l /usr/bin/rsync
-rwxr-xr-x 1 root root 276688 Jun 14 23:04 /usr/bin/rsync*
root@Atlas:/mnt/disk2/bittorrent# /usr/bin/rsync rsync://Tower
backups         Backups
vmware          VMWare Backups
music           Music
pub             Public Files
boot            /boot files
mnt             /mnt files
Videos          VIDEOS
bittorrent      BITTORRENT

 

Also do:

 

echo $PATH

Link to comment

This is a copy and paste from telnet (PuTTY):

 

Tower login: root
Linux 2.6.24.4-unRAID.
root@Tower:~# whereis rsync
rsync: /usr/bin/rsync /etc/rsync.conf /usr/bin/X11/rsync
root@Tower:~# ls -l /usr/bin/rsync
-rwxr-xr-x 1 root root 276688 Jul  1 19:41 /usr/bin/rsync*
root@Tower:~# /usr/bin/rsync rsync://Tower
rsync: getaddrinfo: Tower 873: Temporary failure in name resolution
rsync error: error in socket IO (code 10) at clientserver.c(104) [receiver=2.6.9]
root@Tower:~#

 

I just redid the rsyncd.conf file but am still getting:

root@Tower:/etc#  /usr/bin/rsync rsync://Tower
rsync: getaddrinfo: Tower 873: Temporary failure in name resolution
rsync error: error in socket IO (code 10) at clientserver.c(104) [receiver=2.6.9]
root@Tower:/etc#

Oops, forgot this:

root@Tower:/etc# echo $PATH
/usr/local/sbin:/usr/sbin:/sbin:./:/usr/local/bin:/usr/bin:/bin
root@Tower:/etc#

Link to comment

Do a:

cat /etc/resolv.conf

 

Chances are your name is not resolving.
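For reference, a healthy resolv.conf looks something like the below; the nameserver value is only an example (typically your router or DNS server):

# Example contents of /etc/resolv.conf
nameserver 192.168.1.1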

You can substitute the IP address for the hostname, as in the example below.

 

root@Atlas:/mnt/disk2/bittorrent# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:50:8D:9D:7B:AA
          inet addr:192.168.1.179  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:263331527 errors:0 dropped:0 overruns:0 frame:0
          TX packets:255971791 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4217610711 (3.9 GiB)  TX bytes:3314289142 (3.0 GiB)
          Interrupt:16

root@Atlas:/mnt/disk2/bittorrent# /usr/bin/rsync rsync://192.168.1.179/
backups         Backups
vmware          VMWare Backups
music           Music
pub             Public Files
boot            /boot files
mnt             /mnt files
Videos          VIDEOS
bittorrent      BITTORRENT

Link to comment

Ok, now I get this:

root@Tower:/boot/custom/etc#  /usr/bin/rsync rsync://192.168.1.122
backups         Backups

But when I use the name "Tower" I get this:

root@Tower:/boot/custom/etc#  /usr/bin/rsync rsync://Tower
rsync: getaddrinfo: Tower 873: Temporary failure in name resolution
rsync error: error in socket IO (code 10) at clientserver.c(104) [receiver=2.6.9]
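If you want the name to resolve without a DNS server, one workaround (a sketch, using the 192.168.1.122 address from your own test above) is a static hosts entry:

# Map the hostname to its IP; substitute your server's actual address.
echo "192.168.1.122 Tower" >> /etc/hosts

# Then retry the lookup by name:
/usr/bin/rsync rsync://Tower

Note that /etc/hosts lives in RAM on unRAID, so to make this survive a reboot you would add the echo line to your go file.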

Link to comment

Hi everyone,

 

I tried setting up rsync for automatic backups of my Mac prior to Leopard's Time Machine. Here are my notes:

 

Set up rsync on the server:

1. In /boot/config/go, add:

rsync --daemon --config=/boot/config/rsyncd.conf

2. Edit /boot/config/rsyncd.conf:

gid = root
uid = root
log file = /var/log/rsyncd.log

[backups]
path = /mnt/disk1/Backups
comment =
read only = no
list = yes
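With the daemon running and the [backups] module defined, the client side is just an rsync daemon URL. A minimal sketch of the Mac-side invocation (the source path is a placeholder; -a preserves times and permissions, which is what makes later runs incremental):

rsync -av /Users/yourname/Documents/ rsync://Tower/backups/Documents/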

 

Also, I ported a C++ program that does exponential backup rotation to bash, since I didn't have an easy way to compile it for unRAID:

 

#!/bin/bash

VERSION=2.20

# This is a bash version of John C. Bowman's C++ program powershift,
# distributed with rlbackup. There are two functional differences: This
# version includes the delimiter with the directory or file base, and this
# version does not retain the access and modification times of rotated files
# and directories. The original C++ code is copyright John C.  Bowman.  This
# code is copyright David Coppit.

# Syntax: powershift directory_or_file_base [n]
#
# Generate a geometric progression of backup files or directories by
# retaining, after the first n (>= 2) backups, only every second backup,
# then every fourth backup, and so on, for each successive power of 2 (except
# for intermediate files necessary to accomplish this).
#
# Paths are constructed from the first argument concatenated with a number
# generated by this program. New backups should be placed into the rotation as
# a concatenated file ending with "0".
#
# EXAMPLE #1: DIRECTORIES
#
# For the default value of n=2, successive applications of
#
# mkdir -p dir/0; powershift 'dir/'
#
# will shift dir/i to dir/i+1 for i=0,1,2,... while retaining (along with
# necessary intermediate files) only the sequence
#
# dir/1 dir/2 dir/4 dir/8 dir/16 dir/32 dir/64...
#
# EXAMPLE #2: FILES
#
# The command
#
# echo > name.log0; powershift name.log 4
#
# will retain (disregarding intermediate files)
#
# name.log1 name.log2 name.log3 name.log4 name.log6 name.log10 name.log18
# name.log34 name.log66...
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 as published by
# the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 675 Mass
# Ave, Cambridge, MA 02139, USA.

function stagger() {
let RESULT="$1 + ( $1 >> 1 )";
}

function file_name() {
local SUM;

let SUM=$1+$OFFSET;
RESULT="$FILESTEM$SUM";
}

# First arg is the message
# Second arg is a file path relative to the current directory
function fatal_error() {
local MESSAGE=$1;

if (( $# == 2 )); then
	MESSAGE="$MESSAGE "`pwd`;

	if [ -n "$2" ]; then
		MESSAGE="$MESSAGE/";
	fi

	MESSAGE="$MESSAGE$2";
fi

echo "$MESSAGE: $?" 1>&2;

exit 1;
}

function recursive_delete() {
if ! rm -rf "$1" 2>/dev/null; then
	fatal_error "Cannot remove" "$1";
fi;
}

# The original C++ version fixed up the modified time so that it didn't change
# after the rename. We can't do that in bash, unfortunately
function rename() {
mv "$1" "$2";
}

function power() {
local I=$1;
local STAGGER_I_FILENAME;
local N;
local N_FILENAME;
local I_FILENAME;

stagger $I;
file_name $RESULT;
STAGGER_I_FILENAME=$RESULT;

if [ -e $STAGGER_I_FILENAME ]; then
	let "SHIFTED=$1 << 1";
	power $SHIFTED;
	N=$RESULT;
	let "I=$N >> 1";

	stagger $I;
	file_name $RESULT;
	STAGGER_I_FILENAME=$RESULT;

	if [ -e $STAGGER_I_FILENAME ]; then
		file_name $N;
		N_FILENAME=$RESULT;

		rename $STAGGER_I_FILENAME $N_FILENAME;
	fi

	file_name $I;
	recursive_delete $RESULT;
else
	file_name $I;
	I_FILENAME=$RESULT;

	if [ -e $I_FILENAME ]; then
		rename $I_FILENAME $STAGGER_I_FILENAME;
	fi
fi

RESULT=$I;
}

function powershift() {
FILESTEM=$1;

if (( $# > 1 )); then
	N=$2;
else
	N=2;
fi

if (( $N < 2 )); then
	echo "n must be >= 2" 1>&2;
	exit 1;
fi

let OFFSET=$N-2;

file_name -$OFFSET;
OFFSET_FILENAME=$RESULT;
if [ -e $OFFSET_FILENAME ]; then
	touch $OFFSET_FILENAME; # Timestamp the latest backup;

	power 2;

	for (( I=2; I >= -$OFFSET; I-- )); do
		file_name $I;
		I_FILENAME=$RESULT;

		let "IPLUSONE=$I+1";
		file_name $IPLUSONE;
		IPLUSONE_FILENAME=$RESULT;

		if [ -e $I_FILENAME ]; then
			rename $I_FILENAME $IPLUSONE_FILENAME;
		fi
	done
fi
}

if (( $# == 0 )) || (( $# > 2 )); then
echo "Usage: $0 directory [n]" 1>&2;
exit 1;
fi;

powershift "$@"
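A usage sketch tying this to the backup module above, following EXAMPLE #1 from the header comments (the script location and paths here are hypothetical):

# Write each new backup into the "0" slot, then rotate.
BACKUPS=/mnt/disk1/Backups/mac
mkdir -p "$BACKUPS/0"
rsync -a /some/source/ "$BACKUPS/0/"
/boot/custom/bin/powershift "$BACKUPS/"   # note the trailing delimiter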

 

I hope this helps someone. I got it working, except that I was having errors during the rsync itself, which I reported. I didn't follow up, because I went with Time Machine instead; it "just works".

  • Like 1
Link to comment

Hi again.

Everything seems to work OK but...

When I use the DeltaCopy client, files backed up with it show up in Windows Explorer as hidden files.

Does anybody (WeeboTech?) have an idea what to do?

Thanks.

Here is what my rsyncd.conf looks like:

uid             = root
gid             = root
use chroot      = no
max connections = 4
pid file        = /var/run/rsyncd.pid
timeout         = 600

[backups]
    path = /mnt/disk1/backups
    comment = Backups
    read only = FALSE
    list = yes

[photo]
   path = /mnt/disk1/photo
   comment = pictures
   read only = FALSE
   list = yes

[music]
   path = /mnt/disk2
   comment = music
   read only = FALSE
   list = yes

Link to comment
  • 1 year later...

Just want to thank this topic for helping me set up rsync.

 

Also want to add:

DeltaCopy did not support Asian characters: http://www.aboutmyip.com/AboutMyXApp/DisplayFAQ.do?fid=13

 

You need to replace cygwin1.dll in the DeltaCopy folder with the one from:

http://www.okisoft.co.jp/esc/utf8-cygwin/cygwin1-dll-20-11-18.tar.bz2

 

Note: the DeltaCopy I used is the one without an installer, DeltaCopyRaw.zip.

 

 

  • Like 1
Link to comment
  • 2 months later...

Thanks for a great thread! Not as clean as I hoped, but I got things working; I would have had no luck without this thread.

 

I did have one snag.

 

When I used this in my go file I had problems.

 

# Start Rsync
installpkg /boot/packages/rsync/rsync-2.6.6-i486-1.tgz
/boot/packages/rsync/rc.d/S20-init.rsyncd

 

 

The error: "line 10: syntax error: unexpected end of file".

 

This is what my S20-init.rsyncd file looks like:

 

if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
cat <<-EOF >> /etc/inetd.conf
rsync   stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/bin/rsync --daemon
EOF
read PID < /var/run/inetd.pid
kill -1 ${PID}
fi

cp /boot/packages/rsync/rsyncd.conf /etc/rsyncd.conf

 

If I add

 

#!/bin/bash

 

at the beginning I get "Bad interpreter" as the error.
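For anyone hitting the same pair of errors: both are classic symptoms of DOS line endings. A script edited on Windows and saved to the FAT flash drive ends each line with CR+LF; the stray carriage return makes the shebang fail ("bad interpreter") and stops the EOF heredoc terminator from matching ("unexpected end of file"). A hedged fix, assuming the path from the go file above:

# Strip carriage returns in place, then re-run the script.
sed -i 's/\r$//' /boot/packages/rsync/rc.d/S20-init.rsyncd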

 

I changed my go file to the following and everything seems to work. I found the info here:

http://lime-technology.com/wordpress/?page_id=34

 

installpkg /boot/packages/rsync/rsync-2.6.6-i486-1.tgz
rsync --daemon --config=/boot/packages/rsync/rsyncd.conf

 

I'm not sure if I've crippled anything, but the steps provided in this thread got me to the point where I could test and am now backing up.

  • Like 1
Link to comment
  • 2 months later...

Okay... My eyes are glazing over after all the reading.

 

I just need an easy way to do differential copies to a folder on the unRAID server. Unfortunately my usual xcopy source target /d /e /v /c /i /g /h /r /y isn't doing the trick. All the files are getting copied every time instead of just the new and changed ones.

 

I remember seeing that unRAID has rsync. I have an rsync client (cwrsync) installed on the computer I need to copy from; I just had it for creating a local mirror of the php.net website. I have no knowledge of the inner workings and intricacies of rsync, so any help in this direction would be much appreciated.

 

Thanks!
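A starting point, assuming the daemon setup from this thread on the unRAID side and cwrsync on the Windows client (the paths and module name are examples; /cygdrive/c/Data is cwrsync's view of C:\Data). The -t flag matters: rsync's quick check compares size and modification time, so preserving times is what lets later runs skip unchanged files:

rsync -rtv /cygdrive/c/Data/ rsync://Tower/backups/Data/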

Link to comment
  • 1 month later...

Thanks to this thread, I am able to back up my main unRAID to a separate machine.

 

I have one issue, though.

 

rsync creates directories ahead of the file copying process, and my disks are rather small (250 GB/300 GB), so I end up with tons of empty folders on each drive, created before the files filled the drive and the user share automatically moved on to the next disk.

 

Is there a way to stop rsync from creating a folder before it is needed for a file, or an easy way to clean up the empty folders after the first bulk run?

 

tia

 

found this on the web:

 

find . -depth -type d -empty -exec rmdir {} \;

 

(Joe's solution - next msg - is a bit clearer)

Link to comment
  • 9 years later...

Folks, just want to make a public service announcement here for Synology Hyper Backup users (and maybe other source clients?). If your unRAID backup target share is set to use the cache drive, and your cache is smaller than the backup size, you're going to have a bad time.

 

I'm a bit of an unRAID noob, so I beat my head against the wall with this one for weeks. I tried running rsync in a Docker container. I tried spinning up a VM. The logs on both ends were pretty much useless. I was able to establish the connection and start the backup, but eventually it would always fail and the rsync daemon process would evaporate. I just happened to browse the cache drive, saw the backup stuff stacking up there, and wondered why that was. Once I turned off the cache for that share, it worked flawlessly.

 

Can anyone comment on potential workarounds to be able to utilize the cache drive in this scenario?

 

  • Like 1
Link to comment
23 minutes ago, mikegeezy said:

If your unRAID backup target share is set to use the cache drive, and your cache is smaller than the backup size, you're going to have a bad time. [...] Can anyone comment on potential workarounds to be able to utilize the cache drive in this scenario?

 

It sounds as if you have not set a Minimum Free Space value for the cache under Settings -> Global Share Settings? When the free space on the cache falls below that value, Unraid will start bypassing the cache for new files on shares set to use the cache.

  • Like 1
Link to comment
22 minutes ago, mikegeezy said:

Once I turned off cache for that share, it worked flawlessly.

If you had asked for advice sooner, this is what I would have recommended.

 

22 minutes ago, mikegeezy said:

Can anyone comment on potential workarounds to be able to utilize the cache drive in this scenario?

It's really just a case of what the hardware is capable of. There is simply no way to move stuff from the faster cache to the slower parity array as fast as you can write it to the faster cache.

 

  • Like 1
Link to comment
3 hours ago, itimpi said:

It sounds as if you have not set a Minimum Free Space value for the cache under Settings -> Global Share Settings? [...]

Yes, that's probably the case. I'll have to check when I get home. Although it seems like there's no real advantage to using the cache here.

 

3 hours ago, trurl said:

If you had asked for advice sooner, this is what I would have recommended. [...] It's really just a case of what the hardware is capable of.

 

Thanks for the feedback, fellas. I'm new to unRAID and the community; good to know it's so supportive.

Link to comment
  • 2 months later...

I have just set this up and thought I should summarize this fairly wordy and bitty thread into a step-by-step guide.

This sets up Unraid as an rsync server, for use as an rsync destination for compatible clients such as Synology Hyper Backup.

 

1. (optional)

Open the Unraid web interface and set up the new share(s) that you want to use with rsync.


2.
Open the Unraid web interface and open a new web terminal window by clicking the 6th icon from the right, at the top right of the interface (or SSH in to your Unraid box).

3.
Type or copy and paste the following, one line at a time (SHIFT + CTRL + V to paste into the Unraid web terminal):

mkdir /boot/custom
mkdir /boot/custom/etc
mkdir /boot/custom/etc/rc.d
nano /boot/custom/etc/rsyncd.conf

 

4. Type in your rsync config. As a guide, use the example below, modified from @WeeboTech's:

uid             = root
gid             = root
use chroot      = no
max connections = 4
pid file        = /var/run/rsyncd.pid
timeout         = 600


# Rsync module name (basically the rsync share name); Synology Hyper Backup
# calls this the "Backup Module". Note: rsyncd.conf only treats lines that
# begin with # as comments, so keep comments on their own lines.
[backups]
    # Unraid share location: /mnt/user/YOURSHARENAME
    # (could also be a subdirectory of a share)
    path = /mnt/user/backups
    # Module description
    comment = Backups
    read only = FALSE

# Add multiple rsync modules as required
[vmware]
    path = /mnt/user/backups/vmware
    comment = VMWare Backups
    read only = FALSE

 

5.

Press CTRL + X, then press Y, and then ENTER to save the config.

6. 

Type or copy and paste the following:

nano /boot/custom/etc/rc.d/S20-init.rsyncd

 

7.

Type or copy and paste the following: 

#!/bin/bash

if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
cat <<-EOF >> /etc/inetd.conf
rsync   stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/bin/rsync --daemon
EOF
read PID < /var/run/inetd.pid
kill -1 ${PID}
fi

cp /boot/custom/etc/rsyncd.conf /etc/rsyncd.conf

 

8. 

Press CTRL + X, then press Y, and then ENTER to save the script.

 

9.

To add your script to the go file, it's quicker to use echo to append a line to the end of the file. Type or copy and paste:

echo "bash /boot/custom/etc/rc.d/S20-init.rsyncd" >> /boot/config/go

 

10. 

Type or copy and paste the following (I am not sure if the chmod is needed; it's something I did while trying to get this to work):

chmod +x /boot/custom/etc/rc.d/S20-init.rsyncd

bash /boot/custom/etc/rc.d/S20-init.rsyncd

rsync rsync://127.0.0.1

 

11.
The last command above checks that rsync is working locally on your Unraid server. It should return the rsync modules and comments from your rsyncd.conf, like below:

root@YOURUNRAIDSERVERNAME:/# rsync rsync://127.0.0.1
backups        Backups
vmware         VMWare Backups

 

12. If the last command displays your rsync modules, then you may want to quickly check that rsync can be reached via your Unraid server's hostname or network interface IP:

rsync rsync://192.168.0.100   # replace the IP with your Unraid server's IP

Or

rsync rsync://UNRAIDSERVERNAME.local    # obviously replace with your server name ;)


End.
Now check that your rsync client connects to Unraid.
I used Synology Hyper Backup:
Created a new data backup,
Under file server, selected rsync > Next,
Changed the server type to "rsync-compatible server",
Filled in your Unraid server IP or hostname,
Transfer encryption "off"    - not sure how to get encryption to work; please post below if you know how
Port "873"
Username "root"    - I guess you could set up a second account and grant appropriate privileges using the CLI on Unraid?
Password "YOURUNRAIDROOTPASSWORD"
Backup module: your rsync module from rsyncd.conf
Directory: a subdirectory inside your rsync module / Unraid share


Hope this helps someone.

(Edited, thanks Dr_Frankenstein)

Edited by bung
Dir mistake
  • Like 9
  • Thanks 1
Link to comment
  • 2 months later...

Thanks for the tutorial, I am using this for my Synology backup destination.

 

However, you need to edit the mkdir steps, as you missed the /etc/ folder in the third line, meaning the folder doesn't exist for the S20-init.rsyncd script:

 

mkdir /boot/custom
mkdir /boot/custom/etc
mkdir /boot/custom/rc.d   ->   should be:   mkdir /boot/custom/etc/rc.d
nano /boot/custom/etc/rsyncd.conf
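Side note: mkdir -p sidesteps this class of mistake entirely by creating the whole chain in one command:

mkdir -p /boot/custom/etc/rc.d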

 

Link to comment
  • 2 months later...

 

2 hours ago, tazire said:

I'm curious about using rsync for local backups. I essentially want a cache-only share to sync to an array-only share. Would rsync be the right thing to use, or is there something else I should be looking at?

 

I believe a scheduled rsync would be good for this; it's not something I have ever set up, but it is the first thing I would try.

I would want to be sure that some form of notification is working, mainly so you know if something goes wrong and your backup isn't running, such as a disk being full.

This looks like a good place to start: https://forums.unraid.net/topic/73038-my-rsync-config-for-backing-up-my-unraid-with-e-mail-notification/
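As a starting point, a minimal sketch of such a job (the share names are hypothetical; schedule it via cron or the User Scripts plugin):

#!/bin/bash
# Mirror a cache-only share onto an array-only share, deleting files that
# no longer exist in the source, and keep a log for the notification check.
rsync -av --delete /mnt/user/cacheshare/ /mnt/user/arraybackup/ >> /var/log/cacheshare-backup.log 2>&1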


Link to comment
  • 9 months later...
  • 1 year later...

Maybe someone can help me here:

I set up an rsync server as described by @bung to serve as a target for Synology Hyper Backup.

Works great so far. But after a reboot of Unraid, when Hyper Backup wants to reconnect, it takes some time, during which one core sits at 100% and memory fills up more and more until I see an "out of memory" warning in the logs and the avahi daemon process gets killed. After that, Hyper Backup can connect and everything seems to work.

 

I'm not experienced enough with rsync to determine why the RAM fills up like this. If you need more info, please tell me :)

 

 

Edit: I might have found a solution for this.

Added

reverse lookup = no

to my rsyncd.conf.
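(That's a global daemon option; for anyone following @bung's guide above, a sketch of where it sits in rsyncd.conf, alongside the existing global settings:)

uid             = root
gid             = root
use chroot      = no
reverse lookup  = no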

 

More details here:


Edited by Jaytie
Link to comment
