
Try these

 

root@Atlas:/mnt/disk2/bittorrent# whereis rsync

rsync: /usr/bin/rsync /usr/bin/X11/rsync

root@Atlas:/mnt/disk2/bittorrent# ls -l /usr/bin/rsync

-rwxr-xr-x 1 root root 276688 Jun 14 23:04 /usr/bin/rsync*

root@Atlas:/mnt/disk2/bittorrent# /usr/bin/rsync rsync://Tower

backups        Backups

vmware          VMWare Backups

music          Music

pub            Public Files

boot            /boot files

mnt            /mnt files

Videos          VIDEOS

bittorrent      BITTORRENT

 

Also do

 

echo $PATH


This is a copy and paste from telnet (PuTTY):

 

Tower login: root
Linux 2.6.24.4-unRAID.
root@Tower:~# whereis rsync
rsync: /usr/bin/rsync /etc/rsync.conf /usr/bin/X11/rsync
root@Tower:~# ls -l /usr/bin/rsync
-rwxr-xr-x 1 root root 276688 Jul  1 19:41 /usr/bin/rsync*
root@Tower:~# /usr/bin/rsync rsync://Tower
rsync: getaddrinfo: Tower 873: Temporary failure in name resolution
rsync error: error in socket IO (code 10) at clientserver.c(104) [receiver=2.6.9]
root@Tower:~#

 

I just redid the rsyncd.conf file, but I'm still getting:

root@Tower:/etc#  /usr/bin/rsync rsync://Tower
rsync: getaddrinfo: Tower 873: Temporary failure in name resolution
rsync error: error in socket IO (code 10) at clientserver.c(104) [receiver=2.6.9]
root@Tower:/etc#

Oops, forgot this:

root@Tower:/etc# echo $PATH
/usr/local/sbin:/usr/sbin:/sbin:./:/usr/local/bin:/usr/bin:/bin
root@Tower:/etc#


do a

cat /etc/resolv.conf

 

Chances are your name is not resolving.

You can substitute the IP address for the hostname, as in the example below:

 

root@Atlas:/mnt/disk2/bittorrent# ifconfig eth0

eth0      Link encap:Ethernet  HWaddr 00:50:8D:9D:7B:AA 

          inet addr:192.168.1.179  Bcast:192.168.1.255  Mask:255.255.255.0

          UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:263331527 errors:0 dropped:0 overruns:0 frame:0

          TX packets:255971791 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:4217610711 (3.9 GiB)  TX bytes:3314289142 (3.0 GiB)

          Interrupt:16

 

root@Atlas:/mnt/disk2/bittorrent# /usr/bin/rsync rsync://192.168.1.179/

backups        Backups

vmware          VMWare Backups

music          Music

pub            Public Files

boot            /boot files

mnt            /mnt files

Videos          VIDEOS

bittorrent      BITTORRENT
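If you want the hostname to keep working without DNS, one option is a static entry in the client's /etc/hosts. The IP and name below are taken from the example output above, so adjust them for your network; the sketch stages the line in a scratch copy so it is safe to run anywhere, but on the real client you would append the same line to /etc/hosts itself (as root):

```shell
# Static hosts entry so the server name resolves without DNS.
# IP address and name taken from the example above; adjust for your network.
# Staged against a scratch copy of /etc/hosts; on the real client, append
# the same line to /etc/hosts directly.
HOSTS=$(mktemp)
cp /etc/hosts "$HOSTS"
echo '192.168.1.179  Tower' >> "$HOSTS"
tail -1 "$HOSTS"
```

After the real /etc/hosts has the entry, rsync://Tower should resolve on that client.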


Ok, now I get this:

root@Tower:/boot/custom/etc#  /usr/bin/rsync rsync://192.168.1.122
backups         Backups

but when I use a name "Tower" I get this:

root@Tower:/boot/custom/etc#  /usr/bin/rsync rsync://Tower
rsync: getaddrinfo: Tower 873: Temporary failure in name resolution
rsync error: error in socket IO (code 10) at clientserver.c(104) [receiver=2.6.9]


OK.

Well, we are getting somewhere.

Now, in the DeltaCopy client, when I use the IP address instead of the host name (Tower), IT WORKS.


Hi everyone,

 

I tried setting up rsync for automatic backups of my Mac prior to Leopard's Time Machine. Here are my notes:

 

Set up rsync on the server:

1. In /boot/config/go, add:

rsync --daemon --config=/boot/config/rsyncd.conf

2. Edit /boot/config/rsyncd.conf:

gid = root
uid = root
log file = /var/log/rsyncd.log

[backups]
path = /mnt/disk1/Backups
comment =
read only = no
list = yes

 

Also, I ported a C++ program that does exponential backup rotation to bash, since I didn't have an easy way to compile it for unRAID:

 

#!/bin/bash

VERSION=2.20

# This is a bash version of John C. Bowman's C++ program powershift,
# distributed with rlbackup. There are two functional differences: This
# version includes the delimiter with the directory or file base, and this
# version does not retain the access and modification times of rotated files
# and directories. The original C++ code is copyright John C.  Bowman.  This
# code is copyright David Coppit.

# Syntax: powershift directory_or_file_base [n]
#
# Generate a geometric progression of backup files or directories by
# retaining, after the first n (>= 2) backups, only every second backup,
# then every fourth backup, and so on, for each successive power of 2 (except
# for intermediate files necessary to accomplish this).
#
# Paths are constructed from the first argument concatenated with a number
# generated by this program. New backups should be placed into the rotation as
# a concatenated file ending with "0".
#
# EXAMPLE #1: DIRECTORIES
#
# For the default value of n=2, successive applications of
#
# mkdir -p dir/0; powershift 'dir/'
#
# will shift dir/i to dir/i+1 for i=0,1,2,... while retaining (along with
# necessary intermediate files) only the sequence
#
# dir/1 dir/2 dir/4 dir/8 dir/16 dir/32 dir/64...
#
# EXAMPLE #2: FILES
#
# The command
#
# echo > name.log0; powershift name.log 4
#
# will retain (disregarding intermediate files)
#
# name.log1 name.log2 name.log3 name.log4 name.log6 name.log10 name.log18
# name.log34 name.log66...
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 as published by
# the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 675 Mass
# Ave, Cambridge, MA 02139, USA.

function stagger() {
let RESULT="$1 + ( $1 >> 1 )";
}

function file_name() {
local SUM;

let SUM=$1+$OFFSET;
RESULT="$FILESTEM$SUM";
}

# First arg is the message
# Second arg is a file path relative to the current directory
function fatal_error() {
local MESSAGE=$1;

if (( $# == 2 )); then
	MESSAGE="$MESSAGE "`pwd`;

	if [ -n "$2" ]; then
		MESSAGE="$MESSAGE/";
	fi

	MESSAGE="$MESSAGE$2";
fi

echo "$MESSAGE: $?" 1>&2;

exit 1;
}

function recursive_delete() {
if ! rm -rf "$1" 2>/dev/null; then
	fatal_error "Cannot remove" "$1";
fi;
}

# The original C++ version fixed up the modified time so that it didn't change
# after the rename. We can't do that in bash, unfortunately
function rename() {
mv "$1" "$2";
}

function power() {
local I=$1;
local STAGGER_I_FILENAME;
local N;
local N_FILENAME;
local I_FILENAME;

stagger $I;
file_name $RESULT;
STAGGER_I_FILENAME=$RESULT;

if [ -e "$STAGGER_I_FILENAME" ]; then
	let "SHIFTED=$1 << 1";
	power $SHIFTED;
	N=$RESULT;
	let "I=$N >> 1";

	stagger $I;
	file_name $RESULT;
	STAGGER_I_FILENAME=$RESULT;

	if [ -e "$STAGGER_I_FILENAME" ]; then
		file_name $N;
		N_FILENAME=$RESULT;

		rename "$STAGGER_I_FILENAME" "$N_FILENAME";
	fi

	file_name $I;
	recursive_delete "$RESULT";
else
	file_name $I;
	I_FILENAME=$RESULT;

	if [ -e "$I_FILENAME" ]; then
		rename "$I_FILENAME" "$STAGGER_I_FILENAME";
	fi
fi

RESULT=$I;
}

function powershift() {
FILESTEM=$1;

if (( $# > 1 )); then
	N=$2;
else
	N=2;
fi

if (( $N < 2 )); then
	echo "n must be >= 2" 1>&2;
	exit 1;
fi

let OFFSET=$N-2;

file_name -$OFFSET;
OFFSET_FILENAME=$RESULT;
if [ -e "$OFFSET_FILENAME" ]; then
	touch "$OFFSET_FILENAME"; # Timestamp the latest backup

	power 2;

	for (( I=2; I >= -$OFFSET; I-- )); do
		file_name $I;
		I_FILENAME=$RESULT;

		let "IPLUSONE=$I+1";
		file_name $IPLUSONE;
		IPLUSONE_FILENAME=$RESULT;

		if [ -e "$I_FILENAME" ]; then
			rename "$I_FILENAME" "$IPLUSONE_FILENAME";
		fi
	done
fi
}

if (( $# == 0 )) || (( $# > 2 )); then
echo "Usage: $0 directory [n]" 1>&2;
exit 1;
fi;

powershift "$@"

 

I hope this helps someone. I got it mostly working, but I was having errors during the rsync, which I reported. I didn't follow up, because I went with Time Machine; it "just works".


Hi again.

Everything seems to work OK but...

When I use the DeltaCopy client, the files backed up with it show up in Windows Explorer as hidden files.

Anybody (WeeboTech?) have an idea what to do?

Thanks.

Here is what my rsyncd.conf looks like:

uid             = root
gid             = root
use chroot      = no
max connections = 4
pid file        = /var/run/rsyncd.pid
timeout         = 600

[backups]
    path = /mnt/disk1/backups
    comment = Backups
    read only = FALSE
    list = yes

[photo]
   path = /mnt/disk1/photo
   comment = pictures
   read only = FALSE
   list = yes

[music]
   path = /mnt/disk2
   comment = music
   read only = FALSE
   list = yes


Just want to thank this topic for helping me set up rsync.

 

Also want to add:

DeltaCopy did not support Asian characters: http://www.aboutmyip.com/AboutMyXApp/DisplayFAQ.do?fid=13

 

You need to replace cygwin1.dll in the DeltaCopy folder with the one from:

http://www.okisoft.co.jp/esc/utf8-cygwin/cygwin1-dll-20-11-18.tar.bz2

 

Note: the DeltaCopy I used is the one without an installer, DeltaCopyRaw.zip.

 

 


Also, I ported a C++ program that does exponential backup rotation to bash, since I didn't have an easy way to compile it for unRAID:

 

I can compile it if need be.


Thanks for a great thread!  Not as clean as I hoped, but I got things working. I would have had no luck without this thread.

 

I did have one snag.

 

When I used this in my go file I had problems.

 

# Start Rsync
installpkg /boot/packages/rsync/rsync-2.6.6-i486-1.tgz
/boot/packages/rsync/rc.d/S20-init.rsyncd

 

 

The error: "line 10: syntax error: unexpected end of file".

 

This is what my S20-init.rsyncd file looks like:

 

if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
cat <<-EOF >> /etc/inetd.conf
rsync   stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/bin/rsync --daemon
EOF
read PID < /var/run/inetd.pid
kill -1 ${PID}
fi

cp /boot/packages/rsync/rsyncd.conf /etc/rsyncd.conf

 

If I add

 

#!/bin/bash

 

at the beginning, I get "bad interpreter" as the error.
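A likely cause of the "bad interpreter" error is DOS (CRLF) line endings, which are easy to pick up when editing files on Windows for the FAT flash drive: the kernel then looks for an interpreter named /bin/bash followed by an invisible carriage return. A runnable reproduction and fix:

```shell
# Reproduce: a script saved with Windows (CRLF) line endings.
SCRIPT=$(mktemp)
printf '#!/bin/bash\r\necho hello\r\n' > "$SCRIPT"

# The carriage returns are invisible in most editors; od makes them visible.
od -c "$SCRIPT" | head -2

# Strip them (dos2unix/fromdos would also work), then the script runs cleanly.
sed -i 's/\r$//' "$SCRIPT"
OUT=$(bash "$SCRIPT")
echo "$OUT"
```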

 

I changed my go file to the following and everything seems to work. I found the info here:

http://lime-technology.com/wordpress/?page_id=34

 

installpkg /boot/packages/rsync/rsync-2.6.6-i486-1.tgz
rsync --daemon --config=/boot/packages/rsync/rsyncd.conf

 

I'm not sure if I've crippled anything, but the various steps provided in this thread got me to the point where I could test, and now backing up is something I can do.


Okay... My eyes are glazing over after all the reading.

 

I just need an easy way to do differential copies to a folder on the unRAID server. Unfortunately, my usual xcopy source target /d /e /v /c /i /g /h /r /y isn't doing the trick. All the files are getting copied every time instead of just the new and changed ones.

 

I remember seeing that unRAID has rsync. I have an rsync client (cwRsync) installed on the computer I need to copy from; I just had it for creating a local mirror of the php.net website. I have no knowledge of the inner workings and intricacies of rsync, so any help in this direction would be much appreciated.

 

Thanks!
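For what it's worth, rsync's default behavior is exactly the differential copy described above: a second run only transfers new and changed files. A self-contained sketch with scratch paths (with cwRsync on Windows, the source would look like /cygdrive/c/somedir/):

```shell
# Scratch source and destination standing in for the real machines.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo one > "$SRC/a.txt"

# First run copies everything; -a preserves times so later runs can compare.
rsync -a "$SRC/" "$DST/"

# Second run with -i (itemize changes) prints nothing: no new or changed
# files, so nothing is transferred -- the behavior xcopy wasn't providing.
SECOND=$(rsync -ai "$SRC/" "$DST/")
echo "itemized changes on second run: '$SECOND'"
```

Against the server, the destination would be an rsync:// URL or module, as shown earlier in the thread.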


Thanks to this thread, I am able to back up my main unRAID to a separate machine.

 

I have one issue, though.

 

rsync creates directories ahead of the file-copying process, and my disks are rather small (250 GB/300 GB), so I end up with tons of empty folders on each drive that were created before the files filled up the drive and the user share automatically moved on to the next disk.

 

Is there a way to stop rsync from creating a folder before it is needed for a file, or an easy way to clean up the empty folders after the first bulk run?

 

tia

 

Found this on the web:

 

find . -depth -type d -empty -exec rmdir {} \;
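A scratch demonstration of that cleanup one-liner, showing that -depth also takes out nested empty directories (children are removed before their parents are tested):

```shell
# Scratch tree: one directory with a file, one nested chain of empty dirs.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/keep" "$DEMO/empty1/empty2"
echo data > "$DEMO/keep/file"

# -depth processes children first, so empty1 becomes empty once empty2 is
# removed, and is then removed itself; keep survives because it has a file.
(cd "$DEMO" && find . -depth -type d -empty -exec rmdir {} \;)

ls "$DEMO"
```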

 

(Joe's solution, in the next message, is a bit clearer.)


Folks, just want to make a public service announcement here for Synology Hyper Backup users (and maybe other source clients?): if your unRAID backup target share is set to use the cache drive, and your cache is smaller than the backup size, you're going to have a bad time.

 

I'm a bit of an unRAID noob, so I beat my head against the wall with this one for weeks. I tried running rsync in a Docker container. I tried spinning up a VM. The logs on both ends were pretty much useless. I was able to establish the connection and start the backup, but eventually it would always fail and the rsync daemon process would evaporate. I just happened to browse the cache drive, saw the backup stuff stacking up there, and wondered why. Once I turned off the cache for that share, it worked flawlessly.

 

Can anyone comment on potential workarounds to be able to utilize the cache drive in this scenario?

 

23 minutes ago, mikegeezy said:

If your unRAID backup target share is set to use the cache drive, and your cache is smaller than the backup size, you're going to have a bad time.

[...]

Can anyone comment on potential workarounds to be able to utilize the cache drive in this scenario?

 

It sounds as if you have not set a Minimum Free Space value for the cache under Settings -> Global Share Settings. When the free space on the cache falls below that value, Unraid will start bypassing the cache for new files on shares set to use it.

22 minutes ago, mikegeezy said:

Once I turned off cache for that share, it worked flawlessly.

If you had asked for advice sooner, this is what I would have recommended.

 

22 minutes ago, mikegeezy said:

Can anyone comment on potential workarounds to be able to utilize the cache drive in this scenario?

It's really just a case of what the hardware is capable of. There is simply no way to move stuff from the faster cache to the slower parity array as fast as you can write it to the faster cache.

 

3 hours ago, itimpi said:

It sounds as if you have not set a Minimum Free Space value for the cache under Settings -> Global Share Settings. When the free space on the cache falls below that value, Unraid will start bypassing the cache for new files on shares set to use it.

Yes, that's probably the case.  I'll have to check when I get home.  Although, it seems like there's not a real advantage to using the cache here.

 

3 hours ago, trurl said:

If you had asked for advice sooner, this is what I would have recommended.

 

It's really just a case of what the hardware is capable of. There is simply no way to move stuff from the faster cache to the slower parity array as fast as you can write it to the faster cache.

 

Thanks for the feedback, fellas. I'm new to unRAID and the community; it's good to know it's so supportive.

