unRaid to go - suitcase servers / rsync --delete issues



I have an unRaid server that sits in my basement.  We use it to back up our Photo Studio workflow images from 6 different workstations via cwRsync, and this backup to unRaid happens 3-5 times per day.  Total data backed up per day to unRaid varies from 5GB to 50GB.  Because I don't want to lose this data, I have 2 other servers that are essentially duplicates of the main server; I call them unRaid suitcases because one of them is always offsite.  Every few days, the one here turns itself on and does an overnight rsync from the main server.  Every few weeks the suitcases trade places and get swapped at the offsite location.
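For anyone curious, the workstation side is just a scheduled rsync push.  A rough sketch of what one looks like (the source path and module name here are placeholders, not my actual setup):

rsync -av /cygdrive/c/Studio/Workflow/ server1::backup/workstation1/

This assumes an rsync daemon on the server with a module called "backup"; cwRsync can also push over SSH instead.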

 

Server1 - v5b14  main server that stays here

Tower1 - v5b12a unRaid suitcase #1

Tower2 - v5b13  unRaid suitcase #2

 

This works well except for one problem: it only syncs in one direction.  If I rename a directory on Server1, it gets duplicated on Tower1 and Tower2 under both names, creating a mess and wasting storage.  I've tried to get rsync to use the --delete option but can't get it to work.  Do I have to run the --delete option from Server1?  Any suggestions?  It seems to be doing something now.
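For anyone unfamiliar with it, --delete makes rsync remove files on the receiving side that no longer exist on the sending side, so a renamed directory should disappear under its old name instead of piling up.  A made-up illustration using the same mounts as my scripts below:

rsync -av --delete /mnt/s1disk1/ /mnt/disk1/

If "Job_Old" was renamed to "Job_New" on Server1, this pass copies Job_New and removes the stale Job_Old copy from the suitcase disk ("Job_Old" / "Job_New" are just example names).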

 

Each suitcase has a go script that looks like the following (this example is Tower2's, hence the t2 log names):

 

# create mount points and NFS-mount Server1's data disks
mkdir /mnt/s1disk1
mkdir /mnt/s1disk2
mount -t nfs server1:/mnt/disk1/ /mnt/s1disk1
mount -t nfs server1:/mnt/disk2/ /mnt/s1disk2

echo "mounts to server1 set up on tower2:"

# pull each of Server1's disks onto the matching local disk and log the results
rsync -av --stats --progress /mnt/s1disk1/ /mnt/disk1/  >> /boot/logs/cronlogs/t2disk1.log
rsync -av --stats --progress /mnt/s1disk2/ /mnt/disk2/  >> /boot/logs/cronlogs/t2disk2.log

echo "backup completed to tower2 - waiting an hour then powering down"

# leave the box up for an hour, then shut it down
sleep 3600
powerdown
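
One tweak I've been thinking about (not in the script above, just a sketch): unmounting the NFS shares once the syncs finish, so nothing is still holding Server1's exports when the box powers down:

umount /mnt/s1disk1
umount /mnt/s1disk2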

 

When I want to delete, I do the following:

 

First, do a dry run to see what it will be trashing:

rsync -av --stats --progress --delete --dry-run /mnt/s1disk1/ /mnt/disk1/  >> /boot/logs/cronlogs/t2disk1delete.log

After examining the delete log and not finding any issues, run it for real:

rsync -av --stats --progress --delete /mnt/s1disk1/ /mnt/disk1/  >> /boot/logs/cronlogs/t2disk1delete.log
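
A quick way to check the dry-run results, since rsync prints a "deleting <path>" line for every file it would remove:

grep '^deleting ' /boot/logs/cronlogs/t2disk1delete.log

Keep in mind the log is appended to (>>), so clear it or use a fresh file if you only want to see the latest run.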


Why is it that every time I get concerned about something not working, it works fine immediately after I post?  I have now tested this system and completely cleaned up one of my backup servers to match the main server (Server1).

 

Not only is it working, but performance has not been an issue.  I'm not sure why I was having problems in the beginning.  It will go through a full 2TB drive and verify the sync in about 10 minutes, provided no files need to be copied.

 

I recommend running just the backup script without the --delete option for all unattended runs, and using the --delete option only when you are paying full attention, and only after first running it in --dry-run mode to verify what it will do.
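
If you want to script that habit, here is a rough sketch (the script name and layout are just my own convention, nothing unRaid-specific):

#!/bin/bash
# delete_sync.sh - dry-run first, show what would go, then ask before the real pass
SRC=/mnt/s1disk1/
DST=/mnt/disk1/
LOG=/boot/logs/cronlogs/t2disk1delete.log

# dry run: log everything, echo just the would-be deletions to the console
rsync -av --stats --delete --dry-run "$SRC" "$DST" | tee -a "$LOG" | grep '^deleting '

read -p "Run the --delete pass for real? (y/n) " answer
if [ "$answer" = "y" ]; then
  rsync -av --stats --delete "$SRC" "$DST" >> "$LOG"
fi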
