unRAID as an rsync target/server



My main storage solution is a Synology DiskStation, which has a quite nice backup utility called "Hyper Backup". It supports rsync and WebDAV as backup targets (plus some custom Synology ones).

I need to get my unRAID server running as a backup/failover so I can keep working if the DiskStation fails for some reason (I had a bad PSU once, and it took a week to replace the unit, so better safe than sorry).

 

I have read a lot on this and other unRAID forums, but I am still absolutely clueless about how to set up an rsync server on unRAID.

The goal is to connect to the unRAID server (with a dedicated share for backups) and run the "Hyper Backup" sync utility from my DiskStation.

 

[Screenshot: the "Hyper Backup" interface]

To be honest, I am not sure if unRAID ships with rsync or if I have to add it, but after trying to use the unRAID server as a target, Hyper Backup says there is no rsync server running.
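
For what it's worth, I assume this can be checked from a terminal on the unRAID box with something like this (untested on my end):

# Is the rsync binary there at all, and which version?
which rsync
rsync --version

# Is an rsync daemon listening on the default port 873? (apparently not, by default)
netstat -tlnp | grep 873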

 

The only thing I found regarding this issue is this article.

 

I hope you guys can help me out with this =)

If this works, I'll summarize it into a tutorial for others with similar issues.

 

Thanks in advance

SyntaxX_3rroR

 

PS: The WebDAV route would be a bit messy (AFAIK): setting up ownCloud as a Docker container with /mnt/user/Syno-BKP as its main storage, then using the "Cloud Sync" utility on the Synology to sync a specified folder on the NAS to the ownCloud instance running on unRAID.

 

Edit: It would be a nice addition for future updates to integrate these features and their configuration into the web GUI. It is time for an alternative to Synology's DSM (besides Xpenology).

Edited by SyntaxX_3rroR
Link to comment

rsync is included in unRAID.  I am using rsync to backup between two unRAID servers via ssh.  It's all automated and even powers the backup server on/off via IPMI when it's time for a backup.  Even though my backups are between two unRAID servers, the same rsync principles/syntax should apply between Synology and unRAID.
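
Stripped of all my automation, the core of it is a single command per share, something like this (hostname and share are placeholders, adjust to your setup):

# Push new and changed files from a local share to the backup server over ssh
rsync -avu --progress -e ssh /mnt/user/Pictures/ root@backupserver:/mnt/user/Pictures/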

 

Here's the discussion that got me started (with plenty of my own comments as I worked through it).

 

 

Link to comment

Both my servers are on the same LAN.  The principles and rsync commands are the same regardless of location of servers.  I just did it via ssh because my server root logins are password protected and I wanted this all to run without user input (password prompt).  I also plan to eventually move the backup server offsite and I want it to be secure over the Internet when that happens.

 

If you have two always-on servers,  want to run the backup manually, or have no password on the login to the unRAID server, it will be less complicated.  Since my backup runs unattended once a week at 1am on Monday, I have all the email logic in my script as well to tell me what happened.

 

It took me a few days to work through this as I had to learn a lot about how rsync and ssh worked.  Most of my issues were ssh related, so, if that does not matter to you, this is really not that difficult to script.
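
For reference, the ssh legwork is essentially just key-based login for root, roughly like this sketch (assuming default key paths):

# On the main server: generate a key pair with no passphrase
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""

# Install the public key on the backup server (enter its root password once)
ssh-copy-id root@backupserver

# Test: this should log in without a password prompt
ssh -i /root/.ssh/id_rsa root@backupserver uptime

One unRAID-specific wrinkle: /root lives in RAM, so (at least on older unRAID versions) the keys have to be restored from the flash drive at boot, e.g. via the go file.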

Link to comment

Thanks. That means I can remove the email digest from the script?

The goal is unattended, secure backup in one direction only. The DiskStation keeps the master role, and the unRAID server should just have read rights so nothing can accidentally be destroyed/deleted/synced in the wrong direction. The unRAID server will only be on 24/7 when I am working on my NAS. When I don't need it, I'll turn it off, so something like a cron job might be possible. Guess I have to read through rsync too ...
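
If I end up scheduling it, I guess a crontab line (or the User Scripts plugin) would be enough; the script path here is hypothetical:

# Hypothetical: trigger the one-way backup script every Monday at 1am
0 1 * * 1 /boot/scripts/syno_backup.sh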

Link to comment

I have two servers called MediaNAS (main server) and BackupNAS (backup server).  I do a one-way backup of all new/changed files from MediaNAS to BackupNAS.

 

My script does the following:

 

1. power on the backup server via IPMI

2. setup email headers

3. backup shares from MediaNAS to BackupNAS

4. record everything in log files

5. email me the backup summary and logs

6. power down the backup server via IPMI

 

You can get rid of the email stuff and the IPMI stuff (although the logs are nice even if you don't email them to yourself) and modify the rsync lines to your liking (eliminating ssh if you like) as a starting point.

 

Here is my modified script which eliminates all of the original poster's "check to see if the server is up" logic and backs up share to share instead of from disk to disk.

 

#!/bin/bash
#description=This script backs up shares on MediaNAS to BackupNAS
#arrayStarted=true

echo "Starting Sync to BackupNAS"
echo "Starting Sync $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

# Power On BackupNAS
ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxx chassis power on

# Wait for 3 minutes
echo "Waiting for BackupNAS to power up..."
sleep 3m

echo "Host is up"
sleep 10s

# Set up email header
echo To: [email protected] >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo From: [email protected] >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo Subject: MediaNAS to BackupNAS rsync summary >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo >> /boot/logs/cronlogs/BackupNAS_Summary.log


# Backup Pictures Share
echo "Copying new files to Pictures share =====  $(date)"
echo "Copying new files to Pictures share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo "Copying new files to Pictures share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Pictures.log

rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Pictures/ [email protected]:/mnt/user/Pictures/ >> /boot/logs/cronlogs/BackupNAS_Pictures.log


# Backup Videos Share
echo "Copying new files to Videos share =====  $(date)"
echo "Copying new files to Videos share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo "Copying new files to Videos share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Videos.log

rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Videos/ [email protected]:/mnt/user/Videos/ >> /boot/logs/cronlogs/BackupNAS_Videos.log


# Backup Movies Share
echo "Copying new files to Movies share =====  $(date)"
echo "Copying new files to Movies share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo "Copying new files to Movies share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Movies.log

rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Movies/ [email protected]:/mnt/user/Movies/ >> /boot/logs/cronlogs/BackupNAS_Movies.log


# Backup TVShows Share
echo "Copying new files to TVShows share =====  $(date)"
echo "Copying new files to TVShows share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo "Copying new files to TVShows share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_TVShows.log

rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/TVShows/ [email protected]:/mnt/user/TVShows/ >> /boot/logs/cronlogs/BackupNAS_TVShows.log


# Backup Documents Share
echo "Copying new files to Documents share =====  $(date)"
echo "Copying new files to Documents share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo "Copying new files to Documents share =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Documents.log

rsync -avu --stats --numeric-ids --progress -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x" /mnt/user/Documents/ [email protected]:/mnt/user/Documents/ >> /boot/logs/cronlogs/BackupNAS_Documents.log

echo "moving to end =====  $(date)"
echo "moving to end =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log


# Add in the summaries: tac reverses each share log so sed can capture the
# rsync stats block from the end (everything up to "Number of files:"),
# then the second tac restores the original line order
cd /boot/logs/cronlogs/
echo ===== > Pictures.log
echo ===== > Videos.log
echo ===== > Movies.log
echo ===== > TVShows.log
echo ===== > Documents.log
echo Pictures >> Pictures.log
echo Videos >> Videos.log
echo Movies >> Movies.log
echo TVShows >> TVShows.log
echo Documents >> Documents.log
tac BackupNAS_Pictures.log | sed '/^Number of files: /q' | tac >> Pictures.log
tac BackupNAS_Videos.log | sed '/^Number of files: /q' | tac >> Videos.log
tac BackupNAS_Movies.log | sed '/^Number of files: /q' | tac >> Movies.log
tac BackupNAS_TVShows.log | sed '/^Number of files: /q' | tac >> TVShows.log
tac BackupNAS_Documents.log | sed '/^Number of files: /q' | tac >> Documents.log

# Now add all the other logs to the end of this email summary
cat BackupNAS_Summary.log Pictures.log Videos.log Movies.log TVShows.log Documents.log > allshares.log
zip BackupNAS BackupNAS_*.log

# Send email of summary of results, then archive the zipped logs
ssmtp [email protected] < /boot/logs/cronlogs/allshares.log
mv BackupNAS.zip "$(date +%Y%m%d_%H%M)_BackupNAS.zip"
rm *.log

# Power off BackupNAS gracefully
sleep 30s
ipmitool -I lan -H 192.168.1.16 -U admin -P xxxxxxx chassis power soft
	

 

Edited by Hoopster
Link to comment
2 hours ago, Hoopster said:

You can get rid of the email stuff and the IPMI stuff (although the logs are nice even if you don't email them to yourself) and modify the rsync lines to your liking (eliminating ssh if you like) as a starting point.

 

Here is my modified script which eliminates all of the original poster's "check to see if the server is up" logic and backs up share to share instead of from disk to disk.

Nice cleanup of my original ugly-looking script.  Interesting that you are able to eliminate the IPMI wakeup checks.  Even locally, I would sometimes get a server started, but not available.  Since it wasn't able to rsync, I would get lots of errors.  A subsequent power cycle always seemed to fix things.

Edited by tr0910
Link to comment
8 minutes ago, tr0910 said:

Nice cleanup of my original ugly-looking script.  Interesting that you are able to eliminate the IPMI wakeup checks.  Even locally, I would sometimes get a server started, but not available.  Since it wasn't able to rsync, I would get lots of errors.  A subsequent power cycle always seemed to fix things.

 

So far the server has never failed to power on and start the unRAID array through IPMI.  I understand why your "server alive" checks were in the script, and once I move mine offsite in the future, I will likely do that as well.  For now, and so I would have less to troubleshoot in the beginning, I took it out.  I am only waiting for 3 minutes after the IPMI call and assuming the server is up.  It almost always is within 2 minutes, and the script waits for three.  Fingers crossed that it stays this way :)  Thanks for your original work on this.
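
If I ever need a check again, I would probably swap the fixed sleep for a simple wait loop, something like this untested sketch:

# Sketch: wait up to 5 minutes for BackupNAS to answer pings before rsyncing
for i in $(seq 1 30); do
    if ping -c 1 -W 2 192.168.1.16 > /dev/null 2>&1; then
        echo "BackupNAS is up"
        break
    fi
    sleep 10s
done

A ping reply does not guarantee the array is started, so a stricter check would be something ssh-based, e.g. ssh root@192.168.1.16 true.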

Link to comment
  • 2 years later...

This is amazing @Hoopster, thanks for putting this up.

 

I've been looking at improving my backup process between servers, and integrating pfSense backups to unRAID.

 

Along with the CA User Scripts plugin, I'm going to give this a try.

 

I realise this is quite a while after your post; have you added anything to this, or is it still working as-is?

Link to comment
On 3/10/2020 at 1:21 PM, KptnKMan said:

have you added anything to this, or is it still working as-is?

It's been running for over two years as-is.  I have the script automated through a User Scripts cron job and it faithfully does its thing once a week unattended.  So far, it has never failed.

 

Since it only copies new/changed files to the backup server, the script does not account for any files on the source server that have been deleted (they still exist on the backup server).  I have basically the same script with the --delete parameter on the rsync command lines that I run manually from time to time once I am sure I don't need to recover anything from the backup server.  You might want to try --dry-run as well to make sure it behaves as you expect before committing to deleting files.
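
For example, the Pictures line from the script above becomes the following when run manually (preview first):

# Preview what --delete would remove on the backup server; no changes are made
rsync -avu --delete --dry-run -e "ssh -i /root/.ssh/id_rsa" /mnt/user/Pictures/ [email protected]:/mnt/user/Pictures/

Dropping --dry-run then performs the actual deletions.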

Edited by Hoopster
Link to comment

That's great advice, thanks.

I'm also using the CA User Scripts plugin for this, so good to know.

 

I've been meaning to get a more robust solution like this in place over my current simple copy.

 

I'm no slouch at shell scripts, so I'm going to see if I can write some scripts to accomplish these tasks.

 

Thanks again.

Link to comment
  • 2 years later...

So I'm in the same position as the OP in that I have Synology's Hyper Backup already configured with the data that needs backing up. It was going to an rsync server running on a QNAP NAS, but I am taking that offline and replacing it with the unRAID server.

 

I'd still really like to know how to configure unRAID to be an rsync target. That way I can continue to use the Synology UI to pick the source data and 'push' it to the unRAID rsync server rather than 'pulling' the data. I'd also like to avoid creating/modifying scripts any time I want to add another source.

Link to comment
1 hour ago, aglyons said:

I came across this doc that walks through creating an rsync daemon. But being new to unRAID, I don't know where to create the conf file, as /etc is hidden AFAIK.

 

https://www.atlantic.net/vps-hosting/how-to-setup-rsync-daemon-linux-server/

 

I know that rsync is already installed in unRAID, so it's just a question of creating the conf file and then spinning up the daemon.

 

What I do (and I am sure there are many ways of doing it) is create the rsyncd.conf file and place it in the config folder on the USB stick.

 

Mine looks like this...

 

uid             = nobody
gid             = users
use chroot      = no
max connections = 20
pid file        = /var/run/rsyncd.pid
timeout         = 3600
log file        = /var/log/rsyncd.log
incoming chmod	= Dug=rwx,Do=rx,Fug=rw,Fo=r

[mnt]
    path = /mnt
    comment = /mnt files
    read only = FALSE

 

Then, in the go file in the config folder (which is what unRAID runs when starting up), I have the following...

 

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

#rsync daemon
rsync --daemon --config=/boot/config/rsyncd.conf

 

So when the rsync daemon is invoked it starts up using the parameters in the file on the stick.  There is no need to copy the file anywhere else.
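
As a quick sanity check from any client that can reach the server (my box is called tower here; the hostname and share path are just placeholders):

# List the contents of the [mnt] module...
rsync rsync://tower/mnt/

# ...and push a test file into a share beneath it
rsync -av testfile.txt tower::mnt/user/Syno-BKP/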


Note: This is only as secure as your local network.  A bad actor on your network could use this to delete files etc. on the target machine.  In my case, that machine is powered off 99.9% of the time, and only powered on when I occasionally need to access it.  Other users may have better suggestions.

  

Link to comment

Awesome, thx s80_UK,

 

This gets me started.

 

The article I posted did have a section at the end on securing the daemon.

 

[files]
path = /home/public_rsync
comment = RSYNC FILES
read only = true
timeout = 300
auth users = rsync1,rsync2
secrets file = /etc/rsyncd.secrets

 

and the secrets file holds just username:password combos, one per line. Then secure the password file with chmod 600 /etc/rsyncd.secrets

 

rsync1:9$AZv2%5D29S740k
rsync2:Xyb#vbfUQR0og0$6
rsync3:VU&A1We5DEa8M6^8
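
If I'm reading the rsync docs right, the client side then supplies just the password via --password-file so it can run unattended; something like this (paths hypothetical, and note that the [files] module above is read only = true, so pushing into it would need read only = false):

# Client-side password file holds only the password (no username)
echo '9$AZv2%5D29S740k' > /root/rsync1.pass
chmod 600 /root/rsync1.pass

# Pull from the protected [files] module as user rsync1
rsync -av --password-file=/root/rsync1.pass rsync1@tower::files/ /tmp/restore/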

 

Link to comment
19 hours ago, aglyons said:

Curious, I have user scripts installed. Could that be used for this in some way?

 

To be honest, I don't think you need to use it for this unless you want to be able to easily turn the daemon on and off.  Then you might perhaps create a couple of scripts to do that, as sketched below.
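
Something like this, assuming the conf file lives at /boot/config/rsyncd.conf as above:

#!/bin/bash
# start_rsyncd.sh - start the daemon with the config from the flash drive
rsync --daemon --config=/boot/config/rsyncd.conf

#!/bin/bash
# stop_rsyncd.sh - stop the daemon via the pid file set in the conf
kill "$(cat /var/run/rsyncd.pid)"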

 

16 hours ago, aglyons said:

 

Do I need to provide specific vars here or is this as scripted?

 

What I posted is exactly how I have it set.

Link to comment

So after searching all over and taking bits of info from everywhere, including your amazing help, I've come to determine that Synology's rsync on DSM v7+ has some issues.

 

SMB transfers into unRAID come in around 70-115MB/s.

 

Rsync from Synology hovers around 7MB/s. 

 

So at the moment I can't use rsync, as the backup set I have would keep it running for months. Just for the initial sync!
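
One isolation test I can try before pointing fingers: time a single large file over each transport (hostname and file name are placeholders):

# From the Synology shell: time one big file via the rsync daemon...
time rsync -av bigfile.bin tower::mnt/user/Syno-BKP/

# ...and the same file via ssh, to see whether the daemon or the network is the bottleneck
time rsync -av -e ssh bigfile.bin root@tower:/mnt/user/Syno-BKP/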

 

I have a ticket in with Synology. I'll update here when I have some news.

Link to comment

OK - that is disappointing, but I am pleased that you've at least got this far.  As a point of reference, when I back up large files from one Unraid box to another rsync is able to pretty much utilise the full network bandwidth.  I see sustained rates up to around 110MB/s between servers.  

Link to comment

So it's definitely something buggy with Synology then.

 

Good to know, I can bring this up in the ticket I have with them. 

 

It's frustrating in that rsync is the ONLY way to automate a file backup to a server that is not another Synology NAS. The local backup option in Hyper Backup won't let you choose a mapped network location!

Link to comment

So the situation here is that there are a number of different sources all over the Synology that are in the backup set. To "pull" the data to the unRAID server, as opposed to "pushing" it, would mean a significant amount of scripting and management. Each source would need its own script to trigger the sync.

 

While many might not be concerned about a 'little' scripting, I'm not a deep Linux guy and it would take me some time to figure it all out.

Link to comment
