Backing up the Unraid server



Hello everyone

I've been on Unraid since the start of this year, and I'm pretty happy with most things.

The one thing I really miss is a good way to back up the Unraid server to another server - in my case, my old Synology. I mean, it is absolutely necessary to keep a second backup - and there is no built-in function for that?
Even worse, among all the available dockers there seems to be no usable solution. I tried Duplicati - it is just slow as hell; after 20h I had backed up 300GB of my 8TB, all over wired LAN.
Rsync seems like an option, but there is simply no GUI available? Setting it up via the console would be one thing - but I need some monitoring.

Is there anything I'm missing? How is everyone else doing it?

 

Link to comment
9 minutes ago, dedi said:

The one thing I really miss is a good way to back up the Unraid server to another server. [...] Is there anything I'm missing? How is everyone else doing it?


I use rsync to back up my main server shares to the backup server once a week (of course, it can be done as often as you wish).  It is all automated using the User Scripts plugin.

 

See the discussion below.  It took me a while to get it all figured out with the help of some others, but now it is working flawlessly without any intervention on my part.  I simply wait for the email sent to me every Monday morning with a summary of the backup process.  It's very fast via rsync/ssh.

 

I am sure there are other methods that could work as well, but this one works for me.

 

 

 

Link to comment

rsync is quite a good route for mirroring data to a secondary server, and it isn't very hard to use even without a GUI. It's excellent for secure, automated transfers of new/changed data between two systems.

 

For live data that is constantly being updated, you basically want to store the data on BTRFS file systems so you can take a snapshot, back up the snapshot, and then release it.

 

But one thing to consider is whether you are fine with having a mirror of your data, or whether you need versioning. With mirroring, an accidentally overwritten document will also be overwritten on the backup server during the next backup run.

Link to comment

Thanks Hoopster for your reply.

It's not really the kind of solution I'm looking for. I am used to the console, but I don't like relying on it at all on an "it should just run" system like my NAS.
I dislike it not only because of the work to put in until it works, but also because I never know whether it will still run after upgrading Unraid.

Still, for lack of alternatives I'll give it a try. So far I'm two hours into fiddling around and it's still not working. And it won't be done after that guide either - this is still only a sync, not a backup.

 

Edited by dedi
Link to comment

You might also consider looking at rsnapshot, a tool based on rsync that uses hard links to keep multiple backups without duplicating unchanged files.

 

It isn't part of the standard unRAID package, but you can install it from the Nerd Tools application.

Link to comment
2 hours ago, dedi said:

I dislike it because not only the work to put in until it works, it's also a thing I never know if it will still run when upgrading unraid.

 

It did take me a while to get it all figured out and running exactly as I wished.  My backup script only copies new files to the backup server; I don't have it delete anything.  Every 90 days (hopefully enough time to notice whether something was accidentally deleted or modified), I run another script that deletes files on the backup server that no longer exist on the main server.

 

As pwm mentioned, if you want snapshots and versioning, there are tools that are a little easier to deal with than plain rsync (although it does have some rudimentary versioning as well).

 

My backup script has run flawlessly every week for many months, and it has survived several unRAID upgrades (6.3.5 > 6.4.0 > 6.4.1 > 6.5.0 > 6.5.1 > 6.5.2) without issue.

 

I am not saying this is the solution that is best for you, but it does work well once properly set up. I believe the later posts in the thread discuss exactly what I did to finally get it working properly in an unattended manner.

 

Prior to the rsync scripts, I was using the Syncthing docker.  It was adequate, but rsync is much faster.  In addition to Syncthing, another docker you may want to investigate is Resilio Sync, if you prefer the docker approach.

Link to comment

This is what I do with my backup servers:

 

-on the backup server snapshot all disks before every rsync run

-run rsync with delete

 

This way the backup is always a current mirror of the source server, but if I need an older deleted or modified file I can use the snapshots. For this, my backup servers have a few more TBs of space than the source server to cover the extra space needed; when they are getting full, I clean up older snapshots.

 

Link to comment
This is what I do with my backup servers: [...] if I need an older deleted or modified file I can use the snapshots.

Are you using rsnapshot or doing a btrfs snapshot?

If the latter...can you give a little more information?

I was thinking of implementing this, but am not sure where to start...

Thanks!


Link to comment
3 minutes ago, airbillion said:

doing a btrfs snapshot?

This - I just have a simple script snapshotting all the disks for each share before every backup. In my case it runs at array start, since the backup servers are normally off and I only turn them on before doing a sync.

 

For example, for my backup TV server I use this script with the User Scripts plugin, set to run at first array start:

 

#!/bin/bash
# Take a read-only snapshot of the TV share on each of the 28 data disks
nd=$(date +%Y-%m-%d-%H%M)
for i in {1..28}; do
  btrfs sub snap -r "/mnt/disk$i/TV" "/mnt/disk$i/snaps/TV_$nd"
done
# Three beeps signal that the snapshots are done and rsync can start
beep -f 500 ; beep -f 500 ; beep -f 500

Beeps are just so I know when the server finishes all the snapshots and is ready for rsync.

Link to comment

Note that rsnapshot is a way of (more or less slowly) creating a mirror of a directory tree while avoiding the need to copy and store files that are identical to those in the previous mirror copy. It requires no special file-system support, but it isn't a true snapshot: if there are writes to the source disk during the copy, different parts of the backup may represent different times.

 

The snapshot functionality in BTRFS is a true snapshot in the sense that BTRFS sets an instant "marker" in the file system exactly as it looks at the time of the command. Any further writes to the disk continue to change the "main line" file system, but the snapshot keeps presenting the system as it looked at snapshot time. This means that for modified data, the disk has to store both the old copy, to hand out within the snapshot, and the new copy, to hand out within the "main line".


But after the BTRFS snapshot has been taken, you can take your time performing the actual copy of the data to another system. That other system will still end up with a snapshot that represents a very specific time point in the source system.

 

One interesting thing with BTRFS is that if the snapshot is read-only (which is what you want, since the snapshot is intended as a backup or as the source of one), BTRFS also supports "send", which can stream the data within the snapshot to stdout or to a file.


This can be used to send a backup to a different machine by tunneling the stdout data.

 

One interesting thing with "btrfs send" is that it supports sending the difference between two snapshots.

 

So if you make one snapshot every night, you can ask btrfs to send only the changes from the last 24 hours to the backup server, and btrfs on the receiving side can then patch its file system with those changes.

 

So BTRFS can both help make sure you lock your backup down to the file-system state at a very specific time, and help you keep multiple snapshots that let you access files and directories as they looked at an earlier time.

 

The limitation here is that the snapshots are not selective - they cover full directory trees. So if you keep one snapshot per night for 14 days, you need to catch and recover from an accidental file overwrite within those 14 days. And the server needs disk space to store the full amount of data, together with the disk changes introduced during each 24-hour interval.

Obviously, a backup strategy doesn't need a fixed time step between snapshots - it could stop purging every old snapshot and instead purge only every second one, or purge 6 out of 7 so you keep one snapshot per week. But the disk still needs to store the total difference between two weekly snapshots, and that diff is obviously expected to be larger than the diff between two daily snapshots since it covers a longer time interval.

 

With a traditional backup with file versioning, it might be possible to specify additional rules for how many copies you should keep of specific files. So you might keep 10 copies of your word documents and specify that they should be kept indefinitely. So it will not be until after the 10th accidental overwrite of the document that the backup will finally overflow and not be able to recover the original and correct version of the document.

Link to comment
3 hours ago, pwm said:

then BTRFS also supports "send" where BTRFS can send out the data within the snapshot to stdout or to a file.

That can be a good option; I use send/receive to back up my VMs (info on how it works here for anyone interested), but I don't use it for server backups for two reasons:

#1 - you would need to do a send/receive for each disk, so e.g. on a 28-disk server there would be 28 different send/receives, though that is not a show stopper

#2 - and the show stopper for me: both servers would need to have the same disk configuration, i.e. the same number of disks with the same sizes in the same assignments, or it wouldn't work. Since I use the disks I upgrade out of my main servers for the backup servers, they all have more but smaller disks compared to the server they are mirroring, so send/receive is not an option.

Link to comment
7 minutes ago, johnnie.black said:

but I don't use it for server backups for two reasons

I forgot to mention another reason that would probably still make me use rsync for these backups even if I had identical servers: there is no progress display with send/receive. Since I turn the backup servers on just for the sync, I like to see how much data there is to transfer and an ETA, and I couldn't get that with send/receive.

Link to comment
1 minute ago, johnnie.black said:

both servers would need to have the same disk configuration

 

This is normally the show stopper for quite a lot of simpler (and even a number of more advanced) backup solutions.

 

It also bites for rsync of disk shares.

And rsync on user shares can get a nose bleed from preallocating all directories on the first disk.

 

Lots of backup software either wants to write to removable media where the user is expected to replace the media if full - or to write to a single disk volume that must be big enough to store the full backup (and often all previous backups for that backup work).

 

For backing up my own machines I use software I have written myself that makes use of storage pools. The view of the source volume is stored in a database, while all unique files get sent to one or more storage pools, where it doesn't really matter exactly which disk in the pool gets an individual file. The backup client just lists the directory information and the hashes of candidate files to back up.

 

The server stores changed directory information in delta format in the database while requesting a copy of all files with unknown hashes, and then directs the storage to a disk with enough space. So it's enough to make sure the destination server has enough free local storage or can mount remote disks. Only if I were foolish enough to try to send a full disk image would the storage pool get into trouble, since it would then need at least one disk with more free space than the full disk image - no splitting is currently allowed.
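The content-addressed idea behind such storage pools can be illustrated in a few lines of shell (a toy sketch, not pwm's actual software): each file is stored once per unique hash, so duplicate content costs nothing extra.

```shell
#!/bin/bash
# Toy illustration of content-addressed storage: files are stored once
# per unique sha256 hash; the directory listing plays the "database" role.
rm -rf /tmp/pool_src /tmp/pool_store
mkdir -p /tmp/pool_src /tmp/pool_store
echo "same data" > /tmp/pool_src/a.txt
echo "same data" > /tmp/pool_src/b.txt    # duplicate content
echo "other"     > /tmp/pool_src/c.txt

for f in /tmp/pool_src/*; do
  h=$(sha256sum "$f" | cut -d' ' -f1)
  [ -e "/tmp/pool_store/$h" ] || cp "$f" "/tmp/pool_store/$h"  # store only unknown hashes
done

ls /tmp/pool_store | wc -l    # two unique blobs for three files
```

A real implementation would of course also record per-file metadata and which blob each path maps to, as described above.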

Link to comment
1 minute ago, johnnie.black said:

I like to see how much data there is to transfer and an ETA, and I couldn't get that with send/receive

 

I wonder if BTRFS knows that when it starts, or if it just walks the internal trees and produces the output on-the-fly. It would definitely be good if it could precompute some statistics and then maybe produce progress data to a socket or file, allowing a supervisor to pick up the progress.

Link to comment
