Backup dockers and their data from Cache drive



FYI: I found the WebSync docker in sparklyballs' beta repository and am using it for my backups. It was easier for me to deal with than setting up cron jobs with rsync manually. I have one backup scheduled weekly and one monthly for some redundancy.

 

I'm only backing up my app data so I'm not stopping docker. My understanding is that files that change after rsync has started will not be synced, which I'm ok with. Can get them the next time.

Link to comment


 

How did you configure your WebSync job?  Any chance you can post screens?

Link to comment

It's a pretty simple setup with just a source folder (my apps folder on the cache), destination folder (a backup folder on the array), 2 flags (archive & hard links), and then a weekly schedule (Mondays at 2am). A screenshot of the weekly backup is attached. The monthly backup looks the same but with just a change in the schedule.
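For reference, those two flags correspond to rsync's archive and hard-links options, so the equivalent command-line job would look something like this (the share names are just examples):

rsync -aH /mnt/cache/apps/ /mnt/user/Backup/apps/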

 

Thanks for the screen shot.

 

Are you stopping your dockers before running the backup and if so how are you automating that?

Link to comment

Should have mentioned before that if you're at all worried about websync you should test it first with some test folders to make sure it's doing what you want.

 

I'm not stopping my dockers. My understanding is that rsync will miss files that are added or changed during the sync and I'm ok with that. Can just get those the next time. Shouldn't be any other downside to leaving dockers running.

 

Link to comment

Did some more reading on this and it looks like rsync can copy files that are open, but there does appear to be a flaw in using it without stopping docker.

 

Based on what I've read, if a file is being written while rsync is running then rsync will simply copy what is there at that instant. So if an application is in the middle of a write then there's a chance the copy will be corrupt. This may be a very rare situation, but given that this is meant to be a backup it needs to be reliable.

 

Glad you asked...I think it's probably best to switch back to a manual cron job that includes the command to stop/start docker in addition to the rsync command.
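As a rough sketch, the cron job just wraps the rsync between a docker stop and start, something like this (paths are examples, and the rc.docker service script location should be verified on your unRAID version):

#!/bin/bash
# Stop the docker service so no container is writing during the copy
/etc/rc.d/rc.docker stop
# Mirror the cache to the array
rsync -aH --delete /mnt/cache/ /mnt/disk1/Backup/unRAID_cache/
# Bring docker back up
/etc/rc.d/rc.docker start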

Link to comment

I'm not having very good success trying to back up my Plex appdata folder.  If I run the websync job it seems to run properly and throws all the data into my Backups share.  It moves it to the cache drive first, of course, but then when I run the mover I run into issues.  First off, the log gets full before the mover can complete (it gets to 100% and then the mover appears to stop moving files).  Then, if I restart the server (the only thing I can do once the log gets full) and run the mover on whatever files are left over, it skips them.

 

Just to note my PlexMediaServer appdata folder is pretty large (130GB).

Link to comment

I don't use the cache drive for my Backup share, which may help you get around the problem. But I'm not really sure what would be causing that.

 

Was just thinking that I should try not using cache for that share.  Makes sense.

Yes - that would mean there would be a LOT fewer mover messages in the syslog.

 

Another possibility might be to increase the space available for logging.  I have the following in my go file to increase the space allowed from 128MB to 256MB.

mount -o remount,size=256m /var/log
logger -tgo "Increased space for logs to 256MB"
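
After a reboot you can confirm the remount took effect with:

df -h /var/log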

 

Link to comment
  • 3 months later...

I just set up a backup for my cache-only "Apps" share using this method. A few "gotchas" I discovered (unRAID 6.1.6):

 

1. If you're backing up your Plex docker files, it's a good idea to use a disk share instead of a user share as the destination (i.e. "disk2" instead of "user" in the path). The latter works, but you'll get unnerving messages in the syslog about missing files (see the path example at the end of this post).

2. If you want to include a notification, it appears that the "notify" command has moved:

 

/usr/local/emhttp/plugins/dynamix/scripts/notify -i normal -s "Cache Drive Backup Completed" -d " Cache Drive Backup completed at `date`"
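
To make gotcha 1 concrete, the only difference is the mount point in the destination path (share names here are examples):

# user share destination - works, but may log "missing file" noise in the syslog
rsync -aH /mnt/cache/Apps/ /mnt/user/Backups/Apps/
# disk share destination - avoids the warnings
rsync -aH /mnt/cache/Apps/ /mnt/disk2/Backups/Apps/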
Link to comment
  • 2 months later...

Thanks for this script.

 

I used the V5 version of this and it worked fine.

Unfortunately I did not change it when I started to move to dockers, and it locked up my system last night.

 

Here my "enhanced" code to only stop running containers and only restarting the ones I stopped.

What options should I give the docker start command? What are the defaults unRAID uses when starting containers?

 

As I had to "foce" stop my tower this morning it is now doing a parity check.

Therefore I have only run the script with the actual rsync command commented out.

Any improvements or comments are very welcome. (Error handling is still to come...)

 

#!/bin/bash

LogFile=/var/log/cache_backup.log
BackupDir=/mnt/disk1/Backup/unRAID_cache

echo `date` "Starting cache drive backup to " $BackupDir >> $LogFile

#Stop plugin services located in /etc/rc.d/
# enter in plugins to stop here, if any
# /etc/rc.d/rc.plexmediaserver stop >> $LogFile


#stop dockers

  # find running docker containers
  declare -a Containers=(`docker ps -q`)

  # stop running containers
  for Cont  in "${Containers[@]}"
  do
    echo `date` " Stopping Container: " $Cont >> $LogFile
    docker stop $Cont >> $LogFile
  done


#Backup cache via rsync

/usr/bin/rsync -avrtH --delete /mnt/cache/ $BackupDir  >> $LogFile

## RESTORE
## /usr/bin/rsync -avrtH --delete  $BackupDir  /mnt/cache/

#Start plugin services
# enter in plugins to start here, if any
# /etc/rc.d/rc.plexmediaserver start  >> $LogFile


#start dockers previously stopped

  for Cont  in "${Containers[@]}"
  do
    echo `date` " Starting Container: " $Cont >> $LogFile
    docker start $Cont >> $LogFile
  done


echo `date` "backup Completed " $BackupDir >> $LogFile

# send notification
/usr/local/sbin/notify -i normal -s "Cache Drive Backup Completed" -d " Cache Drive Backup completed at `date`"

 

EDIT: The parity check finished so I gave this a whirl. All running dockers stopped before the backup and the same dockers got restarted afterwards. So all looks good.

 

EDIT2: added full path to notify

 

Using the above, I got my backup script working. I initially had a problem restarting docker containers; it turned out to be a typo in a variable name in my script :)

 

I'm only backing up appdata, as I can easily re-setup docker.img.

 

#!/bin/bash

#Logfile location
LOG_FILE=/var/log/cache_backup.log
BACKUP_ROOT=/mnt/disk3/backups/unraid/appdata

echo `date` "Starting cache drive appdata backup to " ${BACKUP_ROOT} >> $LOG_FILE

#Stop plugin services located in /etc/rc.d/
# enter in plugins to stop here, if any

#stop dockers

# find running docker containers
declare -a Containers=(`docker ps -q`)

# stop running containers
for CONT in "${Containers[@]}"
do
  echo `date` " Stopping Container: " $CONT >> $LOG_FILE
  docker stop $CONT >> $LOG_FILE
done

#Backup cache appdata via rsync
/usr/bin/rsync -avrtH --delete /mnt/cache/appdata/ ${BACKUP_ROOT}  >> $LOG_FILE

#Make a compressed archive into the archive folder so we can rotate
#tar -czvf ${BACKUP_ROOT}/archive/cache-appdata_$(date +%y%m%d).tar.gz ${BACKUP_ROOT}/latest/ >> $LOG_FILE

#Start plugin services
# enter in plugins to start here, if any

#start dockers previously stopped
for CONT in "${Containers[@]}"
do
  echo `date` " Starting Container: " $CONT >> $LOG_FILE
  docker start $CONT >> $LOG_FILE
done

#notify about backup
/usr/local/emhttp/plugins/dynamix/scripts/notify -i normal -s "Cache Drive AppData Backup Completed" -d "Cache Drive AppData Backup completed at `date`"

 

The notification is slick because it shows up in the Web UI. Great confirmation that it ran, and since the job is only weekly it's not a big deal.

 

I was initially messing with rotating compressed tars, but ended up ditching that: I realized I have crashplan running in a docker backing up this directory, and it handles all that automatically, as seen in this config screenshot:

 

[Screenshot: CrashPlan backup configuration]

 

To kick the script off, I'm using the tip earlier in the thread about using cron, by creating /boot/config/plugins/cache_backup/cache_backup.cron:

 

# Weekly cache backup
0 0 5,12,19,26 * * /boot/scripts/cache_backup.sh &> /dev/null

 

It backs up at midnight on the 5th, 12th, 19th, and 26th of every month.
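
To confirm the entry made it into the live crontab (unRAID merges the .cron files into root's crontab at startup), you can check:

crontab -l | grep cache_backup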

Link to comment
  • 2 weeks later...

hm, when I do this I get

-bash: /boot/scripts/cache_backup.sh: /bin/bash^M: bad interpreter: No such file or directory

 

I copied the script posted by tmchow into /boot/scripts and just adjusted some of the paths to fit my requirements.

 

What am I missing here?

 

Link to comment


You probably copied and pasted using Notepad.

 

Use Notepad++ and after pasting, set the EOL Conversion (Edit - EOL Conversion) to be UNIX / OSX format
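
If you'd rather fix it from the command line, stripping the carriage returns in place also works:

sed -i 's/\r$//' /boot/scripts/cache_backup.sh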

Link to comment

Okay, thanks, it is working now, but I get

 

rsync: failed to set times on "/mnt/user/Backup/cache/appdata/teamspeak3/log/server": No such file or directory (2)

rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]

 

But it seems that everything was copied over, so no files are actually missing.

 

I also don't see any of the echo messages such as "Starting cache drive backup to " in the webUI log.

Do I have to look anywhere else here?

 

Link to comment


 

Another possibility (and what I ran into today) is mixing and matching turl's edits with the originals... turl uses a different path ("scripts" vs "custom").

Link to comment

Yeah, I noticed this as well.

But for me the actual backup works. I just receive the error message and don't see any echo messages.

I also don't have permission to open the cache_backup.log file.

 

 

--------------------

 

update: I just switched my destination to a single disk share and now the error is gone. Seems that there is the same problem with the teamspeak3 container as with the plex container, then.

 

But I still don't get where to watch the logs of the script (e.g. the output of echo `date` "Starting cache drive backup to " $BackupDir >> $LogFile).

When I type in /var/log/cache_backup.log I get:

-bash: /var/log/cache_backup.log: Permission denied
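
(Side note: entering the path by itself makes bash try to execute the log file, which is what produces the "Permission denied" error. Read it instead, e.g.:)

cat /var/log/cache_backup.log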

Link to comment
  • 2 weeks later...

The next version of CA is going to have this in it (sometime this weekend).

 

But, as I'm not a real rsync guy I have a question

 

Using -avrtH, I get a ton of

rsync: failed to set times on "/mnt/user/Appdata Backup/krusader/cache-30b58ef6daa1": No such file or directory (2)

(Plex gives a ton of these)

 

These files are symlinks. The errors disappear if I add --safe-links to the command.

 

Am I going to run into any problems with a restore on this?

 

Link to comment

About the -avrtH, the r and the t are already included in the a, so they're redundant.  What's not included is the X for extended attributes, something many users will want.  So I think -avrtH should be -avXH.

 

You need someone much more Linux knowledgeable than me though for the symlink issue, to guarantee a perfect restore.
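
Putting the flag discussion together, the command would become something like this (paths are examples; --safe-links is the workaround from the post above for the symlink errors):

# -a already implies -r and -t (plus -lpgoD); X preserves extended attributes,
# H preserves hard links; --safe-links ignores symlinks pointing outside the source tree
rsync -avXH --safe-links --delete /mnt/cache/appdata/ "/mnt/user/Appdata Backup/"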

Link to comment


Yeah I figured all that.

 

What I'm going to wind up doing, after I finish the cron settings, is release CA and see who pipes in on the rsync settings (as the CA thread will probably get more traction than this one).

 

Had to stop my testing on the backup / restore as the wife is back home and will want to watch a movie now  :(

 

Link to comment
