Automatically backing up cache drive apps directory to protected array



This script is for v5 and plugins.  For v6 and Docker, refer to this post.

 

I've posted this in the past, but it's buried in other threads and probably not easy to find.  Since I've had a few people ask how it was done, I figured I'd create a thread for it so that maybe it could be more easily found.

 

My server has a cache drive which hosts the apps directory for my plugins.  As the cache is not fault tolerant, I wanted to backup the apps directory to the protected array periodically in case the cache drive were ever to fail.

 

Obviously the script would need to be modified to your particular needs/plugins, but it's a good starting point for those Linux novices like myself.  The script was named cache_backup.sh and placed in /boot/custom.

 

#!/bin/bash

#Stop services
/etc/rc.d/rc.plexmediaserver stop
/etc/rc.d/rc.sabnzbd stop
/etc/rc.d/rc.sickbeard stop
/etc/rc.d/rc.couchpotato_v2 stop

#Backup cache via rsync
date >/var/log/cache_backup.log
/usr/bin/rsync -avrtH --delete /mnt/cache/apps/ /mnt/user/Backup/unRAID_cache >>/var/log/cache_backup.log

#Start services
/etc/rc.d/rc.plexmediaserver start
/etc/rc.d/rc.sabnzbd start
/etc/rc.d/rc.sickbeard start
/etc/rc.d/rc.couchpotato_v2 start

 

You then need to add the script to cron.weekly on every reboot (unRAID's root filesystem runs from RAM, so /etc is rebuilt at boot).  This is done by adding the following to the go script.

 

#Add cache backup script to cron
cp /boot/custom/cache_backup.sh /etc/cron.weekly


By default cron.weekly runs at 04:30 am on the first day of the week (Sunday). If 04:30 am on Sunday doesn't work for you, you could always create a custom cron job to run it whenever you wish.
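For example, a custom crontab entry uses the standard five-field syntax (the Saturday 03:00 time below is just an illustration; adjust to taste):

```
# min hour dom mon dow  command
0 3 * * 6  /boot/custom/cache_backup.sh
```

On v5 you would typically install the entry from the go script so it survives a reboot, just like the cron.weekly copy above.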

 

The initial rsync will obviously take a few minutes, depending on the size of your apps directory.  Subsequent backups (since it only syncs changed/new files) take under a minute, at least on my server.


thanks for sharing, dirtysanchez!

 

and take a shower!!!  ;D

 

 

EDIT: One thing to take into consideration - if you're backing up your sabnzbd directory, and your partial and completed download directories are in there, make sure you exclude them. If you happen to be downloading at the time the backup runs (as I was when I decided to test the script to ensure it worked), you'll back up all the download bits and pieces.

/usr/bin/rsync -avrtH --delete --exclude-from 'cache_backup_exclude' /mnt/cache/apps/ /mnt/user/Backups/AppsOnCache >>/var/log/cache_backup.log

where 'cache_backup_exclude' is a file listing directories (relative to /mnt/cache/apps) to be excluded from the rsync
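For illustration, the exclude file might look like this (the directory names are just examples; list whatever actually exists under your own /mnt/cache/apps):

```
# cache_backup_exclude -- one pattern per line,
# relative to the rsync source directory (/mnt/cache/apps/)
sabnzbd/Downloads/incomplete/
sabnzbd/Downloads/complete/
```

Note that the filename given to --exclude-from is resolved against the current working directory, so an absolute path like /boot/custom/cache_backup_exclude is safer when the script runs from cron.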


 

EDIT: One thing to take into consideration - if you're backing up your sabnzbd directory, and your partial and completed download directories are in there, make sure you exclude them. If you happen to be downloading at the time the backup runs (as I was when I decided to test the script to ensure it worked), you'll back up all the download bits and pieces.

/usr/bin/rsync -avrtH --delete --exclude-from 'cache_backup_exclude' /mnt/cache/apps/ /mnt/user/Backups/AppsOnCache >>/var/log/cache_backup.log

where 'cache_backup_exclude' is a file listing directories (relative to /mnt/cache/apps) to be excluded from the rsync

 

FreeMan, I am not sure I follow... my Sab downloads to a separate share I have called /mnt/user/downloads/incomplete... So anything currently in the queue will not get backed up... are there any other dangerous mid-download bits inside the /mnt/cache/MyAppFolder/Sabnzbd, Couchpotato, etc. folders?

 

Thanks,

 

H.

 

 

So anything currently in the queue will not get backed up... are there any other dangerous mid-download bits inside the /mnt/cache/MyAppFolder/Sabnzbd, Couchpotato, etc. folders?
Not dangerous, just a pain in the tail. They will add up in every backup, since there will be new content there every time. You could end up with a HUGE backup.

Not dangerous, just a pain in the tail. They will add up in every backup, since there will be new content there every time. You could end up with a HUGE backup.

 

Thanks Jonathan... but my question still remains... if my /downloads/incomplete save folder is outside of my cache drive's app folder, is there ANYTHING ELSE I need to worry about excluding that is INSIDE my cache drive's application folder?

 

Thanks.

 

H.


EDIT: One thing to take into consideration - if you're backing up your sabnzbd directory, and your partial and completed download directories are in there, make sure you exclude them. If you happen to be downloading at the time the backup runs (as I was when I decided to test the script to ensure it worked), you'll back up all the download bits and pieces.

/usr/bin/rsync -avrtH --delete --exclude-from 'cache_backup_exclude' /mnt/cache/apps/ /mnt/user/Backups/AppsOnCache >>/var/log/cache_backup.log

where 'cache_backup_exclude' is a file listing directories (relative to /mnt/cache/apps) to be excluded from the rsync

 

Good point.  I went about it a bit differently though.  While I may back up a few GB worth of files I don't need (incomplete downloads directory), when the next rsync runs it will delete those files from the backup, assuming they no longer exist on the cache drive (due to the --delete option).  I also have completed downloads immediately moved to the array, so I have no worry of backing up completed downloads.

 

In my particular use case it was just easier to do it that way.  Lazier, but easier.  ;D


Not dangerous, just a pain in the tail. They will add up in every backup, since there will be new content there every time. You could end up with a HUGE backup.

 

Thanks Jonathan... but my question still remains... if my /downloads/incomplete save folder is outside of my cache drive's app folder, is there ANYTHING ELSE I need to worry about excluding that is INSIDE my cache drive's application folder?

 

Thanks.

 

H.

 

Don't quote me, but I don't think so. SAB is the only downloader; the rest are just finders. I have my ../download and ../complete directories as part of the ../sabnzbd structure, and they're on the cache drive. There is no danger, as dirtysanchez points out; it's just a major pain, and, depending on how much you may have downloaded on that particular Sunday, you may have a ton of extra stuff in your backup. As dirtysanchez also points out, it will be deleted the next time the backup runs.

 

I only really hit the situation because I was testing the .sh file, and had a lot of stuff recently downloaded and something in the process of downloading. Five minutes in, I was stumped as to why it was taking so long, when indications were that it should run in about a minute the first time and seconds after that.

 

I suppose the danger could be that you'd run out of space on your backup location.


If I was to add

 

/etc/init.d/crond stop
/etc/init.d/crond start

 

to the shell script, as I have vnSTAT running every minute and do not want to corrupt that database, would this stop the backup script, or just stop cron from starting new commands?

 

EDIT: Also, any reason you are writing directly to disk3 instead of to user/backup?

 

 


If I was to add

 

/etc/init.d/crond stop
/etc/init.d/crond start

 

to the shell script, as I have vnSTAT running every minute and do not want to corrupt that database, would this stop the backup script, or just stop cron from starting new commands?

 

EDIT: Also, any reason you are writing directly to disk3 instead of to user/backup?

 

As for your first question, I do not know.  I'm learning more about Linux every day, but I'm no expert.  Hopefully one of the gurus around here can answer that question.

 

As to why I'm backing up directly to /mnt/disk3, there's no real reason and backing up to /mnt/user would work just as well.  In my particular case the Backup share is limited to disk3 only, so either way works.  But for the sake of clarity for other users, it probably makes more sense to point it to a user share (in case their backup directory spans multiple disks).  Thanks for pointing that out and I will change the OP.


If I was to add

 

/etc/init.d/crond stop
/etc/init.d/crond start

 

to the shell script, as I have vnSTAT running every minute and do not want to corrupt that database, would this stop the backup script, or just stop cron from starting new commands?

 

EDIT: Also, any reason you are writing directly to disk3 instead of to user/backup?

 

As for your first question, I do not know.  I'm learning more about Linux every day, but I'm no expert.  Hopefully one of the gurus around here can answer that question.

 

As to why I'm backing up directly to /mnt/disk3, there's no real reason and backing up to /mnt/user would work just as well.  In my particular case the Backup share is limited to disk3 only, so either way works.  But for the sake of clarity for other users, it probably makes more sense to point it to a user share (in case their backup directory spans multiple disks).  Thanks for pointing that out and I will change the OP.

 

Well, /etc/init.d doesn't exist, so I guess you can ignore that.  Still curious how I would stop cron from running while this script is running.
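For what it's worth, a hedged sketch of one approach (Slackware paths assumed, since unRAID v5 has no /etc/init.d): stopping crond only prevents new jobs from being launched; a job cron has already started, like this backup script, is a separate child process and keeps running.

```shell
# Pause the cron daemon so nothing new (e.g. vnSTAT's every-minute job)
# fires mid-backup; the already-running backup script is unaffected.
killall -q crond

# ... stop services / rsync / start services, as in the script above ...

# Restart crond the way Slackware's rc.M does
/usr/sbin/crond -l notice 2>/dev/null
```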


Thanks for this script. I added it to back up my apps folder on the cache drive. Can you also write a script to restore the backup? Thanks!

 

Just reverse it :)

 

#Stop services
/etc/rc.d/rc.plexmediaserver stop
/etc/rc.d/rc.sabnzbd stop
/etc/rc.d/rc.sickbeard stop
/etc/rc.d/rc.couchpotato_v2 stop

#Restore cache via rsync
/usr/bin/rsync -avrH /mnt/user/Backup/unRAID_cache/ /mnt/cache/apps >>/var/log/cache_restore.log

#Start services
/etc/rc.d/rc.plexmediaserver start
/etc/rc.d/rc.sabnzbd start
/etc/rc.d/rc.sickbeard start
/etc/rc.d/rc.couchpotato_v2 start


Thanks for sharing!  Works like a charm and surprisingly quick!

 

As FreeMan pointed out, I opted to exclude the incomplete downloads directory for SAB, but instead of using an external file, I just used the --exclude option like this:

 

/usr/bin/rsync -avrtH --delete --exclude 'sabnzbd/Downloads/incomplete' /mnt/cache/apps/ /mnt/user/Backup/unRAID_cache >>/var/log/cache_backup.log

 

This, like --exclude-from, is relative to the source directory (/mnt/cache/apps/).
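A tiny runnable demo of that relative matching, using throwaway temp directories instead of the real unRAID paths (the sabnzbd layout here is made up for illustration):

```shell
# Only the 'sabnzbd/incomplete' subtree is skipped; patterns are
# matched relative to the rsync source directory, not the filesystem root.
src=/tmp/excl_demo_src; dst=/tmp/excl_demo_dst
rm -rf "$src" "$dst"
mkdir -p "$src/sabnzbd/incomplete" "$src/sabnzbd/config" "$dst"
echo partial  > "$src/sabnzbd/incomplete/file.part"
echo settings > "$src/sabnzbd/config/sabnzbd.ini"

rsync -a --exclude 'sabnzbd/incomplete' "$src/" "$dst/"
ls "$dst/sabnzbd"    # prints: config
```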


Hi,

 

New user, new install of 5.0.5 plus.

Am I missing some essential files/packages needed to run this script?

 

Tried to run it manually, and it gives me this error:

root@Tower:/boot/custom# cache_backup.sh
-bash: ./cache_backup.sh: /bin/bash^M: bad interpreter: No such file or directory

 

I have not edited the script, and I have Plex on my cache drive, at /mnt/cache/apps

I also have a share at mnt/backup

 

terminal:

root@Tower:/mnt/cache/apps# ls
library/  temp/

 

cache_backup.sh

#Backup cache via rsync
date >/var/log/cache_backup.log
/usr/bin/rsync -avrtH --delete /mnt/cache/apps/ /mnt/user/backup/unRAID_cache >>/var/log/cache_backup.log

 

I even tried changing /mnt/cache/apps/ to /mnt/user/apps/, since that is what shows inside unRAID. This confuses me a bit :)

 

Happy for any hint or solution :)

(attached screenshots: library, cache)


Tried to run it manually, and it gives me this error:

root@Tower:/boot/custom# cache_backup.sh
-bash: ./cache_backup.sh: /bin/bash^M: bad interpreter: No such file or directory

Looks like you must have used Notepad or some other text editor that did not save the script in a Unix-compatible format. Notepad++ is often recommended around here.
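The ^M in the error is a DOS carriage return. A quick way to reproduce and fix it from the console (dos2unix or fromdos also work if installed; the /tmp filename below is just for the demo, point sed at your own script):

```shell
# A script saved with Windows CRLF line endings fails with
# "bad interpreter: /bin/bash^M"; stripping the trailing CR fixes it.
printf '#!/bin/bash\r\necho ok\r\n' > /tmp/crlf_demo.sh
chmod +x /tmp/crlf_demo.sh
sed -i 's/\r$//' /tmp/crlf_demo.sh
/tmp/crlf_demo.sh    # prints: ok
```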

How would I need to modify this script if I wanted to back up everything in my cache/appdata to a backup share, given that everything in the appdata share belongs to dockers? Can the script stop and start all dockers like it does services?

 

The rsync would certainly work regardless of what's in the appdata directory you tell it to rsync.  As for modifying the stop/start commands to stop and start the dockers, I can't help you there, as I don't yet run dockers since I'm still on v5.  That said, I'm sure there's a way to stop and start them from CLI.  Hopefully one of the gurus can help you with that.
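For v6 readers landing here anyway, a hedged sketch of a docker equivalent (`docker ps`/`stop`/`start` are the standard CLI commands; the /mnt/cache/appdata path and backup destination are assumptions, adjust to your setup):

```shell
#!/bin/bash
# Sketch only -- adjust paths and add --exclude options to taste.

# Remember which containers are currently running, then stop them all
RUNNING=$(docker ps -q)
[ -n "$RUNNING" ] && docker stop $RUNNING

# ... rsync the appdata directory, as in the v5 script above ...
date >/var/log/cache_backup.log
/usr/bin/rsync -avrtH --delete /mnt/cache/appdata/ /mnt/user/Backup/unRAID_cache >>/var/log/cache_backup.log

# Restart only the containers that were running before the backup
[ -n "$RUNNING" ] && docker start $RUNNING
```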


How would I need to modify this script if I wanted to back up everything in my cache/appdata to a backup share, given that everything in the appdata share belongs to dockers? Can the script stop and start all dockers like it does services?

 

The rsync would certainly work regardless of what's in the appdata directory you tell it to rsync.  As for modifying the stop/start commands to stop and start the dockers, I can't help you there, as I don't yet run dockers since I'm still on v5.  That said, I'm sure there's a way to stop and start them from CLI.  Hopefully one of the gurus can help you with that.

 

My bad! I didn't realize this was in the 5.x thread...

