dirtysanchez (Author) Posted January 10, 2014

This script is for v5 and plugins. For v6 and Docker, refer to this post. I've posted this in the past, but it's buried in other threads and probably not easy to find. Since I've had a few people ask how it was done, I figured I'd create a thread for it so that maybe it could be more easily found.

My server has a cache drive which hosts the apps directory for my plugins. As the cache is not fault tolerant, I wanted to back up the apps directory to the protected array periodically in case the cache drive were ever to fail. Obviously the script would need to be modified to your particular needs/plugins, but it's a good starting point for Linux novices like myself. The script was named cache_backup.sh and placed in /boot/custom.

#!/bin/bash

#Stop services
/etc/rc.d/rc.plexmediaserver stop
/etc/rc.d/rc.sabnzbd stop
/etc/rc.d/rc.sickbeard stop
/etc/rc.d/rc.couchpotato_v2 stop

#Backup cache via rsync
date >/var/log/cache_backup.log
/usr/bin/rsync -avrtH --delete /mnt/cache/apps/ /mnt/user/Backup/unRAID_cache >>/var/log/cache_backup.log

#Start services
/etc/rc.d/rc.plexmediaserver start
/etc/rc.d/rc.sabnzbd start
/etc/rc.d/rc.sickbeard start
/etc/rc.d/rc.couchpotato_v2 start

You then need to add the script to cron.weekly on every reboot. This is done by adding the following to the go script:

#Add cache backup script to cron
cp /boot/custom/cache_backup.sh /etc/cron.weekly
jevans04 Posted January 10, 2014

Thanks for sharing! One question: is there a way to control (or determine) when the weekly rsync runs? I would not want it to stop my Plex service while I was streaming something.
dirtysanchez (Author) Posted January 10, 2014

By default cron.weekly runs at 04:30 am on the first day of the week (Sunday). If 04:30 am on Sunday doesn't work for you, you could always create a custom cron job to run it whenever you wish. The initial rsync will obviously take a few minutes, depending on the size of your apps directory. Subsequent backups (since it only syncs changed/new files) take under a minute, at least on my server.
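For reference, a concrete example of that custom cron job: add a line like the following to root's crontab (e.g. via crontab -e). The 03:00 Saturday schedule here is purely illustrative; pick whatever time suits you.

```text
# min hour day month weekday   command
0     3    *   *     6         /boot/custom/cache_backup.sh
```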
FreeMan Posted January 17, 2014

Thanks for sharing, dirtysanchez! And take a shower!!!

EDIT: One thing to take into consideration - if you're backing up your sabnzbd directory, and your partial and completed download directories are in there, make sure you exclude them. If you happen to be downloading at the time the backup runs (as I was when I decided to test the script to ensure it worked), you'll back up all the download bits and pieces.

/usr/bin/rsync -avrtH --delete --exclude-from 'cache_backup_exclude' /mnt/cache/apps/ /mnt/user/Backups/AppsOnCache >>/var/log/cache_backup.log

where 'cache_backup_exclude' is a file listing directories (relative to /mnt/cache/apps) to be excluded from the rsync.
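For anyone unfamiliar with --exclude-from: the cache_backup_exclude file is just a plain-text list of patterns, one per line, interpreted relative to the rsync source directory. The directory names below are only examples; match them to your own layout.

```text
# cache_backup_exclude -- patterns are relative to /mnt/cache/apps/
sabnzbd/Downloads/incomplete
sabnzbd/Downloads/complete
```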
hernandito Posted January 18, 2014

(quoting FreeMan's EDIT above)

FreeMan, I am not sure I follow... my SAB downloads to a separate share I have called /mnt/user/downloads/incomplete, so anything currently in the queue will not get backed up. Are there any other dangerous mid-download bits inside the /mnt/cache/MyAppFolder/Sabnzbd, CouchPotato, etc. folders?

Thanks, H.
JonathanM Posted January 18, 2014

(quoting hernandito: "are there any other dangerous mid-download bits...?")

Not dangerous, just a pain in the tail. They will add up in every backup, since there will be new content there every time. You could end up with a HUGE backup.
jumperalex Posted January 18, 2014

Thanks Dirty. Appreciate it.
hernandito Posted January 18, 2014

(quoting JonathanM's reply above)

Thanks Jonathan... but my question still remains: if my /downloads/incomplete save folder is outside of my cache drive's app folder, is there ANYTHING ELSE I need to worry about excluding that is INSIDE my cache drive's application folder?

Thanks. H.
dirtysanchez (Author) Posted January 18, 2014

(quoting FreeMan's EDIT above)

Good point. I went about it a bit differently, though. While I may back up a few GB worth of files I don't need (the incomplete downloads directory), when the next rsync runs it will delete those files from the backup, assuming they no longer exist on the cache drive (due to the --delete option). I also have completed downloads immediately moved to the array, so I have no worry of backing up completed downloads. In my particular use case it was just easier to do it that way. Lazier, but easier.
FreeMan Posted January 20, 2014

(quoting hernandito: "is there ANYTHING ELSE I need to worry about excluding that is INSIDE my cache drive's application folder?")

Don't quote me, but I don't think so. SAB is the only downloader; the rest are just finders. I have my ../download and ../complete directories as part of the ../sabnzbd structure, and they're on the cache drive. There is no danger, as dirtysanchez points out; it's just a major pain, and depending on how much you may have downloaded on that particular Sunday, you may have a ton of extra stuff in your backup. As dirtysanchez also points out, it will be deleted the next time the backup runs. I only really hit the situation because I was testing the .sh file and had a lot of stuff recently downloaded, plus something in the process of downloading. I was stumped, 5 minutes in, why it was taking so long, when indications were that it should run about a minute the first time and seconds after that. I suppose the danger could be that you'd run out of space on your backup location.
smdion Posted February 17, 2014

If I were to add

/etc/init.d/crond stop
/etc/init.d/crond start

to the shell script, as I have vnSTAT running every minute and do not want to corrupt that database, would this stop the backup script or just stop cron from starting new commands?

EDIT: Also, any reason you are writing directly to disk3 instead of to user/backup?
dirtysanchez (Author) Posted February 17, 2014

(quoting smdion's questions above)

As for your first question, I do not know. I'm learning more about Linux every day, but I'm no expert. Hopefully one of the gurus around here can answer that question. As to why I'm backing up directly to /mnt/disk3: there's no real reason, and backing up to /mnt/user would work just as well. In my particular case the Backup share is limited to disk3 only, so either way works. But for the sake of clarity for other users, it probably makes more sense to point it to a user share (in case their backup directory spans multiple disks). Thanks for pointing that out; I will change the OP.
smdion Posted February 17, 2014

(quoting dirtysanchez's reply above)

Well, /etc/init.d doesn't exist, so I guess you can ignore that. Still curious how I would stop cron from running while this script is running.
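A couple of points can be stated with reasonable confidence here: unRAID v5 is Slackware-based, which is why /etc/init.d doesn't exist, and stopping crond only prevents new scheduled jobs from starting; it does not abort jobs that are already running (including this script itself, if cron launched it). A hedged sketch of pausing cron around the backup, assuming the stock Slackware crond path and flags:

```shell
#!/bin/bash
# Sketch: pause crond around the backup so no new scheduled jobs (e.g. a
# per-minute vnSTAT run) start mid-backup. Stopping crond does NOT abort
# jobs already running. The /usr/sbin/crond path and -l flag match stock
# Slackware and are an assumption about your system.

stop_crond() {
    if pkill -x crond 2>/dev/null; then
        echo "crond stopped"
    else
        echo "crond was not running"
    fi
}

start_crond() {
    if [ -x /usr/sbin/crond ]; then
        if /usr/sbin/crond -l notice 2>/dev/null; then
            echo "crond restarted"
        else
            echo "could not restart crond"
        fi
    else
        echo "crond binary not found; skipping restart"
    fi
}

stop_crond
# ... stop services, rsync, start services, exactly as in the original script ...
start_crond
```

On the server you would wrap the stop_crond/start_crond calls around the stop-services/rsync/start-services section of cache_backup.sh.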
SCSI Posted February 19, 2014

Thanks for this script. I added it to back up my apps folder on the cache drive. Can you also write a script to restore the backup? Thanks!
smdion Posted February 19, 2014

(quoting SCSI's request above)

Just reverse it:

#Stop services
/etc/rc.d/rc.plexmediaserver stop
/etc/rc.d/rc.sabnzbd stop
/etc/rc.d/rc.sickbeard stop
/etc/rc.d/rc.couchpotato_v2 stop

#Restore cache via rsync
/usr/bin/rsync -avrH /mnt/user/Backup/unRAID_cache/ /mnt/cache/apps >>/var/log/cache_restore.log

#Start services
/etc/rc.d/rc.plexmediaserver start
/etc/rc.d/rc.sabnzbd start
/etc/rc.d/rc.sickbeard start
/etc/rc.d/rc.couchpotato_v2 start
SCSI Posted February 19, 2014

That was fast and easy lol! Thanks a lot!

Sent from my Galaxy Nexus using Tapatalk
kaiguy Posted February 24, 2014

Thanks for sharing! Works like a charm and is surprisingly quick! As FreeMan pointed out, I opted to exclude the incomplete downloads directory for SAB, but instead of using an external file, I just used the --exclude option like this:

/usr/bin/rsync -avrtH --delete --exclude 'sabnzbd/Downloads/incomplete' /mnt/cache/apps/ /mnt/user/Backup/unRAID_cache >>/var/log/cache_backup.log

This, like --exclude-from, is relative to the source directory (/mnt/cache/apps/).
Banderaz Posted March 2, 2014

Hi, new user, new install of 5.0.5 Plus. Am I missing some essential files/packages needed to run this script? I tried to run it manually, and it gives me this error:

root@Tower:/boot/custom# cache_backup.sh
-bash: ./cache_backup.sh: /bin/bash^M: bad interpreter: No such file or directory

I have not edited the script, and I have Plex on my cache drive, at /mnt/cache/apps. I also have a share at /mnt/backup. Terminal:

root@Tower:/mnt/cache/apps# ls
library/ temp/ cache_backup.sh

The backup portion of my script reads:

#Backup cache via rsync
date >/var/log/cache_backup.log
/usr/bin/rsync -avrtH --delete /mnt/cache/apps/ /mnt/user/backup/unRAID_cache >>/var/log/cache_backup.log

I even tried to change /mnt/cache/apps/ to /mnt/user/apps/, since this is what it shows inside unRAID? This confuses me a bit. Happy for any hint or solution.
trurl Posted March 2, 2014

(quoting Banderaz's "bad interpreter" error above)

Looks like you must have used Notepad or some other text editor that did not save the script in a Unix-compatible format. Notepad++ is often recommended around here.
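The ^M in that error is a literal carriage return: a Windows editor saved the file with CRLF line endings, so the kernel looks for an interpreter named "/bin/bash" followed by a CR and fails. If you would rather fix the existing file than re-save it, stripping the carriage returns with sed works too (demonstrated below on a scratch copy; on the server you would point it at /boot/custom/cache_backup.sh instead):

```shell
# Create a scratch file with Windows (CRLF) line endings to demonstrate.
printf '#!/bin/bash\r\necho hello\r\n' > /tmp/crlf_demo.sh
chmod +x /tmp/crlf_demo.sh

# Strip the trailing carriage return from every line, in place.
sed -i 's/\r$//' /tmp/crlf_demo.sh

/tmp/crlf_demo.sh   # now runs: prints "hello"
```

dos2unix or fromdos (if installed) do the same thing.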
Banderaz Posted March 2, 2014

Thank you. Started from the beginning, and now it works. Must have made the novice mistake of using a Windows-edited file. Normally I edit all files inside CuteFTP, since it works well with Unix format.
mikedpitt420 Posted September 24, 2014

This script ran ok for me once. Now each subsequent time I run it, I get "rsync failed to set times". What am I doing wrong?
mostlydave Posted December 3, 2014

How would I need to modify this script if I wanted to back up everything in my cache/appdata to a backup share, where everything in the appdata share belongs to Dockers? Can the script stop and start all Dockers like it does services?
dirtysanchez (Author) Posted December 3, 2014

(quoting mostlydave's question above)

The rsync would certainly work regardless of what's in the appdata directory you tell it to rsync. As for modifying the stop/start commands to stop and start the Dockers, I can't help you there, as I don't yet run Dockers; I'm still on v5. That said, I'm sure there's a way to stop and start them from the CLI. Hopefully one of the gurus can help you with that.
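For what it's worth, the standard Docker CLI does provide docker ps, docker stop, and docker start commands, so a rough sketch for v6 might look like the following. The rsync paths are placeholders for your own layout, and the whole thing is an untested assumption, not the author's method:

```shell
#!/bin/bash
# Rough sketch for v6: remember which containers are running, stop them,
# back up appdata, then restart the same set. docker ps/stop/start are
# standard Docker CLI commands; the rsync paths are placeholders.

if command -v docker >/dev/null 2>&1; then
    running=$(docker ps -q || true)      # IDs of currently running containers
    if [ -n "$running" ]; then
        docker stop $running             # unquoted on purpose: one ID per word
    fi

    # Back up appdata here, same pattern as the original script, e.g.:
    # /usr/bin/rsync -avrtH --delete /mnt/cache/appdata/ /mnt/user/Backup/appdata >>/var/log/cache_backup.log

    if [ -n "$running" ]; then
        docker start $running
    fi
else
    msg="docker not installed; skipping container stop/start"
    echo "$msg"
fi
```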
mostlydave Posted December 4, 2014

My bad! I didn't realize this was the 5.x thread...
dirtysanchez (Author) Posted March 23, 2015

I've now moved to v6, so anyone finding this thread and wanting to know how to do it on v6 can find the post here.