StevenD Posted March 30, 2015

You should also be aware that the running/stopped state of the array is stored on the flash in super.dat along with the array configuration. If you boot from a backup that was taken with the array running, unRAID will assume an unclean shutdown and start a correcting parity check.

It's mainly just to keep my share settings, logs, etc. I would never restore the entire thing in a DR situation.

What is your plan for retrieving the flash drive's backup if the array isn't bootable? It's a bit of a chicken-and-egg situation, so I'm curious what your course of action would be if the flash drive dies. I would set up a fresh flash drive, assign a few drives at a time as data-only until I found my backup, then copy what I wanted back out to the new flash and work from there. Do you have a more elegant solution in mind?

I know that my "CacheBackup" share is on Disk1. I can always pop it out and read it on my Windows workstation.
dirtysanchez Posted March 30, 2015

> I think this error is only going to happen on the first run, when there are new files or directories to rsync. On the second run those directories exist, so rsync doesn't have the same problem. I'm not sure why this would be different between user shares and directly addressing the disk, though.

I was under the impression this did NOT occur on the first run, only on subsequent runs. User shares are obviously addressed differently by unRAID than direct disk shares, and for whatever reason it manifests in these errors only when backing up Plex appdata (that I know of). As strange as that sounds, it has been reported more than once. I have never had the errors, but then I have always run the backup to a disk share and not a user share. For those of you who had the errors, did changing to a disk share resolve them?
interwebtech Posted March 30, 2015

> I assume those of you that had the errors, changing to a disk share resolved them?

I won't know until the next time it runs from cron.
mdoom Posted March 31, 2015

So I have been wanting to back up my appdata for quite some time now, and I stumbled upon this thread. After reading everything here, seeing all the awesome work by dirtysanchez, looking into the expansion by StevenD, and following gundamguy's idea from this link: Time Machine for every Unix out there — I put it all together into a script that works great for me and my needs, and I thought I'd share. I liked the "time machine"-like approach where unchanged files simply become links, so I can take a truly incremental approach to backups and don't need to worry about wasting a lot of space. The other major concern I had was Plex. Plex downloads and stores TONS of metadata, and if you have a library even half the size of mine, you know it's a pain to copy/move/do anything with all those thousands and thousands of folders and files. I wanted to exclude all Plex metadata from my backups. I did this successfully and tested restoring to a new Plex installation. Sure enough, Plex restored with no metadata and instantly started downloading it again.
:-) Here is my script:

```shell
#!/bin/bash
# Set date variable for today
time_stamp=$(date +%Y-%m-%dT%H_%M_%S)

# Create backup folder for today
backup_path="/mnt/disk1/CacheBackup"
mkdir -p "${backup_path}/${time_stamp}"

# Stop plugins
/etc/rc.d/rc.Couchpotato stop

# Stop Docker apps
docker stop $(docker ps -a -q)

# Back up apps dir via rsync
date >/var/log/cache_backup.log
/usr/bin/rsync -azP \
  --delete \
  --delete-excluded \
  --exclude-from="${backup_path}/excludes.txt" \
  --link-dest="${backup_path}/current" \
  /mnt/cache/apps/ "${backup_path}/${time_stamp}" \
  >>/var/log/cache_backup.log

# Start Docker apps
/etc/rc.d/rc.docker start

# Start plugins
/etc/rc.d/rc.Couchpotato start

# Update the "current" symlink to point at the newest backup
rm -f "${backup_path}/current"
ln -s "${backup_path}/${time_stamp}" "${backup_path}/current"
```

One thing you'll notice is the "--exclude-from=" option to rsync. It lets you specify a file listing everything you want rsync to ignore. I went ahead and excluded the following:

Metadata/
Cache/
MediaCover/

The way exclude patterns work for directories:

/dir/ excludes the root-level folder /dir and everything in it
/dir/* keeps the root-level folder /dir but excludes its contents
dir/ excludes any directory named dir, anywhere in the tree (e.g. /dir/, /usr/share/dir/, /var/spool/dir/)
dir excludes any file or directory named dir, anywhere in the tree
/var/spool/lpd/**/cf* skips files starting with cf anywhere under /var/spool/lpd

I hope this helps someone! Quote Link to comment
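To make the --link-dest behavior concrete, here is a small self-contained demo (temp dirs stand in for the real cache and backup paths): a file that hasn't changed since the previous snapshot is stored as a hard link into that snapshot, so it takes no extra space.

```shell
# Demo of rsync --link-dest snapshots; mktemp dirs stand in for real paths.
base=$(mktemp -d)
mkdir -p "$base/src" "$base/snap1" "$base/snap2"
echo "unchanged" > "$base/src/a.txt"
# First backup: a full copy
rsync -a "$base/src/" "$base/snap1/"
# Second backup: unchanged files become hard links into snap1
rsync -a --link-dest="$base/snap1" "$base/src/" "$base/snap2/"
# Same inode in both snapshots -> no duplicated data on disk
stat -c %i "$base/snap1/a.txt" "$base/snap2/a.txt"
```

This is why the script above can keep many dated snapshots cheaply: each new folder only consumes space for files that actually changed since "current".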
dirtysanchez Posted March 31, 2015

That's a great idea, mdoom. Thanks for posting it! I don't personally feel the need for "time machine"-like history for the cache backup, but I will definitely use that idea in other backup scripts. Excluding all the Plex metadata is brilliant, IMHO: anyone who runs this script knows that 95% of the backup time goes to the Plex metadata, which Plex can easily redownload automatically if needed. As for all the "awesome work" by me, I wouldn't exactly call it that. I cobbled the original v5 script together from bits and pieces elsewhere on the forums and converted it to work with v6 once I upgraded. What I did do is put the pieces together and create a post so that others could easily find it and not have to search multiple threads like I did. I'll take the props, though, regardless.
gundamguy Posted April 8, 2015

Just wanted to post that this script ran over the weekend without any errors and without emailing me. I have no idea why it was OK this second time, even though I am still saving to a user share.
dertbv Posted October 4, 2015

Reviving an old thread. I have been using this command line without a problem:

/usr/bin/rsync -avrtH --delete /mnt/cache/ $BackupDir >> $LogFile

But when I add an excludes.txt file with the following command line:

/usr/bin/rsync -avrtH --delete /mnt/cache/ --exclude-from=/mnt/user/backups/excludes.txt $BackupDir >> $LogFile

I get:

mkdir "boot/scripts/ /mnt/user/backups/unRAID_cache" failed: No such file or directory

I am confused why I would be getting that error message if I am just adding an exclude option. Thanks!
trurl Posted October 4, 2015

Haven't tried it, but I think you need to put your options before your parameters. In other words:

/usr/bin/rsync -avrtH --delete --exclude-from=/mnt/user/backups/excludes.txt /mnt/cache/ $BackupDir >> $LogFile
dertbv Posted October 4, 2015

I got it figured out: --exclude-from /mnt/user/backups/excludes.txt works! Removing the = sign fixed it. Thanks for taking a look!
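For anyone hitting similar issues, here is a tiny self-contained check of --exclude-from behavior (throwaway temp paths, not the real shares), with the options placed before the source and destination arguments:

```shell
# Self-contained --exclude-from demo using throwaway temp paths.
d=$(mktemp -d)
mkdir -p "$d/src/logs" "$d/dest"
echo keep > "$d/src/keep.txt"
echo skip > "$d/src/logs/app.log"
printf 'logs/\n' > "$d/excludes.txt"
rsync -a --exclude-from="$d/excludes.txt" "$d/src/" "$d/dest/"
ls "$d/dest"   # keep.txt only; logs/ was skipped
```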
Flick Posted May 14, 2016

Any suggestions on how to filter out log files? I don't really need those backed up either. Thanks!
interwebtech Posted May 14, 2016

This method is deprecated by the new backup solution included in the Community Apps plug-in: http://lime-technology.com/forum/index.php?topic=40262.0
Flick Posted May 14, 2016

I wasn't aware it had that feature, thanks! At a glance, I just need to figure out whether it a) does the "time machine" effect like this method does, and b) lets me exclude the cache folders and such like I can with this method. Thank you.