[Plugin] CA Appdata Backup / Restore v2


Squid


I just noticed the other day that my backups had stopped working; the process had been hung for two months. I rebooted Unraid and manually initiated a backup, and it still hung on backing up notifications. I'll try removing the boot drive from the backup and try again, but the plugin really needs some way to notify users if the process has been running far too long (e.g. two months ;p )
 
Here's where it gets stuck, with the 'Abort' button doing nothing:
[Screenshot: backup progress dialog, hung with an unresponsive Abort button]
 
Here are my settings:
[Screenshot: plugin settings]
 
Cheers!
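In the meantime, a rough watchdog along these lines could fire a notification when a backup runs far too long. This is only a sketch: the `backup.php` process name and the 6-hour threshold are assumptions to adjust for your setup.

```shell
#!/bin/bash
# Watchdog sketch: alert if the backup process has been running too long.
# "backup.php" and the 6-hour threshold are assumptions -- adjust both.
MAX_SECONDS=$((6 * 3600))

pid=$(pgrep -f "backup.php" | head -n 1)
if [ -n "$pid" ]; then
  # etimes = elapsed run time of the process, in seconds
  elapsed=$(ps -o etimes= -p "$pid" | tr -d ' ')
  if [ "$elapsed" -gt "$MAX_SECONDS" ]; then
    # Unraid's bundled notification script
    /usr/local/emhttp/plugins/dynamix/scripts/notify \
      -s "Appdata backup still running" \
      -d "Backup process has been running for ${elapsed}s" \
      -i "warning"
  fi
fi
```

Scheduled hourly via something like the User Scripts plugin, this would flag a hang within a day instead of two months.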
Post the diagnostics

Sent from my NSA monitored device


  • 2 weeks later...

Suggestion for v3 -- I had a file get corrupted in my Radarr docker appdata, and had to retrieve it, and extracting the 2mb .xml file took forever because my .tar.gz backup file is huge. Can you add a setting to back up each docker container to individual .tar.gz files, rather than one enormous file?
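Until something like this exists in the plugin, a user script could approximate per-container archives. All paths here are assumptions; substitute your own appdata share and backup destination.

```shell
#!/bin/bash
# Sketch: one tarball per appdata folder, so a single app can be restored
# without unpacking everything. SRC and DEST are assumed paths.
SRC="/mnt/user/appdata"
DEST="/mnt/user/backups/appdata-split/$(date +%F)"
mkdir -p "$DEST"

for dir in "$SRC"/*/; do
  name=$(basename "$dir")
  # -C keeps the paths inside each archive relative to appdata
  tar -czf "$DEST/$name.tar.gz" -C "$SRC" "$name"
done
```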

15 hours ago, jmello said:

Suggestion for v3 -- I had a file get corrupted in my Radarr docker appdata, and had to retrieve it, and extracting the 2mb .xml file took forever because my .tar.gz backup file is huge. Can you add a setting to back up each docker container to individual .tar.gz files, rather than one enormous file?

+1 for the feature suggestion

 

In the meantime though, you don't need to extract the entire archive to retrieve a single file/folder from it (unless you're talking about access times and such due to a large single file). Here's a simple CLI guide I found.  It's even easier if you just open up the archive with something like 7zip.

https://www.cyberciti.biz/faq/linux-unix-extracting-specific-files/
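For example, something along these lines; the archive name and member path are placeholders:

```shell
# List the archive to find the exact member path of the file you need
tar -tzf CA_backup.tar.gz | grep 'config.xml'

# Extract just that one member into the current directory (the path is a
# placeholder -- use the one printed by the listing above)
tar -xzf CA_backup.tar.gz 'radarr/config.xml'
```

Note that tar still has to scan the whole compressed archive to find the member, since gzip has no random access, so a huge archive will always be slow to search. That's another point in favor of the per-container archive suggestion.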

  • 2 weeks later...

Is there any way to add a wildcard to an excluded folder?

 

My use case for this would be to exclude specific folders inside of the individual Plex media folders containing video preview and chapter thumbnails, e.g.:

\appdata\PlexMediaServer\Library\Application Support\Plex Media Server\Media\localhost\<wildcard>\<wildcard>\Contents\Chapters\
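For what it's worth, GNU tar itself accepts glob patterns in `--exclude` (and by default, exclusion wildcards match across `/`), so the plugin would only need to pass the pattern through. A sketch, with the path above translated into a tar pattern; the share layout is assumed:

```shell
# Exclude Plex chapter-thumbnail folders at any depth under Media/localhost.
# The archive name and /mnt/user layout are assumptions.
tar -czf backup.tar.gz \
    --exclude='*/Media/localhost/*/*/Contents/Chapters' \
    -C /mnt/user appdata
```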

 

Thanks!

18 minutes ago, propman07 said:

location of the Plex media directory as a variable

I assume you mean the plex library (database) which is usually in a subfolder in the appdata user share. Do you have it somewhere other than appdata? Isn't your appdata already on an SSD? If not, why not?

 

 


Ah....I can tell I'm about to look stupid....oh well....

 

You are correct. I meant the plex library database. I added a 1TB SSD to my setup, and followed a youtube video on how to move the database to an SSD. My appdata folder is not on the same SSD....I have a Samsung 970 EVO 500GB - NVMe PCIe which has my appdata folder (and sub folders) on it. Hindsight....I probably could have picked up a 2TB NVMe PCIe, and left well enough alone (after I setup the appdata folder to be on the cache drive)

 

So I've got things like

binhex-delugevpn, binhex-jackett, binhex-krusader, binhex-radarr, binhex-sonarr, CrashPlanPRO, etc. on the NVMe as my cache drive

 

I've added the unassigned device (1TB SSD) which contains the plex-data folder, which had the plex database, etc. I did this after doing some research, and learning that the plex database can get quite large based on the media library. I was worried that the 500GB NVMe SSD would run out of space...which led me to picking up the 1TB SSD.

 

Any suggestions on how to make improvements to the setup would be welcomed....still pretty new to the whole unraid scene....

 

Thanks.

1 hour ago, propman07 said:

Any suggestions on how to make improvements to the setup would be welcomed....

Post your diagnostics so I can get a better idea of your current setup. I will probably split this part of the discussion into its own topic since it really isn't about this plugin.


Actually your setup doesn't look that bad. Better than a lot of people. Many people setup the dockers, etc. before installing cache and then I have to work with them to get things that belong on cache moved or reinstalled to cache. But looks like you are good with that, except as you say, your plex library. And your cache has plenty of space for that. I would just copy the plex library to its usual appdata location on cache and change the docker to use it there instead of the UD where you have it now. Then your problems with appdata backup are solved.
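A sketch of that copy, using hypothetical mount points -- check your actual unassigned-device mount name and appdata path before running anything like this:

```shell
# Copy the Plex library from the unassigned device into appdata on cache.
# Both paths are hypothetical -- substitute your real UD mount and share.
rsync -avh "/mnt/disks/plex_ssd/plex-data/" \
      "/mnt/user/appdata/PlexMediaServer/"
```

After the copy, the Plex container's `/config` mapping would need to point at the new appdata path; verify Plex sees its library before deleting the old copy on the UD drive.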

11 minutes ago, trurl said:

I would just copy the plex library to its usual appdata location on cache and change the docker to use it there instead of the UD where you have it now. Then your problems with appdata backup are solved.

Thanks for taking a look at the setup. I'll work  on copying the plex library back to the usual location so that I can take advantage of the backup plugin. Maybe I'll see about adding the SSD to the cache and get it out of the UD where it is now.

 

Thanks again for your help.


I am having the worst luck, and am in need of help... please.

 

I followed another guide to remove bad drives that were failing (the new one is sitting here ready to go in), and had to run New Config for the drives. Then, forgetting to screenshot the cache assignments, I winged it. I then ran a parity check. The next day I found that all my Docker images are missing, BUT thankfully I have been running the backup every week, as recently as a couple of days ago...

 

However, the issue now is that when I restore it, it fails and states: "Directory renamed before its status could be extracted" (see screenshot). Not sure what I did wrong.

 

I've attached diagnostics to help with my headaches.

 

Thanks

 

Screen Shot 2019-09-25 at 10.02.10 PM.png

prime-nas-diagnostics-20190926-0610.zip

9 hours ago, doogalbeez said:

I followed another guide to remove bad drives that were failing (the new one is sitting here ready to go in), and had to run New Config for the drives. Then, forgetting to screenshot the cache assignments, I winged it. I then ran a parity check. The next day I found that all my Docker images are missing, BUT thankfully I have been running the backup every week, as recently as a couple of days ago...

Where did you find this "guide"? Sounds like it led you down the wrong path. New Config isn't typically part of drive replacement, assuming they actually needed replacing. Possibly it was simply a case of the much more common bad connection. And even if you do New Config, it shouldn't forget your disk assignments unless you tell it to.

 

Please consider asking for advice before trying to fix these sorts of problems in the future. Are you sure you didn't actually lose data winging it like this?

 

Also, your system share has files on the array, and your docker image is twice as large as recommended.

10 hours ago, doogalbeez said:

However, the issue now is that when I restore it, it fails and states: "Directory renamed before its status could be extracted" (see screenshot). Not sure what I did wrong.

Yeah, I don't know. I first thought that mover might have done something at the time, but it looks like the restore finished 2 minutes before mover was set to kick in.


Thank you for replying; I feel pretty stupid right about now.

1 hour ago, trurl said:

Where did you find this "guide"? Sounds like it led you down the wrong path. New Config isn't typically part of drive replacement, assuming they actually needed replacing. Possibly it was simply a case of the much more common bad connection. And even if you do New Config, it shouldn't forget your disk assignments unless you tell it to.

 

Please consider asking for advice before trying to fix these sorts of problems in the future. Are you sure you didn't actually lose data winging it like this?

 

Also, your system share has files on the array, and your docker image is twice as large as recommended.

I followed this guide: https://blog.linuxserver.io/2013/11/18/removing-a-drive-from-unraid-the-easy-way

41 minutes ago, Squid said:

Yeah, I don't know.  I first thought that mover might have done something at the time, but it looks like the restore finished 2 minutes before mover was set to kick in.   

I looked at the mover schedule just now and changed it from hourly to daily, just to see if it helps.

 

I ran the restore again, and it failed again. Can this be done manually through the terminal?

 

Thanks again for the help

10 minutes ago, doogalbeez said:

I followed this guide: https://blog.linuxserver.io/2013/11/18/removing-a-drive-from-unraid-the-easy-way

That guide is about removing a disk, not replacing a disk. Removing a disk and then adding a disk is not the way to replace a disk.

 

Also, that "guide" is nearly 6 years old, and for a very old version of Unraid that I assume you are not running. That very old version doesn't even present the New Config options to you the same way as the current version of Unraid. New Config (which is not what you wanted to do) will let you keep your disk assignments, without screenshots or possible mistakes in reassigning them.

 

And, even if you did actually want to remove a drive, those instructions are needlessly complex in my opinion, probably mostly due to the instructions being very, very old. You can do everything you need to do to remove a disk without going to the command line.

 

Why did you think you had bad disks instead of some more likely problem such as a bad connection? Problems communicating with the disks, such as bad connections or controller issues, are the cause of the vast majority of disabled disks.

 

 

5 minutes ago, doogalbeez said:

amazing clicking sound when the parity check was running, the normal signs of a bad hard drive.

Are you sure that drive was the source of the noise? Typically there will be other symptoms. Were there? Do you have Notifications setup to alert you immediately by email or other agent when Unraid detects a problem?

 

5 minutes ago, doogalbeez said:

Is there a guide I can follow to manually restore the backup? It's a tar.gz backup.

You might try opening it in 7zip on your PC.
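Or, from the Unraid terminal, a plain tar extraction should work as a manual restore. The archive path and destination below are assumptions -- check the plugin's configured backup destination for the real path, and stop the Docker service first so nothing holds files open while you extract:

```shell
# Manual restore sketch: unpack the plugin's backup back into appdata.
# Archive path and destination are assumed -- verify both before running.
tar -xzf /mnt/user/backups/CA_backup.tar.gz -C /mnt/user/appdata
```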

