[Plugin] CA Appdata Backup / Restore v2.5




4 minutes ago, KluthR said:

Interesting, because the image will always be backed up while VMs are running. That's not consistent at all 🤔

 

Supposing they are running, that is. I have a bunch of VMs but only use them occasionally, not all the time, and basically never during the backup window.


Unrelated to this plugin, but since I have your diagnostics:

2 minutes ago, trurl said:

I see this (anonymized)

appdata                           shareUseCache="prefer"  # Share exists on cache, disk5
domains                           shareUseCache="prefer"  # Share exists on isos-vms
G-------e                         shareUseCache="yes"     # Share exists on disk1, disk2
isos                              shareUseCache="prefer"  # Share exists on disk12
M---a                             shareUseCache="no"      # Share exists on disk1, disk2, disk3, disk4, disk5, disk6, disk7, disk8, disk9, disk10, disk11, disk12, disk13
M------t                          shareUseCache="only"    # Share exists on non-array-media
p-------l                         shareUseCache="prefer"  # Share exists on plex-pool-drive, disk2, disk3
system                            shareUseCache="prefer"  # Share exists on disk5, disk12
t------s                          shareUseCache="only"    # Share exists on torrent-pool

If you go to the User Shares page, you can click Compute... for a share to see how much of each disk is used by the share, or click the Compute All button.

 

Your appdata and system shares have files on the array. Since they are cache:prefer, Mover will move them to their designated pool (cache for both of these) if it can. Nothing can move open files, so you will have to disable Docker and VM Manager, then run Mover.
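
If you want to double-check from the console which files are still on the array before running Mover, something like this should show them (a sketch; adjust the share names to yours):

# List appdata/system files still sitting on array disks (read-only check)
find /mnt/disk*/appdata /mnt/disk*/system -type f 2>/dev/null | head -50

# Per-disk totals for a share, roughly what the Compute... button reports
du -sh /mnt/disk*/appdata 2>/dev/null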

 

3 minutes ago, trurl said:

I see this (anonymized)

b-----s                           shareUseCache="prefer"  # Share does not exist

This is a share that you created to stay on cache, but it no longer exists. Possibly named backups.

This is probably what I see in diagnostics. You can look there yourself to confirm.

 

So it is probably all gone now.

Yeah.... I looked, it's gone. Damn. Well, lesson learned. Thanks for taking the time for me. I guess I just have to rebuild the apps. Thankfully my Plex server is stored on its own drive, so nothing was lost there. Mostly it's my *arr apps and qBittorrent that I have to rebuild. Easy, but time consuming.

7 minutes ago, KluthR said:

Interesting, because the image will always be backed up while VMs are running. That's not consistent at all 🤔

Does it matter if they are running? I haven't dug very deeply into this, but I thought libvirt.img was just the definitions for the VMs and doesn't include anything dynamic, which would be in separate vdisks.

2 minutes ago, trurl said:

Does it matter if they are running? I haven't dug very deeply into this, but I thought libvirt.img was just the definitions for the VMs, and doesn't include anything dynamic which would be in separate vdisks

Yeah, it's the XMLs and NVRAM AFAIK; it should only change when manually editing a VM config or UEFI settings. Technically you're still dumping a mounted filesystem, so it's not that clean, but probably not an issue in practice.
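
As a side note, if you only want the definitions without touching the image at all, dumping them with virsh should work as a supplementary backup (a sketch; the target folder is just an example, and this covers the XMLs but not the NVRAM files):

# Dump every VM's XML definition to a folder (example path)
mkdir -p /boot/vm-xml-backup
for vm in $(virsh list --all --name); do
    [ -n "$vm" ] && virsh dumpxml "$vm" > "/boot/vm-xml-backup/$vm.xml"
done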

 

I have 3 months' worth of daily backups; I could try mounting a few to see if any of them choke...
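
Something like this should do for that check, since libvirt.img is just a filesystem image (paths are examples; point it at one of the backup copies):

# Loop-mount a backed-up libvirt.img read-only and see if it's intact
mkdir -p /mnt/libvirt-test
mount -o loop,ro /path/to/backup/libvirt.img /mnt/libvirt-test
ls -R /mnt/libvirt-test | head    # should list the qemu XML/NVRAM files
umount /mnt/libvirt-test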

10 hours ago, KluthR said:

Finally, I "finished" the new UI for the settings part. I modeled it on how Unraid does it.

(Dev sidenote: wow, all that code documentation in the webgui code is gorgeous... NOT! Maybe I'll start adding some to the important parts...)

 

However, the current design looks like this:

[Screenshot: the new settings UI]

 

I expanded the important help blocks for this screenshot.

Can we work with this? :)

The image was compressed and is unreadable on my phone.


I've got an issue. I updated to v3 about a month ago (I think? It's been a few weeks at least) but just noticed 2 days ago that it started giving me errors. I looked into it this morning and, following this thread, it was a file being edited while it was backing up... but it didn't say which file(s).

[10.02.2023 04:01:02] Separate archives disabled! Saving into one file.
[10.02.2023 04:01:02] Backing Up
/usr/bin/tar: .: file changed as we read it
[10.02.2023 04:03:56] tar creation/extraction failed!
[10.02.2023 04:03:56] Verifying Backup 
[10.02.2023 04:06:57] done

I have daily backups at 4am and auto-update at 2:30am. The weird thing is it failed the backup Friday, succeeded Saturday, then failed again this morning (Sunday). I enabled separate files for each docker (just noticed it reading through this thread... omg, thanks for adding this!) and ran a manual backup, watched the logs, and now it outputs the files that errored.

[12.02.2023 10:59:49] Backing Up: PlexMediaServer
/usr/bin/tar: PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases: file changed as we read it
/usr/bin/tar: PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support: file changed as we read it
/usr/bin/tar: PlexMediaServer/Library/Application Support/Plex Media Server: file changed as we read it
/usr/bin/tar: PlexMediaServer/Library/Application Support: file changed as we read it
/usr/bin/tar: PlexMediaServer/Library: file changed as we read it
/usr/bin/tar: PlexMediaServer: file changed as we read it
[12.02.2023 11:01:51] tar creation/extraction failed!
[12.02.2023 11:01:51] Verifying Backup PlexMediaServer

No idea why this is happening. I read in this thread it's usually due to the docker app still running while backing up, but I have the plugin shut down all dockers during backup and update. During the manual backup this morning I also verified Plex was not running by opening a second tab and checking the Docker page: every one of my dockers was stopped. I will attach both logs as well. Hope someone can help, or point me in the right direction, as I don't believe anything is accessing the appdata files other than the dockers.
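
For anyone debugging this, one way to check whether something is genuinely still touching those paths while tar runs (a rough sketch; the appdata path is an example):

# Show processes holding files open under the Plex appdata folder
lsof +D "/mnt/cache/appdata/PlexMediaServer" 2>/dev/null | head -20

# Or list files modified within the last 5 minutes during the backup window
find "/mnt/cache/appdata/PlexMediaServer" -type f -mmin -5 2>/dev/null | head -20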

backup.log backup.log


Is it possible to exclude files of a specific type from appdata backups?

 

I don't see the need to back up my Plex video thumbnails, which are all stored as *.bif files, but they're mixed in with the other metadata (non-*.bif files). I'd like to exclude only *.bif files from my backups without excluding the whole metadata folder.
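
For reference, GNU tar (which the plugin uses, per the logs in this thread) supports this kind of filter with --exclude, so a manual equivalent could look like this (paths are examples):

# Back up the Plex appdata folder while skipping every *.bif file in the tree
tar -czf /mnt/user/backups/plex_no_bif.tar.gz --exclude='*.bif' \
    -C /mnt/cache/appdata PlexMediaServer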


I think, if we're not careful, this plugin will morph into a fully-fledged backup system if all these 'enhancements' are implemented. There are many backup systems already out there for such things. I don't believe that is the intended direction for this tool, but I am happy either way.

For my requirements personally, I need a clean backup (tick), so it must stop the containers (tick), and it should also back them up individually rather than having my Emby down while all the unrelated containers are backed up (no tick yet).

Until this plugin does all that, I found a script online and modified it to handle just my Jackett container. Once I had that working, tested, and restored (as it's sort of a throwaway), I copied, pasted, and edited the script for each of my other containers, all run from User Scripts in the right order for the dependencies (a sketch of chaining them is after the script below). I have done a test restore as well. Once this appdata backup plugin does individual stop/backup/start with ordering dependencies, I will move back to it. Note: I am not a developer or anything like that. I am sure there are better and easier ways, and better commands than the ones I used, but this is how I got it working if you want to try it.

 

#!/bin/bash

# variables
#
# If copying this to make a new backup script for another container, just change
# the container_name and appdata_folder variables below.
# Note that the container name and its appdata folder name are not always the
# same - e.g. for uptimekuma the container name is UptimeKuma but the folder is
# uptimekuma - so be careful to set both correctly.

 

now=$(date +"%m_%d_%Y-%H_%M")
appdata_library_dir="/mnt/cache/appdata/"
backup_dir="/mnt/user/Backups/appdata_automatic_backups"
appdata_folder="jackett"
container_name="jackett"
num_backups_to_keep=3

echo " "
echo "Script started : $now"
echo " "
# Stop the container
echo " "
echo "Stopping: $container_name and waiting for 30 seconds for it to stop......."
docker stop "$container_name"
echo " "

# wait 30 seconds
sleep 30

# Get the state of the docker
container_running=$(docker inspect -f '{{.State.Running}}' "$container_name")
echo " "
echo "$container_name running: ${container_running}"

# If the container is still running retry 5 times
fail_counter=0
while [ "$container_running" = "true" ];
do
    fail_counter=$((fail_counter+1))
    docker stop $container_name
    echo "Stopping $container_name attempt #$fail_counter"
    sleep 30
    container_running=$(docker inspect -f '{{.State.Running}}' "$container_name")
    echo $container_running
    # Exit with an error code if the container won't stop
    # Restart container and report a warning to the Unraid GUI
    if (($fail_counter == 5));
    then
        echo "$container_name failed to stop. Restarting container and exiting"
        docker start $container_name
        /usr/local/emhttp/webGui/scripts/notify -i warning -s "$container_name Backup failed. Failed to stop container for backup."
        exit 1
    fi
done

    echo " "
    echo "Compressing and backing up....... $container_name"
    echo "gzip file is going to be......... $backup_dir/${container_name}_backup_$now.tar.gz"
    echo "Application folder is............ $appdata_folder/"
    echo "Application Libraray folder is... $appdata_library_dir"
    echo "Container Running is............. $container_running"

# Once the container is stopped, back up the appdata and restart the container
# The tar command shows progress
backup_status=1   # assume failure until tar succeeds
if [ "$container_running" = "false" ]
then
    echo " "
    echo "Changing to folder $appdata_library_dir now......."
    cd "$appdata_library_dir" || exit 1
    echo " Backing up the container with tar -zcvf now...... "
    # Archive the folder relative to the appdata dir so the tarball stores relative paths
    tar -zcvf "$backup_dir/${container_name}_backup_$now.tar.gz" "$appdata_folder"
    # Remember tar's exit status so the notification at the end reports the right result
    backup_status=$?
#   To restore: tar -xzvf /mnt/user/Backups/appdata_automatic_backups/name-of-the-backup-file-to-restore -C /mnt/cache/appdata --overwrite
    echo " "
    echo "The container $container_name is now backed up, restarting the container now ......"
    echo " "
    echo "Starting $container_name"
    echo " "
    docker start "$container_name"
fi

# Get the number of backups for this container in the backup directory
num_files=$(ls "$backup_dir/${container_name}"_backup_*.tar.gz 2>/dev/null | wc -l)
echo "Number of files in directory: $num_files"
# Get the full path of the oldest backup in the directory
oldest_file=$(ls -t "$backup_dir/${container_name}"_backup_*.tar.gz 2>/dev/null | tail -1)
echo "$oldest_file"

# After the backup, if the number of files is larger than the number of backups we want to keep
# remove the oldest backup file
if (($num_files > $num_backups_to_keep));
then
    echo "Removing file: $oldest_file"
    rm "$oldest_file"
fi


# Push a notification to the Unraid GUI depending on whether the backup passed or failed
done=$(date +"%m_%d_%Y-%H_%M")
if [[ $backup_status -eq 0 ]]; then
  /usr/local/emhttp/webGui/scripts/notify -i normal -s "$container_name Backup completed.  Started $now and finished $done"
else
  /usr/local/emhttp/webGui/scripts/notify -i warning -s "$container_name Backup failed. See log for more details."
fi

echo " "
echo "Script Started $now and finished $done"
echo " "

 

 


Hey everyone,

My past few backups have been erroring out and I can't figure out why. The logs from the backup job show the following:

 

 .
 .
[<Date>] Separate archives disabled! Saving into one file.
  .
  .
gzip: stdin: invalid compressed data--crc error
./<container>/path: Contents differ
/usr/bin/tar: Child returned status 1
/usr/bin/tar: Error is not recoverable: exiting now
[<Date>] tar verify failed!
  .
  .
[<Date>] A error occurred somewhere. Not deleting old backup sets of appdata

 

I have the plugin set to back up appdata, the USB flash drive, and libvirt.img, and I'm wondering if there's a way to create separate backup jobs per container and per source so I can see exactly which item is causing the problem. I don't really understand the output in the log above.
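
In the meantime, you can at least test whether the archive itself is corrupt, independent of the plugin (a sketch; point it at the backup file from the failed run):

# Test the gzip stream end-to-end
gzip -t /path/to/backup.tar.gz && echo "gzip stream OK"

# Walk the full tar listing; this decompresses everything once
tar -tzf /path/to/backup.tar.gz > /dev/null && echo "tar listing OK"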


Below are the logs as I found them in the plugin's "Backup / Restore Status" tab. Apologies: I meant that I don't understand the errors in the logs well enough to know how to act on them; I'm aware that the logs are telling me what the issue is.

[13.02.2023 03:00:01] Backup of appData starting. This may take awhile
[13.02.2023 03:00:01] Stopping bazarr... done! (took 6 seconds)
[13.02.2023 03:00:07] Not stopping calibre: Not started! [ / Created]
[13.02.2023 03:00:07] Stopping ddns-updater... done! (took 0 seconds)
[13.02.2023 03:00:07] Stopping duplicati... done! (took 4 seconds)
[13.02.2023 03:00:11] Stopping filebot... done! (took 1 seconds)
[13.02.2023 03:00:12] Stopping flaresolverr... done! (took 1 seconds)
[13.02.2023 03:00:13] Stopping jellyfin... done! (took 4 seconds)
[13.02.2023 03:00:17] Stopping jellyfin-alt... done! (took 4 seconds)
[13.02.2023 03:00:21] Not stopping MKVToolNix: Not started! [ / Exited (0) 2 days ago]
[13.02.2023 03:00:21] Stopping nginxproxymanager... done! (took 4 seconds)
[13.02.2023 03:00:25] Stopping ombi... done! (took 5 seconds)
[13.02.2023 03:00:30] Stopping prowlarr... done! (took 0 seconds)
[13.02.2023 03:00:30] Stopping qbittorrent... done! (took 1 seconds)
[13.02.2023 03:00:31] Stopping radarr... done! (took 4 seconds)
[13.02.2023 03:00:35] Stopping radarr-alt... done! (took 5 seconds)
[13.02.2023 03:00:40] Not stopping readarr: Not started! [ / Created]
[13.02.2023 03:00:40] Stopping sabnzbd... done! (took 4 seconds)
[13.02.2023 03:00:44] Stopping sabnzbd-alt... done! (took 5 seconds)
[13.02.2023 03:00:49] Not stopping scrutiny: Not started! [ / Exited (255) 5 days ago]
[13.02.2023 03:00:49] Stopping sonarr... done! (took 4 seconds)
[13.02.2023 03:00:53] Stopping sonarr-alt... done! (took 4 seconds)
[13.02.2023 03:00:57] Stopping Stash... done! (took 0 seconds)
[13.02.2023 03:00:57] Not stopping uptimekuma: Not started! [ / Exited (0) 2 weeks ago]
[13.02.2023 03:00:57] Not stopping vaultwarden: Not started! [ / Exited (255) 5 days ago]
[13.02.2023 03:00:57] Backing up USB Flash drive config folder to /mnt/user/backups/usbbackup/
2023/02/13 03:00:57 [31250] building file list
2023/02/13 03:00:58 [31250] .d...p..... ./
2023/02/13 03:00:58 [31250] *deleting config/super.dat.CA_BACKUP
2023/02/13 03:00:58 [31250] *deleting config/plugins/nvidia-driver/packages/5.19.17/nvidia-525.85.05-5.19.17-Unraid-1.txz.md5
2023/02/13 03:00:58 [31250] *deleting config/plugins/nvidia-driver/packages/5.19.17/nvidia-525.85.05-5.19.17-Unraid-1.txz
2023/02/13 03:00:58 [31250] .d...p..... EFI/
2023/02/13 03:00:58 [31250] .d...p..... EFI/boot/
2023/02/13 03:00:58 [31250] .d...p..... System Volume Information/
2023/02/13 03:00:58 [31250] .d..tp..... config/
2023/02/13 03:00:58 [31250] >f..tp..... config/disk.cfg
2023/02/13 03:00:58 [31250] >f..tp..... config/docker.cfg
2023/02/13 03:00:58 [31250] >f..tp..... config/drift
2023/02/13 03:00:58 [31250] >f..tp..... config/forcesync
2023/02/13 03:00:58 [31250] >f..tp..... config/ident.cfg
2023/02/13 03:00:58 [31250] >f.stp..... config/parity-checks.log
2023/02/13 03:00:58 [31250] >f.stp..... config/passwd
2023/02/13 03:00:58 [31250] >f..tp..... config/random-seed
2023/02/13 03:00:58 [31250] >f..tp..... config/shadow
2023/02/13 03:00:58 [31250] >f..tp..... config/smbpasswd
2023/02/13 03:00:58 [31250] >f+++++++++ config/super.dat
2023/02/13 03:00:58 [31250] .d...p..... config/modprobe.d/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins-error/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins-removed/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/dynamix.file.manager.plg
2023/02/13 03:00:58 [31250] >f.stp..... config/plugins/unassigned.devices.plg
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/ca.backup2/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/ca.cleanup.appdata/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/ca.update.applications/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/ca.update.applications/scripts/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/ca.update.applications/scripts/starting/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/ca.update.applications/scripts/stopping/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/community.applications/
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/community.applications/notification_scan.cron
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dockerMan/
2023/02/13 03:00:58 [31250] .d..tp..... config/plugins/dockerMan/templates-user/
2023/02/13 03:00:58 [31250] >f+++++++++ config/plugins/dockerMan/templates-user/my-MKVToolNix.xml
2023/02/13 03:00:58 [31250] >f+++++++++ config/plugins/dockerMan/templates-user/my-ddns-updater.xml
2023/02/13 03:00:58 [31250] >f.stp..... config/plugins/dockerMan/templates-user/my-prowlarr.xml
2023/02/13 03:00:58 [31250] >f+++++++++ config/plugins/dockerMan/templates-user/my-scrutiny.xml
2023/02/13 03:00:58 [31250] >f.stp..... config/plugins/dockerMan/templates-user/my-vaultwarden.xml
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dockerMan/templates/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dockerMan/templates/limetech/
2023/02/13 03:00:58 [31250] .d..tp..... config/plugins/dynamix.file.manager/
2023/02/13 03:00:58 [31250] >f+++++++++ config/plugins/dynamix.file.manager/dynamix.file.manager.txz
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dynamix.my.servers/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dynamix.system.stats/
2023/02/13 03:00:58 [31250] .d..tp..... config/plugins/dynamix/
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/dynamix/docker-update.cron
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/dynamix/language-check.cron
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/dynamix/monitor.cron
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/dynamix/mover.cron
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/dynamix/plugin-check.cron
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/dynamix/unraid-check.cron
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dynamix/notifications/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dynamix/notifications/agents/
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/dynamix/users/
2023/02/13 03:00:58 [31250] >f.stp..... config/plugins/dynamix/users/adamskub.png
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/gpustat/
2023/02/13 03:00:58 [31250] .d..tp..... config/plugins/nvidia-driver/
2023/02/13 03:00:58 [31250] >f..tp..... config/plugins/nvidia-driver/settings.cfg
2023/02/13 03:00:58 [31250] .d...p..... config/plugins/nvidia-driver/packages/
2023/02/13 03:00:58 [31250] .d..tp..... config/plugins/nvidia-driver/packages/5.19.17/
2023/02/13 03:01:14 [31250] >f+++++++++ config/plugins/nvidia-driver/packages/5.19.17/nvidia-525.89.02-5.19.17-Unraid-1.txz
2023/02/13 03:01:14 [31250] >f+++++++++ config/plugins/nvidia-driver/packages/5.19.17/nvidia-525.89.02-5.19.17-Unraid-1.txz.md5
2023/02/13 03:01:14 [31250] *deleting config/plugins/unassigned.devices/unassigned.devices-2023.02.05.tgz
2023/02/13 03:01:14 [31250] .d...p..... config/plugins/unassigned.devices-plus/
2023/02/13 03:01:14 [31250] .d...p..... config/plugins/unassigned.devices-plus/packages/
2023/02/13 03:01:14 [31250] .d...p..... config/plugins/unassigned.devices.preclear/
2023/02/13 03:01:14 [31250] >f..tp..... config/plugins/unassigned.devices.preclear/.gitignore
2023/02/13 03:01:14 [31250] .d..tp..... config/plugins/unassigned.devices/
2023/02/13 03:01:14 [31250] >f+++++++++ config/plugins/unassigned.devices/unassigned.devices-2023.02.08.tgz
2023/02/13 03:01:14 [31250] .d..tp..... config/pools/
2023/02/13 03:01:14 [31250] >f..tp..... config/pools/cache.cfg
2023/02/13 03:01:14 [31250] .d...p..... config/shares/
2023/02/13 03:01:14 [31250] .d...p..... config/ssh/
2023/02/13 03:01:14 [31250] .d...p..... config/ssh/root/
2023/02/13 03:01:14 [31250] .d...p..... config/ssl/
2023/02/13 03:01:14 [31250] .d...p..... config/ssl/certs/
2023/02/13 03:01:14 [31250] .d...p..... config/wireguard/
2023/02/13 03:01:14 [31250] .d...p..... config/wireguard/peers/
2023/02/13 03:01:14 [31250] cd+++++++++ logs/
2023/02/13 03:01:14 [31250] >f+++++++++ logs/cerberus-diagnostics-20230207-1020.zip
2023/02/13 03:01:14 [31250] >f+++++++++ logs/cerberus-diagnostics-20230209-0851.zip
2023/02/13 03:01:14 [31250] .d...p..... previous/
2023/02/13 03:01:14 [31250] .d...p..... syslinux/
2023/02/13 03:01:14 [31250] sent 363,184,357 bytes received 1,683 bytes 20,753,488.00 bytes/sec
2023/02/13 03:01:14 [31250] total size is 1,032,566,658 speedup is 2.84
[13.02.2023 03:01:14] Backing up libvirt.img to /mnt/user/backups/libvrtbackup/
[13.02.2023 03:01:14] Using Command: /usr/bin/rsync -avXHq --delete --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/user/backups/libvrtbackup/" > /dev/null 2>&1
2023/02/13 03:01:14 [31340] building file list
2023/02/13 03:01:14 [31340] sent 75 bytes received 19 bytes 188.00 bytes/sec
2023/02/13 03:01:14 [31340] total size is 1,073,741,824 speedup is 11,422,785.36
[13.02.2023 03:01:14] Backing Up appData from /mnt/user/appdata/ to /mnt/user/backups/appdatabackup/[email protected]
[13.02.2023 03:01:14] Separate archives disabled! Saving into one file.
[13.02.2023 03:01:14] Backing Up
/usr/bin/tar: ./qbittorrent/qBittorrent/config/ipc-socket: socket ignored
[13.02.2023 03:15:52] Verifying Backup

gzip: stdin: invalid compressed data--crc error
./radarr/MediaCover/129/fanart-360.jpg: Contents differ
/usr/bin/tar: Child returned status 1
/usr/bin/tar: Error is not recoverable: exiting now
[13.02.2023 03:27:01] tar verify failed!
[13.02.2023 03:27:01] done
[13.02.2023 03:27:01] Searching for updates to docker applications
[13.02.2023 03:27:16] Starting bazarr... (try #1) done!
[13.02.2023 03:27:18] Starting ddns-updater... (try #1) done!
[13.02.2023 03:27:20] Starting duplicati... (try #1) done!
[13.02.2023 03:27:23] Starting filebot... (try #1) done!
[13.02.2023 03:27:25] Starting flaresolverr... (try #1) done!
[13.02.2023 03:27:27] Starting jellyfin... (try #1) done!
[13.02.2023 03:27:30] Starting jellyfin-alt... (try #1) done!
[13.02.2023 03:27:32] Starting nginxproxymanager... (try #1) done!
[13.02.2023 03:27:34] Starting ombi... (try #1) done!
[13.02.2023 03:27:36] Starting prowlarr... (try #1) done!
[13.02.2023 03:27:39] Starting qbittorrent... (try #1) done!
[13.02.2023 03:27:41] Starting radarr... (try #1) done!
[13.02.2023 03:27:44] Starting radarr-alt... (try #1) done!
[13.02.2023 03:27:46] Starting sabnzbd... (try #1) done!
[13.02.2023 03:27:48] Starting sabnzbd-alt... (try #1) done!
[13.02.2023 03:27:51] Starting sonarr... (try #1) done!
[13.02.2023 03:27:53] Starting sonarr-alt... (try #1) done!
[13.02.2023 03:27:55] Starting Stash... (try #1) done!
[13.02.2023 03:27:58] A error occurred somewhere. Not deleting old backup sets of appdata
[13.02.2023 03:27:58] Backup / Restore Completed
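
One way to narrow down the fanart-360.jpg failure above: extract just that file from the archive and compare it with the live copy. If the extraction itself dies with a CRC error, the archive is bad; if it extracts but differs, the file simply changed after it was archived (a sketch; substitute the actual archive name from the log):

tar -xzOf "/mnt/user/backups/appdatabackup/<archive-name>.tar.gz" \
    ./radarr/MediaCover/129/fanart-360.jpg \
    | cmp - /mnt/user/appdata/radarr/MediaCover/129/fanart-360.jpg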

 
