UNRAID 6.7.1 easy way to backup and restore specific dockers



I have been running unraid for a few months and love it. The only thing I am unsure of is how to back up and restore specific dockers. I know I can use the appdata backup/restore plugin to back up and restore the entire appdata folder, but is there an easy way to take snapshots and back up and restore specific dockers? For example, I have had to rebuild my Plex libraries 3 times because of issues; I wish I could take a snapshot that would restore Plex to the exact state it is in now.

 

Thanks for the help.

Edited by amorillo
forgot to add unraid version
Link to comment
4 hours ago, testdasi said:

What makes the plugin not easy?

To my knowledge, the appdata backup and restore plugin deals with the entire appdata folder. If you read my question, I specifically asked about a method of backing up individual dockers, not the entire appdata folder. Thanks for any assistance that can be provided.

Link to comment
1 hour ago, Squid said:

Been on my todo list forever

Sent from my NSA monitored device
 

Thanks for your response Squid. I've been googling for an answer without success... are there any manual steps I can take to make an exact backup of the current state of a docker, and then restore it?

Any assistance would be greatly appreciated, and thank you for all you have already done for the unraid community.

Link to comment
15 minutes ago, amorillo said:

To my knowledge, the appdata backup and restore plugin deals with the entire appdata folder. If you read my question, I specifically asked about a method of backing up individual dockers, not the entire appdata folder. Thanks for any assistance that can be provided.

When I was testing out various configs, I simply used User Scripts and did a simple cp to create a copy. I reckon someone more versed in shell scripting could create a folder based on the time and cron it every hour or something.
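
Something along those lines, as a rough sketch (the container name and destination path are placeholders - adjust them to your setup):

#!/bin/bash
# Rough sketch: stop the container, copy its appdata into a timestamped
# folder, then start it again.  The name and paths are examples only.
name="binhex-plexpass"
dest="/mnt/user/backup/$name.$(date +%Y-%m-%d-%H-%M-%S)"

docker stop "$name"
cp -a "/mnt/user/appdata/$name" "$dest"   # -a preserves owners and permissions
docker start "$name"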

Link to comment

Backup/Restore dockers.

 

The backup directory contains cache files and gzip archives.  Running dockers are stopped and started during backup and restore.  The archives are created with tar and compressed with gzip, so owners and permissions are preserved in the backup files.  If you want to run with hard-coded variables, set them under Defaults.
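
With the defaults, the backup directory ends up looking roughly like this (docker name and timestamp are examples):

/mnt/user/Backup/Dockers/
├── appdata/                                      <- rsync cache (working copies)
│   └── binhex-plexpass/
└── binhex-plexpass/
    └── binhex-plexpass.2019-07-02-12-30-38-EDT.tgz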

 

Usage:

Usage: backupDockers.sh [-b|-r|-l] [-a] [-c] [-d <backup directory>] [-s] [<dockers and/or archive files>...]

  -b : backup mode
  -r : restore mode
  -l : list dockers
  -a : all dockers
  -c : CRC comparison during rsync (default checks time and size)
  -d : set backup directory
  -s : keep the pre-restore copy of appdata when restoring

Examples:

 

Back up dockers into a specific backup directory:

backupDockers.sh -b -d /mnt/user/backup/dockers binhex-plexpass transmission

Restore the latest backup of one docker, and a specific archive of another, from a given backup directory:

backupDockers.sh -r -d /mnt/user/backup/dockers binhex-plexpass transmission.2019-07-02-12-30-38-EDT.tgz

Back up all dockers into a specific backup directory:

backupDockers.sh -bad /mnt/user/backup/dockers

Restore all dockers from a specific backup directory:

backupDockers.sh -rad /mnt/user/backup/dockers

Source:

If copy/paste doesn't work, download the file: backupDockers.sh

#!/bin/bash

# Defaults
backup="/mnt/user/Backup/Dockers"
restore=false
all=false
checksum=false
save=false
dockers=()
files=()

usage()
{
    echo "Usage: backupDockers.sh: [-a] [-d <backup directory>] [<dockers and/or archive files>...]"
    echo
    echo "  -b : backup mode"
    echo "  -r : restore mode"
    echo "  -l : list dockers"
    echo "  -a : all dockers"
    echo "  -c : crc comparison during rsync, default is check time and size"
    echo "  -d : set backup directory"
    echo "  -s : save backup during restore"
    echo
    exit 1
}

while getopts 'brlacsd:?h' opt
do
    case $opt
    in
        b) restore=false
           ;;
        r) restore=true
           ;;
        l) docker ps -a --format "{{.Names}}" | sort -fV
           exit 0
           ;;
        a) all=true
           ;;
        c) checksum=true
           ;;
        s) save=true
           ;;
        d) backup=${OPTARG%/}
           ;;
    h|?|*) usage
           ;;
    esac
done

shift $(($OPTIND - 1))

if [ "$all" == "true" ]
then
    readarray -t all < <(docker ps -a --format "{{.Names}}" | sort -fV)
else
    all=()
fi

readarray -t items < <(printf '%s\n' "$@" "${dockers[@]}" "${all[@]}" | awk '!x[$0]++')

[ "${items[0]}" == "" ] && usage

for item in "${items[@]}"
do
    if echo "$item" | grep -sqP ".+\.\d\d\d\d-\d\d-\d\d-\d\d-\d\d-\d\d-\w+\.tgz"
    then
        files+=("$item")
    else
        if [ -n "$item" ]
        then
            dockers+=("$item")
        fi
    fi
done

date=$(date +"%Y-%m-%d-%H-%M-%S-%Z")
echo "DATE: $date"

appdata="/mnt/user/appdata"
cache="$backup/appdata"

if [ "$restore" == "true" ]
then
    restores=()
    errors=()

    for docker in "${dockers[@]}"
    do
        file="$(ls -t $backup/$docker/*.tgz 2>/dev/null | head -1)"

        [ -e "$file" ] && restores+=("$file") || errors+=("$docker")
    done

    for file in "${files[@]}"
    do
        docker=$(echo "$file" | cut -d '.' -f 1)
        file="$backup/$docker/$file"

        [ -e "$file" ] && restores+=("$file") || errors+=("$file")
    done

    for error in "${errors[@]}"
    do
        archive=$(echo "$error" | rev | cut -d '/' -f 1 | rev)

        echo "ERROR: $archive: archive not found"
    done

    readarray -t restores < <(printf '%s\n' "${restores[@]}" | awk '!x[$0]++')

    for restore in "${restores[@]}"
    do
        archive=$(echo "$restore" | rev | cut -d '/' -f 1 | rev)
        docker=$(echo "$archive" | cut -d '.' -f 1)

        [ "$docker" == "" ] && continue

        echo "DOCKER: $docker"

        running=$(docker ps --format "{{.Names}}" -f name="^$docker$")

        if [ "$docker" == "$running" ]
        then
            echo "STOP: $docker"
            docker stop --time=30 "$docker" >/dev/null
        fi

        cd "$appdata"

        # Move the current appdata folder aside before extracting.
        saved="$docker.$date"

        echo "MOVE: $docker -> $saved"
        mv -f "$docker" "$saved" 2>/dev/null

        if [ -d "$saved" ]
        then
            echo "RESTORE: $archive -> $appdata"
            pv "$restore" | tar --same-owner --same-permissions -xzf -

            if [ ! -d "$appdata/$docker" ]
            then
                echo "ERROR: restore failed"
                mv -f "$saved" "$docker" 2>/dev/null

                if [ ! -d "$appdata/$docker" ]
                then
                    echo "ERROR: repair failed"
                fi
            elif [ "$save" != "true" ]
            then
                # Restore verified; drop the pre-restore copy unless -s was given.
                rm -rf "$saved"
            fi
        else
            echo "ERROR: move failed"
        fi

        if [ "$docker" == "$running" ]
        then
            echo "START: $docker"
            docker start "$docker" >/dev/null
        fi
    done
else
    for docker in "${files[@]}" "${dockers[@]}"
    do
        if ! docker ps -a --format "{{.Names}}" | grep -qsx "$docker"
        then
            echo "ERROR: $docker: docker not found"
        fi
    done

    mkdir -p "$backup" "$cache"
    chown nobody:users "$backup" "$cache"
    chmod ug+rw,ug+X,o-rwx "$backup" "$cache"

    for docker in "${dockers[@]}"
    do
        if [ -d "$appdata/$docker" ]
        then
            echo "DOCKER: $docker"

            running=$(docker ps --format "{{.Names}}" -f name="^$docker$")

            if [ "$docker" == "$running" ]
            then
                echo "STOP: $docker"
                docker stop --time=30 "$docker" >/dev/null
            fi

            echo "SYNC: $docker"
            [ "$checksum" == "true" ] && checksum=c || checksum=
            rsync -ha$checksum --delete "$appdata/$docker" "$cache"

            if [ "$docker" == "$running" ]
            then
                echo "START: $docker"
                docker start "$docker" >/dev/null
            fi

            mkdir -p "$backup/$docker"

            echo "GZIP: $docker.$date.tgz"
            tar cf - -C "$cache" "$docker" -P | pv -s $(du -sb "$cache/$docker" | cut -f 1) | gzip > "$backup/$docker/$docker.$date.tgz"
            chown -R nobody:users "$backup/$docker"
            chmod -R ug+rw,ug+X,o-rwx "$backup/$docker"
        fi
    done
fi

date=$(date +"%Y-%m-%d-%H-%M-%S-%Z")
echo "DATE: $date"

 

Edited by JoeUnraidUser
  • Like 5
  • Thanks 2
Link to comment
  • 1 year later...

Thanks for this script @JoeUnraidUser!

 

Quick question: Is there a reason a cache is used for this rather than running Gzip on the original files?  I'd prefer not to have a complete copy of my docker files in addition to the archive, but before I modify the script I want to make sure I'm not missing something that makes the cache necessary.  Thanks for any help you can provide!

 

EDIT: After using the script a bit further, it seems the cache is there to keep each docker's downtime to a minimum.  After the first cache is created, each subsequent run only copies the changed/new files to the cache, which means the docker is only offline for a short period of time.  The gzip is then run on the cached files while the docker is already back up and running.  If there's another reason I'm not seeing, please let me know.  Thanks again!
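
In other words, the pattern in the script boils down to this (variable names as in the script; $cache is the appdata folder inside the backup directory):

docker stop --time=30 "$docker"                   # container goes offline here...
rsync -ha --delete "$appdata/$docker" "$cache"    # incremental: only changed/new files are copied
docker start "$docker"                            # ...and comes back online here
tar czf "$backup/$docker/$docker.$date.tgz" -C "$cache" "$docker"   # compression runs after the restart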

Edited by rippedwarrior
Additional info discovered
Link to comment
  • 2 months later...

Thanks @JoeUnraidUser for this excellent script.  I'm using it instead of CA AppData Backup now, as I'm kinda tired of ALL of my dockers stopping for hours every day for backup.  This one is great because it only stops each docker while it backs that docker up, and then starts it again.  The caching mechanism further reduces docker downtime.

 

One caveat for anyone else using it - your appdata folder must be named exactly the same as the docker name.  I had a few that were different, and found it easier to rename the docker rather than the folder.  Also, any folders in appdata that don't match the name of an existing docker (e.g. from old containers no longer in use) won't be backed up.
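
A quick way to spot those mismatches (a one-liner sketch, assuming the default appdata location):

# Appdata folders with no matching container name - the script would skip these.
comm -13 <(docker ps -a --format '{{.Names}}' | sort) <(ls /mnt/user/appdata | sort)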

 

Edited by jammin
  • Like 1
Link to comment
  • 1 month later...

Hi @JoeUnraidUser

I just tried your script (which is exactly what I am looking for), but the system gives me this error message

 

backupDockers.sh: line 52: syntax error near unexpected token `<'
backupDockers.sh: line 52: `    readarray -t all < <(printf '%s\n' "$(docker ps -a --format "{{.Names}}" | sort -fV)")'

 

I tried pasting it in as text and also downloading the provided file, but without success.

As this post is ranked #1 in Google for unraid docker backups, I think fixing this would probably be a big gain for the community.
(Maybe I am the problem.)

Link to comment
10 hours ago, 1unraid_user said:

Hi @JoeUnraidUser

I just tried your script (which is exactly what I am looking for), but the system gives me this error message

 


backupDockers.sh: line 52: syntax error near unexpected token `<'
backupDockers.sh: line 52: `    readarray -t all < <(printf '%s\n' "$(docker ps -a --format "{{.Names}}" | sort -fV)")'

 

I tried pasting it in as text and also downloading the provided file, but without success.

As this post is ranked #1 in Google for unraid docker backups, I think fixing this would probably be a big gain for the community.
(Maybe I am the problem.)

 

I can't seem to reproduce the same error.  What command line are you entering?

Link to comment
On 10/4/2020 at 5:29 PM, rippedwarrior said:

Thanks for this script @JoeUnraidUser!

 

Quick question: Is there a reason a cache is used for this rather than running Gzip on the original files?  I'd prefer not to have a complete copy of my docker files in addition to the archive, but before I modify the script I want to make sure I'm not missing something that makes the cache necessary.  Thanks for any help you can provide!

 

EDIT: After using the script a bit further, it seems the cache is there to keep each docker's downtime to a minimum.  After the first cache is created, each subsequent run only copies the changed/new files to the cache, which means the docker is only offline for a short period of time.  The gzip is then run on the cached files while the docker is already back up and running.  If there's another reason I'm not seeing, please let me know.  Thanks again!

 

That's exactly why I used a cache.

Link to comment
16 hours ago, JoeUnraidUser said:

 

I can't seem to reproduce the same error.  What command line are you entering?


Fixed it by running

chmod u+x backupDockers.sh

 

I had tried running the command with "sudo" before, because I already assumed it might be a permissions problem and thought I could work around it with sudo.
Turns out, however, sudo was part of the problem. I don't fully understand it, but my guess is the script was being run through plain sh before, which doesn't support the <(...) process substitution on that line, whereas once it's executable it runs under its #!/bin/bash shebang. Either way, it's running now :)

Link to comment

Did anyone manage to run it without the array spinning up?
I use a cached share as the backup destination as well as the Dynamix Folder Cache plugin.

 

However, my array still spins up every time I start a backup job :(

 

By the way: is it on purpose that the script's cache folder is not being deleted after it finishes? It looks to me like the "appdata" folder in the backup could be deleted afterwards, as we have the .tgz.

Edited by 1unraid_user
Added another question
Link to comment
On 6/26/2019 at 10:58 AM, JoeUnraidUser said:

Backup/Restore dockers.

 

[clipped]

Noob question: where do I store the script? I note that the UserScripts plugin doesn't work with interactive scripts, so I'm guessing I have to drop to terminal to execute this, but I have no idea what 'best practice' is for storing scripts. Should I just place it in the Backup directory? Or is it better to place it on the flash drive somewhere?

Thanks for your help.

Link to comment
1 hour ago, jademonkee said:

Noob question: where do I store the script? I note that the UserScripts plugin doesn't work with interactive scripts, so I'm guessing I have to drop to terminal to execute this, but I have no idea what 'best practice' is for storing scripts. Should I just place it in the Backup directory? Or is it better to place it on the flash drive somewhere?

Thanks for your help.

 

You can run it from any folder except the flash drive.  You can then run it from within a script in the UserScripts plugin.
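
For example, a User Scripts entry can just be a one-line wrapper (the path below is only an example of where you might keep the script):

#!/bin/bash
# Example User Scripts wrapper: back up all dockers on a schedule.
/mnt/user/scripts/backupDockers.sh -bad /mnt/user/backup/dockers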

  • Thanks 1
Link to comment
  • 5 months later...
  • 2 months later...

Hi @1unraid_user 

On 7/1/2021 at 2:39 AM, 1unraid_user said:

Hi @JoeUnraidUser,

I am still using your script as part of my backup routine, but I'm ending up with a lot of backups in the meantime.

Do you have a way to delete the oldest files as soon as a certain number of backups is reached (e.g. only keep 10 backups per docker and then delete the oldest)?

 

You could add this line to the script (or to your command when you run the script):

 

find /path/to/backupfiles/ -name '*.tgz' -mtime +5 -exec rm {} \;

 

That will delete all files in the folder that are more than 5 days old. You can change the number to 14 (for 2 weeks), etc. Make sure you choose a folder with ONLY the backups in it and no other files.
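
If you'd rather keep a fixed number of backups per docker, as asked, something like this should work instead (a sketch; it keeps the 10 newest .tgz files in each docker's folder and assumes no spaces in the file names):

for dir in /path/to/backupfiles/*/
do
    # Newest first; everything from the 11th archive onward is deleted.
    ls -t "$dir"*.tgz 2>/dev/null | tail -n +11 | xargs -r rm --
done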

Edited by TheSpook
sorry - mistake in the syntax
Link to comment
  • 3 weeks later...
On 12/10/2020 at 7:36 AM, jammin said:

One caveat for anyone else using it - your appdata folder must be named exactly the same as the docker name.  I had a few that were different, and found it easier to rename the docker rather than the folder.  Also, any folders in appdata that don't match the name of an existing docker (e.g. from old containers no longer in use) won't be backed up.

Thanks for this. I was going crazy trying to back up my unifi-controller docker and just getting an empty appdata folder in the backup, until I saw your post and realized the names didn't match.

Link to comment
  • 1 month later...

For those who, like me, are not familiar with Linux: just put the file in /usr/local/sbin and run chmod a+x backupDockers.sh (or chmod +x backupDockers.sh). The script can then be called from any folder in the terminal.

 

Now I can edit some file in my docker, and if I mess something up, just restore that one container.


Thanks!

Edited by wallas9512
Link to comment

Hey,

 

@JoeUnraidUser thanks for your script, it's working like a charm so far.

 

At the moment I'm using Duplicati to back up my appdata offsite, but deduplication doesn't really work with compressed/archived files, is that correct?
If so, how do I need to adjust the script so that I just get folders without the *.tgz files?


I tried to figure it out by myself, but I'm pretty new to all of this.

 

 

Edited by Adeon
Link to comment
