btrfs snapshots



paging johnnie.black!

 

I'm currently using btrfs encrypted disks in my server. By default, Unraid does not create btrfs subvolumes for user shares. In order to set up regular snapshots, would you need to recreate user shares as subvolumes? What process do you use to snapshot all your disks?

4 minutes ago, aberg83 said:

In order to set up regular snapshots, would you need to recreate user shares as subvolumes?

Yes. If you have existing data on the disk(s) it can take a while to move the data into the subvolume, since it's like a disk-to-disk copy; it can't be moved directly. But there's a way around that, and this is what I did to quickly convert a share to a subvolume:

 

Rename current share to a temp name:

mv /mnt/disk1/YourShare /mnt/disk1/temp

Create a new subvolume with old share name:

btrfs sub create /mnt/disk1/YourShare

Use btrfs COW (reflink) to make an instant (or almost instant; it can take a few seconds) copy of the data into the new subvolume:

cp -aT --reflink=always /mnt/disk1/temp /mnt/disk1/YourShare

Delete the temp folder:

rm -r /mnt/disk1/temp

Done. Repeat this for all disks and shares.
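For illustration, a minimal sketch that repeats the conversion for one share across several disks (the disk range and share name are examples; make sure nothing is writing to the share while it runs):

#!/bin/bash
share=YourShare
for i in {1..5} ; do
  # skip disks that don't hold this share
  [ -d /mnt/disk$i/"$share" ] || continue
  mv /mnt/disk$i/"$share" /mnt/disk$i/temp
  btrfs sub create /mnt/disk$i/"$share"
  # reflink copy: near-instant and uses no extra space
  cp -aT --reflink=always /mnt/disk$i/temp /mnt/disk$i/"$share"
  rm -r /mnt/disk$i/temp
done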

 

 

You should also create a folder (or more than one, if there are several shares on each disk) to hold the snapshots; this can be a regular folder, e.g.:
 

mkdir /mnt/disk1/snaps

Then I use the User Scripts plugin to create the snapshots: at regular intervals on my always-on server, and at first array start on cold storage/backup servers. I use a script like this:

 

#!/bin/bash
nd=$(date +%Y-%m-%d-%H%M)
for i in {1..28} ; do
btrfs sub snap -r /mnt/disk$i/TV /mnt/disk$i/snaps/TV_$nd
done
beep -f 500 ; beep -f 500 ; beep -f 500

On line 3 specify the correct number of disks where the share is, e.g., for disks 1 to 5 it would be

for i in {1..5} ; do

and adjust the paths as necessary. I use the beeps on my backup servers so I know when the snapshots are done and the server is ready to receive new data.


Thanks for the excellent guidance!

 

I assume once you create a subvolume, it shows up as a regular folder, so Unraid automatically picks it up as a user share?

 

Also, I assume if you add a new disk to your array, before adding data, you have to create the required subvolumes?


These can also be useful while there's no GUI support for listing and deleting older snapshots, also using the User Scripts plugin.

 

List existing snapshots for a share. I check just one disk, since all the other disks containing that share will have the same ones:

 

#!/bin/bash
btrfs sub list /mnt/disk1

Delete snapshots:

 

#!/bin/bash
#argumentDescription=Enter in the arguments
#argumentDefault=Date
for i in {1..28} ; do
btrfs sub del /mnt/disk$i/snaps/TV_$1
done

For example, the list script will produce this:

 

Script location: /tmp/user.scripts/tmpScripts/list sspaces snapshots/script
Note that closing this window will abort the execution of this script
ID 613 gen 16585 top level 5 path sspaces
ID 614 gen 16580 top level 5 path snaps
ID 3381 gen 4193 top level 614 path snaps/sspaces_daily_2018-07-01-070001
ID 3382 gen 4200 top level 614 path snaps/sspaces_daily_2018-07-02-070001
ID 3386 gen 4204 top level 614 path snaps/sspaces_daily_2018-07-03-070001
ID 3387 gen 4206 top level 614 path snaps/sspaces_daily_2018-07-04-070001
ID 3391 gen 4213 top level 614 path snaps/sspaces_daily_2018-07-05-070002
ID 3394 gen 4219 top level 614 path snaps/sspaces_daily_2018-07-06-070001
ID 3419 gen 4231 top level 614 path snaps/sspaces_daily_2018-07-07-070001
ID 3518 gen 4260 top level 614 path snaps/sspaces_daily_2018-07-08-070001
ID 3522 gen 4263 top level 614 path snaps/sspaces_daily_2018-07-09-070001
ID 3541 gen 4268 top level 614 path snaps/sspaces_daily_2018-07-10-070001
ID 3545 gen 4274 top level 614 path snaps/sspaces_daily_2018-07-11-070001
ID 3554 gen 4283 top level 614 path snaps/sspaces_daily_2018-07-12-070001
ID 3634 gen 4304 top level 614 path snaps/sspaces_daily_2018-07-13-070001
ID 3638 gen 4307 top level 614 path snaps/sspaces_daily_2018-07-14-070001
ID 3645 gen 4312 top level 614 path snaps/sspaces_daily_2018-07-15-070001
ID 3676 gen 4320 top level 614 path snaps/sspaces_daily_2018-07-16-070001
ID 3695 gen 4326 top level 614 path snaps/sspaces_daily_2018-07-17-070001
ID 3757 gen 4339 top level 614 path snaps/sspaces_daily_2018-07-18-070001
ID 3779 gen 4348 top level 614 path snaps/sspaces_daily_2018-07-19-070001
ID 3780 gen 4351 top level 614 path snaps/sspaces_daily_2018-07-20-070001
ID 3781 gen 4359 top level 614 path snaps/sspaces_daily_2018-07-21-070001
ID 3782 gen 4391 top level 614 path snaps/sspaces_daily_2018-07-22-070001
ID 3783 gen 4392 top level 614 path snaps/sspaces_daily_2018-07-23-070001
ID 3784 gen 4398 top level 614 path snaps/sspaces_daily_2018-07-24-070001
ID 3785 gen 4402 top level 614 path snaps/sspaces_daily_2018-07-25-070001
ID 3786 gen 4410 top level 614 path snaps/sspaces_daily_2018-07-26-070001
ID 3914 gen 4473 top level 614 path snaps/sspaces_daily_2018-07-27-070001
ID 4021 gen 9626 top level 614 path snaps/sspaces_daily_2018-07-28-070002
ID 4047 gen 16427 top level 614 path snaps/sspaces_daily_2018-07-29-070001
ID 4048 gen 16429 top level 614 path snaps/sspaces_daily_2018-07-30-070001
ID 4049 gen 16431 top level 614 path snaps/sspaces_daily_2018-07-31-070001
ID 4050 gen 16437 top level 614 path snaps/sspaces_daily_2018-08-01-070001
ID 4051 gen 16445 top level 614 path snaps/sspaces_daily_2018-08-02-070001
ID 4052 gen 16453 top level 614 path snaps/sspaces_daily_2018-08-03-070001
ID 4053 gen 16461 top level 614 path snaps/sspaces_daily_2018-08-04-070001
ID 4054 gen 16477 top level 614 path snaps/sspaces_daily_2018-08-05-070001
ID 4055 gen 16505 top level 614 path snaps/sspaces_daily_2018-08-06-070001
ID 4056 gen 16508 top level 614 path snaps/sspaces_daily_2018-08-07-070001
ID 4057 gen 16515 top level 614 path snaps/sspaces_daily_2018-08-08-070001
ID 4058 gen 16522 top level 614 path snaps/sspaces_daily_2018-08-09-070001
ID 4059 gen 16525 top level 614 path snaps/sspaces_daily_2018-08-10-070001
ID 4060 gen 16564 top level 614 path snaps/sspaces_daily_2018-08-11-070001
ID 4061 gen 16579 top level 614 path snaps/sspaces_daily_2018-08-12-070002
ID 4062 gen 16580 top level 614 path snaps/sspaces_daily_2018-08-13-070001

 

Then I can input just one date, or use a wildcard to delete several snapshots at once; for example, entering 2018-07-0* or 2018-07* as the argument deletes the oldest 10 or all of last month's snapshots.
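For illustration, with 2018-07-0* as the argument the loop above effectively runs, on each disk, something like this (share name and disk number are just examples):

# the shell expands the wildcard, so every matching snapshot on the disk is deleted in one call
btrfs sub del /mnt/disk1/snaps/TV_2018-07-0*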

Just now, aberg83 said:

And why does a live snapshot of a VM, which leads to a crash-consistent state, cause issues?

It can still cause issues, since any data in RAM will not be snapshotted. I did try several times to boot off a live VM snapshot and it always worked, the same way Windows will usually recover from a crash/power loss, but there could be issues. What I do is a live snapshot of all VMs every day, and at least once a week I try to turn them all off and do an offline snapshot; this way I have more options if I need to recover.
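For illustration, a minimal sketch of the offline variant, assuming the VM vdisks live in a btrfs subvolume at /mnt/cache/domains and a /mnt/cache/snaps folder already exists (VM name and paths are examples, not necessarily the setup described above):

#!/bin/bash
vm="Windows 10"
nd=$(date +%Y-%m-%d-%H%M)
# ask the guest to shut down cleanly and wait until it is no longer running
virsh shutdown "$vm"
while virsh list --name | grep -Fqx "$vm" ; do sleep 5 ; done
# read-only snapshot of the now-consistent vdisks, then start the VM again
btrfs sub snap -r /mnt/cache/domains /mnt/cache/snaps/domains_$nd
virsh start "$vm"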

  • 7 months later...
On 8/13/2018 at 8:44 AM, johnnie.black said:

 

On 8/13/2018 at 8:23 AM, aberg83 said:

In order to set up regular snapshots, would you need to recreate user shares as subvolumes?

Yes. If you have existing data on the disk(s) it can take a while to move the data into the subvolume, since it's like a disk-to-disk copy; it can't be moved directly. But there's a way around that, and this is what I did to quickly convert a share to a subvolume:

 

Rename current share to a temp name:


mv /mnt/disk1/YourShare /mnt/disk1/temp

Create a new subvolume with old share name:


btrfs sub create /mnt/disk1/YourShare

Use btrfs COW (reflink) to make an instant (or almost instant; it can take a few seconds) copy of the data into the new subvolume:


cp -aT --reflink=always /mnt/disk1/temp /mnt/disk1/YourShare

Delete the temp folder:


rm -r /mnt/disk1/temp

Done. Repeat this for all disks and shares.

 

I need clarification on the "repeat this for all disks and shares" part.

 

I have an array of 18 disks. Currently just using regular shares but want to start taking snapshots.

 

What I did to test this was move the whole "data share" onto disk 1, since it was spread over several disks as a regular share. Worked great! But now here is my question:

 

If I were to move all my data from the "data share" onto a single disk and follow the commands for that single share, do I need to repeat the commands on each disk with the same name so files can propagate onto those disks as the drive gets full, as happens with basic shares? Or will it do that on its own? Then I would do the same thing on each disk for the "movie share"?

5 hours ago, Brandon87 said:

If I were to move all my data from the "data share" onto a single disk and follow the commands for that single share, do I need to repeat the commands on each disk with the same name so files can propagate onto those disks as the drive gets full, as happens with basic shares? Or will it do that on its own? Then I would do the same thing on each disk for the "movie share"?

With Unraid each disk is an independent filesystem, so if the share spans multiple disks it will need to be snapshotted on each disk; you can do that easily with a script.
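For illustration, a minimal sketch of preparing every data disk up front so files landing on any of them end up inside a snapshot-able subvolume, assuming all array disks are formatted btrfs (share name and disk count are examples):

#!/bin/bash
share=data
for i in {1..18} ; do
  [ -d /mnt/disk$i ] || continue                          # skip disk slots that aren't mounted
  [ -e /mnt/disk$i/"$share" ] || btrfs sub create /mnt/disk$i/"$share"
  mkdir -p /mnt/disk$i/snaps                              # folder to hold that disk's snapshots
done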

  • 2 months later...
On 3/20/2019 at 9:54 PM, johnnie.black said:

With Unraid each disk is an independent filesystem, so if the share spans multiple disks it will need to be snapshotted on each disk; you can do that easily with a script.

Yes.  That's one of the more powerful features of unRAID IMHO, although it does make BTRFS snapshots more complicated.  Would you suggest forcing shares to be on particular disks (as shown here for example) in order to make the snapshot procedure simpler?

Would you suggest forcing shares to be on particular disks (as shown here for example) in order to make the snapshot procedure simpler?

It's an option. Most of my media servers only have one share, so I just snapshot that and send/receive it to the backup server; I have a script that does an incremental send/receive for all disks in order.

 

10 hours ago, johnnie.black said:

It's an option. Most of my media servers only have one share, so I just snapshot that and send/receive it to the backup server; I have a script that does an incremental send/receive for all disks in order.

Thanks, dude.  Is your script publicly available?  I have seen a number of scripts for btrfs snapshotting but none unRAID-specific so that would be helpful.

On 5/21/2019 at 9:15 PM, servidude said:

Is your script publicly available?

I don't mind posting it, but note that I know nothing about scripting; I'm just good at googling and finding examples of what I want to do, so the script is very crude, and while it works great for me and my use case it likely won't for other use cases. Also:

 

- send/receive currently has no way of showing progress/transfer size, so I do it by using pv after comparing the used size on both servers; obviously this will only be accurate if both servers contain the same data, including the same snapshots, i.e., when I delete old snapshots on the source I also delete them on the destination.

- you'll need to pre-create the SSH keys (see the sketch after this list).

- if any of the CPUs doesn't have hardware AES support, remove "-c [email protected]" from the SSH options.

- for the script to work correctly, the most recent snapshot (the one used as the parent for the incremental btrfs send) must exist on both source and destination, so the initial snapshot for each disk needs to be sent manually, using the same name format.
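A minimal sketch of the key setup, assuming you run it as root on the source server and the destination is 192.168.1.24 as in the script below (adjust the IP to your setup):

# generate a key pair on the source server (accept the default path, empty passphrase)
ssh-keygen -t ed25519
# copy the public key to the destination so the script can log in without a password
ssh-copy-id root@192.168.1.24
# test the passwordless login once, which also records the host fingerprint
ssh root@192.168.1.24 uptime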

 

#!/bin/bash

#Snapshot date format
nd=$(date +%Y-%m-%d-%H%M)
#Dest IP Address
ip="192.168.1.24"
#Share to snapshot and send/receive
sh=TV

#disks that have share to snapshot and send/receive
for i in {1..28} ; do

#calculate and display send size
s=$(BLOCKSIZE=1M df | grep -w disk$i | awk '/[0-9]%/{print $(3)}')
su=$(BLOCKSIZE=1M df | grep -w user | awk '/[0-9]%/{print $(3)}')
d=$(ssh root@$ip BLOCKSIZE=1M df | grep -w disk$i | awk '/[0-9]%/{print $(3)}')
du=$(ssh root@$ip BLOCKSIZE=1M df | grep -w user | awk '/[0-9]%/{print $(3)}')
t=$((s-d))
if [ "$t" -lt 0 ] ; then ((t = 0)) ; fi
g=$((t/1024))
tu=$((su-du))
if [ "$tu" -lt 0 ] ; then ((tu = 0)) ; fi
gu=$((tu/1024))

echo -e "\e[32mTotal transfer size for disk$i is ~"$g"GiB, total remaining for this backup is ~"$gu"GiB\e[0m"

#source snapshots folder
cd /mnt/disk$i/snaps

#get most recent snapshot of this share (names sort chronologically, so the last match is the newest)
sd=$(echo "$sh"_* | awk '{print $NF}')

#make a new snapshot and send differences from previous one
    btrfs sub snap -r /mnt/disk$i/$sh /mnt/disk$i/snaps/"$sh"_$nd
    sync
        btrfs send -p /mnt/disk$i/snaps/$sd /mnt/disk$i/snaps/"$sh"_$nd | pv -prtabe -s "$t"M | ssh -c [email protected] root@$ip "btrfs receive /mnt/disk$i"
        if [[ $? -eq 0 ]]; then
        ssh root@$ip sync
        echo -e "\e[32mdisk$i send/receive complete\e[0m"
        printf "\n"
        else
        echo -e "\e[31mdisk$i send/receive failed\e[0m"
        /usr/local/emhttp/webGui/scripts/notify -i warning -s "disk$i send/receive failed"
        fi
done
/usr/local/emhttp/webGui/scripts/notify -i normal -s "T5>T6 Sync complete"

 


Thanks @johnnie.black!  Appreciate you sharing your efforts.  The script looks good, a lot better than some of my past bash efforts 😆

You could automate detecting the number of disks using something like:

ndisks=`ls -d1 /mnt/disk[0-9]* | wc -l`

for i in $(seq 1 $ndisks); do

...

Probably overkill, but I always think the less that's hardcoded, the better...

  • 5 months later...

Dear johnnie.black,

Sorry, I've been testing with your script but unfortunately it doesn't seem to be working for me.

Every time I try to "send the differences from the previous one" I get some connection errors and, in the end, the message "send/receive failed" for all disks.

I'm trying to figure out how to get it working; since I am not so familiar with SSH, it is a bit of a challenge for me.

Could you please explain a way to get it working for my setup?

The goal is to send btrfs snapshots to my other Unraid server, located somewhere else in my house; they will not go to the cloud or offsite.

 

 

Things I have already figured out:

- Made an SSH connection from the source server to the backup server and accepted the fingerprint (confirmed yes)

- Manually ran the following command in the terminal:

 

btrfs send -p /mnt/disk1/snaps/Data_2019-11-11-1340 /mnt/disk1/snaps/Data_2019-11-11-1358 | pv -prtabe | ssh root@HEREISDESTIP "btrfs receive /mnt/disk1"
At subvol /mnt/disk1/snaps/Data_2019-11-11-1358
[email protected]'s password:

 

Here I entered a valid password


ERROR: empty stream is not considered valid

 

I am not using variables, to pin everything down to one working command; I am also not using the -s option on pv and not using the -c switch on ssh, to keep it all simpler.

 

What could I do next to finally get it working?

Thank you very much, any help is much appreciated!

 

Best regards

Lobsi

Quote

#!/bin/bash

#Snapshot date format
nd=$(date +%Y-%m-%d-%H%M)
#Dest IP Address
ip="192.168.1.24"
#Share to snapshot and send/receive
sh=TV

#disks that have share to snapshot and send/receive
for i in {1..28} ; do

#calculate and display send size
s=$(BLOCKSIZE=1M df | grep -w disk$i | awk '/[0-9]%/{print $(3)}')
su=$(BLOCKSIZE=1M df | grep -w user | awk '/[0-9]%/{print $(3)}')
d=$(ssh root@$ip BLOCKSIZE=1M df | grep -w disk$i | awk '/[0-9]%/{print $(3)}')
du=$(ssh root@$ip BLOCKSIZE=1M df | grep -w user | awk '/[0-9]%/{print $(3)}')
t=$((s-d))
if [ "$t" -lt 0 ] ; then ((t = 0)) ; fi
g=$((t/1024))
tu=$((su-du))
if [ "$tu" -lt 0 ] ; then ((tu = 0)) ; fi
gu=$((tu/1024))

echo -e "\e[32mTotal transfer size for disk$i is ~"$g"GiB, total remaining for this backup is ~"$gu"GiB\e[0m"

#source snapshots folder
cd /mnt/disk$i/snaps

#get most recent snapshot of this share (names sort chronologically, so the last match is the newest)
sd=$(echo "$sh"_* | awk '{print $NF}')

#make a new snapshot and send differences from previous one
    btrfs sub snap -r /mnt/disk$i/$sh /mnt/disk$i/snaps/"$sh"_$nd
    sync
        btrfs send -p /mnt/disk$i/snaps/$sd /mnt/disk$i/snaps/"$sh"_$nd | pv -prtabe -s "$t"M | ssh -c [email protected] root@$ip "btrfs receive /mnt/disk$i"
        if [[ $? -eq 0 ]]; then
        ssh root@$ip sync
        echo -e "\e[32mdisk$i send/receive complete\e[0m"
        printf "\n"
        else
        echo -e "\e[31mdisk$i send/receive failed\e[0m"
        /usr/local/emhttp/webGui/scripts/notify -i warning -s "disk$i send/receive failed"
        fi
done
/usr/local/emhttp/webGui/scripts/notify -i normal -s "T5>T6 Sync complete"

 

 

2 hours ago, Lobsi said:

Sorry, I've been testing with your script but unfortunately it doesn't seem to be working for me.

Every time I try to "send the differences from the previous one" I get some connection errors and, in the end, the message "send/receive failed" for all disks.

Does the parent snapshot exist on /mnt/disk1 on the destination server? Post the output of

ls -l /mnt/disk1

on the destination server.

50 minutes ago, johnnie.black said:

Does the parent snapshot exist on /mnt/disk1 on the destination server? Post the output of


ls -l /mnt/disk1

on the destination server.

This seems to be my main issue; I don't quite know how to get the parent snapshot from one server to the backup server.

 

The output you asked for:

root@Zwerg:~# ls -l /mnt/disk1
total 0
drwxrwxrwx 1 nobody users 8 Nov 11 11:08 snaps/

 

2 minutes ago, Lobsi said:

This seems to be my main issue; I don't quite know how to get the parent snapshot from one server to the backup server.

The first snapshot from every disk needs to be sent manually to the destination; only after that can you start using the script to make incremental backups. Also, the script is set to send to the destination disk's root, not a snaps folder; you can use any folder you want, but you'll need to adjust the script accordingly.
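For illustration, a minimal sketch of that initial manual send, assuming the share is called Data, the first snapshot was taken on disk1, and the destination IP is 192.168.1.24 (all example values, adjust to your setup):

# take the first read-only snapshot on the source, if it doesn't exist yet
btrfs sub snap -r /mnt/disk1/Data /mnt/disk1/snaps/Data_2019-11-11-1340
# full (non-incremental) send of that snapshot to the destination disk's root
btrfs send /mnt/disk1/snaps/Data_2019-11-11-1340 | ssh root@192.168.1.24 "btrfs receive /mnt/disk1"

Once that has completed, later runs of the incremental script can use it as the parent.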

20 minutes ago, johnnie.black said:

The first snapshot from every disk needs to be sent manually to the destination; only after that can you start using the script to make incremental backups. Also, the script is set to send to the destination disk's root, not a snaps folder; you can use any folder you want, but you'll need to adjust the script accordingly.

Thank you very much for your explanation, I appreciate it very much.

Can you give me an example of how I can get the first snapshot from my source server to my destination server?

I would finally like to get to this situation:

I have 2 Unraid servers: a bigger one that runs various Dockers and VMs, and a smaller one that should only store the snapshots from the bigger one in case I get ransomware or a crypto trojan...

On the small server I would like to have all snapshots stored while preserving the Unraid mechanics (there are 3 8TB drives in it, one for parity), if this is possible. Finally, I'd like to put the server to sleep until the next snapshot cron job of the bigger server starts again...

 

13 minutes ago, Lobsi said:

On the small server I would like to have all snapshots stored while preserving the Unraid mechanics (there are 3 8TB drives in it, one for parity), if this is possible.

Just so we're clear: to be able to use the script above, the destination server needs to have the same disk config as the source server, or at least the disks you want to snapshot, i.e., if you have 2 x 8TB disks on the destination you can snapshot only 2 disks of up to 8TB each on the source.

