[Plugin] Snapshots



27 minutes ago, SimonF said:

Retention removes snaps. The function that is not implemented is removing them on a remote host if send was used. Maybe I need to hide that field.

What counts as a remote or local one, then? I have mine set to a send location on another disk that is then backed up. [screenshot]

 

I have my settings set to keep 7 days and remove snaps/sends locally. I don't technically have a remote share, just another disk on the same system (which I may just get rid of, because I can back it up from its current location just as easily). I originally started sending locally because I had issues with appdata and needed to recover a couple of times, and having it in a local backup folder made things easier.

 


 

 

But I just removed about 50 that went into my Cache drive.


 

Also, I just noticed that I had my domains share set to send to cache/Backup and my system share set to do the same, and only appdata is getting sent.

  • 4 weeks later...

@SimonF - Thanks for the great plugin!  I was able to fairly easily get it to do most of what I want.  I did have to research elsewhere how to set up the keyfiles for remote SSH access, but that was fairly easy to figure out and get working as well.

 

I'm stuck on retention for the remote server, though.  I understand that this is still not implemented on the source-server side, but is there something I'm missing to be able to apply retention directly on the remote server?  What are other people doing to clean up old snapshots that have been sent from another server?

 

In my case, I have a BTRFS pool with some subvols on a 'source' Unraid server that I want to keep ~3 days' worth of hourly snapshots on, and then send them to a BTRFS pool on a 'remote' Unraid server.  The remote server should then keep 14 days of the hourly snapshots, but also take daily and weekly snapshots, each with their own retention periods.  So, I have one schedule on the source server to create the hourly snaps, tag them with 'Hourly', and send them to the remote server.  Then I have two schedules on the remote server to create the daily and weekly snaps, tagged 'Daily' and 'Weekly'.  I think retention of the hourly snaps on the source server is working OK, and retention of the daily and weekly snaps on the remote server seems to be working OK as well, but what should I do about the hourly snaps on the remote server?  I tried creating an hourly schedule on the remote server, basically just to handle the retention cleanup, but it doesn't seem to consider the received incremental snaps when it applies retention.  Is this just a bug, or does it purposely exclude snaps that were received from a different server?

 

So, again... what are other people doing for retention on 'remote' servers?  Maybe custom scripts, or am I just missing something obvious?
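
In the meantime I've been considering just running a small cron script on the remote server. Something like this sketch, where the paths, the snapshot-name format, and the 336-snap (14 days of hourlies) window are all just illustrative, and which assumes the snapshot directory names sort chronologically:

#!/bin/bash
# Prune received snapshots on the remote server, keeping only the newest $KEEP.
# Assumes snapshots are subvolumes under $SNAPDIR with names that sort
# oldest-first (e.g. 2023-12-01-0800). Paths and numbers are illustrative.
SNAPDIR=/mnt/pool/snaps
KEEP=336
count=$(ls -1d "$SNAPDIR"/*/ 2>/dev/null | wc -l)
if [ "$count" -gt "$KEEP" ]; then
    ls -1d "$SNAPDIR"/*/ | head -n "$(( count - KEEP ))" | while read -r snap; do
        btrfs subvolume delete "$snap"
    done
fi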

 

And, just because it is fresh in my mind... here are a few general usability observations:

  • It would be nice to be able to sort the list of snaps from newest to oldest instead of the current default of oldest to newest.
  • It would be nice to see the schedule's tag in the list view, maybe instead of the slot #.  Or maybe allow a short description that could be shown there.
  • I keep accidentally clicking the schedule's + icon, thinking I'm editing the schedule row.  I'm not sure why that moves down from the subvol row to the schedule row once there are schedules.  Maybe leave the + on the subvol row, and move the clock icon over to where the + currently is on the schedule row?  Maybe change the clock icon to an 'edit' icon as well?

First of all, thanks for the amazing work on this plugin!

Here is my question now (with context):
I have been using Unraid for several years now, but I only just got interested in BTRFS snapshots (after destroying a lot of data due to an error I made... a noob error that should never have happened!).
All the documentation I could find is aimed at people who already know what filesystem snapshots are and how they work... I don't!

My use-case is:
To be able to roll back a few hours in case I screw up... the rest is managed by daily replication of data with rsync to another server and some other backup strategies. There is a lot of movement on this server during the week as it syncs with work-related stuff, so hundreds of files might change or be created per week.

  1. When I create a snapshot on a schedule, the default value is to base it on the previous snap.
    What am I supposed to do if I want to keep only a week's worth of hourly snapshots, with auto-deletion of older snaps? Won't deleting a previous snap destroy data?

  2. How does one roll back a file, a folder, or a state of the filesystem on Unraid? We don't have Timeshift or other cool GUI tools like that, as far as I'm aware. Can it be done while the server is running? Do I need to mount the disk on a live boot and use Timeshift?

Feel free to send me obvious guides that I might have missed (especially videos, if available).

 

Have a nice day

EDIT: All my snapshots are directly on the filesystem they originated from, for the moment. I don't send them to a remote. The goal is to roll back files if I screw something up, not to back up the entire drive to a server on the moon for preservation.

2 hours ago, Normand_Nadon said:
  1. When I create a snapshot on a schedule, the default value is to base it on the previous snap.
    What am I supposed to do if I want to keep only a week's worth of hourly snapshots, with auto-deletion of older snaps? Won't deleting a previous snap destroy data?

Snapshots are not "based on a previous snap"; they are a copy-on-write copy of a subvolume. For the purpose of restoration there are no dependencies between them (all of the data sharing is handled by the CoW nature of the filesystem). You can delete any of them without affecting the others. The only time the relationship of one snapshot to another really matters is when sending them between filesystems using btrfs send. With send, if the snapshot to be transferred has an ancestor snapshot at both the source and the destination, the amount of data to transfer is reduced (a highly simplified explanation).
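
A rough sketch of what that looks like on the command line (the paths and the remote hostname here are invented for illustration):

# snapshots must be read-only (-r) to be sent
btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/snaps/appdata-1100
# first transfer: a full send of the whole snapshot
btrfs send /mnt/cache/snaps/appdata-1100 | ssh backuphost btrfs receive /mnt/pool/snaps
# later: take a new snapshot and send it incrementally against the
# common ancestor (-p); only blocks changed since appdata-1100 are sent
btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/snaps/appdata-1200
btrfs send -p /mnt/cache/snaps/appdata-1100 /mnt/cache/snaps/appdata-1200 | ssh backuphost btrfs receive /mnt/pool/snaps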

 

2 hours ago, Normand_Nadon said:
  2. How does one roll back a file, a folder, or a state of the filesystem on Unraid? We don't have Timeshift or other cool GUI tools like that, as far as I'm aware. Can it be done while the server is running? Do I need to mount the disk on a live boot and use Timeshift?

There is not really a simple GUI way to handle rolling back. Snapshots appear as just folders on the filesystem. The simplest way of restoring is to delete the live file or folder and then copy it from a snapshot directory back into place. If you are restoring an entire subvolume (the whole snapshot), there are fancier ways of doing it involving deleting the subvolume and then creating a writable snapshot of the snapshot you want to restore, but copying is the easiest to understand. Since snapshotting only involves data disks and not the OS, there is no need to bring the server down when restoring something. At most you might have to stop some VMs or Docker containers that are using data from the subvolume being restored.
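
As a concrete illustration (all paths invented), restoring a single file versus rolling back a whole subvolume might look like:

# restore one file by copying it out of a snapshot back into place
cp -a /mnt/cache/snaps/appdata-1100/myapp/config.xml /mnt/cache/appdata/myapp/config.xml
# roll back an entire subvolume: delete the live one, then create a
# writable snapshot (no -r) of the snapshot you want to bring back
btrfs subvolume delete /mnt/cache/appdata
btrfs subvolume snapshot /mnt/cache/snaps/appdata-1100 /mnt/cache/appdata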


First of all, thank you @primeval_god; in 12 or so lines you made this a million times simpler for me to understand!
I have been trying to understand the concept for days!
(I will admit I need to google CoW and what it means... but 95% of your explanation was clear to me!)

 

11 minutes ago, primeval_god said:

Snapshots are not "based on a previous snap"; they are a copy-on-write copy of a subvolume. For the purpose of restoration there are no dependencies between them (all of the data sharing is handled by the CoW nature of the filesystem).


But what is the meaning of this, then? Does it apply only to remote sends?

[screenshot]

 

 

 

14 minutes ago, primeval_god said:

The simplest way of restoring is to delete the live file or folder and then copy it from a snapshot directory back into place.

Amazing! I will experiment with this and report back if I still need help!

3 hours ago, Normand_Nadon said:

But what is the meaning of this, then? Does it apply only to remote sends?

It only applies to "incremental sends". I believe its purpose is to define the "Master Snapshot" (not a btrfs term) against which the snapshot will be compared to do an incremental send. This is a feature of the plugin rather than something in the underlying filesystem.
[screenshot]
P.S. If you expand the help text (either globally or by clicking on labels like Master Snap) you will find some useful information.

6 minutes ago, Normand_Nadon said:

@primeval_god, I don't know what I am missing, but when I navigate to the individual snapshot folders (either from the plugin or from the terminal), all I see is all the preceding snapshots, but no files...

 

I really don't get how it works!

[screenshots]

I am confused as to what you are doing / trying to do here. From the snapshot plugin, it looks like you have a subvolume called "snapshots" on /mnt/cache, and you have several snapshots of that subvolume, which are located within that subvolume at /mnt/cache/snapshots/2023-*

5 minutes ago, primeval_god said:

I am confused as to what you are doing / trying to do here. From the snapshot plugin, it looks like you have a subvolume called "snapshots" on /mnt/cache, and you have several snapshots of that subvolume, which are located within that subvolume at /mnt/cache/snapshots/2023-*

I can't help you get less confused! I have no idea what I am doing! :D

 

Here is what I want to do: I want to take hourly snapshots of the entire drive and be able to recover if I ever make a stupid mistake...
I know snapshots don't replace backups, but I don't want to make as many backups as I take snapshots, if that makes sense.

21 minutes ago, Normand_Nadon said:

I can't help you get less confused! I have no idea what I am doing! :D

 

Here is what I want to do: I want to take hourly snapshots of the entire drive and be able to recover if I ever make a stupid mistake...
I know snapshots don't replace backups, but I don't want to make as many backups as I take snapshots, if that makes sense.

I don't think the Snapshot plugin GUI allows you to take snapshots of the root volume of the drive (the entire drive). At least that's what your screenshot shows, and I have never done it myself. Personally, I have replaced my share folders (every top-level folder on the drive) with subvolumes (which look like folders but can be snapshotted), and then I take snapshots of each share on different schedules.

 

It looks like what you have done is create a subvolume called "snapshots" and then snapshot that empty subvolume a bunch of times, saving the snapshots into the subvolume. Since the "snapshots" subvolume is empty, each snapshot of it will be empty as well (note that snapshots do not recurse through subvolumes, so the snapshots in the base subvolume will not appear in subsequent snapshots of the base subvolume).

31 minutes ago, primeval_god said:

I don't think the Snapshot plugin GUI allows you to take snapshots of the root volume of the drive (the entire drive). At least that's what your screenshot shows, and I have never done it myself. Personally, I have replaced my share folders (every top-level folder on the drive) with subvolumes (which look like folders but can be snapshotted), and then I take snapshots of each share on different schedules.

That's correct, btrfs can only snapshot subvolumes, so you need to recreate your share(s) as subvolume(s).


Oooooooh! Thank you @primeval_god and @JorgeB !!!

 

That is what I was telling you... Snapshots have been around for ages, so all the documentation I could find assumed I already knew what a snapshot was from other types of filesystems! You can clearly see that I don't :D

 

50 minutes ago, JorgeB said:

In any case I would still recommend creating shares as subvolumes, to make things cleaner, and for the plugin to work.


Would you mind pointing me to a clear procedure for how to do that?

My gut tells me this: rename all the shares to [SHARE'S NAME]_OLD, create subvolumes named [SHARE'S NAME], move the data into those subvolumes (it should be near-instantaneous, as it is just re-mapping the location), then delete the old shares...
Does that make sense?

EDIT:
Oh... and how will FUSE manage that? I have shares that are split between the cache and the array.

34 minutes ago, Normand_Nadon said:

My gut tells me this: rename all the shares to [SHARE'S NAME]_OLD, create subvolumes named [SHARE'S NAME], move the data into those subvolumes (it should be near-instantaneous, as it is just re-mapping the location), then delete the old shares...
Does that make sense?

It's not instantaneous if you do that, but there's a trick you can use: a reflink copy shares the existing data extents, so it completes almost instantly without duplicating any data:

mv /mnt/disk1/Share_name /mnt/disk1/temp                        # move the plain folder aside
btrfs sub create /mnt/disk1/Share_name                          # create a subvolume with the share's name
cp -aT --reflink=always /mnt/disk1/temp /mnt/disk1/Share_name   # reflink-copy the data into it

Check all data is there, then:

rm -r /mnt/disk1/temp
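
If you want to double-check the result before deleting anything, btrfs subvolume show should now identify the share as a subvolume:

btrfs subvolume show /mnt/disk1/Share_name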


 

 

1 hour ago, Normand_Nadon said:

Will the snapshot exclude the snapshot folder? (I don't want to snapshot the snapshot!)

It won't matter if you do. If you have nested subvolumes and snapshot the outermost one, the snapshot will not contain the contents of the nested subvolumes, because BTRFS snapshots are non-recursive with respect to subvolumes. And of course the key piece of info is that a snapshot is a type of subvolume.

 

Likewise, if you were to create subvolumes on the disk and then snapshot the root volume of the disk, the resulting snapshot would not contain a copy of the contents of the subvolumes. You would have to snapshot the root volume and the subvolume separately.

 

To snapshot a snapshot you would have to specifically target the snapshot in the command.
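
To make the non-recursive behavior concrete, a small sketch (paths invented): snapshotting the root volume leaves any nested subvolume behind as an empty directory, so each one needs its own command:

# snapshot the root volume; nested subvolumes appear only as empty dirs
btrfs subvolume snapshot -r /mnt/cache /mnt/cache/.snapshots/root-1200
# capture the nested subvolume's contents with a separate snapshot
btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/.snapshots/appdata-1200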

4 minutes ago, primeval_god said:

It won't matter if you do. If you have nested subvolumes and snapshot the outermost one, the snapshot will not contain the contents of the nested subvolumes, because BTRFS snapshots are non-recursive with respect to subvolumes. And of course the key piece of info is that a snapshot is a type of subvolume.

 

To snapshot a snapshot you would have to specifically target the snapshot in the command.


To save time, I activated the "root" option and it works fine.
I created a subvolume called ".snapshot".
For some reason, it works... The full volume gets its snapshot inside the .snapshot subvolume, and the .snapshot folder exists inside the snapshot but is empty... maybe there is something in Unraid or BTRFS that keeps it from making snapshots of snapshots... can't tell!

7 minutes ago, Normand_Nadon said:

To save time, I activated the "root" option and it works fine.
I created a subvolume called ".snapshot".
For some reason, it works... The full volume gets its snapshot inside the .snapshot subvolume, and the .snapshot folder exists inside the snapshot but is empty... maybe there is something in Unraid or BTRFS that keeps it from making snapshots of snapshots... can't tell!

That is the expected behavior, per my previous comment. It is the way btrfs snapshots work for nested subvolumes.

2 hours ago, primeval_god said:

That is the expected behavior, per my previous comment. It is the way btrfs snapshots work for nested subvolumes.

I will admit that I did not understand it all at first... My brain is not cooperating this week; I am sick!
Re-reading it a couple of times helped, and it made a lot of sense!

Do I understand it right that Unraid shows me:
[screenshot]

But in reality, this is all smoke and mirrors created by the sorcery of symlinks, and in fact it is structured like this behind the scenes:
[screenshot]

 

Am I right to see it that way?
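
I suppose I could verify the real layout myself with something like this (the path being my cache pool):

btrfs subvolume list /mnt/cache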

