[Plugin] Snapshots



3 hours ago, Normand_Nadon said:

But in reality, this is all smoke and mirrors created by the sorcery of symlinks, and in fact it is structured as such behind the scenes

It is not symlinks; it's lower-level than that, more like mount points. It's a fundamental property of BTRFS filesystems. If it helps to think of it that way then sure; I am not entirely sure what the lower levels of BTRFS look like. The important point is that no matter how they are structured, subvolumes can only be snapshotted separately.
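For example (paths made up): if "nested" is a subvolume inside "parent", a snapshot of "parent" will contain only an empty directory where "nested" was, so each subvolume gets its own snapshot:

# snapshot the parent; the nested subvolume shows up only as an empty directory inside it
btrfs subvolume snapshot /mnt/pool/parent /mnt/pool/snaps/parent-snap
# snapshot the nested subvolume separately
btrfs subvolume snapshot /mnt/pool/parent/nested /mnt/pool/snaps/nested-snap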

  • 2 months later...

So I have a question...

I back up my ESXi to my unraid server using ghettovcb.

I have modified it so it always writes to the same directory, so my strategy is to always remove the old backup and then back up again.

Will that "deletion" negate the benefit of snapshots? It's a single gigantic file in most cases, and the method I use doesn't support overwriting the old backup, so it's always a completely new file in the same folder (despite the files being mostly identical; actually completely identical when I back up a stopped VM).

I just tried it, and I get 2x the file usage, so I guess that's just how it is.


I guess I could back up to a temporary location without snapshots or anything, then use something like virtsync to sync only the changed blocks into the old file; then my incremental backups would work with minimal space usage.
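Roughly this, as a sketch (using rsync instead of virtsync, since rsync's delta mode can do the same in-place block sync; paths made up):

# the fresh backup lands in a scratch folder that is never snapshotted,
# then only the changed blocks are written into the existing file, so the
# unchanged blocks stay shared (CoW) with the older snapshots
rsync --inplace --no-whole-file /mnt/cache/scratch/vm-backup.vmdk /mnt/cache/backups/vm-backup.vmdk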


I am having issues and broke my setup. My setup had flaws, and using Duplicacy to back it up was creating an excessive amount of storage: my 300 GB of backups turned into 2 TB, because Duplicacy re-archived everything each time the date prefix changed the file structure. Each backup had a YYYYMMDD-style tag appended to it. So now I am trying to maintain only 1 backup with the plugin and have everything backed up with Duplicacy once a day, with Duplicacy handling version and revision control. So instead of keeping 7 days of backups I changed it to 1.

That isn't working, though: it creates a snapshot but then keeps telling me that it already exists. How can I have a statically named snapshot, such as just appdata, that is deleted and recreated each time?

2 hours ago, ZerkerEOD said:

So now I am trying to maintain only 1 backup with the plugin and have everything backed up with Duplicacy once a day...

That isn't working, though: it creates a snapshot but then keeps telling me that it already exists. How can I have a statically named snapshot, such as just appdata, that is deleted and recreated each time?

Are you backing up your snapshots as well as your filesystem? You should probably just exclude your snapshots from your backup.

1 minute ago, primeval_god said:

Are you backing up your snapshots as well as your filesystem? You should probably just exclude your snapshots from your backup.

Am I missing an option? The exclude that I see is only for the GUI; I don't see anything that excludes the backups from the snapshot.

19 hours ago, ZerkerEOD said:

Am I missing an option? The exclude that I see is only for the GUI; I don't see anything that excludes the backups from the snapshot.

Are you using BTRFS? If so, just make the directory you store your backups in into a subvolume (rename the folder they are currently in, create a subvolume with the name of the original directory, then move the backups into the subvolume). Btrfs snapshots are not recursive, so when snapshotting the top level, the subvolume containing the backups will be excluded.
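In commands, assuming the backups currently live in /mnt/cache/backups, that would look roughly like:

mv /mnt/cache/backups /mnt/cache/backups-old      # rename the existing folder
btrfs subvolume create /mnt/cache/backups         # recreate it as a subvolume
mv /mnt/cache/backups-old/* /mnt/cache/backups/   # move the backups into the subvolume
rmdir /mnt/cache/backups-old                      # remove the now-empty folder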

  • 2 months later...

Just wondering, is there any way to see what command is issued by the plugin to create the snapshots?

I have a script that needs to create some manually, and I'd like it to be consistent and identical to what the plugin does.

11 minutes ago, fr500 said:

Just wondering, is there any way to see what command is issued by the plugin to create the snapshots?

I have a script that needs to create some manually, and I'd like it to be consistent and identical to what the plugin does.

 

Here is the command and example PHP.

 

btrfs subvolume snapshot [-r] <subvol> <snapshoty>

 

subvol is the path of the subvolume to snapshot.

snapshoty is the name (path) of the snapshot.

readonly is -r if you want the snapshot to be read-only.

 

      case 'create_snapshot':
           $snapshot = urldecode($_POST['snapshot']);   // snapshot path, may contain a {date-format} token
           $subvol   = urldecode($_POST['subvol']);     // source subvolume to snapshot
           $readonly = urldecode($_POST['readonly']);
           if ($readonly == "true") $readonly = "-r"; else $readonly = "";

           // Expand the {..} date token in the snapshot name; {YMD} is shorthand for YmdHis.
           // findText() is a plugin helper that returns the text between the delimiters.
           $DateTimeF = findText("{", "}", $snapshot);
           if ($DateTimeF == "YMD") $DateTime = "YmdHis"; else $DateTime = $DateTimeF;
           $ymd = date($DateTime, time());
           $snapshoty = str_replace("{".$DateTimeF."}", $ymd, $snapshot);

           // Create the snapshot's parent directory if it does not exist yet.
           $slashpos  = substr(strrchr($snapshot, '/'), 1);
           $directory = substr($snapshot, 0, -strlen($slashpos));
           if (!is_dir($directory) && $snapshot != $subvol) mkdir($directory, 0777, true);

           $result = NULL;
           exec('btrfs subvolume snapshot '.$readonly.' '.escapeshellarg($subvol).' '.escapeshellarg($snapshoty)." 2>&1", $result, $error);
           snap_manager_log('btrfs snapshot create '.$snapshot.' '.$error.' '.($result[0] ?? ''));
           // exec() puts the command's exit status in $error; 0 means success.
           $error_rtn = ($error == 0);
           echo json_encode(array("success" => $error_rtn, "error" => $result));
           break;

 

Auto script.

https://github.com/SimonFair/Snapshots/blob/b4ff445c44f3618a74d2737873c3efadfda43a38/source/include/snapping.php#L228

  • 2 weeks later...

I want to implement Snapshots on my cache, but this is my first time learning about Snapshots and how CoW works, so I want to make sure I am understanding the process correctly.

 

I start with a single-disk cache at /mnt/cache-single that contains the shares Downloads, appdata and system. I run the four commands in the instructions for a share, e.g. Downloads:

 

mv /mnt/cache-single/Downloads /mnt/cache-single/some-temp
btrfs sub create /mnt/cache-single/Downloads
cp -aT --reflink=always /mnt/cache-single/some-temp /mnt/cache-single/Downloads
rm -r /mnt/cache-single/some-temp

 

This will convert that share to a subvolume. Since this is a filesystem-level change, do I need to go in and update Docker mappings and the Share Settings (Primary storage, move settings, SMB settings, etc.) as if a new Share had been created in UNRAID? Or do I now have two Shares of the same name, and I need to configure the new one and remove the old one?

 

I then repeat this for each Share, and I will have 4 Subvolumes on this one pool. I can then enable Snapshots of a Subvolume and either store the snapshot within the Subvolume (preferred?) or in a different Subvolume, as long as it is on the same pool; i.e. I cannot make a snap to /mnt/cache-single/some-snaps as 'some-snaps' is not a Subvolume, and I cannot send to /mnt/cache-double/... as 'cache-double' is a different pool.

 

At this point I will be able to restore from a snapshot manually if my Scrub notices a file is broken or I accidentally delete something I did not want to.

14 hours ago, AngryPig said:

This will convert that share to a subvolume. Since this is a filesystem-level change, do I need to go in and update Docker mappings and the Share Settings (Primary storage, move settings, SMB settings, etc.) as if a new Share had been created in UNRAID? Or do I now have two Shares of the same name, and I need to configure the new one and remove the old one?

You will want to stop any Docker containers that have a mapping to or within the share you are operating on while you make changes. Aside from that, though, you don't need to make any changes, since you are creating the subvolume with the original path. Likewise, you shouldn't need to make any changes to share settings, since as far as unRAID is concerned the new subvolume (which has the same path as the original user share) is the existing user share.

 

15 hours ago, AngryPig said:

I can then enable Snapshots of a Subvolume and either store the snapshot within the Subvolume (preferred?) or in a different Subvolume, as long as it is on the same pool; i.e. I cannot make a snap to /mnt/cache-single/some-snaps as 'some-snaps' is not a Subvolume

Where you store snapshots doesn't really matter; they can be anywhere within the same pool (they don't have to be within a subvolume). Snapshots themselves are just subvolumes anyway.

 

15 hours ago, AngryPig said:

I cannot send to /mnt/cache-double/... as 'cache-double' is a different pool.

This is not entirely true, but it requires some explanation. When you snapshot a subvolume, the snapshot must be made somewhere on the same filesystem (pool), as it is a CoW copy of the subvolume (and a new subvolume itself). You can, however, send subvolumes from one BTRFS filesystem to another using btrfs send and receive (which are available in this plugin). Doing this copies the subvolume to the other filesystem, so it is no longer a CoW copy but a full copy taking up space on the other filesystem. Once a subvolume has been sent to the other filesystem, there is a way to send subsequent snapshots of that subvolume between the two filesystems that maintains the CoW relationship between the subvolume and its snapshots.
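On the command line a full send looks something like this (paths made up; only read-only snapshots can be sent):

# take a read-only snapshot, since btrfs send requires a read-only source
btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/snaps/appdata-1
# full send: the entire subvolume is copied to the other BTRFS pool
btrfs send /mnt/cache/snaps/appdata-1 | btrfs receive /mnt/backup-pool/snaps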

5 hours ago, primeval_god said:

You will want to stop any Docker containers that have a mapping to or within the share you are operating on while you make changes...

 

Glad to see I understood that part!

 

5 hours ago, primeval_god said:

Where you store snapshots doesn't really matter; they can be anywhere within the same pool (they don't have to be within a subvolume). Snapshots themselves are just subvolumes anyway.

 

That's good to know. Are they required to be in a Subvolume for Send to work, or will they automatically work with Send because they are Subvolumes themselves?

 

5 hours ago, primeval_god said:

This is not entirely true, but it requires some explanation. When you snapshot a subvolume, the snapshot must be made somewhere on the same filesystem (pool)...

 

Thank you for clarifying this. Does the other filesystem have to be BTRFS to be able to Send to it? i.e. I would not be able to send to my array, as it is XFS?

 

Does this plugin allow me to Snap on a daily basis and Send on a weekly basis? So if I was Sending off-site, it would not use as much of my bandwidth? Or is it simply that, since it maintains the BTRFS CoW, it would not be that much bandwidth after the initial send?

17 hours ago, AngryPig said:

That's good to know. Are they required to be in a Subvolume for Send to work, or will they automatically work with Send because they are Subvolumes themselves?

Yes, send and receive work on subvolumes only (snapshots are just a type of subvolume).

 

17 hours ago, AngryPig said:

Thank you for clarifying this. Does the other filesystem have to be BTRFS to be able to Send to it? i.e. I would not be able to send to my array, as it is XFS?

Yes, the other filesystem has to be BTRFS for send and receive to work; the sent subvolume becomes a subvolume on the receiving filesystem.

 

17 hours ago, AngryPig said:

Does this plugin allow me to Snap on a daily basis and Send on a weekly basis?

I am not entirely sure about the capabilities of this plugin with regard to scheduling.

 

17 hours ago, AngryPig said:

So if I was Sending off-site, it would not use as much of my bandwidth? Or is it simply that, since it maintains the BTRFS CoW, it would not be that much bandwidth after the initial send?

BTRFS send and receive does a sort of differential send when the subvolume/snapshot being sent is based on a subvolume/snapshot that is present in both filesystems (assuming you use the option to specify the parent). This reduces the amount of data sent for subsequent snapshots of the same subvolume. I am not sure whether this plugin actually makes that option available, though, as I do my snapshot sending via the command line.
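For reference, the parent option on the command line looks like this (made-up paths; appdata-1 must already exist on both filesystems from an earlier full send):

# incremental send: only the differences between appdata-1 and appdata-2 are transferred
btrfs send -p /mnt/cache/snaps/appdata-1 /mnt/cache/snaps/appdata-2 | btrfs receive /mnt/backup-pool/snaps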

