SimonF Posted July 15, 2022 (edited)
This is my 6.9.2 with no existing definitions. I followed your testing; what options are you expecting for the test2 snapshot?
Edited July 16, 2022 by SimonF
aim60 Posted July 15, 2022 (edited)
My appdata folder on my original cache drive corresponds to test1. It was snapshotted and send-received to a backup drive. The cache drive was replaced, and the snapshot was send-received to a new drive. To the plugin it looks like test2. There needs to be a mechanism in the plugin for the new appdata, a writeable snapshot, to be treated exactly like an original (btrfs sub create) subvolume, i.e. Settings, Schedule, & Create Snapshot, so that subsequent snapshots can be created. My situation is not unique: anyone who has to restore a subvolume from a snapshot will be in the same position.
Edited July 18, 2022 by aim60
trott Posted July 18, 2022
Is there a way to limit the number of those received incremental backups?
SimonF Posted July 19, 2022
On 7/15/2022 at 11:24 PM, aim60 said: My appdata folder on my original cache drive corresponds to test1. […]
Please can you also send me the output of the following: btrfs sub show /mnt/cache
SimonF Posted July 19, 2022
6 hours ago, trott said: Is there a way to limit the number of those received incremental backups?
Are you looking to delete/purge based on a schedule? I have added an option for removing remote sends/snaps to the schedule options, but this function is not implemented yet, and it will run on the source system. Or are you looking for something to run on the target system?
trott Posted July 19, 2022
2 hours ago, SimonF said: Are you looking to delete/purge based on a schedule? […]
Thanks, I just need the scheduled delete/purge of remote sends/snaps.
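Until such a purge option is implemented, something along these lines could prune received snapshots on the target by count. This is only a sketch: the path, the count of 12, and the assumption that snapshot names sort chronologically (e.g. a date suffix) are all placeholders, not anything the plugin does today.

```shell
# Keep only the 12 newest received snapshots under the backup path;
# assumes snapshot names sort chronologically (e.g. a date suffix)
ls -1d /mnt/backup/snapshots/* | sort | head -n -12 | while read -r snap; do
  btrfs subvolume delete "$snap"
done
```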
aim60 Posted July 19, 2022
6 hours ago, SimonF said: Please can you also send me the output of the following: btrfs sub show /mnt/cache

root@Tower7:~# btrfs sub show /mnt/cache
	Name:			<FS_TREE>
	UUID:			ba110717-bbbb-4c18-ae17-abf768fad644
	Parent UUID:		-
	Received UUID:		-
	Creation time:		2021-03-06 11:44:53 -0500
	Subvolume ID:		5
	Generation:		16952037
	Gen at creation:	0
	Parent ID:		0
	Top level ID:		0
	Flags:			-
	Snapshot(s):

As the problem clarified in my mind, I realized that the solution was not a trivial bug fix, but significant additional development. Your efforts will be greatly appreciated by the community.
SimonF Posted July 19, 2022 (edited)
On 7/15/2022 at 6:45 PM, aim60 said: Yes, but no settings widget. And it's in the debug tab. debug.txt 419.97 kB · 2 downloads
I can reproduce your view now, so I will start to look at how I can change the setup. It currently uses the UUIDs of the original subvols/snaps, so it may take a bit of time.
Edited July 19, 2022 by SimonF
aim60 Posted July 22, 2022
On 7/19/2022 at 1:28 PM, SimonF said: I can reproduce your view
Simon, I sent you a PM.
Peter Braun Posted July 29, 2022 (edited)
I am using Unraid 6.10.3 and Snapshots 2022.06.25. A few days ago I realized that I can create, but can no longer delete, the schedule elements ("fail" is displayed). Is there any condition that has to be fulfilled to delete the schedules of the snapshots? I am quite sure that it worked in the past with the same versions, but meanwhile I have rebooted the system and expanded the number of snapshot tasks.
Edited July 29, 2022 by Peter Braun
SimonF Posted July 30, 2022
7 hours ago, Peter Braun said: I am using Unraid 6.10.3 and Snapshots 2022.06.25. […]
Seems to be working fine on my systems. Are you able to provide any screenshots, and do you see any errors in the log?
Peter Braun Posted July 30, 2022
Here is a typical scenario: slots 1 and 2 are disabled and I want to remove them (it makes no difference whether the schedule is enabled or not). The standard confirmation message appears; after pushing the "delete" button, "Fail" is displayed for less than a second. The schedule is unchanged and there is no entry in the log. I just wanted to know if something has to be taken into account before deleting the schedules. This seems not to be the case, so I will now take a deeper look at this behavior.
SimonF Posted July 30, 2022
42 minutes ago, Peter Braun said: Here is a typical scenario: […]
It is only deleting entries from the JSON file and removing cron files. You can PM me if you don't want to post here. Can you provide the output of:
ls /boot/config/plugins/snapshots
cat /boot/config/plugins/snapshots/subvolsch.cfg
Peter Braun Posted July 30, 2022
Here are the results:

root@Holz:~# ls -la /boot/config/plugins/snapshots
total 144
-rw------- 1 root root   162 Jul 25 22:29 %2Fmnt%2Fdisk1%2FaudioSlot0.cron
-rw------- 1 root root   162 Jul 25 22:30 %2Fmnt%2Fdisk1%2FbooksSlot0.cron
-rw------- 1 root root   160 Jul 25 22:30 %2Fmnt%2Fdisk1%2FdataSlot0.cron
-rw------- 1 root root   160 Jul 25 22:32 %2Fmnt%2Fdisk1%2FdataSlot1.cron
-rw------- 1 root root   170 Jul 25 22:32 %2Fmnt%2Fdisk1%2FdocumentsSlot0.cron
-rw------- 1 root root   170 Jul 25 22:32 %2Fmnt%2Fdisk1%2FdocumentsSlot1.cron
-rw------- 1 root root   158 Jul 25 22:33 %2Fmnt%2Fdisk1%2FjobSlot0.cron
-rw------- 1 root root   158 Jul 25 22:33 %2Fmnt%2Fdisk1%2FjobSlot1.cron
-rw------- 1 root root   162 Jul 25 22:33 %2Fmnt%2Fdisk1%2FphotoSlot0.cron
-rw------- 1 root root   162 Jul 25 22:34 %2Fmnt%2Fdisk1%2FphotoSlot1.cron
-rw------- 1 root root   168 Jul 25 22:34 %2Fmnt%2Fdisk1%2FsoftwareSlot0.cron
-rw------- 1 root root   162 Jul 25 22:34 %2Fmnt%2Fdisk1%2FvideoSlot0.cron
drwx------ 3 root root  8192 Jul 29 23:38 ./
drwx------ 28 root root 8192 Jul 30 21:18 ../
drwx------ 2 root root  8192 Jun 25 09:38 packages/
-rw------- 1 root root  1154 Jul 20 19:53 subvol.cfg
-rw------- 1 root root 14527 Jul 29 23:38 subvolsch.cfg

root@Holz:~# cat /boot/config/plugins/snapshots/subvolsch.cfg
{ "\/mnt\/cache\/audio": [], "\/mnt\/backup\/snapshots": [], "\/mnt\/backup\/backups": [], "\/mnt\/backup\/latest_backup": [ { "rund": "Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "no", "snapSchedule": "1", "hour1": "19", "min": "5", "hour2": "*\/1", "snaplogging": "yes", "tag": "daily", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "12", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "5 19 * * 1,2,3,4,5,6,", "vmselection": null }, { "rund":
"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "no", "snapSchedule": "2", "day": "0", "hour1": "19", "min": "5", "hour2": "*\/1", "snaplogging": "yes", "tag": "weekly", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "12", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "5 19 * * 0", "vmselection": null }, { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "no", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "10", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "10 19 1 * *", "vmselection": null } ], "\/mnt\/disk1\/books": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "3", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null, "day": "0", "dotm": "1" }, { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "no", "snapSchedule": "2", "hour1": "19", "min": "20", "hour2": "*\/1", "snaplogging": "yes", "tag": "weekly", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "10", 
"volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "20 19 * * 6", "vmselection": null, "day": "6" }, { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "no", "snapSchedule": "0", "hour1": "0", "min": "0", "hour2": "*\/1", "snaplogging": "yes", "tag": "", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "no", "days": "", "occurences": "", "volumeusage": "", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "0 *\/1 * * 0,1,2,3,4,5,6,", "vmselection": null } ], "\/mnt\/disk1\/audio": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null } ], "\/mnt\/disk1\/data": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null }, { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "2", "day": "6", "hour1": "19", "min": "20", "hour2": "*\/1", "snaplogging": "yes", "tag": "weekly", "snapsend": 
"no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "10", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "20 19 * * 6", "vmselection": null } ], "\/mnt\/disk1\/documents": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null }, { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "2", "day": "6", "hour1": "19", "min": "20", "hour2": "*\/1", "snaplogging": "yes", "tag": "weekly", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "10", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "20 19 * * 6", "vmselection": null } ], "\/mnt\/disk1\/job": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday", "snapscheduleenabled": "yes", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null }, { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", 
"snapscheduleenabled": "yes", "snapSchedule": "2", "day": "6", "hour1": "19", "min": "20", "hour2": "*\/1", "snaplogging": "yes", "tag": "weekly", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "10", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "20 19 * * 6", "vmselection": null } ], "\/mnt\/disk1\/photo": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null }, { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "2", "day": "6", "hour1": "19", "min": "20", "hour2": "*\/1", "snaplogging": "yes", "tag": "weekly", "snapsend": "no", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "10", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "20 19 * * 6", "vmselection": null } ], "\/mnt\/disk1\/software": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", 
"snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null } ], "\/mnt\/disk1\/video": [ { "rund": "Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday", "snapscheduleenabled": "yes", "snapSchedule": "3", "dotm": "1", "hour1": "19", "min": "25", "hour2": "*\/1", "snaplogging": "yes", "tag": "monthly", "snapsend": "local", "remotehost": "", "snapincremental": "yes", "mastersnap": "", "hostoption": "shutdown", "shutdowntimeout": "", "Removal": "yes", "days": "", "occurences": "", "volumeusage": "90", "snapsendopt": "none", "subvolprefix": "", "subvolsendto": "", "cron": "25 19 1 * *", "vmselection": null } ] }
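Incidentally, the oddly named %2Fmnt%2Fdisk1%2F…Slot0.cron files in the listing above are just the subvolume path plus "SlotN", URL-encoded (each "/" becomes %2F). A bash sketch of that encoding, for anyone matching cron files to schedule slots by hand. The helper function is my illustration, not part of the plugin, and it approximates PHP's urlencode (spaces aside):

```shell
# URL-encode a string roughly the way PHP's urlencode does for path characters
# (illustrative helper, not plugin code)
urlencode() {
  local s=$1 out='' c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9._-]) out+=$c ;;                 # characters left untouched
      *) printf -v c '%%%02X' "'$c"; out+=$c ;;  # everything else becomes %XX
    esac
  done
  printf '%s\n' "$out"
}

urlencode "/mnt/disk1/audioSlot0"   # -> %2Fmnt%2Fdisk1%2FaudioSlot0
```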
Peter Braun Posted July 30, 2022
I realized that subvolumes are mentioned in subvolsch.cfg that do not exist anymore. A few days ago I uninstalled the "Snapshots" plugin, formatted some disks with btrfs and reorganized the subvolumes. After reinstalling "Snapshots" it seemed that I could start from scratch, but this was wrong: the configurations of all uninstalled plugins are still saved in /boot/config/plugins, and this is maybe my problem. Is it possible to uninstall a plugin as if it had never been installed? Which directories do I have to remove in addition after uninstalling? Is it only /boot/config/plugins/snapshots? If so, this will unfortunately not solve my problem. Even after uninstalling Snapshots and removing /boot/config/plugins/snapshots, it is only possible to create a schedule, not to delete it.
SimonF Posted July 30, 2022 (edited)
23 minutes ago, Peter Braun said: I realized that subvolumes are mentioned in subvolsch.cfg that do not exist anymore. […]
It will be /tmp/snapshots/config as well. You will need to run update_cron after the cron files are removed.
Edited July 30, 2022 by SimonF
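Put together, a full manual reset would look roughly like this from a terminal. A sketch only: the paths and the update_cron helper are as named in the posts above, so double-check them on your own system before deleting anything:

```shell
# Remove the plugin's stored configuration: subvol.cfg, subvolsch.cfg and the
# per-slot *.cron files (path per the listing above)
rm -rf /boot/config/plugins/snapshots

# Remove the runtime config SimonF mentions
rm -rf /tmp/snapshots/config

# Rebuild the system crontab so stale schedule entries disappear
update_cron
```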
Peter Braun Posted July 31, 2022
The last thing I did before bed was to try to delete the schedule --> fail. The first thing I did this morning was to try to delete the schedule --> success. Afterwards I wanted a clean installation of Snapshots and did what you recommended above. I installed Snapshots, created a schedule, and --> removing the schedule failed again. ☹️ If I am the only person with this phenomenon, I would recommend not putting too much effort into solving this problem, because this function is not vital. I'll report back here when I find the solution. But in general, Unraid should think about optimizing the uninstallation of plugins. The user should have the opportunity to select whether they want a complete uninstallation or one where the config files are stored for later use.
SimonF Posted July 31, 2022
29 minutes ago, Peter Braun said: But in general, Unraid should think about optimizing the uninstallation of plugins. […]
This is down to the plugin author. I will update the removal to remove /tmp/snapshots and the cron files, and run update_cron. I always like to leave the config; other authors remove everything. Will see if I can reproduce with your data. If you don't disable the schedule before removing, does that work OK? It may be that parse_cron_cfg is failing.

case 'delete_schedule_slot':
    $subvol = urldecode(($_POST['subvol']));
    $slot = urldecode(($_POST['slot']));
    $config_file_json = $GLOBALS["paths"]["subvol_schedule.json"];
    $config = @json_decode(file_get_contents($config_file_json), true);
    unset($config[$subvol][$slot]);
    save_json_file($config_file_json, $config);
    $cron = "";
    $file = $subvol."Slot".$slot;
    parse_cron_cfg("snapshots", urlencode($file), $cron);
    snap_manager_log('Removed Schedule Slot "'.$subvol.'" '.$slot.' '.$error.' '.$result[0]);
    echo json_encode(TRUE);
    break;
Peter Braun Posted July 31, 2022
2 hours ago, SimonF said: If you don't disable the schedule before removing, does that work OK?
No, there is no difference whether the schedule is enabled or disabled.
aim60 Posted August 3, 2022 (edited)

Restoring a Subvol

Simon's Snapshots plugin is an awesome solution for managing btrfs snapshots. I had problems with the plugin after replacing a drive and restoring a subvol, so I set out to find a working scenario. The goal was to create a subvol, set up incremental snapshots to a backup drive, simulate a failed drive, restore from snapshot, and be able to continue taking incremental snapshots. The plugin has some limitations in its current form, but I found a working scenario. I recommend that anyone depending on snapshots for recovery upgrade to at least Unraid 6.10.

Assumed starting conditions - you are currently taking snapshots.

Restore scenario - you have replaced a failed drive:
1. Send the latest snapshot from the backup drive to the new drive, directly to the same position as the original subvol. It will not look like a subvol to the plugin until the terminal session below is completed.
2. Open a terminal session.
3. Using the mv command, rename the snapshot to the original subvol name.
4. Run: btrfs property set -f <path to subvol> ro false
   This makes the subvol read/write. This command must be executed from Unraid 6.10 or above; earlier versions will make the subvol r/w, but will leave it unusable to the plugin.
5. Before you can continue taking incremental snapshots, you must manually create a new snapshot and send it (non-incrementally) to the backup drive.

Restore scenario - you are restoring a subvol from a snapshot on the same drive:
1. Delete the corrupted subvol.
2. Send the snapshot to the original subvol's position.
3. Proceed as above, starting with the terminal session. Note - this is a full data send and will use twice the amount of disk space. Delete the source snapshot and take a new snapshot of the subvol to reclaim the disk space.

Methods using "btrfs sub snap" are not currently successful.
Edited August 3, 2022 by aim60
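aim60's first scenario, written out as a terminal sketch. Every path here is a placeholder (your snapshot and subvol names will differ), and it assumes the backup drive is mounted locally; treat it as an outline of the commands, not a script to paste:

```shell
# 1. Full (non-incremental) send of the latest snapshot to the new drive
#    (example paths only)
btrfs send /mnt/backup/snapshots/appdata_latest | btrfs receive /mnt/cache/

# 2. Rename the received snapshot to the original subvol name
mv /mnt/cache/appdata_latest /mnt/cache/appdata

# 3. Make the restored subvol read/write (run from Unraid 6.10 or above)
btrfs property set -f /mnt/cache/appdata ro false

# 4. Re-seed the backup chain: take a fresh read-only snapshot and
#    send it non-incrementally to the backup drive
btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/snaps/appdata_restored
btrfs send /mnt/cache/snaps/appdata_restored | btrfs receive /mnt/backup/snapshots/
```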
BVD Posted August 4, 2022 (edited)
On 7/14/2022 at 10:15 AM, JorgeB said: There is*, but note that the OS will be in a crash-consistent state, i.e., if you restore that backup it would be as if the plug was pulled. I take daily snapshots with the VMs running, but also try at least once a week to create a snapshot with the VMs off/hibernating; this way I have more options. * Edit to add - not sure if the plugin supports that, but it's possible with btrfs snapshots.
This is the reason I use the pre/post-script capability of sanoid and stick with ZFS snapshots. QEMU has the ability to quiesce the VM, which you do in the pre-script; the snapshot occurs, then the post-script resumes normal operations.
Edit: just to be clear, I do something similar to this to ensure my snapshots' consistency. It *is* still only crash-consistent, which is the reason all my "complex" applications (databases etc.) run in docker containers - I then have the granularity to individualize the pre/post scripts to the specific application's needs for application-consistent backups via snapshot.
Edited August 4, 2022 by BVD, clarifying what I meant by pre and post scripts
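The freeze/snapshot/thaw sequence BVD describes can be sketched with libvirt's guest-agent commands. This assumes the QEMU guest agent is installed and running inside the VM; the VM name and paths are placeholders:

```shell
# "Pre-script": flush and freeze the guest's filesystems via the QEMU guest agent
virsh domfsfreeze myvm

# Take the read-only snapshot while the guest is quiesced
btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/snaps/domains_$(date +%Y%m%d)

# "Post-script": thaw the guest's filesystems again
virsh domfsthaw myvm
```

Keeping the freeze window this short matters: guest I/O stalls between the two virsh calls.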
aim60 Posted August 4, 2022
I tried to create a schedule for an incremental snapshot that doesn't run automatically. I set the schedule mode to daily or hourly and unchecked all the days to run, but the GUI did not cooperate. With the server down, I edited subvolsch.cfg on the flash drive and changed the rund line to "rund": "",. It had the desired effect, until I re-edited the schedule in the GUI. Might this be a supportable option?
Peter Braun Posted August 4, 2022 (edited)
On 7/30/2022 at 12:10 AM, Peter Braun said: I am using Unraid 6.10.3 and Snapshots 2022.06.25. A few days ago I realized that I can create, but can no longer delete, the schedule elements ("fail" is displayed) […]
Just for your information: the problem seems to be related to Firefox. With a Chromium-based browser, the problem does not occur. Thanks to everyone who gave me tips, and special thanks to SimonF, who put a lot of work into the clarification.
Edited August 4, 2022 by Peter Braun
SimonF Posted August 4, 2022
5 hours ago, aim60 said: I tried to create a schedule for an incremental snapshot that doesn't run automatically. […]
So you are looking for an option that can be run manually? I.e. I could add an option for the snap schedule to be disabled, manual, or enabled?
aim60 Posted August 4, 2022
4 minutes ago, SimonF said: So you are looking for an option that can be run manually? […]
That would be great. Thanks.