ZnapZend plugin for unRAID


steini84


Hi,

 

Here is a companion plugin for the ZFS plugin for unRAID.

 

Quote

ZnapZend is a ZFS centric backup tool to create snapshots and send them to backup locations. It relies on the ZFS tools snapshot, send and receive to do its work. It has the built-in ability to manage both local snapshots as well as remote copies by thinning them out as time progresses.

The ZnapZend configuration is stored as properties in the ZFS filesystem itself.

 

To install, copy this URL into the Install Plugin page in your unRAID 6 web GUI, or install it through Community Applications:

https://raw.githubusercontent.com/Steini1984/unRAID6-ZnapZend/master/unRAID6-ZnapZend.plg
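
Alternatively, the same .plg can be installed from a terminal with unRAID's plugin command (assuming you prefer the command line):

plugin install https://raw.githubusercontent.com/Steini1984/unRAID6-ZnapZend/master/unRAID6-ZnapZend.plg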

 

To run the program, I recommend using this command, which writes the log to a separate file:

znapzend --logto=/var/log/znapzend.log --daemonize

You can start ZnapZend automatically from the boot file, or you can create an empty file called auto_boot_on under /boot/config/plugins/unRAID6-ZnapZend/:

touch /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
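
If you go the boot file route instead, here is a minimal sketch, assuming the standard unRAID go file at /boot/config/go:

echo 'znapzend --logto=/var/log/znapzend.log --daemonize' >> /boot/config/go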

Documentation can be found at https://www.znapzend.org/ and I recommend using the examples as a starting point.

 

Here are some other links worth checking:

https://github.com/oetiker/znapzend/blob/master/doc/znapzendzetup.pod

https://www.lab-time.it/2018/06/30/zfs-backups-on-proxmox-with-znapzend/

 

For example, the following command enables automatic snapshots and keeps 24 snapshots a day (one every hour) for 7 days, 6 snapshots a day (one every 4 hours) for 30 days, and then a single snapshot a day for 90 days:

znapzendzetup create --recursive SRC '7d=>1h,30d=>4h,90d=>1d' tank/home
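
You can review the plans you have configured (they are stored as ZFS properties, as noted above) with:

znapzendzetup list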

 


Hello steini84 - I have this plugin installed and a couple of jobs for some datasets; however, I have noticed that the jobs are not running automatically. I usually have to run the job using the "znapzend --debug --runonce=zpool/dataset" command, and it will run successfully. Below is an example of one of the schedules that I have set up:

 

znapzendzetup create --recursive SRC '1week=>12hour' zpool/dataset DST:a '1week=>24hour' [email protected]:zpool/dataset DST:b '1week=>24hour' [email protected]:zpool/dataset

 

I can schedule a user script to run the znapzend --debug --runonce command on a schedule; however, I'm just wondering whether there are any other steps. I did set up the auto_boot_on file with touch.
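
For reference, these are the generic checks I run to see whether the daemon is actually up and writing to its log:

ps aux | grep -i znapzend
tail /var/log/znapzend.log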

 

Thanks! 


I was able to resolve my issue. It was caused by znapzend not finding a matching dataset between the source and one of the destinations. I ran znapzendztatz, deleted the snapshot on the destination along with the dataset, deleted the znapzend schedule on the source dataset, and then recreated everything from scratch, and it's working fine now.
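
Roughly, the cleanup looked like this (the dataset names below are placeholders, adjust to your pools):

znapzendztatz                           # check snapshot status on source and destinations
zfs destroy -r backuppool/dataset       # remove the mismatched destination dataset and its snapshots
znapzendzetup delete sourcepool/dataset # remove the old schedule from the source dataset
# ...then recreate everything with znapzendzetup create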

 

thanks!

 

https://github.com/oetiker/znapzend/issues/303

 


Hello! Thanks again for this awesome stuff.

 

If you look at my snapshots, you can see there are some missing snapshots. I've set it up as: daily, every 3 hours; weekly, every day.

I think something is off. Any idea where to start looking?

 

SSD/VMs/Ubuntu@2019-12-29-180000                   0B      -       25K  -
SSD/VMs/Ubuntu@2019-12-30-000000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-03-150000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-000000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-030000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-060000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-090000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-120000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-150000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-180000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-210000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-05-000000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-05-030000                   0B      -       25K  -
SSD/VMs/Windows@2019-12-29-180000                  0B      -       24K  -
SSD/VMs/Windows@2019-12-30-000000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-03-150000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-000000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-030000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-060000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-090000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-120000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-150000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-180000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-210000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-05-000000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-05-030000                  0B      -       24K  -
SSD/tmp@2019-12-29-180000                          0B      -       24K  -
SSD/tmp@2019-12-30-000000                          0B      -       24K  -
SSD/tmp@2020-01-03-150000                          0B      -       24K  -
SSD/tmp@2020-01-04-000000                          0B      -       24K  -
SSD/tmp@2020-01-04-030000                          0B      -       24K  -
SSD/tmp@2020-01-04-060000                          0B      -       24K  -
SSD/tmp@2020-01-04-090000                          0B      -       24K  -
SSD/tmp@2020-01-04-120000                          0B      -       24K  -
SSD/tmp@2020-01-04-150000                          0B      -       24K  -
SSD/tmp@2020-01-04-180000                          0B      -       24K  -
SSD/tmp@2020-01-04-210000                          0B      -       24K  -
SSD/tmp@2020-01-05-000000                          0B      -       24K  -
SSD/tmp@2020-01-05-030000                          0B      -       24K  -
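
For comparison, a plan matching "daily every 3 hours, weekly every day" would look something like this (the dataset name is taken from the listing above; the plan string is my reading of that schedule):

znapzendzetup create --recursive SRC '1d=>3h,1w=>1d' SSD/VMs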

 


@ezra Those don't look problematic to me, only the first few are missing, which is normal, as those may date from before the new snapshot plan was running. Also, after setting up a new plan you do have to end the znapzend process and restart it; sometimes this doesn't work as expected either, and a reboot or something sets it right without your realising it. Could be that.
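
If it is the daemon, one way to restart it after changing a plan is simply to stop the running process and start it again (a rough sketch):

pkill znapzend
znapzend --logto=/var/log/znapzend.log --daemonize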


@steini84 I've been having a problem where snapshots don't delete (i.e. a plan that keeps 10-minute snapshots for 2 hours and then drops to a lower frequency ends up creating endless 10-minute snapshots), and I have logged a report on GitHub here. That got me wondering whether I was running the latest version (which they will probably ask about). Your compiled version says it's 0.dev (znapzend --version), whereas GitHub says they're up to 0.19.1. I think yours is actually the same, since 0.19.1 came out in June last year. Can you confirm? Thanks.


OK, still running into some issues: "-bash: /usr/local/bin/znapzend: /usr/bin/perl: bad interpreter: No such file or directory" when I run znapzend --logto=/var/log/znapzend.log --daemonize. Below are some logs.

 

root@Omega:~# cat /var/log/syslog | head
Mar  9 20:41:30 Omega kernel: microcode: microcode updated early to revision 0x1d, date = 2018-05-11
Mar  9 20:41:30 Omega kernel: Linux version 4.19.107-Unraid (root@Develop) (gcc version 9.2.0 (GCC)) #1 SMP Thu Mar 5 13:55:57 PST 2020
Mar  9 20:41:30 Omega kernel: Command line: BOOT_IMAGE=/bzimage vfio-pci.ids=8086:2934,8086:2935,8086:293a, vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=6-7,14-15 pcie_acs_override=downstream initrd=/bzroot
Mar  9 20:41:30 Omega kernel: x86/fpu: x87 FPU will use FXSAVE
Mar  9 20:41:30 Omega kernel: BIOS-provided physical RAM map:
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009f378fff] usable
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f379000-0x000000009f38efff] reserved
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f38f000-0x000000009f3cdfff] ACPI data
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f3ce000-0x000000009fffffff] reserved
root@Omega:~# cat /var/log/znapzend.log | head
cat: /var/log/znapzend.log: No such file or directory
root@Omega:~# ls -la /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
-rw------- 1 root root 0 Mar  9 20:58 /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
root@Omega:~# ps aux | grep -i znapzend
root     18502  0.0  0.0   3912  2152 pts/1    S+   21:06   0:00 grep -i znapzend
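
A couple of generic checks for the missing interpreter (whether the plugin is supposed to ship its own perl is an assumption on my part):

which perl
ls -l /usr/bin/perl
# if perl is missing, re-installing the plugin from the .plg URL above may bring it back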

 


Hello.
I was using the auto-snapshot script and noticed that znapzend wouldn't create snapshots anymore if the dataset contained a snapshot made by the auto-snapshot script. I guess the two kinds of snapshots are conflicting.

 

[Sat Mar 21 06:43:17 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
[Sat Mar 21 06:49:22 2020] [info] starting work on backupSet zSSD/PROJECTS
[Sat Mar 21 06:49:22 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064445': Broken pipe
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064626': Broken pipe
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064921': Broken pipe
cannot send 'zSSD/PROJECTS': I/O error
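
If the auto-snap snapshot on the destination is disposable, one way to clear the conflict is to destroy it so the incremental send can find a common base again (snapshot name taken from the log above):

zfs destroy zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540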

 

I would like to move to znapzend; however, it doesn't seem to support shadow copy.

 

What I like about znapzend:

- Ease of setting snapshot frequency and retention rules.

- Robustness from replicating datasets between different pools... I lost a pool yesterday, and that's why I tried znapzend.

But shadow copy is a feature I had with the ZFS auto-snapshot script that I'm not willing to lose :/

 

 

Edit: In short, is there a way to back up ONLY to the external drive/pool?
I guess it's still more efficient than a dumb rsync (or is it?).
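
As far as I know, znapzend always keeps at least the local snapshots defined in the SRC plan, but the local retention can be kept very short while the destination pool holds the longer history. A rough sketch, with illustrative plan values and the dataset names from the log above:

znapzendzetup create SRC '1day=>6hour' zSSD/PROJECTS DST:a '1month=>1day' zHDD/BACKUP_Projects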

 

