ZnapZend plugin for unRAID

Posted

Hi,

 

Here is a companion plugin for the ZFS plugin for unRAID.

 

Quote

ZnapZend is a ZFS centric backup tool to create snapshots and send them to backup locations. It relies on the ZFS tools snapshot, send and receive to do its work. It has the built-in ability to manage both local snapshots as well as remote copies by thinning them out as time progresses.

The ZnapZend configuration is stored as properties in the ZFS filesystem itself.

 

To install, copy this URL into the Install Plugin page in your unRAID 6 web GUI, or install it through Community Applications:

https://raw.githubusercontent.com/Steini1984/unRAID6-ZnapZend/master/unRAID6-ZnapZend.plg

 

To run the program, I recommend using this command, which writes the log to a separate file:

znapzend --logto=/var/log/znapzend.log --daemonize

You can start ZnapZend automatically from the boot file, or you can create an empty file called auto_boot_on under /boot/config/plugins/unRAID6-ZnapZend/:

touch /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
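As a sketch of the boot-file alternative mentioned above (assuming unRAID's standard /boot/config/go boot script; the flag-file method is the one the plugin provides):

```shell
# /boot/config/go (unRAID boot script) - append this line to start the
# znapzend daemon on every boot, as an alternative to the auto_boot_on
# flag file:
znapzend --logto=/var/log/znapzend.log --daemonize
```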

Documentation can be found at https://www.znapzend.org/ and I recommend using the examples as a starting point.

 

Here are some other links worth checking:

https://github.com/oetiker/znapzend/blob/master/doc/znapzendzetup.pod

https://www.lab-time.it/2018/06/30/zfs-backups-on-proxmox-with-znapzend/

 

For example, the following command enables automatic snapshots and keeps 24 snapshots a day for 7 days, 6 snapshots a day for 30 days, and then a single daily snapshot for 90 days:

znapzendzetup create --recursive SRC '7d=>1h,30d=>4h,90d=>1d' tank/home
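To get a feel for what a plan string implies, here is a small bash sketch (my own, not part of znapzend) that parses the tiers of the plan above and estimates how many snapshots each tier retains:

```shell
#!/bin/bash
# Hypothetical helper (not part of znapzend): estimate how many snapshots
# each tier of a znapzend plan retains. '7d=>1h' means "for 7 days, keep
# one snapshot per hour".
plan='7d=>1h,30d=>4h,90d=>1d'

to_seconds() {            # convert a token like 7d or 4h to seconds
  local n=${1%?}
  case ${1: -1} in
    h) echo $(( n * 3600 )) ;;
    d) echo $(( n * 86400 )) ;;
    w) echo $(( n * 604800 )) ;;
  esac
}

total=0
IFS=',' read -ra tiers <<< "$plan"
for tier in "${tiers[@]}"; do
  keep=${tier%%=>*}       # retention window, e.g. 7d
  every=${tier##*=>}      # snapshot interval, e.g. 1h
  count=$(( $(to_seconds "$keep") / $(to_seconds "$every") ))
  echo "keep $keep at one per $every -> ~$count snapshots"
  total=$(( total + count ))
done
echo "total retained: ~$total"
```

For the plan above this works out to roughly 168 + 180 + 90 snapshots retained at steady state, which is worth knowing before pointing it at a small pool.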

 

Edited by steini84


Installed, configuring and very happy you've taken the time to create this!  Thankyou very much @steini84.

 

Marshalleq

Quick question, does mbuffer path need to be specified, or is it included and compiled in somehow?

 

Thanks.

  • 4 weeks later...
  • Author

Sorry, I did not see this until now. Did you figure it out? If not, I can look into it - I don't use mbuffer, so I'm not sure :)

  • 1 month later...

Hello steini84 - I have this plugin installed and a couple of jobs for some datasets, but I have noticed that the jobs are not running automatically. I usually have to run the job with the "znapzend --debug --runonce=zpool/dataset" command, and then it runs successfully. Below is an example of one of the schedules I have set up:

 

znapzendzetup create --recursive SRC '1week=>12hour' zpool/dataset DST:a '1week=>24hour' [email protected]:zpool/dataset DST:b '1week=>24hour' [email protected]:zpool/dataset

 

I could set up a user script on a schedule to run the znapzend --debug --runonce command, but I'm wondering if there are any other steps I'm missing. I did create the auto_boot_on file with touch.

 

Thanks! 

  • Author

Hmm

 

Can you send me the output from these commands?

Quote

cat /var/log/syslog | head
cat /var/log/znapzend.log | head
ls -la /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
ps aux | grep -i znapzend

 

I was able to resolve my issue. It was caused by znapzend not having a matching dataset between the source and one of the destinations. I ran znapzendztatz, deleted the snapshot and the dataset on the destination, deleted the znapzend schedule on the source dataset, and then recreated everything from scratch. It's working fine now.

 

thanks!

 

https://github.com/oetiker/znapzend/issues/303

 

  • 3 weeks later...

Hello! Thanks again for this awesome stuff.

 

If you look at my snapshots, you'll see some are missing. I've set it up to snapshot every 3 hours for a day, and daily for a week.

I think something is off; any idea where to start looking?

 

SSD/VMs/Ubuntu@2019-12-29-180000                   0B      -       25K  -
SSD/VMs/Ubuntu@2019-12-30-000000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-03-150000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-000000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-030000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-060000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-090000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-120000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-150000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-180000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-04-210000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-05-000000                   0B      -       25K  -
SSD/VMs/Ubuntu@2020-01-05-030000                   0B      -       25K  -
SSD/VMs/Windows@2019-12-29-180000                  0B      -       24K  -
SSD/VMs/Windows@2019-12-30-000000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-03-150000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-000000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-030000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-060000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-090000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-120000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-150000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-180000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-04-210000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-05-000000                  0B      -       24K  -
SSD/VMs/Windows@2020-01-05-030000                  0B      -       24K  -
SSD/tmp@2019-12-29-180000                          0B      -       24K  -
SSD/tmp@2019-12-30-000000                          0B      -       24K  -
SSD/tmp@2020-01-03-150000                          0B      -       24K  -
SSD/tmp@2020-01-04-000000                          0B      -       24K  -
SSD/tmp@2020-01-04-030000                          0B      -       24K  -
SSD/tmp@2020-01-04-060000                          0B      -       24K  -
SSD/tmp@2020-01-04-090000                          0B      -       24K  -
SSD/tmp@2020-01-04-120000                          0B      -       24K  -
SSD/tmp@2020-01-04-150000                          0B      -       24K  -
SSD/tmp@2020-01-04-180000                          0B      -       24K  -
SSD/tmp@2020-01-04-210000                          0B      -       24K  -
SSD/tmp@2020-01-05-000000                          0B      -       24K  -
SSD/tmp@2020-01-05-030000                          0B      -       24K  -

 

@ezra those don't look problematic to me. Only the first few - which is normal, as those may predate the new snapshot schedule. Also, after setting up a new schedule you do have to restart the process; sometimes this doesn't work as expected either, and a reboot or something sets it right without your realising it. Could be that.

Thank you both. I destroyed all snapshots before setting it up properly, and after the creation I did a reboot. It still misses some of the 3-hourlies... Anyway, I'll start from scratch again. Thanks for the input.

That's a bit unusual, isn't it? Have you checked the logs? There are probably some clues in there. I would have thought almost nothing could stop snapshots except maybe faulty hardware or I/O issues. Maybe check the system log too.


Sent from my iPhone using Tapatalk

  • 5 weeks later...

@steini84 I've been having a problem where snapshots don't get deleted (i.e. a plan that keeps 10-minute snapshots for 2 hours and then drops to a lower frequency ends up creating endless 10-minute snapshots), and I have logged a report on GitHub here.  That got me wondering whether I was running the latest version (which they will probably ask).  Your compiled version reports 0.dev (znapzend --version), whereas GitHub says they're up to 0.19.1.  I think yours is actually the same, since 0.19.1 came out in June last year.  Can you confirm?  Thanks.

Edited by Marshalleq

  • 1 month later...

@steini84 I received the below error when updating to stable 6.8.3... Any suggestions? Going back to 6.7.2 for now.

Capture.PNG

@steini84 I was able to get it working by deleting the old /boot/config/plugins/unRAID6-ZnapZend folder and downloading a new one.

OK, still running into some issues... I get "-bash: /usr/local/bin/znapzend: /usr/bin/perl: bad interpreter: No such file or directory" when I run znapzend --logto=/var/log/znapzend.log --daemonize. Below are some logs.

 

root@Omega:~# cat /var/log/syslog | head
Mar  9 20:41:30 Omega kernel: microcode: microcode updated early to revision 0x1d, date = 2018-05-11
Mar  9 20:41:30 Omega kernel: Linux version 4.19.107-Unraid (root@Develop) (gcc version 9.2.0 (GCC)) #1 SMP Thu Mar 5 13:55:57 PST 2020
Mar  9 20:41:30 Omega kernel: Command line: BOOT_IMAGE=/bzimage vfio-pci.ids=8086:2934,8086:2935,8086:293a, vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=6-7,14-15 pcie_acs_override=downstream initrd=/bzroot
Mar  9 20:41:30 Omega kernel: x86/fpu: x87 FPU will use FXSAVE
Mar  9 20:41:30 Omega kernel: BIOS-provided physical RAM map:
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009f378fff] usable
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f379000-0x000000009f38efff] reserved
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f38f000-0x000000009f3cdfff] ACPI data
Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f3ce000-0x000000009fffffff] reserved
root@Omega:~# cat /var/log/znapzend.log | head
cat: /var/log/znapzend.log: No such file or directory
root@Omega:~# ls -la /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
-rw------- 1 root root 0 Mar  9 20:58 /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
root@Omega:~# ps aux | grep -i znapzend
root     18502  0.0  0.0   3912  2152 pts/1    S+   21:06   0:00 grep -i znapzend

 

  • Author

Perl install is failing. Will look into it tonight [emoji123]


Sent from my iPhone using Tapatalk

Awesome! Thanks

Sent from my SM-G955U using Tapatalk

  • Author
20 minutes ago, 188pilas said:

Awesome! Thanks

Sent from my SM-G955U using Tapatalk
 

Should be fixed now

I have endless problems with this not deleting snapshots when it's supposed to.  So I thought I'd just update in case it refreshes something.  

However, I just get the below - first plugin error I've ever had.  Is it because I'm not running the beta?

Screenshot2020-03-11at9_16_56AM.png

 

  • Author

No I accidentally pushed a broken update , but it’s correct now


Sent from my iPhone using Tapatalk

Coolness.  Thanks for that.

  • 2 weeks later...

Hello.
I was using the auto-snapshot script and noticed that znapzend wouldn't create snapshots anymore if the dataset contained a snapshot made by the auto-snapshot script. I guess the two sets of snapshots are conflicting.

 

[Sat Mar 21 06:43:17 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
[Sat Mar 21 06:49:22 2020] [info] starting work on backupSet zSSD/PROJECTS
[Sat Mar 21 06:49:22 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064445': Broken pipe
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064626': Broken pipe
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064921': Broken pipe
cannot send 'zSSD/PROJECTS': I/O error
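For reference, the usual way out of "destination already exists" is to remove the foreign snapshot from the destination so the incremental send can find a common base again (this matches the resolution described earlier in the thread). A hedged sketch - destructive, hence the dry-run guard; the snapshot name is taken from the log above:

```shell
# Remove the auto-snapshot that exists only on the destination, so
# zfs receive can resume from a snapshot both sides share.
# DRY_RUN=1 only prints the command; set it to 0 to actually destroy.
DRY_RUN=1
snap='zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540'

if [ "$DRY_RUN" -eq 1 ]; then
  echo "would run: zfs destroy $snap"
else
  zfs destroy "$snap"
fi
```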

 

I would like to move to znapzend; however, it doesn't seem to support shadow copy.

 

What I like about znapzend:

- Ease of setting snapshot frequency and retention rules.

- Robustness from saving datasets between different pools... I lost a pool yesterday, and that's why I tried znapzend.

But shadow copy is a feature I had with the zfs-auto-snapshot script that I'm not willing to lose :/

 

 

Edit: In short, is there a way to back up ONLY to the external drive/pool?
I guess it's still more efficient than a dumb rsync (or is it?)

 

Edited by dboris
