steini84 Posted October 19, 2019

Hi,

Here is a companion plugin for the ZFS plugin for unRAID.

Quote: "ZnapZend is a ZFS centric backup tool to create snapshots and send them to backup locations. It relies on the ZFS tools snapshot, send and receive to do its work. It has the built-in ability to manage both local snapshots as well as remote copies by thinning them out as time progresses. The ZnapZend configuration is stored as properties in the ZFS filesystem itself."

To install, copy this URL into the Install Plugin page in your unRAID 6 web GUI, or install it through Community Applications:

https://raw.githubusercontent.com/Steini1984/unRAID6-ZnapZend/master/unRAID6-ZnapZend.plg

To run the program I recommend this command, which puts the log in a separate file:

znapzend --logto=/var/log/znapzend.log --daemonize

You can start ZnapZend automatically from the boot file, or you can create an empty file called auto_boot_on under /boot/config/plugins/unRAID6-ZnapZend/:

touch /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on

Documentation can be found at https://www.znapzend.org/ and I recommend using the examples as a starting point. Here are some links worth checking as well:

https://github.com/oetiker/znapzend/blob/master/doc/znapzendzetup.pod
https://www.lab-time.it/2018/06/30/zfs-backups-on-proxmox-with-znapzend/

For example, the following command takes automatic snapshots and keeps hourly snapshots (24 a day) for 7 days, then 4-hourly snapshots (6 a day) for 30 days, and then a single daily snapshot for 90 days:

znapzendzetup create --recursive SRC '7d=>1h,30d=>4h,90d=>1d' tank/home
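As a fuller sketch (the pool and dataset names here are only examples - adjust them to your own setup), this creates the same plan plus a thinned-out copy on a second pool, checks the stored configuration, and then starts the daemon:

# Hourly snapshots kept for 7 days, 4-hourly for 30 days, daily for 90 days;
# the DST plan thins the copies on the backup pool independently of the source.
znapzendzetup create --recursive \
  SRC '7d=>1h,30d=>4h,90d=>1d' tank/home \
  DST '90d=>1d' backup/home

# Show the stored configuration and the snapshots znapzend knows about
znapzendzetup list
znapzendztatz

# Start the daemon with its own log file
znapzend --logto=/var/log/znapzend.log --daemonize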
Marshalleq Posted October 19, 2019

Installed, configuring, and very happy you've taken the time to create this! Thank you very much @steini84.
Marshalleq Posted October 19, 2019

Quick question: does the mbuffer path need to be specified, or is it included and compiled in somehow? Thanks.
steini84 Posted November 13, 2019

Sorry, did not see this until now. But did you figure this out? If not, I could look into it - I don't use mbuffer, so I'm not sure.
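From a quick look at the znapzendzetup documentation, it appears the mbuffer path is passed at setup time with --mbuffer rather than auto-detected. Untested by me, so treat this as a sketch (the host and dataset names are made up):

# Tell znapzend where mbuffer lives and how big a buffer to use
znapzendzetup create --recursive --mbuffer=/usr/bin/mbuffer --mbuffersize=1G \
  SRC '7d=>1h' tank/data \
  DST '30d=>1d' root@backuphost:backup/data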
188pilas Posted December 13, 2019

hello steini84 - I have this plugin installed and a couple of jobs for some datasets; however, I have noticed that the jobs are not running automatically. I usually have to run a job with the "znapzend --debug --runonce=zpool/dataset" command, and then it runs successfully. Below is an example of one of the schedules that I have set up:

znapzendzetup create --recursive SRC '1week=>12hour' zpool/dataset DST:a '1week=>24hour' [email protected]:zpool/dataset DST:b '1week=>24hour' [email protected]:zpool/dataset

I can schedule a user script to run the znapzend --debug --runonce command on a schedule; I'm just wondering if there are any other steps. I did set up the touch auto_boot_on file. Thanks!
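In case it helps, this is roughly the user script I mean - just a sketch, and the dataset name is a placeholder for my real one:

#!/bin/bash
# Run a single znapzend pass for one dataset and append the output to a log
znapzend --debug --runonce=zpool/dataset >> /var/log/znapzend-runonce.log 2>&1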
steini84 Posted December 13, 2019

Hmm. Can you send me the output from these commands?

cat /var/log/syslog | head
cat /var/log/znapzend.log | head
ls -la /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
ps aux | grep -i znapzend
188pilas Posted December 14, 2019

I was able to resolve my issue. It was caused by a znapzend error about the source not having a matching dataset on one of the destinations. I ran znapzendztatz, then deleted the snapshot on the destination along with the dataset, then deleted the znapzend schedule on the source dataset, and then recreated everything from scratch. It's working fine now. Thanks!

https://github.com/oetiker/znapzend/issues/303
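For anyone who hits the same thing, these are roughly the steps I ran - the pool and dataset names are placeholders for my real ones, and the destroy step is destructive, so double-check it first:

# Show znapzend's view of the snapshots in each backup set
znapzendztatz

# Remove the mismatched dataset (and its snapshots) on the destination
zfs destroy -r backuppool/dataset

# Delete the znapzend schedule stored on the source dataset,
# then recreate it from scratch with znapzendzetup create
znapzendzetup delete zpool/dataset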
ezra Posted January 5, 2020

Hello! Thanks again for this awesome stuff. If you look at my snapshots, you'll see that some are missing. I've set it up as: daily, snapshot every 3 hours; weekly, snapshot every day. I think something is off - any idea where to start looking?

SSD/VMs/Ubuntu@2019-12-29-180000 0B - 25K -
SSD/VMs/Ubuntu@2019-12-30-000000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-03-150000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-000000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-030000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-060000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-090000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-120000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-150000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-180000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-04-210000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-05-000000 0B - 25K -
SSD/VMs/Ubuntu@2020-01-05-030000 0B - 25K -
SSD/VMs/Windows@2019-12-29-180000 0B - 24K -
SSD/VMs/Windows@2019-12-30-000000 0B - 24K -
SSD/VMs/Windows@2020-01-03-150000 0B - 24K -
SSD/VMs/Windows@2020-01-04-000000 0B - 24K -
SSD/VMs/Windows@2020-01-04-030000 0B - 24K -
SSD/VMs/Windows@2020-01-04-060000 0B - 24K -
SSD/VMs/Windows@2020-01-04-090000 0B - 24K -
SSD/VMs/Windows@2020-01-04-120000 0B - 24K -
SSD/VMs/Windows@2020-01-04-150000 0B - 24K -
SSD/VMs/Windows@2020-01-04-180000 0B - 24K -
SSD/VMs/Windows@2020-01-04-210000 0B - 24K -
SSD/VMs/Windows@2020-01-05-000000 0B - 24K -
SSD/VMs/Windows@2020-01-05-030000 0B - 24K -
SSD/tmp@2019-12-29-180000 0B - 24K -
SSD/tmp@2019-12-30-000000 0B - 24K -
SSD/tmp@2020-01-03-150000 0B - 24K -
SSD/tmp@2020-01-04-000000 0B - 24K -
SSD/tmp@2020-01-04-030000 0B - 24K -
SSD/tmp@2020-01-04-060000 0B - 24K -
SSD/tmp@2020-01-04-090000 0B - 24K -
SSD/tmp@2020-01-04-120000 0B - 24K -
SSD/tmp@2020-01-04-150000 0B - 24K -
SSD/tmp@2020-01-04-180000 0B - 24K -
SSD/tmp@2020-01-04-210000 0B - 24K -
SSD/tmp@2020-01-05-000000 0B - 24K -
SSD/tmp@2020-01-05-030000 0B - 24K -
steini84 Posted January 5, 2020

To quote the creators of the program: "If you have a question, head over to the ZnapZend section on serverfault.com and tag your question with 'znapzend'."

http://serverfault.com/questions/tagged/znapzend
Marshalleq Posted January 5, 2020

@ezra those don't look problematic to me - only the first few, which is normal, as those may predate the new snapshot plan running. Also, after setting up a new snapshot plan, you do have to end the znapzend process so it picks the plan up; sometimes this doesn't work as expected either, and a reboot or something sets it off right without your realising it. Could be that.
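Ending the process looks something like this - off the top of my head, so double-check before relying on it:

# Stop the running daemon, then start it again so it re-reads the plans
pkill -f znapzend
znapzend --logto=/var/log/znapzend.log --daemonize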
ezra Posted January 6, 2020

Thank you both. I destroyed all the snapshots before setting it up properly, and after creating the plan I did a reboot. It still misses some of the 3-hourlies... I'd really like to get that working. Anyway, I'll start from scratch again. Thanks for the input.
Marshalleq Posted January 6, 2020

That's a bit unusual, isn't it? Have you checked the logs? Probably some clues in there. I would have thought almost nothing could stop snapshots except maybe faulty hardware or I/O issues. Maybe check the system log too.
Marshalleq Posted February 4, 2020

@steini84 I've been having a problem where snapshots don't get deleted (i.e. a plan that says keep 10-minute snapshots for 2 hours, then at a lower frequency after that, ends up creating endless 10-minute snapshots), and I have logged a report on GitHub here. That got me wondering if I was running the latest version (which they will probably ask). Your compiled version reports 0.dev (znapzend --version), whereas GitHub says they're up to 0.19.1. I think yours is actually the same, since 0.19.1 came out in June last year. Can you confirm? Thanks.
steini84 Posted February 5, 2020

Yeah, it's 0.19.1.
188pilas Posted March 9, 2020

@steini84 I received the below error when updating to stable 6.8.3 - any suggestions? Going back to 6.7.2 for now.
188pilas Posted March 10, 2020

@steini84 I was able to get it working by deleting the old /boot/config/plugins/unRAID6-ZnapZend folder and downloading a new one.
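For anyone else, this is roughly what I did - a sketch, so adapt as needed:

# Remove the stale plugin folder from the flash drive
rm -rf /boot/config/plugins/unRAID6-ZnapZend

# Reinstall the plugin (Plugins > Install Plugin with the URL from the
# first post), then re-create the autostart flag if you had one
touch /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on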
188pilas Posted March 10, 2020

OK, still running into some issues. When I run znapzend --logto=/var/log/znapzend.log --daemonize I get:

-bash: /usr/local/bin/znapzend: /usr/bin/perl: bad interpreter: No such file or directory

Below are some logs.

root@Omega:~# cat /var/log/syslog | head
Mar 9 20:41:30 Omega kernel: microcode: microcode updated early to revision 0x1d, date = 2018-05-11
Mar 9 20:41:30 Omega kernel: Linux version 4.19.107-Unraid (root@Develop) (gcc version 9.2.0 (GCC)) #1 SMP Thu Mar 5 13:55:57 PST 2020
Mar 9 20:41:30 Omega kernel: Command line: BOOT_IMAGE=/bzimage vfio-pci.ids=8086:2934,8086:2935,8086:293a, vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=6-7,14-15 pcie_acs_override=downstream initrd=/bzroot
Mar 9 20:41:30 Omega kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 9 20:41:30 Omega kernel: BIOS-provided physical RAM map:
Mar 9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
Mar 9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009f378fff] usable
Mar 9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f379000-0x000000009f38efff] reserved
Mar 9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f38f000-0x000000009f3cdfff] ACPI data
Mar 9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f3ce000-0x000000009fffffff] reserved

root@Omega:~# cat /var/log/znapzend.log | head
cat: /var/log/znapzend.log: No such file or directory

root@Omega:~# ls -la /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
-rw------- 1 root root 0 Mar 9 20:58 /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on

root@Omega:~# ps aux | grep -i znapzend
root 18502 0.0 0.0 3912 2152 pts/1 S+ 21:06 0:00 grep -i znapzend
steini84 Posted March 10, 2020

The Perl install is failing. Will look into it tonight.
188pilas Posted March 10, 2020

Quote (steini84): "The Perl install is failing. Will look into it tonight."

Awesome! Thanks.
steini84 Posted March 10, 2020

Should be fixed now.
Marshalleq Posted March 10, 2020

I have endless problems with this not deleting snapshots when it's supposed to, so I thought I'd just update in case that refreshes something. However, I just get the error below - the first plugin error I've ever had. Is it because I'm not running the beta?
steini84 Posted March 10, 2020

No, I accidentally pushed a broken update, but it's correct now.
Marshalleq Posted March 10, 2020

Coolness. Thanks for that.
dboris Posted March 21, 2020

Hello. I was using the auto-snapshot script and noticed that znapzend wouldn't create snapshots anymore if the dataset contained a snapshot made by the auto-snapshot script. I guess the two kinds of snapshots are conflicting:

[Sat Mar 21 06:43:17 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
[Sat Mar 21 06:49:22 2020] [info] starting work on backupSet zSSD/PROJECTS
[Sat Mar 21 06:49:22 2020] [debug] sending snapshots from zSSD/PROJECTS to zHDD/BACKUP_Projects
cannot restore to zHDD/BACKUP_Projects@zfs-auto-snap_01-2020-03-21-0540: destination already exists
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064445': Broken pipe
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064626': Broken pipe
warning: cannot send 'zSSD/PROJECTS@2020-03-21-064921': Broken pipe
cannot send 'zSSD/PROJECTS': I/O error

I would like to move to znapzend; however, it doesn't seem to support shadow copies. What I like about znapzend:
- Ease of use when setting snapshot frequency and retention rules.
- Robustness from saving datasets between different pools... I lost a pool yesterday, and that's why I tried znapzend.

But shadow copies are an option I had with the zfs auto-snapshot script that I'm not willing to lose.

Edit: In short, is there a way to back up ONLY to the external drive/pool? I guess it's still more efficient than a dumb rsync (or is it?)
steini84 Posted March 26, 2020

I don't think that is possible, because znapzend needs both a "from" and a "to" snapshot on the source to send the incremental difference. But I recommend that you go over to https://www.znapzend.org/ and check the "get help" section.
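That said, you can keep the source side very thin and let the destination hold the long history. A sketch along those lines, using the pool names from your post (untested by me):

# Keep only one day of hourly snapshots locally, but 90 days of
# daily copies on the backup pool - each side is thinned by its own plan.
znapzendzetup create --recursive \
  SRC '1d=>1h' zSSD/PROJECTS \
  DST '90d=>1d' zHDD/BACKUP_Projects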