Ricin Posted July 23, 2023
I can't help, but I get exactly the same error when trying to install it from Apps.
mysteryman Posted July 24, 2023
Same here. It seems the issue is in the plugin config referencing a file that doesn't exist. I opened a ticket on GitHub.
steini84 (author) Posted July 25, 2023
Forgot to update the reference to the new mbuffer package, so the install failed on new installs (updates were fine). Fixed now.
Sent from my iPhone using Tapatalk
Masterwishx Posted July 25, 2023
I'm not sure it's related here, but I'm using @SpaceInvaderOne's user script for ZFS snapshots and everything worked fine until today, when I got a warning at the step "Create the snapshots on the source directory [system_cache_ssd/appdata] using Sanoid", and sanoid + syncoid now take 15 min instead of 3 min.
Masterwishx Posted July 25, 2023
5 hours ago, steini84 said:
Forgot to update the reference to the new mbuffer package, so the install failed on new installs (updates were fine). Fixed now.
Should we wait for a plugin update, or just download the *.plg file and replace the old one?
Masterwishx Posted July 27, 2023
On 7/25/2023 at 8:24 PM, Masterwishx said:
I'm not sure it's related here, but I'm using @SpaceInvaderOne's user script for ZFS snapshots [...] and sanoid + syncoid now take 15 min instead of 3 min.
@steini84 I posted in the Unassigned Devices topic, and the author said:
"The reason this showed up is that you had a PC with the UD GUI open and apparently your server was working very hard. The command that timed out is UD trying to read the mounts file to determine the mounted status of the UD devices. Reading that file should not take longer than 1 second. I will increase the timeout a bit to try to keep this from happening, but it is harmless other than to show UD could potentially be hanging. UD will catch up on the mounted status at the next background scan. Maybe ask the plugin author if they can adjust the priority."
Gorf Posted August 16, 2023 (edited)
Since about August 6 I'm getting warnings about mbuffer not being found in my daily replication logs:
WARN: mbuffer not available on source ssh:-S /tmp/syncoid-nas-backup-nas-backup@nas-1692147601 nas-backup@nas - sync will continue without source buffering.
I do regular updates of Unraid including all apps. My current version of sanoid is 2.2.0a.
Masterwishx Posted August 26, 2023 Share Posted August 26, 2023 also have this issue Quote Link to comment
Octalbush Posted October 8, 2023
Does this plugin work on 6.12.4 with ZFS pools created via the Unraid method rather than the old ZFS plugin? I'm looking for an elegant solution for snapshots on default Unraid ZFS.
Octalbush Posted October 20, 2023
On 8/26/2023 at 1:29 PM, Masterwishx said:
Also have this issue.
I'm also getting this warning; does anyone have a workaround? I tried installing mbuffer from the source that this plugin downloads to the root of Unraid, however with no gcc I can't compile. @steini84 Do you have a solution?
steini84 (author) Posted October 23, 2023
Quote (Octalbush): I'm also getting this warning; does anyone have a workaround? I tried installing mbuffer from the source that this plugin downloads to the root of Unraid, however with no gcc I can't compile. @steini84 Do you have a solution?
There seems to be a problem with the mbuffer package. You can install the older version, but all the Slackware packages I found for the latest version are the same. To install the older package you can use:
wget https://github.com/Steini1984/unRAID6-Sainoid/raw/167f5ad3dc1941ef7670efcd21fbb4e6e6ad8587/packages/mbuffer.20200505.x86_64.tgz
installpkg mbuffer.20200505.x86_64.tgz
Sent from my iPhone using Tapatalk
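One caveat worth adding: Unraid runs from RAM, so a package installed with installpkg is gone after a reboot. A minimal sketch of making the workaround stick, assuming the usual Unraid convention that packages placed in /boot/extra are reinstalled automatically at boot (verify that path on your own install):

# keep the package on the flash drive so it survives reboots (/boot/extra is an assumption)
wget -P /boot/extra https://github.com/Steini1984/unRAID6-Sainoid/raw/167f5ad3dc1941ef7670efcd21fbb4e6e6ad8587/packages/mbuffer.20200505.x86_64.tgz

# install it for the current boot and confirm the binary works
installpkg /boot/extra/mbuffer.20200505.x86_64.tgz
mbuffer -V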
Marshalleq Posted January 14, 2024
@steini84 How do you find syncoid / sanoid for removing older snapshots that are over the allowance? I.e. if we did hourly for a week, then daily for a month, then monthly for a year, we should expect only 4 weeks of daily backups, right? This is something that I never got working in znapzend; it seemed to just keep everything indefinitely. BTW, I never did get znapzend working after ZFS went built-in; it only works for a few hours or days and then stops. So as per your suggestion, I am looking at syncoid / sanoid (still figuring out the difference).
steini84 (author) Posted January 15, 2024
13 hours ago, Marshalleq said:
@steini84 How do you find syncoid / sanoid for removing older snapshots that are over the allowance? I.e. if we did hourly for a week, then daily for a month, then monthly for a year, we should expect only 4 weeks of daily backups, right? [...]
It works perfectly for me in sanoid/syncoid, and I might need to deprecate the znapzend plugin since I have not used it at all since my migration. This works exactly as you described, and here you can see an example of my production profile:

[template_production]
frequently = 4
hourly = 24
daily = 30
monthly = 0
yearly = 0
autosnap = yes
autoprune = yes

and here you can see the current snapshots:

root@Unraid:~# zfs list -t snapshot ssd/Docker/Freshrss
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
ssd/Docker/Freshrss@autosnap_2023-12-16_23:59:15_daily       1.41M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-17_23:59:03_daily       1.16M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-18_23:59:05_daily       1.12M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-19_23:59:18_daily       1.16M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-20_23:59:11_daily       1.19M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-21_23:59:17_daily       1.19M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-22_23:59:09_daily       1.13M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-23_23:59:03_daily       1.13M      -  17.2M  -
ssd/Docker/Freshrss@autosnap_2023-12-24_23:59:16_daily        864K      -  17.1M  -
ssd/Docker/Freshrss@autosnap_2023-12-25_23:59:08_daily       1.05M      -  18.3M  -
ssd/Docker/Freshrss@autosnap_2023-12-26_23:59:12_daily       1.07M      -  18.3M  -
ssd/Docker/Freshrss@autosnap_2023-12-27_23:59:13_daily       1.15M      -  18.4M  -
ssd/Docker/Freshrss@autosnap_2023-12-28_23:59:06_daily       1.18M      -  18.4M  -
ssd/Docker/Freshrss@autosnap_2023-12-29_23:59:20_daily       1.09M      -  18.4M  -
ssd/Docker/Freshrss@autosnap_2023-12-30_23:59:18_daily        904K      -  18.4M  -
ssd/Docker/Freshrss@autosnap_2023-12-31_23:59:03_daily       1.06M      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-01_23:59:03_daily       1.08M      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-02_23:59:07_daily       1.17M      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-03_23:59:10_daily       1.15M      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-04_23:59:09_daily       1.15M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-05_23:59:06_daily       1.33M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-06_23:59:23_daily       1.20M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-07_23:59:20_daily       1.14M      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-08_23:59:15_daily       1.09M      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-09_23:59:12_daily       1.09M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-10_23:59:21_daily       1.10M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-11_23:59:21_daily       1.26M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-12_23:59:02_daily       1.14M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-13_23:59:25_daily       1.04M      -  19.7M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_13:00:09_hourly       944K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_14:00:17_hourly       824K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_15:00:11_hourly       840K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_16:00:23_hourly       856K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_17:00:32_hourly       856K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_18:00:04_hourly       752K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_19:00:16_hourly       768K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_20:00:10_hourly       792K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_21:00:36_hourly       800K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_22:00:08_hourly       760K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_23:00:45_hourly       648K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-14_23:59:21_daily        144K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_00:00:11_hourly       144K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_01:00:31_hourly       640K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_02:00:28_hourly       624K      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_03:00:01_hourly       616K      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_04:00:33_hourly       616K      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_05:00:10_hourly       568K      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_06:00:28_hourly       680K      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_07:00:39_hourly       768K      -  19.5M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_08:00:25_hourly       744K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_09:00:32_hourly       736K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_10:00:08_hourly       888K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_11:00:36_hourly       880K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_12:00:19_hourly         0B      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_12:00:19_frequently     0B      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_12:15:17_frequently   184K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_12:30:21_frequently   200K      -  19.6M  -
ssd/Docker/Freshrss@autosnap_2024-01-15_12:45:16_frequently     0B      -  19.6M  -

The system is really configurable and the documentation is really good: https://github.com/jimsalterjrs/sanoid
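For context, the template alone doesn't snapshot anything; a dataset section has to reference it. A minimal sanoid.conf sketch, assuming the stock /etc/sanoid/sanoid.conf location (the plugin may keep the file elsewhere) and using the dataset from the listing above as a placeholder:

# /etc/sanoid/sanoid.conf - location is an assumption, adjust for the plugin
# 'recursive = yes' covers child datasets such as ssd/Docker/Freshrss
[ssd/Docker]
use_template = production
recursive = yes

[template_production]
frequently = 4
hourly = 24
daily = 30
monthly = 0
yearly = 0
autosnap = yes
autoprune = yes

Sanoid then needs to run on a schedule; the upstream docs suggest a cron entry that runs /usr/local/sbin/sanoid --cron every few minutes, which takes new snapshots and prunes expired ones in one pass.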
Marshalleq Posted February 10, 2024
On 1/16/2024 at 2:20 AM, steini84 said:
It works perfectly for me in sanoid/syncoid, and I might need to deprecate the znapzend plugin since I have not used it at all since my migration. [...]
Nice, I've got this started now using Space Invader's script, but it's a bit weird and limiting, so I will convert to the method you're using. I'm hoping I can run the replication on a different schedule to the snapshots, like znapzend did.
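On running replication on a different schedule than the snapshots: sanoid (snapshots) and syncoid (replication) are separate commands, so they can be scheduled independently. A rough sketch of crontab-style entries, assuming the plugin's /usr/local/sbin paths and placeholder pool/dataset names (on Unraid the same two commands can just as well be wrapped in separate User Scripts schedules):

# snapshots: take and prune every 15 minutes according to sanoid.conf
*/15 * * * * /usr/local/sbin/sanoid --cron

# replication: push the dataset tree to a backup pool once a night, independent of the snapshot cadence
0 3 * * * /usr/local/sbin/syncoid -r ssd/Docker backuppool/Docker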
d3m3zs Posted February 22, 2024
On 1/15/2024 at 3:20 PM, steini84 said:
It works perfectly for me in sanoid/syncoid, and I might need to deprecate the znapzend plugin since I have not used it at all since my migration. [...]
Please correct me if I am wrong, but Sanoid just provides some syntactic sugar on top of plain ZFS, so even after installing Sanoid I still need to write a script that sends snapshots to the backup storage?
steini84 (author) Posted February 22, 2024
No, Sanoid does a lot, and you only need to write the config file.
Sent from my iPhone using Tapatalk
d3m3zs Posted February 22, 2024
6 hours ago, steini84 said:
No, Sanoid does a lot, and you only need to write the config file.
I thought I needed a script that would be executed to run the sanoid jobs.
On 2/10/2024 at 11:21 PM, Marshalleq said:
Space Invader's script, but it's a bit weird and limiting
Yesterday I used his script; it works well except for one issue: when I want to replicate a single child dataset it fails, because I pass a path like "nvme/downloads/Kopia" and the script doesn't expect a second "/" in the path and can't parse it. It's easy to fix: in the destination path, just replace the second "/" with "_" by regex (for example). But honestly I don't like his script because it is over-engineered: there are many conditions and verifications like "is Sanoid installed, is ZFS installed", although it is parametrized well. In my opinion the script should be much simpler:
- take only 2 parameters as arguments (source and destination)
- create snapshots and a replica of the source dataset according to policy, following the retention logic
- send to the destination, with a notification
In my opinion it should be just a few commands, and only one script that I can store in a local folder and use from any terminal or from scheduler scripts, like this:
zfs_snap_replica.sh "cache/docker" "disk13/backups/docker"
Probably I will create such a script in the next few days, when I am free.
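A minimal sketch of the wrapper described above, assuming sanoid/syncoid from this plugin live in /usr/local/sbin and that Unraid's stock notify script is at /usr/local/emhttp/webGui/scripts/notify (the script name and paths are illustrative, not an existing tool):

#!/bin/bash
# zfs_snap_replica.sh - hypothetical wrapper, not the plugin's own script
# usage: zfs_snap_replica.sh <source_dataset> <destination_dataset>
set -euo pipefail

src="$1"
dst="$2"

# take/prune snapshots on the source according to sanoid.conf
/usr/local/sbin/sanoid --take-snapshots --prune-snapshots

# replicate the source dataset (and children) to the destination,
# pruning target snapshots that no longer exist on the source
/usr/local/sbin/syncoid -r --delete-target-snapshots "$src" "$dst"

# Unraid GUI notification (path assumed from stock Unraid; adjust if different)
/usr/local/emhttp/webGui/scripts/notify -s "ZFS replication finished" \
  -d "Replicated $src to $dst" -i "normal"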
dcooper Posted March 9, 2024 (edited)
I'm having issues running replication; I get this error, which results in everything failing during the initial replication:
INFO: Sending oldest full snapshot data/appdata@2024-01-03-171850 (~ 88 KB) to new target filesystem:
cannot receive new filesystem stream: pool must be upgraded to receive this stream.
warning: cannot send 'data/appdata@2024-01-03-171850': signal received
CRITICAL ERROR: zfs send 'data/appdata'@'2024-01-03-171850' | pv -p -t -e -r -b -s 90224 | zfs receive -F 'nvr/zfs_backups/data_appdata' failed: 256 at /usr/local/sbin/syncoid line 549.
INFO: Sending oldest full snapshot data/appdata/MKVToolNix@2024-01-03-171850 (~ 1.6 MB) to new target filesystem:
cannot open 'nvr/zfs_backups/data_appdata': dataset does not exist
This is the command being used:
/usr/local/sbin/syncoid -r --force-delete --delete-target-snapshots $source_path $destination_path
$source_path is data/appdata, where "data" is the ZFS pool and "appdata" is a dataset (with child datasets for each docker). $destination_path is nvr/zfs_backups/data_appdata, where "nvr" is the ZFS pool and "zfs_backups" is the dataset. zfs_backups is empty, since this is first-time replication.
The full script I'm using is here; it's a greatly simplified/cleaned-up version of Space Invader's script: https://github.com/freeskier93/unraid-zfs-snapshot-replication/blob/main/zfs_snapshot_replication.sh
dcooper Posted March 30, 2024
On 3/9/2024 at 11:50 AM, dcooper said:
I'm having issues running replication; I get this error, which results in everything failing during the initial replication: "cannot receive new filesystem stream: pool must be upgraded to receive this stream." [...]
Had some time to look into this, and the fix was easy. For whatever reason not all the ZFS features were enabled on the destination pool. The command "zpool upgrade" showed what features weren't enabled, then the command "zpool upgrade -a" upgraded all the pools. After running that, no issues with replication.
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-upgrade.8.html
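For anyone hitting the same "pool must be upgraded" message, the check and fix boil down to a couple of commands. The pool name nvr is just the example from the posts above; note that zpool upgrade -a touches every pool, so upgrading only the destination pool is the more cautious route:

# list pools whose on-disk format / feature flags are not fully enabled
zpool upgrade

# show which feature flags are disabled on the destination pool
zpool get all nvr | grep feature@ | grep disabled

# enable all supported features on just that pool (or 'zpool upgrade -a' for all pools)
zpool upgrade nvr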
SoerenS Posted April 15, 2024 (edited)
On 10/23/2023 at 3:25 PM, steini84 said:
There seems to be a problem with the mbuffer package. You can install the older version, but all the Slackware packages I found for the latest version are the same. [...]
I stumbled over the problem with the broken mbuffer package. The package fetched by the plugin contains only the sources of mbuffer, and I was not able to find a current version packaged for Slackware. But there is a build script available: https://slackbuilds.org/repository/15.0/system/mbuffer/
I built the latest version of mbuffer (20240107) and packaged it with the linked SlackBuild for Slackware. I think it would be great if the mbuffer package in the plugin were updated so that a working version is available out of the box.
The packaged version of mbuffer is in the attachment, but if you like, I can also create a pull request on GitHub with the updates for the plugin.
mbuffer-20240107-x86_64-1_SBo.tgz
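For reference, the usual SlackBuilds.org workflow for a package like this looks roughly as follows; the URLs, version and output filename are assumptions to adapt, and it needs a machine with a compiler, since stock Unraid ships without gcc:

# on a Slackware (or devpack-equipped) box with gcc installed,
# grab the SlackBuild and the mbuffer source release it should package
wget https://slackbuilds.org/slackbuilds/15.0/system/mbuffer.tar.gz
tar xf mbuffer.tar.gz && cd mbuffer
wget https://www.maier-komor.de/software/mbuffer/mbuffer-20240107.tgz

# build the Slackware package and install it
VERSION=20240107 ./mbuffer.SlackBuild
installpkg /tmp/mbuffer-20240107-x86_64-1_SBo.tgz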
Nogami Posted April 20, 2024
On 10/23/2023 at 6:25 AM, steini84 said:
There seems to be a problem with the mbuffer package. You can install the older version [...]
Thanks for this, it worked fine on my main system and on the backup to remove the mbuffer error.
Masterwishx Posted April 30, 2024
On 4/15/2024 at 5:40 PM, SoerenS said:
But if you like, I can also create a pull request on GitHub with the updates for the plugin.
Please do, if you can.
Masterwishx Posted April 30, 2024 (edited)
@steini84 Can this (updated) version be used in the plugin?
Masterwishx Posted May 1, 2024
On 4/15/2024 at 5:40 PM, SoerenS said:
I built the latest version of mbuffer (20240107) and packaged it with the linked SlackBuild for Slackware. I think it would be great if the mbuffer package in the plugin were updated so that a working version is available out of the box.
I made the PR with your file; I'm just not sure if it needs to be recompiled for /usr/local instead of /usr. Waiting for @steini84 to respond.