steini84

Community Developer
Everything posted by steini84

  1. https://www.dropbox.com/s/wmzxjyzqs9b9fxz/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz?dl=0
     https://www.dropbox.com/s/3onv1qur26yxb7n/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5?dl=0
     You turn on trim with:
     zpool set autotrim=on POOLNAME
     Then you can run zpool trim POOLNAME regularly, maybe after a scrub. I do a scrub, then a trim, every month via the User Scripts plugin.
     ref: https://github.com/openzfs/zfs/commit/1b939560be5c51deecf875af9dada9d094633bf7
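     Something like this would work as the monthly User Scripts job; a minimal sketch, where the pool name SSD is just an example (use your own pool name):
     #!/bin/bash
     # Monthly ZFS maintenance: scrub first, then trim once the scrub has finished
     zpool scrub SSD
     # wait until "scrub in progress" disappears from zpool status before trimming
     while zpool status SSD | grep -q "scrub in progress"; do
       sleep 300
     done
     zpool trim SSD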
  2. Well, to be fair, you are running a beta version of unRAID with a release candidate of ZFS, so things like this are more likely with a combination like that. But just a few hours ago there was a new RC for ZFS and I'm building it now. Should be up in 1-2 hours. It's online now: https://github.com/openzfs/zfs/releases/zfs-2.0.0-rc3 The best bet is to run this command and reboot: rm /boot/config/plugins/unRAID6-ZFS/packages/zfs*
  3. Check out this guide on smb: https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764 (this part-> ZFS Snapshots – Part 2, The Samba bits)
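     The Samba side of that is basically exposing the ZFS snapshot directory through Samba's shadow_copy2 VFS module so clients see snapshots as Previous Versions. A rough sketch for smb-extra.conf (the share name, path and shadow:format value are just examples here; the format has to match how your snapshots are actually named, so follow the guide for the exact settings):
     [documents]
     path = /mnt/SSD/documents
     vfs objects = shadow_copy2
     shadow:snapdir = .zfs/snapshot
     shadow:sort = desc
     # must match your snapshot names; this example would only match sanoid-style hourly autosnaps
     shadow:format = autosnap_%Y-%m-%d_%H:%M:%S_hourly
     Restart Samba after editing and check whether the snapshots show up under Previous Versions.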
  4. Built zfs-2.0.0-rc2 for unRAID-6.9.0-beta29
  5. I'm sorry, but that is a bit over my head Sent from my iPhone using Tapatalk
  6. Hopefully this will help: https://www.thegeekdiary.com/solaris-zfs-how-to-import-2-pools-that-have-the-same-names/ If not, I suggest you try to find expert ZFS advice; maybe you can find it on IRC: http://webchat.freenode.net/?channels=openzfs Sent from my iPhone using Tapatalk
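     As far as I understand it, the short version of that article is to import by numeric pool ID and give one of the pools a new name. A sketch (the ID below is just a placeholder; use the one shown by zpool import):
     # list importable pools and note the numeric id of the one you want
     zpool import
     # import that specific pool under a new name
     zpool import -f 1234567890123456789 tank2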
  7. Try booting into Freenas since you know it was working there. See if you can mount it there Sent from my iPhone using Tapatalk
  8. Or even try zpool import -f -a Sent from my iPhone using Tapatalk
  9. If I understand correctly you have to use the -f flag: zpool import -f POOLNAME Sent from my iPhone using Tapatalk
  10. Yeah, I would reboot and retry. It worked as expected on my test server after a reboot:
      root@Tower:~# zpool upgrade SSD
      This system supports ZFS pool feature flags.
      Enabled the following features on 'SSD':
        redaction_bookmarks
        redacted_datasets
        bookmark_written
        log_spacemap
        livelist
        device_rebuild
        zstd_compress
      root@Tower:~# zfs set compression=zstd SSD
      root@Tower:~# zfs get all | grep -i compression
      SSD  compression  zstd  local
      root@Tower:~#
      This plugin is just ZFS itself, nothing added on top. I would recommend setting up a Check_MK Docker to monitor your server; it can send you a mail if there is a problem, for example a problem with the pool or the pool running out of space. Sent from my iPhone using Tapatalk
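      If you don't want to run a full Check_MK instance, a bare-bones alternative would be a scheduled User Scripts job that pipes zpool status into unRAID's notification system. A rough sketch (the notify script path is where I believe unRAID keeps it; double-check it on your version):
      #!/bin/bash
      # "zpool status -x" prints "all pools are healthy" when everything is fine
      STATUS=$(zpool status -x)
      if [ "$STATUS" != "all pools are healthy" ]; then
        /usr/local/emhttp/webGui/scripts/notify -e "ZFS health check" -s "ZFS pool problem" -d "$STATUS" -i alert
      fi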
  11. Here you go:
      https://www.dropbox.com/s/f3fp04zsgp1g4a0/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz?dl=0
      https://www.dropbox.com/s/z381hehf28k3gj5/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz.md5?dl=0
      You can either rename and replace the files in /boot/config/plugins/unRAID6-ZFS/packages or run these commands:
      #Unmount bzmodules and make rw
      if mount | grep /lib/modules > /dev/null; then
        echo "Remounting modules"
        cp -r /lib/modules /tmp
        umount -l /lib/modules/
        rm -rf /lib/modules
        mv -f /tmp/modules /lib
      fi
      #install and load the package and import pools
      installpkg zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz
      depmod
      modprobe zfs
      zpool import -a
  12. Well, you could zfs send to a different pool, make a new pool and zfs send back. Or I could build ZFS 2.0 for you on 6.8.3. Let me know if you want that Sent from my iPhone using Tapatalk
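      Roughly, that round trip looks like this (pool and dataset names are placeholders, and it assumes the scratch pool has enough free space):
      # snapshot everything recursively and push it to a scratch pool
      zfs snapshot -r SSD@migrate
      zfs send -R SSD@migrate | zfs receive -F scratch/SSD-backup
      # (destroy and recreate the original pool with the new layout here: zpool destroy / zpool create)
      # then send everything back; -F overwrites the freshly created target
      zfs send -R scratch/SSD-backup@migrate | zfs receive -F SSD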
  13. Built zfs-2.0.0-rc2 for unRAID-6.9.0-beta25
  14. The first release candidate of OpenZFS 2.0 has been released: https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0-rc1
      I have built it for unRAID 6.9.0 beta 25. For those already running ZFS 0.8.4-1 on unRAID 6.9.0 beta 25 who want to update, you can just uninstall this plugin and re-install it (don't worry, you won't have any ZFS downtime), or run this command and reboot:
      rm /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.4-unRAID-6.9.0-beta25.x86_64.tgz
      Either way you should see this:
      #Before
      root@Tower:~# modinfo zfs | grep version
      version: 0.8.4-1
      srcversion: E9712003D310D2B54A51C97
      #After
      root@Tower:~# modinfo zfs | grep version
      version: 2.0.0-rc1
      srcversion: 6A6B870B7C76FB81D4FEFB4
  15. On what OS did you create the pool? That feature is not yet in OpenZFS on Linux: http://build.zfsonlinux.org/zfs-features.html Sent from my iPhone using Tapatalk
  16. ZFS is actively being worked on for unRAID: https://selfhosted.show/25 Sent from my iPhone using Tapatalk
  17. I have not tried booting into safe mode, but I will try when nobody is home using the server. The only thing I have is:
      cat /boot/config/smb-extra.conf
      veto files = /._*/.DS_Store/
      I have tried restarting Samba after removing this line, but no difference there.
  18. Did you find out what was causing this? I am running into this on 6.9.0-beta25:
      Jul 22 11:14:04 Unraid smbd[32706]: [2020/07/22 11:14:04.494323, 0] ../../lib/param/loadparm.c:415(lp_bool)
      Jul 22 11:14:04 Unraid smbd[32706]: lp_bool(no): value is not boolean!
      Jul 22 11:14:16 Unraid smbd[7368]: [2020/07/22 11:14:16.215700, 0] ../../lib/param/loadparm.c:415(lp_bool)
      Jul 22 11:14:16 Unraid smbd[7368]: lp_bool(no): value is not boolean!
      Jul 22 11:14:17 Unraid smbd[7392]: [2020/07/22 11:14:17.397938, 0] ../../lib/param/loadparm.c:415(lp_bool)
      Jul 22 11:14:17 Unraid smbd[7392]: lp_bool(no): value is not boolean!
      Jul 22 11:14:18 Unraid smbd[7736]: [2020/07/22 11:14:18.053278, 0] ../../lib/param/loadparm.c:415(lp_bool)
      Jul 22 11:14:18 Unraid smbd[7736]: lp_bool(no): value is not boolean!
  19. Hopefully when unRAID adds native support Sent from my iPhone using Tapatalk
  20. ... 5 years for me, but we are getting off topic Sent from my iPhone using Tapatalk
  21. FYI you can bake in ZFS and then you don’t need a plug-in: https://forums.unraid.net/topic/92865-support-ich777-nvidiadvb-kernel-helperbuilder-docker/
  22. Yes, exactly. But to be clear, ZFS only finds errors when you try to read files, so to be sure that your hard drives are not plotting against you (spoiler alert: they are), it's good to scrub regularly: https://docs.oracle.com/cd/E23823_01/html/819-5461/gbbwa.html Sent from my iPhone using Tapatalk
  23. You're welcome! The wife went out and the kids fell asleep early, so... I probably have to make some rewrites to the "tutorial", but it will be fun to see if someone gets it up and running
  24. Hi,
      Here is another companion plugin for the ZFS plugin for unRAID. To install, copy this URL into the install plugin page in your unRAID 6 web GUI:
      https://raw.githubusercontent.com/Steini1984/unRAID6-Sainoid/master/unRAID6-Sanoid.plg
      I recommend you follow the directions here: https://github.com/jimsalterjrs/sanoid but keep in mind that unRAID does not have a persistent /etc/ or cron, so you have to take that into account. Below you can see how I set up my system, but the plugin is built pretty vanilla so you can adjust it to your needs.
      Why?
      I have a 3 SSD pool on my unRAID server running RAIDZ-1 and that has been rock solid for years. I have used multiple different snapshot tools and Znapzend (unRAID plugin available) has served me well..... well, apart from remote replication. Probably my user error, but multiple systems I have set up all had the same problem of losing sync between the main server and the backup server. In come Sanoid and Syncoid, which were a little bit more effort in the beginning, but it literally was set it and forget it after that.
      My setup
      The setup for Sanoid is pretty straightforward, but I wanted to show you how I use it and how it is configured so you can hopefully save some time and/or get inspired to back up your own system. My servers are:
      Main server running unRAID with a ZFS pool for VMs/Docker (SSD)
      Backup server running Proxmox with an NVMe pool for VMs/containers (rpool) and a USB pool for backups (Buffalo)
      Setting up my system (adjust to your needs):
      This part is way too long, probably needs an edit and is maybe missing a step or two, but I hope it helps someone:
      - Automatic snapshots
      My main ZFS pool on unRAID is named SSD and mounted at /mnt/SSD:
      root@Unraid:/mnt/SSD# zfs list
      NAME  USED  AVAIL  REFER  MOUNTPOINT
      SSD  148G  67.8G  160K  /mnt/SSD
      SSD/Docker  106G  67.8G  10.3G  /mnt/SSD/Docker
      SSD/Docker/Bitwarden  4.93M  67.8G  2.24M  /mnt/SSD/Docker/Bitwarden
      SSD/Docker/Bookstack  7.11M  67.8G  6.23M  /mnt/SSD/Docker/Bookstack
      SSD/Docker/Check_MK  7.36G  67.8G  471M  /mnt/SSD/Docker/Check_MK
      SSD/Docker/Code-Server  90.1M  67.8G  85.1M  /mnt/SSD/Docker/Code-Server
      SSD/Docker/Daapd  662M  67.8G  508M  /mnt/SSD/Docker/Daapd
      SSD/Docker/Duplicati  3.56G  67.8G  2.64G  /mnt/SSD/Docker/Duplicati
      SSD/Docker/Emoncms  69.7M  67.8G  34.5M  /mnt/SSD/Docker/Emoncms
      SSD/Docker/Grafana  4.41M  67.8G  240K  /mnt/SSD/Docker/Grafana
      SSD/Docker/Guacamole  4.02M  67.8G  3.47M  /mnt/SSD/Docker/Guacamole
      SSD/Docker/HomeAssistant  60.7M  67.8G  44.2M  /mnt/SSD/Docker/HomeAssistant
      SSD/Docker/Influxdb  511M  67.8G  66.1M  /mnt/SSD/Docker/Influxdb
      SSD/Docker/Kodi  1.83G  67.8G  1.59G  /mnt/SSD/Docker/Kodi
      SSD/Docker/MQTT  293K  67.8G  181K  /mnt/SSD/Docker/MQTT
      SSD/Docker/MariaDB  2.05G  67.8G  328M  /mnt/SSD/Docker/MariaDB
      SSD/Docker/MariaDB/log  218M  67.8G  130M  /mnt/SSD/Docker/MariaDB/log
      SSD/Docker/Netdata  128K  67.8G  128K  /mnt/SSD/Docker/Netdata
      SSD/Docker/Node-RED  51.5M  67.8G  50.5M  /mnt/SSD/Docker/Node-RED
      SSD/Docker/Pi-hole  514M  67.8G  351M  /mnt/SSD/Docker/Pi-hole
      SSD/Docker/Unifi  956M  67.8G  768M  /mnt/SSD/Docker/Unifi
      SSD/Docker/deCONZ  5.44M  67.8G  256K  /mnt/SSD/Docker/deCONZ
      SSD/Vms  32.3G  67.8G  128K  /mnt/SSD/Vms
      SSD/Vms/Broadcaster  26.5G  67.8G  17.3G  /mnt/SSD/Vms/Broadcaster
      SSD/Vms/libvirt  2.55M  67.8G  895K  /mnt/SSD/Vms/libvirt
      SSD/Vms/unRAID-Build  5.87G  67.8G  5.87G  /mnt/SSD/Vms/unRAID-Build
      SSD/swap  9.37G  67.8G  9.37G
      - First I installed the plugin and copied the config files to the main pool:
      cp /etc/sanoid/sanoid.defaults.conf /mnt/SSD
      cp /etc/sanoid/sanoid.example.conf /mnt/SSD/sanoid.conf
      Then you have to edit the sanoid config file:
      nano /mnt/SSD/sanoid.conf
      My config file has two templates, just so I can ignore the swap partition. I think the config file explains itself, and it looks like this:
      [SSD]
      use_template = production
      recursive = yes
      [SSD/swap]
      use_template = ignore
      recursive = no
      #############################
      # templates below this line #
      #############################
      [template_production]
      frequently = 4
      hourly = 24
      daily = 7
      monthly = 0
      yearly = 0
      autosnap = yes
      autoprune = yes
      [template_ignore]
      autosnap = no
      autoprune = no
      monitor = no
      Now we have to run Sanoid every minute; you can for example use the User Scripts plugin or cron. I use cron and have this line in my crontab:
      * * * * * /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron
      This overrides the default config dir so we can keep the files in a persistent storage location. To add this at boot you can add this to your go file, or set up a User Scripts script that runs on boot with this command:
      (crontab -l 2>/dev/null; echo "* * * * * /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron") | crontab -
      Now you are good to go and should have automatic snapshots on your unRAID server:
      root@Unraid:/mnt/SSD# zfs list -t snapshot
      NAME  USED  AVAIL  REFER  MOUNTPOINT
      SSD@autosnap_2020-07-03_23:59:01_daily  0B  -  160K  -
      SSD@autosnap_2020-07-04_23:59:01_daily  0B  -  160K  -
      SSD@autosnap_2020-07-05_23:59:01_daily  0B  -  160K  -
      SSD@autosnap_2020-07-06_23:59:01_daily  0B  -  160K  -
      SSD@autosnap_2020-07-07_23:59:01_daily  0B  -  160K  -
      SSD@autosnap_2020-07-08_23:59:01_daily  0B  -  160K  -
      SSD@autosnap_2020-07-09_22:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-09_23:00:02_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-09_23:59:01_daily  0B  -  160K  -
      SSD@autosnap_2020-07-10_00:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_01:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_02:00:02_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_03:00:02_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_04:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_05:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_06:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_07:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_08:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_09:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_10:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_11:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_12:00:02_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_13:00:02_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_14:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_15:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_16:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_17:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_18:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_19:00:02_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_20:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_20:30:01_frequently  0B  -  160K  -
      SSD@autosnap_2020-07-10_20:45:01_frequently  0B  -  160K  -
      SSD@autosnap_2020-07-10_21:00:01_hourly  0B  -  160K  -
      SSD@autosnap_2020-07-10_21:00:01_frequently  0B  -  160K  -
      SSD@autosnap_2020-07-10_21:15:01_frequently  0B  -  160K  -
      SSD/Docker@autosnap_2020-07-03_23:59:01_daily  2.79G  -  10.6G  -
      SSD/Docker@autosnap_2020-07-04_23:59:01_daily  1.54G  -  9.97G  -
      ....
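      Not part of the original write-up, but worth knowing once the snapshots exist: how to get data back out of them. A quick sketch (the dataset, snapshot and file names are just examples based on the listing above):
      # browse a snapshot read-only via the hidden .zfs directory and copy a single file back
      ls /mnt/SSD/Docker/Bookstack/.zfs/snapshot/
      cp /mnt/SSD/Docker/Bookstack/.zfs/snapshot/autosnap_2020-07-09_23:59:01_daily/config.php /mnt/SSD/Docker/Bookstack/
      # or roll the whole dataset back to its most recent snapshot
      # (rolling back further needs -r and destroys the newer snapshots, so be careful)
      zfs rollback SSD/Docker/Bookstack@autosnap_2020-07-09_23:59:01_daily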
      - Replication
      Now we take a look at my second server, which has a pool named Buffalo (I use a 2-disk USB Buffalo disk station) for backups:
      root@proxmox:~# zfs list
      NAME  USED  AVAIL  REFER  MOUNTPOINT
      Buffalo  1.15T  621G  25K  /Buffalo
      Buffalo/Backups  906G  621G  28K  /Buffalo/Backups
      Buffalo/Backups/Nextcloud  227G  621G  227G  /Buffalo/Backups/Nextcloud
      Buffalo/Backups/Pictures  43.6G  621G  43.6G  /Buffalo/Backups/Pictures
      Buffalo/Backups/Unraid  29.4G  621G  29.4G  /Buffalo/Backups/Unraid
      Buffalo/Proxmox-Replication  86.2G  621G  25K  /Buffalo/Proxmox-Replication
      Buffalo/Proxmox-Replication/ROOT  2.30G  621G  24K  /Buffalo/Proxmox-Replication/ROOT
      Buffalo/Proxmox-Replication/ROOT/pve-1  2.30G  621G  1.47G  /
      Buffalo/Proxmox-Replication/data  83.9G  621G  24K  /Buffalo/Proxmox-Replication/data
      Buffalo/Proxmox-Replication/data/Vms  83.5G  621G  24K  /Buffalo/Proxmox-Replication/data/Vms
      Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-0  2.55G  621G  1.94G  -
      Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-1  5.06G  621G  3.81G  -
      Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-2  776M  621G  506M  -
      Buffalo/Proxmox-Replication/data/subvol-103-disk-0  386M  7.62G  386M  /Buffalo/Proxmox-Replication/data/subvol-103-disk-0
      Buffalo/unRAID-Replication  185G  621G  28.5K  /mnt/SSD
      Buffalo/unRAID-Replication/Docker  140G  621G  9.89G  /mnt/SSD/Docker
      Buffalo/unRAID-Replication/Docker/Bitwarden  2.94M  621G  1.33M  /mnt/SSD/Docker/Bitwarden
      Buffalo/unRAID-Replication/Docker/Bookstack  6.29M  621G  5.61M  /mnt/SSD/Docker/Bookstack
      Buffalo/unRAID-Replication/Docker/Check_MK  5.67G  621G  372M  /mnt/SSD/Docker/Check_MK
      Buffalo/unRAID-Replication/Docker/Code-Server  41.2M  621G  34.0M  /mnt/SSD/Docker/Code-Server
      Buffalo/unRAID-Replication/Docker/Daapd  845M  621G  505M  /mnt/SSD/Docker/Daapd
      Buffalo/unRAID-Replication/Docker/Duplicati  5.08G  621G  2.53G  /mnt/SSD/Docker/Duplicati
      Buffalo/unRAID-Replication/Docker/Emoncms  79.6M  621G  29.2M  /mnt/SSD/Docker/Emoncms
      Buffalo/unRAID-Replication/Docker/Grafana  872K  621G  87K  /mnt/SSD/Docker/Grafana
      Buffalo/unRAID-Replication/Docker/Guacamole  2.50M  621G  1.87M  /mnt/SSD/Docker/Guacamole
      Buffalo/unRAID-Replication/Docker/HomeAssistant  42.5M  621G  38.2M  /mnt/SSD/Docker/HomeAssistant
      Buffalo/unRAID-Replication/Docker/Influxdb  884M  621G  68.7M  /mnt/SSD/Docker/Influxdb
      Buffalo/unRAID-Replication/Docker/Kodi  1.65G  621G  1.46G  /mnt/SSD/Docker/Kodi
      Buffalo/unRAID-Replication/Docker/MQTT  67.5K  621G  31.5K  /mnt/SSD/Docker/MQTT
      Buffalo/unRAID-Replication/Docker/MariaDB  2.63G  621G  298M  /mnt/SSD/Docker/MariaDB
      Buffalo/unRAID-Replication/Docker/MariaDB/log  310M  621G  104M  /mnt/SSD/Docker/MariaDB/log
      Buffalo/unRAID-Replication/Docker/Netdata  24K  621G  24K  /mnt/SSD/Docker/Netdata
      Buffalo/unRAID-Replication/Docker/Node-RED  18.5M  621G  17.3M  /mnt/SSD/Docker/Node-RED
      Buffalo/unRAID-Replication/Docker/Pi-hole  663M  621G  331M  /mnt/SSD/Docker/Pi-hole
      Buffalo/unRAID-Replication/Docker/Unifi  1.07G  621G  691M  /mnt/SSD/Docker/Unifi
      Buffalo/unRAID-Replication/Docker/deCONZ  1.03M  621G  61.5K  /mnt/SSD/Docker/deCONZ
      Buffalo/unRAID-Replication/Vms  45.2G  621G  23K  /mnt/SSD/Vms
      Buffalo/unRAID-Replication/Vms/Broadcaster  39.4G  621G  16.5G  /mnt/SSD/Vms/Broadcaster
      Buffalo/unRAID-Replication/Vms/libvirt  1.78M  621G  471K  /mnt/SSD/Vms/libvirt
      Buffalo/unRAID-Replication/Vms/unRAID-Build  5.78G  621G  5.78G  /mnt/SSD/Vms/unRAID-Build
      rpool  125G  104G  104K  /rpool
      rpool/ROOT  1.94G  104G  96K  /rpool/ROOT
      rpool/ROOT/pve-1  1.94G  104G  1.62G  /
      rpool/data  114G  104G  96K  /rpool/data
      rpool/data/Vms  114G  104G  96K  /rpool/data/Vms
      rpool/data/Vms/vm-101-blueiris-disk-0  41.9G  104G  33.4G  -
      rpool/data/Vms/vm-101-blueiris-disk-1  40.4G  104G  40.4G  -
      rpool/data/Vms/vm-102-unRAID-BUILD-disk-0  1.98G  104G  1.94G  -
      rpool/data/Vms/vm-102-unRAID-BUILD-disk-1  4.30G  104G  4.29G  -
      rpool/data/Vms/vm-102-unRAID-BUILD-disk-2  776M  104G  750M  -
      rpool/data/subvol-103-disk-0  440M  7.57G  440M  /rpool/data/subvol-103-disk-0
      rpool/swap  8.50G  105G  6.96G  -
      spinner  277G  172G  25.0G  /spinner
      spinner/vm-101-cctv-disk-0  252G  172G  252G  -
      I also installed Sanoid there, but the config file is a bit different:
      root@proxmox:~# cat /etc/sanoid/sanoid.conf
      [rpool]
      use_template = production
      recursive = yes
      [rpool/swap]
      use_template = ignore
      recursive = no
      [Buffalo/Proxmox-Replication]
      use_template = backup
      recursive = yes
      [Buffalo/unRAID-Replication]
      use_template = backup
      recursive = yes
      #############################
      # templates below this line #
      #############################
      [template_production]
      frequently = 4
      hourly = 24
      daily = 7
      monthly = 0
      yearly = 0
      autosnap = yes
      autoprune = yes
      [template_backup]
      autoprune = yes
      frequently = 0
      hourly = 0
      daily = 90
      monthly = 0
      yearly = 0
      ### don't take new snapshots - snapshots on backup
      ### datasets are replicated in from source, not
      ### generated locally
      autosnap = no
      ### monitor hourlies and dailies, but don't warn or
      ### crit until they're over 48h old, since replication
      ### is typically daily only
      hourly_warn = 2880
      hourly_crit = 3600
      daily_warn = 48
      daily_crit = 60
      [template_ignore]
      autosnap = no
      autoprune = no
      monitor = no
      Since the backup server is Debian-based, Sanoid is automatically called every minute via systemd (set up automatically by the official package). Now I have regular snapshots of my rpool (the data for Proxmox), and I have two replication targets on the USB backup pool: Buffalo/Proxmox-Replication and Buffalo/unRAID-Replication. Replicating from the NVMe on my Proxmox box to the USB pool is really simple since it is on the same system, but for unRAID we need to set up SSH keys.
      On the unRAID server I run:
      root@Unraid:~# ssh-keygen
      and press enter multiple times until the process is finished. Then on the second (Proxmox) server I run this command as root:
      root@proxmox:~# ssh-copy-id root@unraid   #your server name/ip address
      and answer "Yes". You can test it from your backup server by SSH-ing in and seeing if you get passwordless access. Again, since unRAID won't retain any of this on reboot, we have to back up the ssh folder to the USB key. I do it like this (*this step is not relevant on unRAID 6.9 and later since it symlinks the ssh folder to the boot drive):
      mkdir -p /boot/custom/ssh/root
      cp -r /root/.ssh /boot/custom/ssh/root/
      Then you can add this script to run on boot, e.g. using the aforementioned User Scripts plugin:
      #!/bin/bash
      #Root
      mkdir -p /root/.ssh
      cp -r /boot/custom/ssh/root/.ssh /root/
      chown -R root:root /root/
      chmod 700 /root/
      chmod 700 /root/.ssh
      chmod 600 /root/.ssh/authorized_keys
      Then we can finally start the replication from the second server. Since I am backing up two pools I run them one after the other:
      /usr/sbin/syncoid -r --quiet --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication && /usr/sbin/syncoid -r --quiet --no-sync-snap rpool Buffalo/Proxmox-Replication
      This first sends all the snapshots from unRAID via SSH to the USB backup pool on Proxmox, and then locally from the NVMe to the USB. I run this every day at 2:15 using cron, and for simplicity's sake I put the commands in a bash file.
      The crontab:
      #Nightly replication
      15 2 * * * /usr/local/bin/replicate
      The bash script:
      root@proxmox:~# cat /usr/local/bin/replicate
      #!/bin/bash
      /usr/sbin/syncoid -r --quiet --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication && /usr/sbin/syncoid -r --quiet --no-sync-snap rpool Buffalo/Proxmox-Replication
      Now let's take a look at what the unRAID replication looks like over at the backup server's USB pool:
      root@proxmox:~# zfs list -t snapshot -r Buffalo/unRAID-Replication
      NAME  USED  AVAIL  REFER  MOUNTPOINT
      Buffalo/unRAID-Replication@autosnap_2020-06-19_11:51:00_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-19_23:59:01_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-20_23:59:01_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-21_23:59:01_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-22_23:59:01_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-23_23:59:01_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-24_23:59:01_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-25_23:59:01_daily  0B  -  23K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-26_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-27_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-28_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-29_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-06-30_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-01_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-02_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-03_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-04_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-05_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-06_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-07_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-08_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication@autosnap_2020-07-09_23:59:01_daily  0B  -  28.5K  -
      Buffalo/unRAID-Replication/Docker@autosnap_2020-06-19_11:51:00_daily  416M  -  9.11G  -
      Buffalo/unRAID-Replication/Docker@autosnap_2020-06-19_23:59:01_daily  439M  -  9.23G  -
      ....
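      One thing this write-up doesn't cover is noticing when replication silently stops, which is exactly what the hourly_warn/daily_warn values in the backup template are for. A rough sketch of a check you could cron on the backup server, using Sanoid's built-in Nagios-style monitor mode (the sanoid path and the mail command are just examples; wire it into whatever alerting you already use):
      #!/bin/bash
      # sanoid --monitor-snapshots prints OK/WARN/CRIT based on the *_warn/*_crit values in sanoid.conf
      OUT=$(/usr/sbin/sanoid --monitor-snapshots)
      if ! echo "$OUT" | grep -q '^OK'; then
        echo "$OUT" | mail -s "Sanoid snapshot check failed on $(hostname)" you@example.com
      fi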