Sanoid/Syncoid (ZFS snapshots and replication)



Hi,

 

Here is another companion plugin for the ZFS plugin for unRAID.

Quote

Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.

 

Sanoid also includes a replication tool, syncoid, which facilitates the asynchronous incremental replication of ZFS filesystems.

 

To install, copy this URL into the Install Plugin page in your unRAID 6 web GUI:

https://raw.githubusercontent.com/Steini1984/unRAID6-Sainoid/master/unRAID6-Sanoid.plg

 

I recommend you follow the directions here: https://github.com/jimsalterjrs/sanoid but keep in mind that unRAID does not have a persistent /etc/ or crontab, so you have to account for that.

 

Below you can see how I set up my system, but the plugin is built pretty vanilla so you can adjust it to your needs.

 

Why?

I have a 3-SSD RAIDZ1 pool on my unRAID server that has been rock solid for years. I have used multiple snapshot tools, and Znapzend (an unRAID plugin is available) has served me well... apart from remote replication. Probably user error on my part, but multiple systems I have set up all had the same problem of losing sync between the main server and the backup server. In come Sanoid and Syncoid, which took a little more effort in the beginning but have literally been set-and-forget ever since.

 

My setup

The setup for Sanoid is pretty straightforward, but I wanted to show you how I use it and how it is configured so you can hopefully save some time and/or get inspired to back up your own system.

 

My servers are:

  • Main server running unRAID with a ZFS pool for VMs/Docker (SSD)
  • Backup server running Proxmox with an NVMe pool for VMs/containers (rpool) and a USB pool for backups (Buffalo)

 

Setting up my system (adjust to your needs):

This part is fairly long and may be missing a step or two, but I hope it helps someone:

 

- Automatic snapshots

 

My main ZFS pool on unRAID is named SSD and mounted at /mnt/SSD:

root@Unraid:/mnt/SSD# zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
SSD                        148G  67.8G      160K  /mnt/SSD
SSD/Docker                 106G  67.8G     10.3G  /mnt/SSD/Docker
SSD/Docker/Bitwarden      4.93M  67.8G     2.24M  /mnt/SSD/Docker/Bitwarden
SSD/Docker/Bookstack      7.11M  67.8G     6.23M  /mnt/SSD/Docker/Bookstack
SSD/Docker/Check_MK       7.36G  67.8G      471M  /mnt/SSD/Docker/Check_MK
SSD/Docker/Code-Server    90.1M  67.8G     85.1M  /mnt/SSD/Docker/Code-Server
SSD/Docker/Daapd           662M  67.8G      508M  /mnt/SSD/Docker/Daapd
SSD/Docker/Duplicati      3.56G  67.8G     2.64G  /mnt/SSD/Docker/Duplicati
SSD/Docker/Emoncms        69.7M  67.8G     34.5M  /mnt/SSD/Docker/Emoncms
SSD/Docker/Grafana        4.41M  67.8G      240K  /mnt/SSD/Docker/Grafana
SSD/Docker/Guacamole      4.02M  67.8G     3.47M  /mnt/SSD/Docker/Guacamole
SSD/Docker/HomeAssistant  60.7M  67.8G     44.2M  /mnt/SSD/Docker/HomeAssistant
SSD/Docker/Influxdb        511M  67.8G     66.1M  /mnt/SSD/Docker/Influxdb
SSD/Docker/Kodi           1.83G  67.8G     1.59G  /mnt/SSD/Docker/Kodi
SSD/Docker/MQTT            293K  67.8G      181K  /mnt/SSD/Docker/MQTT
SSD/Docker/MariaDB        2.05G  67.8G      328M  /mnt/SSD/Docker/MariaDB
SSD/Docker/MariaDB/log     218M  67.8G      130M  /mnt/SSD/Docker/MariaDB/log
SSD/Docker/Netdata         128K  67.8G      128K  /mnt/SSD/Docker/Netdata
SSD/Docker/Node-RED       51.5M  67.8G     50.5M  /mnt/SSD/Docker/Node-RED
SSD/Docker/Pi-hole         514M  67.8G      351M  /mnt/SSD/Docker/Pi-hole
SSD/Docker/Unifi           956M  67.8G      768M  /mnt/SSD/Docker/Unifi
SSD/Docker/deCONZ         5.44M  67.8G      256K  /mnt/SSD/Docker/deCONZ
SSD/Vms                   32.3G  67.8G      128K  /mnt/SSD/Vms
SSD/Vms/Broadcaster       26.5G  67.8G     17.3G  /mnt/SSD/Vms/Broadcaster
SSD/Vms/libvirt           2.55M  67.8G      895K  /mnt/SSD/Vms/libvirt
SSD/Vms/unRAID-Build      5.87G  67.8G     5.87G  /mnt/SSD/Vms/unRAID-Build
SSD/swap                  9.37G  67.8G     9.37G  -

 

First I installed the plugin and copied the config files to the main pool:

cp /etc/sanoid/sanoid.defaults.conf /mnt/SSD
cp /etc/sanoid/sanoid.example.conf /mnt/SSD/sanoid.conf

Then you have to edit the Sanoid config file:

nano /mnt/SSD/sanoid.conf

My config file has two templates just so I can ignore the swap dataset. I think the config file explains itself, and it looks like this:

[SSD]
        use_template = production
        recursive = yes

[SSD/swap]
        use_template = ignore
        recursive = no


#############################
# templates below this line #
#############################

[template_production]
        frequently = 4
        hourly = 24
        daily = 7
        monthly = 0
        yearly = 0
        autosnap = yes
        autoprune = yes

[template_ignore]
        autosnap = no
        autoprune = no
        monitor = no
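
Before scheduling anything you can sanity-check the config by running Sanoid once by hand. This is just a quick check I find useful (--verbose simply prints what Sanoid decides to do and should be available on any recent version):

/usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron --verbose
zfs list -t snapshot | grep autosnap | head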

Now we need to run Sanoid every minute; you can use, for example, the User Scripts plugin or cron for that.

I use cron and have this line in my crontab:

* * * * *  /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron

This overrides the default config dir so we can keep the files in a persistent storage location. To add this at boot you can put it in your go file, or set up a User Scripts script that runs on boot with this command:

(crontab -l 2>/dev/null; echo "* * * * *  /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron") | crontab -
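
If you prefer the go file, the same line just gets appended there; on a stock unRAID install the go file lives at /boot/config/go, so the relevant excerpt could look like this (adjust the configdir to your own pool):

# /boot/config/go (excerpt) - re-create the Sanoid cron job on every boot
(crontab -l 2>/dev/null; echo "* * * * *  /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron") | crontab -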

Now you are good to go and should have automatic snapshots on your unRAID server:

root@Unraid:/mnt/SSD# zfs list -t snapshot

NAME                                                               USED  AVAIL     REFER  MOUNTPOINT
SSD@autosnap_2020-07-03_23:59:01_daily                               0B      -      160K  -
SSD@autosnap_2020-07-04_23:59:01_daily                               0B      -      160K  -
SSD@autosnap_2020-07-05_23:59:01_daily                               0B      -      160K  -
SSD@autosnap_2020-07-06_23:59:01_daily                               0B      -      160K  -
SSD@autosnap_2020-07-07_23:59:01_daily                               0B      -      160K  -
SSD@autosnap_2020-07-08_23:59:01_daily                               0B      -      160K  -
SSD@autosnap_2020-07-09_22:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-09_23:00:02_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-09_23:59:01_daily                               0B      -      160K  -
SSD@autosnap_2020-07-10_00:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_01:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_02:00:02_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_03:00:02_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_04:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_05:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_06:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_07:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_08:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_09:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_10:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_11:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_12:00:02_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_13:00:02_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_14:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_15:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_16:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_17:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_18:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_19:00:02_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_20:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_20:30:01_frequently                          0B      -      160K  -
SSD@autosnap_2020-07-10_20:45:01_frequently                          0B      -      160K  -
SSD@autosnap_2020-07-10_21:00:01_hourly                              0B      -      160K  -
SSD@autosnap_2020-07-10_21:00:01_frequently                          0B      -      160K  -
SSD@autosnap_2020-07-10_21:15:01_frequently                          0B      -      160K  -
SSD/Docker@autosnap_2020-07-03_23:59:01_daily                     2.79G      -     10.6G  -
SSD/Docker@autosnap_2020-07-04_23:59:01_daily                     1.54G      -     9.97G  -
....
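
As a quick aside on actually using these snapshots: every dataset exposes them read-only under its hidden .zfs/snapshot directory, so restoring a single file is just a copy, and rolling a whole dataset back is one command. The file and snapshot names below are only examples:

# copy one file back out of a daily snapshot
cp /mnt/SSD/Docker/.zfs/snapshot/autosnap_2020-07-09_23:59:01_daily/somefile /mnt/SSD/Docker/
# or roll the dataset back to that snapshot
# (rollback only goes to the newest snapshot unless you add -r, which destroys any newer snapshots)
zfs rollback SSD/Docker@autosnap_2020-07-09_23:59:01_daily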

 

 - Replication

Now we take a look at my second server, which has a pool named Buffalo (I use a two-disk USB Buffalo disk station :P) for backups:

root@proxmox:~# zfs list
NAME                                                              USED  AVAIL     REFER  MOUNTPOINT
Buffalo                                                          1.15T   621G       25K  /Buffalo
Buffalo/Backups                                                   906G   621G       28K  /Buffalo/Backups
Buffalo/Backups/Nextcloud                                         227G   621G      227G  /Buffalo/Backups/Nextcloud
Buffalo/Backups/Pictures                                         43.6G   621G     43.6G  /Buffalo/Backups/Pictures
Buffalo/Backups/Unraid                                           29.4G   621G     29.4G  /Buffalo/Backups/Unraid
Buffalo/Proxmox-Replication                                      86.2G   621G       25K  /Buffalo/Proxmox-Replication
Buffalo/Proxmox-Replication/ROOT                                 2.30G   621G       24K  /Buffalo/Proxmox-Replication/ROOT
Buffalo/Proxmox-Replication/ROOT/pve-1                           2.30G   621G     1.47G  /
Buffalo/Proxmox-Replication/data                                 83.9G   621G       24K  /Buffalo/Proxmox-Replication/data
Buffalo/Proxmox-Replication/data/Vms                             83.5G   621G       24K  /Buffalo/Proxmox-Replication/data/Vms
Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-0  2.55G   621G     1.94G  -
Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-1  5.06G   621G     3.81G  -
Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-2   776M   621G      506M  -
Buffalo/Proxmox-Replication/data/subvol-103-disk-0                386M  7.62G      386M  /Buffalo/Proxmox-Replication/data/subvol-103-disk-0
Buffalo/unRAID-Replication                                        185G   621G     28.5K  /mnt/SSD
Buffalo/unRAID-Replication/Docker                                 140G   621G     9.89G  /mnt/SSD/Docker
Buffalo/unRAID-Replication/Docker/Bitwarden                      2.94M   621G     1.33M  /mnt/SSD/Docker/Bitwarden
Buffalo/unRAID-Replication/Docker/Bookstack                      6.29M   621G     5.61M  /mnt/SSD/Docker/Bookstack
Buffalo/unRAID-Replication/Docker/Check_MK                       5.67G   621G      372M  /mnt/SSD/Docker/Check_MK
Buffalo/unRAID-Replication/Docker/Code-Server                    41.2M   621G     34.0M  /mnt/SSD/Docker/Code-Server
Buffalo/unRAID-Replication/Docker/Daapd                           845M   621G      505M  /mnt/SSD/Docker/Daapd
Buffalo/unRAID-Replication/Docker/Duplicati                      5.08G   621G     2.53G  /mnt/SSD/Docker/Duplicati
Buffalo/unRAID-Replication/Docker/Emoncms                        79.6M   621G     29.2M  /mnt/SSD/Docker/Emoncms
Buffalo/unRAID-Replication/Docker/Grafana                         872K   621G       87K  /mnt/SSD/Docker/Grafana
Buffalo/unRAID-Replication/Docker/Guacamole                      2.50M   621G     1.87M  /mnt/SSD/Docker/Guacamole
Buffalo/unRAID-Replication/Docker/HomeAssistant                  42.5M   621G     38.2M  /mnt/SSD/Docker/HomeAssistant
Buffalo/unRAID-Replication/Docker/Influxdb                        884M   621G     68.7M  /mnt/SSD/Docker/Influxdb
Buffalo/unRAID-Replication/Docker/Kodi                           1.65G   621G     1.46G  /mnt/SSD/Docker/Kodi
Buffalo/unRAID-Replication/Docker/MQTT                           67.5K   621G     31.5K  /mnt/SSD/Docker/MQTT
Buffalo/unRAID-Replication/Docker/MariaDB                        2.63G   621G      298M  /mnt/SSD/Docker/MariaDB
Buffalo/unRAID-Replication/Docker/MariaDB/log                     310M   621G      104M  /mnt/SSD/Docker/MariaDB/log
Buffalo/unRAID-Replication/Docker/Netdata                          24K   621G       24K  /mnt/SSD/Docker/Netdata
Buffalo/unRAID-Replication/Docker/Node-RED                       18.5M   621G     17.3M  /mnt/SSD/Docker/Node-RED
Buffalo/unRAID-Replication/Docker/Pi-hole                         663M   621G      331M  /mnt/SSD/Docker/Pi-hole
Buffalo/unRAID-Replication/Docker/Unifi                          1.07G   621G      691M  /mnt/SSD/Docker/Unifi
Buffalo/unRAID-Replication/Docker/deCONZ                         1.03M   621G     61.5K  /mnt/SSD/Docker/deCONZ
Buffalo/unRAID-Replication/Vms                                   45.2G   621G       23K  /mnt/SSD/Vms
Buffalo/unRAID-Replication/Vms/Broadcaster                       39.4G   621G     16.5G  /mnt/SSD/Vms/Broadcaster
Buffalo/unRAID-Replication/Vms/libvirt                           1.78M   621G      471K  /mnt/SSD/Vms/libvirt
Buffalo/unRAID-Replication/Vms/unRAID-Build                      5.78G   621G     5.78G  /mnt/SSD/Vms/unRAID-Build
rpool                                                             125G   104G      104K  /rpool
rpool/ROOT                                                       1.94G   104G       96K  /rpool/ROOT
rpool/ROOT/pve-1                                                 1.94G   104G     1.62G  /
rpool/data                                                        114G   104G       96K  /rpool/data
rpool/data/Vms                                                    114G   104G       96K  /rpool/data/Vms
rpool/data/Vms/vm-101-blueiris-disk-0                            41.9G   104G     33.4G  -
rpool/data/Vms/vm-101-blueiris-disk-1                            40.4G   104G     40.4G  -
rpool/data/Vms/vm-102-unRAID-BUILD-disk-0                        1.98G   104G     1.94G  -
rpool/data/Vms/vm-102-unRAID-BUILD-disk-1                        4.30G   104G     4.29G  -
rpool/data/Vms/vm-102-unRAID-BUILD-disk-2                         776M   104G      750M  -
rpool/data/subvol-103-disk-0                                      440M  7.57G      440M  /rpool/data/subvol-103-disk-0
rpool/swap                                                       8.50G   105G     6.96G  -
spinner                                                           277G   172G     25.0G  /spinner
spinner/vm-101-cctv-disk-0                                        252G   172G      252G  -

I also installed Sanoid there, but the config file is a bit different:

root@proxmox:~# cat /etc/sanoid/sanoid.conf
[rpool]
	use_template = production
	recursive = yes

[rpool/swap]
	use_template = ignore
	recursive = no

[Buffalo/Proxmox-Replication]
	use_template = backup
	recursive = yes

[Buffalo/unRAID-Replication]
	use_template = backup
	recursive = yes


#############################
# templates below this line #
#############################

[template_production]
	frequently = 4
	hourly = 24
	daily = 7
	monthly = 0
	yearly = 0
	autosnap = yes
	autoprune = yes

[template_backup]
	autoprune = yes
	frequently = 0
	hourly = 0
	daily = 90
	monthly = 0
	yearly = 0

	### don't take new snapshots - snapshots on backup
	### datasets are replicated in from source, not
	### generated locally
	autosnap = no

	### monitor hourlies and dailies, but don't warn or
	### crit until they're over 48h old, since replication
	### is typically daily only
	hourly_warn = 2880
	hourly_crit = 3600
	daily_warn = 48
	daily_crit = 60

[template_ignore]
	autosnap = no
	autoprune = no
	monitor = no

Since the backup server is Debian-based, Sanoid is automatically called every minute via systemd (set up automatically by the official package).
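
Because template_backup defines hourly_warn/daily_warn thresholds, Sanoid can also warn you when the replicated snapshots get stale. A quick manual check could look like this (Nagios-style OK/WARNING/CRITICAL output; the exact wording depends on your Sanoid version):

sanoid --monitor-snapshots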

 

Now I have regular snapshots from my rpool (the data for Proxmox) and then I have two replication targets on the USB backup pool 

Buffalo/Proxmox-Replication & Buffalo/unRAID-Replication

 

To replicate from the NVMe pool on my Proxmox box to the USB pool is really simple since it is all on the same system, but for unRAID we need to set up SSH keys:

On the unRAID server I run 

root@Unraid:~# ssh-keygen

and press enter multiple times until the process is finished.

Then on the second (Proxmox) server I run this command as root:

root@proxmox:~# ssh-copy-id root@unraid  # your unRAID server's name/IP address

and answer "yes" - you can test it from your backup server by SSHing in and seeing if you get passwordless access.
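
For example, running a remote command like this from the Proxmox box should complete without a password prompt if the keys are set up correctly:

root@proxmox:~# ssh root@unraid zfs list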

 

Again, since unRAID won't retain any of this on reboot, we have to back up the .ssh folder to the USB key. I do it like this:

(*This step is not needed on unRAID 6.9 and later, since it symlinks the .ssh folder to the boot drive.)

mkdir -p /boot/custom/ssh/root
cp -r /root/.ssh  /boot/custom/ssh/root/

Then you can add this script to run on boot e.g. using the aforementioned User Scripts plugin:

#!/bin/bash

# Restore root's SSH keys from the flash drive and fix ownership/permissions
mkdir -p /root/.ssh
cp -r /boot/custom/ssh/root/.ssh  /root/
chown -R root:root /root/
chmod 700 /root/
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys

Then we can finally start the replication from the second server.

Since I am backing up two pools, I run them one after the other:

/usr/sbin/syncoid -r --quiet --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication && /usr/sbin/syncoid -r --quiet --no-sync-snap rpool Buffalo/Proxmox-Replication

This command first sends all the snapshots from unRAID via SSH to the USB backup pool on Proxmox, and then replicates locally from the NVMe pool to the USB pool.
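
For the very first run it can be worth dropping --quiet so you can watch the initial full send go through (syncoid shows transfer progress via pv when it is installed); after that the quiet version is fine for cron:

/usr/sbin/syncoid -r --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication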

 

I run this every day at 2:15 using cron, and for simplicity's sake I put the commands in a bash script:

The crontab:
#Nightly replication
15 2 * * * /usr/local/bin/replicate

The bash script:
root@proxmox:~# cat /usr/local/bin/replicate

#!/bin/bash
/usr/sbin/syncoid -r --quiet --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication && /usr/sbin/syncoid -r --quiet --no-sync-snap rpool Buffalo/Proxmox-Replication
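
If you want the cron runs to leave a trace, a variant with simple logging could look something like this (the log path is just an assumption, adjust to taste):

#!/bin/bash
# replicate both pools and append a simple log entry for each run
LOG=/var/log/replicate.log
{
        echo "=== $(date) starting replication ==="
        /usr/sbin/syncoid -r --quiet --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication \
                && /usr/sbin/syncoid -r --quiet --no-sync-snap rpool Buffalo/Proxmox-Replication
        echo "=== $(date) finished, exit code $? ==="
} >> "$LOG" 2>&1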

Now let's take a look at what the unRAID replication looks like on the backup server's USB pool:

root@proxmox:~# zfs list -t snapshot -r Buffalo/unRAID-Replication
NAME                                                                                 USED  AVAIL     REFER  MOUNTPOINT
Buffalo/unRAID-Replication@autosnap_2020-06-19_11:51:00_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-19_23:59:01_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-20_23:59:01_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-21_23:59:01_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-22_23:59:01_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-23_23:59:01_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-24_23:59:01_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-25_23:59:01_daily                          0B      -       23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-26_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-27_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-28_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-29_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-30_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-01_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-02_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-03_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-04_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-05_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-06_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-07_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-08_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-09_23:59:01_daily                          0B      -     28.5K  -
Buffalo/unRAID-Replication/Docker@autosnap_2020-06-19_11:51:00_daily                 416M      -     9.11G  -
Buffalo/unRAID-Replication/Docker@autosnap_2020-06-19_23:59:01_daily                 439M      -     9.23G  -
....

 


... and there is some possibility that unRAID may include ZFS as a supported file system out of the box. The developer has been ranting about btrfs and dropping ZFS hints, so we will just have to wait and see. The plugin has been working with unRAID for over a year, and we can thank steini84 for his dedication. Getting Tom to bake it into unRAID and have it be more thoroughly tested will be even better.


... 5 years for me, but we are getting off topic.



@steini84, just wanted to say thanks for all that you do regarding ZFS. I played with Unraid a year ago and installed the ZFS plugin at the time, and tried to get ZnapZend and Sanoid/Syncoid working. I've been using both tools pretty regularly on my Ubuntu server setup where everything has been on ZFS. I finally passed a threshold where it just made sense to split my servers up with the bulk media content moving to Unraid. I just stood up the Unraid box this week and now I'm seeing both ZnapZend and Sanoid/Syncoid available as plugins! So awesome! I still like having my appdata on ZFS and now I can back up that data to my 'main' Ubuntu server. I've actually had great success with ZnapZend for my regular backups, and I tend to use syncoid for ad hoc ZFS sending between different zpools either on the same or different servers.

 

Thanks!


@steini84: if I don't need to send backups remotely, there wouldn't be any benefit to using Sanoid over Znapzend, would there?

 

I like Sanoid's config file, but the part about needing to run sanoid every minute seems excessive to me. If none of my schedules are more frequent than hourly, can I just cron it every 30 minutes instead?
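
Purely to illustrate what I mean, something like this (same configdir as in the first post):

*/30 * * * *  /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron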


Any recommendations for editing smb-extra.conf to see the snapshots as shadow copies in Windows?

 

My snapshots look like 

%PoolName/%dataset@autosnap_%Y-%m-%d_%H:%M:%S_%snaptype

with %snaptype being daily, weekly, hourly, or frequently.

 

I tried

shadow: format = autosnap_%Y-%m-%d_%H:%M:%S

Any help would be greatly appreciated.
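
For context, this is the kind of share section I had in mind, assuming Samba's shadow_copy2 module (share name and path are placeholders; since shadow: format has to match the entire snapshot name, I suspect it would only ever expose one snapshot class, e.g. the dailies):

[appdata]
        path = /mnt/SSD/Docker
        vfs objects = shadow_copy2
        shadow: snapdir = .zfs/snapshot
        shadow: sort = desc
        shadow: localtime = yes
        shadow: format = autosnap_%Y-%m-%d_%H:%M:%S_daily
        # shadow: snapdirseverywhere = yes   # possibly needed when child datasets hold their own snapshots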


Hi

 

Sorry to bring this up, but I just applied this plugin and used the same config you demonstrated. It is working fine for now, but I keep receiving email notifications with the following:

"could not find any snapshots to destroy; check snapshot names.
could not remove SSD@autosnap_2021-04-19_13:15:01_frequently : 256 at /usr/local/sbin/sanoid line 343."

 

Do you know what to do to fix this? 

On 3/22/2023 at 8:21 AM, boxer74 said:

Any reason to keep using this? Will these features become built-in to 6.12?

 

I'm running 6.12.0-rc4.1 and I don't see any native UI options for snapshot control (but I could be blind), so I'd say if it works, keep using it. 

