Posts posted by steini84
Or even try zpool import -f -a
Sent from my iPhone using Tapatalk
-
Hello, I am running into what is probably a simple problem, but I don't want to try anything that might damage the disk I am trying to get data from. I have backups, but recovering the data would take me weeks with my current connection.
I am coming from FreeNAS, with a 4TB ZFS-formatted drive. I got a new 4TB drive, which is already formatted in XFS and ready to receive that data. Problem is: I forgot to 'export' my zpool on the FreeNAS system before, well, formatting it to install unRaid. Because of that, the 'zpool import' command does not work, and I am unsure how to properly mount the ZFS drive to retrieve the data. That HDD will be properly formatted and added to the array when the transfer is done.
I got this far, but pressing 'mount' won't properly mount the drive. What should I do to retrieve that data?
If I understand correctly you have to use the -f flag:
zpool import -f POOLNAME
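If you don't remember the pool name from FreeNAS, a rough sequence like this should work (untested on your exact setup, so treat it as a sketch):

# show pools that are available for import (this lists the old FreeNAS pool name)
zpool import

# force-import the pool even though it was never exported
zpool import -f POOLNAME

# or, if you only want to copy the data off, import it read-only
zpool import -f -o readonly=on POOLNAME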
Sent from my iPhone using Tapatalk -
Hi,
I would like to thank you for this amazing plugin. It allows me to get the best filesystem along with my preferred Linux OS. An almost no-compromise solution!
I am currently running the latest stable release of Unraid 6.8.3 with your RC2 OpenZFS 2.0 plugin version you just posted.
I am very excited to try the new zstd compression but for some reason it won't allow me to use it. It says invalid argument. Do I need to reboot the server in order to fully support the new ZFS version? I was running 0.8.3-1 before upgrading. I can't wait to try out the different zstd levels to see which one fits my needs better.
Is there a way to see any disk/pool activity within Unraid using OpenZFS? So far I'm always connected via PuTTY running "zpool iostat -v 5". I was just wondering if there is another plugin or some way to get at least a status in the GUI if one of my pools becomes degraded.
Thank you so much !!!
Phil
Yeah I would reboot and retry. It worked as expected on my test server after a reboot:

root@Tower:~# zpool upgrade SSD
This system supports ZFS pool feature flags.

Enabled the following features on 'SSD':
  redaction_bookmarks
  redacted_datasets
  bookmark_written
  log_spacemap
  livelist
  device_rebuild
  zstd_compress
root@Tower:~# zfs set compression=zstd SSD
root@Tower:~# zfs get all | grep -i compression
SSD  compression  zstd  local
root@Tower:~#
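About the zstd levels: once the pool is upgraded you should be able to pick a level directly in the property value. A quick sketch (zstd-19 is just an example level, and SSD/Docker is one of my datasets, so adjust to your own):

zfs set compression=zstd SSD/Docker            # default zstd level
zfs set compression=zstd-19 SSD/Docker         # higher level = better ratio, slower writes
zfs get compression,compressratio SSD/Docker   # check what is set and how well it compresses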
This is just a plugin for ZFS with nothing added on top. I would recommend setting up a Check_mk docker to monitor your server; it can send you a mail if you have a problem, for example a degraded pool or a pool running out of space.
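If you just want a simple degraded-pool alert without a full monitoring stack, something along these lines run hourly via cron or User Scripts should do it. This is only a sketch using zpool status -x, and it assumes the stock unRAID notify script lives at /usr/local/emhttp/webGui/scripts/notify (it is not something the plugin ships):

#!/bin/bash
# send an unRAID notification if any ZFS pool is not healthy
if ! zpool status -x | grep -q "all pools are healthy"; then
  /usr/local/emhttp/webGui/scripts/notify -i warning -s "ZFS pool problem" -d "$(zpool status -x | head -n 20)"
fi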
Sent from my iPhone using Tapatalk -
18 hours ago, TheSkaz said:
you have built one for me before, that would be awesome, I REALLY don't want to lose that data. Maybe it could help someone else too?
Here you go:
https://www.dropbox.com/s/f3fp04zsgp1g4a0/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz?dl=0
https://www.dropbox.com/s/z381hehf28k3gj5/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz.md5?dl=0
You can either rename and replace the files in /boot/config/plugins/unRAID6-ZFS/packages or run these commands:
#Unmount bzmodules and make rw
if mount | grep /lib/modules > /dev/null; then
  echo "Remounting modules"
  cp -r /lib/modules /tmp
  umount -l /lib/modules/
  rm -rf /lib/modules
  mv -f /tmp/modules /lib
fi

#install and load the package and import pools
installpkg zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz
depmod
modprobe zfs
zpool import -a
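Before installing it is worth verifying the package against the md5 file (assuming the .md5 file is in the standard md5sum format with the filename on the same line):

md5sum -c zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz.md5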
-
Well, you could zfs send to a different pool, make a new pool, and zfs send back. Or I could build ZFS 2.0 for you on 6.8.3. Let me know if you want that.
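For the first option, the round-trip is roughly this (pool and dataset names are placeholders, so adapt them and test on something unimportant first):

# snapshot the dataset and send it to a temporary pool
zfs snapshot -r oldpool/data@migrate
zfs send -R oldpool/data@migrate | zfs receive temppool/data

# destroy/recreate the main pool with the new layout, then send everything back
zfs snapshot -r temppool/data@return
zfs send -R temppool/data@return | zfs receive newpool/data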
Sent from my iPhone using Tapatalk -
Built zfs-2.0.0-rc2 for unRAID-6.9.0-beta25
-
The first release candidate of OpenZFS 2.0 has been released
https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0-rc1
I have built it for unRAID 6.9.0 beta 25
For those already running ZFS 0.8.4-1 on unRAID 6.9.0 beta 25 who want to update: you can just uninstall this plugin and re-install it (don't worry, you won't have any ZFS downtime), or run this command and reboot:
rm /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.4-unRAID-6.9.0-beta25.x86_64.tgz
Either way you should see this:
#Before
root@Tower:~# modinfo zfs | grep version
version:        0.8.4-1
srcversion:     E9712003D310D2B54A51C97

#After
root@Tower:~# modinfo zfs | grep version
version:        2.0.0-rc1
srcversion:     6A6B870B7C76FB81D4FEFB4
-
On what OS did you create the pool? That feature is not yet in OpenZFS on Linux:
http://build.zfsonlinux.org/zfs-features.html
Sent from my iPhone using Tapatalk -
ZFS is actively being worked on for Unraid:
https://selfhosted.show/25
Sent from my iPhone using Tapatalk
-
Hopefully when unraid adds native support
Sent from my iPhone using Tapatalk
-
Updated for 6.9.0-beta25
-
... and there is some possibility that unRaid may include ZFS as a supported file system out of the box. The developer has been ranting about btrfs and dropping ZFS hints. We will just have to wait and see. The plugin has been working with unRaid for over a year, and we can thank steini84 for his dedication. Getting Tom to bake it into unRaid and have it be more thoroughly tested will be even better.
... 5 years for me, but we are getting off topic
Sent from my iPhone using Tapatalk
-
FYI you can bake in ZFS and then you don’t need a plug-in:
https://forums.unraid.net/topic/92865-support-ich777-nvidiadvb-kernel-helperbuilder-docker/
-
Stupid question alert:
Does "errors: No known data errors" mean there is no error i.e. all is well?
Yes exactly
But to be clear, ZFS only finds errors when it reads files, so to be sure that your hard drives are not plotting against you (spoiler alert: they are) it's good to scrub regularly:
https://docs.oracle.com/cd/E23823_01/html/819-5461/gbbwa.html
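A scrub is a single command, so it is easy to schedule with cron or the User Scripts plugin. Something like this works (monthly is just an example schedule, and adjust the path if zpool lives somewhere else on your system):

# run a scrub by hand and check on it
zpool scrub SSD
zpool status SSD

# crontab entry: scrub on the 1st of every month at 03:00
0 3 1 * * /usr/sbin/zpool scrub SSD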
Sent from my iPhone using Tapatalk
-
You're welcome
The wife went out and the kids fell asleep early so...
I probably have to make some rewrites to the "tutorial" but it will be fun to see if someone gets it up and running
-
Hi,
Here is another companion plugin for the ZFS plugin for unRAID.
Quote:
"Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.
Sanoid also includes a replication tool, syncoid, which facilitates the asynchronous incremental replication of ZFS filesystems."
To install, copy this URL into the Install Plugin page in your unRAID 6 web GUI:
https://raw.githubusercontent.com/Steini1984/unRAID6-Sainoid/master/unRAID6-Sanoid.plg
I recommend you follow the directions here: https://github.com/jimsalterjrs/sanoid but keep in mind that unRAID does not have a persistent /etc/ or cron, so you have to take that into account.
Below you can see how I set up my system, but the plugin is built pretty vanilla so you can adjust it to your needs.
Why?
I have a 3-SSD pool on my unRAID server running RAIDZ1 and that has been rock solid for years. I have used multiple different snapshot tools, and ZnapZend (an unRAID plugin is available) has served me well..... well, apart from remote replication. Probably my user error, but multiple systems I have set up all had the same problem of losing sync between the main server and the backup server. In come Sanoid and Syncoid, which were a little more effort in the beginning, but it has literally been set it and forget it since.
My setup
The setup for Sanoid is pretty straightforward, but I wanted to show you how I use it and how it is configured, so you can hopefully save some time and/or get inspired to back up your own system.
My servers are:
- Main server running unRAID with a ZFS Pool for Vms/Docker (SSD)
- Backup server running Proxmox with an NVME pool for Vms/Containers (rpool) and a USB pool for backups (Buffalo)
Setting up my system (adjust to your needs):
This part is way too long, probably needs an edit and may be missing a step or two, but I hope it helps someone:
- Automatic snapshots
My main ZFS pool on unRAID is named SSD and mounted at /mnt/SSD:
root@Unraid:/mnt/SSD# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
SSD  148G  67.8G  160K  /mnt/SSD
SSD/Docker  106G  67.8G  10.3G  /mnt/SSD/Docker
SSD/Docker/Bitwarden  4.93M  67.8G  2.24M  /mnt/SSD/Docker/Bitwarden
SSD/Docker/Bookstack  7.11M  67.8G  6.23M  /mnt/SSD/Docker/Bookstack
SSD/Docker/Check_MK  7.36G  67.8G  471M  /mnt/SSD/Docker/Check_MK
SSD/Docker/Code-Server  90.1M  67.8G  85.1M  /mnt/SSD/Docker/Code-Server
SSD/Docker/Daapd  662M  67.8G  508M  /mnt/SSD/Docker/Daapd
SSD/Docker/Duplicati  3.56G  67.8G  2.64G  /mnt/SSD/Docker/Duplicati
SSD/Docker/Emoncms  69.7M  67.8G  34.5M  /mnt/SSD/Docker/Emoncms
SSD/Docker/Grafana  4.41M  67.8G  240K  /mnt/SSD/Docker/Grafana
SSD/Docker/Guacamole  4.02M  67.8G  3.47M  /mnt/SSD/Docker/Guacamole
SSD/Docker/HomeAssistant  60.7M  67.8G  44.2M  /mnt/SSD/Docker/HomeAssistant
SSD/Docker/Influxdb  511M  67.8G  66.1M  /mnt/SSD/Docker/Influxdb
SSD/Docker/Kodi  1.83G  67.8G  1.59G  /mnt/SSD/Docker/Kodi
SSD/Docker/MQTT  293K  67.8G  181K  /mnt/SSD/Docker/MQTT
SSD/Docker/MariaDB  2.05G  67.8G  328M  /mnt/SSD/Docker/MariaDB
SSD/Docker/MariaDB/log  218M  67.8G  130M  /mnt/SSD/Docker/MariaDB/log
SSD/Docker/Netdata  128K  67.8G  128K  /mnt/SSD/Docker/Netdata
SSD/Docker/Node-RED  51.5M  67.8G  50.5M  /mnt/SSD/Docker/Node-RED
SSD/Docker/Pi-hole  514M  67.8G  351M  /mnt/SSD/Docker/Pi-hole
SSD/Docker/Unifi  956M  67.8G  768M  /mnt/SSD/Docker/Unifi
SSD/Docker/deCONZ  5.44M  67.8G  256K  /mnt/SSD/Docker/deCONZ
SSD/Vms  32.3G  67.8G  128K  /mnt/SSD/Vms
SSD/Vms/Broadcaster  26.5G  67.8G  17.3G  /mnt/SSD/Vms/Broadcaster
SSD/Vms/libvirt  2.55M  67.8G  895K  /mnt/SSD/Vms/libvirt
SSD/Vms/unRAID-Build  5.87G  67.8G  5.87G  /mnt/SSD/Vms/unRAID-Build
SSD/swap  9.37G  67.8G  9.37G  -
First I installed the plugin and copied the config files to the main pool:
cp /etc/sanoid/sanoid.defaults.conf /mnt/SSD
cp /etc/sanoid/sanoid.example.conf /mnt/SSD/sanoid.conf
Then you have to edit the sanoid config file
nano /mnt/SSD/sanoid.conf
My config file has two templates, just so I can ignore the swap partition. I think the config file explains itself; it looks like this:
[SSD]
        use_template = production
        recursive = yes

[SSD/swap]
        use_template = ignore
        recursive = no

#############################
# templates below this line #
#############################

[template_production]
        frequently = 4
        hourly = 24
        daily = 7
        monthly = 0
        yearly = 0
        autosnap = yes
        autoprune = yes

[template_ignore]
        autosnap = no
        autoprune = no
        monitor = no
Now we have to run Sanoid every minute and you can for example use the User Scripts plugin or cron.
I use cron and have this line in my crontab:
* * * * * /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron
This overrides the default config dir so we can keep the files on persistent storage. To add this at boot you can add it to your go file, or set up a User Scripts script that runs on boot with this command:
(crontab -l 2>/dev/null; echo "* * * * * /usr/local/sbin/sanoid --configdir=/mnt/SSD/ --cron") | crontab -
Now you are good to go and should have automatic snapshots on your unRAID server:
root@Unraid:/mnt/SSD# zfs list -t snapshot
NAME  USED  AVAIL  REFER  MOUNTPOINT
SSD@autosnap_2020-07-03_23:59:01_daily  0B  -  160K  -
SSD@autosnap_2020-07-04_23:59:01_daily  0B  -  160K  -
SSD@autosnap_2020-07-05_23:59:01_daily  0B  -  160K  -
SSD@autosnap_2020-07-06_23:59:01_daily  0B  -  160K  -
SSD@autosnap_2020-07-07_23:59:01_daily  0B  -  160K  -
SSD@autosnap_2020-07-08_23:59:01_daily  0B  -  160K  -
SSD@autosnap_2020-07-09_22:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-09_23:00:02_hourly  0B  -  160K  -
SSD@autosnap_2020-07-09_23:59:01_daily  0B  -  160K  -
SSD@autosnap_2020-07-10_00:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_01:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_02:00:02_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_03:00:02_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_04:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_05:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_06:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_07:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_08:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_09:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_10:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_11:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_12:00:02_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_13:00:02_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_14:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_15:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_16:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_17:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_18:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_19:00:02_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_20:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_20:30:01_frequently  0B  -  160K  -
SSD@autosnap_2020-07-10_20:45:01_frequently  0B  -  160K  -
SSD@autosnap_2020-07-10_21:00:01_hourly  0B  -  160K  -
SSD@autosnap_2020-07-10_21:00:01_frequently  0B  -  160K  -
SSD@autosnap_2020-07-10_21:15:01_frequently  0B  -  160K  -
SSD/Docker@autosnap_2020-07-03_23:59:01_daily  2.79G  -  10.6G  -
SSD/Docker@autosnap_2020-07-04_23:59:01_daily  1.54G  -  9.97G  -
....
- Replication
Now we take a look at my second server, which has a pool named Buffalo (I use a 2-disk USB Buffalo disk station) for backups:
root@proxmox:~# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
Buffalo  1.15T  621G  25K  /Buffalo
Buffalo/Backups  906G  621G  28K  /Buffalo/Backups
Buffalo/Backups/Nextcloud  227G  621G  227G  /Buffalo/Backups/Nextcloud
Buffalo/Backups/Pictures  43.6G  621G  43.6G  /Buffalo/Backups/Pictures
Buffalo/Backups/Unraid  29.4G  621G  29.4G  /Buffalo/Backups/Unraid
Buffalo/Proxmox-Replication  86.2G  621G  25K  /Buffalo/Proxmox-Replication
Buffalo/Proxmox-Replication/ROOT  2.30G  621G  24K  /Buffalo/Proxmox-Replication/ROOT
Buffalo/Proxmox-Replication/ROOT/pve-1  2.30G  621G  1.47G  /
Buffalo/Proxmox-Replication/data  83.9G  621G  24K  /Buffalo/Proxmox-Replication/data
Buffalo/Proxmox-Replication/data/Vms  83.5G  621G  24K  /Buffalo/Proxmox-Replication/data/Vms
Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-0  2.55G  621G  1.94G  -
Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-1  5.06G  621G  3.81G  -
Buffalo/Proxmox-Replication/data/Vms/vm-102-unRAID-BUILD-disk-2  776M  621G  506M  -
Buffalo/Proxmox-Replication/data/subvol-103-disk-0  386M  7.62G  386M  /Buffalo/Proxmox-Replication/data/subvol-103-disk-0
Buffalo/unRAID-Replication  185G  621G  28.5K  /mnt/SSD
Buffalo/unRAID-Replication/Docker  140G  621G  9.89G  /mnt/SSD/Docker
Buffalo/unRAID-Replication/Docker/Bitwarden  2.94M  621G  1.33M  /mnt/SSD/Docker/Bitwarden
Buffalo/unRAID-Replication/Docker/Bookstack  6.29M  621G  5.61M  /mnt/SSD/Docker/Bookstack
Buffalo/unRAID-Replication/Docker/Check_MK  5.67G  621G  372M  /mnt/SSD/Docker/Check_MK
Buffalo/unRAID-Replication/Docker/Code-Server  41.2M  621G  34.0M  /mnt/SSD/Docker/Code-Server
Buffalo/unRAID-Replication/Docker/Daapd  845M  621G  505M  /mnt/SSD/Docker/Daapd
Buffalo/unRAID-Replication/Docker/Duplicati  5.08G  621G  2.53G  /mnt/SSD/Docker/Duplicati
Buffalo/unRAID-Replication/Docker/Emoncms  79.6M  621G  29.2M  /mnt/SSD/Docker/Emoncms
Buffalo/unRAID-Replication/Docker/Grafana  872K  621G  87K  /mnt/SSD/Docker/Grafana
Buffalo/unRAID-Replication/Docker/Guacamole  2.50M  621G  1.87M  /mnt/SSD/Docker/Guacamole
Buffalo/unRAID-Replication/Docker/HomeAssistant  42.5M  621G  38.2M  /mnt/SSD/Docker/HomeAssistant
Buffalo/unRAID-Replication/Docker/Influxdb  884M  621G  68.7M  /mnt/SSD/Docker/Influxdb
Buffalo/unRAID-Replication/Docker/Kodi  1.65G  621G  1.46G  /mnt/SSD/Docker/Kodi
Buffalo/unRAID-Replication/Docker/MQTT  67.5K  621G  31.5K  /mnt/SSD/Docker/MQTT
Buffalo/unRAID-Replication/Docker/MariaDB  2.63G  621G  298M  /mnt/SSD/Docker/MariaDB
Buffalo/unRAID-Replication/Docker/MariaDB/log  310M  621G  104M  /mnt/SSD/Docker/MariaDB/log
Buffalo/unRAID-Replication/Docker/Netdata  24K  621G  24K  /mnt/SSD/Docker/Netdata
Buffalo/unRAID-Replication/Docker/Node-RED  18.5M  621G  17.3M  /mnt/SSD/Docker/Node-RED
Buffalo/unRAID-Replication/Docker/Pi-hole  663M  621G  331M  /mnt/SSD/Docker/Pi-hole
Buffalo/unRAID-Replication/Docker/Unifi  1.07G  621G  691M  /mnt/SSD/Docker/Unifi
Buffalo/unRAID-Replication/Docker/deCONZ  1.03M  621G  61.5K  /mnt/SSD/Docker/deCONZ
Buffalo/unRAID-Replication/Vms  45.2G  621G  23K  /mnt/SSD/Vms
Buffalo/unRAID-Replication/Vms/Broadcaster  39.4G  621G  16.5G  /mnt/SSD/Vms/Broadcaster
Buffalo/unRAID-Replication/Vms/libvirt  1.78M  621G  471K  /mnt/SSD/Vms/libvirt
Buffalo/unRAID-Replication/Vms/unRAID-Build  5.78G  621G  5.78G  /mnt/SSD/Vms/unRAID-Build
rpool  125G  104G  104K  /rpool
rpool/ROOT  1.94G  104G  96K  /rpool/ROOT
rpool/ROOT/pve-1  1.94G  104G  1.62G  /
rpool/data  114G  104G  96K  /rpool/data
rpool/data/Vms  114G  104G  96K  /rpool/data/Vms
rpool/data/Vms/vm-101-blueiris-disk-0  41.9G  104G  33.4G  -
rpool/data/Vms/vm-101-blueiris-disk-1  40.4G  104G  40.4G  -
rpool/data/Vms/vm-102-unRAID-BUILD-disk-0  1.98G  104G  1.94G  -
rpool/data/Vms/vm-102-unRAID-BUILD-disk-1  4.30G  104G  4.29G  -
rpool/data/Vms/vm-102-unRAID-BUILD-disk-2  776M  104G  750M  -
rpool/data/subvol-103-disk-0  440M  7.57G  440M  /rpool/data/subvol-103-disk-0
rpool/swap  8.50G  105G  6.96G  -
spinner  277G  172G  25.0G  /spinner
spinner/vm-101-cctv-disk-0  252G  172G  252G  -
I also installed Sanoid there, but the config file is a bit different:
root@proxmox:~# cat /etc/sanoid/sanoid.conf
[rpool]
        use_template = production
        recursive = yes

[rpool/swap]
        use_template = ignore
        recursive = no

[Buffalo/Proxmox-Replication]
        use_template = backup
        recursive = yes

[Buffalo/unRAID-Replication]
        use_template = backup
        recursive = yes

#############################
# templates below this line #
#############################

[template_production]
        frequently = 4
        hourly = 24
        daily = 7
        monthly = 0
        yearly = 0
        autosnap = yes
        autoprune = yes

[template_backup]
        autoprune = yes
        frequently = 0
        hourly = 0
        daily = 90
        monthly = 0
        yearly = 0

        ### don't take new snapshots - snapshots on backup
        ### datasets are replicated in from source, not
        ### generated locally
        autosnap = no

        ### monitor hourlies and dailies, but don't warn or
        ### crit until they're over 48h old, since replication
        ### is typically daily only
        hourly_warn = 2880
        hourly_crit = 3600
        daily_warn = 48
        daily_crit = 60

[template_ignore]
        autosnap = no
        autoprune = no
        monitor = no
Since the backup server is Debian-based, Sanoid is automatically called every minute via systemd (set up automatically by the official package).
Now I have regular snapshots from my rpool (the data for Proxmox) and then I have two replication targets on the USB backup pool
Buffalo/Proxmox-Replication & Buffalo/unRAID-Replication
Replicating from the NVMe pool on my Proxmox host to the USB pool is really simple since it is on the same system, but for unRAID we need to set up SSH keys.
On the unRAID server I run
root@Unraid:~# ssh-keygen
and press enter multiple times until the process is finished.
Then on the second (Proxmox) server I run this command as root:
root@proxmox:~# ssh-copy-id root@unraid #your server name/ip address
and answer "yes". You can test it from your backup server by SSHing in and seeing if you get passwordless access.
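A quick way to check that the key works is to run a remote command from the backup server, for example:

root@proxmox:~# ssh root@unraid zpool status

If it prints the pool status without asking for a password, you are good to go.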
Again, since unRAID won't retain any of this on reboot, we have to back up the ssh folder to the USB key. I do it like this:
(*this step is not relevant on unRAID 6.9 and later since it symlinks the ssh folder to the boot drive)
mkdir -p /boot/custom/ssh/root
cp -r /root/.ssh /boot/custom/ssh/root/
Then you can add this script to run on boot e.g. using the aforementioned User Scripts plugin:
#!/bin/bash
#Root
mkdir -p /root/.ssh
cp -r /boot/custom/ssh/root/.ssh /root/
chown -R root:root /root/
chmod 700 /root/
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
Then we can finally start the replication from the second server. Since I am backing up two pools, I run them one after the other:
/usr/sbin/syncoid -r --quiet --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication && /usr/sbin/syncoid -r --quiet --no-sync-snap rpool Buffalo/Proxmox-Replication
This command first sends all the snapshots from unRAID via SSH to the USB backup pool on Proxmox, and then replicates locally from the NVMe pool to the USB pool.
I run this every day at 02:15 using cron, and for simplicity's sake I put the commands in a bash script:
The crontab:
#Nightly replication
15 2 * * * /usr/local/bin/replicate

The bash script:
root@proxmox:~# cat /usr/local/bin/replicate
#!/bin/bash
/usr/sbin/syncoid -r --quiet --no-sync-snap root@unraid:SSD Buffalo/unRAID-Replication && /usr/sbin/syncoid -r --quiet --no-sync-snap rpool Buffalo/Proxmox-Replication
Now let's take a look at what the unRAID replication looks like over on the backup server's USB pool:
root@proxmox:~# zfs list -t snapshot -r Buffalo/unRAID-Replication
NAME  USED  AVAIL  REFER  MOUNTPOINT
Buffalo/unRAID-Replication@autosnap_2020-06-19_11:51:00_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-19_23:59:01_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-20_23:59:01_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-21_23:59:01_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-22_23:59:01_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-23_23:59:01_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-24_23:59:01_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-25_23:59:01_daily  0B  -  23K  -
Buffalo/unRAID-Replication@autosnap_2020-06-26_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-27_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-28_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-29_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-06-30_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-01_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-02_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-03_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-04_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-05_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-06_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-07_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-08_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication@autosnap_2020-07-09_23:59:01_daily  0B  -  28.5K  -
Buffalo/unRAID-Replication/Docker@autosnap_2020-06-19_11:51:00_daily  416M  -  9.11G  -
Buffalo/unRAID-Replication/Docker@autosnap_2020-06-19_23:59:01_daily  439M  -  9.23G  -
....
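If you ever need to restore, the same tooling works in the other direction. A rough sketch (the dataset and snapshot names here are just examples taken from my listings) is to roll back locally on unRAID, or push a dataset back from the backup server with syncoid:

# on unRAID: roll a dataset back to one of the automatic snapshots
# (add -r if it is not the most recent snapshot)
zfs rollback SSD/Docker/Bitwarden@autosnap_2020-07-09_23:59:01_daily

# or, from the backup server: push a dataset from the backup pool back to unRAID
/usr/sbin/syncoid --no-sync-snap Buffalo/unRAID-Replication/Docker/Bitwarden root@unraid:SSD/Docker/Bitwarden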
-
What is your use case for Sanoid? Yes, we are always interested in new shiny things. Are there any issues with it?
I had some problems with ZnapZend and remote replication, but Sanoid/Syncoid has been rock solid.
The plugin is pretty much complete, but I’m just wondering how much of the setup I should build into the plugin or leave to the user. I prefer flexibility and don’t want to limit the user, but I don’t want it to be difficult either.
I’ll grab some coffee this weekend and throw something together
Sent from my iPhone using Tapatalk -
Hey, thanks for the report. ZFS is very cool, but @steini84 maybe we should update post 1 with "Where not to use ZFS".
It would appear that using ZFS to mount USB devices is not a good use case (or should only be done in cases where you are aware that ZFS is not plug and play with USB). For normal disk maintenance, unRaid leads us to believe that we can safely do this with the array stopped. ZFS is a totally different animal.
Would best practice be to put a "zpool export" command into Squid's awesome user scripts plugin and set it to happen on array stop, and "zpool import" on array start? On first startup of array, you should not need this as ZFS will automatically do the zpool import. It would seem that user scripts supports all this and could make ZFS behave like the unRaid array. Would this make sense?
I want to provide a vanilla experience of ZFS on unRAID until it’s natively supported
I don't want to add functions that might or might not benefit users' use cases.
If you want the pool to automatically import and export, it's a great idea to add zpool export -a and zpool import -a on stopping/starting the array via the awesome User Scripts plugin.
Considering this is a plug-in for advanced users I don’t think the target audience for this will have a problem adding these commands if preferred.
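For reference, the two User Scripts would be as simple as this (set one to run when the array stops and the other when it starts):

#!/bin/bash
# run at array stop: export all ZFS pools
zpool export -a

#!/bin/bash
# run at array start: import all ZFS pools
zpool import -a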
Now we just wait and see if native zfs support comes in the next betas
PS: I have built Sanoid/Syncoid for myself and was wondering if there was any demand for a plug-in?
-
Built for unRAID-6.9.0-beta24
Sent from my iPhone using Tapatalk -
I'm a bit of a tinkerer and I'm interested in giving FreeNAS a go right now.
Is it reasonable for me to think that I could fire up my server with a FreeNAS boot disk and have it import my zfs pool I currently have on Unraid without any hitches?
What if I wanted to come back to Unraid later on?
Yeah, ZFS is really portable, but you have to be sure that you don't have feature flags enabled on the pool that are not supported on FreeBSD/FreeNAS.
See this link: https://openzfs.org/wiki/Feature_Flags
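To check what is actually enabled on a pool before moving it, something like this works (POOLNAME is a placeholder):

# list the feature flags and their state on the pool
zpool get all POOLNAME | grep feature@

# show which features this ZFS build supports
zpool upgrade -v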
Sent from my iPhone using Tapatalk
-
I have been testing on 6.9.0 b22 and have a ZFS mirror created on 2 spinners. On this mirror, I have a Win10 VM created, and now twice unRaid has refused to start the VM after a VM shutdown. The VM start command in unRaid just sits with the VM stopped and the red spinning arrows going in a circle attempting to restart the VM. UnRaid is still responsive, as you can create another tab and work with unRaid normally.
The VM had been running successfully after a fresh unRaid boot, but shutting down a Win10 VM and restarting the VM causes this issue.
Anything in the syslog?
Sent from my iPhone using Tapatalk -
Hey @steini84,
I ran into a similar problem. I don't know if you've seen it already, but I created a container to build custom kernel images for Unraid with nVidia/DVB/ZFS built in, without any plugin needed.
This is the default behaviour of ZFS, since it goes into a 'wait' state; read this: Click
Also this thread from L1 should make it a little bit clearer: Click (a little old but still up-to-date)
Hope this helps, this is all that I've found from my research.
EDIT: I personally don't use ZFS on Unraid because it's not my use case... but as said above, I ran into the same problem after accidentally disconnecting a drive from a running test ZFS pool.
Perfect, thanks for this information, and great job with the container!
Sent from my iPhone using Tapatalk
-
So the conclusion is that ZFS is extremely sensitive to disconnections and does not recover on reconnection. Have you ever seen data corruption as a result of these issues??
No never
Sent from my iPhone using Tapatalk -
15 hours ago, tr0910 said:
ZFS lockup Array stop and restart
I have a 6.8.3 production server that has ZFS installed recently. I took a single disk and set it up with ZFS for testing as follows.
root@Tower:/# zpool create -m /mnt/ZFS1 ZFS1 sde
The server behaved as expected and the ZFS1 pool was active. I then took the unRaid array offline and performed some disk maintenance. I didn't do anything to unmount ZFS1. By mistake, the ZFS disk was also pulled from the server and reinserted into the hotbay. Bringing the server back online resulted in ZFS being locked up. The ZFS and ZPOOL commands would execute, but a ZFS list command would result in a hung terminal window. ZFS import also locked the terminal window. The unRaid server was working fine, but to recover ZFS, the only thing I could do was reboot the server. The ZFS pool came back online after reboot.
I expect that I needed to perform some ZFS commands on array stop and restart? What should I have done? Is there any way to recover a locked up ZFS without reboot?
I would like to hear if someone knows the answer to this. This has happened to me using a USB disk (power failure, disconnection etc.). On Linux (unRAID and Debian) the server keeps working fine, but I get incredibly high load and nothing I try works to reset ZFS to a usable state. I could not even reboot normally and had to force a reboot. On FreeBSD it was similar; since load is calculated differently there the signs were not as obvious, but it was pretty much the same problem.
Try booting into FreeNAS since you know it was working there, and see if you can mount it there.
Sent from my iPhone using Tapatalk