ZFS plugin for unRAID


steini84


******************************************************

This plugin is deprecated since Unraid 6.12 has native ZFS!

Since this thread was written I have moved my snapshots/backups/replication over to Sanoid/Syncoid, which I like even more, but I will keep the original thread unchanged since ZnapZend is still a valid option:

******************************************************

 What is this?

This plugin is a build of ZFS on Linux for unRAID 6

 

Installation of the plugin

To install, copy the URL below into the Install Plugin page in your unRAID 6 web GUI, or install it through Community Applications.

https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg

 

WHY ZFS and unRAID?

I wanted to put down a little explanation and a mini "guide" that explains how and why I use ZFS with unRAID 

* I use sdx, sdy & sdz as example devices, but you have to change those to your device names.

* SSD is just the name I like to use for my pool, but you can use whatever you like.

 

My use case for ZFS is a really simple but powerful way to make unRAID the perfect setup for me. In the past I have run ESXi with unRAID as a guest with PCI passthrough and OmniOS + napp-it as a data store. Then I tried Proxmox, which had native ZFS, with unRAID again as a guest, but both of these solutions were a little bit fragile. When unRAID started to have great support for VMs and Docker I wanted to have that as the host system and stop relying on a hypervisor. The only thing missing was ZFS, and even though I gave btrfs a good chance it did not feel right for me. I built ZFS for unRAID in 2015, and as of March 2023 the original setup of 3x SSD + 2x HDD is still going strong running 24/7. That means 7 years of rock-solid and problem-free uptime.

 

You might think a ZFS fanboy like myself would like to use FreeNAS or another ZFS-based solution, but I really like unRAID for its flexible ability to mix and match hard drives for media files. I use ZFS to complement unRAID and think I get the best of both worlds with this setup.

 

I run a 3-disk SSD pool in raidz that I use for Docker and VMs. I take automatic snapshots every 15 minutes and replicate every day to a 2x 2TB mirror that connects over USB as a backup. I also use that backup pool as an rsync target for my most valuable data from unRAID (photos etc.), which has the added bonus of being protected with checksums (no bit rot).

 

I know btrfs can probably solve all of this, but I decided to go with ZFS. The great thing about open source is that you have the freedom to choose.

 

Disclaimer/Limitations

  • The plugin needs to be rebuilt when an update includes a new Linux kernel (there is an automated system that makes new builds, so there should not be a long delay - thanks Ich777)
  • This plugin does not allow you to use ZFS as a part of the array or a cache pool (which would be awesome by the way).
  • This is not supported by Limetech. I can't take any responsibility for your data, but it should be fine as it's just the official ZFS on Linux packages built on unRAID (thanks to gfjardim for making that awesome script to set up the build environment). The plugin installs the packages, loads the kernel module and imports all pools.

 

How to create a pool?

First create a zfs pool and mount it somewhere under /mnt

 

Examples:

Single disk pool

zpool create -m /mnt/SSD SSD sdx

2 disk mirror

zpool create -m /mnt/SSD SSD mirror sdx sdy

3 disk raidz pool

zpool create -m /mnt/SSD SSD raidz sdx sdy sdz
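
If you want to sanity check the new pool before going further, these read-only commands show its layout and health (using the raidz example above):

zpool status SSD
zpool list SSD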

 

Tweaks

After creating the pool I like to make some adjustments. They are not needed, but give my server better performance:

 

My pool is all SSD so I want to enable trim

zpool set autotrim=on SSD

Next I add these lines to my go file to limit the ARC memory usage of ZFS (I like to limit it to 8GB on my 32GB box, but you can adjust that to your needs):

echo "#Adjusting ARC memory usage (limit 8GB)" >> /boot/config/go
echo "echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max" >> /boot/config/go

I also like to enable compression. "This may sound counter-intuitive, but turning on ZFS compression not only saves space, but also improves performance. This is because the time it takes to compress and decompress the data is quicker than the time it takes to read and write the uncompressed data to disk (at least on newer laptops with multi-core chips)." -Oracle

 

To enable compression you need to run this command (it only applies to blocks written after enabling compression):

zfs set compression=lz4 SSD

And lastly I like to disable access time:

zfs set atime=off SSD
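
To double check the tweaks above you can read the properties back (compressratio only becomes interesting once some data has been written):

zpool get autotrim SSD
zfs get compression,atime,compressratio SSD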

File systems

Now we could just use one file system (/mnt/SSD/), but I like to make separate file systems for Docker and VMs:

zfs create SSD/Vms
zfs create SSD/Docker

Now we should have something like this:

root@Tower:~# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
SSD          170K   832M       24K  /mnt/SSD
SSD/Docker    24K   832M       24K  /mnt/SSD/Docker
SSD/Vms       24K   832M       24K  /mnt/SSD/Vms

Now we have Docker and VMs separated, and that gives us more flexibility. For example, we can have different ZFS features turned on for each file system, and we can snapshot, restore and replicate them separately.
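
For example, you could give each file system its own tuning. The values here are just made up to show the idea, not recommendations:

# hypothetical per-file-system settings
zfs set recordsize=16K SSD/Docker
zfs set quota=200G SSD/Vms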

 

To have even more flexibility I like to create a separate file system for every VM and every Docker container. That way I can work with a single VM or a single container without interfering with the rest. In other words, I can mess up a single container and roll it back without affecting the rest of the server.

 

Let's start with a single Ubuntu VM and a Home Assistant container. While we are at it, let's create a file system for libvirt.img.

 

*The trick is to add the file system before you create a VM/container in unRAID, but with some moving around you can copy the data directory from an existing container into a ZFS file system after the fact (see the sketch after the listing below).


 

zfs create SSD/Docker/HomeAssistant

zfs create SSD/Vms/Ubuntu

zfs create SSD/Vms/libvirt

 

Now we have this structure, and each and every one of these file systems can be worked with as a group, subgroup or individually (snapshots, clones, replications, rollbacks etc.):

root@Tower:~# zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
SSD                        309K   832M       24K  /mnt/SSD
SSD/Docker                  48K   832M       24K  /mnt/SSD/Docker
SSD/Docker/HomeAssistant    24K   832M       24K  /mnt/SSD/Docker/HomeAssistant
SSD/Vms                     72K   832M       24K  /mnt/SSD/Vms
SSD/Vms/Ubuntu              24K   832M       24K  /mnt/SSD/Vms/Ubuntu
SSD/Vms/libvirt             24K   832M       24K  /mnt/SSD/Vms/libvirt
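
As mentioned above, you can also move an existing container's config into a dataset after the fact. A rough sketch - the Plex container and the cache-drive appdata path are just examples, and you should stop the container first:

# stop the container in the unRAID gui, then:
zfs create SSD/Docker/Plex
rsync -a /mnt/cache/appdata/Plex/ /mnt/SSD/Docker/Plex/
# point the container's appdata path at /mnt/SSD/Docker/Plex and start it again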

 

unRAID settings

From here you can navigate to the unRAID web GUI and set the default folders for Docker and VMs to /mnt/SSD/Docker and /mnt/SSD/Vms:

 


**** There have been reported issues with keeping docker.img on ZFS 2.1 (which will be the default on unRAID 6.10.0). The system can lock up, so I recommend you keep docker.img on the cache drive if you run into any trouble. ****

 


 

Now when you add a new app via Docker you choose the newly created folder as the config directory.


Same with the VMs.


 

Snapshots and rollbacks 

Now this is where the magic happens.

 

You can snapshot the whole pool or you can snapshot a subset.

Let's try snapshotting the whole thing, then just Docker (and its child file systems), and then one snapshot just for the Ubuntu VM:

root@Tower:/mnt/SSD/Vms/Ubuntu# zfs list -t snapshot
no datasets available
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD@everything
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD/Docker@just_docker
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD/Vms/Ubuntu@ubuntu_snapshot

root@Tower:/mnt/SSD/Vms/Ubuntu# zfs list -r -t snapshot
NAME                                   USED  AVAIL     REFER  MOUNTPOINT
SSD@everything                           0B      -       24K  -
SSD/Docker@everything                    0B      -       24K  -
SSD/Docker@just_docker                   0B      -       24K  -
SSD/Docker/HomeAssistant@everything      0B      -       24K  -
SSD/Docker/HomeAssistant@just_docker     0B      -       24K  -
SSD/Vms@everything                       0B      -       24K  -
SSD/Vms@ubuntu_snapshot                  0B      -       24K  -
SSD/Vms/Ubuntu@everything                0B      -       24K  -
SSD/Vms/Ubuntu@ubuntu_snapshot           0B      -       24K  -
SSD/Vms/libvirt@everything               0B      -       24K  -

You can see that at first we did not have any snapshots. After creating the first recursive snapshot we have the "@everything" snapshot on every level, we only have "@just_docker" on the Docker-related file systems, and only the Ubuntu VM file system has "@ubuntu_snapshot".

 

Let's say we make a snapshot and then break the Ubuntu VM with a misguided update. We can just power it off and run

zfs rollback -r SSD/Vms/Ubuntu@ubuntu_snapshot

and we are back at the state the VM was in before we ran the update.

 

One can also access the snapshots (read-only) via a hidden folder called .zfs:

root@Tower:~# ls /mnt/SSD/Vms/Ubuntu/.zfs/snapshot/
everything/  ubuntu_snapshot/
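
So if you only need a single file back there is no need to roll back the whole file system - you can copy it straight out of the snapshot folder (the vdisk name here is just an example):

cp /mnt/SSD/Vms/Ubuntu/.zfs/snapshot/ubuntu_snapshot/vdisk1.img /mnt/SSD/Vms/Ubuntu/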

 

Automatic snapshots

If you want automatic snapshots I recommend ZnapZend, and I made a plugin available for it here: ZnapZend

 

There is more information in the plugin thread, but to get up and running you can install it via the plugin page in the unRAID GUI or through Community Applications:

https://raw.githubusercontent.com/Steini1984/unRAID6-ZnapZend/master/unRAID6-ZnapZend.plg

Then run these two commands to start the program and make it auto-start on boot:

znapzend --logto=/var/log/znapzend.log --daemonize
touch /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
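
If you want to make sure the daemon actually came up, the log file from the command above is the quickest place to look:

tail /var/log/znapzend.log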

Then you can turn on automatic snapshots with this command: 

znapzendzetup create --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD

The syntax is pretty readable, but this example makes automatic snapshots and keeps hourly snapshots for 7 days (24 a day), 4-hourly snapshots for a month (6 a day), and then a single snapshot a day for 90 days.
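
You can review the plan you just created with:

znapzendzetup list SSD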

 

The snapshots are also named in an easy-to-read format:

root@Tower:~# zfs list -t snapshot SSD/Docker/HomeAssistant
NAME                                         USED  AVAIL     REFER  MOUNTPOINT
SSD/Docker/HomeAssistant@2019-11-12-000000  64.4M      -     90.8M  -
SSD/Docker/HomeAssistant@2019-11-13-000000  46.4M      -     90.9M  -
SSD/Docker/HomeAssistant@2019-11-13-070000  28.4M      -     92.5M  -
SSD/Docker/HomeAssistant@2019-11-13-080000  22.5M      -     92.6M  -
SSD/Docker/HomeAssistant@2019-11-13-090000  29.7M      -     92.9M  -
......
SSD/Docker/HomeAssistant@2019-11-15-094500  14.4M      -     93.3M  -
SSD/Docker/HomeAssistant@2019-11-15-100000  14.4M      -     93.4M  -
SSD/Docker/HomeAssistant@2019-11-15-101500  17.2M      -     93.5M  -
SSD/Docker/HomeAssistant@2019-11-15-103000  26.8M      -     93.7M  -

Let's say we need to go back in time to a good configuration. We know we made a mistake shortly after 10:00, so we can roll back to the 10:00 snapshot:

zfs rollback -r SSD/Docker/HomeAssistant@2019-11-15-100000

 

Backups

I have a USB-connected Buffalo drive station with 2x 2TB drives which I have added for backups.

I decided on a mirror and created it with this command:

zpool create External mirror sdb sdc

Then I created a couple of file systems:

zfs create External/Backups
zfs create External/Backups/Docker
zfs create External/Backups/Vms
zfs create External/Backups/Music
zfs create External/Backups/Nextcloud
zfs create External/Backups/Pictures

I use rsync for regular files (Music, Nextcloud & Pictures) and run this in my crontab:

#Backups
0 12 * * * rsync -av --delete /mnt/user/Nextcloud/ /External/Backups/Nextcloud >> /dev/null
0 26 * * * rsync -av --delete /mnt/user/Music/ /External/Backups/Music >> /dev/null
1 26 * * * rsync -av --delete /mnt/user/Pictures/ /External/Backups/Pictures >> /dev/null

Then I run automatic snapshots on the USB pool (keeping a year's worth: daily snapshots for 14 days, then weekly snapshots for a year):

znapzendzetup create --recursive SRC '14days=>1days,365days=>1weeks' External

The automatic snapshots on the ZFS side make sure that I keep copies of files that get deleted between rsync runs (files that are created and deleted within the same day will still be lost if accidentally deleted in unRAID).

 

Replication

ZnapZend supports automatic replication, and I send my daily snapshots to the USB pool with these commands.

I have not run into space issues... yet. But these commands mean snapshots are retained on the USB pool for up to 10 years (let's see when I need to reconsider):

znapzendzetup create --send-delay=21600 --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD/Vms DST:a '90days=>1days,1years=>1weeks,10years=>1months'  External/Backups/Vms

znapzendzetup create --send-delay=21600 --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD/Docker DST:a '90days=>1days,1years=>1weeks,10years=>1months'  External/Backups/Docker
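
A simple way to confirm that the replication is actually landing on the backup pool is to list the snapshots on the destination side:

zfs list -t snapshot -r External/Backups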

 

Scrub 

Scrubs are used to maintain the pool, kind of like parity checks, and I run them from a cron job:

#ZFS Scrub
30 6 * * 0 zpool scrub SSD >> /dev/null
4 2 4 * * zpool scrub External >> /dev/null
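
To see when the last scrub ran and whether it found any errors, zpool status has you covered:

# shows the last scrub date, duration and any repaired or unrepairable errors
zpool status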

 

New ZFS versions:

The plugin checks on each reboot if there is a newer ZFS version available, downloads it and installs it (with default settings the update check is active).

 

If you want to disable this feature simply run this command from an unRAID terminal:

sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

If you have disabled this feature already and you want to enable it again, run this command from an unRAID terminal:

sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature needs an active internet connection on boot.

If you run for example AdGuard/PiHole/pfSense/... on unRAID it is very likely that you have no active internet connection on boot, so the update check will fail and the plugin will fall back to installing the locally available ZFS package.

 

New unRAID versions:

Please also keep in mind that for every new unRAID version ZFS has to be compiled.

I would recommend waiting at least two hours after a new unRAID version is released before upgrading (Tools -> Update OS -> Update) because of the compile/upload process involved.

 

Currently the process is fully automated for all plugins that need packages for each individual kernel version.

 

The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version. This is most likely to happen when the compilation isn't finished yet or some error occurred during compilation.

If you get an error from the Plugin Update Helper I would recommend creating a post here and not rebooting yet.


 

 Unstable builds 

With the ZFS 2.0.0 RC series I enabled unstable builds for those who want to try them out:

*ZFS 2.0.0 is out, so there is no need to use these builds anymore.

 

If you want to enable unstable builds simply run this command from an unRAID terminal:

sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

If you have enabled this feature already and you want to disable it, run this command from an unRAID terminal:

sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature also needs an active internet connection on boot, like the update check (if no unstable package is found, the plugin will automatically set this back to false so that it stops pulling unstable packages - unstable packages are generally not recommended).

 

Extra reading material

This hopefully got you started, but this example was based on my setup and ZFS has so much more to offer. Here are some links I wanted to share:

 


That is pretty cool...

 

Just tried loading the package, and the zpool command is giving me:

 

The ZFS modules are not loaded.

Try running '/sbin/modprobe zfs' as root to load them.

 

If I try to load the module, it states that it isn't found.  Is there anything else that needs to be done outside of installing the plg?


couple of questions:

 

I am new to ZFS so please be kind.....

 

I am currently storing my docker/vm data (large Plex library) on 2 SSDs mounted outside the array. I would like to use ZFS and make one mount point using the 2 SSDs.

 

I understand that to create the pool I can use the following:

 

zpool create -m /mnt/disk/appdata appdata /dev/sdb /dev/sdc

 

The 2 SSDs are always sdb and sdc because they are on the motherboard SATA ports and not the three 8-port SAS cards.

 

What I am not sure about is what to do when I want to shut down/reboot the server: how to first unmount the pool and then reactivate it on boot up.

 

Thanks for any info

 

Myk

 


ok, figured out the zfs mount -a and zfs umount

 

so I assume I can put the zfs mount -a in the go file

 

but how would I do the zfs unmount during shutdown/reboot?

 

Thanks

Myk

It is not that obvious, but you can also have a 'stop' file that is the equivalent of the 'go' file and is called on shutdown.  I am surprised that LimeTech do not include a dummy 'stop' file that does nothing as part of the standard release.

 

I mount external disks that are permanently in the unRAID server in the 'go' file and 'umount' them in the 'stop' file.  Do not forget that since the /dev/sd? names can change across boots (although it is unusual) it is better to use the /dev/disk/by-id names to identify such disks, as this removes that as an issue.  I also like adding appropriate calls to syslog in the 'go' and 'stop' files when I do this so that I have entries in the syslog to remind me that it is happening.
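
As a concrete sketch of that approach applied to a ZFS pool - the pool name, device path and log messages are just examples, and the 'stop' file is assumed to live at /boot/config/stop next to the 'go' file:

# in /boot/config/go
logger "go: importing external ZFS pool"
zpool import -d /dev/disk/by-id External

# in /boot/config/stop
logger "stop: exporting external ZFS pool"
zpool export External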



 

The plugin automatically mounts all pools (zpool import -a), and mount points are saved in the pool itself, so there is no need for fstab entries or manual mounting through the go file.

 

ZFS should export the pool on shutdown, but if you want you could try adding this to the stop file:

zpool export <poolname> 

 

Just remember that everyone is new to everything at first :)

 

I started with ZFS on FreeBSD and this page is a goldmine: https://www.freebsd.org/doc/handbook/zfs.html - you just have to look past the FreeBSD parts, but all the ZFS commands and concepts are the same.

 

The Arch wiki is always great: https://wiki.archlinux.org/index.php/ZFS

 

Then there are some good YouTube videos that give you a crash course on ZFS, and these are good if I remember correctly:

 

https://www.youtube.com/watch?v=R9EMgi_XOoo
https://www.youtube.com/watch?v=tPsV_8k-aVU
https://www.youtube.com/watch?v=jDLJJ2-ZTq8

 


Learned a lot already - but more to go.

 

I love this ZFS - it was exactly what I was looking for to pool my 2 SSDs for docker/vm storage.

 

Couple of bumps and had to rethink and move things around...

 

but I now have a 2 SSD zfspool, and my docker/vm stuff mounted and shared in zfspool/appdisk as /mnt/appdisk

 

and I LOVE IT!

 

Thank you so much for this plugin!

 

Myk

 



 

Good to hear man

 

If you want to learn some ZFS (and become a fanboy like me :) I recommend listening to TechSNAP http://www.jupiterbroadcasting.com/show/techsnap/ - especially the feedback section, where there is always some ZFS gold.

 

But I forgot to mention that you can easily add swap with ZFS under unRAID (code from the Arch wiki):


#first create an 8GB zvol where <pool> is the name of your pool:
zfs create -V 8G -b $(getconf PAGESIZE) \
              -o primarycache=metadata \
              -o com.sun:auto-snapshot=false <pool>/swap

#then format and enable the zvol as swap
mkswap -f /dev/zvol/<pool>/swap
swapon /dev/zvol/<pool>/swap

#to make it persistent you need to add this to your go file:
swapon /dev/zvol/<pool>/swap
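
You can confirm the swap is active afterwards with:

# the zvol should show up here with its size and current usage
cat /proc/swaps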

 


Probably a stupid question but I'll ask anyway...

I currently house my VMs (windows\openelec) on an SSD outside of my array. At the moment it's formatted as ext4. Would I see much difference in performance using ZFS?

 

Also, you mentioned the lack of ZFS out of the box was a "dealbreaker". Could you expand on that a little? What benefits does ZFS bring to the table that made it worth the effort to build the plugin?



 

No, you would probably not see any difference in performance on a single SSD, but I use ZFS on my single-SSD laptop for snapshots, checksums and easy backups with zfs send/receive.

 

It was a dealbreaker for me since I want my VMs to stay on a multi-SSD array with redundancy and data integrity. ZFS is what I know and what I trust, so that was my first choice. I have gotten used to automatic snapshots, clones, compression and other goodies that ZFS has, so a hardware RAID was not an option.

 

I am aware that btrfs has all those options and is built into unRAID, so I decided to give that a try. Learning the btrfs way of doing these things was fun, but after a couple of days the performance got horrible. My server got a really high (50+) load while writing big files, and all the pain with rebalancing, scrubbing and the "art" of knowing how much free space you have made me rethink things. I wiped everything and started all over with btrfs, but after maybe a week the performance and my mood had gone down again.

 

I realize that it was probably something I did that caused this bad performance, and with enough debugging and massaging the setup I could get it where I wanted... but knowing ZFS had been rock solid for years and really easy for me to administer, I came to the conclusion that building a plugin would be less work than making my btrfs setup work.

 

No hate against btrfs, but ZFS suited me better and I decided to post this plugin in case it would be helpful to others.

