steini84

ZFS plugin for unRAID

Hey guys

 

I compiled the ZFS package and made a plugin for unRAID, but since ZFS uses a kernel module, it needs to be rebuilt for every new release that changes the kernel.

https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg
 

To install, copy this URL into the Install Plugin page in your unRAID 6 web GUI, or install it through Community Applications.

 

Since ZFS tries to use most of the available memory for its cache, it's a good rule of thumb to limit the size of the ARC.

 

I limit the ARC to 8 GB with these two lines in my go file:

#Adjusting ARC memory usage (limit 8GB)
echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max
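
To verify the limit took effect, you can read the ARC stats once the module is loaded (c_max should report 8589934592):

awk '/^c_max/ {print $3}' /proc/spl/kstat/zfs/arcstats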
 

 

I also like to enable compression. "This may sound counterintuitive, but turning on ZFS compression not only saves space, but also improves performance. This is because the time it takes to compress and decompress the data is quicker than the time it takes to read and write the uncompressed data to disk (at least on newer laptops with multi-core chips)." -Oracle

 

To enable compression you run this command (it only applies to blocks written after compression is enabled):

 

zfs set compression=lz4 <poolname>
 

 

And lastly, I like to disable access time updates:

 

zfs set atime=off <poolname>
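
To confirm both settings (and see how well your data actually compresses), you can query the properties:

zfs get compression,compressratio,atime <poolname>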
 

 

This plugin does not allow you to use ZFS as a part of the array or a cache pool (which would be awesome by the way). I use it to host my virtual machines and Docker files. I have been running ZFS on a 3-drive SSD RAIDZ1 with LZ4 compression and the performance is great. Using ZFS send and receive I transfer my incremental daily snapshots over to a 2 TB backup drive... you know... just in case.
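
The send/receive workflow looks roughly like this (pool and snapshot names are just placeholders):

# First transfer: send a full snapshot to the backup pool
zfs snapshot SSD@monday
zfs send SSD@monday | zfs receive backup/SSD

# After that, only the difference between two snapshots is sent
zfs snapshot SSD@tuesday
zfs send -i SSD@monday SSD@tuesday | zfs receive backup/SSD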

 

If you want to use ZFS for your Docker containers/VMs you have to set the mount point somewhere under /mnt/.

https://docs.oracle.com/cd/E19253-01/819-5461/gaztn/index.html

 

For example, I have mine under /mnt/SSD/Docker and /mnt/SSD/Vms.
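
If a pool was created with a different mount point, you can move it (child file systems inherit the new path); for example, for a pool named SSD:

zfs set mountpoint=/mnt/SSD SSD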

 

If you want to set up automatic snapshots I recommend this package: https://github.com/zfsonlinux/zfs-auto-snapshot You have to install it manually through the go file since we don't have persistent storage on unRAID.

I really like to have automatic snapshots for every VM/Docker container so that I can roll back a single entity if needed. That is done by creating a new file system before the VM/container is created, for example SSD/Docker/HomeAssistant or SSD/Vms/Arch, as sketched below.
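
A rough sketch of that layout (the snapshot name just follows the zfs-auto-snapshot naming scheme and is made up):

# One file system per VM/container instead of a plain directory
zfs create SSD/Docker/HomeAssistant
zfs create SSD/Vms/Arch

# Roll back a single VM to its latest snapshot (adding -r reaches
# further back, but destroys the newer snapshots)
zfs rollback SSD/Vms/Arch@zfs-auto-snap_daily-2015-10-05-0432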

 

 

Disclaimer: This is not supported at all by Limetech. I won't take any responsibility if this package trashes your data... It should be fine as it's just the official ZFS on Linux packages built on unRAID (thanks to gfjardim for making that awesome script to set up the build environment). The plugin installs the packages, loads the kernel module and imports pools.
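
On every boot that amounts to roughly the following (the package path is illustrative, not the plugin's actual layout):

installpkg /boot/config/plugins/unRAID6-ZFS/*.txz   # install the prebuilt packages
depmod                                              # refresh module dependency lists
modprobe zfs                                        # load the kernel module
zpool import -a                                     # import all pools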

 

PS - I know Btrfs is getting better every day and is probably a good solution, but after a couple of weeks of really testing it out I decided to go back to ZFS. This is just my personal preference, so please don't make this a Btrfs vs ZFS flame war.


PPS - still running in 2019 without a single hiccup on the original SSD drives.


Interesting...  Will probably play around with this at some point when I have time.


That is pretty cool...

 

Just tried loading the package, and the zpool command is giving me:

 

The ZFS modules are not loaded.

Try running '/sbin/modprobe zfs' as root to load them.

 

If I try to load the module, it states that it isn't found.  Is there anything else that needs to be done outside of installing the plg?


root@Tower:~# depmod

root@Tower:~# modprobe zfs

modprobe: FATAL: Module zfs not found.                                                                                                                                       

 

Myk

 


Yeah I can see that 6.1.3 is already out with kernel 4.1.7

 

I will build a new version and post later tonight.

 

Then I will have to figure out how the plugin can install different packages based on the version of unRAID


I had just updated unRAID before I saw your post.


A couple of questions:

 

I am new to ZFS so please be kind...

 

I am currently storing my Docker/VM data (large Plex library) on 2 SSDs mounted outside the array. I would like to use ZFS and make one mount point using the 2 SSDs.

 

I understand that to create the pool I can use the following:

 

zpool create -m /mnt/disk/appdata appdata /dev/sdb /dev/sdc

 

The 2 SSDs are always sdb and sdc because they are on the motherboard SATA ports and not the three 8-port SAS cards.

 

What I am not sure about is what to do when I want to shut down/reboot the server: how to first unmount the pool and then reactivate it upon boot-up.

 

Thanks for any info

 

Myk

 


OK, figured out zfs mount -a and zfs umount.

 

So I assume I can put zfs mount -a in the go file,

 

but how would I do the zfs unmount during shutdown/reboot?

 

Thanks

Myk

 



It is not that obvious, but you can also have a 'stop' file that is the equivalent of the 'go' file and is called on shutdown. I am surprised that LimeTech does not include a dummy 'stop' file that does nothing as part of the standard release.

 

I mount external disks that are permanently in the unRAID server in the 'go' file and 'umount' them in the 'stop' file. Do not forget that since the /dev/sd? names can change across boots (although it is unusual), it is better to use the /dev/disk/by-id names to identify such disks, as this removes that as an issue. I also like adding appropriate calls to syslog in the 'go' and 'stop' files when I do this, so that I have entries in the syslog to remind me that it is happening.
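
A minimal sketch of what that can look like (the device ID and mount point are made up):

# in /boot/config/go
logger "go: mounting external disk"
mount /dev/disk/by-id/ata-ExampleDisk123-part1 /mnt/external

# in /boot/config/stop
logger "stop: unmounting external disk"
umount /mnt/external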


The plugin automatically imports all pools (zpool import -a), and since mount points are saved in the pool itself, there is no need for fstab entries or manual mounting through the go file.

 

ZFS should export the pool on shutdown, but if you want you could try adding this to the stop file:

zpool export <poolname> 

 

Just remember that everyone is new to everything at first :)

 

I started with ZFS on FreeBSD and this page is a goldmine: https://www.freebsd.org/doc/handbook/zfs.html - you just have to look past the FreeBSD parts; all the ZFS commands and concepts are the same.

 

The Arch wiki is always great: https://wiki.archlinux.org/index.php/ZFS

 

Then there are some YouTube videos that give you a crash course on ZFS; these are good if I remember correctly:

 

https://www.youtube.com/watch?v=R9EMgi_XOoo
https://www.youtube.com/watch?v=tPsV_8k-aVU
https://www.youtube.com/watch?v=jDLJJ2-ZTq8

 


Learned a lot already - but more to go.

 

I love this ZFS - it was exactly what I was looking for to pool my 2 SSDs for Docker/VM storage.

 

A couple of bumps, and I had to rethink and move things around...

 

but I now have a 2-SSD zfspool, with my Docker/VM stuff mounted and shared in zfspool/appdisk as /mnt/appdisk

 

and I LOVE IT!

 

Thank you so much for this plugin!

 

Myk

 


Good to hear man

 

If you want to learn some ZFS (and become a fanboy like me :) I recommend listening to TechSNAP: http://www.jupiterbroadcasting.com/show/techsnap/ - especially the feedback section, where there is always some ZFS gold.

 

But I forgot to mention that you can easily add swap on ZFS under unRAID (code adapted from the Arch wiki):


# First create an 8 GB zvol, where <pool> is the name of your pool:
zfs create -V 8G -b $(getconf PAGESIZE) \
              -o primarycache=metadata \
              -o com.sun:auto-snapshot=false <pool>/swap

# Then format it as swap and enable it:
mkswap -f /dev/zvol/<pool>/swap
swapon /dev/zvol/<pool>/swap

# To make it persistent, add this line to your go file:
swapon /dev/zvol/<pool>/swap
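
To confirm the swap is active you can run:

swapon -s
free -m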

 


Probably a stupid question, but I'll ask anyway...

I currently house my VMs (Windows/OpenELEC) on an SSD outside of my array. At the moment it's formatted as ext4. Would I see much difference in performance using ZFS?

 

Also, you mentioned the lack of ZFS out of the box was a "dealbreaker". Could you expand on that a little? What benefits does ZFS bring to the table that made it worth the effort to build the plugin?



No, you would probably not see any difference in performance on a single SSD, but I use ZFS on my single-SSD laptop for snapshots, checksums and easy backups with zfs send/receive.

 

It was a dealbreaker for me since I want my VMs to stay on a multi-SSD array with redundancy and data integrity. ZFS is what I know and what I trust, so that was my first choice. I have gotten used to automatic snapshots, clones, compression and the other goodies that ZFS has, so a hardware RAID was not an option.

 

I am aware that Btrfs has all those options and is built into unRAID, so I decided to give it a try. Learning the Btrfs way of doing these things was fun, but after a couple of days the performance got horrible. My server hit a really high (50+) load while writing big files, and all the pain with rebalancing, scrubbing and the "art" of knowing how much free space you have made me rethink things. I wiped everything and started over with Btrfs, but maybe a week later the performance and my mood had gone down again.

 

I realize it was probably something I did that caused the bad performance, and with enough debugging and massaging I could have gotten the setup where I wanted it... but knowing ZFS had been rock solid for years and really easy for me to administer, I came to the conclusion that building a plugin would be less work than making my Btrfs setup work.

 

No hate against Btrfs, but ZFS suited me better, and I decided to post this plugin in case it would be helpful to others.


Looks like the current build doesn't work; it needs to be recompiled for the new kernel. I tested on one server and the driver won't load.


This is a really exciting development. I will be testing it tomorrow; hopefully it will solve the disk performance issues with KVM + Btrfs.


I have chosen to use LVM on my SSD for my VMs and Docker, but I will take a look at this plugin... maybe LVM is a better choice than ZFS?

