ZFS plugin for unRAID


Recommended Posts

Hi, thanks for updating, unfortunately I have to roll-back again due to the following;



root@Areopagus:~# zpool import
   pool: ZFS
     id: 3884350302998828620
  state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
        cannot be accessed in read-write mode because it uses the following
        feature(s) not supported on this system:
        com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
action: The pool cannot be imported in read-write mode. Import the pool with
        "-o readonly=on", access the pool on a system that supports the
        required feature(s), or recreate the pool from backup.
config:

        ZFS         UNAVAIL  unsupported feature(s)
          sdg       ONLINE
          sde       ONLINE
          sdf       ONLINE
          sdd       ONLINE
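As the status output says, the pool can still be brought up read-only on the rolled-back build until a ZFS version with log_spacemap support is installed (a sketch, assuming the pool name ZFS from above):

```shell
# Bring the pool up read-only so the data stays reachable (no writes possible)
zpool import -o readonly=on ZFS
# After upgrading to a ZFS build that supports com.delphix:log_spacemap,
# export and re-import read-write:
#   zpool export ZFS
#   zpool import ZFS
```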





Link to comment

You have been using the 2.0.0 RC version of ZFS and need to enable unstable builds.

From the original post

“Now with the ZFS 2.0.0 RC series I have enabled unstable builds for those who want to try them out:

#Enable unstable builds
touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
rm /boot/config/plugins/unRAID6-ZFS/packages/*
#Then reboot

Sent from my iPhone using Tapatalk

  • Thanks 1
Link to comment
On 11/14/2020 at 7:48 PM, steini84 said:

Built zfs-0.8.5 for unRAID-6.9.0-beta35

I also built zfs-2.0.0-rc6 for unRAID-6.8.3 & unRAID-6.9.0-beta35

If you are on beta35, you should update the plugin and restart, to be on the safe side

Hi @steini84,

at the moment I'm using zfs-2.0.0-rc5 for 6.9.0-beta30 and want to update to zfs-2.0.0-rc6 & unRAID-6.9.0-beta35.
Just to be sure...  maybe you have some additional hints...

  1. first update to beta35 -> with or without reboot? After a reboot, the ZFS pool won't be available, right? So I guess no reboot.
  2. update the plugin (do I need to touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS again?)
  3. reboot
  4. keep fingers crossed during reboot...
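If it helps, steps 2 and 3 presumably map onto the commands from the original post (a sketch; the beta35 upgrade itself happens in the unRAID GUI):

```shell
# Re-create the unstable-builds flag (harmless if it already exists)
touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
# Clear cached packages so the build matching the new kernel is downloaded
rm /boot/config/plugins/unRAID6-ZFS/packages/*
# Then reboot
```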

best regards

Link to comment
  • 2 weeks later...
16 minutes ago, BasWeg said:

Is it possible to get rid of / waive this error/warning?


I know that I'm using an unstable version, and every time after a reboot this shows up and shocks me. 🙃


Yeah, you can just remove lines 444-446 in the plugin. For example, using nano:

nano -c /boot/config/plugins/unRAID6-ZFS.plg
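A non-interactive alternative, if you prefer sed over nano (a sketch; verify the line numbers first, since the .plg file can change between plugin releases):

```shell
# Inspect lines 444-446 before touching anything
sed -n '444,446p' /boot/config/plugins/unRAID6-ZFS.plg
# Delete them in place (GNU sed)
sed -i '444,446d' /boot/config/plugins/unRAID6-ZFS.plg
```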



  • Thanks 1
Link to comment
On 7/13/2020 at 10:10 AM, testdasi said:

Figured it out. No need to mount through /etc/fstab.


What's missing are entries in /etc/mtab, which are created if mounted from fstab.

So a few echoes into /etc/mtab are the solution. Just need to do this at boot.

Each filesystem that is accessible via SMB (even through symlinks) needs a line in mtab to stop the spurious warning spam.

echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab


Thanks for the hint!
I've made a user script with the following content, to run after the first array start:

#from testdasi  (https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=875342)
#echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab

#just dump everything in
for n in $(zfs list -H -o name); do
  echo "$n /mnt/$n zfs rw,default 0 0" >> /etc/mtab
done
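To sanity-check the line format before appending to /etc/mtab, you can print what the loop would write for a single hypothetical dataset name:

```shell
# Print one mtab-style line for a sample dataset (nothing is written anywhere)
n="nas/media"
echo "$n /mnt/$n zfs rw,default 0 0"
# -> nas/media /mnt/nas/media zfs rw,default 0 0
```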


Edited by BasWeg
failure in user script (removed additional zfs)
Link to comment

Built OpenZFS 2.0.0 for 6.8.3 & 6.9.0-beta35

Here you can see the changelog, congrats to the team!



Since the plugin caches the install files you need to manually remove them and reboot if you want to upgrade 

rm /boot/config/plugins/unRAID6-ZFS/packages/*

I would implement a more elegant solution, but since native ZFS on unRAID is on the horizon I don't see the need. Also, if you are afraid of the CLI, this plugin is not great for you anyway :D

  • Thanks 1
Link to comment
  • 2 weeks later...
On 11/27/2020 at 2:37 PM, steini84 said:

Built zfs-2.0.0-rc7 for unRAID-6.8.3 & 6.9.0-beta35


Great to see that unRAID is finally adding native ZFS, so this might be one of the last builds from me :)


And yes, I'm already using the new way of side-loading kernel modules in this plugin




Does this mean we can upgrade to 6.9 rc1 directly?

Link to comment

It should be as easy as removing the ZFS plugin, upgrading to beta35, and doing the side-load, right?

Does anyone know if they are also working on GUI/array support with ZFS?

Would love to contribute to the development but not a clue where. Unraid with zfs array support would solve all my needs!


This is the first frontend I've seen next to FreeNAS for ZFS: https://github.com/optimans/cockpit-zfs-manager

I use it for my Proxmox servers, though up until this point I only managed ZFS via the CLI, which is still fine.

But I love the snapshot/pool creation from the UI!

Link to comment
So the process for those who have 6.8.3 and some version of your plugin is: update the ZFS plugin, then update unRAID to 6.9-rc1, then reboot?

My unRAID only finds an update to the ZFS plugin from November, not your December one.


Yes. But to be clear, I don't update the plug-in every time I make a new build, only when I need to change the way we load the ZFS modules. For example, recently with the addition of OverlayFS.


There has not been a change to the plug-in since November, only new builds:




Sent from my iPhone using Tapatalk

Link to comment
@Steini84 Can't we get you to do a webinar on your ZFS setup on your unRAID server?
So to be clear,
1. Update plugin
2. Upgrade unraid to rc1
That will keep everything as is, in theory?
Thanks for your work on this plugin, much appreciated.

Yeah sure I’m up for that

But yeah, that process will get you on ZFS 2.0 if you have not already updated to that on 6.8.3

Sent from my iPhone using Tapatalk
Link to comment

Just wanted to provide a solution for an issue I had. 

Scenario: I wanted to keep the standard shares and mount ZFS under the existing shares rather than using links.


The issue I had was that ZFS would mount before the other shares, so when starting the array I could only see the ZFS folders. I tried updating /etc/default/zfs not to automount, but that did not work.


Solution: User Scripts to unmount when stopping the array and mount when starting it. This has worked for a number of reboots so far O.o. If it fails, I will provide an update.


stopping array script (nas and fastdisk are the ZFS file systems)

zfs umount nas
zfs umount fastdisk
zfs set mountpoint=none nas
zfs set mountpoint=none fastdisk



starting array script

zfs set mountpoint=/mnt/user/nas nas
zfs set mountpoint=/mnt/user/fastdisk fastdisk
zfs mount -a


Hope this helps anyone wanting to mount ZFS file systems under the standard shares

Edited by Joe Cat
  • Like 1
Link to comment
