
ZFS plugin for unRAID


Once installed, you will need to restart Unraid. I think it will automatically attempt to mount any ZFS pools it finds on startup.
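A quick way to confirm after the reboot:

zpool list    # pools the plugin imported at boot
zfs mount     # datasets that actually got mounted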


I'm sure you guys must be tired of these posts, but I'm still unsure about how to migrate from 6.11.5 with this plugin to 6.12.

Here's everything about my setup related to ZFS that I think might be affected in some way:

  • The classic setup of an unused USB stick for the array + Unassigned Devices for drives that are part of my ZFS pools.
  • 2 ZFS pools (with 6.12-compatible names thankfully).
  • Autotrim is enabled on 1 pool.
  • Compression is set to lz4 on 1 pool and to zstd-2 on the other.
  • Scripts to scrub and check pool health scheduled through the User Scripts plugin.
  • Docker image is inside a zvol on one of the pools (as a workaround for the major docker+ZFS issue caused by 6.11 or whatever it was).
  • Shares on the pools are set up through smb-extras.conf.
  • My go file has some ZFS-related lines (sketched in fuller context just after this list):
    • mount /dev/ssd/dockerzvol-part1 /mnt/ssd/dockerzvol
    • echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
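
For reference, here is roughly how those lines might sit in the go file; the readiness loop and mkdir are illustrative additions (the zvol device node may not exist the instant go runs), not lines from my actual file:

# Sketch only: the mount and echo lines are from my setup; the wait loop is hypothetical.
for i in $(seq 1 30); do
    [ -b /dev/ssd/dockerzvol-part1 ] && break
    sleep 1    # give ZFS a moment to create the zvol partition node
done
mkdir -p /mnt/ssd/dockerzvol
mount /dev/ssd/dockerzvol-part1 /mnt/ssd/dockerzvol
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync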

This has always worked perfectly (except for that time docker broke), and I'd like to keep things running the way I'm used to.

Looking at the 6.12 release notes I see the instructions on how to import pools created with this plugin, but it's still unclear to me what exactly the result of importing the pools into Unraid will be. I don't want them to become part of the array in any way, and I don't care about a GUI. I just want them mounted and accessible the way they are now.

It says "Compression can be configured as on or off, where on selects lz4. Future update will permit specifying other algorithms/levels." Will this cause any issues for my zstd-2 pool / datasets?

There is a lot about changes to shares and mentions of ZFS, but can I ignore all this and just keep using my smb-extras.conf?

Is there anything else I should be mindful of?

 

Thanks in advance.

I'm not sure what the go file changes are, but other than that I don't see any problems. You say you don't want it to be part of the array, but it will be part of its own separate array, accessible under the standard Unraid array method. This does change some things: a bunch of ZFS-related tasks, like hot-swapping disks, no longer work without stopping the whole Unraid system, because you have to stop the array(s) to change a disk in the GUI. A bit of a problem, and one that is making me rethink my whole relationship with Unraid at present.

 

But answering you directly: I don't see any problems. It's very similar to my setup; I have a few extra things in mine, but it just pulls them in directly.

 

Shares I just keep the same.  Autotrim and feature flags like that are irrelevant and will just continue on.  Compression is irrelevant to the import process also.

User Scripts sounds fine too. I use znapzend, which still doesn't work for me despite multiple attempts. I assume scripts will be better.
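
For what it's worth, a scrub job in User Scripts doesn't need to be much more than this ("tank" is a placeholder pool name):

#!/bin/bash
# Minimal sketch: kick off a scrub and log the overall health summary.
zpool scrub tank
zpool status -x | logger -t zfs-health   # "all pools are healthy", or the details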

15 hours ago, Marshalleq said:

User Scripts sounds fine too. I use znapzend, which still doesn't work for me despite multiple attempts. I assume scripts will be better.

 

The Community Applications ZnapZend plugin is currently broken: it executes so early in the boot process that no pool has been imported and mounted yet, so it finds no valid pools and exits. One way to run the latest version is the Docker image; here is my current docker-compose spec:

 

version: '3'
services:
  znapzend:
    image: oetiker/znapzend:master
    container_name: znapzend
    hostname: znapzend-main
    privileged: true
    devices:
      - /dev/zfs                        # expose the host's ZFS device node to the container
    command: ["znapzend --logto /mylogs/znapzend.log"]
    restart: unless-stopped
    volumes:
      - /var/log/:/mylogs/              # znapzend.log lands in the host's /var/log
      - /etc/localtime:/etc/localtime:ro
    networks:
      - znapzend

networks:
  znapzend:
    name: znapzend
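
Bringing it up is the usual routine:

docker compose up -d       # or docker-compose up -d on older setups
docker logs -f znapzend    # follow the replication log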

 

The only downside is that if you are replicating data to another machine, you have to get into the container and the destination machine to set up the SSH keys, or mount a volume with the keys and known_hosts into the container.
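
Two hypothetical ways to handle that (host paths and destination name are illustrative, not from my spec):

# Option A: set up the key from inside the running container
docker exec -it znapzend ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519
docker exec -it znapzend ssh-copy-id root@backup-host   # assuming the image ships ssh-copy-id

# Option B: prepare the key and known_hosts on the host and add one more
# volume to the spec instead:
#   - /root/.ssh:/root/.ssh:ro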

 

Best

23 hours ago, Marshalleq said:

But answering you directly: I don't see any problems. It's very similar to my setup; I have a few extra things in mine, but it just pulls them in directly.

Thanks for the detailed answer, that really put my mind at ease.

Since 6.12.6 includes a version of ZFS that fixes the scary silent corruption bug, I guess I will try to update soon.

On 12/1/2023 at 2:44 PM, Iker said:

 

The Community Applications ZnapZend plugin is currently broken: it executes so early in the boot process that no pool has been imported and mounted yet, so it finds no valid pools and exits.

 

Since I also use the znapzend plugin, does your solution just work with the old stored znapzend configuration?

18 hours ago, BasWeg said:

Since I also use the znapzend plugin, does your solution just work with the old stored znapzend configuration?

 

Yes, one of the many benefits of ZnapZend is that the configuration is stored in dataset custom properties, so you lose nothing by migrating from the plugin to the Docker version.
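
If you want to see it for yourself ("tank/data" is a placeholder dataset):

zfs get -s local all tank/data | grep org.znapzend   # raw user properties holding the plan
znapzendzetup list                                   # znapzend's own view of the same config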


  • 2 weeks later...
On 11/30/2023 at 10:49 PM, Marshalleq said:

Shares I just keep the same.  Autotrim and feature flags like that are irrelevant and will just continue on.  Compression is irrelevant to the import process also.

 

Sadly it turns out Unraid overrides the root compression setting on the pools when the array starts, and it defaulted to off for both of mine. Of course I can enable compression in the GUI after stopping the pool again, but I had one set to zstd-2, which isn't even an option in Unraid. (Why???)

Dataset-local compression settings are kept as-is, thankfully, but for almost all my datasets I relied on inheriting it from the pool.

Easy enough to set it for each dataset; it's just a bit annoying that Unraid needlessly messes with this.
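
For anyone else in the same spot, pushing the old setting back down is a quick loop; "tank" and zstd-2 stand in for your own pool name and level:

# Set compression locally on each dataset so it no longer depends on
# inheriting from the pool root (which Unraid resets on array start):
for ds in $(zfs list -H -o name -r tank); do
    zfs set compression=zstd-2 "$ds"
done
zfs get -r compression tank   # verify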

 

Thankfully the only real issue I encountered is with my docker image. Being contained in a zvol on a pool, it has to be mounted after the pool is started, which worked fine, but then Unraid cannot stop the array until I manually unmount that zvol.

I'll switch to directories as recommended in https://docs.unraid.net/unraid-os/release-notes/6.12.0/#docker and just live with the horrible dataset pollution it causes.

 

Seeing all the root directories on my pools listed as user shares (with primary storage: array) makes me a bit nervous, but I suppose that won't affect anything especially if I never access anything through /mnt/user?


Wow, good to know! Will have to check mine.


13 hours ago, Visua said:

 

Sadly it turns out Unraid overrides the root compression setting on the pools when the array starts, and it defaulted to off for both of mine; one was set to zstd-2, which isn't even an option in Unraid.

 

This sounds like a bug imo... Someone not noticing for an extended period would need to copy everything out and back in to reap the compression benefits, and that can be a huuuuge PITA...

  • 4 months later...

Hi, I recently updated to 6.12 and my zpools aren't working.

I followed the 'procedure', which involves simply creating a pool, adding the devices, and clicking start, but it's not working.

I found out it is because my drives have 2 partitions.

Now, I didn't realise I was using 2 partitions; I don't even know how I did that. I just created the "data" pool and added the drives. So is it safe for me to delete the smaller partition? Would it work, or is there some sort of zpool/vdev format that requires both?

 

I've attached an image of what my drives look like unassigned, in a native pool, and what one of my drive partitions looks like.

 

Now, can I delete sdc9, or is it integral to my sdz1 ZFS partition?

 

Screenshot 2024-05-10 213332.png

Screenshot 2024-05-10 213435.png

Screenshot 2024-05-10 221203.png


10 hours ago, Sally-san said:

found out it is because my drives have 2 partitions

That should not be a problem; post the diagnostics after array start and the output of 'zpool import'.

 

 

I recently went through the upgrade to 6.12 with an existing pool. My disks looked like this:

 

fdisk --list /dev/sdaf
Disk /dev/sdaf: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000NM0034
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C92F7A6B-2F0B-EF41-9080-8B71DE0E618A

Device           Start         End     Sectors  Size Type
/dev/sdaf1        2048 11721027583 11721025536  5.5T Solaris /usr & Apple ZFS
/dev/sdaf9 11721027584 11721043967       16384    8M Solaris reserved 1


I also had an issue with "unmountable: unsupported or no file system".

 

The trick for me was to stop the array and export the pool(s) by running:

zpool export -a

 

Starting the array then mounted the ZFS pool correctly. Edit: make sure the disks are in the same order in your pool as "zpool status" shows. Hope this helps!
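
In short, with the array stopped (pool name is a placeholder):

zpool export -a                  # release the pool so Unraid can import it itself
# start the array from the GUI, then confirm:
zpool status tank
zpool list -o name,size,health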

 

 


  • 6 months later...

So, I just upgraded from 6.11.5 to 6.12.10 (very late, I know) and discovered that my pool and shares are gone.  I need to recreate them.  I see the drives belonging to the original plugin pool are listed under Unassigned Devices, and their type is zfs.

 

There should be some ZFS features in the web GUI of 6.12.x, right? I'm not finding any. Shelled into the server, I see the `zpool` and `zfs` commands are available. The presence of ZFS in the web GUI is mentioned in the 6.12.0 release notes, the related blog post, and the (very sparse) docs.

 

root@UNRAID:~# zpool version
zfs-2.1.14-1
zfs-kmod-2.1.14-1

root@UNRAID:~# zfs version
zfs-2.1.14-1
zfs-kmod-2.1.14-1

 

I also see that the old ZFS plugin is now under the "Plugin File Install Errors" sub-tab of the Plugins tab. I removed it and rebooted, but still no ZFS in the web GUI. The files are still present under /boot/config/plugins-removed/

 

root@UNRAID:~# ls -l /boot/config/plugins-removed/
total 72
-rw------- 1 root root  1606 May 12  2022 ZFS-companion.plg
-rw------- 1 root root 19462 May 30  2021 nvidia-driver.plg
-rw------- 1 root root 25544 May 12  2022 unRAID6-ZFS.plg
-rw------- 1 root root  2560 May 13  2022 zfs.master.plg

 

`dmesg` shows one message about the ZFS kernel module, which appears to have loaded normally-

 

[   38.861784] ZFS: Loaded module v2.1.14-1, ZFS pool version 5000, ZFS filesystem version 5

 

syslog shows a few lines-

Dec  3 18:19:48 UNRAID kernel: ZFS: Loaded module v2.1.14-1, ZFS pool version 5000, ZFS filesystem version 5
...
Dec  3 18:20:15 UNRAID emhttpd: shcmd (4): modprobe zfs
Dec  3 18:20:21 UNRAID emhttpd: shcmd (33): /usr/sbin/zfs mount -a

 

System resources all good-

 

root@UNRAID:~# uname -a
Linux UNRAID 6.1.79-Unraid #1 SMP PREEMPT_DYNAMIC Fri Mar 29 13:34:03 PDT 2024 x86_64 AMD Ryzen 9 3950X 16-Core Processor AuthenticAMD GNU/Linux

root@UNRAID:~# cat /proc/meminfo | head -3
MemTotal:       131826180 kB
MemFree:        83573768 kB
MemAvailable:   94084944 kB

root@UNRAID:~# df
Filesystem      1K-blocks       Used  Available Use% Mounted on
rootfs           65895996    1480740   64415256   3% /
tmpfs              131072       5632     125440   5% /run
/dev/sda1        15309840    1093112   14216728   8% /boot
overlay          65895996    1480740   64415256   3% /lib
overlay          65895996    1480740   64415256   3% /usr
devtmpfs             8192          0       8192   0% /dev
tmpfs            65913088          0   65913088   0% /dev/shm
tmpfs              131072        740     130332   1% /var/log
tmpfs                1024          0       1024   0% /mnt/disks
tmpfs                1024          0       1024   0% /mnt/remotes
tmpfs                1024          0       1024   0% /mnt/addons
tmpfs                1024          0       1024   0% /mnt/rootshare
/dev/md1p1     1999421892 1204474196  794947696  61% /mnt/disk1
/dev/md2p1      976284628  495825080  480459548  51% /mnt/disk2
shfs           2975706520 1700299276 1275407244  58% /mnt/user0
shfs           2975706520 1700299276 1275407244  58% /mnt/user
/dev/loop2       20971520   14479928    6099592  71% /var/lib/docker
/dev/loop3        1048576       5512     924824   1% /etc/libvirt
tmpfs            13182616          0   13182616   0% /run/user/0

 

Not sure what could have caused ZFS to get dropped from the GUI.  Has anyone else experienced this?

 

Any ideas?  Am I safe to recreate the pool from CLI?  Should I worry about the apparently missing GUI elements?

 

-tourist
