ZFS plugin for unRAID


steini84

Recommended Posts

  • 2 weeks later...

I'm sure you guys must be tired of these posts, but I'm still unsure about how to migrate from 6.11.5 with this plugin to 6.12.

Here's everything about my setup related to ZFS that I think might be affected in some way:

  • The classic setup of an unused USB stick for the array + Unassigned Devices for drives that are part of my ZFS pools.
  • 2 ZFS pools (with 6.12-compatible names thankfully).
  • Autotrim is enabled on 1 pool.
  • Compression is set to lz4 on 1 pool and to zstd-2 on the other.
  • Scripts to scrub and check pool health scheduled through the User Scripts plugin.
  • Docker image is inside a zvol on one of the pools (as a workaround for the major docker+ZFS issue caused by 6.11 or whatever it was).
  • Shares on the pools are set up through smb-extras.conf.
  • My go file has some ZFS-related lines:
    • mount /dev/ssd/dockerzvol-part1 /mnt/ssd/dockerzvol
    • echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync

This has always worked perfectly (except for that time docker broke), and I'd like to keep things running the way I'm used to.

Looking at the 6.12 release notes I see the instructions on how to import pools created with this plugin, but it's still unclear to me what exactly the result of importing the pools into Unraid will be. I don't want them to become part of the array in any way and I don't care about a GUI. I just want them mounted and accessible the way they are now.

It says "Compression can be configured as on or off, where on selects lz4. Future update will permit specifying other algorithms/levels." Will this cause any issues for my zstd-2 pool/datasets?

There is a lot about changes to shares and mentions of ZFS, but can I ignore all this and just keep using my smb-extras.conf?

Is there anything else I should be mindful of?

 

Thanks in advance.

Link to comment

I'm not sure what the go file changes do, but other than that I don't see any problems. You say you don't want it to be part of the array, but it will become its own separate pool, accessible under the standard Unraid array method. This does change some things: a bunch of ZFS-related operations sort of don't work, like hot-swapping disks without stopping the whole Unraid system, because you have to stop the array(s) to change the disk in the GUI. A bit of a problem which is making me rethink my whole relationship with Unraid at present.

 

But answering you directly, I don't see any problems; it's very similar to my setup. I have a few extra things in mine, but it just pulls them in directly.

 

Shares I just keep the same.  Autotrim and feature flags like that are irrelevant and will just continue on.  Compression is irrelevant to the import process also.

User Scripts sounds fine too. I use znapzend, which still doesn't work for me despite multiple attempts; I assume scripts will fare better.

15 hours ago, Marshalleq said:

User Scripts sounds fine too. I use znapzend, which still doesn't work for me despite multiple attempts; I assume scripts will fare better.

 

The Community Applications ZnapZend plugin is currently broken because it executes early in the boot process, before the pool is imported and mounted, so it finds no valid pools and exits. One way to make it work with the latest version is to use the Docker version; here is my current docker-compose spec:

 

version: '3'
services:
  znapzend:
    image: oetiker/znapzend:master
    container_name: znapzend
    hostname: znapzend-main
    privileged: true
    devices:
      - /dev/zfs
    command: ["znapzend", "--logto", "/mylogs/znapzend.log"]
    restart: unless-stopped
    volumes:
      - /var/log/:/mylogs/
      - /etc/localtime:/etc/localtime:ro
    networks:
      - znapzend

networks:
  znapzend:
    name: znapzend

 

The only downside is that if you are replicating data to another machine, you have to shell into the container and set up the SSH keys with the destination machine, or mount a volume with the keys and known_hosts into the container.
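For the volume route, here is a minimal sketch; the key path on the flash drive and the destination hostname `backup-host` are examples, adjust to your setup:

```shell
# On the Unraid host: create a key pair that survives reboots
# (example path on the flash drive) and authorize it on the destination.
mkdir -p /boot/config/ssh-znapzend
ssh-keygen -t ed25519 -N "" -f /boot/config/ssh-znapzend/id_ed25519
ssh-copy-id -i /boot/config/ssh-znapzend/id_ed25519.pub root@backup-host

# Then mount that directory into the container, e.g. in the compose file:
#   volumes:
#     - /boot/config/ssh-znapzend/:/root/.ssh/
```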

 

Best

23 hours ago, Marshalleq said:

But answering you directly, I don't see any problems; it's very similar to my setup. I have a few extra things in mine, but it just pulls them in directly.

Thanks for the detailed answer, that really put my mind at ease.

Since 6.12.6 includes a version of ZFS that fixes the scary silent corruption bug I guess I will try and update soon.

On 12/1/2023 at 2:44 PM, Iker said:

 

The Community Applications ZnapZend plugin is currently broken because it executes early in the boot process, before the pool is imported and mounted, so it finds no valid pools and exits. One way to make it work with the latest version is to use the Docker version.

 

Since I also use the znapzend plugin, does your solution just work with the old stored znapzend configuration?

18 hours ago, BasWeg said:

Since I also use the znapzend plugin, does your solution just work with the old stored znapzend configuration?

 

Yes; one of the many benefits of ZnapZend is that the configuration is stored in custom dataset properties, so you lose nothing by migrating from the plugin to the Docker version.
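You can check this yourself: ZnapZend keeps its plan in ZFS user properties under the `org.znapzend` namespace, so the Docker version sees exactly what the plugin wrote (the dataset name below is just an example):

```shell
# Show the locally-set znapzend properties on a dataset:
zfs get -s local all tank/appdata | grep org.znapzend

# Or dump the backup plan in znapzend's own format:
znapzendzetup list tank/appdata
```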

Edited by Iker
  • 2 weeks later...
On 11/30/2023 at 10:49 PM, Marshalleq said:

Shares I just keep the same.  Autotrim and feature flags like that are irrelevant and will just continue on.  Compression is irrelevant to the import process also.

 

Sadly it turns out Unraid overrides the root compression setting for the pools when starting the array, and it defaulted to off for both my pools. Of course I can enable compression in the GUI after stopping the array again, but I had one of them set to zstd-2, which isn't even an option in Unraid. (Why???)

Dataset-local compression settings are thankfully kept as they are, but for almost all my datasets I relied on inheriting the setting from the pool.

Easy enough to set it for each dataset, just a bit annoying that Unraid needlessly messes with this.
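In case anyone else hits this, a sketch of what I mean by setting it per dataset; the pool name `ssd` is an example, and the recursive loop is just one way to do it:

```shell
# Unraid resets compression on the pool root, but dataset-local
# settings survive - so set it explicitly on every dataset:
for ds in $(zfs list -H -o name -r ssd); do
  zfs set compression=zstd-2 "$ds"
done

# Verify which datasets have a local setting vs. an inherited one:
zfs get -r -o name,value,source compression ssd
```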

 

Thankfully the only real issue I encountered is with my docker image. Since it is contained in a zvol on a pool, it has to be mounted after the pool is started, which worked fine, but then Unraid cannot stop the array until I manually unmount that zvol.

I'll switch to directories as recommended in https://docs.unraid.net/unraid-os/release-notes/6.12.0/#docker and just live with the horrible dataset pollution it causes.

 

Seeing all the root directories on my pools listed as user shares (with primary storage: array) makes me a bit nervous, but I suppose that won't affect anything especially if I never access anything through /mnt/user?

Edited by Visua
13 hours ago, Visua said:

 

Sadly it turns out Unraid overrides the compression setting for the pools on starting the array, so it defaulted to off for both my pools and one of them was set to zstd-2 which isn't even an option in Unraid.

 

This sounds like a bug imo... Someone who didn't notice for an extended period would need to copy everything out and back in to reap the compression benefits, and that can be a huuuuge PITA...

  • 4 months later...
Posted (edited)

Hi, I recently updated to 6.12 and my zpools aren't working.

I followed the 'procedure', which involves simply creating a pool, adding the devices and clicking start, but it's not working.

I found out it is because my drives have 2 partitions.

Now, I didn't realise that I was using 2 partitions; I don't even know how I did that. I just created the "data" pool and added the drives. So is it safe for me to delete the smaller partition? Would it work, or is there some sort of zpool/vdev format that requires both?

 

I've attached an image of what my drives look like unassigned, in a native pool, and what one of my drive partitions looks like.

 

Now, can I delete sdc9, or is it integral to my sdz1 ZFS partition?

 

Screenshot 2024-05-10 213332.png

Screenshot 2024-05-10 213435.png

Screenshot 2024-05-10 221203.png

Edited by Sally-san
Posted (edited)

I recently went through an upgrade to 6.12 with an existing pool. My disks looked like this:

 

fdisk --list /dev/sdaf
Disk /dev/sdaf: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000NM0034
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C92F7A6B-2F0B-EF41-9080-8B71DE0E618A

Device           Start         End     Sectors  Size Type
/dev/sdaf1        2048 11721027583 11721025536  5.5T Solaris /usr & Apple ZFS
/dev/sdaf9 11721027584 11721043967       16384    8M Solaris reserved 1


I also had an issue with "unmountable: unsupported or no file system".

 

The trick for me was to stop the array and export the pool(s) by running:

zpool export -a

 

Starting the array then mounted the ZFS pool correctly. Edit: make sure the disks are in the same order in your pool as "zpool status" shows. Hope this helps!
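Spelled out, the sequence looks something like this (run with the Unraid array stopped; pool names are whatever your own system shows):

```shell
# With the Unraid array stopped, cleanly export every imported pool:
zpool export -a

# Then start the array from the GUI so Unraid imports the pool itself.
# Afterwards, verify the pool came back healthy and note the disk order:
zpool status
zpool list -o name,health
```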

Edited by jortan
