ZFS plugin for unRAID


steini84

Recommended Posts

On 3/18/2023 at 8:37 AM, JorgeB said:

You can import the pools using the GUI, but wait for rc2 if the pool has cache, log, dedup, special and/or spare vdev(s). Then just create a new pool, assign all devices (including any special vdevs) and start the array. What you currently cannot do is add/remove those special devices using the GUI, but you can always add them manually and re-import the pool.

Thanks @JorgeB. I see that rc2 is out today, but in the changelog I don't see any mention of special vdevs etc. There's something about non-root vdevs, is that it? I guess I'm just asking so I don't waste my time importing pools that still can't be imported. Thanks!

Link to comment

I am having trouble importing my ZFS pool into Unraid 6.12.0-rc2. The pool was created with TrueNAS Scale. It is a 2 x 4-wide RAIDZ1 with ZIL and metadata vdevs.

I can import it manually from the terminal (zpool import Data), but there are two problems:
1. When I import it, it complains that an SMB share cannot be created, and I can't do it through the GUI either... but it does import.

2. It does not automatically import after a reboot.

Are these just 'early days' problems and I need to be patient, or am I missing something obvious? I have the ZFS Master plugin, but I am very new to Unraid, so I'm wondering if I missed some requirement or limitation.

Also, where do I find info on how to do XYZ with ZFS? Is it because this is all very new and the docs haven't been updated yet? I hate posting questions; I would rather find the info myself, but I haven't been able to find anything. But again, I'm new.

Link to comment
8 minutes ago, Tomo82 said:

The pool was created with TrueNAS Scale

As mentioned above, TrueNAS pools cannot currently be imported using the GUI because TrueNAS uses partition #2 for ZFS. It should be possible to import them in the near future; for now you should be able to import them manually using the console, but they cannot be part of the user shares.
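
For reference, a rough console sketch of that manual import (using the pool name "Data" from the post above, adjust for your own pool):

# List pools ZFS can see on the attached disks without importing anything
zpool import

# Import the TrueNAS-created pool by name from the Unraid console
zpool import Data

# Confirm the pool and its vdevs came up as expected
zpool status Data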

  • Like 1
Link to comment

I had a tough time managing my snapshots; too many were created with 0 file changes. So I wrote a script that creates a snapshot only if data has been written to the dataset, using the "written" property. It also prunes snapshots afterwards, keeping only the 100 most recent; this can be adjusted with the KEEP variable below.
To use it, list your full dataset paths, including the pool name, such as pool/dataset.

Hope it helps someone!
 

#!/bin/bash

# List your ZFS datasets
DATASETS=("pool/dataset1" "pool/dataset2" "pool/dataset3" "pool/dataset4" "pool/dataset5")

# Function to create snapshot if there is changed data
create_snapshot_if_changed() {
  local DATASET="$1"
  # -p gives the value in exact bytes, so "0" reliably means nothing was written
  local WRITTEN=$(zfs get -Hp -o value written "${DATASET}")

  if [ "${WRITTEN}" != "0" ]; then
    local TIMESTAMP=$(date "+%Y%m%d-%H%M")
    zfs snapshot "${DATASET}@${TIMESTAMP}"
    echo "Snapshot created: ${DATASET}@${TIMESTAMP}"
  else
    echo "No changes detected in ${DATASET}. No snapshot created."
  fi
}

# Function to prune snapshots
prune_snapshots() {
  local DATASET="$1"
  local KEEP=100

  # -H drops the header line; -s creation sorts oldest first so the oldest get pruned
  local SNAPSHOTS=( $(zfs list -H -t snapshot -o name -s creation -r "${DATASET}" | grep "^${DATASET}@") )
  local SNAPSHOTS_COUNT=${#SNAPSHOTS[@]}

  echo "Total snapshots for ${DATASET}: ${SNAPSHOTS_COUNT}"

  if [ ${SNAPSHOTS_COUNT} -gt ${KEEP} ]; then
    local TO_DELETE=$((SNAPSHOTS_COUNT - KEEP))
    for i in "${SNAPSHOTS[@]:0:${TO_DELETE}}"; do
      zfs destroy "${i}"
      echo "Deleted snapshot: ${i}"
    done
  fi
}

# Iterate over each dataset and call the functions
for dataset in "${DATASETS[@]}"; do
  create_snapshot_if_changed "${dataset}"
  prune_snapshots "${dataset}"
done
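
If you want to sanity-check what the script will see before scheduling it, you can query the same property and snapshot list by hand (pool/dataset is a placeholder, as in the script above):

# Exact bytes written since the last snapshot; "0" means a snapshot would be skipped
zfs get -Hp -o value written pool/dataset

# The snapshots the pruning step works through, oldest first
zfs list -H -t snapshot -o name -s creation -r pool/dataset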

 

Link to comment
On 3/20/2023 at 10:34 PM, JorgeB said:

Yep, that's it, it should now import pools with log, cache, dedup, special and/or spare vdev(s).

Finally getting around to it now.  I recall someone saying I still need a USB stick though to run a dummy array on.  So I'll keep it like that for now, but if anyone knows otherwise I'm keen to hear about it!  Thanks.

Link to comment

Hi all, I tried one pool, which technically worked, but what I assume is a design constraint has meant I have to roll it back. Not sure if there's somewhere else I should feed this back to; I'll look in a moment.

 

The specifics are that Unraid pool names have limitations beyond what ZFS itself requires, i.e. Unraid does not allow capital letters, symbols, or numbers at the beginning or end. So my SSDPool1 has to become something like ssdpool, which is annoying when I have numbered lists of pool names.

 

That in itself wouldn't be too bad if I didn't have so much referencing the former mount point, which makes switching to the Unraid GUI version more of a mammoth task than a minor one. For now, I will stick to zpool import -a.

 

Hopefully this is something that will be addressed in the future so that it's easier to import from native ZFS or perhaps other systems such as TrueNAS.

 

Also note it actually renames the ZFS pool as well (it's not just the mount point that changes), so you have to undo this by exporting and then re-importing with zpool import oldpoolname newpoolname, then zfs set mountpoint=<mountpoint> newpoolname.
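
For anyone following along, a rough sketch of those undo steps using the example names from this post (the target mountpoint is just an assumption, substitute your own):

# Export the pool Unraid renamed, re-import it under its original name,
# then point the mountpoint back to where everything expects it
zpool export ssdpool
zpool import ssdpool SSDPool1
zfs set mountpoint=/mnt/SSDPool1 SSDPool1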

 

I was excited to do this, but not looking forward to this much work right now.

 

I suspect I could actually just redirect the mountpoint and leave the changed name, but I'm unsure how Unraid will treat that, so I've left it for now.

Edited by Marshalleq
Link to comment
  • 3 weeks later...
On 3/22/2023 at 3:49 AM, Marshalleq said:

Finally getting around to it now.  I recall someone saying I still need a USB stick though to run a dummy array on.  So I'll keep it like that for now, but if anyone knows otherwise I'm keen to hear about it!  Thanks.

 

Yep. Have that on my test server. One ZFS pool mounted to replace the only disk in the Unraid array:

 

mount -R /mnt/zfs /mnt/disk1

 

I run that at the start of the array.
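
If it helps, one way to automate that (an assumption on my part, not the only option) is a small User Scripts job set to run when the array starts:

#!/bin/bash
# Recursively bind-mount the ZFS pool over the single array disk,
# exactly the command described above; paths are the ones from this post
mount -R /mnt/zfs /mnt/disk1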

 

Works well so far.

 

The only limitation with the ZFS plugin was that dockers and VMs had to be on the cache pool. I will try now with 6.12-rc2 to see if that can be changed back to the array.

Link to comment

I’ve run the ZFS plugin for years and have never once run dockers and VMs on the Unraid cache. The whole point for me was to get these off btrfs so that I could get a reliable experience, which was successful. So I’ve never understood why people keep saying that, to be honest. Works for me anyway; perhaps I’m just lucky.

Link to comment

I have, and have always had, my ZFS pools under /mnt. With the built-in ZFS I still have them under /mnt and haven’t noticed any problems. The only thing I haven’t worked out yet is the znapzend plugin; it’s currently not working, but I believe others have gotten it to function.

Also, there are pool naming constraints currently. I had to do probably 1-2 hours of work to rename all my dockers, VMs etc. to use the lower-case names (not so bad), but more annoying are the restrictions on where numbers can be placed. So I now have hdd1pool instead of HDDPool1, which totally messed up my naming scheme. However, the Unraid GUI is a big improvement for matching UUIDs and /dev device names.

A slight negative is that you have to stop the whole server to create the array. I find that quite disappointing, but I don’t do it that much, so it’s not too bad, just enough to be annoying. Got two more disks today, so here I go again: stop the whole server for something that technically doesn’t need it.

Edited by Marshalleq
  • Thanks 1
Link to comment

ok, then I need to rename all my dockers, too.
 

NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
HDD        5.45T  1.79T  3.67T        -         -     2%    32%  1.00x    ONLINE  -
NVME        928G   274G   654G        -         -     8%    29%  1.00x    ONLINE  -
SSD         696G   254G   442G        -         -    13%    36%  1.00x    ONLINE  -
SingleSSD   464G  16.6G   447G        -         -     1%     3%  1.00x    ONLINE  -


HDD and SSD are raidz1-0 with 3 disks each.

So, to update, I should note the zpool status output (the UUIDs for each pool), and create the pools afterwards with these UUIDs?

 

Link to comment

Well, when you first drop the plugin and update to the Unraid release with built-in ZFS, nothing really changes. You can still do zpool import -a and use the original names. The issue comes when you use Unraid's GUI. So what I did was import the pools, list out the disks with zpool list -v, take a screenshot of that, then export the pools. Then you have all the info you need to import into the GUI. The GUI gives you both the UUID and the /dev device name, so it's super easy to select.
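
In command form, that sequence looks roughly like this (the pool name is just an example):

zpool import -a        # import every pool under its original name
zpool list -v          # note the devices behind each pool (screenshot this)
zpool export SSDPool1  # export again before assigning the disks in the GUI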

  • Thanks 1
Link to comment

I had a lot of snapshots and decided to get rid of them with 

zfs list -H -o name -t snapshot | grep autosnap | xargs -n1 zfs destroy

This spat out a few "snapshot not found" errors and the system got really slow until I noticed some apps could no longer write files.

The system no longer responded and I had to power cycle it.

Now the system hangs at boot, but I'm still able to SSH into the machine. Everything seems to hang waiting for

zpool import -a

Commands like zpool status don't seem to work. /proc/spl/kstat/zfs/dbgmsg contains the following:

timestamp    message
1681372320   spa.c:6242:spa_tryimport(): spa_tryimport: importing tank
1681372320   spa_misc.c:418:spa_load_note(): spa_load($import, config trusted): LOADING
1681372320   vdev.c:160:vdev_dbgmsg(): disk vdev '/dev/disk/by-id/ata-ST12000NM001G-2MV103_ZL274PFX-part1': best uberblock found for spa $import. txg 18136025
1681372320   spa_misc.c:418:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=18136025
1681372320   spa.c:8360:spa_async_request(): spa=$import async request task=4096
1681372321   spa.c:8360:spa_async_request(): spa=$import async request task=2048
1681372321   spa_misc.c:418:spa_load_note(): spa_load($import, config trusted): LOADED
1681372321   spa_misc.c:418:spa_load_note(): spa_load($import, config trusted): UNLOADING
1681372321   spa.c:6098:spa_import(): spa_import: importing tank
1681372321   spa_misc.c:418:spa_load_note(): spa_load(tank, config trusted): LOADING
1681372321   vdev.c:160:vdev_dbgmsg(): disk vdev '/dev/disk/by-id/ata-ST12000NM001G-2MV103_ZL274PFX-part1': best uberblock found for spa tank. txg 18136025
1681372321   spa_misc.c:418:spa_load_note(): spa_load(tank, config untrusted): using uberblock with txg=18136025
1681372321   spa.c:8360:spa_async_request(): spa=tank async request task=4096
1681372341   spa_misc.c:418:spa_load_note(): spa_load(tank, config trusted): read 182 log space maps (4100 total blocks - blksz = 131072 bytes) in 17406 ms
1681372371   spa_misc.c:418:spa_load_note(): spa_load(tank, config trusted): spa_load_verify found 0 metadata errors and 53 data errors
1681372371   mmp.c:240:mmp_thread_start(): MMP thread started pool 'tank' gethrtime 354061113656
1681372371   spa.c:8360:spa_async_request(): spa=tank async request task=1
1681372371   spa.c:2606:spa_livelist_delete_cb(): deleting sublist (id 95993) from livelist 95991, 0 remaining
1681372371   spa.c:8360:spa_async_request(): spa=tank async request task=2048
1681372371   spa_misc.c:418:spa_load_note(): spa_load(tank, config trusted): LOADED
1681372371   spa.c:8360:spa_async_request(): spa=tank async request task=4096
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 937, smp_length 135136, unflushed_allocs 442368, unflushed_frees 12580790272, freed 0, defer 0 + 0, unloaded time 354384 ms, loading_time 51 ms, ms_max_size 14897127424, max size error 14082514944, old_weight 840000000000001, new_weight 840000000000001
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 1105, smp_length 387240, unflushed_allocs 188547072, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 354436 ms, loading_time 64 ms, ms_max_size 16458719232, max size error 16458719232, old_weight 840000000000001, new_weight 840000000000001
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 1220, smp_length 518048, unflushed_allocs 303104, unflushed_frees 14799437824, freed 0, defer 0 + 0, unloaded time 354500 ms, loading_time 74 ms, ms_max_size 15683502080, max size error 12946800640, old_weight 840000000000001, new_weight 840000000000001
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 2374, smp_length 326880, unflushed_allocs 2367995904, unflushed_frees 258826240, freed 0, defer 0 + 0, unloaded time 354575 ms, loading_time 104 ms, ms_max_size 12846678016, max size error 12846579712, old_weight 840000000000001, new_weight 840000000000001
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 49, smp_length 384, unflushed_allocs 1228546048, unflushed_frees 90112, freed 0, defer 0 + 0, unloaded time 354679 ms, loading_time 56 ms, ms_max_size 6055149568, max size error 6055059456, old_weight 800000000000001, new_weight 800000000000001
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 2478, smp_length 207680, unflushed_allocs 2539520, unflushed_frees 8653791232, freed 0, defer 0 + 0, unloaded time 354735 ms, loading_time 51 ms, ms_max_size 5210308608, max size error 1588199424, old_weight 800000000000001, new_weight 800000000000001
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 2480, smp_length 142144, unflushed_allocs 712704, unflushed_frees 8535842816, freed 0, defer 0 + 0, unloaded time 354787 ms, loading_time 76 ms, ms_max_size 3770638336, max size error 1008377856, old_weight 7c0000000000002, new_weight 7c0000000000002
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 2497, smp_length 1480, unflushed_allocs 5153120256, unflushed_frees 2602729472, freed 0, defer 0 + 0, unloaded time 354863 ms, loading_time 122 ms, ms_max_size 3899867136, max size error 2694348800, old_weight 7c0000000000002, new_weight 7c0000000000002
1681372372   metaslab.c:2437:metaslab_load_impl(): metaslab_load: txg 18136027, spa tank, vdev_id 0, ms_id 2093, smp_length 66136, unflushed_allocs 315039744, unflushed_frees 15409152, freed 0, defer 0 + 0, unloaded time 354985 ms, loading_time 40 ms, ms_max_size 3293323264, max size error 3293306880, old_weight 7c0000000000001, new_weight 7c0000000000001
1681372372   spa_history.c:307:spa_history_log_sync(): txg 18136027 open pool version 5000; software version zfs-2.1.9-0-g92e0d9d18-dist; uts Tower 5.19.17-Unraid #2 SMP PREEMPT_DYNAMIC Wed Nov 2 11:54:15 PDT 2022 x86_64
1681372398   dsl_scan.c:3433:dsl_process_async_destroys(): freed 27648 blocks in 25843ms from free_bpobj/bptree txg 18136027; err=85

 

Link to comment
29 minutes ago, ashman70 said:

I have a backup server running the latest RC of unRAID 6.12. I am contemplating wiping all the disks and formatting them in ZFS so each disk is running ZFS on its own. Would I still get the data protection from bit rot from ZFS by doing this?

You would get detection of bit rot any time you read/write the files. You will still need backups to recover any files with bit rot.

 

It is worth pointing out that you have been able to do this for a long time in earlier releases of Unraid by using btrfs as the file system type on array drives.
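
As a concrete check, a scrub will read everything back and report any checksum errors (on a single-disk vdev it can only report them, not repair them); "tank" is just an example pool name:

zpool scrub tank        # read and verify every block against its checksum
zpool status -v tank    # shows scrub progress and lists any damaged files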

Link to comment

  

1 hour ago, ashman70 said:

I have a backup server running the latest RC of unRAID 6.12. I am contemplating wiping all the disks and formatting them in ZFS so each disk is running ZFS on its own. Would I still get the data protection from bit rot from ZFS by doing this?

 

Is there any reason you don't want to create a single pool of your disks?  Less total capacity?  Concerns around being able to expand the array?   ZFS has supported expanding raidz pools for some time now and Unraid will be adding GUI features for this at some point.

Link to comment
5 minutes ago, jortan said:

  

 

Is there any reason you don't want to create a single pool of your disks?  Less total capacity?  Concerns around being able to expand the array?   ZFS has supported expanding raidz pools for some time now and Unraid will be adding GUI features for this at some point.

Because all the drives are different sizes, I don't want to create pools and lose disk space.

Link to comment
54 minutes ago, itimpi said:

You would get detection of bit rot any time you read/write the files. You will still need backups to recover any files with bit rot.

 

It is worth pointing out that you have been able to do this for a long time in earlier releases of Unraid by using btrfs as the file system type on array drives.

Right, but my understanding is that the difference between BTRFS and ZFS is that BTRFS will alert you to disk corruption but you have to take action to fix it, whereas ZFS does it automatically? Am I correct in this?

Link to comment
11 minutes ago, ashman70 said:

Right, but my understanding is that the difference between BTRFS and ZFS is that BTRFS will alert you to disk corruption but you have to take action to fix it, whereas ZFS does it automatically? Am I correct in this?

No, for that they are the same: they can detect corruption on non-redundant filesystems and self-heal redundant filesystems.

Link to comment
13 minutes ago, ashman70 said:

Right, but my understanding is that the difference between BTRFS and ZFS is that BTRFS will alert you to disk corruption but you have to take action to fix it, whereas ZFS does it automatically? Am I correct in this?

No. When used in the array there is no redundancy at the file system level, so you only get detection, not automatic fixing.

Link to comment
1 minute ago, JorgeB said:

No, for that they are the same: they can detect corruption on non-redundant filesystems and self-heal redundant filesystems.

Right, so if I use ZFS as I intend to, converting all my disks to ZFS and just having them operate independently with no redundant vdevs, then I will get the detection and self-healing features of ZFS?

Link to comment
