
ZFS plugin for unRAID


steini84


Posted
2 hours ago, xxxliqu1dxxx said:

I can remove it with rm -rf and it comes back after reboot, as shown previously. It gets "recreated" every reboot; even the timestamp changes. There's no "disk usage" with it, just two folders... /mnt/ssd500gb/Backups ... that's it...

If someone else has an idea... I am kind of still stuck.

Here's what I've attempted since my last post:

 

1. Recreated the zpool

2. Recreated the dataset with the same name

3. Created a snapshot

4. Deleted the snapshot

5. Destroyed the dataset

6. Destroyed the zpool

7. Rebooted - the folder was still mounted/present.

8. Currently trying a pre-clear of the disk, since that wipes the MBR and "starts fresh", from my understanding... 

 

Somehow I think there's still something creating this folder, preparing to mount the zvol and/or dataset and/or something else at boot time... Would the plugin creator or someone else be able to point to some ZFS config file that may still be present despite removing the plugin, and which would run a mount and/or mkdir command at boot time? I mean, how can ZFS try to do something if the plugin is removed and the drive is zero-wiped?

Posted

Thanks @steini84 for pointing to the culprit - a Docker template that still had the path configured in it (i.e. binhex-urbackup.xml):

 

root@Tower:/# grep -r ssd500gb /boot
/boot/config/plugins/dockerMan/templates-user/my-MacinaBox.xml:      <HostDir>/mnt/disks/ssd500gb/domains/</HostDir>
/boot/config/plugins/dockerMan/templates-user/my-MacinaBox.xml:  <Config Name="VM Images location" Target="/image" Default="" Mode="rw" Description="normally /mnt/users/domains" Type="Path" Display="always" Required="false" Mask="false">/mnt/disks/ssd500gb/domains/</Config>
/boot/config/plugins/dockerMan/templates-user/my-binhex-urbackup.xml:      <HostDir>/mnt/ssd500gb/Backups/</HostDir>
/boot/config/plugins/dockerMan/templates-user/my-binhex-urbackup.xml:  <Config Name="Host Path 2" Target="/media" Default="/mnt/user" Mode="rw,slave" Description="Container Path: /media" Type="Path" Display="always" Required="true" Mask="false">/mnt/ssd500gb/Backups/</Config>
/boot/config/plugins/dockerMan/templates-user/my-duplicati.xml:      <HostDir>/mnt/ssd500gb/tmp/</HostDir>
/boot/config/plugins/dockerMan/templates-user/my-duplicati.xml:  <Config Name="Host Path 2" Target="/tmp" Default="" Mode="rw,slave" Description="Container Path: /tmp" Type="Path" Display="always" Required="true" Mask="false">/mnt/ssd500gb/tmp/</Config>
/boot/config/plugins/dockerMan/templates-user/my-diskover.xml:      <HostDir>/mnt/disks/ssd500gb/appdata/diskover</HostDir>
/boot/config/plugins/dockerMan/templates-user/my-diskover.xml:  <Config Name="appdata" Target="/config" Default="/mnt/user/appdata/diskover" Mode="rw" Description="Specify the exact disk.  DO NOT USE /mnt/user/appdata either use /mnt/cache/appdata/ or /mnt/disk$/appdata/ " Type="Path" Display="always" Required="true" Mask="false">/mnt/disks/ssd500gb/appdata/diskover</Config>
/boot/config/plugins/fix.common.problems/ignoreList.json:    "Invalid folder ssd500gb contained within /mnt": "true"

Clearing that fixed the issue!!!

 

Thanks again to everyone involved.

Posted

Two things I always do when repurposing a drive (not that it applies to this situation) are:

 

1) Remove all partitions

2) wipefs -a /dev/sdX

 

I wish I'd known about that last one much earlier. It would have saved me a lot of grief removing MD superblocks, ZFS metadata, etc...
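
For reference, that sequence looks roughly like this, with /dev/sdX standing in for the real device (triple-check the device name; sgdisk from gptfdisk is just one way to clear the partition table and assumes it is installed):

# Look at what is currently on the disk before destroying anything
fdisk -l /dev/sdX
wipefs /dev/sdX        # with no options this only lists the signatures it finds

# 1) Remove all partitions - sgdisk --zap-all clears both GPT and MBR structures
sgdisk --zap-all /dev/sdX

# 2) Wipe any remaining filesystem/RAID/ZFS signatures from the whole device
wipefs -a /dev/sdX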

Posted

Hi, I'm a newbie to ZFS, and I'm having two issues I can't wrap my mind around. 

 

1. The drive shares

I can browse it and I can list it; however, I can't open or write to it. On the share I have the config below. I can't find an example for 4 users - is it just a comma between users?

 

Text from shares on Unraid.

[media]
path = /mnt/tank/media
# Secure
public = yes
writeable = yes
write list = user1, user2, user3, User4
browseable = yes
guest ok = no
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes

 

2. Is my read/write issue possibly permissions? I have tried "chown nobody:users" on the path. I can write via SSH and copy/delete with MC as root, but not as user1, user2, etc. When I go back to clean up the issue as root, the command runs with no error, but also with no change. I have also created a local user on one of the Windows VMs, and that user sees the same issue I do as, say, user1.
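
For reference, a recursive ownership/permission reset on that path would look something like the sketch below; chown/chmod without -R only touch the top-level directory, and nobody:users is the owner unRAID normally uses for its own shares (treat this as a sketch, not a prescription):

# Recursively hand the whole dataset to nobody:users
chown -R nobody:users /mnt/tank/media

# Give owner and group read/write everywhere; the capital X only adds
# execute on directories (and on files that are already executable)
chmod -R u+rwX,g+rwX,o+rX /mnt/tank/media

# Spot-check the result
ls -ln /mnt/tank/media | head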

 

Thank you in advance for any pointers. I've been working with Linux since 2002 in a corporate environment, but much of that has been the occasional LAMP connectivity or backup issue; typically this part is done before I get there, so it's new to me. I'm getting new hardware to upgrade my actual server next week, and I'm trying to make sure I can get past this before I bring it to a production level, even if it's only in my home.

Posted

I'd like to thank the members who have reached out to me privately. I got a share going. I'm still struggling with snapshots/previous versions, but I also have a backup in order. Thanks! I know newbie issues usually don't get a lot of public help around here, so it's much appreciated to see them answered, even behind the scenes. I think this and the Level 1 guides are great, but they're written by high-level people who forget there are trees in the forest to paint; I'm just good at finding that sort of thing. I'll try to put together a noob-to-newbie quick guide when I do my actual build.

Posted

Dear Steini84,

 

thanks for enabling ZFS support on UnRAID, which makes this by far the best system for storage solutions out there, giving you the option of creating ZFS pools of any flavour alongside an UnRAID pool on the same machine.

I'm just starting to do testing on my machine, and I stumbled over the dRAID vdev driver documentation and was wondering whether there's any possibility you could include that option in your build? I know that by ZFS standards this is far from production-ready software, but since I'm testing around anyway, I'd be really interested in what performance gains I'd get from a 15 + 3 distributed spares draid1 setup compared to a 3x 4+1 raidz1 vdev pool, for example.

 

Thanks. ;)

 

M

Posted (edited)

I've been playing around with ZFS in Unraid for a few days now. Thanks for keeping the plugin up to date!

 

I created a single zpool on an external USB disk using the commands mentioned in the first post. However, the device name changed from 'sdb' to 'sdg' and the pool was no longer loaded automatically. So I exported the pool and re-imported it via its persistent device ID (source):

 

root@server:~# zpool export extdrive
root@server:~# zpool list -v
no pools available
root@server:~# zpool import -d /dev/disk/by-id extdrive
root@server:~# zpool list -v
NAME                                                  SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
extdrive                                              928G   139G   789G        -         -     0%    15%  1.00x    ONLINE  -
  usb-WD_Elements_25A2_575833314142354837365654-0:0   928G   139G   789G        -         -     0%  15.0%      -  ONLINE  
root@server:~# 

 

This makes sure the pool is loaded even if the device name changes. To me it looks like it's recommended to create the pool using persistent IDs (/dev/disk/by-id) rather than device names. What do you guys think?
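
For reference, creating the pool against the persistent path from the start would look roughly like this, using the by-id name from the listing above (this would destroy existing data on the device, and the re-plug caveat in the edits below still applies):

# Create the pool on the persistent by-id path instead of the sdX name
zpool create extdrive /dev/disk/by-id/usb-WD_Elements_25A2_575833314142354837365654-0:0

# The vdev is then listed under its by-id name no matter which sdX letter it gets
zpool status extdrive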

 

Edit: Seems like it does not work well. After a reboot another device name is assigned, and despite the fact that the pool is imported via its ID, commands like 'zpool list -v' hang :/

 

Edit 2: It looks like the "stuck" behaviour occurs when the device label changes (e.g. re-plugging the USB drive) while the pool is still loaded. Thus, I ended up doing the following via the UD plugin:

#!/bin/bash
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

# $ACTION (and $MOUNTPOINT further down) come from the UD plugin's script environment
case $ACTION in
  'ADD' )
  ;;

  'UNMOUNT' )
  ;;

  'REMOVE' )
  ;;
  'ERROR_MOUNT' )
    # UD cannot mount the ZFS device itself, so import and mount it here
    DEST=/mnt/extdrive
    zpool import -d /dev/disk/by-id extdrive
    zfs mount -a
    if mountpoint -q $DEST; then

      rsync -a -v --delete a b 2>&1   # 'a' and 'b' are source/destination placeholders
      ud_backup_exit=$?               # capture the rsync result here, not after sync
      [...]

      sync

      if [ ${ud_backup_exit} -eq 0 ]; then
        echo "Completed UD backup"
      else
        echo "UD backup failed"
      fi
    else
      echo "Backup drive not mounted, exiting"
      exit 1
    fi

    zfs umount /mnt/extdrive
    zpool export extdrive
    if mountpoint -q $MOUNTPOINT; then
      echo "Error while un-mounting ZFS drive"
    else
      echo "Device can be removed"
    fi
  ;;

  'ERROR_UNMOUNT' )
  ;;
esac

 

I assigned this script via the UD plugin and configured auto-mount for the device. Now I can plug in my ZFS USB device and remove it once the backup is finished.

 

Using the "ERROR_MOUNT" state is kind of a hack. Would love to have the "ADD" state renamed to "MOUNTED". Then an additional state "ADD" would allow to just indicate the occurrence of new devices.

 

Custom mount commands in the UD plugin would also be nice for this kind of scripting with ZFS drives.

 

How do you guys handle such cases?

 

 

Edited by T0a
Posted
On 5/29/2020 at 10:26 AM, MatzeHali said:

Dear Steini84,

 

thanks for enabling ZFS support on UnRAID, which makes this by far the best system for storage solutions out there, giving you the option of creating ZFS pools of any flavour alongside an UnRAID pool on the same machine.

I'm just starting to do testing on my machine, and I stumbled over the dRAID vdev driver documentation and was wondering whether there's any possibility you could include that option in your build? I know that by ZFS standards this is far from production-ready software, but since I'm testing around anyway, I'd be really interested in what performance gains I'd get from a 15 + 3 distributed spares draid1 setup compared to a 3x 4+1 raidz1 vdev pool, for example.

 

Thanks. ;)

 

M

Here you go - have fun and don't break anything :)

https://www.dropbox.com/s/dvmgw6iab43qpq9/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz?dl=0

https://www.dropbox.com/s/rrjpqo0zyddgqmn/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz.md5?dl=0

 

To install this test build you first have to have the plugin installed, then fetch the .tgz file and install it with this command:

installpkg zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz

If you want it to persist after a reboot you have to fetch both files, rename them to "zfs-0.8.3-unRAID-6.8.3.x86_64.tgz" and "zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5", and overwrite the files in /boot/config/plugins/unRAID6-ZFS/packages/
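
Putting the whole procedure together, a rough sketch (it assumes the .md5 file is in standard md5sum format and that Dropbox's ?dl=1 suffix gives a direct download):

cd /tmp
# Fetch the test build and its checksum
wget -O zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz "https://www.dropbox.com/s/dvmgw6iab43qpq9/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz?dl=1"
wget -O zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz.md5 "https://www.dropbox.com/s/rrjpqo0zyddgqmn/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz.md5?dl=1"

# Verify the download before installing
md5sum -c zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz.md5 || exit 1

# Install the package for the running session
installpkg zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz

# Persist across reboots: overwrite the plugin's cached package under the names it expects
cp zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz
cp zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz.md5 /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5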

Posted
18 hours ago, steini84 said:

Here you go - have fun and don't break anything :)

https://www.dropbox.com/s/dvmgw6iab43qpq9/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz?dl=0

https://www.dropbox.com/s/rrjpqo0zyddgqmn/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz.md5?dl=0

 

To install this test build you first have to have the plugin installed, then fetch the .tgz file and install it with this command:


installpkg zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz

If you want it to persist after a reboot you have to fetch both files, rename them to "zfs-0.8.3-unRAID-6.8.3.x86_64.tgz" and "zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5", and overwrite the files in /boot/config/plugins/unRAID6-ZFS/packages/

Awesome, thanks. I'll try to set up some kind of extensive testing scenario before I go into any production state, so I'll report back on any reliability problems, and hopefully with an extensive performance comparison between raidz2 configurations and a corresponding draid vdev - not only rebuild times, which will definitely be much faster, but also IO performance.

 

Cheers,

 

M

Posted (edited)

Hey all, just thought I'd post that we are super lucky because we now have two methods of getting ZFS support in Unraid.

 

1 - This plugin, which has served us well (and I have to say, the ZFS developer has been both super responsive and amazing)

2 - And now an interesting alternative via a new community kernel method here.

 

There isn't any difference in the ZFS capability; mainly it's just that you don't have to wait for the developer to update the plugin when a new Unraid version comes out. Obviously that's not really a big deal for ZFS since the developer is super responsive, but I always feel bad for asking! 

 

However, this kernel also builds in support for Nvidia drivers and DVB drivers. Nvidia at times was hard to get updated for the latest Unraid release, so this works around that, which is especially nice for testing against beta versions.

 

I'm running it to try it out and it works well for me so far, thought I should share.

 

Thanks,

 

Marshalleq.

Edited by Marshalleq
Posted

Wow, that is great. I try my best to update ASAP, but it's awesome that we are getting more ways to enjoy ZFS. Hope it will one day be native in unRAID, but until then it's great to see more options.

 

 


Posted

Actually, the Home Gadget Geeks interview on the front page of the Unraid site was an interview with Limetech - in it, Limetech says they're really considering ZFS (or something along those lines). He pretty much indicated it's in the works, which is very exciting. It would be great to get an official version.

Posted
Actually, the Home Gadget Geeks interview on the front page of the Unraid site was an interview with Limetech - in it, Limetech says they're really considering ZFS (or something along those lines). He pretty much indicated it's in the works, which is very exciting. It would be great to get an official version.

Yeah, I saw that interview. I hope they find a creative way to integrate ZFS.


Posted

First, thanks a lot steini84 for the nice how-to 🙂

 

I have two questions:

1. I encrypted my datasets data/docker, data/vm and data/media with a keyfile stored on /mnt/disk1. My unRAID array is btrfs encrypted, so the keyfile is only available once I mount the array with a password. After rebooting the server and manually unlocking the encrypted btrfs array, Docker and the VMs fail because the image files and containers live on the encrypted ZFS datasets. Is it possible to "automount" the ZFS datasets when the array starts?

2. My "zfs get keylocation" output is the following:

data                           keylocation  none                   default
data@2020-06-09-090000         keylocation  -                      -
data/docker                    keylocation  file:///mnt/disk1/.key  local
data/docker@just_docker        keylocation  -                      -
data/docker@2020-06-09-090000  keylocation  -                      -
data/media                     keylocation  file:///mnt/disk1/.key  local
data/media@2020-06-09-090000   keylocation  -                      -
data/vm                        keylocation  file:///mnt/disk1/.key  local
data/vm@just_vm                keylocation  -                      -
data/vm@2020-06-09-090000      keylocation  -                      -

 

Should I also set keys on the snapshots? And on the whole pool? I don't want to "crypt in a crypt".
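
For reference, regarding question 1, a script run when the array starts could look roughly like this, assuming the keyfile is reachable at /mnt/disk1/.key by then (the commented-out service restarts are an assumption and depend on the unRAID version):

#!/bin/bash
# Run once the array (and therefore /mnt/disk1 with the keyfile) is mounted

# Load keys for all encrypted datasets whose keylocation points at the keyfile
zfs load-key -a

# Mount everything that is now unlockable
zfs mount -a

# Optionally restart the Docker/VM services afterwards so they see their paths
# (assumption - exact commands depend on the unRAID version):
# /etc/rc.d/rc.docker restart
# /etc/rc.d/rc.libvirt restart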

 

thanks a lot

Posted
12 hours ago, Randael said:

First, thanks a lot steini84 for the nice how-to 🙂

 

I have two questions:

1. I encrypted my datasets data/docker, data/vm and data/media with a keyfile stored on /mnt/disk1. My unRAID array is btrfs encrypted, so the keyfile is only available once I mount the array with a password. After rebooting the server and manually unlocking the encrypted btrfs array, Docker and the VMs fail because the image files and containers live on the encrypted ZFS datasets. Is it possible to "automount" the ZFS datasets when the array starts?

2. My "zfs get keylocation" output is the following:

data                           keylocation  none                   default
data@2020-06-09-090000         keylocation  -                      -
data/docker                    keylocation  file:///mnt/disk1/.key  local
data/docker@just_docker        keylocation  -                      -
data/docker@2020-06-09-090000  keylocation  -                      -
data/media                     keylocation  file:///mnt/disk1/.key  local
data/media@2020-06-09-090000   keylocation  -                      -
data/vm                        keylocation  file:///mnt/disk1/.key  local
data/vm@just_vm                keylocation  -                      -
data/vm@2020-06-09-090000      keylocation  -                      -

 

Should I also set keys on the snapshots? And on the whole pool? I don't want to "crypt in a crypt".

 

thanks a lot

It is really easy to run commands when the array starts with this program: 

You can make it run on array start or even ONLY on the first array start after booting the server. Much easier than using the "go" file.

 

But I cannot answer the encryption part, since I'm not familiar enough to give you a good answer 😕

 

Posted (edited)

Nice to have an option for ZFS, but there's no way I'm using ZFS with UnRaid until it's a native integrated feature. FreeNAS is leaps and bounds more stable and easy to use on the ZFS front. I'm all for seeing this develop though :)

 

 

Edited by Raid-or-Die
Posted

To each their own, but if you read the first post you can understand why this plugin exists and what role ZFS plays in an unRAID setup in my mind. If I wanted to go all-in on ZFS I would use FreeNAS/Ubuntu/FreeBSD/OmniOS+napp-it, but I think ZFS for critical data and XFS with parity for media is just perfect, and I have been running a stable setup like that since 2015.



Posted
18 hours ago, Raid-or-Die said:

Nice to have an option for ZFS, but there's no way I'm using ZFS with UnRaid until it's a native integrated feature. FreeNAS is leaps and bounds more stable and easy to use on the ZFS front. I'm all for seeing this develop though :)

 

 

I CAN subscribe to ZFS being more polished in freenas from a user interaction perspective (assuming you don't do console). 

 

I CAN'T subscribe to ZFS being more stable on FreeNAS; there is nothing unstable about this plugin at all. In fact I'd say with absolute confidence it's more stable and robust than all the other filesystems available natively on unraid today. And if you like, you can even run this in a native kernel now anyway, not that it makes a difference to the stability of it. @steini84 has done an amazing job of bringing us a stable and robust option with this plugin, and it has saved me a number of times already. I am extremely grateful for it.

 

If you have some evidence of how ZFS on unraid is not stable, I'd certainly like to know about it so that I can re-assess my options.

 

Thanks,

 

Marshalleq

Posted
On 5/14/2020 at 8:27 PM, steini84 said:

Built ZFS 0.8.4 for unRAID 6.8.3 & 6.9.0-beta1 (kernel 5.5/5.6 officially supported in this ZFS version) 

 

The upgrade is done when you reboot your server  

 

Changelog can be found here: https://github.com/openzfs/zfs/releases/tag/zfs-0.8.4

I'm on unRAID 6.8.3 but the plugin still shows version 0.8.2, though that would be explained by the plugin notes:

2020.01.09

Rewrote the plugin so it does not need to be updated everytime unRAID is upgraded. It checks if there is already a new build available and installs that

 

Rebooted unRAID today, "zfs version" returns:

zfs-0.8.3-1

 

I was hoping to get persistent L2ARC added, which apparently has been merged into OpenZFS:

https://github.com/openzfs/zfs/pull/9582

 

though it isn't mentioned in the recent changelogs for OpenZFS?

 

PS: Big thank you for getting ZFS into unRAID, and for the fantastic primer in the first post. Having per-VM and per-docker snapshots has already saved my bacon.

Posted
Very interested in replacing a FreeNAS box w/ Unraid running ZFS. Is it possible to get Quickassist (gzip-qat) hardware acceleration working? I'm using an Atom processor w/ integrated QAT acceleration, and offloading the compression has a significant impact on performance:
 
https://github.com/openzfs/zfs/pull/5846

It should be included since 0.7

https://openzfs.org/wiki/ZFS_Hardware_Acceleration_with_QAT


Posted
5 hours ago, ConnectivIT said:

I'm on unRAID 6.8.3 but the plugin still shows version 0.8.2, though that would be explained by the plugin notes:


2020.01.09

Rewrote the plugin so it does not need to be updated everytime unRAID is upgraded. It checks if there is already a new build available and installs that

 

Rebooted unRAID today, "zfs version" returns:

zfs-0.8.3-1

 

I was hoping to get persistent L2ARC added, which apparently has been merged into OpenZFS:

https://github.com/openzfs/zfs/pull/9582

 

though it isn't mentioned in the recent changelogs for OpenZFS?

 

PS: Big thank you for getting ZFS into unRAID, and for the fantastic primer in the first post. Having per-VM and per-docker snapshots has already saved my bacon.

I have updated to the 0.8.4 release. Persistent L2ARC has been added to the master branch, but it has not made it into a release yet. It appears that it will be included in the 2.0 release - "The features in progress or ported for OpenZFS 2.0 is lengthy, and includes:"

ref: https://en.wikipedia.org/wiki/OpenZFS

 

You can follow the changelog over at https://zfsonlinux.org/

 

 

Posted
On 6/17/2020 at 4:12 AM, ensnare said:

Very interested in replacing a FreeNAS box w/ Unraid running ZFS. Is it possible to get Quickassist (gzip-qat) hardware acceleration working? I'm using an Atom processor w/ integrated QAT acceleration, and offloading the compression has a significant impact on performance:

 

https://github.com/openzfs/zfs/pull/5846

Hey, welcome - I think QAT support has been built into ZFS since it showed up in the ZFS changelog in 2017, but it also needs the QAT driver, which as far as I know is not included in the Linux kernel (though you could just try that first). But unlike FreeNAS, things like this are usually a bit easier in unraid. I'd suggest having a look at the community kernel and having a go at building the driver in there. The dev is quite helpful too, so I'm sure he'll give you some tips. Maybe, just maybe, he'll even include it as an option automatically, since it's available on a range of Intel platforms and gives a performance boost.
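
As a rough starting point for checking what's already in place (generic probes, nothing specific to this plugin):

# Is any QAT driver module loaded? (module names vary by QAT generation, so grep broadly)
lsmod | grep -i qat

# Does the loaded ZFS module expose QAT-related tunables?
ls /sys/module/zfs/parameters/ | grep -i qat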

Posted
11 hours ago, ConnectivIT said:

I was hoping to get persistent L2ARC added, which apparently has been merged into OpenZFS:

https://github.com/openzfs/zfs/pull/9582

 

PS: Big thank you for getting ZFS into unRAID, and for the fantastic primer in the first post. Having per-VM and per-docker snapshots has already saved my bacon.

Persistent L2ARC! I hadn't noticed that! It will be a great feature - I'm running the non-persistent one on an NVMe drive and it does make a huge difference for VMs and such. Actually, combined with a decent L1 ARC it makes VMs and dockers on mechanical HDDs very usable again. ZFS sure is magic.
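
For reference, adding (and later removing) an L2ARC device looks roughly like this; the pool name and device path below are placeholders:

# Attach an NVMe device to an existing pool as L2ARC
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE_SSD_1TB

# See how much of the cache device is actually being used
zpool iostat -v tank

# Cache vdevs can be removed again without affecting the pool's data
zpool remove tank /dev/disk/by-id/nvme-EXAMPLE_SSD_1TB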
