ZFS plugin for unRAID


steini84


32 minutes ago, ich777 said:

Please keep in mind that this documentation is for stock unRAID systems

[...]

As you can see it is using the ZFS driver automagically.

Of course Limetech would not account for ZFS since it is not officially supported. But since I was not sure about Docker picking the right storage driver, and my previous attempt with unstable builds (2.0.3, iirc) was unsuccessful, I recommended setting the docker vDisk location to a path on the array, which 100% does work.
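
For anyone who wants to verify which storage driver Docker actually picked, a quick check from the unRAID console is the standard docker info command (the grep is just a convenience filter):

docker info | grep -i 'storage driver'
# prints e.g. "Storage Driver: zfs" when the Docker data lives on a ZFS dataset,
# or "Storage Driver: btrfs" / "overlay2" when the usual vDisk image is used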

Link to comment
1 minute ago, Arragon said:

I recommended setting the docker vDisk location to a path on the array, which 100% does work.

Yes, but also keep in mind that many people here, at least from what I know, are using a USB thumb drive as the "Array" to start the array, and in that case this would be very slow...

 

I always recommend using a path instead of an image because it saves a lot of space, and I previously had a few issues with the images on my system.

 

I would always try the path first and if it doesn't work move over to the image file.

Link to comment
44 minutes ago, trurl said:

But won't perform as well due to slower parity, and will keep array disks spinning since that file is always open.

Since the user asked for a server with SSDs, I assumed he wouldn't use spinning rust for that obligatory array drive. And with a single drive, there is no parity.

[screenshot]

Link to comment

Thanks for the plugin.

 

I have tried to create a zpool with 8x NVMe SSDs, but they are not all the same size, 4x 512G and 4x 500G. As far as I understand, it should take the lowest disk size and use that, so 8x 500G… and I can see that:
 

root@PlexServer:/mnt/zfs_pool# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zfs_pool  3.62T   407K  3.62T        -         -     0%     0%  1.00x    ONLINE  -

 

However, I see only 3.1T usable in unRAID when looking at df -h:

 

root@PlexServer:/mnt/zfs_pool# df -h .
Filesystem      Size  Used Avail Use% Mounted on
zfs_pool        3.1T  128K  3.1T   1% /mnt/zfs_pool

 

I know that one disk is used for parity, but where did the other almost 500G go?
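
(A quick way to compare the two views: zpool list reports the raw capacity of all disks, parity included, while zfs list and df report the usable space left after raidz parity and metadata overhead.)

zpool list zfs_pool    # raw size of all 8 disks, parity included (3.62T)
zfs list zfs_pool      # usable space after raidz1 parity and overhead (~3.1T)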

Link to comment

One more issue I have with docker on ZFS: for some yet unknown reason I can't get jlesage/nginx-proxy-manager working (and some others). It does not load the management UI, and there are lots of errors like this in the browser console. I'm trying to use Docker data-root: directory mode.
 

Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH
main.bundle.js:1 Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH
:81/images/favicons/favicon-32x32.png:1 Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH

 

For the life of me, I can't figure out what's wrong with it.

 

EDIT: Looks like it's specifically related to Docker data-root: directory mode, because when I switched to an xfs vDisk and put it on ZFS it seems to work OK... kind of strange 😕

Link to comment
28 minutes ago, VladoPortos said:

EDIT: Looks like it's specifically related to Docker data-root: directory mode, because when I switched to an xfs vDisk and put it on ZFS it seems to work OK... kind of strange 😕

 

I also had unusual problems with certain dockers trying to use docker in directory mode on ZFS last time I tried it.  Glad you got it working.

Link to comment
On 9/20/2021 at 2:14 PM, ich777 said:

Yes, but also keep in mind that many people here, at least from what I know, are using a USB thumb drive as the "Array" to start the array, and in that case this would be very slow...

 

I always recommend using a path instead of an image because it saves a lot of space, and I previously had a few issues with the images on my system.

 

I would always try the path first and if it doesn't work move over to the image file.

Sorry for the silly question... how can I change from docker image file to folder?

Link to comment
3 minutes ago, BasWeg said:

Sorry for the silly question... how can I change from docker image file to folder?

Stop the Docker service, then change the drop-down from image to path, specify the path, and start the Docker service again.

 

ATTENTION: Keep in mind that if you do this, your Docker page will be empty, but you can easily restore the containers by clicking ADD CONTAINER on the Docker page and selecting them again from the drop-down (these are actually the templates with your specific configurations that you had installed previously); they will then be downloaded again. You have to do this for each individual container you have installed, so I recommend taking screenshots of the Docker page so that you don't miss one if you have many containers installed.

Link to comment
37 minutes ago, ich777 said:

ATTENTION: Keep in mind that if you do this, your Docker page will be empty, but you can easily restore the containers by clicking ADD CONTAINER on the Docker page and selecting them again from the drop-down (these are actually the templates with your specific configurations that you had installed previously); they will then be downloaded again. You have to do this for each individual container you have installed, so I recommend taking screenshots of the Docker page so that you don't miss one if you have many containers installed.

Just use the Previous Apps feature on the Apps page and it will let you select any and all of your templates for reinstalling.

Link to comment
3 hours ago, jortan said:

 

I also had unusual problems with certain dockers trying to use docker in directory mode on ZFS last time I tried it.  Glad you got it working.

Actually, I didn't :( I mean, it installed OK and I could log into the UI, but there is some issue with Nginx-Proxy-Manager-Official that I've been trying to solve for the last 5 hours... it looks like my certificates just vanished (they are 0 bytes in the backup as well), so the image won't start because:
 

nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-1/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/npm-1/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

 

When I try to set everything up from the beginning, certbot has an issue retrieving the verification file for some reason... when I reverted to the original setup, Nginx-Proxy-Manager-Official still could not authenticate and create a certificate... it got stuck in a loop, and now I've hit the Let's Encrypt limit... I'm super pissed now, despite everything working a day earlier...

 

EDIT: The verdict is in, this has something to do with ZFS... I tried many combinations of running the docker vDisk on ZFS, not on ZFS, as a btrfs image, as an ext4 image, and so on... and so far it looks like the issue is on the appdata side.

If the appdata is on normal storage, either the unRAID cache or a normal share, everything works. But when the appdata for the container is on ZFS, the certificate verification fails 100% of the time. What the container does is create a verification file for the cert provider to check from outside... and it looks like this file is not created, or something; I can't find any error that would specifically say the file was not created...

 

EDIT2: found this:
 

2021/09/21 20:19:37 [alert] 459#459: *70 sendfile() failed (22: Invalid argument) while sending response to client, client: 18.192.36.99, server: <some domain>, request: "GET /.well-known/acme-challenge/YSqUaazqTGKmHlVXMr8LgWnELaVeHvJgpHqVO2jdREc HTTP/1.1", host: "<some domain>"

 

Looks like the issue I had at the start, where data read from ZFS is not what the browser expected... I can't explain it... I spent the whole day trying to migrate to ZFS, but it seems ZFS is not for me :)

 

Link to comment
8 hours ago, VladoPortos said:
2021/09/21 20:19:37 [alert] 459#459: *70 sendfile() failed (22: Invalid argument) while sending response to client, client: 18.192.36.99, server: <some domain>, request: "GET /.well-known/acme-challenge/YSqUaazqTGKmHlVXMr8LgWnELaVeHvJgpHqVO2jdREc HTTP/1.1", host: "<some domain>"

 

Looks like the issue I had at the start, where data read from ZFS is not what the browser expected... I can't explain it... I spent the whole day trying to migrate to ZFS, but it seems ZFS is not for me :)

 

 

Well, that confirms it: ZFS lacks sendfile() syscall support, at least on Unraid. This should be configurable in nginx, and it might be fairly simple to disable, as presumably the relevant config file is stored in appdata.

 

Look for nginx.conf and just change "sendfile on;" to "sendfile off;"

 

I had to do some scripting to ensure sendfile is disabled in my lancache docker, as the relevant configuration file was inside the docker image, not inside appdata, so my changes were overwritten on every docker update.
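
A minimal sketch of that kind of workaround (the container name and the config path inside the container are placeholders for illustration, not the actual lancache layout):

docker exec lancache sed -i 's/sendfile on;/sendfile off;/' /etc/nginx/nginx.conf
docker exec lancache nginx -s reload   # reload nginx so the change takes effect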

 

 

As an alternative, the swag docker doesn't have this issue, though it doesn't have the nice front-end of nginx-proxy-manager.

Link to comment

Hi, first of all, thank you very much for this amazing plugin. I started to use ZFS for the cache pools a week ago and it has been just great; I want to contribute a couple of things:

 

  • In the first post it may be relevant to talk about "ashift", especially if you are creating an SSD pool; setting it to 12 or 13 instead of the 9 recommended in older HDD-oriented guides is very important. It would also be great if you could discuss recordsize a bit, with recommendations by dataset use case (see the sketch after this list).
  • I created a guide for users of the ZFS plugin who want to monitor their ZFS pools using Grafana, Prometheus and zfs_exporter; you can check it out.
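
Something like the following as a sketch for the ashift/recordsize point (device and dataset names are placeholders, and the recordsize values are just common rules of thumb, not hard recommendations):

zpool create -o ashift=12 ssdpool mirror /dev/sdX /dev/sdY   # 4K-sector SSDs
zpool get ashift ssdpool                                     # verify the setting
zfs create -o recordsize=16K ssdpool/databases               # small random I/O (VMs, databases)
zfs create -o recordsize=1M  ssdpool/media                   # large sequential files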

 

 

 

Last but not least, as stated in my guide, there are some caveats with the monitoring that could be solved with the 2.1 version. I noticed on GitHub that you have built that version for Unraid 6.10; do you think it could be built for 6.9.2? I'm willing to write another tutorial for the zfs_influxdb tool for proper monitoring of the pools.

 

Link to comment

I have spent some time understanding ZFS and zpools and whatnot. I have an array of drives and don't want to use the out-of-the-box array that unRAID provides.

 

Right now, I have a USB 3.1 drive (64GB) as the only array disk so that the array starts up. Currently using 6.10.0-rc1.

 

[0:0:0:0]	disk             USB DISK 3.0     PMAP  /dev/sda   62.0GB
[1:0:0:0]	disk     USB      SanDisk 3.2Gen1 1.00  /dev/sdb   30.7GB
[4:0:0:0]	disk    ATA      Patriot Blast    12.2  /dev/sdc    240GB
[4:0:1:0]	disk    ATA      Patriot Blast    12.2  /dev/sdd    240GB
[4:0:2:0]	disk    ATA      WDC  WDBNCE0020P 40RL  /dev/sde   2.00TB
[4:0:3:0]	disk    ATA      WDC  WDBNCE0020P 40RL  /dev/sdf   2.00TB
[4:0:4:0]	disk    ATA      Patriot Blast    12.2  /dev/sdg    240GB
[4:0:5:0]	disk    ATA      Patriot Blast    12.2  /dev/sdh    240GB
[4:0:6:0]	disk    ATA      WDC  WDS200T2B0A 40WD  /dev/sdi   2.00TB
[4:0:7:0]	disk    ATA      Patriot Blast    12.2  /dev/sdj    240GB
[4:0:8:0]	disk    ATA      Patriot Blast    22.3  /dev/sdk    240GB
[4:0:9:0]	disk    ATA      WDC  WDS200T2B0A 40WD  /dev/sdl   2.00TB
[4:0:11:0]	disk    ATA      WDC WD20EFRX-68E 0A82  /dev/sdm   2.00TB
[4:0:12:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdn   16.0TB
[4:0:13:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdo   16.0TB
[4:0:14:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdp   16.0TB
[4:0:15:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdq   16.0TB
[4:0:16:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdr   16.0TB
[4:0:17:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sds   16.0TB
[4:0:18:0]	disk    ATA      WDC WD20EFRX-68E 0A82  /dev/sdt   2.00TB
[4:0:19:0]	disk    ATA      Hitachi HDS5C302 AA10  /dev/sdu   2.00TB
[4:0:20:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdv   16.0TB
[4:0:21:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdw   16.0TB
[4:0:22:0]	disk    ATA      Hitachi HDS5C302 AA10  /dev/sdx   2.00TB
[4:0:23:0]	disk    ATA      WDC WD20EFRX-68E 0A82  /dev/sdy   2.00TB
[4:0:24:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdz   16.0TB
[4:0:25:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdaa  16.0TB
[N:0:1:1]	disk    Force MP600__1                             /dev/nvme0n1  2.00TB
[N:1:1:1]	disk    Sabrent Rocket Q4__1                       /dev/nvme1n1  2.00TB
[N:2:1:1]	disk    Sabrent Rocket Q4__1                       /dev/nvme2n1  2.00TB
[N:3:1:1]	disk    PCIe SSD__1                                /dev/nvme3n1  1.00TB
[N:4:1:1]	disk    Sabrent Rocket Q4__1                       /dev/nvme4n1  2.00TB

 

current setup:

 

  pool: datastore
 state: ONLINE
  scan: scrub repaired 0B in 00:03:36 with 0 errors on Fri Sep 24 07:00:44 2021
config:

        NAME        STATE     READ WRITE CKSUM
        datastore   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdl     ONLINE       0     0     0

errors: No known data errors

  pool: fast
 state: ONLINE
  scan: scrub repaired 0B in 00:02:44 with 0 errors on Thu Sep 23 22:47:49 2021
config:

        NAME                                           STATE     READ WRITE CKSUM
        fast                                           ONLINE       0     0     0
          nvme0n1                                      ONLINE       0     0     0
          nvme1n1                                      ONLINE       0     0     0
          nvme2n1                                      ONLINE       0     0     0
          nvme-Sabrent_Rocket_Q4_03F10707144404184492  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Mon Sep 27 08:42:25 2021
        6.13T scanned at 4.81G/s, 412G issued at 323M/s, 6.13T total
        0B repaired, 6.56% done, 05:10:03 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            DEGRADED     0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267CE1           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267NDH           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL2672V8           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL268CAW           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL2660YG           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL266MEX           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267LNF           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL2678RA           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL266CQD           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267S9C           ONLINE       0     0     0
          raidz1-1                                      DEGRADED     0     0     0
            ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M1NJR9X5    ONLINE       0     0     0
            ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M4TRDPV1    ONLINE       0     0     0
            ata-Hitachi_HDS5C3020ALA632_ML4230FA10X4EK  ONLINE       0     0     0
            ata-Hitachi_HDS5C3020ALA632_ML0230FA16RPLD  ONLINE       0     0     0
            ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M6FLJJNK    UNAVAIL     98   499     0
        logs
          nvme3n1                                       ONLINE       0     0     0

errors: No known data errors

  pool: vmstorage
 state: ONLINE
  scan: scrub repaired 0B in 00:04:08 with 0 errors on Thu Sep 23 22:44:03 2021
config:

        NAME        STATE     READ WRITE CKSUM
        vmstorage   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdk     ONLINE       0     0     0

errors: No known data errors

 

I want to redo tank and vmstorage. I am going to wipe raidz1-1 out of tank (meaning I'll delete and recreate the pool) and throw those drives out; they are really old 2TB WD Reds and two 2TB Hitachis. I am going to buy another 10 16TB drives, and I think I can use the 240GB SSDs as cache/log drives, maybe mirrored for both cache and log, plus 2 hot spares? I have no use case for vmstorage since I now have the fast pool.

 

My ask is: given the info above, what would be the best config with performance as the #1 priority and some redundancy? These will not be mission-critical files; if they are, they will be backed up elsewhere.

 

Two vdevs of raidz with 10 disks each, with the cache/log set up as mirrors? Or something else?

I also have a 1TB NVMe drive that wouldn't be utilized.
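
For reference, a rough sketch of that kind of layout as a single zpool create (all device names are placeholders; note that log vdevs can be mirrored, but L2ARC cache devices are always striped and cannot be mirrored):

zpool create newtank \
  raidz sdA sdB sdC sdD sdE sdF sdG sdH sdI sdJ \
  raidz sdK sdL sdM sdN sdO sdP sdQ sdR sdS sdT \
  log mirror sdU sdV \
  cache sdW sdX \
  spare sdY sdZ
# raidz2 instead of raidz would add a second parity disk per vdev for more redundancy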

Link to comment

I have been working on a ZFS plugin for the Unraid Main tab; considering that I am not an experienced PHP developer, this will probably take a while. However, I want to ask you ZFS users: what information would you like to see in the Main tab, considering that as of today we don't have any information about the pools there?

 

Any comment or feedback appreciated, this is the current look:

 

[screenshot]

 

[screenshot]

 

Link to comment
31 minutes ago, Iker said:

I have been working on a ZFS plugin for the Unraid Main tab; considering that I am not an experienced PHP developer, this will probably take a while.

Please keep in mind that ZFS will be added to unRAID in 6.11 if the poll stays like it is now, see here:

 

Link to comment
34 minutes ago, Iker said:

I have been working on a ZFS plugin for the Unraid Main tab; considering that I am not an experienced PHP developer, this will probably take a while. However, I want to ask you ZFS users: what information would you like to see in the Main tab, considering that as of today we don't have any information about the pools there?

 

Any comment or feedback appreciated, this is the current look:

 

[screenshot]

 

[screenshot]

 

 

Wow, super!

90% done

I would vote for:

.. last snapshot date

.. which devices are in this pool

 

Link to comment
8 hours ago, ich777 said:

Please keep in mind that ZFS will be added to unRAID in 6.11 if the poll stays like it is now, see here:

True, but we don't know when that is going to be, probably next year; in the meantime I'm not planning to replace the Unraid interface, just add some visibility.

 

8 hours ago, Dtrain said:

I would vote for:

.. last snapshot date

.. which devices are in this pool

 

 

Devices in the pool  👌

Last snapshot date… It's completely possible, but please hang in there for a while. Snapshots in general are a different monster; I'm thinking about how to display them. There are some systems (like my own) with 1K snapshots in a single pool, and displaying or even listing every single one could be difficult.
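
One option might be to show only the newest few snapshots per dataset, which is cheap to query, for example:

zfs list -t snapshot -o name,creation -s creation -d 1 pool/dataset | tail -n 5
# sorted by creation time, so the last five lines are the five newest snapshots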

Link to comment
9 hours ago, Iker said:

Any comment or feedback appreciated, this is the current look:

 

 

Nice job, looks great!

 

Live read/write stats for pools? i.e. 'zpool iostat 3'?

 

Perhaps make the pool "green ball" turn another colour if a pool hasn't been scrubbed in >35 days (presumably it turns red if degraded?)  Maybe a similar traffic light indicator for datasets that don't have a recent snapshot?  This might really help someone who has added a dataset but forgotten to configure snapshots for it.
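
(The last scrub time is already exposed in zpool status, so an age check could just parse that line, for example:)

zpool status tank | grep 'scan:'   # the scan: line shows when the last scrub completed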

 

1 hour ago, Iker said:

Snapshots in general are a different monster; I'm thinking about how to display them. There are some systems (like my own) with 1K snapshots in a single pool,

 

Maybe make the datasets clickable - like the devices in a normal array?  You could then display various properties of the datasets (zfs get all pool/dataset - though maybe not all of these) as well as snapshots.  Some of the more useful properties for a dataset:

 

used, available, referenced, compression and compressratio, as well as snapshots.
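
All of those can be pulled in one call per dataset, e.g.:

zfs get -H -o property,value used,available,referenced,compression,compressratio pool/dataset
# -H drops the header and -o limits output to property/value pairs, which keeps it easy to parse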

Link to comment
3 hours ago, jortan said:

Live read/write stats for pools? i.e. 'zpool iostat 3'?

 

I'll take a look at this.

 

3 hours ago, jortan said:

Perhaps make the pool "green ball" turn another colour if a pool hasn't been scrubbed in >35 days (presumably it turns red if degraded?) 

 

The green ball turns yellow if degraded, red if faulted, blue if offline, and grey otherwise (unavailable or removed); the scrub info is already present in ZFS Companion, and I don't intend to replicate information that is already there.

 

3 hours ago, jortan said:

Maybe a similar traffic light indicator for datasets that don't have a recent snapshot?  This might really help someone who has added a dataset but forgotten to configure snapshots for it.

 

Great idea!

 

3 hours ago, jortan said:

Maybe make the datasets clickable - like the devices in a normal array?  You could then display various properties of the datasets (zfs get all pool/dataset - though maybe not all of these) as well as snapshots.  Some of the more useful properties for a dataset:

 

used, available, referenced, compression and compressratio

 

Got that info, nice & clean (I think), working now on snapshots, just the basic info & date:

 

[screenshot]

 

Haven't forgotten you (Dtrain):

 

[screenshot]

 

 

 

 

 

Link to comment
