ZFS plugin for unRAID


steini84

Recommended Posts

Well, for the moment I got it working by creating a user script to run

 

zpool export <poolname>

zpool import <poolname>

 

on array start. Of course this means I can't autostart the VMs, since they're using ZFS for storage, but that's only a minor inconvenience. It populates the /dev/zvol directory and the drive assignments work properly. Not sure why it doesn't do this correctly on system boot, but if anyone else is using it this way, this is how I got around it.
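A minimal sketch of what that User Scripts entry could look like, set to run at array start; the pool name "tank" is a placeholder, substitute your own:

#!/bin/bash
# Re-import the pool after array start so /dev/zvol gets populated
# ("tank" is a hypothetical pool name)
zpool export tank
zpool import tank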

Link to comment

Hi all! I've only just recently gotten into ZFS and have been enjoying it, but I am not well versed with CLIs of any kind (but I'm willing to give it a go) and I'm relatively new to Linux as well.

Now, I'm having permission issues with my unRAID ZFS setup. Again, I haven't used the CLI to manage it as I'm not familiar with the commands or syntax, but from what I've been reading it may be the simplest solution (maybe with chmod?). The main thing I need is to manage the files, i.e. have rwx access in Krusader. Using the binhex/arch-krusader container I can modify vdisk.img files (e.g. rename them) but I cannot delete them. I've tried R/W, R/W Slave and R/W Share with "privileged" & "bridge" ("host" conflicts with the VM VNC port). Also, some folders (datasets) simply have no write permission, showing "r-x" in Krusader (e.g. my ISO folder, where some files can be modified and some cannot), though I'm not trying to modify the folders (datasets) themselves. Actually, I forgot: I was also trying to rename one of the dataset folders and to place files in the 'master dataset', i.e. the zpool root. I'll have to adjust Krusader's access so this doesn't happen again; as noted below, I did get access to the files with chmod.

Separately, some Docker containers don't seem to have full permissions (I'm assuming), or are incompatible with ZFS; in the case of jlesage/nginx-proxy-manager it cannot start, and Krusader is missing config files. I'm curious how others' Docker containers are running.

All in all, it feels like a simple permission issue. I'm okay with using another file manager instead of Krusader, or with sharing the datasets instead (but I got rejected using {zfs set sharesmb=on}, and I think that may be inefficient); I just can't think of another way of adding permissions, or of another file manager to try.
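As an aside on the sharesmb attempt above, the property is set per dataset, and on Linux I believe it relies on Samba usershare support, which may not be enabled on a stock unRAID install; a hypothetical example, assuming the pool is named 950:

# Share one dataset over SMB via ZFS (needs Samba usershare support)
zfs set sharesmb=on 950/docker
# Check what ZFS reports for the property
zfs get sharesmb 950/docker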

 

I'm on unRAID 6.7.1 with the Unassigned Devices & ZFS plugins. I really don't have an array, just a single 16GB USB stick (dummy_placeholder) and no cache. For ZFS I use a single Samsung 950 Pro 256GB NVMe SSD with a single pool. Apart from a handful of datasets, I haven't added anything else other than steini84's config {zfs set compression=lz4} & {zfs set atime=off}, and I chose a 4GB ARC [4294967296]. My pool is mounted at "/mnt/zfs/950" and the Docker appdata setting points to "/mnt/zfs/950/docker/appdata".
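For context, a rough sketch of those settings, assuming the pool is simply named 950 (the name itself is a guess from the mountpoint):

# Pool-wide properties mentioned above
zfs set compression=lz4 950
zfs set atime=off 950

# Cap the ARC at 4 GiB (4294967296 bytes); this is a runtime tunable,
# so it has to be re-applied after every boot (e.g. via User Scripts)
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max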

 

P.S. I have FreeNAS running as a VM with two physical NICs, the onboard USB/SATA controller passed through to it (no vdisks), and 16GB RAM. I intend to use it for all my KVM/Docker storage, but I'm still searching for a way to get unRAID to play nicely with it. I've been using unRAID for some time and really enjoy KVM, but I've been considering XCP-NG + FreeNAS with iSCSI (and Xen just got native ZFS support 🤩 so I might be considering the switch sooner than I thought 🤔 it just doesn't play nicely with nVidia GPUs 😕)

Edited by Basserra
Mistakes were made - I'm still new to ZFS and have Docker issues
Link to comment

I was playing around a bit and launched three new Krusader containers from the same template: one on my ZFS drive, another on my array USB, and another on my FreeNAS share through UD's SMB. The one on ZFS gives me an error about being unable to save bookmarks. Afterwards I took a look at the folder structure of each container and realized that the one on ZFS is incomplete (see image), missing the ".config" & ".local" folders, whereas the one over SMB works as intended (though that isn't the case with vdisks). The paths are as follows:

ZFS: /mnt/zfs/950/docker/appdata/krusader0
USB: /mnt/user/usb/docker-appdata/krusader0
SMB: /mnt/disks/FreeNAS_DataBass/Docker/AppData/Krusader

I don't mean to spam, but I'm getting concerned about populating it with more data and then needing to migrate it off and revert back to btrfs. I suppose it wouldn't be too big an ordeal since I have FreeNAS on standby; I'd just like to use ZFS locally for now until I can integrate FreeNAS better.

 

P.S. I also read up on chmod a bit and was able to clean up some files using {chmod -R 777 folder/file} and then Krusader, but I'm still struggling with the rest.
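A hedged alternative to blanket 777 permissions, assuming the containers run as unRAID's usual nobody:users (UID 99 / GID 100) - check the container template before relying on that:

# See who actually owns the files the container can't delete
ls -ln /mnt/zfs/950/docker/appdata

# Hand them to nobody:users, then grant owner/group rw (and x on directories)
# without opening everything up to 777
chown -R nobody:users /mnt/zfs/950/docker/appdata
chmod -R u=rwX,g=rwX,o=rX /mnt/zfs/950/docker/appdata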

Krusader Cannot Make dot Folders.png

Edited by Basserra
Fired up another container through UD SMB
Link to comment
  • 2 months later...

Love this plugin. 

One issue I have is that mountpoints don't seem to be persistent across a reboot.

After booting I have to do a zpool export <pool> and then a zpool import -R <mountpoint> <pool> to get the mountpoint correct again.

zfs get mountpoint shows it correctly after the reimport.

 

Anything I missed? It's a pain because my dockers/VMs are on there, so they fail after a reboot until it's manually fixed.

Link to comment
8 hours ago, glennv said:

Love this plugin. 

One issue I have is that mountpoints don't seem to be persistent across a reboot.

After booting I have to do a zpool export <pool> and then a zpool import -R <mountpoint> <pool> to get the mountpoint correct again.

zfs get mountpoint shows it correctly after the reimport.

 

Anything I missed? It's a pain because my dockers/VMs are on there, so they fail after a reboot until it's manually fixed.

Can you post the output of zfs get all <pool>?

Link to comment

To avoid exploding the post, I grepped for mountpoint, as I guess that's what you need, right?

It's all good, but after a boot I have to redo it as described. By default the pools are mounted at the root level after boot.

 

#zfs get all | grep mountpoint
ZFS_BACKUPS_V1                            mountpoint            /mnt/disks/ZFS_BACKUPS_V1            default
ZFS_BACKUPS_V1/DCP                        mountpoint            /mnt/disks/ZFS_BACKUPS_V1/DCP        default
ZFS_BACKUPS_V1/FCS                        mountpoint            /mnt/disks/ZFS_BACKUPS_V1/FCS        default
ZFS_BACKUPS_V1/NODE1                      mountpoint            /mnt/disks/ZFS_BACKUPS_V1/NODE1      default
ZFS_BACKUPS_V1/TACH-SRV3                  mountpoint            /mnt/disks/ZFS_BACKUPS_V1/TACH-SRV3  default
ZFS_BACKUPS_V1/W10                        mountpoint            /mnt/disks/ZFS_BACKUPS_V1/W10        default
ZFS_BACKUPS_V1/appdata                    mountpoint            /mnt/disks/ZFS_BACKUPS_V1/appdata    default
virtuals                                  mountpoint            /mnt/disks/virtuals                  default
virtuals/DCP                              mountpoint            /mnt/disks/virtuals/DCP              default
virtuals/FCS                              mountpoint            /mnt/disks/virtuals/FCS              default
virtuals/NODE1                            mountpoint            /mnt/disks/virtuals/NODE1            default
virtuals/appdata                          mountpoint            /mnt/disks/virtuals/appdata          default
virtuals2                                 mountpoint            /mnt/disks/virtuals2                 default
virtuals2/Mojave                          mountpoint            /mnt/disks/virtuals2/Mojave          default
virtuals2/MojaveDev                       mountpoint            /mnt/disks/virtuals2/MojaveDev       default
virtuals2/TACH-SRV3                       mountpoint            /mnt/disks/virtuals2/TACH-SRV3       default
virtuals2/W10                             mountpoint            /mnt/disks/virtuals2/W10             default

 

 

Edit:

In case you do need other stuff, here it is from a single pool.

# zfs get all virtuals
NAME      PROPERTY              VALUE                  SOURCE
virtuals  type                  filesystem             -
virtuals  creation              Fri Sep  6 15:29 2019  -
virtuals  used                  207G                   -
virtuals  available             243G                   -
virtuals  referenced            27K                    -
virtuals  compressratio         1.33x                  -
virtuals  mounted               yes                    -
virtuals  quota                 none                   default
virtuals  reservation           none                   default
virtuals  recordsize            128K                   default
virtuals  mountpoint            /mnt/disks/virtuals    default
virtuals  sharenfs              off                    default
virtuals  checksum              on                     default
virtuals  compression           lz4                    local
virtuals  atime                 off                    local
virtuals  devices               on                     default
virtuals  exec                  on                     default
virtuals  setuid                on                     default
virtuals  readonly              off                    default
virtuals  zoned                 off                    default
virtuals  snapdir               hidden                 default
virtuals  aclinherit            restricted             default
virtuals  createtxg             1                      -
virtuals  canmount              on                     default
virtuals  xattr                 on                     default
virtuals  copies                1                      default
virtuals  version               5                      -
virtuals  utf8only              off                    -
virtuals  normalization         none                   -
virtuals  casesensitivity       sensitive              -
virtuals  vscan                 off                    default
virtuals  nbmand                off                    default
virtuals  sharesmb              off                    default
virtuals  refquota              none                   default
virtuals  refreservation        none                   default
virtuals  guid                  882676013499381096     -
virtuals  primarycache          all                    default
virtuals  secondarycache        all                    default
virtuals  usedbysnapshots       0B                     -
virtuals  usedbydataset         27K                    -
virtuals  usedbychildren        207G                   -
virtuals  usedbyrefreservation  0B                     -
virtuals  logbias               latency                default
virtuals  objsetid              54                     -
virtuals  dedup                 off                    local
virtuals  mlslabel              none                   default
virtuals  sync                  standard               default
virtuals  dnodesize             legacy                 default
virtuals  refcompressratio      1.00x                  -
virtuals  written               27K                    -
virtuals  logicalused           276G                   -
virtuals  logicalreferenced     13.5K                  -
virtuals  volmode               default                default
virtuals  filesystem_limit      none                   default
virtuals  snapshot_limit        none                   default
virtuals  filesystem_count      none                   default
virtuals  snapshot_count        none                   default
virtuals  snapdev               hidden                 default
virtuals  acltype               off                    default
virtuals  context               none                   default
virtuals  fscontext             none                   default
virtuals  defcontext            none                   default
virtuals  rootcontext           none                   default
virtuals  relatime              off                    default
virtuals  redundant_metadata    all                    default
virtuals  overlay               off                    default
virtuals  encryption            off                    default
virtuals  keylocation           none                   default
virtuals  keyformat             none                   default
virtuals  pbkdf2iters           0                      default
virtuals  special_small_blocks  0                      default
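A hedged reading of the output above (not confirmed in the thread): every mountpoint shows SOURCE "default", which suggests the /mnt/disks/... paths only exist because of the -R <mountpoint> (altroot) passed at import time; altroot is temporary and is lost when the pool is re-imported at boot, so the pools fall back to mounting at /<poolname>. Setting the mountpoint explicitly should make it persist, for example:

# Store the mountpoint as a persistent property instead of relying on
# the temporary altroot from "zpool import -R"
zfs set mountpoint=/mnt/disks/virtuals virtuals

# SOURCE should now read "local" instead of "default"
zfs get mountpoint virtuals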

Edited by glennv
Link to comment
  • 2 weeks later...

Something else I do wonder about with ZFS, though.

How about TRIM for SSDs?

I have several ZFS SSD pools, and the normal trim command I would run against my btrfs pool(s) no longer works for ZFS.

Is it part of ZFS itself somehow? Or am I missing something?

 

---

# fstrim -v /mnt/disks/virtuals
fstrim: /mnt/disks/virtuals: the discard operation is not supported

Edited by glennv
Link to comment
3 minutes ago, glennv said:

Something else I do wonder about with ZFS, though.

How about TRIM for SSDs?

I have several ZFS SSD pools, and the normal trim command I would run against my btrfs pool(s) no longer works for ZFS.

Is it part of ZFS itself somehow? Or am I missing something?

 

---

# fstrim -v /mnt/disks/virtuals
fstrim: /mnt/disks/virtuals: the discard operation is not supported

You can use zpool trim <poolname>

https://github.com/zfsonlinux/zfs/commit/1b939560be5c51deecf875af9dada9d094633bf7

 

I had some problems with autotrim, but I might need to test it again since the bug seems to be fixed 

https://github.com/zfsonlinux/zfs/issues/8550
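A short sketch of both options, reusing the pool name from earlier in the thread; autotrim is the property the linked issue was about, so scheduled manual trims are the cautious choice:

# One-off manual trim of the whole pool
zpool trim virtuals

# Watch trim progress per vdev
zpool status -t virtuals

# Or let ZFS trim continuously (the feature affected by the linked issue)
zpool set autotrim=on virtuals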

Link to comment
  • 3 weeks later...

Wow, how did I miss that this plugin existed? With all these performance issues in 6.7.x I've been thinking of moving back to Proxmox and maybe running unRAID in a VM (if that's even worth it). This plugin may just turn unRAID back to awesome. I don't know why unRAID doesn't just include this plugin (which would get around the recompile issues) and make it an expert option so you can make a ZFS array instead. Or one of each, unRAID / ZFS (that's what I'm going to do). I mean, BTRFS doesn't even do a RAID 5/6 equivalent reliably anyway.

  • Like 1
Link to comment

Hi, I have three basic questions about this plugin.

 

But first the background context and assumptions how I'd set it up:

1x pool with 2x 8TB HDDs in a single vdev as a mirror - this is for my photos and documents - I've wanted greater protection on these for a long time

1x pool with a single 1TB SSD in a vdev - this is for my VMs and Docker - I know it's not redundant, but I'm happy with a 'backup level' of redundancy on this for now. Maybe I'll buy another 1TB SSD later, though.

Then I'd have 7 HDDs assigned to unRAID

 

Question 1

I think I can have multiple pools, right?

 

Question 2

I'm aware there is a way to get self-healing on a single drive; I assume that would be preferable to a single drive without it?

 

Question 3

Any tips generally? I also have the option of 2 NVMe drives on board if that helps. I'm also fortunate to have plenty of RAM and CPU.

 

Many thanks,

 

Marshalleq

Edited by Marshalleq
Link to comment
Hi, I have three basic questions about this plugin.

 

But first the background context and assumptions how I'd set it up:

1x pool with 2x 8TB HDDs in a single vdev as a mirror - this is for my photos and documents - I've wanted greater protection on these for a long time

1x pool with a single 1TB SSD in a vdev - this is for my VMs and Docker - I know it's not redundant, but I'm happy with a 'backup level' of redundancy on this for now. Maybe I'll buy another 1TB SSD later, though.

Then I'd have 7 HDDs assigned to unRAID

 

Question 1

I think I can have multiple pools, right?

 

Question 2

I'm aware there is a way to get self-healing on a single drive; I assume that would be preferable to a single drive without it?

 

Question 3

Any tips generally? I also have the option of 2 NVMe drives on board if that helps. I'm also fortunate to have plenty of RAM and CPU.

 

Many thanks,

 

Marshalleq

 

1. Yes, you can have multiple zpools.

2. Not sure about the self-healing of single-drive zpools, but I use them (a large spindle) as snapshot send/receive targets.

3. Regarding tips:

- Auto-mounting at a specific mountpoint does not work for me on reboot (even though the mountpoint is properly set on the datasets), so I created a user script that runs after reboot to export/import with the target mountpoint.

- For appdata to work properly you need to mount your datasets under /mnt; I use /mnt/disks/appdata, for example.

- For trimming I run nightly zpool trim commands via the User Scripts plugin.

- I heavily use zfs snapshot, send/receive etc. triggered by the User Scripts plugin, plus rollbacks, cloning and so on, and it's all working great (see the sketch after this list).

- Use the User Scripts plugin to limit the ARC after boot, as described at the beginning of this topic.

- Read this whole topic and check the useful tips for monitoring and sending notifications on ZFS events, or for use in your own snapshot scripts etc.

- Do not upgrade unRAID unless a new ZFS plugin version has become available (ouch).
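A minimal sketch of the kind of nightly User Scripts entry described above; the pool/dataset names and the snapshot naming scheme are illustrative only, not something confirmed in this thread:

#!/bin/bash
# Nightly maintenance: trim the SSD pool, snapshot appdata, and send it
# incrementally to a backup pool (all names are hypothetical)
set -e

zpool trim virtuals

today=$(date +%Y%m%d)
yesterday=$(date -d yesterday +%Y%m%d)
zfs snapshot virtuals/appdata@auto-$today

# Incremental send if yesterday's snapshot exists, otherwise a full send
if zfs list -t snapshot virtuals/appdata@auto-$yesterday >/dev/null 2>&1; then
  zfs send -i virtuals/appdata@auto-$yesterday virtuals/appdata@auto-$today | \
    zfs receive -F ZFS_BACKUPS_V1/appdata
else
  zfs send virtuals/appdata@auto-$today | zfs receive -F ZFS_BACKUPS_V1/appdata
fi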

Link to comment

@johnnie.black And I'm not stupid either; magic only works in books. But yesterday I read something about setting it up with backup metadata on a single drive, so, not knowing a lot about ZFS yet, it sounded feasible - apparently it would self-heal except for hardware failures or something. Anyway, it was just a question.

 

@glennv Thanks heaps for your reply, that's very helpful. I have already read through the whole thread and quite a number of long-form articles to get my basic understanding straight. Seems like I'll need to get used to using scripts for this one. Very exciting though!

Link to comment
19 minutes ago, Marshalleq said:

But yesterday I read something about setting it up with backup metadata on a single drive, so, not knowing a lot about ZFS yet, it sounded feasible - apparently it would self-heal except for hardware failures or something. Anyway, it was just a question.

Metadata is redundant, or rather duplicated (a better word), the same as btrfs defaults to with single HDDs, and it can still detect data corruption but can't fix it without redundancy, also the same as btrfs.
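For what it's worth, a hedged note on the single-drive self-healing idea from above: ZFS can keep extra copies of data blocks on the same disk via the copies property, which lets a scrub repair bitrot on a single drive at the cost of capacity, though it does nothing for an outright drive failure. A hypothetical example:

# Keep two copies of every data block in this dataset (halves usable space);
# metadata already gets extra copies by default
zfs set copies=2 tank/photos

# A scrub is what actually finds and repairs corrupted blocks
zpool scrub tank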

 

 

Edited by johnnie.black
Link to comment
