ZFS plugin for unRAID


steini84

Recommended Posts

1 hour ago, Iker said:

@ich777 thank you very much, the Plugin is live since this morning in the CA as "ZFS Master".

 

To all of you using ZFS: any feedback, requests for new functionality, or bugs you find - don't hesitate to contact me :).

Hey, great plugin, though I have one issue and one inconvenience. The issue first: when using the Dark theme in Unraid, every second entry in the dataset list is unreadable (see attached screenshot).

 

The inconvenience is that I would like to filter out the datasets Docker auto-creates when the docker directory is placed on the ZFS array (visible in the screenshot); it would be great if this were possible. They all have the "legacy" mountpoint rather than an actual path, which I guess could make them easier to filter out.

 

Other than that, great plugin, keep up the work :D

Edited by Joly0
Link to comment

I also run into that inconvenience with the Docker datasets. I'm planning to create a settings page so you can specify your own "excluded datasets" by name pattern or attribute. I have never used the Dark theme, but the fix is pretty easy; I'm well aware that the plugin's styles need a lot of work in general. Both issues will probably be completely fixed in the next version.

  • Like 1
Link to comment

Hey all,

 

Does the ZFS plugin flush the RAM/ARC cache automatically? If not, is there a command line to do so?

The RAM usage of my Unraid stays at 50 for hours after copying 100GB; it seems like it won't clear until I restart. (The system actually flushed 2GB of RAM after 3-4 hours of idle; is this normal?)

 

I am currently running a RAIDZ2 of 10 x 16TB drives; is there any advantage to setting it up as two RAIDZ1/RAIDZ2 vdevs instead?

Capacity isn't an issue, but I want to optimize the zpool for my daily workflow (After Effects, Premiere, Cinema 4D, etc.).

 

I use an Intel Optane SSD for my SLOG; is it going to work fine on an AMD build?

 

Unraid version: 6.10.0-rc1

Wing

Edited by winglam
Link to comment

Hi all, recently I made two changes: 1) upgraded to RC1 of Unraid (which, from memory, has an upgraded ZFS), and 2) changed from a btrfs docker image file to a docker folder, ironically called "docker image". I've been trying to fault-find some performance issues that have occurred since, and found that a bunch of random snapshots have been taken of the docker image folder. There are no automated snapshots set for this folder, and I'm wondering if anyone else has noticed anything similar?

 

See screenshot.

 

I'll probably just delete the dataset and its subfolders and create a new one to see if that fixes it, but just in case...

 

[Screenshot: Screen Shot 2021-10-06 at 5:58:15 PM]

Edited by Marshalleq
Link to comment

So it turns out these are definitely not snapshots; something is creating datasets. The mountpoints all show as "legacy", which apparently means they're meant to be mounted via fstab - which of course they aren't. I'm guessing it's something odd with docker folder mode, so I'm going to go back to an image and try that.

Link to comment
35 minutes ago, Marshalleq said:

So it turns out these are definitely not snapshots; something is creating datasets. The mountpoints all show as "legacy", which apparently means they're meant to be mounted via fstab - which of course they aren't. I'm guessing it's something odd with docker folder mode, so I'm going to go back to an image and try that.

from https://daveparrish.net/posts/2020-11-10-Managing-ZFS-Snapshots-ignore-Docker-snapshots.html

Quote

DOCKER ON ZFS

Docker uses a compatible layered file system to manage its images. The file system used can be modified in the Docker settings. By default, on a root-ZFS system, Docker will use ZFS as the file system for images. Also by default, the datasets are created in the root of the pool on which Docker was installed. This causes Docker to create many datasets which look something like this:

$ zfs list -d 1 zroot | head
NAME                                                                          USED  AVAIL     REFER  MOUNTPOINT
zroot                                                                        42.4G   132G       96K  none
zroot/0004106facc034e1d2d75d4372f4b7f28e1aba770e715b48d0ed1dd9221f70c9        212K   132G      532M  legacy
zroot/006a51b4a6b323b10e9885cc8ef9023a725307e61f334e5dd373076d80497a52       44.6M   132G      388M  legacy
zroot/00d07f72b0c5e3fed2f69eeebbe6d82cdc9c188c046244ab3163dbdac592ae2b       6.89M   132G     6.88M  legacy

 

 

So I think this is intended behavior ;)

Link to comment

Aha, that makes sense! Thank you! I hadn't realised Unraid was actually using ZFS anywhere yet.

 

I've downgraded from RC1 but kept the docker folder option (created a new one, though) - it didn't work for me last time, but so far the performance issues are solved - so I think the issues were RC1, but it's obviously too soon to tell.

 

Then the question will be: what is it about RC1 that's causing issues? Argh...

Link to comment
6 hours ago, winglam said:

Does the ZFS plugin flush the RAM/ARC cache automatically? If not, is there a command line to do so?

The RAM usage of my Unraid stays at 50 for hours after copying 100GB; it seems like it won't clear until I restart. (The system actually flushed 2GB of RAM after 3-4 hours of idle; is this normal?)

 

This is normal, but ZFS will release this memory if it is needed by any other process running on the system.

 

You can test this: create a RAM disk of whatever size is appropriate and copy some files to it:

 

mkdir -p /mnt/ram
mount -t tmpfs -o size=64G tmpfs /mnt/ram/

 

Outside of edge cases where other processes benefit from large amounts of caching, it's generally best to leave ZFS to do its own memory management. If you want to set a 24GB ARC maximum, add this to /boot/config/go:

 

echo 25769803776 >> /sys/module/zfs/parameters/zfs_arc_max
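If you want to confirm how much RAM the ARC is actually holding (rather than judging from the dashboard figure), the kstats expose it directly. A minimal sketch, assuming a standard OpenZFS-on-Linux setup where /proc/spl/kstat/zfs/arcstats exists:

# Print the current ARC size and its configured ceiling, in GiB
awk '$1 == "size" || $1 == "c_max" { printf "%-6s %.2f GiB\n", $1, $3 / 2^30 }' /proc/spl/kstat/zfs/arcstats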

 

6 hours ago, winglam said:

I am currently running a RAIDZ2 of 10 x 16TB drives; is there any advantage to setting it up as two RAIDZ1/RAIDZ2 vdevs instead?

Capacity isn't an issue, but I want to optimize the zpool for my daily workflow (After Effects, Premiere, Cinema 4D, etc.).

 

Yes, but if you're optimising for performance on spinning rust, you should probably use mirrors.
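For reference only, a striped-mirror layout for ten disks would be created roughly like this (a sketch; the pool name and device paths are placeholders, not your actual drives):

# Five 2-way mirrors striped together: better random IOPS and faster resilvers
# than a single RAIDZ2, at the cost of 50% usable capacity
zpool create -o ashift=12 studio \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde \
  mirror /dev/sdf /dev/sdg \
  mirror /dev/sdh /dev/sdi \
  mirror /dev/sdj /dev/sdk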

 

6 hours ago, winglam said:

I use an Intel Optane SSD for my SLOG; is it going to work fine on an AMD build?

 

Optane covers a lot of products. As far as I'm aware, they all just show up as NVMe devices and work fine for ZFS. Where they don't work outside of modern Intel systems is when you want to use them in conjunction with Intel's software for tiered storage. I use an Optane P4800X in an (old, unsupported) Intel system for ZFS SLOG/L2ARC on unRAID.
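For completeness, attaching an Optane device as SLOG and/or L2ARC is just a zpool add; a sketch with hypothetical pool and device names:

zpool add tank log /dev/nvme0n1p1     # SLOG (separate intent log)
zpool add tank cache /dev/nvme0n1p2   # L2ARC (second-level read cache)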

Edited by jortan
Link to comment
On 10/15/2020 at 8:54 PM, steini84 said:

I have had problems with that before and I had to do

zfs destroy dataset

rm -rf mount point

Then zfs destroy dataset again

None of this helps in my case.
After 'rm -rf /mnt/SSD' and a reboot, the zpool SSD, including its mountpoint, is back - and busy!
How the heck do I get rid of it??

I don't want to use ZFS at all anymore; I'd like to use those SSDs for cache instead. Could I just uninstall the ZFS plugin, reformat the drives with Preclear, and then use them in a cache pool?

 

Edit:

Forget it. Just found the help here:

https://www.osso.nl/blog/zfs-destroy-dataset-is-busy/
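For anyone else hitting "dataset is busy": the linked post boils down to finding what still holds the mountpoint, often a process or a mount duplicated into another namespace (e.g. a container). A rough sketch, using my pool/path as the example:

fuser -vm /mnt/SSD             # processes still using the mountpoint
grep /mnt/SSD /proc/*/mounts   # mounts held open in other namespaces
zpool destroy SSD              # once nothing holds it, this should succeed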

Edited by JoergHH
Self solution :-)
Link to comment
11 hours ago, winglam said:

Hey all,

 

Does the ZFS plugin flush the RAM/ARC cache automatically? If not, is there a command line to do so?

The RAM usage of my Unraid stays at 50 for hours after copying 100GB....

 

That's very common; it's how ZFS is supposed to work. Even with a SLOG, the ARC continues working as normal. However, you could use "zinject -a" to force-flush the ARC (without the failure simulation, of course). My advice: try setting "primarycache=metadata" on the dataset and see how it performs; with that configuration there should not be any difference in performance.

 


 

https://openzfs.github.io/openzfs-docs/man/8/zinject.8.html
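A minimal sketch of that suggestion (the dataset name is a placeholder):

zfs set primarycache=metadata tank/projects   # cache only metadata for this dataset
zfs get primarycache tank/projects            # verify the setting
zfs set primarycache=all tank/projects        # revert to the default if it hurts performance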

  

Link to comment
13 hours ago, Marshalleq said:

Aha, that makes sense! Thank you! I hadn't realised Unraid was actually using ZFS anywhere yet.

 

I've downgraded from RC1 but kept the docker folder option (created a new one, though) - it didn't work for me last time, but so far the performance issues are solved - so I think the issues were RC1, but it's obviously too soon to tell.

 

Then the question will be: what is it about RC1 that's causing issues? Argh...

Regarding the datasets: that has nothing to do with Unraid, it's the filesystem driver Docker uses. Usually it uses overlayfs or overlay2 (however it's called), but as soon as the docker directory is on a ZFS array, Docker uses the zfs driver. The problem is that, afaik, you currently can't use another driver; there is work being done to make overlayfs compatible with ZFS, but that is a long-running issue and it might take a while to see it fixed. Other than that, creating tons of datasets is normal ZFS + Docker behavior.
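If you want to confirm which storage driver your Docker instance is actually using, something like this should show it (a sketch; output varies per system):

docker info --format '{{.Driver}}'   # prints "zfs" when the docker directory lives on a ZFS dataset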

Link to comment

Yep, understood. The Unraid downgrade was all about performance issues: running RC1, my Chia container had huge performance issues, and downgrading resolved that. I did also notice the loop service at 100%, and trying a docker image froze the system completely.

 

So there's still something problematic about ZFS, Docker and Unraid.

 

Maybe it's the driver issue you mention.

Link to comment
On 10/5/2021 at 11:28 AM, Joly0 said:

Hey, great plugin, though I have one issue and one inconvenience...

 

ZFS Master 2021.10.08e is live with a lot of fixes and new functionality, check it out:

 

2021.10.08e

  • Add - SweetAlert2 for notifications
  • Add - Refresh and Settings Buttons
  • Add - Mountpoint information for Pools
  • Add - Configurable Settings for Refresh Time, Destructive Mode, Dataset Exclusions, Alert Max Days Snapshot Icon 
  • Fix - Compatibility with Other Themes (Dark, Grey, etc.)
  • Fix - Improper dataset parsing
  • Fix - Regex warnings
  • Fix - UI freeze error on some systems when destroying a Dataset
  • Remove - Unassigned Devices Plugin dependency
Edited by Iker
  • Thanks 1
Link to comment
11 minutes ago, Xxharry said:

How can I share a ZFS dataset via NFS? TIA

I would expect to edit the exports file manually, and to make sure it persists across reboots. I'm trying to remember whether ZFS has native NFS sharing built in like it does for SMB; if it does, I assume it will work the same way, i.e. edit the existing sharing mechanism. I think the main point is that the sharing mechanisms built into the Unraid GUI currently do not work for ZFS; you've got to do it at the command line.
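If you do go the manual route, it would look roughly like this (the path and network are examples; note that Unraid's root filesystem is rebuilt from flash at boot and Unraid may rewrite /etc/exports for its own shares, so you'd need to re-apply this from the go file or a script):

echo '/mnt/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra   # re-read the exports table
exportfs -v    # confirm what is currently exported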

 

Hope that helps.

 

Marshalleq

Link to comment
2 hours ago, Marshalleq said:

Hi all, does anyone know why the zdb command does not work? Is this something that could be fixed? I fairly regularly find it would be useful to have.

 

Thanks.

 

As far as I have checked, zdb works; however, it requires a cache file that doesn't exist in the unRAID config. Try this:

 

UNRAID:~# mkdir /etc/zfs
UNRAID:~# zpool set cachefile=/etc/zfs/zpool.cache hddmain
UNRAID:~# zdb -C hddmain

 

 

Edited by Iker
  • Like 3
Link to comment

It's not really a big deal. I suppose it's because the folder doesn't exist in Unraid (the whole root filesystem being ephemeral) and the pools by default don't have the cachefile property set; ZFS may also initialize before the path exists, so having the cache file configured for the pools could be quite problematic in Unraid.

 

More info:

https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#the-etc-zfs-zpool-cache-file

https://github.com/openzfs/zfs/issues/1035

 

Why it could be problematic:

https://github.com/openzfs/zfs/issues/2433

 

Edited by Iker
Link to comment
8 hours ago, Xxharry said:

How can I share a ZFS dataset via NFS? TIA

 

I've done this via dataset properties.

To share a dataset:

zfs set sharenfs='rw=@<IP_RANGE>,fsid=<FileSystemID>,anongid=100,anonuid=99,all_squash' <DATASET>

<IP_RANGE> is something like 192.168.0.0/24, to restrict rw access. Just have a look at the NFS share properties.

<FileSystemID> is a unique ID you need to set. I started with 1 and incremented the number for every shared dataset.

<DATASET> is the dataset you want to share.

 

The magic was the FileSystemID; without setting this ID, it was not possible to connect from any client.

 

To unshare a dataset, you can easily set:

zfs set sharenfs=off <DATASET>
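To double-check the share from the server side afterwards, something like this should work (dataset name and address are examples):

zfs get sharenfs tank/media   # show the property as ZFS applied it
exportfs -v                   # list the active kernel NFS exports
showmount -e 192.168.0.10     # or query the export list from a client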

 

  • Like 2
  • Thanks 2
Link to comment
On 10/8/2021 at 10:16 PM, Iker said:

 

ZFS Master 2021.10.08e is live with a lot of fixes and new functionality, check it out:

 

2021.10.08e

  • Add - SweetAlert2 for notifications
  • Add - Refresh and Settings Buttons
  • Add - Mountpoint information for Pools
  • Add - Configurable Settings for Refresh Time, Destructive Mode, Dataset Exclusions, Alert Max Days Snapshot Icon 
  • Fix - Compatibility with Other Themes (Dark, Grey, etc.)
  • Fix - Improper dataset parsing
  • Fix - Regex warnings
  • Fix - UI freeze error on some systems when destroying a Dataset
  • Remove - Unassigned Devices Plugin dependency

Hey, I am trying to hide the auto-generated Docker datasets, but their mountpoint is "legacy" and I don't know exactly what to write into the exclusion. Any instructions you could give for that?

 

Btw, great update; it works nicely so far, and the Dark theme compatibility is nice.

Link to comment
37 minutes ago, Joly0 said:

Hey, I am trying to hide the auto-generated Docker datasets, but their mountpoint is "legacy" and I don't know exactly what to write into the exclusion. Any instructions you could give for that?

 

Btw, great update; it works nicely so far, and the Dark theme compatibility is nice.

An example is documented in the settings.

 

I have my docker files in

/mnt/SingleSSD/docker/

The zpool is SingleSSD and the dataset is docker, so the working pattern for the exclusion is:

 

/^SingleSSD\/docker\/.*/
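If you're not sure which names to match, listing every dataset with a "legacy" mountpoint shows exactly what Docker created (adjust the pool name):

zfs list -r -H -o name,mountpoint SingleSSD | awk '$2 == "legacy" { print $1 }'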

 

Link to comment
