[PLUGIN] ZFS Master


Iker

Recommended Posts

I just read the bug report thread; my best guess is that the Dataset has some properties not supported by the current ZFS version, or that the unRaid UI implementation is not importing the pool correctly. Here are some ideas to debug the issue a little further:

 

More Diag Info

  • If you create a new folder on the same dataset, does everything work with this new folder?
  • Create a dataset in unRaid 6.12 and check that everything works correctly and you can see the folder and its content. (Just to check whether the problem is with the existing dataset or with the unRaid implementation; see the example after this list.)
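If you prefer to test from the shell instead of the GUI, creating a throwaway dataset looks like this (pool and dataset names are placeholders):

zfs create tank/zfsmaster-test     # create a test dataset on pool "tank"
zfs list -r tank                   # confirm it shows up alongside the others
zfs destroy tank/zfsmaster-test    # clean up afterwards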

 

Possible Solutions

  • Do not assign the pool to unRaid Pools; import it using the command line and see if that works (zpool import, then zfs mount -a; see the example after this list).
  • As weird as it may be, you could clone the Dataset in unRaid 6.12, see if the information shows up, promote it, and let go of the old one.
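A minimal sketch of that command-line import, assuming a pool named "tank" (replace it with your pool's name):

zpool import                 # with no arguments, lists pools available for import
zpool import tank            # import the pool by name
zfs mount -a                 # mount all datasets of the imported pools
zfs list -r tank             # verify the datasets and their mount points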

 

Edited by Iker
Link to comment

When using the "Create Dataset" feature, if I fill in the Mount parameter with Chinese characters, the creation will fail.
 

Also, when I modify the mount point from the command line and the path contains Chinese characters, the plugin fails to load the dataset list.

 

An example of a Chinese path is: /mnt/test/测试.
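For reference, this is how such a mount point is set from the shell (pool and dataset names are examples):

zfs set mountpoint=/mnt/test/测试 tank/data     # path containing Chinese characters
zfs get mountpoint tank/data                   # confirm the property took effect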

 

I hope you can provide support for non-ASCII (e.g., Chinese) paths.

 

Thanks for your work.

 

Link to comment

FYI for anyone searching for this: after RC3, Docker has a new "directories" mode that creates a ZFS dataset per container. This creates many snapshots, and ZFS Master takes a solid 2-3 minutes to load. Go into Settings > Management Settings and select Dashboard instead of Main as the page shown on login, or you will hang for a long time at first login. I tested with a baseline of 350 snapshots (1 per Docker dependency) and it was still very slow; last night I had 8,000 snapshots and it hard crashed and wouldn't let me log in to the GUI. I asked in the RC3 thread how to disable snapshots on a single dataset, as disabling them on the Docker dataset should help with this issue.
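A quick way to see how many snapshots you are dealing with (the pool name "cache" is just an example):

zfs list -H -t snapshot | wc -l                    # total snapshots on the system
zfs list -H -t snapshot -o name -r cache | wc -l   # snapshots under a single pool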

Link to comment

On 12/29/2022 at 2:46 AM, SimonF said:

Have you tried pci=realloc=off

 

 

 

On 2/22/2023 at 8:16 AM, Iker said:

Yes, sorry for the late response; I'll be pushing an update next week with the nested mount command.

 


For now, at least, that's the way to go. In the coming 6.12 version, ZFS will be a first-class citizen, which means you can import pools and create shares using unRaid's GUI.

I found the options and syntax needed to make sharenfs work great!  Tried replicating it via the GUI and/or the go file, but they are frustrating.  Also added NFS to Windows 11 Pro; couldn't be happier.
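For anyone looking for a starting point, a sharenfs setup from the shell looks roughly like this (dataset name, network, and export options are examples; adjust them to your environment):

zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" tank/media   # export the dataset to the LAN
zfs get sharenfs tank/media                                       # confirm the property
showmount -e localhost                                            # verify the export is active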

Link to comment

Thank you for the plugin!  It has been really useful!

But I'm having a problem: none of my ZFS disks stay spun down.  I traced it down to this plugin.  In my log I see this message:

Apr 22 21:35:21 bertha emhttpd: spinning down /dev/sdab
Apr 22 21:35:33 bertha emhttpd: read SMART /dev/sdab


Is there a setting I'm overlooking? Or is this expected behavior?


Thanks!

Sam
 

Link to comment
4 hours ago, samsausages said:

Thank you for the plugin!  It has been really useful!

But I'm having a problem: none of my ZFS disks stay spun down.  I traced it down to this plugin.  In my log I see this message:

Apr 22 21:35:21 bertha emhttpd: spinning down /dev/sdab
Apr 22 21:35:33 bertha emhttpd: read SMART /dev/sdab


Is there a setting I'm overlooking? Or is this expected behavior?


Thanks!

Sam
 

The first line is when a spindown is issued, and the second one is Unraid trying to read the SMART data because it thinks the drive has just been spun up again.   The issue is trying to determine if something IS spinning it up again.
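One way to check a drive's power state from the shell without waking it (the device name is just an example):

hdparm -C /dev/sdab                # reports "standby" or "active/idle" without spinning the drive up
smartctl -n standby -i /dev/sdab   # -n standby skips the query entirely if the drive is asleep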

Link to comment
11 hours ago, itimpi said:

 The issue is trying to determine if something IS spinning it up again.

 

That makes sense.
As far as what is querying the disks, it's for sure the ZFS Master plugin.  Clean install, no data on the pool, no apps, no dockers.  Only ZFS-formatted disks are affected, and only when ZFS Master is installed.

Edited by samsausages
Link to comment
20 hours ago, samsausages said:

 

That makes sense.
As far as what is querying the disks, it's for sure the ZFS Master plugin.  Clean install, no data on the pool, no apps, no dockers.  Only ZFS-formatted disks are affected, and only when ZFS Master is installed.

To be honest, I actually don't think it's the ZFS Master plugin (but rather the current ZFS implementation in Unraid). It could also be the ZFS Companion plugin, if you have that installed.

 

I had ZFS Master installed but uninstalled it due to the snapshot loading times (see the earlier discussion in the thread). If I try to spin down ZFS disks in the UD section of Main, I ALWAYS receive the spin-up with the SMART message. Therefore, I really would not assume that it's only ZFS Master.

Link to comment
41 minutes ago, HumanTechDesign said:

To be honest, I actually don't think it's the ZFS Master plugin (but rather the current ZFS implementation in Unraid). It could also be the ZFS Companion plugin, if you have that installed.

 

I had ZFS Master installed but uninstalled it due to the snapshot loading times (see the earlier discussion in the thread). If I try to spin down ZFS disks in the UD section of Main, I ALWAYS receive the spin-up with the SMART message. Therefore, I really would not assume that it's only ZFS Master.

Like I was saying, this is a clean install.  Only the ZFS Master plugin is installed, purely for testing.
I have also confirmed this behavior with the 6.11 version of Unraid, utilizing OpenZFS (it sounds like 6.12 is based on a similar ZFS implementation). When ZFS Master is installed, it keeps spinning up the disks.

Edited by samsausages
Link to comment
On 4/24/2023 at 6:42 AM, samsausages said:

Like I was saying, this is a clean install.  Only the ZFS Master plugin is installed, purely for testing.
I have also confirmed this behavior with the 6.11 version of Unraid, utilizing OpenZFS (it sounds like 6.12 is based on a similar ZFS implementation). When ZFS Master is installed, it keeps spinning up the disks.

 

Hi, I agree with @itimpi; this situation is not related to the plugin but to unRaid itself. The plugin doesn't implement any code associated with SMART functionality, and all the commands are exclusively ZFS-related (zpool list, zpool status, etc.). Moreover, the plugin doesn't enumerate the devices present in the system; it only parses the results from zpool status for pool-health purposes.

Edited by Iker
Link to comment
7 hours ago, Iker said:

 

Hi, I agree with @itimpi; this situation is not related to the plugin but to unRaid itself. The plugin doesn't implement any code associated with SMART functionality, and all the commands are exclusively ZFS-related (zpool list, zpool status, etc.). Moreover, the plugin doesn't enumerate the devices present in the system; it only parses the results from zpool status for pool-health purposes.


I don't think the plugin is actually performing the SMART read; I think the plugin is spinning up the disks, and when the disks spin up it results in a SMART read.
So yes, the plugin is not reading SMART, but it is spinning up the disks.

Link to comment
3 hours ago, samsausages said:


I don't think the plugin is actually performing the SMART read; I think the plugin is spinning up the disks, and when the disks spin up it results in a SMART read.
So yes, the plugin is not reading SMART, but it is spinning up the disks.

 

TBH that doesn't make a lot of sense to me. As I said, the plugin doesn't query the disks directly; it only executes ZFS commands every 30 seconds (you can change the interval in the config).

  • Like 1
Link to comment
9 hours ago, Iker said:

 

TBH that doesn't make a lot of sense to me. As I said, the plugin doesn't query the disks directly; it only executes ZFS commands every 30 seconds (you can change the interval in the config).


Doesn't make sense to me either, but that's what's happening on my hardware: a clean USB install, no other plugins, Docker & VMs disabled. It only happens to the ZFS-formatted disks, and the behavior stops as soon as I uninstall ZFS Master (existing pool as well as freshly formatted, empty disks/pools).
Tested on 6.11 with the OpenZFS plugin and on 6.12 RC3.  Literally the only thing I installed was ZFS Master, just to test and try to figure out what was causing them to stay spun up.

I also thought it might be related to the query interval, so I tried changing the query time in ZFS Master to 300 seconds.  I don't think it's related to that query, because even set to 300, the disks spin back up within about 30-60 seconds.

FYI, I did this testing on X99, but I'm moving over to EPYC.  If the rest of my parts come in, I should be able to test this weekend with new hardware and see if that changes anything (I will still use the same HBA cards).

Edited by samsausages
Link to comment

Really enjoying this plugin so far; it makes managing my ZFS pool much easier in 6.12.

 

My only comment would be a GUI suggestion: when you make a modification to a ZFS entry (for example, renaming or adding/deleting something), it should trigger an immediate refresh of the unRAID UI rather than waiting for the next scheduled update (~30 seconds).

Link to comment
On 4/26/2023 at 9:35 AM, Iker said:

Sounds good! I will take a deeper look in the coming days, as this is very unexpected behavior, and I haven't been able to reproduce it in an unRaid VM.

I got my new build done and am still having the same issue.  So now I'm wondering if it's related to my HBA card, an LSI 9300-i16, as that and the HDDs are the only things I carried forward from my old build (new CPU, motherboard, memory, etc.).
I have more testing to do: I'm going to try another clean USB without the HBA card and some clean dummy HDDs, but I'm leaving town and won't have time to do the testing until two weeks from now.
Just wanted to give a heads-up!

  • Like 1
Link to comment
On 4/26/2023 at 10:35 PM, Iker said:

Sounds good! I will take a deeper look in the coming days, as this is very unexpected behavior, and I haven't been able to reproduce it in an unRaid VM.

I am seeing the same spin-up behaviour. More specifically, the spin-up only happens if I visit the Main tab. If I instead go to the dashboard while my pool drives are spun down, I can observe them as spun down in the list of disks on the dashboard. It's only when I go to Main that the pool spins up.

 

I have also asked in the Unassigned Devices plugin support thread, but it seems that ZFS Master is the cause.

 

Does this plugin run any ZFS commands when the Main tab is loaded? Would those commands need to read data off the disks? If they do, that would spin them up.

 

Can anyone test whether a ZFS pool that is verified as spun down on the dashboard still stays spun down when accessing the Main tab? I am on Unraid 6.11.5.

  • Like 1
Link to comment
1 hour ago, apandey said:

Does this plugin run any ZFS commands when the Main tab is loaded? Would those commands need to read data off the disks? If they do, that would spin them up.

 

Multiple ZFS commands; that's the whole idea: enumerate the pools, then for each pool list its datasets, and then the snapshots for every single dataset. As far as I know, some of that information is stored in the ZFS metadata; depending on how you configure your datasets' primarycache, it can be the case that the plugin ends up reading the data from the disks instead of memory.
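For example, you can check how primarycache is configured for your datasets (pool and dataset names are illustrative); with primarycache=none, even metadata reads have to come from the disks:

zfs get primarycache tank/data     # a single dataset
zfs get -r primarycache tank       # every dataset in the pool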

Link to comment
41 minutes ago, Iker said:

Multiple ZFS commands; that's the whole idea: enumerate the pools, then for each pool list its datasets, and then the snapshots for every single dataset. As far as I know, some of that information is stored in the ZFS metadata; depending on how you configure your datasets' primarycache, it can be the case that the plugin ends up reading the data from the disks instead of memory.

OK, thanks. Next time the pool is spun down, I will try running these one by one over SSH. Let me see if I can pinpoint which one of them triggers a spin-up (and subsequently what can be done to avoid that). I will report back what I discover.

Link to comment
59 minutes ago, Iker said:

Multiple ZFS commands; that's the whole idea: enumerate the pools, then for each pool list its datasets, and then the snapshots for every single dataset.

I managed to do a quick test. Running the following did not spin up the pool:

zpool list
zfs list
zfs list <dataset>
zfs list -r <dataset>
zfs list -t snapshot

 

I am not sure I am being exhaustive enough, though. Is there a list of commands, or some log where I can observe what the plugin is doing? Or maybe a relevant code snippet I can refer to?

Link to comment
22 hours ago, apandey said:

I am not sure I am being exhaustive enough, though. Is there a list of commands, or some log where I can observe what the plugin is doing? Or maybe a relevant code snippet I can refer to?

 

These are the commands executed when the Main tab loads:
 

zpool list -v                                                                         # list pools
zpool status -v <pool>                                                                # get pool health status
zfs program -jn -m 20971520 <pool> zfs_get_pool_data.lua <pool> <exclusion_pattern>   # list the pool's datasets & snapshots

 

The Lua script is a very short ZFS channel program executed in read-only mode for safety and performance reasons.

 

Obviously, if you create, delete, snapshot, or perform other actions on datasets, there are going to be additional ZFS commands.
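For anyone curious what such a channel program looks like, here is a minimal read-only sketch run from the shell (this is not the plugin's actual zfs_get_pool_data.lua; the pool name "tank" and the script path are examples). It returns the direct children of the dataset passed as an argument:

cat > /tmp/list_children.lua <<'EOF'
-- Read-only channel program: return the child datasets of argv[1]
args = ...
root = args["argv"][1]
results = {}
i = 1
for child in zfs.list.children(root) do
    results[i] = child
    i = i + 1
end
return results
EOF
zfs program -jn -m 20971520 tank /tmp/list_children.lua tank   # -n keeps it read-only, -j prints JSON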

Link to comment
