
[PLUGIN] ZFS Master


Iker


What makes you think znapzend is abandoned? 

 

 

For aliasing, definitely a niche case; I don't think many people run zfs on their array disks. I'm not even sure that I think it's the greatest idea, but it allowed for some interesting experiments with zfs that I wouldn't have had the available hardware to try otherwise. Overall, I think it's still worth it, detecting silent corruption, transaction groups, and caching metadata off the spinning rust without needing plugins for any of that are big pluses. 

 

As an example, right now I have a lot of scripts and snapshots set up for the pool disk6. It would be nice if I could just refer to that pool as backup and have a single place where I could tell the OS "backup" = "disk6", for readability and ease of change later on. Again, definitely niche, but I figured that would be built in to zfs from the start. 
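As far as I know, ZFS itself has no pool-alias feature (renaming a pool takes an export/import), so the closest workaround I've come up with is keeping the name in one file that every script sources. A rough sketch, where the file path, variable name, and the "data" dataset are hypothetical:

# pool-aliases.sh (hypothetical file, kept somewhere persistent such as the flash drive)
BACKUP_POOL="disk6"

# at the top of each snapshot/backup script:
. /path/to/pool-aliases.sh
zfs snapshot "${BACKUP_POOL}/data@$(date +%Y%m%d)"
zfs list -r "${BACKUP_POOL}"

That way "backup" = "disk6" lives in exactly one place, and pointing it at a different pool later is a one-line change.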

Link to comment

Installation of ZFS Master causes php warning:

[02-Jan-2024 08:07:43 America/Chicago] PHP Warning:  Undefined array key "PluginURL" in /usr/local/emhttp/plugins/dynamix.plugin.manager/post-hooks/post_plugin_checks on line 27

 

Also some more warnings while it's running:

[02-Jan-2024 08:11:41 America/Chicago] PHP Warning:  filemtime(): stat failed for /boot/config/plugins/zfs.master/zfs.master.cfg in /usr/local/emhttp/plugins/zfs.master/nchan/zfs_master on line 25
[02-Jan-2024 08:11:44 America/Chicago] PHP Warning:  filemtime(): stat failed for /boot/config/plugins/zfs.master/zfs.master.cfg in /usr/local/emhttp/plugins/zfs.master/nchan/zfs_master on line 106

 

Link to comment

Znapzend does not seem abandoned; it was last updated April 12, 2023, and is on version 0.21.2. BUT the znapzend plugin in the Unraid App Store does seem abandoned, as it has not been updated since 2020 and is on version 0.20.0.1.

Asked about that in its thread in December with no answer yet :(

Link to comment

@dlandon, thanks, I'll fix it in the new release.

 

@Renegade605 @isvein; Don't get me wrong, I like the tool, but that doesn't change the fact that it is abandoned. Most of the issues were closed automatically by a bot, with no answer; the last release was an automatic build without any new features or fixes, and the last true update is from Jan 2022, which is almost two years ago; that's why I don't feel very comfortable building a GUI for the tool.

Link to comment

@xreyuk It's very simple: ZFS Master refreshes the pool and dataset information it displays every now and then (at the refresh interval you select), but that requires reading some information from the disks. So every time you visit the Main page, the plugin refreshes the data every X seconds/minutes, and your disks are woken from their sleep by that operation. The No Refresh option helps with that; it refreshes the information only when you press the "refresh" button, letting your disks sleep and retrieving the information only when you explicitly request it.

 

The other options are associated with the refresh interval, described in the initial post of this thread.

Link to comment

Quick question: what are these cryptic entries with a legacy mount point under my system share/dataset in the ZFS Master plugin entries on the Main tab?

[screenshot: cryptic entries with legacy mountpoints under the system dataset in ZFS Master]

 

I'll be the first to admit, ZFS is new to me, and I'm no expert. I recently did a server upgrade, and in the process I added new ZFS pools and moved my appdata and system shares over to them. These all popped up under my system dataset after a reboot (it might have been a dirty reboot), once the server had initially booted up and my files had been moved to the new shares on the ZFS pool. I see that some of them have snapshots and some of them don't.

 

Generally, there are a lot of them (hundreds), and I'm bothered because I didn't make them and I don't know what they are. On a more superficial level, they're really mucking up my GUI, since they always open expanded and I have to scroll a long way to get down to my unassigned devices now. So, besides "what are these?", my follow-up question is "how can I get rid of them?", or at least "how can I hide them?"

 

My system share has the standard stuff in it (docker and libvirt). Docker is set as a directory (not an image). I tried setting the docker file filter in the ZFS Master settings (/dockerfiles/.*), but that didn't work. I didn't have high hopes, since these didn't seem to match any of those file strings (like the files in the /zfs/graph/ subdirectory).

Link to comment
1 minute ago, FirbyKirby said:

Quick question: what are these cryptic entries with a legacy mount point under my system share/dataset in the ZFS Master plugin entries on the Main tab?

That's how Docker handles using a directory for image storage. It's normal. 

 

Try "/docker/+" as the exclusion pattern, or similar for your dataset structure. 

Link to comment

Thanks @Renegade605! I suspect you're absolutely right. As it is, blindly plugging in the example "/dockerfiles/.*" would not have worked, since my directory structure is "/system/docker/docker/". But I'm still having a tough time coming up with a Lua exclusion pattern that works. I've reviewed the tutorial, and if I'm understanding it correctly, "/docker/.*" or "/docker/.+" should work for me (the former matching /docker/ as well as all subfolders and files, and the latter matching only the subfolders and files of /docker/). I don't think I need to add the full path if I don't include the start-of-string anchor (the ^) as well. But despite all these permutations, I still can't exclude these files. Any advice? Here's a look at my directory structure for docker.

[screenshot: docker directory structure]

 

I did try "/docker/+", which didn't work. And if I understand Lua patterns correctly, the "." is necessary as the "any character" class before the * or + pattern-match character.

Link to comment
56 minutes ago, FirbyKirby said:

I did try "/docker/+", which didn't work. And if I understand Lua patterns correctly, the "." is necessary as the "any character" class before the * or + pattern-match character.

Now that you mention it, I did typo that. But it worked anyway, because "docker/" matches all child datasets of docker (i.e., "cache/system/docker/01d13...." matches "docker/", and the + 'one or more times' is irrelevant).

 

Looking closer, I suspect it's because your docker folder is just a folder, not a dataset. The exclusion pattern only matches datasets, not file/folder structure. (Type "zfs list" in the terminal to see what datasets there are on your system.) You can either a) make a child dataset under 'system' named 'docker' (you'll have to delete your existing directory first, and reinstall your containers after), or b) exclude "system/.+"
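A quick way to check is to list only the datasets under the system share (assuming the pool is named cache, as in the example above; adjust for your layout):

$ zfs list -r -o name,mountpoint cache/system

If "docker" doesn't appear in that output, it's a plain folder rather than a dataset, and an exclusion pattern containing "docker/" will never match the docker-created entries.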

 

If you do the latter, and later make a child dataset for system, it will also not appear. Unless you make your exclusion pattern something like "system/[a-f0-9]{64}". EDIT: Nevermind, looks like Lua doesn't support {} notation.

 

I suggest the former, as creating new datasets for differing tasks is a big part of zfs design philosophy and you can do more later on. (For example: I have a reservation and quota of 20G on the docker dataset, so it's guaranteed to always have 20G available no matter how full the rest of my cache pool is, and to never take up more than 20G if something weird should happen.)
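For reference, a rough CLI sketch of option (a) plus the 20G reservation/quota idea, assuming the pool is named cache, the Docker service is stopped first, and you reinstall your containers afterwards (paths are the ones from this thread):

$ mv /mnt/cache/system/docker /mnt/cache/system/docker-old   # move the old folder out of the way
$ zfs create cache/system/docker                             # the new child dataset mounts at the old path
$ zfs set reservation=20G cache/system/docker                # guarantee 20G for Docker...
$ zfs set quota=20G cache/system/docker                      # ...and cap it at 20G
$ rm -rf /mnt/cache/system/docker-old                        # clean up once your containers are reinstalled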

Link to comment

Ahh. That makes sense. Essentially, there is a "hole" in my datasets between the Unraid share dataset (/system) and the docker files further down the directory tree (which are also, I guess, created as datasets?)

 

Yep. My docker folder is just a folder, not a dataset.

[screenshot]

 

And thanks for the explanation of what a dataset is, and your recommendation of making docker a dataset. In my server upgrade, I moved /system, /appdata, and /domains over to a single ZFS pool with exclusive access for the best performance, so I am vaguely nervous about the amount of space these directories will consume as they dynamically grow. A reservation and quota for docker makes a ton of sense for me as well.

 

I guess I wouldn't really need to make a dataset for any directories below /docker based on your description of the ZFS design philosophy being focused on 1 dataset per task (everything under /docker is the same "docker" task, so to speak.)

 

OK, off to figure out how to create a new dataset with the ZFS Master plugin....

Link to comment

One quick follow-up question: so, I understand that docker is off making ZFS datasets of its own on my system, and apparently snapshots too. I've recently learned docker is smart enough to detect the underlying FS and then manipulate it to its own ends. So excluding these docker-generated datasets and snapshots certainly cleans up my GUI, but am I safe to ignore them and just let docker do what docker is going to do? I'm vaguely nervous about letting docker create (and hopefully destroy) datasets and snapshots on its own all willy-nilly. But then again, I assume everyone running docker on Unraid is doing the same thing here.

Link to comment
9 minutes ago, FirbyKirby said:

I guess I wouldn't really need to make a dataset for any directories below /docker based on your description of the ZFS design philosophy being focused on 1 dataset per task (everything under /docker is the same "docker" task, so to speak.)

Correct. There's generally no reason to have a child dataset if it's the only child. The benefit to child datasets is different properties, attributes, snapshots, etc. for each.

 

"Task" may have been the wrong word choice. For example, another thing I've done is create a separate child dataset in appdata for each container. They all get their own snapshots, so if an update borks one container, I can rollback the appdata for that one container with a single button. Postgres performs better when the zfs record size matches the database page size, so I've done that for only those containers. Once you start playing with ZFS tuning, you may find yourself going down a rabbit hole, but the sky is the limit.

 

 

3 minutes ago, FirbyKirby said:

One quick follow-up question: so, I understand that docker is off making ZFS datasets of its own on my system, and apparently snapshots too. I've recently learned docker is smart enough to detect the underlying FS and then manipulate it to its own ends. So excluding these docker-generated datasets and snapshots certainly cleans up my GUI, but am I safe to ignore them and just let docker do what docker is going to do? I'm vaguely nervous about letting docker create (and hopefully destroy) datasets and snapshots on its own all willy-nilly. But then again, I assume everyone running docker on Unraid is doing the same thing here.

I just let it handle itself. My 24 containers use 8.7G. And again, there's a quota in place just in case.

Link to comment

The plugin has stopped showing any information recently. The CLI shows that I do have datasets:

$ zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
cache               20.0G   879G      120K  /mnt/cache
cache/appdata       10.5G   879G     10.5G  /mnt/cache/appdata
cache/domains         96K   879G       96K  /mnt/cache/domains
cache/system        9.39G   879G     9.39G  /mnt/cache/system
cache/www           11.9M   879G     11.9M  /mnt/cache/www
disk1               1.39T  9.39T      104K  /mnt/disk1
disk1/data          1.39T  9.39T     1.39T  /mnt/disk1/data
disk1/isos            96K  9.39T       96K  /mnt/disk1/isos
vault               1.81T  8.63T      813G  /mnt/vault
vault/backups        451G  8.63T      451G  /mnt/vault/backups
vault/gdrive        1.13G  8.63T      413M  /mnt/vault/gdrive
vault/gphotos       17.1G  8.63T     17.0G  /mnt/vault/gphotos
vault/lesley         140K  8.63T      140K  /mnt/vault/lesley
vault/tibbe          284G  8.63T      284G  /mnt/vault/tibbe
vault/time-machine   290G  8.63T      290G  /mnt/vault/time-machine

 

However the plugin shows nothing:

[screenshot: ZFS Master section on the Main tab showing no pools]

 

I have default settings:

[screenshot: ZFS Master settings at their defaults]

Link to comment

Funny enough, I just lost my appdata (I still had to set up the backup plugin), since I had an appdata folder but not an appdata dataset.

 

I went to Create Dataset -> typed "appdata", and it wiped the appdata folder outright.

 

In my opinion, there should be a follow-up warning for this if the directory already exists.

Link to comment
17 minutes ago, mich2k said:

Funny enough, I just lost my appdata (I still had to set up the backup plugin), since I had an appdata folder but not an appdata dataset.

 

I went to Create Dataset -> typed "appdata", and it wiped the appdata folder outright.

 

In my opinion, there should be a follow-up warning for this if the directory already exists.

It isn't gone, just hidden. The old appdata folder exists but the new appdata dataset is mounted over top. Change the mountpoint of the appdata dataset to "none" and the folder will be there. 
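From the terminal that looks roughly like this, assuming the dataset is cache/appdata (substitute your own pool name):

$ zfs set mountpoint=none cache/appdata                  # unmount the dataset; the old folder underneath reappears
$ ls /mnt/cache/appdata                                  # this is now the original folder and its contents
$ zfs set mountpoint=/mnt/cache/appdata cache/appdata    # put the dataset back when you're done

If the dataset doesn't remount on its own after the last step, "zfs mount cache/appdata" will do it.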

Link to comment
5 minutes ago, Renegade605 said:

It isn't gone, just hidden. The old appdata folder exists but the new appdata dataset is mounted over top. Change the mountpoint of the appdata dataset to "none" and the folder will be there. 

How can I do this? I mean, is it possible via the GUI?

Link to comment
1 minute ago, Renegade605 said:

Yes. 

 

[screenshot]

Oh thanks, I already did it via the CLI; it needed destructive mode on.

 

Now that I have the data back, do I destroy this dataset, create a new one, and just move the files in?

I guess it would be nice to have the appdata folder as a dataset, no?

 

thanks :)

Link to comment
1 minute ago, mich2k said:

Oh thanks, I already did it via the CLI; it needed destructive mode on.

 

Now that I have the data back, do I destroy this dataset, create a new one, and just move the files in?

I guess it would be nice to have the appdata folder as a dataset, no?

 

thanks :)

You can either 

- rename the current folder

- remount the dataset

- move the files

- delete the folder

Or

- mount the dataset somewhere else (eg /mnt/cache/appdata-new) 

- move the files

- delete the folder

- change the dataset mount point back to default

 

ZFS will let you mount a dataset anywhere you want, which has many practical uses. If you wanted, you could create a dataset on the cache pool and then mount it at /mnt/disk1, although that would not be a good idea.
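For example, the second option above might look roughly like this from the terminal, assuming an appdata dataset on a pool named cache and that everything using appdata is stopped first:

$ zfs set mountpoint=/mnt/cache/appdata-new cache/appdata   # park the dataset somewhere else for now
$ mv /mnt/cache/appdata/* /mnt/cache/appdata-new/           # the old folder is visible again; move its contents over
$ rmdir /mnt/cache/appdata                                   # only succeeds once the folder is truly empty
$ zfs inherit mountpoint cache/appdata                       # back to the inherited default (/mnt/cache/appdata)

A plain * glob skips hidden dotfiles, so check the old folder really is empty before the rmdir; it will refuse to remove a non-empty directory, which is a handy safety net.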

 

At minimum, every top level folder on a zpool should be a dataset. Unraid will do this automatically for user shares that don't exist yet, but doesn't convert folders that already exist. It's up to you if you want to create more. 

 

For example, you can see my "cache/domains" dataset has a "cache/domains/pfSense" child dataset. 

Link to comment
On 1/10/2024 at 6:47 PM, Iker said:

@xreyuk It's very simple: ZFS Master refreshes the pool and dataset information it displays every now and then (at the refresh interval you select), but that requires reading some information from the disks. So every time you visit the Main page, the plugin refreshes the data every X seconds/minutes, and your disks are woken from their sleep by that operation. The No Refresh option helps with that; it refreshes the information only when you press the "refresh" button, letting your disks sleep and retrieving the information only when you explicitly request it.

 

The other options are associated with the refresh interval, described in the initial post of this thread.

I found weird behavior: if I have Unassigned Devices disks formatted as ZFS, they never spin down.

It seems the 'No Refresh' setting works only for array ZFS drives.

[screenshot]

Link to comment
