[PLUGIN] ZFS Master


Iker

Recommended Posts

50 minutes ago, Alyred said:

Can you extend the maximum time option between automatic refreshes, and whenever the plugin is used/a button clicked to create/edit/destroy something? Seems like it wouldn't need to be updated too often in some situations for anyone that doesn't make too many changes.

 

Sure, I will include a new option beyond 5 minutes (probably 15m, 30m, 1 hour and "manual") for the next version; in combination with the functionality already in place, it should provide the behaviour you are looking for.

Edited by Iker
Link to comment
22 hours ago, Niklas said:

 

Please read back just a couple of posts before yours in this thread. 

 

Like this one 

 


Thanks, I did spot that

 

I have not loaded the Main tab in 24 hours; spindown is set to one hour.

 

I've also moved the drive off my LSI HBA directly to SATA and it's still spun up.

 

Guess something else is keeping the drive online. It's only a backup share that's used once a month and no files are open, so I'm out of ideas.

Link to comment

The spin-up is happening without ZFS Master installed, so it's something about 6.12.x.

 

I also removed any ZFS drives from the main array, so now the only ZFS drives are SSD caches, and one of those pools spins down (standby for SSD). So it's something to do with 6.12.x and the main array.

 

Possibly something built into the OS? It's not something that shows up in disk activity or open files.

 

However, I do also notice that my hard drive activity lights are not lit, so perhaps the drives are not actually spun up but are showing as spun up?
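One way to double-check whether a drive is really spun up, independent of what the UI shows (the device name and mount path below are placeholders; as far as I know hdparm and lsof ship with Unraid, but treat that as an assumption):

hdparm -C /dev/sdX          # asks the drive itself: "active/idle" vs "standby"
lsof /mnt/disk1 | head      # lists any open files on that filesystem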

Edited by dopeytree
Link to comment

This issue (not spinning up/down), even without the plugin, has already been discussed earlier this year (even pre-6.12.x); see for example this

I am still on 6.11.5 without the plugin installed, and I also experience the issue that the disks spin up automatically if I spin them down manually and don't spin down at all based on time.

Unfortunately, in April we did not find the culprit (it was NOT the Main tab issue), so it is probably also related to the ZFS implementation in Unraid itself or a specific hardware configuration.

Link to comment

I tested on 6.12.3 now: I set drive spin-down to 30 min (just so I didn't need to wait for hours), kept away from the "Main" page, and they did indeed spin down. In my array, 10 of 11 drives are ZFS.
But once I clicked on "Main", they all spun up again, as expected per the info from @Iker (all ZFS drives; the XFS one stays spun down, as also expected).

Edited by isvein
Link to comment

A story from a noob:
I converted my drive to ZFS. I copied the data back to it (*1). I wanted to benefit from ZFS datasets and tried converting an existing path to a dataset via ZFS Master (*2). Pressed "create dataset" button, entered the existing path, pressed confirm, saw the success window (*3). Didn't read all the other options within the window. Left them on their default values.
I was shocked to find out that the contents of the path were gone. After a near nervous breakdown, rationality kicked in, and I thought that this mountpoint could just be overlapping the existing path. I renamed the dataset via `zfs rename mountpoint/path mountpoint/new_path`, and luckily the old path returned with all the data. Then I copied `mountpoint/path/*` to `mountpoint/new_path/`, deleted the old empty path, and renamed the dataset back.

*1. My fault that I didn't pre-create datasets for the root paths. Didn't read enough; thought that a magical plugin would do everything for me without much knowledge of ZFS.

*2. My fault again, I didn't read much on ZFS. Should have known that datasets can be mounted on top of existing paths.
*3. Feedback for the ZFS Master plugin: could this operation have checked that the path already exists? Warned about it? Offered to move the data inside the existing path into the newly created dataset for me? I understand that I am an ignorant noob, but I imagine someone who doesn't think of renaming the newly created dataset would be surprised that the data is gone and the free space is not reclaimed, while the path is now empty.
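For anyone who hits the same thing, here is a minimal sketch of the recovery steps described above, assuming a pool named cache and a share directory appdata (both names are placeholders, not from the post):

# The new dataset was mounted on top of the existing directory, hiding its contents.
# Renaming the dataset moves its mountpoint aside and exposes the old folder again.
zfs rename cache/appdata cache/appdata_new

# Copy the old directory's contents into the dataset (now mounted at /mnt/cache/appdata_new).
rsync -a /mnt/cache/appdata/ /mnt/cache/appdata_new/

# Remove the old plain directory, then rename the dataset back into place.
rm -rf /mnt/cache/appdata
zfs rename cache/appdata_new cache/appdata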

Thank you for this plugin, @Iker

Link to comment

Hey all, please help.
I never removed anything; all 3 drives are 2 months old. I tried to do an online but got:

zpool online nvme_cache /dev/nvme0n1p1
warning: device '/dev/nvme0n1p1' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present

 

  pool: nvme_cache
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: scrub repaired 0B in 00:02:45 with 0 errors on Wed Jul 26 12:19:49 2023
config:

	NAME                STATE     READ WRITE CKSUM
	nvme_cache          DEGRADED     0     0     0
	  raidz1-0          DEGRADED     0     0     0
	    /dev/nvme0n1p1  REMOVED      0     0     0
	    /dev/nvme1n1p1  ONLINE       0     0     0
	    /dev/nvme2n1p1  ONLINE       0     0     0

errors: No known data errors



In Unraid they all show green, no errors

 

[Screenshot attached]
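Not advice from the thread, just a sketch of the action that the zpool status output is pointing at, using the pool and device names shown above; replacing the REMOVED member kicks off a resilver (make sure backups are good first):

# general form: zpool replace <pool> <old-device> [<new-device>]
zpool replace nvme_cache /dev/nvme0n1p1
# then watch the resilver finish
zpool status -v nvme_cache

If the device keeps dropping back to REMOVED, the underlying NVMe/connection issue has to be sorted out first.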

Link to comment
On 7/19/2023 at 8:31 PM, dopeytree said:

The spin-up is happening without ZFS Master installed, so it's something about 6.12.x.

 

I also removed any ZFS drives from the main array, so now the only ZFS drives are SSD caches, and one of those pools spins down (standby for SSD). So it's something to do with 6.12.x and the main array.

 

Possibly something built into the OS? It's not something that shows up in disk activity or open files.

 

However, I do also notice that my hard drive activity lights are not lit, so perhaps the drives are not actually spun up but are showing as spun up?

 

Just thought I would update back with my findings on the 'array drives not spinning down' issue.

 

It was actually caused by the container 'dash dot', which is a nice dashboard and can be integrated via iframe into Homarr.

 

Anyway, if you edit it and remove the 'privileged' setting, it stops the drives from constantly spinning and allows them to sleep.
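For anyone wanting to try the same change from the command line, a rough sketch (the image name mauricenino/dashdot and port 3001 are my assumptions about a typical dash. install, not details from the post; on Unraid you would normally just toggle Privileged in the container's edit page instead):

# with --privileged the container can poll the physical drives directly,
# which (per the post above) appears to keep them awake:
#   docker run -d --name dashdot --privileged -p 3001:3001 mauricenino/dashdot
# without the flag the dashboard still runs, and the drives can spin down:
docker run -d --name dashdot -p 3001:3001 mauricenino/dashdot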

 

Edited by dopeytree
Link to comment
On 7/19/2023 at 4:28 AM, Iker said:

 

Sure, I will include a new option beyond 5 minutes (probably 15m, 30m, 1 hour and "manual") for the next version; in combination with the functionality already in place, it should provide the behaviour you are looking for.

In my case the ZFS Master plugin was causing drive spin-ups every time I entered the Main page. When I set the refresh option to a longer interval, drives would stay spun down for a while. Once I removed the plugin, drives are no longer spinning up when I enter the Main page. What is weird is that it would spin up ALL my drives, even the XFS one. Perhaps the plugin does a default refresh once the page is entered?

 

Will this also be disabled with the 'manual' option? Perhaps there could be a way to only refresh for SSDs and keep HDDs idle until requested?

Link to comment
On 7/5/2023 at 10:08 PM, Iker said:

 

That's correct; my latest investigation into this issue is that "querying" snapshot names reads data from the disk, and by doing so it spins up those disks; unfortunately, there is nothing to be done about it. However, I want to be very clear about this, because there seems to be some misinformation about the matter: ZFS Master wakes up the disks only when you open the dashboard in the Main tab. There are no background processes or any other activity performed by the plugin outside of when you open your dashboard; if you keep it closed or stay on another tab, there is no way that the plugin is the culprit preventing the disks from spinning down.

Hello,
I myself tend to go to the Main tab and therefore regularly spin up the ZFS drives.
Could I suggest the following modification: adding a "hide/view" button next to manual refresh that collapses or expands the datasets section. Spin-up would then only happen when the dataset section is expanded. The hide/view state should be remembered between sessions.
Is that something easily achievable?

Link to comment
10 minutes ago, Can0n said:

Hello, I just happened to notice today that all my containers (which were converted to datasets using SpaceInvaderOne's script) are listed completely scrambled, and there are way more than there should be. Advice/support please?


[Screenshot attached]

 

Looks like what happens when you have Docker set to directory and the directory is on a ZFS-formatted drive: Docker will use the built-in zfs storage driver and create the image layers as datasets. It will also create lots of snapshots. Just hide them.

 

Edit

You can hide them by entering the directory as an exclusion in the plugin settings, like this (I have it set to a share named docker on the ZFS cache only):

[Screenshot attached]
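If you want to confirm that it is the Docker zfs storage driver creating these, a quick check from the terminal (the pool name cache is just an example here):

docker info --format '{{.Driver}}'      # prints "zfs" when Docker is using the zfs storage driver
zfs list -r -d 1 cache                  # the hash-named children of the pool are Docker image layers
zfs list -t snapshot -r cache | wc -l   # how many snapshots it has created alongside them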

Edited by Niklas
Link to comment
2 minutes ago, Niklas said:

 

Looks like what happens when you have Docker set to directory and the directory is on a ZFS-formatted drive.


Looks like the Docker directory dataset is still there from when I was playing around with ZFS and Docker, but I'm actually using the BTRFS file system for docker.img... I manually deleted the docker directory, but ZFS Master is still showing all the super long strings and not the actual docker container names like it used to. [Screenshot attached]

Link to comment
5 minutes ago, Can0n said:


Looks like the Docker directory dataset is still there from when I was playing around with ZFS and Docker, but I'm actually using the BTRFS file system for docker.img... I manually deleted the docker directory, but ZFS Master is still showing all the super long strings and not the actual docker container names like it used to. [Screenshot attached]

 

Do you mean you don't have the datasets created using SpaceInvaderOne's script, only those? I guess you'll have to destroy the datasets (using zfs destroy) from when Docker created them. Deleting the folder/share won't do it. But I'm fairly new to ZFS myself. You can use destroy to do it recursively, but don't destroy the wrong one. 😅

Edited by Niklas
Link to comment
1 minute ago, Niklas said:

 

Do you mean you don't have the datasets created using SpaceInvaderOne's script, only those? I guess you'll have to destroy the datasets from when Docker created them. Deleting the folder/share won't do it. But I'm fairly new to ZFS myself.

There are hundreds. I did use the script to create the ones via docker.img; I'm not sure when it would have run to create these, when the Docker directory was being tested.

Is there no way to mass remove all but the ones I need (appdata and domains)? I mean, it's a very massive list.

 

Link to comment
15 minutes ago, Can0n said:

There are hundreds. I did use the script to create the ones via docker.img; I'm not sure when it would have run to create these, when the Docker directory was being tested.

Is there no way to mass remove all but the ones I need (appdata and domains)? I mean, it's a very massive list.

 

 

You should be able to destroy the parent, and it will take the datasets and snapshots with it.

My docker share is on a cache pool called "cache" (the dataset is cache/docker), so I should be able to use "zfs destroy -r cache/docker" (I guess??).

Edit: Depends on where you put the folder when messing with it? Hm.
I have no container names there (except the datasets in appdata I created with SpaceInvaderOne's script). The rest is that seemingly random stuff used by Docker.

Look at "zfs list" in the terminal

Edited by Niklas
Link to comment
38 minutes ago, Niklas said:

 

You should be able to destroy the parent, and it will take the datasets and snapshots with it.

My docker share is on a cache pool called "cache" (the dataset is cache/docker), so I should be able to use "zfs destroy -r cache/docker" (I guess??).

Edit: Depends on where you put the folder when messing with it? Hm.
I have no container names there (except the datasets in appdata I created with SpaceInvaderOne's script). The rest is that seemingly random stuff used by Docker.

Look at "zfs list" in the terminal

 

 

Without wildcards it's going to take a while.

Looks like it's all cache/randomstring,
and there is a lot more than in this screenshot (possibly multiple hundreds) not related to my correct datasets (appdata, domains, system, isos).
[Screenshot attached]


Here is a smaller screenshot showing cache/domains.

It seems whatever happened created all these in the cache root, so hopefully someone who knows ZFS better might be able to point me to an equivalent of a "zfs destroy -r cache/0*" type of command.
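zfs destroy does not take wildcards, but the same effect can be scripted. A hedged sketch (the keep-list and pool name come from the post above; it only prints the commands until you remove the echo):

# datasets to keep; everything else directly under the pool gets listed for removal
keep='^cache/(appdata|domains|system|isos)$'
# list the pool's direct children, drop the pool itself and the keep-list,
# and print (not run) the destroy commands for review
zfs list -H -o name -r -d 1 cache | grep -v '^cache$' | grep -Ev "$keep" | \
  while read -r ds; do echo zfs destroy -r "$ds"; done
# once the printed list looks right, drop the echo (or pipe the output to sh)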

Edited by Can0n
Link to comment
14 minutes ago, Can0n said:

 

 

Without wildcards it's going to take a while.

Looks like it's all cache/randomstring,
and there is a lot more than in this screenshot (possibly multiple hundreds) not related to my correct datasets (appdata, domains, system, isos).
[Screenshot attached]
 

 

Definitely from docker. 

What did you set as directory when you tried it?

Link to comment
4 minutes ago, Niklas said:

 

Definitely from docker. 

What did you set as directory when you tried it?

I never ran the script when trying the Docker directory; that's what's odd about it.

I hadn't looked at the datasets in a while. The server was up 25 days with no issues; I just noticed it today. Then my server started locking up, so I spent time diagnosing that, and it's running better now, so I thought I'd look into why and how these all showed up.


 

Link to comment
Just now, Can0n said:

I never ran the script when trying the Docker directory; that's what's odd about it.

I hadn't looked at the datasets in a while. The server was up 25 days with no issues; I just noticed it today. Then my server started locking up, so I spent time diagnosing that, and it's running better now, so I thought I'd look into why and how these all showed up.


 

 

The script from sp1 has nothing to do with it. Docker does that when you use it with a directory, if the directory is on ZFS. Looks like you maybe used "/mnt/cache" as the destination? I could be very wrong here. Sleep time here, but people who know more will most certainly answer you better. 😊

Link to comment
