[PLUGIN] ZFS Master


Iker

Recommended Posts

4 minutes ago, Niklas said:

 

The script from sp1 has nothing to do with it. Docker does that when you use it with a directory, if the directory is on ZFS. It looks like you maybe used "/mnt/cache" as the destination? I could be very wrong here. Sleep time here, but people who know more will most certainly answer you better. 😊

 

Yeah, I never knew about that script back when I played with the Docker directory.

But once I got Docker working after converting from BTRFS to ZFS using the docker.img file, I wanted to create datasets, so I found the script. This is my Docker setup since using the script:

[screenshot: Docker setup after running the script]

Link to comment

As far as I know there is no built-in function to delete multiple datasets at the same level; however, with bash it is possible, e.g.:

 

ls | grep -oE '\b[0-9a-f]{64}\b' | xargs -I{} echo {}

 

You can replace echo with zfs destroy and prepend the pool name.
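
For example, a dry run first and then the real pass might look like this (a minimal sketch; the "cache" pool and its /mnt/cache mountpoint are assumptions, adjust both to wherever the leftover datasets actually live):

# hypothetical layout: the leftover 64-character Docker datasets sit directly under the "cache" pool
cd /mnt/cache
# dry run: only print the destroy commands
ls | grep -oE '\b[0-9a-f]{64}\b' | xargs -I{} echo zfs destroy cache/{}
# once the printed list looks right, drop the echo to actually destroy them
ls | grep -oE '\b[0-9a-f]{64}\b' | xargs -I{} zfs destroy cache/{}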

Edited by Iker
Link to comment
8 hours ago, Iker said:

As far as I know there is no built-in function to delete multiple datasets at the same level; however, with bash it is possible, e.g.:

 

ls | grep -oE '\b[0-9a-f]{64}\b' | xargs -I{} echo {}

 

You can replace echo with zfs destroy and prepend the pool name.

It's OK, I got them all removed manually. Now I'm dealing with freezing on my server... time to hit up another forum section to get help.

I get a CPU panic and it freezes right after.

Link to comment
58 minutes ago, Can0n said:

It's OK, I got them all removed manually. Now I'm dealing with freezing on my server... time to hit up another forum section to get help.

I get a CPU panic and it freezes right after.

 

Try switching from macvlan to ipvlan. Docker needs to be stopped to be able to switch.

Link to comment
5 minutes ago, Niklas said:

 

Try switching from macvlan to ipvlan. Docker needs to be stopped to be able to switch.

I did switch back to ipvlan right after the first freeze. It froze two more times last night, so I powered it off for the night and booted it this morning. With the array not started it was fine; I started Plex and it was fine; then I started some more containers and it froze. So it has frozen twice today.

Here is my post with screenshots and logs.
Link to comment
  • 2 weeks later...

Hi.
Can we have an option in the plugin settings to query ZFS-related information (pools, snapshots, etc.) only when clicking the "Reload" button in the plugin section? It would be good to have such an option so that disks are not spun up every time the Main page is visited.

 

[screenshot: the plugin section's Reload button]

  • Upvote 3
Link to comment

Dear team,

thank you for providing this plugin. I have a hint regarding the prevented spin-down when the plugin is installed; there are several reports of this in different threads. I read that your plugin doesn't use SMART commands, but I observed that there are polling accesses to the drives. Perhaps this is what prevents the spin-down.

Any ideas on this?

 

Frank

Link to comment

Hi all,

 

I am converting some data on the array to datasets, and after transferring the data (2 TB) I got a "share deleted" dialog when renaming the share after creating it as new (having checked with ZFS Master that the new share was a dataset).

 

Some more detail on the process:

I put a new disk in and formatted it to ZFS.

I have an old XFS share named "unRAID_Backups".

 

I renamed that existing XFS share on Disk 1 to "unRAID_Backups_old". I then created a new share called "Backups", which automatically created a "Backups" dataset. I then copied all of the data from the old share to the new share in Krusader.

 

After the data was transferred into the new dataset "Backups", I decided I wanted to keep the old name, so I renamed the new "Backups" share to "unRAID_Backups", and it gave me a pop-up message saying "share ...unRAID_Backups... deleted".

 

Now when I look at my shares, Backups is still there (it wasn't renamed). But when I look at the datasets for this disk, one was renamed to "unRAID_Backups" and is now only taking up a small portion of the space: 0.6 TB vs 2 TB. Also, when I use the SIO script to check whether folders are datasets or plain folders, it shows:

**Datasets in disk2 are**
unRAID_Backups

**Folders in disk2 are**
Backups

 

The problem is, in Krusader or a file explorer, under Disk 2 there is no "unRAID_Backups" folder, only "Backups". So it appears to have renamed the ZFS dataset but not updated its location, and now some of the data is just missing? I see that the dataset itself still points to the "Backups" folder when I run zfs list.
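
For reference, this is roughly how I understand the mismatch could be inspected and, if the mountpoint is the culprit, corrected (a sketch; "disk2" as the pool name and the share names above are taken from my setup, please verify before running):

# compare each dataset's name with where it actually mounts
zfs list -o name,used,mountpoint -r disk2
# if the renamed dataset still mounts at the old path, point it at the new one
zfs set mountpoint=/mnt/disk2/unRAID_Backups disk2/unRAID_Backups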

 

Any thoughts or ideas on issues around renaming shares (that are also datasets)?

Link to comment

I am trying to remove some existing datasets that are under cache/appdata and no longer used by any Docker container.

 

When running the remove in the GUI, I get a "resource is busy" error. If the Docker app doesn't exist anymore, how can it be busy?

 

CMD output: cannot destroy 'cache/appdata/plextraktsync': dataset is busy
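
Things I'm checking to see what keeps it busy (a sketch of the commands; fuser from the psmisc package is assumed to be available):

# is the dataset, or something nested inside it, still mounted?
mount | grep plextraktsync
# which processes still hold files open under it?
fuser -vm /mnt/cache/appdata/plextraktsync
# snapshots or clones will also block a plain zfs destroy
zfs list -t snapshot -r cache/appdata/plextraktsync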

Link to comment

This has been a great app for me and I do love it, thanks! But one issue keeps nagging me: ZFS Master causes my ZFS-formatted array disks to stay spun up and never spin down.
We touched on this some time ago, and it looks like it's related to ZFS Master refreshing info, especially when refreshing the Main page.

Due to this I find myself uninstalling the plugin and only re-installing it when I need it... a bit clunky.

I was wondering if you would consider a setting/button that allows the option for manual update only. That way I can keep the plugin installed, but it only accesses my disks, and spins them back up, when I actually need it to.


It would help me save power and wear on my disks, as I really only access them once or twice a day when a backup runs. Right now they're spun up pretty much all day.

  • Upvote 1
Link to comment
On 9/1/2023 at 11:11 AM, mihcox said:

I am trying to remove some existing datasets that are under cache/appdata and no longer used by any Docker container.

 

When running the remove in the GUI, I get a "resource is busy" error. If the Docker app doesn't exist anymore, how can it be busy?

 

CMD output: cannot destroy 'cache/appdata/plextraktsync': dataset is busy

Just wanted to reply to this: the only way I was able to delete anything was to completely stop the Docker daemon. I doubt this is intended functionality.

Link to comment

Hey, answering some of the questions:

  1. @Xuvin What does it mean if the dataset/snapshot icon is yellow instead of blue: It means that the last snapshot is older than the time configured in the settings; it's just a visual indicator that you should create a new snapshot of the dataset.
  2. @samsausages I was wondering if you would consider a setting/button that allows the option for manual update only: Yes, I was finally able to get some time to work on the next update, and that's one of the planned features.
  3. @lordsysop Update 2023.09.05.31: It was just a test for the new CI/CD system I'm using. Sorry about that.
  4. @mihcox The only way I was able to delete anything was to completely stop the Docker daemon: I haven't been able to reliably delete datasets used at some point by Unraid without rebooting or stopping the Docker daemon; a procedure that sometimes works is the following (see the sketch after this list):
    1. Stop the Docker container using the directory
    2. Delete all snapshots, clones, holds, etc.
    3. Delete the directory (rm -r <dataset-path>)
    4. Delete the dataset using ZFS Master or the CLI.
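
A rough translation of those steps into commands (a sketch only; the plextraktsync dataset from the earlier post serves as the example, and /etc/rc.d/rc.docker is assumed to be Unraid's Docker service script):

/etc/rc.d/rc.docker stop                  # 1. stop the Docker service (or docker stop <name> for a single container)
# 2. destroy every snapshot of the dataset; clones and holds must be released too, if any exist
zfs list -H -o name -t snapshot -r cache/appdata/plextraktsync | xargs -r -n1 zfs destroy
rm -r /mnt/cache/appdata/plextraktsync    # 3. delete the directory contents
zfs destroy cache/appdata/plextraktsync   # 4. destroy the dataset itself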

Sorry for the delay on the update with the lazy-load functionality and custom refresh time, guys; I'm now back to working on the plugin, so hopefully the new update addressing most of your concerns will be released this month.

  • Like 4
  • Thanks 1
Link to comment
3 hours ago, Iker said:
  1. @mihcox The only way I was able to delete anything was to completely stop the Docker daemon: I haven't been able to reliably delete datasets used at some point by Unraid without rebooting or stopping the Docker daemon; a procedure that sometimes works is the following:
    1. Stop the Docker container using the directory
    2. Delete all snapshots, clones, holds, etc.
    3. Delete the directory (rm -r <dataset-path>)
    4. Delete the dataset using ZFS Master or the CLI.

 

Exactly what I did, and it worked. Just a heads up, as I don't think that's something most people would be able/willing to do. But it happening the way it did means I don't think it's a ZFS Master plugin issue, and instead a ZFS/Docker issue, as even a reboot didn't resolve it.

 

Link to comment
  • 2 weeks later...

ZFS Master appears to be chewing tons of CPU every 30 s, in line with the "refresh interval" specified in the plugin settings. I noticed my server was constantly cycling, with two of the cores maxed out every few seconds:

[screenshot: two cores maxed out on a 30 s cycle]

 

So I started looking for the cause:

[screenshot: the processes scanning the ZFS pools on the same 30 s cycle]

 

Sure enough, those are my ZFS pools, and the timing matches the refresh interval.
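
For anyone who wants to reproduce the check, sampling the top CPU consumers a few times makes the recurring process easy to spot (a sketch; plain procps tools, nothing plugin-specific):

# sample the top CPU consumers every 10 s and watch for the recurring process
for i in 1 2 3; do ps -eo pid,pcpu,args --sort=-pcpu | head -n 5; sleep 10; done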

 

Why so much CPU usage?

Link to comment

This doesn't seem to be directly related to the plugin, as honestly there isn't much processing in the plugin. It's probably something else running at the same interval; try changing the refresh interval in the plugin settings and check whether the problem persists.

Link to comment
On 9/6/2023 at 8:33 PM, Iker said:

I was wondering if you would consider a setting/button that allows the option for manual update only: Yes, I was finally able to get some time to work on the next update, and that's one of the planned features.

 

I will kiss you if this makes the list!  Purely consensual, of course...

  • Haha 1
Link to comment
