[PLUGIN] ZFS Master


Iker

Recommended Posts

For the last few months I've had an imported TrueNAS pool working perfectly. Two days ago I had to reboot my server for the first time, and now there's this weird issue with my pool. My Docker containers can only see a few folders in the pool, even though none of the configuration has changed. Same thing with my SMB setup: I can only access certain folders through those shares.

Any insights as to what to try?

Link to comment
15 hours ago, erfanghasemy said:

For the last few months I've had an imported TrueNAS pool working perfectly. Two days ago I had to reboot my server for the first time, and now there's this weird issue with my pool. My Docker containers can only see a few folders in the pool, even though none of the configuration has changed. Same thing with my SMB setup: I can only access certain folders through those shares.

Any insights as to what to try?

I doubt this is related to the ZFS Master plugin, so you might want to open a general support thread for this.

 

On a side note, from personal experience I've had issues with Unraid resetting user passwords and permissions, though this happened when rebuilding the Unraid USB from a backup. It is quite simple to reset user passwords and permissions, so you may as well try that.

Edited by Laov
Link to comment

I cannot get the exclusions to work. Reading through these posts, I am pretty confident that I should set it to /docker-sys/.*

I have tried variations on that, and also tried including the cache drive name /nvme_1_nvme/

When I first installed ZFS Master, I had a parent folder above my docker-sys folder: /docker/docker-sys/. When this was my folder scheme, I remember them being excluded. However, I have now removed that parent folder and the docker-sys folder is its own dataset.

I have dozens of these docker images listed.

 

[screenshot attached]

Link to comment
49 minutes ago, sfef said:

I cannot get the exclusions to work. Reading through these posts, I am pretty confident that I should set it to /docker-sys/.*

I have tried variations on that, and also tried including the cache drive name /nvme_1_nvme/

When I first installed ZFS Master, I had a parent folder above my docker-sys folder: /docker/docker-sys/. When this was my folder scheme, I remember them being excluded. However, I have now removed that parent folder and the docker-sys folder is its own dataset.

I have dozens of these docker images listed.

 

[screenshot attached]

I had to delete my folder, but it was not created by ZFS itself; it was an old btrfs folder from before ZFS:

https://docs.unraid.net/unraid-os/release-notes/6.12.0/#zfs-pools

https://docs.unraid.net/unraid-os/release-notes/6.12.0/#docker

 

Then my exclusion for the /mnt/cache/docker/ path worked.

 

With an exclusive share.

Edited by Revan335
Link to comment
3 hours ago, sfef said:

I cannot get the exclusions to work. Reading through these posts, I am pretty confident that I should set it to /docker-sys/.*

I have tried variations on that, and also tried including the cache drive name /nvme_1_nvme/

When I first installed ZFS Master, I had a parent folder above my docker-sys folder: /docker/docker-sys/. When this was my folder scheme, I remember them being excluded. However, I have now removed that parent folder and the docker-sys folder is its own dataset.

I have dozens of these docker images listed.

 

[screenshot attached]

 

I had to tinker with mine to get it to work as well.

 

My setup, Docker-Directory is set to: 

/mnt/zfspool/.docker_system/docker_directory/

 

ZFS Master Exclusion is set to:

.docker_system

 

 

 

Link to comment

Please keep in mind that ZFS Master doesn't use regex but Lua patterns for matching the exclusion folder, which comes with some downsides. In your particular case @sfef, at first sight your pattern seems fine, but it actually contains a reserved symbol, "-"; combined with the preceding "r" it means something completely different (a lazy repetition instead of a literal hyphen). You can check your patterns here:

 

https://gitspartv.github.io/lua-patterns/

 

Long story short, this should do the trick: "/docker%-sys/.*"
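
If you want to sanity-check a pattern from the console, here is a minimal sketch, assuming a Lua interpreter happens to be available (Unraid doesn't necessarily ship one; the web tester linked above works just as well):

lua -e 'print(("/docker-sys/appdata"):match("/docker-sys/.*"))'   # prints nil - the unescaped "-" acts as a lazy repetition of "r", not a literal hyphen
lua -e 'print(("/docker-sys/appdata"):match("/docker%-sys/.*"))'  # prints /docker-sys/appdata - "%-" matches the literal hyphen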

 

Additional doc on Lua patterns:

 

https://www.fhug.org.uk/kb/kb-article/understanding-lua-patterns/

 

 

Edited by Iker
  • Thanks 1
Link to comment

You should add code to show the % used and free consistent with the display type selected in 'Used / Free Columns' and with the color theme:

[screenshot attached]

Your % Used and Free is gray in my setup:

[screenshot attached]

This is the UD display for this device:

[screenshot attached]

When doing GUI work, we want to keep the UI consistent.  Feel free to PM me and I can show you the code to accomplish this.

  • Like 1
Link to comment

@Iker Firstly, as always, I would like to thank you for this great plugin and for all that you do for this great community.

I have encountered an issue with a ZFS disk mounted via UD not showing up in ZFS Master.

After a discussion with the UD developer @dlandon in the UD thread, it appears this has to do with ZFS compatibility in the upcoming Unraid 6.13 release: UD now accommodates 6.13 when formatting a disk to ZFS, rendering it unmountable in ZFS Master... see below

 

 

(P.S. I hope I have relayed this information correctly; I apologise in advance if this isn't the case, as it would be due to a lack of understanding on my part.)



 

Link to comment
On 11/13/2023 at 9:45 AM, wacko37 said:

@Iker Firstly, as always, I would like to thank you for this great plugin and for all that you do for this great community.

I have encountered an issue with a ZFS disk mounted via UD not showing up in ZFS Master.

After a discussion with the UD developer @dlandon in the UD thread, it appears this has to do with ZFS compatibility in the upcoming Unraid 6.13 release: UD now accommodates 6.13 when formatting a disk to ZFS, rendering it unmountable in ZFS Master... see below

 

 

(P.S. I hope I have relayed this information correctly; I apologise in advance if this isn't the case, as it would be due to a lack of understanding on my part.)



 

If anyone runs into this issue, you must run this command in the Unraid console to make the zpool mountable in ZFS Master:

 

UNRAID:~# zpool upgrade (zpool name)

 

The command will upgrade (downgrade, really) the ZFS disk to the 6.12 ZFS features.
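
For anyone unsure which pools are affected: running zpool upgrade with no arguments only lists the pools whose feature flags aren't all enabled under the current ZFS version and makes no changes, so you can check before committing. A rough sketch (the pool name is just a placeholder):

UNRAID:~# zpool upgrade           # list pools with features not enabled on this system; changes nothing
UNRAID:~# zpool upgrade mypool    # enable every feature supported by the running ZFS module on "mypool"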

Edited by wacko37
layout - extra info
  • Thanks 1
Link to comment

I am sure this is not a problem with this plugin; I just wanted some clarification from someone here who may have run into the same problem. Basically, I used @SpaceInvaderOne's script to create datasets of all my appdata folders. Today I made a big mess in Plex: I deleted a library, and it wanted to rescan the whole thing, which takes forever for me. So instead I restored my snapshot from earlier today without thinking. This caused all the shares on my Unraid server to disappear, and no matter what I did I could not get them back with the server up. So I rebooted it and they all got re-added. I am guessing that since ZFS works at the block level, this freaked FUSE out and caused all the shares to disappear. Is there some guideline on how to restore snapshots without this happening? Or is this the only workaround?

Link to comment
On 11/12/2023 at 6:45 PM, wacko37 said:

I have encountered an issue with a ZFS disk mounted via UD not showing up in ZFS Master.

After a discussion with the UD developer @dlandon in the UD thread, it appears this has to do with ZFS compatibility in the upcoming Unraid 6.13 release: UD now accommodates 6.13 when formatting a disk to ZFS, rendering it unmountable in ZFS Master... see below

Thanks for the info. Currently I don't have access to the 6.13 beta version; as soon as it is released to the general public, I will try to reproduce your issue and check why the plugin is not picking up the pools.

 

7 hours ago, Michel Amberg said:

Is there some guideline on how to restore snapshots without this happening? Or is this the only workaround?

 

That seems like something that should go in the General Support forum; I'm not sure I follow exactly what is going on with your datasets.

Edited by Iker
  • Thanks 1
Link to comment
46 minutes ago, Iker said:

Thanks for the info. Currently I don't have access to the 6.13 beta version; as soon as it is released to the general public, I will try to reproduce your issue and check why the plugin is not picking up the pools.

Just format a disk as ZFS with the latest version of UD in 6.12 and the issue will show.

  • Thanks 1
Link to comment
8 minutes ago, andyd said:

Thanks for the plugin!

 

I set this up today - it picks up one of the pool drives, but I have two formatted with ZFS. Any reason one would be ignored?

 

Can you please share the result of the following command (as text):

 

zpool list -v

 

Best

Link to comment

Could it be how I moved over the data? This drive has been sitting around for a while. Yesterday I decided to set up ZFS snapshotting.
 

I moved a folder I had on another drive (Nextcloud) to this drive using unbalance. Not sure if that is the reason?

 

Is there some way data should be moved to the drive? Do I have to manually create datasets for them to appear in ZFS Master?

Link to comment
16 hours ago, Iker said:

This thread over here seems like a better place for your questions: https://forums.unraid.net/forum/37-pre-sales-support/

OK.

 

Due to problems in OpenZFS 2.2, it is good that Unraid (6.12.5-rc1) still uses the older ZFS version (2.1.13-1).

But OpenZFS 2.1.13-1 is also still affected by a problem.

"So are we tracking three different disk corruption issues?

With strict hole reporting (i.e. zfs_dmu_offset_next_sync set to 1). This has been a silent disk corruption issue since 2.1.4 and a fix is not in any current release.

With block cloning which results in silent disk corruption. This is only an issue in 2.2.0 and has been resolved in 2.2.1 by disabling it, but no long term fix exists yet.

When using LUKS which results in write errors with ZFS on 2.2.1. A fix is not in any current release.

So the safest thing to do for now to avoid further data loss is just use 2.1.13 and set zfs_dmu_offset_next_sync=0? Is that the blessed thing to do by the OpenZFS team until an official release is made for 2.1.14 and 2.2.2?"

https://github.com/openzfs/zfs/issues/15526#issuecomment-1825113314

 

So if zfs_dmu_offset_next_sync is set to 1, then it is best to set the value to 0 until the problem has been solved by an update: 

echo 0 >> /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
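
As a sanity check you can read the parameter back afterwards; note the echo only lasts until the next reboot, so it has to be re-applied on every boot (for example from the go file) until a fixed ZFS release ships. A minimal sketch:

cat /sys/module/zfs/parameters/zfs_dmu_offset_next_sync        # should now print 0; 1 means strict hole reporting (the affected code path) is still on
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync   # add this line to /boot/config/go to re-apply it at boot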
Edited by boyish-stair3681
Link to comment
