[PLUGIN] ZFS Master


Iker

Recommended Posts

If you have set the "No Refresh" option, the plugin doesn't load any data from the pools, so I'm not sure what else could be reading from or writing to the disk. You can upgrade the pool to the newest format to rule out compatibility issues.

 

About accessing snapshots, there are several ways. If you only need some files, you can access the hidden ".zfs" folder located at the dataset mount point; from there you can copy files pretty easily. You can also revert the dataset to a specific snapshot from the "Admin Datasets" dialog in the plugin. The other option is to mount the snapshot to another folder and copy any data that you may need.
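For reference, the three approaches map to commands like these. The dataset and snapshot names are examples, and the small helper just makes the hidden-path construction explicit:

```shell
# Every mounted dataset exposes read-only snapshot copies under
# <mountpoint>/.zfs/snapshot/<snapshot_name> (names below are examples).
snap_path() { echo "$1/.zfs/snapshot/$2"; }
snap_path /mnt/cache/appdata daily-2024-02-01
# Copy a single file back out of the snapshot:
#   cp -a "$(snap_path /mnt/cache/appdata daily-2024-02-01)/myfile" /mnt/cache/appdata/
# Revert the whole dataset (destroys data written after the snapshot):
#   zfs rollback cache/appdata@daily-2024-02-01
# Or mount the snapshot read-only somewhere else and copy from there:
#   mount -t zfs cache/appdata@daily-2024-02-01 /mnt/restore
```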

Edited by Iker
Link to comment

Thanks, got that.

 

Also, I have a few questions:

1. I don't see a "Scrub" button in the plugin. Was it removed, or does some setting activate it? I'm using version 2023.12.08.48.


2. Is it possible to remove a dataset via the plugin? It's easy to create one, but to remove it I have to use the CLI.

 

BTW: "No Refresh" doesn't affect Unassigned disks, so I just formatted them to btrfs and xfs.

Link to comment
  1. Unraid has built-in scrub functionality; you can check it on the pool properties.

  2. Yes, you must activate the destruction mode in the settings; I recommend you read the first post in this thread to check the plugin's functionality.

  3. "No refresh" affects either all or none of the pools, so something else is happening with the ZFS pool from unassigned disks.
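On point 1, the built-in scrub corresponds to these CLI commands; the pool name is an example, and the commands are echoed here as a dry run so the sketch can run anywhere:

```shell
pool=cache                        # example pool name
scrub_cmds() {
  echo "zpool scrub $1"           # start a scrub of the pool
  echo "zpool status $1"          # check scrub progress and results
}
scrub_cmds "$pool"
```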

Link to comment
13 hours ago, Iker said:
  1. Unraid has built-in scrub functionality; you can check it on the pool properties.

  2. Yes, you must activate the destruction mode in the settings; I recommend you read the first post in this thread to check the plugin's functionality.

  3. "No refresh" affects either all or none of the pools, so something else is happening with the ZFS pool from unassigned disks.

1. Yes, it does, but I saw a "Scrub" button in some tutorial about your plugin, and I don't see it listed as removed in the latest changelog.

2. Thank you, I activated it, but after destroying I see this:

[screenshot]

3. "No refresh" only works for array disks. I don't know why, but unassigned disks will never spin down.

Link to comment
4 hours ago, d3m3zs said:

2. Thank you, I activated it, but after destroying I see this:

[screenshot]

This usually indicates access from somewhere else, for example a Docker container, a file manager, an antivirus, the mover, etc., that is in the directory or has an open file in it.
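A sketch of how to confirm this, assuming a Linux shell. On the server you would point lsof or fuser at the real mountpoint (the path in the comment is an example); the self-contained demo below uses /proc and a temp directory instead of ZFS:

```shell
# On a live system: lsof +D /mnt/cache/docker   (or: fuser -vm /mnt/cache/docker)
# Portable demonstration of spotting an open file handle:
d=$(mktemp -d)
touch "$d/held.log"
tail -f "$d/held.log" > /dev/null &   # background process keeps the file open
pid=$!
sleep 1
open_count=$(ls -l "/proc/$pid/fd" 2>/dev/null | grep -c held.log)
echo "open handles: $open_count"      # a non-zero count means the path is busy
kill "$pid"
```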

Link to comment
1 hour ago, Revan335 said:

This usually indicates access from somewhere else, for example a Docker container, a file manager, an antivirus, the mover, etc., that is in the directory or has an open file in it.

None of them, but maybe after a reboot I will be able to remove it.

Link to comment
38 minutes ago, d3m3zs said:

None of them, but maybe after a reboot I will be able to remove it.

All this plugin really does is give you buttons that run the same commands you would type in the shell. 

 

The shell command would also fail in many circumstances. Perhaps you should review the OpenZFS documentation to learn how it works. 

Link to comment
  • 2 weeks later...
On 1/12/2024 at 7:57 PM, FirbyKirby said:

Yep. My docker folder is just a folder, not a dataset.

I have the same setup as him, since according to Unraid's changelog we are now supposed to use /mnt/user/docker for Docker installations. But what do I have to exclude now? Or, better said, how can I stop my Main page from being dead?

 

Before, I used

/system/.*

which worked well. Now it doesn't. I also guess my docker folder is not a dataset. Should I convert it to one? The entries I don't want to see on the Main page and want to exclude are these:

 

NAME                                                                               USED  AVAIL  REFER  MOUNTPOINT
cache/system/5f8a659c775bcd2166be79aab33465fe7408dcc9fb8ad4d7a2b10ebf94a53522     44.1M   667G   117M  legacy
cache/system/5fd422e174cf30dac4eaff286ab717c59366e6694072b879f704a9d0928caa7c     21.4M   667G   271M  legacy
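Assuming the plugin treats the exclusion field as a regular expression matched against dataset names, a pattern can be previewed with grep -E before saving it (the dataset names below are taken from the listing above):

```shell
# Entries matching the pattern would be hidden; everything else stays visible.
printf '%s\n' \
  'cache/system/5f8a659c775bcd2166be79aab33465fe7408dcc9fb8ad4d7a2b10ebf94a53522' \
  'cache/system/5fd422e174cf30dac4eaff286ab717c59366e6694072b879f704a9d0928caa7c' \
  'cache/appdata' |
  grep -Ev 'cache/system/.*'     # only cache/appdata survives the filter
```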

 

Link to comment

Hi Folks, a new update is live with the following changelog:

 

2024.02.9

  • Add - Convert directory to dataset functionality
  • Add - Written property for snapshots
  • Add - Directory listing for root datasets
  • Fix - Tabbed view support
  • Fix - Configuration file associated errors
  • Fix - Units nomenclature
  • Fix - Pool information parsing errors
  • Remove - Unraid Notifications 

How does "Convert to Dataset" work?

Pretty simple; it's divided into three steps:

  • Rename Directory: Source directory is renamed to <folder_name>_tmp_<datetime>
  • Create Dataset: A dataset with the directory's original name is created in the same pool (and path); the dataset options are the default ones.
  • Copy the data: Data is copied using the command "rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.

 

If anything fails in step 1 or 2, the plugin returns an error, and the folder is renamed back to its original name. If something fails in step 3, an error is returned, but the dataset and directory remain intact.
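The steps above can be sketched as a dry run. The paths and dataset name are examples, the commands are echoed rather than executed, and this is not the plugin's actual code:

```shell
# Hypothetical sketch of the three conversion steps.
convert_to_dataset() {
  src="$1"; ds="$2"
  tmp="${src}_tmp_$(date +%Y%m%d%H%M%S)"
  echo "mv $src $tmp"                                     # 1. rename directory
  echo "zfs create $ds"                                   # 2. create dataset (default options)
  echo "rsync -ra --stats --info=progress2 $tmp/ $src/"   # 3. copy the data
}
convert_to_dataset /mnt/cache/docker cache/docker
```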

 

As always, don't hesitate to report any errors, bugs, or comments about the Plugin functionality.

 

Best,

Edited by Iker
Link to comment
14 minutes ago, Iker said:

are the default ones.

  • Copy the data: Data is copied using the command "rsync rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.

 

Is rsync rsync a typo? Would you consider adding -X to preserve extended attributes for things like the Dynamic File Integrity plugin?

Link to comment

Yes, it was a typo; thanks for the note. And yes, it can be added without too much trouble; I'll wait a little for more feedback, but rest assured it will make it into the next version.

 

BTW, I just noticed that the -r option was kept; that's an error (-a already includes -r). Please suggest other rsync options that may be beneficial for the copy process.

Edited by Iker
Link to comment

What do you mean? Is the plugin failing to create the dataset? Is the data incomplete in the new dataset? If it's just that the temp folder is not deleted, that's by design and is stated in the Convert to Dataset dialog. To avoid data loss, it's preferable to keep the original data in place and delete it by hand once you no longer need it.

Link to comment
On 2/7/2024 at 2:33 PM, Iker said:

Not so sure if I'm following exactly, but yeah, in general with ZFS it's better to have a Docker dataset, so you can exclude it from the list and speed up loading the pool information.

I have now converted the Docker folder to a dataset and tried to exclude it with /docker/.* and also cache/docker/.*, but it does not work. It's frustrating, and I wish I had never changed the folder...

 

[screenshot]

Edited by sasbro97
Link to comment
12 minutes ago, sasbro97 said:

I have now converted the Docker folder to a dataset and tried to exclude it with /docker/.* and also cache/docker/.*, but it does not work. It's frustrating, and I wish I had never changed the folder...

 

From your screenshot, it doesn't look like you configured that correctly. Check this comment and how that user dealt with the situation, because it is exactly the same as yours.

 

 

Link to comment
1 hour ago, Iker said:

What do you mean? Is the plugin failing to create the dataset? Is the data incomplete in the new dataset? If it's just that the temp folder is not deleted, that's by design and is stated in the Convert to Dataset dialog. To avoid data loss, it's preferable to keep the original data in place and delete it by hand once you no longer need it.

OK, then it's correct. Thanks!

Link to comment
1 hour ago, Iker said:

 

From your screenshot, it doesn't look like you configured that correctly. Check this comment and how that user dealt with the situation, because it is exactly the same as yours.

 

 

I even reinstalled Docker now and it's the same. No containers exist, but it still looks exactly the same... I have Docker installed at /mnt/cache/docker in a dataset.

 

Okay, I realized it's actually coming from the system folder.

zfs list -o name,mountpoint,mounted,my.custom:property

helps to see it. But the thing is that I can't exclude it anymore. Back then I used /system/.* as the exclusion, but it isn't working anymore. cache/system/.* is not working either. Any ideas?

Edited by sasbro97
Link to comment

This is my setup. 

 

I have created a share named docker in the Unraid GUI, with the cache pool as primary and nothing as secondary. On 6.12, shares on ZFS are created as datasets.

 

In the settings for docker I have the directory set to /mnt/user/docker/

/docker/.* as exclusion pattern in ZFS master.

 

If you have done it in other ways, you may have to delete all the datasets that Docker created before. This has been discussed earlier in this thread. You can't just delete the docker directory; the datasets will stay. They need to be destroyed.
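Cleaning up the leftover datasets might look like this. The dataset name is an example, and the commands are echoed as a dry run because destroy is recursive and irreversible:

```shell
old=cache/docker                      # example: the share's old dataset path
echo "zfs list -r $old"               # first, list the child datasets Docker created
echo "zfs destroy -r $old"            # then destroy the dataset and all its children
```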

Edited by Niklas
Link to comment
