[PLUGIN] ZFS Master


Iker


7 hours ago, isvein said:

What does an amber color on the snapshot icon mean? :)

 

It's associated with the "Dataset Icon Alert Max Days" option in the settings: it means that the latest snapshot of the dataset is older than the number of days specified in that setting.

 

@frodr This is a known issue with ZFS on Unraid, not this particular plugin; here is a nice summary on how to destroy a dataset if you face that error.

 

 

Edited by Iker

I have a question; maybe I don't understand how I should work with the plugin, but I expected that when I create a snapshot of a dataset, it would contain all the datasets inside it. Instead, I can see that when I created a snapshot of (in my case) the docker dataset, it takes 0 space and all its folders are empty, although a snapshot was created for each dataset inside docker.

It doesn't matter whether I check "Recursively create snapshots of all descendent datasets" or not; the nvme/docker snapshot is empty each time.


 

I noticed this because I executed the command

zfs send nvme/docker@{my_snapshot} | zfs recv disk3/zfs_backups/dockers

and saw that all the folders in "disk3/zfs_backups/dockers" were empty, which is what started my investigation.

Edited by d3m3zs

ZFS snapshots don't work like you think; a snapshot applies exclusively to one and only one dataset (there are exceptions, like clones, but that's another subject). The "recursive" option means that you will also take a snapshot of every child dataset. However, that doesn't mean that the parent dataset's snapshot will include the data from the child datasets; those snapshots are associated with their respective datasets.

 

BTW, check the send command documentation on how to send snapshots incrementally and process datasets recursively.
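For illustration, using the pool names from the post above (the snapshot names are made up), a recursive snapshot plus a replication-style send would look roughly like this:

```shell
# Snapshot nvme/docker and every child dataset (-r)
zfs snapshot -r nvme/docker@backup1

# -R sends the whole subtree (all child datasets and their snapshots)
# in one replication stream; -u receives without mounting
zfs send -R nvme/docker@backup1 | zfs recv -u disk3/zfs_backups/dockers

# Later runs only need the changes since the previous snapshot (-i)
zfs snapshot -r nvme/docker@backup2
zfs send -R -i backup1 nvme/docker@backup2 | zfs recv -u disk3/zfs_backups/dockers
```

This is only a sketch; depending on the state of the target, the incremental receive may additionally need -F to roll back local changes.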

Edited by Iker
42 minutes ago, Iker said:

ZFS snapshots don't work like you think; a snapshot applies exclusively to one and only one dataset (there are exceptions, like clones, but that's another subject). The "recursive" option means that you will also take a snapshot of every child dataset. However, that doesn't mean that the parent dataset's snapshot will include the data from the child datasets; those snapshots are associated with their respective datasets.

 

BTW, check the send command documentation on how to send snapshots incrementally and process datasets recursively.

Thank you.

It seems the best way to back up is either not to create child datasets and just use regular folders, or to write a script that sends all the children one by one.

2 hours ago, d3m3zs said:

It seems the best way to back up is either not to create child datasets and just use regular folders, or to write a script that sends all the children one by one.

 

No for the backup, yes for the documentation. You can configure policies and send all the dataset descendants without too much trouble. Check Syncoid or ZnapZend; those solutions automatically take care of that part and help you stay organized.
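For example, with Syncoid (dataset names assumed from the earlier posts), a single command replicates a dataset and all of its descendants, handling incremental sends on subsequent runs:

```shell
# Snapshot and replicate nvme/docker plus all child datasets to the backup pool;
# re-running the same command later sends only the changes since the last run
syncoid --recursive nvme/docker disk3/zfs_backups/dockers
```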


Directory Listing disappeared after update to 2024.02.10.63

 

Before the update to v2024.02.10.63, everything was OK; the convert-directory-to-dataset functionality was working fine with 2024.02.9.

 

But after the update to the latest version, all directory listings disappeared from the ZFS Master GUI on the main page.

I tried disabling them all, then enabling them per dataset and also in the plugin configuration, with no luck.
I tried rebooting, but no change.

 

Am I doing something wrong?


Thanks for the fixes and new features in the last versions.

 

Maybe you could explain how Convert to Dataset works in more detail, in the plugin, the help, the first post, ...

Including things like:

On 2/9/2024 at 8:51 PM, Iker said:

What do you mean? Is the plugin failing to create the dataset? Is the data incomplete in the new dataset? If it is just that the temp folder is not deleted, that's by design and is stated in the Convert to Dataset dialog. If we want to avoid data loss, it's preferable to keep the original data in place and delete it by hand once you no longer need it.

 

Also, could you change the version schema? For example, to something like 2024.02.10b, as other plugins use, so the plugin sorts correctly and ZFS Master isn't listed first under Plugins when sorting by version even though others, like 2024.02.12, are newer.

Edited by Revan335

How does the convert function work?

I just nuked my system share thinking it was going to be converted.

It would be good to understand how it works so I can avoid doing it again.

 

  

On 2/9/2024 at 5:41 AM, Iker said:

Hi Folks, a new update is live with the following changelog:

 

2024.02.9

  • Add - Convert directory to dataset functionality
  • Add - Written property for snapshots
  • Add - Directory listing for root datasets
  • Fix - Tabbed view support
  • Fix - Configuration file associated errors
  • Fix - Units nomenclature
  • Fix - Pool information parsing errors
  • Remove - Unraid Notifications 

How does Convert to Dataset work?

Pretty simple; it is divided into three steps:

  • Rename Directory: Source directory is renamed to <folder_name>_tmp_<datetime>
  • Create Dataset: A dataset with the directory's original name is created in the same pool (and path); the dataset options are the default ones.
  • Copy the data: Data is copied using the command "rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.

 

If anything fails on steps 1 or 2, the plugin returns an error, and the folder is renamed back to its original name. If something fails in step 3, an error is returned, but the dataset and directory remain intact.
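The three steps above roughly correspond to the following commands (the paths and names are illustrative, not the plugin's actual code):

```shell
SRC=/mnt/nvme/docker                      # directory being converted
TMP="${SRC}_tmp_$(date +%Y%m%d_%H%M%S)"

mv "$SRC" "$TMP"                          # step 1: rename the source directory

zfs create nvme/docker                    # step 2: dataset with the original name

# step 3: copy the data into the new dataset's mountpoint
rsync -ra --stats --info=progress2 "$TMP/" "$SRC/"

# the _tmp_ directory is left in place on purpose; remove it by hand
# once you have verified the copy
```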

 

As always, don't hesitate to report any errors, bugs, or comments about the Plugin functionality.

 

Best,

 

Edited by dopeytree
On 2/9/2024 at 10:19 PM, sasbro97 said:

I converted the Docker folder to a dataset and tried to exclude it with /docker/.* and also cache/docker/.*, but it does not work. It's frustrating, and I wish I had never changed the folder...

 


Does nobody have an idea how to exclude these legacy-mountpoint snapshots? ZFS Master is unusable while they are shown.

 

The strange thing is that the docker folder (dataset) shows no snapshots. But I guess we can all agree that these cryptic snapshots must come from Docker.


 

Edit: no way! I removed the exclude line with /docker/.* to check whether I could solve it somehow, and now even more snapshots have appeared, this time really in the Docker folder. So my cache folder/pool is creating these strange ones too. I have never seen this anywhere else before. What could be the reason?

Edited by sasbro97

@Nonoss I just reproduced the bug; let me dig a little deeper and I'll release a new version with the fix.

@Revan335 I'll see what I can do, but it's not a priority right now.

@dopeytree Can you elaborate on your question? I mean, the description is very precise about the steps the plugin follows to convert a directory to a dataset.

@sasbro97 You have to delete those datasets at the root manually.


Hello All,

 

When converting a Directory to a Dataset, I was receiving the message, "Too many open files in system" in the Convert Result dialogue.  This occurred when the directory to be converted contained invalid characters for a ZFS Dataset name.  In my case, it was a blank space in the directory named "Ubuntu 23.04".

 

The convert action renames the directory, appending _tmp_<dateTime> to the name, creates the dataset with the blank space in its name, briefly displays the Copying dialogue, then terminates, displaying the Convert Result dialogue shown below.

 

[Screenshot: Convert Result dialogue reporting "Too many open files in system"]

 

I believe creating a dataset name with a blank space character is incorrect per the ZFS Component Naming Requirements.

 

Could an edit check be added to the Convert to Dataset action?  Perhaps with a dialogue noting the issue and prompting the user to specify a valid Dataset name that can be re-checked upon submission?
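As a sketch of what such a pre-flight check could look like (the function name and the accepted character set are my assumptions, not the plugin's code; note that current OpenZFS does accept spaces in dataset names, even though other tooling may choke on them):

```shell
# Accept only characters OpenZFS allows in a dataset name component:
# alphanumerics, underscore, period, colon, hyphen, and (in OpenZFS) space.
valid_dataset_component() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_.: -]+$'
}

valid_dataset_component "Ubuntu 23.04" && echo "name is acceptable"
valid_dataset_component "bad/name"     || echo "name needs fixing"
```

The dialogue could run a check like this on submission and prompt for a corrected name before creating the dataset.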

 

This is a great tool.  Thanks for making it available!

 

Best Regards,
Larz

Edited by Larz
Included more images

Hello Iker, thank you for your reply.

 

The rsync was long done at the time of the convert action.  I reviewed ulimit and the system reported unlimited.  Running ulimit -n reports 40960 file descriptors.  I tested setting the limit to 42960, but received the same result.  I had re-tested after a server reboot as well.

 

In the end, the issue was a blank space character in the directory name.  I've revised the post above with the details.

 

I believe the blank space is invalid in a dataset name per the ZFS documentation (ZFS Component Naming Requirements).

 

Adding an edit check to the convert dialogue displaying a message indicating that the directory name contains invalid characters and allowing a revised Dataset name to be specified and re-checked would be helpful.

 

Nonetheless, I'm back in business and exercising more discipline in my directory naming.  :)

 

I'll test following the failed copy process with a manual copy from the renamed directory to the new Dataset.  I'm interested to know how ZFS behaves with an invalidly named Dataset.

 

Thanks for your help and for this fine product,
Larz

1 hour ago, Larz said:

In the end, the issue was a blank space character in the directory name.  I've revised the post above with the details.

 

Hmmm, maybe there is a bug on the plugin side when handling dataset names with spaces, because those are allowed in OpenZFS. I'll check whether that's the case and issue a new update with the fix.


Hi All,

 

I find that the Convert to Dataset action is dismissing the Copying Directory progress dialogue before the copy is complete.  I have launched the Convert to Dataset operation from the action button menu.  I see that the directory is renamed, the Dataset is created, and see the Copying Directory dialogue showing the copy progress.   I have left the computer alone for half an hour while the directory containing a 300+GB img file is copied into the Dataset.  Upon returning, I see that the copying dialogue has been dismissed and the system is displaying the ZFS Master page.  The copy is still in process.  Htop shows the rsync process is running, I see disk I/Os, and the drive lights blinking.

 

If I select a Move/Rename action on a different directory while this is running, the system displays the Move/Rename dialogue with the field for specifying the new name/location; after less than a second, the copy progress indicator is displayed in place of the editable field. Images below.

 

After selecting Action, Move/Rename:

Move Rename dialogue

 

A fraction of a second later:

Copy Progress on Move/Rename Dialogue

The name field is replaced with the progress indicator for the rsync copy that is running in the background.

 

I have seen the Copying Directory dismissed before the copy has completed several times.  There is no cancel button on this dialogue, and upon return to the computer, I see it is still on the ZFS Master page.

 

Is there anything I may be doing that is dismissing the dialogue?

 

For now, I am checking the Dataset contents watching for the in-progress files with the arbitrary six character extension before attempting further operations.

 

Please let me know what you think.

 

Thanks,
Larz

Edited by Larz
58 minutes ago, Iker said:

 

Hmmm, maybe there is a bug on the plugin side when handling dataset names with spaces, because those are allowed in OpenZFS. I'll check whether that's the case and issue a new update with the fix.

Good to know that spaces are allowed in OpenZFS.


Hi @Larz, the situation you mention is something I imagine may happen with large transfers, as the dialog is dismissed if the page refreshes or if any other notification/dialog appears. However, since the rsync process keeps running in the background, everything should be fine, and when it finishes, the plugin should give you a new dialog with the result of the process.

 

I'm currently working on a progress bar that appears to the right of the dataset name to indicate any ongoing transfers and to allow you folks to perform multiple transfers simultaneously.

On 2/9/2024 at 7:41 AM, Iker said:

Hi Folks, a new update is live with the following changelog:

 

2024.02.9

  • Add - Convert directory to dataset functionality
  • Add - Written property for snapshots
  • Add - Directory listing for root datasets
  • Fix - Tabbed view support
  • Fix - Configuration file associated errors
  • Fix - Units nomenclature
  • Fix - Pool information parsing errors
  • Remove - Unraid Notifications 

How does Convert to Dataset work?

Pretty simple; it is divided into three steps:

  • Rename Directory: Source directory is renamed to <folder_name>_tmp_<datetime>
  • Create Dataset: A dataset with the directory's original name is created in the same pool (and path); the dataset options are the default ones.
  • Copy the data: Data is copied using the command "rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.

 

If anything fails on steps 1 or 2, the plugin returns an error, and the folder is renamed back to its original name. If something fails in step 3, an error is returned, but the dataset and directory remain intact.

 

As always, don't hesitate to report any errors, bugs, or comments about the Plugin functionality.

 

Best,

This is a great update! Thank you!

But it would also be great to automatically remove the tmp folders after the rsync command is done.

 

For example, I selected a folder and decided to convert it; the dialog appeared with a progress bar and then closed. I checked, and only 20GB had been copied to the new dataset (it should be 80GB). After 10-20 minutes I checked again and saw that the new dataset has all 80GB of data, and the tmp folder also still holds 80GB of data.

So, as a user, I don't understand whether the rsync job finished successfully, and I don't understand why I still have the tmp folder.

So now I have to execute one more rsync command manually with --dry-run to be sure that everything was transferred.
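For anyone in the same situation, a dry run like the following (the paths are illustrative) lists anything that still differs without copying a byte:

```shell
# -n/--dry-run changes nothing; -i itemizes what *would* be transferred.
# No itemized lines means the tmp folder and the new dataset already match.
rsync -ran --itemize-changes /mnt/nvme/docker_tmp_20240210/ /mnt/nvme/docker/
```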

