Iker

Community Developer

Posts posted by Iker

  1. 12 minutes ago, sasbro97 said:

    I converted the Docker folder now to a dataset and tried to exclude this with /docker/.* or also cache/docker/.* but it does not work. It's frustrating and I wish I had never changed the folder...

     

    From your screenshot, it doesn't look like you configured that correctly. Check this comment and see how the user dealt with the situation, because it is exactly the same as yours.

     

     

  2. What do you mean? Is the plugin failing to create the dataset? Is the data incomplete in the new dataset? If it is just that the temp folder is not deleted, that's by design and is stated in the Convert to Dataset dialog. To avoid data loss, it's preferable to keep the original data in place and delete it by hand once you no longer need it.

    • Thanks 1
  3. Yes, it was a typo; thanks for the note. And yes, it can be added without too much trouble; I'll wait a little bit for more feedback, but rest assured it will make it into the next version.

     

    BTW, I just noticed that the -r option was kept; that's an error (-a already includes -r). Please suggest any other rsync options that may be beneficial for the copy process.
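
    For reference, a sketch of the corrected copy command (same placeholders as in the release post below; not the final implementation):

    rsync -a --stats --info=progress2 <source_directory> <dataset_mountpoint>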

  4. Hi Folks, a new update is live with the following changelog:

     

    2024.02.9

    • Add - Convert directory to dataset functionality
    • Add - Written property for snapshots
    • Add - Directory listing for root datasets
    • Fix - Tabbed view support
    • Fix - Configuration file associated errors
    • Fix - Units nomenclature
    • Fix - Pool information parsing errors
    • Remove - Unraid Notifications 

    How Does Convert to Dataset Work?

    Pretty simple; it's divided into three steps:

    • Rename Directory: Source directory is renamed to <folder_name>_tmp_<datetime>
    • Create Dataset: A dataset with the directory's original name is created in the same pool (and path); the dataset options are the default ones.
    • Copy the data: Data is copied using the command "rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.

     

    If anything fails in steps 1 or 2, the plugin returns an error, and the folder is renamed back to its original name. If something fails in step 3, an error is returned, but the dataset and the directory remain intact.
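
    For illustration, a rough shell equivalent of the three steps (pool name, directory name, and timestamp format are hypothetical; the plugin's actual implementation may differ):

    # 1. Rename the source directory out of the way
    mv /mnt/tank/appdata /mnt/tank/appdata_tmp_20240209
    # 2. Create a dataset with the directory's original name, using default options
    zfs create tank/appdata
    # 3. Copy the data into the new dataset's mountpoint
    rsync -ra --stats --info=progress2 /mnt/tank/appdata_tmp_20240209/ /mnt/tank/appdata/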

     

    As always, don't hesitate to report any errors, bugs, or comments about the Plugin functionality.

     

    Best,

    • Thanks 3
    1. Unraid has built-in scrub functionality; you can check it on the pool properties.

    2. Yes, you must activate the destruction mode in the settings; I recommend you read the first post in this thread to check the plugin's functionality.

    3. "No refresh" affects either all or none of the pools, so something else is happening with the ZFS pool from unassigned disks.

  5. If you have set the "No Refresh" option, the plugin doesn't load any data from the pools, so I'm not sure what else could be writing or reading data to the disk. You can upgrade the pool to the newest format to rule out compatibility issues.

     

    About opening snapshots, there are several ways: if you only need some files, you can access the special ".zfs" folder located at the dataset mount point and copy files from there pretty easily; you can also revert the dataset to a specific snapshot from the "Admin Datasets" dialog in the plugin; or you can mount the snapshot to another folder and copy any data that you may need.
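
    As an illustration (dataset and snapshot names are hypothetical), the ".zfs" approach and a clone, which is one common way to get a snapshot mounted at another path:

    # Browse the snapshot's files read-only via the hidden .zfs folder
    ls /mnt/tank/appdata/.zfs/snapshot/mysnap/
    # Clone the snapshot so it gets its own mountpoint, then copy whatever you need
    zfs clone tank/appdata@mysnap tank/appdata_restore
    cp -a /mnt/tank/appdata_restore/<file> /mnt/tank/appdata/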

    • Thanks 1
  6. @xreyuk It's very simple: ZFS Master refreshes the pool and dataset information it displays every now and then (at the refresh interval you select), but that requires reading some information from the disks. So every time you visit the main page, the plugin refreshes the data every X seconds/minutes, and your disks are woken from their sleep by that operation. The "No Refresh" option helps with that; it provides a convenient way to refresh the information only when you press the "Refresh" button, letting your disks sleep and retrieving the information only when you explicitly request it.

     

    Other options are associated with the refresh interval, described in the initial post of this thread.

    • Like 1
  7. @dlandon, thanks, I'll fix it in the new release.

     

    @Renegade605 @isvein: Don't get me wrong, I like the tool, but that doesn't change the fact that it is abandoned. Most of the issues were closed automatically by a bot, with no answer; the last release was an automatic build without any new features or fixes, and the last real update is from Jan 2022, almost two years ago; that's why I don't feel very comfortable building a GUI for the tool.

    • Thanks 1
  8. Going through the suggestions:

    1. znapzend integration with a nice GUI: I also use znapzend for dataset snapshot management; however, the project looks abandoned, so I don't know if it is a good idea to integrate it directly. I've been thinking about how to provide this functionality, even up to the point of coding it myself, but the path isn't clear yet. If you folks think that znapzend is a good solution, I am on board with that, but there are other options that I'm also open to.
    2. Samba to use shadow copy: Another good old functionality that worked well when ZFS was a plugin. But this is more for Unraid to implement, not this plugin.
    3. ZFS properties in the Share: Also a suggestion for Unraid to implement, not this plugin.
    4. Alias ZFS pools: I am not sure I follow how this should work or what benefits it brings. On the other hand, I don't use ZFS disks on the array; I exclusively use pools, with a dumb USB stick on the array.
    5. Autocreate datasets and zvols in the Docker container and VM setup pages: Sorry, this is also for Unraid to implement.
    6. Sending notifications through Unraid's notification handler: I'm planning to ditch those notifications and leave the code within the plugin for future functionality.
    • Upvote 1
  9. @Renegade605 feel free to make any suggestions on this thread. I've been working on this plugin for 2 years now, long before Unraid 6.12, and my idea is to provide all the functionality not covered, or that may take too long to be covered, by Unraid; so any suggestions or ideas are more than welcome.

    • Upvote 1
  10. I'm not sure if I follow your idea correctly, but currently, the plugin remembers which pools and datasets are collapsed, and the next time you visit the page, you get the same view; I believe that the functionality you mention can be solved that way.

  11. Hi @skler, does this situation happen with the Main tab open? If so, the plugin is very likely the culprit. Still, you can go to the plugin config section and set how often the plugin refreshes the datasets and snapshot information.

     

    Hi @dyno, that is very accurate; I was aware of the plugin's technical debt in that regard when I started development and forgot about it for a while. Regarding the units I'm using, those are base 2 because ZFS uses the same units; the output of the plugin is consistent with, for example, the zpool list and zfs list commands. However, I will take a look at the other metrics and make sure that they are also in line and use the appropriate nomenclature.
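
    If you want to double-check the numbers yourself, ZFS reports sizes in base-2 units, and exact byte counts are available with the parsable flag (the dataset name is just an example):

    zfs list -o name,used,available tank/appdata
    zfs get -p used,available tank/appdata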

  12. Hi @sjerisman, thanks for the proposal. The first one is already on my task list; I have to evaluate how it can be implemented while keeping everything compatible with the cache and lazy load features; please wait a little bit longer (next year), and I will come up with something.

     

    As for the second one, sure! It's straightforward to add a new property; I'll be working on a new version with a little fix and will include the new property in the snapshots table.

     

    Best,

    • Thanks 1
  13. Hi @dlandon Good catch; the first one is shown because you haven't made any changes to the plugin config, and it doesn't check whether the file exists. I'll submit a new build with a fix for that case. About the continued logging even when the plugin is uninstalled, it's probably related to the Nchan process not being terminated automatically by Unraid.

     

    About the other ones, are you sure you have a mounted pool? The plugin seems to be unable to parse the pool config or the datasets. If you happen to have one or more pools, can you please share (as text via PM) the result of the following commands:

     

    zpool list -v
    zfs list -r <pool>

     

  14. Unfortunately, this is not possible, as the directory and dataset listing are highly coupled in the plugin. On the other hand, I don't foresee this functionality being something you folks end up using on a daily basis, but rather a way to check that "everything is working as expected" and that there are no ghost folders lying around; you can always remove datasets from the directory listing and add them back only when you need the information.

  15. Hi Folks, a new update is live with the following changelog:

     

    2023.12.8

    • Add - Directory Listing functionality
    • Fix - Optimize multiple operations

    How Does Directory Listing Work?

    Directory Listing is a new feature (you need to enable it per dataset or in the plugin configuration) that lists the top-level folders for a given dataset. This functionality should give you better visibility over your pools, allowing you to spot possible duplicates and directories that may be leftovers of a migration.

     

    The folders are listed after the datasets, under the dataset's children, with a different icon (a folder); the plugin doesn't gather any information about the directory besides its name. Given that a dataset snapshot covers its subfolders, the snapshot count is associated with all the subfolders, even if the folders are brand new and not present in any snapshot; this is by design.

     

    This new feature needs to be enabled per dataset, using the Actions menu or the plugin configuration; it's important to note that it may impact loading times by up to 5 or 10%, depending on the number of folders under the dataset.
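
    Just to illustrate what the feature gathers (only the folder names, one level deep; this is not the plugin's actual code, and the path is hypothetical):

    # Top-level directories of a dataset's mountpoint
    find /mnt/tank/appdata -mindepth 1 -maxdepth 1 -type d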

     

    Finally, this version required many changes, so multiple bugs are expected to appear here and there; please don't hesitate to report them here.


    This is most likely the last update of the year, so best wishes to you, Unraid community, and I'll see you next year with more functionalities 😃.

     

    Best

    • Thanks 1
  16. It's possible that the plugin backend keeps working for a little bit (10-15 secs) after you change tabs; however, it's not possible for the plugin to execute any additional actions once you navigate to other pages. Do you have any other insights into how this situation happens, and what is your configuration for the plugin?

  17. On 11/14/2023 at 4:22 PM, wacko37 said:

    The command will upgrade (downgrade really) the ZFS disk to the 6.12 ZFS features.

     

    I was able to reproduce the bug, and unfortunately, ZFS Master is not compatible with legacy pools created by UD; this is due to specific features used by the plugin that are currently missing in legacy pools, so the only way for those pools to be detected and listed within the plugin is, as you mention, upgrading the pool. @dlandon for visibility.
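
    For reference, the upgrade can also be done from the command line (the pool name is hypothetical; keep in mind that an upgraded pool may no longer import on older ZFS versions):

    # Shows a note when the pool is running with legacy/disabled feature flags
    zpool status legacy_pool
    # Enables all feature flags supported by the running ZFS version
    zpool upgrade legacy_pool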

    • Thanks 1