Iker

Community Developer

Posts posted by Iker

  1. The convert to dataset functionality is just a convenient way of copying the data without having to execute the commands, but you can always do it manually. Create a dataset and the respective child ones, and start syncing each folder to the corresponding dataset from the source using rsync.
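
    For reference, a minimal sketch of the manual route (the pool and dataset names are just examples; adapt them to your layout):

    # Create the parent dataset and a child, then sync the old folder's contents into it
    zfs create tank/media
    zfs create tank/media/movies
    rsync -a --stats /mnt/tank/media_old/movies/ /mnt/tank/media/movies/

    The trailing slashes matter: they tell rsync to copy the folder's contents rather than the folder itself.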

  2. On 3/16/2024 at 2:22 PM, frodr said:

    Creating dataset thru the plugin -> appear as user folder in the Array.

     

    When you create a dataset at the root level, Unraid detects it as a user share.

     

    On 3/16/2024 at 4:08 PM, Sobrino said:

    Any hints on what I might be doing wrong?

     

    It's working as intended. It's very simple: When you unlock a dataset, you're also mounting it, so you can access the content in the mountpoint (/mnt/data/media). When you lock the dataset, the mountpoint still exists as a folder, so you can write anything to it because it's a normal folder within the system.
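
    Roughly what happens under the hood (illustrative commands; "data/media" is the dataset from your example):

    # "Unlock" = load the encryption key and mount the dataset
    zfs load-key data/media && zfs mount data/media
    # "Lock" = unmount and unload the key; the empty mountpoint directory stays behind
    zfs unmount data/media && zfs unload-key data/media

    Anything you write to /mnt/data/media while it's locked goes into that plain directory, not into the dataset.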

  3. 8 hours ago, Renegade605 said:

    Preferences probably depend a lot on how people are using their snapshots...

     

    Fair enough, as I said, this is still a work in progress, and your feedback is very much welcome.

     

    @d3m3zs As the other disks are sleeping and the last refresh date doesn't change, I suggest you post the problem in the Unassigned Devices plugin thread.

  4. Some questions that could help pinpoint the source of the issue:

    • Are the other pool disks sleeping?
    • "Last refresh" date changes?
    • Do you have other processes/plugins accessing the folders/files stored on that pool?

     

  5. @MowMdown I have been thinking about reworking the entire snapshot admin dialog, but it is still in the draft phase. However, what you suggest may be even more problematic than the current approach. Many people using ZFS have tens, if not hundreds, of snapshots in a single dataset, and displaying those inline along with the dataset would be a nightmare for you as a user and for me as the developer.

     

    Right now, I'm more inclined to directly present snapshots grouped by date (no more than ten buckets) on the contextual menu and implement another contextual menu for the dataset with all the related actions. But if you folks have any other ideas, I'm more than open to reading them.

  6. New update with the following changelog:

     

    2024.03.03

    • Fix - Dataset name validation before conversion
    • Fix - rsync command for folders with whitespace or special characters
    • Fix - Errors on empty exclusion patterns
    • Add - Compatibility with 6.13 beta

     

  7. 1 hour ago, d3m3zs said:

    But also would be great to automatically remove tmp folders after rsync command done.

     

    Sorry, I'm very reluctant to delete data without user verification. I prefer that you folks verify everything is fine and that the data you need is already in the dataset; then, and only then, manually delete the source folder. This prevents accidentally losing data in case of unexpected errors. As I said before, I'm working on changing the conversion dialog to a progress bar integrated into the main UI.
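
    For illustration, that verification step could look like this (the _tmp folder name is a placeholder):

    # Compare the renamed source with the new dataset; no output means they match
    diff -rq /mnt/tank/appdata_tmp_20240309/ /mnt/tank/appdata/
    # Only once you're satisfied, remove the old copy yourself
    rm -rf /mnt/tank/appdata_tmp_20240309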

  8. Hi @Larz, the situation you mention is something that I imagine may happen with large transfers, as the dialog is dismissed if the page refreshes or if any other notification/dialog appears; however, since the rsync process runs in the background, everything should be fine, and when it finishes, the plugin should give you a new dialog with the process result.

     

    I'm currently working on building a progress bar that appears at the right side of the dataset name to indicate any ongoing transfers and allow you folks to perform multiple transfers simultaneously.

  9. 1 hour ago, Larz said:

    In the end, the issue was a blank space character in the directory name.  I've revised the post above with the details.

     

    Hmmm, maybe there is a bug on the plugin side when handling dataset names with spaces, since those are allowed in OpenZFS. I'll check if that's the case and issue a new update with the fix.

  10. The error indicates that the system ran out of file handles, probably because rsync is trying to copy many files. Try increasing the "Max User Watches" using the "Tips And Tweaks" plugin to fix it.
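
    If you prefer the terminal, this is roughly the knob that setting turns (the value is just an example):

    # Check the current inotify watch limit
    sysctl fs.inotify.max_user_watches
    # Raise it for the current boot
    sysctl -w fs.inotify.max_user_watches=524288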

  11. @Nonoss I just reproduced the bug; let me check a little deeper, and I'll release a new version with the fix.

    @Revan335 I'll see what I can do, but it's not a priority right now.

    @dopeytree Can you elaborate on your question? I mean, the description is very accurate about the steps the plugin follows when converting a directory to a dataset.

    @sasbro97 You have to delete those datasets at the root manually.
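
    If you go the manual route, it's something like this (the dataset name is a placeholder; double-check before destroying anything):

    # Remove a root-level dataset; add -r if it has children or snapshots
    zfs destroy pool/old_dataset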

  12. 2 hours ago, d3m3zs said:

    It seems best way to backup is - do not create child dataset, just use usual folders or write such script that will be sending all child one by one.

     

    No for the backup, yes for the documentation. You can configure policies and send all the dataset descendants without too much trouble. Check Syncoid or ZnapZend; those solutions automatically take care of that part and help you stay organized.
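
    For instance, a recursive Syncoid run is a one-liner (the pool names are examples):

    # Replicate tank/data and every child dataset to a backup pool
    syncoid --recursive tank/data backup/data

    Syncoid figures out the incremental snapshots between runs for you.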

  13. ZFS snapshots don't work the way you think: a snapshot applies to one and only one dataset (there are exceptions, like clones, but that's another subject). The "recursive" option means that you also take a snapshot of every child dataset. However, that doesn't mean that the parent dataset's snapshot will include the data from the child datasets; those snapshots are associated with their respective datasets.

     

    BTW, check the send command documentation for how to send snapshots incrementally and process datasets recursively.
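
    A quick sketch of both (names are placeholders):

    # Recursive snapshot: one snapshot per child dataset
    zfs snapshot -r tank/data@2024-03-17
    # Incremental, recursive send of everything since the previous snapshot
    zfs send -R -i tank/data@2024-03-10 tank/data@2024-03-17 | zfs receive -u backup/data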

  14. 7 hours ago, isvein said:

    What does an amber color on snapshot icon mean? :)

     

    It's associated with an option in the settings, "Dataset Icon Alert Max Days"; it means that the latest snapshot of the dataset is older than the number of days specified in that config flag.

    @frodr This is a known issue with ZFS on Unraid, not this particular plugin; here is a nice summary of how to destroy a dataset if you face that error.

  15. 12 minutes ago, sasbro97 said:

    I converted the Docker folder now to a dataset and tried to exclude this with /docker/.* or also cache/docker/.* but it does not work. It's frustrating and I wish I had never changed the folder...

     

    From your screenshot, it doesn't look like you configured that correctly. Check this comment and how that user dealt with the situation, because it is exactly the same as yours.

  16. What do you mean? Is the plugin failing to create the dataset? Is the data incomplete in the new dataset? If it is just that the temp folder is not deleted, that's by design and is stated in the Convert to Dataset dialog. If we want to avoid data loss, it's preferable to have the original data still in place and delete it by hand once you don't need it.

  17. Yes, it was a typo; thanks for the note. Yes, it can be added without too much trouble; I'll wait a little for more feedback, but be sure it will make it into the next version.

     

    BTW, I just noticed that the -r option was kept; it's an error (-a already includes -r). Please suggest other rsync options that may be beneficial for the copy process.
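
    For reference, here is the command with the redundant flag dropped, plus a few typical candidates (nothing is decided yet):

    rsync -a --stats --info=progress2 <source_directory> <dataset_mountpoint>
    # Possible extras: -H (hard links), -X (extended attributes), -A (ACLs)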

  18. Hi Folks, a new update is live with the following changelog:

     

    2024.02.9

    • Add - Convert directory to dataset functionality
    • Add - Written property for snapshots
    • Add - Directory listing for root datasets
    • Fix - Tabbed view support
    • Fix - Configuration file associated errors
    • Fix - Units nomenclature
    • Fix - Pool information parsing errors
    • Remove - Unraid Notifications 

    How does Convert to Dataset work?

    Pretty simple; it's divided into three steps:

    • Rename Directory: Source directory is renamed to <folder_name>_tmp_<datetime>
    • Create Dataset: A dataset with the directory's original name is created in the same pool (and path); the dataset options are the default ones.
    • Copy the data: Data is copied using the command "rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.

     

    If anything fails on steps 1 or 2, the plugin returns an error, and the folder is renamed back to its original name. If something fails in step 3, an error is returned, but the dataset and directory remain intact.
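
    In shell terms, the three steps look roughly like this (the paths and the timestamp are placeholders):

    # 1. Rename the source directory out of the way
    mv /mnt/tank/appdata /mnt/tank/appdata_tmp_20240309120000
    # 2. Create a dataset with the original name (default options)
    zfs create tank/appdata
    # 3. Copy the data into the new dataset's mountpoint
    rsync -ra --stats --info=progress2 /mnt/tank/appdata_tmp_20240309120000/ /mnt/tank/appdata/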

     

    As always, don't hesitate to report any errors, bugs, or comments about the Plugin functionality.

     

    Best,

    1. Unraid has built-in scrub functionality; you can check it in the pool properties (see the CLI sketch after this list).

    2. Yes, you must activate the destruction mode in the settings; I recommend you read the first post in this thread to check the plugin's functionality.

    3. "No refresh" affects either all or none of the pools, so something else is happening with the ZFS pool from unassigned disks.

  19. If you have set the "No Refresh" option, the plugin doesn't load any data from the pools, so I'm not sure what else could be reading or writing data on the disk. You can upgrade the pool to the newest format to rule out compatibility issues.

     

    As for opening snapshots, there are several ways. If you only need some files, you can access the special ".zfs" folder located in the dataset mountpoint and copy files from there pretty easily. You can also revert the dataset to a specific snapshot from the "Admin Datasets" dialog in the plugin. The other option is to mount the snapshot to another folder and copy any data that you may need.
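
    Sketches of those three options (the dataset and snapshot names are placeholders):

    # 1. Copy individual files through the hidden .zfs directory
    cp -a /mnt/tank/data/.zfs/snapshot/mysnap/some_file /mnt/tank/data/
    # 2. Revert the whole dataset (rolling back past newer snapshots needs -r)
    zfs rollback tank/data@mysnap
    # 3. Mount the snapshot read-only elsewhere and copy from there
    mkdir -p /mnt/snapview && mount -t zfs tank/data@mysnap /mnt/snapview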
