Iker

Community Developer
Posts posted by Iker

  1. Please remove the exclusion pattern and give it a minute to check if that works; also, try with the refresh interval enabled. Please let me know if either of those options helps with the problem; otherwise, I'll send you a DM to start debugging the error.

  2. I should probably start putting together a FAQ on the first thread page:

     

    On 9/6/2023 at 7:33 PM, Iker said:
    1. The only way I was able to delete anything was to completely stop the Docker daemon: I haven't been able to reliably delete datasets used at some point by Unraid without rebooting or stopping the Docker daemon. A procedure that sometimes works is the following:
      1. Stop the docker using the directory
      2. Delete all snapshots, clones, holds, etc
      3. Delete the directory (rm -r <dataset-path>)
      4. Delete the dataset using ZFS Master or CLI.
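
     The steps above can be sketched as a shell sequence; this is a hedged example, not an exact recipe, and the container and dataset names (my-container, tank/appdata) are placeholders you'd replace with your own:

    ```shell
    # 1. Stop the container(s) using the directory
    docker stop my-container

    # 2. Remove all snapshots on the dataset (release any holds first
    #    with `zfs release`, and destroy clones before their origin)
    zfs list -H -t snapshot -o name -r tank/appdata | xargs -r -n1 zfs destroy

    # 3. Delete the directory contents
    rm -r /mnt/tank/appdata

    # 4. Destroy the dataset itself (or use ZFS Master's UI)
    zfs destroy tank/appdata
    ```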

     

  3. 55 minutes ago, Renegade605 said:

    @Iker FYI, I'm on the latest version but nothing changed about the zvol properties in the GUI. 

     

    The update fixed an issue with the mountpoint being displayed as "undefined" for zvols; it was reported by a user via DM. Full support for zvols is coming, but it is going to take me a while.

  4. Hi @dopeytree, I think this article will help you understand how snapshots work on ZFS (https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/) and better plan your policy. You can always check how much space a snapshot is using with the "Snapshots Admin" dialog in the plugin.

     

    @couzin2000 I have seen that error multiple times with different plugins; it's not directly related to this plugin, so you should post it in the General Support thread.

  5. New update with the following changelog:

     

    2024.03.31

    • Change - Convert to Dataset reporting on the dataset row
    • Add - Unraid notifications for Convert to Dataset
    • Fix - Compatibility with the 6.13 beta
    • Fix - Folder listing corner cases
    • Fix - zvol mountpoints
  6. The Convert to Dataset functionality is just a convenient way of copying the data without having to execute the commands yourself, but you can always do it manually. Create a dataset and the respective child ones, then sync each folder from the source to the corresponding dataset using rsync.
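
     A minimal sketch of that manual procedure, assuming a pool named tank and an appdata share (all names here are placeholders, not what the plugin runs internally):

    ```shell
    # Create the parent dataset and a child per application folder
    zfs create tank/appdata
    zfs create tank/appdata/plex

    # Sync the original folder into the new dataset
    # (trailing slashes make rsync copy the contents, not the folder itself)
    rsync -avX /mnt/tank/appdata_old/plex/ /mnt/tank/appdata/plex/
    ```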

  7. On 3/16/2024 at 2:22 PM, frodr said:

    Creating dataset thru the plugin -> appear as user folder in the Array.

     

    When you create a dataset at the root level, Unraid detects it as a user share.

     

    On 3/16/2024 at 4:08 PM, Sobrino said:

    Any hints on what I might be doing wrong?

     

    It's working as intended. It's very simple: when you unlock a dataset, you also mount it, so you can access the content at the mountpoint (/mnt/data/media). When you lock the dataset, the mountpoint still exists as a folder, so you can write anything to it, because it's a normal folder within the system.

  8. 8 hours ago, Renegade605 said:

    Preferences probably depend a lot on how people are using their snapshots...

     

    Fair enough; as I said, this is still a work in progress, and your feedback is very much welcome.

     

    @d3m3zs Since the other disks are sleeping and the last refresh date doesn't change, I suggest you post the problem in the Unassigned Devices plugin thread.

  9. Some questions that could help pinpoint the source of the issue:

     • Are the other pool disks sleeping?
     • Does the "Last refresh" date change?
     • Do you have other processes/plugins accessing the folders/files stored on that pool?

     

  10. @MowMdown I have been thinking about reworking the entire snapshot admin dialog, but it is still in the draft phase. However, what you suggest may be even more problematic than the current approach. Many people using ZFS have tens, if not hundreds, of snapshots on a single dataset, and displaying those inline along with the dataset would be a nightmare for you as a user and for me as the developer.

     

    Right now, I'm more inclined to directly present snapshots grouped by date (no more than ten buckets) on the contextual menu and implement another contextual menu for the dataset with all the related actions. But if you folks have any other ideas, I'm more than open to reading them.

  11. New update with the following changelog:

     

    2024.03.03

    • Fix - Dataset name validation before conversion
    • Fix - rsync command for folders with whitespace or special characters
    • Fix - Errors on empty exclusion patterns

    • Add - Compatibility with the 6.13 beta

     

  12. 1 hour ago, d3m3zs said:

    But also would be great to automatically remove tmp folders after rsync command done.

     

    Sorry, I'm very reluctant to delete data without user verification. I prefer that you folks verify everything is fine and that the data you need is already in the dataset; then, and only then, you manually delete the source folder. This prevents accidentally losing data in case of unexpected errors. As I said before, I'm working on changing the conversion dialog to a progress bar integrated into the main UI.
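
    One hedged way to do that verification before deleting anything is an rsync checksum dry run: if it lists nothing to transfer, the source and the dataset match. The paths below are placeholders for your own share and dataset:

    ```shell
    # Hypothetical paths; adjust to your share and dataset.
    SRC=/mnt/tank/appdata_old/
    DST=/mnt/tank/appdata/

    # Dry run (-n) with checksums (-c): itemizes anything that still differs.
    DIFF=$(rsync -acn --itemize-changes "$SRC" "$DST")

    if [ -z "$DIFF" ]; then
        echo "Source and destination match; safe to remove the source."
    else
        echo "Still differing:"
        echo "$DIFF"
    fi
    ```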

  13. Hi @Larz, the situation you mention is something that I imagine may happen with large transfers, as the dialog is dismissed if the page refreshes or if any other notification/dialog appears. However, as the rsync process is running in the background, everything should be fine, and when it finishes, the plugin should give you a new dialog with the result of the process.

     

    I'm currently working on building a progress bar that appears to the right of the dataset name to indicate any ongoing transfers and allow you folks to perform multiple transfers simultaneously.

  14. 1 hour ago, Larz said:

    In the end, the issue was a blank space character in the directory name.  I've revised the post above with the details.

     

    Hmmm, maybe there is a bug on the plugin side when handling datasets with spaces in their names, since those are allowed in OpenZFS. I'll check if that's the case and issue a new update with the fix.

  15. The error indicates that the system ran out of file handles, probably because rsync is trying to copy many files. Try increasing the "Max User Watches" using the "Tips And Tweaks" plugin to fix it.
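
     If you want to inspect those limits from the CLI first, this is roughly what the plugin setting adjusts under the hood (a sketch; the 524288 value is just a common example, and the sysctl write requires root):

    ```shell
    # Current inotify watch limit (what "Max User Watches" adjusts)
    cat /proc/sys/fs/inotify/max_user_watches

    # Open-file limit for the current shell, another common culprit
    ulimit -n

    # To raise the watch limit until the next reboot (as root):
    # sysctl fs.inotify.max_user_watches=524288
    ```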

  16. @Nonoss I just reproduced the bug; let me dig a little deeper, and I'll release a new version with the fix.

    @Revan335 I'll see what I can do, but it's not a priority right now.

    @dopeytree Can you elaborate on your question? The description is very accurate about the steps the plugin follows for converting a directory to a dataset.

    @sasbro97 You have to delete those datasets at the root manually.

  17. 2 hours ago, d3m3zs said:

    It seems best way to backup is - do not create child dataset, just use usual folders or write such script that will be sending all child one by one.

     

    No for the backup, yes for the documentation: you can configure policies and send all of a dataset's descendants without too much trouble. Check out Syncoid or ZnapZend; those tools automatically take care of that part and help you stay organized.

  18. ZFS snapshots don't work the way you think: a snapshot applies exclusively to one and only one dataset (there are exceptions, like clones, but that's another subject). The "recursive" option means that a snapshot is also taken for every child dataset. However, that doesn't mean the parent dataset's snapshot includes the data from the child datasets; those snapshots are associated with their respective datasets.

     

    BTW, check the send command documentation on how to send snapshots incrementally and process datasets recursively.
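
     A minimal sketch of what that looks like, assuming a source pool named tank, a backup pool named backup, and date-based snapshot names (all placeholders):

    ```shell
    # Take a recursive snapshot of the dataset and all its children
    zfs snapshot -r tank/appdata@2024-03-17

    # Full replication send of the whole subtree to a backup pool
    zfs send -R tank/appdata@2024-03-17 | zfs receive -u backup/appdata

    # Later: incremental send of only the changes since the last snapshot
    zfs snapshot -r tank/appdata@2024-03-24
    zfs send -R -i @2024-03-17 tank/appdata@2024-03-24 | zfs receive -u backup/appdata
    ```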

  19. 7 hours ago, isvein said:

    What does an amber color on snapshot icon mean? :)

     

    It's associated with an option in the settings, "Dataset Icon Alert Max Days": it means that the latest snapshot of the dataset is older than the number of days specified in that config flag.
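
    If you want to double-check snapshot ages from the CLI, something like this works (a sketch; the dataset name is a placeholder):

    ```shell
    # List snapshots of a dataset and its children, oldest first
    zfs list -t snapshot -o name,creation -s creation -r tank/appdata
    ```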

     

    @frodr This is a known issue with ZFS on Unraid, not this particular plugin; here is a nice summary of how to destroy a dataset if you face that error. 
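
    For reference, the usual shape of that workaround is something like the following; this is my own sketch (not the linked summary), the names are placeholders, and lsof may need to be installed separately:

    ```shell
    # A plain destroy typically fails with "dataset is busy"
    zfs destroy tank/olddata

    # Find which processes are holding files open under the mountpoint
    lsof +D /mnt/tank/olddata

    # Stop the offending service (often the Docker daemon on Unraid),
    # then unmount and destroy the dataset
    umount /mnt/tank/olddata
    zfs destroy -r tank/olddata
    ```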

     

     
