Iker

Community Developer

Posts posted by Iker

  1. Hi guys, a new minor update is live with the following changelog:

     

    2022.07.04

    • Fix - Dataset names with spaces not being properly handled

    This applies to both creation and listing, so you should be able to create and list datasets that contain spaces in their names. The exclusion pattern now also supports true "empty" values, so you can leave it empty and it should work without problems.

     

    4 hours ago, TheThingIs said:

    oh and a quick suggestion if I may. When you manipulate the datasets (create, destroy, etc.), do an immediate refresh. If the refresh interval is set quite high, your changes take a while to appear, and it makes you wonder if the action worked.

     

    That is planned for the next major version that I'm working on.

    • Like 1
  2. 6 hours ago, TheThingIs said:

    Versions

    Unraid 6.12.2

    ZFS Master 2023.04.03.64

    ...

    Hope that helps :)

     

    Please go to "Settings -> ZFS Master" and check that "Datasets Exclussion Patterns (Just One!):" is not an empty space; if it is, please change it to something that doesn't match any dataset name in your pools.

  3. Quota shows B because bytes are the smallest unit, but you can specify it in K, M, G, or T when creating a dataset or when editing one. (It's probably worth including the units as a list in the dataset creation dialog for clarity.)
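    As a concrete illustration, this mirrors standard zfs behavior on the command line (pool and dataset names are hypothetical):

```shell
zfs set quota=10G tank/mydata    # unit suffixes K, M, G, T are accepted
zfs get quota tank/mydata        # add -p to print the raw value in bytes
```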

  4. 16 minutes ago, deliverer said:

    This is probably a dumb question, but is there a way to set up automatic ZFS snapshots based on time?  Say I want to take a snapshot every 4 hours, is this possible?  Is there also a way to maintain how many snapshots are kept? 

     

    Yes, it is; take a look at ZnapZend, Sanoid, and other programs that can help you with that. It's one of ZFS's most important and handy features.
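    If you are curious what such a policy looks like, here is a minimal Sanoid configuration sketch; the dataset name and retention counts are hypothetical, and Sanoid reads this from /etc/sanoid/sanoid.conf:

```ini
[tank/appdata]
    use_template = production
    recursive = yes

[template_production]
    hourly = 36      # keep 36 hourly snapshots
    daily = 7        # keep 7 daily snapshots
    monthly = 3
    autosnap = yes   # take snapshots automatically
    autoprune = yes  # delete expired snapshots automatically
```

    Sanoid then takes and prunes snapshots on its own schedule; ZnapZend is configured similarly but runs as a daemon.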

     

    17 minutes ago, deliverer said:

    Lastly, what does "Recursively create snapshots of all descendent datasets" mean when I manually perform a snapshot?  I don't quite understand if I should select this or not?  Based on my google research, it's more if you want to do the entire pool?  Since the UI only allows the dataset to be selected for a snapshot, does it matter if I select this option or not?

     

    Let's say you have the following datasets structure:

     

    appdata
    ├── powlarr    (dataset)
    ├── radarr     (dataset)
    ├── sonarr     (dataset)
    ├── my_folder  (regular folder)
    └── my_file    (file)

     

    Note that only powlarr, radarr, and sonarr are datasets; my_folder is a regular folder, and my_file is just a file. Now imagine you want to create a snapshot of appdata that covers every single dataset inside it as well as the regular folders and files; for that, you check the option "Recursively create snapshots...". If you don't, the snapshot only applies to the regular folders and files, but not to the datasets inside appdata. This is handy if, for example, you want to back up the entire appdata dir every 6 hours.
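    In command-line terms, that checkbox maps to the -r flag of zfs snapshot (snapshot name hypothetical):

```shell
zfs snapshot tank/appdata@manual-1      # appdata only: its folders/files, not child datasets
zfs snapshot -r tank/appdata@manual-1   # also snapshots powlarr, radarr, and sonarr
```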

     

    21 minutes ago, boomam said:

    Probably a silly question, as ZFS is ZFS, but is there any negative impact to using this plugin or ZFS features, considering they are not exposed natively in Unraid itself?

    For example, if I create sub-datasets and/or snapshots, and then Unraid 6.13 adds support for managing them in the GUI, I assume it'll just work?

     

    No negative impact; I'm using standard ZFS stuff here, so everything should be compatible with Unraid in the future.

     

    21 minutes ago, boomam said:

    If you know the command you want to run to achieve that as a 'one off', you can just use User Scripts to schedule it with Cron.

     

    I strongly suggest you stay away from custom scripts or "manually scheduled" snapshots; the lifecycle of those things can be a complete nightmare, and the more you have, the more complicated things become. Use well-established solutions that create and delete snapshots on a schedule; Sanoid and ZnapZend are my go-to choices. Personally, I use ZnapZend (in a container, not the CA plugin) and only create manual snapshots for particular tasks, like replicating a dataset to another pool or server.
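    For reference, a ZnapZend plan is created once with znapzendzetup and then handled by the daemon; this is only a sketch, with hypothetical dataset names and retention plans:

```shell
# Keep hourly snapshots for 7 days and 4-hourly ones for 30 days locally,
# replicating to a backup pool with its own retention plan.
znapzendzetup create --recursive \
  SRC '7d=>1h,30d=>4h' tank/appdata \
  DST:a '30d=>4h,90d=>1d' backup/appdata
```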

     

    PS: I understand this is very different from what you are used to, but ZFS is a very powerful system with many functionalities (clones are a great example); once you get the hang of it, you can really use it to its full potential. This article should help you jump-start with the concepts: https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
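    To give a taste of the clones mentioned above, the basic flow is snapshot, clone, and optionally promote (names hypothetical):

```shell
zfs snapshot tank/appdata@before-upgrade                 # read-only point-in-time copy
zfs clone tank/appdata@before-upgrade tank/appdata-test  # writable clone, created near-instantly
zfs promote tank/appdata-test                            # detach the clone from its origin snapshot
```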

     

    • Like 1
  5. 4 hours ago, TheThingIs said:

    Interesting, mine isn't showing any datasets which have a space in their name.

    Please share which Unraid and plugin versions you are using, and if possible, the zfs list output from the top directory that contains the undetected datasets, along with the directory tree; if that's not possible, at least some mock-up names and a tree that can help me reproduce the situation.

  6. 21 hours ago, TheThingIs said:

    The plugin doesn't show datasets which have spaces in their name, is this correct? I thought ZFS was ok with spaces?

     

    Kind of. You cannot create them from the user interface (I will fix that in the next update), but you can list them and operate on them.
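    On the command line, ZFS is fine with spaces as long as the name is quoted (pool name hypothetical):

```shell
zfs create "tank/my data"    # quotes keep the shell from splitting the name
zfs list -r tank             # the dataset is listed like any other
```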

     


     

    58 minutes ago, Stokkes said:

    wondering if there is going to be an update pushed in the future (is it still supported)?...this really really slows down the Main tab...

     

    Please read a couple of comments back in this thread.

     

  7. Hi there, you can import your existing pools without too much trouble; the process is very well outlined in the 6.12 release notes https://docs.unraid.net/unraid-os/release-notes/6.12.0/#zfs-pools; however, please be aware of some limitations:

    • Not all pool topologies are supported
    • Please be careful about mountpoints, as those may change from whatever location you had to /mnt/<pool_name>
    • Autotrim and compression can be configured for the entire pool
    • First-level datasets will be imported as shares; you have to change some settings in the Shares section and select the most appropriate config for primary and secondary storage (see https://docs.unraid.net/unraid-os/release-notes/6.12.0/#share-storage-conceptual-change)

    If you find any errors or have any issues, please report them.
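    If you want to preview the pools and verify mountpoints from the command line first, something like this works (pool name hypothetical):

```shell
zpool import                 # preview pools available for import (no changes made)
zfs get -r mountpoint tank   # after importing, verify where each dataset mounts
```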

  8. Hi, sorry for the late response to everyone. To your questions:

     

    On 6/8/2023 at 11:57 PM, Nogami said:

    Any chance we could get an option (or default) for "show datasets" to be enabled?

     

    Sure thing; I will push a new update once Unraid 6.12 is released. The update will include multiple new options, including the one you mentioned.

     

    On 6/12/2023 at 3:35 AM, yarx said:

    Hi! Maybe everyone would be interested if the author described in a few words the parameters that can be set when creating a dataset, like cache, metadata, and others. I have a few NVMe drives and I'm interested in what the best settings would be for a Windows VM. Thanks!

     

    This is outside the plugin's scope; however, you can search for the many ways to tune ZFS for specific workloads.

     

    Additionally, here are some other functionalities that I'm planning to include in the next version:

    • Support for multiple charsets in dataset names
    • Lazy loading of snapshot counts and other dataset info (this should improve the plugin's loading times)
    • Support for "zfs send"
    • Simplification of some dialogs
    • Like 3
  9. Well, just the InfluxDB migration from v1.8 to 2.x was wild, and when you have a lot of data, the performance is far from good; they have had to rewrite the entire engine twice now (https://www.influxdata.com/products/influxdb-overview/#influxdb-edge-cluster-updates), so I decided to move to Prometheus, and VictoriaMetrics as long-term storage makes a lot of sense: queries are really fast, and VictoriaMetrics is compatible with the Influx line protocol that Telegraf emits. Overall, I'm happy with my decision, and my dashboards load much faster now.
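    For anyone attempting the same move: VictoriaMetrics accepts the InfluxDB line protocol directly, so an existing Telegraf install only needs its output section pointed at it. A sketch, with a hypothetical host and the default VictoriaMetrics port:

```toml
# telegraf.conf - ship metrics to VictoriaMetrics over the Influx line protocol
[[outputs.influxdb]]
  urls = ["http://victoriametrics:8428"]  # VictoriaMetrics' Influx-compatible write endpoint
  skip_database_creation = true           # VictoriaMetrics has no database concept
```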

  10. Hi my friend; unfortunately, I'm no longer using this dashboard, as I ditched InfluxDB and Telegraf from my stack in favor of VictoriaMetrics. I have plans to write a new guide, including a template for the Grafana dashboard, but it will take me a while.

  11. One question regarding this issue: are you experiencing the same behavior even with the Main tab closed? As I have stated several times before, for other "issues," if you don't have the unRaid GUI open on the Main tab, ZFS Master doesn't perform any actions or execute any commands. From the info you have presented, I agree with the conclusion that loading the Main tab is going to wake up your disks, but there are no other background processes in the plugin besides the ones running while the Main tab is open. So if your spin-up issue persists even with the unRaid GUI closed, something else is happening.

  12. 11 hours ago, apandey said:

    I can confirm that zfs_get_pool_data.lua is what spins up the pool for me. I'll have to probably spend a bit more time to deconstruct it and find the exact step that is the culprit, and see if any caching configuration can help. listing datasets and snapshots with zfs command does not spin up the pool for me

     

    ZFS commands don't spin up the pool because they don't include the plethora of properties that the script does. IMO, the most likely properties to cause disks to spin up are "used" and "available"; here is the property list for snapshots and datasets:

     

    • snap properties = 'used','referenced','defer_destroy','userrefs','creation'
    • dataset properties = 'used','available','referenced','encryption', 'keystatus', 'mountpoint','compression','compressratio','usedbysnapshots','quota','recordsize','atime','xattr','primarycache','readonly','casesensitivity','sync','creation', 'origin'
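    You can approximate what the script collects with plain zfs get and watch whether the disks wake up (dataset name hypothetical):

```shell
# Request (a subset of) the same dataset properties the plugin collects.
zfs get -H used,available,referenced,mountpoint,compression,compressratio,quota,recordsize,atime,primarycache,sync,creation,origin tank/appdata
```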
  13. 22 hours ago, apandey said:

    I am not sure I am being exhaustive enough though? Is there a list of commands, or some log where I can observe what the plugin is doing? Or maybe relevant code snippet I can refer to

     

    These are the commands executed upon main tab loading:
     

    zpool list -v // List pools
    zpool status -v <pool> // Get pool health status
    zfs program -jn -m 20971520 <pool> zfs_get_pool_data.lua <pool> <exclusion_pattern> // List ZFS pool datasets & snapshots

     

    The Lua script is a very short ZFS channel program executed in read-only mode for safety and performance reasons.
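    For reference, a minimal read-only channel program looks roughly like this; the real zfs_get_pool_data.lua is more involved, and the pool name and script path here are hypothetical:

```shell
cat > /tmp/list_datasets.lua <<'EOF'
-- Minimal ZFS channel program: return the child datasets of the given pool
args = ...
result = {}
for child in zfs.list.children(args['argv'][1]) do
    table.insert(result, child)
end
return result
EOF

# -j prints JSON, -n runs in no-op (read-only) mode, -m caps the Lua memory limit
zfs program -jn -m 20971520 tank /tmp/list_datasets.lua tank
```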

     

    Obviously, if you create, delete, snapshot, or perform other actions on datasets, there are going to be additional zfs commands.

  14. 1 hour ago, apandey said:

    Does this plugin run any zfs commands when the main tab is loaded? Would those commands need to read data off the disks, because if they do, it would spin them up.

     

    Multiple ZFS commands; that's the whole idea: enumerate the pools, then, for each pool, list its datasets, and then the snapshots for every single dataset. As far as I know, some of that information is stored in the ZFS metadata; depending on how you configure your dataset's primarycache, it may end up reading the data from the disks instead of from memory.
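    If you want to experiment with that, primarycache is a regular per-dataset property (dataset name hypothetical):

```shell
zfs get primarycache tank/appdata           # default "all": cache both data and metadata in ARC
zfs set primarycache=metadata tank/appdata  # cache only metadata; data reads go to disk
```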

  15. 3 hours ago, samsausages said:


    I don't think the plugin is actually performing the SMART function; I think the plugin is spinning up the disks, and when the disks spin up, it results in a SMART read.
    So yes, the plugin is not reading SMART itself, but it is spinning up the disks.

     

    TBH, that doesn't make a lot of sense to me. As I said, the plugin doesn't query the disks directly; it only executes zfs commands every 30 seconds (you can change the interval in the config).

    • Like 1
  16. On 4/24/2023 at 6:42 AM, samsausages said:

    Like I was saying, this is a clean install.  Only the ZFS Master plugin installed, purely for testing.
    I have also confirmed this behavior with the 6.11 version of Unraid, utilizing OpenZFS.  (Sounds like 6.12 is based on similar ZFS implementation) When using ZFS Master it keeps spinning up the disks.

     

    Hi, I agree with @itimpi; this situation is not related to the plugin but to unRaid itself. The plugin doesn't implement any code associated with SMART functionality, and all the commands are exclusively ZFS-related (zpool list, zpool status, etc.); even more so, the plugin doesn't enumerate the devices present in the system, it only parses the results from zpool status for pool health purposes.

  17. I just read the bug report thread; my best guess is that the dataset has some properties not supported by the current ZFS version, or that the unRaid UI implementation is not importing the pool correctly. Here are some ideas to debug the issue a little further:

     

    More Diag Info

    • If you create a new folder on the same dataset, does everything work with this new folder?
    • Create a dataset in unRaid 6.12 and check if everything works correctly, and you can see the folder and its content. (Just to check if there is a problem 

     

    Possible Solutions

    • Do not assign the pool to unRaid Pools; import it using the command line and see if that works (zpool import, then zfs mount -a)
    • As weird as it may sound, you could clone the dataset in unRaid 6.12, see if the information shows up, promote it, and let go of the old one.

     

  18. Hi folks, a new update is live with the following changelog:

     

    2023.04.03

    • Add - Rename datasets UI
    • Add - Edit datasets UI
    • Add - unRaid 6.12 compatibility
    • Add - Lazy load for snapshots admin UI
    • Fix - Improve PHP 8 Compatibility

     

    The snapshot lazy-load functionality is slowly moving forward; in the next version, it will probably be integrated into the main page.

    Best,

    • Like 1
  19. Unfortunately, right now there is no easy way to create shares on the pool; you have to deal with the smb-extra.conf file. This situation should be solved in the upcoming 6.12 version; in the meantime, these are my templates for pool shares (ssdnvme in this example):

     

    Private Share
     

    [example_private_share]
    	path = /ssdnvme/example_private
    	comment = My Private share
    	browsable = yes
    	guest ok = no
    	writeable = yes
    	write list = iker
    	read only = no	
    	create mask = 0775
    	directory mask = 0775
    	vfs objects = shadow_copy2
    	shadow: snapdir = .zfs/snapshot
    	shadow: sort = desc
    	shadow: format = %Y-%m-%d-%H%M%S
    	shadow: localtime =  yes
    	shadow: snapdirseverywhere = yes

     

    Private Hidden Share

    [example_private_hidden]
    	path = /ssdnvme/example_private_hidden
    	comment = My Private info
    	browsable = no
    	guest ok = yes
    	writeable = no
    	write list = iker
    	read only = yes
    	create mask = 0775
    	directory mask = 0775
    	vfs objects = shadow_copy2
    	shadow: snapdir = .zfs/snapshot
    	shadow: sort = desc
    	shadow: format = %Y-%m-%d-%H%M%S
    	shadow: localtime =  yes

     

    Public Share

    [example_public_share]
    	path = /ssdnvme/example_public
    	comment = UnRaid related info
    	browsable = yes
    	guest ok = yes
    	writeable = yes
        write list = iker
    	read only = yes
    	create mask = 0775
    	directory mask = 0775

     

    Best,

    • Like 1
    • Thanks 1
  20. Hi guys, a new update is live with the following changelog:

     

    2023.02.28

    • Fix - PHP 8 Upgrades
    • Fix - Export pool command
    • Fix - Error on parsing dataset origin property

     

    I'm still working on the lazy-load functionality ( @HumanTechDesign ), a page for editing dataset properties, and making everything compatible with Unraid 6.12. The microservices idea is a big no, so I'm probably going to use websockets (nchan functionality) to improve load times; the snapshots are going to be loaded in the background.


    @Ter About the recursive mount: ZFS doesn't have an option for that use case, so I would have to code it myself; I'm still evaluating whether it's worth implementing or not.

    Best,

    • Thanks 1
  21. On 2/10/2023 at 8:18 AM, Ter said:

    Thank you. I have edited my comment, so I am not sure if you managed to see it...

    Yes, sorry for the late response; I'll be pushing an update next week with the nested mount command.

     

    20 hours ago, Veriwind said:

    Is editing the nfs/samba files still the best way to 'export' these datasets from a zfs pool?...

     


    For now, at least, that's the way to go; in the coming 6.12 version, ZFS will be a first-class citizen, which means you can import pools and create shares using unRaid's GUI.