Iker

Community Developer
Everything posted by Iker

  1. Hi guys, a new minor update is live with the following changelog: 2022.07.04 Fix - Dataset names with spaces not being properly handled. This applies both to creation and listing, so you should be able to create and list datasets that contain spaces in their names. The exclusion pattern now also supports truly empty values, so you can leave it empty and it should work without problems. That feature is planned for the next major version that I'm working on.
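As a rough illustration of what an exclusion pattern does (mock dataset names here, not the plugin's actual matching code): any name matching the pattern is simply filtered out of the listing, and an empty pattern excludes nothing.

```shell
# Mock dataset names; the plugin's real matching logic may differ.
datasets="tank/appdata
tank/appdata/my dataset
tank/docker_scratch"
pattern="docker_"

# Names matching the pattern are excluded from the listing.
printf '%s\n' "$datasets" | grep -v -e "$pattern"
```

With `pattern="docker_"`, only `tank/appdata` and `tank/appdata/my dataset` survive the filter; note the name with a space passes through unchanged.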
  2. Please go to "Settings->ZFS Master" and check that "Datasets Exclusion Patterns (Just One!):" is not a single empty space; if it is, please change it to something that doesn't match any dataset name in your pools.
  3. Quota shows B because it is the smallest unit, but you can specify it in K, M, G, or T at dataset creation or when you edit a dataset. (It's probably worth including the units as a list in the dataset creation dialog for clarity.)
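For reference, this is what setting or clearing a quota looks like from the command line; `tank/media` is a placeholder dataset name, and the dialog ultimately sets the same standard ZFS `quota` property:

```shell
# Placeholder dataset name "tank/media".
zfs set quota=500G tank/media         # quota in gibibytes
zfs set quota=2T tank/media           # or tebibytes
zfs get -H -o value quota tank/media  # check the current quota
zfs set quota=none tank/media         # remove the quota entirely
```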
  4. Yes, it is; take a look at ZnapZend, Sanoid, and other programs that can help you with that; it's one of the most important and handy features. Let's say you have the following dataset structure: Note that only powlarr, radarr, and sonarr are datasets; my_folder is a regular folder, and my_file, well, is just a file. Now imagine you want to create a snapshot that applies to every single dataset in appdata as well as the regular folders and files; you check the option "Recursively create snapshots...". If you don't, the snapshot only applies to the regular folders and files, not the datasets inside appdata. This is handy if you want, for example, to back up the entire appdata dir every 6 hours. No negative impact; I'm using standard ZFS stuff here, so everything should be compatible with Unraid in the future. I strongly suggest you get away from custom scripts or "manually scheduled" snapshots; the lifecycle of those things can be a complete nightmare, and the more you have, the more complicated things get. Use well-established solutions that create and delete snapshots on a schedule; Sanoid and ZnapZend are my go-to choices. Personally, I use ZnapZend (in a container, not the CA plugin) and only create snapshots for particular tasks, like replicating a dataset to another pool or server. PS: I understand this is very different from what you are used to, but ZFS is a very powerful system with many functionalities (clones are a great example); once you get the hang of it, you can really use it to its full potential. This article should help you jump-start with the concepts: https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
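The recursive option maps directly onto the `-r` flag of `zfs snapshot`; a minimal sketch with placeholder names:

```shell
# Placeholder pool/dataset names; adjust to your layout.
zfs snapshot -r tank/appdata@hourly-2022-07-04  # -r also snapshots radarr, sonarr, etc. inside appdata
zfs snapshot tank/appdata@top-only              # without -r, child datasets are NOT included
zfs list -t snapshot -r tank/appdata            # verify which snapshots exist
zfs destroy -r tank/appdata@hourly-2022-07-04   # -r on destroy removes the whole recursive set
```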
  5. Please share which Unraid and plugin versions you are using and, if possible, the zfs list command output from the top directory that contains the datasets not being detected, along with the directory tree; if that's not possible, at least some mock-up names and a tree that can help me reproduce the situation.
  6. Kind of. You cannot create them from the user interface (I will fix that in the next update), but you can list them and operate on them. Please read a couple of comments back in this thread.
  7. Hi there, you can import your existing pools without too much trouble; the process is very well outlined in the 6.12 release notes https://docs.unraid.net/unraid-os/release-notes/6.12.0/#zfs-pools. However, please be aware of some limitations: Not all pool topologies are supported. Be careful about mountpoints, as those may change from whatever location you had to /mnt/<pool_name>. Autotrim and compression can be configured for the entire pool. First-level datasets will be imported as shares; you have to change some settings in the Shares section and select the most appropriate config for primary and secondary storage (see https://docs.unraid.net/unraid-os/release-notes/6.12.0/#share-storage-conceptual-change). If you find any errors or have any issues, please report them.
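From the command line, the pool-wide settings mentioned above look roughly like this ("mypool" is a placeholder; the release notes remain the authoritative procedure):

```shell
zpool import                   # scan and list pools available for import
zpool import mypool            # import it (add -f if it was last used on another system)
zfs get mountpoint mypool      # check where it mounted; Unraid expects /mnt/<pool_name>
zpool set autotrim=on mypool   # autotrim is a pool-wide property
zfs set compression=lz4 mypool # compression set at the root is inherited by child datasets
```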
  8. Hmmm, that's weird; as I said in the other post, I experienced the same error with the same resolution. My router is an Asus with Asuswrt-Merlin fully updated, and my two 6.12 Unraid servers are using static IP addresses; the other one, on 6.11.5, is working just fine. In my case I don't use the ipvlan feature; none of my Docker containers receive exclusive IP addresses. Over the weekend, I will try to replicate the issue and provide diagnostics.
  9. Sorry, I have some workloads on the two servers that are important, so I cannot provide diagnostics right now; however, it seems to have been replicated by another user:
  10. Sorry for the "light report", but I cannot replicate and provide diagnostics at this time. I have 3 Unraid servers, and I have migrated 2 to 6.12 so far. On both of those migrated servers, I experienced a loss of internet access (LAN works just fine) 3 or 4 mins after the array start; it is hard to diagnose why, because the situation doesn't leave any traces in the logs or diagnostics files, as far as I was able to check. By trial and error, I have come to the conclusion that the culprit is the "Host access to custom networks" setting in the Docker config. I had that setting enabled, but I wasn't actually using it. Once I stopped the array and set it to Disabled, I stopped experiencing the loss of internet access. My current config is the following; the only difference from the previous one is the setting already mentioned:
  11. Hi, sorry for the late response to everyone. To your questions: Sure thing, I will push a new update once Unraid 6.12 is released; the update will include multiple new options, including the one you mentioned. This is outside the plugin's scope; however, you can search for multiple ways to improve your ZFS workloads. Additionally, here are some other functionalities that I'm planning to include in the next version: Support for multiple charsets for dataset names. Lazy loading of snapshot counts and other dataset info (this should improve the plugin's loading times). Support for "zfs send". Simplification of some dialogs.
  12. Well, just the InfluxDB migration from v1.8 to 2.x was wild, and when you have a lot of data, the performance is far from good; they have had to rewrite the entire engine twice now (https://www.influxdata.com/products/influxdb-overview/#influxdb-edge-cluster-updates), so I decided to move to Prometheus. Victoria Metrics as long-term storage makes a lot of sense: queries are really fast, and Victoria is compatible with the Telegraf line protocol. Overall I'm happy with my decision, and my dashboards load much faster now.
  13. Hi my friend; unfortunately, I'm no longer using this dashboard, as I ditched InfluxDB and Telegraf from my stack in favor of Victoria Metrics. I have plans to write a new guide, including a template for the Grafana dashboard, but it will take me a while.
  14. Hi, ZFS Master developer here; I have already answered some of the doubts on the plugin thread; however, I thought it would be beneficial also to respond and even dive a little bit deeper here. Just one question regarding this issue. Are you experiencing the same behavior even with the main tab closed? I agree that loading the main tab will wake up your disks, but there are no other background processes in the plugin besides the ones running when the main tab is open. So if your spin-up issue persists even with the unRaid GUI closed, something else is happening. Please confirm if that is the case.
  15. One question regarding this issue: are you experiencing the same behavior even with the main tab closed? I have stated several times before, for other "issues," that if you don't have the unRaid GUI open in the main tab, ZFS Master doesn't perform any actions or execute any commands. From the info you have presented, I agree with the conclusion that loading the main tab is going to wake up your disks, but there are no other background processes in the plugin besides the ones running when the main tab is open. So if your spin-up issue persists even with the unRaid GUI closed, something else is happening.
  16. ZFS commands don't spin up the pool because they don't include the plethora of properties that the script does. IMO, the most likely properties that could cause disks to spin up are "used" and "available"; here is the property list for snaps and datasets: snap properties = 'used','referenced','defer_destroy','userrefs','creation' dataset properties = 'used','available','referenced','encryption', 'keystatus', 'mountpoint','compression','compressratio','usedbysnapshots','quota','recordsize','atime','xattr','primarycache','readonly','casesensitivity','sync','creation', 'origin'
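To reproduce the load yourself, you could request the same property lists with plain `zfs get` ("tank/appdata" and the snapshot name are placeholders, and this is a sketch rather than the plugin's actual code path):

```shell
# Placeholder dataset "tank/appdata"; same dataset property list as quoted above.
zfs get -H -o name,property,value \
  used,available,referenced,encryption,keystatus,mountpoint,compression,compressratio,usedbysnapshots,quota,recordsize,atime,xattr,primarycache,readonly,casesensitivity,sync,creation,origin \
  tank/appdata

# And the snapshot property list, for a placeholder snapshot:
zfs get -H -o name,property,value \
  used,referenced,defer_destroy,userrefs,creation \
  tank/appdata@some-snapshot
```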
  17. These are the commands executed upon main tab loading: zpool list -v // List pools zpool status -v <pool> // Get pool health status zfs program -jn -m 20971520 <pool> zfs_get_pool_data.lua <pool> <exclusion_pattern> // List ZFS pool datasets & snapshots The Lua script is a very short ZFS channel program executed in read-only mode for safety and performance reasons. Obviously, if you create, delete, snapshot, or perform other actions on datasets, there are going to be additional zfs commands.
  18. Multiple ZFS commands; that's the whole idea: enumerate the pools, then for each pool, list its datasets, and then the snapshots for every single dataset. As far as I know, some of that information is stored in the ZFS metadata; depending on how you configure your dataset's primarycache, it can be the case that it ends up reading the data from the disks instead of memory.
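The enumeration described above can be sketched with plain `zpool`/`zfs` commands (this is an illustration, not the plugin's actual code, which uses a channel program):

```shell
# Rough sketch of the pool -> dataset -> snapshot enumeration.
for pool in $(zpool list -H -o name); do
  # all filesystem datasets in the pool, one per line
  zfs list -H -r -t filesystem -o name "$pool" | while read -r ds; do
    # snapshots belonging directly to this dataset (-d 1 limits depth to the dataset itself)
    zfs list -H -t snapshot -d 1 -o name "$ds"
  done
done
```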
  19. No, there are no conflicts. This plugin is exclusively for ZFS dataset administration; it has nothing to do with the underlying ZFS system or how it is loaded.
  20. Sounds good! I will take a deeper look in the coming days, as this is very unexpected behavior, and I haven't been able to reproduce it in an unRaid VM.
  21. TBH, that doesn't make a lot of sense to me. As I said, the plugin doesn't query the disks directly; it only executes zfs commands every 30 seconds (you can change the interval in the config).
  22. Hi, I agree with @itimpi; this situation is not related to the plugin, but to unRaid itself. The plugin doesn't implement any code associated with SMART functionality, and all the commands are exclusively ZFS-related (zpool list, zpool status, etc.); even more so, the plugin doesn't enumerate the devices present in the system, it only parses the results from zpool status for pool health purposes.
  23. I will check how to support non-English languages.
  24. I just read the bug report thread; my best guess is that the dataset has some properties not supported by the current ZFS version, or that the unRaid UI implementation is not importing the pool correctly. Here are some ideas to debug the issue a little further: More Diag Info: If you create a new folder on the same dataset, does everything work with this new folder? Create a dataset in unRaid 6.12 and check if everything works correctly and you can see the folder and its content (just to check if there is a problem with the import). Possible Solutions: Do not assign the pool to unRaid Pools; import it using the command line and see if that works (zpool import, then zfs mount -a). As weird as it may sound, you could clone the dataset in unRaid 6.12, see if the information shows up, promote it, and let go of the old one.
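The command-line import suggested above would look roughly like this ("mypool" and "mypool/appdata" are placeholder names):

```shell
# Placeholder pool/dataset names.
zpool import               # list pools the system can see
zpool import mypool        # import the pool without assigning it to Unraid Pools
zfs mount -a               # mount every dataset in the imported pool
zfs get all mypool/appdata # inspect the dataset's properties for anything unusual
```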