Iker

Everything posted by Iker

  1. Most probably it's a parse error; I'm planning a new version with a couple of new features. Let me check that bug and get back to you when it's fixed. Thanks for using the plugin, by the way.
  2. You could use the Global Section, but I'm not really an expert in SMB, so YMMV; another option is editing the smb-extra.conf file directly on the flash share, so you don't run into the 2,048-character limit of the GUI.
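     For illustration, a minimal sketch of what a share definition in smb-extra.conf could look like; the share name, mountpoint, and user are hypothetical, so adjust them to your setup:

        # hypothetical share exported from a ZFS dataset
        [zfs-share]
            path = /mnt/tank/share
            browseable = yes
            writeable = yes
            valid users = youruser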
  3. ZFS Master 2021.11.09a is live with a few changes, check it out:
     2021.11.09a
     - Add - List of current Datasets at Dataset Creation
     - Add - Option for exporting a Pool
     - Fix - Compatibility with RC version of unRAID
  4. Yeah, it's mainly a guess, based on the info in the thread and what users are asking for. I agree with @jortan that the magic of unRaid is having a bunch of different-size disks and just making them work as one, with redundancy and without much trouble, and that's okay; that should always be an option. However, being the only option isn't sustainable for the times to come. unRaid is a business, they have a market, and falling behind the competition in features is not a great move for them. Every month a new SATA SSD is announced, cheaper than ever, or a board with a 2.5Gb/5Gb network card included; think about the new 2022 offerings from Intel & AMD with DDR5, PCIe 5, etc. All of that outperforms unRaid's current storage strategy by a lot; check Linus' videos, it's been a long time since he used the array for anything. If unRaid keeps the current "array" for a couple more years, it's not going to be competitive. And don't get me wrong, I have been using unRaid for the last 5 years, I love it!, but it has started to feel a little old in terms of storage capabilities (snapshots, backups without having to copy a 300 GB VM disk every night, waiting hours for a copy to complete, 10Gb support, etc.). For me these ZFS plugins have been a game changer, making unRaid just the perfect modern system. I really hope unRaid gives us options and doesn't just stick to the good old times.
  5. It's mentioned later on the same page… and a lot on the next pages; however, as I mentioned, it doesn't matter, the point is the same: it's just an import/export operation. The important part, and what unRaid brings into the game, is the GUI.
  6. Probably; in the feature poll for 6.11 it's mentioned that array devices could be formatted as ZFS, but, at least for me, that doesn't change anything; it's just an import/export operation.
  7. It's a fair question; however, unRaid isn't going to customize ZFS, just offer it as an option for the array file system. The most likely (and easy) scenario is that you'll only have to export & import your pool, and the information there will remain intact. Take, for example, SpaceInvaderOne's videos about exporting a pool from unRaid and importing it in TrueNas.
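     For reference, a minimal sketch of that operation from the command line; the pool name "tank" is hypothetical:

        zpool export tank    # cleanly detach the pool from the current system
        zpool import         # list pools visible to the new system
        zpool import tank    # import it; datasets and data remain intact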
  8. You could check the available ZFS versions for your unRaid release in the repository: https://github.com/Steini1984/unRAID6-ZFS/tree/master/packages
  9. Maybe it's a better idea to set the docker folder from the GUI to another location (reinstalling every docker is mandatory); with ZFS as the underlying system, it's not just "move the files". Once that's done, this should be enough: zfs list -o name,mountpoint | grep legacy | awk '{printf "zfs destroy -frR %s\n", $1}' The output is all the commands you need to destroy those remaining datasets. Be extremely careful: review every single line, including the path; the command has no filters whatsoever.
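     To make the review step explicit, one possible workflow (only run the last step after you've confirmed every generated line points at a dataset you really want gone):

        # write the generated destroy commands to a file for review
        zfs list -o name,mountpoint | grep legacy | awk '{printf "zfs destroy -frR %s\n", $1}' > /tmp/destroy.sh
        # inspect every single line, including the paths
        cat /tmp/destroy.sh
        # only after careful review, execute
        sh /tmp/destroy.sh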
  10. Hi, thanks; initially I thought about filtering by attributes; however, the filter implemented is by name. I'm under the impression that your docker folder is the root of your RaidZ2, is that correct? If that's the case, you should find a regular expression that suits the names of the folders, maybe something like "/^RaidZ2\/([0-9A-Fa-f]{2})+$/", but really, you should move the docker folder to a dataset of its own.
  11. It's not really a big deal; I suppose it's because the folder doesn't exist in unRaid, the whole file system being ephemeral, and the pools by default don't have the cache file property set; maybe ZFS initializes before the path exists, so having the cache file configured for the pools could be very problematic in unRaid. More info: https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#the-etc-zfs-zpool-cache-file https://github.com/openzfs/zfs/issues/1035 Why it could be problematic: https://github.com/openzfs/zfs/issues/2433
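     If you want to verify that on your own system, the property is easy to check (pool name "hddmain" borrowed from the next post):

        # typically reports "-" with source "default" on unRaid, i.e. no cache file persisted
        zpool get cachefile hddmain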
  12. ZDB, as far as I have checked, works; however, it requires a cache file that doesn't exist in the unRAID config. Try this:
      UNRAID:~# mkdir /etc/zfs
      UNRAID:~# zpool set cachefile=/etc/zfs/zpool.cache hddmain
      UNRAID:~# zdb -C hddmain
  13. ZFS Master 2021.10.08e is live with a lot of fixes and new functionality, check it out:
      2021.10.08e
      - Add - SweetAlert2 for notifications
      - Add - Refresh and Settings Buttons
      - Add - Mountpoint information for Pools
      - Add - Configurable Settings for Refresh Time, Destructive Mode, Dataset Exclusions, Alert Max Days Snapshot Icon
      - Fix - Compatibility with Other Themes (Dark, Grey, etc.)
      - Fix - Improper dataset parsing
      - Fix - Regex warnings
      - Fix - UI freeze error on some systems at destroying a Dataset
      - Remove - Unassigned Devices Plugin dependency
  14. The parity disk does not contain data, just the information necessary for rebuilding the data present on the other disks; if you swap the disk from parity to array, the array becomes unprotected, and any disk failure will be fatal for the information on that particular disk, but nothing else.
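     To make the "parity contains no data" point concrete, here is a tiny single-byte sketch of XOR parity, the scheme behind unRaid's single parity; the disk values are made up:

        # three data disks, one byte each (hypothetical values)
        D1=0xA5; D2=0x3C; D3=0xF0
        # the byte written to the parity disk
        P=$(( D1 ^ D2 ^ D3 ))
        # if D2 fails, XOR of parity with the surviving disks rebuilds it
        printf 'rebuilt D2 = 0x%X\n' $(( P ^ D1 ^ D3 ))   # prints 0x3C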
  15. There you go, check it out: ZFS -Influx1.8.json; any questions, happy to help. Also, check the parameters in the non_negative_derivative, because your latency values are way over the roof; maybe compare with the output of "zpool iostat -l".
  16. Wow, that's great; it seems you've got most of the panels working by now. Let me know if you need any help.
  17. As I stated in my answer, I use InfluxDB 2, so the query language is Flux, not InfluxQL; the data is still present in the database as long as you are using telegraf, but the queries are very different. For example:
      Pool Writes (Flux):
      from(bucket: "Telegraf")
        |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
        |> filter(fn: (r) => r["_measurement"] == "zfs_pool")
        |> filter(fn: (r) => r["_field"] == "nwritten")
        |> aggregateWindow(every: v.windowPeriod, fn: mean)
        |> derivative(unit: 1s, nonNegative: true)
        |> map(fn: (r) => ({ _value: r._value, _time: r._time, _field: r.pool }))
      Pool Writes (InfluxQL):
      SELECT non_negative_derivative(mean("nwritten"), 1s) AS "writes" FROM "zfs_pool" WHERE $timeFilter GROUP BY time($__interval), "pool" fill(none)
      I'm not completely sure about the InfluxQL query (I don't have any Influx 1.8 DB available right now), but it should be something along those lines (https://docs.influxdata.com/influxdb/v1.8/flux/flux-vs-influxql/). In the next few days I'll come back to you with all the queries translated to their InfluxQL equivalents; just let me spin up an InfluxDB 1.8 instance.
  18. That's very common; it's how ZFS is supposed to work: even with the SLOG, the ARC continues working as normal. However, you could use "zinject -a" to force a flush of the ARC (without the failure simulation, of course). My advice: try "primarycache=metadata" on the dataset and see how it performs; in such a configuration there should not be any difference in performance. https://openzfs.github.io/openzfs-docs/man/8/zinject.8.html
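     In case it helps, a quick sketch of applying that advice; the dataset name is hypothetical:

        zfs set primarycache=metadata tank/vms   # ARC caches only metadata for this dataset
        zfs get primarycache tank/vms            # confirm the change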
  19. I also have that inconvenience with the docker files; I'm planning to create a settings page so you can specify your own "excluded datasets" by name patterns or attributes. I have never used the Dark Theme, but the fix is pretty easy; however, I'm very aware that the styles for the plugin need a lot of work in general. Probably in the next version both things will be completely fixed.
  20. @ich777 thank you very much; the plugin has been live in the CA since this morning as "ZFS Master". To all of you using ZFS: any feedback, new functionality requests, or bugs you find, don't hesitate to contact me :).
  21. Well, the initial version is ready to publish; does anyone know how to submit it for approval? I looked deep into the "Programming" section, but it's not clear to me how to proceed. PS: If anyone wants to give it a try: https://raw.githubusercontent.com/IkerSaint/ZFS-Master-Unraid/main/zfs.master.plg
  22. No problem; the panels are mixed between Prometheus & InfluxDB 2 (telegraf), so keep that in mind, the syntax of Flux is very different. ZFS.json
  23. I'll take a look at this. The green ball turns yellow if degraded, red if faulted, blue if offline, and grey otherwise (unavailable or removed); the scrub info is already present in ZFS Companion, and I don't intend to replicate information that's already there. Great idea! Got that info, nice & clean (I think); working now on snapshots, just the basic info & date; haven't forgotten you (dtrain).
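     For the curious, those colors map directly onto the pool health that ZFS itself reports; you can check the raw value like this (pool name hypothetical):

        zpool list -H -o name,health hddmain
        # prints e.g.: hddmain  ONLINE  (or DEGRADED, FAULTED, OFFLINE, UNAVAIL, REMOVED)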
  24. True, but we don't know when that's going to be, probably next year; in the meantime I'm not planning to replace the unRaid interface, just add some visibility. Devices in the pool 👌 Last snapshot date… it's completely possible, but please hang in there for a while. Snapshots in general are a different monster; I'm thinking about how to display them: there are systems (like my own) with 1K snapshots in a single pool, and displaying or even listing every single one could be difficult.
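     As a taste of the "last snapshot date" idea, this is roughly the lookup needed per dataset (dataset name hypothetical):

        # newest snapshot of a dataset, sorted by creation time
        zfs list -t snapshot -o name,creation -s creation tank/vms | tail -n 1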