Iker

Members
  • Posts

    57
  • Joined

  • Last visited

Converted

  • Gender
    Male
  • URL
    https://ikersaint.medium.com/
  • Location
    Colombia


Iker's Achievements

Rookie (2/14)

Reputation: 21

  1. It's mentioned later on the same page… and a lot on the next pages; however, as I mentioned, it doesn't matter, the point is the same: it's just an import/export operation. The important part, and what unRaid brings into the game, is the GUI.
  2. Probably; in the feature poll for 6.11 it's mentioned that array devices could be formatted as ZFS, but, at least for me, it doesn't change anything; it's just an import/export operation.
  3. It's a fair question; however, unRaid is not going to customize ZFS, just offer it as an option for the array file system. The most likely (and easiest) scenario is that you only have to export & import your pool, and the information there will be intact; there is a short export/import sketch at the end of this list. Take for example SpaceInvaderOne's videos about exporting the pool from unRaid and importing it in TrueNAS.
  4. You can check the available ZFS versions for your unRaid version in the repository: https://github.com/Steini1984/unRAID6-ZFS/tree/master/packages
  5. Same here; a couple of days ago I updated and I am getting flooded with these messages:
     2021-10-12 17:34:57.980+0000: 20738: error : virConnectNumOfDefinedStoragePools:244 : this function is not supported by the connection driver: virConnectNumOfDefinedStoragePools
     2021-10-12 17:34:57.981+0000: 8018: error : virConnectNumOfStoragePools:164 : this function is not supported by the connection driver: virConnectNumOfStoragePools
     2021-10-12 17:34:57.982+0000: 20735: error : virConnectNumOfStoragePools:164 : this function is not supported by the connection driver: virConnectNumOfStoragePools
     2021-10-12 17:34:57.982+0000: 20731: error : virConnectNumOfDefinedStoragePools:244 : this function is not supported by the connection driver: virConnectNumOfDefinedStoragePools
     2021-10-12 17:34:57.983+0000: 7992: error : virConnectNumOfSecrets:79 : this function is not supported by the connection driver: virConnectNumOfSecrets
     A temporary workaround to prevent the log from filling up to 100% with these messages; these commands flush the log:
     rm /var/log/libvirt/libvirtd.log
     virt-admin daemon-log-outputs ""
  6. Hope the rc2 is more stable for you; ZFS 2.1 introduced dRAID, and even if you are not using it, it's a nice-to-have. On the monitoring side there are more options with zpool_influxdb, and some of the metrics already present are different; when my dashboard is more mature I will write another post about it. This is a little sneak peek:
  7. Maybe it's a better idea to set the docker folder from the GUI to another location (reinstalling every docker container is mandatory); with ZFS as the underlying file system, it's not just "move the files". Once that is done, this should be enough:
     zfs list -o name,mountpoint | grep legacy | awk '{printf "zfs destroy -frR %s\n", $1}'
     The output is all you need to destroy all of those remaining datasets. Be extremely careful and review every single line, including the path; the command has no filters whatsoever. There is a review-first variant of the same idea at the end of this list.
  8. Hi, thanks; initially I thought about filtering by attributes, however, the filter is implemented by name. I'm under the impression that your docker folder is the root of your RaidZ2 pool, is that correct? If that's the case you should find a regular expression that suits the names of those folders, maybe something like "/^RaidZ2\/([0-9A-Fa-f]{2})+$/" (there is a quick way to preview what it matches at the end of this list), but really, you should move the docker folder to a dataset of its own.
  9. As an update, I was checking out ZFS version 2.1, but it was marked as "unstable" on unRAID 6.9.2, so I simply YOLO'd it and upgraded to 6.10rc1. The result: zfs_exporter works just fine, but telegraf is not polling any metrics from the pools... However, I found that zpool_influxdb works without much trouble, at least for pools (not datasets), so I'm going to work with just zfs_exporter & zpool_influxdb for the next guide; there is a minimal telegraf wiring sketch at the end of this list.
  10. It's not really a big deal. I suppose it's because the folder doesn't exist in unRAID, the whole file system being ephemeral, and by default the pools do not have the cachefile property set; maybe ZFS initializes before the path exists, so having the cache file configured for the pools could be very problematic in unRAID. More info: https://openzfs.github.io/openzfs-docs/Project and Community/FAQ.html#the-etc-zfs-zpool-cache-file and https://github.com/openzfs/zfs/issues/1035 Why it could be problematic: https://github.com/openzfs/zfs/issues/2433
  11. ZDB, as far as I have checked, works; however, it requires a cache file that doesn't exist in the unRAID config. Try this:
      UNRAID:~# mkdir /etc/zfs
      UNRAID:~# zpool set cachefile=/etc/zfs/zpool.cache hddmain
      UNRAID:~# zdb -C hddmain
  12. ZFS Master 2021.10.08e is live with a lot of fixes and new functionality, check it out:
      2021.10.08e
      Add - SweetAlert2 for notifications
      Add - Refresh and Settings Buttons
      Add - Mountpoint information for Pools
      Add - Configurable Settings for Refresh Time, Destructive Mode, Dataset Exclusions, Alert Max Days Snapshot Icon
      Fix - Compatibility with Other Themes (Dark, Grey, etc.)
      Fix - Improper dataset parsing
      Fix - Regex warnings
      Fix - UI freeze error on some systems at destroying a Dataset
      Remove - Unassigned Devices Plugin dependency
  13. The parity disk does not contain data, just the information necessary for rebuilding the data present on the other disks; if you swap the disk from parity to array, the array goes unprotected: any disk failure will be fatal for the information present on that particular disk, but nothing else. There is a tiny parity arithmetic example at the end of this list.
  14. There you go, check it out; any questions, happy to help: ZFS -Influx1.8.json. Also, check the parameters in the non_negative_derivative, because your latency values are way through the roof; maybe compare with the output of "zpool iostat -l".
  15. Wow, that's great; it seems you got most of the panels working by now. Let me know if you need any help.
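
Regarding the export & import mentioned in item 3, a minimal sketch of moving a pool between systems; "hddmain" is just an example pool name, use your own. This is plain ZFS, nothing unRaid-specific:

    # On the source system: unmount everything and export the pool
    zpool export hddmain

    # On the destination system: list pools available for import, then import by name
    zpool import
    zpool import hddmain
    # If the pool was not cleanly exported, the import may need to be forced
    # zpool import -f hddmain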
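
For the cleanup in item 7, a review-first variant of the same one-liner; it only writes the generated commands to a file so they can be checked by hand, the assumptions (and the lack of any filter) are exactly the same:

    # Generate the destroy commands into a file instead of running them directly
    zfs list -o name,mountpoint | grep legacy | awk '{printf "zfs destroy -frR %s\n", $1}' > /tmp/destroy_legacy.sh

    # Review every single line, including the paths, before going any further
    cat /tmp/destroy_legacy.sh

    # Only after reviewing, execute the generated commands
    sh /tmp/destroy_legacy.sh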
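
For the exclusion pattern in item 8, a quick way to preview which dataset names the regular expression would catch before putting it in the plugin settings; "RaidZ2" is the pool name from that conversation, adjust it to yours:

    # List dataset names only and keep the ones matching the exclusion pattern
    zfs list -H -o name | grep -E '^RaidZ2/([0-9A-Fa-f]{2})+$'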
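
For item 9, a minimal sketch of feeding zpool_influxdb into telegraf through an exec input; the binary path and the telegraf.conf location are assumptions and may differ depending on how the ZFS plugin and telegraf were installed:

    # Confirm the collector is present and prints InfluxDB line protocol (path may differ)
    /usr/libexec/zfs/zpool_influxdb | head

    # Then add an exec input along these lines to telegraf.conf and restart telegraf:
    # [[inputs.exec]]
    #   commands = ["/usr/libexec/zfs/zpool_influxdb"]
    #   timeout = "5s"
    #   data_format = "influx"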
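
For item 13, a tiny arithmetic example of why the parity disk holds no data of its own: single parity is just the XOR of the data disks, so any one missing disk can be recomputed from parity plus the surviving disks. The 4-bit values are made up for illustration:

    # Three data disks holding made-up 4-bit values: D1=9, D2=4, D3=6
    printf 'parity = %X\n' $(( 0x9 ^ 0x4 ^ 0x6 ))   # parity = 9 XOR 4 XOR 6 = B

    # Disk 2 fails: rebuild its contents from parity and the remaining disks
    printf 'disk2  = %X\n' $(( 0xB ^ 0x9 ^ 0x6 ))   # B XOR 9 XOR 6 = 4, recovered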