campusantu

Community Developer
  • Posts: 14

  1. Hey, can you check the output of the commands again? I'm not really planning updates, as I've stopped using ZFS on Unraid (I use it elsewhere) and unfortunately have very limited time available. But from what I can see in your output, a quick fix should be very easy to do, so I may try to find the time.
  2. Yep, for now I've made a VM with an HBA passed through, and I'm doing backups to an Odroid HC4 with OMV+ZFS (https://www.thingiverse.com/thing:5065253). After a long time, I am once again confident that my irreplaceable data is safe. For not-so-important stuff, and for ease of use of apps, Unraid is still the way to go.
  3. Thanks! I'm waiting to see news on official ZFS support before putting more time into this, especially because I went back to TrueNAS to ease snapshots and backups. In the meantime, you can try adding ZFS Master from IkerSaint. The two plugins seem to complement each other rather well: Companion shows the general health status on the dashboard, while Master has a nicely done Main widget.
  4. Should we show all of those fields? I was thinking of using 'zpool list' in MAIN and 'zfs list' in SHARES. How would the zpool structure be displayed, though? An expandable tree from the pool, similar to what Unassigned Devices does for partitions?
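The 'zpool list' idea above could be sketched roughly like this. This is a minimal sketch only: the pool name "tank" and its sizes are made-up sample data, and the parsing assumes the tab-separated, script-friendly output of zpool list -H.

```shell
#!/bin/sh
# Sketch: turn `zpool list -H -o name,size,alloc,free,health` output
# into rows for a MAIN-style table. "tank" and the sizes are sample data
# standing in for: zpool list -H -o name,size,alloc,free,health
tab=$(printf '\t')
sample="tank${tab}10.9T${tab}4.2T${tab}6.7T${tab}ONLINE"

# -H makes zpool emit one tab-separated line per pool, easy to split.
echo "$sample" | while IFS="$tab" read -r name size alloc free health; do
    printf 'pool=%s size=%s alloc=%s free=%s health=%s\n' \
        "$name" "$size" "$alloc" "$free" "$health"
done
```

Each parsed row would then feed one line of the MAIN table (or one node of the expandable tree).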
  5. New version is out. From the settings you can set the current unhealthy status to be reported as healthy in the dashboard (the tooltip will still show the correct status), as well as clear it. If the status goes back to fully healthy, it will be reported as healthy; if any pool changes status, it will be reported as unhealthy again.
  6. Me too. I thought it couldn't possibly be the plugin, but it seems that when extracting plugins, the filesystem permissions get overwritten with those from the package. It should be fixed now; the problem was that I forgot to run the build command as root, so it couldn't change permissions before packaging. Sorry for that! Unrelated: @JoergHH, the ability to ignore a specific unhealthy status is in the works. I'm working out how plugins are supposed to use settings, so hang on a couple more days. You will be able to flag your current status and it will be reported as healthy until the status of any pool changes, so you won't miss any warnings or errors.
  7. I added a tooltip. I might add the ability to ignore an unhealthy status (which would reset when the status changes). While the pool may not be 100% healthy, in JoergHH's case he may choose not to resolve the issue, but the persistent unhealthy status could lead to him not noticing if a different problem or warning arose. What do you think? Sorry for ignoring your question: from what I found, you would need to move the data away, reformat the pool, and move it back, as the block size cannot be changed.
  8. I'm no ZFS expert, so I'm open to discussion. From what I found, zpool status -x is the preferred way of getting a synthetic status report. I was thinking of providing an alternative method that checks whether all pools are reported as ONLINE, but I found examples of pools with errors still being reported as ONLINE (see https://docs.oracle.com/cd/E19253-01/819-5461/gavwg/index.html, under "Determining the Type of Device Failure". That's Oracle's ZFS documentation, not the implementation we're using, but I assume they work the same way). Hence I think it's not a good idea, because you would have no idea something is wrong with the pools. So I would agree with glennv here about not saying the pool is healthy. I'm OK with introducing a "warning" state instead of just healthy/unhealthy, but what would the criteria be for the pools reported by zpool status -x? That the pool's state is ONLINE? That is ok-ish, but we would need to define a whitelist. That the errors line reads "No known data errors"? Something else?
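To make the healthy/warning/unhealthy question concrete, here is one possible classifier sketched in shell. This is only a sketch of the criteria being debated, not the plugin's actual logic: the status text is made-up sample data, and the "warning" branch is exactly the case where a whitelist would be needed.

```shell
#!/bin/sh
# Sketch: classify a pool from its `zpool status` text.
# The sample below is made up; real output has more sections (config, scan...).
status='  pool: tank
 state: ONLINE
errors: No known data errors'

# Pull out the "state:" and "errors:" lines.
state=$(echo "$status" | sed -n 's/^ *state: *//p')
errors=$(echo "$status" | sed -n 's/^ *errors: *//p')

if [ "$state" = "ONLINE" ] && [ "$errors" = "No known data errors" ]; then
    echo healthy
elif [ "$state" = "ONLINE" ]; then
    echo warning    # ONLINE but with reported errors: the whitelist case
else
    echo unhealthy  # DEGRADED, FAULTED, etc.
fi
```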
  9. What is the output of zpool status -x and zpool list? On mine it says: root@Unraid:~# zpool status -x -> all pools are healthy. I'll look into it, to see why it reports yours as "not healthy" even though the pool is in fact online.
  10. You're welcome, glad it works now. Have a nice day!
  11. I made a new version; let me know if it fixes it.
  12. Could you post the output of zpool status as text (or a file) instead of an image, so I can better test the regex? Thanks :)
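For context, this is the kind of extraction the regex has to do, sketched against a sample "scan:" line. The line below is made up (which is exactly why real zpool status text from users is more useful than screenshots).

```shell
#!/bin/sh
# Sample "scan:" line from `zpool status`; the values and date are made up.
scan='  scan: scrub repaired 0B in 02:11:23 with 0 errors on Sun Oct 10 03:11:24 2021'

# Extract the repaired amount, duration, and error count with a POSIX sed regex.
echo "$scan" | sed -n \
    's/.*scrub repaired \([^ ]*\) in \([^ ]*\) with \([0-9]*\) errors.*/repaired=\1 duration=\2 errors=\3/p'
```

A screenshot can hide the exact spacing and wording, which is what breaks patterns like this.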
  13. It should be fixed now. My pool is not upgraded to the latest features, so it was showing "status" and "action" lines even when healthy.
  14. What? Why? Consider this plugin a topping on steini84's ZFS Plugin. I love how Unraid makes it easy to run Docker and VMs and to expand with mismatched drives, but coming from other software I learned to trust ZFS more than other filesystems. If you're reading this, I guess you prefer it too. While the ZFS Plugin brings our loved filesystem, and I fully understand and share steini84's opinion about keeping that plugin pure and simple with just the binaries, I missed a way to keep an eye on the status of my pool without resorting to shell commands or copy-pasted scripts. In fact, I was not fully trusting the pool simply because I was not monitoring it adequately. Judging by some threads, I was not the only one, so... enter ZFS-companion.

      What does it do? Right now it's just a dashboard widget. It shows the general health of all your pools, plus a list of all the zpools with their status and last scrub information. I don't have ETAs, but I have some ideas of what could be added to make it more useful (not necessarily in order):
      • A full (secondary?) widget in the disks section of the dashboard
      • A section in the Main screen, similar to what Unassigned Devices does for other filesystems
      • Integrated scripts for scrubbing and error reporting, to avoid copy-pasting from different places
      • Shares management
      • Maybe a detailed page with more in-depth info (pool properties? snapshot list?)

      How to install: install it directly (Plugins -> Install Plugin -> enter the URL, then click INSTALL): https://raw.githubusercontent.com/GiorgioAresu/ZFS-companion-unraid/main/ZFS-companion.plg

      If you have suggestions or issues, you can post them below. If you can provide examples of different messages for pool status, scrub results, errors, and so on, please share them (PM if you want), because I'm having difficulty finding all possible values.
Troubleshooting: if you're having issues or the state is not what you'd expect, please post the output of the following commands:
  • zpool status -x
  • zpool status -v
  • zpool list
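To make posting those easier, the three commands can be captured into a single file to attach to a reply. A small sketch; the output path is just an example, and stderr is captured too so permission or module errors show up in the report.

```shell
#!/bin/sh
# Collect ZFS troubleshooting output into one file to attach to a forum reply.
# /tmp/zfs-companion-report.txt is just an example path.
{
    echo '== zpool status -x =='; zpool status -x
    echo '== zpool status -v =='; zpool status -v
    echo '== zpool list ==';      zpool list
} > /tmp/zfs-companion-report.txt 2>&1
```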