Iker

Community Developer
Everything posted by Iker

  1. @Revan335 It was an empty update, no new functionality or fixes, just a test release for the new (again) CI/CD pipeline; I'm finishing some fixes and I'll release the new version with minor improvements in the coming days. @Masterwishx I'm not so sure what Squid means by a "standard" versioning scheme, but I'll check.
  2. The Community Applications ZnapZend plugin is currently broken: the plugin executes early in the boot process, even before the pool is imported, mounted, and working, so it doesn't find any valid pools and exits early in the boot process. One way to make it work with the latest version is to use the Docker version; here is my current docker-compose spec:

     ```yaml
     version: '3'
     services:
       znapzend:
         image: oetiker/znapzend:master
         container_name: znapzend
         hostname: znapzend-main
         privileged: true
         devices:
           - /dev/zfs
         command: ["znapzend", "--logto", "/mylogs/znapzend.log"]
         restart: unless-stopped
         volumes:
           - /var/log/:/mylogs/
           - /etc/localtime:/etc/localtime:ro
         networks:
           - znapzend
     networks:
       znapzend:
         name: znapzend
     ```

     The only downside is that if you are replicating data to another machine, you have to access the container and the destination machine and set up the SSH keys, or mount a specific volume with the keys and known hosts into the container. Best
  3. You can check the SpaceInvader One video and see if it is a good fit for your use case:
  4. This thread over here seems like a better place for your questions: https://forums.unraid.net/forum/37-pre-sales-support/
  5. Sure thing; I'm planning to release a new version next week, with all the bug fixes and little improvements implemented; stay tuned.
  6. Please post it as text; that would help me check whether there are any parsing errors in the plugin and implement the necessary fixes for the upcoming version.
  7. Can you please share the result of the following command (As text): zpool list -v Best
  8. Thanks for the info. I currently don't have access to the 6.13 beta version; as soon as it is released to the general public, I will try to reproduce your issue and check why the plugin is not picking up the pools. This seems like something that should go in the General Support thread; I'm not sure that I follow exactly what is going on with your datasets.
  9. Please keep in mind that ZFS Master doesn't use regex, but Lua patterns, for matching the exclusion folders, and that comes with some downsides. In your particular case @sfef, at first sight your pattern seems fine, but it actually contains a reserved symbol, "-"; combined with the preceding "r", it means a completely different thing (a lazy repetition of "r"). You can check your patterns here: https://gitspartv.github.io/lua-patterns/ Long story short, this should do the trick: "/docker%-sys/.*" Additional docs on Lua patterns: https://www.fhug.org.uk/kb/kb-article/understanding-lua-patterns/
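To illustrate the contrast: in a conventional regex (JavaScript here), "-" outside a character class is a literal character, so the dash in the path needs no escaping, whereas in a Lua pattern it is a lazy-repetition modifier and must be escaped with "%". A minimal sketch; the sample paths are just examples:

```javascript
// In a JS regex, '-' outside a character class is literal, so this
// matches paths under /docker-sys/ without any escaping:
const re = /^\/docker-sys\/.*/;
console.log(re.test('/docker-sys/appdata')); // true
console.log(re.test('/docker/appdata'));     // false

// In a Lua pattern, '-' is a lazy repetition of the preceding item
// ('r-' means "zero or more r, as few as possible"), so the literal
// dash has to be escaped as '%-': "/docker%-sys/.*"
```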
  10. Nice catch, indeed; there is no support for tabbed view; I'll check and come up with a fix for this in the coming versions.
  11. Hi, ZFS Master plugin developer here. Destructive mode is not what you think; the setting that you just changed only affects the UI. When you change it to "Yes", the ZFS Master plugin shows destructive action elements in the UI (Destroy dataset and other stuff); the plugin doesn't implement, or even have the power to enforce, protection against a dataset being deleted or modified; quite the opposite, the plugin provides a UI for doing precisely that. Your issue is more related to this:
  12. Right now, the refresh options are a global setting, but the plugin functionality is implemented at the pool level, so it should be... not easy (The cache could be a mess), but at least possible.
  13. No, you don't need to create the destination in advance. Which Unraid and ZFS Master versions are you using?
  14. Nice catch, I'll probably change it to short datetime format.
  15. A new update is live with the following changelog:

      2023.10.07
      Add - Cache last data in Local Storage when using "no refresh"
      Fix - Dataset admin dialog - Error on select all datasets
      Fix - Multiple typos
      Fix - Special condition crashing the backend
      Fix - Status refresh on Snapshots admin dialog
      Change - Date format across multiple dialogs
      Change - Local Storage for datasets and pools view options

      Thanks @Niklas; when looking for a way to preserve the views, I ended up finding an excellent way to implement a cache for the last refresh :). Also, the view options are now as durable as they can be, even across reboots. How does the cache work? Every time the plugin refreshes the data, it saves a copy to the web browser's local storage; if you have configured the "No refresh" option, once you enter the main page, the plugin loads that information (including the timestamp) from that cache, and this operation is almost instantaneous. This only happens if the "No refresh" option is enabled; otherwise, the plugin loads the information from the pools directly. The cache also works with Lazy and Classic load. Best,
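The caching mechanism described above can be sketched roughly like this; the key name, the data shape, and the Map standing in for the browser's localStorage are all illustrative assumptions, not the plugin's actual code:

```javascript
// In the browser this would be window.localStorage; a Map stands in
// here so the sketch is self-contained and runnable anywhere.
const storage = new Map();
const CACHE_KEY = 'zfsmaster-last-refresh'; // hypothetical key name

// Called after every data refresh: store the data plus a timestamp.
function cacheRefresh(pools) {
  storage.set(CACHE_KEY, JSON.stringify({ timestamp: Date.now(), pools }));
}

// Called on page load when "No refresh" is enabled: return the cached
// copy (including its timestamp) instead of querying the pools again.
function loadCachedRefresh() {
  const raw = storage.get(CACHE_KEY);
  return raw ? JSON.parse(raw) : null;
}
```

Because the cached copy carries its own timestamp, the UI can show when the data was actually gathered rather than when the page was opened.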
  16. I'll take a deeper look at this, and check if there is a better way to keep the view conf.
  17. /docker/.* should do the trick; if not, please send me a PM with the result of the following command: "zfs list". Best
  18. That is not even close to normal; you should report this to General Support, as the pools and datasets are supposed to be mounted automatically on every reboot, unless you have defined otherwise at creation time.
  19. @Masterwishx a couple of things about that: the icon is only displayed if the last snapshot is older than the configured number of days, and the icon color only shows on the Main page, not in the dataset admin dialog. Best
  20. If you are referring to datasets that were collapsed in the UI, yes, it's correct and expected; the information about datasets hidden/collapsed on the UI is stored on the Unraid cookie; if you reboot, that invalidates the cookie and all the datasets are shown; this also works as a sanity check that everything works once you have rebooted your server.
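As a rough sketch of the idea (the cookie name and encoding here are assumptions for illustration; the plugin actually piggybacks on Unraid's own cookie): state kept in a session cookie disappears when the session ends, because the cookie carries no expiry.

```javascript
// Hypothetical helpers: keep the collapsed datasets in a session cookie.
// With no Expires/Max-Age attribute, the cookie lives only for the
// browser session, so a reboot/new session shows every dataset again.
function buildCollapsedCookie(datasets) {
  return 'collapsedDatasets=' + encodeURIComponent(datasets.join(','));
}

function readCollapsedCookie(cookieHeader) {
  const m = cookieHeader.match(/(?:^|;\s*)collapsedDatasets=([^;]*)/);
  return m ? decodeURIComponent(m[1]).split(',').filter(Boolean) : [];
}
```

A missing or invalidated cookie simply yields an empty list, which is why every dataset shows expanded after a reboot.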
  21. Answers to the questions: Thanks! You can donate through the "donate" link in my App profile; Red Peroni is my favorite! No problem; I will update to the 12h format in the next release. As weird as it may sound, this is directly related to "display last loaded data": the communication protocol (Nchan) retains the last message published, and that's why "last refresh at" is changed to the page-refresh timestamp. I'm testing whether that "not a bug but a feature" of Nchan can be leveraged as a cache to keep a copy of the last data loaded by the plugin, or whether I have to keep a copy of the last data in a file (technically, in RAM) located in "/tmp". However, this testing is at a very early stage, so please bear with me for a while. In the meantime, please keep testing the plugin and all the other functionalities, and report any other bugs you may find. Best,
  22. Well, enjoy, my friends, because a new update is live with the long-awaited functionality; the changelog is the following:

      2023.09.27
      Change - "No refresh" option now doesn't load information on page refresh
      Fix - Dynamic config reload

      "Dynamic config reload" means you don't have to close the window for the config to apply correctly.
  23. It's working as expected, or at least as I designed it; given the massive amount of changes in the backend for this version, I didn't want to introduce a functionality change that wouldn't feel intuitive to users. If you guys agree and it's really what is most useful for you, I can modify the "No refresh" functionality to not pull any info unless you click the button. About your config not applying, @samsausages:
  24. A new update is live with the fix for the issues mentioned; the changelog is the following:

      2023.09.25.72
      Fix - Config load
      Fix - Exclusion patterns for datasets with spaces
      Fix - Destroy dataset functionality
  25. That's not the result of a script; it's related to the Docker folder. Configure your exclusion pattern as "/system/.*" and everything should work as expected. I will look into those two things. The destroy action was changed to use a ZFS program; it probably doesn't work as expected.