Iker

Community Developer
Posts posted by Iker

  1. Hi Folks, a new update is live with the following changelog:

     

    2023.12.4

    • Fix - Used and Free % bars/texts are now consistent with the Unraid theme and config
    • Fix - Set time format for the last refresh to short date and time
    • Fix - Detect Pools with used % under 0%
    • Fix - ZPool regex not catching some pools with dots or underscores in the name

     

    • Like 3
  2. 18 hours ago, BasWeg said:

    Since I also use the znapzend plugin, does your solution just work with the old stored znapzend configuration?

     

    Yes, one of the many benefits of ZnapZend is that the configuration is stored in custom dataset properties, so you lose nothing by migrating from the plugin to the Docker version.
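
    For example, you can see the stored plan directly on the dataset; if I remember correctly, the properties live under the org.znapzend namespace (the dataset name below is just a placeholder):

    # every locally-set property, including the org.znapzend:* ones created by znapzendzetup
    zfs get -s local all tank/appdata

    # or ask znapzend itself for the configured plans (run this inside the container)
    znapzendzetup list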

    • Thanks 1
  3. @concerned-contour2481 No, you can't; however, you can send snapshots incrementally and in replicate mode. Check https://docs.oracle.com/cd/E19253-01/819-5461/gfwqb/index.html
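
    A minimal sketch of the idea (pool, dataset, and host names below are placeholders, not anything from your system):

    # initial full replication stream; -R sends the dataset with all descendants, snapshots and properties
    zfs send -R tank/data@snap1 | ssh backup-host zfs receive -F backup/data

    # later, send only the changes between two snapshots, still in replicate mode
    zfs send -R -i tank/data@snap1 tank/data@snap2 | ssh backup-host zfs receive -F backup/data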

     

    @Revan335 Yeah, that's my guess; however, I use those tags to work on the plugin across multiple branches. I have to check how to remove the branch name when it comes from the main branch; it shouldn't be too complicated, so you can count on the version name changing ;).

    • Like 2
  4. 15 hours ago, Marshalleq said:

    User scripts sounds fine too, I use znapzend which still doesn't work for me despite some multiple attempts to do so.  I assume scripts will be better.

     

    The Community Applications ZnapZend plugin is currently broken because it executes too early in the boot process, even before the pool is imported, mounted, and working, so it doesn't find any valid pools and exits. One way to keep running the latest version is to use the Docker image; here is my current docker-compose spec:

     

    version: '3'
    services:
      znapzend:
        image: oetiker/znapzend:master
        container_name: znapzend
        hostname: znapzend-main
        privileged: true                 # the container needs full access to manage ZFS on the host
        devices:
          - /dev/zfs                     # expose the host ZFS control device
        command: ["znapzend --logto /mylogs/znapzend.log"]
        restart: unless-stopped
        volumes:
          - /var/log/:/mylogs/           # keep the znapzend log on the host
          - /etc/localtime:/etc/localtime:ro
        networks:
          - znapzend
    
    networks:
      znapzend:
        name: znapzend

     

    The only downside is that if you are replicating data to another machine, you have to access both the container and the destination machine to set up the SSH keys, or ... you have to mount a specific volume with the keys and known_hosts file into the container.
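
    If you go the volume route, something like this under the service's volumes section should work, assuming the container runs as root and you already have a key pair and known_hosts on the host (the host path is just an example):

        volumes:
          - /var/log/:/mylogs/
          - /etc/localtime:/etc/localtime:ro
          - /root/.ssh:/root/.ssh:ro     # example path: pre-generated keys + known_hosts for the destination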

     

    Best

  5. 8 minutes ago, andyd said:

    Thanks for the plugin!

     

    I set this up today - it picks up one of the pool drives but I have two formatted with zfs. Any reason the one would be ignored

     

    Can you please share the result of the following command (As text):

     

    zpool list -v

     

    Best

  6. On 11/12/2023 at 6:45 PM, wacko37 said:

    I have encountered an issue with a ZFS disk mounted via UD not showing up in ZFS Master.

    After a discussion with the UD developer @dlandon in the UD thread, it appears this is to do with ZFS compatibility in the upcoming Unraid 6.13 release, UD now accommodates for 6.13 when formatting a disk to ZFS rendering it unmountable in ZFS Master...... see below

    Thanks for the info. Currently I don't have access to the 6.13 beta version; as soon as it is released to the general public, I will try to reproduce your issue and check why the plugin is not picking up the pools.

     

    7 hours ago, Michel Amberg said:

    Is there some guideline on how to restore snapshots without this happening? or is this the only workaround?

     

    That seems like something for the General Support thread; I'm not sure I follow exactly what is going on with your datasets.

    • Thanks 1
  7. Please keep in mind that ZFS Master doesn't use regex but Lua patterns for matching the exclusion folder, which comes with some downsides. In your particular case @sfef, at first sight your pattern seems fine, but it actually contains a reserved symbol "-"; combined with "r", it means something completely different. You can check your patterns here:

     

    https://gitspartv.github.io/lua-patterns/

     

    Long story short, this should do the trick: "/docker%-sys/.*"
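
    If you happen to have a Lua interpreter at hand, you can also check it from a terminal (the paths below are just examples):

    # "%-" escapes the hyphen, so the literal path matches
    lua -e 'print(("/docker-sys/containers"):match("/docker%-sys/.*"))'   # prints /docker-sys/containers
    # without the escape, "r-" means "zero or more r", so the match fails
    lua -e 'print(("/docker-sys/containers"):match("/docker-sys/.*"))'    # prints nil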

     

    Additional doc on Lua patterns:

     

    https://www.fhug.org.uk/kb/kb-article/understanding-lua-patterns/

     

     

    • Thanks 1
  8. 4 hours ago, Indi said:

    When you mentioned this I immediately thought of the ZFS Master plugin, which was the culprit. I went into that plugin settings and "Destructive Mode" was set to "No". I changed this to "Yes" and then went back into the share in unraid, and was able to delete it. It seems the plugin prevents deletion even if the share is empty. 

     

    Hi, ZFS Master plugin developer here. Destructive mode is not what you think; the setting you just changed only affects the UI. When you set it to "Yes", ZFS Master shows destructive action elements in the UI (destroy dataset and other stuff), but the plugin neither implements nor has the power to prevent a dataset from being deleted or modified; quite the opposite, the plugin provides a UI for doing precisely that. Your issue is more related to this: 

     

  9. On 10/19/2023 at 4:06 PM, samsausages said:

    I do have a future feature request:  The ability to refresh by pool. I.e. a refresh button on the pool bar that has the "hide dataset" "create dataset" buttons.

    And/or in the config the ability to select/deselect pools from the refresh.

     

    Right now, the refresh options are a global setting, but the plugin functionality is implemented at the pool level, so it should be... not easy (The cache could be a mess), but at least possible.

    • Like 2
  10. A new update is live with the following changelog:

     

    2023.10.07

    • Add - Cache last data in Local Storage when using "no refresh"
    • Fix - Dataset admin Dialog - Error on select all datasets
    • Fix - Multiple typos
    • Fix - Special condition crashing the backend
    • Fix - Status refresh on Snapshots admin dialog
    • Change - Date format across multiple dialogs
    • Change - Local Storage for datasets and pools view options

    Thanks @Niklas; while looking for a way to preserve the views, I ended up finding an excellent way to implement a cache for the last refresh :). Also, the view options are now as durable as they can be, even across reboots.

     

    How does the cache work?

    Every time the plugin refreshes the data, it saves a copy to the web browser's local storage. If you have configured the "No refresh" option, the plugin loads that information (including the timestamp) from the cache as soon as you enter the main page; this operation is almost instantaneous. This only happens if the "No refresh" option is enabled; otherwise, the plugin loads the information directly from the pools. The cache also works with Lazy and Classic load.

     

    Best,

    • Thanks 3
  11. 1 hour ago, Joly0 said:

    I have deleted everything now, reformated my pool and setup everything fresh and new, now it looks right, but i still cant find the right setting to hide those datasets
    Any idea? Tried "/cache/docker/.*" or "/docker/.*"

     

    "/docker/.*" should do the trick; if not, please send me a PM with the result of the command "zfs list".

     

    Best

  12. 9 hours ago, unr41dus3r said:

    Maybe a Noob ZFS question, but i have to run "zfs mount -a" after every reboot to mount my zfs datasets again.

    Is this by design or an configuration mistake by me?

     

    That is not even close to normal; you should report it in General Support, as pools and datasets are supposed to be mounted automatically on every reboot, unless you defined otherwise at creation time.
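
    If you want to rule that out, checking the mount-related properties is a quick start (the pool name is a placeholder):

    # canmount=off/noauto or an unexpected mountpoint would explain datasets not mounting by themselves
    zfs get -r canmount,mountpoint,mounted tank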

  13. 4 hours ago, SimonF said:

    It shows all datasets following a reboot is that expected?

    If you are referring to datasets that were collapsed in the UI, then yes, it's correct and expected. The information about which datasets are hidden/collapsed in the UI is stored in the Unraid cookie; a reboot invalidates the cookie, so all the datasets are shown again. This also works as a sanity check that everything is working once you have rebooted your server.

  14. Answers to the questions:

     

    1 hour ago, lazant said:

    How can I buy you a beer?!

     

    Thanks! Through the "donate" link in my App profile; Red Peroni is my favorite!

     

    30 minutes ago, Laov said:

    (24 h vs 12 h format)

     

    No problem; I will update it to the 12h format in the next release.

     

    30 minutes ago, Laov said:

    BUT STILL! There is a minor bug:

     

    As weird as it may sound, this is directly related to "display last loaded data". The communication protocol (Nchan) retains the last message published; that's why the "last refresh at" time changes to the page refresh timestamp. I'm testing whether that "not a bug but a feature" of Nchan can be leveraged as a cache to keep a copy of the last data loaded by the plugin, or whether I have to keep a copy of the last data in a file (technically, in RAM) located under "/tmp". However, this testing is at a very early stage, so please bear with me for a while.

     

    In the meantime, please keep testing the plugin and all the other functionalities, and report any other bug you may find.

     

    Best,

    • Thanks 4
  15. Well, enjoy, my friends, because a new update is live with the so-long-awaited functionality; the changelog is the following:

     

    2023.09.27

    • Change - "No refresh" option now doesn't load information on page refresh
    • Fix - Dynamic Config reload

    The "Dynamic Config reload" means you don't have to close the window for the config to apply correctly.

    • Like 2