Posts posted by Iker
-
18 hours ago, BasWeg said:
Since I also use the znapzend plugin, does your solution just work with the old stored znapzend configuration?
Yes, one of the multiple benefits from ZnapZend is that the configuration is stored on dataset custom properties, so you lose nothing by migrating from plugin to docker version.
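Since the plan lives in ZFS custom properties, you can verify what survived the migration straight from the CLI. A rough sketch ("tank/data" is a placeholder dataset name; the org.znapzend property namespace and the znapzendzetup tool come with ZnapZend, but double-check against your version):

```shell
# Show locally-set properties on the dataset; ZnapZend stores its plan
# under the org.znapzend namespace ("tank/data" is a placeholder):
zfs get -s local all tank/data | grep org.znapzend

# ZnapZend's setup tool can also print the configured plans:
znapzendzetup list
```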
-
@concerned-contour2481 No, you can't; however, you can send snapshots incrementally and in replicate mode. Check https://docs.oracle.com/cd/E19253-01/819-5461/gfwqb/index.html
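For illustration, a minimal sketch of an incremental, replicate-mode send over SSH (pool, dataset, snapshot, and host names are all placeholders):

```shell
# Initial full replication stream of the dataset tree (-R = replicate mode):
zfs snapshot -r tank/data@snap1
zfs send -R tank/data@snap1 | ssh backuphost zfs receive -F backup/data

# Afterwards, send only the delta between two snapshots (-i = incremental):
zfs snapshot -r tank/data@snap2
zfs send -R -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive -F backup/data
```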
@Revan335 Yeah, that's my guess; however, I use those tags to work on the plugin across multiple branches. I have to check how to remove the branch name when the release comes from the main branch; it shouldn't be too complicated, so you can count on the version name change ;).
-
@Revan335 It was an empty update, no new functionality or fixes, just a test release for the new (again) CI/CD pipeline; I'm finishing some fixes and will release a new version with minor improvements in the coming days.
@Masterwishx I'm not sure what Squid means by a "standard" versioning scheme, but I'll check.
-
15 hours ago, Marshalleq said:
User scripts sounds fine too, I use znapzend which still doesn't work for me despite some multiple attempts to do so. I assume scripts will be better.
The Community Applications ZnapZend plugin is currently broken because it executes early in the boot process, even before the pool is imported, mounted, and working; as a result, the plugin doesn't find any valid pools and exits. One way to keep using the latest version is the Docker image; here is my current docker-compose spec:
version: '3'
services:
  znapzend:
    image: oetiker/znapzend:master
    container_name: znapzend
    hostname: znapzend-main
    privileged: true
    devices:
      - /dev/zfs
    command: ["znapzend --logto /mylogs/znapzend.log"]
    restart: unless-stopped
    volumes:
      - /var/log/:/mylogs/
      - /etc/localtime:/etc/localtime:ro
    networks:
      - znapzend
networks:
  znapzend:
    name: znapzend
The only downside is that if you are replicating data to another machine, you have to enter the container and set up the SSH keys for the destination machine, or ... you have to mount a specific volume with the keys and known hosts into the container.
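For reference, the manual key setup could look roughly like this, assuming the compose file above and that the image ships ssh-keygen/ssh-copy-id; the destination host name is a placeholder:

```shell
# Create a key pair inside the running znapzend container:
docker exec -it znapzend ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519

# Push the public key to the destination machine; answering the host-key
# prompt also records the destination in the container's known_hosts file:
docker exec -it znapzend ssh-copy-id root@backuphost
```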
Best
-
You can check SpaceInvader One's video to see if it is a good fit for your use case:
-
This thread over here seems like a better place for your questions: https://forums.unraid.net/forum/37-pre-sales-support/
-
Sure thing; I'm planning to release a new version next week, with all the bug fixes and small improvements implemented; stay tuned.
-
Please post it as text; that would help me check whether there are any parsing errors in the plugin and implement the necessary fixes for the upcoming version.
-
8 minutes ago, andyd said:
Thanks for the plugin!
I set this up today - it picks up one of the pool drives but I have two formatted with zfs. Any reason the one would be ignored
Can you please share the result of the following command (As text):
zpool list -v
Best
-
On 11/12/2023 at 6:45 PM, wacko37 said:
I have encountered an issue with a ZFS disk mounted via UD not showing up in ZFS Master.
After a discussion with the UD developer @dlandon in the UD thread, it appears this is to do with ZFS compatibility in the upcoming Unraid 6.13 release; UD now accommodates 6.13 when formatting a disk to ZFS, rendering it unmountable in ZFS Master... see below
Thanks for the info; currently I don't have access to the 6.13 beta version. As soon as it is released to the general public, I will try to reproduce your issue and check why the plugin is not picking up the pools.
7 hours ago, Michel Amberg said:
Is there some guideline on how to restore snapshots without this happening? or is this the only workaround?
That seems like something that should go in the General Support thread; I'm not sure I follow exactly what is going on with your datasets.
-
Please keep in mind that ZFS Master doesn't use regex but Lua patterns for matching the exclusion folder, which comes with some downsides. In your particular case @sfef, at first sight your pattern seems fine, but it actually contains a reserved symbol "-"; combined with "r", it means a completely different thing. You can check your patterns here:
https://gitspartv.github.io/lua-patterns/
Long story short, this should do the trick: "/docker%-sys/.*"
Additional doc on Lua patterns:
https://www.fhug.org.uk/kb/kb-article/understanding-lua-patterns/
-
Nice catch; indeed, there is no support for the tabbed view. I'll check and come up with a fix in a coming version.
-
4 hours ago, Indi said:
When you mentioned this I immediately thought of the ZFS Master plugin, which was the culprit. I went into that plugin settings and "Destructive Mode" was set to "No". I changed this to "Yes" and then went back into the share in unraid, and was able to delete it. It seems the plugin prevents deletion even if the share is empty.
Hi, ZFS Master plugin developer here. Destructive mode is not what you think; the setting you just changed only affects the UI. When you set it to "Yes", ZFS Master shows destructive action elements in the UI (destroy dataset and other things), but the plugin doesn't implement, and doesn't even have the power to enforce, anything that prevents a dataset from being deleted or modified; quite the opposite, the plugin provides a UI for doing precisely that. Your issue is more related to this:
-
On 10/19/2023 at 4:06 PM, samsausages said:
I do have a future feature request: The ability to refresh by pool. I.e. a refresh button on the pool bar that has the "hide dataset" "create dataset" buttons.
And/or in the config the ability to select/deselect pools from the refresh.
Right now, the refresh options are a global setting, but the plugin functionality is implemented at the pool level, so it should be... not easy (The cache could be a mess), but at least possible.
-
No, you don't need to create the destination in advance. Which Unraid and ZFS Master versions are you using?
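In other words, defining the plan is enough; the destination dataset gets created when the first replication runs. A hedged sketch of such a plan (dataset names, retention plan, and host are placeholders; check `znapzendzetup --help` for the exact syntax of your version):

```shell
# Keep a week of hourly snapshots locally and mirror them to a remote
# backup pool; backup/data on backuphost need not exist beforehand:
znapzendzetup create --recursive \
  SRC '7d=>1h' tank/data \
  DST '7d=>1h' root@backuphost:backup/data
```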
-
Nice catch, I'll probably change it to short datetime format.
-
A new update is live with the following changelog:
2023.10.07
- Add - Cache last data in Local Storage when using "no refresh"
- Fix - Dataset admin Dialog - Error on select all datasets
- Fix - Multiple typos
- Fix - Special condition crashing the backend
- Fix - Status refresh on Snapshots admin dialog
- Change - Date format across multiple dialogs
- Change - Local Storage for datasets and pools view options
Thanks @Niklas; while looking for a way to preserve the views, I ended up finding an excellent way to implement a cache for the last refresh :). Also, the view options are now as durable as they can be, even across reboots.
How does the cache work?
Every time the plugin refreshes the data, it saves a copy to the web browser's local storage. If you have configured the "No refresh" option, then when you enter the main page, the plugin loads that information (including the timestamp) from the cache; this operation is almost instantaneous. This only happens when the "No refresh" option is enabled; otherwise, the plugin loads the information from the pools directly. The cache also works with Lazy and Classic load.
Best,
-
2 hours ago, Niklas said:
It still expands and show all datasets from time to time. No reboot.
I'll take a deeper look at this and check whether there is a better way to keep the view configuration.
-
1 hour ago, Joly0 said:
I have deleted everything now, reformated my pool and setup everything fresh and new, now it looks right, but i still cant find the right setting to hide those datasets
Any idea? Tried "/cache/docker/.*" or "/docker/.*"
"/docker/.*" should do the trick; if not, please send me a PM with the result of the command "zfs list".
Best,
-
9 hours ago, unr41dus3r said:
Maybe a Noob ZFS question, but i have to run "zfs mount -a" after every reboot to mount my zfs datasets again.
Is this by design or an configuration mistake by me?
That is not even close to normal; you should report it in General Support. Pools and datasets are supposed to be mounted automatically on every reboot, unless you have defined otherwise at creation time.
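If a setting made at creation time is the suspect, a quick sketch for checking the mount-related properties ("tank/data" is a placeholder dataset name):

```shell
# canmount=noauto/off or mountpoint=legacy would explain datasets
# that never mount by themselves at boot:
zfs get canmount,mountpoint tank/data
```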
-
@Masterwishx a couple of things about that:
- It is only displayed if the last snapshot is older than the configured number of days.
- The icon color only shows on the Main page, not in the dataset admin dialog.
Best
-
4 hours ago, SimonF said:
It shows all datasets following a reboot is that expected?
If you are referring to datasets that were collapsed in the UI, yes, it's correct and expected. The information about datasets hidden/collapsed in the UI is stored in the Unraid cookie; a reboot invalidates the cookie, so all the datasets are shown again. This also works as a sanity check that everything is working once you have rebooted your server.
-
Answers to the questions:
1 hour ago, lazant said:
How can I buy you a beer?!
Thanks! Through the "donate" link in my App profile; Red Peroni is my favorite!
30 minutes ago, Laov said:
(24 h vs 12 h format)
No problem; I will update to the 12 h format in the next release.
30 minutes ago, Laov said:
BUT STILL! There is a minor bug:
As weird as it may sound, this is directly related to the "display last loaded data" feature. The communication protocol (Nchan) retains the last message published; that's why the "last refresh at" value changes to the page refresh timestamp. I'm testing whether that "not a bug but a feature" of Nchan can be leveraged as a cache to keep a copy of the last data loaded by the plugin, or whether I have to keep a copy of the last data in a file (technically, in RAM) located under "/tmp". However, this testing is at a very early stage, so please bear with me for a while.
In the meantime, please keep testing the plugin and all the other functionalities, and report any other bug you may find.
Best,
-
Well, enjoy, my friends, because a new update is live with the long-awaited functionality; the changelog is the following:
2023.09.27
- Change - "No refresh" option now doesn't load information on page refresh
- Fix - Dynamic Config reload
The "Dynamic Config reload" means you don't have to close the window for the config to apply correctly.
[PLUGIN] ZFS Master
in Plugin Support
Hi Folks, a new update is live with the following changelog:
2023.12.4