
Iker

Community Developer
  • Posts: 270
  • Joined
  • Last visited
Everything posted by Iker

  1. No, there are no conflicts. This plugin is exclusively for ZFS dataset administration; it has nothing to do with the underlying ZFS system or how it is loaded.
  2. Sounds good! I will take a deeper look in the coming days, as this is very unexpected behavior, and I haven't been able to reproduce it in an unRaid VM.
  3. TBH, that doesn't make a lot of sense to me. As I said, the plugin doesn't query the disks directly; it only executes zfs commands every 30 seconds (you can change the interval in the config).
  4. Hi, I agree with @itimpi; this situation is not related to the plugin but to unRaid itself. The plugin doesn't implement any code associated with SMART functionality, and all the commands are exclusively ZFS-related (zpool list, zpool status, etc.). Moreover, the plugin doesn't enumerate the devices present in the system; it only parses the output of zpool status for pool health purposes.
  5. I will check how to support non-English languages.
  6. I just read the bug report thread; my best guess is that the dataset has some properties not supported by the current ZFS version, or that the unRaid UI implementation is not importing the pool correctly. Here are some ideas to debug the issue a little bit further:

More Diag Info
  • If you create a new folder on the same dataset, does everything work with this new folder?
  • Create a dataset in unRaid 6.12 and check if everything works correctly and you can see the folder and its content (just to check if there is a problem with the dataset itself).

Possible Solutions
  • Do not assign the pool to unRaid Pools; import it using the command line and see if that works (zpool import, then zfs mount -a).
  • As weird as that may be, you could clone the dataset in unRaid 6.12, see if the information shows up, promote it, and let go of the old one. (A rough command sketch follows below.)
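For illustration only, this is roughly what both suggestions look like on the command line; the pool name "tank" and dataset "tank/data" are just examples, so adjust them to your setup:

zpool import tank                      # import from the CLI instead of unRaid Pools
zfs mount -a                           # mount all datasets

zfs snapshot tank/data@debug           # snapshot to clone from (name is illustrative)
zfs clone tank/data@debug tank/data_clone
zfs promote tank/data_clone            # make the clone independent of the origin
zfs destroy -r tank/data               # only after verifying the clone works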
  7. Hi folks, a new update is live with the following changelog:

2023.04.03
Add - Rename datasets UI
Add - Edit datasets UI
Add - unRaid 6.12 compatibility
Add - Lazy load for snapshots admin UI
Fix - Improve PHP 8 compatibility

Snapshots lazy load functionality is slowly moving forward; in the next version, it will probably be integrated into the main page. Best,
  8. It's relatively easy; you can find examples of shares and vfs objects in this post: However, my advice is to read and understand the Samba options and naming convention necessary for this to work; it is very well explained in the Level1Techs forum post (https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764). Best,
  9. Unfortunately, right now, there is no easy way to create shares on the pool; you have to deal with the smb-extra.conf file. This situation should be solved in the upcoming 6.12 version; in the meantime, these are my templates for pool shares (ssdnvme in this example):

Private Share

[example_private_share]
path = /ssdnvme/example_private
comment = My Private share
browsable = yes
guest ok = no
writeable = yes
write list = iker
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = %Y-%m-%d-%H%M%S
shadow: localtime = yes
shadow: snapdirseverywhere = yes

Private Hidden Share

[example_private_hidden]
path = /ssdnvme/example_private_hidden
comment = My Private info
browsable = no
guest ok = yes
writeable = no
write list = iker
read only = yes
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = %Y-%m-%d-%H%M%S
shadow: localtime = yes

Public Share

[example_public_share]
path = /ssdnvme/example_public
comment = UnRaid related info
browsable = yes
guest ok = yes
writeable = yes
write list = iker
read only = yes
create mask = 0775
directory mask = 0775

Best,
  10. Hi guys, a new update is live with the following changelog:

2023.02.28
Fix - PHP 8 upgrades
Fix - Export pool command
Fix - Error on parsing dataset origin property

I'm still working on the lazy load thing (@HumanTechDesign), a page for editing dataset properties, and making everything compatible with unRaid 6.12. The microservices idea is a big no, so I'm probably going to use websockets (nchan functionality) to improve load times; the snapshots are going to be loaded in the background. @Ter About the recursive mount, ZFS doesn't have an option for that use case, so I would have to code it myself; I'm still evaluating whether it's worth implementing or not. Best,
  11. Yes, sorry for the late response; I'll be pushing an update next week with the nested mount command. For now, at least, that's the way to go; in the coming 6.12 version, ZFS will be a first-class citizen, which means that you can import pools and create shares using unRaid's GUI.
  12. Nicely done! It's weird, as everything you did has the same effect as the kernel parameters, but anyway, it's nice to have a new method. As for the last question, I didn't have any instability issues; they were latency problems. I have multiple services running every X seconds, and additionally, in the morning, the governor changes to powersave; with the new driver, the cores go down to 500 MHz, which is actually nice, but it also causes some of the scheduled tasks to fail with timeouts, and generally speaking, the cores didn't scale very well. Obviously, you save a buck or two with the new driver, but if you need performance, it doesn't work really well.
  13. Hi @Ter, thanks for your feedback; let me take a closer look into this and get back to you. The problem with the "-a" option is that it mounts all the datasets, not just the children; that can be a little bit problematic for other users. (See the sketch below for the difference.)
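For illustration only, this is roughly what a child-only mount could look like; "tank/parent" is an example name, and this is not code the plugin currently ships:

# Mount only tank/parent and its descendants, instead of every
# dataset on the system the way "zfs mount -a" does
zfs list -H -r -t filesystem -o name tank/parent | while read -r ds; do
    zfs mount "$ds" 2>/dev/null || true    # skip datasets that are already mounted
done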
  14. You probably need the feature to be enabled in your BIOS (Global C-States, CPPC, and a lot of other options); I don't recall the exact parameters, and they vary between manufacturers, so give it a try with a Google search.
  15. Which unRaid version are you using? I don't have the AMD CPU governor enabled; Unraid lags a lot, and generally speaking, the performance-vs-power tradeoff is not so good.
  16. I got this working with the following steps:
1. Add the blacklist and pstate parameters to the syslinux config (it's possible from the GUI; see the sketch below): modprobe.blacklist=acpi_cpufreq amd_pstate.shared_mem=1
2. Include in your go file the following line to load the kernel module: modprobe amd_pstate
3. Reboot and enjoy your new scheduler.
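For reference, the append line in /boot/syslinux/syslinux.cfg ends up looking something like this; your stanza may carry other parameters as well:

label Unraid OS
  menu default
  kernel /bzimage
  append modprobe.blacklist=acpi_cpufreq amd_pstate.shared_mem=1 initrd=/bzroot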
  17. Not returned, but created by a plugin. Example endpoints:
https://unraidserver/Main/ZFS/dataset - accepts PUT, GET, and DELETE requests
https://unraidserver/Main/ZFS/snapshot - accepts PUT, GET, and DELETE requests
https://unraidserver/Main/ZFS/pool - accepts GET and DELETE requests
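To make the idea concrete, this is how such endpoints could be driven once they exist; the hostname and the payload fields are purely illustrative, not an existing API:

curl -k -X GET    https://unraidserver/Main/ZFS/dataset                          # list datasets
curl -k -X PUT    https://unraidserver/Main/ZFS/dataset  -d 'name=tank/newds'    # create a dataset
curl -k -X DELETE https://unraidserver/Main/ZFS/snapshot -d 'name=tank/ds@old'   # destroy a snapshot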
  18. Hi guys, is it possible to develop a plugin with multiple endpoints? I want to expose multiple endpoints for ZFS administration programmatically. However, I need help finding a proper way to do this. On the other hand, I prefer to avoid creating a separate docker container for it.
  19. This lazy load functionality has been on my mind for quite a while now, and it will help a lot with load times in general; that's why I'm focusing on rewriting a portion of the backend so that every piece of functionality becomes a microservice. Given the unRaid UI architecture, though, it's a little more complicated than I anticipated. I understand your issue, but please hang in there for a little while.
  20. I dug a little into this issue with the logs available on my system and found that the command is issued every time you reboot Unraid; I'm not sure if that's your case, @1WayJonny, but it seems to be expected behavior, at least on my machine.
  21. Hi @1WayJonny, plugin author here. I'm not sure if you are in the correct forum, but in case you are, it seems that you got a couple of things wrong:
  • The plugin doesn't implement any commands related to importing pools; the source code is available, and you can check in the repo that a "zpool import" command doesn't exist.
  • There are no actions performed automatically or in the background by the plugin besides refreshing the info every 30 seconds (or the interval you configure) while you are on the main tab.
  • The zfs process you correctly identified is that refresh action; it only runs when you interact with the main tab.
There is something else messing with your pools; I'm not sure what, but I suggest you uninstall all the ZFS-related plugins and then check the command history. Best,
  22. Just remember that when you import a pool, ZFS tries to upgrade the filesystem to the latest version; it looks pretty likely to me that this is related to a specific attribute/feature that TrueNAS introduces to the pool. I mean, if the pool works with TrueNAS but not with vanilla ZFS (unRaid), there is not a lot of room for other types of errors.
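One way to narrow it down, assuming the pool is named mypool (adjust the name to yours):

zpool get all mypool | grep feature@     # a feature that is "active" but unknown locally blocks a normal import
zpool import -o readonly=on mypool       # a read-only import often still works despite unsupported features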
  23. As you mentioned TrueNAS, maybe it is related to that, based on this: https://github.com/openzfs/zfs/issues/12643
  24. This is due to the import process of the pool and how the Unassigned Devices plugin works; it has nothing to do with your pool, and the only way that I'm aware of to solve it is by detaching and re-attaching the disk using:

zpool detach mypool sdd
zpool attach mypool <remaining-mirror-device> /dev/disk/by-id/<disk-id>
<wait until zpool status shows it's rebuilt...>

One easier way could be:

zpool export mypool
zpool import -d /dev/disk/by-id mypool

Additional info: https://plantroon.com/changing-disk-identifiers-in-zpool/
  25. Please be aware that there are some differences between "add" and "attach":
https://openzfs.github.io/openzfs-docs/man/8/zpool-attach.8.html
https://openzfs.github.io/openzfs-docs/man/8/zpool-add.8.html
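In short, with example pool and device names: "attach" mirrors a new disk onto an existing device, while "add" creates a whole new top-level vdev that the pool stripes across (and which can't easily be removed afterwards):

zpool attach mypool sdb sdc    # attach sdc as a mirror of the existing sdb, then resilver
zpool add mypool sdd           # add sdd as a new top-level vdev; data now stripes across it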