Iker

Community Developer

Everything posted by Iker

  1. Hi folks, a new update is live with the following changelog:

     2023.04.03
     Add - Rename datasets UI
     Add - Edit datasets UI
     Add - unRaid 6.12 compatibility
     Add - Lazy load for snapshots admin UI
     Fix - Improve PHP 8 compatibility

     Snapshots lazy load functionality is slowly moving forward; in the next version, it will probably be integrated on the main page. Best,
  2. It's relatively easy; you can find examples of shares and vfs objects in this post: However, my advice is to read and understand the Samba options and naming convention necessary for this to work; this is very well explained in the Level1Techs forum post (https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764). Best,
  3. Unfortunately, right now, there is no easy way to create shares on the pool; you have to deal with the smb-extra.conf file. This situation should be solved in the upcoming 6.12 version; in the meantime, these are my templates for pool shares (ssdnvme in this example):

     Private Share

     [example_private_share]
     path = /ssdnvme/example_private
     comment = My Private share
     browsable = yes
     guest ok = no
     writeable = yes
     write list = iker
     read only = no
     create mask = 0775
     directory mask = 0775
     vfs objects = shadow_copy2
     shadow: snapdir = .zfs/snapshot
     shadow: sort = desc
     shadow: format = %Y-%m-%d-%H%M%S
     shadow: localtime = yes
     shadow: snapdirseverywhere = yes

     Private Hidden Share

     [example_private_hidden]
     path = /ssdnvme/example_private_hidden
     comment = My Private info
     browsable = no
     guest ok = yes
     writeable = no
     write list = iker
     read only = yes
     create mask = 0775
     directory mask = 0775
     vfs objects = shadow_copy2
     shadow: snapdir = .zfs/snapshot
     shadow: sort = desc
     shadow: format = %Y-%m-%d-%H%M%S
     shadow: localtime = yes

     Public Share

     [example_public_share]
     path = /ssdnvme/example_public
     comment = UnRaid related info
     browsable = yes
     guest ok = yes
     writeable = yes
     write list = iker
     read only = yes
     create mask = 0775
     directory mask = 0775

     Best,
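     As a rough sketch of how one of these templates could be applied, assuming the stock Unraid locations for the extra Samba config and the Samba rc script (verify both on your system):

     # 1) Paste one of the share blocks above into Unraid's extra Samba config
     #    (assumed path; also editable from the SMB settings page in the GUI)
     nano /boot/config/smb-extra.conf

     # 2) Reload Samba so the new share is picked up (assumed rc script location)
     /etc/rc.d/rc.samba restart

     # 3) Check that the share is now exported
     smbclient -L localhost -N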
  4. Hi guys, a new update is live with the following changelog:

     2023.02.28
     Fix - PHP 8 Upgrades
     Fix - Export pool command
     Fix - Error on parsing dataset origin property

     I'm still working on the lazy load thing ( @HumanTechDesign ), a page for editing dataset properties, and making everything compatible with Unraid 6.12. The microservices idea is a big no, so I'm probably going to use websockets (nchan functionality) to improve load times; the snapshots are going to be loaded in the background. @Ter About the recursive mount, ZFS doesn't have an option for that use case, so I would have to code it myself; I'm still evaluating whether it is worth implementing or not. Best,
  5. Yes, sorry for the late response; I'll be pushing an update next week with the nested mount command. For now, at least, that's the way to go; in the coming 6.12 version ZFS will be a first-class citizen, which means that you can import pools and create shares using unRaid's GUI.
  6. Nicely done! It's odd, since everything you did should have the same effect as the kernel parameters, but anyway, it's nice to have a new method. For the last question, I didn't have any instability issues; they were latency problems. I have multiple services running every X seconds, and additionally, in the morning the governor changes to powersave; with the new driver the cores go down to 500 MHz, which is actually nice, but it also causes some of the scheduled tasks to fail with timeouts, and generally speaking the cores didn't scale up very well. Obviously, you save a buck or two with the new driver, but if you need performance it doesn't work really well.
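     For reference, the morning governor switch mentioned above needs nothing more than the standard cpufreq sysfs interface (e.g., run from a scheduled script); a minimal sketch:

     # Switch every core to the powersave governor; run the same loop with
     # "schedutil" (or "performance") to scale back up later in the day
     for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
         echo powersave > "$gov"
     done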
  7. Hi @Ter, thanks for your feedback; let me take a closer look into this and get back to you. The problem with the "-a" option is that it mounts all the datasets, not just the children, which can be a little bit problematic for other users.
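     A rough illustration of the difference, using only stock zfs commands (the dataset name is made up, and datasets with legacy or disabled mountpoints would need extra handling):

     # "zfs mount -a" mounts every dataset in every imported pool
     zfs mount -a

     # Mounting only one dataset and its children can be approximated by walking
     # the tree and mounting whatever is not mounted yet
     zfs list -rH -o name,mounted tank/appdata | awk '$2 == "no" {print $1}' | \
         while read -r ds; do zfs mount "$ds"; done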
  8. You probably need the feature to be enabled in your BIOS (Global C-State Control, CPPC, and a lot of other options); I don't recall the exact parameters, and they vary between manufacturers, so give a Google search a try.
  9. Which Unraid version are you using? I don't have the AMD CPU governor enabled; with it, Unraid lags a lot, and generally speaking the performance-vs-power trade-off is not so good.
  10. I got this working with the following steps:
      1. Add the blacklisting and pstate parameters to the syslinux config (it's possible from the GUI): modprobe.blacklist=acpi_cpufreq amd_pstate.shared_mem=1
      2. Include the following line in your go file to load the kernel module: modprobe amd_pstate
      3. Reboot and enjoy your new scheduler.
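      For reference, this is roughly how the two files end up looking (stock Unraid file locations assumed; your syslinux entry may already have extra parameters that should be kept):

      # /boot/syslinux/syslinux.cfg - the parameters go on the "append" line
      # of the boot entry you actually use
      label Unraid OS
        menu default
        kernel /bzimage
        append modprobe.blacklist=acpi_cpufreq amd_pstate.shared_mem=1 initrd=/bzroot

      # /boot/config/go - load the driver at boot
      modprobe amd_pstate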
  11. Not returned, but created by a plugin. Endpoints example:
      https://unradiserver/Main/ZFS/dataset - Accepts PUT, GET and DELETE requests
      https://unradiserver/Main/ZFS/snapshot - Accepts PUT, GET and DELETE requests
      https://unradiserver/Main/ZFS/pool - Accepts GET and DELETE requests
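      If the endpoints worked as sketched above, a client could drive them with plain HTTP; a rough, hypothetical example (the payload fields and the lack of authentication are assumptions, not part of any released API):

      # List datasets, create a snapshot, and destroy a pool through the
      # hypothetical ZFS endpoints
      curl -X GET    https://unradiserver/Main/ZFS/dataset
      curl -X PUT    https://unradiserver/Main/ZFS/snapshot -d 'dataset=tank/appdata&name=before-upgrade'
      curl -X DELETE https://unradiserver/Main/ZFS/pool -d 'pool=tank'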
  12. Hi guys, is it possible to develop a plugin with multiple endpoints? I want to expose multiple endpoints for ZFS administration programmatically; however, I need help finding a proper way to do this. On the other hand, I would prefer to avoid creating a separate Docker container for it.
  13. This lazy load functionality has been on my mind for quite a while now, and it will help a lot with load times in general; that's why I'm focusing on rewriting a portion of the backend so that every functionality becomes a microservice. Given the unRaid UI architecture, though, it's a little more complicated than I anticipated. I understand your issue, but please hang in there for a little while.
  14. I dug a little into this issue with the logs available on my system and found that the command is issued every time you reboot Unraid; not sure if that's your case @1WayJonny, but it seems to be expected behavior, at least in my case.
  15. Hi @1WayJonny, plugin author here; I'm not so sure if you are in the correct forum, but in case you are, it seems that you got a couple of things wrong: The plugin doesn't implement any commands related to importing pools; the source code is available, and you can check in the repo that the "zfs import" command doesn't exist. There are no actions performed automatically or in the background by the plugin besides refreshing the info every 30 seconds (or the time you configure) when you are in the main tab. The zfs program you correctly identified is the refreshing action; it only occurs when interacting with the main tab. There is something else messing with your pools; I'm not sure what, but I suggest you uninstall all the ZFS-related plugins and then check the history commands. Best,
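      For reference, ZFS itself keeps a per-pool command log, which is one way to see what actually touched a pool (the pool name here is just an example):

      # Show every zpool/zfs command ever run against the pool, with timestamps
      zpool history mypool

      # Include internally logged events plus the user and host that issued each command
      zpool history -il mypool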
  16. Just a reminder that when you import a pool, ZFS tries to upgrade the filesystem to the latest version; for me, it looks pretty likely that it is something related to a specific attribute/feature that TrueNAS introduces to the pool. I mean, if the pool works with TrueNAS but not with vanilla ZFS (unRaid), there is not a lot of room for other types of errors.
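      One way to check this hypothesis is to compare the pool's feature flags against what the local ZFS build supports (the pool name is an example):

      # List every feature flag and its state (enabled / active / disabled) on the pool
      zpool get all mypool | grep feature@

      # List the feature flags this ZFS build knows about
      zpool upgrade -v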
  17. As you mentioned TrueNAS, maybe it is related to that, based on this: https://github.com/openzfs/zfs/issues/12643
  18. This is due to the import process of the pool and how the Unassigned Devices plugin works; it has nothing to do with your pool, and the only way that I'm aware of to solve it is by detaching and attaching the disk using:

      zpool detach mypool sdd
      zpool attach mypool /dev/disk/sdd-by-id
      <wait until zpool status shows it's rebuilt...>

      One easier way could be:

      zpool export mypool
      zpool import -d /dev/disk/by-id mypool

      Additional info: https://plantroon.com/changing-disk-identifiers-in-zpool/
  19. Please be aware that there are some differences between using "add" and "attach": https://openzfs.github.io/openzfs-docs/man/8/zpool-attach.8.html https://openzfs.github.io/openzfs-docs/man/8/zpool-add.8.html
  20. Hi daver, the new functionalities are for the most part associated with snapshots; for example, batch deletion and cloning are exclusively present in the snapshot admin dialog.
  21. Last update of the year (probably):

      2022.11.12
      Fix - Error on dialogs and input controls
      Add - Clone capabilities for snapshots
      Add - Promote capabilities for datasets

      General advice: Be careful when using clones and promoting datasets; things can become messy quickly.
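      For context, this is what the clone/promote workflow looks like on the command line (dataset and snapshot names are invented), and why it gets messy: after the promote, the dependency between the original dataset and the clone is reversed.

      # Create a snapshot and a writable clone of it
      zfs snapshot tank/appdata@before-upgrade
      zfs clone tank/appdata@before-upgrade tank/appdata-test

      # Promote the clone: it takes ownership of the snapshot history, and
      # tank/appdata now depends on tank/appdata-test instead
      zfs promote tank/appdata-test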
  22. Hi guys, a new update is live with the following changelog:

      2022.11.05
      Fix - Error on pools with snapshots but without datasets
      Fix - Dialogs not sizing properly
      Add - Snapshot Batch Deletion

      Be aware that in the Snapshots Admin dialog, deletion and other operations are now reported through the status message; however, the dialog doesn't refresh the info, so you have to close it, hit the refresh button, and open it again. I will work on one more feature (cloning snapshots and promoting them) before the year ends; after that, I will focus on rewriting a portion of the backend so that every functionality becomes a microservice; that should open the door for a good API (automation is coming...). Best
  23. If those options are already configured in the BIOS, everything seems to be okay. First, make sure that you are on the latest version of Unraid (6.11.1), then add the following options to the syslinux config:

      modprobe.blacklist=acpi_cpufreq amd_pstate.shared_mem=1

      After that, and just to be sure, add this to your go file:

      modprobe amd_pstate

      Reboot, and that's it; but, as I have stated, it's probably not going to be worth all the trouble, since the power reduction doesn't compensate for the latency problems.
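      After the reboot, one way to confirm the new driver actually took over, using the standard cpufreq sysfs files (they should exist on any recent kernel):

      # Should print "amd-pstate" instead of "acpi-cpufreq"
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

      # Current operating frequency of core 0, in kHz
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq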
  24. I have the same processor and a similar config in the BIOS; the only possible way to reach C6 states is by using the AMD P-State driver. However, in my testing, Unraid lags a lot with this config, given that the driver doesn't schedule priority tasks really well; one example is Telegraf+Prometheus+InfluxDB2, where some metrics get missed. So I ended up reverting to acpi_cpufreq. Additionally, power use was reduced by between 15% and 20%.
  25. The info is the same as what the commands "zpool list" and "zfs list" print. With that being said, at the pool level, "size" reports the entire size including parity (that's for raidz1; for a mirror it's just the size of one disk). The used and free columns include parity as well, so no, from the pool stats you are not able to know how much space is available for data. For datasets, things are different: used should report the actual amount of data in the dataset, and free space reports the remaining usable space or quota for the dataset. So yes, you should be able to use the entire 4.51 TB.
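      In practice, the dataset-level numbers are the ones to read for free space; for example (the pool name is illustrative):

      # Pool-level view: SIZE/ALLOC/FREE are raw capacity, parity included
      zpool list tank

      # Dataset-level view: AVAIL is the space actually usable for data
      zfs list -o name,used,avail,refer tank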