steini84

Community Developer
Posts posted by steini84

  1. 13 hours ago, Marshalleq said:

    @steini84 How do you find syncoid/sanoid for removing older snapshots that are over the allowance? I.e. if we did hourly for a week, then daily for a month, then monthly for a year, we should expect only 4 weeks of daily backups, right? This is something I never got working in znapzend; it seemed to just keep everything indefinitely. BTW, I never did get znapzend working after ZFS went built-in - it only works for a few hours or days and then stops. So as per your suggestion, I am looking at Syncoid/Sanoid. (Still figuring out that difference.)

    Sanoid/Syncoid works perfectly for me, and I might need to deprecate the Znapzend plugin since I have not used it at all since my migration. It works exactly as you described; here is an example of my production profile:

    [template_production]
            frequently = 4
            hourly = 24
            daily = 30
            monthly = 0
            yearly = 0
            autosnap = yes
            autoprune = yes

    and here you can see how the snapshots currently look:

    root@Unraid:~# zfs list -t snapshot ssd/Docker/Freshrss
    NAME                                                          USED  AVAIL     REFER  MOUNTPOINT
    ssd/Docker/Freshrss@autosnap_2023-12-16_23:59:15_daily       1.41M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-17_23:59:03_daily       1.16M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-18_23:59:05_daily       1.12M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-19_23:59:18_daily       1.16M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-20_23:59:11_daily       1.19M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-21_23:59:17_daily       1.19M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-22_23:59:09_daily       1.13M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-23_23:59:03_daily       1.13M      -     17.2M  -
    ssd/Docker/Freshrss@autosnap_2023-12-24_23:59:16_daily        864K      -     17.1M  -
    ssd/Docker/Freshrss@autosnap_2023-12-25_23:59:08_daily       1.05M      -     18.3M  -
    ssd/Docker/Freshrss@autosnap_2023-12-26_23:59:12_daily       1.07M      -     18.3M  -
    ssd/Docker/Freshrss@autosnap_2023-12-27_23:59:13_daily       1.15M      -     18.4M  -
    ssd/Docker/Freshrss@autosnap_2023-12-28_23:59:06_daily       1.18M      -     18.4M  -
    ssd/Docker/Freshrss@autosnap_2023-12-29_23:59:20_daily       1.09M      -     18.4M  -
    ssd/Docker/Freshrss@autosnap_2023-12-30_23:59:18_daily        904K      -     18.4M  -
    ssd/Docker/Freshrss@autosnap_2023-12-31_23:59:03_daily       1.06M      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-01_23:59:03_daily       1.08M      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-02_23:59:07_daily       1.17M      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-03_23:59:10_daily       1.15M      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-04_23:59:09_daily       1.15M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-05_23:59:06_daily       1.33M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-06_23:59:23_daily       1.20M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-07_23:59:20_daily       1.14M      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-08_23:59:15_daily       1.09M      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-09_23:59:12_daily       1.09M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-10_23:59:21_daily       1.10M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-11_23:59:21_daily       1.26M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-12_23:59:02_daily       1.14M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-13_23:59:25_daily       1.04M      -     19.7M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_13:00:09_hourly       944K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_14:00:17_hourly       824K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_15:00:11_hourly       840K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_16:00:23_hourly       856K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_17:00:32_hourly       856K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_18:00:04_hourly       752K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_19:00:16_hourly       768K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_20:00:10_hourly       792K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_21:00:36_hourly       800K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_22:00:08_hourly       760K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_23:00:45_hourly       648K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-14_23:59:21_daily        144K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_00:00:11_hourly       144K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_01:00:31_hourly       640K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_02:00:28_hourly       624K      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_03:00:01_hourly       616K      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_04:00:33_hourly       616K      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_05:00:10_hourly       568K      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_06:00:28_hourly       680K      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_07:00:39_hourly       768K      -     19.5M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_08:00:25_hourly       744K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_09:00:32_hourly       736K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_10:00:08_hourly       888K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_11:00:36_hourly       880K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_12:00:19_hourly         0B      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_12:00:19_frequently     0B      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_12:15:17_frequently   184K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_12:30:21_frequently   200K      -     19.6M  -
    ssd/Docker/Freshrss@autosnap_2024-01-15_12:45:16_frequently     0B      -     19.6M  -

     

    The system is very configurable and the documentation is really good: https://github.com/jimsalterjrs/sanoid
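
    For anyone setting this up from scratch, here is a rough sketch of what a complete /etc/sanoid/sanoid.conf using that template could look like, plus the periodic call Sanoid needs in order to actually take and prune snapshots. The dataset name is the one from my listing above; the sanoid path in the cron line depends on how it was installed, so adjust both to your own setup:

    [ssd/Docker/Freshrss]
            use_template = production

    [template_production]
            frequently = 4
            hourly = 24
            daily = 30
            monthly = 0
            yearly = 0
            autosnap = yes
            autoprune = yes

    and a cron/User Scripts entry along the lines of:

    * * * * * /usr/local/sbin/sanoid --cron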

  2. I'm also getting this warning - does anyone have a workaround? I tried installing mbuffer from the source that this plugin downloads to the root of Unraid, but with no gcc I can't compile it.
     
    @steini84 Do you have a solution?

    There seems to be a problem with the mbuffer package. You can install the older version, but all the Slackware packages I found for the latest version are the same.

    To install the older package you can use

    wget https://github.com/Steini1984/unRAID6-Sainoid/raw/167f5ad3dc1941ef7670efcd21fbb4e6e6ad8587/packages/mbuffer.20200505.x86_64.tgz

    installpkg mbuffer.20200505.x86_64.tgz
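
    As a quick sanity check after installing, and a rough example of the kind of syncoid transfer that picks up mbuffer automatically when it is present on both ends (the target host and dataset below are just placeholders):

    mbuffer -V
    syncoid ssd/Docker/Freshrss root@backuphost:backup/Freshrss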


    Sent from my iPhone using Tapatalk
    • Thanks 2
  3. 11 minutes ago, JorgeB said:

     

    Cannot reproduce, created a new share with only disk3 as included and it created a dataset:

     

    Jun 20 17:47:18 Tower15 shfs: /usr/sbin/zfs create 'disk3/test'
    Jun 20 17:47:18 Tower15 emhttpd: Starting services...
    Jun 20 17:47:18 Tower15 emhttpd: shcmd (1053): chmod 0777 '/mnt/user/test'
    Jun 20 17:47:18 Tower15 emhttpd: shcmd (1054): chown 'nobody':'users' '/mnt/user/test'

     

     

    You are correct - don't know what I was doing earlier. Thanks :)

    • Like 1
  4. Congratulations on integrating ZFS!

     

    *** MY BAD *** Creating a single-disk share actually creates a dataset ***

     

    I would like to create ZFS-only shares on the array and have a separate dataset for each share to enable replication and snapshots. While going through the release notes, I came across the following information: "Top-level user shares in a ZFS pool are created as datasets instead of ordinary directories." However, I haven't found an official way to achieve this for a single ZFS drive within the array through the settings.

     

    Manually, it is easy to accomplish, but I wanted to think aloud and see if anyone has any insights into potential issues with my strategy in unRAID or if there are alternative approaches that align better with the "Out of the box - unRAID way". In my understanding, this approach should work, but I am unsure if I might unintentionally disrupt any underlying mechanisms in unRAID by manually creating the datasets.

     

    Here's what I have done so far:

    1. Converted disk 15 to ZFS.
    2. Manually created disk15/Nextcloud.
    3. Configured the included disk to only include disk 15.
    4. Migrated my existing data to disk 15. 
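
    The manual part (step 2) boils down to something like the following; the dataset name is from my example above, and the zfs list is just to confirm it really is a dataset rather than a plain directory:

    zfs create disk15/Nextcloud
    zfs list -r disk15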

    (screenshot attached)

     

    So far so good, but please let me know if you have any suggestions on a better strategy and/or if there are any potential concerns with my current setup.

     

    PS: I guess Exclusive access is not relevant here since the folder is on the array, not a ZFS pool.

     

    • Upvote 1
  5. On 1/30/2023 at 4:11 AM, BVD said:

    Pretty sure it's @ich777 that maintains the build automation for it, unless I misread at some point?

    You are absolutely correct. He took my manual build process and automated it so well that I have not had to think about it at all anymore! He really took this plugin to another level, and now we just wait for the next Unraid release so we can deprecate it :)

    • Like 1
    • Thanks 2
  6. 18 hours ago, BVD said:

    After looking at @dmacias's repo, seems like it'd be easy enough to build. Guess I know what I'll be doing this weekend lol

    You can get away with a one-liner in the go file / User Scripts: 

    wget -P /usr/local/sbin/ "https://raw.githubusercontent.com/jimsalterjrs/ioztat/main/ioztat" && chmod +x /usr/local/sbin/ioztat

     

    But packing this up as a plugin should be a fun project
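
    Once it is on the PATH it behaves much like iostat, but per dataset; the simplest invocation is something like the line below (the pool name ssd is only an example), and the README in the repo covers the interval and sorting options:

    ioztat ssd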

  7. @steini84 or @ich777 would it be possible to include ioztat with the plugin, or do you feel it's better served by something like NerdPack? I've been using it since its inception; it's super helpful for quickly tracking down problem-child filesets:
    https://github.com/jimsalterjrs/ioztat
     
    It does require python, but that's the only thing outside of the base OS that's required for us (though I don't know if that requirement precluded it from inclusion, hence the NerdPack comment...)
     
    Something else I've been symlinking for a while now is the bash-completion.d file from mainline zfs - it's 'mostly' functional in openzfs, though I've not spent a lot of time poking around at it.
    https://github.com/openzfs/zfs/tree/master/contrib/bash_completion.d

    I want to keep the zfs package as vanilla as possible. It would be a great fit for a plugin :)


    Sent from my iPhone using Tapatalk
  8. 4 minutes ago, sabertooth said:

    With 6.10.0-rc4, the pool goes offline when an attempt is made to write something to the pool.

    errors: List of errors unavailable: pool I/O is currently suspended

     

    Attempting to run zpool clear fails

    cannot clear errors for data: I/O error

     

    state: SUSPENDED
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
      scan: scrub repaired 0B in 01:51:14 with 0 errors on Wed Mar 23 15:24:10 2022

     

    The pool was fine till RC3.

     

    This is just an unfortunate coincidence; I would guess it's a cabling issue or a dying drive. Have you done a SMART test?
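
    For reference, checking the drive would be along these lines (replace /dev/sdX with the actual device behind the pool; the pool name is the one from your output):

    smartctl -t long /dev/sdX     # start an extended self-test
    smartctl -a /dev/sdX          # review health, attributes and the error log
    zpool status -v data          # recheck the pool once the cabling/drive is sorted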

  9. Hi Everyone,
     
    I just wanted to share a problem I have run into.
     
    For some unknown reason, I was no longer able to load parts of the Unraid GUI and some of my Docker containers had stopped. I tried to generate a diagnostics file in the settings, and it displayed that it was writing to one of my NVMe drives, as if it was executing an operation on it.
     
    The problem is that the system was no longer responsive, and after an hour I decided to reset it. After that, I lost my zpools because the NVMe drives were no longer recognized as ZFS disks.
     
    Here is the configuration of the ZFS pool before the NVMe drives' partitions were lost.
     
    zpool status fastraid
      pool: fastraid
     state: ONLINE
    config:
            NAME                                      STATE     READ WRITE CKSUM
            fastraid                                  ONLINE       0     0     0
              raidz1-0                                ONLINE       0     0     0
                wwn-SSD1                ONLINE       0     0     0
                wwn-SSD2                ONLINE       0     0     0
                wwn-SSD3                ONLINE       0     0     0
                wwn-SSD4                ONLINE       0     0     0
            special
              mirror-2                                ONLINE       0     0     0
                nvme-CT2000P5SSD8_XX-part1  ONLINE       0     0     0
                nvme-CT2000P5SSD8_YY-part1  ONLINE       0     0     0
            logs
              mirror-3                                ONLINE       0     0     0
                nvme-CT2000P5SSD8_XX-part2  ONLINE       0     0     0
                nvme-CT2000P5SSD8_YY-part2  ONLINE       0     0     0
     
    This is not the first time I've started to lose control of my Unraid server, but until now I hadn't lost anything. All I can say is that it started appearing with version 2.0 of the plugin, but maybe it's linked to something else?
     
    rohrer-enard.fr-diagnostics-20220220-1151.zip

    Are you keeping your docker.img on the ZFS pool? It can cause the lockup you are describing.
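
    If the partitions are actually still there, it is also worth checking what ZFS itself can see before assuming the pools are gone. Run without a pool name, the command below only scans and lists importable pools; it does not import anything:

    zpool import -d /dev/disk/by-id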


    Sent from my iPhone using Tapatalk
  10. Sure thing, but this needs to be tested and a custom script needs to be made.
    The last thing I want is for you to get two or more messages when one operation finishes, for example...
     
    I only built the ZED notification for a finished scrub into the plugin because it is actually trivial to do a scrub after an unclean shutdown, and it is much nicer to get a notification when it's finished.
     
    If users now want more notifications/features, maybe a WebUI for the ZFS plugin is necessary to turn notifications/features on and off, but that means serious work, and I'm also not too sure it will actually make sense because ZFS will be built into unRAID itself sometime in the near future.
     
    Also keep in mind that you can add your own notification script(s) to ZED. I also don't know what makes more sense - what's your opinion about that @steini84?

    What I think would make the most sense is to keep the ZFS plugin as vanilla as possible and move extra functionality to a companion plugin. Then the companion plugin would continue to work when ZFS becomes native.
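
    To make the point about custom ZED scripts concrete: a zedlet is just a shell script dropped into /etc/zfs/zed.d/ whose filename starts with the event class it reacts to. A rough, untested sketch that forwards pool state changes to the Unraid notification system could look like this (and since /etc lives in RAM on Unraid, the script would have to be copied back into place from the go file on every boot and made executable):

    #!/bin/sh
    # /etc/zfs/zed.d/statechange-unraid-notify.sh
    # ZED exports ZEVENT_* variables describing the event that fired.
    [ -n "${ZEVENT_POOL}" ] || exit 0
    /usr/local/emhttp/webGui/scripts/notify \
        -i warning \
        -s "ZFS event: ${ZEVENT_SUBCLASS}" \
        -d "Pool ${ZEVENT_POOL}: ${ZEVENT_SUBCLASS} on ${ZEVENT_VDEV_PATH:-unknown vdev}"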


    Sent from my iPhone using Tapatalk
  11. I use check_mk for monitoring and that works perfectly for everything from low space to high load and even failed pools. It takes some time to set up but is really good!

    A simple bash script can be rigged to check zpool status -x

    I can set one up when I'm in front of a computer
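
    Roughly what that would look like (an untested sketch using Unraid's bundled notify script, meant to be run periodically via cron or the User Scripts plugin):

    #!/bin/bash
    # Send an Unraid alert if any ZFS pool is not healthy.
    STATUS=$(/usr/sbin/zpool status -x)
    if [ "$STATUS" != "all pools are healthy" ]; then
        /usr/local/emhttp/webGui/scripts/notify \
            -i alert \
            -s "ZFS pool problem" \
            -d "$STATUS"
    fi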


    Sent from my iPhone using Tapatalk

  12. Hi
     
    Is it possible that you guys introduced some nasty bugs with that update?
    My system is not responding anymore, and every time I force it to reboot, loop2 starts to hang at 100% CPU usage. docker.img is on my ZFS pool.
     
    Started after I updated ZFS for unRAID to 2.0.0.

    You are probably storing docker.img on the ZFS pool and running the latest RC of Unraid:
    (screenshot attached)
    You can also try to use folders instead of docker.img
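
    If you go the folder route, the idea is to give Docker its own dataset and point the Docker settings at that directory instead of an image file; roughly (dataset name and mountpoint are just examples):

    zfs create ssd/docker
    # then: Settings -> Docker -> disable the service, set the storage location
    # to the directory /mnt/ssd/docker, and re-enable it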


    Sent from my iPhone using Tapatalk
    • Like 1
    • Thanks 1
  13. 7 minutes ago, ich777 said:

    Today, in collaboration with @steini84, I released an update of the ZFS plugin (v2.0.0) that modernizes the plugin, switches from unRAID version detection to kernel version detection, and gives the plugin a general overhaul.

     

    When you update the plugin from v1.2.2 to v2.0.0 the plugin will delete the "old" package for ZFS and pull down the new ZFS package (about 45MB).

    Please wait until the download is finished and the "DONE" button is displayed - please don't click the red "X" button!

    After it finishes you can use your server and ZFS as usual; you don't need to take any further steps like rebooting.

     

    The new version of the plugin also includes the Plugin Update Helper, which downloads plugin packages before you reboot when you are upgrading your unRAID version and notifies you when it's safe to reboot:

    (screenshot of the Plugin Update Helper notification)

     

     

    The new version of the plugin will also check on each boot whether a newer ZFS version is available, then download and install it (the update check is activated by default).

    If you want to disable this feature, simply run this command from an unRAID terminal:

    sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

     

    If you have already disabled this feature and want to enable it again, run this command from an unRAID terminal:

    sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

    Please note that this feature needs an active internet connection on boot.

    If you run, for example, AdGuard/PiHole/pfSense/... on unRAID, it is very likely that you have no active internet connection on boot; the update check will then fail and the plugin will fall back to installing the currently available local ZFS package.

     

     

    It is now also possible to install unstable ZFS packages when they are available (this is turned off by default).

    If you want to enable this feature, simply run this command from an unRAID terminal:

    sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

     

    If you have already enabled this feature and want to disable it again, run this command from an unRAID terminal:

    sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

    Please note that this feature, like the update check, also needs an active internet connection on boot (if no unstable package is found, the plugin automatically sets this option back to false so that unstable packages are not pulled - unstable packages are generally not recommended).

     

     

    Please also keep in mind that ZFS has to be compiled for every new unRAID version.

    I would recommend waiting at least two hours after a new unRAID version is released before upgrading (Tools -> Update OS -> Update) because of the compile/upload process involved.

     

    Currently the process is fully automated for all plugins that need packages for each individual kernel version.

     

    The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version; this is most likely to happen when the compilation isn't finished yet or some error occurred during compilation.

    If you get an error from the Plugin Update Helper, I would recommend creating a post here and not rebooting yet.

    You have truly taken this plugin to the next level, and with the automatic builds it's as good as it gets until we get native ZFS on Unraid!

    • Like 6
    • Thanks 1
  14. You can do a pretty basic test using dd

    dd if=/dev/random of=/mnt/SSD/test.bin bs=1MB count=1024



    If you want to test the read speed, swap the input and output.

    *remember to change the path to one within your ZFS pool
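
    For the read direction that would be something like the line below; keep in mind that ZFS may serve the file straight from the ARC cache, so the number can look unrealistically high unless the test file is larger than RAM:

    dd if=/mnt/SSD/test.bin of=/dev/null bs=1MB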


    Sent from my iPhone using Tapatalk

    • Like 1
  15. 17 hours ago, dianasta said:

    Hello,

     

    A quick question, not sure if this was answered previously: how do we update the ZFS version?

    I'm on unraid Version: 6.9.2

     

    Currently running.

    ------------------------------

    zfs version
       zfs-2.0.0-1
       zfs-kmod-2.0.0-1

     

    New version:

    ----------------------

       zfs-2.0.6-1
       zfs-kmod-2.0.6-1

     

     

    Thanks.

    Just run this command and then reboot:

    rm /boot/config/plugins/unRAID6-ZFS/packages/*
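
    After the reboot you can confirm that the new package was picked up with:

    zfs version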