steini84

Community Developer
Posts posted by steini84

  1. 4 hours ago, Marshalleq said:

    I use /mnt/x y z

     

    I've just reformatted the key and this time not copied the config.  It was all working fine until I added ZFS.  I note that only the beta of ZFS is now being offered not stable, which I think is a huge mistake given it's the stable 6.8.3 I have installed.  @steini84 perhaps you could implement something so that stable / next can be chosen for ZFS.

     

    Anyway, I'm back to believing it's a ZFS issue.  I don't know why it is.  I'm going to try the community kernel and see if that sheds any light.

     

    Community Kernel (Unraid Kernel Helper) same result (using my normal config though, but I did prove it did this with none of my config also). This is a nightmare.  The only thing I can say for sure is it only appears when ZFS is installed.  But earlier known good versions of ZFS also exhibit the behaviour.

    Good idea, I completely forgot to separate the stable builds from the RC versions. I upgraded the plugin so that by default it installs the cached version if available; if not, the plugin checks for a stable version by default.

    However, you can touch a file on disk to enable unstable builds (like the 2.0.0 RC series):

     

    #Enable unstable builds
    touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
    rm /boot/config/plugins/unRAID6-ZFS/packages/*
    #Then reboot
    
    #Disable unstable builds
    rm /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
    rm /boot/config/plugins/unRAID6-ZFS/packages/*
    #Then reboot
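    To see which mode the plugin is currently in, you can test for that flag file. This is a minimal sketch; check_unstable is just a hypothetical helper name, and PLUGIN_DIR parameterizes the standard plugin path used above:

```shell
# Report whether unstable builds are enabled by testing for the flag file.
# PLUGIN_DIR defaults to the plugin path used above; override it if needed.
check_unstable() {
    local dir="${PLUGIN_DIR:-/boot/config/plugins/unRAID6-ZFS}"
    if [ -f "$dir/USE_UNSTABLE_BUILDS" ]; then
        echo "unstable builds: enabled"
    else
        echo "unstable builds: disabled"
    fi
}
check_unstable
```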

    The builds are pretty much vanilla ZFS and you can check the build scripts on Github:

    https://github.com/Steini1984/unRAID6-ZFS/blob/master/build.sh (to build latest ZFS)

    https://github.com/Steini1984/unRAID6-ZFS/blob/master/build_github.sh (to build custom versions like RC)

     

    I feel your pain, and it's incredibly frustrating when the server has issues! I remember having incredible problems with my server a few years back: random disks failing, processes crashing, all-around pain. It took me probably a week of debugging, and the solution was a new PSU. I hope you can fix your problems, and I really hope that at least ZFS is not the culprit.

  2. On 10/14/2020 at 8:55 AM, Marshalleq said:

    I'm really worried it's ZFS causing all this - I just don't understand why creating a VM would trigger it.  I have to say I wish I'd not accepted the new ZFS version and upgraded the pools - it will be challenging to sort out.  I think it's enough to upgrade an OS, but keep the filesystem stable :/

     

    @steini84 from the attached syslog, would you agree it's ZFS?  If so, I might need some help to log a ticket upstream i.e. around how you've packaged it.

     

    Started a ticket here - I'm sure it's ZFS now.  I would appreciate if you could take a look and add any commentary - I'm concerned they'll complain about it being unraid and about it being a beta of unraid.  Thanks.

     

    Attachment: obi-wan-diagnostics-20201013-1953.zip

    First off, here you can see the build script:

    https://github.com/Steini1984/unRAID6-ZFS/blob/master/build.sh

  3. On 10/14/2020 at 8:39 AM, Marshalleq said:

    So I'm quite frustrated with this new beta; they're usually a lot more stable by now (the Unraid one, and maybe the ZFS RC, though I'm not sure). I'm still getting this issue and other randomness. As part of that testing, I'd like to downgrade to Unraid stable and keep the latest ZFS. The instructions for that are mentioned above, except it didn't seem to work for me. I think what's happening is that when you reboot, it does its auto-update or whatever and makes it the lower version again? My process was to downgrade the kernel, remove the older ZFS versions, copy the above files from Dropbox where the old ones were, and reboot again. Thoughts?

    So what happens is that the plugin first checks if you have a locally cached package to install in /boot/config/plugins/unRAID6-ZFS/packages/, and if not it checks on GitHub. If I understand correctly, you are running unRAID 6.8.3 stable and want to run ZFS 2.0.0-rc3?

     

    This is what I did to achieve what you want.

    With the plugin installed, run these commands:

    rm /boot/config/plugins/unRAID6-ZFS/packages/zfs*
    wget -O /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz https://www.dropbox.com/s/wmzxjyzqs9b9fxz/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz?dl=0
    wget -O /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5 https://www.dropbox.com/s/3onv1qur26yxb7n/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5?dl=0

     

    Before you reboot, you can run this command to test whether everything went as expected:

    cat /etc/unraid-version && md5sum /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz && cat /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5

    and you should get this exact output:

    version="6.8.3"
    8a6c48b7c3ff3e9a91ce400e9ff05ad6  /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz
    8a6c48b7c3ff3e9a91ce400e9ff05ad6  /root/mount/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz

    Then you can reboot and confirm it worked as expected:

    root@Tower:~# dmesg | grep ZFS && cat /etc/unraid-version
    [   33.429241] ZFS: Loaded module v2.0.0-rc3, ZFS pool version 5000, ZFS filesystem version 5
    version="6.8.3"
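    The manual checksum comparison above can also be wrapped in a small helper. This is a sketch: verify_pkg is a hypothetical name, and it compares only the hash fields, since the path recorded inside the .md5 file (/root/mount/...) differs from where the package actually sits, which would trip up md5sum -c:

```shell
# Compare a package's computed MD5 against the hash recorded in its .md5 file.
# Only the hash fields are compared, since the recorded path may differ.
verify_pkg() {
    local tgz="$1" md5file="$2"
    local got want
    got=$(md5sum "$tgz" | awk '{print $1}')
    want=$(awk '{print $1}' "$md5file")
    if [ "$got" = "$want" ]; then
        echo "checksum OK: $got"
    else
        echo "checksum MISMATCH: got $got, expected $want" >&2
        return 1
    fi
}
```

    For the rc3 package above, that would be: verify_pkg /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5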

     

    You are using almost all of your memory. You can either use a smaller ARC or add swap... maybe both.

     

    Adding swap:
    
    # First create an 8 GB zvol, where <pool> is the name of your pool:
    zfs create -V 8G -b $(getconf PAGESIZE) \
                  -o primarycache=metadata \
                  -o com.sun:auto-snapshot=false <pool>/swap
    
    # Then format the zvol as swap and enable it:
    mkswap -f /dev/zvol/<pool>/swap
    swapon /dev/zvol/<pool>/swap
    
    # To make it persistent, add this line to your go file (/boot/config/go):
    swapon /dev/zvol/<pool>/swap
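    After enabling the zvol swap, you can confirm it is actually active. A small sketch that just reads the kernel's swap counters from /proc/meminfo:

```shell
# Summarize swap capacity from /proc/meminfo to confirm the zvol swap is live.
swap_summary() {
    awk '/^SwapTotal:|^SwapFree:/ {print $1, $2, $3}' /proc/meminfo
}
swap_summary
```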

     

  5. 55 minutes ago, segator said:

    Hey using from a couple of weeks ZFS on unraid with gaming VM as primary desktop pc (nas + desktop pc all in one), it works fine but sometimes unraid decides to kill my VM because "out of memory" i assigned 16gb of ram to the VM and the host have 64gb of ram, I think zfs is not cleaning enough fast the arc memory when other containers reclaim memory and then kernel decides to kill my VM :(

    i can fix it using hugepages but i don't like it because then its memory that ZFS can not use it when the VM is shutted down (that at the end is the 90% of the time), I tried to limit ZFS arc with echo 12884901888 >> /sys/module/zfs/parameters/zfs_arc_max but same :(

     

    Can you paste the output of arcstat?
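    For reference, this is roughly how an ARC cap like the one segator tried is computed and applied. It is a sketch: the 12 GiB figure matches the 12884901888 from the quote, and the sysfs write only works on a live system with the zfs module loaded, so it is shown as a comment here:

```shell
# Compute a 12 GiB ARC cap in bytes; this matches 12884901888 from the quote.
ARC_MAX=$((12 * 1024 * 1024 * 1024))
echo "$ARC_MAX"

# On a live system, apply it (and persist it by adding the same line to the
# go file on /boot/config/go):
# echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```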

  6. Built zfs-0.8.5 for unRAID-6.8.3

     

    I also built zfs-0.8.5 for unRAID-6.9.0-beta30 for those who want to try the unRAID beta but stay on the latest stable ZFS version. To install 0.8.5 on beta30 you have to run these commands and reboot:

    rm /boot/config/plugins/unRAID6-ZFS/packages/zfs*
    wget -P /boot/config/plugins/unRAID6-ZFS/packages/ https://github.com/Steini1984/unRAID6-ZFS/raw/master/packages/zfs-0.8.5-unRAID-6.9.0-beta30.x86_64.tgz
    wget -P /boot/config/plugins/unRAID6-ZFS/packages/ https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/packages/zfs-0.8.5-unRAID-6.9.0-beta30.x86_64.tgz.md5

     

  7. So, following the guide off of Level 1 techs, I am sure I have messed something up or missed a step.  I was setting up automatic snapshots, and I was able to make the install, but I don't know what command to run to install the zfs-auto-snapshots.  I honestly assumed it installed, but now I am doubting that.  I am getting a message frequently of
    Oct  5 07:11:11 Beast crond[3324]: failed parsing crontab for user root: PATH="/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"

    I am also emailed this from the "Console and webGui login account" sender (Oct 4, 2020, 7:15 PM):

    /bin/sh: root: command not found

    I do see zfs-auto-snapshot* in /usr/local/sbin.
    Any help would be greatly appreciated.  I am not the best in Linux.


    I would recommend checking out:

    https://forums.unraid.net/topic/94549-sanoidsyncoid-zfs-snapshots-and-replication/

    Or

    https://forums.unraid.net/topic/84442-znapzend-plugin-for-unraid/


    Sent from my iPhone using Tapatalk
  8. 20 hours ago, FLiPoU said:

    Would it be possible to get the RC3 for 6.8.3 like you did with 6.8.2?

    I'm loving zstd so far! I like the fact that you can tune its level (1-19) until your CPU is working too much to your liking.

    Didn't notice any difference regarding my all SSD pool. Should I enable something to take advantage of TRIM?

    https://www.dropbox.com/s/wmzxjyzqs9b9fxz/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz?dl=0
    https://www.dropbox.com/s/3onv1qur26yxb7n/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5?dl=0

     

    You turn on TRIM with zpool set autotrim=on POOLNAME

    Then you can run zpool trim POOLNAME regularly, maybe after a scrub. I do a scrub, then a trim, every month via the User Scripts plugin.

     

    ref: https://github.com/openzfs/zfs/commit/1b939560be5c51deecf875af9dada9d094633bf7
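    As a sketch, a monthly User Scripts entry for that scrub-then-trim routine might look like this. POOLNAME is a placeholder, and zpool wait requires ZFS 2.0 or later; on 0.8.x you would poll zpool status until the scrub finishes instead:

```shell
#!/bin/bash
# Monthly maintenance: scrub first, then TRIM once the scrub completes.
POOLNAME=tank   # placeholder: replace with your pool name
zpool scrub "$POOLNAME"
zpool wait -t scrub "$POOLNAME"   # blocks until the scrub finishes (ZFS 2.0+)
zpool trim "$POOLNAME"
```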

    Well, to be fair, you are running a beta version of unRAID with a release candidate of ZFS, so things like this are more likely with a combination like that.

     

    But just a few hours ago there was a new RC for ZFS, and I started building it right away. It's online now:

    https://github.com/openzfs/zfs/releases/zfs-2.0.0-rc3

     

    The best bet is to run this command and reboot:

    rm /boot/config/plugins/unRAID6-ZFS/packages/zfs*