steini84

Community Developer
  • Posts

    434
  • Joined

  • Last visited

  • Days Won

    1

Posts posted by steini84

  1. OK, so OpenZFS 0.8.4 from here is not supported on kernel 5.7, so you're using the master branch.
     
    The master branch is newer, but has an older version number?
     
    I have to say, that was unexpected, but thanks for clarifying!

    Yes, exactly. The master branch still uses version number 0.8.0; only the releases get an updated version number.


    Sent from my iPhone using Tapatalk
  2. 4 minutes ago, Marshalleq said:

    I'm currently using this plugin on 8.0-1. I'm not really concerned with 'trusting it' given ZFS itself is not in beta.

     

    @steini84 am I reading right that your ZFS version is 8.0-1 when the latest stable is 0.8.4? And interestingly your plugin says it's on 8.2. This doesn't seem right.

     

    # zfs --version
    zfs-0.8.0-1
    zfs-kmod-0.8.0-1

     

    cat /sys/module/zfs/version
    0.8.0-1

    Ok let me try to make it clear :)

     

    I just updated the plugin to 1.0, since new builds are no longer dependent on an update to the plugin. On boot the plugin checks for available builds and installs the latest one available for your version of Unraid.

     

    The latest build for Unraid 6.8.3 is OpenZFS 0.8.4 and I usually only build from the releases (see them @ https://zfsonlinux.org/)

     

    Since Unraid 6.9 beta 22 runs on kernel 5.7, which is not supported in OpenZFS 0.8.4, I made a build from the latest master (commit 2e6af52). I hoped that changes adding 5.7 support were already in the master branch, since it is the most up-to-date code.

     

    The confusing part about the master branch of OpenZFS is that you (apparently) don't get a new version tag unless there is a release. So every build from the master branch since 0.8.0 has been marked as 0.8.0. See here: https://github.com/openzfs/zfs/blob/master/META 

    I could change it of course, but I don't see a point in that since it could make things even more confusing and this is just a test build for a beta version of Unraid.

     

    Hope that makes sense
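
    If you're curious where that number comes from at build time: the module version string is stitched together from the Version and Release fields of that META file. A tiny illustration, using a sample copy of the file (same format as the real one, trimmed):

```shell
#!/bin/sh
# The build stamps packages from the Version/Release fields in META.
# This writes a sample copy with the same layout as the real file.
cat > /tmp/META.sample <<'EOF'
Meta:          1
Name:          zfs
Version:       0.8.0
Release:       1
EOF

# Assemble the version string the way the loaded module reports it:
awk -F':[ \t]*' '$1=="Version"{v=$2} $1=="Release"{r=$2} END{print v "-" r}' /tmp/META.sample
```

    That prints 0.8.0-1, which is exactly what `zfs --version` shows for any master-branch build until a release bumps META.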

  3. 1 hour ago, tr0910 said:

    Is there a version of the ZFS plugin that works with 6.9.0 beta22? 

     

    Installing from Community App gets the one that works with 6.8.3

     

    
    Unsupported kernel detected!
    ZFS not installed! - Please follow this post https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/ and reinstall the plugin when this version of unRAID is supported

     

    Built ZFS from the latest master for 6.9.0-beta22. You can re-install the plugin to try it out.

    I would not trust it for anything important, but then again you should probably not be running the beta version of Unraid on your production server :)

  4. Is there a version of the ZFS plugin that works with 6.9.0 beta22? 
     
    Installing from Community App gets the one that works with 6.8.3

    Linux 5.7 is not yet supported

    https://github.com/openzfs/zfs/releases/tag/zfs-0.8.4

    I have not tried to build it because of that, but I’ll start a build now and see what happens.

    But a great quote from the release notes of the 6.9 beta:
    "We are also considering zfs support."


    Sent from my iPhone using Tapatalk
  5. 5 hours ago, ConnectivIT said:

    I'm on unRAID 6.8.3 but the plugin still shows as version 0.8.2, though that would be explained by the plugin notes:

    
    2020.01.09
    
    Rewrote the plugin so it does not need to be updated every time unRAID is upgraded. It checks if there is already a new build available and installs that

     

    Rebooted unRAID today, "zfs version" returns:

    zfs-0.8.3-1

     

    Was hoping to get persistent l2arc added, which apparently has been merged into openzfs:

    https://github.com/openzfs/zfs/pull/9582

     

    though it isn't mentioned in recent changelogs for openzfs?

     

    ps: Big thank you for getting ZFS into unRAID and the fantastic primer in the first post. Having per-VM and per-docker snapshots has already saved my bacon.

    I have updated to the 0.8.4 release. Persistent l2arc has been added to the master branch, but it has not made it into a release yet. It appears it will be included in the 2.0 release; it is listed among the features in progress or ported for OpenZFS 2.0.

    ref: https://en.wikipedia.org/wiki/OpenZFS

     

    You can follow the changelog over @ https://zfsonlinux.org/

     

     

    • Like 1
  6. Very interested in replacing a FreeNAS box w/ Unraid running ZFS. Is it possible to get Quickassist (gzip-qat) hardware acceleration working? I'm using an Atom processor w/ integrated QAT acceleration, and offloading the compression has a significant impact on performance:
     
    https://github.com/openzfs/zfs/pull/5846

    It should be included since 0.7

    https://openzfs.org/wiki/ZFS_Hardware_Acceleration_with_QAT


    Sent from my iPhone using Tapatalk
  7. To each their own, but if you read the first post you can understand why this plugin exists and what role ZFS plays in an Unraid setup in my mind. If I wanted to go full-on ZFS I would use FreeNAS/Ubuntu/FreeBSD/OmniOS+napp-it, but I think ZFS for critical data and XFS with parity data for media is just perfect, and I have been running a stable setup like that since 2015


    Sent from my iPhone using Tapatalk

  8. 12 hours ago, Randael said:

    first thx a lot steini84 for the nice howto 🙂

     

    I have two questions:

    1. I encrypted my datasets data/docker, data/vm and data/media with a keyfile stored on /mnt/disk1. My Unraid array uses encrypted btrfs, so the keyfile is only available once I mount the array with a password. After rebooting the server and manually unlocking the encrypted btrfs array, Docker and the VMs fail because the image files and containers are on the encrypted ZFS. Is it possible to "automount" the ZFS when the array starts?

    2. My zfs get keylocation output is the following:

    data                           keylocation  none                   default
    data@2020-06-09-090000         keylocation  -                      -
    data/docker                    keylocation  file:///mnt/disk1/.key  local
    data/docker@just_docker        keylocation  -                      -
    data/docker@2020-06-09-090000  keylocation  -                      -
    data/media                     keylocation  file:///mnt/disk1/.key  local
    data/media@2020-06-09-090000   keylocation  -                      -
    data/vm                        keylocation  file:///mnt/disk1/.key  local
    data/vm@just_vm                keylocation  -                      -
    data/vm@2020-06-09-090000      keylocation  -                      -

     

    Should I set keys on the snapshots as well, and on the whole pool? I don't want to "crypt in a crypt".

     

    thanks a lot

    It is really easy to run commands when the array starts with the User Scripts plugin:

    You can make it run on array start or even ONLY on the first array start after booting the server. Much easier than using the "go" file.

     

    But I cannot answer the encryption question, since I'm not familiar enough with it to give you a good answer 😕
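
    That said, for the automount question, here is a rough sketch of what such an array-start script could look like. The dataset names and keyfile path are taken from the keylocation output above; everything else is my assumption, and the script is dry-run by default (it only prints the commands), so only set DRY_RUN=0 on a real server after reviewing it:

```shell
#!/bin/sh
# Hypothetical array-start script: load keys and mount the encrypted
# datasets once the btrfs array (which holds the keyfile) is available.
DRY_RUN=${DRY_RUN:-1}   # dry-run by default; set DRY_RUN=0 on the real server

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

for ds in data/docker data/vm data/media; do
  run zfs load-key "$ds"   # keylocation is already set to file:///mnt/disk1/.key
  run zfs mount "$ds"
done
```

    Run via User Scripts on "At First Array Start Only" so the keyfile exists before the datasets are mounted.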

     

  9. Actually, the Home Gadget Geeks interview on the front page of unraid was an interview with Limetech, and in it Limetech says they're really considering ZFS (or something along those lines). He pretty much indicated it's in the works, which is very exciting. It would be great to get an official version.

    Yeah I saw that interview, hope that they find a creative way to integrate ZFS


    Sent from my iPhone using Tapatalk
  10. Wow, that is great. I try my best to update ASAP, but it's awesome that we are getting more ways to enjoy ZFS. Hope it will one day be native in Unraid, but until then it's great to see more options

     

     

    Sent from my iPhone using Tapatalk

    • Like 1
  11. On 5/29/2020 at 10:26 AM, MatzeHali said:

    Dear Steini84,

     

    thanks for enabling ZFS support on UnRAID, which makes this by far the best system for storage solutions out there: you get the chance of creating ZFS pools of any flavour plus the possibility of an UnRAID pool on the same machine.

    I'm just starting to do testing on my machine, stumbled over the dRAID vdev driver documentation, and was wondering if you could include that option in your build. I know that by ZFS standards this is far from production-ready software, but since I'm testing around, I'd be really interested in what performance gains I'd get with a 15+3dspares draid1 setup compared to a 3x 4+1 raidz1 vdev pool, for example.

     

    Thanks. ;)

     

    M

    Here you go - have fun and don't break anything :)

    https://www.dropbox.com/s/dvmgw6iab43qpq9/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz?dl=0

    https://www.dropbox.com/s/rrjpqo0zyddgqmn/zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz.md5?dl=0

     

    To install this test build you first have to have the plugin installed, then fetch the .tgz file and install it with this command:

    installpkg zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz

    If you want it to persist after a reboot you have to fetch both files, rename them to zfs-0.8.3-unRAID-6.8.3.x86_64.tgz and zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5, and overwrite the files in /boot/config/plugins/unRAID6-ZFS/packages/
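
    To make the rename step concrete, here is a sketch that simulates it in a scratch directory (the paths and the md5sum -c check are stand-ins for what the plugin does with /boot/config/plugins/unRAID6-ZFS/packages/; I'm assuming the check verifies the tarball against the .md5 file):

```shell
#!/bin/sh
# Simulates renaming the dRAID test build to the names the plugin expects.
# A mktemp dir stands in for the download location / flash packages dir.
set -e
work=$(mktemp -d)
src="zfs-0.8.4-draid-feature-unRAID-6.8.3.x86_64.tgz"
dst="zfs-0.8.3-unRAID-6.8.3.x86_64.tgz"

cd "$work"
echo "pretend package contents" > "$src"   # stand-in for the real tarball
md5sum "$src" > "$src.md5"

# Rename BOTH files; also rewrite the filename recorded inside the .md5,
# in case the check is done with `md5sum -c`:
mv "$src" "$dst"
sed "s/$src/$dst/" "$src.md5" > "$dst.md5"
rm "$src.md5"

md5sum -c "$dst.md5"
```

    If only one of the two files is renamed, the md5 check fails and the plugin falls back to re-downloading the released binary.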

    • Thanks 1
  12. Hello! I've installed the plugin for someone else; on his unraid 6.3 we don't see any snapshots created by znapzend. Reinstalling did not help.


    *** backup plan: HDD ***
           enabled = on
           mbuffer = off
      mbuffer_size = 1G
     post_znap_cmd = off
      pre_znap_cmd = off
         recursive = on
               src = HDD
          src_plan = 24hours=>2hours,7days=>1day,30days=>7days,90days=>30days
          tsformat = %Y-%m-%d-%H%M%S
        zend_delay = 0

    *** backup plan: NVME ***
             dst_0 = HDD/Backup/NVME
        dst_0_plan = 1day=>6hours
           enabled = on
           mbuffer = off
      mbuffer_size = 1G
     post_znap_cmd = off
      pre_znap_cmd = off
         recursive = on
               src = NVME
          src_plan = 24hours=>2hours,7days=>1day,30days=>7days,90days=>30days
          tsformat = %Y-%m-%d-%H%M%S
        zend_delay = 0

     

    After creating it, I executed:

    pkill -HUP znapzend

    Please advise.

    Znapzend runs on an interval, in your case every 2 hours for the 24-hour retention snapshots. You can make it run right away with this command:

    znapzend --runonce=HDD  

     

    See more @ https://github.com/oetiker/znapzend/blob/master/README.md

     

    "If you don't want to wait for the scheduler to actually schedule work, you can also force immediate action by calling

    znapzend --noaction --debug --runonce=src_dataset"

     

     

    Sent from my iPhone using Tapatalk

  13. On 4/15/2020 at 10:43 PM, TheSkaz said:

    thank you so much! sorry for the headache

    https://www.dropbox.com/s/zwq418jq6t3ingt/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz?dl=0

    https://www.dropbox.com/s/qdkq4c3wqc5698o/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5?dl=0

     

    Just overwrite these files and double-check that you are actually overwriting: the files should have the same names. You have to copy both files, otherwise the md5 check will fail and the plugin will re-download the released binary.

     

    Just check after a reboot:

    root@Tower:~# dmesg | grep ZFS
    [ 4823.737658] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5

     

  14.  

    What I did was upload the files using WinSCP:

    [screenshot: Capture.PNG]

     

    and then rebooted; once it came back up, it showed this:

     

    [screenshot: Capture1.PNG]

     

    it seems to be reverting.

     

    I assume I don't need to rename the files, right?

    Whoops, I assumed you were on Unraid 6.9 beta 1 - I will make a build for you for Unraid 6.8.3 tomorrow

     

     

    Sent from my iPhone using Tapatalk

  15. 6 hours ago, TheSkaz said:

    Thank you so much!!!!!!!

     

    [   88.290629] ZFS: Loaded module v0.8.3-1, ZFS pool version 5000, ZFS filesystem version 5

     

    I have it installed, and currently stress testing. Let's hope this works!

    I would double-check that you overwrote the right file. You should have gotten version v0.8.0-1 (it's lower, I know, but that is the current version given on the master branch on GitHub). You can double-check by running these two commands; you should get the exact same output:

    root@Tower:~# dmesg | grep -i ZFS
    [   30.852570] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5
    root@Tower:~# md5sum /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz
    8cdee7a7d6060138478a5d4121ac5f96  /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz

     

  16. 20 hours ago, TheSkaz said:

    I previously posted about a kernel panic under heavy load, and it seems this was addressed 6 days ago:

     

    https://github.com/openzfs/zfs/pull/10148

     

    Is there a way that we can get this implemented, or know of a workaround?

    I built a version for Linux 5.5.8 for you from the latest master (https://github.com/openzfs/zfs/tree/7e3df9db128722143734a9459771365ea19c1c40), which includes the fix you referenced.

     

    You can find that build here

    https://www.dropbox.com/s/i6cuvqnka3y64vs/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz?dl=0

    https://www.dropbox.com/s/6e64mb8j9ynokj8/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz.md5?dl=0

     

    Copy both of these files, replace the existing ones in /boot/config/plugins/unRAID6-ZFS/packages/, and reboot.

     

    To verify that you are on the correct build, run dmesg | grep ZFS and you should see "ZFS: Loaded module v0.8.0-1," (the version name from master: https://github.com/openzfs/zfs/blob/7e3df9db128722143734a9459771365ea19c1c40/META)

     

    FYI only kernel versions up to 5.4 are officially supported according to the META file above.

     

    Have fun :)

    • Like 1
  17. How stable is this plugin? I want to create a raidz2 array for critical data. Does zfs send/receive work with it?

    Thanks in advance

     

    It's just OpenZFS for Linux. Nothing taken out, nothing added.

     

    For what it's worth, I have run the same ZFS pool on Unraid since 2015 without any problems.

     

    ZFS send and recv work fine
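
    For anyone wondering what that looks like in practice, a minimal send/receive sketch; the pool names tank and backup and the snapshot names are made up, and the commands are printed rather than executed so it's safe to run anywhere:

```shell
#!/bin/sh
# Sketch: a full send/receive, then an incremental follow-up.
# Pool and dataset names are placeholders, not from a real setup.
CMDS='zfs snapshot tank/data@monday
zfs send tank/data@monday | zfs receive backup/data
zfs snapshot tank/data@tuesday
zfs send -i @monday tank/data@tuesday | zfs receive backup/data'

# Print the command sequence for reference:
printf '%s\n' "$CMDS"
```

    The -i form sends only the blocks changed since the previous snapshot, which is what makes tools like znapzend cheap to run on a schedule.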

     

    Sent from my iPhone using Tapatalk

  18. How stable is this plugin? I want to create a raidz2 array for critical data.
    Does zfs send/receive work with it?
    Thanks in advance

    It's just OpenZFS for Linux. Nothing taken out, nothing added.

    For what it's worth, I have run the same ZFS pool on Unraid since 2015 without any problems.


    Sent from my iPhone using Tapatalk
  19. On 4/6/2020 at 12:35 PM, Namru said:

    First, steini84, thanks for your amazing work!

    I have just a short question.

    Do you have any information about a new zfsonlinux version > 0.8.3 that supports a kernel like 5.5? 0.8.3 only supports 2.6.32 - 5.4.

    I was not able to find a solution for this issue but I found error reports regarding 5.5 and 5.6 kernels.

    Additionally, I think there were some discussions about a functionality change that breaks some not fully GPL-compliant modules.

    Thanks

    I have heard some discussion about it, but to be honest I do not run the 6.9 beta with kernel 5.5.x, so I have not run into any issues myself. I am on stable 6.8.3 with kernel 4.19.107, which is running fine with ZoL 0.8.3.

     

    You can watch the progress of openzfs here https://github.com/openzfs/zfs