ZFS plugin for unRAID


steini84

Recommended Posts

@Joly0 Can you point at what the specific fix is from here?  I run lancache too, but I haven't noticed (nor tested for) any issues like you describe, because I don't understand how it could create an issue with a single Docker container - it sounds like you're saying ZFS / RC2 prevented writes, but we'd be seeing that across the board.

 

Either way, there are some good performance fixes in ZFS 2.0.1.  Regarding updates, I'm not sure if it needs to be manually compiled first by @steini84, but either way it's supposed to apply when you reboot.

Edited by Marshalleq

And it's not exactly writes; it has something to do with specific syscalls (I don't know which) that changed from one Linux kernel to the next. Lancache seems to depend on them, but other containers or other writes do not.


Built zfs-2.0.1 for 6.8.3 & 6.9.0-rc2

If you want to update ZFS versions you have to remove the old cached package:

rm /boot/config/plugins/unRAID6-ZFS/packages/*


And then restart the server


Sent from my iPhone using Tapatalk


@steini84
zfs --version shows zfs-2.0.0-1 after deleting the old package and restarting the server. Is this the correct version? I didn't check the old version, and GitHub says it should be 2.0.1?

The filename of the package is zfs-2.0.0-unRAID-6.9.0-rc1.x86_64.tgz, so I think it didn't update, but why?
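For what it's worth, the cached package's version can be read straight from its filename. A minimal sketch using bash parameter expansion, assuming the naming pattern shown above:

```shell
# Parse the ZFS version out of a cached package filename
# (filename pattern assumed from the plugin's packages directory)
pkg="zfs-2.0.0-unRAID-6.9.0-rc1.x86_64.tgz"
ver="${pkg#zfs-}"        # drop the leading "zfs-"
ver="${ver%%-unRAID*}"   # drop everything from "-unRAID" on
echo "$ver"              # prints 2.0.0
```

If that version doesn't match what zfs --version reports, the cached package and the loaded module are out of sync.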

Edited by Joly0
missed something

I tried on both 6.9.0-rc2 & 6.8.3 and get the right version after a reboot. [screenshot: zfs --version output]
I have only run it on my test server and have not seen anything strange. But to be honest I only run ZFS on that install, and nothing else is going on there.


Sent from my iPhone using Tapatalk


There were two plugin updates at the same time: Unassigned Devices and Nerd Pack.  I assume it'll be some issue with Unassigned Devices rather than ZFS, but I am still investigating.  After the upgrade, the Docker tab loads indefinitely, and the Main tab is either slow or only partially loads.

 

It also won't reboot via GUI or command-line reboot requests, which is the most concerning thing.

 

This is happening on both systems I just updated with ZFS and the two plugins.

Edited by Marshalleq

@steini84 There's something wrong with 2.0.1. I removed the Unassigned Devices plugin altogether and hard rebooted (because I have no choice; saved by CoW).  Still the same issue, so then I did:

cd /boot/config/plugins/unRAID6-ZFS/packages/

rm *

wget https://github.com/Steini1984/unRAID6-ZFS/blob/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz.md5

wget https://github.com/Steini1984/unRAID6-ZFS/blob/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz

 

Hard rebooted again because I had no choice, and the server now reboots correctly.  However, I must have done something wrong, as ZFS is not working.  Looking in the packages directory, only the md5 file remains and the zfs file is missing.  I tried the whole procedure again and the file is missing again.  The zfs command doesn't work.  It's like the tgz file is just being deleted.
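One likely culprit: github.com "blob" URLs like the ones above serve an HTML page, not the file itself, so the downloaded "tgz" wouldn't match its md5 and would get discarded if the plugin verifies checksums. The raw-content form of the URL is what wget needs. A sketch of the rewrite using bash string substitution:

```shell
# A github.com "blob" URL returns an HTML page, not the package itself.
# Rewriting it to its raw.githubusercontent.com form fetches the real file.
blob="https://github.com/Steini1984/unRAID6-ZFS/blob/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz"
raw="${blob/github.com/raw.githubusercontent.com}"  # swap the host
raw="${raw/\/blob\//\/}"                            # drop the "/blob" path segment
echo "$raw"
```

Fetching that rewritten URL (or appending ?raw=true to the blob URL) should leave a real .tgz in the packages directory.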

On 1/9/2021 at 11:10 PM, Marshalleq said:

So for my main box I got around it by using the community kernel direct download here to get it going, in case anyone else gets stuck, until this plugin is fixed or whatever is happening.  Note the direct download is 6.8.3, though, with ZFS 2.0.0.

So, at the moment it is better not to update? I'm still on RC1 with ZFS 2.0.0


I have moved the 2.0.1 builds to the unstable folder for now. 

 

#Enable unstable builds
touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
rm /boot/config/plugins/unRAID6-ZFS/packages/*
#Then reboot

#Disable unstable builds
rm /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
rm /boot/config/plugins/unRAID6-ZFS/packages/*
#Then reboot
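The toggle above is just a flag file: the plugin changes behaviour based on whether the file exists. A tiny illustration of the pattern on a throwaway path (not the real plugin path):

```shell
# Flag-file toggle, demonstrated on a scratch path
flag="$(mktemp -u)"           # a path that does not exist yet
touch "$flag"                 # "enable": create the flag
[ -f "$flag" ] && echo "unstable builds enabled"
rm -f "$flag"                 # "disable": remove the flag
[ -f "$flag" ] || echo "stable builds only"
```

So there is nothing to configure beyond creating or deleting USE_UNSTABLE_BUILDS and clearing the package cache before the reboot.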

 

Please let us know if you are running 2.0.1 without issues, and better yet if the conflict reported by Marshalleq has been identified / resolved :)

 


I seem to be stuck pretty hard here making the Samba share. I've tried every which way, and I always end up with no write access. I have added the h8750 credentials to Credential Manager. What could I be missing here?

 

[test22]
path = /mnt/dump/dataset
browseable = yes
guest ok = no
writeable = yes
write list = h8750
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes

 

 

I see the issue here. I don't know how to fix it, though: the user permissions are not taking effect at all.

 

root@Tower:/mnt/dump# ls -la
total 3
drwxr-xr-x 6 root root   6 Jan 12 18:43 ./
drwxr-xr-x 6 root root 120 Jan 12 21:45 ../
drwxr-xr-x 2 root root   2 Jan 12 18:35 dataset/
drwxr-xr-x 2 root root   3 Jan 12 18:37 docker/
drwxr-xr-x 2 root root   2 Jan 12 18:43 isos/
drwxr-xr-x 3 root root   3 Jan 12 18:42 vms/

 

After some more poking around I found this helped:

chown nobody:users /mnt/dump
chown nobody:users /mnt/dump/dataset

and

chmod 775 /mnt/dump/
chmod 775 /mnt/dump/dataset
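A quick way to confirm the mode actually took, sketched on a scratch directory rather than the real /mnt/dump paths (the chown to nobody:users, unRAID's default share owner, needs root, so it is only shown commented out):

```shell
# Verify a chmod took effect, demonstrated on a scratch directory
d="$(mktemp -d)"
# chown nobody:users "$d"     # unRAID's default share ownership (needs root)
chmod 775 "$d"
stat -c '%a' "$d"             # prints the octal mode, 775
```

Checking with stat (or ls -la, as in the listing above) after each change makes it obvious whether the dataset or only its parent got fixed.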

Edited by Xbgt1
more information

@steini84 Just an update that today I've compiled ZFS 2.0.1 on 6.9 RC2 using the Unraid Kernel Helper.  That seems to be working perfectly so far, so this is good, as it points to some other issue that should be fixable.  I'd be interested to know if anyone else has any issue with the 2.0.1 plugin.

 

EDIT:  It actually looks like there is at least one thing still happening: I can't reboot without holding the power button down.  So that confirms to me there is some issue with 2.0.1, or some feature it introduces that's incompatible with something in Unraid.  I'll keep testing.

 

EDIT2: Nope, my Docker tab is now also not loading.  So right now I definitely don't recommend 2.0.1, given this happens on two machines and two separate builds of ZFS.

Edited by Marshalleq
2 hours ago, Marshalleq said:

It actually looks like there is at least one thing still happening: I can't reboot without holding the power button down.  So that confirms to me there is some issue with 2.0.1, or some feature it introduces that's incompatible with something in Unraid.  I'll keep testing.

Please try to:

  1. Stop the array
  2. Open up a terminal and type in: 'zpool export -a' and wait for it to finish
  3. Then click reboot

Please report back what happens


I just built my own kernel using the Kernel Helper as well, and for me everything seems to be working without a problem. I can reboot, and the Docker tab works as well.

 

EDIT: OK, I have the same problems now as well. Somehow my server is constantly pegged at 100% CPU usage; that might be why Docker does not work correctly.

Edited by Joly0
