Marshalleq Posted January 8 (edited)
@Joly0 Can you point at the specific fix from here? I run lancache too, but I haven't noticed (nor tested for) any issues such as you describe, because I don't understand how a single Docker container could create an issue. It sounds like you're saying ZFS on RC2 prevented writes, but we'd be seeing that across the board. Either way, there are some good performance fixes in ZFS 2.0.1. Regarding updates, I'm not sure if it needs to be manually compiled first by @steini84, but either way it's supposed to apply when you reboot.
Edited January 8 by Marshalleq
steini84 (thread author) Posted January 8
Sorry guys, did not see that update. Will build now.
Joly0 Posted January 8
@Marshalleq On the Lancache Discord someone said it might be associated with this issue: https://github.com/openzfs/zfs/issues/11151. I don't know exactly if it's really this; I hope it is. I will see when the ZFS plugin is updated and I update again to RC2.
Joly0 Posted January 8
And it's not exactly writes; it has something to do with specific syscalls which changed from one Linux kernel to the next, and which seem to be needed by Lancache but not by other containers or other writes.
steini84 (thread author) Posted January 9
Built zfs-2.0.1 for 6.8.3 & 6.9.0-rc2. If you want to update ZFS versions you have to remove the old cached version:

rm /boot/config/plugins/unRAID6-ZFS/packages/*

and then restart the server.
Joly0 Posted January 9 (edited)
@steini84 zfs --version shows zfs-2.0.0-1 after deleting the old package and restarting the server. Is this the correct version? I didn't check the old version, and GitHub says it should be 2.0.1. The filename of the package is zfs-2.0.0-unraid-6.9.0-rc1.x86_x64.tgz, so I think it didn't update, but why?
Edited January 9 by Joly0 (missed something)
Marshalleq Posted January 9
I got:

zfs-2.0.1-1
zfs-kmod-2.0.1-1

However, for reasons unknown, on both my machines the GUI is now mostly unresponsive. Has anyone else installed it and got it working?
steini84 (thread author) Posted January 9
I tried on both 6.9.0-rc2 & 6.8.3 and get the right version after a reboot. I have only run it on my test server and have not seen anything strange. But to be honest, I only run ZFS on that install; nothing else is going on there.
Marshalleq Posted January 9 (edited)
There were two plugin updates at the same time: Unassigned Devices and Nerd Pack. I assume it'll be some issue with Unassigned Devices rather than ZFS, but I am still investigating. After the upgrade the Docker tab loads indefinitely, and the Main tab is either slow or only partially loads. It also won't reboot via GUI or command-line reboot requests, which is the most concerning thing. This is happening on both systems I just updated with ZFS and the two plugins.
Edited January 9 by Marshalleq
Marshalleq Posted January 9
@steini84 There's something wrong with 2.0.1. I removed the Unassigned Devices plugin altogether and hard rebooted (because I have no choice; saved by CoW). Still the same issue, so then I did:

cd /boot/config/plugins/unRAID6-ZFS/packages/
rm *
wget https://github.com/Steini1984/unRAID6-ZFS/blob/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz.md5
wget https://github.com/Steini1984/unRAID6-ZFS/blob/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz

Hard rebooted again (because no choice) and the server now reboots correctly. However, I must have done something wrong, as ZFS is not working. Looking in the packages directory, only the md5 file remains and the zfs file is missing. I tried the whole procedure again and the file is missing again. The zfs command doesn't work. It's like the tgz file is just being deleted.
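[Editor's note] One possible explanation for the vanishing tgz, offered as an assumption rather than something confirmed in this thread: GitHub .../blob/... URLs return an HTML page, not the package itself, so if the plugin validates the cached file against its md5 on boot, a "tgz" fetched that way would fail the check and be removed. The actual files live under raw.githubusercontent.com, so a sketch of the manual download would be:

```shell
cd /boot/config/plugins/unRAID6-ZFS/packages/
# blob/ URLs serve an HTML page; raw.githubusercontent.com serves the real file
wget https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz.md5
wget https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz
# sanity-check before rebooting (assumes the .md5 file uses the usual "hash  filename" format)
md5sum -c zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz.md5
```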
Marshalleq Posted January 9
Another curiosity: along with the zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz going missing, the ZFS plugin gets uninstalled, both at reboot.
Marshalleq Posted January 9
So for my main box I got around it by using the community kernel direct download here to get it going, in case anyone else gets stuck, until this plugin is fixed or whatever is happening. Note the direct download is 6.8.3, though, with ZFS 2.0.0.
KrisMin Posted January 11
Hi everyone! A newbie question: how do I add an Unraid share which is located on my created ZFS pool? When adding a share, all I can see is my dummy array disk...
itimpi Posted January 11
Since ZFS is not (yet, anyway) an officially supported format within Unraid, I do not think User Shares can have files located on your ZFS pool.
steini84 (thread author) Posted January 11
On January 11, KrisMin said: "How do I add an Unraid share which is located on my created ZFS pool?"
You can use smb-extra.conf.
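[Editor's note] For anyone wondering what that looks like in practice, here is a minimal sketch. The share name, pool, and dataset path (/mnt/tank/media) are hypothetical; on Unraid this goes under Settings > SMB > Samba extra configuration, which is stored on the flash drive at /boot/config/smb-extra.conf:

```ini
[media]
    path = /mnt/tank/media
    browseable = yes
    guest ok = no
    writeable = yes
    read only = no
    create mask = 0775
    directory mask = 0775
```

After saving, restarting Samba (or stopping and starting the array) makes the share visible to clients.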
BasWeg Posted January 11
On 1/9/2021 at 11:10 PM, Marshalleq said: "So for my main box I got around it by using the community kernel direct download..."
So, at the moment it is better not to update? I'm still on RC1 with ZFS 2.0.0.
steini84 (thread author) Posted January 11
I have moved the 2.0.1 builds to the unstable folder for now.

#Enable unstable builds
touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
rm /boot/config/plugins/unRAID6-ZFS/packages/*
#Then reboot

#Disable unstable builds
rm /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
rm /boot/config/plugins/unRAID6-ZFS/packages/*
#Then reboot

Please let us know if you are running 2.0.1 without issues, and better yet if the conflict reported by Marshalleq has been identified / resolved.
Xbgt1 Posted January 11
On 9/20/2015 at 7:03 PM, steini84 said:
zpool create -m /mnt/SSD SSD radz sdx sdy sdz
Please fix this typo for the next guy.
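[Editor's note] For anyone who already copied the command from the guide: the typo is "radz" for "raidz", so the corrected form would be (sdx/sdy/sdz are placeholder device names; substitute your own disks):

```shell
# -m sets the mountpoint of the new pool; "raidz" is the single-parity vdev type
zpool create -m /mnt/SSD SSD raidz sdx sdy sdz
```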
steini84 (thread author) Posted January 12
8 hours ago, Xbgt1 said: "Please fix this typo for the next guy"
It's done, thanks.
Xbgt1 Posted January 13 (edited)
I seem to be stuck pretty hard here making the Samba share. I've tried every which way and I always end up with no write access. I have added the h8750 credentials to Credential Manager. What could I be missing here?

[test22]
path = /mnt/dump/dataset
browseable = yes
guest ok = no
writeable = yes
write list = h8750
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes

I see the issue here, though I don't know how to fix it: the user permissions are not taking at all.

root@Tower:/mnt/dump# ls -la
total 3
drwxr-xr-x 6 root root   6 Jan 12 18:43 ./
drwxr-xr-x 6 root root 120 Jan 12 21:45 ../
drwxr-xr-x 2 root root   2 Jan 12 18:35 dataset/
drwxr-xr-x 2 root root   3 Jan 12 18:37 docker/
drwxr-xr-x 2 root root   2 Jan 12 18:43 isos/
drwxr-xr-x 3 root root   3 Jan 12 18:42 vms/

After some more poking around I found this to help:

chown nobody:users /mnt/dump
chown nobody:users /mnt/dump/dataset

and

chmod 775 /mnt/dump/
chmod 775 /mnt/dump/dataset

Edited January 13 by Xbgt1 (more information)
NeoJoris Posted January 13 (edited)
5 hours ago, Xbgt1 said: "write list = h8750"
To better understand your problem: is this your user name? Have you also tried "samba reload"?
Edited January 13 by NeoJoris
BasWeg Posted January 13
13 hours ago, Xbgt1 said: "After some more poking around I found this to help..."
All this Samba stuff is pretty well explained in this referenced topic: https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764
Marshalleq Posted January 16 (edited)
@steini84 Just an update: today I've compiled ZFS version 2.0.1 on 6.9-RC2 using the Unraid Kernel Helper, and that seems to be working perfectly so far. This is good, as it points to some other issue that should be fixable. I'd be interested to know if anyone else has any issue with the 2.0.1 plugin.

EDIT: It actually looks like there is at least one thing still happening: I can't reboot without hard rebooting by holding the power button down. That confirms to me there is some issue with 2.0.1, or some feature it introduces that's incompatible with something in Unraid. I'll keep testing.

EDIT 2: Nope, my Docker tab is now also not loading. So right now I definitely don't recommend 2.0.1, given this happens on two machines and two separate builds of ZFS.
Edited January 16 by Marshalleq
ich777 Posted January 16
2 hours ago, Marshalleq said: "It actually looks like there is at least one thing still happening..."
Please try to:
Stop the array.
Open up a terminal, type in 'zpool export -a', and wait for it to finish.
Then click reboot.
Please report back what happens.
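[Editor's note] The reasoning behind the suggestion above, as a sketch (the pool name "tank" is a placeholder): a pool that is still imported at shutdown can keep its devices busy and block the reboot, so exporting first unmounts its datasets cleanly:

```shell
zpool export -a      # export every imported pool
zpool export tank    # or export a single pool by name
zpool list           # prints "no pools available" once everything is exported
```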
Joly0 Posted January 18 (edited)
I just built my own kernel using the Kernel Helper as well, and for me everything seems to be working without a problem. I can reboot, and the Docker tab works as well.

EDIT: OK, I have the same problems now as well. Somehow my server is constantly pegged at 100% CPU usage; that might be the cause of why Docker does not work correctly.
Edited January 18 by Joly0