ZFS plugin for unRAID


steini84

Recommended Posts

@Joly0 can you run the commands requested by @steini84 above?  I'm not going to be able to get to it for around a week.

 

e.g. 

 

Stop the array

Open up a terminal and type in: 'zpool export -a' and wait for it to finish

Then click reboot

 

And report back.

 

Probably good to run and post a diagnostics too.
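For reference, the diagnostics can also be collected from the terminal; a dry-run sketch (the `run` wrapper only echoes the command by default, so it is safe to paste anywhere - set DRYRUN=0 on the server to actually run it):

```shell
# Dry-run sketch: collect unRAID diagnostics from the terminal.
# DRYRUN=1 (default) only prints the command; DRYRUN=0 executes it.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run diagnostics   # writes a diagnostics zip to /boot/logs on the flash drive
```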

 

Thanks.

Edited by Marshalleq
Link to comment

Btw, it definitely seems to be an issue with Docker (or only some Docker containers/volumes?). When I disable Docker, my server's CPU usage is normal and I can reboot without a problem, but somehow I am now unable to start Docker. I will restart the server to see if I can enable it again afterwards.

Link to comment

Oh right.  I think first you should stop the unRAID array (go to the Main screen of the unRAID GUI, scroll down and stop it there).  This will stop Docker, VMs, shares and such, so that nothing is accessing ZFS - then that should allow the zpool command to run.

 

Then run the command in the console - zpool export -a - which I think will try to auto-unmount the ZFS datasets as part of the command.

 

I've never run this command before, but in case you can't get your array back, the opposite command would be zpool import -a.

 

Yeah - Docker seems more impacted.  There were other screens that were an issue too.  Docker was running in the background, if I recall - but I do have 128 GB RAM in my system, which might have been hiding that particular issue.

Edited by Marshalleq
Link to comment

Yes, if you unmount the unRAID array, it shouldn't give you that error.  Unmounting the unRAID array would stop all the things that are preventing the ZFS array from running that command.

 

Out of interest, which Docker container?  Perhaps I'm running the same one and it's related to that.

 

Thanks.

Link to comment

As I said, it's not working. I stopped the array, but I can't execute the command; I get the error message I already wrote. The ZFS array, btw, doesn't get unmounted when I stop the array in unRAID like you described; it stays mounted and I can access it without a problem.

 

The Docker container that's giving me problems is "mitchtalmadge/amp-dockerized"

Edited by Joly0
Link to comment

Yeah, the ZFS array unmounts separately.  I guess the same problem is causing the unRAID array to not unmount properly, which in turn means things are still actively running on the ZFS array.  The only other thing I could think of would be to reboot in maintenance mode, or to set the unRAID array not to start at boot.  But I'm not sure if that will give @steini84 the information he wants.

 

I am definitely not running that particular Docker container, BTW.   It could be some other common Docker component, I suppose.

 

At least now there are two of us with the issue - maybe someone else will get it too and we can start to narrow it down.  I might just have time to try running those commands tonight if I'm lucky.

Edited by Marshalleq
Link to comment

@Joly0 you don't happen to be running xattr=sa, do you?  Or any other changes from the norm?  I was reading the issues list and there are a few BSD-related things.  Just trying to think of any similarities.
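A quick way to check is to list the property recursively for the whole pool. A dry-run sketch ("tank" is a placeholder pool name; the `run` wrapper only echoes the command unless you set DRYRUN=0 on the server):

```shell
# Dry-run sketch: check the xattr property on every dataset in a pool.
# "tank" is a placeholder pool name - substitute your own.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run zfs get -r xattr tank   # possible values: on (the default), sa, off
```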

 

@steini84 on the thread above, they've suggested we do a staggered approach through the commits to see where in the code history this happened between 2.0.0 and 2.0.1.  I might just give that a go - I might need to confirm some things with you first.
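The staggered approach can be driven with git bisect, assuming the upstream release tags are zfs-2.0.0 and zfs-2.0.1; the build-and-test step at each bisect point would go through the kernel helper and is only hinted at here. Dry-run sketch (commands are echoed unless DRYRUN=0):

```shell
# Dry-run sketch: bisect the OpenZFS history between the 2.0.0 and
# 2.0.1 release tags to find the commit that introduced the problem.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run git clone https://github.com/openzfs/zfs.git
run git -C zfs bisect start zfs-2.0.1 zfs-2.0.0   # <bad> <good>
# ...build and test the module at the checked-out commit, then mark it:
run git -C zfs bisect good   # or: run git -C zfs bisect bad
# repeat until git prints the first bad commit
```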

Link to comment
7 hours ago, Joly0 said:

EDIT: Ok, I have the same problems now as well. Somehow my server is constantly pegged at 100% CPU usage; that might be the reason why Docker does not work correctly

When did you get the 100% usage, is there something in the syslog?

 

@Joly0 I think @Marshalleq means my commands:

 

Please try to:

  1. Stop the array <- This is really important, and even more important is that you wait for it to finish before continuing with step 2
  2. Open up a terminal and type in: 'zpool export -a' and wait for it to finish
  3. Then click reboot

Please report back whether the server reboots correctly when you do this, so that I can fix my Unraid-Kernel-Helper and also send a PR to the plugin from @steini84
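The steps above can be sketched as a terminal sequence; running zpool list afterwards is a reasonable sanity check that the export worked. Dry-run sketch (the `run` wrapper only echoes the commands unless you set DRYRUN=0 on the server):

```shell
# Dry-run sketch of the sequence after the array has been stopped
# from the GUI. DRYRUN=1 (default) only echoes the commands.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run zpool export -a   # wait for this to return before continuing
run zpool list        # should now report "no pools available"
run reboot
```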

 

6 hours ago, Joly0 said:

Yeah, as I said, I can't run that command

Which command? What does happen?

 

6 hours ago, Joly0 said:

The ZFS array, btw, doesn't get unmounted when I stop the array in unRAID like you described; it stays mounted and I can access it without a problem.

What did you select when building the image with the Unraid-Kernel-Helper? There is an option for whether it is unmounted or not on an array start/stop.

 

EDIT: I really want to solve the issue where the server doesn't reboot properly, and I think I already have the solution, but I first need confirmation that it works.

Btw. after you issue the command 'zpool export -a' you shouldn't be able to access your ZFS volumes.

Link to comment

@Marshalleq The command "zfs get xattr" returns "on" as the value, so I assume it's enabled like you mean? Although I don't remember setting that.

 

@ich777 The setting "Load ZFS Pools on Array start/stop:" in unraid-kernel-helper is set to false. I guess I should build a new kernel with it set to true, so the ZFS array stops when the unRAID array does. But as I said, the reboot problem seems to only appear for me when my system is pinned at 100% CPU usage; otherwise I can reboot normally (at least the times I tried, it worked)

  • Thanks 1
Link to comment
36 minutes ago, Joly0 said:

But as I said, the reboot problem seems to only appear for me when my system is pinned at 100% CPU usage; otherwise I can reboot normally (at least the times I tried, it worked)

Okay, if your system is pinned at 100% usage, can you at least try to stop the array, issue the command from above and try to reboot?

If something isn't working, please take a screenshot or send me a short message here.

Link to comment

I am building a new kernel now with the setting set to true, so the ZFS array unmounts when the unRAID array does, but it takes forever to download the files, as Deutsche Telekom (my internet provider) has serious problems with various CDNs like AWS and GitHub's, letting me download at only ~5 kB/s max (a proxy helps here - maybe a setting for this could be added to the kernel helper?).

 

Other than that, if the CPU is pinned at 100% I can't do anything: can't access any menu, can't access the bash, can't do anything other than hard-reboot the server. So no, I can't stop the array and run the command when it's pinned.

Link to comment
39 minutes ago, Joly0 said:

I am building a new kernel now with the setting set to true, so the ZFS array unmounts when the unRAID array does, but it takes forever to download the files, as Deutsche Telekom (my internet provider) has serious problems with various CDNs like AWS and GitHub's, letting me download at only ~5 kB/s max

Yes, I'm aware of that..

Strangely here in Austria we don't have such problems. :/

 

40 minutes ago, Joly0 said:

(a proxy helps here - maybe a setting for this could be added to the kernel helper?)

The only thing you could do is, for example, configure my OpenVPN container and route the traffic through that container.
I think it is only possible to set a proxy for the entire Docker daemon and not for a single container, and since I'm using standard system tools like wget and curl, a proxy would not be that easy.
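That said, wget and curl both honor the standard proxy environment variables, so exporting them before the download step might be enough; "proxy.example:3128" below is a placeholder for an actual proxy:

```shell
# Sketch: route wget/curl downloads through an HTTP proxy via the
# standard environment variables. "proxy.example:3128" is a placeholder.
export http_proxy="http://proxy.example:3128"
export https_proxy="http://proxy.example:3128"

# wget and curl will now use the proxy for subsequent requests, e.g.:
#   wget https://github.com/openzfs/zfs/archive/refs/tags/zfs-2.0.1.tar.gz
# undo with: unset http_proxy https_proxy
```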

 

You don't have to delete the files that are in the 'kernel' directory, or the directory itself, since when it finds the files in that directory it uses those files; if you want to build a new kernel, you only have to delete the 'output-XXXXXXX' folder.

 

44 minutes ago, Joly0 said:

Other then that, if the cpu is pinned to 100% i cant do anything, cant access any menu, cant access the bash, cant do anything other then hard-reboot the server, so no, i cant stop the array and run the command when its pinned.

I can think of a problem I experienced with ZFS: if a drive becomes unavailable for whatever reason (drive pulled out, accidentally or not, bad SATA connection, ...) and the zpool is not unmounted before the drive becomes unavailable, this can lead to the kind of problem you described here.

Link to comment
2 minutes ago, Joly0 said:

Btw, with the kernel-helper, @ich777, can I build against exact commits of ZFS, and if so, how? To narrow down when the problem occurred - that might help

Yes and no. I once had an option in there to pull from the master branch, but I removed that ability since nobody used it and it could lead to other problems and more complexity...

 

You can always do a completely manual build: set the variable 'CUSTOM_BUILD' to 'true' and the container copies the build script to the main directory. You can then log into the container itself with 'docker exec -ti Unraid-Kernel-Helper /bin/bash', go through the script line by line, and at the ZFS part you can/have to check out whatever commit you want.
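As a rough sketch of that flow (the commit hash is a placeholder, and the `run` wrapper only echoes the commands unless DRYRUN=0; the exact steps inside the container follow the copied build script):

```shell
# Dry-run sketch: manual build with a specific ZFS commit checked out.
# '<commit-sha>' is a placeholder for the commit you want to test.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run docker exec -ti Unraid-Kernel-Helper /bin/bash
# ...then, inside the container, at the ZFS step of the build script:
run git checkout '<commit-sha>'
```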

Link to comment

Ok, for the OpenVPN container to work, I would need a VPN somewhere to connect to (I don't trust those pesky free VPNs found everywhere on the internet).

 

Other than that, if I don't delete the files, the container hangs during some steps; for example, it's now hanging at "--Found Kernel ..... extracting, this can take some time.....". It has already taken about 2 hours, while on the first run everything was done in under half an hour (including the download, which, weirdly, went faster than expected the first time around...)

Link to comment
9 minutes ago, Joly0 said:

Other than that, if I don't delete the files, the container hangs during some steps; for example, it's now hanging at "--Found Kernel ..... extracting, this can take some time.....".

Can you please post a screenshot of the actual contents of your 'kernel' folder?

There should be no folders inside when you start building new images. Other than that, this is the wrong place to discuss this... :D

 

10 minutes ago, Joly0 said:

the first run everything was done in under half an hour (including the download, which, weirdly, went faster than expected the first time around...)

I think this also depends on the time of day you download from GitHub/AWS...

 

The best thing would be to delete all the folders that are inside the 'kernel' directory; the files can stay there.

Link to comment

Ok, I figured out it's this commit https://github.com/openzfs/zfs/commit/1c2358c12a673759845f70c57dade601cc12ed99 that causes these issues; the commit before works fine (although my lancache doesn't work, due to some syscalls afaik). But starting with this commit, starting some specific Docker containers like amp-dockerized or jdownloader causes my server to pin its CPU (sometimes every core, sometimes just a few) to 100%, making it impossible to do anything (stop the containers, stop the array, restart the server)

  • Like 2
  • Thanks 1
Link to comment
On 3/21/2017 at 4:06 PM, rinseaid said:

I also wanted to get the ZFS Event Daemon (zed) working on my unRAID setup. Most of the files needed are already built into steini84's plugin (thanks!) but zed.rc needs to be copied into the file system at each boot.

 

I created a folder /boot/config/zfs-zed/ and placed zed.rc in there - you can get the default from /usr/etc/zfs/zed.d/zed.rc.

 

Add the following lines to your go file:

 


#Start ZFS Event Daemon
cp /boot/config/zfs-zed/zed.rc /usr/etc/zfs/zed.d/
/usr/sbin/zed &

 

To use the built-in notifications in unRAID, and to avoid having to set up a mail server or relay, set your zed.rc with the following options:


ZED_EMAIL_PROG="/usr/local/emhttp/webGui/scripts/notify"
ZED_EMAIL_OPTS="-i warning -s '@SUBJECT@' -d '@SUBJECT@' -m \"\`cat $pathname\`\""

 

$pathname contains the verbose output from ZED, which will be sent in the body of an email alert from unRAID. I have this set to an alert level of 'warning', as I have unRAID configured to always email me for warnings. You'll also want to adjust your email address and verbosity level, and set up a debug log if desired.

 

Either place the files and manually start zed, or reboot the system for this to take effect.

 

Pro tip: if you want to test the notifications, zed will alert on a scrub-finish event. If you're like me and only have large pools that take hours/days to scrub, you can set up a quick test pool like this:

 


truncate -s 64M /root/test.img
zpool create test /root/test.img
zpool scrub test

When you've finished testing, just destroy the pool.
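For completeness, the teardown is two commands (dry-run sketch; the `run` wrapper only echoes the commands unless DRYRUN=0):

```shell
# Dry-run sketch: destroy the test pool and remove its backing file.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run zpool destroy test
run rm /root/test.img
```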

 

 

I can't seem to locate the "go" file or the zed.rc in the Krusader app to edit it. Could you provide a step-by-step process to set up ZED? I'm used to the GUI, not familiar with the CLI. If you can provide step-by-step command entries, I'd really appreciate it. Thanks.

Link to comment
15 minutes ago, tocho666 said:

 

I can't seem to locate the "go" file or the zed.rc in the Krusader app to edit it. Could you provide a step-by-step process to set up ZED? I'm used to the GUI, not familiar with the CLI. If you can provide step-by-step command entries, I'd really appreciate it. Thanks.

 

Hello, I haven't used ZFS on unRAID in quite some time, so I'm not sure if this still works, but here's basically what you'd run in the terminal, based on the instructions I provided.

 

nano /boot/config/go

 

This will open up the nano text editor. If there's anything in the file, use the cursor keys to navigate to the bottom of the file. Copy and paste the below (you can paste by right clicking and selecting paste using the unRAID web browser terminal):

#Start ZFS Event Daemon
cp /boot/config/zfs-zed/zed.rc /usr/etc/zfs/zed.d/
/usr/sbin/zed &

 

Press CTRL+X, then type 'y' and press Enter. This will save the file and exit the nano text editor.

 

Type the below to create the /boot/config/zfs-zed/ directory:

 mkdir /boot/config/zfs-zed/

 

And finally, copy the default zed.rc file into the new directory

cp /usr/etc/zfs/zed.d/zed.rc /boot/config/zfs-zed/

 

You would then use nano to modify the zed.rc file if desired to use unRAID notifications or configure email notifications:

nano /boot/config/zfs-zed/zed.rc

 

Find the two lines mentioned (ZED_EMAIL_PROG and ZED_EMAIL_OPTS) and modify as required.

 

Reboot after the above changes for the settings to take effect.

 

Side note: you can also turn on SMB sharing of the flash drive from the unRAID web GUI (Main -> Boot Device -> Flash -> SMB security settings -> Export: yes, Security: public), which would allow you to access /boot/config over SMB - e.g. \\yourserver\flash\config - and you could then use whatever text editor you like to modify these files.

 

 

Edited by rinseaid
Mention reboot after making changes
  • Like 1
Link to comment
