steini84

ZFS plugin for unRAID


Steini, I just discovered this plugin... I really want to use it. But I am on the current release:

 

kernel: Linux version 4.4.23-unRAID (root@develop64) (gcc version 5.3.0 (GCC) ) #1 SMP PREEMPT Sat Oct 1 13:41:00 PDT 2016

 

I am sure it is a pain to do this each time, but can you compile again against the current 6.2.1?

 

 


No pain at all. Just had a baby, so I have not been keeping up with the releases. Thanks for the heads up, I will compile in a few hours.

 

 



@steini84: did you make any progress on putting together a tutorial for compiling zfs and spl for new kernels? I mistakenly updated to a new kernel and cannot access my pool now. Maybe a few high-level bullet points could help steer me in the right direction?

 

While I'm new to the forum, I'm not new to unRAID, ZFS or Linux.

 

There are really good performance tests over at http://www.phoronix.com/, but for me it's more about stability and usability (snapshots, replication, etc. that fit my workflow).

 

When unRAID is updated you have to wait for a plugin update. I try to update the plugin as fast as I can, but sometimes it can take a couple of days. If you accidentally update, the worst thing that happens is that your ZFS pool won't be imported and the data won't be accessible until a plugin update is pushed. In my setup that would mean all my Dockers and VMs.

 

I wanted to wait for 6.1 final before writing a complete how-to for ZFS on unRAID, but if there is any interest I could get on that right away.

 

 



I have updated the plugin for 6.3 rc3 (kernel 4.8.4)

 

But compiling is pretty easy. Here is a rough guide for building the packages:

 

First off, download this awesome script from gfjardim:

wget https://gist.githubusercontent.com/gfjardim/c18d782c3e9aa30837ff/raw/224264b305a56f85f08112a4ca16e3d59d45d6be/build.sh

Change this line from:
LINK="https://www.kernel.org/pub/linux/kernel/v3.x/linux-${KERNEL}.tar.xz"
to:
LINK="https://www.kernel.org/pub/linux/kernel/v4.x/linux-${KERNEL}.tar.xz"

using:
nano build.sh
then make it executable
chmod +x build.sh

run it with 
./build.sh

answer 1, 2 & 3 with Y
answer 3.1, 3.2 with N
answer 3.3 with Y
answer 4 and 6 with N

then make the modules that are needed:
cd kernel
make modules

Then we need to build some dependencies

#Libuuid
wget "http://downloads.sourceforge.net/project/libuuid/libuuid-1.0.3.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flibuuid%2F&ts=1453068148&use_mirror=skylink"
tar -xvf libuuid*
cd libuuid-1.0.3
./configure
make 
make install

#Zlib
wget http://zlib.net/zlib-1.2.8.tar.gz
tar -xvf zlib-1.2.8.tar.gz
cd zlib-1.2.8
./configure
make 
make install

Then we build zfs and spl

First download the latest zfs and spl from zfsonlinux.org

wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.8/spl-0.6.5.8.tar.gz
tar -xvf spl-0.6.5.8.tar.gz
cd spl-0.6.5.8

./configure --prefix=/usr
make
make install DESTDIR=$(pwd)/PACKAGE
cd $(pwd)/PACKAGE
makepkg -l y -c n ../spl.tgz
installpkg ../spl.tgz

load the module

depmod
modprobe spl

Same for zfs
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.8/zfs-0.6.5.8.tar.gz
tar -xvf zfs-0.6.5.8.tar.gz
cd zfs-0.6.5.8

./configure --prefix=/usr
make
make install DESTDIR=$(pwd)/PACKAGE
cd $(pwd)/PACKAGE
makepkg -l y -c n ../zfs.tgz
installpkg ../zfs.tgz

depmod
modprobe zfs
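
A quick sanity check after that last modprobe (assuming there is already a pool on the attached disks; these are just the standard ZFS commands):

# confirm the kernel modules are loaded and which version was built
lsmod | grep -E 'spl|zfs'
modinfo zfs | grep -i ^version

# import any pools found on the disks and have a look around
zpool import -a
zpool status
zfs list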


Awesome, thanks! I'll take a look at the script too.

 



steini84, congratulations!!! Best of times with the newborn!

 

I'm using 6.2.2, do you have a compiled version for it?

 

Another thing I'm facing: I have napp-it as an ESXi guest.

It had a SAS card passed through and used four 1TB HDDs to get a total of 1TB.

 

I can't load the old VM. Can I use your plugin to set up my pool as it was, without rebuilding the pool?

 

Thanks, and congrats!


First of all thanks :)

 

I did not even realize that 6.2.2 was out; I thought regjc01 had updated to the new 6.3 RC. I have to set up notifications for when a new release is out!

 

No worries I can compile it now and upload later tonight.

 

About your napp-it VM... if you don't have a newer version of ZFS than the one in ZFS on Linux, you should be able to import it no problem. The plugin tries to mount all pools on startup. Try that, and if that does not work you can try zpool import -a or zpool import [poolname]
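
If the automatic mount does not pick it up, a cautious way to do it by hand is roughly this (poolname is just a placeholder; zpool import with no arguments only scans and lists, it does not change anything):

zpool import                # scan the disks and list pools that can be imported
zpool import -f poolname    # -f is usually needed because the pool was last used by another system (the napp-it VM)
zfs list                    # check that the datasets show up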


Thanks.

 

Can't remember the version I had, but it was a napp-it dist from 2013 I think...

 

I'll wait for the 6.2.2 version. How do I get it specifically?



It's already uploaded - just press "check for updates" in the unRAID plugin manager


steini84, the pool is mounted at /pool01 (after I ran zpool import -a).

 

How do I change it to /mnt/zfs?

 

I don't want to break it :)


Off the top of my head you should do:

zfs set mountpoint=/mnt/zfs pool01

(run it while the pool is imported, which it already is after your zpool import -a; ZFS remounts the datasets at the new path right away, so no export/import cycle is needed)
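
A quick way to confirm the change took (standard ZFS commands, nothing plugin-specific):

zfs get -r mountpoint pool01    # should show /mnt/zfs for the pool and any child datasets
df -h | grep pool01             # shows where the datasets are actually mounted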

 

If you run into any problems (like pool busy) pm me and I will get back to you when I'm on a computer tomorrow.

 

 



If you set the mountpoint it will be permanent

 

 



Does this need fstrim to be run on it daily?

 

E: This may be relevant: https://github.com/zfsonlinux/zfs/pull/3656

 

E2: I have successfully built this using the vbatts/slackware container as a basis, with the following packages from Slackware64 14.1:

 

attr-2.4.46-x86_64-1.txz
autoconf-2.69-noarch-1.txz
automake-1.11.5-noarch-1.txz
bc-1.06.95-x86_64-2.txz
ca-certificates-20160104-noarch-1.txz
curl-7.31.0-x86_64-1.txz
cyrus-sasl-2.1.23-x86_64-5.txz
gcc-4.8.2-x86_64-1.txz
gcc-g++-4.8.2-x86_64-1.txz
git-1.8.4-x86_64-1.txz
glibc-2.17-x86_64-7.txz
kernel-headers-3.10.17-x86-3.txz
less-451-x86_64-1.txz
libmpc-0.8.2-x86_64-2.txz
libtool-2.4.2-x86_64-2.txz
m4-1.4.17-x86_64-1.txz
make-3.82-x86_64-4.txz
perl-5.18.1-x86_64-1.txz
zlib-1.2.8-x86_64-1.txz

 

And the following from Slackware64 14.2, due to a bug in how the Git package was built:

 

cyrus-sasl-2.1.26-x86_64-1.txz

 

Begin by preparing a kernel directory: fetch the matching kernel's source package, apply all patches from unRAID's /usr/src, and copy in any new files. Then run make oldconfig and make to build it, and you'll have a tree to point the SlackBuild packages for spl-solaris and zfs-on-linux at.
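
Roughly, that preparation looks like this (the kernel version and the /usr/src layout are examples; adjust to the running unRAID kernel, since the exact patch and config file names may differ):

# fetch the source for the running kernel version (example version)
wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.8.4.tar.xz
tar -xf linux-4.8.4.tar.xz
cd linux-4.8.4

# apply unRAID's patches and copy in its kernel config
for p in /usr/src/linux-*/*.patch; do patch -p1 < "$p"; done
cp /usr/src/linux-*/.config .

make oldconfig
make -j"$(nproc)"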

 

You'll also need to follow the comment I posted at the bottom of the thread on that pull request, since it outlines an spl commit that's not in master yet to rebase the ntrim branch against. Otherwise, rebase the zfs tree against upstream/master, which is github.com/zfsonlinux/zfs.git.

 

Then:

 

LINUXROOT=/root/linux-whatever ./spl-solaris.SlackBuild

 

Then the zfs-on-linux.SlackBuild needs to be modified to pass a --with-spl=/tmp/SBo/spl-your-version.
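
Which amounts to extending its configure call along these lines (paths are placeholders; /root/linux-whatever is the same kernel tree passed as LINUXROOT above):

# inside zfs-on-linux.SlackBuild, the ./configure line becomes something like:
./configure \
  --prefix=/usr \
  --with-spl=/tmp/SBo/spl-your-version \
  --with-linux=/root/linux-whatever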

 

E3: Now running spl:master and zfs:ntrim on unRAID 6.3.0-rc6. Fine so far, but benches slightly slower. Strange.

 

An alternative to echoing values into the /sys module parameters is to create a full zfs.conf of settings to apply, depending on which types of devices you'll be pooling, and copy it to /etc/modprobe.d before modprobe zfs.
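
For example, a minimal /etc/modprobe.d/zfs.conf could look like this (the tunables are standard zfsonlinux module parameters, but the values are only placeholders; tune them for your own hardware):

# /etc/modprobe.d/zfs.conf - read when the zfs module is loaded
# cap the ARC at 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296
# example vdev queue tuning
options zfs zfs_vdev_max_active=1000
# copy this file into /etc/modprobe.d/ before running modprobe zfs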


Maybe I'm just too new to unRAID or ZFS, but I can't seem to figure out how to utilize the zpool.

I'm able to manage the pool, and see that all the disks I assigned are "zfs members" in the 'Main' tab in unRAID.

 

How can I access the pool and share it? Or alternatively access it from my VMs?

 

***EDIT***

I've managed to figure it out! Probably very inefficient (or maybe not), but I've managed to use a bind mount to point to my zpool from within a share.

If anyone has a suggestion for something better, let me know.
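
For reference, a bind mount like that usually looks something like this (the pool and share names here are just placeholders):

# make the pool visible inside an existing share via a bind mount
mkdir -p /mnt/user/myshare/zfs
mount --bind /mnt/tank /mnt/user/myshare/zfs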

 

Thank you so much steini84 for making this awesome plugin!


Good that you figured it out, but a better way would be to set the mountpoint.

 

Normally ZFS pools are mounted automatically at the root of the filesystem; for example, a pool named tank is mounted at /tank.

 

To easily use it in unRAID, you would want to mount it under /mnt/tank.

 

You can do that with

 

zfs set mountpoint=/mnt/tank tank

 

Read more here -> http://docs.oracle.com/cd/E19253-01/819-5461/gaztn/


I'll give that a shot once I get off work today.

 

So if I set the zfs mountpoint to /mnt/<tank>, will unRAID supply the directory as a share? Or do I need to manually create one?

 

 

 


You need to manually create the share since unRAID does not integrate with ZFS by default.

 

I guess via smb.conf - maybe you will find something in the documentation.

 

 



Looks like Samba was the way to go! Can't believe I didn't look there first...  :-\

 

I ended up using /boot/config/smb-extra.conf to add the share, since anything added directly to smb.conf was removed upon reboot.
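
For anyone doing the same, an smb-extra.conf entry can be as simple as this (share name and path are just examples):

# append a share definition that survives reboots
cat >> /boot/config/smb-extra.conf << 'EOF'
[tank]
  path = /mnt/tank
  browseable = yes
  read only = no
  guest ok = yes
EOF
# restart Samba (or reboot) so the new share shows up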

 

Thanks again!

