ZFS plugin for unRAID


steini84


  • 2 weeks later...

Added ZFS v2.0.0 build for 6.9.0 stable (2.0.3 in the unstable folder)

Added ZFS v2.0.3 build for 6.9.0 stable

This hopefully fixes the problems with docker.img and ZFS 2.0.1+:

".. we added the ability to bind-mount a directory instead of using a loopback. If file name does not end with .img then code assumes this is the name of directory (presumably on a share) which is bind-mounted onto /var/lib/docker. For example, if /mnt/user/system/docker/docker then we first create, if necessary the directory /mnt/user/system/docker/docker. If this path is on a user share we then "de-reference" the path to get the disk path which is then bind-mounted onto /var/lib/docker. For example, if /mnt/user/system/docker/docker is on "disk1", then we would bind-mount /mnt/disk1/system/docker/docker. "



Edited by steini84
moved 2.0.3 to unstable
34 minutes ago, Arragon said:

Still running into the same problem with Docker.  How can I go back to 2.0.0?

Did you try the new bind-mount to a directory that was added? In any case, I moved 2.0.3 to unstable and added a 2.0.0 build instead (thanks ich777)

 

You can move back to 2.0.0 by deleting the 2.0.3 package and rebooting:

rm /boot/config/plugins/unRAID6-ZFS/packages/*
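(If you ever enabled the unstable builds flag, I'd remove that file too before rebooting - it's the one created with the touch command mentioned further down the thread - and then let the plugin pull the stable 2.0.0 build on boot:)

rm -f /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS   # only if you enabled unstable builds earlier
reboot                                                       # the plugin installs the stable 2.0.0 build on the next boot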

 

1 hour ago, steini84 said:

Did you try the new bind-mount to a directory that was added? In any case, I moved 2.0.3 to unstable and added a 2.0.0 build instead (thanks ich777)

I'm still new to Docker, so maybe I did it wrong. Here is what I thought I could do (rough commands sketched below the list):

1) Create new dataset like /mnt/tank/Docker/image for the content that was previously in docker.img

2) Start Docker with docker.img and make sure containers are stopped

3) Copy over content from /var/lib/docker to /mnt/tank/Docker/image

4) Stop Docker and change path to /mnt/tank/Docker/image

5) Start Docker again and it would bind mount /mnt/tank/Docker/image to /var/lib/docker
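Roughly, in commands (tank is my pool; I'm assuming rsync is available on the box):

zfs create -p tank/Docker/image                        # step 1: dataset for what used to live in docker.img
# step 2: start Docker with the existing docker.img and stop all containers
rsync -aHX /var/lib/docker/ /mnt/tank/Docker/image/    # step 3: copy the current Docker root
# step 4: stop Docker and point its storage path at /mnt/tank/Docker/image/
# step 5: start Docker again so it bind-mounts that directory onto /var/lib/docker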

 

Unfortunately the system completely locked up during step 3 and I had to reset.

 

EDIT: Seeing as Docker has a whole ZFS storage driver (https://docs.docker.com/storage/storagedriver/zfs-driver/) and I don't even know how those changes are made in Unraid, I think I'll stay with the .img file
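(For reference, on a stock Docker install that driver seems to be selected in /etc/docker/daemon.json - no idea whether or where Unraid exposes this, so treat the path as an assumption:)

# generic Docker configuration, not Unraid-specific; the path may not apply on Unraid
cat > /etc/docker/daemon.json <<'EOF'
{ "storage-driver": "zfs" }
EOF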

Edited by Arragon

@Arragon, what @steini84 is suggesting is to point your Docker storage at a folder rather than a .img file. I'm not sure whether that option is new since the previous beta, but it would pay to check.

 

If it works, that could solve the problem and enable us to run later versions of ZFS. The problem I see is that we still don't know why .img files won't run on a ZFS filesystem above 2.0.0, and seemingly this is specific to Unraid - no reports from any other system.

 

I'll try it later when I have some time.

 

Also, I'm not sure if that zfs storage driver refers to zfs as an underlying file system, or within the docker container - I assume the latter.

Edited by Marshalleq
2 hours ago, Marshalleq said:

@Arragon, what @steini84 is suggesting is to point your Docker storage at a folder rather than a .img file

 

Also, I'm not sure if that zfs storage driver refers to zfs as an underlying file system, or within the docker container - I assume the latter.

Like I said, I didn't even get to that point, as the system completely locked up before I could copy over the data. Starting with a fresh Docker is not an option for me. Also, the help page mentions ZFS for /var/lib/docker (which is where we want to bind-mount to), but currently they recommend btrfs (which Unraid uses by default) or OverlayFS (the successor to aufs). I guess that is why it's currently an .img file and not simply a folder.

I just don't understand what changed in 2.0.1+ to make the .img file stop working.

 

There was also something strange I noticed on 2.0.3: I tried to copy the docker.img from the ZFS pool to an SSD with XFS and it was really slow. Docker wasn't running, and other files copied over just fine. My docker.img had multiple snapshots (like other files) - I don't know if that matters.

Edited by Arragon

Hello, I'm new here. First of all I would like to thank @steini84 for his great plugin! I am not sure whether my question belongs here, as it is actually a Samba (SMB) problem, but it only occurs in connection with ZFS. To access a ZFS mount via SMB, you have to edit the Samba extras. Mine look like this:

 

[Media]
path = /mnt/HDD/Media
comment = 
browseable = yes
public = no
valid users = tabbse
writeable = yes
vfs objects = 

 

I updated from 6.8.3 to 6.9.0, and since then I get the following error message in the system log whenever I access an SMB share.

 

Mar  2 09:28:21 UNRAID smbd[14859]:  ../../source3/lib/sysquotas.c:565(sys_get_quota)
Mar  2 09:28:21 UNRAID smbd[14859]:   sys_path_to_bdev() failed for path [.]!

 

I can access the share, but it's very annoying, as hundreds of lines of the same error message are generated during normal operation (e.g. browsing directories). I've already done some research, and the only thing I've found is that the problem occurs with ZFS because Samba expects a mountpoint entry in /etc/fstab. Someone also came up with a workaround. I quote:

 

"The workaround is pretty simple: add the ZFS subvolume you're using as a share to /etc/fstab. If your share is not a ZFS subvolume, but your share directory is located beneath an auto-mounted ZFS subvolume (not listed in /etc/fstab), add that subvolume to /etc/fstab and the error will go away. No hacky scripts involved."

 

I'm not sure exactly how to do it, though, or whether it will cause any problems. I can give you the link to the site where I got this information, but I don't know if that is against forum rules. Does anyone else have the same problem or know how to solve it? Thank you very much in advance, and if this is the wrong place, please tell me where else I can ask.

Edited by tabbse
56 minutes ago, tabbse said:

I updated from 6.8.3 to 6.9.0, and since then I get the following error message in the system log whenever I access an SMB share:

Mar  2 09:28:21 UNRAID smbd[14859]:   sys_path_to_bdev() failed for path [.]!

[...] Does anyone else have the same problem or know how to solve it?

 

Hi,

please check this post

This is the workaround I'm using.

 

best regards

Bastian

 


Limetech wasn't expressing frustration with BTRFS as far as I know; they did, however, hear some others' frustration with BTRFS, which in one interview was referred to as a 'certain group who remain vocal about the issues', or words to that effect. One has to wonder how many failures a file system has to go through before you agree it's problematic (there are plenty recorded on these forums), and in my world one failure is one too many.

 

Anyway, to answer your question, I do believe LimeTech said that they are most definitely looking at bringing it in - most definitely not in this latest version, but in the future. No date or release has been given, and no guarantee either, just a good wholehearted 'we're looking at it' - which is very exciting and hopeful-sounding. I doubt it's very hard, to be honest.

 

My one hope is that it's not limited to existing within their own unraid array, as its main benefits would then largely be unavailable - and that things like Docker and VMs will run on it.

 

Personally, I think the future of unraid lies in its unraid driver being used for mass storage and ZFS for everything else. However, I've migrated completely away from the unraid array driver; I found it made my system too sluggish.

 

But there are many advantages to unraid over and above its disk setup, so it's still worth it for those, I think.

 

I do still think about the enterprise features that are missing, though. Basic file and print, backup, user accounts, directory management, etc. would be a great start. Hopefully some day in the future.

Edited by Marshalleq

Thanks - using ZFS for VMs and Dockers now. Yes, it's good. The only issue is updating ZFS when you update unRaid.

 

Regarding unraid and enterprise, it seems that the user base is mostly Blu-ray and DVD hoarders. There are only a few of us who use unRaid outside of this niche. I'll be happy when ZFS is baked in.

1 hour ago, tr0910 said:

Thanks - using ZFS for VMs and Dockers now. Yes, it's good. The only issue is updating ZFS when you update unRaid.

 

Regarding unraid and enterprise, it seems that the user base is mostly Blu-ray and DVD hoarders. There are only a few of us who use unRaid outside of this niche. I'll be happy when ZFS is baked in.

Which version of ZFS do you have working with VMs and Dockers now?

 

13 hours ago, tr0910 said:

Thanks - using ZFS for VMs and Dockers now. Yes, it's good. The only issue is updating ZFS when you update unRaid.

 

Regarding unraid and enterprise, it seems that the user base is mostly Blu-ray and DVD hoarders. There are only a few of us who use unRaid outside of this niche. I'll be happy when ZFS is baked in.

Yeah, so that issue will be over once it's baked in, which is exciting.

 

And yes again, DVD hoarders, lol - I remember guys who were like that, whole walls of money spent on CDs and such.

 

The thing I don't get is that there's a whole market in that file and print space (and even as a home market) that unraid almost completely flips the bird at; maybe they don't want to be compared with FreeNAS / TrueNAS (which is looking pretty exciting at the moment). But unraid still wins, or at least carries the up-and-coming tag, when you add in the community support - unraid has that area nailed, and I doubt anyone will ever be able to compete with it. Actually, I don't think I've ever seen anything like it around any other product.

 

The other thing on my wishlist would be a proper software network stack - it doesn't work well when you start swapping around network cards (to much disbelief from others who read my message the day I decided to rant, lol). One day I'll document it properly - when I have the will to go up against the man again. :)

 

 

3 hours ago, steini84 said:

Also, 2.0.3 is in the "Unstable" folder, which you can manually enable

Is it still

touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS 

?

 

I want to reproduce the problem on a test machine, but I only have one spare drive (since the array wants at least one) and therefore could only create a single-disk pool. I don't know whether that would keep me from reproducing the Docker error.
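(For my own notes, a throwaway single-disk pool would be something like the following - the device name is a placeholder, and the command wipes whatever is on that disk:)

zpool create -m /mnt/testpool testpool /dev/sdX   # single-disk pool mounted at /mnt/testpool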


Hi, I have a problem with ZFS and spin-down:

I have 3x WD RED 4TB as raidz1 in Unraid 6.9.1. My spin-down time is 30 min, but the disks do not go into standby, and if I spin them down manually they spin up again after a short time.


"Server emhttpd: spinning down /dev/sdf
Server emhttpd: read SMART /dev/sdf
Server emhttpd: spinning down /dev/sdf
Server emhttpd: read SMART /dev/sdf"

Does anybody have the same problem? Is the problem the ZFS plugin, the controller, or Unraid?


Without ZFS - with the same disks formatted as XFS and mounted as Unassigned Devices - spin-down works.
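(One thing I still want to check is the power state the drive itself reports - as far as I know this does not wake it:)

hdparm -C /dev/sdf   # reports standby vs. active/idle without spinning the drive up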

