steini84

ZFS plugin for unRAID

Excellent, thanks steini84, I didn’t realise that had been implemented yet. I had read about it being available in some implementations, but didn’t for a second think it’d be the one I actually wanted to use!

 

So with auto trim, is it ‘set and forget’? No periodic commands etc?

Yes that is the idea. If you have an hour you can check this out:
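For anyone who wants the short version, the property is set per pool with a single command. This is a hedged sketch; "mypool" is a placeholder pool name:

```shell
# Enable automatic TRIM on an existing pool ("mypool" is a placeholder)
zpool set autotrim=on mypool

# Confirm the property took effect
zpool get autotrim mypool

# Optional: a one-off manual full-device TRIM, e.g. from a monthly cron job
zpool trim mypool

# Watch TRIM progress and status per device
zpool status -t mypool
```

Even with autotrim=on, the OpenZFS docs suggest an occasional manual `zpool trim` can still be worthwhile, since automatic TRIM reportedly skips very small freed regions.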



Brilliant, I certainly will, look forward to giving this a try, and seeing what kind of performance I can get out of those NVMEs!


Ok, so I’ve been reading like crazy about zfs and I’ve been creating, destroying and recreating zpools to try to understand how it all works. However, like many, I’m not quite piecing it all together.

 

I can create a zpool (say zfspool), I can see what drives it uses and that its status is all good. I can set the mountpoint to, say, /mnt/zfs (which I’m afraid is confusing me a little), but then I run out of steam.

 

Basically all I want to use it for is to group 4 devices into, effectively, a RAID set and use it for a VM (if I learn more about ZFS then I might be brave enough to use it for more things like Dockers etc., but a single VM would be a start).

 

I wonder if someone with ZFS knowledge would mind doing a generic, idiot’s guide to the most common setups, so people like me could make use of this awesome plugin/technology.

 

These might include:

 

Configuring a zpool for a VM, on unRaid, start to finish.

Configuring a zpool for a general samba share, on unRaid, start to finish.

 

If all we’ve got to do is swap out our device names and pool names, then I think this would really help guys like me understand how the more general use cases work in the unRaid environment.

 

I know basic guides do exist (like level1techs), but they all seem to stop at the point where the pool is created, so I can’t progress past that point. I’d like to see someone take that pool, give it a real mount point and create a VM (on unRaid) using that same pool.
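For anyone following along, a minimal sketch of that kind of end-to-end setup might look like the following. Every name here (pool name, device names, mountpoint) is a placeholder, not a prescription:

```shell
# Group four devices into a striped pair of mirrors (RAID 10-style),
# mounted under /mnt/zfs ("vmpool", sdb..sde and the path are placeholders)
zpool create -m /mnt/zfs vmpool mirror sdb sdc mirror sdd sde

# Create a dataset to hold VM disk images
zfs create vmpool/vms

# Sanity check: pool healthy, dataset mounted where expected
zpool status vmpool
zfs list -o name,mountpoint vmpool/vms
```

From there, when creating the VM in unRaid's VM manager, the vdisk location can be pointed at the dataset's mountpoint (here /mnt/zfs/vms) instead of the default array path.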

 

Please understand I have tried to read as much as possible about the subject, but not being a Linux guy, even the main concepts are quite alien to me (and I’m sure others).

 

Any help would be much appreciated

 

Thanks in advance

 

 

 

 

Edited by Zoroeyes


I should be able to put something together for you next weekend if someone has not already done that by then :)


Sent from my iPhone using Tapatalk


Testing this setup on my r730xd box with 12x12tb drives, 2 ssds, and an optane drive.

For now I’m running unRaid on one USB drive and have another USB drive as one of the “data” drives. I added all of the 12tb drives to a zpool to test.

My question is: what is the long-term feasibility of doing this? I’d rather not waste any of my rust on the single required data drive. Thoughts?


In case it's helpful, I feel there are two use cases once you've got ZFS on unRaid. One is your array, where you can have multiple different-sized disks and, crucially for home users, they all power off individually when not in use - that's the standard unRaid array. The other is the data that you just don't ever want to lose through any kind of file corruption, like photos, important documents etc. - that's ZFS. So actually I run both. It does mean you have to lose two drives to parity, but many people do that anyway. I basically just run a simple single mirror on ZFS, and the rest on the unRaid array with XFS.

On 11/14/2019 at 12:34 AM, mytime34 said:

I am running into an issue with accessing the ZFS share from windows.

 

I am able to see the path to the ZFS share, but it says I do not have permission to create/delete, etc

I get the following error when I try to enable SMB share

root@Pughhome:~# zfs set sharesmb=on dumpster
cannot share 'dumpster': smb add share failed
cannot share 'dumpster/test': smb add share failed

 

Here is my SMB script

[global]
...
   usershare path = /dumpster
   usershare max shares = 100
   usershare allow guests = yes
   usershare owner only = no

[data]
path = /dumpster
browseable = yes
guest ok = yes
writeable = yes
write list =
read only = no

 

Did you find a solution for this? I am in the same boat...


I'm well experienced in SMB, though not so much in ZFS. However, I did this, and the first trick was that all the shares are set in the SMB Extras portion of the SMB settings within unRaid. I didn't actually need to do anything with ZFS. However, there were a few enhancements you could do with ZFS / SMB if you read the SMB manual. I don't think any of them are mandatory though. I think the misconception is that ZFS does the SMB sharing - it doesn't, it just has the capability to work with it built in. At least that's how I understand it.

Edited by Marshalleq

On 11/25/2019 at 10:36 PM, Zoroeyes said:

That sounds great steini84, appreciate it and will look forward to it.

Finally found some time and rewrote the original post with a small guide. Hope that helps

 

 

 

Edited by steini84

11 hours ago, Dtrain said:

Did you find a solution for this? I am in the same boat...

  

2 hours ago, Marshalleq said:

I'm well experienced in SMB, though not so much in ZFS. However, I did this, and the first trick was that all the shares are set in the SMB Extras portion of the SMB settings within unRaid. I didn't actually need to do anything with ZFS. However, there were a few enhancements you could do with ZFS / SMB if you read the SMB manual. I don't think any of them are mandatory though. I think the misconception is that ZFS does the SMB sharing - it doesn't, it just has the capability to work with it built in. At least that's how I understand it.

If I remember correctly, ZFS does not include an SMB server; it relies on an SMB configuration that is written to work with the ZFS tools. I want to steal this sentence: "let ZFS take care of the file-system and let Samba take care of SMB sharing." :)

 

Here you can find a great guide for SMB https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764
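Putting the two posts above together, the usual workaround is to leave `sharesmb` off and define the share directly in Samba. A hedged sketch follows: "dumpster" is the placeholder pool name from the quoted post, and the assumption that unRaid's SMB Extras section is persisted in /boot/config/smb-extra.conf is mine (normally you would paste the stanza into the GUI rather than edit the file):

```shell
# Let ZFS handle the filesystem only - turn its SMB integration off
zfs set sharesmb=off dumpster

# Define the share in Samba itself via unRaid's SMB Extras
# (/boot/config/smb-extra.conf is an assumed persistence path)
cat <<'EOF' >> /boot/config/smb-extra.conf
[data]
   path = /dumpster
   browseable = yes
   guest ok = yes
   writeable = yes
EOF

# Ask the running Samba daemons to re-read their configuration
smbcontrol all reload-config
```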

Edited by steini84

12 hours ago, steini84 said:

Finally found some time and rewrote the original post with a small guide. Hope that helps

 

 

 

That looks excellent steini84, exactly what I was looking for (hopefully others too). Thanks for taking the time to put it together.


Okay, ignore my previous post, I gave up on 9p. Someone on the ubuntu forums showed me a guide for mounting nfs shares in fstab, and I was able to get that working.

 

I have a new question now: how do I get ZFS to auto-publish the ZFS shares on boot and on array start? My VMs seem to start up before the shares are available, which doesn't make them happy. If I enter "zfs share -a" into the unRaid console and restart the VMs, they all work fine.

 

I've tried:

  • using the user script addon to run "zfs share -a" on array start, but it either doesn't work, or runs too late
  • adding "zfs share -a" to the /boot/config/go (not 100% sure I did this correctly though)
  • manually adding the shares to /etc/exports, but it got overwritten the next time I restarted unRaid.

I'm out of ideas, and could use some help if someone would be kind enough.


I also did not figure out the 9p problem, but great that you have figured out NFS.

You could try adding

zfs share -a

after "zpool import -a" in your plugin file and try restarting:

nano /boot/config/plugins/unRAID6-ZFS.plg

If that works for you, I could add that to the plugin :)

...or you could try to delay the startup of the VMs until your command has run in the go file, or even start the VMs from the go file?
Here is some discussion about this topic

https://forums.unraid.net/topic/78454-boot-orderpriority-for-vms-and-dockers/
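A hedged sketch of the go-file approach suggested above: since /boot/config/go runs early in boot, before pools are imported, the `zfs share -a` has to wait until a pool actually exists. The 10-second polling interval is an arbitrary assumption:

```shell
# Append a background wait-then-share loop to unRaid's boot script
cat <<'EOF' >> /boot/config/go
# Publish ZFS shares once the pool(s) have been imported
(
  # Loop until at least one pool shows up in `zpool list`
  while [ -z "$(zpool list -H -o name 2>/dev/null)" ]; do
    sleep 10
  done
  zfs share -a
) &
EOF
```

Running the loop in a backgrounded subshell keeps it from blocking the rest of the boot sequence; it simply fires `zfs share -a` once, as soon as an imported pool is visible.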


Sent from my iPhone using Tapatalk


I think it's prudent to add - ZFS does not publish shares, it adds configuration so that unRaid can publish that configuration via its own SMB implementation. Also, I don't recall having to do anything to publish my shares at boot - I've seen a few people say that they need to do something and have always been confused by that. But since I did shift some existing shares from the unRaid array to ZFS, I did have to first remove the shares from the unRaid config to get them to work. I also had to make sure the file permissions were right on the ZFS files. But other than that, they do seem to work automatically.
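On the permissions point: unRaid shares conventionally expect files owned by nobody:users with 0775/0664 modes, so moving data onto a ZFS dataset usually means resetting ownership once. A sketch, with "/mnt/zfs/share" as a placeholder dataset mountpoint (the nobody:users convention is an assumption about unRaid's defaults):

```shell
# Reset ownership to unRaid's conventional share user/group
chown -R nobody:users /mnt/zfs/share

# Directories 0775, files 0664 - the usual unRaid share permissions
find /mnt/zfs/share -type d -exec chmod 775 {} +
find /mnt/zfs/share -type f -exec chmod 664 {} +
```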


How is the stability of ZFS under UnRAID?

How about CIFS speeds, using 10gbe?

 

Was just about to move away from UnRAID to FreeNAS for the striped array speed increases when I noticed this.

Really like the UnRAID ecosystem, but the array write speed is a killer for me; using striped disks via ZFS might plug that gap!

 


Stability has been great for me, running the same pool for 4 years.

I can't speak to 10Gb speeds since I'm only on 1Gb, but it's just straight-up ZFS on Linux, so it's rather a Linux vs FreeBSD question.

ZFS on unRaid does not work for array drives, so if you want to go all in on ZFS, IMO FreeNAS makes more sense.

Is a cache drive not enough for you (do all the slow writes during off hours)?


Sent from my iPhone using Tapatalk


You don't have to have a standard unRaid array running as far as I know. Anything that works on ZFS on Linux will work here - so yes, if you're after a striped array, that would work too. So to this end, ZFS does work for array drives - just not unRaid array drives. FreeNAS doesn't necessarily make sense because, as you say, the unRaid ecosystem is better. FreeNAS having a much better GUI for ZFS is probably the biggest difference. But also, being BSD, you'll have to learn about FreeBSD jails as the Docker equivalent (no native Docker) and also bhyve for virtual machines, which it seems people complain about a lot. There can also be driver issues if you have anything but fairly standard hardware, because the Linux kernel drivers are going to be more plentiful than in FreeBSD.

 

For these reasons, if unRaid doesn't suit you, I'd actually consider Proxmox before FreeNAS. Proxmox supports ZFS as well as other standard filesystems and arrays (ZFS doesn't do RAID 5 reliably, for example) and has a more enterprise feature set, which is nice: proper VM backups, Docker, and LXC, which is very cool (think a whole Ubuntu distro in 2MB). That's my 2c anyway.

 

For me, I nearly moved away from unRaid too, but adding ZFS kept me going. Believe it or not, the last version kept killing my disks for some reason, so ZFS gave me the security, and the less important stuff is on the unRaid array.

 

Hope that helps a little.

Edited by Marshalleq


I built out a new unRaid server several years ago to replace a CentOS 7 server running Docker Compose and several ZFS RAIDZ2s, which is the ZFS equivalent of RAID 6. Since I was using new drives for unRaid, the ZFS plugin was key to me making that choice, so I could easily hook up those enclosures, mount those filesystems, and just copy over all my content.

 

As was stated above, ZFS on unRaid is the same ZFS as on any other Linux distro. As long as you are comfortable with the CLI, it should be all good.

 

I run several ZFS production systems at work. Some are multiple HDD RAIDZ2 vdevs pooled together for almost half a PB of storage. That's been running stable for 3-4 years. We have more important DB servers running mirrored HDD pools with SSD caching that we use for the snapshotting. Those have also been running 3-5 years, many of them on two bonded 10G NICs. Many of these are just on the stock CentOS 7 kernel, which is still 3.10.x. We recently upgraded the kernels on some of those to the latest stable 5.3.x kernel so we could do some testing with some massive mirrors (24 x 2 on 12G backplanes) with NVMe caching (we needed the improved NVMe support in 5.3.x), and the performance has been incredible.
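For readers curious what a mirrored pool with NVMe caching looks like at the command level, here is a hedged sketch along those lines; pool and device names are placeholders, not the poster's actual layout:

```shell
# Striped mirrors ("dbpool", sda..sdd are placeholders) with an NVMe
# device as L2ARC read cache
zpool create dbpool \
  mirror sda sdb \
  mirror sdc sdd \
  cache nvme0n1

# Optional: a mirrored NVMe SLOG to absorb synchronous write latency
zpool add dbpool log mirror nvme1n1 nvme2n1

# Verify the vdev layout, cache and log devices
zpool status dbpool
```

The cache (L2ARC) device mainly helps read-heavy workloads whose hot set exceeds RAM; the mirrored log (SLOG) only matters for synchronous writes, such as NFS or databases.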

 

In 4 years we had one issue come up where performance went to shit, and we needed to try a reboot quickly to get the system back online, so we weren't able to determine whether it was a ZFS or NFS issue - but all was good after a reboot.

 

Probably more info than you needed, but I wanted to answer your 10G question and put something in this thread for people to read later about what I did personally, and what our company has done, with great results, with ZFS on Linux.

 

Cheers,

 

-dev

