ZFS plugin for unRAID


steini84


On 7/13/2020 at 4:10 AM, testdasi said:

Figured it out. No need to mount through /etc/fstab.

 

What's missing are entries in /etc/mtab, which are created automatically when mounting from fstab.

So a few echoes into /etc/mtab are the solution. You just need to do this at boot.

Each filesystem that is accessible by SMB (even through symlinks) needs a line in mtab to stop the spurious warning spam.


echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
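That one-liner generalises to a small boot script. The sketch below is only an assumption-laden helper (the function name `add_mtab_entries` and the pool layout are made up): it reads name/mountpoint pairs, as produced by `zfs list -H -o name,mountpoint`, and appends an mtab line for each filesystem that doesn't already have one, so it stays idempotent across re-runs:

```shell
#!/bin/bash
# Append an mtab entry for every ZFS filesystem that lacks one.
# The mtab path is a parameter so the helper can be tried against a
# scratch file first; on unRAID you would point it at /etc/mtab from
# the go file or a User Scripts job.
add_mtab_entries() {
  local mtab="$1"
  # Expects tab-separated "name<TAB>mountpoint" pairs on stdin
  while IFS=$'\t' read -r name mountpoint; do
    # Skip legacy-mounted filesystems: those go through fstab anyway
    [ "$mountpoint" = "legacy" ] && continue
    # Don't add a duplicate line if the mountpoint is already listed
    grep -q " $mountpoint zfs " "$mtab" 2>/dev/null && continue
    echo "$name $mountpoint zfs rw,default 0 0" >> "$mtab"
  done
}
```

Wired up at boot it would be something like `zfs list -H -o name,mountpoint | add_mtab_entries /etc/mtab`.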

 

 

Dude, I know this is an older post, but you just saved my syslog!


Thanks for following up. I am getting errors in the syslog, but I get these whether I mount via the script or maintain /etc/mtab:

 

Tower emhttpd: error: share_luks_status, 5975: Operation not supported (95): getxattr: /mnt/user/nas

 

I believe the error is caused by mounting under a /mnt/user share vs. directly under /mnt. If I mount under /mnt by either method, the error goes away. The pool does not have encryption enabled, so I am wondering if it is an incompatibility from mounting ZFS under the XFS-backed user share.

 

Still playing around with a solution, but apart from the above message, I am not seeing any issues.

 

Any feedback is greatly appreciated.


Yeah, it seems fairly obvious to avoid the unraid array mount for a different brand of raid, i.e. a ZFS pool - we don't know how the secret sauce of unraid's array really works. Anyway, either way, you can put it straight under /mnt; that's what I do. And just ignore the warning in Fix Common Problems if it comes up.

 

So /mnt/data

/mnt/data1

 

or /mnt/Samsung500G

 

are good examples.
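For what it's worth, those paths are just the ZFS mountpoint property; a sketch with made-up pool names:

```shell
# Mount a pool's top-level filesystem straight under /mnt
# ("tank" is a placeholder pool name)
zfs set mountpoint=/mnt/data tank

# Or give an individual pool/dataset its own path
zfs set mountpoint=/mnt/Samsung500G samsung
```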

 

Coming from Windows about 15 years ago, it took me a while to figure out what I should call these things. That's what I came up with after talking to an ex-Linux admin.

Edited by Marshalleq

Agreed. I am currently using ZFS on my main home server running Debian Buster with mirrored boot and root, but I really like the way unRAID manages VMs and Docker rather than relying on Portainer and KVM directly. I haven't hit any functional issues on my unRAID server, but I would rather err on the side of the supported layout and will mount directly under /mnt.

 

I am more prosaic: /mnt/nas for storage and /mnt/fastdisk for SSDs B|

 

Thanks again


Hey, I just wanted to share my experiences with ZFS on unraid now that I've migrated all my data (don't use the unraid array at all now).

 

The big takeaway is the performance improvements and simplicity are amazing. And I'm not just talking about Unraid's Achilles heel - throughput - more details below.

 

Why

Due to ongoing stability issues I couldn't track down, I ended up buying an IBM P700 with twin Xeons and 96GB RAM. I figured this would cover my production software such as Nextcloud, Plex, WordPress and so on (the things that have customer-facing services).

 

The big challenge was that the disks remained on the other box, so stability issues there would impact my production instance. The P700 only has 4 official 3.5" disk slots, plus some 5.25" bays I could use to increase that if desired (I didn't).

 

Solution

My solution was to sell the multitude of 8TB disks I had and buy (through a deal on an auction site) some 6-month-old 16TB EXOS disks. That gives 48TB usable, which is more than enough, with 4.5 years of warranty remaining. Deciding whether to make the array ZFS or unraid meant weighing up the loss of individual disks powering down, mixed-size disks in an array, and the easier expansion unraid offers (compared to ZFS).

 

Benefits

It was a big decision in a way, but now I'm reaping the somewhat unexpected benefits: improved performance, a production box with storage, and a play box that does GPU passthrough, back-end automation and such, which can be rebooted without issue.

 

One of the most surprising benefits was the increased speed of Plex library scanning. This was not something I was expecting, nor thought was possible at all. On the unraid array it would take a significant amount of time to complete a manual scan of the library. On ZFS, the scan is sub 5 seconds! I can only guess this is some clever in-memory directory/metadata caching within ZFS (presumably the ARC). I must go and read up to find out about that.

 

Things I've noticed so far include:

  1. Seriously fast directory scanning, e.g. Plex
  2. No spin-up delay
  3. Faster throughput
  4. General system responsiveness improvement
  5. Only four, more modern drives, which don't use much (if any) more power despite being spun up more

 

Other optional benefits are obvious:

  1. Variable record sizes for optimised storage/speed (e.g. a media dataset can have 1MB records while a documents dataset can have 8K)
  2. The NVMe read-caching algorithm (L2ARC) enables VMs to run performantly from HDD rather than SSD if desired
  3. Can migrate away from Unraid to something else if desired and keep my disk pool as-is, e.g. Proxmox / FreeNAS / TrueNAS SCALE
  4. Throughput
  5. Snapshots
  6. Compression
  7. Encryption
  8. Send/receive backup option
  9. Super reliable CoW filesystem, unlike BTRFS in my opinion
  10. Best-in-class data integrity; Unraid's array can't even come close to that
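To illustrate point 1, record size is a per-dataset property set with one command each; the pool and dataset names below are hypothetical:

```shell
# Large records suit big sequential media reads
zfs set recordsize=1M tank/media

# Small records suit documents and random small-file IO
zfs set recordsize=8K tank/documents

# Verify the settings
zfs get recordsize tank/media tank/documents
```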

 

Downsides

  1. Upgrade options are more limited: I can either add an extra mirror, or upgrade all 4 drives to larger ones when that is eventually needed
  2. With a lot of drives, the power bill may be higher due to the lack of individual drive power-down, but newer helium-filled drives reduce this concern, and given I've gone from 11 drives to 4 it is unlikely to be an issue

Something cool happened while migrating data: one of the drives got unplugged accidentally (dodgy cable). When I noticed, I just shut down the system, rebooted, and it automatically resilvered the drive back to a known good state in 3 minutes. If this had been unraid (or most other arrays) it would have had to write the whole 16TB again. You gotta love ZFS.

 

Anyway, that's my thoughts so far.

 

Oh yes, to get around the requirement that Unraid won't start Docker or VMs without an unraid array started, I just pointed it at a cheap USB drive and put the array on that. It works well and I have been doing that for a few months now with no issues. My hope is that this requirement will change in the next version of unraid.

 

Marshalleq

 

Edited by Marshalleq
4 hours ago, Marshalleq said:

Hey, I just wanted to share my experiences with ZFS on unraid now that I've migrated all my data (don't use the unraid array at all now).

 

The big takeaway is the performance improvements and simplicity are amazing.

I've been running my VMs off a pair of old 2TB spinners using ZFS. I have been amazed that it just works, and I don't notice the slowness of the spinners for running VMs. The reason I switched was the snapshot backups: I love the ability to just roll my Windows VM back to a known good state. I look forward to having ZFS baked more closely into unRaid.
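For reference, that known-good-state workflow is just two commands; the dataset and snapshot names here are made up:

```shell
# Snapshot the Windows VM's dataset while everything is healthy
zfs snapshot tank/vms/windows@known-good

# Roll back to it later (this discards all changes made since the snapshot)
zfs rollback tank/vms/windows@known-good
```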

 

I still have one VM on the SSD with BTRFS, but I can't see any speed benefit of the SSD compared with the ZFS spinners. I have a Xeon 2670 with 64GB of ECC RAM; less RAM and/or non-ECC RAM may not be such a good option.

 

I can see benefits via a 2 stage storage system in the future, with ZFS for the speed, and unRaid array for the near line storage that can be spun down most of the time. 

1 minute ago, tr0910 said:

I can see benefits via a 2 stage storage system in the future, with ZFS for the speed, and unRaid array for the near line storage that can be spun down most of the time. 

Yeah, I used to run it that way, but right now the performance aspects of using ZFS will keep me from going back there for a while.  I really forgot what proper storage was meant to perform like.  I mean, I know unraid has its place, and its idea is really well tailored to a certain market, but it really does have some performance-related challenges that you sort of learn to live with after a while.

 

I just hope that there will actually be a way to run it without starting the unraid array. I don't want ZFS to just be a supported option for a single disk inside the unraid array; that'd suck. :)

 

Hopefully they will allow us to set up in either fashion and not restrict us to the original unraid array.

8 hours ago, Marshalleq said:

Yeah, I used to run it that way, but right now the performance aspects of using ZFS will keep me from going back there for a while.  I really forgot what proper storage was meant to perform like. 

Well, in your case you want all your storage in the fast zone. I also want ZFS to continue to work and VMs to continue running even if the unRaid array is stopped and restarted. Then unRaid would be perfectly able to run our firewalls, such as pfSense, without the "your firewall shuts down if the array is stopped" problem.

Edited by tr0910

I agree, but it makes it a bit difficult if you are passing through Unraid storage to the VM or Dockers. 

 

Maybe a change could be made to the KVM/Docker service to allow VMs/containers that are not dependent on anything in /mnt/user to run even if the array is off.

 

That would indeed mean that pfSense/Home Assistant etc. could keep running even if you had to do some maintenance and stop the array. I guess there is more to this, such as how the network stack is set up, but where there is a will there is a way.

 

My hope is that the native ZFS that is "Coming Soon TM" will add some good GUI presets, but keep feature parity with this plugin - that is, a vanilla version of ZFS under the hood that can be used without restriction.

7 hours ago, tr0910 said:

Well, in your case you want all your storage in the fast zone. I also want ZFS to continue to work and VMs to continue running even if the unRaid array is stopped and restarted. Then unRaid would be perfectly able to run our firewalls, such as pfSense, without the "your firewall shuts down if the array is stopped" problem.

I guess it was a long article and I may not have been the clearest, but I didn't set out to have all my storage in the fast zone; it was a punt that has worked out much better than I imagined. The main point is that the sluggish parts of unraid are no longer sluggish.

 

I'm not sure what you mean by the second point, but ZFS does continue to work without the array, and I don't need to stop the unraid array since it's just running on a USB stick with nothing on it. So VMs and dockers run fine. They all run off ZFS though.

6 hours ago, steini84 said:

I agree, but it makes it a bit difficult if you are passing through Unraid storage to the VM or Dockers. 

Even when I did run unraid's array, I always ran VMs and dockers on ZFS without difficulty - just point them at the path and set the default path in the settings. Making shares is quite different though: you have to use the Samba extra config in settings, but that is one of the simplest configs in the Linux world IMO.
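For anyone curious, that Samba extra configuration really is short. A minimal sketch (the share name, path and user below are placeholders) that could go in Settings → SMB → Samba extra configuration:

```ini
[media]
    path = /mnt/tank/media
    browseable = yes
    read only = no
    valid users = youruser
```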

 

6 hours ago, steini84 said:

Maybe a change could be made to the KVM/Docker service to allow VMs/containers that are not dependent on anything in /mnt/user to run even if the array is off.

Why do you want them in /mnt/user? I've never ever put them in there, even pre-ZFS. Just copy them to ZFS and point everything there.

 

6 hours ago, steini84 said:

That would indeed mean that pfSense/Home Assistant etc. could keep running even if you had to do some maintenance and stop the array. I guess there is more to this, such as how the network stack is set up, but where there is a will there is a way.

The main thing right now is that there are certain (less used) settings that require the array to be stopped in order to change. That DOES unfortunately require VMs and dockers to be stopped, though ZFS continues to run, I think. But because the array in my case is a dummy array on a USB stick, stopping it is much, much faster (like 5 seconds or so). This is actually part of what I was writing about: the sluggish problems of unraid disappear.

 

6 hours ago, steini84 said:

My hope is that the native ZFS that is "Coming Soon TM" will add some good GUI presets, but keep feature parity with this plugin - that is, a vanilla version of ZFS under the hood that can be used without restriction.

Yeah that sounds good.

Yeah that sounds good.

Just to be clear, I meant for example Plex in a docker accessing /mnt/user/Movies.

I keep everything ZFS under /mnt/SSD, including docker.img, libvirt.img, VMs and docker data, without problems. I was just talking about media that is shared into the VMs/dockers from /mnt/user/, e.g. music, movies or TV shows.



Hey!  You know I wasn't even looking at who I was replying to - didn't realise it was you lol.

 

So you're saying you don't want to have to go through all the dockers and repoint their host paths to a new location?  It takes a little while but wasn't too bad.

 

This article was meant to be a sort of 'not saying it's for everyone, but hey, I went all ZFS and this is what I found'. The number one thing I notice is that all the sluggish stuff is gone. I think we don't always realise the impact of all those drive spin downs and whatnot.

 

I'm also quite happy that if I ever get the hump with it, I can move it to TrueNAS SCALE or Proxmox.

 

The only real disadvantages are the lack of drive power-down and less flexibility in drive expansion.

 

Thanks for the RC2 update. :)


No, I just wish all my standalone dockers and VMs could stay up even though the array was stopped. But to be fair, I really seldom have to mess with the array.

But I don't think this is really a problem. I have had this running for over 5 years now without a single hiccup and 0 bytes of data loss. And with the daily ZFS replication I had 3 months of daily backups I could go back to if I needed any old configs etc.

My point being, this started as a way for me to make the perfect setup for myself, and I'm happy that it has helped others. It's funny to look back at the beginning and see the first reply to this thread. It took some time for him to find time to play with it, but when ZFS becomes an official part of unraid I will open a small bottle of Champagne.




I don't know if this is an issue with rc2 or the ZFS plugin, but I updated yesterday to rc2 and the plugin as well, and afterwards my lancache container was no longer able to cache anything (write cached data to the ZFS array). As soon as I downgraded to rc1 it worked perfectly again. I don't know where to post this, but here seems like a good starting point. If any additional information is needed, I'll try to provide it to help investigate the problem.


Hi, I'm a newbie to ZFS and Unraid. I had an old machine running FreeNAS/ZFS that broke, and I have built an Unraid machine. Can I just move the disks from the FreeNAS/ZFS box to Unraid? I saw someone here do exactly that ("imported the zfs pool and Magic, my pool is visible. Now im copying the data to my unraid drives"), but I don't know how to do it. Could someone give me some details?

Thanks a lot!


Hi, as far as I know, yes - provided of course you have the unraid ZFS plugin installed. I assume you won't have to upgrade the ZFS version on the disks first, and I would certainly try not to, in case you need to go back to ZFS on FreeNAS. The main thing is that you have to import your pools; I'm pretty sure the plugin will do that automatically. If not, I think it's something like zpool import -a or similar.
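For anyone following along, the command sequence would look roughly like this. This is a sketch, not a tested recipe - the pool name xxpool is just a placeholder:

```shell
# List importable pools found on the attached disks (changes nothing)
zpool import

# Import every pool that was found; add -f if FreeNAS
# didn't export the pool cleanly before shutdown
zpool import -a

# Sanity-check the pool and see where its filesystems mounted
zpool status xxpool
zfs list -r xxpool
```

Deliberately not running zpool upgrade keeps the pool readable by the old FreeNAS install, should you need to move the disks back.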

19 hours ago, Marshalleq said:

Hi, as far as I know, yes - provided of course you have the unraid ZFS plugin installed. I assume you won't have to upgrade the ZFS version on the disks first, and I would certainly try not to, in case you need to go back to ZFS on FreeNAS. The main thing is that you have to import your pools; I'm pretty sure the plugin will do that automatically. If not, I think it's something like zpool import -a or similar.

Thanks! It's working. I ran "zpool import -a", then found the pool at /mnt/xxpool, then copied the data.


Marshalleq, I really liked your post on the pros and cons of migrating all your data from unraid's pool to a strictly ZFS pool.

 

I really like the explanations, the structure and the general things you pointed out. It would make an amazing post on reddit, just saying ;)

 

I would like to know what is up with the drives not going into standby. I have a 40-slot disk shelf full of 3 TB disks and it's getting hot in here - I have to open my window even when it is -20C outside. I can't imagine how I will be able to handle that during summer. I would really need to change the drives, get AC running all the time, or go to TrueNAS SCALE, which I might end up doing anyway in the long run.

I would really like to keep going on Unraid; it's been like 4-5 years I've been using it.

Isn't there any way or plugin available to put the disks to sleep? :( I kinda regret buying my disk shelf just because of that simple issue.

 

Any help would be really appreciated

Edited by FLiPoU

Not sure if you're running ZFS or not - but if not, I think spin-down will work, assuming you're not hit by one of those SAS spin-down issues. If you are running ZFS, I do believe it will largely keep your disks up, and you'll have to rely on their internal idle mechanisms for power/heat reduction.


Did I miss something, or when will the plugin be updated? A new version came out 2 days ago: https://github.com/openzfs/zfs/releases/tag/zfs-2.0.1

It should fix some problems with 6.9 RC2 and some Docker containers (I wrote about my problem a few posts above, and in the Discord server of the containers the problem got a bit more attention there). So it would be good to get an update soon, so RC2 works without problems.

