ZFS plugin for unRAID


steini84

Recommended Posts

Hi, thanks; initially I thought you were filtering by attributes, however, the filter is implemented by name. I'm under the impression that your docker folder is the root of your RaidZ2, is that correct? If that's the case, you should find a regular expression that matches the names of the folders, maybe something like "/^RaidZ2\/([0-9A-Fa-f]{2})+$/", but really, you should move the docker folder to a dataset of its own. 
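
To preview which dataset names a filter like that would match before relying on it, you can run the same pattern against zfs list output. This is only a sketch; it assumes the pool is actually named RaidZ2 and that the legacy Docker layer datasets have hexadecimal names, as above:

# List all dataset names without headers and keep only hex-named children of RaidZ2:
zfs list -H -o name | grep -E '^RaidZ2/([0-9A-Fa-f]{2})+$'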

Edited by Iker
Link to comment
2 hours ago, Iker said:

Hi, thanks; initially I thought you were filtering by attributes, however, the filter is implemented by name. I'm under the impression that your docker folder is the root of your RaidZ2, is that correct? If that's the case, you should find a regular expression that matches the names of the folders, maybe something like "/^RaidZ2\/([0-9A-Fa-f]{2})+$/", but really, you should move the docker folder to a dataset of its own. 

Alright, I moved all the files to a dataset, but the legacy datasets remained, even after completely moving everything away from the ZFS pool. Any idea or command on how to quickly delete/destroy all those legacy datasets?

Link to comment
16 minutes ago, Joly0 said:

Alright, I moved all the files to a dataset, but the legacy datasets remained, even after completely moving everything away from the ZFS pool. Any idea or command on how to quickly delete/destroy all those legacy datasets?

 

Maybe it's a better idea to set the docker folder to another location from the GUI (reinstalling every Docker container is mandatory); with ZFS as the underlying system, it's not just a matter of moving the files. Once that is done, this should be enough:

 

zfs list -o name,mountpoint | grep legacy | awk '{printf "zfs destroy -frR %s\n", $1}'

 

The output is all you need to destroy those remaining datasets. Be extremely careful and review every single line, including the path; the command has no filters whatsoever.
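
If you want an extra safety net, one option (just a sketch, the file path is arbitrary) is to write the generated commands to a file, review it line by line, and only then execute it:

# Generate the destroy commands into a file instead of running them:
zfs list -o name,mountpoint | grep legacy | awk '{printf "zfs destroy -frR %s\n", $1}' > /tmp/destroy_legacy.sh
# Review every single line before doing anything destructive:
cat /tmp/destroy_legacy.sh
# Only once you are sure, run it:
sh /tmp/destroy_legacy.sh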

Edited by Iker
Link to comment
On 10/9/2021 at 7:50 PM, BasWeg said:

 

I've done this via dataset properties.

To share a dataset:

zfs set sharenfs='rw=@<IP_RANGE>,fsid=<FileSystemID>,anongid=100,anonuid=99,all_squash' <DATASET>

<IP_RANGE> is something like 192.168.0.0/24, to restrict rw access. Just have a look at the NFS share properties.

<FileSystemID> is a unique ID you need to set. I started with 1 and increased the number with every shared dataset.

<DATASET> is the dataset you want to share.

The magic was the FileSystemID; without setting this ID, it was not possible to connect from any client.

 

To unshare a dataset, you can easily set:

zfs set sharenfs=off <DATASET>
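
As a concrete illustration (a sketch only, using a hypothetical dataset tank/media and a single /24 network), the commands end up looking like this:

zfs set sharenfs='rw=@192.168.0.0/24,fsid=1,anongid=100,anonuid=99,all_squash' tank/media
# Verify the property that was applied:
zfs get sharenfs tank/media
# Confirm the kernel NFS server actually exports it:
exportfs -v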

 

 

OK, so I followed that, and below is what my share looks like. I can mount the NFS share from the networks listed there, but I cannot place any files in it. What did I miss?

 

root@UnRAID:~# zfs get sharenfs citadel/vsphere
NAME             PROPERTY  VALUE                                                                             SOURCE
citadel/vsphere  sharenfs  [email protected]/24,[email protected]/24,fsid=1,anongid=100,anonuid=99,all_squash  local

 

Link to comment
28 minutes ago, Xxharry said:

 

OK, so I followed that, and below is what my share looks like. I can mount the NFS share from the networks listed there, but I cannot place any files in it. What did I miss?

 

root@UnRAID:~# zfs get sharenfs citadel/vsphere
NAME             PROPERTY  VALUE                                                                             SOURCE
citadel/vsphere  sharenfs  [email protected]/24,[email protected]/24,fsid=1,anongid=100,anonuid=99,all_squash  local

 

 

Does the user 99:100 (nobody:users) have the correct rights in the folder citadel/vsphere?
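
If not, here is a quick way to check and, if needed, grant those rights on the dataset's mountpoint (a sketch; it assumes the dataset is mounted at /citadel/vsphere, adjust to the real mountpoint):

# Where is the dataset mounted, and who owns the directory?
zfs get mountpoint citadel/vsphere
ls -ld /citadel/vsphere
# Give nobody:users (uid 99, gid 100) ownership and write access, if that's acceptable for this share:
chown -R 99:100 /citadel/vsphere
chmod -R u+rwX,g+rwX /citadel/vsphere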

Link to comment
5 hours ago, ensnare said:

When I upgrade to 6.10.0-rc2g ZFS downgrades to 2.0.6 (from 2.1.1). 

That's actually because OpenZFS tagged release 2.0.6 as the latest (see here) and my server always pulls the version number that is tagged as latest.

 

Implemented a workaround for now and triggered the 2.1.1 build for 6.10.0-rc2g.

Link to comment

Has anyone had issues where, while doing a preclear, their pool gets r/w/cksum errors? It seems to occur for me when it gets into the zeroing phase, and then my stuff starts going nuts.

 

Edit: nm, I think I was having some PSU problems that caused read/write errors in my pool. Seems to have resolved after purchasing a new PSU. 
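
For anyone who runs into something similar, a sketch of how to inspect and, once the hardware issue is resolved, reset those error counters (assuming a hypothetical pool named tank):

# Show per-device read/write/checksum error counters and any affected files:
zpool status -v tank
# After fixing the underlying hardware problem, clear the counters:
zpool clear tank
# A scrub afterwards verifies the data is still intact:
zpool scrub tank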

Edited by muddro
Update
Link to comment
15 hours ago, ich777 said:

That's actually because OpenZFS tagged release 2.0.6 as the latest (see here) and my server always pulls the version number that is tagged as latest.

 

Implemented a workaround for now and triggered the 2.1.1 build for 6.10.0-rc2g.

Thanks, and thanks for your work on this plugin. It allowed me to switch from TrueNAS!

Link to comment

Hey all, need some advice.

 

I'm looking into setting up ZFS on Unraid, but considering it's most likely going to be officially supported in 6.11, I'm not sure if starting now will be an issue, or if I'd have to wipe all my data to use the official implementation in the future. I have 8TB of data, and that's a lot to move around. Would it be fine to set it up now, or would it be wise to wait for the official Unraid implementation?

Link to comment

Great question. Yes, it is possible to have features in a newer version of ZFS that don't work in an older version, so what you describe is possible. However, if you stick to stable versions, I'd say there's little chance of this being an issue given the timeframes involved, and maybe someone on here will explain how to keep to a specific stable version of the plugin, so you could e.g. pin it to one version. It's very unlikely that when Unraid DOES add ZFS it will be older than the current stable version, given their attentiveness to staying current.

 

And if that scenario were to occur, you can bet they'd be working with the makers of this plugin to cover all the scenarios. I'd say stick to stable and you'll be good. Someone else may chip in with something I haven't thought of, though.

Link to comment
14 minutes ago, Marshalleq said:

Great question. Yes, it is possible to have features in a newer version of ZFS that don't work in an older version, so what you describe is possible. However, if you stick to stable versions, I'd say there's little chance of this being an issue given the timeframes involved, and maybe someone on here will explain how to keep to a specific stable version of the plugin, so you could e.g. pin it to one version. It's very unlikely that when Unraid DOES add ZFS it will be older than the current stable version, given their attentiveness to staying current.

 

And if that scenario were to occur, you can bet they'd be working with the makers of this plugin to cover all the scenarios. I'd say stick to stable and you'll be good. Someone else may chip in with something I haven't thought of, though.

 

I ask because currently the plugin works through Unassigned Devices, and if it eventually switches so that the array can be on ZFS, I'm wondering whether that would cause an issue where I'd need to wipe my data.

Link to comment
3 hours ago, asopala said:

 

I ask because currently the plugin works through Unassigned Devices, and if it eventually switches so that the array can be on ZFS, I'm wondering whether that would cause an issue where I'd need to wipe my data.

 

It's a fair question; however, unRaid isn't going to customize ZFS, just offer it as an option for the array file system. The most likely (and easiest) scenario is that you only have to export & import your pool, and the information there will stay intact. Take, for example, SpaceInvaderOne's videos about exporting a pool from unRaid and importing it in TrueNAS.
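
For reference, the export & import round trip is just two standard commands (the pool name here is a hypothetical example):

# On the old system, or before the upgrade; unmounts the datasets and releases the pool:
zpool export mypool
# On the new system, list importable pools, then import by name:
zpool import
zpool import mypool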

  • Like 1
Link to comment
2 hours ago, Iker said:

isn't going to customize ZFS, just offer it as an option for the array file system

 

Is this confirmed anywhere?  Seems more likely to me it will be offered as an alternative pool file system, not an array replacement?

Edited by jortan
Link to comment
22 minutes ago, Iker said:

Probably; in the list of features for 6.11 it's mentioned that array devices could be formatted as ZFS, but, at least for me, it doesn't change anything; it's just an import/export operation.

 

It's mentioned that ZFS will be an option for storage pools, not the array:

 

ZFS File System

Ever since the release of Unraid 6, we have supported the use of btrfs for the cache pool, enabling users to create fault-tolerant cache storage that could be expanded as easily as the Unraid array itself (one disk at a time).  Adding ZFS support to Unraid would provide users with another option for pooled storage, and one for which RAID 5/6 support is considered incredibly stable (btrfs today is most reliable when configured in RAID 1 or RAID 10).  ZFS also has many similar features like snapshot support that make it ideal for inclusion.

 

Link to comment
It's mentioned later on the same page… and a lot on the next pages; however, as I mentioned, it doesn't matter, the point is the same: it's just an import/export operation. The important part, and what unRaid brings into the game, is the GUI.
Are you really sure about this?
I think this is just a guess that it will work like this.

Sent from my C64


  • Like 1
Link to comment

From what I understand about the way the unRAID array works, I don't think it matters what's actually on the drives; the parity just calculates at the block level and that is that.  I've run an unRAID array before that had a mix of btrfs and xfs within the single array.

 

Also, I'd say there is zero chance that the Unraid folk are going to include ZFS for the people that want it and then tell those same people that they can't use their existing pools; that'd just be stupid.

 

Where I think there will be some ambiguity is in exactly what and how the GUI manages native ZFS pools.  Whether Unraid actually makes a whole GUI for something that is essentially not their core (and particularly whether they do this right in the first version) remains to be seen.  What would be nice is just a little native integration so that ZFS is not a second-class citizen for Unraid features like cache, docker images, disk status, thermal reporting and such.  Heck, even a scheduled scrub option would be cool.
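
In the meantime, a scheduled scrub is easy to do outside the GUI; a minimal sketch, assuming a pool named tank (on Unraid this could equally be a User Scripts job rather than a raw crontab entry):

# Run a scrub manually and check on it:
zpool scrub tank
zpool status tank
# Or schedule it, e.g. at 03:00 on the 1st of every month (assumes zpool is on cron's PATH):
0 3 1 * * zpool scrub tank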

 

That's my 2c anyway, I'll even take a simple 'we include the binary now' as a first step.

Edited by Marshalleq
  • Like 1
Link to comment
3 hours ago, Marshalleq said:

From what I understand about the way the unRAID array works, I don't think it matters what's actually on the drives; the parity just calculates at the block level and that is that.  I've run an unRAID array before that had a mix of btrfs and xfs within the single array.


Yes, and that's the secret sauce of unRAID: the ability to add redundancy to an array of dissimilar disks via block-level, RAID5-equivalent parity (even when those disks contain different file systems) and combine them into a single virtual filesystem.

 

But it's for this reason that it simply makes no sense to have ZFS pools inside unRAID's array: they're trying to do similar things in completely different, mutually exclusive ways.  In the same way, it makes no sense to have a RAID5/10 BTRFS pool inside the unRAID array, which is why this functionality is already provided in the form of pools separate from the array.

 

4 hours ago, Marshalleq said:

Also, I'd say there is zero chance that the Unraid folk are going to include ZFS for the people that want it and then tell those same people that they can't use their existing pools; that'd just be stupid.

 

Agreed; if anything, just don't upgrade your ZFS pools with any new feature flags introduced from now on if you don't need them, just in case.
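
To stay on the safe side, you can check which feature flags a pool currently has and simply hold off on zpool upgrade; a sketch, assuming a hypothetical pool named tank:

# If newer flags exist but are not enabled, zpool status prints a
# "Some supported features are not enabled" note; that is safe to ignore.
zpool status tank
# Show the state (disabled / enabled / active) of every feature flag:
zpool get all tank | grep feature@
# Just avoid running 'zpool upgrade tank' until the built-in ZFS version question is settled.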

Link to comment

Anyway, I think the official addition of ZFS is very exciting and I am looking forward to seeing where it leads.  Though I suspect, like everything else in Unraid, we're not going to get enterprise features in the GUI.  But who knows, this might just start a journey, given that from a file system perspective this stuff (like send/receive for backups) is just built in.

 

It's funny to think: I jumped into this forum some time back with little knowledge and lots of questions, and now I have quite a full-on implementation and have completely gotten rid of unRAID's array.  I had tried ZFS once on Proxmox and it was incredibly slow, which I can only assume was due to a poor default ARC setting or something.  It goes to show the good job @steini84 has done here, along with everyone else who has contributed.  Also, @ich777 I found to be super awesome to work with too; seeing that they're lined up and Unraid is lined up with them, I think the future is rosy for ZFS. :)

  • Like 1
Link to comment

Yeah, it's mainly a guess, based on the info in the thread and what users are asking for. I agree with @jortan that the magic of unRaid is having a bunch of different-size disks and just making them work as one with redundancy without much trouble, and that's okay; that should always be an option. However, being the only option isn't sustainable for the times to come. unRaid is a business, they have a market, and falling behind in features that the competition offers is not a great move for them.

 

Every month a new SATA SSD cheaper than ever is announced, or a board with a 2.5Gb/5Gb network card included; think about the new 2022 offerings from Intel & AMD with DDR5, PCIe 5, etc. All of that outperforms unRaid's current storage strategy by a lot; check Linus's videos, he hasn't used the array for anything in a long time. If unRaid keeps the current "array" for a couple of years, it's not going to be competitive. And don't get me wrong, I have been using unRaid for the last 5 years and I love it!, but it has started to feel a little old in terms of storage capabilities (snapshots, backups without having to copy a 300 GB VM disk every night or wait hours for a copy to complete, 10Gb support, etc.). For me this ZFS plugin has been a game changer; it makes unRaid just about the perfect modern system. I really hope unRaid gives us options and doesn't just stick to the good old times.

  • Like 1
Link to comment
6 hours ago, Iker said:

that should always be an option. However, being the only option isn't sustainable for the times to come

 

It's not the only option though; that's what pools are for?

 

There's a case to be made that unRAID should not require an array to be present if you don't need one.  That one I agree with, though there is clearly a lot of code currently hanging off whether the array is up or down (i.e. you can't run Docker/VM services without the array running).  It will probably happen one day, but that's a massive change for unRAID.

 

I'm a huge advocate of ZFS and very appreciative of @steini84 / @ich777's work on this, but unRAID's array is still arguably the better option for large libraries of non-critical media where write performance isn't that important, i.e. media libraries/copies of your blurays, etc.  With an unRAID array, if you have a 12-disk array and you lose 2 disks, you only lose the data on those 2 disks.  With an equivalent RAID5-style BTRFS/ZFS pool and a single parity disk, if you lose 2 disks, you've lost the entire pool.

 

There's also the flexibility where (as long as your unRAID array parity disk is equal in size to your largest disk), you can add individual disks of any size and fully utilise their capacity.

 

To each their own, but my 2c is that these features will continue to provide value long after ZFS support for pools in unRAID becomes mainstream.

Edited by jortan
  • Like 1
Link to comment
