ZFS plugin for unRAID


steini84


6 minutes ago, ashman70 said:

Right, so if I use ZFS as I intend to, convert all my disks to ZFS, and just have them operate independently with no vdevs, will I still get the detection and self-healing features of ZFS?

 

Strictly speaking, you're talking about each disk being a separate pool with a single vdev containing one disk.

 

As everyone else has pointed out, in this scenario ZFS can detect errors thanks to checksums, but will have no ability to repair them because you have no redundancy in each pool.

 

If you want to pool dissimilar disks, Unraid arrays are perfect for this.  If you want the performance and resilience of a ZFS pool, you may want to invest in some new disks.
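As a rough illustration (the pool names and device paths here are just placeholders), the difference from the command line would look something like this:

# Single-disk pool: checksums will detect corruption, but there is no second copy to repair from
zpool create tank1 /dev/sdb

# Two-disk mirror: a corrupt block can be rewritten from the healthy copy on the other disk
zpool create tank2 mirror /dev/sdc /dev/sdd

# A scrub reads and verifies every block; problems show up in the CKSUM column
zpool scrub tank2
zpool status -v tank2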

Edited by jortan
13 hours ago, ashman70 said:

I have a backup server running the latest RC of unRAID 6.12. I am contemplating wiping all the disks and formatting them in ZFS, so each disk is running ZFS on its own. Would I still get the data protection against bit rot from ZFS by doing this?

After all the comments, I thought it might be useful to make some terminology clear, because even knowing how this all works, you have to be sort of eagle-eyed to understand the explanations and how they relate to the question.

 

Probably the clearest way I can think of to summarise it is this:

 

The UNRAID Array

When people mention the 'array' (at least in how they are replying to you here), they actually mean Unraid's implementation of RAID, which accepts disks of differing sizes, even though technically speaking (by my training at least) any form of RAID is an array.  Yes, ZFS can now be used on a single disk inside an Unraid array as of the latest RC.  Note that Unraid's array is very much slower than other arrays, and it doesn't offer any form of recovery or protection other than being able to rebuild as many failed disks as you have parity disks.  There is no 'bitrot' support, as some call it: no checking whether files are corrupted, no integrity checking of files as they're written, nor similar features provided by non-self-healing arrays, not as far as I know (someone please correct me if I'm wrong).  There used to be a plugin that checked integrity, but it was awkward and I don't think it really did anything worthwhile in the end.  I uninstalled it.

 

ZFS vs BTRFS

They're similar and they're not.  There's a good write-up here: https://history-computer.com/btrfs-vs-zfs/  However, the things that make the difference to me personally are listed below; some will undoubtedly disagree.

 

BTRFS

If you search these forums you will see that there are a number of people complaining about losing data with BTRFS.  It may be better now, but in my opinion losing data even once due to a file system is a major shortfall that isn't worth the risk of trying again.  I did try again because people said it was a coincidence, but silly me - another set of lost data.  And at the time I had to use BTRFS for my docker image, which consistently had problems.  I won't use it anymore because I don't trust it; others will disagree and may (hopefully) be able to point out that they actually found bugs that have fixed this issue, because choices are good.  Having bugs of that nature in a filesystem that's said to be production ready doesn't give me any confidence whatsoever.  It should never have happened.

 

ZFS

If you really want a safe file system, ZFS has the longest proven pedigree, probably because it had the most resources thrown at it by Sun Microsystems in the early years while the code was still open.  Now that the legal issues are out of the way, it has what I would say is the most active and stable development and the most mature core.

 

Another option

If you truly want data protection, whatever file system you choose, a nice trick is to just create a ZFS or BTRFS mirror and run the Unraid array alongside it separately.  That way you have one pool that is self-healing and one array that can mix disk sizes.  For people who rely on the different-sized-disks feature but want self-healing, I think this is a good middle ground.  Self-healing is usually best suited to photographs, documents and the like, which you want to last forever.  Large binary files that you could perhaps redownload, temp space and that sort of thing are well suited to the Unraid array, because all the Unraid array will essentially do is allow you to replace failed disks.  At least that's my understanding.
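If it helps, the mirror side of that setup can be created from the console along these lines (the pool name, device paths and dataset name are only examples; in 6.12 you would probably do this through the GUI pools instead):

# Two-disk mirror pool for the irreplaceable stuff (photos, documents)
zpool create -o ashift=12 vault mirror /dev/sdx /dev/sdy
zfs create vault/photos

# Scrub periodically so silent corruption is found and repaired from the good copy
zpool scrub vault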

 

Hope that helps.

 

3 minutes ago, Marshalleq said:

Has anyone found a changelog for RC3?  There are links and headings that say they're changelogs, but they all seem to be for older releases...

The release notes for the RC releases are at the start of the announcement thread for each release, under the Bug Reports -> Pre-Releases part of the forum.


It is still unclear, though, what exactly changed in RC3, as some of this is most certainly not new; in fact some of it has been there since rc1.  I think this is normal for Unraid though, right?  They sort of call it an rc3 changelog but lump everything else in.  But if I recall correctly they used to put rc1/2/3 next to the items they applied to, so you could tell.

 

For example, I am uncertain whether anything in the ZFS section is rc3-specific.  Certainly the first half of it isn't.

  • 2 weeks later...

Hello

 

I imported a ZFS pool and dataset from a previous system.

My pool and dataset are showing up when I type "zfs list" and "zpool status".

 

How do I make a Samba share so I can access it through Windows with the basic user I set up?

 

Sorry if this has already been asked, but I cannot find an answer.

 

I am very much a noob, so detailed instructions/an explanation would be appreciated.

On 4/30/2023 at 11:07 AM, izzyhope58 said:

Hello

 

I imported a ZFS pool and dataset from a previous system.

My pool and dataset are showing up when I type "zfs list" and "zpool status".

 

How do I make a Samba share so I can access it through Windows with the basic user I set up?

 

Sorry if this has already been asked, but I cannot find an answer.

 

I am very much a noob, so detailed instructions/an explanation would be appreciated.

What version of Unraid are you using?  In Unraid 6.12 (currently at rc5 status and expected to soon become a stable release) this is built into the GUI; earlier releases involved manually editing the SMB-extras file.

3 hours ago, itimpi said:

What version of Unraid are you using?  In Unraid 6.12 (currently at rc5 status and expected to soon become a stable release) this is built into the GUI; earlier releases involved manually editing the SMB-extras file.

I am on Unraid 6.11.5 and I see where to add to SMB Extras; I just don't know what lines I need to add to make it work.

40 minutes ago, JorgeB said:

There are various posts about that in this thread, for example:

 

https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/?do=findComment&comment=668069

 

So I have this in the SMB Extras field

[izzyZFS]
    path = /mnt/Main Pool/Main Storage
    browseable = yes
    guest ok = no
    writeable = yes
    read only = no
    create mask = 0775
    directory mask = 0775
    write list = izzy
    valid users = izzy
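
(As an aside, a share definition like this assumes the "izzy" user already exists in Samba's password database and has Linux permissions on the path; a rough way to check from the Unraid console, quoting the path because of the spaces, might be:)

# Does Samba know about the user? (it can be added with: smbpasswd -a izzy)
pdbedit -L | grep izzy

# Do the filesystem permissions on the dataset allow that user? If not, ownership can be changed:
ls -ld "/mnt/Main Pool/Main Storage"
chown -R izzy "/mnt/Main Pool/Main Storage"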

The share shows up in Windows Explorer.


But when I try to access it from the server (using the username and password for the "izzy" account) I get an error message.


Is there something else I'm missing?


Here is what happens when I try to set up a ZFS pool in 6.12-rc5; this is without the ZFS Master plugin.

I set up the pool and change the ZFS settings on the first disk, then it asks me if I want to format the drives.  If I tick the box and click format, a warning box pops up; I click OK, it tries to format something, then the first ZFS disk has a red X next to it and we are back at the beginning, with it asking me if I want to format the disk, with the box I can tick and the button to format.

So, do we need the ZFS Master plugin, and if so, how do we create a share to a ZFS dataset?
If we don't need the ZFS Master plugin, how do we create a pool so we can create a share on a ZFS dataset?  Can it be done through the GUI, fully or partially, and if not, what commands do we need to run to get it to work?


47 minutes ago, theangelofspace15 said:

How do I create a ZFS pool with rc5?  And I don't see an ashift option; does it default to 12?

I think this is still manual.  I don't expect a full ZFS GUI to come out in this release.  Someone may correct me on that.

 

RC5 came around fast.  Anyone know if I can get rid of my USB key that is used to boot the Unraid array yet?  I'm still on RC2 because RC3 had problems.  I have a horrible feeling these will still be present in RC5, but we'll see!  Slightly surprised to hear them say that they expect a stable release in a few weeks.

8 hours ago, theangelofspace15 said:

How do I create a ZFS pool with rc5?

Using the GUI: add a new pool, assign the devices, click on the first one and select zfs and the topology you want, start the array, then format the pool.

 

8 hours ago, theangelofspace15 said:

And I don't see an ashift option; does it default to 12?

Yes.
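
(For anyone who wants to double-check the result from the console afterwards, something like this should show the pool layout and the ashift in use; the pool name is just an example:)

# Show the new pool's topology and health
zpool status tank

# ashift=12 means 4K sectors; note the property can read 0 when it was left to auto-detect
zpool get ashift tank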
