ZFS plugin for unRAID


steini84

Recommended Posts

Not sure if I should be asking this here, but how do I mount the datasets I created in the terminal from ZFS into unRAID?
I tried using ln -s /zpool/dataset /mnt/user/sharename, where zpool and dataset are my own names and sharename is the name of the share I created in SMB. What I get is a share with the size of my unRAID pool at the location /zpool/dataset/dataset (yes, it seems to have force-created a dataset within a dataset). Any suggestions?

Link to comment
1 minute ago, Defendos said:

Not sure if I should be asking this here, but how do I mount the datasets I created in the terminal from ZFS into unRAID?

 

Typically you set this when you create the pool, and the datasets fall under the pool's mountpoint via inheritance.  That's what I do, anyway.  However, it is possible to set different mountpoints afterward if you wish.
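As a rough sketch of that inheritance (hypothetical pool/dataset/disk names; these commands need real spare disks, so treat this as illustration only):

```
zpool create -m /mnt/tank tank mirror /dev/sdb /dev/sdc   # pool mounts at /mnt/tank
zfs create tank/media          # no mountpoint needed; it mounts at /mnt/tank/media
zfs get -r mountpoint tank     # SOURCE column reads 'inherited from tank' for the child
```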

 

Example of setting as part of original pool:

zpool create Seagate48T -f -o ashift=12 -o autotrim=on -m /mnt/Seagate48T raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sdf

 

Example of setting mount point afterward:

zfs set mountpoint=/yourmountpoint/yoursubmountpointetc Seagate48T/datasetname

 

You can also use zfs get mountpoint Seagate48T/yourdataset to find out what it's currently set to.

 

Hope that helps!

 

Marshalleq.

Link to comment
32 minutes ago, Marshalleq said:

 

Typically you set this when you create the pool, and the datasets fall under the pool's mountpoint via inheritance. That's what I do, anyway. However, it is possible to set different mountpoints afterward if you wish.

 

Example of setting as part of original pool:

zpool create Seagate48T -f -o ashift=12 -o autotrim=on -m /mnt/Seagate48T raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sdf

 

Example of setting mount point afterward:

zfs set mountpoint=/yourmountpoint/yoursubmountpointetc Seagate48T/datasetname

 

You can also use zfs get mountpoint Seagate48T/yourdataset to find out what it's currently set to.

 

Hope that helps!

 

Marshalleq.

 

When setting those mountpoints, does that also mean I first have to create the corresponding SMB/NFS shares in unRAID to link them to my ZFS pool/disks, or will the shares show up in Windows Explorer, for example, once I create the mountpoints as you have described?

Link to comment

Not first, but probably afterward.  ZFS does have SMB sharing contained within it, but on unraid I just set shares up afterward in the /etc/samba/smb.extras file (or a similar name that I can't quite remember).  Luckily Samba has a really easy-to-use configuration file.

 

But I just want to make sure I understand you here: what exactly is the requirement for NFS?  If everything is contained within unraid, i.e. ZFS is on unraid, you don't need NFS or Samba for anything ZFS-related unless you're connecting a remote host or want to mount something inside of a VM that way.

 

Sorry, I don't know what expertise you have and it almost sounds as though you think you need NFS to mount the folders within the same system.

Link to comment
1 hour ago, Defendos said:

 

When setting those mountpoints, does that also mean I first have to create the corresponding SMB/NFS shares in unRAID to link them to my ZFS pool/disks, or will the shares show up in Windows Explorer, for example, once I create the mountpoints as you have described?

 

The Unraid "Shares" GUI is really designed for sharing Unraid array/pools.  I would also recommend sharing a ZFS dataset using smb-extra.conf (or zfs set) rather than trying to do this in the Unraid Shares GUI.

 

If you want a share that's open to anonymous (unauthenticated) access on your network:

 

chown nobody:users /mnt/poolname/dataset

chmod 777 /mnt/poolname/dataset

 

In /boot/config/smb-extra.conf add a new share:

[sharename]
path = /mnt/poolname/dataset
comment = this is a comment
browseable = yes
public = yes
writeable = yes

 

Restart samba service to load the new config

/etc/rc.d/rc.samba restart

 

sharename should now be visible at \\unraidhostname or \\unraid.ip.address
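If you'd rather not leave the share open to everyone, a password-protected variant is a small change. A sketch only - the share and user names here are hypothetical, and the user must already exist in unRAID with a Samba password set:

```
[securename]
path = /mnt/poolname/dataset
comment = authenticated share
browseable = yes
public = no
valid users = username
write list = username
force user = nobody
force group = users
```

Restart Samba the same way afterward for the change to take effect.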

Edited by jortan
Link to comment
2 hours ago, Marshalleq said:

Not first, but probably afterward. ZFS does have SMB sharing contained within it, but on unraid I just set shares up afterward in the /etc/samba/smb.extras file (or a similar name that I can't quite remember). Luckily Samba has a really easy-to-use configuration file.

 

But I just want to make sure I understand you here: what exactly is the requirement for NFS? If everything is contained within unraid, i.e. ZFS is on unraid, you don't need NFS or Samba for anything ZFS-related unless you're connecting a remote host or want to mount something inside of a VM that way.

 

Sorry, I don't know what expertise you have and it almost sounds as though you think you need NFS to mount the folders within the same system.

NFS was just an example; what I need is SMB for my Windows host. This is for sharing on my LAN, and with WireGuard I will eventually be sharing it outside my home as well, just to make things clear.

 

About my expertise, I would say beginner/intermediate. I've been fiddling with servers and Hyper-V for a while, mostly guided by Google and self-learning. I'd been using TrueNAS for a while, but the learning curve and apps occasionally not working, compared to the great app store of unRAID, set me on the path to combining the two.

 

I've managed to set up my ZFS pools and datasets, but sharing them, hopefully also with some kind of permissions, is where I can't find easy-to-understand guides or helpful tips, as I'm not very skilled with Linux, BSD, or terminal commands.

Link to comment
2 hours ago, jortan said:

 

The Unraid "Shares" GUI is really designed for sharing Unraid array/pools. I would also recommend sharing a ZFS dataset using smb-extra.conf (or zfs set) rather than trying to do this in the Unraid Shares GUI.

 

If you want a share that's open to anonymous (unauthenticated) access on your network:

 

chown nobody:users /mnt/poolname/dataset

chmod 777 /mnt/poolname/dataset

 

In /boot/config/smb-extra.conf add a new share:

[sharename]
path = /mnt/poolname/dataset
comment = this is a comment
browseable = yes
public = yes
writeable = yes

 

Restart samba service to load the new config

/etc/rc.d/rc.samba restart

 

sharename should now be visible at \\unraidhostname or \\unraid.ip.address

I would like to have multiple shares for family users (e.g. home/user1, home/user2, etc.), but also two for myself: one read-only and one with full access, for extra security against ransomware just in case.

 

Any suggestions on whether this is possible at all through the terminal or the TrueNAS GUI?

I have set up a TrueNAS VM in which I can import and export my pools.

Is it also possible to set up shares with permissions in the TrueNAS VM GUI,

then export the pools and import them into unRAID again?

 

Sorry for my lack of skills and expertise; I'm still on the big learning curve of combining TrueNAS, ZFS, and unRAID.

Link to comment
9 minutes ago, Defendos said:

NFS was just an example; what I need is SMB for my Windows host. This is for sharing on my LAN, and with WireGuard I will eventually be sharing it outside my home as well, just to make things clear.

OK, so you want an external Windows host to connect to the unraid ZFS dataset via SMB?

 

10 minutes ago, Defendos said:

About my expertise, I would say beginner/intermediate. I've been fiddling with servers and Hyper-V for a while, mostly guided by Google and self-learning. I'd been using TrueNAS for a while, but the learning curve and apps occasionally not working, compared to the great app store of unRAID, set me on the path to combining the two.

Nice, this is the best way in my opinion. I have recruited many tech people, and the best ones are almost always those who teach themselves and have a natural interest. And a good dose of 'I don't know everything' to go with it.  Yes, I tried TrueNAS; I really, really wanted it to be good, because they do have the basics covered a lot better than unraid does (I mean backups, file sharing, permissions, that kind of thing), but unraid just works better. And I really dislike their Docker / Kubernetes implementation - hopefully they will sort it out and we will have a second option.

12 minutes ago, Defendos said:

I've managed to set up my ZFS pools and datasets, but sharing them, hopefully also with some kind of permissions, is where I can't find easy-to-understand guides or helpful tips, as I'm not very skilled with Linux, BSD, or terminal commands.

You can share them easily enough as per the above method, with a custom config in the smb-extra.conf file.  Permissions, however, unraid has basically restricted to nobody:users at the file level, relying on permissions set at the share level instead.  I expect they did this because it was just too complicated for the target users of this product otherwise.  I'm not sure - others may be able to correct me about some hacks - but moving away from nobody:users ownership (i.e. away from chown -Rfv nobody:users foldername) is likely asking for trouble.
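If permissions do get into a mess, resetting a share back to wide-open modes is straightforward. A minimal demo on a throwaway directory (on unraid you would target your dataset's mount instead, e.g. /mnt/poolname/dataset, and also run chown -R nobody:users on it, which needs root):

```
# throwaway directory standing in for a dataset mount
d=$(mktemp -d)
mkdir -p "$d/share"
touch "$d/share/file.txt"
# recursively open everything up, as unraid's public shares expect
chmod -R 777 "$d/share"
stat -c '%a' "$d/share/file.txt"   # prints 777
rm -rf "$d"
```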

 

By the way, the file is /boot/config/smb-extra.conf - I was guessing before.  Here's what I add to it to share a ZFS dataset.  I hope it helps you.

 

```
[temp]
path = /mnt/HDDPool1/temp
comment = ZFS backups Drive
browseable = yes
vfs objects =
create mask = 664
force create mode = 664
directory mask = 2775
force directory mode = 2775
valid users = username root
write list = username root
force user = nobody
force group = users
guest ok = no
```

Link to comment
8 minutes ago, Defendos said:

I would like to have multiple shares for family users (e.g. home/user1, home/user2, etc.), but also two for myself: one read-only and one with full access, for extra security against ransomware just in case.

 

Any suggestions on whether this is possible at all through the terminal or the TrueNAS GUI?

I have set up a TrueNAS VM in which I can import and export my pools.

Is it also possible to set up shares with permissions in the TrueNAS VM GUI,

then export the pools and import them into unRAID again?

 

Sorry for my lack of skills and expertise; I'm still on the big learning curve of combining TrueNAS, ZFS, and unRAID.

My advice is to look at the Samba config file options; a really good example config is usually included.  There are definitely things you can do to set home drives at the share level.  Failing that, you can set them up individually via the above config if you don't have too many to do.  You can also put a dollar sign after a share name to hide it if you like.  Samba is a really, really mature product that has been around for decades; if it can be done, Samba will do it.  Just have a browse around, e.g. https://docs.centrify.com/Content/zint-samba/SMBConfSample.htm
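As a sketch of that share-level approach (the paths and 'username' are hypothetical; %S is Samba's built-in substitution for the requested share name, which is what makes a [homes]-style section per-user):

```
[homes]
path = /mnt/poolname/home/%S
valid users = %S
browseable = no
writeable = yes

[storage-ro]
path = /mnt/poolname/storage
valid users = username
read only = yes

[storage]
path = /mnt/poolname/storage
valid users = username
read only = no
```

The read-only/read-write pair on the same path is one way to approach the ransomware concern raised above: map the read-only share for day-to-day use and only touch the writeable one when needed.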

 

Link to comment
6 minutes ago, Marshalleq said:

OK, so you want an external Windows host to connect to the unraid ZFS dataset via SMB?

 

Nice, this is the best way in my opinion. I have recruited many tech people, and the best ones are almost always those who teach themselves and have a natural interest. And a good dose of 'I don't know everything' to go with it. Yes, I tried TrueNAS; I really, really wanted it to be good, because they do have the basics covered a lot better than unraid does (I mean backups, file sharing, permissions, that kind of thing), but unraid just works better. And I really dislike their Docker / Kubernetes implementation - hopefully they will sort it out and we will have a second option.

You can share them easily enough as per the above method, with a custom config in the smb-extra.conf file. Permissions, however, unraid has basically restricted to nobody:users at the file level, relying on permissions set at the share level instead. I expect they did this because it was just too complicated for the target users of this product otherwise. I'm not sure - others may be able to correct me about some hacks - but moving away from nobody:users ownership (i.e. away from chown -Rfv nobody:users foldername) is likely asking for trouble.

Perhaps I've been a bit late on this, but thanks so far for being willing to help me on this journey of mine. I agree with you that TrueNAS does indeed have the basics set, but the occasional instability with Dockers and VMs is what led me to unRAID, along with the very helpful userbase. So if I'm correct, are you suggesting I just stay with nobody:users but add the valid users as you show below in unRAID? Or where do I get to add valid users? I'm just trying to figure out in my head how unRAID, and especially ZFS, is put together and how and where they are connected, so I can more easily troubleshoot and explain my wrongdoings.

 

6 minutes ago, Marshalleq said:

 

By the way, the file is /boot/config/smb-extra.conf - I was guessing before. Here's what I add to it to share a ZFS dataset. I hope it helps you.

 

```
[temp]
path = /mnt/HDDPool1/temp
comment = ZFS backups Drive
browseable = yes
vfs objects =
create mask = 664
force create mode = 664
directory mask = 2775
force directory mode = 2775
valid users = username root
write list = username root
force user = nobody
force group = users
guest ok = no
```

 

Does that list contain standard commands/lines which are also available in the ZFS documentation? I would like to deepen my knowledge of what is or isn't possible.

Link to comment

Tried setting up the smb-extra.conf file using multiple examples listed above, but every time I try to connect to the SMB share through Windows 10, I keep getting the following error: You need permission to perform this action

my smb-extra.conf file:

 

[storage]
path = /tank/storage
writeable = yes
public = yes
browseable = yes
inherit acls = no
map acl inherit = yes
valid users = testuser
write list = testuser
vfs objects =
create mask = 0755
force create mode = 0755
directory mask = 0755
force directory mode = 0755
force user = nobody
force group = users
guest ok = no

 

tried 0755 as well as 0777 as permission in the file above

any suggestions so far?

Link to comment

My own doing: I moved from unRAID+ZFS to TrueNAS, to TrueNAS Scale, and back to unRAID. Surprisingly everything seems to be there, credit to ZFS. However, two of my disks show the format option and don't show a ZFS partition. I know I did have issues, but I thought I had fixed this.

What commands can I do to fix this?

zpool status
  pool: media
 state: ONLINE
  scan: resilvered 55.0G in 00:06:51 with 0 errors on Sun Nov 13 13:51:50 2022
config:

        NAME                                      STATE     READ WRITE CKSUM
        media                                     ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            sdw                                   ONLINE       0     0     0
            7abef184-7498-fa4f-a897-5d67b8856c20  ONLINE       0     0     0
            023d95f7-5dbf-534a-8756-b5dc5b713714  ONLINE       0     0     0
            4793496c-047b-8a4d-b2b3-868b70dddc3f  ONLINE       0     0     0
            sdv                                   ONLINE       0     0     0
            473af983-64c2-e14c-849c-2c259b2c01b8  ONLINE       0     0     0
            20d11577-d787-214e-881a-2ff53abc874e  ONLINE       0     0     0
            1776dee7-e4ac-d24e-bf83-0455c44db59a  ONLINE       0     0     0
            cdf66fbe-c4bf-c646-ae59-e475a3630bf7  ONLINE       0     0     0
            a5b8e63e-94bb-4148-bccd-6d99995b0aa5  ONLINE       0     0     0
            b65ca965-064f-ec4b-a16e-e489b4effbef  ONLINE       0     0     0
            2c4f7737-3d20-3e4f-9715-19faa428a050  ONLINE       0     0     0
            680217b6-eeab-134c-99bc-763554b462e6  ONLINE       0     0     0
            82e7a98c-ce07-6b49-950c-1e86c307424d  ONLINE       0     0     0
            b0c8d5f1-4980-b046-9ca3-5733759dab77  ONLINE       0     0     0
        cache
          nvme0n1p1                               ONLINE       0     0     0

sdw and sdv are the drives with the format option.

Link to comment

Hi, I created a raidz1 pool named test using your ZFS plugin. Now I want to add new hard drives to this pool, but it doesn't work. The command I use is zpool attach test raidz1-0 sdd, but it fails, telling me "can only attach to mirrors and top-level disks". But I see others have succeeded - how is this achieved?

Link to comment
5 minutes ago, ncceylan said:

Hi, I created a raidz1 pool named test using your ZFS plugin. Now I want to add new hard drives to this pool, but it doesn't work. The command I use is zpool attach test raidz1-0 sdd, but it fails, telling me "can only attach to mirrors and top-level disks". But I see others have succeeded - how is this achieved?

 

Please be aware that there are some differences between using "add" and "attach":

 

https://openzfs.github.io/openzfs-docs/man/8/zpool-attach.8.html

https://openzfs.github.io/openzfs-docs/man/8/zpool-add.8.html
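Roughly, the difference, with hypothetical pool/device names (illustration only; note that neither command can grow an existing raidz vdev):

```
# attach: turn the single disk sdb into a mirror by pairing it with sdc
zpool attach tank sdb sdc

# add: append a whole new raidz1 vdev to the pool, growing its capacity
zpool add tank raidz1 sdd sde sdf
```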

Link to comment
19 hours ago, anylettuce said:

My own doing: I moved from unRAID+ZFS to TrueNAS, to TrueNAS Scale, and back to unRAID. Surprisingly everything seems to be there, credit to ZFS. However, two of my disks show the format option and don't show a ZFS partition....

sdw and sdv are the drives with the format option.

 

This is due to the import process of the pool and how the Unassigned Devices plugin works; it has nothing to do with your pool, and the only way that I'm aware of to solve it is by detaching and re-attaching the disk under its by-id name using:

 

zpool detach mypool sdd
zpool attach mypool /dev/disk/by-id/<id-of-sdd>
<wait until zpool status shows it's rebuilt...>

 

One easy way could be:

 

zpool export mypool
zpool import -d /dev/disk/by-id mypool

 

Additional info:

 

https://plantroon.com/changing-disk-identifiers-in-zpool/

Edited by Iker
Link to comment
42 minutes ago, Iker said:

Yes, I am an unraid user. I want to add new hard drives to the existing raidz1 pool, but the Internet says that is currently not possible. It seems it could be achieved with RAIDZ expansion.

Edited by ncceylan
Link to comment
4 hours ago, ncceylan said:

Yes, I am an unraid user. I want to add new hard drives to the existing raidz1 pool, but the Internet says that is currently not possible. It seems it could be achieved with RAIDZ expansion.

 

ZFS raidz expansion is not available yet.

 

You must destroy the pool and re-create it with all the disks. Copy your data to an external USB drive temporarily while you create your new zpool.

Edited by gyto6
Link to comment

Have been running ZFS on my unRAID for most of this year with no issues.  Completed the update to 6.11.5 (I had previously been applying all point releases fairly quickly after release)... After the last update I'm getting a PANIC at boot, directly after "Starting Samba:", on the 4th line, "/usr/sbin/winbindd -D".

 

Quote

VERIFY3(size <= rt->rt_space) failed (281442933186560 <= 1071386624)
PANIC at range_tree.c:436:range_tree_remove_impl()

 

If I start Unraid in safe mode it boots just fine... I renamed the unRAID6-ZFS .plg file and it boots normally as expected...  Has anyone else run into issues with the ZFS plugin and newer 6.11 versions of Unraid?  Any ideas?  I completely removed the ZFS plugin and reinstalled, and I still get the error messages in the live syslog...

 

I've imported my pool into a TrueNAS boot that I've used to add vdevs, and it appears healthy with no issues that I can identify.   A scrub completed normally.  I also downgraded to previous 6.11 versions (I was running 6.11.4 before with no issue) and there was no change - same PANIC error.

 

Any Ideas or anyone else with similar issues?

 

SW2

Edited by SuperW2
Link to comment
7 hours ago, SuperW2 said:

Have been running ZFS on my unRAID for most of this year with no issues.  Completed the update to 6.11.5 (I had previously been applying all point releases fairly quickly after release)... After the last update I'm getting a PANIC at boot, directly after "Starting Samba:", on the 4th line, "/usr/sbin/winbindd -D".

 

 

If I start Unraid in safe mode it boots just fine... I renamed the unRAID6-ZFS .plg file and it boots normally as expected...  Has anyone else run into issues with the ZFS plugin and newer 6.11 versions of Unraid?  Any ideas?  I completely removed the ZFS plugin and reinstalled, and I still get the error messages in the live syslog...

 

I've imported my pool into a TrueNAS boot that I've used to add vdevs, and it appears healthy with no issues that I can identify. A scrub completed normally. I also downgraded to previous 6.11 versions (I was running 6.11.4 before with no issue) and there was no change - same PANIC error.

 

Any Ideas or anyone else with similar issues?

 

SW2

 

As you mentioned TrueNAS, it may be related to that, based on this:

 

https://github.com/openzfs/zfs/issues/12643

 

 

Link to comment
