ZFS plugin for unRAID


steini84

Recommended Posts

27 minutes ago, calvados said:

It appears that my ZFS pools are still up and running.  Is this an issue?

No, not in any way. I think you haven't rebooted in a long time, or am I wrong?

The update helper was updated about 2 weeks ago because there was a bug in it.

 

ZFS is already built for 6.11.0 stable and everything is working.

Link to comment
11 minutes ago, ich777 said:

No, not in any way. I think you haven't rebooted in a long time, or am I wrong?

The update helper was updated about 2 weeks ago because there was a bug in it.

 

ZFS is already built for 6.11.0 stable and everything is working.

 

Thank you for your reply @ich777.  I rebooted after applying the 6.11.0 update today; the previous update was around 30 days before that.

 

When you say "the update helper was updated about 2 weeks ago", is there any action I need to take to receive the update, or did the reboot I did today cause me to receive that update?

 

EDIT: FWIW I regularly update whenever prompted.

 

Thanks again @ich777

Edited by calvados
Link to comment
8 minutes ago, calvados said:

is there any action I need to take to receive the update, or did the reboot I did today cause me to receive that update?

The reboot did that. Any driver plugin (which has the plugin update helper built in) will also update the plugin update helper when a plugin update is done, but AFAIK there weren't any major plugin updates.

 

Nothing to worry about, everything is now updated on your system.

Link to comment
29 minutes ago, ich777 said:

The reboot did that. Any driver plugin (which has the plugin update helper built in) will also update the plugin update helper when a plugin update is done, but AFAIK there weren't any major plugin updates.

 

Nothing to worry about, everything is now updated on your system.

@ich777 thank you very much for your reply! Very much appreciated.

  • Like 1
Link to comment
On 9/19/2022 at 12:27 AM, BVD said:

One other thing immediately comes to mind - you're mounting your zpool directly in /mnt, right?

 

If not, do that and re-test - putting ZFS inside the virtual directory unraid uses to merge the disparate filesystems of multiple disks introduces a massive number of unaccounted-for variables, and even unraid itself doesn't mount physical filesystems directly on top of virtual ones.

 

Yeah I mounted the zpool directly to /mnt like in the first post of this thread.

Link to comment
  • 2 weeks later...
1 hour ago, sabertooth said:

After upgrading to 6.11.1, the ZFS dataset is no longer visible.

Is the ZFS plugin still installed?

Maybe something went wrong while downloading.

 

If it's not installed anymore, please check whether you have a Plugins Error tab, remove it from there, and pull a fresh copy from the CA App.
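
A quick sanity check from the console - just the standard OpenZFS commands the plugin provides:

zfs version    # fails if the ZFS kernel module / userland tools aren't loaded, i.e. the plugin isn't active
zpool status   # shows the state of currently imported pools
zpool import   # lists pools that exist on disk but aren't currently imported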

Link to comment
  • 2 weeks later...

I have two pools because of different types of disks, with a mount point of /mnt/disks/zfs.

 

Is there any issue with this? Or should the 2nd pool be mounted at a different location such as /mnt/disks/zfs2?

 

For Samba share purposes, does 'zfs' in my case need any permissions set, or should these be set on the datasets?

 

Thanks

 

Link to comment
2 minutes ago, fwiler said:

I have two pools because of different types of disks, with a mount point of /mnt/disks/zfs.

 

Is there any issue with this? Or should the 2nd pool be mounted at a different location such as /mnt/disks/zfs2?

 

For Samba share purposes, does 'zfs' in my case need any permissions set, or should these be set on the datasets?

 

Thanks

 

 

do you mean

 

/mnt/disks/zfs/pool1

/mnt/disks/zfs/pool2 ?

 

I believe /mnt/disks is where the Unassigned Devices plugin mounts disks?  Probably not an issue, but to avoid any edge-case issues I would just mount ZFS pools here:

 

/mnt/pool1

/mnt/pool2

 

For Samba sharing - you can either have ZFS do it (I haven't done it this way, but it's something like this):

 

zfs set sharesmb=on pool1

 

Or, what I have done before is just add this to Unraid's /boot/config/smb-extra.conf:

 

[sharename]
path = /mnt/pool1
comment = share description
browseable = yes
public = yes
writeable = yes
vfs objects =

 

This assumes you want the share to be public - accessible anonymously.  If so, then:

 

chmod 777 /mnt/pool1

chown nobody:users /mnt/pool1
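
One extra step worth mentioning: after editing the SMB extras file, Samba has to re-read its config before the new share shows up. A minimal sketch, assuming Unraid's stock Slackware-style rc script:

/etc/rc.d/rc.samba restart     # restart Samba so it picks up the new [sharename] section
smbcontrol all reload-config   # or reload the config without dropping existing sessions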

Link to comment
8 minutes ago, jortan said:

 

do you mean

 

/mnt/disks/zfs/pool1

/mnt/disks/zfs/pool2 ?

 

I believe /mnt/disks is where the Unassigned Devices plugin mounts disks?  Probably not an issue, but to avoid any edge-case issues I would just mount ZFS pools here:

 

/mnt/pool1

/mnt/pool2

 

Yes, I have them currently like you listed. (Although I don't see the actual pool names under zfs, just the datasets). I just didn't know if I should put the 2nd pool at e.g. /mnt/disks/zfs2 - essentially a different mount point for a different pool - or whether it made any difference.

 

I had picked /mnt/disks/zfs as the mount point because 1. One of the developers or moderators said not to mount at /mnt/ and suggested /mnt/disks/xxx. 2. When I tried /mnt/zfs/ I received complaints from Fix Common Problems. Probably not a big deal, but easy for me to change.

If I shouldn't use /mnt/disks/xxx then I will change it as I'm just setting things up.

 

As far as sharing, I believe I understand as I've tested out several different configurations for shares.  

I was just asking about the permissions on the location of the mount point.  For instance mnt, disks, and in my case zfs all have different permissions.  But if setting the correct permissions on the share is what matters, then I won't worry about it.

Link to comment
5 hours ago, fwiler said:

just the datasets). I just didn't know if I should put the 2nd pool at e.g. /mnt/disks/zfs2 - essentially a different mount point for a different pool.

 

Not only should you - you will need to.  You can't mount multiple pools to the same path.

 

1 hour ago, fwiler said:

If I shouldn't use /mnt/disks/xxx then I will change it as I'm just setting things up.

 

It's probably fine.  AFAIK the only reason not to mount zfs directly in /mnt/pool is this, which can just be ignored:

 

1 hour ago, fwiler said:

When I tried /mnt/zfs/ I received complaints from Fix Common Problems. Probably not a big deal, but easy for me to change.

 

 

 

Link to comment
5 hours ago, jortan said:
11 hours ago, fwiler said:

just the datasets). I just didn't know if I should put the 2nd pool at e.g. /mnt/disks/zfs2 - essentially a different mount point for a different pool.

 

Not only should you - you will need to.  You can't mount multiple pools to the same path.

I did some more reading, and I see my original question wasn't clear because I used the wrong terminology. Mount points can be the same for multiple pools, according to what I read; the pool names obviously have to be different.  For instance: zpool create -m /mnt/disks/zfs poolname mirror sdb sdc and zpool create -m /mnt/disks/zfs poolname2 mirror sdd sde.  The actual mount point for both is /mnt/disks/zfs.

I guess my confusion is that I don't see the pool name when browsing to /mnt/disks/zfs/, only datasets created under those pools.

Link to comment
58 minutes ago, fwiler said:

I did some more reading, and I see my original question wasn't clear because I used the wrong terminology. Mount points can be the same for multiple pools, according to what I read; the pool names obviously have to be different.  For instance: zpool create -m /mnt/disks/zfs poolname mirror sdb sdc and zpool create -m /mnt/disks/zfs poolname2 mirror sdd sde.  The actual mount point for both is /mnt/disks/zfs.

I guess my confusion is that I don't see the pool name when browsing to /mnt/disks/zfs/, only datasets created under those pools.

 

The pool name isn't represented as a sub-folder of the mount location; the pool is mounted directly to the directory you specify.

 

My suggestion would be not to overcomplicate this:

 

zpool create -m /mnt/poolname poolname mirror sdb sdc

zpool create -m /mnt/poolname2 poolname2 mirror sdd sde
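
To confirm where each pool ended up, something like this will show the mountpoints (standard OpenZFS commands, using the example pool names from above):

zpool list -o name,size,health                   # one line per imported pool
zfs list -o name,mountpoint poolname poolname2   # root datasets and where they are mounted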

 

Link to comment
6 minutes ago, jortan said:

 

The pool name isn't represented as a sub-folder of the mount location; the pool is mounted directly to the directory you specify.

 

My suggestion would be not to overcomplicate this:

 

zpool create -m /mnt/poolname poolname mirror sdb sdc

zpool create -m /mnt/poolname2 poolname2 mirror sdd sde

 

Thank you for your help.

Link to comment
On 3/24/2022 at 3:39 PM, gyto6 said:
zfs create -V 20G pool/docker # -V creates a ZVOL (a 20 GB block device)
cfdisk /dev/pool/docker # to create a partition easily
mkfs.btrfs -q /dev/pool/docker-part1 # format the new partition with the desired filesystem (BTRFS here)
mount /dev/pool/docker-part1 /mnt/pool/docker # the expected mount point

 

Confused. 

1st line-  is pool/docker already a location on your pool or are you creating a new one with this command?  

3rd line- I read that xfs should be used instead of btrfs, so would I just use mkfs.xfs -q /

and I don't understand the docker-part1.  Wouldn't you mkfs on /dev/pool/docker? When I try with /dev/pool/docker-part1 it says Error accessing specified device /dev/ssdpool/docker-part1 : No such file or directory.  (Yes I did write after cfdisk create)

 

Update- never mind.  The write in cfdisk didn't happen for some reason.  I just used fdisk.  Now I see docker-part1 is automatically created.

 

 

Edited by fwiler
Link to comment
On 10/22/2022 at 5:51 AM, fwiler said:

 

Confused. 

1st line-  is pool/docker already a location on your pool or are you creating a new one with this command?  

3rd line- I read that xfs should be used instead of btrfs, so would I just use mkfs.xfs -q /

and I don't understand the docker-part1.  Wouldn't you mkfs on /dev/pool/docker? When I try with /dev/pool/docker-part1 it says Error accessing specified device /dev/ssdpool/docker-part1 : No such file or directory.  (Yes I did write after cfdisk create)

 

Update- never mind.  The write in cfdisk didn't happen for some reason.  I just used fdisk.  Now I see docker-part1 is automatically created.

 

 

To explain:

 

zfs create: used to create a dataset, so any path following this command refers to a ZFS pool (already created) and a dataset (or sub-dataset) to be created.

 

Do not mix up "zfs create" and "zpool create".
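
In other words, a small illustration (hypothetical names - tank is the pool, sdb/sdc the member disks):

zpool create tank mirror sdb sdc   # creates a new *pool* named tank out of the two disks
zfs create tank/docker             # creates a *dataset* inside the already-existing pool tank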

 

BTRFS or XFS: at the time I posted it, I was first trying to make a docker disk image run in a ZVOL, so I began with BTRFS. Several recommendations tend to prefer XFS over BTRFS.

 

Use an XFS docker disk image in a ZVOL partitioned with XFS.

 

Error accessing specified device: /dev/pool/docker is the ZVOL's own block device. /dev/pool/docker-part1 is the path to the partition created inside the ZVOL.
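
If the -part1 device doesn't show up after partitioning, a couple of checks can help (the device paths are just the example ones from above; blockdev comes with util-linux):

ls -l /dev/zvol/pool/ /dev/pool/            # the ZVOL block device nodes / symlinks
blockdev --rereadpt /dev/zvol/pool/docker   # ask the kernel to re-read the partition table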

 

Once your partition is formatted with the appropriate file system, you should use a script to mount it at boot at a dedicated path so it can be written to. Here is mine:
 

mount /dev/zvol/pool/Docker-part1 /mnt/pool/Docker/

Edited by gyto6
  • Like 1
Link to comment

For those wishing to use a ZVOL with XFS for an XFS docker disk image:

 

zfs create -V 20G yourpool/datasetname # -V creates a ZVOL; 20G = 20 GB
cfdisk /dev/zvol/yourpool/datasetname # to create a partition easily
mkfs.xfs /dev/zvol/yourpool/datasetname-part1 # format the new partition with XFS
mount /dev/zvol/yourpool/datasetname-part1 /mnt/yourpool/datasetname/ # the expected mount point
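
To have this mount survive a reboot, the same commands can be run from a startup script once the pool has been imported - a minimal sketch, assuming something like an "At Startup of Array" script from the User Scripts plugin (yourpool/datasetname are the example names above):

mkdir -p /mnt/yourpool/datasetname                                     # make sure the mount point exists
mount /dev/zvol/yourpool/datasetname-part1 /mnt/yourpool/datasetname/  # mount the XFS partition inside the ZVOL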

 

Edited by gyto6
  • Like 1
Link to comment
26 minutes ago, Valiran said:

Is ZFS pertinent for a Plex Library?

I've moved from Unraid to Xpenology (virtualized on Unraid) because I've had performance issues with Plex libraries on the standard array :(

ZFS isn't particularly better suited for that. But if you have the required hardware in your server, you can set the filesystem up that way.

 

Anyway, if you're running into performance issues, I don't think ZFS will magically solve the problem.

Link to comment
2 hours ago, Valiran said:

Is ZFS pertinent for a Plex Library?

I've moved from Unraid to Xpenology (virtualized on Unraid) because I've had performance issues with Plex libraries on the standard array :(

Yes, the standard unraid array is one big known performance issue; however, depending on what you're doing with it, it may or may not bother you.  Putting ZFS on unraid will indeed get around this if you do it right, because basically any array known to man is faster than unraid's standard array - but speed isn't what it was made for; it does have a use case.  So there is really no need to move away from unraid to solve a standard-array speed issue - you can simply put ZFS on unraid.  Your response was a bit confusing because it sounded like you had ZFS on unraid already, but you also said 'standard array', which is certainly not ZFS on unraid (yet).

 

I suspect you'll be back on unraid as the features are pretty much better than everything else.  You'll be able to bring your ZFS array over easily if you ever decide to do that.  Good luck.

Link to comment
2 hours ago, Valiran said:

I've had performance issues with Plex libraries on the standard array :(

 

Was plex on your Unraid array or cache pool?  Can you describe what performance issues you were having?

 

2 hours ago, Valiran said:

I've moved from Unraid to Xpenology (virtualized on Unraid)

 

I'm not sure that nested virtualisation is the path to better performance!

  • Like 1
Link to comment
