ZFS plugin for unRAID


steini84


OK, my mistake... I thought that the video @SpaceInvaderOne did on 'Native ZFS' showed the ZFS pool mounted as a Pool Device. I've just re-watched it and the zpool members are indeed showing up under Unassigned Devices.

 

So my next question is this: can I set the mountpoint for the zpool to be /mnt/cache? Of course, after I've removed the existing SSD which is mounted there. Will unRAID still recognize this mountpoint from a Mover standpoint? I'm guessing no, unless there's a way to mount the zpool (which will be the 2 x 2TB NVME drives in a mirror set) as a disk to replace the existing SSD.

 

EDIT: I hadn't finished re-watching the 'Native ZFS' video; near the end he discusses using a symlink to mount the zpool for share use. It looks like it's possible to map the zpool to /mnt/cache, but I'm still unsure whether Mover or any other tasks will see it as the cache. Sure, I can go and edit all my shares and all my Docker templates to point to another location. But hopefully, after creating a symlink, unRAID will still see the zpool as the cache pool for the shares I've selected to use it.

 

Thanks for the responses @Marshalleq, @JorgeB and @muddro!

 

Edited by AgentXXL
1 hour ago, AgentXXL said:

…can I set the mountpoint for the zpool to be /mnt/cache? Will unRAID still recognize this mountpoint from a Mover standpoint? […]

 

I have no idea if Mover will act on it. I have a bunch of folders symlinked in /mnt/user/ so I can export the folders with SMB using unRAID (instead of modifying some smb conf file). I have all my appdata mapped there, plus a second zpool with my HDDs for storage, and I have all data go directly to that second zpool.

 

So, long story short, I haven't had a need for Mover. But it would be interesting to see if it would work.
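For anyone wanting to copy this, a minimal sketch of the symlink approach (the pool name "tank", the dataset and the share names are all hypothetical):

# assumes a pool mounted at /mnt/tank and a user share "media"
# already created in the unRAID GUI
ln -s /mnt/tank/movies /mnt/user/media/movies
# the linked folder now appears inside the existing "media" SMB export,
# while the data itself stays on the zpool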

22 minutes ago, muddro said:

…long story short, I haven't had a need for Mover. But it would be interesting to see if it would work.

 

I use Mover with the Mover Tuning plugin so I can move files based on age. I try to keep my 'new releases' for movies, TV and music on the unRAID cache pool for 2 weeks. This has the advantage of faster disk I/O for concurrent streams, which is especially helpful for remote users who often need transcoded versions. Once the popularity of an item fades I'm fine moving it to the slower array, and 2 weeks seemed to be a workable time frame.

 

I tend to move a lot of data manually as well, so that takes care of moving some files from cache to array. I can create a scheduled script to run from the User Scripts plugin, so Mover itself isn't critical. But I'm still on the fence about how best to implement these new NVME drives: stick with the default BTRFS mirror set, use an XFS mirror set, or use a ZFS mirror set.
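If the scheduled-script route appeals, a rough sketch of an age-based move from cache to array (the share name and the 14-day cutoff are placeholders; /mnt/user0 is unRAID's array-only view of the user shares):

#!/bin/bash
# move files not modified for 14+ days from the cache pool into the array
SHARE=media
find "/mnt/cache/$SHARE" -type f -mtime +14 -print0 |
while IFS= read -r -d '' f; do
    dest="/mnt/user0/$SHARE/${f#/mnt/cache/$SHARE/}"
    mkdir -p "$(dirname "$dest")"   # recreate the folder structure on the array
    mv -n "$f" "$dest"              # -n: never overwrite an existing file
done

Run on a schedule from User Scripts, this behaves like a crude age-aware Mover.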

 


This is working well for me. You can mount the pool anywhere, then make standard unRAID shares that will exist only on the USB you're using to start the array. To access the zpool datasets, make symlinks in the unRAID user shares pointing to the dataset locations. Works great for sharing. Credit to SpaceInvaderOne for this symlink idea.

 

Share:

 

[screenshot]

 

 

Symlink in user/photos

 

[screenshot]

 

Mapping share

 

[screenshot]

Edited by NickF

Does anyone else think this is a very, very cool idea? I'm surprised nobody thought of it before, but perhaps it's just a sign of the reach ZFS has had since it all opened up in the last 12 months.

 

Short Description - Time Machine for ZFS

Original Reddit Post

Github Link

 

How hard would it be to make this a plugin?  This could be a real killer feature if unraid officially integrated it.  It might even stop me complaining so much about how a basic thing like backups is not included in unraid, making me want to run off to TrueNAS SCALE (which is not ready either TBH).

 

Thoughts?

 

Marshalleq

8 minutes ago, Marshalleq said:

Does anyone else think this is a very, very cool idea? […] How hard would it be to make this a plugin?

 

I think Level1Techs did a similar thing with Shadowcopy on Windows.

On 11/27/2021 at 7:12 PM, Marshalleq said:

Thoughts?

Are you looking for a list of snapshots and the ability to view the contents?

 

I am creating a plugin for snapshots, currently BTRFS but could look to add a tab for ZFS as phase 2.

 

I don't use ZFS, but if someone can provide the commands that need to be run and sample outputs, I can add it in the next phase along with VM snapshots. The initial version is not released yet.

 

Installed the plugin and created some tests, so I should be able to build a view.

root@computenode:/mnt/Blade# zfs list -t snapshot
NAME                              USED  AVAIL     REFER  MOUNTPOINT
Blade/Testservers@Snap20211129      0B      -       24K  -
Blade/Testservers2@Snap20211129     0B      -       24K  -
root@computenode:/mnt/Blade# zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
Blade                230K  27.6G       25K  /mnt/Blade
Blade/Testservers     24K  27.6G       24K  /mnt/Blade/Testservers
Blade/Testservers2    24K  27.6G       24K  /mnt/Blade/Testservers2
root@computenode:/mnt/Blade# df -t btrfs
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/nvme0n1p1 488385560 444091764  44281484  91% /mnt/cache
/dev/sdd1       58605088      3936  56490496   1% /mnt/vms
/dev/loop2      20971520   3272896  17339632  16% /var/lib/docker
/dev/loop3       1048576      6832    923312   1% /etc/libvirt
root@computenode:/mnt/Blade# df -t zfs
Filesystem         1K-blocks  Used Available Use% Mounted on
Blade               28950400   128  28950272   1% /mnt/Blade
Blade/Testservers   28950400   128  28950272   1% /mnt/Blade/Testservers
Blade/Testservers2  28950400   128  28950272   1% /mnt/Blade/Testservers2
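If it helps for the plugin, the -H (no headers, tab-separated) and -p (exact numbers) flags make the output script-friendly. A few candidate commands, using the dataset names above (the snapshot name in the last three is just an example):

# parse-friendly listings
zfs list -H -p -o name,used,avail,refer,mountpoint
zfs list -H -p -t snapshot -o name,used,refer
# create / roll back / destroy a snapshot
zfs snapshot Blade/Testservers@Snap20211130
zfs rollback Blade/Testservers@Snap20211130
zfs destroy Blade/Testservers@Snap20211130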

 

[screenshot]

Edited by SimonF
10 hours ago, SimonF said:

I am creating a plugin for snapshots, currently BTRFS, but could look to add a tab for ZFS as phase 2. […]

Amazing !

  • 2 weeks later...

[screenshot of the Main page]

I messed up my SSL and had to use a backup flash image.

1.  Got everything back, but my Main page doesn't show the FS as zfs anymore. Everything else works fine, but I'm not sure if I need to perform another step (import the zpool) or what.

2.  I want to ensure I have my system set up right as far as mounting goes. When I set up ZFS originally, I mounted the pools to /mnt/nvme & /mnt/data. I also see them mounted at /mnt/disk1/zpool/data & /mnt/disk1/zpool/nvme, plus /mnt/user/zpool.

 

Is this "ok" and/or "optimal", as I've seen alot of new practices since setting up originally ~1 year ago.  Thank you for any help!

Edited by OneMeanRabbit
  • 3 weeks later...

Hi,

I don't know if I have put this in the right place.
Noob in unRAID here, trying to follow guides to set up my unRAID server with ZFS, using the video https://www.youtube.com/watch?v=umKXHO-hr5w and this forum post. Thanks for making this! I had a test server set up, but formatted the drives and deleted each partition using Unassigned Devices and Unassigned Devices Plus. I then installed the plugins ZFS for unRAID 6, ZFS Companion and ZFS Master for Unraid. Since I have 32GB of RAM, I set 8GB to be used by the ARC as per the guide:

"/boot/config/go" and "echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max".

I have 1 SSD as a normal unRAID drive.
I then created the ZFS pool using 6 (8TB) HDDs: zpool create -m /zfs zfs raidz2 sdf sdb sdc sdh sde sdd

I then set ZFS compression to lz4, as recommended, with: zfs set compression=lz4 zfs

I then created datasets using zfs create zfs/data and zfs create zfs/backup.
I changed the compression on backup, since it is going to be rarely-used cold storage, using: zfs set compression=gzip zfs/backup
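One general ZFS suggestion, not specific to this plugin: pools built from sdX letters like the ones above can break if the kernel reshuffles drive letters between boots, so /dev/disk/by-id paths are usually preferred. The device names below are placeholders; ls /dev/disk/by-id/ shows the real ones:

# same raidz2 layout, but with stable device identifiers
zpool create -m /zfs zfs raidz2 \
  /dev/disk/by-id/ata-EXAMPLE_DRIVE_1 /dev/disk/by-id/ata-EXAMPLE_DRIVE_2 \
  /dev/disk/by-id/ata-EXAMPLE_DRIVE_3 /dev/disk/by-id/ata-EXAMPLE_DRIVE_4 \
  /dev/disk/by-id/ata-EXAMPLE_DRIVE_5 /dev/disk/by-id/ata-EXAMPLE_DRIVE_6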

 

Checking the status with zpool status and zfs list, everything looks OK to me. Where my problem starts is when I want to use SMB to access the folders in Windows as a network drive. I have previously used this in unRAID with a user I created called admin, with read and write rights, and a user called user with read-only. This worked just fine before ZFS ;).
Using the YouTube guide I created 2 shares with the names zpool and backup. I then typed the commands ln -s /zfs/backup /mnt/user/backup/backup and ln -s /zfs/data /mnt/user/zpool/data.
The folders show up in the shares, and I set SMB to secure, with the user admin set to Read/Write and user to Read-only.

I then mapped the folders in Windows using \\TOWER\backup\backup and \\TOWER\zpool\data.

 

So far so good, but the problem comes when I try to add files to these: I get an error message stating "Destination folder access denied. You need permission to perform this action." Does the folder not inherit the user access set in the share? When mapping it as \\TOWER\backup, it only shows me the space from the SSD (drive1). The security tab in Windows shows Everyone, root (Unix Group\root) and Console and WebGui login account (TOWER\root), all with just special permissions tagged. When adding admin as a user, the "Check Names" function finds TOWER\admin, but I get an error when applying security: "Failed to enumerate objects in the container. Access is denied" and "Unable to save permission changes on data (\\TOWER\zpool)(W:). Access is denied."
Is there any way to get the subfolder to inherit the security settings set in Shares?

Is there any other way to access the files in Windows in the same manner?
If you see any other improvements to my setup, please do tell me :)


To be honest, I'm not sure I'm following you on the SMB and security side. Everything else was very well outlined though.

 

I think you're saying you use a link from ZFS to the unRAID share, which to me sounds absolutely horrible. So I only know the way I've done it, which is outlined below.

 

SMB permissions with ZFS are done manually via smb-extra.conf (which is in the /boot/config/smb directory). The unRAID SMB GUI does not like anything outside of its own array (I honestly don't know why they'd put in this artificial restriction, but they do). I've always preferred the console method anyway, as it's more powerful. So the point here is that you're using the same SMB system that unRAID uses, but bypassing the artificial restriction of the GUI. At least this is how I do it; someone else might have a better way.

 

Here's a typical one, then a more advanced one to help you out.

[isos]
path = /mnt/Seagate48T/isos
comment = ZFS isos Drive
browseable = yes
valid users = mrbloggs
write list = mrbloggs
vfs objects =

 

[pictures]
path = /mnt/Seagate48T/pictures
comment = ZFS pictures Drive
browseable = yes
read only = no
writeable = yes
oplocks = yes
dos filemode = no
dos filetime resolution = yes
dos filetimes = yes
fake directory create times = yes
csc policy = manual
veto oplock files = /*.mdb/*.MDB/*.dbf/*.DBF/
nt acl support = no
create mask = 664
force create mode = 664
directory mask = 2775
force directory mode = 2775
guest ok = no
vfs objects = fruit streams_xattr recycle
fruit:resource = file
fruit:metadata = netatalk
fruit:locking = none
fruit:encoding = private
acl_xattr:ignore system acls = yes
valid users = mrbloggs
write list = mrbloggs

Also, about the access denied: with ZFS on unRAID you do have to go through and set nobody.users on each of these shares at the file level. So basically:

# chown nobody.users /zfs -Rfv

Who knows, perhaps this is all you need to do to get your method to work.

 

Good luck!

On 12/11/2021 at 7:45 AM, OneMeanRabbit said:

…my Main page doesn't show the FS as zfs anymore. […] Is this "ok" and/or "optimal"?

I just put all mine in /mnt. I am not sure that you can have two mount points for one pool, but ZFS is very powerful, so perhaps that's a feature I've not seen before.

 

You can change the mount point of an existing ZFS pool with zfs set mountpoint=/myspecialfolder mypool.

 

I suspect that the reason your drives don't show up as zfs is that your restore has lost the Unassigned Devices Plus plugin? Note that it is not the same as the Unassigned Devices heading you have above. At least that's what I think I'm seeing in your screenshot.

1 hour ago, Marshalleq said:

SMB permissions with ZFS are done manually via smb-extra.conf (which is in the /boot/config/smb directory). […] Who knows, perhaps this is all you need to do to get your method to work.

 

Thank you for that fast and informative answer :D 

 

I only followed the video guide for the link-from-ZFS-to-the-unRAID-share part (timed to where he shows it): https://youtu.be/umKXHO-hr5w?t=1850, if this makes more sense to you than it does to me :D

I guess since he did not have to worry about SMB security, it just worked for him out of the box? I will look into setting it manually as you have explained. Thanks!

 


I found some strange behaviour in ZFS Master. I have three pools using lz4 as the compression algorithm. I changed one of them to gzip-9, and ZFS Master stopped showing its dataset info: only one row, with SHOW DATASETS, SCRUB POOL, EXPORT POOL and CREATE DATASET, is shown. The other pools are displayed correctly. I also checked with the command line, and the pool with gzip is normal.

Does anyone else have this issue?
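For anyone wanting to compare the GUI against the command line, the per-dataset view of the property looks like this (the pool name is hypothetical):

# compression setting for a pool and all its datasets
zfs get -r -o name,property,value compression tank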


Hi,

I played a little bit with this plugin plus the iSCSI plugin for a "shared" Steam library, but based on native Unraid.

Currently I've managed to create zvols and export them via iSCSI to my clients... it basically works.

But I can't see any deduplication benefit from having equal files on these zvols... which works within TrueNAS.

I use TrueNAS as a VM to create/manipulate and export/import my test ZFS pool, based on the idea from @SpaceInvaderOne.

 

Is it possible that this ZFS implementation/plugin lacks the deduplication feature?

 

Thanks for any info
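One detail that may explain it: dedup in ZFS only applies to blocks written after the property is enabled; anything copied onto the zvol beforehand stays stored as unique blocks. A few commands to check (pool and zvol names are hypothetical):

# is dedup actually enabled on the zvol?
zfs get dedup tank/steam
# pool-wide dedup ratio is in the DEDUP column (1.00x = nothing deduplicated)
zpool list tank
# histogram of the deduplication table
zpool status -D tank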

3 hours ago, subivoodoo said:

…I can't see any deduplication benefit from having equal files on these zvols... which works within TrueNAS. […]

 

Your mileage may vary, but I have played around with networked Steam folders in the past and found it wasn't worth the effort. Adding network I/O to storage is always going to impact performance negatively, and adding ZFS dedupe will make it even worse.

 

Another approach you may want to consider is lancache:

https://forums.unraid.net/topic/79858-support-josh5-docker-templates/

 

This will cache all steam downloads from computers on your network - so you can delete games from your computers knowing you can reinstall them again later at LAN speed if required.

 

ps: lancache docker does require a workaround when using ZFS as your cache repository: https://forums.unraid.net/topic/79858-support-josh5-docker-templates/page/9/?tab=comments#comment-938921


Performance isn't that bad, as I already have a 10G network to all clients... I measured up to 1200MB/s read and 900MB/s write over iSCSI with CrystalDiskMark (to the ZFS cache) on the TrueNAS test VM running on Unraid. This is with dedup + compression ON and sync OFF on the zvol. Downloading isn't my issue either, with 1G fiber...

 

My approach is to save some space on the clients... and to play with IT stuff 😁

 

But it seems that the dedup feature isn't working when I run this setup on Unraid... while it works with TrueNAS.

Edited by subivoodoo
9 minutes ago, subivoodoo said:

…it seems that the dedup feature isn't working when I run this setup on Unraid... while it works with TrueNAS.

 

Impressive numbers - just keep in mind that these may not reflect real-world performance due to a number of factors (not least of which your reads are likely coming straight from ZFS ARC memory, not from actual ZFS storage)

 

>>and play with IT stuff

 

Have fun then, sorry I don't have any answers re: ZFS de-dupe in Unraid. 


That's clear... the max numbers come from the ARC cache. A real sequential read of a big movie file, for the first time, tops out at around 350-400MB/s in Windows Explorer... which is the combined max of my 3 mechanical disks in the ZFS pool (I assume), and small files will be slower.

 

Does anyone else know if dedup is possible?

Edited by subivoodoo
On 1/7/2022 at 9:08 AM, zxhaxdr said:

…I changed one of them to gzip-9, and ZFS Master stopped showing dataset info. […] Does anyone else have this issue?

 

Most probably it's a parse error. I'm planning a new version with a couple of new features; let me check that bug and get back when it's fixed. Thanks for using the plugin, by the way.

9 hours ago, subivoodoo said:

…Does anyone else know if dedup is possible?

I'm using dedup quite successfully. What I've learnt is that most people who say it isn't worth it either haven't looked at it for a while (so are just repeating old stories without checking) or are not applying it to the right type of data.

 

In my case I'm running a special vdev. It works extremely well for content that can be deduped (such as VMs). I've never noticed any extra memory being used either, as I believe this is handled by the special vdev. I'm using Unraid; I tried TrueNAS SCALE, but its containerisation is just awful. Hopefully they figure out what market they're aiming for and fix their strategy in a future version not too far away.
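For anyone curious, adding a special vdev to an existing pool looks roughly like this (device names are placeholders; the special vdev should be mirrored, because losing it makes the whole pool unreadable):

# add a mirrored special vdev to hold metadata and dedup tables
zpool add tank special mirror \
  /dev/disk/by-id/nvme-EXAMPLE_A /dev/disk/by-id/nvme-EXAMPLE_B
# confirm the new vdev shows up in the layout
zpool list -v tank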

Edited by Marshalleq
