ZFS plugin for unRAID


steini84


4 hours ago, ich777 said:

Yes.

 

Many people are reporting issues with ZFS and Docker on 6.10.0


My main concern is why so many people are having issues when I don't... I run everything from ZFS (Docker, libvirt, ...). Maybe it's because I created a partition first, or because I use the full path to the partition (/dev/sdX1), who knows. It would be really interesting; I'll have to do some tests when I get more time on my test system.

 

I've also seen some GitHub issues where Docker fails in different scenarios when everything is on ZFS.

 

 

I have a theory on that, but it's probably an unpopular one:

ZFS has gotten much more popular in recent years, with a lot of folks diving in head first. They hear how it protects data, how easy snapshots, replication, and backups are, and they migrate everything over without first learning what it really is and isn't, what it needs, how to maintain it, and what the tradeoffs are.

Then when something eventually goes sideways (neglected scrubs, a power outage during a resilver, changing xattr with data already in the pool, setting sync to disabled for better write performance, whatever; any number of things, environmental or user-inflicted), the filesystem is resilient enough that it 'still works', so it's expected that anything on it still should as well...

Hell, I've been neck-deep in storage my entire career, and using ZFS since Sun was still Sun, and I *STILL* find myself having to undo stupid crap I did in haste on occasion.

The fact that you partitioned the disks first wouldn't change any functional behavior in the driver (similar to running ZFS on sparse files; the same code/calls are used). Either it's fixed, or I'd simply taxed the driver beyond what it was optimized for at the time; at least that's my feeling anyway.

Link to comment
22 hours ago, BVD said:

 

I have a theory on that, but it's probably an unpopular one:

ZFS has gotten much more popular in recent years, with a lot of folks diving in head first. They hear how it protects data, how easy snapshots, replication, and backups are, and they migrate everything over without first learning what it really is and isn't, what it needs, how to maintain it, and what the tradeoffs are.

Then when something eventually goes sideways (neglected scrubs, a power outage during a resilver, changing xattr with data already in the pool, setting sync to disabled for better write performance, whatever; any number of things, environmental or user-inflicted), the filesystem is resilient enough that it 'still works', so it's expected that anything on it still should as well...

Hell, I've been neck-deep in storage my entire career, and using ZFS since Sun was still Sun, and I *STILL* find myself having to undo stupid crap I did in haste on occasion.

The fact that you partitioned the disks first wouldn't change any functional behavior in the driver (similar to running ZFS on sparse files; the same code/calls are used). Either it's fixed, or I'd simply taxed the driver beyond what it was optimized for at the time; at least that's my feeling anyway.

Well, the truth hurts.

 

I share that point of view. Everyone is looking for the best ZFS optimization for performance, and most of the "tutorials" you can find on the net don't talk about the risks involved; most of the people writing them aren't even aware of them. The best sources of information I've found, apart from LVL1Tech, are the websites listed at the beginning of this topic.
Good books are rare, by the way. Mostly in English, of course, and for already-invested users they say too little about the daily operations needed to maintain ZFS. Even if you have to figure out how to manage your ZFS yourself, it's a bit disappointing to buy a €20 book that teaches you to create a pool from scratch but doesn't explore data transfers, such as the difference between zfs send/receive and zfs clone.

As for ZFS on Unraid itself, I haven't run into any trouble on Unraid 6.10.

Only a storage controller problem with my NVMe SSD, which was suddenly not recognized with Intel VT-d (IOMMU) enabled, whatever the Unraid version; a well-known problem with the Crucial P5.
Also, one of my partitions (mounted via /dev/sdX at the time) suddenly changed after a reboot, so I recreated the pool with /dev/disk/by-id/ and haven't had any problems since. That wasn't related to a specific Unraid version either.



 

Edited by gyto6
Link to comment

I've been planning a new NAS (replacing a Synology unit), and was torn between TrueNAS and Unraid.

TrueNAS was very appealing at first, due to ZFS. As mentioned above, it looks very shiny to new users, as if it can do no wrong.

All this talk of bit rot scares me. Some people are 100% "it's real, protect your shit or definitely lose it". Then I'm all panicked, and change my mind to TrueNAS. But then I read others' counter-posts, and they're all "it's not really a thing on modern hardware, get ECC RAM, stop panicking, and call it a day". Both camps make good points 🤷‍♂️.

This is why I got out of PC gaming and bought an Xbox a decade ago. Tech can be so opinion-based sometimes lol 🤦‍♂️.

Ultimately though, I believe I've settled on Unraid. I'm so over "managing" the tech in my house, and Unraid sounds stupid simple to use. I just want to power something on, boot up Plex/Emby, and let it do its thing. The last two NASes I've had over the last 8 years have been Synology units, and I've literally logged into their respective GUIs maybe a half dozen times between them.

 

Mmm, I do think about that sweet ZFS though 🤔.

I was considering running this plugin to get the best of both worlds. I wonder what official Unraid support will look like, though? How will it work? And is it worth just waiting for that?

Do you think they'll literally just copy this plugin into the installer and call it "officially supported"?

Or do we think it'll be implemented another way? I'm not sure they've mentioned how they'll support it, only that they're looking at it, hey?

Link to comment
On 4/2/2022 at 5:30 AM, te5s3rakt said:

I've been planning a new NAS (replacing a Synology unit), and was torn between TrueNAS and Unraid.

TrueNAS was very appealing at first, due to ZFS. As mentioned above, it looks very shiny to new users, as if it can do no wrong.

All this talk of bit rot scares me. Some people are 100% "it's real, protect your shit or definitely lose it". Then I'm all panicked, and change my mind to TrueNAS. But then I read others' counter-posts, and they're all "it's not really a thing on modern hardware, get ECC RAM, stop panicking, and call it a day". Both camps make good points 🤷‍♂️.

This is why I got out of PC gaming and bought an Xbox a decade ago. Tech can be so opinion-based sometimes lol 🤦‍♂️.

Ultimately though, I believe I've settled on Unraid. I'm so over "managing" the tech in my house, and Unraid sounds stupid simple to use. I just want to power something on, boot up Plex/Emby, and let it do its thing. The last two NASes I've had over the last 8 years have been Synology units, and I've literally logged into their respective GUIs maybe a half dozen times between them.

 

Mmm, I do think about that sweet ZFS though 🤔.

I was considering running this plugin to get the best of both worlds. I wonder what official Unraid support will look like, though? How will it work? And is it worth just waiting for that?

Do you think they'll literally just copy this plugin into the installer and call it "officially supported"?

Or do we think it'll be implemented another way? I'm not sure they've mentioned how they'll support it, only that they're looking at it, hey?

I didn't get all of that...

 

All I can say is that managing ZFS isn't just installing the plugin and watching the system become the most brilliant and powerful thing around. ZFS manages volumes, filesystems, and backups in a customizable way that requires knowledge of your equipment, of volume aggregation, and of your file workload. In the end you have to take real ownership of ZFS, which offers a lot of improvements depending on your devices, and you'll spend months and years optimizing your ZFS system and correcting your mistakes.

 

If using RAID on HDDs was fairly easy, given how settled the technology was, there were at first many concerns about RAID on SSDs, due to write amplification, overprovisioning, cell degradation, etc. And now NVMe drives bring namespaces, sets, endurance groups, and over-the-top speed, not to mention low-latency technologies (Optane, Z-NAND), for now expected mostly as SLOG for databases.

 

There are tons to say about drive technologies before even talking about ZFS: datasets, L2ARC, interrupt trouble with NVMe drives, L2ARC calibration... Tons to talk about...

I don't know TrueNAS well enough, but all I can say is that its GUI helps only a little in getting your hands on ZFS. What helps most is practicing.

 

I've read on the forum that ZFS is unofficially expected for Unraid 6.11, but if you're expecting the GUI to do all the work, you shouldn't.

 

It'll work, the way a smartwatch works for telling the time. But most ZFS users would take a mechanical watch instead, because they like to know how it works. It costs more, but they love it, they take care of it, and in the end the watch lasts longer.

Edited by gyto6
  • Like 1
Link to comment

Hey folks, a minor update about ZFS Master:

 

Today I have released version "2022.04.10.42"; this includes a couple of new things:

 

  • In the dataset creation dialog, there is a new field for setting the permissions of the new dataset.
  • A new button called "Snaps" in the main UI (requires Destructive mode to be on).
  • The "Snaps" button allows some basic administration of a dataset's snapshots: hold a snapshot, release it, roll back to it, and destroy it (the matching zfs commands are sketched below).

 

Any feedback regarding this new functionality, or even an old one, including the UI design, will be appreciated. I expect to keep working on and improving plugin features, like creating snapshots and adding support for volumes; I'll request a specific thread for plugin support, so we can stop flooding this one :P.

 

[screenshot of the new UI]

 

Edited by Iker
  • Like 2
Link to comment
6 hours ago, Iker said:

I'll request a specific thread for plugin support, so we can stop flooding this one :P.

 

Sounds like a good idea indeed. 😁

 

Thanks again, I'm getting my hands on it. For an unknown reason, the "SNAPS" button stays grey, even when the array, Docker, and VMs are stopped.

Do I need to manually create a snapshot for it to become available? *Some tests have shown that the answer is YES*

 

It would be even cooler if we could trigger snapshot creation on the local pool with a single button in your GUI.
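
For reference, manually creating one is a one-liner in the meantime; a sketch with a hypothetical pool named tank:

# Snapshot the pool's root dataset and, with -r, every dataset below it
zfs snapshot -r tank@manual-$(date +%Y%m%d-%H%M)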

 

Thanks again for this welcome update! 😄

 

Edit: The plugin cannot manage snapshots at the pool level, only per dataset for now. I don't know if this is by design.

Edited by gyto6
Link to comment
6 hours ago, gyto6 said:

 

Sounds like a good idea indeed. 😁

 

 

Most of the answers are "yes". Snapshot administration is per dataset; otherwise it could get incredibly complicated (my main pool has 1400 snapshots, for example). If there are no snapshots for the dataset, the button is disabled. The functionality for taking snapshots from the GUI is on the way, probably in the next update.

 

BTW, Support Topic is ready :) 

 

 

  • Like 1
Link to comment

Hello,

 

I'm trying to use ZFS because of the snapshot and shadow copy functions. But I have the problem that I cannot use Docker on my single-NVMe ZFS pool. I created everything according to the first post, but every time I try to add a container it stalls, and I cannot restart the machine or stop the array; I have to hard-reset the system. I use Unraid 6.9.2. Below you can see a screenshot where it has said "Please wait" for a small container like AdGuard for an hour now. 2 of 4 CPU cores are at 100%. Do you have an idea?

 

[screenshot: Docker page stuck at "Please wait"]

 

My Docker config:

[screenshot: Docker configuration]

 


Link to comment
47 minutes ago, Jack8COke said:

Hello,

 

I'm trying to use ZFS because of the snapshot and shadow copy functions. But I have the problem that I cannot use Docker on my single-NVMe ZFS pool. I created everything according to the first post, but every time I try to add a container it stalls, and I cannot restart the machine or stop the array; I have to hard-reset the system. I use Unraid 6.9.2. Below you can see a screenshot where it has said "Please wait" for a small container like AdGuard for an hour now. 2 of 4 CPU cores are at 100%. Do you have an idea?

 

Yep, put your docker.img on a zvol formatted with the same filesystem as your docker.img. Point your docker.img there (copy the old one to the new location) and you're good.

Create a script to mount the zvol automatically at the first "Start Array".

 

Refer to my older post for the specific commands.
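
These aren't the exact commands from that older post, just a minimal sketch of the idea, with a hypothetical pool name (ssdpool) and zvol size:

# Create a 30G zvol on the pool
zfs create -V 30G ssdpool/docker-zvol

# Format it with the filesystem your docker.img uses (xfs here; btrfs works the same way)
mkfs.xfs /dev/zvol/ssdpool/docker-zvol

# Mount it and copy the existing image over (default Unraid image location assumed)
mkdir -p /mnt/ssdpool/docker
mount /dev/zvol/ssdpool/docker-zvol /mnt/ssdpool/docker
cp /mnt/user/system/docker/docker.img /mnt/ssdpool/docker/

The original procedure partitions the zvol with cfdisk (GPT label) first and formats the partition rather than the whole device; the idea is the same either way.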

Edited by gyto6
Link to comment
11 hours ago, gyto6 said:

 

Yep, put your docker.img on a zvol formatted with the same filesystem as your docker.img. Point your docker.img there (copy the old one to the new location) and you're good.

Create a script to mount the zvol automatically at the first "Start Array".

 

Refer to my older post for the specific commands.

 

 

I'm sorry, but I'm not able to do this. Do I have to create a dataset before I run your command? Or does the first command already do that? Because I cannot find docker under /mnt/ssdpool.

Which label type do I have to use with the cfdisk command?

When I run the last command, it says:

mount: /mnt/ssdpool/docker: mount point does not exist.

 

 

Link to comment
17 minutes ago, Jack8COke said:

 

 

I'm sorry, but I'm not able to do this. Do I have to create a dataset before I run your command? Or does the first command already do that? Because I cannot find docker under /mnt/ssdpool.

Which label type do I have to use with the cfdisk command?

When I run the last command, it says:

mount: /mnt/ssdpool/docker: mount point does not exist.

 

 

Create the folder and it'll work.

For the label type, always GPT. If you're not using GPT, it's because you already know what you're doing.

Edited by gyto6
Link to comment
On 10/9/2021 at 2:50 AM, BasWeg said:

 

I've done this via dataset properties.

To share a dataset:

zfs set sharenfs='rw=@<IP_RANGE>,fsid=<FileSystemID>,anongid=100,anonuid=99,all_squash' <DATASET>

<IP_RANGE> is something like 192.168.0.0/24, to restrict rw access. Just have a look at the NFS share properties.

<FileSystemID> is a unique ID you need to set. I started with 1 and increased the number with every shared dataset.

<DATASET> is the dataset you want to share.

 

The magic was the FileSystemID; without setting this ID, it was not possible to connect from any client.

 

To unshare a dataset, you can easily set:

zfs set sharenfs=off <DATASET>

 

 

I've been able to set my ZFS dataset to enable NFS sharing, as per the above quote. I've verified with ``zfs get sharenfs <POOLNAME>/<DATASET>`` and it lists the share with the settings I used. I also made sure that the mountpoint/dataset is using 99:100 for the owner, but I still can't connect to the share from a client.

 

I can mount the ZFS dataset via Unassigned Devices on unRAID, but my Mac and Linux boxes won't mount it. The reported error isn't very revealing: "operation not permitted". My Google-fu isn't helping much. Any thoughts?
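
For anyone debugging the same thing, a couple of client-side checks can narrow it down; a sketch with a hypothetical server IP and dataset mountpoint:

# Ask the server which paths it actually exports
showmount -e 192.168.0.10

# Try a manual mount from a Linux client, using the dataset's mountpoint as the export path
mount -t nfs 192.168.0.10:/mnt/tank/share /mnt/test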

Edited by AgentXXL
Clarification
Link to comment
  • 2 weeks later...

Well, moving my ZFS pool over to TrueNAS was a failure; it looks like an AMD issue. I have since moved back and started over fresh with Unraid and the existing ZFS pool.

What is the best way to get this to connect to another Unraid setup? It's mostly going to serve a Linux environment.

Link to comment
18 hours ago, anylettuce said:

Well, moving my ZFS pool over to TrueNAS was a failure; it looks like an AMD issue. I have since moved back and started over fresh with Unraid and the existing ZFS pool.

What is the best way to get this to connect to another Unraid setup? It's mostly going to serve a Linux environment.

Sanoid/Syncoid or the zfs send/receive commands will get you there via SSH replication.

Maybe you were asking about another way to clone your pool?
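
A minimal replication sketch with plain zfs send/receive over SSH, using hypothetical pool, dataset, and host names:

# Take a snapshot and send the full stream to the other machine
zfs snapshot tank/data@repl1
zfs send tank/data@repl1 | ssh root@otherserver zfs receive backup/data

# Later, send only the changes since the previous snapshot (incremental)
zfs snapshot tank/data@repl2
zfs send -i tank/data@repl1 tank/data@repl2 | ssh root@otherserver zfs receive backup/data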

Edited by gyto6
Link to comment

I have just migrated from TrueNAS Core to Unraid, and I'm enjoying the platform thus far.

I completed my ZFS setup on Unraid via SpaceInvaderOne's "setting up a native ZFS pool on unraid" YouTube guide (method one, via the terminal, without installing TrueNAS as a VM).

 

 

I followed his steps verbatim, and I can confirm the ZFS Companion plugin reports my ZFS pool as healthy and online.

 

Setup:

USB: Boot drive

Array drive: 1 x 500GB SSD

ZFS drive(s): 6 x 4TB in raidz1

 

I can confirm I have access to the array drive within Windows, and I can read and write to the drive.

However, when I try to write any data to the ZFS folder (the symlink directory added per the steps shown in the attached YouTube video at 31:50), Windows advises: "you need permission to perform this action".

 

What steps are required so I can write data to my ZFS dataset from Windows?

I will be installing Plex shortly. In TrueNAS Core, users had the option of giving jails read-only access to data within ZFS; is this possible with Unraid?

 

Thanks in advance 

 

Link to comment
21 hours ago, Atoz said:

I have just migrated from TrueNAS Core to Unraid, and I'm enjoying the platform thus far.

I completed my ZFS setup on Unraid via SpaceInvaderOne's "setting up a native ZFS pool on unraid" YouTube guide (method one, via the terminal, without installing TrueNAS as a VM).

 

 

I followed his steps verbatim, and I can confirm the ZFS Companion plugin reports my ZFS pool as healthy and online.

 

Setup:

USB: Boot drive

Array drive: 1 x 500GB SSD

ZFS drive(s): 6 x 4TB in raidz1

 

I can confirm I have access to the array drive within Windows, and I can read and write to the drive.

However, when I try to write any data to the ZFS folder (the symlink directory added per the steps shown in the attached YouTube video at 31:50), Windows advises: "you need permission to perform this action".

 

What steps are required so I can write data to my ZFS dataset from Windows?

I will be installing Plex shortly. In TrueNAS Core, users had the option of giving jails read-only access to data within ZFS; is this possible with Unraid?

 

Thanks in advance 

 

Hey Atoz,

 

I followed the same video too and had the same permissions issues you did on the symlink shares he recommends. Not sure if something changed in the plugins after @SpaceInvaderOne created his awesome tutorials. I solved it by changing the permissions of my ZFS top-level mount from the default 755 to 775 through the Unraid terminal. This gave it read & write access over SMB instead of read-only. Not 100% sure this is the right way to make it work, but it is a quick solution.
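
For reference, a minimal version of that permissions change, with a hypothetical pool mounted at /mnt/tank:

# Check the current permissions on the pool's top-level mountpoint
ls -ld /mnt/tank

# Change the default 755 to 775 so the group also gets write access
chmod 775 /mnt/tank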

 

There has to be a better way to share these pools, though, as the other gotcha I ran into is that the symlink (and therefore the ZFS storage pool) appears limited by the array drive's size rather than the actual size of the pool, i.e. 250GB vs 50TB.

Link to comment
  • 2 weeks later...

If you get an error message like this while upgrading to 6.10.0:

[screenshot: plugin update error message]

 

You can safely ignore it; the plugins are already built, and this is caused by the kernel detection failing on 6.10.0.

 

The plugin packages will be downloaded on boot (as long as there is an active Internet connection on boot).

Link to comment
1 hour ago, Arragon said:

Does that mean it is safe now to go to 6.10? I remember people having problems, especially with Docker on ZFS (datasets; apparently zvols seemed OK). Can anyone confirm?

I got the message described by @ich777, and it indeed downloaded ZFS at boot.

Zvols and datasets are still working; everything is fine for me.

  • Like 1
Link to comment
1 hour ago, Arragon said:

So no problems while running Docker on ZFS anymore?

 

From what I understand, it has long been the case that some people report issues with Docker on ZFS and some have none. This might be because ZFS only has problems with specific containers?

 

I've had issues with containers using the "sendfile" syscall on ZFS previously:

 

But it seems likely this is fixed now:

https://github.com/openzfs/zfs/issues/11151

 

Could this have caused some of the other docker + ZFS issues seen in the past?

 

I've had issues with docker + ZFS previously (both using docker.img and using a direct file path on ZFS).  I've never used ZFS zvols.  I don't have the bandwidth right now to try migrating this back to ZFS.  I will try to revisit this when 6.10 is released.

Link to comment
5 hours ago, Arragon said:

Does that mean it is safe now to go to 6.10? I remember people having problems, especially with Docker on ZFS (datasets; apparently zvols seemed OK). Can anyone confirm?

 

The problem, insofar as I'm aware, is/was related to automatically created snapshots of Docker image data (each individual layer of each image), to the point that the filesystem would start encountering performance issues. Basically, so many filesystems ended up getting created that the system would grow sluggish over time.

 

Not everyone has enough layers/images that they'd encounter this or need to care, but in general you'd be best suited having the docker img stored on a zvol anyway, IMO. Containers are meant to be disposable and, at a minimum, shouldn't have snapshots of them cluttering up your snapshot list.
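
A quick way to see how much of that clutter has accumulated; a sketch, assuming a hypothetical pool named tank:

# Count the filesystems and snapshots on the pool (Docker's zfs driver creates one dataset per image layer)
zfs list -t filesystem -r tank | wc -l
zfs list -t snapshot -r tank | wc -l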

Link to comment
5 hours ago, BVD said:

The problem, insofar as I'm aware, is/was related to automatically created snapshots of Docker image data (each individual layer of each image), to the point that the filesystem would start encountering performance issues. Basically, so many filesystems ended up getting created that the system would grow sluggish over time.

Haha, who would do that? *hastily deletes 51,000 znapzend snapshots*

  • Like 1
Link to comment

First off, sorry if this is a FAQ. I've been searching around and haven't found a definitive answer.

 

I want to set up a ZFS pool for the first time on one of my unRAID servers, using this plugin. But part of my heat/power management strategy is to spin down disks when they're not in use (I know there's a debate about this, but it makes sense for my use case). If my ZFS pool disks are listed in Unassigned Devices, will the UD plugin spin them down automatically while the pool is mounted but not in use? If so, do I need to do anything special in the UD plugin to spin them down, or will they spin down according to the same policy I've set for the main array?

Link to comment
