ZFS plugin for unRAID


steini84

Recommended Posts

6 hours ago, 0edge said:

Only question is what storage-driver is being used

The storage driver for Docker is selected automatically and you don't have to specify it, but please keep in mind that if you use a Docker image or a directory it can complicate things on 6.11.5 and even lead to crashes

 

I would rather put it for the time being on another xfs or btrfs drive and wait for 6.12.0 to release or for RC1 where the Docker directory shouldn't cause any issues anymore (even the overlay2 storage driver caused issues on my system on 6.11.5).

 

6 hours ago, 0edge said:

Typically I would edit the /etc/docker/daemon.json

You could do it like that if you really want to, but you have to create the directory and the file each time you reboot the server, and this part also has to run before emhttp is called in the go file, since otherwise you have to restart the Docker service to load the daemon.json again.

Link to comment
6 hours ago, ich777 said:

The storage driver for Docker is selected automatically and you don't have to specify it, but please keep in mind that if you use a Docker image or a directory it can complicate things on 6.11.5 and even lead to crashes

 

I would rather put it for the time being on another xfs or btrfs drive and wait for 6.12.0 to release or for RC1 where the Docker directory shouldn't cause any issues anymore (even the overlay2 storage driver caused issues on my system on 6.11.5).

 

You could do it like that if you really want to, but you have to create the directory and the file each time you reboot the server, and this part also has to run before emhttp is called in the go file, since otherwise you have to restart the Docker service to load the daemon.json again.

 

So putting my docker and docker_volumes in my ZFS NVME drive can cause issues? Do we know what issues and why that is? Is it more stable on say 6.11.0 or another version?

 

What storage driver is being used automatically right now, and how did you go about selecting overlay2?

Link to comment
29 minutes ago, 0edge said:

So putting my docker and docker_volumes in my ZFS NVME drive can cause issues? Do we know what issues and why that is? Is it more stable on say 6.11.0 or another version?

Exactly, and no, older versions of Unraid will also cause these issues, as you can see if you go back a few pages; it's a known issue.

 

The issue should be resolved with the release of 6.12.0. By the way, with 6.12.0 native ZFS support will be available and the plugin won't be needed anymore.

 

29 minutes ago, 0edge said:

What storage driver is being used automatically right now, and how did you go about selecting overlay2?

Currently it defaults to the native driver for the filesystem where the Docker directory is located (I would strongly recommend that you use a Docker directory in the settings instead of a Docker image file). I know overlay2 may have advantages, but it also has some downsides IIRC.
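To check which storage driver the Docker daemon actually picked, something like the following should work from the Unraid terminal while the Docker service is running (docker info is a standard Docker CLI command; the exact output depends on your setup):

docker info --format '{{.Driver}}'      # prints just the active storage driver, e.g. overlay2 or btrfs
docker info | grep -i 'storage driver'  # same information from the verbose output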

 

However, you can of course add a small routine to the go file which creates the directory /etc/docker and then a daemon.json inside it with the necessary contents to change Docker's storage driver.
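As a rough sketch of that routine - not an official snippet, and overlay2 here is only an example value - the block in /boot/config/go could look something like this, placed before the line that starts emhttp:

# recreate the Docker daemon config on every boot, before emhttp starts the services
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF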

Link to comment

Got home to a false degraded pool error and lots of "ereport.fs.zfs.deadman" in the logs. Restarted and everything is fine. Also been having a lot of ACPI errors. This is a brand new 12700K system, any input?

 

Mar 14 15:06:22 GUNRAID kernel: ACPI BIOS Error (bug): Failure creating named object [\_SB.PC00.PEG1.PEGP._DSM.USRG], AE_ALREADY_EXISTS (20220331/dsfield-184)

Mar 14 15:06:22 GUNRAID kernel: ACPI Error: AE_ALREADY_EXISTS, CreateBufferField failure (20220331/dswload2-477)

Mar 14 15:06:22 GUNRAID kernel: ACPI Error: Aborting method \_SB.PC00.PEG1.PEGP.\_DSM due to previous error (AE_ALREADY_EXISTS) (20220331/psparse-529)

Mar 14 15:06:47 GUNRAID zed[21232]: Diagnosis Engine: error event 'ereport.fs.zfs.deadman'

Mar 14 15:06:47 GUNRAID zed[21232]: Diagnosis Engine: error event 'ereport.fs.zfs.deadman'

Link to comment
9 hours ago, 0edge said:

Got home to a false degraded pool error and lots of "ereport.fs.zfs.deadman" in the logs. Restarted and everything is fine. Also been having a lot of ACPI errors. This is a brand new 12700K system, any input?

 

Mar 14 15:06:22 GUNRAID kernel: ACPI BIOS Error (bug): Failure creating named object [\_SB.PC00.PEG1.PEGP._DSM.USRG], AE_ALREADY_EXISTS (20220331/dsfield-184)

Mar 14 15:06:22 GUNRAID kernel: ACPI Error: AE_ALREADY_EXISTS, CreateBufferField failure (20220331/dswload2-477)

Mar 14 15:06:22 GUNRAID kernel: ACPI Error: Aborting method \_SB.PC00.PEG1.PEGP.\_DSM due to previous error (AE_ALREADY_EXISTS) (20220331/psparse-529)

Mar 14 15:06:47 GUNRAID zed[21232]: Diagnosis Engine: error event 'ereport.fs.zfs.deadman'

Mar 14 15:06:47 GUNRAID zed[21232]: Diagnosis Engine: error event 'ereport.fs.zfs.deadman'

Did you try putting the server into some rice? 😁

Sorry, that one is strange to me. All I can say, going by a quick Google search, is that it sounds like an I/O error.

Maybe your disks have been spinning down and ZFS didn't expect that?
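For anyone digging into the same deadman reports, a few stock ZFS commands can help narrow things down (the pool name is whatever yours is called; the tunable shown is the standard OpenZFS deadman threshold):

zpool status -v                                         # overall pool and device health
zpool events -v | grep -B2 -A20 deadman                 # full details of the deadman ereports
cat /sys/module/zfs/parameters/zfs_deadman_synctime_ms  # deadman threshold in milliseconds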

Edited by gyto6
Link to comment
7 minutes ago, gyto6 said:

Did you try putting the server into some rice? 😁

Sorry, that one is strange to me. All I can say, going by a quick Google search, is that it sounds like an I/O error.

Maybe your disks have been spinning down and ZFS didn't expect that?

 

Ugh, I also had a weird situation where Radarr could not copy a movie over from my downloads folder due to some "I/O error"; it would fail at 13 GB copied every time. I had to delete the file. These are 4x 14 TB in RAIDZ1 (Western Digital Ultrastar DC HC530 WUH721414ALE6L4 0F31284 14TB).

 

 

Here's some more info:

 

[screenshot]

 

 

I deleted the file and a scrub is running. This 4x 14 TB RAIDZ1 pool is the only set of disks running off a 9211-8i 6 Gbps LSI HBA. The disks are all at 33-34°C with fans in their face; the HBA does not have a dedicated fan, but I do have a fan on the side panel which should be hitting it...
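Once the scrub finishes, something like this shows whether any files are still flagged and lets the counters be reset after the hardware side is sorted ('tank' is a placeholder pool name):

zpool status -v tank   # after a scrub, lists permanent errors with the affected file paths
zpool clear tank       # reset the READ/WRITE/CKSUM counters once the underlying cause is fixed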

Edited by 0edge
Link to comment

I opened up the PC and reseated the RAM, lowered the RAM from 3600 MHz to 3200 MHz, reseated the HBA card, replaced the HBA card's SATA cables and moved the cables to the other slot. I have the RAIDZ1 on the LSI HBA card and the mirror on the motherboard SATA, all with brand-new cables. Why do I keep getting CKSUM errors? Every time it's a different random file with an error. Also, the mirror should be rock solid running off the mobo (even if the HBA is bad) with brand-new cables and barely used drives. What is going on here?

 

 

[screenshot]

 

[screenshot]

Link to comment
31 minutes ago, 0edge said:

I underclocked it to 3200 MHz; it's a 3600 MHz kit.

 

It's still a reasonable diagnostic step to run this all the way back to 2133/2400 and see if the issue persists.  If it does and if your memory passes memtest for a reasonable amount of time, then you can move on to other potential causes. 

 

Also:

 

[screenshot]

 

https://www.asus.com/au/motherboards-components/motherboards/tuf-gaming/tuf-gaming-z690-plus-wifi-d4/helpdesk_bios/?model2Name=TUF-GAMING-Z690-PLUS-WIFI-D4

 

[screenshot]

  • Thanks 1
Link to comment
3 hours ago, jortan said:

 

It's still a reasonable diagnostic step to run this all the way back to 2133/2400 and see if the issue persists.  If it does and if your memory passes memtest for a reasonable amount of time, then you can move on to other potential causes. 

 

Also:

 

[screenshot]

 

https://www.asus.com/au/motherboards-components/motherboards/tuf-gaming/tuf-gaming-z690-plus-wifi-d4/helpdesk_bios/?model2Name=TUF-GAMING-Z690-PLUS-WIFI-D4

 

[screenshot]

 

Thanks, I tried 6.12 this morning (also interested in the newer kernel for my QuickSync issues), but it detected my ZFS mirror and not my RAIDZ1. I selected the 4 drives and it identified them as a RAIDZ1, but it wouldn't work. So I updated the BIOS as you pointed out and am back on 6.11.5 re-running scrubs. I will try memtest at some point too, thanks.

Link to comment
4 hours ago, steini84 said:

This plugin is now deprecated, but for all the right reasons:

[screenshot]

https://unraid.net/blog/6-12-0-rc1

 

Congrats to the team on this big milestone, and I'm excited to play with native ZFS on my Unraid setup!


Hey @steini84
Just a heartfelt thank you for your (and @ich777's) work and your plugin over the past years. It always ran stable and without needing constant attention. It saved my a.. x times when I tried something again with Dockers or VMs... and took away the kids' terror of a final death in the Minecraft lava lake... the plugin was always a joy!
Thank you!!!


My questions before updating to 6.12 rc1:


1. Do I have to uninstall the plugins ("ZFS Master for unraid" and "ZFS companion") before the update?
2. Will the plugin "Sanoid for unraid 6" and the syncoid it contains continue to run in 6.12?


Kind regards Andi


 

  • Like 2
Link to comment
48 minutes ago, andber said:

1. Do I have to uninstall the plugins ("ZFS Master for unraid" and "ZFS companion") before the update?

I don't think so, these are only additions to ZFS.

 

48 minutes ago, andber said:

2. Will the plugin "Sanoid for unraid 6" and the syncoid it contains continue to run in 6.12?

It should; this is only an addition to ZFS.

  • Like 1
Link to comment

Thanks @steini84, it is strangely exciting to be able to remove my usb drives from my setup. Just a note that you may not be out of the woods yet because according to the release notes only certain zfs features are supported. I am not sure how those are being limited but I have more than one pool, cache and special vdev, way more than 4 drives, dedup and probably other things. I would have thought zfs was just zfs or maybe the limitations are just in a new gui?  Hoping I can just upgrade and that’s that. Slightly scared to do it! 😮

  • Like 1
Link to comment
29 minutes ago, Marshalleq said:

 I have ... way more than 4 drives

 

Are you referring to this?

 

Up to 4 devices supported in a mirror vdev.

 

If I'm understanding correctly, this should almost never be an issue.  Even a mirrored pool of 8 drives will typically only have 2 per vdev.  Example:

 

[screenshot]

Link to comment
22 minutes ago, Marshalleq said:

it is strangely exciting to be able to remove my usb drives from my setup.

Not yet; at least one array data device must still be assigned. This should change in v6.13 or v6.14 when multiple arrays are implemented.

 

23 minutes ago, Marshalleq said:

but I have more than one pool

Unraid already supported up to 35 pools before v6.12.

 

26 minutes ago, Marshalleq said:

cache and special vdev

There was a bug in -rc1; this will be supported from -rc2 onwards (import only for now, no GUI support to create these yet).

 

27 minutes ago, Marshalleq said:

way more than 4 drives

There's no such limit; you are possibly referring to the 4-way mirror limit. I don't think anyone plans to use more than that for mirrors, and you can have as many striped mirrors as you want. If you really want a >4-way mirror you can create one manually and it will be imported (see the sketch below).
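A minimal sketch of that manual route, with hypothetical device names (by-id paths are generally safer than sdX names):

# creates a pool with a single 5-way mirror vdev, which can then be imported in Unraid
zpool create bigmirror mirror /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf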

 

29 minutes ago, Marshalleq said:

dedup and probably other things.

Also supported; log, cache, dedup, special and spare vdevs are all supported, for import only for now.

 

30 minutes ago, Marshalleq said:

Slightly scared to do it!

No reason for that. If you are using an unsupported pool, for example one with a mirror and a raidz vdev in the same pool, it will just fail to import; it won't be damaged.

 

Also note that pools created by FreeNAS/TrueNAS cannot be imported for now, because they use partition #2 for zfs, but importing these pools is expected to be supported in the near future.

  • Like 1
Link to comment

OK, thanks. So it sounds like, when you say import only, the command line is still available to do whatever we want, and the limitations are in some kind of GUI implementation?

 

If so, I'll do it. I've been looking forward to this; hopefully it isn't too 'unraided', like some weird Unraid array requirement. Seems not, though. Thanks!

Link to comment

Done it. Updated Unraid then removed the ZFS plugin (that's probably better the other way around). Nothing was detected, so I did a zpool import -a and then I was able to stop and start Docker.
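For reference, the manual sequence described here boils down to roughly the following (pool names are whatever the plugin-era pools were called):

zpool import      # list pools that are visible on disk but not yet imported
zpool import -a   # import all of them
zpool status      # confirm that all vdevs and devices show ONLINE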

 

One disk is apparently unavailable, but otherwise it's working as far as I can tell. I'll let it settle before trying anything further. For those wondering, the ZFS version is 2.1.9-1.
 

  • Like 1
Link to comment
1 hour ago, Marshalleq said:

Nothing was detected, so I did a zpool import -a and then I was able to stop and start Docker.

1 hour ago, Marshalleq said:

For those wondering, the ZFS version is 2.1.9-1.

I would recommend that you read the announcement post for Unraid 6.12.0-RC1 on how to import your pools into Unraid; the included ZFS version is also listed there.

Link to comment
13 hours ago, ich777 said:

I would recommend that you read the announcement post for Unraid 6.12.0-RC1 on how to import your pools into Unraid; the included ZFS version is also listed there.

Thanks, I did actually, but you made me think I had missed something, so I revisited it. What I was hoping for was not to use any fudged, not-quite-ZFS weirdness in Unraid, which the post below seems to border on. I'm not brave enough yet to go sticking ZFS into Unraid pool strangeness, because of all the extra bits my pools have that Unraid currently doesn't support - the advice above due to this was 'import only', which I took to mean import -a, not the GUI; am I wrong about that? I'm not confident I know what Unraid is actually doing well enough yet to perform this step. Does the GUI import workaround method below work for the unsupported parts of ZFS, like special vdevs? Thankfully I'm not using encryption, but I nearly set it up this week because really I should be doing that these days. Part of my problem might simply be that I haven't used Unraid pools since before multiple pools were supported - so there will be a small learning curve there.

 

I have about 30 drives in my ZFS setup: dedup, a special vdev for storing metadata, a 6-disk mirror, and a couple of 8-disk raidz2 pools, so it's just a bit to unpick - better to be safe than sorry.

 

Key parts from the announcement I think you're referring to are below.

Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).

 

When creating a new ZFS pool you may choose "zfs - encrypted", which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

 

During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. Future update will include ability to configure the ARC via webGUI, including auto-adjust according to memory pressure, e.g., VM start/stop.
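As an illustration of such an override - assuming the flash-backed path /boot/config/modprobe.d/ and an example 8 GiB cap, both of which you should adjust to your own system:

mkdir -p /boot/config/modprobe.d
echo "options zfs zfs_arc_max=8589934592" > /boot/config/modprobe.d/zfs.conf  # cap the ARC at 8 GiB (value in bytes), applied on next boot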

 

 

Link to comment
2 minutes ago, Marshalleq said:

the advice above due to this was 'import only', which I took to mean import -a, not the GUI; am I wrong about that?

You can import the pools using the GUI, but wait for rc2 if the pool has cache, log, dedup, special and/or spare vdev(s). Then just create a new pool, assign all devices (including any special vdevs) and start the array. What you currently cannot do is add or remove those special devices using the GUI, but you can always add them manually and re-import the pool (a sketch follows below).
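A rough example of that manual route for a special vdev, with placeholder pool and device names:

zpool add tank special mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2  # attach a mirrored special vdev
zpool export tank                                                                    # export, then re-import by assigning all devices (including the new ones) to the pool in the GUI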

  • Like 1
Link to comment
