ZFS plugin for unRAID


steini84


I wonder if this should be asked on the ZFS forums first, to confirm what kind of activity happens and how often.  It may be that, e.g., some kind of metadata check happens every 10 minutes or so.  I used to wonder about this too, but I've recently upgraded to 16TB Seagates that have some kind of magic 'nearly the same as spin-down' idle behaviour, so I'm not so concerned now.  It's a good question though!

Link to post

I have setup a test system with

  • zfs 2.0.4 on Unraid 6.9.1
  • zpool of only one SSD drive
  • znapsend making snapshots hourly
  • one dataset for Docker
  • placed docker.img in that dataset
  • have one docker running (Telegraf)
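The test layout above could be recreated with commands roughly like the following.  This is a hypothetical sketch: the device path, pool name, and dataset name are placeholders, not the poster's actual configuration.

```shell
# Single-SSD pool, no redundancy (device path is a placeholder)
zpool create testpool /dev/sdX

# Dedicated dataset for Docker
zfs create testpool/docker
zfs set compression=lz4 testpool/docker
zfs set atime=off testpool/docker

# Point "Docker vDisk location" in Unraid's Docker settings at
# /mnt/testpool/docker/docker.img, then start a single container.
```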

So far I could not reproduce the error.  But there are some key differences to my main system when I had the problem, it had

  • raidz1 of 3 drives
  • zfs 2.0.3 on Unraid 6.9.0
  • multiple docker running
  • drives were classic spinning rust

Anyone got an idea on how to reproduce the error?  I wouldn't call it fixed in 2.0.4/6.9.1 just yet.

Link to post

I'm sorry if I'm misinterpreting your problem, but have you checked for any hardware faults? Please try some CPU and memory tests (XMP was a random crasher for me, causing some drive and network instabilities on a Threadripper)

Link to post
1 hour ago, NeoJoris said:

I'm sorry if I'm misinterpreting your problem, but have you checked for any hardware faults? Please try some CPU and memory tests (XMP was a random crasher for me, causing some drive and network instabilities on a Threadripper)

Which problem are you referring to?

Link to post

@steini84 I have run 2.0.4 on 6.9.1 on my test system without a problem so far, and switched my main system (raidz1, 24 Docker containers in a btrfs img) to unstable.  It has been running fine for the last 8 hours, whereas 2.0.3 under 6.9.0 had the system lock up within minutes.  It looks like the issue is fixed in either Unraid 6.9.1 or ZFS 2.0.4.  A verification from someone else would be helpful though.

Link to post
42 minutes ago, Arragon said:

@steini84 I have run 2.0.4 on 6.9.1 on my test system without a problem so far, and switched my main system (raidz1, 24 Docker containers in a btrfs img) to unstable.  It has been running fine for the last 8 hours, whereas 2.0.3 under 6.9.0 had the system lock up within minutes.  It looks like the issue is fixed in either Unraid 6.9.1 or ZFS 2.0.4.  A verification from someone else would be helpful though.

@Joly0 & @Marshalleq can you test/verify this?

Link to post
3 minutes ago, Joly0 said:

Nope, for me still whole system lockup when having the docker.img on my zfs array on 2.0.4/6.9.1

Now that is strange.  I have docker.img in its own dataset on a raidz1.  My settings:

NAME         PROPERTY                    VALUE                                       SOURCE
tank/Docker  type                        filesystem                                  -
tank/Docker  creation                    Sun Feb 21 15:50 2021                       -
tank/Docker  used                        106G                                        -
tank/Docker  available                   22.8T                                       -
tank/Docker  referenced                  11.4G                                       -
tank/Docker  compressratio               1.66x                                       -
tank/Docker  mounted                     yes                                         -
tank/Docker  quota                       none                                        default
tank/Docker  reservation                 none                                        default
tank/Docker  recordsize                  128K                                        default
tank/Docker  mountpoint                  /mnt/tank/Docker                            inherited from tank
tank/Docker  sharenfs                    off                                         default
tank/Docker  checksum                    on                                          default
tank/Docker  compression                 lz4                                         inherited from tank
tank/Docker  atime                       off                                         inherited from tank
tank/Docker  devices                     on                                          default
tank/Docker  exec                        on                                          default
tank/Docker  setuid                      on                                          default
tank/Docker  readonly                    off                                         default
tank/Docker  zoned                       off                                         default
tank/Docker  snapdir                     hidden                                      default
tank/Docker  aclmode                     discard                                     default
tank/Docker  aclinherit                  restricted                                  default
tank/Docker  createtxg                   14165                                       -
tank/Docker  canmount                    on                                          default
tank/Docker  xattr                       sa                                          inherited from tank
tank/Docker  copies                      1                                           default
tank/Docker  version                     5                                           -
tank/Docker  utf8only                    off                                         -
tank/Docker  normalization               none                                        -
tank/Docker  casesensitivity             sensitive                                   -
tank/Docker  vscan                       off                                         default
tank/Docker  nbmand                      off                                         default
tank/Docker  sharesmb                    off                                         default
tank/Docker  refquota                    none                                        default
tank/Docker  refreservation              none                                        default
tank/Docker  guid                        8024818214154210388                         -
tank/Docker  primarycache                all                                         default
tank/Docker  secondarycache              all                                         default
tank/Docker  usedbysnapshots             45.6G                                       -
tank/Docker  usedbydataset               11.4G                                       -
tank/Docker  usedbychildren              49.5G                                       -
tank/Docker  usedbyrefreservation        0B                                          -
tank/Docker  logbias                     latency                                     default
tank/Docker  objsetid                    15006                                       -
tank/Docker  dedup                       off                                         default
tank/Docker  mlslabel                    none                                        default
tank/Docker  sync                        standard                                    inherited from tank
tank/Docker  dnodesize                   legacy                                      default
tank/Docker  refcompressratio            1.49x                                       -
tank/Docker  written                     9.09M                                       -
tank/Docker  logicalused                 171G                                        -
tank/Docker  logicalreferenced           17.0G                                       -
tank/Docker  volmode                     default                                     default
tank/Docker  filesystem_limit            none                                        default
tank/Docker  snapshot_limit              none                                        default
tank/Docker  filesystem_count            none                                        default
tank/Docker  snapshot_count              none                                        default
tank/Docker  snapdev                     hidden                                      default
tank/Docker  acltype                     off                                         default
tank/Docker  context                     none                                        default
tank/Docker  fscontext                   none                                        default
tank/Docker  defcontext                  none                                        default
tank/Docker  rootcontext                 none                                        default
tank/Docker  relatime                    off                                         default
tank/Docker  redundant_metadata          all                                         default
tank/Docker  overlay                     on                                          default
tank/Docker  encryption                  off                                         default
tank/Docker  keylocation                 none                                        default
tank/Docker  keyformat                   none                                        default
tank/Docker  pbkdf2iters                 0                                           default
tank/Docker  special_small_blocks        0                                           default
tank/Docker  org.znapzend:zend_delay     0                                           inherited from tank
tank/Docker  org.znapzend:enabled        on                                          inherited from tank
tank/Docker  org.znapzend:src_plan       7days=>1hours,30days=>4hours,90days=>1days  inherited from tank
tank/Docker  org.znapzend:mbuffer_size   1G                                          inherited from tank
tank/Docker  org.znapzend:mbuffer        off                                         inherited from tank
tank/Docker  org.znapzend:tsformat       %Y-%m-%d-%H%M%S                             inherited from tank
tank/Docker  org.znapzend:recursive      on                                          inherited from tank
tank/Docker  org.znapzend:pre_znap_cmd   off                                         inherited from tank
tank/Docker  org.znapzend:post_znap_cmd  off                                         inherited from tank
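When comparing dumps like the one above between two systems, filtering out the defaults keeps the diff manageable.  A sketch, using the dataset name from the listing above (the temp file paths are placeholders):

```shell
# Show only properties that differ from ZFS defaults, i.e. those set
# locally or inherited from the pool root.
zfs get -s local,inherited all tank/Docker

# To compare two systems, save scriptable output on each box and diff it.
zfs get -s local,inherited -H -o property,value all tank/Docker > /tmp/props-mine.txt
diff /tmp/props-mine.txt /tmp/props-theirs.txt
```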

Running the following images 

linuxserver/sabnzbd
linuxserver/jackett
wordpress
linuxserver/sonarr
linuxserver/nzbhydra2
linuxserver/plex
linuxserver/tautulli
linuxserver/nextcloud
linuxserver/lidarr
linuxserver/mariadb
binhex/arch-qbittorrentvpn
linuxserver/radarr
telegraf
grafana/grafana
prom/prometheus
influxdb
jlesage/nginx-proxy-manager
b3vis/borgmatic
boerderij/varken
spaceinvaderone/macinabox
spaceinvaderone/vm_custom_icons
binhex/arch-krusader
jlesage/jdownloader-2

 

Link to post

OK, I can only spot a few differences between the settings of my dataset and yours: snapdir is visible on my end, createtxg is 1 instead of 14165, and xattr is set to on instead of sa.  Other than that, it's basically the same.
But there are still a few differences on your end compared to mine: I am running raidz2 instead of raidz1, and my docker.img is not on a separate dataset but simply on a combined dataset for basically everything.  And other than jdownloader, nextcloud and mariadb, I have no containers in common with you.

 

@Marshalleq Could you tell us a bit about your configuration and the settings of your dataset? Maybe that could help.

Link to post

Hey, thanks so much for your help Arragon, this is great!  So if Joly0 has those 3 dockers in common, of those I have nextcloud and mariadb.  There are plenty of others that I have in common with Arragon, but I'll leave them out as they seem to be irrelevant.  I also have a test box that runs Unraid but is basically only used for encoding, so it has nothing in common with what either of you have listed as far as dockers go; I will upgrade it all to the latest shortly and see if it's impacted.  As for ZFS setups, on my test box I have:

  • AMD Threadripper 1950X with 128GB RAM
  • raidz1 of 3x8TB Seagate HDDs (SATA)
  • Mirror of 4x240GB Intel M.2 SSDs (480GB usable, SATA)
  • XFS on a 128GB Intel NVMe for testing docker.img

 

The other box is similar with

  • Intel Xeon E5-2620 v3 with 96GB ECC RAM
  • raidz1 of 4x16TB Seagates (SATA)
  • Mirror of 2x150GB + 2x480GB Intels (SATA)
  • Mirror + 1x960GB as a standalone ZFS (SATA)

This box has some of the disks on Dell Perc H310 in IT mode.

 

Both of these boxes have exhibited the issue.

 

Both systems have docker.img on mirrored zfs shared with active docker configs.

 

One thing I don't see listed above is fstrim, which I have enabled on all my SSDs.  I also have xattr set to sa, and I am using the new compression type zstd-1.
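As a side note on trim: with OpenZFS 2.x there are two ways to trim a pool.  Which (if either) applies here is an assumption, since the post only mentions fstrim; the pool name is a placeholder:

```shell
# Periodic manual trim (roughly what an fstrim-style cron job would do):
zpool trim tank
zpool status -t tank        # shows per-device trim progress/state

# Or enable continuous automatic trim on the pool:
zpool set autotrim=on tank
zpool get autotrim tank
```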

 

I have attached output of zpool get all and one zfs get all from one snapshot

 

Also, a screenshot of my docker settings is below.  What is your 'host access to custom networks' set to?  That's something else I've changed from the default.

Attachments: ScreenShot 2021-03-13 at 9.27.30 AM.png, ZpoolGetAll.rtf, ZFSGetAll.rtf

Edited by Marshalleq
Link to post

Thanks for the information.

In the next few days I am going to run some more precise tests to see exactly which containers are affected by this issue.

 

I don't think anything from zpool get all causes the issue; we have mostly the same settings, and the ones we don't share cancel each other out or are irrelevant, such as createtxg or the snapdir option.

 

Also, it seems not to make a difference whether it's raidz1 or raidz2, but we will see.  Let's just try to pin down this issue as precisely as possible.

Edited by Joly0
Link to post

Some time ago there was an issue with host access to custom Docker networks; you should probably disable that and check.  I've updated my test system now and it 'seems' to be OK with the latest ZFS and the latest Unraid.  If it is, I will try adding those few Docker containers we have in common and see if there's an impact.

Link to post
1 hour ago, Joly0 said:

Also, it seems not to make a difference whether it's raidz1 or raidz2, but we will see.  Let's just try to pin down this issue as precisely as possible.

I can confirm that, since I still had issues with 6.9.0/2.0.3 and my raidz1 didn't change.

Link to post
  • 2 weeks later...

Hi there,

 

I'm about to make the switch from TrueNAS to Unraid after spending several painful months fighting with poorly written free software.  That said, the one thing I really liked about it was ZFS.  I've been reading/watching various tutorials on setting up ZFS on Unraid, and it looks pretty straightforward to get up and running.

 

One of the nice things about TrueNAS and ZFS is that you could blow away your config, reinstall, and all you needed to do was import the ZFS pool and you were back in business.

 

My question: can anyone out there who currently runs the ZFS plugin on Unraid provide some guidance on how difficult it is to manage/recover a ZFS volume in Unraid when something like a power failure abruptly takes out the Unraid configuration?

 

I don't want to create a giant make-work recovery project (in the event a piece of hardware fails) if sticking with native btrfs is the safer path.

Link to post
26 minutes ago, zetabax said:

My question: can anyone out there who currently runs the ZFS plugin on Unraid provide some guidance on how difficult it is to manage/recover a ZFS volume in Unraid when something like a power failure abruptly takes out the Unraid configuration?

 

I just moved a ZFS pool from one unRAID server to another, and I'm fairly sure the ZFS plugin mounted the pool for me automatically.  I may have issued a "zfs mount -a" command (I can't remember), but it was no more complicated than that.
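For reference, the usual sequence for moving or recovering a pool looks roughly like this; the pool name is a placeholder, and as noted above, the plugin may handle the import step for you on boot:

```shell
# On the old server (optional but clean; skip if the box is already dead):
zpool export tank

# On the new/reinstalled server: scan attached disks for importable pools...
zpool import

# ...then import by name.  -f forces the import if the pool wasn't
# cleanly exported (e.g. after a power failure on the old host).
zpool import -f tank

# Mount any datasets that didn't mount automatically:
zfs mount -a
```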

Edited by ConnectivIT
Link to post
On 3/5/2021 at 3:22 PM, Marshalleq said:

Personally I think the future of Unraid lies in its unRAID driver being used for mass storage, and ZFS for everything else.  However, I've migrated completely away from the unRAID raid driver; I found it made my system too sluggish.

 

There has been a misunderstanding historically whenever ZFS support was raised as a feature request.  i.e. it was always talked down because people assumed that there was a desire to replace the unRAID array with ZFS.  Some may wish to do that, but as you say, it's best suited to replacing the "pool" function, not the array.

 

Support for multiple pools was probably a pre-requisite for ZFS support.  What's next?  Maybe to get ZFS pools supported in Unassigned Devices or a new plugin?

 

Ultimately that work could potentially be pulled in to unRAID proper for ZFS support, as we've seen with other plugins.

Link to post
31 minutes ago, zetabax said:

I don't want to create a giant make-work recovery project (in the event a piece of hardware fails) if sticking with native btrfs is the safer path.

I would at least stick to one btrfs volume for the containers.

 

The question is what you want to do with Unraid.  As a cold/hot storage system it is perfect for me: the Array is the cold tier (slow copy speed, but it houses all the data that I don't use on a daily basis), and the Cache pool(s) are the hot tier (normal copy speed; data for selected shares can be automatically transferred over to the Array on a scheduled interval, or you can choose to leave selected shares on the Cache).

Link to post
4 minutes ago, ich777 said:

I would at least stick to one btrfs volume for the containers.

 

Probably a good idea for the docker image ("Docker vDisk location:" in Docker Settings).  I just have this running on a single XFS partition at the moment; it's not that much of a pain to recover from if it dies.

 

For the "appdata" docker files, ZFS works great.  Out of the ~30 or so dockers I run, the only issue I've had was with lancache-bundle; the workaround for that is here
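One common pattern for appdata on ZFS (an assumption here, not something the poster describes) is a dataset per app, so each container's config can be snapshotted and rolled back independently.  Pool and dataset names are placeholders:

```shell
# Hypothetical per-app layout under a parent appdata dataset.
zfs create tank/appdata
zfs create tank/appdata/nextcloud
zfs create tank/appdata/mariadb

# Snapshot one app's config before an upgrade, and roll back if it goes bad:
zfs snapshot tank/appdata/mariadb@pre-upgrade
zfs rollback tank/appdata/mariadb@pre-upgrade
```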

Link to post

@ich777 I have not yet been able to test it, because the whole system is my production server, which is in more demand than usual, so it's hard for me to just shut it down for a few hours to test.

If I could get Unraid running in VirtualBox or VMware, I could test it in that environment, but unfortunately I don't have a second system I can use for this, so I have to wait until my server is not in use for a few hours and then test.

Link to post

I have a 2-disk ZFS pool being used for VMs on one server.  These are older 3TB Seagates, and one is showing 178 pending and 178 uncorrectable sectors.  An unRAID parity check usually finds these errors are spurious and resets everything to zero.  Is there anything similar I can do with ZFS?
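The closest ZFS equivalent to a parity check is a scrub, which reads and verifies every block against its checksum and repairs anything recoverable from redundancy.  Roughly ("tank" is a placeholder pool name, and repair assumes the 2-disk pool is a mirror rather than a stripe):

```shell
# Read-verify all data in the pool; with a mirror, bad blocks are
# rewritten from the healthy copy.
zpool scrub tank

# Watch progress and see checksum/read/write error counts per device:
zpool status -v tank

# Once you're satisfied the errors were transient, reset ZFS's counters
# (this does not clear the drive's own SMART pending-sector count):
zpool clear tank
```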

Link to post
