ZFS plugin for unRAID


steini84


Thanks all, that’s great feedback.

 

I originally looked at adding a cache, but then figured that even if I did, read speeds would still be limited to a single spindle once the data was off-cache. If I moved to a striped pool, I could reuse my disks and get better performance without needing to buy SSDs!

 

The other thing I looked at was Ubuntu 19 + Cockpit GUI.

This gave me Docker & KVM as well as the (beta) Cockpit ZFS GUI manager. 

Would have preferred CentOS 8, but they have moved to Podman instead of Docker, and getting the right combination of ZFS / Cockpit / Podman all working together was problematic.

 

Only looked at FreeNAS as it was the quickest to deploy :) but Proxmox sounds like it could be worth a look as well.

 

Looks like I have some testing to do over the Christmas break!

 

Happy Holidays!

 

 

 

 

Link to comment

Hello!

 

Thanks for this great plugin. I just moved away from FreeNAS to unRAID, and I really like ZFS. I did run into some problems.

 

I've set up an array just because I needed one, with 2x 32 GB SSDs, one of which is for parity.

Then I followed the guide and created the following:

  pool: HDD
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        HDD         ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdp     ONLINE       0     0     0
            sdn     ONLINE       0     0     0
            sdl     ONLINE       0     0     0
            sdk     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
        logs
          sdg       ONLINE       0     0     0
  pool: SSD
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        SSD         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdm     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

With these datasets:

root@unRAID:~# zfs list
NAME                        USED  AVAIL     REFER  MOUNTPOINT
HDD                        4.39M  10.6T      224K  /mnt/HDD
HDD/Backup                 1.36M  10.6T      208K  /mnt/HDD/Backup
HDD/Backup/Desktop          192K  10.6T      192K  /mnt/HDD/Backup/Desktop
HDD/Backup/RPI              991K  10.6T      224K  /mnt/HDD/Backup/RPI
HDD/Backup/RPI/AlarmPanel   192K  10.6T      192K  /mnt/HDD/Backup/RPI/AlarmPanel
HDD/Backup/RPI/Garden       192K  10.6T      192K  /mnt/HDD/Backup/RPI/Garden
HDD/Backup/RPI/Kitchen      192K  10.6T      192K  /mnt/HDD/Backup/RPI/Kitchen
HDD/Backup/RPI/OctoPrint    192K  10.6T      192K  /mnt/HDD/Backup/RPI/OctoPrint
HDD/Film                    192K  10.6T      192K  /mnt/HDD/Film
HDD/Foto                    192K  10.6T      192K  /mnt/HDD/Foto
HDD/Nextcloud               192K  10.6T      192K  /mnt/HDD/Nextcloud
HDD/Samba                   192K  10.6T      192K  /mnt/HDD/Samba
HDD/Serie                   192K  10.6T      192K  /mnt/HDD/Serie
HDD/Software                192K  10.6T      192K  /mnt/HDD/Software
SSD                         642K   430G       25K  /mnt/SSD
SSD/Docker                  221K   430G       29K  /mnt/SSD/Docker
SSD/Docker/Jackett           24K   430G       24K  /mnt/SSD/Docker/Jackett
SSD/Docker/Nextcloud         24K   430G       24K  /mnt/SSD/Docker/Nextcloud
SSD/Docker/Organizr          24K   430G       24K  /mnt/SSD/Docker/Organizr
SSD/Docker/Plex              24K   430G       24K  /mnt/SSD/Docker/Plex
SSD/Docker/Radarr            24K   430G       24K  /mnt/SSD/Docker/Radarr
SSD/Docker/Sabnzbd           24K   430G       24K  /mnt/SSD/Docker/Sabnzbd
SSD/Docker/Sonarr            24K   430G       24K  /mnt/SSD/Docker/Sonarr
SSD/Docker/appdata           24K   430G       24K  /mnt/SSD/Docker/appdata
SSD/VMs                     123K   430G       27K  /mnt/SSD/VMs
SSD/VMs/HomeAssistant        24K   430G       24K  /mnt/SSD/VMs/HomeAssistant
SSD/VMs/Libvert              24K   430G       24K  /mnt/SSD/VMs/Libvert
SSD/VMs/Ubuntu               24K   430G       24K  /mnt/SSD/VMs/Ubuntu
SSD/VMs/Windows              24K   430G       24K  /mnt/SSD/VMs/Windows

 

Now when I disable Docker and try to set the corresponding paths, I get this:

[screenshot of the Docker settings path error]

 

How do I solve this?

 

Kind regards.

 

Edit: it just needed a trailing slash after /appdata/.

 

Now I can't disable the VM service from the VM settings tab. Also, the default location cannot be changed to the ZFS mount point /mnt/SSD/VMs (even with a trailing slash); I just can't press Apply (same for disabling the VM service).

Please advise.

[screenshot of the VM Manager settings page]

Second edit:

 

I needed to stop the array; then everything is editable. Works as advertised so far. Thanks again. Solved!

Edited by ezra
  • Like 1
Link to comment

I'm moving from FreeNAS to Unraid. I currently have Unraid up and running with three 8 TB drives (one used for parity). I have three drives from my ZFS FreeNAS system (8 TB, 6 TB, and 3 TB) with data on them that I would like to move to my new Unraid pool. All three drives have data on them separately and were not part of a single ZFS pool. I've tried to piece together a solution from previous posts, but I'm stuck. I have added the ZFS and Unassigned Devices plugins, and have tried to import one of my ZFS disks using the terminal in Unraid and the command "zpool import -f". I get this:

 

  pool: Red8tb
     id: 14029892406868332685
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://zfsonlinux.org/msg/ZFS-8000-EY
 config:

        Red8tb      ONLINE
          sde       ONLINE

 

I'm not sure if the disk is successfully mounted yet, and if it is, I'm not sure how to access its data. Thanks for any help you can give.

Link to comment
I'm moving from FreeNAS to Unraid. I currently have Unraid up and running with three 8 TB drives (one used for parity). I have three drives from my ZFS FreeNAS system (8 TB, 6 TB, and 3 TB) with data on them that I would like to move to my new Unraid pool. All three drives have data on them separately and were not part of a single ZFS pool. I've tried to piece together a solution from previous posts, but I'm stuck. I have added the ZFS and Unassigned Devices plugins, and have tried to import one of my ZFS disks using the terminal in Unraid and the command "zpool import -f". I get this:
 
  pool: Red8tb
     id: 14029892406868332685
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://zfsonlinux.org/msg/ZFS-8000-EY
 config:
        Red8tb      ONLINE
          sde       ONLINE
 
I'm not sure if the disk is successfully mounted yet, and if it is, I'm not sure how to access its data. Thanks for any help you can give.


What is the output from zpool list?
It should force import with: zpool import -f Red8tb
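
If the forced import succeeds, a quick way to confirm the pool came in and to find where its data lives is something like this (a hedged sketch; by default the mountpoint is the pool name under /):

zpool import -f Red8tb
zpool list                              # the pool should now show up here
zfs list -o name,mountpoint Red8tb      # data is browsable under this mountpoint (typically /Red8tb)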


Sent from my iPhone using Tapatalk
  • Like 1
Link to comment
On 7/7/2018 at 7:09 AM, david279 said:

Doesn't ZFS place the pools at "/" when they are mounted? /Tank, /rpool etc.

Sent from my SM-G955U using Tapatalk
 

Good guidance. After the ZFS plugin mounted (and I also force-mounted) the FreeNAS ZFS pools, I found them available in the root directory "/" via SSH using Midnight Commander. I could then simply issue a copy command to move my pool data onto the Unraid array I created earlier.

Other tips: I was not able to mount the drives in Unassigned Devices. I also was not able to see the root "/" ZFS pools with Krusader (Docker container), but I could with Midnight Commander via SSH (I suspect I'm doing something wrong).
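
For anyone doing the same migration, a minimal sketch of that copy step, assuming the pool imported as Red8tb and a destination user share called Backup (both names are placeholders for your own):

zpool import -f Red8tb
rsync -avh --progress /Red8tb/ /mnt/user/Backup/Red8tb/   # copy the pool contents onto the unRAID array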

 

Thanks for the tip!

Edited by KRK
Link to comment

I have been debating what to do next with my setup. I really like unRAID, but I want ZFS for my storage. I want to make use of this plugin, and while I am comfortable with the CLI, I really wish there was a GUI. Any chance we can see that happen? Does something already exist?

 

Thanks

Link to comment

Hi,

Excellent plugin, which has kept me reading non-stop about ZFS for the past two days. A huge thanks to @steini84 for this port. If I find the motivation I may even lend my meager abilities to see if I can't come up with some GUI plugin '-' (I would have to refurbish my coding skills a bit, but meh, it sounds like a good project).

Speaking of the ZFS plugin, it doesn't seem to be updated for the latest kernel yet (unRAID 6.8.1-rc1).

I'm currently trying to get my GPU to pass through properly under unRaid, but aside from that I have quite a decent (meaning completely insane) setup in mind: building a RAIDZ2 zpool of 8 disks while still missing two of them, which are part of my current NAS and will be added after transferring the data off them first, plus a second zpool of 2x 1 TB PCIe 4.0 NVMe drives, because, why not... I'll let you know how it goes @.@ Oh, yes, the fun part: 2 of my 8 disks are actually 6 TB (the other 6 are all 4 TB), which means 2x 2 TB of capacity left over. I plan on partitioning that extra 2x 2 TB to use it as the default unRaid array (for some backup and stuff). That means part of the 6 TB drives would be BTRFS in the unRaid array in RAID1 (one parity, one drive), while the rest would be part of the main zpool in RAIDZ2 xD Does that make sense? Since the unRaid array will only be used for backup I highly doubt it will have any performance impact. What's your take on that?

Link to comment

Sounds like a cool setup. I'm not sure Unraid's default array can use partitions though - generally it uses whole disks. It might be that you'd have to do something with mdadm or similar. Or, even better, perhaps ZFS can be configured to use them somehow. I'd say what would make more sense is to leave the 2x 2 TB out and set that up separately; then you'd be able to do exactly what you're talking about and gain the remainder of the 6 TB disks for your ZFS array. Generally when you ping @steini84 he'll update the ZFS build for you - I wasn't sure I wanted to bother with this one, though I was feeling guilty about not testing it (which I can't do without ZFS updated), so perhaps it's a good idea. I really want the 5.0-series kernels and this is the pathway to those.

Edited by Marshalleq
Link to comment

I also need the 6.9 unRaid version with the 5.4.x Linux kernel, since I'm running an X570 board with a Ryzen 3700X and an AMD RX 5700 XT, and passthrough is nigh impossible for some reason... (I made a support topic about it which I should update, btw). The RX 5700 XT has the very annoying AMD reset bug, but apparently there's a partial fix already out that's available in the custom kernel for unRaid 6.8.0-rc5, which I'm running at the moment (I guess I'm just unlucky, or I probably messed up some config somewhere, because it doesn't work...).

ZFS can use partitions. My hope is that, once I set up the zpool, the remainder of the 2x 2 TB drives will be available for the unRaid array. If not... meh, I suppose I'll have to make another zpool just for them in RAID1...? I still have some time to iron out the kinks since I'm still at the pre-clearing stage of the drives, so I'll start trying things out tomorrow with ZFS to get the most out of my current config. Problem is: if I use all of my disks in the zpool, how will I start the array T.T Maybe I'll be forced to use another user's trick and just dedicate a USB drive to start it...? °I'll see°
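
To make the "ZFS can use partitions" part concrete, here is a hedged sketch only - the device names are placeholders, sgdisk is assumed to be available, and the commands are destructive, so double-check everything against your own drives first:

# sdx/sdy stand in for the two 6 TB drives; sda-sdd stand in for the four 4 TB drives
sgdisk -n 1:0:+4T -n 2:0:0 /dev/sdx   # partition 1: ~4 TB for ZFS, partition 2: the remainder
sgdisk -n 1:0:+4T -n 2:0:0 /dev/sdy
# RAIDZ2 built from the two 4 TB partitions plus the four whole 4 TB disks
zpool create -o ashift=12 tank raidz2 sdx1 sdy1 sda sdb sdc sdd

Whether unRaid's own array will then accept the leftover partitions is a separate question (see the reply above).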

Link to comment

Updated for 6.8.1-rc1

 

The plugin was redesigned so that you do not need to update it for every new build of ZFS. (ZFS still needs to be built for every new kernel - you just don't have to upgrade the plugin itself.)

 

I have done some testing (using this small script), but please let me know if something breaks for you during the install. You can still get the old version if you need it here: old version

 

I am open to ideas on the best way for users to know whether the latest build is available before an update, but for now I will continue to build and announce it in this thread.

 

@Yros I don't personally see the benefit of a GUI for my use case (then I would just move back to Napp-IT or FreeNAS for my ZFS), but if you want to dust off your programming skills the install script could use some polishing. The GitHub repo is here: https://github.com/Steini1984/unRAID6-ZFS so please take a look.

Edited by steini84
Link to comment

There's no absolute benefit to a GUI other than being able to monitor (in real time?) the ZFS pool and maybe some additional perks. Either way it's, in my humble opinion, better to have a separate space for the zpool(s) rather than have them aggregated with the Unassigned Devices plugin and risk some mishap down the line due to an erroneous mount in unraid or something.

As for the script, I'll have a look, though I don't promise anything (my level is basically first-year IT college programming... I was more of a design guy at the time), and I'm still trying to figure out why my GPU won't pass through @.@

Link to comment

LOL, of course there's a benefit: not everyone who wants ZFS knows the command line; this is unraid after all, not some enterprise geek OS. Even me, with nearly 30 years in IT spanning back to command-line days in Novell server OSes, can admit that a GUI can be useful because you don't have to remember stuff. There may not be a benefit from a functionality point of view, but from a user perspective it has a lot of potential benefit. You may even find a lot of extra people start installing it if there were a GUI.

Link to comment

Yeah, a GUI with monitoring plus a dedicated page with a tutorial, basic commands and tweaks would do just fine, I think. Though I'm not sure allowing direct manipulation of the zpool through the GUI is a good idea; I think leaving that part to the command line is better, as ZFS is quite flexible, meaning it needs precise options on a per-case basis to get the best out of it. (Plus it's a lot easier to make without direct modification xD)

Edited by Yros
Link to comment
9 minutes ago, Marshalleq said:

This is one place I'd somewhat disagree. I can use it fine on the command line, but I think a GUI would also be fantastic. @steini84 did you mean to say we don't need to update for every build of ZFS, or that we don't need to update it for every build of unraid? Thanks.

You don't need to update the plugin every time, but I still have to push new builds for new kernels. 

I changed the wording a little bit in the post to better explain... but what I meant is that I don't see any benefit for myself, so I won't take the time to try to put something together.

Link to comment
7 minutes ago, Yros said:

There's no absolute benefit to a GUI other than being able to monitor (in real time?) the ZFS pool and maybe some additional perks. Either way it's, in my humble opinion, better to have a separate space for the zpool(s) rather than have them aggregated with the Unassigned Devices plugin and risk some mishap down the line due to an erroneous mount in unraid or something.

As for the script, I'll have a look, though I don't promise anything (my level is basically first-year IT college programming... I was more of a design guy at the time), and I'm still trying to figure out why my GPU won't pass through @.@

I use check_mk to monitor my pools and it lets me know if there is a problem:

[screenshot of check_mk showing the monitored pools]

Link to comment

I also misread @Yros - I thought he said there's absolutely no benefit, but he actually said there's no absolute benefit, which is quite different. Anyway, I look at videos of FreeNAS and am a bit jealous of their GUI. There's a lot to like about how polished it is, and not just with ZFS. In particular, managing snapshots and backups and such through the GUI would be much easier, especially from a monitoring perspective. Even pool creation and so on would be cool - and a display of what's active and its available size, etc.

Link to comment

For sure a GUI would be great. I moved from FreeNAS to unRAID a few weeks back. It took me two weeks to figure it out, and I now monitor everything via the CLI; it's fast and without any issues. I've also set up monitoring for the zpool status, and unRAID notifies me if something is off.

 

No need for a GUI; it would be nice, but not a priority IMO.

Let me know if anyone needs some useful commands or the monitoring setup.

 

root@unraid:~# zpool status
  pool: HDD
 state: ONLINE
  scan: scrub repaired 0B in 0 days 02:43:57 with 0 errors on Sun Jan  5 14:14:00 2020
config:

	NAME        STATE     READ WRITE CKSUM
	HDD         ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    sdp     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	    sdr     ONLINE       0     0     0
	    sdq     ONLINE       0     0     0
	    sds     ONLINE       0     0     0
	logs	
	  sdg       ONLINE       0     0     0

errors: No known data errors

  pool: SQL
 state: ONLINE
  scan: resilvered 254M in 0 days 00:00:01 with 0 errors on Thu Jan  9 13:10:08 2020
config:

	NAME        STATE     READ WRITE CKSUM
	SQL         ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdi     ONLINE       0     0     0
	    sdl     ONLINE       0     0     0

errors: No known data errors

  pool: SSD
 state: ONLINE
  scan: resilvered 395M in 0 days 00:00:02 with 0 errors on Thu Jan  9 13:30:10 2020
config:

	NAME        STATE     READ WRITE CKSUM
	SSD         ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sdo     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdn     ONLINE       0     0     0
	    sdm     ONLINE       0     0     0
	logs	
	  sdh       ONLINE       0     0     0

errors: No known data errors

  pool: TMP
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:00 with 0 errors on Sun Jan  5 11:30:04 2020
config:

	NAME        STATE     READ WRITE CKSUM
	TMP         ONLINE       0     0     0
	  sdt       ONLINE       0     0     0

errors: No known data errors

Monitor Disk I/O

root@unraid:~# zpool iostat -v 1
              capacity     operations     bandwidth 
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
HDD         5.93T  10.4T     20    126  5.47M  29.4M
  raidz2    5.93T  10.4T     20    125  5.47M  29.4M
    sdp         -      -      3     19  1.14M  4.89M
    sde         -      -      3     20   936K  4.89M
    sdf         -      -      3     20   835K  4.89M
    sdr         -      -      4     23  1.06M  4.89M
    sdq         -      -      2     19   803K  4.89M
    sds         -      -      3     23   783K  4.89M
logs            -      -      -      -      -      -
  sdg        172K  29.5G      0      0     56  1.65K
----------  -----  -----  -----  -----  -----  -----
SQL         3.99G   106G      3    116   287K  4.66M
  mirror    3.99G   106G      3    116   287K  4.66M
    sdi         -      -      1     58   136K  2.33M
    sdl         -      -      1     58   151K  2.33M
----------  -----  -----  -----  -----  -----  -----
SSD          156G   288G     25    246  1.47M  8.83M
  mirror    77.6G   144G     12    111   755K  3.01M
    sdd         -      -      6     52   355K  1.50M
    sdo         -      -      6     59   400K  1.50M
  mirror    78.0G   144G     12    102   746K  2.90M
    sdn         -      -      6     55   399K  1.45M
    sdm         -      -      5     47   346K  1.45M
logs            -      -      -      -      -      -
  sdh       4.91M  29.5G      0     31    201  2.92M
----------  -----  -----  -----  -----  -----  -----
TMP         1.50M  29.5G      0      0    149  2.70K
  sdt       1.50M  29.5G      0      0    149  2.70K
----------  -----  -----  -----  -----  -----  -----

List snapshots

root@unraid:~# zfs list -t snapshot
NAME                                             USED  AVAIL     REFER  MOUNTPOINT
HDD@manual                                       160K      -     87.2G  -
HDD/Backup@2019-12-29-180000                     168K      -      248K  -
HDD/Backup@2020-01-03-150000                    65.1M      -     36.5G  -
HDD/Backup@2020-01-04-000000                    40.4M      -     43.3G  -
HDD/Backup@2020-01-05-000000                    72.0M      -     43.8G  -
HDD/Backup@2020-01-06-000000                    69.1M      -     44.7G  -
HDD/Backup@2020-01-07-000000                    35.6M      -     45.1G  -
HDD/Backup@2020-01-08-000000                    7.00M      -     45.5G  -
HDD/Backup@2020-01-08-120000                     400K      -     45.5G  -
HDD/Backup@2020-01-08-150000                     400K      -     45.5G  -
HDD/Backup@2020-01-08-180000                     416K      -     45.5G  -
HDD/Backup@2020-01-08-210000                    1.33M      -     45.5G  -
HDD/Backup@2020-01-09-000000                    1.33M      -     46.0G  -
HDD/Backup@2020-01-09-030000                     687K      -     46.0G  -
HDD/Backup@2020-01-09-060000                     663K      -     46.0G  -
HDD/Backup@2020-01-09-090000                     456K      -     46.0G  -
HDD/Backup@2020-01-09-120000                     480K      -     46.0G  -
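
The snapshot names above look like timestamped automatic snapshots. Purely as an illustration (this is not necessarily the setup used here), a User Scripts entry like the following would produce the same naming scheme:

#!/bin/bash
# Hypothetical example: timestamped snapshot of the Backup dataset,
# matching the YYYY-MM-DD-HHMMSS names listed above
/usr/sbin/zfs snapshot HDD/Backup@"$(date +%Y-%m-%d-%H%M%S)"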

 

 

Scrub weekly - User scripts

#!/bin/bash
/usr/local/emhttp/webGui/scripts/notify -i normal -s "Scrub" -d "Scrub of all sub zfs file systems started..."
/usr/sbin/zpool scrub SSD
/usr/sbin/zpool scrub HDD
/usr/sbin/zpool scrub SQL
/usr/sbin/zpool scrub TMP

 

Trim SSD's weekly - User scripts

#!/bin/bash
/usr/local/emhttp/webGui/scripts/notify -i normal -s "Trim" -d "Trim of all SSD disks started..."
/usr/sbin/zpool trim SSD
/usr/sbin/zpool trim SQL
/usr/sbin/zpool trim TMP

Zpool Status check every 5 minutes (custom */5 * * * *) - User scripts

#!/bin/bash
#
# https://gist.github.com/petervanderdoes/bd6660302404ed5b094d
#
problems=0
emailSubject="`hostname` - ZFS pool - HEALTH check"
emailMessage=""

#
ZFS_LOG="/boot/logs/ZFS-LOG.txt"
#

# Health - Check if all zfs volumes are in good condition. We are looking for
# any keyword signifying a degraded or broken array.

condition=$(/usr/sbin/zpool status | egrep -i '(DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover)')
#condition=$(/usr/sbin/zpool status | egrep -i '(ONLINE)')
if [ "${condition}" ]; then
  emailSubject="$emailSubject - fault"
  problems=1
fi

#

# Capacity - Make sure pool capacities are below 80% for best performance. The
# percentage really depends on how large your volume is. If you have a 128GB
# SSD then 80% is reasonable. If you have a 60TB raid-z2 array then you can
# probably set the warning closer to 95%.
#
# ZFS uses a copy-on-write scheme. The file system writes new data to
# sequential free blocks first and when the uberblock has been updated the new
# inode pointers become valid. This method is true only when the pool has
# enough free sequential blocks. If the pool is at capacity and space limited,
# ZFS will be have to randomly write blocks. This means ZFS can not create an
# optimal set of sequential writes and write performance is severely impacted.

maxCapacity=80

if [ ${problems} -eq 0 ]; then
  capacity=$(/usr/sbin/zpool list -H -o capacity)
  for line in ${capacity//%/}
  do
    if [ $line -ge $maxCapacity ]; then
      emailSubject="$emailSubject - Capacity Exceeded"
      problems=1
    fi
  done
fi

# Errors - Check the columns for READ, WRITE and CKSUM (checksum) drive errors
# on all volumes and all drives using "zpool status". If any non-zero errors
# are reported an email will be sent out. You should then look to replace the
# faulty drive and run "zpool scrub" on the affected volume after resilvering.

if [ ${problems} -eq 0 ]; then
  errors=$(/usr/sbin/zpool status | grep ONLINE | grep -v state | awk '{print $3 $4 $5}' | grep -v 000)
  if [ "${errors}" ]; then
    emailSubject="$emailSubject - Drive Errors"
    problems=1
  fi
fi

# Scrub Expired - Check if all volumes have been scrubbed in at least the last
# 8 days. The general guide is to scrub volumes on desktop quality drives once
# a week and volumes on enterprise class drives once a month. You can always
# use cron to schedule "zpool scrub" in off hours. We scrub our volumes every
# Sunday morning for example.
#
# Scrubbing traverses all the data in the pool once and verifies all blocks can
# be read. Scrubbing proceeds as fast as the devices allows, though the
# priority of any I/O remains below that of normal calls. This operation might
# negatively impact performance, but the file system will remain usable and
# responsive while scrubbing occurs. To initiate an explicit scrub, use the
# "zpool scrub" command.
#
# The scrubExpire variable is in seconds. So for 8 days we calculate 8 days
# times 24 hours times 3600 seconds to equal 691200 seconds.

##scrubExpire=691200
#
# 2764800 seconds => 32 days
#
scrubExpire=2764800

if [ ${problems} -eq 0 ]; then
  currentDate=$(date +%s)
  zfsVolumes=$(/usr/sbin/zpool list -H -o name)

  for volume in ${zfsVolumes}
  do
    if [ $(/usr/sbin/zpool status $volume | egrep -c "none requested") -ge 1 ]; then
      echo "ERROR: You need to run \"zpool scrub $volume\" before this script can monitor the scrub expiration time."
      break
    fi
##    if [ $(/usr/sbin/zpool status $volume | egrep -c "scrub in progress|resilver") -ge 1 ]; then
    if [ $(/usr/sbin/zpool status $volume | egrep -c "scrub in progress") -ge 1 ]; then
      break
    fi

    ### FreeBSD with *nix supported date format
    #scrubRawDate=$(/usr/sbin/zpool status $volume | grep scrub | awk '{print $15 $12 $13}')
    #scrubDate=$(date -j -f '%Y%b%e-%H%M%S' $scrubRawDate'-000000' +%s)

    ### Ubuntu with GNU supported date format
    scrubRawDate=$(/usr/sbin/zpool status $volume | grep scrub | awk '{print $13" "$14" " $15" " $16" "$17}')
    scrubDate=$(date -d "$scrubRawDate" +%s)

    if [ $(($currentDate - $scrubDate)) -ge $scrubExpire ]; then
      if [ ${problems} -eq 0 ]; then
        emailSubject="$emailSubject - Scrub Time Expired. Scrub Needed on Volume(s)"
      fi
      problems=1
      emailMessage="${emailMessage}Pool: $volume needs scrub \n"
    fi
  done
fi

# Notifications - On any problems send email with drive status information and
# capacities including a helpful subject line to root. Also use logger to write
# the email subject to the local logs. This is the place you may want to put
# any other notifications like:
#
# + Update an anonymous twitter account with your ZFS status (https://twitter.com/zfsmonitor)
# + Playing a sound file or beep the internal speaker
# + Update Nagios, Cacti, Zabbix, Munin or even BigBrother


if [ "$problems" -ne 0 ]; then
  logger $emailSubject
echo -e "$emailSubject\t$emailMessage" > $ZFS_LOG
# Notifica via email
#
COMMAND=$(cat "$ZFS_LOG")
/usr/local/emhttp/webGui/scripts/notify -i warning -s "ZFS" -d "Zpool status change \n\n$COMMAND \n\n`date`"
fi

Also, I've changed the ashift back and forth, but after performance tests I came to the conclusion it's better left at 0 (auto).
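
For reference (a hedged sketch, not the exact commands used above): ashift is fixed per vdev at creation time, so it is either set explicitly when the pool is created or left at 0 for auto-detection, and you can check what a pool actually ended up with afterwards:

# Pool/device names are placeholders; ashift=12 forces 4K sectors, 0 (the default) auto-detects
zpool create -o ashift=12 tank mirror sdb sdc
zdb -C tank | grep ashift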

 

I've set recordsize=1M on my media files
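
For example (dataset names taken from the listing earlier in the thread; adjust to your own layout, and note it only affects newly written data):

zfs set recordsize=1M HDD/Film
zfs set recordsize=1M HDD/Serie
zfs get recordsize HDD/Film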

 

I've added a SLOG (32 GB SSD) to my SSD (VMs) pool and to my HDD pool to prevent double writes of sync data:

zpool add POOLNAME log SDX

 

I've set atime off on every pool
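
For example (pool names from this setup; the property is inherited by child datasets):

zfs set atime=off HDD
zfs set atime=off SSD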

 

another one:

 

Set ARC size - User scripts @reboot 

#!/bin/bash
# Value is in bytes: 8589934592 = 8 GB (multiply by 2 for 16 GB, etc.)
echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max && /usr/local/emhttp/webGui/scripts/notify -i normal -s "System" -d "Adjusted ARC limit to 8G \n\n`date`"
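
To verify the new limit was picked up, one option (assuming the usual ZFS-on-Linux kstat path) is:

# c_max should match the value written above; size is the current ARC usage
awk '$1 == "c_max" || $1 == "size" {print $1, $3}' /proc/spl/kstat/zfs/arcstats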

I use this to display a text card in Home Assistant (it SSHes in and cats the file as a sensor), set up as a user script in unRAID that runs every 5 minutes:

#!/bin/bash
HDD=$(/usr/sbin/zpool status HDD | grep -m2 ""| awk '{print $2}' | tr '\n' ' ')
SSD=$(/usr/sbin/zpool status SSD | grep -m2 ""| awk '{print $2}' | tr '\n' ' ')
SQL=$(/usr/sbin/zpool status SQL | grep -m2 ""| awk '{print $2}' | tr '\n' ' ')
TMP=$(/usr/sbin/zpool status TMP | grep -m2 ""| awk '{print $2}' | tr '\n' ' ')
DATE=$(/bin/date | awk '{print $1 " " $2 " " $3 " " $4}')

echo "___________________________________" > /tmp/zpool_status
echo "| unRAID ZFS Zpool Status Checker |" >> /tmp/zpool_status
echo "| last check: $DATE " >> /tmp/zpool_status
echo "-----------------------------------" >> /tmp/zpool_status
echo "|   $HDD  |   $SSD  |" >> /tmp/zpool_status 
echo "-----------------------------------" >> /tmp/zpool_status
echo "|   $SQL  |   $TMP  |" >> /tmp/zpool_status
echo "-----------------------------------" >> /tmp/zpool_status

Output:

[screenshots of the resulting zpool status card in Home Assistant]

 

I'm still trying to convert this script from FreeBSD to Linux and could use some help:

 

#!/bin/sh

### Parameters ###
fbsd_relver=$(uname -a | awk '{print $3}' | sed 's/.......$//')
freenashost=$(hostname -s | tr '[:lower:]' '[:upper:]')
logfile="/tmp/zpool_report.tmp"
subject="ZPool Status Report for ${freenashost}"
pools=$(zpool list -H -o name)
usedWarn=75
usedCrit=90
scrubAgeWarn=30
warnSymbol="?"
critSymbol="!"

###### summary ######
(
  echo "########## ZPool status report summary for all pools on server ${freenashost} ##########"
  echo ""
  echo "+--------------+--------+------+------+------+----+----+--------+------+-----+"
  echo "|Pool Name     |Status  |Read  |Write |Cksum |Used|Frag|Scrub   |Scrub |Last |"
  echo "|              |        |Errors|Errors|Errors|    |    |Repaired|Errors|Scrub|"
  echo "|              |        |      |      |      |    |    |Bytes   |      |Age  |"
  echo "+--------------+--------+------+------+------+----+----+--------+------+-----+"
) > ${logfile}

for pool in $pools; do
  if [ "${pool}" = "freenas-boot" ]; then
    frag=""
  else
    frag="$(zpool list -H -o frag "$pool")"
  fi
  status="$(zpool list -H -o health "$pool")"
  errors="$(zpool status "$pool" | grep -E "(ONLINE|DEGRADED|FAULTED|UNAVAIL|REMOVED)[ \t]+[0-9]+")"
  readErrors=0
  for err in $(echo "$errors" | awk '{print $3}'); do
    if echo "$err" | grep -E -q "[^0-9]+"; then
      readErrors=1000
      break
    fi
    readErrors=$((readErrors + err))
  done
  writeErrors=0
  for err in $(echo "$errors" | awk '{print $4}'); do
    if echo "$err" | grep -E -q "[^0-9]+"; then
      writeErrors=1000
      break
    fi
    writeErrors=$((writeErrors + err))
  done
  cksumErrors=0
  for err in $(echo "$errors" | awk '{print $5}'); do
    if echo "$err" | grep -E -q "[^0-9]+"; then
      cksumErrors=1000
      break
    fi
    cksumErrors=$((cksumErrors + err))
  done
  if [ "$readErrors" -gt 999 ]; then readErrors=">1K"; fi
  if [ "$writeErrors" -gt 999 ]; then writeErrors=">1K"; fi
  if [ "$cksumErrors" -gt 999 ]; then cksumErrors=">1K"; fi
  used="$(zpool list -H -p -o capacity "$pool")"
  scrubRepBytes="N/A"
  scrubErrors="N/A"
  scrubAge="N/A"
  if [ "$(zpool status "$pool" | grep "scan" | awk '{print $2}')" = "scrub" ]; then
    scrubRepBytes="$(zpool status "$pool" | grep "scan" | awk '{print $4}')"
    if [ "$fbsd_relver" -gt 1101000 ]; then
      scrubErrors="$(zpool status "$pool" | grep "scan" | awk '{print $10}')"
      scrubDate="$(zpool status "$pool" | grep "scan" | awk '{print $17"-"$14"-"$15"_"$16}')"
    else
      scrubErrors="$(zpool status "$pool" | grep "scan" | awk '{print $8}')"
      scrubDate="$(zpool status "$pool" | grep "scan" | awk '{print $15"-"$12"-"$13"_"$14}')"
    fi
    scrubTS="$(date "+%Y-%b-%e_%H:%M:%S" "$scrubDate" "+%s")"
    currentTS="$(date "+%s")"
    scrubAge=$((((currentTS - scrubTS) + 43200) / 86400))
  fi
  if [ "$status" = "FAULTED" ] \
  || [ "$used" -gt "$usedCrit" ] \
  || ( [ "$scrubErrors" != "N/A" ] && [ "$scrubErrors" != "0" ] )
  then
    symbol="$critSymbol"
  elif [ "$status" != "ONLINE" ] \
  || [ "$readErrors" != "0" ] \
  || [ "$writeErrors" != "0" ] \
  || [ "$cksumErrors" != "0" ] \
  || [ "$used" -gt "$usedWarn" ] \
  || [ "$scrubRepBytes" != "0" ] \
  || [ "$(echo "$scrubAge" | awk '{print int($1)}')" -gt "$scrubAgeWarn" ]
  then
    symbol="$warnSymbol"
  else
    symbol=" "
  fi
  (
  printf "|%-12s %1s|%-8s|%6s|%6s|%6s|%3s%%|%4s|%8s|%6s|%5s|\n" \
  "$pool" "$symbol" "$status" "$readErrors" "$writeErrors" "$cksumErrors" \
  "$used" "$frag" "$scrubRepBytes" "$scrubErrors" "$scrubAge"
  ) >> ${logfile}
  done

(
  echo "+--------------+--------+------+------+------+----+----+--------+------+-----+"
) >> ${logfile}

###### for each pool ######
for pool in $pools; do
  (
  echo ""
  echo "########## ZPool status report for ${pool} ##########"
  echo ""
  zpool status -v "$pool"
  ) >> ${logfile}
done

It should give a nice UI-ish summary of all zpools:

[screenshot of the tabular zpool summary report]
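
The main FreeBSD-specific pieces are the uname-based version check and the scrub date parsing. A hedged sketch of a GNU/Linux replacement for the scrub section, assuming a ZFS-on-Linux "scan:" line like "scrub repaired 0B in 0 days 02:43:57 with 0 errors on Sun Jan  5 14:14:00 2020" (the same field layout used by the monitoring script earlier in this post):

# Replace the fbsd_relver branch inside the scrub block with something like:
scanLine="$(zpool status "$pool" | grep "scan:")"
scrubRepBytes="$(echo "$scanLine" | awk '{print $4}')"
scrubErrors="$(echo "$scanLine" | awk '{print $10}')"
scrubRawDate="$(echo "$scanLine" | awk '{print $13" "$14" "$15" "$16" "$17}')"
scrubTS="$(date -d "$scrubRawDate" +%s)"   # GNU date parses the free-form date directly
currentTS="$(date +%s)"
scrubAge=$((((currentTS - scrubTS) + 43200) / 86400))

The frag, capacity and error columns should work unchanged, since zpool list and zpool status take the same options on Linux.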

 

Edited by ezra
  • Like 1
Link to comment

For monitoring purposes, I found these links while scrubbing the net trying to better understand ZFS:

https://calomel.org/zfs_health_check_script.html

https://jrs-s.net/2019/06/04/continuously-updated-iostat/


Bonus tips & tweaks : https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

Someone also made a script in this thread to get mail/notifications, and Steini also offers the ZnapZend plugin for automated snapshot handling.

Edited by Yros
Link to comment
