ezra

Members
  • Content Count

    13
  • Joined

  • Last visited

Community Reputation

5 Neutral

About ezra

  • Rank
    Member

  1. Install the corsairpsu plugin from CA, then edit the files status.php and status.page in /usr/local/emhttp/plugins/corsairpsu/. I'm still working on it and could use some help.
  2. I've just checked out the GitHub repo; it gives me some more insights. Indeed, you had already given me that info, I only discovered this thread later. Thanks.
  3. I've started to create a dashboard section for ZFS details. Let me know if you can help me out with PHP/JSON/Bash. I don't really know what I'm doing, just trial and error, modifying an existing plugin (corsairpsu: status.php & status.page); there's a rough sketch of the idea below. https://github.com/ezraholm50/zfs-frontend-unraid
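
For anyone who wants to chip in: the rough idea is a small helper that turns zpool output into JSON, which status.php could read and the page could render. This is only a sketch; the script name and the JSON field names are placeholders, not something the plugin already ships.

#!/bin/bash
# zpool_json.sh (hypothetical): print one JSON array with name/health/capacity/
# fragmentation per pool, for a PHP page to json_decode and display.
first=1
printf '['
while IFS=$'\t' read -r name health cap frag; do
  [ "$first" -eq 0 ] && printf ','
  first=0
  printf '{"name":"%s","health":"%s","capacity":"%s","frag":"%s"}' \
    "$name" "$health" "$cap" "$frag"
done < <(/usr/sbin/zpool list -H -o name,health,capacity,frag)
printf ']\n'
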
  4. Updated previous post with useful commands and monitoring.
  5. For sure, a GUI would be great. I moved from FreeNAS to unRAID a few weeks back; it took me two weeks to figure everything out, and I now monitor everything via the CLI. It's fast and works without any issues. I've also set up monitoring for the zpool status, and unRAID notifies me if something is off. So no real need for a GUI; it would be nice, but not a priority IMO. Let me know if anyone needs some useful commands or the monitoring setup.

root@unraid:~# zpool status
  pool: HDD
 state: ONLINE
  scan: scrub repaired 0B in 0 days 02:43:57 with 0 errors on Sun Jan  5 14:14:00 2020
config:

        NAME        STATE     READ WRITE CKSUM
        HDD         ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdp     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdr     ONLINE       0     0     0
            sdq     ONLINE       0     0     0
            sds     ONLINE       0     0     0
        logs
          sdg       ONLINE       0     0     0

errors: No known data errors

  pool: SQL
 state: ONLINE
  scan: resilvered 254M in 0 days 00:00:01 with 0 errors on Thu Jan  9 13:10:08 2020
config:

        NAME        STATE     READ WRITE CKSUM
        SQL         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdl     ONLINE       0     0     0

errors: No known data errors

  pool: SSD
 state: ONLINE
  scan: resilvered 395M in 0 days 00:00:02 with 0 errors on Thu Jan  9 13:30:10 2020
config:

        NAME        STATE     READ WRITE CKSUM
        SSD         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sdo     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdn     ONLINE       0     0     0
            sdm     ONLINE       0     0     0
        logs
          sdh       ONLINE       0     0     0

errors: No known data errors

  pool: TMP
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:00 with 0 errors on Sun Jan  5 11:30:04 2020
config:

        NAME        STATE     READ WRITE CKSUM
        TMP         ONLINE       0     0     0
          sdt       ONLINE       0     0     0

errors: No known data errors

Monitor Disk I/O:

root@unraid:~# zpool iostat -v 1
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
HDD         5.93T  10.4T     20    126  5.47M  29.4M
  raidz2    5.93T  10.4T     20    125  5.47M  29.4M
    sdp         -      -      3     19  1.14M  4.89M
    sde         -      -      3     20   936K  4.89M
    sdf         -      -      3     20   835K  4.89M
    sdr         -      -      4     23  1.06M  4.89M
    sdq         -      -      2     19   803K  4.89M
    sds         -      -      3     23   783K  4.89M
logs            -      -      -      -      -      -
  sdg          172K  29.5G      0      0     56  1.65K
----------  -----  -----  -----  -----  -----  -----
SQL         3.99G   106G      3    116   287K  4.66M
  mirror    3.99G   106G      3    116   287K  4.66M
    sdi         -      -      1     58   136K  2.33M
    sdl         -      -      1     58   151K  2.33M
----------  -----  -----  -----  -----  -----  -----
SSD          156G   288G     25    246  1.47M  8.83M
  mirror    77.6G   144G     12    111   755K  3.01M
    sdd         -      -      6     52   355K  1.50M
    sdo         -      -      6     59   400K  1.50M
  mirror    78.0G   144G     12    102   746K  2.90M
    sdn         -      -      6     55   399K  1.45M
    sdm         -      -      5     47   346K  1.45M
logs            -      -      -      -      -      -
  sdh        4.91M  29.5G      0     31    201  2.92M
----------  -----  -----  -----  -----  -----  -----
TMP         1.50M  29.5G      0      0    149  2.70K
  sdt       1.50M  29.5G      0      0    149  2.70K
----------  -----  -----  -----  -----  -----  -----

List snapshots:

root@unraid:~# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
HDD@manual                     160K      -  87.2G  -
HDD/Backup@2019-12-29-180000   168K      -   248K  -
HDD/Backup@2020-01-03-150000  65.1M      -  36.5G  -
HDD/Backup@2020-01-04-000000  40.4M      -  43.3G  -
HDD/Backup@2020-01-05-000000  72.0M      -  43.8G  -
HDD/Backup@2020-01-06-000000  69.1M      -  44.7G  -
HDD/Backup@2020-01-07-000000  35.6M      -  45.1G  -
HDD/Backup@2020-01-08-000000  7.00M      -  45.5G  -
HDD/Backup@2020-01-08-120000   400K      -  45.5G  -
HDD/Backup@2020-01-08-150000   400K      -  45.5G  -
HDD/Backup@2020-01-08-180000   416K      -  45.5G  -
HDD/Backup@2020-01-08-210000  1.33M      -  45.5G  -
HDD/Backup@2020-01-09-000000  1.33M      -  46.0G  -
HDD/Backup@2020-01-09-030000   687K      -  46.0G  -
HDD/Backup@2020-01-09-060000   663K      -  46.0G  -
HDD/Backup@2020-01-09-090000   456K      -  46.0G  -
HDD/Backup@2020-01-09-120000   480K      -  46.0G  -

Scrub weekly - User scripts:

#!/bin/bash
/usr/local/emhttp/webGui/scripts/notify -i normal -s "Scrub" -d "Scrub of all sub zfs file systems started..."
/usr/sbin/zpool scrub SSD
/usr/sbin/zpool scrub HDD
/usr/sbin/zpool scrub SQL
/usr/sbin/zpool scrub TMP

Trim SSDs weekly - User scripts:

#!/bin/bash
/usr/local/emhttp/webGui/scripts/notify -i normal -s "Trim" -d "Trim of all SSD disks started..."
/usr/sbin/zpool trim SSD
/usr/sbin/zpool trim SQL
/usr/sbin/zpool trim TMP
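
If your build of the ZFS plugin ships OpenZFS 0.8 or newer (an assumption; check with "zfs version"), the autotrim pool property is an alternative to the weekly trim script. I haven't benchmarked it, so treat this as a sketch rather than a recommendation:

# let ZFS issue TRIM continuously instead of via a scheduled script
/usr/sbin/zpool set autotrim=on SSD
/usr/sbin/zpool set autotrim=on SQL
/usr/sbin/zpool set autotrim=on TMP
# verify the property on all three pools
/usr/sbin/zpool get autotrim SSD SQL TMP
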
Zpool status check every 5 minutes (custom */5 * * * *) - User scripts:

#!/bin/bash
#
# https://gist.github.com/petervanderdoes/bd6660302404ed5b094d
#
problems=0
emailSubject="`hostname` - ZFS pool - HEALTH check"
emailMessage=""
#
ZFS_LOG="/boot/logs/ZFS-LOG.txt"
#
# Health - Check if all zfs volumes are in good condition. We are looking for
# any keyword signifying a degraded or broken array.
condition=$(/usr/sbin/zpool status | egrep -i '(DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover)')
#condition=$(/usr/sbin/zpool status | egrep -i '(ONLINE)')
if [ "${condition}" ]; then
  emailSubject="$emailSubject - fault"
  problems=1
fi
#
# Capacity - Make sure pool capacities are below 80% for best performance. The
# percentage really depends on how large your volume is. If you have a 128GB
# SSD then 80% is reasonable. If you have a 60TB raid-z2 array then you can
# probably set the warning closer to 95%.
#
# ZFS uses a copy-on-write scheme. The file system writes new data to
# sequential free blocks first and when the uberblock has been updated the new
# inode pointers become valid. This method is true only when the pool has
# enough free sequential blocks. If the pool is at capacity and space limited,
# ZFS will have to randomly write blocks. This means ZFS can not create an
# optimal set of sequential writes and write performance is severely impacted.
maxCapacity=80

if [ ${problems} -eq 0 ]; then
  capacity=$(/usr/sbin/zpool list -H -o capacity)
  for line in ${capacity//%/}
  do
    if [ $line -ge $maxCapacity ]; then
      emailSubject="$emailSubject - Capacity Exceeded"
      problems=1
    fi
  done
fi

# Errors - Check the columns for READ, WRITE and CKSUM (checksum) drive errors
# on all volumes and all drives using "zpool status". If any non-zero errors
# are reported an email will be sent out. You should then look to replace the
# faulty drive and run "zpool scrub" on the affected volume after resilvering.
if [ ${problems} -eq 0 ]; then
  errors=$(/usr/sbin/zpool status | grep ONLINE | grep -v state | awk '{print $3 $4 $5}' | grep -v 000)
  if [ "${errors}" ]; then
    emailSubject="$emailSubject - Drive Errors"
    problems=1
  fi
fi

# Scrub Expired - Check if all volumes have been scrubbed in at least the last
# 8 days. The general guide is to scrub volumes on desktop quality drives once
# a week and volumes on enterprise class drives once a month. You can always
# use cron to schedule "zpool scrub" in off hours. We scrub our volumes every
# Sunday morning for example.
#
# Scrubbing traverses all the data in the pool once and verifies all blocks
# can be read. Scrubbing proceeds as fast as the devices allow, though the
# priority of any I/O remains below that of normal calls. This operation might
# negatively impact performance, but the file system will remain usable and
# responsive while scrubbing occurs. To initiate an explicit scrub, use the
# "zpool scrub" command.
#
# The scrubExpire variable is in seconds. So for 8 days we calculate 8 days
# times 24 hours times 3600 seconds to equal 691200 seconds.
##scrubExpire=691200
#
# 2764800 => 32 days
#
scrubExpire=2764800

if [ ${problems} -eq 0 ]; then
  currentDate=$(date +%s)
  zfsVolumes=$(/usr/sbin/zpool list -H -o name)

  for volume in ${zfsVolumes}
  do
    if [ $(/usr/sbin/zpool status $volume | egrep -c "none requested") -ge 1 ]; then
      echo "ERROR: You need to run \"zpool scrub $volume\" before this script can monitor the scrub expiration time."
      break
    fi
    ##if [ $(/usr/sbin/zpool status $volume | egrep -c "scrub in progress|resilver") -ge 1 ]; then
    if [ $(/usr/sbin/zpool status $volume | egrep -c "scrub in progress") -ge 1 ]; then
      break
    fi

    ### FreeBSD with *nix supported date format
    #scrubRawDate=$(/usr/sbin/zpool status $volume | grep scrub | awk '{print $15 $12 $13}')
    #scrubDate=$(date -j -f '%Y%b%e-%H%M%S' $scrubRawDate'-000000' +%s)

    ### Ubuntu with GNU supported date format
    scrubRawDate=$(/usr/sbin/zpool status $volume | grep scrub | awk '{print $13" "$14" " $15" " $16" "$17}')
    scrubDate=$(date -d "$scrubRawDate" +%s)

    if [ $(($currentDate - $scrubDate)) -ge $scrubExpire ]; then
      if [ ${problems} -eq 0 ]; then
        emailSubject="$emailSubject - Scrub Time Expired. Scrub Needed on Volume(s)"
      fi
      problems=1
      emailMessage="${emailMessage}Pool: $volume needs scrub \n"
    fi
  done
fi

# Notifications - On any problems send email with drive status information and
# capacities including a helpful subject line to root. Also use logger to write
# the email subject to the local logs. This is the place you may want to put
# any other notifications like:
#
# + Update an anonymous twitter account with your ZFS status (https://twitter.com/zfsmonitor)
# + Playing a sound file or beep the internal speaker
# + Update Nagios, Cacti, Zabbix, Munin or even BigBrother
if [ "$problems" -ne 0 ]; then
  logger $emailSubject
  echo -e "$emailSubject\t$emailMessage" > $ZFS_LOG
  # Notify via the unRAID web GUI
  COMMAND=$(cat "$ZFS_LOG")
  /usr/local/emhttp/webGui/scripts/notify -i warning -s "ZFS" -d "Zpool status change \n\n$COMMAND \n\n`date`"
fi
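
If you want to sanity-check the health regex and the notify call without waiting for a real failure, something like this should do (the echoed DEGRADED line is fake; it only exercises the same egrep and notify call used above):

# feed a fake status line through the script's health regex, then fire a test notification
echo "state: DEGRADED" | egrep -i '(DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover)' \
  && /usr/local/emhttp/webGui/scripts/notify -i warning -s "ZFS" -d "Test notification from the zpool status checker"
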
Also, I've changed ashift back and forth, but after performance tests I came to the conclusion it's better left at 0 (auto).
I've set recordsize=1M on my media datasets.
I've added a SLOG (32GB SSD) to my SSD (VMs) pool and to my HDD pool to prevent double writes: zpool add POOLNAME log SDX
I've set atime off on every pool.
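
For reference, spelled out as commands (the dataset names are just the ones from my layout, adjust to yours; note that recordsize only applies to data written after the change):

# 1M records for large sequential media files
zfs set recordsize=1M HDD/Film
zfs set recordsize=1M HDD/Serie
# atime off at the pool root is inherited by all child datasets
zfs set atime=off HDD
zfs set atime=off SSD
zfs set atime=off SQL
zfs set atime=off TMP
# add a dedicated log device (SLOG) to a pool, e.g. the SSD pool
zpool add SSD log sdh
# verify
zfs get recordsize,atime HDD/Film
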
Another one, to set the ARC size - User scripts (schedule: @reboot):

#!/bin/bash
# The number below is 8GB in bytes; just multiply by 2 if you want 16GB, etc.
echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max && /usr/local/emhttp/webGui/scripts/notify -i normal -s "System" -d "Adjusted ARC limit to 8G \n\n`date`"
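
To double-check that the limit actually took effect after a reboot (assuming the standard ZFS-on-Linux paths):

# the value you wrote
cat /sys/module/zfs/parameters/zfs_arc_max
# what the ARC itself reports as its maximum, in GiB
awk '/^c_max/ {printf "%.1f GiB\n", $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats
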
scrubDate="$(zpool status "$pool" | grep "scan" | awk '{print $15"-"$12"-"$13"_"$14}')" fi scrubTS="$(date "+%Y-%b-%e_%H:%M:%S" "$scrubDate" "+%s")" currentTS="$(date "+%s")" scrubAge=$((((currentTS - scrubTS) + 43200) / 86400)) fi if [ "$status" = "FAULTED" ] \ || [ "$used" -gt "$usedCrit" ] \ || ( [ "$scrubErrors" != "N/A" ] && [ "$scrubErrors" != "0" ] ) then symbol="$critSymbol" elif [ "$status" != "ONLINE" ] \ || [ "$readErrors" != "0" ] \ || [ "$writeErrors" != "0" ] \ || [ "$cksumErrors" != "0" ] \ || [ "$used" -gt "$usedWarn" ] \ || [ "$scrubRepBytes" != "0" ] \ || [ "$(echo "$scrubAge" | awk '{print int($1)}')" -gt "$scrubAgeWarn" ] then symbol="$warnSymbol" else symbol=" " fi ( printf "|%-12s %1s|%-8s|%6s|%6s|%6s|%3s%%|%4s|%8s|%6s|%5s|\n" \ "$pool" "$symbol" "$status" "$readErrors" "$writeErrors" "$cksumErrors" \ "$used" "$frag" "$scrubRepBytes" "$scrubErrors" "$scrubAge" ) >> ${logfile} done ( echo "+--------------+--------+------+------+------+----+----+--------+------+-----+" ) >> ${logfile} ###### for each pool ###### for pool in $pools; do ( echo "" echo "########## ZPool status report for ${pool} ##########" echo "" zpool status -v "$pool" ) >> ${logfile} done Should give a nice ui'ish summary of all zpools:
  6. Thank you both. I've destroyed all snapshots prior to setting it up properly, and after the creation I did a reboot. It still misses some of the 3-hourlies... I'd really like that. Anyway, I'll start from scratch again. Thanks for the input.
  7. Hello! Thanks again for this awesome stuff. If you look at my snapshots, you'll see that some are missing. I've set it up like this: daily every 3 hours, weekly every day. I think something is off; any idea where to start looking?

SSD/VMs/Ubuntu@2019-12-29-180000    0B  -  25K  -
SSD/VMs/Ubuntu@2019-12-30-000000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-03-150000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-000000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-030000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-060000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-090000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-120000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-150000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-180000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-04-210000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-05-000000    0B  -  25K  -
SSD/VMs/Ubuntu@2020-01-05-030000    0B  -  25K  -
SSD/VMs/Windows@2019-12-29-180000   0B  -  24K  -
SSD/VMs/Windows@2019-12-30-000000   0B  -  24K  -
SSD/VMs/Windows@2020-01-03-150000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-000000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-030000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-060000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-090000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-120000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-150000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-180000   0B  -  24K  -
SSD/VMs/Windows@2020-01-04-210000   0B  -  24K  -
SSD/VMs/Windows@2020-01-05-000000   0B  -  24K  -
SSD/VMs/Windows@2020-01-05-030000   0B  -  24K  -
SSD/tmp@2019-12-29-180000           0B  -  24K  -
SSD/tmp@2019-12-30-000000           0B  -  24K  -
SSD/tmp@2020-01-03-150000           0B  -  24K  -
SSD/tmp@2020-01-04-000000           0B  -  24K  -
SSD/tmp@2020-01-04-030000           0B  -  24K  -
SSD/tmp@2020-01-04-060000           0B  -  24K  -
SSD/tmp@2020-01-04-090000           0B  -  24K  -
SSD/tmp@2020-01-04-120000           0B  -  24K  -
SSD/tmp@2020-01-04-150000           0B  -  24K  -
SSD/tmp@2020-01-04-180000           0B  -  24K  -
SSD/tmp@2020-01-04-210000           0B  -  24K  -
SSD/tmp@2020-01-05-000000           0B  -  24K  -
SSD/tmp@2020-01-05-030000           0B  -  24K  -
  8. Hey, thanks for this app. I can't reach the webui of this container, and I've seen something rather odd: upon creation it shows port 7818 for the webui, but after I've started the container it says 8181. The other ports don't add up either. Please advise.
  9. Also, I only get read permission. Does anyone have any pointers?
  10. Thanks man, it took me less than 5 minutes. Awesome!
  11. Hello! Thanks for this great plugin. I just moved away from FreeNAS to unRAID and I really like ZFS, but I did run into some problems. I've set up an array just because I needed one, with 2x 32GB SSDs, one of which is for parity. Then I followed the guide and created the following:

        NAME        STATE     READ WRITE CKSUM
        HDD         ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdp     ONLINE       0     0     0
            sdn     ONLINE       0     0     0
            sdl     ONLINE       0     0     0
            sdk     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
        logs
          sdg       ONLINE       0     0     0

  pool: SSD
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        SSD         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdm     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

With these datasets:

root@unRAID:~# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
HDD                          4.39M  10.6T   224K  /mnt/HDD
HDD/Backup                   1.36M  10.6T   208K  /mnt/HDD/Backup
HDD/Backup/Desktop            192K  10.6T   192K  /mnt/HDD/Backup/Desktop
HDD/Backup/RPI                991K  10.6T   224K  /mnt/HDD/Backup/RPI
HDD/Backup/RPI/AlarmPanel     192K  10.6T   192K  /mnt/HDD/Backup/RPI/AlarmPanel
HDD/Backup/RPI/Garden         192K  10.6T   192K  /mnt/HDD/Backup/RPI/Garden
HDD/Backup/RPI/Kitchen        192K  10.6T   192K  /mnt/HDD/Backup/RPI/Kitchen
HDD/Backup/RPI/OctoPrint      192K  10.6T   192K  /mnt/HDD/Backup/RPI/OctoPrint
HDD/Film                      192K  10.6T   192K  /mnt/HDD/Film
HDD/Foto                      192K  10.6T   192K  /mnt/HDD/Foto
HDD/Nextcloud                 192K  10.6T   192K  /mnt/HDD/Nextcloud
HDD/Samba                     192K  10.6T   192K  /mnt/HDD/Samba
HDD/Serie                     192K  10.6T   192K  /mnt/HDD/Serie
HDD/Software                  192K  10.6T   192K  /mnt/HDD/Software
SSD                           642K   430G    25K  /mnt/SSD
SSD/Docker                    221K   430G    29K  /mnt/SSD/Docker
SSD/Docker/Jackett             24K   430G    24K  /mnt/SSD/Docker/Jackett
SSD/Docker/Nextcloud           24K   430G    24K  /mnt/SSD/Docker/Nextcloud
SSD/Docker/Organizr            24K   430G    24K  /mnt/SSD/Docker/Organizr
SSD/Docker/Plex                24K   430G    24K  /mnt/SSD/Docker/Plex
SSD/Docker/Radarr              24K   430G    24K  /mnt/SSD/Docker/Radarr
SSD/Docker/Sabnzbd             24K   430G    24K  /mnt/SSD/Docker/Sabnzbd
SSD/Docker/Sonarr              24K   430G    24K  /mnt/SSD/Docker/Sonarr
SSD/Docker/appdata             24K   430G    24K  /mnt/SSD/Docker/appdata
SSD/VMs                       123K   430G    27K  /mnt/SSD/VMs
SSD/VMs/HomeAssistant          24K   430G    24K  /mnt/SSD/VMs/HomeAssistant
SSD/VMs/Libvert                24K   430G    24K  /mnt/SSD/VMs/Libvert
SSD/VMs/Ubuntu                 24K   430G    24K  /mnt/SSD/VMs/Ubuntu
SSD/VMs/Windows                24K   430G    24K  /mnt/SSD/VMs/Windows

Now when I disable Docker and try to set the corresponding paths, I get this:

How do I solve this? Kind regards.

Edit: it just needed a trailing slash after /appdata/. Now I can't disable the VM service from the VM settings tab. Also, editing the default location to the ZFS mount point /mnt/SSD/VMs is reported as not found or is not editable (even with a trailing slash); I just can't press Apply (same for disabling the VM service). Please advise.

Second edit: I needed to stop the array first; then everything is editable. Works as advertised so far. Thanks again. Solved!
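
For anyone who lands here later: a layout like the one above comes from commands roughly along these lines (a sketch using the device letters from my output, not the exact commands from the guide):

# raidz2 data pool with a separate log device (SLOG)
zpool create -m /mnt/HDD HDD raidz2 sdj sdp sdn sdl sdk sdi log sdg
# striped mirrors for the SSD pool
zpool create -m /mnt/SSD SSD mirror sdc sdm mirror sdb sdd
# then the child datasets, for example:
zfs create HDD/Backup
zfs create SSD/Docker
zfs create SSD/VMs
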