
ZFS plugin for unRAID


steini84


Posted (edited)

Grafana has some ZFS dashboards available, e.g. https://grafana.com/grafana/dashboards/328

It uses the Prometheus data source, so it needs a bit of work to get running under unRaid.

Both Grafana and Prometheus have unRaid plugins, so it's really just the ZFS / Prometheus integration that needs wiring up.
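
For anyone who wants a head start on that integration, a rough, untested sketch: node_exporter's built-in zfs collector already reads /proc/spl/kstat/zfs and exposes node_zfs_* metrics, so Prometheus mostly just needs a scrape job pointing at it. The paths, hostname and port below are assumptions, not a working config:

mkdir -p /mnt/user/appdata/prometheus
cat <<'EOF' > /mnt/user/appdata/prometheus/prometheus.yml
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: 'unraid-zfs'
    static_configs:
      - targets: ['tower.local:9100']   # node_exporter running on the unRAID host
EOF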

 

Might be nice to integrate it with the unRaid Grafana dashboard here: https://grafana.com/grafana/dashboards/7233 

 

It's on my list of things to look at, but I haven't had time...

Edited by Freebie
Posted (edited)

To everyone who has problems accessing ZFS through Samba shares: I have a solution which works for me, even though it sounds really stupid.
I mounted my pool Tank at /mnt/Tank and created a filesystem at /mnt/Tank/Share.
I used the following config in my Samba config, so only user1 (created via the unRAID UI) can access data in my share:

[Share] 
path = /mnt/Tank/Share
public = yes
writeable = no  
write list = user1
create mask = 0775
directory mask = 0775

Problem:
If you now mount it in Windows at \\server\Share, you can't create a folder or copy data. Windows tells you that you don't have permission to edit anything in this folder, no matter which user you connect as.
Solution:
Create a folder inside the share via the console; with the right user permissions on it you can then read and write in the pool. Mount this subfolder in Windows and you are good to go (see the commands sketched below).
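
Roughly, from the console (the subfolder name is just an example):

mkdir /mnt/Tank/Share/data                # create a subfolder inside the share
chown user1:users /mnt/Tank/Share/data    # give the unRAID user ownership
chmod 775 /mnt/Tank/Share/data            # match the create/directory masks above
# then map \\server\Share\data in Windows instead of the share root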
PS: If someone has a better fix for this, please let me know.
 

Edited by matz3
Posted (edited)

Install the corsairpsu plugin from CA, then edit the files status.php and status.page in: /usr/local/emhttp/plugins/corsairpsu/ 

Still working on it. Could use some help.

Edited by ezra
Posted

Wanted to add to this thread regarding compression. I'm running a newer Epyc chip with ZFS, and when compression is enabled I'm seeing some massive dips in drive performance. While this was only on one VM with 4 cores on unRaid, as soon as compression was disabled the performance went back up. Not sure if there is something about the new Epyc chips that causes compression to slow things down or what. I'll give it another try once I decide how to set up my next pool of drives.
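
If anyone wants to reproduce or rule this out, compression is a per-dataset property, so a rough A/B sketch looks like this (pool and dataset names are just examples, and this is nowhere near a rigorous benchmark):

zfs create tank/benchtest
zfs set compression=lz4 tank/benchtest
zpool iostat -v tank 5                    # watch throughput while the workload runs
zfs set compression=off tank/benchtest    # then repeat the same workload and compare
zfs get compressratio tank/benchtest      # shows how much lz4 was actually saving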

Posted (edited)

@elstryfe I did read about a day ago that there is a new version of ZFS on Linux out, which has crucially re-enabled the code that uses the CPU hardware for compression (if I'm reading correctly). I believe a kernel developer disabled something in the newer kernels that broke it. I'm not sure if it affects Unraid kernels or not - perhaps @steini84 can comment more factually.
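
If you want to check whether the module you're running is actually using the CPU's vector instructions, these kstats give a rough indication (a sketch assuming ZoL 0.8.x, and it mainly reflects checksum/parity acceleration rather than lz4 itself):

cat /sys/module/zfs/version                  # ZFS on Linux version currently loaded
cat /proc/spl/kstat/zfs/fletcher_4_bench     # which fletcher4 implementation was selected (scalar vs sse/avx)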

Edited by Marshalleq
Posted (edited)

Here is an updated script for an overview of your pools. I run it on a cron schedule with the User Scripts plugin and write the output to an nginx webserver; Home Assistant then grabs that data and displays it:

(screenshots: the generated report as rendered in Home Assistant)

#!/bin/bash
# Builds an ASCII health/usage table for every ZFS pool and publishes it for Home Assistant to read.
logfile="/tmp/zpool_report.tmp"
ZPOOL="/mnt/SSD/Docker/LetsEncrypt/www/zfs/zpool.txt"   # final report, served by nginx
pools=$(zpool list -H -o name)
usedWarn=75        # % used that flags a warning
usedCrit=90        # % used that flags critical
warnSymbol="?"
critSymbol="!"

(
  echo "+--------------+--------+---+---+---+----+----+"
  echo "|Pool Name     |Status  |R  |W  |Ck |Used|Frag|"
  echo "|              |        |Err|Err|Err|    |    |"
  echo "|              |        |   |   |   |    |    |"
  echo "+--------------+--------+---+---+---+----+----+"
) > ${logfile}

# Collect health, fragmentation and per-device error counters for each pool
for pool in $pools; do
  frag="$(zpool list -H -o frag "$pool")"
  status="$(zpool list -H -o health "$pool")"
  # the device lines of `zpool status` carry the READ/WRITE/CKSUM error columns
  errors="$(zpool status "$pool" | grep -E "(ONLINE|DEGRADED|FAULTED|UNAVAIL|REMOVED)[ \t]+[0-9]+")"
  readErrors=0
  for err in $(echo "$errors" | awk '{print $3}'); do
    if echo "$err" | grep -E -q "[^0-9]+"; then
      readErrors=1000
      break
    fi
    readErrors=$((readErrors + err))
  done
  writeErrors=0
  for err in $(echo "$errors" | awk '{print $4}'); do
    if echo "$err" | grep -E -q "[^0-9]+"; then
      writeErrors=1000
      break
    fi
    writeErrors=$((writeErrors + err))
  done
  cksumErrors=0
  for err in $(echo "$errors" | awk '{print $5}'); do
    if echo "$err" | grep -E -q "[^0-9]+"; then
      cksumErrors=1000
      break
    fi
    cksumErrors=$((cksumErrors + err))
  done
  if [ "$readErrors" -gt 999 ]; then readErrors=">1K"; fi
  if [ "$writeErrors" -gt 999 ]; then writeErrors=">1K"; fi
  if [ "$cksumErrors" -gt 999 ]; then cksumErrors=">1K"; fi
  used="$(zpool list -H -p -o capacity "$pool")"

  if [ "$status" = "FAULTED" ] \
  || [ "$used" -gt "$usedCrit" ] 
  then
    symbol="$critSymbol"
  elif [ "$status" != "ONLINE" ] \
  || [ "$readErrors" != "0" ] \
  || [ "$writeErrors" != "0" ] \
  || [ "$cksumErrors" != "0" ] \
  || [ "$used" -gt "$usedWarn" ] 
  then
    symbol="$warnSymbol"
  else
    symbol=" "
  fi
  (
  printf "|%-12s %1s|%-8s|%3s|%3s|%3s|%3s%%|%4s|\n" \
  "$pool" "$symbol" "$status" "$readErrors" "$writeErrors" "$cksumErrors" \
  "$used" "$frag"
  ) >> ${logfile}
  done

(
  echo "+--------------+--------+---+---+---+----+----+"
) >> ${logfile}

cat ${logfile} > "$ZPOOL"
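
If you'd rather schedule it with plain cron instead of the User Scripts plugin, the equivalent entry would look something like this (the script path is just an example, wherever you keep the script):

*/15 * * * * /boot/config/scripts/zpool_report.sh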

 

Edited by ezra
  • Like 2
Posted

 


Cool integration. Custom card for home assistant?


Sent from my iPhone using Tapatalk
Posted

I'm using the unraidapi plugin https://github.com/ElectricBrainUK/UnraidAPI together with the Glances docker container.

 

config:

  - type: custom:vertical-stack-in-card
    title: unRAID Server
    cards:
      - type: horizontal-stack
        cards:
          - type: custom:card-modder
            card: 
              type: picture
              image: /local/images/freenas.png
            style:                 
              border-radius: 5px
          - type: vertical-stack
            cards:  
              - type: custom:entity-attributes-card
                filter:
                  include:
                    - key: binary_sensor.unraid_server.cpu                      
                    - key: binary_sensor.unraid_server.memory
                    - key: binary_sensor.unraid_server.motherboard
                    - key: binary_sensor.unraid_server.arrayStatus
                    - key: binary_sensor.unraid_server.diskSpace
                    - key: binary_sensor.unraid_server.arrayProtection                                   
              - type: entities
                show_header_toggle: false
                entities:            
                  - switch.unraid_array                                
      - type: conditional
        conditions:
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unavailable"
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unknown"    
        card:
            type: custom:bar-card
            align: split
            show_icon: true
            padding: 4px
            columns: 2
            card_style: 
              border-radius: 5px
            severity:
            - value: 50
              color: '#3bb3ee'
            - value: 80
              color: '#e7a24a'
            - value: 100
              color: '#ff0000'                        
            entities:
              - entity: sensor.glances_unraid_disk_used_percent
                title: Disk
                max: 100            
              - entity: sensor.glances_unraid_cpu_used
                title: CPU
                max: 100             
      - type: conditional
        conditions:
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unavailable"
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unknown"    
        card:
            type: custom:bar-card
            align: split
            show_icon: true
            padding: 4px
            columns: 2
            card_style: 
              border-radius: 5px
            severity:
            - value: 50
              color: '#3bb3ee'
            - value: 80
              color: '#e7a24a'
            - value: 100
              color: '#ff0000'                        
            entities:
              - entity: sensor.glances_unraid_swap_used_percent
                title: SWAP             
                max: 100
              - entity: sensor.glances_unraid_ram_used_percent
                title: RAM
                max: 100
      - type: conditional
        conditions:
          - entity: sensor.glances_unraid_containers_active
            state_not: '0'
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unavailable"
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unknown"                
        card:
            type: custom:bar-card
            align: split
            show_icon: true
            padding: 4px
            columns: 2
            card_style: 
              border-radius: 5px
            entities:
              - entity: sensor.glances_unraid_containers_ram_used
                title: Docker RAM 
                max: 10000                
              - entity: sensor.glances_unraid_containers_cpu_used
                title: Docker CPU                   
      - type: conditional
        conditions:
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unavailable"
          - entity: sensor.glances_unraid_disk_used_percent
            state_not: "unknown"    
        card:
            type: custom:bar-card
            align: left
            title_position: left
            show_icon: true
            padding: 4px
            columns: 1
            card_style: 
              border-radius: 5px
            severity:
            - value: 50
              color: '#3bb3ee'
            - value: 80
              color: '#e7a24a'
            - value: 100
              color: '#ff0000'                        
            entities:
              - entity: sensor.glances_unraid_containers_active 
                title: Containers    
                max: 40   
      - type: custom:auto-entities
        filter:
          include:
            - entity_id: switch.unraid_vm_*   
          exclude:
            - entity_id: switch.unraid_vm_*_usb*               
        card:
          type: custom:fold-entity-row
          head:
            type: section
            label: Virtual Machine Control   
      - type: custom:auto-entities
        filter:  
          include:
            - entity_id: switch.unraid_vm_*_usb*               
        card:
          type: custom:fold-entity-row
          head:
            type: section
            label: Virtual Machine  USB Control   
      - type: custom:auto-entities
        filter:
          include:
            - entity_id: switch.unraid_docker_*   
        card:
          type: custom:fold-entity-row
          head:
            type: section
            label: Docker Control
      - type: iframe
        url: https://zfs.domain.nl:443/zpool.txt
        aspect_ratio: 40%   

 

  • Like 1
Posted

Does anyone here know if ZFS Samba sharing uses the Unraid enhanced macOS capability as set in the GUI? I assume it does. I'm particularly interested in extended attributes (xattrs), but there's actually quite a lot that can be configured via vfs_fruit and it's important to get right.
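
Not an answer, but for anyone experimenting, the knobs I'd expect to matter look roughly like this. The dataset and share names are just examples, and the xattr=sa part is a general ZFS-on-Linux tip rather than anything unRAID-specific:

zfs set xattr=sa Tank/MacShare    # store xattrs in the dnode, which helps a lot with streams_xattr metadata

[MacShare]
    path = /mnt/Tank/MacShare
    vfs objects = catia fruit streams_xattr
    fruit:metadata = stream
    fruit:resource = file

You can compare this against the output of "testparm -s" to see what the GUI setting actually generates for the share.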

  • 3 weeks later...
Posted
On 10/22/2015 at 10:20 PM, steini84 said:

No hate against btrfs but ZFS suited me better and I decided to post this plugin if it would be helpful to others

And we are all very grateful for this, thank you.

  • Like 1
Posted

Hey all, has anyone had any success with exporting NFS shares from a ZFS pool? I'm new to unRAID but have been using ZFS via FreeNAS for many years.

What I am trying to do:

Export a directory from a mirrored SSD Pool to some ESX hosts in my lab.

Pool and dataset to be exported: SSD_Pool/VM_Datastore, mounted at /mnt/SSD_Pool/VM_Datastore

Here is a list of the dataset parameters:

SSD_Pool/VM_Datastore                 type                  filesystem                                 -
SSD_Pool/VM_Datastore                 creation              Tue Feb 18 18:21 2020                      -
SSD_Pool/VM_Datastore                 used                  24K                                        -
SSD_Pool/VM_Datastore                 available             1.58T                                      -
SSD_Pool/VM_Datastore                 referenced            24K                                        -
SSD_Pool/VM_Datastore                 compressratio         1.00x                                      -
SSD_Pool/VM_Datastore                 mounted               yes                                        -
SSD_Pool/VM_Datastore                 quota                 none                                       default
SSD_Pool/VM_Datastore                 reservation           none                                       default
SSD_Pool/VM_Datastore                 recordsize            128K                                       default
SSD_Pool/VM_Datastore                 mountpoint            /mnt/SSD_Pool/VM_Datastore                 inherited from SSD_Pool
SSD_Pool/VM_Datastore                 sharenfs              [email protected]/24                         local
SSD_Pool/VM_Datastore                 checksum              on                                         default
SSD_Pool/VM_Datastore                 compression           lz4                                        inherited from SSD_Pool
SSD_Pool/VM_Datastore                 atime                 off                                        inherited from SSD_Pool
SSD_Pool/VM_Datastore                 devices               on                                         default
SSD_Pool/VM_Datastore                 exec                  on                                         default
SSD_Pool/VM_Datastore                 setuid                on                                         default
SSD_Pool/VM_Datastore                 readonly              off                                        default
SSD_Pool/VM_Datastore                 zoned                 off                                        default
SSD_Pool/VM_Datastore                 snapdir               hidden                                     default
SSD_Pool/VM_Datastore                 aclinherit            restricted                                 default
SSD_Pool/VM_Datastore                 createtxg             125                                        -
SSD_Pool/VM_Datastore                 canmount              on                                         default
SSD_Pool/VM_Datastore                 xattr                 on                                         default
SSD_Pool/VM_Datastore                 copies                1                                          default
SSD_Pool/VM_Datastore                 version               5                                          -
SSD_Pool/VM_Datastore                 utf8only              off                                        -
SSD_Pool/VM_Datastore                 normalization         none                                       -
SSD_Pool/VM_Datastore                 casesensitivity       sensitive                                  -
SSD_Pool/VM_Datastore                 vscan                 off                                        default
SSD_Pool/VM_Datastore                 nbmand                off                                        default
SSD_Pool/VM_Datastore                 sharesmb              off                                        default
SSD_Pool/VM_Datastore                 refquota              none                                       default
SSD_Pool/VM_Datastore                 refreservation        none                                       default
SSD_Pool/VM_Datastore                 guid                  7038295691283632036                        -
SSD_Pool/VM_Datastore                 primarycache          all                                        default
SSD_Pool/VM_Datastore                 secondarycache        all                                        default
SSD_Pool/VM_Datastore                 usedbysnapshots       0B                                         -
SSD_Pool/VM_Datastore                 usedbydataset         24K                                        -
SSD_Pool/VM_Datastore                 usedbychildren        0B                                         -
SSD_Pool/VM_Datastore                 usedbyrefreservation  0B                                         -
SSD_Pool/VM_Datastore                 logbias               latency                                    default
SSD_Pool/VM_Datastore                 objsetid              71                                         -
SSD_Pool/VM_Datastore                 dedup                 off                                        default
SSD_Pool/VM_Datastore                 mlslabel              none                                       default
SSD_Pool/VM_Datastore                 sync                  standard                                   default
SSD_Pool/VM_Datastore                 dnodesize             legacy                                     default
SSD_Pool/VM_Datastore                 refcompressratio      1.00x                                      -
SSD_Pool/VM_Datastore                 written               24K                                        -
SSD_Pool/VM_Datastore                 logicalused           12K                                        -
SSD_Pool/VM_Datastore                 logicalreferenced     12K                                        -
SSD_Pool/VM_Datastore                 volmode               default                                    default
SSD_Pool/VM_Datastore                 filesystem_limit      none                                       default
SSD_Pool/VM_Datastore                 snapshot_limit        none                                       default
SSD_Pool/VM_Datastore                 filesystem_count      none                                       default
SSD_Pool/VM_Datastore                 snapshot_count        none                                       default
SSD_Pool/VM_Datastore                 snapdev               hidden                                     default
SSD_Pool/VM_Datastore                 acltype               off                                        default
SSD_Pool/VM_Datastore                 context               none                                       default
SSD_Pool/VM_Datastore                 fscontext             none                                       default
SSD_Pool/VM_Datastore                 defcontext            none                                       default
SSD_Pool/VM_Datastore                 rootcontext           none                                       default
SSD_Pool/VM_Datastore                 relatime              off                                        default
SSD_Pool/VM_Datastore                 redundant_metadata    all                                        default
SSD_Pool/VM_Datastore                 overlay               off                                        default
SSD_Pool/VM_Datastore                 encryption            off                                        default
SSD_Pool/VM_Datastore                 keylocation           none                                       default
SSD_Pool/VM_Datastore                 keyformat             none                                       default
SSD_Pool/VM_Datastore                 pbkdf2iters           0                                          default
SSD_Pool/VM_Datastore                 special_small_blocks  0                                          default

 

If I watch the log, I see the following (192.168.1.16 is one of the hosts trying to mount the export):

Feb 26 08:03:17 JC-NAS rpcbind[14617]: connect from 192.168.1.16 to getport/addr(mountd)
Feb 26 08:03:17 JC-NAS rpc.mountd[16277]: authenticated mount request from 192.168.1.16:693 for /mnt/SSD_Pool/VM_Datastore (/mnt/SSD_Pool/VM_Datastore)
Feb 26 08:03:17 JC-NAS rpc.mountd[16277]: Cannot export /mnt/SSD_Pool/VM_Datastore, possibly unsupported filesystem or fsid= required

 

I've tried creating and re-exporting via /etc/exports with fsid=0 (and a couple of other things like 1, or the dataset GUID). Has anyone had any success with this?
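
For anyone comparing notes, the kind of /etc/exports entry I've been trying looks like this; the subnet, fsid value and extra options are only examples, but the error message suggests the kernel NFS server wants an explicit fsid for a ZFS filesystem:

echo '/mnt/SSD_Pool/VM_Datastore 192.168.1.0/24(rw,sync,no_subtree_check,fsid=101,no_root_squash)' >> /etc/exports
exportfs -ra             # re-read /etc/exports
exportfs -v              # confirm the path is exported with those options
showmount -e localhost   # what clients should see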

Posted (edited)

While sitting with an open console, I just had this:

 

(screenshot: console output, 2020-03-01 8:16 AM)

I understand the first one is ZFS related, so I assume the second is also. Leaving a copy here for the record, as I've been having a few issues and there's a chance this is the cause.

 

Edit - dmesg below; now BTRFS is involved too? Maybe I do have faulty hardware. I found similar dev loop errors a few days ago and had to repair filesystems, even rebuilding all the BTRFS ones from scratch, as it seems the most susceptible to corruption. So I'm going to go with "not specific to ZFS" for now.

 

Edit 2: Memtest confirms I have faulty, or possibly misconfigured, memory. There goes my morning...

 

(screenshot: dmesg output, 2020-03-01 8:23 AM)

Edited by Marshalleq
Posted
On 1/15/2020 at 4:59 AM, matz3 said:

To everyone who has problems accessing ZFS through Samba shares: I have a solution which works for me, even though it sounds really stupid.
I mounted my pool Tank at /mnt/Tank and created a filesystem at /mnt/Tank/Share.
PS: If someone has a better fix for this, please let me know.
 

Having just moved to Un-Fun-Raid yesterday, I managed to knuckle my way around this issue; it drove me nuts for 12 hours straight (no, I am not kidding).

(I've seen two others post about this issue and no one seems to respond with how, so here goes, to help!)

 

Keep the array started.

 

1) Terminal - Make sure the mounted ZFS folders (in my case three of them: SSD, Seagate and Western) all have write access. (View this with an "ls -l" command; concrete commands are sketched after these steps.)

 a - root:root for the owner and group is fine, but you will need to "chmod 777" the folders (mount points) themselves so that everyone has read/write access.

2) Make a share in the web UI with the same name as each of my mounts. (It will allocate the folders to the array, I know; we will edit that.)

 a - Set the users/permissions and settings you wish to use for the folder.

 b - (Be mindful here, as it will create additional config files in your /boot/config/shares folder; these reflect the unRAID array shares but will be destroyed later to prevent conflicts.)

3) Continuing within the UI, using the 'Config File Editor' plugin, go to the Tools tab, open Config File Editor, use the file list on the right to navigate to "/etc/samba/smb-shares.conf", and click the write symbol next to the path you entered. It will present you with all your folder shares.

 a - Highlight the content you want, copy it to a notepad/text editor and save it (just in case). Example of mine:

[SSD]
	path = /mnt/SSD
    comment = VM / Docker
    browseable = yes
    public = yes
    writeable = no
    write list = administator,synology
    vfs objects = catia fruit streams_xattr extd_audit recycle
    case sensitive = auto
    preserve case = yes
    short preserve case = yes
    recycle:repository = .Recycle.Bin
    recycle:directory_mode = 0777
    recycle:keeptree = Yes
    recycle:touch = Yes
    recycle:touch_mtime = No
    recycle:minsize = 1
    recycle:versions = Yes
    recycle:exclude = *.tmp
    recycle:exclude_dir = .Recycle.Bin

4) Remove those shares, and stop the array.

 a - (At this point the excess files from point 2b above should be eradicated; any files in there NOT concerning the unRAID array, delete them. We cannot have ZFS share configs in this /boot/config/shares folder.)

5) Go to your Settings tab, SMB, and paste that information into the SAMBA EXTRA CONFIG area; I've put mine above all the other commands there.

Notice I have edited my path to reflect my mount point.

 

Hit the Save/Apply button and SMB will save the config.

 

At which point, go to your dashboard; the Samba service will fire up and your ZFS shares should be there, ready and waiting to be written to.
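
To make step 1 concrete, this is roughly what it looks like on the console (the mount points are the ones from my setup; substitute your own):

ls -l /mnt                                     # check ownership/permissions of the ZFS mount points
chmod 777 /mnt/SSD /mnt/Seagate /mnt/Western   # give everyone read/write on the mount points themselves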

 

Hopefully this works for you. I was screwing around with mine for ages, and if I missed a step then I apologise; once I got it working I just had to go to bed at such an obscene hour.

I might make a YouTube video of my endeavours to help others and show them how to use a ZFS array on unRAID; the Level1Techs guide and Gamers Nexus video are nice, but they're not a step-by-step by a long shot.

 

Hope it helps.

 

  • Like 1
  • Thanks 1
  • 3 weeks later...
Posted

Anyone getting ZFS kernel panics? It seems to only happen when nzbget is running full tilt.

 

Here is my error:

Apr  2 01:11:48 Tower kernel: PANIC: zfs: accessing past end of object e26/543cf (size=6656 access=6308+1033)
Apr  2 01:11:48 Tower kernel: Showing stack for process 25214
Apr  2 01:11:48 Tower kernel: CPU: 2 PID: 25214 Comm: nzbget Tainted: P           O      4.19.107-Unraid #1
Apr  2 01:11:48 Tower kernel: Hardware name: ASUSTeK COMPUTER INC. Z9PE-D16 Series/Z9PE-D16 Series, BIOS 5601 06/11/2015
Apr  2 01:11:48 Tower kernel: Call Trace:
Apr  2 01:11:48 Tower kernel: dump_stack+0x67/0x83
Apr  2 01:11:48 Tower kernel: vcmn_err+0x8b/0xd4 [spl]
Apr  2 01:11:48 Tower kernel: ? spl_kmem_alloc+0xc9/0xfa [spl]
Apr  2 01:11:48 Tower kernel: ? _cond_resched+0x1b/0x1e
Apr  2 01:11:48 Tower kernel: ? mutex_lock+0xa/0x25
Apr  2 01:11:48 Tower kernel: ? dbuf_find+0x130/0x14c [zfs]
Apr  2 01:11:48 Tower kernel: ? _cond_resched+0x1b/0x1e
Apr  2 01:11:48 Tower kernel: ? mutex_lock+0xa/0x25
Apr  2 01:11:48 Tower kernel: ? arc_buf_access+0x69/0x1f4 [zfs]
Apr  2 01:11:48 Tower kernel: ? _cond_resched+0x1b/0x1e
Apr  2 01:11:48 Tower kernel: zfs_panic_recover+0x67/0x7e [zfs]
Apr  2 01:11:48 Tower kernel: ? spl_kmem_zalloc+0xd4/0x107 [spl]
Apr  2 01:11:48 Tower kernel: dmu_buf_hold_array_by_dnode+0x92/0x3b6 [zfs]
Apr  2 01:11:48 Tower kernel: dmu_write_uio_dnode+0x46/0x11d [zfs]
Apr  2 01:11:48 Tower kernel: ? txg_rele_to_quiesce+0x24/0x32 [zfs]
Apr  2 01:11:48 Tower kernel: dmu_write_uio_dbuf+0x48/0x5e [zfs]
Apr  2 01:11:48 Tower kernel: zfs_write+0x6a3/0xbe8 [zfs]
Apr  2 01:11:48 Tower kernel: zpl_write_common_iovec+0xae/0xef [zfs]
Apr  2 01:11:48 Tower kernel: zpl_iter_write+0xdc/0x10d [zfs]
Apr  2 01:11:48 Tower kernel: do_iter_readv_writev+0x110/0x146
Apr  2 01:11:48 Tower kernel: do_iter_write+0x86/0x15c
Apr  2 01:11:48 Tower kernel: vfs_writev+0x90/0xe2
Apr  2 01:11:48 Tower kernel: ? list_lru_add+0x63/0x13a
Apr  2 01:11:48 Tower kernel: ? vfs_ioctl+0x19/0x26
Apr  2 01:11:48 Tower kernel: ? do_vfs_ioctl+0x533/0x55d
Apr  2 01:11:48 Tower kernel: ? syscall_trace_enter+0x163/0x1aa
Apr  2 01:11:48 Tower kernel: do_writev+0x6b/0xe2
Apr  2 01:11:48 Tower kernel: do_syscall_64+0x57/0xf2
Apr  2 01:11:48 Tower kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
Apr  2 01:11:48 Tower kernel: RIP: 0033:0x14c478acbf90
Apr  2 01:11:48 Tower kernel: Code: 89 74 24 10 48 89 e5 48 89 04 24 49 29 c6 48 89 54 24 18 4c 89 74 24 08 49 01 d6 48 63 7b 78 49 63 d7 4c 89 e8 48 89 ee 0f 05 <48> 89 c7 e8 1b 85 fd ff 49 39 c6 75 19 48 8b 43 58 48 8b 53 60 48
Apr  2 01:11:48 Tower kernel: RSP: 002b:000014c478347640 EFLAGS: 00000216 ORIG_RAX: 0000000000000014
Apr  2 01:11:48 Tower kernel: RAX: ffffffffffffffda RBX: 0000558040d4e920 RCX: 000014c478acbf90
Apr  2 01:11:48 Tower kernel: RDX: 0000000000000002 RSI: 000014c478347640 RDI: 0000000000000005
Apr  2 01:11:48 Tower kernel: RBP: 000014c478347640 R08: 0000000000000001 R09: 000014c478b15873
Apr  2 01:11:48 Tower kernel: R10: 0000000000000006 R11: 0000000000000216 R12: 000000000000000b
Apr  2 01:11:48 Tower kernel: R13: 0000000000000014 R14: 0000000000000409 R15: 0000000000000002

 
