Parity disk missing



I noticed the other day that my parity disk had gone missing according to unRAID. The dashboard just said it was missing, with no other info. The disk in question is a 10TB WD Easystore that I had used Kapton tape on. I've been using this drive for about four months without issue. I couldn't stop the array, so I had to issue a powerdown command to turn the server off.

 

I turned it back on and the drive showed up fine. I ran a short SMART test and it passed. I started a parity check, and here is where it gets interesting: at about 30% or so the disk apparently disconnected, as I got a message that the parity sync finished (with errors). It then came back and disconnected a few more times. I was able to shut the server down again and move the disk to a different bay, so the SATA cable should be ruled out as an issue. I also downgraded the server from 6.6.7 to 6.6.3. This is a 10TB Easystore (four months old) whose 3rd power pin I had to tape over to get it to show up in the system; the tape looks fine, not damaged. I am using a SYBA non-RAID card for additional SATA ports. I don't know exactly which drive is connected to which controller, but it should be a mix, and I've had no other issues. On another parity check it made it a few hours before having an issue again: "disk missing."

 

 

I shut down, re-taped the pin on the drive, moved it to another bay, and powered the server back on. I uninstalled the Dynamix plugins it told me were not compatible (System Stats and another one I don't remember exactly), even though they'd been running for months without issue. I've also removed the Preclear plugin.

 

Not sure what else to check. The drive seems to show up fine after restarting, and the SMART indicators show no problems. I couldn't post diagnostics: when the parity disk shows as missing, the server doesn't let me do much, including downloading the log/diagnostics. Could the drive be bad?

 

chrome_2019-03-10_13-22-54.png


It dropped out again after maybe 8 hours. The long SMART test never finished.

 

I ran fdisk -l and the drive shows up as /dev/sdc.

Currently my drives are:

10TB parity

8TB

8TB

8TB

5TB

240GB - cache SSD

3TB - unassigned device drive

 

 

I'm not too familiar with the command line, but from the fdisk output it looks like the 10TB drive is being recognized as sdc, correct? It's just not showing up in the mapper? I just want to be sure it's a bad disk before I RMA it, or whether it's something else.
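One way to double-check which physical drive sdc is (a sketch; "sdc" comes from the fdisk output above, and /dev/sdX letters can shuffle between boots, which is why the persistent by-id names are the reliable reference):

```shell
# Match the drive by its persistent by-id symlink instead of the sdX letter.
# The model/serial in the link name tells you which physical disk sdc is.
ls -l /dev/disk/by-id/ 2>/dev/null | grep 'sdc$' \
    || echo "no by-id entry currently points at sdc"
```

If the WD100EMAZ entry is the one pointing at sdc, the 10TB drive is the one being recognized there.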

Edit: I just ran df -h and the drive doesn't come up... does this mean it's not showing up at the hardware level?
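For what it's worth, df -h only lists mounted filesystems, and unRAID never mounts the parity disk directly, so it will never appear there even when healthy. Asking the kernel directly is more telling (a sketch using the sdc name from the fdisk output above):

```shell
# df shows only mounted filesystems; /proc/partitions lists every block
# device the kernel currently knows about, mounted or not.
awk 'NR > 2 { print $4 }' /proc/partitions | grep -x 'sdc' \
    && echo "kernel still sees sdc" \
    || echo "sdc is gone at the device level"
```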

 


Linux 4.18.15-unRAID.
root@unraid6:~# fdisk -l
Disk /dev/loop0: 8.1 MiB, 8466432 bytes, 16536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 4.8 MiB, 5009408 bytes, 9784 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop3: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 7.5 GiB, 8004304896 bytes, 15633408 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd9d9e0f4

Device     Boot Start      End  Sectors  Size Id Type
/dev/sda1  *       32 15633407 15633376  7.5G  b W95 FAT32


Disk /dev/sdb: 223.6 GiB, 240053181952 bytes, 468853871 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1          64 468853870 468853807 223.6G 83 Linux


Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E3CC4316-EC88-478D-A805-4F6C5DF10529

Device     Start        End    Sectors  Size Type
/dev/sde1     64 5860533134 5860533071  2.7T Linux filesystem


Disk /dev/sdc: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E7711511-A5C6-4FBB-BB39-3E40C1A72D56

Device     Start         End     Sectors  Size Type
/dev/sdc1     64 19532873694 19532873631  9.1T Linux filesystem


Disk /dev/sdf: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B2B36FBE-D126-4860-9588-8CCC9C356C29

Device     Start         End     Sectors  Size Type
/dev/sdf1     64 15628053134 15628053071  7.3T Linux filesystem


Disk /dev/sdh: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 3184404F-C49C-4005-975B-27936380AA5F

Device     Start         End     Sectors  Size Type
/dev/sdh1     64 15628053134 15628053071  7.3T Linux filesystem


Disk /dev/sdg: 4.6 TiB, 5000981078016 bytes, 9767541168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 348C8341-AD9B-45AA-985E-7F2CA802C8DB

Device     Start        End    Sectors  Size Type
/dev/sdg1     64 9767541134 9767541071  4.6T Linux filesystem


Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 49EE0DD7-7FB3-40E3-A0B4-2292FF958C2F

Device     Start         End     Sectors  Size Type
/dev/sdd1     64 15628053134 15628053071  7.3T Linux filesystem


Disk /dev/md1: 7.3 TiB, 8001563168768 bytes, 15628053064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md4: 4.6 TiB, 5000981024768 bytes, 9767541064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md5: 7.3 TiB, 8001563168768 bytes, 15628053064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md6: 7.3 TiB, 8001563168768 bytes, 15628053064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/md1: 7.3 TiB, 8001561071616 bytes, 15628048968 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/md4: 4.6 TiB, 5000978927616 bytes, 9767536968 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/md5: 7.3 TiB, 8001561071616 bytes, 15628048968 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/md6: 7.3 TiB, 8001561071616 bytes, 15628048968 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@unraid6:~# ^C


root@unraid6:~# df -h
Filesystem       Size  Used Avail Use% Mounted on
rootfs           7.7G  7.7G     0 100% /
tmpfs             32M  488K   32M   2% /run
devtmpfs         7.7G     0  7.7G   0% /dev
tmpfs            7.8G     0  7.8G   0% /dev/shm
cgroup_root      8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs            128M   17M  112M  14% /var/log
/dev/sda1        7.5G  542M  7.0G   8% /boot
/dev/loop0       8.2M  8.2M     0 100% /lib/modules
/dev/loop1       4.9M  4.9M     0 100% /lib/firmware
/dev/mapper/md1  7.3T  5.5T  1.9T  76% /mnt/disk1
/dev/mapper/md4  4.6T  2.7T  1.9T  59% /mnt/disk4
/dev/mapper/md5  7.3T  4.4T  3.0T  60% /mnt/disk5
/dev/mapper/md6  7.3T  5.6T  1.8T  77% /mnt/disk6
/dev/sdb1        224G  115G  110G  52% /mnt/cache
shfs              27T   18T  8.4T  69% /mnt/user0
shfs              27T   19T  8.5T  69% /mnt/user
/dev/loop2        20G  8.2G  9.9G  46% /var/lib/docker
/dev/loop3       1.0G   17M  905M   2% /etc/libvirt
shm               64M     0   64M   0% /var/lib/docker/containers/1ef38c565c0dc4676a17d17524f35ad678368c6d0c619e62f48fdbebc95c2656/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/f7c4b2998f46afcf68f62daa4ee2d3f37011fa2104ef568a0ec5b184b600b3c6/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/fdfe912c8d97848f2910c5c369edf57a85c796a8f0743d184c04c316a24ad95d/mounts/shm
shm               64M  8.0K   64M   1% /var/lib/docker/containers/19830b50b6c4d964be2f55cbe81f6d9c091a3c1cde06a770194eab1a2a39f64c/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/41e23e93d2f7b3ab5521680dcbe44d9f2972826f04b31fd83276f7c42bb79c04/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/aaa87460ecb0d465f1a3b3933ee71f387169a3b3993650a2d06f734622461ccf/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/241da1da18347450c3a28253b71dd25bc2ee8915cf6e4d8be1e9a5d49830f76f/mounts/shm
shm               64M  8.0K   64M   1% /var/lib/docker/containers/36523e97cb4b6ad6d5cf1971207026fe3f9fd3687fd1d96be4dfc4805a60765b/mounts/shm
shm               64M  4.0K   64M   1% /var/lib/docker/containers/a35b44a4b3bb64e56d02d6b1e3052568a81fb0fc6441e8f941d75d1eb52ef3b3/mounts/shm
shm               64M  4.0K   64M   1% /var/lib/docker/containers/e0d2d3c9d15a76f63bacfe0655f546c51a4b7af549473523878deaf203004d5b/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/bf3b5c549e57e659122472b077668be0e112ece782401f45da725e4a5c932049/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/fef4b9c151c1066f2b07755159f8015fb6c3e2a9145925a075de04c4954543b4/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/516fa1d4914282ab733e3525b0f65d703b159298a9845b8d49b4f77dd5198542/mounts/shm
shm               64M     0   64M   0% /var/lib/docker/containers/93834dbe0cf2db0e6984bb5d0443f0bcd50dc291258ec2a05ee5383510061139/mounts/shm

 

9 minutes ago, johnnie.black said:

Please post the diagnostics: Tools -> Diagnostics

I've tried, but once the disk goes missing, clicking download doesn't do anything. The same happens when I navigate to "Main": it just hangs indefinitely. Is there a way to get some info via the command line?

 

I just uploaded the last diagnostics I ran. This was yesterday after the reboot. I'll see if I can run some of these commands and attach the results


/config
copy all *.cfg files, the go file, and the super.dat file. These are configuration files.

/config/shares
copy all *.cfg files. These are user share settings files.

Syslog file(s)
copy the current syslog file and any previous existing syslog files.

System
save output of the following commands:
lsscsi, lspci, lsusb, free, lsof, ps, ethtool & ifconfig.
display of iommu groups.
display of command line parameters (e.g. pcie acs override, pci stubbing, etc).
save system variables.

SMART reports
save a SMART report of each individual disk present in your system.

Docker
save files docker.log, libvirtd.log and libvirt/qemu/*.log.

 

unraid6-diagnostics-20190310-1439.zip


lsscsi


root@unraid6:~# lsscsi
[0:0:0:0]    disk    SanDisk  Cruzer Fit       1.26  /dev/sda
[1:0:0:0]    disk    ATA      INTEL SSDSC2CW24 400i  /dev/sdb
[1:0:1:0]    disk    ATA      WDC WD100EMAZ-00 0A83  /dev/sdc
[2:0:0:0]    disk    ATA      WDC WD80EFZX-68U 0A83  /dev/sdd
[2:0:1:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sde
[3:0:0:0]    disk    ATA      WDC WD80EFZX-68U 0A83  /dev/sdf
[5:0:0:0]    disk    ATA      WDC WD50EFRX-68M 0A82  /dev/sdg
[6:0:0:0]    disk    ATA      WDC WD80EFZX-68U 0A83  /dev/sdh

lspci


root@unraid6:~# lspci
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller (rev 09)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09)
00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
00:16.0 Communication controller: Intel Corporation 7 Series/C216 Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #2 (rev 04)
00:1b.0 Audio device: Intel Corporation 7 Series/C216 Chipset Family High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 1 (rev c4)
00:1c.4 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 (rev c4)
00:1d.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation H77 Express Chipset LPC Controller (rev 04)
00:1f.2 IDE interface: Intel Corporation 7 Series/C210 Series Chipset Family 4-port SATA Controller [IDE mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 7 Series/C216 Chipset Family SMBus Controller (rev 04)
00:1f.5 IDE interface: Intel Corporation 7 Series/C210 Series Chipset Family 2-port SATA Controller [IDE mode] (rev 04)
01:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 09)

lsusb


root@unraid6:~# lsusb
Bus 002 Device 003: ID 0781:5571 SanDisk Corp. Cruzer Fit
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

free


root@unraid6:~# free
              total        used        free      shared  buff/cache   available
Mem:       16179168     4562688      244448     8636696    11372032     2437604
Swap:             0           0           0

 

ps


root@unraid6:~# ps
  PID TTY          TIME CMD
13780 pts/0    00:00:00 bash
18035 pts/0    00:00:00 ps

 

ifconfig


root@unraid6:~# ifconfig
br-d68a70650a31: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fe80::42:8cff:fe5a:cb0b  prefixlen 64  scopeid 0x20<link>
        ether 02:42:8c:5a:cb:0b  txqueuelen 0  (Ethernet)
        RX packets 297663  bytes 42998732 (41.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 446849  bytes 340913148 (325.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:b8ff:fecc:96b4  prefixlen 64  scopeid 0x20<link>
        ether 02:42:b8:cc:96:b4  txqueuelen 0  (Ethernet)
        RX packets 1906142  bytes 1734395673 (1.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12873299  bytes 28880862646 (26.8 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.154  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::62a4:4cff:feb4:6373  prefixlen 64  scopeid 0x20<link>
        ether 60:a4:4c:b4:63:73  txqueuelen 1000  (Ethernet)
        RX packets 75161483  bytes 107696407991 (100.3 GiB)
        RX errors 0  dropped 47  overruns 0  frame 0
        TX packets 8459731  bytes 10041401153 (9.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 385630  bytes 432108518 (412.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 385630  bytes 432108518 (412.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth3c31c77: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::dc72:62ff:fec2:d8d1  prefixlen 64  scopeid 0x20<link>
        ether de:72:62:c2:d8:d1  txqueuelen 0  (Ethernet)
        RX packets 178111  bytes 94918972 (90.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 210728  bytes 30012083 (28.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth72f45ba: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::500e:b3ff:fec8:3687  prefixlen 64  scopeid 0x20<link>
        ether 52:0e:b3:c8:36:87  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32693  bytes 2623326 (2.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth92e4967: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::cc02:f7ff:fe79:4670  prefixlen 64  scopeid 0x20<link>
        ether ce:02:f7:79:46:70  txqueuelen 0  (Ethernet)
        RX packets 50574  bytes 1313267126 (1.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 484128  bytes 27449301 (26.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth9f15a2f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::903e:1bff:fefa:916d  prefixlen 64  scopeid 0x20<link>
        ether 92:3e:1b:fa:91:6d  txqueuelen 0  (Ethernet)
        RX packets 6110  bytes 487800 (476.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22922  bytes 2702569 (2.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vetha093cce: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::e48e:2ff:fe70:48ae  prefixlen 64  scopeid 0x20<link>
        ether e6:8e:02:70:48:ae  txqueuelen 0  (Ethernet)
        RX packets 29456  bytes 21457056 (20.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61536  bytes 23288991 (22.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethaab2733: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::9886:f5ff:fe8a:5375  prefixlen 64  scopeid 0x20<link>
        ether 9a:86:f5:8a:53:75  txqueuelen 0  (Ethernet)
        RX packets 11205  bytes 9715015 (9.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27924  bytes 3650084 (3.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethadbfd50: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::34c5:f6ff:fe78:f5ca  prefixlen 64  scopeid 0x20<link>
        ether 36:c5:f6:78:f5:ca  txqueuelen 0  (Ethernet)
        RX packets 245112  bytes 184595252 (176.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 228798  bytes 31577015 (30.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethbe24854: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::70d4:c1ff:fe27:f630  prefixlen 64  scopeid 0x20<link>
        ether 72:d4:c1:27:f6:30  txqueuelen 0  (Ethernet)
        RX packets 73322  bytes 16430195 (15.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 94054  bytes 65029453 (62.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethc2f65a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::18cb:cbff:fea6:cdb3  prefixlen 64  scopeid 0x20<link>
        ether 1a:cb:cb:a6:cd:b3  txqueuelen 0  (Ethernet)
        RX packets 1278639  bytes 96301331 (91.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11878708  bytes 28685197071 (26.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethdee503b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::9488:4bff:fe25:4b4b  prefixlen 64  scopeid 0x20<link>
        ether 96:88:4b:25:4b:4b  txqueuelen 0  (Ethernet)
        RX packets 9011  bytes 1664085 (1.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44397  bytes 35227808 (33.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethe20563c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::24c2:aeff:fe25:55fc  prefixlen 64  scopeid 0x20<link>
        ether 26:c2:ae:25:55:fc  txqueuelen 0  (Ethernet)
        RX packets 268207  bytes 25708958 (24.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 417964  bytes 320245418 (305.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:ed:6d:ab  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)

 

Just saw your post johnnie:


root@unraid6:~# diagnostics
Starting diagnostics collection... echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
done.
ZIP file '/boot/logs/tower-diagnostics-20190311-0800.zip' created.

Not sure how to access it though since I can't connect over samba...

 

 

Not sure what it means when it says no space left on device... will /boot survive a reboot? 
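A quick way to check whether the zip actually made it to persistent storage despite the write errors (a sketch; the path is the one reported by the diagnostics command above):

```shell
# /boot is flash storage on unRAID, so files written there persist across
# reboots. Check the zip is present and not zero bytes (a truncated write
# from the "no space" condition would show as an empty file).
f=/boot/logs/tower-diagnostics-20190311-0800.zip
if [ -f "$f" ]; then
    ls -lh "$f"
else
    echo "missing: $f"
fi
```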

log 03.11.2019.txt


Your df results showed rootfs with no space. That would explain the no space when trying to get diagnostics.

 

That may not explain your parity disk problem, but it will definitely cause other problems, including getting in the way of diagnosing the parity problem.

 

Suggest you disable the Docker and VM services and reboot. Leave them disabled until you get your array stable.
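If rootfs starts filling again, something like this, run before rebooting, should show what is eating the space (a sketch; the -x flag keeps du on the root filesystem so /mnt, /boot, and other mounts aren't counted):

```shell
# Largest top-level directories on rootfs only, biggest first.
# -x = stay on one filesystem; -d1 = one level deep; sort -rh = human-size sort.
du -xh -d1 / 2>/dev/null | sort -rh | head -n 10
```

Whichever directory dominates (often /var/log or a runaway temp path) points at what filled the RAM filesystem.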


Ok I rebooted. Here is the diagnostics zip. 

 

My flash drive doesn't look full to me after reboot:

image.thumb.png.71f00e2c2ec26ee86060ebca1495c37b.png

df -h results are also different...


root@unraid6:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          7.7G  813M  6.9G  11% /
tmpfs            32M  484K   32M   2% /run
devtmpfs        7.7G     0  7.7G   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  292K  128M   1% /var/log
/dev/sda1       7.5G  542M  7.0G   8% /boot
/dev/loop0      8.2M  8.2M     0 100% /lib/modules
/dev/loop1      4.9M  4.9M     0 100% /lib/firmware

 

21 minutes ago, ptirmal said:

My flash drive doesn't look full to me after reboot:

With the other things in this thread this is an understandable misunderstanding. It was never full.

 

/boot is the flash drive.

 

It was your rootfs which was full. This is the OS filesystem, which lives in RAM. When it is full, lots of things are going to misbehave.

22 minutes ago, trurl said:

With the other things in this thread this is an understandable misunderstanding. It was never full.

 

/boot is the flash drive.

 

It was your rootfs which was full. This is the OS filesystem, which lives in RAM. When it is full, lots of things are going to misbehave.

Understood. So I'm booted into maintenance mode with no Docker apps or VMs running. When the parity disk went missing before, Dockers were running but no VM was. I uninstalled the Preclear plugin, which I thought I had already done... I am seeing this in the logs for warnings and errors:


Mar 11 10:13:57 unraid6 kernel: ACPI: Early table checksum verification disabled
Mar 11 10:13:57 unraid6 kernel: floppy0: no floppy controllers found
Mar 11 10:13:57 unraid6 kernel: random: 6 urandom warning(s) missed due to ratelimiting
Mar 11 10:13:57 unraid6 kernel: ata1.00: HPA detected: current 468853871, native 468862128
Mar 11 10:14:07 unraid6 rpc.statd[1694]: Failed to read /var/lib/nfs/state: Success
Mar 11 10:14:24 unraid6 avahi-daemon[5918]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Mar 11 10:14:24 unraid6 root: Failed to open key file.
Mar 11 10:14:24 unraid6 root: Failed to open key file.
Mar 11 10:14:24 unraid6 root: Failed to open key file.
Mar 11 10:14:24 unraid6 root: Failed to open key file.
Mar 11 10:14:26 unraid6 root: error: /webGui/include/ProcessStatus.php: wrong csrf_token

Does this look like anything that needs to be addressed?

Could the RAM filling up cause the parity disk to go "missing," even though it was still showing up under fdisk?
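One hedged way to tell the two cases apart: a real hardware drop usually leaves ATA link or reset errors in the kernel log, while a purely software-side problem generally does not. The patterns below are illustrative, not exhaustive, and sdc is this system's device name:

```shell
# Kernel-level ATA errors or messages about sdc suggest the drive or link
# really dropped; their absence points at a software-side cause instead.
grep -iE 'ata[0-9]+(\.[0-9]+)?: (exception|link|reset|failed)|sdc' \
    /var/log/syslog 2>/dev/null | tail -n 20
```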


I guess I deleted it in an edit, but the zip looks pretty much full of empty files, presumably because there wasn't enough space to create them. I think the output of the commands above and the log I copy-pasted are the most I have.

 

 

 

So now in maintenance mode with no docker or VMs enabled, should I just wait and see how it behaves? Should I start a parity check?

 

 

tower-diagnostics-20190311-0800.zip


I was on 6.6.7, but I thought some of my issues might have been caused by the upgrade, since the server ran fine for months on 6.6.3 and not that long on 6.6.7, so I downgraded back to 6.6.3.

 

Do you think I should do it right now, or wait to see how stable it is? Current uptime: 1 hr 17 min. It seemed fine for at least 5 hours yesterday before I went to sleep, and when I woke up the disk was showing as missing. The long SMART test was running and had made it to at least 40% when I last checked, but I don't think it completed in time.
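In case it helps, smartctl can report self-test progress directly instead of waiting on the GUI. A sketch, assuming smartctl is on the PATH and the parity drive is still sdc:

```shell
# "Self-test execution status" reports either completion or the percentage
# of the test remaining; -c prints the capabilities/status section.
command -v smartctl >/dev/null \
    && smartctl -c /dev/sdc | grep -A1 'Self-test execution status' \
    || echo "smartctl not available"
```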

 

Here is the current diagnostics file. Also, thanks all for your help!

unraid6-diagnostics-20190311-1129.zip

 

Edit: Updated to 6.6.7 on reboot (after 2 hours with no issues) and uploaded diagnostics after this reboot.

unraid6-diagnostics-20190311-1218.zip


Do you think I should start a parity check or just wait for now and see if it stays up without issues?

 

 

Can someone tell me if I'm interpreting this correctly: fdisk -l shows my parity drive while the unRAID webgui shows "missing disk." Does that point to a software issue rather than a hardware one, especially since rootfs was full for unknown reasons? My only guess is that a recent update of one of my Dockers caused something.

 

 

6 minutes ago, johnnie.black said:

Everything seems fine for now, if more issues try to grab the diags immediately, so hopefully we can see something.

 

15 minutes ago, trurl said:

When this happens, is it listed with the Unassigned Devices instead? There was another thread like that recently.

No. When this has happened before, the dashboard shows what's shown in the screenshot above. I can't access "Main" to see what the disks are showing; it's just perpetually stuck in the loading animation.


So it survived the night, and the results of df -h show rootfs is still only at 11%:


root@unraid6:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          7.7G  802M  6.9G  11% /
tmpfs            32M  484K   32M   2% /run
devtmpfs        7.7G     0  7.7G   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  200K  128M   1% /var/log
/dev/sda1       7.5G  708M  6.8G  10% /boot
/dev/loop0      8.2M  8.2M     0 100% /lib/modules
/dev/loop1      4.9M  4.9M     0 100% /lib/firmware

Can someone explain why it shows so many reads on these disks? It's still in maintenance mode:

image.thumb.png.f6815e48249925b9513bd3d18e138477.png

 

and it doesn't look like much is going on:

image.thumb.png.95c302baa62739fa58f1cb668c0caeaa.png

 

I guess my next step is to bring the array online out of maintenance mode without docker or VM and start a parity check.

 

New diagnostics attached.

unraid6-diagnostics-20190312-0610.zip

1 minute ago, ptirmal said:

I shrunk my array to remove some smaller disks from it since I had excess capacity and didn't want them just sitting around when I could use them for other tasks.

OK. It is very easy to move the other disks down to fill the gaps if you want after you get things stable.

