Christopher Haws Posted January 31, 2022

Hello,

I recently started having a recurring issue where my NAS becomes unresponsive after 1 to 2 days of uptime. Looking through the syslogs I can see lots of read errors on disk0 (which I'm guessing is my parity drive). I have also started to hear a grinding noise coming from the server; I wasn't able to identify which disk was causing it, but I'm fairly sure it's this one. As soon as I heard a disk going bad I purchased a new 10TB WD Red Pro, which is the model I have been replacing my older disks with and the exact same model as my current parity disk.

What would be your recommended approach here? Swap the parity drive and let parity rebuild, or add the new drive as a second parity drive and then remove the bad one? If I can get some advice on whether this really is the problem and the best way to fix it, I would be very grateful!

I have included the SMART error log for my parity disk below and attached my system's diagnostic logs.

Thanks!
Chris

SMART parity disk error log:

ATA Error Count: 30 (device log contains only the most recent five errors)
	CR = Command Register [HEX]
	FR = Features Register [HEX]
	SC = Sector Count Register [HEX]
	SN = Sector Number Register [HEX]
	CL = Cylinder Low Register [HEX]
	CH = Cylinder High Register [HEX]
	DH = Device/Head Register [HEX]
	DC = Device Command Register [HEX]
	ER = Error register [HEX]
	ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 30 occurred at disk power-on lifetime: 16131 hours (672 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 00 00 00 00 00  Error: UNC at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 80 60 c0 c7 80 40 00      20:05:05.095  READ FPDMA QUEUED
  61 20 90 e0 dc 36 40 00      20:05:00.595  WRITE FPDMA QUEUED
  60 20 90 e0 dc 36 40 00      20:05:00.594  READ FPDMA QUEUED
  60 80 88 d0 d4 80 40 00      20:04:58.178  READ FPDMA QUEUED
  60 68 80 68 d3 80 40 00      20:04:58.178  READ FPDMA QUEUED

Error 29 occurred at disk power-on lifetime: 16131 hours (672 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 00 00 00 00 00  Error: UNC at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 58 10 08 fa c5 40 00      20:03:42.428  READ FPDMA QUEUED
  60 58 20 60 fb c5 40 00      20:03:35.442  READ FPDMA QUEUED
  60 00 08 08 f6 c5 40 00      20:03:35.442  READ FPDMA QUEUED
  60 b0 30 58 f4 c5 40 00      20:03:28.885  READ FPDMA QUEUED
  60 38 28 20 f4 c5 40 00      20:03:28.884  READ FPDMA QUEUED

Error 28 occurred at disk power-on lifetime: 16131 hours (672 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 00 00 00 00 00  Error: UNC at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 70 10 60 b7 c5 40 00      20:03:14.397  READ FPDMA QUEUED
  2f 00 01 10 00 00 00 00      20:03:14.397  READ LOG EXT
  60 e8 08 78 c5 c5 40 00      20:03:14.382  READ FPDMA QUEUED
  61 20 40 20 1c 1c 40 00      20:03:10.526  WRITE FPDMA QUEUED
  60 20 40 20 1c 1c 40 00      20:03:10.513  READ FPDMA QUEUED

Error 27 occurred at disk power-on lifetime: 16131 hours (672 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 00 00 00 00 00  Error: UNC at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 18 b8 54 c5 40 00      20:03:02.266  READ FPDMA QUEUED
  61 08 70 60 1c 1e 40 00      20:02:55.326  WRITE FPDMA QUEUED
  61 08 68 e0 1c 1c 40 00      20:02:55.326  WRITE FPDMA QUEUED
  60 08 70 60 1c 1e 40 00      20:02:55.324  READ FPDMA QUEUED
  60 08 68 e0 1c 1c 40 00      20:02:55.324  READ FPDMA QUEUED

Error 26 occurred at disk power-on lifetime: 16131 hours (672 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 00 00 00 00 00  Error: UNC at LBA = 0x00000000 = 0

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 a8 78 40 20 c5 40 00      20:02:46.540  READ FPDMA QUEUED
  61 a8 78 40 10 c5 40 00      20:02:39.392  WRITE FPDMA QUEUED
  60 98 70 40 8b 51 40 00      20:02:39.392  READ FPDMA QUEUED
  60 e0 68 60 87 51 40 00      20:02:39.392  READ FPDMA QUEUED
  60 a0 60 40 8b 31 40 00      20:02:39.392  READ FPDMA QUEUED

nas-diagnostics-20220130-1745.zip
JorgeB Posted January 31, 2022

You should run an extended SMART test on the parity disk, then act according to the result, but it's unlikely that drive is what's crashing the server; it could be this issue:
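For reference, the extended test can be started from the Unraid GUI (disk's attributes page) or from the console with smartctl. The device name below is a placeholder; substitute your actual parity device:

```shell
# Placeholder device -- replace /dev/sdX with your parity disk (see the Main tab)
DEV=/dev/sdX

# Start the extended (long) self-test; the drive runs it internally in the background
smartctl -t long "$DEV"

# Check progress and, once finished, the result; also reprints the ATA error log
smartctl -l selftest "$DEV"
smartctl -a "$DEV"
```

Any "read failure" result from the long test would confirm the drive is bad.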
Christopher Haws Posted January 31, 2022 (Author)

I woke up this morning to this error. That thread does look promising too, but I definitely feel like something is up with my parity disk.
Christopher Haws Posted February 17, 2022 (Author)

Over the weekend I swapped my parity disk and let parity rebuild. That part of the issue seems to be resolved and my data is safe again!

As for the server becoming unresponsive every 24h or so, unfortunately that is still happening. I went through the forum thread you posted and did what it said (added "blacklist i915" to "i915.conf"), but the issue still occurs. Any other thoughts on what could be causing this?

Thanks!
Chris
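For anyone following along, the blacklist file I added looks like this. The path is the one the linked thread suggests for recent Unraid releases; treat it as an assumption for your version:

```shell
# /boot/config/modprobe.d/i915.conf
# Prevents the i915 Intel iGPU driver from being loaded at boot
blacklist i915
```

The file lives on the flash drive, so it survives reboots; a reboot is needed for it to take effect.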
JorgeB Posted February 17, 2022

Enable the syslog server and post the resulting log after a crash.
Christopher Haws Posted February 26, 2022 (Author)

I enabled the syslog server and had a crash last night. It looks like the last command to run before the server stopped responding was:

Feb 24 15:00:02 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null

Here are the full logs (yesterday's, plus today's after the reboot):

Feb 24 00:00:02 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 01:00:35 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 02:00:22 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 03:00:07 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 04:00:07 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 04:00:12 nas kernel: veth3858b98: renamed from eth0
Feb 24 04:00:12 nas kernel: docker0: port 9(veth7201990) entered disabled state
Feb 24 04:00:12 nas kernel: docker0: port 9(veth7201990) entered disabled state
Feb 24 04:00:12 nas kernel: device veth7201990 left promiscuous mode
Feb 24 04:00:12 nas kernel: docker0: port 9(veth7201990) entered disabled state
Feb 24 04:00:13 nas kernel: docker0: port 9(vethbf67a4e) entered blocking state
Feb 24 04:00:13 nas kernel: docker0: port 9(vethbf67a4e) entered disabled state
Feb 24 04:00:13 nas kernel: device vethbf67a4e entered promiscuous mode
Feb 24 04:00:13 nas kernel: docker0: port 9(vethbf67a4e) entered blocking state
Feb 24 04:00:13 nas kernel: docker0: port 9(vethbf67a4e) entered forwarding state
Feb 24 04:00:13 nas kernel: docker0: port 9(vethbf67a4e) entered disabled state
Feb 24 04:00:16 nas kernel: eth0: renamed from veth2b3e1ca
Feb 24 04:00:16 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethbf67a4e: link becomes ready
Feb 24 04:00:16 nas kernel: docker0: port 9(vethbf67a4e) entered blocking state
Feb 24 04:00:16 nas kernel: docker0: port 9(vethbf67a4e) entered forwarding state
Feb 24 05:00:01 nas Plugin Auto Update: Checking for available plugin updates
Feb 24 05:00:04 nas Plugin Auto Update: unassigned.devices.plg version 2022.02.23a does not meet age requirements to update
Feb 24 05:00:04 nas Plugin Auto Update: Checking for language updates
Feb 24 05:00:04 nas Plugin Auto Update: Community Applications Plugin Auto Update finished
Feb 24 05:00:04 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 06:00:01 nas Docker Auto Update: Community Applications Docker Autoupdate running
Feb 24 06:00:01 nas Docker Auto Update: Checking for available updates
Feb 24 06:00:07 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 06:00:40 nas Docker Auto Update: Stopping minecraft-crafters-server
Feb 24 06:00:49 nas kernel: veth2b3e1ca: renamed from eth0
Feb 24 06:00:49 nas kernel: docker0: port 9(vethbf67a4e) entered disabled state
Feb 24 06:00:50 nas kernel: docker0: port 9(vethbf67a4e) entered disabled state
Feb 24 06:00:50 nas kernel: device vethbf67a4e left promiscuous mode
Feb 24 06:00:50 nas kernel: docker0: port 9(vethbf67a4e) entered disabled state
Feb 24 06:00:51 nas Docker Auto Update: Stopping sabnzbd
Feb 24 06:00:56 nas kernel: veth0b8cb2e: renamed from eth0
Feb 24 06:00:56 nas kernel: docker0: port 7(veth3719d9e) entered disabled state
Feb 24 06:00:56 nas kernel: docker0: port 7(veth3719d9e) entered disabled state
Feb 24 06:00:56 nas kernel: device veth3719d9e left promiscuous mode
Feb 24 06:00:56 nas kernel: docker0: port 7(veth3719d9e) entered disabled state
Feb 24 06:00:57 nas Docker Auto Update: Stopping jackett
Feb 24 06:01:02 nas kernel: veth2186cee: renamed from eth0
Feb 24 06:01:02 nas kernel: docker0: port 10(vethb076b68) entered disabled state
Feb 24 06:01:02 nas kernel: docker0: port 10(vethb076b68) entered disabled state
Feb 24 06:01:02 nas kernel: device vethb076b68 left promiscuous mode
Feb 24 06:01:02 nas kernel: docker0: port 10(vethb076b68) entered disabled state
Feb 24 06:01:03 nas Docker Auto Update: Installing Updates for grocy minecraft-crafters-server minecraft-fabric-server minecraft-server sabnzbd GitLab-CE jackett
Feb 24 06:06:21 nas Docker Auto Update: Restarting minecraft-crafters-server
Feb 24 06:06:21 nas kernel: docker0: port 7(vethdd6bc1e) entered blocking state
Feb 24 06:06:21 nas kernel: docker0: port 7(vethdd6bc1e) entered disabled state
Feb 24 06:06:21 nas kernel: device vethdd6bc1e entered promiscuous mode
Feb 24 06:06:21 nas kernel: docker0: port 7(vethdd6bc1e) entered blocking state
Feb 24 06:06:21 nas kernel: docker0: port 7(vethdd6bc1e) entered forwarding state
Feb 24 06:06:21 nas kernel: docker0: port 7(vethdd6bc1e) entered disabled state
Feb 24 06:06:23 nas kernel: eth0: renamed from veth5bf758b
Feb 24 06:06:23 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethdd6bc1e: link becomes ready
Feb 24 06:06:23 nas kernel: docker0: port 7(vethdd6bc1e) entered blocking state
Feb 24 06:06:23 nas kernel: docker0: port 7(vethdd6bc1e) entered forwarding state
Feb 24 06:06:24 nas Docker Auto Update: Restarting sabnzbd
Feb 24 06:06:24 nas kernel: docker0: port 9(veth72a47cf) entered blocking state
Feb 24 06:06:24 nas kernel: docker0: port 9(veth72a47cf) entered disabled state
Feb 24 06:06:24 nas kernel: device veth72a47cf entered promiscuous mode
Feb 24 06:06:24 nas kernel: docker0: port 9(veth72a47cf) entered blocking state
Feb 24 06:06:24 nas kernel: docker0: port 9(veth72a47cf) entered forwarding state
Feb 24 06:06:24 nas kernel: docker0: port 9(veth72a47cf) entered disabled state
Feb 24 06:06:27 nas kernel: eth0: renamed from veth0618230
Feb 24 06:06:27 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth72a47cf: link becomes ready
Feb 24 06:06:27 nas kernel: docker0: port 9(veth72a47cf) entered blocking state
Feb 24 06:06:27 nas kernel: docker0: port 9(veth72a47cf) entered forwarding state
Feb 24 06:06:28 nas Docker Auto Update: Restarting jackett
Feb 24 06:06:28 nas kernel: docker0: port 10(vethfdff145) entered blocking state
Feb 24 06:06:28 nas kernel: docker0: port 10(vethfdff145) entered disabled state
Feb 24 06:06:28 nas kernel: device vethfdff145 entered promiscuous mode
Feb 24 06:06:34 nas kernel: eth0: renamed from veth001c14b
Feb 24 06:06:34 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfdff145: link becomes ready
Feb 24 06:06:34 nas kernel: docker0: port 10(vethfdff145) entered blocking state
Feb 24 06:06:34 nas kernel: docker0: port 10(vethfdff145) entered forwarding state
Feb 24 06:06:35 nas Docker Auto Update: Community Applications Docker Autoupdate finished
Feb 24 07:00:07 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 08:00:07 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 09:00:03 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 09:22:41 nas emhttpd: spinning down /dev/sdd
Feb 24 09:32:09 nas emhttpd: read SMART /dev/sdd
Feb 24 10:00:07 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 11:00:02 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 12:00:07 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 13:00:02 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 13:34:52 nas emhttpd: spinning down /dev/sdh
Feb 24 14:00:03 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 24 15:00:02 nas crond[1927]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Feb 25 16:34:22 nas cache_dirs: Arguments=-u -i appdata -i domains -i downloads -i emulation -i isos -i media -i storage -i system -i vms -c 5 -l off
Feb 25 16:34:22 nas cache_dirs: Max Scan Secs=10, Min Scan Secs=1
Feb 25 16:34:22 nas cache_dirs: Scan Type=adaptive
Feb 25 16:34:22 nas cache_dirs: Min Scan Depth=5
Feb 25 16:34:22 nas cache_dirs: Max Scan Depth=none
Feb 25 16:34:22 nas cache_dirs: Use Command='find -noleaf'
Feb 25 16:34:22 nas cache_dirs: ---------- Caching Directories ---------------
Feb 25 16:34:22 nas cache_dirs: appdata
Feb 25 16:34:22 nas cache_dirs: domains
Feb 25 16:34:22 nas cache_dirs: downloads
Feb 25 16:34:22 nas cache_dirs: emulation
Feb 25 16:34:22 nas cache_dirs: isos
Feb 25 16:34:22 nas cache_dirs: media
Feb 25 16:34:22 nas cache_dirs: storage
Feb 25 16:34:22 nas cache_dirs: system
Feb 25 16:34:22 nas cache_dirs: vms
Feb 25 16:34:22 nas cache_dirs: ----------------------------------------------
Feb 25 16:34:22 nas cache_dirs: Setting Included dirs: appdata,domains,downloads,emulation,isos,media,storage,system,vms
Feb 25 16:34:22 nas cache_dirs: Setting Excluded dirs:
Feb 25 16:34:22 nas cache_dirs: min_disk_idle_before_restarting_scan_sec=60
Feb 25 16:34:22 nas cache_dirs: scan_timeout_sec_idle=150
Feb 25 16:34:22 nas cache_dirs: scan_timeout_sec_busy=30
Feb 25 16:34:22 nas cache_dirs: scan_timeout_sec_stable=30
Feb 25 16:34:22 nas cache_dirs: frequency_of_full_depth_scan_sec=604800
Feb 25 16:34:22 nas cache_dirs: Including /mnt/user in scan
Feb 25 16:34:22 nas cache_dirs: cache_dirs service rc.cachedirs: Started: '/usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -u -i "appdata" -i "domains" -i "downloads" -i "emulation" -i "isos" -i "media" -i "storage" -i "system" -i "vms" -c 5 -l off 2>/dev/null'
Feb 25 16:34:22 nas Recycle Bin: Starting Recycle Bin
Feb 25 16:34:22 nas emhttpd: Starting Recycle Bin...
Feb 25 16:34:22 nas rsyslogd: [origin software="rsyslogd" swVersion="8.2002.0" x-pid="4947" x-info="https://www.rsyslog.com"] start
Feb 25 16:34:26 nas unassigned.devices: Mounting 'Auto Mount' Devices...
Feb 25 16:34:26 nas emhttpd: Starting services...
Feb 25 16:34:27 nas emhttpd: shcmd (84): /etc/rc.d/rc.samba restart
Feb 25 16:34:29 nas root: Starting Samba: /usr/sbin/smbd -D
Feb 25 16:34:29 nas root: /usr/sbin/nmbd -D
Feb 25 16:34:29 nas root: /usr/sbin/wsdd
Feb 25 16:34:29 nas root: /usr/sbin/winbindd -D
Feb 25 16:34:29 nas emhttpd: shcmd (97): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 120
Feb 25 16:34:29 nas kernel: BTRFS: device fsid c134559e-955b-46fd-91f6-c957194cfbe5 devid 1 transid 3684153 /dev/loop2 scanned by udevd (5575)
Feb 25 16:34:29 nas kernel: BTRFS info (device loop2): using free space tree
Feb 25 16:34:29 nas kernel: BTRFS info (device loop2): has skinny extents
Feb 25 16:34:29 nas zed[4074]: zed_udev_monitor: skip /dev/loop2 (in use by btrfs)
Feb 25 16:34:30 nas kernel: BTRFS info (device loop2): start tree-log replay
Feb 25 16:34:31 nas kernel: BTRFS info (device loop2): checking UUID tree
Feb 25 16:34:31 nas root: Resize '/var/lib/docker' of 'max'
Feb 25 16:34:31 nas emhttpd: shcmd (99): /etc/rc.d/rc.docker start
Feb 25 16:34:31 nas root: starting dockerd ...
Feb 25 16:34:31 nas zed[4074]: zed_udev_monitor: skip /dev/loop2 (in use by btrfs)
Feb 25 16:34:47 nas kernel: Bridge firewalling registered
Feb 25 16:35:11 nas kernel: docker0: port 1(veth9b04a6f) entered blocking state
Feb 25 16:35:11 nas kernel: docker0: port 1(veth9b04a6f) entered disabled state
Feb 25 16:35:11 nas kernel: device veth9b04a6f entered promiscuous mode
Feb 25 16:35:11 nas kernel: docker0: port 1(veth9b04a6f) entered blocking state
Feb 25 16:35:11 nas kernel: docker0: port 1(veth9b04a6f) entered forwarding state
Feb 25 16:35:11 nas kernel: docker0: port 1(veth9b04a6f) entered disabled state
Feb 25 16:35:14 nas kernel: cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
Feb 25 16:35:14 nas kernel: eth0: renamed from vetha5ae1d9
Feb 25 16:35:14 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth9b04a6f: link becomes ready
Feb 25 16:35:14 nas kernel: docker0: port 1(veth9b04a6f) entered blocking state
Feb 25 16:35:14 nas kernel: docker0: port 1(veth9b04a6f) entered forwarding state
Feb 25 16:35:14 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
Feb 25 16:35:16 nas emhttpd: shcmd (113): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
Feb 25 16:35:16 nas kernel: docker0: port 2(vethb9d2b12) entered blocking state
Feb 25 16:35:16 nas kernel: docker0: port 2(vethb9d2b12) entered disabled state
Feb 25 16:35:16 nas kernel: device vethb9d2b12 entered promiscuous mode
Feb 25 16:35:16 nas kernel: docker0: port 2(vethb9d2b12) entered blocking state
Feb 25 16:35:16 nas kernel: docker0: port 2(vethb9d2b12) entered forwarding state
Feb 25 16:35:16 nas kernel: docker0: port 2(vethb9d2b12) entered disabled state
Feb 25 16:35:16 nas kernel: BTRFS: device fsid 3cf1223f-cce2-4028-9710-47090b938116 devid 1 transid 251 /dev/loop3 scanned by udevd (10556)
Feb 25 16:35:16 nas kernel: BTRFS info (device loop3): using free space tree
Feb 25 16:35:16 nas kernel: BTRFS info (device loop3): has skinny extents
Feb 25 16:35:16 nas zed[4074]: zed_udev_monitor: skip /dev/loop3 (in use by btrfs)
Feb 25 16:35:16 nas root: Resize '/etc/libvirt' of 'max'
Feb 25 16:35:16 nas emhttpd: shcmd (115): /etc/rc.d/rc.libvirt start
Feb 25 16:35:16 nas zed[4074]: zed_udev_monitor: skip /dev/loop3 (in use by btrfs)
Feb 25 16:35:16 nas root: Starting virtlockd...
Feb 25 16:35:16 nas root: Starting virtlogd...
Feb 25 16:35:16 nas root: Starting libvirtd...
Feb 25 16:35:16 nas kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 25 16:35:17 nas kernel: virbr0: port 1(virbr0-nic) entered blocking state
Feb 25 16:35:17 nas kernel: virbr0: port 1(virbr0-nic) entered disabled state
Feb 25 16:35:17 nas kernel: device virbr0-nic entered promiscuous mode
Feb 25 16:35:17 nas kernel: mdcmd (36): check nocorrect
Feb 25 16:35:17 nas kernel: md: recovery thread: check P ...
Feb 25 16:35:17 nas kernel: virbr0: port 1(virbr0-nic) entered blocking state
Feb 25 16:35:17 nas kernel: virbr0: port 1(virbr0-nic) entered listening state
Feb 25 16:35:17 nas dnsmasq[11560]: started, version 2.84rc2 cachesize 150
Feb 25 16:35:17 nas dnsmasq[11560]: compile time options: IPv6 GNU-getopt no-DBus no-UBus i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-cryptohash no-DNSSEC loop-detect inotify dumpfile
Feb 25 16:35:17 nas dnsmasq-dhcp[11560]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Feb 25 16:35:17 nas dnsmasq-dhcp[11560]: DHCP, sockets bound exclusively to interface virbr0
Feb 25 16:35:17 nas dnsmasq[11560]: reading /etc/resolv.conf
Feb 25 16:35:17 nas dnsmasq[11560]: using nameserver 1.1.1.1#53
Feb 25 16:35:17 nas dnsmasq[11560]: using nameserver 1.0.0.1#53
Feb 25 16:35:17 nas dnsmasq[11560]: using nameserver 8.8.8.8#53
Feb 25 16:35:17 nas dnsmasq[11560]: read /etc/hosts - 2 addresses
Feb 25 16:35:17 nas dnsmasq[11560]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Feb 25 16:35:17 nas dnsmasq-dhcp[11560]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Feb 25 16:35:17 nas kernel: virbr0: port 1(virbr0-nic) entered disabled state
Feb 25 16:35:17 nas kernel: L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
Feb 25 16:35:19 nas unassigned.devices: Mounting 'Auto Mount' Remote Shares...
Feb 25 16:35:19 nas sudo: root : PWD=/ ; USER=root ; COMMAND=/bin/bash -c /usr/local/emhttp/plugins/unbalance/unbalance -port 6237
Feb 25 16:35:19 nas emhttpd: shcmd (117): /etc/rc.d/rc.php-fpm start
Feb 25 16:35:19 nas root: Starting php-fpm done
Feb 25 16:35:19 nas emhttpd: shcmd (118): /etc/rc.d/rc.unraid-api install
Feb 25 16:35:20 nas root: Starting [email protected]
Feb 25 16:35:21 nas unraid-api[11709]: ✔️ UNRAID API started successfully!
Feb 25 16:35:22 nas root: API has been running for 1.3s and is in "production" mode!
Feb 25 16:35:22 nas emhttpd: shcmd (119): /etc/rc.d/rc.nginx start
Feb 25 16:35:22 nas root: Starting Nginx server daemon...
Feb 25 16:35:22 nas root: Starting [email protected]
Feb 25 16:35:23 nas emhttpd: shcmd (120): /etc/rc.d/rc.flash_backup start
Feb 25 16:35:23 nas emhttpd: shcmd (120): exit status: 1
Feb 25 16:35:23 nas unraid-api[12134]: ✔️ UNRAID API started successfully!
Feb 25 16:35:24 nas kernel: eth0: renamed from veth1328ec0
Feb 25 16:35:24 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb9d2b12: link becomes ready
Feb 25 16:35:24 nas kernel: docker0: port 2(vethb9d2b12) entered blocking state
Feb 25 16:35:24 nas kernel: docker0: port 2(vethb9d2b12) entered forwarding state
Feb 25 16:35:26 nas rc.docker: homeassistant: started succesfully!
Feb 25 16:35:26 nas kernel: docker0: port 3(vethdc32c8f) entered blocking state
Feb 25 16:35:26 nas kernel: docker0: port 3(vethdc32c8f) entered disabled state
Feb 25 16:35:26 nas kernel: device vethdc32c8f entered promiscuous mode
Feb 25 16:35:26 nas kernel: docker0: port 3(vethdc32c8f) entered blocking state
Feb 25 16:35:26 nas kernel: docker0: port 3(vethdc32c8f) entered forwarding state
Feb 25 16:35:26 nas kernel: docker0: port 3(vethdc32c8f) entered disabled state
Feb 25 16:35:39 nas kernel: eth0: renamed from vethe9aa1bc
Feb 25 16:35:39 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethdc32c8f: link becomes ready
Feb 25 16:35:39 nas kernel: docker0: port 3(vethdc32c8f) entered blocking state
Feb 25 16:35:39 nas kernel: docker0: port 3(vethdc32c8f) entered forwarding state
Feb 25 16:35:47 nas rc.docker: minecraft-crafters-stats: started succesfully!
Feb 25 16:35:47 nas kernel: docker0: port 4(vethff67643) entered blocking state
Feb 25 16:35:47 nas kernel: docker0: port 4(vethff67643) entered disabled state
Feb 25 16:35:47 nas kernel: device vethff67643 entered promiscuous mode
Feb 25 16:35:47 nas kernel: docker0: port 4(vethff67643) entered blocking state
Feb 25 16:35:47 nas kernel: docker0: port 4(vethff67643) entered forwarding state
Feb 25 16:35:47 nas kernel: docker0: port 4(vethff67643) entered disabled state
Feb 25 16:36:01 nas kernel: eth0: renamed from veth341cb1f
Feb 25 16:36:01 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethff67643: link becomes ready
Feb 25 16:36:01 nas kernel: docker0: port 4(vethff67643) entered blocking state
Feb 25 16:36:01 nas kernel: docker0: port 4(vethff67643) entered forwarding state
Feb 25 16:36:04 nas rc.docker: crafters-music-bot: started succesfully!
Feb 25 16:36:23 nas rc.docker: PlexMediaServer: started succesfully!
Feb 25 16:36:23 nas kernel: docker0: port 5(vethed319c9) entered blocking state
Feb 25 16:36:23 nas kernel: docker0: port 5(vethed319c9) entered disabled state
Feb 25 16:36:23 nas kernel: device vethed319c9 entered promiscuous mode
Feb 25 16:36:23 nas kernel: docker0: port 5(vethed319c9) entered blocking state
Feb 25 16:36:23 nas kernel: docker0: port 5(vethed319c9) entered forwarding state
Feb 25 16:36:23 nas kernel: docker0: port 5(vethed319c9) entered disabled state
Feb 25 16:36:40 nas kernel: eth0: renamed from veth734152b
Feb 25 16:36:40 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethed319c9: link becomes ready
Feb 25 16:36:40 nas kernel: docker0: port 5(vethed319c9) entered blocking state
Feb 25 16:36:40 nas kernel: docker0: port 5(vethed319c9) entered forwarding state
Feb 25 16:36:44 nas rc.docker: radarr: started succesfully!
Feb 25 16:36:44 nas kernel: docker0: port 6(veth3380066) entered blocking state
Feb 25 16:36:44 nas kernel: docker0: port 6(veth3380066) entered disabled state
Feb 25 16:36:44 nas kernel: device veth3380066 entered promiscuous mode
Feb 25 16:36:44 nas kernel: docker0: port 6(veth3380066) entered blocking state
Feb 25 16:36:44 nas kernel: docker0: port 6(veth3380066) entered forwarding state
Feb 25 16:36:44 nas kernel: docker0: port 6(veth3380066) entered disabled state
Feb 25 16:37:09 nas kernel: eth0: renamed from vethfb2a2b5
Feb 25 16:37:09 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3380066: link becomes ready
Feb 25 16:37:09 nas kernel: docker0: port 6(veth3380066) entered blocking state
Feb 25 16:37:09 nas kernel: docker0: port 6(veth3380066) entered forwarding state
Feb 25 16:37:14 nas rc.docker: sonarr: started succesfully!
Feb 25 16:37:14 nas kernel: docker0: port 7(veth1c9dd17) entered blocking state
Feb 25 16:37:14 nas kernel: docker0: port 7(veth1c9dd17) entered disabled state
Feb 25 16:37:14 nas kernel: device veth1c9dd17 entered promiscuous mode
Feb 25 16:37:14 nas kernel: docker0: port 7(veth1c9dd17) entered blocking state
Feb 25 16:37:14 nas kernel: docker0: port 7(veth1c9dd17) entered forwarding state
Feb 25 16:37:14 nas kernel: docker0: port 7(veth1c9dd17) entered disabled state
Feb 25 16:37:40 nas kernel: eth0: renamed from veth6850ff1
Feb 25 16:37:40 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1c9dd17: link becomes ready
Feb 25 16:37:40 nas kernel: docker0: port 7(veth1c9dd17) entered blocking state
Feb 25 16:37:40 nas kernel: docker0: port 7(veth1c9dd17) entered forwarding state
Feb 25 16:37:42 nas rc.docker: sabnzbd: started succesfully!
Feb 25 16:37:42 nas kernel: docker0: port 8(veth50dc05c) entered blocking state
Feb 25 16:37:42 nas kernel: docker0: port 8(veth50dc05c) entered disabled state
Feb 25 16:37:42 nas kernel: device veth50dc05c entered promiscuous mode
Feb 25 16:37:42 nas kernel: docker0: port 8(veth50dc05c) entered blocking state
Feb 25 16:37:42 nas kernel: docker0: port 8(veth50dc05c) entered forwarding state
Feb 25 16:37:42 nas kernel: docker0: port 8(veth50dc05c) entered disabled state
Feb 25 16:38:02 nas kernel: eth0: renamed from veth83110ff
Feb 25 16:38:02 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth50dc05c: link becomes ready
Feb 25 16:38:02 nas kernel: docker0: port 8(veth50dc05c) entered blocking state
Feb 25 16:38:02 nas kernel: docker0: port 8(veth50dc05c) entered forwarding state
Feb 25 16:38:07 nas rc.docker: caddy2: started succesfully!
Feb 25 16:38:07 nas kernel: docker0: port 9(veth72bfe75) entered blocking state
Feb 25 16:38:07 nas kernel: docker0: port 9(veth72bfe75) entered disabled state
Feb 25 16:38:07 nas kernel: device veth72bfe75 entered promiscuous mode
Feb 25 16:38:07 nas kernel: docker0: port 9(veth72bfe75) entered blocking state
Feb 25 16:38:07 nas kernel: docker0: port 9(veth72bfe75) entered forwarding state
Feb 25 16:38:07 nas kernel: docker0: port 9(veth72bfe75) entered disabled state
Feb 25 16:38:19 nas kernel: eth0: renamed from vethfd34af9
Feb 25 16:38:19 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth72bfe75: link becomes ready
Feb 25 16:38:19 nas kernel: docker0: port 9(veth72bfe75) entered blocking state
Feb 25 16:38:19 nas kernel: docker0: port 9(veth72bfe75) entered forwarding state
Feb 25 16:38:22 nas rc.docker: jackett: started succesfully!
Feb 25 16:38:22 nas kernel: docker0: port 10(vethe2bcada) entered blocking state
Feb 25 16:38:22 nas kernel: docker0: port 10(vethe2bcada) entered disabled state
Feb 25 16:38:22 nas kernel: device vethe2bcada entered promiscuous mode
Feb 25 16:38:22 nas kernel: docker0: port 10(vethe2bcada) entered blocking state
Feb 25 16:38:22 nas kernel: docker0: port 10(vethe2bcada) entered forwarding state
Feb 25 16:38:22 nas kernel: docker0: port 10(vethe2bcada) entered disabled state
Feb 25 16:38:34 nas webGUI: Successful login user root from 172.17.0.9
Feb 25 16:38:38 nas kernel: docker0: port 10(vethe2bcada) entered disabled state
Feb 25 16:38:38 nas kernel: device vethe2bcada left promiscuous mode
Feb 25 16:38:38 nas kernel: docker0: port 10(vethe2bcada) entered disabled state
JorgeB Posted February 26, 2022

Unfortunately there's nothing relevant logged; that usually suggests a hardware issue. One thing you can try is to boot the server in safe mode with all Docker containers and VMs disabled and let it run as a basic NAS for a few days. If it still crashes, it's likely a hardware problem; if it doesn't, start turning the other services back on one by one.