Server Becomes Unresponsive and Unreachable Periodically (At Least Once Every 24 Hours)



Hi all,

 

I've investigated potential causes for this and exhausted all obvious and easily accessible solutions, so I thought that I'd reach out and see if anyone is able to offer some insights.

 

My server has been stable for probably over a year, and only very recently (with no hardware or software changes at all) has it started either:

 

  1. Going into a standby mode where the power LED flashes and the drives spin down; or
  2. Going into a state where it is completely unresponsive and unreachable.

 

  • If I wake it from standby (state 1), it simply resumes into state 2. 
  • In either state, the connected screen and peripherals do not work, other than the keyboard pushing it from state 1 to state 2.
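
Since the console is dead in both states, the syslog held in RAM is lost on every hard reset. One way to keep evidence across a crash is to mirror syslog to the flash drive. A minimal sketch, assuming the stock rsyslog setup and that /boot is the flash device (the mirror file name is hypothetical):

mkdir -p /boot/logs
echo '*.* /boot/logs/syslog-mirror.txt' >> /etc/rsyslog.conf   # append a catch-all rule
/etc/rc.d/rc.rsyslogd restart                                  # reload rsyslog so the rule takes effect

Note this writes continuously to the flash device, so it is best left on only while hunting the crash.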

 

My system:

 

  • Kingston Technology ValueRAM 16GB 2400MHz DDR4 ECC CL17 DIMM 2Rx8 Desktop Memory 

 

[Attached screenshot of full system specifications: zpxbww5.png]


Some of the things I've tried:

 

1. Disabling C6 states (see the ZenStates sketch below for a software-level cross-check)

2. Setting Power Supply Idle Control to Typical Current Idle

3. Moving the flash drive to a USB 2.0 port

4. Disabling the Dynamix S3 Sleep plugin
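
On item 1, the BIOS toggle can be cross-checked (or applied without a reboot) from the shell. A minimal sketch, assuming the community ZenStates-Linux script (zenstates.py) has been copied to the flash drive (the path is hypothetical):

modprobe msr                               # zenstates.py reads CPU MSRs
python /boot/zenstates.py --list           # show P-states and whether C6 is enabled
python /boot/zenstates.py --c6-disable     # disable core/package C6 until next boot

If --list still reports C6 as enabled after the BIOS change, the BIOS setting isn't sticking.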

 

My suspicions:

 

1. Potentially something to do with Plex, but I can't be certain (as I said, nothing has changed).

2. Potentially something to do with my network: when I powered on my desktop, which sits on the same desk and is connected to the same dual-port Power-over-Ethernet adapter, the server spun down into its odd standby mode. However, I think this was just a coincidence.

 

Thank you in advance for any assistance; this one is really baffling me.

 

Log as of this morning (I've never seen these pcieport messages before, but they may be relevant):

 

  • Please find the color-coded system log attached or below:

 

Quote

May 6 03:18:09 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 03:18:09 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00001100/00006000
May 6 03:18:09 NAS kernel: pcieport 0000:00:01.3: [ 8] RELAY_NUM Rollover 
May 6 03:18:09 NAS kernel: pcieport 0000:00:01.3: [12] Replay Timer Timeout 
May 6 03:18:09 NAS kernel: pcieport 0000:03:00.2: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 03:18:09 NAS kernel: pcieport 0000:03:00.2: device [1022:43b2] error status/mask=00001000/00002000
May 6 03:18:09 NAS kernel: pcieport 0000:03:00.2: [12] Replay Timer Timeout 
May 6 03:40:02 NAS crond[1899]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
May 6 05:11:32 NAS kernel: pcieport 0000:00:01.3: AER: Corrected error received: 0000:00:00.0
May 6 05:11:32 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 05:11:32 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00000080/00006000
May 6 05:11:32 NAS kernel: pcieport 0000:00:01.3: [ 7] Bad DLLP 
May 6 05:59:28 NAS kernel: pcieport 0000:00:01.3: AER: Multiple Corrected error received: 0000:00:00.0
May 6 05:59:28 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 05:59:28 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00001100/00006000
May 6 05:59:28 NAS kernel: pcieport 0000:00:01.3: [ 8] RELAY_NUM Rollover 
May 6 05:59:28 NAS kernel: pcieport 0000:00:01.3: [12] Replay Timer Timeout 
May 6 05:59:28 NAS kernel: pcieport 0000:03:00.2: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 05:59:28 NAS kernel: pcieport 0000:03:00.2: device [1022:43b2] error status/mask=00001100/00002000
May 6 05:59:28 NAS kernel: pcieport 0000:03:00.2: [ 8] RELAY_NUM Rollover 
May 6 05:59:28 NAS kernel: pcieport 0000:03:00.2: [12] Replay Timer Timeout 
May 6 06:47:56 NAS kernel: pcieport 0000:00:01.3: AER: Multiple Corrected error received: 0000:00:00.0
May 6 06:47:56 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 06:47:56 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00001100/00006000
May 6 06:47:56 NAS kernel: pcieport 0000:00:01.3: [ 8] RELAY_NUM Rollover 
May 6 06:47:56 NAS kernel: pcieport 0000:00:01.3: [12] Replay Timer Timeout 
May 6 06:47:56 NAS kernel: pcieport 0000:03:00.2: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 06:47:56 NAS kernel: pcieport 0000:03:00.2: device [1022:43b2] error status/mask=00001000/00002000
May 6 06:47:56 NAS kernel: pcieport 0000:03:00.2: [12] Replay Timer Timeout 
May 6 07:47:22 NAS kernel: pcieport 0000:00:01.3: AER: Multiple Corrected error received: 0000:00:00.0
May 6 07:47:22 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 07:47:22 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00001100/00006000
May 6 07:47:22 NAS kernel: pcieport 0000:00:01.3: [ 8] RELAY_NUM Rollover 
May 6 07:47:22 NAS kernel: pcieport 0000:00:01.3: [12] Replay Timer Timeout 
May 6 07:51:21 NAS kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
May 6 07:51:21 NAS kernel: pcieport 0000:00:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 07:51:21 NAS kernel: pcieport 0000:00:03.1: device [1022:1453] error status/mask=00001000/00006000
May 6 07:51:21 NAS kernel: pcieport 0000:00:03.1: [12] Replay Timer Timeout 
May 6 08:01:24 NAS emhttpd: req (2): startState=STOPPED&file=&optionCorrect=correct&csrf_token=****************&cmdStart=Start
May 6 08:01:24 NAS emhttpd: shcmd (53338): /usr/local/sbin/set_ncq sdc 1
May 6 08:01:24 NAS kernel: mdcmd (39): set md_num_stripes 1280
May 6 08:01:24 NAS kernel: mdcmd (40): set md_sync_window 384
May 6 08:01:24 NAS kernel: mdcmd (41): set md_sync_thresh 192
May 6 08:01:24 NAS kernel: mdcmd (42): set md_write_method
May 6 08:01:24 NAS kernel: mdcmd (43): set spinup_group 0 0
May 6 08:01:24 NAS kernel: mdcmd (44): set spinup_group 1 0
May 6 08:01:24 NAS emhttpd: shcmd (53339): echo 128 > /sys/block/sdc/queue/nr_requests
May 6 08:01:24 NAS emhttpd: shcmd (53340): /usr/local/sbin/set_ncq sdb 1
May 6 08:01:24 NAS emhttpd: shcmd (53341): echo 128 > /sys/block/sdb/queue/nr_requests
May 6 08:01:24 NAS kernel: mdcmd (45): start STOPPED
May 6 08:01:24 NAS kernel: unraid: allocating 15740K for 1280 stripes (3 disks)
May 6 08:01:24 NAS kernel: md1: running, size: 7814026532 blocks
May 6 08:01:24 NAS emhttpd: shcmd (53342): udevadm settle
May 6 08:01:24 NAS root: Starting diskload
May 6 08:01:25 NAS tips.and.tweaks: Tweaks Applied
May 6 08:01:25 NAS emhttpd: Mounting disks...
May 6 08:01:25 NAS emhttpd: shcmd (53345): /sbin/btrfs device scan
May 6 08:01:25 NAS root: Scanning for Btrfs filesystems
May 6 08:01:25 NAS emhttpd: shcmd (53346): mkdir -p /mnt/disk1
May 6 08:01:25 NAS emhttpd: shcmd (53347): mount -t xfs -o noatime,nodiratime /dev/md1 /mnt/disk1
May 6 08:01:25 NAS kernel: SGI XFS with ACLs, security attributes, no debug enabled
May 6 08:01:25 NAS kernel: XFS (md1): Mounting V5 Filesystem
May 6 08:01:25 NAS kernel: XFS (md1): Starting recovery (logdev: internal)
May 6 08:01:25 NAS kernel: XFS (md1): Ending recovery (logdev: internal)
May 6 08:01:25 NAS emhttpd: shcmd (53348): xfs_growfs /mnt/disk1
May 6 08:01:25 NAS root: meta-data=/dev/md1 isize=512 agcount=8, agsize=268435455 blks
May 6 08:01:25 NAS root: = sectsz=512 attr=2, projid32bit=1
May 6 08:01:25 NAS root: = crc=1 finobt=1 spinodes=0 rmapbt=0
May 6 08:01:25 NAS root: = reflink=0
May 6 08:01:25 NAS root: data = bsize=4096 blocks=1953506633, imaxpct=5
May 6 08:01:25 NAS root: = sunit=0 swidth=0 blks
May 6 08:01:25 NAS root: naming =version 2 bsize=4096 ascii-ci=0 ftype=1
May 6 08:01:25 NAS root: log =internal bsize=4096 blocks=521728, version=2
May 6 08:01:25 NAS root: = sectsz=512 sunit=0 blks, lazy-count=1
May 6 08:01:25 NAS root: realtime =none extsz=4096 blocks=0, rtextents=0
May 6 08:01:25 NAS emhttpd: shcmd (53349): mkdir -p /mnt/cache
May 6 08:01:25 NAS emhttpd: shcmd (53350): mount -t btrfs -o noatime,nodiratime /dev/sdd1 /mnt/cache
May 6 08:01:25 NAS kernel: BTRFS info (device sdd1): disk space caching is enabled
May 6 08:01:25 NAS kernel: BTRFS info (device sdd1): has skinny extents
May 6 08:01:25 NAS kernel: BTRFS info (device sdd1): enabling ssd optimizations
May 6 08:01:25 NAS kernel: BTRFS info (device sdd1): checking UUID tree
May 6 08:01:25 NAS emhttpd: shcmd (53351): sync
May 6 08:01:25 NAS emhttpd: shcmd (53352): mkdir /mnt/user0
May 6 08:01:25 NAS emhttpd: shcmd (53353): /usr/local/sbin/shfs /mnt/user0 -disks 2 -o noatime,big_writes,allow_other |& logger
May 6 08:01:25 NAS shfs: stderr redirected to syslog
May 6 08:01:25 NAS emhttpd: shcmd (53354): mkdir /mnt/user
May 6 08:01:25 NAS emhttpd: shcmd (53355): /usr/local/sbin/shfs /mnt/user -disks 3 2048000000 -o noatime,big_writes,allow_other -o remember=0 |& logger
May 6 08:01:25 NAS shfs: stderr redirected to syslog
May 6 08:01:25 NAS emhttpd: shcmd (53357): /usr/local/sbin/update_cron
May 6 08:01:25 NAS root: Delaying execution of fix common problems scan for 10 minutes
May 6 08:01:25 NAS unassigned.devices: Mounting 'Auto Mount' Devices...
May 6 08:01:25 NAS emhttpd: Starting services...
May 6 08:01:25 NAS emhttpd: shcmd (53373): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 20
May 6 08:01:26 NAS kernel: BTRFS: device fsid 602bc878-88d8-498c-8b4c-97b025f5f5be devid 1 transid 385565 /dev/loop2
May 6 08:01:26 NAS kernel: BTRFS info (device loop2): disk space caching is enabled
May 6 08:01:26 NAS kernel: BTRFS info (device loop2): has skinny extents
May 6 08:01:26 NAS root: Resize '/var/lib/docker' of 'max'
May 6 08:01:26 NAS kernel: BTRFS info (device loop2): new size for /dev/loop2 is 21474836480
May 6 08:01:26 NAS emhttpd: shcmd (53375): /etc/rc.d/rc.docker start
May 6 08:01:26 NAS root: starting dockerd ...
May 6 08:01:28 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
May 6 08:01:28 NAS avahi-daemon[5651]: New relevant interface docker0.IPv4 for mDNS.
May 6 08:01:28 NAS avahi-daemon[5651]: Registering new address record for 172.17.0.1 on docker0.IPv4.
May 6 08:01:28 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
May 6 08:01:28 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface br-ec553ef33372.IPv4 with address 172.20.0.1.
May 6 08:01:28 NAS avahi-daemon[5651]: New relevant interface br-ec553ef33372.IPv4 for mDNS.
May 6 08:01:28 NAS avahi-daemon[5651]: Registering new address record for 172.20.0.1 on br-ec553ef33372.IPv4.
May 6 08:01:28 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): br-ec553ef33372: link is not ready
May 6 08:01:35 NAS root: Starting docker_load
May 6 08:01:35 NAS emhttpd: shcmd (53389): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
May 6 08:01:35 NAS kernel: BTRFS: device fsid 819112b8-8f84-4fa7-be03-8bd4c62b2ac9 devid 1 transid 180 /dev/loop3
May 6 08:01:35 NAS kernel: BTRFS info (device loop3): disk space caching is enabled
May 6 08:01:35 NAS kernel: BTRFS info (device loop3): has skinny extents
May 6 08:01:35 NAS root: Resize '/etc/libvirt' of 'max'
May 6 08:01:35 NAS kernel: BTRFS info (device loop3): new size for /dev/loop3 is 1073741824
May 6 08:01:35 NAS emhttpd: shcmd (53391): /etc/rc.d/rc.libvirt start
May 6 08:01:35 NAS root: Starting virtlockd...
May 6 08:01:35 NAS root: Starting virtlogd...
May 6 08:01:35 NAS useradd[24273]: new user: name=tss, UID=59, GID=59, home=/, shell=/bin/false
May 6 08:01:35 NAS root: Starting libvirtd...
May 6 08:01:35 NAS kernel: tun: Universal TUN/TAP device driver, 1.6
May 6 08:01:35 NAS kernel: mdcmd (46): check correct
May 6 08:01:35 NAS kernel: md: recovery thread: check P ...
May 6 08:01:35 NAS kernel: md: using 1536k window, over a total of 7814026532 blocks.
May 6 08:01:35 NAS kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
May 6 08:01:35 NAS kernel: pcieport 0000:00:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 08:01:35 NAS kernel: pcieport 0000:00:03.1: device [1022:1453] error status/mask=00001000/00006000
May 6 08:01:35 NAS kernel: pcieport 0000:00:03.1: [12] Replay Timer Timeout 
May 6 08:01:35 NAS kernel: docker0: port 1(vethf00b17a) entered blocking state
May 6 08:01:35 NAS kernel: docker0: port 1(vethf00b17a) entered disabled state
May 6 08:01:35 NAS kernel: device vethf00b17a entered promiscuous mode
May 6 08:01:35 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): vethf00b17a: link is not ready
May 6 08:01:35 NAS kernel: docker0: port 1(vethf00b17a) entered blocking state
May 6 08:01:35 NAS kernel: docker0: port 1(vethf00b17a) entered forwarding state
May 6 08:01:35 NAS kernel: docker0: port 1(vethf00b17a) entered disabled state
May 6 08:01:36 NAS kernel: virbr0: port 1(virbr0-nic) entered blocking state
May 6 08:01:36 NAS kernel: virbr0: port 1(virbr0-nic) entered disabled state
May 6 08:01:36 NAS kernel: device virbr0-nic entered promiscuous mode
May 6 08:01:36 NAS dhcpcd[1823]: virbr0: new hardware address: c6:a9:79:3c:b7:13
May 6 08:01:36 NAS dhcpcd[1823]: virbr0: new hardware address: 52:54:00:63:0d:99
May 6 08:01:36 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
May 6 08:01:36 NAS avahi-daemon[5651]: New relevant interface virbr0.IPv4 for mDNS.
May 6 08:01:36 NAS avahi-daemon[5651]: Registering new address record for 192.168.122.1 on virbr0.IPv4.
May 6 08:01:36 NAS kernel: virbr0: port 1(virbr0-nic) entered blocking state
May 6 08:01:36 NAS kernel: virbr0: port 1(virbr0-nic) entered listening state
May 6 08:01:36 NAS dnsmasq[24525]: started, version 2.79 cachesize 150
May 6 08:01:36 NAS dnsmasq[24525]: compile time options: IPv6 GNU-getopt no-DBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
May 6 08:01:36 NAS dnsmasq-dhcp[24525]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
May 6 08:01:36 NAS dnsmasq-dhcp[24525]: DHCP, sockets bound exclusively to interface virbr0
May 6 08:01:36 NAS dnsmasq[24525]: reading /etc/resolv.conf
May 6 08:01:36 NAS dnsmasq[24525]: using nameserver 192.168.0.1#53
May 6 08:01:36 NAS dnsmasq[24525]: read /etc/hosts - 2 addresses
May 6 08:01:36 NAS dnsmasq[24525]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
May 6 08:01:36 NAS dnsmasq-dhcp[24525]: read /var/lib/libvirt/dnsmasq/default.hostsfile
May 6 08:01:36 NAS kernel: virbr0: port 1(virbr0-nic) entered disabled state
May 6 08:01:36 NAS kernel: eth0: renamed from veth2ff11e1
May 6 08:01:36 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf00b17a: link becomes ready
May 6 08:01:36 NAS kernel: docker0: port 1(vethf00b17a) entered blocking state
May 6 08:01:36 NAS kernel: docker0: port 1(vethf00b17a) entered forwarding state
May 6 08:01:36 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
May 6 08:01:36 NAS rc.docker: binhex-delugevpn: started succesfully!
May 6 08:01:36 NAS kernel: docker0: port 2(vetha252019) entered blocking state
May 6 08:01:36 NAS kernel: docker0: port 2(vetha252019) entered disabled state
May 6 08:01:36 NAS kernel: device vetha252019 entered promiscuous mode
May 6 08:01:36 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): vetha252019: link is not ready
May 6 08:01:36 NAS kernel: docker0: port 2(vetha252019) entered blocking state
May 6 08:01:36 NAS kernel: docker0: port 2(vetha252019) entered forwarding state
May 6 08:01:36 NAS kernel: docker0: port 2(vetha252019) entered disabled state
May 6 08:01:37 NAS kernel: eth0: renamed from veth98f3003
May 6 08:01:37 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha252019: link becomes ready
May 6 08:01:37 NAS kernel: docker0: port 2(vetha252019) entered blocking state
May 6 08:01:37 NAS kernel: docker0: port 2(vetha252019) entered forwarding state
May 6 08:01:37 NAS rc.docker: binhex-jackett: started succesfully!
May 6 08:01:37 NAS unassigned.devices: Mounting 'Auto Mount' Remote Shares...
May 6 08:01:38 NAS kernel: docker0: port 3(veth30ea06c) entered blocking state
May 6 08:01:38 NAS kernel: docker0: port 3(veth30ea06c) entered disabled state
May 6 08:01:38 NAS kernel: device veth30ea06c entered promiscuous mode
May 6 08:01:38 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): veth30ea06c: link is not ready
May 6 08:01:38 NAS kernel: docker0: port 3(veth30ea06c) entered blocking state
May 6 08:01:38 NAS kernel: docker0: port 3(veth30ea06c) entered forwarding state
May 6 08:01:38 NAS kernel: docker0: port 3(veth30ea06c) entered disabled state
May 6 08:01:38 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface vethf00b17a.IPv6 with address fe80::2467:31ff:febe:9442.
May 6 08:01:38 NAS avahi-daemon[5651]: New relevant interface vethf00b17a.IPv6 for mDNS.
May 6 08:01:38 NAS avahi-daemon[5651]: Registering new address record for fe80::2467:31ff:febe:9442 on vethf00b17a.*.
May 6 08:01:38 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:2dff:fe8a:5b87.
May 6 08:01:38 NAS avahi-daemon[5651]: New relevant interface docker0.IPv6 for mDNS.
May 6 08:01:38 NAS avahi-daemon[5651]: Registering new address record for fe80::42:2dff:fe8a:5b87 on docker0.*.
May 6 08:01:38 NAS kernel: eth0: renamed from veth7a752de
May 6 08:01:38 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth30ea06c: link becomes ready
May 6 08:01:38 NAS kernel: docker0: port 3(veth30ea06c) entered blocking state
May 6 08:01:38 NAS kernel: docker0: port 3(veth30ea06c) entered forwarding state
May 6 08:01:39 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface vetha252019.IPv6 with address fe80::904b:d5ff:fe1d:8f0f.
May 6 08:01:39 NAS avahi-daemon[5651]: New relevant interface vetha252019.IPv6 for mDNS.
May 6 08:01:39 NAS avahi-daemon[5651]: Registering new address record for fe80::904b:d5ff:fe1d:8f0f on vetha252019.*.
May 6 08:01:39 NAS rc.docker: binhex-krusader: started succesfully!
May 6 08:01:39 NAS kernel: docker0: port 4(vethb635d6c) entered blocking state
May 6 08:01:39 NAS kernel: docker0: port 4(vethb635d6c) entered disabled state
May 6 08:01:39 NAS kernel: device vethb635d6c entered promiscuous mode
May 6 08:01:39 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): vethb635d6c: link is not ready
May 6 08:01:39 NAS kernel: docker0: port 4(vethb635d6c) entered blocking state
May 6 08:01:39 NAS kernel: docker0: port 4(vethb635d6c) entered forwarding state
May 6 08:01:39 NAS kernel: docker0: port 4(vethb635d6c) entered disabled state
May 6 08:01:39 NAS kernel: eth0: renamed from veth9e5048b
May 6 08:01:39 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb635d6c: link becomes ready
May 6 08:01:39 NAS kernel: docker0: port 4(vethb635d6c) entered blocking state
May 6 08:01:39 NAS kernel: docker0: port 4(vethb635d6c) entered forwarding state
May 6 08:01:40 NAS rc.docker: binhex-lidarr: started succesfully!
May 6 08:01:40 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface veth30ea06c.IPv6 with address fe80::b814:72ff:fe42:7321.
May 6 08:01:40 NAS avahi-daemon[5651]: New relevant interface veth30ea06c.IPv6 for mDNS.
May 6 08:01:40 NAS avahi-daemon[5651]: Registering new address record for fe80::b814:72ff:fe42:7321 on veth30ea06c.*.
May 6 08:01:41 NAS rc.docker: binhex-plexpass: started succesfully!
May 6 08:01:41 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface vethb635d6c.IPv6 with address fe80::2c23:57ff:feab:e7f9.
May 6 08:01:41 NAS avahi-daemon[5651]: New relevant interface vethb635d6c.IPv6 for mDNS.
May 6 08:01:41 NAS avahi-daemon[5651]: Registering new address record for fe80::2c23:57ff:feab:e7f9 on vethb635d6c.*.
May 6 08:01:41 NAS kernel: docker0: port 5(vetha0b089d) entered blocking state
May 6 08:01:41 NAS kernel: docker0: port 5(vetha0b089d) entered disabled state
May 6 08:01:41 NAS kernel: device vetha0b089d entered promiscuous mode
May 6 08:01:41 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): vetha0b089d: link is not ready
May 6 08:01:41 NAS kernel: docker0: port 5(vetha0b089d) entered blocking state
May 6 08:01:41 NAS kernel: docker0: port 5(vetha0b089d) entered forwarding state
May 6 08:01:41 NAS kernel: docker0: port 5(vetha0b089d) entered disabled state
May 6 08:01:42 NAS kernel: eth0: renamed from veth9bc7f5f
May 6 08:01:42 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha0b089d: link becomes ready
May 6 08:01:42 NAS kernel: docker0: port 5(vetha0b089d) entered blocking state
May 6 08:01:42 NAS kernel: docker0: port 5(vetha0b089d) entered forwarding state
May 6 08:01:42 NAS rc.docker: binhex-radarr: started succesfully!
May 6 08:01:42 NAS kernel: docker0: port 6(veth7469e46) entered blocking state
May 6 08:01:42 NAS kernel: docker0: port 6(veth7469e46) entered disabled state
May 6 08:01:42 NAS kernel: device veth7469e46 entered promiscuous mode
May 6 08:01:42 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): veth7469e46: link is not ready
May 6 08:01:42 NAS kernel: docker0: port 6(veth7469e46) entered blocking state
May 6 08:01:42 NAS kernel: docker0: port 6(veth7469e46) entered forwarding state
May 6 08:01:42 NAS kernel: docker0: port 6(veth7469e46) entered disabled state
May 6 08:01:43 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface vetha0b089d.IPv6 with address fe80::dc5d:ceff:fe8b:ad4.
May 6 08:01:43 NAS avahi-daemon[5651]: New relevant interface vetha0b089d.IPv6 for mDNS.
May 6 08:01:43 NAS avahi-daemon[5651]: Registering new address record for fe80::dc5d:ceff:fe8b:ad4 on vetha0b089d.*.
May 6 08:01:44 NAS kernel: eth0: renamed from vetha750f5f
May 6 08:01:44 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7469e46: link becomes ready
May 6 08:01:44 NAS kernel: docker0: port 6(veth7469e46) entered blocking state
May 6 08:01:44 NAS kernel: docker0: port 6(veth7469e46) entered forwarding state
May 6 08:01:44 NAS rc.docker: binhex-sabnzbd: started succesfully!
May 6 08:01:44 NAS kernel: docker0: port 7(veth9c6cb91) entered blocking state
May 6 08:01:44 NAS kernel: docker0: port 7(veth9c6cb91) entered disabled state
May 6 08:01:44 NAS kernel: device veth9c6cb91 entered promiscuous mode
May 6 08:01:44 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): veth9c6cb91: link is not ready
May 6 08:01:44 NAS kernel: docker0: port 7(veth9c6cb91) entered blocking state
May 6 08:01:44 NAS kernel: docker0: port 7(veth9c6cb91) entered forwarding state
May 6 08:01:45 NAS kernel: docker0: port 7(veth9c6cb91) entered disabled state
May 6 08:01:45 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface veth7469e46.IPv6 with address fe80::2ce7:19ff:fec6:2ed5.
May 6 08:01:45 NAS avahi-daemon[5651]: New relevant interface veth7469e46.IPv6 for mDNS.
May 6 08:01:45 NAS avahi-daemon[5651]: Registering new address record for fe80::2ce7:19ff:fec6:2ed5 on veth7469e46.*.
May 6 08:01:45 NAS kernel: eth0: renamed from vethebaaea4
May 6 08:01:45 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth9c6cb91: link becomes ready
May 6 08:01:45 NAS kernel: docker0: port 7(veth9c6cb91) entered blocking state
May 6 08:01:45 NAS kernel: docker0: port 7(veth9c6cb91) entered forwarding state
May 6 08:01:45 NAS rc.docker: binhex-sonarr: started succesfully!
May 6 08:01:46 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface veth9c6cb91.IPv6 with address fe80::ccc1:38ff:feb3:e7a7.
May 6 08:01:46 NAS avahi-daemon[5651]: New relevant interface veth9c6cb91.IPv6 for mDNS.
May 6 08:01:46 NAS avahi-daemon[5651]: Registering new address record for fe80::ccc1:38ff:feb3:e7a7 on veth9c6cb91.*.
May 6 08:01:47 NAS rc.docker: duckdns: started succesfully!
May 6 08:01:47 NAS kernel: docker0: port 8(veth6817a29) entered blocking state
May 6 08:01:47 NAS kernel: docker0: port 8(veth6817a29) entered disabled state
May 6 08:01:47 NAS kernel: device veth6817a29 entered promiscuous mode
May 6 08:01:47 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): veth6817a29: link is not ready
May 6 08:01:47 NAS kernel: docker0: port 8(veth6817a29) entered blocking state
May 6 08:01:47 NAS kernel: docker0: port 8(veth6817a29) entered forwarding state
May 6 08:01:47 NAS kernel: docker0: port 8(veth6817a29) entered disabled state
May 6 08:01:48 NAS kernel: eth0: renamed from veth77b2e1c
May 6 08:01:48 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6817a29: link becomes ready
May 6 08:01:48 NAS kernel: docker0: port 8(veth6817a29) entered blocking state
May 6 08:01:48 NAS kernel: docker0: port 8(veth6817a29) entered forwarding state
May 6 08:01:49 NAS rc.docker: headphones: started succesfully!
May 6 08:01:49 NAS kernel: br-ec553ef33372: port 1(veth032157e) entered blocking state
May 6 08:01:49 NAS kernel: br-ec553ef33372: port 1(veth032157e) entered disabled state
May 6 08:01:49 NAS kernel: device veth032157e entered promiscuous mode
May 6 08:01:49 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): veth032157e: link is not ready
May 6 08:01:49 NAS kernel: br-ec553ef33372: port 1(veth032157e) entered blocking state
May 6 08:01:49 NAS kernel: br-ec553ef33372: port 1(veth032157e) entered forwarding state
May 6 08:01:49 NAS kernel: br-ec553ef33372: port 1(veth032157e) entered disabled state
May 6 08:01:50 NAS kernel: eth0: renamed from vethf86c2f1
May 6 08:01:50 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth032157e: link becomes ready
May 6 08:01:50 NAS kernel: br-ec553ef33372: port 1(veth032157e) entered blocking state
May 6 08:01:50 NAS kernel: br-ec553ef33372: port 1(veth032157e) entered forwarding state
May 6 08:01:50 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): br-ec553ef33372: link becomes ready
May 6 08:01:50 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface veth6817a29.IPv6 with address fe80::dcd4:eaff:fe10:946f.
May 6 08:01:50 NAS avahi-daemon[5651]: New relevant interface veth6817a29.IPv6 for mDNS.
May 6 08:01:50 NAS avahi-daemon[5651]: Registering new address record for fe80::dcd4:eaff:fe10:946f on veth6817a29.*.
May 6 08:01:50 NAS rc.docker: letsencrypt: started succesfully!
May 6 08:01:51 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface br-ec553ef33372.IPv6 with address fe80::42:2bff:fe7f:7c8b.
May 6 08:01:51 NAS avahi-daemon[5651]: New relevant interface br-ec553ef33372.IPv6 for mDNS.
May 6 08:01:51 NAS avahi-daemon[5651]: Registering new address record for fe80::42:2bff:fe7f:7c8b on br-ec553ef33372.*.
May 6 08:01:52 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface veth032157e.IPv6 with address fe80::82e:68ff:fed3:6a4a.
May 6 08:01:52 NAS avahi-daemon[5651]: New relevant interface veth032157e.IPv6 for mDNS.
May 6 08:01:52 NAS avahi-daemon[5651]: Registering new address record for fe80::82e:68ff:fed3:6a4a on veth032157e.*.
May 6 08:01:52 NAS rc.docker: Netdata: started succesfully!
May 6 08:01:52 NAS kernel: br-ec553ef33372: port 2(veth0d3057e) entered blocking state
May 6 08:01:52 NAS kernel: br-ec553ef33372: port 2(veth0d3057e) entered disabled state
May 6 08:01:52 NAS kernel: device veth0d3057e entered promiscuous mode
May 6 08:01:52 NAS kernel: IPv6: ADDRCONF(NETDEV_UP): veth0d3057e: link is not ready
May 6 08:01:52 NAS kernel: br-ec553ef33372: port 2(veth0d3057e) entered blocking state
May 6 08:01:52 NAS kernel: br-ec553ef33372: port 2(veth0d3057e) entered forwarding state
May 6 08:01:52 NAS kernel: br-ec553ef33372: port 2(veth0d3057e) entered disabled state
May 6 08:01:53 NAS kernel: eth0: renamed from veth1beadb1
May 6 08:01:53 NAS kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0d3057e: link becomes ready
May 6 08:01:53 NAS kernel: br-ec553ef33372: port 2(veth0d3057e) entered blocking state
May 6 08:01:53 NAS kernel: br-ec553ef33372: port 2(veth0d3057e) entered forwarding state
May 6 08:01:53 NAS rc.docker: ombi: started succesfully!
May 6 08:01:55 NAS rc.docker: openvpn-as: started succesfully!
May 6 08:01:55 NAS avahi-daemon[5651]: Joining mDNS multicast group on interface veth0d3057e.IPv6 with address fe80::a460:4dff:fe16:a6d7.
May 6 08:01:55 NAS avahi-daemon[5651]: New relevant interface veth0d3057e.IPv6 for mDNS.
May 6 08:01:55 NAS avahi-daemon[5651]: Registering new address record for fe80::a460:4dff:fe16:a6d7 on veth0d3057e.*.
May 6 08:02:35 NAS root: Fix Common Problems Version 2019.03.26
May 6 08:02:49 NAS root: Fix Common Problems: Other Warning: CPU possibly will not throttle down frequency at idle

 

 

[Attachment: System Log.html]


A recent log that may be relevant:

 

Quote

May 6 19:31:37 NAS root: Reloading Nginx configuration...
May 6 19:31:40 NAS emhttpd: shcmd (361): /usr/bin/php -f /usr/local/emhttp/webGui/include/UpdateDNS.php
May 6 19:31:40 NAS emhttpd: shcmd (361): exit status: 1
May 6 19:31:40 NAS nginx: 2019/05/06 19:31:40 [alert] 4761#4761: *5002 open socket #17 left in connection 12
May 6 19:31:40 NAS nginx: 2019/05/06 19:31:40 [alert] 4761#4761: aborting
May 6 19:31:41 NAS nginx: 2019/05/06 19:31:41 [error] 32452#32452: *5045 user "root": password mismatch, client: 192.168.0.111, server: , request: "GET /Users/UserEdit?name=root HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/update.htm"
May 6 19:31:41 NAS nginx: 2019/05/06 19:31:41 [error] 32452#32452: *5045 user "root": password mismatch, client: 192.168.0.111, server: , request: "GET /Users/UserEdit?name=root HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/update.htm"
May 6 19:32:09 NAS nginx: 2019/05/06 19:32:09 [error] 32452#32452: *5107 user "root": password mismatch, client: 192.168.0.111, server: , request: "GET /Users/UserEdit?name=root HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/update.htm"
May 6 19:32:09 NAS nginx: 2019/05/06 19:32:09 [error] 32452#32452: *5108 user "root": password mismatch, client: 192.168.0.111, server: , request: "POST /webGui/include/Notify.php HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/Users/UserEdit?name=root"
May 6 19:32:09 NAS nginx: 2019/05/06 19:32:09 [error] 32452#32452: *5109 user "root": password mismatch, client: 192.168.0.111, server: , request: "POST /plugins/dynamix.system.temp/include/SystemTemp.php HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/Users/UserEdit?name=root"
May 6 19:32:09 NAS nginx: 2019/05/06 19:32:09 [error] 32452#32452: *5106 user "root": password mismatch, client: 192.168.0.111, server: , request: "GET /sub/var?last_event_id=1557136897%3A0 HTTP/1.1", host: "192.168.0.53"
May 6 19:32:11 NAS nginx: 2019/05/06 19:32:11 [error] 32452#32452: *5107 user "root": password mismatch, client: 192.168.0.111, server: , request: "GET /Users/UserEdit?name=root HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/update.htm"
May 6 19:32:13 NAS nginx: 2019/05/06 19:32:13 [error] 32452#32452: *5108 user "root": password mismatch, client: 192.168.0.111, server: , request: "POST /webGui/include/Notify.php HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/Users/UserEdit?name=root"
May 6 19:32:15 NAS nginx: 2019/05/06 19:32:15 [error] 32452#32452: *5109 user "root": password mismatch, client: 192.168.0.111, server: , request: "POST /plugins/dynamix.system.temp/include/SystemTemp.php HTTP/1.1", host: "192.168.0.53", referrer: "http://192.168.0.53/Users/UserEdit?name=root"
May 6 20:49:28 NAS kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
May 6 20:49:28 NAS kernel: pcieport 0000:00:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 20:49:28 NAS kernel: pcieport 0000:00:03.1: device [1022:1453] error status/mask=00000040/00006000
May 6 20:49:28 NAS kernel: pcieport 0000:00:03.1: [ 6] Bad TLP 
May 6 21:19:05 NAS kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
May 6 21:19:05 NAS kernel: pcieport 0000:00:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 21:19:05 NAS kernel: pcieport 0000:00:03.1: device [1022:1453] error status/mask=00000040/00006000
May 6 21:19:05 NAS kernel: pcieport 0000:00:03.1: [ 6] Bad TLP 
May 6 21:23:02 NAS kernel: pcieport 0000:00:01.3: AER: Corrected error received: 0000:00:00.0
May 6 21:23:02 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 21:23:02 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00000040/00006000
May 6 21:23:02 NAS kernel: pcieport 0000:00:01.3: [ 6] Bad TLP 
May 6 21:23:13 NAS kernel: pcieport 0000:00:01.3: AER: Corrected error received: 0000:00:00.0
May 6 21:23:13 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 21:23:13 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00000040/00006000
May 6 21:23:13 NAS kernel: pcieport 0000:00:01.3: [ 6] Bad TLP 
May 6 21:24:37 NAS kernel: pcieport 0000:00:01.3: AER: Corrected error received: 0000:00:00.0
May 6 21:24:37 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 21:24:37 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00000080/00006000
May 6 21:24:37 NAS kernel: pcieport 0000:00:01.3: [ 7] Bad DLLP 
May 6 21:27:40 NAS kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
May 6 21:27:40 NAS kernel: pcieport 0000:00:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 21:27:40 NAS kernel: pcieport 0000:00:03.1: device [1022:1453] error status/mask=00000040/00006000
May 6 21:27:40 NAS kernel: pcieport 0000:00:03.1: [ 6] Bad TLP 
May 6 21:27:57 NAS kernel: pcieport 0000:00:03.1: AER: Corrected error received: 0000:00:00.0
May 6 21:27:57 NAS kernel: pcieport 0000:00:03.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 21:27:57 NAS kernel: pcieport 0000:00:03.1: device [1022:1453] error status/mask=00000040/00006000
May 6 21:27:57 NAS kernel: pcieport 0000:00:03.1: [ 6] Bad TLP 
May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: AER: Multiple Corrected error received: 0000:00:00.0
May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00001100/00006000
May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: [ 8] RELAY_NUM Rollover 
May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: [12] Replay Timer Timeout 
May 6 21:30:50 NAS kernel: pcieport 0000:03:00.2: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
May 6 21:30:50 NAS kernel: pcieport 0000:03:00.2: device [1022:43b2] error status/mask=00003100/00002000
May 6 21:30:50 NAS kernel: pcieport 0000:03:00.2: [ 8] RELAY_NUM Rollover 
May 6 21:30:50 NAS kernel: pcieport 0000:03:00.2: [12] Replay Timer Timeout 
May 6 21:31:47 NAS kernel: pcieport 0000:00:01.3: AER: Corrected error received: 0000:00:00.0
May 6 21:31:47 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
May 6 21:31:47 NAS kernel: pcieport 0000:00:01.3: device [1022:1453] error status/mask=00000040/00006000
May 6 21:31:47 NAS kernel: pcieport 0000:00:01.3: [ 6] Bad TLP 

 


Please update your BIOS. 

May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: AER: Multiple Corrected error received: 0000:00:00.0
May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)

This is an error that often showed up in older BIOS revisions.
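
If the corrected AER messages keep flooding the log even after a BIOS update, a common workaround from the Ryzen threads is to mask them with kernel parameters. A minimal sketch, assuming Unraid's default syslinux layout; rcu_nocbs=0-11 matches a 12-thread CPU such as the Ryzen 5 1600 and must be adjusted for other chips:

# /boot/syslinux/syslinux.cfg (default boot entry)
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot pcie_aspm=off pci=noaer rcu_nocbs=0-11

Bear in mind that pci=noaer only silences the reporting; it does not fix whatever is generating the corrected errors.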

On 5/6/2019 at 10:53 PM, bastl said:

Please update your BIOS. 


May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: AER: Multiple Corrected error received: 0000:00:00.0
May 6 21:30:50 NAS kernel: pcieport 0000:00:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)

This is an error that often showed up in older BIOS revisions.

Thank you kindly for the advice regarding the BIOS version; I've gone ahead and updated it.

 

Before I updated, it would take 1-2 hard resets for it to POST and load UNRAID (it would just be a black screen with no POST at all until I held the power button down). I've noticed that it now takes 6-7 hard resets to get it to POST and boot into UNRAID. Maybe unrelated...

 

Anyhow, I managed to capture this from the server, which may give more insight into what's causing these issues:

 

[Attached screenshot of the captured error: RFior3N.jpg]

 

Any help much appreciated!


You shouldn't need hard resets at all. Re-seat your RAM (trying alternate slots), and if you have thermal paste on hand, re-paste your CPU as well. If that isn't the issue, clear your BIOS. While you have the case open, give it a gentle shake and listen for any loose screws. Do you have an overclock on your system? If so, don't re-apply it after clearing the BIOS and see if you're stable then. If you're stable without the overclock, the overclock was giving the CPU either too much power or not enough.

 

After you clear the BIOS, it will take 2-3 minutes to POST; that's normal, so don't power cycle.

On 5/9/2019 at 11:38 PM, nicksphone said:

You shouldn't need hard resets at all. Re-seat your RAM (trying alternate slots), and if you have thermal paste on hand, re-paste your CPU as well. If that isn't the issue, clear your BIOS. While you have the case open, give it a gentle shake and listen for any loose screws. Do you have an overclock on your system? If so, don't re-apply it after clearing the BIOS and see if you're stable then. If you're stable without the overclock, the overclock was giving the CPU either too much power or not enough.

 

After you clear the BIOS, it will take 2-3 minutes to POST; that's normal, so don't power cycle.

I actioned all of these suggestions and even installed a heat sink on my SSD cache drive. Thank you for the advice.

 

It booted first time! Then it was stable for 30 or so hours (more than double my usual stability window) before the error occurred again. This morning I checked in on my server through OpenVPN with success; two hours later it won't connect and my Plex is down.

 

I thought the problem was solved, but it's obviously back. I'm really keen to build a list of other potential fixes, and I'd be grateful for any suggestions.

 

Should I try re-enabling C6 states and setting PSU Idle Mode back to Auto? These are currently set the other way to work around known Ryzen issues, but from my understanding those issues may since have been addressed in the OS and BIOS? Just an idea...


Have you run a memtest yet? It might be unrelated, but when my system would freeze like you're describing, I ended up with a bad RAM chip. To test, I ran memtest from the Unraid boot menu for several hours. Once I fixed my RAM issue, my uptime (knock on wood) was stable. I just rebooted from 70-ish days of uptime to upgrade to 6.7.
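
Since the RAM in question is ECC, corrected-error counters can also be read while the server is still up, complementing an offline memtest. A minimal sketch, assuming the kernel's EDAC driver binds to this AMD memory controller:

modprobe amd64_edac_mod 2>/dev/null                 # load the AMD EDAC driver if needed
grep . /sys/devices/system/edac/mc/mc*/ce_count     # corrected errors per memory controller
grep . /sys/devices/system/edac/mc/mc*/ue_count     # uncorrected errors per memory controller

A steadily climbing ce_count points at a failing DIMM even when ECC is papering over it.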

9 hours ago, guy2545 said:

Have you run a memtest yet? It might be unrelated, but when my system would freeze like you're describing, I ended up with a bad RAM chip. To test, I ran memtest from the Unraid boot menu for several hours. Once I fixed my RAM issue, my uptime (knock on wood) was stable. I just rebooted from 70-ish days of uptime to upgrade to 6.7.

I haven't yet; that's a good idea. I've probably had too much faith in my RAM simply because it's ECC. It could be the culprit. I'll let you know how it goes in a few days. Thanks for the advice.


You want C-states disabled; they don't play well with Unraid. If you're running any VMs, say Windows 10, turn all power-saving and disk spin-down features off.
What other apps are you running? I've noticed a few that don't play well with Ryzen, at least for me.

With a small form factor case, do you have enough airflow? Can you feel air passing over your drives?

Is your IHS fully covered in paste? As long as it's non-conductive you can't put too much on (the excess just squishes out), but you can put on too little.

30 hours of uptime sounds like either a scheduled task or thermals to me; if it locks up at the same time, or thereabouts, it's a task. (See the cron and temperature checks sketched below.)

Unraid 6.7 just came out; upgrade to that. The new GUI is nice and there are a lot of updated backend files and fixes that might help.
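
To separate the scheduled-task theory from the thermal one, it's worth dumping every cron entry and checking temperatures near the usual lock-up time. A minimal sketch using stock tools (sensors assumes lm-sensors has drivers configured, e.g. via the Dynamix System Temp plugin):

cat /etc/cron.d/root          # Unraid's merged cron jobs (mover, parity check, plugins)
crontab -l                    # anything added for root directly
sensors                       # current CPU and board temperatures

If the hangs line up with an entry here (the 03:40 mover run in the log above is one candidate), that task is worth disabling as a test.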

 

  • 2 months later...
On 5/12/2019 at 12:09 PM, guy2545 said:

Have you run a memtest yet? It might be unrelated, but when my system would freeze like you're describing, I ended up with a bad RAM chip. To test, I ran memtest from the Unraid boot menu for several hours. Once I fixed my RAM issue, my uptime (knock on wood) was stable. I just rebooted from 70-ish days of uptime to upgrade to 6.7.

 

On 5/13/2019 at 6:19 PM, nicksphone said:

You want C-states disabled; they don't play well with Unraid. If you're running any VMs, say Windows 10, turn all power-saving and disk spin-down features off.
What other apps are you running? I've noticed a few that don't play well with Ryzen, at least for me.

With a small form factor case, do you have enough airflow? Can you feel air passing over your drives?

Is your IHS fully covered in paste? As long as it's non-conductive you can't put too much on (the excess just squishes out), but you can put on too little.

30 hours of uptime sounds like either a scheduled task or thermals to me; if it locks up at the same time, or thereabouts, it's a task.

Unraid 6.7 just came out; upgrade to that. The new GUI is nice and there are a lot of updated backend files and fixes that might help.

 

Sorry about the delay in responding to your suggested troubleshooting steps. 

 

I ended up embarking on a very lengthy RMA process with my ECC Kingston RAM. I ran a memtest and found a lot of errors, but there is no servicing in Australia, so it had to go to Taiwan.

 

So, months later, I have new RAM in the machine, and the same issue is occurring. It was initially stable for 40 or so hours, and now the recurring issue is much the same. So the problem is still not solved!

On 7/26/2019 at 4:16 PM, johnnie.black said:

There are known issues with Ryzen and Linux, most workarounds are discussed here.

Thanks for the heads up. Unfortunately, or fortunately, I'm super familiar with those threads and have them all bookmarked. 

 

I've tried everything suggested. The only thing I haven't tested is my AMD Ryzen 5 1600 for segfaults. I did try installing Debian on a USB stick and running the Oxalin Ryzen test following those instructions, but I kept getting an error on the very last step about RAM already being mounted somewhere, and the test wouldn't run. The year/month date stamp on the CPU is within the production period that had significant faults, though.
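
For what it's worth, that last-step error sounds like the test's ramdisk was still mounted from an earlier attempt. A sketch of how one might clear it before re-running; the mount point below is a guess, so check the mount output first, and kill-ryzen.sh is assumed to be the test's usual entry script:

mount | grep -i ramdisk                 # find any leftover ramdisk/tmpfs mount
umount /mnt/ramdisk                     # hypothetical path: substitute the one shown above
./kill-ryzen.sh | tee kill-ryzen.log    # re-run the segfault test, keeping a log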

 

Beyond that, I'm not sure at this stage. I do have two errors that may have something to do with it, which I've attached.

 

The really odd thing about all of this is that this system and OS were stable for so long without a single hitch. I really don't understand what the issue could be. No hardware changes, only upgrades to the UNRAID version...

 

 

[Attachment: unraid error1.PNG]

[Attachment: unraid error2.PNG]
