Saldash

Members
  • Posts: 53
  • Joined
  • Last visited

Posts posted by Saldash

  1. Hi All,

     

    I wrote some time ago that I was experiencing what appeared to be somewhat random system halts, where the only way to resolve it was a full power-off at the plug, a wait, and a reboot.

    There was no apparent cause for this, and for a while it stopped happening, so I let the issue go.

     

    It has recently started flaring up again with a vengeance, so much so that I now have a smart plug attached to the machine so I can remotely cut power and reboot.

     

    I still don't know for sure why this is happening and I'm not sure where to start.

    I've tried looking in the syslog, but it only appears to start from the time the system came back online after its last reboot, which is no good.

     

    I think there may be a correlation between the system halts and disk activity - a parity check is scheduled for 2AM every Sunday morning, and that is roughly when the machine stopped responding.

     

    [Screenshot: smart plug power monitoring graph]

     

    Looking at the power monitoring data my smart plug provides, you can see the halt shortly after 2AM, when the parity check starts, then a long stretch of idling at ~80 watts before I cut power and rebooted earlier this evening.

     

    My machine typically idles around the 50-watt mark as I'm the only user; it serves very light NAS duties but does handle downloading and local Plex media streaming.

     

    All my drives are reporting a healthy state and I've never seen the error count on them above zero.

    I've run self-tests on all my drives and they all report completed without error.

     

    I'm not sure what to do next with this other than bite the bullet and move to a Synology or QNAP box 😟

     

    EDIT:

    I turned on the Syslog Server and pointed Unraid back at itself to log events, then triggered a parity check.

    The check got to about 0.8% before the server crashed.

    It does not appear to have logged the issue though, as the crash occurred at around 00:34 - the excerpt below jumps straight from 00:29 to the post-reboot startup at 00:37.

     

    Quote

    Jan  9 00:26:20 Tabris  rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="18525" x-info="https://www.rsyslog.com"] start
    Jan  9 00:26:33 Tabris nginx: 2023/01/09 00:26:33 [error] 2829#2829: *50242 open() "/usr/local/emhttp/plugins/dynamix.file.manager/javascript/ace/mode-log.js" failed (2: No such file or directory) while sending to client, client: 192.168.11.14, server: , request: "GET /plugins/dynamix.file.manager/javascript/ace/mode-log.js HTTP/1.1", host: "192.168.11.100", referrer: "http://192.168.11.100/Shares/Browse?dir=/mnt/user/syslog-store"
    Jan  9 00:28:44 Tabris nginx: 2023/01/09 00:28:44 [error] 2829#2829: *51521 open() "/usr/local/emhttp/plugins/dynamix.file.manager/javascript/ace/mode-log.js" failed (2: No such file or directory) while sending to client, client: 192.168.11.14, server: , request: "GET /plugins/dynamix.file.manager/javascript/ace/mode-log.js HTTP/1.1", host: "192.168.11.100", referrer: "http://192.168.11.100/Shares/Browse?dir=/mnt/user/syslog-store"
    Jan  9 00:29:15 Tabris kernel: mdcmd (44): check correct
    Jan  9 00:29:15 Tabris kernel: md: recovery thread: check P ...
    Jan  9 00:29:17 Tabris  emhttpd: read SMART /dev/sdh
    Jan  9 00:29:17 Tabris  emhttpd: read SMART /dev/sde
    Jan  9 00:29:17 Tabris  emhttpd: read SMART /dev/sdb
    Jan  9 00:29:17 Tabris  emhttpd: read SMART /dev/sdf
    Jan  9 00:29:40 Tabris nginx: 2023/01/09 00:29:40 [error] 2829#2829: *52366 open() "/usr/local/emhttp/plugins/dynamix.file.manager/javascript/ace/mode-log.js" failed (2: No such file or directory) while sending to client, client: 192.168.11.14, server: , request: "GET /plugins/dynamix.file.manager/javascript/ace/mode-log.js HTTP/1.1", host: "192.168.11.100", referrer: "http://192.168.11.100/Shares/Browse?dir=/mnt/user/syslog-store"
    Jan  9 00:37:25 Tabris root: Delaying execution of fix common problems scan for 10 minutes
    Jan  9 00:37:25 Tabris unassigned.devices: Mounting 'Auto Mount' Devices...
    Jan  9 00:37:25 Tabris  emhttpd: Starting services...
    Jan  9 00:37:25 Tabris  emhttpd: shcmd (128): /etc/rc.d/rc.samba restart
    Jan  9 00:37:25 Tabris  wsdd2[2749]: 'Terminated' signal received.
    Jan  9 00:37:25 Tabris  nmbd[2739]: [2023/01/09 00:37:25.112438,  0] ../../source3/nmbd/nmbd.c:59(terminate)
    Jan  9 00:37:25 Tabris  nmbd[2739]:   Got SIGTERM: going down...
    Jan  9 00:37:25 Tabris  wsdd2[2749]: terminating.
    Jan  9 00:37:25 Tabris  winbindd[2754]: [2023/01/09 00:37:25.112685,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Jan  9 00:37:25 Tabris  winbindd[2754]:   Got sig[15] terminate (is_parent=0)
    Jan  9 00:37:25 Tabris  winbindd[2832]: [2023/01/09 00:37:25.113360,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Jan  9 00:37:25 Tabris  winbindd[2832]:   Got sig[15] terminate (is_parent=0)
    Jan  9 00:37:25 Tabris  winbindd[2752]: [2023/01/09 00:37:25.113564,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Jan  9 00:37:25 Tabris  winbindd[2752]:   Got sig[15] terminate (is_parent=1)
    Jan  9 00:37:26 Tabris  rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="3545" x-info="https://www.rsyslog.com"] start
    Jan  9 00:37:27 Tabris root: Starting Samba:  /usr/sbin/smbd -D
    Jan  9 00:37:27 Tabris  smbd[3600]: [2023/01/09 00:37:27.283810,  0] ../../source3/smbd/server.c:1741(main)
    Jan  9 00:37:27 Tabris  smbd[3600]:   smbd version 4.17.3 started.
    Jan  9 00:37:27 Tabris  smbd[3600]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Jan  9 00:37:27 Tabris root:                  /usr/sbin/nmbd -D
    Jan  9 00:37:27 Tabris  nmbd[3602]: [2023/01/09 00:37:27.302984,  0] ../../source3/nmbd/nmbd.c:901(main)
    Jan  9 00:37:27 Tabris  nmbd[3602]:   nmbd version 4.17.3 started.
    Jan  9 00:37:27 Tabris  nmbd[3602]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Jan  9 00:37:27 Tabris root:                  /usr/sbin/wsdd2 -d 
    Jan  9 00:37:27 Tabris  wsdd2[3616]: starting.
    Jan  9 00:37:27 Tabris root:                  /usr/sbin/winbindd -D
    Jan  9 00:37:27 Tabris  winbindd[3617]: [2023/01/09 00:37:27.388981,  0] ../../source3/winbindd/winbindd.c:1440(main)
    Jan  9 00:37:27 Tabris  winbindd[3617]:   winbindd version 4.17.3 started.
    Jan  9 00:37:27 Tabris  winbindd[3617]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Jan  9 00:37:27 Tabris  winbindd[3619]: [2023/01/09 00:37:27.392216,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
    Jan  9 00:37:27 Tabris  winbindd[3619]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Jan  9 00:37:27 Tabris  emhttpd: shcmd (132): /etc/rc.d/rc.avahidaemon restart
    Jan  9 00:37:27 Tabris root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
    Jan  9 00:37:27 Tabris  avahi-dnsconfd[2778]: read(): EOF
    Jan  9 00:37:27 Tabris root: Starting Avahi mDNS/DNS-SD Daemon:  /usr/sbin/avahi-daemon -D
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Successfully dropped root privileges.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: avahi-daemon 0.8 starting up.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Successfully called chroot().
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Successfully dropped remaining capabilities.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Loading service file /services/sftp-ssh.service.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Loading service file /services/smb.service.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Loading service file /services/ssh.service.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.11.100.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: New relevant interface br0.IPv4 for mDNS.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface lo.IPv6 with address ::1.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: New relevant interface lo.IPv6 for mDNS.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: New relevant interface lo.IPv4 for mDNS.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Network interface enumeration completed.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Registering new address record for 192.168.11.100 on br0.IPv4.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Registering new address record for ::1 on lo.*.
    Jan  9 00:37:27 Tabris  avahi-daemon[3636]: Registering new address record for 127.0.0.1 on lo.IPv4.
    Jan  9 00:37:27 Tabris  emhttpd: shcmd (133): /etc/rc.d/rc.avahidnsconfd restart
    Jan  9 00:37:27 Tabris root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
    Jan  9 00:37:27 Tabris root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
    Jan  9 00:37:27 Tabris  avahi-dnsconfd[3645]: Successfully connected to Avahi daemon.
    Jan  9 00:37:27 Tabris  emhttpd: shcmd (144): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 20
    Jan  9 00:37:28 Tabris kernel: loop2: detected capacity change from 0 to 41943040
    Jan  9 00:37:28 Tabris kernel: BTRFS: device fsid 656d119c-456a-4172-8c6b-bda3bd9887eb devid 1 transid 2981255 /dev/loop2 scanned by mount (3681)
    Jan  9 00:37:28 Tabris kernel: BTRFS info (device loop2): using free space tree
    Jan  9 00:37:28 Tabris kernel: BTRFS info (device loop2): has skinny extents
    Jan  9 00:37:28 Tabris kernel: BTRFS info (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 0, corrupt 145, gen 0
    Jan  9 00:37:28 Tabris kernel: BTRFS info (device loop2): enabling ssd optimizations
    Jan  9 00:37:28 Tabris root: Resize device id 1 (/dev/loop2) from 20.00GiB to max
    Jan  9 00:37:28 Tabris  emhttpd: shcmd (146): /etc/rc.d/rc.docker start
    Jan  9 00:37:28 Tabris root: starting dockerd ...
    Jan  9 00:37:28 Tabris  avahi-daemon[3636]: Server startup complete. Host name is Tabris.local. Local service cookie is 385468981.
    Jan  9 00:37:28 Tabris kernel: BTRFS warning (device loop2): csum failed root 5 ino 77454 off 0 csum 0x9a7f5ae4 expected csum 0xa7f388c1 mirror 1
    Jan  9 00:37:28 Tabris kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 0, corrupt 146, gen 0
    Jan  9 00:37:28 Tabris kernel: BTRFS warning (device loop2): csum failed root 5 ino 77454 off 4096 csum 0x16af2102 expected csum 0x5b0e635e mirror 1
    Jan  9 00:37:28 Tabris kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 0, corrupt 147, gen 0
    Jan  9 00:37:28 Tabris kernel: BTRFS warning (device loop2): csum failed root 5 ino 77454 off 0 csum 0x9a7f5ae4 expected csum 0xa7f388c1 mirror 1
    Jan  9 00:37:28 Tabris kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 0, corrupt 148, gen 0
    Jan  9 00:37:28 Tabris kernel: Bridge firewalling registered
    Jan  9 00:37:28 Tabris kernel: Initializing XFRM netlink socket
    Jan  9 00:37:28 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Jan  9 00:37:28 Tabris  avahi-daemon[3636]: New relevant interface docker0.IPv4 for mDNS.
    Jan  9 00:37:28 Tabris  avahi-daemon[3636]: Registering new address record for 172.17.0.1 on docker0.IPv4.
    Jan  9 00:37:28 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface br-cd2cd4596443.IPv4 with address 172.18.0.1.
    Jan  9 00:37:28 Tabris  avahi-daemon[3636]: New relevant interface br-cd2cd4596443.IPv4 for mDNS.
    Jan  9 00:37:28 Tabris  avahi-daemon[3636]: Registering new address record for 172.18.0.1 on br-cd2cd4596443.IPv4.
    Jan  9 00:37:29 Tabris  avahi-daemon[3636]: Service "Tabris" (/services/sftp-ssh.service) successfully established.
    Jan  9 00:37:29 Tabris  avahi-daemon[3636]: Service "Tabris" (/services/smb.service) successfully established.
    Jan  9 00:37:29 Tabris  avahi-daemon[3636]: Service "Tabris" (/services/ssh.service) successfully established.
    Jan  9 00:37:30 Tabris  emhttpd: shcmd (160): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
    Jan  9 00:37:30 Tabris kernel: loop3: detected capacity change from 0 to 2097152
    Jan  9 00:37:30 Tabris kernel: BTRFS: device fsid c172ccc1-4210-457b-bdf9-0c8ff42244ac devid 1 transid 1357 /dev/loop3 scanned by mount (4682)
    Jan  9 00:37:30 Tabris kernel: BTRFS info (device loop3): using free space tree
    Jan  9 00:37:30 Tabris kernel: BTRFS info (device loop3): has skinny extents
    Jan  9 00:37:30 Tabris kernel: BTRFS info (device loop3): enabling ssd optimizations
    Jan  9 00:37:30 Tabris root: Resize device id 1 (/dev/loop3) from 1.00GiB to max
    Jan  9 00:37:30 Tabris  emhttpd: shcmd (162): /etc/rc.d/rc.libvirt start
    Jan  9 00:37:30 Tabris root: Starting virtlockd...
    Jan  9 00:37:30 Tabris root: Starting virtlogd...
    Jan  9 00:37:30 Tabris root: Starting libvirtd...
    Jan  9 00:37:30 Tabris kernel: tun: Universal TUN/TAP device driver, 1.6
    Jan  9 00:37:30 Tabris kernel: mdcmd (42): check correct
    Jan  9 00:37:30 Tabris kernel: md: recovery thread: check P ...
    Jan  9 00:37:30 Tabris rc.docker: Plex: started succesfully!
    Jan  9 00:37:30 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
    Jan  9 00:37:30 Tabris  avahi-daemon[3636]: New relevant interface virbr0.IPv4 for mDNS.
    Jan  9 00:37:30 Tabris  avahi-daemon[3636]: Registering new address record for 192.168.122.1 on virbr0.IPv4.
    Jan  9 00:37:30 Tabris dnsmasq[4933]: started, version 2.87 cachesize 150
    Jan  9 00:37:30 Tabris dnsmasq[4933]: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset no-nftset auth cryptohash DNSSEC loop-detect inotify dumpfile
    Jan  9 00:37:30 Tabris dnsmasq-dhcp[4933]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
    Jan  9 00:37:30 Tabris dnsmasq-dhcp[4933]: DHCP, sockets bound exclusively to interface virbr0
    Jan  9 00:37:30 Tabris dnsmasq[4933]: reading /etc/resolv.conf
    Jan  9 00:37:30 Tabris dnsmasq[4933]: using nameserver 1.1.1.1#53
    Jan  9 00:37:30 Tabris dnsmasq[4933]: using nameserver 8.8.8.8#53
    Jan  9 00:37:30 Tabris dnsmasq[4933]: read /etc/hosts - 2 addresses
    Jan  9 00:37:30 Tabris dnsmasq[4933]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
    Jan  9 00:37:30 Tabris dnsmasq-dhcp[4933]: read /var/lib/libvirt/dnsmasq/default.hostsfile
    Jan  9 00:37:30 Tabris kernel: r8169 0000:03:00.0: invalid VPD tag 0x00 (size 0) at offset 0; assume missing optional EEPROM
    Jan  9 00:37:30 Tabris kernel: br-cd2cd4596443: port 1(veth110943b) entered blocking state
    Jan  9 00:37:30 Tabris kernel: br-cd2cd4596443: port 1(veth110943b) entered disabled state
    Jan  9 00:37:30 Tabris kernel: device veth110943b entered promiscuous mode
    Jan  9 00:37:30 Tabris kernel: br-cd2cd4596443: port 1(veth110943b) entered blocking state
    Jan  9 00:37:30 Tabris kernel: br-cd2cd4596443: port 1(veth110943b) entered forwarding state
    Jan  9 00:37:30 Tabris kernel: br-cd2cd4596443: port 1(veth110943b) entered disabled state
    Jan  9 00:37:31 Tabris kernel: eth0: renamed from vethec3bdda
    Jan  9 00:37:31 Tabris kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth110943b: link becomes ready
    Jan  9 00:37:31 Tabris kernel: br-cd2cd4596443: port 1(veth110943b) entered blocking state
    Jan  9 00:37:31 Tabris kernel: br-cd2cd4596443: port 1(veth110943b) entered forwarding state
    Jan  9 00:37:31 Tabris kernel: IPv6: ADDRCONF(NETDEV_CHANGE): br-cd2cd4596443: link becomes ready
    Jan  9 00:37:31 Tabris rc.docker: ProxyManager: started succesfully!
    Jan  9 00:37:31 Tabris kernel: docker0: port 1(veth4016e87) entered blocking state
    Jan  9 00:37:31 Tabris kernel: docker0: port 1(veth4016e87) entered disabled state
    Jan  9 00:37:31 Tabris kernel: device veth4016e87 entered promiscuous mode
    Jan  9 00:37:31 Tabris kernel: docker0: port 1(veth4016e87) entered blocking state
    Jan  9 00:37:31 Tabris kernel: docker0: port 1(veth4016e87) entered forwarding state
    Jan  9 00:37:31 Tabris kernel: docker0: port 1(veth4016e87) entered disabled state
    Jan  9 00:37:31 Tabris kernel: eth0: renamed from veth52c8d6d
    Jan  9 00:37:31 Tabris kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth4016e87: link becomes ready
    Jan  9 00:37:31 Tabris kernel: docker0: port 1(veth4016e87) entered blocking state
    Jan  9 00:37:31 Tabris kernel: docker0: port 1(veth4016e87) entered forwarding state
    Jan  9 00:37:31 Tabris kernel: IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
    Jan  9 00:37:31 Tabris rc.docker: Overseerr: started succesfully!
    Jan  9 00:37:31 Tabris kernel: br-cd2cd4596443: port 2(veth8a50a7a) entered blocking state
    Jan  9 00:37:31 Tabris kernel: br-cd2cd4596443: port 2(veth8a50a7a) entered disabled state
    Jan  9 00:37:31 Tabris kernel: device veth8a50a7a entered promiscuous mode
    Jan  9 00:37:31 Tabris kernel: br-cd2cd4596443: port 2(veth8a50a7a) entered blocking state
    Jan  9 00:37:31 Tabris kernel: br-cd2cd4596443: port 2(veth8a50a7a) entered forwarding state
    Jan  9 00:37:31 Tabris kernel: br-cd2cd4596443: port 2(veth8a50a7a) entered disabled state
    Jan  9 00:37:31 Tabris kernel: eth0: renamed from veth8b1a00b
    Jan  9 00:37:32 Tabris kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth8a50a7a: link becomes ready
    Jan  9 00:37:32 Tabris kernel: br-cd2cd4596443: port 2(veth8a50a7a) entered blocking state
    Jan  9 00:37:32 Tabris kernel: br-cd2cd4596443: port 2(veth8a50a7a) entered forwarding state
    Jan  9 00:37:32 Tabris rc.docker: radarr: started succesfully!
    Jan  9 00:37:32 Tabris kernel: br-cd2cd4596443: port 3(veth2dee507) entered blocking state
    Jan  9 00:37:32 Tabris kernel: br-cd2cd4596443: port 3(veth2dee507) entered disabled state
    Jan  9 00:37:32 Tabris kernel: device veth2dee507 entered promiscuous mode
    Jan  9 00:37:32 Tabris kernel: br-cd2cd4596443: port 3(veth2dee507) entered blocking state
    Jan  9 00:37:32 Tabris kernel: br-cd2cd4596443: port 3(veth2dee507) entered forwarding state
    Jan  9 00:37:32 Tabris kernel: eth0: renamed from veth2403a6f
    Jan  9 00:37:32 Tabris kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2dee507: link becomes ready
    Jan  9 00:37:32 Tabris rc.docker: sonarr: started succesfully!
    Jan  9 00:37:32 Tabris kernel: r8169 0000:03:00.0 eth0: Link is Down
    Jan  9 00:37:32 Tabris kernel: docker0: port 2(vethffda899) entered blocking state
    Jan  9 00:37:32 Tabris kernel: docker0: port 2(vethffda899) entered disabled state
    Jan  9 00:37:32 Tabris kernel: device vethffda899 entered promiscuous mode
    Jan  9 00:37:32 Tabris kernel: docker0: port 2(vethffda899) entered blocking state
    Jan  9 00:37:32 Tabris kernel: docker0: port 2(vethffda899) entered forwarding state
    Jan  9 00:37:32 Tabris kernel: bond0: (slave eth0): link status definitely down, disabling slave
    Jan  9 00:37:32 Tabris kernel: device eth0 left promiscuous mode
    Jan  9 00:37:32 Tabris kernel: bond0: now running without any active interface!
    Jan  9 00:37:32 Tabris kernel: docker0: port 2(vethffda899) entered disabled state
    Jan  9 00:37:32 Tabris kernel: br0: port 1(bond0) entered disabled state
    Jan  9 00:37:32 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface br-cd2cd4596443.IPv6 with address fe80::42:f6ff:fe14:796.
    Jan  9 00:37:32 Tabris  avahi-daemon[3636]: New relevant interface br-cd2cd4596443.IPv6 for mDNS.
    Jan  9 00:37:32 Tabris  avahi-daemon[3636]: Registering new address record for fe80::42:f6ff:fe14:796 on br-cd2cd4596443.*.
    Jan  9 00:37:32 Tabris kernel: eth0: renamed from vethd51e7ae
    Jan  9 00:37:32 Tabris kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethffda899: link becomes ready
    Jan  9 00:37:32 Tabris kernel: docker0: port 2(vethffda899) entered blocking state
    Jan  9 00:37:32 Tabris kernel: docker0: port 2(vethffda899) entered forwarding state
    Jan  9 00:37:32 Tabris rc.docker: SABnzbdVPN: started succesfully!
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface veth8a50a7a.IPv6 with address fe80::24b2:4aff:fe39:3447.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: New relevant interface veth8a50a7a.IPv6 for mDNS.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Registering new address record for fe80::24b2:4aff:fe39:3447 on veth8a50a7a.*.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface veth4016e87.IPv6 with address fe80::94d7:49ff:fef3:b175.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: New relevant interface veth4016e87.IPv6 for mDNS.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Registering new address record for fe80::94d7:49ff:fef3:b175 on veth4016e87.*.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:e1ff:fe48:44d9.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: New relevant interface docker0.IPv6 for mDNS.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Registering new address record for fe80::42:e1ff:fe48:44d9 on docker0.*.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface veth110943b.IPv6 with address fe80::58b0:caff:fe85:b89b.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: New relevant interface veth110943b.IPv6 for mDNS.
    Jan  9 00:37:33 Tabris  avahi-daemon[3636]: Registering new address record for fe80::58b0:caff:fe85:b89b on veth110943b.*.
    Jan  9 00:37:33 Tabris  nmbd[3606]: [2023/01/09 00:37:33.269536,  0] ../../source3/nmbd/nmbd.c:59(terminate)
    Jan  9 00:37:33 Tabris  nmbd[3606]:   Got SIGTERM: going down...
    Jan  9 00:37:33 Tabris  wsdd2[3616]: 'Terminated' signal received.
    Jan  9 00:37:33 Tabris  winbindd[3619]: [2023/01/09 00:37:33.271642,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Jan  9 00:37:33 Tabris  winbindd[3619]:   Got sig[15] terminate (is_parent=1)
    Jan  9 00:37:33 Tabris  wsdd2[3616]: terminating.
    Jan  9 00:37:33 Tabris  winbindd[4959]: [2023/01/09 00:37:33.272079,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Jan  9 00:37:33 Tabris  winbindd[4959]:   Got sig[15] terminate (is_parent=0)
    Jan  9 00:37:34 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface veth2dee507.IPv6 with address fe80::4497:bbff:fe61:7b35.
    Jan  9 00:37:34 Tabris  avahi-daemon[3636]: New relevant interface veth2dee507.IPv6 for mDNS.
    Jan  9 00:37:34 Tabris  avahi-daemon[3636]: Registering new address record for fe80::4497:bbff:fe61:7b35 on veth2dee507.*.
    Jan  9 00:37:34 Tabris  avahi-daemon[3636]: Joining mDNS multicast group on interface vethffda899.IPv6 with address fe80::60ed:9cff:fe38:521f.
    Jan  9 00:37:34 Tabris  avahi-daemon[3636]: New relevant interface vethffda899.IPv6 for mDNS.
    Jan  9 00:37:34 Tabris  avahi-daemon[3636]: Registering new address record for fe80::60ed:9cff:fe38:521f on vethffda899.*.
    Jan  9 00:37:35 Tabris  smbd[7770]: [2023/01/09 00:37:35.454717,  0] ../../source3/smbd/server.c:1741(main)
    Jan  9 00:37:35 Tabris  smbd[7770]:   smbd version 4.17.3 started.
    Jan  9 00:37:35 Tabris  smbd[7770]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Jan  9 00:37:35 Tabris  nmbd[7819]: [2023/01/09 00:37:35.481414,  0] ../../source3/nmbd/nmbd.c:901(main)
    Jan  9 00:37:35 Tabris  nmbd[7819]:   nmbd version 4.17.3 started.
    Jan  9 00:37:35 Tabris  nmbd[7819]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Jan  9 00:37:35 Tabris  wsdd2[7918]: starting.
    Jan  9 00:37:35 Tabris  winbindd[7919]: [2023/01/09 00:37:35.580904,  0] ../../source3/winbindd/winbindd.c:1440(main)
    Jan  9 00:37:35 Tabris  winbindd[7919]:   winbindd version 4.17.3 started.
    Jan  9 00:37:35 Tabris  winbindd[7919]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Jan  9 00:37:35 Tabris  winbindd[7953]: [2023/01/09 00:37:35.588649,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
    Jan  9 00:37:35 Tabris  winbindd[7953]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Jan  9 00:37:35 Tabris tips.and.tweaks: Tweaks Applied
    Jan  9 00:37:35 Tabris unassigned.devices: Mounting 'Auto Mount' Remote Shares...
    Jan  9 00:37:35 Tabris  sudo:     root : PWD=/ ; USER=root ; COMMAND=/bin/bash -c /usr/local/emhttp/plugins/unbalance/unbalance -port 6237
    Jan  9 00:37:35 Tabris  sudo: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
    Jan  9 00:37:35 Tabris  ntpd[1017]: Deleting interface #1 br0, 192.168.11.100#123, interface stats: received=6, sent=6, dropped=0, active_time=68 secs
    Jan  9 00:37:35 Tabris  ntpd[1017]: 93.93.131.118 local addr 192.168.11.100 -> <null>
    Jan  9 00:37:36 Tabris kernel: r8169 0000:03:00.0 eth0: Link is Up - 1Gbps/Full - flow control off
    Jan  9 00:37:36 Tabris kernel: bond0: (slave eth0): link status definitely up, 1000 Mbps full duplex
    Jan  9 00:37:36 Tabris kernel: bond0: (slave eth0): making interface the new active one
    Jan  9 00:37:36 Tabris kernel: device eth0 entered promiscuous mode
    Jan  9 00:37:36 Tabris kernel: bond0: active interface up!
    Jan  9 00:37:36 Tabris kernel: br0: port 1(bond0) entered blocking state
    Jan  9 00:37:36 Tabris kernel: br0: port 1(bond0) entered forwarding state
    Jan  9 00:37:37 Tabris  ntpd[1017]: Listen normally on 3 br0 192.168.11.100:123
    Jan  9 00:37:37 Tabris  ntpd[1017]: new interface(s) found: waking up resolver
    Jan  9 00:37:51 Tabris kernel: mdcmd (43): nocheck cancel
    Jan  9 00:37:51 Tabris kernel: md: recovery thread: exit status: -4
    Jan  9 00:37:58 Tabris  nmbd[7859]: [2023/01/09 00:37:58.530106,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   Samba name server TABRIS is now a local master browser for workgroup WORKGROUP on subnet 172.17.0.1
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:37:58 Tabris  nmbd[7859]: [2023/01/09 00:37:58.530181,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   Samba name server TABRIS is now a local master browser for workgroup WORKGROUP on subnet 172.18.0.1
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:37:58 Tabris  nmbd[7859]: [2023/01/09 00:37:58.530245,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   Samba name server TABRIS is now a local master browser for workgroup WORKGROUP on subnet 192.168.11.100
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:37:58 Tabris  nmbd[7859]: [2023/01/09 00:37:58.530307,  0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   Samba name server TABRIS is now a local master browser for workgroup WORKGROUP on subnet 192.168.122.1
    Jan  9 00:37:58 Tabris  nmbd[7859]:   
    Jan  9 00:37:58 Tabris  nmbd[7859]:   *****
    Jan  9 00:38:01 Tabris nginx: 2023/01/09 00:38:01 [error] 2831#2831: *496 open() "/usr/local/emhttp/plugins/dynamix.file.manager/javascript/ace/mode-log.js" failed (2: No such file or directory) while sending to client, client: 192.168.11.14, server: , request: "GET /plugins/dynamix.file.manager/javascript/ace/mode-log.js HTTP/1.1", host: "192.168.11.100", referrer: "http://192.168.11.100/Shares/Browse?dir=/mnt/user/syslog-store"
     

     

     

    tabris-diagnostics-20230108-2347.zip
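    Next I'm going to try enabling the option to mirror the syslog to the flash drive so that whatever is written in the seconds before a crash survives the hard reset. If I've understood it correctly, that mirror ends up on the boot flash, so after the next crash I should be able to read it from the console or another machine with something along these lines (the path is my assumption, not verified):

    # check the tail of the mirrored syslog kept on the boot flash
    tail -n 100 /boot/logs/syslog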

  2. Hi all,

     

    I've started getting periodic system halts that require a full hard reset to resolve.

    The system boots and appears to start normally until I get to entering my array password; after starting the array, the UI appears to hang on the pulsing Unraid logo for at least a good minute before it becomes responsive again.

     

    I run weekly array health checks, which have not reported any issues, and after a fresh boot I cannot see anything in the system log that would indicate the source of the fault (not that I'd know what I was looking at, to be honest).

     

    I've uploaded the diagnostics file in the hope that someone can point me in the right direction to get this sorted. The box isn't highly critical, but having to hard-reset it is getting annoying, and it's now common enough that the server sits on a smart plug I can remotely turn off and on when the issue crops up.

     

    Thanks in advance for your help,

    tabris-diagnostics-20220821-2044.zip

    Hmm... the container got updated today and now Sonarr won't start.

    I get a repeat of this block before it finally gives up and dies.

    2021-03-09 21:08:59,097 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 22543642412560 for <Subprocess at 22543642413328 with name sonarr in state STARTING> (stdout)>
    2021-03-09 21:08:59,097 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 22543642573312 for <Subprocess at 22543642413328 with name sonarr in state STARTING> (stderr)>
    2021-03-09 21:08:59,097 INFO exited: sonarr (exit status 2; not expected)
    2021-03-09 21:08:59,097 DEBG received SIGCHLD indicating a child quit
    2021-03-09 21:09:02,101 INFO spawned: 'sonarr' with pid 68
    2021-03-09 21:09:02,113 DEBG 'sonarr' stderr output:
    Cannot open assembly '/usr/lib/sonarr/NzbDrone.exe': No such file or directory.

     

    Forcing an update seems to have resolved this though. Might be useful for other people having the same issue.

  4. Has anyone been able to get the external (remote) client IP address to forward to the proxied server?

    I've skimmed a few pages and run a search over this topic but I can't find anything on getting the client's IP address to the server.

     

    For clarity I'm running a site using IIS on Windows Server 2016, with Nginx Proxy Manager fronting the public requests.

    My web server only ever sees the IP address of the Docker host (my Unraid server), which is problematic because my application has IP address banning implemented for security - I've had to disable it in case someone cottoned on that they could effectively use my own security against me 😐
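    From what I've read, the usual approach is to have the proxy pass the original client address along in request headers and have the application (or IIS) read those instead of the socket address. A minimal sketch of the directives involved, assuming they can be added to the proxy host's custom Nginx configuration (I haven't verified how Nginx Proxy Manager merges this with its generated config):

    # forward the real client address to the upstream IIS site
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    The banning logic would then need to trust and read X-Forwarded-For / X-Real-IP rather than the connection's remote address.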

  5. Hey all,

     

    Just thought of a potential new feature this morning - I had a power cut last night, signed into Unraid to start the array this morning, and discovered there's a new update available.

     

    1. Does Unraid check for updates periodically, and if it does, can a new notification option be added so that it sends a notification to configured clients? (I use Pushover, which has been flawless so far.)
    2. As an additional option to the above, could Unraid automatically download (but not install) the update and notify us that it's ready to install?
    3. Or disable all of the above altogether, as I'm guessing some people aren't too bothered.

     

    Edit: 1 and 3 already exist as pointed out.

  6. On 1/19/2020 at 2:22 AM, trurl said:

    Put flash in your PC and let it checkdisk.

     

    Are you booting from a USB2 port? You should.

    Hi,

     

    Yes it's booting from a USB 2.0 port.

    I've run check disk; I got a message that there were errors and a prompt to fix them, which I confirmed.

    [Screenshot: check disk reporting errors and prompting to fix them]

     

    This takes a few seconds before returning this message:

    [Screenshot: resulting message from check disk]

     

    I've gone into a command prompt, into DISKPART, and run the command to clear the read-only attribute from the drive, but it never changes the disk's current read-only state.
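    For reference, the sequence I ran was along these lines (the disk number is just an example - it has to match whatever "list disk" shows for the flash drive):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> attributes disk clear readonly
    DISKPART> attributes disk

    The last command is supposed to confirm the change, but for me "Current Read-only State" never clears.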

     

  7. Hi all,

     

    Pretty sure I've seen this posted before but I couldn't find it.

    I've just tried to update my server to the latest version of Unraid but I received an error:

    plugin: run failed: /bin/bash retval: 1

    Trying to update plugins errored out with either the same error or a general failure:

    plugin: updating: community.applications.plg
    
    Cleaning Up Old Versions
    
    plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/archive/community.applications-2020.01.18a-x86_64-1.txz ... failed (Generic error)
    plugin: wget: https://raw.githubusercontent.com/Squidly271/community.applications/master/archive/community.applications-2020.01.18a-x86_64-1.txz download failure (Generic error)

    I then got a small notification that the USB drive was not read-write.

    The server was rebooted "just in case" and on loading, just before the final console login, it listed a bunch of Docker containers that couldn't open their temp folders because the filesystem was read-only.

     

    I'm a little nervous as this is the first time I've actually had anything "go wrong" with my server.

    The server booted up, let me log into the WebGUI and began its usual parity check, but the error saying the USB was not read-write appeared again.
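    In case it's useful for diagnosis, my understanding is that the flash's current mount state can be checked from a terminal with something like this (assuming the flash is mounted at /boot, which I believe is the default):

    # the mount options should show (rw,...) or (ro,...) for the flash device
    mount | grep /boot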

     

    As a precaution, I've downloaded a flash backup.

     

    The flash drive is only a few years old and is a Sandisk Cruzer Fit 16GB drive from Amazon.

     

    Background aside, is this indicative of a failing USB drive?

    Here's the disk log information:

     

    [Screenshot: disk log for the flash drive]

     

    If this disk is failing, can I take the backup I've just made, get a new 32GB Cruzer drive from Amazon, copy the contents of the zip over, re-licence the new USB drive and carry on like nothing ever happened?

     

    Thanks,

     

    S.

  8. On 9/16/2019 at 10:43 PM, Michel Amberg said:

    Hello I started getting this from my container today after an update:

     

    2019-09-16T21:41:40Z E! [inputs.disk] Error in plugin: error getting disk usage info: lstat /dev/mapper/md1: no such file or directory

     

    Any ideas? I don't get any array information anymore populated in my DB

    This is a known issue in the current build and I believe a fix is going to be released on Tuesday for the main project.

    When this particular docker will be updated is another story.

     

    https://github.com/influxdata/telegraf/issues/6388

    [Screenshot of the linked GitHub issue]

     

    Hope that helps 

     

    EDIT:

    While I know it's got nothing to do with this docker, this is why I wish unRaid would support an official API endpoint for dashboard information. For now, scraping the dashboard in C# and parsing the DOM with HtmlAgilityPack has to suffice... barely.

     

  9. On 9/10/2019 at 3:26 AM, Idolwild said:

    I know it's been awhile, I have the same use-case (need NGINX to forward to internal IIS server - care to share any pointers? Thanks!

    Sorry bud, I didn't even know you'd posted a response - I haven't had any notifications from the forum and only noticed when I popped on to ask a question about Grafana.

     

    I can't remember what it was that I had a problem with for this container; let me post this and I'll have a scroll back and edit this once I've remembered!

     

    -- EDIT

     

    Well, I looked back and I haven't got a clue what I was on about!

    I do have everything set up and functioning, so I'd be happy to answer any specific questions you might have about the setup I use at this point.

  10. On 8/19/2019 at 11:01 AM, Djoss said:

    Did you try to add the settings under the Advanced tab of your host?

    I've literally just come back to it today, tried that, and was about to post that it worked for me when I saw your post. xD

    I had no idea if it was going to work or not, but it was a shot in the dark that hit the mark for me.

     

    Thank you anyway!

  11. On 12/29/2018 at 10:07 PM, Djoss said:

    This docker is for people with little to no knowledge about nginx.  It was not done with manual configuration file editing in mind.  Some static configuration files are inside the container itself (/etc/nginx), while generated files are stored under the app data folder.

     

    If you want to migrate from LE docker, you should not try to replicate your config files, but instead, use the UI to re-create the same functionality (again, this container doesn't support subfolders yet).

    Hi,

     

    I have a need to access the nginx.conf file to try and fix a problem I'm having with larger header sizes with IdentityServer.
    Specifically in relation to: https://stackoverflow.com/a/48971393/4953847

     

    How can I set the following values for this container?

    http {
        ...
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        large_client_header_buffers 4 16k;
        ...
    }

    Currently I'm able to authenticate my app but I immediately get redirected to a 502 Bad Gateway from nginx.
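    My working assumption (untested) is that the three proxy buffer directives are valid at server/location level, so if the container exposes any per-host advanced/custom configuration box they could go in there as-is, e.g.:

    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    large_client_header_buffers is http/server level only, so that one presumably still needs to end up in the main nginx.conf somehow.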

  12. Hi,

     

    Just as the title suggests - I'm getting to a point where I'm using a lot of docker containers for various purposes: Ombi, Plex, Sab, Radarr & Sonarr, as well as other nginx sites and (thank you, linuxserver guys) a Visual Studio Code implementation too. But it's getting really crowded. I don't really need to see telegraf, HDDTemp or influx, and without shifting the order around on the Docker tab there's not really a way to organise things better.

     

    I would like to see some ability to group docker containers, and optionally show/hide specific groups on the dashboard (for the more critical dockers only).

    Being able to start/stop/restart the whole group at once would be an added bonus but not essential.

     

    For some idea of how I would see this being done: under Settings/Docker Settings there could be a section to add/remove docker groups; adding a container to a group would be done by editing the container; and a group couldn't be removed until it was empty, again by editing the applicable containers to change their group or ungroup them.

     

    I don't know if I'm actually misusing unRaid, or if people just don't tend to have 25-30 docker containers running without the need to organise them into groups?

    Thoughts?

  13. Hi,

     

    Not sure if this has been raised before or if other people have just found ways around it, but could we have some method to auto-restart specific containers if certain conditions are met?

    Such as:

    • RAM usage above a set limit
    • Specific "health-check" port not responding
    • On a cron schedule?

     

    I've also noticed some containers sometimes fail to start at all, or get stuck in a startup loop.

    In cases like this, would there be room to forcibly stop a container and send a notification out if a container is looping or has been forcibly shut down as a result?

     

    My reason for asking is that occasionally I've had to restart Plex Media Server as it's hogging 4-5GB of RAM while idling, and today SABnzbd-VPN was totally unresponsive on its local port; checking the log, I noticed it was continually failing DNS lookups for PrivateInternetAccess and looping - a forced stop and start resolved it immediately.
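    For illustration, the kind of condition I mean could be approximated today with a small script run on a schedule (the port and container name here are just examples):

    #!/bin/bash
    # restart the container if its web UI / health-check port stops answering
    if ! curl -fsS --max-time 5 http://127.0.0.1:8080/ >/dev/null; then
        docker restart SABnzbdVPN
    fi

    But having it built in per container, with a notification when it triggers, would be much nicer.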

     

    What's everyone else's experience here? Worth a feature add?

  14. On 5/12/2019 at 10:44 AM, bonienl said:

    You need to create a new container and choose an existing template. Next rename the new container and change settings.

    For example I have six Pihole containers running (each network has its own DNS server). They all have a unique name and settings folder.

    This is going to sound stupid, but how did you change the settings folder for the new container?

     

    I've got as far as going into the Docker tab, adding a new container, selecting a template from the list of user templates (in my case I'm selecting the Nginx container I already have) and setting the name to something different.

     

    I can't see a setting in either basic or advanced that lets me specify the appdata folder name.

    I thought it might be based on the name I enter for the container, but it just uses the appdata folder from the template I selected.

     

    Edit: 

    Was stupid, found it under the "Show more settings" bit...

  15. I'd like to set up email alerts from Grafana but the settings page is read only.

    There's a banner that states that the settings are defined in grafana.ini or custom.ini or set as ENV variables.

     

    I've tried looking in the appdata folder for Grafana but I can't see either of the named files.

    I've also tried setting up a variable in the docker edit page and restarting the app but it doesn't work.
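    If I've understood the Grafana docs correctly, the variables have to follow the GF_<SECTION>_<KEY> naming convention, so for SMTP alerts I'd expect container variables along these lines (values are placeholders):

    GF_SMTP_ENABLED=true
    GF_SMTP_HOST=smtp.example.com:587
    GF_SMTP_USER=alerts@example.com
    GF_SMTP_PASSWORD=changeme
    GF_SMTP_FROM_ADDRESS=alerts@example.com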

     

    How do I go about changing the settings for Grafana?

     

    Thanks,

  16. Hi all,

     

    Just want to sense check something since I'm not really familiar with hardware support.

    I currently have a Gigabyte H81M-S2H motherboard, which has now run out of SATA ports, and I'm about to run out of space.

    I'm looking at expansion cards and I think an 8-port HBA card is the right option, specifically:

    LSI SAS 9207-8i KIT 8-Port 6Gbps SATA+SAS PCI-E 3.0 HBA Kit on Amazon

     

    Before I commit to buying it and a couple of extra drives, I just want to know if the card itself should work without any hassle.

     

    Thanks

    Would you never want to be notified of genuine short outages, or just alerted if they last longer than a set period?

    Perhaps there could be a weekly/monthly digest of un-alerted notifications?

     

    EDIT:

    I suppose if you have Grafana in play, you could use its reporting services?

    [Screenshot: Grafana reporting options]