  • [6.12.3] Unknown server hangs (must be physically shut down or restarted manually)


    a632079
    • Urgent

    Hi, I woke up this morning and found that my Unraid NAS was not responding to any requests: SSH, SMB, or the WebUI.

     

     Below is the system log I was able to rescue (the log is written to the flash drive). The NAS has a screen and keyboard attached. The screen shows no signal, and although the keyboard's lights are on, pressing CTRL+ALT+DELETE does not trigger a restart.
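    Since the keyboard still has power but CTRL+ALT+DELETE does nothing, one option for next time is the kernel's Magic SysRq key, which works even when userspace is hung. A minimal sketch, assuming the standard Linux /proc interface (on Unraid this line could be added to the flash drive's go file to persist across boots; treat that path as an assumption):

    ```shell
    # Enable all Magic SysRq functions (run as root).
    # Once enabled, Alt+SysRq+s (sync disks), Alt+SysRq+u (remount
    # read-only) and Alt+SysRq+b (reboot) are handled by the kernel
    # directly from the attached keyboard, even when SSH/SMB/WebUI
    # are unresponsive.
    echo 1 > /proc/sys/kernel/sysrq
    ```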

     

    Aug  3 01:57:28 OuO avahi-daemon[15911]: Withdrawing address record for 2409:8a20:5214:4c90::4b8 on br0.
    Aug  3 01:57:31 OuO ntpd[6698]: Deleting interface #3 br0, 2409:8a20:5214:4c90::4b8#123, interface stats: received=0, sent=0, dropped=0, active_time=28179 secs
    Aug  3 02:01:02 OuO rpc.statd[1522]: Version 2.6.2 starting
    Aug  3 02:01:02 OuO sm-notify[1523]: Version 2.6.2 starting
    Aug  3 02:01:02 OuO sm-notify[1523]: Already notifying clients; Exiting!
    Aug  3 02:01:02 OuO rpc.mountd[6575]: Caught signal 15, un-registering and exiting.
    Aug  3 02:01:03 OuO kernel: nfsd: last server has exited, flushing export cache
    Aug  3 02:01:05 OuO kernel: NFSD: Using UMH upcall client tracking operations.
    Aug  3 02:01:05 OuO kernel: NFSD: starting 90-second grace period (net f0000000)
    Aug  3 02:01:05 OuO rpc.mountd[1679]: Version 2.6.2 starting
    Aug  3 02:01:05 OuO ntpd[6698]: ntpd exiting on signal 1 (Hangup)
    Aug  3 02:01:05 OuO ntpd[6698]: 127.127.1.0 local addr 127.0.0.1 -> <null>
    Aug  3 02:01:05 OuO ntpd[6698]: 106.55.184.199 local addr 192.168.63.190 -> <null>
    Aug  3 02:01:05 OuO ntpd[6698]: 203.107.6.88 local addr 192.168.63.190 -> <null>
    Aug  3 02:01:05 OuO ntpd[6698]: 40.119.6.228 local addr 192.168.63.190 -> <null>
    Aug  3 02:01:05 OuO ntpd[6698]: 216.239.35.12 local addr 192.168.63.190 -> <null>
    Aug  3 02:01:05 OuO ntpd[1798]: ntpd [email protected] Tue Jun  6 17:07:37 UTC 2023 (1): Starting
    Aug  3 02:01:05 OuO ntpd[1798]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
    Aug  3 02:01:05 OuO ntpd[1798]: ----------------------------------------------------
    Aug  3 02:01:05 OuO ntpd[1798]: ntp-4 is maintained by Network Time Foundation,
    Aug  3 02:01:05 OuO ntpd[1798]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
    Aug  3 02:01:05 OuO ntpd[1798]: corporation.  Support and training for ntp-4 are
    Aug  3 02:01:05 OuO ntpd[1798]: available at https://www.nwtime.org/support
    Aug  3 02:01:05 OuO ntpd[1798]: ----------------------------------------------------
    Aug  3 02:01:05 OuO ntpd[1798]: DEBUG behavior is enabled - a violation of any
    Aug  3 02:01:05 OuO ntpd[1798]: diagnostic assertion will cause ntpd to abort
    Aug  3 02:01:05 OuO ntpd[1800]: proto: precision = 0.065 usec (-24)
    Aug  3 02:01:05 OuO ntpd[1800]: basedate set to 2023-05-25
    Aug  3 02:01:05 OuO ntpd[1800]: gps base set to 2023-05-28 (week 2264)
    Aug  3 02:01:05 OuO ntpd[1800]: initial drift restored to 21.963000
    Aug  3 02:01:05 OuO ntpd[1800]: Listen normally on 0 lo 127.0.0.1:123
    Aug  3 02:01:05 OuO ntpd[1800]: Listen normally on 1 br0 192.168.63.190:123
    Aug  3 02:01:05 OuO ntpd[1800]: Listen normally on 2 lo [::1]:123
    Aug  3 02:01:05 OuO ntpd[1800]: Listen normally on 3 br0 [2409:8a20:5214:4c90:2e0:4cff:fee2:3a]:123
    Aug  3 02:01:05 OuO ntpd[1800]: Listening on routing socket on fd #20 for interface updates
    Aug  3 02:01:05 OuO ntpd[1800]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
    Aug  3 02:01:05 OuO ntpd[1800]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
    Aug  3 02:01:06 OuO rpc.mountd[1679]: v4.2 client attached: 0x6de53e6564ca99e0 from "192.168.63.201:720"
    Aug  3 02:01:09 OuO sshd[7103]: Received signal 15; terminating.
    Aug  3 02:01:09 OuO sshd[2125]: Server listening on 2409:8a20:5214:4c90:2e0:4cff:fee2:3a port 22.
    Aug  3 02:01:09 OuO sshd[2125]: Server listening on 192.168.63.190 port 22.
    Aug  3 02:01:09 OuO winbindd[6023]: [2023/08/03 02:01:09.840967,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Aug  3 02:01:09 OuO winbindd[6023]:   Got sig[15] terminate (is_parent=0)
    Aug  3 02:01:09 OuO wsdd2[15828]: 'Terminated' signal received.
    Aug  3 02:01:09 OuO winbindd[15834]: [2023/08/03 02:01:09.842757,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Aug  3 02:01:09 OuO winbindd[15834]:   Got sig[15] terminate (is_parent=0)
    Aug  3 02:01:09 OuO winbindd[15831]: [2023/08/03 02:01:09.842772,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
    Aug  3 02:01:09 OuO winbindd[15831]:   Got sig[15] terminate (is_parent=1)
    Aug  3 02:01:09 OuO wsdd2[15828]: terminating.
    Aug  3 02:01:09 OuO smbd[2277]: [2023/08/03 02:01:09.938190,  0] ../../source3/smbd/server.c:1741(main)
    Aug  3 02:01:09 OuO smbd[2277]:   smbd version 4.17.7 started.
    Aug  3 02:01:09 OuO smbd[2277]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Aug  3 02:01:10 OuO wsdd2[2291]: starting.
    Aug  3 02:01:10 OuO winbindd[2292]: [2023/08/03 02:01:10.068837,  0] ../../source3/winbindd/winbindd.c:1440(main)
    Aug  3 02:01:10 OuO winbindd[2292]:   winbindd version 4.17.7 started.
    Aug  3 02:01:10 OuO winbindd[2292]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Aug  3 02:01:10 OuO winbindd[2294]: [2023/08/03 02:01:10.075578,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
    Aug  3 02:01:10 OuO winbindd[2294]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2

     

    In addition, this machine also hit a problem yesterday where nginx stopped responding (an old problem). Below is the log from when nginx ran into trouble; maybe it will help with this issue? (Only part of the log is excerpted.)

     

    Aug  2 13:26:57 OuO kernel: nginx[6095]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 13 (core 3, socket 0)
    Aug  2 13:26:57 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:57 OuO emhttpd: error: publish, 172: Connection reset by peer (104): read
    Aug  2 13:26:57 OuO nginx: 2023/08/02 13:26:57 [alert] 6811#6811: worker process 6095 exited on signal 11
    Aug  2 13:26:57 OuO kernel: nginx[6099]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 5 (core 8, socket 0)
    Aug  2 13:26:57 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:57 OuO nginx: 2023/08/02 13:26:57 [alert] 6811#6811: worker process 6099 exited on signal 11
    Aug  2 13:26:58 OuO kernel: nginx[6100]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 1 (core 1, socket 0)
    Aug  2 13:26:58 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:58 OuO nginx: 2023/08/02 13:26:58 [alert] 6811#6811: worker process 6100 exited on signal 11
    Aug  2 13:26:58 OuO kernel: nginx[6115]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 13 (core 3, socket 0)
    Aug  2 13:26:58 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:58 OuO nginx: 2023/08/02 13:26:58 [alert] 6811#6811: worker process 6115 exited on signal 11
    Aug  2 13:26:58 OuO kernel: nginx[6116]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 2 (core 2, socket 0)
    Aug  2 13:26:58 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:58 OuO nginx: 2023/08/02 13:26:58 [alert] 6811#6811: worker process 6116 exited on signal 11
    Aug  2 13:26:58 OuO kernel: nginx[6120]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 16 (core 9, socket 0)
    Aug  2 13:26:58 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:58 OuO emhttpd: error: publish, 172: Connection reset by peer (104): read
    Aug  2 13:26:58 OuO nginx: 2023/08/02 13:26:58 [alert] 6811#6811: worker process 6120 exited on signal 11
    Aug  2 13:26:58 OuO kernel: nginx[6124]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 16 (core 9, socket 0)
    Aug  2 13:26:58 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:58 OuO nginx: 2023/08/02 13:26:58 [alert] 6811#6811: worker process 6124 exited on signal 11
    Aug  2 13:26:58 OuO kernel: nginx[6171]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 6 (core 9, socket 0)
    Aug  2 13:26:58 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:58 OuO nginx: 2023/08/02 13:26:58 [alert] 6811#6811: worker process 6171 exited on signal 11
    Aug  2 13:26:59 OuO kernel: nginx[6383]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 2 (core 2, socket 0)
    Aug  2 13:26:59 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:59 OuO nginx: 2023/08/02 13:26:59 [alert] 6811#6811: worker process 6383 exited on signal 11
    Aug  2 13:26:59 OuO kernel: nginx[6429]: segfault at 7f8 ip 00000000004e6982 sp 00007ffdfeaee000 error 6 in nginx[424000+110000] likely on CPU 2 (core 2, socket 0)
    Aug  2 13:26:59 OuO kernel: Code: 31 c0 e8 31 12 f4 ff e9 6f ff ff ff 48 8b 7c 24 08 e8 d2 62 f4 ff 49 89 c6 48 85 c0 0f 88 a4 00 00 00 4c 89 ef e8 de 54 02 00 <4c> 89 70 08 e9 4d fc ff ff be f3 01 00 00 e8 cb 28 f8 ff 48 c7 c0
    Aug  2 13:26:59 OuO nginx: 2023/08/02 13:26:59 [alert] 6811#6811: worker process 6429 exited on signal 11
    Aug  2 13:26:59 OuO emhttpd: error: publish, 172: Connection reset by peer (104): read
    Aug  2 13:26:59 OuO nginx: 2023/08/02 13:26:59 [alert] 6811#6811: worker process 6433 exited on signal 11
    Aug  2 13:26:59 OuO nginx: 2023/08/02 13:26:59 [alert] 6811#6811: worker process 6437 exited on signal 11
    Aug  2 13:26:59 OuO emhttpd: error: publish, 172: Connection reset by peer (104): read
    Aug  2 13:26:59 OuO emhttpd: error: publish, 172: Connection reset by peer (104): read
    Aug  2 13:26:59 OuO nginx: 2023/08/02 13:26:59 [alert] 6811#6811: worker process 6438 exited on signal 11
    Aug  2 13:26:59 OuO nginx: 2023/08/02 13:26:59 [alert] 6811#6811: worker process 6439 exited on signal 11
    Aug  2 13:27:00 OuO nginx: 2023/08/02 13:27:00 [alert] 6811#6811: worker process 6441 exited on signal 11
    Aug  2 13:27:00 OuO nginx: 2023/08/02 13:27:00 [alert] 6811#6811: worker process 6443 exited on signal 11
    Aug  2 13:27:00 OuO emhttpd: error: publish, 172: Connection reset by peer (104): read
    Aug  2 13:27:00 OuO nginx: 2023/08/02 13:27:00 [alert] 6811#6811: worker process 6447 exited on signal 11
    Aug  2 13:27:00 OuO nginx: 2023/08/02 13:27:00 [alert] 6811#6811: worker process 6451 exited on signal 11
    Aug  2 13:27:00 OuO nginx: 2023/08/02 13:27:00 [alert] 6811#6811: worker process 6452 exited on signal 11
    Aug  2 13:27:00 OuO nginx: 2023/08/02 13:27:00 [alert] 6811#6811: worker process 6466 exited on signal 11
    Aug  2 13:27:01 OuO nginx: 2023/08/02 13:27:01 [alert] 6811#6811: worker process 6497 exited on signal 11
    Aug  2 13:27:01 OuO nginx: 2023/08/02 13:27:01 [alert] 6811#6811: worker process 6499 exited on signal 11
    Aug  2 13:27:01 OuO emhttpd: error: publish, 172: Connection reset by peer (104): read
    Aug  2 13:27:01 OuO nginx: 2023/08/02 13:27:01 [alert] 6811#6811: worker process 6503 exited on signal 11
    Aug  2 13:27:01 OuO nginx: 2023/08/02 13:27:01 [alert] 6811#6811: worker process 6535 exited on signal 11
    Aug  2 13:27:01 OuO nginx: 2023/08/02 13:27:01 [alert] 6811#6811: worker process 6536 exited on signal 11
    Aug  2 13:27:02 OuO kernel: show_signal_msg: 15 callbacks suppressed
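    Every worker above dies at the same instruction pointer (ip 00000000004e6982), which points to one reproducible bug; a core dump would pin it down. A hedged sketch of how one might enable cores for nginx (the directives are standard nginx, but the path is an assumption, not an Unraid default, and the directory must exist and be writable by the worker user):

    ```nginx
    # In the main (top-level) context of nginx.conf — illustrative only:
    worker_rlimit_core  512m;                 # allow worker processes to write core files
    working_directory   /var/log/nginx-cores; # where the kernel drops the cores
    ```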

     

    Before this, I ran into nginx nchan's memory-exhaustion bug (another old problem; I had to restart nginx regularly as a workaround). Frankly, if Unraid needs WebSockets, why not try Swoole? (An event-driven, asynchronous, coroutine-based, high-performance concurrency library for PHP.)

     

    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [crit] 32748#32748: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [error] 32748#32748: shpool alloc failed
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [error] 32748#32748: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [error] 32748#32748: *254332 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [crit] 32748#32748: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [error] 32748#32748: shpool alloc failed
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [error] 32748#32748: nchan: Out of shared memory while allocating channel /temperature. Increase nchan_max_reserved_memory.
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [alert] 32748#32748: *254333 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:10 OuO kernel: nginx[32748]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 5 (core 8, socket 0)
    Aug  2 01:21:10 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:10 OuO nginx: 2023/08/02 01:21:10 [alert] 6811#6811: worker process 32748 exited on signal 11
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [crit] 348#348: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [error] 348#348: shpool alloc failed
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [error] 348#348: nchan: Out of shared memory while allocating channel m/
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [alert] 348#348: *254335 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:11 OuO kernel: nginx[348]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 8 (core 11, socket 0)
    Aug  2 01:21:11 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [alert] 6811#6811: worker process 348 exited on signal 11
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [crit] 350#350: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [error] 350#350: shpool alloc failed
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [error] 350#350: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [error] 350#350: *254337 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [crit] 350#350: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [error] 350#350: shpool alloc failed
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [error] 350#350: nchan: Out of shared memory while allocating channel /temperature. Increase nchan_max_reserved_memory.
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [alert] 350#350: *254338 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:11 OuO kernel: nginx[350]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 18 (core 11, socket 0)
    Aug  2 01:21:11 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:11 OuO nginx: 2023/08/02 01:21:11 [alert] 6811#6811: worker process 350 exited on signal 11
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [crit] 354#354: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [error] 354#354: shpool alloc failed
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [error] 354#354: nchan: Out of shared memory while allocating channel m/
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [alert] 354#354: *254340 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:12 OuO kernel: nginx[354]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 7 (core 10, socket 0)
    Aug  2 01:21:12 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [alert] 6811#6811: worker process 354 exited on signal 11
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [crit] 355#355: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [error] 355#355: shpool alloc failed
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [error] 355#355: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:12 OuO nginx: 2023/08/02 01:21:12 [error] 355#355: *254342 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [crit] 355#355: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [error] 355#355: shpool alloc failed
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [error] 355#355: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [error] 355#355: *254343 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [crit] 355#355: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [error] 355#355: shpool alloc failed
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [error] 355#355: nchan: Out of shared memory while allocating channel /temperature. Increase nchan_max_reserved_memory.
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [alert] 355#355: *254344 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:13 OuO kernel: nginx[355]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 5 (core 8, socket 0)
    Aug  2 01:21:13 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:13 OuO nginx: 2023/08/02 01:21:13 [alert] 6811#6811: worker process 355 exited on signal 11
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [crit] 443#443: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: shpool alloc failed
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: *254346 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [crit] 443#443: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: shpool alloc failed
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: nchan: Out of shared memory while allocating channel /shares. Increase nchan_max_reserved_memory.
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: *254347 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/shares?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [crit] 443#443: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: shpool alloc failed
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:14 OuO nginx: 2023/08/02 01:21:14 [error] 443#443: *254348 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [crit] 443#443: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [error] 443#443: shpool alloc failed
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [error] 443#443: nchan: Out of shared memory while allocating channel /temperature. Increase nchan_max_reserved_memory.
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [alert] 443#443: *254349 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:15 OuO kernel: nginx[443]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 0 (core 0, socket 0)
    Aug  2 01:21:15 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [alert] 6811#6811: worker process 443 exited on signal 11
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [crit] 447#447: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [error] 447#447: shpool alloc failed
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [error] 447#447: nchan: Out of shared memory while allocating channel m/
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [alert] 447#447: *254351 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:15 OuO kernel: nginx[447]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 3 (core 3, socket 0)
    Aug  2 01:21:15 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [alert] 6811#6811: worker process 447 exited on signal 11
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [crit] 451#451: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [error] 451#451: shpool alloc failed
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [error] 451#451: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:15 OuO nginx: 2023/08/02 01:21:15 [error] 451#451: *254353 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [crit] 451#451: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [error] 451#451: shpool alloc failed
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [error] 451#451: nchan: Out of shared memory while allocating channel m/
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [alert] 451#451: *254354 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:16 OuO kernel: nginx[451]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 6 (core 9, socket 0)
    Aug  2 01:21:16 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [alert] 6811#6811: worker process 451 exited on signal 11
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [crit] 452#452: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [error] 452#452: shpool alloc failed
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [error] 452#452: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:16 OuO nginx: 2023/08/02 01:21:16 [error] 452#452: *254356 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [crit] 452#452: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [error] 452#452: shpool alloc failed
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [error] 452#452: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [error] 452#452: *254357 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [crit] 452#452: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [error] 452#452: shpool alloc failed
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [error] 452#452: nchan: Out of shared memory while allocating channel /temperature. Increase nchan_max_reserved_memory.
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [alert] 452#452: *254358 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:17 OuO kernel: nginx[452]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 0 (core 0, socket 0)
    Aug  2 01:21:17 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:17 OuO nginx: 2023/08/02 01:21:17 [alert] 6811#6811: worker process 452 exited on signal 11
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [crit] 459#459: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [error] 459#459: shpool alloc failed
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [error] 459#459: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [error] 459#459: *254360 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [crit] 459#459: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [error] 459#459: shpool alloc failed
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [error] 459#459: nchan: Out of shared memory while allocating channel m/
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [alert] 459#459: *254361 header already sent while keepalive, client: 192.168.63.123, server: 192.168.63.190:80
    Aug  2 01:21:18 OuO kernel: nginx[459]: segfault at 0 ip 0000000000000000 sp 00007ffdfeaee138 error 14 in nginx[400000+24000] likely on CPU 8 (core 11, socket 0)
    Aug  2 01:21:18 OuO kernel: Code: Unable to access opcode bytes at 0xffffffffffffffd6.
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [alert] 6811#6811: worker process 459 exited on signal 11
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [crit] 463#463: ngx_slab_alloc() failed: no memory
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [error] 463#463: shpool alloc failed
    Aug  2 01:21:18 OuO nginx: 2023/08/02 01:21:18 [error] 463#463: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
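    The nchan errors above state the remedy themselves: reserve more shared memory for nchan's channels. A sketch of the relevant directive (the directive name is taken verbatim from the log; the 64M figure is an assumption, not a tested value):

    ```nginx
    # http context of the webGUI's nginx.conf — illustrative only:
    nchan_max_reserved_memory 64M;  # the default shared pool was exhausted per the log
    ```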

     





    Recommended Comments



    I encountered the same problem after enabling IPv6. I restored the configuration, but the issue persisted. Watching the system log, I noticed the link went down at 4 a.m. every day, which is when my router auto-reboots. After disabling the router's scheduled reboot, the problem has been temporarily resolved. I will try reinstalling the system when I have free time. Hope this helps.






