joshbgosh10592 posted July 29, 2020:

My unRAID log file filled up completely a few weeks ago. I cleared it by deleting syslog* from /var/log/ and didn't think twice about it. Now it's full again, and this time I'm looking at it. I'm being spammed every second by these messages, which I don't understand in the slightest:

Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [crit] 22193#22193: ngx_slab_alloc() failed: no memory
Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: shpool alloc failed
Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: nchan: Out of shared memory while allocating message of size 6775. Increase nchan_max_reserved_memory.
Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: *14271051 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: MEMSTORE:00: can't create shared message for channel /disks

Anyone have any advice on where to even start?
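Before deleting anything, it helps to confirm which file is actually eating the space. On Unraid, /var/log is a small tmpfs, so a chatty daemon fills it quickly. A minimal inspection sketch using standard tools (the paths are standard; sizes and filenames will vary per system):

```shell
# Show how full the filesystem backing /var/log is
df -h /var/log

# Rank everything under /var/log by size to find the offender
du -ah /var/log 2>/dev/null | sort -rh | head -n 10
```

Note that deleting an open log file does not free the space until the daemon holding it is restarted, which is one reason a reboot is the cleaner way to clear it.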
Energen posted July 29, 2020:

What web browser do you use?
trurl posted July 29, 2020:

1 hour ago, joshbgosh10592 said: "advice on where to even start"

By posting diagnostics, of course.
joshbgosh10592 posted July 30, 2020:

13 hours ago, trurl said: "By posting diagnostics of course"

I tried to gather them, but the page just hangs. I also don't have CPU stats populating (all cores show 0%); I think that started the last time I manually deleted the syslog files.

14 hours ago, Energen said: "What web browser do you use?"

I use Chrome.
trurl posted July 30, 2020:

11 minutes ago, joshbgosh10592 said: "I tried to gather it, but the page just hangs."

The best way to clear the logs is to reboot. Reboot and post diagnostics. Also set up the Syslog Server.
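Unraid's built-in syslog server is configured in the GUI (Settings, Syslog Server), but what it amounts to is ordinary syslog forwarding, so the log survives a reboot on another machine. An illustrative rsyslog-style client fragment, assuming a hypothetical collector at 10.9.220.5 listening on UDP 514 (Unraid generates the equivalent for you; this is just to show the shape):

```
# Forward everything to a remote syslog collector
# single @ = UDP, @@ = TCP; address and port are placeholders
*.* @10.9.220.5:514
```

With mirroring in place, the spam that fills the local tmpfs log is preserved remotely even after the reboot clears it.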
joshbgosh10592 posted July 30, 2020:

1 hour ago, trurl said: "Best way to clear logs is to reboot. Reboot and post diagnostics. Also setup Syslog Server"

Attached is the diagnostics zip. However, it's no longer throwing those errors; I figure it's just a matter of time, though? I'm working on setting up a syslog server, but haven't had the time yet.

nas-diagnostics-20200730-1144.zip
trurl posted July 30, 2020:

1 hour ago, joshbgosh10592 said: "it's now not throwing those errors. I figure it's just a matter of time though?"

These seem to be what is spamming your log every second or so. No wonder it fills up:

Jul 30 11:44:37 NAS rpcbind[18877]: connect from 10.9.220.11 to getport/addr(mountd)
Jul 30 11:44:38 NAS rpcbind[18878]: connect from 10.9.220.49 to getport/addr(mountd)
Jul 30 11:44:48 NAS rpcbind[19028]: connect from 10.9.220.49 to getport/addr(mountd)
Jul 30 11:44:48 NAS rpcbind[19029]: connect from 10.9.220.11 to getport/addr(mountd)
Jul 30 11:44:57 NAS rpcbind[19556]: connect from 10.9.220.11 to getport/addr(mountd)
Jul 30 11:44:58 NAS rpcbind[19583]: connect from 10.9.220.49 to getport/addr(mountd)

Any idea what that's about?
joshbgosh10592 posted July 31, 2020:

On 7/30/2020 at 1:09 PM, trurl said: "These seem to be what is spamming your log every second or so. No wonder it fills up. [...] Any idea what that's about?"

Those are my Proxmox nodes binding to the NFS export. The nodes weren't happy that I rebooted the NAS out from under them.
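The two client IPs hitting rpcbind every ten seconds or so are consistent with Proxmox's status daemon (pvestatd) polling its NFS storage, which is normal behavior rather than a fault; it just happens to be logged verbosely. If the polling itself is wanted and only the log noise is the problem, one option is to drop those specific messages before they reach syslog. A sketch using standard rsyslog property-based filtering (the filter syntax is stock rsyslog; where Unraid keeps its rsyslog config, and whether you want to suppress these at all, is up to you):

```
# Discard the rpcbind getport/addr chatter from NFS status polling
:msg, contains, "getport/addr(mountd)" stop
```

This treats the symptom, not the cause: the messages are informational, so filtering them keeps the tmpfs log usable without changing NFS behavior.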
nlcjr posted December 7, 2020:

On 7/31/2020 at 4:10 PM, joshbgosh10592 said: "Those are my Proxmox nodes binding, which should be NFS. The nodes weren't happy I rebooted the NAS on them."

Did you ever find an answer to this? My Proxmox is causing the same log spamming. It sure seems like an Unraid issue rather than a Proxmox issue.