spacecops Posted March 7, 2022

My log usage is suddenly at 100%. After some investigation, I believe this log spam is what's causing it:

Mar 4 05:37:20 rima-server nginx: 2022/03/04 05:37:20 [error] 11170#11170: shpool alloc failed
Mar 4 05:37:20 rima-server nginx: 2022/03/04 05:37:20 [error] 11170#11170: nchan: Out of shared memory while allocating message of size 5614. Increase nchan_max_reserved_memory.
Mar 4 05:37:20 rima-server nginx: 2022/03/04 05:37:20 [error] 11170#11170: *14051935 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/shares?buffer_length=1 HTTP/1.1", host: "localhost"
Mar 4 05:37:20 rima-server nginx: 2022/03/04 05:37:20 [error] 11170#11170: MEMSTORE:00: can't create shared message for channel /shares
Mar 4 05:37:21 rima-server nginx: 2022/03/04 05:37:21 [crit] 11170#11170: ngx_slab_alloc() failed: no memory
Mar 4 05:37:21 rima-server nginx: 2022/03/04 05:37:21 [error] 11170#11170: shpool alloc failed
Mar 4 05:37:21 rima-server nginx: 2022/03/04 05:37:21 [error] 11170#11170: nchan: Out of shared memory while allocating message of size 5861. Increase nchan_max_reserved_memory.
Mar 4 05:37:21 rima-server nginx: 2022/03/04 05:37:21 [error] 11170#11170: *14051937 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Mar 4 05:37:21 rima-server nginx: 2022/03/04 05:37:21 [error] 11170#11170: MEMSTORE:00: can't create shared message for channel /disks
Mar 4 05:37:22 rima-server nginx: 2022/03/04 05:37:22 [crit] 11170#11170: ngx_slab_alloc() failed: no memory
Mar 4 05:37:22 rima-server nginx: 2022/03/04 05:37:22 [error] 11170#11170: shpool alloc failed
Mar 4 05:37:22 rima-server nginx: 2022/03/04 05:37:22 [error] 11170#11170: nchan: Out of shared memory while allocating message of size 5861. Increase nchan_max_reserved_memory.

(These lines keep repeating.) I've attached the diagnostics. What is the best way to free up log space, and how do I prevent this from happening again?

rima-server-diagnostics-20220306-2000.zip
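For reference, the error text itself points at the relevant knob: the nchan module (which pushes the live webGUI updates seen on the /pub/shares and /pub/disks channels above) has exhausted its shared-memory pool, and nchan_max_reserved_memory sets that pool's size. A minimal sketch of a temporary workaround, assuming the directive appears in the generated /etc/nginx/nginx.conf and that your build ships the Slackware-style rc.nginx script; Unraid regenerates nginx.conf, so this edit will not survive a reboot:

# Find where the nchan shared-memory size is set; nginx logs an [info]
# line at startup citing the exact location, e.g. /etc/nginx/nginx.conf:160
grep -n nchan_max_reserved_memory /etc/nginx/nginx.conf

# Raise the limit (256M here is only an example value) and reload nginx
sed -i 's/nchan_max_reserved_memory .*/nchan_max_reserved_memory 256M;/' /etc/nginx/nginx.conf
/etc/rc.d/rc.nginx reload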
trurl Posted March 7, 2022

The simplest and best way to clear the logs is to reboot. Not sure what caused it, but this line from your log stands out:

Feb 12 04:40:05 rima-server root: Fix Common Problems: Warning: vfio.pci.plg Not Compatible with unRaid version 6.9.2
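If a reboot is inconvenient, the log filesystem can also be inspected and emptied in place. A minimal sketch, assuming the spam is landing in the nginx error log; on Unraid /var/log is a small tmpfs, and the exact offending file may differ, so check the du output first:

df -h /var/log                           # how full the log tmpfs is
du -sh /var/log/* | sort -h              # which file is doing the spamming
truncate -s 0 /var/log/nginx/error.log   # empty the offender in place (this path is an assumption)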
Arbadacarba Posted April 6, 2022

I'm seeing the same thing here.

jupiter-diagnostics-20220406-1523.zip
SimonF Posted April 6, 2022

3 minutes ago, Arbadacarba said:
I'm seeing the same thing here.

Are you accessing it from an Android device?
Arbadacarba Posted April 6, 2022

Generally no, but there's always a chance I have a browser window open on something. I don't think so, though; not currently.
borland502 Posted August 27, 2022

Yeah, I've been seeing this problem recently too; it's filling up the log space.
trurl Posted August 27, 2022

If you want help, post diagnostics.
borland502 Posted August 27, 2022

11 minutes ago, trurl said:
If you want help, post diagnostics.

I would be happy to, but I am unable to access the dashboard, and the diagnostics script doesn't look like it is CLI-friendly. The dashboard is the only impacted feature, for me at least. If there's anything I can provide, please let me know, but the next step for me is an orderly shutdown and restart. I've already restarted the applicable services. The nginx log had built up 30GB of the above, and it has done so before. Unfortunately, I truncated the log last night, so my reply wasn't expecting much in the way of help; I just wanted to note that I've seen the problem before as well.

2022/08/27 04:33:55 [info] 659#659: Using 131072KiB of shared memory for nchan in /etc/nginx/nginx.conf:160
2022/08/27 04:44:01 [info] 16302#16302: Using 131072KiB of shared memory for nchan in /etc/nginx/nginx.conf:160
itimpi Posted August 27, 2022

You can get the diagnostics from the CLI by using the 'diagnostics' command (as mentioned in the link).
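For example, from a terminal or SSH session; the resulting zip is written to the logs folder on the flash drive:

diagnostics           # generates <servername>-diagnostics-<date>.zip
ls -lt /boot/logs/    # the newest zip here is the one to attach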
trurl Posted August 27, 2022

1 hour ago, borland502 said:
the diagnostics script doesn't look like it is CLI-friendly

Don't know what you mean by that. The word diagnostics, in this post and every other post where it appears, is a link to instructions for getting the diagnostics. Those instructions explain how to get them, including from the command line.
borland502 Posted August 28, 2022 (edited)

On 8/27/2022 at 11:05 AM, trurl said:
Don't know what you mean by that. The word diagnostics, in this post and every other post where it appears, is a link to instructions for getting the diagnostics. Those instructions explain how to get them, including from the command line.

😊 And so it does. I apologize; I missed the line. The script had been hanging on me, and I thought it depended on the UI part (/usr/bin/php -q /usr/local/sbin/diagnostics). If it completes in the next few hours I'll post it here, but otherwise I'll assume my problem is PHP-related, as everything else about the system is humming along fine. PHP processes are running, including the script, so maybe I'm just underestimating the time. It wouldn't be the first time impatience has made me act foolish.

Edit: I pulled the plug after 6 hours

Edited August 28, 2022 by borland502
trurl Posted August 28, 2022

Diagnostics should complete in only a few minutes at most. What do you get from the command line with this?

df -h
borland502 Posted August 28, 2022

3 hours ago, trurl said:
Diagnostics should complete in only a few minutes at most. What do you get from the command line with this?
df -h

Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  2.5G   14G  16% /
tmpfs            32M  1.8M   31M   6% /run
/dev/sdh1        15G  1.3G   14G   9% /boot
overlay          16G  2.5G   14G  16% /lib/firmware
overlay          16G  2.5G   14G  16% /lib/modules
devtmpfs        8.0M     0  8.0M   0% /dev
tmpfs            16G  8.0K   16G   1% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M   24M  105M  19% /var/log
tmpfs           1.0M     0  1.0M   0% /mnt/disks
tmpfs           1.0M  1.0M     0 100% /mnt/remotes
tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
/dev/md1        233G   31G  203G  14% /mnt/disk1
/dev/md2        3.7T  1.6T  2.1T  44% /mnt/disk2
/dev/md3        3.7T  2.0T  1.7T  55% /mnt/disk3
/dev/md4        5.5T  2.0T  3.6T  36% /mnt/disk4
/dev/md5        4.6T   33G  4.6T   1% /mnt/disk5
/dev/sdi1       932G   88G  843G  10% /mnt/home
/dev/sdk1       932G  231G  701G  25% /mnt/media
/dev/nvme0n1p1  477G  324G  153G  69% /mnt/nvme_win_cache
/dev/sdl1       466G  6.8G  458G   2% /mnt/var
shfs             18T  5.6T   12T  32% /mnt/user0
shfs             18T  5.6T   12T  32% /mnt/user
/dev/loop2      500G   16G  482G   4% /var/lib/docker
/dev/loop3      1.0G  4.5M  904M   1% /etc/libvirt
tmpfs           3.2G     0  3.2G   0% /run/user/0
trurl Posted August 28, 2022

5 hours ago, borland502 said:
I pulled the plug after 6 hours

So those results don't tell us anything. Can you get diagnostics now?
borland502 Posted August 29, 2022

23 hours ago, trurl said:
So those results don't tell us anything. Can you get diagnostics now?

The df command was run before the reboot, but still no joy on the diagnostics run; it still freezes. These diags may be of little help, as they were taken 9 days ago when I was looking through the /boot/logs dir.

merel-diagnostics-20220820-1443.zip
borland502 Posted August 29, 2022

Figured it out. User error, naturally, but the problem was that I'd created a bad SSL certificate ... or rather, an untrusted one. But since the UI didn't die until days afterward, with no reboot in between, I didn't make the obvious connection. Anyway, thank you for your patience and questions.
dja Posted June 29

On 8/29/2022 at 7:03 PM, borland502 said:
Figured it out. User error, naturally, but the problem was that I'd created a bad SSL certificate ... or rather, an untrusted one. But since the UI didn't die until days afterward, with no reboot in between, I didn't make the obvious connection. Anyway, thank you for your patience and questions.

I'm seeing the same issue, and I think this might be my problem. What cert did you remove, or how did you resolve it? I have a self-signed .pem in /boot/config/ssl/certs (Self-signed or user-provided certificate).
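One way to narrow this down is to inspect the certificate directly. A minimal sketch using openssl; the filename here is a placeholder, so substitute whatever .pem actually sits in that folder:

openssl x509 -in /boot/config/ssl/certs/your_cert.pem -noout -subject -issuer -dates
# A self-signed cert shows identical subject and issuer; expired notBefore/
# notAfter dates would also match the untrusted-certificate symptom above.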
tasmith88 Posted September 22

@limetech I am getting nginx errors that fill up my logs. What is the cause of this, and how do I fix it? Thank you.

meshifyunraid-syslog-20230922-1944.zip
trurl Posted September 23

Attach diagnostics to your NEXT post in this thread.
tasmith88 Posted October 5

On 9/23/2023 at 12:44 AM, trurl said:
Attach diagnostics to your NEXT post in this thread.

Here you go.

meshifyunraid-diagnostics-20231005-1623.zip
Squid Posted October 6

Are there any stale browser tabs still open on any device? Close them all down.
tasmith88 Posted October 8

On 10/6/2023 at 8:09 AM, Squid said:
Are there any stale browser tabs still open on any device? Close them all down.

I do keep a browser open, but I will close it. I didn't think this would be the issue. Is there a Docker container I can use to check on the server without having the browser open all the time? I was thinking Homarr. I also have some notifications going to my Discord.
tasmith88 Posted October 8

58 minutes ago, Squid said:
Netdata

Thanks