Odd Nginx error filling up my syslog



On 12/5/2020 at 10:22 AM, bland328 said:

I was just watching my syslog being spammed with


nginx ... worker process ... exited on signal 6

2-3 times/second, and immediately upon finding and closing four stale Unraid web GUI tabs open across two machines, it stopped.

 

Hope this helps someone.

This did it for me! Thanks a bunch for posting that.

  • 1 month later...
On 12/16/2020 at 9:08 AM, Gunny said:

This did it for me! Thanks a bunch for posting that.

 

Glad to hear it helped!

 

For the record, today the Unraid web GUI was draaaaagging...and I discovered these "worker process...exited on signal 6" messages rapidly spamming /var/log/syslog again.

 

So, I went hunting for stale Unraid sessions open in browsers on other computers, and found two.

 

When I closed one, the spamming slowed, and when I closed the other, the spamming stopped.
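For anyone else hunting stale tabs, watching the spam rate live makes it obvious when you've closed the right one. A minimal sketch, assuming the stock Unraid syslog path:

tail -f /var/log/syslog | grep --line-buffered 'exited on signal 6'

The rate of new lines should drop as each offending tab is closed, exactly as described above.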

  • 2 weeks later...

It's been a while since I posted on this topic. I'm currently at 65 days of uptime, which is way more than I'd seen when I last posted.

The change has been to always close browser tabs that have the Unraid web UI open. The other day I found I had left a tab open for a few days, and as a result my log is at 3% of capacity. Until I left that tab open, it had stayed at 1%. From past experience, that percentage climbs very rapidly once it starts moving.
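To check that percentage from a shell rather than the GUI, df works, assuming the stock setup where Unraid keeps /var/log on a small tmpfs:

df -h /var/log

The Use% column should match the log capacity figure the GUI reports.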

 

I will keep closing browser tabs as a workaround - but I would prefer a fix if anyone can suggest one.

  • 1 month later...

This is happening to me again. 

Mar 15 00:18:04 unraid nginx: 2021/03/15 00:18:04 [alert] 32727#32727: worker process 13749 exited on signal 6

and until I stopped nginx with /etc/rc.d/rc.nginx stop, the error log kept showing this:

root@unraid:~# tail -f /var/log/nginx/error.log
ker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2021/03/15 00:18:30 [alert] 32727#32727: worker process 14062 exited on signal 6
ker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2021/03/15 00:18:32 [alert] 32727#32727: worker process 14100 exited on signal 6
ker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2021/03/15 00:18:34 [alert] 32727#32727: worker process 14132 exited on signal 6
ker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2021/03/15 00:18:36 [alert] 32727#32727: worker process 14179 exited on signal 6
2021/03/15 00:18:37 [alert] 14205#14205: *24668 open socket #3 left in connection 7
2021/03/15 00:18:37 [alert] 14205#14205: aborting

 

I'm not quite sure where I have a stale tab open. This seems like a pretty bad bug. Any word on fixing it? 
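For anyone who stops nginx the same way while investigating, here is a sketch of how to bring the web GUI back, assuming the stock Slackware-style rc script:

/etc/rc.d/rc.nginx start

or, in one step:

/etc/rc.d/rc.nginx restart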

34 minutes ago, Dovy6 said:

I've submitted a bug report - however, I've gone and closed all open Unraid GUI tabs on all my browsers and it didn't seem to fix the problem... So I think it may be something other than the stale tabs...

I take that back. I just went and restarted the entire browser and, for now, the errors appear to have stopped. 

  • 5 weeks later...
  • 1 month later...
On 12/5/2020 at 5:22 PM, bland328 said:

I was just watching my syslog being spammed with


nginx ... worker process ... exited on signal 6

2-3 times/second, and immediately upon finding and closing four stale Unraid web GUI tabs open across two machines, it stopped.

 

Hope this helps someone.

 

I am having the same issue - a message in syslog every 1-2 seconds from 10:31:28 to 12:30:28, and then it suddenly stopped.

What is a "stale" web GUI tab? why is it defined as stale.

 

I am also getting the following nginx-related errors every minute in between the "worker process" messages:

May 21 12:30:20 Tower nginx: 2021/05/21 12:30:20 [error] 29362#29362: *143060 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.100, server: , request: "GET /plugins/dynamix.my.servers/include/state.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.unraid.net", referrer: "https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.unraid.net/Dashboard"

 

22 minutes ago, theone said:

 

I am having the same issue - a message in syslog every 1-2 seconds from 10:31:28 to 12:30:28, and then it suddenly stopped.

What is a "stale" web GUI tab? why is it defined as stale.

 

I am also getting the following nginx-related errors every minute in between the "worker process" messages:


May 21 12:30:20 Tower nginx: 2021/05/21 12:30:20 [error] 29362#29362: *143060 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.100, server: , request: "GET /plugins/dynamix.my.servers/include/state.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.unraid.net", referrer: "https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.unraid.net/Dashboard"

 

A stale GUI tab means that on some computer, somewhere, there's an Unraid tab open that hasn't been used in a long time and is causing this issue for some reason. Restart any open browsers you have on any computer.


I've been seeing the issues reported in this thread for a few weeks too:

- web terminal does not work

- web VNC does not work

- log at 100%, spammed with nginx errors

 

I don't have any web terminal tabs open. It would be nice to get some support on this issue...
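If the log is already at 100%, one stopgap (not a fix) is to truncate the spammed files and restart nginx; this assumes /var/log is the usual Unraid tmpfs:

: > /var/log/syslog
: > /var/log/nginx/error.log
/etc/rc.d/rc.nginx restart

That frees the space immediately, though the spam will resume until the offending tab or browser is found.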

On 5/21/2021 at 9:57 AM, theone said:

The "problem" is that I have no other PC or smartphone currently open with an unRAID webUI tab open.

 

31 minutes ago, lamer said:

- log at 100%, spammed with nginx errors

 

Look closely at the nginx error line; it will tell you the IP address of the computer that is causing the problem. Here is a log excerpt pulled from above. The client field shows the specific IP address that is repeatedly hitting the server:

May 21 12:30:20 Tower nginx: 2021/05/21 12:30:20 [error] 29362#29362: *143060 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.100, server: , request: "GET /plugins/dynamix.my.servers/include/state.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.unraid.net", referrer: "https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.unraid.net/Dashboard"

 

In that particular log entry, it looks like the issue is that the My Servers plugin was uninstalled but there was still a browser tab open on 192.168.1.100 that was trying to call it. The fix would be to close or reload all tabs on that computer. Note that some alternative browsers have things called "panels" that are pretty much the same as tabs. You need to close or reload everything that is pointed at the server. Can't find it? Reboot the computer at that IP address.
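If the log has scrolled too far to eyeball, you can tally the client addresses directly. A sketch, assuming GNU grep and the stock error log path:

grep -oE 'client: [0-9.]+' /var/log/nginx/error.log | sort | uniq -c | sort -rn

The address with the highest count is the machine to go hunt tabs (or panels) on.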

  • 1 month later...

I just wanted to report that this is still present in the latest release, 6.9.2.

 

Jul  5 09:47:41 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:41 [alert] 8435#8435: worker process 18731 exited on signal 6
Jul  5 09:47:43 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:43 [alert] 8435#8435: worker process 18756 exited on signal 6
Jul  5 09:47:45 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:45 [alert] 8435#8435: worker process 18801 exited on signal 6
Jul  5 09:47:47 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:47 [alert] 8435#8435: worker process 18828 exited on signal 6

 

I was checking on my parity check and noticed my logs filled with this. I only had two windows open: the main one and a system log.

 

Ran the following:

killall --quiet --older-than 1w process_name

 

This seemed to have solved the issue.

On 7/5/2021 at 12:30 PM, arch1mede said:

I just wanted to report that this is still present in the latest release, 6.9.2.

 


Jul  5 09:47:41 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:41 [alert] 8435#8435: worker process 18731 exited on signal 6
Jul  5 09:47:43 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:43 [alert] 8435#8435: worker process 18756 exited on signal 6
Jul  5 09:47:45 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:45 [alert] 8435#8435: worker process 18801 exited on signal 6
Jul  5 09:47:47 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:47 [alert] 8435#8435: worker process 18828 exited on signal 6

 

I was checking on my parity check and noticed my logs filled with this. I only had two windows open: the main one and a system log.

 

Ran the following:


killall --quiet --older-than 1w process_name

 

This seemed to have solved the issue.

What would 'process_name' be in this case? nginx?
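Presumably, yes: the crashing workers are nginx processes, so the hedged equivalent would be

killall --quiet --older-than 1w nginx

Note that this would also kill the nginx master process if it is older than a week (likely on a long-uptime box), taking the GUI down until /etc/rc.d/rc.nginx restart brings it back.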

