Odd Nginx error filling up my syslog



Hi all, 

 

Within the last 24 hours or so, I have noticed my log filling up rapidly. A snippet of the syslog is pasted below, as I can't even get my diagnostics to download; the following entries fill all of the available logging space. I am currently on 6.8-rc3.

Oct 28 22:03:59 Erebor nginx: 2019/10/28 22:03:59 [alert] 6680#6680: worker process 4147 exited on signal 6
Oct 28 22:03:59 Erebor login[4149]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:00 Erebor login[4177]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:00 Erebor nginx: 2019/10/28 22:04:00 [alert] 6680#6680: worker process 4148 exited on signal 6
Oct 28 22:04:00 Erebor nginx: 2019/10/28 22:04:00 [alert] 6680#6680: worker process 4185 exited on signal 6
Oct 28 22:04:00 Erebor login[4187]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:00 Erebor login[4194]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:01 Erebor nginx: 2019/10/28 22:04:01 [alert] 6680#6680: worker process 4186 exited on signal 6
Oct 28 22:04:01 Erebor nginx: 2019/10/28 22:04:01 [alert] 6680#6680: worker process 4250 exited on signal 6
Oct 28 22:04:01 Erebor nginx: 2019/10/28 22:04:01 [alert] 6680#6680: worker process 4251 exited on signal 6
Oct 28 22:04:01 Erebor login[4254]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:01 Erebor login[4261]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:02 Erebor nginx: 2019/10/28 22:04:02 [alert] 6680#6680: worker process 4252 exited on signal 6
Oct 28 22:04:02 Erebor login[4293]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:02 Erebor login[4300]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:03 Erebor nginx: 2019/10/28 22:04:03 [alert] 6680#6680: worker process 4291 exited on signal 6
Oct 28 22:04:03 Erebor nginx: 2019/10/28 22:04:03 [alert] 6680#6680: worker process 4399 exited on signal 6
Oct 28 22:04:03 Erebor nginx: 2019/10/28 22:04:03 [alert] 6680#6680: worker process 4401 exited on signal 6
Oct 28 22:04:03 Erebor nginx: 2019/10/28 22:04:03 [alert] 6680#6680: worker process 4402 exited on signal 6
Oct 28 22:04:03 Erebor login[4411]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:03 Erebor login[4418]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:04 Erebor nginx: 2019/10/28 22:04:04 [alert] 6680#6680: worker process 4410 exited on signal 6
Oct 28 22:04:04 Erebor nginx: 2019/10/28 22:04:04 [alert] 6680#6680: worker process 4446 exited on signal 6
Oct 28 22:04:04 Erebor login[4449]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:04 Erebor login[4456]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:05 Erebor nginx: 2019/10/28 22:04:05 [alert] 6680#6680: worker process 4448 exited on signal 6
Oct 28 22:04:05 Erebor nginx: 2019/10/28 22:04:05 [alert] 6680#6680: worker process 4484 exited on signal 6
Oct 28 22:04:05 Erebor nginx: 2019/10/28 22:04:05 [alert] 6680#6680: worker process 4486 exited on signal 6
Oct 28 22:04:05 Erebor nginx: 2019/10/28 22:04:05 [alert] 6680#6680: worker process 4487 exited on signal 6
Oct 28 22:04:05 Erebor nginx: 2019/10/28 22:04:05 [alert] 6680#6680: worker process 4488 exited on signal 6
Oct 28 22:04:05 Erebor login[4490]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:05 Erebor login[4497]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:06 Erebor nginx: 2019/10/28 22:04:06 [alert] 6680#6680: worker process 4489 exited on signal 6
Oct 28 22:04:06 Erebor login[4527]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:06 Erebor login[4532]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:07 Erebor nginx: 2019/10/28 22:04:07 [alert] 6680#6680: worker process 4525 exited on signal 6
Oct 28 22:04:07 Erebor nginx: 2019/10/28 22:04:07 [alert] 6680#6680: worker process 4564 exited on signal 6
Oct 28 22:04:07 Erebor nginx: 2019/10/28 22:04:07 [alert] 6680#6680: worker process 4566 exited on signal 6
Oct 28 22:04:07 Erebor nginx: 2019/10/28 22:04:07 [alert] 6680#6680: worker process 4567 exited on signal 6
Oct 28 22:04:07 Erebor login[4569]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:07 Erebor login[4576]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:09 Erebor nginx: 2019/10/28 22:04:09 [alert] 6680#6680: worker process 4568 exited on signal 6
Oct 28 22:04:09 Erebor nginx: 2019/10/28 22:04:09 [alert] 6680#6680: worker process 4704 exited on signal 6
Oct 28 22:04:09 Erebor nginx: 2019/10/28 22:04:09 [alert] 6680#6680: worker process 4706 exited on signal 6
Oct 28 22:04:09 Erebor nginx: 2019/10/28 22:04:09 [alert] 6680#6680: worker process 4707 exited on signal 6
Oct 28 22:04:09 Erebor nginx: 2019/10/28 22:04:09 [alert] 6680#6680: worker process 4708 exited on signal 6
Oct 28 22:04:09 Erebor login[4710]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:09 Erebor login[4717]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:10 Erebor nginx: 2019/10/28 22:04:10 [alert] 6680#6680: worker process 4709 exited on signal 6
Oct 28 22:04:10 Erebor login[4747]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:10 Erebor login[4755]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:11 Erebor nginx: 2019/10/28 22:04:11 [alert] 6680#6680: worker process 4746 exited on signal 6
Oct 28 22:04:11 Erebor nginx: 2019/10/28 22:04:11 [alert] 6680#6680: worker process 4783 exited on signal 6
Oct 28 22:04:11 Erebor nginx: 2019/10/28 22:04:11 [alert] 6680#6680: worker process 4785 exited on signal 6
Oct 28 22:04:11 Erebor nginx: 2019/10/28 22:04:11 [alert] 6680#6680: worker process 4786 exited on signal 6
Oct 28 22:04:11 Erebor nginx: 2019/10/28 22:04:11 [alert] 6680#6680: worker process 4787 exited on signal 6
Oct 28 22:04:11 Erebor login[4789]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:11 Erebor login[4796]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:12 Erebor nginx: 2019/10/28 22:04:12 [alert] 6680#6680: worker process 4788 exited on signal 6
Oct 28 22:04:12 Erebor nginx: 2019/10/28 22:04:12 [alert] 6680#6680: worker process 4849 exited on signal 6
Oct 28 22:04:12 Erebor login[4854]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:12 Erebor login[4859]: ROOT LOGIN  on '/dev/pts/1'
Oct 28 22:04:13 Erebor nginx: 2019/10/28 22:04:13 [alert] 6680#6680: worker process 4851 exited on signal 6
Oct 28 22:04:13 Erebor nginx: 2019/10/28 22:04:13 [alert] 6680#6680: worker process 4891 exited on signal 6
Oct 28 22:04:13 Erebor nginx: 2019/10/28 22:04:13 [alert] 6680#6680: worker process 4893 exited on signal 6
Oct 28 22:04:13 Erebor nginx: 2019/10/28 22:04:13 [alert] 6680#6680: worker process 4894 exited on signal 6
Oct 28 22:04:13 Erebor login[4903]: ROOT LOGIN  on '/dev/pts/0'
Oct 28 22:04:13 Erebor login[4910]: ROOT LOGIN  on '/dev/pts/1'
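To get a feel for how fast these entries pile up, a throwaway script like the one below (purely illustrative, not part of Unraid) can count the nginx signal-6 worker exits per minute in a saved syslog snippet:

```python
import re
from collections import Counter

# Matches the "worker process N exited on signal 6" nginx lines shown above;
# the group captures the timestamp down to the minute ("Mon DD HH:MM").
PATTERN = re.compile(
    r"^(\w{3}\s+\d+ \d{2}:\d{2}):\d{2} \S+ nginx: .*exited on signal 6"
)

def exits_per_minute(lines):
    """Return a Counter mapping 'Mon DD HH:MM' -> number of worker exits."""
    counts = Counter()
    for line in lines:
        m = PATTERN.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# A couple of lines lifted from the snippet above; the login line is
# deliberately included to show it is not counted.
sample = [
    "Oct 28 22:03:59 Erebor nginx: 2019/10/28 22:03:59 [alert] 6680#6680: worker process 4147 exited on signal 6",
    "Oct 28 22:04:00 Erebor nginx: 2019/10/28 22:04:00 [alert] 6680#6680: worker process 4148 exited on signal 6",
    "Oct 28 22:04:00 Erebor login[4187]: ROOT LOGIN  on '/dev/pts/0'",
]
print(exits_per_minute(sample))
```

Run against the full snippet above, this shows several worker crashes per second, which is why the log partition fills so quickly.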

 

  • 3 months later...

Same here. This has something to do with the VMs and the web UI. 

 

Here's /var/log/nginx/error.log

 

ker process: ./nchan-1.2.6/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2020/03/13 14:14:07 [alert] 3138#3138: worker process 3368 exited on signal 6
ker process: ./nchan-1.2.6/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2020/03/13 14:14:10 [alert] 3138#3138: worker process 3371 exited on signal 6
ker process: ./nchan-1.2.6/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2020/03/13 14:14:13 [alert] 3138#3138: worker process 3377 exited on signal 6

 

These messages appear even after I restart nginx.

 


It seems that I've fixed the problem. 

  • Open the VM tab
  • Edit the VM in XML mode
  • Change the "port" property in the "graphics" section from -1 to a valid number, like 5702
  • Change the "websocket" property in the "graphics" section from -1 to a valid number, like 5902
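For reference, the edited part of the VM's libvirt XML would look roughly like this (the port numbers are just examples; note that when `port` is -1 libvirt typically also has `autoport='yes'`, which you would set to `'no'` when hard-coding ports — that detail is an assumption worth double-checking against your own XML):

```xml
<graphics type='vnc' port='5702' autoport='no' websocket='5902' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
```

Pick port values that don't collide with the VNC ports of your other VMs.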

I don't know how these properties got set to -1. There is no place in the form view to change these values.

 

If you're unable to make these changes because the nginx workers keep crashing, restart the Unraid server and don't open any VNC view before changing these values.


I'm having what I think is the same problem. I see this filling up my nginx log files. 

ker process: ./nchan-1.2.6/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2020/06/05 19:43:01 [alert] 6602#6602: worker process 12703 exited on signal 6

Eventually the log space reaches 100% and the server crashes. This requires a hard reboot; I can't even SSH in. 

This is happening within 3-5 days of a reboot. 
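For context, Unraid keeps /var/log on a small tmpfs (128 MB is a commonly cited default, but that figure is an assumption — check `df -h /var/log` on your own box). A trivial helper shows how quickly a runaway nginx can eat it:

```python
def log_fill_percent(used_bytes, total_bytes=128 * 2**20):
    """Percentage of the log partition consumed.

    The 128 MB default for Unraid's /var/log tmpfs is an assumption;
    verify it on your own system before relying on this.
    """
    return 100.0 * used_bytes / total_bytes

# e.g. the 97 MB syslog reported later in this thread
print(round(log_fill_percent(97 * 2**20)))  # -> 76
```

At several crash entries per second, going from 1% to 100% in a few days is entirely plausible.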

I've noticed that the cpu activity graph on the dashboard will become inactive, with no data shown for any core.

I have VMs turned off. I'm running a number of Docker containers.

 

____

Unraid 6.8.3, Dell R710, 12 cores, 96 GB ECC RAM

  • 2 weeks later...

Since I posted my message above 13 days ago, I've not had a system crash: 14 days 17 hours uptime. I was getting no more than 5 days due to this problem.

After the last crash, I stopped a number of new Docker containers I had been using for just a few weeks: Sonarr, Radarr and Jackett. Since stopping these, my system appears to be stable again. Log file size is reporting as 1%. 

  • 3 weeks later...

No, I haven't solved it yet. And looking through other recent posts in this forum, I suspect there are other users having a similar problem. 

My server had this issue again today, but I was able to reboot it before it became unresponsive (which saved me a 10-hour parity check). 

 

This time I also had syslog going to a local syslog server, so I was able to check what was happening in the logs before the crash. It looks like there are hundreds of these lines:

Jul  9 09:10:02 Tower rsyslogd: action 'action-0-builtin:omfile' (module 'builtin:omfile') message lost, could not be processed. Check for additional error messages before this one. [v8.1908.0 try https://www.rsyslog.com/e/2027 ]

preceded by possibly thousands of these lines (the log file was at 97 MB): 

Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [alert] 1660#1660: *2874685 header already sent while keepalive, client: 192.168.1.187, server: 0.0.0.0:80
Jul  8 21:27:31 Tower kernel: nginx[1660]: segfault at 0 ip 0000000000000000 sp 00007ffeaafbe7b8 error 14 in nginx[400000+21000]
Jul  8 21:27:31 Tower kernel: Code: Bad RIP value.
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [alert] 6641#6641: worker process 1660 exited on signal 11
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [crit] 1665#1665: ngx_slab_alloc() failed: no memory
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [error] 1665#1665: shpool alloc failed
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [error] 1665#1665: nchan: Out of shared memory while allocating channel /var. Increase nchan_max_reserved_memory.
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [alert] 1665#1665: *2874687 header already sent while keepalive, client: 192.168.1.187, server: 0.0.0.0:80
Jul  8 21:27:31 Tower kernel: nginx[1665]: segfault at 0 ip 0000000000000000 sp 00007ffeaafbe7b8 error 14 in nginx[400000+21000]
Jul  8 21:27:31 Tower kernel: Code: Bad RIP value.
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [alert] 6641#6641: worker process 1665 exited on signal 11
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [crit] 1666#1666: ngx_slab_alloc() failed: no memory
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [error] 1666#1666: shpool alloc failed
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [error] 1666#1666: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
Jul  8 21:27:31 Tower nginx: 2020/07/08 21:27:31 [error] 1666#1666: *2874689 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [crit] 1666#1666: ngx_slab_alloc() failed: no memory
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [error] 1666#1666: shpool alloc failed
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [error] 1666#1666: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [error] 1666#1666: *2874690 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [crit] 1666#1666: ngx_slab_alloc() failed: no memory
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [error] 1666#1666: shpool alloc failed
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [error] 1666#1666: nchan: Out of shared memory while allocating channel /var. Increase nchan_max_reserved_memory.
Jul  8 21:27:32 Tower nginx: 2020/07/08 21:27:32 [alert] 1666#1666: *2874691 header already sent while keepalive, client: 192.168.1.187, server: 0.0.0.0:80
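The "Out of shared memory" lines come from nchan, the nginx pub/sub module Unraid's web UI uses to push dashboard updates; the error message itself names the knob to turn. In a hand-written nginx.conf it would be set in the http block roughly like this (the 64M value is an arbitrary example, and bear in mind Unraid regenerates its nginx config, so a manual edit may not survive a reboot):

```nginx
http {
    # nchan allocates channel data from a shared memory pool; when it is
    # exhausted you get "ngx_slab_alloc() failed: no memory" and HTTP 507s.
    nchan_max_reserved_memory 64M;
}
```

This treats the symptom rather than whatever is leaking channels in the first place, but it may buy time between reboots.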

 

  • 4 weeks later...

OK, so I had it running for about 49 days (uptime screenshot omitted), and then I got the freeze (screenshot omitted).

I can't reboot; it just freezes, and I cannot access the VM tab either.

Logging in is very slow but doable...

 

Any fix for this? I really hate having to cut power every 49 days...

 

 


Put me down as noticing this error too. Stopping my Nginx Proxy Manager docker stopped the spamming in my syslog. When you go and look at your running processes (in my example, 7565#7565), you will see it is associated with nginx.

 

I haven't touched anything on my server for the past 37 days, and this just started up, probably within the last 48 hours (or more).

 

Digging in....


Okay, looks like I was able to stop the nginx spamming.

 

I stopped all my dockers and VMs to figure out if I was having conflicts between one or more dockers. Couldn't find anything conclusive there. Checking my VMs, I couldn't find any culprits or configuration issues. Then I stopped the array to take a look at my network, SMB, and Docker settings. I turned off NetBIOS and the FTP server, and removed the loopback for IPv6 (which I don't use within my home network).

 

Not able to find anything else, I just started the array back up, and so far I haven't seen a single entry in the syslog since 8:40 this morning.

 

Will keep an eye on things and report back tomorrow morning.


This may sound odd, but could the error be linked to GitHub being down? I noticed that while the error was spamming I couldn't load my Plugins page; I looked into that further and found that can occur when GitHub is down. 12 hours later, GitHub is fine, the Plugins page can be accessed, and the errors have stopped. In addition, several people appear to get the error at the same time.

15 hours ago, enmesh-parisian-latest said:

This may sound odd, but could the error be linked to GitHub being down? I noticed that while the error was spamming I couldn't load my Plugins page; I looked into that further and found that can occur when GitHub is down. 12 hours later, GitHub is fine, the Plugins page can be accessed, and the errors have stopped. In addition, several people appear to get the error at the same time.

Owwww, very interesting.  I didn't know GitHub was down during that time.  That would make sense.  Thanks for that data point!  I can now stop looking at my log every 5-10 minutes, waiting for the message to pop up again.

  • 2 months later...

nginx worker process exited on signal 6

 

I'm getting mine when I open the terminal window from the web page. The terminal window 'blinks' and whatever you typed is gone. It's like the process is ending over and over and Unraid is respawning it every 1-2 seconds.

 

It MAY be related to the fact I've got two tabs with the web interface open, but now it's stopped doing it... since I'm posting about it, of course.

 

But I've had the 'terminal window blink' happen over and over recently. I thought it was the log getting full, but now it's only at 1%.

 

 

Oct 19 14:27:59 BlackTower nginx: 2020/10/19 14:27:59 [alert] 2764#2764: worker process 23358 exited on signal 6
Oct 19 14:28:01 BlackTower nginx: 2020/10/19 14:28:01 [alert] 2764#2764: worker process 23360 exited on signal 6
Oct 19 14:28:03 BlackTower nginx: 2020/10/19 14:28:03 [alert] 2764#2764: worker process 23389 exited on signal 6
Oct 19 14:28:05 BlackTower nginx: 2020/10/19 14:28:05 [alert] 2764#2764: worker process 23391 exited on signal 6
Oct 19 14:28:07 BlackTower nginx: 2020/10/19 14:28:07 [alert] 2764#2764: worker process 23396 exited on signal 6
Oct 19 14:28:09 BlackTower nginx: 2020/10/19 14:28:09 [alert] 2764#2764: worker process 23399 exited on signal 6
Oct 19 14:28:16 BlackTower nginx: 2020/10/19 14:28:16 [alert] 2764#2764: worker process 23401 exited on signal 6
Oct 19 14:28:17 BlackTower nginx: 2020/10/19 14:28:17 [alert] 2764#2764: worker process 23405 exited on signal 6
Oct 19 14:28:19 BlackTower nginx: 2020/10/19 14:28:19 [alert] 2764#2764: worker process 23406 exited on signal 6
Oct 19 14:28:21 BlackTower nginx: 2020/10/19 14:28:21 [alert] 2764#2764: worker process 23415 exited on signal 6
Oct 19 14:28:23 BlackTower nginx: 2020/10/19 14:28:23 [alert] 2764#2764: worker process 23416 exited on signal 6
Oct 19 14:28:25 BlackTower nginx: 2020/10/19 14:28:25 [alert] 2764#2764: worker process 23417 exited on signal 6
Oct 19 14:28:27 BlackTower nginx: 2020/10/19 14:28:27 [alert] 2764#2764: worker process 23420 exited on signal 6
Oct 19 14:28:29 BlackTower nginx: 2020/10/19 14:28:29 [alert] 2764#2764: worker process 23421 exited on signal 6
Oct 19 14:28:31 BlackTower nginx: 2020/10/19 14:28:31 [alert] 2764#2764: worker process 23426 exited on signal 6
Oct 19 14:28:33 BlackTower nginx: 2020/10/19 14:28:33 [alert] 2764#2764: worker process 23435 exited on signal 6
Oct 19 14:28:35 BlackTower nginx: 2020/10/19 14:28:35 [alert] 2764#2764: worker process 23436 exited on signal 6
Oct 19 14:28:37 BlackTower nginx: 2020/10/19 14:28:37 [alert] 2764#2764: worker process 23441 exited on signal 6
Oct 19 14:28:39 BlackTower nginx: 2020/10/19 14:28:39 [alert] 2764#2764: worker process 23444 exited on signal 6
Oct 19 14:28:41 BlackTower nginx: 2020/10/19 14:28:41 [alert] 2764#2764: worker process 23445 exited on signal 6
Oct 19 14:28:43 BlackTower nginx: 2020/10/19 14:28:43 [alert] 2764#2764: worker process 23446 exited on signal 6

 

 

  • 1 month later...
On 10/19/2020 at 2:33 PM, RealActorRob said:

...two tabs with the web interface open...

I was just watching my syslog being spammed with

nginx ... worker process ... exited on signal 6

2-3 times/second, and immediately upon finding and closing four stale Unraid web GUI tabs open across two machines, it stopped.

 

Hope this helps someone.

