[6.9.1] server reached max_children setting (50)



Mar 11 18:25:33 Tower php-fpm[12286]: [WARNING] [pool www] server reached max_children setting (50), consider raising it
Mar 11 18:53:15 Tower php-fpm[12286]: [WARNING] [pool www] server reached max_children setting (50), consider raising it
Mar 11 19:54:26 Tower php-fpm[12286]: [WARNING] [pool www] server reached max_children setting (50), consider raising it

 

Never seen this before; is anyone else on 6.9.1 seeing them?

Link to comment
6 hours ago, ljm42 said:

That's a new one. Can you upload your full diagnostics?

 

This is my test/dev server. I was copying VM img (500GB) files at the time from Prod to the second pool, as a backup before I upgrade it to 6.9.

tower-diagnostics-20210312-0708.zip

 

Also, as a side note, when I collected the diags all the drives were spun down, so they spin up to provide the smartctl data.

 

But now the array doesn't show this; all the drives show as spun down. Parity is only showing as active because I did a manual spin-up. I'm guessing this is due to the additional fix Tom put in for 6.9.1.

 

Is there a way with emcmd to trigger a spin-up, like we used to be able to do with mdcmd, or an array check? Polling doesn't check, as the array believes they are down.

 

Or add processing into sdspin to tell the array when it is used outside of the GUI?
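
As a side note on checking the actual power state from the shell: for ATA drives, hdparm's check-power-mode query is a generic Linux check (not an Unraid command, so it won't update the array's own idea of the state) that reports the state without waking the drive. A minimal sketch, using one of the devices from the smartctl output below:

root@Tower:~# hdparm -C /dev/sdg     # reports 'active/idle' or 'standby' without spinning the drive up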

 


root@Tower:~# smartctl -n standby /dev/sdg
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.21-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

root@Tower:~# smartctl -n standby /dev/sdg
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.21-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

root@Tower:~# smartctl -n standby /dev/sdf
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.21-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

root@Tower:~# smartctl -n standby /dev/sdi
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.21-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

root@Tower:~# smartctl -n standby /dev/sdk
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.21-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

root@Tower:~# smartctl -n standby /dev/sdl
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.21-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

root@Tower:~# smartctl -n standby /dev/sdj
SAS Assist Plugin: /dev/sdj is in standby mode (spun down), smartctl exits(2)
root@Tower:~# 

 

Edited by SimonF
Link to comment

I searched for that error:

  https://serverfault.com/questions/479443/php5-fpm-server-reached-pm-max-children/578673

My interpretation is that the system got bogged down under heavy load and requests from the webgui started piling up. Various parts of the webgui do poll for data, so it is possible that the server got so bogged down it couldn't keep up.


If you just installed My Servers, it adds one more polling script to the mix; maybe that was enough to push you over the edge. We are going to replace that with a websocket, so if that was the cause it will be resolved in a future update.

 

If polling is the cause then it shouldn't be a major problem, just that the UI won't update in real time while the system is under heavy load.
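
As a rough way to see whether requests really are piling up, you can count the active php-fpm workers against the pool limit. This is just a generic shell check; the config path is assumed to be the /etc/php-fpm.d/www.conf mentioned later in this thread:

root@Tower:~# ps aux | grep -c "[p]hp-fpm: pool www"       # number of active workers in the www pool
root@Tower:~# grep max_children /etc/php-fpm.d/www.conf    # the configured ceiling (50 here)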

Link to comment
On 3/11/2021 at 11:14 PM, SimonF said:

Also, as a side note, when I collected the diags all the drives were spun down, so they spin up to provide the smartctl data.

 

But now the array doesn't show this; all the drives show as spun down.

 

What is your Settings -> Disk Settings -> Tunable (poll_attributes) set to? The spin up/down status will lag that many seconds behind reality.
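
If you want to check that value from the shell rather than the GUI, it should be stored with the other disk settings on the flash drive; the path and key name here are an assumption:

root@Tower:~# grep poll_attributes /boot/config/disk.cfg    # value is in seconds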

Link to comment

OK, TBH I have not been following the spin up/down issues too closely; plenty of other bits to focus on ;) Maybe you can find another thread to add your experience to? I don't recall anyone mentioning diags in relation to spin up/down, and it will probably be missed in this thread.

Link to comment
22 minutes ago, ljm42 said:

OK, TBH I have not been following the spin up/down issues too closely; plenty of other bits to focus on ;) Maybe you can find another thread to add your experience to? I don't recall anyone mentioning diags in relation to spin up/down, and it will probably be missed in this thread.

Created a bug report for it.

Link to comment
  • 5 months later...

I have now upgraded my prod server to 6.9.2

 

And I have a few instances of this message in the log.

 

Is there a way to increase the count?

 

Aug 21 18:50:40 unraid php-fpm[11319]: [WARNING] [pool www] server reached max_children setting (50), consider raising it 

 

It normally seems to happen when some file process is running, e.g. Mover or a file sync using SyncToy.

 

 

Link to comment
  • 4 weeks later...
  • 1 month later...
On 8/21/2021 at 12:49 PM, SimonF said:

I have now upgraded my prod server to 6.9.2

 

And I have a few instances of this message in the log.

 

Is there a way to increase the count?

 

Aug 21 18:50:40 unraid php-fpm[11319]: [WARNING] [pool www] server reached max_children setting (50), consider raising it 

 

It normally seems to happen when some file process is running, e.g. Mover or a file sync using SyncToy.

 

 

I have also been getting this error with Radarr, Sonarr, Deluge, and Plex; then, when the server tries to run the mover, it spits out this error.

I have 120 threads, 128GB RAM, 2x 2TB NVMe BTRFS cache, 5x 16TB data drives, and 2x 16TB parity drives.

The cache has been running around 90%+ full, and now I'm getting this message along with huge lags and lockups.

 

Is there a way to increase the "max_children" setting to more than 50?

Link to comment
  • 3 months later...
2 minutes ago, superloopy1 said:

I can't see this bug report anywhere, and my system has started to spew out these messages similar to your own:

 

php-fpm[13772]: [WARNING] [pool www] server reached max_children setting (50), consider raising it

 

Was it ever resolved?

No, never found a fix. I'm on 6.9.2 now and have not seen the issue for a while.

Link to comment
  • 3 months later...

Upgraded without incident to 6.10.2.

Started up Docker in Settings.

It hangs 'mid process' and I never get control of the web GUI back.

syslog shows the same error reported:

php-fpm[25479]: [WARNING] [pool www] server reached max_children setting (50), consider raising it
nginx: 2022/06/10 20:14:12 [error] 25731#25731: *9910 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.1.129, server: , request: "POST /update.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.115", referrer: "http://192.168.1.115/Settings/DockerSettings"

 

ps aux | grep fpm
root      3724  0.0  0.0  89308 17284 ?        S    20:09   0:00 php-fpm: pool www
root      3936  0.0  0.0  88984 13072 ?        S    20:12   0:00 php-fpm: pool www
root      3945  0.0  0.0  89384 19696 ?        S    20:12   0:00 php-fpm: pool www
root      4379  0.0  0.0  88984 13024 ?        S    20:09   0:00 php-fpm: pool www
root      4380  0.0  0.0  89536 18748 ?        S    20:09   0:00 php-fpm: pool www
root      4381  0.0  0.0  89272 17924 ?        S    20:09   0:00 php-fpm: pool www
root      4591  0.0  0.0  89616 19344 ?        S    20:02   0:00 php-fpm: pool www
root      5150  0.0  0.0  89680 18872 ?        S    20:12   0:00 php-fpm: pool www
root      6877  0.0  0.0  89420 14516 ?        S    20:09   0:00 php-fpm: pool www
root      6897  0.0  0.0  89604 18280 ?        S    20:09   0:00 php-fpm: pool www
root      8120  0.0  0.0  89380 15056 ?        S    20:06   0:00 php-fpm: pool www
root      8180  0.0  0.0  88764 12356 ?        S    20:09   0:00 php-fpm: pool www
root      8272  0.0  0.0  89384 19660 ?        S    20:09   0:00 php-fpm: pool www
root      8310  0.0  0.0  89580 18624 ?        S    20:06   0:00 php-fpm: pool www
root      9391  0.0  0.0  89416 18104 ?        S    20:02   0:00 php-fpm: pool www
root      9825  0.0  0.0  89276 18024 ?        S    20:10   0:00 php-fpm: pool www
root     10258  0.0  0.0  89436 18152 ?        S    20:10   0:00 php-fpm: pool www
root     10268  0.0  0.0  89224 13772 ?        S    20:06   0:00 php-fpm: pool www
root     10312  0.0  0.0  88984 13032 ?        S    20:10   0:00 php-fpm: pool www
root     10987  0.0  0.0  89608 18728 ?        S    20:10   0:00 php-fpm: pool www
root     10997  0.0  0.0  89600 20244 ?        S    20:10   0:00 php-fpm: pool www
root     11245  0.0  0.0  89332 17712 ?        S    20:06   0:00 php-fpm: pool www
root     12175  0.0  0.0  89176 13880 ?        S    20:06   0:00 php-fpm: pool www
root     12343  0.0  0.0  89308 17924 ?        S    20:10   0:00 php-fpm: pool www
root     12707  0.0  0.0  88764 12368 ?        S    20:10   0:00 php-fpm: pool www
root     12801  0.0  0.0  89292 14012 ?        S    20:10   0:00 php-fpm: pool www
root     13556  0.0  0.0  89308 17364 ?        S    20:10   0:00 php-fpm: pool www
root     13863  0.0  0.0  89272 17448 ?        S    20:10   0:00 php-fpm: pool www
root     14913  0.0  0.0  88960 13064 ?        S    20:03   0:00 php-fpm: pool www
root     16507  0.0  0.0  89536 17204 ?        S    20:10   0:00 php-fpm: pool www
root     17076  0.0  0.0  89224 13540 ?        S    20:10   0:00 php-fpm: pool www
root     17103  0.0  0.0  89500 18192 ?        S    20:10   0:00 php-fpm: pool www
root     18364  0.0  0.0  89308 17956 ?        S    20:10   0:00 php-fpm: pool www
root     18369  0.0  0.0  89640 18776 ?        S    20:10   0:00 php-fpm: pool www
root     19493  0.0  0.0  89368 15568 ?        S    20:11   0:00 php-fpm: pool www
root     19972  0.0  0.0  89336 17996 ?        S    20:11   0:00 php-fpm: pool www
root     19997  0.0  0.0  89436 18292 ?        S    20:11   0:00 php-fpm: pool www
root     21225  0.0  0.0  88764 12380 ?        S    20:11   0:00 php-fpm: pool www
root     21516  0.0  0.0  89536 18776 ?        S    20:11   0:00 php-fpm: pool www
root     22317  0.0  0.0  89636 19764 ?        S    20:07   0:00 php-fpm: pool www
root     22693  0.0  0.0  89368 15576 ?        S    20:11   0:00 php-fpm: pool www
root     23299  0.0  0.0  89488 18216 ?        S    20:11   0:00 php-fpm: pool www
root     24413  0.0  0.0  89608 19840 ?        S    20:11   0:00 php-fpm: pool www
root     24414  0.0  0.0  89308 17524 ?        S    20:11   0:00 php-fpm: pool www
root     25479  0.0  0.0  85724 11780 ?        Ss   19:47   0:00 php-fpm: master process (/etc/php-fpm/php-fpm.conf)
root     26268  0.0  0.0  88764 12388 ?        S    20:11   0:00 php-fpm: pool www
root     27996  0.0  0.0  88984 13064 ?        S    20:11   0:00 php-fpm: pool www
root     28790  0.0  0.0  89268 18048 ?        S    20:11   0:00 php-fpm: pool www
root     29001  0.0  0.0  89436 18216 ?        S    20:05   0:00 php-fpm: pool www
root     30545  0.0  0.0   3980  2216 pts/0    S+   20:26   0:00 grep fpm
root     31566  0.0  0.0  89416 14564 ?        S    20:12   0:00 php-fpm: pool www
root     31593  0.0  0.0  89444 15260 ?        S    20:12   0:00 php-fpm: pool www


I have edited 

/etc/php-fpm.d/www.conf

and upped the limit to 75, and will watch what happens following

/etc/rc.d/rc.php-fpm restart
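
For anyone following along, the change amounts to something like the following. The directive name is the standard php-fpm one, the default line shown is an assumption, and since /etc lives in RAM on Unraid the edit would need to be re-applied after a reboot (e.g. from the go file):

# /etc/php-fpm.d/www.conf
pm.max_children = 75        # raised from the default of 50

# apply the change
/etc/rc.d/rc.php-fpm restart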

I had another browser open which had timed out; I hit refresh and got a positive response back.

I went back to the Docker settings page that was hanging and hit refresh, at which point both web gui sessions became unresponsive.

No fresh errors in syslog (yet), and only a handful of php-fpm processes showing when running a ps.

I restarted the whole stack 

/etc/rc.d/rc.nginx restart 
/etc/rc.d/rc.nginx reload 
/etc/rc.d/rc.php-fpm restart 
/etc/rc.d/rc.php-fpm reload

Web GUI now dead:

nginx: 2022/06/10 20:43:03 [alert] 25731#25731: *30351 open socket #36 left in connection 8
nginx: 2022/06/10 20:43:03 [alert] 25731#25731: *30369 open socket #47 left in connection 11
nginx: 2022/06/10 20:43:03 [alert] 25731#25731: *117 open socket #23 left in connection 12
nginx: 2022/06/10 20:43:03 [alert] 25731#25731: *30432 open socket #38 left in connection 13
nginx: 2022/06/10 20:43:03 [alert] 25731#25731: *30347 open socket #22 left in connection 30
nginx: 2022/06/10 20:43:03 [alert] 25731#25731: *30341 open socket #35 left in connection 37
nginx: 2022/06/10 20:43:03 [alert] 25731#25731: aborting

At this point I'm stuck (with a pre-clear running, or at least it was running).

 

Edit/Update: the following morning the increased limit was (b)reached overnight, but at least (as I expected it would, since it's not related to the GUI) the disk pre-clear continues to work uninterrupted.

php-fpm[18361]: [WARNING] [pool www] server reached max_children setting (75), consider raising it
preclear_disk_Z1C0A05MFJDH[28721]: Zeroing: progress - 60% zeroed @ 226 MB/s
root: /mnt/cache_app: 564.4 GiB (606063992832 bytes) trimmed on /dev/nvme0n1p1
root: /mnt/cache: 7.5 GiB (8014397440 bytes) trimmed on /dev/sdj1
Plugin Auto Update: Checking for available plugin updates
Plugin Auto Update: Checking for language updates
Plugin Auto Update: Community Applications Plugin Auto Update finished
preclear_disk_Z1C0A05MFJDH[28721]: Zeroing: progress - 70% zeroed @ 212 MB/s
preclear_disk_Z1C0A05MFJDH[28721]: Zeroing: progress - 80% zeroed @ 172 MB/s

The original action that (seemingly) started this, namely bringing Docker online in Settings, allegedly worked:
 

# /etc/rc.d/rc.docker status
status of dockerd: running

However, none of the Docker containers are actually running, and issuing a docker stop command results in no action; a Ctrl+C is required to regain control.

 

Googling around, I came across a thread which jogged my memory that I do run 2 network cards (albeit on the same subnet, unlike the case raised there).
I tried connecting to the GUI on the other IP, and bingo, complete control of the GUI returned... until I went onto the Settings/Docker tab and it immediately hung again.

 

Edited by DeathStar Darth
Added additional observations
Link to comment
  • 2 months later...
  • 3 months later...

Anyone ever figure this out? I'm running 6.11.5 and experience this too. What's an appropriate value to raise it to? Is there any negative impact from raising it? Do we know what causes this?

php-fpm[12157]: [WARNING] [pool www] server reached max_children setting (50), consider raising it

 

Link to comment
  • 3 months later...

Since this is one of the first topics that comes up when searching for an answer to this problem, I just wanted to point out that I fixed my issue with this error by removing the "GPU Statistics" plugin I had installed. I've been running Unraid 6.10.3 now for two weeks straight without any crashes. I'm not sure if this is a fix for everyone who is encountering this, but based on my diagnostic logs it seems that plugin was my issue.

Link to comment
  • 2 weeks later...

I've been seeing this issue now as well. 
php-fpm[4558]: [WARNING] [pool www] server reached max_children setting (50), consider raising it.

 

On 3/15/2023 at 6:03 PM, Floppi3706 said:

Since this is one of the first topics that comes up when searching for an answer to this problem, I just wanted to point out that I fixed my issue with this error by removing the "GPU Statistics" plugin I had installed. I've been running Unraid 6.10.3 now for two weeks straight without any crashes. I'm not sure if this is a fix for everyone who is encountering this, but based on my diagnostic logs it seems that plugin was my issue.

 

I may remove the GPU stats plugin and see if the issue resolves. 

Edited by dannyb2100
Link to comment
  • 3 weeks later...

I am on 6.11.5 and was getting the same error. I had to restart php-fpm constantly. I attempted to go into the settings of GPU Statistics and it would instantly lock up again. I removed the plugin and it seems to have resolved my issue. I can move from screen to screen without it being sluggish or unresponsive.

Link to comment
  • 2 months later...
  • 3 weeks later...

I don't have the GPU Stats plugin and get this error logged anyhow. Since unRAID primarily works off a web UI, and is typically extended by running heaps of plugins that seem to rely on this setting being set correctly (probably preferably higher than the default), I'd suggest LimeTech look into this a little more closely.

 

I recently re-did my docker config since I switched from a btrfs vdisk to folders on a ZFS cache pool, and the more dockers I added back from my stored templates, the slower the UI got; I bet this has something to do with it as well. I have over 90 dockers (before anyone feels an itch: I'm not here to justify my setup, and you will not see me entertain this discussion...).

 

unRAID's web UI HAS to scale better with large needs. This has been an issue for years now, and as you extend your use cases for unRAID, the UI punishes you for it.

Link to comment
