Can't stop the Nextcloud docker container or reboot the system



Hello.
A few days ago I ran into a problem: I can't stop the Nextcloud (linuxserver) docker container. So I tried to reboot the whole system, but that failed too; Unraid got stuck and couldn't reboot. Next I tried the "powerdown -r" command; on the second attempt it worked and the system rebooted (a parity check started after boot). A few hours later the bug appeared again: the Nextcloud container was stuck, and so was the system. I tried the "diagnostics" command, but it failed with endless waiting.

The bug appears after trying to refresh content in the Nextcloud Android app. It has occurred 4 times now, always after refreshing the mobile app.

I also tried "docker kill nextcloud", but the command doesn't work; it just waits endlessly.
Next I tried to stop the Docker service entirely from the GUI ("Enable Docker: No"), but the Docker status stayed at running. After a reboot I nuked docker.img, set up all my containers again, and Nextcloud worked well until today, when the same problem happened. I nuked docker.img again, and that helps, but nuking docker.img every 3-4 days is not a solution...
I found out that the docker-proxy process related to Nextcloud is <defunct>. It looks like the problem is network-related.
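
In case it helps, a rough sketch of how to spot the defunct proxy process (assuming the container is named nextcloud; adjust to your setup):

# list docker-proxy processes; a <defunct> one shows up with state Z (zombie)
ps -eo pid,stat,cmd | grep '[d]ocker-proxy'

# show the ports the nextcloud container publishes (each published port gets its own docker-proxy)
docker port nextcloud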

Syslog, Nextcloud logs, and the stuck docker-proxy process:

syslog: https://pastebin.com/Uc5yuYx5 (after problem appeared)
nextcloud logs: https://pastebin.com/hBm9uE8b (after problem appeared)

stuck docker-proxy process:

[Screenshot: the stuck docker-proxy process]

 

My hardware:

Ryzen 2700, 16 GB ECC, ASUS TUF B450M-PRO PLUS. Docker containers are installed on a Samsung NVMe drive (unassigned device, not cache).

I'd appreciate any help in solving this problem.

Edited by SuberSeb
Link to comment
  • 2 weeks later...
  • 4 months later...
  • 3 months later...

Did you ever figure this out? I am having a similar issue. I noticed I couldn't log into my Nextcloud today and attempted to restart the container in the GUI, but it wouldn't stop. I tried the following commands in an SSH console, but they just waited without doing anything:

docker stop nextcloud
docker kill nextcloud
docker rm --force nextcloud


Trying to download the Unraid diagnostics in the webGUI results in endless waiting. Typing "diagnostics" in an SSH console resulted in "Starting diagnostics collection..." for over 12 hours, and nothing was created in the /boot/logs folder.

Interestingly, on the "/Main" tab in the webGUI, my Unassigned Devices plugin won't load. I believe this issue may have overlapped with a parity check, but I could be mistaken or that could be coincidental. My server won't shut down because my Docker service won't stop, all because I can't stop the Nextcloud container.

Link to comment

Following up: the same issue is happening here as well. I'm really diving into it tonight in hopes of figuring out what's going on...

 

There are other posts/threads that seem to describe the same issue, dating back to Unraid v6.5 and Nextcloud 18.x (I am on Nextcloud 20 with Unraid 6.9 Beta 35, I think). I'm starting to think that the workload of running Nextcloud causes the issue to present itself, but Nextcloud isn't the actual problem; it might be hardware related instead.

 

@CyaOnDaNet @p3rky2005

If you SSH into your servers, what is the output of the following command? It will list any processes stuck in an uninterruptible I/O wait state (uninterruptible meaning that we can't kill them, even with kill -9). In my case I have a few php-fpm: pool processes, which are running from the Nextcloud docker.

ps axl | awk '$10 ~ /D/'
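
As a variation (just a sketch), the same filter with the kernel wait channel (WCHAN) added can hint at where the tasks are blocked, e.g. filesystem vs. block layer:

# same D-state filter, but also print the kernel function each task is waiting in
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'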

 

If I try to run lsof on any of my drives, that also enters a D state and never finishes, which is why I think the WebUI crashes when I try to collect diagnostics. In that case I have to restart it with the following commands:

/etc/rc.d/rc.php-fpm restart
/etc/rc.d/rc.php-fpm reload

 

Are you all using LSI cards? Mine is an H310 PERC card flashed to IT mode, but I don't know what firmware version or anything off the top of my head.

 

For reference, here are other posts/topics I have found detailing similar problems. 

https://forums.unraid.net/topic/99669-nextcloud-locking-up/?do=findComment&comment=919516

https://forums.unraid.net/topic/48383-support-linuxserverio-nextcloud/page/163/?tab=comments#comment-919547

https://forums.unraid.net/topic/48383-support-linuxserverio-nextcloud/page/112/?tab=comments#comment-798246

https://forums.unraid.net/topic/83174-docker-container-hangs/

https://forums.unraid.net/topic/90676-high-load-but-low-cpu-utilization-docker-frozen-unable-to-stop-array-reboot-needed/

Link to comment

Incidentally, I should add that I've waited until now to post this, as I've had 3 days stable. My issue was actually caused by trying to get Nextcloud not to fill up the docker image on upload: I added a /tmp mapping and it worked for about a week, then started showing issues, but none of the logs show anything.

 

I removed that /tmp mapping and now it's running stable again.

Link to comment
15 hours ago, jcsnider said:

Other posts/threads that seem to describe the same issue, dating back to Unraid v6.5 and Nextcloud 18.x.

I am running Unraid v6.8.0-rc7 and Nextcloud 19.0.5 (I know my Unraid version is old, but I needed the Linux kernel in this version and they reverted it in all v6.8.x releases after this RC. I will be updating to v6.9-RC1 soon, after I read about any potential issues people may be having).

15 hours ago, jcsnider said:

If you SSH into your servers, what is the output of the following command?

Unfortunately, I gave up and performed an unsafe shutdown. My Nextcloud docker isn't even running right now because I didn't have time to deal with this if it happened again, but I will start it back up later today.
 

15 hours ago, jcsnider said:

Are you all using LSI cards?

Yes, I have an LSI Logic SAS 9207-8i Storage Controller (LSI00301).
 

1 hour ago, p3rky2005 said:

Incidentally, I should add that I've waited until now to post this, as I've had 3 days stable. My issue was actually caused by trying to get Nextcloud not to fill up the docker image on upload: I added a /tmp mapping and it worked for about a week, then started showing issues, but none of the logs show anything.

 

I removed that /tmp mapping and now it's running stable again.

This is interesting; I completely forgot I made changes to my container back in early October. I too had issues with Nextcloud filling up my docker image during file upload. I have had Nextcloud for a while but never really used it. I started uploading video files and noticed my docker image filling up, so I looked into it but didn't find a whole lot of info. I added the docker path mapping /tmp ---> /mnt/user/appdata/nextcloud/temp and that stopped my docker image from filling up; it just points the container's /tmp directory to my cache drive. Has anyone else having this issue made any changes to the /tmp path? What about you @jcsnider?
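
For anyone wanting to replicate that mapping outside of the Unraid template, it's just one more volume on the container, roughly like this (host paths here are examples from my setup):

# linuxserver/nextcloud with the container's /tmp redirected onto the cache/appdata share
docker run -d --name=nextcloud \
  -p 443:443 \
  -v /mnt/user/appdata/nextcloud:/config \
  -v /mnt/user/nextcloud:/data \
  -v /mnt/user/appdata/nextcloud/temp:/tmp \
  linuxserver/nextcloud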

Link to comment
16 hours ago, jcsnider said:

If you SSH into your servers, what is the output of the following command?

Wow, so I started up my container to get the version info for my previous comment and a couple of minutes later it 504'd. I ran the command and got the following:
 

root@Nighthawk:~# ps axl | awk '$10 ~ /D/'
4    99  6321 14590  20   0 127440 37480 -      D    ?          0:00 php7 -f /config/www/nextcloud/cron.php
4    99  7676 14590  20   0 127440 39696 -      D    ?          0:00 php7 -f /config/www/nextcloud/cron.php
5    99 19593 14579  20   0 260996 24004 -      D    ?          0:00 php-fpm: pool www
5    99 20230 14579  20   0 261060 23412 -      D    ?          0:00 php-fpm: pool www
4    99 25703 14590  20   0 127440 39340 -      D    ?          0:00 php7 -f /config/www/nextcloud/cron.php

 

Link to comment

I'm guessing that if you SSH into your Unraid server and try to run lsof on any location (e.g. lsof /mnt/user), that will also never finish executing (requiring a new terminal instance in order to issue further commands).
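
One way to test it without tying up the session is to run lsof in the background, something like:

# run lsof in the background so the shell stays usable even if lsof hangs in a D state
lsof /mnt/user > /tmp/lsof_user.txt 2>&1 &

# check later whether it ever finished
jobs -l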

 

If you try to fetch diagnostics it will never finish executing either, and if you try to collect those through the WebUI then eventually the WebUI will start giving 500 errors until it's restarted.

 

I never assigned a /tmp docker path, but I modified the PHP config within Nextcloud to use a subdirectory in the /data folder to store upload data as it's being received, so effectively I am doing the same thing to keep my docker image from growing dramatically.

 

I needed to get everything back up and running last night, so I rebooted. I went into my Nextcloud config and changed all my shared paths' access properties to 'Read/Write - Shared' instead of just 'Read/Write'. My instance is working for now but I don't expect that it will last.

 

I guess in each of our cases we have designated a temporary upload location that is on our arrays. I don't see how that'd be a problem, but for debugging purposes maybe I will change it to an unassigned device and see how that works.

 

Edit: Are either of you using the 'filesystem_check_changes' => 1, flag in your Nextcloud config.php?

Edited by jcsnider
Link to comment
19 hours ago, jcsnider said:

I never assigned a /tmp docker path, but I modified the PHP config within Nextcloud to use a subdirectory in the /data folder to store upload data as it's being received, so effectively I am doing the same thing to keep my docker image from growing dramatically.

I guess I should have been a bit more specific earlier. I added 'tempdirectory' => '/tmp/nextcloudtemp/', to my config.php in Nextcloud, added upload_tmp_dir = /tmp/php/ to my php-local.ini, and then mapped /tmp --> /mnt/user/appdata/nextcloud/temp, which would technically be /data/temp.

 

19 hours ago, jcsnider said:

Are either of you using the 'filesystem_check_changes' => 1, flag in your Nextcloud config.php?

I was not, but after you brought it up I read about it and it sounds beneficial to our use case. I'm adding it now; did you have this enabled before? @jcsnider

 

19 hours ago, jcsnider said:

I went into my Nextcloud config and changed all my shared paths' access properties to 'Read/Write - Shared' instead of just 'Read/Write'

I will do that too.
 

19 hours ago, jcsnider said:

My instance is working for now but I don't expect that it will last.

Is it still working?

Link to comment
19 hours ago, jcsnider said:

I'm guessing that if you SSH into your Unraid server and try to run lsof on any location (e.g. lsof /mnt/user), that will also never finish executing (requiring a new terminal instance in order to issue further commands).

By the way, I just wanted to confirm that this was true for me. After it locked up, I performed a shutdown, and though it took an extra minute or two, it was actually able to shut down safely. The other 2-3 times this happened, I could not shut down because the Docker service couldn't stop Nextcloud. Maybe because I caught it so fast yesterday, it didn't completely lock everything up.

Link to comment
6 hours ago, CyaOnDaNet said:

I was not, but after you brought it up I read about it and it sounds beneficial to our use case. I'm adding it now; did you have this enabled before? @jcsnider

Yes, I was using it before. I'm not sure whether it was a problem, but in trying to debug these issues I have stopped using that option and instead modified my scripts/applications that store files within Nextcloud to also trigger an occ files:scan. I think that option also made page load times longer, but that could have been my imagination.

 

6 hours ago, CyaOnDaNet said:

Is it still working?

Yes, but my Unraid has only been running for 1 day, 19 hours. Historically, when everything has broken down, uptime has been greater than 3 days, sometimes upwards of a week or two.

 

6 hours ago, CyaOnDaNet said:

The other 2-3 times this happened, I could not shut down because the Docker service couldn't stop Nextcloud. Maybe because I caught it so fast yesterday, it didn't completely lock everything up.

Very interesting. I wonder if in your case you would have also been able to collect diagnostics. I have never been able to get a clean shutdown or collect diagnostics once things have started to lock up.

 

 

On a side note: How often is your Mover configured to run? I had mine set to every hour but now I have changed it to once a day (I have far fewer files being added to my array). Maybe it was interfering somehow? Idk.

Link to comment
14 hours ago, jcsnider said:

Very interesting. I wonder if in your case you would have also been able to collect diagnostics. I have never been able to get a clean shutdown or collect diagnostics once things have started to lock up.

Maybe, I just didn't even think to try that time because the other two times it didn't work.

 

14 hours ago, jcsnider said:

Historically, when everything has broken down, uptime has been greater than 3 days, sometimes upwards of a week or two.

The two main times it happened to me were at about 30 days of uptime. The third time I was only at maybe a week of uptime, but I had the container offline the whole time and only had it started for the last 5 minutes or so.
 

 

14 hours ago, jcsnider said:

On a side note: How often is your Mover configured to run?

Mine runs once every 8 hours. I noticed that the two main times it happened, my mover was stuck. I ran the command mover stop and it did actually stop, but that did not help me safely shut down or get the diagnostics data. I have my parity check scheduled monthly on the first, and both times I noticed the issue it was November 3rd and December 3rd. I thought that maybe a combination of parity check and mover caused the issue, but I have run manual parity checks since then with no issue, so that was coincidental. The last time it happened, when I could actually shut down, my mover was not running; it hadn't been triggered yet. Maybe a combination of Nextcloud locking up and the mover running prevents safe shutdown (and maybe diagnostics collection too?).


On a side note, my Nextcloud is still running fine after the changes I made, but it hasn't been very long. I will keep this thread updated on how it goes. If I can make it to 30+ days of uptime again, then I'll know the issue is probably solved. Unfortunately, I plan on shutting down soon to adjust some things, so it may be a while before I get to 30+ days again.

Link to comment

Interesting. In response to a few things I've read looking back at this: currently I've left my Nextcloud without a temp mapping and let it use the docker image as it wants for now. Mainly I've not changed anything else, so that I know where the issue was.

 

I've had another Unraid restart since, as I was playing with my VM, but Nextcloud itself has been working flawlessly again as it should: no more 504 errors, no more not being able to restart. I'd say I've had 5 days or so stable now.

 

Just to give you the info on it, though: I wasn't able to replicate the fault under my own loads (uploading from my phone, browsing on my laptop and desktop), but my mother works in an office that syncs about 8-10 machines using my Nextcloud, and it seemed to be when they were hitting it that it would stop. As I work days and they use it during the day, I narrowed it down to around 10am; it would lock up just as they were starting to load it up. They don't have a lot of data, but I know 3 of them use the web interface with the OnlyOffice integration and the rest just use the desktop sync client.

 

Mover for me runs once a week currently. Since I'm running a UPS, I don't see much of an issue with data sitting on my cache drive, especially if the files the office is using are frequently accessed.

 

I have no LSI cards yet (I need to look into them; my rig is full right now). My machine is a humble little array of:

4x 6 TB Western Digital Red drives (1 parity)

1x 1 TB Western Digital Black NVMe (cache / VMs)

1x 2 TB Seagate USB 2.0 unassigned drive (CCTV)

 

running Unraid 6.8.3

and Nextcloud 20.0.3 (the problem occurred in all upgrades from 18.0.0.6, which I think I started on; in total I think it took me 5 updates to get to 20.0.3). FYI, if you've not updated to 20, do it; the dashboard is a nice feature.

 

Edited by p3rky2005
adding to bottom line
Link to comment

Mine has locked up again twice today so I am attempting to dig a little deeper.

 

It appears that if you take the process IDs of the processes found in D states using the command I posted above (ps axl | awk '$10 ~ /D/'), you can get a list of open file handles for those processes with the following command (substituting each PID):

ls -l /proc/<pid>/fd

Here's my output:

[Screenshot: file handle listings for the stuck php-fpm processes]
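
To save some typing, a small loop (just a sketch) that prints the open file handles for every process currently in a D state:

# list file handles for every process stuck in an uninterruptible (D) state
for pid in $(ps axl | awk '$10 ~ /D/ {print $3}'); do
  echo "=== PID $pid ==="
  ls -l /proc/$pid/fd
done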

 

My assumption is that all of these php-fpm workers are getting stuck on disk I/O, and having 4 processes trying to write to that SQLite db could be the problem. I don't have Redis or anything set up to handle transactional file locking either, so my db is likely getting hit a lot for that.

 

Are you all using SQLite databases? I was previously using MariaDB (I think things went well then), but the MariaDB docker kept getting corrupted and it was tedious having to run commands all the time to fix it and get it running again.

 

Edit: Switching to a MySQL database and using a Redis container for caching did not solve the issue. It seems I can recreate the problem fairly consistently by booting up my mobile app, for some reason. The next debugging step is to use a temporary upload location that is off the array/primary cache.

 

Edit 2: Moving my upload and PHP tmp directories onto an unassigned disk seems to have helped, but the temporary directory was getting a lot of sess_XXXXXXXXXXXXXXXXXXXX files written to it; it turns out they are encrypted PHP session files. To further optimize and reduce disk usage, I added the following lines to my nextcloud/php/php-local.ini file so that sessions would also be handled by Redis:

session.save_handler = redis
session.save_path    = tcp://172.17.0.X:6379
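
To confirm sessions actually end up in Redis after that change, something like the following should list PHPREDIS_SESSION keys (assuming the Redis container is simply named redis and has no password, as in the save_path above):

# list the PHP session keys written by the phpredis session handler
docker exec redis redis-cli --scan --pattern 'PHPREDIS_SESSION*'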

 

After rebooting my Nextcloud container again, things seem to be working well. I will let it run for a while and see if it lasts.

Edited by jcsnider
Link to comment
  • 4 weeks later...

I too am now having the same issues as above... no point in duplicating.

 

I had no issues on NC 17 or 18. It was only when I went to NC 20.0.4 that this started to occur... so is it NC?

As mentioned above, I recently added (on NC 20) a binding/redirect from the docker /tmp path back to my cache disk. After just locking up again, I'm going to remove that... and see what happens.

 

What has been consistent with the "crash" is accessing NC via the iPhone iOS app. For whatever reason, that seems to be the thing that locks up the docker. Not always, but at least once a day or so, when I try to access the app on my iPhone it will make a server request and then just die. I'm curious if others see that too. Could it be the iOS app? Other access with the NC client on Windows 10 hasn't locked anything up yet...

 

Really hoping someone figures this out!!!

 

 

Link to comment

I got the same issue today while trying to stop the array.
The Nextcloud docker would not stop and prevented the system from stopping the array.

I am running Unraid OS 6.9.0-rc1 and Nextcloud 20.0.3 with MariaDB and SWAG as the reverse proxy.
There's no LSI card installed.
Nextcloud is running on a Samsung 970 Evo Plus 2TB in a cache pool.

 

@jcsnider I tried the command you mentioned:

ps axl | awk '$10 ~ /D/'

It gave me the following output:

[Screenshots: ps output showing the processes stuck in a D state]

I picked a few PIDs and listed their open file handles with

ls -l /proc/<pid>/fd

as you can see above.
I then tried to restart/reload nginx and php-fpm to see if that would help, which it didn't.

/etc/rc.d/rc.nginx restart
/etc/rc.d/rc.nginx reload
/etc/rc.d/rc.php-fpm restart
/etc/rc.d/rc.php-fpm reload

Afterwards I restarted MariaDB manually to see if Nextcloud would stop while MariaDB was still running, which wasn't the case.

Rebooting the system wouldn't work either, so I had to do an unclean restart, with a parity check now running.

I attached the syslog as Unraid wasn't able to generate diagnostics.

 

In the past I noticed the Nextcloud docker not stopping a couple of times, so there's obviously an issue.
Anyway, while it's running there are no errors or failures for me.

syslog.txt

Link to comment
  • 3 weeks later...
  • 2 months later...

I'm having this issue too. How did you end up stopping/killing Nextcloud when it locks up? Is rebooting the server the only way?

 

  

On 1/30/2021 at 4:30 PM, CyaOnDaNet said:

Well, after 34 days of server uptime, it finally happened again. I will try the Redis container for Nextcloud caching that @jcsnider suggested and report back how it goes. You should hear from me in about 45 days if it seems to be working, or earlier if it doesn't. 🤞

 

Did you have any additional issues since you made the change to Redis?

Edited by bobokun
Link to comment
On 4/4/2021 at 7:31 AM, bobokun said:

How did you end up stopping/killing Nextcloud when it locks up?

Unfortunately, I have tried everything I could think of and have no idea how to stop it without forcing an unsafe shutdown. One time it shut down like it was supposed to (I think because I caught it so fast), but all other times it hung. I close out everything I can (all other dockers and VMs), issue a shutdown, and then hold the power button 15 minutes later. You have to do a parity check after this, but otherwise I had no noticeable ill effects.

 

 

On 4/4/2021 at 7:31 AM, bobokun said:

Did you have any additional issues since you made the change to Redis?

I can happily say that I am at 67 days of server uptime with no issues. The Redis changes seemed to be the last thing I needed to make Nextcloud stable again. I have changed so much and it's been a while, but I will try to list everything I did below so others can hopefully benefit:


Starting with docker template changes:

  1. Changed Nextcloud appdata app path access mode to "RW/Shared" (I did this originally because my temp folder was in this path but I left it after I moved the temp folder)
  2. Changed my '/mnt/user/appdata/nextcloud-temp':'/tmp' path mapping to "RW/Shared"
  3. Added "REDIS_HOST", "REDIS_HOST_PORT", and "REDIS_HOST_PASSWORD" variables to point to new redis container


In my nextcloud appdata directory under /www/nextcloud/config/config.php I changed/added the following things:

  'filesystem_check_changes' => 1,
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'memcache.distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => 
  array (
    'host' => 'THE SAME IP USED IN THE CONTAINER FOR REDIS_HOST',
    'password' => 'THE SAME PASSWORD USED IN THE CONTAINER FOR REDIS_HOST_PASSWORD',
    'port' => THE_SAME_PORT_USED_IN_THE_CONTAINER:REDIS_HOST_PORT,
  ),
  'filelocking.enabled' => 'true',
  'datadirectory' => '/data',
  'tempdirectory' => '/tmp/nextcloudtemp',



In my nextcloud appdata directory under /php/php-local.ini I changed/added the following things:
 

upload_tmp_dir = /tmp/php/
session.save_handler = redis
session.save_path    = "tcp://SAME_IP_FOR_REDIS_THAT_I_USED_IN_CONTAINER:REDIS_PORT?auth=REDIS_PASSWORD"

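
As a quick sanity check that Nextcloud picked up the Redis locking backend (a sketch assuming the linuxserver container layout, with occ under /config/www/nextcloud and php7 as seen in the cron entries earlier in this thread):

# print the configured locking memcache from inside the container; expect \OC\Memcache\Redis
docker exec -u abc nextcloud php7 /config/www/nextcloud/occ config:system:get memcache.locking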
 

Link to comment
  • 2 months later...
On 4/7/2021 at 11:27 PM, CyaOnDaNet said:

[The full reply from above is quoted here: the unsafe shutdown workaround, the 67 days of uptime after the Redis changes, and the complete list of docker template, config.php, and php-local.ini changes.]

I'm currently having this problem. After 6 or so days my Nextcloud docker hangs and is unkillable (short of a dirty shutdown). I was already using Redis, but I have updated my config and php-local.ini files in a couple of places where you had lines that I was missing. I also made the two shares RW/Shared. Here are the lines I didn't have:

 

config:

'filesystem_check_changes' => 1,

'memcache.local' => '\\OC\\Memcache\\APCu',      (from redis)

'memcache.distributed' => '\\OC\\Memcache\\Redis', 

'password' => 'THE SAME PASSWORD USED IN THE CONTAINER FOR REDIS_HOST_PASSWORD',

'filelocking.enabled' => 'true',

'tempdirectory' => '/tmp/nextcloudtemp',

 

php-local.ini:

upload_tmp_dir = /tmp/php/

session.save_handler = redis

session.save_path = "tcp://SAME_IP_FOR_REDIS_THAT_I_USED_IN_CONTAINER:REDIS_PORT?auth=REDIS_PASSWORD"

 

However, Nextcloud was throwing server errors that prevented me from logging in. I narrowed it down to these three lines between the two files:

 

config:

'password' => 'THE SAME PASSWORD USED IN THE CONTAINER FOR REDIS_HOST_PASSWORD', (I put my plaintext password here, probably not ideal)

 

php-local.ini:

session.save_handler = redis

session.save_path = "tcp://SAME_IP_FOR_REDIS_THAT_I_USED_IN_CONTAINER:REDIS_PORT?auth=REDIS_PASSWORD"  (I put my plaintext password here, probably not ideal)

 

I'm not sure why the errors were being thrown, but for now I have omitted these lines. I'm going to try this config and see if it's stable, but does anyone have any idea why the server errors would have been thrown by these lines?
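
One thing that might help narrow it down is checking whether the Redis container accepts that password at all (a sketch; the container name and password are placeholders):

# a wrong password returns an AUTH/WRONGPASS error instead of PONG
docker exec redis redis-cli -a 'MY_REDIS_PASSWORD' ping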

Link to comment
  • 3 weeks later...

Unfortunately, this did not solve my problem and Nextcloud has yet again become unstoppable. I looked through my PHP error logs and found this error just before Nextcloud crashed:

 

"Server reached pm.max_children setting (5)"

 

 I have since added these lines to my www2.conf file:

 

pm = ondemand
pm.max_children = 300
pm.process_idle_timeout = 30s
pm.max_requests = 500
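
To see whether the worker limit is actually being approached, the pool children can be counted from the Unraid shell (a sketch, assuming the container is named nextcloud):

# count running php-fpm pool workers inside the nextcloud container (docker top uses the host's ps)
docker top nextcloud | grep -c 'php-fpm: pool www'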

 

Update 1: This did resolve my PHP errors. However, Nextcloud still crashed and later caused the WebGUI to crash as well after a period of time (30 minutes). After some MORE digging, I found this error (with my domain redacted for privacy) happened just before the crash:

 

2021/07/17 16:36:05 [error] 414#414: *301015 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 172.18.0.2, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "nextcloud.mydomain.com", referrer: "https://nextcloud.mydomain.com"

 

However, this error seems to happen every 5 minutes and might just be a symptom of the problem. It might be possible that I have a RAM allowance issue, which would also somewhat explain the WebGUI crashing. I have reduced the Nextcloud logging from debug (0) to info (1) to see if that's the issue as well (there are tons of logs with debug enabled). At this point, Nextcloud seems to randomly crash after a couple of weeks, at which point a reboot resolves the situation. I will report back here if I find a solution and would advise anyone reading this to do the same. Cheers!

Edited by huquad
Update
Link to comment
