
Docker causing high reads and high load average


Shonky


I've been struggling with this one for a while.

 

I have quite a few Dockers running but nothing extremely intensive for long periods. Current running list is:

Gitea
nginx
sabnzbd
unifi-controller
zigbee2mqtt
jellyfin
sonarr
PlexMediaServer
qbittorrent
homeassistant
pihole
resilio-sync
CrashPlanPRO
gotify
NodeRed-OfficialDocker
docker-bubbleupnpserver
 

After some period of time (sometimes it occurs within minutes, sometimes it runs fine for weeks), the CPU load average rapidly climbs to around 100. When running normally it usually sits around 1-1.5.
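Worth keeping in mind when reading that number: on Linux the load average counts tasks blocked in uninterruptible I/O wait as well as runnable tasks, so a load of ~100 on a box that still feels responsive usually means ~100 tasks stuck on I/O rather than CPU saturation. A quick way to see both at once:

```shell
# Load average: first three fields are the 1/5/15-minute averages
cat /proc/loadavg

# vmstat's "b" column counts processes blocked in uninterruptible sleep;
# during one of these spikes it should be large while CPU stays mostly idle
vmstat 1 3
```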

 

This appears to be a result of something reading the docker image file amongst other things.

 

iotop shows a high read rate on loop2 (/dev/loop2 is the docker image), and a number of the apps also start reading for no apparent reason. (The screenshot below is from a while ago, so the running dockers are slightly different.) There is no particular reason all these apps would be reading simultaneously.

 

[screenshot: iotop showing a high read rate on loop2 and several containers reading]
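For anyone wanting to reproduce this view, the loop-device mapping and the per-process reads can both be checked from the shell. This is a sketch; /mnt/cache/docker.img is unRAID's usual image location, so adjust the pattern if yours differs:

```shell
# Confirm which loop device backs the docker image
LOOPDEV=$(losetup -l 2>/dev/null | awk '/docker\.img/{print $1}')
echo "docker.img is backed by ${LOOPDEV:-<not found>}"

# Watch per-process I/O in batch mode (-b), only active processes (-o),
# aggregated per process (-P), sampling every 5 s for 3 iterations
iotop -obP -d 5 -n 3
```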

 

and top (albeit at a different time):

 

[screenshot: top output during a load spike]

 

The unRAID server itself remains quite responsive, but the dockers all basically stop operating properly: pihole stops serving DNS queries, the web servers of other containers stop responding, and "docker ps" sometimes hangs (well, gets blocked more or less indefinitely).
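When "docker ps" hangs like that, the daemon and the container processes are typically stuck in uninterruptible sleep (state "D") waiting on the loop device. A quick sketch to list who is blocked at that moment:

```shell
# Processes in state "D" (uninterruptible sleep, usually blocked on I/O).
# During a spike, expect dockerd and the container workloads to show here.
ps -eo state,pid,comm | awk '$1 ~ /^D/ {print $2, $3}'
```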

 

The "fix" is generally to shut down a docker, and immediately everything returns to normal. Usually I shut down CrashPlanPRO. Sometimes I can't shut it down cleanly and have to kill some of the CrashPlan processes. Sometimes I can catch it early and a simple restart works fine. It's definitely not just CrashPlan, but that's the one I usually restart. Once restarted it runs fine again until the next time it happens.

 

The docker image is 30GB with 9.3GB used. I have tried deleting and recreating the image from scratch with no improvement.

 

I found some references to *writing to* the docker image, but my problem is reading. There was this post on reddit:

 

Edit: I see this one here recently posted too:

 

Unraid 6.8.3

HP Gen 8 Microserver with a Xeon E3-1265L 4C/8T CPU and 10GB RAM.

Cache drive is a Samsung 830 256GB on a 3 Gb/s SATA port due to the Microserver's limited SATA ports. docker.img is on the cache drive.

Array is 5 x 4TB + parity. 2 drives are in an external enclosure via a Marvell 88SE9230.

 

Anyone have any ideas? It basically kills all my docker instances, and it happens semi-regularly but randomly.

Edited by Shonky

Could you be hitting 100% RAM usage when this happens? I suspect mine was related to my Plex docker transcoding to RAM, maxing the RAM out and then starting to page out to the SSD, which immediately requires some massive reads into the Plex docker (loop2). I have disabled RAM transcoding now and I haven't seen the issue since.
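A quick way to check this theory the next time the load spikes. This assumes the usual setup where Plex's RAM transcode directory points at a tmpfs such as /tmp or /dev/shm; if that tmpfs fills, the kernel starts evicting page cache and paging, which shows up as reads:

```shell
# Overall memory picture: look at "available" and whether swap is in use
free -m

# tmpfs fill levels; a nearly full /tmp or /dev/shm during a spike
# would support the RAM-transcoding explanation
df -h /tmp /dev/shm
```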


It just happened again. top says 2000MB cached, and top in the earlier screenshot shows 1800MB cached. So I don't think RAM is running out.

 

[screenshots: top and memory usage during the spike, showing ~2000MB cached]

 

I also remembered I can simulate the symptoms by doing a force recheck of a big torrent (a 46GB file) in qbittorrent. That doesn't help much, of course, other than showing that high read IO on my cache drive (the qbittorrent download location) brings docker down.
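The same repro works without qbittorrent at all: just saturate the cache SSD with sequential reads and watch the dockers stall. A sketch, where BIGFILE is a placeholder for any large file on the cache drive:

```shell
# Hammer the cache drive with reads; while this runs, docker containers
# on the same device should start misbehaving if I/O saturation is the trigger
BIGFILE=/mnt/cache/somebigfile   # placeholder: any large file on the cache
dd if="$BIGFILE" of=/dev/null bs=1M status=progress
```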

 

 


If your docker image is on this one cache drive then we are likely having the same issue. We are saturating the IO on the SSD and docker gets choked out. 
 

I just upgraded to the 6.9 beta and set up a 2nd cache pool for this reason. My docker and system files are on their own SSDs now, separate from Plex and other downloads. 
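Short of a second pool, docker itself can also throttle a single container's reads on a block device via the standard `--device-read-bps` flag, so one greedy container can't starve the loop device. A sketch only: the device path and image name below are placeholders, and on unRAID the flag would go in the container template's "Extra Parameters" field rather than a raw `docker run`:

```shell
# Cap this container's reads from the cache SSD at 50 MB/s.
# /dev/sdb and your/crashplan-image are placeholders; use your actual
# cache device and the image you run.
docker run -d --name crashplan \
  --device-read-bps /dev/sdb:50mb \
  your/crashplan-image
```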


Yes, but that's the symptom, not the cause. It just shows that lots of reads block docker from working properly.

 

The real issue is why something all of a sudden starts hammering the docker image file.

 

In my case, moving the docker image file to another disk probably won't help. Lots of IO on whatever drive contains the docker image in turn affects docker, so if the high reads occur against the docker image in its new location, docker will still be affected.

