Cleaning out Docker image


dalben


My docker usage has sat constant for the past half a year. There is nothing LT can do when users have misbehaving or misconfigured dockers.

 

tl;dr: YOU have an issue with YOUR configuration of YOUR dockers that only YOU can fix.

For the most part I would tend to agree; however, there has been some evidence that container updates increase docker.img utilization (most likely related to copy-on-write). That said, like you I have seen no abnormal increases in my utilization, and have never run out of space using 20G.

 

But any sudden and drastic increase in image utilization would have to be related to configuration and/or misbehaving apps. (I believe there was a report somewhere about a log file for a particular app quickly growing into the GB range.)
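A quick first check (assuming a docker CLI new enough to support these flags) is to list containers with their writable-layer sizes, since anything an app writes inside the image shows up there:

```shell
# SIZE shows each container's writable layer, i.e. what it has written
# inside docker.img (the "virtual" figure includes shared image layers)
docker ps -a -s

# Sorting the human-readable sizes puts the biggest offender last
docker ps -a -s --format '{{.Size}}\t{{.Names}}' | sort -h
```

A runaway log file or transcode directory inside a container shows up here as steady growth of that container's SIZE figure.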

Link to comment

@Brit  I posted in here to see if anyone else has the same problem so we can collaborate to fix it. If you don't have the problem, that's great. But telling those of us who do have the problem that it's our problem and we have to fix it ourselves is not helpful. What would be helpful is telling us which dockers you are using that have not given you any problems, so we can eliminate them from the list of potentially misbehaving dockers.

Link to comment

Running 24/7

 

Needo / Couchpotato

Needo / MariaDB

LinuxServer / NZBGet

Needo / Plex

Needo / Sonarr

Hurricane / Ubooquity

 

Running On Demand

 

cadvisor

Dolphin

Handbrake

MKVToolNix-GUI

 

Link to comment

Running 24/7

 

binhex/arch-couchpotato

binhex/arch-delugevpn

binhex/arch-sabnzbd

binhex/arch-sickrage

binhex/arch-sonarr

emby/embyserver

smdion/reverseproxy

 

Running on demand or not fully configured and utilized.

 

sparklyballs/krusader

yujiod/minecraft-mineos

lsiodev/minetest

sparklyballs/tftp-server

 

10GB docker image, constant at 64% utilization.

 

I don't think it's a docker app that's misbehaving. I'm betting there is a setting in the app itself that should point to the mapped appdata location but hasn't been changed, so it is still writing to the image. I'd go over EVERY settings and configuration page and examine each listed location to make sure it points to the correct mapped spot.
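To audit this from the command line rather than clicking through every app UI, the mappings docker actually applied can be dumped per container (a sketch; the .Mounts field assumes a docker version that exposes it, roughly 1.8+):

```shell
# Print host -> container path pairs for every container; any path an app
# writes to that is NOT among the listed destinations ends up in docker.img
for c in $(docker ps -a --format '{{.Names}}'); do
  echo "== $c"
  docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' "$c"
done
```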

 

Link to comment

The ones where we overlap are binhex-sonarr and binhex-couchpotato. I too have not had any problems with those two. The one docker that I found an issue with was needo-plex, but I think the same issue would arise with any plex docker.

 

The issue with Plex is that in order to transcode outside of the docker.img, you have to change a setting in Plex that is NOT visible in the docker setup screen. Mapping /tmp in the plex docker to /tmp does not solve the problem.

 

LMJ42 mentions this in his post above:

 

1) Add a volume mapping for /transcode to /tmp (or a location on your cache drive)
2) In Plex, go to Settings -> Server -> Transcoder.  Hit Show Advanced. In the "Transcoder temporary directory" put "/transcode"
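As a sketch, step 1 corresponds to a -v flag on the container (image name needo/plex as used in this thread; host paths are examples, adjust to your setup):

```shell
# Bind a host folder over /transcode so transcode temp files land outside
# docker.img; /mnt/cache/transcode is an example cache-drive location
docker run -d --name plex \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/cache/transcode:/transcode \
  needo/plex
```

Step 2 still has to be done inside Plex itself, since the container mapping alone cannot tell the application which directory to use.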

 

After changing this setting in Plex, my docker utilization has stopped rising by gigabytes a day. It still rises a little each day, but I no longer have to keep increasing the size of docker.img every few days.

 

It may be that there are other dockers that have a similar setting in the application that cannot be changed in the docker setup screen. If so, it would be a good idea to make a list of such applications and let others know where the relevant settings are in each app.

 

Brit's list of dockers may be particularly useful. Since he doesn't share our problem, apparently all of his dockers can be configured correctly from the docker settings screens. This is useful information for the rest of us to know.

 

Link to comment

@Squid There are two that we overlap on:  Needo-MariaDB, and Needo-Plex.  In diagnosing my issues, I've had Needo-MariaDB turned off for days when the utilization of docker.img increased by gigabytes a day, so I'm pretty sure Needo-MariaDB was not causing my problem.

 

Needo-Plex, however, was the docker that seems to account for most of my problem. The issue is that in order to make Plex transcode outside of docker.img, you have to change a setting that is not on the docker setup screen. See my previous post for details.

 

There may be other dockers that have similar application settings that cannot be configured in the docker setup screen.  And, there may be some apps that generate huge log files.  Maybe we should always map the log files for any docker app to the cache drive?

 

 

Link to comment

Most apps don't have a configurable setting for where to store the logs. And I'm not even sure what happens if you create a mapping for a folder that already exists within the container (e.g. /var/log to /tmp). Does it store it outside? Inside? Both? Someone like sparklyballs would know.
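For what it's worth, a bind mount shadows whatever the image already had at that path, and writes go to the host side. A small throwaway experiment (using busybox as an example image) shows this:

```shell
# Mount an empty host dir over a path that exists in the image, then write
# to it; the file appears on the host, not in the container's layer
mkdir -p /tmp/logtest
docker run --rm -v /tmp/logtest:/var/log busybox \
  sh -c 'echo probe > /var/log/probe.txt'
ls /tmp/logtest
```

So mapping a log directory out of a container should keep its logs out of docker.img, at the cost of hiding whatever the image originally shipped at that path.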

Link to comment

Just in case anybody is interested: the way I configure the transcode directory for the plex and plex pass images I create is simply to set the environment variable to point at the chosen folder. In this case I just create a tmp folder under /config and get it to store temporary transcode files in there. Link to an (old) article showing this:

 

https://support.plex.tv/hc/en-us/articles/200273978-Linux-User-and-Storage-configuration

 

As for tracking down unruly docker containers: I once had to debug an issue with miniDLNA creating a VERY large log file inside the container. One of the techniques I used to track it down was to search the loopback-mounted docker image for large files:

 

find /var/lib/docker/btrfs -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

 

The above will display all files 50MB or larger. The container id is shown in the path, as is the file's location within the container's filesystem. With a bit more work somebody could map the container id in the path to the friendly name of the container for a more human-friendly result :-).
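A rough sketch of that id-to-name mapping (assuming the subvolume names correspond to container ids, as they appear to in this layout, and a docker CLI with --format support; unresolved ids may belong to deleted containers or image layers):

```shell
# Re-run the large-file search and try to resolve each subvolume id to a
# container name via `docker ps -a`
find /var/lib/docker/btrfs -type f -size +50000k -printf '%s %p\n' | \
while read -r size path; do
  id=$(echo "$path" | sed 's|.*/subvolumes/\([0-9a-f]*\)/.*|\1|')
  name=$(docker ps -a --no-trunc --format '{{.ID}} {{.Names}}' | \
         awk -v id="$id" '$1 == id {print $2}')
  echo "${name:-unknown ($id)}: $path ($((size / 1024 / 1024))MB)"
done
```

As a side note, the awk in the original one-liner prints only field $9, so filenames containing spaces get cut off at the first space; printing the whole line with `-printf` as above avoids that.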

Link to comment

I'm also getting this Docker utilization alert. Mine actually goes up to 98% sometimes before dropping back down to normal. The only docker I'm using is needo/plex. During this time, if I'm using Plex, the movie will stop playing as if the file has an error. I can then normally restart it and it will play. Frustrating. I need to pay attention to exactly which files it happens on. I set transcoding to be done on my cache drive. This seems to help, but I still have the problem... Not installing any other dockers until I get this figured out.

 

It really sounds like the transcoding is still being done in the docker image.  There are two steps to moving it outside of the docker image:

1) Add a volume mapping for /transcode to /tmp (or a location on your cache drive)

2) In Plex, go to Settings -> Server -> Transcoder.  Hit Show Advanced. In the "Transcoder temporary directory" put "/transcode"

That should solve it for you.

 

Edit - I expanded on this idea and added it to the Docker FAQ

 

If Plex is your only Docker, and transcoding really is happening outside of the docker image, and you are still getting to 98%, then your docker image is probably just too small.  Mine is 20gb.

 

Thank you.  I had done both steps but maybe I messed something up.  I'll double check again.  Either way, after doing both steps before, I got the alert one more time.  Since then, I increased the container size to 20 gb and haven't gotten an alarm.  I've since started installing other dockers and so far, so good.  Crossing my fingers!

Link to comment

Just to chime in and say that I've had this error and clearing out dangling images as suggested has worked for now.  Thanks for the advice, and hopefully something can be built into unRaid to purge these via the UI (or via a docker toolkit).
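For anyone who hasn't seen the commands, a sketch of the usual dangling-image cleanup (the dangling filter has been in docker for a while, but treat its availability on very old versions as an assumption):

```shell
# Untagged layers left behind by image updates show up as <none>:<none>
docker images -f dangling=true

# -q prints only the ids, which rmi then deletes; skip this step if the
# listing above came back empty
docker rmi $(docker images -qf dangling=true)
```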

 

I've set Plex to transcode on my cache drive, but I'm not 100% sure whether it's doing it.  Need to investigate later.  I'm using the Limetech repo version, but am happy to switch if it's not considered ideal.

 

I'm also running Logitech Media Server, that does some audio transcoding, and I'm not sure whether it saves log/error files (or whether you can clear or redirect them).  That's potentially something to look into, since all other dockers are more app-based (Handbrake and the like).  My issue doesn't seem so dramatic as some people's on here though

Link to comment

Thanks Binhex, I used this command to find anything over 1GB. The container that starts with e93 is binhex/sonarr, but I am unable to find any dockers that start with 92e. Is there something I am missing here? I am searching using the docker ps -a command.

 

/var/lib/docker/btrfs/subvolumes/92e0d85384a0b19ed2dc7c91e6c2471f66d7f08b1c4905c642ee6c09a3e34b12/The: 993M

/var/lib/docker/btrfs/subvolumes/92e0d85384a0b19ed2dc7c91e6c2471f66d7f08b1c4905c642ee6c09a3e34b12/The: 1.2G

/var/lib/docker/btrfs/subvolumes/92e0d85384a0b19ed2dc7c91e6c2471f66d7f08b1c4905c642ee6c09a3e34b12/The: 2.8G

/var/lib/docker/btrfs/subvolumes/92e0d85384a0b19ed2dc7c91e6c2471f66d7f08b1c4905c642ee6c09a3e34b12/The: 2.7G

/var/lib/docker/btrfs/subvolumes/92e0d85384a0b19ed2dc7c91e6c2471f66d7f08b1c4905c642ee6c09a3e34b12/The: 1.1G

/var/lib/docker/btrfs/subvolumes/e93e17d188f4b6ef2916abf382b43e3ed490c616477db527b15f846d8b80d56e/The: 2.7G

 

UPDATE:  I looked for any files over 50M in that container and added up the files and I am already at almost 20G.  This seems to be the culprit of my growing docker size.  How can I find out what container this is?
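Totals like that are quicker to get per subvolume than by adding up individual files; a sketch:

```shell
# Usage per subvolume in MB, biggest last; the directory names are the ids
# to match against `docker ps -a` (ids with no match may be orphaned
# subvolumes from deleted containers)
du -sm /var/lib/docker/btrfs/subvolumes/* 2>/dev/null | sort -n | tail -5
```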

 

UPDATE #2:  Apparently this is a known issue with subvolumes not getting deleted properly. According to https://github.com/docker/docker/pull/15801 it should be fixed in Docker 1.9. Does anyone know if this will be in unRAID 6.2?

Link to comment

Hmm, interesting bug raised there. It could explain a growing unRAID docker image if a user is doing a lot of churn, i.e. creating and deleting docker containers. It still doesn't cover the case where people see usage grow whilst not creating or deleting any containers (just to be clear, I'm NOT seeing this).

Link to comment

Using needo-plex.  I was able to reduce and possibly completely remove the ballooning docker utilization by mapping the Plex transcoding outside of the docker.img file.  I haven't watched it closely to see if it is completely stable or not but I've stopped seeing it go up GB at a time.  I can see the transcode temporary files being generated in my external folder. I did not make any settings in the Plex WebUI related to transcode location - the temporary transcode location setting is blank. 

 

To me this was a smoking gun for my problem. I thought that Plex was not transcoding anything, but as it turns out it's occasionally transcoding audio and often transcoding between containers (direct stream vs. direct play), which generates the same large temporary files. I'm surprised more people aren't seeing this issue, since even direct stream with container transcoding can easily generate many GB of temporary files, and none of the Plex containers have any instructions on how to map transcode directories, or why it should be done.

Link to comment

Does stopping and starting also create new subvolumes?  I have a number of dockers that I only spin up when needed. Dolphin, caview, crashplan...

 

I wonder if this is also contributing.  I am going to recreate my docker image this weekend to get rid of all those ghost subvolumes that are eating up a lot of space.

Link to comment

On a container that follows the true docker tenets, stopping and starting does not create subvolumes. However, nearly every single docker container automatically updates its program at that time. That may create new subvolumes.

 

Those docker containers go against the fundamental premise of docker.

Link to comment
+1

 

Nothing drives me more insane than a container auto-updating when the update is broken, with no easy way to backtrack. That's why I try to avoid auto-updates at all costs. The vast majority of updates that come through on containers are bug fixes that don't affect most users. But the mentality of having to have the latest CouchPotato is pervasive, even though there is nothing anyone can see wrong with the older versions.

 

Docker is/was designed so that what works on one user's system works on all systems. But auto-updates mean that everyone winds up on a different version, which goes against that fundamental premise.

 

Link to comment
  • 2 weeks later...

I'll debunk a few theories.  It's not plex, and it's not auto-updating containers.

 

 

At noon today, my utilization was at 79% of 20gb. It jumped up to 92% in two hours... I've watched it grow in the past week or so, but this was the most severe jump I've seen.  I cannot think of any activities that would cause me to lose 13% of 20gb in just two hours.

 

I don't use plex, and in the last 2 days none of my apps have auto-updated. Even if they had, they wouldn't have been restarted for the changes to take effect.

 

I'm running Sync, Sonarr, RDP-Calibre, Pf-logstash, NZBGet, MariaDB, Couchpotato, and Cadvisor.

 

Cadvisor thinks all of my containers are using less than 1gb each in virtual size. Unraid is reporting much larger use:

 

Label: none  uuid: dc11f2e2-8281-4f64-9bd5-89139eb04b02
Total devices 1 FS bytes used 15.64GiB
devid    1 size 20.00GiB used 19.29GiB path /dev/loop0
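Those figures come from btrfs itself; assuming the image is loopback-mounted at /var/lib/docker, they can be reproduced with:

```shell
# "used" in `show` counts allocated chunks, which can run ahead of the
# actual file bytes; `df` breaks the allocation into data vs metadata,
# which helps explain a 19.29GiB "used" against 15.64GiB of file data
btrfs filesystem show /var/lib/docker
btrfs filesystem df /var/lib/docker
```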

 

NZBGet, sonarr, and couchpotato have their download/temp directories mapped out of the containers.

 

 

What is the effect of memory usage? If you are running too many containers, or one container goes rogue on memory consumption, would that affect the docker filesystem with swaps?

Link to comment
  • 2 weeks later...

Pretty new to unRAID but loving it so far. However, the one problem I've been having is exactly what is described in this thread: my docker image utilization just keeps going up and up. I use CouchPotato, NZBGet, and Plex heavily, and it seems both Couch and Plex have gotten massive. According to cAdvisor, Plex has multiple instances of over 800MB each (all of which are active and not dangling, tried that), and Couch is over 2GB now. I tried mapping the /temp folders by going into each docker, adding /temp, and mapping it to the app's application data folder. However, I am not sure the programs even recognize /temp, as the folders are all empty. I'm at 100% image utilization, which is frustrating because I already bumped my image size to 12GB.

 

Anyone here have advice besides "check for dangling images"? It seems like these applications are building really large temp repositories, but I can't find them / haven't been able to remap them.

Link to comment

Have you tried mapping /tmp rather than /temp?  On Linux /tmp is the traditional location for temporary files.
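A quick way to check where the app is actually writing (the container name CouchPotato is an example; substitute your own):

```shell
# See whether the candidate temp dirs even exist in the container and how
# big /tmp has grown; a large /tmp here means temp files are still being
# written inside the image
docker exec CouchPotato sh -c 'ls -ld /tmp /temp 2>/dev/null; du -sh /tmp'
```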

Link to comment
