Docker high image disk utilization: why is my docker image/disk getting full?



10 minutes ago, Squid said:

Could be runaway logging, downloads being saved into the image, etc

 

Couple of entries in the docker FAQ about this topic

 

Based on the earlier info in this thread, I did read the FAQ and looked into those issues. None of my dockers should be downloading anything that I am aware of (no torrent or TV clients or the like); I have not transcoded anything in Plex, and my /transcode directory is mapped from the array anyway; and the reason I bashed into each running container was to look at the virtual filesystems for any signs of excess logging.

 

My understanding (please correct me if I am wrong) is that those logs would show up in the container filesystem. So, if the container logs are not stored in "[CONTAINER]/var/log", how do I find out what may be doing excessive logging?

  • 3 months later...

Ok, so I need some guidance here... why the heck is my docker image still getting full? I already increased it from 20GB to 60GB, and now I'm getting the image-full warning again... currently at 99%!

 

I don't have a ton of containers, and as far as I can tell from my mappings nothing should be downloading into the docker image itself... everything is mapped to a share. I feel like it might be something with resilio-sync, since those are the newest containers I've added, but even there everything is mapped out.

 

Is there nothing that will tell me where the space is being used?

 

Here are my containers and mappings...

 

What am I missing??

docker.jpg


13 hours ago, Energen said:

I already increased it from the 20gb to 60gb

Definitely the wrong idea. If you have things correctly configured it's unlikely you would need anywhere near 20G.

 

As Squid said, you have some application(s) writing to a folder that isn't mapped. Common mistakes are not beginning a path with / or using the wrong upper/lower case.

 

Each application must be configured so it is only writing into paths that are mapped to Unraid storage as specified in the volume mappings for each container. Do you understand docker volume mappings?
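As an illustration of the "leading slash / wrong case" mistakes mentioned above, here is a small sanity-check helper (a sketch, not part of Unraid or Docker): the host side of a volume mapping must be an absolute path, and on Unraid it should already exist with the exact case, otherwise Docker quietly creates a new empty directory and your data lands somewhere you didn't intend.

```shell
# Sketch: sanity-check the host side of a docker volume mapping.
# A host path must be absolute (start with /) and should already
# exist with the exact upper/lower case.
check_host_path() {
  case "$1" in
    /*) if [ -d "$1" ]; then echo "OK: $1"; else echo "MISSING: $1"; fi ;;
    *)  echo "NOT ABSOLUTE: $1" ;;
  esac
}

# check_host_path mnt/user/appdata    -> NOT ABSOLUTE: mnt/user/appdata
# check_host_path /mnt/user/appdata   -> OK (if the share exists)
```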

 

Have you looked at the docker FAQ? There is a lot there about this problem as well as other good info.

 

https://forums.unraid.net/topic/57181-real-docker-faq/

 

I recommend after you get things straightened out that you decrease the image back to 20G. That way you will know sooner rather than later if you still have a problem.


I asked this earlier in this thread, but I will ask again: how can one determine which container is causing the excessive usage? I have used the "docker images" command, and all the images together add up to less than 6GB, while unRAID is telling me that I am at 83% usage (33GB) of my 40GB docker image. I have bashed into each and every container I use and run "du -shx /" to get a usage report on the root (meaning docker.img loop) partition, and the sizes agree with the "docker images" output.

 

So how exactly am I supposed to troubleshoot this 33GB image usage when all the diagnostics available to me are telling me that I am only using 6GB?
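For reference, one way to dig further is from the Unraid console rather than from inside the containers, since the per-container layers live under /var/lib/docker inside docker.img. This is a sketch; the subdirectory names under /var/lib/docker depend on the storage driver, so treat the example paths as assumptions to adapt:

```shell
# Sketch: list the five largest entries under a directory, to hunt
# down where the space inside docker.img is actually going.
biggest() {
  du -sk "$1"/* 2>/dev/null | sort -rn | head -n 5
}

# Example targets (paths assume Docker's standard layout):
# biggest /var/lib/docker/containers    # per-container data and json logs
# biggest /var/lib/docker               # which subsystem holds the space
```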

 

Thanks.


The only thing that seems to stand out is in Sync: the default folder location was something like /sync/Resilio Sync or /root/Resilio Sync. Neither of those is actually used, though; the shared folders are saved on shares. Those seem to be the only paths that aren't mapped to a share... could that be the issue?

 

I read the FAQ, but apart from those paths everything seems to be mapped correctly... for the most part, nothing else would be downloading enough external data to fill up a multi-GB docker image.

 

I might delete all the Sync containers and start over anyway, as I seem to have forgotten the GUI password for one of them, lol. I have 3 Sync containers for remote and local sync... wish there was a better way to do it, or that it could all be done through one.

1 hour ago, CJW said:

So how exactly am I supposed to troubleshoot

Unraid version 6.6.5 has a function to calculate container sizes and associated usage.

 

[screenshot: container size report]

 

When the "writable" section becomes large, it is usually an indication that a path is set wrong and data is written inside the container.

You can control the "log" file sizes by enabling log rotation in the Docker settings.
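For reference, on a stock Docker install (outside Unraid's settings page), the equivalent of that log-rotation toggle is the json-file logging options in /etc/docker/daemon.json; the size values here are illustrative, not recommendations:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "1"
  }
}
```

Docker only applies these options to containers created after the daemon is restarted, so existing containers keep their old logging settings until they are recreated.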

 

2 hours ago, Energen said:

everything seems to be mapped correctly

Typically, if you have an incorrect Host path, the symptom would be RAM filling and data not surviving a reboot. This is because the Unraid OS (the Host) is in RAM, so any path that isn't mounted storage is in RAM.

 

Filling docker image would be due to some setting within the application itself that causes something to be written to a path that isn't mapped to a container volume. This is because the OS within the container is in the docker image, and any path that isn't mapped gets stored in docker image.


I know you guys keep suggesting that this is a user/config error time and time again, but after a year or so of struggling with this, my issue turned out to be Jackett.

 

I don't really know much about Docker, and no one gave me a good way to calculate disk usage per container, so I just deleted one container after another, in order of least important. My docker image was growing by about 2-3% per day, so I deleted one container each day to see when the increase stopped. The culprit turned out to be Jackett. It was likely saving logs or something somewhere it shouldn't have, as it was otherwise configured correctly. As I mentioned, I've been struggling with this for a long time, and I've made sure that every configurable setting in every container was configured correctly.

 

Something else you can try: earlier in this thread, someone answered my question about getting a shell inside a container. If you have time, you could open a shell, use a disk/folder usage command to narrow down exactly which folders/files are the cause, and then submit a bug report to the maintainers.
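That narrowing-down step can be sketched as a depth-limited usage report. The helper below is an illustration (run inside a container shell, e.g. via `docker exec -it <name> sh`); it shows which top-level directories hold the data so you can drill down one level at a time:

```shell
# Sketch: report the ten largest directories one level below the
# given path (default /), staying on one filesystem (-x) so mapped
# volumes don't drown out the container's own writable layer.
usage_report() {
  du -xk -d 1 "${1:-/}" 2>/dev/null | sort -rn | head -n 10
}

# Example, from inside a container:
# usage_report /          # find the big top-level directory
# usage_report /var       # then drill into it
```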

 

Hope this helps.

43 minutes ago, trurl said:

The most recent versions of Unraid make this very easy. Simply click on the icon for the container and select >_Console.

My Unraid is set to auto-update but is currently at v6.4.1, and I don't see this option. I assume it's not in the main releases yet.

  • 2 weeks later...
On 11/9/2018 at 1:29 PM, bonienl said:

Unraid version 6.6.5 has a function to calculate container sizes and associated usage.

 

When the "writable" section becomes large, it is usually an indication that a path is set wrong and data is written inside the container.

You can control the "log" file sizes by enabling log rotation in the Docker settings.

 

I am still looking at the same problem. I have a 40GB docker.img, and I am getting high-usage warnings. As of a couple of days ago, I was getting usage warnings at 33GB, and I took the attached screenshot. It adds up to no more than 10GB, even with nearly 5GB of log usage on pihole. None of my writables are big. No diagnostic information I have yet found comes close to identifying 30+GB of docker usage, so I have no idea what to fix.

 

I have repeatedly verified all my path mappings, I have run in-container disk usage analytics, and I have run docker container usage analytics.  I still have over 20GB of reported usage that is completely unaccounted for by any method available to me.

dockers-20181110.png

5 hours ago, CJW said:

I have repeatedly verified all my path mappings

Just in case you didn't see this: mappings themselves aren't the cause of the docker image filling.

On 11/9/2018 at 2:18 PM, trurl said:

Typically, if you have an incorrect Host path, the symptom would be RAM filling and data not surviving a reboot. This is because the Unraid OS (the Host) is in RAM, so any path that isn't mounted storage is in RAM.

 

Filling docker image would be due to some setting within the application itself that causes something to be written to a path that isn't mapped to a container volume. This is because the OS within the container is in the docker image, and any path that isn't mapped gets stored in docker image.

 

7 hours ago, Squid said:

Curious if this script https://forums.unraid.net/topic/48707-additional-scripts-for-userscripts-plugin/?page=9&tab=comments#comment-683480 will return different values (run through user scripts)

Just ran it, and got similar results (this is several days newer than the previous screenshot):

Ampache Size: 1.4G Logs: 78.0kB
CUPS Size: 732M Logs: 367.0B
HandBrake Size: 252.7M Logs: 112.6kB
mongo Size: 355M Logs: 20.7kB
nextcloud Size: 186M Logs: 2.7kB
openvpn-as Size: 209M Logs: 13.0kB
pihole Size: 483M Logs: 2.7GB
plex Size: 400M Logs: 4.1kB
RDP-Calibre Size: 1.4G Logs: 3.1MB

Now, the script did take quite a while to churn out the pihole entry... but that could just be from counting the log size. I really suspect that pihole is the culprit, but I have no idea where the extra storage usage could be.


Did a scrub and didn't find any errors.

 

I do have a few orphaned containers (different versions of nextcloud and pihole that I have played with), but since they aren't running, they shouldn't be growing either, correct? The image is growing by about 1% per day.

13 minutes ago, CJW said:

Did a scrub and didn't find any errors.

 

I do have a few orphaned containers (different versions of nextcloud and pihole that I have played with), but since they aren't running, they shouldn't be growing either, correct? The image is growing by about 1% per day.

The orphaned containers won't grow but of course they will take up space until they are removed.

  • 1 month later...
  • 3 months later...
On 4/11/2018 at 9:23 AM, vizi95 said:

I had the same issue.  It has to do with docker log file sizes.


Just run this and your utilization will come down a lot:

 

truncate -s 0 /var/lib/docker/containers/*/*-json.log

 

It's safe to run while the containers are still running.

 

Thank you!  This solved my issue.

I was doing a lot of operations in Plex, which I believe caused all sorts of logging. I got the warning at 71% usage; by the time I went to investigate it was at 78%, and by the time I found this thread and read to page 2 it was at 97% and I had to stop all of my containers. Truncating the logs immediately dropped my usage down to 28%. All I can think is that Plex logs heavily, and those logs are normally trimmed every night, but since I was doing so many operations it spun out of control before the trim could run?

 

No idea.  It was increasing so fast I didn't have time to mess around.
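For anyone else landing here, the truncate approach quoted above can be wrapped in a small helper that first shows which logs are actually large, then empties them. This is a sketch; the glob assumes Docker's standard json-file log layout. Truncating (rather than deleting) keeps the containers' open file handles valid, which is why it is safe while they are running:

```shell
# Sketch: show the sizes of the given log files (largest last),
# then empty them in place with truncate.
truncate_logs() {
  du -h "$@" 2>/dev/null | sort -h   # see which logs are the big ones
  truncate -s 0 "$@"                 # empty them without deleting
}

# Usage on the Unraid console:
# truncate_logs /var/lib/docker/containers/*/*-json.log
```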

