Docker high image disk utilization: why is my docker image/disk getting full?



OK... I'm not sure why I keep getting this stuff. 

 

Docker critical image disk utilization: 23-02-2020 16:12
Alert [TOWER] - Docker image disk utilization of 92%
Docker utilization of image file /mnt/user/system/docker/docker.img

 

Container sizes:

Name                              Container     Writable          Log
---------------------------------------------------------------------
KrusadeR                            2.47 GB        12 MB      81.5 kB
Lidarr                              1.29 GB          0 B      12.8 MB
radarr                              1.12 GB      24.9 MB      8.59 MB
MineCraftServer                      899 MB          0 B      12.8 MB
SonarR                               622 MB      21.1 MB      2.83 MB
Netdata                              566 MB       267 MB      21.0 MB
dupeGuru                             491 MB          0 B      12.8 MB
PlexOfficial                         482 MB      2.44 MB       573 kB
pihole                               341 MB        37 MB       107 kB
HandBrake                            315 MB          0 B      12.8 MB
SabNZBd                              259 MB       303 kB       131 kB
tautulli                             256 MB       123 MB      2.70 MB
Grafana                              233 MB          0 B      9.22 kB
YouYube-DL                           178 MB          0 B      12.8 MB
uGet                                 118 MB          0 B      12.8 MB
transmission                        80.3 MB          0 B      12.8 MB
unifi-poller                        10.9 MB          0 B      18.5 MB

 

I've put this into each of the dockers

--log-opt max-size=50m --log-opt max-file=1
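
If you want to confirm whether the logs are actually what is eating the space, a quick check from the host goes something like this - it assumes the default json-file log driver and that docker.img is mounted at /var/lib/docker (the Unraid default):

# Size of each container's json log file, largest last
du -h /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -h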

 

This is my Sab config:

[screenshots: SABnzbd folder configuration]

 

What in the world am I missing here? This worked fine for the last year, and now the notifications are driving me nuts.

 

Link to post

Popular Posts

Odds on either sab or deluge (or possibly sync) is winding up downloading into the docker image (and things like that don't really show up under the virtual size). Check your path mappings on the temporary downloads folder.

Most causes of docker.img getting that big are either out-of-control logging by the apps, or apps downloading directly into the image. Both circumstances are covered in the docker FAQ.

Unraid version 6.6.5 has a function to calculate container sizes and associated usage. When the "writable" section becomes large, it is usually an indication that a path is set incorrectly, so an app is writing inside the image.


Your temporary downloads folder within sab is /mnt/user/downloads.  But the container path that you've passed through is /mnt/usr/downloads

 

Since those 2 don't match, all of your temporary downloads are being saved within the image. The container size dialog isn't really showing anything because odds are that, at the time you grabbed the screenshot, the download had already been moved from within the image to its final resting place.
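
One way to verify a mapping like that from the host is to check whether the path the app writes to is really a bind mount inside the container. A rough sketch, assuming the container is named SabNZBd and has grep available:

# If the path is a real bind mount it will show up in the container's /proc/mounts
docker exec SabNZBd grep /mnt/user/downloads /proc/mounts \
  && echo "mapped to the host" \
  || echo "NOT mapped - writes will land inside docker.img"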

Link to post

[screenshot: the path setting in question]

 

I think that's just the description??

 

Well, I changed it all to match and it's still doing it.

 

Is there a way to see which docker is causing this?
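
A quick way to check this from the terminal (standard docker CLI, nothing Unraid-specific) is the --size flag on docker ps, which reports each container's writable layer - usually the culprit:

# SIZE column = data in each container's writable layer
# (the value in parentheses adds the shared, read-only image size)
docker ps --all --size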

 

 

UPDATE

Ended up being my Plex Transcode directory. Apparently I had set it to be the same for the container and host path during an upgrade... Long story short: make sure the container sees it as "/transcode".

 

Edited by CowboyRedBeard
Link to post
On 1/13/2019 at 4:56 AM, fxp555 said:

One last try could be


docker system prune --all --volumes

this deletes ALL STOPPED containers, images, volumes. Took my image from 70% to about 15% because of many old volumes.

Thank you! This fixed it for me!

Link to post
27 minutes ago, Vess said:

Thank you! This fixed it for me!

Since you are new to our forum, just thought I would mention: this might be the fix for you, or it might only make things look better for a while.

 

Other things discussed in this thread are more likely to be the cause for new users, especially those posts with links to the Docker FAQ. 

Link to post
1 hour ago, Vess said:

Thank you! This fixed it for me!

Should be reiterated though that the command listed will REMOVE any application that is not currently running.

If you DO NOT have any application currently running on a static IP address, you will not be able to assign any application a static IP address without first stopping and then restarting the entire docker service (Settings - Docker).
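
To see exactly which containers that covers before running the prune, you can list the ones that are not currently running (standard docker CLI):

# Containers that are stopped or were never started - these are what
# docker system prune --all --volumes would remove
docker ps --all --filter status=exited --filter status=created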

Link to post

Sheesh, I wish I had read this first. I've used this command on other PCs with docker, but I didn't think it would completely remove the entries I had for them under my Docker tab in Unraid. I had a ton of apps that happened to be off at the time because I wasn't actively using them.

 

Is there a log somewhere of what their names were? I can always re-install them from the Community App Store, but I don't even know what they all were. Unfortunately, I closed the terminal window after running that command.
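
If this is an Unraid box, the templates for previously installed containers are normally still saved on the flash drive (this is what the Previous Apps feature reads from). The exact path below is from memory, so treat it as an assumption to verify:

# List the saved user templates - path is an assumption, adjust if needed
ls /boot/config/plugins/dockerMan/templates-user/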

Link to post

Here's what I did to discover which Docker app was bloating up the docker.img file - it turned out to be one of my own Dockers, whose app server was creating internal log files. This method works at least a day after the Docker app was created/installed (before that, every directory still carries the install timestamp).

 

1. Shelled into the Docker container and changed to the root directory.

2. Executed: du -h --time -d 1

3. Reviewed the time stamps, looking for a directory with today's date & time.

4. Switched to a suspect directory and repeated the du command. Rinse & repeat (a non-interactive version is sketched below).
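
Spelled out non-interactively (the container name and suspect path below are just placeholders):

# Usage one level deep from the container's root, with modification times
docker exec MyAppServer du -h --time -d 1 /
# Then drill into whichever directory carries today's date and repeat:
docker exec MyAppServer du -h --time -d 1 /path/of/suspect/dir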

Link to post
  • 11 months later...
On 3/3/2020 at 7:28 PM, jbartlett said:

Shelled into the Docker container and changed to the root directory.

Executed: du -h --time -d 1


Thanks for this.
I took your solution and added some scripting to make it easier to summarise all my docker containers.
Leaving this here in case it helps someone else:
 

docker ps | awk '{print $1}' | grep -v CONTAINER | xargs -n 1 -I {} sh -c "echo -e \\\n#================\\\nDocker container:{}; docker exec {} du -hx -d1 / "
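
If the escaping in that one-liner is hard to follow, a more readable equivalent (same idea: running containers only, same du flags) would be something like:

# Loop over running containers by name and report per-container usage,
# one level deep, staying on the container's own filesystem (-x)
docker ps --format '{{.Names}}' | while read -r name; do
  echo "#================"
  echo "Docker container: $name"
  docker exec "$name" du -hx -d 1 /
done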

 

Link to post
  • 1 month later...

Hi everyone,

 

I ran into the "DOCKER HIGH IMAGE DISK UTILIZATION" issue as well. Thanks to this thread I fixed it without rebuilding the docker.img file. Here is the root cause in my case:

1. By default my containers track the app's official repository.

2. I have a few containers that track a different branch, usually an older 'stable' release.

3. After I have had a chance to fully test the newer release, I manually update the app's Repository setting.

 

The problem with this method is that it leaves the old images behind, filling up my docker.img file. I used:
 

docker system prune --all --volumes

 

WARNING: Before you run this command pay attention to Squid's comment here: https://forums.unraid.net/topic/57479-docker-high-image-disk-utilization-why-is-my-docker-imagedisk-getting-full/?do=findComment&comment=827838

 

Thank you

Link to post
20 hours ago, SAL-e said:

The problem with this method is that it leaves the old images behind, filling up my docker.img file. I used:

docker system prune --all --volumes

 

WARNING: Before you run this command pay attention to Squid's comment here: https://forums.unraid.net/topic/57479-docker-high-image-disk-utilization-why-is-my-docker-imagedisk-getting-full/?do=findComment&comment=827838

 

Glad to see you had success with that, and I'm (even more) glad you referenced that comment. Just keep in mind there's a less extreme approach with much less risk if you only want to delete the dangling images:

 

docker image prune

 

Will just delete the dangling images, and

 

docker image prune -a

 

Will delete all dangling and currently unused images.

 

vs.

 

$ docker system prune --all --volumes
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all volumes not used by at least one container
  - all images without at least one container associated to them
  - all build cache

 

You could even safely set docker image prune as a User Script to run daily/weekly if it's a really regular issue for you.

 

#!/bin/bash

# Clear all dangling images
# https://docs.docker.com/engine/reference/commandline/image_prune/

echo "About to forcefully remove the following dangling images:"
docker image ls --filter dangling=true
echo ""
echo "(don't!) blame lnxd if something goes wrong"
echo ""

docker image prune -f
# Uncomment to also delete unused images
#docker image prune -f -a
echo "Done!"

 

Edited by lnxd
Link to post
  • 2 months later...

I did look at my docker.img file to better understand the underlying issue and discovered that it was the btrfs folder filling it up:

 

root@NameOfmyServer:/var/lib/docker# du -h -d 1 .
472K    ./containerd
71M     ./containers
0       ./plugins
93G     ./btrfs
45M     ./image
56K     ./volumes
0       ./trust
88K     ./network
0       ./swarm
16K     ./builder
88K     ./buildkit
1.8M    ./unraid
0       ./tmp
0       ./runtimes
93G     .

 

As you can see the ./btrfs folder holds all the data.

 

Here is a glimpse of the contents of that btrfs folder:

drwxr-xr-x 1 root root   114 Apr 19 16:59 f425bd8572d449ab63414c5f6b8adf93a5a43830149beff9ae6fd78f4c153964/
drwxr-xr-x 1 root root   164 Jun  2 16:24 f56871468a4410ae08e7b34587135c9f53e7bc5f7e6d80957bc2c1bb75dc5835/
drwxr-xr-x 1 root root   114 Jul 22  2020 f59dccf9b84afc4e4437a3ba7f448acbe9d97dec1bbc8806740473db9be982e7/
drwxr-xr-x 1 root root   144 May 22 13:28 f955be376b3347302b53cc4baf9d99384f5249d516713408e990a1f9193e6a91/
drwxr-xr-x 1 root root   210 May 17 08:58 f9ff6f9552b835c1ee5f95a172a72427d2a94d97221300162c601add05741efc/
drwxr-xr-x 1 root root   188 May 17 08:58 fa15712ed8447e23f26f0377d482ef12614f4dbd938add9dc692f066af305892/
drwxr-xr-x 1 root root   114 Apr 19 17:06 fb42f9a61aa57420a38522492a160fe8ebfbacd3635604d06416f2af3d261394/
drwxr-xr-x 1 root root   226 May 17 09:09 fd6c83a4ab776e1d2d1de1ed258a31d0f14e1cc44cfc66794e13380ec84e7e7d/
drwxr-xr-x 1 root root   392 Jun  2 17:26 fe1e7c51852590ee81f1ba4cd60458c0e919eec880dc657b2125055a0a00e305/
drwxr-xr-x 1 root root   252 Jun  2 17:26 fe1e7c51852590ee81f1ba4cd60458c0e919eec880dc657b2125055a0a00e305-init/
drwxr-xr-x 1 root root   230 Jun  2 16:13 ff1f7c71fff4b8030692256f340930e61c2fc8ec67a563889477b910f9ae1ece/

 

So I looked around and found this discussion forum:

https://github.com/moby/moby/issues/27653

It describes the buggy nature of Docker on BTRFS. Awesome ...

In that thread there is a link to this gist:

https://gist.github.com/hopeseekr/cd2058e71d01deca5bae9f4e5a555440

Here is another forum talking about the issue:

https://forum.garudalinux.org/t/btrfs-docker-and-subvolumes/4601/6

 

I don't recommend that any of you follow these steps, but what I am trying to say is that the Unraid devs should look into this and come back with a solution for how users like us can safely remove these orphan images from our systems without risking data loss.

 

So what would be the proper procedure to get this raised with Unraid? It is clearly a Docker bug with btrfs, but Unraid decided to move to btrfs for the cache drive and we followed that direction.

 

Thank you.

 

Edited by Seriously_Clueless
Link to post

The Garuda Linux discussion does describe a fix, but again, one would expect that to be the default in Unraid. So if any of the devs are listening here, it would be great to get your feedback on this. The Docker documentation clearly points out the shortcomings of btrfs with Docker. Maybe the decision to use btrfs was a bit rushed? If not, how are we supposed to safely fix this without deleting the docker.img file?

Link to post

While we are patiently waiting for the Unraid team to advise on how they would like to tackle this going forward (i.e. not using BTRFS in the docker.img file), I have done the following to reduce my 96G to around 40G. I might be able to get it down further but wanted to be cautious:

 

1. sudo btrfs subvolume delete /var/lib/docker/btrfs/subvolumes/<name of subvolume>

This deletes the subvolume and frees up its space. I did it on a per-name basis since I wanted to keep some of the subvolumes; if you want to delete them all, use * instead of a single subvolume name.

2. btrfs scrub status -d /var/lib/docker/

Shows you the per-device scrub status of the btrfs volume.

3. btrfs scrub start -d /var/lib/docker/

This reads all data on the volume and verifies its checksums - an integrity check rather than a defrag, but it will flag any damaged data.

 

If you run btrfs --help you can see a few more commands that help with listing the subvolumes instead of looking at the directory itself; see the example below.
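
For example, btrfs subvolume list will enumerate them directly (assuming, as above, that docker.img is mounted at /var/lib/docker):

# List the subvolumes Docker has created inside docker.img
btrfs subvolume list /var/lib/docker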

 

You will need the docker service started for this, otherwise /var/lib/docker will not contain anything; it is the mount point for the docker.img file.
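
A quick way to confirm that mount (findmnt comes with util-linux, so it should be available):

# With the docker service running this shows the loop device backing docker.img
findmnt /var/lib/docker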

 

And now my folder looks like this:

root@NameOfServer:/var/lib/docker# du -h -d 1 .
440K    ./containerd
77M     ./containers
0       ./plugins
41G     ./btrfs
39M     ./image
56K     ./volumes
0       ./trust
88K     ./network
0       ./swarm
16K     ./builder
88K     ./buildkit
1.4M    ./unraid
0       ./tmp
0       ./runtimes
41G     .

 

So 50G were recovered without any issue.

 

Link to post
On 6/2/2021 at 9:26 AM, Seriously_Clueless said:

without deleting the docker.img file?

There is no reason to avoid deleting and recreating docker.img. It only contains the executables for your containers, and these are easily downloaded again.

 

You can reinstall any of your containers exactly as they were using the Previous Apps feature on the Apps page.

 

 

Link to post
