Docker high image disk utilization: why is my docker image/disk getting full?



OK... I'm not sure why I keep getting this stuff. 

 

Docker critical image disk utilization: 23-02-2020 16:12
Alert [TOWER] - Docker image disk utilization of 92%
Docker utilization of image file /mnt/user/system/docker/docker.img

 

Container sizes:

Name                              Container     Writable          Log
---------------------------------------------------------------------
KrusadeR                            2.47 GB        12 MB      81.5 kB
Lidarr                              1.29 GB          0 B      12.8 MB
radarr                              1.12 GB      24.9 MB      8.59 MB
MineCraftServer                      899 MB          0 B      12.8 MB
SonarR                               622 MB      21.1 MB      2.83 MB
Netdata                              566 MB       267 MB      21.0 MB
dupeGuru                             491 MB          0 B      12.8 MB
PlexOfficial                         482 MB      2.44 MB       573 kB
pihole                               341 MB        37 MB       107 kB
HandBrake                            315 MB          0 B      12.8 MB
SabNZBd                              259 MB       303 kB       131 kB
tautulli                             256 MB       123 MB      2.70 MB
Grafana                              233 MB          0 B      9.22 kB
YouYube-DL                           178 MB          0 B      12.8 MB
uGet                                 118 MB          0 B      12.8 MB
transmission                        80.3 MB          0 B      12.8 MB
unifi-poller                        10.9 MB          0 B      18.5 MB

 

I've added this to each of my Docker containers:

--log-opt max-size=50m --log-opt max-file=1
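For reference, those options end up on the docker run command Unraid generates for each container. A minimal sketch (the container name and image are placeholders, not the actual template):

docker run -d \
  --name=example-container \
  --log-opt max-size=50m \
  --log-opt max-file=1 \
  alpine sleep infinity

With the default json-file logging driver, that caps each container's log at a single 50 MB file.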

 

This is my Sab config:

[screenshots of the SABnzbd folder settings and the container path mappings]

 

What in the world am I missing here? This worked fine for the last year, and now the notifications are driving me nuts.

 

Link to comment

Your temporary downloads folder within SAB is /mnt/user/downloads, but the container path that you've passed through is /mnt/usr/downloads.

 

Since those two don't match, all of your temporary downloads are being saved within the image. The container size dialog isn't really showing anything because odds are that, by the time you grabbed the screenshot, the download had already been moved out of the image to its final resting place.
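To make that concrete, here's a minimal sketch of a matching mapping (the paths, name and image are only illustrative, not the actual SABnzbd template):

docker run -d \
  --name=sabnzbd-example \
  -v /mnt/user/downloads:/mnt/user/downloads \
  alpine sleep infinity

# SAB's temporary download folder must then be set to the container-side path of that
# mapping (/mnt/user/downloads here); anything else, e.g. /mnt/usr/downloads, is written
# to the container's writable layer inside docker.img.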

Link to comment

[screenshot of the container path setting and its description]

 

I think that's just the description??

 

Well, I changed it all to match and it's still doing it.

 

Is there a way to see which docker is causing this?
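One quick way to get a rough per-container breakdown from the Unraid terminal (assuming the stock Docker CLI, which Unraid ships) is:

docker system df -v   # detailed disk usage per image, container and volume
docker ps --size      # running containers plus the size of each one's writable layer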

 

 

UPDATE

Ended up being my Plex Transcode directory. Apparently I had set the container and host paths to be the same during an upgrade... Long story short: make sure the container sees it as "/transcode".
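For anyone hitting the same thing, a minimal sketch of a sane transcode mapping (the host path is just an example, not necessarily the original template):

docker run -d \
  --name=plex-example \
  -v /mnt/user/appdata/plex/transcode:/transcode \
  alpine sleep infinity

# Plex's transcoder temporary directory must then be set to /transcode (the container-side
# path) so transcodes land on the host instead of filling docker.img.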

 

Edited by CowboyRedBeard
Link to comment
27 minutes ago, Vess said:

Thank you! This fixed it for me!

Since you are new to our forum, I just thought I would mention: this might be the fix for you, or it might only make things look better for a while.

 

Other things discussed in this thread are more likely to be the cause for new users, especially those posts with links to the Docker FAQ. 

Link to comment
1 hour ago, Vess said:

Thank you! This fixed it for me!

It should be reiterated, though, that the command listed will REMOVE any application that is not currently running.

If you DO NOT have any application currently running on a static IP address, you will not be able to assign any application a static IP address without first stopping and then restarting the entire Docker service (Settings - Docker).
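If you want to check up front which containers a full prune would remove (i.e. anything that isn't currently running), something like this should list them:

docker ps -a --filter status=exited --format '{{.Names}}'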

Link to comment

Sheesh, I wish I had read this first. I've used this command on other PCs with Docker, but I didn't think it would completely and permanently remove the entries I had for them under my Docker tab in Unraid. I had a ton of apps that happened to be off at the time because I wasn't actively using them.

 

Is there a log of their names somewhere? I can always re-install them from the Community App Store, but I don't even remember what they all were. Unfortunately, I closed the terminal window after running that command.

Link to comment

Here's what I did to discover which Docker app was bloating the docker.img file - it turned out to be one of my own custom Dockers, whose app server was creating internal log files. This method works as long as the Docker app was created/installed at least a day ago, so that today's timestamps stand out.

 

Shelled into the Docker container and changed to the root directory.

Executed: du -h --time -d 1

Reviewed the time stamps looking for a directory with today's date & time.

Switched to a suspect directory and repeated the du command. Rinse & repeat.
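The same steps as shell commands, for reference (the container name is a placeholder):

docker exec -it my-container sh
# then, inside the container:
cd /
du -h --time -d 1    # sizes plus most-recent modification time, one level deep
# cd into whichever directory carries today's date/time and run the du again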

Link to comment
  • 11 months later...
On 3/3/2020 at 7:28 PM, jbartlett said:

Shelled into the Docker container and changed to the root directory.

Executed: du -h --time -d 1


Thanks for this.
I took your solution and added some scripting to make it easier to summarise all my docker containers.
Leaving this here in case it helps someone else:
 

docker ps | awk '{print $1}' | grep -v CONTAINER | xargs -n 1 -I {} sh -c "echo -e \\\n#================\\\nDocker container:{}; docker exec {} du -hx -d1 / "
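A slightly different spin on the same loop, using container names instead of IDs so the output is easier to read (just a sketch, same caveats as the one-liner above):

docker ps --format '{{.Names}}' | while read -r name; do
  echo "#================"
  echo "Docker container: $name"
  docker exec "$name" du -hx -d1 /
done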

 

Link to comment
  • 1 month later...

Hi everyone,

 

I ran into the "DOCKER HIGH IMAGE DISK UTILIZATION" issue as well. Thanks to this thread I fixed it without rebuilding the docker.img file. Here is the root cause in my case:

1. By default my containers track the app's official repository.

2. I have a few containers that track a different branch, usually an older 'stable' release.

3. After I have had a chance to fully test the newer release, I manually update the app configuration's Repository field.

 

The problem with this method is that it leaves the old images behind, filling up my docker.img file. I used:
 

docker system prune --all --volumes

 

WARNING: Before you run this command pay attention to Squid's comment here: https://forums.unraid.net/topic/57479-docker-high-image-disk-utilization-why-is-my-docker-imagedisk-getting-full/?do=findComment&comment=827838
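If you want to see what is actually dangling before running anything destructive, these read-only checks are safe to run first:

docker images --filter dangling=true   # untagged layers left behind after switching branches/tags
docker system df                       # overall breakdown of images, containers, volumes and build cache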

 

Thank you

Link to comment
20 hours ago, SAL-e said:

The problem with this method is that it leaves the old images behind, filling up my docker.img file. I used:

docker system prune --all --volumes

 

WARNING: Before you run this command pay attention to Squid's comment here: https://forums.unraid.net/topic/57479-docker-high-image-disk-utilization-why-is-my-docker-imagedisk-getting-full/?do=findComment&comment=827838

 

Glad to see you had success with that, and I'm (even more) glad you referenced that comment. Just keep in mind that there's a less extreme approach with much less risk if you just want to delete the dangling images:

 

docker image prune

 

Will just delete the dangling images, and

 

docker image prune -a

 

Will delete all dangling and currently unused images.

 

vs.

 

$ docker system prune --all --volumes
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all volumes not used by at least one container
  - all images without at least one container associated to them
  - all build cache

 

You could even safely set docker image prune as a User Script to run daily/weekly if it's a really regular issue for you.

 

#!/bin/bash

# Clear all dangling images
# https://docs.docker.com/engine/reference/commandline/image_prune/

echo "About to forcefully remove the following dangling images:"
docker image ls --filter dangling=true
echo ""
echo "(don't!) blame lnxd if something goes wrong"
echo ""

docker image prune -f
# Uncomment to also delete unused images
#docker image prune -f -a
echo "Done!"

 

Edited by lnxd
Link to comment
  • 2 months later...

I took a look at my docker.img file to better understand the underlying issue and discovered that it was the btrfs folder that was filling up:

 

root@NameOfmyServer:/var/lib/docker# du -h -d 1 .
472K    ./containerd
71M     ./containers
0       ./plugins
93G     ./btrfs
45M     ./image
56K     ./volumes
0       ./trust
88K     ./network
0       ./swarm
16K     ./builder
88K     ./buildkit
1.8M    ./unraid
0       ./tmp
0       ./runtimes
93G     .

 

As you can see, the ./btrfs folder holds all the data.

 

Here is a glimpse of the contents of that btrfs folder:

drwxr-xr-x 1 root root   114 Apr 19 16:59 f425bd8572d449ab63414c5f6b8adf93a5a43830149beff9ae6fd78f4c153964/
drwxr-xr-x 1 root root   164 Jun  2 16:24 f56871468a4410ae08e7b34587135c9f53e7bc5f7e6d80957bc2c1bb75dc5835/
drwxr-xr-x 1 root root   114 Jul 22  2020 f59dccf9b84afc4e4437a3ba7f448acbe9d97dec1bbc8806740473db9be982e7/
drwxr-xr-x 1 root root   144 May 22 13:28 f955be376b3347302b53cc4baf9d99384f5249d516713408e990a1f9193e6a91/
drwxr-xr-x 1 root root   210 May 17 08:58 f9ff6f9552b835c1ee5f95a172a72427d2a94d97221300162c601add05741efc/
drwxr-xr-x 1 root root   188 May 17 08:58 fa15712ed8447e23f26f0377d482ef12614f4dbd938add9dc692f066af305892/
drwxr-xr-x 1 root root   114 Apr 19 17:06 fb42f9a61aa57420a38522492a160fe8ebfbacd3635604d06416f2af3d261394/
drwxr-xr-x 1 root root   226 May 17 09:09 fd6c83a4ab776e1d2d1de1ed258a31d0f14e1cc44cfc66794e13380ec84e7e7d/
drwxr-xr-x 1 root root   392 Jun  2 17:26 fe1e7c51852590ee81f1ba4cd60458c0e919eec880dc657b2125055a0a00e305/
drwxr-xr-x 1 root root   252 Jun  2 17:26 fe1e7c51852590ee81f1ba4cd60458c0e919eec880dc657b2125055a0a00e305-init/
drwxr-xr-x 1 root root   230 Jun  2 16:13 ff1f7c71fff4b8030692256f340930e61c2fc8ec67a563889477b910f9ae1ece/

 

So I looked around and found this discussion:

https://github.com/moby/moby/issues/27653

It describes the buggy nature of Docker on BTRFS. Awesome ...

In that discussion there is a link to this gist:

https://gist.github.com/hopeseekr/cd2058e71d01deca5bae9f4e5a555440

Here is another forum talking about the issue:

https://forum.garudalinux.org/t/btrfs-docker-and-subvolumes/4601/6

 

I don't recommend that any of you follow these actions, but what I am trying to say is that the Unraid devs should look into this and come back with a solution for how users like us can safely remove these orphan images from our systems without risking data loss.

 

So what would be the proper procedure to get this raised with Unraid? It is clearly a Docker bug with btrfs, but Unraid decided to move to btrfs for the cache drive and we followed that direction.

 

Thank you.

 

Edited by Seriously_Clueless
Link to comment

The Garuda Linux discussion does describe a fix, but again, one would expect that to be the default in Unraid. So if any of the devs are listening, it would be great to get your feedback on this. The Docker documentation clearly points out the deficits of btrfs with Docker. Maybe the decision to use btrfs was a bit rushed? If not, how are we supposed to safely fix this without deleting the docker.img file?

Link to comment

While we are patiently waiting for the Unraid team to advise on how they would like to tackle this going forward (i.e. not using BTRFS in the docker.img file), I have done the following to reduce my 96G to around 40G. I might be able to get it down further but wanted to be cautious:

 

1. sudo btrfs subvolume delete /var/lib/docker/btrfs/subvolumes/<name of subvolume>

This will delete the subvolume and free up the space. I did it subvolume by subvolume since I wanted to keep some of them; if you want to delete them all, use * instead of the name of the subvolume.

2. btrfs scrub status -d /var/lib/docker/

Shows you the current or last scrub status for the btrfs volume.

3. btrfs scrub start -d /var/lib/docker/

A scrub reads all the data and verifies it against its checksums, repairing what it can from redundant copies. It's more of a health check than a defrag for the btrfs volume.

 

If you run btrfs --help you can see a few more commands that should help with listing the subvolumes instead of looking at the directory itself.
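For example, something like this should enumerate them directly (assuming docker.img is mounted at /var/lib/docker):

btrfs subvolume list /var/lib/docker
btrfs subvolume show /var/lib/docker/btrfs/subvolumes/<name of subvolume>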

 

You will have to have the Docker service started for this; otherwise /var/lib/docker will not contain anything, since it is the mount point for the docker.img file.
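A quick sanity check that the image really is mounted there:

df -h /var/lib/docker           # should show a loop-mounted filesystem, not the root fs
losetup -a | grep docker.img    # the loop device backing docker.img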

 

And now my folder looks like this:

root@NameOfServer:/var/lib/docker# du -h -d 1 .
440K    ./containerd
77M     ./containers
0       ./plugins
41G     ./btrfs
39M     ./image
56K     ./volumes
0       ./trust
88K     ./network
0       ./swarm
16K     ./builder
88K     ./buildkit
1.4M    ./unraid
0       ./tmp
0       ./runtimes
41G     .

 

So 50G were recovered without any issue.

 

Link to comment
On 6/2/2021 at 9:26 AM, Seriously_Clueless said:

without deleting the docker.img file?

There is no reason to avoid deleting and recreating docker.img. It only contains the executables for your containers, and these are easily downloaded again.

 

You can reinstall any of your containers exactly as they were using the Previous Apps feature on the Apps page.

 

 

Link to comment

Looks like I'm in the ./btrfs bloated club now too.

For a less-than-filthy casual such as myself, what's the best way to cure this, hopefully without having to stop the Docker service or delete the docker.img file? I've got reasons for not wanting certain containers to be down for long, if at all.

Link to comment
root@Cube:/var/lib/docker# du -h -d 1 .
344K    ./containerd
29M     ./containers
0       ./plugins
70G     ./btrfs
22M     ./image
86M     ./volumes
0       ./trust
108K    ./network
0       ./swarm
16K     ./builder
56K     ./buildkit
4.0K    ./unraid
0       ./tmp
0       ./runtimes
70G     .

 

root@Cube:/var/lib/docker/btrfs/subvolumes# du -h -d 1 .
119M    ./6900675e6a15e16570fa9b28d447e057fb511c3d74579e93b2cee35e97adab54
119M    ./cd83744830f664826befb336b19ff3d84c5f028d397761e969190d8c5fc33dcc
94M     ./8055a0408a6de2a9a8e47d4fb12014427251b0e2fa2e3ec1d407220c01ad075a
94M     ./a521b8304b56f20632f2bee28ddeaeceb441d7429d076f6f2c66a5977719c071
94M     ./4685dc8d679705869d5659b5e545910ce4a524a2e8715e8181681e96dd7e69c6
94M     ./af878357a5ec5c31d0280bf8a407c046dbd1ce04c5c0c738450cdddd11747945
95M     ./dbae48bb145ae1db011a6b6a13e532924ab8c4b88d2add2097737b1d447c4728
200M    ./06da3075c4b6d5902205c511773eb9e500a8a81672a2f7676e60ea0a729edb82
200M    ./8b501e81009e1f23d0b44255c758dddf81d279ebb667be60766ed5f559eddd0b
200M    ./436913a3304123f31cb0b852fcede0cc6eaaf9db315ea135a1def2f373f0cc38
113M    ./eddbbcabdff445d4787368e11b9f66581ac7a9b6e0bd9e991e116895cce3be20
137M    ./6d9df4d8f9b647bf01804ba34f026dff35d74fdd639457306722358eb9779587
144M    ./f5c761f308af8b892931e8f58db9b8829824cd3031118b20733aceaa6d6538cf
145M    ./c2c51a9d26e093fb02d6d8cfbdb4bc5442ea1518d15fe5afad9f183aaf4b7055
145M    ./ee18b76adde39f5c01d2528cb9aac565c15e715f2962bc646735c94eb889e646
145M    ./c478a3bddb87d368af752c11765ca6d7f2a27d81bba6a232c5e13092e4521d26
463M    ./2138ce0714b3f1e67b998b6355497d680e27fc3359e509ff26ddbb1aa2de189f
464M    ./2b73b47504204dcd9a03c753787405fdb8c57ea59eb872718f589bd19dd2fe46
464M    ./8c616f996f38cb2bf365b28c0ed3cf8c748b41605278f83561e0720dc83af282
464M    ./89eddd7c5607c3fd5a522e71138e2e76aaaf582bcf4bb761886d0180a090647c
481M    ./d2e75bfb299076dff17ace3803932bab02b0148e7eadadaba89fdbd4ea80582f
481M    ./ab95521a38b5ed7fc71c6a015637714d5e9cff9cccaa515d990763715a41b01f
548M    ./e832ddeec0accda55439660592dc144a21e69ad8a70c5e2ffd078cf80bf17dc1
542M    ./a56cd1cd8101fcd573ea7966ab3d2bb522bfa1b79eafc4bbdfd76dc2a7066e90
542M    ./21df3255d62031c867d4fe77a5c5090110c7d5eb4fdd07d16c70d332209e736c
542M    ./06ce9e4180fb0737e7d59c82b6a8cab7fb82109678e1b8252aa99ef97adabcd3
542M    ./e9b65d8c606ff4ddeff4314e2734ac3eb37f73e317cc7c60830615cfd4732ba4
542M    ./0f5751b6a551ae69c38d51c32871667b38da1a68c2ebc72361ad474ec3d7b6a3
542M    ./6e1688fc5d430cf9e56a0060044759a25574ddec8f2499c7df8a5b67e3e5f387
542M    ./a2cf862292813c70efd0edd76e7ca04fce98429bcb6d822ded8abe22aed60d0e
542M    ./e0ca71beb48800e63b93278cc7c6f92779f1e0dd68031765762a44c8a8917093
542M    ./54322c1ecb4fe707f036220bc51ffb20a00bc3b4a338a523b69e44685d0804ca
542M    ./dc55bf7f7c2065a0ec917463d705fdbcd1cf32c49605745085b2ea3b58f823ef
753M    ./e6c24a12ec8b8bb688edd77aaebf13802d5ea46cf193dc2e2b209c14487f0432
752M    ./6583c96aad5db076b2f0a05ffcabe7d3b3b466cdb2880e64da028e0b7db03163
140M    ./9392e8df39a61b8b5199bd536d25b2f5cc143e5cf137433a588c0f8baa4bf8e8
140M    ./15bd6c565e2c687a917195522149bafac0741e17e02342e6ff15b9cf1eb51977
125M    ./e7c58862e923db7fe44c6a9667ccb2d221cdd9514db4204bdf40620baf8fbb07
125M    ./05deef405295be86d2c12d391f264a16b8d73ecf49a211cef0cc14c9c0048088
125M    ./bc8901c12b3edc8a77eea5958c268c47ce9cde8db32abbdc793a96726a8efa60
125M    ./afcb17a3fc43b15c42a2ae54e5f6f4b2395df91c17ec40d7becf27c65b52c3fc
303M    ./142b90f74c2995831cb44f4a82bc14e9b5ddab0a65e4e17011a4a9866cb7840c
303M    ./04dfe9e39bb2f4878a1dd3939e5977a65e08956f14bc00f0ac662c5a469ea87a
303M    ./a570038b987aa3a1ae95a353052e305793a9bc919ac7853ec2ee3b95dce4c524
6.9M    ./99004ff6ba4bc9a5cedd959412329ab801456f5c825a100d9b63bf42740216d5
89M     ./17cf7f9b979ff2e7c9c71288fb75a0fd9198358fc1e829af2916300c5ad95ede
94M     ./c55e19435e6983f40354d57b136cb2f06f5ecefb43099cb2af78730298e39ad6
94M     ./ef6b530b9efeefc87198b53ca3770096717e6ae7bc2e2e60141ea5231e464783
94M     ./cf9b0d23f271b88d3d71510017b0ae50492fe21b82842b2be4f8000bb62e16a8
94M     ./28f860f8e909c4d7f32bb5db227f174ada9bf45f5cc6af762774985af5f2b535
508M    ./b016e6ede2947a2565738838d2c21ed3e763ad22af183b6778407e61a235ae79
756M    ./bccd91ddc89589992ebb38f5f2a6ce6d5d59c57702aac3b7ad9b4ff8a0139747
756M    ./c7db6d5b9ac9d867cdcfa33c6b13479a74f71ee6fb26246ed4f676ab9ebafa82
756M    ./30c59fd5019b4f9cf9e32ed1dbb68467c9722da03c863efd93a3f20d7cc88b26
756M    ./e5a869f582793e2b24bdfdcd8c65c8a4e780c6839320e50f28249db0e848c67d
882M    ./352b1fa4cddde3eee370c2896fcebed4bdf97c7220b3a9a3ba81e249016a2e77
882M    ./439fa6143069a320e1b04f309e6c63d255eac4e01102bdcbcb5391338bee0c4b
882M    ./0354f35f783fbe03f527a8e2da1b739002a8d8b7c233de3e43a879661a01b9bc
882M    ./a20b9907922015bc02534a00c6fe799f35e3258aef6a3a31c7dd24dfaf5c3bef
882M    ./01e9818c6672fdd1efa2f06410cef0ca3e2f958794ca97ba5e492f0b89cfe099
1005M   ./63e1b2b6b4a95b991df3ec461b51502efafad74a10885a771bcfe1a9b6267521
4.0K    ./e1788c9535b64292220671dcf8f0d79ebd205a0860785a91234e6e34749fda51
12K     ./5315b2675e45da81ce6f149fd5b59413b971a2cf34149bebde02a4a55140914a
1.1M    ./4538d8b5e52f6e553c0225f307f7ba77d0ebd7b1f91c7a9582070ccbbe5ba237
2.6M    ./839c58c9069cf621187857346b56b1c7564e351c062d97c44f257965552a412f
619M    ./640284226eae340863017622c458abdace7665e12abf97bb37ce80b3237a8dee
619M    ./1dffdc61f90604e18ad5972653c59eac9ced21c7d43e3026913af55988e732d1
619M    ./5e1561f67820f39d34798ae9d05e4192c7c471974e94a32bf91a8fc6073df0e8
619M    ./23cf85fcbd60a3ab78d075399d9e5b22c7202790aab77c2c2ea7c3bc8f362f6d
619M    ./6aec7ecc4c7783a81e50c44901d29973b7d0c9cc4dbcd6ea6e37091d0370e51a
1.5G    ./a34e0962cba081eba7f04e54f0c89f4e608daf547045ffd52421c1bd3fb60c11
1.5G    ./1268c4f948b28d9547f12282d8d212934466e357f1c74b907986fa6914661300
1.5G    ./321f9252c59c38b8b2363077a73e125b7f8247f168e632d58c9f52b8b66d8dfe
1.9G    ./7780b0b4174fc53106b696116bfca3297c7f96b545bfe65b8128e6a6ebddbc8e
113M    ./2b55b125ab340418f9f5d97b8d68418792a4c08c1f7f482d4207f5026568a04b
721M    ./1ef8489364d14bcc393c6daf62a31b1700e141ada483a5e4623191bca7271268
721M    ./bc87274f958f6249f670285605111fa8e9654f5a9ceb1fb96cdda91ce71a8566
721M    ./441f50ef31fbe5f0f9ea762c14aae8d0d591796d7cd1fcd75312e627059312ff-init
721M    ./441f50ef31fbe5f0f9ea762c14aae8d0d591796d7cd1fcd75312e627059312ff
4.0K    ./e7225d81c2dbbf7309d9a133d40801fdc473690897ce3652773dcb5b400bccfc
12K     ./bbe558cf3a81c0ea8da7155b5ac178405a6d911504cc8efaab9fef6e17d3ead3
1.1M    ./fdcb6d722ac9d0f29d25305e5a48b59ad56aeff3ee19b4e2bc8c32abc0e027f5
2.6M    ./7a556ceabd1c51f8f0cb991707874c2b96265009c0d0073cfe52dde2ab313969
626M    ./61f79f7ac4b7e6318ffb5fa958ef57d9d012af18f7d7447da8b3d94696aeb24e
626M    ./e8c9144c647b3724b1fdd72e3e5f1f0092b6b2adccb56e5c4b96ba8d6709fcc5
626M    ./49537208cac6d6145df7150d21bfb596d77f115b5b70b81318be2fd10838bd99
626M    ./8a0a1f4a43594bda087ad1f4b7600a494656da13ad637f2acec529736ccf8f1b
946M    ./e57bf446b09a5f6b9d2db6ebc7e2259d6f8ccb9fe8942c979b705f614ab59c2d
946M    ./c0bdcbbf603cddd206697278b871caa6a8871ab16ebeca05097c1097e192a2a3-init
946M    ./c0bdcbbf603cddd206697278b871caa6a8871ab16ebeca05097c1097e192a2a3
626M    ./c7c3110fc986894c2639fb51f2fa3dd32e38f44a5c16f0dd1776e82877345b99
626M    ./d11c22c445d5186867078043125a2ac92140f7ec37dba0c0a1c98096f8c705f2
626M    ./6c10f2f7ffb0eedbd21599986f88e1c626aa88712db6fe646946b6dd5a8fa5ed
836M    ./b70ade401a44c8f3c8f8e39b5b8631bfc225e9a949411b45ad716c32b5dc3d51
836M    ./7d3c6a21cc5aebc2a4fbdfc1aa3383c21bb7881570fc64bdebbe40cb25df3e75-init
836M    ./7d3c6a21cc5aebc2a4fbdfc1aa3383c21bb7881570fc64bdebbe40cb25df3e75
752M    ./f230ade24f3598196921f12fd053356663eccbb40cf2d8404a81dbe4e1452141
752M    ./5b89f8ba028737e0a84e313de7eee782b31787aa44f170a002cc14c2a2365149
752M    ./48153c1c3106a5ca3e8fc8ab15e5444149f38ed06dcc35f69f83598a89dda4e9
752M    ./d3896ac34156f059aad963a4d48094095b181a37571ebf402da9fc8ba2176f4d
752M    ./16bf243394f14e4ff0d87352136505bcad55ce467e9a89845e9747a78e485f6d
784M    ./8e308a9cc89caf82cf886cb35783797e93c4b9aeedb0cea20e34cd12959bc823
1000M   ./935d65ffc1ef9d7d239e0555cd79aac64950379b42255e5ccd8647d802c014e1
1.9G    ./3208f6c1ee168a85b7c669e380192b116313cd17868e32925ff8b5b1f6bfbcd7-init
1.9G    ./3208f6c1ee168a85b7c669e380192b116313cd17868e32925ff8b5b1f6bfbcd7
946M    ./70deac7e9c614864619111c9b19c1cc05dd18d9712992cc58b5981db9fbb6631-init
946M    ./70deac7e9c614864619111c9b19c1cc05dd18d9712992cc58b5981db9fbb6631
836M    ./9ef2aff2e2ac386f87d21a6ff4590c3c3fda71971b9dec061906a2298198ce1a-init
836M    ./9ef2aff2e2ac386f87d21a6ff4590c3c3fda71971b9dec061906a2298198ce1a
1000M   ./658046ea2fecdd6a127ccf00314490fe8a0a7ed211075f7554041088be07d874-init
1000M   ./658046ea2fecdd6a127ccf00314490fe8a0a7ed211075f7554041088be07d874
303M    ./f1a34b96a2121de9ac840c74c9915815a9588990866dfc1e6fed20416577d96f-init
303M    ./f1a34b96a2121de9ac840c74c9915815a9588990866dfc1e6fed20416577d96f
584M    ./9e87b55b64e6e46bc74e366ab5e9c61787b2671fca7cf75d6a49b65914cdb86c
602M    ./9e6915cc9aa61a8553c7e79f8036aaf6353e10186458651be3431e1930bd1f3d
602M    ./f543a6b1db2135112f32dfe31340e776debee145f1f302aae6ea050feda7b2cf
602M    ./ae8de21948c54acb643588594bc58e50fe43cbef4f2c3e97d62a8efe95983080
863M    ./2bc2bad54a656808741c2bb3fb3641634b50c8c62b59c592cfa2bc3258ebd310
863M    ./daebceb213a31dde22896ca0f71892e17b8a59d84b8475f324bd435205d72493-init
863M    ./daebceb213a31dde22896ca0f71892e17b8a59d84b8475f324bd435205d72493
946M    ./5154064573a4bfad750d71d6883c333b1b5559349dece1d874a03a321beb3817-init
946M    ./5154064573a4bfad750d71d6883c333b1b5559349dece1d874a03a321beb3817
1005M   ./7ad2daa1c77465e30f8e4f1117714ff4604b3de8aabc17589aa0145a7517b886-init
1.9G    ./7ad2daa1c77465e30f8e4f1117714ff4604b3de8aabc17589aa0145a7517b886
70G     .

 

So how do I clean that up?

Edited by sota
Link to comment
  • 3 weeks later...
On 6/15/2021 at 10:36 AM, sota said:

Looks like I'm in the ./btrfs bloated club now too.

For a less-than-filthy casual such as myself, what's the best way to cure this, hopefully without having to stop the Docker service or delete the docker.img file? I've got reasons for not wanting certain containers to be down for long, if at all.

I had joined the ./btrfs club as well. I checked my paths within the Unraid Docker mappings and within all the Docker apps, and all looks fine. After reading from page 1 to 5, it seems like there's no clear, definitive way to identify which Docker is using the space, so I'll just leave my footprints here in case any new updates on the issue come out in the future!

Link to comment