Cleaning out Docker image


dalben


An update for anyone monitoring this thread or searching in the future:

 

I used the command "docker exec -it <container name> bash" to drop into the container, then ran "find / -xdev -type f -size +100M" to find files over 100MB. This helped me discover a 1.34GB error log in each of three of my containers (eroz's AirVideoServer, needo's PlexMediaServer, and needo's PlexWatch). Deleting the file is an easy temporary fix, but I still need to figure out a long-term solution.
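A runnable sketch of those two steps (the container name "plex" is a placeholder; substitute your own):

```shell
# Step 1 (run on the host): open a shell inside the container.
# "plex" is a placeholder container name.
#   docker exec -it plex bash

# Step 2 (run inside the container): list regular files over 100 MB.
# -xdev keeps find on one filesystem, so host volumes mapped into the
# container are skipped and only the container's own files show up.
find / -xdev -type f -size +100M 2>/dev/null
```

Anything this prints lives inside docker.img's writable layer rather than on a mapped host share.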

 

Any idea what would be causing this error:

 

Nov  1 08:10:18 [server name] sshd[435]: error: Bind to port 22 on 0.0.0.0 failed: Address already in use.

Nov  1 08:10:18 [server name] sshd[435]: fatal: Cannot bind any address.

 

I'm guessing it's because ssh is already open on the server. How could I fix this?

This helped me temporarily fix my growing docker.img issue. I was able to identify two dockers (smdion/docker-flexget:latest and gfjardim/transmission:latest) which are continually trying to bind the SSH service and logging the error listed above by shooga. Deleting the error log file in each of these containers and restarting them freed over 10GB from my docker.img.

 

At this point I just need to see if I can fix the issue of this SSH logging event. It logs once per second in each container and grows very fast. I am not sure what the best course of action is to resolve this, but will post on the boards for the containers.

 

Thank you very much to shooga for pointing me in the right direction for my circumstance.


For me, these are two reasons why my docker.img size was growing rapidly:

[*]For some reason, in the gfjardim/crashplan docker /usr/local/crashplan/cache was not a symbolic link to /config/cache, and so was using space inside docker.img. I restarted this docker and the issue was resolved.

[*]I have a local CrashPlan backup folder destination that I had forgotten to map to a host folder. After I mapped this destination to a host folder, my docker image stopped growing.

 

Thanks to hashi for providing the commands below which allowed me to pinpoint the culprit:

[*]docker exec -it <container name> bash

[*]find / -xdev -type f -size +100M

If you're running 6.2 this might help  http://lime-technology.com/forum/index.php?topic=40937.msg475225#msg475225

Hello all. I rudely created a new thread this morning because I have the same problem. Below are the contents of that post; I am removing the original.

 

My docker image is "leaking" and reaching capacity again; I'm not sure why this happens.

 

I installed sonarr a few weeks ago, then filebot-gui container last week, then started getting warnings about the image. I have since removed the filebot-gui container, but it did not help.

 

Can anyone shed some light on this? I had this problem at least twice before but managed to be OK for a few months until now. I feel like it's related to coppit's filebot-gui, or my improper use of it.

 

See info below:

root@E***:/var/lib# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           128M  3.8M  125M   3% /var/log
/dev/sda1        15G  192M   15G   2% /boot
/dev/md1        1.9T  1.8T   86G  96% /mnt/disk1
/dev/md2        1.9T  1.8T   68G  97% /mnt/disk2
/dev/md3        1.9T  1.8T   80G  96% /mnt/disk3
/dev/md4        1.9T  1.8T   85G  96% /mnt/disk4
/dev/md5        2.8T  2.7T  101G  97% /mnt/disk5
/dev/sde1       224G   71G  153G  32% /mnt/cache
shfs             11T  9.6T  418G  96% /mnt/user0
shfs             11T  9.7T  571G  95% /mnt/user
/dev/loop0       15G   13G  1.1G  93% /var/lib/docker

root@E**:~# docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
coppit/filebot             latest              03a7edcb58e8        2 weeks ago         882.4 MB
linuxserver/sonarr         latest              3f8bf45a34ec        4 weeks ago         347.4 MB
google/cadvisor            latest              01774d197db9        4 weeks ago         46.36 MB
limetech/plex              latest              7d3f8c543242        5 weeks ago         539.8 MB
binhex/arch-nzbget         latest              d0534d106432        5 weeks ago         675.7 MB
binhex/arch-couchpotato    latest              fd5e4bc902a4        9 weeks ago         712.9 MB
binhex/arch-sickbeard      latest              50b216945b49        11 weeks ago        694.6 MB
linuxserver/plexpy         latest              fa97d2fb9cff        3 months ago        685.9 MB
linuxserver/nginx          latest              7b895511a682        3 months ago        485.7 MB
lsiodev/plexrequests       latest              02dcd617a822        3 months ago        936.9 MB
linuxserver/transmission   latest              6b07b00fb632        3 months ago        425.4 MB
coppit/mumble-server       latest              2a1dadb8c9b1        6 months ago        367.5 MB

 

swzaJGf.png

 

UdS9vpw.png

 

Thanks for reading!

The problem is going to wind up being one of your download clients downloading into the docker.img file instead of outside of it.

 

(As an aside, on a quick look it doesn't appear that either sonarr or couchpotato is working properly based upon the mappings, since it doesn't look like they have access to the downloads done by nzbget / transmission)

 

Of the Docker FAQ entries, this one is probably most relevant: http://lime-technology.com/forum/index.php?topic=40937.msg488507#msg488507  It should solve the image filling up, and additionally what appears to be CP/Sonarr/NZBGet/Transmission not communicating properly.

 

I understand what I'm reading here but need some clarity...

 

My NZBget has moviescat and tvcat categories. Movies and TV end up in those folders once completed.

 

Do I still need the /downloads mapping in NZBget? Does this mean the sonarr /downloads mapping should be /mnt/user/media/tv/?

 

Let me know about that while I look into the bulging image.

 

Thanks for the help! This is a great community.

Set up NZBGet to download to a /downloads mapping.  All categories wind up as subfolders of that main /downloads.  The incompletes should also wind up within it.

 

Set up the exact same mappings to both CP and sonarr. 

 

This FAQ may also help: http://lime-technology.com/forum/index.php?topic=40937.msg405070#msg405070


Thanks for the help! I'm assuming /media in CP app should be mapped the same as /downloads in nzbget and sonarr?

 

Docker image is better now, too:

	 root@E***:/var/lib# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           128M  3.8M  125M   3% /var/log
/dev/sda1        15G  192M   15G   2% /boot
/dev/md1        1.9T  1.8T   88G  96% /mnt/disk1
/dev/md2        1.9T  1.8T   71G  97% /mnt/disk2
/dev/md3        1.9T  1.8T   86G  96% /mnt/disk3
/dev/md4        1.9T  1.8T   88G  96% /mnt/disk4
/dev/md5        2.8T  2.7T   86G  97% /mnt/disk5
/dev/sde1       224G   75G  150G  34% /mnt/cache
shfs             11T  9.6T  417G  96% /mnt/user0
shfs             11T  9.7T  566G  95% /mnt/user
/dev/loop0       15G  4.9G  8.9G  36% /var/lib/docker

Generally, /media (or the like) would point to your actual media collection (i.e. /mnt/user/Movies)

/downloads is a totally separate share (ideally cache-only) set up for where the downloads take place.  After completion, CP (or sonarr) will move the files from /downloads to /media

 

The one thing as pointed out in the FAQ is that the /downloads mapping has to match 100% between all of the apps for them to properly interact with each other.
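As a hypothetical illustration of that rule (the image names and host paths below are examples, not taken from this thread), every app that touches downloads gets the identical host-path:container-path pair:

```shell
# One cache-only download share, mapped the same way into every app.
DL_HOST=/mnt/cache/downloads    # example host path

# Guarded so the sketch is harmless on machines without docker:
if command -v docker >/dev/null 2>&1; then
  docker create --name nzbget -v "$DL_HOST":/downloads linuxserver/nzbget || true
  docker create --name sonarr -v "$DL_HOST":/downloads \
    -v /mnt/user/Media/TV:/tv linuxserver/sonarr || true
fi
```

Because both containers see the same /downloads, a path that NZBGet reports for a finished download is also valid inside Sonarr, which is what lets the apps hand files off to each other.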


I have broken a cardinal rule and used /mnt/user/appdata/<APPNAME>/ for all my containers' /config mapping and not /mnt/cache/appdata/<APPNAME>/

 

I think this caused the sonarr container to occupy a massive space, as I was once again getting image size warnings today.

 

What's the proper way to fix this? I simply changed the mapping, but a few containers did not function properly after that; nzbget and plexmediaserver (limetech) to name two.

 

Do I need to move the <appname> folders from /mnt/user/appdata to /mnt/cache/appdata? If so, what is the best way to accomplish this? I used the dolphin app to move the /nzbget folder but got some permissions errors and it still did not load the nzbget webgui for me.

 

Thanks for the help!

 


Quite honestly, since they were set up as /mnt/user and they're not working after the change to /mnt/cache, either change it back or start over again.

 

I doubt that sonarr filling the image was a direct result of specifying /mnt/user.

 


 

 

Good point. Sonarr is working with /mnt/cache right now. I'll see in a few days if the image has grown again.

 

It was just those two so far that did not like the remapping.


This fixed the excessive usage in my case:

 

http://lime-technology.com/forum/index.php?topic=45249.0

 

Great find by aptalca! linuxserver/headphones had a 2GB log file!

 

That was it. 11G log from a linuxserver/couchpotato.

 

EDIT: It appears you can set a /tmp directory for logs under advanced settings in CouchPotato. Going to try to map /tmp to something on the cache disk and see what happens. I'd be just as happy to disable logs entirely, but don't see an easy way to do that.


Read through this whole thread. Have the same issue of the docker.img file becoming bloated / full. I used this, but moved up one directory and found a 12G .log file:

On 11/12/2015 at 4:09 PM, binhex said:

find /var/lib/docker/btrfs -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

find /var/lib/docker/ -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
/var/lib/docker/containers/60d8cc72b42ba65bd7d32d60f4da671268293777ea5e67027c2b630eec2dd041/60d8cc72b42ba65bd7d32d60f4da671268293777ea5e67027c2b630eec2dd041-json.log: 91M
/var/lib/docker/containers/2d0c7fff7cc76740fb7a49b94fb97441fc8dd51d5723532b290d38bbef979d3c/2d0c7fff7cc76740fb7a49b94fb97441fc8dd51d5723532b290d38bbef979d3c-json.log: 12G
/var/lib/docker/btrfs/subvolumes/4d7990cb80ec57c6e13325308389eafb1f783097818aa2c855f8684045cb0ec1/tmp/x11rdp/x11rdp_0.9.0+devel-1_amd64.deb: 66M
/var/lib/docker/btrfs/subvolumes/f26230f4f65eafdf3ec5ed86ce1c344d460fe592340c92f95c0d2868b89b7537/var/cache/locate/locatedb: 65M

After deleting the 12G file, I checked the webui and it still showed 94%. Disabling and re-enabling Docker dropped it back down to 36%. I don't know which container created this file, as I don't see any IDs that match it. Either way, it's much better (for now) than starting over with my docker.img and then having to re-add all of my 15+ containers.
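One cautionary note on that cleanup (this is general Docker/Linux behavior, not something stated in the thread): the daemon keeps those *-json.log files open, so deleting one can leave the space held by the open file handle until Docker restarts, which would match the webui only dropping after the disable/enable. Truncating the file in place frees the space immediately:

```shell
# Empty a container's json log in place instead of deleting it.
# The path below is only the example shape; substitute a real
# container ID from /var/lib/docker/containers.
LOG=/var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log

if [ -f "$LOG" ]; then
  truncate -s 0 "$LOG"   # open writers keep working; space freed at once
fi
```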


Might it be possible to write a plugin/user script that

  1. Gets list of all installed dockers, including their template entry, and
    1. Current running state
    2. Auto-start setting
  2. Stop all running dockers
  3. Stop docker system
  4. Delete docker image file
  5. Start docker system (file will be created)
  6. Install all previously installed dockers via their template (but not all dockers in template list)
    1. Set running state
    2. Set auto-start state

Obviously not an ideal solution, since it doesn't solve the actual problem.  However, I'm rebuilding my img file maybe once every 6-9 months (maybe longer).  It would certainly come in handy for those times.  I'd planned on spending a wet weekend looking into it, but somebody way ahead of me may be able to help out.
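The steps above can be sketched roughly as below. The rc.d service script and docker.img path are assumptions based on a stock unRAID install, so verify both before trusting it; with DRY_RUN=1 (the default) it only prints what it would do. Step 6, reinstalling from templates, remains a webui action (Apps -> Previous Apps).

```shell
#!/bin/bash
# Dry-run sketch of the docker.img rebuild steps. The service script
# and image path are assumed from a stock unRAID install -- verify first.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run docker stop $(docker ps -q 2>/dev/null)     # stop running containers
run /etc/rc.d/rc.docker stop                    # stop the docker service
run rm -f /mnt/user/system/docker/docker.img    # delete the image file
run /etc/rc.d/rc.docker start                   # restart; img is recreated
# Then reinstall previously installed containers from their templates
# via the webui (Apps -> Previous Apps) to restore mappings and settings.
```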

On 11/20/2017 at 9:59 PM, d8sychain said:

 

Thanks for this, I was able to find out that my resilio-sync container log was 13G.

 

I have the Docker log rotation config set up through unRAID; does anyone know if it handles the *-json.log files, or is it for some other log files?
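For what it's worth, those *-json.log files under /var/lib/docker/containers are written by Docker's json-file logging driver, and that driver's rotation options do cover them. On a stock Docker install a daemon-wide default can be set in /etc/docker/daemon.json (restart Docker afterwards); the size and count values below are just examples:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
```

The same options can be set per container with `--log-opt max-size=50m --log-opt max-file=3`. How unRAID exposes this setting may differ, so check where its rotation config actually lands.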


Fixing error:

"Docker image disk utilization of 76% Docker utilization of image file /mnt/user/system/docker/docker.img"

 

I've been trying to update my Plex slowly after not updating it for a while, version by version, but this seems to leave lots of previous versions in my docker image. See the output below from running docker system df -v.

 

I have tried so far:

Running the CA Cleanup Appdata

Running the "delete_dangling_images" user script that comes with the plugin

REPOSITORY                         TAG                             IMAGE ID       CREATED         SIZE      SHARED SIZE   UNIQUE SIZE   CONTAINERS
linuxserver/plex                   version-1.23.6.4881-e2e58f321   688732db5383   2 years ago     662.5MB   0B            662.5MB       1
linuxserver/plex                   version-1.23.5.4862-0f739d462   db9d42d35e57   2 years ago     662.4MB   129.6MB       532.8MB       0
linuxserver/plex                   version-1.23.4.4805-186bae04e   a972f83c583c   2 years ago     661.9MB   129.6MB       532.2MB       0
jlesage/putty                      latest                          90bf9703a35a   2 years ago     107.7MB   77.05MB       30.7MB        1
linuxserver/plex                   version-1.23.3.4707-ebb5fe9f3   8b7100002aaa   2 years ago     664.1MB   0B            664.1MB       0
linuxserver/nextcloud              latest                          4c5fbd49d707   2 years ago     414.3MB   0B            414.3MB       1
linuxserver/plex                   version-1.23.2.4656-85f0adf5b   4124bfd18206   2 years ago     655.7MB   0B            655.7MB       0
linuxserver/openvpn-as             2.8.8-cbf850a0-Ubuntu18-ls122   04cdda8c9d3e   2 years ago     261.7MB   0B            261.7MB       1
linuxserver/plex                   version-1.23.1.4602-280ab6053   c5840ee9bc0e   2 years ago     655.4MB   0B            655.4MB       0
linuxserver/plex                   version-1.22.3.4523-d0ce30438   7f0051d917d7   2 years ago     692.5MB   129.6MB       562.9MB       0
linuxserver/plex                   version-1.22.3.4392-d7c624def   f97b1ef8d7b0   2 years ago     692.3MB   129.6MB       562.7MB       0

 

EDIT: Never mind, I solved it with this command:

For unused images, use

docker image prune -a

(for removing dangling and unused images).
Warning: 'unused' means "images not referenced by any container": be careful before using -a.

Source: https://stackoverflow.com/questions/32723111/how-to-remove-old-and-unused-docker-images#:~:text=docker system prune will delete,docker container prune


 

Then I finally got this message:

Docker image disk utilization: DATE

Notice [SERVER] - Docker image disk utilization returned to normal level
Docker utilization of image file /mnt/user/system/docker/docker.img

 

Edited by Arby