docker image full



Diagnostics attached. I have already tried disabling docker and deleting the image file. When I re-enable it, the docker image file is immediately maxed out, even after expanding the vdisk to 400GB. Tried scrubbing and rebooting, no difference. Read through the docker FAQ, can't find a container being the issue. Container sizes are all >400MB, and the docker image file maxes out without re-installing them. When I click on the file it says, "the disc image file is corrupted." Using two 480GB SSDs in a pool. New to unraid, any help would be greatly appreciated.

Attachment: tower-diagnostics-20200118-0106.zip


The default of 20GB is enough for all but the most demanding applications, so it definitely sounds as if you have at least one container incorrectly configured so that it is writing internally to the docker image rather than to storage external to the image.

 

Common mistakes are:

  • Leaving off the leading / on the container side of a path mapping so it is relative rather than absolute
  • Case mismatch on either side of a path mapping, as Linux pathnames are case-sensitive.

If you cannot spot the error then what I would suggest is:

  • Make sure all containers are stopped and not set to auto-start
  • Stop docker service
  • Delete the current docker image and set a more reasonable size (e.g. 20GB)
  • Start docker service
  • Use Apps >>> Previous apps to re-install your containers
  • Go to docker tab and click the Container size button
  • This will give you a starting point for the space each container is using.
  • Enable one container, let it run for a short while and then press the Container size button again to see if that particular container is consuming space when it runs.

Repeat the above until you track down the rogue container(s). A couple of command-line checks that show the same per-container usage are sketched below.
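If you are comfortable with the command line, the same per-container usage can be checked from the unraid terminal. These are standard docker commands (the exact output layout varies between docker versions), offered as a quick cross-check rather than a required step:

    # Size of each container's writable layer (roughly what the
    # Container size button reports)
    docker ps -as

    # Fuller breakdown of images, containers and volumes, with per-container sizes
    docker system df -v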


Filling the docker image is usually about settings within an application that cause it to write to a path that isn't mapped.

 

If you use Transmission, for example, and within the Transmission application you set it to download torrents to /download, but the container mapping is /Download, then that is going to write into the docker image, because Linux is case-sensitive and /download and /Download are different paths.

 

Or, if you tell it to write to download instead of /download, then that is going to be in the image also, since it is not an absolute path.
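To make that concrete, here is a made-up example of what the mismatch looks like; the image name and host path are purely illustrative, not anyone's actual setup:

    # Host folder is mapped into the container at /Download (capital D)
    docker run -d --name transmission \
      -v /mnt/user/downloads:/Download \
      linuxserver/transmission

    # If, inside the Transmission settings, the download directory is set
    # to /download (lowercase), that path is not mapped, so every completed
    # torrent ends up in the container's writable layer, i.e. inside docker.img.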

 

Taking them one at a time as suggested is the way to get this figured out.

 

Which dockers do you use?

 


Thanks for the responses. The docker image is typically at 20GB; I only expanded it temporarily to see if that would prevent corruption so I could look inside it. I did as suggested: disabled autostart, deleted docker.img, restarted docker, and looked at each container, but the sizes didn't grow significantly. I use Radarr, Sonarr, Plex, Jackett, and Syncthing. I've since deleted Syncthing and deleted its data from /appdata, but that didn't do anything. I disabled downloading from Deluge, removed the entire queue, and disabled it as a downloader within Radarr and Sonarr. I also noticed the libvirt image is corrupt, although I don't have any VMs installed. I've run the extended test of Fix Common Problems with no results either. I based my container mapping schemes on the ones done by SpaceInvader One, so I think they're right, but obviously something's wrong.


I tried remapping everything using /mnt/cache/appdata instead of /mnt/user as recommended by Squid in the FAQ (although I've read other posts where he recommends /user), and it didn't make a difference. Then I used the Cleanup Appdata plugin and wiped all my apps out, no difference. Also wiped the rest of my cache for good measure. Just to see how big the docker image was, I put it on a 3TB disk and it still maxed it out. Currently docker is not enabled, with no containers and no appdata. Not sure what's left other than my media folder. Let me know what my next step should be.

12 minutes ago, coolman2170 said:

Yep, disabled them individually and let them run. Nothing got larger than 1GB. Currently there is nothing in docker as I deleted all my appdata, although there are zipped backups on disk1.

This statement seems inconsistent with the fact you said the docker.img file has maxed out a 3TB drive :)


Which part is inconsistent? Yesterday, before I deleted my appdata, I deleted the docker image, reinstalled it, then reinstalled my apps. I let the containers run one at a time, checked the container sizes, and nothing got larger than 1GB. Since I have backups of my apps and have no VMs, I decided to delete all my appdata and see if it would make a difference. After deleting the appdata, I deleted the docker image, reinstalled it at 20GB, and it maxed out. Out of curiosity, I deleted that docker image, set the new size to 2900GB (my largest disk is 3TB), and reinstalled the docker image, still with no appdata. Once again, the docker image maxed out. Repeated the process again just now, same result.
 

Attached screenshots: Screenshot 2020-01-19 at 1.11.37 PM.png, Screenshot 2020-01-19 at 1.11.15 PM.png, Screenshot 2020-01-19 at 1.10.56 PM.png


The screenshots do not show that clicking the Container Size button at that point reported no container using excessive space!   That is what I was thinking was inconsistent with a total of 3TB being used!   Something is using up all that space, as the behavior you describe is not normal :(

 

BTW:   Folders like appdata are irrelevant to this issue as they are external to the docker.img file.

10 hours ago, coolman2170 said:

I tried remapping everything using /mnt/cache/appdata instead of /mnt/user

Please study this section of the post I already made. It IS NOT about the mappings. It is instead about the application not actually using the mappings and instead writing to a path that is not mapped.

On 1/18/2020 at 7:36 AM, trurl said:

Filling the docker image is usually about settings within an application that cause it to write to a path that isn't mapped.

 

If you use Transmission, for example, and within the Transmission application you set it to download torrents to /download, but the container mapping is /Download, then that is going to write into the docker image, because Linux is case-sensitive and /download and /Download are different paths.

 

Or, if you tell it to write to download instead of /download, then that is going to be in the image also, since it is not an absolute path.

 

You almost certainly have some downloading application that is writing its downloads to a path that is not mapped.


trurl, I have no doubt that one or more of my apps is improperly mapped, but I'm not sure how to figure out which one. I spent the better part of yesterday checking each container and I couldn't find anything. I'm also not sure how the docker image can be blowing up due to an app after removing all appdata and deleting all containers from docker. Do certain apps function outside of docker? I think it may be Plex, as I don't think the transcoding was initially mapped correctly and it was the only app malfunctioning. However, even after deleting it entirely, the docker image still blows up.

51 minutes ago, coolman2170 said:

I have no doubt that one or more of my apps is improperly mapped

You still seem to be missing my point. It is entirely possible that your mappings are correct, but the application within the container is configured to write to a path that isn't mapped.

 

And all those other things you said you did to fix this don't have anything to do with this problem.

 

Is your docker image actually growing when no dockers are running? I assume not. So one of your applications is writing to a path that isn't mapped.

 

Which docker(s) are you actually running when docker image fills?

 

To give another example: you mentioned deluge earlier, and I gave an example using Transmission. These are both torrent applications.

 

If you have the deluge container with /Download in the container mapped to /mnt/user/Download on the host, but within the settings of deluge itself, you tell it to write to /download, then it is going to write into the docker image. /Download and /download are not the same thing. And /Download is mapped, but /download isn't. Anything written to a path that isn't mapped is in the docker image.
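If you want to see where a running container is actually putting data, something along these lines (the container name "deluge" is just an example) lists the size of every top-level path inside it. Anything large that is not one of your mapped container paths is sitting inside docker.img:

    # Show the size of each top-level directory inside the running container
    docker exec deluge sh -c 'du -sh /* 2>/dev/null'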


I understand that you're saying the incorrect path could be within the application itself. I guess I don't know how to write that coherently in a sentence, but I get what you're saying: with regards to deluge, for instance, my downloads are supposed to go to completed, incomplete, or unzipped_torrents. If I don't have those shares in the downloads folder, or deluge can't find them because there is an error in the spelling of the downloads or data path, it will save into the docker image.

I'm not saying that the things I've done were fixes; I get that removing appdata and expanding the docker image will not solve the problem. All I'm trying to do is figure out which container is causing the problem. I was under the impression that if my array has no appdata, and my docker has no containers, the docker.img couldn't be filled by apps/containers such as deluge.
 

Quote

Is your docker image actually growing when no dockers are running? I assume not. So one of your applications is writing to a path that isn't mapped.


I could be misunderstanding "fill" and "growing". Currently, I have no appdata and no containers in docker. No apps, nothing. If I disable docker through Settings > Docker > No, delete the current docker image, and then re-enable docker, the docker image will immediately be full. I assume this is what you mean when you say fill, but if you're referring to whether it grows after initially enabling it, I can't tell, because the moment docker is enabled, the docker.img file maxes out at whatever I set it to.
 

5 minutes ago, coolman2170 said:

I can't tell, because the moment docker is enabled, the docker.img file maxes out at whatever I set it to

The docker.img file will always initially be whatever size you set on the Docker Settings tab, as it creates a file of that size set up as a virtual disk image.   That is different from the image being filled up with content so that there is no spare room inside it.

3 minutes ago, coolman2170 said:

Oh okay, so the docker.img is actually empty? Is it just a matter of giving it time to set up or will it not change to a smaller size until I save something to it?

The space physically taken up by the docker.img file will always be whatever size you set it to. Whether that's 20GB or 2TB, it will take up that amount of space on the SSD/HDD.

 

The amount utilized within it is a different story, and that's what everyone thinks about when someone says the docker.img is full.
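On unraid the docker.img is normally loop-mounted at /var/lib/docker while the docker service is running, so (assuming the default locations; the path to your docker.img may differ) you can see both numbers from the terminal:

    # Allocated size of the image file itself - always the size you set
    ls -lh /mnt/user/system/docker/docker.img

    # Space actually used *inside* the image - the figure that matters here
    df -h /var/lib/docker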

