(SOLVED) Typical Container sizes in a docker image?



Hi,

 

I am trying to pinpoint which of my docker containers is misbehaving, as my docker image seems to grow constantly.

I understand from reading through the forum that for most users a 20 GB docker image is plenty.

My docker image is now at 71% utilisation and I got a warning on the dashboard.

As I do not consider myself special in this regard, I guess something is not working as expected.

 

I checked the container sizes, and that was the output I got:

[Screenshot: Docker container size listing]

 

What is the typical size of a container within the docker image?

Collabora and the binhex containers all use way more space than the other containers.

Especially the Collabora container seems unreasonable, as it is nearly 10 times the size of e.g. the Plex container.

I only use Collabora within Nextcloud (both set up with the help of @SpaceInvaderOne's youtube video) to be able to write a document in the browser, but we actually use it very rarely.

 

Has anyone had a similar issue?

Can anyone give me a hint on the root cause of the docker image growing?

What would be the best approach to correct this issue?

 

1 hour ago, Kevek79 said:

What is the typical size of a container within the docker image?

Can anyone give me a hint on the root cause of the docker image growing?

The image size depends on which base OS is used and on how big the application and its dependencies are.

 

You can check the image size on Docker Hub and compare it to the list you have to see if there is a configuration error or something else making the image grow.
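
Keep in mind that, as far as I know, the sizes shown on Docker Hub are for the compressed layers, so the same image will usually be noticeably bigger locally. From the console you can see what docker itself reports for the images you have pulled:

docker images    # lists every local image with its uncompressed size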

7 minutes ago, saarg said:

The image size depends on which base OS is used and on how big the application and its dependencies are.

 

You can check the image size on Docker Hub and compare it to the list you have to see if there is a configuration error or something else making the image grow.

Thanks @saarg for the tip about Docker Hub.

When I compare the container sizes in my list above with the latest build tags listed on the respective Docker Hub pages, none of my top 6 containers should be bigger than 500 MB.

 

I checked the templates of the affected containers, and everything that should point out of the containers to the array seems to be configured that way.

What makes me especially curious is why the teamspeak container is 1 GB bigger than expected, as there is nothing to misconfigure in its template (no paths to be mapped to the array, for example).

 

I also checked the log of the Collabora container via the GUI; there are a lot of entries (mostly white, some yellow when I open a file via Nextcloud), but honestly I do not understand what most of them mean.

The container itself works as expected.

 

What could be my next steps to resolve this?

 

One more question on interpreting the Container Size listing above: what does 'Writeable' mean in that table, and do you know why it could be 3.3 GB in the case of the Collabora container?
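
From what I can tell (happy to be corrected), the 'Writeable' value seems to correspond to the size of the container's writable layer, i.e. everything the container has written on top of its image since it was created. Running this on the console appears to show the same kind of numbers:

docker ps -s    # SIZE column: writable-layer size first, total (virtual) size in brackets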

 


You could delete the docker image, recreate it, and add them all back via CA. Make sure none of them are running, then take a reference list of sizes; call it Point 0. Then start the dockers and check again after 15 minutes for reference Point 1. Check back tomorrow morning for reference Point 2, and repeat again after a day for Point 3.

 

None of the sizes should change drastically from Point 1 onwards. I would expect them to be much smaller than in your initial post.
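
If you want to keep those reference lists around to compare later, something along these lines from the console works; the output location is just an example, put the files wherever you like:

docker ps -as > /mnt/user/appdata/sizes-point0.txt    # Point 0: fresh image, nothing started yet
docker ps -as > /mnt/user/appdata/sizes-point1.txt    # Point 1: ~15 minutes after starting the dockers
diff /mnt/user/appdata/sizes-point0.txt /mnt/user/appdata/sizes-point1.txt    # compare the SIZE columns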


I can try that tomorrow. Thanks @BRiT.

I was just editing my last post to ask whether recreating the docker image might help in finding the root cause when you sent your reply ;)

 

Just to make sure I understand the procedure:

1. Shut down all running dockers

2. Stop the docker service

3. Delete the docker image (do I need to delete it via the CLI or can this also be done via SMB? See the commands at the end of this post.)

4. Restart the docker service, which should create a new docker image file

5. Add all my dockers back using the user templates, without any modifications

 

All my containers should then be back where they were (function-wise) before deleting the docker image - right?

 

Then start taking snapshots of the docker container sizes as suggested above.

 

edit: Just realized that deleting the docker image can be performed in the GUI.
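
For reference, my understanding is that the CLI equivalent would look roughly like this; the docker.img path below is the default one, so it needs adjusting if Settings > Docker points somewhere else:

/etc/rc.d/rc.docker stop                   # stop the docker service (same as disabling it in Settings > Docker)
rm /mnt/user/system/docker/docker.img      # delete the image file (default location, check yours first)
/etc/rc.d/rc.docker start                  # start the service again; a fresh docker.img gets created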

1 hour ago, Kevek79 said:

When I compare the container sizes in my list above with the latest build tags listed on the respective Docker Hub pages, none of my top 6 containers should be bigger than 500 MB.

Some dockers do major setup on first run, including downloading large chunks of their content that may be updated more frequently than the author wants to build into the image statically. I would expect Point 0 in @BRiT's example to approximately mirror their tagged sizes. Point 1, after a stabilized startup, would be the point of reference I would track from, as he said.

 

Krusader is a problem child from a size standpoint, as it is large to begin with, and it's VERY easy to mess up the configuration and end up putting stuff in the docker image. The biggest gotcha for a more experienced user is forgetting to properly deal with Krusader's recycle bin function.
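
If you want to see exactly what a container has put into the docker image, something like this lists every file that was added or changed in that container's writable layer (the container name is just an example, use the one shown on your Docker tab):

docker diff binhex-krusader    # A = added, C = changed, D = deleted in the writable layer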


Good Evening everybody

 

About 20 hours have passed and a new evening has started, so I can get back to resolving this issue.

 

20 hours ago, jonathanm said:

Krusader is a problem child from a size standpoint, as it is large to begin with, and it's VERY easy to mess up the configuration and end up putting stuff in the docker image. The biggest gotcha for a more experienced user is forgetting to properly deal with Krusader's recycle bin function.

Regarding Krusader, the recycle bin was my first thought too (even though I always try to delete directly and not use it at all), so I checked, and there was only one file (a couple of KB in size) in there.

 

Before I start deleting the docker image, I rechecked the container sizes and compared them to the values from yesterday.

The only thing growing since yesterday is the Collabora container.
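
To get a rough idea of where the space inside the Collabora container is going, I guess something along these lines should show the biggest directories (the container name is a guess, use the one from your list; note this also counts the base image content, not only what was written on top of it):

docker exec Collabora du -xh -d1 / 2>/dev/null | sort -h | tail -15    # biggest top-level directories inside the container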

 

As I have never deleted my complete docker image before, I'm a bit nervous about what could go wrong.

Is the procedure I described above correct?

Is there anything that could go wrong when deleting and recreating the docker image? (In theory I know that everything should be fine, but I'm still a little nervous.)

Should I back up the current docker image before deleting it? If so, does the filesystem of the backup target drive make any difference (e.g. does it need to be XFS or BTRFS)?

 

Would deleting a single docker container (including the then orphaned container image) free up the space in the docker image?

If so, could deleting a single container (e.g. Collabora) and then recreating it from scratch reset the container size to where it should be?

As it looks like Collabora is the main driver of the docker image growth, I would rather try to solve the issue with this one container than wipe the complete docker image. Does this make sense?
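
If that makes sense, I assume the console steps would be roughly the following (names are just what I expect mine to be called; as far as I understand the loopback image, removing a container also frees its writable layer inside docker.img):

docker rm -f Collabora       # remove the container together with its writable layer
docker rmi collabora/code    # optionally remove the image too, using the repository name from "docker images"
# then re-add the container from the saved template (CA > Previous Apps)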
 

edit: I just realized that the Collabora container is the only one (besides the duckdns docker) that has no '/config' mapping to the array (the template just does not contain any drive mappings). Is this expected behaviour? Maybe I missed something in the configuration and that's the reason why Collabora is exploding!?
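
To double-check that, this should print whatever volume mappings the container actually has (again, container name as it appears on the Docker tab); as far as I understand it, anything a container writes outside of a mapped path stays in its writable layer inside docker.img:

docker inspect -f '{{ json .Mounts }}' Collabora    # lists the container's volume/bind mappings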

