BRiT

Docker Maintenance ... Users need a means of keeping it clean!


I was doing some typical housekeeping today and came across the following on Docker maintenance. This will become an issue that bites each and every user who does even a minimal amount of container experimentation, i.e., installing, removing, or upgrading dockers.

 

I noticed my docker image file was getting up there in usage. I have only ever had 4 dockers installed in total (NZBGet, SabNZBD, Transmission, EggDrop). I deleted SabNZBD from my dockers list early on, so I never used it beyond 1 day.

 

With just 3 containers running, my docker.img file was 65% used. (I had allocated 15GB for the image after seeing reports of users running out of space.) The docker images themselves barely take up 1GB total, yet through very typical use the file has somehow managed to balloon by an extra 8GB, to 9.1GB used.

 

So how does one keep the docker image file clean? Even removing all the containers and images didn't get it clean enough, so I resorted to brute-force deleting the image file itself and recreating it. Now you can see how clean it is. But to get there I had to nuke it and start over: even after removing all the containers, close to 2GB was still lost to something mysterious (2.1GB vs 24MB).

 

So, how is an average user expected to keep their Docker Image clean and not run out of space?

 

The command I used to check the status is: df -h

 

Before cleanup : /dev/loop8       15G  9.1G  5.1G  65% /var/lib/docker
After deletes  : /dev/loop8       15G  2.1G   12G  15% /var/lib/docker

 

After recreate : /dev/loop8       15G   24M   13G   1% /var/lib/docker
After +EggDrop : /dev/loop8       15G  446M   13G   4% /var/lib/docker
After +NZBGet  : /dev/loop8       15G  685M   13G   6% /var/lib/docker
After +Trans   : /dev/loop8       15G  1.3G   12G  10% /var/lib/docker
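For anyone who wants that check automated rather than running df by hand, here is a minimal sketch. The function name and the 90% threshold are my own; on unRAID the mount point to watch is /var/lib/docker:

```shell
#!/bin/bash
# Sketch: warn when a filesystem passes a usage threshold.
# Usage: check_usage <mountpoint> <threshold-percent>
check_usage() {
    local pct
    # Field 5 of df's second output line is the Use% column; strip the %.
    pct=$(df "$1" 2>/dev/null | awk 'NR==2 {gsub("%","",$5); print $5}')
    pct=${pct:-0}
    if [ "$pct" -ge "$2" ]; then
        echo "WARNING: $1 is ${pct}% full"
    else
        echo "$1 usage OK: ${pct}%"
    fi
}

# On unRAID, the docker loopback is mounted at /var/lib/docker:
check_usage /var/lib/docker 90
```

Dropped into /etc/cron.hourly (or run from the go file), something like this would give early warning before the image fills completely.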


Good points! I ran out of space early on, deleted all my dockers, and recreated them on a larger IMG file.

 

It might be useful to add the /dev/loop8 size/free info to the Docker Tab so that we can keep an eye on it.... 


I also think we have figured out a solution to the docker image corruption issues that a few users have had. We are still experimenting to confirm, but we will provide an update on this soon.


I also think we have figured out a solution to the docker image corruption issues that a few users have had. We are still experimenting to confirm, but we will provide an update on this soon.

Looking forward to it!  I've lost two setups due to it. The drive keeps checking out okay (I've tried two different ones), but somehow I keep getting corruption. Thanks!


I think there are two issues here:

 

The first is caused by advanced docker configs that allow in-place upgrades of apps (i.e. the user does NOT need to pull down a new image). Unless this is coded in such a way that it cleans up after upgrading, I think users will run into issues with old installation directories, compressed files, etc. IMHO docker images shouldn't be created this way, but hey, that's just me; I prefer the shared GitHub account solution to keep up with updates and keep it simple.

 

The second issue is that even after removing any extraneous docker images, the loopback image still reports the old disk usage stats, so there is no way to track exactly how much of the loopback image is actually free. The only way around this is to delete everything and start again; I don't know if there is a solution to this.
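On that first point, it is at least worth trying a CLI cleanup pass before nuking the image. A sketch, with the usual caveats: this is destructive, the dangling filter only catches untagged layers, and later Docker releases added docker system df / docker system prune for exactly this job. The wrapper function is my own.

```shell
#!/bin/bash
# Sketch: reclaim space from exited containers and untagged image layers.
# Destructive; review what each command would remove before running for real.
docker_cleanup() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker not installed"
        return 0
    fi
    local exited dangling
    # Remove containers that have exited.
    exited=$(docker ps -aq -f status=exited)
    [ -n "$exited" ] && docker rm $exited
    # Remove image layers that no tag points at any more.
    dangling=$(docker images -qf dangling=true)
    [ -n "$dangling" ] && docker rmi $dangling
    echo "cleanup pass complete"
}

docker_cleanup
```

Even after a pass like this, the loopback can still report stale usage, which may be part of the 2.1GB-vs-24MB gap reported above.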


After recreate : /dev/loop8       15G   24M   13G   1% /var/lib/docker
After +EggDrop : /dev/loop8       15G  446M   13G   4% /var/lib/docker
After +NZBGet  : /dev/loop8       15G  685M   13G   6% /var/lib/docker
After +Trans   : /dev/loop8       15G  1.3G   12G  10% /var/lib/docker

 

I just looked at how things were going now and something is chewing up space inside the docker.img file again. Time to do another mass-delete of docker.img and recreate the dockers.

 

22 Days Later: /dev/loop8       15G  6.1G  7.3G  46% /var/lib/docker


After recreate : /dev/loop8       15G   24M   13G   1% /var/lib/docker
After +EggDrop : /dev/loop8       15G  446M   13G   4% /var/lib/docker
After +NZBGet  : /dev/loop8       15G  685M   13G   6% /var/lib/docker
After +Trans   : /dev/loop8       15G  1.3G   12G  10% /var/lib/docker

 

Now let's repeat the process of cleaning and re-adding...

 

After Delete All : /dev/loop8       15G  1.4G   12G  11% /var/lib/docker
After Scrub      : /dev/loop8       15G   25M   13G   1% /var/lib/docker
After +Eggdrop   : /dev/loop8       15G  443M   13G   4% /var/lib/docker
After +NZBGet    : /dev/loop8       15G  684M   13G   6% /var/lib/docker
After +Trans     : /dev/loop8       15G  1.3G   12G  10% /var/lib/docker

 

There we go. Much better now.

 


I also think we have figured out a solution to the docker image corruption issues that a few users have had. We are still experimenting to confirm, but we will provide an update on this soon.

 

Is it soon, yet?

 


I wasn't aware of this issue until my 10GB Docker image filled completely. The Docker daemon no longer starts:

 

  time="2015-12-30T13:43:49.173704466-08:00" level=fatal msg="Error starting daemon: Insertion failed because database is full: database or disk is full"

 

I can still use the CLI to browse /var/lib/docker. Is it possible to copy these containers into another docker.img, or have I pretty much lost all of the containers?

 


I wasn't aware of this issue until my 10GB Docker image filled completely. The Docker daemon no longer starts:

 

  time="2015-12-30T13:43:49.173704466-08:00" level=fatal msg="Error starting daemon: Insertion failed because database is full: database or disk is full"

 

I can still use the CLI to browse /var/lib/docker. Is it possible to copy these containers into another docker.img, or have I pretty much lost all of the containers?

Dockers are easily recreated from your templates. If you push the Add Container button on the Docker page, the very first thing is the Template selection. Every docker you have ever used with all of its settings can be selected from there. When you select one of them and add it, that docker will be redownloaded and run with all of the settings you used before, including the volume mappings that tell it where on your server the docker stores everything.


And how do you do that?

 

Figured it out: Settings ---> Docker ---> Enabled (no) ---> Change default size. But it doesn't actually expand the docker.img.

 

I've done this and below is the result:

 

Label: none  uuid: ceaad5ca-ccce-422d-88f3-0cc87bdadf9d

  Total devices 1 FS bytes used 7.24GiB

  devid    1 size 20.00GiB used 10.00GiB path /dev/loop0

 

btrfs-progs v4.1.2

 

Running this command in the unRAID console gives me the following: btrfs fi df /var/lib/docker

Data, single: total=7.97GiB, used=6.46GiB

System, DUP: total=8.00MiB, used=16.00KiB

System, single: total=4.00MiB, used=0.00B

Metadata, DUP: total=1.00GiB, used=798.53MiB

Metadata, single: total=8.00MiB, used=0.00B

GlobalReserve, single: total=176.00MiB, used=0.00B

 

It is clear that the docker.img was not actually expanded. How can I utilize the expanded size?
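A guess at what is happening (an assumption on my part, not a confirmed unRAID behaviour): growing the docker.img file doesn't automatically grow the btrfs filesystem inside it, and btrfs can resize a mounted filesystem online. Here is a sketch of the command that might claim the extra space; btrfs filesystem resize max is a real btrfs-progs invocation, while the wrapper function (which no-ops when the tools or mount point aren't present) is mine:

```shell
#!/bin/bash
# Sketch: grow the btrfs filesystem to fill its (already enlarged) backing
# device. "max" tells btrfs to use all the space the device offers.
grow_docker_fs() {
    if command -v btrfs >/dev/null 2>&1 && mountpoint -q "$1"; then
        btrfs filesystem resize max "$1"
    else
        echo "skipping: btrfs tools or mount point not present"
    fi
}

grow_docker_fs /var/lib/docker
# Afterwards, df -h and btrfs fi df should both reflect the new size.
```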


Try this bad boy for clearing the logs

 

logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done
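A minor variation on the same idea (my own restating, assuming GNU find and truncate, both present on unRAID): letting find run the truncation avoids the intermediate variable and any word-splitting, and truncating in place keeps the inodes that Docker already has open.

```shell
#!/bin/bash
# Truncate every container log to zero bytes in one pass.
# LOGDIR is parameterised only so the line is easy to try elsewhere;
# on unRAID it is /var/lib/docker/containers.
LOGDIR=${LOGDIR:-/var/lib/docker/containers}
[ -d "$LOGDIR" ] && find "$LOGDIR" -name '*.log' -exec truncate -s 0 {} +
```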


Try this bad boy for clearing the logs

 

logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done

 

That was glorious! Thank you.

 

before

/dev/loop0       20G   17G  2.4G  88% /var/lib/docker

 

after

/dev/loop0       20G  4.4G   15G  24% /var/lib/docker

 


Try this bad boy for clearing the logs

 

logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done

 

Might cron that if 6.2 doesn't fix the issue of the log files..


I am still hunting where my space is going.

 

The logs are small and the docker image is only 8GB, so there is 10GB somewhere I cannot account for.

 

Am I missing something obvious?
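Before giving up and rebuilding, a per-directory breakdown can at least point at the culprit. A sketch (the path layout is standard Docker; sort -h needs GNU coreutils, which unRAID ships, and the function name is my own):

```shell
#!/bin/bash
# Sketch: show which top-level directories inside the docker filesystem
# hold the space, largest first.
space_report() {
    du -sh "$1"/* 2>/dev/null | sort -rh
}

space_report /var/lib/docker
```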


Try this bad boy for clearing the logs

 

logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done

 

Might cron that if 6.2 doesn't fix the issue of the log files..

 

How are cron jobs set up on unRaid? I've never messed with them, but I'm beginning to see more and more uses for them on my server


Try this bad boy for clearing the logs

 

logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done

 

Might cron that if 6.2 doesn't fix the issue of the log files..

 

How are cron jobs set up on unRaid? I've never messed with them, but I'm beginning to see more and more uses for them on my server

See here, and some of the earlier posts in that thread, for more details.


To set it up as a weekly cron job (I haven't done this yet):

 

Create a file called docker_log_cleaner.sh containing

#!/bin/bash
logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done

 

and save it to your flash drive in a subdirectory called scripts.

 

Then add these lines to your go file...

 

#Add backup scripts to cron
cp /boot/scripts/docker_log_cleaner.sh /etc/cron.weekly

 


Very good instructions. I'll have to implement this before too long if it isn't changed in a new version of unRAID. I don't usually upgrade, but with such a large version change from 5.0, I may be upgrading unRAID as they iron out a few things.


I am still hunting where my space is going.

 

The logs are small and the docker image is only 8GB, so there is 10GB somewhere I cannot account for.

 

Am I missing something obvious?

 

I gave up eventually and recreated the img file, which dropped usage from 18GB to 3GB for the same containers.


Try this bad boy for clearing the logs

 

logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done

 

Apologies for bumping an old post, but this worked great for me, so thank you!

 

Does anybody know if this is automated/fixed in 6.2, or if a plugin exists to do this?  Also, any risk to clearing out these logs?

 

Thanks!

 

~Spritz


Try this bad boy for clearing the logs

 

logs=$(find /var/lib/docker/containers/ -name '*.log');for log in $logs; do cat /dev/null > $log;done

 

Apologies for bumping an old post, but this worked great for me, so thank you!

 

Does anybody know if this is automated/fixed in 6.2, or if a plugin exists to do this?  Also, any risk to clearing out these logs?

 


Try this: http://lime-technology.com/forum/index.php?topic=40937.msg475225#msg475225

