ciarlill

Updated to 6.3.5 and docker tab is empty



I just updated to 6.3.5. After a reboot, my Docker tab is completely empty. I ssh'd into the server and it looks like dockerd was not running. I tried running it from the command line and reloaded the Docker tab. At that point the tab loaded, but none of my dockers were listed.

 

Any ideas on what could have happened? What logs can I provide to help sort this out?


I ran 'diagnostics' and got the following in logs/docker.txt (full zip attached):

 

$ tail docker.txt 
time="2017-05-30T16:41:42.206974508-04:00" level=info msg="libcontainerd: new containerd process, pid: 8833" 
time="2017-05-30T16:41:43.612576754-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device" 
time="2017-05-30T16:41:44.163676548-04:00" level=info msg="libcontainerd: new containerd process, pid: 8997" 
time="2017-05-30T16:41:45.346545311-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device" 
time="2017-05-30T17:10:32.184898289-04:00" level=info msg="libcontainerd: new containerd process, pid: 22537" 
time="2017-05-30T17:10:33.338567925-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device" 
time="2017-05-30T17:10:39.085280285-04:00" level=info msg="libcontainerd: new containerd process, pid: 22749" 
time="2017-05-30T17:10:40.270522174-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device" 
time="2017-05-30T17:10:49.450429338-04:00" level=info msg="libcontainerd: new containerd process, pid: 23006" 
time="2017-05-30T17:10:50.614545274-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device" 

 

And here is my disk usage:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.6G     0  7.6G   0% /dev
tmpfs           1.6G  9.3M  1.6G   1% /run
/dev/sda1        48G  7.8G   38G  18% /
tmpfs           7.6G  138M  7.5G   2% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
/dev/sda5       129G   30G   92G  25% /home
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           1.6G   24K  1.6G   1% /run/user/1000

 

So it looks like the docker image itself is out of space... but I'm not sure how to expand it.
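In case it matters, here is roughly how I would expect to check the image itself (the docker.img path below is only a guess on my part; Settings -> Docker shows the real location):

# if the loopback image is mounted, it shows up as its own filesystem
df -h /var/lib/docker

# otherwise check how big the image file itself has grown on disk
du -h /mnt/user/system/docker/docker.img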
 

 

lucien-diagnostics-20170530-1713.zip

Edited by ciarlill
Attached full diagnostics


But with 15 GB allocated for the docker image, increasing the size may buy you a respite; the root cause is still there. Why is it filling up in the first place? Usually it's because of downloads being saved within the image. Peruse the Temporary Docker FAQ for some suggestions.
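Once the daemon is back up, something along these lines should show where the space is going (docker system df needs a reasonably recent Docker, 1.13 or newer, so treat it as optional):

# summary of space used by images, containers and local volumes
docker system df

# per-container view, including each container's writable-layer size
docker ps -as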


Thanks for the reply. Is there a way to investigate and fix this from the shell? My Docker tab does not load anything below the navigation bar; there are no options whatsoever, so I cannot turn Docker off in the UI. I also tried running

du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60

to investigate container log files as a root cause, but I have no "containers" folder under "/var/lib/docker".

 

I have run into this issue of filling up the docker image before and tracked it down to an individual container that I just ended up rebuilding. But in that case the docker daemon was at least still running so I could interact with the containers. If possible I just want to clean up enough space to get it booted again so I can find the cause.
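Something like this might let me look inside the image even with the daemon down; the docker.img path is a guess and depends on where unRAID keeps it:

# mount the image manually so its contents are visible without the daemon
mkdir -p /mnt/dockerimg
mount -o loop /mnt/user/system/docker/docker.img /mnt/dockerimg

# list the biggest files and folders inside it
du -ah /mnt/dockerimg | sort -rh | head -20

# unmount again before letting unRAID start docker normally
umount /mnt/dockerimg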



Alright, I am back up and running. First, it makes sense that /var/lib/docker/containers does not exist, since the daemon couldn't start and mount it. Second, I found the config file for docker in /boot/config and was able to just edit my image size to 20G. Now I can figure out which container is causing the problem. Thanks!
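For anyone else hunting for it, the file in question is probably something like this (I'm not certain of the exact key names or path, so treat this as an approximation; the size is in GB):

# /boot/config/docker.cfg (keys and path are an approximation)
DOCKER_ENABLED="yes"
DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker.img"
DOCKER_IMAGE_SIZE="20"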


On 5/31/2017 at 2:26 AM, ciarlill said:

Alright, I am back up and running. First, it makes sense that /var/lib/docker/containers does not exist, since the daemon couldn't start and mount it. Second, I found the config file for docker in /boot/config and was able to just edit my image size to 20G. Now I can figure out which container is causing the problem. Thanks!

Thank you! I had the same issue here. I did not know how this docker stuff works, but I searched the same error message you had on Google and found this thread. :) My docker image size was already at 20. I set it to 25 and it started up! :)

 

What is stored in that image file, and how do I access that data? Hmm.
I can see that my dockers (Plex and Resilio Sync) write to the /mnt/user/appdata/ folder, but that isn't the image file, is it?
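My rough understanding (someone correct me if this is wrong) is that the appdata folders are host paths mapped into the containers, while the image file holds the containers themselves; something like this should list the mappings for one of mine (using plex as an example name):

docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' plex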

 


Found some forum posts about the same issue I had... it turned out I had a log file growing out of control, over 18GB. I have now cleaned that file and everything is sorted. :D
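For anyone searching later: if the runaway file is a container log (the default json-file driver keeps them under /var/lib/docker/containers), something like this should find it and empty it without deleting it:

# find the largest container logs
du -ah /var/lib/docker/containers/*/*-json.log | sort -rh | head -5

# empty the offender; truncating keeps the file handle valid for the running container
truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log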

