I ran 'diagnostics' and got the following in logs/docker.txt (full zip attached):
$ tail docker.txt
time="2017-05-30T16:41:42.206974508-04:00" level=info msg="libcontainerd: new containerd process, pid: 8833"
time="2017-05-30T16:41:43.612576754-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device"
time="2017-05-30T16:41:44.163676548-04:00" level=info msg="libcontainerd: new containerd process, pid: 8997"
time="2017-05-30T16:41:45.346545311-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device"
time="2017-05-30T17:10:32.184898289-04:00" level=info msg="libcontainerd: new containerd process, pid: 22537"
time="2017-05-30T17:10:33.338567925-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device"
time="2017-05-30T17:10:39.085280285-04:00" level=info msg="libcontainerd: new containerd process, pid: 22749"
time="2017-05-30T17:10:40.270522174-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device"
time="2017-05-30T17:10:49.450429338-04:00" level=info msg="libcontainerd: new containerd process, pid: 23006"
time="2017-05-30T17:10:50.614545274-04:00" level=fatal msg="Error starting daemon: write /var/lib/docker/volumes/metadata.db: no space left on device"
And here is my disk usage:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.6G     0  7.6G   0% /dev
tmpfs           1.6G  9.3M  1.6G   1% /run
/dev/sda1        48G  7.8G   38G  18% /
tmpfs           7.6G  138M  7.5G   2% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
/dev/sda5       129G   30G   92G  25% /home
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           1.6G   24K  1.6G   1% /run/user/1000
So it looks like the Docker disk image itself is out of space, even though / still shows 38G free, but I'm not sure how to expand it.
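One thing I don't understand: the write fails with "no space left on device" while df reports plenty of free blocks, so could the filesystem be out of inodes instead? That was going to be my next check (just a guess on my part, and I'm assuming /var/lib/docker really is on /dev/sda1 since it isn't listed as a separate mount above):
$ df -i /
$ sudo du --inodes -s /var/lib/docker/* | sort -n | tail   # needs GNU du (coreutils >= 8.22)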
lucien-diagnostics-20170530-1713.zip