6.1.9 upgrade to 6.2.1 boots, but no web interface or Dockers


First off, I'm really happy with Unraid and its Dockers, so I was eager to try out the new 6.2 release!


After updating all plugins and Dockers, yesterday I decided to take the gamble and click the update button beside the Unraid listing on the "Plugins" page in the Dynamix web interface.


Everything was working well. After clicking that button, it downloaded some files and asked for a reboot, so I stopped the array (I couldn't unmount until I stopped a screen session I had apparently left running) and it went down. When it came back online I found that while I can ping, SSH in, and the shares work, the web interface and Docker containers do not.


It's idling, and I see no change after 12 hours of waiting.

I have no idea where to look for errors and am still (forever) learning Linux... suggestions most welcome!


Below are all the details I think might be relevant. I suspect something with Docker... I did forget to disable autostart. How do I disable that from the command line? I could not find it online.
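One thing that might work from the console (an assumption on my part, not verified against the docs): Unraid 6.x appears to keep the Docker service settings in `/boot/config/docker.cfg` on the flash drive, so flipping `DOCKER_ENABLED` there before the array starts should keep Docker from auto-starting:

```shell
# Assumption: Docker service settings live in /boot/config/docker.cfg
CFG="${CFG:-/boot/config/docker.cfg}"
if [ -f "$CFG" ]; then
  # flip the service flag so Docker does not start with the array
  sed -i 's/^DOCKER_ENABLED="yes"/DOCKER_ENABLED="no"/' "$CFG"
fi
```

If that file isn't where Docker autostart is controlled on your version, grep the flash config directory for DOCKER to find the right flag.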


What I also could not find in the v6 documentation is how to use the "safe mode" that was apparently added in older versions. Has it been deprecated?
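As far as I can tell (again an assumption, not from the docs), safe mode is still there as an entry in the syslinux boot menu on the flash drive; it boots the same kernel with "unraidsafemode" appended so plugins don't load. You can check whether your flash has the entry:

```shell
# Assumption: safe mode is a boot menu entry in syslinux.cfg that appends
# "unraidsafemode" to the kernel line; this just checks it is present.
SYSLINUX="${SYSLINUX:-/boot/syslinux/syslinux.cfg}"
if [ -f "$SYSLINUX" ]; then
  grep -n 'unraidsafemode' "$SYSLINUX" || true
fi
```

If it shows up, you select that entry from the boot menu on the console at startup.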



OK... here is all the output I think might be relevant. Look at the Docker errors:



I'm no expert, but the only errors I found when digging via SSH are:


root@bTower:/mnt/cache# docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?

root@bTower:/mnt/cache# tail /var/log/docker.log
time="2016-10-14T10:03:19.713510985+02:00" level=error msg="migration failed for 4921b0eb883769bdb272078d0ca0889b0b1d1d9e808311ca208d8f6ae341e5d5, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
time="2016-10-14T10:03:19.713527499+02:00" level=error msg="migration failed for 54f1994713ff8706ba5d602a0c4bffdae7f923c0f7cb113fbf2c291e8b9da7c6, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
time="2016-10-14T10:03:19.713543989+02:00" level=error msg="migration failed for 283c3da6cfa67eb155d3e745e7f7e2d4054170cb5b71dfa089931ce30a07f354, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
time="2016-10-14T10:03:19.713560468+02:00" level=error msg="migration failed for 797f5e4995e41ec89f013370a107c8dabb07790ba2c7814125bd770ae3a0ff09, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
time="2016-10-14T10:03:19.713576754+02:00" level=error msg="migration failed for a1d1943feb6efbd24717645960729f060a44da792b67f59399afb15ab35a4b9b, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
time="2016-10-14T10:03:19.713593781+02:00" level=error msg="migration failed for 9168b1d3ced4cc883ec1f6f82595eb19859236177d2094acb47dc52c8de32bf7, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
time="2016-10-14T10:03:19.713610168+02:00" level=error msg="migration failed for fda61ed038fa3277d2bfe121b797869541734c7b88da8e64d02e5a5fd1d0a2ed, err: open /var/lib/docker/graph/df2d4f955de259f76569c58e5a32e836c7ccd788e686fdd1d688158e1e958315/.migration-diffid: no such file or directory"
time="2016-10-14T10:03:19.889907013+02:00" level=error msg="Graph migration failed: \"open /var/lib/docker/.migration-v1-images.json: no space left on device\". Your old graph data was found to be too inconsistent for upgrading to content-addressable storage. Some of the old data was probably not upgraded. We recommend starting over with a clean storage directory if possible."
time="2016-10-14T10:03:19.889966418+02:00" level=info msg="Graph migration to content-addressability took 36003.30 seconds"
time="2016-10-14T10:03:20.103472277+02:00" level=fatal msg="Error starting daemon: Error initializing network controller: error obtaining controller instance: mkdir /var/lib/docker/network: no space left on device"
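If I read those last lines right, the "no space left on device" is about the filesystem *inside* docker.img (the loop mount on /var/lib/docker), not the array disks, which still have plenty of room. A quick sanity check would be (my assumption of where to look, based on the errors):

```shell
# The loopback-mounted docker image, not the array, is what filled up.
MNT="${MNT:-/var/lib/docker}"
if mountpoint -q "$MNT" 2>/dev/null; then
  df -h "$MNT"   # Use% near 100 means docker.img itself is full
fi
```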


root@bTower:/# tail /var/log/syslog
Oct 14 12:28:26 bTower emhttp: err: shcmd: shcmd (221): exit status: 1
Oct 14 12:31:29 bTower sshd[29498]: Accepted password for root from port 57006 ssh2
Oct 14 12:43:55 bTower emhttp: shcmd (236): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/disk1/docker.img' /var/lib/docker 10 |& logger
Oct 14 12:43:55 bTower root: /mnt/disk1/docker.img is in-use, cannot mount
Oct 14 12:43:55 bTower emhttp: err: shcmd: shcmd (236): exit status: 1
Oct 14 12:59:23 bTower emhttp: shcmd (251): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/disk1/docker.img' /var/lib/docker 10 |& logger
Oct 14 12:59:23 bTower root: /mnt/disk1/docker.img is in-use, cannot mount
Oct 14 12:59:23 bTower emhttp: err: shcmd: shcmd (251): exit status: 1


And for good measure, here's what's currently running:


root@bTower:~# top
top - 12:31:41 up 12:29,  1 user,  load average: 0.00, 0.00, 0.00
Tasks: 275 total,   1 running, 273 sleeping,   0 stopped,   1 zombie
%Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  6172348 total,   314412 free,   106320 used,  5751616 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  4933860 avail Mem

1654 root      20   0    9680   2568   2112 S   0.3  0.0   1:13.57 cpuload
29559 root      20   0   16600   2804   2224 R   0.3  0.0   0:00.04 top
    1 root      20   0    4372   1648   1540 S   0.0  0.0   0:09.72 init
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   0:03.10 ksoftirqd/0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:+
    7 root      20   0       0      0      0 S   0.0  0.0   0:26.76 rcu_preempt
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_sched
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_bh
   10 root      rt   0       0      0      0 S   0.0  0.0   0:00.13 migration/0
   11 root      rt   0       0      0      0 S   0.0  0.0   0:00.06 migration/1
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.09 ksoftirqd/1
   14 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/1:+
   15 root      rt   0       0      0      0 S   0.0  0.0   0:00.07 migration/2
   16 root      20   0       0      0      0 S   0.0  0.0   0:00.08 ksoftirqd/2
   17 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kworker/2:0
   19 root      rt   0       0      0      0 S   0.0  0.0   0:00.09 migration/3

root@bTower:~# ps -f
root     29525 29498  0 12:31 pts/0    00:00:00 -bash
root     29851 29525  0 12:32 pts/0    00:00:00 ps -f

root@bTower:~# free
              total        used        free      shared  buff/cache   available
Mem:        6172348      105744      314824      588364     5751780     4934288
Swap:             0           0           0

root@bTower:/mnt/cache# df
Filesystem       1K-blocks       Used  Available Use% Mounted on
rootfs             3017320     585464    2431856  20% /
tmpfs              3086172        188    3085984   1% /run
devtmpfs           3017336          0    3017336   0% /dev
cgroup_root        3086172          0    3086172   0% /sys/fs/cgroup
tmpfs               131072       2928     128144   3% /var/log
/dev/sda1         15621120     380952   15240168   3% /boot
/dev/md1        5859568668 5044055040  815513628  87% /mnt/disk1
/dev/md2        5858435620 2513603448 3344832172  43% /mnt/disk2
/dev/sdb1        488386552  109418000  378409024  23% /mnt/cache
shfs           11718004288 7557658488 4160345800  65% /mnt/user0
shfs           12206390840 7667076488 4538754824  63% /mnt/user
/dev/loop0        10485760    9604996     655004  94% /var/lib/docker
/dev/loop1         1048576      16804     926300   2% /etc/libvirt


Again, that /var/lib/docker line shows another Docker issue: 94% used. But maybe that's expected; I don't know.
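If the image really is too small, a sketch of growing it from the console (my assumption of how it could be done, with Docker stopped; the supported route is changing the size in the web interface). It assumes docker.img holds a btrfs filesystem, which I believe is Unraid's default, mounted at /var/lib/docker via /dev/loop0 as the df output shows:

```shell
# Sketch, with Docker stopped: grow the sparse image file, then the
# filesystem inside it. Assumes a btrfs image attached to /dev/loop0.
IMG="${IMG:-/mnt/disk1/docker.img}"
if [ -f "$IMG" ]; then
  truncate -s +10G "$IMG"                      # add 10 GiB to the image file
  losetup -c /dev/loop0                        # tell the loop device about the new size
  btrfs filesystem resize max /var/lib/docker  # grow btrfs to fill it
fi
```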


Did you read the release notes for the update? They contain special manual steps relating to Docker containers and VMs that need to be carried out. The thread containing them also has a post covering common issues encountered and their resolutions.


It has been pointed out that the Update button does not make it obvious that this should be done; this is something to be made more obvious in the future.


No, I had not read the "release notes" in depth, but I had read the "upgrade notes" just above them, which list only three reasonable points. I had no reason to believe there would be more upgrade notes hidden below the release notes. :-\ I skimmed those, having already read them in the blog.


Here I had typed a really satisfying rant about Unraid info being chaotic (call it constructive criticism), but that won't help me solve this problem now, so I removed it.


Anyway, what could I have missed? I'm reading it now, but besides not disabling auto-start, I'm not aware of anything I did wrong.

How can I disable autostart from the console, i.e. without the web interface?


And all of a sudden (about 18 hours after the update) the Dynamix web interface started working, even on the custom port 8888 that I had previously set.


Strange, as the machine was idling. The Docker tab is missing, so I'm not sure what to do now, or whether a reboot at this point is unsafe or recommended. Insights welcome.


Here it says:


However, there is a one-time update procedure that each container will need to go through in order to point it towards that new API going forward, even if the container itself truly isn't in need of an update.


but I can't find what that procedure is. Or is it just referring to the image hash that needs to be computed?


Since I can't work on this again for the next few hours, I decided to just reboot the server and hope for the best.

When clicking Stop for the drive array, it now just says "Unmounting disks...Retry unmounting disk share(s)...".


And I can't find out what's blocking it.

root@bTower:~# ls /mnt
disk1/  disks/
root@bTower:~# lsof | grep /mnt/disk1/
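So lsof shows nothing holding the mount. One possibility I can think of (an assumption based on the df output earlier, where /dev/loop0 is backed by docker.img on disk1): lsof reports processes, not kernel loop devices, and a still-attached loop device would keep the disk "in use" without any process showing up. Listing the loop devices would confirm:

```shell
# lsof shows processes, not loop devices. If docker.img (on disk1) is
# still attached to a loop device, that alone keeps the disk "in use":
losetup -a
# If it lists /mnt/disk1/docker.img, detaching it (after stopping Docker)
# should let the unmount go through, e.g.:
#   losetup -d /dev/loop0
```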


Well, since I seem to be making progress (a.k.a. having luck), I'll keep documenting so that others can learn from this, or tell me how it's supposed to be done.


Current state: after a reboot and increasing the Docker image size, Dockers work, but there is still no web interface.


I was able to change the Docker image size because the web interface momentarily worked; afterwards I had to force a reset via the console.

I'm hoping the web interface magically starts working after 20 hours of uptime like it did last time, but that's not a solution.


edit: the web interface just started working, after 11.5 hours of uptime. No idea why it took so long. Starting a check right now to rule out that one kind of gremlin.

