dubbly Posted January 1, 2020

Happy New Year! I had a disk in my pool fail, so I upgraded one of my dual parity drives and moved the smaller parity drive into the pool, using the parity swap procedure to copy parity. The procedure completed and the failed drive is now rebuilding. My server is running 6.8 and shows the error "docker failed to start". The system has dual parity and dual cache drives. Could someone take a look at the attached log and give me some advice on what to do? Secondly, should I wait for the drive to finish rebuilding before trying to fix the docker issue? Thank you in advance!

tower-diagnostics-20200101-0841.zip
trurl Posted January 1, 2020

Your docker image doesn't seem to be mounted for some reason. It is also far larger than necessary. Why have you set it to 70G? Have you had problems filling it? 20G should be more than enough, and making it larger will not help anything. Your system share also has files on the array instead of all on cache like it should. The libvirt image isn't mounted either. Do you have any VMs?

Go to Settings - Docker, disable and delete the docker image. Then post a new diagnostic.
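If you want to confirm this from the console yourself, something like the following should work. This is just a sketch assuming Unraid's standard layout (docker.img loop-mounted at /var/lib/docker, shares under /mnt); adjust paths to match your setup.

```shell
# Count loop mounts of docker.img (normally exactly one when Docker is running)
docker_mounted=$(losetup -a 2>/dev/null | grep -c 'docker.img')
echo "docker.img loop mounts found: $docker_mounted"

# Any directories listed here mean part of the system share sits on an
# array disk instead of the cache
stray=$(find /mnt/disk[0-9]* -maxdepth 1 -name system -type d 2>/dev/null | wc -l)
echo "array disks holding a system share: $stray"
```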
dubbly Posted January 1, 2020

Thank you. A few answers:

Docker size: a year ago I had problems with it filling up and increased the size. I resolved that issue, but I didn't know whether I could reduce the size without causing a problem. Should I reduce the size at the same time that I delete the docker image?

System share has files on the array: not sure how this happened. Any suggestion on how to resolve it?

libvirt: I have one VM that I haven't used in a while; in my opinion this is the lowest priority of the issues.
trurl Posted January 1, 2020

23 minutes ago, dubbly said:

    Should I reduce the size at the same time that I delete the docker image? System share has files on the array: not sure how this happened. Suggestion in how to resolve?

After you disable and delete the docker image, do not enable it again. You can change the size later when you enable it, but for now I want to see whether any of your system share is still on the array. What often happens is someone enables the Docker and/or VM service before installing cache, so those images get created on the array. And since mover can't move open files, they get stuck there.
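You can check the "open files" point yourself before running Mover. A rough sketch (lsof should be available on Unraid; the disk path is just an example):

```shell
# Anything holding files open under the system share will prevent
# mover from relocating them
open_entries=$(lsof +D /mnt/disk3/system 2>/dev/null | wc -l)
if [ "$open_entries" -gt 0 ]; then
    echo "something still has files open under /mnt/disk3/system"
else
    echo "no open files under /mnt/disk3/system; mover can relocate them"
fi
```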
dubbly Posted January 1, 2020

Thank you. The image has been deleted, Docker is set to not restart, and the file is attached.

tower-diagnostics-20200101-1004.zip
trurl Posted January 1, 2020

The system share still has files on the array. Go to Settings - VM Manager and disable it, then go to Main - Array Operation and click Move Now. When it completes, post a new diagnostic.
dubbly Posted January 1, 2020

Completed. VM disabled and move finished!

tower-diagnostics-20200101-1018.zip
trurl Posted January 1, 2020

Must be a duplicate on disk3. Go to Settings - Scheduler Settings - Mover Schedule and enable Mover Logging. Then run Mover again and post a syslog.
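With Mover Logging enabled, every file the mover handles (or skips) shows up in the syslog, so a quick filter from the console shows what it did. A sketch, assuming the standard syslog location:

```shell
# Count and show recent mover-related syslog entries
mover_lines=$(grep -ci 'mover' /var/log/syslog 2>/dev/null)
mover_lines=${mover_lines:-0}
echo "mover-related syslog lines: $mover_lines"
grep -i 'mover' /var/log/syslog 2>/dev/null | tail -n 20
```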
dubbly Posted January 1, 2020

Done. The mover finished almost instantly. The syslog is attached.

tower-syslog-20200101-1832.zip
trurl Posted January 1, 2020

Yes, the libvirt image is a duplicate. I think the one on cache is probably the current one. What do you get from the command line with these?

ls -lah /mnt/cache/system/libvirt
ls -lah /mnt/disk3/system/libvirt
dubbly Posted January 1, 2020

I would be willing to wipe the VM and start over; I only use it very occasionally. See below:

root@tower:~# ls -lah /mnt/cache/system/libvirt
total 1.0G
drwxrwxrwx 1 root   root    22 Jul 21 14:10 ./
drwxrwxrwx 1 nobody users   26 Jul 21 14:10 ../
-rw-rw-rw- 1 nobody users 1.0G Jan  1 10:13 libvirt.img

root@tower:~# ls -lah /mnt/disk3/system/libvirt
total 102M
drwxrwxrwx 2 root   root    25 Jul 21 14:10 ./
drwxrwxrwx 3 root   root    21 Jul 21 14:10 ../
-rw-rw-rw- 1 nobody users 1.0G Jul 22 20:02 libvirt.img

Again, thank you for your assistance.
trurl Posted January 2, 2020

Looks like the one on disk3 has the newer timestamp. See if you can delete one of them in VM Manager Settings, then delete the other manually and start over.
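If you end up deleting both manually, a console sketch like this would do it (assumes the Docker and VM services are both stopped first; the paths come from your ls output):

```shell
# Remove whichever copies of libvirt.img exist, counting what was deleted
removed=0
for img in /mnt/cache/system/libvirt/libvirt.img \
           /mnt/disk3/system/libvirt/libvirt.img; do
    if [ -e "$img" ]; then
        rm -v "$img"
        removed=$((removed + 1))
    fi
done
echo "libvirt images removed: $removed"
```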
dubbly Posted January 2, 2020

Done. Both are deleted. What is your suggested next step to get Docker up and going? Should I reduce the image size down to 20GB?
trurl Posted January 2, 2020

Yes, change the docker image size to 20GB; enabling Docker will recreate it on cache. Then Previous Apps on the Apps page will use the settings you had before to add your dockers again.
dubbly Posted January 2, 2020

Thank you, I will give it a go. Any idea what caused Docker and the VM to suddenly have a problem?