golli53 Posted April 28, 2020

I tried to stop my array and it's been stuck on `Retry unmounting disk share(s)...` for the last 30 minutes. Some diagnostics from the command line are below (I can no longer access diagnostics from the GUI). Prior to this, I noticed one of my Docker containers was having weird issues... it seemingly stopped after I killed it, but kept being listed as running in `docker ps`. I was using `docker exec` to run some commands in that container, and I think some processes got stuck inside it.

```
root@Tower:/# tail -n 5 /var/log/syslog
Apr 28 14:11:36 Tower emhttpd: Unmounting disks...
Apr 28 14:11:36 Tower emhttpd: shcmd (43474): umount /mnt/cache
Apr 28 14:11:36 Tower root: umount: /mnt/cache: target is busy.
Apr 28 14:11:36 Tower emhttpd: shcmd (43474): exit status: 32
Apr 28 14:11:36 Tower emhttpd: Retry unmounting disk share(s)...
root@Tower:/# lsof /mnt/cache
root@Tower:/# fuser -mv /mnt/cache
                     USER        PID ACCESS COMMAND
/mnt/cache:          root     kernel mount  /mnt/cache
```
golli53 Posted April 28, 2020 (Author)

I think the docker image on /mnt/cache that's mounted on /dev/loop2 is preventing the unmount. I killed a zombie container process accessing /dev/loop2, but I still cannot detach /dev/loop2 and it's still stuck trying to unmount. Tried everything here: https://stackoverflow.com/questions/5881134/cannot-delete-device-dev-loop0

```
root@Tower:/# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
/dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
root@Tower:/# lsof /dev/loop2
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
container 15050 root    4u  FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
container 15050 root    7u  FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
container 15050 root    8u  FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
container 15050 root    9u  FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
root@Tower:/# kill 15050
root@Tower:/# lsof /dev/loop2
root@Tower:/# losetup -d /dev/loop2   # fails silently
root@Tower:/# echo $?
0
root@Tower:/# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
/dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
root@Tower:/# lsof | grep loop2
loop2     12310 root  cwd       DIR   0,2   440     2 /
loop2     12310 root  rtd       DIR   0,2   440     2 /
loop2     12310 root  txt   unknown               /proc/12310/exe
root@Tower:/# kill -9 12310   # not sure what this is, but killing it fails
root@Tower:/# lsof | grep loop2
loop2     12310 root  cwd       DIR   0,2   440     2 /
loop2     12310 root  rtd       DIR   0,2   440     2 /
loop2     12310 root  txt   unknown               /proc/12310/exe
root@Tower:/# modprobe -r loop && modprobe loop   # try to reload the module, but it's builtin
modprobe: FATAL: Module loop is builtin.
```
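As a side note, picking the right loop device out of `losetup` by eye is error-prone. A small sketch (the function name is made up, and it assumes the same `losetup` column layout shown above, where BACK-FILE is the 6th field) that prints whichever loop device is backed by the Docker image:

```shell
# Hypothetical helper, not from the thread: reads `losetup` output on stdin
# and prints the loop device whose backing file is docker.img.
find_docker_loop() {
  awk '$6 ~ /docker\.img$/ { print $1 }'
}

# Usage on a live system (assumes the same losetup output format):
#   losetup | find_docker_loop
```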
golli53 Posted April 28, 2020 (Author, marked as Solution)

Ok, finally solved it. In case anyone runs into this, `umount -l /dev/loop2` worked.
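For context on why the lazy flag succeeds here: `umount -l` detaches the mount point from the filesystem namespace immediately and lets the kernel finish the actual cleanup once the last holder goes away, which is why it works where a plain `umount` keeps reporting "target is busy". A minimal sketch of checking mount state before detaching (the `is_mounted` helper is a made-up name, and the mounts-table argument exists only so it can be exercised against sample data; on a real system it defaults to `/proc/self/mounts`):

```shell
# Sketch only: report whether a path appears as a mount point in a
# mounts table (field 2 of each /proc/self/mounts-style line).
is_mounted() {
  local target="$1" table="${2:-/proc/self/mounts}"
  awk -v t="$target" '$2 == t { found = 1 } END { exit !found }' "$table"
}

# Usage (as root), mirroring the fix from this post:
#   if is_mounted /mnt/cache; then umount -l /dev/loop2; fi
```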
hoff Posted February 14, 2021

I had this issue too... any idea if there is a fix? My array shutdown is always dirty when the power goes out because of this.
robinh Posted March 22, 2021

Any news on this issue? I also always have unclean shutdowns. I found this post and figured out that `umount -l /dev/loopX` works for me as well, but there must be a clean way to do a reboot.
Aegisnir Posted October 27, 2021

Did anyone figure out a solution to this? I get this every once in a while when attempting to stop the array. Most other times it stops as expected, so it's hard for me to pinpoint the issue.
kizer Posted October 27, 2021

This is what I always do, and I haven't had any issues for a while:

1. Stop Dockers
2. Stop array
3. Shut down machine
JonathanM Posted October 27, 2021

You forgot:

1a. Stop VMs
1b. Close SSH sessions, if any

All of these items are theoretically handled by the stock Unraid shutdown, but it's not always foolproof, and it's much easier to work through an orderly shutdown than to deal with a hang and a possible unclean shutdown.
KptnKMan Posted September 16, 2022

I had this strange issue today when trying to shut down my system due to some issues. I found that these commands worked:

```
killall -Iv docker
killall -Iv containerd
umount -l /dev/loop2
```

It only started happening recently, and it's /mnt/cache that is unable to unmount. It seems to be Docker-related, for me at least. I'm going to put this into a UserScripts script in case I need it again, so I can just fire it off.
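The three commands above could be wrapped roughly like this for a User Scripts entry. This is only a sketch of that idea, not a tested Unraid script: the function name and the default device are assumptions (the loop device may not be /dev/loop2 on every system), and it must run as root.

```shell
#!/bin/bash
# Sketch of a User Scripts helper based on the commands in the post above.
# force_release_docker_loop is a made-up name; adjust the device to match
# what `losetup` shows for docker.img on your system.
force_release_docker_loop() {
  local dev="${1:-/dev/loop2}"
  # Kill the Docker daemons first so nothing reopens the image...
  killall -Iv docker 2>/dev/null
  killall -Iv containerd 2>/dev/null
  # ...then lazily detach; the kernel finishes the unmount once it is idle.
  umount -l "$dev"
}

# Usage (as root), only when the array is already stuck on
# "Retry unmounting disk share(s)...":
#   force_release_docker_loop /dev/loop2
```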
Nogami Posted June 12

On 9/16/2022 at 6:01 AM, KptnKMan said:

> I found that these commands worked:
> killall -Iv docker
> killall -Iv containerd
> umount -l /dev/loop2

Thanks, this worked here as well when my ZFS pool hung up after I did a lot of messing around converting my appdata and cache to ZFS drives for the compression.
sunwind Posted July 8

For some reason I can never stop my pool normally. Found this thread, and `umount -l /dev/loop2` worked instantly. So it's something in that area, though I have no idea what exactly.
Can0n Posted July 9

On 4/28/2020 at 2:41 PM, golli53 said:

> Ok, finally solved it. In case anyone runs into this, `umount -l /dev/loop2` worked

Thank you for this. I was pulling some drives from a zpool to redo them from a 4-drive config to a 2-drive config, and I started and stopped too soon; docker.img held up the stop process. As soon as I saw your command I ran it, and everything unmounted cleanly. Still relevant 3 years later! Thank you!
dnLL Posted July 15

Yup, same issue on Unraid 6.12.2 (while trying to reboot to install 6.12.3). I always manually stop my Dockers/VMs before stopping the array and rebooting; sadly, the array wouldn't stop.

```
Jul 14 23:16:52 server emhttpd: Unmounting disks...
Jul 14 23:16:52 server emhttpd: shcmd (38816): umount /mnt/cache
Jul 14 23:16:52 server root: umount: /mnt/cache: target is busy.
Jul 14 23:16:52 server emhttpd: shcmd (38816): exit status: 32
Jul 14 23:16:52 server emhttpd: Retry unmounting disk share(s)...
Jul 14 23:16:57 server emhttpd: Unmounting disks...
Jul 14 23:16:57 server emhttpd: shcmd (38817): umount /mnt/cache
Jul 14 23:16:57 server root: umount: /mnt/cache: target is busy.
Jul 14 23:16:57 server emhttpd: shcmd (38817): exit status: 32
Jul 14 23:16:57 server emhttpd: Retry unmounting disk share(s)...
Jul 14 23:17:02 server emhttpd: Unmounting disks...
Jul 14 23:17:02 server emhttpd: shcmd (38818): umount /mnt/cache
Jul 14 23:17:02 server root: umount: /mnt/cache: target is busy.
Jul 14 23:17:02 server emhttpd: shcmd (38818): exit status: 32
Jul 14 23:17:02 server emhttpd: Retry unmounting disk share(s)...
```

Unmounting /dev/loop2 immediately fixed it.
hansolo77 Posted July 15 Share Posted July 15 I have this problem too. Been trying to upgrade to a higher capacity drive. Need to stop array to remove the old drive and again to install the new drive. System gets stuck stopping the array. Only solution I know of is to then do a forced reboot, which causes a dirty bit and a parity check on reboot. I know everything is good so I cancel the check, but this is a crazy thing that's never happened before. Did something change when we all upgraded to 6.12? Quote Link to comment
hansolo77 Posted July 15 Share Posted July 15 Quote This release resolves an issue where Docker does not properly stop when the array Stops, which can result in an unclean shutdown. Here's Hoping! Quote Link to comment
ljm42 Posted July 15

44 minutes ago, dnLL said:

> Yup, same issue on Unraid 6.12.2 (while trying to reboot to install 6.12.3)

Yeah, there are special instructions in the 6.12.3 announce post to help folks get past that: https://forums.unraid.net/topic/142116-unraid-os-version-6123-available/

Glad you figured it out on your own, kind of impressed really :)
dnLL Posted July 15 Share Posted July 15 Yeah, there are special instructions in the 6.12.3 announce post to help folks get past that: https://forums.unraid.net/topic/142116-unraid-os-version-6123-available/ Glad you figured it out on your own, kind of impressed really : ) Great, very happy to read this. I checked with lsof and interestingly enough nothing is returned for /mnt/cache since every docker is stopped. I should have checked with lsof on /dev/loop2, thought about it too late. Anyways. Sent from my Pixel 7 Pro using Tapatalk 1 Quote Link to comment