golli53 Posted April 28, 2020

I tried to stop my array and it has been stuck on `Retry unmounting disk share(s)...` for the last 30 minutes. Some diagnostics from the command line are below (I can no longer access diagnostics from the GUI). Prior to this, I noticed one of my dockers was having weird issues: it seemingly stopped after I killed it, but kept being listed as running in `docker ps`. I was using `docker exec` to run some commands in that container and I think some processes got stuck in it.

```
root@Tower:/# tail -n 5 /var/log/syslog
Apr 28 14:11:36 Tower emhttpd: Unmounting disks...
Apr 28 14:11:36 Tower emhttpd: shcmd (43474): umount /mnt/cache
Apr 28 14:11:36 Tower root: umount: /mnt/cache: target is busy.
Apr 28 14:11:36 Tower emhttpd: shcmd (43474): exit status: 32
Apr 28 14:11:36 Tower emhttpd: Retry unmounting disk share(s)...
root@Tower:/# lsof /mnt/cache
root@Tower:/# fuser -mv /mnt/cache
                     USER     PID    ACCESS COMMAND
/mnt/cache:          root     kernel mount  /mnt/cache
```
golli53 Posted April 28, 2020 (Author)

I think the docker image on /mnt/cache that's mounted on /dev/loop2 is preventing the unmount. I killed a zombie container process accessing /dev/loop2, but I still cannot detach /dev/loop2 and the array is still stuck trying to unmount. Tried everything here: https://stackoverflow.com/questions/5881134/cannot-delete-device-dev-loop0

```
root@Tower:/# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
/dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
root@Tower:/# lsof /dev/loop2
COMMAND     PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
container 15050 root 4u FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
container 15050 root 7u FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
container 15050 root 8u FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
container 15050 root 9u FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
root@Tower:/# kill 15050
root@Tower:/# lsof /dev/loop2
root@Tower:/# losetup -d /dev/loop2   # fails silently
root@Tower:/# echo $?
0
root@Tower:/# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
/dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
root@Tower:/# lsof | grep loop2
loop2 12310 root cwd DIR     0,2 440 2 /
loop2 12310 root rtd DIR     0,2 440 2 /
loop2 12310 root txt unknown           /proc/12310/exe
root@Tower:/# kill -9 12310   # not sure what this is, but killing it fails
root@Tower:/# lsof | grep loop2
loop2 12310 root cwd DIR     0,2 440 2 /
loop2 12310 root rtd DIR     0,2 440 2 /
loop2 12310 root txt unknown           /proc/12310/exe
root@Tower:/# modprobe -r loop && modprobe loop   # try to reload the module, but it's builtin
modprobe: FATAL: Module loop is builtin.
```
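For anyone scripting around this later: the loop device backing docker.img can be picked out of `losetup` output with a little awk. This is a sketch run against the sample output above; column 6 is BACK-FILE on this util-linux version, so check your own `losetup` layout first.

```shell
# Sample `losetup` output from the post above; on a live system use:
#   losetup_output="$(losetup)"
losetup_output='NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO LOG-SEC
/dev/loop1 0 0 1 1 /boot/bzfirmware 0 512
/dev/loop2 0 0 1 0 /mnt/cache/system/docker/docker.img 0 512
/dev/loop0 0 0 1 1 /boot/bzmodules 0 512'

# Field 6 is the backing file; print the device name whose backing
# file ends in docker.img (the header line never matches).
docker_loop=$(awk '$6 ~ /docker\.img$/ {print $1}' <<<"$losetup_output")
echo "$docker_loop"
```

On the system above this prints `/dev/loop2`, which is the device the later posts end up lazy-unmounting.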
golli53 Posted April 28, 2020 (Author, marked as Solution)

OK, finally solved it. In case anyone runs into this, `umount -l /dev/loop2` worked.
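For context on why this works where plain `umount` keeps reporting `target is busy`: a lazy unmount (`umount -l`) detaches the mount from the namespace immediately and lets the kernel finish the cleanup once the last reference is dropped. A dry-run sketch of the sequence (device and mountpoint as in the posts above; it only prints commands unless you set DRY_RUN=0 on a live console):

```shell
#!/bin/bash
# DRY_RUN=1 (the default here) only prints what would be executed.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}
run umount -l /dev/loop2   # lazily detach the docker image loop device
run umount /mnt/cache      # then the cache unmount can go through
```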
hoff Posted February 14, 2021

I had this issue too... any idea if there is a fix? My array shutdown is always dirty when the power goes out because of this.
robinh Posted March 22, 2021

Any news on this issue? I also always have unclean shutdowns. Found this post and figured out that `umount -l /dev/loopX` works for me as well. But there must be a cleaner way to do a reboot.
Aegisnir Posted October 27, 2021

Did anyone figure out a solution to this? I get this every once in a while when attempting to stop the array. Most other times it stops as expected, so it's hard for me to pinpoint the issue.
kizer Posted October 27, 2021

This is what I always do and I've not had any issues for a while:

1. Stop dockers
2. Stop array
3. Shut down machine
JonathanM Posted October 27, 2021

You forgot:

1a. Stop VMs
1b. Close SSH sessions, if any

All of these items are theoretically handled by the stock Unraid shutdown, but it's not always foolproof, and it's much easier to work through an orderly shutdown than to deal with a hang and a possible unclean shutdown.
KptnKMan Posted September 16, 2022

I had this strange issue today when trying to shut down my system due to some issues. I found that these commands worked:

```
killall -Iv docker
killall -Iv containerd
umount -l /dev/loop2
```

It only started happening recently, and it's /mnt/cache that is unable to unmount. It seems to be Docker-related, for me at least. I'm going to put this into a User Scripts script in case I need it again, so I can just fire it off.
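A User Scripts version of those commands might look something like the following sketch. It is hypothetical: the docker.img path is the stock Unraid location from earlier in the thread, and rather than hard-coding /dev/loop2 it looks the device up and only acts if the image is actually attached.

```shell
#!/bin/bash
# Print the loop device whose backing file (column 6 of `losetup -l`)
# matches the given path; prints nothing if it is not attached.
loopdev_for() {
  losetup -l 2>/dev/null | awk -v f="$1" '$6 == f {print $1}'
}

dev="$(loopdev_for /mnt/cache/system/docker/docker.img)"
if [ -n "$dev" ]; then
  killall -Iv docker containerd   # stop lingering docker/containerd processes
  umount -l "$dev"                # lazily detach the docker image loop device
else
  echo "docker.img loop device not attached; nothing to do"
fi
```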
Nogami Posted June 12, 2023

On 9/16/2022 at 6:01 AM, KptnKMan said:

I found that these commands worked:
killall -Iv docker
killall -Iv containerd
umount -l /dev/loop2

Thanks, this worked here as well when my ZFS pool hung up after a lot of messing around converting my appdata and cache to ZFS drives for the compression.
sunwind Posted July 8, 2023

I can never stop my pool normally, for some reason. Found this thread; `umount -l /dev/loop2` worked instantly. So it's something there, anyway; no idea what, though.
Can0n Posted July 9, 2023

On 4/28/2020 at 2:41 PM, golli53 said:

Ok, finally solved it. In case anyone runs into this, `umount -l /dev/loop2` worked

Thank you for this. I was pulling some drives from a zpool to redo them from a 4-drive config to a 2-drive config, started and stopped too soon, and docker.img held up the stop process. As soon as I saw your command I ran it, and everything unmounted cleanly. Still relevant 3 years later. Thank you!
dnLL Posted July 15, 2023

Yup, same issue on Unraid 6.12.2 (while trying to reboot to install 6.12.3). I always manually stop my dockers/VMs before stopping the array and rebooting; sadly, the array wouldn't stop.

```
Jul 14 23:16:52 server emhttpd: Unmounting disks...
Jul 14 23:16:52 server emhttpd: shcmd (38816): umount /mnt/cache
Jul 14 23:16:52 server root: umount: /mnt/cache: target is busy.
Jul 14 23:16:52 server emhttpd: shcmd (38816): exit status: 32
Jul 14 23:16:52 server emhttpd: Retry unmounting disk share(s)...
Jul 14 23:16:57 server emhttpd: Unmounting disks...
Jul 14 23:16:57 server emhttpd: shcmd (38817): umount /mnt/cache
Jul 14 23:16:57 server root: umount: /mnt/cache: target is busy.
Jul 14 23:16:57 server emhttpd: shcmd (38817): exit status: 32
Jul 14 23:16:57 server emhttpd: Retry unmounting disk share(s)...
Jul 14 23:17:02 server emhttpd: Unmounting disks...
Jul 14 23:17:02 server emhttpd: shcmd (38818): umount /mnt/cache
Jul 14 23:17:02 server root: umount: /mnt/cache: target is busy.
Jul 14 23:17:02 server emhttpd: shcmd (38818): exit status: 32
Jul 14 23:17:02 server emhttpd: Retry unmounting disk share(s)...
```

Unmounting /dev/loop2 immediately fixes it.
hansolo77 Posted July 15, 2023

I have this problem too. I've been trying to upgrade to a higher-capacity drive: I need to stop the array to remove the old drive and again to install the new one, and the system gets stuck stopping the array. The only solution I know of is a forced reboot, which causes a dirty bit and a parity check on reboot. I know everything is good, so I cancel the check, but this is a crazy thing that's never happened before. Did something change when we all upgraded to 6.12?
hansolo77 Posted July 15, 2023

Quote:

"This release resolves an issue where Docker does not properly stop when the array Stops, which can result in an unclean shutdown."

Here's hoping!
ljm42 Posted July 15, 2023

44 minutes ago, dnLL said:

Yup, same issue on Unraid 6.12.2 (while trying to reboot to install 6.12.3)

Yeah, there are special instructions in the 6.12.3 announce post to help folks get past that: https://forums.unraid.net/topic/142116-unraid-os-version-6123-available/

Glad you figured it out on your own; kind of impressed, really :)
dnLL Posted July 15, 2023

Great, very happy to read this. I checked with lsof and, interestingly enough, nothing is returned for /mnt/cache since every docker is stopped. I should have checked lsof on /dev/loop2; I thought about it too late. Anyway.
je82 Posted November 5, 2023

On 10/27/2021 at 8:26 PM, kizer said:

This is what I always do and I've not had any issues for a while.
1. Stop dockers
2. Stop array
3. Shut down machine

Yeah, I made the mistake of stopping the array while dockers were still being auto-started. I thought Unraid always shuts down all the dockers when stopping the array, but it appears that logic breaks if docker auto-starts are still in progress; I ended up not being able to unmount the share where the appdata for the dockers resided.
kizer Posted November 6, 2023

On 11/5/2023 at 5:54 AM, je82 said:

Yeah, I made the mistake of stopping the array while dockers were still being auto-started...

I'm kind of old school and often just take things into my own hands. Sure, Unraid is supposed to do this and that, but sometimes you just have to say "I got this." I've had too many lockups and other things over the years, so I just do it my way, and sure, it adds an additional layer of patience. I'd rather take 20 seconds out of my time than worry about the system glitching out and having to wait for a parity check because of an unclean shutdown or something else.
RocketSLC Posted November 14, 2023

I had this issue recently, where I was unable to unmount /mnt/cache even though all docker and VM services were stopped (as confirmed by `ps aux`). It turned out that the docker image was still mounted, and this did the trick:

```
umount /var/lib/docker
```

Once I did that, the cache unmounted immediately.
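A guarded sketch of that sequence, for anyone putting it in a script: the paths are the ones from this thread, `mountpoint` ships with util-linux, and the helper name here is made up.

```shell
#!/bin/bash
# Unmount a path only if it is actually a mountpoint, reporting either way.
umount_if_mounted() {
  if mountpoint -q "$1"; then
    umount "$1" && echo "unmounted $1"
  else
    echo "$1 is not mounted"
  fi
}

umount_if_mounted /var/lib/docker   # docker image mount first
umount_if_mounted /mnt/cache        # then the cache itself
```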
shaihulud Posted January 5

I recently ran into a similar issue on 6.12.6. Discovered that my dockers were not running, saw a "Docker service failed to start" message in my Docker tab, and found a bunch of errors in the logs like:

```
kernel: I/O error, dev loop2, sector 2764512 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 67, rd 0, flush 0, corrupt 0, gen 0
kernel: loop: Write error at byte offset 5152833536, length 4096.
```

Sounds like there is some issue writing to the docker image? So I tried to shut down the array; it wouldn't go down and got stuck on unmounting disks. I tried the suggestions in this thread, but unfortunately `umount /var/lib/docker` did not seem to work. Messages in the logs make it look like the docker image isn't the issue: the system also cannot unmount my cache drive and a drive in the array. I wound up going into the Open Files plugin and saw that several appdata folders were in use by `shfs`. I killed that process via the terminal, still no luck. I finally hit the Shutdown button and that worked cleanly, though I'm still not sure why. Seems like maybe a disk I/O issue?

My plan is to go into my docker configs and change `/mnt/user` to `/mnt/cache` where possible, to hopefully lighten the load on the FUSE filesystem. My only significant recent changes to the system were adding an additional NVMe ZFS pool (however, no data has been added to that yet, so I'm doubtful it's the culprit) and setting up a Graylog stack with docker-compose... I'm a little worried that writes from Graylog mucked things up, but I only have syslog and Plex feeding into there, so the writes shouldn't be *too* excessive. Anyone have any thoughts?
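One quick way to tell whether it's the docker image layer or the underlying pool complaining is to count the loop-device I/O errors in the syslog. A sketch against the sample lines above (on a live system, grep /var/log/syslog instead of the inline sample):

```shell
# Two of the sample kernel lines from the post above; live, read /var/log/syslog.
log='kernel: I/O error, dev loop2, sector 2764512 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 67, rd 0, flush 0, corrupt 0, gen 0'

# Count lines where the I/O error is reported against a loop device;
# errors against sdX/nvmeX devices would instead implicate the pool itself.
loop_errs=$(grep -c 'I/O error, dev loop' <<<"$log")
echo "loop-device I/O errors: $loop_errs"
```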
AwesomeAustn Posted March 2

On 11/14/2023 at 9:22 AM, RocketSLC said:

It turned out that the docker image was still mounted and this did the trick: umount /var/lib/docker

This worked for me. Thank you!