01111000 Posted October 9, 2017
I've had this issue for as long as I can remember, literally years. I'm on 6.3.5. Whenever I try to stop my array or reboot/shut down, it ALWAYS gets stuck on "Unmounting disks...Retry unmounting disk share(s)..." and I'm forced to use IPMI to force a reboot. I want to fix this once and for all; I'm trying to add a new disk right now but can't because of it. I stop ALL of my dockers and VMs before hitting Stop in the webui. How can I troubleshoot and resolve this? I'm sick and tired of this issue rearing its head every few years. I really appreciate any help; I'll provide whatever is needed. I'm extremely frustrated right now.
dlandon Posted October 9, 2017
Post diagnostics.
01111000 Posted October 10, 2017 (Author)
See attached. Thanks.
server-diagnostics-20171009-2011.zip
lankanmon Posted October 10, 2017
Do you use SSH? I have had issues where a screen session I'd started through the terminal kept accessing the array and prevented it from being unmounted. Look for anything that may be preventing the array from unmounting. Also, disable Docker in Settings and then try; just turning the containers off individually may not be enough. It may also be worth stopping the array on its own before hitting shutdown.
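If a stray screen session is the culprit, it's quick to check from the console (a minimal sketch, assuming GNU screen; session names will differ on your system):

screen -ls                    # list sessions that are still alive
screen -r <session>           # reattach, cd out of /mnt/..., then exit
screen -X -S <session> quit   # or kill the session outright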
dlandon Posted October 10, 2017
You are installing packages from /boot/packages/ - tcl, expect, openvpn. I'd recommend finding another way to do this: use the Nerd Pack for packages and a docker for OpenVPN. These packages may be holding up a shutdown.
EDIT: Also check this error:
Fix Common Problems: Error: Docker application binhex-deluge has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option
01111000 Posted October 10, 2017 (Author)
I believe a few plugins auto-installed those packages; I'm using peter sm's OpenVPN plugin, for example. I've fixed the deluge error as well. I'll give it a try and report back.
EDIT: Nope, same issue. I unmounted the unassigned drives, stopped OpenVPN, stopped all dockers and VMs, and used the Open Files plugin to kill any remaining processes.
dlandon Posted October 10, 2017
Start your server and then do a GUI shutdown. If something is holding up the shutdown, unRAID will force a shutdown after a timeout. Then post the log found at /boot/logs; it's saved from the last shutdown and should show what is preventing the shutdown. We can look at it and see what might be holding things up.
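To find and skim that log from the console, something like this works (a sketch: the saved file on your flash drive may be a zip like the diagnostics above, and the exact name will differ, so check the directory listing first):

ls -lt /boot/logs/                                          # newest files first
unzip -l /boot/logs/<newest>.zip                            # see what the zip contains
unzip -p /boot/logs/<newest>.zip '*syslog*' | tail -n 200   # skim the end of the syslog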
jj_uk Posted August 26, 2018
Did you solve this? I have the same issue.
jj_uk Posted August 26, 2018
I followed the above and turned off Docker in Settings, but the array still didn't stop, so I used the GUI to shut down. It has now restarted and diagnostics are attached. Any ideas?
tower1-diagnostics-20180826-1149.zip
dlandon Posted August 26, 2018
Remove the Preclear plugin. It is flooding the log with PHP warnings and makes it hard to find anything of importance. It would be better to post the /boot/logs/ log file and diagnostics after doing a GUI shutdown. That will give information on what is happening when the array is being shut down.
jj_uk Posted August 26, 2018
I rely on Preclear to stress-test new drives with intense reads and writes for a week before adding them to the array.
/boot/logs/ contains the following files:
- tower1-diagnostics-20180826-1137.zip
- unbalance.log
Do you need these?
John_M Posted August 26, 2018
In that case update it to the most recent release, which fixes the PHP issues.
jj_uk Posted August 26, 2018
I have CA set to update plugins automatically, but to wait 3 days before doing so. I've now updated it, thanks!
dlandon Posted August 26, 2018
Remove Preclear temporarily to troubleshoot the shutdown problem, then re-install it. And yes, post the diagnostics.
jj_uk Posted August 26, 2018
I've looked in syslog.txt and I can see that disk1 and cache are causing the unmounting issue:

Aug 26 11:08:27 tower1 emhttpd: Unmounting disks...
Aug 26 11:08:27 tower1 emhttpd: shcmd (1152): umount /mnt/disk1
Aug 26 11:08:27 tower1 root: umount: /mnt/disk1: target is busy.
Aug 26 11:08:27 tower1 emhttpd: shcmd (1152): exit status: 32
Aug 26 11:08:27 tower1 emhttpd: shcmd (1153): umount /mnt/disk2
Aug 26 11:08:27 tower1 kernel: XFS (dm-1): Unmounting Filesystem
Aug 26 11:08:27 tower1 emhttpd: shcmd (1154): rmdir /mnt/disk2
Aug 26 11:08:27 tower1 emhttpd: shcmd (1155): umount /mnt/disk3
Aug 26 11:08:27 tower1 kernel: XFS (dm-2): Unmounting Filesystem
Aug 26 11:08:27 tower1 emhttpd: shcmd (1156): rmdir /mnt/disk3
Aug 26 11:08:27 tower1 emhttpd: shcmd (1157): umount /mnt/disk4
Aug 26 11:08:27 tower1 kernel: XFS (dm-3): Unmounting Filesystem
Aug 26 11:08:27 tower1 emhttpd: shcmd (1158): rmdir /mnt/disk4
Aug 26 11:08:27 tower1 emhttpd: shcmd (1159): umount /mnt/cache
Aug 26 11:08:27 tower1 root: umount: /mnt/cache: target is busy.
Aug 26 11:08:27 tower1 emhttpd: shcmd (1159): exit status: 32
Aug 26 11:08:27 tower1 emhttpd: Retry unmounting disk share(s)...
Aug 26 11:08:32 tower1 emhttpd: Unmounting disks...
Aug 26 11:08:32 tower1 emhttpd: shcmd (1160): umount /mnt/disk1
Aug 26 11:08:32 tower1 root: umount: /mnt/disk1: target is busy.
Aug 26 11:08:32 tower1 emhttpd: shcmd (1160): exit status: 32
Aug 26 11:08:32 tower1 emhttpd: shcmd (1161): umount /mnt/cache
Aug 26 11:08:32 tower1 root: umount: /mnt/cache: target is busy.
Aug 26 11:08:32 tower1 emhttpd: shcmd (1161): exit status: 32
Aug 26 11:08:32 tower1 emhttpd: Retry unmounting disk share(s)...
Squid Posted August 26, 2018
A stupid easy way to have this happen is to have a local terminal (or SSH session) logged in with its current working directory set to /mnt/disk1 or /mnt/cache. But you can try and figure out what's holding it up via:
lsof /mnt/disk1
lsof /mnt/cache
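If lsof turns up a process you don't recognize, fuser gives a more compact view and can help clear it (a sketch, assuming fuser from psmisc is available; <PID> is whatever process ID you identified):

fuser -vm /mnt/disk1    # list every PID with something open on that filesystem
fuser -vm /mnt/cache
kill <PID>              # ask the offender to exit cleanly
kill -9 <PID>           # last resort if it ignores the first signal

Once nothing shows up for a mount point, the array should be able to unmount it.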
jj_uk Posted August 26, 2018
Would Krusader do this? I was browsing around with Krusader earlier, but can't recall which directory I was in. I also used the unRAID "Shares -> Browse Files" option (right side of the share listing) to look into a few folders. Maybe one of those caused this?
Squid Posted August 26, 2018
Not if Docker stopped and /var/lib/docker was successfully unmounted. (That all happens prior to the attempts to unmount the individual drives.)
jj_uk Posted August 30, 2018
root@tower1:~# lsof /mnt/disk1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 11502 root 0u CHR 136,4 0t0 7 /dev/pts/4
bash 11502 root 1u CHR 136,4 0t0 7 /dev/pts/4
bash 11502 root 2u CHR 136,4 0t0 7 /dev/pts/4
bash 11502 root 255u CHR 136,4 0t0 7 /dev/pts/4
lsof 11620 root 0u CHR 136,4 0t0 7 /dev/pts/4
lsof 11620 root 1u CHR 136,4 0t0 7 /dev/pts/4
lsof 11620 root 2u CHR 136,4 0t0 7 /dev/pts/4
preclear_ 11634 root 0u CHR 136,2 0t0 5 /dev/pts/2
ttyd 12438 root 9u CHR 5,2 0t0 31 /dev/ptmx
bash 15944 root 0u CHR 136,2 0t0 5 /dev/pts/2
bash 15944 root 1u CHR 136,2 0t0 5 /dev/pts/2
bash 15944 root 2u CHR 136,2 0t0 5 /dev/pts/2
bash 15944 root 255u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15955 root 0u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15955 root 1u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15964 root 1u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15964 root 2u CHR 136,2 0t0 5 /dev/pts/2
qemu-syst 21174 root 10u CHR 5,2 0t0 31 /dev/ptmx
qemu-syst 21174 root 22r REG 254,0 316628992 2166604 /mnt/disk1/isos/virtio-win-0.1.141-1.iso (deleted)
qemu-syst 21174 root 23r REG 254,0 316628992 2166604 /mnt/disk1/isos/virtio-win-0.1.141-1.iso (deleted)
preclear_ 21632 root 0u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 21632 root 1u CHR 136,2 0t0 5 /dev/pts/2
tmux:\x20 22245 root 7u CHR 5,2 0t0 31 /dev/ptmx
tmux:\x20 22245 root 10u CHR 5,2 0t0 31 /dev/ptmx
tmux:\x20 22245 root 11u CHR 5,2 0t0 31 /dev/ptmx
bash 22246 root 0u CHR 136,1 0t0 4 /dev/pts/1
bash 22246 root 1u CHR 136,1 0t0 4 /dev/pts/1
bash 22246 root 2u CHR 136,1 0t0 4 /dev/pts/1
bash 22246 root 255u CHR 136,1 0t0 4 /dev/pts/1
bash 32057 root 0u CHR 136,3 0t0 6 /dev/pts/3
bash 32057 root 1u CHR 136,3 0t0 6 /dev/pts/3
bash 32057 root 2u CHR 136,3 0t0 6 /dev/pts/3
bash 32057 root 255u CHR 136,3 0t0 6 /dev/pts/3
root@tower1:~# lsof /mnt/cache
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 11502 root 0u CHR 136,4 0t0 7 /dev/pts/4
bash 11502 root 1u CHR 136,4 0t0 7 /dev/pts/4
bash 11502 root 2u CHR 136,4 0t0 7 /dev/pts/4
bash 11502 root 255u CHR 136,4 0t0 7 /dev/pts/4
ttyd 12438 root 9u CHR 5,2 0t0 31 /dev/ptmx
bash 15944 root 0u CHR 136,2 0t0 5 /dev/pts/2
bash 15944 root 1u CHR 136,2 0t0 5 /dev/pts/2
bash 15944 root 2u CHR 136,2 0t0 5 /dev/pts/2
bash 15944 root 255u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15955 root 0u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15955 root 1u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15964 root 1u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 15964 root 2u CHR 136,2 0t0 5 /dev/pts/2
sleep 17657 root 0u CHR 136,2 0t0 5 /dev/pts/2
sleep 17657 root 1u CHR 136,2 0t0 5 /dev/pts/2
lsof 17711 root 0u CHR 136,4 0t0 7 /dev/pts/4
lsof 17711 root 1u CHR 136,4 0t0 7 /dev/pts/4
lsof 17711 root 2u CHR 136,4 0t0 7 /dev/pts/4
qemu-syst 21174 root 10u CHR 5,2 0t0 31 /dev/ptmx
qemu-syst 21174 root 20u REG 0,33 107374182400 335 /mnt/cache/domains/001 Windows 10/vdisk1.img
qemu-syst 21174 root 21u REG 0,33 107374182400 335 /mnt/cache/domains/001 Windows 10/vdisk1.img
preclear_ 21632 root 0u CHR 136,2 0t0 5 /dev/pts/2
preclear_ 21632 root 1u CHR 136,2 0t0 5 /dev/pts/2
tmux:\x20 22245 root 7u CHR 5,2 0t0 31 /dev/ptmx
tmux:\x20 22245 root 10u CHR 5,2 0t0 31 /dev/ptmx
tmux:\x20 22245 root 11u CHR 5,2 0t0 31 /dev/ptmx
bash 22246 root 0u CHR 136,1 0t0 4 /dev/pts/1
bash 22246 root 1u CHR 136,1 0t0 4 /dev/pts/1
bash 22246 root 2u CHR 136,1 0t0 4 /dev/pts/1
bash 22246 root 255u CHR 136,1 0t0 4 /dev/pts/1
bash 32057 root 0u CHR 136,3 0t0 6 /dev/pts/3
bash 32057 root 1u CHR 136,3 0t0 6 /dev/pts/3
bash 32057 root 2u CHR 136,3 0t0 6 /dev/pts/3
bash 32057 root 255u CHR 136,3 0t0 6 /dev/pts/3
root@tower1:~#

I don't know what to look for. I've just tried to stop the array and again it won't stop.
Squid Posted August 30, 2018
The VM at least didn't shut down; probably there's a prompt on it asking to save a file or something. I would also personally install the Guest Tools (they're on the virtio vdisk, available in Windows), and then set unRAID to hibernate the VMs instead of trying to shut them down.
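You can also check on and control the VM directly through libvirt from the console (a sketch; "Windows 10" stands in for whatever name your VM shows in the list):

virsh list --all              # see which VMs are still running
virsh shutdown "Windows 10"   # graceful shutdown request (needs ACPI or the guest agent)
virsh destroy "Windows 10"    # hard power-off; last resort only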
jj_uk Posted September 1, 2018
In the Windows VM's CD drive (virtio-win-0.1.1) I have a folder called "guest-agent". Is this the guest tools? In VMs -> Windows 10 -> Edit, I can't see a shutdown method. The only reference to shutting down VMs I can find is in the unRAID GUI -> Settings -> VM Manager -> "Upon host shutdown", and it's already set to Hibernate.
Squid Posted September 1, 2018
You got it. Except that the VM Manager Hibernate setting won't actually do anything unless the Guest Agent is installed.
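Once the agent is installed and the VM rebooted, you can confirm the host can actually reach it (a sketch; the VM name is assumed, and a healthy agent answers with {"return":{}}):

virsh qemu-agent-command "Windows 10" '{"execute":"guest-ping"}'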
jj_uk Posted September 1, 2018
Ahh, it is the Win10 VM stopping the array from stopping. If I stop the VM and then stop the array, the array stops. If I don't stop the VM, so unRAID has to stop it, the array never stops. I've installed the guest agent, but it still doesn't stop. At least I now know what's causing it.
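As a workaround until the hang itself is fixed, you can ask libvirt to shut every running VM down yourself before hitting Stop (a sketch; the read loop is there so VM names containing spaces survive intact):

virsh list --name | while read -r vm; do
    [ -n "$vm" ] && virsh shutdown "$vm"
done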
gacpac Posted January 1, 2019
Honestly, for who knows what reason, my drive wasn't getting unmounted. I uninstalled the Unassigned Devices plugin and installed the Open Files plugin. I was able to stop the array after that, and then put Unassigned Devices back.
jollymonsa Posted July 21, 2019
On 8/26/2018, Squid said: A stupid easy way to have this happen is to have a local terminal (or ssh) logged in with the current working directory set to /mnt/disk1 or /mnt/cache...
This was what it was for me. I had just been downloading some ISOs directly with wget and still had the terminal active in the disk.