I can never stop the array "Unmounting disks...Retry unmounting disk share(s)..."



I've had this issue for as long as I can remember, literally years.  I'm on 6.3.5.

 

I am trying to either Stop my array or reboot/shut down.  It ALWAYS gets stuck with "Unmounting disks...Retry unmounting disk share(s)..." and I am forced to use IPMI to force a reboot.

 

I want to fix this once and for all.  I'm trying to add a new disk but cannot because of this.  I stop ALL of my dockers and VMs before hitting stop in the webui.  

 

How can I troubleshoot this and resolve it?  I'm so sick and tired of this issue rearing its head every few years.  I really appreciate any help with this; I'll provide whatever is needed. I'm extremely frustrated right now.

Link to comment
50 minutes ago, 01111000 said:

I've had this issue for as long as I can remember, literally years.  I'm on 6.3.5.

 

I am trying to either Stop my array or reboot/shut down.  It ALWAYS gets stuck with "Unmounting disks...Retry unmounting disk share(s)..." and I am forced to use IPMI to force a reboot.

 

I want to fix this once and for all.  I'm trying to add a new disk but cannot because of this.  I stop ALL of my dockers and VMs before hitting stop in the webui.  

 

How can I troubleshoot this and resolve it?  I'm so sick and tired of this issue rearing its head every few years.  I really appreciate any help with this; I'll provide whatever is needed. I'm extremely frustrated right now.

Post diagnostics.

Link to comment

Do you use SSH? I have had issues where I used screen in a terminal and the screen session kept accessing the array, preventing it from being unmounted.

 

Look for things that may be preventing the array from unmounting.

 

Also, disable Docker in Settings and then try. Just turning the containers off individually may not be enough, and you might also try stopping the array on its own before hitting shutdown.
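For reference, a rough way to check for stray processes before stopping the array (a minimal sketch; lsof is already on the unRAID console, fuser may or may not be, and the mount points assume the default /mnt layout):

# List open files on the array and cache mount points (no output means nothing is holding them)
lsof /mnt/disk1 /mnt/cache 2>/dev/null

# If fuser is present, it also flags shells whose working directory sits on a mount
fuser -vm /mnt/disk1 /mnt/cache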

Edited by lankanmon
Link to comment

You are installing packages from /boot/packages/ -  tcl, expect, openvpn.  I'd recommend finding another way to do this.  Use the nerd pack for packages, and a docker for openvpn.  These packages may be holding up a shutdown.
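(As a rough check, assuming a standard unRAID console, you can see whether anything from those packages is still running; the process names here are only illustrative:)

ps aux | grep -E 'openvpn|expect' | grep -v grep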

 

EDIT: Also check this error:

Fix Common Problems: Error: Docker application binhex-deluge has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option
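For context, the "slave" that error refers to is Docker's mount-propagation flag for that path; without it, a container's bind mount can hold onto, or get out of sync with, the Unassigned Devices mount underneath it. A sketch of what it looks like on a manual docker run (the paths are placeholders; on unRAID this is normally set per path in the container template's Edit view rather than on the command line):

# Illustrative only: pass a UD mount with slave propagation so the host can still unmount it
docker run -d --name binhex-deluge \
  -v /mnt/disks/my_ud_disk/downloads:/data:rw,slave \
  binhex/arch-deluge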

 

Edited by dlandon
Link to comment
2 hours ago, dlandon said:

 

You are installing packages from /boot/packages/ -  tcl, expect, openvpn.  I'd recommend finding another way to do this.  Use the nerd pack for packages, and a docker for openvpn.  These packages may be holding up a shutdown.

 

I believe that a few plugins auto installed these packages.  I'm using peter sm's OpenVPN plugin, for example. 

 

I've fixed the Deluge error as well.  I'll give it a try and report back.

 

Edit: nope, same issue.  I unmounted the unassigned drives, stopped OpenVPN, stopped all dockers and VMs, and used the Open Files plugin to kill any remaining processes.

Edited by 01111000
Link to comment

Start your server and then do a GUI shutdown.  If something is holding up the shutdown, unRAID will force a shutdown after a timeout.  Then post the log found at /boot/logs; it is saved from the last shutdown and should show what is preventing the array from stopping.  We can look at it and see what might be holding things up.
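(If it helps, a rough way to find that log from the console once the server is back up; the exact file names vary, so this is only illustrative:)

# List the saved logs/diagnostics on the flash drive, newest first
ls -lt /boot/logs/ | head
# They can also be copied off over the network via the flash share, e.g. \\tower1\flash\logs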

Link to comment
  • 10 months later...
28 minutes ago, jj_uk said:

I followed the above and turned off Docker in Settings, but it still didn't stop, so I used the GUI to shut down.

 

It has now restarted and diagnostics are attached. Any ideas?

 

tower1-diagnostics-20180826-1149.zip

Remove the preclear plugin.  It is flooding the log with php warnings and makes it hard to find anything of importance.

 

It would be better to post the /boot/logs/ log file and diagnostics after doing a GUI shutdown.  This will give information on what is happening while the array is being shut down.

Link to comment
4 hours ago, jj_uk said:

I rely on Preclear to stress-test new drives with intense read/writes for a week before adding them to the array.

 

/boot/logs/ contains the following files:

- tower1-diagnostics-20180826-1137.zip

- unbalance.log

 

Do you need these?

Remove Preclear temporarily to troubleshoot the shutdown problem.  Then re-install it.

 

Yes.  Diagnostics.

Link to comment

I've looked in syslog.txt and I see that 'Disk 1' and 'cache' are causing the issue with unmounting:

 

Aug 26 11:08:27 tower1 emhttpd: Unmounting disks...
Aug 26 11:08:27 tower1 emhttpd: shcmd (1152): umount /mnt/disk1
Aug 26 11:08:27 tower1 root: umount: /mnt/disk1: target is busy.
Aug 26 11:08:27 tower1 emhttpd: shcmd (1152): exit status: 32
Aug 26 11:08:27 tower1 emhttpd: shcmd (1153): umount /mnt/disk2
Aug 26 11:08:27 tower1 kernel: XFS (dm-1): Unmounting Filesystem
Aug 26 11:08:27 tower1 emhttpd: shcmd (1154): rmdir /mnt/disk2
Aug 26 11:08:27 tower1 emhttpd: shcmd (1155): umount /mnt/disk3
Aug 26 11:08:27 tower1 kernel: XFS (dm-2): Unmounting Filesystem
Aug 26 11:08:27 tower1 emhttpd: shcmd (1156): rmdir /mnt/disk3
Aug 26 11:08:27 tower1 emhttpd: shcmd (1157): umount /mnt/disk4
Aug 26 11:08:27 tower1 kernel: XFS (dm-3): Unmounting Filesystem
Aug 26 11:08:27 tower1 emhttpd: shcmd (1158): rmdir /mnt/disk4
Aug 26 11:08:27 tower1 emhttpd: shcmd (1159): umount /mnt/cache
Aug 26 11:08:27 tower1 root: umount: /mnt/cache: target is busy.
Aug 26 11:08:27 tower1 emhttpd: shcmd (1159): exit status: 32
Aug 26 11:08:27 tower1 emhttpd: Retry unmounting disk share(s)...
Aug 26 11:08:32 tower1 emhttpd: Unmounting disks...
Aug 26 11:08:32 tower1 emhttpd: shcmd (1160): umount /mnt/disk1
Aug 26 11:08:32 tower1 root: umount: /mnt/disk1: target is busy.
Aug 26 11:08:32 tower1 emhttpd: shcmd (1160): exit status: 32
Aug 26 11:08:32 tower1 emhttpd: shcmd (1161): umount /mnt/cache
Aug 26 11:08:32 tower1 root: umount: /mnt/cache: target is busy.
Aug 26 11:08:32 tower1 emhttpd: shcmd (1161): exit status: 32
Aug 26 11:08:32 tower1 emhttpd: Retry unmounting disk share(s)...

 

 

Edited by jj_uk
Link to comment

A stupid easy way to have this happen is to have a local terminal (or ssh) logged in with the current working directory set to /mnt/disk1 or /mnt/cache

 

But you can try and figure out what's holding it up via

lsof /mnt/disk1


lsof /mnt/cache
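(For completeness, a sketch that sweeps every mount in one pass; the list assumes the default /mnt layout:)

for m in /mnt/disk* /mnt/cache; do echo "== $m =="; lsof "$m" 2>/dev/null; done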

 

Edited by Squid
Link to comment
3 hours ago, jj_uk said:

Would Krusader do this? I was browsing around with krusader earlier, but can't recall what dir I was in.

Not if docker stopped and /var/lib/docker was successfully umounted. (This would all happen prior to the attempts to umount the individual drives)
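(A rough way to confirm that, assuming the default loopback docker.img setup:)

# No output means the docker image is no longer mounted at /var/lib/docker
mount | grep /var/lib/docker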

 

Link to comment
root@tower1:~# lsof /mnt/disk1
COMMAND     PID USER   FD   TYPE DEVICE  SIZE/OFF    NODE NAME
bash      11502 root    0u   CHR  136,4       0t0       7 /dev/pts/4
bash      11502 root    1u   CHR  136,4       0t0       7 /dev/pts/4
bash      11502 root    2u   CHR  136,4       0t0       7 /dev/pts/4
bash      11502 root  255u   CHR  136,4       0t0       7 /dev/pts/4
lsof      11620 root    0u   CHR  136,4       0t0       7 /dev/pts/4
lsof      11620 root    1u   CHR  136,4       0t0       7 /dev/pts/4
lsof      11620 root    2u   CHR  136,4       0t0       7 /dev/pts/4
preclear_ 11634 root    0u   CHR  136,2       0t0       5 /dev/pts/2
ttyd      12438 root    9u   CHR    5,2       0t0      31 /dev/ptmx
bash      15944 root    0u   CHR  136,2       0t0       5 /dev/pts/2
bash      15944 root    1u   CHR  136,2       0t0       5 /dev/pts/2
bash      15944 root    2u   CHR  136,2       0t0       5 /dev/pts/2
bash      15944 root  255u   CHR  136,2       0t0       5 /dev/pts/2
preclear_ 15955 root    0u   CHR  136,2       0t0       5 /dev/pts/2
preclear_ 15955 root    1u   CHR  136,2       0t0       5 /dev/pts/2
preclear_ 15964 root    1u   CHR  136,2       0t0       5 /dev/pts/2
preclear_ 15964 root    2u   CHR  136,2       0t0       5 /dev/pts/2
qemu-syst 21174 root   10u   CHR    5,2       0t0      31 /dev/ptmx
qemu-syst 21174 root   22r   REG  254,0 316628992 2166604 /mnt/disk1/isos/virtio-win-0.1.141-1.iso (deleted)
qemu-syst 21174 root   23r   REG  254,0 316628992 2166604 /mnt/disk1/isos/virtio-win-0.1.141-1.iso (deleted)
preclear_ 21632 root    0u   CHR  136,2       0t0       5 /dev/pts/2
preclear_ 21632 root    1u   CHR  136,2       0t0       5 /dev/pts/2
tmux:\x20 22245 root    7u   CHR    5,2       0t0      31 /dev/ptmx
tmux:\x20 22245 root   10u   CHR    5,2       0t0      31 /dev/ptmx
tmux:\x20 22245 root   11u   CHR    5,2       0t0      31 /dev/ptmx
bash      22246 root    0u   CHR  136,1       0t0       4 /dev/pts/1
bash      22246 root    1u   CHR  136,1       0t0       4 /dev/pts/1
bash      22246 root    2u   CHR  136,1       0t0       4 /dev/pts/1
bash      22246 root  255u   CHR  136,1       0t0       4 /dev/pts/1
bash      32057 root    0u   CHR  136,3       0t0       6 /dev/pts/3
bash      32057 root    1u   CHR  136,3       0t0       6 /dev/pts/3
bash      32057 root    2u   CHR  136,3       0t0       6 /dev/pts/3
bash      32057 root  255u   CHR  136,3       0t0       6 /dev/pts/3
root@tower1:~# lsof /mnt/cache
COMMAND     PID USER   FD   TYPE DEVICE     SIZE/OFF NODE NAME
bash      11502 root    0u   CHR  136,4          0t0    7 /dev/pts/4
bash      11502 root    1u   CHR  136,4          0t0    7 /dev/pts/4
bash      11502 root    2u   CHR  136,4          0t0    7 /dev/pts/4
bash      11502 root  255u   CHR  136,4          0t0    7 /dev/pts/4
ttyd      12438 root    9u   CHR    5,2          0t0   31 /dev/ptmx
bash      15944 root    0u   CHR  136,2          0t0    5 /dev/pts/2
bash      15944 root    1u   CHR  136,2          0t0    5 /dev/pts/2
bash      15944 root    2u   CHR  136,2          0t0    5 /dev/pts/2
bash      15944 root  255u   CHR  136,2          0t0    5 /dev/pts/2
preclear_ 15955 root    0u   CHR  136,2          0t0    5 /dev/pts/2
preclear_ 15955 root    1u   CHR  136,2          0t0    5 /dev/pts/2
preclear_ 15964 root    1u   CHR  136,2          0t0    5 /dev/pts/2
preclear_ 15964 root    2u   CHR  136,2          0t0    5 /dev/pts/2
sleep     17657 root    0u   CHR  136,2          0t0    5 /dev/pts/2
sleep     17657 root    1u   CHR  136,2          0t0    5 /dev/pts/2
lsof      17711 root    0u   CHR  136,4          0t0    7 /dev/pts/4
lsof      17711 root    1u   CHR  136,4          0t0    7 /dev/pts/4
lsof      17711 root    2u   CHR  136,4          0t0    7 /dev/pts/4
qemu-syst 21174 root   10u   CHR    5,2          0t0   31 /dev/ptmx
qemu-syst 21174 root   20u   REG   0,33 107374182400  335 /mnt/cache/domains/001 Windows 10/vdisk1.img
qemu-syst 21174 root   21u   REG   0,33 107374182400  335 /mnt/cache/domains/001 Windows 10/vdisk1.img
preclear_ 21632 root    0u   CHR  136,2          0t0    5 /dev/pts/2
preclear_ 21632 root    1u   CHR  136,2          0t0    5 /dev/pts/2
tmux:\x20 22245 root    7u   CHR    5,2          0t0   31 /dev/ptmx
tmux:\x20 22245 root   10u   CHR    5,2          0t0   31 /dev/ptmx
tmux:\x20 22245 root   11u   CHR    5,2          0t0   31 /dev/ptmx
bash      22246 root    0u   CHR  136,1          0t0    4 /dev/pts/1
bash      22246 root    1u   CHR  136,1          0t0    4 /dev/pts/1
bash      22246 root    2u   CHR  136,1          0t0    4 /dev/pts/1
bash      22246 root  255u   CHR  136,1          0t0    4 /dev/pts/1
bash      32057 root    0u   CHR  136,3          0t0    6 /dev/pts/3
bash      32057 root    1u   CHR  136,3          0t0    6 /dev/pts/3
bash      32057 root    2u   CHR  136,3          0t0    6 /dev/pts/3
bash      32057 root  255u   CHR  136,3          0t0    6 /dev/pts/3
root@tower1:~#

I don't know what to look for.  I've just tried to stop the array and again it won't stop.
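(In output like that, the /dev/pts and /dev/ptmx rows are just open terminal devices; a rough filter to hide them and leave the more interesting entries, such as the qemu-syst lines, might be:)

lsof /mnt/disk1 /mnt/cache 2>/dev/null | grep -v '/dev/pts\|/dev/ptmx'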

Link to comment

The VM at least didn't shut down.  Probably there's a prompt on there asking to save a file or something.  I would also personally install the Guest Tools (it's on the virtio vdisk available in Windows), and then set unRAID to hibernate the VMs instead of trying to shut them down.
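(If it happens again, a rough way to check and stop the VM from the console before stopping the array; virsh is the libvirt tool unRAID's VM manager uses, and the VM name below is only a guess taken from the paths earlier in this thread:)

# See which VMs libvirt still thinks are running
virsh list --all
# Ask the guest to shut down cleanly (needs the guest agent or ACPI to respond)
virsh shutdown "001 Windows 10"
# Last resort, equivalent to pulling the plug
virsh destroy "001 Windows 10"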

Link to comment

In the Windows VM's CD drive (virtio-win-0.1.1) I have a folder called "guest-agent".

Is this the guest tools?

 

In VMs -> Windows 10 -> Edit, I can't see a shutdown method. The only reference to shutting down VMs I can find is in the unRAID GUI -> Settings -> VM Manager -> "Upon host shutdown", and it's already set to Hibernate.

 

Link to comment

Ahh, it is the Win10 VM stopping the array from stopping.

 

If I stop the VM, then stop the array, the array stops. If I don't stop the VM first, so that unRAID has to stop the VM itself, the array never stops.

 

I've installed the guest agent, but it still doesn't stop. At least I now know what's causing it.

Link to comment
  • 3 months later...
  • 6 months later...
On 8/26/2018 at 1:59 PM, Squid said:

A stupid easy way to have this happen is to have a local terminal (or ssh) logged in with the current working directory set to /mnt/disk1 or /mnt/cache

 

But you can try and figure out what's holding it up via


lsof /mnt/disk1



lsof /mnt/cache

 

This was what it was for me. I had just been downloading some ISOs directly with wget and still had a terminal open with its working directory on that disk.
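(For anyone else who lands here, a minimal habit that avoids this; the paths assume the default /mnt layout:)

# Before stopping the array, move any open shells off the array mounts
cd /root
# Rough check that nothing still has its working directory on a disk ("cwd" rows)
lsof /mnt/disk1 /mnt/cache 2>/dev/null | awk '$4 == "cwd"'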

Link to comment
