libvirt.img is in-use, cannot mount



I'm also getting this issue on 6.11.5: if I shut down libvirt and try to restart it while Docker is running, it will not start up (this happens even in safe mode). The only solution is to shut down both libvirt and Docker, then start libvirt first and Docker after it.

Nov 28 16:04:37 X99U  emhttpd: shcmd (8076): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 5
Nov 28 16:04:37 X99U root: '/mnt/user/system/libvirt/libvirt.img' is in-use, cannot mount

So I'm getting this when trying to start libvirt. I checked with `fuser -c /path/to/libvirt.img` and the file is actually not in use (at least not that I can see). `losetup` shows the libvirt.img file attached to the loop3 device, and I cannot detach it; here are the outputs:

root@X99U:~# losetup 
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                               DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzmodules                           0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker-xfs.img   1     512
/dev/loop0         0      0         1  1 /boot/bzfirmware                          0     512
/dev/loop3         0      0         1  0 /mnt/cache/system/libvirt/libvirt.img     1     512

When trying to detach it:

root@X99U:~# umount -ld /dev/loop3
umount: /dev/loop3: not mounted.
root@X99U:~# LOOPDEV_DEBUG=all losetup -vd /dev/loop3
922: loopdev:      CXT: [0x7ffeb1b48f20]: initialize context
922: loopdev:      CXT: [0x7ffeb1b48f20]: init: ignore ioctls
922: loopdev:      CXT: [0x7ffeb1b48f20]: init: loop-control detected 
922: loopdev:      CXT: [0x7ffeb1b48f20]: /dev/loop3 name assigned
922: loopdev:      CXT: [0x7ffeb1b48f20]: open /dev/loop3 [ro]: No such file or directory
922: loopdev:      CXT: [0x7ffeb1b48f20]: device removed
922: loopdev:      CXT: [0x7ffeb1b48f20]: de-initialize
922: loopdev:      CXT: [0x7ffeb1b48f20]: closing old open fd
922: loopdev:     ITER: [0x7ffeb1b490f8]: de-initialize
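
For anyone else stuck at this point, two standard checks show what the kernel thinks is backing and holding the loop device (a sketch; the /sys path only exists while the device is attached, and lsof is assumed to be available):

cat /sys/block/loop3/loop/backing_file       # file the kernel believes backs loop3
lsof /dev/loop3                              # processes with the device node open
lsof /mnt/cache/system/libvirt/libvirt.img   # processes with the backing file open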

 

The only workaround that I found is shutting down libvirt and Docker and starting libvirt first, then Docker after, or rebooting the host, which is undesirable. Sometimes, when libvirt refuses to start again, rebooting the host will show an unclean-shutdown error and start the parity check.
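
For reference, that working order as commands (a sketch, assuming Unraid's Slackware-style rc scripts; script names may differ between releases):

/etc/rc.d/rc.libvirt stop    # stop the VM service
/etc/rc.d/rc.docker stop     # stop the Docker service
/etc/rc.d/rc.libvirt start   # start libvirt first...
/etc/rc.d/rc.docker start    # ...then Docker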

 

Note that this only happens when I manually disable VMs in the VM Manager and then re-enable them. There is also a case where, if libvirt is broken and does not start properly, restarting the array may not restart the libvirt service; only a full reboot helps.

  • 3 weeks later...
On 12/29/2021 at 3:32 AM, mgutt said:

The same happened to me after stopping the VM service, renaming the "domains" share to "vdisks" and restarting the VM service.

 

What I tried:

 

root@thoth:~# umount /mnt/cache/libvirt/libvirt.img
umount: /mnt/cache/libvirt/libvirt.img: not mounted.
root@thoth:~# /usr/local/sbin/mount_image '/mnt/cache/libvirt/libvirt.img' /etc/libvirt 1
/mnt/cache/libvirt/libvirt.img is in-use, cannot mount
root@thoth:~# fuser -c /mnt/cache/libvirt/libvirt.img
/mnt/cache/libvirt/libvirt.img:  6030m  6055m  7314m  7574m  8400m  8520m  8521m  8523m  8528m  8529m  8554m  8557m  8581m  8596m  8646m  8647m  8648m  8649m  8729m  8841m  8842m  8845m  9040m  9045m 12060m 12144m 16103m 16106m 16107m 16109m 16110m 16285m 16301m 16314m 16338m 16339m 16374m 16400m 16424cm 16446m 16455m 16479m 16488m 16658m 16659m 16662m 16663m 16678m 16893m 17537m 18805m 18818m 19679m 19680m 19681m 19682m 19683m 19684m 19685m 19686m 19687m 19688m 19689m 19690m 19691m 31769m 31843m 32152m 32155m 32195cm 32321m
root@thoth:~# ps -p 6030
  PID TTY          TIME CMD
 6030 ?        00:00:39 dockerd
root@thoth:~# ps -p 6055
  PID TTY          TIME CMD
 6055 ?        00:03:22 containerd
root@thoth:~# ps -p 7314
  PID TTY          TIME CMD
 7314 ?        00:00:00 s6-svscan
root@thoth:~# ps -p 7574
  PID TTY          TIME CMD
 7574 ?        00:00:00 s6-supervise
root@thoth:~# ps -p 8400
  PID TTY          TIME CMD
 8400 ?        00:00:00 dumb-init
root@thoth:~# ps -p 32321
  PID TTY          TIME CMD
32321 ?        00:00:06 Plex Tuner Serv
root@thoth:~# 
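
Rather than running ps once per PID, the whole list can be resolved in one loop (a sketch; psmisc's fuser prints the bare PIDs on stdout and the access letters on stderr, so silencing stderr leaves clean numbers for the loop):

# resolve every PID that fuser reports against the image to a command name
for pid in $(fuser -c /mnt/cache/libvirt/libvirt.img 2>/dev/null); do
    ps -o pid=,comm= -p "$pid"
done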

 

Then, while and after stopping the Docker service:

root@thoth:~# fuser -c /mnt/cache/libvirt/libvirt.img
/mnt/cache/libvirt/libvirt.img:  5651m  6030m  6055m
root@thoth:~# ps -p 5651
  PID TTY          TIME CMD
 5651 ?        00:00:00 s6-sync
root@thoth:~# ps -p 6030
  PID TTY          TIME CMD
 6030 ?        00:00:39 dockerd
root@thoth:~# ps -p 6055
  PID TTY          TIME CMD
 6055 ?        00:03:23 containerd
root@thoth:~# fuser -c /mnt/cache/libvirt/libvirt.img
/mnt/cache/libvirt/libvirt.img:  5651m  6030m  6055m
root@thoth:~# fuser -c /mnt/cache/libvirt/libvirt.img
root@thoth:~# 
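
Since fuser exits non-zero once no process accesses the file, that drain could in principle be scripted as a wait before restarting the service (a sketch, assuming Unraid's /etc/rc.d/rc.libvirt script):

# poll until nothing maps the image any more, then start the VM service
while fuser -c /mnt/cache/libvirt/libvirt.img >/dev/null 2>&1; do
    sleep 1
done
/etc/rc.d/rc.libvirt start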

 

 

Then I started the docker service again:

fuser -c /mnt/cache/libvirt/libvirt.img
/mnt/cache/libvirt/libvirt.img:  8869m  8893m  9644m  9788m 10066m 10067m 10071m 10072m 10088m 10096m 10197m 10198m 10199m 10200m 10201m 10202m 10203m 10204m 10205m 10206m 10207m 10208m 10209m 10210m 10271m 10630m 10788m 10789m 10790m 10793m 10794m 10806m 10807m 10825m 10841m 10842m 10843m 10844m 10925m 11227m 11259m 11262m 11325m 11472cm 11474m 11570m 11571m 11572m 11729m 11745m 11754m 11755m 11757m 11762m 11826m 11863cm

 

And then successfully started the VM service, too (yes, completely the same output):

fuser -c /mnt/cache/libvirt/libvirt.img
/mnt/cache/libvirt/libvirt.img:  8869m  8893m  9644m  9788m 10066m 10067m 10071m 10072m 10088m 10096m 10197m 10198m 10199m 10200m 10201m 10202m 10203m 10204m 10205m 10206m 10207m 10208m 10209m 10210m 10271m 10630m 10788m 10789m 10790m 10793m 10794m 10806m 10807m 10825m 10841m 10842m 10843m 10844m 10925m 11227m 11259m 11262m 11325m 11472cm 11474m 11570m 11571m 11572m 11729m 11745m 11754m 11755m 11757m 11762m 11826m 11863cm

 

I'd say it has nothing to do with the output of fuser. Something else must be wrong.

UNRAID 6.10.3

The output from those instructions is the same as this guy's.

"It's still the same problem, it seems to be tied up by a lot of processes."

 


On 10/17/2022 at 4:41 PM, BarbaGrump said:

I'm also getting this today in 6.11 after trying to add an isolated bridge:

1. virsh net-define /tmp/isolated0.xml

2. virsh net-start isolated0

3. virsh net-autostart --network isolated0

 

/tmp/isolated0.xml:

<network>
  <name>isolated0</name>
  <bridge name="virbr99" stp="on" delay="0"/>
</network>
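
As a quick sanity check after those three steps, the standard libvirt CLI can list the result:

virsh net-list --all   # shows each network's state, persistence, and autostart flag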

 

Then, after restarting the VM service: root: '/mnt/user/system/libvirt.img' is in-use, cannot mount

 

Stopping the Docker service, starting the VM service, then starting the Docker service works.

 

Also, destroying and undefining the created network doesn't help.

Also2, virsh net-autostart --network isolated0 does not seem to work... I still have to start the network manually, but that's another thread, I guess...

 

Edit:

Rebooted, and Dockers and VMs started as expected. But UR reported an unclean shutdown and started a parity check... not sure if it's connected, but if /mnt/user/system/libvirt.img is in use, maybe the umount was unsuccessful?

 

Edit2:

To make a custom-defined network autostart, just make a symlink in /etc/libvirt/networks/autostart pointing to ../<whatever your network is called>
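
Concretely, for the isolated0 network above, that would be something like the following (a sketch using the poster's stated path; stock libvirt typically keeps these under /etc/libvirt/qemu/networks/ instead):

# link the network definition into the autostart directory
ln -s ../isolated0.xml /etc/libvirt/networks/autostart/isolated0.xml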

The silver lining is that the reboot restored normalcy.

But like this guy, after the reboot, UR reported an unclean shutdown.

  • 1 month later...

Still getting this too, using unRAID 6.12.2.

Just stopped the VM service to move domains from a ZFS pool/dataset to the array (the mover isn't working for moving data from ZFS datasets to the XFS array, by the way) and used rsync -avx to copy the domains.

Tried to restart libvirt, and it fails with the same in-use/cannot-mount error.

Long-standing issue, but the only Docker container that was running was Plex; I tied the main PID to that, but couldn't stop it as it was in use.

  • 1 month later...

Just upgraded from 6.12.3 to 6.12.4 and ran into this issue. 

 

EDIT: Probably stating the obvious. Yesterday I stopped and restarted the array, and both the Docker and VM services started up. Today I stopped the VM service and was unable to restart it. So once it's stopped, it's stopped, until you restart the array or possibly stop the Dockers and then start them up in reverse order. I have not tested this latter part yet.

  • 2 weeks later...

I've been on 6.12.4 for 12 days now and started using a VM again a few days ago. All of a sudden, the VM field is blank, but the VM is still running. When I stop the VM service, I'm unable to start it again. The VM is there but not visible, and the buttons (to create a new VM, etc.) are also missing.

I was on 6.11.5 before, and this never occurred.

 

Stopping both the Docker and the VM service, then starting the VM service and then Docker, works, but I don't want to make that move every time I want to do something on a VM. The logs don't show anything weird to me. I don't even know when the VM page breaks, to be honest.

9 hours ago, JorgeB said:

Please post them anyway. 

Sure, if you need more, just tell me :)

 

I've been using the "new" backup/restore appdata tool since 6.12 (backup every Wednesday); if I remember correctly, that was the first time it occurred.

No vanishing VMs as of today, but I'll keep an eye on the connection between these two.

kamino-diagnostics-20230921-1614.zip

Sep 17 11:54:35 Kamino emhttpd: shcmd (3164905): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
Sep 17 11:54:35 Kamino root: '/mnt/user/system/libvirt/libvirt.img' is in-use, cannot mount

 

Not sure why it was in use though, since apparently it unmounted correctly just a few seconds before:

 

Sep 17 11:54:27 Kamino emhttpd: shcmd (3164883): umount /etc/libvirt

 

If it happens again, post the output of:

losetup

 

1 hour ago, JorgeB said:

Not sure why it was in use though, since apparently it unmounted correctly just a few seconds before:

It's a common issue: when you stop the VM service and try to restart it, it fails with this error until you also stop Docker. There's another thread about it somewhere with more detail.

 

I think OP's question is more about why his VM page goes blank and causes the need to stop/restart the VM service in the first place.


Just started getting this today on 6.12.4; I had a random system crash overnight and discovered this while troubleshooting. I can replicate it by stopping VM Manager and Docker, then starting Docker first and VM Manager second.

 

Result of losetup:

NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                             DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware                         0     512
/dev/loop2         0      0         1  0 /mnt/cache/system/libvirt/libvirt.img    0     512
/dev/loop0         0      0         1  1 /boot/bzmodules                          0     512
/dev/loop3         0      0         1  0 /mnt/cache/system/docker/docker.img      0     512

 

server-diagnostics-20231001-2202.zip
