golli53 Posted July 11, 2019 (edited)

I often have to restart my array for various reasons, and since my family uses unRAID VMs as their go-to personal computers, it's a big hassle to close all work and shut down. I've tried hibernation, either within the OS or using "virsh dompmsuspend" to disk. It appears to work, but the VMs show as "paused", not "stopped". I can resume them fine, but if I stop the array, the VMs crash and the hibernation state isn't preserved. (One time I also encountered qemu preventing array stop because of access to drives, in which case I had to kill the instance.) I've also tested this on a brand-new Win10 VM install with the qemu guest agent running and virtio drivers installed.

[edit] Btw, I can't use "virsh save", unfortunately, because of GPU passthrough.

Edited July 11, 2019 by golli53
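In virsh terms, the attempts described above look roughly like the sketch below (the domain name "win10" is an assumption; the subcommands are standard virsh, but whether they behave this way under unRAID with GPU passthrough is exactly what this thread is about):

```shell
# Suspend-to-disk (S4) via the qemu guest agent -- requires the agent
# to be running inside the guest:
virsh dompmsuspend win10 disk

# Check the resulting state; the post reports the VM shows as "paused",
# not "shut off", after this:
virsh domstate win10

# Resuming works fine...
virsh dompmwakeup win10

# ...but "virsh save" (save full machine state to a file on the host)
# is not an option here, because save/restore doesn't work with a
# passed-through GPU:
# virsh save win10 /mnt/user/domains/win10.state
```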
bastl Posted July 11, 2019

Why not run your VMs from the cache drive? That way you're independent of the array's activity, and if the cache is an SSD you'll benefit greatly from its speed. Another option would be to use an extra unassigned device for your VMs. In such a configuration you don't have to shut down your VMs. You only lose the connection to the data on the array if you have to restart the array for whatever reason.
golli53 (Author) Posted July 12, 2019

9 hours ago, bastl said: Why not running your VMs from the cache drive? […]

I'm not sure that's correct. My VMs currently run from an unassigned NVMe drive. I can't stop my array without killing my VMs (the VM settings page becomes disabled, and any previously started or paused VMs become stopped upon restart). Does it work differently for you?
GHunter Posted July 12, 2019

You should make a feature request for this. I think it would be a fairly simple fix, as the logic already handles started VMs that are running normally. Basically: resume all paused VMs first, then continue with the stopping process as normal, or something similar to this.
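The ordering proposed above could be sketched with standard virsh subcommands (this is a hypothetical illustration of the suggested fix, not how unRAID's array-stop logic is actually implemented):

```shell
# Resume every paused domain first, so nothing is stuck in a state the
# stop logic doesn't handle...
for dom in $(virsh list --state-paused --name); do
    virsh resume "$dom"
done

# ...then shut the running domains down, as the existing stop logic
# already does for normally running VMs:
for dom in $(virsh list --state-running --name); do
    virsh shutdown "$dom"
done
```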
bastl Posted July 12, 2019

@golli53 My mistake. I thought it worked this way in the past 🤨 I tried it just now, and stopping the array triggered a shutdown of my VMs. My libvirt.img is sitting on the array; I changed this a couple of weeks ago. I will try it this weekend with the libvirt.img on the cache device. But I'm relatively sure I could stop the array without shutting down the VMs in the past.
GHunter Posted July 12, 2019

Shutting down the array has always shut down the VMs and most other services. Wish it didn't work this way, but it does.
Squid Posted July 12, 2019

On 7/11/2019, golli53 said: I often have to restart my array for various reasons […] I can't use "virsh save" unfortunately because of GPU pass through.

Along with installing the guest agent, did you also change the shutdown action to hibernate, in Settings → VM Settings (advanced view)?

Sent from my NSA monitored device
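For reference, on the libvirt side hibernation also requires the guest to be allowed to enter S4. The element below is standard libvirt domain XML; whether unRAID sets this for you when you pick the hibernate option is an assumption, so treat this as a sketch, not a required manual step:

```xml
<!-- Sketch: libvirt domain XML power-management block permitting
     suspend-to-disk (S4) for the guest. -->
<pm>
  <suspend-to-mem enabled='no'/>
  <suspend-to-disk enabled='yes'/>
</pm>
```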
golli53 (Author) Posted July 14, 2019 (edited)

On 7/12/2019 at 9:03 AM, Squid said: Along with installing the guest agent, did you also in settings vm settings advanced view change shutdown to instead be hibernate?

No, but I just got the chance to enable it and do some tests. Now libvirt simply fails to start up when I hibernate my VM and stop+start my array. Here's the error message:

Tower root: /mnt/disks/ssd/system/libvirt/libvirt.img is in-use, cannot mount

When I use fuser and ps to find out which processes are locking it, I get the hibernated VM:

/usr/bin/qemu-system-x86_64 -name guest=win10,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-win10/master-key.aes -machine pc-i440fx-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/2f4360e3-a8a0-ca29-a732-4f1191b06173_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 4096 -realtime mlock=off -smp 2,sockets=1,cores=1,threads=2 -uuid 2f4360e3-a8a0-ca29-a732-4f1191b06173 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=26,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device nec-usb-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/mnt/disks/ssd/domains/win10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:1a:cb:11,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on

(Also, my docker containers were listed as locking it as well, for some reason; I wasn't sure that was normal, but I turned off autostart on all dockers to remove that as a source of noise.)

Tests I've tried (all yield the same error as above):

- Manually hibernating the VM
- Leaving it running and letting unRAID hibernate it upon array stop
- Increasing the time given to VM shutdown/hibernate to 3 minutes under VM settings
- Rebooting and retrying the above

This was a clean Win10 install with just the qemu guest agent installed and no pass-through devices. So it looks like the VM hibernate option breaks libvirt when the VM resides on an Unassigned Device?

PS: Not sure if it matters, but the Unassigned Device for libvirt.img and my VM images is formatted as btrfs-encrypted. Unraid 6.7.2. Unassigned Devices 2019.06.18.

Edited July 14, 2019 by golli53 (added version info)
Anticast Posted June 28, 2021

Any word on this? I've set up my Windows VMs to hibernate as well, but I appear to be running into the same problems as golli53. I'm running unRAID 6.9.2.
elco1965 Posted July 9, 2021

I too would like to know if this was figured out. I've had this problem for a while: a Win10 VM preventing disk 13 from unmounting.

root@BlackHole:~# lsof /mnt/disk13
COMMAND     PID USER  FD TYPE DEVICE   SIZE/OFF  NODE NAME
qemu-syst 13078 root  13r  REG   0,73 4250861568 30466 /mnt/disk13/isos/Windows/Windows 10/Windows 10.iso
qemu-syst 13078 root  14r  REG   0,73  316628992 30467 /mnt/disk13/isos/Windows/virtio-win-0.1.141-1.iso

I was told to install the guest agent and set the VM to hibernate in the VM manager. That was done when the VM was first set up. I made sure the VM was on the latest version of the VIRTIO driver.
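As an aside, the lsof output above already identifies the holder: qemu process 13078 has two ISOs on disk13 open. Pulling the offending PIDs out of lsof output can be done with a plain awk one-liner (the sample data below just copies the post; this is standard shell, not an unRAID feature):

```shell
# Saved lsof output from the post: one header line plus two open files
lsof_output='COMMAND     PID USER  FD TYPE DEVICE   SIZE/OFF  NODE NAME
qemu-syst 13078 root  13r  REG   0,73 4250861568 30466 /mnt/disk13/isos/Windows/Windows 10/Windows 10.iso
qemu-syst 13078 root  14r  REG   0,73  316628992 30467 /mnt/disk13/isos/Windows/virtio-win-0.1.141-1.iso'

# Skip the header line (NR > 1) and print each unique PID in column 2
printf '%s\n' "$lsof_output" | awk 'NR > 1 { print $2 }' | sort -u
# prints: 13078
```

In practice that PID is the qemu instance still holding the ISOs, which is why the disk refuses to unmount.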
danielocdh Posted April 13, 2022

For anyone googling this, there is a solution