dmwa

Everything posted by dmwa

  1. No. No luck. The USB sound device is working, so sorting out the sound card was a lower priority than the other things I needed to do.
  2. No. I did try exploring some of the suggestions listed by others above, but I ended up with a serious health issue that put me in hospital for a while. I can't recall if I actually completed any of their suggestions (possibly one, but I can't recall which), and I was just happy I had a working VM when I came out, so I didn't explore further. (Thanks everyone for the guidance though.) There was also the following, which may have been the solution:
     - Another unraid issue occurred, for which I was told my USB was likely faulty and to get a new USB and reinstall unraid. I did so, and it fixed the other problem, and possibly this issue too. I am unsure how long the USB had been faulty.
     - I had stopped using my Windows 11 VM after Windows started telling me my machine didn't have the required specifications following one of the Windows 11 updates, so I reverted to a Windows 10 VM, creating a new VM in the process. In my previous experience, VMs created after an update still seem to allow the GPU through, but I have since done updates that haven't lost the GPU. I'll try Windows 11 again at some point, but I can't be bothered to fight Windows now.
     My original and first VM, a Windows 10 VM, had kept working and never encountered this issue with the GPU. (It was possibly made before the USB started stuffing up, if that was the issue.)
  3. Changed the USB, restoring from a backup. No errors thus far. I can now connect via the unraid api, plugins are all working, and things seem a little faster. Perhaps the previous USB had been playing up far earlier and I didn't know it. Thanks for everyone's help.
  4. OK, I will back up the flash drive, try the check disk this weekend, and delete the file while I am there. Hopefully that will fix the issue; if not, I will restore from backup. Because I will have backed up from a faulty drive, will the backup likely carry the issues with it?
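     To spell out what I mean by "back up then check", here is roughly the plan - the backup destination, Windows drive letter, and Linux device name are only my assumptions, so substitute your own:

        # On the unraid box, copy the flash contents somewhere safe first
        cp -a /boot /mnt/user/backups/flash-$(date +%Y%m%d)

        # Then shut down, move the stick to another machine and check it.
        # On Windows (flash shows up as E: here):
        #   chkdsk E: /f
        # Or on Linux, with the stick unmounted (sdX1 is a placeholder):
        #   fsck.vfat -r /dev/sdX1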
  5. I tried this but it says:
     rm: cannot remove '/boot/config/plugins/dynamix.my.servers/myservers.cfg': Read-only file system
     And I still haven't managed to restart the unraid-api. Not sure if it is related, but I keep getting a yellow message at the top of the page saying unassigned.devices.plg has an update available. Running it, I get:
     plugin: updating: unassigned.devices.plg
     plugin: downloading: unassigned.devices-2022.10.12.tgz ...
     plugin: unassigned.devices-2022.10.12.tgz download failure (Generic error)
     Executing hook script: post_plugin_checks
     Fix Common Problems is also unable to update. Both plugins have an hourglass and "pending" under status. Any ideas?
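     In case it helps anyone searching later, these are the generic checks that seem to apply when the flash flips read-only, assuming the stock /boot vfat mount (a stick that keeps remounting itself read-only is often on its way out):

        # See how /boot is currently mounted and whether the kernel flagged errors
        mount | grep /boot
        dmesg | tail -n 20

        # Try remounting read-write before deleting the file again
        mount -o remount,rw /boot
        rm /boot/config/plugins/dynamix.my.servers/myservers.cfg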
  6. I tried this but it says:
     rm: cannot remove '/boot/config/plugins/dynamix.my.servers/myservers.cfg': Read-only file system
     And I still haven't managed to restart the unraid-api. Not sure if it is related, but I keep getting a yellow message at the top of the page saying unassigned.devices.plg has an update available. Running it, I get:
     plugin: updating: unassigned.devices.plg
     plugin: downloading: unassigned.devices-2022.10.12.tgz ...
     plugin: unassigned.devices-2022.10.12.tgz download failure (Generic error)
     Executing hook script: post_plugin_checks
     Any ideas?
  7. Hi, I updated to 6.11.1 yesterday. Today I noticed I have a My Servers error: "My Servers Error: unraid-api is stopped". There is an option to "Restart unraid-api", but that does not appear to fix the problem. I tried to delete /boot/config/plugins/dynamix.my.servers/myservers.cfg as mentioned on this board, through MC, but it said I cannot do that, read only. First time using MC, so I may have missed something. Is the issue related to 6.11.1? And how do I delete the file mentioned, if appropriate? Thanks ahead.
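     For completeness, there also seems to be a terminal route alongside the button, assuming the My Servers plugin puts its unraid-api command on the PATH (that location, and the status subcommand, are my assumptions - verify on your release):

        # Check the state of the service, then restart it from a shell
        unraid-api status
        unraid-api restart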
  8. I just updated to 6.10.2 rc3 and AGAIN the VM ceased to work. Same issue everyone else is finding. This is the 4th VM I have had to build from scratch. Even the VM that was working and had survived several updates through 6.10 RC3-RC8 now displays nothing.
  9. I couldn't get mine to work with VNC either.
  10. When I created a new VM this last time, I did so on a new SSD, as I didn't want to delete the others in case I could work out how to access them later. New SSD, new cache name, and a new VM with the latest virtio drivers at the time. So it worked when everything was new. In updates since then I haven't been game to change the virtio driver. The funny thing was that I had a far older VM (my first VM attempt, with whatever driver there was at the time) on the same cache drive as those that ceased working, and I discovered later that it still opened. It was my more recent ones that failed.
  11. No. I didn't receive any suggestions - perhaps I submitted to the boards in the wrong place. Unraid is the only forum I have ever used, so I'm not really sure how to use it. In the end I built another VM, and with each update I am not updating the virtio driver. No complications so far. So I'm not sure if updating the virtio driver was the issue, or if I am doing something wrong when updating it. Beyond me, sorry.
  12. I have another VM, set up in 6.9, which still opens into VNC. But since updating I cannot get any VM to display anything with the graphics card passed through, nor can I get my Windows 11 VM to display anything with VNC. I have attached the diagnostic file. cosmos-diagnostics-20220312-1406.zip
  13. The libvirt log shows the following after a couple of tries:
     2022-03-12 03:44:21.067+0000: 24589: info : libvirt version: 7.10.0
     2022-03-12 03:44:21.067+0000: 24589: info : hostname: Cosmos
     2022-03-12 03:44:21.067+0000: 24589: error : qemuMonitorIORead:494 : Unable to read from monitor: Connection reset by peer
     2022-03-12 03:44:21.068+0000: 24589: error : qemuProcessReportLogError:2107 : internal error: qemu unexpectedly closed the monitor: 2022-03-12T03:44:21.030353Z qemu-system-x86_64: -device {"driver":"pcie-pci-bridge","id":"pci.7","bus":"pci.1","addr":"0x0"}: Bus 'pci.1' not found
     Any ideas?
  14. Hi, thanks ahead for any help. I just ran the update for 6.10rc3, but I have been unable to get the Windows 11 VM to start. It worked in previous versions. I updated the driver ISO in Settings/VM Manager to virtio-win-0.1.215-2.iso. I edited the VM to have machine type Q35-6.2 (I think it was on 6.1 before), and I set the VirtIO drivers ISO to virtio-win-0.1.215-2.iso. I have an NVIDIA GTX 960 passed through. The VM shows as started, but it doesn't initiate the graphics and the screen stays blank. Log attached. Changing the graphics to VNC, I just get "guest has not initialized display yet". Every so often, after several attempts to start the VM, I get a message pop up:
     Execution error
     Internal error: qemu unexpectedly closed the monitor: 2022-03-12T00:32:19.022257Z qemu-system-x86_64: -device {"driver":"pcie-pci-bridge","id":"pci.7","bus":"pci.1","addr":"0x0"}: Bus 'pci.1' not found
     Any ideas? vm log.docx
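     Since the error names a missing 'pci.1' bus, here is a sketch of how to see which PCI controllers the VM actually defines - the VM name is a placeholder for your own, and this assumes virsh is available from the unraid terminal:

        # List the pci controllers the domain defines, and any devices still
        # addressed to bus 0x01
        virsh dumpxml 'Windows 11' | grep -E "controller type='pci'|bus='0x01'"

        # If a device address points at a controller index that no longer
        # exists after the machine-type change, adjust it by hand
        virsh edit 'Windows 11'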
  15. Sorry. I used up all the time I was willing to put into getting it to work and went to a USB sound device, which works fine. I still have the sound card and intend to give it another shot at some point.
  16. If the logout button is meant to be there in 6.10 rc2 (I haven't really looked into the changes), I think the feedback button has pushed the logout button too far left for it to appear in the browser - at least on one running via my motherboard's VGA connector - even after changing font sizes and zoom settings. However, I found I could log out by clicking on the lime icon on the bottom left, so I can now lock it when I am not there, which was my goal. Thanks for your help Squid.
  17. Hi Squid, yes it did. And when entering, it told me it was a new version of Firefox.
  18. Thanks for the response Squid. I tried changing the zoom and default font - it still doesn't appear. It did appear prior to the update, so it was visible in all previous versions I have had, up to and including 6.9.2. Any other possibilities?
  19. Hi, I just updated to Unraid 6.10rc2 and spotted that the logout button is missing from the top right of the screen in the GUI. I think it was there just after the update, but I have been setting up VMs and doing updates, so I only noticed it missing now. I always had it prior to the update and it worked fine. I have a root password - I noticed someone previously had the same issue but didn't have one. Any ideas how to make it reappear? Also, is there a shortcut key to log out without using the button? Thanks ahead.
  20. Not certain - I was browsing multiple pages and doing lots of searches. In something I read - from memory it wasn't on a board about this same issue - someone mentioned some success with legacy mode. So I tried it.
  21. After several more restarts, I was still able to get my VM going so I haven't tried swapping the USB yet. If it happens again I will try this. Thanks for the suggestion.
  22. After several more restarts of unraid the VM worked again. Any ideas on what is happening and what I can do to avoid restarts?
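     One thing that may avoid a full server restart next time, assuming unraid keeps its libvirt rc script at the usual Slackware path (worth verifying on your release before relying on it):

        # Restart just the libvirt service rather than rebooting the whole box
        /etc/rc.d/rc.libvirt stop
        /etc/rc.d/rc.libvirt start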
  23. In case the XML is needed, for my Windows 10 VM:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>001 Windows 10</name>
       <uuid>9a25d1cb-92cf-0d8d-8b94-57a4f62b1ab2</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>12</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='14'/>
         <vcpupin vcpu='1' cpuset='34'/>
         <vcpupin vcpu='2' cpuset='15'/>
         <vcpupin vcpu='3' cpuset='35'/>
         <vcpupin vcpu='4' cpuset='16'/>
         <vcpupin vcpu='5' cpuset='36'/>
         <vcpupin vcpu='6' cpuset='17'/>
         <vcpupin vcpu='7' cpuset='37'/>
         <vcpupin vcpu='8' cpuset='18'/>
         <vcpupin vcpu='9' cpuset='38'/>
         <vcpupin vcpu='10' cpuset='19'/>
         <vcpupin vcpu='11' cpuset='39'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='6' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/001 Windows 10/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Windows.iso'/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.190.iso'/>
           <target dev='hdb' bus='sata'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='qemu-xhci' ports='15'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0x14'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:3a:6f:89'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='3'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <sound model='ich9'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
         </sound>
         <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
     </domain>
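     In case anyone wants the equivalent dump from their own system, it can be regenerated rather than copied by hand, assuming virsh is available from the unraid terminal (the VM name below is mine, substitute your own):

        # Print the live libvirt definition for the named VM
        virsh dumpxml '001 Windows 10'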
  24. Currently I can't get any VM (in particular my main Windows 10 VM) started. It was operating fine until I had to shut the system down (safely) yesterday due to power maintenance in the area. Prior to that it was working fine, with a Windows 10 VM passed through to a discrete graphics card and a Windows 11 VM running with VNC. I have tried starting and stopping the VM via the VM Manager. I have tried updating the VirtIO driver, as well as using an older VirtIO driver that worked previously. I have tried restarting unraid. All of these I have tried several times.
     I started unraid. I started the array. Both docker and VMs were turned on. I attempted to start the Windows 10 VM. The circle starts spinning but nothing happens. Pressing stop, or stop all, does not appear to stop the circle spinning. If I leave the VMS page and return, I get a white page with just the grey virtual machine title bar. Eventually the USB hotplug section appears, as does the unraid orange loading squiggle in the centre of the screen, but the VMs are not listed. Only by going into VM Manager and turning VMs off can I appear to stop it. (There could be another way, but I don't know it.) And on several attempts, when returning to the VMS screen, I see a yellow message, "Cannot load LibVirt" or similar, as mentioned in my reply. I have had this happen intermittently in the past, and usually one of the steps listed above gets it working, but not currently.
     Looking in the libvirt log I see the following (after a couple of attempts to run the VM and a couple of restarts via VM Manager):
     2021-07-16 12:27:51.370+0000: 23393: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:27:54.231+0000: 23389: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (23392 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (133s, 0s, 0s)
     2021-07-16 12:27:54.231+0000: 23389: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:28:24.238+0000: 23391: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (23392 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (163s, 0s, 0s)
     2021-07-16 12:28:24.238+0000: 23391: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:29:03.912+0000: 27190: error : qemuMonitorIORead:490 : Unable to read from monitor: Connection reset by peer
     2021-07-16 12:31:25.429+0000: 32527: info : libvirt version: 6.5.0
     2021-07-16 12:31:25.429+0000: 32527: info : hostname: Cosmos
     2021-07-16 12:31:25.429+0000: 32527: warning : qemuDomainObjTaint:6075 : Domain id=1 name='001 Windows 10' uuid=9a25d1cb-92cf-0d8d-8b94-57a4f62b1ab2 is tainted: high-privileges
     2021-07-16 12:31:25.429+0000: 32527: warning : qemuDomainObjTaint:6075 : Domain id=1 name='001 Windows 10' uuid=9a25d1cb-92cf-0d8d-8b94-57a4f62b1ab2 is tainted: host-cpu
     2021-07-16 12:31:44.771+0000: 32525: warning : qemuDomainObjTaint:6075 : Domain id=2 name='Windows 11' uuid=2336c90d-4965-13ae-ee15-942d34e43131 is tainted: high-privileges
     2021-07-16 12:31:44.771+0000: 32525: warning : qemuDomainObjTaint:6075 : Domain id=2 name='Windows 11' uuid=2336c90d-4965-13ae-ee15-942d34e43131 is tainted: host-cpu
     2021-07-16 12:32:45.449+0000: 32525: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (57s, 0s, 0s)
     2021-07-16 12:32:45.449+0000: 32525: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:33:15.451+0000: 32527: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (87s, 0s, 0s)
     2021-07-16 12:33:15.451+0000: 32527: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:33:45.457+0000: 32524: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (117s, 0s, 0s)
     2021-07-16 12:33:45.457+0000: 32524: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:33:48.699+0000: 32528: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (120s, 0s, 0s)
     2021-07-16 12:33:48.699+0000: 32528: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:33:58.256+0000: 32525: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (130s, 0s, 0s)
     2021-07-16 12:33:58.256+0000: 32525: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:34:18.702+0000: 32527: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (150s, 0s, 0s)
     2021-07-16 12:34:18.702+0000: 32527: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:34:28.258+0000: 32524: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (160s, 0s, 0s)
     2021-07-16 12:34:28.258+0000: 32524: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:34:48.709+0000: 32528: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (180s, 0s, 0s)
     2021-07-16 12:34:48.709+0000: 32528: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:34:58.264+0000: 32525: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (190s, 0s, 0s)
     2021-07-16 12:34:58.264+0000: 32525: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:35:38.344+0000: 32524: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (230s, 0s, 0s)
     2021-07-16 12:35:38.344+0000: 32524: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:36:08.346+0000: 32527: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (260s, 0s, 0s)
     2021-07-16 12:36:08.346+0000: 32527: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:36:15.283+0000: 32525: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (modify, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (267s, 0s, 0s)
     2021-07-16 12:36:15.283+0000: 32525: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:36:38.353+0000: 32528: warning : qemuDomainObjBeginJobInternal:931 : Cannot start job (query, none, none) for domain 001 Windows 10; current job is (query, none, none) owned by (32526 remoteDispatchDomainGetBlockInfo, 0 <null>, 0 <null> (flags=0x0)) for (290s, 0s, 0s)
     2021-07-16 12:36:38.353+0000: 32528: error : qemuDomainObjBeginJobInternal:965 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
     2021-07-16 12:36:45.007+0000: 34518: error : qemuMonitorIO:578 : internal error: End of file from qemu monitor
     2021-07-16 12:37:47.980+0000: 34362: error : qemuMonitorIORead:490 : Unable to read from monitor: Connection reset by peer
     2021-07-16 13:07:29.030+0000: 18920: info : libvirt version: 6.5.0
     2021-07-16 13:07:29.030+0000: 18920: info : hostname: Cosmos
     2021-07-16 13:07:29.030+0000: 18920: warning : qemuDomainObjTaint:6075 : Domain id=1 name='001 Windows 10' uuid=9a25d1cb-92cf-0d8d-8b94-57a4f62b1ab2 is tainted: high-privileges
     2021-07-16 13:07:29.030+0000: 18920: warning : qemuDomainObjTaint:6075 : Domain id=1 name='001 Windows 10' uuid=9a25d1cb-92cf-0d8d-8b94-57a4f62b1ab2 is tainted: host-cpu
     In the log file for the VM, after I have managed to stop it, it says:
     -nodefaults \
     -chardev socket,id=charmonitor,fd=31,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
     -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
     -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
     -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
     -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
     -device pcie-root-port,port=0x8,chassis=6,id=pci.6,bus=pcie.0,addr=0x1 \
     -device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/001 Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
     -device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-3-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/Windows.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
     -device ide-cd,bus=ide.0,drive=libvirt-2-format,id=sata0-0-0,bootindex=2 \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.190-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
     -device ide-cd,bus=ide.1,drive=libvirt-1-format,id=sata0-0-1 \
     -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 \
     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3a:6f:89,bus=pci.1,addr=0x0 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=35,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=3 \
     -device ich9-intel-hda,id=sound0,bus=pcie.0,addr=0x1b \
     -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
     -device vfio-pci,host=0000:02:00.0,id=hostdev0,x-vga=on,bus=pci.4,addr=0x0 \
     -device vfio-pci,host=0000:02:00.1,id=hostdev1,bus=pci.5,addr=0x0 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     2021-07-16 13:07:29.030+0000: Domain id=1 is tainted: high-privileges
     2021-07-16 13:07:29.030+0000: Domain id=1 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial0)
     with "transferring data from local host" written at the bottom of the log.
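     For the record, the generic way to clear a guest wedged behind a stuck state change lock like this, assuming the domain name matches yours and accepting that destroy is a hard power-off:

        # See what libvirt thinks is running
        virsh list --all

        # Force off the wedged guest, then try a clean start
        virsh destroy '001 Windows 10'
        virsh start '001 Windows 10'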