TheExplorographer Posted January 12, 2020

My Windows 10 VM won't start now. It begins to boot, then the assigned CPUs jam to 100% and stay there, with the spinner below the TianoCore logo stuck. I have to force-kill it. Prior to 6.8.1 it worked great. The log file shows no errors.
TheExplorographer Posted January 12, 2020

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>4e855806-e29f-cb54-f4b6-7d4897952465</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/4e855806-e29f-cb54-f4b6-7d4897952465_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:54:22:8f'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0x082d'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc06b'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0557'/>
        <product id='0x2419'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x05af'/>
        <product id='0x8277'/>
      </source>
      <address type='usb' bus='0' port='4'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0a12'/>
        <product id='0x0001'/>
      </source>
      <address type='usb' bus='0' port='5'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0fd9'/>
        <product id='0x005c'/>
      </source>
      <address type='usb' bus='0' port='6'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>
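A useful habit when chasing an upgrade regression like this is to snapshot the VM definition before and after the Unraid update so the two can be diffed. This is only a hedged sketch: the domain name "Windows 10" is taken from the XML above, and `virsh` exists only on the libvirt host itself, so the script checks for it first.

```shell
#!/bin/sh
# Sketch: save the current libvirt definition of the "Windows 10" domain so it
# can be diffed against the post-upgrade version. virsh is only present on the
# Unraid/libvirt host, so fall back to a message elsewhere.
OUT=/tmp/win10-domain.xml
if command -v virsh >/dev/null 2>&1; then
  virsh dumpxml "Windows 10" > "$OUT" 2>/dev/null || echo "domain not found"
  echo "saved $OUT"
else
  echo "virsh not found; run this on the Unraid host"
fi
```

After the upgrade, run it again with a different output file and `diff` the two dumps to see whether the update rewrote anything (machine type, loader path, etc.).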
TheExplorographer Posted January 12, 2020 (edited)

Well, it's worse than that. I restarted my Unraid server via Shutdown on the menu... now the server will not start. No drive errors that I know of. It passed my last parity check with flying colors, as usual. This all happened after I updated tonight to 6.8.1. I need my server back.

Edited January 12, 2020 by TheExplorographer: Got the server running again, but still no VM.
TheExplorographer Posted January 12, 2020

Rolled back to v6.8.0 and the VM started right up. This is a problem with the update.
TheExplorographer Posted January 14, 2020

Nothing??
bland328 Posted January 15, 2020 (edited)

I'm having a somewhat similar problem. My trusty macOS VM that I've been running 24/7 for years suddenly won't start. By that I mean the VM claims to have started (the green 'play' button lights up in the Unraid GUI), but nothing ever happens; I can't even VNC in (I get the "Guest has not initialized the display (yet)" message).

Not much appears in the VM's log file. When I try to start the VM, it spits out the long set of qemu args, then this standard stuff:

2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: high-privileges
2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: custom-argv
2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: host-cpu
char device redirected to /dev/pts/1 (label charserial0)

and then... nothing. I know this may not actually have anything to do with 6.8.1, but in the name of research: is anyone else having VM woes under Unraid 6.8.1?

Edited January 15, 2020 by bland328
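For anyone looking for the same lines: the "tainted" messages quoted above normally land in libvirt's per-VM log, which on most libvirt hosts lives under /var/log/libvirt/qemu/ in a file named after the VM. A hedged sketch (the VM name "Windows 10" is the OP's; substitute your own, and the path is assumed standard):

```shell
#!/bin/sh
# Sketch: tail the per-VM libvirt log after a failed start. The log file is
# named after the VM; "Windows 10" is the OP's name, adjust to yours.
LOG="/var/log/libvirt/qemu/Windows 10.log"
if [ -f "$LOG" ]; then
  tail -n 40 "$LOG"
else
  echo "no log at $LOG (check the VM name and path on your host)"
fi
```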
bland328 Posted January 15, 2020 (edited)

19 hours ago, bland328 said:
My trusty macOS VM that I've been running 24/7 for years suddenly won't start.

For the record, I solved this, but I'm not sure what to make of it. It almost surely has nothing to do with the OP's problem, but I'll leave the solution here anyway, in case it helps someone else: the 'OVMF_VARS.fd' file (the OVMF NVRAM backing for the VM) had become corrupt. It is stored on an unassigned btrfs volume on an NVMe drive, and the btrfs volume itself does not appear to be corrupt. I've no idea what happened there, but hopefully (and probably) it has nothing to do with Unraid 6.8.1.

Edited January 15, 2020 by bland328
TheExplorographer Posted January 15, 2020

1 hour ago, bland328 said:
For the record, I solved this...but I'm not sure what to make of it.

So what did you do? Erase the file? Just curious. Also, when my VM looked as if it was running but failed, the cores assigned to it were pegged at 100% until I manually stopped it.
bland328 Posted January 16, 2020

21 hours ago, TheExplorographer said:
So what did you do?

Sorry, I should've explained! I was lucky enough to have recently migrated the VM in question to a second, non-Unraid box to use as a template for another project, so I was able to simply grab a copy of the OVMF_VARS.fd file from there. Had that not been possible, I suppose I would've grabbed a clean copy of that file from here or here, the downside being the loss of my customized NVRAM settings.

I didn't notice whether any cores were pegged when this happened, but I rather doubt it, because in my case there was no boot activity: I didn't get to the TianoCore logo, nor even to the point of generating any (virtual) video output for noVNC to latch onto.
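The recovery described above is easier if a known-good copy of the NVRAM file already exists. A minimal sketch of taking that backup, assuming the `<nvram>` path from the OP's XML (your VM's UUID will differ, so adjust the filename):

```shell
#!/bin/sh
# Sketch: keep a known-good copy of the VM's OVMF NVRAM file so a corrupt one
# can be swapped out later. Path taken from the <nvram> element in the OP's
# XML; substitute your own VM's UUID.
NVRAM=/etc/libvirt/qemu/nvram/4e855806-e29f-cb54-f4b6-7d4897952465_VARS-pure-efi.fd
if [ -f "$NVRAM" ]; then
  cp -a "$NVRAM" "$NVRAM.bak"
  echo "backed up to $NVRAM.bak"
else
  echo "NVRAM file not found: $NVRAM (check the UUID)"
fi
```

Restoring is the reverse copy (`cp -a "$NVRAM.bak" "$NVRAM"`) with the VM shut down; note that replacing NVRAM with a fresh OVMF_VARS file loses any custom boot entries, as mentioned above.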
TheExplorographer Posted January 27, 2020

Well, I just tried to update to 6.8.2 and my VM is frozen again, with 100% CPU usage on the assigned cores. Something major changed in 6.8.1 that was not in 6.8.0, and it is stopping me from upgrading. How do I get support on this? I use this VM as a streaming VM for my work. Right now I am dead in the water.
limetech Posted January 27, 2020

2 hours ago, TheExplorographer said:
Well I just tried to update to 6.8.2 and my VM is frozen again with 100% CPU usage on assigned cores.

This should be reported in Bug Reports. FWIW, I run Win10 in a VM with GTX 780 passthrough, plus several Linux VMs, and all work without issue. Here's the history of VM-relevant components:

6.8.2: kernel 4.19.98, libvirt 5.10 [same as 6.8.1], qemu 4.2.0 [same as 6.8.1]
6.8.1: kernel 4.19.94, libvirt 5.10, qemu 4.2.0
6.8.0: kernel 4.19.88, libvirt 5.8, qemu 4.1.1
6.7.2: kernel 4.19.56, libvirt 5.1.0, qemu 3.1.0 (rev 2), patched pcie link speed and width support
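To match a host against the version table above, the components can be queried directly. A hedged sketch; the tools only exist on the Unraid host itself, so each is checked before being called:

```shell
#!/bin/sh
# Sketch: print the kernel, libvirt, and qemu versions the host is actually
# running, for comparison with the per-release table above.
KERNEL=$(uname -r)
echo "kernel: $KERNEL"
if command -v virsh >/dev/null 2>&1; then
  virsh version            # reports both the libvirt library and hypervisor versions
else
  echo "virsh not found (run on the Unraid host)"
fi
if command -v qemu-system-x86_64 >/dev/null 2>&1; then
  qemu-system-x86_64 --version
else
  echo "qemu-system-x86_64 not found"
fi
```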
TheExplorographer Posted January 28, 2020

Well, I posted it in Bug Reports, then someone moved it here. 😕

I run a Win 10 VM with a GTX 1060 passed through, and have for years now. 6.8.1 ended that, with the boot-up freezing at the TianoCore logo after the Windows spinner spins 2.5 times. All 4 assigned cores peg at 100% and stay there, and it never leaves that point unless I force-stop it. I have several Linux installs as well, but I have not tried them. As I said, I use the Windows VM as a streaming computer and I stream every day, so after updating I have limited time to get the VM back up and running. I have tried just about every setting. Someone mentioned a corrupt NVRAM file, but I have no idea how I would fix that.
JorgeB Posted January 28, 2020

2 hours ago, TheExplorographer said:
Well, I posted it in Bug Reports, then someone moved it to here.

No, you posted in the general support forum and I moved it, since this was about KVM. Bug reports go here: https://forums.unraid.net/bug-reports/stable-releases/
TheExplorographer Posted January 28, 2020

8 hours ago, johnnie.black said:
No, you posted on the general support forum and I moved it since this was about KVM.

Okay, well, I thought I posted it in the right place. That was my mistake. Maybe next time, tell the person where to properly post when they get it wrong, rather than moving it to what is basically a junk forum, as I don't see a lot of support here. At this point, to avoid further frustration, I'll just find another solution and/or wait for an update, and hold off on building my second Unraid server.
JorgeB Posted January 28, 2020

Quote:
what is basically a junk forum

Respectfully disagree. Even if I thought this was a bug, I can't move the topic to the bug reports forum; it uses a different database. So, IMHO, this was the correct forum for the original post, and this is far from a junk forum, with multiple daily posts. If appropriate, it could then be escalated to the bug reports forum; as was already suggested twice, you can create a bug report there anytime.

It can take a little more time to get support for a VM-related issue, but that's mostly because fewer users run them, and many issues are hardware-specific.
1812 Posted January 28, 2020

12 hours ago, TheExplorographer said:
Someone mentioned a corrupt NVRAM file but I have no idea on how I would fix that.

Step 1: back up your OS .img
Step 2: make a new VM, but point the disk location at your old image instead of creating a new one
Step 3: boot with VNC only and verify the .img OS works
Step 4: shut down the VM, reassign the GPU
Step 5: boot the VM
Step 6: report findings
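Step 1 above is worth doing carefully: vdisk images are usually sparse, so a naive copy can balloon to the full provisioned size. A hedged sketch using GNU cp's sparse handling, with the image path taken from the OP's XML (adjust to your own share and VM name):

```shell
#!/bin/sh
# Sketch of step 1: sparse-aware backup of the vdisk image before recreating
# the VM template. Path is from the OP's XML; substitute your own.
# Note: --sparse is a GNU cp option; shut the VM down before copying.
SRC="/mnt/user/domains/Windows 10/vdisk1.img"
if [ -f "$SRC" ]; then
  cp --sparse=always "$SRC" "$SRC.bak"
  echo "backup written: $SRC.bak"
else
  echo "image not found: $SRC (check the share and VM name)"
fi
```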
TheExplorographer Posted February 3, 2020

On 1/28/2020 at 12:05 PM, 1812 said:
step 1: backup your .img os

Too late. I already dumped that VM and created a new one from scratch. So far it is working, but I have no confidence in it at this point.
TheExplorographer Posted February 3, 2020

On 1/28/2020 at 11:40 AM, johnnie.black said:
Respectfully disagree...

Well, let's just agree to disagree then. I'll weigh this exchange against my view of "support" going forward.
limetech Posted February 3, 2020

27 minutes ago, TheExplorographer said:
my view of "support"

Please tell me, what is your view of "support"? You didn't post your issue in the bug report section like I asked you to. You don't have to repost the entire content; just post a topic there, maybe give a quick summary, and then reference this topic. It also helps tremendously not to have a demanding attitude.
Marshalleq Posted February 4, 2020

I generally have to delete my VM templates and recreate them after each Unraid update. I've never been sure why, but it's pretty consistently reproducible. Probably a similar issue here.
bonienl Posted February 4, 2020

1 hour ago, Marshalleq said:
I generally have to delete my vm templates and recreate after each unraid update

Certainly not the general consensus. Personally, I have NEVER needed to recreate my VMs, and I do a lot more 'upgrades' than the average user.
limetech Posted February 4, 2020

10 hours ago, Marshalleq said:
I generally have to delete my vm templates and recreate after each unraid update.

Not a single time have I ever had to do this.
Marshalleq Posted February 4, 2020

Thanks, good to know; next time I'll post some logs. I have a funny feeling it's something to do with GPU passthrough, as it often happens when I change between passthrough and VNC too.