sheshdaddy Posted October 25, 2019

Hello everyone, I have an Unraid setup with several VMs, all running Windows 10. They were all freshly installed a few weeks ago on a brand-new PC. I had to increase the vdisk allocation for one VM, and when I restarted the system, none of the VMs would load into Windows anymore; they all boot straight into the UEFI shell screen (see attached). I cannot get any of them to boot back into Windows 10.
sheshdaddy Posted October 26, 2019

I only get the option to boot into Windows Setup, not the actual VM. I tried removing the Windows install path, and that didn't work. I tried changing the VM location from cache to user, and that didn't work. I tried changing the machine type from Q35 to i440fx, and that didn't work. I have used Unraid for several years, and this is the first time the VMs just won't come back online. The data is accessible from the shares; it's just that the VMs won't boot at all.

I also tried to run these commands from the UEFI shell:

1. fs0:
2. cd efi
3. cd boot
4. bootx64.efi

This just restarts the VM and comes back to the same shell screen. The following thread says SeaBIOS rather than OVMF for the BIOS helps resolve the issue, but I'm trying not to recreate each VM if I don't have to: https://forums.unraid.net/topic/55887-unable-to-create-vm-just-boots-to-uefi-shell/

Typing exit takes me to the BIOS boot menu, but none of the options boot into the installed Windows VM. I see boot options for floppy disk, two different network boot options, and a misc EFI shell, none of which boot into the existing VM.
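(For reference: the sequence above invokes the fallback loader, `\EFI\BOOT\BOOTX64.EFI`. If that just loops back to the shell, the Windows boot manager can also be tried directly; the path below is the standard Windows location, and `fs0:` is an assumption about which mapped filesystem is the EFI system partition — check the `map` output if unsure.)

```text
Shell> fs0:
FS0:\> cd EFI\Microsoft\Boot
FS0:\EFI\Microsoft\Boot\> bootmgfw.efi
```

If `bootmgfw.efi` is missing or also fails, the boot files inside the vdisk are likely damaged or the shell is looking at the wrong disk entirely (e.g. a wrong vdisk format declared in the VM's XML).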
bastl Posted October 27, 2019 (edited)

On 10/25/2019 at 3:51 AM, sheshdaddy said:

I had to increase the allocation for the HDD

How did you increase the size? In general, increasing it via the Unraid UI shouldn't be a problem; decreasing it can corrupt the data on it. And I don't get why increasing one vdisk would prevent all of your VMs from booting. Did they maybe all share the same vdisk for data, or in other words, do you have a disk that is connected to all of them? Did you try setting up a fresh VM, installing the OS, and then attaching the vdisk from a non-booting VM as a second disk to check whether the data on it is still accessible? What format are you using for your vdisks: raw, vhd, qcow2?

Edited October 27, 2019 by bastl
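(For reference, growing a vdisk from the command line is an extend-in-place operation. A minimal sketch, assuming a raw-format image; the filename is an example, and a qcow2 image would need `qemu-img resize` instead:)

```shell
# Create a stand-in for an existing 1 GiB raw vdisk (example name).
truncate -s 1G vdisk1.img

# Grow it by 1 GiB. Growing is safe; shrinking (passing a smaller -s
# value) cuts data off the end of the image and can corrupt the guest.
truncate -s +1G vdisk1.img

# Confirm the new size in bytes.
stat -c %s vdisk1.img
```

After growing the image, the extra space still has to be claimed inside the guest, e.g. by extending the partition in Windows Disk Management.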
testdasi Posted October 27, 2019

+1 on what bastl asked, i.e. what format are you using for your vdisk: raw, vhd, qcow2? Under some conditions, the GUI may reconfigure the XML incorrectly for non-raw formats.
bastl Posted October 27, 2019

@testdasi Yeah, that happened to me before, but if the declared format doesn't match the vdisk format, it will throw an error and won't boot at all. I asked for the format of the vdisk because with the latest 6.8 RC builds I'm seeing vdisk corruption when using compressed qcow2 images. That never occurred before; I've been using this setup for a couple of VMs for almost 2 years now and never had any issues.
testdasi Posted October 27, 2019

35 minutes ago, bastl said:

@testdasi Yeah, that happened to me before, but if the declared format doesn't match the vdisk format, it will throw an error and won't boot at all. I asked for the format of the vdisk because with the latest 6.8 RC builds I'm seeing vdisk corruption when using compressed qcow2 images.

That data corruption is definitely worth raising as a bug report. Also, in my particular case, the VM would still boot with a wrong vdisk format declared; it just wouldn't boot into Windows but into the UEFI shell instead, with no error.
bastl Posted October 27, 2019

2 minutes ago, testdasi said:

That data corruption is definitely worth raising as a bug report.

Already reported, but it looks like I'm the only one seeing this; no other reports so far. I guess no one uses compressed qcow2 vdisks. 🤨
sheshdaddy Posted October 28, 2019 (edited)

Hello, thank you so much for your kind guidance and support 🙏 This is my exact issue:

https://forums.unraid.net/topic/47174-win-10-vm-drops-into-uefi-shell-upon-startup/
https://forums.unraid.net/topic/53461-all-vms-drop-into-uefi-shell/

I am going to delete all of my VMs and the libvirt image and start again with SeaBIOS to see if that resolves the issue. Do you think this is my best option?

Edited October 29, 2019 by sheshdaddy
sheshdaddy Posted October 29, 2019

On 10/27/2019 at 6:11 AM, testdasi said:

+1 on what bastl asked, i.e. what format are you using for your vdisk: raw, vhd, qcow2? Under some conditions, the GUI may reconfigure the XML incorrectly for non-raw formats.

For the vdisk format I used qcow2, and for the disk bus I used SCSI and VirtIO.
testdasi Posted October 29, 2019

2 hours ago, sheshdaddy said:

For the vdisk format I used qcow2, and for the disk bus I used SCSI and VirtIO.

Check your XML: does the disk's driver tag declare the qcow2 format correctly?
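(For reference, this is the part of the libvirt domain XML to check; the vdisk path and target device here are examples, not the poster's actual values:)

```xml
<disk type='file' device='disk'>
  <!-- type= must match the actual image format -->
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/Windows10/vdisk1.qcow2'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```

If `type='raw'` is declared here but the file is actually qcow2, QEMU presents the qcow2 container bytes to the guest as if they were raw disk data, so the firmware finds no bootable Windows loader, which matches the drop-to-UEFI-shell symptom with no error.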
bastl Posted October 30, 2019

@sheshdaddy What version of Unraid are you running? Latest 6.7.2 or one of the 6.8 RC builds?
sheshdaddy Posted October 30, 2019 (edited)

6 hours ago, bastl said:

@sheshdaddy What version of Unraid are you running? Latest 6.7.2 or one of the 6.8 RC builds?

Good morning, I was on one of the RC builds, but then I downgraded to 6.7.2 from within Unraid to see if that was the issue. However, now I'm having other issues, such as services not starting, so I think I have to recreate my Unraid USB drive.

Edited October 30, 2019 by sheshdaddy
bastl Posted October 31, 2019

16 hours ago, sheshdaddy said:

I was on one of the RC builds

This might be related to the following issue with the current 6.8 RC builds: QEMU 4.1 has a bug and can corrupt qcow2 images. The result is corrupted files or non-booting guest systems. Using raw vdisks is the only way to prevent this in the RC builds for now.

https://forums.unraid.net/bug-reports/prereleases/680-rc1rc4-corrupted-qcow2-vdisks-on-xfs-warning-unraid-qcow2_free_clusters-failed-invalid-argument-propably-due-compressed-qcow2-files-r657/
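(For reference, a sketch of converting an existing qcow2 vdisk to raw with `qemu-img`, assuming the image is not already corrupted; the paths are examples, and this requires qemu-img on the host:)

```shell
# Verify the image first; a corrupted qcow2 may not convert cleanly.
qemu-img check /mnt/user/domains/Windows10/vdisk1.qcow2

# Convert to raw (-p shows progress, -O sets the output format).
qemu-img convert -p -O raw \
    /mnt/user/domains/Windows10/vdisk1.qcow2 \
    /mnt/user/domains/Windows10/vdisk1.img
```

Afterwards, the VM's XML has to be updated to point at the new `.img` file with `type='raw'` in the disk's driver tag. Note that a raw file occupies the full allocated size on disk unless the filesystem keeps it sparse.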