VM (Windows 10) worked for a while, now won't boot and will only go into the UEFI shell



Hello everyone,

 

I have an Unraid setup with several VMs, all running Windows 10. They were all freshly installed a few weeks ago on a brand-new PC. I had to increase the allocation for the HDD on one VM, and after I restarted the system none of the VMs will load into Windows; they only boot into the UEFI shell screen (see attached). I cannot get any of them to boot back into Windows 10.

 

 

IMG-20191024-WA0001.jpeg


I only get the option to boot into Windows Setup and not the actual VM. I tried removing the Windows install path and that didn't work. I tried changing the location for the VMs from cache to user and that didn't work. I tried changing the machine type from Q35 to i440fx and that didn't work.

 

I have used Unraid for several years and this is the first time the VMs just won't come back online. The data is accessible from the shares; it's just that the VMs won't boot at all.

 

I also tried running these commands in the UEFI shell:

1. fs0:
2. cd efi
3. cd boot
4. bootx64.efi

 

This just restarts the VM and it comes back to the same shell screen.
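
From what I've read, the Windows boot manager normally lives at \EFI\Microsoft\Boot\bootmgfw.efi rather than \EFI\Boot\bootx64.efi, so the next thing I want to try from the same shell is launching it directly (this assumes the EFI partition on the vdisk is still intact):

1. fs0:
2. cd EFI\Microsoft\Boot
3. bootmgfw.efi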

 

The following thread says using SeaBIOS rather than OVMF for the BIOS helps resolve the issue, but I'm trying not to recreate each VM if I don't have to.

https://forums.unraid.net/topic/55887-unable-to-create-vm-just-boots-to-uefi-shell/

 

I typed in exit and it takes me to the BIOS boot menu, but none of the options boot into the installed Windows VM. I see boot options for a floppy disk, two different network boot options, and a misc EFI shell, none of which boot into the existing VM.
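
From what I understand, the missing Windows entry can also be added back to that boot menu from the EFI shell with bcfg, something along these lines (I haven't tried it yet, and the path assumes the standard Windows boot manager location):

1. bcfg boot dump
2. bcfg boot add 0 fs0:\EFI\Microsoft\Boot\bootmgfw.efi "Windows Boot Manager"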

On 10/25/2019 at 3:51 AM, sheshdaddy said:

I had to increase the allocation for the HDD

How did you increase the size? In general, increasing it via the Unraid UI shouldn't be a problem; decreasing it can corrupt the data on it. And I don't get why increasing one vdisk would prevent all of your VMs from booting. Did they maybe all share the same data vdisk, or in other words, do you have a disk that is connected to all of them?
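
If you ever need to grow a vdisk outside the UI, qemu-img can do it as well. A rough example with the VM shut down; the path is only a placeholder for wherever your vdisk actually lives:

qemu-img resize /mnt/user/domains/Win10/vdisk1.img +20G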

 

Did you try to set up a fresh VM, install the OS, and later attach the vdisk from a non-booting VM as a second disk, to check whether there is data on it and you can access it?

 

What format are you using for your vdisk, raw, vhd, qcow2?
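
You can check that from the Unraid terminal with qemu-img, which prints a "file format" line. For example (again, adjust the path to your vdisk):

qemu-img info /mnt/user/domains/Win10/vdisk1.img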

Edited by bastl

@testdasi Yeah, it happened for me before, but if the declared format doesn't match the vdisk format it will throw an error and won't boot at all. I asked for the vdisk format because with the latest 6.8 RC builds I see some vdisk corruption if I'm using compressed qcow2 images. That never occurred before. I've been using this setup for a couple of VMs for almost 2 years now and never had any issues.

35 minutes ago, bastl said:

@testdasi Yeah, it happened for me before, but if the declared format doesn't match the vdisk format it will throw an error and won't boot at all. I asked for the vdisk format because with the latest 6.8 RC builds I see some vdisk corruption if I'm using compressed qcow2 images. That never occurred before. I've been using this setup for a couple of VMs for almost 2 years now and never had any issues.

That data corruption definitely warrants a bug report.

Also, in my particular case, my VM would still boot if the vdisk format was declared wrong; it just wouldn't boot into Windows but into the UEFI shell instead, with no error.


Hello,

 

Thank you so much for your kind guidance and support 🙏

 

This is my exact issue:

 

https://forums.unraid.net/topic/47174-win-10-vm-drops-into-uefi-shell-upon-startup/

https://forums.unraid.net/topic/53461-all-vms-drop-into-uefi-shell/

 

 

I am going to delete all of my VMs and the libvirt image and start again with SeaBIOS to see if that resolves the issue. Do you think this is my best option?
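
Before I delete anything I'll copy the libvirt image and the vdisks somewhere safe from the Unraid terminal, roughly like this (assuming the default locations; the backup share is just an example from my setup):

cp /mnt/user/system/libvirt/libvirt.img /mnt/user/backup/libvirt.img.bak
cp -r /mnt/user/domains /mnt/user/backup/domains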

 

 

Edited by sheshdaddy
On 10/27/2019 at 6:11 AM, testdasi said:

+1 on what bastl asked i.e. What format are you using for your vdisk, raw, vhd, qcow2?

 

Under some conditions, the GUI may reconfigure the XML for non-raw formats incorrectly.

For the vdisk format I used qcow2, and for the file system driver I used SCSI and VirtIO.
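
To make sure the XML still declares the disks as qcow2 after all the changes I made, I'm going to dump it and check the driver line, roughly like this (the VM name is just an example from my setup):

virsh dumpxml Windows10 | grep -A2 "driver name"

The type= on that driver line should match what qemu-img reports for the vdisk itself.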

6 hours ago, bastl said:

@sheshdaddy What version of Unraid are you running? Latest 6.7.2 or one of the 6.8 RC builds?

Good morning,

 

I was on one of the RC builds, but then I downgraded to 6.7.2 from within Unraid to see if that was the issue. However, now I'm having other issues, such as a service not wanting to start, so I think I have to recreate my Unraid USB drive.

Edited by sheshdaddy
16 hours ago, sheshdaddy said:

I was on one of the RC builds

This might be related to the following issue with the current 6.8 RC builds. QEMU 4.1 has a bug and can corrupt qcow2 images. The result is corrupted files or non-booting guest systems. Using a raw vdisk is the only way to prevent this in the RC builds for now.

 

https://forums.unraid.net/bug-reports/prereleases/680-rc1rc4-corrupted-qcow2-vdisks-on-xfs-warning-unraid-qcow2_free_clusters-failed-invalid-argument-propably-due-compressed-qcow2-files-r657/
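
If you want to move an existing qcow2 vdisk over to raw instead of rebuilding the VM, qemu-img can convert it. A rough example with the VM shut down first and the paths adjusted to your setup (mine are placeholders); afterwards point the VM at the new file and set its vdisk type to raw:

qemu-img convert -p -O raw /mnt/user/domains/Win10/vdisk1.img /mnt/user/domains/Win10/vdisk1_raw.img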

