OVMF EFI Settings Not Unique Between VMs


MikeW


I have two VMs, Windows 8 and Ubuntu, both configured to use OVMF since all of my components (motherboard and GPUs) are UEFI-capable. I was having a problem where my VMs wouldn't boot from the guest virtual disk image, so I went into the EFI settings to add the boot device and change the boot order. This seemed to fix the problem, but things broke when I added the second VM: a change made in one VM's EFI boot settings appears to affect both VMs. I see the following in the VMs' XML:

 

Ubuntu VM:

    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>

 

Win 8 VM:

    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>

 

When I make a change to the boot settings, does it change this file? If so, that's most likely the problem, and I would need unique EFI files. Any suggestions? Am I expected to manually create a new EFI file for each VM as I create it?


Does anyone here use more than one OVMF-based VM? I did notice that the OVMF-pure-efi.fd file was modified when I changed the boot options for my OVMF-based VMs. This seems like a bug, so I'll report it in the bug forum.


We haven't added multi-OVMF VM support just yet.  This is on the to-do list.  Guess we'll have to bump it up the list since folks are actually using it now!

 

That was much faster than me trying to find where you mentioned this months ago!  ;)

 

Yeah, the reality is that this just got deprioritized since I didn't see anyone bringing this issue up compared to others that were needing attention.  Now that it's being used, we'll move this up the list of things to do for a future release.


I assume the workaround for now is to edit the XML and point to a copied version of the OVMF-pure-efi.fd for each VM?


That's the easy short-term workaround.
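In concrete terms, the workaround might look like this. The directory comes from the XML above, and the per-VM file name (OVMF-pure-efi-win8.fd) is just an illustrative choice; the snippet simulates everything in a temp directory so it is safe to run anywhere:

```shell
# Simulated workaround: give each VM its own copy of the firmware image.
# On a real system the directory would be /usr/share/qemu/ovmf-x64/.
fw_dir=$(mktemp -d)
touch "$fw_dir/OVMF-pure-efi.fd"                               # stands in for the stock image
cp "$fw_dir/OVMF-pure-efi.fd" "$fw_dir/OVMF-pure-efi-win8.fd"  # unique copy for the Win 8 VM
ls "$fw_dir"
```

Each VM's `<loader type='pflash'>` line would then point at its own copy.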

 

That said, the proper implementation for this will actually use what are known as split vars. Basically, we'll have two files: one labeled CODE and the other labeled VARS. A VARS file will then exist for each VM you create that utilizes OVMF, and will be unique to that VM. Make sense?
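In libvirt XML terms, that would presumably look something like this sketch (the VARS path is a placeholder; the shared CODE image is opened read-only, while each VM gets its own VARS file):

```xml
<os>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/path/to/per-vm/VARS.fd</nvram>
</os>
```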

 

 


Makes perfect sense. I'll just create temporary copies of the files until the final fix is implemented. My unRAID system is still in "proof of concept" mode.

 

One more question, though: I see the following files:

-rw-r--r-- 1 root root 2097152 Jul 28 10:41 OVMF-pure-efi.fd
-rw-r--r-- 1 root root 1966080 Jul 18 15:41 OVMF_CODE-pure-efi.fd
-rw-r--r-- 1 root root  131072 Jul 18 15:41 OVMF_VARS-pure-efi.fd

 

For the temporary workaround I don't need to copy the CODE and VARS files, do I? It doesn't look like they've been modified since my initial install.


No need to worry about the code and vars files.


Is there a reason why my copy of OVMF-pure-efi.fd would be deleted after powering down the unRAID system? My test worked before powering down, but after powering back on, starting my Win 8 VM gave me an error indicating that it couldn't find the following file:

 

Could not open '/usr/share/qemu/ovmf-x64/OVMF-pure-efi-win8.fd': No such file or directory

 

When I copy the file does it actually get written back to the USB flash drive?


Every time you restart your server, any changes you made to the filesystem are lost, because the OS runs from RAM. If you want the file to survive a reboot, copy it to your flash drive and put the copy command in the go file; the file will then be copied back into place at every boot.
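A sketch of that approach. On unRAID the flash drive mounts at /boot and /boot/config/go runs at startup; the exact file locations below are assumptions, and the snippet uses temp directories so it can be run safely anywhere:

```shell
# Simulated go-file restore. On a real system $flash would be /boot (the USB
# flash drive, which persists) and $ramfs would be /usr/share/qemu/ovmf-x64/
# (which lives in RAM and is rebuilt on every boot).
flash=$(mktemp -d)
ramfs=$(mktemp -d)
touch "$flash/OVMF-pure-efi-win8.fd"   # master copy kept on the flash drive
# This is the line you would add to the go file, with the real paths:
cp "$flash/OVMF-pure-efi-win8.fd" "$ramfs/OVMF-pure-efi-win8.fd"
ls "$ramfs"
```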


This is correct.


Thanks for the tip. But I'm a noob and need a little help with this. If I ssh into the machine, is the flash drive mounted in such a way that I can 'cd' to it? And is the copy command in the 'go' file used to copy the file from the flash drive to /usr/share/qemu/ovmf-x64/OVMF-pure-efi-win8.fd?

  • 5 months later...

Sorry to bump...

 

I had to resort to doing this as well, since all three of my GPUs (2x GT 730 and 1x GT 720) do not like SeaBIOS. When I would install a custom-compiled version of OE (one with virtio support) in a VM, it would always default to booting into the UEFI shell before the OS disk.

 

I did run into a snag that I could work around...

 

I perform the OE install over VNC and also change the boot order in the EFI loader (which now lives on my flash drive). Once that is done, I pass through the GPU, and unfortunately this nukes the boot-order changes I made. My only way around it was to also pass through a keyboard, so I could navigate the UEFI boot manager and redo the changes while the GPU was passed through.

 

Bit of a pain but it works.

 

John


All of this is already fixed for unRAID 6.2 (unique OVMF VARS files per VM).  OVMF works great now in 6.2.

 

Like this:

 

  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/boot/config/TEST2_VARS.fd</nvram>
  </os>

 

:D


OVMF is preferred for GPU pass-through VMs because it utilizes UEFI as opposed to a VGA BIOS. A VGA BIOS is what you get with SeaBIOS, and while it "works", it's really not the preferred solution for GPU pass-through due to some of the hackery in VGA arbitration that we have to deal with. UEFI doesn't require such arbitration.

 

In addition, unlike SeaBIOS, UEFI can initialize a GPU after Windows has booted and the GPU drivers are loaded. So you could install and configure your VM through VNC, then add a second (physical) graphics device, install the drivers, shut down the VM, remove the VNC graphics as primary, and start the VM back up. Even if the video card doesn't display the actual boot process, it should initialize once the VM reaches the Windows desktop and the GPU drivers are loaded.

 

In short, OVMF will be the default BIOS type we load for VMs in 6.2...that's how good we think it is.

 

- Jon


Thanks for your detailed response! Sounds like I get to play with my VMs and attempt to switch to OVMF once 6.2 comes out  ;D


Archived

This topic is now archived and is closed to further replies.
