GPU passthrough: no video with SeaBIOS? (OVMF works)


whophil


I am trying to get an Nvidia GT 730 passed through to a Windows 10 guest. I initially tried running the guest with SeaBIOS, but I couldn't get the GPU to so much as flicker or show anything on the connected monitor (even with no vdisk attached). I tried passing through the appropriate GPU ROM, and tried with the PCIe ACS override both enabled and disabled. No go.
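
For reference, the GPU ROM attempt corresponds to a hostdev entry in the VM's XML roughly like the one below; the PCI address and ROM path are placeholders rather than my exact values:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <!-- host PCI address of the GT 730 (placeholder) -->
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <!-- ROM dumped from (or downloaded for) the exact card model; placeholder path -->
      <rom file='/mnt/user/isos/gt730.rom'/>
    </hostdev>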

 

I got it working with OVMF, but after installing Windows I found performance to be extremely poor and the system very unstable -- it only lasts a few reboots before EFI can no longer find the boot disk. This is a can of worms I don't want to deal with right now.

 

Which brings me back to SeaBIOS -- why doesn't it work? Is there any reason why OVMF (EFI) can initialize the graphics card, but SeaBIOS cannot? I have another card (an AMD Radeon HD 6450) which is correctly initialized by a SeaBIOS guest on the same host machine and in the same PCI slot.

 

Many thanks in advance.

Link to comment

GPU passthrough using SeaBIOS is a complicated beast, and it's difficult to explain the challenges in a way that non-developers can really appreciate / understand. The simplest explanation I can give is that traditional x86 BIOSes were never designed with virtualization in mind, or with the idea of multiple GPUs serving multiple concurrently running operating systems, and SeaBIOS emulates that type of traditional BIOS. OVMF uses UEFI, which doesn't have this limitation.
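
For anyone comparing the two in practice, the difference shows up in the <os> section of the VM's XML: a SeaBIOS VM has no <loader> element, while an OVMF VM points at the OVMF firmware plus an NVRAM file, roughly like this (the paths below are the usual unRAID locations and may differ on your system):

    <os>
      <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
      <!-- these two lines are what make the VM boot with OVMF/UEFI instead of SeaBIOS -->
      <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
      <nvram>/etc/libvirt/qemu/nvram/your-vm_VARS-pure-efi.fd</nvram>
    </os>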

 

That said, we are addressing the boot issue for OVMF in version 6.2 along with a number of other enhancements. The 6.2 beta is VERY close to release.

Link to comment

Hi Jon,

 

The Windows 10 VM had sluggish mouse movement, high CPU usage, and slow-opening menus, even on a clean Windows 10 install. The USB mouse was passed through via PCI passthrough of the USB controller.

 

I didn't have time to explore fixes. The bigger problem was that the OVMF VM would fail to boot after just a few restarts. I don't know much about how EFI works, but after a few successful VM reboots, EFI would fail to load Windows at all. It would flash two errors (something related to floppy...) and then reboot.

Link to comment

What other apps do you have running on your system? If you are assigning CPU-heavy apps to the same CPU cores as your VM, that would explain it. It is recommended NOT to use plugins for apps (use Docker instead), and to pin the Docker apps to specific CPUs that don't overlap with your VM's.
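
As a rough sketch of what that separation looks like (the core numbers, container name, and image below are only illustrative): restrict a Docker container to certain host cores with --cpuset-cpus, and pin the VM's vCPUs to different cores via <cputune> in its XML.

    # keep a CPU-heavy container on host cores 0-1 (illustrative name/image)
    docker run -d --name heavy-app --cpuset-cpus="0,1" some/image

    <!-- in the VM XML: pin the guest's two vCPUs to host cores 2-3 -->
    <vcpu placement='static'>2</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
    </cputune>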

Link to comment

At the time of testing I was running only the Dropbox plugin and no other VMs.

 

Hmm, I'd be shocked if normal NAS functions would cause a noticeable issue like that. Could be something else at play, but the next thing I'd have you try is the isolcpus= kernel parameter, which locks specific CPU threads out of use by the host altogether so they are only usable by VMs. At this point, I think we need to pause this thread until 6.2 is released and then revisit. There are LOTS of updates/fixes to the VM manager in that release.
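
On unRAID the parameter typically goes on the append line in syslinux.cfg on the flash drive (usually /boot/syslinux/syslinux.cfg) and takes effect after a reboot; the core numbers here are just an example:

    label unRAID OS
      kernel /bzimage
      append isolcpus=2,3 initrd=/bzroot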

Link to comment

Wow, I find that really strange. I'm running the same pair of cards, and after trying many pairings I think I've found the best match I could: an Nvidia GT 730 with my Windows 10 VM, using SeaBIOS and Q35. It took a while for the drivers to catch up, but on reboot there's not a hiccup. I use my AMD 6450 card for Linux Mint and OpenELEC, also with SeaBIOS and Q35. I had to turn on ACS, but that's OK. As with just about everyone else, the only problem I have is with a forced shutdown: I have to restart the whole system.
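
For anyone wondering, "SeaBIOS and Q35" just means the <os> block in the VM's XML names a q35 machine type and has no <loader> element, roughly like this (the exact machine version string depends on your QEMU version):

    <os>
      <!-- no <loader> line here, so the VM boots with the default SeaBIOS firmware -->
      <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
    </os>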

 

Linux Mint doesn't seem to play well with the Nvidia GT 730, so I keep them set up as above.

 

All VMs were fresh installs.

Link to comment

I'm sorry for the delay, I'll post them this evening after I get home from work.

 

I have noticed a common problem, both in my own testing of many cards and in the posts I've read; maybe it's just a quirk of VMs. I'm running three systems (two live on 6.1.6 and one on a test bench), and whenever I have changed cards (AMD 5450 to AMD R7 240, from memory, to AMD 6450 to Nvidia; or say Nvidia 610 to Nvidia 720/730), I notice problems with audio and/or video, as if the Windows or Linux VM won't release the old drivers in the image or update to the current ones. A clean/fresh install will pick up the proper drivers and do OK as long as you stick with that card. However, once you decide to swap in a different card (or you switch cards between VMs and then switch them back), you may be out of luck when it comes to resolution, and often you will get the red line through the audio speaker icon in Windows that says "no playback device" attached. I don't have the perfect pair yet, but I keep trying. I'm hoping 6.2 will address some of this driver stuff.

 

Link to comment

Thanks very much for your XML configurations. For some reason I misread your original post and thought you were running your GT 730 VM with OVMF instead of SeaBIOS.

 

I was eventually able to get a stable Windows 8.1 (not 10) VM with a GT 730 passed through, running OVMF.

 

In case it may help others, I've summarized my problems and findings below:

  • With SeaBIOS, the GT 730 never produced any video output (not so much as a flicker); OVMF had no problems initializing it.
  • I installed Windows 10 on the OVMF machine. After a few reboots, the Tianocore UEFI would fail to boot from the Windows 10 disk, going into a boot loop. UEFI was still able to see my virtual disk and read from the partition which contained bootx64.efi. However, telling the machine to boot from the bootx64.efi file (e.g. from the UEFI shell, as sketched below this list) caused the VM to reboot. I have no idea why. I suspect it may have something to do with the fact that Secure Boot was disabled in UEFI, or with automatic Windows updates. I didn't get a chance to try a Windows 10 install with automatic updates disabled.
  • I then installed Windows 8.1 on the OVMF machine. This installation is stable.
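
To clarify the second bullet: booting a specific bootx64.efi by hand can be done from the UEFI shell, roughly like this (the fs0: mapping and the exact directory may differ on your system):

    Shell> fs0:
    FS0:\> cd \EFI\Boot
    FS0:\EFI\Boot\> bootx64.efi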
     

Link to comment

Quote:

"I installed Windows 10 on the OVMF machine. After a few reboots, the Tianocore UEFI would fail to boot from the Windows 10 disk, going into a boot loop. UEFI was still able to see my virtual disk and read from the partition which contained bootx64.efi. However, telling the machine to boot from the bootx64.efi file caused the VM to reboot. I have no idea why. I suspect it may have something to do with the fact that Secure Boot was disabled in UEFI, or with automatic Windows updates. I didn't get a chance to try a Windows 10 install with automatic updates disabled."

 

I don't want to hijack the thread, but that is the same thing I have experienced - with Windows 10 as well - and I reported it in this post

 

I would also add that I'm still unable to get my GPU to output anything at boot.

All I get is a black screen, no matter what combination is suggested to me.

All this seems to be more a matter of luck than anything... I'm testing a GTX 750 Ti and it's the same; before, I thought my GTS 450 was just too old for this stuff. Now I don't know what to do, whether to keep trying or drop virtualization until better times come...

 

Forget the last point. I just found out what the problem was.

Link to comment
