2 Virtual Machines with Dedicated GPU Passthrough


Vanum


Hello everyone at Lime Technology!

 

I am attempting "Two Gamers, 1 Tower".

 

I have successfully built the first VM (VM1), and it works like a champ, but I cannot get the GTX 550 Ti to push any signal to the monitor for the second VM.

 

What am I doing wrong?

 

Now, here is my setup and the configuration of each of my Virtual Machines.

 

My Setup

  • CPU: Intel i7-4690K
  • Motherboard: GIGABYTE GA-Z97X-UD5H
  • SSD: Samsung SSD 850 EVO 250GB (2x)
  • Disk Array: Seagate Barracuda 1TB, Seagate Barracuda 500GB
  • UnRaid: 16GB PNY Flash Drive
  • RAM: G.SKILL Ripjaws Series 8GB (2 x 4GB)
  • GPU1: EVGA GeForce GTX 970 04G-P4-2974-RX 4GB SC GAMING (Slot 2)
  • GPU2: ZOTAC AMP! GeForce GTX 550 Ti (Fermi) DirectX 11 ZT-50402-10L (Slot 1)
  • Case: COOLER MASTER RC-692-KKN3 CM690 II

 

UnRaid:

  • Previously Used: 6.1
  • Currently Using: 6.2.0-beta18

 

VM1:

  • 5 Logical CPUs
  • 4 GB of RAM
  • SSD: 100 GB
  • DA: 250 GB
  • Machine: i440fx-2.5
  • BIOS: SeaBIOS
  • Hyper-V: Yes
  • Graphics Card: 970
  • USB Devices: Keyboard, Mouse, Audiobox
  • USB Mode: 2.0

 

Note: I have absolutely no problem with this VM. It works and works and works. I can play WoW: Legion all day :)

 

VM2:

  • 2 Logical CPUs
  • 2 GB of RAM
  • SSD: 100 GB
  • DA: 250 GB
  • Machine: i440fx-2.5
  • BIOS: SeaBIOS
  • Hyper-V: Yes
  • Graphics Card: 550
  • USB Devices: Another Keyboard, Another Mouse
  • USB Mode: 2.0

 

Note: I have tried SeaBIOS, changing the machine type, toggling the ACS override, changing the order of the cards in the PCIe slots, and also reinstalling Windows.

 

IOMMU Groups w/ ACS

/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/2/devices/0000:00:01.1
/sys/kernel/iommu_groups/3/devices/0000:00:14.0
/sys/kernel/iommu_groups/4/devices/0000:00:16.0
/sys/kernel/iommu_groups/5/devices/0000:00:19.0
/sys/kernel/iommu_groups/6/devices/0000:00:1a.0
/sys/kernel/iommu_groups/7/devices/0000:00:1b.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.2
/sys/kernel/iommu_groups/10/devices/0000:00:1c.3
/sys/kernel/iommu_groups/11/devices/0000:00:1c.6
/sys/kernel/iommu_groups/12/devices/0000:00:1d.0
/sys/kernel/iommu_groups/13/devices/0000:00:1f.0
/sys/kernel/iommu_groups/13/devices/0000:00:1f.2
/sys/kernel/iommu_groups/13/devices/0000:00:1f.3
/sys/kernel/iommu_groups/14/devices/0000:01:00.0
/sys/kernel/iommu_groups/14/devices/0000:01:00.1
/sys/kernel/iommu_groups/15/devices/0000:02:00.0
/sys/kernel/iommu_groups/15/devices/0000:02:00.1
/sys/kernel/iommu_groups/16/devices/0000:04:00.0
/sys/kernel/iommu_groups/17/devices/0000:05:00.0
/sys/kernel/iommu_groups/18/devices/0000:07:00.0

 

IOMMU Groups w/o ACS

/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.1
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/1/devices/0000:02:00.0
/sys/kernel/iommu_groups/1/devices/0000:02:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:14.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/4/devices/0000:00:19.0
/sys/kernel/iommu_groups/5/devices/0000:00:1a.0
/sys/kernel/iommu_groups/6/devices/0000:00:1b.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.2
/sys/kernel/iommu_groups/9/devices/0000:00:1c.3
/sys/kernel/iommu_groups/10/devices/0000:00:1c.6
/sys/kernel/iommu_groups/11/devices/0000:00:1d.0
/sys/kernel/iommu_groups/12/devices/0000:00:1f.0
/sys/kernel/iommu_groups/12/devices/0000:00:1f.2
/sys/kernel/iommu_groups/12/devices/0000:00:1f.3
/sys/kernel/iommu_groups/13/devices/0000:04:00.0
/sys/kernel/iommu_groups/14/devices/0000:05:00.0
/sys/kernel/iommu_groups/15/devices/0000:07:00.0
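
For anyone repeating this, both listings above come straight out of sysfs; a one-liner like this reproduces them from the Unraid console:

# print every PCI device together with the IOMMU group it belongs to
find /sys/kernel/iommu_groups/ -type l | sort -V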

 

PCI Devices

00:00.0 Host bridge [0600]: Intel Corporation 4th Gen Core Processor DRAM Controller [8086:0c00] (rev 06)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
00:01.1 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller [8086:0c05] (rev 06)
00:14.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB xHCI Controller [8086:8cb1]
00:16.0 Communication controller [0780]: Intel Corporation 9 Series Chipset Family ME Interface #1 [8086:8cba]
00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-V [8086:153b]
00:1a.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2 [8086:8cad]
00:1b.0 Audio device [0403]: Intel Corporation 9 Series Chipset Family HD Audio Controller [8086:8ca0]
00:1c.0 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 [8086:8c90] (rev d0)
00:1c.2 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 3 [8086:8c94] (rev d0)
00:1c.3 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 4 [8086:8c96] (rev d0)
00:1c.6 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 7 [8086:8c9c] (rev d0)
00:1d.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1 [8086:8ca6]
00:1f.0 ISA bridge [0601]: Intel Corporation 9 Series Chipset Family Z97 LPC Controller [8086:8cc4]
00:1f.2 SATA controller [0106]: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode] [8086:8c82]
00:1f.3 SMBus [0c05]: Intel Corporation 9 Series Chipset Family SMBus Controller [8086:8ca2]
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF116 [GeForce GTX 550 Ti] [10de:1244] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GF116 High Definition Audio Controller [10de:0bee] (rev a1)
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
04:00.0 Ethernet controller [0200]: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller [1969:e091] (rev 10)
05:00.0 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev 41)
07:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9172 SATA 6Gb/s Controller [1b4b:9172] (rev 12)
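
(The listing above is the output of lspci with numeric IDs switched on:)

# list all PCI devices with their [vendor:device] IDs
lspci -nn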

 

What do you guys suggest I try next?

 

Thank you and have a great one!

 

- Vanum

 

Link to comment

I have an idea to try, and a question. First, the question: do you have the integrated graphics set as the primary in the UEFI? If not, Unraid may be grabbing the 550. If you do in fact have the integrated graphics set as primary, try turning Hyper-V off. I have had issues with passthrough of certain devices to Windows VMs with Hyper-V turned on. If you aren't planning on doing any virtualization inside the VM, turning it off will not hurt anything.
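
A quick way to confirm which card the host console actually grabbed is the boot_vga flag in sysfs; the adapter the firmware initialized reads 1, everything else reads 0 (bus addresses taken from the PCI listing above):

# 1 = the firmware brought this card up as the primary display
cat /sys/bus/pci/devices/0000:01:00.0/boot_vga   # GTX 550 Ti
cat /sys/bus/pci/devices/0000:02:00.0/boot_vga   # GTX 970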

Link to comment

Hello again!

 

My Responses:

 

Have you tried passing the second card to the working first VM to make sure it works? Also, if you're on 6.2, it does seem that q35-2.5 fixes several issues.

 

I tried switching VM2 to q35-2.5 and it did exactly the same thing. No video on the second monitor, but I could see the VM working on VNC.
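
For anyone chasing the same symptom: when VNC shows the guest but the monitor stays dark, it is worth checking which host driver owns the card while the VM is running; if vfio-pci is not listed, the passthrough never actually took effect. Per the PCI listing above, 01:00.0 and 01:00.1 are the 550 Ti and its HDMI audio:

# show the kernel driver currently bound to the 550 Ti and its audio function
lspci -nnk -s 01:00.0
lspci -nnk -s 01:00.1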

 

 

Changing to q35-2.5 worked for me with a GTX 570.

 

I passed the 550 Ti through to VM1 and I am getting video output, and the drivers are installed.

 

 

Do you have the integrated graphics set as the primary in the UEFI? ... If you do, try turning Hyper-V off.

 

I have the integrated graphics set as the default in the BIOS, and I do plan on gaming on the second VM, so I will need Hyper-V on, correct?

 

Thank you everyone for the help! I really appreciate it!!!

Link to comment

No. Hyper-V is Windows virtualization... it's a hypervisor, like KVM on Linux. If you do not plan to run virtual machines inside this virtual machine, it won't matter if you turn it off...

You should Google what Hyper-V in KVM does. In short, it could lead to better performance in Windows guests.

Link to comment


Do not take this as me doubting or disputing what you said here, because I am not; it sounds like you know much more on the subject than I do. I did Google it, and most of what I found discussed nested virtualization. I did find one article on better Windows VM performance, but it did not give much info.

 

As to my experience: I have two Windows 10 VMs, one with Hyper-V on and one with it off. The one with it off is that way because, with it on, I had trouble passing through my MSI GTX 960; with it off, no problem. The other VM has an R9 380X and has had no trouble. I will also note that I do not notice any major difference between them in terms of performance. Again, this is simply my experience and me trying to help the OP.

 

I still consider myself fairly new to some of the concepts, but I am enjoying learning as I go, and if I can help someone who finds themselves in a position I was once in, I am happy to try. I am not embarrassed to say I have leaned on some of the more experienced forum members, such as yourself, a few times, and I appreciate the help all of you offer.

 

 

Link to comment


It probably sounded angrier than I meant it to (it wasn't supposed to sound angry)  :)

I don't know the technical details of Hyper-V in KVM, but I gather it emulates being Microsoft Hyper-V, and Microsoft implemented optimizations in Windows that are activated when Windows detects it is running on Hyper-V.

 

The reason NVIDIA cards have problems with Hyper-V activated is that the NVIDIA driver senses that Hyper-V is on and then disables itself. They want people to buy the expensive pro cards for virtualization. AMD doesn't do the same.

But in Unraid 6.2 it should be possible to have Hyper-V enabled without the NVIDIA driver noticing.
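
If you want to see what your VM definition actually exposes, and assuming virsh is available on the Unraid console, something like this shows the Hyper-V section ("VM2" here is a placeholder; use the name from virsh list --all):

# dump the Hyper-V enlightenment settings of the VM definition
virsh dumpxml "VM2" | grep -A 6 '<hyperv>'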

Link to comment

So I haven't had time to switch off Hyper-V for my second VM to see if it will make a difference, but I did want to ask a question before I do that.

 

I have tried changing the order of the cards on the mobo because I have seen that it can make a difference.

 

Why is that?

 

Is there an article that someone can point me to as to why it makes a difference?

 

The slot order and speeds are as follows:

 

Slot 1: PCI-E x16

Slot 2: PCI-E x8

Slot 3: PCI-E x4

 

Thanks!

Link to comment


A couple of reasons I can think of off the top of my head:

1. Some motherboards (generally lower-cost boards) do not have an option for choosing which graphics adapter is the primary. In this case you may need to move the card to a different slot so the host OS does not use it.

2. Two or more slots may be in the same IOMMU group. In this case you may move the card to another slot to try to get it into an IOMMU group that is not being used by another VM.

 

I am sure there are other reasons this may work, and someone may chime in with a couple more. My experience has always been to try one thing at a time and work the problem, not the solution. In your case you have already confirmed that your primary graphics is set to integrated, so Unraid should not be using the card. With ACS override on, the cards are in separate groups (while somewhat artificial, this should be fine). I would next try turning off Hyper-V, since it is just a matter of clicking a button and trying to start the VM. You could also try moving the second card to see if you can get it into another IOMMU group with ACS off; the snippet below shows a quick way to check. Let us know what you try and the results.
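
As an aside, after moving a card you can confirm which group it landed in straight from sysfs, since the group is just a symlink (the bus address changes with the slot, so re-check lspci first; 01:00.0 here is only an example):

# print the IOMMU group a given device belongs to
readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group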

Link to comment


Still haven't gotten home yet, as I am out of town for a wedding, so I haven't been able to try any of your suggestions, but I am doing research while I am away. It looks like none of my graphics cards support UEFI... does that matter?

 

I am able to get my GTX 970 to pass through video with no problem, and it is one of the cards that doesn't support UEFI. If UEFI support is necessary, is there another forum post that backs that up or explains why it is needed?

 

Thanks!

Link to comment
2 months later...

Bump with an update!

 

So... I am still trying to get a second VM running that I can do some gaming on.

 

I have turned off Hyper-V and also moved the card to different slots on the mobo. I am able to boot up Win10 after reassigning the card to the VM, but my other VM still isn't seeing the 550 Ti...

 

Also, I thought maybe I'd try a different card, so I stole my wife's graphics card and gave that a shot. No go; it would not boot up on the second VM either.

 

Thank you all!

Link to comment

Have you tried passing a VGA BIOS for the 550?

 

Look, I even found your card. Unless you would rather dump it yourself.

 

You'll have to look a bit further in the forum or on Google or Bing if you want to research how to configure a BIOS with a passthrough card in libvirt XML syntax.

 

Okay, for example, here is a device passthrough taken from a GitHub repository readme:

 

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host PCI address of the card being passed through -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- ROM file the guest is given instead of reading the card's own BIOS -->
  <rom file='/home/maikel/bios7850random.rom'/>
  <!-- PCI address the device appears at inside the guest -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</hostdev>

 

I'm not sure if 5xx cards have the same trouble with SeaBIOS as they do with OVMF. Your troubles would seem to indicate that it doesn't matter whether it's UEFI or not; the 5xx doesn't work with passthrough.

Link to comment

If you do want to add a BIOS file, then I would dump it yourself. I find the BIOS files on techpowerup.com don't work for us for passthrough (maybe some do?).

I have dumped my own for quite a few different cards, and the file size is always totally different from the ones on techpowerup (which never worked for me).

I did a video a while back about dumping the BIOS here:

http://lime-technology.com/forum/index.php?topic=52960.msg509057#msg509057
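
For reference, the sysfs route usually looks roughly like this; run it from the Unraid console while nothing (host or guest) is using the card, and adjust the bus address if the 550 Ti has moved slots. The output path is just an example:

# dump the video BIOS of the card at 01:00.0 (the 550 Ti in the listing above)
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom                     # unlock the ROM for reading
cat rom > /boot/gtx550ti.rom     # save to the flash drive so it survives a reboot
echo 0 > rom                     # lock the ROM again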

Link to comment
