
Creating and starting Win10 VM doesn't start it; no image on monitor


Bytales


I am trying to create a Windows 10 VM and install and run Windows on this particular PC.

Unraid is booted from a Corsair memory stick using the first boot menu option.

 

A TV is connected to the onboard graphics card with a VGA to HDMI adapter.

 

I have this motherboard:

The 1st GPU is in the 5th slot (the one I am trying to use for the VM), and the 2nd GPU is in the last slot. The GPU is connected to the monitor via a DP cable.

The 1st slot holds a Gigabyte PCI Express WiFi card, which I don't see detected anywhere.

The 2nd slot holds a 4-port USB card, each port with its own Fresco chip, which is properly detected.

The 3rd slot holds a PCI Express adapter card for one of the NVMe SSDs.

The 2nd NVMe SSD is in the onboard motherboard slot.

[image: motherboard layout]

1) A 10 TB single-disk array, no parity.

2) A 256 GB NVMe SSD as cache.

3) A 512 GB NVMe SSD as an unassigned disk; from this one I wanted to create a 200 GB vDisk for game installs.

4) One of the two Vega Frontier Editions for monitor output; it's the one in the 5th slot.

 

Here is the Info page, with the Main page in the background:

[screenshot: Info page and Main page]

 

The GPUs are in different IOMMU groups, and I could assign the mouse and keyboard without any problem.

The Windows 10 ISO was downloaded with the Windows media creation tool, and the VirtIO ISO is in its place as well.

 

Here is the virtual machine setup:

[screenshot: VM template settings]

 

I haven't figured out how to give it one of the two CD-ROM drives I have in the system, but that's not the point. I get no image on the monitor when the VM is created and started.

 

The VM is there, started but doing nothing, and I have no image on the monitor showing that something has booted and that the Windows installer is starting.

 

What in God's name am I doing wrong?

 

Later edit: I created the topology of the CPU with lstopo as directed by SpaceInvader One; here it is:

[screenshot: lstopo CPU topology]

The interesting thing is that both AMD GPUs, the Frontier Editions, have the same ID 1002:6863. Is this a bad thing? Perhaps it's why the VM isn't working?

1st

IOMMU group 60:    [1002:6863] 47:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XTX [Radeon Vega Frontier Edition]

 

and 2nd
IOMMU group 81:    [1002:6863] 65:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XTX [Radeon Vega Frontier Edition]
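From what I've read, two identical cards sharing the same ID should only matter if the vfio binding is done by that ID, since it would grab both of them; binding by PCI address is apparently the way around it. A rough sketch of what that would look like from the console (addresses as above; I have not tried this):

# Bind only the second Vega (65:00.0) to vfio-pci, leaving the first one to the host
modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/0000:65:00.0/driver_override
echo 0000:65:00.0 > /sys/bus/pci/devices/0000:65:00.0/driver/unbind    # only if a driver already has it
echo 0000:65:00.0 > /sys/bus/pci/drivers_probe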
 

Link to comment
2 hours ago, Bytales said:

The VM is there, started but doing nothing, and I have no image on the monitor showing that something has booted and that the Windows installer is starting.

Tried all of the ports on the back of the card, in case it prefers a particular port by default?

Do we know (I don't) if the GPU you're passing has a ROM which needs to be dumped to a file and loaded in the XML?

Here's SpaceInvader One's video on the subject of GPU ROM dumping:
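In short, if I remember right, what that video does is read the card's ROM out of sysfs while nothing is using it. A rough sketch, with the PCI address taken from your listing above and the output path only as an example:

# Dump the vBIOS of the GPU at 47:00.0 (the first Vega in the listing; use whichever card you pass).
# The card must not be in use by the host or by a VM while doing this.
cd /sys/bus/pci/devices/0000:47:00.0
echo 1 > rom                                   # make the ROM readable
cat rom > /mnt/user/ISOs/vbios/vega_fe.rom     # example output path
echo 0 > rom                                   # lock it again

The resulting file then gets referenced from the VM's XML so the firmware sees a clean copy of the ROM instead of whatever the host may already have touched.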

 

Link to comment

I think the EFI shell command to reboot is "reset" (no quotes); perhaps the monitor was asleep and missed the "hit any key to boot off of CD-ROM" message that first comes up from the Windows ISO image.
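For what it's worth, the shell itself can also find and launch the installer by hand. Roughly (FS0 is just a guess; "map" lists whatever the firmware actually mapped):

map -r                      # rescan and list mapped filesystems (FS0:, FS1:, ...)
FS0:                        # switch to the first mapped filesystem
dir EFI\BOOT                # the Windows install media keeps its boot loader here
EFI\BOOT\BOOTX64.EFI        # launch it directly
reset                       # or simply reboot and watch for the "press any key" prompt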

 

Also, you could try changing the bus from IDE to SATA; IDK if that is playing a factor, but that's a setting I typically change on my VMs.
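In the XML that is just the bus attribute on the disk's target element, for example (paths and device names are whatever the template generated):

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/ISOs/Windows10.iso'/>
  <target dev='hda' bus='sata'/>   <!-- 'ide', 'sata', 'scsi' or 'usb' -->
  <readonly/>
  <boot order='2'/>
</disk>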

Link to comment

In the UEFI shell, once I type exit it takes me to a BIOS of sorts. There I have a boot manager, and this is what I see:

[photo: boot manager entries]

 

Nothing resembles a device that would be the ISO; I think it's not even recognized, which is why I end up in the UEFI shell. I tried all of those entries, and no Windows setup starts.

 

Link to comment

Tried them all, nothing works. I'm trying different settings when making the VM. Damn, why does it have to be so complicated! Why the hell does it just work for others in the YouTube videos I watch, and not on my PC? I've been on this for the past week trying to sort it out, making incremental progress every day in understanding Unraid, but this one just cuts it, because everything is as it should be. It should just work!

Link to comment

Sorry for the frustration; I admit my first month with Unraid, almost two years ago, was a series of incremental steps. My first VM was simply VNC video with a passed-through keyboard and mouse; then the GPU; then devices/USB passthrough. You're right, it's frustrating to watch a video, see a series of clicks, mimic the steps, and have it not work the same way - the problem is that everyone's hardware is not the same, and Unraid is essentially a live boot disc image, so trying to hit all edge cases is difficult. As Jonathanm stated, VM hardware passthrough is picky.

 

If it was me in your situation, I would take a break from this particular VM setup. Start a new Windows 10 VM template and start small: pass just the GPU, keyboard, and mouse. Once that is working, try passing another device to the working VM and build up the layers. I know it's not the answer you wanted, but I'm a believer in K.I.S.S. first, then adding complexity/extra functionality.

Link to comment

I will try making a VM using VNC for the video card. But today I woke up with new ideas. I have a PCI Express WiFi card in the first motherboard slot that doesn't appear to be recognized anywhere. I'll take that out and try without it.

It also occurred to me that I have one device that is basically a clone of itself: the video cards. I basically have two Vega Frontier Edition watercooled versions, and I wonder if that is a thing for Unraid.

Anyway, I will try:

1) No PCI Express WiFi card

2) A single GPU

3) VNC only

4) A VM with Linux Mint

 

I really started to like Unraid, and I don't want to give it up and return to a single Windows 10 machine.

Link to comment
29 minutes ago, Bytales said:

I've read that Unraid needs a video card itself to load, so I have an ASPEED graphics card built into the motherboard. Shouldn't this graphics card be unassignable when creating the VM?

ASPEED controllers (AST2300/2400/2500) usually provide the graphics for IPMI/BMC/Remote Management and are dedicated to that function.  Unless something has changed with recent iterations of ASPEED controllers, they cannot be passed through to a VM.

 

CPU-integrated graphics (which EPYC chips, of course, do not have) can be passed through to a VM.

Link to comment

When I set up my Windows 10 OVMF VM, I had the same issue of it dropping into the UEFI shell.  However, typing exit and selecting the proper FSx device from the boot manager worked for me.  I realize you are not seeing anything in the boot manager that represents the device with the install media, so, you are stuck.

 

As I recall, others with similar issues found that changing the VM BIOS to SeaBIOS rather than OVMF, or changing the machine type to i440fx instead of Q35, worked to get the installation media to boot. Some even had to select a lower version number of i440fx, as newer ones did not seem to work on their hardware. OVMF is better for hardware passthrough, so I assume that is what you are using for your VM?
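For reference, both of those choices live in the <os> block of the VM's XML (the Unraid template sets them from the BIOS and Machine Type dropdowns). An i440fx variant would look roughly like this, keeping the stock OVMF loader:

<os>
  <type arch='x86_64' machine='pc-i440fx-2.12'>hvm</type>   <!-- or another i440fx version if this one misbehaves -->
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <!-- the <nvram> line that Unraid generated stays as it is -->
</os>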

 

Unfortunately, setting up a VM still has lots of variables because the hardware is variable.

Link to comment

Here is what I learned:

Starting the VM with VNC works; it starts by bringing me into the UEFI shell, presumably because no key is pressed to resume booting from the "CD". Once in the UEFI shell I type exit, I get into some sort of BIOS, and there I choose the first QEMU CD-ROM, which is the ISO it boots from.

 

Starting the VM with the video card has mixed results, but I managed to get into the UEFI shell with the video card. I type exit, get into the BIOS, and choosing that same CD-ROM (where the ISO is mounted) cannot launch the boot sequence. It's as if the drive with the Windows 10 ISO can't launch.

 

I disconnected one GPU, and with a single GPU I get the same results, so the cloned GPUs aren't it.

 

My BIOS is OVMF and the machine type is i440fx-3.0. Trying SeaBIOS gets me a black screen.

 

So I tried mounting the drives as SATA, SCSI, or USB; same thing.

 

Opening the logs for the VM gets me this:

 

2018-09-28T19:37:05.553176Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x526d8, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553191Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x526c0, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553203Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x526c8, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553218Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x526b0, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553231Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x526b8, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553267Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x526a0, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553284Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x526a8, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553325Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52690, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553341Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52698, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553362Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52680, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553380Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52688, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553396Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52670, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553408Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52678, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553424Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52660, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553437Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52668, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553452Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52650, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553465Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52658, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553487Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52640, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553501Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52648, 0x0,8) failed: Device or resource busy
2018-09-28T19:37:05.553517Z qemu-system-x86_64: vfio_region_write(0000:46:00.0:region0+0x52630, 0x0,8) failed: Device or resource busy

 

I only pasted the last lines.

 

Is this failure message the reason I can't load the Windows 10 ISO when making the VM with the video card?
Is there any other way to make the boot medium mount in the VM?

Somehow, using the video card interferes with the system's ability to properly boot the Windows ISO.
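I suppose one way to check whether the host itself still has a claim on that card (which would explain the "Device or resource busy" messages) is from the Unraid terminal, using the address from the log:

lspci -nnk -s 46:00.0                          # shows which kernel driver, if any, currently owns the GPU
grep -i -E -B1 'efifb|bootfb' /proc/iomem      # shows whether the boot framebuffer is parked on its memory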

 

Perhaps I can mount or pass through the physical DVD drive and make a bootable CD-ROM, so the VM could boot from the drive; that could work.
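For the record, a physical optical drive can be handed to the VM as a block device instead of an ISO; a rough sketch of the <disk> entry, assuming the drive shows up as /dev/sr0 on the host:

<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sr0'/>     <!-- the host's physical DVD drive -->
  <target dev='hdd' bus='sata'/>
  <readonly/>
  <boot order='2'/>
</disk>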

The way I see it, the ISO cannot load when I make the VM using the video card.

 

Perhaps I can mount the GPU in another way by modifying the XML file (a possible variation is sketched after the XML below):

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='4'>
  <name>Windows 10</name>
  <uuid>d122b979-98f6-286c-bb73-bac9e5375f21</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>17</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='8'/>
    <vcpupin vcpu='2' cpuset='40'/>
    <vcpupin vcpu='3' cpuset='9'/>
    <vcpupin vcpu='4' cpuset='41'/>
    <vcpupin vcpu='5' cpuset='10'/>
    <vcpupin vcpu='6' cpuset='42'/>
    <vcpupin vcpu='7' cpuset='11'/>
    <vcpupin vcpu='8' cpuset='43'/>
    <vcpupin vcpu='9' cpuset='12'/>
    <vcpupin vcpu='10' cpuset='44'/>
    <vcpupin vcpu='11' cpuset='13'/>
    <vcpupin vcpu='12' cpuset='45'/>
    <vcpupin vcpu='13' cpuset='14'/>
    <vcpupin vcpu='14' cpuset='46'/>
    <vcpupin vcpu='15' cpuset='15'/>
    <vcpupin vcpu='16' cpuset='47'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/d122b979-98f6-286c-bb73-bac9e5375f21_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='17' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/VIDisks/Windows 10/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/Windows10.iso'/>
      <backingStore/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/virtio-win-0.1.160-1.iso'/>
      <backingStore/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:3f:58:e6'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x46' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0b05'/>
        <product id='0x1857'/>
        <address bus='5' device='5'/>
      </source>
      <alias name='hostdev1'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1e7d'/>
        <product id='0x2e7d'/>
        <address bus='5' device='4'/>
      </source>
      <alias name='hostdev2'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
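One variation I could try (just a sketch, not tested) is pointing the hostdev at a dumped vBIOS and passing the card's HDMI audio function alongside it on the same virtual slot; this assumes the audio function sits at 46:00.1 and that the ROM was dumped to the path below:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x46' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/ISOs/vbios/vega_fe.rom'/>   <!-- dumped vBIOS; path is only an example -->
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x46' slot='0x00' function='0x1'/>   <!-- the card's HDMI audio, if it exposes one -->
  </source>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
</hostdev>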

 

Link to comment

From the topic where someone had trouble passing through a 750 Ti, Jonp said:

"You need to make sure that graphics are output using the on-board graphics for the unRAID console. If you are seeing the console on the NVIDIA GPU, that's your problem."

 

My Unraid starts using the first option on its menu, and I access it from the web console on another computer on the network!

 

Now I realize that when I start Unraid, although I get an image on the TV connected to the onboard card, the monitor connected to the GPU I'm trying to pass through also has some text on it. Does this mean Unraid is using the GPU I'm trying to pass through to the VM (the AMD Vega card) for itself?

 

If that is indeed the case, it would explain why the VM doesn't function properly!

 

The question is: when does the VM start to use the passed-through GPU? When the Windows ISO boots and loads, or does it already use it to display the BIOS?

 

Link to comment

Yeeees, this was it!

Unraid uses the upper Frontier Edition for itself, it seems; when trying to create a VM passing through that card, it won't work. I created a VM using the second card, and from within the UEFI BIOS I booted the Windows ISO and it loaded.

 

So it seems Unraid is using the upper Frontier Edition for itself, even though I'm accessing it through the web GUI.

 

Now the next problem is to figure out how to give Unraid the onboard video, if it can use it at all; if not, I need to put a GPU in the first slot. It seems Unraid uses whichever GPU comes first, or how exactly does this work?

 

I thought Unraid was supposed to use my onboard GPU!

Is there any way to force Unraid to use it somehow, by writing a line somewhere? If I can figure this out, everything finally checks out.
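If that "line somewhere" exists, it is presumably the append line in /boot/syslinux/syslinux.cfg on the flash drive, where the cards can be reserved for vfio before Unraid's console grabs one. A sketch of what I think it would look like (untested on this board):

label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1002:6863 video=efifb:off initrd=/bzroot

Since both Vegas share the ID 1002:6863, a line like that would reserve both of them, and Unraid should then fall back to the onboard ASPEED for its console; reserving only one of them would need binding by PCI address instead of by ID.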

 

Holy damn, it took me a while of fussing around. One week, to be more precise.

 

[photo: both monitors, the left one showing the Unraid console]

 

Notice how the left monitor, the one connected to the first card, has the text on it; it seems that card is being used by Unraid. Creating the VM with the second card, connected to the wide monitor, worked, and I managed to start the Windows ISO.

Link to comment
1 hour ago, Bytales said:

I thought Unraid was supposed to use my onboard GPU!

What is set in your BIOS as the "Primary Graphics Adapter" (or whatever it is called in your BIOS)? My CPU happens to have an integrated GPU, so I can set "onboard" as the primary graphics adapter and unRAID uses the iGPU. Any other GPUs in PCIe slots are then available for passthrough.

 

On my backup server, I have both an iGPU and an ASPEED AST2300 for IPMI.  There was an additional BIOS setting (something about using Intel graphics) that I had to enable as well in order for unRAID/Plex to use the iGPU in addition to IPMI using the "onboard (ASPEED)" Primary graphics adapter.

 

I know this is not exactly your situation, as you don't have an iGPU and you are using an AMD CPU. However, you may want to look around in your BIOS settings for a way to specify what is used as the primary graphics adapter.

 

 

Link to comment
1 hour ago, Bytales said:

Notice how the left monitor, the one connected to the first card, has the text on it; it seems that card is being used by Unraid. Creating the VM with the second card, connected to the wide monitor, worked, and I managed to start the Windows ISO.

I'd take a look in your system BIOS, before unRAID boots, to see if you can make the onboard video the primary graphics. Also, is there a video output on the main logic board? If so, I would plug the monitor into that port; those two changes should make the onboard video Unraid's default.

 

Later, it can be possible (assuming your MLB isn't picky about it) for a VM to take over the card and use it (this is what I do).

Glad to see things are moving forward; that's a wicked wide-screen monitor you have there. ;)

 

Link to comment

It is a VGA port, and I have a VGA to HDMI adapter to connect it to the TV. When the TV is connected this way, text scrolls during boot on both the TV and on the monitor connected to the first AMD Vega card, but the text is different.

There is no setting in the motherboard BIOS to choose a default graphics adapter.

I could replicate the above photo only when the TV is disconnected, though, which seems strange.

The wide monitor boots the ISO; the other one can't.

Link to comment

I need to hardcode something somehow to make Unraid use the onboard VGA.

You can clearly see there are 3 cards in the system when assigning the cards to the Windows VM:

[screenshot: graphics card selection in the VM template]

 

I'm asking: if the onboard card is used by Unraid, isn't it then supposed NOT to appear in the list of selectable graphics cards for the VM?

 

The card has its own IOMMU group

[screenshot: IOMMU group of the onboard card]

 

And the Vegas are also in different groups themselves.

Link to comment

