[SOLVED] Code 10 Error with Windows 10 VM but not Linux VM



Hi All,

I am running the latest Unraid version and have both a Linux Mint VM and a Windows 10 VM configured. The problem is that my optical drives work perfectly in my Linux Mint VM, but in my Windows 10 VM they fail to work. As shown below, Device Manager flags the optical drives with yellow warning triangles, and the Properties window for one of the drives reads: "The device cannot start (Code 10) STATUS DEVICE POWER FAILURE."

 

[screenshot: Device Manager showing the optical drives flagged with yellow warning triangles]

 

The SATA controller card I am using has a Marvell 9512 chip, and it has the Marvell_SATA_V1.2.0.1049 Windows 10 driver installed.

 

My Windows 10 VM setup form has IDE selected for both CDRom Bus parameters, as shown in the graphic below:

 

[screenshot: VM settings form with IDE selected for both CDRom Bus parameters]

 

Can anybody in the community recommend a SATA controller chipset that is compatible with both Windows 10 and Linux Mint and supports ATAPI devices such as optical drives? The chipsets below are the ones I have heard about for SATA controllers:

  • ASMedia chipset
  • JMB585 chipset
  • Marvell chipset

 

Is the JMB585 chipset compatible with Windows 10 and Linux Mint? Below is the link to the JMB585 SATA controller card I'm considering buying:

 

https://www.amazon.com/CREST-Internal-Non-Raid-Controller-Bracket/dp/B07ST9CPND

 

 

Any opinions welcome.  Thank you for your time.

Edited by slipstream
On 5/31/2021 at 8:59 PM, slipstream said:

The problem is that my optical drives work perfectly in my Linux Mint VM, but in my Windows 10 VM they fail to work. [...] the Properties window for one of the drives reads: "The device cannot start (Code 10) STATUS DEVICE POWER FAILURE."

I have never installed a Windows VM, but from your screenshot Windows sees the DVD/CD-ROM drives as SCSI devices.

Windows only sees SCSI drives.

Did you add the SCSI controller to your VM?

If you don't know how to do it, paste the XML of your VM.
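
For reference, a virtio SCSI controller in a libvirt domain XML usually looks something like the sketch below (a sketch only; the index, slot and target values here are illustrative and not taken from your machine):

<!-- virtio-scsi controller: exposes a SCSI bus to the guest -->
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</controller>
<!-- a virtual disk or cdrom can then sit on that bus with bus='scsi' -->
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='sda' bus='scsi'/>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

Windows would also need the vioscsi driver from the virtio ISO before it can use such a controller.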

Edited by ghost82

 

Ghost82,

Thank you for your reply. To answer your question, I have not added a SCSI controller, either as hardware or software. It is also news to me that Windows only sees SCSI drives; if that is the case, shouldn't the driver for my Marvell SATA controller card take care of this? In short, I don't understand why my Windows 10 Device Manager shows these optical drives as SCSI CD-ROM devices when the CDRom Bus parameter shown below has the "IDE" option selected:

 

[screenshot: CDRom Bus parameter with the IDE option selected]

 

Additionally, the drop-down menu below shows "SCSI" as one of the options to pick from, but I selected "IDE" instead. Do you think these parameters are related to why my optical drives show up as SCSI CD-ROM devices in my Windows 10 VM?

 

[screenshot: CDRom Bus drop-down menu listing SCSI among the available options]

 

 

Below is the Windows VM XML you requested. I hope it reveals what the problem could be:

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>8a6af88a-6985-45fd-2154-b44ce6ace238</uuid>
  <description>Win10</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>16</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='16'/>
    <vcpupin vcpu='2' cpuset='5'/>
    <vcpupin vcpu='3' cpuset='17'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='18'/>
    <vcpupin vcpu='6' cpuset='7'/>
    <vcpupin vcpu='7' cpuset='19'/>
    <vcpupin vcpu='8' cpuset='8'/>
    <vcpupin vcpu='9' cpuset='20'/>
    <vcpupin vcpu='10' cpuset='9'/>
    <vcpupin vcpu='11' cpuset='21'/>
    <vcpupin vcpu='12' cpuset='10'/>
    <vcpupin vcpu='13' cpuset='22'/>
    <vcpupin vcpu='14' cpuset='11'/>
    <vcpupin vcpu='15' cpuset='23'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/8a6af88a-6985-45fd-2154-b44ce6ace238_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='8' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/WDC_WDS100T2B0C-00PXH0_21146D875690/win10disk.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows 10/Win10_21H1_English_x64.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='75:67:00:45:8a:50'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/vbios/1030vm1.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x03f0'/>
        <product id='0x164a'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x2222'/>
        <product id='0x3061'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

 

 

Below is a list of the cards I have installed on my SuperMicro motherboard:

  1. An NVMe PCIe card with a 1TB stick that I have partitioned, with your help, to install three operating systems.
  2. An NVIDIA 710 graphics card (for the Unraid OS)
  3. An NVIDIA 1030 graphics card (passed through to VMs)
  4. A USB controller card (passed through to VMs)
  5. A Marvell SATA controller card (passed through to VMs)

 

I don't think any of these cards can be considered SCSI Controller cards.

 

The screenshots below show how my Marvell card appears in the System Devices area.

 

Below is how my Marvell SATA controller appears without a checkmark applied. With no checkmark applied, it does not appear in either of my VMs:

[screenshot: System Devices page with the Marvell SATA controller unchecked]

 

 

However, after I apply a checkmark, hit the BIND button, and reboot the server, the optical drives work in my Linux VM but not in my Windows 10 VM, where Windows 10 sees them as SCSI CD-ROM devices:

 

[screenshot: System Devices page with the Marvell SATA controller checked and bound]

 

Again, thank you for your help, and I hope this is the last brick wall I encounter in setting up my VMs.
Edited by slipstream

Just a follow-up. I learned a new Unraid terminal command, "lsscsi", which is supposed to list all of the SCSI devices installed in the server. Below is the output; from what I can see, it lists all of my array hard drives plus my VM NVMe stick (WDC WDS100T2B0C-00PXH0__1). Is it possible my NVMe stick is causing Windows 10 to mistakenly see my optical drives as SCSI devices?

 

Additionally, do you think a blacklisting approach targeted at my Marvell SATA controller could enable Windows 10 to see the optical drives correctly? Thank you for your time.

 

 

root@tower:~# lsscsi
[0:0:0:0]    disk    Lexar    USB Flash Drive  1100  /dev/sda 
[1:0:0:0]    disk    ATA      WDC WD101KRYZ-01 1H01  /dev/sdb 
[2:0:0:0]    disk    ATA      WDC WD101KRYZ-01 1H01  /dev/sdc 
[3:0:0:0]    disk    ATA      ST14000NM0008-2H SN02  /dev/sdd 
[4:0:0:0]    disk    ATA      ST14000NM0008-2H SN02  /dev/sde 
[5:0:0:0]    disk    ATA      ST14000NM0008-2H SN02  /dev/sdf 
[6:0:0:0]    disk    ATA      ST14000NM0008-2H SN02  /dev/sdg 
[7:0:0:0]    disk    ATA      ST14000NM0008-2H SN02  /dev/sdh 
[8:0:0:0]    disk    ATA      ST14000NM0008-2H SN02  /dev/sdi 
[9:0:0:0]    disk    ATA      ST14000NM0008-2H SN02  /dev/sdj 
[10:0:0:0]   disk    ATA      ST14000NM0008-2H SN02  /dev/sdk 
[11:0:0:0]   disk    ATA      ST14000NM0008-2H SN02  /dev/sdl 
[11:0:1:0]   disk    ATA      ST14000NM0008-2H SN02  /dev/sdm 
[11:0:2:0]   disk    ATA      ZA960NM10001     2147  /dev/sdn 
[N:0:1:1]    disk    WDC WDS100T2B0C-00PXH0__1                  /dev/nvme0n1

 


Mmm, OK: you are passing through the Marvell SATA controller, and with it all the CD/DVD drives and your Samsung SSD 870.

I have found that it is not uncommon for devices attached to an add-on SATA controller to show up in Windows as SCSI devices, so I would say there is nothing to worry about there.

The images you posted about the IDE bus type for the CD-ROM refer only to the virtio disk and the Windows installation disk: they have nothing to do with the bus your physical CD/DVD drives are attached to. Because you are passing through the Marvell controller, there is no trace of the CD/DVD drives in your XML, which is correct: the drives sit downstream of the Marvell controller, you pass the controller itself to the VM, and everything attached to it becomes available in the Windows VM.
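
To make that concrete, here are the two kinds of entries involved, taken from the shape of the XML you posted (the PCI address in the hostdev entry is just one of the four from your XML; the one that matters is whichever corresponds to the Marvell card):

<!-- emulated cdrom: its bus='ide' setting only affects this virtual drive -->
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
  <target dev='hdb' bus='ide'/>
  <readonly/>
</disk>

<!-- passed-through PCI controller: the physical optical drives sit behind this -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>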

Did you install any specific driver for the Marvell controller inside the virtual machine? I would search for a specific driver to install instead of using the one provided by Windows. Or, if you already installed the specific driver, try to uninstall it and let Windows install its generic driver.

You can try this:

https://www.tenforums.com/drivers-hardware/60112-marvell-92xx-sata-controller-6gb-driver-windows-10-1-2-0-1039-whql.html

You need to register to download the files attached to the first post.

 

EDIT: I saw that you should already have installed the proprietary Marvell driver; make sure the controller is actually using it and that it has not been overwritten by a generic Windows 10 driver.

I also saw that one CD-ROM should work (the TSSTcorp drive has no exclamation mark). What if you swap the ASUS drive with the TSSTcorp drive?

Does the controller have a port to power it from the power supply, or does it get power only from the PCIe slot?

Edited by ghost82

Ghost82,

Thank you for your post. To answer your question: yes, I have already installed various versions of the Marvell drivers. In fact, I visited the TenForums.com link you provided yesterday evening. I downloaded the Marvell_SATA_V1.2.0.1047 Windows 10 ZIP file and it did not fix the problem. After more research I found a Marvell_SATA_V1.2.0.1049 version, and that did not fix the problem either. The TSSTcorp drive you mention also does not work; it is a Samsung optical drive, and even though it has no yellow triangle, it does not recognize a DVD disc when I put one in the tray.

 

I think the problem is that the three Marvell drivers I have tested are too old; the newest one, I believe, goes back to 2013. This is why I asked in my original post whether anybody in the community knows the best SATA controller chipset that will work in both Linux and Windows 10 VMs. I have tried cards with Marvell and ASMedia chipsets, and they work for Linux but not for my Windows VM. The only device connected to the Marvell SATA controller card that actually works in my Windows VM is my Samsung SSD.

 

I don't understand why my Windows 10 VM recognizes my Samsung SSD but blocks all four physical optical drives. The reason I need all four optical drives is that I have several hundred DVD titles I want to archive onto my Unraid array, and this task will go a lot faster with four optical drives than with a single one. My plan was to dump all of the DVD data to the Samsung SSD and then move it over to the array.

 

Good to learn that the IDE / SATA / SCSI parameter settings I mentioned in my second post only affect the virtual drives and not the physical optical drives I have installed. That information allows me to remove these settings from my troubleshooting list; I honestly thought they were the probable cause of my problem.

 

To answer your other question, the Marvell controller draws power from the PCIe slot; I have not noticed any power connector on the card. Swapping the TSSTcorp Samsung optical drive with the ASUS optical drive is something I have not done; I will try it, even though the Samsung drive is not working either.

 

I have confirmed the Marvell driver is present, but I need to do further poking around to see whether a Windows 10 driver is overwriting it.

 

For right now I'm going to buy my third SATA controller card. However, this one is going to have a JMB585 Chipset.  Hopefully, this will fix the problem.   Thank you for your help.

 

 

Edited by slipstream
14 minutes ago, slipstream said:

For right now I'm going to buy my third SATA controller card. However, this one is going to have a JMB585 Chipset.

Hi, I edited my post above; as a last try I would remove the Marvell driver and let Windows install its generic one. Worth a try!

22 hours ago, slipstream said:

Again, thank you for your help and I hope this is the last brick wall I encounter in setting up my VMs.

Another thing you can try is to change the machine type of your VM: instead of the i440fx chipset you can try q35, which is better suited to PCIe passthrough.
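
For reference, the machine type lives in the <os> section of the XML; with the paths from the XML you posted, the q35 version of that section would look roughly like this (a sketch only; the rest of the XML also has to follow the q35 PCI topology, so changing this single line by hand is usually not enough):

<os>
  <!-- was machine='pc-i440fx-5.1' in the XML posted above -->
  <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/8a6af88a-6985-45fd-2154-b44ce6ace238_VARS-pure-efi.fd</nvram>
</os>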


Ghost82,

Thank you for your two posts. I very much liked your suggestion about changing my Windows 10 VM from i440fx to Q35-5.1, because my Linux Mint VM is set up with Q35-5.1 and, as I mentioned earlier, all four optical drives work perfectly there. So I thought it was worth a shot to make this change to my Windows 10 VM. However, when I clicked the "UPDATE" button to save the change to Q35-5.1, I got the error shown below:

 

 

 

[screenshot: error message displayed after clicking UPDATE]

 

 

I can't make heads or tails of the error description. It is not clear to me whether the PCI controller mentioned in the error is actually my Marvell SATA controller card. Nevertheless, I clicked the OK button, which prevented me from saving the Q35-5.1 change to my Windows 10 VM. That was a disappointment.

 

However, I decided to try a different strategy: rearranging how the optical drives connect to the Marvell SATA controller card. This card has a total of 8 SATA ports, so I disconnected the 4 SATA cables and reconnected them to different ports on the Marvell card. After rebooting the server and firing up my Windows 10 VM, I was pleasantly surprised to find that all four of my optical drives now work correctly. My guess is that the update to the latest Marvell driver required me to move the SATA cables to different ports on the card.

 

In short, I'm glad the solution did not involve buying a different SATA controller card, as I was close to doing. Again, thank you very much for your help with this matter. I hope this thread helps an Unraid community member who runs into a similar problem with optical drives and PCIe SATA controller cards in the future.
Edited by slipstream
  • slipstream changed the title to [SOLVED] Code 10 Error with Windows 10 VM but not Linux VM
3 hours ago, slipstream said:

However, when I clicked on the "UPDATE" button to save my change to Q35-5.1 I got the error shown below:

[screenshot: error message]

Glad that you fixed it; it is very strange that you had to disconnect and reconnect the ports.

For your info, the error above appears because the i440fx and q35 machine types have different topologies.

q35 has a pcie-root (bus 0), to which you can attach your PCIe devices directly, plus pcie-root-port controllers (to attach other PCIe devices on buses other than 0), pcie-to-pci-bridge controllers (to attach legacy PCI devices), and, less commonly, pxb-pcie (PCIe expander bus) controllers.

I don't know much about i440fx because it's an older chipset, better suited to legacy hardware, so I don't use it; but as far as I know it has a pci-root (bus 0; note that it's PCI, not PCIe), to which you can attach your PCI devices directly, plus pci-to-pci-bridge controllers (to attach other PCI devices on buses other than 0).

So the message is telling you there's an error in the XML: with machine type q35, no pcie-root is defined (that is just the first error that shows up; if you add or modify the pcie-root, other errors in the XML follow). The XML can be corrected manually to fix the topology for that machine type, but most of the time recreating the virtual machine (with the same disk(s) attached, but the new machine type) is faster.
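
As a rough illustration, the controller section of a q35 VM has to declare the PCIe topology, something along these lines (a minimal sketch only, not a complete configuration; libvirt normally generates these entries itself when a VM is created as q35):

<!-- q35 root complex: pcie-root replaces the i440fx pci-root -->
<controller type='pci' index='0' model='pcie-root'/>
<!-- root ports provide the buses that PCIe devices attach to -->
<controller type='pci' index='1' model='pcie-root-port'/>
<controller type='pci' index='2' model='pcie-root-port'/>
<!-- bridge for any legacy PCI devices the guest still needs -->
<controller type='pci' index='3' model='pcie-to-pci-bridge'/>

An i440fx XML declares none of this (its devices all hang off a pci-root at bus 0), which is why simply switching the machine type makes the validation fail.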

I wrote this about the q35 type:

 

Edited by ghost82

Ghost82,

Thank you for your post. It blows my mind what a rabbit hole this error can lead into. I saw your link, and my hat is off to you because that is some seriously complicated stuff. 🙂 At a basic level, I used the Windows 10 template to build my Windows 10 VM, and this template has i440fx as the default.

 

Do you think it is possible the Unraid developers could offer a Q35-5.1 version of the Windows 10 template at some point in the future?

 

I honestly think a Q35-5.1 Windows 10 template would probably have supported my optical drives without complications from the beginning.

Edited by slipstream
5 hours ago, slipstream said:

Do you think it is possible the Unraid developers could offer a Q35-5.1 version of the Windows 10 template at some point in the future?

You can write your suggestions in the feature request forum:

https://forums.unraid.net/forum/53-feature-requests/

Note that these are only default settings; you can set your Windows VM to be emulated on a q35 machine yourself. I agree that in 2021 most users have newer hardware, and I think most of them pass through PCIe hardware, especially GPUs: i440fx can work, but there can be issues with drivers since the GPU is detected as a legacy PCI endpoint.
