mflotron

NVIDIA still getting Code 43 on unRAID 6.2


Hey everyone,

 

I just upgraded to unRAID 6.2, which is supposed to have support for NVIDIA GPU pass through. I created a fresh VM & install of Windows 10 but I still get the dreaded Code 43 of doom.

 

I'm using a GTX 1070 on Windows 10.

My VM is configured to use Q35-2.5, OVMF, and I've tried Hyper-V both on and off.

 

Any ideas on how to fix this?


Yup.

 

Of course, I had to use it to install, but I've disabled it since then and am RDP'ing in. I've reinstalled the NVIDIA drivers a couple of times and restarted from within the VM on multiple occasions, per suggestions from other threads here.


I had this battle this weekend. It seems the latest drivers have more checks in them, which detect that you're running in a VM and refuse to start.

 

If you're exclusively using the GUI to manage VM options, make sure you've disabled Hyper-V in the advanced features. After that, add this to your XML (replace the <features> part with the below). You can set the vendor_id value to whatever you want; avoid anything KVM-, QEMU-, or unRAID-related, as this value is checked by the NVIDIA driver once Windows starts up:

 

  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor id='none'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>

 

After that's done, driver version 368.81 was the latest I could get to install and start. Later drivers didn't work, as they must have additional checks to see if you're running in a VM.

 

Worth mentioning that I was experimenting with fresh installs of Windows each time to work out exactly what I needed to do. I didn't have the battle of removing old or downgrading NVIDIA drivers each time I made a change, so just adding this to the XML and installing the driver in a current "Code 43" VM might not work.

 

Good luck :)

 

 


I'm using the latest NVIDIA drivers (372.90) without any problems...

My XML...

 

  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor id='none'/>
    </hyperv>
  </features>


That's interesting...

Did you modify the drivers in any way, or just run through the normal setup?

Normal setup, no driver hacking. All I did extra was move my card to slot 1, extract the vBIOS, and add it to my XML; otherwise I had a black screen on boot.

To be sure, I tried another W10 VM that hadn't seen my 980 Ti before. No problems at all with the latest drivers.

I'm willing to give it a go, but I wasn't able to find the vBIOS for my card in the database (https://www.techpowerup.com/vgabios/). How would I go about extracting the vBIOS from my card?

Follow steps 1-7 here: http://lime-technology.com/forum/index.php?topic=43644.45
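For anyone who can't follow the link, those steps boil down to reading the card's ROM through sysfs. Here's a minimal sketch of that method, with a hypothetical PCI address and output path (check lspci for your card's actual address; this needs root, and the card generally can't be the one driving the unRAID console while you dump it):

```python
def dump_vbios(pci_addr, out_path):
    """Dump a GPU's vBIOS via the sysfs 'rom' attribute (run as root)."""
    rom = f"/sys/bus/pci/devices/{pci_addr}/rom"
    with open(rom, "w") as f:
        f.write("1")              # writing '1' makes the ROM readable
    with open(rom, "rb") as f:
        data = f.read()
    with open(rom, "w") as f:
        f.write("0")              # writing '0' hides it again
    with open(out_path, "wb") as f:
        f.write(data)
    return data

def looks_like_option_rom(data):
    """Every PCI option ROM image starts with the 0x55 0xAA signature."""
    return len(data) >= 2 and data[0] == 0x55 and data[1] == 0xAA

# usage (as root), with an example address and path:
#   data = dump_vbios("0000:01:00.0", "/mnt/user/Drivers/gtx1070.rom")
#   looks_like_option_rom(data)  # True for a good dump
```

The signature check is a quick way to tell a good dump from garbage before wiring the file into the VM.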

 

Just gave it a go and it didn't work. I successfully extracted the vBIOS and attached it to the VM, and I checked the log to make sure it wasn't erroring out.

 

I'm going to try a fresh install right now. I had fumbled around with a bunch of stuff in Windows before posting here.

 

Edit: Same issue as before :'(. As soon as I attach the graphics card, it's error 43, even with the proper vBIOS and XML.

Not sure what's going on with your setup. I did notice, however, that your VM is using Q35; mine is using pc-i440fx-2.5.

FYI - I am using an EVGA 980 Ti, an MSI Godlike X99, and the latest NVIDIA driver.
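For anyone comparing machine types: it's set on the <type> line inside the <os> block of the VM's XML. For example (the loader/nvram paths below are what unRAID writes for an OVMF VM; the nvram filename contains your VM's UUID):

```xml
<os>
  <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/YOUR-VM-UUID_VARS-pure-efi.fd</nvram>
</os>
```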

When I did my fresh install I actually (accidentally) used pc-i440fx-2.5, so I don't think that's the issue.

 

I wonder if it's somehow something with the GTX 10 series. That's what's preventing me from using some super-old driver from before they were doing any of this. I'm using the Zotac GTX 1070 Mini.

Hmm, I hope it's not that the 10-series cards have extra checks in the BIOS to stop unRAID users :-(

 


After reading a bunch more through these and other forums, a lot of people are having issues when this is their only GPU. I just ordered a cheapo GT 610 so I can troubleshoot by having unRAID use the 610, grabbing the vBIOS while the 1070 is secondary, then removing the 610 and moving the 1070 back into slot 1.

 

My mobo technically has integrated graphics, but when I enable it in the BIOS, unRAID doesn't detect my NVIDIA card at all.

 

As a plus, I may be able to use the GT 610 with my Mac OS VM (I didn't do my due diligence and originally wanted to use my 1070 with El Cap/Sierra, but NVIDIA has only made Mac drivers up to the 980 Ti).

 

I'll report back my findings.

 

Sidenote: Free Amazon Same-Day is to die for! haha


I have an Asus GTX 1070 that is passed through to a Windows 10 Anniversary VM. Everything works; no hacks or tricks required regarding drivers.

 

I'm using i440fx-2.5 and OVMF. I would get black screens using SeaBIOS and Q35.

 

I am stubbing both the GPU and its audio device. Not sure if that's helping or doing anything, really... not hurting as far as I know.

 

Thinking back, I am pretty sure I had Hyper-V turned off during the creation of the VM and the Windows install, and then after I installed the newest NVIDIA drivers I used the Edit VM tab and turned Hyper-V support back on.
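For reference, stubbing on unRAID is done by adding vfio-pci.ids to the append line in /boot/syslinux/syslinux.cfg. A sketch, assuming example IDs (get yours from lspci -n; the first would be the GPU, the second its HDMI audio function):

```text
label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot
```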


Success!!! I was able to get it working!

 

1) Put the 2nd "dummy" GPU in position 1 and my GTX 1070 in position 2

2) Extract vBIOS from GTX 1070 and edit XML to match

3) Remove the "dummy" GPU and put my GTX 1070 back in the primary PCI-e slot

4) Boot.

5) Profit.

 

And this still worked even though my 2nd PCI-e slot is only x4 (physically an x16 slot, but only four lanes are wired).
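Step 2's "edit XML to match" means pointing the GPU's <hostdev> entry at the dumped file with a <rom> element. A fragment, with an example source address and file path:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/Drivers/gtx1070.rom'/>
</hostdev>
```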

Phew! Out of curiosity, which motherboard are you using?

It's a Lenovo IS8XM in a ThinkServer TS440. I'm using unRAID on it as a video editing server (32 TB and counting). I was previously using it with FreeNAS but had a bunch of issues; when I switched to unRAID, I had way more RAM than necessary and a decent Xeon 1225 v3, so I'm finding ways to use it. This VM is going to do some proxy rendering for me.

 

While tinkering around in the BIOS some more this time, I also found a way to enable both the IGP and the PCI-e card at the same time... so technically I didn't even have to get the 2nd dummy GPU. Oh well. Live and learn.


I tried the suggestions above, but I'm also getting the Code 43 issue. I have an i7 4771 and a GTX 780.

Here is my xml.

 

<domain type='kvm' id='26'>
  <name>STEAM-OS</name>
  <uuid>e4953100-4ce2-0189-39a6-aa9aa1b2e2fb</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>2621440</memory>
  <currentMemory unit='KiB'>2621440</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e4953100-4ce2-0189-39a6-aa9aa1b2e2fb_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vDISK-Drives/STEAM-OS/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:dd:e9:7b'/>
      <source bridge='br0'/>
      <target dev='vnet4'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/5'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/5'>
      <source path='/dev/pts/5'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-STEAM-OS/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/Drivers/gtx780.dump'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
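One thing that stands out in this XML: the <features> block has only <acpi/> and <apic/>, with none of the Hyper-V/KVM masking suggested earlier in the thread. It may be worth swapping it for that block:

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor id='none'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```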

 

 


Just to be clear, the OP's issue here was the lack of a GPU assigned to the host. The 10-series cards work fine with GPU passthrough for everyone. This is just a hardware issue that this person is having.

On 10/4/2016 at 11:15 AM, mflotron said:

Success!!! I was able to get it working! [...]

I am getting Code 43 trying to pass through my GTX 970, but my mobo (X570-I) has only one PCIe slot, so I can only plug in one GPU,

and I have a Ryzen 3900X, so I have no iGPU either... any way I can make it work? Thanks.

