cristiano1492's GPU problem (split from FAQ)



Hi community,

 

I'm begging for your help, because I can't work out where the problem is.

Just bought an Nvidia 3060 Ti and installed it. Connected the HDMI cable to my monitor.

I know that it works, because I can see the whole boot process running on the screen.

But when I go to "System devices" I can't see my brand new card. I just see an anonymous:

 

"01:00.0 VGA compatible controller: NVIDIA Corporation Device 2489 (rev a1)", but not the name of the new card.

 

Tried to install the Nvidia plugin, but I still don't see any clear indication of the new card; instead I got this message:

 

"NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running." I tried installing both the latest version and the production one, refreshing every time. From this I understood that the Nvidia plugin doesn't work for VMs, so I removed it.
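One common cause of that nvidia-smi message (this is an assumption about your setup): if the GPU is bound to vfio-pci so it can be passed through to a VM, the host Nvidia driver can never attach to it, and nvidia-smi will always fail. You can check which kernel driver currently owns the card with `lspci -k -s 01:00.0`; here is a sketch that parses a sample of that output (the sample text is illustrative):

```shell
# On a real host you would run: lspci -k -s 01:00.0
# Here we parse a captured sample of that output instead.
sample='01:00.0 VGA compatible controller: NVIDIA Corporation Device 2489 (rev a1)
	Kernel driver in use: vfio-pci
	Kernel modules: nvidia'

# Extract the driver name that follows "Kernel driver in use: "
driver=$(printf '%s\n' "$sample" | sed -n 's/.*Kernel driver in use: //p')
echo "$driver"
```

If this prints `vfio-pci`, the card is reserved for passthrough and the host (and therefore the Nvidia plugin) cannot use it, which is expected when the card is meant for a VM.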

 

After every change I rebooted the server.

 

In "Settings" I set "PCIe ACS override" to "both", but no results, and I'm still not able to pass through my GPU to the VM.

 

I have an Apple VM installed; I tried to assign it the Nvidia card mentioned above (01:00.0 VGA compatible controller: NVIDIA Corporation Device 2489 (rev a1)) and updated the settings with no warnings, but when I start the VM, the screen is still blank.

 

Is there someone who can give me some pointers?

 

Do I have to install the ROM file? (I didn't in this case, because I saw some tutorials on YouTube where they didn't and were still able to pass through.) If so, should I leave the extension as *.rom or rename it? And does the file have to be in the same folder where I keep the ISOs?

 

BTW: I'm a newbie on Unraid, so all suggestions are more than welcome.

 

THX

really appreciate your support 

On 8/8/2021 at 4:56 PM, cristiano1492 said:

Just bought an Nvidia 3060 Ti and installed it. Connected the HDMI cable to my monitor.

I know that it works, because I can see the whole boot process running on the screen.

But when I go to "System devices" I can't see my brand new card. I just see an anonymous:

 

"01:00.0 VGA compatible controller: NVIDIA Corporation Device 2489 (rev a1)", but not the name of the new card.

Are you speaking about a Windows VM?

Make sure that in the VM XML the video and audio parts are on the same bus and same slot with different functions (a multifunction device), and install the latest Nvidia drivers.

 

On 8/8/2021 at 4:56 PM, cristiano1492 said:

I have an Apple VM installed; I tried to assign it the Nvidia card mentioned above (01:00.0 VGA compatible controller: NVIDIA Corporation Device 2489 (rev a1)) and updated the settings with no warnings, but when I start the VM, the screen is still blank.

Bad choice for an Apple VM: no Apple OS is compatible with a 3060 Ti, there's no driver available, it's a no-go.

Edited by ghost82

Hi ghost82, first of all, thanks for your answer and suggestions.

- OK, for Apple I won't use the 3060... got it. Could any other card work? Perhaps a little bit older, like a GTX 760?

- Yes, I'm speaking about a Windows VM. You see, the two parts of the card are in different groups, and this should be the best option according to the several tutorials I watched (is that correct?)

- How can I install the drivers? Do you mean putting the ROM file of the card into the VM configuration? Is that what you meant? If yes, can I leave the extension as .rom or should I rename it somehow (as I saw in a SpaceInvaderOne tutorial on VMs)?

 

THx a lot for helping me

 

scheda grafica.png

17 hours ago, cristiano1492 said:

Could any other card work? Perhaps a little bit older, like a GTX 760?

Yes, the GTX 760 is supported up to the most recent stable macOS (Big Sur).

You can look up compatible GPUs and macOS versions here (AMD and Nvidia):

https://dortania.github.io/GPU-Buyers-Guide/modern-gpus/nvidia-gpu.html

https://dortania.github.io/GPU-Buyers-Guide/modern-gpus/amd-gpu.html

 

17 hours ago, cristiano1492 said:

- How can I install the drivers? Do you mean putting the ROM file of the card into the VM configuration? Is that what you meant? If yes, can I leave the extension as .rom or should I rename it somehow (as I saw in a SpaceInvaderOne tutorial on VMs)?

I think you can build the VM with VNC (no GPU passthrough), enable remote desktop in the guest (inside the VM), add GPU passthrough (and remove VNC access), boot the VM, and if Windows is not able to install any driver, remote into the VM and install it from there.

 

You can try to install the drivers while you have VNC access, but I don't think you will manage: there's no GPU, and the drivers may fail to install if the GPU is not found.

 

Loading the ROM may not be mandatory. If you do it, make sure to dump your own ROM; a .dump extension is OK: mine was dumped from my workstation, is named with .dump, and works fine.

 

17 hours ago, cristiano1492 said:

you see, the two parts of the card are in different groups, and this should be the best option according to the several tutorials I watched (is that correct?)

Yes, that's OK; what I meant is: make sure that in the XML both the audio and video parts are on the same bus and slot but on different functions, in a multifunction device.
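As a side note on notation: the `01:00.0` in the lspci output is bus:slot.function, and those are exactly the hex fields that go into the `<source>` address of the XML below. A small sketch of the mapping (the address value is just the OP's example):

```shell
# Split an lspci-style address (bus:slot.function) into the fields
# libvirt's <address> element expects. "01:00.0" is the OP's GPU.
addr="01:00.0"
bus="0x${addr%%:*}"    # everything before the ':'  -> 0x01
rest="${addr#*:}"      # the part after the ':'     -> "00.0"
slot="0x${rest%%.*}"   # before the '.'             -> 0x00
func="0x${rest#*.}"    # after the '.'              -> 0x0
echo "bus=$bus slot=$slot function=$func"
```

The audio part of the same card is the same bus and slot with function `.1`, which is why the two `<source>` blocks below differ only in the function field.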

 

Unraid puts the audio on a different bus, function 0x0, something like this (this is for a Q35 machine):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/path/to/vbios.dump'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>

 

This is not correct, because some drivers will fail if audio and video are not on the same bus/slot; it should be (for a Q35 machine):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/path/to/vbios.dump'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
    </hostdev>

 

For i440fx you have only bus 0, and Unraid puts the audio on a different slot with the same function (0x0). In this case, add multifunction='on' to the video part (as for the Q35 machine), change the slot of the audio part to match the video part, and change the function of the audio part to 0x1 (as for the Q35 machine).
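Putting those instructions into XML, the corrected i440fx entries would look something like this sketch (slot 0x05 is only an example; keep whatever slot Unraid originally assigned to the video part):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</hostdev>
```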

Edited by ghost82
Link to comment

Well, man, you'll have to be really patient with me, because if there's something more than a newbie... then I'm even newbier... :o)))

You gave me a lot of advice, but I'm not sure I got the full meaning... so, if it's not a problem for you, please... please... slow down, and imagine you have a baby in front of you... a veeery young baby... :o)).

Well, let me try to clarify my doubts:

- Creating a Win 10 VM following the SpaceInvaderOne tutorial is... let's say easy... I've already made some successful attempts: VMs were created with VNC graphics and everything works fine... up to here... OK.

- When you mention "enable remote desktop in the guest (inside the vm)", I guess you mean adding some software to the VM for remote control from outside? Like Splashtop? If yes, do you mean that through it I could remove VNC and replace it with passthrough, choosing a GPU or simply setting passthrough? If I understood correctly, I might then be able to swap to the new card, forcing the Win 10 VM to load the drivers?

- I've tried to rename the VGA BIOS file to .dump, but then in the VM edit I'm not able to find where this file is. The only file Unraid finds is the original one. I also tried to build the VM directly with the Nvidia GPU and its audio part, but it doesn't start. (I was trying a shortcut, but no luck :o)) )

- The rest of the screens you showed are not so easy for me to understand, so I'm not going to ask further questions about those, because I don't want to overload you with too many questions.

 

My goal is to create a VM that works with my new (and quite expensive, I would say) GPU... but following several tutorials hasn't given me any success on this front.

 

I really appreciate your help, because it's like a helping hand in an ocean storm... and I really would like to understand where I'm going wrong.

 

Unfortunately SpaceInvaderOne makes a lot of tutorials but never answers questions... and that... doesn't help much...

 

Thx

 

Hoping to hear from you soon

 

   

On 8/14/2021 at 7:13 PM, cristiano1492 said:

- When you mention "enable remote desktop in the guest (inside the vm)", I guess you mean adding some software to the VM for remote control from outside? Like Splashtop?

Yes

 

On 8/14/2021 at 7:13 PM, cristiano1492 said:

If yes, do you mean that through it I could remove VNC and replace it with passthrough, choosing a GPU or simply setting passthrough? If I understood correctly, I might then be able to swap to the new card, forcing the Win 10 VM to load the drivers?

Yes, you must remove VNC when you pass through the GPU: if I'm not wrong, it's not allowed to have both VNC and GPU passthrough. So, when you enable GPU passthrough, you can install a VNC server, or something else (like Splashtop or Windows Remote Desktop), inside the VM.

If Windows is smart enough, it should auto-install at least a basic GPU driver; then, once booted with video output, you can install the proper GPU drivers to get the most out of your GPU.

 

On 8/14/2021 at 7:13 PM, cristiano1492 said:

- I've tried to rename the VGA BIOS file to .dump, but then in the VM edit I'm not able to find where this file is.

I don't understand this. Dump the vbios and save it in a folder on the server; switch the VM settings to advanced view to see the XML, and you will find something like:

<rom file='/path/to/vbios.dump'/>

Change /path/to/ to reflect the actual folder where the vbios dump is.

Do it manually in the xml view.

 

 

  • 1 month later...

I just had a similar issue and thought I would post here for anyone else who encounters similar behaviour: a black screen (or no output) from the GPU when everything else looks like it should work. My Nvidia 3060 Ti also got detected as "NVIDIA Corporation Device 2489", but this is not an indicator of any problem in itself. Some Nvidia GPUs do get detected under odd names, especially the newer RTX 30xx series.
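The generic name just means the copy of the PCI ID database on the host predates the card; the numeric vendor:device pair is what actually identifies it. A sketch parsing that pair out of a sample `lspci -nn` line (the sample line is illustrative):

```shell
# lspci -nn appends numeric IDs in brackets; 10de is Nvidia's PCI vendor ID,
# and the device ID (here 2489) pins down the exact model even when lspci
# shows no marketing name for it.
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:2489] (rev a1)'
ids=$(printf '%s\n' "$line" | grep -o '\[10de:[0-9a-f]*\]')
echo "$ids"
```

If the `update-pciids` tool from pciutils is available on the host, running it refreshes the local database so lspci can display the proper card name; either way the passthrough works off the numeric IDs, not the name.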

 

Symptoms I got:

When the VM is started with the GPU passed through correctly (in terms of the Unraid config), it goes green in the UI and everything looks as though it should work fine, but no output appears on a monitor via HDMI or DisplayPort. With pure VNC as the graphics card it functions fine. When the server boots, display output can be seen during POST and BIOS, so you know the card works normally.

 

What I changed which caused this issue to occur:

I upgraded the GPU in my system from an Nvidia 3060 to an Nvidia 3060 Ti.

 

Actual issue:

The Nvidia drivers in the Windows VM itself. I believe that because I did not uninstall the old Nvidia driver before the upgrade, Windows kept trying to use the existing driver with the new card. The problem, I think, was that the driver I had installed was a specific one that only works with non-LHR 3060 cards, as it was the leaked dev driver that unlocks 3060s for mining. I should have uninstalled this driver first, while the old card was still in the server, from inside the VM, and used a generic Windows auto-detected driver, or just installed the latest Nvidia driver for the 3060 first. That would probably be similar enough to the 3060 Ti driver that it might have initialised the card and worked.

 

I don't think you can simply use a VNC connection to go in and remove the driver, because then the GPU is not listed in Device Manager to be uninstalled, if you see what I mean. Although...

 

How you might be able to fix it:

A. It may be possible to use something like the free DDU utility to force-remove all traces of the Nvidia drivers. Then it may be possible to start the VM with the GPU passed through, and Windows may auto-detect and initialise it with a basic driver.

B. Creating a new VM from scratch would probably solve the issue and initialise the GPU fresh.

 

How I fixed it:

In my case, since I already had the new GPU installed in the server and didn't want to put the old one back in or try either of the two options above yet, I decided to:

 

(If remote desktop is already possible on your VM then skip to step 6)

1. Configure the VM to use only VNC as the graphics card.

2. Start the VM in the Unraid GUI.

3. Using VNC, access the VM and go to the remote desktop settings inside Windows. Ensure remote desktop connections are allowed and that you can connect with a valid user.

4. Exit remote desktop and VNC.

5. Stop/shut down the VM from the Unraid GUI.

6. Configure the VM to use the GPU passed through, as per the instructions in various other threads/YouTube videos (out of scope for this explanation, as it's too long).

7. Start the VM from the Unraid GUI.

8. Remote desktop into the VM.

9. In Device Manager you should now see the Nvidia GPU as a device that can be right-clicked and uninstalled. This should remove the problematic, incompatible Nvidia driver that cannot work with the currently installed and passed-through GPU. In my case the new 3060 Ti is the GPU in the system, and the driver uninstalled was the old 3060 dev mining driver.

10. Go to the Nvidia website, then download and install the latest driver for your GPU as you normally would. With Nvidia I recommend selecting a "Custom" install and then a "Clean" install to properly clear out any remnants of older drivers. The VM will reboot to complete the installation; just remote desktop back in once it's done to verify it completes, then exit the installer.

11. Disconnect remote desktop and test direct GPU connectivity to a monitor. You should find you have output on your screen as normal.

 
