[SOLVED] GPU passed to VM spinning fans & not in nvidia-smi


benno87


Hey all, first-time poster here after almost a year of fumbling my way through things with the help of great forums such as this one and, of course, the tutorials from SpaceInvader One.

 

I have just set up the Nvidia Drivers plugin and am now successfully hardware transcoding in Plex using the primary (1st-slot) GPU (GeForce GTX 1060 6GB).

 

My gaming VM uses a secondary (3rd-slot) GPU (GeForce GTX 1660 Ti) and runs perfectly fine.

 

My question is about the VM GPU (the secondary GPU). Even when the VM isn't running, all three of its fans spin, seemingly at 100%. Starting and then stopping the VM makes no difference; the fans keep spinning.

- The secondary GPU does not appear in the Nvidia Drivers plugin.

- The secondary GPU does not appear in 'nvidia-smi'. Only the primary GPU is shown.

- I have seen that 'nvidia-smi -pm 1' helps some users, but since the secondary GPU isn't listed in 'nvidia-smi', no instruction ever reaches it.

 

The passed-through GPU in question, as shown in System Devices:

[screenshot of the System Devices entry]

 

In the Nvidia Drivers plugin, only the primary (transcoding) GPU is shown.

 

 

With the current Nvidia Drivers plugin, is a passed-through GPU simply not able to be 'seen' by the plugin or the OS at all?

How can I:

1. See that GPU when it isn't being used by the VM?

2. Stop the GPU fans from spinning when the VM is not in use?

 

Any help is much appreciated!!

 

Cheers


OK, so I found what the problem was: I already had the VM GPU bound to VFIO at boot for passthrough.

 

I removed the four devices in that IOMMU group from the syslinux configuration (and from System Devices), rebooted, and then experimented with the ACS override settings to see what would work.
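
For anyone who hits the same thing: on Unraid, that boot-time VFIO binding typically comes from a vfio-pci.ids entry on the append line in syslinux.cfg (or from the tick-boxes in System Devices). As a rough illustration only, the kind of append line I removed looked something like this; the device IDs are placeholders, not my actual hardware:

    label Unraid OS
      menu default
      kernel /bzimage
      append vfio-pci.ids=10de:xxxx,10de:yyyy,10de:zzzz,10de:wwww initrd=/bzroot

Once that binding is gone and nothing is ticked in System Devices for those devices, the host Nvidia driver can claim the card at boot.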

 

In the end I could pass the GPU through without needing the ACS override at all. I only needed to manually add each of the four devices in that IOMMU group to the VM's XML. The group could perhaps have been separated further, but I'd rather not use the ACS override if I don't have to.

 

The multifunction entries in the XML were numbered 1 (video), 2 (audio), and 3 & 4 (USB controller); four devices in total make up the GPU's multifunction package.
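
For illustration, here is a minimal sketch of what two of those hostdev entries can look like in the VM XML. The host and guest bus/slot numbers below are placeholders rather than values copied from my machine; the point is that the first function carries multifunction='on' and the remaining functions reuse the same guest bus/slot with their own function numbers:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <!-- host address of the GPU's video function (placeholder values) -->
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <!-- guest address; multifunction='on' goes on the first function -->
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <!-- host address of the GPU's audio function (placeholder values) -->
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
      </source>
      <!-- same guest bus/slot, next function number -->
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
    </hostdev>

The remaining two functions (the USB controller devices) follow the same pattern with the next function numbers on both the source and guest addresses.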

 

Now both GPUs are visible to the Nvidia Drivers plugin on startup, so 'nvidia-smi -pm 1' can be used to stop the fans spinning. That command now runs from a script every 15 minutes.

While the VM is running, its GPU disappears from the Nvidia Drivers plugin; when the VM stops, it reappears. The script can't affect the GPU while the VM holds it, because the Nvidia Drivers plugin (and nvidia-smi) can't see it at that point.
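
For completeness, a minimal sketch of that user script, assuming it is scheduled every 15 minutes (e.g. via the User Scripts plugin's custom cron setting); this is a rough reconstruction rather than the exact script:

    #!/bin/bash
    # Bail out quietly if the Nvidia driver/tools aren't loaded for some reason.
    command -v nvidia-smi >/dev/null 2>&1 || exit 0

    # Enable persistence mode on every GPU the host driver can currently see.
    # Keeping the driver attached lets an idle card drop to idle fan/power levels.
    # A GPU attached to a running VM is invisible to nvidia-smi at this point,
    # so it is simply left alone.
    nvidia-smi -pm 1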

