
VMs grabbing wrong PCIe devices

4 posts in this topic


I have just replaced the CPU/MB in my server and now two of my VMs are acting up.


The first is a pfSense VM using a four-port NIC (02:00.0 - 02:00.3); the other is a Windows 10 VM using a GPU (01:00.0 + 01:00.1).


If I try to start either VM while the other is running, it throws an execution error complaining that the device assigned to it is already in use by the other VM. So booting the Windows VM complains that the pfSense VM is using the GPU, and vice versa.


I've been over the XML for both VMs and can't find anything that would cause this; neither VM should be using the other's PCIe devices. Anyone here have any ideas?
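For anyone else comparing XML on their own setup: each passed-through device appears as a `<hostdev>` block, and the `<address>` inside `<source>` identifies the host PCI device. A sketch of what the GPU entry for 01:00.0 would look like (bus/slot/function values taken from the addresses in this thread; `managed` and domain values are typical defaults, not copied from the poster's config):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host PCI address: 01:00.0 (the GPU) -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

If none of the `<source>` addresses overlap between the two VMs, the conflict usually isn't in the XML at all, which is why the IOMMU grouping suggestion below panned out.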


I would just create new VMs and assign the current vdisks to them.


I just did a huge upgrade last week myself and all but 2 VMs ran OK. I used this solution too and had them both back up and running in under 5 minutes.


Did you check your IOMMU groups? Could be the PCIe slots you're wanting to use go through the chipset and end up in the same group.
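On any Linux host you can list the groups straight from the shell. A quick sketch (assumes IOMMU is enabled on the kernel command line, e.g. `intel_iommu=on` or `amd_iommu=on`, otherwise `/sys/kernel/iommu_groups` is empty):

```shell
#!/bin/sh
# Walk /sys/kernel/iommu_groups and print each group with its PCI devices.
# Devices that share a group can only be passed through together.
for group in /sys/kernel/iommu_groups/*; do
    [ -d "$group" ] || continue          # skip silently if IOMMU is off
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # ${dev##*/} is the PCI address, e.g. 0000:01:00.0
        echo "    $(lspci -nns "${dev##*/}")"
    done
done
```

If the GPU (01:00.x) and the NIC (02:00.x) show up under the same group number, that's your conflict.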

Sent from my SM-N960U using Tapatalk


Duh, I obviously haven't had enough caffeine. Just checked and they are indeed in the same IOMMU group on this board. Thanks for the help :)

