FallingSloth Posted July 24, 2019 I have just replaced the CPU/MB in my server and now two of my VMs are acting up. The first is a pfSense VM using a four-port NIC (02:00.0 - 02:00.3); the other is a Windows 10 VM using a GPU (01:00.0 + 01:00.1). If I try to start either VM while the other is running, it throws an execution error complaining that the PCIe device assigned to it is in use by the other VM. So when booting the Windows VM, it complains that the pfSense VM is using the GPU, and vice versa. I've been over the XML for both VMs and can't find anything that would cause this; neither VM should be using the other's PCIe devices. Anyone here have any ideas?
GHunter Posted July 24, 2019 I would just create new VMs and assign the current vdisks to them. I did a huge upgrade last week myself and all but two VMs ran OK. I used this solution too and had them both back up and running in under five minutes.
thatnovaguy Posted July 24, 2019 Did you check your IOMMU groups? It could be that the PCIe slots you're wanting to use go through the chipset and are in the same group.
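For anyone else hitting this: a minimal sketch of how to inspect the groups from a shell on the host. It walks sysfs and prints every IOMMU group with its member devices; it assumes a Linux host with sysfs mounted and `lspci` (pciutils) installed.

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices that belong to it.
# Assumes /sys/kernel/iommu_groups exists (i.e. the IOMMU is enabled)
# and that lspci from pciutils is available.
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue            # nothing to list if IOMMU is off
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        # ${d##*/} strips the path, leaving the PCI address (e.g. 0000:01:00.0);
        # -nns prints the device name plus vendor:device IDs.
        echo "    $(lspci -nns "${d##*/}")"
    done
done
```

If the GPU functions (01:00.0/01:00.1) and the NIC ports (02:00.0-02:00.3) show up in the same group, they can only be passed through together to a single VM, which would produce exactly the error described above.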
FallingSloth Posted July 24, 2019 (Author) Duh, I obviously haven't had enough caffeine. Just checked, and they are indeed in the same IOMMU group on this board. Thanks for the help :)