Brad the Beast

Members
  • Posts: 22
  • Joined
  • Last visited

Brad the Beast's Achievements

Noob (1/14) • 0 Reputation

  1. I have a server with a 4-port USB 3 PCIe add-in card; each port on the card has its own controller. Two of the four controllers on the card, along with two Quadro RTX 6000 GPUs, are passed through to this particular VM. When I attempt to connect a VR headset (Varjo XR-3), one of three things happens: (a) Windows reports the device as unrecognizable ("device descriptor request failed"); (b) the attached keyboard and mouse stop working, followed by a VM crash or display flickering (in this case the mixed reality cameras report they are up at USB 3); or (c) the mixed reality cameras only come up at USB 2 instead of USB 3. They have to operate at USB 3 to work. I've done about everything I can think of to try and fix this. The VM is running Windows 10 Enterprise v22H2. The host server is running Unraid 6.11.5.
  2. I have a host running Unraid 6.9.2. I have one port from an Intel X710-T4 passed through to a VM using vfio and SeaBIOS. The problem is that when I first start the VM, the network link goes down and I get a "Media test failure. Check cable" error when it tries to PXE boot. If I wait about a minute for the link to come up and then send Ctrl+Alt+Del to reset the VM, PXE works correctly. Is there a way to configure SeaBIOS to wait before attempting to PXE boot? Based on this page I found, it seems like it might be possible (see the boot-delay sketch after this list). Any ideas?
  3. I figured it out. I had to go and enable "IOMMU" (on Intel boards the same setting is usually labeled VT-d or "Directed I/O") under an obscure submenu on the AMD CBS page of the BIOS (see the IOMMU verification sketch after this list).
  4. @jonp, thanks for getting back to me. After updating to 6.9-rc2, the same issues persist. They seem to affect only my Windows 10 VMs. I also tried the latest VirtIO drivers.
  5. So I've figured something out. The networking works fine with the GPU and the USB controller passed through. Normal behavior: VNC as primary graphics and the GPU as secondary. Strange behavior: the GPU as primary graphics. When I set the GPU to be the primary graphics card, Windows detects the NIC as if it were a different device, even though it has the same MAC address. The NIC shows up in Device Manager as the correct model, but with a "#2" or "#3" appended to the name. Windows also drops the static IP I set because it thinks it has a different NIC. If I try to change it back to VNC as primary and the GPU as secondary, the whole host crashes with a machine check exception. So when I change the primary graphics from VNC to the GPU, Unraid changes something that causes Windows to treat the NIC as a new device (see the PCI address sketch after this list). Any ideas?
  6. One of our servers has two AMD EPYC 7402 CPUs. When I go to add a VM, no PCIe devices show up: no GPUs, no USB controllers, no network cards, nothing. I can see all the devices under System Devices and when I run lspci. The USB controllers on the Nvidia GPUs have been stubbed out, as have the separate USB controller and the 10Gbps NICs. What am I missing? Is there a setting I need to enable?
  7. So, after talking with our network team, we've made a discovery. When the VMs are up, the switches report input errors whenever I try to do anything, like ping the gateway of the VLAN. When the VMs are off, the input errors stop.
  8. I want each VM to have its own 10Gbps connection. Most of my machines don't have onboard 10Gbps.
  9. I'm having a really bad time trying to get all my stuff to work, so bear with me. I have a Windows 10 VM that I'm trying to get working, with an Intel X710-T4 passed through to it. When I pass through either the USB controller or the Titan RTX, the networking works fine. But when I pass through both the GPU and the USB controller, the networking breaks. I can see that the link is up and the NIC is detected correctly, but the VM can't transmit any data. Sometimes I get the attached message and the NIC shows up with a "#2" at the end of the name in Device Manager. I haven't touched the NIC passthrough, so I don't understand what is happening. Assistance would be greatly appreciated. Hardware: Asus Z11PG-D24 motherboard; 2x Intel Xeon Gold 6238; 256GB DDR4; 2x 2TB Samsung 970 Evo Plus; 2x Nvidia Titan RTX; 2x Nvidia Tesla V100; Intel X710-T4 NIC; StarTech 4-port USB 3 PCIe card (PEXUSB3S44V); Unraid 6.8.3.
  10. I'm having trouble getting passthrough working for an Intel X710-T4 NIC. I followed this guide to set up NIC passthrough. I can see all four ports in the "other PCI devices" list and have the first port selected. I've attached screenshots from Device Manager in the VM (the first with passthrough enabled, the second with passthrough disabled). No adapters show up in the VM's network settings. When I try to install the X710 driver (granted, it's technically for Windows Server 2019), it says it can't find any Intel network adapters (see the host-side checks after this list). I'm not sure what else to do at this point.
  11. I would like to pass through a Tesla V100 compute card to a VM. Each VM will have a Titan RTX and a Tesla V100. Should I pass the V100 through as a second GPU, or stub it out and pass it through as an "other PCI device" like I do for my 10Gbps NIC?
  12. So I managed to figure it out. I had to set all the other interfaces down (even though they were disconnected) and set the interface metric for the one I wanted to 1 (see the sketch after this list). Everything seems to be working as expected now. Thanks for your help @Ford Prefect!
  13. Checked; the netmask is the same on both ends. This worked: I made sure all other interfaces are down, then set the static IP it's supposed to have and plugged the server back into the switch. It worked until I rebooted, and now it's not working again. Same issue; still can't ping the gateway.
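
Regarding post 2, a minimal boot-delay sketch: one way to make SeaBIOS pause before it tries PXE is to enable its boot menu with a long splash time, so the boot attempt is held off until the X710 link has had time to come up. This assumes the VM XML can be edited on the Unraid host; the VM name and the 60-second value are placeholders, not tested settings.

# Hypothetical sketch: have QEMU pass "-boot menu=on,splash-time=60000" to
# SeaBIOS, which makes it sit at the boot menu prompt for roughly 60 s before
# trying the first boot device (PXE in this case).
virsh edit MyWin10VM    # "MyWin10VM" is a placeholder VM name

# Fragment to merge into the VM definition: add the xmlns:qemu attribute to the
# existing <domain> tag and place the <qemu:commandline> block just before
# </domain>.
#
#   <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
#     <qemu:commandline>
#       <qemu:arg value='-boot'/>
#       <qemu:arg value='menu=on,splash-time=60000'/>
#     </qemu:commandline>
#   </domain>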
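
Regarding posts 3 and 6, an IOMMU verification sketch: these are standard Linux dmesg/sysfs checks (not Unraid-specific commands) for confirming from the console that the BIOS change actually enabled the IOMMU, which is what makes PCIe devices selectable for passthrough.

# If the BIOS setting took effect, dmesg shows the IOMMU being initialized
# (AMD-Vi on EPYC; DMAR / Intel-IOMMU on Intel boards).
dmesg | grep -i -e 'AMD-Vi' -e 'DMAR' -e 'IOMMU'

# With the IOMMU enabled, every PCI device lands in an IOMMU group; if this
# directory is empty, no passthrough candidates will appear in the VM form.
ls /sys/kernel/iommu_groups/

# List every device by group (the same information Unraid shows under System Devices).
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done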
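
Regarding post 5, a PCI address sketch: one possible explanation is that switching the primary graphics device makes Unraid regenerate the VM XML with the passed-through devices on different guest PCI slots, so Windows enumerates the NIC as a "new" adapter. The sketch below pins the NIC's guest-side address so it stays put; it assumes the XML is edited by hand afterwards, and all names and bus/slot numbers are placeholders.

# Dump the current definition and note the guest-side <address type='pci' .../>
# line libvirt generated for the NIC's <hostdev> entry.
virsh dumpxml MyWin10VM | grep -A6 '<hostdev'

# Illustration of a pinned entry: the <source> address is the host-side port of
# the X710; the trailing <address> is the guest-side slot Windows sees. Keeping
# the guest-side line identical across edits should stop Windows from detecting
# a different adapter. Both addresses below are placeholders.
#
#   <hostdev mode='subsystem' type='pci' managed='yes'>
#     <source>
#       <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
#     </source>
#     <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
#   </hostdev>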
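
Regarding post 10, a few host-side checks that can narrow down whether the selected X710 port was really handed to the VM. This assumes the port's PCI address is known from System Devices; 0000:41:00.0 below is a placeholder.

DEV=0000:41:00.0   # placeholder: first port of the X710-T4

# While the VM is running, the port should be bound to vfio-pci, not i40e.
readlink /sys/bus/pci/devices/$DEV/driver

# Every device in the port's IOMMU group has to go to the same VM; if the four
# ports (or the NIC plus something else) share one group, passing a single port
# will not work cleanly.
ls /sys/bus/pci/devices/$DEV/iommu_group/devices/

# Vendor/device IDs, useful for matching the right Intel driver in the guest.
lspci -nns $DEV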
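
Regarding post 12: the post doesn't say whether the interfaces and metric were changed on the Unraid host or inside the Windows guest. Assuming, purely for illustration, a Linux-side change, the iproute2 equivalent would look roughly like the sketch below (eth0/eth1/eth2 and the gateway address are placeholders). On a Windows guest the same idea lives in the adapter's advanced IPv4 settings: uncheck "Automatic metric" and enter 1.

# Take the unused interfaces down so they cannot win the default route.
ip link set eth1 down
ip link set eth2 down

# Give the interface that should carry traffic the lowest-cost (preferred)
# default route. 192.168.10.1 and eth0 are placeholders.
ip route replace default via 192.168.10.1 dev eth0 metric 1

# Confirm which route is actually used to reach the gateway.
ip route get 192.168.10.1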