cyleleghorn

Members
  • Posts

    3
  • Joined

  • Last visited

  1. Yes, I initially only owned the Nvidia GPU. When I couldn't get it working, I bought a $20 Radeon card, partly to get video output on the physical monitors so I could stop using VNC, and partly to try all of the methods that require two graphics cards, since my CPUs don't have integrated graphics. When only the Nvidia GPU is installed, the system boots to the Unraid console fine and displays it over HDMI to the monitor, but then, as expected, absolutely nothing happens when I try to assign that card to a VM, since it's the primary GPU. I have attached my diagnostics: unraidbeast-diagnostics-20170606-0842.zip
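     For anyone else stuck at this point: the commonly suggested way to pass a primary GPU on Unraid is to stub it with vfio-pci on the append line in syslinux.cfg, so the host console never initializes the card at boot. A sketch, assuming 10de:13c2 and 10de:0fbb are the GPU and HDMI-audio IDs a GTX 970 reports via lspci -n (substitute whatever your own lspci shows):

     # syslinux.cfg boot entry; the vendor:device IDs below are GTX 970 examples, check lspci -n
     label unRAID OS
       kernel /bzimage
       append vfio-pci.ids=10de:13c2,10de:0fbb initrd=/bzroot

     The trade-off is that the host console can no longer use that card, which is part of why a cheap second GPU is handy.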
  2. Yes; I've actually had the issue for about a month now, and all the bullet points I listed above are things I found on this site, Reddit, and various other computer sites through Google. One thing I read on a non-Unraid virtualization site was that you might get different results using the QEMU command-line arguments to pass the graphics card through, but I could never get those attempts to boot, or even find a definitive guide on how to translate the XML arguments into command-line arguments. I don't know whether that holds any weight since we're dealing with Unraid here, but I figured I would add it. (A rough sketch of the libvirt syntax for raw QEMU arguments is just below.)
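     For anyone searching later: libvirt can append raw QEMU arguments without leaving the XML, via the qemu:commandline element. A minimal sketch, assuming the goal is the often-suggested kvm=off CPU flag that hides the KVM signature from the Nvidia driver; the extra xmlns:qemu attribute on the domain tag is required, and the appended -cpu can interact with the one libvirt already generates, so treat this as illustrative rather than a drop-in:

     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <!-- rest of the domain definition unchanged -->
       <qemu:commandline>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='host,kvm=off'/>
       </qemu:commandline>
     </domain>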
  3. Hello everyone, I'm a new Unraid user as of April. I have Unix experience from the past, so the shell is not new to me. I am having problems passing through my Nvidia GTX 970 to any of the VMs I have created so far. Using the Nvidia card as the second GPU in my VM settings lets me see it in Device Manager as an Nvidia GTX 970, but with error code 43, and drivers will not install for it. I have come to understand that this is due to Nvidia detecting that the host is virtualized and disabling the card. I'll post my current Windows 10 XML at the bottom of this post, but first I will explain what I have tried so far:
     • Setting the Nvidia card as the secondary graphics card, with a Radeon card as the primary. I can boot to Unraid and get video output from VMs with the Radeon card, so having only one GPU is not my problem.
     • Dumping the card's BIOS and adding it to the XML. I also ran the BIOS file(s) through a ROM parser suggested elsewhere on the internet, just to make sure they seemed valid.
     • Using both SeaBIOS and OVMF for my VMs. Note that I have Windows fully installed on my SeaBIOS VM, but when I tested OVMF I only attempted to boot to the installation screen and, upon seeing no display output, deleted the VM and moved on to the next suggestion. If fully installing Windows through VNC would make any difference, please let me know and I will test it, but OVMF seems to be the stricter of the two firmware types when it comes to GPU passthrough, so I feel there is less chance of it working with OVMF than with SeaBIOS.
     • Disabling the hypervisor option in the advanced VM settings, as well as trying to manually switch all the hyperv settings off.
     • Adding the line <feature policy='disable' name='hypervisor'/> to my XML inside the cpu tag, right under the topology line. This made my Windows 10 VM crash to a grey screen before reaching the login screen. At first the display flickered, so I thought maybe it was detecting the multiple graphics cards, but there was still no output from the Nvidia card and the cursor was frozen in place on the Radeon. There was nothing new at the bottom of the VM log file compared to running the VM without that line.
     • Changing PCIe slots for both graphics cards, including making sure I tried the Nvidia GPU in slots controlled by each of the two CPUs.
     • Making sure that all of the virtualization settings are enabled in the host BIOS.
     • Changing the vendor id of the GPU through the VM's XML. I just changed it to "testvendor" because I didn't read that there was any specific rule to follow when making up a new id. (See the sketch just after this list for the fuller form I've seen suggested.)
     • Enabling my host motherboard's BIOS option for addressing PCIe memory above 4GB. Since the Nvidia GPU has 4GB of memory by itself and my Radeon GPU is set as the primary, it seemed to me I would need this for the system to address all the memory of both cards. No change after enabling it.
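     On the vendor id point: the form commonly recommended for error 43 pairs a spoofed Hyper-V vendor id with libvirt's KVM-hiding switch inside the features block. A sketch, assuming a libvirt recent enough to support both elements; the 12-character value is arbitrary:

     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <!-- arbitrary string, up to 12 characters, so the driver doesn't see a known hypervisor -->
         <vendor_id state='on' value='0123456789ab'/>
       </hyperv>
       <kvm>
         <!-- hides the KVM signature from the guest; equivalent to -cpu ...,kvm=off -->
         <hidden state='on'/>
       </kvm>
     </features>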
     Here is my other hardware, in case it may be relevant:
     • Dual-socket Dell Precision T7610 motherboard. It's rarely used in custom builds and required me to make some of my own power supply adapters because of Dell's proprietary power supply, but it was the cheapest dual-socket board I could find and it is a beast.
     • Two Intel Xeon E5-2670 CPUs.
     • 32GB of DDR3 ECC memory, split with 16GB in the lanes controlled by each CPU.
     • 2 SSDs, 5 other standard hard drives, and two disc drives, all over SATA.
     • WiFi card, USB hub, SATA hub, the Nvidia GTX 970, and the Radeon GPU in the PCIe slots.
     Here is my current XML for the Windows 10 VM. I inserted some comments in the relevant areas that aren't there in the actual XML. As far as I can tell this has all the correct settings I should need, but it still results in error code 43 on the Nvidia GPU in Device Manager.

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>c3bb1f68-dac8-9d06-dbbd-636f46d2272b</uuid>
  <metadata>
    <!-- My template is Windows 7 because at the time I created my Windows 10 VM, there was no Windows 10 template and the Windows 7 template was recommended online. -->
    <vmtemplate xmlns="unraid" name="Windows 7" icon="windows.png" os="windows7"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <vcpupin vcpu='4' cpuset='12'/>
    <vcpupin vcpu='5' cpuset='13'/>
    <vcpupin vcpu='6' cpuset='14'/>
    <vcpupin vcpu='7' cpuset='15'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:52:94:3a'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <!-- The Radeon GPU. -->
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <!-- The Nvidia GPU. -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <rom file='/path/to/my/bios.file'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x058f'/>
        <product id='0x6387'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0040'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1b1c'/>
        <product id='0x1b15'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>
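     For anyone who wants to reproduce the BIOS-dump step feeding the <rom file=...> line above, the usual sysfs method looks roughly like this; a sketch assuming the card sits at 0000:03:00.0 (its source address in the XML) and is not currently driving a display, with the output filename being my own invention:

     # From the Unraid console, with nothing using the card:
     cd /sys/bus/pci/devices/0000:03:00.0
     echo 1 > rom                            # make the ROM readable through sysfs
     cat rom > /mnt/user/isos/gtx970.rom     # hypothetical output path for the rom file= line
     echo 0 > rom                            # lock the ROM again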