M0CRT Posted December 25, 2019

Hi all. I've been operating an HP ML350p Gen8 server successfully for six months with a GTX 1060 GPU passed through to a Windows 10 VM. While I had to enable ACS Override and allow unsafe interrupts (even though the GPU and its HDMI audio were each in their own IOMMU group with nothing else), it did work fine... albeit, even with the override and unsafe interrupts, the VM wouldn't boot with the HDMI audio enabled (when each device was in its own IOMMU group).

Anyway, I swapped the 1060 for a 1660 Super today and am struggling to boot. With the existing ACS Override and unsafe interrupts, the GPU and associated HDMI audio are in SEPARATE IOMMU groups, along with the card's additional USB and serial bus controllers:

IOMMU group 33: [10de:21c4] 0a:00.0 VGA compatible controller: NVIDIA Corporation Device 21c4 (rev a1)
IOMMU group 34: [10de:1aeb] 0a:00.1 Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)
IOMMU group 35: [10de:1aec] 0a:00.2 USB controller: NVIDIA Corporation Device 1aec (rev a1)
IOMMU group 36: [10de:1aed] 0a:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1aed (rev a1)

The VM is hanging. I've extracted a ROM and attempted to apply it per SpaceInvaderOne's ROM extraction method; no change. I've also attempted to bind all four of these devices in the syslinux config with:

vfio-pci.ids=10de:21c4,10de:1aeb,10de:1aec,10de:1aed

No change. Any further suggestions? It seems the additional USB and serial bus controller devices on the card may be causing me some pain. Thanks in advance.

Mo
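In case it helps anyone chasing the same thing: before starting the VM, it's worth confirming that all four functions really did bind to vfio-pci by reading the "Kernel driver in use" lines from `lspci -nnk`. A minimal sketch of that check (the parsing helper and the abbreviated sample text are illustrative only, not Unraid tooling):

```python
import re

def drivers_in_use(lspci_nnk_output):
    """Map PCI addresses to the 'Kernel driver in use' reported by `lspci -nnk`."""
    drivers = {}
    current = None
    for line in lspci_nnk_output.splitlines():
        m = re.match(r"^([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\s", line)
        if m:
            current = m.group(1)
        elif current and "Kernel driver in use:" in line:
            drivers[current] = line.split(":", 1)[1].strip()
    return drivers

# Abbreviated sample output for two of the card's functions:
sample = (
    "0a:00.0 VGA compatible controller: NVIDIA Corporation Device 21c4 (rev a1)\n"
    "\tKernel driver in use: vfio-pci\n"
    "0a:00.1 Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)\n"
    "\tKernel driver in use: snd_hda_intel\n"
)
print(drivers_in_use(sample))  # → {'0a:00.0': 'vfio-pci', '0a:00.1': 'snd_hda_intel'}
```

If any of the four functions reports a host driver (e.g. snd_hda_intel or xhci_hcd) instead of vfio-pci, the vfio-pci.ids binding didn't take for that device.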
gerard6110 Posted December 26, 2019

I have the same issue, also with a GTX 1660 Super: hanging on a black screen. Worse yet, after some attempts with various changes, all of a sudden my BIOS got corrupted, disabling HVM and IOMMU. That in turn disabled the VMs tab in Unraid (logical in itself) and, worse still, disabled the "Apply" button in VM Manager — just no response, not even a page refresh! So I cannot even continue with VMs, because I cannot get VM Manager running. A separate bug report has been submitted.

I basically did all the same steps, except extracting my own ROM (I'm just using the one from TechPowerUp). Most interested in a safe solution.
M0CRT (Author) Posted December 26, 2019

I've managed to get a boot into Windows with the TechPowerUp Gigabyte Gaming OC ROM. It shouldn't boot with this ROM, but it does... unfortunately I've now got a Code 43 error. Going to attempt an extract via a live CD, because as it stands, if I attempt a `cat` ROM extract I get an input/output error. Sigh.
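One quick sanity check before pointing a VM at any ROM file, whether extracted locally or downloaded from TechPowerUp: a usable PCI expansion ROM image must begin with the 0x55 0xAA signature. A small hypothetical helper (not part of any Unraid tool) to catch a bad dump early:

```python
def looks_like_pci_rom(data: bytes) -> bool:
    """A PCI expansion ROM image must begin with the 0x55 0xAA signature."""
    return data[:2] == b"\x55\xAA"

# e.g. looks_like_pci_rom(open("vbios.rom", "rb").read())
print(looks_like_pci_rom(b"\x55\xAA" + b"\x00" * 62))  # True
print(looks_like_pci_rom(b"\x00" * 64))                # False
```

A ROM that fails this check won't necessarily explain a Code 43 on its own, but it rules out one obvious cause.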
Lucict Posted January 10, 2020

Following this. I'm having similar trouble with a GTX 1650 Super, though I haven't taken all the steps you have yet. Have you had any success booting?
M0CRT (Author) Posted January 10, 2020

Hi. I did. I found it to be a power problem, and I needed to upgrade my PSU modules from the 450W units to 750W. Ensure you are using a ROM file. Are you getting anything, i.e. just a black screen, or, like me, a Code 43 error?
Lucict Posted January 10, 2020

I don't get anything. When I try to start the VM, it hangs and won't start. This in turn hangs the VM Manager and then the whole system. I can't even restart; I have to hard-power the system off and back on. I think I'll create a separate thread for my issues. Thanks!
aaaa Posted January 3, 2021 (edited)

Hello, I wasted a lot of time solving this problem on plain KVM (not Unraid) with a Win10 guest; maybe it helps you.

1) NVIDIA GPUs from the 1000 series onward are protected against passthrough use. I flashed the BIOS on my GTX 1050 Ti to use it in KVM, but that may not be necessary.

2) My CPU is an AMD Ryzen, so my kernel parameters are a little different:

GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet elevator=deadline amd_iommu=force_isolation iommu=pt rd.driver.pre=vfio-pci vt.global_cursor_default=0 pcie_acs_override=downstream,multifunction"

3) This is the part of my KVM config that worked for me:

  ...
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
      <vendor_id state='on' value='ahb6Wah2geeb'/>
      <frequencies state='on'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
    <ioapic driver='kvm'/>
  </features>
  <cpu mode='host-passthrough' check='partial'>
    <topology sockets='1' cores='6' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
    <feature policy='disable' name='hypervisor'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  ...
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </source>
    <rom file='/opt/windows/nvidia-patched.rom'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
  </hostdev>

4) I used nvidia_vbios_vfio_patcher.py to patch my vBIOS and nvflash_linux to flash my GPU.

Edited January 3, 2021 by aaaa
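For context on step 4: as I understand it, nvidia_vbios_vfio_patcher.py essentially strips the NVIDIA vendor header that a full nvflash dump carries in front of the standard PCI ROM image, so the guest is handed a ROM that starts at the 0x55 0xAA signature. A rough sketch of that idea (a simplified illustration, not the actual script):

```python
def strip_to_pci_signature(vbios: bytes) -> bytes:
    """Drop everything before the first 0x55 0xAA PCI expansion-ROM signature."""
    offset = vbios.find(b"\x55\xAA")
    if offset < 0:
        raise ValueError("no PCI ROM signature found")
    return vbios[offset:]

# A fake dump: 16 bytes of vendor header, then a 64-byte ROM image.
dump = b"\x00" * 16 + b"\x55\xAA" + b"\x00" * 62
patched = strip_to_pci_signature(dump)
print(len(patched), patched[:2].hex())  # 64 55aa
```

The output of the real patcher is what you'd point the hostdev <rom file='...'/> element at.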