xanvincent Posted December 21, 2018

Hi all,

I'm getting this error when trying to build a Linux VM with the same settings as my (successful) Windows 10 VM. I am currently only attempting to pass through the RX 580. This works flawlessly on my Win10 VM, but every time I try to start the Linux VM, I get this error.

Here's the tail of syslog from when I started the VM:

root@iron:~# tail -f /var/log/syslog
Dec 21 11:51:05 iron avahi-daemon[5365]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
Dec 21 11:51:05 iron avahi-daemon[5365]: New relevant interface vnet1.IPv6 for mDNS.
Dec 21 11:51:05 iron avahi-daemon[5365]: Registering new address record for fe80::fc54:ff:fe50:3b5e on vnet1.*.
Dec 21 11:51:50 iron avahi-daemon[5365]: Interface vnet1.IPv6 no longer relevant for mDNS.
Dec 21 11:51:50 iron avahi-daemon[5365]: Leaving mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe50:3b5e.
Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
Dec 21 11:51:50 iron kernel: device vnet1 left promiscuous mode
Dec 21 11:51:50 iron kernel: br0: port 3(vnet1) entered disabled state
Dec 21 11:51:50 iron avahi-daemon[5365]: Withdrawing address record for fe80::fc54:ff:fe50:3b5e on vnet1.
Dec 21 11:56:43 iron login[4031]: ROOT LOGIN on '/dev/pts/1'

VM settings:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Arch</name>
  <uuid>73998f8d-71bc-b076-9372-5e2952d398b0</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Arch" icon="arch.png" os="arch"/>
  </metadata>
  <memory unit='KiB'>6291456</memory>
  <currentMemory unit='KiB'>6291456</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>5</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='10'/>
    <vcpupin vcpu='1' cpuset='11'/>
    <vcpupin vcpu='2' cpuset='12'/>
    <vcpupin vcpu='3' cpuset='14'/>
    <vcpupin vcpu='4' cpuset='15'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/73998f8d-71bc-b076-9372-5e2952d398b0_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='5' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Arch/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:50:3b:5e'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='3'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x01' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x02' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

Hardware:
AMD Ryzen 1700x
32GB DDR4
AMD RX 580
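For anyone wondering where the odd number in "Unknown PCI header type '127'" comes from: when a GPU fails to reset, every read of its config space returns 0xff, and QEMU only looks at the low 7 bits of the header-type byte. A small sketch, using this poster's 09:00.0 GPU address as an example (substitute your own from lspci):

```shell
#!/bin/bash
# Why the error says '127': a wedged card answers every config-space
# read with 0xff, and the header-type field is only the low 7 bits.
echo $(( 0xff & 0x7f ))   # prints 127

# To read the real header-type byte (config offset 0x0e) of your own
# card on the unRAID host (example address - check lspci first):
#   setpci -s 09:00.0 0e.b   # 'ff' back means the card is wedged
```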
SpaceInvaderOne Posted January 1, 2019

The error "Unknown PCI header type '127'" occurs because the card isn't resetting correctly. This is a problem that can happen with some AMD cards. However, you have no problem with the Windows VM, so why is this? Well, by default there is a difference between a Windows VM and a Linux VM: the Windows VM will use a machine type of i440fx, while the Linux VM will use a machine type of Q35. I have found that you can start and stop as many VMs as you want using machine type i440fx without this error occurring. However, if you start and stop a VM using Q35, you will get this error, and only a reboot will fix it. You could make a Linux VM using i440fx; then you could stop your Windows VM and start the Linux one without having to reboot the server.
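For reference, the machine type lives in the `<os>` section of the VM's XML (the unRAID VM form exposes it as the "Machine" dropdown). Switching to i440fx would look something like the fragment below; the version suffix varies with your unRAID/QEMU release, and the Q35-style pcie-root-port controllers in the existing XML would likely need to be removed so libvirt can regenerate addresses for the new bus layout:

```xml
<os>
  <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/73998f8d-71bc-b076-9372-5e2952d398b0_VARS-pure-efi.fd</nvram>
</os>
```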
suRe Posted January 24, 2019

Hey SpaceInvaderOne, I encountered the same problem as xanvincent. The problem is not Win10/Q35 related, since I am running a Win10 VM with Q35. If I use i440fx, I can't install the latest AMD drivers (RX 570) without the installer getting stuck (black screen), so I went with Q35 and everything works fine. In Linux, on the other hand, I can't shut down any VM (also Q35) without getting the "Unknown PCI header type '127'" error. Only a server reboot will fix this. There are several threads in this forum with this error, e.g.:

https://forums.unraid.net/topic/75597-unknown-pci-header-type-error/?tab=comments#comment-696426&searchlight=1
https://forums.unraid.net/topic/71065-internal-error-unknown-pci-header-type-127-on-rx550/?tab=comments#comment-652514&searchlight=1

So, what might fix that?
Mustangf22 Posted October 25, 2019 (edited)

So is the only fix for this problem to install a new Linux VM with i440fx? I really like my install and I don't want to have to set it all up again (that takes a long time with Linux). And it sounds like suRe is having problems getting the latest drivers running on i440fx... Any information would be much appreciated.

Edited October 25, 2019 by Mustangf22
smkings Posted June 11, 2020

I realise the topic is old, but for anyone else coming across this: setting up a VM configuration and doing all the software installs and configuration once inside the OS are two separate things. Add a new VM configuration, and in Primary vDisk Location select Manual and point it at your existing vdisk.img (i.e. the one you've spent ages working in). This is the equivalent of moving a hard drive between PCs and booting from it. It should be just as you left it.
mdrodge Posted June 25, 2020

I'm getting the same error when trying to set up a Windows VM with i440fx-4.2 using a 1080. It's the only GPU in the system (first-gen Ryzen). I've been right through SpaceInvaderOne's videos and can't see anything I'm missing. I've got my BIOS ROM edited and added, and the Windows install is a manually added drive borrowed from my Threadripper system (previously used as a VM in unRAID).
mdrodge Posted June 25, 2020

Update: changing the PCIe ACS override gets it to boot, but no image is displayed.
Maddeen Posted June 29, 2020

Is anything planned to fix this? I also get this error, but I can't switch to i440fx because my VM with an AMD Vega won't boot any more. I need to stay on Q35 -- but needing daily reboots to get my VM working again is very sad.
planetwilson Posted September 8, 2020

One of SpaceInvaderOne's videos covers a workaround that resets the graphics card without rebooting the server, using a script: https://www.youtube.com/watch?v=0uZODoPQH9c
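For anyone who can't watch the video, the general shape of that kind of workaround is to detach the wedged card from the PCI bus via sysfs and then rescan. This is only a sketch under my own assumptions, not SpaceInvaderOne's actual script, and the addresses are examples (check yours with lspci -D); run it on the host, as root, with the VM stopped:

```shell
#!/bin/bash
# Sketch of the remove/rescan trick (example PCI addresses - not the
# script from the video). Run as root on the unRAID host, VM stopped.
GPU="0000:09:00.0"        # the GPU's video function
GPU_AUDIO="0000:09:00.1"  # its HDMI audio function

remove_dev() {
  dev="/sys/bus/pci/devices/$1"
  if [ -e "$dev" ]; then
    echo 1 > "$dev/remove"          # detach the function from the bus
  else
    echo "no such device: $1"
  fi
}

remove_dev "$GPU_AUDIO"
remove_dev "$GPU"
sleep 1
if [ -w /sys/bus/pci/rescan ]; then
  # Re-enumerate the bus so the kernel re-probes the card.
  echo 1 > /sys/bus/pci/rescan 2>/dev/null || echo "rescan failed (need root on the host)"
fi
```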
audiocycle Posted February 4, 2021

On 6/25/2020 at 12:57 PM, mdrodge said:
Update: changing the PCIe ACS override gets it to boot, but no image is displayed.

I'm at this point right now, trying to get my 1050 to show an image. Did you ever get yours working?
mdrodge Posted February 4, 2021 (edited)

Yes, mine has been stable for a while. P.S. I got it stable on i440fx, but I've recently switched to Q35 because I wanted stutter-free gaming. ACS downstream and VFIO unsafe interrupts are what I was fiddling with, and I had some luck. I've still never got it working properly with only one card, and I've tried that on a few different systems over the past couple of years. Everything mentioned above is the best advice I have, though. Mine would boot unRAID then go blank, and I'd have to kill the VM and launch it again a few times from my phone to get the GPU to take over correctly. (I gave up and put a GT 710 in slot 1 so I could have full access to the GPU in slot 2.)

Edited February 4, 2021 by mdrodge
audiocycle Posted February 4, 2021

I've been trying to give my only GPU to my VM, just like you were trying originally. My server can boot with no GPU at all (including no iGPU), so I was hopeful I could get it working like that, but I'm not having any luck. When I assign the GPU to the VM it does what you described: the screen shows unRAID stuff until I boot the VM, and then it goes blank and is useless, forcing me to reboot the server. I guess I'll try a basic single-slot GPU like yours next, although I was hoping to save my very last slot for another future project.

2700, X470 Master SLI/ac, GTX 1050 is the pertinent part of my setup.
mdrodge Posted February 4, 2021

On 29/06/2020 at 7:12 PM, Maddeen said:
Is anything planned to fix this? I also get this error, but I can't switch to i440fx because my VM with an AMD Vega won't boot any more. I need to stay on Q35 -- but needing daily reboots to get my VM working again is very sad.

Are you sure that is your problem? Why daily reboots? Are you stable?
mdrodge Posted February 4, 2021 (edited)

2 minutes ago, audiocycle said:
I've been trying to give my only GPU to my VM, just like you were trying originally. My server can boot with no GPU at all, including no iGPU, so I was hopeful to get it working like that but I'm not having any luck. When I assign the GPU to the VM it does what you described; the screen shows unRAID stuff until I boot the VM and then it goes blank and is useless, forcing me to reboot the server.

Hang on, before you reboot try logging in with your phone or something and cycling just the VM.

Edited February 4, 2021 by mdrodge
audiocycle Posted February 4, 2021

1 minute ago, mdrodge said:
Hang on, before you reboot try logging in with your phone or something and cycling just the VM.

I have a computer right next to it. I tried cycling the VM a couple of times; I can remote-desktop into it, but the screen stays blank, and pressing keys on the wireless keyboard I have plugged into the add-in card allocated to the VM doesn't change anything either. I also just tried booting the server with no output cable in the GPU, to see if that would help with unRAID not taking it during boot. I also have unRAID booting with no GUI, just in case.
mdrodge Posted February 4, 2021

6 minutes ago, audiocycle said:
I have a computer right next to it. I tried cycling the VM a couple of times; I can remote-desktop into it, but the screen stays blank, and pressing keys on the wireless keyboard I have plugged into the add-in card allocated to the VM doesn't change anything either.

Are your IOMMU groups nicely split? I'd start simple, mate. First set up with just a HDD and VNC. Then, once I know that works, I'd add a GPU, and once that's working I'd go for the USB etc.
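A quick way to answer the "are the IOMMU groups nicely split" question from the host command line — a generic sysfs sketch, nothing unRAID-specific (Tools > System Devices in the unRAID GUI shows the same information):

```shell
#!/bin/bash
# Print every IOMMU group and the devices in it. Ideally the GPU and
# its HDMI audio function share a group with nothing else; strangers
# in the group are what the ACS override setting works around.

iommu_group_of() {
  # /sys/kernel/iommu_groups/<group>/devices/<addr>  ->  <group>
  basename "$(dirname "$(dirname "$1")")"
}

shopt -s nullglob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  echo "group $(iommu_group_of "$dev"): $(basename "$dev")"
done
```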
mdrodge Posted February 4, 2021 (edited)

@audiocycle When you say blank display, do you mean on the VM side or on your VNC?

Edited February 4, 2021 by mdrodge
audiocycle Posted February 4, 2021 (edited)

I did pretty much that; I had it working through VNC previously, just like my other VMs. The problems started when I tried to do more by adding a GPU. I'm not sure how I would control that VM if I didn't also pass through the USB PCIe card.

As far as IOMMU groups go, I had to do a PCIe ACS override, and now my GPU and its sound card are in two groups, but I don't believe that to be an issue. I also followed SpaceInvaderOne's recommended XML edit to put both on the same 'multifunction' slot. In any case, I believe I have found the issue; in the VM logs the last line is:

Quote
qemu-system-x86_64: vfio: Unable to power on device, stuck in D3

A Google search of that brings me to an /r/unraid thread explaining that this is caused by a buggy motherboard BIOS. That will be my new research lead for another night. I appreciate you taking the time to try and help!

edit:
Quote
When you say blank display, do you mean on the VM side or on your VNC?

I mean the screen that is plugged into the GPU assigned to the VM. VNC is disabled by unRAID since there is a GPU, but remote desktop works just fine.

Edited February 4, 2021 by audiocycle added info
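On the "stuck in D3" message: sysfs exposes each PCI device's power state, so you can check whether the card is actually asleep before launching the VM. A hedged sketch — the address is an example, and the power_state attribute only exists on reasonably recent kernels (older ones have power/runtime_status instead):

```shell
#!/bin/bash
# Report a PCI device's power state (example address - use your own).
# D0 = fully powered; D3hot/D3cold = the state vfio is complaining about.
show_power_state() {
  f="/sys/bus/pci/devices/$1/power_state"
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo "unknown (no power_state attribute for $1 here)"
  fi
}
show_power_state 0000:09:00.0
```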
mdrodge Posted February 5, 2021

In remote desktop, did everything look OK in the hardware manager?
audiocycle Posted February 11, 2021

Yeah, everything is updated with a working driver there. I'm going to try a BIOS update, since that seems to fix error code 127 and the D3 error for a lot of people. My only worry is that ASRock has no BIOS with AGESA 1.0.0.4, which they recommend for my 2700.
DesertCookie Posted September 19, 2022

This happens with my Nvidia GTX 1650 too.
Gee1 Posted January 21

I have a Tesla P100 in a Debian 12 VM with Q35-7.1 and get the same error 127. I must restart the entire server to fix it. Annoying.