bobbintb Posted February 4, 2022

I have a GTX 1060 as my primary card. I tried to add a 560, but it was too old and needed an older driver, and I couldn't get the two to work in the same computer. I replaced it with a 1050 Ti, but I get "Failed to mmap 0000:21:00.0 BAR 3. Performance may be slow". I have the IOMMU groups set up just like before with the 560. I added video=efifb:off to syslinux.cfg and got it to start without the error, but the screen stays black and never shows anything. From what I understand, that means I need a vbios. I dumped it with SpaceInvader One's script, but it's still a black screen. The vbios is around 50k, and the script warned me that's unusually small. I also tried downloading one from TechPowerUp and editing it, but I get the same result. Any ideas?

tower-diagnostics-20220203-1745.zip

Edited February 5, 2022 by bobbintb
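[Editor's note: one way to sanity-check a dump like this is the PCI expansion ROM header itself: a valid vbios image starts with the magic bytes 0x55 0xAA, and byte 2 of the header declares the size of the first image in 512-byte blocks. The helper below is a hypothetical sketch (not part of any Unraid or SpaceInvader One tooling) that checks those two things; a Pascal-era vbios is typically well over 50 KB, so a tiny or signature-less file usually means a bad read.]

```python
# Sanity-check a dumped vbios file: verify the PCI expansion ROM
# signature (0x55 0xAA at offset 0) and compare the declared size of
# the first ROM image (byte 2 * 512) against the actual file size.
import sys

def check_vbios(path):
    with open(path, "rb") as f:
        rom = f.read()
    if len(rom) < 3 or rom[0] != 0x55 or rom[1] != 0xAA:
        return "no PCI expansion ROM signature (0x55 0xAA) at offset 0"
    declared = rom[2] * 512  # byte 2 = image size in 512-byte blocks
    if len(rom) < declared:
        return f"file is {len(rom)} bytes but header declares {declared}"
    return f"signature OK, declared image size {declared} bytes"

if __name__ == "__main__" and len(sys.argv) > 1:
    print(check_vbios(sys.argv[1]))
```

Note this only validates the first image in the ROM; modern cards usually carry a second UEFI (GOP) image after it, so the full file is normally larger than the first image's declared size.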
bobbintb Posted February 7, 2022

@SpaceInvaderOne, I hate to tag you personally but I'm all out of ideas.
ghost82 Posted February 10, 2022

On 2/4/2022 at 1:55 AM, bobbintb said:
"Failed to mmap 0000:21:00.0 BAR 3. Performance may be slow"

Your diagnostics are from before applying the video=efifb:off kernel parameter. In the logs this is the first issue; boot with video=efifb:off and attach new diagnostics to see if there's anything useful.
bobbintb Posted February 10, 2022 Author Share Posted February 10, 2022 (edited) tower-diagnostics-20220210-1122.zipSorry about that. It's not giving the mmap error anymore since adding that line. But it still won't start, it's just a black screen. Edited February 10, 2022 by bobbintb Quote Link to comment
ghost82 Posted February 11, 2022

13 hours ago, bobbintb said:
But it still won't start, it's just a black screen

In your latest diagnostics I can't see video=efifb:off anywhere; this is your cmdline:

BOOT_IMAGE=/bzimage pcie_acs_override=downstream,multifunction initrd=/bzroot

Where did you apply it? Also, I can see only the 1060 passed through in your Windows 11 VM (and the settings are wrong, because the GPU is not set up as multifunction), not the 1050. What's your goal?
bobbintb Posted February 11, 2022

8 hours ago, ghost82 said:
Where did you apply it? Moreover I can see only a gpu passthrough for the 1060 in your windows 11 vm [...] not the 1050, what's your goal?

I applied it in the GUI under the boot device. I use the VM daily, which is why the 1050 is not in the VM at the moment; if I had it passed through, I wouldn't be able to use it. I don't know what you mean by it not being multifunction. My goal is to have both cards in one VM.
ghost82 Posted February 11, 2022

1 minute ago, bobbintb said:
under the boot device

Same line:

append pcie_acs_override=downstream,multifunction video=efifb:off initrd=/bzroot

Using another setup and attaching diagnostics from it really doesn't help.
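[Editor's note: for anyone following along, the resulting boot stanza in /boot/syslinux/syslinux.cfg would look roughly like the sketch below, assuming the stock Unraid "Unraid OS" label; your other append options may differ.]

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction video=efifb:off initrd=/bzroot
```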
bobbintb Posted February 11, 2022

Ok, I changed the line and the setup, and took new diagnostics.

tower-diagnostics-20220211-1527.zip

Edited February 11, 2022 by bobbintb
ghost82 Posted February 12, 2022

Windows 11 VM: multifunction is not applied correctly. Replace this:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x21' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x21' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>

with this:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x21' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x21' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </hostdev>

Other things to consider (in order):
- consider setting up the VM with VNC, enabling Remote Desktop INSIDE the VM, switching from VNC to GPU passthrough, and connecting to the VM remotely with Remote Desktop to see if your GPUs are detected, detected with errors, or not detected; maybe you only need to install nvidia drivers (download and install the latest version)
- you may also need to pass vbioses for your GPUs (dump your own)
- you may consider switching from i440fx/ovmf to q35/ovmf for better passthrough support
- sometimes, in some builds, passing through the NVMe controller doesn't play well with passing through the GPU(s)

Edited February 12, 2022 by ghost82
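[Editor's note: dumping your own vbios by hand, which is roughly what the SpaceInvader One script automates, can be done through sysfs as sketched below. The 0000:21:00.0 address and output filename come from this thread and are illustrative; this must run as root, and generally only works cleanly on a card the host isn't actively using for display.]

```shell
# Enable the ROM read for the GPU at 0000:21:00.0, copy it out,
# then disable the read again.
echo 1 > /sys/bus/pci/devices/0000:21:00.0/rom
cat /sys/bus/pci/devices/0000:21:00.0/rom > /boot/vbios-1050ti.rom
echo 0 > /sys/bus/pci/devices/0000:21:00.0/rom
```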
bobbintb Posted February 13, 2022

I fixed the multifunction in the XML, but I still get no output. I tried VNC, but the VNC display just stays black if I have both cards. I tried my vbios again, same thing. As mentioned in the OP, I'm not sure it dumped right because it's really small. When I get the time (maybe tomorrow), I'll have to try putting the card in another machine and dumping it in Windows or something. I don't think q35 will work. I don't recall why exactly, but I think I couldn't get my NVMe controller to pass through with it, and I kind of need that for my SSD.
bobbintb Posted February 19, 2022

edit: I finally got it working. I guess the issue was that the script wasn't dumping the vbios properly. I put the card in a Windows machine, used GPU-Z to dump the bios, and edited it. It worked fine after that. Thanks for the help. I should have figured that out, but I didn't think the vbios was the problem.