domrockt Posted March 31, 2021 (edited)

Hello there, I am trying to activate the Resizable BAR option in my Windows 10 VM. I have an Asus Sage C621E and a beta BIOS with Resizable BAR enabled, and on bare-metal Windows 10 I can enable and use it. But not so with Unraid.

What did I try? I pulled the new VBIOS from my bare-metal Windows 10 install and copied it to my ISOs folder, and I can use it. My VM boots with CSM enabled with the new VBIOS. My VM can boot without CSM with the old VBIOS, but not with the Resizable BAR VBIOS.

As a side note: since the new Nvidia drivers are out, I no longer need to add multifunction='on' or correct the bus and device numbering 1:1 for the GPU.

Is anyone else on this?

Regards, Dom

It seems that Resizable BAR is hidden from the guest, so maybe this will be added later.

With CSM disabled I get this error:

2021-03-31T09:30:39.219757Z qemu-system-x86_64: -device vfio-pci,host=0000:5e:00.0,id=hostdev0,bus=pci.4,multifunction=on,addr=0x0: Failed to mmap 0000:5e:00.0 BAR 1.
Performance may be slow

My newest VM XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='2'>
  <name>Windows 10</name>
  <uuid>289985a2-2b40-e21d-ea93-3fb675330dc9</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>20</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='42'/>
    <vcpupin vcpu='1' cpuset='94'/>
    <vcpupin vcpu='2' cpuset='43'/>
    <vcpupin vcpu='3' cpuset='95'/>
    <vcpupin vcpu='4' cpuset='44'/>
    <vcpupin vcpu='5' cpuset='96'/>
    <vcpupin vcpu='6' cpuset='45'/>
    <vcpupin vcpu='7' cpuset='97'/>
    <vcpupin vcpu='8' cpuset='46'/>
    <vcpupin vcpu='9' cpuset='98'/>
    <vcpupin vcpu='10' cpuset='47'/>
    <vcpupin vcpu='11' cpuset='99'/>
    <vcpupin vcpu='12' cpuset='48'/>
    <vcpupin vcpu='13' cpuset='100'/>
    <vcpupin vcpu='14' cpuset='49'/>
    <vcpupin vcpu='15' cpuset='101'/>
    <vcpupin vcpu='16' cpuset='50'/>
    <vcpupin vcpu='17' cpuset='102'/>
    <vcpupin vcpu='18' cpuset='51'/>
    <vcpupin vcpu='19' cpuset='103'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/289985a2-2b40-e21d-ea93-3fb675330dc9_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='10' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/nvme1n1' index='3'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/RAID/Windows 10/vdisk2.img' index='2'/>
      <backingStore/>
      <target dev='hdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <alias name='pci.7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:36:cc:9b'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x5e' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x5e' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

Log files from the Windows 10 VM (the start of the QEMU command line is truncated):

nodefaults \
-chardev socket,id=charmonitor,fd=31,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 \
-device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 \
-device pcie-root-port,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2 \
-device pcie-root-port,port=0xb,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x3 \
-device pcie-root-port,port=0xc,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x4 \
-device pcie-root-port,port=0xd,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x5 \
-device pcie-root-port,port=0xe,chassis=7,id=pci.7,bus=pcie.0,addr=0x1.0x6 \
-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
-blockdev '{"driver":"host_device","filename":"/dev/nvme1n1","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-2-format,id=virtio-disk2,bootindex=1,write-cache=on \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.190-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.1,drive=libvirt-1-format,id=sata0-0-1 \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:36:cc:9b,bus=pci.1,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device vfio-pci,host=0000:5e:00.0,id=hostdev0,bus=pci.4,multifunction=on,addr=0x0 \
-device vfio-pci,host=0000:5e:00.1,id=hostdev1,bus=pci.4,addr=0x0.0x1 \
-device vfio-pci,host=0000:04:00.0,id=hostdev2,bus=pci.6,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on

2021-03-31 09:30:37.042+0000: Domain id=1 is tainted: high-privileges
2021-03-31 09:30:37.042+0000: Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2021-03-31T09:30:39.219757Z qemu-system-x86_64: -device vfio-pci,host=0000:5e:00.0,id=hostdev0,bus=pci.4,multifunction=on,addr=0x0: Failed to mmap 0000:5e:00.0 BAR 1.
Performance may be slow

Edited March 31, 2021 by domrockt
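A quick way to see whether a GPU (on the host, or as seen from inside the guest) actually exposes the Resizable BAR capability is to look at it with lspci. The sketch below is not from the thread; the 5e:00.0 address is taken from the post above, and the exact capability text depends on your pciutils version:

```shell
# Filter the "Resizable BAR" capability lines out of an `lspci -vv` dump.
# Reads the dump on stdin, so the parsing can be exercised without hardware.
rebar_state() {
  grep -A 3 -i 'Resizable BAR' || echo "no Resizable BAR capability visible"
}

# Typical usage on the host (root is needed for the full capability dump):
#   lspci -vvs 5e:00.0 | rebar_state
# The same command inside the guest shows whether QEMU passed the capability on.
```

If the host shows the capability but the guest does not, the BAR is being hidden by the hypervisor rather than by the BIOS or VBIOS.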
loukaniko85 Posted August 18, 2021

Did you get this working?
zeus83 Posted October 25, 2021

Resizable BAR is not virtualized so far. It's disabled in QEMU: https://github.com/qemu/qemu/commit/3412d8ec9810b819f8b79e8e0c6b87217c876e32
alturismo Posted October 25, 2021

On 3/31/2021 at 11:02 AM, domrockt said:
My VM can boot without CSM with the old VBIOS but not with the Resizable BAR VBIOS.

You may try without the VBIOS. I hope you also flashed the BIOS directly onto the card ... the whole chain needs to be UEFI, CSM/legacy is not supported, and Above 4G Decoding should also be enabled in the BIOS.

1 hour ago, zeus83 said:
Resizable bar is not virtualized so far. It's disabled in QEMU:

I can't confirm that here ... my gaming VM runs an RTX 3070 with rBAR enabled.
zeus83 Posted October 30, 2021

On 10/25/2021 at 11:25 AM, alturismo said:
may try without the vbios, i hope you flashed the bios also directly into the card ... the whole chain needs to be uefi, csm/legacy is not supported, also above4g decoding should be enabled in the bios. i cant confirm this here ... my Gaming VM with RTX3070 with enabled rBar

How did you manage to do this? Whenever I enable Resizable BAR support in the BIOS, my VM starts with a black screen; no video output at all on my 6600 XT.
alturismo Posted October 30, 2021 Share Posted October 30, 2021 4 hours ago, zeus83 said: How did you manage to do this ? Whenever I enable resizable bar support in bios my VM starts with black screen, no video output at all on my 6600 XT. well, i had todo the following cause my hardware was not rBar ready ... - flash the GPU with the rBar BIOS (came later for my 3070) - flash the Mainboard with a rBar supported BIOS - set in BIOS rBar enabled (also 4g decoding enabled, comes ONLY together) - switch BIOS to boot unraid in uefi mode (before, set unraid to boot in uefi mode) that was it ... nothing special todo in the VM then ... but i wonder now, you compare this now with an AMD Device ? ... i mean its known that AMD and VM's are not always the best combo (sadly) Quote Link to comment
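The UEFI step in the list above can be double-checked from a shell on the host. This is a generic Linux check, not something Unraid-specific; the path is parameterised here only so the logic is testable:

```shell
# A Linux system booted via UEFI exposes /sys/firmware/efi; a legacy/CSM boot
# does not create that directory at all.
boot_mode() {
  if [ -d "${1:-/sys/firmware/efi}" ]; then
    echo "UEFI"
  else
    echo "legacy (CSM)"
  fi
}

# On the host:
#   boot_mode    # should report "UEFI" before rBAR can work end to end
```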
zeus83 Posted October 30, 2021

Yes, I only have the 6600 XT at my disposal right now. It works pretty well in a VM. However, I set up everything you did, and once rBAR is enabled in the BIOS it's a black screen when I start my VM. There are no errors in the logs. I've also noticed that the VM manager stops after I shut down the VM; maybe there is some critical issue with that.
KptnKMan Posted March 5, 2022

On 10/30/2021 at 6:50 PM, alturismo said:
well, i had todo the following cause my hardware was not rBar ready ...
- flash the GPU with the rBar BIOS (came later for my 3070)
- flash the Mainboard with a rBar supported BIOS
- set in BIOS rBar enabled (also 4g decoding enabled, comes ONLY together)
- switch BIOS to boot unraid in uefi mode (before, set unraid to boot in uefi mode)

I've been on a mission to get UEFI and rBAR working these last few weeks. I did everything above before reading all of this, and confirmed everything in a bare-metal install (also so I could use NVFlash to back up the old and new BIOS). After all of this, I still get a black screen when booting the VM that works perfectly fine with CSM enabled.

Interestingly, I also find that I am unable to boot into "Unraid GUI mode" when booting Unraid with UEFI enabled. When I boot "Unraid GUI mode", everything appears to boot correctly, and at the last moment, when the screen goes black to show the GUI, all I see is a flashing terminal cursor in the top left corner of the screen. Normally the GUI would boot and display the graphical login prompt. Has anyone else experienced this, or does anyone understand why it happens? It happens before booting any VM, and VMs also boot to a black screen when they would normally boot OK.

On 3/31/2021 at 11:02 AM, domrockt said:
with CSM disabled i get this error 2021-03-31T09:30:39.219757Z qemu-system-x86_64: -device vfio-pci,host=0000:5e:00.0,id=hostdev0,bus=pci.4,multifunction=on,addr=0x0: Failed to mmap 0000:5e:00.0 BAR 1. Performance may be slow

I'm also seeing this in my logs with CSM disabled. I was going to start a new thread summarising my experience, but I don't know if maybe we should try to work together on it here?
KptnKMan Posted October 6, 2022 (edited)

I'm just adding my plea again, in case anyone knows anything at all about how to resolve this. Is there anything in the just-released Unraid 6.11.0 that helps with this?

Also, I am not aware of, nor could I find, any hardware-specific oddities regarding enabling ReBAR without CSM and the black-screen issues; is there something anyone might be able to highlight? Is the QEMU read-only ReBAR issue only applicable to certain hardware or AMD-only setups, or is it something else?

If I may, @alturismo, what hardware are you using? An Intel CPU? I have been trying to get this to work for a long time now, and most other things work just fine, just not this. My hardware is in my sig; I'm not sure what the issue is.

Edited October 6, 2022 by KptnKMan
alturismo Posted October 6, 2022

3 hours ago, KptnKMan said:
If I may, @alturismo what hardware are you using? Intel CPU?

Yes, an Intel CPU, and I have since upgraded the GPU as well: an i9-10850K, an RTX 3080 Ti (up from the RTX 3070), and a GTX 1060. All fine here.
KptnKMan Posted October 8, 2022 (edited)

Thanks for the response. I'm still struggling to get any VMs, or even "Unraid with GUI", to start (the server starts, but the local GUI shows a black screen with a blinking cursor). Booting a VM just shows a black screen. I see this in the logs when I start a VM:

Oct 8 19:01:50 unraid1 kernel: vfio-pci 0000:0c:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered blocking state
Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered disabled state
Oct 8 19:01:50 unraid1 kernel: device vnet0 entered promiscuous mode
Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered blocking state
Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered forwarding state
Oct 8 19:01:52 unraid1 avahi-daemon[16989]: Joining mDNS multicast group on interface vnet0.IPv6 with address ipv6addresshere.
Oct 8 19:01:52 unraid1 avahi-daemon[16989]: New relevant interface vnet0.IPv6 for mDNS.
Oct 8 19:01:52 unraid1 avahi-daemon[16989]: Registering new address record for ipv6addresshere on vnet0.*.
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x26@0xc1c
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x27@0xd00
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x25@0xe00
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: BAR 1: can't reserve [mem 0x7000000000-0x77ffffffff 64bit pref]
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.1: enabling device (0000 -> 0002)
Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.1: vfio_ecap_init: hiding ecap 0x25@0x160

Does anyone have an idea what's going on or where I can possibly investigate?
On 10/30/2021 at 6:50 PM, alturismo said:
- flash the GPU with the rBar BIOS (came later for my 3070)
- flash the Mainboard with a rBar supported BIOS
- set in BIOS rBar enabled (also 4g decoding enabled, comes ONLY together)
- switch BIOS to boot unraid in uefi mode (before, set unraid to boot in uefi mode)

All of these are set and enabled. If I leave everything the same and boot Unraid in non-UEFI mode, the local GUI and VMs work, but ReBAR is not enabled.

Edited October 8, 2022 by KptnKMan
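The "BAR 1: can't reserve [mem 0x7000000000-0x77ffffffff ...]" kernel line in the log above usually means something on the host still owns that memory window; /proc/iomem shows who. A common cause (an assumption here, not confirmed for this setup) is the boot framebuffer (shown as efifb or BOOTFB) still sitting on the passed-through GPU:

```shell
# Show who currently owns the memory range the vfio log complains about.
# Reads /proc/iomem contents on stdin; $1 is the address prefix to look for
# (7000000000 is the range from the log above; substitute your own).
bar_owner() {
  grep -i "$1" || echo "range $1 not found in iomem"
}

# On the host (root is needed to see the real addresses):
#   sudo cat /proc/iomem | bar_owner 7000000000
# If the owner is "BOOTFB" or "efifb", adding video=efifb:off to the kernel
# command line is a commonly reported workaround; verify for your kernel.
```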
SimonF Posted October 8, 2022 (edited)

19 minutes ago, KptnKMan said:
Thanks for the response. I'm still struggling with getting any VMs or even "Unraid with GUI" to start (the server starts, but the local GUI shows a black screen with blinking cursor). Booting a VM just shows a black screen. I see this in the logs when I start a VM: ...

What is your primary GPU?

Edited October 8, 2022 by SimonF
KptnKMan Posted October 8, 2022

4 minutes ago, SimonF said:
What is your primary gpu?

I'm using a Gigabyte RTX 3090 Turbo 24G. Full system specs are in my signature; this is UNRAID1. I flashed this card with the updated UEFI BIOS some time ago, and I have a dumped and hexed BIOS of the same card that I use to boot VMs.
KptnKMan Posted February 15, 2023

I don't know if anyone in this thread is still waiting, but another thread with a backported kernel 6.1 patch made ReBAR work for me on my primary setup. Link here to what I did to get it working. Hopefully this can be added to Unraid in the future; I'm super stoked about it.
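Once a setup like the one above is working, there is a quick guest-side sanity check for Nvidia cards: nvidia-smi reports the BAR1 size, and with ReBAR active on a large card the total should be far bigger than the classic 256 MiB window. A small parsing sketch (the nvidia-smi output layout may vary between driver versions):

```shell
# Extract the BAR1 total from `nvidia-smi -q -d MEMORY` output (stdin).
# A 256 MiB total means the BAR was not resized; multi-GiB means rBAR is active.
bar1_total() {
  awk '/BAR1 Memory Usage/ {in_bar=1}
       in_bar && /Total/   {print $(NF-1), $NF; exit}'
}

# Usage (inside the guest, or on a bare-metal host for comparison):
#   nvidia-smi -q -d MEMORY | bar1_total
```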