VM GPU passthrough resizable BAR support in 6.1 kernel


Recommended Posts

I second this. Resizable BAR is now being leveraged by AMD, Nvidia, and Intel. More games are seeing massive performance improvements with its implementation. Please make this a priority. I would love to see a beta on kernel 6.1 so we can start playing around with this.

Link to comment

I can confirm the above patch works. Below is a script to unbind my gpu from vfio, set the bar size to 8GB, and rebind it to vfio:

 

#!/bin/bash
GPU=0e:00
GPU_ID="1002 744c"
# unbind the gpu from vfio-pci so the BAR can be resized
echo -n "0000:${GPU}.0" > /sys/bus/pci/drivers/vfio-pci/unbind || echo "Failed to unbind gpu from vfio-pci"
# write the new size as a power of two in MB: 13 = 2^13 MB = 8GB
echo 13 > "/sys/bus/pci/devices/0000:${GPU}.0/resource0_resize"
# rebind to vfio-pci by device id
echo -n "$GPU_ID" > /sys/bus/pci/drivers/vfio-pci/new_id

 

The bar size goes from 256MB to 8GB, and stays there after rebinding and passing through the gpu.
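
A quick host-side check that the resize stuck (using the same 0e:00 address as the script; lspci needs -vv to print the capability):

lspci -vvs 0e:00.0 | grep -A 3 'Resizable BAR'

The "Physical Resizable BAR" section should now report BAR 0 at 8GB.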

 

HOWEVER, the memory range in Windows is unchanged and I'm seeing no gains in Forza 5 (known to benefit from resizable bar).

 

Edit: I rebooted the VM, and for whatever reason on the second boot the gpu was assigned the proper 8GB address range in Windows. This time I did see a performance change... the fps in Forza 5 went DOWN from 114 to 101 fps. Rebooted Unraid, the bar went back to 256MB, and fps went back to 114.

 

Time to play around with different bar sizes and see what happens.

Edited by Skitals
  • Thanks 1
Link to comment

Some interesting testing and a roadblock.

 

When resize bar is enabled and I boot windows on baremetal, the bar is being sized to the max 32GB/256MB. This sees HUGE performance gains in Forza Horizon 5.

 

Resize bar off: 113fps

Resize bar on: 139fps

 

With the resize bar patch I can easily resize the bars to the max, however passthrough fails when bar 0 is set to 32GB. I don't get any errors, it just doesn't work. I can set bar 0 to any other size and passthrough works.

 

With the bars set to 16GB/256MB I'm getting 114fps and improved latency. Really frustrated I can't get the vm to boot when set to 32GB to match baremetal.

  • Thanks 1
Link to comment
12 minutes ago, Skitals said:

Some interesting testing and a roadblock.

 

When resize bar is enabled and I boot windows on baremetal, the bar is being sized to the max 32GB/256MB. This sees HUGE performance gains in Forza Horizon 5.

 

Resize bar off: 113fps

Resize bar on: 139fps

 

With the resize bar patch I can easily resize the bars to the max, however passthrough fails when bar 0 is set to 32GB. I don't get any errors, it just doesn't work. I can set bar 0 to any other size and passthrough works.

 

With the bars set to 16GB/256MB I'm getting 114fps and improved latency. Really frustrated I can't get the vm to boot when set to 32GB to match baremetal.

Maybe the virtio driver doesn't support 32GB yet?

Link to comment
1 hour ago, SimonF said:

Maybe the virtio driver doesn't support 32GB yet?

 

I think I found the problem. Looks like the default OVMF PCI MMIO allocation is 32GB, hence there isn't enough space for the GPU's 32GB BAR plus other devices.

 

Edit: Adding the xml from the link above, I was able to pass the gpu with a 32GB bar, but windows is only seeing 256MB. This is not unlike the first time I passed through with the 8GB bar, where subsequent reboots showed the full bar in windows. Except at 32GB it will not reboot, and getting it to pass through even once is finicky.
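
For reference, the xml in question (spelled out in full later in this thread) raises the OVMF 64-bit MMIO window through a fw_cfg option; the value is in MB, so string=65536 requests a 64GB window:

  <qemu:commandline>
    <qemu:arg value='-fw_cfg'/>
    <qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
  </qemu:commandline>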

Edited by Skitals
Link to comment
1 hour ago, Skitals said:

 

I think I found the problem. Looks like the default OVMF PCI MMIO allocation is 32GB, hence there isn't enough space for the GPU's 32GB BAR plus other devices.

 

Edit: Adding the xml from the link above, I was able to pass the gpu with a 32GB bar, but windows is only seeing 256MB. This is not unlike the first time I passed through with the 8GB bar, where subsequent reboots showed the full bar in windows. Except at 32GB it will not reboot, and getting it to pass through even once is finicky.

I think the qemu options changed.
 
options here
 

 

Edited by SimonF
  • Like 1
Link to comment

I ditched the 7900 XTX for a 4070 Ti and I'm seeing encouraging results.

 

Forza Horizon 5 1440p Extreme (No DLSS)

VM 256MB Bar: 116 fps

VM 16GB Bar: 129 fps

Baremetal Rebar Off: 127 fps

Baremetal Rebar On: 144 fps

 

Cyberpunk 1440p RT Ultra (DLSS):

VM 256MB Bar: 81 fps

VM 16GB Bar: 95 fps

Baremetal Rebar Off: 94 fps

Baremetal Rebar On: 102 fps

 

Of note: I went from testing in bare metal Windows (with rebar on) to rebooting into Unraid, and the bar size was still set to 16GB. I'm not sure what negotiated that, or if it just remembers the last setting.

 

Setting the bar to 16GB basically brings it to parity with bare metal when rebar is OFF. Still a big performance delta, which I think is largely due to fewer cpu cores in the VM. Time Spy Graphics scores are identical in VM and baremetal (23k).

 

 

 

  • Thanks 1
Link to comment

I did a little more testing. It looks like the patch isn't needed with the RTX 40 Series.

 

If I boot unraid with Resize Bar Support OFF, bar size is 256MB.

If I boot unraid with Resize Bar Support ON, bar size is the max 16GB even when the gpu is bound to vfio on boot.

 

With the patch I can leave Resize Bar Support OFF (but Above 4G Decoding ON) and manually resize the bar and have the same effect.

 

This is another huge win for Nvidia over the AMD XTX, which I couldn't pass through at all when Resize Bar Support was enabled in the bios.

 

Same as resizing the bar manually, sometimes I need to reboot the windows vm to see the larger bar. In Device Manager, select your GPU, open the Resources tab, and look for a "Large Memory Range".

Link to comment

I was curious so I did a few benchmarks passing all 32 cores/threads to the VM.

 

Forza Horizon 5 1440p Extreme (No DLSS)

VM Rebar OFF 20cpu: 116 fps

VM Rebar ON 20cpu: 129 fps

VM Rebar ON 32cpu: 134 fps

Bare metal Rebar ON: 144 fps

 

Cyberpunk 1440p RT Ultra (DLSS):

VM Rebar OFF 20cpu: 81.07 fps

VM Rebar ON 20cpu: 95.29

VM Rebar ON 32cpu: 98.26

Bare metal Rebar ON: 102.21

 

That's pretty dang close to bare metal performance with full resizable bar, given the extra overhead from unraid and vfio.

 

Hitting 129 fps in the vm in Forza is amazing, given that with the 7900XTX I could never beat 114 fps with identical settings.

 

 

  • Like 4
  • Thanks 1
Link to comment
  • 2 weeks later...
On 1/6/2023 at 11:30 PM, Skitals said:

I did a little more testing. It looks like the patch isn't needed with the RTX 40 Series.

 

Tried your patch today with my ASUS KO RTX 3070 and a Windows 10 VM.  GPU-Z is still reporting Resizeable Bar as Disabled.

 

Was there any additional setup needed to set the initial state of the Bar or should it be on by default with the patched kernel?

Link to comment
9 hours ago, JesterEE said:

 

Tried your patch today with my ASUS KO RTX 3070 and a Windows 10 VM.  GPU-Z is still reporting Resizeable Bar as Disabled.

 

Was there any additional setup needed to set the initial state of the Bar or should it be on by default with the patched kernel?

 

The patch allows you to manipulate the bar size through sysfs. See the example of how to use it in this reddit post.

 

This is the User Script I created to set the bar size to 16GB. You would obviously need to tweak this to the proper bar size, PCI address, folder path, device id, etc.

 

#!/bin/bash
# unbind the gpu from vfio-pci so the BAR can be resized
echo -n "0000:0d:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
# 14 = 16GB (the value is the size as a power of two in MB; see the table below)
echo 14 > /sys/bus/pci/devices/0000\:0d\:00.0/resource1_resize
# rebind to vfio-pci by device id, falling back to an explicit bind
echo -n "10de 2782" > /sys/bus/pci/drivers/vfio-pci/new_id || echo -n "0000:0d:00.0" > /sys/bus/pci/drivers/vfio-pci/bind


# Bit Sizes
# 1 = 2MB
# 2 = 4MB
# 3 = 8MB
# 4 = 16MB
# 5 = 32MB
# 6 = 64MB
# 7 = 128MB
# 8 = 256MB
# 9 = 512MB
# 10 = 1GB
# 11 = 2GB
# 12 = 4GB
# 13 = 8GB
# 14 = 16GB
# 15 = 32GB
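
The pattern in the table: the value you write is the size as a power of two in MB. A quick sanity check in bash:

# size in MB for a given bit value is 2^bit
for bit in 8 13 14 15; do echo "$bit -> $(( 1 << bit )) MB"; done
# 8 -> 256 MB, 13 -> 8192 MB, 14 -> 16384 MB, 15 -> 32768 MB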

 

As stated, the patch and the script work, but they were ultimately unneeded with my 4070 Ti since passthrough works when I have Above 4G and Resize Bar enabled in my bios. With those enabled the 4070 Ti defaults to a 16GB bar size, so there is no need to manually change it. Other setups/GPUs will give a code 43 or not pass through at all with Resize Bar enabled in the bios, which is where this patch would be helpful to manually resize the bar. In that case I'm not sure what GPU-Z would report. The only way to know if it's making a difference is to benchmark a game known to benefit from resize bar with and without manipulating the bar size.

Edited by Skitals
  • Thanks 3
Link to comment

Also, you can check if windows is using the larger bar size from Device Manager. Open the GPU and go to the Resources tab. Look for a "Large Memory Range". You can calculate the size by subtracting the first number in the range from the second in a hex calculator and adding one (the range is inclusive). For me that is 13FFFFFFFF - 1000000000 + 1 = 17179869184 bytes = 16 gigabytes.
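
If you'd rather not reach for a hex calculator, shell arithmetic does the same job (range values from my card above):

printf '%d bytes\n' $(( 0x13FFFFFFFF - 0x1000000000 + 1 ))
# 17179869184 bytes = 16GB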

Link to comment

@Skitals So I tried the script commands you specified in your previous post, but got stuck when actually sizing the ReBar with:

 

# echo 14 > /sys/bus/pci/devices/0000\:0d\:00.0/resource1_resize
# -bash: echo: write error: Device or resource busy

 

Did some searching and I couldn't find a way to correct this. Not looking for tech support necessarily, just reporting my experience.

 

On my system, the video card is bound to VFIO and the system is booting with a syslinux config including

... video=efifb:off ...
Link to comment
46 minutes ago, JesterEE said:

@Skitals So I tried the script commands you specified in your previous post, but got stuck when actually sizing the ReBar with:

 

# echo 14 > /sys/bus/pci/devices/0000\:0d\:00.0/resource1_resize
# -bash: echo: write error: Device or resource busy

 

Did some searching and I couldn't find a way to correct this. Not looking for tech support necessarily, just reporting my experience.

 

On my system, the video card is bound to VFIO and the system is booting with a syslinux config including

... video=efifb:off ...

 

Is the pci address of your GPU really the same as mine? The GPU needs to be unbound. The previous line in my script unbinds it from vfio, but it also can't be bound to an Nvidia driver or anything else. I also believe you need Above 4G Decoding enabled in your bios, but I'm not sure if that would give you the error you are seeing. 
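
An easy way to check what the card is currently bound to (substituting your own 0b:00.0 address) is lspci; the "Kernel driver in use" line should be gone after a successful unbind:

lspci -ks 0b:00.0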

 

Before referencing my script you should really read and understand the reddit post I linked. You need to be able to read and manipulate the bar sizes on your own because the addresses, paths, number of bars etc will be different than mine.
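
If it helps, the patch exposes one resourceN_resize file per resizable BAR, and (if I'm reading the sysfs interface right) reading a file returns the supported sizes as a bitmask, where set bit n means 2^n MB is supported:

ls /sys/bus/pci/devices/0000:0b:00.0/ | grep resize
cat /sys/bus/pci/devices/0000:0b:00.0/resource1_resize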

 

Edit: I just looked up your video card and see it has 8GB VRAM. Why are you trying to set the bar size to 16GB? Like I said, you really need to understand the reddit post I linked before proceeding.

Edited by Skitals
Link to comment
54 minutes ago, Skitals said:

 

Is the pci address of your GPU really the same as mine?

 

No, I changed the addresses on my side, but I posted the command so it could easily be referenced from your post. Here is my version:

 

#!/bin/bash
echo -n "0000:0b:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
echo 14 > /sys/bus/pci/devices/0000\:0b\:00.0/resource1_resize  # <<<< Gets stuck here
echo -n "10de 2488" > /sys/bus/pci/drivers/vfio-pci/new_id || echo -n "0000:0b:00.0" > /sys/bus/pci/drivers/vfio-pci/bind

 

Edited by JesterEE
Link to comment
8 hours ago, JesterEE said:

 

No, I changed the addressed on my side, but I posted the command so it could easily be referenced from your post.  Here is my version:

 

#!/bin/bash
echo -n "0000:0b:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
echo 14 > /sys/bus/pci/devices/0000\:0b\:00.0/resource1_resize  # <<<< Gets stuck here
echo -n "10de 2488" > /sys/bus/pci/drivers/vfio-pci/new_id || echo -n "0000:0b:00.0" > /sys/bus/pci/drivers/vfio-pci/bind

 

 

Like I said above, you are trying to set the bar to 16GB when you have 8GB VRAM. Read the reddit post, learn your card's bar options, and learn to manipulate it manually before trying to use a script.

Link to comment
On 1/22/2023 at 5:45 AM, Skitals said:

you are trying to set the bar to 16GB when you have 8GB VRAM.

 

Yup, messed that up in the copypasta while experimenting.

 

Anyway, not a big deal... it works for me if I want to set the ReBAR to acceptable values lower than the default 256MB (for my card: 64MB, 128MB, 256MB), but it will not set them higher (for my card: 512MB, 1GB, 2GB, 4GB, 8GB). If I try to set it to a value lower than 64MB or higher than 256MB I will get the error.

 

# -bash: echo: write error: Device or resource busy

 

Here is the memory allocation info for my card:

 

# lspci -vvvs 0b:00.0
0b:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070 Lite Hash Rate] (rev a1) (prog-if 00 [VGA controller])
...
        Region 0: Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
        Region 1: Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Region 3: Memory at c8000000 (64-bit, prefetchable) [size=32M]
...
Physical Resizable BAR
                BAR 0: current size: 16MB, supported: 16MB
                BAR 1: current size: 256MB, supported: 64MB 128MB 256MB 512MB 1GB 2GB 4GB 8GB
                BAR 3: current size: 32MB, supported: 32MB

 

Thanks for publishing the patch and modified kernel even though it didn't completely work for me. Hope others give it a shot too and report their mileage.

Edited by JesterEE
Link to comment
3 hours ago, JesterEE said:

 

Yup, messed that up in the copypasta while experimenting.

 

Anyway, not a big deal... it works for me if I want to set the ReBAR to acceptable values lower than the default 256MB (for my card: 64MB, 128MB, 256MB), but it will not set them higher (for my card: 512MB, 1GB, 2GB, 4GB, 8GB). If I try to set it to a value lower than 64MB or higher than 256MB I will get the error.

 

# -bash: echo: write error: Device or resource busy

 

Here is the memory allocation info for my card:

 

# lspci -vvvs 0b:00.0
0b:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070 Lite Hash Rate] (rev a1) (prog-if 00 [VGA controller])
...
        Region 0: Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
        Region 1: Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Region 3: Memory at c8000000 (64-bit, prefetchable) [size=32M]
...
Physical Resizable BAR
                BAR 0: current size: 16MB, supported: 16MB
                BAR 1: current size: 256MB, supported: 64MB 128MB 256MB 512MB 1GB 2GB 4GB 8GB
                BAR 3: current size: 32MB, supported: 32MB

 

Thanks for publishing the patch and modified kernel even though it didn't completely work for me. Hope others give it a shot too and report their mileage.

 

Do you have Above 4G Decoding enabled in your bios?

Link to comment
On 1/5/2023 at 5:46 PM, SimonF said:

 

 

Great find, this worked for me. I was getting a blank screen and the VM wouldn't boot as soon as I turned on Resizable BAR in the BIOS. It worked fine bare metal, so I knew it was something to do with KVM but didn't know what.

I have an HP RTX 3090, motherboard is an ASRock ROMED8-2T (with beta BIOS that has ReBAR support and is not public yet).

Here's the GPUZ info from the VM:

 

[GPU-Z screenshot from the VM: HP RTX 3090, PCIe 4.0 x16, Resizable BAR enabled]

 

Here's my lspci dump for the card:

Quote

85:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1) (prog-if 00 [VGA controller])
    Subsystem: Hewlett-Packard Company GA102 [GeForce RTX 3090]
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 450
    IOMMU group: 37
    Region 0: Memory at f2000000 (32-bit, non-prefetchable)
    Region 1: Memory at 47000000000 (64-bit, prefetchable)
    Region 3: Memory at 47800000000 (64-bit, prefetchable)
    Region 5: I/O ports at b000
    Expansion ROM at f3000000 [disabled]
    Capabilities: [60] Power Management version 3
        Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Address: 00000000fee00000  Data: 0000
    Capabilities: [78] Express (v2) Legacy Endpoint, MSI 00
        DevCap:    MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
        DevCtl:    CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop- FLReset-
            MaxPayload 256 bytes, MaxReadReq 512 bytes
        DevSta:    CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
        LnkCap:    Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <16us
            ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
        LnkCtl:    ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
            ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
        LnkSta:    Speed 2.5GT/s (downgraded), Width x16
            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range AB, TimeoutDis+ NROPrPrP- LTR-
             10BitTagComp+ 10BitTagReq+ OBFF Via message, ExtFmt- EETLPPrefix-
             EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
             FRS-
             AtomicOpsCap: 32bit- 64bit- 128bitCAS-
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
             AtomicOpsCtl: ReqEn-
        LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
        LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
        LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+ EqualizationPhase1+
             EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
             Retimer- 2Retimers- CrosslinkRes: unsupported
    Capabilities: [b4] Vendor Specific Information: Len=14 <?>
    Capabilities: [100 v1] Virtual Channel
        Caps:    LPEVC=0 RefClk=100ns PATEntryBits=1
        Arb:    Fixed- WRR32- WRR64- WRR128-
        Ctrl:    ArbSelect=Fixed
        Status:    InProgress-
        VC0:    Caps:    PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
            Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
            Ctrl:    Enable+ ID=0 ArbSelect=Fixed TC/VC=01
            Status:    NegoPending- InProgress-
    Capabilities: [258 v1] L1 PM Substates
        L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
              PortCommonModeRestoreTime=255us PortTPowerOnTime=10us
        L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
               T_CommonMode=0us LTR1.2_Threshold=0ns
        L1SubCtl2: T_PwrOn=10us
    Capabilities: [128 v1] Power Budgeting <?>
    Capabilities: [420 v2] Advanced Error Reporting
        UESta:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UEMsk:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
        UESvrt:    DLP+ SDES+ TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
        CESta:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
        CEMsk:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
        AERCap:    First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
            MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
        HeaderLog: 00000000 00000000 00000000 00000000
    Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
    Capabilities: [900 v1] Secondary PCI Express
        LnkCtl3: LnkEquIntrruptEn- PerformEqu-
        LaneErrStat: 0
    Capabilities: [bb0 v1] Physical Resizable BAR
        BAR 0: current size: 16MB, supported: 16MB
        BAR 1: current size: 32GB, supported: 64MB 128MB 256MB 512MB 1GB 2GB 4GB 8GB 16GB 32GB
        BAR 3: current size: 32MB, supported: 32MB
    Capabilities: [c1c v1] Physical Layer 16.0 GT/s <?>
    Capabilities: [d00 v1] Lane Margining at the Receiver <?>
    Capabilities: [e00 v1] Data Link Feature <?>
    Kernel driver in use: vfio-pci
    Kernel modules: nvidia_drm, nvidia

 I'm on Unraid 6.11.5 and VM is on Q35-7.1, Win-11.

Edited by shpitz461
Link to comment

Looks like I figured it out: I had left out the steps to add the extra lines to my Win11 VM, which enabled 64GB ReBAR support.

 

So looks like the checklist to enable this:

- Host BIOS Enable ReBAR support

- Host BIOS Enable 4G Decoding

- Enable & Boot Custom Kernel syslinux configuration (near beginning of this thread)

- Boot Unraid in UEFI Mode

- VM must use UEFI BIOS

- VM must have the top line of XML changed from <domain type='kvm'> to:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

- VM must have the following added (after the </devices> line, before the </domain> line); X-PciMmio64Mb takes a value in MB, so string=65536 requests a 64GB MMIO window:

  <qemu:commandline>
    <qemu:arg value='-fw_cfg'/>
    <qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
  </qemu:commandline>

 

After that, looks like everything worked for me as well.

I'm just summarising this for anyone looking for the complete picture.
 

I'll be testing performance over the next weeks as well to see if I'm seeing any improvement.

 

This is great, exactly what I've been waiting for!

 

EDIT for completeness:

There is one last step, which I have implemented and can confirm works: the bind/unbind UserScript in this comment:

 

 

Specific details of the script are in the linked^ comment, but this script sets the Bar size.

I would highly recommend setting this up as a userscript and setting it to run "At Startup of Array":

[Screenshot: User Scripts plugin with the script schedule set to "At Startup of Array"]

 

This works for my setup, but your mileage may vary.

Edited by KptnKMan
Link to comment
On 1/31/2023 at 1:05 AM, KptnKMan said:

Looks like I figured it out: I had left out the steps to add the extra lines to my Win11 VM, which enabled 64GB ReBAR support.

 

So looks like the checklist to enable this:

- Host BIOS Enable ReBAR support

- Host BIOS Enable 4G Decoding

- Enable & Boot Custom Kernel syslinux configuration (near beginning of this thread)

- Boot Unraid in UEFI Mode

- VM must use UEFI BIOS

- VM must have the top line of XML changed from <domain type='kvm'> to:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

- VM must have the following added (after the </devices> line, before the </domain> line):

  <qemu:commandline>
    <qemu:arg value='-fw_cfg'/>
    <qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
  </qemu:commandline>

 

After that, looks like everything worked for me as well.

I'm just summarising this for anyone looking for the complete picture.
 

I'll be testing performance over the next weeks as well to see if I'm seeing any improvement.

 

This is great, exactly what I've been waiting for!

 

Just a heads up: the custom kernel wasn't necessary for me with my RTX 3090; just adding the additional xml to my VM config was enough.

  • Thanks 1
Link to comment
