Noggers

Members
  • Posts: 7
  • Joined

  • Last visited

About Noggers

  • Birthday March 23


  1. Hi Guys, quick update - just for sh!ts and giggles, I turned off REBAR in BIOS and adjusted the REBAR size from 16GB to 8GB in the script, and the VM now launches and shows the large memory addressing on the GPU. So even though the card is capable of a 16GB REBAR size, the VM didn't like it; I moved it to the next size down and it worked. Here is the updated user script:

     #!/bin/bash
     echo -n "0000:23:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
     echo 13 > /sys/bus/pci/devices/0000\:23\:00.0/resource0_resize
     echo -n "1002 73af" > /sys/bus/pci/drivers/vfio-pci/new_id || echo -n "0000:23:00.0" > /sys/bus/pci/drivers/vfio-pci/bind

     Remember to adjust the IDs and addresses based on your system configuration. Hope this helps some others - it absolutely works with AMD cards, and works very well. Buttery smooth with REBAR enabled. I did some testing with the Tiny Tina's Wonderlands video benchmark, and it no longer stutters, whereas before it would stutter quite a bit. Cheers, Rod.

     UPDATE - I just realised the script was commented out - updated accordingly, and gave myself an upper cut while at it.

     UPDATE 2 - I was able to get the full 16GB REBAR enabled and the machine is working fine. So using the script above, as well as (in my case) disabling REBAR in BIOS, solved the issue and it's working nicely. Your mileage may vary.
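For anyone wondering how the numbers in the script map to BAR sizes: the value written to `resource0_resize` is the log2 of the BAR size in megabytes, so 13 requests 2^13 MB = 8 GB and 14 requests 2^14 MB = 16 GB. A minimal sketch of the conversion (the helper function name is my own, not part of the kernel interface):

```shell
#!/bin/bash
# resource0_resize takes log2(BAR size in MB):
# writing 13 requests a 2^13 MB = 8 GB BAR; writing 14 requests 16 GB.
# Hypothetical helper converting the sysfs exponent to gigabytes.
rebar_size_gb() {
  echo $(( (1 << $1) / 1024 ))
}

rebar_size_gb 13   # 8
rebar_size_gb 14   # 16
```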
  2. Hi Guys - I have been trying to enable Resizable BAR in a Windows VM with an AMD 6900XT GPU. Has anyone actually managed to get Resizable BAR working for AMD GPUs in Unraid, or does it only work with nVidia GPUs? I have REBAR enabled in BIOS, as well as Above 4G Decoding. I have an array-start script to set the resizable BAR to 16GB, which this card is capable of, and I have also made the extra XML changes to the VM definition. I see no errors when the script runs, and the system starts to boot, but when it tries to load the video driver, the screen goes black and turns off. I haven't seen any posts so far with an AMD card working - so I was wondering if I am wasting my time, or if it is indeed possible to get an AMD card to work with REBAR enabled in Unraid? If I turn REBAR off in BIOS, the VM boots normally and Windows loads without issue.

     VM Def:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>wksRod</name>
       <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
       <description>Rods Workstation</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='4'/>
         <vcpupin vcpu='1' cpuset='36'/>
         <vcpupin vcpu='2' cpuset='5'/>
         <vcpupin vcpu='3' cpuset='37'/>
         <vcpupin vcpu='4' cpuset='6'/>
         <vcpupin vcpu='5' cpuset='38'/>
         <vcpupin vcpu='6' cpuset='7'/>
         <vcpupin vcpu='7' cpuset='39'/>
         <emulatorpin cpuset='3,35'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-i440fx-7.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/d62a5da1-46ea-cbf7-dce7-a57944565dd7_VARS-pure-efi-tpm.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv mode='custom'>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='4' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
           <source file='/mnt/user/domains/wksRod/vdisk1.img'/>
           <target dev='hdc' bus='scsi'/>
           <serial>vdisk1</serial>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/en-us_windows_11_consumer_editions_version_22h2_updated_nov_2022_x64_dvd_c148a37b.iso'/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.248-1.iso'/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='scsi' index='0' model='virtio-scsi'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'/>
         <controller type='ide' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:3d:14:8d'/>
           <source bridge='br0'/>
           <model type='virtio-net'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <tpm model='tpm-tis'>
           <backend type='emulator' version='2.0' persistent_state='yes'/>
         </tpm>
         <audio id='1' type='none'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x23' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x23' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x25' slot='0x00' function='0x3'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x46' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x47' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x47' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x47' slot='0x00' function='0x3'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x4a' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <qemu:commandline>
         <qemu:arg value='-fw_cfg'/>
         <qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
       </qemu:commandline>
     </domain>

     User Script:

     #!/bin/bash
     #echo -n "0000:23:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
     #echo 14 > /sys/bus/pci/devices/0000\:23\:00.0/resource0_resize
     #echo -n "1002 73af" > /sys/bus/pci/drivers/vfio-pci/new_id || echo -n "0000:23:00.0" > /sys/bus/pci/drivers/vfio-pci/bind

     If I tried to use resource1_resize, I would get an error that the file already exists. Any thoughts on what I might be doing wrong? Cheers, Rod.
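One way to sanity-check what sizes the card actually supports before writing to resource0_resize: reading that sysfs file back gives a hex bitmask where bit n set means a 2^n MB BAR is available. A small sketch to decode such a mask (the function name and the example mask value are mine, for illustration only):

```shell
#!/bin/bash
# Decode a resourceN_resize bitmask as read from sysfs (hex, no 0x prefix):
# bit n set => a 2^n MB BAR size is supported.
decode_rebar_mask() {
  local mask=$1 n
  for n in $(seq 0 19); do
    if (( (0x$mask >> n) & 1 )); then
      echo "$(( 1 << n )) MB"
    fi
  done
}

# On a live system (device address taken from the posts above):
#   decode_rebar_mask "$(cat /sys/bus/pci/devices/0000:23:00.0/resource0_resize)"
decode_rebar_mask 4000   # bit 14 set -> prints "16384 MB"
```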
  3. Gave up - I need the server to work, so I've gone back to 6.8.3. My guess is an issue between the 5.8 kernel and KVM. I'll wait to see if anyone manages to solve the issue before attempting another upgrade.
  4. One other thing - I have an AMD card in the server as well, in my primary video slot. If I change the passed-through card to the AMD one, it works. It's just nVidia that sucks! The diagnostics and XML for the Windows 10 machine are both attached. Cheers, Rod. Windows 10 (AMD).xml tower-diagnostics-20210405-1748-AMD-Working.zip
  5. I have the same problem. I just upgraded from Unraid 6.8.3 to 6.9.1, and any VM with nVidia GPU passthrough crashes when the nVidia driver loads. If I revert to 6.8.3, it works again - so I know it's not a faulty GPU. It happens on both Ubuntu and Windows 10 VMs - fine until the nVidia driver tries to activate. I have attached diagnostics in case they help, as well as the two new VMs I created from scratch to test with. Cheers, Rod. tower-diagnostics-20210405-1732.zip Windows 10.xml Ubuntu.xml
  6. Did you make sure you ran the msi_util as Administrator? If you don't, it won't save the changes back to the registry.
  7. Hi - I know this is an old post, but was there anything special you needed to do to get your single AMD card to work? I have an AMD 5450 GPU that I want to use in a VM, and it's the only card in the server. When I start the VM, the monitor just goes into power save mode. EDIT: Found the solution - I needed to change the BIOS boot mode to Legacy instead of UEFI.