Posts posted by Hankanman

  1. Okay, I have RMA'd the motherboard (the only new component). I also realised I wanted more capability, and I have previously run Unraid on an X570 board without issue; the replacement will arrive tomorrow or the day after, and when it arrives I will run in Safe Mode overnight. Something else to mention: before the move, it was running on Intel hardware. While I wait for the new Ryzen board I have switched back, with no issues since. I have seen that moving between hardware upgrades is "plug and play", which it essentially was, but I have not seen anyone switch between Intel and AMD hardware.

  2. Update

     

    I have tried the suggestions found on the forums here:

    https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-819173

     

    Updated Global C-State Control to Disabled in the UEFI
    Set Power Supply Idle Control to Typical Current Idle in the UEFI

     

    New motherboard with a known-good processor and RAM; current specs:

    4x WD RED 4TB (3 in array, 1 parity)
    1x WD Black 1TB NVMe (cache)
    ASRock B550 Steel Legend
    Ryzen 3800x
    2x 8GB Corsair Vengeance LPX in dual channel

     

    All other UEFI settings are stock, aside from the boot device, and CSM is disabled.

    Lock-ups occur as soon as 10 minutes after boot; the longest it has stayed up is ~12 hours.

     

    I've attached photos of the last two lock-ups and a diagnostics dump:

    IMG_20210304_145222.jpg

    IMG_20210302_122223.jpg

    endeavor-diagnostics-20210304-0931.zip

  3. Hi all, I've just moved my Unraid install and disks over to some new hardware. The first issue was getting it to boot, which was resolved using this post: https://forums.unraid.net/topic/74419-tried-to-upgrade-from-653-to-66-and-wont-boot-up-after-reboot/?tab=comments#comment-710968

     

    Then I needed to remove a disk from the array (yes, I probably should have done that before moving it).

    I followed these instructions to shrink the array: https://wiki.unraid.net/Shrink_array#For_unRAID_v6.0_and_6.1_and_possibly_5.0 (I tried the newer method, but the script didn't want to run, despite my being sure I had done everything right).

     

    I am running 6.9.0-rc2

    I created the new array, put my drives back in, and started it up; it began a parity sync, all good. I left it overnight to finish, and when I came back the web UI showed 90.1% done and Unraid had crashed with the following on screen:

     

    IMG_20210302_081858.thumb.jpg.17a89ca0a828693ad82d629edc1d7782.jpg

     

    I reset the machine and it has started the parity sync again. The data is all there by the looks of it, but I need to make sure this doesn't happen again...

  4. Hi, I scratched my head on this for a while, but there is a solution. Essentially, Unraid doesn't release the primary GPU for use by a VM:

     

    First, test with the following commands via SSH, with your VM off:

     

    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

     

    Then try to boot your VM with the GPU passed through. If it boots successfully, you had the same issue as me. Then install User Scripts from Community Applications and set this to run as a little script when the array starts. It even works with auto-starting VMs.
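    For reference, an array-start script wrapping those commands can look roughly like this (a minimal sketch: the vtcon numbering and the efi-framebuffer device name can differ per system, so the writes are guarded with existence checks):

```shell
#!/bin/bash
# Array-start script: release the host console's hold on the primary GPU
# so vfio can hand it to the VM.
# NOTE: vtcon numbers and the framebuffer device name may differ on your
# system; check /sys/class/vtconsole/ and /sys/bus/platform/drivers/ first.

for con in /sys/class/vtconsole/vtcon0/bind /sys/class/vtconsole/vtcon1/bind; do
    if [ -w "$con" ]; then
        echo 0 > "$con"    # detach the virtual console from the GPU
    fi
done

fb=/sys/bus/platform/drivers/efi-framebuffer/unbind
if [ -w "$fb" ]; then
    echo efi-framebuffer.0 > "$fb"    # release the EFI framebuffer
fi
```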

  5. As an update, I am using the following hardware:

    MSI MEG X570 ACE

    Ryzen 3800X

    MSI Gaming X Trio RTX 2080 SUPER (PCI Slot 1)

    ASUS Strix GTX 970 (PCI Slot 2)

     

    Steps so far:

    1. Using a dumped vBIOS from TechPowerUp
    2. Modified the vBIOS per Space Invader One's tutorials
    3. Assigned the card, audio device, and USB elements of the GPU using both the pci-stub.ids and vfio-pci.ids methods
    4. Added <ioapic driver='kvm'/> to the VM XML per the bug noted here: https://bugs.launchpad.net/qemu/+bug/1826422 (found when using vfio-pci.ids, which results in the host hanging): IMG_20190801_150357.thumb.jpg.16c3fb5eb22340529905e32db1e62ae9.jpg
    5. Tried both the Q35 and i440fx chipsets (the latest and a couple of prior versions of each)
    6. Set the PCIe ACS override to Both
    7. Enabled "VFIO allow unsafe interrupts"
    8. Finally, updated to Unraid 6.7.3-rc2
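    For anyone following along, the stubbing in step 3 boils down to one line in /boot/syslinux/syslinux.cfg. A sketch (the 10de:... IDs are the four functions typically reported for a 2080 Super, but verify yours with lspci, since they vary by card):

```
# Excerpt from /boot/syslinux/syslinux.cfg -- bind all four of the GPU's
# functions (VGA, audio, USB controller, UCSI) to vfio-pci at boot.
# Verify the vendor:device IDs for your own card with: lspci -nn | grep -i nvidia
append vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9 initrd=/bzroot
```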

    With the card in the second PCIe slot there's no problem (with the GTX 970 in the primary), but I want to run the Super at full x16 speed.

    With the config and steps listed, I get a screen output at 800x600 (only when using the modified vBIOS), and I am unable to install the drivers in Windows 10. (I also booted outside of Unraid and pre-installed the drivers, but they take no effect in the VM.)

     

    I am also getting the following in the VM log:

    2019-08-01T17:40:48.826299Z qemu-system-x86_64: -device vfio-pci,host=2d:00.0,id=hostdev0,bus=pci.4,addr=0x0: Failed to mmap 0000:2d:00.0 BAR 3. Performance may be slow

    and:

    2019-08-01T17:41:02.439023Z qemu-system-x86_64: vfio_region_write(0000:2d:00.0:region3+0x142a0, 0x47ad010d,8) failed: Device or resource busy

    Also, FYI, I am booting the VM directly from the NVMe drive, with no vdisks.

     

    VM XML:

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Avenger</name>
      <uuid>ecc6bdbc-7fa7-b22f-f907-af4c77e81c8f</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>2097152</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>10</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='3'/>
        <vcpupin vcpu='1' cpuset='11'/>
        <vcpupin vcpu='2' cpuset='4'/>
        <vcpupin vcpu='3' cpuset='12'/>
        <vcpupin vcpu='4' cpuset='5'/>
        <vcpupin vcpu='5' cpuset='13'/>
        <vcpupin vcpu='6' cpuset='6'/>
        <vcpupin vcpu='7' cpuset='14'/>
        <vcpupin vcpu='8' cpuset='7'/>
        <vcpupin vcpu='9' cpuset='15'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/ecc6bdbc-7fa7-b22f-f907-af4c77e81c8f_VARS-pure-efi.fd</nvram>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
        <ioapic driver='kvm'/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='10' threads='1'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='qemu-xhci' ports='15'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0xe'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='pci' index='8' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='8' port='0xf'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
        </controller>
        <controller type='pci' index='9' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='9' port='0x10'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='10' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='10' port='0x11'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='11' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='11' port='0x12'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='12' model='pcie-to-pci-bridge'>
          <model name='pcie-pci-bridge'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <interface type='bridge'>
          <mac address='52:54:00:00:93:39'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2d' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2d' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x24' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x28' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2d' slot='0x00' function='0x2'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2d' slot='0x00' function='0x3'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

     

  6. Yeah, I had tried vfio-pci.ids before and just tried it again for the heck of it; same as last time, I get a system hang on boot, see the screenshot below: IMG_20190801_150357.thumb.jpg.20a85a310febe76dfe9ca7f6ea1bd915.jpg

     

    0000:2d:00.0 is the RTX 2080 Super. Note I have included all the devices on the card in vfio-pci.ids: the display, audio, USB, and USB controller.

  7. Has anyone got this working with an RTX card? I have:

    MSI MEG X570 ACE

    Ryzen 3800X

    MSI Gaming X Trio RTX 2080 SUPER

     

    I'm looking to pass through the RTX 2080 Super to a VM as a single GPU. I have tried it all: a modded vBIOS, the Q35 chipset, stubbing the USB controllers and passing them through.

     

    With the card in the second PCIe slot there's no problem (with the GTX 970 in the primary), but I want to run the Super at full x16 speed.

     

    With the attached config and the steps taken, I get a screen output at 800x600 and I am unable to install the drivers in Windows 10. (I also booted outside of Unraid and pre-installed the drivers, but they take no effect in the VM.)

     

    I am getting the following in the VM log:

    2019-07-31T09:48:52.886459Z qemu-system-x86_64: vfio_region_write(0000:2d:00.0:region3+0x14290, 0x67ab0e0d,8) failed: Device or resource busy

    It seems that Unraid isn't fully releasing the card, so far as I can tell. My only thought would be running Unraid truly headless, with no graphics output at all, so it has no ability to interfere with the card; but I don't know if that's possible, and of course there is then no way to diagnose anything if you lose the network. Also, FYI, I am booting the VM directly from the NVMe drive, with no vdisks.

     

    I have attached my VM XML to save the length of my post :P

    Win10.xml

  8. Awesome! Got it all sorted now. I had to install the drivers via VNC with the 2080 as the second graphics card, but once the driver was in, it all came up on the display. Also worth noting: I no longer need the vBIOS there; stubbing and disabling Hyper-V seem to have done the trick for running in the second PCIe slot.

     

    I've tried with the same fixes back in the first PCIe slot and am looking to get that working, as I want to leverage the x16 speed for the new GPU. Any thoughts?

  9. Hi All,

    I have Unraid running with a GTX 970 and a shiny new RTX 2080 Super. The 970 works with no issue when assigned to a VM, and the drivers were picked up, etc.

    For the 2080 Super, I had to dump the vBIOS and strip the NVIDIA header; once I did that, I was able to get it to output to a display. I have also passed through the onboard audio and USB controllers successfully, and followed all sorts of forum posts and Space Invader One videos (a godsend).

    The VM picks up that it is an RTX 2080 Super and recognises it. But whether through the standard installer or GeForce Experience, I cannot get the drivers to stick! Any help would be appreciated; I would love to trace rays as soon as possible :D
