Starli0n


Posts posted by Starli0n

  1. I am reviving my old post, hoping to find some help here by giving you more details:

     

    I have been following this tutorial:

     

     

    To sum up the issue: I am able to boot Win10 directly from my NVMe drive (without Unraid),
    but when I create a VM template and boot it under Unraid, I systematically get a blue screen with error code 0xc000021a.

     

    [screenshot]


    When I reboot directly outside Unraid, everything is fine.
     

    I have tried various configurations, like not passing through the graphics card so that I can see the VNC console log.

    I have switched the machine type between i440fx-7.1 and Q35-7.1 (I learnt that Q35 is better for GPU passthrough) and toggled the Hyper-V parameter Yes/No,
    but I am not able to figure out the issue...

    I do not know exactly what to share so that you can help me properly, so here are some pieces of my configuration:

    - The VM Manager:

    PCIe ACS override: Both

    VFIO allow unsafe interrupts: Yes

     

    [screenshot]

     

    - The System Device:

    The NVMe I am trying to pass through is in IOMMU group 14.

     

    [screenshot]

     

    - The vfio-pci log:
    I noticed two errors, but I do not know whether they matter nor how to correct them.

     

    [screenshot]

    - The boot log:

     

    [screenshot]

    - The VNC log:

     

    
    2023-05-17 23:55:11.098+0000: starting up libvirt version: 8.7.0, qemu version: 7.1.0, kernel: 5.19.17-Unraid, hostname: Ryzen
    LC_ALL=C \
    PATH=/bin:/sbin:/usr/bin:/usr/sbin \
    HOME='/var/lib/libvirt/qemu/domain-1-Windows 10 BareMetal' \
    XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10 BareMetal/.local/share' \
    XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10 BareMetal/.cache' \
    XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10 BareMetal/.config' \
    /usr/local/sbin/qemu \
    -name 'guest=Windows 10 BareMetal,debug-threads=on' \
    -S \
    -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-Windows 10 BareMetal/master-key.aes"}' \
    -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
    -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/46630960-32f8-d2d3-0e31-dd5839cf0c2e_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
    -machine pc-q35-7.1,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
    -accel kvm \
    -cpu host,migratable=on,topoext=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off \
    -m 32256 \
    -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":33822867456}' \
    -overcommit mem-lock=off \
    -smp 24,sockets=1,dies=1,cores=12,threads=2 \
    -uuid 032e02b4-0499-053c-f806-d90700080009 \
    -display none \
    -no-user-config \
    -nodefaults \
    -chardev socket,id=charmonitor,fd=35,server=on,wait=off \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=localtime \
    -no-hpet \
    -no-shutdown \
    -boot strict=on \
    -device '{"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"}' \
    -device '{"driver":"pcie-root-port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"}' \
    -device '{"driver":"pcie-root-port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"}' \
    -device '{"driver":"pcie-root-port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"}' \
    -device '{"driver":"pcie-root-port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"}' \
    -device '{"driver":"pcie-root-port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"}' \
    -device '{"driver":"ich9-usb-ehci1","id":"usb","bus":"pcie.0","addr":"0x7.0x7"}' \
    -device '{"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pcie.0","multifunction":true,"addr":"0x7"}' \
    -device '{"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pcie.0","addr":"0x7.0x1"}' \
    -device '{"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pcie.0","addr":"0x7.0x2"}' \
    -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.2","addr":"0x0"}' \
    -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.190-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
    -device '{"driver":"ide-cd","bus":"ide.1","drive":"libvirt-1-format","id":"sata0-0-1"}' \
    -netdev tap,fd=36,id=hostnet0 \
    -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:f5:39:78","bus":"pci.1","addr":"0x0"}' \
    -chardev pty,id=charserial0 \
    -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
    -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
    -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
    -audiodev '{"id":"audio1","driver":"none"}' \
    -device '{"driver":"vfio-pci","host":"0000:0c:00.0","id":"hostdev0","bus":"pci.3","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:0c:00.1","id":"hostdev1","bus":"pci.4","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:01:00.0","id":"hostdev2","bus":"pci.5","addr":"0x0"}' \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    char device redirected to /dev/pts/0 (label charserial0)
    2023-05-17T23:55:14.579083Z qemu-system-x86_64: vfio: Cannot reset device 0000:0c:00.1, depends on group 36 which is not owned.
    2023-05-17T23:55:26.952967Z qemu-system-x86_64: terminating on signal 15 from pid 7750 (/usr/sbin/libvirtd)
    2023-05-17 23:55:27.576+0000: shutting down, reason=shutdown
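    The last error before shutdown is the real clue: QEMU cannot reset 0000:0c:00.1 because something else in IOMMU group 36 is not bound to vfio-pci. A quick way to see what shares that group (a sketch using the standard sysfs layout; the group number 36 is taken from the log above):

```shell
#!/bin/sh
# Print every PCI address in an IOMMU group; all endpoints in the group
# must be bound to vfio-pci before QEMU can manage any of them.
list_iommu_group() {
    dir="/sys/kernel/iommu_groups/$1/devices"
    if [ -d "$dir" ]; then
        for dev in "$dir"/*; do
            echo "${dev##*/}"          # PCI address, e.g. 0000:0c:00.1
        done
    else
        echo "IOMMU group $1 not found"
    fi
}

list_iommu_group "${1:-36}"            # 36 comes from the error above
```

    Each printed address can be fed to `lspci -nns` to identify the device behind it.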
    

     

    - The VM Template:

    [screenshot]

     

    The XML version:

     

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Windows 10 BareMetal</name>
      <uuid>bdb3ec3a-eb3f-2619-7603-f2836cadd078</uuid>
      <description>Windows 10 Pass Through</description>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>33030144</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>24</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='4'/>
        <vcpupin vcpu='1' cpuset='20'/>
        <vcpupin vcpu='2' cpuset='5'/>
        <vcpupin vcpu='3' cpuset='21'/>
        <vcpupin vcpu='4' cpuset='6'/>
        <vcpupin vcpu='5' cpuset='22'/>
        <vcpupin vcpu='6' cpuset='7'/>
        <vcpupin vcpu='7' cpuset='23'/>
        <vcpupin vcpu='8' cpuset='8'/>
        <vcpupin vcpu='9' cpuset='24'/>
        <vcpupin vcpu='10' cpuset='9'/>
        <vcpupin vcpu='11' cpuset='25'/>
        <vcpupin vcpu='12' cpuset='10'/>
        <vcpupin vcpu='13' cpuset='26'/>
        <vcpupin vcpu='14' cpuset='11'/>
        <vcpupin vcpu='15' cpuset='27'/>
        <vcpupin vcpu='16' cpuset='12'/>
        <vcpupin vcpu='17' cpuset='28'/>
        <vcpupin vcpu='18' cpuset='13'/>
        <vcpupin vcpu='19' cpuset='29'/>
        <vcpupin vcpu='20' cpuset='14'/>
        <vcpupin vcpu='21' cpuset='30'/>
        <vcpupin vcpu='22' cpuset='15'/>
        <vcpupin vcpu='23' cpuset='31'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/bdb3ec3a-eb3f-2619-7603-f2836cadd078_VARS-pure-efi.fd</nvram>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv mode='custom'>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='12' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:ef:c8:6b'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='fr'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <audio id='1' type='none'/>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

     

    - The Unraid Boot conf:

     

    [screenshot]

     

    That's it, I hope I have been exhaustive enough 🙂

     

  2. Hi,


    I had a working setup with both NVME and GPU passthrough. 
     

    But since I recently upgraded my CPU, I needed to upgrade my BIOS as well, which caused a blue screen and triggered repair mode in Windows.
     

    To isolate the issue, I temporarily deactivated GPU passthrough and I still have the same issue. 
     

    The thing is, when I reboot Windows in safe mode, the NVMe passthrough works. So that is encouraging.
     

    As I had to reset my BIOS settings for the upgrade, that might be the issue.

    But I do not know which setting I missed. Maybe something with UEFI...

     

    I attached my diagnostic zip. 
     

    Does anyone have a lead on this?

    ryzen-diagnostics-20210729-1035.zip


  4. @ich777, I just realized that you are the author of the module.

     

    I built the kernel in Custom mode and I added the two variables at the beginning of the script:

     

    export CONFIG_TLS=y
    export CONFIG_TLS_DEVICE=y
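    For context (a general kernel fact, not Unraid-specific): an option set to `y` is compiled into the kernel image itself, so no separate `.ko` file appears; only `=m` produces a loadable module. Built-in features are listed in `modules.builtin`, which can be checked like this:

```shell
#!/bin/sh
# Report whether a feature is listed as built-in for the running kernel.
check_builtin() {
    f="/lib/modules/$(uname -r)/modules.builtin"
    if [ -r "$f" ] && grep -q "/$1.ko" "$f"; then
        echo "$1 is built in"
    else
        echo "$1 is not listed as built-in"
    fi
}

check_builtin tls
```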

     

    It seems to work fine, but I do not see the TLS module as a separate file in the output directory.

    Does that mean something went wrong, or are the modules included directly into the main files?

     

    My aim is not to replace the whole kernel, but only to install/add the compiled kTLS module.

     

    Do you have any advice on this?

     

  5. Hi,

     

    I do not know how to activate the TLS kernel module (kTLS) on Unraid 6.9.1.

    I already found some posts, but they seem to be outdated.

     

    I tried:

     

    $ modprobe tls
    modprobe: FATAL: Module tls not found in directory /lib/modules/5.10.21-Unraid

     

    So it seems that Unraid 6.9.1 is based on Linux kernel 5.10.21.

     

    The TLS feature has been in the kernel since Linux 4.13 and should be enabled with these flags: CONFIG_TLS=y and CONFIG_TLS_DEVICE=y.
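    Whether a given kernel was built with CONFIG_TLS can usually be verified from the exposed config (a sketch; not every kernel provides /proc/config.gz or a /boot config file):

```shell
#!/bin/sh
# Print the TLS-related options from whichever kernel config is exposed.
show_tls_config() {
    if [ -r /proc/config.gz ]; then
        zcat /proc/config.gz | grep -E '^CONFIG_TLS(_DEVICE)?=' \
            || echo "CONFIG_TLS not set"
    elif [ -r "/boot/config-$(uname -r)" ]; then
        grep -E '^CONFIG_TLS(_DEVICE)?=' "/boot/config-$(uname -r)" \
            || echo "CONFIG_TLS not set"
    else
        echo "kernel config not exposed"
    fi
}

show_tls_config
```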

     

    I learnt that Unraid is based on Slackware (https://packages.slackware.com), but I do not know which version,

    nor whether it is 64-bit.

     

    Does anyone have an idea how to achieve this?

     

    Thanks

     

     

  6. Hi,

     

    I do not know if it is a bug or a feature, but it seems that permissions are not taken into account in share folders under /mnt/user.

     

    I created a user named 'coder'.

     

    As expected, in the home folder:
     

    # in /home/coder
    
    $ ls -pla foo
    ---------- 1 root root 4 Mar 23 03:13 foo
    
    $ cat foo
    cat: foo: Permission denied
    

     

    But in a shared folder with the same file:

     

    # in /mnt/user/system
    
    $ ls -pla foo
    ---------- 1 root root 4 Mar 23 03:12 foo
    
    $ cat foo
    bar

     

    It is strange behavior.
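    For comparison, this is what any POSIX filesystem should do with a mode-000 file (a minimal, runnable-anywhere check):

```shell
#!/bin/sh
# On a regular filesystem the mode bits shown by ls are enforced:
# a 000 file shows "----------" and is unreadable to non-root users.
perm_string() {
    f=$(mktemp)
    printf 'bar' > "$f"
    chmod 000 "$f"
    ls -l "$f" | cut -c1-10        # first 10 characters are the mode
    rm -f "$f"
}

perm_string
```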

     

    How can I get permissions applied correctly in shared folders?

     

     

  7. On 10/20/2019 at 8:06 AM, ICDeadPpl said:

    Does anyone have a 'clean' unedited sshd_config file to share?
    Mine got messed up (by the ssh plugin). I uninstalled it and am going to go with the tips provided in this thread.

    So to have a clean sshd_config file, simply make a backup of the file:

    mv /boot/config/ssh/sshd_config /boot/config/ssh/sshd_config.bak

    After that, /boot/config/ssh/sshd_config should no longer exist.

     

    Then restart sshd:

    /etc/rc.d/rc.sshd restart

    Finally you should have the following files:

    /etc/ssh/sshd_config
    /etc/ssh/sshd_config.bak

    with /etc/ssh/sshd_config set to its default configuration.

     

    Provided that it works the same way in the stable version, as I am using the beta.

  8. On 10/19/2016 at 5:12 AM, ken-ji said:

    A slightly better way to maintain the keys across reboots is to

    * copy the authorized_keys file to /boot/config/ssh/root.pubkeys

    * copy /etc/ssh/sshd_config to /boot/config/ssh

    * modify /boot/config/ssh/sshd_config to set the following line

    
    AuthorizedKeysFile      /etc/ssh/%u.pubkeys
     

     

    This will allow you to keep the keys on the flash always and let the ssh startup scripts do all the copying.

     

    Thanks @ken-ji, your method works like a charm 👌

    (once I understood that /boot/config/ssh/root.pubkeys was a file and not a directory 🙄)

     

    That being said, I am using Unraid 6.9.0-beta30, and there this symlink exists:

    (I do not know if the symlink is present in the stable version)

    /root/.ssh/ -> /boot/config/ssh/root/

    So you can keep the default /etc/ssh/sshd_config configuration, which reads /root/.ssh/authorized_keys.

    You just have to put your public keys here:

    /boot/config/ssh/root/authorized_keys

    and you will have them in the usual location:

    /root/.ssh/authorized_keys

    Therefore there is no need to copy /etc/ssh/sshd_config to /boot/config/ssh/sshd_config and modify it.

     

    Then restart ssh:

    /etc/rc.d/rc.sshd restart

    By the way, restarting copies the files from /boot/config/ssh/ to /etc/ssh/, BUT not the directories inside that folder.

    Plus, it keeps files that were already present in /etc/ssh/ even though they were deleted from /boot/config/ssh/.

    Clearing those requires a reboot, as /etc/ssh/ lives in RAM.
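    Putting the whole procedure together, a minimal sketch (the paths are the ones from this thread; `CONFIG_DIR` is parameterized so the function can be dry-run outside Unraid):

```shell
#!/bin/sh
# Append a public key to the flash-backed authorized_keys so it
# survives reboots. CONFIG_DIR defaults to the Unraid path above.
install_key() {
    dir=${CONFIG_DIR:-/boot/config/ssh/root}
    mkdir -p "$dir"
    cat "$1" >> "$dir/authorized_keys"
    echo "installed into $dir/authorized_keys"
}

# Usage on the server:
#   install_key ~/.ssh/id_ed25519.pub
#   /etc/rc.d/rc.sshd restart
```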

     

     

  9. Hi,

     

    I do not know how to create a symlink on the USB flash drive.
    I tried different ways, but each time I get the same error.

     

    Connected to my server over ssh, I run:

    root@Unraid:/boot/config/custom# ln -s file file-link
    ln: failed to create symbolic link 'file-link': Operation not permitted

     

    I tried:

    root@Unraid:/boot/config/custom# bash -c "ln -s file file-link"

    or

    root@Unraid:/boot/config/custom# exec "ln -s file file-link"

    but it is the same error 😩
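    "Operation not permitted" from `ln -s` is usually the filesystem refusing, not the shell: FAT, the format Unraid requires for the flash drive, cannot store symlinks at all. The backing filesystem type can be checked like this (a sketch; `stat -f -c %T` is the GNU coreutils form):

```shell
#!/bin/sh
# Print the filesystem type backing a path; "msdos"/"vfat" cannot
# hold symbolic links, which makes ln -s fail with EPERM.
fs_type() {
    stat -f -c %T "$1" 2>/dev/null || echo "unknown"
}

fs_type /boot
```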

     

    The final goal is to have a git repository in /boot/config/custom,
    and from there to deploy my custom scripts using symlinks,
    for instance into /boot/config/plugins/user.scripts/scripts,

    so that I can commit my changes whenever I update my scripts.

     

    Do you have any ideas?

  10. I needed to boot Unraid in legacy mode while booting Windows Bare Metal in UEFI.

    Therefore, I needed to change some settings in my Bios:

    • Enable CSM Support
      • Storage Boot Option Control: Legacy Only (i.e. boot Unraid in legacy mode)
      • Other PCI Device ROM Priority: UEFI Only (i.e. boot Windows bare metal in UEFI mode)

     

    So Windows seems to boot pretty well from Unraid.

    However, I have another issue: the Unraid GUI seems to be down.

    I do not know the exact reason. Maybe I allocated too many CPUs to the VM.

     

    Luckily, I can still connect with ssh and perform a graceful shutdown.

    I am almost there...

  11. Indeed, watching other tutorials from SpaceInvader One helped me.

     

    I needed to dump the BIOS from my graphics card using the GPU-Z tool (HOWTO).

    As I have an Nvidia card, I needed to remove the header using the HxD tool (HOWTO).

    Then I put the dump in a path like the following and updated the VM template accordingly:

    • /mnt/user/domains/hardware/GPU/Gigabyte.RTX2070Super.90.04.95.00.86.gpuz.noheader.rom

    Finally, I needed to pass through both the GPU and the sound function of the graphics card using the multifunction method, editing the template in the XML view (HOWTO).

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/user/domains/hardware/GPU/Gigabyte.RTX2070Super.90.04.95.00.86.gpuz.noheader.rom'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/> <!-- Update here -->
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/> <!-- And here -->
        </hostdev>

     

    I made a huge step, as I could boot my bare-metal Windows VM.

    However, my graphics card still does not seem to be recognized, even after updating to the latest Nvidia drivers.

     

    Is the Gigabyte RTX 2070 Super compatible with Unraid?

     

    Sources:

  12. Thank you for the clarification regarding the VFIO-PCI settings.
    Even so, I did not get the difference between "Downstream" and "Multi-function".
    Even though, I did not get the difference between "Downstream" and "Multi-function".

     

    Regarding the drivers, sorry for being unclear.

    I installed all the necessary drivers, including motherboard, graphics card, etc., during the bare-metal Windows install.

    That is why I only needed to deal with the network, because as far as I understand it is the only part that is emulated.

    When I successfully booted the VM for the first time, I had the same resolution as when booting bare metal.

     

    I will have a look at the tutorial you mentioned; I have already watched some, but I guess I missed this one.

    What bothers me is that the display was working just before I installed the VirtIO drivers...

     

  13. Thank you very much for this plugin 👍

     

    At first, I deactivated the "PCIe ACS override" feature and removed the "vfio-pci.ids=1987:5016" line.

    But after I passed through NVME 0, I had the following error at Unraid boot:

    ...
    /etc/rc.d/rc.inet1: line 241: /proc/sys/net/ipv6/conf/eth0/disable_ipv6: No such file or directory
    ...
    IPv4 address: not set
    IPv6 address: not set
    ...

     

    So I had no more network, and even rebooting in safe mode without plugins did not help.

    I had to reboot in GUI mode to deactivate the passthrough of the drive and get back to normal.

     

    So I figured out that I needed to reactivate the "PCIe ACS override" feature.

    By the way, do you know the difference between "Downstream" and "Multi-function"?

    As I did not know, I activated both.

     

    Then I could start the Win10 VM with NVME 0 passed through, as it has its own IOMMU group 🎉

    My graphics card seemed to work well, as I did not need to install the drivers.

    However, I had no internet inside the VM. I figured out that I needed to install the VirtIO drivers, thanks to the default "Windows VirtIO driver ISO" feature.

     

    But after installing the VirtIO drivers, I now get a BLACK SCREEN after rebooting the VM 😩

    I do not know why, because the graphics card passthrough was working well just before.

    Please note that I only installed the missing driver for the network card, from the Windows Device Manager panel, by browsing the ISO.

    I have no onboard graphics.

     

  14. Hi,

     

    I have a Gigabyte X570 AORUS MASTER with 3x NVME slots on board:

    • NVME 0: Used for a bare metal install of Win10
    • NVME 1: Used for a cache drive for Unraid
    • NVME 2: Not used for now

     

    I successfully installed Win10 bare metal
    and correctly configured the cache drive for Unraid.

     

    But when I tried to pass through NVME 0 to Unraid to create a Win10 VM from the bare-metal installation,
    the cache drive disappeared from Unraid.

     

    I suspect that there is only one controller for the 3 NVMe slots,
    and that this controller can be assigned either to Unraid or to a passed-through VM, but not both.

     

    Is there a way to allocate NVME 0 to the VM and NVME 1 to the Unraid cache?

     

    ---

     

    Just in case, here is my configuration.

    Before PCIe ACS override was activated:

    IOMMU group 0:	[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    	[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    	[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    	[1987:5016] 01:00.0 Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
    	[1022:57ad] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
    	[1022:57a3] 03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    	[1022:57a3] 03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    	[1022:57a3] 03:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    	[1022:57a3] 03:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    	[1022:57a4] 03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    	[1022:57a4] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    	[1022:57a4] 03:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    	[1987:5016] 04:00.0 Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
    	[8086:2723] 05:00.0 Network controller: Intel Corporation Wi-Fi 6 AX200 (rev 1a)
    	[8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
    	[10ec:8125] 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 01)
    	[1022:1485] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    	[1022:149c] 08:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    	[1022:149c] 08:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    	[1022:7901] 09:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    	[1022:7901] 0a:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 1:	[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 2:	[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    	[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    	[10de:1e84] 0b:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1)
    	[10de:10f8] 0b:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
    	[10de:1ad8] 0b:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
    	[10de:1ad9] 0b:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
    IOMMU group 3:	[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 4:	[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 5:	[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 6:	[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 7:	[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 8:	[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 9:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
    	[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
    IOMMU group 10:	[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
    	[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
    	[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
    	[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
    	[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
    	[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
    	[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
    	[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
    IOMMU group 11:	[1022:148a] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
    IOMMU group 12:	[1022:1485] 0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    IOMMU group 13:	[1022:1486] 0d:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
    IOMMU group 14:	[1022:149c] 0d:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 15:	[1022:1487] 0d:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

    Settings available for the VM:

    USB Devices: 	
    - Logitech Unifying Receiver (046d:c52b)
    - Integrated Technology Express ITE Device(8595) (048d:8297)
    - Intel Corp. (8087:0029)
    
    Other PCI Devices: 	
    - None available
    

     

     

    After PCIe ACS override is activated:

    IOMMU group 0:	[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 1:	[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 2:	[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 3:	[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 4:	[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 5:	[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 6:	[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 7:	[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 8:	[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 9:	[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 10:	[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 11:	[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 12:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
    	[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
    IOMMU group 13:	[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
    	[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
    	[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
    	[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
    	[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
    	[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
    	[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
    	[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
    IOMMU group 14:	[1987:5016] 01:00.0 Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
    IOMMU group 15:	[1022:57ad] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
    IOMMU group 16:	[1022:57a3] 03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    IOMMU group 17:	[1022:57a3] 03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    IOMMU group 18:	[1022:57a3] 03:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    IOMMU group 19:	[1022:57a3] 03:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    IOMMU group 20:	[1022:57a4] 03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    IOMMU group 21:	[1022:57a4] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    IOMMU group 22:	[1022:57a4] 03:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
    IOMMU group 23:	[1987:5016] 04:00.0 Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
    IOMMU group 24:	[8086:2723] 05:00.0 Network controller: Intel Corporation Wi-Fi 6 AX200 (rev 1a)
    IOMMU group 25:	[8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
    IOMMU group 26:	[10ec:8125] 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 01)
    IOMMU group 27:	[1022:1485] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    IOMMU group 28:	[1022:149c] 08:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 29:	[1022:149c] 08:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 30:	[1022:7901] 09:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 31:	[1022:7901] 0a:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 32:	[10de:1e84] 0b:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1)
    IOMMU group 33:	[10de:10f8] 0b:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
    IOMMU group 34:	[10de:1ad8] 0b:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
    IOMMU group 35:	[10de:1ad9] 0b:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
    IOMMU group 36:	[1022:148a] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
    IOMMU group 37:	[1022:1485] 0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    IOMMU group 38:	[1022:1486] 0d:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
    IOMMU group 39:	[1022:149c] 0d:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 40:	[1022:1487] 0d:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

     

    As you can see, both NVMe drives share the same vendor:device ID [1987:5016] but sit in different IOMMU groups:

    ...
    IOMMU group 14:	[1987:5016] 01:00.0 Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
    ...
    IOMMU group 23:	[1987:5016] 04:00.0 Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
    ...
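    For reference, the same grouping can be read straight from sysfs without the Unraid Tools page. A minimal sketch (standard Linux paths, nothing Unraid-specific; on this machine it would show 01:00.0 in group 14 and 04:00.0 in group 23):

```shell
# Print "IOMMU group N: ADDRESS" for every PCI device, reading the
# standard sysfs layout. Prints nothing if the IOMMU is disabled.
list_iommu_groups() {
    for g in /sys/kernel/iommu_groups/*; do
        [ -d "$g" ] || continue            # no groups => IOMMU off
        for d in "$g"/devices/*; do
            [ -e "$d" ] || continue
            echo "IOMMU group ${g##*/}: ${d##*/}"
        done
    done
}

list_iommu_groups
```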

     

    So I updated the boot sequence to bind that ID to vfio-pci:

    kernel /bzimage
    append pcie_acs_override=downstream,multifunction initrd=/bzroot vfio-pci.ids=1987:5016
    

    Settings available for the VM:

    USB Devices: 	
    - Logitech Unifying Receiver (046d:c52b)
    - Integrated Technology Express ITE Device(8595) (048d:8297)
    - Intel Corp. (8087:0029)
    
    Other PCI Devices: 	
    - Phison Electronics E16 PCIe4 NVMe Controller | Non-Volatile memory controller (01:00.0)
    - Phison Electronics E16 PCIe4 NVMe Controller | Non-Volatile memory controller (04:00.0)
    

    I can now see both NVMe drives in the VM settings, but because vfio-pci.ids matches by vendor:device ID it binds both identical drives, so the cache drive is no longer available to Unraid 😕

     

    Do you know a workaround for this?

     

    ---

     

    Solution:

     

    Bios:

    • Enable virtualization (SVM Mode)
    • Explicitly enable the AMD IOMMU option (the default Auto is not enough)
    • Enable CSM Support
      • Storage Boot Option Control: Legacy Only (i.e. boot Unraid in legacy mode)
      • Other PCI Device ROM Priority: UEFI Only (i.e. boot Windows bare metal in UEFI mode)

     

    Unraid Boot (Main > Flash):

    • Unraid OS
    kernel /bzimage
    append pcie_acs_override=downstream,multifunction initrd=/bzroot
    • Server boot mode: Legacy
    • Permit UEFI boot mode: disable

     

    Settings > VM Manager (Advanced View):

    • PCIe ACS override: Both

     

    Settings > VFIO-PCI Config (plugin):

    • Enable: Group 14, 01:00.0 [1987:5016] Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
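    This is the step that actually solves the duplicate-ID problem: the plugin binds by PCI address rather than by vendor:device ID, so only the drive at 01:00.0 is handed to vfio-pci and the identical drive at 04:00.0 stays usable as the Unraid cache. The plugin records the selection in /boot/config/vfio-pci.cfg; the exact format may differ between Unraid versions, but it is along these lines:

```
BIND=0000:01:00.0|1987:5016
```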

     

    Dump Graphics ROM Bios
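    The ROM used here was dumped with GPU-Z on Windows (hence the "gpuz.noheader" file name). An alternative is the standard Linux sysfs ROM interface, sketched below; the 0b:00.0 address comes from the IOMMU listing above, and it needs root with nothing else using the card:

```shell
# Dump the video BIOS of the GPU at 0b:00.0 through sysfs.
DEV=/sys/bus/pci/devices/0000:0b:00.0
if [ -e "$DEV/rom" ]; then
    echo 1 > "$DEV/rom"              # unlock the ROM for reading
    cat "$DEV/rom" > /tmp/vbios.rom  # copy it out
    echo 0 > "$DEV/rom"              # lock it again
    STATUS="dumped to /tmp/vbios.rom"
else
    STATUS="device not present"      # e.g. running on another machine
fi
echo "$STATUS"
```

    Note that the extra header some guides strip applies to GPU-Z dumps; a sysfs dump normally does not carry one.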

     

    VMS > Windows 10 BareMetal > Edit

    • Bios: OVMF
    • Hyper-V: No
    • VirtIO Drivers ISO: /mnt/user/isos/virtio-win-0.1.173-2.iso
    • Graphics Card: NVIDIA GeForce RTX 2070 SUPER (0b:00.0)
    • Graphics ROM Bios: /mnt/user/domains/hardware/GPU/Gigabyte.RTX2070Super.90.04.95.00.86.gpuz.noheader.rom
    • Sound Card: NVIDIA TU104 HD Audio Controller (0b:00.1)
    • Other PCI Devices: Phison Electronics E16 PCIe4 NVMe Controller | Non-Volatile memory controller (01:00.0)

    XML View

    • <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
      (...)
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
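    For context, those two address lines sit inside the <hostdev> entries for the GPU and its audio function; keeping both on the same virtual slot (0x04 here) with multifunction='on' lets the guest see them as one multifunction card. A sketch using this machine's addresses and the ROM path from above:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/domains/hardware/GPU/Gigabyte.RTX2070Super.90.04.95.00.86.gpuz.noheader.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
</hostdev>
```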

     

    ---

     

    Sources:

     
