testdasi

Everything posted by testdasi

  1. Your diagnostics xml does not have the soundcard passed through. Was it saved correctly? His card is the GTX 1070 which doesn't have USB devices.
  2. Step 1: Change this:
     <os>
       <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
       <boot dev='hd'/>
     </os>
     ...to this:
     <os>
       <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
     </os>
     Step 2: Change this:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
     </hostdev>
     ...to this:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <boot order='1'/>
       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
       </source>
       <boot order='2'/>
       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
     </hostdev>
     PS: not sure which of the 2 NVMe are boot drives but it should jump automatically to the next one if the first one isn't bootable.
  3. Some tips on building with the Node 304 case: You will notice the cable tying points on the upper frame portion. Use cable ties / zip ties to make large loops on those tying points (instead of tying the cables directly onto them). Then use those loops as routing "holes" to route cables above the motherboard and cooler, around the cage. They also work as storage space for any extra long cables you may have; just loop them around these holes. You can run power cables between the HDD cages and the PSU if you need things to go the other way. Of course, the usual under-the-motherboard technique still works. Depending on your PSU, you can even go under the PSU. And don't forget the space between the drive cages. If you ever need a 2.5" SSD, use velcro to stick it anywhere instead of using the drive cage (to save the space for 3.5" HDD). If you want a more secure mount, you can also screw the SSD to the outside of the left and right cages (i.e. not the middle one, as there isn't enough space). You may want to put electrical tape on top of the screw heads if they come into contact with the HDD electronics. Tower coolers are a terrible idea with the 304. You can use the included Intel cooler. It's janky but it's good enough in most cases. For a quiet and still low-profile cooler, I loved the Noctua NH-L9x65.
  4. A few things to try: Try various combinations of SeaBIOS / OVMF and i440fx / Q35 machine types. Switching between SeaBIOS and OVMF may require reinstalling Windows, depending on how it was installed. Switching between i440fx and Q35 will just be slow on initial boot as drivers get reinstalled. Either kind of switching may require you to reactivate Windows, so be mindful of that. Try a dummy HDMI plug (can be found cheap on Amazon). It tricks the GPU into thinking a display is connected, so it initialises itself. The reason the RX 580 works with bare-metal boot is that it's the primary GPU and thus is always initialised at boot.
  5. Glad that it works for you. The reason I asked how you did it was in case it was a bug in Unraid (because it should be using virtio by default). If it's just old info online then that's fine.
  6. Retry the xml in this post. It has all 3 devices. If the sound doesn't work with that xml then the onboard soundcard can't be reset which means you have to reboot the whole server to reboot Windows.
  7. Try this one and see if it works. If it still doesn't then could just be that it can't be passed through. Your other issue should be a different topic. Remember to attach diagnostics (Tools -> Diagnostics -> attach zip file). But before posting, if the xml below doesn't work then make sure to remove the onboard sound card, save and see if the 99% idling still persists. Having devices that aren't kosher can cause strange CPU load (as the driver load hangs). There's no need to close this topic. It's not solved so just leave it open so people know it isn't resolved. <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' id='8'> <name>Windows 10 test</name> <uuid>0ee115c5-e58e-0dcf-b624-04d9f5f62185</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>4194304</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='4'/> <vcpupin vcpu='1' cpuset='12'/> <vcpupin vcpu='2' cpuset='5'/> <vcpupin vcpu='3' cpuset='13'/> <vcpupin vcpu='4' cpuset='6'/> <vcpupin vcpu='5' cpuset='14'/> <vcpupin vcpu='6' cpuset='7'/> <vcpupin vcpu='7' cpuset='15'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-4.2'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/0ee115c5-e58e-0dcf-b624-04d9f5f62185_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='4' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer 
name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/cache/Vdisks/Windows 10/vdisk1.img' index='2'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso' index='1'/> <backingStore/> <target dev='hdb' bus='sata'/> <readonly/> <alias name='sata0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> 
<controller type='pci' index='5' model='pcie-to-pci-bridge'> <model name='pcie-pci-bridge'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x14'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0x15'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0x8'/> <alias name='pci.8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:44:11:6f'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' 
bus='0x02' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-8-Windows 10 test/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x0c' slot='0x00' function='0x3'/> </source> <alias name='hostdev4'/> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/> </source> <alias name='hostdev5'/> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </hostdev> <hostdev 
mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x045e'/> <product id='0x0023'/> <address bus='1' device='2'/> </source> <alias name='hostdev6'/> <address type='usb' bus='0' port='2'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x045e'/> <product id='0x0048'/> <address bus='1' device='3'/> </source> <alias name='hostdev7'/> <address type='usb' bus='0' port='3'/> </hostdev> <memballoon model='none'/> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain>
  8. You might want to post a separate topic with your details and preferably attach your diagnostics (Tools -> Diagnostics -> attach zip file). It would also be helpful to copy-paste your xml and the PCI Devices section of Tools -> System Devices. When copy-pasting text from Unraid, please use the forum code functionality (the </> button next to the smiley button) so the text is sectioned and formatted correctly. In the meantime, watch SpaceInvader One's tutorials on YouTube. They will clear up some of the questions you have.
  9. You are probably using the VFIO-PCI.CFG file to stub devices. It will automatically stub all devices in the same IOMMU group, so if you tick the GPU without ACS Override, all devices in the same group will be stubbed together. That's why they start to show up in Other PCI Devices. Btw, ACS Override on its own has nothing to do with your issue. Try removing the onboard audio (2f:00.4). There have been reports of unsuccessful attempts at passing through onboard audio lately. Also, how did you get your vbios? Are you 100% sure it's the right one? You seem to have a Quadro (P400?) in your system, so I assume that's what Unraid boots with, isn't it? If that's the case, remove the vbios and try again. A wrong vbios is worse than no vbios. Since Unraid doesn't boot with the RTX, it shouldn't have been initialised and thus there's no need for a vbios to reset it. Failing that (assuming all 4 devices are passed through), dump your own vbios. And copy-paste your latest xml on here so everyone is on the same page.
  10. Next time you copy-paste XML, please use the forum code functionality (the </> button next to the smiley button) so the code is formatted correctly. Try this new xml and report back. <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' id='8'> <name>Windows 10 test</name> <uuid>0ee115c5-e58e-0dcf-b624-04d9f5f62185</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>4194304</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='4'/> <vcpupin vcpu='1' cpuset='12'/> <vcpupin vcpu='2' cpuset='5'/> <vcpupin vcpu='3' cpuset='13'/> <vcpupin vcpu='4' cpuset='6'/> <vcpupin vcpu='5' cpuset='14'/> <vcpupin vcpu='6' cpuset='7'/> <vcpupin vcpu='7' cpuset='15'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-4.2'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/0ee115c5-e58e-0dcf-b624-04d9f5f62185_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='4' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/cache/Vdisks/Windows 10/vdisk1.img' index='2'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot 
order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso' index='1'/> <backingStore/> <target dev='hdb' bus='sata'/> <readonly/> <alias name='sata0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-to-pci-bridge'> <model name='pcie-pci-bridge'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x14'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' 
bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0x15'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0x8'/> <alias name='pci.8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:44:11:6f'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' 
path='/var/lib/libvirt/qemu/channel/target/domain-8-Windows 10 test/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/> </source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0' multifunction='on'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x0c' slot='0x00' function='0x2'/> </source> <alias name='hostdev3'/> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x1'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x0c' slot='0x00' function='0x3'/> </source> <alias name='hostdev4'/> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x2'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' 
bus='0x0b' slot='0x00' function='0x3'/> </source> <alias name='hostdev5'/> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x045e'/> <product id='0x0023'/> <address bus='1' device='2'/> </source> <alias name='hostdev6'/> <address type='usb' bus='0' port='2'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x045e'/> <product id='0x0048'/> <address bus='1' device='3'/> </source> <alias name='hostdev7'/> <address type='usb' bus='0' port='3'/> </hostdev> <memballoon model='none'/> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain>
  11. If your mobo's IOMMU grouping is like mine (likely) then IIRC the short M.2 slot, the bottom (4th) PCIe slot, the x1 slot and the various LAN adapters are in the same group. The other 2 M.2 slots and the 2nd PCIe slot are in the same group. Each of the remaining PCIe slots is in its own group. In short, without ACS Override, you can only pass through 2 M.2 to the same VM. If you want to pass each M.2 to a separate VM then you need ACS Override multifunction, which isn't a bad thing - all the ACS Override security concerns are irrelevant to home users. 4x 2.5" bays can be used for plenty of things (especially if they can accommodate 15mm thick drives). For example: SATA SSDs as SSDs in the array, or even a pseudo array pooled using mergerfs (the mergerfs option allows trim); 2.5" form factor U.2 SSDs (e.g. Optane and mostly enterprise-class SSDs), which are relatively cheap on the used market due to much lower demand; or 5TB Seagate Barracudas mounted as unassigned (and pooled using mergerfs if you have multiple of them) for slow storage. I have run all 3 configs at various points. The key is whether you have a need for it or not. A cache pool of 4x SSD will have to run RAID-0, which I generally don't recommend, or RAID-10, which wastes 50% of the space.
  12. First and foremost, there's a bug with multi-drive btrfs cache pools that causes excessive CPU usage under load. Not sure if the Corsair is affected but that's something to keep in mind. Even without the bug, you don't need to have all 3 NVMe in the cache pool. Most users actually don't really need a RAID cache pool (which is what a multi-drive cache pool would be). 480GB is more than enough for cache (e.g. docker image, libvirt and a few vdisks), so you can run a single-drive cache pool. Then if you have heavy write activity (e.g. download temp), you can mount the other NVMe as unassigned and leave it empty most of the time, except when it's being written to. That would increase its lifespan. Some may say it's a waste to have an NVMe for this, but the flip side is that it saves you SATA ports (which you would presumably need for slow HDDs in the array), and NVMe is less likely to hang your system under heavy IO compared to SATA (due to NVMe being designed from the ground up to support parallelism). The remaining 480GB can be passed through as a PCIe device to your main VM for maximum performance. Note that pass-through means exclusive use by the VM, i.e. you won't be able to share it among multiple VMs at the same time (and certainly not use it as cache). The caveat is I'm not sure if the Corsair Phison controller would be happy with being passed through as a PCIe device. The only issues I know of are with the SM2263 controller and the Intel 660p. I think the Phison problem has been resolved but I don't have one to test. How big is the 970 Evo? Have you considered using it in the Unraid server, e.g. passing it through to the VM (because I know for sure the 970 Evo can be passed through)? With regards to 2.5" HDDs, they will be slower, but not because 2.5" is inherently slower. There are fast 2.5" HDDs, but most 2.5" HDDs are 5400rpm and the highest capacity ones (e.g. the 5TB Seagate BarraCuda) are SMR, so it's a double whammy. Note that you need to check your case's support for thick (10mm and 15mm) 2.5" drives. Many cases are designed to support only 7mm thickness as that's the common SSD thickness.
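To illustrate the pass-through-as-PCIe-device option above: the NVMe controller goes into the VM xml as a vfio hostdev. A minimal sketch only — the host bus address 0x0a below is a placeholder; take the real one from Tools -> System Devices:

```xml
<!-- Sketch only: pass an NVMe controller through as a PCIe device.
     The host source address (bus 0x0a) is hypothetical. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

If you leave out the guest-side <address> element, libvirt will assign one automatically on the next VM start.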
  13. The RTX 2070 has 4 devices (2d:00.0 - 3). You need to pass through all 4 (preferably with matching xml function) together for it to have any chance of working. Your current config only passes through function 0 and 1. You missed the 2 USB devices (function 2 and 3).
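As a sketch of what the post above describes — all four functions of host device 2d:00 mapped onto one guest slot with matching function numbers. Only the host addresses (2d:00.0 - 3) come from the post; the guest bus 0x06 and the function labels in the comments are assumptions:

```xml
<!-- Sketch only: all 4 functions of the RTX 2070 (host 2d:00.0 - 2d:00.3)
     mapped to one guest bus/slot with matching functions.
     The guest bus 0x06 is a placeholder. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x2d' slot='0x00' function='0x0'/> <!-- VGA -->
  </source>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x2d' slot='0x00' function='0x1'/> <!-- HDMI audio -->
  </source>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x2d' slot='0x00' function='0x2'/> <!-- USB controller -->
  </source>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x2d' slot='0x00' function='0x3'/> <!-- USB-C / UCSI -->
  </source>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x3'/>
</hostdev>
```

Note that only function 0 carries multifunction='on'; the others share its guest bus and slot.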
  14. You don't "have to" have a graphics card for Unraid to boot with, at least not in all cases. It very much depends on the graphics card you plan to pass through to the VM. Some can be passed through as the primary / only GPU (e.g. I'm currently running my GTX 1070 as the only GPU in the system and it's passed through to my main workstation VM). Some won't work as pass-through if Unraid boots with them (e.g. RX 580). In terms of your other question, which chipset are you talking about? As far as I know, the 3950X doesn't come with an integrated GPU and none of the consumer mobos have integrated graphics. It depends on the combination of mobo hardware and BIOS firmware.
  15. Terrible idea! Just because you can mount an SMB share for local access doesn't mean you should put a vdisk there for the VM. You need to rethink the arrangement.
  16. Immediate problem: 3x Corsair + 970 Evo = 4 NVMe M.2. There are only 3 M.2 slots on the mobo.
  17. Would be useful to attach diagnostics (Tools -> Diagnostics -> attach zip). Also turn on syslog mirroring (Settings -> Syslog server) so next time it hangs and you force reboot, the log before the reboot is preserved.
  18. Hmm, copy-paste your latest xml on here please.
  19. It just saves a bit of time not having to extract diagnostics zip if the issue is easily fixable / identifiable from the xml. 😅
  20. A lot of the paranoia over SSD wear (and the resulting recommendations) was justifiable before TRIM was a thing, and especially before the advancement of vertical NAND aka 3D TLC. Nowadays, SSDs are way more resilient and capable of surviving beyond their rated endurance. Even when they fail as the cells die and the reserve cells are used up, SSDs tend to fail gracefully, leaving users plenty of time to find a replacement (with the exception of Intel, which will lock the SSD in a read-only state if all reserve cells are used up - but that would take a very long time anyway). All of the excessive wear cases I have seen on here were either (a) a system issue or (b) user error. An ongoing example of (a) is the bug report with btrfs writing constantly to the cache pool at about 5MB/s or so despite (supposedly) no activity. 5MB/s translates to about 500GB / month, which would be excessive wear in the sense that it is on top of normal usage. I have several SSDs that average 250-500GB written per WEEK under normal usage and they are still refusing to die. User error would be things like not running trim often enough (or not running trim at all, e.g. in the case of ata-id pass-through), mixing write-heavy and static data on the same SSD, etc. In your particular case, you can put the 970 Evo in as a single-drive cache pool, mount the 250GB as unassigned for write-heavy data (e.g. download temp), run trim frequently and Bob's your uncle. (Tip: set the default file system as xfs; if Unraid still forces you to format the cache as btrfs then you have NOT set up the cache pool correctly as single-drive. Unraid forces btrfs for a multi-drive cache pool even if only a single drive is assigned.)
  21. When copy-pasting text from Unraid, use the forum code functionality (the </> button next to the smiley button) so the text is sectioned and formatted correctly. You need to provide the full xml. Also copy-paste your syslinux config and the PCI Devices section in Tools -> System Devices. It would also be useful to include the diagnostics zip (Tools -> Diagnostics, attach zip file).
  22. Pass-through = exclusive use. Vdisk = NOT pass-through = can be shared, i.e. you can put multiple vdisk files on the same NVMe. My point was that if you use vdisks then there's no need to buy an additional SATA SSD to use as cache. Just put the NVMe in the cache pool and put the vdisks on the cache (which is the NVMe) to simplify things.
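A minimal sketch of that vdisk arrangement — the file path is an example, not the poster's actual share layout:

```xml
<!-- Sketch only: a vdisk file stored on the cache (i.e. on the NVMe).
     The source file path is an example; adjust to your shares. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```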
  23. First try changing slot='0x01' to slot='0x00' for the 3 hostdev blocks you posted (i.e. the 3 0c devices). It's rather unusual to have slot 01 in a pass-through (those slots are typically only non-zero in the PCIe slot config up top of the xml). That is the most likely cause of their not showing up in Windows Device Manager, because there's no possible config for the 2nd slot of a single-slot device (Linux counts from zero, so slot='0x01' means the 2nd slot). I'm not too optimistic about your passing through the onboard audio though, since I have noticed unsuccessful attempts on here with the X4xx chipsets. PS: There's no need to obscure the MAC address on the vm template screenshot. 😅 52:54 MACs are custom addresses (not dissimilar to 192.168.x.x custom IP addresses) so they don't actually point to any of your physical devices. It's just used as a virtual address for your virtual network adapter. And you already revealed it in your xml copy-paste, so obscuring it on the screenshot was to no avail.
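In other words, for each of the 3 hostdev blocks the guest-side address line changes like this (the bus 0x07 here is a placeholder; keep whatever bus your template already uses):

```xml
<!-- Before (invalid: asks for the 2nd slot of a single-slot device): -->
<address type='pci' domain='0x0000' bus='0x07' slot='0x01' function='0x0'/>
<!-- After: -->
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
```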
  24. What do you mean by "does not show up in Device Manager in Windows"? A screenshot would be helpful. Also, in your template you are passing through the multi-function stack 0c to multiple slots instead of multiple functions. Is there any particular reason for that? If you want to match the physical arrangement then you need to match by using multiple functions, not multiple slots. That could be the reason for your issue.