SteelCityColt

Everything posted by SteelCityColt

  1. I've been using the latest RC, 6.9.35, but want to go back to stable. However, when I select the option from the dropdown, the page refreshes without showing the stable version; the most I can roll back to is 6.9.30. Any ideas?
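If the dropdown won't offer the stable build, a manual downgrade is possible by restoring the stable release's bz* files to the flash drive. A minimal sketch, assuming you've downloaded the stable release zip from the unRaid site onto the flash drive, the flash is mounted at /boot, and the zip filename shown here is illustrative (back up the flash first):

```shell
# Keep a copy of the current kernel/rootfs files so the RC can be restored.
mkdir -p /boot/previous
cp /boot/bz* /boot/previous/
# Overwrite the bz* files with the ones from the stable release zip,
# then reboot into the stable build.
unzip -o /boot/unRAIDServer-6.8.3-x86_64.zip 'bz*' -d /boot
reboot
```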
  2. Good news: I narrowed it down to a 12V fuse that seems to have blown on every PCB. So far I've managed to remove one and solder on a replacement, and the drive is alive and seen again in unRaid. I think I'm back in business! Lesson learned about PSUs and modular cables.
  3. I now know not to do this! Learning the hard way. Hoping it's just blown fuses, and I have the skills to swap them out. I'll post the results of the effort in a few days.
  4. Thanks for the reply. Those were the connectors I was already using to power the drives. Further investigation after taking 2 of the PCBs off the back shows it looks like the 12V fuses have blown. Going to have a go at soldering on a replacement to see what that does. The only thing I can think of then is either that the PSU is faulty, or that when I first plugged the drives in I used the cables from the previous modular PSU. Apparently this can cause issues.
  5. So I'm not really sure what has happened or why. I needed to change my hardware, and everything was going swimmingly until no hard drives showed up when I fired up the new mobo/CPU. unRAID booted fine, but nothing was showing up barring the 2 cache drives (NVMes). A bit of head scratching until I realised that having an M.2 in the slot shuts down the PCIe slot I was using for my HBA. Cool! Except it now seems 80% of my drives are dead as a dodo. Out of 4 x WD 8TB, 2 x WD 10TB, 1 x WD 6TB, and a 240GB SSD, I can only get the 10TBs to power up. Everything else is dead. I've tried different cables/PSUs, and even the original WD adapter from the external caddies, and nothing. Weirdly, the 2 x 10TBs were plugged into separate power cables, so it's not even like they were isolated. I have a sickening feeling that I've just lost everything and am seriously out of pocket. Does anyone have bright ideas before I just give up and mourn?
  6. Due to issues with my AMD mobo (ASROCK TRX40 Creator) not playing nicely with USB passthrough, I've only been able to pass through one USB-C port on the mobo itself, plus a standalone PCIe USB controller. This all works, but I've seen some very strange behaviour with USB devices:
Two separate USB hubs, one plugged into the USB-C port and one into the USB 3.0 card, have both died within days of being plugged in. Could be coincidence, but it seems strange.
Plugging in an external DAC and using it in 5.1 mode throws a "not enough USB controller resources" error. That's even when nothing else USB is plugged in, and I've tried both controllers.
When using MS Teams, as soon as I turn on my Bluetooth headset and it switches to "headset" rather than "headphones" for audio, my keyboard stops working. It's powered, and Logitech Options still thinks it's connected, but it won't register keystrokes. As soon as I switch my Teams audio to another device it works fine again.
Passed-through devices such as the NVMe drive, the VGA controller etc. all show up as ejectable USB devices; is this normal?
A quick tentative experiment with the 6.9 beta seemed to show it resolves the issue around passing through other onboard USB, and also allows proper passthrough of the onboard audio (rather than as a "Generic USB Audio"), so I'm half tempted to try it, but I'm twitchy as this is my one and only server, not a test bed. Frustrating when I feel I'm close to the W10 VM being a viable daily driver/gamer. 
<domain type='kvm' id='6'> <name>windybeaver</name> <uuid>e96f9c1e-794e-866d-85a7-fa0ee9339a54</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>33554432</memory> <currentMemory unit='KiB'>33554432</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>30</vcpu> <cputune> <vcpupin vcpu='0' cpuset='3'/> <vcpupin vcpu='1' cpuset='27'/> <vcpupin vcpu='2' cpuset='4'/> <vcpupin vcpu='3' cpuset='28'/> <vcpupin vcpu='4' cpuset='5'/> <vcpupin vcpu='5' cpuset='29'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='30'/> <vcpupin vcpu='8' cpuset='7'/> <vcpupin vcpu='9' cpuset='31'/> <vcpupin vcpu='10' cpuset='8'/> <vcpupin vcpu='11' cpuset='32'/> <vcpupin vcpu='12' cpuset='9'/> <vcpupin vcpu='13' cpuset='33'/> <vcpupin vcpu='14' cpuset='10'/> <vcpupin vcpu='15' cpuset='34'/> <vcpupin vcpu='16' cpuset='11'/> <vcpupin vcpu='17' cpuset='35'/> <vcpupin vcpu='18' cpuset='18'/> <vcpupin vcpu='19' cpuset='42'/> <vcpupin vcpu='20' cpuset='19'/> <vcpupin vcpu='21' cpuset='43'/> <vcpupin vcpu='22' cpuset='20'/> <vcpupin vcpu='23' cpuset='44'/> <vcpupin vcpu='24' cpuset='21'/> <vcpupin vcpu='25' cpuset='45'/> <vcpupin vcpu='26' cpuset='22'/> <vcpupin vcpu='27' cpuset='46'/> <vcpupin vcpu='28' cpuset='23'/> <vcpupin vcpu='29' cpuset='47'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-4.2'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/e96f9c1e-794e-866d-85a7-fa0ee9339a54_VARS-pure-efi.fd</nvram> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='15' threads='2'/> <cache mode='passthrough'/> <feature 
policy='require' name='topoext'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso' index='1'/> <backingStore/> <target dev='hdb' bus='sata'/> <readonly/> <alias name='sata0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x8'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x9'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> 
<target chassis='3' port='0xa'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0xb'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0xc'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0xd'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0xe'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0xf'/> <alias name='pci.8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/> </controller> <controller type='pci' index='9' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='9' port='0x10'/> <alias name='pci.9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:02:83:9d'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias 
name='net0'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-windybeaver/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'> <alias name='input0'/> </input> <input type='keyboard' bus='ps2'> <alias name='input1'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <rom file='/mnt/cache/appdata/vbios/MSI.GTX1660Super.6144.191113.rom'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x2'/> </source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x3'/> </source> <alias name='hostdev3'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </hostdev> <hostdev 
mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x44' slot='0x00' function='0x0'/> </source> <alias name='hostdev4'/> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x4c' slot='0x00' function='0x0'/> </source> <alias name='hostdev5'/> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x4d' slot='0x00' function='0x0'/> </source> <alias name='hostdev6'/> <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x26ce'/> <product id='0x0a01'/> <address bus='11' device='3'/> </source> <alias name='hostdev7'/> <address type='usb' bus='0' port='1'/> </hostdev> <memballoon model='none'/> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain> unbeaver-diagnostics-20201104-0924.zip
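For "not enough USB controller resources" errors like the one above, it can help to see how devices map onto controllers before passing anything through. A quick check from the unRaid host shell (standard Linux tools, nothing unRaid-specific):

```shell
# Tree view of the USB topology: which root hub / controller each
# device and hub hangs off, plus the speed of each port.
lsusb -t
# List the host's USB controllers with PCI addresses and vendor:device IDs.
lspci -nn | grep -i usb
```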
  7. After a bit of a journey I've finally got my first W10 VM up and running, but I have one slight issue. Every time I do anything that pauses the machine, when I go to "resume" I get this "internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required". I've tried recreating from scratch to see if it was just a quirk of the VM but same thing. Aside from that everything else is working well. Any ideas? VM: <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>windybeaver</name> <uuid>d5d8367b-d7ca-b0c0-c5df-37cf1b4eede4</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>33554432</memory> <currentMemory unit='KiB'>33554432</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>24</vcpu> <cputune> <vcpupin vcpu='0' cpuset='6'/> <vcpupin vcpu='1' cpuset='30'/> <vcpupin vcpu='2' cpuset='7'/> <vcpupin vcpu='3' cpuset='31'/> <vcpupin vcpu='4' cpuset='8'/> <vcpupin vcpu='5' cpuset='32'/> <vcpupin vcpu='6' cpuset='9'/> <vcpupin vcpu='7' cpuset='33'/> <vcpupin vcpu='8' cpuset='10'/> <vcpupin vcpu='9' cpuset='34'/> <vcpupin vcpu='10' cpuset='11'/> <vcpupin vcpu='11' cpuset='35'/> <vcpupin vcpu='12' cpuset='18'/> <vcpupin vcpu='13' cpuset='42'/> <vcpupin vcpu='14' cpuset='19'/> <vcpupin vcpu='15' cpuset='43'/> <vcpupin vcpu='16' cpuset='20'/> <vcpupin vcpu='17' cpuset='44'/> <vcpupin vcpu='18' cpuset='21'/> <vcpupin vcpu='19' cpuset='45'/> <vcpupin vcpu='20' cpuset='22'/> <vcpupin vcpu='21' cpuset='46'/> <vcpupin vcpu='22' cpuset='23'/> <vcpupin vcpu='23' cpuset='47'/> </cputune> <os> <type arch='x86_64' machine='pc-q35-4.2'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/d5d8367b-d7ca-b0c0-c5df-37cf1b4eede4_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks 
state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='12' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/cache/domains/windybeaver/vdisk1.img'/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0xa'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0xb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0xc'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model 
name='pcie-root-port'/> <target chassis='6' port='0xd'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0xe'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:0a:59:67'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' 
function='0x0'/> </source> <rom file='/mnt/cache/appdata/vbios/MSI.RX580.8192.190625.rom'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x4c' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x4d' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x26ce'/> <product id='0x0a01'/> </source> <address type='usb' bus='0' port='1'/> </hostdev> <memballoon model='none'/> </devices> </domain> unbeaver-diagnostics-20201024-2152.zip
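Since the error message itself says a reset is required, a possible workaround (not a fix for the underlying pause problem) is to reset or cold-restart the guest from the host shell instead of clicking resume; the domain name below matches the XML above:

```shell
# Reset the guest: roughly the same as pressing a hardware reset button.
virsh reset windybeaver
# If the domain still won't run, force it off and start it fresh.
virsh destroy windybeaver
virsh start windybeaver
```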
  8. @bastl Sorry, I didn't twig straight away that you're on a different mobo. I don't have the equivalent controller to pass through, sadly. So experimentation has shown that the 6.9 beta solves all the issues, but I'm a bit twitchy about running a beta on my main server. I've got a PCIe USB card landing today that I'll try, and maybe use as a temporary solution until 6.9 stable is released.
  9. Apologies for jumping in on this topic. I'm slowly losing my mind with my TRX40 board (ASRock TRX40 Creator) and 3960X combo. I want to be able to pass through a GPU (the only one in the system), a USB controller, an NVMe, and the onboard sound. Every time I tried to start the VM it killed unRaid to the point of needing a hard reboot. It's also killed the flash drive twice, requiring a rebuild. Things I've worked out so far by trial and error and lots of VM config tests:
1) The original GPU (Vega 56) just wasn't having it. Swapping to an RX580 works.
2) Passing through the NVMe controller using the VFIO plugin works fine.
3) As soon as I try to pass through either the sound or the USB controller it kills everything. With the USB I've tried isolating the group with the USB controller and also the non-essential instrumentation and then passing them through (with no sound). Kills unRaid.
4) I tried passing through just the sound, both as a soundcard without using the plugin and also using the plugin. Kills unRaid.
Where am I going wrong?! IOMMU groups below and diagnostics attached.
IOMMU group 0: [1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 1: [1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge IOMMU group 2: [1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 3: [1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 4: [1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 5: [1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 6: [1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 7: [1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. 
[AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 8: [1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 9: [1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 10: [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61) [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51) IOMMU group 11: [1022:1490] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 0 [1022:1491] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 1 [1022:1492] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 2 [1022:1493] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 3 [1022:1494] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 4 [1022:1495] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 5 [1022:1496] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 6 [1022:1497] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 7 IOMMU group 12: [1002:67df] 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7) [1002:aaf0] 01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] IOMMU group 13: [1022:148a] 02:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function IOMMU group 14: [1022:1485] 03:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP IOMMU group 15: [1022:148c] 03:00.3 USB controller: Advanced Micro Devices, Inc. 
[AMD] Starship USB 3.0 Host Controller IOMMU group 16: [1022:1482] 20:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 17: [1022:1482] 20:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 18: [1022:1482] 20:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 19: [1022:1483] 20:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge IOMMU group 20: [1022:1482] 20:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 21: [1022:1482] 20:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 22: [1022:1482] 20:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 23: [1022:1484] 20:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 24: [1022:1482] 20:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 25: [1022:1484] 20:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 26: [1000:0072] 21:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03) IOMMU group 27: [1022:148a] 22:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function IOMMU group 28: [1022:1485] 23:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP IOMMU group 29: [1022:1486] 23:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP IOMMU group 30: [1022:148c] 23:00.3 USB controller: Advanced Micro Devices, Inc. 
[AMD] Starship USB 3.0 Host Controller IOMMU group 31: [1022:1487] 23:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller IOMMU group 32: [1022:1482] 40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 33: [1022:1483] 40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge IOMMU group 34: [1022:1483] 40:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge IOMMU group 35: [1022:1483] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge IOMMU group 36: [1022:1482] 40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 37: [1022:1482] 40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 38: [1022:1482] 40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 39: [1022:1482] 40:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 40: [1022:1482] 40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 41: [1022:1484] 40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 42: [1022:1482] 40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 43: [1022:1484] 40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 44: [1022:57ad] 41:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream IOMMU group 45: [1022:57a3] 42:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge IOMMU group 46: [1022:57a3] 42:02.0 PCI bridge: Advanced Micro Devices, Inc. 
[AMD] Matisse PCIe GPP Bridge IOMMU group 47: [1022:57a3] 42:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge IOMMU group 48: [1022:57a3] 42:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge IOMMU group 49: [1022:57a3] 42:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge IOMMU group 50: [1022:57a4] 42:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:1485] 48:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:149c] 48:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c] 48:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller IOMMU group 51: [1022:57a4] 42:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:7901] 49:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51) IOMMU group 52: [1022:57a4] 42:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:7901] 4a:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51) IOMMU group 53: [1987:5012] 43:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01) IOMMU group 54: [1b21:3242] 44:00.0 USB controller: ASMedia Technology Inc. Device 3242 IOMMU group 55: [1d6a:07b1] 45:00.0 Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02) IOMMU group 56: [8086:2723] 46:00.0 Network controller: Intel Corporation Wi-Fi 6 AX200 (rev 1a) IOMMU group 57: [10ec:8125] 47:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL8125 2.5GbE Controller (rev 01) IOMMU group 58: [1987:5012] 4b:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01) IOMMU group 59: [1987:5012] 4c:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01) IOMMU group 60: [1022:148a] 4d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function IOMMU group 61: [1022:1485] 4e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP IOMMU group 62: [1022:1482] 60:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 63: [1022:1482] 60:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 64: [1022:1482] 60:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 65: [1022:1482] 60:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 66: [1022:1482] 60:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 67: [1022:1482] 60:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 68: [1022:1484] 60:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 69: [1022:1482] 60:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge IOMMU group 70: [1022:1484] 60:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] IOMMU group 71: [1022:148a] 61:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function IOMMU group 72: [1022:1485] 62:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. 
[AMD] Starship/Matisse Reserved SPP diagnostics-20201022-1404.zip
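For findings 3) and 4) above, it may be worth confirming how the USB controller is actually detached from the host. On unRaid the VFIO plugin normally handles this, but the generic sysfs equivalent looks roughly like the sketch below. The address 0000:23:00.3 is taken from IOMMU group 30 in the listing above; driver_override is used rather than new_id because both Starship USB controllers share the same 1022:148c ID, and only this one should move to vfio-pci:

```shell
# Tell the PCI core that this specific device should bind to vfio-pci.
echo vfio-pci > /sys/bus/pci/devices/0000:23:00.3/driver_override
# Detach it from its current driver (e.g. xhci_hcd), if one is bound.
echo 0000:23:00.3 > /sys/bus/pci/devices/0000:23:00.3/driver/unbind
# Re-probe so vfio-pci picks the device up.
echo 0000:23:00.3 > /sys/bus/pci/drivers_probe
```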
  10. I'm taking my first tentative steps into a one-box solution, but I'm struggling with my first attempt at a Windows 10 VM. I have watched a few Spaceinvader One videos, but every time I try to start the VM... it breaks unRaid. Not a hard crash, but the UI gets much slower and the VM/VM Manager pages refuse to load. If I go to the dashboard, the VM in question will show as paused. I'm starting to suspect the issue is trying to pass through the sole GPU (Vega 56). I have tried switching my mobo (ASRock TRX40 Creator) into legacy mode and passing through with a VBIOS, but no dice. Before I keep banging my head, am I attempting the impossible?
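For anyone landing on this thread later: the usual first step for sole-GPU passthrough is to stop unRaid claiming the card at boot by binding it to vfio-pci. A rough sketch of the kernel-parameter approach — the device IDs below are examples for a Vega 56 and its HDMI audio function, so confirm your own first:

```
# Check the GPU's vendor:device IDs (both the VGA and its audio function):
#   lspci -nn | grep -iE 'vga|audio'
#
# /boot/syslinux/syslinux.cfg — add vfio-pci.ids to the "append" line of the
# default boot entry (IDs shown are illustrative):
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1002:687f,1002:aaf8 initrd=/bzroot
```

After a reboot the card should show as bound to vfio-pci in the system devices page, leaving it free for the VM.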
  11. I'm currently running unRAID on an HP Microserver, which has served me well, but I'm considering migrating into a bigger chassis as the 4-drive bay is a bit limiting. Current specs: Intel E3-1265 V2 (I jury-rigged some active cooling, but it still runs a bit warm because the chassis is so condensed and full of gear), 16GB RAM, 3 x 8TB WD Reds, 1 x 6TB WD Red (16TB used of 22TB, but my hoarding is increasing exponentially), and an HP Smart Array P222 off which I run 4 x SSDs: 2 for cache in a pool, 2 for VMs. Aside from unRAID itself, it currently runs 1 x Ubuntu VM (I would like to spin up some more) and various dockers (Plex, Sonarr, Radarr, Nextcloud etc.). I'm also considering upgrading my desktop, and I was wondering whether I could combine the two into one box and run my desktop as a Windows 10 VM. Although I do game, it's less FPS, more 4X, or games such as Skylines where RAM is the main limiting factor. Current desktop spec: i5-4690K, 16GB RAM, Radeon Vega 56 — probably going to hand this down to my gf, as she's starting to grumble about the lack of a dedicated GPU on her laptop. My current thought is to go for a 3960X, run unRAID as the bare metal, and make VMs for a Windows 10 daily driver/gaming machine, possibly the existing headless Ubuntu VM, an HTPC/living-room gaming machine, and further VMs as and when I feel the need to experiment with stuff. Current questions/hesitancy: Is it fundamentally a good idea, or would I just be better off dropping the money on a desktop and a new lower-spec server for unRAID? Mobo choice for the 3960X, with ease of passthrough being the main consideration. ECC or not for RAM? Leaning towards yes. Case choice: rack or tower, hot swap or not? The original thought was to put the box in a rack, out of sight out of mind, but I'm also slightly tempted to put it in a tower so I can gaze in wonder at the money I've burnt. Best rack contender so far: https://www.xcase.co.uk/collections/4u-rackmount-cases/products/xk465f2-4u-rackmount-chassis-e-atx-120mm-fan-wall
  12. I am indeed an idiot. Thank you, back into the WebGUI now! However, as soon as I set one NIC to a static IP again, it crashed the WebGUI and won't let me back in. Running ifconfig, I noticed there are some legacy interfaces still set up. How do I remove these for good?
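For later readers: unRaid keeps its persistent network settings on the flash drive, so one way to clear leftover interfaces is to drop the saved config and let it regenerate. A sketch from the local console — the bridge name br1 is an example, and the path assumes a stock install:

```shell
# Tear down a leftover bridge by hand (name assumed; list them with: ip link show)
ip link set br1 down
ip link delete br1

# Remove the saved network config from the flash drive and reboot, so unRaid
# falls back to a clean default (single-NIC DHCP) configuration
rm /boot/config/network.cfg
reboot
```

Deleting network.cfg is the blunt instrument: you lose any static/bonding/VLAN settings along with the stale ones, and reconfigure from the WebGUI afterwards.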
  13. Logged in via the console on the server to delete said files... Guessing that's not right?! Or am I being dumb?
  14. Without wishing to necro this... the problem has got worse. Messing around, I set each NIC in turn to automatic. When I got to eth2, the web GUI hung and I wasn't able to get back in. Reset the bare-metal server, and still no dice. Going in via the iLO console, I can see unRaid boot up as normal, aside from "device br1 not found" (paraphrasing, as I'm not in front of the machine), and it tells me that the server is on 192.168.0.21. But I can't ping/SSH this IP, or any of the other NICs. I can see them asking for, and receiving, leases from the DHCP server though. Am I better off biting the bullet and redoing the flash drive?
  15. Not sure why this has suddenly gone cockeyed, but I normally set my NICs up as static from the unRaid side. When I do this, though, the routing seems to go a bit weird. Set to static, with the router as gateway and DNS, I get this in routing, and the interface shows as down with the message "check cable". If I, say, change eth0 to automatic (and use the router to set the static IP by MAC), the routing updates to this and I can break out correctly via the router. Here's the example network setting for one of the NICs: All NICs are plugged into a flat switch into the LAN port of said router. It's not the biggest thing in the world, as I can set statics from the router, but it's annoying me that I don't get what I'm doing wrong.
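When a static setup "breaks out" only after switching back to DHCP, it's usually the default route that went missing. A few console checks that narrow it down — interface name and addresses below are examples for a typical br0 setup:

```shell
# Is there a "default via <router>" line at all?
ip route show

# Did the static IP actually land on the bridge?
ip addr show br0

# If the default route is missing, add it by hand as a test
# (example router address; substitute your own gateway):
ip route add default via 192.168.0.1 dev br0
ping -c 3 8.8.8.8
```

If the hand-added route makes the ping work, the static config is saving the IP but not the gateway, which points at the NIC/gateway assignment in the network settings rather than the cabling.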
  16. Thanks for the quick reply! 1) Mainly to play with Proxmox as an "enterprise" hypervisor and see what differences there are, but I'd still need some form of NAS solution (although I know it's not best practice to house it on the same box), and the issue with ZFS/FreeNAS that unRaid solves is being able to add drives as you go. As for my tech skill... lacking, but I guess that's why I want to give it a go. 4) So in theory, if I removed the Proxmox SSDs and booted bare metal from the USB, rather than booting off it in a VM, I'd be back to the status quo?
  17. Due to seeing a good price on a Xeon processor, I'm doing a mini upgrade on my Gen 8 HP Microserver, which has rekindled some interest in playing with a home lab. I've been running unRaid bare metal, but haven't really used it to its fullest extent. Now that I have a VT-d-happy processor, I'm considering Proxmox for bare metal and running unRaid as a VM to act mainly as a NAS. Stupid questions... 1) I was thinking of using the onboard controller (B120i) to run a pair of RAIDed SSDs to install the Proxmox OS onto and store VM images, while using a standalone PCI card to pass through the 4-HDD bay and probably another SSD to unRaid. Is this a sensible approach? 2) What sort of resources should I assign to the unRaid VM if it's mainly to be used as a NAS and related dockers? 3) Should I actually run the dockers in unRaid or in another VM? I'll be spinning up at least one Linux-flavour server to run some other services. 4) If I like Proxmox enough to want to play seriously, and invest in a better box (the Gen 8's 16GB max RAM is a killer), how hard would it be to move unRaid back to bare metal, assuming I'd probably put back in the lower-TDP processor to run more efficiently as a straight-up NAS but would lose VT-d?
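For context on question 1: passing a whole storage controller through to a Proxmox guest is a two-step job, roughly as below. The VM ID and PCI address here are examples — find your card's address with lspci first:

```shell
# Step 1: enable the IOMMU on the Proxmox host (Intel shown), then
# run update-grub and reboot. In /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# Step 2: find the HBA's PCI address and attach it to the unRaid VM
# (example VM ID 100, example address 07:00.0):
lspci | grep -i -E 'sas|raid|sata'
qm set 100 -hostpci0 07:00.0
```

With the controller passed through whole, unRaid sees the raw disks (including SMART data), which is what it needs for the array; the B120i/Proxmox SSDs stay untouched on the host side.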
  18. This is exactly what I ended up doing. After working out that VT-d, or the lack of it, was going to make ESXi even more of a pain, I just went with unRAID bare metal. Haven't looked back since. Sure, it's not as pretty or feature-rich as ESXi, but for what I need it's more than enough. Got a pfSense VM up and running, and while the network bridging isn't as pretty in the console, it's working wonderfully well. Big thumbs up for unRAID so far.
  19. I'm looking at rolling out my virgin unRAID install as a VM on my HP Microserver, and I'm still a little confused about how best to do this. I've read the sticky, and it seems the options for install are either a VMDK (dependent on others to update) or going via Plop. From the reading of things, it looks like I'm better off going down the Plop route, but I'm a little confused about the hardware passthrough side of things. The server currently has an SSD and a 3TB NAS drive (hey, we've all got to start somewhere!) and I wanted to use the SSD for all things VM while leaving the NAS drive purely for unRAID. The issue is that the G1610 doesn't have VT-d. Am I in trouble?