gzilla

Members · 12 posts

  1. I would also like to work this out. I have recently put Unraid back on my old QNAP TVS-882BRT3 and I still have the same problem I had before: the fans don't come on because they're not being detected/managed properly in Unraid. I was just about to try the recommendation above (though I can't imagine it working any better than when I tried it last time, even with the newer kernel) when I saw this: https://github.com/guedou/TS-453Be Hmm... going to have a sniff at this for a while. Back soon. In the meantime, a quick way to check whether the kernel is exposing any fan sensors at all is sketched below.
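
     A minimal sketch (assumptions: nothing QNAP-specific, just the generic hwmon interface; driver names like it87/nct6775 are examples) for checking from the Unraid console whether any fan-capable sensor driver has bound at all:

       # list the hwmon devices the kernel currently exposes and which driver owns each
       for hw in /sys/class/hwmon/hwmon*; do
         echo "$hw -> $(cat "$hw/name" 2>/dev/null)"
       done

       # if a fan-capable driver (it87, nct6775, ...) has bound, fan readings appear here
       grep . /sys/class/hwmon/hwmon*/fan*_input 2>/dev/null

     If lm-sensors is available (for example pulled in by the Dynamix system temperature plugin), `sensors` gives a friendlier view of the same data. No hwmon entries at all would suggest the QNAP's embedded controller simply has no driver loaded for it.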
  2. Hi guys, just wanted to drop in and say I managed to solve this! I noticed that when booting into Windows it would fail to find enough resources to allocate to PCI devices, resulting in error 12 on a PCI-E root port or, at times, on the 1080 Ti graphics card. Looking at the Unraid logs, I found the same kind of thing happening there, which I had missed when debugging the logs last time:

     Mar 27 14:57:47 gnznas kernel: pci 0000:04:00.0: PCI bridge to [bus 05]
     Mar 27 14:57:47 gnznas kernel: pci 0000:04:00.0: bridge window [mem 0xf2100000-0xf21fffff]
     Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 15: no space for [mem size 0x18000000 64bit pref]
     Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 15: failed to assign [mem size 0x18000000 64bit pref]
     Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 14: assigned [mem 0xe7800000-0xe8ffffff]
     Mar 27 14:57:47 gnznas kernel: pci 0000:06:00.0: BAR 13: assigned [io 0xc000-0xcfff]
     Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 15: no space for [mem size 0x18000000 64bit pref]
     Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 15: failed to assign [mem size 0x18000000 64bit pref]
     Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 14: assigned [mem 0xe7800000-0xe8ffffff]
     Mar 27 14:57:47 gnznas kernel: pci 0000:07:01.0: BAR 13: assigned [io 0xc000-0xcfff]
     Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 1: no space for [mem size 0x10000000 64bit pref]
     Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 1: failed to assign [mem size 0x10000000 64bit pref]
     Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 3: no space for [mem size 0x02000000 64bit pref]
     Mar 27 14:57:47 gnznas kernel: pci 0000:08:00.0: BAR 3: failed to assign [mem size 0x02000000 64bit pref]

     In bare-metal Windows there are a number of options for fixing this, outlined here: https://egpu.io/forums/pc-setup/fix-dsdt-override-to-correct-error-12/. You can do a DSDT patch with Clover and remove a PCI-E root port you don't care about (which can completely hide a second graphics card from Windows; I also have an AMD RX 580 and an Nvidia GT 710 in there). I thought this might also work: https://khronokernel-4.gitbook.io/disable-unsupported-gpus/ I couldn't get either of those working, but I did find an acceptable workaround for now: press F11 at boot, and when the boot selection menu comes up, re-plug the eGPU box into the Thunderbolt port, then boot into Unraid. The devices now show up in the PCI tree as follows:

     +-01.2-[01-2f]----00.0-[02-2f]--+-01.0-[03-22]----00.0-[04-22]--+-00.0-[05]----00.0  Intel Corporation JHL7540 Thunderbolt 3 NHI [Titan Ridge 4C 2018]
     |                               |                               +-01.0-[06-13]----00.0-[07-08]----01.0-[08]--+-00.0  NVIDIA Corporation GP102 [GeForce GTX 1080 Ti]
     |                               |                               |                                            \-00.1  NVIDIA Corporation GP102 HDMI Audio Controller
     |                               |                               +-02.0-[14]----00.0  Intel Corporation JHL7540 Thunderbolt 3 USB Controller [Titan Ridge 4C 2018]
     |                               |                               \-04.0-[15-22]--
     ...
     +-03.1-[31]--+-00.0  NVIDIA Corporation GK208B [GeForce GT 710]
     |            \-00.1  NVIDIA Corporation GK208 HDMI/DP Audio Controller
     +-03.2-[32]--+-00.0  Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
     |            \-00.1  Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590]

     But when I pass the 1080 Ti through to a Windows 10 VM, the device won't start because of error 43, which is something SpaceInvaderOne has documented very well in his YouTube videos (check them out if you haven't already, they're all really good!). I went through all of his suggestions and it _still_ didn't work. Looking at the logs again, there were still PCI devices whose BARs were failing to be assigned. After much digging through the Linux kernel docs I found that adding `pci=realloc` to the boot args on the flash USB sorts out the resource accounting completely on the next boot, and all PCI devices get resources and start successfully (a sketch of the change is below). Make sure you also have CSM disabled and Above 4G Decoding enabled in your BIOS, and use a VBIOS ROM in your libvirt XML (all documented by SpaceInvaderOne). I also set 'Above 4G MMIO' to 36-bit (I thought that was related to the 36-bit large-memory setup the egpu.io page describes, but I'm not sure whether it matters beyond the other Above 4G setting in the CSM area), and enabled 'PCIE Devices Power On' in the BIOS. Hope this helps someone. I now have working Mac and Windows VMs running simultaneously, each with its own graphics card!
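
     For reference, on a stock Unraid flash drive the change amounts to adding `pci=realloc` to the append line in syslinux.cfg. A minimal sketch (your label names and any extra parameters you already have may differ):

       label Unraid OS
         menu default
         kernel /bzimage
         append pci=realloc initrd=/bzroot

     The same line can also be edited from the webGUI by clicking the flash device on the Main tab and using the Syslinux Configuration section, if I remember right.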
  3. Hi there, please find the diagnostics attached: diagnostics-20200311-1138.zip
  4. Hey, thanks for this beta, I really appreciate having access to the 5.5 kernel. I have an ASRock Taichi x570, an AMD 3950X and a Gigabyte Titan Ridge TB3 card installed, with a Razer Core X (Alpine Ridge based) eGPU enclosure attached to it. The upgrade went fine and Unraid generally works well so far. It also detects the TB3 card fine, and if I hotplug the Core X after boot (it doesn't see it straight away on boot), Unraid detects the eGPU enclosure too, but not the Zotac 1080 Ti inside it. If only I could get Unraid to detect the graphics card, I could then pass it through to a VM as a PCIe IOMMU device. Passing through the whole TB3 card also makes the Core X visible in a Windows 10 VM, but not the graphics card inside it. If I boot Ubuntu 19.10 bare metal, I can see the enclosure and use the 1080 Ti inside it as a display with no problem, so I know it can work. I'm just unsure how Ubuntu can see it while Unraid can't; I can only imagine boltd/boltctl has some involvement (see the sketch below for checking device authorisation by hand). I'm also booting in legacy mode at the moment, and I'm not sure whether a UEFI Unraid boot would make any difference. Has anyone else managed to get this working in a similar configuration? I was hoping the updated kernel might be enough to work better with Titan Ridge and/or nested Alpine Ridge hubs (the eGPU), but I guess the problem is something else, as the behaviour here is exactly the same as on 6.8.x. Any ideas? Thanks!
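
     A minimal sketch of checking that by hand from the Unraid console (the 0-1 device path is a made-up example; on a desktop distro boltd is what would normally do the authorising):

       # devices the thunderbolt bus has enumerated
       ls /sys/bus/thunderbolt/devices/

       # 0 here means the device is not authorised, so no PCIe tunnel has been set up yet
       cat /sys/bus/thunderbolt/devices/0-1/authorized

       # authorise it manually (only applies if the controller's security level requires it)
       echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized

       # once the tunnel is up, the GPU behind the enclosure should appear on the PCI bus
       echo 1 > /sys/bus/pci/rescan
       lspci | grep -i nvidia

     If the controller is running in 'none' security mode the authorized attribute will already read 1 and this step isn't the problem.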
  5. Ooo thanks Leoyzen, hadn't seen that before. Good to know. I wasn't sure if the other patches could be used or not. I will try those and see if it changes anything.
  6. Hmm... definitely better CPU performance now that I'm back on my old Penryn setup in Clover compared to the OpenCore one. The Ableton glitching has mostly disappeared. But that could well just be me misconfiguring something somewhere, or not understanding something properly yet. Still pretty new to OpenCore, but I love it already for its potential. No noticeable change in GPU performance. I'll try IvyBridge again and let you know in a bit.
  7. Did you see the note about that in the Troubleshooting section here? https://khronokernel.github.io/Opencore-Vanilla-Desktop-Guide/troubleshooting/troubleshooting.html (Black screen after picker) See also: https://www.reddit.com/r/hackintosh/comments/euwwe5/opencore_black_screen_after_boot_picker_even As I understand it, Threadripper should be supported bare metal, so it should also be supported through KVM, but I can't confirm that as I don't have one.

     The GPU seems OK to me generally, however when running Ableton on a heavy-ish project something is throttling it quite a bit, and I haven't yet dug into what it might be. CPU usage in iStat shows about 25% across all 16 cores, so there's plenty to spare. I'm going back to IvyBridge tonight to see what the differences are (also, have you tried Skylake-Server? That one worked well for me in Clover previously). It might be something to do with USB (I've got a Focusrite 2i2), but I am passing through an entire Fresco Logic FL1100 USB 3.0 host controller, so support should be perfect for the controller and the devices on it (see the sketch below). I've attached my config files with redactions.

     Now some questions for you guys: Does anyone know if Sidecar works with Ryzen host CPU passthrough (or an emulated Intel CPU)? Has anyone managed to get a Gigabyte Titan Ridge TB3 card working in a Mac VM with device passthrough? Shutting down or rebooting the OS seems to just pause the VM; does anyone know what I need to do to fix that? I'm guessing configuring some devices in OpenCore will help get my NIC back to en0/built-in again? iCloud/iMessage so far doesn't seem to care. libvirt.xml EFI.zip
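
     For reference, a minimal sketch of one common way to stub a controller like the FL1100 so it's free for passthrough (the 1b73:1100 ID is from my lspci output; this assumes vfio-pci is built into the Unraid kernel, and newer releases or the VFIO-PCI Config plugin can do the same thing without editing files):

       # find the vendor:device ID of the controller
       lspci -nn | grep -i 'Fresco Logic'
       # 2b:00.0 USB controller [0c03]: Fresco Logic FL1100 USB 3.0 Host Controller [1b73:1100] (rev 10)

       # then add the ID to the append line in syslinux.cfg so vfio-pci claims it at boot, e.g.
       # append vfio-pci.ids=1b73:1100 initrd=/bzroot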
  8. Just yesterday I managed to get OpenCore 0.5.5 to boot into macOS 10.15.3 using host CPU passthrough with QEMU, and this is my first attempt at moving from Clover to OpenCore. My setup: ASRock Taichi x570, AMD 3950X, Sapphire Nitro+ 580, Unraid 6.8.2. The Mac VM is a music production environment. To achieve this, I followed this excellent OpenCore setup guide pretty closely, as if it were a bare-metal install: https://aplus.rs/archives/ <-- check all the Feb 9 posts from here. I also spent a lot of time on this Unraid forum, on the AMD-OSX forums and Discord, and reading their vanilla guide, all of which were very useful. I wanted to be able to boot natively into macOS instead of Unraid, to test the difference between VM and 'native' performance. It turns out that even though QEMU abstracts much of the VM hardware, the same OpenCore EFI is so far still usable in the context of a VM and boots OK.

     For the 3950X I had to take the OpenCore kernel patches discussed in that guide and merge them into the Kernel > Patches section of the OpenCore config.plist (easily done with PlistEdit Pro): https://github.com/AMD-OSX/AMD_Vanilla/blob/opencore/17h/patches.plist Once I'd followed the guide and manually built myself an EFI folder, I used OC-tool to lint/validate the plist and download any missing dependencies: https://github.com/rusty-bits/OC-tool. Finally, the EFI needs to be put into the OpenCore qcow2 image used by the VM, so I grabbed `opencore.release.qcow2` from elsewhere in this forum and mounted it on Unraid using (a tidier script version is at the end of this post):

     modprobe nbd
     qemu-nbd --connect=/dev/nbd0 opencore.release.qcow2   <-- use a network block device to expose the qcow2 image as /dev/nbd0
     mkdir EFI
     mount /dev/nbd0p1 EFI
     rm -rf EFI/*                                          <-- delete all previous contents
     cp -r <prepared EFI folder>/* EFI/                    <-- copy in the new OC-tool-validated EFI folder
     umount EFI
     qemu-nbd -d /dev/nbd0                                 <-- needed before you can boot the VM again, otherwise the volume is locked

     In the libvirt VM XML I am pinning 16 cores and just passing the host CPU through:

     <cputune>
       <vcpupin vcpu="0" cpuset="2"/>
       <vcpupin vcpu="1" cpuset="18"/>
       <vcpupin vcpu="2" cpuset="3"/>
       <vcpupin vcpu="3" cpuset="19"/>
       <vcpupin vcpu="4" cpuset="4"/>
       <vcpupin vcpu="5" cpuset="20"/>
       <vcpupin vcpu="6" cpuset="5"/>
       <vcpupin vcpu="7" cpuset="21"/>
       <vcpupin vcpu="8" cpuset="6"/>
       <vcpupin vcpu="9" cpuset="22"/>
       <vcpupin vcpu="10" cpuset="7"/>
       <vcpupin vcpu="11" cpuset="23"/>
       <vcpupin vcpu="12" cpuset="8"/>
       <vcpupin vcpu="13" cpuset="24"/>
       <vcpupin vcpu="14" cpuset="9"/>
       <vcpupin vcpu="15" cpuset="25"/>
       <emulatorpin cpuset="0,4"/>
       <iothreadpin iothread="1" cpuset="7"/>
     </cputune>
     <cpu mode="host-passthrough" check="none"/>

     and in my qemu:commandline section:

     <qemu:arg value="-cpu"/>
     <qemu:arg value="host,+hypervisor,migratable=no,-erms,+invtsc,kvm=on,+topoext,+invtsc,+avx,+aes,+xsave,+xsaveopt,+ssse3,+sse4_2,+popcnt,+arat,+pclmuldq,+pdpe1gb,+rdtscp,+vme,+umip,check"/>

     I'm not sure all the CPU feature switches I have there are right for this situation, but I can see the kernel patches working in the booted VM, as the CPU is named correctly when doing:

     $ sysctl -n machdep.cpu.brand_string
     AMD Ryzen 9 3950X 16-Core Processor

     Although 'About This Mac' shows the CPU as '3.5 GHz Intel Core i5' (I did see something about forcing the Penryn family in those 17h kernel patches), CPU performance has been pretty good so far. I still have a few things to sort out, though.
     My ethernet is no longer en0/built-in, and I have no working wifi yet (I've got a Fenvi 919 but no room to fit it until I get a PCIe riser and move the graphics card out of the way first), so no working AirDrop or Sidecar (not sure if Sidecar can work anyway?). I also have a Gigabyte Titan Ridge card but haven't got that working yet either: Unraid detects it, but can't yet detect the eGPU enclosure underneath it or the 1080 Ti inside. Booting Ubuntu 19.10 directly on the machine does show the eGPU enclosure, so I'm hoping I just need to wait for an Unraid build with a 5.3+ kernel and Thunderbolt support compiled in, and perhaps boltctl (not sure). I'm not in a desperate hurry to get that working in my macOS VM, though (I also have a Windows VM I want for gaming, but that's a lower priority right now).
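
     For convenience, the qcow2 steps above can be wrapped into a small script. This is only a sketch: the image path and prepared EFI folder are placeholders, and it assumes the VM is shut down and /dev/nbd0 is free.

       #!/bin/bash
       # replace_efi.sh -- copy a prepared EFI folder into an OpenCore qcow2 image
       set -e

       IMG="/path/to/opencore.release.qcow2"   # placeholder: your qcow2 image
       SRC="/path/to/prepared/EFI"             # placeholder: your validated EFI folder
       MNT="$(mktemp -d)"

       modprobe nbd
       qemu-nbd --connect=/dev/nbd0 "$IMG"     # expose the qcow2 image as a block device
       mount /dev/nbd0p1 "$MNT"                # first partition holds the EFI folder
       rm -rf "${MNT:?}"/*                     # wipe the old contents
       cp -r "$SRC"/. "$MNT"/                  # copy in the new EFI folder
       umount "$MNT"
       qemu-nbd -d /dev/nbd0                   # disconnect so the VM can use the image again
       rmdir "$MNT"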
  9. I would like to see this too, if possible please. I have a Ryzen 3950X and am interested in how OpenCore presents the CPU to the VM and what performance benefits there might be (if any). I'd also like to use FileVault 2.
  10. Hi there, I'm using an ASRock Taichi x570 with a 3950X processor and a Gigabyte Titan Ridge AIC. I'm trying to use my 1080 Ti eGPU, in a Razer Core X enclosure, in a Windows VM. As the 3950X has no integrated GPU, I have an Nvidia GT 710 in PCIe slot 1 to use as the boot GPU for now. I was able to do this for the last six months with my previous Unraid build (6.8.0 on a repurposed 2017 i7 4400k QNAP box with Alpine Ridge TB3 cards added), and that worked pretty well. I'm not sure how the video card was detected and able to show up in the list of IOMMU devices back then, when I've only just been able to get TB3 devices visible in /sys/bus/thunderbolt on my new x570 system since Thunderbolt was enabled in the kernel of 6.8.1-rc1, which I installed last night (I can see the TB3 controller in IOMMU but no eGPU video card yet). In the list of Thunderbolt devices in /sys/bus/thunderbolt I can see the Razer Core X enclosure (which has an Alpine Ridge controller on it) but not the video card underneath it yet. I'm wondering if boltctl/boltd will help with this, or whether I should already be able to see the full chain of devices? I might try booting into Ubuntu to check how well the Thunderbolt devices work there. When I boot the x570 machine into Windows instead of Unraid I can use the eGPU successfully, so it all seems viable if only I can get the graphics card into an IOMMU group and then into a VM. I have heard of problems some people have had with 3000-series Ryzens and GPU passthrough in other threads here, though, so I'm not sure how easy it will be to get working. Here are my IOMMU groups on the x570 box (a small sketch for regenerating a listing like this after a rescan is at the end of this post):

     IOMMU group 0: [1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 1: [1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
     IOMMU group 2: [1022:1483] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
     IOMMU group 3: [1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 4: [1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 5: [1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
     IOMMU group 6: [1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 7: [1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 8: [1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 9: [1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 10: [1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
     IOMMU group 11: [1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 12: [1022:1484] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 13: [1022:1484] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
     IOMMU group 14:
       [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
       [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
     IOMMU group 15:
       [1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
       [1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
       [1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
       [1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
       [1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
       [1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
       [1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
       [1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
     IOMMU group 16: [1022:57ad] 01:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
     IOMMU group 17: [1022:57a3] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
     IOMMU group 18: [1022:57a3] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
     IOMMU group 19: [1022:57a3] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
     IOMMU group 20: [1022:57a3] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
     IOMMU group 21:
       [1022:57a4] 02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
       [1022:1485] 2c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
       [1022:149c] 2c:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
       [1022:149c] 2c:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
     IOMMU group 22:
       [1022:57a4] 02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
       [1022:7901] 2d:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 23:
       [1022:57a4] 02:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
       [1022:7901] 2e:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 24: [8086:15ea] 03:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] (rev 06)
     IOMMU group 25: [8086:15ea] 04:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] (rev 06)
     IOMMU group 26: [8086:15ea] 04:01.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] (rev 06)
     IOMMU group 27: [8086:15ea] 04:02.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] (rev 06)
     IOMMU group 28: [8086:15ea] 04:04.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] (rev 06)
     IOMMU group 29: [8086:15eb] 05:00.0 System peripheral: Intel Corporation JHL7540 Thunderbolt 3 NHI [Titan Ridge 4C 2018] (rev 06)
     IOMMU group 30: [8086:15ec] 14:00.0 USB controller: Intel Corporation JHL7540 Thunderbolt 3 USB Controller [Titan Ridge 4C 2018] (rev 06)
     IOMMU group 31: [144d:a804] 24:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
     IOMMU group 32: [1b21:1184] 25:00.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
     IOMMU group 33:
       [1b21:1184] 26:01.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
       [8086:2723] 27:00.0 Network controller: Intel Corporation Wi-Fi 6 AX200 (rev 1a)
     IOMMU group 34: [1b21:1184] 26:03.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
     IOMMU group 35:
       [1b21:1184] 26:05.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
       [8086:1539] 29:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
     IOMMU group 36: [1b21:1184] 26:07.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
     IOMMU group 37: [1b73:1100] 2b:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)
     IOMMU group 38: [144d:a804] 2f:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
     IOMMU group 39:
       [10de:128b] 30:00.0 VGA compatible controller: NVIDIA Corporation GK208B [GeForce GT 710] (rev a1)
       [10de:0e0f] 30:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
     IOMMU group 40: [1022:148a] 31:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
     IOMMU group 41: [1022:1485] 32:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
     IOMMU group 42: [1022:1486] 32:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
     IOMMU group 43: [1022:149c] 32:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
     IOMMU group 44: [1022:1487] 32:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
     IOMMU group 45: [1022:7901] 33:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
     IOMMU group 46: [1022:7901] 34:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

     Thanks!
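
     To check whether the eGPU ever turns up in a group after a Thunderbolt hotplug or PCI rescan, here's a quick generic sketch for regenerating a listing like the one above using only sysfs and lspci:

       #!/bin/bash
       # print every IOMMU group and the PCI devices assigned to it
       for g in /sys/kernel/iommu_groups/*; do
         echo "IOMMU group ${g##*/}:"
         for dev in "$g"/devices/*; do
           echo -n "  "
           lspci -nns "${dev##*/}"   # basename is the PCI address, e.g. 0000:30:00.0
         done
       done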
  11. Hovering here too with interest. I've got an ASRock Taichi x570 with a 3950X processor and a Gigabyte Titan Ridge TB3 controller (which detects and works fine in bare-metal Windows). As the 3950X doesn't have an integrated GPU, I have an Nvidia GT 710 in PCIe slot 1, currently used by the OS, but I ideally want to use that in a Mac Catalina VM later. Most things are working pretty well for me so far, and I have some nct6775 and nct6796 sensor detection working with the Dynamix plugin on Unraid 6.8.1-rc1, although I haven't checked them thoroughly yet. However, I want to get my TB3 eGPU (an Nvidia 1080 Ti) connected to my Windows VM like I did before, when using Unraid on my previous NAS (a repurposed 2017 i7 4400k QNAP box with Alpine Ridge TB3 controller cards). On that Intel machine, Unraid simply showed the graphics card in an IOMMU group as a device in its own right. I was then able to pass it through to Windows directly as a PCIe device (Windows knew nothing about the Thunderbolt part) and get full acceleration, no problem. That was quite cool. I'm not sure how Unraid was able to detect the graphics card at boot there, when I couldn't get Thunderbolt to work at all on my new x570 box until I updated to 6.8.1-rc1, released yesterday, which adds kernel support for Thunderbolt and Thunderbolt networking (another thing of great interest to me, so I can do very fast file transfers to my Mac laptop over Thunderbolt). Maybe the BIOS was presenting those devices as plain PCIe devices somehow? I don't know. So right now I can't see the eGPU-based graphics card in any IOMMU group, so I can't attach it to the VM as I did before. I tried attaching the whole controller, but still couldn't get the display to bring up the BIOS or Windows boot, although the VM does seem to run OK otherwise. I'm wondering if getting boltd/boltctl into the mix will help with that? It seems like it's the missing link between device detection and registration into IOMMU, although I'm not certain about that (a sketch of what the boltctl flow would look like is below). I'd quite like to see the patches in this thread go into 6.8.1-rc2 along with boltd/boltctl; that could be pretty useful to me and a few others trying to get TB3 working properly in Unraid.
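
     For what it's worth, if boltctl/boltd did get added, the flow would look roughly like this (a sketch only; the UUID is a made-up placeholder, and enrolment only matters when the controller is in 'user' or 'secure' security mode):

       # list everything boltd knows about, including unauthorised devices
       boltctl list

       # authorise the enclosure for this session only
       boltctl authorize d0030000-0070-8f18-a3c5-d21c9b4cdf01

       # or enroll it so it is authorised automatically every time it is connected
       boltctl enroll --policy auto d0030000-0070-8f18-a3c5-d21c9b4cdf01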
  12. Omg, good timing. I've just built a new PC and really needed the Thunderbolt support, as I couldn't find it anywhere. Now all I need is boltctl to get the devices behind it into IOMMU groups and I'm ready to roll!