Everything posted by testdasi

  1. So apparently my test 6 happens to be testing all the slow cores (no direct memory access). Will need to retest the fast cores once my data migration is done.
     ~# numactl --hardware
     available: 4 nodes (0-3)
     node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
     node 0 size: 48208 MB
     node 0 free: 350 MB
     node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
     node 1 size: 0 MB
     node 1 free: 0 MB
     node 2 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
     node 2 size: 48354 MB
     node 2 free: 4680 MB
     node 3 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
     node 3 size: 0 MB
     node 3 free: 0 MB
     node distances:
     node   0   1   2   3
       0:  10  16  16  16
       1:  16  10  16  16
       2:  16  16  10  16
       3:  16  16  16  10
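For anyone wanting to script this check, the memory-less dies can be picked out of `numactl --hardware` output with a bit of awk. A minimal sketch, using a pasted sample rather than live output (on a real box you would pipe the command in instead):

```shell
# Identify NUMA nodes with no local memory (the "slow" dies on a 2990WX).
# The sample mimics `numactl --hardware` output; replace the printf with
# the real command on a live system.
sample='node 0 size: 48208 MB
node 1 size: 0 MB
node 2 size: 48354 MB
node 3 size: 0 MB'

slow_nodes=$(printf '%s\n' "$sample" | awk '/size: 0 MB/ {print $2}' | xargs)
echo "memory-less nodes: $slow_nodes"
```

On the output above this prints nodes 1 and 3, matching the dies without a direct memory connection.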
  2. Apparently there are already commands to tell which core is on which die. @bastl @Jcloud Perhaps you guys can try and see what shows up?
     ~# numactl --hardware
     available: 4 nodes (0-3)
     node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
     node 0 size: 48208 MB
     node 0 free: 350 MB
     node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
     node 1 size: 0 MB
     node 1 free: 0 MB
     node 2 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
     node 2 size: 48354 MB
     node 2 free: 4680 MB
     node 3 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
     node 3 size: 0 MB
     node 3 free: 0 MB
     node distances:
     node   0   1   2   3
       0:  10  16  16  16
       1:  16  10  16  16
       2:  16  16  10  16
       3:  16  16  16  10
     Apparently you can even check which VM is using how much RAM on which node:
     ~# numastat qemu
     Per-node process memory usage (in MBs)
     PID                     Node 0          Node 1          Node 2
     ----------------------- --------------- --------------- ---------------
     33117 (qemu-system-x86) 1751.71         0.00            2442.32
     33297 (qemu-system-x86) 2840.03         0.00            1326.58
     82938 (qemu-system-x86) 28445.78        0.00            20757.30
     91591 (qemu-system-x86) 182.21          0.00            8052.15
     ----------------------- --------------- --------------- ---------------
     Total                   33219.73        0.00            32578.35

     PID                     Node 3          Total
     ----------------------- --------------- ---------------
     33117 (qemu-system-x86) 0.00            4194.02
     33297 (qemu-system-x86) 0.00            4166.61
     82938 (qemu-system-x86) 0.00            49203.09
     91591 (qemu-system-x86) 0.00            8234.37
     ----------------------- --------------- ---------------
     Total                   0.00            65798.09
  3. Shameless plug: I already did some testing in my build topic.
  4. I asked the question a short while ago about isolating core 0 and what happens. Theoretically unRAID should avoid the core, but now that I think about it, I don't think that's the case. This draws from my experience with isolating cores and then assigning the isolated cores to a docker: the docker would end up using ONE of the cores at 100%. A docker is part of what you would call "unRAID" (since it's part of the host), which means isolation doesn't actually prevent the host from using the core. My hypothesis is that a process doesn't know whether a core is isolated until it starts and checks the isolation list and/or is told "you naughty process, you can't use this". But since it already holds the core, it will continue to do whatever it wants until done, like it doesn't care - it's just prevented from using any other isolated core. So until this is fully resolved, the old advice to keep core 0 (and its SMT sister) free is still in effect. That is complicated by the inconsistency in core-pair display in different environments. My 2990WX shows 0 paired with 1 (not 0 paired with 32 as yours does). Your Zenith X399 must be doing something very different.
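On Linux the pairing can be checked directly: `lscpu -p=CPU,CORE` lists which logical CPUs share a physical core. A self-contained sketch that groups a hypothetical `lscpu -p` sample by physical core (the sample numbering is an assumption, mimicking a box where 0+1 are a pair):

```shell
# Group logical CPUs by physical core to reveal SMT pairs.
# The printf stands in for `lscpu -p=CPU,CORE` output; swap in the real
# command on a live system. Sample assumes 0+1 share core 0, 2+3 share core 1.
pairs=$(printf '0,0\n1,0\n2,1\n3,1\n' \
  | awk -F, '{s[$2] = s[$2] " " $1} END {for (c in s) print "core " c ":" s[c]}' \
  | sort)
echo "$pairs"
```

Running the same one-liner on a 2990WX vs a 1950X would show immediately whether the numbering schemes differ.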
  5. So apparently, the 2990WX all-core turbo is 3.4GHz. Note: this was on F10 BIOS.
     # grep MHz /proc/cpuinfo
     cpu MHz : 3315.662
     cpu MHz : 3302.891
     cpu MHz : 3382.593
     cpu MHz : 3384.594
     cpu MHz : 3389.368
     cpu MHz : 3389.600
     cpu MHz : 3391.838
     cpu MHz : 3390.623
     cpu MHz : 3392.705
     cpu MHz : 3397.049
     cpu MHz : 3389.122
     cpu MHz : 3384.777
     cpu MHz : 3393.248
     cpu MHz : 3393.420
     cpu MHz : 3393.441
     cpu MHz : 3393.442
     cpu MHz : 3386.566
     cpu MHz : 3378.696
     cpu MHz : 3393.268
     cpu MHz : 3392.793
     cpu MHz : 3388.878
     cpu MHz : 3392.872
     cpu MHz : 3393.441
     cpu MHz : 3393.330
     cpu MHz : 3393.136
     cpu MHz : 3391.281
     cpu MHz : 3393.417
     cpu MHz : 3393.139
     cpu MHz : 3391.659
     cpu MHz : 3393.042
     cpu MHz : 3392.735
     cpu MHz : 3390.230
     cpu MHz : 3390.927
     cpu MHz : 3399.651
     cpu MHz : 3393.443
     cpu MHz : 3393.257
     cpu MHz : 3398.353
     cpu MHz : 3393.405
     cpu MHz : 3393.446
     cpu MHz : 3393.409
     cpu MHz : 3393.484
     cpu MHz : 3392.372
     cpu MHz : 3393.442
     cpu MHz : 3393.443
     cpu MHz : 3393.363
     cpu MHz : 3392.820
     cpu MHz : 3393.443
     cpu MHz : 3393.308
     cpu MHz : 3392.475
     cpu MHz : 3393.030
     cpu MHz : 3375.652
     cpu MHz : 3363.026
     cpu MHz : 3393.333
     cpu MHz : 3393.370
     cpu MHz : 3393.444
     cpu MHz : 3393.236
     cpu MHz : 3388.671
     cpu MHz : 3392.779
     cpu MHz : 3391.320
     cpu MHz : 3393.352
     cpu MHz : 3393.198
     cpu MHz : 3393.226
     cpu MHz : 3393.170
     cpu MHz : 3392.884
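Rather than eyeballing 64 lines, the all-core average can be computed with one awk. A sketch fed with two sample lines instead of the live `/proc/cpuinfo` (pipe `grep MHz /proc/cpuinfo` in for real):

```shell
# Average the per-core clocks from /proc/cpuinfo-style "cpu MHz" lines.
# The printf is a stand-in for `grep MHz /proc/cpuinfo` on a live system.
avg=$(printf 'cpu MHz\t: 3380.0\ncpu MHz\t: 3400.0\n' \
  | awk -F: '/MHz/ {sum += $2; n++} END {printf "%.0f", sum / n}')
echo "average: ${avg} MHz"
```

On the pasted 2990WX output this would land just under 3.4GHz, consistent with the claimed all-core turbo.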
  6. I think the red dots are due to the forum feature that pops up a "quote selection" button. Doesn't affect Safari apparently.
  7. Problems:
     - The case has 8 expansion slots; your 4 GPUs will occupy all 8, so you won't have space for the USB card. You either need a new case or some creative modding.
     - The 2nd GPU will cover the middle PCIe slot, so you will need creative use of PCIe extender(s) to make it work.
     - The Taichi X399 middle slot is PCIe x1 (albeit with an open end), so I'm not sure your 4-controller PCIe x4 USB card is going to work in that slot. Theoretically it will just be slower.
     Theoretically, you need a case with at least 10 expansion slots: GPU1 (x2) - extender in - USB - GPU2 (x2) - GPU3 (x2) - GPU4 via extender out (x2). It's still not going to be easy to (a) get an extender to stretch over 5+ slots past 2 big GPUs, and (b) GPU3 + 4 will completely cover all the ports at the bottom of the board, so access is going to be a massive pain. Also perhaps consider a Gigabyte board, since it has a full-length middle slot. I would also recommend opting for compact GPUs, but it looks like you guys are reusing your existing stuff, so that's not an option.
     You might want to think simpler. There's no need for a separate USB controller if hot-plugging isn't a requirement. You can pass through individual USB devices via unRAID. If you all use exactly the same model of peripherals, it's going to be a massive pain to identify things and edit the XML, but it should still work. The motherboard has 2 separate USB 3.0 controllers that can be passed through to VMs (in addition to a shared USB 3.1 controller that can be used for individual USB device passthrough). So if you can live with 2 VMs having dedicated controllers and 2 VMs with no hot-plug (preferably the 2 with distinctly different peripherals), that simplifies things. In short, take the USB controller out of the question and the build might just work.
  8. Do you know which AGESA version the proper fix was in? That would probably help the TR peeps know for sure the minimum BIOS to use.
  9. All of my USB 2.0, internal USB 3.0 and 3.1 gen2 ports show up under the same USB 3.1 controller, so I'm guessing there's no chance for me to pass through the 3.1 controller, since it's shared with the unRAID stick. Funny enough, I don't have any ASMedia device! Only AMD - and all my USB ports work, so perhaps it's a 2nd-gen TR thing. Not a big deal for me, since I only need 2 USB 3.0 controllers for my main Win and Mac VMs.
  10. Theoretically (based on the reference below), unRAID can support vmdk, but mine doesn't work:
      - Directly editing the template -> disk not showing up.
      - Converting with qemu-img leads to an all-blank raw img file, so naturally it doesn't work.
      When I use the latest qemu-img for Windows, it converts the vmdk to raw correctly (in fact I'm already using it), so I suspect an old version is used by unRAID. Reference:
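A quick way to test for the all-blank symptom: a raw image that converted badly contains nothing but zero bytes. A sketch with a stand-in file (the commented convert line is standard qemu-img syntax; the file names are placeholders):

```shell
# qemu-img convert -f vmdk -O raw source.vmdk dest.img   # the usual conversion
# Check whether a raw image is all zero bytes (the bad-conversion symptom).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=4 2>/dev/null    # stand-in "blank" image
nonzero=$(tr -d '\0' < "$img" | wc -c)                   # count non-zero bytes
result="has data"
[ "$nonzero" -eq 0 ] && result="blank"
echo "image is $result"
rm -f "$img"
```

If a freshly converted image reports blank, the conversion failed regardless of what the file size says.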
  11. Thanks to the magic of KVM, I now have MacOS running on an old Surface 3. 😁
  12. For NVMe drives, you need to pass them through via PCIe passthrough for best performance. They're not SATA devices.
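In libvirt terms that means a <hostdev> PCI entry rather than a <disk>. A sketch that writes the stanza to a file (the PCI address 41:00.0 is only an example, not from any particular system - use your NVMe controller's own address):

```shell
# Write a libvirt PCIe-passthrough stanza for an NVMe controller.
# The bus/slot values below are placeholders, not a real system's address.
xmlfile=/tmp/nvme-hostdev.xml
cat > "$xmlfile" <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
grep -c hostdev "$xmlfile"   # sanity check: open + close tag present
```

The stanza then goes inside the <devices> section of the VM's XML.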
  13. Memory interleaving may be the difference, because it relates to the Threadripper design. A Threadripper CPU is essentially equivalent to a dual-CPU / quad-CPU setup in the server world, which leads to the UMA / NUMA distinction. When the CPU is in UMA mode, memory is interleaved and exposed to both dies, prioritising throughput. In NUMA mode, there's no interleaving and each die accesses its own memory bus first and the other die's second, i.e. prioritising lower latency. In other words, UMA treats the CPU as one unit and NUMA treats each die as its own CPU. For the 1950X, UMA / NUMA can be selected. For the 2990WX, for the same reasons you mentioned, only NUMA mode is available. So when it comes to pairing logical cores to physical cores, it might be done incorrectly in UMA mode if the numbering is based on NUMA. It also makes sense that the 2990WX has a different numbering scheme, since NUMA is its only option. Of course, that's just my hypothesis, since I can't turn on interleaving on my 2990WX to test.
  14. So I updated to the latest template. Is there any way to pass additional parameters to openvpn? My idea is to use remote-random to allow the docker to pick a random server at every restart. The section of config below is deleted every time the docker starts and replaced with the server in the template. Only the remote-random line remains, so I'm guessing something was set up to remove any lines starting with "remote ".
      remote-random
      remote de-berlin.privateinternetaccess.com 1197
      remote de-frankfurt.privateinternetaccess.com 1197
      remote czech.privateinternetaccess.com 1197
      remote france.privateinternetaccess.com 1197
      remote ro.privateinternetaccess.com 1197
      remote spain.privateinternetaccess.com 1197
      remote swiss.privateinternetaccess.com 1197
      remote sweden.privateinternetaccess.com 1197
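If the container really does strip `remote ` lines on every start, one workaround (assuming you can run a script after the config is rewritten - the paths and server pool here are placeholders) is to re-append the pool afterwards:

```shell
# Re-append a pool of 'remote' lines to an OpenVPN config after a
# container start has rewritten it. File path and servers are placeholders.
conf=$(mktemp)
printf 'remote-random\nremote de-berlin.privateinternetaccess.com 1197\n' > "$conf"

extras='remote de-frankfurt.privateinternetaccess.com 1197
remote france.privateinternetaccess.com 1197'

# Only add lines that are not already present in the config.
printf '%s\n' "$extras" | while IFS= read -r line; do
  grep -qxF "$line" "$conf" || printf '%s\n' "$line" >> "$conf"
done
remotes=$(grep -c '^remote ' "$conf")
echo "$remotes remote lines"
```

With remote-random at the top and multiple remote lines restored, openvpn picks one of the pool at random on each connect.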
  15. @bastl: One thing I can think of - did you have memory interleaving on or off? Unfortunately I can only wish I had a 1950X lying around. I ain't Linus.
  16. So here is a quick summary of my test results. I use barebone SMT-off as the base (since it's fastest). The % below is how much slower than base, so lower is better. All tests were done on Windows, barebone or VM. SMT is on for the VM tests. Nothing else was running during the tests (except for (8)).
      1. Barebone SMT on: 52% <-- yes, SLOWER!
      2. VM 1-7, 17-23, 33-39, 49-55 (28 logical cores): 33%
      3. VM all odd numbers except 1, 17, 33, 49 (28 logical cores): 36%
      4. VM first 32 except 0, 8, 16, 24 (28 logical cores): 29%
      5. VM all odd numbers (32 logical cores): 30%
      6. VM last 32 (32 logical cores): 34%
      7. VM all odd numbers except 1, 9, 17, 25, 33, 41, 49, 57 (24 logical cores): 20%
      8. VM same as (7) but with 3 simultaneous transcodes on the even logical cores using dockers (24 logical cores): 56%
      My conclusions: (1), (7) and (8) say Windows is badly optimised for Threadripper 2 but Linux is much better. (3) - (7) seem to confirm what I was guessing: each block of 8 logical cores represents 4 physical cores and thus 1 CCX, and spreading things evenly across more CCXs improves performance. 3-3-3-3-3-3-3-3 is faster than 3-4-3-4-3-4-3-4! Linux is actually great with SMT optimisation, so I'll stick to my weird-and-wonderful config moving forward.
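The 3-3-3-3-3-3-3-3 spread from test (7) can be generated rather than typed by hand. A sketch, assuming 64 logical cores where each block of 8 is one CCX and I pin 3 odd cores per block (skipping the first odd core of each block, as in test (7)):

```shell
# Pick 3 odd logical cores from each 8-core block (one block per CCX),
# giving the 24-core "3 per CCX" spread: all odds except 1,9,17,...,57.
pin=""
for base in 0 8 16 24 32 40 48 56; do
  for off in 3 5 7; do
    pin="$pin $((base + off))"
  done
done
pin=${pin# }                        # trim the leading space
echo "$pin"
echo "count: $(echo "$pin" | wc -w)"
```

The resulting list starts 3 5 7 11 13 15 ... and contains 24 cores, which can be pasted into the VM's cpuset.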
  17. That looks similar to mine without acs override so I'm guessing it's some kind of a default setting for Threadripper BIOS.
  18. A surprising number of Millennials, actually. And apparently cute tiny DVDs are popular with some folks. I started with a TR1 build plan, but I timed it perfectly and got sign-off for a TR2. 😅 TR1 prices went down before the TR2 came out but have actually gone back up; I remember at one point, when the rumour mill was in full swing, the 1950X was like 600 - it's now 700+. My watt meter is broken, so I'm not sure about power consumption. Will update the post with IOMMU groups. 3 Windows VMs: 1 main workstation and 2 remote-only VMs. 2 Ubuntu servers as VPN gateways. Half of the cores and RAM go to the workstation (these cores are all isolated); that's my main daily driver. The rest of the VMs have 2 cores each, some shared with unRAID dockers. I'm currently testing a weird-and-wonderful config. TR2 logical numbering is a bit different: logical cores 0+1 are both on the same physical core. I assigned 24 odd (logical) cores to my workstation + the remaining 8 odd ones as emulator pins. The emulator cores are shared with the other various VMs (which don't really do much). The remaining 32 logical cores - the even numbers - are distributed and shared among dockers and the usual unRAID stuff. So there's no (physical) core that is exclusively used by anything, and I rely on SMT to schedule the tasks appropriately. We'll see how it goes. Thanks.
  19. Cuz it costs me more to get rid of it than to keep it 😆 and every now and then it does come in handy e.g. not everyone uploads wedding vid to Youtube
  20. Last update: 30/07/2020
      After a few months of researching, prepping my data, persuading she-who-must-be-obeyed, etc., I finally pulled the trigger when Amazon finally had my motherboard in stock. I had an i7-5820K (overclocked to 4GHz) and a Xeon E3-1245v5 server (ITX case), but that was more out of necessity, since I wasn't able to afford dual Xeons to merge them and still have sufficient performance. The 2990WX came out at just the right price point to make the idea possible.
      OS at time of building: 6.4.0
      OS Current: 6.9.0-beta25 (ich777 build, ZFS + Nvidia)
      CPU: AMD Ryzen Threadripper 2990WX
      Heatsink: Noctua NH-U14S TR4-SP3 (in push-pull - I nicked a 15mm fan from my old NH-D14 workstation cooler)
      Motherboard: Gigabyte X399 Designare EX (F12e BIOS)
      RAM: 64GB Corsair Vengeance LPX 2666MHz + 32GB GSkill Ripjaw 4 2800MHz (nicked from the old workstation)
      Case: Silverstone Fortress FT02 (old workstation case)
      Drive Cage: Evercool Dual (2x 5.25" -> 3x 3.5" with 80mm fan) - need this to mount 4x 2.5" drives
      Power Supply: Corsair HX850 (10+ years old!)
      GPU: Zotac GTX 1070 Mini for main VM, Zotac GT 710 PCIe x1 for unRAID, Nvidia Quadro P2000 for unRAID + transcoding
      Parity Drive: None, because I trust the cloud
      Array: Samsung 970 EVO 2TB
      Pool 1: 2x 1.2TB Intel 750 NVMe
      Pool 2: 3x 4TB Samsung 860 Evo
      Pool 3: 4x 7.68TB Kingston DC500R
      VM-only: 2x Intel Optane 905p 960GB (U.2 2.5"), Intel Optane 905p 380GB (M.2 22110) + 3x Samsung PM983 3.84TB (via Asus Hyper M.2 X16 card with the PCIe slot split to x4/x4/x4/x4)
      Flash backup: Samsung FIT Plus 64GB (for rapid recovery in case the main stick fails)
      Unassigned offline backup / decommissioned: 10TB Seagate Ironwolf NAS, 8TB Seagate Ironwolf NAS, 8TB Hitachi HE8, 5TB Seagate BarraCuda 2.5" SMR, 2TB Samsung 850 Evo, 2050GB Crucial MX300, 512GB Samsung SM951 M.2 (AHCI variety)
      Primary Use: Main video/photo editing workstation + various unRAID stuff that people do on unRAID
      Likes: Pretty much takes anything thrown at it and spits the output back in my face.
      Dislikes: It's heavy AF!
      Future Plans: Move to a smaller case (Raijintek Thetis?) for a more compact build.
      Some tips:
      - Btrfs can do snapshots! With a bit of scripting, you can achieve what znapzend does for ZFS.
      - ZFS (as of 30/07/2020) has a bug which causes it to not respect isolcpus. This is incredibly annoying under heavy IO.
      - I attempted to run Unraid under a Type 2 hypervisor; the short summary (TL;DR: it's not recommended): Hyper-V doesn't work due to the inability to pass through USB devices (with GUID). VirtualBox basically doesn't work due to terrible storage performance. VMware Workstation works, but with some crippling limitations.
      - Using an M.2 -> U.2 adapter on the M.2 connectors should be OK with single-slot-width PCIe cards in the surrounding PCIe slots. A dual-slot-width card (e.g. a typical GPU) overhanging the adapter usually won't work, e.g. the Zotac GTX 1070 heat pipe is right on top of the U.2 connector and prevents the GPU from being slotted into place. I guess if the overhanging card has its components in just the right places then it can lego itself in.
      - Hyper-V apparently improves PCIe performance! So unless you are suffering from the dreaded error code 43 and it cannot be resolved with any other tweaks, do NOT turn off Hyper-V by default. Start a new template if you need to switch Hyper-V on/off.
      - The latest Linux kernel + F12 BIOS seem to make disabling Global C-State Control less stable. That manifests as out-of-memory errors if I try to reserve more than 50% of RAM all at once (e.g. starting my workstation VM). So if you disabled Global C-State Control in the past and now seem to have some instability, maybe try re-enabling it with the latest BIOS.
      - Do NOT use ACS Override with multifunction on unRAID versions above 6.5.3, due to extreme lags. Unraid 6.8 (tested on rc7) has resolved the lags! 👍 Normal ACS Override does not improve the IOMMU grouping, so there's no point using it.
      - The bottom right M.2 slot (the 2280 size) is in the same IOMMU group as (a) both wired LAN ports, (b) wireless LAN, (c) the SATA controller and (d) the middle PCIe 2.0 slot. Hence, it practically cannot be passed through via the PCIe method, since that would need ACS Override multifunction, which lags (see above).
      - The bottom PCIe slot (x8) is in the same IOMMU group as the bottom right M.2 slot, so it's the same situation (see above). Just bought an M.2 -> PCIe adapter to test that slot and found out I'd wasted 2 hours of my life.
      - If you build from scratch, make sure to get a motherboard with the ability to upgrade the BIOS without a CPU (and familiarise yourself fully with the process). Even Linus forgot to update his BIOS before his 2990WX build.
      - Windows isn't yet optimised for this many threads/cores. My testing shows anything more than 32 processes leads to a DROP in performance. Diminishing returns also mean going beyond 24-28 processes brings almost no improvement in real-world performance. (A process = 1 thread/core, e.g. 24 processes = 12 cores with SMT or 24 cores without SMT.)
      - Due to the Ryzen design (essentially gluing 2x 4-core units into a die and then gluing 2/4 dies into a CPU), a 24-core config performs better than 28-core, at least with my workload on Windows. 32-core no-SMT is fastest, but only barebone.
      - Perhaps placebo, but I found spreading the cores out evenly seems to improve performance. I guess every 8 logical cores = 1 unit, so if you have an 8-core VM for example, put 1 core in each unit.
      Finally, I would like to thank @gridrunner, @eschultz, @Jcloud, @methanoid, @tjb_altf4, @jbartlett, @guru69 for their kind advice during my research process + @DZMM for helping me out with rclone. 👍
      Below are my IOMMU groups with IOMMU in the BIOS set to "On" (but without ACS Override multifunction). The motherboard's native IOMMU grouping is actually very good (e.g. can pass through USB and the main GPU).
      The LAN ports + wifi + the PCIe 2.0 slots are all in the same group. And this Gigabyte motherboard is amazing in that it allows you to pick which slot is the primary GPU, so you can use a good GPU in the fast slot and dump a cheapo in the slowest slot (in my case a single-slot PCIe x1 GT 710) for unRAID. No need to use a vbios (but I use one regardless). ACS Override multifunction basically splits everything into its own group. However, note that I experienced extreme lag with Unraid 6.6 and 6.7 when ACS Override is turned on (regardless of mode). 6.5.3 (and prior) does not have this issue and 6.8+ has resolved it, so presumably a Linux kernel issue.
      IOMMU group 0:
      [1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1453] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      [1022:1453] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      [1022:43ba] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller (rev 02)
      [1022:43b6] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset SATA Controller (rev 02)
      [1022:43b1] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset PCIe Bridge (rev 02)
      [1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
      [1022:43b4] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
      [1022:43b4] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
      [1022:43b4] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
      [1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
      [8086:1539] 04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
      [8086:24fd] 05:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)
      [8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
      [10de:128b] 07:00.0 VGA compatible controller: NVIDIA Corporation GK208B [GeForce GT 710] (rev a1)
      [10de:0e0f] 07:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
      [144d:a801] 08:00.0 SATA controller: Samsung Electronics Co Ltd Device a801 (rev 01)
      IOMMU group 1:
      [1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 2:
      [1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      [8086:0953] 09:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)
      IOMMU group 3:
      [1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 4:
      [1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:145a] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function
      [1022:1456] 0a:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
      [1022:145f] 0a:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 Host controller
      IOMMU group 5:
      [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:1455] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
      [1022:7901] 0b:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
      [1022:1457] 0b:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller
      IOMMU group 6:
      [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
      [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
      IOMMU group 7:
      [1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
      [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
      [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
      [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
      [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
      [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
      [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
      [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
      IOMMU group 8:
      [1022:1460] 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
      [1022:1461] 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
      [1022:1462] 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
      [1022:1463] 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
      [1022:1464] 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
      [1022:1465] 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
      [1022:1466] 00:19.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
      [1022:1467] 00:19.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
      IOMMU group 9:
      [1022:1460] 00:1a.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
      [1022:1461] 00:1a.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
      [1022:1462] 00:1a.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
      [1022:1463] 00:1a.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
      [1022:1464] 00:1a.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
      [1022:1465] 00:1a.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
      [1022:1466] 00:1a.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
      [1022:1467] 00:1a.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
      IOMMU group 10:
      [1022:1460] 00:1b.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
      [1022:1461] 00:1b.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
      [1022:1462] 00:1b.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
      [1022:1463] 00:1b.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
      [1022:1464] 00:1b.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
      [1022:1465] 00:1b.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
      [1022:1466] 00:1b.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
      [1022:1467] 00:1b.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
      IOMMU group 11:
      [1022:1452] 20:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 12:
      [1022:1452] 20:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 13:
      [1022:1452] 20:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 14:
      [1022:1452] 20:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 15:
      [1022:1452] 20:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 20:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:145a] 21:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function
      [1022:1456] 21:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
      IOMMU group 16:
      [1022:1452] 20:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 20:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:1455] 22:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
      IOMMU group 17:
      [1022:1452] 40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1453] 40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      [1022:1453] 40:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      [1022:1453] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      [144d:a808] 41:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
      [144d:a808] 42:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
      [8086:0953] 43:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)
      IOMMU group 18:
      [1022:1452] 40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 19:
      [1022:1452] 40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1453] 40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      [10de:1b81] 44:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
      [10de:10f0] 44:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
      IOMMU group 20:
      [1022:1452] 40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 21:
      [1022:1452] 40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:145a] 45:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function
      [1022:1456] 45:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
      [1022:145f] 45:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 Host controller
      IOMMU group 22:
      [1022:1452] 40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:1455] 46:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
      [1022:7901] 46:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
      IOMMU group 23:
      [1022:1452] 60:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 24:
      [1022:1452] 60:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 25:
      [1022:1452] 60:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 26:
      [1022:1452] 60:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      IOMMU group 27:
      [1022:1452] 60:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 60:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:145a] 61:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function
      [1022:1456] 61:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
      IOMMU group 28:
      [1022:1452] 60:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
      [1022:1454] 60:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      [1022:1455] 62:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
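One more note on the btrfs snapshot tip above: the scripting really just needs a timestamped `btrfs subvolume snapshot -r` plus a prune step, znapzend-style. The rotation logic can be sketched with plain directories so it runs anywhere (the btrfs commands are in comments; all paths are placeholders):

```shell
# Keep only the newest N snapshots.
# On btrfs you would create with:
#   btrfs subvolume snapshot -r /mnt/pool /mnt/pool/.snaps/$(date +%Y%m%d-%H%M%S)
# and delete old ones with:  btrfs subvolume delete <path>
keep=3
snapdir=$(mktemp -d)
for stamp in 20200701 20200708 20200715 20200722 20200729; do
  mkdir "$snapdir/$stamp"           # stand-in for a read-only snapshot
done
# Delete everything except the $keep lexically-newest entries.
ls -1 "$snapdir" | sort -r | tail -n +"$((keep + 1))" | while IFS= read -r old; do
  rmdir "$snapdir/$old"             # would be: btrfs subvolume delete
done
remaining=$(ls -1 "$snapdir" | wc -l)
echo "$remaining snapshots kept"
```

Timestamped names sort lexically in date order, which is what makes the sort/tail prune safe.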
  21. So it seems on the 2990WX, the numbering already follows what @thenonsense said (e.g. 0 + 1 are a pair - see attachment). Wonder if it's a BIOS thing since only latest BIOS can run Threadripper 2.
  22. Thanks a lot guys. With 2990WX it's gonna be even MORE complicated between the "fast" vs "slow" cores.
  23. Try one of these:
      1. Download the latest driver for your network card, uninstall the current driver, restart, reinstall the driver.
      2. Use regedit to delete all entries of any memorised network (wired and wireless), restart.
      3. Reinstall Windows from scratch.
      The fact that your other VM / machine can access unRAID fine suggests your unRAID config is good. So all we can do is try to force Windows to "forget" what it had that didn't work. All of these have worked for me at one point or another. Why (3)? I once had a corrupted Windows update (the major type - one that creates a windows.old folder) that suddenly caused things to inexplicably fail, and it was only fixed by reinstalling and re-updating from scratch.
  24. +1 and it doesn't need to be sophisticated. A settings box to limit parity sync / check speed to x MB/s (set to 0 for no limit) is sufficient.