Unraid Newbie Posted June 8, 2020
Does anyone have this setup? I am buying a new rig, 3900X or 3950X, but can't decide which board to get. I want to run 3 Windows VMs at the same time, each with GPU and USB passthrough. Which board should I get? I'm looking at the Asus X570-E and the X570 Taichi. I see many of you use the Aorus but still run into tons of issues. Ideally, I want one 2060 Super or above on my main gaming VM and two 1650 Supers or 1660 Supers on the other 2 VMs. Please share your working setup and the tweaks needed to make it work.
Storx Posted June 9, 2020
I'm not gonna lie... the performance of this would be terrible. The 3900X and 3950X both have a very limited number of PCIe lanes, so after the 2nd GPU your performance will suffer horribly even if the motherboard has 3 full x16 slots. Most AM4 motherboards, when you plug 3 GPUs into them, will drop the first GPU to x8, the 2nd GPU to x4, and the last one to x2 or x4 only.
Unraid Newbie Posted June 11, 2020 (Author)
Just checked: it's 2 slots at x8 speed and 1 at x4. As long as I can play 4K video and play some emulator games on the 3rd VM, it can still be very useful. To me at least.
aasberry Posted September 6, 2020
As someone currently fighting problems with PCIe lanes, I hope you decided against this. Server hardware would do this better.
mrjrp15 Posted September 11, 2020
This can work. The issue is that the third PCIe slot on most X570 or B550 boards goes through the chipset. If you do want to do this, I would suggest PCIe bifurcation of a single x16 slot into multiple slots. For this to work the board will need to support it, and you will need to do your research. You can find the parts for this at the following link: https://riser.maxcloudon.com/en/
Decto Posted September 12, 2020
On 9/11/2020 at 1:16 PM, mrjrp15 said:
This can work. The issue is the third PCIE slot on most x570 or B550 boards uses the Chipset. If you do want to do this I would suggest PCIE Bifurcation of a single X16 slot into multiple slots. Now for this to work the board will need to support this and you will need to do your research. But you can find the parts for this at the following link: https://riser.maxcloudon.com/en/
Bifurcation, while possible, is a bit of a lash-up. Better to buy a board designed for 3x PCIe x8 electrical, with the benefit of PCIe Gen 4 bandwidth: Asus Pro WS X570-ACE.
aasberry Posted September 13, 2020
6 hours ago, Decto said:
Bifurcation while possible is a bit of a lash up. Better to buy a board designed for 3x pcie x8 electrical. Benefits of PCIE gen 4 bandwidth. Asus Pro WS X570-ACE
That board would definitely do it. But his slot 3 would only get x4 from the CPU, right? x8 + x8 from the CPU to slots 1 and 2. Would slot 3 be x8 to the chipset, which would then be bottlenecked to x4 by its connection to the CPU? Or would it be x4 to the CPU? There are 24 lanes on the 3950X and 4 of those are for the chipset. (Trying to understand this myself, so I may be way off here.) But with the use case he stated and PCIe 4 that would probably be fine.
Edited September 13, 2020 by aasberry
Decto Posted September 13, 2020
7 hours ago, aasberry said:
That board would definitely do it. But would his slot 3 would only get x4 from the cpu right? 8x+8x from the cpu to slot 1 and 2. Would slot 3 be 8x to the chipset which would then be bottle necked to 4x by it's connection to the cpu? Or would it be 4x to the cpu? There are 24 lanes for the 3950x and 4 of those are for the chipset.
As I understand it, you get 2 x8-lane PCIe 4.0 slots to the CPU, and the chipset gets 4 lanes of PCIe 4.0. The chipset then provides an x8 link to the 3rd GPU. The chipset connection is x4 PCIe 4.0, which is the same bandwidth as x8 PCIe 3.0. This third slot's bandwidth is shared with other devices, but it is currently the best option for 3 GPUs: you get up to PCIe Gen 3 x8 bandwidth depending on contention from other devices, whereas other boards give you at best an x4 electrical connection, so you are limited with a PCIe 3.0 GPU. Theoretically, if your VM uses the 3rd slot along with SATA/NVMe devices also on the chipset, direct memory transfers between them reduce demand on the CPU link. Bifurcation would give you x4 links at PCIe 3.0, since the GTX 16x0 cards are PCIe 3.0.
Edited September 13, 2020 by Decto
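The "x4 PCIe 4.0 equals x8 PCIe 3.0" equivalence above is easy to check from the raw line rates. A back-of-the-envelope sketch (illustrative only; it accounts for line encoding but ignores packet and protocol overhead):

```python
# Approximate usable PCIe bandwidth: line rate (GT/s) x encoding
# efficiency / 8 bits per byte x lane count.
RATE_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}            # per lane
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130}

def bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth of a PCIe link in GB/s."""
    return RATE_GT_S[gen] * ENCODING[gen] / 8 * lanes

print(f"x4 PCIe 4.0 chipset uplink: {bandwidth_gb_s(4, 4):.2f} GB/s")  # ~7.88
print(f"x8 PCIe 3.0 slot:           {bandwidth_gb_s(3, 8):.2f} GB/s")  # ~7.88
print(f"x4 PCIe 3.0 (bifurcated):   {bandwidth_gb_s(3, 4):.2f} GB/s")  # ~3.94
```

So the chipset uplink matches an x8 Gen 3 link exactly, which is why the third slot can still feed a Gen 3 GPU reasonably well, as long as the other chipset devices aren't busy at the same time.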
methanoid Posted October 2, 2020
On 9/13/2020 at 1:24 AM, aasberry said:
That board would definitely do it. But would his slot 3 would only get x4 from the cpu right? 8x+8x from the cpu to slot 1 and 2. Would slot 3 be 8x to the chipset which would then be bottle necked to 4x by it's connection to the cpu? Or would it be 4x to the cpu?
Pretty sure from memory, when I looked at that board at launch, the 3rd slot was wired to the CHIPSET.
Timothyy Posted November 11, 2020
Did you buy anything already, and maybe even get it working? If yes, what setup do you have? I am trying to do the same thing myself, but I still have to find a fix for this issue before I can use the 3rd VM without crashing. https://forums.unraid.net/topic/98216-video_internal_scheduler_error-windows-10-vm/ describes my problem. Running an X570 board with a Ryzen 9 3950X and 3 GPUs (full specs in the post above).
Timothyy Posted November 20, 2020
I have checked 2 things:

sudo lspci -tv

-[0000:00]-+-00.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
           +-00.2  Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
           +-01.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
           +-01.1-[01]----00.0  Kingston Technology Company, Inc. Device 2263
           +-01.2-[02-0e]----00.0-[03-0e]--+-01.0-[04]--+-00.0  NVIDIA Corporation GP102 [GeForce GTX 1080 Ti]
           |                               |            \-00.1  NVIDIA Corporation GP102 HDMI Audio Controller
           |                               +-02.0-[05]----00.0  Intel Corporation SSD 660P Series
           |                               +-03.0-[06-0a]----00.0-[07-0a]--+-03.0-[08]--
           |                               |                               +-05.0-[09]----00.0  Intel Corporation I211 Gigabit Network Connection
           |                               |                               \-07.0-[0a]--
           |                               +-05.0-[0b]----00.0  Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
           |                               +-08.0-[0c]--+-00.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
           |                               |            +-00.1  Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
           |                               |            \-00.3  Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
           |                               +-09.0-[0d]----00.0  Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode]
           |                               \-0a.0-[0e]----00.0  Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode]
           +-02.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
           +-03.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
           +-03.1-[0f]--+-00.0  NVIDIA Corporation GP102 [GeForce GTX 1080 Ti]
           |            \-00.1  NVIDIA Corporation GP102 HDMI Audio Controller
           +-03.2-[10]--+-00.0  NVIDIA Corporation GP102 [GeForce GTX 1080 Ti]
           |            \-00.1  NVIDIA Corporation GP102 HDMI Audio Controller
           +-04.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
           +-05.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
           +-07.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
           +-07.1-[11]----00.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
           +-08.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
           +-08.1-[12]--+-00.0  Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
           |            +-00.1  Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
           |            +-00.3  Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
           |            \-00.4  Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
           +-14.0  Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller
           +-14.3  Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge
           +-18.0  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
           +-18.1  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
           +-18.2  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
           +-18.3  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
           +-18.4  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
           +-18.5  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
           +-18.6  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
           \-18.7  Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7

and

sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:"

I only copied the GPU lines below:

04:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1) (prog-if 00 [VGA controller])
    LnkSta: Speed 5GT/s (downgraded), Width x4 (downgraded)
04:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
    LnkSta: Speed 5GT/s (downgraded), Width x4 (downgraded)
0f:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1) (prog-if 00 [VGA controller])
    LnkSta: Speed 2.5GT/s (downgraded), Width x8 (downgraded)
0f:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
    LnkSta: Speed 2.5GT/s (downgraded), Width x8 (downgraded)
10:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1) (prog-if 00 [VGA controller])
    LnkSta: Speed 8GT/s (ok), Width x8 (downgraded)
10:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
    LnkSta: Speed 8GT/s (ok), Width x8 (downgraded)

@Decto So if I understand it correctly, the GPU at 04:00.0 is sharing bandwidth with the SSD, 2 network cards, USB controller and SATA controller? And this would be different if I had a server-grade motherboard like the Asus Pro WS X570-ACE, with a wired connection and 8 PCIe lanes of bandwidth to the slot, so that it at least got 8 lanes instead of 4?
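To make those LnkSta lines easier to scan, here is a small sketch (my own, not from the thread; the regex and trimmed sample are assumptions based on the output posted above) that pairs each PCI device with its link status:

```python
import re

# Trimmed from the `lspci -vv` output quoted above.
SAMPLE = """\
04:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
    LnkSta: Speed 5GT/s (downgraded), Width x4 (downgraded)
10:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
    LnkSta: Speed 8GT/s (ok), Width x8 (downgraded)
"""

def link_status(lspci_vv: str) -> dict:
    """Map each PCI address to its LnkSta line, so downgraded links stand out."""
    links = {}
    current = None
    for line in lspci_vv.splitlines():
        # Device headers start at column 0 with an address like "04:00.0".
        m = re.match(r"([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]) (.+)", line)
        if m:
            current = m.group(1)
        elif current and "LnkSta:" in line:
            links[current] = line.split("LnkSta:")[1].strip()
    return links

for addr, sta in link_status(SAMPLE).items():
    print(addr, "->", sta)
```

Fed the full `sudo lspci -vv` output instead of the sample, it prints one line per device, making it obvious which GPU is training at which width and speed.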
Decto Posted November 22, 2020
On 11/20/2020 at 7:02 AM, Timothyy said:
So if I understand it correctly the IOMMU group 04:00.0 is sharing bandwith with the SSD, 2 network cards, usb controller and sata controller? And this would be different if I had the server grade motherboard with a wired connection and 8 pci lanes bandwith to the slot like the Asus Pro WS X570-ACE so that it at least got 8 lanes instead of 4?
I'd agree with your interpretation: the bandwidth of the third slot is shared by all chipset devices. The same thing happens on the Asus WS board, except there it is an x4 PCIe 4.0 bus that is shared among all devices, and the card is presented with an x8 PCIe 3.0 bus, the bandwidth of which is shared. There just aren't enough PCIe lanes to go around on consumer platforms.
unrateable Posted November 22, 2020
On 6/8/2020 at 7:13 PM, Unraid Newbie said:
I want to run 3 windows vm running at the same time, each with gpu and usb pass-through.
Besides the PCIe lane issues the other posters mentioned for proper GPU usage, you may also run into trouble isolating 3 USB controllers on top of the one you need for the host machine itself. Hot-plugged virtual USB would be an option, but I don't trust its performance much...
Edited November 22, 2020 by unrateable
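To see which USB controllers you could even isolate, a quick sketch (my own, not from the thread) that lists every PCI device grouped by IOMMU group; a controller is only cleanly passable if its group contains nothing the host still needs:

```shell
# List PCI devices per IOMMU group. Prints nothing if IOMMU is disabled
# (no /sys/kernel/iommu_groups entries), which is also why the guard
# below skips the loop body when the glob matches nothing.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue                  # glob didn't match anything
    group=${dev%/devices/*}                    # .../iommu_groups/<n>
    printf 'group %s: %s\n' "${group##*/}" "$(lspci -nns "${dev##*/}")"
done | sort -V
```

On an Unraid host with IOMMU enabled, each `group N:` line that contains exactly one USB controller (and nothing else) is a candidate for passthrough to one of the VMs.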
Timothyy Posted November 22, 2020
4 hours ago, Decto said:
I'd agree with your interpretation, the bandwith of the third slot is shared by all chipset devices. The same thing happens on the ASUS WS board, except it is a X4 PCI 4.0 Bus that is shared among all devices and the card is presented with a x8 PCI 3.0 Bus. The bandwith of which is shared. There just aren't enough PCI-E lanes to go around on consumer platforms
<image>
@Decto So that is then also dependent on what I connect to the motherboard, right? On my Taichi board I could, in theory, disconnect the M.2 SSD and the Ethernet card so that it will get x8 speed instead of x4? In the end it will allow more available assignable bandwidth? Or am I wrong? In your opinion, would I get a real benefit from swapping the motherboard for the Asus Pro WS X570-ACE? If you stood in my shoes, would you do it? And then of course the question: will it allow me to use 3 GPUs simultaneously (guaranteed)? I can live with giving up a second Ethernet card and/or the M.2 SSD; it's just for gaming.