dmarshman

ASRock X570 Taichi & Ryzen 9 3950X [w/ AMD Radeon VII]


Per the title.. 

Plan to use this as a workstation / high-end gaming rig; running both Windows 10 and macOS Catalina in Unraid VMs [as well as Win 10 natively].

 

I'll add a new thread in the full build section once I get the machine up and running.

 

Couple of days playing around so far:

 

Latest MBoard BIOS - 2.50  [2019/11/13]

Running in UEFI

Stock clocks, voltages, etc for 3950X & RAM.

32 GiB RAM   :   G.Skill Ripjaws V 32GB 2 x 16GB DDR4-3200 PC4-25600 CL16 Dual Channel Desktop Memory Kit F4-3200C16D-32G

 

Hoping/Planning to go to 64 GiB RAM and [maybe] 2x Vega VIIs

 

 

Initial testing:

 

VM#1 :   MacOS Catalina

- using SpaceInvader's Docker / template, plus some tweaks    

- 8 cores / 16 threads - 16 GiB RAM

- Vega VII passed through in PCI-E slot #1  [PCIe 3.0 x16]  - no hardware acceleration / Metal / OpenCL yet

Cinebench R20 - 4850 [Multicore]


 

 

Native Win 10:

- 1 TB PCIe 4.0 x4 NVMe drive

- 16 cores / 32 threads - 32 GiB RAM

- Vega VII  [stock everything]

Cinebench R20 - 9454 [Multicore]

 

PCI Devices and IOMMU Groups

IOMMU group 0:	[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 1:	[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 2:	[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 3:	[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 4:	[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 5:	[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 6:	[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 7:	[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 8:	[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 9:	[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
IOMMU group 10:	[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 11:	[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
IOMMU group 12:	[1022:1484] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
IOMMU group 13:	[1022:1484] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
IOMMU group 14:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
	[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 15:	[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
	[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
	[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
	[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
	[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
	[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
	[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
	[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
IOMMU group 16:	[1987:5016] 01:00.0 Non-Volatile memory controller: Phison Electronics Corporation Device 5016 (rev 01)
IOMMU group 17:	[1022:57ad] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57ad
IOMMU group 18:	[1022:57a3] 03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
IOMMU group 19:	[1022:57a4] 03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
	[1022:1485] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
	[1022:149c] 0a:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
	[1022:149c] 0a:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
IOMMU group 20:	[1022:57a4] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
	[1022:7901] 0b:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
IOMMU group 21:	[1022:57a4] 03:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
	[1022:7901] 0c:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
IOMMU group 22:	[1b21:1184] 04:00.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
IOMMU group 23:	[1b21:1184] 05:01.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
	[8086:2723] 06:00.0 Network controller: Intel Corporation Device 2723 (rev 1a)
IOMMU group 24:	[1b21:1184] 05:03.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
IOMMU group 25:	[1b21:1184] 05:05.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
	[8086:1539] 08:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
IOMMU group 26:	[1b21:1184] 05:07.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
IOMMU group 27:	[1002:14a0] 0d:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Device 14a0 (rev c1)
IOMMU group 28:	[1002:14a1] 0e:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Device 14a1
IOMMU group 29:	[1002:66af] 0f:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 [Radeon VII] (rev c1)
IOMMU group 30:	[1002:ab20] 0f:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 HDMI Audio [Radeon VII]
IOMMU group 31:	[1022:148a] 10:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
IOMMU group 32:	[1022:1485] 11:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
IOMMU group 33:	[1022:1486] 11:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
IOMMU group 34:	[1022:149c] 11:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
IOMMU group 35:	[1022:1487] 11:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
IOMMU group 36:	[1022:7901] 12:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
IOMMU group 37:	[1022:7901] 13:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
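For anyone wanting to reproduce a dump like the above outside the Unraid UI, the groups live in sysfs; here's a small sketch (generic Linux, assumes `lspci` from pciutils is installed and the kernel was booted with the IOMMU enabled):

```shell
#!/bin/sh
# Enumerate IOMMU groups and the PCI devices in each one.
# Needs the kernel booted with amd_iommu=on (or intel_iommu=on) or nothing prints.
list_iommu_groups() {
    for g in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$g" ] || continue               # glob didn't match: IOMMU is off
        grp=${g%/devices/*}; grp=${grp##*/}   # group number from the path
        dev=${g##*/}                          # PCI address, e.g. 0000:0f:00.0
        printf 'IOMMU group %s: %s\n' "$grp" "$(lspci -nns "$dev")"
    done
}
list_iommu_groups | sort -V
```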

 

Share this post


Link to post
On 12/5/2019 at 6:48 PM, dmarshman said:

Per the title... Plan to use this as a workstation / high end gaming rig [...]

 

Hey there, nice build!
I'm feeling a lot of parallels here, though with slightly lower-spec'd items than yours.
I've got the Aorus X570 Pro WiFi with a 3900X, 2 Vega Frontier cards and a Quadro 4000, and 32GB of 3600 MHz CL16 RAM.

Currently I have the Vegas in slots 1 and 2, and the Quadro in 3.

1 Vega for macOS, 1 Vega for Windows, and the Quadro for Unraid - webUI and possibly HW acceleration for Plex.

When I have 1 Vega running in a VM, if I turn on the other, it will shut off what's running!
Do you know why, or how to fix this? If macOS is running and I start Windows, it will shut down the Mac.

I'm kind of freaking out about it.

 

Also, if you know how to optimize macOS that would be cool. I've tried various SMBIOS tweaks; RAM always shows as DDR3, and I feel like the benchmark scores could be higher.

[Attached image: 20191228_155534.jpg]

Posted (edited)
7 hours ago, Eksster said:

Hey there nice build! [...]

 

Wow. I love your repurposed / custom-modified Mac case... was it a PowerMac G5 or a Xeon Mac Pro?

{I still have my 2008 dual quad-core 2.8 GHz Xeon Mac Pro… I haven't used it as my daily driver since 2016, and haven't even turned it on in a couple of years, but it's still fully intact... maybe a project for another day}

 

For my X570 computer, I upgraded to 64 GiB of RAM [4 DIMMs running at 2800 MHz], and have two Radeon VIIs with a 1300W PSU; currently it's in a Thermaltake Core P3 [open] case.

 

Re: the KVM machines and macOS / gfx cards - I haven't experienced the issues you've seen, but I have mainly been using the computer with Windows 10 on bare metal [i.e. without Unraid] over the past couple of weeks.

I have the Radeon VIIs in PCIE Slot #1 and Slot #3.

When using Unraid, I've been running a Windows 10 VM alongside an Ubuntu 18.04 LTS VM with no problems, using the two Radeon VIIs. But to be honest, I gave up on macOS after four days of experiments in which I constantly failed to get any type of acceleration on the gfx card, and ended up corrupting/wiping the installation every few hours with failed boots.

I think the [Unraid 6.8 RC stream] reversion to the 4.19.x kernel [from 5.x] made things less functional for me on macOS, so I decided to wait for Unraid 6.9 [and the latest versions of QEMU] before doing any more experiments. Looks like 6.8.1 RC just dropped with updated libvirt and QEMU, but still on the 4.19 kernel, so still waiting - but maybe I'll try again this weekend….

[I have my 2018 Macmini for day to day MacOS usage so not in too much of a rush].

 

What type of Radeon Vega cards do you have - 56, 64 or VII?

Please bear in mind that there is a known reset bug on the Vega 56 and 64 [I think now fixed in recent kernels], and also on the Radeon VII [not yet fixed], which impacts virtual machines and could be affecting your system. It basically stops a VM from being restarted once it [the VM] has been fully shut down... [you have to reboot Unraid. SpaceInvaderOne has videos about it...].

Edited by dmarshman


I've set up Invader's reset scripts for my Vega cards, but for some time now this is supposed to have been resolved without the need for the scripts. But I AM having reset problems sometimes; I just can't yet pin down the exact cause. Still ironing things out.

I'd definitely try again with macOS. I'm using Invader's Macinabox docker app for my Hackintosh VM and it's isolated things very well. I symlink my home folder to a TB SSD in Unassigned Devices, and the docker image is its own image rather than a hidden EFI drive on another disk! I love this. I have like a 60-gig virtual system drive that acts as my main drive, even though I consider my main drive to be my home folder. I can open my config.plist in Clover, make out which drive it's on quite easily, and make changes that work. Though I don't know the magic knobs to turn, if there even are any, for RAM improvements. I haven't tried to see about hardware acceleration with the Vega - how do you check if that is working?

 

My Vega cards are the Frontier Editions. They never get any love. Until the VIIs came along, I think they had a 2-year window of being the best cards ever, period. I bought way too many; still paying them off. I have some computers with 12, but nothing that would work well as a workstation. I wish I could swing the nicest Threadripper and run more cards, but I think I can scroll iPhoto and drive CAD with each OS having 1 card for now. The FE is a 14 nm VII with memory that's not clocked as high as the VII's - a 2017 card with 16GB HBM2 :-)

 

I applied the syslinux add-on tag to turn off Unraid's need for a graphics card. I got Windows and Mac to boot, each with a gfx card, at the same time. But I see Unraid is still displaying out of my Quadro card, so I'm not sure if adding that tag did anything, or if it was simply from changing my Windows VM's RAM and assigning its gfx card audio to be on the same bus and slot as device 1.

 

Personal life allocated almost no time to tinker after work yesterday either. I look forward to working on it this evening, though I'm out of tricks to try. I'll just be working on little things, and maybe backing up some more VM stuff. I plan to set up my Symless Synergy with my Windows VM, and get my AMD drivers going.

Did I mention I saw AMD has Mac drivers listed on their site? I need to see what that's about. I've understood that AMD drivers are only built into macOS and not something you can modify.

 

I'm using a G5 case there, though I first did a Mac Pro. Mac Pro cases give you less motherboard room and stick the PSU up in the back. They're cool too, just tighter. I plan to cut 2 G5 cases apart and weld them together, with doors on both sides :-)

 

Know anything about maximizing PCIe slots and effective gfx card use? I really don't want a gfx card taken up by Unraid. I'm (uneducatedly) nervous about PCIe lane limitations on X570 with the 3900X.

19 minutes ago, Eksster said:

But I AM having reset problems sometimes, I just can't yet pin down the exact cause.

My reset issue went away on my RX 570 cards when I switched unRaid to legacy boot mode. UEFI seemed to cause a few issues on my X399/TR2570X.

 

20 minutes ago, Eksster said:

Know anything about maximizing pcie slots and effective gfx card use? I really don't want a gfx card taken up by unraid. I'm uneducatedly nervous about pcie lane limitations on x570 with the 3900x

Look into Bifurcation. Start here:

https://hardforum.com/threads/pcie-bifurcation.1870298/

 

I'm planning to run a pair of RX 570s at x8 each, sharing a x16 PCIe slot, and another x16 slot broken out into four x4 slots for VM USB adaptors and the like, maximising my available slots and lanes. Fingers crossed.....

1 minute ago, meep said:

My reset issue went away on my Rx 570 cards when I switched unRaid to legacy boot mode. [...]

Thank you, I absolutely love it when I'm understood and reciprocated with - by you, but not just you. This thread is my only Unraid forum experience so far, and so far it's very supportive and helpful; a lot of nasty forums out there. Also, this topic is so in the weeds, it's great to engage with like-minded folks. None of the IT people or other engineers at my work can even begin to understand, or have interest in, anything past basic computer use.

 

So yeah, the term "bifurcation" - in this context, thank you for that. Splitting lanes physically, and (I don't know how to refer to it cogently) splitting the system's lanes, is exactly the next thing I need to look into. I'm assuming that when I do, anything split off of one slot has to go to the same VM. I'll definitely look into this after work. Thank you.

18 minutes ago, Eksster said:

I'm assuming when I do, that anything split off of one slot has to go to the same vm. 

I don't believe this is the case. If you have IOMMU set up correctly, the two (or more) devices should split out into distinct groups.
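For what it's worth, the group a device lands in is easy to check from the console: each PCI device's `iommu_group` entry in sysfs is a symlink whose last path component is the group number. A runnable sketch (the symlink target here is a captured example; on the host you'd run `readlink /sys/bus/pci/devices/0000:06:00.0/iommu_group` directly):

```shell
#!/bin/sh
# The iommu_group symlink for a PCI device points into /sys/kernel/iommu_groups/<N>.
# A typical readlink result is parsed here so the snippet runs anywhere.
link='../../../kernel/iommu_groups/23'
group=${link##*/}      # strip everything up to the last slash -> group number
echo "IOMMU group $group"
```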

We'll soon find out as I'll be purchasing the hardware in the coming weeks - keep an eye on my blog as I'll be writing up details there.

 

It's great to hear your experience has been good so far. I've been using unRaid since v4.x and find these forums some of the most civil and helpful around.

 

 

Posted (edited)
Quote

Know anything about maximizing pcie slots and effective gfx card use? I really don't want a gfx card taken up by unraid. I'm uneducatedly nervous about pcie lane limitations on x570 with the 3900x

 

I didn’t have any issues disabling the primary gfx card from being used by Unraid, and passing through both my Radeon VIIs to VMs.  I'll list exactly how I achieved it with my setup below:

 

Looks like you are much more advanced with Clover tweaking than I am, and I may need some tips from you. I'll reply in a separate message for that...

 

 

 

Two steps for identical-GPU passthrough, with neither card being used by Unraid:

- #1 - disable the GPU for Unraid

- #2 - make them available to pass through to VMs

 

#1 - To disable the Unraid video:

 

From the Main menu, click the name of your Boot Device (flash). Under Syslinux Config -> Unraid OS, add "video=efifb:off" after "append initrd=/bzroot".
The line should now read "append initrd=/bzroot video=efifb:off".
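For reference, the resulting stanza on the flash drive (in /boot/syslinux/syslinux.cfg) ends up looking roughly like this - a sketch; the label text and surrounding lines vary by Unraid version:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot video=efifb:off
```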

 

When you reboot you will notice there is no video output when unraid boots (you will be left with a freeze frame of the boot menu). Your primary GPU is now ready to pass through.

 

#2 - Then pass them through. (I use the following method - binding by PCI address - which works flawlessly for identical graphics cards, but needs to be updated if you add/remove any devices in your system, as the addresses may be reassigned.)

 

In the following file:

/boot/config/vfio-pci.cfg

add a single line with "BIND=" followed by the PCI address(es) of your graphics cards (or any other PCI devices you want to pass through).

 

Example - for my system, with two Radeon VIIs:

BIND=13:00.0 13:00.1 10:00.0 10:00.1

 

[Technically, I think the 2nd function - the xx:xx.1 audio device - is redundant/unnecessary, but I include it anyway as it doesn't hurt either.]
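If you need to look up the addresses to put after BIND=, `lspci -nn` shows them in the first column. A runnable sketch that builds the line from a captured sample (the sample text mirrors the Radeon VII entries in the IOMMU dump above; on the real host you'd pipe `lspci -nn` itself):

```shell
#!/bin/sh
# Derive a vfio-pci.cfg BIND line from "lspci -nn" style output.
# A captured two-line sample is parsed here so the snippet runs anywhere.
sample='0f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 [Radeon VII] [1002:66af] (rev c1)
0f:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 20 HDMI Audio [Radeon VII] [1002:ab20]'
# First whitespace-separated field of each line is the PCI address
addrs=$(printf '%s\n' "$sample" | awk '{print $1}' | tr '\n' ' ')
echo "BIND=${addrs% }"
```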

 

By doing both of the above, I am able to run Windows and Ubuntu* VMs simultaneously, with both having a passed-through, fully accelerated GPU.

(*and theoretically macOS too, I just personally haven't got it working properly yet).

Edited by dmarshman


Awesome. This BIND thing I have yet to try. And given what you say the result of "video=efifb:off" after "append initrd=/bzroot" should be, I must have messed up this operation, because the Unraid monitor never froze afterwards.

And quite exciting: after some Linus and other YouTube videos on PCIe, each new generation basically doubles per-lane bandwidth, which makes me hopeful about lanes on my 3900X compared to an older platform with a similarly stated number of available lanes. So a PCIe 3.0 gfx card, like all of ours right now, should only need 8 lanes on a gen-4 platform to get the equivalent of 16 lanes on gen 3. Much more to look into and try, but this gives a lot of exciting hope, and I'm surprised it's not used more as a sales pitch!
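For a rough sanity check on the lane math: per-lane bandwidth doubles (not quadruples) with each PCIe generation, so a gen-3 x16 card can be matched by eight gen-4 lanes. A back-of-envelope sketch using approximate usable throughput after 128b/130b line coding:

```shell
#!/bin/sh
# Approximate usable PCIe bandwidth per lane, in MB/s, after 128b/130b encoding:
gen3_lane=985    # PCIe 3.0: 8 GT/s per lane
gen4_lane=1969   # PCIe 4.0: 16 GT/s per lane -- double, not 4x, per generation
echo "gen3 x16: $((gen3_lane * 16)) MB/s"
echo "gen4 x8:  $((gen4_lane * 8)) MB/s"
```

The two totals come out nearly identical, which is the whole argument for running gen-3 cards at x8 on a gen-4 platform.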


I can't find "/boot/config/vfio-pci.cfg".

I'm not familiar with this. Below you can see I'm doing a similar thing with "vfio-pci.ids=1106:3483,14e4:43a0".

using device IDs rather than the addresses you mention. Though in the second pic, for one of my cards, it shows the IOMMU group #, then the device ID, then the number set you're referring to.

However it's called, I don't see where I can implement what you say. I have Krusader and can peruse my file system; at the root there is a boot folder, but it is empty. Nowhere else in Unraid do I see boot.

Also, as I said earlier, I can get my cards to run simultaneously now. What do you think this BIND function might do? And is it different from the device-ID "leave me alone, Unraid!" command shown in green in my "Unraid OS" section below?
I'm about to reboot after kicking out 2 device IDs that were there but that I don't see in my device list anymore. My video-off was done as you say, but didn't behave as you said, so the only thing I know to try is to see if knocking out those 2 nonexistent device IDs changes anything.

 

[Screenshot attachment: Screen Shot 2020-01-08 at 4:55 PM]

 

[Screenshot attachment: Screen Shot 2020-01-08 at 4:59 PM]

16 minutes ago, Eksster said:

I can't find "/boot/config/vfio-pci.cfg". [...]

 

 

What you are doing above should work OK.  I listed an alternative method [which is helpful if you have identical cards].

 

If you want to try it - and I'm not sure you need to - then it's a text file that you need to create [in the /boot/config/ directory].

 

Use the "shell" button in Unraid's UI to open a window, and then type:

nano /boot/config/vfio-pci.cfg

 

This will open [or create and open if it doesn't already exist] the file for editing:

Add [based on your post above]:

BIND=11:00.0 11:00.1

 

Then press <control>-X to exit, "y" to save, then <enter>/<return>, and then reboot...
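After the reboot, one way to confirm the bind actually took is to check which kernel driver now owns the card with `lspci -nnk -s 11:00.0` (same address as above); you want "Kernel driver in use: vfio-pci" rather than amdgpu. A sketch that parses a captured sample of that output (the capture text is a hypothetical example, so the snippet runs anywhere):

```shell
#!/bin/sh
# Check an "lspci -nnk -s 11:00.0" capture for which driver owns the GPU.
capture='11:00.0 VGA compatible controller [0300]: AMD/ATI Vega 20 [Radeon VII] [1002:66af]
	Kernel driver in use: vfio-pci
	Kernel modules: amdgpu'
# Pull out whatever follows the "Kernel driver in use:" marker
driver=$(printf '%s\n' "$capture" | sed -n 's/.*Kernel driver in use: //p')
echo "driver=$driver"
```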


#nextlevel ;-) 
I might get to trying that tonight. At least soon. 

Not only do I have a lot of the same graphics cards, but I have 2 Logitech M705 mice that I think would be pretty cool to use. So far I can't see how to use either one when both are connected. Could this help with that?

10 hours ago, GreenEyedMonster said:

Why not pass through a USB controller for each one? One USB 3.0 controller should be enough for everything you might add?

Great idea! I haven't yet for multiple reasons but I should!

1. I gave my Mac VM a dedicated USB card, which is super! But now I am out of available motherboard PCIe slots.

2. I haven't got as techie as I probably should with passing through 2 of the motherboard USB controllers, which I hear I can! I'm on the edge of being happy with what I have vs. using my old hardware and returning this RAM, mobo and proc to hold out for Zen 3. Ridiculous, probably. I do have a 7700K and an i5-8400 with mobos sitting for sale, as weirdos from across the country lowball me. If I were to get rid of my current mobo, why invest in hacking it? But yeah, I'll probably stay the course on selling my older ZombieLoad-era equipment.

3. With my M705 mouse plugged into the motherboard and passed to macOS, the second M705 mouse plugged into the USB card passed to macOS did nothing. Didn't inspire confidence. But I know Windows can behave more favorably all around, and possibly having 2 of the same mouse in macOS just isn't ever a viable option. Taking the time with the scientific method to discover what is true... ain't nobody got...

 

But yeah, my goal is to have the 1 machine, with next to no compromises, be a Windows engineering workstation and a Mac VM with software KVM sharing (Synergy - working nicely currently), while also running other VMs and dockers. I subscribe to the incorrect syntax of "dockers" - love it. So this is all mostly in place, aside from the basic logic that man, I really need to be able to hot-swap USB with my Windows VM! So a long-winded, fun explanation of how right you are, GreenEyedMonster 🙂


@Eksster

 

USB Controller pass through is absolutely the way to go. 

 

Not only does it minimise faffing around in VM configs, it's much (much!) more reliable. For example, I have a set of HK SoundSticks that I use with my OSX VM. Passing them through via unRaid is hit and miss: if they work at all, the audio breaks up so as to be unusable. However, when I pass through a USB adapter to the VM and plug the devices in there, the speakers work perfectly.

 

Take care with USB pass through to OSX. Not all adapters are supported natively. You might find some don't work as OSX doesn't have the drivers or support the chipset. This can be especially problematic with motherboard controllers. (for reference, I've found these to be 100% compatible).

 

Another point to note: passing through motherboard controllers is a great way to get dedicated ports for VMs. Take care though! Remember that your unRaid USB stick is plugged into one of those ports. And sometimes the controllers share IOMMU groups with your network card. If you pass through a MB controller in either of those cases, you will find that either your system won't boot (it can't find the USB) or you won't be able to access the UI or shares remotely (as the network is unavailable).

 

If you are passing through USB MB controllers, ensure that you have another system on hand that you can plug your unRaid USB stick into to manually edit the config files - or, better still, restore them from a previous backup - in case things get messed up and you cannot boot.
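The backup half of that advice can be sketched as a tiny script. Paths here are assumptions: on a real Unraid box the source would be /boot/config and the destination an array share, but a temp stand-in is used so the snippet runs anywhere:

```shell
#!/bin/sh
# Sketch: tarball a config directory before risky passthrough edits.
# Stand-in for /boot/config so the script is runnable on any machine:
SRC=$(mktemp -d)/config
mkdir -p "$SRC"
echo 'BIND=11:00.0 11:00.1' > "$SRC/vfio-pci.cfg"   # example file content
DEST=$(mktemp -d)                                    # stand-in for a backup share
stamp=$(date +%Y%m%d)
tar -czf "$DEST/flash-config-$stamp.tar.gz" -C "${SRC%/config}" config
ls "$DEST"
```

Restoring is the reverse: boot the stick in another machine and unpack the tarball over its config directory.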

 

Speaking from experience!!!

 

On running out of PCIe slots: the MB controller passthrough will help, and the bifurcation approach we previously discussed will help. Or you can find USB adapters with multiple controllers on board. There was a Sonnet model that had 4x controllers and 4x ports, which would have permitted USB passthrough for up to 4x VMs on a single card. Unfortunately, it's been discontinued, and I cannot find out definitively whether the replacement model works as well.

 

Edited by meep


How are you finding the build? I'm after a more gaming-focused build now that I have a separate 2U Plex QuickSync server. I posted the following on Reddit, and may make a post on here depending on replies.

 

I'm looking to upgrade my current Unraid build to the same CPU etc., moving from the following, but I'm slightly worried about running out of PCIe lanes:

  • Dual Xeon E5-2695-V2 (Changing) - Possibly Ryzen 9 3950X
  • Supermicro X9DRH-7TF E-ATX (Changing) - Unsure yet, would want a 10Gbe NIC, M.2 slots
  • 64GB ECC REG DDR3 RAM (Changing) - 32GB DDR4-3200 PC4-25600
  • Nvidia 1080Ti (Passed through to Windows VM)
  • LSI SAS 2308 PCI-E controller (2 port, each one connects to 4 of the 16 drive bays) 
  • Solarflare dual SFP+ 10Gbe NIC (would remove if the new motherboard has a 10Gbe NIC)
  • Fresco Logic PCI USB 3.0 controller (passed through to Windows 10 gaming VM)
  • 8 x WD Red 8TB 3.5” drives (Unraid, 6 usable, 2 parity)

  • 2 x 500GB Samsung 860 Evo SSD (Unraid Cache Drive Pool)

  • 1 x 1TB Western Digital WD Blue 3D Nand Sata SSD (Used for vdisk storage but can be removed if I'm low on PCIe Lanes)

  • PCI Express M.2 NGFF PCI-E SSD to PCIe 3.0 x4 Host Adapter Card (would remove if the new motherboard has M.2 slots)

  • WD Black SN750 NVMe 250GB SSD (passed through to Windows 10 VM where Windows 10 is installed on)

Would I have enough PCIe lanes? And if so, could someone suggest a decent motherboard with a 10GbE NIC and M.2 slots?

 

Cheers

