Help with VR VM Gaming Build



I'm new to this. I'm a mechanical engineer who happened to be put into a situation where we need to be able to run at least 3 instances of a VR LAN-based game created in Unreal. I have experience building computers, so I am not worried about that part; it's the VM/VR side that I am new to.

 

We need LAN because this computer will not have access to the internet except when I set it up at home; it is not allowed on the company network. Darn red tape.

 

So this is what I was thinking, and I would like some feedback on whether there are any inherent flaws in my game plan.

Build:

 

CPU: Intel Core i9-9960X

Motherboard: ASUS TUF X299 Mark 1

RAM: Corsair Vengeance RGB Pro 128GB (8x16GB)

Storage: 1 TB M.2, 2 TB SSD

Video cards: RTX 6000, 2x RTX 2080 Ti

PCIe: USB 3.0 add-in card with 4 individual controllers

 

 

Execution: Pass the two 2080 Tis through to VMs and set up the last instance on the main computer. I would pass that one through to a VM as well, but I have read that it is a pain to get the last graphics card passed through since the CPU doesn't have built-in graphics.

 

 

So I guess I'm asking if you can still game on the host PC while the two VMs also run games?

 

I'm not sure about the storage; is it easier to buy another 2 SSDs and have them purely for the VMs?

 

What is the best way to connect these game instances?

 

Notes: The RTX 6000 is for other work applications. I know GTX-series cards are cheaper, but the OpenGL and driver support for some applications is worth it.

 

Any tips/comments or general help is very much appreciated, as I have not used Unraid or really any VM software before.

 

 

 

1 hour ago, Jason Z said:

So I guess I'm asking if you can still game on the host PC while the two VMs also run games?

 

In the case of Unraid, the host PC is running a very lean and customized version of Slackware Linux, primarily intended to be a NAS operating system with VM and Docker hosting. So you're not going to run any game that you might have in mind on the host PC.

2 hours ago, trurl said:

 

In the case of Unraid, the host PC is running a very lean and customized version of Slackware Linux, primarily intended to be a NAS operating system with VM and Docker hosting. So you're not going to run any game that you might have in mind on the host PC.

Oh, OK, so you physically boot the computer from the USB drive. I thought you had the computer running Windows and then launched the software to set up the other VMs. Do you have any experience passing through the last GPU to a VM? From what I have seen it is kind of complicated for Nvidia cards, at least for me.


I have 4 Nvidia cards, one passed through to each of my 4 gaming VMs. The last one required a BIOS ROM to pass through to the last VM, but it worked fine besides that. The other thing you will probably have to look at is the number of USB controllers you can break up into separate IOMMU groups to pass to each machine for the VR headsets. I got an Allegro card with 4 controllers; it works great for USB 3.0 passthrough.
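
If you want to sanity check the grouping before you commit to a card, something like this quick script (a rough sketch, untested here), run from the Unraid console, will dump every IOMMU group and what lives in it. Each GPU and each USB controller you plan to pass through should show up in its own group:

#!/usr/bin/env python3
# Rough sketch, untested - lists each IOMMU group and the PCI devices in it,
# so you can check that every GPU and USB controller you want to pass through
# sits in its own group.
import os
import subprocess

IOMMU_ROOT = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(IOMMU_ROOT), key=int):
    print(f"IOMMU group {group}:")
    devices_dir = os.path.join(IOMMU_ROOT, group, "devices")
    for dev in sorted(os.listdir(devices_dir)):
        # lspci -s <address> prints a human-readable name for the device
        name = subprocess.run(["lspci", "-s", dev],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {dev}  {name}")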

Just now, jordanmw said:

I have 4 Nvidia cards, one passed through to each of my 4 gaming VMs. The last one required a BIOS ROM to pass through to the last VM, but it worked fine besides that. The other thing you will probably have to look at is the number of USB controllers you can break up into separate IOMMU groups to pass to each machine for the VR headsets. I got an Allegro card with 4 controllers; it works great for USB 3.0 passthrough.

I have already found some PCIe cards with separate USB controllers, as the VR sets will all be the same. I wish they didn't take up a PCIe slot though; I'm pretty short on those.

 

How hard was the BIOS ROM passthrough, and is it possible to set all of this up off the real internet, with some sort of router handing out IP addresses or something? In case you haven't been able to tell, this stuff is kind of out of my realm of study. The main PC with all the graphics cards will not have access to the company's internet, and we can't get access since it would require a specific image on a specific pre-built PC... yuck... but I will be able to take it home to initially set things up; it should be able to operate after that without a real internet connection.

 

Thank you!


Yeah, no internet shouldn't be an issue. I actually have a Steam cache and a dedicated server for some of my games running on the same machine, so even when the internet goes down, clients can still update games and connect to that dedicated server. You will need a management IP address, but it doesn't really need access to the internet unless you want to download updates for plugins that you install. The only other concern is management: you will likely want a router or switch and a separate computer for management that can connect to that router/switch. That is the best way, since you can run headless with the 3rd GPU taken by the third VM. Otherwise, you can do it by preventing the 3rd machine from booting when Unraid starts and doing management tasks with the 3rd GPU in GUI mode. Then, when you are done with management, you boot the third machine and manage from a web browser in any running VM. You will want an IP range and just keep Unraid and all 3 machines on the same range.
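
For example (the names and addresses below are just made-up placeholders for whatever static range you pick), a quick script like this, run from the Unraid console or any Linux box on the switch, will confirm everything is reachable on the offline LAN:

#!/usr/bin/env python3
# Sanity check for the offline LAN - the names and addresses are made-up
# examples, substitute whatever static range you actually use.
# Uses the Linux ping flags (-c count, -W timeout in seconds).
import subprocess

HOSTS = {
    "unraid-management": "192.168.10.2",
    "gaming-vm-1":       "192.168.10.11",
    "gaming-vm-2":       "192.168.10.12",
    "gaming-vm-3":       "192.168.10.13",
}

for name, ip in HOSTS.items():
    result = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print(f"{name:18s} {ip:15s} {'up' if result.returncode == 0 else 'DOWN'}")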

 

My board has 4 PCIe slots, so I had to use the U.2-to-PCIe x4 (PLX bridge) adapter for my USB card. All 4 slots are holding GPUs.

 

The BIOS for the GPU is not usually a big deal; just remove some header info from a dumped ROM and boom. Not sure about the 2000 series though.
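
If you'd rather script it than hex-edit by hand, the idea is roughly this (untested sketch, file names are just examples): keep everything from the PCI option ROM signature (55 AA) just before the "VIDEO" marker onward, and drop the extra header in front of it.

#!/usr/bin/env python3
# Untested sketch of the header-trimming step for a dumped Nvidia VBIOS.
# File names are just examples.  The usual advice is to find the "VIDEO"
# marker in a hex editor and delete everything before the 55 AA signature
# that precedes it; this does the same thing.
SIGNATURE = b"\x55\xaa"

with open("dumped.rom", "rb") as f:
    data = f.read()

video = data.find(b"VIDEO")
# Use the last 55 AA before the VIDEO marker; fall back to the first one found.
start = data.rfind(SIGNATURE, 0, video) if video != -1 else data.find(SIGNATURE)

if start <= 0:
    print("Nothing to strip (or signature not found) - check the dump by hand.")
else:
    with open("trimmed.rom", "wb") as f:
        f.write(data[start:])
    print(f"Stripped {start} header bytes, wrote trimmed.rom")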

9 minutes ago, jordanmw said:

Yeah, no internet shouldn't be an issue. I actually have a Steam cache and a dedicated server for some of my games running on the same machine, so even when the internet goes down, clients can still update games and connect to that dedicated server. You will need a management IP address, but it doesn't really need access to the internet unless you want to download updates for plugins that you install. The only other concern is management: you will likely want a router or switch and a separate computer for management that can connect to that router/switch. That is the best way, since you can run headless with the 3rd GPU taken by the third VM. Otherwise, you can do it by preventing the 3rd machine from booting when Unraid starts and doing management tasks with the 3rd GPU in GUI mode. Then, when you are done with management, you boot the third machine and manage from a web browser in any running VM. You will want an IP range and just keep Unraid and all 3 machines on the same range.

 

My board has 4 PCIe slots, so I had to use the U.2-to-PCIe x4 (PLX bridge) adapter for my USB card. All 4 slots are holding GPUs.

 

The BIOS for the GPU is not usually a big deal; just remove some header info from a dumped ROM and boom. Not sure about the 2000 series though.

Many thanks, I'll definitely look into that PLX bridge thing; it may allow us to use 4 VR clients at once.

 

Do you know how that PLX bridge works with PCIe lane usage? I know the i9-9960X has 44 lanes, and with 4 cards you already end up limited to x8 on some of the graphics slots. Do you know if you could run that bridge, 4 GPUs, and an M.2 SSD at the same time? I would assume all four graphics cards would run at x8, which should be fine on PCIe 3.0? Like I said, I know a little, but a lot of this is new to me.


I got an X399 Taichi motherboard with a 1920X, and the combo of motherboard/CPU determines how many lanes you get. I have 3x M.2, 1x U.2, and 4 PCIe slots that can run at x16/x8/x16/x8, so it works great for me. I have 12 CPU cores with SMT enabled. I have 2 of the M.2 slots populated with 1TB WD Black drives and use my U.2 to connect to a PLX adapter:

https://www.microsatacables.com/u2-sff8639-to-pcie-4-lane-adapter-sff-993-u2-4l

 

I literally have every port on the motherboard filled except 1 M.2 slot, which is disabled because I used the U.2 port. Your board may have a similar limitation, but with that many cores, probably not as likely.

3 minutes ago, jordanmw said:

I got an X399 Taichi motherboard with a 1920X, and the combo of motherboard/CPU determines how many lanes you get. I have 3x M.2, 1x U.2, and 4 PCIe slots that can run at x16/x8/x16/x8, so it works great for me. I have 12 CPU cores with SMT enabled. I have 2 of the M.2 slots populated with 1TB WD Black drives and use my U.2 to connect to a PLX adapter:

https://www.microsatacables.com/u2-sff8639-to-pcie-4-lane-adapter-sff-993-u2-4l

 

I literally have every port on the motherboard filled except 1 M.2 slot, which is disabled because I used the U.2 port. Your board may have a similar limitation, but with that many cores, probably not as likely.

I'm open to changing motherboards, especially if I can fit another GPU in there. Cost is really not too important; I have around 20 grand to play with as far as budget goes. I didn't know a lot of these things existed before this, so it looks like I'm back off to researching motherboards.

 

https://ark.intel.com/products/189123/Intel-Core-i9-9960X-X-series-Processor-22M-Cache-up-to-4-50-GHz-

 

Intel says that the max this processor supports is 44 lanes, which is also what my motherboard supports; that's where I got 44 from.

 

Again, many thanks. I really appreciate it.

 


My board with my CPU says this:

This processor includes 60 PCIe lanes; each PHY of 16 lanes may be configured as a maximum of 8 PCIe ports (x1, x2, x4, x8, x16). Note that 48 lanes are dedicated to multiple GPUs, with the other 12 lanes for I/O.

 

So I am taking all 48 lanes for the multi-GPU setup and using all 12 for the other I/O. It is a tight setup and fully maxed out for I/O, but it works pretty flawlessly.

 

I actually wish I had just spent the extra cash on a 1950X, which has 16 cores. Extra cores come in handy once you realize how many games are now optimized for 4 cores or more.

1 hour ago, jordanmw said:

My board with my CPU says this:

This processor includes 60 PCIe lanes; each PHY of 16 lanes may be configured as a maximum of 8 PCIe ports (x1, x2, x4, x8, x16). Note that 48 lanes are dedicated to multiple GPUs, with the other 12 lanes for I/O.

 

So I am taking all 48 lanes for the multi-GPU setup and using all 12 for the other I/O. It is a tight setup and fully maxed out for I/O, but it works pretty flawlessly.

 

I actually wish I had just spent the extra cash on a 1950X, which has 16 cores. Extra cores come in handy once you realize how many games are now optimized for 4 cores or more.

How close are you able to cut the PCIe lanes? I was looking at a new motherboard:

https://www.asus.com/us/Motherboards/WS-X299-SAGE/

which would have the GPU slots for 4 cards, and the U.2 for the USB extensions.

However, it looks like it doesn't support anything but 16 lanes in slot 1? That plus an M.2 SSD, plus the bridge, is an extra 8 lanes. Do you need to save lanes for other hard drives and SSDs? I'm not used to worrying about lanes since I have only ever bothered with single-GPU setups. I can ditch the M.2 if I need to, but it is a huge upgrade over SATA ports...

 

 

 

 

1 hour ago, jordanmw said:

My board with my CPU says this:

This processor includes 60 PCIe lanes; each PHY of 16 lanes may be configured as a maximum of 8 PCIe ports (x1, x2, x4, x8, x16). Note that 48 lanes are dedicated to multiple GPUs, with the other 12 lanes for I/O.

 

So I am taking all 48 lanes for the multi-GPU setup and using all 12 for the other I/O. It is a tight setup and fully maxed out for I/O, but it works pretty flawlessly.

 

I actually wish I had just spent the extra cash on a 1950X, which has 16 cores. Extra cores come in handy once you realize how many games are now optimized for 4 cores or more.

I also figured I would upgrade to the i9-9980XE, although it doesn't add more lanes.

1 minute ago, jordanmw said:

You should really plan on 2x16 and 2x8 for your motherboard PCIe requirements. Look at the ASUS workstation boards. Plan on not cutting it that close.

The problem isn't the motherboard, as far as I understand; Intel chips only support 44 lanes... I'm not sure how other people do this with Intel unless I am missing something, as the motherboard I looked at supports x16 times 4.

28 minutes ago, jordanmw said:

You should really plan on 2x16 and 2x8 for your motherboard PCIe requirements. Look at the ASUS workstation boards. Plan on not cutting it that close.

So I would assume this would be too close:

GPUs: 1x16, 3x8

Plus the 4 from the USB controllers

So 44 out of 44 lanes would be used (quick math below). I read that the onboard SATA hangs off the chipset rather than the CPU's PCIe lanes, so that should be fine. What else are PCIe lanes used for?
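
Quick math, assuming the onboard SATA and LAN really do hang off the chipset (over DMI) rather than the CPU lanes:

# Rough lane budget for the planned build - just adding up what hangs off the CPU.
cpu_lanes = 44                      # i9-9960X
devices = {
    "GPU 1": 16,
    "GPU 2": 8,
    "GPU 3": 8,
    "GPU 4": 8,
    "USB 3.0 controller card": 4,
}
used = sum(devices.values())
print(f"{used} of {cpu_lanes} CPU lanes used, {cpu_lanes - used} to spare")  # 44 of 44, 0 to spare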

Also, since I plan to also boot Windows from its own drive: say I have an M.2 SSD that isn't used in any of the VMs, only when I boot directly to Windows. Would that still be taking lanes? And the opposite: would the idle GPUs be taking lanes on a straight Windows boot?

 

I honestly, for the life of me, am having a hard time finding good documentation on lane usage...


Anything attached will take lanes. I did a similar setup with a separate Windows drive to boot to, but found that I never used it past the burn-in stability testing when first setting it up, so it was a waste of a drive. I don't know that much about PCIe lanes either, but I know it really can impede your speed if you don't have the setup just perfect when using all the lanes.


If I had your budget, I wouldn't even screw around with consumer-level parts; I'd just go with a dual-socket Xeon rig with tons of PCIe slots and be done with it. Something like this: https://www.asus.com/Motherboards/WS-C621E-SAGE/

 

I mean, why would you be looking at anything less? You could have enough slots and lanes to increase capacity in the future, when you decide that even 4 VR rigs aren't enough. It seems like a waste of effort to plan lanes out meticulously when you could grab 2 CPUs and get all the advantages of the extra lanes and separate dies.

2 minutes ago, jordanmw said:

If I had your budget, I wouldn't even screw around with consumer-level parts; I'd just go with a dual-socket Xeon rig with tons of PCIe slots and be done with it. Something like this: https://www.asus.com/Motherboards/WS-C621E-SAGE/

 

I mean, why would you be looking at anything less? You could have enough slots and lanes to increase capacity in the future, when you decide that even 4 VR rigs aren't enough. It seems like a waste of effort to plan lanes out meticulously when you could grab 2 CPUs and get all the advantages of the extra lanes and separate dies.

What I did find is that the slot widths aren't strictly limited by the CPU's lane count, even though the CPU has a fixed number of lanes. The motherboard I mentioned earlier (https://www.asus.com/us/Motherboards/WS-X299-SAGE/) has two PLX switch chips that multiplex the slots, so you can run things like 4x16 electrically on a single 44-lane CPU, although those slots still share the CPU's upstream lanes for total bandwidth.

 

And even with boards like the one you listed, you still have a physical space requirement for the GPUs, and unless you go with single-slot-width GPUs you won't be able to fit more.

 

 


You can always look at getting ribbon-cable riser extenders for the slots, as long as they are still enabled. I have seen a few cases that have a totally separate area for the GPUs with cable extenders for the slots. Or just go with watercooling for them; I have seen a few people go that route.

