
Help with Possible VM Gaming Rig


bigjme


Hi Everyone,

 

I was recently introduced to unRAID, having never heard of it before, and was amazed at its possibilities.

 

For the longest time I have wanted to create a multi-OS gaming rig for me and my partner, and this seems like the ideal option for doing so. As of this moment, if you have Vessel, you will be able to see that Linus Tech Tips has just done an amazing video where he uses an Intel 5960X and passes a 980 Ti and a Titan X through to 2 virtual machines for games.

 

I am looking to do a similar setup. Here is the hardware I am looking to use:

 

CPU: Intel Core i7 5960X - 8-core/16-thread processor

Motherboard: Asus X99-DELUXE

Memory: Corsair 32GB Vengeance LPX DDR4

GPU 1: EVGA 780 Hydro Copper

GPU 2: MSI 750ti

GPU 3: an old geforce gpu

 

Storage:

  • 1 x 4tb WD Red
  • 2 x 2tb Seagate Barracudas
  • 2 x 120gb Mushkin Chronos

 

I plan to set up 2 machines with the following allocations:

VM1

780

16GB Memory

1 x 120gb SSD

1 x 2tb drive (I plan to change this to a 4tb soon; would it be possible to move the VM storage to a new drive?)

 

VM2

750ti

12GB Memory

1 x 120gb SSD

1 x 2tb drive

 

unRAID

4GB memory spare

4tb drive for samba and media browser server.

 

The part I am having trouble with is this:

My machine is currently watercooled, including my 780. This means I have a pipe running from my CPU to the 780, which is in PCIe slot 2. If I build the machine, I need to try and get unRAID to use a GPU in slot 6 and allocate slots 2 and 4 to the VMs.

Is this possible?

 

I do have a PCIe 1x-to-16x riser I could use to put the older card in the first PCIe slot, but I would rather not if there is another way.

 

I was also wondering if anyone else has considered the following. My VM will be the only one used constantly; my partner's will be used once a week if we're lucky. Therefore it seems a waste to only allocate myself 8 of the 16 cores when 8 of them will not be used the majority of the time. What I was thinking was to do the following:

 

VM 1 - Cores 1 - 12

VM 2 - Cores 8 - 16

 

My machine would be VM1 and my partner's VM2. My thinking behind this is that the majority of applications and games will only use the first few cores available, meaning the 4 cores in the middle could be an overlap, allowing VM1 a lot more power when VM2 is idle.

Can anyone see a massive issue with allocating the CPU this way? Or maybe suggest a better way to allocate these?

 

 

I am also a little lost on the meaning of Storage and Cache for the virtual machines. In a video I saw, they had 2 SSDs in RAID 1 set as cache, with 2 HDDs in RAID 1 set as storage.

They then allocated part of the cache drives for the OS installs, and part of the HDDs as storage for each VM. Is there a reason why the SSDs would be placed as cache and not storage?

 

I understand that RAID 1 drives were used because 2 machines were using the same devices, so the RAID adds extra read speed, but I don't see why you would do it that way rather than allocating a single SSD and a single HDD to each machine. Am I missing something?

 

I know this is a lot to ask, but this is my first time with unRAID and I don't want to spend £1500+ on hardware only to find that what I want is not going to be possible.

 

Any help would be greatly appreciated :-)

 

Regards,

Jamie

Link to comment

Been gaming like this for months now (on unRAID and previously on Arch).

CPU: 4770, GPU: GTX 780.

 

 

First off, I always game at 120+ fps (120 Hz), so I'm a tad picky.

 

I usually see tests showing, let's say, a 5% performance loss, but those tests are usually generic benchmarks run under perfect circumstances, with 60 fps in mind, and not for hours as a gaming session usually is.

 

The more real-world scenario, where things happen in the underlying hypervisor, other VMs, etc., is a 15-20% performance hit. At least for me, with my 1k-plus hours gaming like this (BF3, BF4, CS:GO).

 

 

VM1

780

16GB Memory

1 x 120gb SSD

1 x 2tb drive (I plan to change this to a 4tb soon; would it be possible to move the VM storage to a new drive?)

 

Yes, if you use your VM storage as a VM "image file" you can just move it and change its location in the VM settings.
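
For reference, the move can also be scripted. A rough sketch using the libvirt Python bindings (assuming they are available on the host; the VM name "VM1" and both paths are just placeholders), which moves the image and re-defines the VM to point at the new location:

```python
# Sketch: move a VM's image file to a new drive and update the domain definition.
# Assumes the libvirt Python bindings; "VM1" and both paths are placeholders.
# The VM should be shut down before moving its image.
import shutil
import libvirt

OLD_PATH = "/mnt/disk2/vm1/vdisk1.img"   # current image location (placeholder)
NEW_PATH = "/mnt/disk3/vm1/vdisk1.img"   # location on the new 4tb drive (placeholder)

shutil.move(OLD_PATH, NEW_PATH)          # move the image file to the new drive

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("VM1")
xml = dom.XMLDesc()                      # current domain XML
conn.defineXML(xml.replace(OLD_PATH, NEW_PATH))  # re-define with the new disk path
conn.close()
```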

 

I do have a PCIe 1x-to-16x riser I could use to put the older card in the first PCIe slot, but I would rather not if there is another way.

 

It depends on the motherboard/BIOS whether there is a setting to specify which GPU should be the "main" one; no idea.
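
If it helps, one way to see which card the host actually claimed as its primary display, rather than guessing from the BIOS menus, is the boot_vga flag that Linux exposes in sysfs. A minimal, read-only sketch (assuming a Linux host):

```python
# Sketch: print which GPU the Linux host claimed as the boot/primary VGA device.
# Read-only; assumes a Linux host exposing the standard boot_vga sysfs attribute.
import glob

for flag in glob.glob("/sys/bus/pci/devices/*/boot_vga"):
    with open(flag) as f:
        if f.read().strip() == "1":
            # The parent directory name is the PCI address, e.g. 0000:02:00.0
            print("boot GPU:", flag.split("/")[-2])
```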

 

I was also wondering if anyone else has considered the following. My VM will be the only one used constantly; my partner's will be used once a week if we're lucky. Therefore it seems a waste to only allocate myself 8 of the 16 cores when 8 of them will not be used the majority of the time. What I was thinking was to do the following:

 

VM 1 - Cores 1 - 12

VM 2 - Cores 8 - 16

 

My machine would be VM1 and my partner's VM2. My thinking behind this is that the majority of applications and games will only use the first few cores available, meaning the 4 cores in the middle could be an overlap, allowing VM1 a lot more power when VM2 is idle.

Can anyone see a massive issue with allocating the CPU this way? Or maybe suggest a better way to allocate these?

 

There are issues with overlapping cores even if, let's say, they are "idle".

For example, I have my gaming VM use cores 2-7, with cores 0-1 for dockers, other VMs, etc.

 

I used to have my Plex docker use cores 0-3, and even when it was transcoding and only using 10% of the CPU power, my fps in BF3 would dip from 120 fps to about 60 fps, and this while my gaming VM was only using about 60% of its dedicated cores.

 

And that takes us to the cores in the middle, as you say:

there is no guarantee that things will use core 0 even if it "looks" like all cores are available for "work", so when something happens on the overlapping cores and both machines use them, even if it is only a small amount of "CPU power", it will feel very "laggy".

 

However, with an overpowered rig like that and pinned cores, my guess is that it will work out nicely.

(You can always just turn off the VM and allocate some more cores with just a click.)
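
For what it's worth, the pinning itself is just libvirt CPU affinity under the hood, so it can also be applied or adjusted from a script. A rough sketch using the libvirt Python bindings; the VM name, the 8-thread host, and the 2-7 core layout are only placeholders mirroring the setup described above:

```python
# Sketch: pin a gaming VM's vCPUs to host cores 2-7, leaving 0-1 for dockers/other VMs.
# Assumes the libvirt Python bindings, a running VM with 6 vCPUs, and an 8-thread host.
# "GamingVM" and the core numbers are placeholders.
import libvirt

HOST_CPUS = 8
PIN_MAP = {0: 2, 1: 3, 2: 4, 3: 5, 4: 6, 5: 7}   # vCPU number -> host core

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("GamingVM")

for vcpu, core in PIN_MAP.items():
    cpumap = tuple(i == core for i in range(HOST_CPUS))  # one True entry per allowed core
    dom.pinVcpu(vcpu, cpumap)                            # apply the pin to the running VM

conn.close()
```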

 

I am also a little lost on the meaning of Storage and Cache for the virtual machines. In a video I saw, they had 2 SSDs in RAID 1 set as cache, with 2 HDDs in RAID 1 set as storage.

They then allocated part of the cache drives for the OS installs, and part of the HDDs as storage for each VM. Is there a reason why the SSDs would be placed as cache and not storage?

 

The "cache" drive("pool") is just a normal drive that can be used as a "download cache" for directories of you choice there for usually a ssd drive.

It can also bee used for permanent directories(vm´s, appdata for dockers etc...) as in this case the main VM os drives.

 

The cache pool can be a single disk or RAID; there is no need to have it in RAID as they apparently did, it works fine with just one.

 

I understand that RAID 1 drives were used because 2 machines were using the same devices, so the RAID adds extra read speed, but I don't see why you would do it that way rather than allocating a single SSD and a single HDD to each machine. Am I missing something?

 

Yes, you can allocate a single drive or SSD to a VM (this is how I do it).

Either by passing through the entire disk, or by setting up a single disk in unRAID and storing the VMs on it (requires a plugin).
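
To make the whole-disk option a little more concrete: the VM definition just points a disk entry at the raw block device instead of at an image file. A rough sketch of attaching one with the libvirt Python bindings; the by-id path and the VM name are placeholders:

```python
# Sketch: attach an entire physical disk (rather than an image file) to a VM.
# Assumes the libvirt Python bindings; the by-id path and "VM2" are placeholders.
import libvirt

DISK_XML = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/ata-ST2000DM001-EXAMPLE'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("VM2")
# Add the disk to the persistent definition; it appears on the VM's next boot.
dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```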

 

 

Hope it helps.

 

//mace

Link to comment

Thanks for the amazing amount of help.

 

I think I will do as you said and simply shut down the VMs and allocate CPU as needed (not that my VM needs much more than 4 cores for games anyway).

 

I think passing through the entire drives will be the way to go, especially as it should help keep the read/write speeds fairly high.

 

I know you are only running one GPU, but after having a look around, I can get a GTX 770 for not much more than a 750 Ti, so the second VM may as well have that.

 

For anyone doing multi-GPU passthrough, I know devices can be pooled together, which makes them unable to be passed through properly without registry mods. Is there anything specific I should avoid to reduce the risk of this?

Right now I am looking at my EVGA 780 and an MSI 770, so different models and manufacturers.
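
For reference, the pooling should be visible once the board is running: the host groups PCI devices together, and anything in the same group generally has to be passed through as a unit. A small read-only sketch (assuming a Linux host with the IOMMU enabled) that lists the groups, so you can check whether the two cards sit in separate ones:

```python
# Sketch: list IOMMU groups to see which PCI devices are "pooled" together.
# Read-only; assumes a Linux host with the IOMMU (VT-d) enabled and groups populated.
import os

GROUPS_DIR = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS_DIR), key=int):
    devices = os.listdir(os.path.join(GROUPS_DIR, group, "devices"))
    for dev in sorted(devices):
        print(f"IOMMU group {group}: {dev}")
```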

 

Regards,

Jamie

Link to comment

Archived

This topic is now archived and is closed to further replies.
