2x gaming PCs in 1, and Crossfire when only one native Win 10 OS is running



Hi there!

 

Currently I have two gaming PCs in my apartment: one for myself, and one for my brother when he visits. I also want to use my machine to run several virtual machines for other tasks, so I have already decided to go with the new Broadwell-E series coming out this summer. After seeing Linus' video about two gaming PCs in one, I got stoked on the idea. I intend to get an 8-core/16-thread Broadwell-E, which should be a monster CPU for gaming; as I understand it, it would be roughly equivalent to two 4790s. I have an R9 390 GPU as of now, and I'm thinking about getting another one.

 

Here is the idea for the virtual machines:

 

1x unRAID server with 2x Win 10 VMs, each one having its own dedicated GPU and half of the CPU cores/threads

1x Win 10 for normal computer use and gaming when I am alone, which is most of the time. This installation will also host two VMs for less demanding tasks.
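
Before I commit to this layout I will obviously have to confirm that VT-d passthrough actually works on whatever board I end up with, since the whole plan hinges on it. A minimal sanity check from the unRAID console would be something like this (just a sketch; the exact kernel messages vary by platform):

```bash
# Check that the IOMMU is active (VT-d on Intel boards); success shows
# lines like "DMAR: IOMMU enabled" in the kernel log
dmesg | grep -i -e DMAR -e IOMMU

# unRAID keeps the kernel boot line on the flash drive; Intel boards want
# intel_iommu=on appended there if it is not present already
grep -A2 'label unRAID OS' /boot/syslinux/syslinux.cfg
```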

 

Now here is the tricky part to figure out: is it possible to configure this setup in such a way that I can run both my 390 cards in Crossfire in my native Win 10 installation, and separated under the unRAID operating system? I am OK with having to connect and disconnect the Crossfire bridge, but I am watercooling, so I can't re-seat the GPUs every time my brother comes to visit.

 

Thanks in advance!

 

PS: I saw this question posted late in another thread, but that thread seemed dead, and I am sure other people also wonder about this.

Link to comment

Not all Broadwell-E chips are Xeons; they are just "second gen" LGA2011-v3 CPUs, and there will be i7s as well. The first Broadwell-E parts to launch will be Xeons, though.

 

I had a very similar idea with my LGA2011 board (Rampage IV Black Edition) and 4930K (6c/12t, running at 4.5 GHz). I have separate drives (RAID 0 controlled by the chipset) where I have my native Windows install, and I have 2 SSDs and 2 HDDs for unRAID via an ASMedia controller, so they are always in AHCI mode. I run 980 Ti SLI on the native Windows and disable the first PCIe slot when doing so. When booting unRAID, I enable the first PCIe slot, where I have an Nvidia 7600GS GPU for the unRAID host. This is something you need to take into account.

 

Currently my host keeps crashing, and I am tinkering with it when I have time. I use the 6.2 beta, since the trial key lets you have six drives like the Basic registration key. I'll buy a real key later, either when I get this working or when the trial ends and they won't let me renew it. I have decided to get this running, however.

Link to comment

METDeath: My bad for not specifying the CPU. I will go for the high-end desktop CPUs from the Broadwell-E series, not a Xeon model.

 

contay: This sounds exactly like what I want to do. I have an extra 560 Ti card for the unRAID OS, so that won't be a problem. If I get this right, you can disable PCIe slots in the BIOS and set the number of lanes? If so, I would like to use as few lanes as possible for the 560 Ti, so that I can use more of them on M.2 drives and run dual 16x for both my GPUs. Hoping that the 8c/16t part will have at least 40 lanes.

 

You say your host is currently crashing. Is that because you are running a beta? Did it work earlier on? Do you think this is a doable and worthwhile solution for me? I would describe myself as technically competent and fully able to follow instructions; I also have a bachelor's degree in computer engineering.

 

Thanks for a good answer!

 

PowerJunkie

Link to comment

Here comes the plot twist: unRAID is my first touch with Linux distros. I was inspired by the Linus Tech Tips video of having two rigs in one tower, and I decided I could try that. It also justified buying a second Gigabyte G1 980 Ti, if only to try.

 

Anyhow. Like I said, I had zero experience tinkering with Linux or virtual machines before, so Google and this forum have been very useful. Last week I didn't have much time, as I am searching for a project for my engineering thesis (energy technologies here) while working in my first profession.

 

I did, however, manage to get a stable VM and played around a bit, as in played games. I don't know what I did differently this time; maybe I just got lucky? The host froze once while I was in the Windows VM and I had to hard reset it, but the VM started successfully after the reboot with no crashes.

 

I have a PCIe USB 3.0 card coming, which should help me and has hot-pluggable USB ports. I'll try to pass it through to the "primary" VM. It provides two ports on the back, and I can route the case's front ports through it, for a total of four: just enough for mouse, keyboard, a USB headset, and one for hot-plug drives/sticks.
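
If I have understood the guides correctly, passing the whole controller works like passing a GPU: find its PCI address and hand it to the VM as a hostdev. Roughly like this, I think (the 03:00.0 address and the VM name Windows1 are just examples; I'll check mine with lspci):

```bash
# Find the add-in USB controller's PCI address and vendor:device ID
lspci -nn | grep -i usb

# Describe the controller as a PCI hostdev (substitute your own address)
cat > usb-ctl.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# Attach it persistently to the "primary" VM's definition
virsh attach-device Windows1 usb-ctl.xml --config
```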

 

About PCIe slots: the Rampage IV Black Edition has four full-size PCIe slots (16x/8x/16x/8x). I had to sacrifice the first 16x slot for the host GPU, which is always the first GPU; there is no iGPU on LGA2011 CPUs. The third slot (the second 16x) is empty, so I have enough spacing for airflow to the upper 980 Ti.

 

For disabling PCIe slots, I don't know how X99 boards do it, but my board has mechanical on/off switches for each slot, which is what I use. As the third and fourth slots are 8x only, I can't run 16x/16x SLI, but after all, 8x/8x is only a 1-2% performance loss.

 

Oh, about the host crashing: I have actually changed two things. First, I got rid of the RAID and replaced it with a single 500 GB 850 Evo; the unRAID cache now has 3x 256 GB SSDs in data RAID 0 / metadata RAID 1. Having all drives in AHCI might make a difference, even though the RAID wasn't part of unRAID. Now I worry about having one SSD under the chipset and two under the separate controller. I might change it later so all cache drives are under the ASMedia controller and the HDDs go under the chipset.

 

Second change: last time I used cores 0-5 on the first VM, 6-9 on the second, and 10-11 on the host. Now the host gets 0-1, the first VM gets 2-7, and the second (the one I played around on) gets 8-11.
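
For reference, that core split lives in each VM's <cputune> section of the libvirt XML (edited with virsh edit). A rough sketch of my first VM under the new layout, assuming my 12-thread numbering (the counts are my setup, not a recipe):

```xml
<!-- First VM: 6 vCPUs pinned to host threads 2-7, with QEMU's emulator
     threads held back on the host's cores 0-1 -->
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='7'/>
  <emulatorpin cpuset='0-1'/>
</cputune>
```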

Link to comment

If you use one of the AMD cards in the first slot, you should still be able to pass it through to the VM.

 

Ah, okay. That wouldn't actually help me, as my second boot is a non-virtualized Windows running the 980 Tis in SLI. I would still have to disable the first PCIe slot when not using unRAID.

 

In an AMD system, like the one PowerJunkie is planning, this could ease things up for sure.

Link to comment


Didn't notice you were not the OP. My bad.

Link to comment


I am sure he appreciates all info that comes up. :)

Link to comment

I do appreciate all the info that comes up! I intend to do this the same way as contay, but depending on whether or not the Asus X99 mobo I have in mind will let me disable the first PCIe slot, and whether I can get more slots running at 16x, I might just keep one of the AMD cards in the first slot and pass it through to the first VM. Good to know about that option before I start out. This project will cost me about 2500 dollars this year, so I really want to know as much as possible about what I am getting myself into before pouring loads of money down the drain :P

 

Thanks for all the help so far, guys! And anyone else wanting to contribute, or with questions of their own: feel free to share/ask :) I will keep an eye on this thread and probably nag some more over the next six months.

 

PowerJunkie

Link to comment

I did some testing last night and couldn't pass through the PCIe USB card; I'll need to figure that out later. However, I did get two separate Windows VMs running at the same time, but then the host crashed again. Need to figure something out here too.
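
One thing I still need to rule out is the IOMMU grouping: if the USB card shares a group with other devices, vfio cannot take just the card. The standard sysfs loop should list the groups from the console:

```bash
# Print every IOMMU group and the devices in it; ideally the USB card sits
# in a group of its own (or only with the PCIe bridge above it)
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    printf 'IOMMU group %s: ' "$group"
    lspci -nns "${dev##*/}"
done
```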

 

It was almost 2 AM and I had woken up at 6 AM, so I was a bit tired. It might have been something minor. I'll keep you posted :)

Link to comment

I found some pictures of the Asus X99 BIOS. It seems you can enable/disable PCIe slots and manually configure the number of lanes. That's the best solution I could hope for, so now I am saving up money for this beast, buying the minor things I need while waiting for the Broadwell-E HEDT CPUs to be released.

 

For the project to be worthwhile, I would, like you, need to be able to pass through a second USB controller for mouse, keyboard, sound and joystick. How is that coming along on your end?

Do you have any other updates or obstacles with your build?

Link to comment

Good to know. I need all my PCIe lanes even with a 40-lane CPU: I want full 16x for both 390s, 4 lanes for the additional GPU, and 4 lanes for M.2.

 

Do you have any idea what might be the root cause of your instability? Have you overclocked? I ask because even though a CPU runs stable under stress testing, it may still crash from memory errors, which sometimes only occur under particular software.

Link to comment


Now that you mention it, it is cranked up to 4.5 GHz (from 3.5). I might try stock clocks. The voltage shouldn't matter then, though?

Link to comment

For stability, more voltage is usually better, within reasonable limits of course. If your overclock has passed a serious stress test, I would say leave the voltage alone and just put the frequency back to stock. If you have adjusted memory timings, loosen them, or put them back to stock values as well, and try doubling the command rate, usually from 2N to 4N.

 

Try following all these steps and see if you get a more stable system. If it doesn't help, you can just re-enter your old settings; you could probably save your current BIOS profile as well. Please let me know how it goes. I am not going to start my own project before August, I think, but I would love to join yours on a theoretical level.

Link to comment

Just thought I'd chime in, as I plan to do almost exactly the same thing as the OP, so I will be watching this thread very closely. I have a few related questions; I hope that's OK. Not trying to hijack the thread or anything. Hopefully it just provides a bit more general information towards an almost identical build.

 

I'm planning to build around a new 8-core Broadwell-E HEDT CPU, running two Win 10 VMs with an AMD card passed to each one (with the option of running Crossfire on my VM when the other one is switched off). Is this possible?

 

With AMD cards, can you run unRAID headless, thus not needing a third video card?

 

And lastly, what's the best solution to get audio/mouse/keyboard to each of the VMs (the OP might already know this for his build)? Can onboard audio be passed through to one VM, so a second sound card would only be required for the other VM, or possibly audio over GPU HDMI? I'm aware that it can be easier to pass through entire USB controller groups, but depending on how they are configured, this isn't always doable on some motherboards. Can individual USB devices be passed through? In general, when doing Crossfire and needing double-width slots, it can be difficult to find boards with enough expansion space for additional USB hubs and audio.
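
From what I've read so far, libvirt can also forward a single USB device by its vendor:product ID instead of a whole controller, which would sidestep the motherboard grouping problem. Something like this, if I understand the docs correctly (the IDs and the VM name are just examples; lsusb gives the real ones):

```bash
# Find the device's vendor:product pair, e.g.
# "Bus 003 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver"
lsusb

# Forward just that device to the VM (substitute your own IDs)
cat > kbd-mouse.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x046d'/>
    <product id='0xc52b'/>
  </source>
</hostdev>
EOF

virsh attach-device Windows1 kbd-mouse.xml --config
```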

 

I'm really interested in this build and look forward to seeing more updates, PowerJunkie!

Link to comment

contay: going back to stock would most likely be the most stable thing you could do in order to rule out hardware-related issues, but even just bumping down the multiplier should improve system stability a great deal. For testing purposes, this is my suggestion:

 

1. Leave everything as it is. Turn on the VMs and write down the boot time. Use your system until it crashes, and write down the crash time. Repeat 3-5 times (see the logging sketch below).

2. Set everything back to stock, and repeat step 1.

 

If the VMs are stable, or more stable, in step 2 than in step 1, the instability is likely hardware-related. If so, try keeping the multiplier at stock and increasing Vcore. If the instability improves but the system still isn't completely stable, I would try running one memory stick at a time and see if that helps. If one or both GPUs are overclocked, I would try running them at stock speeds as well. Please keep me posted :)
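
To take the manual bookkeeping out of step 1, you could let the machine log its own heartbeat to the flash drive, which survives a hard crash; the last timestamp in the file then approximates the crash time. A crude sketch for the unRAID console (/boot is the USB stick, so it persists across resets):

```bash
# Append a timestamp every 60 s; after a freeze, the last line of the log
# is within a minute of the crash. sync forces the write out to the flash.
while true; do
    date '+%F %T' >> /boot/crashlog.txt
    sync
    sleep 60
done
```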

 

JustJoshin: Hi there, and welcome to our thread!

 

The way I understand it, you need one GPU for the unRAID host OS. For this I plan on using a cheap fanless GPU in the first PCIe slot on the motherboard. The X99 motherboards should have the functionality to let you enable/disable PCIe slots in the BIOS. I plan on disabling the first slot when I am running the native Win 10 installation on my computer, which then has access to all the system resources by itself. Then, when my brother comes to visit, I enable the first PCIe slot, activating the cheap fanless GPU, and boot up the unRAID OS. These cards are really dirt cheap; we're talking maybe 50 USD. The CPU alone in a build like this will cost about 1000 USD, so I don't worry about that extra cost. As mentioned earlier in this thread, you should be able to pass the first GPU through to one VM, but I am afraid that could affect the gaming performance of that card, so I am not about to take that chance.
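
As an aside for anyone replicating this: as I understand it, the guest-bound GPUs also have to be kept away from the host driver at boot. On unRAID that should be a one-line edit to syslinux.cfg on the flash drive; a sketch, with example IDs for an R9 390 and its HDMI audio function (confirm your own with lspci -nn):

```
# /boot/syslinux/syslinux.cfg -- list the guest GPUs' vendor:device IDs so
# vfio-pci claims them before the host's radeon/amdgpu driver can
label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1002:67b1,1002:aac8 initrd=/bzroot
```

Older guides do the same thing with pci-stub.ids instead of vfio-pci.ids, depending on the kernel in use.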

 

I'm not sure if crossfire support has been added as of yet. However, I can't really see why you would run a VM when you only have one installation of Windows for gaming. Sure, you would save the space of an additional Windows installation, but again, in a build like this, saving space is not really a priority.

 

For mouse/keyboard/sound I plan on getting a PCI-bracket USB card that I can connect to one of the USB headers on the motherboard, then dedicating that header to the second VM, with a USB sound adapter on one of its ports to provide sound for that VM. The mouse, keyboard and joystick connect the same way; four ports should suffice. My last headache is networking: I have a PCIe network card, so I guess I will connect that and dedicate it to the second VM.

 

The X99 motherboards in most cases have a lot of functionality; after all, they are intended for high-end users willing to pay twice the price for both CPU and motherboard. However, this will be a fairly complex build, so I would suggest that anyone attempting it does some good research on motherboards before ordering components. I myself have been considering the ASUS X99-A: it is one of the cheapest X99 boards, but it seems to support everything I want. It would be great if you did your own research and shared your views on which motherboard you think would be best. It would be an easy decision on my part to spend a couple of hundred dollars more on a motherboard if that were to save me some trouble, so my choice of motherboard is by no means set in stone.

 

Please feel free to contribute thoughts, ideas, questions, and of course solutions if you have some :) Fun to meet others with similar plans :) This will be an awesome ride!

Link to comment

I'm not sure if crossfire support has been added as of yet.

 

Neither am I. But it would be nice to have.

 

However, I can't really see why you would run a VM when you only have one installation of Windows for gaming. Sure, you would save the space of an additional Windows installation, but again, in a build like this, saving space is not really a priority.

 

I'm not. I plan on running two Win 10 installations for gaming (one for myself and one for my wife), passing a GPU through to each VM. Sometimes, however, when my wife isn't using her VM, it would be nice to have the option to shut it down and pass that second GPU through to my VM for Crossfire.

 

Additionally, I'm coming from FreeNAS, so I'm not sure how unRAID handles plugins like Plex / Transmission / CouchPotato etc. Are these Dockers in unRAID? If anybody could point me towards some links on setting up the aforementioned in unRAID, I'd be extremely grateful. I didn't mention this in my original post; so basically, I plan to run two gaming VMs and a couple of plugins/Dockers.
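
From the little reading I've done, they appear to be ordinary Docker containers underneath, so I'd guess the linuxserver.io Plex image translates directly. A sketch with what I believe are unRAID's conventional paths and user IDs (please correct me if the conventions are different):

```bash
# Plex as a plain Docker container; /mnt/user/... are unRAID user shares,
# and PUID 99 / PGID 100 should be unRAID's default nobody:users account
docker run -d --name=plex \
  --net=host \
  -e PUID=99 -e PGID=100 \
  -e VERSION=latest \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media \
  linuxserver/plex
```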

 

It would be great if you did your own research and shared your views on which motherboard you think would be best.

 

One board I am currently looking at is the MSI X99A Gaming 7. It has a good onboard audio chip I'd like to pass through to my VM, and a good PCIe layout with room for expansion.

 

For mouse/keyboard/sound I plan on getting a PCI-bracket USB card that I can connect to one of the USB headers on the motherboard, then dedicating that header to the second VM, with a USB sound adapter on one of its ports to provide sound for that VM. The mouse, keyboard and joystick connect the same way; four ports should suffice.

 

That's a good idea. I had forgotten about the onboard USB headers!

 

Aside: I have 280X Crossfire in my current machine, but I plan on selling one 280X, using the remaining one in my wife's VM, and buying myself a Polaris 10 card when they come out (I do more gaming than my wife ;)

Link to comment


I tried Crossfire with my two R9 280X cards, but it didn't work. I could install the drivers and enable Crossfire, but as soon as I ran 3DMark, the VM crashed.

That was last year, though, so I haven't tried it on a newer version.

Link to comment


Thanks saarg, good to know. I don't suppose you still have the means to test this again, to see if things have changed? :)

Link to comment


One of the cards is running in my girlfriend's VM and is used daily, so there are no immediate plans to test again. If I have the time and nothing else to do, I might try it.

From what I remember when I tried it, the only guys pulling it off had cards that supported the new way of doing Crossfire over PCIe, without needing the bridge. Unfortunately, my cards didn't support that.

Link to comment
