UNRAID based NAS/PLEX server with 2x "gaming" VMs? Wisdom please!



Hi team,

Had an idea; I'm new around here, so I decided it's time to ask for some help.

It’s been 12 years since my last (and first) PC build, and 10 years since I’ve used a desktop PC. From the project below you may detect I am making up for lost time.

As a summary of my experience, I would describe myself as a Google hack. I haven't used Linux before, and I only dabbled in VMs too long ago for it to be of any present use.

A few weeks ago I decided to set up a Plex server/NAS for the home network. This evolved into a plan to build a PC, one thing led to another, and the project has grown significantly. The intention now is to build a PC that runs UNRAID, taking care of the NAS duties, with two reasonably capable VMs on either side for what I'd consider light gaming duties (Star Wars Battlefront II, rFactor, maybe some AoE II). My current "rig" is a 5-year-old Lenovo Y50-70 Touch with an i7-4700HQ and a GTX 860M that I've thrown an SSD into, with the touchscreen and faulty hinges replaced with the non-touch equivalents. Any improvement on this would be nice!

 

I realise this may not be the most cost-effective way of doing this. I'm of the mindset to have a go at something massive and learn a lot along the way. I haven't just watched LTT and decided to throw some money at something shiny. I'm in this game to learn, but by all means, if I can learn it's a completely terrible idea before spending ~$3k (NZD) on this build, so much the better... that's why I'm asking.

 

What I'm after is an experienced eye (or 10) to consider what I'm doing here and cast a pearl or two in my direction. My main concerns are outlined below, but these are just what I'm aware of; I don't know what I don't know.

1. Is the block diagram I see in the manual for the motherboard likely to reflect the IOMMU grouping?

2. More important than 1, will the motherboard cut the mustard? It seems to support VT-d; is there anything else I should be worried about?

3. Kinda crucially, I haven't seen much on passthrough to two VMs simultaneously. Is there a reason for this? Bad idea?

4. From what I can tell, having 2x graphics cards sharing the 16 PCIe lanes at x8 each won't create an appreciable performance bottleneck. Is it an issue that they run through the same switch? My motherboard choice has been dictated by the ability to run 2x x8 channels.

5. How much headache do I save by having two different-model graphics cards? How different is different enough? I.e. would a Gigabyte 1650 and an EVGA 1650 work?

6. Referring to question 1, with that many USB groups I'm thinking I could get away with sound over USB, possibly using a USB-C dock for peripherals on each VM. Any problems with this line of thinking?

7. 2x of the same SSDs for the two VMs: is this likely to cause issues in the same way as the graphics cards?

8. Am I on the right track with UNRAID?

 

The parts I’m looking at below:

Gigabyte Z490 AORUS PRO AX - https://www.gigabyte.com/Motherboard/Z490-AORUS-PRO-AX-rev-1x#kf

i9-10900 – 2 cores/4 threads & iGPU for the host, 4 cores/8 threads for each VM (agonizing over whether to shell out extra for the 10850K)

2x Gigabyte GeForce GTX 1650 OC – one for each VM (not set on this; I understand two of the same can be a headache, but I like symmetry! I also don't like headaches, so any thoughts?)

Kingston A400 120GB – For host

2x Samsung 860 EVO 500GB – For each VM

2x Seagate IronWolf 4TB – For NAS

G.SKILL Trident Z Series 32GB RAM (2 x 16GB) DDR4-3200MHz

EVGA 600 BR 600W

Phanteks Enthoo Pro

 

There is a fair bit to digest there, so let me know if any/all of it doesn't make sense.

Thanks for your time. Brace yourselves – I'm probably going to come hunting for some more help in the near future.


I only have a single Ubuntu VM on Unraid, just because, so I will let others with more experience in that area comment on VMs.

 

Just one thing I noticed

12 hours ago, curious_llama7485 said:

Kingston A400 120GB – For host

Unraid boots from a USB flash drive, and the flash drive GUID is associated with the license, whether trial or paid.

 

The Unraid host OS is unpacked fresh from the archives on the flash drive into RAM at each boot, and the OS runs completely in RAM. Think of it as firmware, except easier to work with.

 

So, there is no drive used for the host OS.
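If you want to see this for yourself once the box is booted, a quick sanity check is to look at what "/" is actually mounted from. This is just a generic Linux sketch, not an Unraid tool, and the exact device/filesystem names it expects are assumptions:

```python
# Rough sketch: confirm the running root filesystem is RAM-backed rather
# than a normal disk. On a stock Unraid box "/" is typically a rootfs/tmpfs
# mount unpacked from the USB stick, not an SSD.
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype = line.split()[:3]
        if mountpoint == "/":
            # tmpfs/rootfs here means the OS is running from RAM;
            # an ext4/xfs/btrfs device would indicate a disk-based install.
            print(f"root is '{device}' of type '{fstype}'")
```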


I am just setting up something similar, though the lightweight gaming will be streamed over the local network using Parsec.

It's often for Roblox-type games where they have some sort of clicker to keep scoring points while AFK. You can use almost anything as a client, including a Raspberry Pi 4, though I have yet to try one.

I'm using a much older Xeon E5-2660 (10C/20T) with 64GB of quad-channel DDR, as it gave me 4x PCIe x8 electrical slots and 10 native SATA ports.

 

1) Not likely; as I understand it, the grouping is all up to the BIOS. You can often use the 'ACS override' option to split the groups out further (see the sketch after this list for a way to check what you actually end up with).

2) I tend to avoid bleeding-edge hardware. The kernel Unraid is built on lags a few versions behind, so some features may not be supported immediately. Intel/Nvidia generally have fewer issues than AMD at this time.

3) I've had up to 4 GPUs in the system: 2x gaming, 1x Linux VM, 1x passed through to Plex. I had both gaming VMs and the Linux VM running together, but I haven't yet moved my Plex over as I am busy consolidating drives for a switch from an old Dell T20.

4) PCIe x8 is fine for mid-range to current high-end GPUs. A board set up for x8/x8 SLI/CrossFire should be fine. Some more basic boards do x8/x4, which will mean a performance hit on the second GPU.

5) The system doesn't care whether the hardware is the same or not; each card is just a device referenced on a bus. That said, when setting up or changing a VM it is much easier to choose between 'Gigabyte 1650' and 'EVGA 1650' than between 'EVGA 1650' and 'EVGA 1650', as there will be a video and an audio device for each card. The bus numbers do give each one a unique ID, but a different brand is simpler to tell apart.

6) You will get sound over HDMI from the GPU if you select it, as the GPU has audio built in; the mainboard has one audio device, which cannot be shared with more than one VM. You can usually split out some USB controllers in the IOMMU groups and pass them through; I haven't tried this yet since I am streaming from another room. Keep in mind you may want to add a separate USB card for extra USB passthrough, and you will only have an x1 slot and the bottom x16 slot (which is not x16 electrical) left free. With 2x VM drives, 1x cache drive for the host (recommended) and 1x parity drive, you only have 2 spaces left for data drives, so you can soon run out of expansion slots for extra cards etc. You could likely buy a good 1TB NVMe drive, set it as cache and put both VMs on it. An NVMe drive on a PCIe x2/x4 link with high IOPS will likely outperform the SATA SSDs anyway, since they are limited by the SATA bus. You do need to check whether using some of the NVMe slots disables SATA ports, as they often share lanes.

7) No, but as above, one fast NVMe drive may be better and keeps more slots free. Alternatively, you could use NVMe drives and pass them through for closer-to-metal performance.

8) Personal choice, really. You can likely get it to work fine; however, you could probably build two lightweight gaming PCs with similar performance plus a NAS within the same budget.

9) Overclocking is not recommended. Intel even considers XMP memory profiles overclocking and will void the warranty if you tell them you enabled it on a non-'K' SKU. In a storage server, stability is king. The factory-OC versions of the GPUs are fine, though.

10) Don't underestimate how quickly your Plex requirements can grow once you enable automatic media downloading through Sonarr/Radarr etc. As a precaution I'd plan for up to 12 drives as a mix of cache, VM and data drives. You don't necessarily need to buy the 12-drive licence yet, but think about power, drive bay space, SATA ports etc. and plan for the future. There's nothing worse than ripping it all out to start again in 12 months.
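To put some of 1), 4) and 6) into practice, here is a rough sketch (plain Linux sysfs reads, nothing Unraid-specific) that lists each IOMMU group with its devices and the negotiated PCIe link width for anything that reports one. The paths are standard sysfs locations, but treat the exact output and field availability as assumptions and check against your own board:

```python
#!/usr/bin/env python3
"""Rough sketch: list IOMMU groups and PCIe link widths via sysfs.

Run on the host once IOMMU/VT-d is enabled. Devices that share a group
generally have to be passed through together unless ACS override splits
the group further.
"""
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")
PCI = Path("/sys/bus/pci/devices")

def read(p: Path) -> str:
    try:
        return p.read_text().strip()
    except OSError:
        return "?"

if not GROUPS.exists():
    print("No IOMMU groups found - is VT-d/IOMMU enabled in the BIOS?")
else:
    for group in sorted(GROUPS.iterdir(), key=lambda g: int(g.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            pci = PCI / dev.name
            ids = f"[{read(pci / 'vendor')}:{read(pci / 'device')}]"
            width = read(pci / "current_link_width")  # e.g. 8 for an x8 link
            print(f"  {dev.name} {ids} link width x{width}")
```

If a GPU and its HDMI audio function sit in a group by themselves, passthrough is straightforward; if unrelated devices share that group, that is where the ACS override option mentioned in 1) comes in.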

 

Another tip: use the trial and extend it if needed. You can try before you buy, so if it's not for you, nothing is lost and you can use the hardware elsewhere. You can even hold off on the second GPU, as it's easy to add later. Get Unraid running, set up your VMs, then expand. It's fairly easy to add a second GPU and then swap the assignment in the VM template.

 

 

Good luck

 

 

