Building a monster. First Unraid build.



Hi all,

 

So I've been lurking for quite a while now and musing over possible build scenarios.  Like many, I've become starry-eyed over the GPU passthrough possibilities with Unraid - along with everything else it can do, such as spinning up the occasional VM for work testbed purposes - and am looking to take the plunge in the next month or so on a new build.  The build is subject to change as I'm still weighing up 1 vs 2 physical CPUs and such.

 

Existing background:  I currently have two machines, again like many - a Windows gaming desktop, and a server box running Plex, torrents and a few associated webapps and whatnot. 

 

Short specs on my existing kit, but pretty much what you'd expect:

 

Gaming machine:  i7-4770k, 8 GB 2400MHz RAM, 500 GB SSD, GTX 780

Server machine: i7-2600k, 16 GB 1600MHz RAM, 250 GB OS SSD, 5 x 2 TB storage drives presented as one drive to the OS with DrivePool in what is basically a JBOD array.

 

With this new build I'm looking at combining the two sets of functions into one monster of a machine.  There are a couple of points I'd like to clear up, and I'm looking for some advice.

 

So far I'm musing over:

 

CPUs: 

2 x Xeon E5-2670 v3 for 24 cores, 48 threads between them.

 

Motherboard: 

Probably an Asus Z10PE-D8 - dual socket, dual LAN, plenty of x16 PCIe slots with VT-d support, onboard VGA, 12 SATA ports of mixed flavour, amongst other goodies.

 

RAM:

Probably 32 GB of 2133MHz ECC RAM, going by the above board.

 

Drives:

This is where it gets interesting.

6 x 4 TB spinning rust of one description or another for storage

2 x 250 GB SSDs on SATA3 (Cache array)

2 x 512 GB PCIe NVMe SSDs on PCIe x4 to M.2 adapter cards with heatsinks - the plan is to use Samsung 950 Pros, and these things run hot!  These will be outside of the array, and ideally run in RAID 0 for speed purposes as the VM host drive.

 

Graphics:

1 x GeForce GTX 1080 for the VM

 

PSU:

1 x ~1000 W PSU (probably)

 

Case:

1 x Huge case - probably Corsair Obsidian 900D

 

Cooling:

Oh yes.  Think this'll need watercooling.

 

So yeah, a couple of questions initially. 

 

Is it possible to use unRAID to set up a RAID 0 array of 2 PCIe drives like that?  I'm fairly certain the board won't be able to create a RAID array on PCIe disks natively (although wiser heads than mine should feel free to tell me otherwise!).

 

CPU overprovisioning/pinning - I've been trying to read up on this and get my head around the best setup.  Would it be best to provision both the VM and the Docker applications with their own cores/threads directly?  Or should I just provision the VM and leave Docker to eat the rest as it needs/wants?  As an example - with Plex transcoding for 7 users while I'm running an intensive gaming session, would an unpinned Plex attempt to use the cores I'm using on my VM?

 

LAN port splitting - is it possible to provision one LAN port directly to a VM, with the other used for the server functions?

 

Any help much appreciated.  Looking forward to getting this beast set up.


If you get a motherboard with IPMI, you most likely can use the built-in IPMI video for unRAID console video, so no need to waste a slot. I know Supermicro boards with IPMI work that way, because I have one.

 

Thanks Jonathan - interesting to know.  How exactly does this work?  Essentially remoting in from another machine?

 

I'm now looking at an ASUS Z10PE-D8 with Xeon E5-2670 v3 chips instead - that board seems to have an onboard VGA adapter which should make life easier.  Bit pricier but if I'm going large, I might as well go the whole hog on something half-modern.

 

 

 

Regarding the NVMe drives, my understanding is that the mainboard won't let you RAID those.  So any RAIDing that would happen with them would need to be in software (i.e. provided by an OS).

 

Yeah, had a feeling that'd be the case.  Is there another way to present two drives like this as one storage unit outside the array, for the purposes of storing a VM?  I've read somewhere about partitioning a cache pool too, which makes me wonder...


Using BTRFS for the cache drives, you should be able to do that.  I've not used a cache drive/array/pool recently, so I'm not 100% sure on that.

 

I'm not sure how capable a "trial" version of unRAID is, but if it allows setting up a BTRFS cache array, you could test the idea that way, assuming you have some spare drives lying around (and two are the same size).
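
For what it's worth, the unRAID cache pool is just a multi-device BTRFS filesystem under the hood, so if you do go that route you can check and change the data profile from the console.  A rough sketch, assuming the pool is mounted at the usual /mnt/cache (double-check the mount point before running a balance):

# show how data/metadata are currently laid out across the pool
btrfs filesystem show /mnt/cache
btrfs filesystem df /mnt/cache

# stripe data across both SSDs (RAID0) while keeping metadata mirrored (RAID1)
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache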

 

 


If you get a motherboard with IPMI, you most likely can use the built-in IPMI video for unRAID console video, so no need to waste a slot. I know Supermicro boards with IPMI work that way, because I have one.

 

Thanks Jonathan - interesting to know.  How exactly does this work?  Essentially remoting in from another machine?

Pretty much. For Supermicro's version, it's like a small embedded computer listening on a totally separate Ethernet connection that allows you to control power to the main board, watch and interact with the local terminal screen, enter the BIOS - pretty much anything you could do sitting at the local console with an attached keyboard, mouse and monitor. It allows you to tuck a server into a space where you have no desire to be beside it to manage it.

 

If you have a "server closet" and don't want to sit in the doorway leaning at awkward angles to see a monitor balanced on top of the tower typing on a keyboard in your lap, IPMI is your best friend. If you have a properly set up VPN on your endpoint security device, you can remote in from anywhere you have decent internet and manage the server like you were sitting there.
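
The full KVM-over-IP console is normally reached through the BMC's web interface, but for quick checks you can also talk to it from any other machine with ipmitool.  A few examples - the address and credentials here are just placeholders for whatever you configure on the IPMI port:

ipmitool -I lanplus -H 192.168.1.50 -U admin -P yourpassword chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P yourpassword sdr list          # fan and temperature sensors
ipmitool -I lanplus -H 192.168.1.50 -U admin -P yourpassword chassis power cycle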


NVMe Samsung 950 Pros do not run hot.  It's nonsense scaremongering by certain reviewers who sit all day benchmarking.  Under heavy 'normal' use they don't run hot at all.  My 950 Pro is in a low-airflow case, and it never gets more than 50°C even after thrashing it, and it returns to ambient temps almost instantly.


NVMe Samsung 950 Pros do not run hot.  It's nonsense scaremongering by certain reviewers who sit all day benchmarking.  Under heavy 'normal' use they don't run hot at all.  My 950 Pro is in a low-airflow case, and it never gets more than 50°C even after thrashing it, and it returns to ambient temps almost instantly.

 

Thanks for that HellDiverUk.  It was indeed reviews of them that first got me thinking about this.

 

http://www.guru3d.com/articles-pages/samsung-950-pro-m-2-ssd-review,6.html as an example.

 

Yeah, I understand those are proper thrash-testing scenarios, but some of the temperatures I was seeing were somewhat alarming.

 

I think I will more than likely use an Angelbird in any case - but I may hold out for a 1 TB PCIe SSD to plonk in it for the VM OS outside of the array (https://www.overclockers.co.uk/angelbird-wings-px1-high-performance-pcie-x4-to-m.2-adapter-card-for-ahci-nvme-hd-001-ab.html).  Unnecessary maybe, but it certainly can't hurt system temperatures.


A few points.

 

unRAID 6.2.0 beta is still buggy. Get the Samsung SM951 AHCI version so you can use the stable 6.1.9. In both benchmarks and real life, the SM951 AHCI is comparable to the 950 PRO.

 

Do NOT mix RAID with unRAID, especially RAID 0. In fact, don't bother with RAID 0 at all. For SSDs, the real-life performance improvement is imperceptible.

The risk of RAID 0 is just not worth it. And I'm speaking from personal experience: a RAID 0 failed on me out of the blue despite both drives later testing perfect with zero SMART errors. The "but it has never failed for me" justification is a fallacy => it hasn't happened to you because it hasn't happened YET => until it does!

 

I urge you to reconsider watercooling. You are doing an unRAID build, so presumably it would be running close to 24/7 (otherwise, you are better off with other solutions e.g. SnapRAID + DrivePool). Watercooling just introduces more points of potential failure. Your system will crash within a few minutes if the pump fails, for example.

Air doesn't fail. Even with a dead fan, a big passive Noctua heatsink will keep going a lot longer without crashing your system. And under idle / low workload, it can last practically forever.

 

Most high-end server motherboards have built-in (2D) non-Intel graphics (usually Asmedia IIRC). Check the spec. There's no need to get a separate GPU for the unRAID console since there's never a need for 3D.

 

Be careful: some motherboards are not EATX but SSI-EEB (the same size as max EATX but with different standoff locations). Some cases advertise SSI-EEB support, but it just means you have to remove a few standoffs (so the board is held on with fewer screws). Some cases don't mention SSI-EEB support but actually are compatible (or partially compatible, i.e. by removing some standoffs). And some cases don't mention SSI-EEB and actually are NOT compatible.

It is a confusing situation.

 

Get an 80+ Titanium PSU if you want to run 24/7.

 

... yeah probably that's it.  ;D

 


A few points.

<snip>

 

Cheers testdasi - good response :)

 

Yeah, I'm still pondering different board/chip combos - the latest is in fact an EEB board with 2 of the E5-2680 v3s - and the board has an Asmedia controller exactly as you mention.  The case I'm looking at (the Corsair 900D) is compatible, so no worries there.

 

Mixing unRAID and RAID - I was planning on having the drives in a RAID 0 pair outside of the array, and yeah, I'm well aware of the dangers of RAID 0 - I don't store any data I want to keep on my gaming desktop anyway, it's only stuff that's re-downloadable elsewhere.  In any case I'm probably just going to go for the one drive, but pick up a 1 TB from somewhere instead of the 2 x 512 GB.

 

Watercooling-wise...  having been the victim of a leaky all-in-one loop and having had all-in-one pumps fail on me in the past too, I would normally agree with you.  However, I get the feeling this box will generate a lot of noise and heat under regular air cooling.  I am planning on using a reservoir with a dual-pump configuration - so even if one of the pumps fails I'll still have flow.  I dunno, I might give air a go first and see if it's tolerable - we'll see.

 

And yeah, PSU wise - definitely not skimping here.


CPUs: 

2 x Xeon E5-2680 for 16 cores, 32 threads between them.  4 cores, 8 threads for gaming, 12 cores and 24 threads for server.

 

I don't think you need 12 cores for the server side.

What I would do is isolate 8 cores / 16 threads from unRAID using isolcpus, then pin them to the gaming VM. These cores then can't be used by anything unless you specify it, so if Plex is transcoding etc. it will not touch the VM's cores. Then pin the emulation calls for that VM to 2 of the other cores (the un-isolated ones).

I would then pin a further 4 of the remaining cores to the Plex docker so it can use those for transcoding. This would then leave you with 4 cores for the other dockers, unRAID, and the emulation calls from the VM pinned earlier.
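
To put rough numbers on that, something like the snippets below - the core IDs are just placeholders assuming cores 0-15 are physical and 16-31 are their HT siblings, so check how your threads are actually numbered with lscpu -e before pinning:

# /boot/syslinux/syslinux.cfg - add isolcpus to the unRAID boot line to keep
# cores 8-15 and their HT siblings 24-31 away from unRAID and the dockers
append isolcpus=8-15,24-31 initrd=/bzroot

<!-- in the gaming VM's XML: pin the vCPUs to those isolated threads,
     and pin the emulator (QEMU housekeeping) to two un-isolated cores -->
<cputune>
  <vcpupin vcpu='0' cpuset='8'/>
  <vcpupin vcpu='1' cpuset='24'/>
  <vcpupin vcpu='2' cpuset='9'/>
  <vcpupin vcpu='3' cpuset='25'/>
  <!-- ...and so on, pairing each core with its HT sibling... -->
  <emulatorpin cpuset='2-3'/>
</cputune>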

 

 


CPUs: 

2 x Xeon E5-2680 for 16 cores, 32 threads between them.  4 cores, 8 threads for gaming, 12 cores and 24 threads for server.

 

I don't think you need 12 cores for the server side.

What I would do is isolate 8 cores / 16 threads from unRAID using isolcpus, then pin them to the gaming VM. These cores then can't be used by anything unless you specify it, so if Plex is transcoding etc. it will not touch the VM's cores. Then pin the emulation calls for that VM to 2 of the other cores (the un-isolated ones).

I would then pin a further 4 of the remaining cores to the Plex docker so it can use those for transcoding. This would then leave you with 4 cores for the other dockers, unRAID, and the emulation calls from the VM pinned earlier.

 

Useful info, thanks gridrunner - although I'm now looking at 2 x E5-2670 v3s which are 12 core/24 thread beasts. 

 

So 24 cores:

8 cores (16 threads) pinned to the gaming VM

2 other cores (4 threads) pinned to the VM emulation calls

8 cores (16 threads) to the Plex docker

6 cores (12 threads) for other dockers, unraid and/or other VMs as I spin them up

 

Cooling wise...  I'll admit to having second thoughts about water.  I think I'm going to go JustJoshin's route at least at first with regular old Air as testdasi suggested.  If the noise and temperatures get too much I'll go down the watercooling route.


Useful info, thanks gridrunner - although I'm now looking at 2 x E5-2670 v3s which are 12 core/24 thread beasts. 

 

So 24 cores:

8 cores (16 threads) pinned to the gaming VM

2 other cores (4 threads) pinned to the VM emulation calls

8 cores (16 threads) to the Plex docker

6 cores (12 threads) for other dockers, unraid and/or other VMs as I spin them up

 

Cooling wise...  I'll admit to having second thoughts about water.  I think I'm going to go JustJoshin's route at least at first with regular old Air as testdasi suggested.  If the noise and temperatures get too much I'll go down the watercooling route.

 

The 8 cores you have pinned to the gaming VM can also be pinned to other VMs as well - obviously if the gaming VM was using those cores at the same time, a performance hit would occur. The isolcpus setting stops the host OS (unRAID) from using them, but you can then pin them to either dockers or VMs.

8 cores seems a lot for Plex, but then you have a lot of cores, so why not. However, even though you pin Plex to those 8 cores, unRAID and other dockers can still use them - pinning the 8 cores to Plex only means Plex will use those cores and not others. If you wanted to be 100% sure those cores would only be used for Plex, you would again need to isolate them as well with isolcpus in the syslinux.cfg file and then pin them to Plex. However, I don't think keeping them clean for Plex is necessary, as I doubt other processes on the server would have much effect on Plex if they crept onto the cores Plex is using.

Hope that makes sense!?
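
For the docker side it's just a CPU-set flag - in unRAID you'd drop it into the container's extra parameters, but the equivalent plain docker command looks something like this (core numbers and paths are placeholders, and use whichever Plex image you actually run):

# pin the Plex container to cores 4-7 and their HT siblings 20-23
# (un-isolated cores, per the split sketched earlier)
docker run -d --name plex --net=host \
  --cpuset-cpus="4-7,20-23" \
  -v /mnt/user/appdata/plex:/config \
  linuxserver/plex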

 

Regarding watercooling, I had the same thoughts as you, however I went for the watercooling. The reason being thermal throttling: I want to make sure that if all my cores are in turbo they will not clock down due to heat, especially for games, which favour a high clock speed. For my chip the all-core turbo speed is 3.0GHz, and 3.2GHz for a single core, and I want to be sure heat will not make those core speeds slower. Hence water cooling. I am glad I did, as my CPU temp never goes above 50°C, and that is with a 5% overclock on the BCLK which gives me an extra 150MHz on my all-core turbo on top of the figures above.

 

 

 

