Help choosing CPU/Motherboard -- First Unraid Build



I started a topic in Hardware but figured I might get a better response here.

 

This is how the build is going to be used.

 

Small NAS (6TB-9TB using 3TB drives)

Windows 10 VM -- always on, no gaming, typical productivity software and Photoshop / Corel Draw

Linux VM -- always on -- headless -- LAMP -- runs a small internal-only PHP/SQL site, very light use

Windows XP VM -- always on -- headless -- used for some software that just doesn't play well in 64-bit environments

A few dockers but nothing with transcoding -- I'm building a second unraid server for media.

 

What kind of CPU should I be looking at?

 

I was thinking Xeon E3-1230 originally, but the more I read, the more I go in circles -- basically stuck on Xeon vs. i7, and if Xeon, whether v3 or v5.

 

Can someone please point me in the right direction?


With all those VMs you will need at least six threads just for them, and then at least two for the UnRaid server itself. A quad-core i7 (with Hyper-Threading) will give you eight threads to work with, so that would max you out, assuming you dedicate two to each VM and leave two for UnRaid. To err on the side of caution, I'd go with at least a six-core Xeon, giving you 12 threads and room to breathe.


Thank you.

 

So something like the E5-2620 v3 would be good?

 

I'm completely new to VMs -- do cores need to be assigned in pairs? If not, I think the XP machine would need just one, and probably the Linux machine as well.

 

Given the cost of the processor, I'm not sure it wouldn't make more sense to build a simple unraid NAS without VMs, plus a new Windows 10 i5 Skylake box, and then just use two random old machines for the LAMP and XP desktops.


I think that CPU would be fine, and no, cores don't need to be assigned in pairs at all -- the Linux and XP machines could very well work just fine with one core each; I was just erring on the side of caution. It would save you a heck of a lot of money to go the UnRaid virtualization route rather than using two old physical machines that will take up physical space and use electricity.


For what you've outlined, a high-end E3 series Xeon will be fine => a Skylake E3-1275v5 would be a good choice. 

http://www.superbiiz.com/detail.php?p=E3-1275V5B&c=fr&pid=9b876fb1a930c112138498ca331c0da745d5eebe7a4ba58cbbc47ec762e42aa6&gclid=CJvUjay8ncsCFQiJaQod8_IGEg

 

Add 32GB of ECC memory and you'll have plenty of capacity for the small VM workload you're looking to support.

 

4 cores is plenty -- no need to spring for a 6 (or more) core E5 series Xeon for the workload you've indicated.

 

 

 


If you're doing hardware pass-through with more than one VM (really it should be any, but you'll likely be fine with just one), it is recommended to get a CPU with support for ACS on root ports (this will ensure proper isolation of devices in the PCIe slots).

If you want ECC -- an E5 Xeon or better.

If you don't want ECC -- an i7 "Extreme" (no Skylake products as of yet, only Haswell and Ivy/Sandy Bridge).

Also, yes, I seem to preach ACS; however, it is the stance of both Limetech/JonP and of Alex Williamson, who is kind of the "go-to" for VFIO and updates regarding this kind of thing.


Thanks to all three for the help.

 

The only VM with pass-through will be the Windows 10 VM. The other two are just going to be accessed by VNC or something similar.

 

The E3-1275v5 and E5-2620 v3 are only $100 apart, and the i7-5820K is right in the middle, has ACS, and posts a considerably higher PassMark score. So it comes down to whether ECC matters for my use -- if not, the i7-5820K seems like the correct choice.


I chose to do without, and the 5930k fit the bill quite nicely for my build.

I thought the additional 8 PCIe lanes might be handy, so I sprang for the 5930K over the 5820K; however, it was likely unneeded, and certainly not needed for your stated usage.

However, ECC RAM is certainly a better proposition for stability, and it gives a mechanism to detect single-bit errors.

I felt that, in terms of performance versus a "high-end" desktop, Intel charged too much for the extra cost to be worth it to me.

If the E3 supported ACS (I previously had a Z97 setup, with plenty of shortcomings for the number of VMs with hardware assignment I planned to support), I would likely have gone that route; but it doesn't, and I couldn't justify the cost of an E5.

If you do go X99 (5820K), you have a quad-channel memory bus, so it is recommended not to exceed one stick of RAM per channel.

This fit my requirements fine (32GB total, 4X8GB), but keep that in mind if you plan to load up all 8 slots.


In addition to the PassMark scores, you should also pay attention to the "PassMarks/core"

 

An E3-1275v5 scores 10312 on PassMark, but does this with 4 cores, so each core has 2578 PassMarks of "horsepower"

 

The E5-2620v3 scores 9912 with 6 cores -- or 1652 PassMarks/core

 

The i7-5820K scores 13002 with 6 cores -- or 2167 PassMarks/core

 

So while the i7-5820K has more raw computing power when all threads are active, the E3-1275v5 will do better with single-threaded applications.
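The per-core arithmetic above is easy to check yourself; here's a quick sketch using the PassMark totals quoted in this thread:

```python
# PassMark scores quoted in this thread: (total score, physical cores)
cpus = {
    "E3-1275v5": (10312, 4),
    "E5-2620v3": (9912, 6),
    "i7-5820K": (13002, 6),
}

for name, (score, cores) in cpus.items():
    per_core = score / cores
    print(f"{name}: {score} total / {cores} cores = {per_core:.0f} PassMarks/core")
# E3-1275v5: 2578 PassMarks/core
# E5-2620v3: 1652 PassMarks/core
# i7-5820K:  2167 PassMarks/core
```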

 

Note also that the E3-1275v5 is providing that "horsepower" with an 80w TDP, and the E5-2620v3 does it with 85w, while the Core i7-5820k is a 140w TDP processor ... so it will use appreciably more power, generate more heat, and require a more robust cooling solution.

 

As for ECC => personally I consider that a nearly mandatory feature for a server.  If you're building a fault tolerant server that can continue to function when a disk fails, I'd think you'd want it to also have a reliable memory subsystem that can tolerate random bit errors, and detect when there are multiple errors -- otherwise you'll "catch" those errors when you're having strange issues and happen to run MemTest.    Unbuffered ECC (i.e. with the E3) is good; but of course buffered ECC RAM is even better -- an advantage of the E5.

 

For a single VM with pass-through, I'd definitely go with the E3 with ECC RAM.  If you really want a higher PassMark CPU, I'd go with an E5-1650v3, which scores 13530, and would provide the added advantage of supporting buffered ECC memory.  Like the i7, it's a 140w processor ... but it's a far better choice than the i7 (granted, it also costs a few hundred $$ more).

 

 


...

If you do go X99 (5820K), you have a quad-channel memory bus, so it is recommended not to exceed one stick of RAM per channel.

 

ABSOLUTELY you do NOT want to install more than one memory module per channel in any board that uses unbuffered RAM ... especially if it doesn't support ECC.    It's still a good idea even on server-class boards using unbuffered modules; but at least if you have ECC the occasional random bit errors will be corrected.

 

With buffered RAM (E5 series Xeons) the bus loading is FAR lower [1 "load" per module instead of 16-36], so it's effectively a non-factor ... you can install as many modules as you want with no concern about waveform degradation.

 

  • 4 weeks later...

...

ABSOLUTELY you do NOT want to install more than one memory module per channel in any board that uses unbuffered RAM ... especially if it doesn't support ECC.    It's still a good idea even on server-class boards using unbuffered modules; but at least if you have ECC the occasional random bit errors will be corrected.

 

Why is this?  Are errors more prevalent?

 

Could you explain this because I obviously don't understand. 

Put it in concrete terms for me, because I thought I understood this stuff... If one were to use a consumer board like the ASRock X99 Extreme 6 and wanted to use 32GB of unbuffered (non-ECC) RAM, then what configuration should they use? 2x16? 4x8? If on a dual-channel board you want to use 2 DIMMs per channel, wouldn't you use 4 on a quad-channel? Not 1?

 

 


It's our little secret.

 

Not for everyone. A dual-processor 2670 is going to suck power, but if 128GB of ECC RAM sounds good and you need more cores, this is real nice.

 

While I was able to boot with one processor and 2 sticks of RAM at 60w, a usable system is likely to idle in unRaid at double to triple that.

 

Downside: 2 processors, 16 cores, 32 threads sounds cool, but some apps aren't very multi-core friendly. Gamers beware.

 

Want to create the next Facebook with only beer money? These are priceless.

 

YMMV

 


 

Is there an ATX version of the S2600CP family?

I don't think so. I know Asus and ASRock each have a dual-socket 2011 ATX board, at least.


...

ABSOLUTELY you do NOT want to install more than one memory module per channel in any board that uses unbuffered RAM ... especially if it doesn't support ECC.    It's still a good idea even on server-class boards using unbuffered modules; but at least if you have ECC the occasional random bit errors will be corrected.

 

Why is this?  Are errors more prevalent?

 

Could you explain this because I obviously don't understand. 

Put it in concrete terms for me, because I thought I understood this stuff... If one were to use a consumer board like the ASRock X99 Extreme 6 and wanted to use 32GB of unbuffered (non-ECC) RAM, then what configuration should they use? 2x16? 4x8? If on a dual-channel board you want to use 2 DIMMs per channel, wouldn't you use 4 on a quad-channel? Not 1?

 

With an unbuffered memory system, there is one "load" for every chip on the memory modules -- i.e. typically 16-18 loads per module. With buffered modules, there is one load per module.

If you're old enough to remember when you plugged actual telephones into jacks in your home, you may remember that if you picked up too many phones they wouldn't work right -- that's because they were overloading the capabilities of the phone line. [Phones are still marked with their "ringer equivalent" number, which shows how much load they put on the line.]

It's the same concept with your memory bus -- as the load increases, the signaling waveforms are significantly degraded, and reliability decreases. With buffered modules, the loads never get high enough to be an issue; but with unbuffered modules you can present very high loads very quickly if you install a lot of modules. It's best to only install one module per channel on those boards -- although if you're using an ECC-capable board (i.e. a Cxxx chipset with an E3 Xeon) then it's less risky to use two modules, since single-bit errors (which are more likely due to the high loading) will be corrected. But personally I never install more than one module per channel on unbuffered boards.
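To make the loading difference concrete, here's a toy sketch of the loads a memory controller sees on one channel. The one-load-per-module figure for buffered DIMMs is from the explanation above; the 16 chips per unbuffered module is an assumption picked from the typical 16-18 range quoted there.

```python
# Assumption: a typical unbuffered DIMM has ~16 DRAM chips, each one
# electrical load on the channel (the thread quotes 16-18 per module).
CHIPS_PER_UNBUFFERED_MODULE = 16

def channel_loads(modules_per_channel: int, buffered: bool) -> int:
    """Approximate number of electrical loads on one memory channel.

    Buffered (registered) modules present one load each; unbuffered
    modules present roughly one load per DRAM chip.
    """
    per_module = 1 if buffered else CHIPS_PER_UNBUFFERED_MODULE
    return modules_per_channel * per_module

print(channel_loads(1, buffered=False))  # one unbuffered module: 16 loads
print(channel_loads(2, buffered=False))  # two unbuffered modules: 32 loads
print(channel_loads(2, buffered=True))   # two buffered modules: 2 loads
```

This is why doubling up unbuffered modules on a channel degrades the signaling so much faster than doubling up registered ones.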

 

This concept is shown well in Item #10 here:  http://www.xlrq.com/stacks/corsair/153707/index.html

 

 

