Server Upgrade


So, this was a long time coming.  The old faithful Sempron 140, while fine for unRAID in general, is a poor fit for the new world of unRAID.

 

What exactly does this mean?  It means that if you want to do any of the cool stuff that is unRAID today, you need a little power behind your apps.  Sure, some will argue that unRAID should only be a file server, leaving the other duties to a dedicated app server of sorts, but I am beyond that.  I want to make 4 servers into 1.

 

I have been all over the board looking at hardware, but thought it was easier to just ask.

 

I have some RDIMMs lying around, but it seems that to use them I need to spend more on the mobo and CPU; by the time I get there, I could buy a new CPU, mobo, and RAM for less, as opposed to forcing in something I already have.

 

But if there were a way to use them, great; if not, fine.

 

All I am looking to do is run some Docker containers, one of which can use some CPU.  Emby/Plex work awesome if the player does all the work to decode/play, but the second you fire up a tablet that tells the server to transcode, the thing chokes.

 

So, requirement number one is the ability to handle some transcoding.  I have read stories about people needing 20-30 transcoding sessions for their family and friends, but that isn't me.  I need 1-2 sessions.  If I get 4, great, but really it's just the option I am after.
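
For rough sizing, a commonly cited rule of thumb is on the order of 2,000 PassMark points per simultaneous 1080p software transcode; treat that figure as an assumption (it shifts with bitrate and codec), not a spec.  A quick back-of-the-envelope sketch:

```python
# Rough CPU sizing for software transcoding. The ~2000-PassMark-per-
# 1080p-stream figure is a commonly cited rule of thumb, not a spec.

PASSMARK_PER_1080P_STREAM = 2000   # assumption; varies with bitrate/codec
HEADROOM = 1000                    # leave room for unRAID + other containers

def required_passmark(streams):
    return streams * PASSMARK_PER_1080P_STREAM + HEADROOM

for n in (1, 2, 4):
    print(f"{n} stream(s): ~{required_passmark(n)} PassMark")
# 1-2 streams lands around 3000-5000 PassMark; 4 streams around 9000.
```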

 

Requirement 2 is another easy one.  I want hardware passthrough for virtualization.  I want to run a pfSense server and a Kodi/XBMC system, to which I can pass NICs and a video card, respectively.

 

Beyond this, I want to be a little future-proof.  The Sempron 140 still has life left in it, but it cannot meet today's demands.  Money-wise, I was looking in the $100-300 range, and honestly see it landing more in the $200-300 range.

 

I have no idea if this is possible or not, but it would beat the $400-500 starting point required just to force in the use of my ECC RDIMMs.  Let me know; I looked a bit at AMD and Intel, and it seems the hardest part is affordable support for hardware virtualization.  Therefore, it seemed easiest to come ask the masses who have dealt with the hardware side of things a bit more than I have.

 

Link to comment

It depends on just how much "horsepower" you want to have; and how much fault-tolerance you want in the system.    Personally, I wouldn't build a system without ECC RAM ... and if you can use registered modules, that's even better.    But as you noted, that will add to the cost -- your price targets are WAY too low for a high-end system with those characteristics.

 

If you don't care about fault tolerance for your memory subsystem, you can simply use a desktop board with an i5 and have plenty of "horsepower" and VT-d support for VMs.    ... but I'd give serious thought to using a server-class board with a Xeon and ECC memory.    If you stay with unbuffered ECC modules (i.e. an E3 series Xeon) the price isn't too bad ... jumping to an E5 series Xeon with registered modules would, however, bump everything up a few hundred $$.

 

Link to comment

UnRAID doesn't "benefit" from ECC memory (registered or not) ... it simply provides a level of fault tolerance for your memory system.

 

Modern memory modules are very reliable, but they still encounter a few errors per year caused by stray charged particles (cosmic rays, alpha particles, etc.).    Many random crashes that folks attribute to their OS are actually caused by a memory "glitch" -- it's not a FAILED memory module, it's simply a normal random error.

 

ECC modules will auto-correct most such glitches (they can only correct a single-bit error, but that's the vast majority of these kinds of errors).
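
For the curious, the mechanism behind that correction is an error-correcting code (modern ECC DIMMs use a SECDED scheme over 64 data bits with 8 check bits).  A toy Hamming(7,4) code in Python shows the single-bit-correction idea -- an illustration of the principle, not the actual DIMM circuitry:

```python
# Toy Hamming(7,4) code: the same single-error-correcting idea that
# ECC DIMMs apply (they use a wider SECDED code over 64 data bits).

def hamming74_encode(data):  # data: list of 4 bits
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(code):  # code: list of 7 bits, at most 1 flipped
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[5] ^= 1                          # simulate a random single-bit "glitch"
assert hamming74_correct(sent) == word
print("single-bit error corrected")
```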

 

Buffered (whether Registered or FBDIMM) modules significantly reduce the likelihood of any of these issues by dramatically reducing the electrical loading on the address and data buses.    A typical unbuffered RAM module presents 16 "loads" (or 18 for ECC modules) on the bus -- so if you install 2 modules that's 32/36 loads, and with 4 modules that's 64/72 loads.    These loads cause the signaling waveform to distort significantly, which can cause errors due to a misread signaling level.    With a buffered module, there's ONE load per module -- so even if you install 4, 8, or even more modules there's very little impact on the waveform.    [Watch Item #10 here if you want to see the effect:  http://www.xlrq.com/stacks/corsair/153707/index.html ]
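
A quick sketch of that arithmetic, using the per-module load counts quoted above:

```python
# Electrical loads on the memory bus, using the figures above:
# an unbuffered module presents 16 loads (18 with ECC), while a
# buffered/registered module presents just 1 load.

def bus_loads(modules, ecc=False, buffered=False):
    if buffered:
        return modules          # one load per module, ECC or not
    return modules * (18 if ecc else 16)

for n in (2, 4, 8):
    print(f"{n} modules: unbuffered ECC = {bus_loads(n, ecc=True):>3} loads, "
          f"registered = {bus_loads(n, buffered=True)} loads")
# 4 unbuffered ECC modules already put 72 loads on the bus;
# 8 registered modules put only 8.
```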

 

Since unbuffered ECC modules will auto-correct single bit errors, they tend to eliminate most loading-related "glitches" as long as you don't have more than 4 modules installed.    Without ECC, I NEVER install more than 2 modules in a system, to limit the waveform distortion.

 

Do you NEED ECC modules ??    Of course not ... just like you don't NEED a parity disk.    ECC simply allows your system to automatically correct memory errors ... just like the parity disk allows your system to not lose anything if one of your disks fails.

 

My view is simply that if you're spending the money to build a fault-tolerant server, you should maximize the reliability of that system ... and a few extra $$ for ECC memory is a good investment.    Whether you want to spend even more to get registered RAM is another question altogether => the biggest advantage there is if you want to install a truly prodigious amount of memory [6, 8, or even more modules].

 

Link to comment

So, is either one better on the server side in this case?  Xeon vs. Opteron.  Opteron used to rule the server space even when Intel was taking over the desktop, but now you never seem to see Opteron when searching Newegg.  Cost-wise, is it a toss-up here?  And is E5 the space to really be in to be ready for the future?

Link to comment

Opteron used to rule, but lately it can't compete with Intel on low power usage and high performance.  So much so that Limetech hasn't tested AMD much for passthrough; see their list of tested hardware (which I can't find right now).

 

You will find the Supermicro X10 series motherboards paired with Gary's recommended ECC memory, and an Intel Xeon or a VT-d-capable i5 or i7 processor.

 

If you are a diehard AMD guy you can roll your own and it will work; just expect to live a lonely life.  You won't have many friends running the same stuff you are and helping you resolve your unRAID hardware passthrough issues.  Whether that is worth the $100-300 you save by not paying the Intel tax, only you can know for sure.  I have an old desktop with an AMD hex-core Phenom II X6 1100T (Thuban); it runs the Dockers fine on 16GB RAM, it runs the VMs fine, and the CPU supports hardware passthrough, but the old motherboard doesn't.  Still, it was fun to start playing with v6 and Dockers.  It is still a good backup system as long as you aren't planning on hardware passthrough or needing ECC memory.

 

Be careful with pfSense.  @archedraft, the only one I am aware of doing it, has given up on pfSense under unRAID due to the pfSense VM being shut down every time the unRAID array is taken offline.  We are waiting on Limetech to make it possible for VMs to remain running regardless of array status.  Let us know how you make out with this.

 

https://lime-technology.com/forum/index.php?topic=38877.0

 

 

Link to comment

Gary, I've been reading about some of the E3 v3 Xeons supporting RDIMM memory, but I don't believe it is all of them.  Any idea if this one does?  Intel Xeon E3-1241 v3, 3.5GHz, LGA 1150.

 

Or is it the newer versions?

 

Otherwise, the thought is: if a Sempron 140 was good enough till now, how terrible would an Athlon really be?  I do see your point about protection, but my use is at home and most of my data is at rest, except for my Docker containers doing some work, so the cost may not be so justified.  I say this simply based on what I have been using for years.

 

Sure, that doesn't make it right, but it saves bags of money.

Link to comment

The E3 series supports unbuffered ECC memory ... you have to go up to the E5 series Xeons to get registered RAM support.    The E3s are fine ... even though the memory is unbuffered, you still get ECC protection.

 

There's nothing "wrong" or "terrible" about using an AMD chip ... if cost is a major factor you can indeed build less expensive systems that way.  You'll simply have less "horsepower" and probably use more power.

 

My view is simply that for a fault-tolerant server I want fault-tolerant memory ... and the motherboard/CPU/RAM is only a small % of the total cost, so I don't mind spending more for a relatively high-end system.    If I was building a new system today and wanted to keep the cost modest, I'd probably use a SuperMicro X10SLL-F-O board ($168) with ECC memory and an E3 series Xeon or an i3 with ECC support (e.g. the i3-4170 is $110 and scores 5169 on PassMark ... a pretty good price/performance tradeoff if you don't want to spring for an E3 series Xeon like perhaps a 1241v3, which costs $278 and scores 10041 on PassMark).    These are both very power-efficient processors ... the i3 has a TDP of 54W, the Xeon 80W, and both idle at VERY low power consumption.    My view is that the extra $168 for the Xeon is worth it, since it nearly doubles the performance ... but the i3's performance is still very good.
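
For what it's worth, the price/performance arithmetic on those two chips (using the prices and scores quoted above) works out like this:

```python
# PassMark-per-dollar for the two chips above, using the thread's numbers.

cpus = {
    "i3-4170":   {"price": 110, "passmark": 5169},
    "E3-1241v3": {"price": 278, "passmark": 10041},
}
for name, c in cpus.items():
    print(f"{name}: {c['passmark'] / c['price']:.1f} PassMark per $")
# i3-4170:   ~47.0 PassMark/$
# E3-1241v3: ~36.1 PassMark/$
# The i3 wins on value per dollar; the Xeon nearly doubles absolute
# performance for $168 more.
```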

 

 

 

Link to comment

I suspect if you build a 6-core E5-based system it will last you a VERY long time.   

 

One other thought ... the board I noted is an excellent board, with 10 SATA ports and plenty of expansion slots => but it does not have IPMI.    There's another version of the same basic design that has a slightly different complement of expansion slots, but includes IPMI ... for essentially the same price ($1 more at Newegg):

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182927

 

Just depends on whether or not you want the IPMI feature (this lets you run the server completely "headless" and still access the BIOS and do virtually anything you could do with a display and keyboard connected to it).

 

The IPMI version does not have any PCIe slots that run at x16  (there are 2 x16 connectors, but they run at x8).  With PCIe v3 that's very unlikely to matter, but it's something you should know before buying.    On the other hand, it's got more high-bandwidth expansion slots -- a total of 7 slots that run at x4 or x8, compared to 4 on the other board, which also has 2 x1 slots.

 

Link to comment

I wouldn't even know what to do with 3 VMs with full video card passthrough.  Doing so requires connecting each to a display with keyboard and mouse.

 

With the server in the basement, the only place I need it is to run Kodi.

 

But yes, the thought of doing it is plain silly.  I will add one video card, and one NIC to run pfSense.

 

Beyond that I guess I could add video cards to mine bitcoins...

Link to comment

Agree ... I also have no plans to run multiple video cards => I was just noting that these boards will provide PLENTY of capability for anything you may choose to do in the future, as well as a VERY reliable setup with the buffered ECC-protected RAM and an E5 series Xeon.  ... and plenty of "horsepower" distributed among 6 physical cores.

 

If I was building a system this month, I'd probably build almost exactly what you're considering ... the only difference is I'd probably spring for a bit more "horsepower" by using an E5-1650v3  [6 cores, PassMark 13484 vs. 9981 for the E5-2620v3]

 

Link to comment

While what I am going to say may carry its share of risk of a bit flipped here or there, keep in mind that while ECC memory is "recommended", I would guess that most people here aren't using it.

I have no idea of the exact percentages, but certainly fewer are running ECC RAM than not.

 

Again, I am just stating this to give some perspective.

 

For me: I just completed a new build, originally planned it with ECC RAM, and in the end decided not to go with it.

Why?

 

Well, it certainly comes down to $$ in part; however, I also felt like I was getting ripped off!

Intel wants too much for Xeon E5s in my opinion, and the E3 doesn't support ACS on the processor root ports (which is what allows proper isolation for VMs when passing devices through).

 

As noted, CPUs can be compared using the PassMark (P.M.) chart, and that is what I used for some comparisons (CPU / price / P.M.).  I was considering the following items (see the quick value sketch after the list):

Xeon E5-2620v3: ~$422, P.M. 9,981

Xeon D-1540: embedded only, P.M. 10,904

Xeon E3-1275v5: ~$339, P.M. not rated yet (likely around 10k)

i7-5930k: ~$399, P.M. 13,650
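
A quick value pass over those numbers (the D-1540 is embedded-only and the E3-1275v5 was unrated, so only the two chips with both a street price and a score are compared):

```python
# PassMark-per-dollar and relative speed, using the prices/scores above.

cpus = {
    "Xeon E5-2620v3": (422, 9981),
    "i7-5930k":       (399, 13650),
}
for name, (price, passmark) in cpus.items():
    print(f"{name}: {passmark / price:.1f} PassMark per $")

speedup = 13650 / 9981 - 1
print(f"i7-5930k vs E5-2620v3: {speedup:.1%} faster")  # ~36.8%
```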

 

 

I really wanted proper IOMMU grouping, and without ACS on the root ports there is no guarantee that each item/add-on card will end up isolated in its own group.

I chose the 5930k as the "Extreme" edition processors support this, and so do the E5 Xeons; the E3s, however, do not.

Now, it is very possible the motherboard you choose does a good job at IOMMU grouping (one device in its own group); however, I got burnt pretty badly with my previous build, so I wanted to be sure.
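
If you want to verify what your board actually gives you, the kernel exposes the grouping under /sys/kernel/iommu_groups on any Linux host with the IOMMU enabled (unRAID included).  A minimal sketch to print each group; a device alone in its group can be passed through cleanly:

```python
# List IOMMU groups: each numbered directory under /sys/kernel/iommu_groups
# is one isolation unit; its "devices" subdirectory names the PCI devices.

from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```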

 

While the 5930k does not support ECC, it is ~36% faster than the more expensive Xeon E5-2620v3.

For me that was enough to sway the decision; and as for the idea that a couple of bits may flip here or there: so be it.  Nothing of mine is THAT precious, and most computers people use run non-buffered, non-error-correcting memory anyway.

 

The 5930k, in comparison to the E3-1275v5, also supports more PCI Express lanes (40 as opposed to 16).

Of course, if you're feeling spendy, the E5 also supports 40 lanes.

While you don't need a LOT of bandwidth for video decode/media usage, I plan to have a VM that I "could" game with, so the extra lanes are a nice addition!

 

Again, my opinion, just wanted to give some insight here and explain my reasoning for my hardware selection.

 

I have 3 Win10 VMs always on with GPUs assigned, and a 4th GPU sitting in its slot awaiting an addition in another room (if/as needed).

 

 

Link to comment

Rather than tolerating a "... bit flipped here or there ...", you can get essentially the same performance as the i7-5930k with an E5-1650v3 and have buffered, ECC-protected RAM, which allows you to install FAR more RAM with no degradation of bus signaling.

 

I agree that using unbuffered non-ECC RAM is fine if (a) you're not concerned about the potential data corruption from an occasional "flipped bit", and (b) you're not going to install more than 2 RAM modules.    But as I noted earlier, if you're building a fault-tolerant server then I assume you're concerned about data integrity ... so using the most reliable memory subsystem would also seem to be an important attribute.

 

 

Link to comment
