SM X10SL7 vs X9SCM-IIF (evolves into E3C224-4L and ECC discussion)



The X10 board has the 8-port LSI controller baked in, but it also has less expandability (only 2 PCIe slots). I can add an 8-port card to the X9 and still have 3 more slots: one for another 8-port card, one for my RAID card, and one more to pass through to a new unRAID test VM. With the X10 I am one slot short of doing this.

 

I already have two 8-port cards, so I don't have to buy another to use the X9 board.

 

I also prefer the more conventional DDR3 memory.

 

Is there anything I'd give up with the X9? That's the way I'm currently leaning.

 

Thanks!

Link to comment

Ivy Bridge CPUs are fine ... the only advantage of the X10 is that you can use Haswell-based processors. If you like the feature set of the X9 boards better, I'd just go with that -- although the 8 onboard LSI ports on the X10 are, as I'm sure you know, effectively as good as another PCIe x8 card.

 

 

 

Link to comment

Ivy Bridge CPUs are fine ... the only advantage of the X10 is that you can use Haswell-based processors. If you like the feature set of the X9 boards better, I'd just go with that -- although the 8 onboard LSI ports on the X10 are, as I'm sure you know, effectively as good as another PCIe x8 card.

 

Correct. And if the X9 had just one extra x8 slot, they would be equivalent. But the X9 has 2 extra slots. Any way you cut it, the X9 has an extra slot.

 

I like being on Haswell, but I'm not sure I like it well enough to give up the extra slot. Are there other options I should consider?

Link to comment

The lack of expandability options is one of the things that bothers me in the current generation of SM's server motherboards.

 

The X10SAT would be a nice option if SM didn't treat it as a desktop motherboard (no IPMI but it has vPro). There's a thread in the forums where IronicBadger reported some issues with it.

 

ASRock's E3C224-4L could be another, but ASRock's implementation of IPMI is considered buggy by some sources.

Link to comment

One other consideration: the X9 is a more mature product, since it's had more time for SuperMicro to resolve any issues and "tweak" the BIOS. Given IPMI, the extra slots, and the simple fact that Ivy Bridge-based processors are still VERY good CPUs, it's not a bad choice at all.

 

Latest and greatest isn't always the best (e.g. I know a LOT of folks who prefer Windows 7 over Windows 8 :)).

Link to comment

IPMI is not as important to me as the extra slot and lower cost. In my media room in my basement, I have a KVM switch hooked to my unRAID servers and can easily do whatever I need to.

 

Does that open other options?

Link to comment

I thought the X10 had two network adapters that are compatible with ESXi 5.x -- you have to patch the install ISO, but both work.

 

 

http://www.servethehome.com/install-vmware-esxi-5x-intel-i210-intel-i350-ethernet-adapters/

https://my.vmware.com/web/vmware/details?productId=268&downloadGroup=DT-ESXi5X-INTEL-igb-4017

 

Can you clarify -- are you implying that the X10 is compatible (with this procedure) but the X9 is not? Sorry, I understand your post but am trying to understand it in the context of this thread.

Link to comment

I thought the X10 had two network adapters that are compatible with ESXi 5.x -- you have to patch the install ISO, but both work.

 

 

http://www.servethehome.com/install-vmware-esxi-5x-intel-i210-intel-i350-ethernet-adapters/

https://my.vmware.com/web/vmware/details?productId=268&downloadGroup=DT-ESXi5X-INTEL-igb-4017

 

Can you clarify -- are you implying that the X10 is compatible (with this procedure) but the X9 is not? Sorry, I understand your post but am trying to understand it in the context of this thread.

The X9 works just fine with ESX(i).  I have built a number of servers for customers using Intel Xeon processors and the X9SCM-iiF board.

 

I usually try to steer those building ESXi servers towards using something like the Intel RAID expander card. Then you can hook up more backplanes and only use 1 PCIe slot. It allows for more expansion later on if you want to add a TV tuner card in passthrough to a Windows/Linux VM.

Link to comment

The 82574L NIC on the X9SCM-iiF works just fine with ESXi. 

 

It was the 82579LM on the X9SCM-F that you had to install a driver for. I haven't had a single hiccup on either of my ESX hosts with the X9SCM-F with the driver installed.

Link to comment

... not as important to me as the extra slot and lower cost.

 

Does that open other options?

 

If you don't care about IPMI, you could use an ASRock Haswell board that has 3 PCIe x16 slots:

http://www.newegg.com/Product/Product.aspx?Item=N82E16813157409

 

... but I'd definitely prefer SuperMicro, so I'd probably just use the X9 board if you want the extra expansion slots (an x8 slot is as good as an x16 when you're looking to add x4/x8 adapters -- so the X9 board still has more expansion capability).

 

 

Link to comment

The lack of expandability options is one of the things that bothers me in the current generation of SM's server motherboards.

 

The X10SAT would be a nice option if SM didn't treat it as a desktop motherboard (no IPMI but it has vPro). There's a thread in the forums where IronicBadger reported some issues with it.

 

ASRock's E3C224-4L could be another, but ASRock's implementation of IPMI is considered buggy by some sources.

 

I'm liking that E3C224-4L board.

 

Haswell

2 x PCIe x8
1 x PCIe x4
1 x PCIe x1

8 onboard SATA ports

4 x LAN

IPMI (I'll live with a little buggy, as this is not a critical feature to me)

 

Link to comment

Last decision: what are the advantages of ECC over regular memory?

 

Surely you know this. It's like the advantage of using a parity drive (or not) in unRAID: fault tolerance.

ECC RAM will correct single-bit errors (the vast majority of RAM errors), and detect multiple bit errors (otherwise you don't know about errors unless an error happens to cause a crash or until you run a memory test).
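
To make the mechanism concrete: the module stores extra check bits for each data word, and the memory controller recomputes them on every read; a mismatch pinpoints (and fixes) a single flipped bit. Here's a minimal Python sketch of a Hamming-style code over one byte -- purely illustrative, not the exact SECDED code real DIMMs use (those protect 64-bit words with 8 check bits in hardware):

```python
# Illustrative Hamming(12,8) single-error-correcting code for one byte.
# Real ECC DIMMs protect 64-bit words with extra check bits, but the idea --
# parity computed over overlapping groups of bit positions -- is the same.

def encode(byte):
    """Return a 12-bit codeword as a list (positions 1..12, check bits at 1, 2, 4, 8)."""
    data_positions = [3, 5, 6, 7, 9, 10, 11, 12]
    word = [0] * 13                      # index 0 unused to keep positions 1-based
    for i, pos in enumerate(data_positions):
        word[pos] = (byte >> i) & 1
    for p in (1, 2, 4, 8):               # check bit p covers every position with bit p set
        word[p] = sum(word[i] for i in range(1, 13) if i & p) % 2
    return word

def correct(word):
    """Locate and flip a single-bit error; return (word, error_position or 0)."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(word[i] for i in range(1, 13) if i & p) % 2:
            syndrome += p
    if syndrome:
        word[syndrome] ^= 1              # the syndrome IS the position of the flipped bit
    return word, syndrome

codeword = encode(0b10110010)
codeword[6] ^= 1                         # simulate a random single-bit flip
fixed, pos = correct(codeword)
print("corrected a flipped bit at position", pos)   # -> position 6
```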

 

Given a choice, ALWAYS use ECC modules !!

 

Link to comment

Last decision: what are the advantages of ECC over regular memory?

 

Surely you know this. It's like the advantage of using a parity drive (or not) in unRAID: fault tolerance.

ECC RAM will correct single-bit errors (the vast majority of RAM errors), and detect multiple bit errors (otherwise you don't know about errors unless an error happens to cause a crash or until you run a memory test).

 

Given a choice, ALWAYS use ECC modules !!

 

Yes - I do know that ECC memory corrects memory errors. But I have had very good luck with non-ECC memory after testing the heck out of it first. So putting that aside, do Xen, ESX(i), etc. actually require ECC memory? I believe that ZFS has such a requirement - are there others?

Link to comment

While various software developers may "require" ECC memory, pretty much all of them run without ECC memory, including Xen, ESX, ESXi, and ZFS. I'll stay with the group recommending you spend more money and get ECC, for two reasons.

 

First, because it's your data! I am contacted by people each week to recover data. I tell them I may not be able to recover it. They say OK. I tell them it will take a very long time (I took a year to recover some baby pictures). They say OK. I tell them it will cost 3x what I really think it should cost (yeah, the $1200 scare). They say OK. Data is valuable. Just like RAID, ECC memory is cheap.

 

Second, because ECC can give you a warning that memory is failing (more than just a panic). You test the heck out of memory, but like a well pre-cleared drive, it will fail sometime (yes, typically long after that tested drive fails). But memory fails. When ECC memory fails, many times the correction can be made and things continue to run. Then, at a more convenient time, you can replace the memory stick. If you are going to RAID protect the data, protect it in memory too.
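
If you want to actually see those corrections happening, a Linux box exposes the counts through the kernel's EDAC driver in sysfs. A minimal sketch, assuming an EDAC module for your chipset is loaded and the usual /sys/devices/system/edac paths are present (this may vary by distro and kernel):

```python
# Minimal sketch: read corrected / uncorrected ECC error counters from the
# Linux EDAC sysfs interface. A rising ce_count is the early warning that a
# DIMM is starting to go, long before anything crashes.
import glob
import os

def read_counter(mc_dir, name):
    with open(os.path.join(mc_dir, name)) as f:
        return int(f.read().strip())

for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc[0-9]*")):
    ce = read_counter(mc, "ce_count")    # corrected (single-bit) errors since boot
    ue = read_counter(mc, "ue_count")    # uncorrected (multi-bit) errors since boot
    print(f"{os.path.basename(mc)}: corrected={ce}  uncorrected={ue}")
    if ce:
        print("  -> ECC has been quietly fixing errors; plan to replace this DIMM")
```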

 

[late addition] I wish I could find it, but there was an article from a game developer who mentioned that they included a calculation in the main game loop to test the computer. They found that many machines, while successfully running the game, would intermittently, and without detection, fail the calculation. They did it because customer support was driving the developers crazy with problems they could not reproduce.
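
The idea is easy to sketch even without the article: do a computation in the loop whose answer is known ahead of time and count how often the machine silently gets it wrong. A purely hypothetical Python sketch of the technique (the buffer size and loop are stand-ins, not anything from the article):

```python
# Hypothetical sketch of a "known-answer check" inside a main loop: recompute a
# checksum over a fixed buffer every iteration and flag any run where the result
# differs from the value captured at startup (bad RAM, unstable overclock, etc.).
import hashlib

REFERENCE_DATA = bytes(range(256)) * 4096                 # ~1 MB of fixed data
REFERENCE_DIGEST = hashlib.sha256(REFERENCE_DATA).hexdigest()

def passes_sanity_check():
    return hashlib.sha256(REFERENCE_DATA).hexdigest() == REFERENCE_DIGEST

silent_failures = 0
for frame in range(10_000):                               # stand-in for the game loop
    # ... render / simulate ...
    if not passes_sanity_check():
        silent_failures += 1
print("silent computation failures:", silent_failures)
```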

Link to comment

Definitely agree -- just to say it one more time:  Buy ECC if your motherboard supports it.    Better yet, buy buffered ECC RAM -- but that tends to move you to a much more expensive server board and higher-end CPU, so the cost is a LOT more than simply using unbuffered ECC modules on a much lower-cost system.

 

But to not buy ECC because you've "... had very good luck with non-ECC memory after testing the heck out of it ..."  is not really any different than not bothering with a parity drive because you "test the heck" out of your drives before using them.    A very high percentage of unexplained failures is likely due to intermittent memory errors that wouldn't have happened with ECC modules.

 

 

Link to comment

I am considering it.

 

Considering it ??!!    Personally I can't imagine having a board that supports ECC and not using ECC modules.    In fact, I will only buy boards that support ECC these days -- and consider it a major compromise to use unbuffered modules ... but the price differential to go with a board that supports FBDIMMs, coupled with the higher cost E5 series Xeons, makes that a fairly easy compromise to accept.

 

Link to comment

I am considering it.

 

Considering it ??!!    Personally I can't imagine having a board that supports ECC and not using ECC modules.    In fact, I will only buy boards that support ECC these days -- and consider it a major compromise to use unbuffered modules ... but the price differential to go with a board that supports FBDIMMs, coupled with the higher cost E5 series Xeons, makes that a fairly easy compromise to accept.

 

Not arguing that it provides a degree of extra protection - certainly not a bad thing. If I were running a data center it would be worth it, no doubt. But for the home user? Read on for better ways to spend those extra dollars.

 

If you are building a new or upgrading an existing server, realize that regardless of component choices, in the first few days and weeks (even months) you are subject to all sorts of break-in failures: bad drive cabling, reallocated / pending sectors, failing motherboards, random lockups, failing drives, screeching fans, smoking boards (yes, like on fire - I had this happen twice), bad PSUs, and failing memory. I've had 2-3 memory chips / DIMMs be bad when running a memtest. But my experience has been that memory issues are the easiest to detect, easiest to fix, and least likely to emerge over time.

 

You can compare the algorithm used in ECC memory with the technology used by unRAID, but the comparison ends there. Bad memory has little chance of doing your array serious damage. If memory is bad it can corrupt the data being copied to the server, but it cannot corrupt the data already on your array. And parity checks would often identify memory errors (you'd get a random parity error here and there) while doing no harm (unless a drive fails simultaneously). There are tools like TeraCopy that can verify copies to the array (a good idea even if you are using ECC memory, because we've seen issues with some LAN chipsets and cabling that can also corrupt copies). And memtest can run the memory through quite a grueling test, cycling every conceivable bit pattern to try to coax a bit into flipping. (See tips below for running memtest.)
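
For what it's worth, the verify step in tools like TeraCopy boils down to "hash the source, hash the destination, compare." If you'd rather script that yourself, here's a minimal sketch (the paths are placeholders -- point them at your own disk and share):

```python
# Minimal post-copy verify: hash the source and the copy on the array, then compare.
# This catches corruption introduced in transit (bad RAM, flaky NIC, bad cable);
# it does nothing for data that was already on the array.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

src = "/mnt/local/movie.mkv"             # placeholder source path
dst = "/mnt/user/Movies/movie.mkv"       # placeholder destination on the array

if sha256_of(src) == sha256_of(dst):
    print("copy verified")
else:
    print("MISMATCH -- recopy, then start suspecting RAM, NIC, or cabling")
```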

 

For $40-$80 (the cost of ECC over conventional for 16G and 32G of memory), what else could you consider buying that would better protect your array?

 

1 - Drive cages (4-in-3s and 5-in-3s, or cases with hot-swap bays). Nothing is a smarter investment than these: they allow you to easily power down your server and exchange a disk without opening up the case and risking knocking a cable loose while the array is in a sensitive state. $45 (4 disks) / $100 (5 disks). [Remember that hot-swap bays DO NOT MEAN you can remove running drives from them with unRAID. Trust me on this - do not do it.]

 

2 - Locking SATA cables. Although not all slots / controllers support them, locking cables can help you avoid dreaded red-balls better than anything else ($25 for 10 drives).

 

3 - Extra fans. Cooling the drives is definitely worthwhile. Although they say drives can get too cool, I routinely run mine down into the low teens (C) when spun down, and have some of the healthiest drives in the forums. ($10+ / fan)

 

4 - UPS - protect your array from lost power and voltage spikes. (~$100)

 

5 - Buy your computer from Limetech, Greenleaf, or some other professional unRAID system builder. These guys put these servers together for clients and they know all of the tricks. And they burn them in. I am not sure what premium they charge, but your risk of early failures goes way, way down. If you want to reduce your risk of data loss, DO THIS!

 

6 - Buy Hitachi. They may be more expensive (or not, depending on the deal du jour), but Hitachi / HGST drives are the most reliable drives here.

 

7 - Add an SSD drive to store your VMs. It may not yield extra protection - but boy this will speed up those VMs like you can't imagine!

 

8 - A faster CPU or more memory. It won't save you from data loss, and the impact is much less dramatic than the SSD, but the extra dollars translate into better performance.

 

I'd recommend any of these as better ways to spend your money than the extra cost of the ECC memory. :P;)

 

BTW, people tend to recommend a 24-hour memory test, but remember that the length of the test by itself is not that great a criterion. You'd exercise 2G of memory much more thoroughly in 24 hours than 16G. I like to run 4G for 24 hours, so running 16G through a similar test would take 4 days, and 32G would take 8 days. And I'd run TeraCopy verify for a few weeks to be sure that the LAN and computer were able to accurately copy files to my array. (There have been several cases of copy failures to arrays, many of which got chalked up to incompatible motherboards.)
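
The scaling here is just linear: for the same number of passes over all of RAM, test time grows with the amount installed. A quick back-of-the-envelope sketch using the "4G for 24 hours" rule of thumb above (a preference, not a measured figure):

```python
# Back-of-the-envelope: equal per-gigabyte memtest coverage scales linearly
# with the amount of RAM installed.
BASELINE_GB, BASELINE_HOURS = 4, 24      # the "4GB for 24 hours" rule of thumb

for installed_gb in (4, 8, 16, 32):
    hours = BASELINE_HOURS * installed_gb / BASELINE_GB
    print(f"{installed_gb:>2} GB -> {hours:.0f} hours (~{hours / 24:.0f} days)")
# 16 GB -> 96 hours (~4 days); 32 GB -> 192 hours (~8 days)
```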

Link to comment

I suppose it depends on whether or not $40 is a significant amount of money to you.  If you're at the point where $40 makes a difference in your ability to build the server with ECC, I'd suggest you can't afford to build it in the first place.

 

Further, your first suggestion doesn't improve reliability -- it reduces it !!  ANY electrical connection adds to the risk of failure.  A hot-swap cage may make it easier to swap drives; but it adds another connection to the mix => you still have a cable connecting to a SATA connection (on the back of the cage), but you also have the additional connection to the drive.  Electrically it's actually more reliable to NOT use the cage (although I agree that skipping the cage adds the risk of "bumping" other cables if/when you need to replace a drive).

 

I've used lots of hot-swap cages over the years -- I'm happy to take that small added electrical risk; but certainly realize that it IS an added point of failure.

 

Most of your other suggestions should ALWAYS be done, regardless of what kind of memory you buy.  Locking SATA cables;  good cooling;  and a UPS are all MANDATORY items if you care about your data; and buying high quality drives is certainly a good idea.    And of course if you're using VMs, an SSD is excellent for hosting them -- and you should certainly buy a CPU with enough "horsepower" to do whatever tasks you plan to ask it to do.

 

As for buying your computer pre-built => that's entirely a matter of choice -- I agree if you aren't experienced and/or have ANY reservations about putting one together it's a good idea to let a professional do it for you.

 

 

One other note:  You indicated "... For $40-$80 (cost of ECC over conventional for 16G and 32G memory) "  ==>  the latter number implies you're considering installing 32GB of memory (e.g. 4 x 8GB modules).  Installing 4 modules of unbuffered RAM significantly increases the likelihood of memory errors -- the bus loading with that many loads results in very degraded signaling waveforms ... making errors far more likely.  I NEVER install 4 modules on an unbuffered board unless they're single-sided modules (in which case 4 modules have the same load as 2 double-sided modules -- but 8GB modules are going to be double-sided).

 

Four modules will work ... but that's an even stronger reason to use ECC modules !!

 

 

Link to comment
