Xeon mobo/cpu recommendation wanted



Using this post to see if anyone has recommendations and to "think out loud" while I work on this.

Currently running an AMD Phenom II X4 processor which chokes and dies (CPU stalls) when running dual parity checks, likely due to lack of AVX instructions. This locks up the UI/terminal when it happens. I've managed to reduce the occurrences by lowering md_stripes or something, but parity checks are very slow and will never be optimal with this setup.

I figure this is as good a reason as any to upgrade to some real hardware.

I run very few dockers and have no real plans to run VMs on the server (and if I do it's very unlikely they will need video card passthrough or anything as I have a dedicated gaming PC), but there's no kill like overkill so here is what I'm thinking:


I'm not sure I have any real need for multi processor support.

ECC RAM. I'd prefer a configuration that allows me to start with 32GB and move to 64GB (or more) later.

 

IPMI. Being able to manage the computer remotely would be great.

 

Built-in graphics or the ability to run headless. Right now I have a...Diamond Stealth video card from the 90s crammed into my Unraid server because the mobo won't boot without a graphics card. I have no need for a monitor on the system if there is IPMI support though.

 

No BIOS update needed to boot the recommended processor. I have no spare processors sitting around, so either the motherboard must ship with a working BIOS or be one of those magic boards that can somehow update without a processor.

 

SATA ports don't matter. Running an M1015 + RES2SV240. The only change I'd make in the future is to move to multiple M1015s (or whatever is the new hotness) rather than using the expander. Currently running 12+2 drives in a Norco 4224.

M.2 and internal USB would be nice.

Intel gigabit ethernet LAN port.

I remember back in the day some super micro boards (X9SCM?) had issues running multiple M1015s and wouldn't post. Is that still a thing?

Would the current version of that Supermicro board be fine? The X11SCM-F with probably a Xeon E-2174G or E-2176G? Or is there another direction I should be considering?

This isn't really anything fancy or powerful, but trying to work out what I need is tough because there are so many options in server hardware out there that I'm a bit overwhelmed, to be honest. -_-

EDIT: It appears the X11SCM-F only has one PCIe slot, so that's probably not the way to go. Hmmm.

Edited by SnickySnacks

Supermicro X11SCA-F? It’s full ATX though.

 

I was going to go with that exact Xeon CPU you listed, but am now opting for the Supermicro X10SLM-F, keeping my Xeon E3-1231 v3 and adding a Quadro P2000 with an LSI 9201-8i. Cheaper, and I get what I need, which is video transcoding for Plex.

 

Also, out of curiosity, why would you run multiple SATA cards vs. using the expander? Just curious for my own plans of using SATA cards.

 


It's been a while but my thinking goes like this:

Last I checked, there's a very real performance hit when running many drives off an expander.

Plus I feel like M1015s are probably easier/cheaper to get than RES2SV240s.
I'm not planning to replace it right now, since the expander is already paid for, but I'd like the option to run 3x M1015s for full bandwidth should my expander ever fail (running 2x M1015s with the rest of the drives off the motherboard is also an option).

The motherboard/CPU I've been using (just a low-end consumer board and Phenom II CPU I picked up from Micro Center) was what I had in my Unraid test build when I was seeing if it would work for what I wanted. I migrated it into the 4224 so I wouldn't have to spend money on new hardware. For years I've been meaning to upgrade to something a bit better, and the problems I've been having with parity checks mean now is probably a good time. Even though I have no plans to change the M1015 right now, it's always in the back of my mind that it, or the expander, could fail in the future.

The Norco should fit a full ATX board, so no worries there.

I'm a bit curious if one can actually run all the PCIe slots on a X11SCA-F with an Intel Xeon E. I thought those topped out at 16 lanes, but the board claims to support 8/8/4/1. Guessing the 4/1 must run through the PCH or something.

8 minutes ago, SnickySnacks said:

Last I checked there's a very real performance hit when running many drives off an expander:

There can be, depending on the number of disks connected, but there isn't a performance hit just because you're using one. You can easily calculate the max available bandwidth; it just depends on whether it's SAS/SAS2/SAS3 and linked with a single or dual link to the HBA.
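To put rough numbers on that, here's a sketch assuming SAS2 with a single x4 link from the HBA to the expander (the M1015 + RES2SV240 case above); the 600 MB/s-per-lane figure is 6 Gb/s after 8b/10b encoding, and the 11-drive count is an assumption taken from later in the thread:

```python
# Approximate per-drive bandwidth when N drives share one expander uplink.
# Assumptions: SAS2 (6 Gb/s per lane, ~600 MB/s usable after 8b/10b
# encoding) and a single x4 wide port between HBA and expander.

def per_drive_mb_s(drives, lanes=4, mb_per_lane=600):
    """MB/s available to each drive if all stream simultaneously."""
    uplink_total = lanes * mb_per_lane   # single x4 SAS2 link: ~2400 MB/s
    return uplink_total / drives

# 11 drives behind the expander:
print(round(per_drive_mb_s(11)))   # ~218 MB/s each
```

So even with 11 spinners on a single link, the uplink shouldn't be the bottleneck for a parity check.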

On 7/29/2019 at 1:23 PM, johnnie.black said:

There can be, depending on the number of disks connected, but there isn't a performance hit just because you're using one. You can easily calculate the max available bandwidth; it just depends on whether it's SAS/SAS2/SAS3 and linked with a single or dual link to the HBA.


Yes, that is what I said: when running "many drives". (I suppose that could be misinterpreted, but I meant when there is a numerically large number of drives.)

I was looking at your testing thread earlier.


I have 14 drives right now and it takes forever to get through (60 MB/s last I checked), with 3 on the M1015 and 11 on the expander.

I had assumed it was due to my CPU, since that was getting pegged at 100% usage and stalling but...

Now I'm wondering if I did something silly like plugging my M1015 into an x4 slot or something, because I really should be getting better speeds than this.

More things to think about...

  • 1 month later...

I ended up cheaping out a bit.
Here's what I went with:

CPU: Intel i3-9100

Motherboard: Gigabyte C246-WU4

Ram: Crucial CT16G4WFD8266 16GBx4

 

Turns out Xeons are really expensive. Still cost around $800 for this setup, fully half of which was for the RAM.
I decided the PCIe lanes weren't as big of a deal as I was making them out to be. I settled for two x8 slots plus 8 native SATA ports, which will be enough to cover all 24 drives with two M1015s (or equivalent) and the onboard ports.
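A quick sketch of that port math (the 8-drives-per-M1015 figure is the card's standard two SFF-8087 connectors at 4 drives each; the onboard SATA count is from the setup above):

```python
# Sanity check: covering all 24 bays of the Norco 4224 without an expander.
drives_per_m1015 = 8   # 2x SFF-8087 connectors, 4 drives each
onboard_sata = 8       # native SATA ports on the motherboard
hba_count = 2          # one HBA per x8 slot

total_ports = hba_count * drives_per_m1015 + onboard_sata
print(total_ports)     # 24 -- one port per bay
```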

I'm not sure going from 16GB of RAM to 64GB will really do anything for me, but in the back of my mind I'm hoping it will help with Folder Caching or CrashPlan or something.


Pros:

Parity check single core CPU usage closer to 30% vs the 100% that was occurring before

Parity check speeds up to 115MB/sec vs 70MB/sec (so far, may get faster after the 2TB disks)

I was able to set my tunables back to the default, rather than the (very low) values I was using to prevent CPU stalls.

Cons:

The motherboard has DisplayPort connectors on it. I'm old and grumpy and don't own a single DisplayPort monitor or adapter, and didn't even realize there was another option besides VGA/DVI/HDMI. Had to pull out an ancient, broken video card as I really have no spares. :(
Unraid won't boot unless I boot into GUI mode. I don't really care that much, so I'll leave it until I get the display situation worked out. Worst case, if I can't fix it, I'll just set GUI mode as the default so I don't have to hook a keyboard/monitor up to reboot it. Could be related to the video card or something.

Edit: Getting the onboard video working instead of the old, broken card I was using seems to let it boot up properly now. Yay.

