New ESXi unRAID build - Norco 4224


tinglis1


I am embarking on a mission to build a new ESXi unRAID server. I currently have a basic build which is at its max for space and is just not giving me the flexibility/functionality I would like going forward.

The only hardware transferred from my existing server will be the HDDs.

 

My new server build has been heavily influenced by Johnm's ATLAS build, and I have settled on the following hardware list. The choices are based on budget, recommended hardware and what I can source here in Australia:

 

Case
• Norco 4224 + 120mm backplane

Motherboard
• Supermicro X9SCL+-F

CPU
• Intel Xeon E3-1230

Memory
• 2 x (Kingston 2 x 4GB KVR1333D3E9SK2/8G) = 16GB total

PSU
• Corsair AX750

Controller
• 3 x IBM M1015

ESXi Datastore – option 1
• OCZ Vertex 3 240GB
• WD 1TB Black

ESXi Datastore – option 2
• 4 x 7200rpm drives

Cooling
• 2 x Arctic Cooling F8 TC
• 3 x Arctic Cooling F12 TC
• Stock Intel CPU cooler

 

My final configuration will be something like the following:

• 24x7 running server

• 4 drive bays allocated for datastore drives, connected to the motherboard

• 20 drive bays connected to the M1015s using hardware pass-through for unRAID

• VMs running under ESXi:

1. unRAID

2. Ubuntu server – Usenet + PPTP

3. Windows 7 x64 – app server

4. Windows 7 x64 – sandbox

5. Windows 7 x64 – sandbox2

 

 

I would appreciate some feedback on what others think of the hardware selection. This is not going to be a cheap build (by home server standards), so I would like to get it right the first time.

Some areas I would particularly like comment on (but not limited to) are the following:

• ESXi datastore options: SSD + HDD vs. 4 x HDD (1 per VM)

• 16GB memory – too much? Too little?

• General overall hardware selection

• Cooling fans – do you think this is enough? What about the Arctic Cooling temperature-controlled (TC) fans?

 

I look forward to your feedback.

 

Tim

 

 

Link to comment

Some areas I would particularly like comment on (but not limited to) are the following:

• ESXi datastore options: SSD + HDD vs. 4 x HDD (1 per VM)

• 16GB memory – too much? Too little?

• General overall hardware selection

• Cooling fans – do you think this is enough? What about the Arctic Cooling temperature-controlled (TC) fans?

 

1 - This really depends on what you're using the OS installs for. If you're not using them for storing bulk data (presuming you're using unRAID for that instead), then I'm not sure you'd need too much space HDD-wise. Performance-wise it might be better to have one physical HDD per VM, but you'd have to be really rattling the disks. I'd stick with an SSD and consider whether you can go smaller. I have a 32GB SSD in use for my datastore. Admittedly that might not go too far if you were going to cram Windows VMs on it, but for Unix installations it's plenty.

 

2 - Go for as much as you can afford / justify / the board supports. You'll never wish you had less, and it's (relatively) cheap just now. I shoved 16GB in mine as that's the max my board will take. I would have put more in if I could, but 16GB seems a reasonable amount. Again, it really depends on the number, type and usage of the VMs. I'll likely allocate 8-12GB of mine just to unRAID for data caching.

 

3 - Looks OK to me, though I don't know too much about the board or that specific CPU. I'm guessing that, as it's a Xeon, you have VT-d support in the board and CPU. 3 x M1015 should be fine in theory; you might want to check what speeds the PCIe slots will run at once you have three cards in them and whether that will affect your throughput at all. Very unlikely, and even then probably only during a parity check when the bulk of your disks are being hit at once.

 

Can't comment on cooling; I just left my Norco with the stock fans but undervolted them for a modicum of peace and quiet!

Link to comment

Thank you for your feedback, boof.

 

The VMs will be running various tasks, and you are right about unRAID being my bulk data store.

I have had a long look at my VM requirements and think that I can go to an OCZ Vertex 3 120GB SSD, using my 1TB WD Black as a shared temporary datastore.

I have tweaked my planned VMs, and the following is my planned starting point now:

VM0: unRAID with 3 x M1015 hardware passthrough

VM1: Ubuntu server with Usenet + PPTP. It will need access to a reasonable cache area for temporary files and any files that are not automatically moved.

VM2: Ubuntu server for local network services like Squeezebox, Sick Beard and CouchPotato.

VM3: Win7 app server. No large data storage required, mainly apps that I haven't found how to get onto Ubuntu yet (I'm not a Linux expert).

VM4: Win7 sandbox. An area to test out any new server apps before transferring them to VM3 for stable use.

 

The use of the SSD also gives me more flexibility in the number and availability of datastores for VMs going forward. If I really need more SSD space later I can always buy another, as I have four drive bays allocated to the ESXi datastore.

 

 

32GB of memory seems too difficult to obtain at the moment with the lack of compatible 8GB sticks. I think 16GB should be fine for my current VM needs; I am not going to be running multiple high-memory-usage items simultaneously.

 

I am aware of a few other forum users using the Xeon 1230/1240 for ESXi purposes, so I am pretty happy that the 1230 is suitable.

I am unable to purchase the X9SCM-F here in Australia, but I could get the X9SCL+-F. As far as I can tell, the main differences that would affect me are the different on-board SATA speeds and one less PCIe slot. I am aware that one of the M1015s will be on a slower PCIe 4x slot, but the M1015 in that slot will only have 4 drives attached anyway.

 

Is anybody aware of any other noticeable differences between the X9SCL+-F and the X9SCM-F?

 

I am really curious to see how the Arctic Cooling temperature-controlled fans go. I have used AC fans in the past for aftermarket GPU cooling (ATI 4870X2) and was very impressed by the price and noise level (or lack of it). I was thinking of going with PWM fans, but could not come up with a simple way of controlling them from drive temps under ESXi with hardware passthrough for unRAID.
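The closest I got was sketching something like the following (completely untested: it assumes smartctl is available inside the unRAID guest, the device list is only an example, and the actual fan-control call is board-specific so it is left as a labelled placeholder):

```python
#!/usr/bin/env python
# Untested sketch: poll drive temperatures from the unRAID guest with smartctl
# and map the hottest drive to a fan duty cycle. set_fan_duty() is a
# placeholder - the real call (for example an IPMI command to the board's BMC)
# depends on the motherboard and is not filled in here.
import subprocess
import time

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # example device list only

def drive_temp(dev):
    """Return SMART attribute 194 (Temperature_Celsius) for a drive, or None."""
    proc = subprocess.Popen(["smartctl", "-A", dev], stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    for line in out.decode("utf-8", "replace").splitlines():
        fields = line.split()
        if len(fields) > 9 and fields[1] == "Temperature_Celsius":
            return int(fields[9])
    return None

def set_fan_duty(percent):
    # Placeholder: replace with the fan-control command your board actually uses.
    print("would set fan duty to %d%%" % percent)

while True:
    temps = [t for t in (drive_temp(d) for d in DRIVES) if t is not None]
    hottest = max(temps) if temps else 30
    # Crude mapping: 30C or below -> 30% duty, 45C or above -> 100% duty.
    duty = min(100, max(30, 30 + (hottest - 30) * 70 // 15))
    set_fan_duty(duty)
    time.sleep(60)
```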

 

Has anybody tried the Arctic Cooling TC fans, or PWM fans?

Link to comment

As far as RAM goes, 16GB is the best you can do with that board for now.

Kingston does list the 8GB sticks, but they are hard to get since they are either new or in pre-production: http://www.ec.kingston.com/ecom/configurator_new/partsinfo.asp?root=uk&LinkBack=&ktcpartno=KVR1333D3E9SK2/16G

 

From your specs, it sounds more like you will need disk I/O over RAM. If you really do want 32GB, look at an X8SIL.

 

I pound my guests and the E3 1240 is almost always under 25% utilisation. I only see it move when encoding video or during large rar/par operations.

 

3 x M1015s will work fine: put 8 drives on each of the cards in the 8x slots and 4 drives on the card in the 4x slot. You won't come close to saturating the cards, even with parity checks.

Just be aware there are issues with LSI cards and the latest betas (13-14).

 

The differences between the X9SCL+-F and the X9SCM-F are:

The SCM has an extra PCIe 2.0 4x slot, 2 of its SATA ports are SATA III, and it has 1x 82579LM and 1x 82574L GbE controllers.

The SCL+ has 2x 82574L GbE controllers (that's better; that's what the + is).

 

Both will work fine.

 

If you are going for the SCL+, you could go with a SATA II SSD and save a few bucks. There's no point in buying a bell or whistle you can't use.

 

As far as the fans go, I would be a little worried. You need lots of static pressure more than CFM; those fans need to pull air through a tight space while creating a positive-pressure area in the motherboard compartment.

A few of us have failed with some brands of fan. I would be interested in seeing your results. Look at CPU fans and radiator fans also.

 

So far the Noctuas have not let me down and have stayed fairly quiet.

 

 

 

 

Link to comment

32GB of memory seems too difficult to obtain at the moment with the lack of compatible 8GB sticks. I think 16GB should be fine for my current VM needs; I am not going to be running multiple high-memory-usage items simultaneously.

 

This was exactly the process I went through too when settling on 16 :)

 

I found that going beyond that meant jumping up another class of motherboard, or going dual-processor, to get more slots.

 

Sounds like you have everything figured out!

Link to comment

That's good to know about the CPU, and I'm happy with 16GB of memory for my needs.

 

It would be preferable to go with the X9SCM for the extra PCIe slot and SATA III, but I just can't seem to be able to buy it from an Australian retailer.

I can get it from overseas, and for a little cheaper too. The risk is that if it is faulty it will be difficult to get it replaced under warranty.

Is it worth the extra risk? Reader opinions, please.

 

 

I think the only things I need to finalise are the datastore drives and the cooling.

I'm leaning towards the 120GB SSD + 1TB WD Black solution at the moment, leaving 2 of the 4 bays I have allocated free. If I really want to expand I could go with an internal option like this, maybe not using Velcro though.

 

 

I may go with the Noctuas for the backplane, as I have Noctua fans in my HTPC which I have been pretty happy with.

The Noctuas seem to be very popular with others who have similar systems.

I am going to put all the fans on fan speed controllers.

 

 

Is anybody not using the stock cooler for a Xeon E3-1230 (or similar) CPU? And why not?

My experience with stock CPU coolers is mainly with AMD. The last Intel I installed was a Pentium II with the slot interface...

 

Link to comment

That's good to know about the CPU, and I'm happy with 16GB of memory for my needs.

It would be preferable to go with the X9SCM for the extra PCIe slot and SATA III, but I just can't seem to be able to buy it from an Australian retailer.

I can get it from overseas, and for a little cheaper too. The risk is that if it is faulty it will be difficult to get it replaced under warranty.

Is it worth the extra risk? Reader opinions, please.

Well, honestly, it is your call in the end.

SATA III makes a noticeable difference when using SSDs.

I am sure that with such a powerful ESXi build the extra PCIe slot might be handy: maybe a RAID card for the datastore, a USB3 card, or another NIC? But not at the cost of an arm and a leg.

 

 

 

I think the only things I need to finalise are the datastore drives and the cooling.

I'm leaning towards the 120GB SSD + 1TB WD Black solution at the moment, leaving 2 of the 4 bays I have allocated free. If I really want to expand I could go with an internal option like this, maybe not using Velcro though.

 

You will be surprised how fast that space goes on hard drives.

Keep in mind that if you are backing up live guests, some ESXi backup software takes a snapshot of the guest, so you need overhead on the datastore equal to the size of your largest guest.

I find that 3 x 30GB guests is all I can fit on a 120GB SSD (3 x 30GB = 90GB, which leaves only about 30GB of headroom for a snapshot of the largest guest).

 

You also mentioned a lot of Usenet downloading. I have a 500GB virtual drive on my 7200rpm datastore spinner just for this task (I am considering giving it its own RMD laptop spinner). I don't know that you want to write straight to the unRAID array.

I have a program that moves completed rar sets to my unRAID cache drive once a set is "complete". Unfortunately, that leaves me the burden of cleaning up the "trash" once in a while.
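The idea is simple enough to sketch if you wanted to roll your own. The following is only an illustrative, untested example (not the actual program I use) with made-up paths: it moves any completed job folder that has been idle for a while over to the cache share, deleting the leftover rar/par files first so the "trash" doesn't pile up.

```python
#!/usr/bin/env python
# Illustrative, untested sketch of a "completed download mover".
# Paths are examples only - adjust COMPLETE_DIR and CACHE_SHARE to suit.
import os
import shutil
import time

COMPLETE_DIR = "/datastore/usenet/complete"   # example path
CACHE_SHARE = "/mnt/cache/downloads"          # example path (unRAID cache share)
SETTLE_SECONDS = 15 * 60                      # only move jobs idle for 15 minutes
JUNK_EXTS = (".rar", ".r00", ".r01", ".par2", ".sfv", ".nzb")

def is_settled(path):
    """True if nothing inside the job folder has changed recently."""
    newest = 0
    for root, _, files in os.walk(path):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(root, name)))
    return time.time() - newest > SETTLE_SECONDS

for job in os.listdir(COMPLETE_DIR):
    src = os.path.join(COMPLETE_DIR, job)
    if not os.path.isdir(src) or not is_settled(src):
        continue
    # Drop archive leftovers first so only the extracted files hit the array.
    for root, _, files in os.walk(src):
        for name in files:
            if name.lower().endswith(JUNK_EXTS):
                os.remove(os.path.join(root, name))
    shutil.move(src, os.path.join(CACHE_SHARE, job))
```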

 

I may go with the Noctuas for the backplane, as I have Noctua fans in my HTPC which I have been pretty happy with.

The Noctuas seem to be very popular with others who have similar systems.

I am going to put all the fans on fan speed controllers.

 

I don't think you will even need the controller. Even on high, the Noctuas are pretty darn quiet; the noise is all air.

Test them with their own in-line speed-control wires first and check your temperature readings, then decide.

 

 

Is anybody not using the stock cooler for a Xeon E3-1230 (or similar) CPU? And why not?

My experience with stock CPU coolers is mainly with AMD. The last Intel I installed was a Pentium II with the slot interface...

 

Intel stock coolers are pretty quiet these days. They do suck for overclocking, but for this build the stock cooler should be just fine. If you are running HandBrake 24x7 at 100% CPU in turbo, then you might consider an aftermarket cooler.

Just watch out with tower coolers; they might not fit.

 

I just can't wait to see the Ivy Bridge models come next year....

Link to comment

I would appreciate some feedback on what others think of the hardware selection. This is not going to be a cheap build (by home server standards), so I would like to get it right the first time.

Some areas I would particularly like comment on (but not limited to) are the following:

• ESXi datastore options: SSD + HDD vs. 4 x HDD (1 per VM)

• 16GB memory – too much? Too little?

• General overall hardware selection

• Cooling fans – do you think this is enough? What about the Arctic Cooling temperature-controlled (TC) fans?

 

I have several VMs running (Plex, SABnzbd/Sick Beard/CouchPotato, MSSQL 2008, the Virtual Center appliance, unRAID) on one very old and very tired 400GB 7.2k WD RE2 drive. I don't push it hard enough to consider anything fancy when it comes to the VMFS stores. As long as you don't plan on doing a bunch of stuff on every VM at the same time, a single drive will go a long way. If you do want more performance out of your VMFS store, I would personally go bigger/faster (10k or 15k SAS, or SSD) instead of getting more drives to spread the load across.

 

With Win7 VMs, 16GB is a good place to start. Buy it so that it is easy to add two more 8GB sticks at a later date.

 

I would spend a little more and use 1 x M1015 SAS card and then a SAS2 expander. I think the VMDirectPath limit is 4 passthrough devices per VM; your 3 M1015 cards would be getting close to that limit, plus they tie up most or all of the onboard PCIe slots. You are presumably buying a large ESXi server for future use, so don't limit yourself from the get-go in ways that require re-working everything in the future.
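(As a quick sanity check on that limit, you can count the passthrough entries declared in a guest's .vmx file. This is just an untested sketch, and the datastore path is only an example:)

```python
#!/usr/bin/env python
# Untested sketch: count VMDirectPath devices declared in a guest's .vmx file
# and compare against the per-VM passthrough limit discussed above.
# The path below is only an example; point it at your own guest's .vmx.
import re

VMX_PATH = "/vmfs/volumes/datastore1/unRAID/unRAID.vmx"   # example path
LIMIT = 4   # limit mentioned in the post; verify for your ESXi version

devices = set()
with open(VMX_PATH) as fh:
    for line in fh:
        match = re.match(r'pciPassthru(\d+)\.present\s*=\s*"TRUE"', line, re.I)
        if match:
            devices.add(match.group(1))

print("%d passthrough device(s) configured (limit around %d)" % (len(devices), LIMIT))
if len(devices) > LIMIT:
    print("Warning: over the limit - the VM may fail to power on.")
```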

 

Edit:

My SABnzbd machine has 2 separate drives to use as the complete/incomplete stores. This keeps the heavy I/O off the main OS drive that is shared by every VM guest.

Link to comment

I think the only things I need to finalise are the datastore drives and the cooling.

I'm leaning towards the 120GB SSD + 1TB WD Black solution at the moment, leaving 2 of the 4 bays I have allocated free. If I really want to expand I could go with an internal option like this, maybe not using Velcro though.

 

I have used these two products with success.

http://www.scythe-usa.com/product/acc/064/slotrafter_detail.html

http://www.sybausa.com/productInfo.php?iid=938

 

Link to comment

It would be preferable to go with the X9SCM for the extra PCIe slot and SATA III, but I just can't seem to be able to buy it from an Australian retailer.

I can get it from overseas, and for a little cheaper too. The risk is that if it is faulty it will be difficult to get it replaced under warranty.

Is it worth the extra risk? Reader opinions, please.

Well, honestly, it is your call in the end.

SATA III makes a noticeable difference when using SSDs.

I am sure that with such a powerful ESXi build the extra PCIe slot might be handy: maybe a RAID card for the datastore, a USB3 card, or another NIC? But not at the cost of an arm and a leg.

 

I think if I were building a new ESXi machine, I would do what I could to have SATA III and an SSD for the most important VMs.

 

I have a Windows XP workstation on Workstation 6 under Linux, sitting on a RAID 1 pair of 10,000rpm SAS drives, and it can get sluggish. I did move it to an SSD at one point and it was so smooth, like butter.

 

I had to move it back to the hard drives because of the HBA bus errors, at least until I upgrade to CentOS 6.

Then I'll move it back to the SSD.

 

One thing about the SSD: you want to leave a good chunk of spare room on it.

I had smartd running on my machine, and every day sectors were being logged as offline uncorrectable just through normal use.

 

I've read that you can write 20GB of data a day for 5 years before they are no good, but I realistically only expect 2 years or so.
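(Taking that 20GB/day figure at face value, the arithmetic behind it is straightforward:)

```python
# Rough check of the endurance figure quoted above (20GB written per day).
gb_per_day = 20
years = 5
total_tb = gb_per_day * 365 * years / 1000.0
print("about %.1f TB of total writes over %d years" % (total_tb, years))  # ~36.5 TB
```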

 

Link to comment

Based on that feedback, I am pretty sure I am going to try to source an X9SCM-F. I will likely have to shop from overseas.

SATA III seems to make a big difference for SSDs in all the benchmarking I have read.

 

The PCI HDD adapters look pretty good; it will be a while before I have filled the front bays, I think, but I'll keep them in mind for the future.

 

I looked at the M1015 + expander option, but I could get the 3 x M1015s for less. If I need the slots later I can always change, and I think an M1015 would easily find use in another unRAID build.

How much heat do the M1015s give off during unRAID operations? Is the case cooling enough?

Link to comment

The PCI hard drive adapters are pretty nice looking. I just don't have the real estate for the 3-4 of them my needs would require.

 

I have looked at the Scythe Slot Rafter both to hold my internal drives and to add a fan for additional cooling of my HBAs. I just never bothered ordering one because it looks... well, cheap.

 

There is a nicer quick-swap one from an unheard-of company that I can't source in the US for the life of me.

 

As far as M1015 cooling goes, that part of the 4224 is sort of dead air, especially if you go to slower/lower-CFM fans in the fan wall. The back top is vented to let hot air out via convection. However, the M1015s do get a bit toasty there, especially with 3 of them. I don't think they are in danger of frying with stock cooling; however, I did do a ghetto fan mod in my 4224s to keep the air in that area circulating.

 

I am very tempted to slot out all my PCI brackets and mount another exhaust fan on the outside of the case back there. I probably won't, but as time goes by the modder in me wants to become a mad scientist with my Norco (better cooling, an additional HDD and a Blu-ray ROM mounted internally with external access, eSATA port headers in the front, sound-deadening foam inside the case).

 

 

EDIT: I should point out, in case you missed it, that the Norco-branded SAS cables are too short for the M1015s.

Link to comment

Thank you for reminding me about the cable lengths. I had forgotten about reading that and was going to buy the Norco cables.

What length do you think is best for this case and these cards?

I was thinking 1m, so as to be able to route them in a tidy manner.

 

I don't need any encouragement on modding my case. I can't remember the last case I didn't take to with the Dremel.

I was thinking about an external Blu-ray drive using eSATA. I'm curious to know how/where you were thinking about mounting an internal drive with external access.

I like the PCI panel fan idea too. I am thinking of fan/ducting ideas for the PCI cards and to boost the air removal from the case. I just have to remind myself I'm building a server and not one of my usual modded machines.

Link to comment

I can get it from overseas, and for a little cheaper too. The risk is that if it is faulty it will be difficult to get it replaced under warranty.

Is it worth the extra risk? Reader opinions, please.

 

I live in Australia and I purchased the X9SCM-F through eBay. The board I purchased came with a backplate preinstalled, which meant I couldn't use the stock heatsink/fan. I ended up purchasing a Coolermaster Vortex Plus, which is working perfectly. Apart from that I haven't had any issues so far (touch wood). My experience is that mainboards don't fail too often, so I wasn't worried about warranty support.

 

http://www.ebay.com.au/itm/NEW-Supermicro-X9SCM-F-E3-Sandy-Bridge-Platform-MB-/280659516152?pt=Motherboards&hash=item41589c5af8

 

 

Link to comment

Thank you, gtaylor.

 

I have purchased from the same US supplier, so I will have the same issue. I read your comments on the X9SCM thread and have a better understanding now.

 

 

So, an update on what I have purchased so far:

 

1 x Norco 4224 with 120mm fan plate

1 x Supermicro X9SCM-F E3

2 x Kingston 8GB kit (2 x 4GB) 1333MHz DDR3 ECC CL9 DIMM, Intel validated, KVR1333D3E9SK2/8GI – 16GB total

3 x M1015

1 x Norco 0.5M Mini-SAS(SFF-8087) to 4xSATA Discrete Cable Reverse Breakout

 

Still to purchase are the following:

1 x Corsair AX750 Gold Power Supply

1 x Intel Xeon E3-1230

1 x OCZ Vertex 3 120GB SSD

2 x Noctua NF-R8 80mm Fan

3 x Noctua NF-P12 120mm Fan

 

I can now add a CPU cooler to that list, probably the Coolermaster Vortex Plus that gtaylor recommended, as it is cheap and I know it will work.

Link to comment

I can now add a CPU cooler to that list, probably the Coolermaster Vortex Plus that gtaylor recommended, as it is cheap and I know it will work.

 

The Coolermaster Vortex Plus is working a treat. The fan isn't completely silent, but it's pretty good. You can always upgrade the cooler later if you need to.

 

I'm waiting on the M1015 cards I ordered, so I can't comment on the cable situation yet. I suspect that the 0.5m cables I purchased may be too short due to the location of the connectors on the M1015 compared to the AOC-SASLP-MV8 I had originally planned to purchase. I'll let you know how I go when the cards arrive.

 

Link to comment

I can now add a CPU cooler to that list, probably the Coolermaster Vortex Plus that gtaylor recommended, as it is cheap and I know it will work.

The Coolermaster Vortex Plus is working a treat. The fan isn't completely silent, but it's pretty good. You can always upgrade the cooler later if you need to.

I'm waiting on the M1015 cards I ordered, so I can't comment on the cable situation yet. I suspect that the 0.5m cables I purchased may be too short due to the location of the connectors on the M1015 compared to the AOC-SASLP-MV8 I had originally planned to purchase. I'll let you know how I go when the cards arrive.

 

 

Can I ask why you chose the M1015 card instead of the AOC-SASLP-MV8 card?

Link to comment

Can I ask why you chose the M1015 card instead of the AOC-SASLP-MV8 card?

 

It was close, really, as for unRAID use there wasn't much difference. In Australia it's cheaper to buy these things from the US, as there is very little available locally for a decent price.

 

I chose the M1015 for two reasons:

1. More future-proof: the PCIe x8 interface and, from what I could gather, better compatibility.

2. About the same price as the AOC-SASLP-MV8 if you get a server-pull M1015. And the thought of getting a >$300 card for a similar price makes me feel like I got a good deal.

 

And I can't forget that Johnm recommended it in his ATLAS post. He does seem to have this setup well thought out.

Link to comment

Everything is ordered now.

 

Now I just have to wait...

Fortunately I have my new Anthem AV and VAF Research gear arriving this week. Setting that up and continuing my XBMC / CommandFusion / iTach universal remote project should keep me busy for a while.

 

I do still have to buy a new CPU cooler, as I'm not keen on the idea of trying to remove the backplate to install the stock cooler; in the past I would have done it without a thought. Maybe I'm getting risk averse. I'll pick one up from my local parts dealer at some stage.

 

 

Link to comment

 

Can I ask why you chose the M1015 card instead of the AOC-SASLP-MV8 card?

A few reasons for me:

1. More bandwidth: PCIe 2.0 x8 and 6Gb/s SAS. (You can easily over-saturate the SASLP-MV8, though it's not a big deal.)

2. Native ESXi compatibility: no hack needed (and a hack may not work in a future ESXi release).

3. Price: I was getting new system pulls off eBay for 1/2 to 2/3 of the price of an MV8.

4. Port-expander aware, with the speed to back it up. (I am sure I will put two ESXi builds on Atlas down the road in DAS boxes.)

5. ...it's 5am and my brain is dead...

6. Industry-standard enterprise card with good BIOS support.

7. 3+TB support (so does the MV8).

8. SATA III SSDs at full 6Gb speed (not to mention they do a good job with a RAID 0 of SSDs, IMO, not that it is needed for unRAID).

 

The SASLP-MV8 is a solid card and I recommend it for anyone running unRAID on bare metal, especially if you are running 4.7.

 

The flaw with the M1015 at this point is that you need to run the betas for spin-down/temperature support, and that it is completely broken in the latest betas.

 

Link to comment

My M1015s arrived today. The 0.5 metre SAS cables definitely do not fit  :(. Another 20mm and they'd be just long enough. I'll order some replacement cables before Monday.

 

Another (minor) issue is that the heatsink/fan JUST touches the LSI card if I place one in the PCIe slot closest to the CPU.

Link to comment
