Recommendations needed for a server virtualization build....


talmania

Recommended Posts

Not what you'd expect though, as this one isn't for unRAID!

 

I've been toying with the idea of building another Norco 422x-based server for virtualization and a home lab environment.  I haven't kept up on recent CPUs, chipsets, etc., so I'm curious what others would recommend.  My goal is to have a robust home server that can handle multiple VMs (whether from Hyper-V or VMware) for primarily Microsoft technologies.  The only things I have available are an HP SAS expander, an HP SAS RAID controller, lots of SATA drives, and a few SAS drives.  Would I be better off just buying a used HP G5 off eBay, or going with Supermicro, etc.?

 

Recommended model?  i7 or Xeon?  Single or dual CPU?  Chipset?  In an absolutely ideal world I'd have an environment that would mimic what I work with: AD, Exchange, SCCM, SQL, etc.

 

Just curious for feedback from others who have been here before and what your recommendations would be.  Thanks all!

Link to comment

I just built my VM server (ESXi) with a socket 775 desktop motherboard, a 2.8GHz dual-core CPU and DDR3 RAM.  The HPs are nice (I use them extensively at work: G5s, some older G4s, and the BL blade servers), but they draw a ton of power while running, especially if you are spinning up older SCSI drives.  I took a small trade-off on redundancy for the lower cost of daily operation.
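
To put a rough number on that operating-cost difference, here's a quick back-of-the-envelope sketch; the wattages and the electricity rate are just assumptions, not measurements from either box:

```python
# Back-of-the-envelope annual power cost; wattages and $/kWh rate are assumed
# figures for illustration, not measurements from either machine.
KWH_RATE = 0.12          # assumed $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(avg_watts: float) -> float:
    """Yearly electricity cost for a box running 24x7 at a given average draw."""
    return avg_watts / 1000 * HOURS_PER_YEAR * KWH_RATE

for label, watts in [("older HP server, est. 300 W", 300),
                     ("desktop-class build, est. 120 W", 120)]:
    print(f"{label}: ~${annual_cost(watts):.0f}/year")
```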

 

Besides, for the lab/test environment you indicated, you should not be too worried about total uptime if there were an intermittent failure.

 

Rod

Link to comment

For me, a home server means "silence".

A good deal on a really sturdy server fished from the bay might tempt my senses, but only until I fire it up and my basement starts to vibrate  :D

 

I'd go for an IPMI-capable board from Supermicro and a low-power Xeon along with it.

Currently I run an X8SIL-F with an L3426 and I am very happy with it.

There are new CPUs and boards on the horizon based on the 1155 socket that are even more promising power-consumption-wise.

 

For storage I'd go for one or two cards supported by ESXi, like an entry-level LSI card (9240-8i, 9260-8i).

There are some good deals around from time to time, used or new.

Link to comment

Belated thanks for the feedback. 

 

I was focusing on Supermicro (since my unRAID box has IPMI and I love it), but after some research I've discovered a good deal of their boards are proprietary, Enhanced Extended ATX, or even E-ATX, which unfortunately rules out my beloved Norcos.  No way I'm dropping $500+ on a Supermicro chassis.

 

I'm looking at a full-tower E-ATX case (HAF, anyone?) at this point and still strongly leaning toward dual-socket 1366 for growth.  I can see it now: 25 VMs on a single dual-proc server with 96GB of RAM!  ;D

Link to comment

I was focusing on Supermicro (since my unRAID box has IPMI and I love it), but after some research I've discovered a good deal of their boards are proprietary, Enhanced Extended ATX, or even E-ATX, which unfortunately rules out my beloved Norcos.

What list are you looking at?  Out of their entire UP Xeon lineup, only 7 of the 52 boards are in a proprietary format.

Link to comment

I was focusing on Supermicro (since my unRAID box has IPMI and I love it), but after some research I've discovered a good deal of their boards are proprietary, Enhanced Extended ATX, or even E-ATX, which unfortunately rules out my beloved Norcos.

What list are you looking at?  Out of their entire UP Xeon lineup, only 7 of the 52 boards are in a proprietary format.

I'm looking at the DP Xeon lineup with the 5520 chipset.  Of the 35 boards there, 14 are proprietary and 4 more in the EE-ATX form factor might as well be.  Unfortunately there's nothing in a standard ATX form factor, but that's to be expected, I guess, with a DP config.  ;D

Link to comment

Yeah, on dual-processor boards proprietary formats are much more common.  But geez, why would you need THAT much horsepower?

 

'Cause it's better to have too much than too little?  ;D  In all honesty I'll probably just get a single proc to start, but I love the idea of being able to add a second down the road if need be.  24 VMs on a server... drool!

Link to comment

RAM. Lots of RAM. As much and as fast as you can get.

 

I virtualise all my desktops and my servers (except XBMC, unRAID, etc.).

 

For the desktop I use VMware Workstation.  It is far more convenient for a sit-in-front-of machine, and the basic 3D acceleration is useful (which ESX doesn't have).  Right now it's XP x64 plus a minimal VMware install, but it will soon be Linux (only for TRIM support and no other reason).

 

I am about to replace my desktop machine and am in a similar boat to you.

 

My thoughts/ramblings so far.

 

i7, single or dual socket

24GB of DDR3-2000 RAM

A PCIe card-based SSD boot drive

A RAID 5 of 3 x 2TB EARS drives for mass storage

An 80 Plus Gold PSU

Onboard graphics

As few cables as possible

 

ESX or ESXi are excellent, but they are far more work to tune correctly and require a separate control box for vSphere.  The hardware support list is a fraction of what Linux supports in general.  You could be lucky (and probably will be), but you won't know until you buy, and then it's too late.  I will probably retask my current VMware Workstation box as ESXi, though.

 

 

If you really want 24 VMs on one machine, you will need more RAM than that.
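
To put a number on it, a rough sizing sketch; the per-VM allocation and hypervisor overhead are assumed figures, and ESXi's overcommit and page sharing would claw some of it back:

```python
# Rough RAM sizing for a 24-VM host; per-VM allocation and hypervisor
# overhead are assumptions, not recommendations for any particular workload.
NUM_VMS = 24
RAM_PER_VM_GB = 2           # assumed average allocation per guest
HYPERVISOR_OVERHEAD_GB = 2  # assumed

needed_gb = NUM_VMS * RAM_PER_VM_GB + HYPERVISOR_OVERHEAD_GB
print(f"~{needed_gb} GB needed vs. the 24 GB in the parts list above")
```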

Link to comment

That's too generic a statement.  RAID 5 might well be perfectly adequate depending on what you're doing.  Yes, RAID 10 is the de facto middle-of-the-road standard recommended by VMware, but that's a catch-all statement.

 

Also, most of my VMs will be on an SSD card.

 

But you do make a very fine point; I just assumed that RAID 10 cards were out of my budget.  I am inspired to think about a potential homemade iSCSI/FC SAN, although I think it is overkill for what I need.

Link to comment

I probably would not choose RAID5 for the VMware disk store; the read/rewrite overhead of RAID5 may be noticeable if you have a lot of virtual machines accessing the same file system.

 

What may be considered cached can still equate to writes, and many of them, depending on the machine.

Consider what may be running in the background and will constantly be written and rewritten.

 

My setup consists of a Supermicro machine (yes, I spent the $500+, but there's a reason) with two 3GHz Xeon processors and 12GB of RAM.

 

I run 2-4 virtual machines on CentOS 24x7.

I moved two of the Windows virtual disks to an SSD.  It did speed things up a great deal, but what I find is that the disk is constantly written to, and there are sectors that go offline on a daily basis.

 

So if speed is a concern, RAID10. 

A friend who runs a hosting company uses a RAID10 setup with Xen, and he says it's as close to bare metal as you can get.

 

You can go software RAID or hardware RAID.  The beauty of a hardware RAID config is that the kernel only does one write to the controller, whereas with software RAID the kernel is responsible for multiple writes.
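
As a rough illustration of why that offload matters, these are the textbook per-write I/O counts, not benchmarks:

```python
# Textbook count of physical disk I/Os generated by one small logical write.
# Illustrative only; a controller with a battery-backed write cache hides much
# of this from the host, which is part of the hardware-RAID appeal.
IOS_PER_WRITE = {
    "RAID0": 1,     # single write, no redundancy
    "RAID1/10": 2,  # write both mirror members
    "RAID5": 4,     # read data + read parity, then write data + write parity
}

for level, ios in IOS_PER_WRITE.items():
    print(f"{level}: {ios} disk I/Os per logical write")
```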

 

One point about the Supermicro board and chassis: the airflow is all finely tuned.  So if I'm not churning like crazy on the machine, everything is very quiet.  If I start churning big time, the fans all spin up (and it gets noisy, but it usually quiets down later).

 

My server has been running 24x7 for the past 3 years, and with the improved airflow I have not lost a drive since its initial configuration.

 

Lots of RAM helps; the more you can work with, the better.

Link to comment

I will do some more looking about.  Supermicro etc. is almost certainly out of the question; over in the EU it is hard to get, the support is far less, and it is considerably more expensive.

 

I have been reading some posts on directly connected gigabit NFS setups.  I like the idea, but am skeptical for now.  The nice thing is that a small external NFS machine could be an elegant solution and a triviality to commission and maintain.
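
For a feel of what a single directly connected gigabit link buys you, a quick estimate (the overhead figure is an assumption; real throughput depends on NIC, MTU and NFS tuning):

```python
# Rough ceiling for a directly connected gigabit NFS link, assuming ~10%
# protocol/framing overhead.
LINK_BITS_PER_SEC = 1_000_000_000
OVERHEAD = 0.10

usable_mb_per_sec = LINK_BITS_PER_SEC * (1 - OVERHEAD) / 8 / 1_000_000
print(f"~{usable_mb_per_sec:.0f} MB/s usable")  # roughly one fast SATA disk's worth
```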

Link to comment

But you do make a very fine point; I just assumed that RAID 10 cards were out of my budget.  I am inspired to think about a potential homemade iSCSI/FC SAN, although I think it is overkill for what I need.

 

I'd recommend taking a look at the LSI 9240-8i SAS/SATA controllers.

The IBM ServeRAID M1015 is similar, but lacks RAID5 support out of the box (levels 0, 1, and 10 are supported).

Both are fully supported with ESXi 4.1 and can sometimes be found really cheap.

It feels like they are the standard entry-level controllers for a lot of servers (the IBM x-Series, for example); they get replaced right away when the boxes are unpacked and end up floating around on the bay.

 

Link to comment

But you do make a very fine point; I just assumed that RAID 10 cards were out of my budget.  I am inspired to think about a potential homemade iSCSI/FC SAN, although I think it is overkill for what I need.

 

A lot of onboard RAID controllers support RAID 10.  It is just mirrored pairs of drives striped together.  My company has many VMware servers, and all of the disk shelves are RAID 10.  It was very noticeable early on that RAID5 was not a good choice for a server running multiple VMs.

Link to comment

The one thing that holds me back on hardware RAID, either a card or onboard, is that I would need to buy two of them.  I have been in this situation before, where I had a RAID card fail and not only needed to source a new one (which can be problematic if saving money via eBay) but also had to get it onto the right firmware level before I could see the array again.  I am sure that won't always be the case, but it's not something I can risk.

 

This is why I am thinking of a dedicated device with a software RAID setup.  Less to go wrong, and easy to replace in a rush.

 

For interest, I am currently playing with RAID 5, 10 and 50 side by side on an HP SAN and an EMC SAN, both Fibre Channel.  Obviously the higher RAID levels are excellent, but RAID 5 is also excellent for certain machines (mainly Linux with low disk access).  RAID 10, as predicted, proves the best compromise of cost vs space vs speed.
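
To make the cost vs space vs speed point concrete, here's a toy comparison for a hypothetical 8-disk set using nominal textbook values, not figures from the SANs above:

```python
# Toy cost-vs-space-vs-speed comparison for a hypothetical 8 x 2 TB disk set.
# Nominal textbook figures only; RAID50 modelled as two 4-disk RAID5 legs
# striped together.
DISKS, SIZE_TB = 8, 2

layouts = {
    "RAID5":  {"usable_tb": (DISKS - 1) * SIZE_TB, "write_penalty": 4},
    "RAID10": {"usable_tb": DISKS // 2 * SIZE_TB,  "write_penalty": 2},
    "RAID50": {"usable_tb": (DISKS - 2) * SIZE_TB, "write_penalty": 4},
}

for name, v in layouts.items():
    pct = 100 * v["usable_tb"] / (DISKS * SIZE_TB)
    print(f"{name}: {v['usable_tb']} TB usable ({pct:.0f}%), "
          f"{v['write_penalty']}x write penalty")
```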

 

Since unRAID will never fit the bill for a dedicated VM store, I am looking at Openfiler.  It seems very capable for this role.  What is not clear, though, is the importance of RAM and CPU in software RAID.  Obviously they have an impact, but so far I have not found a way to find the sweet spot between "fit for purpose" and "lower energy consumption".

Link to comment
