RAID Controller Recommendations


marcusone


I know lots of you on here have HW RAID as well as unRAID, so I thought I'd ask here for help choosing/finding a good HW RAID card.

 

I'm looking to purchase a good hardware RAID controller that will work well with SSDs.

I found the LSI 9265, but it appears that's an older card that's no longer made... is it still worth it? (My research suggests it now includes FastPath as part of the updated firmware; I'm not sure whether that means I'd still need a license.)

 

Or the newer 9270 models? I'm finding it hard to find stock, though.

 

It doesn't have to be LSI; it's just that the reviews I found always seem to have LSI on top, particularly with SSDs.

 

I'd like to keep it within $700 including BBU if possible.

 

For now it will go in an older i7 system I have, with an open x16 PCIe 2.0 slot (which I believe will run at full speed, or at least at x8, when occupied by a non-video card).  Either way, at some point it will be upgraded to a server motherboard (likely Supermicro).

 

Thanks for your time!

Link to comment

Sorry, I should add that I'm likely only interested in RAID 0 or RAID 10.

Unless you have an argument for RAID 5/6.  From what I can tell from my research, RAID 10 is the best "fast and safe" RAID you can do.  I'm not doing RAID for backup :P:)

I run a good chunk of VMware test machines that load/unload frequently, so I'm looking for much better data rates.

I might even toy with the idea of RAID 0 across a bunch of WD Black drives (as I'll get much more storage than SSDs for the same price).

Link to comment

You have to be very careful about RAID X statements. Not all RAID 10 implementations are equal, etc. Is RAID 10 the same as RAID 01? RAID 4 is often the performance leader when properly tuned (i.e. filesystem block size = stripe size).

 

RAID 0 is to be avoided. RAID 10 (or 01) is dead with an unfortunate pair of drive failures. RAID 60 would take at least 3 drive failures to go offline. The plaids (100, 30, 50, 53, 60) are very interesting.
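The failure-tolerance trade-offs above can be sketched numerically. This is a toy comparison, not a definitive model: drive counts are made-up examples, and the "min failures to offline" figures are worst case (e.g. RAID 10 dies only if both halves of the *same* mirror pair fail).

```python
def raid_summary(level: str, drives: int) -> tuple[float, int]:
    """Return (usable_fraction, min_failures_to_offline) for common levels."""
    if level == "0":
        return 1.0, 1                     # any single failure kills it
    if level == "5":
        return (drives - 1) / drives, 2   # survives exactly one failure
    if level == "6":
        return (drives - 2) / drives, 3   # survives any two failures
    if level == "10":
        return 0.5, 2                     # unlucky pair (same mirror) kills it
    if level == "60":
        # two RAID 6 legs striped; each leg survives two failures,
        # so the third failure has to land in an already-degraded leg
        return (drives - 4) / drives, 3
    raise ValueError(level)

for lvl, n in [("0", 4), ("10", 4), ("5", 4), ("6", 6), ("60", 8)]:
    frac, kills = raid_summary(lvl, n)
    print(f"RAID {lvl:>2} on {n} drives: usable={frac:.0%}, "
          f"min failures to offline={kills}")
```

This is why RAID 60 "takes at least 3 drive failures to go offline" while RAID 10 can die from an unlucky pair.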

 

Link to comment

You have to be very careful about RAID X statements. Not all RAID 10 implementations are equal, etc. Is RAID 10 the same as RAID 01? RAID 4 is often the performance leader when properly tuned (i.e. filesystem block size = stripe size).

 

RAID 0 is to be avoided. RAID 10 (or 01) is dead with an unfortunate pair of drive failures. RAID 60 would take at least 3 drive failures to go offline. The plaids (100, 30, 50, 53, 60) are very interesting.

 

Thank you; and I do understand all those - I think ;).  What I want to do, and how many drives I want to make an array out of, all lead me to RAID 10.  (Again, I'm not looking to save my data should I have multiple drive failures; I'm looking for speed with some ability to recover from A failure.  Most of my current arrays are RAID 0, because I don't care - I have a good backup system in place for any data on my RAID 0 arrays.)

 

I'm at the point now where I would love some recommendations on actual hardware RAID cards worth spending about $500 on (or a bit more with BBU) that can make good use of quality SATA drives and SSDs.

 

 

Link to comment

I'm using LSI RAID controllers with ESXi for home use, and may be able to provide some references, caveats, and a few suggestions.

 

Here is the setup I'm running:

A 9265-8i with BBU, as well as a 9271-8i with CacheVault.  Both have hardware CacheCade keys, running ESXi 5.5.  The 9271 is on the main host: a 25 TB file server plus 8-10 smaller VMs.  The main host has 3 datastores: 8x 4 TB RAID 5 (Hitachi 7K4000 HDDs), 2x 240 GB RAID 0 (Intel 520 SSDs), and 4x 500 GB RAID 5 (Seagate HDDs).  I'm also using 2x 240 GB Crucial M4 SSDs as a read-only CacheCade array.  All drives are attached to an Intel RES2CV360 expander.

 

The 9265-8i is on the backup host: 4x 500 GB RAID 5 HDDs, 2x 240 GB RAID 0 SSDs, and 2x 240 GB read-only CacheCade.

 

First, a couple of useful references so you can draw your own conclusions:

http://www.tinkertry.com/cachecade-pro-2-0-on-lsi-9265-8i/

http://www.servethehome.com/lsi-sas-2008-raid-controller-hba-information/

 

Second, a few warnings:

Non-server motherboards sometimes have problems with the boot-time configuration menus built into the controller BIOS.  This is generally not an issue with server-class motherboards, and it's not an issue under Windows, where you can use the MSM (MegaRAID Storage Manager) software to configure everything.  In ESXi, however, you do need to add software packages available from LSI to monitor health status and configure arrays; you can then manage the controller remotely from a workstation.

I have had some problems since upgrading to ESXi 5.5 with losing connectivity to the management software.  This has no impact on the function of the host, just annoying.  If you are a "set it and forget it" type of person, no problem; otherwise you'd have to reboot and access the controller BIOS during bootup to change the configuration.  You can still see array and drive status in the vSphere client.

Look carefully at the compatibility matrix for controller/motherboard/drive combinations.  I'm using consumer-grade SATA drives behind a SAS expander without any problems, but there are no guarantees.

 

Last, a few suggestions from my experience:

If you have a UPS and a rock-solid backup plan, consider buying a 480 GB SSD and a 500 GB-1 TB HDD for backups, and try that out first.  A Crucial 480 GB SSD was $269 a few days ago, and it's on the LSI compatibility list.  You will be amazed at what a difference an SSD makes running VMs.  If you still want to go for HW RAID...

Watch for used equipment on eBay and hardware forums - no need for bleeding-edge equipment unless you have $ to burn.  I bought most of my stuff used or open-box, with the exception of the HDDs and BBU.  I would look for the 9266-8i over the 9265-8i (CacheVault vs. BBU), and the 9271-8i if the price is right (the 9271 adds PCIe 3.0 for more bandwidth).

 

Have fun shopping!

Link to comment

Thanks!

 

I'm in the process of testing out two new 240 GB M500s in RAID 0 on my M1015 (via software RAID for now; from what I can find on the net, there is little performance difference between SW and HW RAID 0 on an M1015, since there is no cache).
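For anyone unclear on what RAID 0 actually does here, a toy sketch of the striping: logical blocks are dealt round-robin across members in chunk-sized units, which is why the chunk (stripe) size interacts with the filesystem block size mentioned earlier. The chunk size and disk count below are illustrative assumptions, not anyone's actual settings.

```python
CHUNK = 64 * 1024      # assumed 64 KiB chunk size (a common software-RAID default)
DISKS = 2              # e.g. two M500 240 GB SSDs

def locate(offset: int) -> tuple[int, int]:
    """Map a logical byte offset to (disk_index, offset_on_that_disk)."""
    chunk_no, within = divmod(offset, CHUNK)
    disk = chunk_no % DISKS              # chunks alternate across members
    stripe_row = chunk_no // DISKS       # how many full stripes precede it
    return disk, stripe_row * CHUNK + within

print(locate(0))            # (0, 0)       first chunk -> disk 0
print(locate(64 * 1024))    # (1, 0)       second chunk -> disk 1
print(locate(128 * 1024))   # (0, 65536)   third chunk wraps back to disk 0
```

Large sequential reads hit both disks at once (hence ~2x sequential throughput), while a single small random read still lands on just one disk.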

 

And keeping a nightly backup routine :)

 

I'll keep an eye out for a 9266 or 9271 on eBay to use as a production card (the above I'll likely keep for testing/playing).

 

Thanks again!

 

Link to comment

I agree on the RAID 0 - the controller is just another layer to add latency.  I did not see IOPS scaling in a linear fashion with RAID 0 and two SSDs; I got about a 50% boost vs. a single drive.  Sequential reads/writes do just about double, however.  I favor the sequential component for video processing, and the IOPS are "good enough" for my small number of VMs.

 

I would be interested in your results on the M1015; I just ordered an OEM HBA via eBay to test (an LSI 9201-8i).
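If you want to compare sequential vs. random read numbers yourself, here's a quick-and-dirty Python sketch of the idea. Big caveat: it reads through the page cache, so the absolute numbers are wildly inflated; for real disk benchmarks use a proper tool like fio with direct I/O. The file size and iteration counts are arbitrary assumptions for illustration.

```python
import os
import random
import tempfile
import time

def measure(path: str, block: int = 4096, iters: int = 2000,
            sequential: bool = True) -> float:
    """Rough MB/s for sequential vs. random 4 KiB reads on an existing file."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        if sequential:
            offsets = [i * block % (size - block) for i in range(iters)]
        else:
            offsets = [random.randrange(0, size - block) for _ in range(iters)]
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, block, off)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return iters * block / elapsed / 1e6  # MB/s (page cache inflates this!)

# Toy usage against a small scratch file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(8 * 1024 * 1024))
    path = f.name
seq = measure(path, sequential=True)
rnd = measure(path, sequential=False)
print(f"sequential ~{seq:.0f} MB/s, random ~{rnd:.0f} MB/s (cached, not real disk)")
os.unlink(path)
```

It only illustrates the sequential-vs-random access pattern; the ~50%-IOPS-boost vs. ~2x-sequential observation above came from real hardware, which this toy can't reproduce.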

Link to comment
