Supermicro MBD-X7SBE currently with (qty. 2) AOC-SAT2-MV8s: upgrade question?



I just upgraded to 5.0 RC12 without any issues and am wondering if I should replace my two old-school AOC-SAT2-MV8 cards with AOC-SAS2LP-MV8s. I plan to upgrade my drives to 3TB or more very soon.

 

Suggestions greatly appreciated... Thanks!



The answer is simple:  It Depends  :)

 

You didn't indicate which motherboard you have, so I don't know whether you have PCI-X slots or are using the SAT2-MV8s in plain PCI slots. Nor do I know whether you have PCIe x4 (or wider) slots available for the SASLP-MV8s, or would have to use them in x1 slots.

 

Without knowing that, it's impossible to say.

 

To wit ... the SAT2-MV8s are PCI-X 133 cards.  Assuming you're using them in PCI-X slots, that gives you 1.06GB/s of bandwidth, which works out to over 130MB/s per disk for 8 disks ... so they're not appreciably limiting your disk throughput (some high-density, high-capacity disks are a bit faster, but not enough to be a significant bottleneck).    However, if you're using them in plain PCI slots, your total bandwidth is either 133MB/s or 266MB/s depending on the bus clock speed -- in either case the card is a major bottleneck for your drives.  Clearly, if the cards are installed in PCI slots, moving to SASLP-MV8s would be a BIG improvement, IF you have the appropriate slots available (PCIe x4 or better).
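For anyone who wants to check that per-disk math, here's a quick sketch using the bus figures above (protocol overhead is ignored, so treat these as rough upper bounds):

```python
# Per-disk share of a shared bus for an 8-port card (protocol overhead ignored).
def per_disk_mb_s(bus_total_mb_s, disks=8):
    """Each disk's share if all disks stream at once."""
    return bus_total_mb_s / disks

print(per_disk_mb_s(1064))  # PCI-X 133MHz/64-bit (~1.06GB/s): 133.0 MB/s per disk
print(per_disk_mb_s(266))   # 66MHz PCI bus:  33.25 MB/s per disk
print(per_disk_mb_s(133))   # 33MHz PCI bus:  16.625 MB/s per disk
```

With 8 spinners that can each sustain well over 100MB/s, the plain-PCI numbers make the bottleneck obvious.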

 

If your motherboard supports PCIe v2, then your per-lane bandwidth is 500MB/s (an older board with just PCIe v1.1 supports 250MB/s per lane).    So as long as you have an x4 slot available, you'll have either 2GB/s (PCIe v2) or 1GB/s (PCIe v1.1) available for the card.  In the v2 case, that's far more bandwidth than any modern disk drive needs; in the v1.1 case it's about the same as you'd get with a PCI-X slot and your older cards.    If, however, you'd have to plug these cards into x1 slots, you'd only have either 250MB/s or 500MB/s of total bandwidth -- still a nice improvement over a PCI slot (if that's what you were using), but also still a bottlenecked interface.
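The same kind of sketch for the PCIe side, using the per-lane figures quoted above:

```python
# Total slot bandwidth = per-lane bandwidth x lane count (rough; overhead ignored).
PER_LANE_MB_S = {"1.1": 250, "2.0": 500}

def slot_mb_s(pcie_gen, lanes):
    return PER_LANE_MB_S[pcie_gen] * lanes

print(slot_mb_s("2.0", 4))  # x4 slot, PCIe v2:   2000 MB/s (~2GB/s)
print(slot_mb_s("1.1", 4))  # x4 slot, PCIe v1.1: 1000 MB/s (~1GB/s)
print(slot_mb_s("2.0", 1))  # x1 slot, PCIe v2:    500 MB/s
print(slot_mb_s("1.1", 1))  # x1 slot, PCIe v1.1:  250 MB/s
```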

 

So ... as I said above, the answer is IT DEPENDS.    If you provide the details on your motherboard, I can easily give you a more precise answer ... but I suspect you can easily deduce that from the info above  :)

 

Note also that if you happen to have x8 slots available, there's a newer version of the SASLP-MV8 cards that uses the x8 interface -- the SAS2LP-MV8 ... this would be an even better card, as it doubles the bandwidth (IF you have x8 slots).    Of course with current drives, that's not a benefit ... but if, for example, you used one of its ports for an SSD (or 2) you'd get the advantage of the higher bandwidth.

 


Hi,

 

Thanks for the prompt reply... the motherboard is a Supermicro MBD-X7SBE, and both cards currently occupy the PCI-X 133MHz slots.

 

I just want to be sure that I am getting the most bang out of this system once I upgrade the drives.

 

Also, is there a motherboard that would support 20 drives plus a parity and a cache drive (22 total) that might be better suited for the future, with the lowest power consumption?

 

I guess I'm basically looking for the fastest and most lightweight motherboard and cards to support this many drives, so any suggestions would be appreciated.

 

Thanks,

 

Justin



In that case I wouldn't bother to upgrade the cards unless/until you also replace the motherboard.  You'll be somewhat limiting the bandwidth with new 3TB and 4TB 1TB/platter drives ... but the only time that will matter is during parity checks, since that's the only time all drives are "in play" at the same time.    And they'll still have plenty of bandwidth available.

 

As for a newer, lower-power motherboard ==> that depends a lot on how you use UnRAID.  Do you just use it as a storage server, or do you run a lot of add-ons?

 


I mainly use it as a storage server, though I do run unraid-notify and unMenu. I just want the most power and fastest throughput possible at the lowest power draw, without being hindered by the CPU, etc.

 

Personally, I am convinced I can satisfy my storage needs with no more than 14 drives, so I'm quite happy with my mini-ITX SuperMicro Atom board that consumes 20W at idle and ~45W with all 6 drives spun up (3TB WD Reds).    If I wanted more drives, I'd just mount it in a larger case that could support 14 drives and add a SAS2LP-MV8 to handle 8 more.  14 drives could provide up to 65TB of storage using the forthcoming 5TB WD Reds!!  (or 60TB plus cache)
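The capacity arithmetic behind those numbers, for anyone who wants to plug in other drive counts or sizes (one drive is always parity; a cache drive holds no array data):

```python
# Usable unRAID array capacity: subtract the parity drive, and optionally a cache drive.
def usable_tb(total_drives, drive_tb, with_cache=False):
    data_drives = total_drives - 1 - (1 if with_cache else 0)
    return data_drives * drive_tb

print(usable_tb(14, 5))                   # 65 TB: 13 data drives + 1 parity
print(usable_tb(14, 5, with_cache=True))  # 60 TB: 12 data + parity + cache
```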

 

If you want more capacity than that, you could use a micro-ATX X9SCM-F-O board with a low-power Xeon (E3-1220L has a 20W TDP) and have plenty of x4/x8 slots to support additional SATA ports.

 


Do you by chance know if a micro-ATX X9SCM-F-O board can fit in a full-size ATX tower? I'm hoping to be able to continue using my current box. If not, any ideas for a box that can fit 20 drives plus 2 internal? I currently use four 5-bay drive docks in my tower.

 

Also, I do quite a bit of streaming from my unRAID to my Western Digital media player. There won't be any issues with lagging with this MB, will there?

 

Thanks for all the suggestions :)



It'll fit in a full ATX tower. You'll just have to change the placement of the standoffs.



As already noted by mrow, it will fit with no problem (just reposition a couple of the standoffs).  In fact, it will allow for better airflow, and it will be easier to work inside the case since the board occupies less space.

 

 

Also, I do quite a bit of streaming from my unRAID to my Western Digital media player. There won't be any issues with lagging with this MB, will there?

 

Absolutely not -- the important thing is what CPU you're using.  Are you using any plug-ins that do a lot of transcoding?  If so, you'll want a CPU with a reasonable amount of "horsepower" => but the odds are quite good that ANY Socket 1155 Ivy Bridge CPU you install will have more power than what you're using now.

 


With that setup you probably do not need to upgrade the cards yet, i.e., unless you find yourself with a bottleneck.  I had the same setup and it worked well. What I did to improve performance was install an Areca ARC-1200 card in the x4 slot and put my parity drive there. Enabling write caching helped a great deal.

 

The X7SBE has an x4 and an x8 slot, so if you wanted to improve performance you could go with one new card and move some of the fastest drives to it, or, as I've done, move parity to the fastest ports or to a separate controller in the x4 slot.

 

Also, what CPU do you have now?

I remember scoring a nice 8600 cheap on eBay. It was a very cost-effective upgrade.

 

If you plan to go with an ESX solution, then getting at least one controller now would set you on your way, although I might go for the more popular LSI card.

 

The X9SCM will fit inside your current case with a few adjustments.

There is a real performance benefit to the E3-1230 and better if you plan to go with ESX.

 


Can you please explain what an "ESX solution" means? Perhaps I have been away from these topics for too long.

 

Also, if I decided to go with the micro-ATX X9SCM-F-O board and E3-1220L, would that be well suited to two of the new AOC-SAS2LP-MV8s?

 

If these were the chosen MB and processor, what would be the absolute best way to go, disk-controller-wise?


The "ESXi" solution means running UnRAID in a virtual machine under ESXi.  You boot the system into ESXi, then run the UnRAID VM.    It requires that all of the drives be on add-in controllers which can be "passed through" to the VM, and you need an ESXi-compatible motherboard and CPU.

 

If you think you may want to go that route, be sure to read the threads on ESXi BEFORE you buy a new motherboard/CPU/interface cards.

 


What are the benefits of doing this?

 

None if you simply want a storage server.    If you want to use the same PC to run your storage server and a few other virtual machines, then this will let you do that.    For example, some folks like to run a "stock" UnRAID for storage, and run all of the various applications they want [Couch Potato, CrashPlan, SABnzbd, Plex, etc.] in another Linux virtual machine.  This isolates those applications from UnRAID ... which tends to make UnRAID more reliable [note that a very high percentage of the issues posted on this forum are from the add-ons].

 

Note that to have enough "horsepower" to do this, you'll be running a system that draws more power than is necessary for just UnRAID => but probably less than running two separate boxes.

 

It's a tradeoff ... personally I prefer to have a dedicated "bare metal" UnRAID box, and use other PCs for other things.    I DO run a bunch of VMs ... but NOT on the same PC that's running UnRAID.

 

You may want to read through the following thread, which has a lot of discussion about an ESXi based UnRAID server:  http://lime-technology.com/forum/index.php?topic=14695.0

 


As previously mentioned, the benefit is that you can run other utility operating systems alongside unRAID.

For me, I run a CentOS distro for all inbound connections and as my administrative server, a Windows XP instance for torrents, and a Slackware distro for compiling and developing unRAID plugins.  I also used to run an XP instance just for recoding ripped DVDs to MP4s. It was nice to rdesktop into the remote virtual machine, fire up the application, and let it run overnight without putting wear and tear on my laptop (heat and fan).

 

The E3-1220L will be fine for a bare-metal unRAID server. It would probably work OK for ESX, unRAID, and a limited number of VMs. (I've run more on less horsepower; it all comes down to your expected response time and memory.)

 

FWIW, I'm running ESX on the HP MicroServer. It's a little slow compared to what I'm used to, but it's good enough to run unRAID and administration machines and to compile programs.

 

As far as controllers go, I'm not informed enough to make the ultimate recommendation. I used the LSI card because I did not want to work at it too much (although I did). The Supermicro controllers seem to be pretty popular too.


If I'm going to run this without ESXi, how much memory would you recommend for this MB and processor, taking overall power consumption into account as well?

 

For stock UnRAID, 4GB is plenty, although with the low cost and fairly high efficiency of modules these days, I'd install a pair of 4GB modules (8GB total).

 


The Intel E3-1220L seems hard to find. Does anyone know where I can purchase one? Or any ideas for a different low-power CPU that could be used with this board?

 

How about version 2 of this CPU? INTEL XEON E3-1220LV2 2.3GHz 3M 2C SOC1155 17W TRAY#CM8063701099001

 

or

 

How about this:

 

Intel® Xeon® Processor E3-1265L v2 (8M Cache, 2.50 GHz)


You may not have to go with an LV (low voltage) processor. The max TDP posted for a processor is the maximum amount of power you would need to dissipate if the processor were running flat out.

 

Chances are you won't be running any modern processor at 100% with unRAID alone.

 

From what I remember, the E3-1230 v2 has a TDP of 69W vs. the E3-1230 v1's 80W.

 

Since you'll be running unRAID on bare metal, you can work with anything that has the speed you need (2GHz or better).

 

The only time I really needed the LV processors in the past was for specific case requirements: I had to install two Xeon processors in a cramped case, and in that situation the LV 2.4GHz dual Xeons worked really well.

 

If I were putting this in my living room or bedroom, I might consider the LV processor for bare-metal unRAID if the cost weren't prohibitive -- mostly for limiting the maximum heat output and the competition with the air conditioning during summertime.

 


Agree -- you don't really need to use the LV versions.    There's very little difference in the idle power of the various models of the E3-12xx Xeons; the difference is in how much power they can draw under load.    Assuming you don't care about that -- and would rather have the higher "horsepower" available -- then it's fine to just go with something like an E3-1230v2 (an excellent CPU).

 


I understand what you are saying, but I also want my idle power to be as low as possible -- how about this?

 

Intel Xeon E3-1265L V2 Ivy Bridge 2.5GHz (3.5GHz Turbo) 8MB L3 Cache LGA 1155 45W Quad-Core Server

 

That processor has a video chip in it, so you may be wasting power energizing an unused video chip.

Double-check with Supermicro that it is compatible.


FYI.

 

I had ESXi 5.0 running on an X7SBE (8GB host memory) with the AOC-SAT2-MV8s passed through to an unRAID VM, plus a 32-bit Windows 7 VM, all running for a while.  The unRAID VM never had any problems that way and got about the same throughput as when the same unRAID ran on bare metal on the same MB.  The Windows VM I had to restart every 2-3 weeks as it would become unstable, possibly because of the tuner card and HDD controller passed through to it on the x4 and x8 PCIe ports.
