Adaptec 4 Port SATA Controller (PCI-E x4) - Open Box - $71.00


http://www.newegg.com/Product/Product.aspx?Item=N82E16816103058R

 

I just bought one.

 

This 4 drive interface card requires a PCI-E x4 (or faster) slot.

 

For example, if you own a P5B-VM D0 motherboard (like me), which has a PCI-E x4 slot and a PCI-E x16 slot, you could use two of these cards.  On most motherboards with one or two PCI-E x1 slots plus a PCI-E x16 slot, you could use one of these in the x16 slot (and none at all if a video card occupies the x16 slot).

 

It will run 4 drives, each on its own "lane," providing full-bandwidth performance even during parity checks.

 

Open box items carry a risk, but I've had good luck with prior open box purchases.  YMMV


That is a pretty sweet deal.  Personally I'd pay the extra $34 for a brand new one.  You don't know what kind of moron might have done God knows what to it before it gets to you.


It will run 4 drives, each on its own "lane," providing full-bandwidth performance even during parity checks.

 

Is this feature something a Promise TX4 doesn't have?


Yes and no.  The TX4 will let you hook up 4 SATA drives, and so will this card.  But the TX4 may be slower.

 

The TX4 runs on the PCI bus, which is limited to 133 MB/sec shared across all devices on it.  The specs (I used the specs for a 1 TB Seagate SATA drive) claim the drive can deliver data at 105 MB/sec, but I've seen real-world speeds cap closer to 80 MB/sec.  So ONE SATA drive on the PCI bus is fine, and even 2 is not so bad.  But with 4 or 8 SATA drives, performance degrades.

 

This may sound worse than it is.  The bottleneck only applies to simultaneous use.  You could have 20 drives on the PCI bus, and as long as only one is being accessed at a time, it gets the whole bus to itself.
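To make the sharing arithmetic concrete, here is a minimal sketch (Python) using the 133 MB/sec PCI bus limit and the ~80 MB/sec per-drive figure from above; both are the rough numbers assumed in this post, not measured values:

```python
# Per-drive throughput when N drives share a parallel PCI bus.
# 133 MB/s is the theoretical 32-bit/33 MHz PCI limit; 80 MB/s is
# the rough real-world sequential rate per drive assumed above.

PCI_BUS_MB_S = 133    # shared across all devices on the bus
DRIVE_MAX_MB_S = 80   # per-drive sequential ceiling

def per_drive_throughput(active_drives):
    """Each simultaneously active drive gets an equal share of the bus,
    capped by what the drive itself can sustain."""
    if active_drives == 0:
        return 0.0
    return min(DRIVE_MAX_MB_S, PCI_BUS_MB_S / active_drives)

# One active drive is limited only by itself; four split the bus.
print(per_drive_throughput(1))  # 80
print(per_drive_throughput(4))  # 33.25
```

This is why a single drive (or even two) on PCI is fine, while four or more start to starve each other.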

 

The PCI-E bus is different.  Each "lane" provides 250 MB/sec.  So with a 4-lane card (like this one), EACH drive gets a full 250 MB/sec dedicated to just that drive.  Add a second card, and each drive on that card also gets its own 250 MB/sec.  The drives will never be bottlenecked by the bus.

 

There are PCI-E x1 controllers in the $20-50 range that let you hook up 2 SATA drives to one PCI-E lane.  That seems like a good solution, since 250 MB/sec is plenty for two.  But this card gives a full lane to each drive.  There are no x4 cards (that I know of) that allow you to connect 8 drives.  (UPDATE: There were none when this was written, but there is now: the Supermicro AOC-SASLP-MV8 card.)

 

So why does it matter?  The main reason is parity checks.  Parity checks put the ultimate stress on I/O bandwidth: every drive in your array runs at full speed for hours to complete the task.

 

So let’s compare 4 drives on the PCI bus against 4 drives each on their own PCI-E lane.  (Drives connected to the motherboard are ignored, as they are on high-speed buses and should not hurt performance.)

 

Per-drive throughput with 4 drives:

PCI – 33 MB/sec

PCI-E – 80 MB/sec

 

and with 8 drives:

 

PCI – 16 MB/sec

PCI-E – 80 MB/sec
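Those per-drive figures fall out of simple division.  A quick sketch, assuming the 133 MB/sec shared PCI bus, 250 MB/sec per PCI-E lane, and ~80 MB/sec drive ceiling used throughout this post:

```python
# Reproduce the per-drive comparison: PCI drives split one shared bus,
# while this card gives each drive its own PCI-E lane, so only the
# drive's own ~80 MB/s ceiling applies.

PCI_BUS = 133    # MB/s, shared by all drives on the bus
PCIE_LANE = 250  # MB/s, dedicated per lane (one drive per lane here)
DRIVE = 80       # MB/s, rough per-drive sequential ceiling

def pci_per_drive(n):
    # Shared bus split n ways, truncated to whole MB/s
    return int(min(DRIVE, PCI_BUS / n))

def pcie_per_drive(n):
    # One dedicated lane per drive; the lane is never the limit
    return min(DRIVE, PCIE_LANE)

for n in (4, 8):
    print(n, "drives:", pci_per_drive(n), "vs", pcie_per_drive(n), "MB/s")
```

With 4 drives the PCI side drops to 33 MB/sec each, with 8 drives to 16 MB/sec, while the PCI-E side stays drive-limited at 80 MB/sec regardless.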

 

A real world figure from my system, which has 5 drives on the PCI bus, is about 22MB/sec doing parity checks.

 

Outside of parity checks, running multiple simultaneous streams could also start to create bottlenecks on a PCI-laden system.  But remember that your gigabit Ethernet only delivers about 50 MB/sec to your workstations (that’s roughly 40% of the raw rate, which I think is about all you can get out of it), and an HD stream only needs about 1 MB/sec.  You start to see that the bottlenecks of PCI are not so bad in real-world use.  Your real bottleneck becomes the drives themselves: although a drive can pump out 80 MB/sec on sequential access, a single drive serving multiple streams has its heads moving all over the place, and throughput goes WAY down.
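As a sanity check on those network numbers, assuming gigabit Ethernet's raw 125 MB/sec (1000 Mbit/s ÷ 8) and the ~40% real-world efficiency guessed at above:

```python
# Gigabit Ethernet delivery vs. HD stream demand, using the rough
# figures from this post (40% efficiency is an assumption, not a spec).

GIGE_RAW_MB_S = 125           # 1000 Mbit/s / 8 bits per byte
REAL_WORLD_EFFICIENCY = 0.40  # assumed achievable fraction
HD_STREAM_MB_S = 1            # rough per-stream requirement

usable = GIGE_RAW_MB_S * REAL_WORLD_EFFICIENCY
print(usable)                             # 50.0 MB/s to workstations
print(int(usable / HD_STREAM_MB_S))       # ~50 concurrent HD streams
```

Even a heavily shared PCI bus comfortably outruns what the network can carry, which is why the drive heads, not the bus, become the practical limit.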

 

Hope this helps.


Don't forget the Northbridge: if you are using ports on the mobo, you will hit a ~200 MB/sec combined I/O ceiling on those drives (unless you are using one of the particular mobos that have improved on it), particularly during parity syncs.


I am using a P5B-VM D0 and found this block diagram  [ftp=ftp://download.intel.com/design/intarch/prodbref/31473304.pdf]ftp://download.intel.com/design/intarch/prodbref/31473304.pdf[/ftp] (page 3) showing 2 GB/sec of bandwidth between the Southbridge and Northbridge.  With each drive able to use ~80 MB/sec, that would allow 25 drives to run simultaneously at their maximum bandwidth.

 

I did learn that internally the JMicron controller is connected over a PCI-E x1 lane, and its 2 SATA ports plus 2 IDE ports all share it.  Not a major bottleneck, but if you use all 4 ports simultaneously, each is limited to ~62 MB/sec.
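Both chipset figures above are simple division.  The 2 GB/sec Southbridge-Northbridge link and the 250 MB/sec x1 lane come from the posts in this thread; the per-port split assumes all 4 JMicron ports are busy at once:

```python
# Chipset bandwidth arithmetic from the figures in this thread.

DMI_MB_S = 2000       # Southbridge-to-Northbridge link (2 GB/s)
DRIVE_MB_S = 80       # rough per-drive sequential ceiling
X1_LANE_MB_S = 250    # one PCI-E lane
JMICRON_PORTS = 4     # 2 SATA + 2 IDE sharing that lane

# How many full-speed drives the Southbridge link could feed
print(DMI_MB_S // DRIVE_MB_S)        # 25 drives

# Per-port share of the JMicron controller's single lane
print(X1_LANE_MB_S / JMICRON_PORTS)  # 62.5 MB/s
```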


Working perfectly!  Despite warnings of possible missing parts, it came with all accessories and even the 4 SATA cables.  The card appeared never to have been removed from the box.

 

Easy install.  Great performance boost for parity.  Highly recommended for anyone with a free x4 or x16 slot who needs 4 more SATA ports.

