
Why are Port Multipliers not more popular? Maybe they should be...


NAS

Recommended Posts

 

I thought I would start a post on port multipliers. We see very little chatter on them and in many respects they "could" be a bit of a magic bullet for unRAID users.

 

So what is a port multiplier? Without getting too complicated, it is a device that allows several SATA drives to connect to one SATA port.

 

You can read more here: http://en.wikipedia.org/wiki/Port_multiplier

 

Depending on a few basic factors you can connect several SATA drives to one SATA port with no blocking. You could never connect the maximum 15 without blocking, but there could be a happy trade-off between cost and speed.

 

So why aren't port multipliers used more? I can't say for sure, but I suspect these factors play a major role:

 

The technology is not sold aggressively

The technology is not well understood

The available products are confusing and their websites doubly so

In theory it should just work, but in reality it's chipset-specific

 

 

So it's not all perfect, but there should be no reason why we can't, as a community, make a simple list of SATA chipsets and port multipliers that just work.

 

Why would this be good...

 

In theory you can scale much higher than using PCI* cards alone

It might be cheaper (if we did a group buy it could be much cheaper)

It is easy to deploy and not slot specific

Motherboards with limited slots could still cater for loads of drives, leading the way for potential huge ION/Atom builds

 

In another thread a user has purchased a huge 32 port SATA PCIe x8 card. It turns out this is just an 8 port SATA card with eight 4-port port multipliers built in. Fantastic piece of kit, but it would cost pretty close to the same amount to buy one 8 port card and eight port multipliers. For me this would be better, since several times in the past my RAID cards have failed and I needed a new one in a hurry, and I like the flexibility of breaking up the device.

 

This is only a starter for discussion, but I think what I have said makes sense?

 

Comments?

Link to comment

Do you literally mean SATA multipliers or do you mean SAS multipliers (which ultimately breakout to SATA somewhere along the way via a backplane) ?

 

If SAS multipliers I'd love to use them, but the ones I've seen that are any good - the cost is prohibitive and you're better off purchasing additional controllers to get the requisite ports.

 

Might be a different story if you need huge amounts of ports (i.e. 32+), but that many drives aren't supported in unRAID anyway :(

Link to comment

I think that the reason SATA port multipliers haven't been popular is because they degrade performance too much (parity checks and drive rebuilds). Now that we are starting to see more SATA 3.0 (6Gb/s) ports the performance degradation going forward may be less severe.

 

In the hardware RAID world port multipliers make a lot of sense because the per port cost is so much higher, and the limited bandwidth per port doesn't degrade array performance because of striping. Right now we only pay a marginal cost of about $14 per port (SASLP and cables). I can't see port multipliers dropping that price much, and it will only increase complexity unless we start to see them integrated into backplanes or SATA expansion cards.

Link to comment

SATA 2 achieves less than 300MB/s. Split across 4 ports, that limits each port to less than 75MB/s. Almost all drives that would be found in an unRAID server would be throttled in that situation. SATA 3 split 4 ways has very little theoretical degradation. I don't know of anybody who has performed parity check tests with and without port multipliers; I'd love to see benchmarks.
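The division above can be sketched quickly. This is a back-of-the-envelope model only: it assumes the link's nominal bandwidth is split evenly among concurrently active drives and ignores protocol overhead.

```python
# Rough per-drive bandwidth when N drives behind one port multiplier
# are active at once and share a single SATA link evenly.
def per_drive_bw(link_mb_s, drives):
    return link_mb_s / drives

print(per_drive_bw(300, 4))  # SATA 2 link, 4 drives -> 75.0 MB/s
print(per_drive_bw(600, 4))  # SATA 3 link, 4 drives -> 150.0 MB/s
```

Real-world usable bandwidth is lower than the nominal link rate, so treat these as upper bounds.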

Link to comment

Good point. I could never get anything like the benchmark performance out of, say, a WD20EARS, but you're right, 4 drives on one SATA 2 port would have a theoretical bottleneck of 75MB/s each. However, 20 drives calculating parity at those speeds would be excellent.

 

The key is reality vs. theory.

 

It seems though that even some high port density SATA cards include port multipliers behind the scenes at a ratio of 1:4. I am just suggesting: imagine an ION board with 4 SATA ports that could be made to support 16 drives with no PCI* slots used.

 

Also, in theory, a lot of this will be far more driver-agnostic than SATA cards.

 

 

Link to comment

Yeah, TBH I don't know which chipsets support what at the moment; it's more a theory for discussion around the whole technology. There's also a new generation of stuff coming, I assume for the Ultrabook, that might make an appearance here.

 

Did you know that off the top of your head, or is there a definitive list I cannot find? I mean, it's no use at all if nothing supports it :)

Link to comment

Chipsets that Support Port Multipliers:

 

Silicon Image (of course).

SiI3132 - This is the common 2 port PCIe controller.

JMB363 - This is the common 2 port PCIe controller.

Marvell - I've seen this work with the 4 port Marvell, the 8 port Supermicro and the 8 port SATA in the DS520G.

 

Why are they not that popular? It's purely performance and price per port.

If you can find a board that supports 3-4 slots and get 4 port controllers, plus 6 on the motherboard, you are maxed out in a chassis.

A PMP PCI card could cost almost $80-90, and at that range you can get a nice 4-8 port controller.

 

If you want to exceed a chassis, port multipliers will work, but then you have the performance hit.

In my experiments, the best performance was with the silicon image chipsets.

 

http://www.siliconimage.com/products/product.aspx?pid=26

 

also

 

http://en.wikipedia.org/wiki/Port_multiplier

 

It all depends on command based switching or FIS based switching.

 

Command based switching reminds me of the days of P-ATA where two drives are on a cable, but in reality the CPU can only talk to one drive at a time.

 

In this layout I've seen throughput in the 60MB/s range.

 

For one drive this is not so much of a performance hit, but in parity generate or sync mode, successive drive access on the same PMP will hamper general throughput.

 

I suppose if you did a round robin of drives by staggering successive drives of the array to different controllers and PMP slots you may see an improvement, but I've not found this to be viable or a proven point.
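The round-robin idea can be reasoned about with a rough model. This is a sketch only, assuming FIS-based switching where each link's bandwidth is shared evenly among its active drives; command-based switching would be worse than this model suggests.

```python
# Model: each controller port has a PMP with some drives behind it.
# A parity check reads every drive at once, so it is paced by the drive
# with the smallest bandwidth share, i.e. the most heavily loaded link.
def parity_speed(drives_per_link, link_mb_s=300):
    return min(link_mb_s / n for n in drives_per_link)

print(parity_speed([4, 4]))        # 8 drives, 4 per PMP   -> 75.0 MB/s
print(parity_speed([2, 2, 2, 2]))  # same 8 drives spread  -> 150.0 MB/s
```

So staggering drives across more links should help in theory, but as said above, it's not a proven point in practice.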

Link to comment

You said ION but I think you meant Atom.  Most of the small boards like that use Intel's southbridge with ICH8/9/10 SATA ports... IIRC they don't support port multipliers!  :(

 

I've read (and that was a while back) that ICH9R and ICH10R can support port multipliers,

at least at the hardware level. Now the next level is the Linux kernel, and that support may be lacking.

 

My prior post lists chipsets I know work, as I've tested them.

My test of an ICH9 revealed that it did not work yet (ABIT AB9 PRO).

Link to comment

In another thread a user has purchased a huge 32 port SATA PCIe x8 card. It turns out this is just an 8 port SATA card with eight 4-port port multipliers built in.

Can you provide more data on this, or pointers to threads? I'm interested in reviewing it in further detail.

 

Sorry for the delay

 

http://lime-technology.com/forum/index.php?topic=15267.0

 

Hello all.  I'm the one that started that thread, and now a couple of others that are related.

 

http://lime-technology.com/forum/index.php?topic=15391.0

 

http://lime-technology.com/forum/index.php?topic=15426.0

 

I just jumped into the world of unRAID feet first last week, trying to move away from WHSv1.  And while I finally had some success compiling the HPT DC7280 as a kernel module, things are thus far not going well as far as actually using the danged HBA.  I've got a couple more weeks to play with it before I run out of time to return it to Amazon.  I want to make it work, but given my lack of Linux knowledge the clock may run out before the card works.  And that's assuming it can be made to work.

 

As for PMs in general, I have an old 4-port eSATA SiI3124 PCIe controller that I've already verified as functional within unRAID, so if I can't get this new HPT controller to work I may just try buying one of those Addonics 5x1 PMs to use with that controller.

 

If any of you following this thread happen to have any suggestions as to the DC7280 issues I've outlined in the other posts, please let me know!!!

 

Thanks, Kevin

 

P.S.  This is the 4-port eSATA controller I mentioned:  http://www.sansdigital.com/esata-port-multiplier/ha-dat-4espcie.html

Link to comment

My advice would be to go with a simpler HBA with existing support, versus the more advanced ones that require custom compilation and a custom interface.

 

With as much experience as I have, I would not go down the road with the HPT 32-port controller unless I had a real need for that many spindles.

 

I know the Areca controller is supported.  I have one in my array.

I know the later 3Ware cards are supported. I've tested it with success.

 

The downside of using these advanced controllers is that the spin-up/spin-down and SMART interface may not be accessible to unRAID or emhttp without additional programming by limetech.

 

With the Areca controller you can set a timer so the controller BIOS does the spin-down.

I did not find that on the 3ware controller.

 

Also, the 3ware controller requires special options in smartctl to access the SMART information.

You cannot access the smartctl information from the raw drive on the Areca controller unless you are in pass-through.

 

 

unRAID is one environment where I think KISS and point-to-point works best.

With so many drives and so much data, you want it to be simple to work with and simple to replace when needed.

 

Before purchasing the Addonics PMPs, weigh their cost vs the cost of a multi port PCIe controller.

Your biggest bang for the buck in performance is a good 8 port controller.

 

PMPs work, but you will be limited in speed. For a mid-sized array that should not be an issue.

But when you get into large 20-drive arrays, you don't want parity gen, check or rebuilds to run over a day.

 

As drives get bigger and arrays hold more drives this becomes more of an issue.
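To put rough numbers on "over a day": here's the arithmetic, a sketch assuming the check streams at a constant rate capped by the shared link (real checks slow down toward the inner tracks of a drive).

```python
# Hours to read one drive end-to-end at a sustained rate; a parity check
# over equal-size drives takes roughly this long in total.
def parity_hours(drive_tb, mb_per_s):
    return drive_tb * 1e6 / mb_per_s / 3600  # TB -> MB, then seconds -> hours

print(round(parity_hours(2, 75), 1))   # 2TB drives capped at 75MB/s  -> 7.4
print(round(parity_hours(2, 130), 1))  # 2TB drives at 130MB/s        -> 4.3
```

At 4TB per drive the 75MB/s case is already around 15 hours, which is why bigger drives make the bottleneck more painful.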

Link to comment

PMPs work, but you will be limited in speed. For a mid-sized array that should not be an issue.

But when you get into large 20-drive arrays, you don't want parity gen, check or rebuilds to run over a day.

 

As drives get bigger and arrays hold more drives this becomes more of an issue.

 

So given this statement, and the fact that I already have a pile of drives and data to store, even if I could get the DC7280 to work it could prove to be problematic since it's just a PM-based controller, correct?

Link to comment

 

I remember back when WHSv1 (2007-2009) was the storage craze for the home storage enthusiast.

 

My home Windows 2000 server with a Promise IDE RAID card and 8 x 250GB IDE drives in RAID 5 was full. I had a drive fail and decided I'd rather upgrade than sink more money into it, suspecting other drives were at EOL.

 

The first path I tried was SiI port multipliers. It was an expensive path back then and performance was horrid.

I will admit that back then it seemed on par with IDE, so I thought it was good at the time.

I remember an article on a web site shortly after my build where the guy did the same thing I tried.

http://www.homeserverhacks.com/2009/01/extreme-makeover-windows-home-server.html

 

http://www.homeserverhacks.com/2009/02/11tb-whs-part-ii-drive-reconfi

 

Anyway, it was beyond slow (I basically had 8 drives on PCI) and if I accessed data from 3 drives at once it basically gave up. For WHSv1 it was fine since it only did mirroring.

 

I have not looked at the current port multiplier hardware, but I am guessing the bottlenecks are about the same, considering you would put 4-8 drives on a PCIe 1x slot or 16 drives on a PCIe 4x slot.

 

As mentioned, you split 1 SATA port into 4 (or 5). Those ports don't have simultaneous read/write, last I checked. For single drive access this should not be an issue. For computing parity, this might be awful.

For a Windows or *nix server being used as a NAS, this should be fine. For a software RAID or a software parity RAID, this might be a bottleneck in one of the more important parts of a storage array.

 

The cost of 8+ port HBAs and motherboards with 2-4 PCIe 4x (and better) slots is so cheap in comparison. It would probably be cost-effective to just go the HBA route.

 

To answer a question asked earlier: my Intel servers' ICH9R SATA controllers do support port multipliers. I am not sure if it is limited to 2 or 4 ports.

 

Link to comment

An interesting topic and I'm enjoying the discussion, though I have little to add as I've never used port multipliers.

 

Motherboards with limited slots could still cater for loads of drives, leading the way for potential huge ION/Atom builds

 

My only reason for posting is in response to the above statement.  I too am a big fan of low-powered Atom-based NAS systems with immense capacities.  Here's my answer to that desire:

 

Supermicro X7SLA-H - $140

Kingston ValueRAM 2GB - $27

2 x Supermicro AOC-SASLP-MV8 - $184

4 x Forward Breakout Cables - $37


Total: $388

 

That comprises the core components of an extremely energy efficient Atom-based server with a 20 drive capacity for what I believe is a very reasonable price.  Add PSU, drive cages, server chassis, and drives as desired.  For a budget-friendly route, pair it with a Norco 4220 (subbing out the forward breakout cables for a single reverse breakout and four SAS cables) and a Corsair 650W PSU and you have a full server for under $1000.
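For comparison with the ~$14-per-port figure quoted earlier in the thread, the per-port cost of the connectivity portion of this build works out as follows (using the prices listed above):

```python
# Per-port cost of the SATA connectivity in the build above.
def cost_per_port(controllers_usd, cables_usd, ports):
    return (controllers_usd + cables_usd) / ports

# 2 x AOC-SASLP-MV8 ($184) + 4 forward breakout cables ($37) = 16 ports
print(round(cost_per_port(184, 37, 16), 2))  # -> 13.81
```

Which lands almost exactly on the $14/port marginal cost mentioned earlier, and is the bar any port multiplier setup would have to beat on price.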

 

I'm all for innovative new designs and trying new (and old) things.  However, I just haven't seen much need to introduce port multipliers into the mix when the current HBAs available are relatively inexpensive and readily available.  I can certainly see a case for trying out port multipliers in a part of the world where Supermicro products are not available, but at least in the US I think the above configuration is going to be less expensive and more problem-free than a similar solution using port multipliers.

 

Link to comment

Archived

This topic is now archived and is closed to further replies.
