
Why not more than 19 disks and why not SAS Expanders?


starcat


From a quick glance, it seems like its impact on unRAID would still be limited by the bus bandwidth of the original controller card. There might be some slight overhead involved, but it should be minimal.

 

Let's take a hypothetical situation involving a PCI Express controller card and a SAS expander. I hope I have my numbers correct here. Each lane in PCI Express 1.x provides bandwidth of 250 MB/sec. A 4-lane PCI Express slot provides 1000 MB/sec, an 8-lane slot 2000 MB/sec, and a 16-lane slot 4000 MB/sec.

 

A) 4 lane PCI Express: 8 drives connected, each drive could consume 125 MB/sec.

B) 8 lane PCI Express: 8 drives connected, each drive could consume 250 MB/sec.

C) 16 lane PCI Express: 8 drives connected, each drive could consume 500 MB/sec.

 

D) 4 lane PCI Express: 16 drives connected, each drive could consume 62.5 MB/sec.

E) 8 lane PCI Express: 16 drives connected, each drive could consume 125 MB/sec.

F) 16 lane PCI Express: 16 drives connected, each drive could consume 250 MB/sec.

 

Following this pattern, a 16-lane PCI Express slot with a controller card and SAS expander could easily handle 32 drives (125 MB/sec each) without being a limiting factor for current HDDs.

 

If PCI Express 2.0 is used, I believe the available bandwidth is doubled (500 MB/sec per lane).
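
A quick way to sanity-check the per-drive numbers above (a minimal sketch; it assumes only the 250 MB/sec per-lane figure for PCIe 1.x and 500 MB/sec for 2.0, with the slot's bandwidth split evenly across the attached drives, and the helper names are just illustrative):

```python
# Per-drive bandwidth estimate for a controller + expander on one PCIe slot.
# Assumes PCIe 1.x lanes at 250 MB/s (500 MB/s for 2.0) and an even split
# of the slot's bandwidth across all attached drives.

PCIE_LANE_MB_S = {"1.x": 250, "2.0": 500}

def per_drive_bandwidth(lanes, drives, gen="1.x"):
    """Slot bandwidth divided evenly across the connected drives."""
    return lanes * PCIE_LANE_MB_S[gen] / drives

for lanes in (4, 8, 16):
    for drives in (8, 16, 32):
        print(f"x{lanes} PCIe 1.x, {drives} drives: "
              f"{per_drive_bandwidth(lanes, drives):.1f} MB/s per drive")
```

With 32 drives on a 16-lane slot that still works out to 125 MB/sec per drive, which matches the conclusion above.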

Link to comment

SAS expanders are typically limited by their interconnect's throughput: 1.2GB/s per wide interconnect (2.4GB/s for SAS 2.0).

 

That works out to ten to twelve drives per wide port. The norm seems to be a 4x PCIe link per wide port, which ties in: PCIe is measured in data throughput, while SATA/SAS is quoted in bit throughput but uses 8b/10b encoding, so that 1.2GB/s is really about 1GB/s of usable data.
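
For reference, a rough sketch of one way to arrive at the 1.2 GB/s and 2.4 GB/s wide-port figures (assuming SAS 1.0 at 3 Gbit/s per lane, SAS 2.0 at 6 Gbit/s, four lanes per wide port, and 8b/10b encoding, i.e. 10 bits on the wire per 8 bits of data; the function name is just illustrative):

```python
# Usable data rate of a SAS wide port (4 lanes) after 8b/10b encoding.

def wide_port_gb_s(lane_gbit_s, lanes=4):
    line_gbit_s = lane_gbit_s * lanes      # raw bit rate on the wire
    data_gbit_s = line_gbit_s * 8 / 10     # strip 8b/10b overhead
    return data_gbit_s / 8                 # bits -> bytes

print(wide_port_gb_s(3.0))   # SAS 1.0 wide port: 1.2 GB/s
print(wide_port_gb_s(6.0))   # SAS 2.0 wide port: 2.4 GB/s
```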

Link to comment

For me the greatest benefit of SAS together with unRAID is that one can go via SFF cables to external cases containing their own expander card and PSU (per 24 drives). I think the bandwidth discussion isn't very relevant given the 50-80 MB/s max performance unRAID provides itself.

Link to comment

starcat, I will strongly disagree with that in one situation.

 

The only time I can see where bandwidth discussions are relevant, and why I brought it up in the first place, is for parity checks. Maybe I'm not as patient as you or the others. There's no way I could tolerate a system where it takes over 24 hours to do a parity check, let alone the 34 to 50 hours I saw someone talking about! Mine already takes 7.2 hours to complete. I'm using slow 2TB WD Green drives and my parity check averages 75 MB/sec; I've seen other systems that completed with an average check speed of 95 MB/sec. HDDs will be getting larger, and parity check times will keep growing as the array grows. After using unRAID for a while, I certainly won't be building future systems with self-imposed performance limitations.
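
For what it's worth, here is the rough arithmetic behind those times (a minimal sketch; it assumes the check has to read the full capacity of the largest drive at the average check speed, uses decimal TB as drives are sold, and the function name is just illustrative):

```python
# Rough parity-check duration: the check reads the full capacity of the
# largest drive, so duration ~= drive size / average check speed.

def parity_check_hours(drive_tb, avg_mb_s):
    drive_mb = drive_tb * 1_000_000    # decimal TB -> MB
    return drive_mb / avg_mb_s / 3600  # seconds -> hours

print(f"{parity_check_hours(2, 75):.1f} h")   # 2 TB at 75 MB/s -> ~7.4 h
print(f"{parity_check_hours(2, 95):.1f} h")   # 2 TB at 95 MB/s -> ~5.8 h
```

That lands close to the 7.2 hours quoted above, and it shows why larger drives stretch the check time even when the average speed stays the same.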

 

When you refer to the 50-80 MB/s max performance, which situations was that for: writes (50) and reads (80) across a network? It certainly isn't for parity checks.

 

I really do like the SFF to external cases benefit too.

Link to comment

My system takes about 4 hrs for a full parity check.

 

Wow, that's pretty fast. Mine takes around 6.5hrs. What's your set-up (e.g. # of drives, how are they connected to the controller, etc)?

 

Some of that is in his sig.  ;D

I would surmise that some of the performance is due to all being 7200RPM and all the same make.

I think it was Jim White who had some stellar performance numbers too.

Link to comment

My system takes about 4 hrs for a full parity check.

 

Wow, that's pretty fast. Mine takes around 6.5hrs. What's your set-up (e.g. # of drives, how are they connected to the controller, etc)?

 

Some of that is in his sig.  ;D

I would surmise that some of the performance is due to all being 7200RPM and all the same make.

 

Yeah. Saw the sig, but curious how he divided the drives among the controllers and how he has the AOC-SAT2-MV8 controllers connected to the motherboard. IIRC, the X7SBE has six onboard SATA ports and four PCI-X slots, two of which share a PCI-X 133 bus and the other two a PCI-X 100 bus.

 

I also have a homogeneous array (actually use the same drives he has), but on my ABIT AB9 Pro I only get 65MB/s parity check from start to finish, leading me to believe the bottleneck is one of the SATA controllers. Which one, I'm not sure. When I only had 5 drives, all connected to the ICH8R, I was getting ~110MB/s at the start, finishing at ~85MB/s. I remember reading an AB9 Pro-specific guide that mentioned one of the controllers needed to be set to IDE mode (either the JMicron or the SiI). Is it possible that that's slowing down the parity checks?

Link to comment

 

Yeah. Saw the sig, but curious how he divided the drives among the controllers and how he has the AOC-SAT2-MV8 controllers connected to the motherboard. IIRC, the X7SBE has six onboard SATA ports and four PCI-X slots, two of which share a PCI-X 133 bus and the other two a PCI-X 100 bus.

 

This would be interesting and helpful.

 

I also have a homogeneous array (actually use the same drives he has), but on my ABIT AB9 Pro I only get 65MB/s parity check from start to finish, leading me to believe the bottleneck is one of the SATA controllers. Which one, I'm not sure. When I only had 5 drives, all connected to the ICH8R, I was getting ~110MB/s at the start, finishing at ~85MB/s. I remember reading an AB9 Pro-specific guide that mentioned one of the controllers needed to be set to IDE mode (either the JMicron or the SiI). Is it possible that that's slowing down the parity checks?

 

I have the same board. I get around 55-65 MB/s on it. I use one drive on the JMicron controller and one on the Silicon Image. I also have two Masscool PCIe x1 controllers with drives on them.

 

I've always thought the motherboard ports were the slow ones, but it could be my older 1TB WD Green drives, which only have 16MB of cache.

Link to comment

I really like the idea of multiple arrays in a single machine. Parity checks do take a long time if you have 8+ data disks. Multiple arrays and SAS expanders seem a perfect match: one machine with multiple external enclosures.

 

Today I have two unRAID systems, mostly because I didn't want the hassle of getting a new case and migrating everything to it (and the risk of error during migration), and because I got an old computer for free from work (same Intel board as the original unRAID server). It seems wasteful to run two servers when one would have the required horsepower.

 

Roland

Link to comment

Guys, I have at the moment 4 drives connected to the mainboard, 6 to the first controller and 4 to the second controller. Both controllers are on the 133 MHz bus (it may matter with more drives installed; at the moment it doesn't really). I am eyeballing the 100 MHz bus and the PCIe x8 slot for LSI SAS3442X and SAS3442E cards to go to external cases. Let's see what unRAID 5 will bring.

 

Link to comment

I have the same board. I get around 55-65 MB/s on it. I use one drive on the JMicron controller and one on the Silicon Image. I also have two Masscool PCIe x1 controllers with drives on them.

 

I've always thought the motherboard ports were the slow ones, but it could be my older 1TB WD Green drives, which only have 16MB of cache.

 

That's still a possibility. *sigh* Makes me wish I had gotten an extra board when Newegg was selling open-box versions for ~$50.

 

Guys, I have at the moment 4 drives connected to the mainboard, 6 to the first controller and 4 to the second controller. Both controllers are on the 133 MHz bus (it may matter with more drives installed; at the moment it doesn't really). I am eyeballing the 100 MHz bus and the PCIe x8 slot for LSI SAS3442X and SAS3442E cards to go to external cases. Let's see what unRAID 5 will bring.

 

Thanks for this. PCI-X at 133 MHz is capable of 1067 MB/s throughput, so even with 10 drives that's still over 100 MB/s available per drive. PCI-X at 100 MHz is limited to 800 MB/s.
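
A quick sketch of where those bus figures come from (assuming the 64-bit bus width that PCI-X uses and that the bus bandwidth is shared evenly by the drives behind it; the function name is just illustrative):

```python
# PCI-X throughput = bus width (64 bits = 8 bytes) x bus clock, shared by
# every device on that bus segment.

def pci_x_mb_s(clock_mhz, width_bits=64):
    return clock_mhz * width_bits / 8   # MB/s

for clock, drives in ((133.33, 10), (100, 10)):
    bus = pci_x_mb_s(clock)
    print(f"PCI-X {clock:g} MHz: {bus:.0f} MB/s total, "
          f"{bus / drives:.0f} MB/s per drive with {drives} drives")
```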

Link to comment
