Anyone running a 24 disk array yet?



I have a Norco hotswap case that can hold 24 disks, and so far I am only using 20 of them. I know that the current versions of unRAID only support 20 disks, but Tom told me that he would provide me with a beta version for 24 disks when and if the need arose. Well, it looks like it is just about that time...:)

 

So before I take the plunge, I just want to know if anyone is already running a 24 disk array? And if so, is the beta software ready to handle the job?

 

Oh yeah, and before you ask, my parity drive is not mounted in any of the 24 hotswap slots and I am no longer bothering with a cache disk, so yes, I have room for a true 24 disk array.

Link to comment

Hey Oddwunn,

 

I also have the 24 bay Norco, but my parity drive is in the first slot.  Where do you have your parity drive mounted to allow you to use all 24 slots for data drives?  And if we have to use a second parity drive in order to have 24 data drives, where would you mount that one?

 

 

 

Link to comment
I also have the 24 bay Norco, but my parity drive is in the first slot.  Where do you have your parity drive mounted to allow you to use all 24 slots as data drives?

I have a relatively small Supermicro motherboard, so I have plenty of room in that compartment. I have three Supermicro 8-port PCIe controllers handling the 24 slots, and I am keeping all of those for data drives. That leaves me with four SATA ports on the motherboard, one of which controls my parity drive. The drive is mounted on its side in a 5.25" slot adapter (with fans), right behind the 4-fan wall in the same compartment as the motherboard, and tied to the case side so that it won't tip over accidentally. There is enough room in there that I could actually mount a couple more drives if I had the need.

 

I also built a homemade free-standing fan wall using 3 120mm fans side by side, sitting right in front of the 24 slots and connected to an internal power connector so that the wall turns on and off with the server. By making it freestanding, I can access the hotswap drives by simply moving the wall out of the way for a few seconds. The airflow all goes in the same direction, front to back, and with the extra fans blowing directly on the drives I was able to replace the LOUD rear fans with much quieter units while getting lower temperatures on all of my drives. OK, I know you didn't ask about cooling, but I was just so darned happy with making my server whisper quiet and cooler at the same time that I had to tell somebody...:)

 

 

Link to comment

This has been requested ad nauseam and Tom knows about the need.

As far as I know no one (save for Tom himself) has a version of unRAID that will support 24 or more drives.

 

Just curious: what is the limiting factor for bigger arrays? It should only be a #define somewhere in the header files. And yes, I fully understand the issues with rebuild times, single parity, expense, etc., but create a mega-pro license, change the #define, and everybody is on their own anyway when deciding how big the array should be.

 

With 50-bay cases like this one (http://www.chenbro.eu/corporatesite/products_detail.php?sku=45), why limit unRAID to 20 drives?

 

Link to comment

I also built a homemade free-standing fan wall using 3 120mm fans side by side, sitting right in front of the 24 slots...

 

Are you using 4x 80mm or 3x 120mm in that mid fanboard? Great idea on the front wall; I was also thinking about one, but then decided to go with 4500 rpm Delta 3x 120mm PWMs, and all temps instantly got really green  ;D

 

 

Link to comment

Because no sane person would trust over 25 drives to a single parity drive...

Why not leave this decision to each and every user? unRAID is not designed to be enterprise storage anyway. And what about people who keep two arrays, fully syncing one to the other?

I see multiple array support on the road map. While I don't see a need for 50 disks in my future, there was a time I would have said the same about 8.
Link to comment
Are you using 4x 80mm or 3x 120mm in that mid fanboard?

4X80mm

Great idea on the front wall, I was also thinking about one, but then decided to go with 4500rpm DELTA 3x 120mm PWMs and all temps instantly got really green

Hmmm... I didn't know that option existed. Where did you get them (if it is OK to ask that question here)? And are they quiet, or incredibly loud like the stock rear fans?

 

Link to comment
I see multiple array support on the road map. While I don't see a need for 50 disks in my future, there was a time I would have said the same about 8.

 

There have been some ideas about connecting several servers behind one "master", essentially letting the master virtualize all the others through its (single?) network interface, with obvious performance questions. There are also no plans to support SAS expanders, so at the moment there is no planned way to scale beyond a single box in a viable, performant manner.

 

 

Link to comment
  • 3 weeks later...

I have a backblaze storage pod:

 

http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

 

Is there any way I could use the unRAID technology on it? I don't mind purchasing two Pro licenses; I'm just wondering what the options are, if any.

Currently you would need to create multiple unRAID arrays if you wanted to use all the drives.  I don't know what would be involved in trying that on the same system, but I'm sure someone else does.

Link to comment

This has been requested ad nauseam and Tom knows about the need.

As far as I know no one (save for Tom himself) has a version of unRAID that will support 24 or more drives.

 

Just curious as to what is the limiting factor for bigger arrays? Should be only a define somewhere in the header files. And yes, I fully understand the issues with rebuild, single parity, expensive, etc... but create a mega-pro license, change the #define and anybody is on its own anyway with decision about how big the array should be.

Are you offering your services to develop a 24+ drive management OS with redundancy features like unRAID?  If not, why not let the developer worry about it, instead of deliberately goading him?  It's on the roadmap.  Are you guys that desperate for space that you have to have 48TB today?  Or is it just an aesthetic "I need to have my drive bays filled because it looks cool" wish?  Wouldn't it be better if he could crack the 2.1TB limit and get the 3TB drives going?  I can't believe there are that many people that think it's a great idea to trust their data spread across 20+ drives instead of fewer.  To me, the logical solution for higher reliability is density, not quantity.

Link to comment

Currently you would need to create multiple unRAID arrays if you wanted to use all the drives.  I don't know what would be involved in trying that on the same system, but I'm sure someone else does.

That's easy. It would require two motherboards.  unRAID currently cannot handle 24 drives in the protected array; it is limited to parity + 20 data + cache, a total of 22 drives.

 

There is no item in the roadmap to allow multiple arrays in the same server, hence the need for multiple motherboards.

Link to comment

Are you offering your services to develop a 24+ drive management OS with redundancy features like unRAID?  If not, why not let the developer worry about it, instead of deliberately goading him?  It's on the roadmap.  Are you guys that desperate for space that you have to have 48TB today?  Or is it just an aesthetic "I need to have my drive bays filled because it looks cool" wish?  Wouldn't it be better if he could crack the 2.1TB limit and get the 3TB drives going?  I can't believe there are that many people that think it's a great idea to trust their data spread across 20+ drives instead of fewer.  To me, the logical solution for higher reliability is density, not quantity.

 

Enthusiastic +1

 

In order to recover from a disk failure, parity plus every other data disk in the array has got to be perfectly healthy.  Every disk you add to the machine increases the chances that one of the disks, while doing a recover, would fail in some way.  So every disk you add dilutes the redundancy of your single parity disk.  20 data disks, IMO, is about the most you want to dilute it.  Now P+Q parity will make a big difference, and allow for safer expansion to higher drive counts.  Until then, you might want to consider setting up a second array rather than pushing to expand beyond 20.
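The dilution argument above can be made concrete with a toy calculation. This is only a sketch: the per-disk survival probability p is an assumed illustrative figure, not a real drive statistic from the thread.

```python
# Sketch of how extra data disks dilute a single parity disk's protection.
# Assumption (not from the thread): each disk independently survives a
# full rebuild with probability p. Rebuilding one failed disk requires
# parity plus every other data disk to stay healthy the whole time.

def rebuild_success_probability(data_disks: int, p: float = 0.99) -> float:
    """Chance that the parity disk and the (data_disks - 1) surviving
    data disks all hold up through a rebuild of the failed disk."""
    # parity disk + (data_disks - 1) remaining data disks = data_disks disks
    return p ** data_disks

for n in (8, 20, 24):
    print(f"{n:2d} data disks: {rebuild_success_probability(n):.3f}")
```

With p = 0.99 the rebuild-survival figure falls from roughly 0.92 at 8 data disks to about 0.79 at 24, which is the intuition behind capping a single-parity array around 20 drives.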

Link to comment

I would run a Backblaze pod through some serious hoops before I tried anything cute with it, because you want to isolate hardware issues from software ones.

 

I can only imagine the voltage drop on the 12V rail when all those drives spin up at once; I don't see support for staggered spin-up on the SATA cards they use. If you read their blog, they even recommend a sequenced power-up process.
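A rough sense of why simultaneous spin-up worries people: multiply drive count by per-drive spin-up current. The ~2 A spin-up and ~0.6 A running figures on the 12 V rail are assumptions typical of 3.5" drives of that era, not numbers from the post; 45 is the Backblaze pod's drive count.

```python
# Back-of-the-envelope 12 V inrush estimate for a pod spinning up all
# drives at once, versus a staggered (sequenced) power-up.

DRIVES = 45            # Backblaze pod drive count
SPINUP_AMPS_12V = 2.0  # assumed per-drive spin-up current on the 12 V rail
IDLE_AMPS_12V = 0.6    # assumed per-drive current once already spinning

# All at once: every drive draws full spin-up current simultaneously.
all_at_once = DRIVES * SPINUP_AMPS_12V

# Staggered: one drive spins up while the drives already running just idle.
staggered_peak = SPINUP_AMPS_12V + (DRIVES - 1) * IDLE_AMPS_12V

print(f"all at once: ~{all_at_once:.0f} A, staggered peak: ~{staggered_peak:.0f} A")
```

Under these assumptions the simultaneous case peaks around 90 A on the 12 V rail, while a sequenced start stays near 28 A, which is why Backblaze recommends staggering.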

Link to comment

Voltage drop is a function of wire size / length and amperage.

 

http://www.playtool.com/pages/psuconnectors/connectors.html#peripheral

 

I don't know of any official definition of the maximum current allowed in a peripheral cable. The connector can handle 13 amps according to the manufacturer, but you normally find 18 AWG wire in peripheral cables. If you have an 18 inch cable (about half a meter) and are running 13 amps through 18 gauge wire, then you get a voltage drop of about 0.25 volts counting both the power wire and the ground (it's got to go both ways), and the dissipation is about 3.3 watts. That's not good. I've just played it safe and listed the maximum current as 5 amps.
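The arithmetic in that quote checks out; here is the same calculation spelled out. The 18 AWG resistance figure is a standard wire-table value; the 13 A current and 18-inch length come from the quoted example.

```python
# Voltage drop and heat for a peripheral power cable, per the quote above.
# 18 AWG copper is roughly 6.385 ohms per 1000 ft (standard AWG table).

OHMS_PER_FT_18AWG = 6.385 / 1000

def drop_and_dissipation(amps: float, cable_ft: float) -> tuple[float, float]:
    # Current flows out on the power wire and back on the ground wire,
    # so the effective resistance path is twice the cable length.
    r = OHMS_PER_FT_18AWG * cable_ft * 2
    v_drop = amps * r          # V = I * R
    watts = amps * v_drop      # P = I * V, dissipated as heat in the wire
    return v_drop, watts

v, w = drop_and_dissipation(13, 1.5)  # 13 A through an 18-inch (1.5 ft) cable
print(f"drop ≈ {v:.2f} V, dissipation ≈ {w:.1f} W")
```

This reproduces the ~0.25 V drop; the dissipation comes out near 3.2 W, in the same ballpark as the 3.3 W quoted.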

Link to comment
