Add support for 24 array drives



I think that using/adding 500GB drives in an unRAID server is a band-aid. Not only are they undersized compared to the new offerings, they are slower more often than not, use more power, and are more than likely reused from much older builds, so they will most likely not have much life left.

When I built my second unRAID server, I used 1TB, 1.5TB, and 2TB drives reused from my old unRAID server. 3 of the 5 1.5TB drives, all 3 of the 1TB drives, and 1 of the 2TB drives have failed.

I am not saying to toss them, but be prepared to upgrade to new ones sooner rather than later.

Link to comment

Whilst I agree that people should deprecate 500GB drives in general, I am not sure we should be telling them "you're silly to do this".

 

If you had a bunch of 500GB drives (I personally have 40+ of them) and the hardware and license to physically install them, then there is every reason in the world to use them. Imagine a backup box with 10TB of 500GB disks that's turned on just when needed. That's a good chunk of space and a good spindle count. In theory, if only one single drive ever fails at one time (the plague of unRAID), this is a more reliable solution than a smaller number of larger drives.
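(Aside, for anyone new to unRAID: single parity is just an XOR across all the data drives, which is exactly why one failure at a time is survivable and two are not. A toy illustration, nothing unRAID-specific; the values are made up:)

```python
# Illustrative only: single XOR parity, the reason "only one drive
# may fail at a time" holds. Each value stands in for one "drive".
data = [0b1010, 0b0110, 0b1111]

parity = 0
for d in data:
    parity ^= d          # parity byte = XOR of all data bytes

# Lose any ONE drive and it can be rebuilt from parity + the survivors:
rebuilt = parity ^ data[0] ^ data[2]
assert rebuilt == data[1]

# Lose TWO drives and the same equation has two unknowns: unrecoverable.
```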

 

I am not advocating this, but my point is just that because most people use unRAID a certain way, it doesn't make the other ways any less valid.

 

That being said, it's quite clear that expanding the array is no trivial task and not happening in this cycle.

Link to comment

Well, since this is not happening soon  :-\

 

I took a dive into FlexRAID...

 

I was close to buying a second unRAID license, but I can't justify the cost of another license and starting up another server... the cost of electricity...

 

I installed all my old drives in my main Windows machine and am running a FlexRAID pool with real-time Cruise Control...

 

The Windows machine was on 24/7 anyway,

and the FlexRAID developer has made serious progress since he started selling the thing...

AND, last but not least, the license is cheaper than unRAID's, there are no more limits on drives... and it has P & Q parity.

 

We will see what time tells us... I like unRAID, but the drive limit is not a good thing if you don't want to spend money every few weeks on larger drives and throw away perfectly good drives...

Link to comment

Seems the main driver for 24 disks is the relatively common enclosure size.

 

Can I suggest that a better metric would be card + motherboard port count: three SAS cards = 24, motherboard = 6/8, for a total of 30/32.

 

At some point the numbers get a bit worrying for the sheer scale of a single array, but why not just put the number over the top now and place this artificial limit beyond any reasonable medium-term requirement?

 

That way those brave enough can report back, and it won't hurt anyone to see a Slashdot post of someone showing off their 128TB box.

 

We might even see some chassis coming out to match the need.

 

Also, having 24 array disks leaves 2 identifiers: one for the flash, one for the cache.  But then I can't have mirrored cache or mirrored flash without taking away from the 24 array devices.

 

The other reason is that I need to actually test that everything works with that many drives - it would be nice to test anyway :)  Other things increase linearly with drive count, such as the buffers required, etc.

 

My point is that it's more than just changing a constant from "21" to "24" to make this change, and to go beyond 24 is way more changes.

 

Re: Mirrored Cache/Mirrored Flash.

 

I would think people would want access to more drives first.

When would this mirroring functionality actually be available in unRAID?

 

For people who really want mirrored cache, an Areca PCIe x1 controller is a viable mechanism.

For mirrored flash, I'm not so sure this is worth it at the current time.

 

Mirrored flash, or should I say "rsync/backup" flash, could be accomplished with a more advanced layout of the cache drive.

 

Heck, you could even possibly boot from the cache drive. If emhttp only cared that the licensed flash drive was connected to the system (like a dongle) rather than being mounted on /boot, it could ease a number of issues.

 

Then we do have the ability to use PATA, or SATA configured as IDE, for /dev/hd? access.

 

I think bumping array functionality to 24 drives is feasible and if other drives are unavailable because of it, then that's an issue a person has to deal with until more advanced functionality is ready.

 

Link to comment

...

 

I think bumping array functionality to 24 drives is feasible and if other drives are unavailable because of it, then that's an issue a person has to deal with until more advanced functionality is ready.

 

This would likely cause a number of general support threads such as "I can't see diskX after adding another drive" or "My parity drive is gone!", as well as who knows how many other issues. If additional drive support is going to be incorporated, it should be done properly (and tested).

Link to comment

...

 

I think bumping array functionality to 24 drives is feasible and if other drives are unavailable because of it, then that's an issue a person has to deal with until more advanced functionality is ready.

 

This would likely cause a number of general support threads such as "I can't see diskX after adding another drive" or "My parity drive is gone!", as well as who knows how many other issues. If additional drive support is going to be incorporated, it should be done properly (and tested).

 

Well of course it would need to be tested.

 

Point is, it's feasible, albeit with the potential max-drive issue.

With 26 drive slots available, you may have to forgo cache or spare hard drive slots if you need that many data drives.

 

It's just a discussion. That's how ideas are born.

 

I think more people would want an extra data drive vs. a software-mirrored cache or a software-mirrored boot flash (i.e., until we can support drive counts larger than 26). We've existed this long without it.

 

Might be worth a poll.

Link to comment

Sorry Weebo, I didn't mean for that to sound snarky ;D

 

I think polls are a great idea as long as they are worded properly. If we had a poll that was simply "Would you like unRAID to support 24 drives?" then we would already know how that would turn out. At the moment SF is running a poll as to what features to work on (http://lime-technology.com/forum/index.php?topic=12698.0) and this is something that I think people are responding well to. It helps the developer understand what the community is looking for most. I made the suggestion that we could have a separate (release) thread purely for feature requests, where a poll would be useful to prioritise tasks.

 

Must be hard for Tom to decide what to work on as a one-man army, considering how active this forum is 8)

 

 

Link to comment

For those wanting double parity, is there any reason you couldn't get SnapRAID running on top of unRAID? You'd probably end up with 3 parity drives (1 unRAID, 2 SnapRAID for double parity, as they probably can't share). It's not realtime, but it would be something for that extra peace of mind.
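In principle nothing stops you: SnapRAID just wants a list of directories to protect, so you'd point it at the individual disk mounts rather than the user shares. A rough sketch of the snapraid.conf involved; the paths are hypothetical and the keyword names are from SnapRAID's documented config format, so check them against whatever version you run:

```
# Two SnapRAID parity files on two dedicated disks (outside the unRAID array)
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity

# Where SnapRAID keeps its checksum/content lists (keep copies on 2+ disks)
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# The unRAID data disks to protect
disk d1 /mnt/disk1/
disk d2 /mnt/disk2/
disk d3 /mnt/disk3/
```

You'd then run `snapraid sync` on a schedule. It's snapshot-style, so anything written since the last sync isn't covered - hence "not realtime", as noted.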

Link to comment

My 20 drive array is so full right now, all 2TB and 3TB drives. Current hard drive prices don't warrant upgrading for 1TB more usable space.

 

I would kill for 22 data drives, cache, parity right about now.  The flood really hurt me, I was getting data at about the same rate as hard drives were growing, but now hard drives are the same size and the same price as they were a year ago. I was really expecting 4TBs to be cheap by now, and 5TBs to be coming out shortly.  It's probably not even going to happen in 2012 now. :-\

 

I'm comfortable with 22 data drives and 1 parity; most of my data I can get again in the rare case I have 2 failures at once. I hope we get the option in 5.1. IMO, anything past 22 data drives is really, really risky, and this still allows us to fill our 24-drive cases. If Norco decided to make some 28-drive cases, then I could see 24 data drives with dual parity/cache, but there aren't any consumer cases that support that many and I don't see dual parity happening all that soon. 22 data seems right.

Link to comment

My 20 drive array is so full right now, all 2TB and 3TB drives. Current hard drive prices don't warrant upgrading for 1TB more usable space.

 

I would kill for 22 data drives, cache, parity right about now.  The flood really hurt me, I was getting data at about the same rate as hard drives were growing, but now hard drives are the same size and the same price as they were a year ago. I was really expecting 4TBs to be cheap by now, and 5TBs to be coming out shortly.  It's probably not even going to happen in 2012 now. :-\

 

I'm comfortable with 22 data drives and 1 parity; most of my data I can get again in the rare case I have 2 failures at once. I hope we get the option in 5.1. IMO, anything past 22 data drives is really, really risky, and this still allows us to fill our 24-drive cases. If Norco decided to make some 28-drive cases, then I could see 24 data drives with dual parity/cache, but there aren't any consumer cases that support that many and I don't see dual parity happening all that soon. 22 data seems right.

 

TBH if you need more space than that you don't really have much choice but to build a second server. It will use more power yes but you obviously have a need for it. One option would be to move any "rarely accessed" data to the new server so it can remain off for the majority of the time. Chances are you don't need to access ~40TB worth of files every day anyway :)

Link to comment

 

TBH if you need more space than that you don't really have much choice but to build a second server. It will use more power yes but you obviously have a need for it. One option would be to move any "rarely accessed" data to the new server so it can remain off for the majority of the time. Chances are you don't need to access ~40TB worth of files every day anyway :)

 

+1

 

That is exactly what I was thinking... I was going to ask the same question and Mr. soup already asked...

 

While I know I am guilty of it myself: how much of my 60TB server do I use in a week, a month, or even a year?

 

Other than the physical space (and cost), I see no reason why I and many others can't build a stable second (or third) server from spare parts and drives, and use that one for files you only use once a year or are archiving for the long term.

This is a great use for your pile of 250GB, 500GB, and 1TB drives.

 

I am really considering a new unRAID server for my primary (basically downsizing my main) and migrating most of my "not so used" data to a box that spends the majority of the time powered off.

 

You can't be a digital hoarder and then complain about the hoarding limitations... build bigger, larger storage... delete old data... archive older data...

 

Ever buy more groceries than will fit in your fridge? You found a way to get them in there... you tossed out or ate old food (delete/purge)...

 

Having a physical cap does keep me paying attention to the stuff I keep and forces some housekeeping.

 

 

As far as larger arrays:

I think adding 2 drives to make it 22 is a good idea. With the number of Norco 24-bay units running unRAID, this would cater to them.

 

I would love to see a 2-drive parity system down the road... (this would now over-fill that 24-bay server).

 

We could always revisit Tom's idea of linking multiple unRAID servers with a single mount point.

 

 

Link to comment

 

TBH if you need more space than that you don't really have much choice but to build a second server. It will use more power yes but you obviously have a need for it. One option would be to move any "rarely accessed" data to the new server so it can remain off for the majority of the time. Chances are you don't need to access ~40TB worth of files every day anyway :)

 

+1

 

That is exactly what I was thinking... I was going to ask the same question and Mr. soup already asked...

 

While I know I am guilty of it myself.  how much of my 60TB server do I use in a week, month or even a year?

 

 

I use my unRAID box for storing movies that are played back with a Popcorn Hour.  When I go into the theater I want to select any movie and not worry whether that movie is stored on a server that is not powered up.  The ability to have my entire collection online all the time using less than 100W is one of the biggest advantages I see with unRAID.

Link to comment

 

TBH if you need more space than that you don't really have much choice but to build a second server. It will use more power yes but you obviously have a need for it. One option would be to move any "rarely accessed" data to the new server so it can remain off for the majority of the time. Chances are you don't need to access ~40TB worth of files every day anyway :)

 

+1

 

That is exactly what I was thinking... I was going to ask the same question and Mr. soup already asked...

 

While I know I am guilty of it myself.  how much of my 60TB server do I use in a week, month or even a year?

 

 

I use my unRAID box for storing movies that are played back with a Popcorn Hour.  When I go into the theater I want to select any movie and not worry whether that movie is stored on a server that is not powered up.  The ability to have my entire collection online all the time using less than 100W is one of the biggest advantages I see with unRAID.

 

Same. I am not tearing that joy down.

 

But... my movies are only half of my 60TB array (so far). That's why I am considering downsizing, or at least cleaning off my non-movie-related gunk... fewer idle spindles is still power saved.

 

At some point many of us will need more than one media server due to the sheer quantity of movies we have purchased.

 

I am thinking more about the guy who has 10 years of torrents stored up, or backups of backups of backups, mixed in with his media... not everyone uses unRAID for just media... split the servers.

 

For example, I have about 2 million photos of trains. I do not need those very often... I can put them on a server that is off or sleeping.

 

I also know some people have small apartments or just a room... space can be an issue.

In addition, a 40-drive array, even with double parity, seems inherently dangerous...

Link to comment

My opinion is that, for the cost of unRAID and how it's used by probably the majority of customers, requesting and expecting support for these uber-servers is extravagant (yes, I will flat out say a 60+TB, 20+ drive server is extravagant in a DIY solution).

 

Sure, I can see supporting the most common "big" server configs: 20-24 drive 4U cases, full towers with 5-in-3 cages.  Even then, though, it's low priority.

 

Though after that, I don't think additional expansion should be expected.  At that point you are reaching the level of other types of extravagant situations (like mansions with gift-wrapping rooms).  As many know, extravagant solutions are generally overly expensive... thus why it's "extravagant".

 

I would suspect that even if the limit was bumped to 24 drives, the issue would come up again once those are filled in 3-6 months (I suspect new-data rates are high among 60TB server users).

 

I would even say it would be a liability for unRAID to support more; there is a reason why 24 drives is a standard in professional NAS solutions (which cost many more K's than unRAID + hardware).

Link to comment

I agree with that statement. At some point you have to either increase your density with larger drives or build a second box (or third, fourth box).

 

In an enterprise setting, we just add more servers to the cluster. Not larger servers.

 

 

Sent from my iPhone using Tapatalk

Link to comment

Here's what I personally would like to see:

 

1) Support for 24 array drives (1 parity and 23 data) included in 5.0 stable, whenever that may be available.

2) Support for more than 24 array drives in 6.0 beta1 - to be used on test drives (and data) only!

3) Support for diagonal parity or similar scheme for 2 drive fault tolerance in 6.0 beta2

4) Support for more than 24 array drives and diagonal parity released concurrently in 6.0 stable.

 

Given the path that 5.0 has taken from beta to RC and, hopefully soon, to stable, I would estimate that steps 2-4 above would probably take 6 months to a year after 5.0 stable comes out. I'm perfectly comfortable with that.
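To give a flavour of what "diagonal parity or similar" two-drive fault tolerance involves (and why it's a far bigger job than bumping a drive-count constant): RAID-6-style schemes add a second parity Q computed over the Galois field GF(2^8), alongside the plain XOR P. A toy sketch, purely illustrative and nothing to do with unRAID's actual internals:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) using the RAID-6 polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D        # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        b >>= 1
    return p

def pq_parity(stripe):
    """P = XOR of all bytes; Q = XOR of g^i * byte_i with generator g = 2."""
    p, q, g = 0, 0, 1
    for d in stripe:
        p ^= d
        q ^= gf_mul(g, d)
        g = gf_mul(g, 2)     # advance g^i for the next drive
    return p, q

stripe = [3, 7, 1, 9]        # made-up bytes, one per data drive
p, q = pq_parity(stripe)

# P alone rebuilds any single lost byte, exactly like today's parity:
assert stripe[2] == p ^ stripe[0] ^ stripe[1] ^ stripe[3]
# P and Q together give two independent equations, enough to solve
# for any TWO lost bytes - that's the double-fault tolerance.
```

Single parity is a trivial XOR loop; this needs field multiplies on every write, which is why it would be a rewrite of how parity is calculated rather than an incremental tweak.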

Link to comment

Here's what I personally would like to see:

 

1) Support for 24 array drives (1 parity and 23 data) included in 5.0 stable, whenever that may be available.

2) Support for more than 24 array drives in 6.0 beta1 - to be used on test drives (and data) only!

3) Support for diagonal parity or similar scheme for 2 drive fault tolerance in 6.0 beta2

4) Support for more than 24 array drives and diagonal parity released concurrently in 6.0 stable.

 

Given the path that 5.0 has taken from beta, RC and hopefully soon to stable, I would estimate that steps 2 - 4 above would probably take 6 months to a year after 5.0 stable comes out. I'm perfectly comfortable with that.

 

I would swap numbers 2 and 3.  Once you get anything close to 24 drives, it seems getting double-fault protection is more important than getting more drives in.

 

Just my .02.  I'm sure others have different priorities and opinions.

Link to comment

Here's what I personally would like to see:

 

1) Support for 24 array drives (1 parity and 23 data) included in 5.0 stable, whenever that may be available.

2) Support for more than 24 array drives in 6.0 beta1 - to be used on test drives (and data) only!

3) Support for diagonal parity or similar scheme for 2 drive fault tolerance in 6.0 beta2

4) Support for more than 24 array drives and diagonal parity released concurrently in 6.0 stable.

 

Given the path that 5.0 has taken from beta, RC and hopefully soon to stable, I would estimate that steps 2 - 4 above would probably take 6 months to a year after 5.0 stable comes out. I'm perfectly comfortable with that.

 

I would swap number 2 and 3.  Once you get anything close to 24 drives, it seems getting double fault protection is more important than getting more drives in.

 

Just my .02.  I'm sure others have different priorities and opinions.

 

I see your point, but I purposefully put them in that order due to my perception of the difficulty of each step. Adding support for more than 24 array drives does have its complications, as Tom explained here, but it is not nearly as large a task as rewriting the way that unRAID calculates parity. The order of development I suggested would allow users with test arrays (and test data!) to troubleshoot the >24 drive support while the diagonal parity feature is being developed. I think this would be the most streamlined approach, but that is of course just my perception and opinion.

Link to comment

I would also like to use my Norco 4224 to the full, so please... parity and 23 data drives would be very much appreciated in 5.0.

 

Bigger is not needed for me; I would like to build a second box and daisy-chain these:

 

"Support daisy-chained servers

 

Requires two ethernet ports per server. Primary server connects to network. Secondary server connects to the second ethernet port of the primary server. All user shares on the secondary server appear integrated with the primary server. All system management for both servers is done through the primary server. Not sure how many such servers could realistically be daisy-chained (probably a lot)."

 

They don't need to be physically daisy-chained for me; better would be each having its own network cable to the switch (more speed), but integrated user shares and management through (a) primary server is a need.

 

 

Link to comment

I see your point, but I purposefully put them in that order due to my perception of the difficulty of each step. Adding support for more than 24 array drives does have its complications, as Tom explained here, but it is not nearly as large a task as rewriting the way that unRAID calculates parity. The order of development I suggested would allow users with test arrays (and test data!) to troubleshoot the >24 drive support while the diagonal parity feature is being developed. I think this would be the most streamlined approach, but that is of course just my perception and opinion.

 

OK, I'll buy that.  I can see how double-fault is much more complicated.  That is a more logical flow from a development standpoint.

Link to comment

 

TBH if you need more space than that you don't really have much choice but to build a second server. It will use more power, yes, but you obviously have a need for it. One option would be to move any "rarely accessed" data to the new server so it can remain off for the majority of the time. Chances are you don't need to access ~40TB worth of files every day anyway :)

 

+1

 

That is exactly what I was thinking... I was going to ask the same question and Mr. soup already asked...

 

While I know I am guilty of it myself.  how much of my 60TB server do I use in a week, month or even a year?

 

 

I use my unRAID box for storing movies that are played back with a Popcorn Hour.  When I go into the theater I want to select any movie and not worry whether that movie is stored on a server that is not powered up.  The ability to have my entire collection online all the time using less than 100W is one of the biggest advantages I see with unRAID.

 

This is why I have not built a 2nd server too! It would be twice as much to manage, twice as many failures, and twice as much power consumption.

 

I was debating building a 2nd server and separating my TV shows and movies... but I really don't have the money right now to buy another server, and managing 2 servers feels like a major headache. I feel I could get by another 6+ months with 6TB more of free space, and by then I see 4TBs being a little more affordable. I don't feel I need an entire other server; if the flood had never happened, I'd still be set.

 

I would also like to use my Norco 4224 to the full, so please... parity and 23 data drives would be very much appreciated in 5.0.

 

Bigger is not needed for me, would like to build a second box and daisy chain these

 

"Support daisy-chained servers

 

Requires two ethernet ports per server. Primay server connects to network. Secondary server connects to second ethernet port of primary server. All user shares on secondary server appear integrated with primay server. All system management for both servers is done through primary server. Not sure how many such servers could be realistically daisy-chained (probably a lot). "

 

Don't need to be physically daisy-chained for me, better would be each their own network cable to the switch (more speed), but integrated user shares and managment through (a) primary server is a need.

 

If this was possible, and I know it was on the "to-do" list in unRAID's wiki, I would definitely save the money to mirror my current server and build another with the same parts.

Link to comment
