Add support for 24 array drives



+1 for (1) parity and (23) data drives

In most cases it would be (1) parity, (1) cache and (22) data

With 24-drive enclosures being popular and 3-4TB drives common, this makes for an unbeatable offering, bundled with spin-down support.

 

Also, was the plug-in manager on the original 5.0 to-do list? That way everyone could follow one standard for plugins.


I agree. Please give us 22 data disks, parity, and cache so it can be tested before 5.0 final. It would save me so much money, as I'm currently upgrading 2TB drives to 3TB and I'm still nearly out of space. (Never thought I'd need over 52TB!)

 

 

Edit: the next part was fixed in 12a:

It would be nice to be able to preclear more than one disk at a time on my 20 data disk, parity, and cache system too. It seems like unRAID refuses to detect more than 23 drives at once, even if they aren't in the array. Could you make unRAID detect like 30 drives? I constantly upgrade 2-3 drives at the same time, and having to preclear them one at a time makes it a very lengthy process.


You can. I just start another telnet session and give the command on another drive. I did three at a time.


EDIT: Seems this was fixed in 12a.

 

That only applies if you aren't already using 22 drives; unRAID won't detect more than 23 drives at the same time. I have 22 drives in the system (20 data, cache, parity). After you add one more, you are at the max. If you add any more, it won't detect some drives. I posted about this and pretty much everyone told me "unRAID only detects up to 23 drives". Unless this changed in one of the recent betas; going over the change logs, it hasn't.

 

I have a 1200W PSU in there with far more amps on the 12V rail than even 50 green drives would need. All 24 slots are connected to 3x AOC-SAT2-MV8 cards.


From the change logs:

 

Changes from 5.0-beta12 to 5.0-beta12a

--------------------------------------

- emhttp: inventory up to 26 total storage devices

 

Is that what you are looking for?


Sweet, no idea how I missed that. Thanks!

 

That means if we ever do get 22 data drives, hopefully soon, you could still preclear 2 drives at once.


With respect, is this really the place to moan about the number of drives supported? This is the RC3 thread. For owners that have been awaiting a stable 5.0 release for some time, it becomes frustrating to see RC threads so close to final being littered with requests for further features that have nothing to do with ironing out the last few little bugs/issues. This is even more frustrating when people come along saying they require 24 disks and more than "52 terabytes" as stated above. Does this not strike you as slightly selfish? What about those that just want a working, stable 5 release for just a few TB?

 

Great to see RC3...

 

Can we have some more tests on parity, write speeds and NFS? :)

 


Not a single person was moaning here. Everyone is just making suggestions and giving input on what they hope to see. I never said I require 24 disks, I said it would be nice. I fail to see how anyone in this thread has made an even remotely "selfish" statement. With respect, if any post in this thread is selfish, it's yours. Your attitude is pretty much "I don't need 24 disks, so if someone else wants it, they can wait!". Very ironic, no offense.


I shouldn't have used the word "moan", I apologize. I'd just like to see NOTHING changed unless it is to address a direct bug/issue with basic features working in version 5, because any change, no matter how minor, can introduce new issues, as you all know.

  • 2 weeks later...


IMO this is the perfect place for this thread since Tom rearranged everything. It is a feature request, yes, and it has been labelled as such (icon). Tom has already stated that he is looking into it, and I'm sure the reason we haven't heard anything is that it most likely isn't, and shouldn't be, a high priority right now. I personally think this should be a 5.1 addition unless it is an "easy" change from a coding point of view (only Tom would know that). I don't have a need for this many drives but can see the benefit for those running a 24-bay enclosure. The additional drive support could be added in 5.1 since this is now an RC.


Seems the main driver for 24 disks is the relatively common enclosure size.

 

Can I suggest that a better metric would be card + motherboard port count: three SAS cards = 24, motherboard = 6/8, for a total of 30/32.

 

At some point the numbers get a bit worrying for the sheer scale of a single array, but why not just put the number over the top now and place this artificial limit beyond any reasonable medium-term requirement?

 

That way those brave enough can report back, and it won't be bad for anyone to see a /. post of someone showing off their 128TB box.

 

We might even see some chassis coming out to match the need.


Linux assigns a letter to each subsystem device. So for hard drives accessed via the SCSI subsystem (which includes SCSI, SATA, SAS, USB, and probably others), individual identifiers are "sda", "sdb", ..., "sdz" for a total of 26 devices; then device 27 becomes "sdaa", device 28 becomes "sdab", etc. So once you get past 26 total devices, the identifier string goes from 3 characters to 4 characters, and that will break some code.
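To illustrate that naming convention (a sketch of the Linux scheme described above, not unRAID's actual code):

```python
def scsi_disk_name(index: int) -> str:
    """Return the Linux SCSI disk name for a 0-based device index:
    0 -> 'sda', 25 -> 'sdz', 26 -> 'sdaa', 27 -> 'sdab', ...
    The suffix is 'bijective base-26', like spreadsheet column letters."""
    suffix = ""
    index += 1  # shift to 1-based for the bijective encoding
    while index > 0:
        index, rem = divmod(index - 1, 26)
        suffix = chr(ord("a") + rem) + suffix
    return "sd" + suffix

# Device 27 (index 26) is the first whose name is 4 characters long:
print(scsi_disk_name(26))  # -> sdaa
```

Any code that assumes a fixed 3-character "sdX" name, or a single drive letter, breaks at that boundary.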

 

Also having 24 array disks leaves 2 identifiers, one for the flash, one for the cache.  But then I can't have mirrored cache or mirrored flash without taking away from the 24 array devices.

 

The other reason is that I need to actually test that everything works with that many drives - would be nice to test anyway :)  Other things increase linearly with drive count, such as buffers required etc.

 

My point is that it's more than just changing a constant from "21" to "24" to make this change, and to go beyond 24 is way more changes.


I expected there would be some hidden things to take into account, and thanks for pointing them out. Now that info is out there, the numbers become more "real" and easier to justify.

 

Sounds like going beyond 24 drives is for the realm of version 6.


Thanks for the info on that, limetech. On that basis, I would personally like to see this come later on, after an initial stable 5 release.

 

+1

 

Thanks for your reply, Tom. I hope this makes it clear what is involved in this change, and it sounds like it won't be a 5.0/5.1 feature. I do think that, long term, support for 4-character identifier strings would mean far larger servers and far more expansion possibilities, but is it worth the time involved? Who's to say a server with 25, 30, 35+ drives would even run very well? Parity checks would take forever, the chances of a URE or similar causing parity/corruption issues increase drastically, and there would be a need for additional protection, such as double parity.

 

I'd be more than happy to purchase another license if my storage demands needed it, and would rather have a stable, secure, faster server than one which is nothing but problems (yet has "more drives"). With 4TB drives starting to come out and unRAID 5.0, a single server will be capable of storing 80TB+ in its CURRENT state, without the need for a complete rebuild or buying all of your storage capacity at once, and all for the measly license price of $119. I don't know any other product that offers value like that.

 

Perhaps an alternative option would be improved features for running multiple unRAID servers together, whether for a shared storage pool, better support for rsync'ing boxes, or whatever other reason. IMO Tom should also consider looking for other ways to take my money away from me, such as Limetech approved/developed/supported addons or extras (please keep the core unRAID prices as-is or similar though; it's part of the reason for its success!). I would not be happy with suddenly needing to pay for an upgrade (such as 6.0), and I don't expect this would ever happen. This is a fantastic product and I plan to support it any way that I can :)


My 2 cents: P+Q parity is needed before we build such huge systems with multiple points of failure.


I agree.

 

22 data + cache + 2 parity + flash = 26 devices.  (and no need to re-write code to handle 4 char device names)

Although I think diagonal parity will probably work better, as I don't think  P+Q would allow any disks to spin down.
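For context, the standard RAID-6 style P+Q scheme (this is how the Linux md driver does dual parity; whether unRAID would adopt exactly this approach is purely an assumption here) computes two syndromes over the data blocks \(D_1, \dots, D_n\):

```latex
P = D_1 \oplus D_2 \oplus \cdots \oplus D_n
\qquad
Q = g^{0} D_1 \oplus g^{1} D_2 \oplus \cdots \oplus g^{n-1} D_n
```

where \(\oplus\) is bytewise XOR, multiplication is in the Galois field \(GF(2^8)\), and \(g\) is a generator of that field. Any two failed drives can be recovered by solving these two equations. Note that every write must update both P and Q, so both parity drives spin up on every write, which relates to the spin-down concern above.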


Yea, I don't care about the methodology... I just remember reading about P+Q here being a "maybe later" feature. Doesn't matter to me if it's diagonal, horizontal, virtual, or pluto versions :) What I mean to say is: some new form of parity that can tolerate TWO drives failing in a system.

 

And my 2 cents is that it's needed before we start building 24+ drive systems (heck, I think it's needed now for even smaller systems, but that's one guy's opinion after being burned in my 18-drive system, which I have reduced to 10 drives of higher capacity).


Well, I don't agree...

 

unRAID = the ecological server

 

What am I to do with 4 x 500 GB, 2 x 750 GB, and now 2 x 1 TB disks??

Buying a second license is as expensive as buying another 1 TB disk... plus more electricity... more heat... cost of materials...

And I live in an apartment, so I really have no clue where I would hide another unRAID box...

 

I am pretty sure the current setup will handle the 2 extra disks.

 

Tom can rewrite the whole code, which he has to do anyway for the double parity disk, to go over the 24 disks.

 

I would have hoped to have the 2 extra disks already... because I have 24 TB in my box already and am now running on 120 GB of space left...

But I guess I will have to invest in another 2 TB drive even though I still have all these GOOD drives left... and don't tell me to sell them on eBay, because Thailand has no eBay.

 

You Americans are all lucky... all computer parts are made here in Asia, but they are all 2 times more expensive than on Newegg.

No logic in that, but that's the truth...


Are you trying to be ecological, or economical? Running four 500 GB drives instead of a single 2 TB drive is economical, but the opposite of ecological, especially considering the higher power demands and heat dissipation needs of older drives. Heck, running a 24 TB server probably isn't very ecological in any case.

 

Replacing two of your 500 GB drives with 2 TB drives will net you about 3 TB of space. Adding two 2 TB drives will net you about 4 TB of space. Either solution will give you some breathing room. And one of them is available to you RIGHT NOW.
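The arithmetic behind those two options (sizes in TB; a trivial sketch that ignores formatting overhead):

```python
def net_gain(removed_tb, added_tb):
    """Net array space gained: capacity added minus capacity retired.
    Assumes the parity drive is unchanged."""
    return sum(added_tb) - sum(removed_tb)

# Option 1: replace two 500 GB drives with two 2 TB drives.
option1 = net_gain([0.5, 0.5], [2.0, 2.0])  # 3.0 TB net
# Option 2: add two 2 TB drives in free slots (nothing removed).
option2 = net_gain([], [2.0, 2.0])          # 4.0 TB net
```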

 

If you're at the limit of the supported number of disks, then you're going to have to face retiring a working drive. That's irrespective of the number of supported disks. In other words, if Tom released 24 disk support tomorrow, and you went and added two 2 TB disks, then eventually you would face the same situation again.

 

Having that many disks with a single parity drive gets to be kind of sketchy anyway. I have just gone through a rash of drive failures. In the past two months, I have sent no fewer than six disks back for RMA replacement, plus had another failure that is out of warranty (not all of those were in my unRAID array). That includes two disks that failed less than a week apart, which could have led to me losing two disks' worth of data. As it is, I lost about 500 GB of data (luckily non-critical). And I currently run only 9 data disks. With every disk added, the likelihood of multiple disk failure goes up.
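To put a rough number on that last point: assuming independent failures with an identical per-disk probability p over some window (a big simplification; real failures are often correlated by batch, heat, and vibration), the chance of a double failure grows quickly with drive count:

```python
def p_multi_failure(n_disks: int, p: float) -> float:
    """P(at least 2 of n disks fail), assuming independent,
    identical per-disk failure probability p over one window."""
    p_none = (1 - p) ** n_disks
    p_one = n_disks * p * (1 - p) ** (n_disks - 1)
    return 1.0 - p_none - p_one

# With a hypothetical 3% per-disk chance over the window:
# 10 drives -> ~3.5%, 24 drives -> ~16% chance of a double
# failure, which single parity cannot survive.
```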

 

 

 

 


 

 

 


This discussion has no place in the unRAID OS 5.0-rc3 release support forum. 'Rewrite the whole code' before 5 goes final? I think not.


Tom can rewrite the whole code, which he has to do anyway for the double parity disk, to go over the 24 disks.

 

That's easy to say when you aren't the one who would have to rewrite the entire operating system. Assuming you are running 4.7 stable (no support for 3TB+ drives), you could expand your existing array to 44TB. I don't see a problem here: you fill disks fast, therefore you are required to purchase disks regularly. The fact that drives aren't cheap where you live has nothing to do with the argument.

 

If you're at the limit of the supported number of disks, then you're going to have to face retiring a working drive. That's irrespective of the number of supported disks. In other words, if Tom released 24 disk support tomorrow, and you went and added two 2 TB disks, then eventually you would face the same situation again.

 

+1
