
Support for 45-60 data drives


bacn8r

Recommended Posts

11 hours ago, bacn8r said:

For companies that use 45 or 60 drives, I would love to see support for 60 data disks. Are there any thoughts on implementing this, possibly with triple parity?

 

bacn8r

 

In most situations where enterprises use this many drives, they do not build a single array.

 

Sometimes they use multiple, independent arrays.

Sometimes they use arrays of arrays (such as RAID 60).

Sometimes they use clustered file systems that spread file content multiple times across a pool of drives - and potentially across a pool of machines.
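To make the trade-off concrete, here's a back-of-the-envelope sketch comparing the usable capacity of one big dual-parity array against a RAID 60 layout of the same 60 drives. All the numbers (4 TB drives, group sizes) are illustrative assumptions, not from any vendor spec:

```python
# Back-of-the-envelope comparison of 60-drive layouts.
# All numbers (4 TB drives, group sizes) are illustrative assumptions.

DRIVES = 60
DRIVE_TB = 4

def usable_single_array(parity_drives):
    """One big array: total capacity minus the parity drives."""
    return (DRIVES - parity_drives) * DRIVE_TB

def usable_raid60(groups, parity_per_group=2):
    """RAID 60: a stripe over several RAID-6 groups, each giving up
    two drives to parity."""
    per_group = DRIVES // groups
    return groups * (per_group - parity_per_group) * DRIVE_TB

print(usable_single_array(2))  # one dual-parity array: 232 TB
print(usable_raid60(6))        # six 10-drive RAID-6 groups: 192 TB
```

RAID 60 gives up more capacity to parity, but a rebuild only touches one 10-drive group instead of all 60 drives.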

 

Link to comment
4 minutes ago, BRiT said:

Before the number of drives in the array goes that high, I think the following features are required in order for it to make any sense:

 

More Parity Drives than just P and Q

Multiple independent arrays

Automatic Spare-Drive(s) Recovery

I agree with you that those are must-haves!  After spending hours diving down the rabbit hole, I realize this is a much-debated topic, but some of us want the ability to run large "enterprise"-type systems at home with an easy-to-manage interface like unRAID.
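For anyone curious what "more parity than P and Q" would actually mean: dual parity is typically plain XOR (P) plus a Reed-Solomon syndrome (Q) over GF(2^8), and triple parity adds a third syndrome (R) computed with another generator. A minimal sketch of that syndrome math (an illustration of the general technique, not unRAID's actual implementation):

```python
# Sketch of how dual parity (P and Q) extends to triple parity (R).
# P is plain XOR; Q and R are Reed-Solomon syndromes over GF(2^8).
# This illustrates the math only - it is not unRAID's code.

def gf_mul(a, b):
    """Multiply in GF(2^8) with the reducing polynomial 0x11d."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(data_bytes):
    """P, Q, R for one byte position across all data drives."""
    p = q = r = 0
    for i, d in enumerate(data_bytes):
        p ^= d                          # P: plain XOR
        q ^= gf_mul(gf_pow(2, i), d)    # Q: generator 2
        r ^= gf_mul(gf_pow(4, i), d)    # R: generator 4
    return p, q, r
```

With three independent syndromes, any three lost data bytes (drives) can be solved for, which is what lets a triple-parity array survive three simultaneous failures.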

Link to comment
3 minutes ago, BRiT said:

Before the number of drives in the array goes that high, I think the following features are required in order for it to make any sense:

 

More Parity Drives than just P and Q

Multiple independent arrays

Automatic Spare-Drive(s) Recovery

 

Multiple independent arrays is most definitely something unRAID needs.

 

Additional parity drives and spare drives aren't obvious needs.

 

A system with a single 60-disk array of "tiny" 4 TB disks would have to process 240 TB to validate parity or to rebuild after a drive failure. And the effective MTBF drops significantly with so many disks, so rebuilds would be needed often. It's likely that a rebuild would have to run to completion before the next failed drive could be replaced and a new rebuild started. Alternatively, unRAID would have to support swapping a drive halfway through a rebuild, so it could rebuild the second half of replaced disk #2 first and then return to rebuild its first half.
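The scale of that rebuild problem is easy to sketch. The throughput and failure-rate numbers below are assumptions picked for illustration, not measurements:

```python
# Rough sketch (assumed numbers) of why a huge single array is risky:
# every drive must be read end-to-end to rebuild just one of them.

def rebuild_hours(drive_tb, throughput_mb_s=150):
    """Hours for one full pass over a drive, assuming the rebuild is
    limited by sequential throughput and all drives are read in parallel."""
    return drive_tb * 1e6 / throughput_mb_s / 3600

def p_second_failure(drives, rebuild_h, afr=0.02):
    """Probability that at least one of the other drives fails during
    the rebuild window, assuming a 2% annualized failure rate (AFR)."""
    p_one = afr * rebuild_h / (24 * 365)
    return 1 - (1 - p_one) ** (drives - 1)

h = rebuild_hours(4)                  # ~7.4 hours for a 4 TB drive
print(round(p_second_failure(60, h), 4))
```

Even with these optimistic assumptions, every rebuild of a 60-drive array carries a measurable chance of another failure mid-rebuild, and with larger drives or slower throughput the window grows accordingly.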

 

Lots of complications for a construct that is bad for almost every single application an unRAID customer might have.

 

1 minute ago, bacn8r said:

but some of us want the ability to run large "enterprise"-type systems at home with an easy-to-manage interface like unRAID

 

But you are talking about a system that an enterprise would normally not want to touch with pliers, because in general terms it's an abomination.

 

No single unRAID user gains any advantage from having a single 60-drive array. Any special use case where a 60-disk array might seem meaningful already has other specialized solutions that are even better at providing a huge pool of redundant file storage. For an unRAID system, having three 20-drive arrays or five 12-drive arrays in the machine would be way better.

 

Remember that unRAID already implements a union file system where the individual file systems of each drive in an array are merged. There is nothing that would prohibit unRAID from merging the contents of 5 separate arrays and still presenting a single union file system. So a system with many arrays in the same machine could still present one huge combined file system. But multiple arrays mean multiple concurrent writes, making it far more practical to fill up all the disk space in a reasonable time.
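As a rough illustration of that union idea, here is a minimal sketch that merges the directory trees of several independent array mounts into one view. The mount paths are hypothetical, and the first-mount-wins rule for duplicate names mirrors how a union view typically resolves conflicts; this is not unRAID's actual user-share code:

```python
# Minimal union-view sketch: merge directory listings from several
# independent array mounts. Mount paths are hypothetical examples.

import os

MOUNTS = ["/mnt/array1", "/mnt/array2", "/mnt/array3"]

def union_listdir(relpath):
    """Return the merged contents of `relpath` across all array mounts,
    mapping each name to the first physical path that provides it
    (first-mount-wins for duplicates)."""
    seen = {}
    for mount in MOUNTS:
        full = os.path.join(mount, relpath)
        if os.path.isdir(full):
            for name in os.listdir(full):
                seen.setdefault(name, os.path.join(full, name))
    return seen
```

A share like "Movies" spread over five arrays would then still appear as one folder, while each array handles its own writes and rebuilds independently.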

Link to comment
3 minutes ago, pwm said:

 

Multiple independent arrays is most definitely something unRAID needs. [...] For an unRAID system, having three 20-drive arrays or five 12-drive arrays in the machine would be way better.

I would love five 12-disk arrays. I have the chance to get a NetApp E-Series 60-disk unit in a 4U chassis. It's 5 shelves of 12 disks. I'm perfectly OK with 5 arrays that are then all concatenated together.

Food for thought I guess

Link to comment
Just now, bacn8r said:

I would love five 12-disk arrays. I have the chance to get a NetApp E-Series 60-disk unit in a 4U chassis. It's 5 shelves of 12 disks. I'm perfectly OK with 5 arrays that are then all concatenated together.

Food for thought I guess

 

Lots of people want unRAID to add support for multiple disk arrays.

And lots of people want unRAID to support multiple cache pools - and since adding support for multiple cache pools and multiple disk arrays is quite similar, the two concepts could be unified.

 

The more people that keep requesting it, the better the chances that we may get it.

 

Limetech would have to either add more license sizes or adjust their license model, possibly charging x for every 5 disks and y for every additional array. But in the end, they should not need to see it as a loss of income from selling fewer licenses. It's cheaper for us customers to have a few machines with many arrays than to need many machines with one array per machine.

Link to comment
2 hours ago, pwm said:

 

Lots of people want unRAID to add support for multiple disk arrays. [...] The more people that keep requesting it, the better the chances that we may get it.


I agree. I used to run 3 servers with MSAs attached, and using unRAID I have been able to get down to one 24-bay Supermicro running VMs.  I'd like to be able to put more disks into a single system with multiple arrays, similar to GlusterFS, or have a way to break my 24 bays into 4 arrays that each take up a slot.    I have 2 power cords, 2 Ethernet ports, and 2 10G SFP+ ports on one box that can run to the network or to other devices when I expand.

Link to comment

Archived

This topic is now archived and is closed to further replies.
