Increase Data Drive Limit


Recommended Posts

Request to have the current 30 data drive (28+2) limit increased.  My specific application is a media server, and with the advent of 4K video the storage needed per video has almost tripled.  I've hit the maximum of 30 drives, and as I replace my smaller 4TB and 6TB models with 8TB or 10TB drives, I cannot reutilize those 4TBs within the same data drive pool; cache drives serve no real purpose for me on this server.

 

Speaking for myself, I am willing to pay an upgrade or higher tier license fee for the ability to go beyond 30 data drives.

 

  • Like 3
Link to comment

+1

 

Not that I really need it for now, but I do have one backup server with 30 drives, and since I mostly use smaller drives there that come from upgrades on the main servers, I might need it in the near future. Besides, more supported drives is always good, though we have to keep in mind that the Pro license has already increased the number of supported devices several times. I don't even remember how many it originally supported, but at some point it was 21, then 24, now 30, so I would be more than willing to upgrade to an "Ultra" license for >30 drives.

Link to comment
On 10/4/2018 at 10:27 AM, primeval_god said:

Just an FYI there are at least 2 other threads requesting this feature in the "Feature Requests" forum already.

Yeah, I just noticed those.  The latter is somewhat generic, as 'device' could refer to either data or cache, while the former is a bit specific, though I would certainly be satisfied with 45-60 data drives: I just received my Chenbro 48 (3.5") + 2 (2.5") drive top-load chassis, and a 45-60 drive limit would maximize that investment.

 

It doesn't hurt to keep adding feature requests to increase visibility!

Edited by Auggie
Link to comment
4 hours ago, AndrewT said:

which I find a bit deceptive

On the same page it is clearly explained what the limitations are.

 

Unraid OS Pro supports up to 30 storage devices in the parity-protected array (28 data and 2 parity) and up to 24 storage devices in the cache pool. Additional storage devices can still be utilized directly with other Unraid features such as Virtual Machines or the unassigned devices plugin.

 

  • Like 1
Link to comment
On 10/4/2018 at 6:56 AM, johnnie.black said:

+1

 

Not that I really need it for now, but I do have one backup server with 30 drives, and since I mostly use smaller drives there that come from upgrades on the main servers, I might need it in the near future...

This is what I currently do, as I have two unRAID boxes in Norco RP4224s (one of which will be replaced with the new Chenbro 43348), but I've reached the point where I just don't need any more capacity on my backup unRAID as it has more than enough to handle all my computers.  So all these replaced drives are not being truly utilized in any meaningful manner while wasting power.  I'm now just stacking the unused drives on a shelf to gather dust...

Link to comment
1 hour ago, johnnie.black said:

Looks like we'd get multiple arrays before increasing the number of disks on a single array.

Perhaps.

 

For my specific application, I need just one "share point" to access my entire library; I can't have multiple arrays, as that would make accessing the media very cumbersome (everyone would need to know where certain titles live, or would have to access each array separately until they find what they're looking for).

 

Still, there hasn't been any real answer as to why the 28+2 data drive limit exists; it seems an arbitrary number based on assumptions about what the target audience's usage would be.

 

In the meantime, I'm just throwing more money into the pit by being forced to switch to 10TB drives to replace perfectly functioning 6TB drives in the media server as it's the only way presently to increase data capacity within the 28+2 data drive limit.

Link to comment
6 minutes ago, Auggie said:

For my specific application, I need just one "share point" to access my entire library

Yes, and I assume all arrays would still be accessed under /mnt/user, though I'm not sure and did raise that question.

 

6 minutes ago, Auggie said:

Still, there hasn't been any real answer as to why the 28+2 data drive limit exists; it seems an arbitrary number based on assumptions about what the target audience's usage would be.

I can't see a technical limit. There used to be one, when Unraid could only use sda through sdz, but that changed in v6.2, which can use four-letter device names, e.g. sdaa. LT would need to move parity2 from device slot 29 though, and that might require some work.
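
For anyone curious, here is a minimal sketch (plain Python, not Unraid code) of the naming scheme involved: Linux block device names are effectively bijective base-26, so after sdz the kernel simply moves on to two-letter suffixes.

import string

def sd_name(index: int) -> str:
    # Return the Linux-style "sdX" name for a zero-based disk index,
    # using the bijective base-26 scheme: sda..sdz, sdaa, sdab, ...
    suffix = ""
    index += 1  # work in 1-based terms for bijective base-26
    while index > 0:
        index, rem = divmod(index - 1, 26)
        suffix = string.ascii_lowercase[rem] + suffix
    return "sd" + suffix

print(sd_name(25))  # sdz  -- the old one-letter ceiling
print(sd_name(26))  # sdaa -- possible since v6.2, as noted above
print(sd_name(29))  # sdad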

 

Edited by johnnie.black
Link to comment
53 minutes ago, johnnie.black said:

I can't see a technical limit

The limit is in the current size of super.dat; it cannot hold more than 30 (28+2) disk references.

Limetech would need to devise a solution which can handle a bigger super.dat file while maintaining backward compatibility.

And, as you mentioned, the second parity disk, which is the highest number in the array, would need to change.

On top of that, several updates to the GUI would be needed to accommodate a larger array and a different parity2 assignment.
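
To illustrate why a fixed on-disk layout caps the count, here is a purely hypothetical sketch (not the real super.dat format): a config file built from a fixed number of fixed-size slots has no room for extra disks, and an older release that expects the old file length would reject or misparse a longer one.

import struct

MAX_SLOTS = 30          # e.g. 28 data + 2 parity
SLOT_FMT = "16s8s"      # hypothetical: 16-byte serial + 8-byte status per slot
HEADER_FMT = "4sI"      # hypothetical: magic string + version number

def pack_config(disks):
    # Pack disk assignments into a fixed-length blob; extra disks simply don't fit.
    if len(disks) > MAX_SLOTS:
        raise ValueError("config format has no room for more than %d disks" % MAX_SLOTS)
    blob = struct.pack(HEADER_FMT, b"SUPR", 1)
    for serial, status in disks:
        blob += struct.pack(SLOT_FMT, serial, status)
    # Pad unused slots so the file is always the same size on disk.
    blob += struct.pack(SLOT_FMT, b"", b"") * (MAX_SLOTS - len(disks))
    return blob

# A reader that expects exactly this length would reject (or misparse) a newer,
# longer file -- hence the backward-compatibility concern.
print(len(pack_config([(b"WD-1234", b"DATA")])))  # constant, regardless of disk count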

  • Like 4
  • Thanks 1
Link to comment
On 10/8/2018 at 7:40 AM, bonienl said:

The limit is in the current size of super.dat; it cannot hold more than 30 (28+2) disk references.

Limetech would need to devise a solution which can handle a bigger super.dat file while maintaining backward compatibility.

And, as you mentioned, the second parity disk, which is the highest number in the array, would need to change.

On top of that, several updates to the GUI would be needed to accommodate a larger array and a different parity2 assignment.

 

I figured that however the limit was established, many of the dependent routines were coded with that limit in mind, and that changing the limit would have a cascading effect through them in order to support the new value.  Ergo, not a simple task.

Link to comment
  • 10 months later...

I have a Cisco C220 M4 with 56 logical cores, 768GB of RAM, and 5 × 1.92TB SSDs (in the cache pool), connected to two 24-bay (NetApp DS2446) shelves via a NetApp X2065 QSFP card.  I'm using 24 × 8TB in one system and 6 × 8TB to complete my 30-disk (28+2) array, and I have enough room to make a 48-bay array with the same connection and hardware setup.  I know I could buy larger drives, as you already stated, but I have another 6 × 8TB drives, 9 × 6TB drives, and 7 × 4TB drives.  I could always go with a FreeNAS solution, but with the mixture of different drive sizes and the parity protection I want a larger array.

To be clear, I don't hold Unraid accountable for any of my gear beyond the OS working.  When I tried to use multipathing it showed 60 disks instead of 30 (two separate paths) but wasn't smart enough to show multiple paths to each drive.  I would be willing to take on an "experimental" larger-than-30 array "at my own risk" if at all possible.  I also have two 60-bay expansion shelves fully populated with 3TB drives that I would love to use at some point, but honestly 60 disks total in an array would make me happy, since 8/10TB drives are at a happy price point right now.

Link to comment
On 10/8/2018 at 7:32 AM, Auggie said:

Perhaps.

 

For my specific application, I need just one "share point" to access my entire library; I can't have multiple arrays, as that would make accessing the media very cumbersome (everyone would need to know where certain titles live, or would have to access each array separately until they find what they're looking for).

 

Still, there hasn't been any real answer as to why the 28+2 data drive limit exists; it seems an arbitrary number based on assumptions about what the target audience's usage would be.

 

In the meantime, I'm just throwing more money into the pit by being forced to switch to 10TB drives to replace perfectly functioning 6TB drives in the media server as it's the only way presently to increase data capacity within the 28+2 data drive limit.

Of course everyone has different needs, and may even use different applications for functions similar to what someone else is doing.  I use PLEX to share my media collection with my other users, and for when I am not at home and want to access my media.  I have a few arrays feeding my PLEX server, as well as a few Windows machines with standard drive shares also showing as part of my PLEX content.

 

With PLEX I can add a very large number of "paths" to each share as it is presented to my PLEX users.  They have NO idea how many different servers my media is spread across.

 

This works well both for the Windows-hosted PLEX server and for running PLEX as a Docker on Unraid.  It is a bit easier with the Windows installation of PLEX, but by adding remote SMB/NFS shares it works very well under the Dockerized PLEX too, though it is much more involved to set up.

Link to comment
On 10/7/2018 at 1:26 AM, bonienl said:

On the same page it is clearly explained what the limitations are.

 

Unraid OS Pro supports up to 30 storage devices in the parity-protected array (28 data and 2 parity) and up to 24 storage devices in the cache pool. Additional storage devices can still be utilized directly with other Unraid features such as Virtual Machines or the unassigned devices plugin.

 

Also, after adding all your "local" drives (parity-protected array, cache pool, Unassigned Devices via plugin), you can still add even more resources via the remote SMB/NFS share and ISO image features (I am not sure, but I think that is a standard feature now; I do not remember adding it as a plugin)...

Link to comment

This feature would need to be implemented cautiously. Parity check times for large volumes with many disks are already high enough, and with the current parity/array ratio limitations, anything beyond the current 28+2 is imho reckless. I'd imagine we would see multiple arrays and array pooling before we would see a larger configuration. I'd rather see the 28+2 changed to be more flexible with additional parity disks, plus multiple arrays and array pooling, as that would be a much more flexible system overall. You could also do simultaneous parity checks on the multiple arrays, and the pooling would make everything still appear as one "big logical volume".
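
To make the pooling idea concrete, here is a minimal sketch (hypothetical Python, not Unraid's implementation) of how several independent array mount points could be merged into one logical view, much like /mnt/user overlays the individual /mnt/diskN mounts today:

from pathlib import Path

# Hypothetical mount points for two separate parity-protected arrays.
ARRAY_ROOTS = [Path("/mnt/array1"), Path("/mnt/array2")]

def pooled_listing(share):
    # Merge the contents of one share across all arrays into a single view.
    merged = {}
    for root in ARRAY_ROOTS:
        share_dir = root / share
        if not share_dir.is_dir():
            continue
        for entry in share_dir.rglob("*"):
            rel = str(entry.relative_to(share_dir))
            # The first array holding a given path "wins", as with user shares.
            merged.setdefault(rel, entry)
    return merged

# Users browse one logical "Movies" share without knowing which array
# actually stores each title.
for rel_path, real_path in sorted(pooled_listing("Movies").items()):
    print(rel_path, "->", real_path)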

Link to comment
On 9/10/2019 at 1:23 AM, Xaero said:

This feature would need to be implemented cautiously. Parity check times for large volumes with many disks are already high enough, and with the current parity/array ratio limitations, anything beyond the current 28+2 is imho reckless. I'd imagine we would see multiple arrays and array pooling before we would see a larger configuration. I'd rather see the 28+2 changed to be more flexible with additional parity disks, plus multiple arrays and array pooling, as that would be a much more flexible system overall. You could also do simultaneous parity checks on the multiple arrays, and the pooling would make everything still appear as one "big logical volume".

I would say the best way, as you state, is to have multiple arrays: max 30 drives per array as usual, but with the ability to spread a user share across multiple arrays.

 

This could easily increase the max capacity from ~448TB to ~896TB (16TB drives, raw capacity) with two arrays.
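
For reference, assuming 28 data slots per array and 16TB per drive, the arithmetic behind those figures is: 28 × 16TB = 448TB of raw data capacity per array, and 2 × 448TB = 896TB across two arrays.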

Link to comment
5 hours ago, Conmyster said:

I would say the best way, as you state, is to have multiple arrays: max 30 drives per array as usual, but with the ability to spread a user share across multiple arrays.

 

This could easily increase the max capacity from ~448TB to ~896TB (16TB drives, raw capacity) with two arrays.

 

This would be a sufficient solution for my very narrow needs.

 

If the multiple-array feature is eventually incorporated, then I would definitely want to run multiple arrays (servers) on the same iron, since at the current max data drive count my 48-bay Chenbro would never be fully utilized (I prefer to run native vs. virtual to reduce the potential for latency issues during media playback).

Link to comment
9 hours ago, Necrotic said:

I am not sure how to make this work with Unraid, but Linus just talked about using GlusterFS to make multiple separate things show up as a single share.

 

Yeah, GlusterFS with ZFS pools is an option. However, for people who are less skilled with Linux, I don't think they would know how to create ZFS pools via the command line and then make shares, etc.

 

If Unraid supported more than 30 drives (in a single array or by having multiple arrays), it would allow people with less Linux experience to have larger storage/more disks.

Edited by Conmyster
Link to comment

ZFS is meh... at least for now. It brings back several limitations that we currently do not have with Unraid and that I really like not having: with Unraid you don't have to decide up front how big your pools are, and there's no need to have same-type/same-size disks. ZFS also needs a certain amount of RAM per TB in your server, and that adds up quickly.

My info is from a few years back, so stuff might be different.

Link to comment
