30 Drive Limit


Steve-0

Recommended Posts

I just upgraded my unraid server.

 

OLD Server:

Norco 4224 (24 drive bays) with - Asus Z170-AR / i7 7600 / 32GB / GTX 970 FTW+ / 256GB Samsung 950 Pro NVMe (cache) / 850W PSU / Hauppauge QuadHD / Adaptec 6805T / HP SAS expander. Also had 5x 4TB 2.5" drives off of the SATA ports. This gave me exactly 30 devices. All good.

 

New Server:

Supermicro CSE-847 (36 drive bays) - Supermicro X8DTi+ / 2x X5675 / 144GB RAM / GTX 1050 low profile / 512GB Samsung SSD (cache) / Hauppauge QuadHD / dual 1400W PSUs / LSI 9211-8i. I have enough drives to fill the 36 bays, plus 5x 4TB 2.5" drives mounted in the bottom of the case via the onboard SATA connectors.

 

Without making you do the math - that's now 42 drives, adding 12 to my previous config.

 

While I know I can use these new drives via Unassigned Devices, that does not protect the data on them in any way - not to mention I would like to have them as part of the data pool for shares. I saw a video from LTT where he worked with the Unraid team to allow him to use more than 30 drives in his petabyte server (I think that was the project; he referenced it as part of the tweaks needed to get 10Gb transfers out of one of his servers). Regardless, this tells me that the limit is artificial and can be changed.

 

Is this something that will be available in future versions? Is it something I can get help setting up outside of the forums? I could see myself buying a 60-bay or larger case from 45Drives down the road - or even custom-building something using the backplanes from a few Supermicro CSE-846/847 cases. It would be great to have all these drives protected. I love Unraid and have just three things I would like to get working - this being the biggest pain point at the moment.

 

Thank you for all the hard work and awesome product.

Steve-0 said:

I saw a video from LTT where he worked with the Unraid team to allow him to use more than 30 drives in his petabyte server (I think that was the project; he referenced it as part of the tweaks needed to get 10Gb transfers out of one of his servers). Regardless, this tells me that the limit is artificial and can be changed.

He used 24 drives as a cache pool; you can do that too. The current limit is 28 data drives (+1 or 2 parity drives) + 24 cache drives.
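Putting those quoted limits together (figures from this thread, not official documentation), the per-pool device math works out like this:

```python
# Limits as quoted above; treat these as thread figures, not official docs.
ARRAY_DATA_MAX = 28   # data drives in the parity-protected array
PARITY_MAX = 2        # single or dual parity
CACHE_POOL_MAX = 24   # devices assignable to the cache pool

def max_attachable(data=ARRAY_DATA_MAX, parity=PARITY_MAX, cache=CACHE_POOL_MAX):
    """Total devices usable at once under the quoted limits."""
    return data + parity + cache

print(max_attachable())  # 28 + 2 + 24 = 54
```

So a 40-plus-device setup like the OP's only fits by pushing the overflow beyond the 30 array slots into the cache pool, which is the workaround described above.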

 

 


I have two servers, each with 30 drives and only one parity drive in each. Granted, I have my data replicated three times, but frankly I think it ought to be up to the end user to decide if they want to expand their array beyond 30 drives. If the imposed limitation has technical hurdles, that is another matter, but if it can be done easily I think it would be a great feature. Or Limetech might consider raising the drive count and releasing a Pro Plus license that is truly unlimited, keeping the 30-drive limit on the Pro license. Just an idea. I would be willing to pay extra to upgrade my Pro licenses.


You obviously have thought about your data protection scheme, which is what I suggest everyone do when they think they want to get into high drive counts. There are issues to consider beyond what dual-parity arrays can protect against, issues most people are simply not aware of. For one's own sanity, increasing the size of the drives is typically easier to deal with than increasing the number of drives.

 


This is honestly something I have been asking for for years. As @ashman70 stated, if it's a technical limit that Unraid has, then so be it. I can't say that I've ever seen other OSes struggle with this limit, though, so I'm not sure it's a technical limit rather than an LT-imposed one. I've also said for years I would have been happy to buy a Pro Plus or Ultra or whatever license to get rid of the limit. While I'm now running 12TB drives, it would have been a hell of a lot cheaper to buy an upgraded license than expensive larger drives. I'm not talking hundreds of dollars cheaper, I'm talking thousands.

 

As for not wanting that much data (or that many drives) protected by only 1 or 2 parity drives, that's just personal preference. I no longer even run parity drives on my array. At $400/drive, with a main and a backup server, dual parity on those machines would be a $1,600 expense. If a drive dies, I replace it and copy the missing data from my backups. So if that is a real concern, give us multiple arrays, each protected by its own set of parity disks. Cost issue solved, scaling issue solved. We get what we need, LT makes money, everyone wins.


It may not be a technical limit, but it might be a support limit: LimeTech has to support the feature for every consumer who opts for it, and they are a smaller company with limited resources. It also means they would need their own huge-drive system to test on. I would only make this option available behind an even higher-tier license (Enterprise?) with an explicit legal agreement listing all the additional concerns and data-safety limitations that come with it, explicitly calling out that LT is not responsible for any inability to reconstruct failed drives, since the odds of a successful rebuild on an array over 24 drives with limited parity drives are statistically slim to none.

 

One needs to remember that all it takes is one consumer who goes with a large number of drives, has an issue rebuilding a failed drive without understanding the risks, gets extremely upset, and posts about it on various sites (social media like Twitter, Facebook, Reddit, etc.) to have an extremely negative impact that LT may never recover from. I do not want anything of the sort to happen to the much-beloved Unraid, and hopefully no one else does either. I'd rather put the cost and risk onto the users than onto the company. This is all a long way of asking: is the risk low enough for the limited reward it affords the company? I'm not sure it is.

 

I think they can eventually get there but only after allowing for multi-array setups.

 

 

 

 


Anyone who uses unRAID does so at their own risk; LT can't be held responsible for anything anyone does with their unRAID build. If someone doesn't have a backup and their array blows up, it's not LT's fault. I get what you're saying, but I honestly don't think that has anything to do with it, IMO anyway. What people choose to do with unRAID is on them, not LT.


I, too, would like to see the 30 data drive (28+2) limit increased. My specific application is a media server, so data integrity is not the highest concern; I could always reload any lost media files. I agree with Ashman that the responsibility rests on the end user, and so should the option to incorporate a data pool beyond 30 drives. With the advent of 4K video, the storage needed per video has almost tripled.

 

Like the OP, I'm about to purchase a SuperMicro 36-bay chassis: in my current Norco 4224 I've begun putting drives loosely inside the enclosure in any free space around the mobo, as well as on PCI backplane brackets. Since the chassis is installed in a rack with other enclosures, even with sliding rails, accessing these drives requires removing the enclosure immediately above it to get the cover off, since it only slides out about two-thirds of its length; a big PITA.

 

I did a quick search in the feature request thread but didn't see whether increasing the data drive limit had been requested. If it hasn't, I will post a new feature request.

Edited by Auggie

I have all of my crucial data backed up to enterprise-grade server hardware, both onsite and offsite. I am unconcerned with the risk of 1 or 2 parity drives protecting 60+ data drives. What I need, I will not lose.

What I want is simply to take advantage of the 42 drives (with plans to grow to 60 or more in a 45Drives-style enclosure before it's over) in a single solution. I prefer not to bastardize my SSD cache with mechanical hard drives, and I was under the impression that a cache pool becomes one big drive, with no way to call out individual drives for specific duties.

 

If this is a technical limitation, OK... but I do not see why it would be. The OS is fully capable of recognizing 40+ devices; the limitation is only in being able to include them in the array.
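On the "OS is fully capable of recognizing 40+ devices" point: on a Linux host you can see exactly how many block devices the kernel enumerates, independent of what the array will accept. A minimal Linux-only sketch (device names are whatever the host exposes):

```python
import os

def attached_disks(sys_block="/sys/block"):
    """List disk-like block devices, skipping common pseudo-devices."""
    try:
        names = sorted(os.listdir(sys_block))
    except FileNotFoundError:
        return []  # non-Linux host
    return [n for n in names if not n.startswith(("loop", "ram", "zram"))]

disks = attached_disks()
print(f"{len(disks)} disks visible to the kernel: {disks}")
```

The kernel will happily list far more than 30 entries here; the 30-device cap lives in the array assignment layer, not the OS.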

 

It would be awesome to hear from someone on the unraid team what the answer to this is. It really is the only flaw I can find in the OS. It is perfect for my needs otherwise.

 

*edit*

 

The debate is valid on both sides. Yes, I could upgrade 30 drives to 12TB, but that would cost more than $10,000. I have the hardware I have through incredible deals and luck.

 

  • 12x 2TB Sun SAS drives for $160
  • PX-350r with 12x 2TB enterprise Seagate drives for $220
  • 5x 3TB, 6x 5TB, and 2x 8TB drives NIB in external enclosures from a pawn shop for $10-$40 each, $250 total
  • Dell i3220 filled with 900GB SAS drives as part of a huge server buyout (2x maxed-out R710s, 1x maxed-out R310, 3x 6648 switches, 1x 5548 switch, 2x 2700W UPSes, the i3220, and a bunch of misc equipment) for $400
  • 22x Dell R710 with dual 5530s for $180

 

I don't say that to brag... I say it to explain that while I have an insane amount of hardware, my total out of pocket to date is ~$1,200, and if you count all the stuff I sold along the way, it was effectively free and I made money. My point is that a way to expand the drive limit to truly "unlimited" would be very helpful, because enterprise-grade hardware sold in large lots is not going to be 12TB drives; it will be 2TB and 3TB drives, and while they are used, they will still have years of life left. 1.6 million hours MTBF is a VERY long time.
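One caveat on that MTBF figure: MTBF is a population statistic, not an expected lifespan for any single drive. Taking the 1.6M-hour number at face value, the annualized failure rate it implies works out to:

```python
HOURS_PER_YEAR = 8766  # 365.25 days

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Fraction of a large drive population expected to fail per year."""
    return HOURS_PER_YEAR / mtbf_hours

print(f"AFR ~ {afr_from_mtbf(1.6e6):.2%}")  # about 0.55% of drives per year
```

Roughly one failure per 180 drive-years, which is reassuring per drive but adds up quickly once an array holds 40+ of them.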

 

Edited by Steve-0
  • 11 months later...

I have a Cisco C220 M4 with 56 logical cores, 768GB of RAM, and 5x 1.92TB SSDs (in the cache pool), connected to two 24-bay NetApp DS2446 shelves via a NetApp X2065 QSFP card. I'm using 24x 8TB in one shelf and 6x 8TB to complete my 30-disk (28+2) array, and I have enough room to make a 48-bay array with the same connection and hardware setup.

I know I could buy larger drives, as you already stated, but I have another 6x 8TB drives, 9x 6TB drives, and 7x 4TB drives. I could always go with a FreeNAS solution, but with the mixture of drive sizes and parity protection, I want a larger array. To be clear, I don't hold Unraid accountable for any of my gear besides the OS working. When I tried to use multipathing, it showed 60 disks instead of 30 (two separate paths) but wasn't smart enough to show multiple paths to each drive.

I would be willing to take on an "experimental" larger-than-30 array "at my own risk" if at all possible. I do have two 60-bay expansion shelves fully populated with 3TB drives that I would love to use at some point, but honestly, 60 disks total in an array would make me happy, since 8/10TB drives are at a happy price point right now.


I personally get a bit nervous going over 16 drives in an Unraid array; with 2 parity drives, I feel OK at about 24 drives, as long as the individual drives are not over 4TB.

 

I think this comes down more to comfort level than to real risk and recovery capability for many of us, however.

 

I also have a nice large chassis with 45 drive bays connected to one of my servers. It started as a test server to see how the performance of SAS hardware compared with the normal SATA I have been using. With this server I could have a total of 69 spinning drives online at one time, if I chose to... but I doubt I ever would.

 

I prefer running multiple servers to spread the load and the possibility of failure across the hardware. If I have a drive failure, only one of the servers sees the rebuild load, instead of my full online resources.

 

So while I doubt I would ever use more than 30 drives in an array, the option to do so would be welcome.

 

  • 3 weeks later...
On 10/4/2018 at 11:20 AM, Steve-0 said:

  • 12x 2TB Sun SAS drives for $160
  • PX-350r with 12x 2TB enterprise Seagate drives for $220
  • 5x 3TB, 6x 5TB, and 2x 8TB drives NIB in external enclosures from a pawn shop for $10-$40 each, $250 total
  • Dell i3220 filled with 900GB SAS drives as part of a huge server buyout (2x maxed-out R710s, 1x maxed-out R310, 3x 6648 switches, 1x 5548 switch, 2x 2700W UPSes, the i3220, and a bunch of misc equipment) for $400
  • 22x Dell R710 with dual 5530s for $180

 

22 Dell R710s for $180... man, the USA is a batshit-crazy fire sale. I feel bad for those businesses, as a single R710 can easily fetch $200+ on any open market.

  • 2 weeks later...
On 9/26/2019 at 11:37 PM, squirrelslikenuts said:

22 Dell R710s for $180... man, the USA is a batshit-crazy fire sale. I feel bad for those businesses, as a single R710 can easily fetch $200+ on any open market.

They were actually sold for scrap metal, and the guy who got them realized he could more than double the value by selling them as computers... and I made more than $5,000 selling them all, many in lots of 3-6 at $200-300 each. I even took the 10Gb networking cards out of all of them before I sold them.

Edited by Steve-0
  • 8 months later...

Yes, I know that you don't offer OpenZFS by default. So, as I understand you, it doesn't matter which license I buy to use my (50+ disk) ZFS pools? Or am I forced to get the Pro license? That would be paying for something I do not need, as I understand the disk limit applies only to your Unraid array.

