The "well, it was free" 50 bay server



Most of my unRAID gear has been free over the years, so while I'd love to spend thousands on a super dense micro server with 10TB disks, that's just not something I've ever been able to prioritize in the budget. But since I work in IT and datacenter environments, I do manage to get some fun stuff to play with.

 

Most recently, it was a bunch of "Aberdeen" storage servers - a mix of 48+2, 24, and 16 bay units, all loaded with circa-2010 compute resources and RAID controllers. They also came populated with 2TB drives of varying ages, about 160 in total.

 

I've moved my current 16 drive unRAID build into one of the 48 bay enclosures - it's a mix of 6TB, 5TB, 4TB, 3TB, and 2TB drives totaling 56TB of array space. I've decided to leave a bunch of bays empty since I'm only at 50% utilization, but I'd like to slowly move to 10TB disks, and this will give me the space to either replace existing drives or add to the array without worrying about enclosure limits. I hate the thought of throwing a 10TB disk into the parity slot, however, so I've instead got 12x 2TB drives running in the bottom of the chassis in RAID10. This presents to unRAID as 12TB and gives me plenty of overhead for whatever sizes the 10TB drives happen to show up with. These 2TB disks are old and they're going to fail, but I have about 140 sitting and waiting as spares, ready to pop in and automatically rebuild on the HW controller.
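For anyone who wants to sanity-check that, here's a minimal Python sketch of the capacity arithmetic - the drive counts are from my setup, the rest is just RAID10 bookkeeping:

```python
# RAID10 stripes across mirrored pairs, so usable capacity is half the raw total.
DRIVE_TB = 2           # 2TB drives
RAID10_DRIVES = 12     # 12 drives in the bottom of the chassis

raw_tb = DRIVE_TB * RAID10_DRIVES   # 24TB raw
usable_tb = raw_tb // 2             # mirroring halves it -> 12TB
print(f"RAID10 virtual disk: {usable_tb}TB usable")

# unRAID requires the parity device to be at least as large as the biggest
# data disk, so a 12TB virtual disk leaves headroom for 10TB data drives
# (which rarely format to exactly their nominal size).
assert usable_tb >= 10
```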

 

This actually happened a couple of weeks ago - 2 drives failed. I replaced them and the RAID10 rebuild completed in just a couple of hours. unRAID was none the wiser, so I didn't have to worry about a parity rebuild on the 12TB virtual disk, which takes about 22 hours.
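That rebuild-time gap is easy to ballpark. A rough sketch - the throughput figures below are assumptions for illustration, not measurements off my controller:

```python
TB = 1e12  # bytes

def hours(size_bytes, mb_per_s):
    """Sequential rebuild time at a given average throughput."""
    return size_bytes / (mb_per_s * 1e6) / 3600

# RAID10 resync only copies the failed drive's mirror partner.
print(f"2TB mirror resync  @ ~250MB/s: {hours(2 * TB, 250):.1f} h")   # ~2.2 h

# An unRAID parity rebuild has to walk the full 12TB virtual disk.
print(f"12TB parity rebuild @ ~150MB/s: {hours(12 * TB, 150):.1f} h")  # ~22.2 h
```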

 

I've seen some old threads about this, but most seem to be abandoned and full of people chiming in about how stupid it is. I could understand some arguments against going out and purchasing a bunch of hardware to build this setup, but when it's just sitting in my lap, available for use? So far it's been fairly solid. Is anyone else doing anything remotely similar with hardware RAID and virtual disk presentations?


I'd probably replace "well, it was free" with "well, it was given". With that type of hardware and the huge number of drives, I can't imagine there's no impact on the electricity bill.

 

While I understand it's nice to experiment with, I would certainly not rely on a huge number of drives driven by hardware RAID. I can't speak for others, but for me, going unRAID was exactly about avoiding such dependencies: if and when something happens to one drive, fine, just replace that one. With parity at high risk of failure (because it's hardware RAID, and while it gets rebuilt you don't have parity, hence no protection), I would certainly not put any data I value on such a server. But as I said, that is only my point of view. Others might have another.

19 minutes ago, denishay said:

parity at high risk of failure (because it's hardware RAID, and while it gets rebuilt you don't have parity, hence no protection)

I'd rather lose a parity volume than a data volume. Both are needed to recreate a missing data volume, but if the dead volume is parity, you haven't lost any data. If you lose 3 data volumes, an intact parity is worthless.

2 hours ago, denishay said:

I'd probably replace "well, it was free" with "well, it was given". With that type of hardware and the huge number of drives, I can't imagine there's no impact on the electricity bill.

 

While I understand it's nice to experiment with, I would certainly not rely on a huge number of drives driven by hardware RAID. I can't speak for others, but for me, going unRAID was exactly about avoiding such dependencies: if and when something happens to one drive, fine, just replace that one. With parity at high risk of failure (because it's hardware RAID, and while it gets rebuilt you don't have parity, hence no protection), I would certainly not put any data I value on such a server. But as I said, that is only my point of view. Others might have another.

 

I'm not sure I'm following your logic. Losing parity is losing parity - so you're saying it's better to have a single 10TB drive, with potentially a day or more of rebuild time (plus ordering time, because who's keeping a cold spare at that size/cost), than a RAID10-backed virtual disk made from 2TB drives with a dramatically shorter rebuild time? And I have 140+ cold spares. And I have to actually lose the entirety of one of the mirrored pairs before the virtual disk becomes compromised, which means my parity is actually more robust.
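To put a rough number on that, here's a simplified sketch - it assumes a second failure is equally likely to hit any surviving drive, which ignores correlated failures:

```python
# With 12 drives in RAID10, a first failure only becomes fatal if the
# second failure hits that drive's mirror partner.
drives = 12
p_fatal = 1 / (drives - 1)
print(f"Chance a 2nd failure compromises the vd: {p_fatal:.1%}")  # ~9.1%

# A single-drive parity disk, by contrast, is lost on any one failure, and
# the array then runs unprotected for the full ~22h parity rebuild.
```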

 

I get the "more disks = greater risk of failure" argument, but that's why I've gone with smaller disks for faster rebuilds. I don't think I'm any more exposed than someone rolling a single 10TB drive as their parity drive.

 

What am I missing?

 

23 minutes ago, primeval_god said:

One thing I would be worried about is the potential for the RAID card/HW you are using to fail. Unless, of course, you got a spare in that treasure trove of hardware. I don't know what a replacement would cost, but I would fear a 12+ disk hardware RAID card could be pricey.

I have several 16- and 24-port cards as a result of this haul, but your concern is valid in "normal" scenarios.


"well, it was free" ..... OP actually post all is free , 160 machine, 140 HDDs , cards , how about electricity cost ....

Of course .... no need to paid.

 

Does Unraud licence can be free ? I really want to know.

17 minutes ago, Benson said:

"well, it was free" ..... OP actually post all is free , 160 machine, 140 HDDs , cards , how about electricity cost ....

Of course .... no need to paid.

 

Does Unraud licence can be free ? I really want to know.

What?

 

Not 160 servers - just around 160 2TB drives, with about 140 of them sitting as cold spares right now. I literally took all of the servers with the intent of only using one or two and keeping everything else for spares. Electricity cost hasn't been measured, but I've since stopped using several other virtualization hosts, so it should be about a wash (or less) compared to when all of that was running.
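If anyone wants to ballpark the power bill anyway, the math is trivial. The wattage and rate below are pure placeholders, since I haven't put a meter on it:

```python
watts = 500            # assumed average draw for a loaded 48 bay chassis
rate_per_kwh = 0.12    # assumed electricity rate in $/kWh
hours_per_month = 24 * 30

monthly_cost = watts / 1000 * hours_per_month * rate_per_kwh
print(f"~${monthly_cost:.0f}/month at {watts}W")  # ~$43/month
```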

 

No additional unRAID licensing - I already had unRAID running in a 16 bay server with the full Pro license, and I just moved that OS, disks, and HBAs over into this larger chassis.

On 6/10/2019 at 1:14 AM, thaddeussmith said:

I hate the thought of throwing a 10TB disk into the parity slot, however, so I've instead got 12x 2TB drives running in the bottom of the chassis in RAID10. This presents to unRAID as 12TB and gives me plenty of overhead for whatever sizes the 10TB drives happen to show up with. These 2TB disks are old and they're going to fail, but I have about 140 sitting and waiting as spares, ready to pop in and automatically rebuild on the HW controller.

 

This actually happened a couple of weeks ago - 2 drives failed. I replaced them and the RAID10 rebuild completed in just a couple of hours. unRAID was none the wiser, so I didn't have to worry about a parity rebuild on the 12TB virtual disk, which takes about 22 hours.

 

I've seen some old threads about this, but most seem to be abandoned

Hey, great going. I used to do this when I got a few new 4TB drives and my servers were all 3TB or 2TB. My Areca cards would create this kind of virtual parity with a bunch of spare 2TB drives. It worked great for me. You just have to manage your risk - for me, I went dual parity at the same time, so the risk was minimal. I quit using it as the Areca 1280 cards are getting long in the tooth, and I got a bunch of 8TB drives all at once for the next upgrade. But for the initial switch to larger drives it is perfectly fine, especially when you have a boatload of smaller drives sitting there doing nothing.

 

Is this 48 bays in a 4U chassis, à la the Backblaze design?


Were you looking for more kudos?

 

 

Here’s a few more

 

[attached image]

 

 

Side note: some folks have done hardware RAID for parity before. Some kept it, some abandoned it. It's not a popular route, for various reasons. I think Johnny still does this.

 


As I said, opinions... not necessarily facts.

 

 

I considered it, but based on posted results, parity checks and disk writes didn't seem enough faster to justify spinning up extra disks or adding another layer of configuration for me. And PCIe slots are at a premium for me.

 

A couple of years ago, when I started with unRAID, I used to have lots of enterprise equipment and a big rack. Now I find that I'm more interested in density, seeing how much I can cram into smaller spaces and finding better ways to use my smaller iso box rack.

 


I'm already at 8x 8TB Easystore shucked drives, and a couple of those drives were funded by selling parts from free PCs.

 

I don't see any server gear. I wish... But I do manage to sell some parts from those freebies, like hard drives and the occasional video card or case. The real gems for me have been three LG 4K-ripping-capable optical drives, with their older firmware that has never been updated.
On 6/10/2019 at 8:43 AM, thaddeussmith said:

 

I'm not sure I'm following your logic. Losing parity is losing parity - so you're saying it's better to have a single 10TB drive, with potentially a day or more of rebuild time (plus ordering time, because who's keeping a cold spare at that size/cost), than a RAID10-backed virtual disk made from 2TB drives with a dramatically shorter rebuild time? And I have 140+ cold spares. And I have to actually lose the entirety of one of the mirrored pairs before the virtual disk becomes compromised, which means my parity is actually more robust.

 

I get the "more disks = greater risk of failure" argument, but that's why I've gone with smaller disks for faster rebuilds. I don't think I'm any more exposed than someone rolling a single 10TB drive as their parity drive.

 

What am I missing?

 

 

So, where were you when I had to have my server rebuilt and went and bought 2TB drives as cheap as I could?

I'm guessing you would have sold me a few cheaper...?

On 8/30/2019 at 9:52 PM, guiri said:

 

So, where were you when I had to have my server rebuilt and went and bought 2TB drives as cheap as I could?

I'm guessing you would have sold me a few cheaper...?

Ha, I might have at the time! Now, not so much... I've decided to roll back to 2TB drives for the sake of replacement cost. So I'm running 28+2 2TB drives for the main array, then 16 2TB drives in RAID10 for the cache drive, used as unprotected scratch space, etc. I have a separate host for playing around with virtualization, so SSDs really aren't needed in my unRAID use case. I'll roll through the stash of 2TB replacement spares and not feel so bad once I start having to spend on retail 2TB drives. At 56TB + 16TB, I've got all the storage I really need, and I ultimately just purge content that doesn't get watched within a year or two.
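For anyone counting along, the capacity math for that layout (all 2TB drives):

```python
data, parity = 28, 2
array_tb = data * 2            # 56TB protected array (parity adds no capacity)
cache_tb = (16 * 2) // 2       # 16 drives in RAID10 -> 16TB scratch
print(f"{array_tb}TB array + {cache_tb}TB cache")  # 56TB + 16TB
```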

