Norco 4224 vs Backblaze Storage Pod 4.5 - 4U Enclosures


limefrog

Recommended Posts

I asked the staff at Lime-Tech if they had plans to support >25 drives in the future, and Tom replied that a future update will support 30. I'm new to unRAID and to building servers - I want to build a future-proof system that will accommodate the increased 30-drive support (and possibly >30) in upcoming unRAID updates.

 

The #1 case I'm looking at is the Norco RPC-4224 4U Rackmount Server Case with 24 drive bays ($). However, the Backblaze-inspired Storage Pod 4.0 ($725) and Storage Pod 4.5 ($749) can support 45 drives and are still 4U enclosures. The Backblaze Storage Pods are considerably more expensive, but given that I won't have to build another server in order to exceed the 24-drive capacity of the Norco, it would be more cost-efficient in the long run to go with a Backblaze Storage Pod.
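To sanity-check the "cheaper per bay in the long run" idea, here's a rough cost-per-bay sketch (chassis only, ignoring the extra PSUs, boards, and backplanes each design needs); the Norco price is just a placeholder since I haven't pinned one down:

```python
# Rough chassis cost per drive bay -- a back-of-the-envelope sketch only.
# The Norco price below is a PLACEHOLDER; substitute the actual street price.
def cost_per_bay(chassis_price, bays):
    return chassis_price / bays

norco_price = 400.0  # hypothetical placeholder for the RPC-4224
print(f"Norco RPC-4224:  ${cost_per_bay(norco_price, 24):.2f} per bay")
print(f"Storage Pod 4.0: ${cost_per_bay(725.0, 45):.2f} per bay")
print(f"Storage Pod 4.5: ${cost_per_bay(749.0, 45):.2f} per bay")
```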

 

Does anyone have any knowledge or experience with the Backblaze Storage Pods? I know the 4.5 version went back to using backplanes while the 4.0 uses a direct-wire approach. It would seem that the direct-wire approach would be cheaper ... but I don't have much experience with this. I also don't know of any place that sells the backplanes Backblaze uses in their 4.5. I'm not even sure I'd need backplanes if I went with the 4.5.

Link to comment

The one thing I see with the Pod is that it appears the drives come out the top of the case.  Makes it hard to easily swap drives if you have it mounted in a rack.  The other thing that really comes to mind is do you really need that many drives? I currently have a 4220 that really is only half full at this point. I only add drives as I really need them.  Started out a long time ago with a 500gb parity and worked my way up to the latest 5TB.  My thought is that as time goes on I will add drives as they are cheaper in the long haul. 

Link to comment

The one thing I see with the Pod is that it appears the drives come out the top of the case.  Makes it hard to easily swap drives if you have it mounted in a rack.

 

Good point, I forgot about that.

 

The other thing that really comes to mind is do you really need that many drives?

 

I don't really know if I will need that many drives, to be honest. I plan to store mostly large movie files for Plex. I'm hoping that the system I build will last for 10 years, and I only plan to add additional drives as I need them. You're probably right that I may never need more than the 24-drive capacity the Norco 4224 offers, but the option to accommodate more makes me feel more secure in case I do need more than 24 drives.

 

Thanks for your input!

Link to comment

I want to build a future-proof system that will accommodate the increased 30-drive support ... The #1 case I'm looking at is the Norco RPC-4224 ... However, the Backblaze-inspired Storage Pod 4.0 and Storage Pod 4.5 can support 45 drives and are still 4U enclosures.

 

You can start with the Norco, and if you run out of space for new drives, add a second case next to the Norco with only a power supply and a little switchboard in it. Then you can run SFF cables from the Norco's controllers to the second case.

Link to comment

I started this unRAID adventure with an "as many drives as I can stuff in the box" plan for building and growing capacity. Two hardware revisions and 3 major unRAID versions later, my growth plan now is to have fewer than the max number of drives (it's useful to have empty bays/SATA connections) but to use LARGER drives. I now have more than twice the capacity I started with, but a smaller total number of drives. Just a thought...

Link to comment

I think you'll find that 24 drives is PLENTY as long as you start off using reasonably large drives => either 6TB WD Reds or the 8TB shingled Seagates  [depending on whether you are willing to trust the SMR technology ... you may want to read this:  http://lime-technology.com/forum/index.php?topic=39526.0 ].

 

In either case, that will let you grow to a very prodigious amount of storage with 24 drives ... and if you really DO need to grow beyond that, you can add a 2nd rack enclosure directly above or below the Norco and (as already noted) just run SFF cables between them.
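To put rough numbers on "prodigious" (a quick sketch, single parity drive, ignoring filesystem overhead):

```python
# Approximate usable capacity of a 24-bay unRAID array with one parity drive.
# Ignores filesystem overhead and the TB/TiB distinction -- ballpark only.
def usable_tb(bays, drive_tb, parity=1):
    return (bays - parity) * drive_tb

for drive_tb in (6, 8):
    print(f"{drive_tb}TB drives: {24 * drive_tb}TB raw, "
          f"~{usable_tb(24, drive_tb)}TB usable")
# 6TB drives: 144TB raw, ~138TB usable
# 8TB drives: 192TB raw, ~184TB usable
```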

 

Link to comment

Man you guys are awesome! Saving me so much time from making my own mistakes.

 

You can start with the Norco, and if you run out of space for new drives, add a second case next to the Norco with only a power supply and a little switchboard in it. Then you can run SFF cables from the Norco's controllers to the second case.

 

Had no idea this was possible. Mind blown  :o

 

Sorry for the dumb question, but what exactly is a "switchboard"? It's too generic a term for a Google, Wikipedia, Amazon or Newegg search to be useful; could you point to a specific model?

 

And by "SFF" cables I assume you mean the ones I find here on wikipedia and newegg.

 

I think you'll find that 24 drives is PLENTY as long as you start off using reasonably large drives => either 6TB WD Reds or the 8TB shingled Seagates  [depending on whether you are willing to trust the SMR technology ... you may want to read this:  http://lime-technology.com/forum/index.php?topic=39526.0 ].

 

In either case, that will let you grow to a very prodigious amount of storage with 24 drives ... and if you really DO need to grow beyond that, you can add a 2nd rack enclosure directly above or below the Norco and (as already noted) just run SFF cables between them.

 

Thanks for the reading material; I'll educate myself on SMR. And I plan to use reasonably large drives as you suggest, >= 6TB. I'm a big fan of remuxing my media (although I may settle for high-quality encodes in the future to decrease the file size), so the size of a movie can range from 20-40GB. My motivation for remuxing (as opposed to encoding, which would decrease the file size) is to future-proof myself.
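Just to ballpark how far that goes with remux-sized files, here's a rough sketch assuming 6TB data drives and one parity drive out of the 24 bays:

```python
# Back-of-the-envelope remux count -- assumes 6TB data drives, one parity
# drive out of 24 bays, decimal GB/TB, and no filesystem overhead.
data_drives = 23
drive_tb = 6
total_gb = data_drives * drive_tb * 1000  # 138,000 GB of data space

for movie_gb in (20, 40):
    print(f"~{total_gb // movie_gb} movies at {movie_gb}GB per remux")
# ~6900 movies at 20GB per remux
# ~3450 movies at 40GB per remux
```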

 

Even with the huge file size of remux media, I don't foresee myself exceeding 24 drives in the near future, but I want to be prepared. The suggestion you both outline regarding connecting a 2nd enclosure with SFF cables seems perfect; I didn't realize that was a possibility. Thanks guys.

Link to comment

 

Had no idea this was possible. Mind blown  :o

 

Sorry for the dumb question, but what exactly is a "switchboard"? It's too generic a term for a Google, Wikipedia, Amazon or Newegg search to be useful; could you point to a specific model?

 

And by "SFF" cables I assume you mean the ones I find here on wikipedia and newegg.

 

 

Sorry, "switchboard" was the wrong term on my part. I meant "power board" - see here: http://www.servethehome.com/supermicro-cse-ptjbod-cb1-jbod-power-board-diy-jbod-chassis-made-easy/

 

And about the cables - you are right, but it depends on what you will have in the second case. If it has some sort of backplanes (like the Norco case), then SFF-to-SFF cables (assuming you use a PCIe controller like the IBM M1015 or similar in the Norco case); if it's just plain HDDs, then SFF-to-SATA breakout cables like the ones in the Newegg link above.
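In other words, the cable choice boils down to what the far end looks like - roughly like this (a sketch assuming the controller end is an SFF-8087 port, e.g. an M1015):

```python
# Cable choice for the second case, assuming the controller end is an
# SFF-8087 port (e.g. IBM M1015 or similar). One port serves 4 drives.
CABLE_FOR_FAR_END = {
    "sff_backplane": "SFF-8087 to SFF-8087 (one cable per 4-drive backplane)",
    "bare_drives":   "SFF-8087 to 4x SATA breakout (one cable per 4 drives)",
}

for far_end, cable in CABLE_FOR_FAR_END.items():
    print(f"{far_end}: {cable}")
```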

Link to comment

As uldise noted, the Supermicro CSE-PTJBOD-CB1 is a simple little board that lets you power on/off a chassis without an actual motherboard.    This is very convenient if you have a 2nd chassis for additional drives and need a way to turn on the power supply.

 

As for cables between the two boxes => the suggestion for SFF cables is a good one, as it minimizes the "cable clutter" that you have to run between the two boxes; but the reality is you can run whatever you need, based on the kind of controllers you have. It's likely, however, that if you're building up a system with that many drives you indeed have controllers with SFF connections ... and I'd think you'd also want to find a chassis with a backplane that uses SFF as well, so you can run just one cable for every 4 drives.
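For a sense of scale, the cable and controller-port count falls straight out of "one SFF cable per 4 drives" (a sketch assuming two-port HBAs like the M1015; adjust for whatever controllers you actually use):

```python
import math

# Cables and HBAs needed when each SFF-8087 cable serves 4 drives and each
# HBA has 2 SFF-8087 ports (an assumption -- e.g. an IBM M1015-class card).
def sff_cabling(drives, drives_per_cable=4, ports_per_hba=2):
    cables = math.ceil(drives / drives_per_cable)
    hbas = math.ceil(cables / ports_per_hba)
    return cables, hbas

for drives in (24, 45):
    cables, hbas = sff_cabling(drives)
    print(f"{drives} drives -> {cables} SFF-8087 cables, {hbas} two-port HBAs")
# 24 drives -> 6 SFF-8087 cables, 3 two-port HBAs
# 45 drives -> 12 SFF-8087 cables, 6 two-port HBAs
```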

 

An alternative is to put some SAS expanders in the 2nd chassis, which will let you run multiple drives from a single SAS connection. These work very well, except that since they're sharing the bandwidth of the single SAS connection, parity checks and drive rebuilds will be somewhat throttled. [Although with SAS-1200 you could split a single connection among up to 6 drives without significant bandwidth reduction.]
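To show roughly how that sharing plays out during a parity check, here's a sketch assuming a 4-lane 6Gb/s SAS2 uplink and ignoring protocol overhead; actual numbers depend on your controller and expander:

```python
# Approximate per-drive bandwidth when N drives share one 4-lane SAS2 uplink.
# Assumes 6Gb/s per lane and ignores protocol overhead -- ballpark only.
LANES = 4
GBPS_PER_LANE = 6.0
uplink_mbytes = LANES * GBPS_PER_LANE * 1000 / 8  # ~3000 MB/s total

for drives in (4, 6, 12, 24):
    print(f"{drives} drives sharing one uplink: ~{uplink_mbytes / drives:.0f} MB/s each")
# 4 -> ~750 MB/s, 6 -> ~500 MB/s, 12 -> ~250 MB/s, 24 -> ~125 MB/s
```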

 

Link to comment

And about the cables - you are right, but it depends on what you will have in the second case. If it has some sort of backplanes (like the Norco case), then SFF-to-SFF cables (assuming you use a PCIe controller like the IBM M1015 or similar in the Norco case); if it's just plain HDDs, then SFF-to-SATA breakout cables like the ones in the Newegg link above.

 

... and I'd think you'd also want to find a chassis with a backplane that uses SFF as well, so you can run just one cable for every 4 drives.

 

Unfortunately, I have some more reading to do now because I'm totally confused as to what "backplanes" are.

 

https://en.wikipedia.org/wiki/Backplane

https://www.backblaze.com/blog/storage-pod-4-5-tweaking-a-proven-design/

https://www.techopedia.com/definition/2150/backplane

http://www.45drives.com/wiki/images/2/2e/IMG_8320.jpeg

 

I thought a backplane was the "board" that connected different chips, ports, etc. And based on the Backblaze article about their Storage Pod 4.5 design, I thought backplanes in their sense were how they were able to accommodate 45 drives in a single chassis - by having 3 PCIe SATA controllers on a single mobo, with each SATA controller connected to 3 backplanes via SATA cable (making 9 backplanes), and each backplane connected to 5 drives for a total of 45 drives (9 x 5 = 45). I never understood the rationale behind this method; why not just buy a SATA controller with >10 ports and have at least 5 of them? That way you could direct-wire from the SATA card to the drive itself without the need for a backplane.

 

I guess the reason I'm confused is because I didn't realize the Norco 4224 used "backplanes."

 

Thanks for the help guys, you all rock! Gary, you seem to be everywhere on this forum!

Link to comment

Unfortunately, I have some more reading to do now because I'm totally confused as to what "backplanes" are. ... I never understood the rationale behind this method; why not just buy a SATA controller with >10 ports and direct-wire from the SATA card to the drive itself without the need for a backplane? ... I guess the reason I'm confused is because I didn't realize the Norco 4224 used "backplanes."

 

Backplanes come in different types and from different manufacturers too. For example, the Norco 4224 has 6 backplanes - one for every drive row. You connect a Molex power cable to each of them plus one SFF-to-SFF cable each, and that way each backplane connects 4 drives. If you want to connect a backplane to motherboard SATA ports instead, then you need a breakout cable. BUT, Norco also has a very similar backplane with 4 SATA connectors on it - that one you can connect directly to motherboard SATA ports, or with another breakout cable if you want to connect it to a controller.

There is a very similar 24-drive case from Supermicro too - that case has one large backplane with all of these connectors on it. I prefer the Norco style because of the ventilation.
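So for the Norco 4224, the wiring shopping list works out like this (a quick sketch of the SFF-backplane variant described above):

```python
# Wiring count for the Norco 4224 with the SFF-style backplanes described
# above: one backplane per row of 4 drives, each needing one Molex power
# lead and one SFF-8087 cable to a controller (or a breakout cable to
# motherboard SATA ports).
BAYS = 24
DRIVES_PER_BACKPLANE = 4

backplanes = BAYS // DRIVES_PER_BACKPLANE
print(f"{backplanes} backplanes, {backplanes} Molex leads, "
      f"{backplanes} SFF-8087 cables for {BAYS} bays")
# 6 backplanes, 6 Molex leads, 6 SFF-8087 cables for 24 bays
```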

 

Gary, please correct me if I'm wrong :)

 

See here for a detailed explanation about cables: http://lime-technology.com/forum/index.php?topic=7003.0

Link to comment

I'm not sure if this has been mentioned, but can't a single unRAID installation only support up to 25 devices (including parity and cache)?  That means you'd need a separate server for more than that many drives anyway.

 

Or am I missing something?

 

On the side of probably never needing more than 24 drives, I have to agree.  I'd imagine that the smaller drives will die sooner than you'll run out of bays, and when you buy a replacement drive it will more than likely be at least double the capacity.  I just bought myself a 10-bay chassis that can currently support 72TB (80TB raw) of storage, as I'm using the Seagate 8TB SMR drives.  I imagine by the time I need more than 72TB of space there will be much larger drives out.

Link to comment

I'm not sure if this has been mentioned, but can't a single unRAID installation only support up to 25 devices (including parity and cache)?  That means you'd need a separate server for more than that many drives anyway.

 

Or am I missing something?

 

Perhaps you missed the first post, where he mentioned that Limetech had told him they were expanding to 30 drives in a future release.    However, I tend to agree that 24 drives is PLENTY of capacity ... if you need more space than you can get with 24 modern high-capacity drives, you likely need to be looking at multiple servers, not just more drives.

 

Link to comment
