SuperMicro CSE 847 36 Bay 4U SAS 3 Barebone Chassis 2x PWS-1K28P-SQ


Recommended Posts

Not related to the seller, just reporting a good experience and price.

If you're looking for a 36-bay server chassis, these are pretty nice. This seller includes two hot-swap -SQ power supplies, which are the super-quiet versions, and both the front 24-port and rear 12-port backplanes are expanders, meaning you can get away with a low-port-count HBA and don't need external expanders.
Both backplanes are SAS3/SATA3:

https://www.supermicro.com/manuals/other/BPN-SAS3-846EL.pdf

https://www.supermicro.com/manuals/other/BPN-SAS3-826EL.pdf

 

36x 3.5" LFF Hard Drive Trays w/ Screws Included

 

Sold as used, but it could have passed as new; I couldn't find any hint of prior use, no dust or blemishes anywhere!

No rack-mount rails included, but I doubt many of us are rack mounting our stuff.


https://www.ebay.com/itm/204179662584
Keep in mind the motherboard area is low profile due to the 12 drive bays under it in the back.

Standard motherboards can be used, with an optional cable to connect the front panel if needed:
https://www.ebay.com/itm/255893202956

  • 7 months later...
On 6/13/2023 at 7:01 AM, bbrodka said:

If you're looking for a 36-bay server chassis, these are pretty nice...


Hey, this is a great recommendation. I'd love to move to one of these, as I'm now running 19 HDDs in a Fractal Define 7 XL and can't shove any more in that thing. I have a few questions holding me back, as I don't quite understand what else I'd need to switch over:

 

  • I have two LSI 9207-8i HBAs right now. Would those be reusable, or is it a problem that they are mini-SAS and I'd need something else to connect up with the backplane?
  • For SATA drives, do you just use mini-SAS breakout cables from the backplane to the HDDs?
  • Also, right now I have these plugged into the x16 PCIe slots on my motherboard. One is PCIe 5.0 @ 16 lanes (the one intended for the GPU) and the other is just 4 lanes. Would this provide enough bandwidth to the backplane? Also, were there any challenges to wiring both backplanes? Does that take two cards, or do you wire the two together?
  • My cooler is also way too big right now. I've got Intel LGA 1700; was it easy to figure out what the max cooler size would be?

Sorry for all the questions. I'll keep researching, but this listing looks good and I'd totally be interested if it's possible with only slight alterations to my current build.

 

Thanks!

  • 2 weeks later...
On 2/3/2024 at 9:02 PM, manofoz said:
  • I have two LSI 9207-8i HBAs right now. Would those be reusable, or is it a problem that they are mini-SAS and I'd need something else to connect up with the backplane?
    • You should only need one cable for each backplane (depending on the backplane's configuration). You'll need the correct cable for the backplane used (likely SFF-8643 to SFF-8087 cables).
  • For SATA drives, do you just use mini-SAS breakout cables from the backplane to the HDDs?
    • No breakout cables needed; this case is not like a Norco. The backplane provides power and data connections to all drives. Drives plug into their slot, and you just run a single cable from each backplane (depending on the backplane's configuration).
  • Also, right now I have these plugged into the x16 PCIe slots on my motherboard. One is PCIe 5.0 @ 16 lanes (the one intended for the GPU) and the other is just 4 lanes. Would this provide enough bandwidth to the backplane? Also, were there any challenges to wiring both backplanes? Does that take two cards, or do you wire the two together?
    • You should only need one card with this case.
    • Wiring up the backplane is a pain, as you have to slide out the motherboard tray. Make sure you run all the power cabling before installing the motherboard.
  • My cooler is also way too big right now. I've got Intel LGA 1700; was it easy to figure out what the max cooler size would be?
    • For a 36-bay server you're looking at low-profile PCIe cards and very limited cooler height. You'd be looking at either passive cooling or something like a 2U Dynatron Q5 for air.
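For a rough sense of whether a single HBA can feed both backplanes, here is a back-of-the-envelope comparison. The figures (usable PCIe 3.0 x8 bandwidth, ~250 MB/s per spinning drive) are typical approximations, not measurements from this chassis:

```python
# Back-of-the-envelope bandwidth check for a single SAS3 HBA feeding
# 36 spinning drives (approximate figures, not measured on this build).

PCIE3_X8_GBPS = 7.9        # usable PCIe 3.0 x8 bandwidth, ~GB/s
SAS3_LANE_GBPS = 12 / 10   # 12 Gb/s per lane ~= 1.2 GB/s after encoding
LANES_PER_LINK = 4         # one mini-SAS link carries 4 SAS lanes
HDD_SEQ_GBPS = 0.25        # ~250 MB/s per modern 3.5" HDD

link_gbps = SAS3_LANE_GBPS * LANES_PER_LINK   # per-link SAS3 bandwidth
front_demand = 24 * HDD_SEQ_GBPS              # front backplane, worst case
rear_demand = 12 * HDD_SEQ_GBPS               # rear backplane, worst case

print(f"per-link SAS3 bandwidth: {link_gbps:.1f} GB/s")
print(f"front backplane worst-case demand: {front_demand:.1f} GB/s")
print(f"rear backplane worst-case demand:  {rear_demand:.1f} GB/s")
print(f"total demand vs PCIe 3.0 x8: {front_demand + rear_demand:.1f} "
      f"vs {PCIE3_X8_GBPS} GB/s")
```

As the numbers suggest, the worst case only occurs when every drive streams at once (parity checks and rebuilds); everyday workloads come nowhere near these limits.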

 

1 hour ago, OrneryTaurus said:

 

Thanks! This was great information, I think I have a plan on what exactly to do. I'll be moving in 9+ months when construction is done and plan to have a 42U rack at the new place.

 

Starting with some networking equipment tomorrow, I'll be provisioning what I can before I move. Will totally grab one of these; my Define 7 XL doesn't fit great in the small rack I grabbed to stage everything...

 

[attached image: rack]

6 hours ago, OrneryTaurus said:

 

I have concerns about this rack being able to support the weight of a 4U server. You'll want to double-check the maximum load the rack supports. It might require permanent installation to increase the maximum load it can handle.

 

It may be running up against the limit; it was dirt cheap and says 500 lbs "max load bearing" on eBay. Not sure what that becomes when it's on wheels.

 

Not really sure how to weigh the servers, but I'd say the current build is around 100 lbs. I didn't think the 4U would add too much more weight until I added more drives, and I'd be putting it at the bottom.

 

Other than that, I was going to put a Dream Machine Special Edition and an NVR Pro on it. For the new house I was going to get something like this:
 

[attached image: rack listing]

 

I think I can have it delivered to the garage and then have the movers put it at the termination point of our Ethernet cable drops. After assembling the little one, I don't really want to assemble a 300 lb one...

7 hours ago, manofoz said:

It may be running up against the limit, it was dirt cheap and says 500 lbs "max load bearing" on eBay. Not sure what it is when it's on wheels. 

 

I think the server itself weighs about 50-75 lbs before adding anything extra. With a full complement of hard drives, you'll probably be closer to 150 lbs. I would be more worried about the rails and sliding the server out to service it with your existing rack.

  • 3 weeks later...
  • 2 weeks later...
On 3/6/2024 at 7:01 PM, MrCrispy said:

Looking at a similar SM chassis. Does anyone run these as a JBOD DAS? I want to use one with an existing server to add more storage. When you connect a DAS like this, how does it turn on/off?

The DIY JBODs I've read about look to use a cheap motherboard to plug an SAS expander into, so my guess would be powering on via a switch wired to the motherboard. I was thinking about going that route instead of getting one of these, but I went with the 36-bay chassis. Since Unraid's array doesn't go over 30 drives, I'm not sure I'd attach more disks vs. building a standalone NAS if I needed more than that. I'm using 20TB drives, so 28 wouldn't be bad, assuming the 2 parity drives count against the 30.

On 3/18/2024 at 8:11 PM, manofoz said:

The DIY JBODs I've read about look to use a cheap motherboard to plug in an SAS expander...

Yes, I believe it's the CSE-PTJBOD-CB2 power board. My question was more about how to turn the DAS on/off along with the main server, and does it go to sleep?

So you will have close to 600TB of storage? That is one hell of a server!
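A quick sanity check of that figure, under the assumptions stated above (Unraid's 30-device array limit, 2 parity drives, 20TB disks):

```python
# Capacity check for the build discussed above
# (assumes Unraid's 30-device array limit with 2 parity drives).

drive_tb = 20
array_limit = 30
parity = 2

data_drives = array_limit - parity    # parity drives count against the 30
usable_tb = data_drives * drive_tb    # usable capacity in TB

print(f"{data_drives} data drives x {drive_tb} TB = {usable_tb} TB usable")
```

28 x 20 TB works out to 560 TB usable, i.e. "close to 600TB" as noted.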

On 2/4/2024 at 12:02 AM, manofoz said:

Hey, this is a great recommendation. I'd love to move to one of these...

The backplane connectors are 4-lane, and cables were included. There are adapter cables for mini-SAS (SFF-8087) HBAs available, but personally I would just upgrade to a SAS3 HBA, not only to eliminate the extra cables, but also to bring your link speed up to 12 Gb/s, future-proofing you for SAS3 12 Gb/s drives down the road.

 

You don't need breakout cables; the backplanes handle all of that. They can handle SAS or SATA drives, or a mixture of them.

 

There are SAS3 inputs and outputs on the backplanes, so you can daisy-chain them if desired. I run one HBA to the front and another 8-port HBA to the back, but you could use one HBA to run all the bays if desired (it was wired that way by default).

 

A low-profile cooler and low-profile expansion cards are needed, as the motherboard cavity is only low-profile height (the space under it is used for 12 more drive bays).

 

As far as bandwidth, realistically the only time you have to worry about a bottleneck is during parity checks and rebuilds.
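To put a rough number on that: if both backplanes were daisy-chained behind a single 4-lane SAS3 link, with all 36 drives streaming at once (the parity-check worst case), each drive would get roughly the following share. The per-lane figure is an approximation, not a measurement:

```python
# Worst-case per-drive throughput during a parity check, assuming all
# 36 drives stream simultaneously behind one 4-lane SAS3 link
# (daisy-chained backplanes; approximate figures, not measured).

link_mbps = 4 * 1200          # 4 SAS3 lanes x ~1200 MB/s usable each
drives = 36
per_drive = link_mbps / drives

print(f"~{per_drive:.0f} MB/s available per drive during a parity check")
```

Since a modern HDD can sustain roughly 250 MB/s on its outer tracks, the shared link throttles parity checks somewhat; splitting the front and rear backplanes across two HBA ports, as described above, roughly doubles the per-drive share.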

On 3/23/2024 at 3:36 AM, bbrodka said:

There are SAS3 inputs and outputs on the backplanes, so you can daisy-chain them if desired...

 

I made the switch yesterday! I was quite nervous because my server was very stable and gets a lot of use, but thankfully it was smooth sailing.

 

All I needed was one Broadcom LSI SAS 9300-8i and a low-profile CPU cooler. My temps are great; the CPU is idling at 35 °C right now and no disk is any higher.

 

At full speed the fans that come with this beast are turbines, but I had been using the "Fan Auto Control" plugin, and after configuring it for the new fans it quieted down quite a bit. My server is in utility space, so some noise doesn't hurt anyone, but if it were in living space new fans would be needed. I also happened to have enough 4-pin fan splitters and extenders on hand to get them all plugged in (7 chassis fans + a CPU fan was more than my motherboard could handle).
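The kind of temperature-to-PWM mapping a plugin like Fan Auto Control applies can be sketched as below. The thresholds and duty cycles are invented for illustration and are not the plugin's actual defaults:

```python
# Illustrative fan curve: map the hottest drive temperature to a PWM
# duty cycle. Thresholds below are made-up examples, NOT the
# Fan Auto Control plugin's real defaults.

def fan_duty(max_drive_temp_c: float) -> int:
    """Return a PWM duty cycle (percent) for a given peak drive temp."""
    curve = [(30, 25), (35, 40), (40, 60), (45, 100)]  # (temp C, duty %)
    duty = 15                      # floor: keep some airflow at idle
    for temp, pct in curve:
        if max_drive_temp_c >= temp:
            duty = pct             # step up at each threshold crossed
    return duty

for t in (28, 33, 38, 46):
    print(f"{t} C -> {fan_duty(t)}% duty")
```

The stepped curve keeps the turbines near idle until the drives actually warm up, which matches the "quieted down quite a bit" behavior described above.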

 

I was able to build out my temporary rack a bit more, and it doesn't seem overly strained. I'm not using the rails yet, as it's not sturdy enough for those, but I have them ready for when I move. For now it's sitting on a shelf. I'll need some blank panels to hide the wires, but it took me all day until ~2 AM yesterday to get this far...

 

[attached image: rack with server installed]

 


Looks nice. I'm real happy with mine too, but I've got 36 disks in it now and am thinking about getting another case. Now I've got to decide whether it's going to be another Unraid server or just an expansion shelf for the current one with its own HBA. Rumor has it that Unraid will support more than one array in the future. Then again, I could move some Dockers to the new server. Lots to think about...

