The Enclosure Thread



Hey all,

   I was hoping for some advice. I purchased my current case thinking I would never need more HDD space. Now I'm all out. I'm pretty sure I need to jump to something rack mounted with a "backplane", but I'm not trying to break the bank, so things like a new Stornado are out of the question, I think. I don't mind buying used if I can find something in good condition, and I'd like to move over my mobo and AIO cooler, but I can be flexible.

 

The goal is that my mobo fits and I can have at least 20 drive bays, but more is better.

 

Anyone willing to lend advice is greatly appreciated. 


I'm looking around for a rack mount chassis and was hoping for some recommendations. I don't need anything with a backplane and really just want to migrate from my tower to a rack mount case. I need space for at least 12 drives, but 16 would be preferable. I currently have 10 x 3.5" drives and 1 x 2.5" SSD cache drive. It would need to support an ATX motherboard.

 

I'm hoping to stay under $150 if possible.

 

A while back I was looking at the Rosewill RSV-L4500, but it looks like they have dropped everything except a redesigned 4100, which doesn't look very appealing.

 

Thanks for the help!

On 4/25/2020 at 10:04 PM, Aerodb said:

Hey all,

   I was hoping for some advice. I purchased my current case thinking I would never need more HDD space. Now I'm all out. I'm pretty sure I need to jump to something rack mounted with a "backplane", but I'm not trying to break the bank, so things like a new Stornado are out of the question, I think. I don't mind buying used if I can find something in good condition, and I'd like to move over my mobo and AIO cooler, but I can be flexible.

 

The goal is that my mobo fits and I can have at least 20 drive bays, but more is better.

 

Anyone willing to lend advice is greatly appreciated. 

Reading back through the older posts in hopes of finding clues on options to buy, but many of the cases suggested are EOL and no longer available. Still on the hunt for a new chassis or case to upgrade to, if anyone knows of a decent choice.

On 5/25/2020 at 3:55 PM, Aerodb said:

Reading back through the older posts in hopes of finding clues on options to buy, but many of the cases suggested are EOL and no longer available. Still on the hunt for a new chassis or case to upgrade to, if anyone knows of a decent choice.

In my search I found very few cases for those of us who have a 'large' number of disks. There are still a few in the 15 drive range, but they're all quite costly. In the end I kept watching local computer recyclers' inventory, local classified ads on Kijiji/Craigslist and of course eBay. Managed to find a Supermicro CSE-847 with an X8DTN+ motherboard, dual X5650 Xeons and 16GB of RAM.

 

The CSE-847 is the 36 bay storage chassis from Supermicro, with 24 bays on the front and 12 bays on the back. The gotcha is that the motherboard compartment can only accept half-height PCIe expansion cards. It came with a Dell HBA flashed to IT mode. I'm currently only using 22 of the 36 bays, so I have lots of room to grow.

 

Alas, one issue with the CSE-847: I want to upgrade the motherboard/CPU/RAM to something newer like a Ryzen 3950X or perhaps even a Threadripper setup. Unfortunately, with the half-height limitation it's less friendly to adding video cards for pass-through to VMs and Docker containers. Its more popular sibling, the 24 bay CSE-846, might be a better choice to watch for, as it can take full-height expansion cards.

 

Personally, I'm considering converting the unit to a DAS using the DAS controller available from Supermicro. One other gotcha is the SAS/SATA expander backplane: my CSE-847 came with the single controller backplanes for both the front 24 bays and the rear 12 bays, which means my disk I/O bandwidth is a little limited. If I'm adding content to the unRAID array, the contention for drive reads/writes does create a bottleneck that sometimes causes slowdowns. If you find a CSE-846/847, check the SAS/SATA expander backplanes to see if they are the dual controller variety.

 

The only other cases I've considered are the Storinator 45 drive chassis. Alas, they are quite expensive new and aren't as common on eBay or in the local classifieds. Backblaze uses a lot of these chassis and occasionally gives away models that have an expired service contract, but you have to be able to pick them up at their offices in California. They won't ship them, and the give-aways aren't very frequent. They also limit it to one per person to keep things fair for those who get in the line-up before 'give-away day'.

 

I'm pleased with my CSE-847 but am now watching for a deal on the DAS conversion kit and/or the dual controller SAS/SATA backplanes. Good luck with your continued search!

 


I'm currently using a Norco 4220 (I think; it's old) installed in a 4 post rack. I'm thinking long term and would like to get rid of the 4 post rack at some point, going either to MDF wall mounting for everything or maybe just a 2 post rack. I also think I can reduce my disk count from the current 16 HDDs to 8 HDDs.

 

So, to start, what are my options for something that can be wall mounted and carry 8 disks plus an ATX mobo? Something like a tower case with at least 8 x 3.5" bays that has mounting points on the back of the chassis?

 

If I can't find a good option for that I'd consider a center mounted case in the 2 post rack.


Hey, I'm looking to buy a RSV-L4412 but cannot find it anywhere. Do you have any alternatives? I'm looking for a 4U that can be mounted vertically as a desktop station, with at least 10 hot swap drive bays and a backplane included (does this even exist besides the RSV-L4412?).


So after much searching I found a fantastic site ( https://www.theserverstore.com/ ), and after checking the equipment they had, I gave them a call.

 

First off, I have to say that I have not actually purchased from them as of today. But the guy I talked to for 2 hours educated me on all I would need to know. With this added knowledge, my plans for my server upgrade changed a ton, but I can't recommend them enough based on my call. Be sure to ask about server PSU fan noise and PSU controller cards to reduce the noise.

 

If you're not sure what you need when you hit situations like mine (needing more than 12 drive bays and room to grow), at least check their site. They sell on eBay, but call them or visit the site directly; there's much better inventory and pricing than their eBay store.

2 hours ago, Aerodb said:

So after much searching I found a fantastic site ( https://www.theserverstore.com/ ), and after checking the equipment they had, I gave them a call.

.

.

If you're not sure what you need when you hit situations like mine (needing more than 12 drive bays and room to grow), at least check their site. They sell on eBay, but call them or visit the site directly; there's much better inventory and pricing than their eBay store.

 

They have the 36 bay Supermicro CSE-847 that I went with. I picked mine up locally for slightly less than they're charging, mainly because shipping for a big unit like that is almost as much as the server itself. Note that the unit appears to contain the BPN-SAS2-846EL1 (24 bay front) and BPN-SAS2-826EL1 (12 bay rear) backplanes. These are the single controller backplanes so if you want faster disk I/O, you'll want to upgrade them to the EL2 versions at a minimum.

 

Mine also has the EL1 backplanes, and my motherboard/CPU combo is one generation older than what they carry. unRAID is working quite nicely, but I'm thinking of upgrading my backplanes and converting the unit to a DAS. Then I can go ahead with my plans to buy a Threadripper setup and equip it with an HBA that can support the dual controller backplanes with higher throughput. The slower disk I/O is the only real complaint I have about mine.

 

Also note that the motherboard compartment on the CSE-847 only accepts low profile adapters/controllers, since the rear 12 bays take up the lower portion of the case. That makes it difficult to upgrade to a modern motherboard/CPU and house it in the server chassis, and it's the main reason I'm considering conversion to a DAS.

 

Let me know if you have any questions about the CSE-847, but I suspect that's what you're leaning towards. Mine has 25 drives installed already - dual parity drives, 20 data drives, 2 x 2TB SSDs in a cache pool and 1 x 1TB SSD mounted with Unassigned Devices for some VMs and Docker containers.

 

 

16 hours ago, johnnie.black said:

EL2 are dual expander models, but that's for redundancy, not faster I/O. You can still use dual link with the EL1 models to double the available bandwidth.

Thank you for that clarification. I kept making the same assumption that only dual controller backplanes work with dual link.

 

The EL1 backplanes in my CSE-847 only have one expander controller each. Most of the ones I've seen on eBay and used-equipment sites are the same: only one controller and no secondary SFF-8087 miniSAS connectors on the backplane. I can't see any mention of dual link capability for an EL1 in the backplane/expander manual, but there are 3 SFF-8087 connectors on my 24 port backplane.

 

Right now my system has one port from the LSI2008 HBA going to the front 24 port backplane and the second LSI2008 port going to the rear 12 port backplane. I haven't read a great deal about the backplanes, but there are numerous mentions of being able to use multipath I/O on backplanes with 2 expander controllers. I now understand that means dual port (EL2) expanders allow multiple HBAs to access each drive for failover/redundancy.

 

I also note that the cabling from the HBA to the backplane has an extra ribbon cable running alongside the main 4 lane SFF-8087 to SFF-8087 cable (see picture below). I assume that's just one way of building the cables to isolate the 4 SATA-compatible lanes from the extra connectivity needed for SAS drives. As I'm only using SATA drives in my setup, my standard SFF-8087 to SFF-8087 cables could be used for the cascade connection to the rear 12 port backplane.

 

[Image: miniSAS cable with extra ribbon cable]

 

The manual (linked above and here) mentions that you can improve throughput by using a cascaded setup (section 3-5, page 3-13). Instead of using one port off the LSI2008 HBA for each backplane, I should be able to use the cascade connection from the front 24 port backplane to the rear 12 port one. To me that seems more like it would decrease throughput, as only one HBA port would then be used to connect to both backplanes in the CSE-847.

 

But if dual link is possible, I assume my setup should be something like this:

 

LSI2008 HBA port 0 to BPN-SAS2-846EL1 primary port J0

LSI2008 HBA port 1 to BPN-SAS2-846EL1 primary port J1

BPN-SAS2-846EL1 primary port J2 to BPN-SAS2-826EL1 primary port J0
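
To get a rough sense of what dual link would buy, here's a quick back-of-the-envelope sketch. The figures are my own assumptions, not from the Supermicro manual: a SAS2 lane is 6 Gb/s raw, call it ~600 MB/s usable after encoding and protocol overhead, and (worst case) every populated bay streaming at once.

```python
# Back-of-the-envelope comparison of the current wiring vs the proposed
# dual link + cascade wiring above. Assumed numbers, not Supermicro specs:
# ~600 MB/s usable per SAS2 lane, every bay streaming simultaneously.

LANE_MBPS = 600  # assumed usable throughput per SAS2 lane

def per_drive(lanes: int, drives: int) -> float:
    """Evenly shared link bandwidth per drive, in MB/s."""
    return lanes * LANE_MBPS / drives

# Current: one 4-lane link to the front 24-bay, one 4-lane link to the rear 12-bay
print(f"front (4 lanes / 24 bays): {per_drive(4, 24):.0f} MB/s per drive")
print(f"rear  (4 lanes / 12 bays): {per_drive(4, 12):.0f} MB/s per drive")

# Proposed: dual link (8 lanes) to the front, rear backplane cascaded behind it,
# so all 36 bays share the 8 lanes back to the HBA
print(f"dual link + cascade (8 lanes / 36 bays): {per_drive(8, 36):.0f} MB/s per drive")
```

In practice unRAID rarely hits every drive at once outside of a parity check or rebuild, so these worst-case floors are pessimistic, but they show why dual link to the front backplane is attractive.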

 

[Images: BPN-SAS2-846EL1 (front) and BPN-SAS2-826EL1 (rear) backplanes]

 

I also read that while dual link can improve throughput, many users say it's a matter of having the right firmware on the backplanes. I'll definitely spend some more time investigating this now. I was reluctant to spend $$$ on a set of EL2 backplanes, and now I may not have to. So expect a donation to your 'coffee/beer/hobby' fund @johnnie.black 😀

 

Thanks again!

Edited by AgentXXL
10 hours ago, AgentXXL said:

Right now my system has one port from the LSI2008 HBA going to the front 24 port backplane and the second LSI2008 port going to the 12 port rear backplane.

You can have dual link to the front expander and then cascade the second expander, but ideally and for best bandwidth you'd have two HBAs, one going to each.

5 hours ago, johnnie.black said:

You can have dual link to the front expander and then cascade the second expander, but ideally and for best bandwidth you'd have two HBAs, one going to each.

 

Sometimes it's just persistence in refining your search terms. I tried all sorts of expressions containing 'dual link', but the majority of the results were either from Supermicro (where the manuals don't seem to illustrate a dual link/cascade config) or from the iXsystems (FreeNAS/TrueNAS) forums.

 

Even searching for dual link here on the unRAID forums didn't get me anywhere. But somehow I finally ended up finding a post discussing it, and with the same scenario as mine, i.e. a CSE-847 enclosure with a 24 port front backplane and a 12 port rear backplane.

 

 

Thanks again @johnnie.black!


So, doing some more research, I found the video that Level1Techs did with Gamers Nexus where they built a rig to act as the "brain" of a disk shelf.

 

Has anyone done anything like this before? I'm curious how to connect the two boxes, and what the controller card was that Wendell mentions but doesn't talk about.

 

Also, will I need to use ZFS, or is what unRAID uses (if that's the same thing) good enough? I haven't had any issues so far.

 

Long story short, I'm thinking this will be a cheap way to gain drive slots. Does anyone know much about this?

On 6/21/2020 at 12:18 AM, Aerodb said:

So, doing some more research, I found the video that Level1Techs did with Gamers Nexus where they built a rig to act as the "brain" of a disk shelf.

 

Has anyone done anything like this before? I'm curious how to connect the two boxes, and what the controller card was that Wendell mentions but doesn't talk about.

 

Also, will I need to use ZFS, or is what unRAID uses (if that's the same thing) good enough? I haven't had any issues so far.

 

Long story short, I'm thinking this will be a cheap way to gain drive slots. Does anyone know much about this?

Yes, I have this kind of setup.

 

You can externalise hard drives in a specialised 'disk shelf' chassis, or simply use any suitable PC case you have lying around.

 

The systems are usually connected through a SAS controller (HBA) in the main system. The controller will have internal or external SAS ports that are typically connected to the target / external drive array using something like a Dual Mini SAS 26pin SFF-8088 to 36pin SFF-8087 Adapter.

 

unRaid will see these external drives just like regular drives. You can add them to the array, use them for parity, cache or as unassigned disks.
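
If you want to sanity-check the result, here's a minimal sketch (assuming a Linux host such as an unRAID box, reading sysfs directly) that lists every drive with its model and size, so you can confirm the shelf's disks show up alongside the internal ones:

```python
# List every sd* block device with its model and size by reading sysfs.
# Assumes a Linux host; the output looks the same whether a drive hangs
# off the motherboard SATA ports, an HBA, or an external shelf behind one.
import os

SYS_BLOCK = "/sys/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    if not dev.startswith("sd"):        # skip md, loop, nvme etc. for brevity
        continue
    base = os.path.join(SYS_BLOCK, dev)
    try:
        with open(os.path.join(base, "size")) as f:
            size_gb = int(f.read()) * 512 / 1e9   # 'size' is in 512-byte sectors
        with open(os.path.join(base, "device", "model")) as f:
            model = f.read().strip()
    except OSError:
        continue
    print(f"{dev}: {model} ({size_gb:.0f} GB)")
```

Anything hanging off the shelf should appear in that list just like the internal drives do.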

 

You might be interested in this blog post where I touch on the idea. It can get a lot more complicated with expanders, disk backplanes etc., but in basic terms, what you envisage is possible, and quite common.

 

Search on here or the interweb for 'disk shelf', 'hba', 'SAS controller', 'backplane' to get started.

 

 

 

Edited by meep
On 6/22/2020 at 4:18 PM, meep said:

Yes, I have this kind of setup.

 

You can externalise hard drives in a specialised 'disk shelf' chassis, or simply use any suitable PC case you have lying around.

 

The systems are usually connected through a SAS controller (HBA) in the main system. The controller will have internal or external SAS ports that are typically connected to the target / external drive array using something like a Dual Mini SAS 26pin SFF-8088 to 36pin SFF-8087 Adapter.

 

unRaid will see these external drives just like regular drives. You can add them to the array, use them for parity, cache or as unassigned disks.

 

You might be interested in this blog post where I touch on the idea. It can get a lot more complicated with expanders, disk backplanes etc., but in basic terms, what you envisage is possible, and quite common.

 

Search on here or the interweb for 'disk shelf', 'hba', 'SAS controller', 'backplane' to get started.

 

 

 

Thank you for this info and your blog post. It helped explain a lot of what I think I need to do to grow beyond my Core X9 case (disk space) and current mobo (lack of SATA ports).

 

With the new info, I have three questions I was hoping you could elaborate on:

1 - With the SAS/HBA cards (not sure of the difference between these): if the card sits in a PCIe x16 slot, how does the SAS card handle power to the disk shelf? It seems like it would only handle the data... Disks -> SAS card in disk shelf -> SAS cable -> SAS card in host computer -> host mobo.

2 - Since the SAS card doesn't handle power for the disks: you mentioned linking two PSU units. Does this power the disk shelf (fans, disks)?

3 - The SAS card in the disk shelf: does it physically mount to anything? I mean, I don't think the disk shelf has a mobo...

 

 

Any help with these questions is appreciated, as is any other info you think would help.

6 hours ago, Aerodb said:

Thank you for this info and your blog post. It helped explain a lot of what I think I need to do to grow beyond my Core X9 case (disk space) and current mobo (lack of SATA ports).

 

With the new info, I have three questions I was hoping you could elaborate on:

1 - With the SAS/HBA cards (not sure of the difference between these): if the card sits in a PCIe x16 slot, how does the SAS card handle power to the disk shelf? It seems like it would only handle the data... Disks -> SAS card in disk shelf -> SAS cable -> SAS card in host computer -> host mobo.

2 - Since the SAS card doesn't handle power for the disks: you mentioned linking two PSU units. Does this power the disk shelf (fans, disks)?

3 - The SAS card in the disk shelf: does it physically mount to anything? I mean, I don't think the disk shelf has a mobo...

 

 

Any help with these questions is appreciated, as is any other info you think would help.

Hi

 

That's a massive coincidence; I have also recently retired a Core X9. It's a massive case but, for its size, it has very, very poor support for multiple disks.

 

Happy to try to answer your questions;

 

HBA = Host Bus Adapter. There are many types. A SAS HBA is just one type of HBA, but since it's the most prevalent for unRaid purposes, the terms are used somewhat interchangeably around here.

 

The HBA sits in a PCIe slot on your main system motherboard and gets its own power from there. You are correct, though: it handles data only, so you still need to power the drives in your external chassis. There are a few ways to do this.

 

If you purchase a dedicated 'disk shelf' chassis, it will usually come with its own power supply, often two 'server' type PSUs for redundancy. If you are using a standard PC chassis as a DIY disk shelf, you won't have this luxury, so there are a few options.

 

At a basic level, you could run a lengthy Molex cable from your main system PSU out to the chassis and use that. However, if you have more than a couple of disks, you run the risk of overloading/overheating. A much better solution is to install a dedicated PSU in the external chassis. This, in itself, presents a further challenge: how can you get this secondary PSU to turn on if you have no motherboard? Again, two options:

 

You can use a multi-PSU adapter. This device takes a power cable from your main PSU and the ATX connector from your secondary PSU. When your main system is powered on, the secondary PSU also receives the 'on' signal and powers up. The downside here is that you still need to run a power cable between the two systems.

 

My preferred solution is to use a PSU jumper bridge on the secondary PSU. This gadget shorts the power-on pins, so as soon as you flick the physical power toggle switch on the secondary PSU, it springs to life as it receives the 'on' signal from the adapter. The downside is that you lose the sync between the two systems - your secondary system is 'always on' until you physically switch it off. My solution to this is to use a power strip with slave sockets that are only powered when there's power draw on a master socket (the main PC).

 

With a secondary PSU in the external chassis, you have plenty of power for drives, fans and anything else needed.
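
To put rough numbers on 'plenty of power', here's a sizing sketch using my own rule-of-thumb figures (roughly 2 A at 12 V per 3.5" drive during spin-up and ~0.6 A once spinning - check the labels on your actual drives):

```python
# Rough 12 V load estimate for a DIY disk shelf PSU. The per-drive figures
# are assumptions (typical 3.5" HDD: ~2 A at 12 V during spin-up, ~0.6 A
# once spinning); real drives vary, so check their labels.

SPINUP_AMPS = 2.0    # assumed 12 V draw per drive at spin-up
IDLE_AMPS   = 0.6    # assumed 12 V draw per drive once spinning
FAN_WATTS   = 30     # assumed total for chassis fans
HEADROOM    = 1.3    # 30% safety margin

def required_watts(drives: int, staggered: bool = False) -> float:
    """Estimated 12 V wattage needed for `drives` HDDs plus fans."""
    amps = (IDLE_AMPS if staggered else SPINUP_AMPS) * drives
    return (amps * 12 + FAN_WATTS) * HEADROOM

print(f"8 drives, all spinning up together: {required_watts(8):.0f} W")
print(f"8 drives, staggered spin-up:        {required_watts(8, staggered=True):.0f} W")
```

Staggered spin-up generally needs support from the drives or backplane, so with a simple jumpered PSU it's safest to budget for everything spinning up at once.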

 

If you want to move your SAS card to the external chassis, you need to extend PCIe out there. Then you need to look at PCIe expanders, bifurcation etc., and this can get expensive. However, if you place a SAS expander in your chassis, you have more options. (You can pick these up cheaper than the Amazon link - I just added that for illustrative purposes.)

 

The SAS expander looks like a PCIe card, but it's not. It does not need to plug into a PCIe slot; it can, but doing so only serves to provide the card itself with power. The card also has a Molex connector as an alternative for power. So an expander can be mounted in a PCIe slot, or anywhere in your chassis. (They come in different form factors; some are integrated into hot swap drive backplanes etc.)

 

The SAS expander connects to your main system through your HBA: one or more of the HBA's SAS ports is connected to the SAS expander, which in turn connects to your drives. In this way, a single 4 channel SAS port can expand to up to 20 channels (drives).
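
To illustrate how thin that single port's bandwidth gets as the expander fans out (ballpark assumptions of mine: ~600 MB/s usable per SAS2 lane and ~180 MB/s sustained from a spinning drive):

```python
# How one 4-lane SAS port's bandwidth spreads out as an expander fans it
# out to more drives. Ballpark assumptions: ~600 MB/s usable per SAS2
# lane, ~180 MB/s sustained from a single spinning drive.

PORT_MBPS = 4 * 600   # one 4-lane connection from HBA to expander
HDD_MBPS  = 180       # assumed sustained throughput of one HDD

for drives in (4, 8, 12, 16, 20):
    share = PORT_MBPS / drives
    limit = "drive-limited" if share >= HDD_MBPS else "link-limited"
    print(f"{drives:2d} drives: {share:5.0f} MB/s each ({limit})")
```

In other words, with everything streaming at once the link only becomes the bottleneck somewhere past a dozen drives, and typical unRAID workloads rarely touch every disk simultaneously anyway.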

 

One handy way of powering and mounting the SAS expander in the external chassis is to use a PCIe expander typically used by the mining community. In this scenario, you are not using it for data at all, just as a mounting point and power provider for the SAS expander.

 

Of course, most of the above relates to the DIY approach. Splashing out on a proper disk shelf system will negate a good deal of the hackery and provide everything in a unified package - but often at a cost!

 

 

 

 

 

 

 

 

On 7/29/2020 at 5:06 PM, szymon said:

Does anyone have any experience with this tower case?

 

Phanteks Enthoo Pro 2

 

Closed: http://www.phanteks.com/Enthoo-Pro2-Closed.html

Tempered glass: http://www.phanteks.com/Enthoo-Pro2-TemperedGlass.html

 

Supposedly it can fit up to 12 x 3.5 inch drives and has many cooling options.

Yep, I've had one for probably a year now. Pretty pleasant to build in, and no real issues. The fans all work pretty well and it's pretty quiet.


I am currently using a NORCO RPC-2212 as my secondary case and am looking to replace it with a 2U 6-12 drive chassis, but I'm having difficulty finding something that is actually available in stock. Two alternatives I have been looking at are:

 

Chenbro RM23608
Chenbro RM245

 

But of course I cannot find these to purchase. Can anyone suggest a good case for this?

On 9/6/2019 at 1:09 AM, falconexe said:

I just bought/built this insane rig. Dual 8 core CPUs, 30+ devices, 200TB usable (still have one 14TB drive pre-clearing, so those pics show 184TB). Meet "MassEffect", a 45 Drives Storinator Q30 Turbo. Huge improvement over my old Norco 24 bay rig (now my backup server). Rock solid temps, peaking at 32C during parity checks/rebuilds. Going by the largest-server poll thread, this may be 🤷‍♂️ the largest single unRAID build yet (at least that I have seen documented), and it completely pushes unRAID to its current limit on number of drives. I can take this thing all the way to 448TB with 28 x 16TB drives and dual parity if I want to get really crazy. Loving it!

 

Just hit 270TB total in a single server, with 240TB usable.

 


 

I also just moved to a new house with a dedicated server room with built in cooling.

 

I have had drives idle as low as 19C, and my array drives stay in the mid-20s under load. The yellow is my NVMe cache drive, and the green is another Unassigned Device. The array is always cool, and the room is regulated at a constant 66F. This Storinator chassis continues to impress, and does even better when you give it some help with HVAC.

 

[Image: drive temperature graph]

 

Edited by falconexe

Hello,

I run unRAID on a big tower PC where I have literally stuffed in as many HDDs as possible.
Now I want to move on to a JBOD enclosure (15 bay).
I don't know anything about JBODs.
Two questions:
1) What connects a JBOD enclosure to the machine that runs unRAID (which boots from a thumb drive, obviously)? What kind of cable and connector? What is required on the machine side?
2) What kind of machine, the smallest possible, can run unRAID and meet the requirements to connect to a JBOD? Can this machine include the SSD drive I use for cache? That would save a bay on the JBOD.

Please advise in terms of efficiency, speed, reliability, etc.

Thank you 

Edited by xtrips
On 12/30/2020 at 8:02 AM, xtrips said:

Hello,

I run unRAID on a big tower PC where I have literally stuffed in as many HDDs as possible.
Now I want to move on to a JBOD enclosure (15 bay).
I don't know anything about JBODs.
Two questions:
1) What connects a JBOD enclosure to the machine that runs unRAID (which boots from a thumb drive, obviously)? What kind of cable and connector? What is required on the machine side?
2) What kind of machine, the smallest possible, can run unRAID and meet the requirements to connect to a JBOD? Can this machine include the SSD drive I use for cache? That would save a bay on the JBOD.

Please advise in terms of efficiency, speed, reliability, etc.

Thank you 

 

The JBOD enclosure you choose will determine how it's connected to the system that runs unRAID. Assuming your JBOD enclosure has a SAS/SATA backplane, it likely has at least one external miniSAS connection. The most common controllers these days are LSI SAS/SATA HBAs with external SFF-8088 ports, like the 9207-8e flashed with IT firmware. These can easily be found on eBay, both as OEM and retail boxed units, some pre-flashed with the IT firmware. It's not overly difficult to flash the HBA yourself, but buying one with it already done makes things easier. Otherwise, there are LOTS of sites/YouTube vids covering how to flash the HBA.

 

The HBA will be installed in the system that runs unRAID, preferably in an x8 PCIe slot (PCIe 3.0 or better recommended). You'll then use a SFF-8088 to SFF-8088 cable to connect your unRAID system to the JBOD enclosure. Here's a pic showing a standard dual port miniSAS adapter that converts the internal SFF-8087 (or sometimes SFF-8643) cabling to SFF-8088, along with a SFF-8088 cable. The one shown accepts the SFF-8087 miniSAS connectors used internally in the JBOD and converts them to external SFF-8088 ports. On the system end, your SFF-8088 cable will plug into one of the ports on the LSI HBA.

 

[Image: dual-port SFF-8087 to SFF-8088 adapter and SFF-8088 cable]

 

As for the system to use as your unRAID host, unRAID can run on almost any x86/x64 platform. My media unRAID is currently running on a 10 year old dual Xeon motherboard with 48GB of RAM. I'm also planning to convert my enclosure (36 bay Supermicro CSE-847) to a JBOD and build a new outboard system. If you also want to use unRAID for VMs and Docker containers, you'll want to get a fairly recent system, like a 9th gen or later Intel or an AMD Ryzen 3000/5000 series.

 

If you're planning to use your unRAID system to host Plex, get a fairly modern motherboard with at least 2 M.2 NVMe slots so you can do a dual cache drive mirror in unRAID. Also make sure the M.2 SSDs are large enough to handle your Plex metadata and video thumbnails (if you use them). You may want to look into motherboards with 3 M.2 NVMe slots if you also think you would like to set up a 'bare metal' VM running on its own M.2 SSD.
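
As a quick way to sanity-check 'large enough', here's a sketch that totals an existing Plex appdata folder and compares it to the cache pool's free space. The paths are hypothetical examples - point them at wherever your appdata and cache actually live.

```python
# Total the size of an existing Plex appdata tree and compare it to the
# cache pool's free space. The paths below are hypothetical examples;
# substitute your real appdata and cache locations.
import os
import shutil

PLEX_APPDATA = "/mnt/cache/appdata/plex"   # hypothetical example path
CACHE_MOUNT  = "/mnt/cache"                # hypothetical example mount point

total_bytes = 0
for root, _dirs, files in os.walk(PLEX_APPDATA):
    for name in files:
        try:
            total_bytes += os.path.getsize(os.path.join(root, name))
        except OSError:
            pass                           # ignore files that disappear mid-walk

usage = shutil.disk_usage(CACHE_MOUNT)
print(f"Plex appdata: {total_bytes / 1e9:.1f} GB")
print(f"Cache pool:   {usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB")
```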

 

If you have more questions about this, reply with the model of the JBOD enclosure and I or someone else can elaborate on what you specifically require. Hope this info helps!

 

 

5 minutes ago, AgentXXL said:

 

The JBOD enclosure you choose will determine how it's connected to the system that runs unRAID. Assuming your JBOD enclosure has a SAS/SATA backplane, it likely has at least one external miniSAS connection. The most common controllers these days are LSI SAS/SATA HBAs with external SFF-8088 ports, like the 9207-8e flashed with IT firmware. These can easily be found on eBay, both as OEM and retail boxed units, some pre-flashed with the IT firmware. It's not overly difficult to flash the HBA yourself, but buying one with it already done makes things easier. Otherwise, there are LOTS of sites/YouTube vids covering how to flash the HBA.

 

The HBA will be installed in the system that runs unRAID, preferably in an x8 PCIe slot (PCIe 3.0 or better recommended). You'll then use a SFF-8088 to SFF-8088 cable to connect your unRAID system to the JBOD enclosure. Here's a pic showing a standard dual port miniSAS adapter that converts the internal SFF-8087 (or sometimes SFF-8643) cabling to SFF-8088, along with a SFF-8088 cable. The one shown accepts the SFF-8087 miniSAS connectors used internally in the JBOD and converts them to external SFF-8088 ports. On the system end, your SFF-8088 cable will plug into one of the ports on the LSI HBA.

 

[Image: dual-port SFF-8087 to SFF-8088 adapter and SFF-8088 cable]

 

As for the system to use as your unRAID host, unRAID can run on almost any x86/x64 platform. My media unRAID is currently running on a 10 year old dual Xeon motherboard with 48GB of RAM. I'm also planning to convert my enclosure (36 bay Supermicro CSE-847) to a JBOD and build a new outboard system. If you also want to use unRAID for VMs and Docker containers, you'll want to get a fairly recent system, like a 9th gen or later Intel or an AMD Ryzen 3000/5000 series.

 

If you're planning to use your unRAID system to host Plex, get a fairly modern motherboard with at least 2 M.2 NVMe slots so you can do a dual cache drive mirror in unRAID. Also make sure the M.2 SSDs are large enough to handle your Plex metadata and video thumbnails (if you use them). You may want to look into motherboards with 3 M.2 NVMe slots if you also think you would like to set up a 'bare metal' VM running on its own M.2 SSD.

 

If you have more questions about this, reply with the model of the JBOD enclosure and I or someone else can elaborate on what you specifically require. Hope this info helps!

 

 

Thank you for such an elaborate answer. I admit I almost didn't understand a thing and will have to research many terms.
I will probably purchase the following JBOD:
https://www.datoptic.com/ec/15-drive-port-multiplier-enclosure-sbox-xv-html.html
But I would be wise to order every single additional item necessary to be up and running in the least amount of time.
Otherwise it will be very difficult for me to get this kind of material where I live.
Seeing your pics, I guess a mini PC is out of the question for running unRAID, since they don't have any PCIe slots. Unless there is an external interface I could connect in between a mini PC and the JBOD...
My unRAID is used solely for two things: 1) NZB and torrent downloads, and 2) files accessed by multimedia streamers or computers.
So no Plex or other special requirements on the unRAID side.

I would appreciate more of your help.

32 minutes ago, xtrips said:

Thank you for such an elaborate answer. I admit I almost didn't understand a thing and will have to research many terms.
I will probably purchase the following JBOD:
https://www.datoptic.com/ec/15-drive-port-multiplier-enclosure-sbox-xv-html.html
But I would be wise to order every single additional item necessary to be up and running in the least amount of time.
Otherwise it will be very difficult for me to get this kind of material where I live.
Seeing your pics, I guess a mini PC is out of the question for running unRAID, since they don't have any PCIe slots. Unless there is an external interface I could connect in between a mini PC and the JBOD...
My unRAID is used solely for two things: 1) NZB and torrent downloads, and 2) files accessed by multimedia streamers or computers.
So no Plex or other special requirements on the unRAID side.

I would appreciate more of your help.

 

I would not recommend that enclosure for your JBOD device... it appears to only offer eSATA or USB connections, and unRAID can be finicky with port multiplier based systems like the one you linked. You mentioned your existing system is a tower PC 'stuffed full of hard drives'. I would spend some time on eBay or checking with local computer surplus shops to see if you can find a used but reasonably priced multi-bay enclosure. I picked up my 36 bay Supermicro CSE-847 locally for $600 CAD, which included the motherboard, dual Xeon CPUs and RAM. Supermicro also has a 24 bay (CSE-846), a 12 bay (CSE-836), or if you really want to go big, there are 45 and 60 drive bay options. Alas, these storage chassis produce a lot of heat and can be noisy.

 

Or, because your use case is pretty similar to my backup unRAID system, you could go with a Fractal Design Define 7XL case. The Define 7XL supports up to 18 x 3.5" drives and accepts motherboards from ITX all the way up to E-ATX. I picked one up with all the extra drive trays and mounting brackets for about $450 CAD from a local retailer. I'm currently using another 10 year old motherboard with 12GB of RAM and 2 LSI HBAs: one 9201-16i and one 9207-8i. Going this route provides up to 24 SATA devices between the 2 LSI HBAs (host bus adapters). If you go this route, you will need some miniSAS to 4 SATA forward breakout cables like these: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B018YHS8BS

 

The only big concern with using the Define 7XL fully loaded with drives is ensuring you get a power supply that has enough capacity and SATA power connections. Alas all the cabling for both power and SATA makes it difficult to do a really clean build, but it is every bit as functional as my Supermicro CSE-847. As you're not familiar with these systems, feel free to ask any specific questions and I'll try to answer.

 

 

