Adventures of building an external enclosure



Greetings.  I've been using two UnRaid servers for a number of years now.  My main UnRaid server has the ability to connect ten 3.5" drives, six 2.5" drives and seven NVMe cards.  I'm nowhere near capacity at the moment (10 total devices connected), but several of my smaller drives have reached the end of Seagate support and I swap them out as I can for larger drives.  It has 76TB of capacity right now.  I also have a backup UnRaid server with 64TB of space.  I have cron jobs to copy most data from primary to backup on a weekly basis, while Domains for my virtual machines running on NVMe get copied over monthly.  My backup server can hold ten 3.5" drives, two 2.5" drives and two NVMe cards.
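
For anyone curious, the weekly copy can be as simple as a cron entry wrapping rsync. This is only a sketch: the share names, mount point and the 03:00 schedule are made up, and it assumes the primary's shares are already mounted on the backup server (e.g. via Unassigned Devices).

```shell
# Hypothetical weekly sync: every Sunday at 03:00, mirror the media share
# from the primary (mounted at /mnt/remotes/primary_media) to the backup.
0 3 * * 0  rsync -aH --delete /mnt/remotes/primary_media/ /mnt/user/media/
```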

 

The drives I take from my primary server haven't failed; they're just smaller capacity and out of warranty.  I don't want to just throw them away until they start having problems, so I'm working on an external enclosure to attach to my backup server, to allow for either expanded storage or a second array.  I can describe what I've done so far, and am curious if others have completed similar projects before me.  If so, how did it turn out?  Did you have controller issues?  Are there any traps I should watch out for?

 

  • My primary server is using an Adaptec ASR-72405, so it can support up to 24 devices.  My backup server has an ASR-71605, so I will need to replace the card in my backup server before the project can be completed.  By the numbers, I could swap cards between the two servers, but I'd rather have the two servers using equal-spec cards moving forward.
  • My theory is to use two of the ports on the ASR-72405, connect those ports to an adapter that allows for external cables to connect to an external enclosure, then adapt them again with breakout cables for the drives.
  • I've purchased one StarTech SFF86448PLT2 Mini-SAS adapter so far and will need to purchase one more.  The 0.5m cables for the card arrived today, and the card itself will be here tomorrow.
  • I'm planning to use 1m SFF-8644 cables to connect to the external enclosure.  Inside the enclosure, I'll use the same kind of cables I would use for normal internal drives: again, 0.5m shielded cables.
  • The external enclosure itself started out as a 4U Rosewill case.  After taking measurements, I'll be cutting most of the front panel away and fitting three 4-in-3 trayless bays for a total of twelve drives, trimmed with a black walnut front panel.
  • I'm not sure how robust the power supply for the external enclosure needs to be, but I have a 450w Corsair small form factor PSU that I can use.

 

The StarTech adapter cards I'm using in the UnRaid server are pricey, more than $80 each.  Other brands of cards can be had for under $40 each, but I'm not sure how much faith I should put in branding.  Am I wasting my money with the more expensive adapters?  Will the cheaper cards work reliably?  Or is my entire process flawed because of a signal degradation issue I'm not anticipating?  It would be nice if I could slave the external enclosure's power to the UnRaid server, but that's probably not necessary.  I will likely just set it up so that the power switch on the power supply acts as the on/off for the unit.

 

Here is one of the options for less-expensive cards:

https://www.amazon.com/dp/B07QZV8C7M

External cables:

https://www.amazon.com/dp/B0868PMBVP

Internal cables for the enclosure:

https://www.amazon.com/dp/B08C2LJBLW

Cables used in the UnRaid server:

https://www.amazon.com/dp/B086TW4K39

Drive bays:

https://www.newegg.com/istarusa-bpn-de340hd-red-hdd-hot-swap-rack/p/N82E16816215765

IMG_2840.jpg


If I need to use them, I do have two dual-port 8088/8087 adapter cards and the cables for each end.  I just don't have the cables to go between yet.  The Adaptec card I'm using is rated for 6Gbps, so these would probably be fine for SATA drives.  If nothing else, I can use them for speed comparison, and if I end up using these cards for part of the drives, it saves me some money.  The computer case I selected has the front panel attached entirely with screws, not with rivets as sometimes happens.  When I have the support structure for the three drive bays (still waiting on the third one to arrive), I can remove the front panel of the case, install the structure and reattach the front panel.  The drive bays will extend 1/2" through the steel front panel so as to be flush with the wooden front.

 

If anyone is wondering how a black walnut computer case front would look, see the second image below.  This one is a CentOS router that I set up to provide a gigabit uplink to my main router and 40Gbps connectivity to my two UnRaid servers and two Windows 11 desktops.  Practical throughput is nowhere near that, but the direct-attach-copper links are solid.  All wifi devices and anything in the house that consumes media are connected to my main router and firewall-segregated from my Windows instances and anything running on UnRaid, with specific ports opened for things like AdGuard and Emby.  LG and manufacturers of IoT devices are not to be trusted.

IMG_2841.jpg

IMG_2845.jpg

10 hours ago, charlesshoults said:

Are there any traps I should watch out for?

 

Are you using SATA or SAS devices?  Note that for SATA the max cable length from controller to device is 1m.  You can extend that by using a SAS expander: the cable can be up to 10m from controller to expander, and, assuming SATA devices, one additional meter from expander to devices.

9 hours ago, JorgeB said:

Are you using SATA or SAS devices?  Note that for SATA the max cable length from controller to device is 1m.  You can extend that by using a SAS expander: the cable can be up to 10m from controller to expander, and, assuming SATA devices, one additional meter from expander to devices.

 

Yeah, I'm using SATA devices.  Well, maybe I'll be making a second backup UnRaid server instead, then divide the data being backed up.
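
Putting numbers on the cable budget for the build as described earlier (a sketch; the lengths are the ones from the plan above):

```shell
# Add up the planned controller-to-drive run and compare it to the 1 m
# SATA limit mentioned above. All lengths in centimeters.
internal=50    # HBA to the SFF-8644 adapter plate inside the server
external=100   # SFF-8644 cable between the two chassis
breakout=50    # breakout cable inside the enclosure
total=$((internal + external + breakout))
echo "planned run: ${total} cm vs. 100 cm SATA limit"
```

So without an expander the plan is roughly double the spec limit, which is why the expander route keeps coming up.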


I opened up my backup UnRaid server to do some cleaning and see what options I have for adding external connectivity.  I have a Mellanox ConnectX-3 40Gbps card in a slot that supports x8 connectivity, and an x1 Nvidia NVS300 graphics card in a slot that supports x4 connectivity.  An x1 slot next to it was left empty for future access to a third NVMe controller.  Being a backup server, it's unlikely that I'm ever going to use that slot, so I moved the video card to that x1 slot, then moved the Mellanox card to the last slot, x4.  The card is an x8 with a single port, but seems to work just fine in the x4 slot.  This frees up an x8 slot, where I could put an Adaptec ASA-70165H for external connectivity, eliminating any extra cabling inside the UnRaid server.  The back of this UnRaid server and the back of the external enclosure I'm working on are close enough together that a 0.5m cable would connect just fine, if I decide to go that route.

 

Next, I did a mockup of the two drive bays I have so far, in place with a Mini ITX motherboard and an Adaptec ASR-71605.  The drive bays are linked together with two pieces of 3mm aluminum that I machined for them.  These pieces of aluminum are raised off the bottom of the case with 10mm long brass spacers.  I'll cut similar aluminum bars to tie the bays together across the top.  Everything fits, but also keep in mind that once the front panel is trimmed, the drive bay assembly will be moved forward a little more than 1/2".  If I buy a SAS expander such as the Adaptec 2283400-R, I can connect from it to the drive bays with 0.5m cables, staying within the 1m limit for SATA.

My larger concern is the power supply.  Even though it is a 600w PSU, there are only two connectors for powering drives.  How many drives should I realistically try to power from a single port?  Is eight drives from one cable a fire hazard?  I don't think I want to do that.  Should I get a slightly more robust power supply that has more connectors for drives?

I also want to add a 120mm fan to blow air across the card, whether I'm using the HBA or expander.  For my UnRaid servers, I removed the heatsinks for the cards, drilled and tapped the heatsinks for 40mm fans, applied new thermal paste and reattached them with the original clips, but a fan bracket is probably a little easier.  Somewhere, I have a four-port fan controller that attaches to an expansion slot bracket, so I could have the two rear 80mm fans and the 120mm fan set up that way and adjust the fans according to noise level.  For a more DIY approach, I have a fan speed controller that could be set up to run according to temperature.
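
As a rough sanity check on the spin-up load (a sketch; the ~2 A at 12 V per drive figure is a typical 3.5" spin-up draw, not a measured value for these drives):

```shell
# Worst case: all eight drives on one daisy-chained cable spin up at once.
drives=8
amps_12v=2                       # typical 3.5" spin-up draw per drive, in amps
watts=$((drives * amps_12v * 12))
echo "spin-up load on one cable: ~${watts} W on the 12 V rail"
```

All of that current flows through one set of conductors on a daisy-chained cable, which is the real argument for splitting the load across two cables (or using bays that stagger spin-up) rather than raw PSU wattage.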

 

 

IMG_2860.jpg

IMG_2862.jpg


18 hours ago, charlesshoults said:

How many drives should I realistically try to power from a single port?  Is eight drives from one cable a fire hazard? 

You should avoid that.  By my read, you'd be powering two enclosures from one cable (4 SATA plugs).

 

Since the SF600 comes with only one Molex and one SATA cable, you can't run a dual feed to each enclosure; if you get one more matching cable, it should be marginally fine.

 

It would also be much better to use an expander than the long direct cables and several cable adapters you're planning.


Some progress has been made today.  Yesterday, not so much.  I mowed the back yard and cleaned up brush.  Something that I disturbed made me sick and I slept almost all of Saturday.  I got outside today and trimmed the steel front panel of the computer case to allow the drive bays to fit through.  The lines aren't perfect, but the panel is going to be covered, so it's not such a big deal.  I'll be drilling some holes to secure the wooden front panel, and once those holes are drilled and deburred, I'll paint the front panel to prevent rust.

 

The third drive bay module will be here tomorrow and I have some m3 screws arriving on Tuesday.  I've decided that for the short term, I'm going to use the Corsair SF600 to run power to a single drive bay module.  I only have one hard drive to put in it anyway, so I shouldn't be stressing it at all.  Later, I'll end up ordering an HX750.  The length of the larger power supply is 7.09" and, if installed normally, puts it really close to the back of the right-most drive bay assembly, so I removed the bracket to which the power supply is attached and put it on the outside of the case.  I'll probably order some 20mm spacers to move the bracket back to allow enough space to comfortably work with cables.  I'll cut a piece of acrylic and attach it to the opening where the motherboard shield normally resides.  For my UnRaid server, I'm going to order an Adaptec ASA-70165H that provides four external ports and will test connectivity, file copy and file integrity with a 0.5m cable between the two chassis and a 0.5m cable running from an adapter board to the drive bay assembly.  Later, I might order an Adaptec AEC-82885T SAS expander to take the place of the adapter board, but only if I have to.  If the end configuration is the adapter cards, then I'll have three cables linking the two chassis together; if using the SAS expander, two cables, most likely.

 

I'll cut my piece of black walnut to external size tomorrow and start sanding it down, then take measurements for the large opening needed for the three drive bay assemblies.  There won't be a lot of walnut.  It will be pretty thin top and bottom, and about an inch on either end.  The wood gets stained pretty dark, then treated with an orange oil and beeswax finish.  To the front of the computer chassis, I apply a 1/16" thick piece of adhesive-backed foam, then screws are installed, attaching the wood to the case from the inside so that no screws are ever visible.  I still need to buy a piece of aluminum to tie the three drive bay assemblies together on the top.  Four holes will be drilled in the bottom of the case for m3 screws to be installed into the spacers from the bottom, then two holes in each side for brackets that will attach to the aluminum bars I'll be using to tie the modules together.  I'll probably place some blocks of 1/2" thick adhesive-backed foam on the aluminum bars on the bottom, and on the top, to help prevent sagging of the modules themselves, and of the top panel.  The top panel normally has a little metal bracket that it slides into, but that bracket had to be pretty much cut away.

IMG_2868.jpg

IMG_2870.jpg

On 6/16/2022 at 4:29 PM, charlesshoults said:

...

  • I've purchased one StarTech SFF86448PLT2 Mini-SAS adapter so far and will need to purchase one more.  The 0.5m cables for the card arrived today, and the card itself will be here tomorrow.

...

The StarTech adapter cards I'm using in the UnRaid server are pricey, more than $80 each.  Other brands of cards can be had for under $40 each, but I'm not sure how much faith I should put in branding.  Am I wasting my money with the more expensive adapters?

 

Depends where you buy them 😀

[Link]

$22 vs $80 ?? ... [Inflation is everywhere. Is this the "exception that proves the rule"?]

 


Some progress has been made today.  The drive bay modules have been linked together, top and bottom, hanging from the sides of the 4U chassis with aluminum brackets.  I started with pieces of angled aluminum that I had to thin down a little on each side to get the drive bay modules to fit.  The whole process was a lot of fitting, cutting and drilling until everything lined up properly, but hopefully the rails that link the bays together have been installed for the last time.  The lower bracket won't be attached to the bottom of the case, but I may end up installing some rubber or foam blocks to support it.  The metal front panel fits as it should and the next big task will be cutting the opening in the wooden front panel to fit around the drive bay modules.  A fan for the HBA and the cables necessary to connect the first drive bay module will be here tomorrow.  A full height bracket for the HBA I'll be using has been ordered, but shipped from China, so it will be here some time next month.  I need to order a power supply and eventually a SAS expander and the rest of the cables.

 

 

IMG_2922.jpg

IMG_2923.jpg

IMG_2924.jpg

IMG_2925.jpg


So many problems, but not with the external enclosure.  I've been working on this thing for most of the day.  

 

I got the fan for the Adaptec 70165H, drilled and tapped the heatsink, cleaned up the old hardened thermal paste and applied new.  I put the card into my backup UnRaid server and expected that the BIOS would detect it similarly to how it detects my 71605; however, that does not seem to be the case.  I have three different computers I'm testing it on, each with MSI motherboards.  If I take a known good Adaptec 71605, Advanced Settings in the BIOS shows an option for "PMC maxView Storage Manager" and scanning for devices shows a detected 71605, but if I put the new 70165H into any of the computers and do the same thing, the option in the BIOS doesn't appear.  There is no Ctrl+A option during the boot process for the new card.  I then put the card into my primary UnRaid server, a Gigabyte X399 Designare EX, and let it come all the way up.  dmesg shows only one Adaptec device; however, System Devices does in fact show the 70165H.  It doesn't show the card that actually has drives connected to it, which is a little strange, but okay.

 

I put the card back into my backup UnRaid server and I can no longer get the damn thing to boot UnRaid, with the card or without.  It just keeps going back to the BIOS.  I thought maybe I messed up the USB instance, so I made another one from scratch using the demo key, and it won't boot either.  The USB key is set as the only available boot device.  I'm thinking that maybe something got reset and it no longer knows how to deal with the format of the device.  BIOS mode says UEFI.  Changing it back to CSM seems to have helped a bit.  !$@&%*!!

 

It now seems to be booting, or at least trying to.  I've seen before that when I make changes, the names of the network adapters change and I can no longer access the web interface until I boot to GUI mode and correct it.  This time, however, I can let it sit for over an hour trying to boot and it still won't come up in GUI mode.  What should be eth0 is now eth1.  I plugged the USB drive into a Windows PC and manually edited network.cfg to change the BONDNICS[0] line from eth0 to eth1.  I do wish these adapters would stay fixed according to MAC address.  My primary server does this occasionally, where it keeps assigning the bonded address to a USB ethernet adapter.
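
For what it's worth, UnRaid normally pins interface names to MAC addresses with udev rules stored in /boot/config/network-rules.cfg; if that file goes missing or stale after a hardware shuffle, the names can float around.  A sketch of what an entry looks like (the MAC below is a placeholder):

```
# /boot/config/network-rules.cfg -- pin the NIC with this MAC to eth0
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{type}=="1", NAME="eth0"
```

Deleting the file and rebooting is supposed to regenerate it from the hardware that's currently present.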

 

The server booted successfully once, and while I could ping and SSH into it, I could not get to the web interface.  I rebooted it from the command line and from that point on, it failed to come up completely, screenshot attached.  I took the HBA out and powered it back on.  I couldn't get any output on the video card for whatever reason, but the server does come up, I can get to the web interface and start the array.  I stopped the array, powered it off and put the HBA back in.  I now have video again.  The system boots, verifies eth1: Link Up, then sits for a while.  Sometimes it continues after a few minutes and other times it just goes nowhere.  But if it does continue, it stops at the same screen in the screenshot.

 

If I take the 70165H out and put in a second 71605, the BIOS no longer gives the option of PMC maxView Storage Manager, which is strange.  I get one Ctrl+A option, but the system boots up normally.  System Devices shows two controllers.  It appears that I have little choice other than to use a SAS expander.  I've connected 0.5m cables from the second HBA to an adapter board to convert to external cables, and will run 0.5m cables to an expander, once I buy the expander and the cables.  Wretched amount of troubleshooting.

 

# Generated settings:
IFNAME[0]="br0"
BONDNAME[0]="bond0"
BONDING_MIIMON[0]="100"
BRNAME[0]="br0"
BRSTP[0]="no"
BRFD[0]="0"
BONDING_MODE[0]="1"
BONDNICS[0]="eth1"
BRNICS[0]="bond0"
PROTOCOL[0]="ipv4"
USE_DHCP[0]="no"
IPADDR[0]="10.10.4.2"
NETMASK[0]="255.255.255.0"
GATEWAY[0]="10.10.4.1"
DNS_SERVER1="10.10.4.1"
DNS_SERVER2="10.10.3.1"
USE_DHCP6[0]="yes"
DHCP6_KEEPRESOLV="no"
SYSNICS="1"
 

test.jpg

  • 2 weeks later...

I have received all of the cables I need to finish up the build, but I'm still waiting on the SAS Expander.  Ordered on June 29th, a tracking number was generated the same day and it cleared US import customs by July 5th.  It's been eight days since, and is not expected to be delivered here until Friday.  The international supply chain seems to be running more smoothly than the US Postal Service.

4 hours ago, Vr2Io said:

  Which expander did you buy?

 

Ok, so the Adaptec AEC-82885T arrived today.  I have it installed in the external enclosure and all wired up.  I first powered it up without any drives attached to the expander and it seemed fine.  The boot process shows the device, no drives attached, system boots properly, lsscsi shows the device present.

 

[6:3:0:0]    enclosu ADAPTEC  AEC-82885T       B025  -
 

I then put one drive in each of the three bay assemblies, made sure that all three were detected by the boot process, then let the system fully boot.  The monitor attached to the system shows it came all the way up, prompting for login.  I can ssh into the system, but it took a good 15 minutes more before the web interface would come up.  It may be a WD Green drive I put in that's causing issues.  I previously marked the drive as bad with a big red B on its label, but the drive does at least get detected by Windows, so I was giving it a chance.  It seems UnRaid thinks it's part of an array.  I'll try to wipe it, but that drive will most likely become a rifle target.  I've yet to test hot add, but I'm hoping that works.  I'll do some pre-clear tasks and see how it goes.
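
A sketch of the wipe, run from the UnRaid console.  /dev/sdX is a placeholder, and both commands destroy everything on the target, so triple-check the device name against the Main tab first:

```shell
# Remove old filesystem/array signatures so the drive stops being seen
# as a former array member. /dev/sdX is a placeholder!
wipefs -a /dev/sdX
# Belt and suspenders: zero the first MiB, old partition table included.
dd if=/dev/zero of=/dev/sdX bs=1M count=1
```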

 

Wiring is still a mess, but I have an eventual plan for that.

 

root@Dragon:/boot/config# lsscsi
[0:0:0:0]    disk    SanDisk  Cruzer Fit       1.00  /dev/sda
[1:1:0:0]    disk    ATA      ST16000NM001G-2K SN03  /dev/sdb
[1:1:1:0]    disk    ATA      ST8000NM0055-1RM SN05  /dev/sdc
[1:1:2:0]    disk    ATA      ST8000VN0022-2EL SC61  /dev/sdd
[1:1:3:0]    disk    ATA      ST8000VN0022-2EL SC61  /dev/sde
[1:1:4:0]    disk    ATA      ST8000VN0022-2EL SC61  /dev/sdf
[1:1:5:0]    disk    ATA      ST8000VN0022-2EL SC61  /dev/sdg
[1:1:6:0]    disk    ATA      ST8000VN0022-2EL SC61  /dev/sdh
[1:1:7:0]    disk    ATA      ST8000VN004-2M21 SC60  /dev/sdi
[1:1:8:0]    disk    ATA      ST8000VN0022-2EL SC61  /dev/sdj
[1:1:9:0]    disk    ATA      ST16000NM001G-2K SN03  /dev/sdk
[6:1:16:0]   disk    ATA      ST4000VN008-2DR1 SC60  /dev/sdl
[6:1:20:0]   disk    ATA      WDC WD20EARS-00M 51.0  /dev/sdm
[6:1:24:0]   disk    ATA      Hitachi HUA72101 GKAO  /dev/sdn
[6:3:0:0]    enclosu ADAPTEC  AEC-82885T       B025  -
[N:0:5:1]    disk    Samsung SSD 980 1TB__1                     /dev/nvme0n1
[N:1:5:1]    disk    Samsung SSD 980 1TB__1                     /dev/nvme1n1
 

IMG_3093.jpg

IMG_3092.jpg

IMG_3097.jpg


Alright, so the 4TB drive I moved from my primary UnRaid server to my backup server seems to be working fine, although I've not added it to the array yet.  I ran a PreClear process on it, no errors.  I have a 400GB WD drive that's identified as a Blue, although all of the labels are black, but oh well.  It ran its PreClear, found some problems with the drive, and when it spun down the drive, it disappeared from Unassigned Devices.  I have some 1TB Hitachi drives from 2009 and they behaved the same way.  Small drives, don't really care.  I'll have to buy some new drives at some point to continue testing.

 

For the two 80mm exhaust fans and the one 120mm fan blowing air across the SAS Expander, I have them connected to a 4-port fan speed controller.  Even if I turn the dial up to max, I can't hear the Corsair ML120 fan over the rest of the components in the rack, so should be fine.

 

I wonder if I really need eight lanes of data between the Unraid server and the SAS Expander, or if four would be sufficient.  If four is good enough, it would allow me to remove one card from my server.  I have one open connector on my primary HBA, but I added the second HBA so that I would have enough to run the two cables.
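
Rough math on that (a sketch; ~600 MB/s is the usable figure for a 6 Gb/s lane after 8b/10b encoding, and 170 MB/s is the outer-track PreClear speed seen earlier):

```shell
lanes=4
per_lane=600        # MB/s usable per 6 Gb/s SAS lane (after 8b/10b)
drives=12
per_drive=170       # MB/s, sequential speed from the PreClear runs
echo "x4 link: $((lanes * per_lane)) MB/s"
echo "12 drives flat out: $((drives * per_drive)) MB/s"
```

So a single x4 cable just about covers the whole enclosure even in the unrealistic all-drives-sequential case, and parity or backup workloads will rarely push all twelve at once.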


I found 4TB Seagate Constellation ES.3 drives on Newegg for $49.00 each.  I don't currently have any Constellation drives; in my main UnRaid server, all are either EXOS or Iron Wolf drives.  I ordered three of the drives so that I can at least get some new drives connected to the SAS Expander to run it through its paces.  The one Iron Wolf drive I have connected to it right now appears to be stable-ish.  I had connected a Hitachi drive to a different connector and it stuck around long enough to complete its PreClear, but when I installed a second Hitachi drive in the slot next to it, connected to the same port on the controller, the previous drive disappeared from UnRaid.  I took that drive out and put it in the slot next to the 4TB Iron Wolf and it came back.  I don't know what's up yet.

 

https://www.newegg.com/seagate-constellation-es-3-st4000nm0033-4tb/p/N82E16822178307?Item=9SIA6CCBWK4334


Morning.  I've done away with the 2009 1TB Hitachi drives I had in the enclosure.  One of the drives listed offline uncorrectable errors and all three were extremely slow, starting at around 50MB/s with the PreClear tasks.  I installed the 4TB Seagate Constellation drives I received.  Turns out, the reason they were so inexpensive is that they were old stock.  They reported zero power-on hours, but when I registered them with Seagate, all of them were outside the manufacturer's warranty.  Oh well, they're far better than any of the other old drives I had.  PreClear started at about 170MB/s.  I have four drives put in as a cache pool.  Help me understand this.  If I have four drives of the same capacity in a cache pool, is UnRaid configuring them as a RAID 10?  I've seen videos in the past about how the configuration is, well, configurable, but I don't seem to find that option, or even a listing of how it's configured.  All I can see is that the listed capacity for the pool, after having run a balance task, is now 8TB.  What does it do if you have an odd number of drives?

 

I've not seen the sudden disappearing disk issue since I removed the old Hitachi drives.

7 minutes ago, charlesshoults said:

If I have four drives of the same capacity in a cache pool, is UnRaid configuring them as a RAID 10?

The default is RAID1. But you can balance the pool to the BTRFS profile of your choosing.

Open the pool, select the profile in the drop down menu and hit 'Balance'.

 

8 minutes ago, charlesshoults said:

What does it do if you have an odd number of drives?

That works too.  One limitation is that the free space reading tends to be off for an odd number of devices in a pool.  It gets better as the pool fills up; that's a BTRFS issue, nothing Unraid can do about that. :) 

 

More details on the BTRFS RAID profiles and the capacity depending on the drives: https://carfax.org.uk/btrfs-usage/
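
For equal-size drives, the RAID1 arithmetic behind that calculator is simple (a sketch: every chunk is written twice, so usable space is roughly half the raw total, odd drive counts included):

```shell
# BTRFS RAID1 usable space with N equal-size drives ~= (N x size) / 2
size_tb=4
for n in 3 4 5; do
  echo "${n} x ${size_tb} TB in RAID1: ~$((n * size_tb / 2)) TB usable"
done
```

The four-drive case works out to 8 TB, which matches the pool capacity reported above.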

