Ye Olde CS380b - The backup server



I've been meaning to post this for months now, but for whatever reason never got around to it. My unraid setups are all what I'd refer to as sort of 'awkward' - while unraid is great 'out of the box' for many applications, I've always tinkered with it a bit to get it to where I need it to be.

 

The chassis here is the venerable CS380b:

 

[Image: 20210827_085020.jpg]

[Image: 20210827_084944.jpg]

 

 

It's about the only consumer chassis out there that supports both 8 hot-swap drives and a full ATX motherboard (well... 'supports' full ATX is one thing... but I'll get to that later). It feels like there just aren't many options left in the consumer space for home servers. It used to be that you could go to any manufacturer and they'd have any number of decent cases for such a situation - they may not have had external hot-swap bays as standard, but they'd have tons of 5.25" bays you could use to make the case suit whatever purpose you wanted. These days, options are so much more limited.

 

While it's a decent chassis, as others have noted it needs some 'help' to make it better suited to its purpose. The stock design is pretty abysmal for airflow; if you leave it as-is, the middle 4 drives get almost none. Not only that, but the only 'cool' drive in the stack is the bottom one - the top drive or two get a tiny bit of air, and the rest are just left to choke.

 

However, if you fashion yourself a few ducts, things work out pretty well - 

 

[Image: 20210220_153541.jpg]

 

 

I cut a few strips of plastic to length, used a heat gun to form them into shape, and then installed them. Installation was a pain - I had to remove most of the drive cage in order to knock down some stupidly placed metal tabs at the bottom that were making things difficult.

 

While I was still doing some testing on this, temperatures were far and away better with the ducts in place... But sustained writes, especially during reconstruct-writes or parity syncs, were warmer than I'd have liked with the included fans - they just weren't applying even pressure across the stack, and they seem geared more toward airflow volume than static pressure. Here's what it looked like during some of the testing:

 

[Image: 20210220_153218.jpg]
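For anyone wanting to repeat that kind of testing, logging drive temps over a parity sync is easy to script. A rough sketch of the sort of thing I mean (assumes smartmontools is installed, and the /dev/sdX names are just placeholders for however your drives enumerate):

```python
#!/usr/bin/env python3
# Rough drive-temperature logger for parity-sync testing.
# Assumes smartmontools is installed; DRIVES are placeholder device names.
import subprocess
import time

DRIVES = [f"/dev/sd{c}" for c in "bcdefghi"]  # hypothetical - the 8 array drives

def drive_temp(dev):
    """Return the SMART temperature for a drive, or None if not found."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line or "Airflow_Temperature_Cel" in line:
            return int(line.split()[9])  # RAW_VALUE column
    return None

while True:
    temps = {dev: drive_temp(dev) for dev in DRIVES}
    print(time.strftime("%H:%M:%S"), temps, flush=True)
    time.sleep(60)
```

Kick that off in a screen/tmux session before starting the sync and you get a minute-by-minute picture of which bays are choking.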

 

 

Figuring this was a static pressure problem, I went ahead and replaced all the fans with more suitable Noctuas:

 

[Image: 20210220_154853.jpg]

[Image: 20210220_154907.jpg]

 

 

I decided to go with the NF-A12x25 over the pressure-specific variants more for reusability than anything else. With these installed, there's very little room for air to move anywhere other than where the fans are pointing it, and their performance is excellent. While I'd likely see better numbers with a fan tuned purely for static pressure, these cool far better than the stock fans, and they'll still be useful for whatever application comes along down the line when this server inevitably gets swapped out:

 

[Image: 20210220_162604.jpg]

[Image: 20210220_162552.jpg]

 

 

I swapped out the rear exhaust fan for another Noctua I had to spare (a black Chromax) so I could control everything with PWM. Figured if I was going to swap two of them, I may as well do the third as well.

 

 

More on the hotswap bays

Options for this outside of commercial gear are so sorely lacking. While it's great that this is available at all, I'm fairly disappointed in the drive mounting mechanisms here. The little plastic tabs on the drive carriers that are meant to hold the drive in place against the backplane feel cheap, and definitely won't stand up to repeated swapping of drives. I've already had to use a heat gun on a couple of them to form them back into a shape that lets them *click* into place like they're supposed to when installing a drive.

 

The backplane itself looks like it was likely assembled by hand - the capacitors stick out far enough that they're easy to bump when working in the system, and I was always afraid I was going to pop one off while trying to do some cable management. They bend back and forth, and I feel like they just wouldn't stand up to much if any abuse. The components themselves do seem decent though, so as long as you're super careful around them and don't accidentally damage them (which seems like it'd be obscenely easy to do), it should be fine. Silverstone offers no way to purchase replacement parts (backplane, drive carriers) outside of emailing their support team and hoping they can help. Makes me a little uneasy... but like I said, options are limited.

 

 

On to the main event:

The motherboard is a Supermicro X11SPM-F. It has dual SR-IOV-capable ports - only 1Gb/s, but they're driven by the X722 chipset, which means that while traffic leaving the server is limited to 1Gb, VM-to-VM traffic is switched at 10Gb/s entirely in hardware. Great for virtualization workloads. On top of that, it supports 12 directly attached SATA drives, which is pretty fantastic for a micro ATX board.
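For anyone curious, the SR-IOV side mostly boils down to telling the kernel how many virtual functions to carve out of each X722 port, then passing those VFs through to the VMs. A minimal sketch of that first step (interface name and VF count are assumptions, not my exact setup):

```python
#!/usr/bin/env python3
# Minimal sketch: carve SR-IOV virtual functions out of an X722 port via sysfs.
# IFACE and NUM_VFS are assumptions - check `ip link` for your interface names.
from pathlib import Path

IFACE = "eth0"   # hypothetical onboard X722 port
NUM_VFS = 4      # one VF per VM that needs a hardware-switched NIC

dev = Path(f"/sys/class/net/{IFACE}/device")
print("max VFs supported:", (dev / "sriov_totalvfs").read_text().strip())

# The count has to go back to 0 before it can be changed to a new value
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print("VFs now active:", (dev / "sriov_numvfs").read_text().strip())
```

Each VF then gets passed through to a VM like any other PCIe device, and VF-to-VF traffic never has to leave the X722.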

 

I'd previously tried an X10SRL-F in this system, as that was what I'd initially planned... but found that putting a full ATX board in this box just wasn't going to work for me. For one, the gymnastics you have to do just to get the board seated on its standoffs are nearly Olympian; the only way to ease that would be to remove the rear fan as well as the CPU heatsink, and even then you risk busting off one of the capacitors on the (seemingly cheap...) backplane. Once it was installed, there wasn't really a clean way to run power, and almost nowhere to route the fan cables either. When you then start adding all your SATA connections and tossing in your PCIe cards, well... I was glad to find an mATX board that'd fit the bill!

 

 

The one problem with mATX is that you have less flexibility in the number of slots available to you for add-in cards. At one point, I'd thought about trying to make this my main server. The main server needs a GPU for various workloads, not the least of which is gaming:

 

[Image: 20210225_080841.jpg]

 

 

The problem here is that I needed 4 NVMe drives as well... With the smallest GPU I had available, a dual-fan 1650 Super, there was just no way I could find to make it fit alongside the Asus Hyper M.2 card, even with right-angle SFF-8087 connectors; there was about 2mm of difference between it working... and not :(

 

The bottom card in the above picture is a 10Gb Intel NIC - these things do need decent airflow, and since this chassis is in the same room I typically work in, I also needed it quiet. I tacked a tiny 40mm Noctua onto it and have its PWM duty cycle tied to the CPU's; if the CPU is crunching hard, odds are so is the NIC, and so far it's worked out well.
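In practice, 'tied to the CPU' just means the 40mm fan shares the CPU fan zone. If you wanted to drive it from software instead, something along these lines would do it - note the raw IPMI fan-duty command below is the one commonly documented for Supermicro X10/X11 BMCs (verify it against your board), and the sensor path and curve are made-up placeholders, so treat it as a sketch rather than a drop-in script:

```python
#!/usr/bin/env python3
# Hypothetical sketch: scale a fan zone's duty cycle with CPU temperature.
# The raw command (0x30 0x70 0x66 0x01 <zone> <duty>) is the one commonly
# documented for Supermicro X10/X11 BMCs - verify it against your board.
import subprocess
import time

HWMON_TEMP = "/sys/class/hwmon/hwmon0/temp1_input"  # placeholder CPU temp sensor
ZONE = 0x01  # 0x00 = CPU zone, 0x01 = peripheral zone on most of these boards

def set_zone_duty(zone, duty_pct):
    subprocess.run(["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
                    f"{zone}", f"{duty_pct}"], check=True)

while True:
    cpu_c = int(open(HWMON_TEMP).read()) / 1000  # millidegrees -> degrees C
    duty = min(100, max(30, int(cpu_c)))         # crude curve: 30% floor, 1%/degree
    set_zone_duty(ZONE, duty)
    time.sleep(10)
```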

 

The final configuration:

Chassis - Silverstone CS380b

PSU - Seasonic 550W Platinum

MB - Supermicro X11SPM-F

CPU - Xeon Scalable Gold 6139 - 18 core / 36 thread, 3.7GHz boost

Slot 1 - Intel 1.2TB DC S3520 NVMe

Slot 2 - Asus Hyper M.2 card - 4x WD SN750 (the power consumption vs. performance tradeoff makes Gen3 NVMe great for VMs!)

Slot 3 - Intel X540

 

SATA drives -

8x spinners at up to 16TB (that used to be the sweet spot, pre-Chia... ugh)

1x Micron 5100 Pro 4TB

1x Samsung 860 Evo 1TB

 

 

 

Here's the dashboard view as it was a couple weeks ago (just prior to expansion; I was obviously running almost completely out of space lol) - power consumption idles at 97w, but that includes the router and cable modem, so the server itself is likely closer to 80w or so. The max I've seen with everything going balls to the wall is around 365w - but that's with all 18 cores loaded, the 10Gb network saturated, and a parity sync running:

[Screenshot: 8-12-2021_UnraidDashboardView.png]
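As an aside, if you'd rather not sit on a kill-a-watt, the BMC can report the server's own draw (which conveniently excludes the router and modem). A quick sketch, assuming the BMC supports DCMI power readings as the X11 BMCs generally do:

```python
#!/usr/bin/env python3
# Quick power logger via the BMC - assumes it supports DCMI power readings.
import re
import subprocess
import time

while True:
    out = subprocess.run(["ipmitool", "dcmi", "power", "reading"],
                         capture_output=True, text=True).stdout
    m = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if m:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), f"{m.group(1)} W", flush=True)
    time.sleep(30)
```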

 

 

I've since started a new build for the main server... Here's a teaser:

[Image: 20210605_084148.jpg]

 

 


As for the 'awkward' part - customizations include:

ZFS for both VM and docker storage

SR-IOV for both the 10Gb/s and 1Gb/s networks

Cron instead of the userscripts GUI

LVM instead of btrfs (I've only been able to get this working in a VM - it seems that while the LVM commands exist, the related kernel code has been modified in Unraid, I guess?)

 

There are several others I'm not thinking of... It would probably look like a total hackjob to anyone looking at it from the outside 😅
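To give a flavor of the cron piece, most of what would otherwise live in the userscripts GUI boils down to small scripts like this - a rough sketch of nightly ZFS snapshots with pruning (dataset names, retention, and the script path are made up for the example, not my actual layout):

```python
#!/usr/bin/env python3
# Rough sketch of the kind of thing cron runs instead of the userscripts GUI:
# nightly ZFS snapshots of the VM/docker datasets, with simple pruning.
# Dataset names and retention are made up for the example.
# A crontab entry might look like:  15 3 * * * /boot/scripts/snap.py
import subprocess
import time

DATASETS = ["cache/vms", "cache/docker"]  # hypothetical pool/dataset names
KEEP = 14                                 # number of daily snapshots to keep

def zfs(*args):
    return subprocess.run(["zfs", *args], capture_output=True,
                          text=True, check=True).stdout

today = time.strftime("%Y%m%d")
for ds in DATASETS:
    zfs("snapshot", f"{ds}@auto-{today}")
    # list this dataset's auto- snapshots oldest-first and prune the excess
    snaps = [s for s in zfs("list", "-t", "snapshot", "-H", "-o", "name",
                            "-s", "creation", ds).splitlines()
             if "@auto-" in s]
    for old in snaps[:-KEEP]:
        zfs("destroy", old)
```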

