The next generation - M12SWA, Threadripper Pro



As my earlier post noted, my main server was due for an upgrade. It'll probably keep running for now so as not to overload the backup server with all the VMs I have running for work-related testing and such, but the replacement is finally becoming a reality:

[Photo: the new chassis]

 

 

It's a Supermicro CSE-743-1200B-SQ: a 1200 watt Platinum power supply with 8 bays built in, plus a 5-bay CSE-M35TQB in place of the three 5.25" bays, all designed to run at under 27 dB while being able to run either as a tower or rack mounted (it'll spend the next 3 months in tower form... it seems getting rails for this thing requires first sending a carrier pigeon to Hermes, Hermes then tasks Zeus with forging them in the fires of the gods from unobtainium, and they ship whenever he's done doing... well, Greek stuff). My first 700-series chassis is still doing work, still with its original X8SAX motherboard, and I see no reason to fix something that isn't broken!

 

While having a bunch of drives is great, the idea here is to have two gaming VMs and also run Plex, Nextcloud, Home Assistant, Frigate, and numerous others. All of that takes a ton of IO. Enter the motherboard:

 

[Photo: the M12SWA motherboard]

 

 

This motherboard is a friggin monster - but importantly, at least to me, its design syncs up perfectly with the chassis, so all the power monitoring, fan modulation, and LED/management functions can be controlled via the built-in out-of-band management. The M12SWA is currently paired with a 3955WX; given how close we are to the next-gen Threadripper release, I'm going to wait that out for now and then decide whether to upgrade to next gen's mid-range (whether it be a 5975WX or whatever the case may be), or otherwise.

 

For now, the VMs will be 4 core / 8 thread to match the CCDs, leaving the rest to Docker. Down the line, they'll likely be either 8 cores each, or one 8 and one 4, depending on what the need is. The lighter of the two is going to house an always-on emulation VM with a 1650 Super, which will play all our games on screens throughout the house (or wherever) via Moonlight/Parsec/whatever.
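
For anyone wanting to line VMs up with CCDs the same way, here's a rough sketch (purely illustrative, not something lifted from this box) of one way to see which logical CPUs share an L3 cache - on these chips each L3 domain is a CCD, so those groups are the natural pin sets:

```python
# Rough sketch: group logical CPUs by shared L3 cache using the standard Linux
# sysfs topology files. On Zen 2/3 each L3 domain corresponds to a CCD/CCX,
# so each group below is a sensible set of cores to pin a VM to.
from pathlib import Path

l3_domains = {}
for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    cache_dir = cpu / "cache"
    if not cache_dir.exists():
        continue
    for idx in cache_dir.glob("index*"):
        if (idx / "level").read_text().strip() == "3":
            # shared_cpu_list is identical for every CPU in the same L3 domain,
            # so using it as a key de-duplicates down to one entry per CCD
            l3_domains[(idx / "shared_cpu_list").read_text().strip()] = None

for n, cpus in enumerate(l3_domains):
    print(f"L3 domain {n} (one CCD): CPUs {cpus}")
```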

 

It slots perfectly in the chassis:

[Photo: the motherboard installed in the chassis]

 

 

But cable management is going to be a meeessssss:

 

[Photo: the cabling so far]

 

 

That ketchup and mustard is hurting my friggin eyes. I'm going to have to wrap those with something :| 

 

More to come on this one - the plan for now is to throw in 128GB of ECC 3200, 4 NVMe drives, an RTX 2070, a GTX 1650 Super, a quad 10Gb NIC (Chelsio, since this thing comes with the stupid Aquantia NIC, which has no SR-IOV support), and a quad 1Gb NIC (since the Intel NIC they included ALSO doesn't support SR-IOV... ugh), which leaves one slot for potentially adding either tinker-type toys or an external SAS HBA if I somehow eventually run out of room. There are custom boards out there that combine the X540 and i350 chipsets onto one card, but I may instead consolidate this down to a single X550 or one of those fancy X700-series Intel boards... We'll see.
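
(For the curious: on a NIC that does support SR-IOV, carving out virtual functions for the VMs is just a sysfs write. Here's a minimal sketch - the interface name is a placeholder, and the VF count depends on the card.)

```python
# Minimal sketch of enabling SR-IOV virtual functions through the kernel's
# standard sysfs knobs. "eth0" is a placeholder for whatever the SR-IOV-capable
# NIC enumerates as; cards without SR-IOV (like the onboard Aquantia and Intel
# ports here) simply don't expose these files. Run as root.
from pathlib import Path

dev = Path("/sys/class/net/eth0/device")           # hypothetical interface name
total = int((dev / "sriov_totalvfs").read_text())
print(f"card advertises up to {total} virtual functions")
(dev / "sriov_numvfs").write_text("4")             # create 4 VFs to hand to VMs
```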


Finally found a few free minutes to update! The server's been running now since Sept 5th without so much as a hiccup! However, it took a little planning to get it that way...

 

The problem area starts with this:

[Photo: the two GPUs stacked back to back]

 

Both GPUs are dual slot, and stacking them means the 2070's intake is about 40% covered by the backplate of the 1650S. I then thought to use the Intel NIC as the in-between card, but it still covers a bit:

[Photo: the Intel NIC sandwiched between the GPUs - still suboptimal]

 

And if I can avoid covering it at all, all the better - as this is *right* on top of the NVMe drives, any additional heat the 2070 radiates is heat added to them. In the end, I went ahead and put the HBA here instead:

[Photo: the HBA between the GPUs, leaving the 2070's intake clear]

 

It's not perfect (nothing ever is for me, I guess), but after running a temperature probe to the HBA and finding it well within spec, this is about as good as it gets, and it'll do for now!

 

Here's (almost) what it looks like as of today:

[Photo: the card layout as it stands today]

 

The 32GB DIMMs I ordered didn't show up in time, and I really needed to get this thing up and running before the start of business Monday morning so everyone could access their files and backups could pick back up, so this is where we're at until probably Thanksgiving or so. Running through the cards, from the top:

 

1. Intel 1.2TB NVMe - a hold-over from the last server setup, which only exists for caching writes to the Unraid array; it seems the md driver is modified as part of Unraid's base deployment, or this would be removed in favor of LVM with an XFS volume over the 4 onboard SN750s (rough sketch of that idea after this list). BTRFS just doesn't have the performance needed (not to mention other areas of concern :( ) and I'm too cheap to buy 4 more M.2 drives just to up the capacity lol

2. Intel NIC - pfSense, etc

3. RTX 2070 - this serves two purposes: it's either running my gaming VM whenever I find time to play, or serving an Unraid VM as an additional Tdarr node or for testing out new things prior to implementing them on the main hypervisor

4. LSI 2308-based HBA - just connecting any of the drives that I don't have onboard connectors for

5. GTX 1650S - the main hypervisor's GPU for Plex, Tdarr, and facial recognition in both Nextcloud and Frigate (well, until I can convince myself that I need a Coral accelerator, anyway)
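
As mentioned in item 1, here's roughly what the LVM-plus-XFS idea would look like if Unraid's modified md driver weren't a factor. It's only a sketch - made-up device names, wrapped in Python purely for illustration - and not anything actually running on this box:

```python
# Rough sketch only: stripe the four onboard SN750s into one LVM logical volume
# and put XFS on top. Device names, VG/LV names, and the stripe size are all
# assumptions, not this server's real configuration.
import subprocess

nvme = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]  # hypothetical

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate", *nvme])                             # mark each drive as an LVM PV
run(["vgcreate", "vg_nvme", *nvme])                  # pool them into one volume group
run(["lvcreate", "-n", "cache", "-l", "100%FREE",    # one LV striped across all four
     "-i", "4", "-I", "64", "vg_nvme"])
run(["mkfs.xfs", "/dev/vg_nvme/cache"])              # format it with XFS
```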

 

Hope to update again sometime after Thanksgiving!


Nice build! I like those 5- or 6-slot PCIe mobo platforms for server applications rather than mixing in too many M.2 drives - cooling those is always a headache.

 

My first set of four 32GB non-ECC modules was burned out by an in-service mobo - really not fun. After the RMA I confirmed they work in other mobo(s). I still don't know why, and I won't take the risk of retesting - the modules got abnormally hot when sitting in the problem mobo.

 

 


Thanks!

 

It's more so a dedicated workstation motherboard than a true server platform, but I get where you're coming from. Going workstation is the only simple way I've found to get high quality audio playback without resorting to some separate card/device, and that just makes other things needlessly complicated imo. Workstation boards seem to offer the best of both worlds when it comes to home servers - IPMI, gobs of PCIe, system stability testing, double or more the RAM channels - all without losing the benefits of a consumer board (audio, plenty of USB ports, etc). Only downside... they charge a friggin arm and a leg. This was a purchase that was a little over a year in the making, mostly paid for with side hustles, or I'd never have gotten up the nerve to pull the trigger on it lol.

 

As to the M.2 - I've honestly been quite happy with it! It peaks at about 53°C during sustained heavy IO now that I've got the card arrangement optimized a bit, which is basically ideal for NAND, and I intentionally went with PCIe 3.0 initially as part of the overall plan to limit unnecessary power consumption. Best of all (as a cheapskate), M.2 is far easier to find great deals on than its U.2/2.5" counterparts.
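
(If anyone wants to keep an eye on their own M.2 temps, here's a rough sketch of one way to pull them from the kernel's hwmon interface, which the nvme driver populates - not exactly what I use, just an illustration.)

```python
# Rough sketch: print NVMe temperatures from the standard hwmon sysfs tree.
# The nvme driver registers an hwmon device named "nvme" for each controller,
# with readings reported in millidegrees Celsius.
from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    if (hwmon / "name").read_text().strip() != "nvme":
        continue
    for temp in sorted(hwmon.glob("temp*_input")):
        label_file = temp.with_name(temp.name.replace("_input", "_label"))
        label = label_file.read_text().strip() if label_file.exists() else temp.name
        print(f"{hwmon.name} {label}: {int(temp.read_text()) / 1000:.1f} °C")
```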

 

If you can find a board that has enough onboard NVMe connections to satisfy your needs, I personally say "go for it" - it beats the snot out of throwing in an add-in bifurcation card, which not only takes up another slot but, more importantly, adds a single point of failure for all the connected devices.

