The next generation - M12SWA, Threadripper Pro


As my earlier post noted, my main server was due for an upgrade. It'll probably keep running for now so as not to overload the backup server with all the VMs I have running for work-related testing and such, but the replacement is finally becoming a reality:




It's a Supermicro CSE-743-1200B-SQ: a 1200 watt platinum power supply, 8 bays built in, and a 5-bay CSE-M35TQB in place of the three 5.25" bays, all designed to run at less than 27dB, while able to be run either as a tower or rack mounted (it'll spend the next 3 months in tower form... it seems getting rails for this thing requires first sending a carrier pigeon to Hermes, Hermes then tasks Zeus with forging them in the fires of the gods from unobtainium, who then ships them when he's done doing... well, Greek stuff). My first 700 series chassis is still doing work, still with its original X8SAX motherboard, and I see no reason to fix something that isn't broken!


While having a bunch of drives is great, the idea here is to have two gaming VMs, run plex, nextcloud, homeassistant, frigate, and numerous others. All of that takes a ton of IO. Enter the motherboard:





This motherboard is a friggin monster - but importantly, at least to me, its design syncs up perfectly with the chassis, so all the power monitoring, fan modulation, and LED/management functions can be controlled via built-in out-of-band management. The M12SWA is currently paired with a 3955WX; given how close we are to the next-gen Threadripper's release, I'm going to wait that out for now, and then decide whether to upgrade to next gen's mid-range (whether it be a 5975WX, or whatever the case may be), or otherwise.


For now, the VMs will be 4 core / 8 thread to match the CCDs, leaving the rest to docker. Down the line, they'll likely be either 8 cores each, or one 8 and one 4, depending on what the need is. The lighter of the two is going to house an always-on emulation VM with a 1650S, which will play all our games on screens throughout the house (or wherever) via moonlight/parsec/whatever.
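As a concrete sketch of how those 4c/8t pin sets fall out on a 16-core part like the 3955WX (4 cores per CCD), assuming the common Linux enumeration where a core's SMT sibling shows up as N+16 - the helper name here is mine, purely for illustration:

```python
def ccd_cpus(ccd, cores_per_ccd=4, physical_cores=16):
    """Host CPU numbers belonging to one CCD, including SMT siblings.

    Assumes the typical Linux numbering where the thread sibling of
    core N is core N + physical_cores.
    """
    first = ccd * cores_per_ccd
    cores = list(range(first, first + cores_per_ccd))
    return cores + [c + physical_cores for c in cores]

# ccd_cpus(0) -> [0, 1, 2, 3, 16, 17, 18, 19]
# ccd_cpus(1) -> [4, 5, 6, 7, 20, 21, 22, 23]
```

Pinning each VM to one of these sets (via libvirt's cputune/vcpupin or taskset) keeps a guest on a single CCD's L3 cache, which is the point of matching the VM size to the CCD.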


It slots perfectly in the chassis:




But cable management is going to be a meeessssss:





That ketchup and mustard is hurting my friggin eyes. I'm going to have to wrap those with something :| 


More to come on this one - the plan for now is to throw in 128GB of ECC 3200, 4 NVMe drives, an RTX 2070, a GTX 1650S, a quad 10Gb NIC (Chelsio, since this thing comes with the stupid Aquantia NIC, which has no SR-IOV support), and a quad 1Gb NIC (since the Intel NIC they included ALSO doesn't support SR-IOV... ugh), leaving one slot for potentially adding either tinker-type toys or an external SAS HBA if I somehow eventually run out of room. There are custom boards out there that combine the X540 and i350 chipsets onto one board, but I may instead consolidate this down to a single X550 or one of those fancy X700 Intel-based boards... We'll see.
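For anyone wanting to check SR-IOV support before buying, the Linux kernel exposes a device's advertised VF count in sysfs (`sriov_totalvfs` under the PCI device directory). A minimal sketch - the function name is mine, and the path you'd pass in is your NIC's actual `/sys/bus/pci/devices/<addr>` directory:

```python
from pathlib import Path

def sriov_max_vfs(device_dir: Path) -> int:
    """Number of virtual functions a PCI device advertises via sysfs
    (e.g. /sys/bus/pci/devices/0000:41:00.0), or 0 if the device or
    its driver exposes no SR-IOV capability at all."""
    vf_file = device_dir / "sriov_totalvfs"
    if not vf_file.is_file():
        return 0
    return int(vf_file.read_text().strip())
```

A device (or driver) without SR-IOV simply won't have the file, which is exactly the situation with the onboard NICs here.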

  • 1 month later...

Finally found a few free minutes to update! The server's been running now since Sept 5th without so much as a hiccup! However, it took a little planning to get it that way...


The problem area starts with this:



Both GPUs are dual slot, and stacking them means the 2070's intake is about 40% covered by the backplate of the 1650S. I then thought to use the intel NIC as the in-between, but it still covers a bit:



And if I can avoid covering it at all, all the better - As this is *right* on top of the NVME drives, any additional heat the 2070 radiates means heat added to them. In the end, I went ahead and put the HBA here instead:



It's not perfect (nothing is for me I guess), but after running a temperature probe to the HBA and finding it's well within spec, it's about as good as it gets for me, and it'll do for now!


Here's (almost) what it looks like as of today:



The 32GB DIMMs I ordered didn't show up in time, and I really needed to get this thing up and running before start of business Monday morning so everyone could access their files and backups could pick back up, so this is where we're at till probably Thanksgiving or so. Running through the cards, from the top:


1. Intel 1.2TB NVMe - a holdover from the last server setup, which only exists for caching writes to the unraid array; it seems the md driver is modified as part of Unraid's base deployment, or this would be removed in favor of LVM with an XFS volume over the 4 onboard SN750s. BTRFS just doesn't have the performance needed (not to mention other areas of concern :( ) and I'm too cheap to buy 4 more M.2 drives just to up the capacity lol

2. Intel NIC - pfSense, etc

3. RTX 2070 - this serves two purposes - it's either running my gaming VM whenever I find time to play, or serving an unraid VM for an additional Tdarr node or testing out new things prior to implementing them on the main hypervisor

4. LSI 2308 based HBA - just connecting any of the drives that I don't have onboard connectors for

5. GTX 1650S - main hypervisor's GPU for Plex, Tdarr, and facial recognition in both nextcloud and frigate (well, until I can convince myself that I need a Coral accelerator anyway)


Hope to update again sometime after thanksgiving!


Nice build. I like those 5- or 6-slot PCIe motherboard platforms for server applications rather than mixing in too many M.2 drives - cooling is always a headache


My first set of four 32GB non-ECC DIMMs was burned out by an in-service motherboard. It really wasn't fun. After the RMA, I confirmed they work in other motherboards. I still don't know why, and I won't take the risk of retesting (the modules got abnormally hot when sitting in the problem board).



Edited by Vr2Io



It's more so a dedicated workstation motherboard than a true server platform, but I get where you're coming from. Going workstation is the only simple way I've found to get high quality audio playback without resorting to some separate card/device, and that just makes other things needlessly complicated imo. Workstation boards seem to offer the best of both worlds when it comes to home servers - IPMI, gobs of PCIe, system stability testing, double or more the RAM channels, and all without losing the benefits of a consumer board (audio, plenty of USB ports, etc). Only downside... they charge a friggin arm and a leg. This was a purchase a little over a year in the making, mostly paid for with side-hustles, or I'd never have gotten up the nerve to pull the trigger on it lol.


As to the M.2 - I've honestly been quite happy with it! It peaks at about 53°C during sustained heavy IO now that I've got the card arrangement optimized a bit, which is basically ideal for NAND/NVMe, and I intentionally went with PCIe 3.0 initially as part of the overall plan to limit unnecessary power consumption. Best of all (as a cheapskate), M.2 is far easier to find great deals on than U.2/2.5" counterparts.


If you can find a board that has enough onboard NVMe connections to satisfy your needs, I personally say "go for it" - it beats the snot out of throwing in an add-in bifurcation card, which not only takes up another slot, but more importantly, adds a single point of failure for all connected devices.

  • 4 months later...

With rumors regarding zen 3 TR Pro seemingly evaporating overnight (nothing new since December), I've been getting antsy. Sincerely hoping that all the March 2022 release date rumors leading up to the drought turn out to be true...


Because in preparation for its release, I've been deploying more and more of the workloads I'd had planned for the server, and I've now stacked enough on that the 16c 3955WX has been redlining somewhat regularly the last few weeks. Images of a super buff 4'5" bodybuilder trying to backpack a Volkswagen have come to mind. Given this, I've had to disable Tdarr's CPU transcodes and stick with GPU-accelerated only. It's helped quite a bit, saving some headroom for those occasional spikes.


Some current results on that front:



Almost ready to add the last Plex library and queue it up. I figure the 1650S has pretty much paid for itself twice over at this point (at least at the price I paid for it back then anyway) - with 16TB drives at ~$320 each and the 1650S costing me ~$170, I'm honestly pretty stoked!
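The back-of-the-napkin math on that "twice over" figure, assuming transcoding has freed roughly one 16TB drive's worth of space (my reading of the post, not a measured number):

```python
drive_cost = 320    # ~$ per 16TB drive, per the post
gpu_cost = 170      # what the 1650S cost
drives_avoided = 1  # assumption: transcoding freed ~one drive's worth of space

payback_ratio = drives_avoided * drive_cost / gpu_cost
print(f"{payback_ratio:.1f}x")  # prints "1.9x" - roughly paid for itself twice
```

Every further drive purchase the transcoding queue postpones only improves that ratio.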


The first 'new' addition since moving to the new hardware was Paperless-ng - I had no idea how much it'd change our lives, to be honest. We went from having 2 banker boxes full of everything from tax paperwork to paid bills, work reviews, etc, to about 3 inches of papers left. No more having to go into the boxes each and every time a bill comes in the mail, no more time wasted trying to find 'that one bill that we know we paid but the stupid bank/lender/government/whatever says they never received payment for'... Just plop it on the scanner's feed tray, hit a button, and done.


It's been a hell of a process to get here, about 5 months in the making, as we're using this ancient HP business scanner and I'm too cheap to replace it - it's wicked slow at a decent quality setting, maybe a page every couple minutes... but we're in no hurry, the quality is perfect for Paperless's OCR (converting the scanned image to searchable text), it's able to scan directly to SMB, and it's got a 35-page feed tray. I just put another stack on a couple times a day and leave it at that. Paperless automatically tags them based on content, and we've not had to dig out a single page since setting it up.


Next, I opened up the media request website to my sister, now that my parents have gotten the whole "we can just submit it here and we don't have to change DVDs anymore!? *CLICK CLICK CLICK CLICK CLICK CLICK*" thing out of their system; that might've been a bit of a mistake:



Nearly everything in TV_SaturdayMorning is hers, and she literally sat down for a full hour adding each and every one of her series... I wasn't expecting that kind of growth lol. That 'last library' to add? ...That's it.


There's more, but I think I gotta hit the sack - hoping to get more time to catch this thing up to date in the coming weeks!


Brief preview of what I've still got to detail out:


Another host - two hypervisors, routing, and a little room to spare


Network refresh - IoT takes a bite out of wifi



And a couple other odds and ends - several containers, storage crunch woes (thanks sis...), and repurposing the older xeon gold 6139 based system.



Edited by BVD
  • 1 month later...

WOAW! Congrats on your upgrade!


I am also looking to do an upgrade! ;-)


Your setup looks like it covers all of my requirements and then some!

I finally found a seller of the MB:

I hope my PSU can provide enough power? (Corsair AX860i)


How much power does this setup use on average? (The "low" power usage and the onboard GPU of the Xeon E-2100G series was why I got into Unraid some years ago LOL)

My existing server runs 24x7, so it's only at night that it draws less power (my power consumption right now shows 763W - 26 drives)

Do you have ECC memory? - It has saved my "bacon" a couple of times 🙂



  • 3 months later...
On 4/3/2022 at 9:52 AM, casperse said:


Wow, I'm super delayed in responding, sorry about that! Probably no longer useful, but figured I'd respond anyway for posterity:


Idle power consumption runs around ~200w, which is pretty typical for me - that's with a couple disks spun up for folks watching plex (direct streams / no transcoding). The highest I've ever seen it pull was just over 670w - this was during a parity check while playing some games on the 2070 (and a whole bunch of other crap).
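To put those numbers in context, here's the annualized arithmetic on the idle draw - the electricity rate is an assumed figure for illustration, not the actual tariff here:

```python
idle_watts = 200
kwh_per_year = idle_watts / 1000 * 24 * 365   # 1752 kWh at constant idle
rate = 0.15                                   # assumed $/kWh

print(f"{kwh_per_year:.0f} kWh/yr, about ${kwh_per_year * rate:.0f}/yr")
```

Worst-case draw (~670W) only happens during parity checks plus gaming, so actual annual usage sits much closer to the idle figure than the peak.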


The TR Pro boards only 'support' ECC insofar as I'm aware, so right now it's 128GB of registered ECC memory.


That 763W though... that seems awfully high? I assume that's your max power consumption? Promise I'll be a little more prompt with the next response lol

  • 1 month later...

What container is that Networking Refresh window from?


Epic build. What's the model number of that Intel drive? I'm aiming for more of an Epyc setup... or was. Threadripper would be pretty good for unRAID, but the price of the M12SWA in Canada is bordering on $1000.


Regarding our messages, this build thread pretty much answers my questions!

5 hours ago, SLNetworks said:



Glad you found it helpful!


The Intel AIC is a P3520 - very oooooold... but especially good at 4K reads. I've stopped using it as my write cache, instead opting to use it for the docker volume and for the boot drives of some work backup appliances that just consume massive amounts of space - basically just things that need the best random 4K read performance possible, but that I don't care about losing.


I picked it up for something like $150 I want to say close to 3 years ago now, and it ended up not being a large enough cache, so I watched ebay till a good enough deal came along that I couldn't pass it up; 4TB micron 5100 pros for all of 250 bucks! They had no history associated with them, and this was at the height of chia, so it was a little bit of a gamble, at least I thought so. But when the package arrived and I connected up... 8 power on hours. Total. EIGHT! On 12v power it maxes out at 3.5w, typically more like 1.5-ish (while still busy saturating the SATA read bandwidth). I couldn't believe it. Almost still can't lol.


As for the network, that's the dashboard for Engenius's cloud-enabled products. I'd been using their stuff professionally in one way or another for years and have had nothing but good things to say about them, but when it came time to build a new home network, I thought to jump on the unifi bandwagon - everyone always seems to have the best things to say about their network gear, and while I wasn't a fan of the fairly limited interface, their pricing couldn't be beat. I had my reservations about their cloud account requirements and all, but went against my better judgement and placed an order on 01.21.21. Two days later, they issued a software update that removed multi-site support; that was enough to convince me to cancel my order. They later said it was a 'bug' - with their history, I can't really believe that... Fortunately the hardware was still in transit, and all that was needed was to notify UPS of the intercept/redirect and all was well.


Anyway, enough of all that negativity (ugh, sorry!), on to Engenius! Between my place and the parents'/in-laws', I've got 5 of their switches:

  • 2 x ECS1112FP (8x1GbE PoE+, 2x1Gb, 2xSFP)
  • 1 x ECS1552 (48x1GbE, 4xSFP+)
  • 1 x ECS2512 (8x2.5GbE, 4xSFP+)
  • 1 x 5512FP (8x10GbE, 4xSFP+)

Along with 11 of their APs now:

  • 4 x ECW120 - (wifi 5) - Two at my in-laws, two at my parents
  • 3 x ECW160 - (wifi 5) - These are outdoor APs, 1 at in-laws, 2 at parents - both live out in the country, and dad likes to have spotify playing in his workshop while mom wanted cameras to monitor the barn (security stuff)
  • 2 x ECW220S - (wifi 6) - Engenius had a buy-one-get-one promo late last year that my distributor hooked me up with, or I'd have gotten the (much cheaper) non 'S' variants
  • 1 x ECW336 - (wifi 6E) - Again a promo deal where you could buy up to two for half off each... I'm wishing I bought two now lol.

My only regret thus far has been related to purchase timing - they released the damned 2528FP only about a month after I picked up the 2512, or I'd have gone with that lol. You can see the switch lineup's datasheet here.


Some notes / further details on this:

All of their gear can be locally managed if you prefer, which is a huge plus for me personally, as it means they'll remain usable no matter what as long as the hardware is still healthy (and longevity isn't a concern of mine with their hw just based on my history with them). For the switches, they're rock solid, online upgrades (PoE remains active), stellar remote troubleshooting tools built in, and 0 issues in ~18 months. They really come into their own though when you pair them with their APs (not sure I'd still have sprung for them with the cost if I'd no need for access points)...


Their best feature imo is their roaming implementation - cisco/meraki, ruckus, unifi, netgear, aruba, cambium... I've never had better AP-to-AP handoff, and only one matched them (again, just my experience). Roaming between 3 units while on VOIP/webex/zoom is completely seamless - no drops or hitches in video/audio, nothing to indicate you've reassociated with a new AP. I expected this though, as even their prior (non cloud-managed) hardware was similarly capable. Simply supporting 802.11r is one thing, but their implementation does the best job I've ever seen of making it seamless.


Second best is the troubleshooting tools, and that was a new one to me (as it's only available on their newer gear). It's built for people who manage hundreds/thousands of endpoints, so there are a number of super handy tools built in that let you initiate things like channel scans and PCAPs, check latency to (whatever you want), and review client history so you can see what happened with a given device (e.g. device X associated with AP1 @ 1710GMT, band steering suggested a move to 5GHz due to an RSSI value of ### @ 1713GMT, etc).


Here's one of the troubleshooting/diag pages, for example, showing the channel scanner and resource tools at my parents place; you can have it automatically scan channels to look for the 'least utilized' and automatically switch to that channel with the 'S' models, or do so manually with the rest:



Troubleshooting clients is also much easier, as you can see the client history, when the AP nudged the device to a different frequency (or if the device just had a mind of its own and did it itself - apple devices... ugh), and the same for roaming:



Honestly, it's saved me enough in time and gas not having to hop in the car and drive out any time either the parents or in-laws call that I feel they're well worth it.

