My "new" supermicro box. Regrets. Questions.


tiwing

Within the last 3 months I purchased a new-to-me (and not returnable) Supermicro server to act as my primary Unraid server, with some pretty great specs. Or so I thought. It's a 36-bay Supermicro X9DRi-LN4+ with SAS2 backplanes and dual Xeon E5-2670 processors for a total of 32 threads. I thought I needed it. Then about 2 weeks later I learned about Intel Quick Sync.

 

I have two primary uses for the server: Plex and data storage. Related to Plex are things like Tdarr, which handles all my re-encodes for better streaming, plus Radarr, Sonarr, and Bazarr. I share media with family and close friends via Plex over the internet on a slow outbound connection, so transcoding of outbound streams is quite common.

 

So now I have a power-hungry 32-thread monster that can't do hardware video encoding. Sure, the chassis is awesome, and redundant power supplies are great. And the fact that a real server takes 5 minutes to boot ... oh wait, not so great.

 

I currently have a second Unraid box acting as a backup server, using a Xeon W-series processor. It automatically starts every day at 7am, triggers an incremental backup of the primary box (Syncthing dockers), and then shuts down.

 

I have a third box that's a dedicated pfSense firewall.

 

 

I really want to get hardware encoding, especially as I start to move into more 4K source material (and I don't want to store an extra copy pre-encoded to 1080p). So I think I have a few options:

 

1) Add an Nvidia low-profile card to the Supermicro. This would be ideal if Quadro P2000 cards weren't so expensive. Even the "cheap" GTX 1050 Ti is expensive. It would keep my "primary" box as my primary in all respects, and give access to data locally rather than over the network. I'm not even sure you can get a low-profile P2000, and I don't want to be limited in the number of transcodes running at any given point.

 

2) Upgrade my backup server to a modern (7th gen or higher, for HEVC) i3/i5-based machine, leave it also running 24/7, and accept that my "backup" is my Plex (and Tautulli) "primary" while all my "primary" data stays on the Supermicro. This feels clunky, and I'm not sure Plex can watch for file changes over the network. That's not a deal breaker, just a "nice to have", and it would be an issue with most of the other options here too, so I won't repeat it each time.

 

3) Build a small, dedicated Plex server, likely hosted as Docker on Ubuntu or Alpine, whose sole purpose in life is to be a Plex server (I've put a rough sketch of what that docker could look like after this list).

 

4) Buy a modern i3/i5 and leave it running 24/7, but install Proxmox, move pfSense into a VM there with the quad-port network card passed through, and run Plex from an Ubuntu or Alpine guest on the same box, passing Quick Sync through to the Plex host (second sketch after the list). There are many benefits to this option: an i3 with modest memory (8 GB?) would easily handle pfSense (2 or 3 GB is fine for it) and Plex (and Tautulli) in dockers; it would give me a "production" server instead of mixing backup and primary purposes; it would not result in yet another 24/7 computer being on; and it would let me learn a bit about type 1 hypervisors, which I know nothing about. Plus, both pfSense and Ubuntu/Alpine have small installation sizes, and I assume Proxmox would let me install to a set of mirrored 256GB SSDs for redundancy... (??) (way more research required).

 

5) Rip the guts out of my Supermicro and "downgrade" it to an i3/i5-based system with a mobo that can take at least a 4-port NIC plus two HBA cards to continue handling the 36 drive bays (14 are currently occupied). That way I keep the chassis, gain hardware encoding (and lose the noise of the high-pressure fans needed for passive CPU cooling), and reduce my electric bill. From a purely objective perspective this seems like the right thing to do, but it's hard to rip apart something I just spent a lot of money on. Over the next 10 years, though, the cost savings in electricity would probably more than pay for the additional hardware. I don't see myself choosing this option, but I may revisit it a few years from now as I grow tired of wasted power. Maybe I'll find another use for the server too.

 

6) Sell the Supermicro (probably taking a loss) and build what I would have built in the first place: an i3/i5 in a 24-bay server chassis.
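
Re option 3: the docker side seems simple enough that I can already sketch it. This is just a sketch, assuming the linuxserver/plex image and an Intel iGPU exposed at /dev/dri; the paths, IDs, and timezone are placeholders, not a setup I've tested:

```
# Sketch only: Plex in Docker with Quick Sync on a small Intel box.
# Assumes the linuxserver/plex image; /dev/dri exposes the iGPU to the container.
docker run -d \
  --name=plex \
  --device=/dev/dri:/dev/dri \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=America/Toronto \
  -e VERSION=docker \
  -p 32400:32400 \
  -v /path/to/plex/config:/config \
  -v /path/to/media:/media \
  lscr.io/linuxserver/plex
```

Hardware transcoding then still has to be switched on in Plex under Settings > Transcoder (and needs Plex Pass, as I understand it).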
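
Re option 4: from what I've read, the passthrough part boils down to a couple of lines in each VM's config file. Again just a sketch; the NIC's PCI address is made up (find the real one with lspci), and IOMMU has to be enabled in the BIOS and on the kernel command line (intel_iommu=on) first:

```
# /etc/pve/qemu-server/100.conf  (pfSense VM)
# 0000:04:00 is a placeholder for the quad-port NIC; omitting the function
# number should pass all four ports through together.
machine: q35
hostpci0: 0000:04:00,pcie=1

# /etc/pve/qemu-server/101.conf  (Plex VM)
# Intel iGPUs usually sit at 00:02.0.
hostpci0: 0000:00:02.0
```

And on the mirrored SSDs: the Proxmox installer itself offers ZFS RAID 1 across two drives at install time, so that part at least looks straightforward.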

 

My gut tells me option 4. But what am I NOT thinking about?

 

cheers.


Just my own thoughts about my Supermicro boxes (3x SC-846, 3x BPN-SAS2-EL1, 1x X9DRi-F, 2x 2680v2, 1x 1050ti (one slot), 12 years old)

 

They are high-grade professional gear: heavy, fun to work with, loud, and power hungry ... My complete homelab draws around 600 W. Throwing out the redundant power supply (I did it) saves a few watts - but it's not worth mentioning.

 

For years I've been waiting for multiple arrays. If they appear I will change MB, CPU, GPU and the way I work with my homelab ASAP. I expect a massive power reduction then (1x CPU vs. 2x CPU, 1x PCH vs. 2x PCH, iGPU vs. 1050 Ti, 1x HBA vs. 3x HBA, etc.).

 

With your 36-drive system there's no need to wait. I would go for an Intel iGPU. Here in the forums, the reports about the "Intel iGPU transcoding beast" helped me make my decision for the future.

 

Regarding your mirrored SSDs: Do you mean BTRFS RAID-1? I had it ... and lots of problems. I switched to 2x single pools running XFS on M.2 drives, both rsynced with user scripts. I don't trust BTRFS any longer. But mileage may vary.
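
The user scripts are nothing fancy - in essence just this (a sketch with example mount points, not my real ones):

```
#!/bin/bash
# Mirror pool A to pool B (example mount points, adjust to yours).
# --delete keeps the copy exact; drop it to keep deleted files around.
rsync -a --delete /mnt/pool_a/ /mnt/pool_b/
```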

 

Regarding 24/7: Years ago I shut the server down in the evening and started it in the morning. That reduced power by ~30%. On the other hand, hard disks had to be replaced regularly. Now, with the server running 24/7, a hard disk needs to be replaced once a year or so. Running 24/7 is "cheaper" than running 16/7.

 

Edited by hawihoney

Thanks for that. The mirror I was suggesting would be in Proxmox on a separate PC, probably with Proxmox itself booting off a spinner, and probably ZFS RAID 1 for the VMs. I did my original install of pfSense on bare metal with RAID 1 across 2 SSDs. Waste of space, since pfSense runs mostly from memory, but I had them lying around. Nothing broke...
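
From the reading I've done so far, I think creating the pool itself would just be something like this (sketch only - the device names are placeholders, and I'd use the /dev/disk/by-id paths to be safe):

```
# Sketch: mirrored ZFS pool for VM disks on two 256GB SSDs.
# ashift=12 assumes 4K sectors; the by-id names are made up.
zpool create -o ashift=12 vmpool mirror \
  /dev/disk/by-id/ata-SSD_A_SERIAL \
  /dev/disk/by-id/ata-SSD_B_SERIAL
```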

 

Are you saying to rip the guts out of the Supermicro and put an Intel iGPU in the server chassis, or to go with a separate Intel unit?

 

I hear your comments about 24/7. I've thought a lot about it both ways... I'm not challenging you; I just find it hard to wrap my head around moderate heat cycles and 500 start/stops a year causing failures on modern drives. I only changed to this schedule because I don't want my wife on me about the electric bill, and the Supermicro draws 200 watts at idle where my old box draws 120, so having the backup off most of the time balances that out. We'll see... lol. But yeah, I hear you.

 

Thanks


tiwing... I have your exact same box. I bought mine from the fine folks at UnixSurplus, and have since bought some other boxes for Windows and ESXi clusters. The 36-bay Supermicro box is a beast; I'm running dual E5-2690 v2s and 128 GB of RAM (upgraded from when I bought it). For Plex, I did buy a low-profile GeForce 1050 Ti that I use exclusively for transcoding, using the Nvidia drivers. I keep 2 libraries, one of 4K movies and another of 1080p content. It works great, and I don't have any complaints.


Edited by jonathanm
nvidia eula
