
Migrating Back to Unraid: Niche Build?



Hi everyone! I used to be an Unraid user with a fairly simple config: I stuck some drives into an old gaming PC I wasn't using that happened to have a GTX 1650 in it, and all I did was use it as a NAS and Jellyfin server.

 

(Un)fortunately, my use cases have changed as I've gotten more comfortable, and I need a lot more out of my new machine. I've been running Proxmox on a mini PC with a separate Asustor NAS, and that's been... passable, but it has several limitations around expandability that restrict what I'd like to experiment with.

 

The TL;DR of my use case is fairly simple. The most commonly used service (by myself and others) is Jellyfin. Most of my library is 1080p, with some 4K holdouts, all in H.264; it takes up about 8TB. I'm also currently running Home Assistant, the Servarr suite of apps (including Wizarr and Bazarr), Kasm, Nextcloud, Memos, Pi-hole (with Unbound), LinkAce, Monica CRM, and Jellyseerr. On top of that I have two VMs: one running Ubuntu with XFCE for a VPN-bound instance of qBittorrent, and another running Ubuntu Server as a Tailscale subnet router.

 

However, I'd also like to begin doing a few other things, like local AI (specifically Stable Diffusion for D&D groups and Llama models for text generation), NVR with Frigate (via a Coral Edge TPU), and possibly other AI homelabbing as a bit of a side interest. Once I get bored with that, I'll probably use whatever Nvidia GPU I grab for a cloud gaming VM for a friend. On the topic of games, I'd also like to run AMP or Crafty for game servers. Lastly, I'd like to considerably reduce the size of my current media library by converting it to H.265; for the few devices that stream from Jellyfin but don't support it, transcoding should even that out.

 

So here's my current situation: my existing use case is starting to run into issues with my current setup. For one, it's a Minisforum mini PC, and support hasn't been helpful with iGPU passthrough. The whole point of buying this specific product was hardware transcoding, which still hasn't happened... so I'm using it as an excuse to build a new machine. Given the use cases I just described, the other things I mentioned an interest in, and some misc. parts I have lying around, here's roughly where my head is at:

 

A 12th- or 13th-gen Intel CPU offers a good core count for VMs and distributing workloads, and both generations have very capable iGPUs for hardware transcoding. I currently own an Arc A380 that I bought as the bare minimum to get a gaming PC I just finished building up and running (it was replaced with a 7800 XT and is now sitting on a shelf). I think the A380 could be great for Tdarr or HandBrake, if Unraid and those tools support it, because I need to shrink my media library and I'm filling up fast. I also have 64GB (2x32GB @ 3200 MHz) of DDR4 on hand, though it isn't ECC.
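To frame the Tdarr/HandBrake idea, this is the rough kind of batch job I have in mind, just driving ffmpeg's QSV HEVC encoder from a small Python script. The library path, quality value, and file handling are placeholders/assumptions on my part, not something I've tested on the A380 yet:

```python
#!/usr/bin/env python3
"""Rough batch-conversion sketch: walk a media share and re-encode H.264
files to HEVC with ffmpeg's Intel QSV encoder (the job I'd eventually hand
off to Tdarr or HandBrake). Paths and quality settings are placeholders."""

import subprocess
from pathlib import Path

LIBRARY = Path("/mnt/user/media")   # hypothetical Unraid share path
OUT_SUFFIX = ".hevc.mkv"

def is_h264(path: Path) -> bool:
    """Check the first video stream's codec with ffprobe."""
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True,
    )
    return probe.stdout.strip() == "h264"

def convert(path: Path) -> None:
    """Decode and encode on the Arc via QSV; copy audio and subtitles as-is."""
    out = path.with_suffix(OUT_SUFFIX)
    if out.exists():
        return
    subprocess.run(
        ["ffmpeg", "-hwaccel", "qsv", "-i", str(path),
         "-c:v", "hevc_qsv", "-global_quality", "24",  # quality target is a guess
         "-c:a", "copy", "-c:s", "copy", str(out)],
        check=True,
    )

if __name__ == "__main__":
    for f in sorted(LIBRARY.rglob("*.mkv")):
        if is_h264(f):
            convert(f)
```

Tdarr would presumably handle the same workflow through its plugins with a proper queue and health checks, which is why I'd rather lean on it than maintain a script long-term.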

 

My question to you all is: how much is overkill, and how much would a reasonable person spend here? I'm US (East Coast) based and have a fair bit of knowledge in this arena, but I'm a 22-year-old college student studying something entirely unrelated and making college-student money. If you could help me flesh out this build list without spending more than $700 or so, I would greatly appreciate it, and I'm more than happy to buy used or refurbished parts from a trustworthy source.

 

CPU: Probably an Intel i5-13600

Motherboard: TBD (Need the most help here, M.2 E-Key strongly preferred for Coral TPU support)

RAM: 64GB (2x32GB) DDR4 3200 MHz Non-ECC (Owned)

HDD Storage: 2x10TB WD Red (Owned), Ideally 2x More 12TB+ if possible (Used/Refurb acceptable)

2.5in SSD Storage: 4x Samsung 870 EVO 1TB (Owned)

M.2 Storage: 1x Sabrent Rocket 1TB PCIe Gen 4 (Owned)

GPU 1: ASRock Arc A380 (Owned)

GPU 2: TBD, Nvidia Preferred, Primary use case = Local AI

Network: PCIe 2.5GbE Card (If possible; also taking suggestions for this)

PSU: TBD

 

I've put the items I most need assistance with in bold.

 

This is a lot of information, and I apologize for making you read it, but I would be grateful for any help! I'm fairly new to sourcing hardware; historically I've only modified prebuilts, and this is largely a ground-up build. Any guidance would be greatly appreciated.


Seems like you can do everything in containers (there are containers for Tailscale and BitTorrent that I use), so I would divest, use a container cluster, and keep Unraid as the NAS. For the NAS portion, your current hardware is more than capable; an embedded board is more than enough.
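As a rough illustration (not Unraid-specific, and the image names and env vars are from memory, so check them against each image's docs), those two containers look something like this if you script them with the Docker SDK for Python; gluetun here is just one common way to do the VPN binding:

```python
"""Sketch: a Tailscale subnet router plus a qBittorrent instance whose
traffic is forced through a VPN container. Auth key, subnet, provider,
and paths are all placeholders."""

import docker

client = docker.from_env()

# Tailscale as a subnet router (replaces the Ubuntu Server VM).
client.containers.run(
    "tailscale/tailscale",
    name="tailscale-router",
    detach=True,
    cap_add=["NET_ADMIN"],
    devices=["/dev/net/tun:/dev/net/tun"],
    environment={
        "TS_AUTHKEY": "tskey-auth-...",     # placeholder auth key
        "TS_ROUTES": "192.168.1.0/24",      # LAN subnet to advertise
        "TS_STATE_DIR": "/var/lib/tailscale",
    },
    volumes={"tailscale-state": {"bind": "/var/lib/tailscale", "mode": "rw"}},
    restart_policy={"Name": "always"},
)

# VPN container first, then qBittorrent attached to its network namespace
# so torrent traffic can only leave through the VPN tunnel.
client.containers.run(
    "qmcgaw/gluetun",
    name="vpn",
    detach=True,
    cap_add=["NET_ADMIN"],
    devices=["/dev/net/tun:/dev/net/tun"],
    environment={"VPN_SERVICE_PROVIDER": "your-provider"},  # placeholder
    restart_policy={"Name": "always"},
)
client.containers.run(
    "linuxserver/qbittorrent",
    name="qbittorrent",
    detach=True,
    network_mode="container:vpn",           # share the VPN container's network
    environment={"PUID": "1000", "PGID": "1000"},
    volumes={"/mnt/user/downloads": {"bind": "/downloads", "mode": "rw"}},
    restart_policy={"Name": "always"},
)
```

On Unraid you would normally just pull the same images through Community Applications templates, but the moving parts (NET_ADMIN, /dev/net/tun, and the shared network namespace for the torrent client) are the same.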

 

As for containers, I use Rancher/Harvester with etcd in a 3-node cluster, and you can scale them as needed.


Any reason you're not looking at EPYC? For getting the most out of multi-GPU that seems relevant... but maybe your future use case doesn't need the PCIe lanes. Interesting project. I agree with the suggestion that you start with what you have and learn what it needs before buying anything new at all.

Also, I'm wary of 2.5G networking... 10G is cheap, especially if you can just run SFP+/DAC... scope creep.

On 4/9/2024 at 7:02 PM, _cjd_ said:

Any reason you're not looking at EPYC? For getting the most out of multi-GPU that seems relevant... but maybe your future use case doesn't need the PCIe lanes. Interesting project. I agree with the suggestion that you start with what you have and learn what it needs before buying anything new at all.

 

EPYC is an option; my main hope for the Intel system is QuickSync for transcoding, which would free up a GPU for media conversion (based on my limited understanding, a GPU can't do both at the same time, no?)

 

On 4/9/2024 at 7:02 PM, _cjd_ said:

Also, I'm wary of 2.5G networking... 10G is cheap, especially if you can just run SFP+/DAC... scope creep.

Entirely valid. If the cost difference isn't too much, and the 10G connection is only used for hardwired devices on my local network (literally just my main/gaming desktop and possibly one other server in the somewhat distant future), how would you suggest I pull off that implementation?

 

I've been looking at eBay, and used Supermicro/EPYC combinations aren't terribly priced. Are there specific configs you'd suggest that would work within the budget I shared? If so, are there any downsides compared to the initial list I provided?

 

I'm only a bit past novice with these things, but I'm hoping this new build will give me room to tinker and learn. Any input is appreciated!

On 4/9/2024 at 2:17 PM, psychic99 said:

Seems like you can do everything in containers (there are containers for Tailscale and BitTorrent that I use), so I would divest, use a container cluster, and keep Unraid as the NAS. For the NAS portion, your current hardware is more than capable; an embedded board is more than enough.

Containers are the primary reason I'd like to go with Unraid; the documentation and community support are more robust and centralized than searching through multiple forums for various use cases. As someone with only limited technical knowledge, Unraid's novice-friendly approach is what's most appealing!

21 hours ago, the_scarian said:

 

EPYC is an option; my main hope for the Intel system is QuickSync for transcoding, which would free up a GPU for media conversion (based on my limited understanding, a GPU can't do both at the same time, no?) <snip>

You are going to want the PCIe lanes for multi-GPU, unless you only ever plan to have one card plus the onboard graphics. Assuming your existing A380 is only just going to get you rolling, it may end up dedicated to transcoding anyway... you can run four GPUs at x16 on EPYC versus just one on 'consumer' hardware. The second thing is that you're likely going to want room for gobs of RAM, and used EPYC is far and away the affordable way in (with the usual gotchas around used hardware). It'll be more power hungry on the flip side, but if you're doing enough ML/AI it's homelab hardware or cloud costs either way. Power consumption is likely THE big downside, though a multi-GPU rig will be hungry for that reason too. Consider the case carefully here as well for good airflow, and don't skimp on the power supply (there's so much garbage out there; I stick to Seasonic, though I've run Corsair and one system has a Be Quiet! right now).

 

For 10G networking, I believe if it's just two machines you can direct-wire with an SFP+ DAC, copper or fiber as needed, no switch necessary; avoid RJ45 unless you really, really have to (also, ConnectX-3/4 cards are cheap, and the Intel X710 is also reasonable). Once you need a switch, there are lots of options: MikroTik has a 4-port SFP+ switch I believe, and MikroTik, UniFi, and probably others have 8-port SFP+ switches at reasonable prices. I have two Cat6 runs, and the Intel X540 is my recommendation on that end, but SFP+-to-RJ45 modules cost more and run hot. No heat issues in my UniFi gear, though I've heard MikroTik can overheat. I also have five copper DAC connections and one fiber connection. If we ever open the walls enough that I can safely run fiber where I have the Cat6, I will jump on the chance. Regardless, leave this as an upgrade path option for when you find a use for it (though some EPYC boards have built-in SFP+...).

 

More than anything, though, I'm going to reiterate: don't buy anything at all yet if you can avoid it. Get things running and learn what you need next, unless the hardware simply will not work. It's cheaper to dream and learn with what you have until you know exactly what you need, then buy the right things the first time (and of good quality). If you have to, put aside a few bucks every day into a build coffer... it adds up fast and is a healthy habit anyway.

 

I started on an AMD 765 Black, had stability issues and swapped to goodness knows what random hardware I had in a closet (some AMD "media" CPU from ages past) until I tracked down bad RAM, went back to the 765 for a bit and sorted out requirements, then built the setup I have now. I've since become more power-consumption aware and vaguely wish I'd gone Xeon, but even today it's hard to find actual consumption details (and I do want ECC memory). I swapped to one of the big 4U Rosewill cases (the 15-drive-capacity one) after learning I didn't care about hot-swap and would prefer lower drive temps at lower fan curves; it's modded to take 140mm fans in the middle row, with room for giant motherboards if I ever want them... I could probably passively cool the CPU at this point. I'm down to ~43-44W mean consumption at idle (Home Assistant and some related bits add 5+W of overhead), but there's no GPU at all for me, even for Jellyfin. I still like having IPMI, knowing it adds another ~5W or so (trusting data from others on this). Unraid does not need much to run quite a bit of stuff well. My backup Unraid is an older i3, and some things there are clearly CPU limited (I had the cachedirs plugin installed for no sensible reason, and it would peg a core for hours on end).

 

I can't speak to *specific* hardware recommendations on EPYC. Get a motherboard that supports a range of CPUs and start on the cheaper/older side for the CPU; that gives you an upgrade path if it becomes a bottleneck. I've definitely seen used options with memory included on eBay well within your budget, but a good PSU isn't cheap either, especially if you're looking at multiple GPUs down the road. I'm also unsure whether you have a case that would support this well.

