Build help - combo NAS and Virtualization server



I bought a server off of eBay, and unfortunately it is too loud for the guest bedroom, and there are no other places to put it.

I've got about 1000 USD to spend on it and am looking for something that meets my needs. I'm at my wit's end on this and would really appreciate some specific guidance.


1st priority - needs to be as quiet as my Desktop computer (Core i7)

I'm not sure how many cores or how much RAM I'd need, but here is what I'd be running 24x7:

Plex - at most 3 streams at a time, not too much transcoding needed, but on my i7 there haven't been any issues because of the integrated gpu doing hardware transcoding

Medusa / sabnzbd / sonarr

Splunk VM

Some sort of home web filter like Sophos XG

Google Rapid Response (remote forensics gathering tool)



Things that would be spun up for fun and learning:



various vulnerable VMs taken from sites like vulnhub


How many cores would I need? How much RAM do you suggest? Does it need to be ECC or would regular desktop RAM work fine?

I don't want underpowered, but of course I don't want overkill either.


The more drive bays the better - at least 6 so I can have a cache drive and a parity drive, but ideally 8 or more 3.5" drive bays.


Welcome to the headache of server planning....


So here are my thoughts, since I'm going through the planning of upgrading my own server right now, after using Unraid for a couple of years.


You didn't mention whether you already have drives to use or need all new ones, or how many drives you'd start with - if you need drives, that could really take a chunk out of your budget.


From what you've said I feel that you would be best served by a good amount of RAM.  32GB at a minimum to start, and 64GB+ if budget allows.  I started with 16GB ECC and then added another 16GB when I started doing more stuff.  Now I want 64GB minimum, 96GB preferable, 128GB if affordable (which it's not, for ECC, lol).  Perhaps 96GB is too much for what I actually do but I already have 32 so I was thinking of just adding another 64.


So with the amount of RAM in mind, you'll have to decide whether you want ECC or not.  You'll have to do the research and make the decision for yourself.  Given my goal of long-term storage of all kinds of stuff, ECC seemed like the better choice for me.  Not sure if it actually matters in practice or not.


Once you figure that out, you'll then have to choose a motherboard (and CPU) that supports ECC (if you want it).  Yes, the CPU also needs to support ECC.  If you want ECC then you're pretty much limited to a server-class board, such as a SuperMicro board.  There may be a desktop board or two out there that supports ECC, but you'll really have to search for it... I don't notice many consumer boards supporting ECC.


So now if you do go with a SuperMicro server board, you'll have to choose whether or not you want to use the board's IPMI interface, which is basically remote access to the motherboard's console... such as being able to access the BIOS, or in the case of Unraid, accessing Unraid's command line directly instead of using the GUI.


I mention this because if you have the IPMI graphics interface enabled, then you cannot have a graphics card enabled in the BIOS for output.  You can't run them both at the same time.  I'm not familiar enough with passing graphics cards to dockers to say whether you'd still be able to use a card for transcoding if it's not active in the BIOS... someone else would have to chime in there.
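For what it's worth, when Plex runs as a Docker container, hardware transcoding with an Intel iGPU is typically done by passing the host's /dev/dri render device into the container, rather than going through whatever the BIOS uses for display output. A rough sketch, assuming the i915 driver is loaded on the host (the container name and media path are illustrative):

```shell
# Sketch only: pass the Intel iGPU render device into a Plex container
# so Quick Sync can handle transcoding. Assumes /dev/dri exists on the
# host; the container name and media path are made up for the example.
docker run -d \
  --name plex \
  --device=/dev/dri:/dev/dri \
  -v /mnt/user/media:/media \
  plexinc/pms-docker
```

(Note that Plex also requires a Plex Pass subscription before it will use hardware transcoding at all.)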


As far as a CPU with integrated graphics for transcoding, I'm also not sure how that interfaces with IPMI.


If any kind of GPU supported transcoding is a necessity for you then you will most likely lose any access to the features of IPMI.  So you'll have to see whether or not that is something you want.


The rest is pretty simple once you decide on those things.   CPU cores ... the more, the better.  Whatever you can get in your budget that fits the requirements.  This is where I'm stuck because I want a more recent CPU that has high benchmarks but can't settle on a board that suits it or my needs, and when I choose a board that I like I can't get a CPU that I'd want... I'm going in circles.


You'd probably want to give Plex its own two cores, so at a minimum you'd want a 4-core CPU.  And if any of those other things you want to run are at all intensive, you might want to dedicate a core or two to each of them as well.  Medusa / sabnzbd / sonarr and things like that can all go on the same cores.  SpaceinvaderOne has a good video about pinning cores and separating dockers; you can find it on Youtube.
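To make the core-pinning idea concrete: Unraid's per-container CPU pinning maps to Docker's `--cpuset-cpus` flag under the hood. A hedged sketch for a 4-core CPU (the core assignments are just one reasonable layout, not a recommendation from the video):

```shell
# Illustrative only: pin Plex to cores 2-3 and the download stack to
# core 1, leaving core 0 free for the host and other services.
# Unraid's GUI exposes this same setting per container.
docker run -d --name plex    --cpuset-cpus="2,3" plexinc/pms-docker
docker run -d --name sonarr  --cpuset-cpus="1"   linuxserver/sonarr
docker run -d --name sabnzbd --cpuset-cpus="1"   linuxserver/sabnzbd
```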


If you are not concerned with ECC RAM then you have a lot more options and a much easier time choosing your hardware, and it will generally be less expensive too.


As for drives... if you need them... I tried to find the most reliable drives as reported by various testing sites.  I've got HGSTs and a couple of WD Reds (from before WD was exposed for shipping inferior Red drives).  If you want 8 drives then you can clearly see how that would take a big part of your budget.  Currently it looks like some 4TB Seagates are around $100 each.  Not terrible, if you want Seagate, but that puts you at $800 for drives.


Mind you, you don't need a huge cache drive except for any VMs you want to run off of it; a 500GB SSD cache drive might be fine for you.  The parity drive has to be as large as the largest drive you will use, so even if you start smaller with some 1TB-2TB drives but think you might add a 4TB later, start off now with a 4TB parity drive.
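The parity sizing rule is easy to sanity-check in a few lines (a minimal sketch; the drive sizes are just example numbers):

```python
# Unraid parity rule of thumb: the parity drive must be at least as
# large as the largest data drive, and usable space is the sum of the
# data drives (parity is a dedicated drive, unlike striped RAID levels).
def min_parity_size_tb(data_drives_tb):
    # Parity must match or exceed the biggest data drive.
    return max(data_drives_tb)

def usable_capacity_tb(data_drives_tb):
    # Unlike RAID5, you don't "lose" a data drive; parity is separate.
    return sum(data_drives_tb)

drives = [1, 2, 4]  # starting small, but a 4TB drive is planned
print(min_parity_size_tb(drives))  # 4 -> buy the 4TB parity drive now
print(usable_capacity_tb(drives))  # 7
```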


Back to the motherboard for a second... most boards have at most 8 SATA ports, some only 6, so plan on a PCI-E SATA add-on card for more ports if and when you need them.  That means you need to have a nice PCI-E slot available.  If you get a board with NVMe slots you can use those also.


Now to your first point, the noise.  Well, for 24/7 operation you want those drives to have good air flow so that means FANS, no matter what.  A case that has fans in the front to suck air in and fans in the back to blow it out, and your CPU fan.   And possibly graphics fan.  It gets hard to keep it quiet.  You can get the quietest advertised fans there are and you'll still hear something, but probably won't be as bad as the server you bought off ebay.


Anyways, I hope this was some good specific guidance for you to digest.  Hope I didn't make it worse for you. :)



Hey, just an idea: why not upgrade the server you already bought and replace the components that are making too much noise?

I think that changing the fans and maybe the power supply would be much cheaper.


But if you want to use the project to learn stuff, better tune the system to your needs, etc., then sure, go for it.  :)


Thank you both for replying to me on this. I really appreciate it.


@Energen yes, I'll be bringing my own drives. I currently have four 2TB SAS drives (can I / should I use one of those as a cache drive?)

Also, please correct my understanding on this: To set up parity, I'll have to clear all my drives first, which means I'll need to buy a new 10TB drive (since that is the largest currently in my possession) and use it as the parity drive. Then for my current drives I'll have to shuffle the data around and add one drive at a time, preclear it, then move data onto it?

10TB at 82% full
10TB at 52% full

4TB at 84% full but can be wiped as it is currently a backup
3 (or 4 depending on the answer to the question above about using the SAS drive for caching) 2TB SAS drives with no data on them.
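A quick back-of-the-envelope check on the shuffle plan above (sizes in TB, fill levels as listed; this is just arithmetic, not a migration tool):

```python
# Data that must survive the migration: the two 10TB drives.
surviving = [(10, 0.82), (10, 0.52)]
data_to_keep_tb = sum(size * fill for size, fill in surviving)  # ~13.4

# Possible data array once a new 10TB drive becomes parity: both 10TB
# drives, the wiped 4TB, and three of the 2TB SAS drives.
array_tb = sum([10, 10, 4, 2, 2, 2])  # 30

print(f"to keep: {data_to_keep_tb:.1f} TB of {array_tb} TB capacity")
```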


Also, the 4 SAS drives are currently connected to an H710 RAID controller, but each is set up as an individual RAID0 volume - would this cause any issues with parity? Do I need to get rid of the RAID controller and replace it with an LSI SAS 9210-8i? I'm new to this part.


@ChatNoir I'm thinking that's the right way to go - replacing the parts making too much noise. I've been researching all day how to replace or mod the fan.

It's a Dell T620 chassis fan, which has a proprietary 4-pin connection, so soldering would be involved, and the fan dimensions are 92x38 mm. I've heard I could get a smaller Noctua fan and zip-tie it on. I've also heard that the Dell BIOS freaks out if you put any other fan in there and spins them at 100%. If you have any pointers or suggestions I'd love to hear them.



Edited by duffbeer

Yes, that's basically correct about the drives.  You need either an existing or a new 10TB drive to use as the parity drive if you plan on having those 10TB drives as part of your array.  And yes, the moving-data part is pretty much what you have to do.  It's a real PITA... I just had to do that in order to encrypt all my drives.


As far as the rest of your questions about the SAS drives, using one as cache, or your RAID0 configuration... I can't comment.  I've never used SAS, don't know anything about SAS, and just can't give you any advice there.  I'm not sure how any additional RAID setup affects Unraid's parity.  There may be something in the Unraid wiki about that.

