Big but efficient & quiet first build: recommendations?


Maddhin

Recommended Posts

Hi everybody,

 

I am pretty much a noob in terms of UnRaid and server builds. I am planning my first proper UnRaid server build now and wanted to use the power of this forum to get feedback on which motherboard/CPU and hardware in general to get.

 

So far, I am planning to use Norco's RPC-4220 case (http://www.norcotek.com/product/rpc-4220/), which has 20 HDD bays - plenty of space for years to come (the build will be populated with 4-5 data HDDs at first). I am also planning to use 2 SSDs as cache plus a parity drive.

 

My primary target is to build a file server which might or might not run other software, but it should really only serve as a file server / archive. So it should be powerful enough to run smoothly for years and have enough reserve to run some software (e.g. a Plex server plus all those HDDs!) IF I change my mind. I am thinking i3 perhaps. But for things like Plex and pretty much all other NAS applications I want to build a dedicated, fast server.

 

So this build should be:

very power efficient as it will be idling a lot - so motherboard / CPU / PSU should be able to use minimal energy when idling

be "future-proof" (I am planning to add e.g. 10 GB networking once the new powerful NAS server goes into operation)

quiet - I am planning to use at least Noctua fans, so the CPU cooler and PSU should be chosen accordingly.

 

I hope you get an idea of what I have in mind and am grateful for any comment! CHEERS!

Link to comment

I have two nearly-identical servers that are always on: one is my primary file server, the other is a backup that is rsync'ed from the primary at least daily.  I stressed energy efficiency in those builds, and was able to achieve power consumption of just under half a watt per terabyte of capacity: under 24 watts (at idle) for 48TB of double-parity-protected capacity.
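
The sync itself is nothing exotic - conceptually it's just a daily rsync from the primary to the backup, driven by cron. A minimal sketch of the idea (hostname, paths and flags here are placeholders, not my actual script):

```
#!/usr/bin/env python3
# Rough sketch of a daily mirror job, run from cron on the primary server.
# Hostname, paths and flags are placeholders -- adjust to your own shares.
import subprocess

SOURCE = "/mnt/user/"                # user shares on the primary
DEST = "root@backup:/mnt/user/"      # matching location on the backup, over SSH

# -a: archive mode (recurse, keep permissions/times)
# --delete: drop files on the backup that were removed on the primary
subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=True)
```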

 

Here are the relevant build details:

 - ASRock Rack C236 WSI

 - Intel i3 6100T or Pentium G4400T

 - pair of Micron 16GiB DDR4-2133 ECC UDIMMs

 - 8x Seagate ST8000AS0002

 - SeaSonic SS-400FL2

 - pair of Xigmatek 80mm fans

 

This all fits in a 2U rackmount chassis under 16" in depth, and is very quiet.

 

The SATA/SAS backplanes in your chassis will likely add overhead beyond what I see, as I connected the six data drives directly rather than in drive bays.  The two parity drives are in a 3-in-2 trayless hot-swap drive cage.

 

Keep in mind that I only use these servers for file serving, so if you have needs beyond that, the processor choice may need a rethink.

Edited by bobkart
Link to comment

@bobkart: many thanks for your input, this is helping me!

 

The backup server is something I am also seriously considering and will surely do. I probably won't do it now and mirror the build 1:1, but it's on the list :)

 

Your 2U solution is pretty neat. Which case do you use? I saw the Norco 4220 case and think it is an ideal case to have plenty of space to expand.

 

For me the most difficult part is finding the right MB. The C236 looks good, but it seems it would almost get lost in the huge case LOL. My idea was to take a MB and CPU that are not complete overkill for a file server but still have the potential to do more if I want to run a couple of applications on it or need to add some RAM/cards (most likely 10 GbE as well as a SATA controller for all the HDDs).

I do not know whether bigger MBs necessarily draw more power.

 

For the CPU, I think a low-power i3 is a rational choice in terms of economics and performance. My main concern is a situation where 20 HDDs are working in the case. Is there a significant change in performance requirements when a couple more HDDs are working? Money aside, would it make sense to have an i5 in such a system? The i5-8600T seems fancy, but I do not know how it and e.g. an i3-6100T compare on idle power consumption.

 

I will have to study your hardware more, but at least on the HDDs we are on the same page: I also looked at the Seagate archive drives and hope they are not too loud in operation (and don't break if switched on/off regularly!).

 

The PSU is another of my headaches in terms of sizing (for 20 HDDs) and noise level (I'd prefer as silent as economically possible ;) )

 

Cheers!

Link to comment

For chassis I used the iStarUSA D-214-MATX.  I didn't mention that earlier for a couple of reasons:

 

  - you're targeting more drives than that chassis can accommodate

  - I had to use parts from another chassis to get the archive drives to work (due to the lack of middle mounting holes)

 

Here is the UCD thread I made for this build:  https://forums.unraid.net/topic/45322-48tb-server-under-35-watts-idle-now-under-24-watts/

 

I had a different motherboard at first but it let me down . . . the one I'm using now has been rock solid.

 

Regarding motherboard physical size, there are mATX and ATX versions of that mini-ITX board (the C236M WS and C236 WS).  They will probably use marginally more power than the smallest version.  I had tighter constraints than you do: it had to be mini-ITX to fit that case with all six internal drives installed.  Other constraints included eight SATA ports and support for ECC memory.

 

I have similar unRAID setups with all thirty drives involved, which includes dual parity.  So from my experience there isn't a problem with processor performance as far as parity checks/etc. are concerned.  Regarding i5 versus i3 and idle power consumption, I suspect they're very close.  Most positions I read on that question indicate that these more modern processors have no problem throttling down to low levels of power consumption when not being used much.  Regarding 'T' versus non-'T' versions, that same argument would seem to apply, but maybe not as strongly.  I chose the 'T' versions because it allowed me to use a passive cooler, saving another fraction of a watt.

 

I suspect any sixth-generation-or-newer i3/i5 processor (LGA1151) and corresponding 'economy' motherboard will yield acceptable idle power consumption.  I've built Windows machines using such configurations that idle at ~8 watts, and that was a few years ago, so I suspect it's only better with seventh/eighth-generation processors (e.g. with a B250 chipset if ECC isn't a requirement).

Link to comment

@bobkart Thanks a bunch for this input, this is helping me a lot.

 

As I understand it, a T-version of a recent i3/i5 is the best choice, and I think a passive cooler is a must for this setup.

 

I would also love to see the MB switch off (some of) the fans if temps allow, as my case will have 5 fans (3x 120mm and 2x 80mm - probably Noctua), which should not need to run while the system idles. I will also only use a handful of HDDs in the beginning, so one fan in the front and one in the back should be fine. As far as I understand the case, the cache SSDs are installed on top of the hot-swap trays and therefore not in the direct cooling flow anyway... Although with passive CPU cooling, a more or less constant airflow seems advisable.

 

The ASRock Rack C236 MB looks pretty good, I just need to do more research and understand which key features are important for me. Low power consumption (and noise level, to a degree) are most important, as the box is supposed to serve 24/7 and should be idle most of the time - to the point where I am wondering whether it would make sense to use WOL instead of 24/7 operation.
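
(From what I've read, waking the box up would just mean sending a WOL "magic packet" from another machine on the LAN - something like the little sketch below, with a made-up MAC address - but I'd still have to test whether the board/NIC plays along.)

```
# Minimal WOL "magic packet" sender -- the MAC address below is made up.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send 6x 0xFF followed by the target MAC repeated 16 times, via UDP broadcast."""
    payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("00:11:22:33:44:55")  # placeholder: the server NIC's MAC address
```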

 

But when I use the server, I want fast access. Maybe not in the beginning (as I need to upgrade my network switch first), but later I want to run a 10 Gb/s network. That's also the key reason for the cache SSDs. As I understand it, you are using ECC RAM for caching. For 10 GbE this seems to make even more sense than for 1 GbE, as RAM should be a lot faster than SSDs. I will have to study this further and see how to set it up, etc. But it seems very interesting, especially in terms of cost and power consumption! I don't know whether there is something else to consider in terms of 10 GbE when buying a MB.

 

 

 

Link to comment

One key thing to note, which I just remembered, is that the i5s don't support ECC memory - only Celeron, Pentium, i3 and Xeon do.  Odd, I know.  You could still support full 10Gb/s write speeds without ECC memory if you used NVMe cache drives (SATA won't cut it, as I'm sure you know).  Obviously the price goes up, though.

 

On cooling:

  - you could probably get away with just the three 12cm fans (in the middle of the chassis), which will be quieter than those plus the two 8cm fans (at the rear I believe those would be)

  - not sure about that motherboard, but most support throttling the fans based on temperatures; I've yet to mess with that myself, so I can't help much (there's a rough sketch of the general idea right after this list)

  - a good active CPU cooler will be plenty quiet (Noctua is what I use), but besides winning slightly on power consumption, a passive CPU cooler wins on having one less point of failure (no cooler fan to fail and fry your processor)

  - I doubt SSD temperatures will be an issue . . . I frequently run mine a good 10C hotter (up to 50C) than I do my mechanical drives, with no apparent adverse consequences
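
On the fan-throttling point: I haven't set it up myself, but the general idea on Linux boils down to reading a temperature and writing a PWM duty cycle to the fan header's hwmon file in a loop. A very rough sketch, assuming placeholder paths (the hwmon paths vary per board, and manual control usually also requires setting pwm1_enable to 1):

```
# Very rough idea of temperature-based fan throttling -- not a tested setup.
# The hwmon paths differ per motherboard; look under /sys/class/hwmon for yours.
import time

CPU_TEMP = "/sys/class/thermal/thermal_zone0/temp"   # millidegrees Celsius
FAN_PWM = "/sys/class/hwmon/hwmon2/pwm1"             # placeholder path, value 0-255

def read_temp_c() -> float:
    with open(CPU_TEMP) as f:
        return int(f.read()) / 1000.0

def set_pwm(value: int) -> None:
    with open(FAN_PWM, "w") as f:
        f.write(str(max(0, min(255, value))))

while True:
    t = read_temp_c()
    if t < 35:
        set_pwm(0)      # cool and idle: fans off
    elif t < 50:
        set_pwm(100)    # warm: low speed
    else:
        set_pwm(255)    # hot: full speed
    time.sleep(10)
```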

 

I've never used WOL so can't help there.  Also no 10GbE experience, but I suspect any motherboard made in the last few years can easily support such an adapter card.

 

On power consumption, just taking a wild guess, but I suspect those backplanes will add a handful of watts compared to direct connections.  Possibly you could only power them up row-by-row as you add drives though.  Not sure as I've never taken a close look at one of those chassis.

 

It seems possible that a setup close to mine, aside from the obvious differences (chassis/backplanes, more/larger fans, and perhaps a larger motherboard and more memory), could come in at 30-35 watts of idle power consumption (with the same eight 8TB archive drives installed).

 

Link to comment

@bobkart as usual thanks a bunch for your highly useful comments/tips! Highly appreciated!

 

NVMe cache drives will probably be overkill for my application, but I actually didn't have this solution on my radar. For my use case as a file server, backup location, archive, etc., the smartest and most economical solution would probably be to connect it with 2-4 x 1 GbE - that should give reasonable access speed, up to what the SSDs can deliver.

 

I actually need to think more carefully about what I need now. The ECC option is new to me, but I quite like it. NVMe seems the best option, but I need to see whether I can afford that, as I'd also have to mirror those drives. It seems like overkill, but with 10 GbE becoming more popular and affordable, in 2-3 years I might wish I'd gone for the NVMe option haha

 

One (newbie) question re motherboards: you recommend the ASRock workstation MBs and not the server ones. Why? :)

 

 

 

Link to comment

I went another route: I was on a 4U 24-bay rack server and just moved to a cube.  The benefit of the chassis I chose is that it separates the cooling air for the drives and components... letting me use crazy quiet fans and really tune them to the area they cool.  I was able to get 15 hot-swap bays and 4 more fixed ones for cache drives, VM OS drives, etc.  I'm not saying this is the right answer, but a rackmount is not /always/ the right answer... sometimes a high-quality tower/cube can give a lot more and be very cool looking.

 

 

IMG_0288.png

IMG_0289.png

Link to comment
12 hours ago, Maddhin said:

@bobkart as usual thanks a bunch for your highly useful comments/tips! Highly appreciated!

 

NVMe cache drives will probably be overkill for my application, but I actually didn't have this solution on my radar. For my use case as a file server, backup location, archive, etc., the smartest and most economical solution would probably be to connect it with 2-4 x 1 GbE - that should give reasonable access speed, up to what the SSDs can deliver.

 

I actually need to think more carefully about what I need now. The ECC option is new to me, but I quite like it. NVMe seems the best option, but I need to see whether I can afford that, as I'd also have to mirror those drives. It seems like overkill, but with 10 GbE becoming more popular and affordable, in 2-3 years I might wish I'd gone for the NVMe option haha

 

One (newbie) question re motherboards: you recommend the ASRock workstation MBs and not the server ones. Why? :)

 

 

 

The NVMe approach wins on cost per capacity and allows much larger capacity (i.e. larger single writes before slowdown).  The ECC approach wins on possibly lower power consumption (I see 16GiB DIMMs using well under one watt), certainly less complexity (and thus less motherboard support needed), and it's much more invisible (i.e. no mover step involved).

 

I started with 64GiB of ECC memory and was forced down to 32GiB when I moved to the mini-ITX motherboard we've discussed.  I find that to still be sufficient for the writes I tend to do (easily handles 24-30GB at a time with no slowdown).
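
For anyone wondering where the knobs for this live: I won't claim these are my exact settings, but on Linux the amount of a write that gets absorbed in RAM before the writer is throttled down to disk speed is governed by the vm.dirty_* sysctls. A minimal sketch, assuming a 32GiB box and example thresholds:

```
# Illustration only (not my exact values): the size of a burst write that Linux
# will buffer in RAM before throttling the writer is set by the vm.dirty_* sysctls.
def set_vm(name: str, value: int) -> None:
    with open(f"/proc/sys/vm/{name}", "w") as f:   # needs root
        f.write(str(value))

set_vm("dirty_background_bytes", 8 * 1024**3)   # start background writeback at ~8GB dirty
set_vm("dirty_bytes", 24 * 1024**3)             # throttle writers beyond ~24GB dirty
```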

 

Regarding server boards, I'm a big fan of those, and have a handful of other file servers (some unRAID, some FreeNAS) that all use SuperMicro server boards (although they could arguably be considered workstation boards, as they're LGA1150/LGA1151 as opposed to LGA2011).  But these servers aren't on all the time, and my impression is that as you go from desktop to workstation to server board, as capabilities increase, so does power consumption (even at idle).  So that's the main motivation for going with a small workstation board in my always-on applications.  (Desktop would likely use less power but I'd lose ECC.)

Link to comment
9 hours ago, Tybio said:

I went another route: I was on a 4U 24-bay rack server and just moved to a cube.  The benefit of the chassis I chose is that it separates the cooling air for the drives and components... letting me use crazy quiet fans and really tune them to the area they cool.  I was able to get 15 hot-swap bays and 4 more fixed ones for cache drives, VM OS drives, etc.  I'm not saying this is the right answer, but a rackmount is not /always/ the right answer... sometimes a high-quality tower/cube can give a lot more and be very cool looking.

That's a great-looking server, thanks for sharing those pictures.  Those trayless hotswap bays are nice . . . they're the same ones I have in my two primary servers (the three-drive version).

 

For me rackmount makes more sense as my whole entertainment center is comprised of side-by-side racks (four Middle Atlantic RK12s).  Then my server closet has a Samson SRK21 to hold things like the primary server pair and a UPS.  Once things start stacking up like that racks can give you higher density.

 

Care to share your idle power consumption and total protected capacity numbers?

Edited by bobkart
typo
Link to comment

All the disks are spun up ATM, I think Radarr just rescanned... I really wish they would let us control that :(.

 

The UPS is reporting ~100 watts with 8 disks spun up and one Plex transcode going.  With 15 disks in the trays and me running 10TB Seagates right now, that means a potential of 150TB.  My parity is a 12TB drive, so I can push that further by expanding with 12TB drives, but I don't really see the need as I'm only 50% populated ATM.

 

Currently the total protected capacity is 74TB on 8 data disks (a mix of 5x 10TB and 3x 8TB).

Edited by Tybio
Link to comment
2 hours ago, Tybio said:

All the disks are spun up ATM, I think Radarr just rescanned... I really wish they would let us control that :(.

 

The UPS is reporting ~100 watts with 8 disks spun up and one Plex transcode going.  With 15 disks in the trays and me running 10TB Seagates right now, that means a potential of 150TB.  My parity is a 12TB drive, so I can push that further by expanding with 12TB drives, but I don't really see the need as I'm only 50% populated ATM.

 

Currently the total protected capacity is 74TB on 8 data disks (a mix of 5x 10TB and 3x 8TB).

Sounds like you're very close to half a watt of idle power consumption per terabyte of protected capacity, and it will only get better for you (that ratio) as you add or upgrade drives.  I'm going to guess at 35-40 watts for your idle power consumption.  Wondering now if you use single or double parity, and how your drives are connected (HBA or straight to motherboard).

 

How are you on handling full-network-speed writes?  I.e. the ECC-versus-cache-pool question raised here.

 

Link to comment
4 minutes ago, bobkart said:

Sounds like you're very close to half a watt of idle power consumption per terabyte of protected capacity, and it will only get better for you (that ratio) as you add or upgrade drives.  I'm going to guess at 35-40 watts for your idle power consumption.  Wondering now if you use single or double parity, and how your drives are connected (HBA or straight to motherboard).

 

How are you on handling full-network-speed writes?  I.e. the ECC-versus-cache-pool question raised here.

 

I haven't really been focused on power; right now I'm using an old Xeon/MB combo that only has SATA2, so I've had to use 2x 8-port HBAs.  I'm planning on upgrading soon to the E-2176G to get access to the iGPU (my MB is so old it has an onboard GPU that "blocks" the one in my Xeon).  When I do that I'll run 8 SATA ports off the MB and the rest off a single HBA to trim power/heat a bit more.

 

I generally don't worry about optimizing for network writes.  I have a 2x 1GbE LAG to the server, but mostly that's to cover my 1Gig Usenet downloads and allow full-rate streaming to my clients on the LAN.  As I do all the downloading to the server and process the files there, I don't have much reason to push things.  That said, I will be going with an M.2 SSD in the new build, mostly to avoid a cable run and clear out the 2.5" drives so everything fits better.  That should give me insane performance when transferring files without having to do anything fancy.

 

This is the board I'm going to get: http://www.supermicro.com/products/motherboard/X11/X11SCA-F.cfm

 

8 SATA ports, IPMI, and basically everything I could need for another long-term workhorse system.  If I can get half the lifespan out of the new one that I've gotten out of my current one, I'll be happy :).

Link to comment

I get that priorities are different from person to person.  Mostly I ask because the OP has these concerns (energy efficiency and fast access).  Sounds like you have a SATA SSD cache pool now, which of course can easily keep up with a couple Gb/s of writes.

 

That's a good-looking Supermicro board for sure . . . 2x m.2 slots are perfect for a high-performance cache pool (10Gb/s).  And you'll be able to ditch the HBA until you add a ninth drive.  Note that most people recommend a pair of drives in the cache pool as opposed to just one, so a drive failure won't necessarily result in data loss.

 

EDIT:  Recalling now that you're already past eight drives in your array (forgot to count the parity drive).

Edited by bobkart
addition/correction
Link to comment

Sorry for the slow response, I didn't get "to play" for a few days ;)

 

The Supermicro board looks very nice indeed - the 2x M.2 slots and ECC support seem ideal for keeping all options open. This is a serious candidate for my build!

 

What I actually intend is to have this build run as a 24/7 file server where I just pop in one or more new disks when capacity becomes an issue. So it is a place where all data can be dumped and accessed if needed. This thing should therefore be energy efficient and (as) silent (as possible) - for me this means the HDDs should stay off (spun down) as often as possible.

 

In terms of speed, I personally think it would be "funny" to build such a server if it cannot support at the very least one 1 GbE connection at full speed. ECC is a great option, but I am currently considering using larger cache drives, which would allow often-accessed data (e.g. user folders) to stay in the cache and keep the HDDs off longer. So the cache drives could serve two purposes here.

 

For all other things such as Radarr, Sonarr, Plex etc. I intend to run a beefy, smaller-form-factor (potentially FreeNAS) server which uses SSDs in RAID for fast access and silent/quiet operation. As those 3 programs in particular always seem to be doing something, chances are slim that this server and its SSDs get to standby often. That build will be a headache in terms of energy efficiency, because I want a fast CPU for all the fun (and serious!) stuff that I come up with to run on a server haha. But having a - say - i7 and maybe even a graphics card (Plex transcoding) running 24/7 just to run home automation, watch downloads and the occasional movie/show seems decadent...

 

RE cases: I do think the Norco case is basically too big (I will have to make sure my wife is in a good mood when the delivery arrives! LOL). But the big fans are great, and once it runs, it should be able to run for years without any capacity worries. If anybody knows of similar but slightly smaller cases (preferably rackmountable), I am happy to take a look :) It just needs to be able to run quietly (with some pimping).

 

Cheers y'all for your input, this is helping me quite a lot!

Link to comment

A few more thoughts:

 

Those Xeon E processors are likely difficult to find at the moment, but an i3-8100T would fit well.  You don't necessarily need 2x M.2 slots on the motherboard to host a pair of NVMe drives: a simple adapter allows adding an M.2 drive via a PCIe x4 slot.  This of course assumes enough of those are left over from other uses.  I'd lean slightly towards uATX over ATX in the interest of reducing idle power consumption.

 

One thing I recently realized regarding the ECC-RAM-vs-SSD-cache-pool question:  in the ECC RAM solution there is still a ~5-second delay before a write transfer starts while the target and parity drives are spun up (if they weren't already).  I don't believe this happens for the SSD cache pool situation.  So that's another mark in favor of the SSD approach.  My motivator was power consumption and ECC RAM wins over a pair of SSDs in that regard.

 

Regarding cases, the Norco RPC-3216 is 1U less in rack height than the 20-drive version you referred to.  It will be hard to find anything with much less depth (~24") that still has hot-swap bays.  There are some shallower models that have, say, eight hot-swap drive bays with the motherboard fitting under the drive bays, but I've not run into anything like that for more than twelve drives.  A final option would be a 2U, twelve-bay model like the RPC-2212.  One approach I've used (and still use for my 32-drive, 84TB monster) is external drive enclosures, but that approach definitely loses on power efficiency, as the separate enclosure typically has its own power supply and thus contributes more overhead to the total.

Edited by bobkart
Link to comment
