First Time unRaid Build - My Hardware Thoughts


Burizado


Hello.  I am new to unRaid and NAS builds, but I have been reading a lot over the past few weeks.  I am looking to build my first DIY NAS and would welcome comments on the hardware I am planning to purchase.

 

I do not have any existing hardware outside of an old external hard drive I store files on.  My budget is $3000 USD or less (I am located in the US).  This build will be primarily a Plex and file server (photos, docs, etc.).  I will probably have some dockers for miscellaneous automation tasks and occasional VMs for gaming (not the main purpose, and not trying to run Crysis or anything).

 

Case:  Fractal Design Define XL R2 - LINK

MB:  AsRock Rack E3C246D4U Micro ATX - LINK

       ASRock Z390 Taichi Ultimate LGA 1151 (300 Series) Intel Z390 SATA 6Gb/s ATX - LINK

CPU:  Intel Xeon E-2278G - LINK

         Intel Core i9-9900K - LINK

CPU Cooler:  Noctua NH-L9i - LINK

                   Noctua NH-U12A - LINK

Memory:  NEMIX RAM 64GB 2x32GB DDR4-2666 ECC Unbuffered - LINK

              G.SKILL Ripjaws V Series 64GB (2 x 32GB) 288-Pin DDR4 SDRAM DDR4 2666 (PC4 21300) - LINK

PSU:  Corsair SF750 Platinum - LINK

         CORSAIR RMx Series RM1000X 1000W 80 PLUS GOLD - LINK

unRaid OS:  SanDisk 32GB USB 2.0 Low-Profile Flash Drive - LINK

unRaid Cache: 2x Samsung 1TB 860 EVO SSD - LINK

                      1 x SAMSUNG 970 EVO PLUS M.2 2280 1TB PCIe Gen 3.0 x4, NVMe SSD - LINK

unRaid Parity:  2x WD Red Pro 10TB 7200 256MB Cache SATA 6.0Gb/s - LINK

                       1 x WD Red Pro 10TB 7200 256MB Cache SATA 6.0Gb/s - LINK

Data:  5x WD Red Pro 4TB 7200 256MB Cache SATA 6.0Gb/s - LINK

          3 x WD Red Pro 10TB 7200 256MB Cache SATA 6.0Gb/s - LINK

Plex Metadata:  Samsung 1TB 970 NVMe M.2 Evo - LINK

                        1 x SAMSUNG 860 EVO 500GB SATA III SSD - LINK

Additional:  SUPERMICRO CSE-M35T-1B 3 x 5.25" to 5 x 3.5" Hot-swap SATA HDD Trays - LINK

 

The total is around $2800 with the above parts plus miscellaneous extras (cables, fans, etc.).

The updated total is around $3200, so a little over budget, but I am getting another 10TB of data storage (originally 5x4TB, now 3x10TB).

 

I realize the case is huge for a mATX board, but I wanted a larger enclosure in case I change boards or expand to more drives in the future.  I would have preferred Samsung memory, but I am not finding it in stock anywhere.  I welcome thoughts and questions on the above layout.

 

One initial question I have is about the Plex cache and metadata on the NVMe.  Should I run Plex transcoding from memory and leave the NVMe drive just for metadata and possibly VMs?  The motherboard only has 1 M.2 slot, so I didn't want to use it for the unRaid cache since I wanted 2 drives for that function.
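
 

For reference, here is my rough understanding of how the RAM transcode would be set up (just a sketch based on what I've read; the paths are the commonly suggested ones, not something I have tested yet):

   # In the Plex docker template, add a path mapping:
   #   Container Path: /transcode
   #   Host Path:      /tmp            <- unRaid's /tmp lives in RAM
   # Then in Plex > Settings > Transcoder, set "Transcoder temporary directory" to /transcode.
   # Metadata/appdata would stay on the NVMe/cache as usual, e.g.:
   #   Container Path: /config
   #   Host Path:      /mnt/user/appdata/plex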

 

EDIT:  Updated components based on discussions below.

Edited by Burizado (update information)
Link to comment

After writing my post last night, I forgot to mention my GPU needs.  I am assuming I can use the iGPU of the 2278G to handle Plex transcoding for now; then, if I wanted to do VM gaming, I would purchase a dedicated GPU (thinking the GTX 1660) to use for either Plex or the VM.  My first priority would be the best GPU for Plex, with the VM as secondary.  I am not planning on having more than 2 streams transcoding.  If I anticipate more than 2 in the future, I would splurge for the P2000 GPU for Plex and use the iGPU for the VM.  Am I totally off base?
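
 

For what it's worth, this is my rough understanding of how the iGPU gets hooked up to Plex on unRaid (a sketch pieced together from guides, not something I have tested; hardware transcoding also needs Plex Pass):

   # Load the Intel graphics driver at boot by adding this line to /boot/config/go:
   modprobe i915
   # Pass the render device through to the Plex container (Extra Parameters in the docker template):
   --device=/dev/dri
   # Then enable "Use hardware acceleration when available" under Plex > Settings > Transcoder.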

Link to comment

A couple of considerations.

 

I'd look for an ATX board with similar specifications; you are likely to get more flexible expansion slots, which may well be useful later.

 

Any reason for 5 x 4TB drives? As you have 10TB parity, I'd just use a couple more 10TB drives and keep the SATA connectors free. There's no benefit I can think of to adding lots of small drives.

 

With 2 parity, 2 cache and 5 storage drives you have 9 SATA connections in use, but the board only has 8 SATA ports. If you use all 10TB drives as above (8 ports minus 2 cache and 2 parity leaves 4 data drives), you could go to 40TB without needing an expansion card.

 

Personally, Plex runs on my pair of standard 2.5" SATA SSDs. I haven't noticed any issues with this; you can always move the folder to an NVMe drive later if you do see a problem. Modern SSDs and controllers are generally good. Just avoid budget ones, which may have lower IOPS.

 

You'll probably find that CPU transcoding is fine; you have the iGPU and then the CPU cores to go through before you need a dedicated GPU. Most dedicated GPUs below the Quadro P2000 are limited to 2 transcodes at once unless you patch the drivers, and even a basic GTX 1050 2GB can handle that. I find almost everything plays direct to my Roku sticks, so unless you are streaming to friends and family over the WAN you may not transcode that much.

 

If you are working with 4K, the recommendation is to keep two copies of each file, one 4K and one 1080p, so you don't have to transcode from 4K.

 

If you have the right mainboard (enough slot flexibility), you can install multiple GPUs and have one for Plex and one for games.

A Quadro P2000 or GTX 1660 would be fine in an x4 slot, though the back of the slot has to be open-ended (notched out) so the card can plug in with the remaining fingers overhanging.

 

If you want to game on the system, I assume you are connecting a monitor directly. Games aren't natively playable over streaming, though some hardware can encode the game into a video stream if you have the right hardware on the receiving end to display it.

 

Link to comment

Thanks for the reply @Decto!  Just what I was looking for, the details I am probably leaving out. :)

 

Yeah, I noticed the 9 drives versus 8 SATA ports on that MB today as I was reviewing my hardware.  I started looking up HBA cards to get additional connections, but it is probably better to go with higher capacity drives (i.e. 10TB) as you suggested and then look at an HBA card if I expand to more drives in the future.  The only reason I was looking at 5 x 4TB drives is that they were less costly and would fill the 5-bay cage.  By going with fewer, larger capacity drives, I could just use the HD bays in the case instead of the hot swap cage.

 

Thanks for the info on the GPU.  Yeah, I think I will look for an ATX MB so I have the additional slot capability if I want to add a GPU card or two in the future, in case I find I want to use this as a VM gaming system as well.  For now it will be just a Plex and file server.

 

Thanks for the note on the hardware for gaming.  I had not thought of that.  Another thought I had was running cables (HDMI and USB) from the basement to the office.  I realize I would need to take into account the 3 to 5 meter limitation for USB (LINK for reference).

 

 

Link to comment
On 2/23/2020 at 5:19 AM, Burizado said:

...

unRaid Cache: 2x Samsung 1TB 860 EVO SSD - LINK

unRaid Parity:  2x WD Red Pro 10TB 7200 256MB Cache SATA 6.0Gb/s - LINK

Data:  5x WD Red Pro 4TB 7200 256MB Cache SATA 6.0Gb/s - LINK

Plex Cache or Metadata:  Samsung 1TB 970 NVMe M.2 Evo - LINK

...

One initial question I have is about the Plex cache and metadata on the NVMe.  Should I run Plex transcoding from memory and leave the NVMe drive just for metadata and possibly VMs?  The motherboard only has 1 M.2 slot, so I didn't want to use it for the unRaid cache since I wanted 2 drives for that function.

You don't need dual parity with only 5 data drives.

You're better off getting 1x10TB parity + 3x10TB data. That way you have fewer points of failure than your current plan.

Of course, if you have terrible luck then you may wish to have dual parity, but if your luck is that bad, dual parity won't be enough either.

 

What will you be holding in the cache pool? Why do you think you need 2x1TB (i.e. running it in RAID 1)? In other words, what important data are you planning to store on the cache pool that requires mirror protection?

You probably just need to have the 970 in the cache pool and be done with it.

If you have a lot of write-heavy activity, then you just need one 860 mounted as an Unassigned Device (UD) and can use that exclusively for write-heavy stuff (i.e. keeping maximum free space available by minimising static data).

 

PS: there's no need to use the cache pool as a write cache, if that's what you wanted it for. That usage is archaic with modern high-capacity HDDs and reconstruct write (aka Turbo Write).
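
 

If you do want to experiment with it, Turbo Write is just a toggle under Settings > Disk Settings. From the console it is something like the below (a sketch from memory; double-check the exact tunable values against the wiki before relying on it):

   mdcmd set md_write_method 1    # 1 = reconstruct write (Turbo Write)
   mdcmd set md_write_method 0    # 0 = back to the default read/modify/write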

Link to comment
40 minutes ago, Burizado said:

Thanks for all the great information!  I have updated the build information in my original post to take into account the suggestions from @Decto and @testdasi discussed above.  The budget is only $200 over, but I am getting another 10TB of data storage.  If I feel like the price is too much I can always drop down to 2x10TB drives to stay under my $3000 budget.

 

The other thing to consider is using shucked drives, i.e. external drives removed from their enclosures.

 

The WD 8TB-and-above MyBook / Elements drives are usually white label HE10 drives derated to 5400rpm (due to power limits).

They are around half the price of the WD Red Pros, especially if you catch them on an Amazon sale.

 

You may impact the warranty, but then you have twice as many drives to fall back on, and modern drives don't fail all that often.

I'm on 8TB for new drives and have a new precleared drive spun down as a hot spare.

 

All but one of my 9 storage/parity drives are shucked, plus 3 more in a now-retired WHS2011 server and a few other drives I have kicking about.

I've only had one shucked drive fail so far, and that was a 500GB drive 10+ years ago which became mechanically very noisy within 3 months and was pulled.

 
Link to comment

I think you should be careful about using two 32GB memory modules.

 

Recently I bought four 32GB sticks (Crucial), but they don't work in the ASRock Z390 Taichi or the Gigabyte Z390 UD. I made a replacement request and it is still being followed up. Neither mainboard has a problem with four 16GB modules.

 

For the mainboard, I recommend the Asus WS C246 PRO (it has 4 PCIe slots) or the Gigabyte Z390 UD.

Edited by Benson
Link to comment
3 hours ago, Burizado said:

Thanks for all the great information!  I have updated the build information in my original post to take into account the suggestions from @Decto and @testdasi discussed above.  The budget is only $200 over, but I am getting another 10TB of data storage.  If I feel like the price is too much I can always drop down to 2x10TB drives to stay under my $3000 budget.

FYI, I just purchased the original motherboard (ASRock Rack E3C246D4U), an E-2288G CPU, and 2x32GB ECC Samsung RAM (NEMIX is supposedly "equivalent") that you mentioned in the original build list.  So far, it's all looking good.

 

Provantage currently has 75 E-2278G Xeons in stock at a very reasonable price.

 

When it comes to "more PCIe slots providing more flexibility," you have to keep in mind how many PCIe lanes your CPU and chipset can actually use.  Just because a board has a certain number of PCIe slots does not mean they can all be used.  NVMe SSDs or other M.2 cards will often disable x4 PCIe slots on the MB, and there are other cases where a certain combination of PCIe cards (especially multiple graphics cards or graphics cards + HBAs) will exceed the number of PCIe lanes your system can support.  An iGPU means I don't have to use a slot for a graphics card, so the three PCIe slots in the E3C246D4U are plenty for me.

 

Just make sure your system can actually be used in the way you envision before purchasing hardware, so you aren't disappointed.

Edited by Hoopster
Link to comment

Thanks for the reply @Hoopster!  Yeah, my initial build specs were modeled after some of the posts you put out about that MB and CPU.  It seems like a really good "base" combo to start with.  The MB has the capability to add a dedicated GPU in the PCIe x16 slot, and it still has a PCIe x8 slot if I need an HBA card (the NVMe disables the PCIe x4 slot).  I am following your post about the BIOS update on the ASRock Rack board as well, to see if the IPMI and iGPU play well together.

 

With the two builds being only about $30 apart, it really comes down to how I want to utilize the expansion slots and which form factor I want to go with.  Now I am leaning back towards the Xeon setup.  Decisions, decisions... but that is the fun in spec'ing out a new build. :)

Link to comment
17 hours ago, Benson said:

I think you should be careful about using two 32GB memory modules.

 

Recently I bought four 32GB sticks (Crucial), but they don't work in the ASRock Z390 Taichi or the Gigabyte Z390 UD. I made a replacement request and it is still being followed up. Neither mainboard has a problem with four 16GB modules.

 

For the mainboard, I recommend the Asus WS C246 PRO (it has 4 PCIe slots) or the Gigabyte Z390 UD.

This is good to note.  Thanks for the heads up @Benson.

Link to comment
26 minutes ago, Burizado said:

The MB has the capability to add a dedicated GPU in the PCIe x16 slot, and it still has a PCIe x8 slot if I need an HBA card (the NVMe disables the PCIe x4 slot).

Actually, if I am reading things correctly, using the M.2 slot for a SATA 3 or PCIe NVMe SSD (I have one in the slot) disables the SATA 0 connector rather than the PCIe x4 slot on the E3C246D4U.

 

On some chipset implementations a PCIe NVMe SSD does disable the x4 slot, but on this board it appears the SATA 0 connector is the victim, which I much prefer.

 

Quote

The M.2_SSD (NGFF) Socket 3 can accommodate either a M.2 SATA3 6.0 Gb/s module or a M.2 PCI Express module up to Gen 3 x4 (32Gb/s)(E3C246D4U) / Gen 3 x2 (10Gb/s)(E3C242D4U). **The M.2 slot (M2_1) is shared with the SATA_0 connector. When M2_1 is populated with a M.2 SATA3/ PCIE3.0(x4 or x2) module, SATA_0 is disabled.

 

Link to comment

Ah, OK.  Yeah, when I saw this I assumed it was referring to the NVMe rather than the PCIe x4 slot.  I knew it was shared with SATA 0, which I agree is better.

Quote

8 x SATA3 6Gb/s (SATA0-7, SATA0 supports SATA DOM, and is shared with M.2 (PCIE3.0 (x4) ) / SATA3)

 

 

I did see this as well:

Quote

Slot 6: Gen3 x16 link, auto switch to x8 link if Slot 4 is occupied

So it sounds like if Slot 6 were used for a dedicated GPU, it would be better to put an HBA card into Slot 5 so it would not limit the GPU card.  It is also good that the board can take an NVMe drive AND a card in Slot 5 at the same time.

Edited by Burizado
Link to comment
37 minutes ago, Burizado said:

it would be better to put an HBA card into Slot 5 so it would not limit the GPU card

That depends on the bandwidth your HBA needs for the number of connected drives.  I have a Dell H310 (PCIe 2.0) which supports up to 8 drives.  It is an x8 card, so I need to put it in Slot 4.  This will limit Slot 6 to x8 as well, which is fine for a graphics card, if I ever decide to put one there.  There are very few unRAID uses for which an x16 graphics card would ever be needed, unless it is passed through to a very high-end gaming VM.  In the vast majority of cases, x8 is more than enough.

 

I can envision having an x8 10G network card in that slot more than a graphics card, but you never know.  And actually, with PCIe 3.0 bandwidth, x4 may do it for 10G.

 

So, with the E3C246D4U it looks like you can effectively have two x8 cards, one x4 card, and an x4 NVMe SSD installed on the board.  In this configuration, 7 of the 8 SATA ports are available.

Edited by Hoopster
Link to comment
2 hours ago, Hoopster said:

I can envision having an x8 10G network card in that slot more than a graphics card, but you never know.  And actually, with PCIe 3.0 bandwidth, x4 may do it for 10G.

 

PCIe 2.0 x4 has 16Gbps of effective bandwidth, so it is enough for either 10Gbit LAN or an LSI 8-channel HBA with 8 x HDD at up to 250MB/s peak transfer each.
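
 

Rough numbers behind that, ignoring any protocol overhead beyond the line encoding:

   PCIe 2.0   ~ 500 MB/s per lane (after 8b/10b encoding)
   x4 link    ~ 4 x 500 MB/s = 2,000 MB/s ~ 16 Gbps
   8 x HDD    ~ 8 x 250 MB/s = 2,000 MB/s  -> right at the limit of a 2.0 x4 link
   10Gbit LAN ~ 1,250 MB/s                 -> fits comfortably in a 2.0 x4 link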

 

One of my HBAs is currently in a PCIe 2.0 x16-physical, x4-electrical slot without issues, though on that board it looks like the M.2 may obstruct the fingers of a card in the PCIe x4 slot, even though the back of the slot is cut out to allow overhang.

 

Bonded dual Gbit LAN will keep up with array read/write speeds, so unless you really need to push data to the cache very quickly, it's likely to be fast enough and much cheaper than 10G.
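
 

For scale, rough figures again:

   2 x 1 Gbit bond   ~ 2 Gbps ~ 250 MB/s combined
   single large HDD  ~ 200-250 MB/s sequential at best, slower on inner tracks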

 
Link to comment
  • 2 weeks later...

Thanks @dasx86.  Isn't that always the way?  You find something, wait a bit while researching it, maybe wait for it to go on sale.  Then right after you purchase it and get it opened and set up, something new and better comes out... it seems to ALWAYS be the case.

 

I think I will choose to be happy with my purchase.  I can fit 14 drives (plus NVMe drives on the MB) in the XL R2 by adding 2 ICY Dock 3-in-2 hot swap bays and an HBA card if needed.  If I outgrow that, I will probably go to a rackmount setup.

Link to comment
  • 7 months later...

I bought the Fractal Design Define XL R2 case for my dual Xeon build and I really like it.  It's easy to build in, and I had room for a pair of Cooler Master Hyper 212 EVOs with 8 x 3.5" HDDs and 3-4 x 2.5" SSDs, with room to spare!  I am glad I found my XL R2 case!  I am using it for my main dual Xeon backup server these days, but it's still my favorite case by far.

Link to comment
