First Unraid Build - Looking for Feedback/Tips



After further research I settled on the general idea of my build. Trying to keep it under $2000 if possible. It's currently about that much.


Part List
Motherboard: ASRock EP2C612 WS SSI EEB Dual-CPU LGA2011-3 Motherboard  ($414.60 @ Amazon) 
CPU: Intel Xeon E5-2670 V3 2.3 GHz 12-Core Processor (found em for ~$100/ea @ ebay) x 2
CPU Cooler: Noctua NH-U12DXi4 55 CFM CPU Cooler  ($64.95 @ Amazon)  x 2
Memory: Samsung 32 GB Registered DDR4-2133 CL15 Memory  ($92.00 @ Amazon) x 4
Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive  ($89.99 @ Adorama)  x 4
Cache Drive: Samsung 970 EVO Plus M.2 2280 1TB PCIe Gen 3.0 x4 ($150 @ Amazon)
Video Card: MSI GeForce GT 710 2 GB Video Card  ($49.99 @ Amazon)  x 2
Case: Phanteks Enthoo Pro 2 ATX Full Tower Case  ($146.98 @ Newegg) 
Power Supply: Rosewill Capstone 1000 W 80+ Gold Certified Semi-modular ATX Power Supply  ($124.00 @ Newegg) 
PCIe to NVMe Adapter: M.2 NVMe to PCIe 3.0 x4 Adapter with Aluminum Heatsink Solution ($16 @ Amazon)
SATA Controller: SAS9211-8I 8-Port Internal 6Gb/s SATA+SAS PCIe 2.0 ($55 @ Amazon)
SATA Data Cables: CableCreation Mini SAS 36Pin (SFF-8087) Male to 4x Angle SATA 7Pin Female Cable, Mini SAS Host/Controller to 4 SATA Target/Backplane, 3.3FT ($10 @ Amazon)
SATA Power Daisy Chain Cable: StarTech.com 15.7-Inch (400mm) SATA Power Splitter Adapter Cable, M/F, 4x Serial ATA Power (PYO4SATA), Black ($5 @ Amazon)
USB Stick: SanDisk Cruzer Fit CZ33 32GB USB 2.0 Low-Profile Flash Drive


My wife and I run our own business, mostly from home. We've always wanted a NAS for ease of sharing files and as a central place to store data. I also have some side projects I want to run that require a VM. One project requires 1-2GB of GPU memory, and I want to run 2 versions of the project at the same time. After looking at vGPU options, it just made sense to grab two cheap GT 710s. They are only $50 and 19W TDP. Another project requires downloading 300-800GB/day of video and involves a lot of OCR / computer vision. All Python. I am still learning, so I am not sure if adding more cores would help or not. Beyond that I'll probably build out these projects and explore other ideas that would require a VM or Docker container. We're not huge media junkies, but since we're going to set up our own Unraid, it makes sense to run a Plex server too so we have that option available.


I wanted to do an unraid + gaming all-in-one build, but after much research, it just doesn't make sense. I want to explore AI and machine learning. It makes sense to do this on the gaming box with an RTX 3000 GPU.


This motherboard has 7 PCIe 3.0 x16 slots and 8 RAM slots. I found a great deal on 32GB RAM sticks, so I figured go for 4 sticks now for a solid 128GB. If I really need more I can go up to 256GB later. It allows for dual CPUs, and the E5-2670s hit a real sweet spot: 12 cores / 24 threads for only $100. Not blazing fast, but plenty of cores.


PCIe Slots: SATA controller, NVMe-to-PCIe adapter, and 2 GT 710 cards, so I'd have three 3.0 x16 slots left for a beefier GPU if I want to add one later.


Storage: 4 x 4TB is perfect for us right now. No M.2 slot on the mobo, but the adapter is PCIe 3.0 x4, which matches the drive's interface, so it should be fast enough. I am guessing I will hit close to max speeds.


Case: Large, plenty of space. Has enough PCIe slots, 3.5" slots etc. It just so happens to come with 4 brackets for 3.5" which is exactly the number of HDD we will start with. No fans included.


PSU: Found a great deal on a 1KW PSU. 600W build now, but we have room to grow.



  1. What fans, how many, and in what configuration do you recommend for this build in the Phanteks Enthoo Pro 2?
  2. 1,000W is enough for this 600W build. But if we put in an RTX 3080 later (320W TDP), that'd put us at 920W. Thoughts? Buy an 800W PSU now and upgrade later when needed? Keep the 1kW since it'd be fine for a 320W GPU later? Or go for a 1100-1200W PSU? Any recommendations on specific PSUs/deals?
  3. Can this board/these CPUs do non-ECC RAM? I didn't find non-ECC RAM on the official memory QVL. I found a great deal on these 32GB non-ECC sticks, but I've heard ECC RAM can give you problems if you don't really need it.
  4. Am I missing anything? Any potential part conflicts?
  5. Any suggestions?
  6. From seeing other builds on this board with these fans, it looks like one CPU cooler will blow into the other, causing one CPU to run a little hotter. Is there a good fan option that won't do this? Maybe one Noctua for one CPU and a cooler that exhausts upward (is there such a thing?) for the other?

I appreciate ANY and ALL feedback/answers. Pick 1 or a few of the above questions and take your best shot. I'm new to Unraid, so I have a lot to learn. I want to make sure I am learning on a solid box that won't give me too many problems, and one that lets me dink around with the fun projects I am trying to get working :)

Edited by KBlast
Link to comment
24 minutes ago, KBlast said:


Memory: Samsung 32 GB Registered DDR4-2133 CL15 Memory  ($92.00 @ Amazon) x 4
Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive  ($89.99 @ Adorama)  x 4
Cache Drive: Samsung 970 EVO Plus M.2 2280 1TB PCIe Gen 3.0 x4 ($150 @ Amazon)

PCIe to NVMe Adapter: M.2 NVMe to PCIe 3.0 x4 Adapter with Aluminum Heatsink Solution ($16 @ Amazon)


Another project I have requires downloading 300-800GB/day of video and involves a lot of OCR / computer vision.




Storage: 4 x 4TB is perfect for us right now. No NVMe2 slot on MOBO, but the adaptor is 3.0x16 so it should be fast enough. I am guessing I will hit close to max speeds.



This is quite a powerful box you've designed.

As you said, the 4x 4TB for the Array will suit your storage needs.....

....but with up to 800GB of data per day for your other projects, along with a single 1TB NVMe....I'd just make sure that you know how you'll handle that in your workflows on the box.

Also, a single cache drive will leave the data unprotected until the mover has transferred it to the array.

It will need to hold the Dockers and VMs as well.


Double-check whether you'd want to consider a larger cache pool instead (for example, 4x 1TB NVMe in a 2TB RAID10 pool).

You'd have to sacrifice a x16 Slot for a quad NVMe-PCIE x4 card.


When using only one Array drive, unRaid will use it in a 1:1 mirror mode, thus keeping the write speed up.

So I'd rather suggest 2x 12TB disks for a start, or enabling turbo write mode....that also helps should you need to move data from your larger projects onto the array.

Are you sure that a double 1Gbps NIC setup will be enough for your needs?

Based on the workload other workstations will introduce, do you need a 10Gbps NIC (or two) in that box?

Link to comment

Thanks for the feedback. I haven't worked on that project in a while (one reason I am building this Unraid box). I believe it's actually 300-500GB/day. I don't need all that data to be permanently stored; rather, I need to process through it, then it can be deleted. I think we'd need 4TB for our own storage needs, 4TB for temporary storage, then 2x 4TB would be parity. I could look into 2 x 8TB if you think having 1 storage and 1 parity is better?


500GB is a lot to download in a day, but I don't max out my 1Gbps connection (speedtest: ~600Mbps), so I don't think dual 1Gbps or a 10Gbps NIC will help, or would it? I don't know much about networking. I've never done anything other than 1Gbps Ethernet (no 2.5Gb, no 10Gb NIC card, etc.).


Ideally, everything could exist in memory while it's being processed so it's fast. Speed is king for me. If I lost one glob of downloaded data, I could always just redownload it again. Is it possible to download straight to memory to save hitting my disks so much? I download and process 30-40GB chunks of video at a time, process them, then delete them. I plan to do about 10 of these a day. It would be great to download to and work directly from the RAM.
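Since unRaid runs on Linux, downloading to RAM is possible via tmpfs. A minimal Python sketch of the download/process/delete cycle, assuming the standard Linux RAM-backed tmpfs at /dev/shm (the file name and the processing step are placeholders, not real project code):

```python
# Sketch: stage each chunk in /dev/shm, Linux's RAM-backed tmpfs, so the
# download -> OCR -> delete cycle never touches the array disks.
import tempfile
from pathlib import Path

# /dev/shm is RAM on Linux; the fallback just keeps this sketch portable.
ram_root = Path("/dev/shm") if Path("/dev/shm").exists() else Path(tempfile.gettempdir())
RAM_SCRATCH = ram_root / "videowork"
RAM_SCRATCH.mkdir(parents=True, exist_ok=True)

def process_chunk(data: bytes) -> int:
    """Stand-in for the OCR / computer-vision pass."""
    return len(data)

chunk = RAM_SCRATCH / "chunk.mp4"
chunk.write_bytes(b"fake video data")       # in real use: stream the download here
result = process_chunk(chunk.read_bytes())  # work entirely from RAM
chunk.unlink()                              # free the RAM immediately
```

Caveats: tmpfs competes with the RAM your VMs use, a 30-40GB chunk needs that much free memory, and anything not yet processed is lost on power failure.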


I know conceptually how the cache drive works, but I've never used one. I thought it was for smaller downloads due to faster write speed: data is written to the NVMe first, then written out to the HDD.


Feel free to correct me if I am misunderstanding something.

Link to comment

...when you have more than one data disk in the array, there will be a performance penalty when writing data to the array, due to the nature of unRaid's unique parity mechanism.

In any case, unRaid does not use striping (splitting logical data blocks/files across physical disks to combine their performance), and therefore reading data (as each individual file resides on only one of the array disks) gives you the performance of that single disk.

You can compensate for the write penalty by enabling "turbo write mode", which brings performance near that of a single disk, but it requires all disks to be spun up (using more power)....this is probably not a big thing for your setup, as with a few disks, your Xeons and RAM will be using much more ;-)
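The penalty can be made concrete with a simplified IO count per written block (a back-of-envelope model of the mechanism described above, not unRaid's actual implementation):

```python
# Back-of-envelope IO accounting for one block written to an unRaid array
# with single parity. Simplified model, not unRaid code.
def ios_per_write(n_data_disks: int, turbo: bool) -> dict:
    if turbo:
        # Reconstruct-write ("turbo"): read the same block from every *other*
        # data disk, recompute parity, write new data + new parity.
        # All disks must be spinning.
        return {"reads": n_data_disks - 1, "writes": 2,
                "disks_spinning": n_data_disks + 1}
    # Default read/modify/write: read old data + old parity, write both back.
    # Only two disks involved, but each does a read *and* a write per block.
    return {"reads": 2, "writes": 2, "disks_spinning": 2}

print(ios_per_write(4, turbo=False))  # the 4-IO dance on just 2 disks
print(ios_per_write(4, turbo=True))   # spreads the reads across the other disks
```

The default mode is slow mainly because the same two spindles each serve a read and a write per block; turbo write trades power (all disks spinning) for near single-disk speed.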

Another option is to use only one data disk (with 1 or 2 parity), where unRaid will change its parity algorithm to that of a RAID1 (mirror) setup, also resulting in read/write performance of that single disk.

As your data storage needs seemed comparably low, that is why I suggested using only one data disk.

A double parity is not needed here; it is normally recommended from 8+ data disks onward (although being paranoid is not a technical issue, it of course only makes sense starting from two data disks).


A cache drive is normally built from a faster SSD or even an NVMe drive (with a PCIe interface).

It is used as an asynchronous write cache for the array, as you said. But as long as the data resides there and has not been moved to the array, it is sort of a staging disk, giving its full performance for reads of that data as well.


In addition, because of the higher performance of these disks, the storage paths for Dockers and VMs (where their OS and apps reside) are normally configured to reside on that cache disk as well, not on the slower array disks.

If you want performance for your VMs (the OS inside), RAM is not the only parameter; the read/write speed and especially the IOPS of the cache disk matter too (especially if you have more than one VM/Docker running concurrently).

But since the data for VMs/Dockers, as well as the cached data waiting for the mover process (which typically runs once a day) to transfer it to the array, resides only there, there is no protection against failure of the cache disk.

The workaround is adding a second cache disk; unRaid will normally form a BTRFS RAID1 pool from the two.


If you need to copy data back and forth between VMs, I'd assume you would need more speed than that of a single HDD, and therefore also more space than that single 1TB cache disk.

You could buy larger NVMe Disks or use more Disks

Options are to use more than one Cache Disk or Pool (like one for data cache and one for VMs/Dockers) or building a larger, single Cache or pool.

Another - and maybe fastest - option for Performance for VMs (VMs only) is to deploy more PCIe-Adapters for NVMes and physically passthrough these to the individual VMs.


The Samsung you selected is a good one but "only" offers a TBW of 600TB...others offer more durability, like 800-1800TB nowadays. So if you do not need/want a mirrored cache pool but just a cache drive, I'd recommend going for an NVMe that is in the same performance ballpark as your Samsung, but with a much higher TBW, like 1.2PB...for example a Samsung 970 Pro or a Gigabyte Aorus Gen4 SSD 1TB, M.2 (GP-AG41TB).


I mentioned a 10GbE network option not only because of your data being transferred via the Internet, but also in case it needs to be transferred to/from other workstations.

Especially during the day, before the data on the cache has been moved to the slower array, the performance of the NVMe-based cache can be fully utilized by other workstations as well. A typical scenario in a business where data is exchanged between workers throughout the work day.

So everyone works on current data on the cache drive(s) (if they are large enough), and the day's data gets transferred to the slower array during the night.

Link to comment

Thanks, that information is very helpful.


I don't know what kind of projects I'll get into in the future, but for now none of my VMs pass data back and forth.


Is it possible to partition an NVMe drive and pass through partitions, or just share the NVMe so the VMs get their own folders to write to? I know it depends on my use case, but are there any general rules on how many VMs per NVMe, if they can be shared or partitioned? If they can be shared in some fashion, I might do one larger NVMe. How many GB would you recommend for a normal Windows 10 VM or Linux VM? Maybe multiple NVMes is the way. TeamGroup's 256GB is only $34 USD.


In terms of which SSD, it looks like the Plus has better performance than the Pro from what Newegg shows. Same sequential read, but the max sequential write for the Pro is 2700MB/s versus 3300MB/s for the Plus. 4KB random read and write both favor the Plus too. I guess the Pro is for longevity? I've reached the upper end of the budget for this build, and I'm not sure I can squeeze another $100 in. Would it really make a big difference in the next 3-5 years?


We'll only have 1 workstation, so there won't be others to transfer between. My wife does all her media creation/editing on her own computer currently. So I think we'll pass on 10GbE for now. In the future, if I build an array of GPUs for AI/ML work and that area develops further, maybe 10GbE could make sense so she could use them for video editing.


One more question. What are your thoughts on shucking? The WD Easystore 16TBs are (or at least used to be) 16TB Red drives. Do you know if that's still the case, and whether those are still helium-filled or are now made with air? It seems like a decent way to get HDD costs down.

Edited by KBlast
Link to comment

Yes, the 970 Pro is a bit slower when comparing specs, but I was referring to TBW, where the 970 EVO Plus comes in at 600TB and the Pro at double that. A higher TBW means better durability, especially under heavy use (which equals wear and tear on the non-SLC NAND).

But I agree, they are expensive (and using 2x 970 EVO Plus in a pool would cost about the same as a single 970 Pro).


A VM is normally created on a virtual disk, which in turn is a (set of) physical file(s) on the unRaid cache (or array).

I'd guess you can get away with 80-120GB easily (a fresh install of Win10 Pro fills the system disk to about 50GB, doesn't it?).


I am not a Windows expert, but dedicating a single NVMe to a VM (in this case, best with a dedicated PCIe-NVMe adapter, as you already listed, which should guarantee that you can physically pass it through in an IOMMU configuration) is an option for real-world performance.

But you will be limited to this smaller disk...a virtual disk can be any size that fits on the Cache or Array.

Should you run more than one VM concurrently using virtual disks on the unRaid cache, IOPS matter more than pure read/write throughput.


Where I live, the Corsair Force Series MP510 240GB, M.2 (CSSD-F240GBMP510) seems like a sweet spot at the moment (around 50USD, the same as the TeamGroup PCIe SSD MP34 256GB, M.2 (TM8FP4256G0C101), but a tad faster and with a TBW of 400TB). Also, I believe Corsair is the more reputable brand.

In terms of a brand for flash memory, I'd go with SAMSUNG first, then Patriot, Mushkin, Micron, SanDisk, Kingston, Crucial, Intel or Toshiba....maybe GigaByte, PNY or ADATA and Corsair...never heard of Teamgroup in my part of the world before ;-)


I have no personal experience with shucking.

As far as I know, you cannot be sure what physical disk is really inside...my best guess is that the assembly line will basically fill in whatever is available and fits the lower specs ;-) Also, I *think* you cannot be sure that you will find a standard disk with standard firmware inside, so upgrading the firmware is basically not an option, or at least a risk, I think.

For unRaid, high-performance enterprise disks are not required (well, maybe if you stick to a single array drive ;-) ) and any desktop drive will do.

So you can save a lot of money, but you will lose the warranty on that drive (as you may not be able to fit it back into its enclosure should you want/need to RMA it).


As you are on a budget, your system has a lot of options for expanding later, and you do not yet know the real-world performance requirements from experience, my advice is to start with a lower minimum as a baseline and expand from there as you learn.

Maybe save on RAM (install 2 DIMMs instead of 4) or buy it used, like you are doing with the CPUs.

Find the sweet spot for HDDs in terms of USD/TB and start with one parity drive.

Use PCIe x8 adapters that hold two NVMe PCIe x4 drives each (some x16 slot pairs of that motherboard will work in either x16/x0 or x8/x8 mode).

Start with a single cache disk that is fast in terms of IOPS and has a higher TBW.

Link to comment


As you are on a budget, your system has a lot of options for expanding later, and you do not yet know the real-world performance requirements from experience, my advice is to start with a lower minimum as a baseline and expand from there as you learn.


100% agree. I want to go for a nice system, but I also don't want to end up with expensive parts that don't really fit what I need: either I'd need different expensive parts, or I'd way overspec and end up running a Rube Goldberg machine without all the pizzazz ;)


So are you saying run 1 VM per NVMe, or can I run multiple on "virtual disks" (like a partition?), though they may be IOPS-limited? Win 10 is a 20GB install. The Win10-based project that requires the GT 710s doesn't need a ton of space or IOPS. If I can partition, I would run both off that Corsair Force 240GB you recommended.

Could you explain more what you mean by "Use PCIe-x8 Adapters for two NVMe-Pcie x4 each (some x16 slot pairs of that MB will either work in x16/x0 or x8/x8 mode)"? This is the adaptor I'm looking at right now. I believe I can only use 1 per PCIe slot, correct? Or are you saying the adaptor is x8 and it can hold two NVMes that run at x4 each? Something like this? If it can still pass through easily, I'm all for saving PCIe slots. I've got 7, but they are going fast and I want to leave a few open for future opportunities.

At the same time, I want to avoid headaches. I've installed 1 GPU in a PCIe slot, a wifi card... and that's my experience with PCIe slots so far =D


I might start with 2 x 32GB ram sticks, then add more later if needed, per your suggestion. This would help with the budget going for the 16TB HDDs.


I think I will go for 2 x 16TB. The Seagate Expansion Desktop Hard Drive 16TB HDD contains a 16TB Seagate Exos. This was the suggestion I got on reddit.


"Personally I prefer the 16TB Seagate as they contain EXOS x16 drives which register for warranty with the drive serial once outside the enclosure. They're Enterprise drives and pretty speedy too...

As mentioned, they registered for warranty exactly as all my other drives did. Once they're out of their enclosure you can't distinguish them from any other EXOS enterprise drive....

Amusingly, the warranty on the drive once taken out of its enclosure is a year longer than on the enclosure itself..."


In this video, someone shucks it and reveals the Exos 16TB. He already has an Exos 16TB in his box and claims to have done it with numerous Seagate 16TB External HDD. Looks like a good option. In Canada, I can save ~$100 CAD per drive shucking. Mind boggling. I guess they have some great margins and the price point for external HDD is competitive.


I think I'll stick with the 1TB 970 EVO Plus NVMe for budget reasons, but it also gives me a good upgrade opportunity in the future if I really blow through it in the next few years. At that point a 2-4TB drive will be much cheaper.

Edited by KBlast
Link to comment

Well, if you shuck one of those and really find that 16TB Exos, that is almost a steal. ;-)

These drives come in SATA or SAS flavors, but I guess the SATA version is inside the enclosure, in order to integrate with the internal USB-SATA adapter.

So no need to worry there.

I cannot comment on the nature of the warranty thing. My guess is that opening the enclosure immediately voids the warranty on all the parts.

If you check the S/N of the shucked drive, it would possibly not return a valid warranty status.

Yes, the Exos is an enterprise disk with an extended warranty (which is calculated into the price tag).

The external USB Disk (whatever disk is inside) is a consumer grade product and only guaranteed with standard warranty rights, I guess.


Regarding your PCIe Slots...

Yes, the board has seven of them, which is plenty... WS#Specifications ...but you need to consider the potential difference between mechanical and logical specs; as given in the link above, it says:

7 slots (PCIE3/PCIE4 : x16/x0 or x8/x8 mode; PCIE6/PCIE7: x0/x16 or x8/x8 mode)

This spec says that if you populate only one slot out of a pair (PCIE3/PCIE4, or PCIE6/PCIE7 respectively), that slot will work in x16 mode.

Should you populate both, each slot will only work in x8 mode (although you can still physically insert an x16 card). Hence a card with x16 capability will work at reduced performance (normally it should keep working, not stop).


So for your NVMe deployment, this x4/x8/x16 thing is (almost) simple math.

Assume an NVMe with a PCIe interface, itself designed for PCIe gen3.x at x4 speed according to its specs (we will come to the PCIe gen3.x thing later).

So in order to use it with maximum performance, insert it into a Slot that is capable of handling x4 speed.

This: is fine for a single NVMe, speed wise.

But should you insert that expansion card into an x16 or x8 slot, it would be a waste of potential bandwidth, as you would be inserting a physically less capable card into a more potent slot.

This: ...can physically mount 2 NVMe drives but only offers x4 PCIe bandwidth to both, as it only has an x4 physical connector for the motherboard's PCIe slot...hence, populated with two NVMes, each NVMe will only receive x2 performance (half the possible speed), regardless of whether it is inserted into an x4, x8 or x16 slot.


This: is an example for a x16 PCIe Adapter, that can populate four NVMe x4 each.

Now, if you insert two of them, each populated with four x4 NVMes, into slot #3 *and* slot #4 of the MB, the MB will only deliver x8 bandwidth to each expansion card (see the specs and explanation above). Hence each NVMe mounted on it, although in an x4 slot of the card, will only receive x2 bandwidth.

This: is a card suitable for two x4 NVMes in an x8 motherboard slot (OK, this example comes with NVMes included, but the adapter is available as a spare part without the disks...approx 50USD, I'd say).

...hope you get the math for performance ;-)


As a side note, your MB only supports up to PCIe gen3.x cards...the newest standard is PCIe gen4, which technically doubles the bandwidth.

So a PCIe gen4 x8 card can cope with the same bandwidth as a gen3 x16...if the MB supports it. But do not get confused when buying: gen4 will only work at gen3 speed in your MB (inserting a gen4 x8 card into a gen3 x16 or x8 slot will only give you gen3 x8 speed)....luckily, PCIe gen4 cards (should) be backwards compatible, so using one *should* work.

So in terms of an NVMe: if the specs of a gen4 x4 NVMe (and they are out there, like the Samsung 980 Pro) exceed what gen3 x4 can deliver, you will not get that extra performance out of your system.
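The slot math above can be sketched numerically. Per-lane rates are the standard PCIe figures after 128b/130b encoding overhead; the lane-splitting behavior assumed here is the passive adapter case described above:

```python
# Approximate usable bandwidth per PCIe lane (GB/s), after 128b/130b encoding.
# Gen3: 8 GT/s  -> ~0.985 GB/s per lane
# Gen4: 16 GT/s -> ~1.969 GB/s per lane
PER_LANE_GBPS = {3: 8 * (128 / 130) / 8, 4: 16 * (128 / 130) / 8}

def nvme_bandwidth(slot_gen, slot_lanes, drives_on_card):
    """Effective bandwidth per NVMe when an adapter splits the slot's lanes."""
    lanes_per_drive = min(4, slot_lanes // drives_on_card)  # each NVMe is x4 max
    return lanes_per_drive * PER_LANE_GBPS[slot_gen]

# One NVMe on an x4 adapter in a gen3 slot: full x4, ~3.9 GB/s
print(round(nvme_bandwidth(3, 4, 1), 2))   # -> 3.94
# Four NVMes on an x16 card when the slot drops to x8: x2 each, ~2.0 GB/s
print(round(nvme_bandwidth(3, 8, 4), 2))   # -> 1.97
```

(Real adapters with PCIe switches can do better than this naive split, but for the passive bifurcation cards discussed here the simple division holds.)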


Now, for the physically passthrough of hardware to a VM...

This is also a complex thing.

Software wise it is not difficult (anymore) and here is a great video: 


In a nutshell, you can passthrough everything that is shown by that plugin.

Sometimes some components are combined and can only be passed through together.

For some components, passthrough to a VM does not make sense, as they are allocated/needed by the unRaid host itself in order to run (CPU, SATA controller, main NIC, ...USB controller/port of the unRaid stick, ...)

Unfortunately, the physical addresses allocated by components are not standardized, and the picture that the VFIO plugin shown in the video presents to you will be individual to your setup.

There is a good chance that you can always pass through devices that are attached to a single PCIe slot in a 1:1 manner.

Hence, an expander for e.g. two NVMes will allow you to pass through the expander, but might not allow for passing through individual NVMes (like #1 to VM1 and #2 to VM2).

Your mileage may vary....


So it is sort of trial and error...even changing the slot position of an expansion card might give other/better results....the good thing is, you cannot physically damage things...only lose your configuration. If you pass through physical disks from the unRaid array, though, you have a good chance of losing your data ;-). So do not make careless decisions with passthrough.


Link to comment

Thanks for the tips, Ford Prefect! Those are very helpful.


I also got some feedback recently that I should avoid Unraid and go with FreeNAS for speed. Is there a thread or video that compares the pros and cons of each system? I don't know enough yet to know what will work best for what I am doing, so I am trying to get feedback from pros like you.


Do you know how difficult it would be to switch from unraid to FreeNAS or FreeNAS to unraid?

Link to comment

I looked at both when I wanted to build my server, here is why I chose unRAID over FreeNAS.


+ for FreeNAS:

  • FreeNAS's ZFS is certainly better for speed (and probably data security?).


+ for unRAID:

  • The advantage of unRAID is that it is very flexible regarding its disk configuration. You can add drives as you need: a few data drives and a parity drive, then more data drives, and a second parity drive when you see fit.
  • If you really want ZFS, there is an unRAID plugin that allows the use of ZFS and Limetech is looking into an official integration according to various podcasts and blog posts.
  • Great support from the community (Community Applications and all that is available on it), the forums.


- For FreeNAS :

  • The main "issue" I had with ZFS is that it you have configure one disk group (vdev, zpool, etc) and that it is not possible to change it afterwards (for the moment ?).
  • Not the greatest community from what I heard AND also saw on the forums


That's just one man's opinion; I encourage you to form your own opinion on all those points and evaluate what your real needs and priorities are.


More info on ZFS



Link to comment

Thanks. I ordered all the parts. I got a killer deal on RAM, but it's a long ship time, so I have some time to think through Unraid vs FreeNAS. I'm not so worried about data integrity; it's important, but our business data isn't huge, it's all in the cloud, and it's backed up.


The main things I'll store on the server will be media and the VMs running some code. All that code is stored in my personal GitHub repos, so no worry there. My interest in ZFS would be performance, not data integrity. I'll read that article you linked on ZFS as soon as I can.

Link to comment

ZFS is for integrity first...the "classic" RAID mechanisms in ZFS, which allow striping - like other traditional RAID mechanisms - are for larger, enterprise applications that need I/O, like a database (remember the discussion about read/write speed and IOPS...striping also combines the read/write speed and IOPS of disks).

When going for performance with traditional disks, you need more disks, not fewer (rather than settling for two disks in a RAID1 mirror).

When your storage is for media, this is overkill.

When using striping, you need other means of managing your risk: should too many drives fail at the same time, *everything* is lost.

In unraid, this scenario will leave the data on the still functional disks intact.


Normal desktop disks/hardware will not cope well with ZFS and RAID/striping, as it introduces more wear, tear and potential strain.

Also, saving power by putting individual disks to sleep is not an option. In some circumstances you can make a complete array sleep and wake up again...but when the array is accessed, all disks always need to spin up (remember, for performance, more disks are needed).


Next: for an all-in-one system, a NAS and VM/Docker host, consider both concepts individually and what best combines them.

FreeBSD  for hosting Dockers and VMs is not the best option for performance and usability, in my eyes.

Besides being a NAS, unraid is a very good option for ease of use of VMs and Dockers as well...there are more and more users who want to use unraid just for that.

Should you want to go another route for performance, I suggest looking into other options as well.

In any case, you would need more hardware for that: at least one or two disks/SSDs for the host OS (remember, unraid runs in RAM, starting from a stick..."no OS disk").


My suggestions for this route:

- look into Proxmox (Linux KVM based, like unraid) or ESXi as the VM host...I'd prefer these to FreeBSD

- using a VM host means you need to virtualize your NAS as well (and you need to pass through a dedicated SATA/SAS card for the disks)

- for a NAS Appliance with ZFS, use the "real" thing, like napp-it: ...look for the AiO (All-in-One) concept, here: and you get the picture.

Edited by Ford Prefect
Link to comment

Proxmox looks interesting. One person said they use Proxmox as the hypervisor and Turnkey Linux FileServer in a VM for their NAS. Another said they use OpenMediaVault in a VM for their NAS.


I have an 8-port card I could pass through. Is this a workable way to do what I want, or am I adding complexity or doing something wrong? It looks like I'd have my VMs for my projects, then a VM for the NAS / media server.


Would the NAS HDDs be exposed to my VMs? Could that be toggled? Also, if I wanted them to have access, would traffic route directly to the VMs through the mobo, or would it go from the server to the router and back to the server because it's exposed as network-attached storage?

Link to comment

When using a virtualisation approach, you can run anything/any OS in a VM...so all the NAS options out there are a possibility, including virtualizing FreeNAS, TrueNAS, OMV, napp-it...even xpenology (the Synology OS)...and of course unraid.

With enough disks and controllers, you can also create more than one NAS VM, combining/offering different RAID models/strategies for the needs of your worker VMs.

That card looks like it is based on an LSI 2008 or newer chip...it can handle far more than 8 devices (up to 1024) with the right storage strategy.
It will handle SATA/SAS drives 1:1 (then up to 8) or SAS expanders, as used in 19-inch rack storage.
It is perfect for passthrough and software RAID, like ZFS.
Just make sure it has IT-mode (HBA) firmware on it, as these cards can also host a hardware-RAID firmware (which makes it a bad/low-tech choice without a backup battery/BBU and onboard cache).
Speaking of which...do you plan to have a UPS?

The host will provide a virtual network switch that offers speeds limited only by CPU bandwidth. You must use the correct virtualized drivers in the VM OS for that to work, though.

There are ways of giving a VM raw block-device access to physical disks on the host, but you can't have those disks in the NAS storage array at the same time.
The standard way of giving a VM direct, high-IOPS access to storage from a NAS VM is to mount the NAS storage back to the VM host, create a virtual disk(-file) on that NAS, and add it to the VM. Other options, which allow plugging remote disks in and out between VMs, include iSCSI transport (raw SCSI over the network).

You will need to find out what is best for your concept/use case and workflow.

...but this is the Unraid forum, isn't it?
So, for the sake of the argument, why not start with an Unraid box, add your project VMs, and then decide whether, for example, more disks for high IOPS are needed. You could then add those as additional cache disks, use the ZFS plugin available in Unraid to create a pool outside the standard array for holding virtual disks, or even run a second NAS with ZFS as a VM on Unraid, like napp-it.
My bet is that this will give you the easiest, fastest starting point, without the need to deep-dive into other concepts and software that are definitely more complex to build and use.

Sent from my SM-G960F using Tapatalk

Edited by Ford Prefect
Link to comment

Thanks for the further feedback and follow-up. I probably should get a UPS. Any recommendations on a brand, a specific product, or the amount of VA for this system? I am looking at this one right now


Also, it looks like a UPS gives 5-15 minutes of runtime. Great for short blips in power, but not great for riding out a blackout that lasts 30+ minutes. How do people handle this if they are remote when it happens? Do they have a script that checks whether the server is running on battery power and, if so, gracefully shuts it down? If battery power can be detected, it's just a matter of scripting each VM/container/etc. to power down within that 5-15 minute window. Any generally applicable approaches for this?
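The script idea above could be sketched like this. This is a hypothetical sketch assuming apcupsd is installed, whose `apcaccess status` tool prints `KEY : VALUE` lines such as `STATUS` and `BCHARGE`; the 50% threshold and the bare `shutdown` call are placeholder choices, not recommendations:

```python
import subprocess

def parse_apcaccess(text):
    """Parse `apcaccess status` output (KEY : VALUE lines) into a dict."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def should_shut_down(status, min_charge=50.0):
    """True once the UPS is on battery and charge fell below min_charge %."""
    on_battery = "ONBATT" in status.get("STATUS", "")
    charge = float(status.get("BCHARGE", "100").split()[0])
    return on_battery and charge < min_charge

def main():
    """Poll the local apcupsd daemon once; call from cron or a loop."""
    out = subprocess.run(["apcaccess", "status"],
                        capture_output=True, text=True).stdout
    if should_shut_down(parse_apcaccess(out)):
        # In a real setup, stop VMs/containers first, then halt the host.
        subprocess.run(["shutdown", "-h", "now"])
```

In practice apcupsd can trigger the shutdown itself; a custom script like this is mainly useful for ordering VM/container shutdown before the host goes down.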

Edited by KBlast
Link to comment

There are basically three types or modes of UPS, with features that also have an impact on the price tag:

  1. simple offline/passive, or VFD (Voltage and Frequency Dependent on mains supply) models
  2. line-interactive, or VI (Voltage Independent of mains supply)
  3. online, or VFI (Voltage and Frequency Independent of mains supply)

...look up the details if you like; in a nutshell:


VFD models are a bit slower in switching over from mains to battery when the mains supply goes down, and they do not decouple voltage and frequency levels/quality from the mains. So if you live somewhere where quality of supply (correct voltage and frequency) is an issue, these do not help.

VI models switch over faster when mains goes down and provide a sturdy, self-generated voltage level. VFI models basically generate the output power, including power quality, completely by themselves: the output side is always running on battery, which is constantly recharged while mains is available.

I'd recommend a VI model; they are the de facto commodity standard nowadays anyway.

The CyberPower one you linked is a VI model.


The sizing of the UPS is not very complicated.

You need one that is able to cope with the maximum load the devices on its outlets can draw.

Your new server has a 1000W PSU, so this, plus an extra margin of let's say 10%, is the minimum you should plan for.

Maybe you want to attach more equipment, like network switch, router, WLAN-APs and such.

For myself, I have some APs and remote network switches running over PoE, supplied from my main network switch, so I have the complete rack attached to the UPS outlet and added another 250W to the sizing calculation.
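As a back-of-the-envelope sketch of that sizing: the 10% margin comes from above, but the 0.6 power factor is an illustrative assumption for consumer UPSes, so check the datasheet of the model you are considering.

```python
def min_ups_va(load_watts, margin=0.10, power_factor=0.6):
    """Minimum UPS VA rating for a given real-power load.

    VA = W / power factor; the margin covers inrush and future additions.
    The 0.6 power factor is an assumption, not a datasheet value.
    """
    return load_watts * (1 + margin) / power_factor

# Example from the thread: 1000 W server PSU + 250 W of network gear
print(round(min_ups_va(1000 + 250)))  # → 2292 VA
```

In practice the server will rarely pull the PSU's full rating, so measuring the real draw (e.g. with a plug-in power meter) usually lets you size smaller.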


Typically these UPS models come with a USB connection that can be attached to your server, where an app/service can monitor state, load, and of course the event when mains supply goes down and the UPS switches to battery power.

Should mains not return before a certain configured battery level is reached (the battery should not be allowed to reach zero, as this will damage it; a typical cut-off level is 20-30% in order to preserve battery health and lifecycle), the service will then (try to) gracefully shut down your server before the UPS shuts itself down.

This is the timespan you need to cover with the battery powered energy.

The datasheet will indicate that timespan, based on typical load scenarios/levels.


With a USB-connected UPS, there is probably one challenge.

If other devices connected to the UPS also need a graceful shutdown before it cuts power, you need to find a way to communicate the shutdown command from the server that monitors the UPS to that other equipment.

The easiest way to circumvent this is, of course, to deploy a separate UPS for each device that requires a graceful shutdown.


Your server will probably draw its highest load when starting, especially when the HDDs need to spin up.

In your case, with a lot of RAM and dual Xeons at 120W TDP each but only a few HDDs, the load will probably be highest when your "number crunching" projects run.

You need to find out what that level of power is and how long you want/need to cope with a powercut from mains.

Consider whether it makes sense to cover a longer power cut. If, for example, internet/comms are also down during such an event (a power cut affecting the substations and telco infrastructure in your city), there may be no need to keep your local infrastructure up that long, and a 5-15 minute grace period is OK.

If not, do not forget to add your internet router and comms equipment, like stationary phones or wireless base stations, to the UPS outlet.


When you select a model, make sure you can change the batteries yourself (every 5-7 years this will be necessary in order to keep the UPS in "good shape") and that you can purchase the individual batteries from an aftermarket supplier. Do not go for a model where a replacement pack from the manufacturer is the only option, or where the battery pack sits in its own housing that you cannot open without damaging it. Check the model and specs of the internal batteries, and where you would find a good supply, before purchasing the UPS.


In my opinion CyberPower, Eaton and APC are good brands: APC is probably leading the market, Eaton offers simple but sturdy solutions, and CyberPower addresses the home/SOHO market more. I had several throughout the years and all of them died shortly after the end of warranty 🤐

Also, with APC you have the best chance of finding a "driver" for the UPS in the server OS, including Unraid.

There are other "drivers" available, like NUT; they cover a lot of models, but not all.

All brands have drivers for Windows, but you need to consider:

The driver needs to be installed on the host, not in a VM, as shutting down the host from inside a VM is kind of a problem, as you can imagine.


Myself, I am using an Eaton Ellipse 1600VA model with the NUT plugin for Unraid.

I have two servers and components with a total of 800W in my cabinet, hence the 1000W/1600VA model.

My base load is 100W, which is not that high, so my remaining runtime varies between 30-50min.

I can't find it on there, and other models from Eaton look quite expensive... maybe it's not a common brand where you are located. Based on your local currency: where I live, that model has a price tag of 340 CDN$. 🙄

Link to comment
20 hours ago, KBlast said:

Also, it looks like a UPS will give 5-15 minutes of time. Great for short blips in power, but not great for going through a blackout that lasts 30 minutes+. How do people handle this if they are remote when this happens?

Short and sweet: your server must be notified by the UPS that the power is out and that the computer needs to shut down now. As for how that notification happens, there are various methods, as Ford mentioned.


I differ from him on how much capacity you should use, for several reasons. I prefer to have things set up to be fully shut down BEFORE the battery in the UPS reaches 50% discharge. 1st reason: battery lifespan. The shallower the discharge, the longer the lifespan the battery should have. 2nd reason: the nature of power outages. A second outage shortly after the power returns is very likely, as the crews temporarily restore power to the greatest number of customers, then come back through and make permanent repairs. The recharge time is WAY longer than the time spent on battery, 10 to 20 times. So, if you drain the batteries down to 30% left and start everything back up as soon as power is restored, you will likely not have enough capacity left for a clean shutdown if the power is lost a second time.
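The second-outage scenario above can be put into rough numbers. The 10-20x recharge rule comes from the post; the 15x midpoint and the drain rate used here are illustrative assumptions:

```python
def charge_pct(initial_pct, drain_rate, minutes_on_battery,
               minutes_recharging, recharge_factor=15.0):
    """Battery percentage after an outage followed by a partial recharge.

    drain_rate: percent of capacity used per minute on battery
    (depends on load). Recharging is assumed ~15x slower than
    draining, a midpoint of the 10-20x rule of thumb.
    """
    pct = initial_pct - drain_rate * minutes_on_battery
    pct += (drain_rate / recharge_factor) * minutes_recharging
    return max(0.0, min(100.0, pct))

# Drain from 100% to 30% over 14 minutes (5 %/min), then mains returns
# for 30 minutes before a second outage hits:
print(charge_pct(100, 5, 14, 30))  # → 40.0 (% charge, far from full)
```

At 40% charge, a second outage leaves little headroom above a 30% cut-off, which is exactly why shallower discharge targets are safer.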


The goal of a consumer-level battery backup is NEVER to replace power during an outage; it's to safely shut things down in a controlled timeframe. If you want extended runtime, there are commercial units available with add-on battery banks, or size things so you can fire up a generator to replace the mains during an extended outage.


In my area, if the power is out for more than a minute, it's probably going to be out way longer than a UPS can handle, so I have things start shutting down pretty much immediately, with the server the last to go down after 10 minutes of outage. The VMs get the signal from the server that the power is out using the apcupsd program; there are clients available for pretty much any OS. So the VMs start shutting down after 1 minute of outage, various desktops as well. If everything goes as planned, the server has no client connections after 5 minutes or so, and can shut down.
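A staged setup like that is driven by a few apcupsd directives. The values below are illustrative, not a recommendation, and the client address is hypothetical:

```
# /etc/apcupsd/apcupsd.conf on the host monitoring the UPS
UPSCABLE usb
UPSTYPE usb
TIMEOUT 600         # shut this host down after 10 minutes on battery
BATTERYLEVEL 50     # ...or when charge falls below 50%
MINUTES 5           # ...or when estimated runtime drops below 5 minutes
NETSERVER on        # publish UPS status to networked clients
NISPORT 3551

# On each VM/desktop, apcupsd runs as a network client instead:
# UPSCABLE ether
# UPSTYPE net
# DEVICE 192.168.1.10:3551   # hypothetical address of the monitoring host
# TIMEOUT 60                 # clients start shutting down after 1 minute
```

apcupsd shuts down at whichever of TIMEOUT, BATTERYLEVEL, or MINUTES triggers first, so the clients' shorter TIMEOUT is what makes them go down before the server.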

Link to comment
