General thoughts on how to set up an all-NVMe Unraid server



I have been running Unraid for a while on a more traditional setup with spinning disks and a few SSDs for caching. Recently I acquired a mini PC based on an Intel N305 chip that can run five NVMe drives at PCIe 3.0 x1 speeds. I have four drives now but will eventually populate the fifth slot. Currently the drives are two Crucial P3 Plus 4TB drives, a WD SN770 1TB drive, and a WD SN560E 2TB drive.

 

The question is: what is the most reasonable way to avoid hurting the drives' performance any further while keeping the impact on write endurance to a minimum?

 

Right now I have them set up as a simple four-drive array with no parity, and I back them up to my older Unraid system with spinning disks.

 

I have read some fairly negative things about ZFS when it comes to NVMe drives and performance, and I would prefer not to lose 50% of the capacity with BTRFS.

 

So how are others doing this with their setups?

 

Right now I am leaning toward getting three more of the Crucial 4TB drives and using ZFS as the best option.

 

 

Edited by mavrrick
  • 4 weeks later...

Well, I just got the third Crucial P3 Plus drive, and when adding it I took your advice. So far the results have been fairly good.

 

The current setup is the CWWK 4x 2.5GbE mini PC with an Intel N305 processor and 32GB of RAM.

 

Storage configuration:

Array (3TB, no parity)
-------------------------
Disk 1 = WD SN560E 2TB (shucked)
Disk 2 = WD SN770 1TB

ZFS pool (raidz1)
-------------------------
3x Crucial P3 Plus 4TB
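
For anyone curious, Unraid built the raidz1 pool for me through the GUI, but the command-line equivalent would look roughly like the sketch below. The pool name "nvmepool" and the /dev/nvme* paths are placeholders, not my actual device names.

    # Create a three-wide raidz1 pool from the NVMe drives (device paths are examples)
    zpool create -o ashift=12 nvmepool raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

    # Confirm the layout and health of the new pool
    zpool status nvmepool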

 

Using zpool iostat, I have observed the drives reach a combined 2.4-2.5GB/s, which is really good when you consider that all of the drives are limited to PCIe 3.0 x1. Considering the fastest network connection is 2.5Gbps, that is plenty of throughput.
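
If anyone wants to repeat the measurement, this is roughly the command I mean; "nvmepool" is a placeholder for your own pool name.

    # Show per-device bandwidth every 5 seconds while a transfer is running
    zpool iostat -v nvmepool 5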

 

The only real issue is that when the pool is under load and transferring about as fast as it can, CPU usage jumps and can even max out the CPU. It will probably never be too much of an issue, though, unless I'm doing local-only tasks.

 

Hopefully I can add another P3 Plus drive before too long and test this again, though I don't expect to get much, if any, more throughput since I think I am starting to become CPU-bound.

 

 

  • 2 weeks later...

 

Hey guys,

 

I recently built my first Unraid server.

 

My server has six Samsung 990 Pro drives.

 

I configured mine as standalone XFS drives in the array with one parity drive.

 

For me, the ZFS RAM usage was a no-go. It is a nice thing when you just want to use the box as a share/file server, but for me the primary usage is VMs and Docker containers.

 

For TRIM, I'm still unsure whether it is useful or not.

 

I read somewhere that it is not as useful as you would expect.

 

But I don't know...

 


I think it is all about understanding the tradeoffs. The main problem with using NVMe drives in an Unraid array is the loss of TRIM. The performance impact of Unraid's parity method will likely be offset by the raw speed of those NVMe drives, though.

 

TRIM really only matters for writes. When a drive has not been trimmed and needs to overwrite a block, what could be a one-step process becomes a multi-step read-erase-rewrite cycle, which can hurt performance. How much that matters on a drive that can do over 7GB/s is something I don't know; it was significant when I got my first SATA SSDs, but I'm not sure now. That said, TRIM on flash storage is something of a best practice, as it will improve write performance.
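
If you do want to trim a ZFS pool by hand, or let ZFS handle it continuously, something like the sketch below should work; "nvmepool" is again a placeholder pool name.

    # Kick off a manual trim of the whole pool
    zpool trim nvmepool

    # Check trim progress per device
    zpool status -t nvmepool

    # Or let the pool trim freed blocks automatically as it goes
    zpool set autotrim=on nvmepool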

 

As far as memory usage goes, remember that by default Unraid assigns 1/8th of your memory to ZFS for its cache (the ARC). So while not insignificant, it is not hugely impactful either. The basic setup is fairly memory-lean, and ZFS memory needs shouldn't be too bad until you start enabling advanced features like dedup; that is when it really goes nuts. My ZFS pool gets 4GB of memory since the box has 32GB of RAM.
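
If you want to see or cap what the ARC is actually using, something along these lines works on a stock OpenZFS install; the 4GiB figure is just an example that happens to match my box.

    # Current ARC size ("size") and ceiling ("c_max"), in bytes
    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

    # Cap the ARC at 4 GiB (example value; needs root, lasts until reboot)
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max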

 

Where ZFS really shines is when you start using the advanced features like snapshots, compression, and the aggregated performance of multiple drives.

 

Unless almost all of your content is already highly compressed (video files, for example), you will likely see a good return from compression in reclaimed space. ZFS with raidz also lets the box run a kind of RAID that can give you better performance than a single drive; that is questionable with an all-NVMe setup but can be huge with spinning disks.

Lastly, snapshots are generally a much nicer way to protect data than creating backup zip files of individual sets of data. I used to use the backup plugin to save my Docker containers on a weekly basis; it took about 150GB each week and just put the data in zip files. Now that I have the Docker containers on ZFS, I snapshot them daily, and the snapshots are generally only a few hundred MB each instead. That is a good bit of savings, and I now keep a much wider set of snapshots as well.
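
For anyone who hasn't played with it, the day-to-day workflow is only a couple of commands. The dataset name "nvmepool/appdata" and the snapshot name below are placeholders for wherever your Docker data lives.

    # Enable lz4 compression on the pool; child datasets inherit it
    zfs set compression=lz4 nvmepool
    zfs get compressratio nvmepool

    # Take a date-stamped snapshot of the Docker appdata dataset
    zfs snapshot nvmepool/appdata@daily-2024-05-01

    # See how little space each snapshot actually holds on to
    zfs list -t snapshot -d 1 -o name,used,referenced nvmepool/appdata

    # Roll back if a container update goes sideways
    zfs rollback nvmepool/appdata@daily-2024-05-01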

 

There are other features that make ZFS good to use as well, like its resistance to bit rot.
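
The bit-rot protection comes from checksums that get verified on reads and during a scrub, which you can run or schedule yourself; "nvmepool" is once more a placeholder.

    # Verify every block against its checksum and repair from parity where possible
    zpool scrub nvmepool

    # Review the result, including any checksum errors that were found
    zpool status -v nvmepool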

 

Since all of your drives are the same size, they would be perfect for ZFS, but it is clearly not for everyone.

 

Edited by mavrrick
