
Good plan? ZFS on Unraid virtualized workstation configuration


Dav3


Hi,

 

Can the community comment on the following?

 

So I've finally got unraid set up how I want it, to serve as my new virtualized Windows workstation platform.  Now I'd like to dial in the storage.  Is ZFS a good solution for a workstation scenario?  My plan is to use the following hardware:

 

8700K CPU (rock stable clocked @ 4.7GHz running stress tests in an unraid KVM, btw)

ASUS Z570-E Gaming motherboard

64GB RAM

Nvidia GTX-1080 GPU

Storage -->

Purchase an 8-port SATA III PCIe x4 controller card, probably this: https://www.sybausa.com/index.php?route=product/product&path=64_181_85&product_id=1006&filter=62 (here: https://www.amazon.com/dp/B07NFRXQHC/)

Re-purpose 8x Seagate ST4000D 'desktop' (slow SMR!) HDDs

Re-purpose 1x 1TB Samsung 970-EVO+ NVMe SSD that my old bare-metal system runs from

Re-purpose 1x 1TB Samsung 840-EVO SATA SSD

Continue using 1x Seagate ST4000MB 'performance' (non SMR) HDD (unraid xfs)

Re-purpose 1x 500GB Samsung 860 EVO SATA SSD

 

Plan is to:

1. Plug the new PCIe x4 SATA controller into the empty PCIe x16 slot at the bottom of the motherboard (thus allowing the GPU to remain in x16 mode).

2. Run the NVMe SSD in the x4 M.2 slot (which disables two SATA ports on the motherboard).

3. Plug the 840-EVO SSD into a motherboard SATA port.

4. The unraid array (xfs) lives on the ST4000MB HDD plugged into the motherboard; leave it there.

5. Plug the 860-EVO SSD into the motherboard and configure it as the unraid cache.

6. Plug the 8x ST4000D drives into the newly purchased Syba card (dual Marvell 9215) and configure all eight into a zfs zpool.

7. Configure the 1TB NVMe drive as the zpool's read cache (L2ARC), for performance.

8. Plug the 1TB 840-EVO into the motherboard and configure it as the zpool's write cache (SLOG), since SMR drives are so slow for writes. (Rough commands for steps 6-8 are sketched after the note below.)

Note:  I'm running my primary GPU-passthrough VM from another SATA SSD (not stubbed, just vfio by-disk device)
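
For concreteness, here's a rough sketch of what I think steps 6-8 look like from the command line once the zfs plugin is in place. The pool name, the raidz2 layout, and the device names are all just assumptions on my part, and I'm aware a SLOG only accelerates synchronous writes; it isn't a general write-back cache in front of the SMR disks.

# hypothetical device names; /dev/disk/by-id/ paths are safer in practice
zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi   # 8x ST4000D on the Syba card (step 6)
zpool add tank cache nvme0n1                               # L2ARC read cache on the 970 EVO+ (step 7)
zpool add tank log sdj                                     # SLOG on the 840 EVO (step 8), sync writes only
zpool status tank                                          # confirm the layout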

 

From what I understand, unraid doesn't support running its array on zfs, hence #4 & #5 above.  But I intend to move the 'domains' share to the zfs pool; hopefully the .img files & shares I'll be hitting will mostly be in-cache on the NVMe SSD.  (right?)
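
Once the pool is up I figure I can actually check that assumption instead of guessing, since zfs exposes its cache statistics. A rough sketch (pool name assumed, and the tool names vary a bit between zfs releases):

zfs create tank/domains      # dataset to hold the unraid 'domains' share
arcstat 5                    # live ARC / L2ARC hit-rate columns
zpool iostat -v tank 5       # per-device I/O; the cache device line shows L2ARC reads
arc_summary                  # overall ARC and L2ARC hit ratios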

 

The primary reason I'm seriously looking at zfs is that, from what I understand (wrongly?), the unraid caching method is by-file, not by-block, whereas zfs caches at the block level.  I expect that multiple large .img files won't get along well in the unraid cache but will in a zfs block cache.
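
Related point I'm aware of: the zfs block cache (ARC) lives in host RAM and will grow to take a big chunk of it by default, so with 64GB shared with VMs I'd probably cap it. A minimal sketch, assuming 16GiB is the cap I settle on (on zfs-on-Linux this is a module parameter):

cat /sys/module/zfs/parameters/zfs_arc_max                  # current ceiling in bytes (0 = default)
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max   # cap the ARC at 16 GiB until reboot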

 

My expectation is that this setup should give close to ideal performance (for the given hardware): typical reads should be served from the big, super-fast 1TB NVMe cache, the storage pool is large and fairly resilient (if slow, being SMR disks), and writes should be fast enough, buffered by the 1TB SATA SSD.  So no bottlenecks for typical (cached IO pattern) operations.  Right???
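
Rather than just trust that, I'd probably benchmark it once the pool is built, e.g. a quick fio pass (the mount point, sizes, and fio being available on unraid are all assumptions here):

mkdir -p /mnt/tank/bench
# read test: run it twice, the second pass shows what the ARC/L2ARC is buying
fio --name=readtest --directory=/mnt/tank/bench --rw=randread --bs=128k --size=8G \
    --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting
# write test: this is where the SMR disks and the SLOG behaviour really show up
fio --name=writetest --directory=/mnt/tank/bench --rw=write --bs=1M --size=8G \
    --ioengine=libaio --iodepth=16 --group_reporting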

 

My general use-case is a lot of heavy-duty CPU + GPU + IO work, mostly C++ compiles followed by heavy testing (I do 3D app development).

 

If the zfs storage proves to be performant, I'll consider moving my primary OS KVM from its dedicated virtio SSD into the pool.
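
If I do that, I'd probably give the vdisks their own dataset with VM-friendly properties first. A sketch with assumed names; recordsize=64K is just one commonly suggested starting point for .img vdisks, not gospel:

zfs create -o recordsize=64K -o compression=lz4 -o atime=off tank/vm
# copy the .img over, then repoint the VM's disk at the new path under the pool's mountpoint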

 

Finally, I plan to (intermittently) back up everything to a USB-3 storage enclosure with a bunch of rando-drives running either JBOD or ideally another zfs array.
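
If the enclosure ends up as a second zfs pool, snapshots plus send/receive seems like the natural way to do those intermittent backups. A minimal sketch, with pool and snapshot names assumed:

zfs snapshot -r tank@backup1
zfs send -R tank@backup1 | zfs receive -F backup/tank                    # first full copy
zfs snapshot -r tank@backup2
zfs send -R -I tank@backup1 tank@backup2 | zfs receive -F backup/tank    # later runs send only the changes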

 

This looks ideal to me since I only need to buy a new 8-port SATA adapter and reconfigure a bunch of underutilized hardware I have lying around.

 

From there, the plan is to upgrade my primary workstation <--> server network to 10GbE, use the zpool to host a bunch of work files while I give my Windows Server 2012R2 box the unraid treatment, then move the files back.  Then probably dedicate some chunk of the zpools on both workstation & server to an IPFS node living in a docker.
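
For the IPFS piece, I'm assuming the official kubo docker image pointed at a dataset on the pool would do the job, something like this (host paths are placeholders):

docker run -d --name ipfs \
  -v /mnt/tank/ipfs/data:/data/ipfs \
  -v /mnt/tank/ipfs/staging:/export \
  -p 4001:4001 -p 127.0.0.1:5001:5001 -p 127.0.0.1:8080:8080 \
  ipfs/kubo:latest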

 

Comments, feedback, suggestions welcome.  Just not the crickets, I hate the crickets...

 

@steini84?

@Squid?

@SpaceInvaderOne?

 
