How to get max throughput with my components



Hi All

I am stuck with my idea of using a ZFS pool for max throughput. I have learned that I cannot easily extend a ZFS pool in single-disk increments.

 

I would appreciate your guidance now on how I should set up my NAS.

The goal should be to get the max read and write speeds with these components:

1 x ASRock X570M Pro4 with 48 GB RAM

2 x Kingston 500 GB NVMe SSDs

2 x WD Blue 500 GB SSDs

4 x Seagate Exos 7E10 8 TB HDDs

 

The reason I am aiming for max throughput is that we are moving into a house with 10 GbE infrastructure, so the internal network can handle 10 GbE, and the ISP can deliver that speed as well. That gives me the perfect base to set up a NAS with good throughput, so my goal is to get the maximum out of these components.

 

The initial plan was (see the rough sketch after this list):
1 pool with 2 NVMe disks with one spare (used for VM placement)

1 pool with 2 SSDs with one spare (used for application data, mainly Docker things)

1 pool with 4 HDDs with one spare (as the main data pool)
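For reference, this is roughly how that layout could be expressed as ZFS pools. It is only a sketch: the pool names and /dev paths are placeholders I made up, "one spare" is interpreted loosely as one disk of redundancy (mirror/RAIDZ1), and on Unraid you would normally create pools through the GUI rather than running zpool by hand.

```python
#!/usr/bin/env python3
# Rough sketch of the planned layout expressed as ZFS pool-creation commands.
# Pool names and /dev paths are placeholders -- check your real device names,
# and note that on Unraid you would normally create pools via the GUI instead.

pools = {
    # two NVMe drives mirrored, for the VM pool
    "vmpool":   ["mirror", "/dev/nvme0n1", "/dev/nvme1n1"],
    # two SATA SSDs mirrored, for application/Docker data
    "apppool":  ["mirror", "/dev/sda", "/dev/sdb"],
    # four HDDs in RAIDZ1: one disk of redundancy, three disks of capacity
    "datapool": ["raidz1", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"],
}

for name, layout in pools.items():
    cmd = ["zpool", "create", name] + layout
    print(" ".join(cmd))  # print only; run by hand once device names are verified
```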

 

So I hoped to reach around 240-300 MB/s with the HDD pool because of parallel reads and writes across the disks.
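As a sanity check on that estimate, here is a quick back-of-the-envelope calculation. The ~200 MB/s per-disk sequential figure is just my assumption, not a measurement of the Exos drives, and real pools rarely hit the theoretical numbers.

```python
# Back-of-the-envelope sequential throughput for the 4 HDDs.
# Assumption: ~200 MB/s sustained per disk (not measured on these drives).
per_disk = 200  # MB/s

# Unraid array (non-ZFS): each file lives on one disk, so roughly single-disk speed.
unraid_array = 1 * per_disk

# ZFS RAIDZ1 over 4 disks: data is striped over 3 data disks + 1 parity disk.
raidz1 = 3 * per_disk

# ZFS striped mirrors (2 x 2): writes hit 2 vdevs, reads can use all 4 disks.
mirror_write = 2 * per_disk
mirror_read = 4 * per_disk

print(f"Unraid array:     ~{unraid_array} MB/s")
print(f"RAIDZ1 (4 disks): ~{raidz1} MB/s")
print(f"Striped mirrors:  ~{mirror_write} MB/s write, ~{mirror_read} MB/s read")
```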

 

Thinking about a new configuration, something like this might be possible:

1 pool with 2 NVMe disks with one spare (used for VM placement and application data)

1 pool with 2 SSDs with no spare (used as cache)

1 pool with 4 HDDs with one spare (as the main data pool)

So all writes will go first to the SSDs, which will be fast because they are configured as a cache for the data pool, and later on the data will be moved to the HDDs. But I think reads will be a "problem", because the data is kept on individual disks rather than spread over all of them. Or am I wrong here?
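If it helps, once the pools exist a crude sequential test like the one below gives a ballpark read/write figure to answer that question. The path is a placeholder, and a proper benchmark tool would be more accurate.

```python
#!/usr/bin/env python3
# Quick-and-dirty sequential read/write test for a pool mount point.
# TEST_PATH is a placeholder -- point it at the pool you want to measure.
# Note: with 48 GB RAM the read pass may be served from the page cache,
# so use a file larger than RAM (or drop caches) for a realistic number.
import os
import time

TEST_PATH = "/mnt/datapool/speedtest.bin"  # placeholder path
SIZE_GB = 8
CHUNK = 1024 * 1024  # 1 MiB
chunk_data = os.urandom(CHUNK)

# --- write test ---
start = time.time()
with open(TEST_PATH, "wb") as f:
    for _ in range(SIZE_GB * 1024):
        f.write(chunk_data)
    f.flush()
    os.fsync(f.fileno())
write_mbs = SIZE_GB * 1024 / (time.time() - start)

# --- read test ---
start = time.time()
with open(TEST_PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_mbs = SIZE_GB * 1024 / (time.time() - start)

os.remove(TEST_PATH)
print(f"write: ~{write_mbs:.0f} MB/s, read: ~{read_mbs:.0f} MB/s")
```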

 

Which configuration would you recommend in my case?

3 hours ago, quei said:

So all writes will go first to the SSDs, which will be fast because they are configured as a cache for the data pool, and later on the data will be moved to the HDDs. But I think reads will be a "problem", because the data is kept on individual disks rather than spread over all of them. Or am I wrong here?

I assume you mean adding those disks to the array for the HDD "pool"? If yes, reads/writes to the disks will be limited to a single disk's speed. You could have those four disks as a separate ZFS pool, but then you will have the expansion issue, and currently the mover does not support moving data from pool to pool, so it would need to be done manually or with a script.
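In case it helps, here is a minimal sketch of what such a pool-to-pool move script could look like (Python wrapping rsync; the source and destination paths are placeholders for your actual mount points):

```python
#!/usr/bin/env python3
# Minimal sketch of a pool-to-pool "mover" using rsync.
# The source/destination paths below are placeholders -- adjust them
# to your actual pool mount points before using this.
import subprocess
import sys

SRC = "/mnt/cache/data/"     # placeholder: source pool/share
DST = "/mnt/datapool/data/"  # placeholder: destination pool/share

def move_pool_data(src: str, dst: str) -> int:
    """Copy everything from src to dst, then delete the copied source files."""
    cmd = [
        "rsync",
        "-a",                     # archive mode: keep permissions, times, etc.
        "--remove-source-files",  # delete files from src once they are copied
        "--progress",
        src,
        dst,
    ]
    # Note: rsync leaves the (now empty) source directories behind;
    # remove them separately if you want a clean source tree.
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(move_pool_data(SRC, DST))
```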

