Some final questions (before buying hardware)



I am getting closer to my final decision, but I still have some questions left which I could not find answers to in the manual, so I hope they are not too stupid 😉:

 

CACHE (M.2 SSD):

  1. When I add 2x 2TB SSDs as cache, can these two drives be used together as either RAID 0 or RAID 1 (performance vs. redundancy)?
  2. If I add 3x 2TB SSDs as cache, can I make RAID 0+1 or 1+0?
  3. When I start a VM (which is stored on an HDD/array), is the complete VM then loaded into the cache and run from there at the full speed of the cache? And do other services (NAS, VMs and applications) use the same cache in parallel for their runtime and cached files?
  4. If 3 is a yes: the bigger and faster the cache, the better the performance? So 3x 2TB would be better than 2x 2TB, which in turn would be better than 1x 2TB?

 

ARRAYS (assuming SATA drives):

  1. Is a parity disk always needed? Or can I also add an array or a disk without connecting a parity disk, for example to copy data from that drive to the array?
  2. Is it possible to physically remove a disk from an array, put it into e.g. an external case, connect it directly to another PC/laptop and read/write data from there?
  3. Is it also possible to plug this "modified" disk back in when using parity disk(s)?
  4. If I use 1 or 2 parity disks, is what I describe above still possible?

 

VM:

  1. Is it possible to get Android VMs running?

 

ZFS:

  1. Would you recommend ZFS from the very beginning, or can I upgrade the array at a later stage?

 

thanks!

3 hours ago, pH-Wert said:

When I add 2x 2TB SSDs as cache, can these two drives be used together as either RAID 0 or RAID 1 (performance vs. redundancy)?

Yes, the default is raid1.

 

3 hours ago, pH-Wert said:

If I add 3x 2TB SSDs as cache, can I make RAID 0+1 or 1+0?

With btrfs you can use raid0 or raid1; with zfs, only a 3-way mirror or a stripe (raid0).
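To put rough numbers on the performance vs. redundancy trade-off, here is a small capacity sketch (plain arithmetic, not an Unraid command), assuming equal-size 2TB SSDs: raid0 stripes with no redundancy, btrfs raid1 keeps two copies of every block regardless of drive count, and a zfs mirror keeps a full copy on every drive.

```python
# Rough usable-capacity sketch for the cache-pool layouts discussed above.
# Assumes equal-size drives; filesystem overhead is ignored.

def usable_tb(drives_tb, layout):
    total = sum(drives_tb)
    if layout == "raid0":          # stripe: full capacity, no redundancy
        return total
    if layout == "btrfs-raid1":    # two copies of every block, any drive count
        return total / 2
    if layout == "zfs-mirror":     # every drive holds a full copy of the data
        return min(drives_tb)
    raise ValueError(f"unknown layout: {layout}")

for drives, layout in [([2, 2], "raid0"), ([2, 2], "btrfs-raid1"),
                       ([2, 2, 2], "raid0"), ([2, 2, 2], "btrfs-raid1"),
                       ([2, 2, 2], "zfs-mirror")]:
    print(f"{len(drives)}x2TB {layout}: ~{usable_tb(drives, layout):g} TB usable")
```

So 3x 2TB as btrfs raid1 gives roughly 3 TB usable and tolerates one drive failure, while a zfs 3-way mirror gives 2 TB but tolerates two.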

 

3 hours ago, pH-Wert said:

When I start a VM (which is stored on an HDD/array), is the complete VM then loaded into the cache

No, we recommend keeping the vdisks on a flash-based pool.

 

3 hours ago, pH-Wert said:

Is a parity disk always needed?

It's optional.

 

3 hours ago, pH-Wert said:

Is it possible to physically remove a disk from an array, put it into e.g. an external case, connect it directly to another PC/laptop and read/write data from there?

Yes, but the other machine must be able to mount the filesystem you are using, e.g. a Linux OS or a Windows reader for that filesystem.

 

3 hours ago, pH-Wert said:

Is it also possible to plug this "modified" disk back in when using parity disk(s)?

Yes, but if the array was used in the meantime the disk will need to be rebuilt; even if it wasn't, and assuming the disk was mounted read/write elsewhere, parity will be out of sync, so a parity check will be required.
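As an aside on why an out-of-band write desyncs parity: single parity is, conceptually, a bitwise XOR across the data disks, so changing any byte on a disk while it is outside the array leaves the stored parity stale. A toy illustration in Python (just the idea, not Unraid code):

```python
# Toy illustration of single (XOR) parity across three data "disks".
# Not Unraid code; it only shows why out-of-band writes desync parity.
from functools import reduce

def xor_parity(*disks):
    # XOR the corresponding bytes of all disks, column by column
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

disk1 = bytes([0x12, 0x34])
disk2 = bytes([0xAB, 0xCD])
disk3 = bytes([0x0F, 0xF0])
parity = xor_parity(disk1, disk2, disk3)

# Any single missing disk can be rebuilt from parity plus the remaining disks:
assert xor_parity(disk1, parity, disk3) == disk2

# But if disk2 is modified outside the array, the stored parity no longer
# matches, which is why a parity check/sync is needed after plugging it back.
modified_disk2 = bytes([0xAB, 0x00])
assert xor_parity(disk1, modified_disk2, disk3) != parity
```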

 

3 hours ago, pH-Wert said:

If I use 1 or 2 parity disks, is what I describe above still possible?

Yes, with the limitations mentioned above.

 

 

 

16 hours ago, JorgeB said:

No, we recommend keeping the vdisks on a flash-based pool.

OK, thanks, but that confuses me a bit:

Question 1:

If I do not have a flash-based array, can I then still run the VM on the cache?

 

Question 2:

Let's assume ...

  • I store 5 VMs of 2 TB each on my cheap, normal 20TB HDD.
  • I am not using them simultaneously (only one is in use at any given time).

Do I need to buy 5x 2TB SSDs so each vdisk sits on its own SSD, or can I load the 2 TB on demand onto a single 2TB SSD array?

 

Thanks!

1 hour ago, pH-Wert said:

If I do not have a flash-based array, can I then still run the VM on the cache?

I said flash-based pool, not array; we recommend keeping the VMs on a separate SSD-based pool, not on an HDD (or even SSD) based array.

 

1 hour ago, pH-Wert said:

Do I need to buy 5x 2TB SSDs so each vdisk sits on its own SSD, or can I load the 2 TB on demand onto a single 2TB SSD array?

Ideally all vdisks would fit in the pool; otherwise, for best performance, you'd need to manually move them back and forth.
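For anyone wondering what "manually moving them back and forth" could look like in practice, here is a minimal sketch. The paths are hypothetical placeholders for wherever the vdisks actually live, and a vdisk should only be moved while its VM is shut down.

```python
# Minimal sketch of shuttling one vdisk between the HDD array and the SSD pool.
# The paths below are assumptions for illustration; only move a vdisk while
# its VM is powered off.
import shutil
from pathlib import Path

array_copy = Path("/mnt/disk1/domains/vm1/vdisk1.img")  # slow HDD array location (assumed)
pool_copy = Path("/mnt/cache/domains/vm1/vdisk1.img")   # fast SSD pool location (assumed)

def promote_to_pool():
    """Move the vdisk onto the SSD pool before starting the VM."""
    pool_copy.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(array_copy), str(pool_copy))

def demote_to_array():
    """Move the vdisk back to the HDD array after shutting the VM down."""
    array_copy.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(pool_copy), str(array_copy))
```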

 

 


Thanks to you all.

Maybe now the penny drops: the cache is a pool, and since 6.9 we can create additional pools (for vdisks, Docker apps, and any other use cases?).

 

So does it make sense to buy 

  • 2x 1TB SSDs just for caching
  • 1x 2TB SSD for the rest of the pooling (vdisks and Docker)? (or maybe a bigger 1x 4TB)

OR

  • 2x 2TB SSDs (or 4TB) shared between cache, vdisks and the other pools?

I think option 1 makes the most sense, by physically separating caching from the rest of the pools?

On 7/12/2023 at 4:06 AM, pH-Wert said:

Let's assume ...

  • I store 5 VMs of 2 TB each on my cheap, normal 20TB HDD.

Why so large? Normally a VM would be much smaller, with most of the data it needs stored on other drives or pools so that all VMs can access it. Ideally each VM should be as lightweight and purpose-built as possible.
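To put numbers on that: with lightweight vdisks the whole set fits on one modest SSD pool, with the bulk data kept on the array. A quick hypothetical sizing sketch (the 60 GB per vdisk and 1 TB pool figures are assumptions, not recommendations):

```python
# Hypothetical sizing sketch: lightweight OS/app-only vdisks on one SSD pool,
# bulk data kept on array shares instead of inside the vdisks.
vdisk_gb = 60    # assumed size of an OS/app-only vdisk
vm_count = 5
pool_gb = 1000   # e.g. a single 1TB SSD pool

used_gb = vdisk_gb * vm_count
print(f"{vm_count} vdisks x {vdisk_gb} GB = {used_gb} GB "
      f"({used_gb / pool_gb:.0%} of a {pool_gb} GB pool)")
```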

