
confused on array vs pool


triten


So I just finished building my new server. It has 12 hot-swap drive bays.

 

I will only be using 6 to start, with 12TB drives; I'm waiting for 2 more to come in before I can completely get it up and running.

 

I was planning on using ZFS in a RAIDZ2 (RAID 6) format. That was going to be my 1st array, and then down the road I'd use the other 6 bays to create another Z2 array when I needed more storage.

 

But I just read that you can only have 1 main array? I really don't want to buy 6 more drives, as I don't need that space at the moment.

 

Would a 6-drive ZFS RAIDZ2 pool act the same way as the main array? What would be a better course of action to start my main array with future expandability? I am looking at a RAID 6 type setup where I can lose 2 drives without losing data.

 

Also, a second question.

 

I have 2 NVMe drives for cache: 1 x 1TB will be for the main array, and a 2nd 500GB.

 

I want the 500GB to be for my Dockers / Plex metadata. I have 2x250GB SSDs in the case that I would like the 500GB to back its data up to, and I am not sure how to set that up.

 

1 hour ago, JorgeB said:

also note that for now at least, one data array device must be assigned, but an old flash drive will fulfill that requirement.

 

So any size/type of drive can be used for this? SSD or mechanical?

 

What purpose would this drive serve? Just a filler doing nothing? Then the actual ZFS pool would store all my media and data?

4 minutes ago, triten said:

 

So any size/type of drive can be used for this? SSD or mechanical?

 

What purpose would this drive serve? Just a filler doing nothing? Then the actual ZFS pool would store all my media and data?

Even a flash drive, if you don't actually plan to use it. This is just a legacy requirement to allow you to start the array and mount the array and pools. There are plans to remove this requirement in future versions, and so no longer have a "main array" with Unraid's parity implementation.

47 minutes ago, triten said:

Also, with using ZFS pools, can I still have an NVMe drive as the real RAID's cache drive? So the mover would move from the cache pool to the ZFS array pool?

Not at the moment with the current 6.12.x releases, although typically pool performance is sufficient without needing to be fronted by a cache drive.

1 hour ago, trurl said:

Just write directly to the pool, no cache needed. The concept of "cache" in Unraid is just for temporarily getting fast writes of data that will later be moved to the slower parity array.

 

The settings for the share would be - Primary: pool of your choice; Secondary: none

 

OK, that I am understanding, but if I want the effect of the fast write speeds, can I just add the single NVMe drive to the array as a standalone, then write to the array, with secondary being the large ZFS pool? Then set it to array -> cache (pool).

 

Would that be a way around the inability to do pool -> pool?


SSDs in the array cannot be trimmed. It shouldn't matter if there is no parity, but it's still not allowed. It's not clear there would be much speed advantage to doing that anyway, unless you had a really fast network and a really slow ZFS pool.

 

Moving between pools might be a future feature, but I'm not sure.

 

User Scripts plugin will let you create and schedule scripts.
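For the earlier question about backing the 500GB appdata pool up to the 2x250GB SSDs, a User Scripts entry can be little more than an rsync call. A minimal sketch; the mount points (`cache_500g`, `ssd_backup`) are assumptions, so adjust them to your actual pool names under `/mnt`:

```shell
#!/bin/bash
# Hypothetical mount points - replace with your actual pool names under /mnt
SRC="/mnt/cache_500g/appdata/"
DST="/mnt/ssd_backup/appdata/"

if [ -d "$SRC" ]; then
    # -a preserves permissions and timestamps; --delete mirrors removals
    rsync -a --delete "$SRC" "$DST"
    RESULT="backup ran"
else
    RESULT="source pool not mounted, skipping"
fi
echo "$RESULT"
```

Scheduled daily or weekly in User Scripts, ideally while the Docker service is stopped so container data is consistent.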

35 minutes ago, triten said:

OK, that I am understanding, but if I want the effect of the fast write speeds, can I just add the single NVMe drive to the array as a standalone, then write to the array, with secondary being the large ZFS pool? Then set it to array -> cache (pool).

In the current release a pool cannot be secondary storage. That is meant to be coming in a future release, but what the ETA is I have no idea.


OK, my understanding has greatly improved, thank you.

 

One more question: how does parity or dual parity work in the "standard" array part of Unraid? I have 6 x 12TB drives; how would that work if I plugged them all into the array section, and how much storage would that give me? Is it similar to RAID 5, or RAID 6 if I used dual parity? That's the last part that I am not completely sure about.

 

 

1 minute ago, triten said:

OK, my understanding has greatly improved, thank you.

 

One more question: how does parity or dual parity work in the "standard" array part of Unraid? I have 6 x 12TB drives; how would that work if I plugged them all into the array section, and how much storage would that give me? Is it similar to RAID 5, or RAID 6 if I used dual parity? That's the last part that I am not completely sure about.

 

 

Add up the size of all the drives not being used as parity drives to get the storage space that is available.
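For the 6 x 12TB example, that works out as follows (a quick sketch of the arithmetic):

```shell
# 6 x 12TB drives in the Unraid array, 2 of them assigned as (dual) parity.
# Parity drives contribute no usable capacity; data drives are summed as-is.
TOTAL_DRIVES=6
DRIVE_TB=12
PARITY_DRIVES=2

USABLE_TB=$(( (TOTAL_DRIVES - PARITY_DRIVES) * DRIVE_TB ))
echo "Usable: ${USABLE_TB}TB"   # 4 data drives x 12TB = 48TB
```

With single parity instead, 5 data drives would give 60TB usable.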

5 minutes ago, itimpi said:

Add up the size of all the drives not being used as parity drives to get the storage space that is available.

 

Right, but is it creating drive redundancy with the parity drives? So if 1 drive fails, no data is lost? And with 2 parity drives, you can lose 2 drives before data is lost?

1 minute ago, triten said:

 

Right, but is it creating drive redundancy with the parity drives? So if 1 drive fails, no data is lost? And with 2 parity drives, you can lose 2 drives before data is lost?

The number of parity drives is the number of drives that can fail before any data is lost. If more drives than that fail, then the data on the failed drives is lost, but that on any other drives remains intact.

3 minutes ago, itimpi said:

The number of parity drives is the number of drives that can fail before any data is lost. If more drives than that fail, then the data on the failed drives is lost, but that on any other drives remains intact.

 

Oh, OK, so it does create a "RAID 5/6" type of situation. Then maybe I will just use the standard array system if that is the case.


Each data disk in the Unraid parity array is an independent filesystem that can be read all by itself on any Linux. There is no striping in the parity array.

 

File reads are at the speed of the single disk containing the file. File writes are somewhat slower than single-disk speed, since parity is real-time and must be calculated and written.

 

https://docs.unraid.net/unraid-os/manual/what-is-unraid/#parity-protected-array

 

https://docs.unraid.net/unraid-os/manual/storage-management/#array-write-modes
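Unraid's single parity is conceptually an XOR across the data disks at each block position, which is why any one failed disk can be rebuilt from parity plus the survivors. A toy sketch, with small integers standing in for disk blocks:

```shell
# Toy single-parity demo: three "blocks", one per data disk
D1=5; D2=9; D3=7

# The parity block is the XOR of all data blocks at the same position
P=$(( D1 ^ D2 ^ D3 ))

# If disk 2 fails, its block is recovered by XORing parity with the survivors
REBUILT_D2=$(( P ^ D1 ^ D3 ))
echo "Rebuilt: $REBUILT_D2"   # 9, matching the lost D2
```

Dual parity's second syndrome uses different math (not a plain XOR), which is what allows any two failed disks to be recovered.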

