
ZFS RAIDZ QUESTION - MIRROR SET OF 2 STRIPES


Solved by JorgeB

Recommended Posts

I am currently running two PCIe 4.0 NVMe drives in ZFS raid0 for that sweet, sweet performance. However, I am worried about the lack of error correction/drive-failure protection. If I were to buy two more NVMe drives, would it be possible to run the two new drives raid0/striped and then mirror both sets for drive protection?

 

Please note that I am not asking whether you can stripe two sets of mirrors; I know you can do that, and you can easily add mirror groups within the pool. I'm asking whether it is possible to MIRROR two sets of STRIPED drives.

 

The former gives you 2x read speeds but not the benefit of 2x write speeds, while the latter gives you both 2x read and 2x write speeds (theoretically, of course).
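To illustrate what I mean (hypothetical pool and device names, not my actual setup), the former is what the zpool create syntax expresses directly; as far as I can tell there is no vdev keyword for the latter:

# stripe of mirrors: two mirror vdevs, writes striped across them
zpool create tank mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1

# mirror of two stripes: there is no "stripe" vdev type,
# so I don't see a way to express this layout at creation time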

 

Thanks.

1 hour ago, david279 said:

Sounds like raid10. I know you could do it through the command line by just adding a mirror vdev to your existing raid0. There's also raid01, which is almost the same thing... I don't know if the Unraid GUI even supports putting that configuration together right now.

I know that if you already have a mirror vdev, the Unraid GUI will currently let you add an identical mirror vdev striped with it, increasing your pool capacity (I assume this is the raid01 you mention).

 

I'm just not sure if it will let you do the inverse: add an identical group of striped drives in a mirror configuration (i.e. raid10). I haven't been able to find anyone discussing such a configuration, and I'd like to know that it's possible before I spend the money on the drives.

 

I was hoping to avoid the command line because I'm a noob, but I suppose that can be my last resort.

13 hours ago, sunbear said:

Please note that I am not asking whether you can stripe two sets of mirrors; I know you can do that, and you can easily add mirror groups within the pool. I'm asking whether it is possible to MIRROR two sets of STRIPED drives.

 

The former gives you 2x read speeds but not the benefit of 2x write speeds, while the latter gives you both 2x read and 2x write speeds (theoretically, of course).

With ZFS you can only have striped mirrors. That will give you up to 2x write speed and up to 4x read speed, since reads are striped across the mirrors as well.

 

The GUI doesn't currently support adding mirrors to an already striped pool, but you can use zpool attach in the CLI to add a mirror to each device, turning the pool into a pair of striped mirrors. All 4 devices should be identical for best results. I can post more detailed instructions if you're interested.
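Just to illustrate, the end result looks roughly like this in zpool status (pool and device names here are only placeholders):

  pool: cache
 state: ONLINE
config:

    NAME           STATE     READ WRITE CKSUM
    cache          ONLINE       0     0     0
      mirror-0     ONLINE       0     0     0
        nvme0n1p1  ONLINE       0     0     0
        nvme2n1p1  ONLINE       0     0     0
      mirror-1     ONLINE       0     0     0
        nvme1n1p1  ONLINE       0     0     0
        nvme3n1p1  ONLINE       0     0     0

Writes are striped across mirror-0 and mirror-1, and reads can be served by any of the four devices, which is where the up-to-2x write and up-to-4x read figures come from.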

  • Thanks 1
13 hours ago, JorgeB said:

With ZFS you can only have striped mirrors. That will give you up to 2x write speed and up to 4x read speed, since reads are striped across the mirrors as well.

 

The GUI doesn't currently support adding mirrors to an already striped pool, but you can use zpool attach in the CLI to add a mirror to each device, turning the pool into a pair of striped mirrors. All 4 devices should be identical for best results. I can post more detailed instructions if you're interested.

Yes, I would be very interested! Thanks.

 

I'm wondering what the GUI will show for a pool that I modify in this way? I assume capacity/usage calculations will still work correctly with the additional mirror (I guess the capacity would stay the same).

  • Solution
14 hours ago, sunbear said:

I'm wondering what the GUI will show for a pool that I modify in this way?

In the end everything will show correctly; the attach just has to be done manually, and then you need to re-import the pool.

 

The initial pool is a two-device stripe, like this:

 

[screenshot: the initial pool, two devices striped]

 

The first step is to partition the new devices. The easiest way to do that is to use the UD plugin and format them; any filesystem will do, I usually use xfs since it's faster. Once that's done, and with the array started, attach one of the new devices to one of the existing ones to create a mirror by typing (since you'll be using NVMe devices):

 

zpool attach -f pool_name /dev/nvmeXn1p1 /dev/nvmeYn1p1

 

Double-check that the device names are correct, and replace X with the id of the existing device and Y with the id of the new device. After you run that, a resilver will start to mirror the source device; when it's done the pool will look like this:

 

[screenshot: the pool after the first attach, one mirror plus one remaining single device]
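If you want to keep an eye on the resilver while it runs, you can check it from the CLI (pool_name being whatever your pool is called):

zpool status pool_name

While it's running the output shows a "resilver in progress" line with an estimate of the time remaining; once it's finished that changes to a "resilvered" summary.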

 

Now, still with the array running, just attach the other new device to the remaining single device, in my case /dev/sdc1; the command takes the same form as the first attach (sketched below, after the screenshot). After that resilver you'll have a pool made of a couple of striped mirrors:

 

[screenshot: the final pool, two striped mirrors]
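For example, if the remaining original device were /dev/nvme1n1 and the second new device /dev/nvme3n1 (placeholder names, check yours carefully), the second attach would be:

zpool attach -f pool_name /dev/nvme1n1p1 /dev/nvme3n1p1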

 

Now the final step is to re-import the new pool. To do that:

 

- stop the array
- unassign both original pool devices; if Docker or VM services are using the pool, it's best to also disable them
- start the array to reset the pool
- stop the array
- change the number of pool slots first if needed, then assign all 4 pool devices; device order is not important, any order should work for this type of pool
- don't change the pool filesystem or topology, leave it in "auto"
- start the array and the pool should be imported with the new config (re-enable Docker/VM services if disabled); see the quick check below
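Once the array is back up, this is one way to confirm the layout and capacity from the CLI (pool_name again being a placeholder for your pool's name):

zpool status pool_name   # should show two mirror vdevs with two devices each
zpool list pool_name     # SIZE stays at two drives' worth, the new devices only add redundancy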

 

Any doubts or issues please let me know.

 

 

 

 

 

  • Like 1
On 7/12/2023 at 5:36 AM, JorgeB said:

In the end everything will show correctly; the attach just has to be done manually, and then you need to re-import the pool. [...]

You're the best!

 

Thank you!

  • Like 1