How can you remove a drive from a Raid 0 Cache pool with encryption?



Ok, a bit of a 2 part question.

 

How can you remove a drive from a raid 0 cache pool using terminal commands?

What are the extra steps needed for an encrypted cache pool?

 

Right now, with a non-encrypted pool, you can remove drives one at a time by simply unassigning a drive and then starting the array, making sure the rebalance job finishes (see the FAQ for full details).

 

With an encrypted pool, though, this does not work, and trying to remove a drive will kill the pool.

 

The simplest option, if you have room, is to first convert to a raid1 pool, remove the drive, let the balance take care of rebuilding the raid1 data, and then convert back to a raid0 pool (rough commands below).
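For reference, a rough sketch of that round trip from the console, assuming the pool is mounted at the standard /mnt/cache path and the array stays started (these are plain btrfs balance commands, nothing Unraid-specific):

# convert the data to raid1 so every block gets a second copy (pool metadata is normally raid1 already)
btrfs balance start -dconvert=raid1 /mnt/cache

# remove the drive, then convert the data back to raid0 once the device is gone
btrfs balance start -dconvert=raid0 /mnt/cache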

 

This is time consuming, though, and a waste of effort if it could instead be done directly.

Link to comment

You can do this:

 

-With the array running, type on the console:

btrfs dev del /dev/mapper/sdX1 /mnt/cache

Replace X with the correct letter, and don't forget the 1 after it (a read-only check to confirm the device names is shown after these steps).

 

 

-Wait for the device to be deleted, i.e., until the command completes and you get the cursor back

-Device is now removed from the pool. You don't need to stop the array now, but at the next array stop you need to make Unraid forget the now-deleted member; for that:

 

-Stop the array

-Unassign all pool devices

-Start the array to make Unraid "forget" the pool config (note: if the docker and/or VM services were using that pool, it's best to disable those services before starting, or Unraid will recreate the images somewhere else, assuming they are using /mnt/user paths)

-Stop array (re-enable docker/VM services if disabled above)

-Re-assign all pool members except the removed device

-Start array

-Done
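If you want to double-check which /dev/mapper names belong to the pool (before the delete, or to confirm the device is gone afterwards), a read-only listing along these lines should show every current member, assuming the usual /mnt/cache mount point:

# list the devices currently backing the cache pool (safe to run at any time)
btrfs filesystem show /mnt/cache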

 

 

 

 

 

 

 

Link to comment

Note that you need to maintain the minimum number of devices for the profile in use, i.e., you can remove a device from a 3+ device raid0 pool, but you can't remove one from a 2 device raid0 pool (unless it's converted to the single profile first; see the sketch below).
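A rough sketch of that prior conversion, assuming the pool is mounted at /mnt/cache and its metadata is the usual raid1 (--force is typically needed because dropping metadata to a non-redundant profile reduces its integrity):

# convert both data and metadata to the single profile so the pool can shrink below two devices
btrfs balance start --force -dconvert=single -mconvert=single /mnt/cache

# then remove the device as described above
btrfs dev del /dev/mapper/sdX1 /mnt/cache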

 

You can also remove multiple devices with a single command (as long as the above rule is observed):

btrfs dev del /dev/mapper/sdX1 /dev/mapper/sdY1 /mnt/cache

 

But in practice this does the same as removing one device, then the other, as they are still removed one at a time, just one after the other.

 

 

Link to comment

Thanks for the tips! That is not too bad, although the note about not being able to go from 2 devices down to 1 device is a good thing to know.

 

Although didn't you say in the other thread that you took a 4 device raid 0 pool down to 1 device, 1 at a time? How did that work?

 

I suppose since the array is already mounted for this, the encryption is not an issue? Cool, I will add this to my list of commands to remember.

 

Also, couldn't you use the "new config" option in the Tools menu? Tell it to remember the array and parity drives, and then you could skip starting the array with docker/VMs disabled, etc.?

Link to comment

Semi-related, as it involves btrfs commands.

 

I have been reading up on btrfs raid 5/6. It seems that with the metadata in raid1c3 or raid1c4, raid 5/6 has proven quite stable and difficult to corrupt. The worst case I saw mentioned was the risk of losing data that is actively being written, but the raid1c3/c4 metadata (i.e., 3 or 4 copies of the metadata, for those reading along) would basically prevent the risk of nuking the whole file system.

 

This is a risk I would be willing to try out for my cache that doesn't generally have irreplaceable data on it. I have a UPS and don't generally copy important files when there is a risk of interruption.

 

I assume the built in raid5/6 setting just uses raid1 for metadata?

 

What would be the procedure for converting a raid0 pool to a raid5/6 pool with metadata raid1c3/c4?

 

Also how would you convert back to raid0 if it didn't work out or the performance penalty was too high?

Link to comment
25 minutes ago, TexasUnraid said:

How did that work?

Because when using the GUI, it automatically converts from raid0 to the single profile when only one device is left.

 

27 minutes ago, TexasUnraid said:

I suppose since the array is already mounted for this, the encryption is not an issue?

Correct.

26 minutes ago, TexasUnraid said:

Also, couldn't you use the "new config" option in the tools menu?

You can, it's another valid way of making it "forget" the previous pool config.

 

 

 

 

 

 

Link to comment
16 minutes ago, TexasUnraid said:

I assume the built in raid5/6 setting just uses raid1 for metadata?

 

Yes, I already asked LT to at least include raid1c3 because it's the best option for raid6 metadata, since it has the same redundancy level.

 

18 minutes ago, TexasUnraid said:

What would be the procedure for converting a raid0 pool to a raid5/6 pool with metadata raid1c3/c4?

raid1 metadata is fine for raid5, though you could use raid1c3 for increased reliability; raid1c4 is overkill for raid5 IMHO.

 

See here how to convert:

https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421

 

Note that you can convert just the data profile (dconvert=) or just the metadata (mconvert=) by itself, or both at the same time, as in the examples below.
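For example, assuming the same /mnt/cache mount point (the target profiles here are just illustrations), the two one-sided conversions would look like:

# convert only the metadata to raid1c3, leaving the data profile untouched
btrfs balance start -mconvert=raid1c3 /mnt/cache

# convert only the data to raid5, leaving the metadata profile untouched
btrfs balance start -dconvert=raid5 /mnt/cache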

 

20 minutes ago, TexasUnraid said:

Also how would you convert back to raid0 if it didn't work out or the performance penalty was too high?

Yes, you can always convert data and/or metadata to any profile, as long as the pool has the minimum required devices.

Link to comment

I agree, making the default raid1c3 would seem to make a lot of sense.

 

Ok, those commands are not that bad. The hardest part about getting into linux for me is simply not speaking the language.

 

I have learned to speak Windows pretty well over the years, but Linux is like trying to order a meal in French: there is a lot of pointing, grunting, and "gimme what that person is having" gestures going on, lol.

 

Raid1c3 may be overkill, but it uses up such a small amount of space that I really see no reason not to if it basically removes the possibility of corrupting the file system. The risk of losing data that is actively being written is something I can accept; that has always been a risk, and the effects are generally going to be pretty mild, with a few files needing to be pulled from backups.

 

Risk of corrupting the whole file system, on the other hand, is another ballgame. If a few GB of metadata can prevent that, sign me up, particularly since my drives are not exactly the latest and greatest. I actually had another drive I was going to use for a raid6 setup but had to use it for the docker data.

Link to comment

Ok, I was adding these commands to my reference list but realized that the convert command does not seem complete. I assume it needs the btrfs before it, but what is the syntax to tell it which file system to convert?

 

btrfs dev -mconvert=raid1c3 -dconvert=raid5 /mnt/cache

 

is the best I can assume given the information above, but it's not a command you should guess with. It seems the latest version of Unraid no longer allows you to enter manual commands in the GUI.

Link to comment
5 hours ago, TexasUnraid said:

Ok, I was adding these commands to my reference list but realized that the convert command does not seem complete.

The commands in the FAQ are meant to be used in the balance window on the cache GUI page; if you want to use the console it would be:

 

btrfs balance start -mconvert=raid1c3 -dconvert=raid5 /mnt/cache
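If it helps, the progress of a running balance can be checked from another console with a read-only query like this (same /mnt/cache mount point assumed):

btrfs balance status /mnt/cache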

 

Link to comment
38 minutes ago, johnnie.black said:

The commands in the FAQ are meant to be used in the balance window on the cache GUI page; if you want to use the console it would be:

 


btrfs balance start -mconvert=raid1c3 -dconvert=raid5 /mnt/cache

 

That's what I thought, but I can't find a place to type any form of input text in the GUI. It seems like they removed that option in the latest version?

 

Thanks, I will add that to my list of commands.

Link to comment

Ok, I tried to run the btrfs command:

btrfs balance start -mconvert=raid1c3 /mnt/cache

 

But I get this error:

 

ERROR: error during balancing '/mnt/cache': Invalid argument  

 

The syslog says:

 

BTRFS error (device dm-2): balance: invalid convert metadata profile single

 

Link to comment
