[SOLVED] unRAID v6.9.2 - Upgrading Cache Pool Drive



Greetings everyone,

 

I've been reading through the forum and came across the FAQ post on upgrading cache drives, which led me to the note saying the process still appears to be broken in v6.9.2.

 

 

Does anyone know if this is still a bug? Does it only impact users who have multiple pools, or does it affect single pools as well?

 

I'm looking to upgrade my two cache drives from 500GB to 1TB, one at a time, and just need a sanity check before I do, given this bug. I'd rather not use the 'move to array' method since I have a lot of data in my appdata folder and it would likely take half a day.

 

Thanks in advance!

 

 

Edited by OrneryTaurus
Link to comment
3 minutes ago, JorgeB said:

You can do it manually using the console; I can post the instructions if you're interested.

 

Hi JorgeB,

 

That would be fantastic! Do the console instructions mirror what v6.8.2 used to do?

 

IIRC in that version, it was simply:

 

- Stop the array

- Shutdown the system

- Replace one of the cache drives

- Power on the system

- Assign the new drive to the cache pool (replacing the old one)

- Start the array

- The drive would be formatted as BTRFS and it would automatically rebuild the cache pool raid

- Stopping the array would be greyed out until this process was complete

 

^ This method functioned just like replacing a drive in the array and having it rebuild, which is definitely what I want, haha.

 

Thanks so much!

Link to comment
2 hours ago, OrneryTaurus said:

Do the console instructions mirror what v6.8.2 used to do?

It mirrors in the sense that the end result will be the same.

 

This is for v6.9.x; it should also work with v6.10.x, but I didn't test it and don't like to assume.

 

You need to have both the old and new replacement devices connected at the same time. If you can have all 4 connected, you can do both replacements and then reset the cache config once; if not, do one replacement, reset the cache config, do the other, then reset the cache config again.

 

First you need to partition the new device; to do that, format it using the UD plugin (any filesystem will do). Then, with the array started, type the following in the console:

 

btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/cache

 

Replace X with the source (old) device and Y with the target (new) device, and note the 1 at the end of both. You can check replacement progress with:

 

btrfs replace status /mnt/cache

 

When done, and if you have enough SATA ports, you can repeat the procedure for the second device; if not, do the cache reset below and then start over for the other device.
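For example, a full run for one device might look like this (the sdc/sdf names below are placeholders only; confirm your device names on the Main page or with lsblk before running anything):

lsblk -o NAME,SIZE,MODEL            # identify the old and new devices
btrfs filesystem show /mnt/cache    # confirm which devices are currently in the pool
btrfs replace start -f /dev/sdc1 /dev/sdf1 /mnt/cache    # old device first, new device second
btrfs replace status /mnt/cache     # repeat until it reports finished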

 

Pool config reset: stop the array; if Docker/VM services are using the cache pool, disable them; unassign all cache devices; start the array to make Unraid "forget" the old cache config; stop the array; reassign the current cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device); re-enable Docker/VMs if needed; start the array.
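If you want to double-check the result from the console after the reset (optional):

btrfs filesystem show /mnt/cache     # should list only the new devices
btrfs filesystem usage /mnt/cache    # should report the full new capacity

If the extra space ever fails to show up, the filesystem can be grown manually with btrfs filesystem resize 1:max /mnt/cache (adjust the devid), although starting the array should normally take care of that.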

 

 

Link to comment
7 hours ago, JorgeB said:

It mirrors in the sense that the end result will be the same. [...]

 

Awesome, thank you for that. Let me write down the steps I will follow to make sure I understand the process laid out. I will be replacing two 500GB NVMe drives with two 1TB NVMe drives.

 

1 ) Stop the array

2 ) Shutdown the system

3 ) Add the two new NVMe drives

4 ) Power on the system

5 ) Start the array

6 ) Format the two new 1TB NVMe drives using Unassigned Devices

---- You mention any file system, assuming the btrfs replace command re-formats the drive.

7 ) From a command line, type: btrfs replace start -f /dev/NewNVMe1 /dev/OldNVMe1 /mnt/cache

---- This takes the new 1TB NVMe drive and replaces the target 500GB NVMe drive

8 ) Monitor the replacement status by running btrfs replace status /mnt/cache from the command line

9 ) Repeat steps 7 and 8 for the second drive, targeting the other 500GB NVMe drive

10 ) Stop the array

11 ) Disable Docker service

12 ) Disable VM service

13 ) Remove both cache drives from the cache pool

14 ) Start the array

15 ) Stop the array

16 ) Add each cache drive in the same position they were in based on cache drive replaced

17 ) Verify the "All existing data on this device will be OVERWRITTEN when array is Started" warning is NOT present

18 ) Start the array

 

Steps 16/17 are where I got a little confused reading the steps within the bug report, which prompted this forum post. My initial question in my head was "What's different about cache drives that allows you to completely 'forget' the pool and set it back up again?"

 

Looking at the complete steps, I'm assuming that the pool configuration is kept regardless of whether it is set in unRAID?

Edited by OrneryTaurus
Link to comment
39 minutes ago, OrneryTaurus said:

6 ) Format the two new 1TB NVMe drives using Unassigned Devices

---- You mention any file system, assuming the btrfs replace command re-formats the drive.

Correct, formatting with UD is just for the device(s) to be partitioned.
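If you want to confirm the partition exists before running the replace, lsblk against the new device (the name below is just an example) should show a p1 entry underneath it:

lsblk /dev/nvme3n1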

 

40 minutes ago, OrneryTaurus said:

7 ) From a command line, type: btrfs replace start -f /dev/NewNVMe1 /dev/OldNVMe1 /mnt/cache

The other way around: old device first, new one after. Also, with NVMe devices it's a little different from what I posted, since you need to add p1 for the partition; it will look like this:

 

btrfs replace start -f /dev/nvme0n1p1 /dev/nvme3n1p1 /mnt/cache

 

You just need to adjust the device numbers to match your devices.
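One way to double-check which nvmeX number belongs to which physical drive is to list the by-id links, which include the model and serial number (purely a convenience check):

ls -l /dev/disk/by-id/ | grep nvme

Each link points to the matching /dev/nvmeXn1 device, so you can match the serial numbers shown on the Main page to the names used in the replace command.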

 

44 minutes ago, OrneryTaurus said:

13 ) Remove both cache drives from the cache pool

I assume you mean unassign here; no need to physically remove them.

 

45 minutes ago, OrneryTaurus said:

16 ) Add each cache drive in the same position they were in based on cache drive replaced

Position is not important, you can add them in any order.

 

 

46 minutes ago, OrneryTaurus said:

My initial question in my head was "What's different about cache drives that allows you to completely 'forget' the pool and set it back up again?"

btrfs pool info is saved in the devices' metadata; after you start the array without devices in the pool and then add them back, Unraid will look for an existing pool and import it if one exists.
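You can see that metadata from the console if you're curious; the uuid/devid lines are what get re-imported (the values below are placeholders):

btrfs filesystem show /mnt/cache
# Label: none  uuid: <pool uuid>
#   devid 1 size 931.51GiB path /dev/nvme0n1p1
#   devid 2 size 931.51GiB path /dev/nvme1n1p1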

Link to comment
30 minutes ago, JorgeB said:

Correct, formatting with UD is just for the device(s) to be partitioned. [...]

 

Awesome. Revised the steps:

 

1 ) Stop the array

2 ) Shutdown the system

3 ) Add the two new NVMe drives

4 ) Power on the system

5 ) Start the array

6 ) Format the two new 1TB NVMe drives using Unassigned Devices

---- Allows for a partition to be created, used in step 7

7 ) From a command line, type: btrfs replace start -f /dev/nvme0n1p1 /dev/nvme3n1p1 /mnt/cache

---- Old drive to new drive, p1 references the partition used

8 ) Monitor the replacement status by running btrfs replace status /mnt/cache from the command line

9 ) Repeat steps 7 and 8 for the second drive, targeting the other 500GB NVMe drive

10 ) Stop the array

11 ) Disable Docker service

12 ) Disable VM service

13 ) Unassign both cache drives from the cache pool

14 ) Start the array

15 ) Stop the array

16 ) Add each cache drive back into the pool (position doesn't matter, preference only)

17 ) Verify the "All existing data on this device will be OVERWRITTEN when array is Started" warning is NOT present

18 ) Start the array

19 ) Stop the array

20 ) Re-enable Docker service

21 ) Re-enable VM service

22 ) Start the array

 

Optional

 

23 ) Stop the array

24 ) Shutdown the system

25 ) Remove the old cache drives

26 ) Power on the system

27 ) Start the array
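In case it helps while monitoring the replacements (the interval is arbitrary): watch will re-run the status command for you, and a final usage check confirms both devices show the new 1TB size once everything is done.

watch -n 60 btrfs replace status /mnt/cache
btrfs filesystem usage /mnt/cache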

 

Thanks so much @JorgeB I'll give this a go in the next few days. I appreciate your time!

Link to comment
  • OrneryTaurus changed the title to [SOLVED] unRAID v6.9.2 - Upgrading Cache Pool Drive
  • 5 months later...
On 11/10/2021 at 8:29 PM, OrneryTaurus said:

Awesome. Revised the steps: [...]

 

Did it work as expected? :) I'm in the same boat... I need to replace my cache disk with a larger one.

Link to comment
  • 3 months later...
  • 1 month later...
  • 1 month later...
On 11/10/2021 at 5:54 AM, JorgeB said:

It mirrors in the sense that the end result will be the same. [...]

 

Hi JorgeB - I'm about to upgrade my Cache drive (not pool) from 1TB to 2TB ... I'm on 6.9.2 ... is there a different way to do this if it's just a size upgrade? Should I create a pool, and then follow the process outlined above?

Link to comment
4 minutes ago, JorgeB said:

If you have enough SATA ports you can add the new device to the pool and then remove the other one; the advantage of doing this is that the pool will remain online. If not, use this procedure.

Thanks - will try that. 

 

I have another thread open about upgrades to the array (parity drive replacement and data drive replacement). Right after I solve that, I will try the cache pool.

Link to comment
  • 4 weeks later...

I need to upgrade my cache pool soon. It's using BTRFS RAID1; what's not clear to me is whether I can upgrade from 1TB to 2TB without running Mover.

Can I just stop the array, replace Cache1 (1TB) with the 2TB drive > start the array > then wait till finished? And then perform the same steps for Cache2 once Cache1 is finished?

Also, what I couldn't find out is whether the order matters. Can I upgrade Cache2 first instead of Cache1, or would I need to do them in order?

I'm sorry for asking so many questions here. Like many, I don't want to lose data :D

 

Link to comment
17 minutes ago, Kloudz said:

Can I just stop the array, replace Cache1 (1TB) with the 2TB drive > start the array > then wait till finished? And then perform the same steps for Cache2 once Cache1 is finished?

Yes, if they are in raid1.
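If you want to confirm the profile before starting, this shows it from the console; the Data line should say RAID1 if the pool is mirrored:

btrfs filesystem df /mnt/cache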

 

17 minutes ago, Kloudz said:

Also, what I couldn't find out is whether the order matters.

Nope.

Link to comment
7 minutes ago, JorgeB said:

Yes, if they are in raid1.

 

Nope.

 

Perfect, thank you.


Also, to be clear, I know it's better safe than sorry, but I'm asking whether I can perform the upgrade without running Mover first, i.e. without moving appdata back to the array.

I could just perform the upgrade without doing that, right?

Edited by Kloudz
Link to comment
