OrneryTaurus Posted November 10, 2021

Greetings everyone, I've been reading through the forum and noticed the FAQ post regarding upgrading cache drives, which led me to the note saying this still looks to be broken in v6.9.2. Does anyone know if this is still a bug? Does it only impact users who have multiple pools, or does it impact single pools as well? I'm looking to upgrade my two cache drives from 500GB to 1TB drives, one at a time, and just need some sanity checking for when I'm ready to do so given this bug. I'd rather not use the 'move to array' method, as I have a lot of data inside my appdata folder and it would likely take half a day. Thanks in advance!
JorgeB Posted November 10, 2021

1 hour ago, OrneryTaurus said: Does anyone know if this is still a bug?

It is.

1 hour ago, OrneryTaurus said: Does it impact users who only have multiple pools, or does it impact single pools as well?

Single as well. You can do it manually using the console; I can post the instructions if interested.
OrneryTaurus Posted November 10, 2021 (Author)

3 minutes ago, JorgeB said: You can do it manually using the console, I can post the instructions if interested.

Hi JorgeB, that would be fantastic! Do the console instructions mirror what v6.8.2 used to do? IIRC in that version, it was simply:
- Stop the array
- Shut down the system
- Replace one of the cache drives
- Power on the system
- Replace the cache drive in the pool
- Start the array
- The drive would be formatted as BTRFS and the cache pool raid would rebuild automatically
- Stopping the array would be greyed out until this process completed

This method functioned just like replacing a drive in the array and having it rebuild, which is definitely what I want. Thanks so much!
JorgeB Posted November 10, 2021

2 hours ago, OrneryTaurus said: Does the console instructions mirror what v6.8.2 used to do?

It mirrors in the sense that the end result will be the same. This is for v6.9.x; it should also work with v6.10.x, but I didn't test it and don't like to assume.

You need to have both the old and the new replacement devices connected at the same time. If you can have all 4 connected, you can do both replacements and then reset the cache config once; if not, do one replacement, reset the cache config, do the other, then reset the cache config again.

First you need to partition the new device. To do that, format it using the UD plugin (any filesystem will do), then, with the array started, type in the console:

btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/cache

Replace X with the source and Y with the target, and note the 1 at the end of both. You can check replacement progress with:

btrfs replace status /mnt/cache

When done, and if you have enough SATA ports, you can repeat the procedure for the second device; if not, do the cache reset below and then start over for the other device.

Pool config reset: stop the array; if Docker/VM services are using the cache pool, disable them; unassign all cache devices; start the array to make Unraid "forget" the old cache config; stop the array; reassign the current cache devices (there must not be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device); re-enable Docker/VMs if needed; start the array.
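Condensed into commands, the replacement part of the procedure above looks like this. This is a dry-run sketch only: sdX1 and sdY1 are placeholder names you must replace with your real source and target partitions, and the commands are printed rather than executed.

```shell
# Dry-run sketch of the manual cache-device replacement.
# sdX1 = old (source) partition, sdY1 = new (target) partition --
# placeholders, find the real names on your system first.
OLD=/dev/sdX1   # note the trailing "1": btrfs replace works on partitions
NEW=/dev/sdY1   # new device, partitioned beforehand via the UD plugin

START_CMD="btrfs replace start -f $OLD $NEW /mnt/cache"
STATUS_CMD="btrfs replace status /mnt/cache"

# Printed instead of executed, since this is only a sketch:
echo "$START_CMD"
echo "$STATUS_CMD"
```

The pool stays mounted and usable while the replace runs, which is why this beats the mover-based method for large appdata folders.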
OrneryTaurus Posted November 10, 2021 (Author)

7 hours ago, JorgeB said: (instructions above)

Awesome, thank you for that. Let me write down the steps I will follow to make sure I understand the process. I will be replacing 2 500GB NVMe drives with 2 1TB NVMe drives.

1) Stop the array
2) Shut down the system
3) Add the two new NVMe drives
4) Power on the system
5) Start the array
6) Format the two new 1TB NVMe drives using Unassigned Devices ---- you mention any filesystem, so I assume the btrfs replace command re-formats the drive
7) From a command line, type: btrfs replace start -f /dev/NewNVMe1 /dev/OldNVMe1 /mnt/cache ---- this takes the new 1TB NVMe drive and replaces the target 500GB NVMe drive
8) Monitor the replacement status by running btrfs replace status /mnt/cache from the command line
9) Repeat steps 7 and 8 for the second drive, targeting the other 500GB NVMe drive
10) Stop the array
11) Disable the Docker service
12) Disable the VM service
13) Remove both cache drives from the cache pool
14) Start the array
15) Stop the array
16) Add each cache drive in the same position it was in, based on the cache drive replaced
17) Verify the "All existing data on this device will be OVERWRITTEN when array is Started" warning is NOT present
18) Start the array

Step 16/17 is where I got a little confused reading the steps in the bug report, which prompted this forum post. My initial question was: "What's different about cache drives that allows you to completely 'forget' the pool and set it back up again?" Looking at the complete steps, I'm assuming the pool configuration is kept on the drives regardless of whether it is set in Unraid?
JorgeB Posted November 10, 2021

39 minutes ago, OrneryTaurus said: 6) Format the two new 1TB NVMe drives using Unassigned Devices ---- assuming the btrfs replace command re-formats the drive

Correct, formatting with UD is just so the device(s) get partitioned.

40 minutes ago, OrneryTaurus said: 7) From a command line, type: btrfs replace start -f /dev/NewNVMe1 /dev/OldNVMe1 /mnt/cache

The other way around: old device first, new one after. Also, with NVMe devices it's a little different from what I posted, since you need to add p1 for the partition. It will look like this:

btrfs replace start -f /dev/nvme0n1p1 /dev/nvme3n1p1 /mnt/cache

Just adjust the device numbers to match your system.

44 minutes ago, OrneryTaurus said: 13) Remove both cache drives from the cache pool

I assume you mean unassign here; no need to physically remove them.

45 minutes ago, OrneryTaurus said: 16) Add each cache drive in the same position they were in

Position is not important; you can add them in any order.

46 minutes ago, OrneryTaurus said: My initial question was "What's different about cache drives that allows you to completely 'forget' the pool and set it back up again?"

btrfs pool info is saved in the devices' metadata; after you start the array without devices in the pool and add them later, Unraid will look for an existing pool and import it if one exists.
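The sdY1 vs nvme3n1p1 difference follows a general Linux naming convention: when the disk name itself ends in a digit (as NVMe names do), the kernel inserts a "p" before the partition number. A tiny illustrative sketch of that rule, for working out the right names before running btrfs replace (check against `lsblk` output on your own system):

```shell
# Build the first-partition name from a disk name, following the
# Linux convention: "sda" -> "sda1", but names ending in a digit
# get a "p" separator, e.g. "nvme0n1" -> "nvme0n1p1".
first_partition() {
    disk=$1
    case "$disk" in
        *[0-9]) echo "${disk}p1" ;;
        *)      echo "${disk}1"  ;;
    esac
}

first_partition sda       # prints sda1
first_partition nvme0n1   # prints nvme0n1p1
```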
OrneryTaurus Posted November 10, 2021 (Author)

30 minutes ago, JorgeB said: (answers above)

Awesome. Revised steps:

1) Stop the array
2) Shut down the system
3) Add the two new NVMe drives
4) Power on the system
5) Start the array
6) Format the two new 1TB NVMe drives using Unassigned Devices ---- allows a partition to be created, used in step 7
7) From a command line, type: btrfs replace start -f /dev/nvme0n1p1 /dev/nvme3n1p1 /mnt/cache ---- old drive to new drive; p1 references the partition used
8) Monitor the replacement status by running btrfs replace status /mnt/cache from the command line
9) Repeat steps 7 and 8 for the second drive, targeting the other 500GB NVMe drive
10) Stop the array
11) Disable the Docker service
12) Disable the VM service
13) Unassign both cache drives from the cache pool
14) Start the array
15) Stop the array
16) Add each cache drive back into the pool (position doesn't matter, preference only)
17) Verify the "All existing data on this device will be OVERWRITTEN when array is Started" warning is NOT present
18) Start the array
19) Stop the array
20) Re-enable the Docker service
21) Re-enable the VM service
22) Start the array

Optional:
23) Stop the array
24) Shut down the system
25) Remove the old cache drives
26) Power on the system
27) Start the array

Thanks so much @JorgeB, I'll give this a go in the next few days. I appreciate your time!
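For step 8 above, rather than re-running the status command by hand, the check can be scripted. This sketch assumes the `btrfs replace status` output contains the word "finished" once the copy completes (e.g. "Started on ..., finished on ..."); verify that against your btrfs-progs version before relying on it.

```shell
# Decide from a `btrfs replace status` output string whether the
# replacement has completed. Assumes the completed form contains
# the word "finished" -- an assumption, check your btrfs-progs.
replace_done() {
    case "$1" in
        *finished*) return 0 ;;
        *)          return 1 ;;
    esac
}

# On a live system you might poll like this (commented out here):
# while ! replace_done "$(btrfs replace status /mnt/cache)"; do
#     sleep 60
# done

replace_done "Started on 10.Nov 08:00, finished on 10.Nov 09:12" && echo done
replace_done "37.4% done, 0 write errs, 0 uncorr. read errs" || echo running
```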
eltonk Posted April 21, 2022

On 11/10/2021 at 8:29 PM, OrneryTaurus said: (revised steps above)

Did it work as expected? I'm in the same boat: I need to replace my cache disk with a larger one.
JorgeB Posted April 21, 2022

37 minutes ago, eltonk said: I need to replace my cache disk with a larger one.

The bug is fixed in v6.10.0-rc4, so you can upgrade first and then do the replacement using the GUI.
OrneryTaurus Posted April 22, 2022 (Author)

On 4/21/2022 at 7:38 AM, eltonk said: Did it work as expected?

It did work as expected.
xlucero1 Posted August 8, 2022

I believe the way to upgrade your cache pool on 6.10.0+ via the GUI is shown in the video posted in this thread. Luckily I am on 6.10.3, and I will be doing it this way this weekend.
ridley Posted September 25, 2022

On 4/21/2022 at 4:17 PM, JorgeB said: Bug is fixed on v6.10.0-rc4, so you can upgrade first then do the replacement using the GUI.

How do you do this?
JorgeB Posted September 26, 2022

16 hours ago, ridley said: How do you do this?

https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480419
axeman Posted November 3, 2022

On 11/10/2021 at 5:54 AM, JorgeB said: (manual replacement instructions above)

Hi JorgeB, I'm about to upgrade my cache drive (not a pool) from 1TB to 2TB, and I'm on 6.9.2. Is there a different way to do this if it's just a size upgrade? Should I create a pool and then follow the process outlined above?
JorgeB Posted November 3, 2022

10 minutes ago, axeman said: I'm about to upgrade my cache drive from 1TB to 2TB ... is there a different way to do this if it's just a size upgrade?

Are you using xfs or btrfs?
axeman Posted November 3, 2022

2 hours ago, JorgeB said: Are you using xfs or btrfs?

btrfs ...
JorgeB Posted November 3, 2022

If you have enough SATA ports, you can add the new device to the pool and then remove the other one; the advantage of doing it this way is that the pool remains online. If not, use this procedure.
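For reference, the add-then-remove approach maps to two btrfs commands. This is a dry-run sketch with placeholder device names (in Unraid you would normally drive this from the GUI by adding a second pool slot, so treat the direct commands as an illustration of what happens underneath, not as the supported procedure):

```shell
# Dry-run sketch of the add-then-remove upgrade for a btrfs pool.
# Placeholder names: sdY1 = new 2TB partition, sdX1 = old 1TB partition.
NEW=/dev/sdY1   # new device, partitioned first
OLD=/dev/sdX1   # old device, still holding the data

ADD_CMD="btrfs device add -f $NEW /mnt/cache"
REMOVE_CMD="btrfs device remove $OLD /mnt/cache"

# "device remove" migrates all data off the old device before
# detaching it, which is why the pool stays online throughout.
echo "$ADD_CMD"
echo "$REMOVE_CMD"
```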
axeman Posted November 3, 2022

4 minutes ago, JorgeB said: If you have enough SATA ports you can add the new device to the pool then remove the other one, advantage of doing this is that the pool will remain online.

Thanks, will try that. I have another thread open about upgrades on the array (parity drive replace, and replace data drive); right after I solve that, I will try the cache pool.
Kloudz Posted November 29, 2022

I need to upgrade my cache pool soon. It's using BTRFS raid1, and what's not clear to me is whether I can upgrade from 1TB to 2TB without running the Mover. Can I just stop the array, replace Cache1 (1TB) with a 2TB drive, start the array, then wait until it finishes? And then perform the same steps for Cache2 once Cache1 is done? Also, what I couldn't find out: does the order matter? Can I upgrade Cache2 first instead of Cache1, or do I need to do them in order? Sorry for asking so many questions; like many, I don't want to lose data.
JorgeB Posted November 29, 2022

17 minutes ago, Kloudz said: Can I just stop the array, replace Cache1 (1TB) with a 2TB drive, start the array, then wait until it finishes? And then perform the same steps for Cache2 once Cache1 is done?

Yes, if they are in raid1.

17 minutes ago, Kloudz said: Also, what I couldn't find out: does the order matter?

Nope.
Kloudz Posted November 29, 2022

7 minutes ago, JorgeB said: Yes, if they are in raid1. Nope.

Perfect, thank you. Also, to be clear (I know it's better safe than sorry), can I perform the upgrade without running the Mover first, i.e. without moving appdata back to the array?
JorgeB Posted November 29, 2022

4 minutes ago, Kloudz said: I could just perform the upgrade without doing that, right?

Yes, no need to run the Mover; the pool can be in use during the upgrade. Of course, anything important should still be backed up.
Kloudz Posted November 29, 2022

1 minute ago, JorgeB said: Yes, no need to run the Mover; the pool can be in use during the upgrade.

Sweet, thanks.