daddygrant Posted May 21, 2018

I currently have a 500GB cache drive that I want to replace with a 1.2TB drive. Both drives are in the server and the 1.2TB one is unassigned. In the past I used the mover to send data to the array and then back to the new cache drive, but that process took a long time. I saw a new method in the FAQ that may work for me since both drives are in the server. Does anyone have experience with this method, and do I need to stop the Dockers or VMs? Is it really that easy, with no data loss? Does the array continue to run during the process (shares/Dockers/VMs)?

Stop - Select - Start .. that is? Mind blown if it is.

On 7/18/2016 at 4:46 AM, johnnie.black said:

How do I replace/upgrade a cache pool disk?

A few notes:
- unRAID v6.4.1 or above is required; upgrade first if still on an older release.
- It's always a good idea to back up anything important on the current cache in case something unexpected happens.
- This procedure assumes you have enough ports to have both the old and new devices connected at the same time; if not, you can use this procedure instead.
- The current cache disk's filesystem must be BTRFS; you can't directly replace/upgrade an XFS or ReiserFS disk.
- On a multi-device pool you can only replace/upgrade one device at a time.
- You can directly replace/upgrade a single btrfs cache device, but the cache needs to be defined as a pool. You can still have a single-device "pool" if the number of defined cache slots is >= 2.
- You can't directly replace an existing device with a smaller one, only one of the same or larger size. You can, however, add one or more smaller devices to a pool and, after it's done balancing, stop the array and remove the larger device(s), one at a time if there is more than one. This is obviously only possible if the data still fits on the resulting smaller pool.
Procedure:
1. Stop the array.
2. On the Main page, click on the cache device you want to replace/upgrade and select the new one from the drop-down list (any data on the new device will be deleted).
3. Start the array; a btrfs device replace will begin. The stop array button will be inhibited during the operation, which can take some time depending on how much data is on the pool and how fast your devices are.
4. When the cache activity stops, or the stop array button becomes available again, the replacement is done.
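As a side note for anyone who prefers a terminal over watching the webGUI: the operation is a standard btrfs device replace, so btrfs-progs can report its progress. A rough sketch, assuming the pool is mounted at /mnt/cache (the Unraid default); the fallback branch is only there so the snippet runs on a non-Unraid machine:

```shell
# Print the progress of an in-flight btrfs device replace.
# Assumes the cache pool is mounted at /mnt/cache (Unraid default).
if command -v btrfs >/dev/null 2>&1 && mountpoint -q /mnt/cache; then
    btrfs replace status -1 /mnt/cache   # -1: print once instead of continuously
else
    echo "not an Unraid box; run this on the server console"
fi
```

Without `-1`, `btrfs replace status` keeps updating in place until the replace finishes, which is handy for a long-running swap.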
JorgeB Posted May 21, 2018

1 hour ago, daddygrant said: do I need to stop the Dockers or VMs?

No. The current cache needs to be btrfs, like it says in the notes.

1 hour ago, daddygrant said: Is it really that easy with no data loss?

Yes, unless something goes wrong; that's why it's always good to back up any important data.

1 hour ago, daddygrant said: Does the array continue to run during the process (Share/Dockers/VMs)?

Yes.
daddygrant Posted May 21, 2018 (Author)

16 hours ago, johnnie.black said: No, the current cache needs to be btrfs like it says in the notes. Yes, unless something goes wrong; that's why it's always good to back up any important data. Yes.

Thank you. Unfortunately I checked and my current cache disk is XFS. I have, however, formatted the new disk as btrfs for future migrations. I'm wondering if I can use MC to move the data and swap the cache. Any other suggestions other than the legacy method?
John_M Posted May 21, 2018 (edited)

The reasons the mover method is slow are that the mover does a lot of checking to make sure files are not in use before moving them, and that appdata tends to contain a lot of small files, especially if you're running Plex. Using MC or the command line might be a little faster, but it's easier to get wrong. I should think the fastest method would be to roll all the files into a tarball, because then you only have one file to write for the initial copy; it wouldn't help much with the restore, though. It's one of those jobs you only do very rarely, so I'd use the mover and just let it get on with it.

Edited May 21, 2018 by John_M
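For what it's worth, the tarball idea described above can be sketched in a couple of lines. This is a toy, hedged version: the mktemp directories stand in for the real mount points (/mnt/cache and an array disk such as /mnt/disk1), and the file name is made up for the demo:

```shell
# Stand-ins for the real mount points (/mnt/cache and /mnt/disk1):
SRC=$(mktemp -d)    # pretend this is the old cache
DEST=$(mktemp -d)   # pretend this is an array disk
NEW=$(mktemp -d)    # pretend this is the new cache

echo "plex metadata" > "$SRC/demo_file"

# One big sequential write instead of thousands of small ones:
tar -cf "$DEST/cache_backup.tar" -C "$SRC" .

# After swapping drives, unpack onto the new cache:
tar -xf "$DEST/cache_backup.tar" -C "$NEW"
```

On a real server the `-C` source would be /mnt/cache and the restore target the freshly prepared pool. As noted above, the restore side doesn't gain much, since extracting still writes every small file individually.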
david279 Posted May 22, 2018

So with this method the old cache disk would just become an unassigned disk?

Sent from my SM-G955U using Tapatalk
daddygrant Posted May 22, 2018 (Author)

3 hours ago, david279 said: So with this method the old cache disk would just become an unassigned disk?

The classic method involves:
1. Stopping all VMs/Dockers.
2. Setting shares to not use the cache.
3. Running the mover to migrate the data to the array.
4. Swapping the cache drive.
5. Changing the selected shares to use the cache, then running the mover again.
6. Finally, re-enabling the cache and running the mover.
7. Enabling Dockers and VMs.

For me it takes about 2 days, mostly because of Plex.
david279 Posted May 22, 2018

So with the new method the old cache becomes an unassigned disk?

Sent from my SM-G955U using Tapatalk
John_M Posted May 22, 2018

3 hours ago, david279 said: So with the new method the old cache becomes an unassigned disk?

With either method the old cache disk becomes unassigned, because you can't have two disks assigned to the same slot.
david279 Posted May 22, 2018

Thanks!

Sent from my SM-G955U using Tapatalk
daddygrant Posted May 25, 2018 (Author)

I got everything swapped over to the new SSD for cache. I made sure it was btrfs this time. Thank you, everyone!
jonesy8485 Posted May 28, 2018

On 5/22/2018 at 12:23 AM, daddygrant said: The classic method involves stopping all VMs/Dockers, setting shares to not use the cache, running the mover to migrate the data to the array, swapping the cache drive, changing the selected shares to use the cache and running the mover again, then finally re-enabling the cache, running the mover, and enabling Dockers and VMs. For me it takes about 2 days, mostly because of Plex.

I'm looking to swap my XFS cache out for a larger one. What if I add an SSD to the array and only allow the cache-only shares to move to the SSD drive using the share settings? I know it will still have to write parity, but will this save me any significant amount of time? Not looking forward to being down for two days like last time! I have slots available if you all know of any other options, but my cache is currently XFS.
daddygrant Posted May 30, 2018 (Author)

On 5/28/2018 at 6:37 PM, jonesy8485 said: I'm looking to swap my XFS cache out for a larger one. What if I add an SSD to the array and only allow the cache-only shares to move to the SSD drive using the share settings? I know it will still have to write parity, but will this save me any significant amount of time?

It may be faster, but if you have a parity disk it could slow you down. I know Unraid warns that SSDs aren't supported in the array, but it should work.
hpka Posted December 4, 2019

On 5/20/2018 at 9:47 PM, daddygrant said: Stop - Select - Start .. that is? Mind blown if it is.

Heads up: I used the steps you quoted to swap a cache drive and it worked excellently. However, know that if you are swapping for a larger drive, you'll have to resize your BTRFS filesystem. Longer guide here, but I just ran this:

btrfs fi resize 1:max /mnt/cache
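A quick, hedged way to sanity-check whether the filesystem actually reflects the new drive's capacity is plain df against the mount point. The /mnt/cache path assumes the Unraid default; the fallback to / only exists so the snippet runs on other systems:

```shell
# After a successful resize (or an automatic one on array start), the
# reported size should match the new, larger drive.
df -h /mnt/cache 2>/dev/null || df -h /
```
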
blueboyscout325 Posted March 18, 2020

Is this still a valid method for upgrading a single BTRFS cache drive without having to stop running Dockers? I'd like to minimize the downtime of Plex. Would I have to configure the cache slots to 2 and just leave the second slot empty? And is there low risk of data loss?
JorgeB Posted March 18, 2020

7 minutes ago, blueboyscout325 said: Is this still a valid method for upgrading a single BTRFS cache drive without having to stop running Dockers? I'd like to minimize the downtime of Plex. Would I have to configure the cache slots to 2 and just leave the second slot empty? And is there low risk of data loss?

Yes to all. The risk is low, but there's always a risk, so just make sure your cache backups are up to date.
kimocal Posted April 28, 2020 (edited)

I have only 1 cache drive formatted as BTRFS and want to replace it with a larger one. I want to make sure this step is correct:

Quote: "You can directly replace/upgrade a single btrfs cache device but the cache needs to be defined as a pool, you can still have a single-device "pool" if the number of defined cache slots >= 2"

Does that mean I just create 2 slots in the Cache Pool like below? Then in Slot 1, I choose the larger SSD to replace the existing one? Then afterwards I run the following command, since the replacement drive is larger:

btrfs fi resize 1:max /mnt/cache

Edited April 28, 2020 by kimocal: more details
JorgeB Posted April 29, 2020

8 hours ago, kimocal said: Does that mean I just create 2 slots in the Cache Pool like below? Then in Slot 1, I choose the larger SSD to replace the existing one?

Correct to both.

8 hours ago, kimocal said: Then afterwards I run the following command, since the replacement drive is larger:

No need to resize; Unraid will do it on the next array start.
tjb_altf4 Posted April 29, 2020

Does this upgrade process only work on the default raid1 pool? i.e. will it work on a raid0 pool?
JorgeB Posted April 29, 2020

7 minutes ago, tjb_altf4 said: Does this upgrade process only work on the default raid1 pool? i.e. will it work on a raid0 pool?

It works for any profile, as long as the old device remains connected during the replacement.
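If you're not sure which profile your pool is currently using, btrfs itself will tell you. A sketch assuming the standard /mnt/cache mount; the else branch is just so the snippet runs on a machine without a btrfs cache pool:

```shell
# The Data and Metadata lines show the allocation profile
# (single, RAID0, RAID1, ...).
if command -v btrfs >/dev/null 2>&1 && mountpoint -q /mnt/cache; then
    btrfs filesystem df /mnt/cache
else
    echo "no btrfs cache pool here; run this on the Unraid console"
fi
```
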
kimocal Posted April 29, 2020

10 hours ago, johnnie.black said: Correct to both. No need to resize; Unraid will do it on the next array start.

Great. Once my replacement SSD arrives I can give this a go. Thanks again.
trurl Posted April 29, 2020

I know this post is a little old, but since there are new posts in this thread, I thought it needed some elaboration in case someone is trying to follow it.

On 5/22/2018 at 12:23 AM, daddygrant said: The classic method involves stopping all VMs/Dockers, setting shares to not use the cache, running the mover to migrate the data to the array, swapping the cache drive, changing the selected shares to use the cache and then running the mover again, finally re-enabling the cache and running the mover, and enabling Dockers and VMs.

This "classic method" is missing some important details. It seems to imply that there are only 2 possible Use cache settings, but there are 4, and which one you use at each step is critical. Here is the more complete information. Instead of just stopping all VMs/Dockers (that is not enough):

1. Go to Settings - Docker and disable Docker.
2. Go to Settings - VM Manager and disable VMs.
3. Stop all writing to all user shares by anything.
4. Set all user shares to cache-yes. This is the only setting which will move from cache to array.
5. Run the mover to get everything moved from cache to array.
6. Swap the cache drive.
7. Set the shares you want to stay on cache to cache-prefer. This is the only setting which will move from array to cache. Typically, you want the appdata, domains, and system shares on cache.
8. Set other user shares to whichever Use cache setting you prefer.
9. Run the mover to get those shares you want to stay on cache moved back to cache.
10. Enable Docker and VMs.

Also, don't do this:

On 5/28/2018 at 6:37 PM, jonesy8485 said: add an SSD to the array
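Incidentally, the mover runs in the steps above don't have to be triggered from the webGUI button; the same script can be invoked from a console session. A hedged sketch, assuming the stock Unraid mover script at /usr/local/sbin/mover; the else branch only exists so the snippet runs on a non-Unraid system:

```shell
# Invoke the mover manually; it logs its progress to the syslog.
if [ -x /usr/local/sbin/mover ]; then
    /usr/local/sbin/mover
    grep -i mover /var/log/syslog | tail -n 5   # last few mover log lines
else
    echo "mover not found; this path is Unraid-specific"
fi
```

The share Use cache settings above still decide which direction files move; the console invocation is just the same mover the Main page button starts.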