Fastest Cache Drive Swap



I currently have a 500GB cache drive that I want to replace with a 1.2TB drive.  Both drives are in the server and the 1.2TB one is unassigned.  In the past I used the mover to send data to the array then back to the new cache drive but that process took a long time.  I saw a new method on the FAQ that may work for me since both drives are in the server. 

 

 

  1. Does anyone have experience with this method and do I need to stop the Dockers or VMs? 
  2. Is it really that easy with no data loss? 
  3. Does the array continue to run during the process (Share/Dockers/VMs)?

 

Stop - Select - Start... that's it? Mind blown if it is.

On 7/18/2016 at 4:46 AM, johnnie.black said:

How do I replace/upgrade a cache pool disk?

 

 

A few notes:

-unRAID v6.4.1 or above required, upgrade first if still on an older release.

-Always a good idea to backup anything important on the current cache in case something unexpected happens

-This procedure assumes you have enough ports to have both the old and new devices connected at the same time, if not you can use this procedure instead.

-Current cache disk filesystem must be BTRFS, you can’t directly replace/upgrade an XFS or ReiserFS disk.

-On a multi device pool you can only replace/upgrade one device at a time.

-You can directly replace/upgrade a single btrfs cache device, but the cache needs to be defined as a pool; you can still have a single-device "pool" as long as the number of defined cache slots is >= 2

-You can't directly replace an existing device with a smaller one, only with one of the same or larger size. You can, however, add one or more smaller devices to a pool and, after it's done balancing, stop the array and remove the larger device(s) (one at a time if more than one); obviously this is only possible if the data still fits on the resulting smaller pool.
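The "data still fits" condition in that last note is just arithmetic. A hypothetical helper to sanity-check it (the byte counts below are illustrative; on a live system you would read the "Used" figure from `btrfs filesystem usage /mnt/cache`, and the 10% headroom margin is an assumption, not an official rule):

```shell
# Hypothetical sanity check: does the data currently on the pool fit
# on the planned smaller pool? Values are illustrative; on a real
# system, read used space from `btrfs filesystem usage /mnt/cache`.
fits_on_pool() {
  used=$1          # space currently used on the pool
  new_capacity=$2  # total size of the planned smaller pool
  # leave some headroom (here: 10%) so btrfs has room to balance
  max_usable=$(( new_capacity * 90 / 100 ))
  [ "$used" -le "$max_usable" ]
}

# Example: 400 GiB used, moving to a 500 GiB pool (units in GiB)
if fits_on_pool 400 500; then
  echo "data fits on the smaller pool"
else
  echo "data does NOT fit"
fi
```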

 

 

Procedure:

 

  • stop the array
  • on the main page click on the cache device you want to replace/upgrade and select the new one from the drop down list (any data on the new device will be deleted)
  • start the array
  • a btrfs device replace will begin; wait for cache activity to stop. The stop array button will be inhibited during the operation, and this can take some time depending on how much data is on the pool and how fast your devices are.
  • when the cache activity stops or the stop array button becomes available, the replacement is done.
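For anyone who prefers watching from a terminal rather than the GUI, `btrfs replace status /mnt/cache` reports progress (the `/mnt/cache` mount point is an assumption; adjust for your pool). A small hypothetical helper to decide from a captured status line whether the operation has finished:

```shell
# Hypothetical helper: decide from a captured `btrfs replace status`
# line whether the replace has finished. The sample text below is
# illustrative; on a live system you would capture it with:
#   status=$(btrfs replace status /mnt/cache)
replace_finished() {
  case "$1" in
    *"finished on"*) return 0 ;;  # a completed replace reports a finish time
    *) return 1 ;;                # in-progress output shows "% done" instead
  esac
}

status="Started on Mon Mar  9 10:02:11 2020, finished on Mon Mar  9 10:47:55 2020, 0 write errs, 0 uncorr. read errs"
if replace_finished "$status"; then
  echo "replace complete"
else
  echo "replace still running"
fi
```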


1 hour ago, daddygrant said:

do I need to stop the Dockers or VMs? 

No. Note that the current cache needs to be btrfs, as it says in the notes.

 

1 hour ago, daddygrant said:

Is it really that easy with no data loss? 

Yes, unless something goes wrong; that's why it's always good to back up any important data.

 

1 hour ago, daddygrant said:

Does the array continue to run during the process (Share/Dockers/VMs)?

Yes.

 

16 hours ago, johnnie.black said:

No. Note that the current cache needs to be btrfs, as it says in the notes.

 

Yes, unless something goes wrong; that's why it's always good to back up any important data.

 

Yes.

 

 

 

Thank you. Unfortunately I checked and my current cache disk is XFS. I have, however, formatted the new disk as btrfs for future migrations. I'm wondering if I can use MC to move the data and swap the cache. Any other suggestions besides the legacy method?


The reasons the mover method is slow are that the mover does a lot of checking to make sure the files are not in use before moving them, and that appdata tends to contain a lot of small files, especially if you're running Plex. Using MC or the command line might be a little faster, but it's easier to get wrong. I should think the fastest method would be to roll all the files into a tarball, because then you only have one file to write for the initial copy - it wouldn't help much with the restore, though. It's one of those jobs you only do very rarely, so I'd use the mover and just let it get on with it.
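A sketch of that tarball idea, assuming the cache is mounted at /mnt/cache and /mnt/disk1 is an array disk with enough free space (both paths are assumptions; adjust for your system). The demo below uses temporary directories so it can run anywhere:

```shell
# Sketch of the tarball approach (paths in these comments are assumptions):
#   tar -cf /mnt/disk1/cache_backup.tar -C /mnt/cache .
#   ...swap and format the new cache drive, then restore with:
#   tar -xf /mnt/disk1/cache_backup.tar -C /mnt/cache
# Self-contained demo using temporary directories:
src=$(mktemp -d)      # stands in for /mnt/cache (old drive)
dest=$(mktemp -d)     # stands in for an array disk
restore=$(mktemp -d)  # stands in for the new cache drive

mkdir -p "$src/appdata/plex"
echo "sample config" > "$src/appdata/plex/settings.xml"

tar -cf "$dest/cache_backup.tar" -C "$src" .   # one big file for the initial copy
tar -xf "$dest/cache_backup.tar" -C "$restore" # restore onto the new drive

cmp -s "$src/appdata/plex/settings.xml" "$restore/appdata/plex/settings.xml" \
  && echo "restore matches original"
```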

 

Edited by John_M
3 hours ago, david279 said:

So with this method the old cache disk would just become an unassigned disk?

 



The classic method involves stopping all VMs/Dockers

Set shares to not use the cache. 

Run the mover to migrate the data to the array.

Swap the cache drive, change the selected shares to use the cache and then run the mover again.

Finally re-enable the cache and run the mover.

Enable dockers and VMs.

 

For me it takes about 2 days. Mostly because of Plex.

On 5/22/2018 at 12:23 AM, daddygrant said:

The classic method involves stopping all VMs/Dockers

Set shares to not use the cache. 

Run the mover to migrate the data to the array.

Swap the cache drive, change the selected shares to use the cache and then run the mover again.

Finally re-enable the cache and run the mover.

Enable dockers and VMs.

 

For me it takes about 2 days. Mostly because of Plex.

 

I'm looking to swap my XFS cache out for a larger one. What if I add an SSD to the array and only allow the cache-only shares to move to the SSD drive using the share settings? I know it will still have to write parity, but will this save me any significant amount of time? Not looking forward to being down for two days like last time! 

 

I have slots available for options if you all know of any, but my cache is XFS currently.

On 5/28/2018 at 6:37 PM, jonesy8485 said:

 

I'm looking to swap my XFS cache out for a larger one. What if I add an SSD to the array and only allow the cache-only shares to move to the SSD drive using the share settings? I know it will still have to write parity, but will this save me any significant amount of time? Not looking forward to being down for two days like last time! 

 

I have slots available for options if you all know of any, but my cache is XFS currently.

 

It may be faster, but if you have a parity disk it could slow you down. I know Unraid warns that SSDs aren't supported in the array, but it should work.

On 5/20/2018 at 9:47 PM, daddygrant said:

Stop - Select - Start... that's it? Mind blown if it is.

Heads up: I used the steps you quoted to swap a cache drive and it worked excellently.

 

However, be aware that if you are swapping in a larger drive, you'll have to resize your BTRFS filesystem. Longer guide here, but I just ran this:

btrfs fi resize 1:max /mnt/cache

 

7 minutes ago, blueboyscout325 said:

Is this still a valid method for upgrading a single BTRFS cache drive without having to stop running dockers? I'd like to minimize the downtime of Plex. And would I have to configure the cache slots to 2 and just leave the second slot empty? And is there low risk of data loss?

Yes to all. The risk is low, but there's always some risk, so just make sure your cache backups are up to date.


I have only 1 cache drive formatted as BTRFS and want to replace it with a larger one.  I want to make sure this step is correct:

 

Quote

-You can directly replace/upgrade a single btrfs cache device, but the cache needs to be defined as a pool; you can still have a single-device "pool" as long as the number of defined cache slots is >= 2

 

Does that mean I just create 2 slots in the Cache Pool like below?

[screenshot: Cache Pool configured with 2 slots, the second slot left unassigned]

 

Then in Slot 1, I choose the larger SSD to replace the existing one? Then afterwards I run the following command since the replacement drive is larger:

 

btrfs fi resize 1:max /mnt/cache

 

 

Edited by kimocal
8 hours ago, kimocal said:

Does that mean I just create 2 slots in the Cache Pool like below?

 

8 hours ago, kimocal said:

Then in Slot 1, I choose the larger SSD to replace the existing one?

Correct on both counts.

 

8 hours ago, kimocal said:

Then afterwards I run the following command since the replacement drive is larger:

No need to resize; Unraid will do it on the next array start.


I know this post is a little old, but since there are new posts in this thread, I thought it needed some elaboration in case someone is trying to follow it.

On 5/22/2018 at 12:23 AM, daddygrant said:

The classic method involves stopping all VMs/Dockers

Set shares to not use the cache. 

Run the mover to migrate the data to the array.

Swap the cache drive, change the selected shares to use the cache and then run the mover again.

Finally re-enable the cache and run the mover.

Enable dockers and VMs.

This "classic method" is missing some important details. It seems to imply that there are only 2 possible Use cache settings, but there are 4 (No, Yes, Only, and Prefer), and which one you use at each step is critical.

 

Here is the more complete information. Instead of stopping all VMs/Dockers (that is not enough):

  1. Go to Settings - Docker and disable Dockers. Go to Settings - VM Manager and disable VMs.
  2. Stop all writing to all user shares by anything.
  3. Set all user shares to cache-yes. This is the only setting which will move from cache to array.
  4. Run mover to get everything moved from cache to array.
  5. Swap cache drive.
  6. Set shares you want to stay on cache to cache-prefer. This is the only setting which will move from array to cache. Typically, you want appdata, domains, and system shares on cache. Set other user shares to whichever Use cache setting you prefer.
  7. Run mover to get those shares you want to stay on cache moved back to cache.
  8. Enable Dockers and VMs.
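The direction rules behind steps 3, 6, and 7 can be summarized compactly. A hypothetical helper encoding them (the setting names match Unraid's Use cache options; the helper itself is just an illustration, not part of Unraid):

```shell
# Hypothetical helper encoding which way mover moves a share's files
# for each Use cache setting, per the steps above:
#   yes       -> mover moves files from cache to array
#   prefer    -> mover moves files from array to cache
#   no / only -> mover does not touch the share
mover_direction() {
  case "$1" in
    yes)     echo "cache -> array" ;;
    prefer)  echo "array -> cache" ;;
    no|only) echo "not moved" ;;
    *)       echo "unknown setting" ;;
  esac
}

mover_direction yes      # prints: cache -> array
mover_direction prefer   # prints: array -> cache
```

This is why step 3 uses cache-yes (to drain the cache onto the array) and steps 6-7 use cache-prefer (to pull those shares back onto the new cache).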

Also, don't do this:

On 5/28/2018 at 6:37 PM, jonesy8485 said:

add an SSD to the array

 

