(Solved) Issues replacing cache drive | Copying VM over


Skrie


Hello, I recently got 2 new SSDs to increase the cache drive size as well as to have a back-up, since that is where the VM lives. I was following along from this post

and connected 1 of the new SSDs and started it back up. As indicated in the post above, I stopped the array, deselected the old cache drive, and selected the new one. Once I had selected the new cache drive and restarted the array, it did not appear to copy over from the initial drive or start a btrfs replace (I confirmed both drives use this filesystem). I was then advised to run the mover, and there is now a little over 22GB on the cache drive, though I am unsure whether this is the previous data, as I am still unable to restart my VM. I had both drives connected at the same time, and I am unable to go back to the original one as it states the drive will be formatted.

 

What have I done wrong? Is this an issue with the file location changing and needing to update the VM directory? Was I supposed to move the VM data prior to connecting the new drive?
EDIT: Found out that the 20GB is my docker allocation, so that doesn't account for the VM on the original cache.
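One way to see what that 22GB actually is would be to check the space used by each top-level folder on the cache. This is only a hedged sketch: the `cache_usage` helper name is made up here, and it assumes the standard Unraid cache mount at /mnt/cache, with docker data typically under system/ and VM vdisks typically under domains/.

```shell
# Hedged sketch: list the space used by each top-level folder on the
# cache, largest first, to tell the docker image (typically under
# system/) apart from VM vdisks (typically under domains/).
# Assumption: /mnt/cache is the standard Unraid cache mount.
cache_usage() {
  # du per top-level directory; errors (e.g. a missing mount) are
  # suppressed so the function degrades quietly
  du -sh "$1"/*/ 2>/dev/null | sort -rh
}

cache_usage /mnt/cache
```

If domains/ shows up near-empty while system/ holds ~20GB, the mover only carried over the docker allocation and the VM vdisk is still on the old SSD.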

Edited by Skrie
3 hours ago, johnnie.black said:

Replace will only work if it's a pool, like mentioned on the FAQ:

 

 

Have I gone too far to undo this? When I attempt to reinstate the original cache drive it asks to wipe it, which I imagine means starting fresh. Are there any other ways to recover the VM data for use on the new cache drive(s)?
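For reference, the "replace" the quoted FAQ refers to is btrfs's device replace, which only works when both SSDs are members of the same cache pool. A hedged sketch of the equivalent manual commands follows; the device names are placeholders, and the commands are printed rather than executed here since they require the real pool devices.

```shell
# Hedged sketch of a manual btrfs device replace on the cache pool.
# Device names are placeholders, not real assignments.
OLD=/dev/sdX1    # placeholder: original cache SSD
NEW=/dev/sdY1    # placeholder: new cache SSD
MNT=/mnt/cache   # standard Unraid cache mount

# Print (rather than run) each step: show current pool members,
# start the online replace, then poll its progress.
for cmd in \
    "btrfs filesystem show $MNT" \
    "btrfs replace start $OLD $NEW $MNT" \
    "btrfs replace status $MNT"
do
  echo "$cmd"
done
```

With a single-device cache (no pool), none of this applies, which is why simply swapping the assigned drive did not trigger a copy.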


With the array stopped:
1. Disable the VM and docker services (assuming they are using cache).
2. Unassign all cache devices.
3. Start the array; this will make Unraid "forget" the cache config.
4. Stop the array and re-assign the original cache (you will no longer get the "data will be lost" warning).
5. Re-enable the services and start the array.

8 hours ago, johnnie.black said:

With the array stopped:
1. Disable the VM and docker services (assuming they are using cache).
2. Unassign all cache devices.
3. Start the array; this will make Unraid "forget" the cache config.
4. Stop the array and re-assign the original cache (you will no longer get the "data will be lost" warning).
5. Re-enable the services and start the array.

I couldn't wait and went home during lunch to turn it back on and follow your steps. That did the trick and I'm back where I started, thank you again very much for the assist! So to do this properly, I need to assign the current cache as a pool, yes? (Even with only the one drive active, I have it set to 2 slots as mentioned in the FAQ.) Where would I go about setting this so that when I repeat my steps it mirrors successfully?

 

EDIT: I repeated the steps in the FAQ, this time with the cache slots set to 2, and it appears to be working as initially intended. Thanks johnnie.black for getting me back on track.
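A quick way to confirm the rebuilt pool is actually mirrored is to check whether btrfs reports the data profile as RAID1. This is a hedged sketch: the `is_mirrored` helper name is made up, and the mount point is assumed to be the standard /mnt/cache.

```shell
# Hedged sketch: check whether `btrfs filesystem df <mount>` output
# reports the data profile as RAID1 (i.e. mirrored across devices).
# Reads the report on stdin so it can be piped from the live command.
is_mirrored() {
  grep -q '^Data, RAID1'
}

# On the live server (requires the real pool):
#   btrfs filesystem df /mnt/cache | is_mirrored && echo "cache is mirrored"
```

If the profile still shows "single", the pool exists but the data has not been converted to a mirror yet.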

Edited by Skrie
