ecnal.magnus Posted March 2, 2023 (edited)

Last weekend I shut down my Docker service, moved all the data off my 1TB cache pool (two 1TB NVMe drives), replaced the drives with two new 2TB NVMe drives, started the array back up, moved the appdata and system shares back to the cache pool, re-enabled the Docker service, and everything was working fine.

Today I wanted to add a couple of external USB-C NVMe drives. When I added them they wouldn't mount, so I rebooted the server. After the reboot, the cache pool indicated that the new 2TB drives were wrong and that it was still expecting the old 1TB drives, even though the new drives had been detected and working fine until the reboot.

Just for fun I started the array anyway with the new drives in place of the old ones (they had been working since Sunday). The cache pool came up, Docker is working, and I can write to the cache, but both cache drives have a red X next to them and are reported as being emulated. I don't believe that's actually possible, since there is nothing to emulate them from, and Docker is running just fine at the new drives' speeds.

I mainly wanted to bring this to the forum's attention to see whether anyone else has experienced this, or has any idea why the old drives would show back up logically. Everything seems to be working fine, but I don't really want to leave it like this. Any insight would be appreciated.

Also, I have remote syslog enabled, so if there is anything from syslog that might help, I can grab it and share it.
JorgeB Posted March 2, 2023 (Solution)

Stop the array, unassign all pool devices, start the array, stop the array, re-assign all pool devices, start the array, and you're done.
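After re-assigning the pool, one way to double-check that the devices in the pool are really the new drives is to compare their serial numbers against what the OS reports. The sketch below is not an Unraid tool, just a generic shell check; the serial numbers are hypothetical placeholders, and on a live server you would fill `actual` from `lsblk` as shown in the comment.

```shell
# Sketch: verify the pool members are the drives you think they are.
# The serials here are hypothetical placeholders; on a real server you
# would collect them with something like:
#   actual=$(lsblk -dno SERIAL /dev/nvme0n1 /dev/nvme1n1)
expected="S2TB0001 S2TB0002"
actual="S2TB0001 S2TB0002"    # stand-in for the lsblk output

if [ "$expected" = "$actual" ]; then
    echo "pool serials match"
else
    echo "pool serials differ - re-check assignments before starting the array"
fi
```

If the serials differ, it usually means a slot is still pointing at an old (or wrong) device, which is worth catching before you start the array.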
ecnal.magnus Posted March 2, 2023 (Author)

That worked. Thank you.
nicosemp Posted June 15, 2023

Why is this necessary? It just happened to me too, and I nearly panicked before finding this solution. What happens to the cache configuration that resets it to the old state (before the SSDs were swapped)?
JorgeB Posted June 15, 2023

This will usually happen if you replace the cache devices incorrectly, i.e., just assign the new ones on top of the old ones and start the array without resetting the pool.
JasonK Posted June 15, 2023

I recently upgraded my cache pool (from two 256GB SSDs to two 512GB SSDs). I'm lazy: I shut down, pulled one 256GB drive from the pool, and plugged a 512GB drive in its place. When the server came up it saw the missing drive, so I selected the new 512GB drive for that slot in the pool and told the array to start, and Unraid rebuilt the 512GB drive from the other 256GB drive that was still there. When the rebuild finished, I shut down, did the same with the other 256GB drive, let it rebuild, and done. Sure, it took longer, but I didn't have to fiddle with anything.
nicosemp Posted June 17, 2023

On 6/15/2023 at 10:26 AM, JorgeB said: i.e., just assign the new ones on top of the old ones and start the array without resetting the pool.

I moved the data from the cache to the array before upgrading the 256GB SSDs to 1TB ones, but I might have just replaced them without "resetting". So by "resetting the pool" you mean:
- unassign both old SSDs
- start the array
- stop the array
- assign the new SSDs
- start the array
correct?
JorgeB Posted June 17, 2023

4 minutes ago, nicosemp said: correct?

Yep.