I don't know if this is a bug or a limitation, but even if it isn't a bug, maybe something can be done to handle it better. This happened to a user recently in this thread, and I was able to reproduce it.
How to reproduce:
- start with a 3-device cache pool
- unassign cache1
- re-arrange the pool, e.g., re-assign cache3 to the cache1 slot; up to here everything works fine
- change the number of cache slots to 2; Unraid will now flag cache1 as a new device, and if the user starts the array the result is an unmountable, damaged cache pool
The procedure works as expected if the number of cache slots isn't changed at the same time: remove one device from the 3-device pool, re-arrange the remaining devices, start the array so the pool is balanced down to two devices, then stop the array; only at this point can the number of cache slots be safely changed. Possibly this can be improved, or the number of slots could be made impossible to change while cache devices are being removed (or re-arranged); otherwise it will likely happen to someone else in the future and result in data loss.
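For reference, the safe removal path that the array start/balance performs corresponds roughly to the following btrfs operations. This is only a sketch: the device name /dev/sdX1 and the mount point /mnt/cache are assumptions, and on Unraid this is normally driven by the webGUI rather than run by hand.

```shell
# Inspect the current pool membership before making any changes
# (should list three devices for a 3-device pool)
btrfs filesystem show /mnt/cache

# Remove one device from the pool; btrfs migrates its data onto
# the remaining two devices before the command returns
btrfs device delete /dev/sdX1 /mnt/cache

# Confirm the pool now lists only two devices; only after this
# completes is it safe to reduce the slot count
btrfs filesystem show /mnt/cache
```

The key point is that the device deletion (the balance down to two devices) must finish before the slot count is reduced; changing the slots first is what makes Unraid treat the re-arranged device as new.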