I recently added a new JBOD to my server along with some new 1TB cache drives (configured as BTRFS RAID 10). I was writing parity to a second parity drive when my cache failed, leaving all my Docker containers dead. On the drive details tab the drive shows "Unavailable - disk must be spun up"; if you spin it up, it looks normal on the main page, but going back to details shows the same must-be-spun-up message. I have had this problem a few times recently, which is how I ended up with only 7 drives in a RAID 10 cache configuration. I am currently running mover to move all my files back to the array so I can rebuild the cache pool (and also manually copying them over to another Unraid box). I have attached diagnostics.
Oh yeah, I pulled the hot-swap drive to see if it would come back, and it did show back up in Unassigned Devices as "sdal" (it was originally "sdj"). Unassigned Devices will not let me reassign the UUID to get the drive back into the cache pool while the array is online. In the process I inadvertently pulled cache drive "sdp", which is now also sitting in Unassigned Devices as "sdai"; again it will not let me reassign the UUID. With the normal array, if a drive is pulled and then reinserted, the array seems to bring it back, but the cache does not. Is that due to the difference between BTRFS RAID 10 and Unraid's XFS-plus-parity array?
On another note: since the data should still be on those two cache drives, is there a way to add them back to the cache pool without losing the data? In my experience, every time you adjust the cache it just reformats the drives.
tower-diagnostics-20200713-1414.zip