I faced a similar experience when a reboot with a bad SATA card occurred, knocking out my entire cache array.
Because no actual drives were lost, after rewiring to another SATA port (SATA 2, sadly) I am able to see the entire btrfs filesystem, even though I am unable to add the drives back to Unraid (they show up as unassigned, with a warning that all data will be formatted on reassignment):
$ btrfs filesystem show
Label: none  uuid: 0bfdf8d7-1073-454b-8dec-5a03146de885
    Total devices 6 FS bytes used 1.37TiB
    devid    2 size 111.79GiB used 37.00GiB path /dev/sdo1
    devid    3 size 223.57GiB used 138.00GiB path /dev/sdm1
    devid    4 size 223.57GiB used 138.00GiB path /dev/sdi1
    devid    5 size 1.82TiB used 1.60TiB path /dev/sdd1
    devid    6 size 1.82TiB used 1.60TiB path /dev/sde1
    devid    7 size 111.79GiB used 37.00GiB path /dev/sdp1
... there may be other btrfs devices listed here if you have them as well ...
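Before mounting, it may be worth making sure the kernel has registered every member of the pool. This is just a precaution on my part; btrfs device scan rescans the block devices for btrfs members:

$ # register all btrfs member devices with the kernel
$ btrfs device scan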
While attempting to remount this cache pool using the steps found at
I unfortunately ran into the following error:
$ mount -o degraded,usebackuproot,ro /dev/sdo1 /dev/sdm1 /dev/sdi1 /dev/sdd1 /dev/sde1 /dev/sdp1 /recovery/cache-pool
mount: bad usage
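It turns out mount only accepts a single source argument, which is why passing every member device fails with "bad usage". For a multi-device btrfs, pointing mount at any one member should be enough, since the kernel assembles the rest once they have been scanned. A sketch using the first device from the show output above:

$ # any single member device should work; the kernel pulls in the rest
$ mount -o degraded,usebackuproot,ro /dev/sdo1 /recovery/cache-pool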
So alternatively, I mounted using the UUID (with /recovery/cache-pool being the recovery folder I created):
$ mount -o degraded,usebackuproot,ro --uuid 0bfdf8d7-1073-454b-8dec-5a03146de885 /recovery/cache-pool
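Once mounted, it is probably worth sanity-checking the pool before touching anything. These are standard btrfs subcommands, so treat this as a suggested check rather than a required step:

$ # overall allocation across all pool members
$ btrfs filesystem usage /recovery/cache-pool
$ # per-device read/write/flush/corruption error counters
$ btrfs device stats /recovery/cache-pool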
With that, I presume I can then safely remove the remaining two disks from the cache pool, and slowly, manually reorganize and recover the data.
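Since the pool is mounted read-only, my plan is to copy everything off first and only then think about device removal. A sketch, with /mnt/user/restore being a hypothetical destination folder on the array:

$ # archive copy preserving hard links, ACLs and extended attributes
$ rsync -aHAX --progress /recovery/cache-pool/ /mnt/user/restore/

Actually removing a device with btrfs device remove needs a read-write mount, so I would only attempt that once the data is confirmed safe elsewhere.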