Parity can't help with filesystem corruption. According to your diags, disks 12 and 13 weren't mounting, but they are now, most likely as a result of running xfs_repair.
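If you need to run it again, a typical invocation looks like this (array started in maintenance mode; /dev/md12 for disk 12 is an assumed device name — adjust to the actual disk). Always do a dry run first:

```shell
# Array must be started in maintenance mode so the md device exists
# but the filesystem is not mounted.

# Dry run: report problems without writing anything
xfs_repair -n /dev/md12

# If the dry run looks sane, run the actual repair
xfs_repair -v /dev/md12

# Only if it refuses to run because of a dirty log, and mounting the
# disk to replay the log is not possible, zero the log as a last resort:
# xfs_repair -L /dev/md12
```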
Check the disk's SMART report, like mentioned: unless the correct lifetime hours are reported in the SMART self-test history section, it's a disk issue.
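To check that, run a short self-test and then look at the self-test log (the device name /dev/sdb is just an example):

```shell
# Start a short SMART self-test (usually takes a couple of minutes)
smartctl -t short /dev/sdb

# Afterwards, view the full report; the "SMART Self-test log" section
# has a "LifeTime(hours)" column for each completed test, which should
# match the drive's current Power_On_Hours attribute
smartctl -a /dev/sdb
```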
Btrfs is very sensitive to memory issues. You are overclocking your RAM, and that's known to corrupt data on some Ryzen systems, see here. For your current config you should set the RAM speed at 2666MHz max; it would also be a good idea to run memtest.
Since the diags were taken after rebooting we can't see what happened, but parity looks fine. Replace/swap cables to rule them out and re-sync parity.
The cache filesystem is corrupt, so you'll need to re-format. If there's important data there, see here for some recovery options.
Yes, you can still try btrfs restore.
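A basic btrfs restore run looks like this (device and destination path are assumptions — the destination must be a different, working filesystem with enough free space):

```shell
# Run with the pool unmounted; copies whatever files it can recover
# from /dev/sdc1 to the destination directory
btrfs restore -v /dev/sdc1 /mnt/disk1/restored

# If the default tree root is too damaged, list alternate roots
# and retry restore pointing at one of the reported bytenr values:
btrfs-find-root /dev/sdc1
# btrfs restore -v -t <bytenr> /dev/sdc1 /mnt/disk1/restored
```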
The rebuilt disk is corrupt, no doubt about that; the only question is by how much. But note that the disk was already unmountable before the read errors on parity, so parity wasn't 100% valid to begin with, possibly due to previous errors.
Shares are using high-water. Since disk 1 is larger, Unraid will only start writing to the other disks once you pass the 3TB mark. More info on high-water here:
https://wiki.unraid.net/Un-Official_UnRAID_Manual#High_Water
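As a rough illustration of the arithmetic (sizes are assumptions: a 6TB disk 1, everything else smaller): the initial high-water mark is half the size of the largest disk, and writes stay on disk 1 until its free space falls to that mark:

```shell
# Simplified high-water arithmetic; sizes in GB are assumed for
# illustration (6TB disk 1, smaller remaining disks).
largest=6000
mark=$(( largest / 2 ))               # initial high-water mark: 3000GB
echo "mark=${mark}GB"

# Disk 1 receives all writes until its free space drops to the mark,
# i.e. after roughly (largest - mark) GB have been written to it:
written_before_switch=$(( largest - mark ))
echo "disk1 takes the first ${written_before_switch}GB"
# Once every disk is down to the mark, the mark halves and the cycle repeats.
```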
Yes, though btrfs is not as forgiving; there are some recovery options here:
https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490
On the plus side, if you can get it to mount you can find out which files are corrupt by running a scrub. You can also try btrfs rescue (also on the link), but that won't check for corruption.
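For reference, a scrub from the command line looks like this (the mount point /mnt/cache is an assumption); files with checksum errors are reported in the kernel log:

```shell
# Start a scrub on the mounted pool
btrfs scrub start /mnt/cache

# Check progress and the summary of errors found
btrfs scrub status /mnt/cache

# Paths of files with bad checksums end up in the kernel log
dmesg | grep -i "checksum error"
```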
It does, but any read error on another device during a rebuild will result in a corrupt rebuilt disk. You can still run xfs_repair on the disk; depending on where and how much corruption there is, it might still have some (or even most) of the data.