It's not; you can try upgrading to v6.9 final, which might include a newer LSI driver, but LT can't do anything about this, it would be an issue between the LSI controller and those Seagate disks.
But you unassigned two cache devices and started the array, so they were both wiped:
Mar 2 18:40:43 Sol emhttpd: shcmd (1232): /sbin/wipefs -a /dev/sde1
...
Mar 2 18:40:43 Sol emhttpd: shcmd (1234): /sbin/wipefs -a /dev/sdf1
And as mentioned, you can't remove two devices at the same time and keep the pool; it needs to be done one device at a time (assuming a raid1 or raid10 pool).
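For reference, this is roughly what a one-device-at-a-time removal looks like on the command line for a btrfs raid1 pool (Unraid normally does this for you when you unassign a single device and start the array; the device and mount point names below are just examples):

```shell
# Sketch only: /dev/sdf1 and /mnt/cache are example names.
# Remove ONE device from a mounted btrfs raid1 pool; btrfs
# migrates the data off it onto the remaining device(s).
btrfs device remove /dev/sdf1 /mnt/cache

# Confirm the device is gone and check the remaining profile/usage:
btrfs filesystem show /mnt/cache
btrfs filesystem df /mnt/cache
```

Removing two devices at once from a two-device raid1 pool can't work, since btrfs would have nowhere left to keep a copy of the data.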
No need to update the pools; you just need to update Unraid to v6.9. After that you can disable the TRIM plugin (if all the pools are btrfs), though it won't break anything to leave it installed.
That's mentioned in the release notes; it will happen every time you go from v6.9 back to v6.8.
Just stop the array, re-assign all cache pool devices (make sure there is no "all data on this device will be deleted" warning for any cache device), then start the array.
You might not be able to repair the filesystem on a failing drive; you can try cloning it with ddrescue and then repairing the filesystem on the clone, but note that ddrescue is not optimized for flash devices.
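A minimal clone-then-repair sequence could look like the following (the device names and the btrfs filesystem type are assumptions, adjust for your setup, and the destination disk must be at least as large as the source):

```shell
# Sketch only: /dev/sdX is the failing source disk and /dev/sdY
# the healthy destination; both are example names.
# -f is needed to write to a block device; the map file lets
# ddrescue resume later and retry the bad areas.
ddrescue -f /dev/sdX /dev/sdY rescue.map

# Then repair the filesystem on the CLONE, not the failing disk,
# e.g. for a btrfs partition:
btrfs check --repair /dev/sdY1
```

Working on the clone keeps the failing disk untouched, so if the repair goes wrong you can clone it again and retry.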