Correct. If you have all 3 replaced disks you can do a new config with them; since parity should be mostly valid you can check "parity is already valid" before starting the array, but you should then run a correcting parity check.
I'm not very familiar with those controllers, but after taking a second look it looks more like disk2 is causing the HBA-related timeouts. Which disk(s) did you last replace before you got read errors from disk2? Do you still have those old disks intact? Was anything written to the array after those disks were removed?
Downgrading to v6.9.2 will also work, but alternatively you can download the v6.11.1 zip and extract all the bz* files to the flash drive, replacing the existing ones, then reboot.
Pulling the disk won't fix anything, since the emulated disk will have the same problem:
Nov 6 17:25:09 1mehien kernel: BTRFS info (device md13: state EA): forced readonly
P.S. disk14 is also showing issues now:
Nov 6 17:13:46 1mehien kernel: BTRFS info (device md14: state EA): forced readonly
So that applies to both disks now.
Doesn't look like an Unraid problem:
Nov 6 20:31:56 ser kernel: nvme nvme0: Device not ready; aborting reset, CSTS=0x1
Nov 6 20:31:56 ser kernel: nvme nvme0: Removing after probe failure status: -19
IIRC another user had a similar intermittent issue with the same or a similar Samsung device; try power cycling the server and/or downgrading back to v6.11.0 to see if it's still detected.
Do you mean you are using the disk share for transfer 1? Disk shares are always faster, but a user share should still be much faster than that; diagnostics might show something.
We usually recommend rebuilding, as long as the emulated disk is mounting and the contents look correct. The other option is doing a new config, but then you'll need to run a parity check, and that will take as long as the rebuild.