Here is how this whole disable and emulation thing works.
When a write to a disk fails, Unraid disables the disk.
If the disk is a data disk, the write is still used to update parity. So that failed write can be recovered when the disabled disk is rebuilt. The disk is disabled because it is no longer in sync with parity.
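If it helps to see the arithmetic, here is a rough sketch of the single-parity (XOR) math only; it is not Unraid's actual driver code, just an illustration of how a write can be folded into parity even when the data disk itself rejects it:

```python
# Illustrative sketch of single-parity (XOR) math -- not Unraid's actual code.
def update_parity(parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """new_parity = parity XOR old_data XOR new_data.
    Only the old contents of the block and the new data being written are needed,
    so parity can absorb the write even if the write to the data disk fails."""
    return bytes(p ^ o ^ n for p, o, n in zip(parity, old_data, new_data))
```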
After a disk is disabled, the actual disk is not used again until it is rebuilt (or, in your case, until you do a New Config; see below). Instead, the disk is emulated by reading all the other disks to get its data. The emulated disk can be read, and it can also be written by updating parity. So writes to the emulated disk continue even while the disk is disabled, and those writes can be recovered by rebuilding the disk from the parity calculation.
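Again just illustrating the XOR math (not actual Unraid code), reconstructing a block of the emulated disk looks roughly like this:

```python
# Sketch of emulation for a single-parity array -- not Unraid's actual code.
def emulate_disk(parity: bytes, other_data_disks: list[bytes]) -> bytes:
    """Reconstruct the disabled disk's block by XOR-ing parity with the same block
    from every other data disk. Reads of the emulated disk come from this
    calculation; writes to it are absorbed by updating parity, so they can be
    recovered when the disk is rebuilt."""
    result = bytearray(parity)
    for disk in other_data_disks:
        for i, b in enumerate(disk):
            result[i] ^= b
    return bytes(result)
```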
Rebuilding the disk is the usual way to recover from this, because the actual disk is no longer in sync with parity: parity contains writes that happened while the disk was disabled.
It is also possible to enable all disks again by setting a New Config and rebuilding parity, thus getting parity back in sync with all the data disks in the array. But any writes to that disk that happened with the disk disabled are lost when you take that option.
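For the New Config route, parity is simply recomputed from whatever is actually on the data disks. A rough sketch of that, again only the XOR math and not actual Unraid code:

```python
# Sketch of the New Config / rebuild-parity route for a single-parity array.
def rebuild_parity(data_disks: list[bytes]) -> bytes:
    """Recompute parity from the actual contents of every data disk.
    Any write that only ever landed in parity (a write to an emulated disk that
    the real disk never received) is discarded here, which is why it is lost."""
    parity = bytearray(len(data_disks[0]))
    for disk in data_disks:
        for i, b in enumerate(disk):
            parity[i] ^= b
    return bytes(parity)
```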
In your case, the actually failing disk14 was contributing bad data to the emulation of those disabled disks. That resulted in those emulated disks being unmountable. But the actual disks were still mountable, as we discovered. Technically, parity is out of sync with those disks, but maybe not by much. The rebuild of disk14 is relying on that "not much".
One final note. If a read from a disk fails, Unraid will try to get its data from the parity calculation by reading all the other disks, and then try to write that data back to the disk. If that write fails, the disk is disabled. So, it is possible for a failed read to cause a failed write that disables the disk.
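A hypothetical sketch of that read-error path is below; the names (xor_blocks, write_back) are made up for illustration and are not Unraid's API:

```python
# Hypothetical sketch of the read-error path -- names invented for illustration.
def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def handle_read_error(parity: bytes, other_disks: list[bytes], write_back) -> str:
    """On a read error: rebuild the block from parity plus all the other disks,
    then try to write it back to the failing disk (write_back stands in for that
    operation). If the write-back also fails, the disk is disabled."""
    reconstructed = xor_blocks([parity] + other_disks)
    if write_back(reconstructed):
        return "recovered"   # read error corrected in place, disk stays enabled
    return "disabled"        # the failed read led to a failed write, disk disabled
```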