No help for the original issue, but I think it's critical that you understand why you can still see and work with a disabled disk, because this ability is what allows unraid to rebuild a single failed disk. If you don't understand it, you won't be in a good position to work with it.

Whenever a write to a disk fails, unraid disables that disk and stops sending data to it. Instead, ALL operations aimed at the disabled disk are redirected to the phantom disk that unraid maintains through parity calculations spanning the entire array. You could continue to operate this way, but if another disk fails, you immediately lose access to all the data on BOTH failed disks. That's why it is critical to deal with a single disk failure promptly, and to never run an unraid array (or any array, really) with a disk you suspect is bad.

When I first started playing with unraid, I figured: it's fault tolerant, it can rebuild a failed disk no problem, so why should I care about the health of my disks? If one fails, unraid will tell me and fix it. WRONG ATTITUDE. It bit me big time: one disk failed, another questionable disk acted up during the rebuild, and I lost some data. I know that doesn't apply to you right now; I'm just throwing out the info in general.
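To make the phantom disk idea concrete, here's a minimal sketch of how single (XOR) parity works, which is the principle behind unraid's single-parity protection. The function names and data are purely illustrative, not unraid's actual code: XORing all the data disks together yields the parity disk, and XORing the surviving disks with parity reproduces any one missing disk on the fly.

```python
from functools import reduce

def parity_of(blocks):
    """XOR corresponding bytes of all blocks together (single parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(surviving_blocks, parity_block):
    """Recover a missing disk's block from the survivors plus parity."""
    return parity_of(surviving_blocks + [parity_block])

# Three hypothetical data disks, one block each:
d1, d2, d3 = b"\x01\x02", b"\x0f\xf0", b"\xaa\x55"
p = parity_of([d1, d2, d3])

# Disk 2 "fails" -- its contents can still be computed from the others:
assert reconstruct([d1, d3], p) == d2
```

This is also why a second failure is fatal: the math can only solve for one unknown disk at a time, so with two disks missing there's no longer enough information to reconstruct either of them.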
As to your specific problem: try removing two of your AOC controller cards and operating off just one for a troubleshooting run, since with your limited drive count you should have enough ports to run that way. I suspect the three identical cards are causing some sort of timing issue.