Unfortunately I don't have access to another suitable chassis to try as a host. I did find a spare flash drive, though, and loaded up a copy of Unraid 6.8.3 (kernel 4.19.107). The card still doesn't work there, but I get a different failure after initialization in the logs:
As an interesting experiment, I then added mpt3sas.max_queue_depth=10000 to my syslinux.cfg for 6.8.3 (the exact stanza is quoted near the end of this post). With that set, the controller actually appears to initialize successfully, without the crash and stack trace, but the disks connected to it still aren't visible:
I'm actually pretty confused by that output: the two disks initialized at the bottom of the block, sdf and sdg, after the "May 5 12:23:53 Tower kernel: mpt2sas_cm1: sending port enable !!" line, are both attached to the internal Intel controller, not the external LSI one.
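For anyone who wants to double-check which controller a given disk hangs off, resolving its sysfs path shows the PCI device it sits behind (the device names and PCI address below are just illustrative, not copied from my system):

    readlink -f /sys/block/sdf
    # e.g. /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdf
    # here 0000:00:1f.2 would be the onboard Intel AHCI controller, not the LSI HBA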
For comparison, below is the same block from the SAS2308, which is working:
All four disks in this log excerpt (sdb through sde) are actually attached to the LSI2308. The key difference, as far as I can tell, is that on the 2308 the "sending port enable !!" line is immediately followed by "May 5 12:23:53 Tower kernel: mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x50030480301821cd), phys(8)" and "May 5 12:23:53 Tower kernel: mpt2sas_cm0: port enable: SUCCESS", whereas the 2116 never logs the success message.
To be clear, all of the log excerpts in this post are from 6.8.3: the first with an out-of-the-box syslinux.cfg, and the last two with syslinux.cfg modified to set max_queue_depth=10000, as shown below.
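For reference, the modified boot stanza in my syslinux.cfg looks roughly like this; it's the stock Unraid entry with only the append line changed:

    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot mpt3sas.max_queue_depth=10000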
However, even with all of that info, I still don't have a good sense of what the underlying failure state actually is. Does this just make it more likely that the problem is with the card itself?
Thanks again for your assistance.