pro_con

Everything posted by pro_con

  1. Yep, on downgrading the firmware to 20.00.07.00, the disks now show up correctly in 6.9.2. Super weird, since the .11 firmware is what it shipped with from the reseller. Regardless, I really appreciate your help, and I'll go ahead and mark this resolved. (For reference, a firmware-version check from the console is sketched after this list.)
  2. Unfortunately I don't have access to another suitable chassis to try as a host. I did find a spare flash drive and loaded up a copy of Unraid 6.8.3 (kernel 4.19.107), and the controller still doesn't work there, but I get a different failure after initialization in the logs:

     As an interesting experiment, I added mpt3sas.max_queue_depth=10000 to my syslinux.cfg for 6.8.3, at which point the controller actually appears to initialize successfully, without the crash and stack trace, but the disks connected to it still aren't visible:

     I'm actually pretty confused by that output - the two disks it initializes at the bottom of the block, sdf and sdg, after the "May 5 12:23:53 Tower kernel: mpt2sas_cm1: sending port enable !!" row, are both attached to the internal Intel controller, not the external LSI one (a sysfs check for this mapping is sketched after this list). Below is the same block from the SAS2308, which is working, for comparison:

     All four disks in this log excerpt (sdb through sde) are attached to the LSI SAS2308. The key difference, as far as I can tell, is that for the 2308 the "sending port enable !!" command is immediately followed by "May 5 12:23:53 Tower kernel: mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x50030480301821cd), phys(8)" and "May 5 12:23:53 Tower kernel: mpt2sas_cm0: port enable: SUCCESS", whereas the 2116 never reaches the success message. Again, all of the log excerpts in this post are from 6.8.3 - the first with the out-of-box syslinux.cfg and the last two with syslinux.cfg modified for max_queue_depth=10000.

     With all of that information, though, I still don't have a clear sense of what the underlying failure actually is. Does it just make it more likely that the problem is with the card itself? Thanks again for your assistance.
  3. Hi, I'm setting up a new Unraid instance hosted in an HP Z820 workstation. I have drives connected to the internal Intel disk controller and to an internal LSI SAS2308 on the motherboard, and all of those appear correctly. I also have disks attached to an LSI SAS9201-16e (SAS2116), and those drives don't appear in Unraid at all. The controller itself shows up in Unraid's system devices, assigned to an IOMMU group, and I can see in the logs where it tries to initialize the controller, but without success:

     I've confirmed using sas2flsh that both HBAs are running firmware version 20.X in IT mode - the 2308 has firmware and BIOS, and the 2116 is firmware only. Everything I've found indicates that the 2116 doesn't need the BIOS to work correctly as an HBA in Unraid, though. I have also tried attaching the 2116 to multiple PCIe slots with no change.

     I've found posts from other people saying that this is an issue with the Linux kernel after 5.8 and that it is fixed by max_queue_depth=10000, but my system has a copy of /etc/modprobe.d/mpt3sas.conf with the correct content, and I've also tried adding the parameter directly to syslinux.cfg, with no success (both forms are sketched after this list). I would prefer not to roll back to a previous version, since I plan to use the Nvidia driver integration introduced in 6.9, but if that's the next thing I have to do for testing, I'm willing to. All of the other posts about this issue that don't refer to max_queue_depth seem to be for non-LSI HBAs, so I'm really not sure where to go next - any info or assistance would be greatly appreciated. Thank you!
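
Since the resolution in post 1 came down to which P20 firmware build the SAS9201-16e was running, a quick way to read the installed firmware and BIOS versions from a booted Unraid console is the Linux build of the same LSI flashing tool, sas2flash. This is a generic sketch, not output from this system; the controller index 0 is just an example.

    # List every LSI SAS2 HBA with its firmware, NVDATA, and BIOS versions
    sas2flash -listall

    # Detailed information (board name, IT/IR product ID) for one controller
    sas2flash -c 0 -list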
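
On the controller confusion in post 2: rather than inferring from the mpt2sas_cm0/mpt2sas_cm1 log prefixes which HBA a given sdX device hangs off, the mapping can be read directly from sysfs. The commands below are a generic Linux sketch runnable from the Unraid console; the host numbers and sdf are only examples, not values taken from these logs.

    # Show which driver owns each SCSI host (ahci = onboard Intel, mpt3sas = LSI SAS2/SAS3)
    for h in /sys/class/scsi_host/host*; do
        echo "$h -> $(cat "$h"/proc_name)"
    done

    # Trace one disk back to its host adapter and PCI address (sdf is an example)
    readlink -f /sys/block/sdf/device

    # Cross-reference the LSI HBAs' PCI addresses
    lspci -nn | grep -i -e lsi -e sas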
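
And for the max_queue_depth workaround described in post 3, the two usual places to set it are a modprobe.d file and the kernel append line in syslinux.cfg. The snippets below are a minimal sketch of both forms, not copied from this system's configuration; the label/append lines follow the stock Unraid flash layout, so adjust them to match your own file. The last two commands confirm, after a reboot, whether the parameter actually reached the driver.

    # /etc/modprobe.d/mpt3sas.conf
    options mpt3sas max_queue_depth=10000

    # /boot/syslinux/syslinux.cfg - same parameter on the kernel command line
    label Unraid OS
      kernel /bzimage
      append initrd=/bzroot mpt3sas.max_queue_depth=10000

    # Verify after reboot
    cat /proc/cmdline
    cat /sys/module/mpt3sas/parameters/max_queue_depth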