
No Hard Drives 9211 IT Mode (SOLVED)



Hello Everyone,


Long time fan, first post!


Let me start by saying I currently have two servers. One of them has been running Unraid flawlessly for a few months now; I love the software and all it can do!


I am, however, having issues with my main server. It is a 4U server with a 16-bay hard drive backplane. It currently has:

  • SAS9211-8i, flashed to IT mode, P20 firmware (9211-8i_Package_P20_IR_IT_FW_BIOS_for_MSDOS_Windows)
  • Intel RAID Expander Card RES2SV240 (not flashed or updated)


Currently on the 9211 I have one cable plugged into the backplane (4 hard drives) and one cable running to the expander (12 hard drives), mostly for testing. When I go into the 9211 interface it sees all 16 hard drives (both the drives on the expander and those connected directly to the card). I have set the card not to post to the BIOS, per the flashing instructions; before I changed that, the BIOS could see all the hard drives!


When I boot up Unraid it only sees the SSD that is plugged directly into the motherboard. In the log files Unraid can see both cards. I have attached diagnostics in hopes someone smarter than me can find something I did wrong.








From the log, the driver can't load:


  • Aug  9 20:40:26 Tower kernel: mpt2sas_cm0: unable to map adapter memory!  or resource not found
  • Aug  9 20:40:26 Tower kernel: mpt2sas_cm0: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:8955/_scsih_probe()!


My server with a 9211 + expander is also set not to post in the BIOS, and it has no problem with unRAID.


From your log, would you try what this line suggests:


  • Aug 9 20:40:26 Tower kernel: pci_bus 0000:00: Automatically enabled pci realloc, if you have problem, try booting with pci=realloc=off
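On Unraid, that kernel parameter goes on the `append` line in /boot/syslinux/syslinux.cfg. A sketch of what the boot stanza might look like with it added (label and defaults vary by Unraid version):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci=realloc=off initrd=/bzroot
```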
1 minute ago, johnnie.black said:

You can also look for a board bios update and try a different PCIe slot if available.



A different PCIe slot is a good idea to test. When writing Linux drivers, I have had hardware that failed to map memory, and the problem was solved by using a different slot or by changing the order of detection in the kernel. But the pci realloc message is also worth following up on.

10 minutes ago, johnnie.black said:

You can also look for a board bios update and try a different PCIe slot if available.



When I was first troubleshooting this issue I did move the RAID card to a different PCIe slot, with the same results.


I added the following to my /boot/syslinux/syslinux.cfg:


  append pci=realloc=off initrd=/bzroot


Per this post 


Now I can see ALL OF MY HARD DRIVES!!!! Thank you everyone for your help.


Quick question, which has been asked before: what is the downside of what I did?




2 minutes ago, kingtony911 said:

What is the downside of what I did?


Most probably none.


The realloc logic handles cards that have resources outside the first 4 GB of memory address space, i.e. outside the range a normal 32-bit application or 32-bit hardware can reach. The realloc can fail if there isn't free space in the required address range. But unRAID runs 64-bit code, and there isn't much hardware that is strictly limited to the 32-bit address range.
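For illustration, the "first 4 GB" boundary is simply 2^32 bytes; a BAR address above it (as printed by `lspci -v`) needs 64-bit addressing, which is what the realloc logic tries to arrange. A quick sketch (the BAR address below is hypothetical):

```shell
# The 32-bit addressing limit is 2^32 bytes = 4 GiB.
limit=$((1 << 32))            # 4294967296
bar=$((0x383ffff00000))       # hypothetical 64-bit BAR address from lspci -v
if [ "$bar" -ge "$limit" ]; then
  echo "BAR lies above the 32-bit boundary"
fi
```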


Switching PCIe slots shouldn't matter either, unless you had to switch to a slot with fewer or slower PCIe lanes, resulting in less bandwidth to the controller card.
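As a rough sanity check on the bandwidth point (approximate figures only: PCIe 2.0, which the 9211-8i uses, carries about 500 MB/s per lane per direction):

```shell
# Approximate per-direction bandwidth at PCIe 2.0 speeds (~500 MB/s per lane)
for lanes in 1 4 8; do
  echo "x${lanes}: $((lanes * 500)) MB/s"
done
```

So even an x4 slot leaves plenty of headroom for a handful of spinning disks.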



