
Unraid not seeing drives after connecting Intel SAS expander



Posted

Today I wanted to take some time to do some upgrades to my Unraid NAS.

 

I already had my current drives connected like this:

 

2x SATA SSDs connected directly to the motherboard

2x SATA SSDs + 6x SATA HDDs connected to an LSI 9207-8i (IBM M5110)

 

I wanted to add 2x NVMe drives and connect the LSI to an expander (an Intel RAID Storage Expander RES2SV240, E91267-203).

 

So I shut everything down, connected the expander via SATA power, attached all the mini SAS cables to the expander, and ran a mini SAS cable from the expander to the LSI.

I also installed the 2 NVMe drives.

 

I booted the system and none of the drives connected through the LSI + expander showed up, so I tried changing the PCIe generation settings in the BIOS. They look like this:

 

[Photos of the BIOS PCIe slot configuration screens]

 

I basically tried forcing the slot to Gen 3 and switching the lane configuration around, but had no success.

I also tried taking out the GPU and powering the expander from the PCIe slot instead of Molex, but with no results.

 

In the end I took out the expander, thinking it was broken, and connected the cables back to the LSI, but still no disks show up.
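(A minimal sketch of what could be checked from the Unraid console to tell a dead HBA from a cabling problem; the grep pattern is an assumption, since the card may identify itself as LSI, Broadcom or SAS2308:)

# Check whether the HBA still shows up on the PCIe bus at all
lspci | grep -iE 'lsi|broadcom|sas'
# List every block device the kernel currently sees
lsblk -o NAME,SIZE,MODEL,TRAN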

 

Do you have any recommendations on what I can try? Is it possible I accidentally broke the LSI and need a new one? Should I just buy a 24i and attach everything to that?

 

I have attached diagnostics from the first time the drives did not show up (or from just after a reboot, I can't quite remember) and the latest diagnostics.

 

Thanks

 

[Photos of the drive bays]

Row 4, columns 2+3, and row 5 are the hard disks. I had moved the SSDs to row 1, but I moved them back to row 4, columns 1+4, after this ordeal.

tower-diagnostics-20241020-1612.zip tower-diagnostics-20241020-1730.zip

Posted (edited)

You can buy an NVMe-to-PCIe x1 adapter and use the PCIe x1 slot (PCIe_E4), but it will limit the throughput. (That frees up M2_2, so the HBA can still be used.)
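Rough numbers for that throughput hit (a sketch, since I am not sure which PCIe generation the x1 slot runs at): a single PCIe 2.0 lane gives about 500 MB/s usable and a PCIe 3.0 lane about 985 MB/s, versus roughly 3.9 GB/s for an NVMe drive in a 3.0 x4 slot, so the adapter would cap sequential speed at roughly a quarter to an eighth of what the drive can do.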

 

[attached image]

Edited by Vr2Io
Posted

Thanks @Vr2Io, I'll see if it's needed. In the end, how much speed would I lose? I bought an NVMe drive to get better performance out of the containers.

For now I think I'll just run one, since that was the problem: taking out the second NVMe drive solved the issue.

Posted

I usually set up a btrfs pool, so I imagine it would be bottlenecked by the slower drive. I don't know how much difference it would make.

Do you recommend doing something different?

 

Maybe I could just keep one, but that's not very comfortable.

Posted

I suppose you are talking about a mirror pool. If speed is important, then I would go the standalone route and do a periodic sync between the two NVMe drives.

 

If speed is not a big concern, then mirroring both as usual is also fine.
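For what it is worth, a minimal sketch of that periodic sync (the mount points are placeholders for your actual pool names, and on Unraid the script could be scheduled with the User Scripts plugin):

#!/bin/bash
# Mirror the primary NVMe pool onto the second NVMe drive.
# /mnt/nvme_main and /mnt/nvme_backup are placeholder paths -- use your real pool mounts.
rsync -a --delete /mnt/nvme_main/ /mnt/nvme_backup/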
