giosann Posted October 20

Today I wanted to take some time to do some upgrades to my Unraid NAS. My current drives were connected like this:

2x SATA SSDs connected directly to the motherboard
2x SATA SSDs + 6x SATA HDDs connected to an LSI 9207-8i (IBM M5110)

I wanted to add 2x NVMe drives and connect the LSI to an expander (the Intel RAID Storage Expander RES2SV240 E91267-203). So I shut everything down, powered the expander via SATA power, attached all the mini-SAS cables to the expander and ran a mini-SAS cable from the expander to the LSI. I also installed the two NVMe drives.

When I booted the system, none of the drives connected through the LSI + expander showed up, so I tried changing the PCIe generation settings in the BIOS. They present like this:

I basically tried setting the generation to 3 and switching around the lane configuration, but had no success. I also tried taking out the GPU and powering the expander from the PCIe slot instead of Molex, with no results. In the end I took out the expander, thinking it was broken, and connected the cables back to the LSI, but still no disks show up.

Do you have any recommendation on what I can try? Is it possible I accidentally broke the LSI and need a new one? Should I just buy a 24i and attach everything to that?

I have attached diagnostics from the first time the drives did not show up (or after a reboot, I can't quite remember) and the latest diagnostics. Thanks.

Row 4 columns 2+3 and row 5 are the hard disks. I had moved the SSDs to row 1 but moved them back to row 4 columns 1+4 after this ordeal.

tower-diagnostics-20241020-1612.zip
tower-diagnostics-20241020-1730.zip
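For anyone hitting the same wall: before swapping hardware, it is worth checking from the Unraid terminal whether the HBA is still being enumerated on the PCIe bus at all (if it is not, the expander and cabling are irrelevant). Below is a minimal sketch, reading standard Linux sysfs paths only; the LSI/Broadcom PCI vendor ID 0x1000 is assumed for the 9207-8i, and nothing here is specific to this particular board.

```python
#!/usr/bin/env python3
# Quick look at which PCI storage controllers and block devices the kernel sees.
# Reads only standard sysfs paths; run from the Unraid terminal as root or not.
from pathlib import Path

def pci_storage_controllers():
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        pci_class = (dev / "class").read_text().strip()
        # PCI class 0x01xxxx = mass storage controller (SATA, SAS HBA, NVMe, ...)
        if pci_class.startswith("0x01"):
            tag = "  <- LSI/Broadcom" if vendor == "0x1000" else ""
            print(f"{dev.name}  vendor={vendor}  class={pci_class}{tag}")

def block_devices():
    for dev in sorted(Path("/sys/block").iterdir()):
        if dev.name.startswith(("sd", "nvme")):
            sectors = int((dev / "size").read_text())  # size in 512-byte sectors
            print(f"{dev.name}: {sectors * 512 / 1e9:.1f} GB")

if __name__ == "__main__":
    print("PCI storage controllers:")
    pci_storage_controllers()
    print("\nBlock devices:")
    block_devices()
```

If the LSI does not appear in the first list, the card (or its slot) is not being detected, which points at the lane-sharing issue discussed below rather than at the expander or the disks.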
giosann (Author) Posted October 20

Looking at page 16 of the manual, it seems like I can't use the third PCIe slot if the second M.2 slot is populated? Is there anything I can do? I wanted to put system and appdata on NVMe... Maybe I can keep just one and back it up.
Vr2Io Posted October 20 (edited)

You can buy an NVMe to PCIe x1 adapter and use the PCIe x1 slot (PCIe_E4), but it will limit the throughput. (That leaves M2_2 free, so the HBA slot can be used.)

Edited October 20 by Vr2Io
giosann (Author) Posted October 20

Thanks @Vr2Io, I'll see if it's needed. In the end, how much speed will I lose? I bought the NVMe drives to get better performance out of the containers. For now I'll just run one, I think, since that was the problem and taking out the second NVMe drive solved the issue.
Vr2Io Posted October 20 (edited)

It will max out at PCIe 3.0 x1, which is still around 1 GB/s. The other onboard M.2 slot still runs at full speed.

Edited October 20 by Vr2Io
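For rough numbers: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so a single lane tops out just under 1 GB/s before protocol overhead. A quick back-of-the-envelope check (plain arithmetic, no board-specific assumptions):

```python
# Rough PCIe per-lane bandwidth estimate (ignores packet/protocol overhead).
gen3_rate_gtps = 8.0      # PCIe 3.0 signalling rate: 8 GT/s per lane
encoding = 128 / 130      # 128b/130b line encoding
lanes = 1                 # the x1 slot feeding the adapter

usable_gbps = gen3_rate_gtps * encoding * lanes    # gigabits per second
print(f"~{usable_gbps / 8:.2f} GB/s per x1 link")  # prints ~0.98 GB/s
```

So the adapter-mounted drive is capped around 985 MB/s, while the drive in the remaining onboard M.2 slot keeps its full x4 link.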
giosann (Author) Posted October 20

I usually create a btrfs pool, so I imagine it would be bottlenecked by the slower drive. I don't know how much difference it would make. Do you recommend doing something different? Maybe I could just keep one, but that's not very convenient.
Vr2Io Posted October 21

I suppose you are talking about a mirror pool. If speed were important, I would go the standalone route and do a periodic sync between the two NVMe drives. If speed is not too much of a concern, then mirroring both as usual is also fine.
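If you go the standalone route, the periodic sync can be as simple as a scheduled rsync job (for example via the User Scripts plugin or cron). A minimal sketch follows; /mnt/nvme_main and /mnt/nvme_backup are placeholder mount points, not your actual pool names, so substitute whatever your pools are called.

```python
#!/usr/bin/env python3
# One-way sync from the primary NVMe pool to the second one.
# SRC and DST are placeholders - replace them with your real pool mount points.
# Run on a schedule (cron / User Scripts plugin).
import subprocess
import sys

SRC = "/mnt/nvme_main/"    # trailing slash: copy the contents, not the directory itself
DST = "/mnt/nvme_backup/"

# -a preserves permissions/ownership/timestamps, --delete mirrors removals
result = subprocess.run(["rsync", "-a", "--delete", SRC, DST])
sys.exit(result.returncode)
```

The trade-off versus a btrfs mirror is that the second copy is only as fresh as the last sync, but both drives run at their own full link speed in the meantime.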