LSI 9205-8i or LSI 9207-8i for a PCIe 3.0 x4 slot



Hi, I have a motherboard with a physical PCIe x16 slot that runs at PCIe 3.0 x4 speed. I'm not sure whether I should get the LSI 9205-8i or the LSI 9207-8i controller. The LSI 9205-8i is a PCIe 2.0 x8 card and the LSI 9207-8i is a PCIe 3.0 x8 card; both use an x8 connector. I would like to get the 9207-8i to be future-proof, but I'm not sure if the 9205-8i would have a better chance of working. Any advice would be greatly appreciated. Thanks!

 

David
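For a rough sense of the trade-off, here is a back-of-the-envelope sketch in Python (an illustration, not from the thread itself; the per-lane figures are approximate usable bandwidth after encoding overhead, and the min() rule is just standard PCIe link negotiation):

```python
# Link math for the two cards in a PCIe 3.0 x4 slot. Per-lane figures
# are approximate usable bandwidth after encoding overhead
# (PCIe 2.0 ~500 MB/s, PCIe 3.0 ~985 MB/s), not exact values.

PER_LANE_MBPS = {2: 500, 3: 985}

def negotiated_bandwidth(card_gen, card_lanes, slot_gen, slot_lanes):
    """A PCIe link trains to the lower generation and narrower width."""
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return PER_LANE_MBPS[gen] * lanes

# Both cards use an x8 connector; the slot is x16-sized but wired x4.
for name, gen in [("LSI 9205-8i (PCIe 2.0 x8)", 2),
                  ("LSI 9207-8i (PCIe 3.0 x8)", 3)]:
    mbps = negotiated_bandwidth(gen, 8, 3, 4)
    print(f"{name}: ~{mbps} MB/s total, ~{mbps // 8} MB/s per drive")

# LSI 9205-8i (PCIe 2.0 x8): ~2000 MB/s total, ~250 MB/s per drive
# LSI 9207-8i (PCIe 3.0 x8): ~3940 MB/s total, ~492 MB/s per drive
```

In that x4 slot the 9207-8i has roughly double the usable bandwidth, which is what the replies below lean on.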

19 hours ago, davidst95 said:

Hi, I have a motherboard with a physical PCIe x16 slot that runs at PCIe 3.0 x4 speed. I'm not sure whether I should get the LSI 9205-8i or the LSI 9207-8i controller. […]

I also recommend the 9207-8i - I bought mine for only €109.

A very good price for an 8-port controller, and it's fully compatible with Unraid 😉

But be careful: in a normal PC case this controller needs active cooling, because it's designed for a server case with strong airflow.

The minimum airflow is 200 LFM (max. operating temp 55 °C / power consumption ~9.8 W).
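To relate that 200 LFM spec to the CFM figure fan datasheets quote, here is a minimal conversion sketch (an illustration; the 40 mm duct size is an assumption matching a Noctua NF-A4x10-class fan, not a value from the card's documentation):

```python
# Volumetric flow (CFM) = air velocity (LFM) x cross-section (sq ft).
# The 40 mm x 40 mm duct is an assumed fan footprint, not an LSI spec.

MM_PER_FOOT = 304.8

duct_side_mm = 40.0
area_sq_ft = (duct_side_mm / MM_PER_FOOT) ** 2
required_cfm = 200 * area_sq_ft          # 200 LFM from the card docs

print(f"Duct area: {area_sq_ft:.4f} sq ft")       # ~0.0172 sq ft
print(f"Required flow: {required_cfm:.1f} CFM")   # ~3.4 CFM
```

Any small fan that actually pushes a few CFM straight across the heatsink should clear that bar; compare against the chosen fan's datasheet.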

 

I am using this 3D-printed bracket for the Noctua NF-A4x10 FLX from Shapeways:

https://www.shapeways.com/product/NET3LH5QP/fan-bracket-for-lsi-9207-8i?productConfiguration=104311341&etId=192375185&utm_source=automated-contact&utm_medium=email&utm_campaign=order-shipped&utm_content=5

 

Noctua fan braket.jpg

On 10/12/2019 at 6:54 AM, Zonediver said:

I also recommend the 9207-8i - I bought mine for only €109. […] In a normal PC case this controller needs active cooling, because it's designed for a server case with strong airflow. […]

Thanks for the reply. That's good to know about the heat. Is there the same heat issue with the LSI 9205-8i? Also, I would put the controller in the last PCIe slot - could I put a 120 mm fan on the bottom of the case and have it blow up onto the card? Thanks again!

 

David

  • 2 years later...
16 hours ago, dopeytree said:

Hi gang, can you confirm whether the LSI 9207-8i is a PCIe 3.0 card? What kind of speeds do you guys get?

It is PCIe 3.0. You should expect 400 MB/s+ per device with 8 devices, assuming no DMI bottlenecks - more than enough for 8 HDDs. With an x1 link there would be a bottleneck, if it works at all; I remember some LSI HBAs not working at x1.
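A quick sketch of where those numbers come from (an illustration assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane; the 400 MB/s+ figure lines up with an x4 link):

```python
# Per-drive bandwidth at different PCIe 3.0 link widths, assuming the
# approximate ~985 MB/s usable per lane and the full 8 drives active.

PCIE3_LANE_MBPS = 985
DRIVES = 8

for lanes in (1, 2, 4, 8):
    total = PCIE3_LANE_MBPS * lanes
    print(f"PCIe 3.0 x{lanes}: ~{total} MB/s total, "
          f"~{total // DRIVES} MB/s per drive across {DRIVES} drives")

# x1: ~123 MB/s per drive -> a real bottleneck even for HDDs
# x4: ~492 MB/s per drive -> the 400 MB/s+ figure, plenty for HDDs
# x8: ~985 MB/s per drive -> the card's full link
```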

 

 

  • 7 months later...
On 10/10/2022 at 1:26 AM, JorgeB said:

It is PCIe 3.0. You should expect 400 MB/s+ per device with 8 devices, assuming no DMI bottlenecks […]

 

 

Sorry for the necro.
I've been searching Google for a while now and can't actually find an answer.

I have a couple of LSI 9207-8i's and am thinking about rebuilding my entire system because of Unraid stability problems.
I also want to run a GPU for VMs in the future, plus 1-2 extra PCIe slots for networking/drives.

 

A lot of the motherboards I am looking at for AM5 have PCIe 4.0 x16 slots running at x1 speeds. The HBAs are PCIe 3.0.

 

My question: if I plug a PCIe 3.0 device into a slot running at PCIe 4.0 x1, will I get 4.0 x1 performance or 3.0 x1 performance?
That's the difference between potentially bottlenecked and 'very much bottlenecked' during parity checks/rebuilds.

Thanks!

21 hours ago, RaidUnnewb said:

My question: if I plug a PCIe 3.0 device into a slot running at PCIe 4.0 x1, will I get 4.0 x1 performance or 3.0 x1 performance? […]

What motherboard & CPU do you have?
That may help us shed some light on recommendations for you.

On 5/27/2023 at 12:50 PM, boomam said:

What motherboard & CPU do you have? […]

I have a 7950X and will be purchasing a motherboard - or maybe I'll use my current one and buy a 7800X3D, if I can figure out how to get all the PCIe slots I need working.

 

Will need an AM5 board with 4 slots: 1 for the GPU, 1 for 10-gig networking, and 2 for the HBAs. The case has room for 15 HDDs, and each HBA can take 6.
Each HBA is a PCIe x8 card, so I need at least x4-size slots to put them into.

All the boards I come across have 3 x16-size slots running in PCIe 4.0 x16/x2/x2 mode or something like that, plus one x1-size slot running at PCIe 3.0 x1.
I can't find something with the slots I need, and the 7950X is an expensive chip to just not use.

On 5/29/2023 at 10:44 PM, RaidUnnewb said:

Will need an AM5 board with 4 slots: 1 for the GPU, 1 for 10-gig networking, and 2 for the HBAs. […]

So you basically need 3x PCI-E 16x slots, with two running at 8x so your two HBAs can run at max speed?

Note, the 9207's are PCI-E x4 cards....

The issue you are going to have is that desktop CPUs have a limited number of PCI-E lanes to go around.

Almost everything on the board will use a lane in some capacity. So you're going to need to find a board that has the PCI-E/M.2 layout you need, whilst sacrificing other features, like Wi-Fi. Finding one with just the right combo will likely be an exercise in reading spec sheets and block diagrams.
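As a hedged illustration of that lane arithmetic (the ~24 usable lanes is an approximate AM5 figure, and the wishlist mirrors the build described above - always confirm against the actual board's block diagram):

```python
# Rough lane-budget tally. The ~24 usable CPU lanes is an approximate
# AM5 figure (plus a x4 chipset downlink); the wishlist is assumed from
# the build in this thread, not from any specific board's manual.

CPU_USABLE_LANES = 24

wishlist = {
    "GPU (x16 slot, tolerates x8)": 16,
    "HBA #1 (x8 card)":              8,
    "HBA #2 (x8 card)":              8,
    "10GbE NIC":                     4,
}

total = sum(wishlist.values())
print(f"Requested {total} lanes vs ~{CPU_USABLE_LANES} usable "
      f"-> short by {total - CPU_USABLE_LANES}")
# Requested 36 lanes vs ~24 usable -> short by 12
# Dropping the GPU to x8 and each HBA to x4 (8 + 4 + 4 + 4 = 20) fits,
# at the cost of some headroom on the HBAs.
```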

 

A quick cursory search on PCPartPicker shows that the "ASRock X670E PG Lightning" could be worth looking at perhaps.

 

2.5Gbps networking

3x PCI-E x16 slots

  • 1 that runs at 16x
  • 1 that runs at 4x 
  • 1 that runs at 1x

2x m.2 slots that aren't listed as sharing lanes with anything elsewhere.


Overall that gives you the ability to have 2x 9207 HBAs running, a spare 1x slot and 3x m.2s.

Would need to check the block diagram to be 100% sure, but sounds like it could work, perhaps.

 

 

27 minutes ago, JorgeB said:

They are x8.

 

 

Well corrected.

Regardless - we're still looking at ~4 GB/s of throughput on an x4 electrical connection. If you assume a fully loaded 9207-8i card - 8 drives at 500-550 MB/s each - the x4 connector is not going to limit said throughput, as you are at the theoretical max, or as good as, of a SATA connector.
If using SAS, though, perhaps.
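A quick sanity check of that claim, using approximate figures (an illustration, not a measurement):

```python
# Eight drives flat out versus a PCIe 3.0 x4 electrical link.

DRIVES = 8
SATA_MAX_MBPS = 550          # ~practical ceiling of a SATA III link
LINK_MBPS = 985 * 4          # ~usable PCIe 3.0 x4 bandwidth

demand = DRIVES * SATA_MAX_MBPS
print(f"Worst case: {demand} MB/s demanded vs ~{LINK_MBPS} MB/s link")
# Worst case: 4400 MB/s demanded vs ~3940 MB/s link
# HDDs sustain roughly 250-280 MB/s, so 8 x 280 = 2240 MB/s fits with
# room to spare; only 8 saturated SSDs (or fast SAS) would hit the cap.
```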

On 6/2/2023 at 7:41 AM, boomam said:

Regardless - we're still looking at ~4 GB/s of throughput on an x4 electrical connection. […]

Yes.

Thank you, you summed up my dilemma perfectly lol. Though I am further screwed, because I also want to add my old 1070 GPU to the x16 slot for VM use. So having 4 slots running at x4 speeds is all I need - they just don't make 'em.

But the key to all this, and I hope it helps other people googling, is that a PCIe 3.0 x8 (or whatever) device hooked to a slot running at PCIe 4.0 x1 will only work at PCIe 3.0 x1 speed - the link trains to the lowest generation and narrowest width the two ends have in common, and this card only speaks 3.0.
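That negotiation rule, expressed as a minimal sketch (the per-lane figures are approximations; the min() behaviour is the standard PCIe link-training rule):

```python
# A link trains to the highest generation and widest width BOTH ends
# support, i.e. min() of each side. Per-lane MB/s values are approximate.

LANE_MBPS = {1: 250, 2: 500, 3: 985, 4: 1969}

def link_speed(card_gen, card_width, slot_gen, slot_width):
    gen = min(card_gen, slot_gen)        # 3.0 card in a 4.0 slot -> 3.0
    width = min(card_width, slot_width)  # x8 card on x1 wiring -> x1
    return LANE_MBPS[gen] * width

print(link_speed(3, 8, 4, 1))  # 985 -> ~985 MB/s for the whole HBA
```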

 
