DieFalse Posted October 27, 2021

Background: I currently have 2x MD1200s with 2TB drives, connected to an R720XD via a PERC H800. I am replacing the drives and adding two more MD1200s to the chain. The drives will all be 12TB 12Gb/s SAS3 (48 drives total).

1. Would I be better off replacing the H800 with a different controller, or adding a second card?
2. What would be the optimal connection method/card to get the maximum speed the 12Gb/s drives are capable of?

Most drives will be in the array, with the rest split between unassigned devices and cache pools.
DieFalse Posted October 29, 2021

Any suggestions?
JorgeB Posted October 29, 2021

Probably not many people here use or are familiar with that hardware. You want a SAS3 HBA, and if the MD1200 supports dual link, use two cables to connect it to the HBA.
DieFalse Posted October 29, 2021

Sorry, I should have added this: the drives may be 12Gb/s, but the controller and MD1200s are 6Gb/s, so my maximum bandwidth will be limited by the card(s) and chassis. Would it be best to have 2x MD1200 on one card and 2x on the other, or all 4 on one card?
JorgeB Posted October 29, 2021

12 minutes ago, fmp4m said: So my max bandwidth will be controlled by the card(s) and chassis.

Yes, they will work like SAS2 drives.

12 minutes ago, fmp4m said: Would it be best to have 2xmd1200 on one card and 2x on the other or all 4 on one card?

Took a quick look, and they don't support dual link; there's an out port for daisy chaining, or you can use the second module for redundancy (not supported by Unraid). For best performance you want to connect each MD1200 to a SAS wide port on the HBA, so get a second HBA or one with 4 ports. The bottleneck will then be the PCIe 2.0 slot of the H800; if the board/CPU supports PCIe 3.0, it would be faster with one or two PCIe 3.0 HBAs.
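The arithmetic behind that bottleneck can be sketched roughly. Note the per-lane throughput figures below are general rule-of-thumb assumptions, not numbers from this thread:

```python
# Rough per-drive bandwidth for this topology. Assumed effective rates:
# ~600 MB/s per SAS2 lane (6 Gb/s after 8b/10b encoding) and
# ~500 MB/s per PCIe 2.0 lane.
SAS2_LANE = 600
WIDE_PORT = 4 * SAS2_LANE          # one SFF-8088 cable = 4 lanes
PCIE2_X8 = 8 * 500                 # the H800's host interface

# One MD1200 (12 drives) per wide port:
per_drive_shelf = WIDE_PORT / 12   # 200 MB/s, plenty for spinning disks

# All 48 drives funneled through a single PCIe 2.0 x8 card:
per_drive_card = PCIE2_X8 / 48     # ~83 MB/s, below a large SAS
                                   # drive's typical sequential rate
print(per_drive_shelf, per_drive_card)
```

So each enclosure-to-wide-port link is comfortable on its own; it is the single PCIe 2.0 x8 slot that starves all four shelves at once.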
DieFalse Posted October 29, 2021

52 minutes ago, JorgeB said: Took a quick look and they don't support dual link, there's an out for daisy chain or you can use the second module for redundancy (not supported by Unraid), so for best performance you want to connect each MD1200 to a SAS wide port on the HBA, so get a second HBA or one with 4 ports, bottleneck then will be the PCIe 2.0 slot of the H800, if the board/CPU supports PCIe 3.0 it would be faster with one or two PCIe 3.0 HBAs.

Thanks for looking into this. The MD1200s do daisy chain, up to 10 in a chain. Right now I think it works out to 6 drives per port on the controller: my current H800 port 1 would be drives 1-6 on MD1200 (1) and drives 1-6 on MD1200 (2), and port 2 would be drives 7-12 on each. I'm thinking stacking the two new ones would not benefit me, and I would need a different card or cards. My R720XD can handle PCIe 3.0 x8/x16 easily with multiple cards. Do you have a PCIe 3.0 HBA card (or cards) recommendation? With my risers I can have 3x full height and 3x low profile; only one low-profile slot is populated with a GPU right now (plus an FC16 HBA in a full-height slot).

Also, have you any thoughts on using a SAS switch as an intermediary between HBAs? LSI SAS 6160?
JorgeB Posted October 30, 2021

12 hours ago, fmp4m said: Do you have a PCIe3 HBA card / cards recommendation?

12 hours ago, fmp4m said: Also - have you any thoughts on using a SAS Switch as the intermediary between HBAs? LSI SAS 6160?

Not familiar with those.
DieFalse Posted November 1, 2021

OK, after some research I have acquired 4x MD3200 controllers (the MD3200 is dual link), to benefit now as well as for future upgradability to 12Gb/s.

A. I will likely configure as follows:

1x 9300-8e HBA to 1x MD3200 to MD1200
1x 9300-8e HBA to 1x MD3200 to MD1200

So my server will have two 8e cards, with 2x SFF-8644 to SFF-8088 cables connecting each HBA to its MD3200, and one SFF-8088 cable connecting each MD3200 to its MD1200.

B. My alternative would be all 4 MDs connected to the LSI SAS 6160, with the HBAs also connecting to the 6160. This would keep any device from being chained, and allow dual link to the switch, dual links to the MD3200s, and single links to the MD1200s. The LSI SAS 6160 is basically an external expander, in simple terms, that has multi-path and can even connect multiple hosts to multiple DAS/SANs.

At this point I am looking for 2x 9300-8e cards. (I don't think a 16e would benefit me, and from my understanding two 8e's would eliminate bottlenecking, especially when I later upgrade to a 12Gb/s chassis.)

If you can confirm that my options are good, and whether you think B is better than A, let me know. I trust your judgement way more than my own, and your help was invaluable to my last build (42-bay chassis).
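For what it's worth, option A's throughput can be estimated the same back-of-the-envelope way. The per-lane figures are assumptions (SAS2 lane ~600 MB/s effective, PCIe 3.0 lane ~985 MB/s effective), not vendor specs:

```python
# Rough estimate for option A: one 9300-8e per MD3200+MD1200 pair.
# SAS3 ports negotiate down to SAS2 rates against these enclosures.
SAS2_WIDE_PORT = 4 * 600          # one 4-lane cable at 6 Gb/s per lane
PCIE3_X8 = 8 * 985                # 9300-8e host interface

dual_link = 2 * SAS2_WIDE_PORT    # two cables into the MD3200
drives = 24                       # 12 in the MD3200 + 12 in the MD1200
per_drive = dual_link / drives    # 200 MB/s each if all stream at once

# The PCIe 3.0 x8 slot is no longer the choke point here.
assert dual_link < PCIE3_X8
print(per_drive)
```

Under these assumptions the SAS side saturates well before the PCIe 3.0 slot does, which is the opposite of the H800 situation.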
JorgeB Posted November 2, 2021

I think option A should give you decent bandwidth. With 12 devices per enclosure, the MD1200's single link and daisy chain should not be of great concern, and having the MD3200 on dual link should also leave them with enough bandwidth when both enclosures on the same HBA are being used.
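Once it's cabled up, one way to sanity-check what actually negotiated is to read the SAS phy link rates the Linux kernel exposes under `/sys/class/sas_phy`. A minimal sketch (the tally helper is mine, not part of any tool mentioned in the thread):

```python
# Sketch: confirm negotiated SAS link rates after cabling. Assumes a
# Linux host; /sys/class/sas_phy is populated by the kernel's SAS
# transport class for HBAs driven by e.g. mpt3sas.
from collections import Counter
from pathlib import Path

def summarize_linkrates(rates):
    """Tally negotiated link-rate strings, e.g. '6.0 Gbit'."""
    return Counter(r.strip() for r in rates)

def read_sysfs_linkrates(base="/sys/class/sas_phy"):
    """Read negotiated_linkrate for every SAS phy the kernel exposes."""
    return [p.read_text() for p in Path(base).glob("*/negotiated_linkrate")]

if __name__ == "__main__":
    for rate, count in summarize_linkrates(read_sysfs_linkrates()).items():
        print(f"{count:3d} phys at {rate}")
```

With these enclosures you would expect every phy to report 6.0 Gbit even though the drives and a 9300-8e are 12Gb/s capable.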