manofoz Posted October 8, 2023

Hello,

I have two LSI 9207-8i's running my 16 HDDs. I installed DiskSpeed and saw that one of them was "downgraded", and its drive benchmarks were clearly slower than the other one's:

[DiskSpeed screenshots: fast controller vs. slow controller]

I did some digging and found this command, which elaborated on my issue:

lspci -vv
...
08:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
...
LnkSta: Speed 8GT/s, Width x4 (downgraded) TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

Width x4: I guess the card wants x8, though I did not realize that at the time of purchase. The manual clearly states it supports up to x8, so I am losing some speed, but it's hard to tell how much.

Here are the options I'm considering, but I'd love a second opinion:

- Live with half my drives being slower in a way that is hard to quantify. The same Exos X20 drive benchmarked at 270 MB/s on the x4 card and 280 MB/s on the x16 card, so it doesn't seem that bad.
- Get something like a StorageTekPro STP-9300-16i, which is also x8 but drives 16 disks (not sure how that adds up), and use the true x16 PCIe slot. That card looks like it runs even hotter than the ones I have, which I already mounted fans on, and it would set me back $$ too.
- Get a PCIe-to-SATA card and use that plus my onboard SATA ports. Seems like a cheap option, but then I'd have an even bigger mess of cables than I do already.

My motherboard supports bifurcation to go from x16 to x8/x8, but I think that means setting a single slot into x8/x8 mode, which doesn't make much sense to me. If I could split the 16 lanes across both x16-length slots it would be an easy fix, but I don't think I am that lucky.

Thanks!
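For a rough sense of what the x4 downgrade costs, the LnkSta figures above can be turned into numbers. A back-of-the-envelope sketch, assuming PCIe 3.0 (8 GT/s with 128b/130b encoding) and ignoring packet/protocol overhead, so real throughput would be somewhat lower:

```python
# Back-of-the-envelope PCIe bandwidth estimate for the downgraded link.
# Assumes PCIe 3.0 (8 GT/s per lane, 128b/130b encoding) and ignores
# packet/protocol overhead.

def pcie3_bandwidth_mb_s(lanes: int) -> float:
    """Approximate usable bandwidth in MB/s for a PCIe 3.0 link."""
    gt_per_s = 8e9                 # 8 GT/s per lane (serial, 1 bit/transfer)
    encoding = 128 / 130           # 128b/130b line encoding
    bytes_per_s = gt_per_s * encoding / 8
    return lanes * bytes_per_s / 1e6

x4 = pcie3_bandwidth_mb_s(4)       # the downgraded link, ~3900 MB/s
x8 = pcie3_bandwidth_mb_s(8)       # what the 9207-8i can negotiate

# 8 HDDs hang off each controller; ~280 MB/s is the fastest outer-track
# speed an Exos X20 showed in these benchmarks.
aggregate_hdd = 8 * 280

print(f"x4 link: ~{x4:.0f} MB/s, x8 link: ~{x8:.0f} MB/s")
print(f"8 drives flat out: ~{aggregate_hdd} MB/s")
print("x4 still has headroom" if aggregate_hdd < x4 else "x4 is saturated")
```

By this estimate eight spinning drives at full outer-track speed stay well under the x4 link's capacity, which is consistent with the small 270 vs 280 MB/s difference observed.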
JorgeB Posted October 8, 2023

See my post in the diskspeed docker thread; basically that board doesn't support two x8 slots. According to those benchmarks, the controller link speed appears not to be a bottleneck with those disks, so no issues for now.
manofoz (Author) Posted October 8, 2023

3 hours ago, JorgeB said: ...

Thanks for the reply! Are you referring to this thread? Good to know this isn't my bottleneck. My parity checks and disk rebuilds/upgrades are taking two days. Maybe it's just that I have some really slow drives in the array. I will replace disk 6 next, which benchmarked at 160 MB/s, while disk 11 on the same controller got 270 MB/s. I just can't imagine how any of this is supposed to work when 50TB disks hit the market. I guess people will have to either deal with 7-day parity checks or move to another platform.
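The parity-check worry can be put in rough numbers: a check has to read every disk end to end, so the ideal duration is the largest disk's capacity divided by the average sustained read speed. A sketch; the ~180 MB/s average is an assumption, roughly midway between the outer-track (~270 MB/s) and slower inner-track speeds mentioned in this thread, and the real figure is set by the slowest drive in the array:

```python
# Rough parity-check duration: every disk is read end to end, so the
# ideal time is (largest disk capacity) / (average sustained read speed).
# 180 MB/s average is an assumption based on the speeds in this thread.

def parity_check_hours(capacity_tb: float, avg_mb_s: float) -> float:
    capacity_mb = capacity_tb * 1e6   # TB -> MB (decimal, as drives are sold)
    return capacity_mb / avg_mb_s / 3600

for tb in (20, 50):
    print(f"{tb} TB at 180 MB/s avg: ~{parity_check_hours(tb, 180):.0f} h")
```

So a 20TB drive works out to roughly 31 hours per pass at that assumed average, and a 50TB drive to roughly 77 hours, which is where the multi-day check times come from.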
manofoz (Author) Posted October 8, 2023

@JorgeB I'm not sure I found the container thread, but I do see these nice benchmarks you ran:

[screenshot of JorgeB's controller benchmarks]

I grabbed the Dynamix V6 plugin for system stats and will monitor it to make sure I'm not bottlenecked by the controller when I upgrade my next drive. Let me know if I got the right thread.
JorgeB Posted October 9, 2023

17 hours ago, manofoz said:
Thanks for the reply! Are you referring to this thread?

I was just referring to the other post you made in the diskspeed docker thread; I had already replied there.
manofoz (Author) Posted October 10, 2023

19 hours ago, JorgeB said:
I was just referring to the other post you made in the diskspeed docker thread; I had already replied there.

Ah, got it, missed that. Thanks.

I'm not sure how to convert from gigatransfers (8 GT/s) since I don't see anything advertising the data bus width in bits, but I'll take your word for it. If this were a bottleneck I'd have other options besides swapping out the board. There are 4 onboard SATA ports I could use, and I see cheap PCIe x1 cards that add another two SATA ports, so that shouldn't be difficult. Using the 4 onboard SATA ports might also reduce the load on the LSI card to the point where x4 is not a problem. I'm not sure if the card would perform better with the remaining four drives on one SAS-to-SATA cable or split across both.

It still does seem like there is a bottleneck, though. When running only Exos X20 drives, after the point where the smaller drives stop reading, I don't get anywhere near the speed I see pre-clearing an X20 (more like 190 MB/s vs 280 MB/s). It could just be the slower 5400 rpm drives: by the time they are done and it's just X20s, I'm near the center of the disks. Preclear on a 20TB drive still looks like 3-4 days even at 280 MB/s; I just wasn't expecting things to take this long.
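On the gigatransfers question: PCIe lanes are serial, one bit per transfer, so there is no separate bus-width-in-bits figure to look up; GT/s converts to bytes per second directly once you account for the line encoding. And the 3-4 day preclear makes sense if the preclear does multiple full passes over the disk (a common setup is pre-read, zero, post-read; an assumption about how this particular plugin is configured), each limited by the drive's average rather than peak speed. A hedged sketch:

```python
# GT/s -> MB/s for a serial PCIe 3.0 lane: each transfer moves one bit,
# and 128b/130b encoding means 128 payload bits per 130 line bits.
per_lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6
print(f"one PCIe 3.0 lane: ~{per_lane_mb_s:.0f} MB/s")

# Preclear duration, assuming three full passes (pre-read, zero,
# post-read) and an average speed below the 280 MB/s outer-track peak.
capacity_mb = 20e6           # 20 TB drive, decimal MB
avg_mb_s = 200               # assumed end-to-end average speed
passes = 3
days = passes * capacity_mb / avg_mb_s / 86400
print(f"20 TB preclear, {passes} passes at {avg_mb_s} MB/s avg: ~{days:.1f} days")
```

Under those assumptions one lane is roughly 985 MB/s and a three-pass preclear of a 20TB drive lands around 3.5 days, matching the observed 3-4 days.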