mad_dr Posted May 29

Hi all. I've read some of the other threads on this topic but came away from them all quite confused. I'm hoping someone can provide a suggestion for a solution.

Case: Phanteks Enthoo Evolv X (contains five 10TB HDDs - the most it can physically hold).
Five HDDs connected to on-board SATA ports on my motherboard (ASRock Z690 Extreme WiFi 6E, which has 8 SATA3 6.0 Gb/s ports). 4 are data drives (only used for Plex) plus 1 parity drive.
One 2.5" 1TB SSD cache drive (using a sixth SATA connector on the motherboard, leaving 2 empty connectors).

All is running pretty well. Parity checks average between 199.0 and 199.8 MB/s. Motherboard manual is Here.

I am looking to expand my storage by adding another 2 or so drives to occupy the remaining SATA ports. I get temperature warnings from the cache drive fairly frequently (it seems to peak at around 47C, which I understand is actually pretty much fine for a 2.5" SSD - they are apparently happy to run up to 70C and still be considered within an acceptable range: LINK. It idles at around 32C much of the time.)

1. I'm thinking perhaps I should replace my cache drive with an M.2 NVMe drive to free up a third SATA port on the motherboard so that I could add three more 10TB HDDs, which is my goal. However, the motherboard webpage states "If M2_2 is occupied by a SATA-type M.2 device, SATA3_7 will be disabled". I'm guessing then that I should ensure that any new cache drive is NOT installed into slot M2_2. I do not yet know how easy it is to switch the cache drive. Hopefully it's a case of powering down, installing the new NVMe drive, powering up, pointing Unraid to the new drive within the GUI, powering down, then removing the 2.5" SSD before finally powering up again.

2. Given that I can only physically fit my current 5 HDDs into the case, I'll need to be looking at an external drive caddy of some kind. With that in mind, the internal motherboard SATA ports seem less ideal; I would end up with cable spaghetti: 3 SATA power cables and 3 SATA data cables snaking their way out of the case in some way (no idea how, yet). With that said, should I be looking at an alternative way to add more SATA drives neatly?

3. Is there a more elegant way for me to add 3, 4 or even 5 more HDDs to my setup without the cable spaghetti? I'm thinking perhaps of a PCIe-based card in one of the PCIe slots. I have read that there is little or no benefit in using PCIe 4.0 or 5.0 for physical drives and that PCIe 3.0 is just as fast. Does anyone have insight there? The motherboard apparently has:
1 PCIe 5.0 x16
1 PCIe 4.0 x16
1 PCIe 3.0 x16
1 PCIe 3.0 x1

4. Does anyone know of a PCIe SATA card to recommend that would suit my needs? It would be nice if it had, say, 4-6 SATA ports, and it would be nice if the card wasn't a bottleneck compared to the max throughput of the drives themselves. That's where I get confused.

5. How do folks typically power their HDDs in this scenario? Do you need a long SATA multi-power cable snaking out of your PSU and through a gap in the case somewhere, to get to the external HDD caddy?

Thanks all for reading this far and for any pointers about my plans and any suggested solutions.
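For what it's worth, my back-of-envelope maths on question 3 (the ~200 MB/s per-drive figure is an assumption based on roughly what my parity checks show, not a spec):

```shell
# Rough numbers only: aggregate HDD demand vs. PCIe link budget.
drives=8                  # ports on a typical "-8i" HBA
per_drive=200             # MB/s, assumed sequential peak per HDD
pcie2_x8=4000             # MB/s, approximate usable bandwidth of a PCIe 2.0 x8 link
total=$((drives * per_drive))
echo "worst case: ${total} MB/s across ${drives} drives vs ~${pcie2_x8} MB/s link"
```

Even eight drives reading flat out would sit well under an older PCIe 2.0 x8 link, which seems to confirm that PCIe 3.0 is plenty for spinning drives.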
bmartino1 Posted May 29

Sounds like you need an HBA. Sadly, that is inviting more cables. I would recommend an HBA multi-split (breakout) cable: https://www.ebay.com/itm/184433612543
Solution Frank1940 Posted May 29 (edited)

You can also get this HBA for external drives: https://www.ebay.com/itm/163745757552

If you purchase an LSI card for internal drives, I would look for one without the cables. Most of them come with 1M cables and you end up with a cabling mess inside the box. You can find 0.5M cables, which will help with the cable dressing issues.

You can also have a look at these types of enclosures. There do not seem to be as many of them being marketed as in the past. I, personally, have never used them, but I recall that other Unraid users have used them successfully. (Be sure to read reviews...) https://www.amazon.com/ICY-DOCK-FatCage-MB153SP-B-Module/dp/B009HIMZ3G/

Edited May 29 by Frank1940
mad_dr (Author) Posted May 30

Thank you for the reply - very helpful! So, essentially, this uses an expansion card that sits in the PCIe slot and provides either internal or external ports, with a special splitter cable to plug multiple drives into the card. And, despite all being connected through this one PCIe slot, Unraid sees all the drives individually and can use them as part of the array as normal? Excellent.

I'd probably have to go for the external version due to lack of space and cooling in the server case. I noticed that on the 5-drive version of the external dock (Link) there are five SATA data connectors - 1 per drive - but only 3 SATA power connectors. I'm assuming that 3 SATA power connectors are enough to power 5 drives, so the enclosure combines the 3 power inputs and splits the load across the 5 drives? Presumably you can use a single SATA power cable that has multiple connectors along its length, from a single slot on a PSU, to power the 3 inputs in the enclosure?

I'll have to check whether my PSU has a spare SATA power port, and I guess that'll mean running a long SATA power cable out of the inside of the server case over to the enclosure. I wonder if anyone makes an enclosure that accepts a standard C13 plug and handles the PSU side internally.

Thanks again for your help with this - really useful! If you have any further thoughts, please let me know; otherwise I'll mark your reply as "answered".
Frank1940 Posted May 30

5 hours ago, mad_dr said: I'd probably have to go for the external version due to lack of space and cooling in the server case. I noticed that on the 5 drive version of the external dock (Link) there are five SATA data connectors - 1 per drive - but only 3 SATA power connectors. I'm assuming that 3 SATA power connectors are enough to power 5 drives so the enclosure combines the 3 power inputs and splits it across the 5 drives? Presumably you can use a single SATA power cable that has multiple connectors along its length, from a single slot on a PSU, to power the 3 inputs in the enclosure?

Regarding power splitter cables: the problem with power is the SATA power connector itself. (It has very limited current-handling capability.) So you want one with a Molex plug at the PSU end. Also, there is/was(?) a problem with the molded-in SATA connectors. See this YouTube video for an explanation: https://www.youtube.com/watch?v=TataDaUNEFc

I don't know if this issue was ever completely solved (X-raying every connector would find any with the problem, but the cost of that would be prohibitive!), as PSUs now have more SATA connectors, reducing the need for additional SATA power splitters. Proceed carefully at this point.

I would also look at server-type computer cases. I know they are expensive, but so are external solutions when one considers the total cost.
mad_dr (Author) Posted July 19

Just checking in here: you've worried me with your comment about spontaneously combusting cables! Is that something that all of us need to worry about all the time?! I certainly don't want to be responsible for burning my building down!

From watching that video, it SEEMS like if a cable is going to catch fire, it should do so pretty much as soon as you use it - if it's about moulding defects causing arcing, it should either be OK, or not. But that seems not to be the case from what he's saying - he says they can fail at any time. So I wonder how many millions of people are using them and how many thousands of them Corsair (et al) are shipping out on a daily basis!

Also, you say "you want one with a MOLEX plug", but my PSU (like most others these days) doesn't have Molex. Are people using the standard SATA power output from their PSU and then converting it to Molex just so they can use a Molex-to-15-pin-SATA cable? Adding more connections seems not necessarily desirable? I would need a Molex-to-four-15-pin-SATA cable. I'm also assuming that the recommendation is to pay someone to make me a custom cable of that kind with the "correct" type of SATA connector? I say this because the video implies that you want the SATA connector to be a "DIY" version with individual pins rather than a factory-made injection-moulded version. This has me quite confused and concerned.

Anyway, I picked up a SAS card which came with two splitter cables; one end goes into the card within the case (the card has no external I/O) and the other end splits into 4 SATA data connectors. I've yet to install it. It's a 9210-8i (which I believe is the same as a 9211-8i). Hopefully this is compatible with Unraid for what I need.

I also picked up a yasu7 3.5in HDD Cage Hard Drive Box from Amazon. It's essentially a metal frame which can hold ten HDDs and has a couple of 120mm fans built into it, which I can connect to a spare header on the mobo to cool the drives a little. I'll probably make a simple box for it from 3/16" birch ply just to tidy it up.

So the next question (assuming I've bought the right card) becomes how I go about powering these drives, given the comment above about fire. My PSU is a Corsair RM850x Shift, so it has no Molex connectors; it's a pretty standard 2024 PSU with Corsair's Gen 5 connectors. I made a bunch of custom fan cables about 8 years ago for a previous build that turned out well, but I have no tools or knowledge for making my own Gen 5 cables. I mean, does anyone??

I think I have enough SATA connectors on the PSU to add one of the bundled SATA 1-into-4 cables that came with the PSU. I'm using a couple of these already to power the 5 internal HDDs I'm running right now. I haven't had any fires... yet!! I would hope I could just use the third bundled 1-into-4 SATA power cable that came with the PSU. The PSU does very little other than power the motherboard/CPU and the 5 existing internal HDDs.

I might need to track down an extender to get the four 15-pin SATA connectors outside the case, but it's possible that I can avoid that by just putting the new HDD cage right next to the server case and carefully drilling a 1" dia. hole in the side of the case, which I can fit with a rubber grommet to feed through the SAS-SATA cables and the PSU's SATA cable. Thoughts appreciated!
itimpi Posted July 19

2 minutes ago, mad_dr said: I think I have enough SATA connectors on the PSU to add one of the bundled SATA 1-into-4 cables that came with the PSU.

I would be very surprised if these cables have a SATA connection at the PSU end. If you mean a PSU cable that has 4 SATA connectors on it, that will be fine. If the PSU is a modular one, then it frequently has a choice of cables that plug into the PSU, some with SATA connectors, some with Molex ones.
JonathanM Posted July 19

1 hour ago, mad_dr said: My PSU is a Corsair RM850x Shift so it has no molex connectors

According to this link https://www.corsair.com/us/en/p/psu/cp-9020252-na/rm850x-shift-80-plus-gold-fully-modular-atx-power-supply-cp-9020252-na it comes with 2 PATA cables with 4 Molex connectors each. It has 5 connectors on the PSU itself, each of which can take either SATA or PATA cables, so if you order 3 of these, https://www.corsair.com/us/en/p/pc-components-accessories/cp-8920315/premium-individually-sleeved-peripheral-power-molex-style-cable-4-connectors-type-5-gen-5-black you can have twenty 4-pin Molex connectors.
JonathanM Posted July 19

BTW, ideally you want to use as many of the modular power connections as possible to split the load evenly across as many wires as possible.
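To put rough numbers on that (the ~2 A figure below is an assumed 12 V spin-up draw for a typical 3.5" HDD, not a spec for your particular drives):

```shell
# Hypothetical worst case: all drives on one cable spinning up at once.
drives_per_cable=4
spinup_amps=2                   # assumed 12 V draw per 3.5" HDD during spin-up
total=$((drives_per_cable * spinup_amps))
echo "~${total} A at 12 V on one cable during spin-up"
# Splitting the same drives across two modular cables halves the per-cable load.
```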
mad_dr (Author) Posted July 19

Thanks Jonathan! Interesting - so it was a reference to converting the PSU's connector to Molex and then from there to SATA power. I see! I will have to track down the modular cables that came with the PSU and find those Molex ones. Although, from what you're saying (if I understand you correctly), I could just use the PSU-to-4-SATA cable that came with the PSU (once I have checked to ensure it's not one of the injection-moulded versions)?

So, option 1 is to use the bundled cable that came with the PSU (which is shown at the Corsair PSU link in your comment) to add 4 new HDDs by using one more of the ports on the PSU. Option 2 would be to use the two bundled PATA-to-4-Molex cables and then buy some Molex-to-SATA cables; one cable per HDD, and perhaps 2 Molex cables per PATA socket, to spread the load. Does that sound right? Would you have any concerns about using the bundled SATA cables that came with the PSU? Thanks again!
itimpi Posted July 19

1 minute ago, mad_dr said: Although, from what you're saying (if I understand you correctly), I could just use the PSU-to-4 SATA cable that came with the PSU (once I have checked to ensure it's not one of the injection moulded versions)?

The ones that come with the PSU are almost certainly OK.
Frank1940 Posted July 19

8 minutes ago, itimpi said: The ones that come with the PSU are almost certainly OK.

10 minutes ago, mad_dr said: Does that sound right? Would you have any concerns about using the bundled SATA cables that came with the PSU? Thanks again!

@mad_dr, as @itimpi said, the power cables that come with the power supply should be fine. Corsair is going to make sure that the cables they supply are not going to be a fire hazard. They don't want you posting a review that says their PSU caught fire!!!

The problem with the after-market cables is that there are probably more than a couple of hundred manufacturers of them, most located in China. Generally, they do not sell to you, the final customer. They sell to vendors who will buy the cheapest cable that they can find this week from one of these manufacturers. When they are ready for their next purchase, they will again buy from the cheapest manufacturer, which may not be the same one as before. Quality is seldom a concern to these guys. They will be incorporated, and if they should get sued, the company will have virtually zero assets!
mad_dr (Author) Posted August 7

I've picked up one of the 9210-8i SAS HBA cards, which I've dropped into my Unraid server. I also picked up four new 12TB HDDs to go alongside my existing five 10TB HDDs, plus an external HDD cage which simply holds the HDDs in place and provides a mount for fans to cool them. I have two 120mm fans which I will install on the cage and power from a spare fan header on the mobo. I also picked up a good-quality SATA power cable with four connectors which will work with my PSU. So I think that takes care of physically holding the HDDs, cooling them, powering them, and giving them a data connection to the motherboard and Unraid so that they will power on with the system.

A few questions:

1. Might I need to do/install/configure anything in order for Unraid to see the HBA SAS card? Or are they generally plug and play?

2. Might I need to do anything for Unraid to see the new unformatted HDDs connected TO that SAS card? Or will they show up in Unassigned Devices, as happened when I previously added new HDDs connected directly to the motherboard's SATA ports?

3. Given that I will now need to stop using one of my existing 10TB HDDs as my parity drive and instead use one of the new 12TB drives for this task, can someone bullet-point the steps for me? I imagine they might be something like this at a high level:

1. Power down the server. Connect one of the new 12TB drives to the SAS card. Power up the server. Check that the drive is listed under Unassigned Devices, but do not format it.
2. Stop the array. Unassign the existing parity drive. Assign the new 12TB parity drive. Start the array and allow the new 12TB parity to build.
3. Somehow format the old 10TB parity drive as XFS (will it show up in Unassigned Devices simply because I unassigned it from its parity role in step 2?) and then add it to the array, so that the array has gone from 4 data + 1 parity (all 10TB drives) to 5 data (10TB drives) + 1 parity (12TB).
4. Power down. Connect the other three new 12TB HDDs to the SAS card. Power up. Format the three new 12TB drives as XFS and add them to the array.

I found a guide that says "[If all you want to do is replace your Parity drive with a larger one.] Just stop the array, unassign the parity drive and then remove the old parity drive and add the new one, and start the array." So I guess that's what I'm trying to do as the first phase. Then follow it up by repurposing the old parity drive as a data drive. Then follow THAT up by adding new drives to the array. Thanks!
Frank1940 Posted August 7

2 hours ago, mad_dr said: I’ve picked up one of the 9210-8i SAS HBA card

3 hours ago, mad_dr said: 1. Might I need to do/install/configure anything in order for Unraid to see the HBA SAS card? Or are they generally plug and play?

Is it flashed to IT mode rather than RAID mode? From the 2012 LSI product brief for the 9210-8i:

The reason for this question is that these cards are commonly counterfeited and sold as if they were made by Broadcom. Those are the first two sources for these cards (Broadcom and counterfeiters). A third source is used cards pulled from retired data-center servers; these used cards are virtually always genuine. The Broadcom cards come by default with RAID-mode firmware installed; it can be flashed to IT mode. (I believe the LSI 9207-8i, for example, is the IT-mode part number for the 9200 series of cards. LSI is now known as Broadcom as the result of a couple of corporate mergers...)

There is a thread (thanks, @JorgeB) on flashing LSI boards here: https://forums.unraid.net/topic/97870-how-to-upgrade-an-lsi-hba-firmware-using-unraid/page/4/#comment-1342713

(I have always purchased my LSI cards flashed to IT mode by the vendor from whom I purchased the card. I also vet the vendor very carefully, and price is a secondary consideration.)
JonathanM Posted August 7

Another consideration is cooling the LSI controller. They are generally designed for rack-mount servers, which have a lot of forced airflow over the I/O slots. Regular consumer cases rely on the cards themselves to do the cooling, so you may need to add a fan to force air over the card's heatsink.
mad_dr (Author) Posted August 7

Thanks for the replies, guys. As you can tell (and as I confessed in the title), I'm a noob to this generation of hardware, so I appreciate the help.

The eBay listing (eeek) purportedly shows a BIOS screen confirming IT mode, which is also mentioned in the listing title ("LSI 9210-8i 6Gbps SAS HBA FW:P20 9211-8i IT Mode ZFS FreeNAS unRAID 2* SATA US"). I'm not sure I'd be able to identify a counterfeit card, because I'm guessing the barcode labels, printed board logos etc. are all routinely copied as part of the counterfeiting process... Other purchasers have given it good feedback, so the cards appear to work at least, but I'm assuming that the issue with the counterfeit versions is down to reliability and compatibility?

Amazing to think that there's a market for such thorough counterfeiting of such low-value items. I paid around $35 USD plus shipping, which seems to be in line with many of the non-fan versions of the card. Perhaps the $100 refurbished version would have been a safer bet.
mad_dr (Author) Posted August 9

Does anyone know how I can check which mode this SAS card is set to? The vendor assures me that it is set to IT mode and flashed with the latest firmware. I managed to generate a diagnostics file in Unraid, and the syslog file says the following in yellow. I'm hoping (from reading other threads) that the Initiator,Target text means that it's already in IT mode and is good to go, and I believe it appears to have the latest firmware:

Aug 8 22:41:43 Plex kernel: mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(07.39.02.00)
Aug 8 22:41:43 Plex kernel: mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)

I connected a new 12TB drive to the SAS card and the drive shows up happily, can be set as the parity drive, and will allow the array to be started (to rebuild parity). I would really like to know if, once the parity is rebuilt, I can "clear" the old parity drive, add it to the array, and then add my other three new 12TB drives to the array.
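In case it helps anyone else, a quick way to pull those lines straight from the Unraid terminal without downloading the full diagnostics (a sketch, assuming the messages are still in /var/log/syslog and the log hasn't rotated):

```shell
# Show the mpt2sas driver's report of the card's firmware and protocol mode.
# IT-mode firmware reports Protocol=(Initiator,Target), as in the lines above.
grep -i "mpt2sas" /var/log/syslog 2>/dev/null | grep -E "FWVersion|Protocol" \
  || echo "no mpt2sas lines found (card not detected, or the log has rotated)"
```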
itimpi Posted August 9

1 hour ago, mad_dr said: I can "clear" the old parity drive, add it to the array

You do not need to clear it (although you can). Unraid will automatically initiate a Clear on any drive added to a new slot in the array if it has not already been cleared.
Frank1940 Posted August 9 (edited)

5 hours ago, mad_dr said: The vendor assures that it is set to IT mode and flashed with the latest firmware.

Assuming that the vendor is reliable, the card should be in the proper mode. Flashing is not a 'big deal' if you have experience in doing it. That is the problem in buying the LSI cards: the part number is how LSI designated which software/firmware they flashed the card with. However, on the secondary market, those part numbers tend to lose any significance except to identify the bus speed. The sellers tend to use the part numbers that buyers are looking for, regardless of the software/firmware. That is why you have to be careful...

5 hours ago, mad_dr said: Aug 8 22:41:43 Plex kernel: mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(07.39.02.00) Aug 8 22:41:43 Plex kernel: mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)

This supports the vendor's statement. You have the right software installed, and I believe that is the current version.

Edited August 9 by Frank1940: revised first paragraph to make more sense.
mad_dr (Author) Posted August 12

Well, it all worked out great! Thanks for all the pointers! I was able to unassign the old parity drive, add the new 12TB drive as parity, allow it to rebuild, then clear the old parity drive and add it to the array, then clear and add the other new 12TB drives to the array and rebuild again.

So I've gone from 40TB of storage plus 10TB parity to 86TB of storage plus 12TB parity. And I have 4 spare SATA ports on the HBA if I want to do the same thing again to go to 130TB of storage plus 12TB parity or higher (if I rebuild parity). Thanks again!
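For anyone following along, the before/after totals check out (with single parity, the parity drive adds redundancy rather than capacity):

```shell
# Before: four 10TB data drives. After: five 10TB plus three 12TB data drives.
before=$((4 * 10))
after=$((5 * 10 + 3 * 12))
echo "data capacity: ${before} TB -> ${after} TB (parity drive not counted)"
```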