mad_dr

Members

Posts: 13

  1. Well, it all worked out great! Thanks for all the pointers! I was able to unassign the old parity drive, assign the new 12TB drive as parity, allow parity to rebuild, then clear the old parity drive and add it to the array, and finally clear and add the other new 12TB drives to the array before rebuilding once more. So I've gone from 40TB of storage plus 10TB of parity to 86TB of storage plus 12TB of parity. And I have 4 additional SATA ports on the HBA if I want to do the same thing again and go to around 134TB of storage plus 12TB of parity, or higher if I rebuild parity onto a larger drive. Thanks again!
  2. Does anyone know how I can check which mode this SAS card is set to? The vendor assures me that it is set to IT mode and flashed with the latest firmware. I managed to generate a diagnostic file in Unraid, and the syslog shows the following (highlighted in yellow):

        Aug 8 22:41:43 Plex kernel: mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(07.39.02.00)
        Aug 8 22:41:43 Plex kernel: mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)

     From reading other threads, I'm hoping that the "Initiator,Target" text means it's already in IT mode and good to go, and it appears to have the latest firmware. I connected a new 12TB drive to the SAS card and the drive shows up happily, can be set as the parity drive, and will allow the array to be started (to rebuild parity). I would really like to know whether, once parity is rebuilt, I can "clear" the old parity drive, add it to the array, and then add my other three new 12TB drives to the array.
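     For anyone checking the same thing from the console, here is a minimal sketch, assuming SSH access to the Unraid box and an LSI SAS2008-based card driven by mpt2sas. Note that sas2flash is Broadcom/LSI's own flashing utility and is not bundled with Unraid, so treat that part as optional:

        # Pull the controller identification lines out of the kernel log
        dmesg | grep -i mpt2sas

        # If you have downloaded Broadcom's sas2flash utility, it reports
        # the firmware type directly; IT firmware is typically tagged
        # "(IT)" on the Firmware Product ID line
        sas2flash -list

     For what it's worth, FWVersion 20.00.07.00 is generally regarded as the final Phase 20 (P20) release for the SAS2008, so the firmware in the log above does look current.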
  3. Thanks for the replies, guys. As you can tell (and as I confessed in the title), I’m a noob to this generation of hardware, so I appreciate the help. The eBay listing (eeek) purportedly shows a BIOS screen confirming IT mode, which is also mentioned in the listing title (“LSI 9210-8i 6Gbps SAS HBA FW:P20 9211-8i IT Mode ZFS FreeNAS unRAID 2* SATA US”). I’m not sure I’d be able to identify a counterfeit card, because I’m guessing the barcode labels, printed board logos, etc. are all routinely copied as part of the counterfeiting process... Other purchasers have given the card good feedback, so they appear to work at least, but I’m assuming the issue with the counterfeit versions comes down to reliability and compatibility? Amazing to think that there’s a market for such thorough counterfeiting of such low-value items. I paid around $35 USD plus shipping, which seems to be in line with many of the non-fan versions of the card. Perhaps the $100 refurbished version would have been a safer bet.
  4. I’ve picked up one of the 9210-8i SAS HBA cards, which I’ve dropped into my Unraid server. I also picked up four new 12TB HDDs to go alongside my existing five 10TB HDDs, plus an external HDD cage which simply holds the HDDs in place and provides a mount for fans to cool them. I have two 120mm fans which I will install on the external HDD cage, powered from a spare fan header on the mobo. I also picked up a good-quality SATA power cable with four connectors which will work with my PSU. So I think that takes care of physically holding the HDDs, cooling them, providing them with power, and giving them a data connection to the motherboard and Unraid so that they power on with the system. A few questions:

     1. Might I need to do/install/configure anything in order for Unraid to see the HBA SAS card? Or are they generally plug and play? (A quick console check is sketched after this list.)
     2. Might I need to do anything for Unraid to see the new unformatted HDDs connected TO that SAS card? Or will they show up in Unassigned Devices, as happened when I previously added new HDDs connected directly to the motherboard’s SATA ports?
     3. Given that I will now need to stop using one of my existing 10TB HDDs as my parity drive and instead use one of the new 12TB drives for this task, can someone bullet-point the steps for me? I imagine they might be something like this at a high level:
        1. Power down the server. Connect one of the new 12TB drives to the SAS card. Power up the server. Check that the drive is listed under Unassigned Devices, but do not format it.
        2. Stop the array. Unassign the existing parity drive. Assign the new 12TB parity drive. Start the array and allow the new 12TB parity to build.
        3. Somehow format the old 10TB parity drive as XFS (will it show up in Unassigned Devices simply because I unassigned it from its parity role in step 2?) and then add it to the array, so that the array has gone from 4 data + 1 parity (all 10TB drives) to 5 data (10TB drives) + 1 parity (12TB).
        4. Power down. Connect the other three new 12TB HDDs to the SAS card. Power up. Format the three new 12TB drives as XFS and add them to the array.

     I found a guide that says “[If all you want to do is replace your Parity drive with a larger one.] Just stop the array, unassign the parity drive and then remove the old parity drive and add the new one, and start the array.” So I guess that’s what I’m trying to do as the first phase. Then follow it up by repurposing the old parity drive as a data drive, and follow THAT up by adding new drives to the array. Thanks!
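     On questions 1 and 2, a minimal console sketch, assuming SSH access and a stock Unraid install (the mpt2sas driver these LSI cards use ships with the kernel, so no extra installation should be needed; the card may identify itself as either LSI or Broadcom):

        # Confirm the HBA enumerates on the PCIe bus
        lspci | grep -iE "lsi|broadcom"

        # List the attached drives as block devices, with serial numbers,
        # before touching the array
        lsblk -o NAME,SIZE,MODEL,SERIAL

     Drives that appear here should then show up under Unassigned Devices in the GUI as usual.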
  5. Thanks Jonathan! Interesting - so it was a reference to using MOLEX in the sense of converting the PSU's connector to MOLEX and then from there to SATA power. I see! I will have to track down the modular cables that came with the PSU and find those MOLEX ones. Although, from what you're saying (if I understand you correctly), I could just use the PSU-to-4-SATA cable that came with the PSU (once I've checked to ensure it's not one of the injection-moulded versions)? So, option 1 is to use the bundled cable that came with the PSU (shown at the Corsair PSU link in your comment) to add 4 new HDDs, using one more of the ports on the PSU. Option 2 would be to use the two bundled PATA-to-4-MOLEX cables and then buy some MOLEX-to-SATA cables - one cable per HDD, and perhaps 2 MOLEX cables per PATA socket to spread the load. Does that sound right? Would you have any concerns about using the bundled SATA cables that came with the PSU? Thanks again!
  6. Just checking in here: you've worried me with your comment about spontaneously combusting cables! Is that something that all of us need to worry about all the time?! I certainly don't want to be responsible for burning my building down! From watching that video, it SEEMS like if a cable is going to catch fire, it should do so pretty much as soon as you use it - if it's about moulding defects causing arcing, it should either be OK, or not. But that seems not to be the case from what he's saying - he says they can fail at any time. So I wonder how many millions of people are using them, and how many thousands of them Corsair (et al.) are shipping out on a daily basis!

     Also, you say "you want one with a MOLEX plug", but my PSU (like most others these days) doesn't have MOLEX. Are people using the standard SATA power output from their PSU and then converting it to MOLEX just so they can use a MOLEX-to-15-pin SATA cable? Adding more connections doesn't seem desirable. I would need a MOLEX-to-FOUR-15-pin-SATA cable. I'm also assuming that the recommendation is to pay someone to make me a custom one with the "correct" type of SATA connector? I say this because the video implies that you want the SATA connector to be a "DIY" version with individual pins, rather than a factory-made injection-moulded version. This has me quite confused and concerned.

     Anyway, I picked up a SAS card which came with two splitter cables; one end goes into the card within the case (the card has no external I/O) and the other end splits into 4 SATA data connectors. I've yet to install it. It's a 9210-8i (which I believe is functionally the same as a 9211-8i). Hopefully this is compatible with Unraid for what I need. I also picked up a yasu7 3.5in HDD cage hard drive box from Amazon. It's essentially a metal frame which can hold ten HDDs and has a couple of 120mm fans built into it, which I can connect to a spare header on the mobo to cool the drives a little. I'll probably make a simple box for it from 3/16" birch ply just to tidy it up.

     So the next question (assuming I've bought the right card) becomes how I go about powering these drives, given the comment above about fire. My PSU is a Corsair RM850x Shift, so it has no MOLEX connectors; it's a pretty standard 2024 PSU with Corsair's Gen 5 connectors. I made a bunch of custom fan cables about 8 years ago for a previous build that turned out well, but I have no tools or knowledge for making my own Gen 5 cables. I mean, does anyone?? I think I have enough SATA connectors on the PSU to add one of the bundled SATA 1-into-4 cables that came with it - I'm using a couple of these already to power the 5 internal HDDs I'm running right now. I haven't had any fires... yet!! I would hope I could just use the third bundled 1-into-4 SATA power cable that came with the PSU; it does very little other than power the motherboard/CPU and the 5 existing internal HDDs. I might need to track down an extender to get the four 15-pin SATA connectors outside the case, but it's possible that I can avoid that by putting the new HDD cage right next to the server case and carefully drilling a 1" dia. hole in the side of the case, fitted with a rubber grommet, to feed through the SAS-to-SATA cables and the PSU's SATA cable. Thoughts appreciated!
  7. Thank you for the reply - very helpful! So, essentially, this uses an expansion card that sits in a PCIe slot and provides either internal or external ports, which can take a special splitter cable to plug multiple drives into the card. And, despite all being connected through this one PCIe slot, Unraid sees all the drives individually and can use them as parity or in the array as normal? Excellent. I’d probably have to go for the external version due to lack of space and cooling in the server case. I noticed that on the 5-drive version of the external dock (Link) there are five SATA data connectors - 1 per drive - but only 3 SATA power connectors. I’m assuming that 3 SATA power connectors are enough to power 5 drives, so the enclosure combines the 3 power inputs and splits them across the 5 drives? Presumably you can use a single SATA power cable with multiple connectors along its length, from a single port on the PSU, to power the 3 inputs on the enclosure? I’ll have to check whether my PSU has a spare SATA power port, and I guess that’ll mean running a long SATA power cable from inside the server case over to the enclosure. I wonder if anyone makes an enclosure that accepts a standard C13 plug and handles the PSU side internally. Thanks again for your help with this - really useful! If you have any further thoughts, please let me know; otherwise I’ll mark your reply as “answered”.
  8. Hi all. I've read some of the other threads on this topic but came away from them all quite confused. I'm hoping someone can provide a suggestion for a solution. My setup:

     Case: Phanteks Enthoo Evolv X (contains five 10TB HDDs - the most it can physically hold)
     Drives: five HDDs connected to on-board SATA ports on my motherboard (ASRock Z690 Extreme WiFi 6E, which has 8 SATA3 6.0 Gb/s ports). 4 are data drives (only used for Plex) plus 1 parity drive.
     Cache: one 2.5" 1TB SSD (using a sixth SATA connector on the motherboard, leaving 2 empty connectors)

     All is running pretty well; parity checks average between 199.0 and 199.8 MB/s. The motherboard manual is Here. I am looking to expand my storage by adding another 2 or so drives to occupy the remaining SATA ports. I get temperature warnings from the cache drive fairly frequently (it seems to peak at around 47C, which I understand is actually pretty much fine for a 2.5" SSD - they are apparently happy to run up to 70C and still be within an acceptable range: LINK. It idles at around 32C much of the time.)

     1. I'm thinking perhaps I should replace my cache drive with an M.2 NVMe drive to free up a third SATA port on the motherboard, so that I could add three more 10TB HDDs, which is my goal. However, the motherboard webpage states "If M2_2 is occupied by a SATA-type M.2 device, SATA3_7 will be disabled". I'm guessing, then, that I should ensure that any new cache drive is NOT installed in slot M2_2. I do not yet know how easy it is to switch the cache drive; hopefully it's a case of powering down, installing the new NVMe drive, powering up, pointing Unraid to the new drive within the GUI, powering down, then removing the 2.5" SSD before finally powering up again.
     2. Given that I can only physically fit my current 5 HDDs into the case, I'll need to be looking at an external drive caddy of some kind. With that in mind, the internal motherboard SATA ports seem less ideal; I would end up with cable spaghetti: 3 SATA power cables and 3 SATA data cables snaking their way out of the case in some way (no idea how, yet). With that said, should I be looking at an alternative way to add more SATA drives neatly?
     3. Is there a more elegant way for me to add 3, 4 or even 5 more HDDs to my setup without the cable spaghetti? I'm thinking perhaps of a PCIe-based card in one of the PCIe slots. I have read that there is little or no benefit in using PCIe 4.0 or 5.0 for mechanical drives and that PCIe 3.0 is just as fast (a rough bandwidth sanity check is sketched below). Does anyone have insight there? The motherboard apparently has:
        1 PCIe 5.0 x16
        1 PCIe 4.0 x16
        1 PCIe 3.0 x16
        1 PCIe 3.0 x1
     4. Does anyone know of a PCIe SATA card to recommend that would suit my needs? It would be nice if it had, say, 4-6 SATA ports, and if the card weren't a bottleneck compared to the max throughput of the drives themselves. That's where I get confused.
     5. How do folks typically power their HDDs in this scenario? Do you need a long SATA multi-power cable snaking out of your PSU and through a gap in the case somewhere, to get to the external HDD caddy?

     Thanks all for reading this far, and for any pointers about my plans and any suggested solutions.
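     On the PCIe question in point 3, a rough back-of-envelope check, assuming an 8-port SAS2008-based HBA (a PCIe 2.0 x8 device) and mechanical drives that top out around 250 MB/s sequential:

        # PCIe 2.0 x8: 8 lanes x ~500 MB/s per lane = ~4000 MB/s available
        # 8 HDDs x ~250 MB/s sequential             = ~2000 MB/s worst case
        #
        # Even a PCIe 2.0 x8 card therefore has roughly 2x headroom, so
        # PCIe 3.0/4.0/5.0 adds nothing for spinning drives. The slot to
        # avoid is the PCIe 3.0 x1 (~1 GB/s), which would bottleneck more
        # than two or three drives reading at once.

        # To confirm the link an installed card actually negotiated
        # (1000 is the LSI/Broadcom PCI vendor ID; run as root):
        lspci -d 1000: -vv | grep -E "LnkCap|LnkSta"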
  9. Thanks guys - all is good now. I got hold of the USB HDD dock that my friend used to clone my drives and the drives immediately showed up correctly in Unassigned Devices. Then I was able to install Krusader and use it to move the data onto the shares in the array. Then I formatted the original drives and added them to the array. Finally I set up another drive as parity and all seems good. I do have some exclamation marks next to various shares (that seemed to get created automatically) but I’ll look into that separately. Thanks again.
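     For anyone who prefers the console to Krusader, the same move can be sketched with rsync; the paths here are illustrative, assuming Unassigned Devices mounted the source disk at /mnt/disks/source and the target share is called Movies:

        # -a preserves ownership and timestamps, -v lists files as they
        # copy; writing into /mnt/user/<share> lets Unraid spread the
        # files across the array according to the share settings
        rsync -av /mnt/disks/source/Movies/ /mnt/user/Movies/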
  10. Sorry Jorge - I would never intentionally create two threads for the same thing; I got a website error ("something went wrong") when I first tried to post it, so I clicked back, tried again, and it worked. A few minutes later I saw it had created two threads (one of which was live and let me reply to it with the logs, and the other of which was "pending moderator approval"), so I went into the pending one, clicked "moderation actions" at the top and selected to delete that thread - I guess the deletion didn't happen - sorry - I'll be more careful in future.

     In terms of the replies - thanks everyone. I have not yet set up a disk as parity - I will only do that once the array itself is working and shows my data. So, for now, no parity disks. Just the following:

        USB stick with Unraid
        1TB SSD cache pool
        10TB HDD, brand new, formatted as XFS, added to the array, ready to accept my data (VH0U8WLM)
        10TB HDD, brand new, formatted as XFS, added to the array, ready to accept my data (VH0U8WVM)
        10TB HDD, brand new, contains my data which was copied from old drives via a USB dock (VH0U8B7M)
        10TB HDD, brand new, contains my data which was copied from old drives via a USB dock (VH0U5W8M)

     In the screenshot above, VH0U5W8M was not installed. It is now. So, VH0U8WLM and VH0U8WVM are behaving normally. My first intent/hope had been to:

     1. Add VH0U8B7M (sdc) and VH0U5W8M (sde, I think) to the array with their existing data
     2. Add VH0U8WLM (sdd) to the array as a third data drive
     3. Add VH0U8WVM (sdb) as a parity drive
     4. Enjoy

     When I found that I couldn't just add VH0U8B7M (sdc) and VH0U5W8M (sde) to the array (possibly due to them lacking partitions, according to Jorge above), I decided to follow some of the migration approaches I'd seen on here and to format the empty drives (VH0U8WLM (sdd) and VH0U8WVM (sdb)) and add those to the array instead (which worked). Then I would copy the data over using Krusader (or similar) by mounting the existing data drives (VH0U8B7M (sdc) and VH0U5W8M (sde)), but found that I can't access them other than to Preclear them or Format them. If I can access the data on VH0U8B7M (sdc) and VH0U5W8M (sde), the current intent is to:

     1. Copy the data from VH0U8B7M and VH0U5W8M over to the array drives (VH0U8WLM and VH0U8WVM)
     2. Format VH0U8B7M, assign the XFS file system, and add it to the array
     3. Format VH0U5W8M and set it as a parity drive

     It sounds like they lack a file system (I will have to ask my friend how he did the data copying), which is preventing me from doing this. Potentially he can send me his USB HDD dock, if that would give me another way of accessing the data on them? Thanks all.
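     For anyone hitting the same wall, a minimal sketch for inspecting what is actually on one of these drives from the Unraid console; everything below is read-only, and /dev/sdc is a placeholder for the real device:

        # Show partition layout, filesystem types and labels for all disks
        lsblk -f

        # Dump the partition table of the suspect drive
        fdisk -l /dev/sdc

        # Probe for filesystem signatures on the whole device as well as
        # its first partition; a USB dock that cloned sector-by-sector may
        # have left the filesystem on the bare device, with no partition
        # table at all
        blkid /dev/sdc /dev/sdc1

     If blkid finds a filesystem on the bare device but no partition table, that would explain why Unassigned Devices offers only Format.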
  11. Attached are the following:

     1. Diagnostics log
     2. SMART report for one drive that will mount (new, empty disk, formatted with an XFS file system; can be added to the array; I intend to copy files to here)
     3. SMART report for one drive that won't mount (new disk with my content copied from the previous, working Unraid/Plex server; cannot be added to the array; the only option is Format, per the screengrab above; I intend to copy files from here)

     WDC_WD102KRYZ-01A5AB0_VH0U8WLM-20240124-2206 (WILL MOUNT).txt
     WDC_WD102KRYZ-01A5AB0_VH0UDB7M-20240124-2205 (WON'T MOUNT).txt
     plex-diagnostics-20240124-2200.zip
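     In case it's useful to anyone producing these reports, a hedged sketch of pulling them from the console; smartctl is part of smartmontools, which I believe ships with stock Unraid, and /dev/sdc is a placeholder for the real device:

        # Full SMART health and attribute report for one drive
        smartctl -a /dev/sdc

        # For a SATA drive hanging off a SAS HBA, the device type flag
        # is sometimes needed
        smartctl -a -d sat /dev/sdc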
  12. Hi all. I'm new to the world of servers and Unraid and command prompts etc., so please be gentle. I've read pages and pages on here, Reddit, Google, etc. and watched Spaceinvader One's YT videos, etc. I'm trying to set up a new Unraid/Plex server but am a little stuck on the basics when I try to follow the tutorials and guides. So thanks for any help, and sorry if I've missed a guide or thread that covers this!

     A few years ago, a friend hosted and administered Plex for me on his Unraid server using two 10TB HDDs with my data. He's no longer able to host Plex for me, so I need to build my own Unraid/Plex server with my data. The two 10TB data drives are old and are earmarked for something else, so he copied my files to a pair of brand new WD Gold 10TB drives using a HDD dock via USB. Now I'm having to figure a bunch of stuff out and am struggling a little. In total I have four new 10TB HDDs (two preloaded with my data and two which are empty). I plan on using three as data drives and one as a parity drive.

     I've built a new server and have installed Unraid (latest version) on a 32GB USB stick. So far I've been able to get the machine to boot into Unraid and have installed my Unraid key (Basic). I installed the Unassigned Devices plugin and started by installing one of the new empty drives and formatting it as XFS (for parity). I then installed a 1TB SSD cache drive, formatted it, and assigned it to the pool option. I then installed the two XFS data drives, but Unraid doesn't seem to want to let me add them to an array without formatting them first: they showed up in Unassigned Devices but there was no option to Mount them. The only button that showed up was Format, which I obviously don't want to use. Does anyone have any suggestions why this might be the case?

     So then, after more reading about data migration with Unraid, I thought I might be able to use the two new empty drives as my data drives by adding them to the array and then copying the data from the current data drives to them - perhaps Unraid would be happier that way rather than using my drives as-is. Then I would format the current data drives and use those as the parity and data 3 drives. So I unplugged one of the existing data drives, installed the two new empty drives, formatted them as XFS no problem, added them to the array, started the array, installed Krusader, and created user shares called "Movies" and "TV shows" from the Unraid GUI - so far so good. I set the cache for the shares to the 1TB pool drive (but I believe I'll run into problems unless I set the minimum free space setting on the cache pool to be greater than the largest file I'll be transferring). But I still have the basic issue of not being able to access the data on the first of the two existing data drives now that they're showing up in Unassigned Devices. Any ideas what I'm doing wrong and how I can get this drive mounted? Thanks all!
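     A minimal read-only diagnostic sketch for this, run from the Unraid console; /dev/sdc is a placeholder, and nothing here is confirmed about how the dock actually wrote the data:

        # Make a temporary mount point and attempt a read-only mount of
        # the first partition, falling back to the bare device (a dock
        # that cloned sector-by-sector may have skipped the partition table)
        mkdir -p /mnt/temp
        mount -o ro /dev/sdc1 /mnt/temp || mount -o ro /dev/sdc /mnt/temp

        # If the mount succeeds, the data is intact and the issue is only
        # how the disk was partitioned
        ls /mnt/temp

        # Detach when finished looking
        umount /mnt/temp

     If the mount fails with "wrong fs type", the dock most likely wrote the files onto a filesystem Unassigned Devices doesn't recognise, or cloned the drives without a partition table, which would fit the missing Mount button.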
  13. Hi all. I'm new to the world of servers and Unraid and command prompts etc., so please be gentle. I've read pages and pages on here, Reddit, Google, etc. and watched Spaceinvader One's YT videos, etc. I'm trying to set up a new Unraid/Plex server but am a little stuck on the basics when I try to follow the tutorials and guides. So thanks for any help, and sorry if I've missed a guide or thread that covers this!

     A few years ago, a friend hosted and administered Plex for me on his Unraid server using two 10TB HDDs with my data. He's no longer able to host Plex for me, so I need to build my own Unraid/Plex server with my data. The two 10TB data drives are old and are earmarked for something else, so he copied my files to a pair of brand new WD Gold 10TB drives using a HDD dock via USB. Now I'm having to figure a bunch of stuff out and am struggling a little. In total I have four new 10TB HDDs (two preloaded with my data and two which are empty). I plan on using three as data drives and one as a parity drive.

     I've built a new server and have installed Unraid (latest version) on a 32GB USB stick. So far I've been able to get the machine to boot into Unraid and have installed my Unraid key (Basic). I installed the Unassigned Devices plugin and started by installing one of the new empty drives and formatting it as XFS (for parity). I then installed a 1TB SSD cache drive, formatted it, and assigned it to the pool option. I then installed the two XFS data drives, but Unraid doesn't seem to want to let me add them to an array without formatting them first: they showed up in Unassigned Devices but there was no option to Mount them. The only button that showed up was Format, which I obviously don't want to use. Does anyone have any suggestions why this might be the case?

     So then, after more reading about data migration with Unraid, I thought I might be able to use the two new empty drives as my data drives by adding them to the array and then copying the data from the current data drives to them - perhaps Unraid would be happier that way rather than using my drives as-is. Then I would format the current data drives and use those as the parity and data 3 drives. So I installed the two new empty drives, formatted them as XFS no problem, added them to the array, started the array, installed Krusader, and created user shares called "Movies" and "TV shows" from the Unraid GUI - so far so good. I set the cache for the shares to the 1TB pool drive (but I believe I'll run into problems unless I set the minimum free space setting on the cache pool to be greater than the largest file I'll be transferring). But I still have the basic issue of not being able to access the data on the first two drives now that they're showing up in Unassigned Devices. Any ideas what I'm doing wrong and how I can get these drives mounted? Thanks all!