gravyrobbers91

Everything posted by gravyrobbers91

  1. Good to know, honestly. 90% of the data on those drives is probably movies and music, and they're nowhere near the majority of my collection anyway, so I think I'm just gonna eat the L. I did get ahold of someone whose data recovery practice is a better fit for my needs, but even then I'm looking at $700 at the very least.
  2. Also, when you say all or nothing, does that mean that if I scrapped the drives and got 3 new ones, I couldn't run the array at all, even without the data that was on those drives, and would have to erase the entire array? Or that I could indeed keep using the array and would just have lost the data that was on those 3 drives?
  3. Welp, I just called one data recovery service, and they basically said that the way their business works, the only option is to send them all 13 drives so they can assess everything on their premises and then issue a firm quote, which would run $9,300 to $23,000. lol. This is an entirely personal-use system, so that's off the table. I'll keep searching and hopefully find someone local (Boston) who can examine just the 3 drives and either get a single drive working again so I can do a rebuild, or copy all 3 drives onto 3 new ones, again for a rebuild.
  4. Word. So first off: I'm picking up some new drives, WD Blue, same model as one of the drives that failed. I'm not expecting it, but if it has the same PCB, I'm gonna see if swapping the boards will let that failed WD drive run. If I can get it running, even slowly, then I've got only 2 actually dead drives; I can rebuild the array, then swap the WD with the good one and rebuild again. Realistically, I probably won't be able to save the drives myself, since any more advanced HDD recovery requires moving BIOS chips between PCBs and other work I know I'm not capable of. So I'm guessing I'll send them off to someone who can recover the data. I see the following 3 scenarios; which is the most realistic/practical? A) Leave the server off until the data is recovered. If I do this and each drive has its data moved to a new 4TB drive, can I bring the new drives in without screwing up the array? I feel like that's where the New Config procedure comes in. B) I'm impatient and want to just keep using the server without the data from those drives while it gets recovered. Can a data recovery service take the data off a disk formatted for an unRAID array and deliver it in a format I can read through, so I can select files to put back onto the server? C) Options I'm not considering? (There's a drive-inventory sketch at the end of this list that might help keep the bookkeeping straight.)
  5. Right. The 3 drives that failed were not parity drives. I guess the other question is: even if 2 parity drives can't rebuild all the data lost on the 3 drives, will a rebuild at least recover some of it, or is it an all-or-nothing kind of thing? If it will rebuild what it can, then I think the best course may be to rebuild with 3 new drives, see what's lost, and if anything vital is missing, swap the PCBs and bring that data back into the array. I do know those were the more recent drives, at the very least, though they were still installed a while ago. (The parity sketch after this list walks through the all-or-nothing question.)
  6. As the title states, I fucked up and forgot to turn off the PSU before taking components out. I unplugged the daisy chain leading to the 5.25" hot-swap bay holding 3 of my drives, and now none of them spin up. No noise, no movement, cold as ice. My understanding is the PCBs are dead. How much, if any, of my data can be saved by a rebuild if I have 2 parities? It's 13 disks total, all 4TB: 2 parity, 11 array. From my understanding, I can order replacement PCBs for the HDDs, but this should only be done for data recovery, not for reinstating the drives into the array, so I'd still need to figure out how to recover the data and easily bring it back onto the array. If the parities should be enough to save me when rebuilding with new drives, then I probably won't worry about the PCB issue.
  7. So one thing this is making me realize is that I really need to set up some remote log saves. I'm away from my apartment for a while, about 2 hours away. The server had been working fine until yesterday, then all of a sudden it was down: no services connecting, no reverse proxy, no VPN, no Plex, nothing. I have a desktop computer there and was able to remote-desktop into it, but I couldn't reach the unRAID box at all through the local IP. My friend shut the server down and rebooted it today; it's back on, but 6 of the 22 drives are "missing," including both parities. Most of the drives are in a NetApp DS4246 connected via a quad-port SAS card. Every drive that isn't connecting is on the NetApp, though some drives on the NetApp are still detected fine. All the lights on the bays are green. In the past it has sometimes taken a few minutes after a reboot for the server to detect all the drives on the NetApp, but it's been a while now and still nothing. I'm trying to figure out what I can do to troubleshoot, both remotely now and once I'm back at my apartment on Thursday. What are some suggestions you have? (See the remote-triage sketch after this list.)
  8. Hi there, I'm guessing this is a straightforward question to answer, but I can't find an answer anywhere. Would it be possible to mount either my whole unRAID array, or specific user shares, on a separate physical Windows machine using a data-transfer cable such as eSATA or Thunderbolt, rather than a LAN connection? If not, I'm fine with connecting over LAN; I'm just curious.
  9. I'm referring to this walkthrough detailing how to reflash the PIKE card so that it will work after installing another LSI card of the same chipset. I'm currently running an Asus Z9PA-D8 with a PIKE 2008 installed and am going to add an HBA with a SAS expander. While the walkthrough clearly shows you can work around the issue on the PIKE card, I'd prefer to get a card that doesn't cause the issue in the first place. Furthermore, if it is an issue, does anyone know whether it only occurs when you install an LSI card of the exact same chipset (i.e. a PIKE 2008 with an LSI 2008), or also when you install a card that merely shares the same firmware/BIOS (i.e. an LSI 2308)?
  10. Also, I know this is an old thread, and I'm wondering: A) is this still an issue on the latest version of unRAID? And B) does the bug only occur when you install an LSI card of the exact same chipset, or just one with the same BIOS (i.e. installing an LSI 2308 with a PIKE 2008 running; the 2008 and 2308 share the same BIOS)?
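
A side note on post 4's PCB-swap and rebuild planning: before shuffling boards or drives, it can help to snapshot exactly what each disk reports, so donor and patient drives never get confused. Below is a rough sketch, assuming smartmontools is available (it ships with unRAID as far as I know); the JSON keys come from smartctl's -j output. Note that actual PCB compatibility is usually judged by the board number printed on the PCB itself, so treat this as bookkeeping, not a compatibility check.

```python
# Rough drive-inventory sketch: print model / serial / firmware for every
# sd* disk, so you can record exactly which physical drive is which before
# any PCB swaps or rebuilds. Assumes smartmontools is installed.
import json
import os
import subprocess

for name in sorted(os.listdir("/sys/block")):
    if not name.startswith("sd"):
        continue  # skip md devices, nvme, loopbacks, etc.
    dev = f"/dev/{name}"
    out = subprocess.run(["smartctl", "-i", "-j", dev],
                         capture_output=True, text=True)
    try:
        info = json.loads(out.stdout)
    except json.JSONDecodeError:
        print(dev, "-> no SMART data (drive may be dead or spun down)")
        continue
    print(dev,
          info.get("model_name", "?"),
          info.get("serial_number", "?"),
          info.get("firmware_version", "?"))
```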
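On post 5's all-or-nothing question, the answer comes down to counting equations. Each parity drive contributes one independent equation per byte position, so 2 parities can solve for at most 2 unknown drives; with 3 drives missing, the system is underdetermined and none of the 3 can be rebuilt. (The surviving data disks are unaffected either way, since each unRAID data disk carries its own filesystem.) Here's a toy sketch of the XOR principle behind the first parity drive; unRAID's second parity uses a different function, but the counting argument is the same:

```python
# Toy demo of XOR parity, the principle behind unRAID's first parity drive.
import os

NUM_DATA = 4   # toy array: 4 data "drives"
BLOCK = 16     # bytes per drive in this demo

drives = [os.urandom(BLOCK) for _ in range(NUM_DATA)]

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(BLOCK)
    for b in blocks:
        for i in range(BLOCK):
            out[i] ^= b[i]
    return bytes(out)

# Single parity: one equation over all data drives.
parity = xor_blocks(drives)

# One dead drive: XOR parity with the survivors and you get it back exactly.
lost = 2
survivors = [d for i, d in enumerate(drives) if i != lost]
assert xor_blocks([parity] + survivors) == drives[lost]

# Two dead drives against one parity: the same XOR now yields only the
# combination of the two lost drives, one equation with two unknowns,
# so neither drive is individually recoverable.
a, b = 1, 3
survivors = [d for i, d in enumerate(drives) if i not in (a, b)]
assert xor_blocks([parity] + survivors) == xor_blocks([drives[a], drives[b]])
```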
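And for post 7's remote troubleshooting: unRAID has had a built-in remote syslog option for a while (Settings > Syslog Server on recent versions), which would have preserved the lead-up to a crash like this. In the meantime, if SSH into the box still works, something like the sketch below can pull the basics from afar. It's only a sketch; root@tower is a placeholder for your server's actual SSH address, and it assumes key-based login is already set up.

```python
# Remote triage sketch: run a few read-only checks over SSH and print the
# output, to see which disks the kernel detects and any SAS/SCSI errors.
import subprocess

HOST = "root@tower"  # placeholder: your server's SSH address

CHECKS = [
    ("block devices the kernel sees", "cat /proc/partitions"),
    ("attached disks by id", "ls -l /dev/disk/by-id/ | grep -v part"),
    ("recent SAS/SCSI kernel messages",
     "dmesg | grep -iE 'sas|scsi|error' | tail -n 40"),
]

for label, cmd in CHECKS:
    print(f"=== {label} ===")
    result = subprocess.run(["ssh", HOST, cmd],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)
```

If the drives on the DS4246 show up in /proc/partitions but not in the unRAID GUI, that points in a different direction than if the kernel never sees them at all, which is worth knowing before Thursday.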