Airless

  1. Awesome, thanks all! Looking forward to the final release.
  2. I haven't. I'm not keen on updating my main/only Unraid server to a pre-release version. I did look through the change logs for rc3 and rc4 and didn't see any mention of the smartctl package being upgraded. For anyone who has the latest RC installed, what does the following output for you? smartctl --version If I don't get any answers by the weekend I'll see if I can set up a little trial instance. If smartctl isn't updated, I guess submitting a feature request is my best bet?
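     For anyone willing to check, this one-liner trims the banner down to just the version line (nothing Unraid-specific here):

       # Print only the first line of the smartmontools banner, e.g. "smartctl 7.x ...".
       smartctl --version | head -n 1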
  3. Hi there, One of my drives seems to be affected by this bug reported in the smartmontools tracker for v7.1 of the tool: https://www.smartmontools.org/ticket/1346 My drive is a ST2000NMCLAR2000 and, just as reported, it doesn't show much SMART data and can't run self-tests. The issue does not occur in 7.0 and is fixed in 7.2. Is it possible to update the tool myself? What are the odds smartctl could get updated to 7.2 in the next update of Unraid? smartctl 7.1 is over two years old at this point. Thanks
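     For reference, these are the two checks where I see the problem (sdX below is just a placeholder for the affected drive):

       smartctl -a /dev/sdX        # full SMART readout - comes back mostly empty on 7.1 for this drive
       smartctl -t short /dev/sdX  # short self-test - won't start on 7.1 for this drive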
  4. Thank you so much for the great plugin. I'm migrating over from the Server Layout plugin and have a couple of low-priority comments/suggestions.

     BUG - Disk Group Name Special Characters
     The Name field in a disk group doesn't seem to handle the " character correctly. For example, if I type the name 2.5" Bays, the field only shows 2.5. This only seems to be an issue with the input field itself; everywhere else the name is displayed, the text after the " character shows correctly.

     SUGGESTION - Disabling Bays
     I read through the few times this feature has been brought up in the thread and completely understand that it's impossible to support every case configuration under the sun. However, I think the way the Server Layout plugin handles this is fairly simple to implement and would make this plugin quite a bit more powerful. The other plugin lets you click on bays while configuring to hide them from being rendered (examples below). This allows me to better match the layout of my server, with 2x2.5" bays at the top and 16x3.5" bays at the bottom. Of course, you're volunteering your time to build this, and it may be more difficult to implement than it appears on the surface. I figured it was at least worth showing how this has been done elsewhere in case it inspires new features and ideas. Thanks again for all of your hard work.
  5. I see what you're saying, but even the cheapest 8TB drives available to me are ~$300. I need to replace 10TB of storage + 2 parities. That's $1,200 for cheap 8TBs vs $780 for 6x 3TB Reds. Having 14 free HDD slots available to me at the moment (including the 6 from the fried drives) leaves me enough room to grow for my needs. By the time I start outgrowing the available slots, 8TBs should have come down in price and I can start replacing the 2TB drives. HPA is a smart idea if I end up going that route. Thanks! I didn't think of that, but that is a much safer option. At the moment the recovery service is working through the first 2 drives, and the result will influence my next steps. Kinda hoping I don't have to try and recover the parity, but at this point I'd rather put in the extra work than pay the extra cost. If it comes to backing up the entire array to another system and attempting the recovery there, that's what I'll do. I have access to plenty of temporary storage to try something like that without risk. Thanks for all the advice!
  6. A question just came to mind. Looking at HDDs these days, it seems it's not really worth going for 2TB drives anymore; 3TB seems to be the sweet spot (not complaining!). If I successfully recover that one parity, so I have 3 valid data + 1 valid parity in a 6-disk array, is there any issue with replacing the 2TB drives with 3TB ones? From my understanding of how parity works, so long as I pre-clear the 3TB drives before loading them up with the recovered data I SHOULD be fine, because the new space on the data drives will just return 0 and leave each parity bit exactly as it would have been had that space not existed at all.
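     To illustrate the reasoning for the single-parity case (parity is a bitwise XOR across the data disks, and XOR with zero changes nothing), here's a throwaway sketch with made-up byte values:

       # XOR with 0x00 is a no-op, so the zeroed tail of a pre-cleared, larger
       # replacement disk can't alter the parity result. Values are arbitrary.
       printf 'parity over 2 data disks:        %#x\n' $(( 0xA5 ^ 0x3C ))
       printf 'parity with a zeroed 3rd region: %#x\n' $(( 0xA5 ^ 0x3C ^ 0x00 ))

     Both lines print the same value, which is the whole point.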
  7. Update: I did finally hear back from Donor Drive, or rather their sister company that does the repair/recovery, and their prices were quite a bit better. PCB replacement + data recovery (no head damage) was about 30% cheaper. Unfortunately, they only have locations in the US, and shipping all of these drives across the border is not something I'm all that comfortable with. Ironically, the data I'm least concerned with recovering (source backups of client projects) is the data that makes me least comfortable sending cross-border. So, I'm going to bite the bullet tomorrow and start the recovery of the first two data drives. Wish me luck! Side note: I RMA'd the PSU that was the start of this whole fiasco. The replacement PSU exhibits the same issue on multiple systems! Started another RMA. Hopefully the 3rd time's the charm.
  8. Right, I think you missed the details of Option 1, the only option where I even consider recovering a parity. The parity is purposely the last drive to get recovered, and that only happens IF all previous drives have been recovered 100% as an image. Going this route will save me a good chunk of change, and the company has already agreed that I'll only be charged for that drive if they get a perfect image. My thought exactly. For the cheaper-to-recover drives it's purely an electronics replacement, so it'll either work or not. The more expensive recoveries carry the higher risk of failure or partial data recovery. That's why I'm recovering the highest-risk (also highest-value) drive first and choosing my course of action based on the result of that drive. If it's 100%, then I move forward with recovering 2 more data drives which should only have electronic damage. If both of those recover 100%, then I have good confidence the parity drive with just electronic damage will recover 100% as well. Keep in mind the evaluated condition of my drives:

     - 2 data, 1 parity: PCB damage only
     - 2 data, 1 parity: PCB + head damage

     Yes, they're willing to recover an image and have agreed that there's no charge unless a complete image is retrieved from that drive. That comes with the condition that they were successful with the 3 previous drive recoveries and we attempt recovery of the less damaged parity. With that agreement there's no reason I shouldn't try. Either I get all my data back for less money, or we try and fail for free and I decide if I want to try and recover that last data disk.
  9. Re: Government - Their IT teams are not exactly known for their speedy work. I'm sure there are parts of the government that handle very sensitive data and have their own internal recovery teams, but in the spirit of keeping things lean there's been a big movement around here since the '90s to privatize services where possible. The private services can also handle overflow work when there is an unexpected spike in recovery required (e.g. a building fire).

     Re: Big Telecom Companies - Yeah, I don't know. Maybe at this point so many services have been moved off-site that there isn't enough work to staff a proper recovery team, and it's cheaper to hire a service to recover a few RAIDs a year than to staff and maintain one. I don't really know though. I've only ever worked at small, haphazard tech companies in the region that treat things like backups and version control as "optional"...

     Update: Still waiting on Donor Drives to get back to me. I'm going to have to pull the trigger on a company soon though. Through the file manifests I've been able to cobble together, I have a plan. I'm going to have the service recover one drive at a time and update me on the result, since the completeness of each drive's recovery will somewhat dictate my strategy.

     1. Recover two drives: one of the expensive/more damaged drives that I KNOW has most of my irreplaceable data, and another that is cheaper to recover but has enough useful/pain-in-the-butt data on it to be worth paying for the recovery.

     Here's where things split.

     Option 1
     2a. If the previous 2 drives were recovered 100%, then I'll recover another one of the cheaper data disks.
     3a. If that next drive is recovered 100%, then I'll have the less damaged/cheaper parity drive recovered. At that point I'll have good confidence that I'll get a 100% recovery of that last drive, and it's the last drive that can be recovered at the cheaper price point. So long as that one recovers 100%, I should have all of my data back!

     Option 2
     2b. If the previous 2 drives were not recovered 100%, then I can stop here and know that it's not worth spending any more money.

     Option 3
     2c. If one of the previous 2 drives wasn't 100% recovered, then I know it's not worth even looking at the parity drives. In theory I could try to recover 3 more drives and hope for the best, but I really don't want to take that gamble. I'll just recover the 2 remaining cheaper data drives and call it a day.

     Regardless of what happens, I'll make sure to get the original HDDs back. Who knows, maybe data recovery will get super cheap in the future or I'll win the lottery. Worst case, I can harvest a bunch of really strong fridge magnets out of them!
  10. I wouldn't say I live in a huge city, but it's a combination government and tech town. Big telecom/chip companies here and lots of government buildings. The capabilities are here, but their government and enterprise customers tend to drive up prices. That's what I figured: I send them the SMART data, they see the firmware version, they download and flash it, everything works. Your explanation makes sense though; per-unit config parameters to account for manufacturing variations seem reasonable. Hmm, ok. I'll definitely get a second opinion from another shop. Yeah, live and learn. My data storage protocol will be changing quite a bit once I get through this recovery :) Thanks for the input. I'll make sure to update this thread as I make progress!
  11. While waiting on some competing quotes for drive recovery, I spent some more time poring through logs and a copy of my unRaid boot key to see if I could find any hints of what the file structure might look like on the damaged drives.

     File Integrity
     I struck gold with the dynamix.file.integrity plugin! As I had suspected, there is a complete list of files stored with their hashes on the key. For future reference, in case this helps anyone else, you can find the files at: {yourKey}/config/plugins/dynamix.file.integrity/export/disk{n}.export.hash. Unless you've been regularly exporting your hashes these files probably won't be up to date, but even if they're a bit old they can give you a good indication of what might be on the drives (depending on the share's drive fill strategy).

     Diagnostics Exports
     If, like me, you were having trouble with your server before destroying your drives, you may have turned on regular diagnostics exports for your system. In my case I have diagnostics exports every 30 mins for the last few days of operation. I haven't pored through these yet, but there may be some file path hints in the logs here. Will report back if I find anything worth watching out for, in case someone else ends up in this situation. I did notice that each export contains, in {export}/system/vars.txt, a list of disks and what appears to be their state. Of note is the [fsFree] field, which seems to be the amount of free space on the drive. I have yet to verify this, but if it is the actual free space on the drive at the time of the export, it could help me decide which drives to pay for recovery on. The adventure continues
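     A rough sketch of how I'm skimming these, assuming the flash key is mounted at /boot, one file entry per line in the .export.hash files, and a diagnostics export already extracted to a folder (adjust the paths to your setup):

       # How many hashed files does each disk's export list?
       for f in /boot/config/plugins/dynamix.file.integrity/export/disk*.export.hash; do
         printf '%s: %s entries\n' "$f" "$(wc -l < "$f")"
       done

       # Pull the reported free-space lines out of an extracted diagnostics export.
       grep -n 'fsFree' extracted-diagnostics/system/vars.txt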
  12. Thanks, I'll see what they quote. Hopefully they do work within Canada; I don't really want to deal with sending these drives internationally. Any chance there's a way to figure out what files and/or shares were on each of the dead drives? I figure unRaid must have a config file or something that maps drives to shares, at least.
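     One idea I still need to verify: I believe unRaid keeps a small .cfg per share under config/shares/ on the flash key, and shares that were given explicit include/exclude disk lists should record them there (shares left at the defaults won't show a mapping). Assuming the key mounts at /boot, something like:

       # Only helpful for shares that had explicit include/exclude lists configured.
       grep -H -E 'shareInclude|shareExclude' /boot/config/shares/*.cfg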
  13. I got my quote from a local data recovery company, one that does recovery of all sorts (HDDs, SD cards, phones, etc.). According to them the electronics on all of the drives are fried (not surprising) and a firmware reconstruction is required (need to look into what that means... I'd have figured the firmware was fairly standardized within a drive model). Half of the drives also have damaged read/write head assemblies which need to be replaced. Normally I'd be skeptical, but this does somewhat make sense, as I only hooked up half the drives to a proper power source after initially using the wrong cables on all 6 of them. It makes sense that half of the drives would have a different type of damage. I'm essentially looking at $700 CAD per drive for the less damaged drives and $1,300 CAD per drive for the more damaged ones. At that price you can see why I'm not going to try and recover 4 drives! On the plus side, they only charge if they're able to recover the data, so I'm guaranteed to get data back for any money I spend.
  14. I should add that I have the Dynamix File Integrity and Dynamix Cache Directories plugins installed, along with a number of others. So if there's a log file from one of those that would help indicate what paths were on a drive, that could be helpful too. Anything to figure out what was on those drives!
  15. Hi All, My array is in a bad place. I'll admit up front, this is 100% my fault and I should have had offsite backups of parts of the array, but I didn't. Full story below, but the short of it is:

     - I fried 6 of my 11 drives, 2 of which were parity drives
     - I'm in the process of having a recovery service recover the drives (should be possible)
     - Recovering even just 4 of the drives and reconstructing the full array will be prohibitively expensive

     I can afford to recover some of the drives but not all. So I'd like to get the best bang for my buck and focus on the drives with the more important/irreplaceable data. Is there a way to generate a list of the files or shares that occupied each drive? I want to generate a manifest of files and their disk locations so I can choose the specific disks I want to recover instead of rolling the dice and picking one at random! (END OF SUMMARY)

     ---- The long story (Lesson: Don't get lazy no matter how deep into debugging you are) ----

     This all started when I was having some stability issues with my server. Every day or two my system would become unresponsive, even when logged in through IPMI. After the 3rd incident I decided to disconnect the drives and start testing components. This seemed like a hardware problem. My gut feeling was that the issue was either with the RAM or the motherboard. It felt like a RAM issue, so I started running MemTest on the system to see if anything came up. There were no errors reported during the tests, but the system would hang after 20-180 mins of testing. I was starting to think this wasn't a RAM issue.

     I have a love/hate relationship with my Supermicro motherboard, so it was next on my hit list. It's been a pain to work with: had to buy a license from Supermicro to flash the BIOS so it would actually boot with my Kaby Lake processor, couldn't get it to work with my original PCI SATA card, etc. Unfortunately I don't have any extra server-grade parts lying around to test with, so swapping the RAM, CPU, or mobo for a known-good variant was going to be a lot of trouble. So, I decided to try hooking up a different PSU.

     Initially I hooked up just the motherboard connections to the PSU and ran MemTest. To my surprise, the tests ran for 22+ hours without issue. Cool, I thought; of all the parts that could break, a PSU is the cheapest to replace. I lucked out.

     Here's where things go south... Without thinking, I decide to hook up the power and data cables to the 6 HDDs in the main case to double-check everything still runs fine before putting the server back in its place and hooking it up to my other drive enclosure. I was hoping to have the server back up and running while I RMA'd the defective PSU. I did a nice job running cables in the server case, and rather than undo all of that nice work and re-run cable, I decided to hook up the HDD power cables from my dead PSU to the working replacement. They're both modular, the plugs fit. No problem, right?... Wrong!

     I hook everything up, hit power, the fans spin maybe 3/4 of a rotation, and the system immediately powers down. I press the power button again. Same issue. Scratching my head, I try a few different things (reset CMOS, reseat RAM, etc.). Weird, maybe the ports on this modular power supply aren't working. To be fair, this PSU has only ever been used in a gaming PC and hooked up to maybe 2 HDDs at a time (not 6). So I try different modular ports on the PSU but get the same result. Hmmm... I unplug the HDD power cables, try again, and the system posts. Great! But wait, why didn't the system start up with the HDDs powered? Are modular power cables with standardized ends really different between manufacturers? YEP...

     A wave of dread washes over me. Did I really just kill 6 drives at once?! I find the right cables, hook them up, power on the system, and don't hear anything too out of the ordinary. I do hear a bit of clicking from one drive, but for the most part things seem fine. I get into unRAID and sure enough, 6 out of my 11 drives, all 6 that are in the main case, are missing... Luckily 2 of those failed drives were parity drives, so I really only need to fully recover 4 drives, so long as I can get 100% of the data off the parity drives (if they're the ones that get recovered). However, after getting a quote to fully recover 4 drives, it's more than I can justify spending. So now, as my initial question states, I'm trying to figure out which 1 or 2 drives I get recovered. Any ideas?

     A Final Note: Just to be clear, this isn't a sob story. This is 100% my own damn fault. I just hope someone else reads this as a cautionary tale and stops before haphazardly working deep into the night to try and get their server fixed sooner, rather than taking their time and being disciplined when dealing with their hoard of data. And for the thousandth time... keep an offsite backup! Any suggestions are greatly appreciated!