<SOLVED> Lost data from drives across various dates

I just upgraded two of my older 500GB IDE drives to 2TB SATA drives.


They were upgraded independently, with successful rebuilds and parity checks on both.


I now go to play some music, and I notice one of my folders is missing half of its contents!  I check unRAID, and all drives are green and running!


I check my work files, and oh my god, half or more of my work folders and files are missing!


These files range anywhere from two years old to two days old, yet other files from the same timeframe remain.


Help, what do I do?  

Just did a spin-up of all drives, and now one of my drives is red-balled with a 0°C temperature reading, and a second drive is green but also reads 0°C...

Both drives have errors in the error column.


How could two drives go bad when I just did two successful parity checks in the past three days?  Is the red-balled one a write error, and the green one a read error that is now fixed?


Syslog attached


When I try to run a SMART report against either the red-balled drive or the green drive, I get this error in the telnet session for both:


Smartctl:  Device Read Identity Failed (not an ATA/ATAPI device)

A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options


I can run a smart report against any of the other drives just fine.
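For reference, the `-T permissive` workaround that the error message itself suggests can be tried from the same telnet session. This is a minimal sketch, assuming a root shell on the server; the /dev/sdf and /dev/sdg names are the ones used later in this thread and can change between boots:

```shell
# Illustrative only -- run as root on the unRAID server.
# Device names below are assumptions from this thread and may differ on reboot.

# Normal full SMART report (works on the healthy drives):
smartctl -a /dev/sdg

# If the mandatory identify command fails (the error quoted above),
# -T permissive tells smartctl to continue anyway and print whatever
# it can still read from the drive:
smartctl -a -T permissive /dev/sdf
```

If even the permissive run returns nothing, that usually points at the cabling or the drive's interface rather than the platters, which fits the new-SATA-cable fix described below.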


Yes, I still have the old 500GB drives, untouched.  Good point: if all else fails my data should be there, and I could mount those drives and copy the data over if required.


No, I have not run a ReiserFS check on the affected drives, I'm not sure what that is?


I powered down the server, ran a brand new SATA cable to the red-balled drive and checked all power connections.

I booted back up, and have the same red-balled drive.  The green drive is still green.


My data is also now all available; I am guessing the errors on the green drive were read errors that are now re-mapped?  I have backed up my critical data outside of unRAID.


I am now able to run smart reports on both drives, they are attached.


sdg is the green drive

sdf is the red-balled drive


Should I do a re-build onto sdf, or replace the drive?  

Then, do I need to replace sdg, or is it still good?

I have 2 spare drives ready to go if required.



Both your SMART reports look fine to me. Are you seeing temps on sdg? (I'm assuming this is the drive that was green with zero temps in your second post.) As an aside, it is not safe to refer to drives by their sdX names, as those can change between boots.


Your second drive, sdf, I think is safe to rebuild onto itself. To do this you need to start the array with that drive unassigned, stop the array, and then reassign the drive. It will show blue, and you can rebuild onto it. I apologize if you already know how to do that.


After you are up and running I would perform a reiserFS check on disk4 and disk6 (md4 and md6).




Note that parity does not protect against file system corruption. If the file system on an individual disk becomes corrupted, that corruption is reflected in the parity information for those sectors, so if the drive is replaced or rebuilt, the corruption will remain.
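To illustrate the check suggested above — a hedged sketch, assuming the array has been started in maintenance mode so the mdX devices are unmounted; disk4 and disk6 map to /dev/md4 and /dev/md6 as noted:

```shell
# Illustrative only -- run as root with the array in maintenance mode,
# so /dev/md4 and /dev/md6 are not mounted.

# --check is a read-only pass: it reports corruption without changing anything.
reiserfsck --check /dev/md4
reiserfsck --check /dev/md6

# Only if --check reports fixable corruption, re-run with the repair option
# it recommends, for example:
# reiserfsck --fix-fixable /dev/md4
```

Running the read-only check first is the safe order: it tells you whether a repair pass is needed at all, and which repair option reiserfsck itself recommends.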



Thank you for checking my smart reports for me.


And yes, I am seeing temps on all drives since the new cable and reboot.


I will re-build then run the reiserFS, thank you for the link to the instructions.




Re-build started, thank you again.




Re-build completed with a successful parity check.

Ran ReiserFS check on disks 4 and 6, no errors reported.


