Upgraded from v5, what's wrong with my drive?


gprime


Ok, unfortunately disk is still unmountable but now you can try reiserfsck:

 

https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems#Drives_formatted_with_ReiserFS_using_unRAID_v5_or_later

 

Closely follow the instructions, but the first thing to do is start the array in maintenance mode and run:

reiserfsck --check /dev/md6

 

This will try to recover data from the disk emulated by unRAID. If it does not work, you can still try to copy data from the failed disk, but I believe this is the only chance you have to recover any data you wrote to it from the point of the initial failure until now.
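In case it helps, a minimal sketch of that step from a console/SSH session, assuming the array has already been started in maintenance mode from the webGUI; the tee part and the /boot log filename are just my suggestion so a copy of the output ends up on the flash drive:

# check only, no repairs are made; it may ask you to type Yes to confirm
reiserfsck --check /dev/md6 2>&1 | tee /boot/reiserfsck-check-disk6.log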


Ok, unfortunately disk is still unmountable but now you can try reiserfsck:

 

https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems#Drives_formatted_with_ReiserFS_using_unRAID_v5_or_later

 

Closely follow the instructions, but the first thing to do is start the array in maintenance mode and run:

reiserfsck --check /dev/md6

 

This will try to recover data from the disk emulated by unRAID. If it does not work, you can still try to copy data from the failed disk, but I believe this is the only chance you have to recover any data you wrote to it from the point of the initial failure until now.

 

Trying this now.

 

EDIT:

 

So it's doing something.

 

I noticed, however, that every drive LED except disk6's (and cache) is lit up right now. I think disk6 may have disappeared completely. If that's the case, what will this accomplish?

 

Attaching a photo of the server hard at work (disk6 is bottom left, not lit up). And yes, it's very, very dusty. Shameful :(

IMG_20160105_190022.jpg


Alright, so the output was extremely long and I didn't think to capture it to a file when I ran it, but here's the last little bit:

 

/155 (of 170)bad_stat_data: The objectid (1366911) is marked free, but used by an object [1366910 1366911 0x0 SD (0)]
bad_stat_data: The objectid (1366912) is marked free, but used by an object [1366910 1366912 0x0 SD (0)]
bad_stat_data: The objectid (1366913) is marked free, but used by an object [1366910 1366913 0x0 SD (0)]
bad_stat_data: The objectid (1366914) is marked free, but used by an object [1366910 1366914 0x0 SD (0)]
bad_stat_data: The objectid (1366915) is marked free, but used by an object [1366910 1366915 0x0 SD (0)]
bad_stat_data: The objectid (1366917) is marked free, but used by an object [1366916 1366917 0x0 SD (0)]
/156 (of 170)bad_stat_data: The objectid (1366918) is marked free, but used by an object [1366916 1366918 0x0 SD (0)]
bad_stat_data: The objectid (1366919) is marked free, but used by an object [1366916 1366919 0x0 SD (0)]
bad_stat_data: The objectid (1366920) is marked free, but used by an object [1366916 1366920 0x0 SD (0)]
/157 (of 170)bad_stat_data: The objectid (1366922) is marked free, but used by an object [1366921 1366922 0x0 SD (0)]
/158 (of 170)bad_stat_data: The objectid (1366923) is marked free, but used by an object [1366921 1366923 0x0 SD (0)]
bad_stat_data: The objectid (1366924) is marked free, but used by an object [1366921 1366924 0x0 SD (0)]
/159 (of 170)bad_stat_data: The objectid (1366926) is marked free, but used by an object [1366925 1366926 0x0 SD (0)]
bad_stat_data: The objectid (1366927) is marked free, but used by an object [1366925 1366927 0x0 SD (0)]
bad_stat_data: The objectid (1366928) is marked free, but used by an object [1366925 1366928 0x0 SD (0)]
bad_stat_data: The objectid (1366935) is marked free, but used by an object [1366934 1366935 0x0 SD (0)]
bad_stat_data: The objectid (1369229) is marked free, but used by an object [1369228 1369229 0x0 SD (0)]
/147 (of 157)/125 (of 170)bad_stat_data: The objectid (1369230) is marked free, but used by an object [1369228 1369230 0x0 SD (0)]
bad_stat_data: The objectid (1369231) is marked free, but used by an object [1369228 1369231 0x0 SD (0)]
bad_stat_data: The objectid (1369547) is marked free, but used by an object [1369546 1369547 0x0 SD (0)]
bad_stat_data: The objectid (1369548) is marked free, but used by an object [1369546 1369548 0x0 SD (0)]
/149 (of 157)/ 37 (of  85)bad_stat_data: The objectid (1371064) is marked free, but used by an object [1371063 1371064 0x0 SD (0)]
/150 (of 157)/154 (of 170)bad_stat_data: The objectid (1371065) is marked free, but used by an object [1371063 1371065 0x0 SD (0)]
/152 (of 157)/ 25 (of 170)bad_stat_data: The objectid (1371066) is marked free, but used by an object [1371063 1371066 0x0 SD (0)]
/153 (of 157)/ 91 (of 170)bad_stat_data: The objectid (1371067) is marked free, but used by an object [1371063 1371067 0x0 SD (0)]
/154 (of 157)/115 (of 170)bad_stat_data: The objectid (1371068) is marked free, but used by an object [1371063 1371068 0x0 SD (0)]
/155 (of 157)/146 (of 170)bad_stat_data: The objectid (1371069) is marked free, but used by an object [1371063 1371069 0x0 SD (0)]
/157 (of 157)/ 85 (of  85)bad_stat_data: The objectid (1371070) is marked free, but used by an object [1371063 1371070 0x0 SD (0)]
finished
Comparing bitmaps..vpf-10640: The on-disk and the correct bitmaps differs.
Bad nodes were found, Semantic pass skipped
21 found corruptions can be fixed only when running with --rebuild-tree
###########
reiserfsck finished at Tue Jan  5 17:53:46 2016
###########


I noticed, however, that every drive LED except disk6's (and cache) is lit up right now. I think disk6 may have disappeared completely. If that's the case, what will this accomplish?

 

This is normal. You are trying to fix the emulated disk6; it's emulated by all the other disks plus parity, so the actual disk6 is currently not in use.
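Just to illustrate what "emulated" means here (a toy example, nothing unRAID-specific): with single parity, the missing disk's data is recomputed on the fly by XORing the corresponding bytes of every other data disk with the parity disk, which is why all the other drives stay busy while disk6 itself sits idle. A made-up byte, as a bash sketch:

d1=0xA3; d2=0x5F; d3=0x10; d4=0xEE; d5=0x07; d6=0x42   # hypothetical bytes at one offset
parity=$(( d1 ^ d2 ^ d3 ^ d4 ^ d5 ^ d6 ))              # what the parity drive stores
rebuilt=$(( d1 ^ d2 ^ d3 ^ d4 ^ d5 ^ parity ))         # what gets served up as "disk6"
printf 'real disk6 byte: 0x%02X  emulated byte: 0x%02X\n' "$d6" "$rebuilt"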

 

 

Alright, so the output was extremely long and I didn't think to capture it to a file when I ran it, but here's the last little bit:

 

21 found corruptions can be fixed only when running with --rebuild-tree
###########
reiserfsck finished at Tue Jan  5 17:53:46 2016
###########

 

 

 

With the array still in maintenance mode, now run:

reiserfsck --rebuild-tree /dev/md6

 

When this ends, you will hopefully have a mountable emulated disk6, probably with a lost+found folder containing recovered files and folders.
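A minimal sketch of that step, still from the console with the array in maintenance mode (the tee/log filename is just my suggestion, and /mnt/disk6 assumes the standard unRAID mount point once the array is restarted normally); --rebuild-tree will ask you to type Yes before it starts:

reiserfsck --rebuild-tree /dev/md6 2>&1 | tee /boot/reiserfsck-rebuild-disk6.log
# when it finishes: stop the array, start it normally, then look for recovered items
ls -la /mnt/disk6/lost+found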


EARS says Greens. 1TB and 2TB Greens, I wouldn't be putting any faith in those; they're probably knackered. :( What does SMART say as regards Load Cycle counters?
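(For reference, reading that counter from the console looks something like the line below; sdX is a placeholder for whatever device letter unRAID assigns to the disk, and the grep just filters the attributes of interest.)

smartctl -a /dev/sdX | grep -i -E 'load_cycle|reallocated|pending|power_on'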

 

In regards to Greens, my 2TB WD Greens have lasted forever and are still kicking. I added a 3TB Green to the mix a few years ago, and that's the one that's dying on me. I'm planning on replacing it with a 3TB Seagate NAS drive, and I got an extra to have on hand.


Just an update on this: the drive definitely was dead. I will never buy another 3TB WD Green because that one died way too soon!

 

Thank you, johnnie black, because your steps worked perfectly!

 

I noticed, however, that every drive LED except disk6's (and cache) is lit up right now. I think disk6 may have disappeared completely. If that's the case, what will this accomplish?

 

This is normal. You are trying to fix the emulated disk6; it's emulated by all the other disks plus parity, so the actual disk6 is currently not in use.

 

 

Alright, so the output was extremely long and I didn't think to capture it to file when I ran it, but here's the last little bit:

 

21 found corruptions can be fixed only when running with --rebuild-tree
###########
reiserfsck finished at Tue Jan  5 17:53:46 2016
###########

 

 

 

With the array still in maintenance mode, now run:

reiserfsck --rebuild-tree /dev/md6

 

When this ends, you will hopefully have a mountable emulated disk6, probably with a lost+found folder containing recovered files and folders.

 

This is exactly what I did, and the data loss on disk6 was pretty minimal.

 

I used a 3TB Seagate NAS drive and I'm hoping it lasts longer than that 3TB WD green!

 

Attached a happy-looking screenshot :)

rebuilt.png

