
[solved] TWO Drives Show Up As Unformatted



So I got a new 750GB drive last Friday, and I ran preclear on it twice successfully.  I added it to my array, replacing/upgrading an existing 500GB drive.  While the array was rebuilding the data onto the new 750GB drive (somewhere around 80-90% complete), one of my 1TB drives decided to red-ball on me, and as a result the rebuild cancelled prematurely.

 

I figured the cable might be bad on the drive that failed, so I shut down the computer and swapped it.  When I turned the system back on, I had two unformatted drives: the 750GB with an orange dot and the 1TB with a red dot.

 

I still have the 500GB drive that the 750GB drive replaced.  I'm guessing the data on the 1TB is pooched.  I'm hoping that if I can get the 500GB drive back into play, I can rebuild the 1TB drive.  But the issue is that while the 750GB was installed and rebuilding, I was downloading stuff directly to my fileserver.

 

Is there anything I can do?  Is it possible to get the data back from the 1TB drive?  Or is the old 500GB drive of any use?

 

I'm running unRAID 5.0b12a.

syslog-2011-11-08.txt

Link to comment

In addition, it appears as if disk1 has file-system corruption:

kernel: REISERFS warning: reiserfs-5090 is_tree_node: node level 1 does not match to the expected one 4

Nov  8 19:49:26 fileserver kernel: REISERFS error (device md1): vs-5150 search_by_key: invalid format found in block 53927549. Fsck?

Nov  8 19:49:26 fileserver kernel: REISERFS (device md1): Remounting filesystem read-only

Nov  8 19:49:26 fileserver kernel: REISERFS error (device md1): vs-13070 reiserfs_read_locked_inode: i/o failure occurred trying to find stat data of [1 2 0x0 SD]

Link to comment


DISK1 is the new 750GB drive that was partially rebuilt.

DISK8 is the existing 1TB drive that is now red balled.

Link to comment

Any suggestions on fixing Disk8?

 

Would this work?

http://lime-technology.com/forum/index.php?topic=5072.msg47122#msg47122

No, that would only work if the MBR pointed to the wrong starting sector for the partition.  That is not your issue.

 

You need to put back your original disk1 drive, initialize the configuration, and then BEFORE starting the array, force unRAID to think the other failed drive (disk8) is the one to re-construct rather than parity by use of the set invalidslot command. 

 

However, the use of the invalidslot command changed in the recent 5.0beta versions.  I don't think you can "refresh" the browser window to see all the drives turn blue.  (Doing so negates what you typed using the invalidslot command.)  You are in unfamiliar territory.  Best to write to lime-technology for advice on how to use the invalidslot command on a recent 5.0beta12 release to force re-construction of disk8.
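For later readers, the rough shape of that procedure on releases of this vintage looks something like the lines below.  This is only a sketch: slot 8 is simply this thread's disk8, the /root/mdcmd path is what I'd expect on a stock install of that era, and the exact invalidslot syntax on the 5.0 betas is precisely the open question above, so confirm with lime-technology before running anything.

# Sketch only -- syntax differs between 4.x and the 5.0 betas; confirm first.
# 1. Array stopped, original disk1 physically back in its slot.
# 2. Initialize the disk configuration (the "initialize the configuration" step above),
#    so unRAID accepts the current set of drives as a new configuration.
# 3. BEFORE starting the array, mark slot 8 (disk8) as the one to re-construct,
#    instead of rebuilding parity:
/root/mdcmd set invalidslot 8
# 4. Re-assign the drives and start the array; disk8 is then rebuilt from parity
#    plus the other data disks.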

Link to comment

What about this?  I ran reiserfsck on md8, and got the following:

 

reiserfs_open: the reiserfs superblock cannot be found on /dev/md8.

Failed to open the filesystem.

 

If the partition table has not been changed, and the partition is

valid  and  it really  contains  a reiserfs  partition,  then the

superblock  is corrupted and you need to run this utility with

--rebuild-sb.

 

Would the --rebuild-sb flag help?

Link to comment

I saw this thread too:

http://lime-technology.com/forum/index.php?topic=5072.msg47037#msg47037

 

And here is the output.  Does this look correct?  It's for the 1TB md8 drive.

 

root@fileserver:~# sfdisk -g /dev/sdk

/dev/sdk: 121601 cylinders, 255 heads, 63 sectors/track

root@fileserver:~# fdisk -l -u /dev/sdk

 

Disk /dev/sdk: 1000.2 GB, 1000204886016 bytes

1 heads, 63 sectors/track, 31008336 cylinders, total 1953525168 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

  Device Boot      Start        End      Blocks  Id  System

/dev/sdk1              63  1953525167  976762552+  83  Linux

Partition 1 does not end on cylinder boundary.

 

Link to comment


Looks normal.
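For anyone comparing their own output later: "normal" here, as I understand it for unRAID data disks of this era, means a single Id 83 (Linux) partition starting at sector 63 and running to the last sector of the disk; the "does not end on cylinder boundary" warning is cosmetic.  A script-friendly dump of the same table, using this thread's sdk as the example device:

sfdisk -d /dev/sdk    # expect one partition, start= 63, size = total sectors minus 63, Id=83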
Link to comment


Would it be worth trying reiserfsck with the --rebuild-sb flag on the md8 drive?

Link to comment


Would it be worth trying reiserfsck with the --rebuild-sb flag on the md8 drive?

Bad advice.

 

 

 


You would ONLY use that if advised by a prior reiserfsck --check.
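In other words, run the read-only pass first and let it tell you whether anything stronger is needed.  Something along these lines, with md8 as in this thread (my understanding is the array needs to be started so /dev/md8 exists):

reiserfsck --check /dev/md8      # read-only; answer "Yes" at the prompt
# Only if that pass itself recommends it (and only after asking here first):
#   reiserfsck --rebuild-sb /dev/md8
#   reiserfsck --rebuild-tree /dev/md8    # last resort, only if --check / --rebuild-sb say so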

 

All we know is that a prior "write" to that drive failed, which caused it to be marked as invalid.

 

The SMART report on it looks good, but it will not be considered writable again unless you force unRAID to put it back in service.

 

I'll repeat what I said earlier:

You need to put back your original disk1 drive, initialize the configuration, and then BEFORE starting the array, force unRAID to think the other failed drive (disk8) is the one to re-construct rather than parity by use of the set invalidslot command. 

 

However, the use of the invalidslot command changed in the recent 5.0beta versions.  I don't think you can "refresh" the browser window to see all the drives turn blue.  (Doing so negates what you typed using the invalidslot command.)  You are in unfamiliar territory.  Best to write to lime-technology for advice on how to use the invalidslot command on a recent 5.0beta12 release to force re-construction of disk8.

 

 

Link to comment


I got this output from reiserfsck --check on md8:

 

md8 also shows up as unformatted on the unRAID main screen.

 

Will read-only check consistency of the filesystem on /dev/md8

Will put log info to 'stdout'

 

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes

 

reiserfs_open: the reiserfs superblock cannot be found on /dev/md8.

Failed to open the filesystem.

 

If the partition table has not been changed, and the partition is

valid  and  it really  contains  a reiserfs  partition,  then the

superblock  is corrupted and you need to run this utility with

--rebuild-sb.

 

The issue with getting the original disk1 working is that while the rebuild was going on, stuff was written to the array, and my writes are set to go to the most-empty drive.  So I'm worried that parity doesn't match the old disk1.  Disk8, however, was working fine before and matched parity.

 

So if I put the old disk1 back in, then I'm assuming disk8 wouldn't be properly rebuilt.  Am I right in assuming this?

 

And based on the output from reiserfsck, I'm assuming that I SHOULD use the --rebuild-sb flag for md8, right?  I just want to confirm, as the wiki says I should check with an 'expert' first, and I am clearly no expert.  Thanks!

Link to comment

were you writing to disk1 while reconstructing disk8?

 

I don't THINK I did, as it's set to write to the least-full drive (and neither disk1 nor disk8 is the least full).

Then that is the risk you'll have to take in attempting to re-construct disk8.

 

I sent a message to limetech last night, still waiting on a response as to how to do it.  I'm assuming I lost everything on drive 8 (which sucks but not a huge deal).

 

Is it easier to just dump drive 8 and revert back to the old drive 1?

Link to comment


That is not what I said.

 

You put the original drive 1 back into place.

You basically tell unRAID, using command line commands, to set a new disk configuration BUT to re-construct disk8 instead of parity when you next start the array.

Then, disk8 will be re-constructed, as best it can be.  You might get all the files back that were on it.

 

Joe L.

Link to comment


By "lost everything on drive 8" I didn't mean the actual data (parity and the other drives should be able to rebuild it); I meant the physical drive itself is probably dead.

 

Thanks for the clarification.  As for which command-line commands to use, that's what I'm waiting on limetech for, right?

Link to comment

As suggested on another forum, would it be a good idea to revert back to 4.7 in order to fix the problem, then go back to 5.0 after?

You might be able to do that, but it is an added complication.  I'd personally not revert, but get the proper procedure from lime-tech.
Link to comment


I'm hoping that he'll get back to me soon.  Thanks for your help so far.

Link to comment

So I did this:

http://lime-technology.com/forum/index.php?topic=13866.msg131378#msg131378

 

And drive8 magically reappeared as a green-dot drive.  I put the original 500GB drive back in instead of the new 750GB; I'll add the 750GB later, after the parity check is complete.

 

Fingers are crossed!!!

 

The parity check has completed successfully.  I guess Drive8 wasn't actually dead.  I wonder why it came back fine.

Link to comment


Remember, a "write" to it had failed... and it was disabled because of that.  So... I'd check for loose connections to it (either data or power)
Link to comment


I've had a drive show up with a red dot due to a cabling issue before, but I haven't had one also show up as unformatted at the same time.

Link to comment

Archived

This topic is now archived and is closed to further replies.
