
Bad Magic Number (Bad superblock) XFS drive & Bad Superblock on ReiserFS 6.3.2


numanuma


Hiya guys, my problem is with XFS & ReiserFS. I have a 1x2TB ReiserFS media drive and the more important family photos drive, which is 4TB and XFS.

Brand new build with old drives (currently replacing all): i7 6800K + MSI X99A mobo + 8GB Corsair Vengeance 2400MHz, passive MSI GeForce 2GB 1030 GPU, in an Antec S10 case (it's my new baby).

I would be really grateful for any help; like I said, there is a really important drive (aren't they all). The family pictures are the most important thing to get off that drive.

 

The 2TB ReiserFS drive has been through reiserfsck --check, reiserfsck --fix-fixable, reiserfsck --rebuild-sb, and --rebuild-tree.
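For reference, the usual order is the non-destructive option first and the destructive one last; a hedged sketch, with /dev/sdX1 standing in for the actual partition:

```shell
# Read-only consistency check first -- this never writes to the disk
reiserfsck --check /dev/sdX1

# Only if --check reports corruption it says is fixable:
reiserfsck --fix-fixable /dev/sdX1

# Last resort, only when --check explicitly tells you to, and ideally
# only after imaging the drive -- this rewrites the whole internal tree:
reiserfsck --rebuild-tree /dev/sdX1
```

--rebuild-sb is separate again and only applies when the superblock itself is damaged.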


root@Tower:~# reiserfsck --fix-fixable /dev/sdc1 
reiserfsck 3.6.25

Will check consistency of the filesystem on /dev/sdc1
and will fix what can be fixed without --rebuild-tree
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
###########
reiserfsck --fix-fixable started at Mon Jul 31 15:48:14 2017
###########
Replaying journal: Done.
Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
Zero bit found in on-disk bitmap after the last valid bit. Fixed.
Checking internal tree..  

Bad root block 0. (--rebuild-tree did not complete)

Aborted

 

That is the most common error now: the bad root block 0 on the ReiserFS disk. It has media on it and I really don't want to lose it. It currently shows as unmountable unless the array is in maintenance mode.

 

 

 

The XFS 4TB WD Green (main) disk has been disconnected and powered off in the hope that somehow Jesus or Al Gore works some magic while it is off. No such luck. I think it is a bad sector on the XFS drive, but I will need to connect it, run xfs_repair and post the results. Any advice on this would be greatly appreciated; more so would be reassurance that my almost 20 years of family photos have not been lost. (Yes, I know they should have been backed up before now. Disk 1 (XFS) failed, parity was corrupt, and disk 3 (ReiserFS) failed.)
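When the XFS drive is reconnected, the safe first step is a read-only dry run; a sketch, with /dev/sdX1 and /mnt/test as placeholders:

```shell
# Dry run: -n (no-modify mode) reports problems without writing anything
xfs_repair -n /dev/sdX1

# If xfs_repair complains about a dirty log, mounting and cleanly
# unmounting the filesystem replays the journal:
mount /dev/sdX1 /mnt/test && umount /mnt/test

# Actual repair, only after reviewing the dry-run output
# (and ideally after imaging the drive):
xfs_repair /dev/sdX1
```

xfs_repair also has a -L flag that zeroes the log if it cannot be replayed, but that can discard recent metadata changes and should be an absolute last resort.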

 

Thank you


I've backed up my old configs and done Tools > New Config, moved the 5TB parity drive (which is also showing the fault triangle in the webGUI) and formatted it to XFS, then New Config again and moved it back to parity. I've got the only good one of my 3 data drives, in slot 2 (also a 2TB drive), doing a parity sync now. 12 hr to go, ~100 MB/sec.

 

I have no idea what to do after that completes, assuming there are no errors.

1 hour ago, johnnie.black said:

 

You didn't post enough info, diagnostics and xfs_repair output may help.

OK, I'll work on the XFS drive once this ReiserFS one is done. Stopped the parity sync because the data on disk 2 can wait until disk 3 is done.

 

Updated to the latest unRAID; --check, --fix-fixable & --rebuild-tree tried and failed.

root@Tower:/dev# reiserfsck --check /dev/sdc1      
reiserfsck 3.6.24

Will read-only check consistency of the filesystem on /dev/sdc1
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
###########
reiserfsck --check started at Mon Jul 31 20:19:37 2017
###########
Replaying journal: Done.
Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
Zero bit found in on-disk bitmap after the last valid bit.
Checking internal tree..  

Bad root block 0. (--rebuild-tree did not complete)

Aborted
root@Tower:/dev# reiserfsck --fix-fixable /dev/sdc1
reiserfsck 3.6.24

Will check consistency of the filesystem on /dev/sdc1
and will fix what can be fixed without --rebuild-tree
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
###########
reiserfsck --fix-fixable started at Mon Jul 31 20:33:13 2017
###########
Replaying journal: Done.
Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
Zero bit found in on-disk bitmap after the last valid bit. Fixed.
Checking internal tree..  

Bad root block 0. (--rebuild-tree did not complete)

Aborted

currently running rebuild-tree

root@Tower:/dev# reiserfsck --rebuild-tree /dev/sdc1
reiserfsck 3.6.24

*************************************************************
** Do not  run  the  program  with  --rebuild-tree  unless **
** something is broken and MAKE A BACKUP  before using it. **
** If you have bad sectors on a drive  it is usually a bad **
** idea to continue using it. Then you probably should get **
** a working hard drive, copy the file system from the bad **
** drive  to the good one -- dd_rescue is  a good tool for **
** that -- and only then run this program.                 **
*************************************************************

Will rebuild the filesystem (/dev/sdc1) tree
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
Replaying journal: Done.
Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
Zero bit found in on-disk bitmap after the last valid bit. Fixed.
###########
reiserfsck --rebuild-tree started at Mon Jul 31 20:39:50 2017
###########

Pass 0:
####### Pass 0 #######
Loading on-disk bitmap .. ok, 215905101 blocks marked used
Skipping 23115 blocks (super block, journal, bitmaps) 215881986 blocks will be read
0%.    left 196606499, 24743 /sec
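The warning printed at the top of that run recommends imaging a drive with bad sectors before repairing it. A hedged sketch of that workflow with GNU ddrescue (the modern successor to the dd_rescue the warning names); device and backup paths are placeholders:

```shell
# First pass: -n skips the slow scraping of bad areas, grabbing all the
# easily readable data; the mapfile records what has been recovered
ddrescue -f -n /dev/sdX1 /mnt/backup/sdX1.img /mnt/backup/sdX1.map

# Second pass: retry the bad areas up to 3 times, resuming via the mapfile
ddrescue -f -r3 /dev/sdX1 /mnt/backup/sdX1.img /mnt/backup/sdX1.map

# Then run the repair against the image instead of the failing disk
reiserfsck --rebuild-tree /mnt/backup/sdX1.img
```

This way the destructive --rebuild-tree never touches the original drive.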

 

1 hour ago, numanuma said:

--rebuild-tree tried and failed

 

failed or

 

1 hour ago, numanuma said:

currently running rebuild-tree

 

?

 

Also, reiserfsck shouldn't be run with every option blindly, if you're not sure ask for help or you'll risk making things worse.


I think I meant --fix-fixable first. I'm nearly at 80% on --rebuild-tree. From what I've read, --check, --fix-fixable and --rebuild-tree are the ones I should be using, but please, if anybody knows better, do chime in. :)

 

0%....20%....40%....60%...                            left 57960765, 21927 /sec

How are the latest XFS repair tools on 6.3.5? I've read they aren't that reliable.

4 minutes ago, numanuma said:

From what i've read --check, --fix-fixable and --rebuild-tree

 

Based on the info provided by --check you should have gone to --rebuild-tree directly; --fix-fixable would never work, though that one is usually safe. --rebuild-tree and --rebuild-sb should only be used when needed.

root@Tower:/dev# reiserfsck --rebuild-tree /dev/sdc1
reiserfsck 3.6.24

*************************************************************
** Do not  run  the  program  with  --rebuild-tree  unless **
** something is broken and MAKE A BACKUP  before using it. **
** If you have bad sectors on a drive  it is usually a bad **
** idea to continue using it. Then you probably should get **
** a working hard drive, copy the file system from the bad **
** drive  to the good one -- dd_rescue is  a good tool for **
** that -- and only then run this program.                 **
*************************************************************

Will rebuild the filesystem (/dev/sdc1) tree
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
Replaying journal: Done.
Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
Zero bit found in on-disk bitmap after the last valid bit. Fixed.
###########
reiserfsck --rebuild-tree started at Mon Jul 31 20:39:50 2017
###########

Pass 0:
####### Pass 0 #######
Loading on-disk bitmap .. ok, 215905101 blocks marked used
Skipping 23115 blocks (super block, journal, bitmaps) 215881986 blocks will be read
0%....20%....40%....60%....80%....100%                       left 0, 19325 /sec
2102 directory entries were hashed with "r5" hash.
	"r5" hash is selected
Flushing..finished
	Read blocks (but not data blocks) 215881986
		Leaves among those 214519
		Objectids found 2115

Pass 1 (will try to insert 214519 leaves):
####### Pass 1 #######
Looking for allocable blocks .. finished
0%....20%....40%....60%....80%....100%                         left 0, 114 /sec
Flushing..finished
	214519 leaves read
		214510 inserted
		9 not inserted
####### Pass 2 #######

Pass 2:
0%....20%....40%....60%....80%....100%                           left 0, 0 /sec
Flushing..finished
	Leaves inserted item by item 9
Pass 3 (semantic):
####### Pass 3 #########
... .Nixon.PDTV.x264-W4F/american.experience.s03e04-e06.nixon.pdtv.x264-w4f.r00vpf-10680: The file [2838554 2838567] has the wrong block count in the StatData (97664) - corrected to (6440)
/media/downloading/American.Experience.S03E04-E06.Nixon.PDTV.x264-W4Frebuild_semantic_pass: The entry [2838554 2838571] ("american.experience.s03e04-e06.nixon.pdtv.x264-w4f.r01") in directory [2413036 2838554] points to nowhere - is removed
/media/downloading/American.Experience.S03E04-E06.Nixon.PDTV.x264-W4Fvpf-10650: The directory [2413036 2838554] has the wrong size in the StatData (2496) - corrected to (2424)
Flushing..finished                                                             
	Files found: 1778
	Directories found: 325
	Names pointing to nowhere (removed): 1
Pass 3a (looking for lost dir/files):
####### Pass 3a (lost+found pass) #########
Looking for lost directories:
Flushing..finished 67765 /sec
	Empty lost dirs removed 1
Pass 4 - finishedne 136508, 45502 /sec
	Deleted unreachable items 14
Flushing..finished
Syncing..finished
###########
reiserfsck finished at Tue Aug  1 00:28:28 2017
###########
root@Tower:/dev# 

ReiserFS is working again, thank you. I thought I was up to date but obviously not. Now doing a parity sync and will connect the XFS drive in the morning. When I checked yesterday it would not show up in the webGUI.

15 minutes ago, numanuma said:

now doing a parity sync and will connect the XFS drive in the morning

If you add a drive to a parity protected array, it will be cleared then formatted, totally erasing any data with no possibility of recovery. If you wish to keep the data, you will need to either set a new config with that drive added and rebuild parity to include it, or mount it outside the array using unassigned devices and copy the data off of it before you add it to the array.
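The Unassigned Devices plugin does the "mount outside the array" part from the GUI; the manual equivalent is roughly the following sketch, with the device name and destination share as placeholders:

```shell
# Mount the old drive read-only outside the array, so nothing can
# write to it while the data is being rescued
mkdir -p /mnt/rescue
mount -o ro /dev/sdX1 /mnt/rescue

# Copy the data onto an array share, preserving timestamps and permissions
rsync -avh /mnt/rescue/ /mnt/user/recovered/

umount /mnt/rescue
```

Only after the copy has been verified should the drive be added to the array, since adding it will wipe it.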

6 minutes ago, jonathanm said:

If you add a drive to a parity protected array, it will be cleared then formatted, totally erasing any data with no possibility of recovery. If you wish to keep the data, you will need to either set a new config with that drive added and rebuild parity to include it, or mount it outside the array using unassigned devices and copy the data off of it before you add it to the array.

aaaaand swiftly stopping the parity sync!

 

Edit: on second thought, can't I parity these 2x2TB media drives, then New Config tomorrow when it comes to fixing the XFS drive? Once that is fixed (pray), I can add back the 2x2TB drives and then add the parity back, which wouldn't lose anything? Just asking before I go clicking cancel on anything; I think that's sorta what you said, but I want to make sure.


Connected the 4TB XFS drive; it is not showing up in either the GUI or /dev/. Ideas?

 

12 hours ago, trurl said:

You may be better off not using parity for now. It is invalid anyway and can't help with these other problems.

The parity sync was going suuuper slow anyway; is that because the parity drive is showing as faulty in the GUI?

50 minutes ago, numanuma said:

Connected the 4TB XFS drive; it is not showing up in either the GUI or /dev/. Ideas?

 

The parity sync was going suuuper slow anyway; is that because the parity drive is showing as faulty in the GUI?

The indicator in the GUI was just because parity was invalid.

 

I haven't really been following this thread but it doesn't look like you ever posted a diagnostic. Go to Tools - Diagnostics and post the complete diagnostics zip file. Possibly you have hardware issues with one or more drives in addition to your filesystem problems.

10 minutes ago, johnnie.black said:

There's no 4TB disk connected/detected.

 

 

That's /mnt/user , the 2 data disks joined.

Damn ;_; 

 

Can you advise on what to do next? I'm at a loss. Currently consolidating the 2x2TB into 1x2TB so I can remove the known faulty ReiserFS drive I just repaired.

