TaterSalad Posted May 16, 2011

Over the past couple of weeks, I have been playing around with integrating unRAID into a full Slackware install. I would make some progress here and there, and then always just switch back to my old 4.7 with no problems. Today I made some real progress and actually got 5.0b6a working under full Slackware.

I got to the point where I wanted to bring up my array in 5.0b6a. But when I went to add my disks to the array, ALL of my disks said UNFORMATTED. I even tested with vanilla 5.0b6a and 4.7 Final. All still say unformatted. :'( Not good. Not good at all.

So I tried to manually mount the disks from the command prompt. Nothing. It says it doesn't recognize the file system, even when I explicitly specify reiserfs, and it won't mount. So I turned to reiserfsck next. --check won't even start because it can't find the reiserfs superblock. It says I need to use the --rebuild-sb option. After reading a few threads about other people experiencing this, I decided to go ahead and try it. It works its magic, but it runs into a 'Bad root block 0.' and says I need to do a --rebuild-tree. So I ran:

reiserfsck --scan-whole-partition --rebuild-tree /dev/sde

After three solid hours, it is telling me this:

```
root@tower:~# reiserfsck --scan-whole-partition --rebuild-tree /dev/sde
reiserfsck 3.6.21 (2009 www.namesys.com)

*************************************************************
** Do not run the program with --rebuild-tree unless      **
** something is broken and MAKE A BACKUP before using it. **
** If you have bad sectors on a drive it is usually a bad **
** idea to continue using it. Then you probably should get**
** a working hard drive, copy the file system from the bad**
** drive to the good one -- dd_rescue is a good tool for  **
** that -- and only then run this program.                **
** If you are using the latest reiserfsprogs and it fails **
** please email bug reports to [email protected],    **
** providing as much information as possible -- your      **
** hardware, kernel, patches, settings, all reiserfsck    **
** messages (including version), the reiserfsck logfile,  **
** check the syslog file for any related information.     **
** If you would like advice on using this program, support**
** is available for $25 at www.namesys.com/support.html.  **
*************************************************************

Will rebuild the filesystem (/dev/sde) tree
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
Replaying journal: No transactions found
Zero bit found in on-disk bitmap after the last valid bit. Fixed.
###########
reiserfsck --rebuild-tree started at Sun May 15 21:21:00 2011
###########
Pass 0:
####### Pass 0 #######
The whole partition (244190640 blocks) is to be scanned
Skipping 15663 blocks (super block, journal, bitmaps)
244174977 blocks will be read
0%....block 91347207: The number of items (5120) is incorrect, should be (1) - corrected
block 91347207: The free space (0) is incorrect, should be (2000) - corrected
pass0: vpf-10110: block 91347207, item (0): Unknown item type found [335545601 30146560 0x202800000140039 (5)] - deleted
block 98898560: The number of items (16384) is incorrect, should be (1) - corrected
block 98898560: The free space (896) is incorrect, should be (4048) - corrected
pass0: vpf-10110: block 98898560, item (0): Unknown item type found [57360 117473280 0x1000000 (15)] - deleted
block 101499198: The number of items (5204) is incorrect, should be (1) - corrected
block 101499198: The free space (0) is incorrect, should be (4048) - corrected
pass0: vpf-10200: block 101499198, item 0: The item [268435456 1647575187 0x456000113050002 IND (1)] with wrong offset is deleted
block 105550161: The number of items (16655) is incorrect, should be (1) - corrected
block 105550161: The free space (237) is incorrect, should be (4048) - corrected
pass0: vpf-10110: block 105550161, item (0): Unknown item type found [0 0 0x2706c2e6c70ea0f (7)] - deleted
left 0, 21719 /sec
Could not find a hash in use. Using "r5"
Selected hash ("r5") does not match to the hash set in the super block (not set).
"r5" hash is selected
Flushing..finished
Read blocks (but not data blocks) 244174977
        Leaves among those 10
        - leaves all contents of which could not be saved and deleted 10
Objectids found 2
Pass 1 (will try to insert 0 leaves):
####### Pass 1 #######
Looking for allocable blocks .. finished
Flushing..finished
0 leaves read
        0 inserted
####### Pass 2 #######
Flushing..finished

No reiserfs metadata found. If you are sure that you had the reiserfs
on this partition, then the start of the partition might be changed or
all data were wiped out. The start of the partition may get changed by
a partitioner if you have used one. Then you probably rebuilt the
superblock as there was no one. Zero the block at 64K offset from the
start of the partition (a new super block you have just built) and try
to move the start of the partition a few cylinders aside and check if
debugreiserfs /dev/xxx detects a reiserfs super block. If it does this
is likely to be the right super block version.

If this makes you nervous, try www.namesys.com/support.html, and for
$25 the author of fsck, or a colleague if he is out, will step you
through it all.

Aborted
```

debugreiserfs says this:

```
root@Tower:~# debugreiserfs /dev/sde
debugreiserfs 3.6.21 (2009 www.namesys.com)

Filesystem state: consistency is not checked after last mounting

Reiserfs super block in block 16 on 0x840 of format 3.6 with standard journal
Count of blocks on the device: 244190640
Number of bitmaps: 7453
Blocksize: 4096
Free blocks (count of blocks - used [journal, bitmaps, data, reserved] blocks): 244190640
Root block: 0
Filesystem is NOT clean
Tree height: 65535
Hash function used to sort names: "r5"
Objectid map size 2, max 972
Journal parameters:
        Device [0x0]
        Magic [0x0]
        Size 8193 blocks (including 1 for journal header) (first block 18)
        Max transaction length 1024 blocks
        Max batch size 900 blocks
        Max commit age 30
Blocks reserved by journal: 0
Fs state field: 0xfa02:
        FATAL corruptions exist.
sb_version: 2
inode generation number: 0
UUID: [removed]
LABEL:
Set flags in SB:
Mount count: 1
Maximum mount count: 30
Last fsck run: Mon May 16 02:14:51 2011
Check interval in days: 180
```

Sounds risky. I know for a fact the data has not been zeroed. My filesystem is just severely crapped. Is there any chance for recovery? Please help! I'm not sure what could have caused this. I never reformatted the wrong drive. And moreover, it's affecting all of my drives. :'( :'(

EDIT: I looked into that www.namesys.com/support.html link. Some of you may know this, but Namesys (the company that made ReiserFS) is no longer around. AND Reiser himself is in jail for MURDER! Just my luck. http://en.wikipedia.org/wiki/Hans_Reiser
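For anyone finding this thread later: everything above was run against /dev/sde, the raw device, rather than the partition holding the filesystem. A minimal sketch of a safer, read-only diagnosis order, with the destructive options left for last (device names are examples, not taken from this thread; the guarded commands are all read-only):

```shell
# Hypothetical sketch: non-destructive diagnosis order for a ReiserFS disk.
# Device names are examples -- substitute your own. Nothing here writes
# to the disk.

DISK=/dev/sde        # whole device: holds the MBR/partition table
PART=${DISK}1        # first partition: this is where the filesystem lives

if [ -b "$DISK" ]; then
    # 1. Inspect the partition table first. If the partition start has
    #    moved, the filesystem is intact but at a different offset.
    fdisk -l "$DISK"

    # 2. Look for the superblock on the partition, read-only.
    debugreiserfs "$PART"

    # 3. A read-only consistency check; --check does not modify the disk.
    reiserfsck --check "$PART"
fi

# --rebuild-sb and --rebuild-tree rewrite metadata; they should be the
# last resort, and only after imaging the drive (e.g. with dd_rescue).
echo "target partition: $PART"
```

The point of the ordering is that a moved partition start looks exactly like "no superblock found" when you fsck the wrong node, and --rebuild-tree makes that misdiagnosis permanent.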
lionelhutz Posted May 16, 2011

Did you read the 5.0b6a thread and the way it has moved/recreated partitions for a bunch of people? There's a reason it comes with a big WARNING. It is BETA test software. If you had read the 5.0b6a release thread, you would have gotten the clues necessary to fix your problem (it's actually a really simple two-minute fix). Go over there and read it and then you'll know what was happening. However, you might have already screwed yourself by running reiserfsck commands on the disks.

Peter
TaterSalad Posted May 16, 2011

Thanks for the reply, Peter. I am rereading the release thread right now. For some reason, I was under the impression that this only affected people who upgraded from 4.7. I assumed that 5.0b6 fresh installs (but migrated drives) were not affected. I guess I assumed wrong. Luckily, I have only run reiserfsck on one disk. Hopefully, I will be able to rebuild.

I ran mkmbr /dev/sde 63 0x83. /dev/sde is the disk I ran reiserfsck on. I figured, if it's already borked, what do I have to lose? Sure enough, that fixed it. unRAID mounts that disk. My files seem to be intact, but I haven't tested everything. Did reiserfsck make any modifications I need to be concerned about? Do you suggest I run mkmbr /dev/sd* 63 0x83 on my remaining data disks? What about my parity? All of my disks say MBR: 4K-aligned, except /dev/sde, which now says MBR: unaligned.

I've learned my lesson now, though. Read the documentation. I guess I just got excited that I got my Slackware problem sorted out and jumped the gun.
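For context on those mkmbr arguments: 63 is the partition's starting sector and 0x83 is the Linux partition type id. A start sector of 63 is the old unaligned layout; the 4K-aligned layout starts at sector 64 instead, which is why this disk now reports "MBR: unaligned". A small sketch (device name is an example) of checking where a partition actually starts before rewriting any MBR:

```shell
# Sketch: verify a partition's start sector before rewriting the MBR.
# Device name is an example. Start sector 63 = old unaligned layout;
# start sector 64 = 4K-aligned layout.

DISK=/dev/sde
SECTOR_SIZE=512

if [ -b "$DISK" ]; then
    # sfdisk -d dumps the table with a stable "start=" field per partition.
    sfdisk -d "$DISK" | grep "${DISK}1"
fi

# Byte offsets of the two candidate partition starts:
UNALIGNED=$((63 * SECTOR_SIZE))
ALIGNED=$((64 * SECTOR_SIZE))
echo "start sector 63 -> byte $UNALIGNED (unaligned)"
echo "start sector 64 -> byte $ALIGNED (4K-aligned)"
```

The one-sector difference is the whole problem in this thread: the filesystem itself is untouched, but every tool that expects it 32256 bytes in is looking 512 bytes off if the table says 64.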
Joe L. Posted May 16, 2011

The file system is NOT on /dev/sde, but on /dev/sde1. You may have been confused by that fact. It is probably why reiserfsck reported it could not find a superblock. Glad you were able to fix the partitioning and get things back to normal.

Joe L.
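Joe's point, sketched out: the ReiserFS superblock sits 64 KiB into the *partition*, so the same bytes read through the whole device sit at partition_start + 64 KiB, and tools pointed at /dev/sde at offset 64K hit padding instead. This is a hypothetical illustration with example device names; the magic-string location (52 bytes into the superblock, "ReIsEr2Fs" for a 3.6 filesystem) comes from the reiserfs on-disk layout and is an assumption worth verifying against your reiserfsprogs version. Both dd reads are read-only.

```shell
# Sketch: why fsck on the whole device misses the superblock.
# Example device names; nothing here writes to disk.

PART=/dev/sde1
START_SECTOR=63                # this disk's (unaligned) partition start
SB_OFFSET=$((64 * 1024))       # superblock offset *within the partition*

if [ -b "$PART" ]; then
    # Read the magic string as seen through the partition (assumed to be
    # 52 bytes into the superblock for a 3.6 filesystem):
    dd if="$PART" bs=1 skip=$((SB_OFFSET + 52)) count=9 2>/dev/null
    echo
fi

# The same bytes addressed through the whole device:
ABS=$((START_SECTOR * 512 + SB_OFFSET))
echo "superblock starts at byte $ABS of the whole disk"
```

That offset gap also explains why --rebuild-sb "worked" on /dev/sde: it wrote a brand-new superblock 64K into the raw device, which is what debugreiserfs later found at block 16.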
TaterSalad Posted May 16, 2011

Ah. Good thinking, Joe. Judging by the output from reiserfsck, it looks like it did make some modifications to /dev/sde. Are any of those modifications anything to be concerned about? Also, does mkmbr need to be run on parity too, or just the data disks?
lionelhutz Posted May 16, 2011

You don't have to do the parity disk, because parity could just be rebuilt onto the moved partition, but you might as well just run mkmbr on it too and get it back.

Peter