
Unmountable drive. xfs_repair shows "bad magic in block N"



3 hours ago, JorgeB said:

Basically the only option you have is to run it again without -n. You should also have backups of anything important; Unraid, like any RAID solution, is not a backup.

 

I did that and it fixed the drive, but about 10 minutes later it unmounted again. I repeated the steps and it mounted, but after another 30 minutes it happened again. Same disk every time.

1 hour ago, paululibro said:

If I replace the current drive with an empty one, rebuild it from parity and then recreate the fs - should it work?

Rebuilding to a replacement might help if there are actual problems with the disk causing this; in that case there's no point in recreating the fs.

 

1 hour ago, JorgeB said:

re-creating that filesystem, after backing it up

Recreating it means formatting, so you have to back it up somewhere first.


I'm rebuilding to a new disk. It's currently at 18%, but the disk already shows "Unmountable: not mounted".

 

6 minutes ago, trurl said:

Recreating it means formatting, so you have to back it up somewhere first.

 

All data should still be backed up on the original drive. If recreating will be necessary I can copy from there.


Rebuilding finished with 0 errors, so I ran xfs_repair with the no-modify flag (-n) and got thousands of lines like these:

 

out-of-order bno btree record 344 (108441151 18) block 0/651941
block (0,108917856-108917947) multiply claimed by bno space tree, state - 1
data fork in ino 15034795729 claims free block 37702356
free space (7,69487107-69487108) only seen by one free space btree

 

And thousands of files and directories marked to be junked:

 

entry "Season 1" in shortform directory 106340182 references non-existent inode 2183048375
would have junked entry "Season 1" in directory inode 106340182

 

Then I plugged in the original disk, mounted it with Unassigned Devices, and compared the tree command output from today with yesterday's - all data is still on it.

 

Now:

1. unplug the original drive

2. go to maintenance mode

3. click on the new drive

4. re-create the fs (the only way I see is to set it to something else like btrfs and then back to xfs)

5. connect and mount the original drive

6. copy all files from the original drive to the new one

Is that correct?
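After step 6, the copy can be sanity-checked with an rsync dry run before the original drive is wiped or retired. A sketch with placeholder paths (the demo setup only exists so it runs anywhere; on the server the two paths would be the original disk's mount point and the rebuilt disk):

```shell
SRC=${SRC:-/tmp/verify-src}
DST=${DST:-/tmp/verify-dst}

# Demo setup: two trees with identical content.
mkdir -p "$SRC" "$DST"
echo same > "$SRC/file" && cp "$SRC/file" "$DST/file"

# -n dry run, -c compare file content by checksum, -i itemize changes:
# any ">f" line in the output marks a file that differs or is missing,
# so no file lines means the copy is complete.
rsync -naci "$SRC/" "$DST/"
```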

