HDD Unmountable - SuperBlock?


JohnnyT

Recommended Posts

Last night I got an alert that my array was down. I saw that one of the new drives I added a few days ago was unmountable. I tried stopping the array and rebooting, but the drive was still unmountable. I then went into maintenance mode and ran the filesystem check, which did not help. That pointed me to run xfs_repair -v /dev/sdk, which told me a superblock was missing, which in turn pointed me to run reiserfsck --rebuild-sb /dev/sdk to rebuild the superblock. That failed and told me to run --rebuild-tree, which I am running on that drive now. So... yeah, did I lose all the data on that drive? It is currently rebuilding or something, but I did not have a parity drive yet. The other 6TB drive I bought for parity did not pass testing, and I was waiting for a replacement. I only set up unRAID a few weeks ago, so any help would be great; I am in deeper than I understand.

[Attached screenshot: IMAG1012.jpg]

Link to comment

You should have asked for help earlier.

 

You can't use just sdX; you need to add a 1 for the partition (sdX1). But even though that would work, it would put your parity out of sync. On unRAID you should always use the mdX identifier, replacing X with the disk number.
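
For example, if the unmountable drive is disk 7 in the array (check the Main page for the actual number), the difference looks roughly like this:

xfs_repair -v /dev/sdk    # whole device, no partition: won't work
xfs_repair -v /dev/sdk1   # works, but bypasses unRAID and leaves parity out of sync
xfs_repair -v /dev/md7    # the unRAID md device: keeps parity in sync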

 

xfs_repair is for XFS-formatted disks; reiserfsck is only for ReiserFS-formatted disks.
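
If you're not sure which filesystem a disk uses, the GUI shows it on that disk's settings page; from the console, blkid should also report it (again assuming disk 7 here):

blkid /dev/md7
# should print something like: /dev/md7: UUID="..." TYPE="xfs"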

 

If the disk is XFS, abort the current check and run (with the array in maintenance mode):

 

xfs_repair -v /dev/mdX

 

X=disk number.
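
If you'd rather see what it would change before letting it write anything, xfs_repair also has a no-modify flag (-n) that only reports problems:

xfs_repair -nv /dev/mdX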

 

Maybe you can still salvage something.

 

Link to comment

I was able to ssh in and stop the current check. Any ideas?

 

root@davidFlix:~# xfs_repair -v /dev/md7
Phase 1 - find and verify superblock...
        - block cache size set to 1434064 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 121553 tail block 121539
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
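
The message above lays out the two options: mount the filesystem so the log gets replayed, or destroy the log with -L. On unRAID, mounting the disk means starting the array in normal (non-maintenance) mode; if the disk still shows as unmountable after that, the usual fallback, sketched here on the assumption that this is still disk 7, is to go back to maintenance mode and run:

xfs_repair -v -L /dev/md7

As the output warns, zeroing the log can itself cause some corruption, so only reach for -L if a normal mount fails.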

 

Link to comment

I do not, but I will take a look at that for the future. I just started using unRAID a few weeks ago, so I still don't know all the best practices.

 

root@davidFlix:/mnt/user/lost+found# ls
/bin/ls: reading directory '.': Input/output error

 

I am guessing this means the directory is empty, or is there more to it than that?
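
For what it's worth, an Input/output error means the read itself failed, not that the directory is empty (an empty directory would simply return nothing). The kernel log usually shows the underlying filesystem or device error, so something like this would be a reasonable first check:

dmesg | tail -n 30
# look for XFS or disk I/O errors mentioning this path or the md7 device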

Link to comment
