Cache Drive Unmountable



Posting for a friend. Shortly after upgrading to 6.3.5, his Docker containers went offline and the cache drive is listed as unmountable. We attempted a disk repair, and this is the result:

 

xfs_repair status:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_agf block 0x1/0x200
flfirst 118 in agf 0 too large (max = 118)
agf 118 freelist blocks bad, skipping freelist scan
agi unlinked bucket 23 is 7447 in ag 0 (inode=7447)
agi unlinked bucket 57 is 8249 in ag 0 (inode=8249)
agi unlinked bucket 59 is 42024699 in ag 0 (inode=42024699)
sb_icount 163776, counted 164800
sb_ifree 1803, counted 1547
sb_fdblocks 16453555, counted 16027419
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 2
        - agno = 0
        - agno = 3
        - agno = 1
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 7447, would move to lost+found
disconnected inode 8249, would move to lost+found
disconnected inode 42024699, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 7447 nlinks from 0 to 1
would have reset inode 8249 nlinks from 0 to 1
would have reset inode 42024699 nlinks from 0 to 1
No modify flag set, skipping filesystem flush and exiting.
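For reference, the "No modify flag set" lines indicate this was a dry run; I believe he ran the check-only form, roughly:

xfs_repair -n /dev/sdb1

so nothing has actually been written to the disk yet.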

Attached are his diagnostics. Assuming we end up needing to reformat the disk and start over, how would I recover the data first if the drive won't mount?

tower-diagnostics-20170918-1728.zip
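The only idea I have so far is to try mounting it read-only and copying everything to the array before touching the filesystem, something like this (assuming /dev/sdb1 is the cache device and there's room on /mnt/disk1; both are guesses on my part, and /tmp/cache is just a temporary mount point):

mkdir -p /tmp/cache
mount -o ro /dev/sdb1 /tmp/cache
rsync -av /tmp/cache/ /mnt/disk1/cache-backup/
umount /tmp/cache

Would that be the right approach, or is there a better way?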


Thanks for the reply, @johnnie.black.

 

I ran it again and got a different message. Should I run xfs_repair -L /dev/sdb1 now?

 

root@Tower:/home# xfs_repair -v /dev/sdb1
Phase 1 - find and verify superblock...
        - block cache size set to 758600 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 74016 tail block 70932
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
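Reading that message again, the intended order seems to be: mount the filesystem so the log gets replayed, unmount it, then re-run xfs_repair, and only use -L if the mount itself fails. So my plan would be something like this (mount point /tmp/cache is just an example):

mkdir -p /tmp/cache
mount /dev/sdb1 /tmp/cache
umount /tmp/cache
xfs_repair -v /dev/sdb1

and only if the mount fails:

xfs_repair -L /dev/sdb1

Does that sound right?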
 

