xfs_repair: Assertion failed


Solved by paululibro


One of my drives was showing as unmountable, so I started the array in maintenance mode and ran "xfs_repair -nv /dev/md3", but got this error:

 

Phase 1 - find and verify superblock...
        - block cache size set to 305128 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1124625 tail block 1124610
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
agf_freeblks 5759502, counted 5759493 in ag 0
agi_freecount 62, counted 61 in ag 0
agi_freecount 62, counted 61 in ag 0 finobt
sb_fdblocks 27013023, counted 40637666
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
1524704c7700: Badness in key lookup (length)
bp=(bno 0x2217a80, len 16384 bytes) key=(bno 0x2217a80, len 4096 bytes)
imap claims in-use inode 35748416 is free, correcting imap
data fork in ino 35748420 claims free block 4468551
imap claims in-use inode 35748420 is free, correcting imap
data fork in ino 35748426 claims free block 4468550
imap claims in-use inode 35748426 is free, correcting imap
data fork in ino 35748433 claims free block 4468549
imap claims in-use inode 35748433 is free, correcting imap
data fork in inode 35748440 claims metadata block 4468560
correcting nextents for inode 35748440
bad data fork in inode 35748440
would have cleared inode 35748440
        - agno = 1
data fork in ino 2147483777 claims free block 268435458
imap claims in-use inode 2147483777 is free, correcting imap
[...]
        - agno = 2
data fork in ino 4327071235 claims free block 540883903
imap claims in-use inode 4327071235 is free, correcting imap
[...]
        - agno = 3
data fork in ino 6453720963 claims free block 806715119
imap claims in-use inode 6453720963 is free, correcting imap
[...]
        - agno = 4
data fork in ino 8609098822 claims free block 1076137360
imap claims in-use inode 8609098822 is free, correcting imap
imap claims in-use inode 8609098829 is free, correcting imap
[...]
        - agno = 5
data fork in inode 10773946882 claims metadata block 1346743359
correcting nextents for inode 10773946882
bad data fork in inode 10773946882
would have cleared inode 10773946882
[...]
        - agno = 6
imap claims in-use inode 13657483776 is free, correcting imap
data fork in ino 13657483780 claims free block 1707185480
[...]
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
free space (0,4468567-4468567) only seen by one free space btree
free space (0,4468576-4468577) only seen by one free space btree
free space (0,4486063-4486063) only seen by one free space btree
        - check for inodes claiming duplicate blocks...
        - agno = 2
        - agno = 1
        - agno = 0
entry ".." at block 0 offset 80 in directory inode 2147483777 references free inode 13658521470
        - agno = 3
entry ".." at block 0 offset 80 in directory inode 2147483794 references free inode 13658521470
[...]
Metadata CRC error detected at 0x459c09, xfs_dir3_block block 0x2217a80/0x1000
bad directory block magic # 0x494e41ff in block 0 for directory inode 35748440
corrupt block 0 in directory inode 35748440
        would junk block
no . entry for directory 35748440
no .. entry for directory 35748440
problem with directory contents in inode 35748440
would have cleared inode 35748440
[...]
        - agno = 4
        - agno = 5
data fork in inode 10773946882 claims metadata block 1346743359
correcting nextents for inode 10773946882
bad data fork in inode 10773946882
would have cleared inode 10773946882
[...]
        - agno = 6
entry ".." at block 0 offset 80 in directory inode 13657483780 references free inode 13658521470
[...]
        - agno = 7
entry ".." at block 0 offset 80 in directory inode 13657483816 references free inode 13658521470
[...]
xfs_repair: rmap.c:696: mark_inode_rl: Assertion `!(!!((rmap->rm_owner) & (1ULL << 63)))' failed.
Aborted

 


 

The same thing happens after rebooting the server. I've attached diagnostics from before and after the reboot. What should I do next?

 

after_reboot.zip before_reboot.zip

1 hour ago, JorgeB said:

This is an xfs_repair problem. Update to Unraid v6.10.2 or v6.10.3-rc1 and run it again; hopefully it's fixed in the newer xfsprogs.

 

Still getting the same error after updating to 6.10.2. I've used xfs_repair on 6.9.2 a few times before and it worked fine.

 

EDIT

I've also found this thread with the same error message (there are only two threads with it, including mine), but I don't see anything there that would help me.

Edited by paululibro
10 minutes ago, JorgeB said:

xfs_repair should always complete the repair without crashing, with more or less data loss; when it crashes like that it means there's a problem with xfs_repair itself. I can't check right now; can you confirm the xfsprogs version included with v6.10 by typing:

xfs_repair -V

 

 

root@Orion2:~# xfs_repair -V
xfs_repair version 5.13.0

 

It only crashes on md3. I tried running it on the other drives and it completed and exited with no errors.

14 minutes ago, JorgeB said:

Yes, because there's a problem with the filesystem on that disk that it can't fix. Current xfsprogs is v5.14.2; you can check if there's an updated package for Slackware, like this:

 

https://forums.unraid.net/topic/80162-disk-3-after-upgrade/#comment-745063

 

 

Where did you get 5.14.2 from? slackware64-current only lists xfsprogs-5.13.0-x86_64-3.txz as current. I've also found that xfsprogs 5.18.0 appears to be the latest upstream version.
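
For completeness, the approach in that linked thread boils down to swapping the xfsprogs package in place with upgradepkg; a rough sketch, assuming a newer .txz actually existed somewhere (the filename below is hypothetical, since -current only carries 5.13.0):

# hypothetical filename -- slackware64-current has nothing newer than 5.13.0
upgradepkg --install-new xfsprogs-5.14.2-x86_64-1.txz
xfs_repair -V    # confirm which version is now on PATH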

  • Solution

I fixed it. Since the mailing list and other forums were of no help, I tried finding a solution myself. I figured that if I couldn't bring xfsprogs to my drive, I would bring the drive to xfsprogs.

 

I had an old OptiPlex lying around, so I installed the latest version of Arch Linux, which had xfsprogs 5.18 available. Since it only had one drive power cable, I had to power my drive from another PSU while it was connected to the OptiPlex over SATA.
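
For anyone repeating this, setting up the Arch box was roughly the following (a sketch, I'm reconstructing the exact commands from memory; the disk showed up as /dev/sdb on my machine, yours may differ):

pacman -Syu xfsprogs    # sync and install the latest xfsprogs from the Arch repos (5.18 at the time)
xfs_repair -V           # confirm the installed version
lsblk -f                # identify the attached array disk (here it appeared as /dev/sdb)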

 

The first run of xfs_repair -nv /dev/sdb1 ended with a segmentation fault.

 

I then created a metadata dump of the filesystem and restored it into an image with:

 

xfs_metadump /dev/sdb1 /home/p/metadump.meta
xfs_mdrestore /home/p/metadump.meta /home/p/metadump.img

 

and verified it with:

 

xfs_repair -nv /home/p/metadump.img

 

The test run on the image was successful, so I ran xfs_repair -vL /dev/sdb1 (the -L flag was required) and the program finished without any errors. I moved the drive back to my server and everything works fine again.
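
For anyone who ends up in the same situation, here is the whole procedure condensed into one block (device and paths are from my setup; only run the destructive -L repair once the dry run on the image looks sane):

xfs_metadump /dev/sdb1 /home/p/metadump.meta               # dump filesystem metadata only (no file data)
xfs_mdrestore /home/p/metadump.meta /home/p/metadump.img   # restore the dump into a sparse image file
xfs_repair -nv /home/p/metadump.img                        # dry run against the image, nothing is written
xfs_repair -vL /dev/sdb1                                   # real repair; -L zeroes the log, so unreplayed log changes are lost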

 

I also had a machine with Fedora on it, with xfsprogs 5.14.2, and it gave the same assertion failure as 5.13.

