OtherWorldsTV Posted May 13, 2023 (edited)

Good morning! Woke up to the array running just fine except for this error on Drive 1: "Unmountable: Wrong or no file system". No unclean shutdowns, no power failures, nothing of that sort.

I've started an xfs_repair -v on the drive in question; it's been running for about 9 minutes now. It gave this message before the screen filled with progress dots:

Quote:
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...

So what are my next steps here?

Update: xfs_repair has now been running for nearly an hour with nothing being reported. Should it take this long on a 4TB drive? Diagnostic logs are attached, thanks!

kenya-diagnostics-20230513-0900.zip
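For anyone following along: a non-destructive first step is a dry run, which reports problems without writing anything. A minimal sketch, assuming the array is in Maintenance mode and disk1 maps to Unraid's parity-protected device /dev/md1 (the device name is an assumption; verify it on your own system first):

    # Dry run: -n reports problems but makes no changes.
    # /dev/md1 is assumed to be disk1's parity-protected device;
    # repairing the raw /dev/sdX device instead would invalidate parity.
    xfs_repair -nv /dev/md1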
itimpi Posted May 13, 2023

Looks like you tried the repair from the command line rather than the GUI? If so, what was the exact command you used? Also, was the array stopped or running in Maintenance mode when you ran the command?
OtherWorldsTV (Author) Posted May 14, 2023

I used xfs_repair -v, with the array in Maintenance mode. It finally finished about an hour ago with this result:

Quote:
Sorry, could not find valid secondary superblock
Exiting now.

Would it be better to wipe and restore from backup at this point?
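A common cause of "could not find valid secondary superblock" is pointing xfs_repair at the wrong device, e.g. the whole disk rather than the partition or md device that actually holds the filesystem. A quick way to check, sketched here with example device names only:

    # Show which block devices carry a filesystem signature:
    lsblk -f
    # Typical layout: /dev/sdb shows no FSTYPE, /dev/sdb1 shows xfs,
    # and in Maintenance mode the array exposes /dev/md1 for disk1.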
OtherWorldsTV (Author) Posted May 14, 2023

Doing the repair from the GUI results in this:

Quote:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair. If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair. Note that destroying
the log may cause corruption -- please attempt a mount of the filesystem
before doing this.
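As that message says, the safest order is to attempt a mount first so the log can replay, and only fall back to -L if the mount fails. A sketch, with the device name and mount point both assumed:

    mkdir -p /mnt/test
    # A successful mount replays the XFS log; unmount cleanly afterwards.
    mount -t xfs /dev/md1 /mnt/test && umount /mnt/test
    # Only if the mount fails, zero the log and repair
    # (the most recent metadata changes may be lost):
    # xfs_repair -L /dev/md1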
itimpi Posted May 14, 2023

6 hours ago, OtherWorldsTV said:
    I used xfs_repair -v, with the array in Maintenance mode. It finally finished about an hour ago with this result: "Sorry, could not find valid secondary superblock. Exiting now." Would it be better to wipe and restore from backup at this point?

You did not mention the device name that you used with this command. It is not unusual to get that wrong.
itimpi Posted May 14, 2023

6 hours ago, OtherWorldsTV said:
    Doing the repair from the GUI results in this:

If you get that message, you need to run with the -L option (and without -n) so that a repair actually takes place. The fact that you get it from the GUI suggests you DID have an error in the device name when you tried it from the command line; otherwise the same message would have been displayed there. Using the GUI protects you from this.
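For reference, the GUI check targets the correct md device itself; the options box only controls the flags. Roughly the three modes, assuming disk1 is /dev/md1 (both the default flags and the device name are assumptions here):

    xfs_repair -n /dev/md1    # check only: report problems, change nothing (assumed GUI default)
    xfs_repair /dev/md1       # repair: fix what it can, replaying the log if possible
    xfs_repair -L /dev/md1    # last resort: zero the log first; may drop recent metadata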
OtherWorldsTV (Author) Posted May 14, 2023

Interesting. This morning it's saying all 4 drives are unmountable, so I'm thinking the problem is with the enclosure. I've ordered a new one; it should arrive tomorrow. Until then, here's the result of "xfs_repair -L" from the GUI on disk1:

Quote:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
Metadata corruption detected at 0x468588, xfs_agi block 0x15d508eca/0x200
bad uuid e50373e8-c13c-49a3-bcf7-92a1236cacf3 for agi 3
reset bad agi for ag 3
Metadata corruption detected at 0x43cea8, xfs_agfl block 0x15d508ecb/0x200
agi_count 0, counted 3584 in ag 3
agi_freecount 0, counted 21 in ag 3
agi_freecount 0, counted 21 in ag 3 finobt
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 2
        - agno = 0
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:55332) is ahead of log (1:2).
Format log to cycle 4.
done
JorgeB Posted May 15, 2023

22 hours ago, OtherWorldsTV said:
    Until then, here's the result of "xfs_repair -L" from the GUI on disk1:

Disk should mount now.
OtherWorldsTV Posted May 15, 2023 Author Share Posted May 15, 2023 Well, yes, it's mountable. Empty, though. New enclosure arrives today. Going to move all the drives to it, then start restoring from backup. Quote Link to comment