evilmobster Posted May 17, 2020

I followed the instructions listed under Checking a File System in the Unraid 6 documentation, and it said to post the results if I did not understand them. I would greatly appreciate it if someone could take a look at my results and tell me whether the information on this drive can be recovered.

Background: I have a parity drive, but the failed drive is not being emulated. I installed the drive in question a while ago, and I'm unsure whether any data was ever actually on it. Is it possible that no data was on this drive and that is why it is not being emulated by the parity drive?

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used.  Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
agf_freeblks 121652, counted 121670 in ag 1
agi_freecount 8, counted 9 in ag 1
agi_freecount 8, counted 9 in ag 1 finobt
agi unlinked bucket 9 is 454730313 in ag 0 (inode=454730313)
sb_ifree 4308, counted 4616
sb_fdblocks 1346139874, counted 1348594004
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 3
        - agno = 1
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
bad hash table for directory inode 104139180 (no leaf entry): would rebuild
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 454730313, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 1601087717 nlinks from 1 to 2
would have reset inode 454695794 nlinks from 1 to 2
would have reset inode 454730313 nlinks from 0 to 1
would have reset inode 459241237 nlinks from 1 to 2
would have reset inode 480639760 nlinks from 1 to 2
would have reset inode 861775891 nlinks from 1 to 2
No modify flag set, skipping filesystem flush and exiting.
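For context, the read-only check described on that wiki page amounts to running xfs_repair in no-modify mode against the array device from a console, with the array started in maintenance mode. A minimal sketch, assuming the drive is disk1 so the device would be /dev/md1 (substitute your actual disk number):

xfs_repair -n /dev/md1

The -n flag only reports problems and changes nothing, which is why the output above says "No modify flag set".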
JorgeB Posted May 17, 2020

1 minute ago, evilmobster said:
whether the information on this drive can be recovered.

It should be recoverable. Run it again without -n or nothing will actually be fixed, and if it asks for it, use -L.
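For reference, a minimal sketch of that sequence from a console with the array in maintenance mode, again assuming disk1 (/dev/md1 is an example, not necessarily your device):

xfs_repair /dev/md1
xfs_repair -L /dev/md1   # only if the first run refuses and says the log must be zeroed

The -L step destroys the journal, so it is the last resort that xfs_repair itself asks for.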
evilmobster Posted May 17, 2020

Thank you so much for your prompt reply. I removed -n and ran it again. It said to attempt the mount before using -L, so I tried that and the mount failed. I then ran it with -L and the following output was generated. The disk is still listed as unmountable.

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
agf_freeblks 121652, counted 121670 in ag 1
agi_freecount 8, counted 9 in ag 1
agi_freecount 8, counted 9 in ag 1 finobt
agi unlinked bucket 9 is 454730313 in ag 0 (inode=454730313)
sb_ifree 4308, counted 4616
sb_fdblocks 1346139874, counted 1348594004
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
bad hash table for directory inode 104139180 (no leaf entry): rebuilding
rebuilding directory inode 104139180
xfs_repair: phase6.c:1314: longform_dir2_rebuild: Assertion `done' failed.
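As a rough sketch of that intermediate mount attempt (the mount point and device name here are assumptions for illustration; in the GUI the equivalent is simply starting the array and seeing whether the disk mounts):

mkdir -p /mnt/test
mount /dev/md1 /mnt/test

If that mount succeeds, the log gets replayed and -L is never needed; here it failed, which is what pushed the repair on to -L.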
JorgeB Posted May 17, 2020

That looks like an old xfs_repair bug. If you are not running the latest Unraid (v6.8.3), upgrade and try again.
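If it helps anyone reading later, the shipped xfs_repair build and the Unraid release can be checked from a console before retrying. The version-file path below is my recollection of where Unraid keeps it, so treat it as an assumption:

xfs_repair -V
cat /etc/unraid-version

Newer Unraid releases bundle a newer xfsprogs, which is where a fix for that assertion failure would come from.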
trurl Posted May 17, 2020

If the disk is disabled, then it is being emulated, but the emulated disk is unmountable. If it is not disabled, then it is not being emulated. Disabled/emulated is a separate condition from unmountable, and you can have either or both independently.

6 minutes ago, JorgeB said:
That looks like an old xfs_repair bug. If you are not running the latest Unraid (v6.8.3), upgrade and try again.

If you had posted your diagnostics, we wouldn't have to guess whether the disk is disabled/emulated and whether your Unraid version has the xfs_repair bug. Go to Tools > Diagnostics and attach the complete diagnostics ZIP file to your NEXT post.
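As a side note, the same archive can also be generated from a console session with the diagnostics command, which, if I remember right, writes the ZIP to the logs folder on the flash drive (/boot/logs):

diagnostics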