Medic1
Posted December 29, 2022

I was in the process of using unBALANCE to move data from one drive to another so I could remove a smaller drive from the array. While planning the move, it appeared that none of my files were available through Explorer or the Unraid GUI: the folder structure was there, but the folders were all empty. I decided to reboot, and the array then showed one of the drives (disk2) as "Unmountable: wrong or no file system".

I have checked the filesystem as described here: https://wiki.unraid.net/Manual/Storage_Management#Drive_shows_as_unmountable

These are the results:

Phase 1 - find and verify superblock...
        - block cache size set to 1438032 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 103768 tail block 103704
ALERT: The filesystem has valuable metadata changes in a log which is
being ignored because the -n option was used.  Expect spurious
inconsistencies which may be resolved by first mounting the filesystem
to replay the log.
        - scan filesystem freespace and inode maps...
sb_icount 64, counted 46400
sb_ifree 57, counted 293
sb_fdblocks 2908768556, counted 2740391589
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 5
        - agno = 1
        - agno = 7
        - agno = 6
        - agno = 8
        - agno = 4
        - agno = 9
        - agno = 10
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Wed Dec 28 19:18:46 2022

Phase           Start           End             Duration
Phase 1:        12/28 19:18:37  12/28 19:18:37
Phase 2:        12/28 19:18:37  12/28 19:18:37
Phase 3:        12/28 19:18:37  12/28 19:18:42  5 seconds
Phase 4:        12/28 19:18:42  12/28 19:18:42
Phase 5:        Skipped
Phase 6:        12/28 19:18:42  12/28 19:18:46  4 seconds
Phase 7:        12/28 19:18:46  12/28 19:18:46

Total run time: 9 seconds

I have also attached the tower diagnostics file. Can someone help explain how to proceed?

Attachment: tower-diagnostics-20221228-1932.zip
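The ALERT in the check output points at an alternative to forcing the repair: mount the filesystem once so the kernel replays the journal, then re-check. A dry-run sketch of that route, which only prints the commands so they can be reviewed first; the /dev/md2 device and /mnt/disk2 mount point are assumptions for disk2 on Unraid, where simply starting the array normally performs the mount:

```shell
#!/bin/sh
# Dry-run: print the log-replay route instead of executing it.
# /dev/md2 and /mnt/disk2 are assumed mappings for disk2 on Unraid.
run() { echo "$@"; }

run mount /dev/md2 /mnt/disk2   # mounting replays the XFS journal
run umount /mnt/disk2           # unmount again before re-checking
run xfs_repair -n /dev/md2      # re-check with a clean log
```

If the mount itself fails, the journal cannot be replayed and the -L route described later in the thread is the remaining option.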
trurl (Solution)
Posted December 29, 2022

20 minutes ago, Medic1 said:
> No modify flag set, skipping filesystem flush and exiting.

Do it again without -n. If it asks for it, add -L. Post the results.

Also, why does it think your cache pool disk assignments are wrong?
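The sequence trurl describes can be sketched as a dry-run script. Everything here is an assumption to be adapted: on Unraid, array disk N appears as /dev/mdN once the array is started in Maintenance mode, and the echo wrapper means the script only prints the commands; delete it to actually run the repair.

```shell
#!/bin/sh
# Dry-run sketch of the suggested repair sequence for disk2.
# Assumption: disk2 maps to /dev/md2 with the array in Maintenance mode.
DEV=/dev/md2

run() {
    # Print each command instead of executing it, so the sequence can
    # be reviewed first; remove the echo to run the repair for real.
    echo "$@"
}

run xfs_repair -n "$DEV"   # 1. read-only check: reports problems, changes nothing
run xfs_repair "$DEV"      # 2. actual repair; refuses to run if the log is dirty
run xfs_repair -L "$DEV"   # 3. only if step 2 asks for it: zero the log
                           #    (-L discards unreplayed metadata changes)
```

Step 3 is deliberately last: zeroing the log throws away any metadata updates that were never replayed, which is why xfs_repair makes you ask for it explicitly.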
trurl
Posted December 29, 2022

Also, why do you have docker.img on the array?
Medic1 (Author)
Posted December 29, 2022

8 minutes ago, trurl said:
> Do it again without -n. If it asks for it, add -L. Post the results.
> Why does it think your cache pool disk assignments are wrong?

The cache drives are disabled with contents emulated. Not sure why. The cache drives are NVMe and, as far as I know, haven't had an issue before.

Repeating the repair with -L gave this:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is
being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
sb_icount 64, counted 46400
sb_ifree 57, counted 293
sb_fdblocks 2908768556, counted 2740391589
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 4
        - agno = 0
        - agno = 3
        - agno = 9
        - agno = 2
        - agno = 10
        - agno = 8
        - agno = 6
        - agno = 7
        - agno = 5
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:103640) is ahead of log (1:8).
Format log to cycle 4.
done
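Since the repair output includes "moving disconnected inodes to lost+found", it is worth checking whether anything was actually orphaned once the disk mounts. A minimal sketch; the /mnt/disk2 mount point is an assumption based on the disk in this thread:

```shell
#!/bin/sh
# Check whether xfs_repair orphaned any files into lost+found.
# /mnt/disk2 is assumed; substitute the repaired disk's mount point.
check_lost_found() {
    dir="$1/lost+found"
    if [ -d "$dir" ]; then
        echo "orphaned files in $dir:"
        ls "$dir"
    else
        echo "no lost+found in $1 -- nothing was orphaned"
    fi
}

check_lost_found /mnt/disk2
```

Files in lost+found are named by inode number, so identifying them usually means inspecting their contents by hand.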
Medic1 (Author)
Posted December 29, 2022

18 minutes ago, trurl said:
> Also, why do you have docker.img on the array?

Not sure. It should be on the cache drive, no?
Medic1 (Author)
Posted December 29, 2022

OK, so the array is running now and disk2 is back. The two NVMe cache drives are still being emulated for some reason. Thoughts?
trurl
Posted December 29, 2022

7 minutes ago, Medic1 said:
> The cache drives are disabled with contents emulated

Pool drives can't be emulated, since there is no parity. Not clear what that is about, since the pool does seem to be mounted. Post a screenshot of Main - Pool Devices.

2 minutes ago, Medic1 said:
> Ok so running the array now and the disk2 is back.

Post new diagnostics.
Medic1 (Author)
Posted December 29, 2022

6 minutes ago, trurl said:
> Post a screenshot of Main - Pool Devices.
> Post new diagnostics.

New diagnostics are attached.

Attachment: tower-diagnostics-20221228-2037.zip
trurl
Posted December 29, 2022

Disk2 is mounted now, and it doesn't look like there is any lost+found from the repair, but there's not much data on it. Is that expected?
Medic1 (Author)
Posted December 29, 2022

Yes. I recently formatted it to XFS and was in the process of moving data from smaller drives onto it. The drive itself is about two years old, I think, and had been working in the array with no issues until now.
JorgeB
Posted December 29, 2022

To correct the pool problem: stop the array, unassign both cache devices, start the array, stop the array, re-assign both cache devices, then start the array.
Medic1 (Author)
Posted December 29, 2022

6 hours ago, JorgeB said:
> To correct the pool problem: stop the array, unassign both cache devices, start the array, stop the array, re-assign both cache devices, then start the array.

Thank you. Everything looks to be working correctly now.