SloppyJoe Posted February 22, 2021
Hello, I'm not sure what exactly happened to my array. The last thing I remember doing was trying to create a VM in Unraid, which ended up causing loading issues with the webUI. I quickly initiated a reboot from the webUI, but when the server came back up a few minutes later it turned out the reboot had been an unclean shutdown, and a parity check started automatically. Unfortunately, something also happened to three of my drives: they now show "Unmountable: No file system" and I have no idea why. My diagnostics file is attached, and I have paused the parity check for now as a precaution. Can someone help me understand what happened? Also, is there a way to recover the data, or did I just lose three drives' worth of stuff?
hakkafarm-diagnostics-20210221-2310.zip
itimpi Posted February 22, 2021
Unfortunately the diagnostics only show what happened after the reboot, not what led up to the problem. You may find the section of the online documentation (available via the 'Manual' link at the bottom of the Unraid GUI) that covers 'unmountable' disks to be of use.
SloppyJoe Posted February 22, 2021
Thank you. I tried xfs_repair with the -L option and it seems to have fixed two of the drives. Unfortunately, the third drive is still unmountable even though xfs_repair seems to suggest it was able to fix it:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 4
        - agno = 9
        - agno = 10
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 3
        - agno = 5
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:469380) is ahead of log (1:2).
Format log to cycle 4.
done
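For anyone following along: on Unraid this kind of repair is normally run against the parity-protected md device rather than the raw sdX device, so that parity stays in sync with the changes. A minimal sketch of the commands, assuming the array is started in maintenance mode and the still-unmountable disk is disk 3 (so it maps to /dev/md3; the number follows the disk slot):

xfs_repair -n /dev/md3    # dry run: reports problems without writing anything
xfs_repair -vL /dev/md3   # -v verbose; -L zeroes a log that cannot be replayed

Note that -L discards any unreplayed log transactions, which is why it is only used when a normal repair refuses to proceed.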
itimpi Posted February 22, 2021
No idea why that last drive is still unmountable. As you say, xfs_repair does not seem to be reporting any problem. With any luck someone else may have some ideas.
itimpi Posted February 22, 2021
Might be worth trying the repair again just to make sure, and if the disk is still unmountable when you restart the array in normal mode, post new diagnostics so we can see if they suggest a cause.
SloppyJoe Posted February 23, 2021
I tried it again with xfs_repair -vL. Still unmountable.

Phase 1 - find and verify superblock...
        - block cache size set to 1353528 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 0 tail block 0
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 3
        - agno = 7
        - agno = 1
        - agno = 0
        - agno = 10
        - agno = 4
        - agno = 8
        - agno = 9
        - agno = 5
        - agno = 6
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:469390) is ahead of log (1:2).
Format log to cycle 4.

XFS_REPAIR Summary    Tue Feb 23 15:19:52 2021

Phase           Start           End             Duration
Phase 1:        02/23 15:19:06  02/23 15:19:06
Phase 2:        02/23 15:19:06  02/23 15:19:21  15 seconds
Phase 3:        02/23 15:19:21  02/23 15:19:21
Phase 4:        02/23 15:19:21  02/23 15:19:21
Phase 5:        02/23 15:19:21  02/23 15:19:21
Phase 6:        02/23 15:19:21  02/23 15:19:21
Phase 7:        02/23 15:19:21  02/23 15:19:21

Total run time: 15 seconds
done

hakkafarm-diagnostics-20210223-1521.zip
JorgeB Posted February 24, 2021
Feb 23 15:16:40 HakkaFarm kernel: XFS (md3): Filesystem has duplicate UUID 7ef05469-45c6-4921-be50-a5c727615728 - can't mount
This is why it's not mounting. Please post the output of blkid.
SloppyJoe Posted February 24, 2021
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" TYPE="vfat"
/dev/nvme0n1p1: UUID="a9d8ae2d-a705-4c62-b684-9edead9d2c80" TYPE="xfs"
/dev/sdb1: UUID="e7158ed7-d794-417f-9081-b7cc063f9d7b" TYPE="xfs"
/dev/sdc1: UUID="13e657cc-8abd-4455-9e0b-a08f23eac6ec" TYPE="xfs" PARTUUID="cd5c588a-dd48-407a-9c22-17e53382f798"
/dev/sdd1: UUID="09a71bc8-fa02-4fd7-894f-6cb548ad4f24" TYPE="xfs" PARTUUID="49686004-56e4-42ca-8d38-a6ac9847b674"
/dev/sde1: UUID="d6885a6b-2c88-4845-a2e6-f76562a4e7de" TYPE="xfs" PARTUUID="b7fd1caf-18fc-41c4-9ac5-1f605ced58e3"
/dev/sdg1: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs" PARTUUID="708d4e36-c9dd-49a4-bb43-d8887c79f098"
/dev/sdh1: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs" PARTUUID="9ece8283-9418-475c-8547-0050e2a4cfee"
/dev/sdi1: UUID="9c25a8d3-9512-4c09-8dd6-c62e3c2a4cb5" TYPE="xfs" PARTUUID="ddcff4b1-10d5-4fe8-8a79-6c0324cf07c6"
/dev/md1: UUID="9c25a8d3-9512-4c09-8dd6-c62e3c2a4cb5" TYPE="xfs"
/dev/md2: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs"
/dev/md3: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs"
/dev/md4: UUID="13e657cc-8abd-4455-9e0b-a08f23eac6ec" TYPE="xfs"
/dev/md5: UUID="09a71bc8-fa02-4fd7-894f-6cb548ad4f24" TYPE="xfs"
/dev/md6: UUID="d6885a6b-2c88-4845-a2e6-f76562a4e7de" TYPE="xfs"
/dev/md7: UUID="e7158ed7-d794-417f-9081-b7cc063f9d7b" TYPE="xfs"
/dev/sdf1: PARTUUID="b94d139a-f7d8-4f31-8855-dca758b85502"

Thank you. I changed the UUID and the disk mounts now. Unfortunately, it shows only 83GB used, when it should have roughly the same usage as the other 12TB drives, and looking at the disk itself there's nothing on it. Am I out of options?
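For reference, the usual way to clear a duplicate XFS UUID is to stamp a freshly generated one onto the filesystem with xfs_admin. This is a sketch only, assuming the affected disk is disk 3 and the array is in maintenance mode; the poster did not show the exact command used:

xfs_admin -U generate /dev/md3    # write a new random UUID to the filesystem on disk 3
blkid | grep md                   # confirm the md devices now all have unique UUIDs

Running it against the md device rather than the raw sdX partition keeps parity consistent with the change.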
JorgeB Posted February 24, 2021
Disk3 having the same UUID as disk2 can't be a coincidence, so something happened there. The only possible recovery option would probably be a file recovery utility.
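As one example of that route (an illustration only, not a tool confirmed in this thread): photorec, from the open-source testdisk package, carves recoverable files out of a raw partition by file signature. Carved files lose their original names and directory structure, and anything recovered should be written to a different disk:

photorec /dev/sdX1    # interactive; replace sdX1 with the partition of the affected disk

Signature-based carving only helps if the data blocks themselves were not overwritten, so expectations should stay modest given that the filesystem now reports almost nothing in use.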
SloppyJoe Posted February 24, 2021
Thank you, I will try that. This thread can probably be marked as solved, since the initial request was about the drives being unmountable and that is now fixed.