
[SOLVED] Help: Unclean Shutdown resulted in three drives with "Unmountable: No file system"



Hello,

 

I'm not sure exactly what happened to my array. The last thing I remember doing was trying to create a VM in Unraid, which ended up causing loading issues with the webUI. I quickly initiated a reboot from the webUI, but when the server came back up after a few minutes, it turned out that reboot had resulted in an ungraceful shutdown, and a parity check started.

 

Unfortunately, something also happened to three of my drives. They now show "Unmountable: No file system" and I have no idea why.

 

array_ss.jpg

 

Attached is my diagnostics file. I have paused the parity check for now as a precaution.

 

Can someone help me understand what happened?

 

Also, is there a way to recover the data, or did I just lose three drives' worth of stuff?

 

 

hakkafarm-diagnostics-20210221-2310.zip

9 hours ago, itimpi said:

Unfortunately the diagnostics only show what happened after the reboot and not what led up to the problem.

 

You may find this section of the online documentation (available via the ‘Manual’ link at the bottom of the unRaid GUI) that covers ‘unmountable’ disks to be of use.

 

Thank you.  I tried xfs_repair with the -L option and it seems to have fixed two of the drives.
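For reference, a minimal sketch of how this kind of repair is normally run on Unraid, assuming the array is started in maintenance mode and the repair is pointed at the md device so parity stays in sync (the disk number here is illustrative):

xfs_repair -L /dev/md3    # -L zeroes the XFS journal before repairing, discarding anything still sitting in the log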

 

Unfortunately, the 3rd drive is still unmountable even though xfs_repair seems to suggest it was able to fix it.

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 4
        - agno = 9
        - agno = 10
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 3
        - agno = 5
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:469380) is ahead of log (1:2).
Format log to cycle 4.
done

 

 


I tried it again with xfs_repair -vL.  Still unmountable.

 

Phase 1 - find and verify superblock...
        - block cache size set to 1353528 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 0 tail block 0
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 3
        - agno = 7
        - agno = 1
        - agno = 0
        - agno = 10
        - agno = 4
        - agno = 8
        - agno = 9
        - agno = 5
        - agno = 6
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:469390) is ahead of log (1:2).
Format log to cycle 4.

        XFS_REPAIR Summary    Tue Feb 23 15:19:52 2021

Phase           Start           End             Duration
Phase 1:        02/23 15:19:06  02/23 15:19:06
Phase 2:        02/23 15:19:06  02/23 15:19:21  15 seconds
Phase 3:        02/23 15:19:21  02/23 15:19:21
Phase 4:        02/23 15:19:21  02/23 15:19:21
Phase 5:        02/23 15:19:21  02/23 15:19:21
Phase 6:        02/23 15:19:21  02/23 15:19:21
Phase 7:        02/23 15:19:21  02/23 15:19:21

Total run time: 15 seconds
done

 

 

array_ss.jpg

hakkafarm-diagnostics-20210223-1521.zip

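The blkid output below lists the filesystem signatures on every device. When a filesystem repairs cleanly but still refuses to mount, one thing worth checking here is whether two partitions report the same UUID, since the kernel will not mount a second XFS filesystem whose UUID is already in use:

blkid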

/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" TYPE="vfat"
/dev/nvme0n1p1: UUID="a9d8ae2d-a705-4c62-b684-9edead9d2c80" TYPE="xfs"
/dev/sdb1: UUID="e7158ed7-d794-417f-9081-b7cc063f9d7b" TYPE="xfs"
/dev/sdc1: UUID="13e657cc-8abd-4455-9e0b-a08f23eac6ec" TYPE="xfs" PARTUUID="cd5c588a-dd48-407a-9c22-17e53382f798"
/dev/sdd1: UUID="09a71bc8-fa02-4fd7-894f-6cb548ad4f24" TYPE="xfs" PARTUUID="49686004-56e4-42ca-8d38-a6ac9847b674"
/dev/sde1: UUID="d6885a6b-2c88-4845-a2e6-f76562a4e7de" TYPE="xfs" PARTUUID="b7fd1caf-18fc-41c4-9ac5-1f605ced58e3"
/dev/sdg1: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs" PARTUUID="708d4e36-c9dd-49a4-bb43-d8887c79f098"
/dev/sdh1: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs" PARTUUID="9ece8283-9418-475c-8547-0050e2a4cfee"
/dev/sdi1: UUID="9c25a8d3-9512-4c09-8dd6-c62e3c2a4cb5" TYPE="xfs" PARTUUID="ddcff4b1-10d5-4fe8-8a79-6c0324cf07c6"
/dev/md1: UUID="9c25a8d3-9512-4c09-8dd6-c62e3c2a4cb5" TYPE="xfs"
/dev/md2: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs"
/dev/md3: UUID="7ef05469-45c6-4921-be50-a5c727615728" TYPE="xfs"
/dev/md4: UUID="13e657cc-8abd-4455-9e0b-a08f23eac6ec" TYPE="xfs"
/dev/md5: UUID="09a71bc8-fa02-4fd7-894f-6cb548ad4f24" TYPE="xfs"
/dev/md6: UUID="d6885a6b-2c88-4845-a2e6-f76562a4e7de" TYPE="xfs"
/dev/md7: UUID="e7158ed7-d794-417f-9081-b7cc063f9d7b" TYPE="xfs"
/dev/sdf1: PARTUUID="b94d139a-f7d8-4f31-8855-dca758b85502"
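Note that /dev/sdg1 and /dev/sdh1 (and therefore /dev/md2 and /dev/md3) report the identical UUID 7ef05469-45c6-4921-be50-a5c727615728, so whichever of that pair is mounted second gets rejected. A minimal sketch of the usual fix, assuming md3 is the disk still refusing to mount (the device name is illustrative), is to write a fresh UUID to it and then start the array again:

xfs_admin -U generate /dev/md3    # assign a new random UUID to the duplicate filesystem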

 

Thank you.  

I changed the UUID and the disk mounts now. Unfortunately, it shows 83 GB used, but it should have had around the same usage as the other 12 TB drives. Browsing the disk itself, there's nothing on it.

 

Am I out of options?
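For what it's worth, the earlier repair output included a "moving disconnected inodes to lost+found" step, so anything xfs_repair recovered may be sitting in a lost+found directory at the root of that disk rather than under the original share folders. A quick way to check, with the disk number again illustrative:

ls -la /mnt/disk3/lost+found
du -sh /mnt/disk3/lost+found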

 

 

ss2.JPG

 

ss1.JPG

