(SOLVED) "Unmountable: Unsupported or no file system" after unclean shutdown



After some trouble with my Unraid system, it was shut down uncleanly. Only several days later did I realise that it had not rebooted correctly. Upon further inspection, one drive is listed as "Unmountable: Unsupported or no file system". Most data (all important data) is backed up, but I'd prefer not to have to restore from backup.

 

What steps should I take to get the system back up and running (while minimising data loss risk)?

Also, is there a way to check the integrity of the parity drive before taking any other steps? Given that one of the data drives was corrupted, I can imagine the parity drive also has problems.

server-diagnostics-20240117-1957.zip


Handling of unmountable drives is covered here in the online documentation, accessible via the Manual link at the bottom of the Unraid GUI. In addition, every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS -> Manual section covers most aspects of the current Unraid release.


Thanks for both your quick responses!

 

On 1/21/2024 at 1:44 PM, itimpi said:

Handling of unmountable drives is covered here

I've seen the recommendation to run xfs_repair in multiple places, but want to check here that it's indeed the best thing to do.
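For reference, the read-only check I mean takes this form (a sketch assuming the affected drive is in slot 1 and the array is started in Maintenance mode; as far as I understand, the md device number matches the disk slot, and recent Unraid releases use the partition device /dev/md1p1 instead):

xfs_repair -n /dev/md1    # -n = no modify: report problems without changing anything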

 

I have started the array and uploaded new diagnostics.

I also noticed that a data rebuild started automatically. It doesn't seem wise to rebuild anything on a broken/vulnerable system, so I paused it as soon as I noticed it.

server-diagnostics-20240122-2003.zip


This is the output of the xfs_repair dry run:

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 1749116383, counted 1776357327
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
bad CRC for inode 2240120426
inode identifier 7523210798274510847 mismatch on inode 2240120426
bad CRC for inode 2240120426, would rewrite
inode identifier 7523210798274510847 mismatch on inode 2240120426
would have cleared inode 2240120426
        - agno = 2
bad CRC for inode 4796252487
inode identifier 4047265315758342143 mismatch on inode 4796252487
bad CRC for inode 4796252487, would rewrite
inode identifier 4047265315758342143 mismatch on inode 4796252487
would have cleared inode 4796252487
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
inode 15333797490 - bad extent starting block number 4503567550766763, offset 0
correcting nextents for inode 15333797490
bad data fork in inode 15333797490
would have cleared inode 15333797490
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 6
        - agno = 11
        - agno = 5
        - agno = 7
        - agno = 8
        - agno = 3
        - agno = 9
        - agno = 10
        - agno = 4
        - agno = 1
entry "8bd" at block 10 offset 2280 in directory inode 2147555826 references free inode 2240120426
	would clear inode number in entry at offset 2280...
entry "2c30" at block 45 offset 1680 in directory inode 4295010084 references free inode 4796252487
	would clear inode number in entry at offset 1680...
bad CRC for inode 2240120426, would rewrite
inode identifier 7523210798274510847 mismatch on inode 2240120426
would have cleared inode 2240120426
bad CRC for inode 4796252487, would rewrite
inode identifier 4047265315758342143 mismatch on inode 4796252487
would have cleared inode 4796252487
entry "DSC_0318.JPG" at block 0 offset 384 in directory inode 15333624108 references free inode 15333797490
	would clear inode number in entry at offset 384...
inode 15333797490 - bad extent starting block number 4503567550766763, offset 0
correcting nextents for inode 15333797490
bad data fork in inode 15333797490
would have cleared inode 15333797490
        - agno = 12
        - agno = 13
        - agno = 14
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "8bd" in directory inode 2147555826 points to free inode 2240120426, would junk entry
would rebuild directory inode 2147555826
entry "2c30" in directory inode 4295010084 points to free inode 4796252487, would junk entry
would rebuild directory inode 4295010084
entry "DSC_0318.JPG" in directory inode 15333624108 points to free inode 15333797490, would junk entry
bad hash table for directory inode 15333624108 (no data entry): would rebuild
would rebuild directory inode 15333624108
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

 

  • Solution

That output suggests there is some level of corruption.

 

You ran with the -n option (the "no modify" flag), so nothing was fixed. To actually fix it you need to run again without -n (and, if it asks for it, add -L).
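For example, something like this (assuming disk 1, with the array started in Maintenance mode; on recent Unraid releases the device may be /dev/md1p1):

xfs_repair /dev/md1      # actually performs the repair
xfs_repair -L /dev/md1   # only if it refuses to run because of the dirty log; -L zeroes the log and may lose the most recent metadata changes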

 

After that you can restart the array in normal mode to see if the drive now mounts. I expect it will, so you then need to look for a lost+found folder, which is where the repair process puts any folders/files for which it could not find the directory entry giving the correct name, giving them cryptic names instead. Not having this folder is a good sign that the repair went perfectly. However, if you DO have that folder, you have to sort out its contents manually or restore any missing files from backups (which is normally easiest if your backups are good enough).
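A quick way to check for that folder once the array is back in normal mode (again assuming disk 1):

ls /mnt/disk1/lost+found    # "No such file or directory" here is the good outcome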


I have run xfs_repair and started the array in normal mode.

 

The good news is that it mounted and I don't see a lost+found in the top-level directory. However, the disk is still listed as emulated. Is this fixed by the data-rebuild process?

 

Also, I plan to write a script to checksum all files on Unraid and on my backup disk, to see if/where any conflicts might be. Should I do this before or after fixing the "emulated" status?
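Something along these lines is what I have in mind (a rough sketch; the share and backup paths below are placeholders for my actual mounts):

#!/bin/bash
# Checksum two directory trees and compare the results.
SRC=/mnt/user/data     # placeholder: array share
BAK=/mnt/backup/data   # placeholder: backup disk mount

# Hash every file, sorted by path so the two lists line up.
( cd "$SRC" && find . -type f -print0 | sort -z | xargs -0 md5sum ) > /tmp/src.md5
( cd "$BAK" && find . -type f -print0 | sort -z | xargs -0 md5sum ) > /tmp/bak.md5

# Any line present on only one side is a missing or modified file.
diff /tmp/src.md5 /tmp/bak.md5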

9 minutes ago, Superlimonade said:

Is this fixed by the data-rebuild process?

I assume the disk is also showing with a red 'x' indicating it was disabled, in which case the answer is yes. The rebuild will put onto the physical disk exactly what is showing as being on the emulated disk. On successful completion the disabled status will be cleared and the disk will no longer be emulated.

 

10 minutes ago, Superlimonade said:

Also, I plan to write a script to checksum all files on Unraid and my backup disk, to see if/where any conflicts might be. Should I do this before or after fixing the "emulated"?

 

Really up to you, as the emulation process makes it look like the disk is present at all times. The only thing to bear in mind is that you probably do not want to do this while actually running the disk rebuild, as the rebuild and checksum processes could dramatically slow each other down.


I let the data rebuild complete first, and then compared the files to my backup. Most differences were in things like logs and databases of Docker containers (not a problem, as those had simply been updated by the containers), but I also found one specific file missing (DSC_0318.JPG, as noted in the xfs_repair output).

 

I replaced this file, and everything now seems to be in order. Thanks for your help!

