Unmountable - Unsupported or no file system


Solved by Matthew Kent

Hi all,

 

Hoping someone can help. I've been running my server for a while, and last Friday my largest drive went offline. I didn't know it had happened until I got a bunch of Discord alerts that my media library files were missing and a cleanup of all the files had started. When I logged in, Disk 1 (a newer 16TB drive) showed "Unmountable: unsupported or no file system".

 

I've since tried removing the drive and starting the array with it disconnected, so the drive would be emulated. Unfortunately it still shows unmountable: unsupported or no file system.

Does this mean parity somehow captured the drive in this state and is now emulating an unmountable drive? I'm hoping my data is still intact in parity. I'm planning to run xfs_repair on the drive now to see if I can get it back.

Any other suggestions would be greatly appreciated.
I'm including my diagnostics file.

nas-diagnostics-20240108-1133.zip


Also, this is the current output of xfs_repair on the drive:

 

attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
[the candidate/unable pair above repeats 14 times in total, followed by a long run of progress dots]

 

39 minutes ago, Matthew Kent said:

I ran it from the command line as I couldn't get it to run in the gui.
xfs_repair /dev/sde

That command is wrong: the device name is incorrect, and pointing xfs_repair at the wrong device will give exactly the symptoms you describe. The XFS filesystem lives on the partition, not the raw disk, so xfs_repair finds no superblock there and just hunts endlessly for secondary ones. On an array disk the repair should also go through the md device so parity stays in sync.
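For reference, the usual shape of the command on an Unraid array disk looks roughly like this. This is a sketch, not verified against your system: the device name /dev/md1 for Disk 1 is an assumption (newer Unraid releases name it /dev/md1p1), and the array should be started in maintenance mode first.

```shell
# Sketch: repair Disk 1 through the md device so parity is updated
# along with the filesystem (array started in maintenance mode).

# Dry run first: -n reports problems but changes nothing
xfs_repair -n /dev/md1

# If the dry-run output looks sane, run the real repair
xfs_repair /dev/md1
```

Running against /dev/md1 instead of /dev/sde is what keeps the emulated/parity view consistent with the physical disk.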

 

Why could you not get it to run from the GUI? That is always the most reliable way. You are likely to get better-informed feedback if you attach your system's diagnostics zip file to your next post in this thread.


Uploading now. 
The result of xfs_repair from the GUI is:
 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
agf_freeblks 104761526, counted 104761523 in ag 1
agf_longest 104502791, counted 104502788 in ag 1
sb_fdblocks 1282724453, counted 1282724450
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
bad nblocks 9198282 for inode 2196066468, would reset to 9198253
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 3
        - agno = 5
        - agno = 7
        - agno = 1
        - agno = 11
        - agno = 13
        - agno = 14
        - agno = 4
        - agno = 6
        - agno = 12
bad nblocks 9198282 for inode 2196066468, would reset to 9198253
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...



The result without the -n flag is:
 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
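For anyone following along, that error message spells out a fixed order of operations. A hedged sketch of the sequence (the /dev/md1 device and /mnt/tmp mount point are assumptions):

```shell
# 1. Try a mount first so XFS replays its own journal -- the safe path
mkdir -p /mnt/tmp
mount /dev/md1 /mnt/tmp && umount /mnt/tmp

# 2. If that mount succeeded, re-run the repair normally
xfs_repair /dev/md1

# 3. Only if the mount fails: zero the log and repair. This can drop
#    the metadata changes the journal was still holding.
# xfs_repair -L /dev/md1
```

The -L path is the last resort precisely because it discards the unreplayed journal, which is why the message asks you to attempt a mount first.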

 

nas-diagnostics-20240108-1347.zip

  • Solution

I'm back up. I ended up running the repair in the GUI with the -L option, then told the server to run with a New Config, since I didn't want to do a data rebuild from the unmountable parity image.

As far as I can tell there are no lost+found files. Not sure if I lost anything, but I'm back up, *whew*. Of course the wife had to ask about files she needed right when the server went down.
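For anyone checking the same thing later, a quick way to look for files xfs_repair orphaned (the /mnt/disk1 path for Disk 1 is an assumption):

```shell
# List anything xfs_repair moved into lost+found on Disk 1; the
# directory only exists if the repair orphaned files
ls -la /mnt/disk1/lost+found 2>/dev/null

# Count orphaned files (prints 0 if the directory does not exist)
find /mnt/disk1/lost+found -type f 2>/dev/null | wc -l
```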

1 minute ago, Matthew Kent said:

and then told the server to run with the new config since I didn't want to do a data rebuild from the unmountable parity image. 

Parity remained valid after running the repair, assuming the repair worked. Using New Config means any data written to the emulated drive after it went unmountable is now lost.
