unmountable: No file system



I started the array in maintenance mode with all disks assigned, but when I click the unmountable disk, there is no Check Filesystem Status section in its settings.  I tried

xfs_admin -U generate /dev/sdi1

but it returns:

ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_admin.  If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 

XFS_repair /dev/sdi1 

Returns: XFS_repair: command not found
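
(For reference: Linux command names are case sensitive, so the repair tool is lowercase xfs_repair. A minimal sketch of the usual invocation, on the assumption that disk1 maps to /dev/md1 on this Unraid version; running against the md device rather than the raw /dev/sdi1 partition keeps parity in step, as noted later in the thread:)

xfs_repair -n /dev/md1    # dry run: report problems without writing anything
xfs_repair /dev/md1       # actual repair, with the array started in maintenance mode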

Link to comment

Ha, guess the wiki needs updating :)

Anyhow, though the command executed this time, it returned:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 

Link to comment
6 minutes ago, curtis-r said:

Ha, guess the wiki needs updating :)

Anyhow, though the command executed this time, it returned:


Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 


That is quite normal - rerun with the -L option.

Link to comment
4 minutes ago, curtis-r said:

Ha, guess the wiki needs updating

Can you post a link to what you think needs updating?

5 minutes ago, curtis-r said:

use the -L option

Since Unraid has already determined the disk is unmountable, there is no way to mount it to replay the log. You have to use -L to continue without that.
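
(A sketch of that repair with the log discarded, again assuming disk1 maps to /dev/md1; -L zeroes the metadata log, so any unreplayed transactions are lost, which is typically where lost+found entries come from:)

xfs_repair -L /dev/md1    # destroy the log and repair; orphaned files are placed in lost+found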

 

Also, there was no point in doing this:

53 minutes ago, curtis-r said:

I tried 


xfs_admin -U generate /dev/sdi1

That was for a completely different situation and purpose.
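
(For context, and as an assumption about the "different situation" meant here: xfs_admin -U is normally used to give a filesystem a fresh UUID, for example after a disk has been cloned and the copy is refused as a duplicate. A sketch with a placeholder device name:)

xfs_admin -U generate /dev/sdX1    # write a new random UUID to the XFS superblock
blkid /dev/sdX1                    # confirm the UUID changed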

 

 

Link to comment
16 hours ago, trurl said:

New Config instead of rebuild is not the recommended approach to fix disabled disks.

  

4 minutes ago, trurl said:

no point in doing this:

58 minutes ago, curtis-r said:

I tried 



xfs_admin -U generate /dev/sdi1

That was for a completely different situation and purpose.

During this thread you have learned some things that can be done, but not why or when you should do them. Please ask for advice.

Link to comment

xfs_repair /dev/sdi1 -L ran correctly.  Restarted the array in normal mode.  I could then see disk1's data & all appears intact! I'm surprised how much is in lost+found, but I'll dig into that after the parity is done rebuilding.
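
(If anyone wants to inspect those recovered files, a sketch assuming the standard /mnt/disk1 mount point once the array is started:)

ls -lah /mnt/disk1/lost+found    # orphans xfs_repair could not reattach, named by inode number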

 

Anyhow, you all are AWESOME!  Moving on, a new HD should arrive today.  Any opinion on whether the issue was the drive?  It was manufactured in 2013.  My inclination is to decommission disk1 and keep it as an air-gapped backup in my closet.

 

Wiki that needs updating: the "XFS" command example under Checking File System.

 

EDIT: I forgot that drive already had a lost+found from the earlier issue.  I don't think there is anything new in that folder.

Edited by curtis-r
Link to comment
1 hour ago, curtis-r said:

Ha, guess the wiki needs updating :)

 


Just checked and you are right!   I suspect an auto-correct kicked in and got missed.

 

I used to have edit capabilities to fix this sort of issue, but that seems to have been lost with the recent upgrade of the wiki :( I’ll have to contact Limetech to see if they will trust me enough to allow me to do so in the future :) 

Link to comment
1 hour ago, JorgeB said:

Not using the md device means parity is no longer valid, you should run a correcting parity check.

 

Yet another example of

 

2 hours ago, trurl said:

During this thread you have learned some things that can be done, but not why or when you should do them. Please ask for advice.
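
(To spell out JorgeB's point above in command form — the device names are assumptions about this particular array:)

xfs_repair -L /dev/md1     # goes through Unraid's md driver, so parity is updated along with the data
xfs_repair -L /dev/sdi1    # writes to the raw partition behind parity's back, so a correcting check is needed afterwards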

 

Link to comment

Before we close the book on this, I ran an extended SMART test on the disk last night (after the parity rebuild completed without errors on that disk).  SMART found no errors.  I had switched HD bays the other day after the 2nd unmountable fiasco.  Does it make sense that the problem was the HD bay's SATA interface and the drive is fine?  I've had issues with this box's quality control.  I changed the SATA cable between the two fiascos, so it's not that.
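
(For anyone following along, a sketch of the command-line equivalent — /dev/sdi is an assumption based on the partition name used earlier; adjust to the actual device:)

smartctl -t long /dev/sdi    # start the extended self-test on the whole disk
smartctl -a /dev/sdi         # once it finishes, review the attributes and self-test log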

 

BTW, there were some parity errors on disk5, which trurl noticed connection issues with earlier.  I'm going to switch that bay and cable later today & run a SMART test.

Link to comment
