Unmountable: No file system



Got home from work today and happened to see that a new unraid version was available, so I decided to upgrade.  I backed up the flash drive and ran the upgrade assistant tool, which advised me to update a couple of plugins, so I did.  Plugins sorted, I started the upgrade, which completed just fine and told me to reboot.  Upon rebooting, disk 4 is now showing as "Unmountable: No file system".  Prior to the upgrade I didn't notice any issues or warnings.  About 3 weeks ago the server did shut down abnormally when a power outage exceeded my UPS standby time, but it had been running fine since being restarted.

 

I have only tried the most basic troubleshooting.  Restarting made no change.  I also tried a new SATA cable and swapped SATA ports on the motherboard, but the error stayed with the original disk 4, so it doesn't appear to be the cable or the port.

 

I have attached my diagnostics zip file.  I don't have a spare disk at the moment because I used it in another system, but I already have one on order.  In the meantime I have shut down the server, since it houses less-than-critical data that I can live without for a few days.  While I wait, I would like to know what happened, in case it's something I did or something otherwise preventable in the future.

 

While I wait for the replacement drive, I guess I need to read up on how to replace a drive and rebuild the array, as this is my first failure since I started using unraid.

 

EDIT

Rolled back the OS to version 6.6.7, but the drive is still unmountable.  Seems like it may be a drive issue that just didn't show up until the system was rebooted.

 

ghost-diagnostics-20190516-0154.zip

Edited by jkBuckethead
16 hours ago, johnnie.black said:

Unfortunately I tried both, but with no positive results.

First I tried xfs_repair from the terminal window, which seemed to return an error and stop.  I can't figure out how to copy text from the terminal window, so I've done my best to reproduce the results below.
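
For reference, the command I ran was something like this (typed from memory, with the array started in maintenance mode; I'm assuming here that disk 4 maps to /dev/md4 on my system):

# check/repair the file system on disk 4 (md4 is my assumption for disk 4)
xfs_repair /dev/md4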

 

xfs_repair result

Phase 1 - find and verify superblock...
        - block cache size set to 323016 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 116006 tail block 116002
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
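
If I understand that message, the mount attempt it's asking for would look something like this (again assuming disk 4 is /dev/md4; the mount point is just a scratch directory I made up):

mkdir -p /mnt/disk4test          # temporary mount point, name is arbitrary
mount /dev/md4 /mnt/disk4test    # mounting should replay the journal if it can
umount /mnt/disk4test            # unmount before re-running xfs_repair
xfs_repair /dev/md4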

 

I checked out the -L option under xfs_repair, and it had this to say:

-L    Force Log Zeroing. Forces xfs_repair to zero the log even if it is dirty (contains metadata changes). When using this option the filesystem will likely appear to be corrupt, and can cause the loss of user files and/or data.

 

Since this didn't sound promising, I moved on to the webGUI option. 

 

The webGUI option first returned an ALERT that was similar to the error above, essentially still cautioning me to first mount my (unmountable) file system.  It did, however, complete the scan, the results of which are below.  The wiki indicates that the file system check should clearly indicate what steps to take if a repair is required.  Maybe I'm missing something, but I do not see any suggestions below.  Since nothing was suggested, I restarted the array normally and the disk is still unmountable.  With no obvious error, does this mean my disk is toast?
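
From the ALERT in the output below, the webGUI check appears to run xfs_repair read-only with the -n (no modify) flag, which I gather is the equivalent of something like:

xfs_repair -n /dev/md4    # report problems only; change nothing on disk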

 

My replacement should arrive tomorrow.  Is rebuilding from parity the best option?  After that I can pull the disk and test it more thoroughly.

 

webGUI file system check results

 

Phase 1 - find and verify superblock...
        - block cache size set to 323016 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 116006 tail block 116002
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
Maximum metadata LSN (1:120049) is ahead of log (1:116006).
Would format log to cycle 4.
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu May 16 18:40:32 2019

Phase        Start           End             Duration
Phase 1:     05/16 18:40:31  05/16 18:40:31
Phase 2:     05/16 18:40:31  05/16 18:40:31
Phase 3:     05/16 18:40:31  05/16 18:40:32  1 second
Phase 4:     05/16 18:40:32  05/16 18:40:32
Phase 5:     Skipped
Phase 6:     05/16 18:40:32  05/16 18:40:32
Phase 7:     05/16 18:40:32  05/16 18:40:32

Total run time: 1 second


It is quite normal to have to use the -L flag with xfs_repair, and it virtually never has any adverse effect.  Even if it does, it is only likely to affect the very last file that was being written.  The log you posted shows that if run with -L and WITHOUT -n, xfs_repair should succeed and the disk should become mountable.
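
In practice, with the array started in Maintenance mode, that would be something like the following (assuming disk 4 is /dev/md4 on your system):

xfs_repair -L /dev/md4    # zero the dirty log, then repair; review the output before restarting the array normally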

 

Note that a rebuild does NOT fix file system corruption.  If a disk is flagged as unmountable before a rebuild, it will still be unmountable after the rebuild.  Only a successful run of xfs_repair can fix that.

