
[Solved] "Unmountable: No file system" after unclean shutdown


Zandor300


Due to a water leak at a skylight causing a short circuit, power was cut to one of my unRAID servers; the water reached the servers' power strips. Now that I am cleaning everything up and testing whether all the servers still work, one of the drives in one server is showing "Unmountable: No file system".

 

I am unable to figure out what is wrong. The drive is only a couple of weeks old and was installed to replace a 2 TB drive to increase my array capacity. My diagnostics file is attached.

 

I hope someone can help me with this.

silicon-diagnostics-20180430-0836.zip

 

EDIT:

The drive that is acting up is "ST4000DM004-2CV104_ZFN05855" aka Disk2

 

EDIT 2:
I have now updated from v6.4.0 to v6.5.1, which required a reboot. Now I am getting new SMART errors on my Parity disk and Disk1, and the message above has disappeared from Disk2. See the screenshots below:

Screenshot_182.png

Screenshot_183.png

 

Below, again, a diagnostics dump:

silicon-diagnostics-20180430-1235.zip

 

EDIT 3:
Never mind, the "Unmountable..." message hasn't disappeared from Disk2 after all. The problem still exists.

 

EDIT 4:

I ran xfs_repair on Disk2 with the "-n" flag. Here is the output: https://pastebin.com/w8FJritt

Should I run it with the modify flag (i.e. without "-n")? I don't know what that will do, so that's why I'm asking.
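
For reference, a no-modify check looks roughly like this; a sketch assuming the array is started in maintenance mode and Disk2 is exposed as /dev/md2 (adjust the device name to your own array):

    xfs_repair -n /dev/md2    # -n: no-modify mode, only reports what it would fix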

 

EDIT 5:

When running xfs_repair with the "-v" flag, I get the following output:

Phase 1 - find and verify superblock...
        - block cache size set to 731272 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1147475 tail block 1147392
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

What are the chances that I will corrupt my data if I proceed with the repair by destroying the log?


Destroying the log when repairing the filesystem is pretty standard and unavoidable for unmountable disks.
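
The sequence that error message is suggesting looks roughly like this, assuming Disk2 maps to /dev/md2 with the array in maintenance mode (the device name and mount point are examples, adjust to your setup):

    mkdir -p /mnt/test            # example scratch mount point
    mount /dev/md2 /mnt/test      # a successful mount replays the journal
    umount /mnt/test              # unmount again before repairing
    xfs_repair -v /dev/md2        # repair with the log already replayed

Only if the mount itself fails do you fall back to xfs_repair -L /dev/md2, which zeroes the log and can lose whatever metadata changes were still in it.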

 

Also, here is something from the update notes about your CRC warnings

Quote

Starting with 6.4.1, unRAID now monitors SMART attribute 199 (UDMA CRC errors) for you. If you get an alert about this right after upgrading, you can just acknowledge it, as the error probably happened in the past. If you get an alert down the road, it is likely due to a loose SATA cable. Reseat both ends, or replace if needed. (see this)
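
If you want to read that counter yourself, smartctl (from smartmontools) prints it as attribute 199; a quick sketch, with /dev/sdX standing in for the disk in question:

    smartctl -A /dev/sdX | grep 199    # raw value of UDMA_CRC_Error_Count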

 

Here is a link to the rest of those notes, which you really should read:

 

https://lime-technology.com/forums/topic/66327-unraid-os-version-650-stable-release-update-notes/

 

 

19 minutes ago, trurl said:

Destroying the log when repairing the filesystem is pretty standard and unavoidable for unmountable disks.

Thanks, the server is up and running again after running xfs_repair with "-L" on Disk2.

 

As for the release notes, I always read them but must have skipped over the part where it says that. Thanks for the heads-up!


Archived

This topic is now archived and is closed to further replies.
