
Unmountable disk and missing cache drive, simultaneously...need help with next steps.


arretx


Situation: Today I was adding some power circuits to my house, so before I cut the mains I shut down Unraid. Hours later, when I sat down to work on a few things, I noticed at login that the array wasn't running due to a missing cache drive.

 

I've been using two NVMe drives, both 1TB, in a mirrored configuration.

 

I used the reboot feature and it came back with the same problem, so I shut the box down completely and powered it back up (something I had already done earlier to accommodate the mains power being taken offline).

 

This time it booted with the 2nd cache disk available, but I wasn't able to start the array until I unassigned the first disk and moved the 2nd into the first slot.

 

So, I started the array and quickly noticed that Disk 1 of the array reports "Unmountable: Wrong or no file system."

 

This system has 2 parity drives, each 8TB, plus 5 data drives in the array: 4 are 8TB and 1 is 500GB. One of those 8TB drives is Disk 1.

 

Have I lost data at this point? If so, how do I know what was on Disk 1? If I replace the disk, will the array rebuild the data that was on it?

 

My 2nd concern is that my cache is now down to a single NVMe disk. Should I move all the data on the cache to the array and then replace the NVMe drive that's faulty (assuming it is)?
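Before deciding, I figured I could at least check the state of the remaining pool device. A rough sketch of what I have in mind, assuming the two-drive pool is btrfs (the Unraid default for a mirrored cache) and mounted at /mnt/cache:

# Show which devices btrfs sees in the pool and whether one is reported missing
btrfs filesystem show /mnt/cache

# Per-device error counters (read/write/corruption) for the surviving drive
btrfs device stats /mnt/cache

# Allocation overview, including whether data is still profiled as raid1
btrfs filesystem usage /mnt/cache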

 

Diags attached.

 

tower-diagnostics-20231024-2016.zip


I just ran `xfs_repair -v /dev/md1` and was presented with this:

 

Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
        - block cache size set to 3019504 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 439743 tail block 439417
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 

Destroying the log sounds bad.  Should I do this?
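From what I've read, the order of operations the message is describing seems to be roughly this (a rough sketch; /mnt/disk1-test is just a placeholder mount point I picked, with the array still in maintenance mode and /dev/md1 being Disk 1):

# Try to mount first so XFS can replay its own log
mkdir -p /mnt/disk1-test
mount /dev/md1 /mnt/disk1-test

# If the mount works, unmount cleanly and re-run the repair
umount /mnt/disk1-test
xfs_repair -v /dev/md1

# A read-only dry run is also possible at any point
xfs_repair -n /dev/md1

# Only if the mount fails would -L (destroy the log) come into play, as a last resort
# xfs_repair -L -v /dev/md1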

