Issue with Disks - now showing "unmountable"



So I am having an issue with two disks in my array. No errors or anything. Below I'll describe the problem:

 

The array was running fine. I was using these two disks exclusively for Chia plots (Chia farming). Then I got a weird issue in Chia where it was showing plots as "invalid", and all of those plots seemed to be on one of my disks. It was spitting out errors but otherwise seemed to be running, so I just tried to delete the plots. When I went to delete them via MC, I got a weird I/O error. When I then went in to view the files, it said "No files available to view" but still showed the drive as full (this is Disk 19, mind you). I decided to do a clean system restart.

 

Upon restarting, all disks showed as fine, so I went ahead and started the array. Now when I start the array, it shows Disk 19 and Disk 20 as unmountable, which is odd because Disk 20 was not giving me any issues prior. I did run a SMART test on Disk 19 before shutting down and received no errors/issues.

 

What went wrong here? No issues or errors on the disks, clean reboots, and it had been running fine for 30 days. Any ideas?

WDC_WD80EDAZ-11TA3A0_VGK7Y9PG-20210727-1405 disk20 (sdu).txt WDC_WD60EZAZ-00SF3B0_WD-WX52DC0L8SLZ-20210727-1405 disk19 (sdr).txt

10 minutes ago, Hawkins12 said:

Also, I assume i'll need to run a parity check when all is said and done...

This would not achieve anything and might even be counter-productive if a drive is reading unreliably.
 

Handling of unmountable disks is described here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI.
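For an XFS array disk showing as unmountable, the procedure in the manual amounts to running `xfs_repair` in check-only mode against the md device while the array is started in Maintenance Mode. A minimal sketch (the md device numbers here simply match the disk numbers discussed in this thread):

```shell
# Start the array in Maintenance Mode first, so the md devices exist
# but no filesystem is mounted.
xfs_repair -n /dev/md19   # -n = no-modify: report problems, change nothing
xfs_repair -n /dev/md20
```

Running against `/dev/mdX` (rather than the raw `/dev/sdX` device) keeps parity in sync with any repairs made later.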


Ok, I entered Maintenance Mode and am running checks on Disk 19 and Disk 20. The Disk 19 check is taking a while, but Disk 20 was relatively quick, and it did have errors. Because of the errors I ran  xfs_repair -v /dev/md20  and afterwards re-ran the filesystem check. Below are the results:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 6
        - agno = 4
        - agno = 5
        - agno = 1
        - agno = 7
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

I presume this is good?

45 minutes ago, Hawkins12 said:

Ok, I entered Maintenance Mode and am running checks on Disk 19 and Disk 20. [...] Because of the errors I ran  xfs_repair -v /dev/md20  and afterwards re-ran the filesystem check. [...]

I presume this is good?


That looks good. Running it again without -n (and, only if requested, with -L) should fix things.
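For reference, the repair sequence being suggested looks like this (a sketch; note that -L is destructive to the metadata log and should only be used if xfs_repair itself asks for it):

```shell
# With the array still in Maintenance Mode:
xfs_repair -v /dev/md20      # actual repair pass (no -n), verbose

# Only if the repair aborts and explicitly tells you to zero the log:
# xfs_repair -vL /dev/md20   # -L discards the log; may lose the most
                             # recent metadata changes, so it is a last resort
```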

27 minutes ago, Hawkins12 said:

Disk 19 - I ran the same command mentioned above:  

 

Received the error "couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!" and it is now searching for a secondary superblock, which I understand can take some time. It's a 6TB drive.


Not so good.    Is the disk also disabled (with a red ‘x’)  or not?

13 minutes ago, itimpi said:


Not so good.    Is the disk also disabled (with a red ‘x’)  or not?

 

It was not disabled with an "x". It was green when I started the array, and it showed as "unmountable". I calculate the search for secondary superblocks will take about 4 hours (unless it finds something sooner). It's running at 420 MB/s or so on a 6TB drive.
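That 4-hour figure checks out as rough arithmetic, assuming the search has to make one full sequential pass over all 6 TB at 420 MB/s:

```shell
# hours ≈ 6 TB / (420 MB/s) / (3600 s per hour)
awk 'BEGIN { printf "%.1f\n", 6e12 / 420e6 / 3600 }'   # prints 4.0
```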

1 minute ago, itimpi said:


OK, was just wondering as that might have affected any suggestions for recovery.   Never like it when the superblock cannot be found.

Does that generally mean the disk went bad? Or just data corruption? It's just odd that the disk's SMART data shows as fine.
