Disk Unmountable after upgrade to 6.12.8



Hi All,

 

The other day I upgraded my seemingly fully functional server from 6.12.6 to 6.12.8. After the upgrade, certain Docker containers wouldn't start anymore, and the attempt gave me an "execution error". I didn't realize or suspect a potential issue with my array. I urgently needed access to my Paperless container, so in a knee-jerk reaction I downgraded to 6.12.6 and then upgraded to 6.12.8 again.

 

Edit: I forgot and only now remembered: after the first upgrade, the server became unreachable and I forced a reboot.

 

Later, when I had time to investigate further, I found that my main Disk 1 was unmountable.

I stopped the array, and today I started looking into the issue. First I downloaded diagnostics, see attached (edit: attachment removed after the problem was solved). Then I followed this advice: I stopped the array, unassigned the drive, and started it again. No emulated drive appeared, most of my shares were gone, and the Docker issue persisted. So I stopped the array, reassigned the drive, and started the array again. A data rebuild started, which I have paused, and that's where I stand now. I have ordered a new drive that will arrive tomorrow.

Any help would be highly appreciated.

 

Edited by iripmotoles

Handling of drives (emulated or not) that show as unmountable is covered here in the online documentation, accessible via the Manual link at the bottom of the Unraid GUI. In addition, every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS -> Manual section covers most aspects of the current Unraid release.
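For readers landing here from search: the documented procedure boils down to starting the array in Maintenance mode and running a read-only filesystem check before attempting any repair. A minimal sketch, assuming the disk is XFS (as the repair output later in this thread shows) and that Disk 1 maps to the device node /dev/md1 (an assumption; check your own disk's number):

```shell
# Sketch only: /dev/md1 is an assumed device node for Disk 1.
# Start the array in Maintenance mode from the GUI first, so the
# parity-protected device exists but no filesystem is mounted.
DEV=/dev/md1

if [ -b "$DEV" ]; then
    # -n = no-modify mode: report problems without writing anything
    xfs_repair -n "$DEV"
else
    echo "$DEV not found - is the array started in Maintenance mode?"
fi
```

Running the equivalent check from the disk's GUI page while in Maintenance mode is generally the safer route, since it picks the right device for you.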

9 minutes ago, iripmotoles said:

Thank you for your reply. Does it make sense to cancel the data rebuild in order to attempt a check/repair, or is it better to let the rebuild run through?

Probably irrelevant. The rebuild process just makes the physical drive match the emulated one.


Ok, thank you. I cancelled the rebuild, then checked and repaired the drive, with the following output:
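For context on the output below: the ALERT in Phase 2 shows the repair was run with the -L flag, which zeroes the metadata log before repairing. Assuming the same /dev/md1 device node for Disk 1 (an assumption), the invocation would have looked roughly like this:

```shell
DEV=/dev/md1   # assumption: Disk 1's parity-protected device node

# -L zeroes the XFS metadata log first; this is what triggers the
# "valuable metadata changes ... destroyed" ALERT in the output.
# Only reach for -L after a plain xfs_repair refuses to run because
# of a dirty log.
if [ -b "$DEV" ]; then
    xfs_repair -L "$DEV"
else
    echo "skipping repair: $DEV not present"
fi
```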



    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    destroyed because the -L option was used.
            - scan filesystem freespace and inode maps...
    clearing needsrepair flag and regenerating metadata
    sb_ifree 350, counted 348
    sb_fdblocks 1172213351, counted 1189241404
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
    inode 4455004287 - bad extent starting block number 4503567550636411, offset 0
    correcting nextents for inode 4455004287
    bad data fork in inode 4455004287
    cleared inode 4455004287
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 5
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 1
            - agno = 6
            - agno = 8
            - agno = 9
            - agno = 7
    entry "DeloreanASAnnotationToolbarView.nib" at block 0 offset 560 in directory inode 4455004273 references free inode 4455004287
    	clearing inode number in entry at offset 560...
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
    bad hash table for directory inode 4455004273 (no data entry): rebuilding
    rebuilding directory inode 4455004273
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    Maximum metadata LSN (14:2462824) is ahead of log (1:2).
    Format log to cycle 17.
    done

 

Then I stopped the array and started it normally; it has now begun a new data rebuild. My shares are back, though of course they carry an exclamation mark for being unprotected. I will report back with the results. If everything is back to normal after the rebuild, I will add the newly ordered drive as a second parity drive.

 

Just now, iripmotoles said:

On the emulated drive I don't have a lost+found folder. I will check again after the rebuild.

That is always a good sign - it means the recovery process did not find anything it could not handle. The results after a rebuild should be identical.
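Once the rebuild finishes, the absence of orphans is easy to confirm from the console. A quick check, assuming the standard Unraid mount point /mnt/disk1 for Disk 1 (adjust for your disk number):

```shell
DISK=/mnt/disk1   # assumption: Disk 1's standard mount point

if [ -d "$DISK/lost+found" ]; then
    echo "lost+found exists - xfs_repair disconnected some files:"
    ls -la "$DISK/lost+found"
else
    echo "no lost+found - nothing was orphaned during the repair"
fi
```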

