
Unmountable: Unsupported partition layout after reboot


mfarlow
Solved by JorgeB


After the latest Unraid upgrade ran, I decided to reboot the server.  After the server came back up I noticed drive 2 was showing the Unmountable: Unsupported partition layout error.  Not suggesting the upgrade was the cause, more likely it was the reboot.

 

I attempted to perform an xfs_repair following the directions here. The first pass came back with the following status:

Phase 1 - find and verify superblock...
bad primary superblock - bad sector size !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.  Exiting now.

 

I followed that up by running the repair without the -n switch, figuring that would repair the superblock issue. It did, however the drive is still unmountable.
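For reference, those runs correspond to commands along these lines (assuming disk 2 maps to /dev/md2; the exact md device depends on the disk slot, and newer Unraid releases use /dev/md2p1 instead):

xfs_repair -n /dev/md2    # check only, makes no changes
xfs_repair /dev/md2       # writes repairs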

 

Running the check again with the -n switch gives me the following:


Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 9
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 1
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

 

At this point I am at a loss as to how to proceed. Is my only option to reformat the drive?

 

Thanks for any input on this!

tower-diagnostics-20230907-1452.zip

Link to comment

Yeah, that was my last run, in which I used the -n option to see if it found any other errors. I did one pass with the -n switch, then a second pass with no switches; that one looked like it found a backup superblock and took a little while to run. After that I stopped the array and restarted the array in normal mode, and it was still unmountable. I then went back and ran xfs_repair from the GUI with the -n switch to see if there were other errors. That is the output I posted.

 

I did stop the array and restart it in normal mode, but the disk was still unmountable. I even restarted the server and started the array in normal mode, but it was in the same unmountable state.

 

I started running xfs_repair on the command line and it found the bad superblock again. The command is still running, just printing a long string of dots. It's been running for about 3 hours now; I'm not sure if it is supposed to go that long, but I want to see if it finishes.

Link to comment
2 minutes ago, mfarlow said:

started running the xfs_repair on the command line and it found the bad superblock again

Running from the command line is notorious for the command being entered slightly wrong, which results in the bad superblock message. It is always a good idea to post the exact command you used so we can check that it was valid.

 

It is very rare after a successful run via the GUI for the drive to remain unmountable, which is why I was checking on what exactly had happened.

Link to comment
1 minute ago, mfarlow said:

This is the command I am currently running from the command line:

xfs_repair /dev/sde

Should I kill that command and re-run the GUI, then post that command and output?

That is the wrong command, as you have omitted the partition number (e.g. /dev/sde1), so there is no point in continuing.
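For an array disk the check should normally target the Unraid md device rather than the raw sd device, so that parity stays in sync, something like this (the exact name depends on the disk slot and Unraid version):

xfs_repair -v /dev/md2      # Unraid 6.11 and earlier
xfs_repair -v /dev/md2p1    # Unraid 6.12 and later

Repairing /dev/sdX1 directly would fix the filesystem but leave parity out of date.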

Link to comment

I did try it with the partition number but I received the following error

/dev/sde1: No such file or directory
fatal error -- couldn't initialize XFS library

I'll re-run the GUI and post that command and output.
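Before re-running, a quick, non-destructive way to confirm what the kernel actually sees (a minimal check):

ls -l /dev/sde*           # does a /dev/sde1 node exist at all?
grep sde /proc/partitions

If sde1 is missing from both, the partition table itself is damaged rather than the filesystem.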

Link to comment

First pass with the -n switch (xfs_repair1.png)

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 8
        - agno = 5
        - agno = 4
        - agno = 6
        - agno = 7
        - agno = 1
        - agno = 9
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

 

And running without any switches to repair (xfs_repair2.png)

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 4
        - agno = 8
        - agno = 2
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 9
        - agno = 1
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

 

Restarting the Array in normal mode (xfs_repair3.png)

 

Disk 2 still shows as unmountable (xfs_repair4.png)

 

Any idea what I should try next?

 

Thanks for all the help!!


Link to comment

Makes sense


 

root@Tower:~# fdisk -l /dev/sde
Disk /dev/sde: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: WDC WD101EMAZ-11
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
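For comparison, an intact Unraid data disk of this size would also show a GPT label and a partition entry, roughly like this (illustrative only, not output from this system):

Disklabel type: gpt
Device     Start          End      Sectors  Size Type
/dev/sde1     64  19532873694  19532873631  9.1T Linux filesystem

The absence of those lines above means the partition table is gone, which is why /dev/sde1 does not exist.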

 

Unfortunately for me, now my parity drive is showing the red X, so I don't think I will be rebuilding from parity.  When it rains it pours.  

 

Thanks

Link to comment

Thank you! I was able to restore the partition using testdisk. Now the array sees the drive as mountable.
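For anyone finding this thread later, testdisk is interactive; the sequence for this kind of recovery looks roughly like this (menu labels approximate, so treat it as a sketch):

testdisk /dev/sde

then Create (a new log) -> select the disk -> EFI GPT -> Analyse -> Quick Search -> select the found partition -> Write, and reboot so the kernel re-reads the partition table.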

 

All the data was placed into lost+found when I ran xfs_repair, so now I need to figure out how to sort through all of that if possible, but it's still a win. :)
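A starting point for sorting the recovered files might be something like this (assuming disk 2 is mounted at /mnt/disk2; adjust the path to match):

find /mnt/disk2/lost+found -type f | wc -l    # how many files landed there
du -sh /mnt/disk2/lost+found/* | sort -h      # size of each recovered entry
file /mnt/disk2/lost+found/* | head           # guess types from file content

xfs_repair names recovered entries by inode number, so content type and size are the main clues for re-sorting.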

 

I appreciate all the help with this!

Link to comment
