Unmountable: Unsupported partition layout


So I finally got my IO CREST internal 5-port non-RAID SATA III 6Gb/s PCIe x4 controller card (JMB585 chipset, SI-PEX40139, with low-profile bracket).  Everything was going well, but I suddenly got the "Unmountable: Unsupported partition layout" error.  Per the Unraid manual, I started the array in maintenance mode and ran xfs_repair /dev/md2 from the command line.  The issue is with disk 2, so I think I used the correct command.  I restarted the array (unchecked maintenance mode) and the drive still says unmountable.  What am I supposed to do next?

 

root@Tower:~# xfs_repair /dev/md2
Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
        - 20:26:15: zeroing log - 38155 of 38155 blocks done
        - scan filesystem freespace and inode maps...
        - 20:26:15: scanning filesystem freespace - 50 of 50 allocation groups done
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - 20:26:15: scanning agi unlinked lists - 50 of 50 allocation groups done
        - process known inodes and perform inode discovery...
        - agno = 15
        - agno = 45
        - agno = 0
        - agno = 30
        - agno = 46
        - agno = 16
        - agno = 47
        - agno = 48
        - agno = 17
        - agno = 49
        - agno = 31
        - agno = 18
        - agno = 32
        - agno = 19
        - agno = 33
        - agno = 20
        - agno = 34
        - agno = 35
        - agno = 21
        - agno = 36
        - agno = 37
        - agno = 22
        - agno = 38
        - agno = 39
        - agno = 23
        - agno = 1
        - agno = 40
        - agno = 41
        - agno = 24
        - agno = 42
        - agno = 43
        - agno = 44
        - agno = 25
        - agno = 26
        - agno = 27
        - agno = 2
        - agno = 28
        - agno = 29
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - 20:26:15: process known inodes and inode discovery - 3456 of 3456 inodes done
        - process newly discovered inodes...
        - 20:26:15: process newly discovered inodes - 50 of 50 allocation groups done
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - 20:26:15: setting up duplicate extent list - 50 of 50 allocation groups done
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 7
        - agno = 5
        - agno = 6
        - agno = 3
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - agno = 22
        - agno = 23
        - agno = 25
        - agno = 26
        - agno = 24
        - agno = 27
        - agno = 28
        - agno = 29
        - agno = 30
        - agno = 31
        - agno = 33
        - agno = 34
        - agno = 35
        - agno = 36
        - agno = 37
        - agno = 38
        - agno = 39
        - agno = 40
        - agno = 41
        - agno = 42
        - agno = 32
        - agno = 43
        - agno = 45
        - agno = 46
        - agno = 48
        - agno = 44
        - agno = 49
        - agno = 4
        - agno = 47
        - 20:26:15: check for inodes claiming duplicate blocks - 3456 of 3456 inodes done
Phase 5 - rebuild AG headers and trees...
        - 20:26:18: rebuild AG headers and trees - 50 of 50 allocation groups done
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
        - 20:26:18: verify and correct link counts - 50 of 50 allocation groups done
done
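For reference, the same check can also be run read-only first; a minimal sketch, assuming disk 2 maps to /dev/md2 as in the output above and the array is started in maintenance mode:

xfs_repair -n /dev/md2    # -n = no modify, just report any problems found
xfs_repair /dev/md2       # actual repair, as run above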

tower-diagnostics-20220407-2035.zip


That error means there's a partition problem, not a filesystem one, so xfs_repair won't help. If parity is valid you can rebuild the disk and let Unraid recreate the correct partition. To test: stop the array, unassign that disk, start the array; if the emulated disk mounts and the contents look correct you can rebuild on top, and if the emulated disk doesn't mount, post new diags.
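If it helps, here is a rough way to check the emulated disk from the console once the array is started with that slot unassigned; this is only a sketch assuming the affected slot is disk 2, so adjust the number to match your array:

df -h /mnt/disk2    # is the emulated disk mounted, and how full is it
ls /mnt/disk2       # spot-check that the expected top-level folders are there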

14 hours ago, JorgeB said:

That error means there's a partition problem, not a filesystem one, so xfs_repair won't help. If parity is valid you can rebuild the disk and let Unraid recreate the correct partition. To test: stop the array, unassign that disk, start the array; if the emulated disk mounts and the contents look correct you can rebuild on top, and if the emulated disk doesn't mount, post new diags.

What would cause a partition problem?

 

Would the rebuild take the same amount of time as a parity check?  (All the drives are the shingled variety.)

 

Are there issues with doing a rebuild on shingled drives?

9 hours ago, tr3bjockey said:

What would cause a partition problem?

Difficult for me to say, usually disk/controller related, often happens after a power cut.

 

9 hours ago, tr3bjockey said:

Would the rebuild take the same amount of time as a parity check?

Should be similar.

 

9 hours ago, tr3bjockey said:

Are there issues with doing a rebuild on shingled drives?

It could be a little slower, but they usually perform about the same as a CMR drive during rebuilds; normal writes are where the performance degradation is usually more obvious.


See if you can get GPARTED to run on your system. It is a powerful partition tool that barely takes up space, but at least you will have a visual of what's going on. Since I didn't want to fool around with my server I created a GPARTED live USB bootable stick. I used a PC I had that was running Windows 10. I just connected the drive in question to the SATA port on the computer and then booted to the USB stick which ran the GPARTED GUI.

 

Here you can download the GPARTED LIVE ISO

 

Just "burn" the ISO file with the utility RUFUS to the USB stick and away you go.

On 4/9/2022 at 2:37 AM, opentoe said:

See if you can get GPARTED to run on your system. It is a powerful partition tool that barely takes up space, but at least you will have a visual of what's going on. Since I didn't want to fool around with my server I created a GPARTED live USB bootable stick. I used a PC I had that was running Windows 10. I just connected the drive in question to the SATA port on the computer and then booted to the USB stick which ran the GPARTED GUI.

 

Here you can download the GPARTED LIVE ISO

 

Just "burn" the ISO file with the utility RUFUS to the USB stick and away you go.

Thanks for the tip!  I tried to use GParted and told it to repair, but it seemed to freeze.  I walked away for 2 hours, came back, and the monitor was in sleep mode.  No mouse or keyboard movement woke GParted up.  So I went ahead with the recovery procedure, but I messed up.  I lost about a TB of stuff because I didn't pay attention.

On 4/8/2022 at 12:05 AM, JorgeB said:

That error means there's a partition problem, not a filesystem one, so xfs_repair won't help. If parity is valid you can rebuild the disk and let Unraid recreate the correct partition. To test: stop the array, unassign that disk, start the array; if the emulated disk mounts and the contents look correct you can rebuild on top, and if the emulated disk doesn't mount, post new diags.

I unassigned the disk from the array, reassigned it, it said it needed to be formatted, formatted it, and then it started rebuilding, but it rebuilt a blank disk instead of my 1TB of stuff.  ;-(  I think I screwed up or something... maybe I needed to move the disk out of the array and format it there.  Not sure.  I would like to know your opinion for next time on what I might have done wrong.

48 minutes ago, tr3bjockey said:

it said it needed to be formatted

If the disk was unmountable it would allow you to format it, but you should never format a disk that should have data on it. There is a warning about formatting array disks and you had to actually check a box to force it to format the disk.

On 4/8/2022 at 3:05 AM, JorgeB said:

if the emulated disk mounts and the contents look correct you can rebuild on top, and if the emulated disk doesn't mount, post new diags.

So you should have stopped at that point for further advice.

  

48 minutes ago, tr3bjockey said:

maybe I needed to move the disk out of the array and format it there

That would have been pointless, but at least it wouldn't have formatted the (emulated) disk in the array.

 

Format is NEVER part of rebuild.

 

When you formatted the emulated disk, parity was updated so it agreed the disk had been formatted. So rebuilding can only result in a formatted disk.

 

You should have repaired the filesystem instead of formatting, and then rebuilt the repaired filesystem.

 

https://wiki.unraid.net/Manual/Storage_Management#Drive_shows_as_unmountable
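Put another way, the rough order of operations from that wiki page is: start the array in maintenance mode, repair the filesystem on the emulated disk, then rebuild. A sketch assuming the affected slot is disk 2:

# with the array started in maintenance mode:
xfs_repair -n /dev/md2    # read-only check first
xfs_repair /dev/md2       # actual repair; only add -L if it asks for it and you accept the possible data loss
# then restart the array normally, confirm the emulated disk mounts and the contents look right,
# and rebuild the physical disk on top of it (format is never part of this)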

On 4/16/2022 at 6:14 PM, trurl said:

Format is NEVER part of rebuild.

When you formatted the emulated disk, parity was updated so it agreed the disk had been formatted. So rebuilding can only result in a formatted disk.

Now that actually explains it and makes sense. 

 

On 4/16/2022 at 6:14 PM, trurl said:

If the disk was unmountable it would allow you to format it, but you should never format a disk that should have data on it. So you should have stopped at that point for further advice.

 

You are absolutely correct.

 

 

On 4/16/2022 at 6:14 PM, trurl said:

You should have repaired the filesystem instead of formatting, and then rebuilt the repaired filesystem.  https://wiki.unraid.net/Manual/Storage_Management#Drive_shows_as_unmountable

I did attempt a repair of the drive first (see the diagnostics above).  I wrongly assumed that formatting prepares the drive to be made usable again after an unmountable condition.  I'll know better next time, but I will still consult with the admins here to make sure I'm doing the steps correctly.  Thank you for explaining what happened.  🙂

 

Quick questions to clarify:

1.  How can you tell if the disk with the partition damage is being emulated?  I know the disk could have had 50-150 GB of stuff on it, but the files I save there also get saved to another disk, so there is no way to know what was lost.  I don't think I saw the folder icon next to the disk to check.

2.  Is it possible that the disk is not being emulated if the partition is damaged?

3.  If the drive was being emulated, how could you copy just the files from the bad drive to a USB drive?

4.  After stopping the array, removing the bad drive, and restarting the array, is the drive still being emulated?

 

 

 

