mount error: Unsupported partition layout


bobe99

Recommended Posts

I had a working unraid install on an i7-based server and attempted to migrate to a Xeon-based system (Dell R720xd). I moved the existing drives and flash into the new server, created a new config based on the existing data layout, and attempted to start the array. The array started, but all 7 data drives are reporting "Unmountable: Unsupported partition layout".

 

I logged into the CLI and looked at each disk (some LUKS, most unencrypted). I was able to mount each partition by hand (e.g. mount /dev/sdc1 /mnt); I tried this with every disk and each one mounted successfully. It appears that all my data is intact (whoo whoo). After unmounting, I looked at each disk with sgdisk -v and each one looked normal, e.g.:

# sgdisk -v /dev/sdc

No problems found. 2166478 free sectors (1.0 GiB) available in 2
segments, the largest of which is 2166448 (1.0 GiB) in size.
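
For the record, the by-hand check amounted to roughly this (device names are examples; the LUKS members need a cryptsetup luksOpen first so their mapper device can be mounted instead):

for dev in /dev/sd[c-i]; do
    echo "=== $dev ==="
    sgdisk -v "$dev"                      # verify the partition table
    sgdisk -p "$dev"                      # print start sector / size of each partition
    mount -o ro "${dev}1" /mnt && echo "mount OK" && umount /mnt
done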

After a little forum reading, I tried a "new config" from scratch with no drives defined, then rebooted. After logging back in, all drives showed up in the unassigned devices section. Each disk was properly identified and, in each case, reported its FS as "xfs". I was able to mount each drive using the UD "mount" button without issue.

 

I then reinstated my original disk config, rebooted and started the array. Same mount error seen.

Looking at syslog.txt, the only abnormality I see is the error from the thread title:

Sep 14 10:58:32 unraid emhttpd: /mnt/disk1 mount error: Unsupported partition layout
Sep 14 10:58:32 unraid emhttpd: shcmd (502): umount /mnt/disk1
Sep 14 10:58:32 unraid root: umount: /mnt/disk1: not mounted.
Sep 14 10:58:32 unraid emhttpd: shcmd (502): exit status: 32
Sep 14 10:58:32 unraid emhttpd: shcmd (503): rmdir /mnt/disk1
Sep 14 10:58:32 unraid emhttpd: shcmd (504): mkdir -p /mnt/disk2

< etc for all 7 drives >

This is really not a very helpful message. It would be nice if the entire failing mount command were logged; if it were, I might be able to glean some insight into the root cause of this error. I then tried looking for info on emhttpd (hoping to find source code so I could figure out the mount command used), but was unsuccessful.
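
(One generic way to see exactly what emhttpd runs at mount time, assuming strace is available on the box, would be to attach to it while starting the array:

strace -f -e trace=execve,mount -p $(pidof emhttpd) 2>&1 | grep -i mount

That would catch both any mount commands it execs and any mount() syscalls it makes directly. I'm just noting the approach; I haven't captured that output.)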

 

I should also say that xfs_repair is happy with each drive, e.g.:

# xfs_repair /dev/sdc1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

I'm hoping someone can tell me why unraid is so unhappy with these drives, maybe provide a "partition inspection command" to look closer? Maybe tell me what the actual mount command format would be? Remember, all disks were very happy in my old server.

 

As these drives are almost full of data, re-formatting is not an option without first copying the data from each old "unsupported partition layout" drive to a new, properly formatted disk. I could do this, but it would mean my server is non-functional for about a week, as I would have to serially copy each large 5400 RPM drive at less than 200 MB/s.
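
(If it does come to that, the per-disk copy itself would be something along these lines, with the old drive mounted read-only through UD and a freshly formatted disk in the array -- the paths are just illustrative:

rsync -avHX --progress /mnt/disks/old_disk1/ /mnt/disk1/

It's the serial, days-long nature of it that I'm trying to avoid, not the mechanics.)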

 

Hoping someone can provide me the secret handshake for fixing the partition issue that unraid is complaining about.

 

Thanks (and fingers crossed, someone can help)!

 

 

unraid-diagnostics-20190914-1059.zip


I first did the -n flavor, which looked pretty benign, so then I did the no-option run (e.g. xfs_repair /dev/sdc1). I have not tried the -L option, as I thought I'd ask here first before performing the "last resort" (per the xfs_repair help message). I'm not sure what the implications of -L are.
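
To spell out what I ran, and what -L would do as I understand it from the man page (device name is an example):

xfs_repair -n /dev/sdc1     (check-only; -n makes no modifications)
xfs_repair /dev/sdc1        (actual repair; it errors out and points you at -L if the log is dirty)
xfs_repair -L /dev/sdc1     (not run: zeroes the metadata log, losing any in-flight metadata updates -- hence "last resort")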


I went to the disk1 page and found the "File system type" field to be non-editable via the GUI. I edited /boot/configs/disk.cfg, setting diskFsType.1="xfs" (and .2), rebooted, and restarted the array. This time the main page "FS" column reported xfs for the 2 disks, but unfortunately I got the same mount error.


Oops. Yeah, the array was running. I see that the GUI works fine in that regard when the array is stopped. My bad; I should have known better (I'm a little stressed at the moment, if that's an excuse ;-).

 

I've used the GUI to set xfs on a non-encrypted array disk, started the array ... same result.

 

  • 3 years later...
4 hours ago, segator said:

Could we know what the "unraid requirements" are?

The main one is that the partition starts on sector 64 (or 2048 for SSDs) and that single partition uses the rest of the device, though IIRC it also needs a specific MBR signature, so you'll most likely need to reformat.
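
A quick way to check a given disk against that from the CLI (device name is an example):

sgdisk -p /dev/sdX      (prints the partition table: start sector and size of each partition)
sgdisk -i 1 /dev/sdX    (detailed info for partition 1, including "First sector")
fdisk -l /dev/sdX       (also shows whether that single partition runs to the end of the device)

The start sector should come back as 64 (or 2048), with exactly one partition covering the rest of the disk.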

1 minute ago, segator said:

What I don't understand is this: I shucked a USB drive that was plugged into the unraid array, and now I get this partition layout issue. What changed? Does the USB controller do something to the hard drive?

I checked, and the partition starts at sector 64.

Many USB enclosures do not transparently pass the drive through, so yes.

25 minutes ago, segator said:

What I don't understand is this: I shucked a USB drive that was plugged into the unraid array, and now I get this partition layout issue.

That's one of the reasons we don't recommend using USB drives: some USB bridges don't pass the drive through unchanged.
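
If you want to rule out the bridge having presented the disk differently, it's worth comparing what the bare drive reports now against what it reported in the enclosure -- some USB bridges expose a different logical sector size (e.g. 4K instead of 512), which changes how the partition table is read. For example:

blockdev --getss /dev/sdX                       (logical sector size)
blockdev --getpbsz /dev/sdX                     (physical sector size)
cat /sys/block/sdX/queue/logical_block_size     (same as the first, via sysfs)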

 

23 minutes ago, segator said:

If I need to format them, I would need to restore 40TB from backup :(

You just need to back up that drive, then format and restore.


It's just that I don't have any other 20TB disk available to migrate the data to, and I didn't expect to have to buy another one :( HDDs are expensive in Europe.

I think a better solution would be to stop using unraid; I can get an "unraid-like" setup using SnapRAID + mergerFS.

That solution doesn't impose any "unraid requirements" on the partition layout, so it will just work.
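
Roughly what I have in mind (an untested sketch; mount points and disk names are placeholders):

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

# /etc/fstab line pooling the data disks with mergerfs
/mnt/disk* /mnt/storage fuse.mergerfs allow_other,category.create=mfs 0 0

with a periodic "snapraid sync" to keep parity up to date.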

 

Anyway thanks for trying to help!

 

 

 

