I had a working Unraid install on an i7-based server and attempted to migrate to a Xeon-based system (R720xd). I moved the existing drives and flash into the new server, created a new config based on the existing data layout, and attempted to start the array. The array started, but all 7 data drives are reporting "Unmountable: Unsupported partition layout".
I logged into the CLI and looked at each disk (some LUKS, most unencrypted). I was able to mount each partition by hand (e.g. mount /dev/sdc1 /mnt); I tried this with each disk and successfully mounted every one. It appears that all my data is intact (whoo whoo). After unmounting, I looked at each disk with sgdisk -v and each disk looked normal, e.g.:
# sgdisk -v /dev/sdc
No problems found. 2166478 free sectors (1.0 GiB) available in 2
segments, the largest of which is 2166448 (1.0 GiB) in size.
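Since sgdisk -v only validates the GPT structures and doesn't print the actual layout, here's the comparison I was planning to run next: dump partition 1's start sector for every disk, since other forum threads suggest Unraid is picky about that value (sector 63 for "MBR: unaligned", 64 for "MBR: 4K-aligned", if I've read them right; I'm not certain that's the trigger here). The device glob and the fdisk output format are assumptions:

```shell
# Print the start sector of partition 1, parsed from "fdisk -l" output.
# The output format (device in field 1, start sector in field 2) is an
# assumption based on util-linux fdisk.
part1_start() {
    # $1 = full "fdisk -l <disk>" output
    echo "$1" | awk '$1 ~ /1$/ && $2 ~ /^[0-9]+$/ { print $2; exit }'
}

# On the server (run as root; /dev/sd? glob is an assumption):
# for d in /dev/sd?; do
#     echo "$d: partition 1 starts at sector $(part1_start "$(fdisk -l "$d")")"
# done
```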
After a little forum reading, I tried a "new config" from scratch with no drives defined, then rebooted. After logging back in, all drives showed up in the Unassigned Devices section. Each disk was properly identified and in every case the FS was reported as "xfs". I was able to mount each drive using the UD "mount" button without issue.
I then reinstated my original disk config, rebooted, and started the array. The same mount error appeared.
Looking at syslog.txt, the only abnormality I see is the error from the title:
Sep 14 10:58:32 unraid emhttpd: /mnt/disk1 mount error: Unsupported partition layout
Sep 14 10:58:32 unraid emhttpd: shcmd (502): umount /mnt/disk1
Sep 14 10:58:32 unraid root: umount: /mnt/disk1: not mounted.
Sep 14 10:58:32 unraid emhttpd: shcmd (502): exit status: 32
Sep 14 10:58:32 unraid emhttpd: shcmd (503): rmdir /mnt/disk1
Sep 14 10:58:32 unraid emhttpd: shcmd (504): mkdir -p /mnt/disk2
< etc for all 7 drives >
This is really not a very helpful message. It would be nice if the entire failing mount command were logged; if it were, I might be able to glean some insight into the root cause of this error. I then tried looking for info on emhttpd (hoping to find source code so I could figure out the mount command used), but was unsuccessful.
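Since I couldn't find the source, one idea I'm considering (entirely at my own risk) is temporarily wrapping /sbin/mount with a logging shim so the exact command line emhttpd issues gets captured. Just a sketch; the paths are assumptions and the real binary has to be restored immediately afterward:

```shell
# Build a shim that logs its arguments to $log and then execs the real
# mount binary. All paths are assumptions about the Unraid layout.
make_mount_shim() {
    real=$1; shim=$2; log=$3
    cat > "$shim" <<EOF
#!/bin/bash
echo "mount \$@" >> "$log"
exec "$real" "\$@"
EOF
    chmod +x "$shim"
}

# On the server (risky; run as root, undo afterwards):
# mv /sbin/mount /sbin/mount.real
# make_mount_shim /sbin/mount.real /sbin/mount /tmp/mount.log
# ...start the array, read /tmp/mount.log, then restore:
# mv /sbin/mount.real /sbin/mount
```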
I should also say that xfs_repair is happy with each drive, e.g.:
# xfs_repair /dev/sdc1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
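For completeness, I also want to repeat that check across all the array partitions without risking any writes, using xfs_repair's no-modify mode (-n). A rough loop; the device list is an assumption (and the LUKS disks would need their /dev/mapper paths instead):

```shell
# Read-only sanity check across the array partitions via xfs_repair -n.
# The /dev/sd[c-j]1 device list below is an assumption.
check_xfs() {
    for part in "$@"; do
        # Skip anything that doesn't exist (e.g. an unmatched glob)
        [ -e "$part" ] || { echo "skip $part (not present)"; continue; }
        xfs_repair -n "$part" && echo "$part: clean"
    done
}

# On the server (array stopped, run as root):
# check_xfs /dev/sd[c-j]1
```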
I'm hoping someone can tell me why Unraid is so unhappy with these drives, perhaps by providing a "partition inspection command" to look closer, or by telling me what the actual mount command format would be. Remember, all these disks were perfectly happy in my old server.
As these drives are all almost full of data, reformatting is not an option without first copying the data from each old "unsupported partition layout" drive to a new, properly formatted disk. I could do this, but it would leave my server non-functional for about a week, since I would have to serially copy each large 5400 RPM drive at less than 200 MB/s.
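Back-of-the-envelope on that estimate, assuming roughly 8 TB per drive (a guess; my actual sizes vary) at ~150 MB/s sustained:

```shell
# Rough serial-copy estimate. 8 TB per drive and 150 MB/s sustained are
# assumptions, not measured numbers.
per_drive_hours=$(( 8 * 1000 * 1000 / 150 / 3600 ))  # MB / (MB/s) -> s -> h
total_days=$(( per_drive_hours * 7 / 24 ))
echo "~${per_drive_hours} h per drive, ~${total_days} days for 7 drives"
```

So around four days of pure copy time before any setup or verification overhead.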
Hoping someone can provide me the secret handshake for fixing the partition issue that unraid is complaining about.
Thanks (and fingers crossed, someone can help)!
unraid-diagnostics-20190914-1059.zip