bobe99 Posted September 14, 2019 (edited)

I had a working Unraid install on an i7-based server and attempted to migrate to a Xeon-based system (R720xd). I moved the existing drives and flash into the new server, created a new config based on the existing data layout, and attempted to start the array. The array started, but all 7 data drives are reporting "Unmountable: Unsupported partition layout".

I logged into the CLI and looked at each disk (some LUKS, most unencrypted). I was able to mount each partition by hand (e.g. mount /dev/sdc1 /mnt). I tried this with each disk and was able to successfully mount each drive. It appears that all my data is intact (whoo whoo). After unmounting, I looked at each disk with sgdisk -v and each disk looked normal, e.g.:

# sgdisk -v /dev/sdc
No problems found. 2166478 free sectors (1.0 GiB) available in 2 segments, the largest
of which is 2166448 (1.0 GiB) in size.

After a little forum reading, I tried a "new config" from scratch with no drives defined, then rebooted. After logging back in, all drives showed up in the Unassigned Devices section. Each disk was properly identified and, in each case, reported the FS as "xfs". I was able to mount each drive using the UD "mount" button without issue. I then reinstated my original disk config, rebooted, and started the array. Same mount error.

In looking at syslog.txt, the only abnormality I see is the title error:

Sep 14 10:58:32 unraid emhttpd: /mnt/disk1 mount error: Unsupported partition layout
Sep 14 10:58:32 unraid emhttpd: shcmd (502): umount /mnt/disk1
Sep 14 10:58:32 unraid root: umount: /mnt/disk1: not mounted.
Sep 14 10:58:32 unraid emhttpd: shcmd (502): exit status: 32
Sep 14 10:58:32 unraid emhttpd: shcmd (503): rmdir /mnt/disk1
Sep 14 10:58:32 unraid emhttpd: shcmd (504): mkdir -p /mnt/disk2
< etc. for all 7 drives >

This is really not a very helpful message. It would be nice if the entire failing mount command were logged.
If it were, I might be able to glean some insight into the root cause of this error. I then tried looking for info on emhttpd (hoping to find source code so I could figure out the mount command used), but was unsuccessful.

I should also say that xfs_repair is happy with each drive, e.g.:

# xfs_repair /dev/sdc1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

I'm hoping someone can tell me why Unraid is so unhappy with these drives. Maybe provide a "partition inspection command" to look closer? Maybe tell me what the actual mount command format would be? Remember, all disks were very happy in my old server.

As these drives are all almost full with data, re-formatting is not an option without first copying the data from each old "unsupported partition layout" drive to a new, properly formatted disk. I could do this, but it would mean my server is non-functional for about a week, as I would have to serially copy each large 5400 RPM drive at less than 200 MB/s.

Hoping someone can provide me the secret handshake for fixing the partition issue that Unraid is complaining about. Thanks (and fingers crossed, someone can help)!
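For anyone wanting a closer look than sgdisk -v gives: one thing worth inspecting by hand is the raw MBR, since the boot signature lives in the last two bytes of sector 0 (bytes 510-511, which must read 55 aa). The sketch below demonstrates the read on a throwaway image file rather than a real disk; on actual hardware you would point dd at the device (e.g. /dev/sdc) instead of the scratch file:

```shell
# Demonstrate checking the MBR boot signature (bytes 510-511 must be 55 aa).
# A scratch image file stands in for a real disk here; on a real system,
# read from the device (e.g. /dev/sdc) instead of "$img".
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# Write the 0x55AA signature where a partitioning tool would have put it.
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null
# Read the signature back as hex bytes.
dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1
rm -f "$img"
```

This prints the two signature bytes; anything other than 55 aa means the sector-0 boot signature is missing or damaged.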
unraid-diagnostics-20190914-1059.zip

Edited September 14, 2019 by bobe99 (typo)
Harro Posted September 14, 2019

I know you did the xfs_repair, but with which command? xfs_repair -L /dev/device? Or xfs_repair -n /dev/device?
bobe99 Posted September 14, 2019

I first did the -n flavor, which looked pretty benign, so then I did the no-option check (e.g. xfs_repair /dev/sdc1). I have not tried the -L option, as I thought I'd ask here first before performing the "last resort" (per the xfs_repair help message). I'm not sure what the implications of -L are.
Harro Posted September 14, 2019

-n is just a read-only check of the file system, but -L forces zeroing of the metadata log before repair. You may lose some metadata, but if it succeeds you should be able to mount your disk again.
bobe99 Posted September 14, 2019

I stopped the array and performed xfs_repair -L on two of the drives. The output from xfs_repair was uneventful. I restarted the array and got the same error on these drives (all 7 drives, for that matter). For grins, I rebooted and started the array again; same results.
JonathanM Posted September 14, 2019

Did you try setting the drive types to what they explicitly are instead of auto? The fact that some are encrypted and some not may not be auto-detected, and you may need to click on each drive slot and define the format type for each drive.
bobe99 Posted September 14, 2019

I went to the disk1 page and found the field "File system type" to be non-editable via the GUI. I edited /boot/configs/disk.cfg:diskFsType.1="xfs" (and .2), rebooted, and restarted the array. This time, the main page "FS" column reported xfs on the two disks, but unfortunately I had the same mount error.
JonathanM Posted September 14, 2019

36 minutes ago, bobe99 said:
"I went to the disk1 page and found the field "File system type" to be non-editable via the GUI."

This was with the array stopped?
bobe99 Posted September 14, 2019

Oops. Yeah, the array was running. I see that the GUI works fine in that regard when stopped. My bad; I should have known better (I'm a little stressed at the moment, if that's an excuse ;-). I've used the GUI to set xfs on a non-encrypted array and started the array ... same result.
JorgeB Posted September 15, 2019

15 hours ago, bobe99 said:
"Unmountable: Unsupported partition layout"

This means it's not a filesystem problem; it means the partition layout doesn't conform to Unraid's requirements. This happens mostly when moving from a RAID controller to a standard HBA, or vice versa.
segator Posted July 17, 2023

Could we know what the "Unraid requirements" are? I have the same problem, and if I use the mount command by hand, it works. Maybe we could fix the partition layout to match Unraid's requirements, but the logs are sparse.
JorgeB Posted July 17, 2023

4 hours ago, segator said:
"Could we know what the "Unraid requirements" are?"

The main one is that the partition starts on sector 64 (or 2048 for SSDs) and uses the rest of the device for that partition, though IIRC it also needs a specific MBR signature, so you'll most likely need to reformat.
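The start-sector part of that requirement can be verified without reformatting anything: in an MBR, the first partition entry begins at byte 446, and its 4-byte little-endian start-LBA field sits at bytes 454-457. A sketch, using a scratch image in place of a real /dev/sdX (on a little-endian machine such as x86, od -tu4 decodes the field directly):

```shell
# Forge a minimal MBR whose first partition entry says "start = sector 64",
# then read the start-LBA field back (bytes 454-457, little-endian).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# 0x40 = 64; write it into the start-LBA field of partition entry 1.
printf '\x40\x00\x00\x00' | dd of="$img" bs=1 seek=454 conv=notrunc 2>/dev/null
# On a real disk, replace "$img" with the device, e.g. /dev/sdc.
start=$(dd if="$img" bs=1 skip=454 count=4 2>/dev/null | od -An -tu4 | tr -d ' ')
echo "partition 1 starts at sector $start"
rm -f "$img"
```

fdisk -l or sfdisk -d on the device shows the same start sector more readably; the raw read is just a way to see exactly what is on disk, independent of any partitioning tool.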
segator Posted July 17, 2023

What I don't understand: I shucked a USB drive that was plugged into the Unraid array, and now I get this partition layout issue. What changed? Does the USB controller do something to the hard drive? I checked that the partition starts at sector 64.
itimpi Posted July 17, 2023

1 minute ago, segator said:
"Does the USB controller do something to the hard drive?"

Many USB enclosures do not transparently pass through the drive, so yes.
segator Posted July 17, 2023

Is there really nothing we can do? If I need to format them, I would need to restore 40 TB from backup. Unhappy. Is there any technical reason why Unraid is so picky about the partition layout? The disk works, and I can mount the partition by hand with mount -t xfs ...
JorgeB Posted July 17, 2023

25 minutes ago, segator said:
"I shucked a USB drive that was plugged into the Unraid array, and now I get this partition layout issue"

That's one of the reasons we don't recommend using USB drives: some USB bridges don't pass the drive through unchanged.

23 minutes ago, segator said:
"If I need to format them, I would need to restore 40 TB from backup"

You just need to back up that one drive, then format and restore it.
segator Posted July 17, 2023

It's just that I don't have any other 20 TB disk available to migrate the data to, and I didn't expect to have to buy another one; HDDs are expensive in Europe. I think a better solution would be to stop using Unraid. I can get an "unraid-like" setup using SnapRAID + mergerfs, which won't impose any partition layout requirements, so it will just work. Anyway, thanks for trying to help!