Problem moving to new server with the same hard drives


Solved by apandey

I am currently moving my Unraid server from a Dell T610 to a consumer PC I built, but I am having issues with the hard drives. Because the Dell T610 had a PERC H700 RAID controller, the names of the hard drives were all over the place and weren't the IDs of the actual drives; instead they were the IDs of the RAID pools, of which I made a RAID 0 for each individual drive. You can see this in the image below.

 

I am only taking the 10TB drives with me; the parity drive is also a 10TB Seagate IronWolf. (There is also an additional 10TB drive which I have put in and transferred all the data from the old 3TB drives onto; this screenshot was taken before that.)

 

I have moved the drives over to the new server with the same USB boot drive, but since I am now plugging the drives straight into the motherboard, their serial numbers can be used as the IDs, and Unraid was confused. So I created a new config to put the drives in order, using the same parity drive to make sure I keep my data intact. However, I now get "Unmountable: unsupported partition layout" on the drives. They have been confirmed to work on Unraid before with no issues, so I am confused as to why they won't mount. I have tried an xfs_repair through the GUI as per Unraid's guide and it did something, but it didn't fix it. I can also mount the drives through Unassigned Devices and can see all my data is there.
 

I was thinking of formatting each drive individually and moving the data around, but one move of the data will take two days at best, so that is a last-resort scenario. I don't want to have to download >10TB worth of movies and TV shows again.

 

My diags are attached as well.

 

If anyone can help me that would be greatly appreciated :)

Arrary Devices - Unraid.png

Old Setup on Dell T610

 

Mounted Disks.png

Drives can be mounted through unassigned devices and data is intact

 

Unmountable unsupported partition layout.png

Error I am having

 

manticore-diagnostics-20230422-0944.zip

 

 

Edit: I just tried another xfs_repair on one of the drives and it says it has a bad primary superblock; it is now trying to find a secondary superblock.

 

Edit: I got this during the xfs_repair: "..found candidate secondary superblock... unable to verify superblock, continuing...". Not sure if this is normal or not

Edited by DizzieNight
added extra info
  • Solution

Since you were using RAID 0 via a RAID controller, there would be RAID-identifying tags and data on the drive which would offset the actual filesystem. This is why the drive isn't seen as an XFS-formatted drive. I hope you did not corrupt it further by assuming it's XFS and repairing it.
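One way to check this theory non-destructively is to scan the start of a drive for the XFS superblock magic bytes ("XFSB"), which sit at the very beginning of the filesystem. A non-zero result would confirm the RAID metadata has shifted the filesystem start. This is only a sketch; `scan_xfs_offset` is a hypothetical helper and `/dev/sdX` is a placeholder for your actual disk.

```shell
# Hypothetical helper: look for the XFS superblock magic "XFSB" in the
# first 10 MiB of a disk (or image), stepping in 4 KiB increments.
# Prints the byte offset where the filesystem actually starts.
scan_xfs_offset() {
  dev="$1"
  off=0
  while [ "$off" -le $((10 * 1024 * 1024)) ]; do
    # Read 4 bytes at the current offset; tr strips NUL bytes so the
    # comparison below works cleanly in a command substitution.
    magic=$(dd if="$dev" bs=1 skip="$off" count=4 2>/dev/null | tr -d '\0')
    if [ "$magic" = "XFSB" ]; then
      echo "$off"
      return 0
    fi
    off=$((off + 4096))
  done
  return 1
}
```

If it reports a non-zero offset, you could try loop-mounting the filesystem read-only at that offset to inspect the data without touching it, e.g. `mount -o ro,loop,offset=$(scan_xfs_offset /dev/sdX) /dev/sdX /mnt/recover` — again, only as a read-only check, not a fix.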

 

Unraid expects to be working with a passthrough controller that gives it direct control of the disks.

 

The safest option is to start with freshly formatted drives in the new server's array and copy over the data, even if it takes time. You can do it disk by disk, slowly growing the target array, or all at once if you have backups to copy from.
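For a multi-day copy like this, a resumable tool such as rsync is worth using so an interruption doesn't mean starting over. A minimal sketch, assuming the old drive is mounted read-only via Unassigned Devices and the freshly formatted array disk is mounted normally (both paths below are placeholders):

```shell
# Hypothetical helper: copy one old disk's contents onto a freshly
# formatted array disk, preserving metadata and allowing resume.
copy_disk() {
  src="$1"   # e.g. /mnt/disks/old_disk1 (Unassigned Devices mount)
  dst="$2"   # e.g. /mnt/disk1 (new array disk)
  # -a preserves permissions/timestamps, -H keeps hard links,
  # --partial lets an interrupted copy resume where it left off.
  rsync -aH --partial "$src/" "$dst/"
}
```

Usage would be one invocation per disk, e.g. `copy_disk /mnt/disks/old_disk1 /mnt/disk1`, repeated as you grow the target array.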

 

18 minutes ago, apandey said:

Since you were using RAID 0 via a RAID controller, there would be RAID-identifying tags and data on the drive which would offset the actual filesystem. This is why the drive isn't seen as an XFS-formatted drive. I hope you did not corrupt it further by assuming it's XFS and repairing it.

 

Unraid expects to be working with a passthrough controller that gives it direct control of the disks.

 

The safest option is to start with freshly formatted drives in the new server's array and copy over the data, even if it takes time. You can do it disk by disk, slowly growing the target array, or all at once if you have backups to copy from.

 

Okay, I see, so I'll have to go with the workaround then. Shame I have to do that, but I get it. Thanks for the help, mate.


It is sometimes possible, with valid parity, to emulate and rebuild one drive at a time if the only issue is the partition offset. If the drives are up and reading properly in the RAID machine, and parity is valid, you could try moving ALL the array drives to the new machine, keeping the same assignments with a new config. Then, after Unraid shows all green balls, you could pull one drive and see if the emulated drive mounts properly; if it does, rebuild to a physical drive.

 

However, this assumes you are using all the same drives, which doesn't seem to be the case here, so your safest option is copying over the network from one server to the other.
