[SOLVED] Moving drives from non-HBA RAID card to HBA


jbrubaker


First post.  Be kind?

 

I've been running Unraid (currently 6.7.2) for about 4 months now.  My SuperMicro dual Xeon server has developed some issues, possibly related to a power outage or insufficient cooling.  At any rate, I'm putting together a new-from-used-parts machine (Ryzen 7 2700X, X370) and want to move the drives (and the Unraid USB) to the new machine.  Google (and this forum) tell me that Unraid should have no problem identifying the drives and placing them in the correct slot in the array when I first boot the new build.

 

However, I'm a little worried that it won't work correctly with my setup.  My SuperMicro is using a 9650SE-12ML RAID card which does not support IT mode.  The drives are set up in "single" mode in the card config, but the controller still sits in front of the disks, preventing "direct" access (e.g. it blocks SMART) and changing the drive identifier string.

 

Here's one of my disks on the RAID controller with the id string: 

Oct 29 20:56:25 Hyperion kernel: md: import disk0: (sdf) 9650SE-12M_DISK_D88RER0842FC52006B26_3600050e042fc52006b26000021e80000 size: 5859363788

 

I am putting an IT-mode RAID card in the new box (Dell H310, LSI 2008) to get SMART support.  I am pretty sure that after I swap the drives to the new system/controller and boot Unraid, the disk identifier for each disk is going to be different on the new controller.

 

Do I simply need to make a note of each drive's actual identifier and then manually match them up with the appropriate slots in Array Devices when I boot the new hardware with the old drives (see attachment)?  Am I making this too complicated?  Am I missing something else important?

 

Thanks for any guidance.

Annotation 2019-10-30 115013.png


You are correct to be concerned that Unraid will not recognize the disks since their identifier will change.

 

The most important thing is not to accidentally assign a data disk to the parity slot. As long as you don't do that, you should not experience any data loss.

 

If you have enough of the actual serial number to work with, that should be enough to let you reassign the drives correctly. The actual serial number is probably printed on each disk's label.
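If the old array is still accessible, the serial can also usually be pulled out of the identifier Unraid already shows, since the identifier is normally just the model and serial joined by an underscore. A minimal sketch, using an identifier in that format (the value shown is only an example):

```shell
# Unraid's device identifier is normally "<model>_<serial>".
# Example value only; on a live system you would read these from
# /dev/disk/by-id/ or the Main page rather than hard-coding one.
id="ST3000DM001-1ER166_Z501REW5"
serial="${id##*_}"   # strip everything up to and including the last underscore
echo "$serial"       # prints Z501REW5
```

Note that a RAID card's synthetic identifier (like the 9650SE one quoted above) doesn't follow this pattern, so for disks behind the old controller the printed label is still the fallback.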

 

When you first boot with the flash in the new system, Unraid will not attempt to start the array since the disk identifiers won't match what it expects. You just need to go to Tools - New Config, assign the disks, check the box saying parity is already valid, and start the array. A non-correcting parity check would be a good way to confirm that things are working well.

  • 1 month later...

Thank you both for your replies.  I attempted the disk move tonight, and as predicted, Unraid said the disks were missing when I booted and "Wrong" when I selected the appropriate disk.

 

I created a new config as recommended by @trurl, checked "Parity already valid", and started the array, then got the "Unmountable: Unsupported partition layout" on each of the disks (except parity) that had been connected to the previous RAID controller (now on the Dell H310).

 

When the system was booting, I believe I saw something flash by about a GPT sector not being at the end of the disk, or something like that.

 

I did some Googling and couldn't find many useful hints at what to try.

 

Any thoughts?

 

Screen Shot 2019-12-01 at 11.55.05 PM.png

Screen Shot 2019-12-01 at 11.55.11 PM.png

 

Edit:  Found this in the log:

"Alternate GPT header not at the end of the disk"

Dec  1 23:42:30 Hyperion kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec  1 23:42:30 Hyperion kernel: ata4.00: ATA-9: ST3000DM001-1ER166,             Z501REW5, CC25, max UDMA/133
Dec  1 23:42:30 Hyperion kernel: ata4.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec  1 23:42:30 Hyperion kernel: ata4.00: configured for UDMA/133
Dec  1 23:42:30 Hyperion kernel: scsi 4:0:0:0: Direct-Access     ATA      ST3000DM001-1ER1 CC25 PQ: 0 ANSI: 5
Dec  1 23:42:30 Hyperion kernel: sd 4:0:0:0: Attached scsi generic sg3 type 0
Dec  1 23:42:30 Hyperion kernel: sd 4:0:0:0: [sdd] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
Dec  1 23:42:30 Hyperion kernel: sd 4:0:0:0: [sdd] 4096-byte physical blocks
Dec  1 23:42:30 Hyperion kernel: sd 4:0:0:0: [sdd] Write Protect is off
Dec  1 23:42:30 Hyperion kernel: sd 4:0:0:0: [sdd] Mode Sense: 00 3a 00 00
Dec  1 23:42:30 Hyperion kernel: sd 4:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec  1 23:42:30 Hyperion kernel: mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x5000000080000000), phys(8)
Dec  1 23:42:30 Hyperion kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec  1 23:42:30 Hyperion kernel: GPT:5859352575 != 5860533167
Dec  1 23:42:30 Hyperion kernel: GPT:Alternate GPT header not at the end of the disk.
Dec  1 23:42:30 Hyperion kernel: GPT:5859352575 != 5860533167
Dec  1 23:42:30 Hyperion kernel: GPT: Use GNU Parted to correct GPT errors.
Dec  1 23:42:30 Hyperion kernel: sdd: sdd1
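For what it's worth, the numbers in that log are self-consistent with the old controller having reserved space at the end of each disk, so the backup GPT header was written where the 9650SE said the disk ended rather than at the true last sector. A quick sketch of the arithmetic (the interpretation is an assumption, but the numbers come straight from the log above):

```shell
# From the log: the disk has 5860533168 sectors, but the primary GPT header
# claims the backup header is at sector 5859352575 instead of the last sector.
disk_sectors=5860533168
backup_at=5859352575
last_sector=$((disk_sectors - 1))        # 5860533167, the "!=" value in the log
shortfall=$((last_sector - backup_at))   # space the old controller kept for itself
echo "$shortfall sectors, about $((shortfall * 512 / 1024 / 1024)) MiB"
# prints: 1180592 sectors, about 576 MiB
```

In principle `sgdisk -e /dev/sdX` (gdisk's "move backup GPT structures to end of disk" option) can relocate the backup header, but as the rest of the thread shows, rebuilding each disk also rewrites the partition so Unraid can mount it.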

 


As mentioned, it was a possibility. Since the partitions are outside parity, this can usually be fixed by rebuilding one disk at a time so Unraid recreates the partitions correctly. You can check by unassigning one of the data disks, starting the array, and confirming the emulated disk mounts and the data looks correct. If yes, you can rebuild on top, then repeat for the other disks one by one.
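The reason the emulated disk works at all: Unraid's single parity is a bitwise XOR across the data disks, so any one missing disk can be recomputed from parity plus the others. A toy illustration with hypothetical one-byte "disks":

```shell
# Two hypothetical one-byte "disks" and their parity byte.
d1=170; d2=51               # 0xAA and 0x33
parity=$((d1 ^ d2))         # what the parity disk stores for this position
rebuilt=$((parity ^ d2))    # emulating a missing d1 from parity + d2
echo "$rebuilt"             # prints 170, identical to the original d1
```

A rebuild just writes those recomputed values back to the physical disk, sector by sector, which is why it also recreates a partition layout Unraid understands.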

8 hours ago, johnnie.black said:

As mentioned, it was a possibility. Since the partitions are outside parity, this can usually be fixed by rebuilding one disk at a time so Unraid recreates the partitions correctly. You can check by unassigning one of the data disks, starting the array, and confirming the emulated disk mounts and the data looks correct. If yes, you can rebuild on top, then repeat for the other disks one by one.

Thanks for your help.

 

I stopped array, and mounted one of the array disks in Unassigned Devices.  All the data I checked was intact.  I unmounted it from Unassigned Devices, and with it unassigned from the array, I started the array.

 

The emulated disk mounted and as far as I can tell the data still looks good, so I think I can proceed to rebuilding the disk.  Running the first disk rebuild now.

  • 2 weeks later...

Just wanted to report back that after about 86 hours of disk rebuilding, everything is functional.

 

Followed the steps in my previous post for each disk, one at a time, and each one rebuilt correctly, even though many of the disks reported "unmountable" before they were rebuilt.

 

My last few disks were empty, but I elected to rebuild them anyway instead of formatting them, as I wasn't really sure whether formatting would affect the parity disk or not.

 

At any rate, thanks for the help!

 

Just added in the newest disk from Black Friday sales.  Onward and upward!

  • JorgeB changed the title to [SOLVED] Moving drives from non-HBA RAID card to HBA
  • 3 years later...

First of all, sorry for digging up this old thread, but this is exactly what is happening to me too.

 

I moved from a Supermicro blade (with a separate hotswap bay/RAID controller, I guess) to a new custom build, with SATA on the micro-ATX motherboard.

 

I am 100% sure which drive belongs in which "slot" of "Disk 1-3" (I used to have only 3 disks and an SSD cache).

I am sure about this because I used Unassigned Devices to check how much of each disk is used.

Old server:

telegram-cloud-photo-size-4-5969839237194956451-y.thumb.jpg.e43c3cfe0fc67cc19571c107d51bfc93.jpg

 

Unassigned Devices, mounted disks, with disk usages:

image.thumb.png.e86cd5e4e1104b11b97f28a83da4d913.png

 

New server after assigning disks to the correct Disk #
telegram-cloud-photo-size-4-5974271841012924510-y.thumb.jpg.647e7ff9b23feed9c34a9a58b10d4f36.jpg

 

I ran "New Config" on the "Array" disks after I set them up as shown in the screenshot above, which led to this config. (By the way, the cache disk was detected successfully beforehand, as seen in the screenshot above this line; nothing changed there. Maybe because I only ran "New Config" for "Array" devices?)

 

image.thumb.png.45fe4afb5961d152f51d7059c6bc3396.png

 

However, when I start the array, I get an error on each disk. "Unmountable: Unsupported partition layout"
 

What do I do next? I don't understand what needs to be done from the comments above saying "rebuilding one by one". What do I need to do to get it up again?
IMO it should be fine now, so why is it unmountable?

8 hours ago, eXorQue said:

why is it Unmountable?

This can happen when moving from RAID controllers; they sometimes don't pass through the complete partition.

 

Rebuilding one by one is an option, but it would have been better if you had checked the "parity is already valid" box.

 

The other option is to use UD to copy the data.


Thank you for the response.
I understand that it happens, which is fine-ish. It's explainable, so for me that is ok.

 

11 minutes ago, JorgeB said:

Rebuilding one by one is an option

 

What does this mean? Where do I need to do what?

 

11 minutes ago, JorgeB said:

but it would have been better if you checked parity was valid

 

There was no such option to check that parity was valid... Where should that have been?

40 minutes ago, eXorQue said:

What does this mean? Where do I need to do what?

Since parity is not valid you will need to let it sync first, then, from a few posts above:

On 12/2/2019 at 8:13 AM, JorgeB said:

As mentioned, it was a possibility. Since the partitions are outside parity, this can usually be fixed by rebuilding one disk at a time so Unraid recreates the partitions correctly. You can check by unassigning one of the data disks, starting the array, and confirming the emulated disk mounts and the data looks correct. If yes, you can rebuild on top, then repeat for the other disks one by one.

 

 

 

40 minutes ago, eXorQue said:

There was not such an option to check parity was valid...

There is after a new config: there's a "parity is already valid" checkbox next to the array start button.

On 11/22/2023 at 1:04 AM, eXorQue said:

What do I do next? I don't understand what needs to be done from the comments above saying "rebuilding one by one". What do I need to do to get it up again?

 

On 11/22/2023 at 9:29 AM, eXorQue said:
On 11/22/2023 at 9:27 AM, JorgeB said:

Rebuilding one by one is an option

 

What does this mean? Where do I need to do what?

 

On 11/22/2023 at 10:11 AM, JorgeB said:
  On 12/2/2019 at 9:13 AM, JorgeB said:

As mentioned, it was a possibility. Since the partitions are outside parity, this can usually be fixed by rebuilding one disk at a time so Unraid recreates the partitions correctly. You can check by unassigning one of the data disks, starting the array, and confirming the emulated disk mounts and the data looks correct. If yes, you can rebuild on top, then repeat for the other disks one by one.

 

Sorry for my ignorance. 

My question is still, how do I do this?

 

This is my current situation:

192_168.1.3_4433_Main.thumb.png.16a62ea97cc4385179a2880b2fe0ca57.png

Step-by-step (is this correct?):

  1. stop array
  2. set parity 1 to disk `(sde)`
  3. set disk 1 to disk `(sdd)`
  4. set disk 2 to disk `no device`
  5. set disk 3 to disk `no device`
  6. start array
  7. do parity check
  8. set disk 1 to disk `no device`
  9. set disk 2 to disk `(sdc)`
  10. start array
  11. do parity check
  12. set disk 2 to disk `no device`
  13. set disk 3 to disk `(sdb)`
  14. start array
  15. do parity check
  16. set disks 1,2,3 to resp sdd, sdc, sdb
  17. start array
  18. everything works?

Does this change when I want to move from the 2TB disks to 4TB disks (as I'm planning to replace all disks with five 4TB disks)?


Wouldn't I want to get parity working first? When I do this I get this message:

 

> Start will disable the missing disk and then bring the array on-line. Install a replacement disk as soon as possible.
> [ ] Yes, I want to do this

 

This sounds to me like it'll try to rebuild from 2 disks + parity, does that make sense?

image.thumb.png.45b16e6370e7a02c3e89556100175ed0.png


Parity check is completed.

 

So now I would take one disk out, as in, set the disk to "no device", and run diagnostics again?

 

 

Results:

Quote

Last check completed on Thursday, 23-11-2023, 20:02 (today)
Duration: 3 hours, 29 minutes, 25 seconds. Average speed: 159.2 MB/s
Finding 134558 errors

 

parity-check-finished.thumb.png.66c25566b45650e3bcb6bc4b8df0868f.png
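As a sanity check, the reported duration and average speed are consistent with a roughly 2 TB parity disk (matching the 2TB disks mentioned earlier), so the check really did cover the whole disk:

```shell
# 3 h 29 min 25 s at an average of 159.2 MB/s:
seconds=$((3 * 3600 + 29 * 60 + 25))   # 12565 s
mb=$((seconds * 1592 / 10))            # 159.2 MB/s kept as integer arithmetic
echo "$seconds s -> ~$mb MB"           # 12565 s -> ~2000348 MB, i.e. ~2 TB
```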

 

Added diagnostics as well (for the record, it says "supermicro" because that's my NAS name; my previous machine was a Supermicro).

 

 

supermicro-diagnostics-20231123-2038.zip

