
[Solved] Hardware Upgrade - Now Too Many Wrong/Missing Disks



After moving all my disks and unRAID USB from a Dell R720 to a freshly built PC, I am now unable to start the array. Here is what I've done:

Built new PC, with 2 x H310 in IT mode

Tested with a "trial" unRAID USB OS; all worked: got the boot screen and it detected the spare drives I was testing with

Moved over existing disks

Swapped for "live" unRAID USB

No output to the screen on boot, but eventually managed to log in via the web interface

Now all the disks aren't recognised, see screenshot below

 

I haven't tried to fix this yet. I'm assuming the issue is because the drive serial numbers are different in the new server. This is what I think I might need to do to fix it, but I was looking for someone to confirm before I go and trash everything by thinking that I know what I'm doing 😉  The one thing I have learnt in IT is that I don't know everything, so don't be afraid to ask.

 

Go to TOOLS-> New Config 

Preserve Current Assignments "All" (You can see from the screenshot that I've reassigned the drives, but not 100% sure "all" is correct)

check "I want to do this"

then APPLY 

 

Alternatively, I was going to do this:

Go to TOOLS-> New Config 

Preserve Current Assignments "None"

check "I want to do this"

then APPLY 

 

Following that:

re-assign disks based on the screenshot

Start the array and check parity is already valid

Do parity check

 

Any advice would be appreciated.

 

Thanks

2022-07-04_17h31_16.png

5 minutes ago, Karl_C said:

I'm assuming the issue is because the drive serial numbers are different in the new server. This is what I think I might need to do to fix it, but I was looking for someone to confirm before I go and trash everything by thinking that I know what I'm doing

That's the correct way forward, but depending on the controller used in the Dell there might also be issues with the partitions, especially if it was a RAID controller. You can still do it, but if the disks don't mount due to an "invalid partition" error, don't do anything else and post the diagnostics.
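
If it comes to that, the diagnostics can be grabbed from Tools -> Diagnostics in the web GUI, or, assuming you have console/SSH access, by running:

diagnostics

which, if I remember right, drops a zip under /boot/logs on the flash drive that you can attach here.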

  • Thanks 1

Thanks JorgeB, all seems good. It was a little nerve-wracking, with all the warnings etc., and I still need to test the Docker images, but VERY happy so far.  Many thanks.

 

So for completeness I did the following:

 

Go to TOOLS-> New Config 

Preserve Current Assignments "All" (You can see from the screenshot that I've reassigned the drives, but not 100% sure "all" is correct)

check "I want to do this"

then APPLY 

 

Ticked the box that said the Parity was OK

Started the array; this took quite a bit of time, much longer than usual

 

Browsed to the shares from my PC, all look good.

Thanks again, especially for being so quick to answer.

 

  • Like 1
  • Karl_C changed the title to [Solved] Hardware Upgrade - Now Too Many Wrong/Missing Disks
  • 1 year later...

Stumbled across this thread and hoping that someone can help.

I've been running Unraid 6.12.4 and previous versions without issue, until today when I moved the disks and USB key to a new server with more HDD slots. I had been running a trial licence on the new prospective hardware for a while and decided to pull the plug and swap from one to the other.
I shut down the servers gracefully, took the HDDs from the previous production server and placed them into the corresponding slots in the new-to-me server.

On first boot, one of the disks, despite being in the right location, is reporting an incorrect size: it should be 4TB but is now reporting 2.2TB.

I've since added more disks to the previously empty slots, plus 2x Samsung Pro SSDs for future use, to relocate Plex data to and maybe use as cache.

What was once a 4-HDD machine is now 8 HDD + 2 SSD.

The array will not start, however, as there are too many "wrong" disks, and in the system log it's advising to unmount and run xfs_repair.
I'm usually a Windows guy and have been for a few decades, so I'm deflated and wanting to recover everything before I feel the wrath of the better half: all our Plex media, family holidays, videos/photos etc., plus some test VMs and Home Assistant.

Any help gratefully received. Thanks in advance.

Excerpt from the logs, which repeats itself:
Oct 4 20:28:06 Tower emhttpd: error: get_filesystem_status, 7329: Structure needs cleaning (117): scandir Structure needs cleaning
Oct 4 20:28:06 Tower kernel: XFS (md2p1): Metadata corruption detected at xfs_dinode_verify+0x131/0x732 [xfs], inode 0x11d3c71 dinode

Oct 4 20:28:06 Tower kernel: XFS (md2p1): Unmount and run xfs_repair

Oct 4 20:28:06 Tower kernel: XFS (md2p1): First 128 bytes of corrupted metadata buffer:
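
If I've understood the docs correctly, the check would be run from Maintenance mode with the no-modify flag first, something along the lines of (md2p1 taken from the log above):

xfs_repair -nv /dev/md2p1

but I haven't run anything yet, as I don't want to make things worse.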

tower-syslog-20231004-1950.zip

Screenshot 2023-10-04 205215.png

Tower_Main.pdf

Edited by frustratedtech71
add screen shot
12 hours ago, frustratedtech71 said:

On first boot, one of the disks, despite being in the right location, is reporting an incorrect size: it should be 4TB but is now reporting 2.2TB.

This suggests the disk is connected to an old controller that doesn't support disks larger than 2TiB. Did you do a new config to accept the new size? Also, why do you have a disabled disk now?
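
To confirm what the system is actually being handed, you can run something like this from the console, with sdX as a placeholder for the affected drive:

lsblk -b -d -o NAME,SIZE,MODEL /dev/sdX

If it reports roughly 2199023255552 bytes (the old 2^32 x 512-byte LBA limit, i.e. about 2.2TB), the limit is the controller, not the disk.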


Thanks for your message.
I did what the previous member had done with regard to the new config, yes.
The disabled disk was in the previous server, so for the time being I had intended to retain it to keep the configuration as near to the original as possible.

The motherboard has 6x SATA connections and a mezzanine card with a further 4 SATA connectors, which is an LSI; I can't remember which model but will check again. The PDF shows the original disk configuration and correct sizes. (The hardware is new to me but based on an Intel S3420GPLX motherboard.) I know it's not great, but beggars can't be choosers.

I believe the two SSDs and 2 more WD Enterprise 4TB drives are attached and currently being precleared (the bottom four on the screenshot).

1 hour ago, frustratedtech71 said:

I did what the previous member had done with regard to the new config, yes.

You should not have done that with a disk capacity being incorrect.

 

1 hour ago, frustratedtech71 said:

The disabled disk was in the previous server

But it got disabled in the new server, after the new config, right?

 

You have two 4TB disks being detected as 2.2TB; you need to connect those to a different controller, but since the new config was already done it will be a problem now, especially with a disabled disk.

 


So the new config was the wrong thing to do.

There is a single disk reporting incorrectly at 2.2TB rather than the full 4TB, not two, unless I'm missing something.

The disk was disabled prior to the move to the new hardware. I was just retaining the spindles for continuity.
Is there a way to correct the disk capacity and create a new configuration again?

The previous hardware was a 4x HDD server with no room for more capacity; Disk 2 was disabled and was allegedly being emulated. My apparently flawed logic was that the move to a chassis with more disk slots would allow the original disk config, then I could slowly introduce the new disks into the array and allow it to rebuild.

I assume the xfs_repair in the terminal isn't really going to help resolve anything until the disk capacity correction is done?

 

Edited by frustratedtech71
12 minutes ago, frustratedtech71 said:

There is a single disk reporting incorrectly at 2.2TB rather than the full 4TB, not two, unless I'm missing something.

There's also an unassigned disk with the same issue.

 

12 minutes ago, frustratedtech71 said:

The disk was disabled prior to the move to the new hardware.

Then something is missing; a new config would re-enable the disabled disk, unless you manually disabled it again.

 

13 minutes ago, frustratedtech71 said:

I assume the xfs_repair in the terminal isn't really going to help resolve anything until the disk capacity correction is done?

Correct.


Does Unraid include any utilities to correct the disk size issue being seen? 

The disk that was included in the array isn't connected to a different controller from the parity etc., so based on that it should be seen as 4TB.

The SSDs and 2 further 4TB HDDs are connected to the LSI controller.
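
If it helps, I can post the output of something like:

ls -l /dev/disk/by-path/

which, assuming the by-path links are populated, should show which controller each drive is actually hanging off.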

 

Would you recommend removing the additional disks that had previously not been part of the array and concentrating on somehow sorting the size issue? If so, what would your recommendation be? I'm fine with Windows, but not so much with this.

 
