Karl_C Posted July 4, 2022

After moving all my disks and unRAID USB from a Dell R720 to a freshly built PC, I am now unable to start the array. Here is what I've done:

- Built the new PC, with 2 x H310 in IT mode
- Tested with a "trial" unRAID USB OS; all worked, I got the boot screen, and it detected the spare drives I was testing with
- Moved over the existing disks
- Swapped in the "live" unRAID USB
- No output to screen on boot, though I eventually managed to log in via the web interface
- Now the disks aren't being recognised; see screenshot below

I haven't tried to fix this yet. I'm assuming the issue is because the drive serial numbers are different in the new server. This is what I think I might need to do to fix it, but I was looking for someone to confirm before I go and trash everything by thinking that I know what I'm doing 😉 The one thing I have learnt in IT is that I don't know everything, so don't be afraid to ask.

1. Go to TOOLS -> New Config
2. Preserve Current Assignments: "All" (you can see from the screenshot that I've reassigned the drives, but I'm not 100% sure "All" is correct)
3. Check "I want to do this", then APPLY

Alternatively, I was going to do this:

1. Go to TOOLS -> New Config
2. Preserve Current Assignments: "None"
3. Check "I want to do this", then APPLY
4. Following that, re-assign the disks based on the screenshot
5. Start the array and check parity is already valid
6. Do a parity check

Any advice would be appreciated. Thanks
JorgeB Posted July 4, 2022

5 minutes ago, Karl_C said:
I'm assuming the issue is because the drive serial numbers are different in the new server. This is what I think I might need to do to fix it, but I was looking for someone to confirm before I go and trash everything by thinking that I know what I'm doing

That's the correct way forward, but depending on the controller used with the Dell there might also be issues with the partitions, especially if it was a RAID controller. You can still do it, but if the disks don't mount due to an "invalid partition" error, don't do anything else and post the diagnostics.
Karl_C Posted July 4, 2022

Thanks JorgeB, all seems good. It was a little nerve-wracking with all the warnings etc., and I still need to test the Docker images, but VERY happy so far. Many thanks.

So for completeness, I did the following:

1. Went to TOOLS -> New Config
2. Preserve Current Assignments: "All"
3. Checked "I want to do this", then APPLY
4. Ticked the box that said the parity was OK
5. Started the array; this took quite a bit of time, much longer than usual
6. Browsed to the shares from my PC; all look good

Thanks again, especially for being so quick to answer.
frustratedtech71 Posted October 4, 2023

Stumbled across this thread and hoping that someone can help. I've been running Unraid 6.12.4 (and previous versions) without issue, until today when I moved the disks and USB key to a new server with more HDD slots. I had been running a trial licence on the new prospective hardware for a while and decided to pull the plug and swap from one to the other.

I shut down the servers gracefully, took the previous production server's HDDs and placed them into the corresponding slots on the new-to-me server. On first boot, one of the disks, despite being in the right location, is now reporting an incorrect size: it should be 4TB and is now reporting 2.2TB. I've since added more disks to the previously empty slots, plus 2 x Samsung Pro SSDs for future use, to relocate Plex data to and maybe cache. What was once a 4-HDD machine is now 8 HDDs + 2 SSDs.

The array will not start, however, as there are too many "wrong" disks, and the system log is advising to unmount and run xfs_repair. I'm usually a Windows guy and have been for a few decades, so I'm deflated and wanting to recover everything before I feel the wrath of the better half: all our Plex media, family holidays, videos/photos etc., plus some testing VMs and Home Assistant. Any help gratefully received. Thanks in advance.

Excerpt from logs, which repeats itself:

Oct 4 20:28:06 Tower emhttpd: error: get_filesystem_status, 7329: Structure needs cleaning (117): scandir Structure needs cleaning
Oct 4 20:28:06 Tower kernel: XFS (md2p1): Metadata corruption detected at xfs_dinode_verify+0x131/0x732 [xfs], inode 0x11d3c71 dinode
Oct 4 20:28:06 Tower kernel: XFS (md2p1): Unmount and run xfs_repair
Oct 4 20:28:06 Tower kernel: XFS (md2p1): First 128 bytes of corrupted metadata buffer:

tower-syslog-20231004-1950.zip
Tower_Main.pdf
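For anyone landing here with the same "Unmount and run xfs_repair" message: before repairing anything, a read-only check is the safe first step. A minimal sketch, assuming the affected device is md2p1 as in the log above (substitute your own disk number), run from the Unraid terminal with the array started in Maintenance mode:

```shell
# /dev/md2p1 matches the "XFS (md2p1)" lines in the log above -
# substitute your own disk number.
# -n = no-modify: report problems but change nothing on the disk.
DEV=/dev/md2p1
if [ -b "$DEV" ]; then
    xfs_repair -n "$DEV"
else
    echo "$DEV not found - run 'ls /dev/md*' to list the array devices"
fi
```

Only after reviewing the no-modify output (and with the underlying hardware problem resolved) would you re-run it without `-n` to actually write repairs.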
JorgeB Posted October 5, 2023

12 hours ago, frustratedtech71 said:
On first boot, one of the disks, despite being in the right location, is now reporting an incorrect size: it should be 4TB and is now reporting 2.2TB.

This suggests the disk is connected to an old controller that doesn't support disks larger than 2TiB. Did you do a new config to accept the new size? Also, why do you have a disabled disk now?
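For context on where that oddly specific 2.2TB figure comes from: controllers (or RAID firmware) with 32-bit LBA addressing can only address 2^32 sectors of 512 bytes, which works out to exactly 2TiB, i.e. roughly 2.2TB in decimal units. A quick arithmetic check:

```shell
# 32-bit sector addressing: at most 2^32 sectors of 512 bytes each
sectors=4294967296                           # 2^32
max_bytes=$(( sectors * 512 ))               # 2199023255552
echo "$(( max_bytes / 1000000000 )) GB"      # 2199 GB -> the ~2.2TB being shown
echo "$(( max_bytes / 1099511627776 )) TiB"  # exactly 2 TiB (1099511627776 = 1024^4)
```

So a 4TB drive behind such a controller will always be presented as 2.2TB, regardless of what the drive itself is capable of.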
frustratedtech71 Posted October 5, 2023

Thanks for your message. I did what the previous member had done regarding New Config, yes. The disabled disk was in the previous server, so for the time being I had intended to retain this to keep the configuration as near to original as possible.

The motherboard has 6 x SATA connections and a mezzanine card with a further 4 SATA connectors, which is an LSI; I can't remember which, but will check again. The PDF shows the original disk configuration and correct sizes. (The new hardware is new to me, but is based on an Intel S3420GPLX motherboard. I know, not great, but beggars can't be choosers.) I believe the two SSDs and two more WD Enterprise 4TB drives are attached and currently being precleared (the bottom four on the screenshot).
JorgeB Posted October 5, 2023

1 hour ago, frustratedtech71 said:
I did what the previous member had done regarding New Config, yes.

You should not have done that with a disk capacity being incorrect.

1 hour ago, frustratedtech71 said:
The disabled disk was in the previous server

But it got disabled in the new server, after the new config, right? You have two 4TB disks being detected as 2.2TB; you need to connect those to a different controller. But since the new config was done before, it will be a problem now, especially with a disabled disk.
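To work out which drives sit behind which controller (onboard SATA vs the LSI mezzanine) before moving cables around, the by-path symlinks are handy. A read-only sketch; it changes nothing:

```shell
# Each /dev/disk/by-path entry encodes the PCI address of the controller
# the disk hangs off, so drives on different controllers show different
# pci-... prefixes. Filtering out -part entries leaves whole disks only.
if [ -d /dev/disk/by-path ]; then
    ls -l /dev/disk/by-path/ | grep -v -- -part || true
else
    echo "no /dev/disk/by-path on this system"
fi
```

Comparing the pci-... prefixes against the slots you know lets you confirm exactly which disks are still on the capacity-limited controller.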
frustratedtech71 Posted October 5, 2023

So the new config was the wrong thing to do. There is a single disk reporting incorrectly at 2.2TB rather than the full 4TB, not two, unless I'm missing something. The disk was disabled prior to the move to the new hardware; I was just retaining the spindles for continuity. Is there a way to correct the disk capacity and create a new configuration again?

The previous hardware was a 4-HDD server with no room for expansion; Disk 2 was disabled and was allegedly being emulated. My apparently flawed logic suggested the move to a chassis with more disk slots would allow the original disk config, then slowly introducing the new disks into the array and allowing it to rebuild. I assume the xfs_repair in the terminal isn't really going to help resolve anything until the disk capacity correction is done?
JorgeB Posted October 5, 2023

12 minutes ago, frustratedtech71 said:
There is a single disk reporting incorrectly at 2.2TB rather than the full 4TB, not two, unless I'm missing something

There's also an unassigned disk with the same issue.

12 minutes ago, frustratedtech71 said:
The disk was disabled prior to the move to the new hardware.

Then something is missing; a new config would re-enable the disabled disk, unless you manually disabled it again.

13 minutes ago, frustratedtech71 said:
I assume the xfs_repair in the terminal isn't really going to help resolve anything until the disk capacity correction is done?

Correct.
frustratedtech71 Posted October 5, 2023

Does Unraid include any utilities to correct the disk size issue being seen? The disk that was included in the array isn't connected to a different controller from the parity etc., so based on that it should be seen as 4TB. The SSDs and 2 further 4TB HDDs are connected to the LSI controller.

Would you recommend removing the additional disks that had previously not been part of the array and concentrating on somehow sorting the size issue? If so, what would your recommendation be? I'm fine with Windows, but this, not so much.
JorgeB Posted October 6, 2023

12 hours ago, frustratedtech71 said:
Does Unraid include any utilities to correct the disk size issue being seen?

You just need to connect it to a controller that supports it. Post the diagnostics.
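Once a disk has been moved to a capable controller, you can confirm it is being presented at full size before touching the array configuration again. A hedged sketch with hypothetical device names (adjust to your own):

```shell
# -b = sizes in bytes, -d = whole disks only, -n = no header line.
# A 4TB drive should report roughly 4000787030016 bytes; a disk still
# showing 2199023255552 (2TiB) is still behind the limited controller.
for dev in /dev/sdb /dev/sdc; do    # hypothetical names - adjust to yours
    if [ -b "$dev" ]; then
        lsblk -b -d -n -o NAME,SIZE "$dev"
    else
        echo "$dev not present here"
    fi
done
```

`lsblk` reports what the kernel (and therefore the controller) presents, so it is a quick way to verify the fix worked without starting the array.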
frustratedtech71 Posted October 8, 2023

Thanks for the reply. I'll get one of those LSI 6Gb 16-port PCI-E cards in IT mode and try again. Ruddy annoying; I should have checked, really.