Snowman Posted August 29, 2022
Unraid version 6.10.3. I was attempting to add another 8TB SAS drive to this card in Unraid. I was having issues seeing the drive, so I did some cable moves: I hooked the cable for just this drive to one of the free ports on the card, and I looked at the card's settings from the Supermicro BIOS, where I could see the card. I don't recall changing it out of HBA mode, but I might have. Anyway, I now have all the drives saying "Unmountable: Unsupported partition layout", but parity is valid and fine. I'm trying the solution from another post right now: take one drive out, then put it back in and rebuild. It is rebuilding Disk 1 now. Not sure if this is the right thing to do, or if I should go in another direction?
(Edited August 30, 2022 by Snowman to add the Unraid version.)
Snowman Posted August 30, 2022
Now, during the rebuild of the first drive, it shows "Unmountable: Wrong or no file system" for Disk 1. Will the disk be OK after the rebuild?
JorgeB Posted August 30, 2022
Please post the diagnostics.
Snowman Posted August 30, 2022
This was from prior to starting the Disk 1 rebuild. snowtower-diagnostics-20220829-1529.zip
JorgeB Posted August 30, 2022
Please post them after the rebuild; before that it's all just invalid partitions, and I need to see how it looks afterwards.
Snowman Posted August 30, 2022
OK, Disk 1 rebuilt but still shows "Unmountable: Wrong or no file system", and the others still show "Unmountable: Unsupported partition layout". How do I fix this, and what are the steps? It wants me to format the drives that are not mountable, which I won't do. snowtower-diagnostics-20220830-0948.zip
JorgeB Posted August 30, 2022
Check the filesystem on disk1, but it doesn't look very good.
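For reference, the read-only check can be run from the terminal with the array started in maintenance mode; a minimal sketch, assuming Disk 1 maps to /dev/md1 as it does on Unraid 6.10 (always run checks and repairs against the md device so parity stays in sync):

```
# Array started in maintenance mode (Main -> Array Operation -> Start).
# -n = no modify: report problems without writing any changes.
xfs_repair -n /dev/md1
```

The same check can also be started from the GUI by clicking the disk on the Main page while in maintenance mode.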
Snowman Posted August 30, 2022
Check filesystem results, xfs_repair status:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
...found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode. Exiting now.
JorgeB Posted August 30, 2022
Run it again without -n, or nothing will be done.
Snowman Posted August 30, 2022
So from this I assume I should run it with -L?

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
...found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
Log inconsistent (didn't find previous header)
failed to find log head
zero_log: cannot find log head/tail (xlog_find_tail=5)
ERROR: The log head and/or tail cannot be discovered. Attempt to mount the filesystem to replay the log or use the -L option to destroy the log and attempt a repair.
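For reference, the sequence that error message points to looks roughly like this (device path is an example). Note that -L zeroes, i.e. destroys, the XFS journal, so any metadata updates still sitting in the log are lost, which is how files can end up in lost+found afterwards:

```
# WARNING: -L discards the journal; metadata changes that were only in
# the log are lost, and orphaned files may land in lost+found.
xfs_repair -L /dev/md1

# A follow-up plain run verifies the filesystem is now consistent.
xfs_repair /dev/md1
```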
Snowman Posted August 31, 2022
So I ran it with -L and then had to run a plain xfs_repair. It appears it put everything in lost+found. Do I have to move it back, or how does that work? And do I have to rebuild each disk by removing it from the array, adding it again to start the rebuild, and then running xfs_repair, or can I just run the repair? What are the correct steps?
JorgeB Posted August 31, 2022
3 hours ago, Snowman said: "Do I have to move it back, or how does that work?"
You'd need to manually move everything back.
3 hours ago, Snowman said: "Do I have to rebuild each disk by removing it from the array ... or can I just run the repair?"
You'd need to rebuild each disk one at a time, but the filesystem corruption that occurred with disk1 makes me think the HBA damaged more than just the MBR, so there might be some data loss. This is one of the reasons we don't recommend using RAID controllers with Unraid.
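A hedged sketch of what the manual move looks like; xfs_repair names orphaned entries by inode number, so the inode number and target path below are made-up examples:

```
# Orphaned files and directories are named by their inode number.
ls -l /mnt/disk1/lost+found/

# Peek inside a recovered directory to work out what it originally was...
ls /mnt/disk1/lost+found/12345678/

# ...then move it back to its proper location on the same disk.
mv /mnt/disk1/lost+found/12345678 /mnt/disk1/Movies
```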
JorgeB Posted August 31, 2022
Forgot to mention: you can see how the next disk would turn out by unassigning the disk and starting the array, letting Unraid emulate the disk. If it doesn't mount, check the filesystem on the emulated disk, and if you are happy with the results, then rebuild.
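The check works the same way on an emulated disk, since Unraid still presents the missing disk as an md device reconstructed from parity; a sketch, assuming Disk 3 is the unassigned one:

```
# With Disk 3 unassigned and the array started in maintenance mode,
# /dev/md3 is the emulated disk rebuilt on the fly from parity.
xfs_repair -n /dev/md3
```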
Snowman Posted August 31, 2022
OK, I will give it a go. Disk 2 is rebuilding now. I have a feeling a lot of this is going to end up in lost+found like Disk 1; the whole disk's contents are in lost+found on Disk 1. It's hard to put things back in place when the placeholder folders or anything else aren't there anymore. As for the HBA, it's hard to connect lots of drives without one, but I have seen that Adaptec cards might be the culprit. For my pool disks, do I do the same repair-and-rebuild process? They are in the same state.
JorgeB Posted September 1, 2022
12 hours ago, Snowman said: "hard to connect lots of drives without HBA but have seen Adaptec cards might be the culprit."
It's OK to use true HBAs, like the recommended LSI models. Pools cannot be recovered the same way; the best bet for those is a file recovery utility, like UFS Explorer.
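Besides UFS Explorer, btrfs itself ships a read-only recovery tool that can sometimes pull files off a damaged pool member; a sketch, where the source device and destination path are assumptions:

```
# btrfs restore copies files out without writing to the damaged device.
# The destination must be a separate, healthy filesystem with enough space.
btrfs restore -v /dev/sdc1 /mnt/recovery/

# If the primary superblock is gone, try a backup copy instead
# (-u 1 = the mirror at 64MiB, -u 2 = the mirror at 256GiB).
btrfs restore -u 1 -v /dev/sdc1 /mnt/recovery/
```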
Snowman Posted September 1, 2022
My Disk 3 xfs_repair goes nowhere; it's stuck on "attempting to find secondary superblock.... unable to verify superblock, continuing" forever. This is in -n mode. I'll let it run longer, but I don't think it will find a secondary superblock. What do I do from here if it doesn't get past Phase 1?
JorgeB Posted September 2, 2022
23 hours ago, JorgeB said: "best bet for those is a file recovery util, like UFS explorer."
This might also help for that.
Snowman Posted September 2, 2022
Can I replace Disk 3 with a new disk and rebuild, or will I lose data that way? I've already rebuilt the other disks in the array.
JorgeB Posted September 2, 2022
On 8/31/2022 at 9:19 AM, JorgeB said: "you can see how the next disk would turn out by unassigning the disk and starting the array and letting Unraid emulate the disk ... and if you are happy with the results then rebuild."
Whatever shows on the emulated disk is the same as what will be on the rebuilt disk, so you can check before rebuilding.
Snowman Posted September 2, 2022
The emulated Disk 3 doesn't mount or show anything; it still shows "Unmountable: Wrong or no file system".
JorgeB Posted September 2, 2022
Then rebuilding will have the same result.
Snowman Posted September 7, 2022
I have a couple of cache drives. One is a single cache drive showing "Unmountable: Unsupported partition layout" from my crash; the other is a pool of two drives that says "Unmountable: Invalid pool config". Can I recover the pool somehow with the two drives? I'm not sure why I got this error on the two-drive pool. The single drive I will likely pull and back up for UFS Explorer, but I'm thinking the two-drive pool might be related to another post, though I'm not sure.
Other update: I have backed up Disk 3 and will format it and move the data back using UFS Explorer.
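For the backup step, ddrescue is a common way to image a damaged disk before pointing recovery tools at it, since it retries reads and maps unreadable sectors; the device and paths here are examples:

```
# Clone the whole device to an image file; the mapfile records progress
# and bad sectors, so an interrupted copy can resume where it left off.
ddrescue -d /dev/sdd /mnt/disks/backup/disk3.img /mnt/disks/backup/disk3.map
```

UFS Explorer can then be pointed at the image file instead of the raw disk.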
JorgeB Posted September 7, 2022
Post current diags.
Snowman Posted September 7, 2022
FYI, I had just powered down to pull a cache drive, but I powered back up to get the diagnostics. snowtower-diagnostics-20220907-1041.zip
JorgeB Posted September 7, 2022
56 minutes ago, Snowman said: "The single drive I will likely pull and back up for UFS Explorer, but I'm thinking the two-drive pool might be related to another post, though I'm not sure."
No, it's not related at all. No valid btrfs filesystem is being detected on those devices at boot, which means one doesn't exist, because the devices were wiped or damaged; in this case they were likely damaged by the RAID controller, which probably damaged the MBR and destroyed the filesystem superblock that sits at the beginning of each device.
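A quick way to confirm that on any of the affected devices (device names below are examples):

```
# Show the partition table; a damaged or overwritten MBR typically
# shows no partitions or a nonsense layout here.
fdisk -l /dev/sdc

# Look for a btrfs signature; no output (or no btrfs TYPE from blkid)
# means no valid superblock was found on the partition.
btrfs filesystem show /dev/sdc1
blkid /dev/sdc1
```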