71605 - "Unmountable: Unsupported partition layout" on drives attached to the card after changing its settings to HBA



Hey! Oh no! I'm so sorry to hear you're having issues with your 71605 card :(.

 

The key for me when using my Adaptec 72405 was to confirm whether the drive I was going to rebuild showed up normally when emulated by UnRaid, as per JorgeB's comment in this thread:

  

On 2/2/2019 at 2:39 AM, JorgeB said:

One of the reasons RAID controllers are not recommended, though I wouldn't expect it to write to the disks without you telling it to initialize them or similar.

 

Partition info is outside parity, so if you rebuild one disk at a time Unraid should recreate the partitions. To confirm: stop the array, unassign one of the unmountable disks, start the array, and check that the emulated disk mounts correctly and the data looks OK; it's also a good idea to check the filesystem. If all is well, rebuild. You can rebuild on top of the original disks, though using new ones would be safer in case something goes wrong. When the rebuild is done, repeat for the other disk.

 

In my case, the emulated drives showed up as expected, and I confirmed that the data appeared normally and without issue. During the process, I verified correct emulation of each drive and its contents before every single drive was rebuilt. Once the rebuild process was done, each drive appeared normally, no data was moved to lost+found, and no XFS repair was required.
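
For anyone following the same steps, the per-disk verification looked roughly like this from the console. Treat it as a reconstruction rather than a transcript, and note that disk 1 is just an example:

```
# With the array started and the suspect disk unassigned, the emulated disk
# should be mounted at /mnt/diskN:
ls /mnt/disk1        # data visible on the emulated disk?
df -h /mnt/disk1     # size and used space look sane?

# Then, with the array restarted in maintenance mode (nothing mounted),
# a read-only filesystem check:
xfs_repair -n /dev/md1       # newer Unraid releases use /dev/md1p1
```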

 

Also, I can confirm that it is continuing to perform normally in 6.10.3, and I have since added a 36-port Expander card as well as a 4-port NVMe card to my array.

 

Not that that information helps you much at this point 😢, but I just wanted you to know that 3 years later I am still running this card without issue.

 

If you plan to continue using your 71605 after you have recovered and have any questions about its config settings, let me know and I can take some screenshots of mine. It's not the same card, but they're from the same family, so my config settings might help you.

Link to comment

Wish it were so, but my rebuilds were not clean and left lost+found items. One disk won't emulate or mount, so I have to back it up in a separate computer and try to put the data back after a format. It's been a long road trying to recover the data correctly.

 

My cache drives/pools are the same mess, and some of the data is not recognizable when copied over and opened.

 

This was a major blow-up despite the UPS etc. Not sure if it was me, a power spike, trying to add a new drive that was not working, or something else that corrupted things, but man, not a fun one. Hard to pinpoint what happened.

 

For sure, I'll take those config screenshots for reference. I had no problems with my card up to this point.

Link to comment
On 9/1/2022 at 3:45 PM, Snowman said:

My disk 3 XFS repair goes nowhere, stuck on "attempting to find secondary superblock... unable to verify superblock, continuing" forever. This is in -n mode. I'll let it run longer, but I don't think it will find a secondary superblock. What do I do from here if it doesn't get past Phase 1?

Sorry to be asking this so late.

 

Were you trying to check the filesystem from the webUI or from the command line? It's easy to get the command wrong.
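
For reference, the usual slip on the command line is pointing xfs_repair at the raw sdX partition instead of the array's md device; a rough sketch (disk 3 and sdc here are just placeholders):

```
# Not this: going through the raw device bypasses the array layer, so any
# actual repair made this way is not reflected in parity.
xfs_repair -n /dev/sdc1

# This: run against the md device with the array started in maintenance mode
# (or simply use the filesystem check option on the disk's page in the webUI).
xfs_repair -n /dev/md3       # /dev/md3p1 on newer Unraid releases
```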

Link to comment
1 hour ago, Snowman said:

Wish it were so, but my rebuilds were not clean and left lost+found items. One disk won't emulate or mount, so I have to back it up in a separate computer and try to put the data back after a format. It's been a long road trying to recover the data correctly.

 

My cache drives/pools are the same mess, and some of the data is not recognizable when copied over and opened.

 

This was a major blow-up despite the UPS etc. Not sure if it was me, a power spike, trying to add a new drive that was not working, or something else that corrupted things, but man, not a fun one. Hard to pinpoint what happened.

 

For sure, I'll take those config screenshots for reference. I had no problems with my card up to this point.

Uggh :(. I feel your pain. I had a system crash 5 or 6 years ago where several drives were corrupted and could not be rebuilt from parity. It was a long, slow process to recover the missing data, but I eventually got almost everything back. Good luck in recovering your system and getting UnRaid back up and running!

I will reboot my system later on and get you some pics of my settings. No guarantees that they are what you should set, but they have been working solidly for the last 3 years and might at least point you in the right direction! :).

Link to comment
  • 9 months later...
On 8/30/2022 at 12:00 PM, JorgeB said:

Run it again without -n or nothing will be done.

I have the same issue as Snowman but with encrypted XFS drives. When running xfs_repair it does not find a verified secondary superblock. I think this has to do with the encryption, but I can't figure out how to decrypt the partition when there are no partitions displayed in /dev to decrypt.
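
For context, my understanding is that on an encrypted array the XFS filesystem lives inside a LUKS container, so the repair would normally be pointed at the unlocked mapper device, something like the sketch below (device names are guesses on my part), but no such device is showing up for me:

```
# What I expected to need (names are placeholders, not my actual devices):
cryptsetup luksOpen /dev/md3 md3_crypt    # unlock the LUKS container on the array device
xfs_repair -n /dev/mapper/md3_crypt       # then check the XFS filesystem inside it
```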

Link to comment
  • 5 months later...

Having been stung by this same issue recently, I've come to the conclusion that you should never "Submit" any changes in the "Controller Configuration" menu with disks attached on these cards. In my case, with an ASR-71605E running firmware 1.0.100.32118 through the maxView HII in the UEFI, pressing "Submit" with any change on that page causes the controller to immediately zero the first 64 KiB of every disk attached to it, regardless of state. Subsequent testing on my card confirmed that this behavior is repeatable. Disconnecting all disks before changing the configuration appears to be the only 'safe' option for my combination of card and motherboard.
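
If you want to check whether a disk has been hit by this, the damage is easy to spot; dumping the start of the device (sdX is a placeholder) shows nothing but zeros where the partition table and filesystem headers should be:

```
# Read the first 64 KiB; on an affected disk hexdump collapses it to a single
# all-zero run ("*") instead of showing a partition table.
dd if=/dev/sdX bs=64K count=1 2>/dev/null | hexdump -C | head
```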

 

Fortunately I managed to recover from this. The BTRFS filesystems were unaffected, as they start far enough into the disk that simply regenerating the partition table brought them back online. The XFS drives were a little more complicated, as metadata was lost within that 64 KiB; however, the AG0 root directory inode was intact, and after checking the offsets of the other AG superblocks with hexdump and dd I was confident this was repairable. I was a little spooked reading the xfs_repair man page, as the BUGS section states "The no-modify mode (-n option) is not completely accurate.", so I avoided running xfs_repair until I knew for sure what was going to happen.
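
For anyone in the same position, the surviving AG superblocks are easy to locate because each one starts with the magic bytes "XFSB"; the check amounted to something like this (device name and scan size are illustrative):

```
# Each allocation group begins with a 512-byte superblock whose first four
# bytes are "XFSB". Scanning the start of the partition for that magic shows
# whether the secondary superblocks survived; the hexdump offsets should line
# up with multiples of agblocks * blocksize. (Matches inside file data are
# possible, so treat stray hits with suspicion.)
dd if=/dev/sdX1 bs=1M count=4096 2>/dev/null | hexdump -C | grep 'XFSB'
```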

 

I prototyped a replica scenario using a loopback image to see what would happen if an XFS partition had everything in AG0 up to the first inode wiped out. After breaking my image, my gut told me that xfs_repair was probably designed with disk corruption in mind rather than disk erasure, so I wrote a 'fake' AG0 superblock, based on one of the other AGs and carrying the correct root directory inode offset, into the correct location in the image. xfs_repair then brought the test filesystem back online: it correctly identified that AG0 was present but corrupted, and the information needed to find the other AGs and the root inode was enough for it to repair the rest. The same approach carried through to my real disk images.

Ultimately only ~200 files ended up in lost+found, and based on previous knowledge of the array and backups elsewhere they were re-identified despite being known only by their inode numbers. I consider myself extremely lucky in this scenario. Even though the filesystems were repaired, I no longer consider them trustworthy, and the process of reformatting and repopulating the disks is ongoing. It goes without saying that the parity drive was useless in this exercise, both because multiple drives failed at once and because it was subject to the same 64 KiB erasure.
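
For anyone curious, the loopback prototype went roughly like this; the image size, AG geometry and offsets below are illustrative, not the exact values from my disks:

```
# Build a throwaway XFS image to experiment on.
truncate -s 4G /tmp/xfs_test.img
mkfs.xfs -f /tmp/xfs_test.img

# Note the geometry: secondary superblocks sit at agblocks * blocksize
# intervals from the start of the filesystem.
xfs_db -f -c 'sb 0' -c 'p agblocks blocksize' /tmp/xfs_test.img

# Simulate the controller zeroing the first 64 KiB (taking out AG0's superblock).
dd if=/dev/zero of=/tmp/xfs_test.img bs=64K count=1 conv=notrunc

# Copy the AG1 superblock back over the AG0 location; with the default 4 AG /
# 4 KiB block layout on a 4 GiB image that is 262144 * 4096 bytes in.
dd if=/tmp/xfs_test.img of=/tmp/xfs_test.img bs=512 count=1 \
   skip=$((262144 * 4096 / 512)) conv=notrunc
# (xfs_db -x can then patch individual fields such as rootino if they differ.)

# With a plausible primary superblock in place, xfs_repair can find the rest.
xfs_repair -f /tmp/xfs_test.img
```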

 

Regardless of fault or feature, I've learned a lot from this and hope my experience helps other users of these cards.

 

 

Link to comment
