
Potential data loss after failed upgrade


Solved by JorgeB



The first and biggest mistake I made when I started building my server many years ago was to buy a RAID card instead of an HBA. I guess I'm paying the price now. I've run into the same issue, with the same card, as this thread: https://forums.unraid.net/topic/127767-71605-unmountable-unsupported-partition-layout-of-drives-hooked-to-card-and-settings-change-to-hba/

 

 

I was doing a hardware upgrade, had nothing but trouble, and at one point I of course went into the settings and changed the card to HBA mode. Now all of my XFS drives on that controller show up as "Unmountable: Unsupported partition layout" or no file system.

 

I tried unassigning one drive and starting the array, but the emulated disk didn't mount. It said contents emulated, but "Unmountable: Unsupported partition layout".

 

Now I'm trying to check the file system, and I started in maintenance mode with disk 1 unassigned. Do I need to assign it to be able to run xfs_repair? I thought I was supposed to check the file system on the emulated disk, but I get this when I run xfs_repair -nv /dev/md1 from the CLI:

 

/dev/md1: No such file or directory

fatal error -- couldn't initialize XFS library
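
One thing worth checking (an assumption on my part, not confirmed in this thread): newer Unraid releases partition the md devices, so the emulated disk may be exposed as /dev/md1p1 rather than /dev/md1. Listing the devices shows which naming the running version uses:

ls /dev/md*
xfs_repair -nv /dev/md1p1   # only if md1p1 shows up in the listing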

 

If I stop the array and assign it, I get a warning about a data rebuild even with maintenance mode checked. I was afraid to do that, but logic says nothing should happen, since maintenance mode doesn't mount any disks, so no data rebuild would start.

 

Any advice? I have backups of all my important data, so it's only my shows and movies that would be lost if I can't recover the drives, and I can get those back.

 

Edit: I see in my syslog now that I got a call trace; maybe that's why I couldn't run xfs_repair. Will reboot.

Edit: No, still the same error. I'll have to continue this later today. Hopefully I'll get some help. :)

tower-diagnostics-20240522-0645.zip


The emulated disk mounted fine now, and it seems to have all the files. So I guess I just rebuild and do the same for the rest of the drives? One more question: I've ordered a proper HBA so I don't have this issue again. If/when I have everything back up and running, do you think I'll have the same problem when I disconnect all the drives from the RAID card and move them to the LSI card?

16 minutes ago, strike said:

So I guess just rebuild and do the same for the rest of the drives?

If it looks good, yes.
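
(For reference, assuming the standard Unraid rebuild procedure: stop the array, reassign the disk to its original slot, and start the array; the rebuild onto the physical disk starts automatically, and the emulated contents remain available while it runs.)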

 

16 minutes ago, strike said:

If/when I have everything back and up and running do you think I will have the same problem if I disconnect all the drives from the raid card and move them to the LSI card?

Possibly.

Just now, JorgeB said:

Possibly

Hmm, OK. I'll need to check if more files need to be backed up before I do that, then. Maybe I'll just try one disk first on my other Unraid licence and see if I can mount it with Unassigned Devices. I forgot about one thing: my btrfs cache pool is also unmountable, as it was on the same RAID card. Should I just run a scrub on the pool? There aren't many files I really need there, maybe my appdata, but I have a backup of that from this weekend, so I guess I could just recreate the pool right away. I'll save that for last. Thank you so much for all your help btw, much appreciated! :)

14 minutes ago, JorgeB said:

Scrub won't help if the pool doesn't mount; post the output of:

btrfs fi show

 

It just shows my other pool, which is not connected to the RAID card but to my motherboard:

root@Tower:~# btrfs fi show
Label: none  uuid: 958e914e-929c-4e2e-af21-a3b4af824726
        Total devices 2 FS bytes used 358.16GiB
        devid    1 size 931.51GiB used 521.01GiB path /dev/sde1
        devid    2 size 931.51GiB used 521.01GiB path /dev/sdq1

1 minute ago, JorgeB said:

Post also the output of:

 

fdisk -l /dev/sdx

 

For the other pool devices, the ones that don't mount.

root@Tower:/mnt# fdisk -l /dev/sdx
Disk /dev/sdx: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

root@Tower:/mnt# fdisk -l /dev/sdu
Disk /dev/sdu: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
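
Note that neither output shows a "Disklabel type" line or any partition entries: the partition tables are gone. For comparison, a disk with an intact table would end with something like this (hypothetical values):

Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdx1        2048 3907029167 3907027120  1.8T 83 Linux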


@JorgeB I was going to ask you if you have PayPal, then I saw your signature. Just sent you some beer money. :) You've been here all day helping out. You're a real hero, saving all of us Unraid users a lot of time, frustration, and maybe some tears.


Thanks for the beer money!

 

Regarding the pool devices, we can try to recover the partitions. It will only work if the controller is presenting the same sectors and/or the superblock was not wiped, and I believe both can change with those controllers when you switch modes, but it won't hurt to try. The idea is to recreate partition 1 at the same start sector it originally had, so the existing filesystem becomes visible again without touching the data. Type:

 

sfdisk /dev/sdx

 

then type 2048, hit Enter, and post the output of that.


Do I answer yes or no here? Also, I realized I had stopped the array and ran the command with it stopped; do I need to start it first?

 

>>> 2048
Created a new DOS disklabel with disk identifier 0x0170dbb8.
Created a new partition 1 of type 'Linux' and of size 1.8 TiB.
Partition #1 contains a btrfs signature.

Do you want to remove the signature? [Y]es/[N]o:
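
(The answer here has to be No: removing the signature would wipe the very btrfs superblock this recovery depends on. After declining, the new table still has to be committed; with util-linux's interactive sfdisk, the rest of the session should look roughly like this, sketch only:

Do you want to remove the signature? [Y]es/[N]o: n
>>> write
The partition table has been altered.

Typing quit at the prompt instead would exit without writing anything.)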


root@Tower:~# btrfs fi show
warning, device 4 is missing
Label: none  uuid: 958e914e-929c-4e2e-af21-a3b4af824726
        Total devices 2 FS bytes used 358.16GiB
        devid    1 size 931.51GiB used 521.01GiB path /dev/sde1
        devid    2 size 931.51GiB used 521.01GiB path /dev/sdq1

Label: none  uuid: 3323890d-8be4-4998-8991-c7ce43f3aefd
        Total devices 2 FS bytes used 871.74GiB
        devid    3 size 1.82TiB used 1.20TiB path /dev/sdx1
        *** Some devices missing

 

Do I need to run the same command on the other drive in the pool?


It was found, so now they both show up:

 

root@Tower:~# btrfs fi show
Label: none  uuid: 958e914e-929c-4e2e-af21-a3b4af824726
        Total devices 2 FS bytes used 358.16GiB
        devid    1 size 931.51GiB used 521.01GiB path /dev/sde1
        devid    2 size 931.51GiB used 521.01GiB path /dev/sdq1

Label: none  uuid: 3323890d-8be4-4998-8991-c7ce43f3aefd
        Total devices 2 FS bytes used 871.74GiB
        devid    3 size 1.82TiB used 1.20TiB path /dev/sdx1
        devid    4 size 1.82TiB used 1.20TiB path /dev/sdu1

 

So the next step is to start the array and see if they mount?
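
(If they do mount, a sensible follow-up is a scrub of the pool, which reads all data and metadata, verifies checksums, and repairs from the other copy where it can; mount point hypothetical:

btrfs scrub start -Bd /mnt/cache

-B keeps the scrub in the foreground and -d reports per-device statistics.)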
