Bad primary superblock on Unassigned device



Hello,

I had to shut down my server today to replace a part, and after I restarted I found that my Unassigned device would no longer mount. I stopped the array, started it back up in maintenance mode, and attempted to run xfs_repair /dev/sdb1, which gave:

 

xfs_repair /dev/sdb1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...

 

and then I tried 

xfs_repair -L /dev/sdb1

and got the same message. After the 'continuing...' lines, it would just print pages and pages of dots.

 

Any ideas on how to fix this? Or should I just reformat the drive and start over?
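For anyone who hits this later, a minimal sketch of a safer order of operations before reaching for -L (device name taken from above; adjust to your system). The -n flag is a read-only dry run, and -L discards unreplayed log entries, so it is a last resort:

# Dry run: report problems without writing anything to the disk
xfs_repair -n /dev/sdb1

# If xfs_repair complains about a dirty log, mounting and cleanly
# unmounting the filesystem replays the log without data loss
mkdir -p /tmp/fix
mount /dev/sdb1 /tmp/fix && umount /tmp/fix

# Only if the mount fails: -L zeroes the log, which can lose the
# most recent metadata changes
xfs_repair -L /dev/sdb1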

skynet-diagnostics-20200803-2234.zip


I just upgraded the fans for the hard drives; I went nowhere near the drive itself. If I look at the drive through the terminal, I can see all the content. I had just finished cleaning up my Plex content and was going to back up the drive, but now it appears I have to start all over again. Thanks.


I had the same issue last week 😞

 

I managed to repair the filesystem with TestDisk on a Windows machine using a USB caddy. YMMV.

 

I got the data off, then started again. Be warned: TestDisk can make any recovery impossible if you select the wrong options!

 

TestDisk can be difficult to use, so good luck. Better GUI-based tools may exist, but I've always stuck with TestDisk personally: I once had a Linux LVM RAID array die, and it was the only thing that could even read it, let alone restore the filesystem.
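For reference, TestDisk is normally driven through its menus, but two invocations are worth knowing (assuming /dev/sdb is the affected disk; /list is read-only, and /log keeps an audit trail you can review afterwards):

testdisk /list /dev/sdb    # read-only: print the partition tables TestDisk can detect
testdisk /log /dev/sdb     # start the interactive session, writing testdisk.log as you go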

 

Dave

• 2 years later...

Hi,

While researching my issue I discovered this thread. The difference between my starting point and the original poster's is that I had a power outage while I was not at home. I do have a battery backup, but I do not know whether Unraid shut itself down automatically when it switched over to battery; if not, then my Unraid server experienced a hard power loss.

Once power was restored, I started the server up and everything seemed to come back as expected, except for one Unassigned Device. This device is critical: it contains my docker containers and VM img file. The problem is that Unraid will not recognize the device as mountable. After a lot of research I found out about xfs_repair. I have tried running it, but as the original poster mentioned, here is my outcome; the 'continuing' dots go on forever. It's a 960 GB drive, so I thought it might take a while to go through the entire drive, but I am not sure. I also do not know the best way to fix the issue. Any advice is greatly appreciated.

 

xfs_repair -v /dev/sdo
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...

 

I also found this website https://serveradminz.com/blog/bad-magic-number-in-superblock/ but I am not sure whether I should follow its recommendations, since my drive is XFS.
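If that page's advice is built around e2fsck (the classic source of the 'bad magic number in super-block' message), it only applies to ext2/3/4 filesystems. A read-only way to confirm what a device actually holds before pointing any repair tool at it (blkid is part of util-linux and is present on Unraid):

blkid /dev/sdo1
# e.g.  /dev/sdo1: UUID="..." TYPE="xfs"   (illustrative output)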

 

10 minutes ago, itimpi said:

The command you used is wrong as you need to add the partition number (/dev/sdo1) if using the sd device.
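The distinction matters because the XFS superblock sits at the start of the partition, not the start of the disk; pointing xfs_repair at the whole device makes it scan the entire drive hunting for secondary superblocks, which is exactly the endless dots seen above. A quick way to see the layout (output below is illustrative for a drive like this one):

lsblk -o NAME,SIZE,TYPE /dev/sdo
# NAME     SIZE   TYPE
# sdo    894.3G   disk   <- whole device: no filesystem starts here
# └─sdo1 894.3G   part   <- the partition xfs_repair should be given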

Thanks, running that now. Here are the results.

xfs_repair -v /dev/sdo1
Phase 1 - find and verify superblock...
        - block cache size set to 3079680 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 606090 tail block 606090
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
clearing reflink flag on inode 1679107051
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Tue Oct 11 12:14:40 2022

Phase           Start           End             Duration
Phase 1:        10/11 12:14:23  10/11 12:14:23
Phase 2:        10/11 12:14:23  10/11 12:14:23
Phase 3:        10/11 12:14:23  10/11 12:14:32  9 seconds
Phase 4:        10/11 12:14:32  10/11 12:14:33  1 second
Phase 5:        10/11 12:14:33  10/11 12:14:33
Phase 6:        10/11 12:14:33  10/11 12:14:40  7 seconds
Phase 7:        10/11 12:14:40  10/11 12:14:40

Total run time: 17 seconds
done

 

3 minutes ago, itimpi said:

That looks good - if you now stop the array and start it in normal mode the disk should now mount.

 

You should now look to see if you have a lost+found folder on the disk, as that is where the repair process puts any files/folders for which it could not recover the directory entry that gives the name.
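A quick way to check once the disk is mounted, assuming UD's usual mount location under /mnt/disks (MYDISK below is a stand-in for your actual mount point):

ls /mnt/disks/MYDISK/lost+found
# entries in lost+found are named by inode number; a "No such file or
# directory" error simply means xfs_repair had nothing to orphan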

I am a little confused. This drive is an unassigned drive, so I do not follow how stopping and restarting the array would make any difference. Please forgive my ignorance.

1 minute ago, JorgeB said:

It won't.

 

P.S.: there's an option in UD to run xfs_repair, no need to use the CLI.

Oh, thank you.

 

So, for safety and completeness, I stopped the array and rebooted the server, then started the array as usual. The disk did not auto-mount; however, I was able to mount it manually. SUCCESS!
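(For anyone repeating these steps: the manual mount is the Mount button next to the device on the Main page, or the equivalent from a terminal. MYDISK is again a stand-in for your actual mount point:)

mkdir -p /mnt/disks/MYDISK
mount -t xfs /dev/sdo1 /mnt/disks/MYDISK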

 

Thanks for pointing out my errors and helping me get on the right track.

