craignan · Posted August 4, 2020

Hello, I had to shut down my server today to replace a part, and after I restarted I found that my Unassigned Device would no longer mount. I stopped the array, started it back up in maintenance mode, and attempted an xfs_repair /dev/sdb1, which gave:

xfs_repair /dev/sdb1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...

I then tried xfs_repair -L /dev/sdb1 and got the same message. After the 'continuing', it would just show pages and pages of dots. Any ideas on how to fix this? Should the drive just be reformatted to start again?

skynet-diagnostics-20200803-2234.zip
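For anyone landing here with the same error: before escalating, it may be worth double-checking that xfs_repair is pointed at the right node and seeing what it would do without writing anything. A minimal sketch, assuming the disk shows up as /dev/sdb (device names are examples; substitute your own):

# Show the disk, its partitions, and any filesystems detected on them.
# An XFS filesystem usually lives on a partition (/dev/sdb1), not on
# the bare disk device (/dev/sdb).
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdb

# Dry run: report problems without modifying the filesystem.
xfs_repair -n /dev/sdb1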
JorgeB · Posted August 4, 2020

1 hour ago, craignan said: "Should the drive just be reformatted and start again?"

Most likely, though it's strange for the superblock to get damaged out of the blue. What part was replaced on the server?
craignan · Posted August 4, 2020

I just upgraded the fans for the hard drives and went nowhere near the drive itself. If I look at the drive through the terminal, I can see all the content. I had just finished cleaning up my Plex content and was going to back up the drive, but now it appears I have to start all over again. Thanks.
Flubster · Posted August 5, 2020

I had the same issue last week 😞 I managed to correct the filesystem with TestDisk on a Windows machine using a USB caddy. YMMV. I got the data off, then started again. Be warned: TestDisk can make any recovery impossible if you select the wrong options, and it can be difficult to use. Some better GUI-based tools may exist, but I've always used TestDisk personally; it was the only thing that could read, let alone restore, a Linux LVM RAID array of mine that died. So good luck.

Dave
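For anyone considering this route: TestDisk is interactive rather than scriptable, so a sketch can only show a cautious way to start it (device and file names are examples):

# Keep a session log so every action can be reviewed afterwards.
testdisk /log /dev/sdb

# Safer still: image the failing disk first and run recovery against the
# copy, so a wrong TestDisk option cannot make things worse on the original.
ddrescue /dev/sdb disk.img disk.map   # GNU ddrescue
testdisk /log disk.img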
craignan · Posted August 17, 2020

Thanks Flubster, I just formatted the drive and started all over again. I'll take a look at TestDisk in case it happens again.
CrookedAutobot · Posted October 11, 2022

Hi, in researching my issue I discovered this thread. The difference between my situation and the original poster's is that I had a power outage while I was not at home. I do have a battery backup, but I do not know whether Unraid shut itself down automatically when it switched over to battery; if not, my Unraid server experienced a hard power loss. Once power was restored, I started the server up and everything seemed to come back as expected. Everything except one Unassigned Device. This device is critical: it contains my Docker containers and VM img file. The problem is that Unraid will not recognize the device as mountable. After a lot of research, I found out about xfs_repair. I have tried running it, but as with the original poster, here is my outcome, and the "continuing" dots go on forever. It's a 960 GB drive, so I thought it might take a while to work through the entire disk, but I am not sure. I also do not know the best way to fix the issue. Any advice is greatly appreciated.

xfs_repair -v /dev/sdo
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...

I also found this website, https://serveradminz.com/blog/bad-magic-number-in-superblock/, and I am not sure whether I should follow its recommendations, since my drive is XFS.
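On that linked article: if its steps are ext-oriented (restoring from a backup superblock with e2fsck -b), they do not apply to XFS; xfs_repair already searches for secondary superblocks on its own. A quick way to confirm what is actually on the partition before picking a tool (device name is an example):

# Report the filesystem type and UUID as recorded on the partition.
blkid /dev/sdo1

# Second opinion: read the superblock signature directly from the device.
file -s /dev/sdo1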
trurl · Posted October 11, 2022

Attach diagnostics to your NEXT post in this thread.
CrookedAutobot · Posted October 11, 2022

Please find the diagnostics file attached. Thank you.

cybertron-diagnostics-20221011-1151.zip
itimpi · Posted October 11, 2022

The command you used is wrong: you need to add the partition number (/dev/sdo1) when using the sd device.
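A quick way to see which node carries the filesystem, assuming the disk shows up as /dev/sdo (a sketch; device names will differ per system):

# The bare device (/dev/sdo) is the whole disk, including the partition
# table; the XFS filesystem starts at the first partition.
fdisk -l /dev/sdo

# Point xfs_repair at the partition, not the whole disk.
xfs_repair -v /dev/sdo1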
CrookedAutobot · Posted October 11, 2022

xfs_repair did finally finish:

.Sorry, could not find valid secondary superblock
Exiting now.
itimpi · Posted October 11, 2022

Just now, CrookedAutobot said: "xfs_repair did finally finish."

That is expected, since the command you used was not quite correct.
CrookedAutobot · Posted October 11, 2022

10 minutes ago, itimpi said: "The command you used is wrong: you need to add the partition number (/dev/sdo1) when using the sd device."

Thanks, running that now. Here are the results:

xfs_repair -v /dev/sdo1
Phase 1 - find and verify superblock...
        - block cache size set to 3079680 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 606090 tail block 606090
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
clearing reflink flag on inode 1679107051
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Tue Oct 11 12:14:40 2022

Phase           Start           End             Duration
Phase 1:        10/11 12:14:23  10/11 12:14:23
Phase 2:        10/11 12:14:23  10/11 12:14:23
Phase 3:        10/11 12:14:23  10/11 12:14:32  9 seconds
Phase 4:        10/11 12:14:32  10/11 12:14:33  1 second
Phase 5:        10/11 12:14:33  10/11 12:14:33
Phase 6:        10/11 12:14:33  10/11 12:14:40  7 seconds
Phase 7:        10/11 12:14:40  10/11 12:14:40

Total run time: 17 seconds
done
itimpi · Posted October 11, 2022

That looks good. If you now stop the array and start it in normal mode, the disk should mount. You should also check whether you have a lost+found folder on the disk, as that is where the repair process puts any files/folders for which it cannot locate the directory information giving their names.
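Once the repaired disk mounts, a quick check for recovered orphans, assuming an Unassigned Devices mount point like /mnt/disks/mydisk (the path is an example; use your own):

# Orphaned files land in lost+found and are named after their inode numbers.
ls -la /mnt/disks/mydisk/lost+found

# Identify what each recovered file actually is before moving it back into place.
file /mnt/disks/mydisk/lost+found/*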
CrookedAutobot · Posted October 11, 2022

3 minutes ago, itimpi said: "That looks good. If you now stop the array and start it in normal mode, the disk should mount."

I am a little confused. This drive is an unassigned drive; I do not follow how stopping and restarting the array would make any difference. Please forgive my ignorance.
JorgeB · Posted October 11, 2022

22 minutes ago, CrookedAutobot said: "I do not follow how stopping and restarting the array would make any difference."

It won't. P.S.: there's an option in UD to run xfs_repair, so there's no need to use the CLI.
CrookedAutobot · Posted October 11, 2022

1 minute ago, JorgeB said: "It won't. P.S.: there's an option in UD to run xfs_repair, so there's no need to use the CLI."

Oh, thank you. Out of safety and completeness, I stopped the array and rebooted the server, then started the array as usual. The disk did not auto-mount; however, I was able to mount it manually. SUCCESS! Thanks for pointing out my errors and helping me get on the right track.