Two new disks Unmountable after one week of working in my array



Last week I installed two new 16TB disks in my array, replacing two 14TB disks. I replaced them one at a time and it took the usual 24-ish hours to rebuild my array. Everything worked fine and I used the disks as normal for several days.

 

Yesterday, I decided to upgrade to an unstable build of unRAID, 6.12 RC6, thinking I'd like to play with the new dashboard and help test. I don't know whether it was the new version or whether I'd just missed it before, but my array was reporting as much fuller. That's when I noticed my two new drives were reporting as not mounted. Specifically:

 

Unmountable: Unsupported or no file system

 

If it were just one drive, my knee-jerk reaction would be to format it and let parity save me once again. However, with two drives, I don't even know if I could recover. I've seen similar issues that seemed to be resolved with xfs_repair, but I'm no expert on XFS and figured I should check with the community before potentially doing something stupid. I'm attaching my complete diagnostics zip in the hope that someone sees what I need to do.

 

Without adult supervision, I may just attempt something like this after starting the array in maintenance mode:
 

xfs_repair -Lv /dev/sde
xfs_repair -Lv /dev/sdf

 

The natives are getting restless without their shows... send help!

collective-diagnostics-20230519-0716.zip

Edited by digitaljock
Adding what I *think* I'm supposed to do

Thanks for the link; it looked like a few people ran with the -L option, so I jumped the gun on that. However, the instructions gave me some confidence to go ahead and give it a try with just the -v:

 

xfs_repair -v /dev/sde
xfs_repair -v /dev/sdf
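
For context on the options being weighed here, the difference matters; this is a rough summary of the documented xfs_repair flags in play, with the device path left as a placeholder since the correct one for Unraid is covered further down the thread:

xfs_repair -n <device>   # check only: report problems, change nothing
xfs_repair -v <device>   # verbose repair
xfs_repair -L <device>   # zero a dirty metadata log first - last resort, can lose recently written metadata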

 

I started the first one and it's currently looking for the secondary superblock:
 

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...

 

I'll report back on how it goes. Thanks again @JorgeB for the direction.

  • Solution

 

1 hour ago, digitaljock said:

Without adult supervision, I may just attempt something like this after starting the array in maintenance mode:
 

xfs_repair -Lv /dev/sde
xfs_repair -Lv /dev/sdf

 


Those commands are wrong and will result in a "superblock not found" error. You need to add the partition number on the end when using the ‘sd’ devices (e.g. /dev/sde1). Using the ‘sd’ devices will also invalidate parity. If doing it from the command line, it is better to use the /dev/md? type devices (where ? is the disk slot number), as that both maintains parity and means the partition is selected automatically.
 

It is much better to run the command via the GUI by clicking on the drive on the Main tab and running it from there, as it will automatically use the correct device name and maintain parity.
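
As a rough illustration of the above, the corrected command-line form for this setup would look something like the following, assuming the two drives sit in array slots 5 and 6 (the slots the GUI-generated commands later in the thread confirm); running it from the GUI remains the safer route:

xfs_repair -v /dev/md5p1
xfs_repair -v /dev/md6p1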

 


Ah, that's good to know. Since I did it from the GUI, though, I assume that it's okay as-is? This is the result that came back, which is encouraging:
 

verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.
Exiting now.
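
Those closing lines just mean this pass was the read-only check (the -n option, which the GUI pre-fills by default), so nothing has been written to the disk yet. A minimal sketch of the two stages, using the disk-5 device the GUI reports further down:

/sbin/xfs_repair -nv /dev/md5p1   # check only - this is what produced the output above
/sbin/xfs_repair -v /dev/md5p1    # actual repair, once the check output looks sane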

 


Ok, here's where I ended up:

 

Phase 7 - verify and correct link counts...
Note - stripe unit (0) and width (0) were copied from a backup superblock.
Please reset with mount -o sunit=,swidth= if necessary

        XFS_REPAIR Summary    Fri May 19 14:04:08 2023

Phase		Start		End		Duration
Phase 1:	05/19 13:03:52	05/19 14:04:03	1 hour, 11 seconds
Phase 2:	05/19 14:04:03	05/19 14:04:03
Phase 3:	05/19 14:04:03	05/19 14:04:05	2 seconds
Phase 4:	05/19 14:04:05	05/19 14:04:05
Phase 5:	05/19 14:04:05	05/19 14:04:06	1 second
Phase 6:	05/19 14:04:06	05/19 14:04:07	1 second
Phase 7:	05/19 14:04:07	05/19 14:04:07

Total run time: 1 hour, 15 seconds
done

 

This is just for the first disk. Starting on disk #2.
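
On the "stripe unit (0) and width (0)" note in the summary above: zero is the default for a disk with no RAID striping, so no mount-option reset should normally be needed. A quick way to confirm the values once the disk is mounted again, assuming Unraid's usual /mnt/disk5 mount point for slot 5:

xfs_info /mnt/disk5 | grep -i sunit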

Edited by digitaljock
Removing the question as I've decided to go for broke!

For the record, these are the commands the GUI sent:
 

/sbin/xfs_repair -v /dev/md5p1
/sbin/xfs_repair -v /dev/md6p1

 

Obviously, these were for my configuration, and it's best to let the GUI do the work. I was still curious about the exact commands, so after I kicked off each run I did a "ps -ef | grep xfs_repair" from the command line; that's how I found what the GUI ran. I also ran the first one with -nv to see what it looked like before deciding to go for it as above (again, via the GUI; I'm only showing the commands it ended up running). As you can see from the output, it took a while (just over an hour). Thanks for stopping me from doing something really stupid, @JorgeB and @JonathanM, and thank you @itimpi for getting me to the GUI to make my life a lot easier 😁
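
For anyone wanting to repeat that check, the one-liner (run from a console while the GUI check or repair is in progress) is simply:

ps -ef | grep xfs_repair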

 

I love this community!

