digitaljedi Posted May 19, 2023 (edited)

Last week I installed two new 16TB disks in my array, replacing two 14TB disks. I replaced them one at a time, and each rebuild took the usual 24-ish hours. Everything worked fine and I used the disks as normal for several days.

Yesterday I decided to upgrade to an unstable build of unRAID, 6.12 RC6, after thinking I'd like to play with the new dashboard and help test. I don't know whether it was the new version or whether I'd just missed it before, but my array was reporting much fuller. That's when I noticed my two new drives were reporting as not being mounted. Specifically:

Unmountable: Unsupported or no file system

If it were just one drive, my knee-jerk reaction would be to format and let parity save me once again. However, with two drives, I don't even know if I could recover. I've seen similar issues that seemed to be resolved with xfs_repair, but I'm no expert on XFS and figured I should check with the community before potentially doing something stupid. I'm attaching my complete diagnostics zip in the hope that someone sees what I need to do.

Without adult supervision, I may just attempt something like this after starting the array in maintenance mode:

xfs_repair -Lv /dev/sde
xfs_repair -Lv /dev/sdf

The natives are getting restless without their shows... send help!

collective-diagnostics-20230519-0716.zip

Edited May 19, 2023 by digitaljock: Adding what I *think* I'm supposed to do
JorgeB Posted May 19, 2023

45 minutes ago, digitaljock said:
xfs_repair -Lv /dev/sde
xfs_repair -Lv /dev/sdf

This won't work; see the check filesystem instructions.
digitaljedi (Author) Posted May 19, 2023

Thanks for the link; it looked like a few people ran with the -L option, so I jumped the gun on that. However, the instructions gave me some confidence to go ahead and try it with just -v:

xfs_repair -v /dev/sde
xfs_repair -v /dev/sdf

I started the first one and it's currently looking for the secondary superblock:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...

I'll report back on how it goes. Thanks again @JorgeB for the direction.
JorgeB Posted May 19, 2023

That's not a very good sign, but let it run.
itimpi Posted May 19, 2023 (Solution)

1 hour ago, digitaljock said:
Without adult supervision, I may just attempt something like this after starting the array in maintenance mode:
xfs_repair -Lv /dev/sde
xfs_repair -Lv /dev/sdf

Those commands are wrong and will result in the superblock not being found. You need to add the partition number on the end when using the ‘sd’ devices (e.g. /dev/sde1). Using the ‘sd’ devices will also invalidate parity.

If doing it from the command line, it is better to use the /dev/md? type devices (where ? is the disk slot number), as that both maintains parity and means the partition is automatically selected.

It is much better still to run the command via the GUI, by clicking on the drive on the Main tab and running it from there, as it will automatically use the correct device name and maintain parity.
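To make the device-naming rule concrete, here is a minimal sketch; the slot number is just an example and nothing here touches a real disk. (On 6.12 the md device name includes the partition, e.g. /dev/md5p1; older releases used plain /dev/md5.)

```shell
# Sketch only: building the right device name for xfs_repair on Unraid.
# /dev/mdXp1 goes through the parity layer, so parity stays valid;
# /dev/sdX (no partition) fails to find the superblock, and /dev/sdX1
# would repair the raw partition and invalidate parity.
slot=5                        # array disk slot number (example value)
md_dev="/dev/md${slot}p1"     # parity-maintaining device (6.12 naming)
echo "dry run: xfs_repair -n -v ${md_dev}"
echo "repair:  xfs_repair -v ${md_dev}"
```

The GUI does exactly this name selection for you, which is why running the check from the Main tab is the safer route.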
JorgeB Posted May 19, 2023

54 minutes ago, digitaljock said:
I started the first one and it's currently looking for the secondary superblock:

Do you mean using the instructions in the link I posted, or your commands? As mentioned, your commands won't work.
digitaljedi (Author) Posted May 19, 2023

I started from the instructions you linked; I thought I was summarizing the command properly, but now I see I missed the -n and the partition number. For simplicity, I went with the GUI. As far as I know, it's still looking for the secondary superblock.
JorgeB Posted May 19, 2023

12 minutes ago, digitaljock said:
missed the -n and the number

Also note that parity will be invalidated when not using the md device.
JonathanM Posted May 19, 2023

4 hours ago, digitaljock said:
format and let parity save me

That won't work. If you format a drive, the format will be written to parity, and the rebuilt drive will be blank; all files will be gone.
digitaljedi (Author) Posted May 19, 2023

Ah, that's good to know. Since I did it from the GUI, though, I assume it's okay as-is? This is the result that came back, which is encouraging:

verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.
Exiting now.
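For anyone skimming later: the "would ..." phrasing is the tell that this was a -n (no_modify) dry run — xfs_repair is describing changes it did not make. As a hypothetical illustration, a saved dry-run log can be scanned for pending changes like this (the log filename and contents are examples, not from a real run):

```shell
# Count "would ..." lines in a saved "xfs_repair -n" log; each one is a
# change the real repair would make. Log file and contents are examples.
log="xfs_repair_dryrun.log"
printf '%s\n' \
  "verified secondary superblock..." \
  "would write modified primary superblock" > "${log}"
pending=$(grep -c '^would ' "${log}")
echo "pending changes: ${pending}"
rm -f "${log}"
```

A non-zero count simply means the real repair (without -n) has work to do, which is exactly the situation in this thread.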
digitaljedi (Author) Posted May 19, 2023 (edited)

OK, here's where I ended up:

Phase 7 - verify and correct link counts...
Note - stripe unit (0) and width (0) were copied from a backup superblock.
Please reset with mount -o sunit=,swidth= if necessary

XFS_REPAIR Summary    Fri May 19 14:04:08 2023

Phase           Start            End              Duration
Phase 1:        05/19 13:03:52   05/19 14:04:03   1 hour, 11 seconds
Phase 2:        05/19 14:04:03   05/19 14:04:03
Phase 3:        05/19 14:04:03   05/19 14:04:05   2 seconds
Phase 4:        05/19 14:04:05   05/19 14:04:05
Phase 5:        05/19 14:04:05   05/19 14:04:06   1 second
Phase 6:        05/19 14:04:06   05/19 14:04:07   1 second
Phase 7:        05/19 14:04:07   05/19 14:04:07

Total run time: 1 hour, 15 seconds
done

This is just for the first disk. Starting on disk #2.

Edited May 19, 2023 by digitaljock: Removing the question as I've decided to go for broke!
digitaljedi (Author) Posted May 19, 2023

For the record, these are the commands the GUI sent:

/sbin/xfs_repair -v /dev/md5p1
/sbin/xfs_repair -v /dev/md6p1

Obviously, this was for my configuration, and it's best to let the GUI do the work. I ran the first one with -nv to see what it looked like before deciding to go for it as above (again, with the GUI; I'm only showing which commands it ran). As you can see from the output, it took a while (just over an hour).

Thanks for stopping me from doing something really stupid, @JorgeB and @JonathanM. And thank you @itimpi for getting me to the GUI to make my life a lot easier 😁

I was still curious about the commands, so I just did a "ps -ef | grep xfs_repair" from the command line after I kicked it off. I love this community!
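On that ps check: xfs_repair can sit silently in phase 1 for an hour, so confirming the process is still alive is reasonable. A slightly cleaner way to do the same thing is pgrep (this is a generic sketch, not Unraid-specific):

```shell
# Check whether an xfs_repair process is still running.
# pgrep -x matches the exact process name and, unlike "ps | grep",
# never matches its own grep.
if pgrep -x xfs_repair >/dev/null; then
    echo "xfs_repair is still running"
else
    echo "no xfs_repair process found"
fi
```

Either way, the GUI check runs the same binary, so the process shows up in the process table just like a command-line run would.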