3 Drives Unmountable after brownout


cr0nis

Recommended Posts

Hi All,

 

It seems I had a recent brownout and my UPS forced a system shutdown. I didn't think anything of it and turned the server back on when I realized it wasn't running. Today CouchPotato started having errors when trying to move files, and when I looked at my main dashboard I saw 3 drives with issues. Before attempting anything I wanted to reach out here first to see what the best steps are to resolve the issues.

 

I have attached my logs. The server has been running since I turned it back on after the forced shutdown.

 

Any help will be appreciated.

 

Thanks in advance

tower-diagnostics-20191028-0110.zip

Link to comment

Check filesystem on disks 5 and 6.
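
For reference, a read-only check from the console would look something like this, assuming those disks are XFS and the array is started in Maintenance mode so that disk 5 shows up as /dev/md5 (use md6 for disk 6):

xfs_repair -n /dev/md5

The -n flag only reports problems and doesn't write anything, so it's a safe first pass; the filesystem check in the webGUI drives the same tool.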

 

Disk 8 is more serious since the partition isn't valid. You can try unassigning it and starting the array; Unraid should recreate the partition. If the emulated disk mounts correctly and the contents look correct, you can rebuild on top; if it doesn't, post new diags.
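
If you want to see the partition problem for yourself before unassigning, something like the following (sdX is just a placeholder for disk 8's actual device) prints the partition table:

fdisk -l /dev/sdX

A healthy Unraid data disk should show a single partition spanning the whole drive, so a missing or odd-looking partition matches that symptom.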

 

  • Like 1
Link to comment

Still need to fix filesystem on disks 5 and 6.

https://wiki.unraid.net/Check_Disk_Filesystems

Post the fsck outputs if you have doubts.
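
As a rough sketch, the repair itself is the same command without -n once you've reviewed the read-only output, again assuming Maintenance mode and that disk 5 maps to /dev/md5:

xfs_repair -v /dev/md5

Running it against the md device rather than the raw sdX device keeps parity in sync while the repair writes to the disk.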

 

 

I mentioned rebuilding disk 8 on top only if the emulated disk mounted correctly, but since the emulated disk doesn't have a valid filesystem and the partition was invalid before, what makes more sense is that the disk was never formatted in the first place.

  • Like 1
Link to comment

Looks like drive 5 is back up, but with a large amount of data loss (mostly media). Running the check on drive 6 didn't offer a solution, and I believe drive 8 just needs to be formatted.

 

What should I do with drive 6? Also, what actions should I take to prevent this? I have a UPS attached, thinking it would prevent such issues when I lose power, but this is the worst I have seen since I've been running Unraid.

 

Thanks again for the assistance

tower-diagnostics-20191030-1341.zip

Link to comment

It sounds as if your server did not actually manage to shut down tidily. Have you tested that the UPS can handle the shutdown sequence without running the batteries below about 50% (to give some room for glitches during the shutdown that extend the time it takes)? You probably want to start the test with all drives spun down, as that is likely to be the most demanding case from a current-draw perspective, since all the drives will then need to spin up.
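
One way to sanity-check the margin during such a test is to watch the UPS figures from the console while running on battery; assuming Unraid's built-in apcupsd is managing the UPS, something like

apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT'

shows the charge level (BCHARGE) and estimated runtime (TIMELEFT), so you can see how much headroom the shutdown sequence actually leaves.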

Link to comment

Here are the results

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair. If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair. Note that destroying
the log may cause corruption -- please attempt a mount of the filesystem
before doing this.

 

Is my only option to rerun with -L?
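
I'm guessing that, if it comes to that, the command would be something like the following (with disk 5 as /dev/md5 in Maintenance mode):

xfs_repair -L /dev/md5

but I wanted to check before destroying the log, since the message warns it may cause corruption.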

Link to comment
3 hours ago, John_M said:

Thank you for this. I had no idea about the lost+found. Looks like most things are in there and I will begin to move them tomorrow.
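
In case it helps anyone else, the recovered files end up in a lost+found directory at the top level of the repaired disk, so a quick look with something like (disk 5 in my case)

ls /mnt/disk5/lost+found

shows what was recovered, although some entries may only be named by inode number.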

Regarding disk 8: someone said that it was never formatted correctly, so I will format it tonight. Is there a specific filesystem I should use, since some of my drives are ReiserFS while others are XFS? I've always just used the UI and selected format.

 

Thanks again folks. This has really helped me

Link to comment
