cr0nis Posted October 28, 2019

Hi all, it seems there was a recent power event and my UPS forced a system shutdown. I didn't think anything of it and turned the server back on when I realized it wasn't running. Today CouchPotato started having errors when trying to move files, and when I looked at my main dashboard I saw 3 drives with issues. Before attempting anything I wanted to reach out here first to see what the best steps are to resolve the issues. I have attached my logs. The server has been running since it was turned back on after the forced shutdown. Any help will be appreciated. Thanks in advance.

tower-diagnostics-20191028-0110.zip
JorgeB Posted October 28, 2019

Check the filesystem on disks 5 and 6. Disk 8 is more serious, since the partition isn't valid. You can try unassigning it and starting the array; Unraid should recreate the partition. If the emulated disk mounts and the contents look correct you can rebuild on top; if it doesn't, post new diags.
cr0nis Posted October 28, 2019

Thank you. I am trying those steps now.
cr0nis Posted October 28, 2019

Booted in maintenance mode and scanned both drives. Also attempted to unassign and reassign disk 8. All 3 drives are now showing "Unmountable: No file system". Attached are the latest diags. A data rebuild has started (I believe on disk 8). Thanks again for the help; I am lost when it comes to troubleshooting this.

tower-diagnostics-20191028-1339.zip
JorgeB Posted October 28, 2019

Still need to fix the filesystem on disks 5 and 6: https://wiki.unraid.net/Check_Disk_Filesystems. Post the fsck outputs if you have doubts. I mentioned rebuilding disk 8 on top only if the emulated disk mounted correctly, but since the emulated disk doesn't have a valid filesystem and the partition was invalid before, it's more likely the disk was never properly formatted.
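For reference, a minimal sketch of the check commands from that wiki page, assuming disk 5 is reiserfs and disk 6 is xfs (run from the console with the array started in maintenance mode, where the parity-protected devices appear as /dev/mdX):

# Read-only checks first; neither command writes anything yet
reiserfsck --check /dev/md5    # ReiserFS consistency check on disk 5
xfs_repair -n /dev/md6         # XFS check on disk 6; -n means "no modify"

Running against /dev/mdX rather than /dev/sdX keeps parity in sync with any later repairs.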
cr0nis Posted October 29, 2019

I'm running the suggestion on disk 5. I am seeing a ton of this:

init_source_bitmap: Bitmap 12295 (of 32768 bits) is wrong - mark all blocks [402882560 - 402915328] as used

Should I be concerned?
JorgeB Posted October 29, 2019

Snippets are mostly useless; you can post the entire output, but even then you'll need to attempt the fix to see how bad it really is, i.e., whether there's data loss after fixing it.
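For context, a sketch of the usual reiserfsck escalation on disk 5 (assumed device /dev/md5); only move to --rebuild-tree if reiserfsck itself recommends it, since that is the step that can land damaged files in lost+found:

reiserfsck --fix-fixable /dev/md5    # apply the safe, correctable fixes
reiserfsck --rebuild-tree /dev/md5   # last resort: rebuild the filesystem tree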
cr0nis Posted October 30, 2019

Looks like drive 5 is up, with a large amount of data loss (mostly media). Drive 6 doesn't offer a solution after running the check, and I believe drive 8 just needs to be formatted. What should I do with drive 6? Also, what actions should I take to prevent this? I have a UPS attached, thinking it would prevent such issues when I lose power, but this is the worst I have seen since I've been running Unraid. Thanks again for the assistance.

tower-diagnostics-20191030-1341.zip
itimpi Posted October 30, 2019

It sounds as if your server did not actually manage to shut down tidily. Have you tested that the UPS can handle the shutdown sequence without running the batteries below about 50% (to give some room for glitches during the shutdown that extend the time it takes)? You probably want to test with all drives spun down as the starting point, as that is likely to be the most demanding from a current-draw perspective, since all drives will then need spinning up.
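If you're using Unraid's built-in UPS support (apcupsd), you can watch the battery during such a test from the console; a sketch, assuming the daemon is running:

apcaccess status    # dumps UPS state; watch BCHARGE (%) and TIMELEFT (minutes)

Comparing BCHARGE before and after a full test shutdown tells you how much headroom the battery actually has.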
JorgeB Posted October 30, 2019

38 minutes ago, cr0nis said:
Drive 6 doesn't offer a solution after running the check

Disk 6 is xfs, and its fsck usage is different from reiser's; usually you just need to run it without -n (no modify). If that doesn't fix it, post the output.
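In concrete terms (a sketch, assuming disk 6 maps to /dev/md6 and the array is in maintenance mode):

xfs_repair /dev/md6    # same command as the check, minus -n, so fixes are actually written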
cr0nis Posted October 31, 2019

Here are the results:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

Is my only option to rerun with -L?
John_M Posted October 31, 2019

47 minutes ago, cr0nis said:
Is my only option to rerun with -L?

Yes, though it isn't as bad as the warning suggests.
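For completeness, the command being discussed (a sketch; -L zeroes the XFS log, discarding any metadata updates that were never replayed, which is why the tool insists you try mounting first):

xfs_repair -L /dev/md6    # destroy the log, then repair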
cr0nis Posted October 31, 2019

Looks like drives 5 and 6 are now up. Is there any way to recover the missing data from drive 5, or am I out of luck?
John_M Posted October 31, 2019

I haven't been following this thread, but if you did a file system repair on Disk 5 and it was successful, then there's nothing more you can do other than check the lost+found folder (if the repair created one) for the missing files. Beyond that you'll need to resort to your backups.
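A quick way to see what the repair salvaged (a sketch, assuming the standard Unraid mount point for disk 5):

ls -la /mnt/disk5/lost+found    # recovered files, usually with numeric names

Files in lost+found keep their contents but typically lose their original names and paths, so expect to identify them by size and type before moving them back into your shares.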
cr0nis Posted November 1, 2019

3 hours ago, John_M said:

Thank you for this. I had no idea about lost+found. Looks like most things are in there, and I will begin to move them tomorrow. Regarding disk 8: some had said that it was never formatted correctly, so I will format it tonight. Is there a specific format I should use, since some of my drives are reiserfs while others are xfs? I've always just used the UI and selected Format. Thanks again, folks. This has really helped me.
John_M Posted November 1, 2019

I would choose XFS, because ReiserFS is obsolete and no longer maintained. Many users who started out with ReiserFS-formatted disks have migrated to XFS.