trwolff04 (Author) Posted March 31, 2023
This is at least the 10th time this has happened now. Any ideas what's going on? Once it reaches this point, the server becomes intermittently unresponsive in some of my Docker apps.
beast-diagnostics-20230331-1256.zip
trurl Posted March 31, 2023
Reboot to clear the log, then check the filesystem on the cache drive.
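For reference, a minimal sketch of that check, run from maintenance mode with the array stopped. The device name is an assumption; substitute your actual cache device (something like /dev/sdX1, or /dev/nvme0n1p1 for NVMe):

    # Read-only check first: -n means "no modify", so nothing is written
    xfs_repair -n /dev/sdX1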
trwolff04 (Author) Posted April 2, 2023
Phase 1 - find and verify superblock...
        - block cache size set to 3078920 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 48373 tail block 48373
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
bad CRC for inode 1079657638
bad CRC for inode 1079657638, would rewrite
would have cleared inode 1079657638
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
bad CRC for inode 1079657638, would rewrite
would have cleared inode 1079657638
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
Metadata corruption detected at 0x46b5b8, inode 0x405a44a6 dinode
couldn't map inode 1079657638, err = 117
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected dir inode 1610846306, would move to lost+found
Phase 7 - verify link counts...
Metadata corruption detected at 0x46b5b8, inode 0x405a44a6 dinode
couldn't map inode 1079657638, err = 117, can't compare link counts
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Sun Apr 2 10:12:51 2023

Phase           Start           End             Duration
Phase 1:        04/02 10:12:50  04/02 10:12:50
Phase 2:        04/02 10:12:50  04/02 10:12:50
Phase 3:        04/02 10:12:50  04/02 10:12:51  1 second
Phase 4:        04/02 10:12:51  04/02 10:12:51
Phase 5:        Skipped
Phase 6:        04/02 10:12:51  04/02 10:12:51
Phase 7:        04/02 10:12:51  04/02 10:12:51

Total run time: 1 second
JorgeB Posted April 2, 2023
Run it again without -n, and if it asks for -L use it.
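A sketch of that repair run, under the same assumption about the device name:

    # Without -n, xfs_repair actually writes its fixes
    xfs_repair /dev/sdX1

    # Only if it refuses to run and explicitly asks for -L:
    # this zeroes the metadata log and can discard the most recent changes
    xfs_repair -L /dev/sdX1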
trwolff04 (Author) Posted April 2, 2023
Thanks. Didn't ask for -L. Should I try it anyway?
trwolff04 (Author) Posted April 3, 2023
I don't know yet. It normally takes days or weeks to fill the log. Do my diagnostics also indicate I have a memory leak?
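If it helps anyone debugging the same symptoms, a couple of quick checks for both questions, using standard Linux tools (nothing here is Unraid-specific):

    # How full is the log partition, and which files are growing?
    df -h /var/log
    du -sh /var/log/* | sort -h

    # Rough memory picture; "available" steadily shrinking over days can hint at a leak
    free -h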
trwolff04 (Author) Posted April 4, 2023
This is after the server has been running about two days in safe mode on the newest Unraid version. I've been having an issue with GUI crashes that I posted about in another thread, still unresolved, and I'm testing for issues. Diagnostics attached.
beast-diagnostics-20230404-1302.zip
trwolff04 (Author) Posted April 17, 2023 (Solution)
The crashes were due to qBittorrent with libtorrent v2, and the filled log was because I had a ton of login attempts from someone poking around in India. Turned off external access. Resolved.
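For anyone landing here with the same symptom, a hedged example of confirming that kind of login flood (the path is the usual Linux syslog location; the grep pattern is illustrative and matches failed SSH logins):

    # Count failed SSH login attempts in the current syslog
    grep -c "Failed password" /var/log/syslog

    # Show which source IPs are responsible (field position assumes the
    # standard "Failed password for USER from IP port N ssh2" format)
    grep "Failed password" /var/log/syslog | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn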