kakmoster Posted May 16, 2018

Hey, my log is getting full, slowly... Server uptime is only 19 days. I don't understand what's filling up the log. Thanks

deusvult-diagnostics-20180516-0937.zip
John_M Posted May 16, 2018

Your cache disk has a corrupt file system and your syslog is full of call traces. Reboot and start the array in maintenance mode and run a file system check on it.
kakmoster Posted May 16, 2018 (Author)

12 hours ago, John_M said: Your cache disk has a corrupt file system and your syslog is full of call traces. Reboot and start the array in maintenance mode and run a file system check on it.

Alright, I'll do this tomorrow. My cache disk seems to work fine; what's wrong with the file system? Thanks for the quick reply.
John_M Posted May 17, 2018

1 hour ago, kakmoster said: what's wrong with the file system?

Corruption.

May 16 04:40:40 DeusVult kernel: XFS (sdm1): Corruption detected. Unmount and run xfs_repair
May 16 04:40:53 DeusVult kernel: XFS (sdm1): xfs_iread: validation failed for inode 282185740
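[Editor's note: kernel lines like these are easy to pull out of a saved syslog yourself. A minimal sketch, reusing the two log lines quoted above; the ./syslog path and the grep pattern are assumptions for illustration, not part of the Unraid diagnostics tooling:]

```shell
# Recreate a two-line sample of the syslog from this thread
# (./syslog is just an example path) ...
printf '%s\n' \
  'May 16 04:40:40 DeusVult kernel: XFS (sdm1): Corruption detected. Unmount and run xfs_repair' \
  'May 16 04:40:53 DeusVult kernel: XFS (sdm1): xfs_iread: validation failed for inode 282185740' \
  > syslog

# ... then pull out every kernel message for the suspect device.
# Any "XFS (sdm1):" line is worth a look when hunting corruption.
grep -E 'kernel: XFS \(sdm1\):' syslog
```

On a live Unraid box the same grep against /var/log/syslog shows whether the errors are still accumulating.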
John_M Posted May 17, 2018 Share Posted May 17, 2018 1 hour ago, kakmoster said: My cache disk seems to work fine I suppose that's a sign of how robust XFS can be. See here for instructions for repairing it in the GUI. Quote Link to comment
kakmoster Posted May 17, 2018 (Author)

9 hours ago, John_M said: Corruption. May 16 04:40:40 DeusVult kernel: XFS (sdm1): Corruption detected. Unmount and run xfs_repair May 16 04:40:53 DeusVult kernel: XFS (sdm1): xfs_iread: validation failed for inode 282185740

Yes, I noticed when I tried to do a clean shutdown. Docker hung and there was nothing I could do. The powerdown command in the terminal just shut the GUI off and I had to kill the system manually... I'll try repairing it in a few minutes. Thank you, I'll be back with the result.
kakmoster Posted May 17, 2018 (Author)

9 hours ago, John_M said: I suppose that's a sign of how robust XFS can be. See here for instructions for repairing it in the GUI.

This is the output I got. Should I do a check with the -d option?

Phase 1 - find and verify superblock...
        - block cache size set to 752896 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 213145 tail block 213141
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 44900452, counted 44969003
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
bad CRC for inode 282185740
bad CRC for inode 282185740, would rewrite
would have cleared inode 282185740
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 3
        - agno = 1
bad CRC for inode 282185740, would rewrite
would have cleared inode 282185740
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
xfs_iread: validation failed for inode 282185740
xfs_iread: XFS_CORRUPTION_ERROR couldn't map inode 282185740, err = 117
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected dir inode 142041536, would move to lost+found
disconnected dir inode 282185741, would move to lost+found
disconnected dir inode 282185747, would move to lost+found
disconnected dir inode 475609367, would move to lost+found
disconnected dir inode 493453641, would move to lost+found
Phase 7 - verify link counts...
xfs_iread: validation failed for inode 282185740
xfs_iread: XFS_CORRUPTION_ERROR couldn't map inode 282185740, err = 117, can't compare link counts
No modify flag set, skipping filesystem flush and exiting.

XFS_REPAIR Summary    Thu May 17 11:50:16 2018

Phase           Start           End             Duration
Phase 1:        05/17 11:50:12  05/17 11:50:12
Phase 2:        05/17 11:50:12  05/17 11:50:12
Phase 3:        05/17 11:50:12  05/17 11:50:14  2 seconds
Phase 4:        05/17 11:50:14  05/17 11:50:14
Phase 5:        Skipped
Phase 6:        05/17 11:50:14  05/17 11:50:16  2 seconds
Phase 7:        05/17 11:50:16  05/17 11:50:16

Total run time: 4 seconds
John_M Posted May 17, 2018 Share Posted May 17, 2018 4 minutes ago, kakmoster said: Should I do a check with the -d option? First thing is to try it without the -n Quote If the file system is XFS, then the options box will contain the -n option (it means check only, no modification yet). We recommend adding the -v option (it means "verbose" for greater message display), so add a v to the -n, making it -nv. Quote Link to comment
kakmoster Posted May 17, 2018 (Author)

Just now, John_M said: First thing is to try it without the -n

Alright, I'll do it with only the -v option.
kakmoster Posted May 17, 2018 (Author)

Phase 1 - find and verify superblock...
        - block cache size set to 752896 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 213717 tail block 213717
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
bad CRC for inode 282185740
bad CRC for inode 282185740, will rewrite
cleared inode 282185740
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 3
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

XFS_REPAIR Summary    Thu May 17 12:08:31 2018

Phase           Start           End             Duration
Phase 1:        05/17 12:08:26  05/17 12:08:26
Phase 2:        05/17 12:08:26  05/17 12:08:26
Phase 3:        05/17 12:08:26  05/17 12:08:28  2 seconds
Phase 4:        05/17 12:08:28  05/17 12:08:29  1 second
Phase 5:        05/17 12:08:29  05/17 12:08:29
Phase 6:        05/17 12:08:29  05/17 12:08:30  1 second
Phase 7:        05/17 12:08:30  05/17 12:08:30

Total run time: 4 seconds
done
kakmoster Posted May 17, 2018 (Author)

Just now, John_M said: Job done.

Thanks a lot for the help!
John_M Posted May 17, 2018 Share Posted May 17, 2018 No problem. Use it for a few days and see how it goes. You shouldn't get those error messages now but if you have any problems grab your diagnostics and post them again. Quote Link to comment