PsyVision Posted February 21, 2018

Hello,

I'm using unRAID 6.4.1, and every few weeks I discover that my server has crashed with a kernel panic. I've not managed to capture a syslog or any proper diagnostics, but I've attached a photo of the screen after the panic has occurred.

I have 3 VMs set up, but these haven't been running for ages, were off at the time, and are not set to autostart. I'm running 2 Docker containers: UniFi and UniFi Video.

I've attached my diagnostics file for whatever use it may be. Any help would be greatly appreciated.

Thank you,
Rich

nas-diagnostics-20180221-1904.zip
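P.S. I'd also love a way to capture the log across a crash. One idea I'm considering is forwarding syslog to another machine so the messages survive the panic. This is only a sketch: assuming the box runs a stock rsyslog that reads /etc/rsyslog.conf (adjust if your build differs), a forwarding rule like this, with the IP a placeholder for a machine running a syslog listener on UDP port 514, should send everything over:

    *.* @192.168.1.10:514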
remotevisitor Posted February 21, 2018

As the backtrace shows the crash occurring in the XFS driver, the first thing I would do is run a file system check on your disks. You might have some file system corruption which is causing the driver to crash when that part of the file system is accessed.
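With the array started in maintenance mode, a read-only check looks something like this from the console (md1 here is illustrative; unRAID exposes each data disk as /dev/mdX in maintenance mode, so substitute each of your disks in turn):

    xfs_repair -nv /dev/md1

The -n flag reports problems without modifying anything, and -v gives verbose output.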
PsyVision (Author) Posted February 22, 2018

Thank you @remotevisitor. I've followed the instructions here: https://lime-technology.com/wiki/Check_Disk_Filesystems

I'm not familiar with the output, but it looks to me as though there aren't any issues; you may say otherwise. These checks were run with the -nv options.

Disk 1:

Phase 1 - find and verify superblock...
        - block cache size set to 749704 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1754761 tail block 1754761
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu Feb 22 18:45:36 2018

Phase           Start           End             Duration
Phase 1:        02/22 18:45:31  02/22 18:45:31
Phase 2:        02/22 18:45:31  02/22 18:45:32  1 second
Phase 3:        02/22 18:45:32  02/22 18:45:34  2 seconds
Phase 4:        02/22 18:45:34  02/22 18:45:34
Phase 5:        Skipped
Phase 6:        02/22 18:45:34  02/22 18:45:36  2 seconds
Phase 7:        02/22 18:45:36  02/22 18:45:36

Total run time: 5 seconds

Disk 3:

Phase 1 - find and verify superblock...
        - block cache size set to 742224 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 820249 tail block 820249
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
        - agno = 4
        - agno = 5
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu Feb 22 18:47:07 2018

Phase           Start           End             Duration
Phase 1:        02/22 18:46:46  02/22 18:46:46
Phase 2:        02/22 18:46:46  02/22 18:46:47  1 second
Phase 3:        02/22 18:46:47  02/22 18:47:02  15 seconds
Phase 4:        02/22 18:47:02  02/22 18:47:02
Phase 5:        Skipped
Phase 6:        02/22 18:47:02  02/22 18:47:07  5 seconds
Phase 7:        02/22 18:47:07  02/22 18:47:07

Total run time: 21 seconds

Disk 4:

Phase 1 - find and verify superblock...
        - block cache size set to 749664 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 696667 tail block 696667
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 3
        - agno = 1
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu Feb 22 18:48:21 2018

Phase           Start           End             Duration
Phase 1:        02/22 18:47:52  02/22 18:47:52
Phase 2:        02/22 18:47:52  02/22 18:47:53  1 second
Phase 3:        02/22 18:47:53  02/22 18:48:07  14 seconds
Phase 4:        02/22 18:48:07  02/22 18:48:07
Phase 5:        Skipped
Phase 6:        02/22 18:48:07  02/22 18:48:21  14 seconds
Phase 7:        02/22 18:48:21  02/22 18:48:21

Total run time: 29 seconds

Disk 6:

Phase 1 - find and verify superblock...
        - block cache size set to 749680 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1056269 tail block 1056269
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 3
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu Feb 22 18:49:17 2018

Phase           Start           End             Duration
Phase 1:        02/22 18:48:48  02/22 18:48:48
Phase 2:        02/22 18:48:48  02/22 18:48:49  1 second
Phase 3:        02/22 18:48:49  02/22 18:49:04  15 seconds
Phase 4:        02/22 18:49:04  02/22 18:49:04
Phase 5:        Skipped
Phase 6:        02/22 18:49:04  02/22 18:49:17  13 seconds
Phase 7:        02/22 18:49:17  02/22 18:49:17

Total run time: 29 seconds

Cache:

Phase 1 - find and verify superblock...
        - block cache size set to 757096 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 146793 tail block 146793
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu Feb 22 18:50:24 2018

Phase           Start           End             Duration
Phase 1:        02/22 18:49:47  02/22 18:49:47
Phase 2:        02/22 18:49:47  02/22 18:49:48  1 second
Phase 3:        02/22 18:49:48  02/22 18:50:12  24 seconds
Phase 4:        02/22 18:50:12  02/22 18:50:12
Phase 5:        Skipped
Phase 6:        02/22 18:50:12  02/22 18:50:24  12 seconds
Phase 7:        02/22 18:50:24  02/22 18:50:24

Total run time: 37 seconds
JorgeB Posted February 22, 2018

5 minutes ago, PsyVision said:
I'm not familiar with the output, but it looks to me as though there aren't any issues; you may say otherwise. These checks were run with the -nv options.

The xfs_repair output isn't very helpful for showing whether or not there is corruption. If you use -n, you need to check the exit status after each disk's run (or run xfs_repair without -n). To check the exit status, type:

echo $?

1 = corruption was detected
0 = no corruption detected
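For example, immediately after checking a disk (md1 here is just an example device):

    xfs_repair -n /dev/md1
    echo $?

Note that $? holds the exit status of the most recent command only, so check it straight after each xfs_repair run, before running anything else.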
PsyVision (Author) Posted February 22, 2018

Aha, cheers! That was run through the web UI; I'll re-do it from the CLI.
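A small loop should make it quick to check every disk in one go. A sketch only: the device list matches the data disks I checked above, and it needs the array started in maintenance mode:

    for d in /dev/md1 /dev/md3 /dev/md4 /dev/md6; do
        xfs_repair -n "$d"
        echo "$d exit status: $?"
    done

(The cache drive isn't an mdX device, so I'll check that one separately.)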