nfx

  1. Appreciate this! This worked for me and then I updated the password.
  2. Hi Jorge,

Here is a run of xfs_repair -v, via the GUI:

xfs_repair -v

Phase 1 - find and verify superblock...
        - block cache size set to 1474984 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2777195 tail block 2777195
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 7
        - agno = 6
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

XFS_REPAIR Summary    Sat Mar 18 10:42:52 2023

Phase           Start           End             Duration
Phase 1:        03/18 10:42:36  03/18 10:42:36
Phase 2:        03/18 10:42:36  03/18 10:42:36
Phase 3:        03/18 10:42:36  03/18 10:42:46  10 seconds
Phase 4:        03/18 10:42:46  03/18 10:42:46
Phase 5:        03/18 10:42:46  03/18 10:42:47  1 second
Phase 6:        03/18 10:42:47  03/18 10:42:51  4 seconds
Phase 7:        03/18 10:42:51  03/18 10:42:51

Total run time: 15 seconds
done

Confirmed the file is still there. In syslog the inode shows up as a hex value:

Mar 18 10:46:31 NAS kernel: XFS (md1): Metadata corruption detected at xfs_dinode_verify+0x1bb/0x732 [xfs], inode 0x23f52f720 dinode
Mar 18 10:46:31 NAS kernel: XFS (md1): Unmount and run xfs_repair
Mar 18 10:46:31 NAS kernel: XFS (md1): First 128 bytes of corrupted metadata buffer:
Mar 18 10:46:31 NAS kernel: 00000000: 49 4e 81 ff 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
Mar 18 10:46:31 NAS kernel: 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 02 00 00  ................
Mar 18 10:46:31 NAS kernel: 00000020: 64 11 b2 ba 10 ea cd ba 64 11 61 38 31 97 49 a3  d.......d.a81.I.
Mar 18 10:46:31 NAS kernel: 00000030: 64 11 c4 9d 0f f2 a6 ee 00 00 00 00 0c 55 3a 40  d............U:@
Mar 18 10:46:31 NAS kernel: 00000040: 00 00 00 00 00 00 c5 54 00 00 00 00 00 00 00 01  .......T........
Mar 18 10:46:31 NAS kernel: 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 e3 a0 61 9f  ..............a.
Mar 18 10:46:31 NAS kernel: 00000060: ff ff ff ff ac ac 8c d2 00 00 00 00 00 00 00 18  ................
Mar 18 10:46:31 NAS kernel: 00000070: 00 00 00 43 00 28 b5 e8 00 00 00 00 00 00 00 00  ...C.(..........
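For anyone cross-referencing the two logs: the inode in that syslog line is indeed in hex, and it converts to the same decimal inode number xfs_repair flagged ("bad CRC for inode 9652336416"). A quick check from any bash shell, using nothing beyond the printf builtin:

printf '%d\n' 0x23f52f720    # prints 9652336416

So the kernel and xfs_repair are complaining about the same inode.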
  3. Hello, I recently moved Unraid to new hardware. Everything was up to date before moving the USB stick to the new mobo, and I added an Nvidia GPU for hardware transcoding. Everything ran fine for 2 days. Then some contractors at the house turned off the breaker to my garage, where my server is located. I have a UPS and my Unraid server seemed to shut down fine. I turned it back on later and all seemed fine.

I woke up the next morning and my docker service was down; I had to delete my docker.img and recreate it. I ran a parity check and everything returned normal. I then tried to use CA Backup / Restore Appdata to restore my latest backup and realized my backup config was poor: the backup was very large because it included files it shouldn't have. While cleaning up my appdata structure and reinstalling my docker applications via "Previous Apps" rather than restoring from the backup, I came across a corrupt file in one of my shares. I tracked it down to disk1 (of four disks in the share):

/bin/ls: cannot access '/mnt/disk1/media/camera/pYCOIhVEoM/front_yard_camera/2023-03-14T23-05-00.mp4': Structure needs cleaning

I removed everything else from the front_yard_camera folder on all of my drives, stopped my array, enabled maintenance mode, and started the array. I then ran xfs_repair via the GUI for disk1:

DISK 1 - xfs_repair -nv

Phase 1 - find and verify superblock...
        - block cache size set to 1474984 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2770115 tail block 2770115
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
bad CRC for inode 9652336416
bad CRC for inode 9652336416, would rewrite
would have cleared inode 9652336416
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 7
        - agno = 4
        - agno = 5
        - agno = 3
        - agno = 6
        - agno = 2
bad CRC for inode 9652336416, would rewrite
would have cleared inode 9652336416
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

XFS_REPAIR Summary    Fri Mar 17 22:28:57 2023

Phase           Start           End             Duration
Phase 1:        03/17 22:28:42  03/17 22:28:42
Phase 2:        03/17 22:28:42  03/17 22:28:43  1 second
Phase 3:        03/17 22:28:43  03/17 22:28:52  9 seconds
Phase 4:        03/17 22:28:52  03/17 22:28:52
Phase 5:        Skipped
Phase 6:        03/17 22:28:52  03/17 22:28:57  5 seconds
Phase 7:        03/17 22:28:57  03/17 22:28:57

Total run time: 15 seconds

I unfortunately did not save the output of running the actual repair (removing the -nv flag). Here is the aftermath, verifying the repair:

DISK 1 - xfs_repair -nv

Phase 1 - find and verify superblock...
        - block cache size set to 1474984 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2770115 tail block 2770115
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 3
        - agno = 5
        - agno = 2
        - agno = 4
        - agno = 0
        - agno = 6
        - agno = 7
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

XFS_REPAIR Summary    Fri Mar 17 22:39:41 2023

Phase           Start           End             Duration
Phase 1:        03/17 22:39:26  03/17 22:39:26
Phase 2:        03/17 22:39:26  03/17 22:39:26
Phase 3:        03/17 22:39:26  03/17 22:39:36  10 seconds
Phase 4:        03/17 22:39:36  03/17 22:39:36
Phase 5:        Skipped
Phase 6:        03/17 22:39:36  03/17 22:39:41  5 seconds
Phase 7:        03/17 22:39:41  03/17 22:39:41

Total run time: 15 seconds

The file remains, as xfs_repair no longer has a problem with it. I also mounted /dev/md1 to a tmp share and ran xfs_repair via the command line, with no change. Any suggestions on how to remove that file? Are there any greater implications here? No other drive in that share pool has reported any problems or bad findings, and the Unraid server seems to be running just fine beyond that corrupt file. Attaching diagnostics from before doing the docker.img repair and after xfs_repair. Thank you, and I appreciate any help. unraid_diag-20230318.zip
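For reference, here is roughly what the command-line attempt looked like (a sketch from memory; this assumes the array is started in maintenance mode and that disk1 maps to /dev/md1, as the syslog suggests — the md number may differ on other setups):

xfs_repair -nv /dev/md1    # read-only check: report what would be fixed
xfs_repair -v /dev/md1     # actual repair, with -n removed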
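Since xfs_repair reports the inode number (9652336416), it should also be possible to cross-check the file by inode instead of by path; a hypothetical check along these lines (untested on my array):

find /mnt/disk1 -inum 9652336416    # map the flagged inode back to a path
# confirm the known-bad file's inode before doing anything to it
ls -i '/mnt/disk1/media/camera/pYCOIhVEoM/front_yard_camera/2023-03-14T23-05-00.mp4'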