tola5 Posted July 14, 2019 (edited)

Hi, I'm running Unraid 6.7.0. Last night, while plugging a USB cable into the back of my server, I happened to touch a SAS cable and it came loose, and two disks dropped out of the array. I reseated the cable, but the two disks are now being emulated. From Google and from reading here, the best approach seemed to be: power down, remove the disks, start the array, power down again, reattach the disks, and let them rebuild. That is what it is doing now.

The weird part is that I can see I have lost some data. How can that be, if the disks are emulated and writes to the array are also written to parity? I can see the loss because I compared against my backup, which is kept in sync with rclone.

I hope I explained it well enough; sorry for any misspellings.

tower-diagnostics-20190714-1310.zip

Edited July 16, 2019 by tola5
tola5 Posted July 14, 2019 Author

35 minutes ago, Squid said: Post the diagnostics

Sure, though I thought my question about how Unraid works could be answered without them.
Squid Posted July 14, 2019

Jul 13 20:41:42 Tower kernel: XFS (md4): Metadata CRC error detected at xfs_sb_read_verify+0x111/0x15f [xfs], xfs_sb block 0xffffffffffffffff
Jul 13 20:41:42 Tower kernel: XFS (md4): Unmount and run xfs_repair
Jul 13 20:41:43 Tower kernel: XFS (md9): Metadata CRC error detected at xfs_sb_read_verify+0x111/0x15f [xfs], xfs_sb block 0xffffffffffffffff
Jul 13 20:41:43 Tower kernel: XFS (md9): Unmount and run xfs_repair

Run the file system checks on disks 4 and 9.
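For anyone following along: on Unraid these checks are run from a console with the array started in maintenance mode, against the emulated md devices rather than the raw sdX disks, so that parity stays in step with anything the repair later changes. A minimal sketch (the disk numbers are the ones from this thread):

```shell
# Sketch only: run from the Unraid console with the array started in
# maintenance mode (Main tab -> Stop -> tick "Maintenance mode" -> Start).
# Check the parity-protected md devices, not the raw disks.

xfs_repair -n /dev/md4   # -n = no-modify: report problems, change nothing
xfs_repair -n /dev/md9
```

The same check can also be started from the webGUI by clicking the disk on the Main tab and using the Check Filesystem Status section.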
tola5 Posted July 14, 2019 Author

1 hour ago, Squid said:
Jul 13 20:41:42 Tower kernel: XFS (md4): Metadata CRC error detected at xfs_sb_read_verify+0x111/0x15f [xfs], xfs_sb block 0xffffffffffffffff
Jul 13 20:41:42 Tower kernel: XFS (md4): Unmount and run xfs_repair
Jul 13 20:41:43 Tower kernel: XFS (md9): Metadata CRC error detected at xfs_sb_read_verify+0x111/0x15f [xfs], xfs_sb block 0xffffffffffffffff
Jul 13 20:41:43 Tower kernel: XFS (md9): Unmount and run xfs_repair
Run the file system checks on disks 4 and 9

It gives this for disk 4. I just want to be sure what the right way forward is; thanks for the help.

Phase 1 - find and verify superblock...
        - block cache size set to 736904 entries
sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 97
would reset superblock realtime bitmap ino pointer to 97
sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 98
would reset superblock realtime summary ino pointer to 98
Phase 2 - using internal log
        - zero log...
zero_log: head block 563714 tail block 563710
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used.  Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_icount 0, counted 8704
sb_ifree 0, counted 3564
sb_fdblocks 976277683, counted 49506984
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
Maximum metadata LSN (1:563818) is ahead of log (1:563714).
Would format log to cycle 4.
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Sun Jul 14 07:21:16 2019

Phase       Start           End             Duration
Phase 1:    07/14 07:21:12  07/14 07:21:12
Phase 2:    07/14 07:21:12  07/14 07:21:13  1 second
Phase 3:    07/14 07:21:13  07/14 07:21:15  2 seconds
Phase 4:    07/14 07:21:15  07/14 07:21:15
Phase 5:    Skipped
Phase 6:    07/14 07:21:15  07/14 07:21:16  1 second
Phase 7:    07/14 07:21:16  07/14 07:21:16

Total run time: 4 seconds
tola5 Posted July 14, 2019 Author

2 minutes ago, Squid said: remove the -n

It gives this back then:

Phase 1 - find and verify superblock...
sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 97
resetting superblock realtime bitmap ino pointer to 97
sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 98
resetting superblock realtime summary ino pointer to 98
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed.  Mount the filesystem to replay the log, and unmount it before re-running xfs_repair.  If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair.  Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
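The error message itself describes the preferred first step: mount the filesystem so XFS can replay its journal, then unmount and run xfs_repair again. Roughly, as a sketch in Unraid terms:

```shell
# Sketch: let XFS replay its own log before resorting to -L.
# 1. Stop the array and start it normally; Unraid will try to mount disk 4,
#    and a successful mount replays the journal.
# 2. Stop the array again and restart it in maintenance mode.
# 3. Re-run the repair, this time without -n so fixes are actually written:
xfs_repair /dev/md4
```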
tola5 Posted July 14, 2019 Author Share Posted July 14, 2019 guest it was for I was rebulding it ? so 2 bad chose as I gues or ? but not too bad got backup or ? Quote Link to comment
itimpi Posted July 14, 2019

1 hour ago, tola5 said:
It gives this back then:
Phase 1 - find and verify superblock...
sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 97
resetting superblock realtime bitmap ino pointer to 97
sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 98
resetting superblock realtime summary ino pointer to 98
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed.  Mount the filesystem to replay the log, and unmount it before re-running xfs_repair.  If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair.  Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

That is not at all unusual! You need to provide the -L flag to get past this. Normally, despite the warning, there is no resulting data loss.
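For the record, the last-resort command itimpi is referring to is simply (destructive to the journal, so best only after an attempted mount has failed):

```shell
# -L zeroes (destroys) the XFS metadata log before repairing.
# It can lose the handful of transactions still sitting in the journal,
# which is why the tool asks you to attempt a mount first.
xfs_repair -L /dev/md4
```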
tola5 Posted July 14, 2019 Author

29 minutes ago, itimpi said: That is not at all unusual! You need to provide the -L flag to get past this. Normally, despite the warning, there is no resulting data loss.

Thanks, the data is back on the emulated disk and it is syncing now.
phanb Posted March 8, 2020

I have had to do this at least twice a month... any ideas why? I am using ECC RAM.
JorgeB Posted March 9, 2020

13 hours ago, phanb said: I have had to do this at least twice a month... any ideas why? I am using ECC RAM.

We need the diagnostics; they are most useful if grabbed after the server has been running for a few days, once the problem happens again, and before rebooting.
phanb Posted March 11, 2020 (edited)

@johnnie.black where should I post my diags... I'm having terrible issues since upgrading to 6.8

phanstash-diagnostics-20200310-1848.zip

Edited March 11, 2020 by phanb
JorgeB Posted March 11, 2020

6 hours ago, phanb said: @johnnie.black where should I post my diags... I'm having terrible issues since upgrading to 6.8

Two visible issues:
- The CPU is overheating; check the cooling.
- There are what look like connection/power issues with multiple disks; check all connections and/or try a different PSU if available.
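A quick way to spot-check both findings from the console, as a sketch (tool availability and device names like /dev/sdb are assumptions, not taken from the diagnostics):

```shell
# Hypothetical spot checks for the two issues above.
sensors | grep -i 'core'                   # CPU core temperatures (lm-sensors)
smartctl -a /dev/sdb | grep -i 'udma_crc'  # a rising CRC error count usually means cabling/power
```

Re-running the smartctl check after reseating cables shows whether the CRC counter has stopped increasing; the raw value itself never resets.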