cdixon
Posted March 15, 2020

I have a 40 TB Unraid server set up at home that I use as a media server. The server has five drives used to store the video files and two parity drives. In the three years I have had the server running, disk 2 has had occasional errors, but it has never shown any signs of impending failure. The other day I deleted a file from the server over the network, and now disk 2 is unmountable. I tried unplugging the SATA cables and plugging them back in, but that did not help. I then ran a SMART short self-test on disk 2, and these are the results:

  # Attribute Name           Flag   Value Worst Threshold Type     Updated Failed Raw Value
  1 Raw read error rate      0x000f 117   099   006       Pre-fail Always  Never  137499552
  3 Spin up time             0x0003 091   090   000       Pre-fail Always  Never  0
  4 Start stop count         0x0032 097   097   020       Old age  Always  Never  3638
  5 Reallocated sector count 0x0033 100   100   010       Pre-fail Always  Never  0
  7 Seek error rate          0x000f 084   060   030       Pre-fail Always  Never  4612661310
  9 Power on hours           0x0032 074   074   000       Old age  Always  Never  22887 (2y, 7m, 10d, 15h)
 10 Spin retry count         0x0013 100   100   097       Pre-fail Always  Never  0
 12 Power cycle count        0x0032 100   100   020       Old age  Always  Never  204
183 Runtime bad block        0x0032 100   100   000       Old age  Always  Never  0
184 End-to-end error         0x0032 100   100   099       Old age  Always  Never  0
187 Reported uncorrect       0x0032 100   100   000       Old age  Always  Never  0
188 Command timeout          0x0032 100   099   000       Old age  Always  Never  25770196998
189 High fly writes          0x003a 100   100   000       Old age  Always  Never  0
190 Airflow temperature cel  0x0022 071   062   045       Old age  Always  Never  29 (min/max 29/30)
191 G-sense error rate       0x0032 100   100   000       Old age  Always  Never  0
192 Power-off retract count  0x0032 081   081   000       Old age  Always  Never  39551
193 Load cycle count         0x0032 071   071   000       Old age  Always  Never  59687
194 Temperature celsius      0x0022 029   040   000       Old age  Always  Never  29 (0 16 0 0 0)
195 Hardware ECC recovered   0x001a 117   099   000       Old age  Always  Never  137499552
197 Current pending sector   0x0012 100   100   000       Old age  Always  Never  0
198 Offline uncorrectable    0x0010 100   100   000       Old age  Offline Never  0
199 UDMA CRC error count     0x003e 200   200   000       Old age  Always  Never  0
240 Head flying hours        0x0000 100   253   000       Old age  Offline Never  5054 (115 152 0)
241 Total lbas written       0x0000 100   253   000       Old age  Offline Never  231932062510
242 Total lbas read          0x0000 100   253   000       Old age  Offline Never  539967992669

After this I ran an XFS repair with the -nv option (check only, no modifications), and this is what came up:

Phase 1 - find and verify superblock...
        - block cache size set to 2960240 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 245530 tail block 245514
        - scan filesystem freespace and inode maps...
ir_freecount/free mismatch, inode chunk 7/59975168, freecount 31 nfree 7
agi_freecount 162, counted 193 in ag 7
agi_freecount 136, counted 137 in ag 6
agi unlinked bucket 58 is 50974650 in ag 6 (inode=12935876538)
ir_freecount/free mismatch, inode chunk 5/4960608, freecount 63 nfree 1
agi_freecount 150, counted 213 in ag 5
sb_ifree 1150, counted 1246
sb_fdblocks 24503680, counted 26690340
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
imap claims in-use inode 10742378880 is free, correcting imap
        - agno = 6
imap claims in-use inode 12935876540 is free, correcting imap
        - agno = 7
imap claims in-use inode 15092360749 is free, correcting imap
imap claims in-use inode 15092360750 is free, correcting imap
imap claims in-use inode 15092360751 is free, correcting imap
imap claims in-use inode 15092360752 is free, correcting imap
imap claims in-use inode 15092360753 is free, correcting imap
imap claims in-use inode 15092360754 is free, correcting imap
imap claims in-use inode 15092360755 is free, correcting imap
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 4
        - agno = 1
        - agno = 5
        - agno = 3
        - agno = 7
        - agno = 6
No modify flag set, skipping phase 5
Inode allocation btrees are too corrupted, skipping phases 6 and 7
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Sun Mar 15 17:01:16 2020

Phase           Start           End             Duration
Phase 1:        03/15 17:01:12  03/15 17:01:12
Phase 2:        03/15 17:01:12  03/15 17:01:12
Phase 3:        03/15 17:01:12  03/15 17:01:16  4 seconds
Phase 4:        03/15 17:01:16  03/15 17:01:16
Phase 5:        Skipped
Phase 6:        Skipped
Phase 7:        Skipped

Total run time: 4 seconds

What should I do to set everything back to normal?
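In case it helps anyone reading along: the same SMART data can be pulled from the command line with smartmontools. A minimal sketch, where the device path /dev/sdb is a hypothetical stand-in for disk 2's raw device (check the real one in the Unraid GUI under Main):

```shell
#!/bin/sh
# Hypothetical raw-device path for disk 2; substitute the real one from the Unraid GUI.
DEV="${DEV:-/dev/sdb}"

if command -v smartctl >/dev/null 2>&1 && [ -e "$DEV" ]; then
    smartctl -t short "$DEV"     # start the short self-test (takes a couple of minutes)
    smartctl -A "$DEV"           # print the attribute table shown above
    smartctl -l selftest "$DEV"  # show the self-test log once the test finishes
    status="ran"
else
    echo "smartctl or $DEV unavailable; commands listed for reference only"
    status="skipped"
fi
```

Worth noting that an unmountable filesystem is not by itself a SMART failure: with Reallocated sector count, Current pending sector, and Offline uncorrectable all at 0 in the table above, the symptoms point at filesystem corruption rather than a physically dying disk.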
JorgeB
Posted March 16, 2020

Run xfs_repair without -n, or nothing will be done; if it asks for -L, use it.
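To spell that out for anyone finding this thread later: start the array in maintenance mode and run the repair against the md device, so parity stays in sync. A sketch, assuming disk 2 maps to /dev/md2 (that mapping is the usual Unraid convention, but verify it on your system; newer releases may use /dev/md2p1):

```shell
#!/bin/sh
# Repair sketch for disk 2, assuming the Unraid md device is /dev/md2.
# Repairing through /dev/mdX keeps parity updated; running xfs_repair on the
# raw /dev/sdX partition directly would invalidate parity.
DEV="${DEV:-/dev/md2}"

if [ -e "$DEV" ]; then
    xfs_repair -v "$DEV"      # real repair: no -n this time
    # If xfs_repair refuses to run because the log is dirty, try mounting and
    # unmounting the disk once to replay the log; if that fails, zero the log:
    # xfs_repair -L "$DEV"    # destroys the log; recent metadata updates may be lost
    status="repaired"
else
    echo "$DEV not present; sketch only"
    status="skipped"
fi
```

After the repair, check the root of the disk for a lost+found directory: orphaned inodes (such as the unlinked bucket flagged in the -nv output) get reconnected there under numeric names.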