Alex.vision Posted August 26, 2021 (edited)

OK, it finished. It looks like it made repairs and reset the superblock. I will try to remount it with UD, and if that succeeds I will traverse the files. I am assuming I should do the same thing to old disk4, and then we can decide whether one of them makes a better candidate for putting back in the array, following trurl's suggestion: "If the replacement looks good as an Unassigned Device, maybe we can use it as disk4 in a New Config with Parity Valid and then try to rebuild disk1 to a new disk. And, since you will have original disk4 and original disk1, if there are any problems maybe files can be copied from those."

File System Check output

FS: crypto_LUKS

/sbin/xfs_repair /dev/mapper/WDC_WD80EFZX-68UW8N0_VKGW28LX 2>&1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
totally zeroed log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 0
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:639326) is ahead of log (0:0).
Format log to cycle 4.
done

Edited August 26, 2021 by Alex.vision
Alex.vision Posted August 26, 2021

I've tested new disk4, and I will swap it for old disk4 when I get off work and test that one as well. The files all seem to be there: I traversed some of the folders with Midnight Commander and opened a few small text files to read their contents, and it all worked. One thing that concerned me, though, was that I saw the write count go up after I had mounted the drive. I've attached a snippet of the main GUI for reference. Did these writes potentially invalidate the parity of the array, or could they have been the superblock repair and not part of the array? If I try the same steps on old disk4, should I make any changes?
JorgeB Posted August 27, 2021

11 hours ago, Alex.vision said:
Did these writes potentially invalidate the parity

Any writes will put parity out of sync. UD supports a read-only mode, which would keep parity in sync, but it can't be used when the disk needs a filesystem check first, as in this case.
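For reference, a read-only mount is what guarantees no writes reach the disk. A minimal command-line sketch of the same idea (the mount-point name is a placeholder, and the device name is taken from the earlier xfs_repair run; UD does the equivalent through its GUI):

```shell
# Hypothetical mount point; -o ro refuses all writes, so nothing on
# the disk changes and array parity stays in sync.
mkdir -p /mnt/ro-test
mount -o ro /dev/mapper/WDC_WD80EFZX-68UW8N0_VKGW28LX /mnt/ro-test
# ...browse and copy files out as needed...
umount /mnt/ro-test
```

Note this only works on a filesystem that is clean enough to mount; a repair, as in this case, necessarily writes to the disk.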
Alex.vision Posted August 28, 2021

That was what I was afraid of. Oh well, some data loss is better than all data lost. I did try to run old disk4 in the test server, but the drive is not detected by the OS for more than a few seconds. I can see the drive on the Main page, but within 10-30 seconds it disappears completely. I'm guessing that drive is failing a lot worse than it was. I am going to try plugging it back into the server I pulled it from, in case there is some strange reason the test server doesn't recognize it. If it still isn't recognized, I think I will call old disk4 dead, and hopefully I can proceed with trurl's idea to set a New Config with Parity Valid and replace my failing disk1. I will attempt to get old disk4 back online tonight, just so I have it as a backup, then post here for more help on trurl's idea.
Alex.vision Posted September 5, 2021

OK, I have finally had enough time to test old disk4, which has now completely failed. So I guess I am ready to put the new disk4 into the array. The array is currently running and emulating disk4. I would like to install new disk4 in its place, tell the system that parity is still good, and rebuild disk1 onto its replacement per trurl's idea. How should I proceed?
trurl Posted September 5, 2021

As mentioned, parity will be somewhat out of sync, and I'm not sure how encryption will interact with this.

1 hour ago, Alex.vision said:
install new disk4 in place, tell the system that parity is still good, and rebuild disk1

Tools - New Config
Assign disks as needed
Check the box saying parity is valid
Also check the box for Maintenance mode
Start the array.

At this point, Unraid will consider all disks to be enabled with parity valid.

Stop the array
Unassign disk1
Start the array with disk1 unassigned.

At this point, Unraid will be emulating disk1 from parity. If emulated disk1 mounts, you should be able to see its contents. If not, you might have to do a filesystem repair. If you get that far, post new diagnostics and a screenshot of Main - Array Devices. If you have any questions or things don't seem to be going according to plan, let us know.
Alex.vision Posted September 5, 2021

OK, will do. Thanks
Alex.vision Posted September 6, 2021

I followed the steps listed above. The array does start, and if I hover over the red "x" for disk1, it says "Device is missing (Disabled), Contents Emulated". Where it would show the drive's size, file system, and the folder icon to view its contents, it shows "Unmountable: not mounted". So it looks like I will need to do some filesystem repair. Attached are the files requested in the previous post.

media-diagnostics-20210905-2226.zip
trurl Posted September 6, 2021

7 minutes ago, Alex.vision said:
some file system repair

You can try to repair the emulated disk. When repairing disks in the array, you must repair the md device, or you will invalidate parity. Disk1 is md1.

https://wiki.unraid.net/Manual/Storage_Management#Running_the_Test_using_the_webGui
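The linked procedure can also be sketched from the command line. This is a sketch, assuming an unencrypted xfs disk1; on an encrypted array the target would be /dev/mapper/md1 instead. The point is the device choice: writes to /dev/md1 go through the parity engine, writes to the raw /dev/sdX device do not.

```shell
# Always target the md device, never the raw sdX device, so that
# parity is updated along with the data disk.
# Dry run first: -n reports problems without modifying anything.
xfs_repair -n /dev/md1

# If the dry run looks reasonable, run the actual repair:
xfs_repair -v /dev/md1
```

The array must be started in Maintenance mode so the filesystem is not mounted while the repair runs.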
Alex.vision Posted September 6, 2021

Should I conduct this repair with the array started, in maintenance mode?
itimpi Posted September 6, 2021

1 hour ago, Alex.vision said:
Should I conduct this repair with the array started, in maintenance mode?

If doing it via the GUI (recommended), then you have to be in Maintenance mode, as you cannot run a repair against a mounted drive.
Alex.vision Posted September 6, 2021

10 minutes ago, itimpi said:
via the GUI (recommended)

I'm not seeing the Check Filesystem Status option in the GUI; everything after the basic disk information is blank.
JorgeB Posted September 6, 2021

Click on the disk and set the filesystem to xfs.
Alex.vision Posted September 6, 2021

Should I set it to xfs - encrypted, as that is what all my other disks are, or does that not matter for this emulated filesystem?
JorgeB Posted September 6, 2021

If it is encrypted, yes.
Alex.vision Posted September 7, 2021

19 hours ago, JorgeB said:
set the filesystem to xfs.

Setting the disk's filesystem to xfs-encrypted made no difference. I was unable to click Apply after making the change. I tried clicking Done, but the option box reset to Auto as soon as I refreshed the page. Any other options?
JorgeB Posted September 7, 2021

That's strange, it should work, but you can always run xfs_repair manually after starting the array in maintenance mode:

xfs_repair -v /dev/mapper/md1
Alex.vision Posted September 7, 2021 (edited)

OK, the command was able to run, but it came back with an error in Phase 2.

root@Media:~# xfs_repair -v /dev/mapper/md1
Phase 1 - find and verify superblock...
bad primary superblock - bad CRC in superblock !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
        - block cache size set to 705704 entries
sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 96
resetting superblock root inode pointer to 96
sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 97
resetting superblock realtime bitmap inode pointer to 97
sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 98
resetting superblock realtime summary inode pointer to 98
Phase 2 - using internal log
        - zero log...
Log inconsistent (didn't find previous header)
failed to find log head
zero_log: cannot find log head/tail (xlog_find_tail=5)
ERROR: The log head and/or tail cannot be discovered. Attempt to mount the
filesystem to replay the log or use the -L option to destroy the log and
attempt a repair.

I can see it says to run with the -L option, but I want to make sure I should follow that. Would that then look like this:

xfs_repair -v -L /dev/mapper/md1
JorgeB Posted September 7, 2021

That, or use -vL; it's the same.
Alex.vision Posted September 7, 2021

OK, here is the output from running that.

Phase 1 - find and verify superblock...
        - block cache size set to 705704 entries
sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 96
resetting superblock root inode pointer to 96
sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 97
resetting superblock realtime bitmap inode pointer to 97
sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 98
resetting superblock realtime summary inode pointer to 98
Phase 2 - using internal log
        - zero log...
Log inconsistent (didn't find previous header)
failed to find log head
zero_log: cannot find log head/tail (xlog_find_tail=5)
        - scan filesystem freespace and inode maps...
Metadata CRC error detected at 0x439496, xfs_agf block 0xfffffff1/0x200
agf has bad CRC for ag 2
Metadata CRC error detected at 0x4643f6, xfs_agi block 0xfffffff2/0x200
agi has bad CRC for ag 2
bad uuid fd73aa3e-622b-44bf-3e5b-b2fb8d2f2bf7 for agi 2
reset bad agi for ag 2
Metadata CRC error detected at 0x439034, xfs_agfl block 0xfffffff3/0x200
agfl has bad CRC for ag 2
bad agbno 1641858263 in agfl, agno 2
bad agbno 2339228161 in agfl, agno 2
bad agbno 1642631695 in agfl, agno 2
bad agbno 1671044755 for btbno root, agno 2
Metadata CRC error detected at 0x439496, xfs_agf block 0x17fffffe9/0x200
agf has bad CRC for ag 3
Metadata CRC error detected at 0x4643f6, xfs_agi block 0x17fffffea/0x200
agi has bad CRC for ag 3
bad uuid fd73aa3e-622b-44bf-0597-d609e8c24b95 for agi 3
reset bad agi for ag 3
Metadata CRC error detected at 0x439034, xfs_agfl block 0x17fffffeb/0x200
agfl has bad CRC for ag 3
bad agbno 644552049 in agfl, agno 3
bad agbno 404927693 in agfl, agno 3
bad agbno 2215831049 in agfl, agno 3
bad agbno 3229275331 in agfl, agno 3
bad agbno 4149480898 for btbno root, agno 3
bad agbno 4135602983 for btbcnt root, agno 3
agf_freeblks 135063700, counted 0 in ag 3
agf_longest 126537429, counted 0 in ag 3
bad agbno 1579711676 for inobt root, agno 3
bad agbno 3413774725 for finobt root, agno 3
agi_count 328305691, counted 0 in ag 3
agi_freecount 1393115466, counted 0 in ag 3
agi_freecount 1393115466, counted 0 in ag 3 finobt
agi unlinked bucket 0 is 250143197 in ag 3 (inode=6692594141)
agi unlinked bucket 1 is 1560759252 in ag 3 (inode=8003210196)
Metadata CRC error detected at 0x439496, xfs_agf block 0x1ffffffe1/0x200
agf has bad CRC for ag 4
Metadata CRC error detected at 0x4643f6, xfs_agi block 0x1ffffffe2/0x200
agi has bad CRC for ag 4
bad uuid fd73aa3e-622b-44bf-7625-632f8c8a8802 for agi 4
reset bad agi for ag 4
Metadata CRC error detected at 0x439034, xfs_agfl block 0x1ffffffe3/0x200
agfl has bad CRC for ag 4
bad agbno 2131417288 in agfl, agno 4
bad agbno 2293839329 in agfl, agno 4
bad agbno 2213870180 in agfl, agno 4
bad agbno 1802937749 for btbno root, agno 4
bad agbno 2889908281 for btbcnt root, agno 4
agf_freeblks 153475745, counted 0 in ag 4
agf_longest 146910939, counted 0 in ag 4
bad agbno 996091709 for inobt root, agno 4
bad agbno 1847248807 for finobt root, agno 4
agi_count 4037017944, counted 0 in ag 4
agi_freecount 2999091546, counted 0 in ag 4
agi_freecount 2999091546, counted 0 in ag 4 finobt
agi unlinked bucket 0 is 2760841074 in ag 4 (inode=11350775666)
agi unlinked bucket 1 is 1851807415 in ag 4 (inode=10441742007)
Metadata CRC error detected at 0x439496, xfs_agf block 0x7ffffff9/0x200
agf has bad CRC for ag 1
Metadata CRC error detected at 0x4643f6, xfs_agi block 0x7ffffffa/0x200
agi has bad CRC for ag 1
bad uuid fd73aa3e-622b-44bf-5f34-39ea895e7548 for agi 1
reset bad agi for ag 1
Metadata CRC error detected at 0x439034, xfs_agfl block 0x7ffffffb/0x200
agfl has bad CRC for ag 1
bad agbno 4237154383 in agfl, agno 1
bad agbno 2830926106 in agfl, agno 1
bad agbno 4292199968 in agfl, agno 1
bad agbno 2777775626 in agfl, agno 1
bad agbno 3746137080 for btbno root, agno 1
Metadata CRC error detected at 0x439496, xfs_agf block 0x1/0x200
agf has bad CRC for ag 0
Metadata CRC error detected at 0x4643f6, xfs_agi block 0x2/0x200
agi has bad CRC for ag 0
bad uuid fd73aa3e-622b-44bf-59be-10217cae3efc for agi 0
reset bad agi for ag 0
Metadata CRC error detected at 0x439034, xfs_agfl block 0x3/0x200
agfl has bad CRC for ag 0
bad agbno 3966867646 in agfl, agno 0
bad agbno 3205520806 in agfl, agno 0
bad agbno 3381889094 in agfl, agno 0
bad agbno 1597930656 in agfl, agno 0
bad agbno 673070724 for btbno root, agno 0
bad agbno 4060831469 for btbcnt root, agno 0
agf_freeblks 128648023, counted 0 in ag 0
agf_longest 61704483, counted 0 in ag 0
bad agbno 2624749737 for inobt root, agno 0
bad agbno 2992810618 for finobt root, agno 0
agi_count 2630760731, counted 0 in ag 0
agi_freecount 3767143734, counted 0 in ag 0
agi_freecount 3767143734, counted 0 in ag 0 finobt
agi unlinked bucket 0 is 451585430 in ag 0 (inode=451585430)
agi unlinked bucket 1 is 3349175922 in ag 0 (inode=3349175922)
Metadata CRC error detected at 0x439496, xfs_agf block 0x27fffffd9/0x200
agf has bad CRC for ag 5
Metadata CRC error detected at 0x4643f6, xfs_agi block 0x27fffffda/0x200
agi has bad CRC for ag 5
bad uuid fd73aa3e-622b-44bf-2c80-c2a468d102fb for agi 5
reset bad agi for ag 5
Metadata CRC error detected at 0x439034, xfs_agfl block 0x27fffffdb/0x200
agfl has bad CRC for ag 5
bad agbno 1569139985 in agfl, agno 5
bad agbno 306200983 in agfl, agno 5
bad agbno 3658668364 in agfl, agno 5
bad agbno 392989890 in agfl, agno 5
bad agbno 2657965464 for btbno root, agno 5
bad agbno 974593347 for btbcnt root, agno 5
agf_freeblks 144168535, counted 0 in ag 5
agf_longest 78225295, counted 0 in ag 5
bad agbno 622988518 for inobt root, agno 5
bad agbno 2532197311 for finobt root, agno 5
agi_count 2820345283, counted 0 in ag 5
agi_freecount 4063917965, counted 0 in ag 5
agi_freecount 4063917965, counted 0 in ag 5 finobt
agi unlinked bucket 0 is 2037198905 in ag 5 (inode=12774617145)
agi unlinked bucket 1 is 2366659336 in ag 5 (inode=10956593928)
Metadata CRC error detected at 0x439496, xfs_agf block 0x2ffffffd1/0x200
agf has bad CRC for ag 6
Metadata CRC error detected at 0x4643f6, xfs_agi block 0x2ffffffd2/0x200
agi has bad CRC for ag 6
bad uuid fd73aa3e-622b-44bf-47b3-1995c81232d1 for agi 6
reset bad agi for ag 6
Metadata CRC error detected at 0x439034, xfs_agfl block 0x2ffffffd3/0x200
agfl has bad CRC for ag 6
bad agbno 2381312845 in agfl, agno 6
bad agbno 3566701974 in agfl, agno 6
bad agbno 4177798121 in agfl, agno 6
bad agbno 597170615 in agfl, agno 6
bad agbno 3999794661 for btbno root, agno 6
bad agbno 1922417658 for btbcnt root, agno 6
agf_freeblks 146457839, counted 0 in ag 6
agf_longest 30985540, counted 0 in ag 6
bad agbno 3005714211 for inobt root, agno 6
bad agbno 1882974516 for finobt root, agno 6
agi_count 3357937912, counted 0 in ag 6
agi_freecount 1984401486, counted 0 in ag 6
agi_freecount 1984401486, counted 0 in ag 6 finobt
agi unlinked bucket 0 is 992277125 in ag 6 (inode=13877179013)
agi unlinked bucket 1 is 3436627124 in ag 6 (inode=16321529012)
Metadata CRC error detected at 0x43cfad, xfs_cntbt block 0x163baf340/0x1000
btree block 2/209149546 is suspect, error -74
bad magic # 0xcc2d5923 in btcnt block 2/209149546
agf_freeblks 178019149, counted 0 in ag 2
agf_longest 60662350, counted 0 in ag 2
bad agbno 2260732410 for inobt root, agno 2
bad agbno 2087574938 for finobt root, agno 2
agi_count 1004421552, counted 0 in ag 2
agi_freecount 2528530697, counted 0 in ag 2
agi_freecount 2528530697, counted 0 in ag 2 finobt
agi unlinked bucket 0 is 1253554967 in ag 2 (inode=5548522263)
agi unlinked bucket 1 is 2222100300 in ag 2 (inode=6517067596)
Metadata CRC error detected at 0x43cfad, xfs_cntbt block 0xf07ad5a8/0x1000
btree block 1/235887286 is suspect, error -74
bad magic # 0x8b760587 in btcnt block 1/235887286
agf_freeblks 108824985, counted 0 in ag 1
agf_longest 53443049, counted 0 in ag 1
bad agbno 1351678525 for inobt root, agno 1
bad agbno 1608322331 for finobt root, agno 1
agi_count 2997977395, counted 0 in ag 1
agi_freecount 434274984, counted 0 in ag 1
agi_freecount 434274984, counted 0 in ag 1 finobt
agi unlinked bucket 0 is 2199689951 in ag 1 (inode=2199689951)
agi unlinked bucket 1 is 459392458 in ag 1 (inode=2606876106)
Metadata CRC error detected at 0x439496, xfs_agf block 0x37fffffc9/0x200
agf has bad CRC for ag 7
Metadata CRC error detected at 0x4643f6, xfs_agi block 0x37fffffca/0x200
agi has bad CRC for ag 7
bad uuid fd73aa3e-622b-44bf-e31a-0e7272b8d318 for agi 7
reset bad agi for ag 7
Metadata CRC error detected at 0x439034, xfs_agfl block 0x37fffffcb/0x200
agfl has bad CRC for ag 7
bad agbno 2747564586 in agfl, agno 7
bad agbno 921949115 in agfl, agno 7
bad agbno 615854640 in agfl, agno 7
bad agbno 3050775716 for btbno root, agno 7
bad agbno 3753715731 for btbcnt root, agno 7
agf_freeblks 74170, counted 0 in ag 7
agf_longest 46661, counted 0 in ag 7
bad agbno 1185514514 for inobt root, agno 7
bad agbno 165213724 for finobt root, agno 7
agi_count 2709549327, counted 0 in ag 7
agi_freecount 651660831, counted 0 in ag 7
agi_freecount 651660831, counted 0 in ag 7 finobt
agi unlinked bucket 0 is 3628170829 in ag 7 (inode=16513072717)
agi unlinked bucket 1 is 3131662536 in ag 7 (inode=16016564424)
sb_fdblocks 1952984353, counted 32
root inode chunk not found
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
Metadata corruption detected at 0x435c43, xfs_inode block 0x60/0x4000
bad CRC for inode 96
bad next_unlinked 0x88fbf75d on inode 96
bad (negative) size -8962325649000601738 on inode 96
bad CRC for inode 96, will rewrite
bad next_unlinked 0x88fbf75d on inode 96, resetting next_unlinked
bad (negative) size -8962325649000601738 on inode 96
cleared root inode 96
imap claims in-use inode 99 is free, correcting imap
imap claims in-use inode 100 is free, correcting imap
imap claims in-use inode 101 is free, correcting imap
imap claims in-use inode 102 is free, correcting imap
imap claims in-use inode 103 is free, correcting imap
imap claims in-use inode 104 is free, correcting imap
imap claims in-use inode 105 is free, correcting imap
imap claims in-use inode 106 is free, correcting imap
imap claims in-use inode 107 is free, correcting imap
imap claims in-use inode 108 is free, correcting imap
imap claims in-use inode 109 is free, correcting imap
imap claims in-use inode 110 is free, correcting imap
imap claims in-use inode 111 is free, correcting imap
imap claims in-use inode 112 is free, correcting imap
imap claims in-use inode 113 is free, correcting imap
imap claims in-use inode 114 is free, correcting imap
imap claims in-use inode 115 is free, correcting imap
imap claims in-use inode 116 is free, correcting imap
imap claims in-use inode 117 is free, correcting imap
imap claims in-use inode 118 is free, correcting imap
imap claims in-use inode 119 is free, correcting imap
imap claims in-use inode 120 is free, correcting imap
imap claims in-use inode 121 is free, correcting imap
imap claims in-use inode 122 is free, correcting imap
imap claims in-use inode 123 is free, correcting imap
imap claims in-use inode 124 is free, correcting imap
imap claims in-use inode 125 is free, correcting imap
imap claims in-use inode 126 is free, correcting imap
imap claims in-use inode 127 is free, correcting imap
imap claims in-use inode 128 is free, correcting imap
imap claims in-use inode 129 is free, correcting imap
imap claims in-use inode 130 is free, correcting imap
imap claims in-use inode 131 is free, correcting imap
imap claims in-use inode 132 is free, correcting imap
imap claims in-use inode 133 is free, correcting imap
imap claims in-use inode 134 is free, correcting imap
imap claims in-use inode 135 is free, correcting imap
imap claims in-use inode 136 is free, correcting imap
imap claims in-use inode 137 is free, correcting imap
imap claims in-use inode 138 is free, correcting imap
imap claims in-use inode 139 is free, correcting imap
imap claims in-use inode 140 is free, correcting imap
imap claims in-use inode 141 is free, correcting imap
imap claims in-use inode 142 is free, correcting imap
imap claims in-use inode 143 is free, correcting imap
imap claims in-use inode 144 is free, correcting imap
imap claims in-use inode 145 is free, correcting imap
imap claims in-use inode 146 is free, correcting imap
imap claims in-use inode 147 is free, correcting imap
imap claims in-use inode 148 is free, correcting imap
imap claims in-use inode 149 is free, correcting imap
imap claims in-use inode 150 is free, correcting imap
imap claims in-use inode 151 is free, correcting imap
imap claims in-use inode 152 is free, correcting imap
imap claims in-use inode 153 is free, correcting imap
imap claims in-use inode 154 is free, correcting imap
imap claims in-use inode 155 is free, correcting imap
imap claims in-use inode 156 is free, correcting imap
imap claims in-use inode 157 is free, correcting imap
imap claims in-use inode 158 is free, correcting imap
imap claims in-use inode 159 is free, correcting imap
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
root inode lost
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 7
        - agno = 6
        - agno = 3
        - agno = 1
entry "Anime" in shortform directory 99 references non-existent inode 2147483744
junking entry "Anime" in directory inode 99
        - agno = 4
        - agno = 5
entry "Movies" in shortform directory 99 references non-existent inode 10688764085
junking entry "Movies" in directory inode 99
entry "Clips" in shortform directory 99 references non-existent inode 33371146
junking entry "Clips" in directory inode 99
entry "emby" in shortform directory 99 references non-existent inode 202647834
junking entry "emby" in directory inode 99
entry "Music" in shortform directory 99 references non-existent inode 6442451040
junking entry "Music" in directory inode 99
entry "Media2" in shortform directory 99 references non-existent inode 67072496
junking entry "Media2" in directory inode 99
corrected i8 count in directory 99, was 2, now 0
corrected directory 99 size, was 10, now 6
entry ".." at block 0 offset 80 in directory inode 100 references non-existent inode 12884901984
entry "A Certain Scientific Railgun - S02E20 - Febri.nfo" at block 1 offset 1840 in directory inode 100 references non-existent inode 33371104
clearing inode number in entry at offset 1840...
entry "A Certain Scientific Railgun - S02E21 - Darkness-thumb.jpg" at block 1 offset 1904 in directory inode 100 references non-existent inode 33371105
clearing inode number in entry at offset 1904...
entry "A Certain Scientific Railgun - S02E21 - Darkness.mkv" at block 1 offset 1976 in directory inode 100 references non-existent inode 33371106
clearing inode number in entry at offset 1976...
entry "A Certain Scientific Railgun - S02E21 - Darkness.nfo" at block 1 offset 2040 in directory inode 100 references non-existent inode 33371107
clearing inode number in entry at offset 2040...
entry "A Certain Scientific Railgun - S02E22 - Study-thumb.jpg" at block 1 offset 2104 in directory inode 100 references non-existent inode 33371108
clearing inode number in entry at offset 2104...
entry "A Certain Scientific Railgun - S02E22 - Study.mkv" at block 1 offset 2176 in directory inode 100 references non-existent inode 33371109
clearing inode number in entry at offset 2176...
entry "A Certain Scientific Railgun - S02E22 - Study.nfo" at block 1 offset 2240 in directory inode 100 references non-existent inode 33371110
clearing inode number in entry at offset 2240...
entry "A Certain Scientific Railgun - S02E23 - Dawn of a Revolution-thumb.jpg" at block 1 offset 2304 in directory inode 100 references non-existent inode 33371111
clearing inode number in entry at offset 2304...
entry "A Certain Scientific Railgun - S02E23 - Dawn of a Revolution.mkv" at block 1 offset 2392 in directory inode 100 references non-existent inode 33371112
clearing inode number in entry at offset 2392...
entry "A Certain Scientific Railgun - S02E23 - Dawn of a Revolution.nfo" at block 1 offset 2472 in directory inode 100 references non-existent inode 33371113
clearing inode number in entry at offset 2472...
entry "A Certain Scientific Railgun - S02E24 - Eternal Party-thumb.jpg" at block 1 offset 2552 in directory inode 100 references non-existent inode 33371114
clearing inode number in entry at offset 2552...
entry "A Certain Scientific Railgun - S02E24 - Eternal Party.mkv" at block 1 offset 2632 in directory inode 100 references non-existent inode 33371115
clearing inode number in entry at offset 2632...
entry "A Certain Scientific Railgun - S02E24 - Eternal Party.nfo" at block 1 offset 2704 in directory inode 100 references non-existent inode 33371116
clearing inode number in entry at offset 2704...
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - reset superblock...
Phase 6 - check inode connectivity...
reinitializing root directory
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
entry ".." in directory inode 100 points to non-existent inode 12884901984, marking entry to be junked
bad hash table for directory inode 100 (no data entry): rebuilding
rebuilding directory inode 100
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected dir inode 99,
151ab214c3c0: Badness in key lookup (length)
bp=(bno 0x1fd33e0, len 4096 bytes) key=(bno 0x1fd33e0, len 16384 bytes)
151ab214c3c0: Badness in key lookup (length)
bp=(bno 0x1fd3400, len 4096 bytes) key=(bno 0x1fd3400, len 16384 bytes)
moving to lost+found
disconnected dir inode 100, moving to lost+found
Phase 7 - verify and correct link counts...
resetting inode 99 nlinks from 8 to 2
resetting inode 33371104 nlinks from 2 to 4
Maximum metadata LSN (1817809926:28117709) is ahead of log (1:2).
Format log to cycle 1817809929.

        XFS_REPAIR Summary    Tue Sep 7 10:16:44 2021

Phase           Start           End             Duration
Phase 1:        09/07 10:14:09  09/07 10:14:09
Phase 2:        09/07 10:14:09  09/07 10:14:48  39 seconds
Phase 3:        09/07 10:14:48  09/07 10:14:48
Phase 4:        09/07 10:14:48  09/07 10:14:48
Phase 5:        09/07 10:14:48  09/07 10:14:48
Phase 6:        09/07 10:14:48  09/07 10:14:48
Phase 7:        09/07 10:14:48  09/07 10:14:48

Total run time: 39 seconds
done
JorgeB Posted September 7, 2021

Should be mountable now.
trurl Posted September 7, 2021

16 minutes ago, JorgeB said:
Should be mountable now.

After starting the array in normal mode, be sure to check your lost+found share. Typically it will have some files and folders the repair couldn't name or put into their correct folders.
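Surveying lost+found from the console can be quicker than browsing shares; a sketch, using the usual Unraid per-disk path for disk1 (adjust if your layout differs):

```shell
# Recovered items are often named by inode number, so identify them
# by content rather than by name.
ls /mnt/disk1/lost+found | head
find /mnt/disk1/lost+found -type f | wc -l   # how many files were recovered
# 'file' guesses each file's type, which helps re-sort media files:
find /mnt/disk1/lost+found -maxdepth 1 -type f -exec file {} +
```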
Alex.vision Posted September 7, 2021

3 hours ago, JorgeB said:
Should be mountable now.

Excellent. I will give it a try.

3 hours ago, trurl said:
After starting the array in normal mode, be sure to check your lost+found share.

OK, I will look for that. Once I start the array and verify the emulated disk's contents, I assume I should stop the array, add the replacement disk, and have it rebuild disk1.
trurl Posted September 7, 2021

49 minutes ago, Alex.vision said:
Once I start the array and verify the disks emulated contents, I assume I should stop the array and add the replacement disk and have it rebuild disk1.

Assuming you mean to assign the replacement to the disk1 slot and not add it to a new slot, yes.
Alex.vision Posted September 7, 2021

Just now, trurl said:
Assuming you mean to assign the replacement to the disk1 slot and not add it to a new slot, yes.

Yes, that is what I meant, thanks. When I get off the ferry I will log into my server and make the change. I will post the results.