DaveW42 Posted February 22, 2023 (Author)

Thanks, itimpi! Output is as follows.

Dave

```
xfs_repair -Lv /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 2975488 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2227444 tail block 2227440
ALERT: The filesystem has valuable metadata changes in a log which is
being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
sb_icount 132352, counted 133504
sb_ifree 1881, counted 1636
sb_fdblocks 1557016821, counted 1485460369
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 5
        - agno = 9
        - agno = 7
        - agno = 6
        - agno = 8
        - agno = 4
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (8:2227435) is ahead of log (1:2).
Format log to cycle 11.

        XFS_REPAIR Summary    Wed Feb 22 12:25:07 2023

Phase           Start           End             Duration
Phase 1:        02/22 12:22:42  02/22 12:22:42
Phase 2:        02/22 12:22:42  02/22 12:23:12  30 seconds
Phase 3:        02/22 12:23:12  02/22 12:23:32  20 seconds
Phase 4:        02/22 12:23:32  02/22 12:23:32
Phase 5:        02/22 12:23:32  02/22 12:23:34  2 seconds
Phase 6:        02/22 12:23:34  02/22 12:23:50  16 seconds
Phase 7:        02/22 12:23:50  02/22 12:23:50

Total run time: 1 minute, 8 seconds
done
```
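A side note for anyone landing here from a search: before a destructive `-L` run, a read-only pass reports what xfs_repair would fix without modifying anything. A minimal sketch, using the same md device as in this thread (on newer Unraid releases the path may be /dev/md1p1 instead):

```bash
# Dry run: -n checks the filesystem and reports problems without
# making any changes. Run with the array started in maintenance mode.
xfs_repair -nv /dev/md1
```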
JorgeB Posted February 22, 2023

Now start the array in normal mode and post new diags.
itimpi Posted February 22, 2023

The xfs_repair output looks good. Worth checking whether you have a lost+found folder on the drive when you start in normal mode; if not, I would think all data is intact.
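A quick terminal check once the array is back in normal mode (a minimal sketch; /mnt/disk1 is Unraid's standard per-disk mount point, adjust the disk number as needed):

```bash
# A lost+found folder is where xfs_repair puts orphaned files;
# if it doesn't exist, nothing was disconnected during the repair.
ls -lah /mnt/disk1/lost+found 2>/dev/null || echo "no lost+found on disk1"
```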
DaveW42 Posted February 22, 2023 (Author)

Thanks, JorgeB and itimpi! Disk 1 and Disk 12 still appear with red Xs as "Not installed." I made no changes there, and started the array in normal mode as instructed. Attached is the new diagnostics file.

Thanks!
Dave

nas24-diagnostics-20230222-1300.zip
JorgeB Posted February 22, 2023

Both emulated disks are mounting. Check the contents of both, including looking for a lost+found folder; if you are happy with the results you can rebuild on top.
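One way to eyeball both emulated disks from the terminal (a sketch using the disk numbers from this thread; du can take a while on large disks):

```bash
# Top-level listing plus approximate space used per top-level folder
# on each emulated disk.
for d in /mnt/disk1 /mnt/disk12; do
    echo "== $d =="
    ls -lah "$d"
    du -sh "$d"/* 2>/dev/null
done
```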
trurl Posted February 22, 2023

The diagnostics don't show any lost+found share, so I expect everything is good.
DaveW42 Posted February 22, 2023 (Author)

Thanks, JorgeB and trurl!! I am holding my breath at this point.

I first used the unRAID GUI to browse emulated Disk 1, and everything looked great (woohoo!). No lost+found directory there, as trurl noted.

Then I used the unRAID GUI to browse emulated Disk 12. This actually took quite some time to load (unRAID was showing those wavy "loading" lines), but after about 5 minutes I did see the file directories (again, no lost+found), and for the first time in a couple of weeks I saw the directories associated with the files that had been missing (YAHOOOO!!!!!).

I then looked back at Disk 1 and tried to navigate within the folders to look for specific files. At this point I was greeted by the wavy loading lines again, and that is where I have been since, with regard to Disk 1. I also tried to navigate to individual files within Disk 12 and experienced the same story (wavy loading lines).

Does the above still sound good, and does it indicate that I should move to rebuild those drives? And with respect to rebuilding, could you confirm that the task would simply involve the following immediate next steps:

1) stop the array
2) assign the two hard drives (14TB and 10TB) to their appropriate drive slots (Drive 1 and Drive 12, respectively) within the array
3) start the array in normal mode

I will also parenthetically note that I am currently seeing new buttons on the Main screen that say "Check" and "History." Next to Check it says "Check will start Read-Check of all array disks."

Thanks!
Dave
trurl Posted February 22, 2023

Post new diagnostics.
DaveW42 Posted February 22, 2023 (Author)

Hi, trurl. Please see the attached file.

Dave

nas24-diagnostics-20230222-1434.zip
trurl Posted February 22, 2023

Unrelated, but since I was looking at syslog and saw your plugins loading:

ca.backup2.plg - 2022.12.13 (Deprecated) (Up to date)
ca.cfg.editor.plg - 2021.04.13 (Deprecated) (Up to date)
NerdPack.plg - 2021.08.11 (Up to date) (Incompatible)
nut.plg - 2015.09.19 (Unknown to Community Applications)
serverlayout.plg - 2018.03.09 (Unknown to Community Applications)
useful_links.plg - 2016.04.16 (Unknown to Community Applications)

You should definitely remove NerdPack; install NerdTools if you really need it.

This starts in syslog a few minutes after boot and continues until it completely fills the log space:

```
Feb 21 01:43:14 NAS24 nginx: 2023/02/21 01:43:14 [error] 11418#11418: *2480 limiting requests, excess: 20.365 by zone "authlimit", client: 192.168.0.169, server: , request: "PROPFIND /login HTTP/1.1", host: "192.168.0.175"
```

Not entirely sure what that is about. After filtering that out, it looks like a lot of the remaining lines relate to a remote share mounted with Unassigned Devices; maybe that is related. Unmount that and set it to not automount (if it will even let you), reboot to clear the log, and post new diagnostics.
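If you want to reproduce that filtering yourself, something like this works from the Unraid terminal (a sketch; the "authlimit" pattern matches the nginx lines quoted above):

```bash
# Count how many syslog lines are the nginx rate-limit noise...
grep -c 'authlimit' /var/log/syslog

# ...then read the log with that noise filtered out.
grep -v 'authlimit' /var/log/syslog | less
```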
DaveW42 Posted February 22, 2023 (Author)

Thanks so much, trurl. I removed all of the deprecated and "Unknown" plugins you indicated, unmounted the remote shares, and set them not to automount. I then rebooted and generated a new diagnostics file (attached).

I suspect that the nginx error relates to a SageTV media server I have running separately, which likely keeps trying to record OTA TV shows to a corresponding share on unRAID. That share has largely been unavailable since my array has been down, hence the errors. Also, at one point about a week ago I issued an "nginx stop" command via the terminal, because I saw the log filling up and had read a thread indicating it might help to stop and restart that service. That just made things worse, so I rebooted (which I would think would have restored the nginx service properly). I share that last bit in case it might be germane.

Thanks!
Dave

nas24-diagnostics-20230222-1542.zip
trurl Posted February 23, 2023

Emulated disks 1 and 12 both mounted with plenty of contents. What do you get from the command line with this?

```
ls -lah /mnt/disk1
```

and this?

```
ls -lah /mnt/disk12
```
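If the GUI keeps hanging on large directories, counting entries from the terminal is another sanity check (a sketch; find can take several minutes on a full disk):

```bash
# Rough file counts on each emulated disk, as a quick
# "is the data really there" check.
find /mnt/disk1 -type f | wc -l
find /mnt/disk12 -type f | wc -l
```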
DaveW42 Posted February 23, 2023 (Author)

Hi, trurl. I see all my directories ... and on both drives! Does this mean we are ready to rebuild the drives and leave emulation behind? I will also send you a PM with the actual output of running those commands.

Thanks!!!
Dave
trurl Posted February 23, 2023

7 minutes ago, DaveW42 said:
> Does this mean we are ready to rebuild the drives and leave emulation behind?

Yes.

6 hours ago, DaveW42 said:
> 1) stop the array
> 2) assign the two hard drives (14TB and 10TB) to their appropriate drive slots (Drive 1 and Drive 12, respectively) within the array
> 3) start the array in normal mode
DaveW42 Posted February 23, 2023 (Author)

Great! As indicated, I will:

1) stop the array
2) assign the two hard drives (14TB and 10TB) to their appropriate drive slots (Drive 1 and Drive 12, respectively) within the array
3) start the array in normal mode

Dave
DaveW42 Posted February 23, 2023 (Author)

Just a quick update to let everyone know that so far so good (the rebuild process has started). Given the size of Disk 1, this task will probably take 24 hours or so to complete.

Thanks,
Dave
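For anyone wanting to watch a rebuild from the terminal instead of the Main page, something like this should work (a sketch; it assumes Unraid's mdcmd utility is present, and the field names can vary by Unraid version):

```bash
# Print the rebuild position and total size reported by Unraid's md
# driver; when mdResyncPos reaches mdResyncSize the rebuild is done.
mdcmd status | grep -E 'mdResync(Pos|Size)'
```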
DaveW42 Posted February 24, 2023 (Author)

The rebuild took approximately 31 hours to complete. Notification messages indicated the rebuild was successful, and all drives appear green. Attached is the diagnostics file. I haven't touched anything else yet (haven't told Dockers and VMs to run). Fingers crossed this diagnostics file shows a clean bill of health!

Dave

nas24-diagnostics-20230224-0945.zip
DaveW42 Posted February 25, 2023 (Author)

I can see files on both drives!!!! Woohooo!!!!! Thanks go to trurl, JorgeB, and itimpi for all of your help with this!!! 👏 🙌 🍾 It is so much appreciated. I can see family photos on these drives, music, etc. There is no way I could have done this myself. Thank you!!!!!

Dave