
Problem: UNRAID 6.11.5 - Unmountable: Wrong or no file system


DaveW42

Solved by JorgeB.


Thanks, itimpi !

 

Output is as follows:

 

Dave

 

xfs_repair -Lv /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 2975488 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2227444 tail block 2227440
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
sb_icount 132352, counted 133504
sb_ifree 1881, counted 1636
sb_fdblocks 1557016821, counted 1485460369
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 5
        - agno = 9
        - agno = 7
        - agno = 6
        - agno = 8
        - agno = 4
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (8:2227435) is ahead of log (1:2).
Format log to cycle 11.

        XFS_REPAIR Summary    Wed Feb 22 12:25:07 2023

Phase           Start           End             Duration
Phase 1:        02/22 12:22:42  02/22 12:22:42
Phase 2:        02/22 12:22:42  02/22 12:23:12  30 seconds
Phase 3:        02/22 12:23:12  02/22 12:23:32  20 seconds
Phase 4:        02/22 12:23:32  02/22 12:23:32
Phase 5:        02/22 12:23:32  02/22 12:23:34  2 seconds
Phase 6:        02/22 12:23:34  02/22 12:23:50  16 seconds
Phase 7:        02/22 12:23:50  02/22 12:23:50

Total run time: 1 minute, 8 seconds
done
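
(For anyone following along: the -L flag zeroes the filesystem's metadata log, which is why the output warns that "valuable metadata changes" are being destroyed; any in-flight operations lost from that log show up as the count discrepancies that the later phases correct. A minimal sketch of the usual order of operations, assuming the array is started in maintenance mode and the disk in question maps to /dev/md1:

# Dry run: report what would be fixed without writing anything
xfs_repair -n /dev/md1

# Actual repair; this will refuse to run if the metadata log is dirty
xfs_repair -v /dev/md1

# Last resort when the log cannot be replayed: zero it with -L
# (in-flight metadata updates are lost, which can land files in lost+found)
xfs_repair -Lv /dev/md1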


Thanks, JorgeB and trurl !!

 

I am holding my breath at this point. I first used the unRAID GUI to browse emulated Disk 1, and everything looked great (woohoo!). No lost+found directory there, as trurl noted. Then I used the unRAID GUI to browse emulated Disk 12. This took quite some time to load (unRAID was showing those wavy "loading" lines), but after about 5 minutes I did see the file directories (again, no lost+found), and for the first time in a couple of weeks I saw the directories associated with the files that had been missing (YAHOOOO!!!!!). I then went back to Disk 1 and tried to navigate within the folders to look for specific files. At that point I was greeted by the wavy loading lines again, and that is where Disk 1 has been stuck ever since. I also tried to navigate to individual files within Disk 12 and got the same story (wavy loading lines).

 

Does the above still sound good and indicate that I should move ahead with rebuilding those drives? And with respect to rebuilding, could you confirm that it would simply involve the following immediate next steps:

 

1) stop the array

2) assign the two hard drives (14TB and 10TB) to their appropriate drive slots (Drive 1 and Drive 12, respectively) within the array

3) start the array in normal mode
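
(Side note for future readers: once the rebuild starts, progress can also be checked from a terminal. A minimal sketch, assuming the stock mdcmd helper that ships with Unraid; the Main page shows the same information as a progress bar:

# Print array state and rebuild/resync progress counters
/usr/local/sbin/mdcmd status | grep -E 'mdState|mdResync'
)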

 

I will also note, parenthetically, that I am currently seeing new buttons on the Main screen labeled "Check" and "History."  Next to Check it says "Check will start Read-Check of all array disks."

 

Thanks!

 

Dave


Unrelated, but since I was looking at syslog and saw your plugins loading:

ca.backup2.plg - 2022.12.13  (Deprecated)  (Up to date)
ca.cfg.editor.plg - 2021.04.13  (Deprecated)  (Up to date)
NerdPack.plg - 2021.08.11  (Up to date)  (Incompatible)
nut.plg - 2015.09.19  (Unknown to Community Applications)
serverlayout.plg - 2018.03.09  (Unknown to Community Applications)
useful_links.plg - 2016.04.16  (Unknown to Community Applications)

You should definitely remove NerdPack. Install NerdTools if you really need it.

 

This starts in syslog a few minutes after boot and continues until it completely fills log space.

Feb 21 01:43:14 NAS24 nginx: 2023/02/21 01:43:14 [error] 11418#11418: *2480 limiting requests, excess: 20.365 by zone "authlimit", client: 192.168.0.169, server: , request: "PROPFIND /login HTTP/1.1", host: "192.168.0.175"

Not entirely sure what that is about. After filtering that out, it looks like a lot of lines relate to a remote share mounted with Unassigned Devices, so maybe that is the cause. Unmount it and set it to not automount (if it will even let you), reboot to clear the log, and post new diagnostics.
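
(For context, that message comes from nginx's limit_req module: the GUI's /login endpoint is rate-limited by a zone named "authlimit", and the client at 192.168.0.169 keeps hitting it with WebDAV PROPFIND requests. A minimal sketch of the kind of directives that produce this error; the zone name and client come from the log line above, while the rate and burst values here are illustrative assumptions:

# in the http block: track clients by IP, allow about 1 request/second
limit_req_zone $binary_remote_addr zone=authlimit:1m rate=1r/s;

# on the login location: queue bursts of up to 20 extra requests;
# anything beyond that is rejected and logged as "limiting requests, excess: ..."
limit_req zone=authlimit burst=20;
)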

 


Thanks so much, trurl.

 

I removed all of the deprecated and "Unknown" plugins you indicated, unmounted the remote shares, and set them not to automount.  I then rebooted and generated a new diagnostics file (see below).  I suspect that the nginx error relates to a SageTV media server I run separately, which likely keeps trying to record OTA TV shows to a corresponding share on unRAID.  That share has largely been unavailable since my array went down, hence the errors.

 

Also, at one point about a week ago I issued an "nginx stop" command via the terminal, because I saw the log filling up and had read a thread suggesting it might help to stop and restart that service.  That just made things worse, so I rebooted (which I would think would have restored the nginx service properly).  I share that last bit in case it might be germane.
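
(For the record, a clean reboot does bring nginx back. If anyone needs to bounce just the web GUI without rebooting, a minimal sketch, assuming the stock Slackware-style rc script Unraid uses for nginx:

# restart only the web GUI's nginx; the array is not affected
/etc/rc.d/rc.nginx restart

# confirm the processes are back
pgrep -l nginx
)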

 

Thanks!

Dave

nas24-diagnostics-20230222-1542.zip

7 minutes ago, DaveW42 said:

Does this mean we are ready to rebuild the drives and leave emulation behind? 

Yes

6 hours ago, DaveW42 said:

1) stop the array

2) assign the two hard drives (14TB and 10TB) to their appropriate drive slots (Drive 1 and Drive 12, respectively) within the array

3) start the array in normal mode

 


I can see files on both drives!!!!  Woohooo!!!!!   

 

Thanks go to trurl, JorgeB, and itimpi for all of your help with this !!!  👏  🙌  🍾   It is so much appreciated.  I can see family photos on these drives, music, etc.  There is no way I could have done this myself, thank you!!!!!

 

Dave

