Data unavailable - Unmountable: Unsupported or no file system



Hey!

 

I would like to reach out for some help with my Unraid setup (6.12.3). Until now I've been able to get by with reading others' posts, but at this point I'm afraid of performing actions that could result in losing data, which is why I'm posting.

 

I just came back from a vacation to find my Unraid shares unavailable. The array was running, but the "space free" indicator was at 100% and none of my array shares showed up. The pool shares (SSD) appear to be fine, and my Docker containers (whose appdata lives on the pool) were unaffected. I suspect these problems came from an unclean shutdown, as there was one during my time away.

 

What I noticed (before I started troubleshooting) was that on the Main page both HDDs (one parity, one data) in the array showed a green circle, but the data drive showed (and still shows) "Unmountable: Unsupported or no file system".

 

I started searching and arrived at this forum post, which seemed to describe a similar issue. Following that post, I stopped the array, started it in maintenance mode and ran the disk check on the data disk. Unfortunately I no longer have the full log, but it was mostly similar to what was in the linked thread, except that it mentioned a file or two being corrupt/missing (I don't remember the exact terminology).
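
(For reference: as far as I understand, the disk check the GUI runs in maintenance mode is a read-only xfs_repair pass against the disk's md device, so roughly the equivalent of the command below. The device name /dev/md1 is my assumption for disk 1; on newer releases it may show up as /dev/md1p1.)

# Read-only filesystem check of disk 1 while the array is in maintenance mode.
# -n (no modify) only reports problems; nothing is written to the disk.
xfs_repair -n /dev/md1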

 

At this point my thought was to stop the array again and start it with the data disk unassigned, so that it would be emulated from parity. That would let me see whether my data is still there and then rebuild the data disk from parity (similar to what is described here). When I did so, it showed the data disk as "uninstalled, contents emulated", as expected. However, I'm still not seeing any of my array shares, which is when I started getting worried. In preparation for this forum post I wanted to stop the array to prevent any further changes, but as I write this the array is still stopping, hanging on "Retry unmounting disk share(s)...".

 

The current state is as follows:

[screenshot: Main tab showing the current state of the array]

 

None of my array shares are showing up, and the array is in a "stopping" (not "stopped") state, as mentioned above. I'll attach the diagnostics to this post as well.

 

What are the appropriate steps to attempt re-mounting the disk and/or rebuilding the array from the parity disk?

 

Thanks in advance for any help on the topic!

rannoch-diagnostics-20230821-1051.zip


I've just noticed that I'm able to run a filesystem check on the disk while it's unmounted (I'm not sure whether that's from a plugin or native Unraid). Here's the output of that check; to my knowledge the disk is still in the same state as described in my post above:

 


 

FS: xfs

Executing file system check: /sbin/xfs_repair -n /dev/sdb1 2>&1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used. Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
- scan filesystem freespace and inode maps...
sb_fdblocks 1990335180, counted 1994233943
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
correcting nextents for inode 30363094451
bad data fork in inode 30363094451
would have cleared inode 30363094451
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 3
- agno = 2
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
entry "IMG-20190822-WA0053.jpg" at block 39 offset 3464 in directory inode 30362347891 references free inode 30363094451
would clear inode number in entry at offset 3464...
inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
correcting nextents for inode 30363094451
bad data fork in inode 30363094451
would have cleared inode 30363094451
- agno = 18
- agno = 19
- agno = 20
- agno = 21
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
entry "IMG-20190822-WA0053.jpg" in directory inode 30362347891 points to free inode 30363094451, would junk entry
would rebuild directory inode 30362347891
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

File system corruption detected!

 

If the problem with the data on the disk is isolated to the one particular file mentioned, that file is not that important; if losing it is the price of the safest way to rebuild the array and restore parity, that's fine.
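
To double-check that only that one file is affected, I figure I can list every distinct inode number flagged in the check output, assuming I paste the output above into a file (here called xfs_check.log):

# List the distinct inode numbers mentioned in the saved check output.
# Only 30363094451 (the file) and 30362347891 (its parent directory) should appear.
grep -oE 'inode [0-9]+' xfs_check.log | sort -u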


I have been able to stop the array by performing a clean restart via the UI. I've started the array in maintenance mode. The physical disk still shows up under unassigned.

 

[screenshots: Main tab with the array in maintenance mode; the physical data disk listed under Unassigned Devices]

 

I've then performed a filesystem scan on the (emulated) data disk (disk 1).

 


 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 1990335180, counted 1994233943
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
correcting nextents for inode 30363094451
bad data fork in inode 30363094451
would have cleared inode 30363094451
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
entry "IMG-20190822-WA0053.jpg" at block 39 offset 3464 in directory inode 30362347891 references free inode 30363094451
	would clear inode number in entry at offset 3464...
inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
correcting nextents for inode 30363094451
bad data fork in inode 30363094451
would have cleared inode 30363094451
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "IMG-20190822-WA0053.jpg" in directory inode 30362347891 points to free inode 30363094451, would junk entry
would rebuild directory inode 30362347891
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

 


The repair has run and logged the following. I can try restarting the array without the disk to see whether I can access my data, but I'd first like to verify that this is the right and safe next step.
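
For completeness: the log below shows that the -L option was used, which destroys the unreplayable metadata log before repairing. I believe the command-line equivalent is roughly the following, where /dev/md1 is my assumption for disk 1 (newer releases may expose it as /dev/md1p1):

# Repair disk 1's filesystem in maintenance mode, forcibly zeroing the
# unreplayable metadata log first (-L); recent metadata changes may be lost.
xfs_repair -L /dev/md1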

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
sb_fdblocks 1990335180, counted 1994233943
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
correcting nextents for inode 30363094451
bad data fork in inode 30363094451
cleared inode 30363094451
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
entry "IMG-20190822-WA0053.jpg" at block 39 offset 3464 in directory inode 30362347891 references free inode 30363094451
	clearing inode number in entry at offset 3464...
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
rebuilding directory inode 30362347891
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (6:957662) is ahead of log (1:2).
Format log to cycle 9.
done
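
Since the repair log mentions moving disconnected inodes to lost+found, once the array is back up (with disk 1 still emulated) I plan to check whether anything ended up there, along these lines (the path assumes the emulated disk mounts at /mnt/disk1):

# See whether xfs_repair parked any orphaned files in lost+found on disk 1.
ls -la /mnt/disk1/lost+found 2>/dev/null || echo "no lost+found directory"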

 

