Data disk unmountable after replacing parity

Dear community,

I recently replaced my parity drive, since the old one had failed (red cross).
However, after installing the new drive and starting the parity sync, disk2 shows up as "unmountable".

Now I'm really worried about data loss, since parity hadn't finished rebuilding, and I'm not sure how to fix the mount problem (I've already tried different cables and controllers with no change).

Any help would be appreciated!

Attachments: 2022-07-20 10_16_12-Tower_Main — Mozilla Firefox.png · tower-diagnostics-20220720-1007.zip


Thank you for your reply! I've already run the filesystem check (read-only), but I'm not sure what my next steps should be:
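
For reference, the output below is from the read-only check. From the console, with the array started in maintenance mode, that would be the following command; disk2 mapping to /dev/md2 is my assumption:

xfs_repair -n /dev/md2   # -n = no modify, report problems only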

 

Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - 10:24:41: zeroing log - 29809 of 29809 blocks done
        - scan filesystem freespace and inode maps...
sb_ifree 6816, counted 6810
sb_fdblocks 61151334, counted 61638135
        - 10:24:44: scanning filesystem freespace - 32 of 32 allocation groups done
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - 10:24:44: scanning agi unlinked lists - 32 of 32 allocation groups done
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 30
        - agno = 15
        - agno = 16
        - agno = 31
        - agno = 1
        - agno = 2
        - agno = 17
        - agno = 3
        - agno = 4
        - agno = 18
        - agno = 5
        - agno = 6
        - agno = 19
        - agno = 7
        - agno = 20
        - agno = 21
        - agno = 8
        - agno = 22
data fork in ino 2952917491 claims free block 369115387
        - agno = 23
        - agno = 24
        - agno = 9
        - agno = 25
        - agno = 10
        - agno = 26
        - agno = 27
        - agno = 11
        - agno = 28
        - agno = 29
        - agno = 12
        - agno = 13
data fork in ino 3898087901 claims free block 487260754
        - agno = 14
        - 10:25:25: process known inodes and inode discovery - 230144 of 230144 inodes done
        - process newly discovered inodes...
        - 10:25:25: process newly discovered inodes - 32 of 32 allocation groups done
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
free space (29,721487-721489) only seen by one free space btree
        - 10:25:25: setting up duplicate extent list - 32 of 32 allocation groups done
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 4
        - agno = 3
        - agno = 7
        - agno = 5
        - agno = 6
        - agno = 2
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
entry "Bounced Files" in shortform directory 2818603873 references free inode 2975656813
would have junked entry "Bounced Files" in directory inode 2818603873
        - agno = 22
        - agno = 23
        - agno = 24
        - agno = 25
        - agno = 26
        - agno = 27
        - agno = 28
        - agno = 29
        - agno = 30
        - agno = 31
        - 10:25:25: check for inodes claiming duplicate blocks - 230144 of 230144 inodes done
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "Bounced Files" in shortform directory inode 2818603873 points to free inode 2975656813
would junk entry
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
would have reset inode 2818603873 nlinks from 3 to 2
        - 10:25:46: verify and correct link counts - 32 of 32 allocation groups done
Maximum metadata LSN (111:54898) is ahead of log (111:51422).
Would format log to cycle 114.
No modify flag set, skipping filesystem flush and exiting.

 


Now, running the repair without -n, it's giving me the following error:

 

Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
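
If I'm reading the message right, the sequence it asks for boils down to this (the device path and mount point are my assumptions, as above):

mkdir -p /mnt/test
mount /dev/md2 /mnt/test   # mounting replays the journal
umount /mnt/test
xfs_repair /dev/md2        # then re-run the repair
# only if the mount itself fails:
xfs_repair -L /dev/md2     # zeroes the log; may cause corruption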

 


Wow, it seems the filesystem repair (using -L) did the trick, and my drive is now mountable again.
Thanks to each and every one of you for your help. You're awesome!

Two questions:
1. How likely is it that I lost some data in the process? (I'm not seeing anything obvious...)
2. What caused this mess in the first place?

37 minutes ago, bwv1058 said:

1. How likely is it that I might have lost some data in the process (I'm not seeing anything obvious...)

If there is no lost+found folder then it is very unlikely that anything was lost. You can only be certain, however, if you have checksums for the data that should be on the drive.
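
For example, if checksum files had been generated beforehand, they could be verified against the disk now (the paths here are hypothetical):

cd /mnt/disk2
md5sum -c /boot/checksums/disk2.md5   # prints OK or FAILED per file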

 

38 minutes ago, bwv1058 said:

What caused this mess in the first place?

Almost impossible to say. Any sort of glitch that can cause a write to the drive to be lost could cause this.


Thanks again for your replies!

No, fortunately there is no "lost+found" share, so I'm assuming that everything is alright!
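
A quick way to check, assuming the standard mount point for disk2:

ls -d /mnt/disk2/lost+found   # "No such file or directory" means nothing was moved there by the repair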

 

I've noticed that lately my drives have been running very hot (40-50 °C). Could that explain the problems I was having?
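
A quick way to spot-check a drive's temperature from the console (the device name here is just an example):

smartctl -A /dev/sdb | grep -i temperature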

4 hours ago, bwv1058 said:

I've noticed that lately my drives have been running very hot (40-50 Celsius). Could that explain the problems I was having

It should not, as that is within the temperature range the drives are rated for. However, it could mean that thermal expansion is adversely affecting the SATA connections, as SATA is a notoriously fragile type of connector as far as making a good connection goes.
