unmountable disk


Hello. I was doing some maintenance on my servers when I dislodged an Ethernet cable. At the time, the unBALANCE app was moving files from disk 1 to another disk in the array. After I realized the Ethernet cable was unplugged, I plugged it back in and checked the disks on the Main page.

 

Disk 1 became unmountable, and I have since run a parity check, then tried to get Unraid to rebuild disk 1 through emulation and/or whatever was on the parity.

 

Unfortunately, disk 1 is still unmountable. I do not want to format it, because it is my largest disk and I may lose a lot of data that I need.

 

If I understand what Unraid is telling me, the only way to reintegrate disk 1 is to format it, which would mean I lose my data.

unmountable disk.zip

54 minutes ago, YEAHHWAY said:

I do not want to format it, because it is my largest disk and I may lose a lot of data that I need.

Do not format

54 minutes ago, YEAHHWAY said:

If I understand what Unraid is telling me, the only way to reintegrate disk 1 is to format it, which would mean I lose my data.

A format is never part of the process

55 minutes ago, YEAHHWAY said:

I dislodged an ethernet cable.

Wouldn't have been the cause.

 

May 30 12:31:19 Tower kernel: XFS (md1): Unmount and run xfs_repair

Run the file system check against disk 1.
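In case it helps, the command-line route looks roughly like this. This is a sketch only: it assumes disk 1 maps to /dev/md1 (as in the syslog line above) and that the array has been started in Maintenance mode so the filesystem is unmounted.

```shell
# Sketch, assuming disk 1 is /dev/md1 and the array is in Maintenance mode.
DEV=/dev/md1

if [ -b "$DEV" ]; then
    # Read-only pass first (-n): reports problems, writes nothing to the disk.
    xfs_repair -n "$DEV"
    # Real repair once the read-only pass looks reasonable.
    xfs_repair "$DEV"
else
    echo "$DEV not present - start the array in Maintenance mode first"
fi
```

The same check can be run from the GUI by clicking the disk on the Main page with the array in Maintenance mode. Only reach for `xfs_repair -L` (zero the log) as a last resort, since it can discard the most recent metadata updates.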

21 minutes ago, Squid said:

Wouldn't have been the cause.

 


May 30 12:31:19 Tower kernel: XFS (md1): Unmount and run xfs_repair

Run the file system check against disk 1.

 

I got this at the end of the xfs_repair:

 

cache_purge: shake on cache 0x529160 left 1 nodes!?
xfs_repair: Refusing to write a corrupt buffer to the data device!
xfs_repair: Lost a write to the data device!

fatal error -- File system metadata writeout failed, err=117.  Re-run xfs_repair

 

Is this from a faulty connection somewhere?

21 minutes ago, YEAHHWAY said:

 

I got this at the end of the xfs_repair:

 

cache_purge: shake on cache 0x529160 left 1 nodes!?
xfs_repair: Refusing to write a corrupt buffer to the data device!
xfs_repair: Lost a write to the data device!

fatal error -- File system metadata writeout failed, err=117.  Re-run xfs_repair

 

Is this from a faulty connection somewhere?

Phase 1 - find and verify superblock...
        - block cache size set to 2982856 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 0 tail block 0
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 5
        - agno = 7
        - agno = 1
        - agno = 11
        - agno = 3
        - agno = 2
        - agno = 4
        - agno = 8
        - agno = 10
        - agno = 6
        - agno = 12
        - agno = 9
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Sun May 30 15:23:54 2021

Phase           Start           End             Duration
Phase 1:        05/30 15:22:26  05/30 15:22:26
Phase 2:        05/30 15:22:26  05/30 15:22:29  3 seconds
Phase 3:        05/30 15:22:29  05/30 15:23:33  1 minute, 4 seconds
Phase 4:        05/30 15:23:33  05/30 15:23:34  1 second
Phase 5:        05/30 15:23:34  05/30 15:23:38  4 seconds
Phase 6:        05/30 15:23:38  05/30 15:23:53  15 seconds
Phase 7:        05/30 15:23:53  05/30 15:23:53

Total run time: 1 minute, 27 seconds
done

 

I've rerun xfs_repair, with the above results.

22 minutes ago, itimpi said:

If you start the array in normal mode, does it now mount? If so, is there a lost+found folder?

Yes. Yes! And, YES!!

 

I'm so glad you all are here to help! I have 1000 lost and found files but at least I still have them!

 

THANK YOU!

1 hour ago, YEAHHWAY said:

Yes. Yes! And, YES!!

 

I'm so glad you all are here to help! I have 1000 lost and found files but at least I still have them!

 

THANK YOU!

 

It can be difficult to sort out the lost+found folder. You can use the Linux ‘file’ command to identify the type of each of the files.
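For example (illustrative only; the lost+found path for disk 1 below is an assumption, so adjust it to the disk that was repaired):

```shell
# `file` looks at a file's contents (magic bytes) rather than its name,
# so it works on the numbered, extension-less entries xfs_repair leaves
# behind. Demo on a throwaway file:
printf '%%PDF-1.4\n' > /tmp/4711      # a nameless-looking sample file
file /tmp/4711                        # identified by content as PDF data

# On the server, classify the whole folder in one pass
# (path for disk 1 is an assumption):
# file /mnt/disk1/lost+found/* | sort -t: -k2
```

Once you know the types, you can rename the files with sensible extensions and move them back into your shares.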

 

