How to Re-enable a Drive



A drive in my array is disabled because I ejected it while the array was active (I thought I had picked the right drive bay).  I searched the forum and the advice points here: https://wiki.unraid.net/How-To's.  The link for re-enabling a drive is broken.  Can someone walk me through it?  I'm on 6.9.2.
 

Edit: it looks like the page was moved here: https://wiki.unraid.net/Manual/Storage_Management#Checking_a_File_System.  I recommend updating the wiki to reflect that.  I am currently going through the steps to check the file system.
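For what it's worth, the check described in that wiki page boils down to something like the sketch below. The disk number here is a made-up example, and I'm assuming Unraid 6.9.x naming, where the parity-protected device for diskN is /dev/mdN:

```shell
# Hedged sketch of the check-file-system procedure, assuming the affected
# slot is disk1 and the array is started in Maintenance mode.
DISK_NUM=1                      # hypothetical -- substitute your disk number
MD_DEV="/dev/md${DISK_NUM}"     # parity-protected device for diskN on 6.9.x

echo "Dry-run check would target: ${MD_DEV}"
# xfs_repair -n "${MD_DEV}"     # -n = no-modify check; left commented out so
                                # this sketch is safe to run anywhere
```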

Edited by snusnu1987

Looks like the xfs_repair run from the GUI check gave the following output:

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 5
        - agno = 2
        - agno = 3
        - agno = 0
        - agno = 6
        - agno = 7
        - agno = 4
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

 

Checking the Main page shows that this did not clear the issue.  I then tried adding the -L flag, since the documentation mentions it for the command-line version.  That output shows:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 6
        - agno = 0
        - agno = 2
        - agno = 4
        - agno = 7
        - agno = 5
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:60644) is ahead of log (1:2).
Format log to cycle 4.
done

 

Finally, I tried the command line and got the following message:

<admin>@<name>:~# xfs_repair -v /dev/sds1
xfs_repair: cannot open /dev/sds1: Device or resource busy

21 minutes ago, jonathanm said:

A disabled drive should be rebuilt.

 

A non-mountable drive needs the check file system directions.

 

The two conditions are distinctly different, but can happen at the same time if parity was not valid when the slot was disabled.
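That distinction can be summed up as a tiny decision table. This helper is purely illustrative, not an Unraid command, and the "both" ordering (rebuild, then repair) is my reading of the advice above:

```shell
# Hedged summary of the distinction above (illustrative helper, not an
# Unraid command).
drive_action() {
  case "$1" in
    disabled)    echo "rebuild the drive" ;;
    unmountable) echo "run the check file system procedure" ;;
    both)        echo "rebuild first, then repair the file system" ;;
  esac
}

drive_action disabled      # -> rebuild the drive
```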

 

 

 

Thank you.  Based on my output, would you recommend rebuilding this drive?

1 hour ago, snusnu1987 said:

I then tried to add the flag -L

 

You shouldn't do that unless the previous repair attempt advises you to do so. It didn't. As far as I can tell, that previous attempt was successful. That should have made an unmountable file system mountable again but it won't fix a disabled disk.

 

1 hour ago, snusnu1987 said:

xfs_repair -v /dev/sds1

 

Don't work on the raw partition, as that will invalidate parity. To repair the file system on diskX, work on /dev/mdX instead, with the array started in Maintenance mode.
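A minimal sketch of that advice, assuming Unraid 6.9.x naming where diskX maps to /dev/mdX (the helper name is made up for illustration):

```shell
# Map an array disk number to the parity-protected md device that
# xfs_repair should be pointed at (hypothetical helper, 6.9.x naming).
disk_to_md() {
  echo "/dev/md$1"
}

disk_to_md 3               # -> /dev/md3
# With the array started in Maintenance mode, the repair would then be:
# xfs_repair -v "$(disk_to_md 3)"
```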

 
