
Unmountable: not mounted


Recommended Posts

Hi Guys, 

 

I got myself in a bit of trouble. I was trying to replace 2 of my older hard drives with a couple of 10TB drives. While the array was on, I accidentally pulled Disk 5 out, but I plugged it back in right away (not the HDD that I wanted to replace). Then I shut down the array, removed one of the 4TB HDDs (Disk 8), and replaced it with one of the new 10TB HDDs (Disk 8). I restarted the array and waited for the parity sync to start.

 

Initially, Disk 5 was showing as disabled. After the parity check I ran (which synced Disk 8), I restarted the array in maintenance mode; now it shows the device as emulated while it's trying to rebuild Disk 5.

 

On the Main page it still shows both Disk 5 and Disk 8 as unmountable.

 

Now my question is: will they both go back to normal after the parity sync?

 

What do you guys recommend I do?

 

 

diagnostics-20210819-0839.zip

Link to comment
38 minutes ago, mdaryabe said:

accidentally pulled Disk 5 out, but I plugged it back in right away (not the HDD that I wanted to replace). Then I shut down the array

There is absolutely no good reason to replace disks under power. Unraid will not do anything at all with a replacement disk until you assign it, and you can't assign a disk while the array is started. Shut down before doing any of this.

 

38 minutes ago, mdaryabe said:

waited for the parity sync to start.

I prefer to call this a data disk rebuild. A parity sync is a rebuild of one particular disk: parity. In either case, all the other disks are read to produce the parity calculation, and the result is written to the disk being rebuilt.

 

38 minutes ago, mdaryabe said:

after the parity check I ran (which synced Disk 8)

A parity check is when you check the contents of parity against the results of the parity calculation, to verify (and possibly correct) parity. Parity and all data disks are read, and the parity recalculated from the data disks is compared to the stored parity. A non-correcting check reads all disks and writes nothing; a correcting check reads all disks and may write to parity if it needs correction.
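
To make the three operations concrete, here is a toy sketch in shell arithmetic, assuming single parity behaves as a simple per-bit XOR across the data disks (a simplification; Unraid's second parity drive uses a different calculation):

# Three data "disks" reduced to one byte each: 0xA5, 0x3C, 0x0F
printf '%x\n' $(( 0xA5 ^ 0x3C ^ 0x0F ))                        # parity sync: compute parity and write it -> 96
(( (0xA5 ^ 0x3C ^ 0x0F) == 0x96 )) && echo "parity check OK"   # parity check: recompute and compare to stored parity
printf '%x\n' $(( 0x96 ^ 0x3C ^ 0x0F ))                        # data rebuild: parity XOR the surviving disks -> a5, the lost byte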

 

38 minutes ago, mdaryabe said:

restarted the array in maintenance mode; now it shows the device as emulated while it's trying to rebuild Disk 5.

 

On the Main page it still shows both Disk 5 and Disk 8 as unmountable.

This is unclear. Maintenance mode doesn't mount any disks.

 

38 minutes ago, mdaryabe said:

Now my question is: will they both go back to normal after the parity sync?

A data disk rebuild typically won't fix unmountable filesystems, because parity is usually in sync with the contents of all the other disks, unmountable filesystems included; the rebuild just reproduces the same filesystem, corruption and all.

 

While waiting on advice (I haven't looked at the diagnostics yet), post a screenshot of Main - Array Devices. (You beat me to it.)

 

Link to comment

Nothing in the syslog indicates any issues with the current Disk 5 rebuild, but it does show the unmountable filesystems on Disks 5 and 8.

 

I didn't examine SMART for each of your disks. Do any have SMART warnings on the Dashboard page?

 

Since you have already started the Disk 5 rebuild, let it complete, and we can deal with repairing the filesystems afterwards. If there are problems repairing the filesystems, then maybe you can recover data from the original disks, if you still have them with their contents intact.

 

I usually estimate 2-3 hours per TB, so it will be a while before the rebuild completes. What is the estimate given on Main - Array Operations?
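
For comparison, at that rate an 8TB disk works out to roughly 8 × 2-3 = 16-24 hours.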

Link to comment

Thank you, I have updated the screenshots. I'll wait for the rebuild to finish and update you.

 

So Disks 7 and 8 were the disks I was trying to replace. What happened is that I unplugged Disk 5 (I didn't count the parity drives on the physical server while the array was on, dumb me!) and plugged it back in right away.

Then I shut down the array and replaced both 7 and 8. Unraid said it was missing more than two drives (Disks 5, 7, 8), which is more than dual parity can rebuild, so I put the original 4TB drive back in for Disk 7. With only Disks 5 and 8 missing, I started the parity/rebuild, which rebuilt Disk 8.

 

After that, I went into maintenance mode to run the check for Disk 5, and it was giving me an error.

I shut down the array, removed the drive from the array, and started without Disk 5. Then I stopped the array, added Disk 5 again, and started the rebuild.

 

Btw, Disk 5 is the same original disk; it was just unplugged and plugged back in right away. I think it rebuilt the array without that disk once, and now it's rebuilding that 8TB drive (Disk 5) again.

 

Disk 7 is the next disk I want to replace after this is fixed.

 

Once again, Thank you for your help!

 

Link to comment
23 minutes ago, mdaryabe said:

So the parity check/data rebuild is done on those drives. They both show as green on the Main page, but they both still say "Unmountable: Not mounted".

Do you need an updated log file?


I guess the next step should be to follow the process for handling unmountable drives documented in the online documentation, accessible via the ‘Manual’ link at the bottom of the Unraid GUI. I would suggest that at this stage you just run the file system check (i.e. not the repair) and post the results here, so we can see whether the recommendation is to then proceed with the repair.
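
For reference, the check can also be run from the console. A sketch, assuming the array is started in maintenance mode, the disks are XFS, and Disks 5 and 8 map to /dev/md5 and /dev/md8 (the Check button on each drive's GUI page runs the same tool):

xfs_repair -n /dev/md5   # -n = no modify: report filesystem problems but change nothing
xfs_repair -n /dev/md8   # the same read-only check for Disk 8

Use the /dev/mdX devices rather than the raw /dev/sdX devices so that any eventual repair writes keep parity in sync.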

Link to comment

Disks 5 and 8 both:

 

Quote

 

Phase 1 - find and verify superblock...
bad primary superblock - bad CRC in superblock !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.
Exiting now.

 

 

Should I use the following command for Disk 5, given that I have 2 parity drives?

 

Quote

xfs_repair -v /dev/md5

 

caspian-1-diagnostics-20210820-0844.zip


parity.png

 

 

I much appreciate your help!

Link to comment
Quote

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 5
        - agno = 2
        - agno = 4
        - agno = 6
        - agno = 7
        - agno = 1
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

 

Link to comment
34 minutes ago, mdaryabe said:

If I now want to replace Disk 7.

 

I would just shut down the array, physically replace that drive, start the array, and let parity rebuild the drive?

Yes, contingent on parity being in sync. Given the totality of the shenanigans that went on to get you to this point, it would be prudent to do a parity check and verify it completes with ZERO errors before doing a drive replacement.
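
If you prefer the console to the Check button on Main - Array Operations, a non-correcting check can reportedly be started like this (a sketch from memory, so treat the mdcmd path and argument as assumptions and verify them against your Unraid version first):

/usr/local/sbin/mdcmd check NOCORRECT   # read-only parity check: counts sync errors, writes no corrections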

Link to comment
