[6.2.4] Drive reported as Full/Read-Only, and things are going downhill.



Last night I had a functioning unRaid 6.2.4 array of three devices: one Storage (2TB), one Parity (2TB), and one Cache (500GB).

 

This morning, I opened the webGUI to check on some downloads I had queued on a Deluge docker.

CheckCommonErrors gave me a red notice indicating that a disk was either full or mounted read-only.

 

Which is weird: the server was fine last night, and the webGUI reports the array still has ~700GB free.  So I checked Deluge and found ~70 errors.  Nothing is coming down; only a few files are being uploaded.  So I telnetted over to the server to check the disks from the command line, and found the following:

 

/bin/ls: cannot access 'user0': Input/output error

/bin/ls: cannot access 'disk1': Input/output error
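
In hindsight, a few checks would have narrowed this down before I touched anything; a rough sketch, assuming the stock Unraid mount layout under /mnt:

mount | grep /mnt           # is disk1 (re)mounted read-only? look for 'ro' in the mount options
df -h /mnt/disk1 /mnt/user  # does free space match what the webGUI reports?
dmesg | tail -n 50          # look for XFS shutdown or I/O error messages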

 

OK, time to try something different.  I have two 2TB drives sitting in Unassigned Devices; both have been pre-cleared and are ready for use.  Recalling the initial message about my disk being full, I figured now was the time to add them to the array.  So I stopped the array, grew it from 3 to 5 devices, and assigned the two pre-cleared drives as Disk 2 and Disk 3.

 

When I started the array, it showed Disk 1, Disk 2, and Disk 3 all as 'Unmountable'.

 

At that point I decided my fixes were not making things better, so I shut the server down.  I hadn't noticed any 'red balls' at any stage, and I'm confused about what is even happening.  Of course, it wasn't until after shutdown that I thought to start taking screenshots.  :-[

 

Attached are screenshots of the telnet session and of the drives presently assigned to my array.

 

 

Please help me get this server back up.

[Attachments: Screenshot_from_2016-11-25_11-48-19.png, Screenshot_from_2016-11-25_12-36-03.png]


xfs_repair aborted with the following error:

 

root@unFOCUSED:~# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 722032 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1454017 tail block 1454008
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
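
Per that message, the safe order is to mount the filesystem so the log replays, unmount, and only then re-run xfs_repair; -L is the last resort.  A rough sketch of that sequence (array started in Maintenance mode; /mnt/test is just a scratch mount point for illustration):

mkdir -p /mnt/test
mount -t xfs /dev/md1 /mnt/test   # a successful mount replays the journal
umount /mnt/test
xfs_repair -v /dev/md1            # should now get past the zero_log error

If the mount itself fails, xfs_repair -L remains, with the corruption risk the message warns about.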


Repair completed and Disks 2 and 3 formatted; the array is up without loss of important data...

 

Deluge seems to have lost all of its working data, but that is replaceable.

 

Thanks!

 

root@unFOCUSED:~# xfs_repair -vL /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 722032 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1454017 tail block 1454008
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_agf block 0x74704441/0x200
flfirst 118 in agf 2 too large (max = 118)
sb_icount 12480, counted 12992
sb_ifree 123, counted 189
sb_fdblocks 290762031, counted 285542517
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 0
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (2:1454001) is ahead of log (1:2).
Format log to cycle 5.

        XFS_REPAIR Summary    Fri Nov 25 13:34:23 2016

Phase		Start		End		Duration
Phase 1:	11/25 13:33:11	11/25 13:33:11	
Phase 2:	11/25 13:33:11	11/25 13:33:29	18 seconds
Phase 3:	11/25 13:33:29	11/25 13:33:43	14 seconds
Phase 4:	11/25 13:33:43	11/25 13:33:43	
Phase 5:	11/25 13:33:43	11/25 13:33:43	
Phase 6:	11/25 13:33:43	11/25 13:33:45	2 seconds
Phase 7:	11/25 13:33:45	11/25 13:33:45	

Total run time: 34 seconds
done
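
Since Phase 6 moved disconnected inodes to lost+found, it's worth checking whether anything landed there once the disk is mounted again (path assumes the standard Unraid per-disk mount):

ls -la /mnt/disk1/lost+found   # orphaned files recovered by xfs_repair end up here, if any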

