tmchow Posted September 20, 2016

I noticed that on my Plex client, some shows and movies were gone. I dug into it and found an error in the Fix Common Problems plugin telling me that /mnt/disk3/ was unwritable. In the Unraid main UI, it still showed disk3 as being available. Dropping to the command line, going to /mnt/disk3 and executing an "ls", I get this error:

/bin/ls: cannot open directory '.': Input/output error

I rebooted to see if that would fix the problem, and now /mnt/disk3 isn't present at all. The Fix Common Problems plugin now says:

disk3 (WDC_WD60EFRX-68MYMN1_WD-WX71D65JEX4Z) has file system errors (No file system (32))

The Fix Common Problems plugin had this help text as a suggestion for what to do:

> If the disk is XFS / REISERFS, stop the array, restart the array in Maintenance mode, and run the file system checks.

Followed by:

> If the disk is listed as being unmountable, and it has data on it, whatever you do, do not hit the format button. Seek assistance HERE

(Which points me here.)

I took the array offline and started in maintenance mode, but was unsure what to do exactly. I tried running the XFS filesystem check through the WebUI (which uses the '-n' switch) and get this:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_agf block 0x1ffffffe1/0x200
flfirst 118 in agf 4 too large (max = 118)
agf 118 freelist blocks bad, skipping freelist scan
agi unlinked bucket 44 is 10651756 in ag 4 (inode=8600586348)
sb_icount 230400, counted 230144
sb_ifree 4006, counted 4071
sb_fdblocks 731187494, counted 731066524
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 8600586348, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 8600586348 nlinks from 0 to 1
No modify flag set, skipping filesystem flush and exiting.

I then tried it through the command line but get this:

root@Tower:/dev# xfs_repair -v /dev/md3
Phase 1 - find and verify superblock...
        - block cache size set to 1447592 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 3365250 tail block 3360669
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

I didn't want to make a mistake here, so wanted to check on the next step to take.
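For anyone following along later, the order that xfs_repair itself recommends can be sketched like this. The /dev/md3 path matches disk3 on an Unraid array; the DRY_RUN guard and /mnt/repairtest mount point are purely illustrative, so the sequence can be read (and exercised) without touching a real device:

```shell
#!/bin/sh
# Illustrative sketch of the safe xfs_repair order, NOT a turnkey script.
# DEVICE defaults to disk3's array device on this box; DRY_RUN=1 (default)
# only prints what would be executed.
DEVICE="${DEVICE:-/dev/md3}"
DRY_RUN="${DRY_RUN:-1}"
PLAN=""

run() {
  # In dry-run mode, record and print the command instead of executing it.
  if [ "$DRY_RUN" = "1" ]; then
    PLAN="$PLAN$* ; "
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Read-only check first: -n reports problems but changes nothing.
run xfs_repair -n "$DEVICE"
# 2. If xfs_repair complains about a dirty log, mount once to replay it,
#    then unmount before re-running the repair.
run mount -t xfs "$DEVICE" /mnt/repairtest
run umount /mnt/repairtest
# 3. Only if the mount fails should -L be used; it zeroes (destroys) the
#    log and can itself cause corruption, so it is the last resort.
run xfs_repair -L "$DEVICE"
```

Run as-is it only prints the plan; the point is the ordering: -n check, then a mount/unmount to replay the log, and -L only after a mount attempt fails.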
tmchow Posted September 20, 2016

Diagnostics attached. tower-diagnostics-20160919-2137.zip
MuddSkipp3r Posted September 20, 2016

I hope we can find an answer. I'm missing over 2TB of data when this disk isn't connected!! :'(
MuddSkipp3r Posted September 20, 2016

So far my outcomes are identical to yours. Even down to the drive. Even looks like you're using WD Reds.
tmchow Posted September 20, 2016

> So far my outcomes are identical to yours. Even down to the drive. Even looks like you're using the WD Reds like me.

Seems too convenient to be coincidental. What output are you getting from xfs_repair? I'm currently running xfs_repair with the "-L" switch now.
MuddSkipp3r Posted September 20, 2016

Been too scared to try the '-L'.
tmchow Posted September 20, 2016

> Been too scared to try the '-L'.

Just completed... output below. Array is starting and fingers crossed.

root@Tower:/dev# xfs_repair -v -L /dev/md3
Phase 1 - find and verify superblock...
        - block cache size set to 1447592 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 3365250 tail block 3360669
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_agf block 0x1ffffffe1/0x200
flfirst 118 in agf 4 too large (max = 118)
agi unlinked bucket 44 is 10651756 in ag 4 (inode=8600586348)
sb_icount 230400, counted 230144
sb_ifree 4006, counted 4071
sb_fdblocks 731187494, counted 731066530
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 0
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 8600586348, moving to lost+found
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:3360571) is ahead of log (1:2).
Format log to cycle 4.
XFS_REPAIR Summary    Mon Sep 19 21:44:43 2016

Phase           Start           End             Duration
Phase 1:        09/19 21:41:33  09/19 21:41:33
Phase 2:        09/19 21:41:33  09/19 21:42:27  54 seconds
Phase 3:        09/19 21:42:27  09/19 21:42:38  11 seconds
Phase 4:        09/19 21:42:38  09/19 21:42:38
Phase 5:        09/19 21:42:38  09/19 21:42:38
Phase 6:        09/19 21:42:38  09/19 21:42:47  9 seconds
Phase 7:        09/19 21:42:47  09/19 21:42:47

Total run time: 1 minute, 14 seconds
done
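Since the repair reported "disconnected inode 8600586348, moving to lost+found", it's worth looking in that directory once the disk mounts; recovered files are named by their old inode numbers, so size and type are usually the only clues to what they were. A small illustrative helper (the function name and path are mine, not part of any Unraid tooling):

```shell
#!/bin/sh
# List whatever a repair left in lost+found. Entries are named after their
# former inode numbers; inspect them manually to decide what to restore.
inspect_lost_found() {
  dir="$1"
  if [ ! -d "$dir" ]; then
    # Either nothing was orphaned, or the disk isn't mounted yet.
    echo "no lost+found at $dir"
    return 0
  fi
  ls -l "$dir"
}

inspect_lost_found /mnt/disk3/lost+found
```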
tmchow Posted September 20, 2016

Bingo... looks like I got all my data back (well, the disk mounts at least). Now I'm running a parity check.
MuddSkipp3r Posted September 20, 2016

Looks like mine is mountable now too!!!! Did you start your Docker apps yet?
tmchow Posted September 20, 2016

> Looks like mine is mountable now too!!!! Did you start your Docker apps yet?

Yes. Oddly, it required me to update all my Docker apps (no idea if that's related or just coincidental). Plex started right up and everything seems fine.
MuddSkipp3r Posted September 20, 2016

Mine look like they are up to date. Not requiring another update at least. I hope I'm done with this problem!
RobJ Posted September 20, 2016

Due to your discoveries and testing, I've added a note about this to the Additional Upgrade Advice. Thank you both!
This topic is now archived and is closed to further replies.