flip (Author) Posted October 3, 2019
My Disk 7 failed. I replaced it, but upon attempting to bring the array back online it appears to freeze. It's been sitting for over an hour, and the system log hasn't logged anything new in about that long either.
jpowell8672 Posted October 3, 2019
What size was the failed disk, and what size is the replacement disk? Is the replacement disk new, with nothing on it? Did you try to reboot and try again? I assume you removed the failed disk from the disk assignment and added the new disk?
flip (Author) Posted October 3, 2019
Thanks for the reply. I didn't remove the failed disk from the array. The drive is new; the original was 2TB and the replacement is 4TB. The array has two parity drives that are 4TB. I have restarted twice now. I'm not sure where to start, command-line-wise, to check progress.
jpowell8672 Posted October 3, 2019
https://wiki.unraid.net/Replacing_a_Data_Drive
flip (Author) Posted October 3, 2019
Thanks. I went ahead and followed the instructions; I'm still at the same place.
jpowell8672 Posted October 3, 2019
Where is disk 1?
flip (Author) Posted October 4, 2019
Honestly, I have no idea why it isn't showing there; there's a drive in that slot. Would that be causing the problem?
Squid Posted October 4, 2019
With dual parity, you can have two disks down at the same time with no problems. However, I suspect the issue is that disk 7 is unmountable. You should post your diagnostics.
flip (Author) Posted October 4, 2019
I attached the zip. I also rebooted and placed a drive for device one.
tower-diagnostics-20191004-0225.zip
jpowell8672 Posted October 4, 2019
19 minutes ago, flip said: I attached the zip. I also rebooted and placed a drive for device one.
The diagnostics file you uploaded is not good; try again.
flip (Author) Posted October 4, 2019
OK, let's try this again.
tower-diagnostics-20191004-1200.zip
JorgeB Posted October 5, 2019
Check the filesystem on disk7. You also need to remove or replace disk1, or the array will be unprotected if another disk fails.
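For reference, a filesystem check like the one suggested above can also be run from the Unraid console. This is a sketch only: it assumes the array is started in Maintenance mode and that disk 7 maps to /dev/md7, which is the usual Unraid device naming but should be confirmed on the Main tab before running anything.

```shell
# Start the array in Maintenance mode first (disks assigned but not mounted),
# then run a read-only XFS check against disk 7's md device.
# -n = no-modify: report problems without writing anything to the disk.
xfs_repair -n /dev/md7
```

Running against the md device (rather than the raw sdX device) keeps parity in sync with any changes a later repair makes. The same check is available from the webGUI by clicking the disk on the Main tab in Maintenance mode.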
flip (Author) Posted October 6, 2019
I went ahead and booted in maintenance mode and ran a check on disk 7, which found no errors. I replaced disk 1 with no issue. When I go to start the array, it still claims disk 7 is unmountable with no file system, and it hangs on mounting the cache drive. Any help would be greatly appreciated.
tower-diagnostics-20191006-2049.zip
Squid Posted October 6, 2019
7 minutes ago, flip said: I went ahead and booted in maintenance mode and ran a check on disk 7, which found no errors
Post the output from when you run the filesystem check on disk 7.
flip (Author) Posted October 7, 2019
With maintenance mode selected, the filesystem check gave me this output:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 3
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Squid Posted October 7, 2019
Remove the -n flag and re-run.
flip (Author) Posted October 7, 2019
1 minute ago, Squid said: Remove the -n flag and re-run.

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
Squid Posted October 7, 2019
Use -L; usually there's no data loss.
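The -L run suggested above would look something like the sketch below, again assuming disk 7 is /dev/md7 and the array is in Maintenance mode. Note that -L discards the uncommitted metadata journal, which is exactly why the earlier xfs_repair message asks you to attempt a normal mount first so the log can be replayed.

```shell
# From Maintenance mode, zero the XFS log and repair disk 7.
# WARNING: -L throws away unreplayed journal entries; only use it after a
# normal mount (which would replay the log cleanly) has failed.
xfs_repair -L /dev/md7
```

In most cases the journal only held a few in-flight metadata updates, which is why data loss from -L is usually minimal, but it is not guaranteed to be zero.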
flip (Author) Posted October 7, 2019
xfs_repair status:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 1
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:96) is ahead of log (1:2).
Format log to cycle 4.
done
flip (Author) Posted October 7, 2019
The array just came back online. Thanks for all the help, everyone!