taxydrivar Posted June 1, 2018

Hi Everyone,

I'm feeling a bit stressed. It's been a month since my last external backup, due to being away from home. I woke up this morning to find that I have one drive offline which won't come back online. That's fine, I say to myself, because I have a parity drive. However, I notice a good chunk of my files seem to be missing (presumably the ones on the offline drive). Wait a minute, what is my parity drive doing if it's not compensating for the loss of one drive? I need some help understanding this.

Attached is my diagnostics log. I'm planning on upgrading this hardware, and the drives, in the next month, but I want to make sure my data is intact first. Please help!

Regards,
TaXyDriVar

ulysses-diagnostics-20180602-0852.zip
trurl Posted June 1, 2018

SMART for all disks looks OK, except for the unassigned 8TB disk. It has a couple of attributes that I would say are bad, but I'm not sure I believe them:

197 Current_Pending_Sector  -O--C-  076 064 000 - 8000
198 Offline_Uncorrectable   ----C-  076 064 000 - 8000

Disk3 is disabled. It is also unmountable, which would explain the missing files. You can try to repair the filesystem on the emulated disk3, then rebuild it. Read this wiki, then come back and ask any questions you have:

https://lime-technology.com/wiki/Check_Disk_Filesystems
taxydrivar Posted June 2, 2018 (Author)

Hi trurl,

Thank you for your quick reply. I have put the array into maintenance mode and run the filesystem check with -nv on the suspect disk3, as it is running XFS. Results below:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.  Exiting now.

I have now run:

xfs_repair -v /dev/md3

These are the results:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
        - block cache size set to 731416 entries
sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 97
resetting superblock realtime bitmap ino pointer to 97
sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 98
resetting superblock realtime summary ino pointer to 98
Phase 2 - using internal log
        - zero log...
zero_log: head block 259519 tail block 259487
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

So I guess from this I just need to mount the array, unmount the array, and then rerun xfs_repair -v /dev/md3?

Cheers
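For reference, the check/repair sequence described in the posts above can be sketched as a console session. This is a sketch only: /dev/md3 is Unraid's device for disk3 while the array is in maintenance mode, the mount step is what Unraid does when you start the array normally, and the DEV variable and existence guard are additions for safety, not part of the original instructions.

```shell
#!/bin/bash
# Sketch of the XFS check/repair sequence discussed in this thread.
# DEV defaults to /dev/md3 (Unraid's disk3 device in maintenance mode).
DEV="${DEV:-/dev/md3}"

# Guard: only touch a device that actually exists as a block device.
if [ -b "$DEV" ]; then
    # 1. Dry run: report problems without modifying anything on disk.
    xfs_repair -nv "$DEV"

    # 2. If xfs_repair reports a dirty log, try to mount the filesystem
    #    so the log is replayed, then unmount (in Unraid: start the
    #    array normally, then stop it and return to maintenance mode).
    # mount "$DEV" /mnt/test && umount /mnt/test

    # 3. Only if the mount fails, zero the log and repair. This can
    #    discard the most recent metadata updates, so it is a last resort:
    # xfs_repair -vL "$DEV"
else
    echo "device $DEV not present; adjust DEV before running"
fi
```

The commented-out steps are deliberate: run them one at a time and read the output of each before moving on, as trurl advises below in the thread's own order.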
taxydrivar Posted June 2, 2018 (Author)

As the drive is unmountable, should I remove it from the array by selecting "no device" (as per the image), start the array, stop the array, and then try to re-add it?

Or should I proceed with xfs_repair -vL /dev/md3?
trurl Posted June 2, 2018

37 minutes ago, taxydrivar said:
"proceed with xfs_repair -vL /dev/md3?"

^this
taxydrivar Posted June 2, 2018 (Author)

Looks like it failed. Here is the tail end:

corrected i8 count in directory 96, was 2, now 0
entry "182e4a4dc7f9aabc8eb846ded6660b58ad9d2a_attributes" at block 0 offset 3040 in directory inode 2147483794 references free inode 2147483771
        clearing inode number in entry at offset 3040...
entry ".." at block 0 offset 80 in directory inode 4298782820 references free inode 2147483749
corrected directory 96 size, was 138, now 106
bogus .. inode number (0) in directory inode 96, clearing inode number
xfs_repair: dir2.c:1419: process_dir2: Assertion `(ino != mp->m_sb.sb_rootino && ino != *parent) || (ino == mp->m_sb.sb_rootino && (ino == *parent || need_root_dotdot == 1))' failed.
entry ".." at block 0 offset 80 in directory inode 4298782823 references free inode 2147483755
Aborted

Looks like I need to try File Scavenger or something and then redo the drive. If you have no other suggestions, trurl, then thanks for all your help thus far.
JorgeB Posted June 2, 2018

Unassign the old disk and check if it mounts with the UD (Unassigned Devices) plugin. If it does, you can do a New Config.
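JorgeB's check can also be done by hand from the console. A sketch, assuming the old disk's data partition appears as /dev/sdX1 (hypothetical placeholder: substitute the real device, which you can find on the Main page or with lsblk); the read-only mount ensures nothing on the old disk is modified.

```shell
#!/bin/bash
# Sketch: test-mount the old (physical) disk read-only to see if its
# filesystem is intact. OLD is a hypothetical placeholder device.
OLD="${OLD:-/dev/sdX1}"
MNT=/tmp/olddisk

mkdir -p "$MNT"
if [ -b "$OLD" ]; then
    # Read-only mount: safe, makes no changes to the old disk.
    mount -o ro "$OLD" "$MNT" && ls "$MNT" && umount "$MNT"
else
    echo "adjust OLD to the old disk's partition before running"
fi
```

If the files list correctly, the physical disk's filesystem is fine even though the emulated disk's was not, which is what makes the New Config route viable.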
This topic is now archived and is closed to further replies.