ToXIc - Posted September 5, 2022 (edited)

Hi everyone. I've been using Unraid since v4 and have replaced many drives since then, but this was the first time two drives went bad at the same time. I'm currently on version 6.9.2 with two parity drives. Before I did anything, I noticed that one of the emulated drives showed nothing; the data in some of its folders was missing. Not thinking too much about it, and hoping it was just a bug, I proceeded with my usual replacement process and started preclearing the new drives. I swapped them in one by one, leaving the badly emulated drive for last. When I switched to the second drive, the rebuild took over 12 hours and wrote almost 4 TB to the drive, but unfortunately my data still isn't there. Am I still seeing the emulation, or is my data really gone?

Attachments: syslog, fatjoe-diagnostics-20220905-1015.zip
JorgeB - Posted September 5, 2022

Please post the diagnostics.
ToXIc - Posted September 5, 2022

I just uploaded the logs from http://tower/log/syslog. Should I gather them via the method you posted?
ToXIc - Posted September 5, 2022

Unfortunately the service wasn't running.
JorgeB - Posted September 5, 2022

What service? You just go to Tools and click on Diagnostics.
ToXIc - Posted September 5, 2022 (edited)

Oh, my bad, I was looking at the persistent syslog service. Uploaded the logs here and in the first post: fatjoe-diagnostics-20220905-1015.zip
JorgeB - Posted September 5, 2022

The server was rebooted after the rebuilds, so we can't see what happened. I do see that disks 9 and 18 are empty. Assuming one of those was the second rebuilt disk, did you at any time format it?
ToXIc - Posted September 5, 2022

Yeah, I rebooted hoping it would show the contents of drive 18. It asked to format 18 when I added 9 to the system, and I did. I guess that killed the data? I also have this new "Historical Devices" section since all this occurred.
JorgeB - Posted September 5, 2022

If you format an unmountable disk, it deletes all the data; the usual solution for that situation is to check the filesystem instead. Do you still have the old disk? Historical devices are from the UD plugin, and they can be safely removed.
ToXIc - Posted September 5, 2022

I still have the old disk. It did say "unmountable". I'll review the link.
JorgeB - Posted September 5, 2022

If you have enough SATA ports, see if the old disk mounts with the UD plugin. If it doesn't, you can repair the filesystem, also using UD.
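For later readers: outside of the UD GUI, a quick way to check whether a disk like this mounts is a read-only mount attempt from the Unraid console. This is only a sketch; the device name and mount point below are placeholders, not values from this thread.

```shell
# Placeholder device: substitute the old disk's partition as shown by UD.
DEV=/dev/sdX1

mkdir -p /mnt/recovertest

# Read-only mount attempt; 'norecovery' skips XFS log replay so
# nothing is written to the damaged filesystem.
if mount -t xfs -o ro,norecovery "$DEV" /mnt/recovertest; then
    echo "Disk mounts; data is reachable under /mnt/recovertest"
    umount /mnt/recovertest
else
    echo "Mount failed; the filesystem likely needs xfs_repair"
fi
```

If the read-only mount succeeds, the data can be copied off immediately; if not, filesystem repair is the next step.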
ToXIc - Posted September 5, 2022

I have a USB dock that loads drives into UD, but it runs at a third of the speed. Or should I use SATA?
JorgeB - Posted September 5, 2022

If you have SATA available, use SATA.
ToXIc - Posted September 5, 2022 (edited)

Got the drive added to the system. It shows up in UD, but I don't see a "Check Filesystem Status" section in the disk settings window for the drive. The drive is XFS.
ToXIc - Posted September 5, 2022

I clicked the check mark in UD and I get:

```
FS: xfs

Executing file system check: /sbin/xfs_repair -n /dev/sdx1 2>&1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is
being ignored because the -n option was used.  Expect spurious
inconsistencies which may be resolved by first mounting the filesystem
to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 895215175, counted 902030091
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

File system corruption detected!
```
ToXIc - Posted September 5, 2022

Recreated the log, and I can see some data. Not everything, but some.
JorgeB - Posted September 5, 2022

You can copy what's there to the array.
ToXIc - Posted September 5, 2022

I am; it took a bit to figure out how to get it to show up. Should I preclear the old drive and trust it?
ToXIc - Posted September 5, 2022

Thanks for your help, by the way. I wish I had posted here to begin with.
JorgeB - Posted September 5, 2022

9 minutes ago, ToXIc said:
"should i preclear the old drive and trust it?"

Post a SMART report first.
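For reference, the report Unraid/UD attaches can also be generated from the console with smartctl (part of smartmontools, which ships with Unraid). The device name below is a placeholder, and the attribute list is a general rule of thumb, not something specific to this thread.

```shell
# Placeholder device: substitute the disk's identifier from the GUI.
smartctl -a /dev/sdX > /boot/smart-report.txt

# Attributes that usually decide whether a drive can be trusted again:
# nonzero reallocated/pending/uncorrectable sectors are a bad sign,
# while CRC errors usually point at cabling rather than the disk.
grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error' /boot/smart-report.txt
```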
itimpi - Posted September 5, 2022

1 hour ago, ToXIc said:
(xfs_repair -n output quoted above)

This output suggests that xfs_repair should be run without the -n option, so that a repair rather than just a check happens.
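As a sketch, that repair looks like the following from the console, with the filesystem unmounted and /dev/sdx1 substituted for the real partition (in UD the equivalent is removing -n from the check options):

```shell
# Placeholder device: the old disk's partition, currently unmounted.
DEV=/dev/sdx1

# The -n output above recommends replaying the XFS log first, which a
# plain mount/unmount cycle does:
mkdir -p /mnt/recover
if mount -t xfs "$DEV" /mnt/recover; then
    umount /mnt/recover
fi

# Then run the actual repair; without -n, changes are written to disk:
xfs_repair "$DEV"

# Only if xfs_repair refuses to run because the log cannot be replayed,
# zero the log. This can lose the most recent metadata updates:
# xfs_repair -L "$DEV"
```

Recovered but disconnected files end up in a lost+found directory at the root of the filesystem, which is worth checking after the repair.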
ToXIc - Posted September 5, 2022

24 minutes ago, JorgeB said:
"Post a SMART report first."

Please see attached: WDC_WD40EFRX-68N32N0_WD-WCC7K6TSYCEL-20220905-1317.txt
JorgeB - Posted September 6, 2022

Disk looks healthy.