caplam

Everything posted by caplam

  1. I'm confused. I replaced the SATA cable, changed the SATA port, and still get an I/O error. So I changed the enclosure for disk 4, and still got an I/O error when running the check with -n from the GUI. So in a terminal I ran xfs_repair -n on disk 4 in the external enclosure and it's running, yet from the GUI I still get an I/O error (see the sketch for post 1 after this list). I have good hope for disk 4, as the internal enclosure it originally sat in had a problem: when I pulled disk 4, the latch of the enclosure was broken and the disk was not held in place as it should be.
  2. When I run xfs_repair -n on disk 4 I get an input/output error.
  3. I'm starting to think the trouble is with the hot-swap enclosure that holds disks 2, 3 and 4.
  4. ddrescue went well and I recovered all the files. I did a New Config with the new disk 3. The parity sync started, but disk 4 had been disabled. So I guess I have to stop the array to replace disk 4. I have no spare 6 TB disk.
  5. I had planned the second option, but I was not aware of the first one, which seems simpler. I think I have to check file integrity on the cloned disk before syncing parity, though.
  6. OK, thank you for following up on my steps. It's currently running: 5% rescued and going on... After the re-sync, can I add the rescued files from the UD device?
  7. I don't know if -r3 means 3 passes or 3 scraping passes.
  8. OK, so it could be: ddrescue -f -v -r3 /dev/sdq /dev/sdj /boot/ddrescue.log. If all is OK, I stop the array, replace sdq with a new drive, assign it as disk 3, start the array in normal mode and mount sdj with UD. Logically a parity sync will be triggered. With Krusader I can then add the files from UD to /mnt/disk3. (See the ddrescue sketch after this list.)
  9. Here is the situation: physical disk 3 is /dev/sdq, emulated disk 3 is /dev/md3, the unassigned disk is /dev/sdj (precleared), and my array is started in maintenance mode.
  10. OK, I was thinking of ddrescue. My plan was to use it on /dev/sdq (physical disk 3) to clone it to another disk, replace disk 3 with a new one, and add back whatever files could be saved. So if I understand you correctly, I'd better use ddrescue on /dev/md3 to clone it to another disk. Whatever files could be saved are then on a disk that is not a member of the array, and I mount that disk with Unassigned Devices. Then I replace disk 3 with a new one; a parity sync will then start. At the end of the parity sync I can add the files to disk 3 from the unassigned one. Edit: xfs_repair didn't find any secondary superblock.
  11. xfs_repair -nv has been running for 2 hours now and it keeps scanning for a secondary superblock; the primary one is bad. I found this thread, and if xfs_repair doesn't work I think I'll go with dd to clone the disk and data-recovery software such as ReclaiMe to try to recover the files.
  12. I let the rebuild run and now disk 4 is back online. 😀 The long SMART test has also finished on disk 3 (see the smartctl sketch after this list):

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.1-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD40EFRX-68WT0N0
Serial Number:    WD-WCC4E5CEVZTC
LU WWN Device Id: 5 0014ee 20da14606
Firmware Version: 82.00A82
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Mar 6 17:00:13 2021 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)  Offline data collection activity was never started.
                                         Auto Offline Data Collection: Disabled.
Self-test execution status:      ( 114)  The previous self-test completed having the read element of the test failed.
Total time to complete Offline data collection: (53460) seconds.
Offline data collection capabilities:    (0x7b)  SMART execute Offline immediate.
                                         Auto Offline data collection on/off support.
                                         Suspend Offline collection upon new command.
                                         Offline surface scan supported.
                                         Self-test supported.
                                         Conveyance Self-test supported.
                                         Selective Self-test supported.
SMART capabilities:            (0x0003)  Saves SMART data before entering power-saving mode.
                                         Supports SMART auto save timer.
Error logging capability:        (0x01)  Error logging supported.
                                         General Purpose Logging supported.
Short self-test routine recommended polling time:      (   2) minutes.
Extended self-test routine recommended polling time:   ( 534) minutes.
Conveyance self-test routine recommended polling time: (   5) minutes.
SCT capabilities:              (0x703d)  SCT Status supported.
                                         SCT Error Recovery Control supported.
                                         SCT Feature Control supported.
                                         SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K  200   200   051    -    2
  3 Spin_Up_Time            POS--K  179   178   021    -    8050
  4 Start_Stop_Count        -O--CK  100   100   000    -    83
  5 Reallocated_Sector_Ct   PO--CK  200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K  200   200   000    -    0
  9 Power_On_Hours          -O--CK  082   082   000    -    13372
 10 Spin_Retry_Count        -O--CK  100   253   000    -    0
 11 Calibration_Retry_Count -O--CK  100   253   000    -    0
 12 Power_Cycle_Count       -O--CK  100   100   000    -    80
192 Power-Off_Retract_Count -O--CK  200   200   000    -    59
193 Load_Cycle_Count        -O--CK  200   200   000    -    1818
194 Temperature_Celsius     -O---K  120   104   000    -    32
196 Reallocated_Event_Count -O--CK  200   200   000    -    0
197 Current_Pending_Sector  -O--CK  200   200   000    -    1
198 Offline_Uncorrectable   ----CK  100   253   000    -    0
199 UDMA_CRC_Error_Count    -O--CK  200   200   000    -    0
200 Multi_Zone_Error_Rate   ---R--  200   200   000    -    1
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      5  Comprehensive SMART error log
0x03       GPL     R/O      6  Ext. Comprehensive SMART error log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x09           SL  R/W      1  Selective self-test log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x21       GPL     R/O      1  Write stream error log
0x22       GPL     R/O      1  Read stream error log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xa0-0xa7  GPL,SL  VS      16  Device vendor specific log
0xa8-0xb6  GPL,SL  VS       1  Device vendor specific log
0xb7       GPL,SL  VS      39  Device vendor specific log
0xbd       GPL,SL  VS       1  Device vendor specific log
0xc0       GPL,SL  VS       1  Device vendor specific log
0xc1       GPL     VS      93  Device vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
Device Error Count: 2
    CR     = Command Register
    FEATR  = Features Register
    COUNT  = Count (was: Sector Count) Register
    LBA_48 = Upper bytes of LBA High/Mid/Low Registers ]  ATA-8
    LH     = LBA High (was: Cylinder High) Register    ]   LBA
    LM     = LBA Mid (was: Cylinder Low) Register      ] Register
    LL     = LBA Low (was: Sector Number) Register     ]
    DV     = Device (was: Device/Head) Register
    DC     = Device Control Register
    ER     = Error register
    ST     = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 2 [1] occurred at disk power-on lifetime: 13337 hours (555 days + 17 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 01 63 08 a4 00 40 00  Error: UNC at LBA = 0x16308a400 = 5956477952

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 04 00 00 10 00 01 63 08 d7 90 40 08  20d+20:37:23.428  READ FPDMA QUEUED
  60 04 00 00 08 00 01 63 08 d3 90 40 08  20d+20:37:23.423  READ FPDMA QUEUED
  60 04 00 00 00 00 01 63 08 cf 90 40 08  20d+20:37:23.417  READ FPDMA QUEUED
  60 01 b8 00 f8 00 01 63 08 cd d8 40 08  20d+20:37:23.412  READ FPDMA QUEUED
  60 02 48 00 f0 00 01 63 08 cb 90 40 08  20d+20:37:23.406  READ FPDMA QUEUED

Error 1 [0] occurred at disk power-on lifetime: 13337 hours (555 days + 17 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 01 63 08 a4 00 40 00  Error: UNC at LBA = 0x16308a400 = 5956477952

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 04 00 00 88 00 01 63 08 bb 90 40 08  20d+20:36:53.198  READ FPDMA QUEUED
  60 04 00 00 80 00 01 63 08 b7 90 40 08  20d+20:36:53.193  READ FPDMA QUEUED
  60 01 b8 00 78 00 01 63 08 b5 d8 40 08  20d+20:36:53.187  READ FPDMA QUEUED
  60 02 48 00 70 00 01 63 08 b3 90 40 08  20d+20:36:53.182  READ FPDMA QUEUED
  60 04 00 00 68 00 01 63 08 af 90 40 08  20d+20:36:53.180  READ FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure        20%       13353        5956477952
# 2  Extended offline    Interrupted (host reset)       90%       13346        -
# 3  Extended offline    Interrupted (host reset)       70%       13346        -
# 4  Short offline       Completed without error        00%       12747        -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       258 (0x0102)
Device State:                        Active (0)
Current Temperature:                 32 Celsius
Power Cycle Min/Max Temperature:     14/36 Celsius
Lifetime    Min/Max Temperature:     14/45 Celsius
Under/Over Temperature Limit Count:   0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version:     2
Temperature Sampling Period:         1 minute
Temperature Logging Interval:        1 minute
Min/Max recommended Temperature:      0/60 Celsius
Min/Max Temperature Limit:           -41/85 Celsius
Temperature History Size (Index):    478 (305)

Index    Estimated Time   Temperature Celsius
 306    2021-03-06 09:03    31  ************
 ...    ..( 56 skipped).    ..  ************
 363    2021-03-06 10:00    31  ************
 364    2021-03-06 10:01    32  *************
 ...    ..( 97 skipped).    ..  *************
 462    2021-03-06 11:39    32  *************
 463    2021-03-06 11:40    31  ************
 ...    ..(319 skipped).    ..  ************
 305    2021-03-06 17:00    31  ************

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

Device Statistics (GP/SMART Log 0x04) not supported

Pending Defects log (GP Log 0x0c) not supported

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  2            0  Command failed due to ICRC error
0x0002  2            0  R_ERR response for data FIS
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0005  2            0  R_ERR response for non-data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS
0x0008  2            0  Device-to-host non-data FIS retries
0x0009  2          855  Transition from drive PhyRdy to drive PhyNRdy
0x000a  2           19  Device-to-host register FISes sent due to a COMRESET
0x000b  2            0  CRC errors within host-to-device FIS
0x000f  2            0  R_ERR response for host-to-device data FIS, CRC
0x0012  2            0  R_ERR response for host-to-device non-data FIS, CRC
0x8000  4      1925892  Vendor specific

So despite the disk having less than 14,000 hours on it, it seems it has to be replaced. Now I will take the array offline to try to repair the disk 3 filesystem.
  13. The rebuild is running but it's quite slow (90 MB/s max). Wouldn't it be simpler to replace disk 3 and rebuild it as well (of course after disk 4 is rebuilt)? Edit: can I run xfs_repair -n /dev/md3 while the rebuild is running?
  14. I guess I have to re-sync disk 4, but how? I have no other 6 TB spare. Do I have to stop the array and run xfs_repair on disk 3?
  15. After a reboot and array start: disk 3 is unmountable and disk 4 is emulated. godzilla-diagnostics-20210305-1514.zip
  16. I rebooted the server and started the array in maintenance mode. godzilla-diagnostics-20210305-1458.zip Edit: when I stop the array I have no choice other than reboot or shutdown; I can't start the array.
  17. Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 3
        - agno = 2
        - agno = 4
        - agno = 5
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

I ran it with the -L option and another time with -n (result above), then stopped the array and restarted in maintenance mode, but it is still unmountable. For disk 3, here is the check with the -n option (see the xfs_repair sketch after this list):

Phase 1 - find and verify superblock...
bad primary superblock - bad CRC in superblock !!!
attempting to find secondary superblock...
....................

It's still running. It looks like I'll lose the data on disks 3 and 4.
  18. Phase 1 - find and verify superblock...
bad primary superblock - bad CRC in superblock !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 128
resetting superblock root inode pointer to 128
sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
resetting superblock realtime bitmap inode pointer to 129
sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
resetting superblock realtime summary inode pointer to 130
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

This is the filesystem check on disk 4. How do I mount it? (See the log-replay sketch after this list.)
  19. Disks 3 and 4 are disabled. godzilla-diagnostics-20210305-1237.zip
  20. Hello, here is what I see on my Unraid server. I can connect with SSH; the VMs are still working and so are the Dockers, but I can't access any of the tools or settings. What should I do now? When I look at the syslog in a terminal I see nothing useful, as it's flooded with read and write errors. There was a parity check running. Two weeks ago I replaced a failing disk. godzilla-diagnostics-20210305-1155.zip
  21. Here they are: ssd writes daily.json.txt and ssd writes hourly.json.txt
  22. Yes, by SSH, as it was impossible via the GUI. After the reboot everything is running perfectly well. I received a new HDD for parity 2: it is currently preclearing. godzilla-diagnostics-20201026-0941.zip
  23. I thought I was fine, but I'm definitely a "black cat" (a French expression for someone very unlucky). Last night the smbd process crashed during the VM backup. This morning some Dockers and some VMs were unresponsive. I had to kill smbd to be able to stop the array and reboot; it took me an hour. I hope this is a one-time bug (I've never had this happen before).
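
Sketch for post 1 (checking disk 4 while it sits in the external enclosure): a minimal, hedged example of a read-only filesystem check outside the array. /dev/sdX is a placeholder for whatever letter the enclosure is assigned; on an Unraid array disk the XFS filesystem lives on partition 1, so the check targets the partition, not the whole device.

# Confirm which device the enclosure exposes and where its partition is (hypothetical /dev/sdX)
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdX

# Read-only check of the XFS filesystem on partition 1; nothing is written
xfs_repair -n /dev/sdX1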
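
Sketch for posts 8 and 10 (cloning the failing disk with ddrescue): a minimal example of the rescue copy discussed there, using the device names from post 9 (/dev/sdq as source, /dev/sdj as the precleared destination) and the map-file path from post 8. In GNU ddrescue, -r N sets the number of retry passes over bad areas after the first copy pass.

# Copy everything readable from the failing disk to the spare, recording progress in a map file
#   -f  force writing to a block device
#   -v  verbose progress
#   -r3 retry bad sectors up to 3 times
ddrescue -f -v -r3 /dev/sdq /dev/sdj /boot/ddrescue.log

# Re-running the same command with the same map file resumes where it left off,
# so an interrupted clone can be continued safely.

The same command form applies if the source is the emulated device /dev/md3 instead of the physical /dev/sdq, as discussed in post 10.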
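
Sketch for post 12 (the long SMART test on disk 3): the report quoted there looks like smartctl -x output; a hedged example of running and reading such a test, with /dev/sdX as a placeholder for the disk 3 device.

# Start an extended (long) self-test; it runs in the drive's own background
smartctl -t long /dev/sdX

# Once it has finished, print the full report: identity, attributes, error log and self-test log
smartctl -x /dev/sdX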
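
Sketch for post 17 (xfs_repair on the emulated disks): with the array started in maintenance mode, the checks run against the emulated md devices, which keeps parity consistent with any repairs. Device numbers follow the disk numbers used in these posts.

# Dry run first: report problems without changing anything
xfs_repair -n /dev/md3
xfs_repair -n /dev/md4

# Actual repair of disk 3, only after reviewing the dry-run output
xfs_repair /dev/md3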
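
Sketch for post 18 (mounting to replay the XFS log): one hedged way to do it by hand while the array is in maintenance mode, assuming /dev/md4 is the emulated disk 4 and using /mnt/test as a hypothetical scratch mountpoint. Starting the array normally so Unraid mounts the disk achieves the same log replay.

# Mount the emulated disk so XFS replays its journal, then unmount it again
mkdir -p /mnt/test
mount /dev/md4 /mnt/test
umount /mnt/test

# Re-run the check once the log has been replayed
xfs_repair -n /dev/md4

Only if the mount itself fails should the -L option from the xfs_repair message be considered, since destroying the log can lose recent metadata.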