Array disk error - how to swap out?


kilo

After recently replacing the parity disk, I think I may have an issue with an array disk.

 

Error count is: 1088

Last SMART test result: Errors occurred - Check SMART report

 

The SMART report is below - not sure what to make of it though :/ How do you read these things?! It says PASSED but then lists a bunch of errors 🤣

Please could someone take a peek and advise - thanks :)

 

 

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.28-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba N300 NAS HDD
Device Model:     TOSHIBA HDWG180
Serial Number:    7020A00XXXXXX
LU WWN Device Id: 5 000039 a48c88447
Firmware Version: 0603
User Capacity:    8,001,563,222,016 bytes [8.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 29 12:32:29 2022 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM level is:     128 (minimum power consumption without standby)
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, frozen [SEC2]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:
Offline data collection status:  (0x82)    Offline data collection activity
                    was completed without error.
                    Auto Offline Data Collection: Enabled.
Self-test execution status:      (  73)    The previous self-test completed having
                    a test element that failed and the test
                    element that failed is not known.
Total time to complete Offline
data collection:         (  120) seconds.
Offline data collection
capabilities:              (0x5b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    No Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   2) minutes.
Extended self-test routine
recommended polling time:      ( 824) minutes.
SCT capabilities:            (0x003d)    SCT Status supported.
                    SCT Error Recovery Control supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     PO-R--   100   100   050    -    0
  2 Throughput_Performance  P-S---   100   100   050    -    0
  3 Spin_Up_Time            POS--K   100   100   001    -    8277
  4 Start_Stop_Count        -O--CK   100   100   000    -    19
  5 Reallocated_Sector_Ct   PO--CK   100   100   050    -    0
  7 Seek_Error_Rate         PO-R--   100   038   050    Past 0
  8 Seek_Time_Performance   P-S---   100   100   050    -    0
  9 Power_On_Hours          -O--CK   085   085   000    -    6084
 10 Spin_Retry_Count        PO--CK   100   100   030    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    19
191 G-Sense_Error_Rate      -O--CK   100   100   000    -    0
192 Power-Off_Retract_Count -O--CK   100   100   000    -    4
193 Load_Cycle_Count        -O--CK   100   100   000    -    31
194 Temperature_Celsius     -O---K   100   100   000    -    24 (Min/Max 8/55)
196 Reallocated_Event_Count -O--CK   100   100   000    -    0
197 Current_Pending_Sector  -O--CK   100   100   000    -    0
198 Offline_Uncorrectable   ----CK   100   100   000    -    0
199 UDMA_CRC_Error_Count    -O--CK   200   200   000    -    0
220 Disk_Shift              -O----   100   100   000    -    134348830
222 Loaded_Hours            -O--CK   085   085   000    -    6018
223 Load_Retry_Count        -O--CK   100   100   000    -    0
224 Load_Friction           -O---K   100   100   000    -    0
226 Load-in_Time            -OS--K   100   100   000    -    536
240 Head_Flying_Hours       P-----   100   100   001    -    0
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O     51  Comprehensive SMART error log
0x03       GPL     R/O      5  Ext. Comprehensive SMART error log
0x04       GPL,SL  R/O      8  Device Statistics log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x08       GPL     R/O      2  Power Conditions log
0x09           SL  R/W      1  Selective self-test log
0x0c       GPL     R/O    513  Pending Defects log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x24       GPL     R/O  49152  Current Device Internal Status Data log
0x25       GPL     R/O  49152  Saved Device Internal Status Data log
0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (5 sectors)
Device Error Count: 5
    CR     = Command Register
    FEATR  = Features Register
    COUNT  = Count (was: Sector Count) Register
    LBA_48 = Upper bytes of LBA High/Mid/Low Registers ]  ATA-8
    LH     = LBA High (was: Cylinder High) Register    ]   LBA
    LM     = LBA Mid (was: Cylinder Low) Register      ] Register
    LL     = LBA Low (was: Sector Number) Register     ]
    DV     = Device (was: Device/Head) Register
    DC     = Device Control Register
    ER     = Error register
    ST     = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 5 [4] occurred at disk power-on lifetime: 5932 hours (247 days + 4 hours)
  When the command that caused the error occurred, the device was in standby mode.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 43 00 00 00 02 95 f9 a3 48 40 00  Error: UNC at LBA = 0x295f9a348 = 11106100040

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 01 00 00 10 00 02 95 f9 a3 90 40 00  6d+14:31:20.937  READ FPDMA QUEUED
  60 01 00 00 08 00 02 95 f9 a2 90 40 00  6d+14:31:20.934  READ FPDMA QUEUED
  60 01 00 00 00 00 02 95 f9 a1 90 40 00  6d+14:31:20.934  READ FPDMA QUEUED
  60 01 00 00 f0 00 02 95 f9 a0 90 40 00  6d+14:31:20.933  READ FPDMA QUEUED
  60 01 00 00 e8 00 02 95 f9 9f 90 40 00  6d+14:31:20.932  READ FPDMA QUEUED

Error 4 [3] occurred at disk power-on lifetime: 5833 hours (243 days + 1 hours)
  When the command that caused the error occurred, the device was in standby mode.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 43 00 80 00 02 95 f9 a3 48 40 00  Error: UNC at LBA = 0x295f9a348 = 11106100040

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 01 b8 00 b8 00 02 95 f9 a7 40 40 00  2d+11:04:36.964  READ FPDMA QUEUED
  60 00 c0 00 b0 00 02 95 f9 a6 80 40 00  2d+11:04:36.959  READ FPDMA QUEUED
  60 01 c8 00 a8 00 02 95 f9 a4 b8 40 00  2d+11:04:36.958  READ FPDMA QUEUED
  60 02 58 00 a0 00 02 95 f9 a2 60 40 00  2d+11:04:36.955  READ FPDMA QUEUED
  60 01 18 00 98 00 02 95 f9 a1 48 40 00  2d+11:04:36.955  READ FPDMA QUEUED

Error 3 [2] occurred at disk power-on lifetime: 5731 hours (238 days + 19 hours)
  When the command that caused the error occurred, the device was in standby mode.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 43 00 40 00 02 95 f9 a3 48 40 00  Error: UNC at LBA = 0x295f9a348 = 11106100040

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 01 00 00 50 00 02 95 f9 a5 10 40 00  6d+06:27:28.957  READ FPDMA QUEUED
  60 01 00 00 48 00 02 95 f9 a4 10 40 00  6d+06:27:28.954  READ FPDMA QUEUED
  60 01 00 00 40 00 02 95 f9 a3 10 40 00  6d+06:27:28.954  READ FPDMA QUEUED
  60 01 00 00 38 00 02 95 f9 a2 10 40 00  6d+06:27:28.953  READ FPDMA QUEUED
  60 01 00 00 30 00 02 95 f9 a1 10 40 00  6d+06:27:28.952  READ FPDMA QUEUED

Error 2 [1] occurred at disk power-on lifetime: 5597 hours (233 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 43 00 40 00 02 95 f9 a3 48 40 00  Error: UNC at LBA = 0x295f9a348 = 11106100040

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 08 00 48 00 02 95 f9 a4 18 40 00     15:51:42.811  READ FPDMA QUEUED
  60 00 08 00 40 00 02 95 f9 a3 48 40 00     15:51:42.802  READ FPDMA QUEUED
  60 00 08 00 38 00 02 95 f9 a3 40 40 00     15:51:42.802  READ FPDMA QUEUED
  60 00 08 00 30 00 02 95 f9 a3 38 40 00     15:51:42.802  READ FPDMA QUEUED
  60 00 08 00 28 00 02 95 f9 a3 30 40 00     15:51:42.802  READ FPDMA QUEUED

Error 1 [0] occurred at disk power-on lifetime: 5597 hours (233 days + 5 hours)
  When the command that caused the error occurred, the device was in standby mode.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 43 00 c0 00 02 95 f9 a3 48 40 00  Error: UNC at LBA = 0x295f9a348 = 11106100040

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 01 00 00 10 00 02 95 f9 a4 10 40 00     15:41:27.283  READ FPDMA QUEUED
  60 01 00 00 08 00 02 95 f9 a3 10 40 00     15:41:24.939  READ FPDMA QUEUED
  60 01 00 00 00 00 02 95 f9 a2 10 40 00     15:41:24.937  READ FPDMA QUEUED
  60 01 00 00 f0 00 02 95 f9 a1 10 40 00     15:41:24.937  READ FPDMA QUEUED
  60 01 00 00 e8 00 02 95 f9 a0 10 40 00     15:41:24.937  READ FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: unknown failure    90%      6084         0
# 2  Short offline       Completed: unknown failure    90%      5557         0
# 3  Short offline       Completed: unknown failure    90%      5557         0

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       1 (0x0001)
Device State:                        Active (0)
Current Temperature:                    24 Celsius
Power Cycle Min/Max Temperature:     13/28 Celsius
Lifetime    Min/Max Temperature:      8/55 Celsius
Specified Max Operating Temperature:    55 Celsius
Under/Over Temperature Limit Count:   0/0

SCT Temperature History Version:     2
Temperature Sampling Period:         1 minute
Temperature Logging Interval:        1 minute
Min/Max recommended Temperature:      5/55 Celsius
Min/Max Temperature Limit:           -40/70 Celsius
Temperature History Size (Index):    478 (133)

Index    Estimated Time   Temperature Celsius
 134    2022-01-29 04:35    22  ***
 135    2022-01-29 04:36    22  ***
 136    2022-01-29 04:37    21  **
 137    2022-01-29 04:38    21  **
 138    2022-01-29 04:39    22  ***
 139    2022-01-29 04:40    21  **
 ...    ..( 52 skipped).    ..  **
 192    2022-01-29 05:33    21  **
 193    2022-01-29 05:34    20  *
 194    2022-01-29 05:35    21  **
 ...    ..( 24 skipped).    ..  **
 219    2022-01-29 06:00    21  **
 220    2022-01-29 06:01    20  *
 221    2022-01-29 06:02    21  **
 ...    ..( 27 skipped).    ..  **
 249    2022-01-29 06:30    21  **
 250    2022-01-29 06:31    20  *
 251    2022-01-29 06:32    21  **
 ...    ..(  2 skipped).    ..  **
 254    2022-01-29 06:35    21  **
 255    2022-01-29 06:36    20  *
 256    2022-01-29 06:37    21  **
 ...    ..(  8 skipped).    ..  **
 265    2022-01-29 06:46    21  **
 266    2022-01-29 06:47    20  *
 ...    ..(  2 skipped).    ..  *
 269    2022-01-29 06:50    20  *
 270    2022-01-29 06:51    21  **
 271    2022-01-29 06:52    21  **
 272    2022-01-29 06:53    21  **
 273    2022-01-29 06:54    20  *
 274    2022-01-29 06:55    21  **
 275    2022-01-29 06:56    21  **
 276    2022-01-29 06:57    21  **
 277    2022-01-29 06:58    20  *
 278    2022-01-29 06:59    20  *
 279    2022-01-29 07:00    21  **
 280    2022-01-29 07:01    20  *
 281    2022-01-29 07:02    21  **
 282    2022-01-29 07:03    20  *
 ...    ..(  3 skipped).    ..  *
 286    2022-01-29 07:07    20  *
 287    2022-01-29 07:08    21  **
 288    2022-01-29 07:09    20  *
 289    2022-01-29 07:10    20  *
 290    2022-01-29 07:11    20  *
 291    2022-01-29 07:12    21  **
 292    2022-01-29 07:13    21  **
 293    2022-01-29 07:14    20  *
 294    2022-01-29 07:15    21  **
 295    2022-01-29 07:16    21  **
 296    2022-01-29 07:17    21  **
 297    2022-01-29 07:18    20  *
 298    2022-01-29 07:19    21  **
 299    2022-01-29 07:20    20  *
 300    2022-01-29 07:21    20  *
 301    2022-01-29 07:22    20  *
 302    2022-01-29 07:23    21  **
 ...    ..(  6 skipped).    ..  **
 309    2022-01-29 07:30    21  **
 310    2022-01-29 07:31    20  *
 ...    ..(  2 skipped).    ..  *
 313    2022-01-29 07:34    20  *
 314    2022-01-29 07:35    21  **
 315    2022-01-29 07:36    20  *
 316    2022-01-29 07:37    20  *
 317    2022-01-29 07:38    21  **
 318    2022-01-29 07:39    20  *
 319    2022-01-29 07:40    20  *
 320    2022-01-29 07:41    21  **
 321    2022-01-29 07:42    20  *
 ...    ..(  2 skipped).    ..  *
 324    2022-01-29 07:45    20  *
 325    2022-01-29 07:46    21  **
 326    2022-01-29 07:47    20  *
 327    2022-01-29 07:48    21  **
 ...    ..(  2 skipped).    ..  **
 330    2022-01-29 07:51    21  **
 331    2022-01-29 07:52    20  *
 332    2022-01-29 07:53    20  *
 333    2022-01-29 07:54    21  **
 334    2022-01-29 07:55    21  **
 335    2022-01-29 07:56    20  *
 336    2022-01-29 07:57    20  *
 337    2022-01-29 07:58    21  **
 338    2022-01-29 07:59    20  *
 339    2022-01-29 08:00    21  **
 340    2022-01-29 08:01    20  *
 341    2022-01-29 08:02    21  **
 342    2022-01-29 08:03    21  **
 343    2022-01-29 08:04    21  **
 344    2022-01-29 08:05    20  *
 345    2022-01-29 08:06    21  **
 ...    ..(  3 skipped).    ..  **
 349    2022-01-29 08:10    21  **
 350    2022-01-29 08:11    20  *
 351    2022-01-29 08:12    21  **
 352    2022-01-29 08:13    20  *
 353    2022-01-29 08:14    21  **
 354    2022-01-29 08:15    21  **
 355    2022-01-29 08:16    20  *
 356    2022-01-29 08:17    20  *
 357    2022-01-29 08:18    20  *
 358    2022-01-29 08:19    21  **
 359    2022-01-29 08:20    21  **
 360    2022-01-29 08:21    20  *
 361    2022-01-29 08:22    20  *
 362    2022-01-29 08:23    21  **
 363    2022-01-29 08:24    20  *
 364    2022-01-29 08:25    21  **
 ...    ..(  2 skipped).    ..  **
 367    2022-01-29 08:28    21  **
 368    2022-01-29 08:29    20  *
 369    2022-01-29 08:30    21  **
 370    2022-01-29 08:31    21  **
 371    2022-01-29 08:32    21  **
 372    2022-01-29 08:33    20  *
 ...    ..(  3 skipped).    ..  *
 376    2022-01-29 08:37    20  *
 377    2022-01-29 08:38    21  **
 378    2022-01-29 08:39    20  *
 379    2022-01-29 08:40    20  *
 380    2022-01-29 08:41    21  **
 381    2022-01-29 08:42    20  *
 382    2022-01-29 08:43    20  *
 383    2022-01-29 08:44    21  **
 384    2022-01-29 08:45    20  *
 385    2022-01-29 08:46    20  *
 386    2022-01-29 08:47    21  **
 387    2022-01-29 08:48    20  *
 388    2022-01-29 08:49    21  **
 389    2022-01-29 08:50    21  **
 390    2022-01-29 08:51    21  **
 391    2022-01-29 08:52    20  *
 392    2022-01-29 08:53    21  **
 ...    ..(  6 skipped).    ..  **
 399    2022-01-29 09:00    21  **
 400    2022-01-29 09:01    20  *
 401    2022-01-29 09:02    21  **
 ...    ..(  3 skipped).    ..  **
 405    2022-01-29 09:06    21  **
 406    2022-01-29 09:07    20  *
 407    2022-01-29 09:08    21  **
 ...    ..(  6 skipped).    ..  **
 414    2022-01-29 09:15    21  **
 415    2022-01-29 09:16    20  *
 416    2022-01-29 09:17    21  **
 417    2022-01-29 09:18    21  **
 418    2022-01-29 09:19    21  **
 419    2022-01-29 09:20    20  *
 420    2022-01-29 09:21    20  *
 421    2022-01-29 09:22    21  **
 422    2022-01-29 09:23    21  **
 423    2022-01-29 09:24    20  *
 424    2022-01-29 09:25    21  **
 425    2022-01-29 09:26    21  **
 426    2022-01-29 09:27    21  **
 427    2022-01-29 09:28    20  *
 428    2022-01-29 09:29    20  *
 429    2022-01-29 09:30    20  *
 430    2022-01-29 09:31    21  **
 ...    ..( 55 skipped).    ..  **
   8    2022-01-29 10:27    21  **
   9    2022-01-29 10:28    22  ***
  10    2022-01-29 10:29    21  **
 ...    ..(  3 skipped).    ..  **
  14    2022-01-29 10:33    21  **
  15    2022-01-29 10:34    22  ***
  16    2022-01-29 10:35    21  **
  17    2022-01-29 10:36    21  **
  18    2022-01-29 10:37    21  **
  19    2022-01-29 10:38    22  ***
  20    2022-01-29 10:39    21  **
  21    2022-01-29 10:40    21  **
  22    2022-01-29 10:41    22  ***
 ...    ..( 39 skipped).    ..  ***
  62    2022-01-29 11:21    22  ***
  63    2022-01-29 11:22    23  ****
 ...    ..(  2 skipped).    ..  ****
  66    2022-01-29 11:25    23  ****
  67    2022-01-29 11:26    22  ***
  68    2022-01-29 11:27    23  ****
  69    2022-01-29 11:28    22  ***
  70    2022-01-29 11:29    23  ****
  71    2022-01-29 11:30    23  ****
  72    2022-01-29 11:31    22  ***
  73    2022-01-29 11:32    23  ****
 ...    ..(  3 skipped).    ..  ****
  77    2022-01-29 11:36    23  ****
  78    2022-01-29 11:37    22  ***
  79    2022-01-29 11:38    23  ****
  80    2022-01-29 11:39    22  ***
  81    2022-01-29 11:40    23  ****
 ...    ..(  3 skipped).    ..  ****
  85    2022-01-29 11:44    23  ****
  86    2022-01-29 11:45    22  ***
  87    2022-01-29 11:46    23  ****
 ...    ..( 11 skipped).    ..  ****
  99    2022-01-29 11:58    23  ****
 100    2022-01-29 11:59    24  *****
 101    2022-01-29 12:00    23  ****
 102    2022-01-29 12:01    23  ****
 103    2022-01-29 12:02    24  *****
 104    2022-01-29 12:03    24  *****
 105    2022-01-29 12:04    24  *****
 106    2022-01-29 12:05    23  ****
 107    2022-01-29 12:06    24  *****
 ...    ..( 25 skipped).    ..  *****
 133    2022-01-29 12:32    24  *****

SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled

Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 3) ==
0x01  0x008  4              19  ---  Lifetime Power-On Resets
0x01  0x010  4            6084  ---  Power-on Hours
0x01  0x018  6     16671457536  ---  Logical Sectors Written
0x01  0x020  6        23271092  ---  Number of Write Commands
0x01  0x028  6    215579342694  ---  Logical Sectors Read
0x01  0x030  6       329103986  ---  Number of Read Commands
0x01  0x038  6     21902400000  ---  Date and Time TimeStamp
0x02  =====  =               =  ===  == Free-Fall Statistics (rev 1) ==
0x02  0x010  4               0  ---  Overlimit Shock Events
0x03  =====  =               =  ===  == Rotating Media Statistics (rev 1) ==
0x03  0x008  4             118  ---  Spindle Motor Power-on Hours
0x03  0x010  4              53  ---  Head Flying Hours
0x03  0x018  4              31  ---  Head Load Events
0x03  0x020  4               0  ---  Number of Reallocated Logical Sectors
0x03  0x028  4               2  ---  Read Recovery Attempts
0x03  0x030  4               0  ---  Number of Mechanical Start Failures
0x03  0x038  4               0  ---  Number of Realloc. Candidate Logical Sectors
0x03  0x040  4               4  ---  Number of High Priority Unload Events
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4               5  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              24  ---  Current Temperature
0x05  0x010  1              21  N--  Average Short Term Temperature
0x05  0x018  1              20  N--  Average Long Term Temperature
0x05  0x020  1              55  ---  Highest Temperature
0x05  0x028  1               8  ---  Lowest Temperature
0x05  0x030  1              44  N--  Highest Average Short Term Temperature
0x05  0x038  1              15  N--  Lowest Average Short Term Temperature
0x05  0x040  1              32  N--  Highest Average Long Term Temperature
0x05  0x048  1              19  N--  Lowest Average Long Term Temperature
0x05  0x050  4               0  ---  Time in Over-Temperature
0x05  0x058  1              55  ---  Specified Maximum Operating Temperature
0x05  0x060  4               0  ---  Time in Under-Temperature
0x05  0x068  1               5  ---  Specified Minimum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4             266  ---  Number of Hardware Resets
0x06  0x010  4              84  ---  Number of ASR Events
0x06  0x018  4               0  ---  Number of Interface CRC Errors
0x07  =====  =               =  ===  == Solid State Device Statistics (rev 1) ==
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value

Pending Defects log (GP Log 0x0c)
No Defects Logged

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  4            0  Command failed due to ICRC error
0x0002  4            0  R_ERR response for data FIS
0x0003  4            0  R_ERR response for device-to-host data FIS
0x0004  4            0  R_ERR response for host-to-device data FIS
0x0005  4            0  R_ERR response for non-data FIS
0x0006  4            0  R_ERR response for device-to-host non-data FIS
0x0007  4            0  R_ERR response for host-to-device non-data FIS
0x0008  4            0  Device-to-host non-data FIS retries
0x0009  4            3  Transition from drive PhyRdy to drive PhyNRdy
0x000a  4            2  Device-to-host register FISes sent due to a COMRESET
0x000b  4            0  CRC errors within host-to-device FIS
0x000d  4            0  Non-CRC errors within host-to-device FIS
0x000f  4            0  R_ERR response for host-to-device data FIS, CRC
0x0010  4            0  R_ERR response for host-to-device data FIS, non-CRC
0x0012  4            0  R_ERR response for host-to-device non-data FIS, CRC
0x0013  4            0  R_ERR response for host-to-device non-data FIS, non-CRC


One of my array disks is showing errors so I'm looking to get it replaced.

Having done some searches on how to replace disks, I'm still a little confused about the best way to do this - would appreciate some pointers.

Current array is 3 disks - 2x data & 1x parity.

It's one of the data disks that's faulty. So, I want to remove it from the array, but it'll take a week or so to get a replacement (I have to ship it back to the supplier, wait for them to test and confirm, and then ship a new one out).

 

So, I guess I want to shrink the array for a short period, then add a new disk to the array?!

Data is backed up elsewhere, so losing it wouldn't be critical, but it would be nice not to have to restore it.

 

What's the recommended process for removing a disk from the array and then adding a new one later?

Thanks

 

2 hours ago, JorgeB said:

You can let Unraid emulate the disk, if it's not yet disabled, then just replace it when the new one arrives.

 

Okay, thanks - how do I do that? The disk is still in the array okay (it's green) - is this the process?

 

  1. Stop the array
  2. unassign the disk
  3. Start the array
  4. (get new disk installed)
  5. stop the array
  6. re-assign the disk
  7. start the array

Will that do it?

Thanks!


Would be better if you posted diagnostics so we can see if you have any other problems that might need to be taken care of before rebuilding.

 

Is the emulated disk mounted?

 

 

8 minutes ago, kilo said:
  1. Stop the array
  2. unassign the disk
  3. Start the array
  4. (get new disk installed)
  5. stop the array
  6. re-assign the disk
  7. start the array

Go to Disk Settings and disable autostart. Shutdown. Replace disk. Assign new disk to that slot. Start array to begin rebuild. Reenable autostart if you want.

 

 

13 hours ago, trurl said:

Go to Disk Settings and disable autostart. Shutdown. Replace disk. Assign new disk to that slot. Start array to begin rebuild. Reenable autostart if you want.

 

 

 

Yep, seems easy but this is the issue for me with "Shutdown. Replace disk. Assign new disk..."

 

In my case, it would be: "Shutdown. RMA disk. Wait for a week or more for the new disk to turn up. Install new disk. Assign new disk"

I'm trying to avoid a week or so of downtime ;)

4 hours ago, kilo said:

 

Yep, seems easy but this is the issue for me with "Shutdown. Replace disk. Assign new disk..."

 

In my case, it would be: "Shutdown. RMA disk. Wait for a week or more for the new disk to turn up. Install new disk. Assign new disk"

I'm trying to avoid a week or so of downtime ;)

 

As long as the disk is enabled, or even if it gets disabled and emulated, there is no downtime.

 

20 hours ago, JorgeB said:

You can let Unraid emulate the disk, if it's not yet disabled, then just replace it when the new one arrives.

 

You might consider setting all user shares to exclude that disk so nothing gets written to it. It will still be included for read.

4 hours ago, kilo said:

RMA disk

If you actually have to return the disk before you can get a replacement, then just shutdown, remove the disk, start the array without it, and Unraid will emulate the disk from the parity calculation by reading parity and all other disks. So no downtime.

 

Note if you only have single parity then you have no redundancy without the disk.
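The emulation works because single parity is the bitwise XOR of all the data disks: with one disk missing, XOR-ing parity with the surviving disks reproduces the missing disk's contents on the fly. A toy illustration of the idea (hypothetical 4-byte "disks", not Unraid's actual implementation):

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """Bytewise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Two hypothetical data disks plus one parity disk, matching this array's layout
disk1 = bytes([0x11, 0x22, 0x33, 0x44])
disk2 = bytes([0xAA, 0xBB, 0xCC, 0xDD])   # the disk being sent back for RMA
parity = xor_blocks(disk1, disk2)

# With disk2 physically removed, its data is still fully recoverable:
emulated_disk2 = xor_blocks(parity, disk1)
assert emulated_disk2 == disk2
```

This is also why there is no redundancy while the disk is out: lose a second disk and there is nothing left to XOR against.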

1 hour ago, trurl said:

If you actually have to return the disk before you can get a replacement, then just shutdown, remove the disk, start the array without it, and Unraid will emulate the disk from the parity calculation by reading parity and all other disks. So no downtime.

 

Note if you only have single parity then you have no redundancy without the disk.

 

Yep, this. I have to take the disk out and return it before getting a new one (under RMA) - this takes around a week from sending the old one to receiving the new one.

 

So, in this case it looks like I just shut down the server as-is, pull the disk and start it up - and Unraid will still have the data available (using parity).

Then, when the new disk turns up, shut down, install the disk, preclear, and assign it to the array.

 

I do only have a single parity disk, and can accept the short-term lack of redundancy as I’ve got the data backed up offsite.

 

Thanks :)

4 minutes ago, itimpi said:

Technically this is not required unless you want to first carry out a confidence check on the new drive.

 

Yes, sure. I think I will preclear for confidence's sake. I didn't preclear any of the 3 original disks in my server, and now 2 have become faulty in less than a year.

Just now, bonienl said:

Shutdown

Swap the bad disk for a new disk

Start the system

Assign the new disk as replacement

Start array and let system do the rebuild

 

 

Thanks, but I guess I haven’t made myself clear in previous comments :)

Swap the bad disk for a new disk = 1 week timeline between pulling out the bad disk and physically having the new disk available to install.

 

Sure, I could keep the server shutdown for a week or so, but I would prefer to have it running and just wanted to check this was possible, without causing further issues. It looks like it is, with the caveat of no redundancy during the time the disk is ‘missing’.

  • 3 weeks later...
  • Solution

All good. Took the faulty disk out and shipped it to Scan who swapped it out no worries.

Server & data were fine whilst disk was ‘missing’.

Precleared the new disk, all good.

Added to array, in the same slot as the original disk - array rebuilt.

Good to go :)

 

Props again to Scan (in the UK) - great support as usual!

