Preclear.sh results - Questions about your results? Post them here.



Forgive me for posting this in the preclear thread.

 

Woke up this morning to a nice email saying that my parity drive's reallocated sector count went to 50. Fast forward to this afternoon and a couple more emails later, and it's now up to 56. Suffice it to say, a new drive is on the way, but the way things are headed, the drive will probably fail before it arrives. I disabled the monthly parity check for September tonight.
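For anyone tracking a decline like this between notification emails, polling the counters directly is straightforward (a minimal sketch; sdj is the device named in the event below, adjust to your system):

smartctl -A /dev/sdj | grep -Ei 'Reallocated|Pending|Uncorrect'   # the attributes unRAID alerts on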

 

Firstly, I've never had to deal with a bad parity drive, so any guidance would be appreciated. Worst case scenario the drive dies; I'm assuming unRAID can manage without parity?

 

Second, because it's a parity drive, I'm assuming any data on it is unreadable when I RMA it back to Toshiba? No need to zero it first?

 

Event: unRAID Parity disk SMART health [5]
Subject: Warning [Tower] - reallocated sector ct is 55
Description: TOSHIBA_DT01ACA300_XXX(sdj)
Importance: warning

 

 

#	Attribute Name	Flag	Value	Worst	Threshold	Type	Updated	Failed	Raw Value
1	Raw read error rate	0x000b	098	098	016	Pre-fail	Always	Never	4
2	Throughput performance	0x0005	139	139	054	Pre-fail	Offline	Never	73
3	Spin up time	0x0007	133	133	024	Pre-fail	Always	Never	431 (average 430)
4	Start stop count	0x0012	099	099	000	Old age	Always	Never	4251
5	Reallocated sector count	0x0033	100	100	005	Pre-fail	Always	Never	56
7	Seek error rate	0x000b	100	100	067	Pre-fail	Always	Never	0
8	Seek time performance	0x0005	124	124	020	Pre-fail	Offline	Never	33
9	Power on hours	0x0012	098	098	000	Old age	Always	Never	17775 (2y, 9d, 15h)
10	Spin retry count	0x0013	100	100	060	Pre-fail	Always	Never	0
12	Power cycle count	0x0032	100	100	000	Old age	Always	Never	27
192	Power-off retract count	0x0032	097	097	000	Old age	Always	Never	4263
193	Load cycle count	0x0012	097	097	000	Old age	Always	Never	4263
194	Temperature celsius	0x0002	181	181	000	Old age	Always	Never	33 (min/max 19/48)
196	Reallocated event count	0x0032	100	100	000	Old age	Always	Never	60
197	Current pending sector	0x0022	100	100	000	Old age	Always	Never	0
198	Offline uncorrectable	0x0008	100	100	000	Old age	Offline	Never	0
199	UDMA CRC error count	0x000a	200	200	000	Old age	Always	Never	0

Link to comment

..

Firstly, I've never had to deal with a bad parity drive, so any guidance would be appreciated. Worst case scenario the drive dies; I'm assuming unRAID can manage without parity?

 

Second, because it's a parity drive, I'm assuming any data on it is unreadable when I RMA it back to Toshiba? No need to zero it first?

...

Your data disks can still be read and written without parity, but of course they won't have parity protection.

 

Your parity disk is just a bunch of bits that have absolutely no meaning without the other disks in your array.

Link to comment

Second, because it's a parity drive, I'm assuming any data on it is unreadable when I RMA it back to Toshiba? No need to zero it first?

Your parity disk is just a bunch of bits that have absolutely no meaning without the other disks in your array.

However... it is possible that chunks of that data carry meaning. Consider the scenario where all disks are precleared and only one drive has data: the parity drive would contain the mostly intact contents of that single drive, and running data recovery software against it would yield some readable files. Any region of the parity drive where the content of every data drive except one is zeroed will mirror that one drive's data. No filesystem structure would be available, so raw recovery with binary analysis would be required to find any information.
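To make that concrete: unRAID's single parity is a bitwise XOR across the data disks, so wherever every disk but one holds zeros, the parity byte equals that one disk's byte. A toy sketch with made-up byte values:

d1=0xA7; d2=0x00; d3=0x00            # bytes at the same offset on three data disks
parity=$(( d1 ^ d2 ^ d3 ))           # single parity is the XOR of all data disks
printf 'parity=0x%02X d1=0x%02X\n' "$parity" "$d1"   # prints the same value twice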

 

tl;dr - clearing the parity drive before letting it leave your control is a good idea if there is sensitive data on the server.
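One common way to do that clearing, sketched here with a placeholder device node (/dev/sdX) - this is destructive, so triple-check which device you point it at:

dd if=/dev/zero of=/dev/sdX bs=1M status=progress   # overwrite the whole drive with zeros (GNU dd)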

 

If you are storing data that must not be released under any circumstance, I do not recommend RMA'ing dead drives. Destroy them and eat the loss.

Link to comment

Second, because it's a parity drive, I'm assuming any data on it is unreadable when I RMA it back to Toshiba? No need to zero it first?

However... it is possible that chunks of that data carry meaning. [...] tl;dr - clearing the parity drive before letting it leave your control is a good idea if there is sensitive data on the server.

You're right, of course, and it's probably not that uncommon to have one disk with data while all the other drives hold zeros, since drives are often cleared before use. You might even have one disk larger than all the other data disks, making it the only disk that can contribute parity data beyond the other drives' capacity.
Link to comment

I just finished a preclear on my new 10TB Ironwolf drive, but it seems the report couldn't be saved?

 

/usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1401: /boot/preclear_reports/preclear_report_ZA207YY2_2016.09.06-11:09:31.txt: Invalid argument

root@Tower:/usr/local/emhttp#

 

Is there any other way to get the info, or is this report lost?  I used the preclear plugin for this preclear and I cannot find the report anywhere on the server.
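The "Invalid argument" is almost certainly the colons in the timestamp: /boot on unRAID is a FAT-formatted flash drive, and FAT filenames cannot contain ':'. A hypothetical reproduction:

touch '/boot/preclear_reports/test_11:09:31.txt'   # fails on FAT: Invalid argument
touch '/boot/preclear_reports/test_11-09-31.txt'   # works once the colons are replaced

So the report was never written to the flash drive in the first place; there is nothing to rename or recover there.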

Link to comment

No preclear logs in /var/log. Oh well.

I hope the disk is fine; it passed the preclear.

 

Check the SMART report; it will tell you about the drive and any issues with it.
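For example, assuming the disk is still at the same device node:

smartctl -a /dev/sdX    # identity, health verdict, attribute table, self-test log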

Then test the drive for the Preclear signature (although I don't recall if the plugin has that test).

Those 2 tests are probably the most important when a Preclear finishes. About the only other thing tested is whether every bit is zero, but that only very rarely fails, and a failure would indicate something really wrong with the drive - you'd see that soon enough anyway.
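If the plugin doesn't offer a signature test, a rough manual peek is possible. The exact signature bytes are defined by the preclear script itself, so treat this as a sanity check only, not a verification:

dd if=/dev/sdX bs=512 count=1 2>/dev/null | hexdump -C   # dump the MBR; a precleared disk should be nearly all zeros apart from the signature bytes and the trailing 55 aa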

Link to comment
  • 1 month later...

Hey everyone,

 

Can someone check over my new WD Red 6TB drive preclear results? With my basic understanding of the script from its original post, everything seems OK.

 

Thanks in advance!

 

================================================================== 1.15
=                unRAID server Pre-Clear disk /dev/sdc
=              cycle 1 of 1, partition start on sector 1
= Disk Pre-Clear-Read completed                                 DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
= Step 5 of 10 - Clearing MBR code area                         DONE
= Step 6 of 10 - Setting MBR signature bytes                    DONE
= Step 7 of 10 - Setting partition 1 to precleared state        DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
= Step 10 of 10 - Verifying if the MBR is cleared.              DONE
= Disk Post-Clear-Read completed                                DONE
Disk Temperature: 37C, Elapsed Time:  54:48:31
======================================================================== 1.15
== WDCWD60EFRX-68L0BN1  WD-WX11D65CU3Z1
== Disk /dev/sdc has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdc  /tmp/smart_finish_sdc
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
          Seek_Error_Rate =   100     200            0        ok          0
      Temperature_Celsius =   115     127            0        ok          37
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change.

 

 

Link to comment

Can someone check over my new WD Red 6TB drive preclear results? [report quoted above]

Perfect score
Link to comment
  • 2 weeks later...

Hi all, I am after some advice on understanding the preclear results.

 

I just completed 1 cycle on my new WD Red 8TB, and I was expecting some more info in the preclear report, e.g. about any reallocated sectors.

 

Perhaps I did not set the preclear cycle up correctly? I just used the plugin by gfjardim.

 

EDIT: I just realized S.M.A.R.T. was disabled by default in the BIOS (this is a brand new build). Maybe this is why no further info was given in the report? Or was I meant to run "Verify all the disk" or "Verify MBR Only" under the operation drop-down menu in the preclear plugin?
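For what it's worth, SMART can usually be switched on from the OS as well, regardless of the BIOS toggle, assuming the controller passes the commands through. A minimal sketch using the device from the report below:

smartctl -s on /dev/sde                          # enable SMART on the drive
smartctl -i /dev/sde | grep -i 'SMART support'   # confirm it reads 'Enabled'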

 

My result:

 

############################################################################################################################
#                                                                                                                          #
#                                        unRAID Server Pre-Clear of disk /dev/sde                                          #
#                                      Cycle 1 of 1, partition start on sector 64.                                         #
#                                                                                                                          #
#  Step 1 of 5 - Pre-read verification:                                                  [15:31:11 @ 143 MB/s] SUCCESS     #
#  Step 2 of 5 - Zeroing the disk:                                                       [15:18:50 @ 145 MB/s] SUCCESS     #
#  Step 3 of 5 - Writing unRAID's Preclear signature:                                                          SUCCESS     #
#  Step 4 of 5 - Verifying unRAID's Preclear signature:                                                        SUCCESS     #
#  Step 5 of 5 - Post-Read verification:                                                 [18:50:35 @ 117 MB/s] SUCCESS     #
#                                                                                                                          #
############################################################################################################################
#                                  Cycle elapsed time: 49:40:39 | Total elapsed time: 49:40:39                             #
############################################################################################################################

--> RESULT: Preclear finished succesfully.

root@Tower:/usr/local/emhttp#

 

Link to comment
  • 2 weeks later...

Results of the 10TB ironwolf drive:

...[snipped]...

How does it look? Should I be worried about the "near threshold" values?

No, "near threshold" values are almost always false positives, and the feature should probably have been fixed a long time ago.

 

Apart from 2 numbers, it looks like any other modern Seagate SMART report for first-month usage. They have discontinued Runtime_Bad_Block, which doesn't surprise me, as it was redundant and therefore confusing.

 

One mildly troubling number is Hardware_ECC_Recovered, which has already dropped to 8. It is encoded into the same number field as Raw_Read_Error_Rate (same RAW value), but it's not marked as a 'critical attribute' so it can't fail the drive. Raw_Read_Error_Rate *is* a critical one, and looks possibly worrisome, having already dropped to 64 (usually 100 or higher for the first few years). Everything else looks fine.
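A side note on those scary-looking Seagate raw values: community lore (I haven't seen an official Seagate spec for this, so treat it as an assumption) says the 48-bit raw value of attribute 7, and reportedly attribute 1 as well, packs an error count into the upper 16 bits and the total operation count into the lower 32. A quick decode of the Seek_Error_Rate raw value from the report above:

raw=25261913                       # Seek_Error_Rate raw from the SMART table
printf 'errors=%d  operations=%d\n' $(( raw >> 32 )) $(( raw & 0xFFFFFFFF ))
# -> errors=0  operations=25261913, i.e. no actual seek errors recorded yet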

 

Since this is the first instance of this drive we've seen, it may be too soon to draw any conclusions about your specific drive.

 

 

I purchased an Ironwolf 8TB, and when my Hardware ECC Recovered (SMART 195) went down to 5 I got really concerned. The HDD also had random loud clicking sounds, so it went back to the shop.

I could barely find any SMART charts on the web for the Ironwolf, or for any recent Seagate 8TB drive, to compare against.

 

Thankfully, Backblaze publishes the daily SMART status of all their units (along with very cool annual failure rates by brand/model and other stuff) and has just begun the migration to Seagate 8TB drives (ST8000DM002).

Backblaze Data: https://www.backblaze.com/b2/hard-drive-test-data.html

 

I imported their logs from September 30, 2016 into Excel and customized the sheet so I could see, on the same screen, SMART values 1, 3, 4, 5, 7, 191, and 195 for some of the Seagate 6TB and 8TB models.
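For anyone who would rather not fire up Excel, a rough command-line sketch of the same extraction (the column layout comes from Backblaze's published CSV schema, but positions vary between quarters, so check the header of your download first; the field numbers below are assumptions):

csv=2016-09-30.csv                       # one of the Backblaze daily CSV files
head -1 "$csv" | tr ',' '\n' | nl        # list the columns and their positions
awk -F',' '$3 ~ /ST8000DM002/ {print $3, $6}' "$csv"   # model + smart_1_normalized, assumed at fields 3 and 6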

 

[Chart: normalized SMART values 1, 3, 4, 5, 7, 191 and 195 for Seagate 6TB and 8TB models, from the Backblaze data]

 

I know SMART values may change among different models, but I think this chart is interesting:

 

- Smart 1:  Raw Read Error Rate
    Normal normalized values for the 6TB drives seem to be between 100-120
    Normal normalized values for the 8TB drives seem to be between 70-90

- Smart 7:  Seek Error Rate. After web research, the minimum value for Seagate HDDs seems to be around 60 for most models
    Normal normalized values (6TB) between 85-90
    Normal normalized values (8TB) between 80-85

- Smart 195:  Hardware ECC Recovered. This is kind of interesting:
    Normalized values (6TB): 65-75
    Normalized values (8TB): looking at the whole spreadsheet with 5000 drives (not just the picture), I can see three groups of values (percentages estimated by eye, no statistical analysis applied):
        Around 1/2 of the 8TB units: normalized value of 1
        Around 1/4 of the 8TB units: normalized value around 12
        Around 1/4 of the 8TB units: normalized value around 24

 

If the Ironwolf 8TB or 10TB shares some internal design with the ST8000DM002, those low Hardware ECC Recovered values could be normal for these drives.

 

What do you think?

 

Edit: Every 8TB unit with Hardware ECC Recovered (HER) = 1 has huge numbers (hundreds of millions) in the SMART 240 raw data (Head Flying Hours), while the ones with HER around 12 or 24 have Head Flying Hours raw data in the thousands.

Link to comment
  • 2 weeks later...

A while ago I tried to preclear 2 disks. One worked, and something went wrong with the other, so I had to start the preclear again (I don't remember the exact details). I noticed now (several weeks later) the following in the logs after a startup:

 

Dec 9 10:04:46 Tower preclear.disk: Resuming preclear of disk 'sdm'

 

sdm was the disk that had issues with the preclear before, but sdm doesn't exist anymore; that disk is now sdk and is part of the cache pool. It seems preclear still thinks it has something to do. Is it possible/necessary to stop this?

Link to comment

I bought another 4TB from Amazon, which was horribly packed. I think the results look OK, but what do you all think after the preclear?

 

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   100   100   044    Pre-fail  Always       -       952
  3 Spin_Up_Time            0x0003   100   100   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       1
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   100   253   045    Pre-fail  Always       -       6783
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       0 (125 228 0)
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       1
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   253   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   075   075   040    Old_age   Always       -       25 (Min/Max 20/25)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       1
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       2
194 Temperature_Celsius     0x0022   025   040   000    Old_age   Always       -       25 (0 20 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   253   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       221203700645888
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       0
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       952

 

Link to comment

My 3rd Seagate 10TB IronWolf:

 

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   083   064   044    Pre-fail  Always       -       223599568
  3 Spin_Up_Time            0x0003   094   094   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       4
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   074   060   045    Pre-fail  Always       -       25261913
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       76 (206 164 0)
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       4
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   098   098   000    Old_age   Always       -       2
190 Airflow_Temperature_Cel 0x0022   058   057   040    Old_age   Always       -       42 (Min/Max 32/43)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       1200
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       3
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       5
194 Temperature_Celsius     0x0022   042   043   000    Old_age   Always       -       42 (0 22 0 0 0)
195 Hardware_ECC_Recovered  0x001a   008   008   000    Old_age   Always       -       223599568
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       114022791774283
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       19532873968
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       59745589152

 

== Using :Read block size = 1003520 Bytes
== Last Cycle's Pre Read Time  : 19:59:44 (138 MB/s)
== Last Cycle's Zeroing time   : 15:57:04 (174 MB/s)
== Last Cycle's Post Read Time : 39:51:05 (69 MB/s)
== Last Cycle's Total Time     : 75:48:53
==
== Total Elapsed Time 75:48:53
==
== Disk Start Temperature: 32C
==
== Current Disk Temperature: -->42<--C, 
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =    83     100           44        ok          223599568
          Seek_Error_Rate =    74     100           45        ok          25261913
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
          High_Fly_Writes =    98     100            0        ok          2
  Airflow_Temperature_Cel =    58      68           40        near_thresh 42
      Temperature_Celsius =    42      32            0        ok          42
   Hardware_ECC_Recovered =     8     100            0        near_thresh 223599568

 

 

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   100   100   044    Pre-fail  Always       -       609120
  3 Spin_Up_Time            0x0003   094   094   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       4
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   100   253   045    Pre-fail  Always       -       10455
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       0 (170 224 0)
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       4
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   068   068   040    Old_age   Always       -       32 (Min/Max 32/32)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       3
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       5
194 Temperature_Celsius     0x0022   032   040   000    Old_age   Always       -       32 (0 22 0 0 0)
195 Hardware_ECC_Recovered  0x001a   100   100   000    Old_age   Always       -       609120
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       94686849007616
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       192
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       608928

 

 

 

 

 

Unfortunately, I used a different enclosure for the 2nd HDD and therefore I wasn't able to get any SMART values. Since no sectors were pending I just assumed the disk is fine and installed it in the NAS.

Link to comment

~75 hours per cycle  :o

holy cow!

Looks like he was using the standard script based on the post read time.  Using the fast preclear script might drop it to 56 hours.  Here is the time for an HGST NAS 6TB with fast option specified:

== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 13:03:50 (127 MB/s)
== Last Cycle's Zeroing time   : 9:12:56 (180 MB/s)
== Last Cycle's Post Read Time : 14:30:10 (114 MB/s)
== Last Cycle's Total Time     : 23:44:06

The post read is just slightly longer than the preread.

Link to comment

My 3rd Seagate 10TB IronWolf:

 

============================================================================
** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =    83     100           44        ok          223599568
          Seek_Error_Rate =    74     100           45        ok          25261913
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
          High_Fly_Writes =    98     100            0        ok          2
  Airflow_Temperature_Cel =    58      68           40        near_thresh 42
      Temperature_Celsius =    42      32            0        ok          42
   Hardware_ECC_Recovered =     8     100            0        near_thresh 223599568

 

 

I have only found 4 SMART reports from either 8TB or 10TB Ironwolfs with enough working hours to give stable normalized values for (1) RRER, (7) SER, and (195) HER. It is still a small sample to draw general conclusions from, but the data seem to support the hypothesis I posted here: that SMART values for the Seagate 8TB drive ST8000DM002 (heavily tested by Backblaze) can be used as a guide to "normal" SMART values for the new 8TB and 10TB Ironwolf series.

The behaviour of Seek Error Rate seems typical for a Seagate drive: a worst value of 60 and a normalized value somewhere between 70 and 85. The Raw Read Error Rate also seems normal compared to the data I have so far.

The (195) Hardware_ECC_Recovered value is the one I really want to figure out before I pull the trigger on the Ironwolfs. Those values so close to zero worried me. However, they seem to be normal for the Seagate ST8000DM002 and for the Ironwolfs I have seen so far.

  • Upvote 1
Link to comment

New disk.  I don't think I've had pending sectors go up before.  Any recommendations?

Preclear

############################################################################################################################
#                                                                                                                          #
#                                        unRAID Server Pre-Clear of disk /dev/sdi                                          #
#                                       Cycle 1 of 1, partition start on sector 64.                                        #
#                                                                                                                          #
#                                                                                                                          #
#   Step 1 of 5 - Pre-read verification:                                                   [35:52:11 @ 61 MB/s] SUCCESS    #
#   Step 2 of 5 - Zeroing the disk:                                                       [12:22:15 @ 179 MB/s] SUCCESS    #
#   Step 3 of 5 - Writing unRAID's Preclear signature:                                                          SUCCESS    #
#   Step 4 of 5 - Verifying unRAID's Preclear signature:                                                        SUCCESS    #
#   Step 5 of 5 - Post-Read verification:                                                  [43:42:25 @ 50 MB/s] SUCCESS    #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
############################################################################################################################
#                                 Cycle elapsed time: 92:08:13 | Total elapsed time: 92:08:14                              #
############################################################################################################################

############################################################################################################################
#                                                                                                                          #
#                                                   S.M.A.R.T. Status                                                      #
#                                                                                                                          #
#                                                                                                                          #
#   ATTRIBUTE                      INITIAL    CYCLE 1    STATUS                                                            #
#   5-Reallocated_Sector_Ct        0          24         Up 24                                                             #
#   9-Power_On_Hours               0          92         Up 92                                                             #
#   184-End-to-End_Error           0          0          -                                                                 #
#   187-Reported_Uncorrect         0          3          Up 3                                                              #
#   190-Airflow_Temperature_Cel    28         39         Up 11                                                             #
#   197-Current_Pending_Sector     0          8          Up 8                                                              #
#   198-Offline_Uncorrectable      0          8          Up 8                                                              #
#   199-UDMA_CRC_Error_Count       0          0          -                                                                 #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
############################################################################################################################
#   SMART overall-health self-assessment test result: PASSED                                                               #
############################################################################################################################

--> ATTENTION: Please take a look into the SMART report above for drive health issues.

--> RESULT: Preclear finished succesfully.

 

Attributes

#	Attribute Name	Flag	Value	Worst	Threshold	Type	Updated	Failed	Raw Value
1	Raw read error rate	0x000f	077	064	044	Pre-fail	Always	Never	227500304
3	Spin up time	0x0003	098	098	000	Pre-fail	Always	Never	0
4	Start stop count	0x0032	100	100	020	Old age	Always	Never	1
5	Reallocated sector count	0x0033	100	100	010	Pre-fail	Always	Never	24
7	Seek error rate	0x000f	075	060	045	Pre-fail	Always	Never	29453291
9	Power on hours	0x0032	100	100	000	Old age	Always	Never	95 (130 214 0)
10	Spin retry count	0x0013	100	100	097	Pre-fail	Always	Never	0
12	Power cycle count	0x0032	100	100	020	Old age	Always	Never	1
184	End-to-end error	0x0032	100	100	099	Old age	Always	Never	0
187	Reported uncorrect	0x0032	097	097	000	Old age	Always	Never	3
188	Command timeout	0x0032	100	100	000	Old age	Always	Never	0 0 0
189	High fly writes	0x003a	100	100	000	Old age	Always	Never	0
190	Airflow temperature cel	0x0022	064	059	040	Old age	Always	Never	36 (min/max 27/41)
191	G-sense error rate	0x0032	100	100	000	Old age	Always	Never	941
192	Power-off retract count	0x0032	100	100	000	Old age	Always	Never	0
193	Load cycle count	0x0032	100	100	000	Old age	Always	Never	15
194	Temperature celsius	0x0022	036	041	000	Old age	Always	Never	36 (0 27 0 0 0)
195	Hardware ECC recovered	0x001a	032	005	000	Old age	Always	Never	227500304
197	Current pending sector	0x0012	100	100	000	Old age	Always	Never	8
198	Offline uncorrectable	0x0010	100	100	000	Old age	Offline	Never	8
199	UDMA CRC error count	0x003e	200	200	000	Old age	Always	Never	0
240	Head flying hours	0x0000	100	253	000	Old age	Offline	Never	92h+36m+50.052s
241	Total lbas written	0x0000	100	253	000	Old age	Offline	Never	15628053320
242	Total lbas read	0x0000	100	253	000	Old age	Offline	Never	31872235519

 

 

Link to comment

That does not look very good for a new disk!

On a new disk I would expect both reallocated sectors and pending sectors to be zero. Although it is not definitively wrong to have non-zero values for reallocated sectors, one only expects that to happen later in the lifetime of the disk.

Also, you want pending sectors to be zero when the disk is used with unRAID. Another preclear cycle might clear that, but if the reallocated sector count keeps going up, it suggests the disk is close to failing.
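If you want a second opinion while that runs, a SMART extended self-test is a cheap cross-check, since it runs inside the drive itself (assuming /dev/sdX is the suspect disk; the test takes several hours on a drive this size):

smartctl -t long /dev/sdX                          # start the extended self-test
smartctl -a /dev/sdX | grep -A8 'Self-test log'    # check the outcome when it finishes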

 

What state was the packaging in when you received the disk?  It is possible the disk was damaged in transit.

Link to comment

I don't recall the disk packaging looking abnormal. Standard Newegg packaging: sealed drive with black holders in a small box, and that box inside another. I don't think it was bubble-wrapped like I believe the WD 8TB Reds are.

So you think I should run another preclear and see what happens?

 

 


Link to comment

After that very bad result, I'd say you would want at least 2 full perfect Preclears in a row before the drive could be trusted. This is one time when I would probably want to Preclear it 3 times and expect perfection on all 3 (perfection meaning NO current pending sectors ever, NO increases in reallocated sectors, and no other issues either).
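If memory serves, the original script will run those back-to-back passes unattended via its cycle-count option; the exact flag is from memory, so verify it against the script's own usage text before trusting it:

preclear_disk.sh -c 3 /dev/sdX    # three consecutive preclear cycles (flag assumed)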

 

And that's just me!  There are other users here that wouldn't even give it that much of a chance, would return it immediately for replacement, no more testing at all.

Link to comment

My 3rd Seagate 10TB IronWolf: [SMART report quoted above]

The (195) Hardware_ECC_Recovered value is the one I really want to figure out before I pull the trigger on the Ironwolfs. Those values so close to zero worried me. However, they seem to be normal for the Seagate ST8000DM002 and for the Ironwolfs I have seen so far.

 

Good observations again; I agree with your conclusions. I have to apologize for not responding to your first report! I spent a fair amount of time on it, a lot of data! You had some interesting observations, and I believe I saw one or two other things, but I ran out of time, then got busy with other stuff, then been dealing lately with more projects on a LIFO schedule, and by the time I tried to get back to your post, I'd forgotten everything. At some point, I'll go back and learn what I can from it, but I think you summarized the most important things. Just have to say though - great reporting!

  • Upvote 1
Link to comment

Good observations again; I agree with your conclusions. [...] Just have to say though - great reporting!

 

Thank you very much Rob. I am also very busy so I totally get it. There is nothing to apologize for.

 

I have been considering the 8TB and 10TB Ironwolf drives for a while (great price/performance ratio), but the lack of information about them five or six months after launch seemed odd. I am an "experienced amateur" on HDDs, and by sharing these findings I hope to prompt some input from real experts who can correct and/or complete my post. I hope we can reach a better understanding of the SMART values for these drives so we can sleep better at night after preclearing them.

Link to comment
