Preclear.sh results - Questions about your results? Post them here.



Thanks for the reply... but that's not really addressing the problem... I was able to plug the HD in via USB to a Windows machine...

It did see the HD and reported the correct size for it... but without formatting it, I wasn't able to go further or get a SMART report...

I then plugged it back into the unRAID server and rebooted the bare metal machine just to be sure, and it still doesn't see the HD there...

Suggestions?


Thanks for the reply... but that's not really addressing the problem... I was able to plug the HD in via USB to a Windows machine...

It did see the HD and reported the correct size for it...

Suggestions?

 

That tells me the Windows machine can/will see your raw drive; perfect.  Now, put the USB with stock unRAID into that machine, reboot the computer, and hit F8 (or DEL, or ESC, or F12, or whatever that BIOS requires to get to the boot options).  Once you see your boot options, select the USB stick, and it will boot into unRAID.  Then preclear the drive.  Once finished, do a clean shutdown from unRAID, remove the drive and the USB stick, and when you turn that machine back on, it will boot back into Windows, as if you'd never done anything with it.

 

Put the newly precleared drive back into your real server, and unRAID will recognize it's already precleared.  If the unRAID server doesn't recognize the drive, that's an entirely different problem, but at least you'll know the drive is ready to add once the server recognizes it.


Thanks for the reply...

 

I will do that tonight...

 

But what interests me is why, 3 days ago, I plugged the HD in and the system recognized it... It should be noted that I had to play around with where I inserted the HD, because the first time I plugged it in some of the other HDs disappeared... But after I moved the new HD into a different slot the system went back to normal... And three days later the new HDs are missing again...

 

Maybe the SAS controller is acting up?

 

I did go into /dev/disk/by-id and the new disks were not listed there either...
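For reference, this is the listing I mean; entries there are symlinks named by model and serial, so a drive absent from it is invisible to unRAID as well (the grep just hides the per-partition entries):

```shell
# Whole disks appear as model_serial symlinks; ...-part1 etc. are partitions
ls -l /dev/disk/by-id/ | grep -v -- '-part'
```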


If you use your Windows machine to do a preclear it might be good to disconnect any drive that you don't intend to preclear.

 

Definitely agree -- that's by far the best way to GUARANTEE that you don't accidentally pre-clear the wrong drive  :)

... and it happens -- as several postings in this forum have shown over the years.

 


Well, I just precleared the same drive twice in error, and I also accidentally stopped one at step 10 and another drive at step 2, 98% complete.

What do I do? I don't know if it's possible to resume. I was doing this via screen using Ctrl+A, and somehow I hit C instead and it terminated the script. NOT VERY HAPPY right now!

These drives had already been precleared previously; I added them into the mix by error while I was adding in more drives.

What will happen when I add them in unRAID? Will it try to preclear them again?

FFS

EDIT - so I've read a fair bit, and the conclusion is I am stuffed!

So I can't "resume" the pre-clear, and this has pretty much left a sour taste in my mouth with pre-clearing. Damn my stupid fingers and those damn screen control keys; using Ctrl+A with C or N or P to go back and forth basically killed two sessions which I should not have started in the first place, since I had already spent all that time preclearing them :(

But you can, as noted, selectively run parts of it, so how can I get a preclear signature so that unRAID allows me to add them straight into the array with a simple format and I don't have to wait more hours?

Why couldn't it tell me I'd already run preclear on those damn drives - arrgghhhhh, I'm hating this right now. Damn it, it's so hot where I am right now that even the drives are sweating at 42C. No more preclearing again.

Listed my Pro lic in the FS section :'(


Have you tried using the -t option to the pre-clear script?  That will tell you if the drives appear to have a valid pre-clear signature written to them.

Note also that there are parameter options to control which phases of the pre-clear should be run.  If you are sure that the drives are OK and you want to avoid the testing phases, then just running the write phase (using -n) would be sufficient.
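Concretely (sdX here is a placeholder; check the device letter against the serial number before running anything destructive):

```shell
# Read-only: report whether a valid preclear signature is already present
./preclear_disk.sh -t /dev/sdX

# Destructive: skip the pre-read/post-read tests and run only the
# zeroing (write) phase, which ends by writing the signature
./preclear_disk.sh -n /dev/sdX
```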


Have you tried using the -t option to the pre-clear script?  That will tell you if the drives appear to have a valid pre-clear signature written to them.

Note also that there are parameter options to control which phases of the pre-clear should be run.  If you are sure that the drives are OK and you want to avoid the testing phases, then just running the write phase (using -n) would be sufficient.

 

I tried the -t option but they all show up as

 

DISK /dev/sdX IS PRECLEARED with GPT Protective MBR

 

But I have not rebooted, so can I trust it? All 8 drives show the same, even the 2 drives still running, which are almost complete on stage 10

 

Thanks mate, appreciate you trying to calm me down  ;)

 

Here is a snapshot from MyMain; the two in the middle (3TB), /dev/sde & /dev/sdd, are the drives I accidentally stopped with Ctrl+C

 

MymainSnapShot_zps35b77219.jpg


The last step of the write phase is to write the pre-clear signature, so for the signature to be present the drive must have reached and completed that step.  The post-read phase is just a validation that the write was successful; although useful as a confidence check, it can probably safely be omitted if you are reasonably sure the drives are OK.

The only thing puzzling me is why the drives are showing up as pre-cleared if, as you indicated might be the case, you had not reached the write phase.  Are you perhaps doing multiple passes?  If so, the first pass would have written the pre-clear signature, and the remaining ones are simply being used to help stress test the drive.


The last step of the write phase is to write the pre-clear signature, so for the signature to be present the drive must have reached and completed that step.  The post-read phase is just a validation that the write was successful; although useful as a confidence check, it can probably safely be omitted if you are reasonably sure the drives are OK.

The only thing puzzling me is why the drives are showing up as pre-cleared if, as you indicated might be the case, you had not reached the write phase.  Are you perhaps doing multiple passes?  If so, the first pass would have written the pre-clear signature, and the remaining ones are simply being used to help stress test the drive.

 

I have no idea. If the GUI knows where it stopped, it would have been nice to have a continue-where-it-left-off option. And the fact that they all show up as OK is not a good indication, since I have a + sign on some and not on others that were done in the past

Spewing


All,

 

I have a 4tb drive I've pre-cleared:

root@Tower:/tmp# cat preclear_report_sdc
========================================================================1.14
== invoked as: ./preclear_disk.sh -A -c 3 /dev/sdc
== ST4000DM000-1F2168   Z302AB81
== Disk /dev/sdc has been successfully precleared
== with a starting sector of 64 
== Ran 3 cycles
==
== Using :Read block size = 8388608 Bytes
== Last Cycle's Pre Read Time  : 15:01:10 (33 MB/s)
== Last Cycle's Zeroing time   : 17:03:33 (29 MB/s)
== Last Cycle's Post Read Time : 31:09:54 (16 MB/s)
== Last Cycle's Total Time     : 48:14:27
==
== Total Elapsed Time 159:44:48
==
== Disk Start Temperature: 29C
==
== Current Disk Temperature: 35C, 
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdc  /tmp/smart_finish_sdc
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   117     114            6        ok          160005424
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    65      71           45        near_thresh 35
      Temperature_Celsius =    35      29            0        ok          35
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 3.
0 sectors were pending re-allocation after post-read in cycle 1 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 2 of 3.
0 sectors were pending re-allocation after post-read in cycle 2 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 3 of 3.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change. 
============================================================================

 

No errors were recorded during the pre-clear. However, when I plug it into my array, it only gives the option to clear it, not just add it. I tried preclear_disk.sh -t:

 Pre-Clear unRAID Disk /dev/sdc
################################################################## 1.14
Model Family:     Seagate Desktop HDD.15
Device Model:     ST4000DM000-1F2168
Serial Number:    Z302AB81
LU WWN Device Id: 5 000c50 0793c3954
Firmware Version: CC54
User Capacity:    4,000,787,030,016 bytes [4.00 TB]

Disk /dev/sdc: 1801.8 GB, 1801763774464 bytes
255 heads, 63 sectors/track, 219051 cylinders, total 3519069872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              64  3519069871  1759534904    0  Empty
Partition 1 does not end on cylinder boundary.
########################################################################
========================================================================1.14
==
== DISK /dev/sdc IS PRECLEARED with a starting sector of 64
==
============================================================================

 

The thing that stands out to me, though, is that it reports the disk as "Disk /dev/sdc: 1801.8 GB". Is that normal? Do I have to do anything special for large (2TB+) drives?

 

thanks,

Dan H.

...I have a 4tb drive I've pre-cleared:...

The thing that stands out to me though is that it reads the disk as Disk /dev/sdc: 1801.8 GB. Is that normal? Do I have to do anything special for large (2tb+) drives?...

What version of unRAID and what version of preclear?

 

Also, was this disk used previously in another system?

 

From your posting history, it looks like this may be the first 2TB+ drive you have tried to use. Are you sure your hardware supports it?
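For what it's worth, the numbers in your fdisk output fit a 32-bit truncation exactly: a 4TB drive has 7,814,037,168 512-byte sectors, and wrapping that count at 2^32 reproduces both figures you posted. A quick check in bash arithmetic:

```shell
full=7814037168                   # 512-byte sectors on a 4 TB drive
wrap=4294967296                   # 2^32
seen=$(( full % wrap ))           # what a 32-bit sector counter reports
echo "$seen sectors"              # 3519069872, matching the fdisk output
echo "$(( seen * 512 )) bytes"    # 1801763774464 = the 1801.8 GB shown
```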


trurl,

 

Ahh, you're probably right about the system not recognizing the drive size. I ran the preclear on an old Dell XPS laptop.

 

The version of the preclear is 1.14.

 

I'll see if I can get the XPS to recognize 2TB+. My actual unRAID system does recognize 4TB, as my parity drive is 4TB. If I run the preclear and the laptop does not see all 4TB, would that be why unRAID does not see the clear signature on the drive?

 

thanks.

If I run the preclear and the laptop does not see all 4TB, would that be why unRAID does not see the clear signature on the drive?
Yep, because only the first part was cleared. If you actually were able to force unRAID into accepting it as precleared, any stray 1s that happened to be on the later portions that hadn't been cleared would be out of sync with parity. Best case, a correcting parity check would catch them; worst case, a drive fails and those stray 1s cause the rebuilt drive to have corrupt data at those locations.
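As an aside, if you ever want to convince yourself how much of a disk really got zeroed, a spot check is cheap. A sketch using a file of zeros as a stand-in (on a real server you would point the second dd at the device itself, e.g. /dev/sdX, after triple-checking the name; cmp's -n option is GNU-specific):

```shell
# Stand-in for a cleared disk: a small file of zeros. On a real server,
# substitute the actual device (e.g. /dev/sdX) -- and be sure of the name.
dd if=/dev/zero of=/tmp/cleared.img bs=1M count=8 2>/dev/null

# Compare a 4 MiB window against /dev/zero; -n caps the comparison length.
# Silence plus exit status 0 means every byte in the window is zero.
dd if=/tmp/cleared.img bs=1M skip=1 count=4 2>/dev/null \
  | cmp -n $((4*1024*1024)) - /dev/zero && echo "window is all zeros"
```

Repeating the check at a few offsets across the disk gives decent confidence without reading the whole thing.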

trurl,

Ahh, you're probably right about the system not recognizing the drive size. I ran the preclear on an old Dell XPS laptop. ... If I run the preclear and the laptop does not see all 4TB, would that be why unRAID does not see the clear signature on the drive?

Exactly.  The signature is partly based on the drive size.  When seen at its full size on your main system, it would have the wrong preclear signature and the wrong partition type.

trurl,

Ahh, you're probably right about the system not recognizing the drive size. ... The version of the preclear is 1.14.

You didn't answer the question about what unRAID version. The current version of preclear is 1.15. Version 1.15 fixed a problem with running preclear on 64bit unRAID.

Dear all,

ran out of space and decided to re-purpose one of my WD30EZRX desktop drives and put it in the unRAID server.

The pre-clear process went painfully slowly: it took around 47 hrs for a single cycle. Most of my 3TB drives finished the first pre-clear in around 35 hrs.

The hardware/software is in my signature, and all drives are attached to 2x Supermicro SASLP-MV8 controllers (except the parity drive, which is attached to the motherboard's SATA port). The Dell PERC H310 is still free (no disks attached to it).

Please let me know if there is anything strange in my pre-clear logs (I don't see any pending sectors or other weird signs), which I am attaching here.

I have never experienced such a slow pre-clear before, and I am afraid that if it is added to the array it may later be the bottleneck for my parity checks, which usually finish in 9 hrs.
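As a back-of-envelope sanity check (purely illustrative numbers: assume a healthy 3TB drive averages roughly 100 MB/s over a full pass, and one preclear cycle is three full passes: pre-read, zero, post-read):

```shell
size_mb=3000000     # ~3 TB expressed in MB
speed=100           # assumed average MB/s for a healthy drive
pass_hrs=$(( size_mb / speed / 3600 ))
echo "one pass: ~${pass_hrs} hrs; one cycle (3 passes): ~$(( 3 * pass_hrs )) hrs"
```

By that yardstick, a ~35 hr cycle works out to an average around 70 MB/s, while 47 hrs is closer to 50 MB/s: slow for a 3TB drive, though not damning on its own.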

Thanks in advance.

 

P8ojcAU.png

 

Here is my syslog

preclear_finish__WD-WCAWZ1658291_2015-02-03.txt

preclear_rpt__WD-WCAWZ1658291_2015-02-03.txt

preclear_start__WD-WCAWZ1658291_2015-02-03.txt


You didn't answer the question about what unRAID version. The current version of preclear is 1.15. Version 1.15 fixed a problem with running preclear on 64bit unRAID.

 

Sorry, my version of unRAID is 5.0.5.

 

You guys hit the nail on the head. I attached the disk to my unRAID server, and preclear_disk.sh -t shows the full disk size but also that the drive is not precleared. I'm running it now on that server.

 

thanks all.


Just finished a preclear on a new WD 4TB Red. 

 

I think it looks OK, but I was wondering if anyone can explain the change in parameter 7: Seek_Error_Rate.

From what I can tell it's nothing to worry about; it's some sort of logarithmic scale related to over- or undershooting of the drive head when seeking, but then I got a bit confused!

 

Preclear start:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   205   175   021    Pre-fail  Always       -       6750
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       13
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       113
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       13
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       11
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       94
194 Temperature_Celsius     0x0022   123   119   000    Old_age   Always       -       29
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

 

Preclear Finish:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   205   175   021    Pre-fail  Always       -       6750
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       13
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       154
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       13
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       11
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       94
194 Temperature_Celsius     0x0022   122   119   000    Old_age   Always       -       30
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

 

Thanks.


Hi,

 

I regularly preclear 4TB drives (WD Reds/Greens), and this is my first HGST 4TB 7200 RPM Deskstar. (Heretic posted a similar issue using the NAS version of this drive; sorry, I cannot see how to quote it here.)  I too have had unusually slow pre-read times: essentially a 1-cycle preclear for this HGST drive takes as long as a WD 4TB 2-cycle preclear.  I tried it on two different configs but got similar results each time.  I have no real worry about the drive (the post-preclear SMART looks fine) but I add this here in case somebody is curious or interested to see why it is so slow.  (Not a regular poster, so apologies if I make errors; not sure how to reply and quote.)

 

== invoked as: ./preclear_disk.sh -A /dev/sdb
== HGSTHDS724040ALE640  PK2301PCHME5PB
== Disk /dev/sdb has been successfully precleared
== with a starting sector of 1
== Ran 1 cycle
==
== Using :Read block size = 8388608 Bytes
== Last Cycle's Pre Read Time  : 21:31:26 (51 MB/s)
== Last Cycle's Zeroing time   : 8:36:10 (129 MB/s)
== Last Cycle's Post Read Time : 42:20:43 (26 MB/s)
== Last Cycle's Total Time     : 72:29:26
==
== Total Elapsed Time 72:29:26
==
== Disk Start Temperature: 34C
==
== Current Disk Temperature: -->43<--C,

Preclear_Reports_HGST_4TB.txt


Something isn't right with those; your Pre-Read times are significantly slower than they should be.

 

Here are snippets from January for 2 drives I precleared concurrently using the faster preclear script; the first was 1 cycle, then both were run for 2 more cycles.

 

========================================================================1.15b
== invoked as: ./preclear_bjp.sh -f -A /dev/sdl
== HGSTHDN724040ALE640
== Disk /dev/sdl has been successfully precleared
== with a starting sector of 1 
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 12:08:15 (91 MB/s)
== Last Cycle's Zeroing time   : 8:36:35 (129 MB/s)
== Last Cycle's Post Read Time : 12:12:10 (91 MB/s)
== Last Cycle's Total Time     : 32:58:09
==
== Total Elapsed Time 32:58:09
==
== Disk Start Temperature: 36C
==
== Current Disk Temperature: 37C, 

 

========================================================================1.15b
== invoked as: ./preclear_bjp.sh -f -A /dev/sdd
== HGSTHDN724040ALE640
== Disk /dev/sdd has been successfully precleared
== with a starting sector of 1 
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 12:01:58 (92 MB/s)
== Last Cycle's Zeroing time   : 8:30:13 (130 MB/s)
== Last Cycle's Post Read Time : 12:06:48 (91 MB/s)
== Last Cycle's Total Time     : 32:40:07
==
== Total Elapsed Time 32:40:07
==
== Disk Start Temperature: 35C
==
== Current Disk Temperature: 36C,

 

========================================================================1.15b
== invoked as: ./preclear_bjp.sh -f -c 2 -A /dev/sdl
== HGSTHDN724040ALE640
== Disk /dev/sdl has been successfully precleared
== with a starting sector of 1 
== Ran 2 cycles
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 12:07:51 (91 MB/s)
== Last Cycle's Zeroing time   : 8:36:52 (129 MB/s)
== Last Cycle's Post Read Time : 12:12:39 (91 MB/s)
== Last Cycle's Total Time     : 20:50:36
==
== Total Elapsed Time 53:50:05
==
== Disk Start Temperature: 35C
==
== Current Disk Temperature: 36C, 

 

========================================================================1.15b
== invoked as: ./preclear_bjp.sh -f -c 2 -A /dev/sdd
== HGSTHDN724040ALE640
== Disk /dev/sdd has been successfully precleared
== with a starting sector of 1 
== Ran 2 cycles
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 12:01:55 (92 MB/s)
== Last Cycle's Zeroing time   : 8:30:12 (130 MB/s)
== Last Cycle's Post Read Time : 12:06:54 (91 MB/s)
== Last Cycle's Total Time     : 20:38:11
==
== Total Elapsed Time 53:18:20
==
== Disk Start Temperature: 33C
==
== Current Disk Temperature: 35C, 


Something isn't right with those; your Pre-Read times are significantly slower than they should be. [quoted preclear reports trimmed]

 

 

Thanks.  I see we have the same zeroing speed, but the difference arises in the read phases; yours are much faster indeed.

 

What I am surprised at is that you got almost the same post-read speed as pre-read speed.  In my experience, post-read is usually about half the pre-read speed.  But perhaps that's down to the different script; I use the standard version.


I already knew this drive was bad - just how bad is it? Should I just toss it? It took a looong time to preclear, lol:

 

========================================================================1.15
== invoked as: ./preclear_disk.sh /dev/sdh
== WDCWD20EADS-00R6B0   WD-WCAVY1843494
== Disk /dev/sdh has been successfully precleared
== with a starting sector of 63 
== Ran 1 cycle
==
== Using :Read block size = 8388608 Bytes
== Last Cycle's Pre Read Time  : 48:48:17 (11 MB/s)
== Last Cycle's Zeroing time   : 7:46:57 (71 MB/s)
== Last Cycle's Post Read Time : 35:57:51 (15 MB/s)
== Last Cycle's Total Time     : 92:34:42
==
== Total Elapsed Time 92:34:42
==
== Disk Start Temperature: 29C
==
== Current Disk Temperature: 31C, 
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdh  /tmp/smart_finish_sdh
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   181     182           51        ok          1250509
    Reallocated_Sector_Ct =    41      43          140        FAILING_NOW 1265
          Seek_Error_Rate =   100     200            0        ok          0
      Temperature_Celsius =   121     123            0        ok          31
  Reallocated_Event_Count =     1       1            0        near_thresh 1076

*** Failing SMART Attributes in /tmp/smart_finish_sdh *** 
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   041   041   140    Pre-fail  Always   FAILING_NOW 1265

24 sectors were pending re-allocation before the start of the preclear.
24 sectors were pending re-allocation after pre-read in cycle 1 of 1.
16 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
16 sectors are pending re-allocation at the end of the preclear,
    a change of -8 in the number of sectors pending re-allocation.
1256 sectors had been re-allocated before the start of the preclear.
1265 sectors are re-allocated at the end of the preclear,
    a change of 9 in the number of sectors re-allocated.
SMART overall-health status =  FAILED! 
============================================================================

