Re: preclear_disk.sh - a new utility to burn-in and pre-clear disks for quick add



Ok, so I did what you suggested, prostuff1. I had already re-run preclear on the drive I took the screenshot with, so I did it again with the other 3 drives. Two of them also gave those errors, but I just ignored them.

The last one, however, isn't "pre-clearing"... I've already repeated the process 3 times, but it just gets to 11% of the 1st step and then doesn't do anything else; it just stays there... The time increases, but it goes on and on without reading the drive... Any ideas? :/

What version of the pre-clear script are you using?  Early versions (prior to .9.3) suffered at 18% from the symptoms you describe, because of a bug in the shell.  That was fixed a while ago, but I have no idea how old your version might be.

 

Other than that, you could have a bad disk, bad memory, a bad power supply, etc. You could have run out of RAM and had a process terminated.  You could have a deadlock with some other resource on your server.

 

Post a syslog.   Run a smartctl "long" test on the drive.
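For anyone unsure of the commands, here's a rough sketch (not part of the preclear script; /dev/sdX is a placeholder for the actual device, and the verdict check at the end runs against a canned example line, so you can see what a failing drive reports):

```shell
#!/bin/sh
# Start a SMART extended ("long") self-test -- the drive runs it internally,
# so the command returns immediately:
#   smartctl -t long /dev/sdX
# When the recommended polling time has passed (255 minutes for this drive):
#   smartctl -l selftest /dev/sdX   # self-test log
#   smartctl -H /dev/sdX            # overall health assessment

# Interpreting the health line (canned example matching the syslog below):
verdict="SMART overall-health self-assessment test result: FAILED!"
case "$verdict" in
  *PASSED*) echo "drive reports healthy" ;;
  *FAILED*) echo "drive failed its own health check - replace it" ;;
esac
```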


Ok, so I did what you suggested, prostuff1. I had already re-run preclear on the drive I took the screenshot with, so I did it again with the other 3 drives. Two of them also gave those errors, but I just ignored them.

The last one, however, isn't "pre-clearing"... I've already repeated the process 3 times, but it just gets to 11% of the 1st step and then doesn't do anything else; it just stays there... The time increases, but it goes on and on without reading the drive... Any ideas? :/

 

Are the drives that are giving you errors hooked up to a different SATA card? That SATA card could be a suspect.

 

Also, have you run a memtest?

 


I believe I had the latest version because I installed it in January, and I believe no new version has come out since. I re-installed preclear_disk.sh anyway, just in case.

 

I'm running a memtest right now; I'll tell you how it turns out.

Two of the 4 disks I added recently are connected directly to the mobo, and the other two are connected to a Supermicro 8-port PCI-X card. Since all 4 drives gave me those errors at first, I didn't think it could be the SATA card. The second time I ran the preclear script, 3 drives came out "fine" (those errors, which I still don't know the meaning of, are still there, but the disks were precleared). The only disk giving me a headache is the one that won't preclear past 11%.

 

I'll tell you how the memtest went and I'll also post a syslog once I can.


Ok, the memtest has ended; I got the following message:

"*****Pass complete, no errors,  press Esc to exit*****"

 

Edit:

Ok, so I tried changing SATA cables to see if that would work, but it still didn't. I woke up this morning and the process had stopped at 20-something %. The syslog is attached; I hope some of you guys can help me figure out what's going on. The weird thing is, the syslog doesn't seem to be complete, as I started the process at around 1 AM but couldn't find any log entry with that time stamp.

 

Cheers!

 

PS: It seems the forum doesn't accept rar files. Please remove the ".zip" part.

syslog-20100315-084710.rar.zip


Right near the end of the file is this excerpt from the smartctl report:

 

/dev/sdi has failed.  It cannot read its data from the disk platters.

 

Joe L.

 

Mar 15 08:47:09 Tower status[10232]: /dev/sdi: SMART overall-health self-assessment test result: FAILED!

Mar 15 08:47:09 Tower status[10232]: Drive failure expected in less than 24 hours. SAVE ALL DATA.

Mar 15 08:47:09 Tower status[10232]: Failed Attributes:

Mar 15 08:47:09 Tower status[10232]: ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

Mar 15 08:47:09 Tower status[10232]:   1 Raw_Read_Error_Rate     0x000f   001   001   051    Pre-fail  Always   FAILING_NOW 88016

 


A bit further up in the syslog is the full smartctl report for /dev/sdi.  At least you learned about it before you added it to your array and started putting data on it.

 

Mar 15 08:47:07 Tower smartctl[10192]: === START OF INFORMATION SECTION ===

Mar 15 08:47:07 Tower smartctl[10192]: Device Model:     SAMSUNG HD154UI

Mar 15 08:47:07 Tower smartctl[10192]: Serial Number:    S1XWJ1MSA01546

Mar 15 08:47:07 Tower smartctl[10192]: Firmware Version: 1AG01118

Mar 15 08:47:07 Tower smartctl[10192]: User Capacity:    1,500,301,910,016 bytes

Mar 15 08:47:07 Tower smartctl[10192]: Device is:        In smartctl database [for details use: -P show]

Mar 15 08:47:07 Tower smartctl[10192]: ATA Version is:   8

Mar 15 08:47:07 Tower smartctl[10192]: ATA Standard is:  ATA-8-ACS revision 3b

Mar 15 08:47:07 Tower smartctl[10192]: Local Time is:    Mon Mar 15 08:47:07 2010 GMT

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: ==> WARNING: May need -F samsung or -F samsung2 enabled; see manual for details.

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: SMART support is: Available - device has SMART capability.

Mar 15 08:47:07 Tower smartctl[10192]: SMART support is: Enabled

Mar 15 08:47:07 Tower smartctl[10192]: Power mode is:    ACTIVE or IDLE

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: === START OF READ SMART DATA SECTION ===

Mar 15 08:47:07 Tower smartctl[10192]: SMART overall-health self-assessment test result: FAILED!

Mar 15 08:47:07 Tower smartctl[10192]: Drive failure expected in less than 24 hours. SAVE ALL DATA.

Mar 15 08:47:07 Tower smartctl[10192]: See vendor-specific Attribute list for failed Attributes.

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: General SMART Values:

Mar 15 08:47:07 Tower smartctl[10192]: Offline data collection status:  (0x00)        Offline data collection activity

Mar 15 08:47:07 Tower smartctl[10192]:                                         was never started.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Auto Offline Data Collection: Disabled.

Mar 15 08:47:07 Tower smartctl[10192]: Self-test execution status:      (   0)        The previous self-test routine completed

Mar 15 08:47:07 Tower smartctl[10192]:                                         without error or no self-test has ever

Mar 15 08:47:07 Tower smartctl[10192]:                                         been run.

Mar 15 08:47:07 Tower smartctl[10192]: Total time to complete Offline

Mar 15 08:47:07 Tower smartctl[10192]: data collection:                  (19397) seconds.

Mar 15 08:47:07 Tower smartctl[10192]: Offline data collection

Mar 15 08:47:07 Tower smartctl[10192]: capabilities:                          (0x7b) SMART execute Offline immediate.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Auto Offline data collection on/off support.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Suspend Offline collection upon new

Mar 15 08:47:07 Tower smartctl[10192]:                                         command.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Offline surface scan supported.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Self-test supported.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Conveyance Self-test supported.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Selective Self-test supported.

Mar 15 08:47:07 Tower smartctl[10192]: SMART capabilities:            (0x0003)        Saves SMART data before entering

Mar 15 08:47:07 Tower smartctl[10192]:                                         power-saving mode.

Mar 15 08:47:07 Tower smartctl[10192]:                                         Supports SMART auto save timer.

Mar 15 08:47:07 Tower smartctl[10192]: Error logging capability:        (0x01)        Error logging supported.

Mar 15 08:47:07 Tower smartctl[10192]:                                         General Purpose Logging supported.

Mar 15 08:47:07 Tower smartctl[10192]: Short self-test routine

Mar 15 08:47:07 Tower smartctl[10192]: recommended polling time:          (   2) minutes.

Mar 15 08:47:07 Tower smartctl[10192]: Extended self-test routine

Mar 15 08:47:07 Tower smartctl[10192]: recommended polling time:          ( 255) minutes.

Mar 15 08:47:07 Tower smartctl[10192]: Conveyance self-test routine

Mar 15 08:47:07 Tower smartctl[10192]: recommended polling time:          (  34) minutes.

Mar 15 08:47:07 Tower smartctl[10192]: SCT capabilities:                (0x003f)        SCT Status supported.

Mar 15 08:47:07 Tower smartctl[10192]:                                         SCT Feature Control supported.

Mar 15 08:47:07 Tower smartctl[10192]:                                         SCT Data Table supported.

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: SMART Attributes Data Structure revision number: 16

Mar 15 08:47:07 Tower smartctl[10192]: Vendor Specific SMART Attributes with Thresholds:

Mar 15 08:47:07 Tower smartctl[10192]: ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

Mar 15 08:47:07 Tower smartctl[10192]:   1 Raw_Read_Error_Rate     0x000f   001   001   051    Pre-fail  Always   FAILING_NOW 88016

Mar 15 08:47:07 Tower smartctl[10192]:   3 Spin_Up_Time            0x0007   069   069   011    Pre-fail  Always       -       9960

Mar 15 08:47:07 Tower smartctl[10192]:   4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       15

Mar 15 08:47:07 Tower smartctl[10192]:   5 Reallocated_Sector_Ct   0x0033   092   092   010    Pre-fail  Always       -       330

Mar 15 08:47:07 Tower smartctl[10192]:   7 Seek_Error_Rate         0x000f   253   253   051    Pre-fail  Always       -       0

Mar 15 08:47:07 Tower smartctl[10192]:   8 Seek_Time_Performance   0x0025   100   100   015    Pre-fail  Offline      -       0

Mar 15 08:47:07 Tower smartctl[10192]:   9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       182

Mar 15 08:47:07 Tower smartctl[10192]:  10 Spin_Retry_Count        0x0033   100   100   051    Pre-fail  Always       -       0

Mar 15 08:47:07 Tower smartctl[10192]:  11 Calibration_Retry_Count 0x0012   100   100   000    Old_age   Always       -       0

Mar 15 08:47:07 Tower smartctl[10192]:  12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       15

Mar 15 08:47:07 Tower smartctl[10192]:  13 Read_Soft_Error_Rate    0x000e   001   001   000    Old_age   Always       -       82267

Mar 15 08:47:07 Tower smartctl[10192]: 183 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0

Mar 15 08:47:07 Tower smartctl[10192]: 184 Unknown_Attribute       0x0033   100   100   000    Pre-fail  Always       -       0

Mar 15 08:47:07 Tower smartctl[10192]: 187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       95891

Mar 15 08:47:07 Tower smartctl[10192]: 188 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0

Mar 15 08:47:07 Tower smartctl[10192]: 190 Airflow_Temperature_Cel 0x0022   080   077   000    Old_age   Always       -       20 (Lifetime Min/Max 19/20)

Mar 15 08:47:07 Tower smartctl[10192]: 194 Temperature_Celsius     0x0022   080   075   000    Old_age   Always       -       20 (Lifetime Min/Max 19/21)

Mar 15 08:47:07 Tower smartctl[10192]: 195 Hardware_ECC_Recovered  0x001a   100   100   000    Old_age   Always       -       172161941

Mar 15 08:47:07 Tower smartctl[10192]: 196 Reallocated_Event_Count 0x0032   092   092   000    Old_age   Always       -       330

Mar 15 08:47:07 Tower smartctl[10192]: 197 Current_Pending_Sector  0x0012   001   001   000    Old_age   Always       -       3979

Mar 15 08:47:07 Tower smartctl[10192]: 198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0

Mar 15 08:47:07 Tower smartctl[10192]: 199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       0

Mar 15 08:47:07 Tower smartctl[10192]: 200 Multi_Zone_Error_Rate   0x000a   099   099   000    Old_age   Always       -       139

Mar 15 08:47:07 Tower smartctl[10192]: 201 Soft_Read_Error_Rate    0x000a   134   060   000    Old_age   Always       -       7584

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: SMART Error Log Version: 1

Mar 15 08:47:07 Tower smartctl[10192]: ATA Error Count: 368 (device log contains only the most recent five errors)

Mar 15 08:47:07 Tower smartctl[10192]:         CR = Command Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         FR = Features Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         SC = Sector Count Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         SN = Sector Number Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         CL = Cylinder Low Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         CH = Cylinder High Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         DH = Device/Head Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         DC = Device Command Register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         ER = Error register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]:         ST = Status register [HEX]

Mar 15 08:47:07 Tower smartctl[10192]: Powered_Up_Time is measured from power on, and printed as

Mar 15 08:47:07 Tower smartctl[10192]: DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,

Mar 15 08:47:07 Tower smartctl[10192]: SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: Error 368 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)

Mar 15 08:47:07 Tower smartctl[10192]:   When the command that caused the error occurred, the device was active or idle.

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]:   After command completion occurred, registers were:

Mar 15 08:47:07 Tower smartctl[10192]:   ER ST SC SN CL CH DH

Mar 15 08:47:07 Tower smartctl[10192]:   -- -- -- -- -- -- --

Mar 15 08:47:07 Tower smartctl[10192]:   40 51 00 88 e3 5d ef  Error: UNC at LBA = 0x0f5de388 = 257811336

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]:   Commands leading to the command that caused the error were:

Mar 15 08:47:07 Tower smartctl[10192]:   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name

Mar 15 08:47:07 Tower smartctl[10192]:   -- -- -- -- -- -- -- --  ----------------  --------------------

Mar 15 08:47:07 Tower smartctl[10192]:   c8 00 08 88 e3 5d ef 08      10:08:47.560  READ DMA

Mar 15 08:47:07 Tower smartctl[10192]:   ec 00 00 00 00 00 a0 08      10:08:47.540  IDENTIFY DEVICE

Mar 15 08:47:07 Tower smartctl[10192]:   ef 03 46 00 00 00 a0 08      10:08:47.540  SET FEATURES [set transfer mode]

Mar 15 08:47:07 Tower smartctl[10192]:   ec 00 00 00 00 00 a0 08      10:08:47.520  IDENTIFY DEVICE

Mar 15 08:47:07 Tower smartctl[10192]:   00 00 01 01 00 00 a0 00      10:08:47.360  NOP [Abort queued commands]

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: Error 367 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)

Mar 15 08:47:07 Tower smartctl[10192]:   When the command that caused the error occurred, the device was active or idle.

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]:   After command completion occurred, registers were:

Mar 15 08:47:07 Tower smartctl[10192]:   ER ST SC SN CL CH DH

Mar 15 08:47:07 Tower smartctl[10192]:   -- -- -- -- -- -- --

Mar 15 08:47:07 Tower smartctl[10192]:   40 51 00 88 e3 5d ef  Error: UNC at LBA = 0x0f5de388 = 257811336

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]:   Commands leading to the command that caused the error were:

Mar 15 08:47:07 Tower smartctl[10192]:   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name

Mar 15 08:47:07 Tower smartctl[10192]:   -- -- -- -- -- -- -- --  ----------------  --------------------

Mar 15 08:47:07 Tower smartctl[10192]:   c8 00 08 88 e3 5d ef 08      10:08:44.590  READ DMA

Mar 15 08:47:07 Tower smartctl[10192]:   ec 00 00 00 00 00 a0 08      10:08:44.570  IDENTIFY DEVICE

Mar 15 08:47:07 Tower smartctl[10192]:   ef 03 46 00 00 00 a0 08      10:08:44.570  SET FEATURES [set transfer mode]

Mar 15 08:47:07 Tower smartctl[10192]:   ec 00 00 00 00 00 a0 08      10:08:44.550  IDENTIFY DEVICE

Mar 15 08:47:07 Tower smartctl[10192]:   00 00 01 01 00 00 a0 00      10:08:44.390  NOP [Abort queued commands]

Mar 15 08:47:07 Tower smartctl[10192]:

Mar 15 08:47:07 Tower smartctl[10192]: Error 366 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)

Mar 15 08:47:07 Tower smartctl[10192]:   When the command that caused the error occurred, the device was active or idle.


Thanks for all the help, I'll definitely send the drive in for a replacement! I'm glad I found out about it before adding it to the array; that would have been a headache for sure!

 

As for the other 3 drives, I keep getting those S.M.A.R.T. "Airflow_Temperature_Cel" and "Soft_Read_Error_Rate" errors that I was told not to worry about. I'm just curious: what do these errors mean, exactly?

 

Regards!

 

Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ============================================================================
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ==
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Disk /dev/sdg has been successfully precleared
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ==
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Ran 1 preclear-disk cycle
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ==
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Using :Read block size = 8225280 Bytes
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Last Cycle's Pre Read Time  : 5:39:06 (73 MB/s)
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Last Cycle's Zeroing time   : 5:53:20 (70 MB/s)
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Last Cycle's Post Read Time : 10:59:39 (37 MB/s)
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Last Cycle's Total Time     : 22:33:09
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ==
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Total Elapsed Time 22:33:09
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ==
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Disk Start Temperature: 18C
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ==
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: == Current Disk Temperature: 21C, 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ==
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ============================================================================
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: S.M.A.R.T. error count differences detected after pre-clear 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: note, some 'raw' values may change, but not be an indication of a problem
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: 71c71
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: < 190 Airflow_Temperature_Cel 0x0022   082   076   000    Old_age   Always       -       18 (Lifetime Min/Max 18/18)
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ---
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: > 190 Airflow_Temperature_Cel 0x0022   079   076   000    Old_age   Always       -       21 (Lifetime Min/Max 18/22)
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: 78c78
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: < 201 Soft_Read_Error_Rate    0x000a   253   253   000    Old_age   Always       -       0
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ---
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: > 201 Soft_Read_Error_Rate    0x000a   100   100   000    Old_age   Always       -       0 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ============================================================================


As for the other 3 drives, I keep getting those S.M.A.R.T. "Airflow_Temperature_Cel" and "Soft_Read_Error_Rate" errors that I was told not to worry about. I'm just curious: what do these errors mean, exactly?

 

Mar 16 07:30:17 Tower preclear_disk-diff[14200]: S.M.A.R.T. error count differences detected after pre-clear 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: note, some 'raw' values may change, but not be an indication of a problem
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: 71c71
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: < 190 Airflow_Temperature_Cel 0x0022   082   076   000    Old_age   Always   -       18 (Lifetime Min/Max 18/18)
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ---
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: > 190 Airflow_Temperature_Cel 0x0022   079   076   000    Old_age   Always   -       21 (Lifetime Min/Max 18/22)
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: 78c78
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: < 201 Soft_Read_Error_Rate    0x000a   253   253   000    Old_age   Always    -       0
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ---
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: > 201 Soft_Read_Error_Rate    0x000a   100   100   000    Old_age   Always    -       0 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ============================================================================

 

These aren't errors, just information.  I suppose you could classify information as an error if it indicates a problem, but nothing above indicates any problems.

 

The first is the drive temperature, which obviously increases when doing something as intense as a preclear.  Above, it shows as a 'diff' that the temp increased from 18C to 21C (Celsius).  The soft read error rate was initialized to its starting value of 100, analogous to being 100% good.  The value of 253 means the drive is brand new and this value had never been used before.  That it changed from 253 to 100 is simply a 'diff' between the pre and post SMART reports.
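For the curious, the 'diff' being reported is literally a diff of the attribute table captured before and after the clear. A minimal sketch of the idea, using two canned lines copied from the report above instead of live `smartctl -A` output:

```shell
#!/bin/sh
# Sketch of what the preclear report does: capture the SMART attribute table
# before and after the clear, and diff the two.  The two lines below are
# canned copies from the report above; in real use each file would come from
#   smartctl -A /dev/sdX > pre.txt
echo '201 Soft_Read_Error_Rate    0x000a   253   253   000    Old_age   Always       -       0' > pre.txt
echo '201 Soft_Read_Error_Rate    0x000a   100   100   000    Old_age   Always       -       0' > post.txt
diff pre.txt post.txt || true   # diff exits 1 when the tables differ; the
                                # "71c71"-style markers in the report are its output
rm -f pre.txt post.txt
```

A difference only tells you *something* changed; as noted above, you still have to read what changed before deciding whether it matters.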


Mar 16 07:30:17 Tower preclear_disk-diff[14200]: S.M.A.R.T. error count differences detected after pre-clear 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: note, some 'raw' values may change, but not be an indication of a problem
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: 78c78
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: < 201 Soft_Read_Error_Rate    0x000a   253   253   000    Old_age   Always    -       0
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ---
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: > 201 Soft_Read_Error_Rate    0x000a   100   100   000    Old_age   Always    -       0 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ============================================================================

 

These aren't errors, just information.  I suppose you could classify information as an error if it indicates a problem, but nothing above indicates any problems.

 

The soft read error rate was initialized to its starting value of 100, analogous to being 100% good.  The value of 253 means the drive is brand new and this value had never been used before.  That it changed from 253 to 100 is simply a 'diff' between the pre and post SMART reports.

 

People keep asking this same question over and over again. 

If the preclear script could detect this particular situation and skip reporting it, that would save a lot of unnecessary questions.

 


Mar 16 07:30:17 Tower preclear_disk-diff[14200]: S.M.A.R.T. error count differences detected after pre-clear 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: note, some 'raw' values may change, but not be an indication of a problem
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: 78c78
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: < 201 Soft_Read_Error_Rate    0x000a   253   253   000    Old_age   Always    -       0
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ---
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: > 201 Soft_Read_Error_Rate    0x000a   100   100   000    Old_age   Always    -       0 
Mar 16 07:30:17 Tower preclear_disk-diff[14200]: ============================================================================

 

These aren't errors, just information.  I suppose you could classify information as an error if it indicates a problem, but nothing above indicates any problems.

 

The soft read error rate was initialized to its starting value of 100, analogous to being 100% good.  The value of 253 means the drive is brand new and this value had never been used before.  That it changed from 253 to 100 is simply a 'diff' between the pre and post SMART reports.

 

People keep asking this same question over and over again. 

If the preclear script could detect this particular situation and skip reporting it, that would save a lot of unnecessary questions.

 

I intend to completely change the reporting of the pre-clear script.  I just have not gotten very far.  I know a lot more now about what to look for than when I originally wrote the script.

Hey Joe,

 

I searched for info on this, but couldn't find any definitive answer.  Does the Seek_Error_Rate value give any indication of a failing drive?  A friend showed me his preclear SMART report, and it had a value of 146,xxx,xxx pre-read and 149,xxx,xxx post-read.  I originally told him Seek_Error_Rate could be an indication of a bad drive, as I thought I had read that somewhere, but now I can't find it.  Then I later found this post by you:

 

About the only RAW values you can interpret yourself are those for re-allocated sectors, sectors pending re-allocation, and drive temperature.

 

And I found a couple of Seagates in my array that had similarly high values, but all the others had zeros.  So my question is: do we need to worry about Seek_Error_Rate?  Should we RMA drives that have high values for it (especially in the millions)?

 

Thanks,

Shawn

Then I later found this post by you:

 

About the only RAW values you can interpret yourself are those for re-allocated sectors, sectors pending re-allocation, and drive temperature.

 

And I found a couple of Seagates in my array that had similarly high values, but all the others had zeros.  So my question is: do we need to worry about Seek_Error_Rate?  Should we RMA drives that have high values for it (especially in the millions)?

 

Thanks,

Shawn

The "raw" values have meaning only to the manufacturers, if they even show them.  They vary from drive model to drive model, even within the same brand. 

 

All we can do is look at those that appear to be humanly readable.  For example, there are some drives where the "raw" temperature reported is below ambient.  That is impossible, but only if you assume the number has been converted to centigrade.  If it is really "raw" and must be interpreted using a conversion factor, then we cannot even use it to determine if a drive has failed.  Perhaps the number is supposed to be 10 degrees low, to allow a higher top temperature... unfortunately, there is no consistency between models and brands.

 

With that in mind, I'll repeat:

About the only RAW values you can interpret yourself are those for re-allocated sectors, sectors pending re-allocation, and drive temperature.

(and the temperature might not be reported accurately at that)

 

The CURRENT_VALUE, WORST_VALUE, and FAILURE_THRESHOLD columns are probably the only criteria the manufacturer will look at for an RMA.  If the CURRENT or WORST value is less than the threshold, the disk has FAILED (even if it is apparently still working).

 

Most will RMA a drive once they see a trend of sectors continuing to be re-allocated every time a parity check is performed, even if the failure threshold has not yet been reached.
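That pass/fail rule is mechanical enough to check yourself. A sketch, run here against two attribute lines copied from the reports above (in real use you would pipe in the table from `smartctl -A /dev/sdX` instead of the here-document):

```shell
#!/bin/sh
# Apply the rule mechanically: an attribute has FAILED when its current
# VALUE (field 4) or WORST (field 5) has dropped below its THRESH (field 6).
check='$4+0 < $6+0 || $5+0 < $6+0 { print $2 " FAILED (value " $4 ", worst " $5 ", thresh " $6 ")"; next }
       { print $2 " ok" }'
awk "$check" <<'EOF'
  1 Raw_Read_Error_Rate     0x000f   001   001   051    Pre-fail  Always   FAILING_NOW 88016
  5 Reallocated_Sector_Ct   0x0033   092   092   010    Pre-fail  Always       -       330
EOF
```

On the failed Samsung above this prints `Raw_Read_Error_Rate FAILED (value 001, worst 001, thresh 051)`, and `Reallocated_Sector_Ct ok` for the second line, matching the drive's own FAILING_NOW flag.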

 


Thanks.  So basically, we need to be looking at the VALUE, WORST, and THRESH columns and not the RAW values (except for re-allocated sectors, sectors pending re-allocation, and drive temperature)?

Yes
And for the normal (non-raw) values, higher is better?

For normalized values, higher is better, but even then they all have a range of 253 down to 0.  On many drives, for many attributes, the initial "normalized" value from the factory is 253, and as soon as the drive is used for any length of time that value is set to 100.  It then moves upward or downward from there.

 

The "WORST" column is the lowest normalized value encountered.

Each attribute also has a column telling whether it is an old-age type or a pre-failure type.  You can easily get an old-age "failure" and have a drive that seems to be working perfectly fine, but where the wear is starting to show its effects on performance.

 

Even then, it is hard to figure out what the manufacturers are doing.

 

Below are the attributes from one of the two 500GB Hitachi drives in my server.  They are the two drives I originally started it with.  At the time, they were the largest disks available... and over $300 each. Ouch... Today that same $600 can buy between 6 and 8 TB.

 

Of interest is this line showing how long the disk has been powered on (another attribute whose raw value is apparently humanly readable).

The normalized value is 95.  I might assume that it started at 253 and will go to zero, and at zero the drive will be considered "failed".

That 37524 hours (about 4.3 years) then represents roughly 60% of the manufacturer's estimated life of the drive.  I've got an estimated 40% of run-time left.

  9 Power_On_Hours          0x0012  095  095  000    Old_age  Always      -      37524

I should therefore think about replacing it sometime in the next 20,000 hours (2.2 years to go).
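A quick back-of-the-envelope check of that estimate, under the same assumptions stated above (the normalized value started at 253 and the drive is considered done at 0 -- assumptions, not anything the manufacturer publishes):

```shell
#!/bin/sh
# Rough lifetime arithmetic for the Power_On_Hours attribute shown below.
hours=37524   # raw value: total powered-on hours
value=95      # normalized value (assumed scale: 253 new -> 0 failed)
used=$(( (253 - value) * 100 / 253 ))      # percent of estimated life used
left=$(( hours * (100 - used) / used ))    # hours of estimated life remaining
echo "about ${used}% used, roughly ${left} hours (~$(( left / 8766 )) years) left"
```

This prints "about 62% used, roughly 22998 hours (~2 years) left", which lands in the same ballpark as the ~60% / ~20,000-hour figure above; integer shell arithmetic rounds everything down.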

 

You can also see that Hitachi does not even show RAW numbers for Raw_Read_Error_Rate, etc.

[pre]

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  1 Raw_Read_Error_Rate    0x000b  100  100  016    Pre-fail  Always      -      0

  2 Throughput_Performance  0x0005  100  100  050    Pre-fail  Offline      -      0

  3 Spin_Up_Time            0x0007  107  107  024    Pre-fail  Always      -      671 (Average 626)

  4 Start_Stop_Count        0x0012  100  100  000    Old_age  Always      -      2687

  5 Reallocated_Sector_Ct  0x0033  100  100  005    Pre-fail  Always      -      0

  7 Seek_Error_Rate        0x000b  100  100  067    Pre-fail  Always      -      0

  8 Seek_Time_Performance  0x0005  100  100  020    Pre-fail  Offline      -      0

  9 Power_On_Hours          0x0012  095  095  000    Old_age  Always      -      37524

10 Spin_Retry_Count        0x0013  100  100  060    Pre-fail  Always      -      0

12 Power_Cycle_Count      0x0032  100  100  000    Old_age  Always      -      241

192 Power-Off_Retract_Count 0x0032  097  097  050    Old_age  Always      -      3635

193 Load_Cycle_Count        0x0012  097  097  050    Old_age  Always      -      3635

194 Temperature_Celsius    0x0002  157  157  000    Old_age  Always      -      35 (Lifetime Min/Max 16/42)

196 Reallocated_Event_Count 0x0032  100  100  000    Old_age  Always      -      0

197 Current_Pending_Sector  0x0022  100  100  000    Old_age  Always      -      0

198 Offline_Uncorrectable  0x0008  100  100  000    Old_age  Offline      -      0

199 UDMA_CRC_Error_Count    0x000a  200  200  000    Old_age  Always      -      0

[/pre]


Thanks,  So basically, we need to be looking at the VALUE WORST THRESH values and not the RAW values (except for re-allocated sectors, sectors pending re-allocation, and drive temperature)?  And for the normal values (non raw) higher is better?

 

Absolutely!  And I completely agree with Joe's remarks.  Let's not overthink these numbers.  You have probably lived with drives with worse numbers for most of your life, but it never bothered you because DOS and Windows and Macs never revealed these internal drive numbers.  We should never be alarmed by new information until we know enough about it to understand it in context: what is good, what is bad, and what is essentially useless (to us).  Would you cross a bridge if you could see, live, all of the current stress levels and rates of deterioration for each pillar, support, and cable?

 

Soft read and seek errors are completely normal and expected. They may often be the way a drive adjusts its parameters for thermal expansion and other factors. (Disclaimer: I don't know how it is actually done, so this is just speculation for illustration.) If a platter gets hotter and expands a little, the outer tracks will be positioned farther out. And since a drive is often cooled better on one side than the other, it's probably quite common for the topmost platter to be at a slightly different temperature than the bottom-most one, so the drive can't rely on a single temperature sensor to determine the true thermal expansion of any platter. A soft read or seek error would then be the first indicator that the current coefficient is incorrect, and I would assume there would be a few intentional soft seek errors as the drive determines the new true center of a track, by intentionally seeking slightly short of and past the track to statistically find the new center, then adjusting its coefficient for that platter.

 

Some manufacturers do not even report some of these RAW numbers; they just leave them at zero. Why would they want their tech support people bothered by users questioning these often very large and scary numbers? I believe only Seagate reports RAW values for Raw_Read_Error_Rate.

 

As Joe said, you generally only want to look at the VALUE, WORST, and THRESHOLD columns, because these are scaled values determined by the testing labs at the manufacturer. They can be assumed to be what the manufacturer believes is the current state of the drive for that attribute. They typically start at 100 or 200 and drop toward zero with wear or errors, so you can generally think of them as percentage points, starting at 100% good.

A starting value of 200 should be considered the same as 100, just with twice the accuracy, twice the points to drop. Since each value is stored in a single byte (0 to 255), someone decided the unused values above 100 were being wasted, so they doubled the 0-to-100 scale to 0-to-200, but it still represents the same thing. Unfortunately, you will see in some drives that both scales are used, so a 98 is a very good value on a 0-to-100 scale but pretty low on a 0-to-200 scale, and you therefore need to recognize exactly which scale each attribute is using.

And when drives are brand new, or a particular attribute has never been used yet, the value is often 253, meaning "unused yet". Once the attribute is used, it is initialized to its true starting value, almost always 100 or 200. And since the WORST value has no meaning for non-critical attributes, it may often remain at 253.
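To make the two scales concrete, here is a tiny illustrative shell function; the function name and output format are invented for this post, not anything preclear_disk.sh does:

```shell
# Illustrative only: read a normalized SMART value against its starting
# scale. 253 conventionally means the attribute has not been used yet;
# otherwise express the value as a fraction of its 100- or 200-point scale.
interpret() {  # interpret NORMALIZED_VALUE SCALE
    if [ "$1" -eq 253 ]; then
        echo "unused yet"
    else
        echo "$(( $1 * 100 / $2 ))%"
    fi
}
interpret 98 100    # prints 98%
interpret 98 200    # prints 49%
```

The same 98 that is near-perfect on a 0-to-100 attribute represents less than half the scale on a 0-to-200 attribute, which is exactly why you have to know which scale an attribute uses before judging it.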

 

Which brings us to the fact that some attributes are considered critical, and others are not. A drive is only considered FAILED if the WORST value of a *critical* attribute drops below that attribute's THRESHOLD. On the SMART report, you can tell which attributes are critical: they are flagged as Pre-fail, not as Old_age.

While it is the critical attributes we are most interested in, some of the non-critical ones do carry useful information, such as the temperature attributes, Power_On_Hours, Current_Pending_Sector, and Offline_Uncorrectable. Current_Pending_Sector should be thought of as the current number of suspicious sectors that are pending further testing. Once a sector has been tested and correct data has been written to it, it will either be remapped if physically bad, or returned to full service if it was only a soft error, perhaps data corrupted by an electrical glitch such as losing power while the drive is being written to.
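That FAILED condition can be sketched in a few lines of shell; this is not part of preclear_disk.sh, just an illustration (the function name and sample lines are made up, and the column positions match smartctl's standard attribute table):

```shell
# Sketch: flag any Pre-fail attribute whose normalized WORST value has
# dropped to or below its failure THRESH, the condition under which SMART
# declares a drive FAILED. In real use you would pipe in the output of
# `smartctl -d ata -A /dev/sdb`; here two made-up lines stand in for it.
# Columns: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
check_prefail() {
    awk '$1 ~ /^[0-9]+$/ && $7 == "Pre-fail" && ($5 + 0) <= ($6 + 0) {
        printf "FAILING: %s (WORST %s <= THRESH %s)\n", $2, $5, $6
    }'
}
printf '%s\n%s\n' \
  '  5 Reallocated_Sector_Ct 0x0033 004 004 005 Pre-fail Always - 1576' \
  '  9 Power_On_Hours        0x0012 095 095 000 Old_age  Always - 37524' \
  | check_prefail
# prints: FAILING: Reallocated_Sector_Ct (WORST 004 <= THRESH 005)
```

The healthy Old_age line produces no output at all, which matches the point above: non-critical attributes never trip the FAILED condition, no matter how low they go.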

Maybe put a link to these explanations on the first page of this thread(?). I have pointed a lot of people to this script, and almost all of them had no idea what to make of the resulting SMART "alerts" at the end of the script.

Good idea.  I added a link to the first post.

Hi,

I am currently trying to preclear a drive I formerly used in Windows, but I get errors:

 

Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: SMART Attributes Data Structure revision number: 16
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: Vendor Specific SMART Attributes with Thresholds:
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 1 Raw_Read_Error_Rate 0x000f 200 199 051 Pre-fail Always - 0
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 3 Spin_Up_Time 0x0003 182 178 021 Pre-fail Always - 7866
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 874
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 7 Seek_Error_Rate 0x000e 200 200 051 Old_age Always - 0
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 9 Power_On_Hours 0x0032 090 090 000 Old_age Always - 7416
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 10 Spin_Retry_Count 0x0012 100 100 051 Old_age Always - 0
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 11 Calibration_Retry_Count 0x0012 100 253 051 Old_age Always - 0
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 90
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 49
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 193 Load_Cycle_Count 0x0032 188 188 000 Old_age Always - 36071
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 1
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline - 1
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 7708
Mar 24 00:34:14 XMS-GMI-03 preclear_disk-start[4187]: 200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 0

 

 

and lots of things like this:

 

Mar 24 00:34:39 XMS-GMI-03 kernel: 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
Mar 24 00:34:39 XMS-GMI-03 kernel: 00 00 52 0d
Mar 24 00:34:39 XMS-GMI-03 kernel: sd 5:0:0:0: [sdb] ASC=0x11 ASCQ=0x4
Mar 24 00:34:39 XMS-GMI-03 kernel: sd 5:0:0:0: [sdb] CDB: cdb[0]=0x28: 28 00 00 00 52 00 00 02 00 00
Mar 24 00:34:39 XMS-GMI-03 kernel: end_request: I/O error, dev sdb, sector 21005
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2625
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2626
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2627
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2628
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2629
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2630
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2631
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2632
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2633
Mar 24 00:34:39 XMS-GMI-03 kernel: Buffer I/O error on device sdb, logical block 2634
Mar 24 00:34:39 XMS-GMI-03 kernel: ata5: EH complete
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x0
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5.00: irq_stat 0x40000008
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5.00: failed command: READ FPDMA QUEUED
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5.00: cmd 60/08:18:08:52:00/00:00:00:00:00/40 tag 3 ncq 4096 in
Mar 24 00:34:43 XMS-GMI-03 kernel: res 41/40:00:0d:52:00/6d:00:00:00:00/40 Emask 0x409 (media error)
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5.00: status: { DRDY ERR }
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5.00: error: { UNC }
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5.00: configured for UDMA/133
Mar 24 00:34:43 XMS-GMI-03 kernel: ata5: EH complete
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x0
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5.00: irq_stat 0x40000008
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5.00: failed command: READ FPDMA QUEUED
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5.00: cmd 60/08:00:08:52:00/00:00:00:00:00/40 tag 0 ncq 4096 in
Mar 24 00:34:47 XMS-GMI-03 kernel: res 41/40:00:0d:52:00/6d:00:00:00:00/40 Emask 0x409 (media error)
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5.00: status: { DRDY ERR }
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5.00: error: { UNC }
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5.00: configured for UDMA/133
Mar 24 00:34:47 XMS-GMI-03 kernel: ata5: EH complete
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x0
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5.00: irq_stat 0x40000008
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5.00: failed command: READ FPDMA QUEUED
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5.00: cmd 60/08:18:08:52:00/00:00:00:00:00/40 tag 3 ncq 4096 in
Mar 24 00:34:51 XMS-GMI-03 kernel: res 41/40:00:0d:52:00/6d:00:00:00:00/40 Emask 0x409 (media error)
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5.00: status: { DRDY ERR }
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5.00: error: { UNC }
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5.00: configured for UDMA/133
Mar 24 00:34:51 XMS-GMI-03 kernel: ata5: EH complete

 

 

There is also a pending re-allocation shown for 1 sector. The UDMA errors are probably from the past, when either the cable was bad or the power supply weak; they should be no problem for further use, right?

 

Does it make sense to "factoryformat" the drive and give it another try or should the drive not be used anymore?

 

Can I send the drive back to the manufacturer and get a replacement, or do those errors not qualify for that?

 

Thanks, Guzzi


Let the pre-clear process finish. So far it has shown 1 sector pending re-allocation from before you started, but otherwise the drive looks good. You've seen several other media errors while it reads the disk; those will show up as additional unreadable sectors and will be re-allocated.

 

The "errors" you are reporting are simply the SMART attributes and their current values. None are marked as FAILING_NOW, and no single attribute has a current value even close to its failure threshold.

 

Wait until the pre-clear finishes and look at the SMART report at the end.

 

I don't think a "factory format" command is available on current drives. Regardless, even if one were available, it wouldn't do much for you unless it read and wrote every sector.

 

 


Hi Joe L.,

 

thanks for the feedback. Preclear has now finished, and these are the final SMART differences reported. I have also attached the syslog, because there are SMART errors reported and the post would become too long (there are 2 hard drives in the syslog, because I was running 2 preclears at the same time: one is a brand-new disk (sda), the other is an older one (sdb) I want to reuse, where I get those problems).

 

============================================================================
==
== Disk /dev/sdb has been successfully precleared
==
============================================================================
S.M.A.R.T. error count differences detected after pre-clear
note, some 'raw' values may change, but not be an indication of a problem
58c58
<   7 Seek_Error_Rate         0x000e   200   200   051    Old_age   Always       -       0
---
>   7 Seek_Error_Rate         0x000e   100   253   051    Old_age   Always       -       0
65c65
< 197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       1
---
> 197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       0
71c71
< ATA Error Count: 156 (device log contains only the most recent five errors)
---
> ATA Error Count: 174 (device log contains only the most recent five errors)
86c86
< Error 156 occurred at disk power-on lifetime: 7415 hours (308 days + 23 hours)
---
> Error 174 occurred at disk power-on lifetime: 7416 hours (309 days + 0 hours)
97,101c97,101

 

 

So it seems the re-allocation was done, but I am worried about all those ATA errors in the syslog. I don't see any of those with "healthy" hard disks.

 

What would you recommend? Run another preclear and see if it's better now?

 

Thanks for your help,

 

Guzzi

syslog-2010-03-24.txt


Is there any way to do just the post-read and SMART comparison? I have the unusual situation where the post-read step was running for all three drives, but my Vista machine (with the telnet sessions) rebooted due to a Windows update.

 

In /tmp there are mdresp, read_speedsda (b, c), smart_start2268 (2989, 4044), and zerosda (b, c). I was able to do my own SMART scan and compare the results (the only real differences are the ECC count, which I've read is always huge for Samsung drives, and the temperatures, which stayed at 24-25 throughout).
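For reference, the kind of end-of-run comparison the script performs can be approximated by hand from files like the ones listed above. This is only a sketch: the filenames are hypothetical (the PID suffix and device will differ on your system), and it merely diffs reports; it cannot confirm that the post-read itself completed.

```shell
# Sketch: diff the SMART report preclear saved at the start of the run
# against a fresh report taken now. The commented commands must be run
# on the unRAID server itself; filenames/devices are examples only.
compare_smart() {  # compare_smart START_REPORT CURRENT_REPORT
    diff "$1" "$2" && echo "no SMART differences"
}
# smartctl -d ata -a /dev/sda > /tmp/smart_now_sda
# compare_smart /tmp/smart_start2268 /tmp/smart_now_sda
```

Identical reports print "no SMART differences"; otherwise you get a standard diff of the changed attribute lines, much like the block the script prints at the end.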

 

There's a chance it completed before the reboot, but then there should be smart_end files, right? I'm not sure if I should run the preclear scripts again or just assume the drives are fine. I just hate to wait 20 hours again.

 

Thanks!

 

 

 


my vista machine (with the telnet sessions) rebooted due to a windows update.

 

That was funny!  ;D  (Sorry, ShawnFumo!)

 

Earlier in this thread Joe L. made a very good suggestion to run the preclear script in a "screen" session.

That way you can detach from it, and later reattach to it from anywhere.

http://lime-technology.com/forum/index.php?topic=2817.msg24827#msg24827

Screen is the way to go for any long-running command.
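A minimal screen workflow for a preclear run might look like this (the session name and device are arbitrary examples, not anything the script requires):

```shell
# Run preclear inside a detachable screen session so a dropped telnet
# connection (or a rebooting Windows box) doesn't kill the run.
screen -S preclear            # start a named session on the server
# inside the session:  preclear_disk.sh /dev/sdX
# detach with Ctrl-A then D; preclear keeps running on the server
screen -ls                    # list running sessions if you forget the name
screen -r preclear            # reattach later, from any telnet session
```

Everything runs on the server inside screen, so the Windows box is free to reboot whenever it likes.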

 

And, don't assume that the disks were precleared!  That would be asking for trouble.  Just do it over again.

 

