Preclear.sh results - Questions about your results? Post them here.



 

I hate to ask, but I was hoping someone could take a peek at my initial S.M.A.R.T. preclear results. I'm running unRAID 4.7. I understand that the values are manufacturer-specific, but the Seek_Error_Rate looks pretty high on the first one:

 

Mar 27 22:41:32 Tower preclear_disk-diff[20271]: ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:   1 Raw_Read_Error_Rate     0x000f   120   099   006    Pre-fail  Always       -       237497869
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:   3 Spin_Up_Time            0x0003   100   100   000    Pre-fail  Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:   4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       10
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:   5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       550780
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:   9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       34
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:  10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]:  12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       10
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 190 Airflow_Temperature_Cel 0x0022   073   072   045    Old_age   Always       -       27 (Lifetime Min/Max 25/28)
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 194 Temperature_Celsius     0x0022   027   040   000    Old_age   Always       -       27 (0 19 0 0)
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 195 Hardware_ECC_Recovered  0x001a   051   030   000    Old_age   Always       -       237497869
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       248227634872373
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       3694521517
Mar 27 22:41:32 Tower preclear_disk-diff[20271]: 242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       4254661366

and

Mar 27 18:37:57 Tower preclear_disk-diff[8523]: SMART Attributes Data Structure revision number: 16
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: Vendor Specific SMART Attributes with Thresholds:
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:   1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:   3 Spin_Up_Time            0x0027   253   253   021    Pre-fail  Always       -       1150
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:   4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       11
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:   5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:   7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:   9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       30
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:  10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:  11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]:  12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       10
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       3
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       40
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 194 Temperature_Celsius     0x0022   127   115   000    Old_age   Always       -       23
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: 
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: SMART Error Log Version: 1
Mar 27 18:37:57 Tower preclear_disk-diff[8523]: No Errors Logged

 

 

 

Link to comment

How many times must this be said?  The "raw" values are only meaningful to the manufacturers.

 

One disk seems to initialize most of its values to 100, the other to 200.  The Seek_Error_Rate's initialized value is still the current value and is nowhere near the failure threshold.
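For reference, a quick way to see the normalized values next to their thresholds yourself (a sketch only; substitute your own device for /dev/sdb, and the -d ata switch may not be needed on every controller):

smartctl -d ata -A /dev/sdb    # attribute table: compare the VALUE column against THRESH
smartctl -d ata -H /dev/sdb    # the drive's own overall PASSED/FAILED verdict

An attribute is only considered failing when its normalized VALUE drops to or below its THRESH; the raw numbers in the last column are vendor-specific bookkeeping.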

Link to comment

I'm running unRAID 4.7 and using preclear script v1.9. My unRAID server has 8 x Samsung HD203WI + 7 x Samsung HD204UI (upgraded to the new firmware). I've precleared 3 HDDs (HD203WI) at the same time (only 1 cycle). I'm posting the results here and I'll do the same with the first 3 x HD204UI this week (if the results on the rest of the HDDs are about the same, I'll know the preclear went well). Maybe someone can tell me if the values are normal and whether 1 cycle is enough.

preclear_results.txt

Link to comment

I'm running unRAID 4.7 and using preclear script v1.9. My unRAID server has 8 x Samsung HD203WI + 7 x Samsung HD204UI (upgraded to the new firmware). I've precleared 3 HDDs (HD203WI) at the same time (only 1 cycle). I'm posting the results here and I'll do the same with the first 3 x HD204UI this week (if the results on the rest of the HDDs are about the same, I'll know the preclear went well). Maybe someone can tell me if the values are normal and whether 1 cycle is enough.

The disks all passed the pre-clear.  No sectors were re-allocated.  Nothing looks unusual.

 

1 cycle is far better than none.  Too many disks fail in the initial pre-clear for my comfort.

 

Many prefer to run several more cycles if they do not need to put the disks into service immediately.

 

Joe L.

Link to comment

Thanks Joe. Maybe I'll run a second pre-clear cycle when I've finished the first on all the HDDs. I've not bought the Pro unRAID key yet, and I can only pre-clear 3 HDDs at the same time. How many HDDs is it possible to pre-clear with the Pro key?

You can pre-clear as many disks as you like at a time  (as long as you have ports on the disk controllers to connect them to and enough memory to run all the processes).  It has nothing to do with the unRAID license. 

 

The license just determines how many disks you can assign to the protected array on the unRAID management screen, and whether some of the security and cache-drive features are available.

 

You can always have as many disks as you desire outside of the protected array.
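As an illustration, one common way to run several pre-clears at once is to give each disk its own named screen session (a sketch only; the device names are examples, and the script path is the one used elsewhere in this thread):

screen -S preclear_sdb                      # new named session for the first disk
/boot/scripts/preclear_disk.sh /dev/sdb     # answer the Yes prompt, then detach with Ctrl-a d
screen -S preclear_sdc                      # repeat for the next disk
/boot/scripts/preclear_disk.sh /dev/sdc

Each session keeps running after you detach or drop the telnet connection, so you are limited only by controller ports and memory, as noted above.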

Link to comment

My tower has 2 GB of RAM (on an Asus P5B-VM DO).

The initial speed during the pre-read was 65-75 MB/s; now, writing zeros, it's 35.2 MB/s (67% done, 1.4 TB of 2 TB copied).

CPU Temp: 65ºC & the HDDs between 25-35ºC.

I'll report the syslog :)

 

EDIT: Now the post-read is in progress (7% - 52.5 MB/s - TIME 32:09:35). The CPU temp is 67ºC and the HDDs are between 24-32 ºC.

EDIT 2: I exited the telnet session and now I don't know how to get back into the pre-clear screen ("screen -r" does not work).
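In case it helps with the "screen -r" problem above: plain screen -r only reattaches automatically when there is exactly one detached session, so with several pre-clears running you have to pick one (a sketch; the session names and ID are examples):

screen -ls                  # list all sessions with their IDs (and names, if started with -S)
screen -r 12345             # reattach by ID
screen -r preclear_sdb      # or by name
screen -d -r preclear_sdb   # force-detach a session still marked "Attached" after a dropped connection, then reattach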

Link to comment

Well, after 86 hours the 12 HDDs are Pre-cleared. Here are the results attached in 3 files.

They look good.

 

I think 12 concurrent clear processes is a new record.  It did keep the server plenty busy, so I'm sure it was a good burn-in test of the disks, disk controllers, and motherboard.

 

 

Link to comment

Well, after 86 hours the 12 HDDs are Pre-cleared. Here are the results attached in 3 files.

They look good.

 

I think 12 concurrent clear processes is a new record.  It did keep the server plenty busy, so I'm sure it was a good burn-in test of the disks, disk controllers, and motherboard.

 

 

 

I think the same. I was only worried about not being able to view the pre-clear screens again ("screen -r" didn't work; next time I'll name my screens so I can find them again more easily, as the user secrectagent suggested), but I received an email when each HDD finished, and I could save my results too.

Link to comment

Well, after 86 hours the 12 HDDs are Pre-cleared. Here are the results attached in 3 files.

They look good.

 

I think 12 concurrent clear processes is a new record.  It did keep the server plenty busy, so I'm sure it was a good burn-in test of the disks, disk controllers, and motherboard.

 

 

 

I think the same. I was only worried about not being able to view the pre-clear screens again ("screen -r" didn't work; next time I'll name my screens so I can find them again more easily, as the user secrectagent suggested), but I received an email when each HDD finished, and I could save my results too.

The results are also saved in a preclear_reports directory on your flash drive.
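For example (the file names will vary by drive serial number and date):

ls -l /boot/preclear_reports/                 # one report per drive/cycle
less /boot/preclear_reports/<report_file>     # view a saved report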
Link to comment

I started the preclear on a 2TB drive and went out - by the time I came back, the TOWER system had completely shut down!! There is a UPS attached - so it's not a power supply issue.

Not sure what happened. How can I tell? Are there any logs that I can look at? If so, please guide me.

I am using unRAID 4.7 and Preclear 1.9.
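One thing that may help narrow it down next time: unRAID keeps /var/log in RAM, so the syslog from before an unexpected shutdown is gone once the box reboots. A sketch for capturing it while the pre-clear runs (the destination file name is just an example):

tail -f /var/log/syslog                                    # follow the log live and watch for errors
cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt   # keep a copy on the flash drive, which survives a reboot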

Link to comment

Okay, I'm not new to preclearing, but this has me baffled.  I have a 160GB drive connected to an AOC-SASLP-MV8, and according to ls -l /dev/disk/by-id it is detected as /dev/sdb:

total 0
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTA8UP -> ../../sdd
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTA8UP-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTD4TP -> ../../sdh
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTD4TP-part1 -> ../../sdh1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTLNUP -> ../../sdf
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTLNUP-part1 -> ../../sdf1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAGZ131S -> ../../sdg
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAGZ131S-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAH3SVTS -> ../../sde
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAH3SVTS-part1 -> ../../sde1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK11A8B9J6TV8F -> ../../sdi
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK11A8B9J6TV8F-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-VB0160EAVEQ_9VY9SWE3 -> ../../sdb
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTA8UP -> ../../sdd
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTA8UP-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTD4TP -> ../../sdh
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTD4TP-part1 -> ../../sdh1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTLNUP -> ../../sdf
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTLNUP-part1 -> ../../sdf1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAGZ131S -> ../../sdg
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAGZ131S-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAH3SVTS -> ../../sde
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAH3SVTS-part1 -> ../../sde1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK11A8B9J6TV8F -> ../../sdi
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK11A8B9J6TV8F-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_VB0160EAVEQ_9VY9SWE3 -> ../../sdb

 

However, I can't get past this "Clearing will NOT be performed" error:

Pre-Clear unRAID Disk /dev/sdb
################################################################## 1.9
Device Model:     VB0160EAVEQ
Serial Number:    9VY9SWE3
Firmware Version: HPG0
User Capacity:    160,041,885,696 bytes

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
########################################################################
invoked as  /boot/scripts/preclear_disk.sh -a -c 3 -M 2 /dev/sdb
########################################################################
(-a option elected, partition will start on sector 63)
(it will not be 4k-aligned)
Are you absolutely sure you want to clear this drive?
(Answer Yes to continue. Capital 'Y', lower case 'es'): Yes
Clearing will NOT be performed

 

Using unRAID 4.7 and preclear 1.9.

 

Link to comment

Okay, I'm not new to preclearing, but this has me baffled.  I have a 160GB drive connected to an AOC-SASLP-MV8, and according to ls -l /dev/disk/by-id it is detected as /dev/sdb:

total 0
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTA8UP -> ../../sdd
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTA8UP-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTD4TP -> ../../sdh
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTD4TP-part1 -> ../../sdh1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTLNUP -> ../../sdf
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1170YAHTLNUP-part1 -> ../../sdf1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAGZ131S -> ../../sdg
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAGZ131S-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAH3SVTS -> ../../sde
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK1171YAH3SVTS-part1 -> ../../sde1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK11A8B9J6TV8F -> ../../sdi
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 ata-Hitachi_HDS722020ALA330_JK11A8B9J6TV8F-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 ata-VB0160EAVEQ_9VY9SWE3 -> ../../sdb
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTA8UP -> ../../sdd
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTA8UP-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTD4TP -> ../../sdh
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTD4TP-part1 -> ../../sdh1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTLNUP -> ../../sdf
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1170YAHTLNUP-part1 -> ../../sdf1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAGZ131S -> ../../sdg
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAGZ131S-part1 -> ../../sdg1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAH3SVTS -> ../../sde
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK1171YAH3SVTS-part1 -> ../../sde1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK11A8B9J6TV8F -> ../../sdi
lrwxrwxrwx 1 root root 10 2011-04-06 23:43 scsi-SATA_Hitachi_HDS7220_JK11A8B9J6TV8F-part1 -> ../../sdi1
lrwxrwxrwx 1 root root  9 2011-04-06 23:43 scsi-SATA_VB0160EAVEQ_9VY9SWE3 -> ../../sdb

 

However, I can't get past this "Clearing will NOT be performed" error:

Pre-Clear unRAID Disk /dev/sdb
################################################################## 1.9
Device Model:     VB0160EAVEQ
Serial Number:    9VY9SWE3
Firmware Version: HPG0
User Capacity:    160,041,885,696 bytes

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
########################################################################
invoked as  /boot/scripts/preclear_disk.sh -a -c 3 -M 2 /dev/sdb
########################################################################
(-a option elected, partition will start on sector 63)
(it will not be 4k-aligned)
Are you absolutely sure you want to clear this drive?
(Answer Yes to continue. Capital 'Y', lower case 'es'): Yes
Clearing will NOT be performed

 

Using unRAID 4.7 and preclear 1.9.

 

Even though it looks like you responded with "Yes", is it possible you added a leading or trailing space (or some non-printing character) when you typed "Yes" ???

 

Are you using a non-standard keyboard, or a different language character set?  The code looking for the "Yes" response has been there forever.

 

Joe L.

Link to comment

Even though it looks like you responded with "Yes", is it possible you added a leading or trailing space (or some non-printing character) when you typed "Yes" ???

 

I thought it was something simple like that too, so I tried it several times, making extra sure there was nothing being input other than the "Yes".

 

Are you using a non-standard keyboard, or a different language character set?  The code looking for the "Yes" response has been there forever.

 

I wish it were something as exotic as that, but no, it's a standard keyboard.  I am using screen to do this, just like I have for dozens of drives before with no issue.  I should note this happened in 1.8 first, then I updated to 1.9 to make sure I was current, and it still failed.

Link to comment

So after hours of trying to figure this out, it (of course) came down to a very simple thing.  I was submitting a series of commands as a cut and paste operation:

 

/boot/scripts/preclear_disk.sh -a -c 3 -M 2 /dev/sdb
diff /tmp/smart_start_sdb /tmp/smart_finish_sdb

 

Since preclear waits for user input (anything other than 'Yes' aborts the clearing), the second stacked command diff /tmp/smart_start_sdb /tmp/smart_finish_sdb was consumed as that input, and of course it simply aborted since the answer was not Yes but rather diff /tmp/smart_start_sdb /tmp/smart_finish_sdbYes.  As usual, the simple stuff kills your time, frustrates you during debugging, and then embarrasses you when you figure out the problem.  :-[
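For anyone hitting the same thing, a sketch of the sequence run so the prompt gets a clean answer (the same commands as above, just not pasted as one block):

/boot/scripts/preclear_disk.sh -a -c 3 -M 2 /dev/sdb   # run on its own; type Yes at the prompt
# ...after the cycles finish:
diff /tmp/smart_start_sdb /tmp/smart_finish_sdb        # compare the before/after SMART reports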

Link to comment

Just finished preclearing my first build. I think everything went OK, but I hope someone with more experience will check my log and let me know. The only thing I noticed was some "near_threshold" warnings. Are they anything to worry about?

 

Also, when I do "dmesg|grep SATA|grep link" I get:

 

ata1: SATA link down (SStatus 0 SControl 300)

ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)

 

I left ata1 on the mobo open for a future cache drive, but why is ata2 at 1.5 Gbps?

preclear_results.txt
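On the 1.5 Gbps question: it can simply be an older SATA-I port or a rate-limit jumper on the drive, but it is worth checking whether the kernel downgraded the link after errors (a sketch; the exact messages vary by kernel and controller, and /dev/sdX stands in for whichever disk sits on ata2):

dmesg | grep -i ata2                              # look for lines like "limiting SATA link speed to 1.5 Gbps"
smartctl -d ata -A /dev/sdX | grep -i udma_crc    # a rising UDMA_CRC_Error_Count usually points at the cable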

Link to comment

Just finished preclearing my first build. I think everything went OK, but I hope someone with more experience will check my log and let me know. The only thing I noticed was some "near_threshold" warnings. Are they anything to worry about?

 

Also, when I do "dmesg|grep SATA|grep link" I get:

 

ata1: SATA link down (SStatus 0 SControl 300)

ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)

 

I left ata1 on the mobo open for a future cache drive, but why is ata2 at 1.5 Gbps?

 

Near threshold is just an alert.  I think I print it whenever the current value is within 75 counts of the failure threshold.

For some disks/attributes, the factory-initialized value is only a few counts (sometimes only 1) greater than the associated failure threshold, so the warning will always occur even though nothing at all is wrong.

 

Joe L.
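A minimal sketch of that kind of check, using the 75-count margin described above (the real script's logic may differ):

# near_thresh: flag an attribute whose normalized VALUE sits within 75 counts of its failure THRESH
near_thresh() {
    # $1 = normalized VALUE, $2 = failure THRESH
    if [ $(( $1 - $2 )) -le 75 ]; then echo "near_thresh"; else echo "ok"; fi
}
near_thresh 100 97    # -> near_thresh
near_thresh 114 6     # -> ok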

Link to comment

Here are the results of the -V.  Can you tell me if this looks ok?

 

============================================================================
** Changed attributes in files: /tmp/smart_start_sdi  /tmp/smart_finish_sdi
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   114     117        6            ok          60178126
         Spin_Retry_Count =   100     100       97            near_thresh 0
         End-to-End_Error =   100     100       99            near_thresh 0
  Airflow_Temperature_Cel =    68      66       45            near_thresh 32
      Temperature_Celsius =    32      34        0            ok          32
   Hardware_ECC_Recovered =    51      44        0            ok          60178126
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change.

 

 

Is this drive ok and ready to use now?

 

Thanks!

 

Neil

 

Link to comment
