
Preclear.sh results - Questions about your results? Post them here.


Recommended Posts

I'm in the process of pre-clearing several new 4TB Seagate NAS drives, and have the following results from the first 2 drives (the two sets of results are nearly identical, so I've only listed one, as it shows what I'd like to know).

 

I'd appreciate some feedback on the results.    My thoughts/questions are as follows:

 

(1)  The Raw Read Error Rate actually shows improvement (from 100 to 110), so I assume it's fine, and I should simply ignore the raw value.    Is that correct?

Correct

(2)  Both the Spin Retry Count and End-to-End Error values are the same (100 before, 100 after), but the status shows "Near Threshold".  Any comment on these?  ... or are they fine and "ignorable" ??

Anything with a current normalized value within 25 of its affiliated failure threshold is "near threshold".    For some attributes, the manufacturer sets the failure threshold within a count or two of the factory initial value.
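For anyone who wants to apply that "within 25" rule themselves, here's a rough shell sketch run against a sample table in the smartctl -A column layout. The rows and values here are made up for illustration; against a real drive you'd pipe in `smartctl -A /dev/sdX` output (minus the header lines) instead.

```shell
# Hedged sketch: flag any attribute whose normalized VALUE is within 25 of
# its failure THRESH. Columns follow the smartctl -A layout:
# ID# NAME FLAG VALUE WORST THRESH ... so VALUE is field 4, THRESH field 6.
smart_table='  1 Raw_Read_Error_Rate     0x002f   110   099   006
 10 Spin_Retry_Count        0x0032   100   100   097
194 Temperature_Celsius     0x0022   120   118   000'

echo "$smart_table" | awk '$4 - $6 <= 25 {
  printf "%s is near threshold (value %d, thresh %d)\n", $2, $4, $6
}'
```

With this sample only Spin_Retry_Count gets flagged (100 vs. a threshold of 97), which matches Joe's point: the raw count can be zero and the attribute can still sit "near threshold" simply because the manufacturer set the threshold right under the factory value.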

(4)  The temps seem fine.  Not sure why the airflow is shown as "Near Threshold", but I don't see anything to worry about -- agree?

Same as above. 
Link to comment

Thanks Joe => Any comment r.e. #3 (the high fly writes) ??

 

Is this something to simply ignore, regardless of whether it's "near threshold" or not?      The two drives that have finished so far weren't near the threshold, but with a "within 25" standard they came fairly close ... just a few more "high fly writes" and it would have said "near threshold".    From the comments I've seen r.e. this parameter, it's not clear what it actually means ... but I'm certainly interested in your opinion on it.

 

Link to comment

I just ran 3 cycles on a Seagate Barracuda 2TB drive (first pre_clear that I've run) - it took 43.42 hours to run 3 cycles. I read that as long as the report shows "No SMART attributes are FAILING_NOW", the drive is fine for use. Can someone confirm that's the case? I did notice that by the end of the 3 cycles my drive was running a little hot (46C), but I'm hoping that's just because I'm running without a case right now, and once I have a proper case with fans (already ordered) it'll be better. Here's my preclear report:

 

rpt.txt

Link to comment

I just ran 3 cycles on a Seagate Barracuda 2TB drive (first pre_clear that I've run) - it took 43.42 hours to run 3 cycles. I read that as long as the report shows "No SMART attributes are FAILING_NOW", the drive is fine for use. Can someone confirm that's the case? I did notice that by the end of the 3 cycles my drive was running a little hot (46C), but I'm hoping that's just because I'm running without a case right now, and once I have a proper case with fans (already ordered) it'll be better. Here's my preclear report:

 

Looks very good => nothing to be concerned about in the SMART data, and no re-allocated sectors.    Exactly what you want to see  :)

Link to comment

Just completed 3 cycles on WD 2TB drive.

 

From what I can tell, things look good, correct?  The report shows the following:

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 3.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 3.

0 sectors were pending re-allocation after post-read in cycle 1 of 3.

0 sectors were pending re-allocation after zero of disk in cycle 2 of 3.

0 sectors were pending re-allocation after post-read in cycle 2 of 3.

0 sectors were pending re-allocation after zero of disk in cycle 3 of 3.

0 sectors are pending re-allocation at the end of the preclear,

    the number of sectors pending re-allocation did not change.

0 sectors had been re-allocated before the start of the preclear.

0 sectors are re-allocated at the end of the preclear,

    the number of sectors re-allocated did not change.

preclear_rpt__WD-WCAZAH809391_2013-07-30.txt

Link to comment

Just completed 3 cycles on WD 2TB drive.

From what I can tell, things look good, correct?  The report shows the following:

No SMART attributes are FAILING_NOW ... 0 sectors pending re-allocation and 0 sectors re-allocated at every stage of the preclear; neither count changed.

 

You should look at the final SMART report to confirm there were no bad values in both the start and ending reports ... but yes, the overall report looks good => no values changed (except temperature), and all zeroes on the reallocated sector counts.
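One quick way to do that start-vs-end comparison is to diff the two attribute tables from the reports; anything that changed (ideally just temperature and power-on hours) stands out immediately. A rough sketch with made-up sample files and values, just for illustration:

```shell
# Write two sample attribute tables, then diff them; only the temperature
# line differs here, which is the harmless kind of change to see.
cat > /tmp/smart_start.txt <<'EOF'
  5 Reallocated_Sector_Ct   200 200 140 0
194 Temperature_Celsius     120 120 000 28
197 Current_Pending_Sector  200 200 000 0
EOF
cat > /tmp/smart_end.txt <<'EOF'
  5 Reallocated_Sector_Ct   200 200 140 0
194 Temperature_Celsius     120 118 000 34
197 Current_Pending_Sector  200 200 000 0
EOF
# diff exits non-zero when files differ, so don't let that abort a script
diff /tmp/smart_start.txt /tmp/smart_end.txt || true
```

With a real preclear report you'd paste the starting and ending SMART sections into the two files; a changed Reallocated_Sector_Ct or Current_Pending_Sector line is the thing to look out for.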

 

Link to comment

Thanks Gary.  The final looks good too from what I can tell.  I'm just not an expert on reviewing it.

 

This 2TB drive will be replacing an existing failing 1TB drive in the array.  My Parity and Cache drives are both already 2TB, as I work to slowly move all drives up to 2TB in size.

SMART Attributes Data Structure revision number: 16

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   165   165   021    Pre-fail  Always       -       6716
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       8
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       60
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       8
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       7
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       11
194 Temperature_Celsius     0x0022   120   118   000    Old_age   Always       -       30
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1

No Errors Logged

preclear_finish__WD-WCAZAH809391_2013-07-30.txt

Link to comment

Hey Guys! :)

 

I just ran a preclear on 6 of my disks on my new unRAID server. Some of the disks did not preclear successfully.

I bought two new drives, which precleared successfully. The other four were Samsung SpinPoint F4EG 2TB drives; all of these failed the preclear. I used them with an ONNTO DataTale storage box which used RAID 5. I don't know if this made some weird adjustments/settings etc. to the disks, or if they are just defective (I don't think they are defective, since they worked in the storage box).

 

Here is the system log: https://dl.dropboxusercontent.com/u/14849131/systemlog.txt

I posted the whole thing because I'm not sure what to look for.

 

Any help/advice/info to resolve this would be greatly appreciated!

 

- niv

 

 

Link to comment

Thx Gary!!

 

I just finished preclearing another 2TB disk and the results look the same! So 2 data disks down - now just waiting for the 3TB parity drive to arrive. Had a quick question though:

 

On my second preclear I forgot to add the -A switch when executing the command. So now I have my first disk precleared with a starting sector of 64 and the second one with a starting sector of 63. Any issues here?? Do I need to run another preclear on the 2nd disk with the -A option?

 

 


Link to comment

On my second preclear I forgot to add the -A switch when executing the command. So now I have my first disk precleared with a starting sector of 64 and the second one with a starting sector of 63. Any issues here?? Do I need to run another preclear on the 2nd disk with the -A option?

 

I thought that the latest version of the pre-clear script, when run on a system with UnRAID v5, would automatically use sector 64 => but perhaps that's not the case.    So I'd change the second disk to start on sector 64.  You can make this change without a long pre-clear run by running preclear with the -C 64 option ... i.e. if the disk is sdb, the command would be:

 

preclear_disk.sh /dev/sdb -C 64

 

 

Link to comment

Aargh!! :)

 

So - before waiting for a reply I assigned the disk to the array, and since I was impatient and wanted to play around with disk / user shares etc. to see how everything behaves, that disk has some data on it now. So I'm guessing the steps for me would be:

 

- Stop array

- Unassign disk

- Run preclear with -C 64 (I understand the data on the disk will be lost, but will it be a quick preclear as you suggested, OR will it take as long as the original preclear?)

- Assign disk back to array

- Restart the array

 

On my second preclear I forgot to add the -A switch when executing the command. So now I have my first disk precleared with a starting sector of 64 and the second one with a starting sector of 63. Any issues here?? Do I need to run another preclear on the 2nd disk with the -A option?

 

I thought that the latest version of the pre-clear script would, when run on a system with UnRAID v5, automatically always use sector 64 => but perhaps that's not the case.    So I'd change the second disk to start on sector 64.  You can make this change without a long pre-clear run by running preclear with the -C 64 option.  ... i.e. if the disk is sdb the command would be:

 

preclear_disk.sh /dev/sdb -C 64

Link to comment

No, that won't work anymore, since the disk is no longer a cleared disk.  You'll need to run an actual preclear cycle on it.    Since you're reasonably confident in the disk, you can skip the pre-read and speed things up a bit.    Just use the -W option when you invoke the preclear.    [I believe you can actually skip both the pre-read and post-read with a -N switch, but I've not tried that ... and it's a good idea to thoroughly check the disk after clearing it anyway.]

 

Link to comment

On my second preclear I forgot to add the -A switch when executing the command. So now I have my first disk precleared with a starting sector of 64 and the second one with a starting sector of 63. Any issues here?? Do I need to run another preclear on the 2nd disk with the -A option?

 

I thought that the latest version of the pre-clear script would, when run on a system with UnRAID v5, automatically always use sector 64 => but perhaps that's not the case. 

Incorrect; if no option is specified, the unRAID configuration preference setting is used.

Look under

Settings->Disk Settings->Default-partition-format

to set your default preference.

 

One more thing...

Unless your disk is a WD "EARS" drive with the firmware that performs poorly when partitioned to start on sector 63, odds are you would not notice any difference in performance at all, regardless of where the partition starts.  (And even that drive would work just fine with the partition starting on sector 63, unless you were really anal about performance.)    You likely could have left things as they were.
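For what it's worth, the alignment rule itself is easy to check: with 512-byte sectors, a partition is 4K-aligned exactly when its starting sector is divisible by 8 (8 × 512 = 4096 bytes), which is why 64 qualifies and the old DOS default of 63 doesn't. A tiny shell illustration:

```shell
# A partition start (given in 512-byte sectors) is 4K-aligned iff it is
# a multiple of 8, since 8 * 512 = 4096 bytes.
check_alignment() {
  if [ $(( $1 % 8 )) -eq 0 ]; then
    echo "sector $1: 4K-aligned"
  else
    echo "sector $1: NOT 4K-aligned"
  fi
}
check_alignment 63   # legacy DOS-style start
check_alignment 64   # "-A" style start
```

To see where an existing partition actually starts, something like `fdisk -lu /dev/sdX` (sector units) will show the start column you'd feed into a check like this.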

Link to comment

I thought that the latest version of the pre-clear script would, when run on a system with UnRAID v5, automatically always use sector 64 => but perhaps that's not the case. 

Incorrect, if no option is specified, then the unRAID configuration preference setting is used.

Look under

Settings->Disk Settings->Default-partition-format

to set your default preference.

 

Thanks for the details Joe.    But just for clarification, isn't it true that the default setting for v5 is 4K aligned?    ... in which case I'd think you don't need the -A switch (with v5)  UNLESS you've changed that setting.  Correct?

 

Link to comment


Thanks for the details Joe.    But just for clarification, isn't it true that the default setting for v5 is 4K aligned?    ... in which case I'd think you don't need the -A switch (with v5)  UNLESS you've changed that setting.  Correct?

I do not know the default...
Link to comment

Actually I just took a look at a basic, Plus, and Pro system and all three default to 4K aligned (all are v5RC16c).

 

By the way, I noticed something else vis-à-vis pre-clear's behavior with v5 ==> if the array is stopped, a preclear_disk.sh -l  will list ALL disks, including those that are assigned to the array.    I'm fairly sure it did NOT do that with v4.7.    Apparently something has changed regarding what your script "looks at" to determine which disks are okay to pre-clear.    If the array has been started, it correctly lists only those that are not part of the array.

 

Link to comment

Hi, I'm about to preclear a couple of new Seagate Barracuda 7200 3TB drives (ST3000DM001).

 

I want to know if it's okay to preclear 3-4 disks at one time. I've been told in another post that if you want to preclear multiple disks at one time, you should use the "-r", "-w" and "-b" options to the preclear script to limit their memory usage.

 

So what I'm wondering is if this command will do the job:

 

./preclear_disk.sh -r 65536 -w 65536 -b 2000 -A -c 2 /dev/sdX

 

I want to run 2 or maybe 3 cycles, and also prevent memory problems when performing the preclear.

The values were suggested in the preclear section of the wiki page. So should I just leave it at that? Or do the values need to change?

 

Any suggestions or tips would be great! :)

Link to comment

Hi I'm about to preclear a couple of new SEAGATE Barracuda 7200 3TB (ST3000DM001) ... So what I'm wondering is if this command will do the job:

 

./preclear_disk.sh -r 65536 -w 65536 -b 2000 -A -c 2 /dev/sdX

 

I've pre-cleared 3 at a time with no problem without any of the extra options ... just a simple

"preclear_disk.sh /dev/sdX" for each of the sessions.    That was on an Atom D525 with 8GB of memory.  Timing was almost identical to what it took for one.

 

Link to comment

Hi garycase! Thanks for the reply!

 

I've pre-cleared 3 at a time with no problem without any of the extra options ... just a simple

"preclear_disk.sh /dev/sdX" for each of the sessions.    That was on an Atom D525 with 8GB of memory.  Timing was almost identical to what it took for one.

 

OK, well I guess it will be fine; I'm using a Core i3-3220 with 8GB of memory. Also, a question about the cycles: if there are errors on the first cycle, will the next be aborted, or will it continue even though the first reported problems? Do you prefer to run all the cycles at once, or "manually" start them after each finishes? That's what the "-c 3" option will do, right? Start the next right after, and continue for 3 cycles? Loads of questions, I know, but I want to make sure I don't mess up any preclears with bad settings/choices - after all, this takes some time :P

Link to comment

I always run one cycle at a time, so I can't answer the "what happens" question if there are errors in an early cycle when multiple cycles are selected.  I suspect it keeps running unless the error is severe (i.e. if there are simply reallocated sectors, I suspect it just runs the additional passes ... which is what you'd want anyway, as the important thing would be to see if they are cleared and no more occur).

 

Link to comment

By the way, I presume you know you either need to run these from the UnRAID console (if you have one) or via Screen.    You could also Telnet in for each session, but if you do that the Telnet sessions have to stay open for the entire duration of the pre-clear [Not necessary with Screen, or of course from an attached console].    I always run pre-clears from an attached console -- then a simple Alt-F1, F2, F3, etc. can switch between the sessions.
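If you go the Screen route, one detached session per disk works well, since the preclears then survive a dropped connection. A hedged sketch of that pattern; the disk list and session names below are made up, and the loop just echoes the commands so you can review them before launching anything:

```shell
# Print (not run) one detached-screen preclear command per disk.
# Drop the leading "echo" to actually launch the sessions.
DISKS="sdb sdc sdd"
for d in $DISKS; do
  echo screen -dmS "preclear_$d" ./preclear_disk.sh "/dev/$d"
done
# Re-attach to a running session later with: screen -r preclear_sdb
```

`screen -dmS name cmd` starts the session detached, so you can log out and check back later with `screen -r`.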

 

Link to comment

OK, thanks for the info! :) Yes, I also run preclears from an attached console, and use Alt + Fx to navigate.

I haven't used Telnet sessions that much; I've only tried them for some minor things like network info etc., but I will once I get the server up and running 100%!

Link to comment
