Preclear.sh results - Questions about your results? Post them here.



Assigning a drive to a server, even just briefly, will change the preclear signature so that it will no longer be recognized if you un-assign the drive and later re-assign it to the same or a different server.

 

You can use

preclear_disk.sh -t /dev/sdX

to test if a disk has a current/correct preclear signature.

 

Totally forgot one can check the status of a disk... argh. In the meantime I was preparing a test bench to figure this out. I found my backup LSI SAS card and a 146GB SATA drive to speed the test up and get to the bottom of this. My plan was to preclear the 146GB drive, verify (-t), reboot, and verify once again (-t). I assumed this would pass, and then I would move the drive from onboard SATA to the SAS card and verify the signature again; this is where I thought I might see something.

 

Much to my surprise, preclearing the 146GB drive and verifying (-t) immediately afterwards instantly returns a "NOT precleared" for the drive. I am using the following command: preclear_disk.sh -W -M 4 /dev/sdb

 

My test bench desktop, which is the machine I have been preclearing with for a long time, is a Foxconn C51XEM2AA-8EKRS2H (AMD nForce 590 chipset). unRAID loads 'sata_nv' as the storage driver and 'forcedeth' as the ethernet driver.

 

So this would explain why every drive I have precleared on this motherboard and moved to my production server never worked (no signature). The only thing I can think of is that the SATA driver (for the nForce chipset) is not behaving, or something is unaccounted for in the signature-writing logic when this chipset is used. Is that possible, Joe L.? Is this something you would like to work through off-line (PMs)? The rig is up and I can run anything you require.

 

Screenshots attached. I actually ran preclears several times with various unRAID builds (v5's and v6b3 for the hell of it) and other command lines just in case; the signature is never found in any of the attempts.

 

Let me know Joe L. Thanks.

 

 

preclear_164GB_drive.png

no_signature_found_160GB_drive.png


I have been running 3 preclear cycles on a new 4TB drive.

I know early yesterday afternoon it was at 10% of step 10, I believe, and I lost power sometime after 4:47am this morning, so I would think it would have finished; but the power was out long enough for the battery backup to run out of juice.

 

I had it running on an old PC that I was only going to use for this, so like an idiot I didn't set up a clean power-down.

I have always been able to look at the results right after, so I can't remember if it saves a log automatically or if you have to tell it to save... or whether it saves after every cycle or just at the end.

 

Basically, I'm wondering if there is any way to tell how it went, or whether it finished, considering it died this morning.

 

Thanks!


So unless it finished, there wouldn't be any record of what happened?

So even if I run it again and have no reallocated sectors, for example, I will have no idea if there were any before?

I just did one last week and I have a folder on my flash drive called preclear_reports. There are 3 files in there: preclear_start_*, preclear_finish_*, and preclear_rpt_*.

 

The rpt file has what you're looking for.

 

Also, you can restart the preclear script and have it skip certain steps if you want.

preclear_disk.sh -?

to see the help.
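If you want to pull up the newest report without hunting through the folder, something like the following works. This is a sketch: the /boot/preclear_reports path is an assumption based on the folder name mentioned above, so adjust it to wherever your flash drive is mounted.

```shell
# Locate the newest preclear report. The /boot/preclear_reports path is
# an assumption -- adjust to wherever your flash drive is mounted.
latest_preclear_rpt() {
    local dir="${1:-/boot/preclear_reports}"
    # List report files newest-first and keep the first match.
    ls -t "$dir"/preclear_rpt_* 2>/dev/null | head -n 1
}

# Print the most recent report, if one exists.
rpt="$(latest_preclear_rpt)"
if [ -n "$rpt" ]; then cat "$rpt"; fi
```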


You won't have the preclear_finish_xxxxxxxx file or the preclear_rpt file.

Only the preclear_start_xxxxxxxxxx file is in the log directory.

 

If the first preclear cycles ran without changes in the SMART values, then the drive is probably fine. I would trust it.

 

The only thing is that the signature for a "successful preclear" is probably not set, and unRAID won't recognize the drive as precleared. Use

preclear_disk.sh -t /dev/sdX

to test if a disk has a current/correct preclear signature.

 

I don't know if there is an option in preclear that lets you set the signature. But I think there is a way to skip the read test and shorten the procedure.


So unless it finished, there wouldn't be any record of what happened?

So even if I run it again and have no reallocated sectors, for example, I will have no idea if there were any before?

 

Parameters like reallocated sectors are maintained by the drive, not preclear, so just obtain a SMART report and you will have the equivalent of the preclear_finish report. The preclear start and finish reports are before-and-after SMART reports; the preclear_rpt is a comparison of those plus a little summary info.
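To compare against a future run, you can save a SMART report now and pull out individual attributes later. A minimal sketch, assuming a standard smartctl attribute table (the smart.txt filename is just an example):

```shell
# Extract the raw Reallocated_Sector_Ct value from a saved SMART report.
# Generate the report first, e.g.:  smartctl -a /dev/sdX > smart.txt
reallocated_count() {
    # The raw value is the last field on the attribute's line.
    awk '/Reallocated_Sector_Ct/ { print $NF }' "$1"
}
```

Run it against reports saved before and after the next cycle to see whether the count moved.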

  • 2 weeks later...

I started preclearing 3 HGST 4TB NAS drives.

It seems to take a long time. How long would be normal?

It started pre-reading around 140-150 MB/s and is currently at 85 to 90 MB/s.

Progress is currently 94-96% and elapsed times are 16:05 to 16:15.

I used: preclear_disk.sh -r 65536 -w 65536 -b 2000 -c 3 /dev/sdX

If the average speed were 90 MB/s, it should have finished already, right?


I started preclearing 3 HGST 4TB NAS drives.

It seems to take a long time. How long would be normal?

It started pre-reading around 140-150 MB/s and is currently at 85 to 90 MB/s.

Progress is currently 94-96% and elapsed times are 16:05 to 16:15.

I used: preclear_disk.sh -r 65536 -w 65536 -b 2000 -c 3 /dev/sdX

If the average speed were 90 MB/s, it should have finished already, right?

It is probably not even on the final step. First it reads the entire disk, then it writes zeroes to the entire disk, then it reads the entire disk again. Expect more like 40 hours to finish. The speeds seem very reasonable.

It's step zero, pre-reading.

 

The thing is, I'm either miscalculating or the speeds don't match the time.

 

Even if the speed were a constant 80 MB/s, it would do 288,000 MB in an hour and should be finished within 14 hours.

 

I'm just curious why, when the speeds have been higher than that, it still has not finished the pre-read.

 

 

At these speeds a single cycle will be 17h + 17h + 34h = 68 hours.
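The arithmetic above can be sketched as a one-liner, handy for sanity-checking any drive size against an assumed constant throughput:

```shell
# Rough duration, in hours, of one full read or write pass over a drive,
# given its size in GB and an assumed constant throughput in MB/s.
pass_hours() {
    awk -v gb="$1" -v mbs="$2" \
        'BEGIN { printf "%.1f\n", gb * 1000 / mbs / 3600 }'
}

pass_hours 4000 80   # 4TB at a constant 80 MB/s -> 13.9 hours
```

In practice the speed is not constant (it drops toward the inner tracks), so the real pass time lands somewhere above this estimate.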


The speed will vary depending on where on the disk the heads are positioned. Speeds will be fastest at the outer edge and progressively slow down as you move inwards.

My rule of thumb is about 10 hours per TB for modern drives, which means pre-clearing a 4TB drive takes about 40 hours. This can vary depending on your system specs, in particular how the disks are connected, as controller throughput is an important factor.


2 of them are connected to an AOC-SASLP-MV8; the other is connected to the motherboard.

The drive that was connected to the MB was a little faster and finished the pre-read when the other 2 were at 98%.

It took about 17 hours for it to finish pre-reading.

The figure of 40 hours for 4TB would imply the following numbers: 10h pre-read, 10h writing zeroes, and 20h post-read.


The speed will vary depending on where on the disk the heads are positioned. Speeds will be fastest at the outer edge and progressively slow down as you move inwards.

My rule of thumb is about 10 hours per TB for modern drives, which means pre-clearing a 4TB drive takes about 40 hours. This can vary depending on your system specs, in particular how the disks are connected, as controller throughput is an important factor.

 

Controller throughput and bus throughput are certainly important factors. Perhaps the biggest factor is the rotational speed of the drive. If you check the User Benchmarks, Preclear Times wiki section, you can see that rough times for 7200rpm drives are 10 hours (+/- 2 hours) per terabyte, and for 5900rpm drives 13 hours (+/- 1 hour) per terabyte.


The speed will vary depending on where on the disk the heads are positioned. Speeds will be fastest at the outer edge and progressively slow down as you move inwards.

My rule of thumb is about 10 hours per TB for modern drives, which means pre-clearing a 4TB drive takes about 40 hours. This can vary depending on your system specs, in particular how the disks are connected, as controller throughput is an important factor.

Controller throughput and bus throughput are certainly important factors. Perhaps the biggest factor is the rotational speed of the drive. If you check the User Benchmarks, Preclear Times wiki section, you can see that rough times for 7200rpm drives are 10 hours (+/- 2 hours) per terabyte, and for 5900rpm drives 13 hours (+/- 1 hour) per terabyte.

That's why 17 hours seems a bit long for just the pre-read; these are 7200rpm drives.

I saw in old preclear logs that pre-reading and writing zeroes took roughly the same time on a WD Green (both approx 6:30h).

For what it's worth, writing zeroes has been going on for a bit over 3 hours and is at 39%.

It will surely slow down a bit, but I doubt it will take 17 hours like the pre-read did.

 

It will be a while before I can see the logs for these drives. I wonder what I will see there.


In the middle of a pre-clear on a new disk, I checked out my syslog... Is this normal?

 

Mar 25 07:45:07 Tower udevd[26416]: timeout 'ata_id --export /dev/sdk'
Mar 25 07:45:08 Tower udevd[26416]: timeout: killing 'ata_id --export /dev/sdk' [26417]
Mar 25 07:45:17 Tower last message repeated 9 times
Mar 25 07:45:18 Tower udevd[26416]: 'ata_id --export /dev/sdk' [26417] terminated by signal 9 (Killed)
Mar 25 07:45:18 Tower udevd[26416]: timeout 'scsi_id --export --whitelisted -d /dev/sdk'
Mar 25 07:46:12 Tower kernel:  sdk: sdk1


The disks have finally stopped preclearing.

 

(The disks are the new HGST 4TB 7200rpm NAS drives.)

 

Here is an example from a report:

== invoked as: ./preclear_disk.sh -r 65536 -w 65536 -b2000 -c 3 /dev/sdf
== HGSTHDN724040ALE640   PK1334PBH0H4DS
== Disk /dev/sdf has been successfully precleared
== with a starting sector of 1 
== Ran 3 cycles
==
== Using :Read block size = 65536 Bytes
== Last Cycle's Pre Read Time  : 17:19:24 (64 MB/s)
== Last Cycle's Zeroing time   : 8:53:51 (124 MB/s)
== Last Cycle's Post Read Time : 28:02:02 (39 MB/s)
== Last Cycle's Total Time     : 36:57:06
==
== Total Elapsed Time 128:11:36
==
== Disk Start Temperature: 33C
==
== Current Disk Temperature: 38C
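The averages in a report like this can be cross-checked from the elapsed times. A small sketch (the 4e12-byte figure assumes a 4TB drive):

```shell
# Average MB/s for a phase: bytes moved divided by elapsed H:M:S.
avg_mbs() {
    awk -v b="$1" -v t="$2" 'BEGIN {
        split(t, p, ":")
        secs = p[1] * 3600 + p[2] * 60 + p[3]
        printf "%.0f\n", b / 1000000 / secs
    }'
}

# 4TB (4e12 bytes) pre-read in 17:19:24 -> 64 MB/s, matching the report.
avg_mbs 4000000000000 17:19:24
```

So the 64 MB/s in the report really is the whole-pass average; the higher figures shown on screen are momentary readings.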

 

Strangely enough, whenever I looked in the PuTTY window, the reported read speeds were much higher, more like 150 down to 80 MB/s.

 

All the disks seem fine, by the way.

 

I just keep wondering what caused the slow read speeds.

I have to add that I followed the wiki manual and first tried creating a new screen session within the same PuTTY window. But when I tried this for the 3rd drive it became unresponsive, and I could see that the 2 drives that had already started pre-reading had stopped. So I started 3 separate PuTTY windows and gave the commands again.

I wonder if this could somehow affect the read speeds. But if so, why not the write speed?

 

I can't install the disks yet; I need to upgrade to unRAID 5 first and clear the cache folder, which still has a lot of files in a hidden folder.

 

Does anyone have ideas about what could be going on, or how to find out more?

Is there a way to test the read speeds without invalidating the preclear?

And is it normal that the speeds reported during the preclear and in the reports are so different (64 MB/s in the log, while during the pre-read the screen showed 150 down to 80)?
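On the read-speed question: only writes to the drive can disturb the preclear signature, so a timed sequential read is safe. One sketch, using dd with output discarded (the device name is illustrative; `hdparm -t /dev/sdX` is another read-only option):

```shell
# Timed sequential read -- a pure read, so it cannot disturb a preclear
# signature. Reads `count` 1 MiB blocks from the device (or any file)
# and discards them; dd's summary line includes the throughput.
read_speed_test() {
    dd if="$1" of=/dev/null bs=1M count="${2:-1024}" 2>&1 | tail -n 1
}

# Example (device name is illustrative; pick an idle disk):
# read_speed_test /dev/sdX 4096
```

For a meaningful number, run it on an otherwise idle disk; concurrent preclears on the same controller will drag the result down.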

 

I'm very curious to find out what is going on.

 

Thanks.

precear_reports.zip


Can someone have a look to make sure my preclear went well?

Thanks in advance.

 

The Seagate looks fine. A few things to note, however:

* There is a message about a firmware update for the drive. You should probably check it out and determine whether it sounds serious enough to be required.

* At some point in the past, one critical attribute, Seek_Error_Rate, dropped somewhat lower than normal, to a scaled value of 049. It's currently at 062, which is typical for Seagates, but the fact it dropped that far *may* indicate a drive that is less than perfect mechanically.

* The temp sensor seems odd. Was this drive 'refurbished'? On the initial SMART report, it shows a temp of 26 (Celsius), with a lowest ever of 26 and a highest ever of 26. In other words, either it's not working and fixed at 26, or it has been reset and all previous temp history was cleared. On the final report it does vary a small amount, from 25 to 28 (which seems low). If it has been reset, then others may have been too, and that makes the other SMART values less trustworthy and a little more concerning, especially the Seek_Error_Rate, given its very recent drop.

 

I would monitor this drive for a while, perhaps check a SMART report weekly for a month or two. Otherwise, the numbers look fine.

  • 2 weeks later...
