Question about running multiple pre-clear passes



Has anyone identified problems with a hard drive by running multiple passes of the pre-clear script that were not found during the first pass?

 

Just curious about the importance/value of running multiple passes.

 

Thanks in advance for any replies.

Yes.  I always run 3 passes, even on 3TB drives.  I have found failures in the 3rd pass that didn't happen in the first 2.


Absolutely, positively, and multiple times, yes :)

 

I use the preclear script to burn in all the hard drives I want to use in servers, both at home and at work.  Last time, I had to send a LaCie external 2TB drive back because it came to a halt (logically, not literally) sometime during preclear pass number 6.

 

I always pass each drive 7 times now, and it takes forever.  I do four at a time, but it's about 10 days' duration on a 2TB LaCie external over the eSATA port using a Dell Optiplex 745.

 

One pass may not be enough to make the drive run out of spare sectors if there are too many bad ones, and excess heat during the burn-in period will exacerbate any existing flaws.

 

My logic is simple - do it up front while the drive can still be RMA'd and you're not depending upon that drive to keep your data safe.
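For reference, a minimal sketch of what "multiple passes up front" looks like in practice. The device name /dev/sdX is a placeholder, and the -c cycle-count flag is an assumption; check the usage text of your preclear_disk.sh version before relying on it.

```shell
# Hypothetical single invocation -- verify the -c flag on your script version:
#   preclear_disk.sh -c 3 /dev/sdX
#
# Equivalent explicit loop, useful if you want per-pass logging:
run_passes() {
  dev="$1"; passes="$2"
  i=1
  while [ "$i" -le "$passes" ]; do
    echo "pass $i of $passes on $dev"
    # preclear_disk.sh "$dev"   # uncomment on a real unRAID box (DESTRUCTIVE)
    i=$((i + 1))
  done
}
run_passes /dev/sdX 3
```

Running the passes as separate invocations also means each pass leaves its own report, which makes it easier to compare SMART deltas between passes.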

 


Write speeds to my array grind to a halt during preclear, so I could not imagine doing 3 passes per drive at this point. I have not been able to pinpoint the bottleneck either. I have been thinking about a second unRAID box on an old PC just to do preclears.

 

The hardware in your sig looks plenty fast enough to handle a few preclears without slowing down the array.  Make sure your BIOS settings are correct (drives should be running in AHCI mode, not IDE or combined mode).



It is set to AHCI. Transfer to the array during the preclear was 3 MB/s. The preclear was on the Masscool 2-port adapter and ran at about 100 MB/s.

 

I am not sure if that cheapo card is hogging resources somehow.


 


 

2 questions:

 

1. Is there an array drive on the other port on the Masscool (the one not handling the preclear)? You clarified this in the "Re: preclear" thread.

 

2. How is the read speed of your array during this? Both in general and, if the answer to #1 is yes, the read speed of that drive specifically.

 

--UhClem

 


 

 

I actually did not read from the array during that preclear, but I am going to preclear another drive tomorrow so I will test it and report back.


 

Good ... Another thing (I hope you see this) ... While your preclear is running, also do a (test) write to the array and have a "tail -f syslog" running. Look for any anomalies/errors. That write speed (~3 MB/s) is so atrocious that I suspect there will be some glaring clues from the kernel.

 

--UhClem
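A rough sketch of the test described above. The syslog path and the target location are assumptions (on stock unRAID the kernel log is typically /var/log/syslog, and array disks are mounted under /mnt); point TARGET at a directory on your array for a real measurement.

```shell
# In one session, watch the kernel log for resets/errors while the test runs:
#   tail -f /var/log/syslog
#
# In another session, time a throwaway sequential write; dd's summary line
# reports the achieved throughput. TARGET is a placeholder -- use a path on
# the array (e.g. under /mnt/disk1) for a meaningful number.
TARGET=${TARGET:-/tmp/writetest.bin}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET"
```

conv=fsync forces the data to the disk before dd prints its summary, so the reported rate reflects the drive rather than the OS page cache.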

 


 


Thanks, will give that a shot too.


Preclear is writing zeros to the drive at 97 MB/s and I am copying to the array at about 32 MB/s. I am not sure what went wrong the first time, but clearly it's much better now.

 

Edit to add:

 

I am actually copying to the array at around 37 MB/s; apparently the mover was still running when I tested 32 above. I am reading from the array at 55 MB/s. No clue what I read at while a preclear is not running; I don't think I ever checked that.

 

 

(2 weeks later...)

 

I always pass each drive 7 times now, and it takes forever.

 

yikes.. that is some serious wear and tear on a drive.

if it was not dead before, it might be after.

 

 

I pass each drive for 7 passes, too... and what do you know, the last Hitachi I did that to checked out perfectly until I tried to rebuild the array with it in place... then - wham-o.... 537642 sync errors, and the SMART report failed with hundreds of thousands of bad sectors.

 

I'm not sure if 7 passes is too many or not enough, but it's a real solid workout for a drive, and if it doesn't fail during 7 passes, it probably won't fail for a good long time (except for that last one that proved my theory wrong)  ;)


 


Preclear passes are not enough to exercise the drive.  badblocks exercises the drive better than a dd write/read test of only zeros.

 

badblocks does 4 passes, each with a specific bit pattern, to try to catch marginal sectors.

 

In addition, sometimes these marginal errors do not show up unless there is other activity on the bus.

For example, if there is a marginal sector, the drive may pause when it reaches that sector, and other operations in flight on the bus can then reveal the issue.

 

I mention this because of recent tests on my system, where a badblocks run on 5 drives simultaneously revealed a questionable drive that showed no SMART errors.

 

What I noticed was that the drive was resetting itself on the bus. At that point I decided to take the drive out of operation.
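To make the write-then-verify idea concrete: the four patterns badblocks uses in write mode are 0xaa, 0x55, 0xff, and 0x00. A real run is destructive and operates on the whole device (something like `badblocks -wsv -b 4096 /dev/sdX`); the sketch below just replays the same pattern cycle against a scratch file so the mechanics are visible.

```shell
# Write each badblocks test pattern to a scratch "device", read it back,
# and compare -- the same cycle badblocks -w performs per pattern.
SCRATCH=$(mktemp)
for pat in '\252' '\125' '\377' '\000'; do      # 0xaa 0x55 0xff 0x00 in octal
  head -c 1048576 /dev/zero | tr '\0' "$pat" > "$SCRATCH.pat"  # 1 MiB of pattern
  dd if="$SCRATCH.pat" of="$SCRATCH" bs=64K status=none        # write pass
  cmp -s "$SCRATCH" "$SCRATCH.pat" && printf 'pattern %s verified\n' "$pat"
done
rm -f "$SCRATCH" "$SCRATCH.pat"
```

The alternating bit patterns (10101010, 01010101) are what catch marginal sectors that a plain pass of zeros sails right over.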

 


 

 

Even badblocks may have its flaws when used on a modern disk with its internal RAM cache.  You say badblocks writes 4 patterns... I'm sure it does, but...

How many actually get written to, and re-read from, the physical disk platters, and how many are immediately satisfied from the disk's internal 64 MB cache memory?

 

Unless the amounts written are large enough to ensure that the physical platters are actually being written and read, the test may only be getting as far as the memory in the disk drive.
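One way to reason about this concern: make the region tested at each position comfortably larger than the drive's onboard cache, and bypass the OS page cache as well. The 64 MiB cache figure and /dev/sdX below are placeholders taken from the post, not measured values.

```shell
# If the region read back at each position exceeds the drive's onboard
# cache, at least some of every verify pass must come off the platters.
CACHE_BYTES=$((64 * 1024 * 1024))      # assumed onboard cache (64 MiB)
REGION_BYTES=$((256 * 1024 * 1024))    # per-position test region
if [ "$REGION_BYTES" -gt "$CACHE_BYTES" ]; then
  echo "region exceeds drive cache: verify reads must touch the platters"
fi
# On a real device, also bypass the OS page cache so the host isn't the
# one serving the re-read:
#   dd if=/dev/sdX of=/dev/null bs=1M count=256 iflag=direct
```

Note the host can bypass its own page cache with O_DIRECT, but it cannot flush the drive's internal cache directly; oversizing the test region is the practical workaround.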

