Preclear.sh results - Questions about your results? Post them here.


Thanks again for the reply. I did manage to set up the whole email system and run a successful test, so that should be fine now that I understand the commands ;o)

Good.

And I guess I just (just?) need to find out why the drives are running so slowly. Could it have anything to do with having them set as AHCI?

Not likely, that is the correct setting.  Emulated-IDE is typically slower.

 

You apparently have not yet read the post I linked to earlier.  To get any further support from any forum member, a syslog is needed for analysis.

Otherwise, all we can do is say "yes, your system is not working as well as expected."  Without a syslog, we have no clue what might be happening.

 

Joe L.

Link to comment

OK, I have run preclear over a Seagate 1.5 TB drive.

 

How does this look?

 

(BTW, how long should this take to run on average? This took 49 hours!)

 

========================================================================1.13

==  ST31500341AS    9VS1QKP6

== Disk /dev/sdh has been successfully precleared

== with a starting sector of 63

============================================================================

** Changed attributes in files: /tmp/smart_start_sdh  /tmp/smart_finish_sdh

                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

         Spin_Retry_Count =   100     100           97        near_thresh 0

         End-to-End_Error =   100     100           99        near_thresh 0

          High_Fly_Writes =     1       1            0        near_thresh 218

  Airflow_Temperature_Cel =    65      66           45        In_the_past 35

      Temperature_Celsius =    35      34            0        ok          35

   Hardware_ECC_Recovered =    51      21            0        ok          152154190

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

0 sectors are pending re-allocation at the end of the preclear,

    the number of sectors pending re-allocation did not change.

4 sectors had been re-allocated before the start of the preclear.

25 sectors are re-allocated at the end of the preclear,

    a change of 21 in the number of sectors re-allocated.

Link to comment

The disk looks perfect, but the time it took is probably about double that of most.  That probably indicates an issue with your disk controller or how it is configured in the BIOS (although it could also be affected by whatever else you were doing at the same time).
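For a rough sense of what "normal" looks like: a preclear cycle is roughly three full passes over the disk (pre-read, zeroing, post-read), so under an assumed sustained rate of about 80 MB/s, a 1.5 TB drive works out to something like this back-of-the-envelope sketch (the numbers are assumptions, not anything the script reports):

```shell
# Rough preclear wall-time estimate: three full passes over the disk.
capacity_mb=$((1500 * 1000))   # 1.5 TB (decimal) expressed in MB
avg_rate=80                    # assumed sustained transfer rate, MB/s
passes=3                       # pre-read + zeroing + post-read
seconds=$(( capacity_mb / avg_rate * passes ))
echo "$(( seconds / 3600 )) hours"   # -> 15 hours
```

So a healthy run on a drive this size should land somewhere in the 15-20 hour range at those speeds; 49 hours suggests something is throttling the transfers.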

 

Post your syslog.  (before you reboot)

Link to comment

Hello,

 

I moved a six-disk array to a new box/motherboard etc. (Raj's 22-drive beast... thanks!).  First I brought the six-disk array back up without adding any drives; no issues.  Then I physically added 7 new drives to the box.  I brought unRAID back up and saw that my six drives, which were sda to sdf, were now sdh to sdm.  The 7 new drives had been assigned sda to sdg.  I started the array without assigning any of the new drives to unRAID.  I then used PuTTY to telnet to the box and tried to preclear sda, and I got the message that preclear won't run on a drive already in the array.

 

Am I getting messed up because the original six-drive array was sda-sdf (now sdh-sdm), so somehow preclear thinks they are already in the array?

 

Any help would be appreciated.  Also, I haven't tried this yet, but I'm assuming it will let me preclear only sdg, as that was never assigned to the original array.

 

Thanks.

Alan

Link to comment

The designations of the drives are assigned dynamically; they can change even from one boot to another.  They are assigned as the Linux kernel identifies them while they spin up and initialize.

 

Go by the model/serial numbers.  NEVER assume the same disk if all you are going by is sda, sdb, etc.
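On a typical Linux system you can read the model/serial-to-sdX mapping out of /dev/disk/by-id, where udev keeps persistent symlinks named after the drive identity. A sketch (the ata-* naming is the usual udev scheme; the BYID override is only there so the loop can be pointed at a test directory):

```shell
# Print each whole-disk by-id symlink together with the sdX node it
# currently points at; the by-id name embeds the model and serial number.
BYID=${BYID:-/dev/disk/by-id}
for link in "$BYID"/ata-*; do
    [ -e "$link" ] || [ -L "$link" ] || continue    # glob matched nothing
    case "$link" in *-part[0-9]*) continue ;; esac  # skip partition links
    printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
done
```

The serial-based names on the left stay stable across reboots even when the sdX letters on the right get reshuffled.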

 

It sounds as if you still have a disk.cfg file that points to the old disk config.

 

What version of unRAID are you running?

What version of the preclear script are you running?

Type

preclear_disk.sh -v

to see the version.

 

 

Link to comment

I'm using unRAID 5.0 beta 11, and the preclear version is 1.13.

 

I am sure that sda, sdb, etc. are not part of my array, so I know that is not the issue.  Can you help me check the disk.cfg file?  Thanks so much for your help.

Link to comment

Sure.

 

Use any editor you like, look at the files

config/disk.cfg

and

/var/local/emhttp/disks.ini

 

 

 

 

Link to comment

Question!

 

If you are running unRAID 4.7 onward, in the absence of either a "-a" or "-A" option specified on the command line, preclear_disk.sh will use the alignment preference you specified in the unRAID settings screen as its default.  

(-a will force MBR-unaligned. -A will force MBR-4k-aligned )
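If you want to check after the fact which starting sector an MBR-partitioned drive ended up with, the value is sitting in the partition table itself: partition 1's starting LBA is a 4-byte little-endian field at offset 454 (0x1BE + 8) in the first sector. A sketch (point it at your /dev/sdX as root; `mbr_start_sector` is a name made up for this example, not part of the preclear script):

```shell
# Read partition 1's starting sector out of an MBR: 4 bytes at offset
# 454, little-endian (od decodes in host byte order, which is LE on x86).
# 63 = MBR-unaligned, 64 = MBR-4k-aligned.
mbr_start_sector() {
    dd if="$1" bs=1 skip=454 count=4 2>/dev/null | od -An -tu4 | tr -d ' '
}
```

e.g. `mbr_start_sector /dev/sdd` should print 63 or 64 on a sub-2.2TB precleared drive.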

 

So I spent 40 hours running one preclear cycle on my drives, and I recall having to specify -A to get the 4k alignment. I couldn't find the settings section that would let me specify what I want unRAID to use anywhere in the web interface (on the latest beta). My drives are about to finish their last cycle any moment now, and I just remembered I didn't specify -A... Is there any way to check how they were formatted?

 

edit:

 

found this: http://lime-technology.com/forum/index.php?topic=12963.msg123077#msg123077

 

If you did not yet assign the disks to the array, you can change the partitioning by use of the

"-C 64" option of the preclear_disk.sh script.

 

Could I use that once my preclear is finished since I missed the -A switch initially?

 

edit again: now that I'm home, -C 64 and -C 63 both fail with:

 

DISK /dev/sdd IS PRECLEARED with a GPT Protective MBR

Conversion not possible

 

I hope I don't have to preclear again with -A :(

Link to comment

Here is a report from one of the drives if it helps:

 

root@Tower:/boot/preclear_reports# cat preclear_rpt_\ WD-WCAWZ1116457_2011-09-26
========================================================================1.13
== invoked as: ./preclear_disk.sh /dev/sdd
==  WDC WD30EZRX-00MMMB0    WD-WCAWZ1116457
== Disk /dev/sdd has been successfully precleared
== with a starting sector of 1
== Ran 1 cycle
==
== Using :Read block size = 8225280 Bytes
== Last Cycle's Pre Read Time  : 8:57:54 (92 MB/s)
== Last Cycle's Zeroing time   : 10:32:46 (79 MB/s)
== Last Cycle's Post Read Time : 18:46:40 (44 MB/s)
== Last Cycle's Total Time     : 38:18:19
==
== Total Elapsed Time 38:18:19
==
== Disk Start Temperature: 27C
==
== Current Disk Temperature: 33C,
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdd  /tmp/smart_finish_sdd
               ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
     Temperature_Celsius =   119     125            0        ok          33
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
   the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
   the number of sectors re-allocated did not change.
============================================================================

 

Checking if it's cleared with -t:

 

 Pre-Clear unRAID Disk /dev/sdd
################################################################## 1.13
Device Model:     WDC WD30EZRX-00MMMB0
Serial Number:    WD-WCAWZ1116457
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes

Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1  4294967295  2147483647+   0  Empty
Partition 1 does not end on cylinder boundary.
Partition 1 does not start on physical sector boundary.
########################################################################
========================================================================1.13
==
== DISK /dev/sdd IS PRECLEARED with a GPT Protective MBR
==
============================================================================

 

Trying to convert with -C 64:

 

 Pre-Clear unRAID Disk /dev/sdd
################################################################## 1.13
Device Model:     WDC WD30EZRX-00MMMB0
Serial Number:    WD-WCAWZ1116457
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes
########################################################################
Converting existing pre-cleared disk to start partition on sector 64
========================================================================1.13
Step 1. Verifying existing pre-clear signature prior to conversion.  DONE
========================================================================1.13
==
== DISK /dev/sdd IS PRECLEARED with a GPT Protective MBR
== Conversion not possible
==
============================================================================

 

-C 63 nets the same thing..

Link to comment

The -A and -a options have absolutely no meaning or purpose on a drive with a size greater than 2.2TB.  Even if you had supplied them, they would be ignored.

Drives > 2.2TB use a GPT partition, not an MBR-defined partition.

 

The GPT partition is always created on a 4k boundary, regardless of what other options you may have set or selected. 
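The 2.2TB boundary isn't arbitrary: an MBR partition entry stores the starting sector and sector count as 32-bit values, so with 512-byte sectors the largest capacity an MBR can describe is:

```shell
# 2^32 sectors * 512 bytes/sector = the MBR addressing limit.
echo $(( 4294967296 * 512 ))   # 4294967296 = 2^32
```

which prints 2199023255552 bytes, i.e. about 2.2 TB (decimal); anything larger needs GPT.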

 

Enjoy your new drives.

Link to comment

Ah well, that makes sense then. This should probably be clarified in the first post and in the -h option of the script :) Thank you!

Good feedback.  I'll add more descriptive text. (I'll probably wait until another change is needed before posting version 1.14, but you will see the added text then.)

 

Joe L.

Link to comment

Hello.

I have 2 identical WD 1.5 TB EADS drives.

I used preclear_disk.sh -A on /dev/sda and /dev/sdb.

First of all, is it OK to use the -A parameter?

 

The preclear process started at the same time for both of these drives. The /dev/sdb drive finished after 15-18 hours, but /dev/sda took much longer (around 25-30 hours).

Since these two drives are identical (same model, same size), they should have finished at the same time, right?

 

 

The results of /dev/sda

 

========================================================================1.13

==  WDC WD15EADS-00P8B0    WD-WMAVU0407547

== Disk /dev/sda has been successfully precleared

== with a starting sector of 64

============================================================================

** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda

                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

        Seek_Error_Rate =  100    200            0          ok        0

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

1 sector is pending re-allocation at the end of the preclear,

    a change of 1 in the number of sectors pending re-allocation.

0 sectors had been re-allocated before the start of the preclear.

0 sectors are re-allocated at the end of the preclear,

    the number of sectors re-allocated did not change.

 

 

 

The results of /dev/sdb

 

========================================================================1.13

==  WDC WD15EADS-00P8B0    WD-WMAVU0423289

== Disk /dev/sdb has been successfully precleared

== with a starting sector of 64

============================================================================

** Changed attributes in files: /tmp/smart_start_sdb  /tmp/smart_finish_sdb

                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

        Seek_Error_Rate =  100    200            0          ok        0

    Temperature_Celsius =  121    120            0          ok        29

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

0 sectors are pending re-allocation at the end of the preclear,

    the number of sectors pending re-allocation did not change.

0 sectors had been re-allocated before the start of the preclear.

0 sectors are re-allocated at the end of the preclear,

    the number of sectors re-allocated did not change.

 

 

I suspect that there is a problem with sda since it took so much longer to finish the preclear.

Differences between sda and sdb:

sda reports that 1 sector is pending re-allocation at the end of the preclear, sdb reports 0.

sdb reports Temperature_Celsius but sda doesn't.

 

Can you please help me interpret these results? Should I remove the sda drive before I build the array?

 

Thank you...

Link to comment

Something is up with sda, as there should never be any sectors pending re-allocation at the end of the process unless one was discovered in the post-read, and that would indicate the drive was not written properly when it was zeroed.

 

Another pre-clear on it is in order.

 

You'll need to look in the system log for clues as to why it took so long to clear /dev/sda.  There could be lots of errors from the disk controller logged there that would not show up in a SMART report.

 

The "-A" option asks that the pre-clear signature request a partition starting on sector 64 (aligned on a 4k boundary).  Any disk will work perfectly with that alignment except for an EARS drive with a jumper added to force it to electrically add 1 to requested sector numbers.

 

You'll need to post a system log to get any more analysis on /dev/sda, but start a new thread in the general support forum, as that is a performance issue, not a preclear one.

 

Joe L.

 

 

Link to comment

Thanks for the reply Joe.

 

I was watching the preclear process, and a few times the read speed dropped to 3 MB/s and then went back up to the usual 60-70 MB/s.

 

I used to have these two disks in an Ubuntu server (lvm2), and I had problems that appeared occasionally (slow response).

Now I understand that it was because of sda. BTW, I have no extra controller, just the standard SATA ports on the motherboard...

 

Should I run preclear again, or are there other tools I can use to pinpoint the problem with this disk?

Running preclear again seems pointless. Maybe I should remove sda, plug it into my Windows PC, and run some diagnostic tools there...

Link to comment

Since you did not attach a system log, I'll figure you need no other help in determining what is going on.  It sounds like a bad disk, or bad communication with the disk.  You'll need to look in your system log for the actual details.  Typically, when a drive is running slowly, errors are being logged there.

 

 

Link to comment

Joe,

I removed the sda drive from the server and connected it to my Windows PC. Then I ran the WD Data Lifeguard Diagnostic utility for Windows.

Indeed, the disk is bad, because it didn't even pass the first Quick Test. This is the error message:

Quick Test on drive 3 did not complete!

Status code = 07 (Failed read test element), Failure Checkpoint = 97

(Unknown Test)

SMART self-test did not complete on drive 3!

Nevertheless, I can still write data to it. I think I can use it to store some data temporarily, before I transfer it to my unRAID server when that's built and ready. Can I?

I always use TeraCopy (for Windows) to verify the integrity of the data that I copy.

 

Please let me ask you one more question, just to make sure that the other drive (sdb) is perfectly OK.

I ran preclear on sdb once again. This time, the report is the same as above with one exception. One more line is added:

Power_On_Hours =    91      92            0        ok          6572

Why is it that this parameter was not reported the first time?

In general, you don't see anything wrong with the 'good' drive (sdb), do you?

 

Thank you very much for all your help...

Link to comment

This time, the report is the same as above with one exception. One more line is added:

Power_On_Hours =    91      92            0        ok          6572

Why is it that this parameter was not reported the first time?

It was reported this time because the normalized value changed from 91 to 92.
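That is also why the reports only ever show a handful of attributes: the script compares the SMART snapshot taken at the start against the one taken at the finish and prints just the lines that differ. The idea can be sketched with a plain diff (`smart_changes` is a made-up name for this example; preclear's actual comparison is more involved than a raw diff):

```shell
# Show only the SMART attribute lines that changed between two snapshots,
# e.g.: smart_changes /tmp/smart_start_sdb /tmp/smart_finish_sdb
smart_changes() {
    diff "$1" "$2" | grep '^[<>]'
}
```

Attributes whose normalized value stayed identical between the two files simply never appear in the output.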

In general, you don't see anything wrong with the 'good' drive (sdb), do you?

I did not look at it.

Thank you very much for all your help...

You are welcome.
Link to comment

================================================================== 1.13

=                unRAID server Pre-Clear disk /dev/sdb

=              cycle 1 of 1, partition start on sector 1

= Disk Pre-Clear-Read completed                                DONE

= Step 1 of 10 - Copying zeros to first 2048k bytes            DONE

= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE

= Step 3 of 10 - Disk is now cleared from MBR onward.          DONE

= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4      DONE

= Step 5 of 10 - Clearing MBR code area                        DONE

= Step 6 of 10 - Setting MBR signature bytes                    DONE

= Step 7 of 10 - Setting partition 1 to precleared state        DONE

= Step 8 of 10 - Notifying kernel we changed the partitioning  DONE

= Step 9 of 10 - Creating the /dev/disk/by* entries            DONE

= Step 10 of 10 - Verifying if the MBR is cleared.              DONE

= Disk Post-Clear-Read completed                                DONE

Disk Temperature: 34C, Elapsed Time:  43:07:15

========================================================================1.13

==  Hitachi HDS5C3030ALA630    MJ1313YNG1ZX6C

== Disk /dev/sdb has been successfully precleared

== with a starting sector of 1

============================================================================

** Changed attributes in files: /tmp/smart_start_sdb  /tmp/smart_finish_sdb

                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

      Raw_Read_Error_Rate =   85    100           16        ok          7340142

      Temperature_Celsius =  176    214            0        ok          34

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

0 sectors are pending re-allocation at the end of the preclear,

    the number of sectors pending re-allocation did not change.

0 sectors had been re-allocated before the start of the preclear.

0 sectors are re-allocated at the end of the preclear,

    the number of sectors re-allocated did not change.

root@Tower2:/boot#

 

 

Is this Raw Read Error Rate of any significance?

Link to comment

I built a new unRAID server a couple of days ago. I basically ran memtest86+ and then started preclearing my drives, all via the console. When I went to bed last night there was only a few hours left on the preclears but at some point since then the power supply failed. I'm guessing that there is no way to get the result log of the preclears since I've got the box running again with a new power supply and unintentionally cleared /var/log/syslog in the process. If I was thinking I would have stuck the unRAID thumbdrive into my linux box to read syslog first. That said, there must be a way to verify that the disks were precleared but I haven't been able to find it. Can anybody point me in the right direction?

Link to comment

for each drive, type:

preclear_disk.sh -t /dev/sdX

where sdX is the three-letter device name.

 

It will tell you if the preclear signature is correct for that drive.
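One piece of what a signature check like that involves is the standard MBR boot-signature bytes, 0x55 0xAA at offset 510 of the first sector (the full preclear signature check verifies considerably more than this; `mbr_boot_sig` is a name made up for this sketch, not part of the script):

```shell
# Dump the two boot-signature bytes at MBR offset 510 as hex; a validly
# partitioned (including precleared) disk shows "55aa".
mbr_boot_sig() {
    dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' '
}
```

e.g. `mbr_boot_sig /dev/sdb` (as root).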

Link to comment

Thanks. I just found the preclear reports in /boot/preclear_reports/

Completion times were:

Hitachi 5K3000 - 24:20:49

Hitachi 5K3000 - 24:10:00

WD20EARS - 26:38:20

 

This was all three in parallel on a M4A88T-M LE with an Athlon II X2 250.

Link to comment
