Preclear.sh results - Questions about your results? Post them here.



I ran smartctl -T permissive -d ata -a /dev/sdc

and got

smartctl 5.39.1 2010-01-28 r3054 [i486-slackware-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)

=== START OF INFORMATION SECTION ===
Device Model:     [No Information Found]
Serial Number:    [No Information Found]
Firmware Version: [No Information Found]
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   1
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Mon Aug  1 08:44:27 2011 EDT
SMART is only available in ATA Version 3 Revision 3 or greater.
We will try to proceed in spite of this.
SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported.
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

 

Think I'll try a different cable; the SATA cable I used has been sitting in a box for 8 months.

That worked: it now returns a proper SMART report. Running preclear again; I'll report back. It was probably just a loose cable.
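For anyone else who hits the same "Device Read Identity Failed" message, a quick re-check after reseating or replacing the cable might look like the following. This is just a sketch: /dev/sdc is the device from the post above, and the -d sat form only applies if the drive sits behind a SATA-to-USB/SAS bridge rather than a plain motherboard port.

# A healthy, directly-attached SATA drive should report its model,
# serial number and firmware without any extra type flags:
smartctl -a /dev/sdc | head -n 20

# If the disk is behind a SAT-capable bridge, tell smartctl explicitly:
smartctl -d sat -a /dev/sdc | head -n 20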

Edit: Since it's a new page, the last post on the last page is my first post, with the syslog and other details.

 


I'm a little lost with pre-clear.

I'm running preclear 1.11 with unRAID 4.71 (both unregistered at the moment) and just two 2 TB drives.

Trying to clear one drive at a time, my server crashes at around 80% of the pre-read. I've tried it a couple of times, but every time the server crashes (total lockup). The server has only 1 GB of memory and is otherwise stable: I ran several stress tests on it for 24 hours and it completed them without any problems.

 

Getting desperate, I tried my "normal" PC with the same hard drive and the same USB stick and ran the preclear test again. This time the pre-read completed successfully.

 

I'm a little lost now! The only differences are the amount of memory and the processor type. And of course the motherboard is also different, but both PCs are rock-solid stable, so that cannot be the problem (I hope ;D )

 

Does preclear need a minimum amount of memory?

Could there be another problem, and how could I find it?

 

Help needed! Please!

Sounds like an issue with your unRAID server and not with your drive. If the drive loses power, that has been known to crash a server. Crashes can also occur if the syslog grows too large.
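One quick thing to check the next time a preclear gets near the point where it locks up (a sketch; the paths assume a stock unRAID install, where /var/log lives in RAM, so a runaway syslog eats into that 1 GB):

# How large has the syslog grown, and how much memory is left?
ls -lh /var/log/syslog
free -m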


How many separate preclear cycles are advised per disk? I hear of some people who do 3 per disk; surely that takes over 3 days? Why is one not enough? Can someone explain why you might want to run 2 or 3 as opposed to 1.

 

There is a poll somewhere that asks whether people are finding drive issues on the 1st, 2nd, or 3rd preclear pass. Most are found in the 1st round. Remember that any drive can fail at any time; preclears are no guarantee the drive won't fail soon after, we just think they make that less likely. Anyway, I run 1 pass.
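If you do decide you want more than one pass, the script can run them back-to-back instead of being restarted by hand. A sketch (I believe -c is the cycle-count option in Joe L.'s preclear_disk.sh, and the e-mail address is just a placeholder):

# Three consecutive preclear cycles on one disk, with a mailed report
preclear_disk.sh -c 3 -m you@example.com /dev/sdX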


Ok, thanks for the info about how many preclears to run, guys. I am in the process of preclearing the first 3 HDDs of my unRAID build. These are brand-new 2TB drives and have never been used for anything before. I kicked them all off in the order below within minutes of each other. I couldn't help but notice that although all three drives (2 x Hitachi and 1 x Samsung) have generally been reading at around the same rate of 65-75 MB/s each time I have logged in and checked the sessions, the Samsung HDD is much further behind in completion than the two Hitachis. They are very similar drives, and although the Hitachi drives are slightly faster, the read speeds stay about the same.

 

SDA = Hitachi 5300K 2TB - 30C - 93% - 65.1 MB/s

SDB = Hitachi 5300K 2TB - 30C - 86% - 70.0 MB/s

SDC = Samsung HD203WI 2TB - 27C - 58% - 75.8 MB/s

 

They have all been preclearing for very nearly 24 hours now. They are all eco/green, slow-speed 5300/5400 rpm drives. Any opinions on why the Samsung is behind on the preclear compared to the two Hitachi drives?

 

EDIT: I think the drives all appear healthy and that the difference in time must simply be that the Hitachi drives are faster than the Samsung. In benchmarks outside of unRAID the Samsung seems to average 80 MB/s, whereas the more modern Hitachi with fewer platters gets about 100 MB/s. This fits the results perfectly, as the Samsung completed in about 30 hours, compared to roughly 25 for the Hitachi drives.
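One way to confirm the gap really is raw drive speed rather than a cable or controller issue is a quick sequential-read benchmark on each device (a sketch; hdparm is normally present on unRAID, and the device names are the ones listed above):

# Buffered sequential read speed, one sample per drive
for d in sda sdb sdc; do
  echo "== /dev/$d =="
  hdparm -t /dev/$d
done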


Next question: my first drive has finished its preclear. I am getting all zeros in the main section, with no reallocated sectors at all, and also the "no SMART attributes are failing now" message. But I am unsure of the significance of the "Raw_Read_Error_Rate" line in the SMART results. Anyway, if someone would not mind checking the entire text from my email report of the results below, especially the SMART report sections, I would be grateful! Thanks
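For what it's worth, when only one attribute is in question it can be easier to pull that single line out of the SMART table rather than scanning the whole report (a sketch, using the same /dev/sda as the report below):

smartctl -A /dev/sda | grep -i raw_read_error_rate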

 

 

 

========================================================================1.11

== invoked as: ./preclear_disk.sh -m [email protected] /dev/sda

==  Hitachi XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

== Disk /dev/sda has been successfully precleared
== with a starting sector of 64
== Ran 1 cycle
==
== Using :Read block size = 8225280 Bytes
== Last Cycle's Pre Read Time  : 6:21:01 (87 MB/s)

== Last Cycle's Zeroing time  : 6:18:51 (88 MB/s)

== Last Cycle's Post Read Time : 11:50:21 (46 MB/s)

== Last Cycle's Total Time    : 24:31:36

==

== Total Elapsed Time 24:31:36

==

== Disk Start Temperature: 30C

==

== Current Disk Temperature: 28C,

==

============================================================================

** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda

                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

      Temperature_Celsius =  214    200            0        ok          28

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

0 sectors are pending re-allocation at the end of the preclear,

    the number of sectors pending re-allocation did not change.

0 sectors had been re-allocated before the start of the preclear.

0 sectors are re-allocated at the end of the preclear,

    the number of sectors re-allocated did not change.

============================================================================

============================================================================

==

== S.M.A.R.T Initial Report for /dev/sda ==

Disk: /dev/sda

smartctl 5.39.1 2010-01-28 r3054 [i486-slackware-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

 

=== START OF INFORMATION SECTION ===

Device Model:    Hitachi HDS5C3020ALA632

Serial Number:    XXXXXXXXXXXXXXXXXX

Firmware Version: ML6OA580

User Capacity:    2,000,398,934,016 bytes

Device is:        Not in smartctl database [for details use: -P showall]

ATA Version is:  8

ATA Standard is:  ATA-8-ACS revision 4

Local Time is:    Wed Feb  9 21:21:55 2011 Local time zone must be set--see zic m

SMART support is: Available - device has SMART capability.

SMART support is: Enabled

 

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

 

General SMART Values:

Offline data collection status:  (0x80) Offline data collection activity

was never started.

Auto Offline Data Collection: Enabled.

Self-test execution status:      (  0) The previous self-test routine completed

without error or no self-test has ever

been run.

Total time to complete Offline

data collection: (22626) seconds.

Offline data collection

capabilities: (0x5b) SMART execute Offline immediate.

Auto Offline data collection on/off support.

Suspend Offline collection upon new

command.

Offline surface scan supported.

Self-test supported.

No Conveyance Self-test supported.

Selective Self-test supported.

SMART capabilities:            (0x0003) Saves SMART data before entering

power-saving mode.

Supports SMART auto save timer.

Error logging capability:        (0x01) Error logging supported.

General Purpose Logging supported.

Short self-test routine

recommended polling time: (  1) minutes.

Extended self-test routine

recommended polling time: ( 255) minutes.

SCT capabilities:       (0x003d) SCT Status supported.

SCT Feature Control supported.

SCT Data Table supported.

 

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  1 Raw_Read_Error_Rate    0x000b  100  100  016    Pre-fail  Always      -      0

  2 Throughput_Performance  0x0005  100  100  054    Pre-fail  Offline      -      0

  3 Spin_Up_Time            0x0007  100  100  024    Pre-fail  Always      -      303

  4 Start_Stop_Count        0x0012  100  100  000    Old_age  Always      -      8

  5 Reallocated_Sector_Ct  0x0033  100  100  005    Pre-fail  Always      -      0

  7 Seek_Error_Rate        0x000b  100  100  067    Pre-fail  Always      -      0

  8 Seek_Time_Performance  0x0005  100  100  020    Pre-fail  Offline      -      0

  9 Power_On_Hours          0x0012  100  100  000    Old_age  Always      -      1

10 Spin_Retry_Count        0x0013  100  100  060    Pre-fail  Always      -      0

12 Power_Cycle_Count      0x0032  100  100  000    Old_age  Always      -      8

192 Power-Off_Retract_Count 0x0032  100  100  000    Old_age  Always      -      8

193 Load_Cycle_Count        0x0012  100  100  000    Old_age  Always      -      8

194 Temperature_Celsius    0x0002  200  200  000    Old_age  Always      -      30 (Lifetime Min/Max 25/32)

196 Reallocated_Event_Count 0x0032  100  100  000    Old_age  Always      -      0

197 Current_Pending_Sector  0x0022  100  100  000    Old_age  Always      -      0

198 Offline_Uncorrectable  0x0008  100  100  000    Old_age  Offline      -      0

199 UDMA_CRC_Error_Count    0x000a  200  200  000    Old_age  Always      -      0

 

SMART Error Log Version: 1

No Errors Logged

 

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

 

 

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS

    1        0        0  Not_testing

    2        0        0  Not_testing

    3        0        0  Not_testing

    4        0        0  Not_testing

    5        0        0  Not_testing

Selective self-test flags (0x0):

  After scanning selected spans, do NOT read-scan remainder of disk.

If Selective self-test is pending on power-up, resume after 0 minute delay.

==

============================================================================

 

 

 

============================================================================

==

== S.M.A.R.T Final Report for /dev/sda

==

Disk: /dev/sda

smartctl 5.39.1 2010-01-28 r3054 [i486-slackware-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

 

=== START OF INFORMATION SECTION ===

Device Model:    Hitachi HDS5C3020ALA632

Serial Number:    XXXXXXXXXXXXXXXX

Firmware Version: ML6OA580

User Capacity:    2,000,398,934,016 bytes

Device is:        Not in smartctl database [for details use: -P showall]

ATA Version is:  8

ATA Standard is:  ATA-8-ACS revision 4

Local Time is:    Thu Feb 10 21:53:30 2011 Local time zone must be set--see zic m

SMART support is: Available - device has SMART capability.

SMART support is: Enabled

 

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

 

General SMART Values:

Offline data collection status:  (0x84) Offline data collection activity

was suspended by an interrupting command from host.

Auto Offline Data Collection: Enabled.

Self-test execution status:      (  0) The previous self-test routine completed

without error or no self-test has ever

been run.

Total time to complete Offline

data collection: (22626) seconds.

Offline data collection

capabilities: (0x5b) SMART execute Offline immediate.

Auto Offline data collection on/off support.

Suspend Offline collection upon new

command.

Offline surface scan supported.

Self-test supported.

No Conveyance Self-test supported.

Selective Self-test supported.

SMART capabilities:            (0x0003) Saves SMART data before entering

power-saving mode.

Supports SMART auto save timer.

Error logging capability:        (0x01) Error logging supported.

General Purpose Logging supported.

Short self-test routine

recommended polling time: (  1) minutes.

Extended self-test routine

recommended polling time: ( 255) minutes.

SCT capabilities:       (0x003d) SCT Status supported.

SCT Feature Control supported.

SCT Data Table supported.

 

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  1 Raw_Read_Error_Rate    0x000b  100  100  016    Pre-fail  Always      -      65536

  2 Throughput_Performance  0x0005  100  100  054    Pre-fail  Offline      -      0

  3 Spin_Up_Time            0x0007  100  100  024    Pre-fail  Always      -      303

  4 Start_Stop_Count        0x0012  100  100  000    Old_age  Always      -      8

  5 Reallocated_Sector_Ct  0x0033  100  100  005    Pre-fail  Always      -      0

  7 Seek_Error_Rate        0x000b  100  100  067    Pre-fail  Always      -      0

  8 Seek_Time_Performance  0x0005  100  100  020    Pre-fail  Offline      -      0

  9 Power_On_Hours          0x0012  100  100  000    Old_age  Always      -      26

10 Spin_Retry_Count        0x0013  100  100  060    Pre-fail  Always      -      0

12 Power_Cycle_Count      0x0032  100  100  000    Old_age  Always      -      8

192 Power-Off_Retract_Count 0x0032  100  100  000    Old_age  Always      -      8

193 Load_Cycle_Count        0x0012  100  100  000    Old_age  Always      -      8

194 Temperature_Celsius    0x0002  214  214  000    Old_age  Always      -      28 (Lifetime Min/Max 25/32)

196 Reallocated_Event_Count 0x0032  100  100  000    Old_age  Always      -      0

197 Current_Pending_Sector  0x0022  100  100  000    Old_age  Always      -      0

198 Offline_Uncorrectable  0x0008  100  100  000    Old_age  Offline      -      0

199 UDMA_CRC_Error_Count    0x000a  200  200  000    Old_age  Always      -      0

 

SMART Error Log Version: 1

No Errors Logged

 

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

 

 

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS

    1        0        0  Not_testing

    2        0        0  Not_testing

    3        0        0  Not_testing

    4        0        0  Not_testing

    5        0        0  Not_testing

Selective self-test flags (0x0):

  After scanning selected spans, do NOT read-scan remainder of disk.

If Selective self-test is pending on power-up, resume after 0 minute delay.

==

============================================================================

 


Notice that the "value" and "worst" columns both have the value of 100.  That means, from the manufacturer's perspective, 0 and 65536 are equally healthy.

 

In binary, 65536 is 10000000000000000.  So one bit is turned on.  That bit may have a special significance to the manufacturer that means nothing to us.
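A quick way to see that on the server itself (a sketch; bc is normally available on a Slackware-based unRAID install):

echo 'obase=2; 65536' | bc     # prints 10000000000000000, a single high bit set
echo 'obase=16; 65536' | bc    # prints 10000, i.e. 0x10000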

 

Your disk is good.  Use it and enjoy your array!

Thank you so much bjp999 for your continued support. I noticed that my other identical drive had a Raw_Read_Error_Rate of 0 in both SMART outputs, which is why I was concerned that one drive differed from the other.

 

Anyway, stupidly, last night I accidentally exited the remaining preclear on the Samsung drive! Doh! It was 80% in :( There is no way to resume, so I will have to rerun it. I am still concerned that it seems to take much longer than the Hitachis.

 


Update and some questions. I have finished preclears for all three of my drives. The part of the results that says "no SMART attributes are failing now" has NEVER failed; i.e., all of the drives, across all three preclears, have always shown just zeros with no reallocated sectors. So that's something.

 

However, the SMART testing sections differ somewhat. As I mentioned above, sometimes they give Raw_Read_Error_Rate numbers and sometimes they remain at zero, and on the last preclear the Samsung drive also reported a Multi_Zone_Error_Rate of 1. How are we meant to decipher all of these SMART readouts easily, other than posting here and asking? I am becoming a little concerned that perhaps unRAID involves a lot of "being paranoid" over drive reports: a quick Google for any of the things SMART reports throws up a load of paranoia about the drive possibly failing and a general recommendation to RMA it. I see that my drives have a Load_Cycle_Count varying between 8 for the Hitachi drives and 120 for the Samsung. These drives have only been on about 3-4 days. Should I be concerned about a load cycle count of 120 in 83 hours? That is about 1.4 times an hour on average, or roughly 12,000 load cycles per year, which would mean the drive potentially failing in under 3 years if one assumes the load cycle rating is approximately 30,000, which seems to be the expectation based on a quick Google of other similar-spec 2TB drives.
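If you want to watch how fast that counter is really climbing, something like this pulls the raw value on demand so you can compare readings a day apart (a sketch; adjust the device name to match the Samsung, and the arithmetic just reproduces the estimate above):

# Current raw Load_Cycle_Count
smartctl -A /dev/sdc | awk '/Load_Cycle_Count/ {print $NF}'

# Rough annualised rate: 120 cycles over 83 hours of power-on time
echo "120 / 83 * 24 * 365" | bc -l    # roughly 12,600 cycles per year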

 

So I could use some advice on the Multi_Zone_Error_Rate, load cycle, and Raw_Read_Error_Rate values, and whether to proceed with my build, basically.

 

 

Thanks


 

I'll preclear some of the remaining drives on 'Prototype' with 4 GB of DDR2-800 RAM (2 x 2 GB) installed to see if there's any difference.  Unfortunately 'Thailand' is already at its maximum RAM capacity.

 

 

So did the 4 GB of RAM change the numbers?

 

Sorry, haven't gotten to that test yet.  I'll get it started today.

 

Edit: It seems I'm not able to run the test.  For some reason, my 'Prototype' test server refuses to boot with 4 GB (2 x 2GB) of RAM.  With 2 GB in DIMM1 it boots reliably.  With anything in DIMM2 it won't boot.  Possibly a defective DIMM slot, but I don't have time to fully test it right now.  Sorry to disappoint :-\

 

Joe L.: I wanted to bring to your attention what I believe to be a small bug in preclear 1.12beta.  When using it with unRAID 4.7, preclear 1.12beta defaults to sector 63.  Changing the 4k alignment setting in 4.7 does cause preclear to automatically switch to sector 64, without using the -A flag, so everything seems normal when using preclear 1.12beta with unRAID 4.7.  The same is not true when using preclear 1.12beta with unRAID 5.0beta10.  unRAID 5.0beta10 defaults to 4k alignment, yet when running preclear 1.12beta on a default install of 5.0beta10, preclear still defaults to sector 63, which is a mismatch with unRAID's default settings.  Switching 5.0beta10 to 4k-unaligned and then back to 4k-aligned gets preclear 1.12beta to default to sector 64.  It appears that preclear is looking for a change in the disk alignment parameter, and that if there is no change it defaults to sector 63.  I suggest that preclear should assume no knowledge of unRAID's default setting, and instead check the disk alignment setting each time to make sure that it matches.
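For anyone who wants to verify what they are going to get before relying on the detected default, both halves are easy to check by hand (a sketch; defaultFormat is the setting the script reads from unRAID's config, and -A is the explicit 4k-aligned switch already mentioned above):

# What does unRAID itself say the default partition format is?
grep defaultFormat /boot/config/disk.cfg

# Force the 4k-aligned (sector 64) layout explicitly rather than
# relying on the detected default:
preclear_disk.sh -A /dev/sdX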

I was grabbing the default format, but it had a trailing cr/nl in disk.cfg.  I was stripping off the carriage return, but the newline remained. 

 

Change this line (line 363)

from:

default_format=`grep defaultFormat /boot/config/disk.cfg | sed -e "s/\([^=]*\)=\([^=]*\)/\2/" -e "s/\\r//" -e 's/"//g' `

to:

default_format=`grep defaultFormat /boot/config/disk.cfg | sed -e "s/\([^=]*\)=\([^=]*\)/\2/" -e "s/\\r//" -e 's/"//g' | tr -d '\n'`
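To make the fix concrete, here is roughly what the pipeline sees and produces. The defaultFormat="2" value is only an illustrative guess at the file's contents; the point is the DOS-style CR/LF ending that was leaving a stray newline on the result:

# /boot/config/disk.cfg contains a line such as (with a trailing \r\n):
#   defaultFormat="2"
grep defaultFormat /boot/config/disk.cfg \
  | sed -e "s/\([^=]*\)=\([^=]*\)/\2/" -e "s/\r//" -e 's/"//g' \
  | tr -d '\n'
# yields: 2   (no leftover newline, so later comparisons in the script match)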

 

Thanks for reporting it.

 

Joe L.


Hi everyone,

 

I'm a newbie - I just built my first server in an HP ProLiant MicroServer box. It's a beautiful piece of engineering. Anyway, I've got 3 x 2 TB drives - one each from Samsung, Seagate and Western Digital.

 

Attached are my results. I'm not sure how to interpret them, but from what I can tell the Samsung is ok.

 

For the Seagate, is this a concern?

 

ATTRIBUTE                              NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

Raw_Read_Error_Rate =            114    100            6        ok          77182040

Hardware_ECC_Recovered =        36    100            0        ok          77182040

 

Also there seem to be some problems with the WD, including 6 sectors pending reallocation.

 

Can someone please look at these and tell me what they mean, or point me to where I can find an answer?

 

Thanks.

preclear_results.txt

6 pending sectors may not be a problem, but could indicate that the drive is going to die an early death.  I would run the preclear for another cycle on that drive.  I would expect the 6 to be reallocated and no new ones to be detected.  If every preclear cycle / parity check increases the number of bad sectors, I'd RMA the drive.
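In practice that boils down to noting the two counters, running one more cycle, and comparing (a sketch; substitute the WD's device letter for sdX):

# Before and after the extra cycle, record these two raw values:
smartctl -A /dev/sdX | grep -E 'Current_Pending_Sector|Reallocated_Sector_Ct'

# Then run one more full preclear cycle on the suspect drive:
preclear_disk.sh /dev/sdX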

 

The other drives look ok.

 

I have also responded in your general support posting related to the other errors you are seeing.


The second preclear is not going well at all.

 

The post-read is on 23% and crawling along at 10 MB/s.

 

The pending sector count is at 749 now, although no sectors have been reallocated, so all the pending sectors from the last preclear seem to have fixed themselves.
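If you want to keep an eye on that count while the post-read grinds on, something like this works from a second session (a sketch; it assumes watch is available and sdX is the WD drive):

watch -n 300 "smartctl -A /dev/sdX | grep Current_Pending_Sector"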

 

I'm just wondering whether to abandon this drive altogether - the preclear looks like it will take another day or two as it is going so slowly now, and with so many issues appearing it just doesn't seem worth it.

 

Thoughts?


I just finished preclearing my drives and added them to my array.

During the parity calculation, I decided to change the data drive order so all the Hitachis and WDs are together.

Now when I change the config, unRAID tells me "invalid configuration, missing drives".

 

 

I then formatted my stick and created it from scratch, and everything is fine, but the preclear signature is gone (according to preclear_disk.sh -t /dev/sdX).

 

Should I preclear again before creating the array or just let unRAID clear the disk?
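Before deciding, it is easy to confirm which disks still carry a valid signature (a sketch; -t is the report-only test mode mentioned above, and the sd[b-d] range is just an example set of devices):

for d in /dev/sd[b-d]; do
  echo "== $d =="
  preclear_disk.sh -t $d
done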

If you are first creating an array, it really does not matter.  The array will not clear it if parity has not already been established.

Tomorrow I will have the rest of the hardware needed to build my first unRAID server using 4.7.

I have six 1 TB Western Digital drives and three 2 TB Samsung F4 drives (I have applied the firmware patch to them).

I would like to know how to run the preclear: should it just be preclear.sh /dev/sdb, or with the -A option for the 2 TB Samsungs? Some advice please?

Regards

 

Sent from my LG-P990 using Tapatalk

