Preclear.sh results - Questions about your results? Post them here.



I'm setting up my unRAID (Pro) server for the first time (and running on a full Slackware 13.1 installation).

 

Both of my Samsung 1.5TB SATA (HD154UI) drives gave me results similar to this after 10.5 hours:

 

===========================================================================
=                unRAID server Pre-Clear disk /dev/sdb
=                       cycle 1 of 1
= Disk Pre-Clear-Read completed                                 DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
= Step 5 of 10 - Clearing MBR code area                         DONE
= Step 6 of 10 - Setting MBR signature bytes                    DONE
= Step 7 of 10 - Setting partition 1 to precleared state        DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
= Step 10 of 10 - Testing if the clear has been successful.     DONE
=
Disk Temperature: 32C, Elapsed Time:  10:32:36
============================================================================
==
== SORRY: Disk /dev/sdb MBR could NOT be precleared
==
== out4= 00092
== out5= 00092
============================================================================
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000245285 s, 2.1 MB/s
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000700 0000 0000 0000 003f 0000 7af1 aea8 0000
0000720 0000 0000 0000 0000 0000 0000 0000 0000
*
0000760 0000 0000 0000 0000 0000 0000 0000 5c5c
0001000

 

Each step shows "DONE", but it fails with no indication of what the problem is or why, just "could NOT be precleared"...

 

I see this in the syslog, but it seems odd that the parity errors would occur on both SATA drives at the exact same time:

Dec 14 01:55:48 nickserver kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x580000 action 0x6
Dec 14 01:55:48 nickserver kernel: ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x1980000 action 0x6
Dec 14 01:55:48 nickserver kernel: ata4.00: BMDMA stat 0x25
Dec 14 01:55:48 nickserver kernel: ata4: SError: { 10B8B Dispar LinkSeq TrStaTrns }
Dec 14 01:55:48 nickserver kernel: ata3.00: BMDMA stat 0x25
Dec 14 01:55:48 nickserver kernel: ata4.00: failed command: WRITE DMA EXT
Dec 14 01:55:48 nickserver kernel: ata4.00: cmd 35/00:00:68:53:f8/00:04:10:00:00/e0 tag 0 dma 524288 out
Dec 14 01:55:48 nickserver kernel:          res 51/84:b3:b5:54:f8/84:02:10:00:00/e0 Emask 0x10 (ATA bus error)
Dec 14 01:55:48 nickserver kernel: ata4.00: status: { DRDY ERR }
Dec 14 01:55:48 nickserver kernel: ata4.00: error: { ICRC ABRT }
Dec 14 01:55:48 nickserver kernel: ata3: SError: { 10B8B Dispar Handshk }
Dec 14 01:55:48 nickserver kernel: ata3.00: failed command: WRITE DMA EXT
Dec 14 01:55:48 nickserver kernel: ata3.00: cmd 35/00:00:98:df:d2/00:04:0e:00:00/e0 tag 0 dma 524288 out
Dec 14 01:55:48 nickserver kernel:          res 51/84:61:37:e0:d2/84:03:0e:00:00/e0 Emask 0x10 (ATA bus error)
Dec 14 01:55:48 nickserver kernel: ata3.00: status: { DRDY ERR }
Dec 14 01:55:48 nickserver kernel: ata3.00: error: { ICRC ABRT }

 

Thoughts?

 

If I saw this in someone else's log, I might think it was due to an insufficient PSU.  I don't think that's my issue, though; I've got a 480W Antec power supply running 3 HDDs, a CD/DVD drive, a graphics card, and the motherboard/CPU -- that's it.

syslog.txt


First.. some definitions.

 

Parity errors occur when there is not an even number of bits set to "1" across a series of drives at the identical bit position.  The errors you are seeing when pre-clearing drives have absolutely nothing to do with parity, as the drives are not yet assigned to the parity-protected array.
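A concrete illustration (made-up bit values, not from your drives):

bit position N:  disk1=1  disk2=0  disk3=1  parity=0   -> two "1"s in total (even): consistent
bit position N:  disk1=1  disk2=0  disk3=1  parity=1   -> three "1"s (odd): a parity error at that position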

 

The errors you are seeing are ICRC errors (checksum errors in communication with the disks).  That typically indicates a problem in either the cables used, the disk controller ports used, the power supply, or the disks themselves.
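A quick way to see whether the link itself is corrupting data (this is just standard smartctl usage, not something the preclear script reports) is the drive's own interface CRC counter:

smartctl -d ata -a /dev/sdb | grep -i crc
#  SMART attribute 199 (UDMA_CRC_Error_Count) counts CRC errors seen on the
#  drive interface; a raw value that keeps climbing between runs points at
#  cabling or the controller rather than at the platters.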

 

Since two different disks show the same errors, the disks themselves are the least suspect.

 

As far as not telling you why the pre-clear was unsuccessful, well... it is...

 

On step 10 (testing if the pre-clear was successful), out4 = 00092 and out5 = 00092 were both unexpected values.  Basically, the values read back from the drive were not as expected.

 

Yes, they probably were caused by the ICRC errors when attempting to read the data from the drive.  The two errors are probably on the same disk controller even though they are on different disks.  One might have caused the other.

 

I agree that a 480 Watt Antec supply should not have a problem with 3 disks, but if it is a multi-rail supply and one rail is powering the CD drive, graphics card, 3 disks, all the fans, and the motherboard, it may be close to its limit, especially with older drives or if it is not working properly.

 

12 amps for the disks, 1 or 2 for the case fans, and a few more amps for the motherboard and video card, and it will be close to the single-rail limit of an older supply.  If it is not regulating well, it might be putting enough noise on the 12 Volt bus to the disks to cause CRC errors.

 

From your other post I see you are using "experimental" drivers that nobody else in unRAID is using.  That, to me, indicates you are not a Linux newbie.  (It may also be a mistake in judgment, as support other than in very general terms is impossible... and non-existent from Lime Technology.)

 

Because you are experienced enough to compile your own kernel, I think you'll be able to look at the pre-clear shell script and see where the specific verification steps check for specific values.  Because you are using those drivers, it is impossible for me to easily tell whether the drives involved are SATA or IDE.  If IDE, it could easily be the cable used for the two disks: it might be defective, or it might be an older 40-conductor cable instead of an 80-conductor cable.  You might also have bundled the disk cables tightly against the noisy power cables.  If SATA, you might have the SATA controller in IDE emulation mode.

 

In any case, these same errors will only cause hair loss if you do not resolve them NOW, before you start using that set of hardware for an unRAID array.  It has nothing directly to do with the pre-clear script, but it does show how the pre-clear process will expose them.  Any drive that cannot be read back "correctly" is a problem.  You'll face constant random parity errors and pull out your hair trying to resolve the issue.  :)

 

 

The disks themselves are probably OK (even if they are not currently pre-cleared).  Once you resolve the CRC errors, you can attempt the pre-clear process on them again.

 

Joe L.


You are using a full Slackware install (which is way above my "pay grade"  ;))

 

I'm setting up my unRAID (Pro) server for the first time (and running on a full Slackware 13.1 installation).

 

Thoughts?

 

If I saw this in someone else's log, I might think it was due to an insufficient PSU.  I don't think that's my issue, though; I've got a 480W Antec power supply running 3 HDDs, a CD/DVD drive, a graphics card, and the motherboard/CPU -- that's it.

 

but you are omitting a lot of things in your "simple" hardware list - WD7000 SCSI and 3ware 9xxx controllers, RAID6, some Compaq hardware, and all this on an older nVidia-based motherboard.

 

Let's see what the Linux guys will say.

 

 

 


First.. some definitions.

 

Parity errors occur when there is not an even number of bits set to "1" across a series of drives at the identical bit position.  The errors you are seeing when pre-clearing drives have absolutely nothing to do with parity, as the drives are not yet assigned to the parity-protected array.

After reading all the talk here on the board about 'parity errors', they were on my mind and I simply misspoke; thanks for being extra clear, though.

 

The errors you are seeing are ICRC errors (checksum errors in communication with the disks).  That typically indicates a problem in either the cables used, the disk controller ports used, the power supply, or the disks themselves.

I haven't been able to reproduce these ICRC errors in standalone testing yet, as I'm not sure where on the disk they occurred, but...

 

 

As far as not telling you why the pre-clear was unsuccessful, well... it is...

 

On step 10 (testing if the pre-clear was successful), out4 = 00092 and out5 = 00092 were both unexpected values.  Basically, the values read back from the drive were not as expected.

This "MBR preclear error" seems to stem simply from a different implementation of "echo" in my environment.  My version of echo wants "\0" preceding octal numbers, and has no idea what I'm talking about when given, for instance, "\252" in the script:

root@nickserver:/usr/src/linux# echo -ne "\252"
\252root@nickserver:/usr/src/linux#

 

"Step 6"
  # set MBR signature in last two bytes in MBR
  # two byte MBR signature
  echo -ne "\252" | dd bs=1 count=1 seek=511 of=$theDisk
  echo -ne "\125" | dd bs=1 count=1 seek=510 of=$theDisk

 

The script is expecting out4 = 00170 and out5 = 00085

echo -ne "\252" | dd bs=1 count=1 seek=511 of=/dev/sdc   >& /dev/null
echo -ne "\125" | dd bs=1 count=1 seek=510 of=/dev/sdc  >& /dev/null
root@nickserver:~# dd bs=1 count=1 skip=511 if=/dev/sdc 2>/dev/null |sum|awk '{print $1}'
00092
root@nickserver:~# dd bs=1 count=1 skip=510 if=/dev/sdc 2>/dev/null |sum|awk '{print $1}'
00092

 

echo -ne "\0252" | dd bs=1 count=1 seek=511 of=/dev/sdc >& /dev/null
echo -ne "\0125" | dd bs=1 count=1 seek=510 of=/dev/sdc  >& /dev/null
root@nickserver:~# dd bs=1 count=1 skip=511 if=/dev/sdc 2>/dev/null |sum|awk '{print $1}' #out4
00170
root@nickserver:~# dd bs=1 count=1 skip=510 if=/dev/sdc 2>/dev/null |sum|awk '{print $1}' #out5
00085
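(As an aside, printf may be a more portable way to write these bytes, since its octal escapes are specified by POSIX and don't vary between echo implementations.  Just a sketch, not what the script currently does:)

printf '\252' | dd bs=1 count=1 seek=511 of=/dev/sdc    # octal 252 = decimal 170 = 0xAA
printf '\125' | dd bs=1 count=1 seek=510 of=/dev/sdc    # octal 125 = decimal  85 = 0x55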

 

 

From your other post I see you are using "experimental" drivers that nobody else in unRAID is using.  That, to me, indicates you are not a Linux newbie.  (It may also be a mistake in judgment, as support other than in very general terms is impossible... and non-existent from Lime Technology.)

 

Because you are experienced enough to compile your own kernel, I think you'll be able to look at the pre-clear shell script and see where the specific verification steps check for specific values.

I'd like to think that's an accurate statement.  On the other hand, it's possible that I know just enough to be a danger to myself ;-)  Really though, thanks for the prod to 'go figure it out yourself'!  This wasn't an issue that could have reasonably been figured out by anyone without access to my system.

 

Because you are using those drivers, it is impossible for me to easily tell whether the drives involved are SATA or IDE.  If IDE, it could easily be the cable used for the two disks: it might be defective, or it might be an older 40-conductor cable instead of an 80-conductor cable.  You might also have bundled the disk cables tightly against the noisy power cables.  If SATA, you might have the SATA controller in IDE emulation mode.

These are SATA drives.  How would I check to make sure I'm not in IDE emulation mode?  A quick bit of googling wasn't conclusive.

 

 

In any case, these same errors will only cause hair loss if you do not resolve them NOW, before you start using that set of hardware for an unRAID array.  It has nothing directly to do with the pre-clear script, but it does show how the pre-clear process will expose them.  Any drive that cannot be read back "correctly" is a problem.  You'll face constant random parity errors and pull out your hair trying to resolve the issue.  :)

 

The disks themselves are probably OK (even if they are not currently pre-cleared).  Once you resolve the CRC errors, you can attempt the pre-clear process on them again.

The preclear script had a single CRC error that I haven't been able to repeat.  I think I'm going to go ahead and power cycle and run it again, to see what happens.

 

Though if anyone has other ideas (particularly to try to reproduce the CRC error) I'd be open to trying them, as a 10-hour test cycle is going to be a little frustrating if it keeps failing at the end :)

 

Thanks again for your help, Joe.

you are omitting a lot of things in your "simple" hardware list - WD7000 SCSI and 3ware 9xxx controllers, RAID6, some Compaq hardware, and all this on an older nVidia-based motherboard.

I don't actually have any of that hardware, except the nVidia motherboard.  Looks like I have a few extra modules loading / kernel drivers compiled in that I don't need.


These are SATA drives.  How would I check to make sure I'm not in IDE emulation mode?  A quick bit of googling wasn't conclusive.

 

Check in your BIOS for settings related to IDE for the SATA chipset.  You want to set the "mode" on the chipset to AHCI for the best, native SATA performance.

The only potentially-relevant setting I could find in my BIOS was for a SATA RAID mode, which some sites say might implicitly enable AHCI; I left it off. However, hdparm -I says NCQ is supported/enabled for these drives, which implies to me that the drives aren't running in legacy IDE mode.
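For reference, a generic way to check from the running system (nothing board-specific, just the usual tools) is to see which kernel driver claimed the controller:

lspci -k | grep -A 2 -i sata    # "Kernel driver in use: ahci" means native SATA
dmesg | grep -i ahci            # AHCI initialization messages appear only in AHCI mode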

This "MBR preclear error" seems to stem simply from a different implementation of "echo" in my environment.  My version of echo wants "\0" preceding octal numbers, and has no idea what I'm talking about when given, for instance, "\252" in the script.

[full demonstration quoted above]

On the official unRAID distribution we get:

root@Tower:/boot# echo -ne "\0252" | od -d
0000000   170
0000001
root@Tower:/boot# echo -ne "\252" | od -d
0000000   170
0000001
root@Tower:/boot# echo -ne "\0125" | od -d
0000000    85
0000001
root@Tower:/boot# echo -ne "\125" | od -d
0000000    85
0000001

 

Really though, thanks for the prod to 'go figure it out yourself'!  This wasn't an issue that could have reasonably been figured out by anyone without access to my system.

Actually, I think you should be thanked too.  The difference is subtle, but as you can see, the version of "bash" distributed with unRAID does not need the leading zero.  I'll add it to the preclear script anyway, as that makes it less likely to break as unRAID's distribution is upgraded.
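(Presumably the patched lines in step 6 would then use the leading-zero form shown above, which works in both environments:)

  echo -ne "\0252" | dd bs=1 count=1 seek=511 of=$theDisk
  echo -ne "\0125" | dd bs=1 count=1 seek=510 of=$theDisk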

 

Thanks.

Thanks again for your help, Joe.

You are welcome.  Your contributions and experience will be helpful to others.  Enjoy your new server; it is a learning experience.  Apparently your echo was emitting the "\" in the two numbers as a literal character, and that is why both of the MBR trailing bytes were set to 92 ("\" = decimal 92 in ASCII).

 

PS.

Aren't you glad I put some comments in the preclear script?

I hope you did not cringe too much at how I had to code some of the process.  I had to use what was available, and it was a challenge to partition a disk exactly like unRAID does, especially when some of the Linux utilities give conflicting results.

 

 

Joe L.


Here are the results of a recent 2-cycle preclear of an old ATA drive I wanted to use as a monkey drive.  I'm assuming this is bad news, but I wanted to check for sure...

 

===========================================================================
=                unRAID server Pre-Clear disk /dev/hdc1
=                       cycle 2 of 2
= Disk Pre-Clear-Read completed                                 DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
= Step 5 of 10 - Clearing MBR code area                         DONE
= Step 6 of 10 - Setting MBR signature bytes                    DONE
= Step 7 of 10 - Setting partition 1 to precleared state        DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
= Step 10 of 10 - Testing if the clear has been successful.     DONE
= Disk Post-Clear-Read completed                                DONE
Disk Temperature: 33C, Elapsed Time:  13:20:46
============================================================================
==
== Disk /dev/hdc1 has been successfully precleared
==
============================================================================
S.M.A.R.T. error count differences detected after pre-clear
note, some 'raw' values may change, but not be an indication of a problem
50c50
<   1 Raw_Read_Error_Rate     0x000f   063   057   006    Pre-fail  Always       -       163604374
---
>   1 Raw_Read_Error_Rate     0x000f   068   057   006    Pre-fail  Always       -       110967343
54c54
<   7 Seek_Error_Rate         0x000f   088   060   030    Pre-fail  Always       -       724031006
---
>   7 Seek_Error_Rate         0x000f   088   060   030    Pre-fail  Always       -       727127262
57,59c57,59
< 195 Hardware_ECC_Recovered  0x001a   063   057   000    Old_age   Always
< 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       226
< 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       226
---
> 195 Hardware_ECC_Recovered  0x001a   068   057   000    Old_age   Always
> 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
> 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
65c65
< ATA Error Count: 2130 (device log contains only the most recent five errors)
---
> ATA Error Count: 2296 (device log contains only the most recent five errors)
80c80
< Error 2130 occurred at disk power-on lifetime: 28262 hours (1177 days + 14 hours)
---
> Error 2296 occurred at disk power-on lifetime: 28265 hours (1177 days + 17 hours)
86c86
<   40 51 08 3f 00 f0 e0  Error: UNC 8 sectors at LBA = 0x00f0003f = 15728703
---
>   40 51 00 3f 00 e8 e0  Error: UNC at LBA = 0x00e8003f = 15204415
91,95c91,95
<   25 00 08 3f 00 f0 e0 00      21:39:18.348  READ DMA EXT
<   35 00 08 27 c5 00 e0 00      21:39:18.348  WRITE DMA EXT
<   25 00 08 3f 00 ec e0 00      21:39:18.339  READ DMA EXT
<   35 00 48 df c4 00 e0 00      21:39:18.334  WRITE DMA EXT
<   25 00 08 3f 00 d8 e0 00      21:39:14.344  READ DMA EXT
---
>   25 00 00 7f ff e7 e0 00      02:22:53.633  READ DMA EXT
>   25 00 00 7f fe e7 e0 00      02:22:53.629  READ DMA EXT
>   25 00 00 7f fd e7 e0 00      02:22:53.626  READ DMA EXT
>   25 00 00 7f fc e7 e0 00      02:22:53.623  READ DMA EXT
>   25 00 00 7f fb e7 e0 00      02:22:53.619  READ DMA EXT
97c97
< Error 2129 occurred at disk power-on lifetime: 28262 hours (1177 days + 14 hours)
---
> Error 2295 occurred at disk power-on lifetime: 28265 hours (1177 days + 17 hours)
103c103
<   40 51 08 3f 00 ec e0  Error: UNC 8 sectors at LBA = 0x00ec003f = 15466559
---
>   40 51 00 3f 00 ac e0  Error: UNC at LBA = 0x00ac003f = 11272255
108,112c108,112
<   25 00 08 3f 00 ec e0 00      21:39:18.348  READ DMA EXT
<   35 00 48 df c4 00 e0 00      21:39:18.348  WRITE DMA EXT
<   25 00 08 3f 00 d8 e0 00      21:39:18.339  READ DMA EXT
<   25 00 08 3f 00 f0 e0 00      21:39:18.334  READ DMA EXT
<   25 00 08 3f 00 ec e0 00      21:39:14.344  READ DMA EXT
---
>   25 00 20 2f 00 ac e0 00      02:21:34.677  READ DMA EXT
>   25 00 20 0f 00 ac e0 00      02:21:34.676  READ DMA EXT
>   25 00 20 ef ff ab e0 00      02:21:34.676  READ DMA EXT
>   25 00 20 cf ff ab e0 00      02:21:34.676  READ DMA EXT
>   25 00 20 af ff ab e0 00      02:21:34.676  READ DMA EXT
114c114
< Error 2128 occurred at disk power-on lifetime: 28262 hours (1177 days + 14 hours)
---
> Error 2294 occurred at disk power-on lifetime: 28265 hours (1177 days + 17 hours)
120c120
<   40 51 08 3f 00 d8 e0  Error: UNC 8 sectors at LBA = 0x00d8003f = 14155839
---
>   40 51 00 3f 00 a4 e0  Error: UNC at LBA = 0x00a4003f = 10747967
125,129c125,129
<   25 00 08 3f 00 d8 e0 00      21:39:18.348  READ DMA EXT
<   25 00 08 3f 00 f0 e0 00      21:39:18.348  READ DMA EXT
<   25 00 08 3f 00 ec e0 00      21:39:18.339  READ DMA EXT
<   25 00 08 3f 00 d8 e0 00      21:39:18.334  READ DMA EXT
<   35 00 18 6f f5 fc e0 00      21:39:14.344  WRITE DMA EXT
---
>   25 00 00 b7 ff a3 e0 00      02:21:23.557  READ DMA EXT
>   25 00 00 b7 fe a3 e0 00      02:21:23.554  READ DMA EXT
>   25 00 00 b7 fd a3 e0 00      02:21:23.550  READ DMA EXT
>   25 00 00 b7 fc a3 e0 00      02:21:23.548  READ DMA EXT
>   25 00 00 b7 fb a3 e0 00      02:21:23.545  READ DMA EXT
131c131
< Error 2127 occurred at disk power-on lifetime: 28262 hours (1177 days + 14 hours)
---
> Error 2293 occurred at disk power-on lifetime: 28265 hours (1177 days + 17 hours)
137c137
<   40 51 08 3f 00 f0 e0  Error: UNC 8 sectors at LBA = 0x00f0003f = 15728703
---
>   40 51 08 3f 00 90 e0  Error: UNC 8 sectors at LBA = 0x0090003f = 9437247
142,146c142,146
<   25 00 08 3f 00 f0 e0 00      21:39:18.348  READ DMA EXT
<   25 00 08 3f 00 ec e0 00      21:39:18.348  READ DMA EXT
<   25 00 08 3f 00 d8 e0 00      21:39:18.339  READ DMA EXT
<   35 00 18 6f f5 fc e0 00      21:39:18.334  WRITE DMA EXT
<   35 00 08 4f f2 fc e0 00      21:39:14.344  WRITE DMA EXT
---
>   25 00 08 3f 00 90 e0 00      02:20:53.690  READ DMA EXT
>   25 00 08 3f 00 90 e0 00      02:20:49.175  READ DMA EXT
>   25 00 08 37 00 90 e0 00      02:20:49.175  READ DMA EXT
>   25 00 08 2f 00 90 e0 00      02:20:49.175  READ DMA EXT
>   25 00 08 27 00 90 e0 00      02:20:49.175  READ DMA EXT
148c148
< Error 2126 occurred at disk power-on lifetime: 28262 hours (1177 days + 14 hours)
---
> Error 2292 occurred at disk power-on lifetime: 28265 hours (1177 days + 17 hours)
154c154
<   40 51 08 3f 00 ec e0  Error: UNC 8 sectors at LBA = 0x00ec003f = 15466559
---
>   40 51 08 3f 00 90 e0  Error: UNC 8 sectors at LBA = 0x0090003f = 9437247
159,163c159,163
<   25 00 08 3f 00 ec e0 00      21:39:18.348  READ DMA EXT
<   25 00 08 3f 00 d8 e0 00      21:39:18.348  READ DMA EXT
<   35 00 18 6f f5 fc e0 00      21:39:18.339  WRITE DMA EXT
<   35 00 08 4f f2 fc e0 00      21:39:18.334  WRITE DMA EXT
<   35 00 08 3f 00 fc e0 00      21:39:14.344  WRITE DMA EXT
---
>   25 00 08 3f 00 90 e0 00      02:20:49.174  READ DMA EXT
>   25 00 08 37 00 90 e0 00      02:20:49.175  READ DMA EXT
>   25 00 08 2f 00 90 e0 00      02:20:49.175  READ DMA EXT
>   25 00 08 27 00 90 e0 00      02:20:49.175  READ DMA EXT
>   25 00 08 1f 00 90 e0 00      02:20:49.175  READ DMA EXT
============================================================================
root@UNRAID:/boot#

Here are the results of a recent 2-cycle preclear of an old ATA drive I wanted to use as a monkey drive.  I'm assuming this is bad news, but I wanted to check for sure...

[preclear output and S.M.A.R.T. report quoted in full above]

Actually, at the start of the preclear there were 226 sectors marked for possible re-allocation.

After the pre-clear, there were none.  No re-allocations occurred.  This indicates it was able to successfully write them back to their original locations.  (It also possibly indicates an issue with the power supply or connectors you used previously with that drive; there might have been too much noise or vibration for the drive to work properly.)

 

Nothing glaring otherwise; in fact... the normalized read-error rate improved during the process.

 

I'd say use the drive.
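If you want to keep an eye on it once it is in service, the relevant counters are easy to pull (standard smartctl usage, nothing preclear-specific):

smartctl -d ata -A /dev/hdc | egrep "Reallocated_Sector|Current_Pending|Offline_Uncorrectable"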


Lol... I didn't save the syslog properly; any chance that I can recover it??

The server is still up and running.

Or should I just run the preclear again??

Why?  There was nothing wrong with what you posted.  The drive looked fine.

 

Because I just checked the syslog that I have here and it's empty, so I thought that the one that I posted was empty too...

So I have no idea if the drive is good or not.

 

 

 

So I have no idea if the drive is good or not...

Perhaps we are talking about different drives.

 

In any case, you can get the full SMART report on the drive by typing:

smartctl -d ata -a /dev/sdX

where sdX = the device name of your specific device.

 

There is no need to run the preclear script and wait hours for it to invoke that exact same command just to see the final status of the drive.

 

Joe L.

In any case, you can get the full SMART report on the drive by typing:

smartctl -d ata -a /dev/sdX

where sdX = the device name of your specific device.

 

OK, this is the status of the new drive.

 

Seems fine to me, but I'm not an expert.  Can you tell me if it's fine?  I am planning to use this drive to replace my parity drive, which gets 1 or 2 more current pending sectors every day.

 

 

Thanks in advance !

Syslog.txt


That drive's smart report looks good.

 

I'm a bit confused... Have you run it through the preclear script?  (It's only been powered up about 64 hours)

 

Yeah, I did run it through the preclear script.  It took around 28 hours, and that's the syslog that I couldn't save properly to post here.

 

The disk is unassigned now, and I'm planning to keep it that way until the day that my parity drive goes through the roof with the "current pending sectors".  Every day adds 2 or 3 to the count.

 

 

 

The disk is unassigned now, and I'm planning to keep it that way until the day that my parity drive goes through the roof with the "current pending sectors".  Every day adds 2 or 3 to the count.

I'd not wait if the current pending sectors on the parity drive are increasing by a few counts a day.  I'd replace it and get an RMA process started on the old one.

 

Joe L.

I'd not wait if the current pending sectors on the parity drive are increasing by a few counts a day.  I'd replace it and get an RMA process started on the old one.

 

I could try to do that, but the thing is that I live in Chile, and if I replace the drive now and another drive goes bad between now and March I will be F*****, because here in Chile I don't have any chance to get that kind of drive.

 

So I was expecting to get the RMA on my next trip to the USA (March) and to try to hang on with this drive until then.

I could try to do that, but the thing is that I live in Chile... So I was expecting to get the RMA on my next trip to the USA (March) and to try to hang on with this drive until then.

I see...

 

If it is your parity drive, you can actually take some preventative measures.  Here is an idea; see what you think.

 

A sector pending re-allocation is one that the disk was unable to read properly.   The pending-reallocation logic is waiting for that same sector to be written again so it can re-allocate it from its spare pool of sectors.

 

Unfortunately, the parity disk sectors are not normally written unless the equivalent sector on one of the data disks is written.

 

What you could do is force unRAID to re-write all the parity disk's sectors.  For 99.99999999% of them, it will write exactly the same contents as they currently contain.  For those two or three pending sectors, it will give the SMART firmware the opportunity to re-allocate the sectors it could not read.

 

The first step is to perform a normal parity "Check".

 

This will make all the drives read all their sectors and, if any are unreadable, hopefully re-allocate them.

 

Once the parity check is complete, get a set of SMART reports, one from each of your drives.  Hopefully none of the data drives will have sectors pending re-allocation.
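(One way to capture them all at once; just a sketch, assuming SATA drives named /dev/sda, /dev/sdb, etc., so adjust the pattern to your system:)

for d in /dev/sd[a-z]
do
  # one full report per drive, saved on the flash so it survives a reboot
  smartctl -d ata -a $d > /boot/smart_$(basename $d).txt
done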

 

Then, if there are pending-reallocation sectors only on the PARITY drive,

Stop the array

Un-assign the parity drive on the "devices" page.  (leave the parity disk unassigned for the next two steps)

Start the array with the parity disk un-assigned.

Stop the array once more.

Re-assign the parity drive on the "devices" page.

Start the array with the parity disk re-assigned.

 

This last step will force unRAID to think the parity drive needs to be completely re-written.  It will, of course, be re-writing all the parity disk's sectors, including those pending re-allocation.  For 99.99999999% of them, it will write exactly the same contents as they currently contain.  For those few sectors pending re-allocation, it will give the SMART firmware the opportunity to re-allocate them.

 

With any luck this will allow the SMART firmware to re-allocate all the sectors pending re-allocation. You'll have the time to get to the USA in March to obtain a replacement drive.

 

To re-group if anything goes drastically wrong (another disk concurrently fails), you can use the "trust my parity" procedure, since you know that parity is good (the process above just re-wrote exactly what was already on the parity disk).  Then you would be in exactly the same situation as you are now if a data disk were to fail, using parity and the other data disks to simulate the failed disk.

 

Joe L.

With any luck this will allow the SMART firmware to re-allocate all the sectors pending re-allocation. You'll have the time to get to the USA in March to obtain a replacement drive.

OK, I'm on it.

 

Thanks, Joe.  I will let you know how things go.


So I've got a new server I'm setting up, and ran into a problem preclearing the drives.  The hardware isn't too unusual, but it's definitely a budget setup.

 

CORSAIR Builder Series CX430 CMPSU-430CX

1GB DDR2

Athlon64 3200+

3 x 2TB Seagate LP SATA drives

ECS A780GM-M3 Motherboard

 

1: preclearing all 3 drives at the same time = 2 pass, 1 fail

2: preclearing failed drive only = pass

3: preclearing all 3 drives at the same time after changing power/sata cables from first try = 3 fail

4: preclearing all 3 drives w/spare Raidmax Power Supply = 3 pass

5: preclearing all 3 drives with Corsair PS, no other changes from step 4 = 3 fail

 

The failures are all the same, always "Postread detected un-expected non-zero bytes on disk".  I looked through the error report files and there were several hundred entries in each.  The SMART reports look like the normal Seagate ones several other users have posted, and the drives all pass the SMART self-tests.

 

Anything else worth trying before sending the PS back to Corsair?  It takes about 30 hours to run the preclear, so it's possible I moved the server a little during the runs, but I wouldn't think that would cause this many problems.  If it is the power supply, how exactly do I convince Corsair it's bad?

 

SOLVED:  I was using the wrong half of the 4/8 pin CPU power connector, and while it would power the system, apparently the connection wasn't good enough to survive the preclear.  After fixing it, I re-ran everything with no errors.  (That's what I get for setting things up late at night without very good lighting.)

With any luck this will allow the SMART firmware to re-allocate all the sectors pending re-allocation.

 

I did the whole process and went down to 9 errors.  Better than 22, but still not what I was hoping for.

 

 

Thanks for the help anyway, Joe.

