Everything posted by abq-pete

  1. I run 3 preclear cycles on them while still in the original casings. If they pass, I rip them out and install them into servers. If they fail, they get exchanged. Regards, Peter
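     A minimal sketch of that workflow, assuming Joe L.'s preclear_disk.sh and its -c cycle-count option (check the usage text of your copy; the device name is a placeholder):

         # Run three back-to-back preclear cycles on a candidate drive.
         # DESTRUCTIVE: confirm /dev/sdX is the new disk, not an array member.
         preclear_disk.sh -c 3 /dev/sdX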
  2. http://www.amazon.com/dp/B00829THLE
     The STCA4000100, an external unit that most likely has the ST4000DM000 5900 RPM drive in it... Free shipping. Regards, Peter
     Now up to $149.99...
  3. My Costco (Redwood City, CA) has the 4TB on sale again for $139.99. Regards, Peter
  4. Yep. I bought one to have a peek inside. Mine contained an ST3000DM001. Precleared fine and works without issue in one of my arrays.
  5. Costcos in my area (Redwood City and Foster City, California) already have them listed for this new price...
  6. Yes, that PAL error is a b*tch. Out of the four motherboards I use to test unRAID, only one works for upgrading firmware:
     MSI 790FX-GD70 (fail)
     Intel 975XBX2 (fail)
     Supermicro C2SEE (fail)
     MSI H55M-ED55 (success)
     Good luck
  7. Joe, How about an option to output to a text file and copy the text file to the flash root when completed? Regards, Peter
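     In the meantime, a rough approximation of that behavior, assuming the unRAID flash root is mounted at /boot as usual (the device and log names are placeholders, and the script's on-screen progress redraws may make the capture noisy):

         # Capture the preclear output to a log, then copy it to the flash root.
         preclear_disk.sh /dev/sdX 2>&1 | tee /tmp/preclear_sdX.log
         cp /tmp/preclear_sdX.log /boot/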
  8. Joe, I am using the last one we tested. It worked fine on the Seagate 1.5TB units, but I am testing WD GP 1TB drives and seeing a bit of weirdness. I used the -n switch on the last run to get the drive cleared without all the additional testing, and that worked well. I have two more drives that I can try, although I am not sure if I can get to it in the next couple of days (off to CES!). By the way, do you happen to know if erasing the MBR will make the drive appear fresh to unRAID? If I take a drive that was used as a data drive in one unRAID system, wipe the MBR, and install it in another unRAID system, that should cause the new system to start a clearing process, right? Finally, do drives need to be pre-cleared in the system they will be used in, or can they be cleared in one machine and used in another? Thanks and regards, Peter
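     For what it's worth, wiping the MBR is just zeroing the first sector of the disk. A minimal sketch, assuming a classic 512-byte MBR layout (/dev/sdX is a placeholder):

         # Zero the 512-byte MBR so the disk no longer looks partitioned.
         # DESTRUCTIVE: triple-check the device name before running this.
         dd if=/dev/zero of=/dev/sdX bs=512 count=1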
  9. Another way to do it is to stop your array, go to the Devices page, and pretend you are going to add a new drive to the array. Your new drive's name is visible in the drop-down box. Just don't actually add the drive; go back to the Main page and start the array again. Regards, Peter
  10. This cannot be a coincidence. I just received a batch of WD10EACS drives (you have the EADS version with 32MB cache). The first drive I pre-cleared finished, but there was no SMART report. Rather than endure the 13-hour ordeal again, I just skipped it and tried another drive. This one stopped at the 88% mark and hung... I am running v4.4 on my trusty Asus P5B-VM DO. Regards, Peter
  11. Drive went back today. Replacement should be here Monday. Hopefully I will fare better with the new one. I have a few worries, though. I wonder if this is typical for this new series of drives; as you can see from reviews and other user reports, there have been quite a few problems encountered. I am currently using one as my parity drive. Though its labeling indicates that it is not in the affected batch with firmware problems, I need to check it out anyway. I hope the new drive coming on Monday will test fine. Assuming that it does, I will replace the parity drive with the new one and then run your tool on the older one to verify its health. On a different note, what resources does this tool consume the most? I ask because while I was running the test, I noticed that the performance of the array was visibly slower. Copying files to the cache drive, as well as moving from cache to the array, seemed to drop in performance; I think reading from the array was affected as well. I am running a Celeron 2.0 GHz (single core) with 3 GB of RAM. All drives are on either the onboard SATA connections or Adaptec PCI Express SATA cards. What might I do to restore the performance? Thanks again for developing this script. I sleep better knowing that drives that pass the test are indeed more reliable. Regards, Peter
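      One thing that may help while a test is running, assuming the slowdown comes from the script's dd workers competing for I/O, and assuming pgrep/renice/ionice are present in your unRAID build:

          # Drop the CPU and I/O priority of the running preclear's dd processes.
          # The pgrep pattern is illustrative; match it to the device under test.
          for pid in $(pgrep -f 'dd .*sdX'); do
              renice 19 -p "$pid"                        # lowest CPU priority
              ionice -c3 -p "$pid" 2>/dev/null || true   # idle I/O class, if supported
          done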
  12. Joe, Here are the latest results:

      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sda
      =                         cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Post-Read in progress: 99% complete.
      ( 1,500,299,297,280 of 1,500,301,910,016 bytes read )
      Elapsed Time:  12:26:13
      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sda
      =                         cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Disk Post-Clear-Read completed                                DONE
      Elapsed Time:  12:26:13
      ============================================================================
      ==
      == Disk /dev/sda has been successfully precleared
      ==
      ============================================================================
      S.M.A.R.T. error count differences detected after pre-clear
      note, some 'raw' values may change, but not be an indication of a problem
      54c54
      <   1 Raw_Read_Error_Rate     0x000f   111   099   006    Pre-fail  Always       -       37608068
      ---
      >   1 Raw_Read_Error_Rate     0x000f   111   099   006    Pre-fail  Always       -       35710728
      58c58
      <   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       387419
      ---
      >   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       404283
      64,66c64,66
      < 189 High_Fly_Writes         0x003a   048   048   000    Old_age   Always       -       52
      < 190 Airflow_Temperature_Cel 0x0022   079   067   045    Old_age   Always       -       21 (Lifetime Min/Max 17/21)
      < 195 Hardware_ECC_Recovered  0x001a   037   037   000    Old_age   Always
      ---
      > 189 High_Fly_Writes         0x003a   006   006   000    Old_age   Always       -       94
      > 190 Airflow_Temperature_Cel 0x0022   079   067   045    Old_age   Always       -       21 (Lifetime Min/Max 17/23)
      > 195 Hardware_ECC_Recovered  0x001a   051   037   000    Old_age   Always
      ============================================================================

      I'm not sure, but the results seem to be settling down a bit? The individual errors no longer appear. Does that mean they no longer exist, or just that they are not being displayed? Should I run anything else while I still have the drive? I will ship it off on Monday morning... Regards, Peter
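      On "should I run anything else": one more check worth doing before the drive ships back is its built-in long self-test via smartmontools (the device name is an example):

          # Start the drive's long self-test; it runs on the drive itself.
          smartctl -t long /dev/sda
          # When it finishes (hours later), review the result and attributes.
          smartctl -l selftest /dev/sda
          smartctl -A /dev/sda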
  13. Joe, No problem. I was just in the process of shipping it back, but I will take it out of the box, run it through again overnight with the new script, and report back. On a side note, dealing with Amazon is very gratifying. They issued a free return label via UPS and are sending me a new drive via FedEx overnight. Regards, Peter
  14. Joe, Thanks for taking the time to respond. I ran the original script once more and it seems to have gotten farther:

      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sda
      =                         cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Post-Read in progress: 99% complete.
      ( 1,497,000,960,000 of 1,500,301,910,016 bytes read )
      Elapsed Time:  9:33:36
      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sda
      =                         cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Disk Post-Clear-Read completed                                DONE
      Elapsed Time:  9:34:27
      ============================================================================
      ==
      == Disk /dev/sda has been successfully precleared
      ==
      ============================================================================
      S.M.A.R.T. error count differences detected after pre-clear
      note, some 'raw' values may change, but not be an indication of a problem
      54c54
      <   1 Raw_Read_Error_Rate     0x000f   103   099   006    Pre-fail  Always       -       42345792
      ---
      >   1 Raw_Read_Error_Rate     0x000f   111   099   006    Pre-fail  Always       -       37597902
      57,58c57,58
      <   5 Reallocated_Sector_Ct   0x0033   096   096   036    Pre-fail  Always       -       187
      <   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       320916
      ---
      >   5 Reallocated_Sector_Ct   0x0033   096   096   036    Pre-fail  Always       -       191
      >   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       333592
      62c62
      < 187 Reported_Uncorrect      0x0032   076   076   000    Old_age   Always       -       24
      ---
      > 187 Reported_Uncorrect      0x0032   070   070   000    Old_age   Always       -       30
      64,68c64,68
      < 189 High_Fly_Writes         0x003a   075   075   000    Old_age   Always       -       25
      < 190 Airflow_Temperature_Cel 0x0022   081   067   045    Old_age   Always       -       19 (Lifetime Min/Max 18/25)
      < 195 Hardware_ECC_Recovered  0x001a   037   037   000    Old_age   Always
      < 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       4
      < 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       4
      ---
      > 189 High_Fly_Writes         0x003a   048   048   000    Old_age   Always       -       52
      > 190 Airflow_Temperature_Cel 0x0022   078   067   045    Old_age   Always       -       22 (Lifetime Min/Max 18/25)
      > 195 Hardware_ECC_Recovered  0x001a   051   037   000    Old_age   Always
      > 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
      > 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
      72c72
      < ATA Error Count: 24 (device log contains only the most recent five errors)
      ---
      > ATA Error Count: 30 (device log contains only the most recent five errors)
      87c87
      < Error 24 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      ---
      > Error 30 occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
      98,102c98,102
      < 60 00 00 ff ff ff 4f 00      06:55:09.851  READ FPDMA QUEUED
      < 27 00 00 00 00 00 e0 02      06:55:09.831  READ NATIVE MAX ADDRESS EXT
      < ec 00 00 00 00 00 a0 02      06:55:09.811  IDENTIFY DEVICE
      < ef 03 46 00 00 00 a0 02      06:55:09.791  SET FEATURES [set transfer mode]
      < 27 00 00 00 00 00 e0 02      06:55:09.771  READ NATIVE MAX ADDRESS EXT
      ---
      > 60 00 00 ff ff ff 4f 00      18:39:43.313  READ FPDMA QUEUED
      > 27 00 00 00 00 00 e0 02      18:39:43.293  READ NATIVE MAX ADDRESS EXT
      > ec 00 00 00 00 00 a0 02      18:39:43.273  IDENTIFY DEVICE
      > ef 03 46 00 00 00 a0 02      18:39:43.253  SET FEATURES [set transfer mode]
      > 27 00 00 00 00 00 e0 02      18:39:43.233  READ NATIVE MAX ADDRESS EXT
      104c104
      < Error 23 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      ---
      > Error 29 occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
      115,119c115,119
      < 60 00 00 ff ff ff 4f 00      06:55:06.474  READ FPDMA QUEUED
      < 27 00 00 00 00 00 e0 02      06:55:06.454  READ NATIVE MAX ADDRESS EXT
      < ec 00 00 00 00 00 a0 02      06:55:06.434  IDENTIFY DEVICE
      < ef 03 46 00 00 00 a0 02      06:55:06.414  SET FEATURES [set transfer mode]
      < 27 00 00 00 00 00 e0 02      06:55:06.394  READ NATIVE MAX ADDRESS EXT
      ---
      > 60 00 00 ff ff ff 4f 00      18:39:39.826  READ FPDMA QUEUED
      > 27 00 00 00 00 00 e0 02      18:39:39.806  READ NATIVE MAX ADDRESS EXT
      > ec 00 00 00 00 00 a0 02      18:39:39.786  IDENTIFY DEVICE
      > ef 03 46 00 00 00 a0 02      18:39:39.766  SET FEATURES [set transfer mode]
      > 27 00 00 00 00 00 e0 02      18:39:39.746  READ NATIVE MAX ADDRESS EXT
      121c121
      < Error 22 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      ---
      > Error 28 occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
      132,136c132,136
      < 60 00 00 ff ff ff 4f 00      06:55:02.987  READ FPDMA QUEUED
      < 27 00 00 00 00 00 e0 02      06:55:02.967  READ NATIVE MAX ADDRESS EXT
      < ec 00 00 00 00 00 a0 02      06:55:02.947  IDENTIFY DEVICE
      < ef 03 46 00 00 00 a0 02      06:55:02.927  SET FEATURES [set transfer mode]
      < 27 00 00 00 00 00 e0 02      06:55:02.907  READ NATIVE MAX ADDRESS EXT
      ---
      > 60 00 00 ff ff ff 4f 00      18:39:36.419  READ FPDMA QUEUED
      > 27 00 00 00 00 00 e0 02      18:39:36.399  READ NATIVE MAX ADDRESS EXT
      > ec 00 00 00 00 00 a0 02      18:39:36.379  IDENTIFY DEVICE
      > ef 03 46 00 00 00 a0 02      18:39:36.359  SET FEATURES [set transfer mode]
      > 27 00 00 00 00 00 e0 02      18:39:36.339  READ NATIVE MAX ADDRESS EXT
      138c138
      < Error 21 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      ---
      > Error 27 occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
      149,153c149,153
      < 60 00 00 ff ff ff 4f 00      06:54:59.692  READ FPDMA QUEUED
      < 60 00 00 ff ff ff 4f 00      06:54:59.690  READ FPDMA QUEUED
      < 27 00 00 00 00 00 e0 02      06:54:59.670  READ NATIVE MAX ADDRESS EXT
      < ec 00 00 00 00 00 a0 02      06:54:59.650  IDENTIFY DEVICE
      < ef 03 46 00 00 00 a0 02      06:54:59.630  SET FEATURES [set transfer mode]
      ---
      > 60 00 00 ff ff ff 4f 00      18:39:33.033  READ FPDMA QUEUED
      > 60 00 00 ff ff ff 4f 00      18:39:33.032  READ FPDMA QUEUED
      > 27 00 00 00 00 00 e0 02      18:39:33.012  READ NATIVE MAX ADDRESS EXT
      > ec 00 00 00 00 00 a0 02      18:39:32.992  IDENTIFY DEVICE
      > ef 03 46 00 00 00 a0 02      18:39:32.972  SET FEATURES [set transfer mode]
      155c155
      < Error 20 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      ---
      > Error 26 occurred at disk power-on lifetime: 19 hours (0 days + 19 hours)
      166,170c166,170
      < 60 00 00 ff ff ff 4f 00      06:54:56.314  READ FPDMA QUEUED
      < 60 00 00 ff ff ff 4f 00      06:54:56.313  READ FPDMA QUEUED
      < 27 00 00 00 00 00 e0 02      06:54:56.293  READ NATIVE MAX ADDRESS EXT
      < ec 00 00 00 00 00 a0 02      06:54:56.273  IDENTIFY DEVICE
      < ef 03 46 00 00 00 a0 02      06:54:56.253  SET FEATURES [set transfer mode]
      ---
      > 60 00 00 ff ff ff 4f 00      18:39:29.676  READ FPDMA QUEUED
      > 60 00 00 ff ff ff 4f 00      18:39:29.675  READ FPDMA QUEUED
      > 27 00 00 00 00 00 e0 02      18:39:29.655  READ NATIVE MAX ADDRESS EXT
      > ec 00 00 00 00 00 a0 02      18:39:29.635  IDENTIFY DEVICE
      > ef 03 46 00 00 00 a0 02      18:39:29.615  SET FEATURES [set transfer mode]
      ============================================================================
      root@Tower:/boot#

      But with the increasing number of S.M.A.R.T. errors, I will go ahead and send this back. Jimwhite, I purposely got this from Amazon for their no-hassle return policy. I will request an RMA and a replacement drive (new). In my experience, replacement drives have been refurbished, and I have not had such good luck with them. Has your experience differed? Regards, Peter
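      A quick way to watch whether the ATA error count keeps climbing between runs, assuming smartmontools is installed (this is essentially the comparison the preclear report automates):

          # Snapshot the error log before and after a run, then compare.
          smartctl -l error /dev/sda > /tmp/smart_before.txt
          # ... run the preclear or other stress test here ...
          smartctl -l error /dev/sda > /tmp/smart_after.txt
          diff /tmp/smart_before.txt /tmp/smart_after.txt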
  15. Good news and bad news. First the good: the entire process took only 6:34:14 on the 1.5TB Seagate. The bad news is that the post-read portion ended after the 40% mark. Following that, the S.M.A.R.T. reports listed quite a few errors. Can anyone help me understand if I should send this drive back? First, the last Post-Read progress information:

      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sda
      =                         cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Post-Read in progress: 40% complete.
      ( 608,670,720,000 of 1,500,301,910,016 bytes read )
      Elapsed Time:  6:31:49

      Next, the Post-Read summary:

      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sda
      =                         cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Disk Post-Clear-Read completed                                DONE
      Elapsed Time:  6:34:14
      ============================================================================
      ==
      == Disk /dev/sda has been successfully precleared
      ==
      ============================================================================

      Now the S.M.A.R.T. error count:

      S.M.A.R.T. error count differences detected after pre-clear
      note, some 'raw' values may change, but not be an indication of a problem
      54c54
      <   1 Raw_Read_Error_Rate     0x000f   100   100   006    Pre-fail  Always       -       2194400
      ---
      >   1 Raw_Read_Error_Rate     0x000f   103   099   006    Pre-fail  Always       -       42336756
      57,58c57,58
      <   5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
      <   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       63626
      ---
      >   5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       15
      >   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       261274
      62c62
      < 187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
      ---
      > 187 Reported_Uncorrect      0x0032   076   076   000    Old_age   Always       -       24
      64c64
      < 189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
      ---
      > 189 High_Fly_Writes         0x003a   075   075   000    Old_age   Always       -       25
      66,68c66,68
      < 195 Hardware_ECC_Recovered  0x001a   100   100   000    Old_age   Always
      < 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
      < 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
      ---
      > 195 Hardware_ECC_Recovered  0x001a   052   049   000    Old_age   Always
      > 197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       1
      > 198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       1
      72c72,170
      < No Errors Logged
      ---

      Finally, the last 5 errors (out of 24 apparently):

      > ATA Error Count: 24 (device log contains only the most recent five errors)
      >    CR = Command Register [HEX]
      >    FR = Features Register [HEX]
      >    SC = Sector Count Register [HEX]
      >    SN = Sector Number Register [HEX]
      >    CL = Cylinder Low Register [HEX]
      >    CH = Cylinder High Register [HEX]
      >    DH = Device/Head Register [HEX]
      >    DC = Device Command Register [HEX]
      >    ER = Error register [HEX]
      >    ST = Status register [HEX]
      > Powered_Up_Time is measured from power on, and printed as
      > DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
      > SS=sec, and sss=millisec. It "wraps" after 49.710 days.
      >
      > Error 24 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      >   When the command that caused the error occurred, the device was active or idle.
      >
      >   After command completion occurred, registers were:
      >   ER ST SC SN CL CH DH
      >   -- -- -- -- -- -- --
      >   40 51 00 ff ff ff 0f
      >
      >   Commands leading to the command that caused the error were:
      >   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      >   -- -- -- -- -- -- -- --  ----------------  --------------------
      >   60 00 00 ff ff ff 4f 00      06:55:09.851  READ FPDMA QUEUED
      >   27 00 00 00 00 00 e0 02      06:55:09.831  READ NATIVE MAX ADDRESS EXT
      >   ec 00 00 00 00 00 a0 02      06:55:09.811  IDENTIFY DEVICE
      >   ef 03 46 00 00 00 a0 02      06:55:09.791  SET FEATURES [set transfer mode]
      >   27 00 00 00 00 00 e0 02      06:55:09.771  READ NATIVE MAX ADDRESS EXT
      >
      > Error 23 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      >   When the command that caused the error occurred, the device was active or idle.
      >
      >   After command completion occurred, registers were:
      >   ER ST SC SN CL CH DH
      >   -- -- -- -- -- -- --
      >   40 51 00 ff ff ff 0f
      >
      >   Commands leading to the command that caused the error were:
      >   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      >   -- -- -- -- -- -- -- --  ----------------  --------------------
      >   60 00 00 ff ff ff 4f 00      06:55:06.474  READ FPDMA QUEUED
      >   27 00 00 00 00 00 e0 02      06:55:06.454  READ NATIVE MAX ADDRESS EXT
      >   ec 00 00 00 00 00 a0 02      06:55:06.434  IDENTIFY DEVICE
      >   ef 03 46 00 00 00 a0 02      06:55:06.414  SET FEATURES [set transfer mode]
      >   27 00 00 00 00 00 e0 02      06:55:06.394  READ NATIVE MAX ADDRESS EXT
      >
      > Error 22 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      >   When the command that caused the error occurred, the device was active or idle.
      >
      >   After command completion occurred, registers were:
      >   ER ST SC SN CL CH DH
      >   -- -- -- -- -- -- --
      >   40 51 00 ff ff ff 0f
      >
      >   Commands leading to the command that caused the error were:
      >   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      >   -- -- -- -- -- -- -- --  ----------------  --------------------
      >   60 00 00 ff ff ff 4f 00      06:55:02.987  READ FPDMA QUEUED
      >   27 00 00 00 00 00 e0 02      06:55:02.967  READ NATIVE MAX ADDRESS EXT
      >   ec 00 00 00 00 00 a0 02      06:55:02.947  IDENTIFY DEVICE
      >   ef 03 46 00 00 00 a0 02      06:55:02.927  SET FEATURES [set transfer mode]
      >   27 00 00 00 00 00 e0 02      06:55:02.907  READ NATIVE MAX ADDRESS EXT
      >
      > Error 21 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      >   When the command that caused the error occurred, the device was active or idle.
      >
      >   After command completion occurred, registers were:
      >   ER ST SC SN CL CH DH
      >   -- -- -- -- -- -- --
      >   40 51 00 ff ff ff 0f
      >
      >   Commands leading to the command that caused the error were:
      >   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      >   -- -- -- -- -- -- -- --  ----------------  --------------------
      >   60 00 00 ff ff ff 4f 00      06:54:59.692  READ FPDMA QUEUED
      >   60 00 00 ff ff ff 4f 00      06:54:59.690  READ FPDMA QUEUED
      >   27 00 00 00 00 00 e0 02      06:54:59.670  READ NATIVE MAX ADDRESS EXT
      >   ec 00 00 00 00 00 a0 02      06:54:59.650  IDENTIFY DEVICE
      >   ef 03 46 00 00 00 a0 02      06:54:59.630  SET FEATURES [set transfer mode]
      >
      > Error 20 occurred at disk power-on lifetime: 8 hours (0 days + 8 hours)
      >   When the command that caused the error occurred, the device was active or idle.
      >
      >   After command completion occurred, registers were:
      >   ER ST SC SN CL CH DH
      >   -- -- -- -- -- -- --
      >   40 51 00 ff ff ff 0f
      >
      >   Commands leading to the command that caused the error were:
      >   CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      >   -- -- -- -- -- -- -- --  ----------------  --------------------
      >   60 00 00 ff ff ff 4f 00      06:54:56.314  READ FPDMA QUEUED
      >   60 00 00 ff ff ff 4f 00      06:54:56.313  READ FPDMA QUEUED
      >   27 00 00 00 00 00 e0 02      06:54:56.293  READ NATIVE MAX ADDRESS EXT
      >   ec 00 00 00 00 00 a0 02      06:54:56.273  IDENTIFY DEVICE
      >   ef 03 46 00 00 00 a0 02      06:54:56.253  SET FEATURES [set transfer mode]
      ============================================================================

      Of course, running the Seagate Tools test indicates no problems. Had this tool not existed, I would not have known any of this. Was ignorance bliss? I will re-run this as well as try some other tests. Luckily, I do not have an immediate need for this drive yet. Thanks and regards, Peter
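      For anyone weighing an RMA on numbers like these, the attributes that usually settle the question can be pulled directly, assuming smartmontools is available (the device name is an example):

          # Non-zero reallocated/pending/uncorrectable counts are the red flags.
          smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'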
  16. I just received another 1.5TB drive from Amazon. Sadly, this one had the older firmware with the issues. I updated the firmware and proceeded to use SpinRite 6 to check the drive. After about an hour, SpinRite reported that an additional 400+ hours (more than 16 days) were needed to complete the very thorough check! So I decided to choose a middle ground for testing: I just loaded the drive into my array and am running this excellent utility. Hopefully in about 17 hours I will get some good news. Thanks, Joe L.! Regards, Peter