Joe L.

Members · 19,010 posts · Days Won: 1
Everything posted by Joe L.

  1. It is not anything to worry about, and not a misinterpretation either. The normalized value for those two parameters is 100. The failure thresholds are 97 and 99, respectively. Only the manufacturer knows how many errors it takes to decrement the value from 100 to 97 (or from 100 to 99). It might be 1 to 1; it could just as easily be 100 to 1 or 10,000 to 1. (There is no real standard for setting the starting value or the thresholds...) The raw counts are both zero, so you've apparently never had to re-try spinning up (or had an end-to-end error, whatever that is). See the smartctl sketch after this list for one way to compare values against thresholds. Joe L.
  2. You do not need to do anything more. The disk looks great. In the absence of a "-a" or "-A" option, the value you've selected in your unRAID preferences is used (apparently, you have yours set to 4k-aligned). A partition starting on sector 64 is perfectly fine in any array and with any disk. There is only one model of disk from one manufacturer (the EARS drives) that performed poorly when partitioned to start on sector 63, and even that only shows the lesser performance when dealing with small files. When reading music or movie media files, most people will never notice the difference, even with an EARS drive partitioned on sector 63, since the performance is still more than enough to listen/view with no issues. (See the alignment sketch after this list.) Joe L.
  3. Unfortunately, the SMART reports do NOT tell you whether the preclear was successful, or whether there were issues reading back the zeros that were written to the drive. The SMART reports can only tell you if there are sectors re-allocated or pending re-allocation (or other SMART attributes failing or near failing). For that information, you really need the preclear "rpt" file.
  4. No, it is not. The results can be found in the /boot/preclear_reports directory. There are three files for each disk (only the last set from any given day is saved): the "start" SMART report, the "finish" SMART report, and the actual "rpt" analysis. Joe L.
  5. Can't tell; you did not post a SMART report. All we see is a snip of your syslog, and those errors could be related to the power supply, the disk controller, loose cabling, or the disk itself.
  6. You should be fine. The number of re-allocated sectors did not change. If you are unsure, preclear it once more. If the number of re-allocated sectors changes, then you might see them from time to time, and the disk is not a great candidate. But... if it stays the same, use the disk. I've got one disk that has 100 re-allocated sectors, and that number has never changed. I trust it. Most disks have several thousand spare sectors to be used for re-allocation. The 36 is not an issue. Joe L.
  7. Well... the original 4 sectors were apparently re-written in place, and not re-allocated, but then on the second cycle one additional sector was not readable initially, and was re-written in place in the zeroing phase. I'd run it through another few cycles. If it still has an occasional sector show up as un-readable, I'd not trust it for anything critical. If it is all OK, then go ahead and use it if you like... (or RMA it, if in warranty). Of course, you can always put it in a Windows box and blame any future data corruption on Microsoft.
  8. Quoting the question: "Thanks for the reply Joe. Is that normal, running out of memory for 3 preclears running at the same time? I have 8GB RAM in the server and only did it because the FAQ mentioned 4 or more preclears shouldn't be run at the same time. After the second crash, I formatted the USB drive and reloaded again from scratch. I just started the preclear on one drive only. As of 7am this morning it seems to be still going, so hopefully it'll go OK all the way to the end and I can start the 2nd one tonight." I've got absolutely no idea; it would depend on lots of factors. I would not have thought you would have issues, but I was not the one who wrote that part of the wiki. If your syslog filled with errors, that could use up all your memory. If you were performing a parity check/sync, all available memory would be used as cache (unless you have more RAM than disk space). I would run a "tail -f /var/log/syslog" and see if anything is filling it (see the monitoring commands after this list). I would also perform a memory test, if you have not done one... who knows, you might have less RAM than you think.
  9. Basically, you crashed (probably ran out of free memory). The dump seems to indicate the kernel was attempting to find the LRU (least recently used) page of memory so it could re-use it. Joe L.
  10. A very nice summary. Nobody knows if a drive with 0 re-allocated sectors is actually defect free, or just set to zero by the manufacturer after mapping an initial set of defects. I cannot know for sure, but what I do know is that it takes a finite amount of time to read all the sectors of a disk. I know we can read at roughly 100MB/s; I have no idea what the speed of an internal-only test might be, but whatever it is, it must be very similar to a "long" test as issued by the smartctl utility. I'm not sure each disk is actually tested to see that every bit is readable. With that in mind, does each drive sit on the manufacturing line for the 4 to 6 hours needed for a full read (see the rough arithmetic after this list)? Nobody knows except the manufacturer. I doubt the drives are tested for that length of time... it would limit the number of drives you could manufacture, as the tests would take FAR longer than the actual assembly time. Joe L.
  11. It will report the partition as 2.2TB to older utilities (see the arithmetic after this list for where that figure comes from). I've never seen the message you described, but then, I've never seen a 3TB drive. It is probably just how "fdisk" is describing it (and fdisk is an older utility). Don't worry though, the preclear script does not use "fdisk" to actually create the partition. Joe L.
  12. Since it just finished a post-read, that should do. I noticed it originally had 5 sectors pending re-allocation, and they were all re-allocated when writing zeros, but then in the post-read, 13 more were identified as un-readable.... We'll see what happens this time.
  13. It is very unusual for a pre-cleared disk to have sectors pending re-allocation. You have 13 sectors pending re-allocation; they would have had to have been identified during the post-read phase. That, combined with the two failed "short tests", indicates you need to go through another pre-clear cycle before trusting the disk in your array. Next time, please post the pre-clear report, not just a smartctl report, as it will show more of how the disk changed during the process.
  14. It is there, as an attachment to the first post (I just looked to verify). Scroll all the way to the bottom of it to see it.
  15. All the sectors that had been marked as un-readable were able to be written to their original sectors and not re-allocated. That would indicate a problem when they were originally written. (in its prior life)
  16. Correct except that the number of disks you can concurrently pre-clear is more limited by your memory than anything else. It has nothing to do with the three disk limit of the free version of unRAID since you are never assigning the disks to the array. The three disk limit is how many disks you can assign to an unRAID array on the disk assignment screen, not how many you can attach to disk controllers in a given server. Joe L.
  17. The reports all look fine. Yes, I would pre-clear the cache drive to ensure it does not have any issues (not because it will save time when adding it to an array).
  18. Quoting the previous poster: "I am tempted to say no... I have spent some time browsing the forums on SMART results, and what I have come to understand is that what your drive is doing could possibly be OK, but parts of the drive ARE failing, so it comes down to whether you want to take the risk... Looking at the prices of disks, I would advise taking out this disk, using it for some other kind of storage (it's not broken yet) and replacing your unRAID drive with a fresh one that comes out of multiple preclear cycles without any issues." I would try another pre-clear cycle. Each cycle so far has uncovered additional un-readable sectors. If that continues, the drive is not one I'd want in my array. On the other hand, I've got one drive with 100 re-allocated sectors that has never changed from the first pre-clear I ran it through. Since it is stable, I trust it. Joe L.
  19. Quoting my earlier reply: "Zip it (they zip really well), or use an external host as you described. For pre-clear results I really do not need to see the entire syslog. You can attach only the pre-clear reports as found in /boot/preclear_reports. Joe L." The poster then wrote: "Ok, got the report on both drives. thx in advance." Both look fine.
  20. Zip it (they zip really well), or use an external host as you described. For pre-clear results I really do not need to see the entire syslog. You can attach only the pre-clear reports as found in /boot/preclear_reports. Joe L.
  21. The third disk shows 38 sectors pending re-allocation. There would normally be none as the writing of zeros should have re-allocated all the sectors. There were no sectors re-allocated, so I'd suspect the 38 un-readable sectors were discovered in the post-read phase. (that is not good) I'd run another pre-clear on that disk. If it continues to show sectors pending re-allocation, I'd not trust it.
  22. Those are the categories those attributes belong to. They do not indicate failure unless the same line also says FAILING_NOW. As an example, run-time hours would be an "old age" indicator of a disk, while un-correctable disk read errors fall into the "pre-failure" category. High run-time hours do not indicate the drive will fail, just that it is getting older. Un-correctable errors can occur at any age; a large, or increasing, number of them might indicate a pending failure (once the disk runs out of spare sectors to re-allocate in place of the un-readable ones). You just need to compare the normalized value with the failure threshold for any given attribute. That will tell you the drive's health. (See the smartctl sketch after this list.)
  23. Not knowing what you have on your server, but theorizing: if each directory entry occupied 1,000 bytes of data (I doubt this is the case, but it has to hold at least the file name, the modification/access/creation dates, and the permissions), and you had 100,000 files, the directory entries would occupy 100MB of space in RAM. That is nowhere near 4GB, so something else is spinning up the drives. In addition, I suspect it is less than 100 bytes per directory entry, so 100MB should handle about a million directory entries with relative ease. Joe L.
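A few illustrative sketches for the posts above. First, for posts 1 and 22: a minimal, hedged way to compare each SMART attribute's normalized value against its failure threshold. The awk column positions ($2 = name, $4 = VALUE, $6 = THRESH, $7 = TYPE) assume the classic smartctl attribute-table layout; adjust them if your smartctl version prints a different format, and replace /dev/sdX with your actual device.

    # Print each attribute's name, TYPE (Pre-fail / Old_age), normalized VALUE and THRESH,
    # and flag anything whose VALUE has dropped to or below its failure threshold.
    smartctl -A /dev/sdX | awk '$1 ~ /^[0-9]+$/ {
        status = ($6 + 0 > 0 && $4 + 0 <= $6 + 0) ? "AT/BELOW THRESHOLD" : "ok"
        printf "%-28s %-10s value=%s thresh=%s  %s\n", $2, $7, $4, $6, status
    }'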
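For post 2: a sketch of how the partition alignment is selected when running the preclear script, assuming (as the post describes) that "-A" forces the 4k-aligned layout starting on sector 64, "-a" forces the older layout starting on sector 63, and neither flag falls back to the default chosen in your unRAID settings. /dev/sdX is a placeholder.

    # 4k-aligned partition, starting on sector 64:
    preclear_disk.sh -A /dev/sdX

    # Unaligned partition, starting on sector 63 (the older MBR convention):
    preclear_disk.sh -a /dev/sdX

    # With neither flag, the alignment preference set in unRAID is used:
    preclear_disk.sh /dev/sdX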
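For post 8: a few stock commands for watching memory and the syslog while preclears run. Nothing here is specific to the preclear script; these are ordinary Linux utilities.

    # Watch the syslog live to see whether errors are filling it:
    tail -f /var/log/syslog

    # See how much RAM is actually free (figures in megabytes):
    free -m

    # Watch the syslog's size grow, refreshing every 60 seconds:
    watch -n 60 'ls -lh /var/log/syslog'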
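For post 10: the rough arithmetic behind the "4 to 6 hours" figure, assuming an average sequential read rate of about 100MB/s; the 2TB capacity is only illustrative.

    # A 2TB drive read end-to-end at roughly 100MB/s:
    awk 'BEGIN { printf "%.1f hours\n", 2e12 / 100e6 / 3600 }'    # prints ~5.6 hours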
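For post 11: where the 2.2TB figure comes from. Older partitioning tools describe a partition with a 32-bit sector count and 512-byte sectors, which caps the size they can report.

    # Largest size a 32-bit sector count can describe with 512-byte sectors:
    awk 'BEGIN { printf "%.2f TB\n", 2^32 * 512 / 1e12 }'    # prints ~2.20 TB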