
Everything posted by itimpi

  1. There are options that can be passed to the pre-clear script to run just parts of the pre-clear process if you know roughly where it got to, which can save time. However, running it again from the start will not do any damage, so that might be the easiest thing to do.
  2. The /dev/sdd device is looking problematical as the number of sectors pending reallocation seems to be continually going up. You do not want to use any drive in unRAID that does not finish the pre-clear with 0 pending sectors. The only anomaly is that no sectors are actually being re-allocated, so it is always possible that there is an external factor causing the pending sectors, such as bad cabling at the power/SATA level. The large number of sectors pending reallocation is a good enough reason to RMA the drive (see the smartctl sketch after this list for one way to keep an eye on these attributes). The other two drives look fine. The key attributes relating to reallocated sectors are all 0, which is what you want.
  3. The pre-clear tries to put the drive through the same sort of load as is involved in parity rebuilds and/or normal use. If at the end of that there are no signs of any problems, you have reasonable confidence that at this point the drive is healthy. That is actually better than you would have for new drives - a significant proportion of those fail when put through their first stress test via pre-clear. I actually have a caddy I can plug in externally via eSATA or USB on demand to do this. You could also use another system, as there is no requirement that the pre-clear be run on the system where the drive is to be used.
  4. I handle this by having a spare drive that has previously been put through a thorough pre-clear to check it out (see the multi-cycle sketch after this list). Using this disk I go through the process of rebuilding the failed drive onto this spare as its replacement. If the rebuild fails for any reason I still have the 'red-balled' disk untouched to attempt data recovery. If the rebuild works I then put the disk that had 'red-balled' through a thorough pre-clear test. I use the results of this to decide whether the disk is OK or whether it really needs replacing. If the drive appears OK it becomes my new spare disk.
  5. The speed will vary depending on where on the disk the heads are positioned. Speeds will be fastest at the outer edge and progressively slow down as you move inwards. My rule of thumb is about 10 hours per TB for modern drives, which means pre-clearing a 4TB drive takes about 40 hours. This can vary depending on your system specs, in particular how the disks are connected, as controller throughput is an important factor.
  6. I think your statement is too broad! I think the issue is not that a parity check is being started, but that it is a correcting parity check, which can result in writes to the parity disk. If an unclean shutdown is detected (whatever the reason) and the parity check was a non-correcting one, then most users would not notice anything much happening if the shutdown was caused by something like a power failure, but their array would still be checked for integrity. What I do agree with is that a correcting parity check should not be auto-started outside user control. As has been mentioned, this can lead to data loss under certain (albeit rare) circumstances. You also do not want an automatic parity check if any disk has been red-balled due to a write failure, for the same reasons.
  7. If you run the script in a console/telnet session without any parameters it will list all the command line options that are available (see the example after this list).
  8. If it finished successfully then there should be a report written (see the sketch after this list for one way to look for it). As the last phase can easily take about 20 hours on a 4TB drive, I would think there is a good chance it did not complete.
  9. I believe that this is a known issue if the array is not currently started.
  10. The original error message you showed talked about /def/sdg rather than /dev/sdg. Maybe you just kept mistyping it.
  11. It looks as though that disk may have dropped off-line. This could be a problem with the disk, but it may be something else. When (if) you get the disk back online you can run a smartctl command to check the SMART information (see the sketch after this list). I would also check all the cabling carefully to see that nothing has worked its way loose.
  12. UnRAID does not support hot swapping of array drives while the array is started. However, as long as your hardware + BIOS supports it, there is no problem hot swapping array drives with the array stopped, or drives that are not part of the array at any time.
  13. I do not think that has ever been part of the standard GUI. You probably got it via unMenu or Simple Features.
  14. This card is definitely supported. I have two of them in my system and they work just fine. I think it may well be one of the most commonly used expansion cards with unRAID. My guess is that the drive is having problems. Whether it is the drive itself, or something like cabling or power is not clear.
  15. I have seen the pre_clear script produce unexpected results if the disk goes off-line during the pre-clear process.
  16. Those results look good. It is not at all unusual for sectors that were flagged as pending re-allocation to turn out to be OK when they are next written. As long as they do not keep re-appearing I would not worry. The original failure that resulted in them could have been due to some outside factor. However it is worth keeping an eye on them.
  17. The figures that tend to be most important are whether the count of reallocated sectors is steady (ideally 0), and whether the pending sectors value is 0 after a pre-clear (unRAID cannot work error-free with sectors pending reallocation). You mention using unRAID 4.7. However, that cannot handle disks larger than 2TB, so you need the 5.0 release if you have any of those. The 5.0 release is very close to going final so should be safe to use.
  18. I wonder what is causing the memory release? If at that time the cached directory entries for cache_dirs are also getting flushed from memory, that would explain the disks spinning up, as they now actually have to be read to fetch the directory entries the next time cache_dirs attempts this (see the cache-pressure sketch after this list).
  19. Isn't that what I said (the username needs to be all lowercase)?
  20. I seem to vaguely remember that you can have a problem if the username used is not all lower case at the Linux level?
  21. But doesn't it wipe the MBR in steps 4 & 5? Why is it still corrupt(?)? No idea - I assume that the write or read of the MBR did not work as expected. Maybe Joe L. will see this thread and have further insights.
  22. It is saying the MBR was not cleared, so I expect step 10 failed (that WILL read back the MBR - see the sketch after this list for a manual way to inspect it).
  23. Is the price the same if we are not US based? I am interested in getting some from your next batch.
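
A minimal sketch for post 2: watching the reallocation-related SMART attributes with smartctl (assuming smartctl is available and the drive is /dev/sdd - substitute your own device):

    # Show just the reallocation-related SMART attributes (IDs 5, 196, 197, 198)
    smartctl -A /dev/sdd | grep -E "Reallocated_Sector_Ct|Reallocated_Event_Count|Current_Pending_Sector|Offline_Uncorrectable"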
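
For post 4, a sketch of the sort of thorough pre-clear mentioned, assuming Joe L.'s preclear_disk.sh and that the -c option (number of cycles) behaves as remembered - check the option list first:

    # Put the spare drive through three full pre-clear cycles (device name is a placeholder)
    preclear_disk.sh -c 3 /dev/sdX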
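
For post 7, the example really is just running the script with no arguments from a console/telnet session, which prints the usage text listing every available option:

    # Print the built-in help / option list
    preclear_disk.sh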
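
For post 8, one way to check whether a report was actually written, assuming the pre-clear report ends up somewhere on the flash drive (the exact filename and location depend on the pre-clear version in use):

    # Search the flash drive for any pre-clear report files
    find /boot -iname '*preclear*' -type f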
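
For post 11, once the disk reappears, a quick SMART health check followed by the full report (assuming the disk comes back as /dev/sdX):

    smartctl -H /dev/sdX    # overall health self-assessment
    smartctl -a /dev/sdX    # full SMART information, including the attribute table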
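
For post 18, the kernel's vm.vfs_cache_pressure tunable controls how readily cached directory entries are reclaimed; whether it is actually behind the spin-ups here is only a guess, but it is easy to check:

    sysctl vm.vfs_cache_pressure       # default is 100; higher values reclaim dentries/inodes sooner
    # Lowering it keeps directory entries cached for longer (the value 10 is only illustrative)
    sysctl -w vm.vfs_cache_pressure=10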
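
For post 22, a manual way to read back the first sector and confirm whether the MBR really was cleared (a cleared MBR dumps as all zeros); the device name is a placeholder:

    # Dump the first 512 bytes (the MBR) as hex
    dd if=/dev/sdX bs=512 count=1 2>/dev/null | od -A d -t x1 | head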