Everything posted by itimpi

  1. There is no supported cache_dirs plugin available at the moment. The one that was available at one point was withdrawn at the request of Joe L. (the author of cache_dirs).
  2. Yes. There is nothing on a pre-cleared disk that ties it to a particular system. I have heard of people keeping a separate system specifically for running pre-clears on disks without disturbing their production servers.
  3. That will almost certainly be a permissions issue! The easiest way to rectify the permissions is to log in via telnet and then run the 'newperms' command, providing the path as a parameter.
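     A minimal sketch of what that looks like in practice (the share path here is just an example - substitute the path that is giving you trouble):

        # from a telnet session logged in as root
        newperms /mnt/user/Media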
  4. Have you tried booting in safe mode to ensure no plugins are being loaded?
  5. The error message you are getting suggests something has overwritten the libc library. That is why I suggested trying with no plugins loaded.
  6. I would comment out the line that tries to load from /boot/packages. I would guess that you are loading a package that is not 64-bit compatible and is messing up the system.
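     For illustration only - the exact wording in your go file will differ, but the idea is to disable whatever line installs from /boot/packages while you test:

        # /boot/config/go - temporarily disabled for testing:
        #for pkg in /boot/packages/*.t?z; do installpkg "$pkg"; done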
  7. I wonder if it is something simpler - such as the fact that 64-bit systems do not have the low-memory constraint, so Linux is not forced to push entries out of the cache and can keep more of them cached in memory. Does anyone have any idea how one might investigate this?
  8. Replying to: "yes i did see that suggestion earlier in this thread and had done that in the script before adding to go file and rebooting, still caused nasty OOM and crash. i take it you are running it now? if so what flags are you using and are you running this via go file or manually starting after unraid has booted?" - I start mine from the go file, using a command line of the form sketched below.
  9. I found that I had to comment out the ulimit line to get cache_dirs to run reliably in the 64-bit environment.
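     A rough sketch of the sort of go-file entry I mean (the path and flags here are illustrative assumptions - run the script with no arguments to see the real option list):

        # /boot/config/go - start cache_dirs once unRAID has booted
        # (on 64-bit you may also need to comment out the ulimit line inside
        #  the cache_dirs script itself, as noted above)
        /boot/cache_dirs -w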
  10. There are options that can be passed to the pre-clear script to run just parts of the pre-clear process, which can save time if you know roughly where it got to. However, running from the start will not do any damage, so that might be the easiest thing to do.
  11. The /dev/sdd device is looking problematical, as the number of sectors pending reallocation seems to be continually going up. You do not want to use any drive in unRAID that does not finish the pre-clear with 0 pending sectors. The only anomaly is that no sectors are actually being reallocated, so it is always possible that there is an external factor causing the pending sectors, such as bad cabling at the power/SATA level. The large number of sectors pending reallocation is a good enough reason to RMA the drive. The other two drives look fine: the key attributes relating to reallocated sectors are all 0, which is what you want.
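     If you want to keep an eye on those attributes yourself, something like this against the suspect drive will show them (attribute names can vary slightly between drive models):

        smartctl -A /dev/sdd | egrep -i 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'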
  12. The pre-clear tries to put the drive through the same sort of load as is involved in parity rebuilds and/or normal use. If at the end of that there are no signs of any problems, you have reasonable confidence that at this point the drive is healthy. That is actually better than you would have for new drives - a significant proportion of those fail when put through their first stress test via pre-clear. I actually have a caddy I can plug in externally via eSATA or USB on demand to do this. You could also use another system, as there is no requirement that the pre-clear run on the system where the drive is to be used.
  13. I handle this by having a spare drive that has been previously put through a thorough pre-clear to check it out. Using this disk I go through the process of rebuilding the failed drive onto this spare as its replacement. If the rebuild fails for any reason I still have the 'red-balled' disk untouched to attempt data recovery. If the rebuild works I then put the disk that had 'red-balled' through a thorough pre-clear test. I use the results of this to decide if the disk is OK or whether the drive really needs replacing. If the drive appears OK it becomes my new spare disk.
  14. The speed will vary depending on where on the disk the heads are positioned. Speeds will be fastest at the outer edge and progressively slow down as you move inwards. My rule of thumb is about 10 hours per TB for modern drives, which means pre-clearing a 4TB drive takes about 40 hours. This can vary depending on your system specs, in particular how the disks are connected, as controller throughput is an important factor.
  15. I think your statement is too broad! The issue is not that a parity check is being started, but that it is a correcting parity check, which can result in writes to the parity disk. If an unclean shutdown is detected (whatever the reason) and the parity check were a non-correcting one, then most users would not notice anything much happening if the shutdown was caused by something like a power failure, but their array would still be checked for integrity. What I do agree with is that a correcting parity check should not be auto-started outside user control. As has been mentioned, this can lead to data loss under certain (albeit rare) circumstances. You also do not want an automatic parity check if any disk has been red-balled due to a write failure, for the same reasons.
  16. If you run the script in a console/telnet session without any parameters it will list all the command line options that are available.
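     For example (adjust the path to wherever you put the script):

        cd /boot
        ./preclear_disk.sh              # no parameters: prints the usage text listing every option
        ./preclear_disk.sh /dev/sdX     # example of an actual run against a specific device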
  17. If it finished successfully then there should be a report written. As the last phase can easily take about 20 hours on a 4TB drive I would think there is a good chance it did not complete.
  18. I believe that this is a known issue if the array is not currently started.
  19. The original error message you showed talked about /def/sdg rather than /dev/sdg. Maybe you just kept typing it wrong.
  20. It looks as though that disk may have dropped off-line. This could be a problem with the disk, but it may be something else. When (if) you get the disk back online you can run a smartctl command to check the SMART information. I would also carefully check all cabling to see that nothing has worked its way loose.
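     Something along these lines once the disk shows up again (replace sdX with whatever device name it comes back as):

        smartctl -a /dev/sdX     # full SMART report: overall health, attributes and the drive's error log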
  21. unRAID does not support hot swapping of array drives while the array is started. However, as long as your hardware and BIOS support it, there is no problem with hot swapping array drives with the array stopped, or drives that are not part of the array at any time.
  22. I do not think that has ever been part of the standard GUI. You probably got it via unMenu or Simple Features.
  23. This card is definitely supported. I have two of them in my system and they work just fine. I think it may well be one of the most commonly used expansion cards with unRAID. My guess is that the drive is having problems. Whether it is the drive itself, or something like cabling or power is not clear.
  24. I have seen the pre-clear script produce unexpected results if the disk goes off-line during the pre-clear process.