itimpi (Moderator) - Posts: 19,779 - Days Won: 54

Everything posted by itimpi

  1. After reading this post and having a moment of lucidity, I think it's dawned on me how this can happen. If new files are being written to a cached share, and those files are beyond the 'split level' in the directory hierarchy, the code that prevents splitting kicks in and forces new files to stay on the device where the parent exists, in this case the cache. The free space is not taken into consideration in this case - but with the cache disk it should be - that's the bug. Sound like this is your scenario? And I was just about to release -beta8.... Sounds like that could be it. Get b8 out and we can look at this for b9. Nice one. Looks as though the fix was easy once the cause was identified, as it appears to have made beta 8.
  2. I believe that there is a bug in the current beta 7 GUI code to do with recognizing suffixes when trying to set the min free space value. Not sure if that is relevant if the value is already set (or you set it manually via a text editor).
  3. As long as you have Min Free Space set to be more than the largest file you want to copy to the share, then unRAID already handles this. Once the free space falls below the Min Free Space value, unRAID starts writing directly to the array data drives.
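     As a purely illustrative example: if the largest files you copy to that share are around 20GB, setting Min Free Space to something comfortably above that (say 25GB) means that once the cache has less than 25GB free, new writes go straight to the array data drives, so a large file never starts on a cache disk that cannot hold it.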
  4. That is a nice set of icons! They combine both shape and color indications of what they are, which is a good thing. Just as importantly, I see they are free for commercial use.
  5. RobJ - great job! Great info so far, and once complete this will be an outstanding resource!!! I will be adding a link from myMain. I agree. I am hoping that the descriptions for reallocated sectors and pending sectors get added ASAP as these are the ones that are probably of most importance to unRAID users. I would think that some sort of colour coding to indicate ones that are of particular interest would help.
  6. Sounds correct! Assuming that you are running the pre_clear script from /boot, that is where the reports will be placed on completion. A suggestion is to run preclear_disk.sh -l before doing the actual preclear to get the list of devices that are available. It helps avoid any accidents.
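     For example (assuming the script lives in /boot as is usual, with /dev/sdX standing in for whatever device the -l listing shows as the one you intend to clear):
         cd /boot
         ./preclear_disk.sh -l          # list the candidate devices first
         ./preclear_disk.sh /dev/sdX    # then start the pre-clear on the chosen device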
  7. Never seen (or heard) of anything like this. cache_dirs is a read-only process, so it never creates files/folders. Are you running any other logins? If so, it is more likely that one of those is the culprit.
  8. You do not want to use a disk in unRAID that is showing any pending sectors. The good sign is that the number went down during the pre-clear without forcing any re-allocation. More worrying is why they were not all cleared! I would suggest you put it through another pre_clear cycle to see what happens. If you cannot clear the pending sectors then you need to consider an RMA.
  9. There is no supported cache_dirs plugin available at the moment. The one that was available at one point was withdrawn at the request of Joe L. (the author of cache_dirs).
  10. Yes. There is nothing on a pre-cleared disk that ties it to a particular system. I have heard of people having a separate system specifically targeted at allowing them to run pre-clears on disks without disturbing the system running the production servers.
  11. That will almost certainly be a permissions issue! The easiest way to rectify the permissions is to log in via telnet and then run the 'newperms' command, providing the path as a parameter.
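     For example, from a telnet session (the share path here is purely illustrative - substitute the folder that is giving trouble):
         newperms /mnt/user/Movies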
  12. Have you tried booting in safe mode to ensure no plugins are being loaded?
  13. The error message you are getting suggests something has overwritten the libc library. That is why I am suggesting you should try with no plugins loaded.
  14. I would comment out the line that tries to load from /boot/packages. I would guess that you are loading a package that is not 64-bit compatible and is messing up the system.
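     A sketch of what that edit might look like in the go file (/boot/config/go) - the exact package-loading line varies from setup to setup, so treat this purely as an illustration:
         # installpkg /boot/packages/*.tgz    # commented out while testing without extra packages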
  15. I wonder if it is something simpler - such as the fact that 64-bit systems do not have the low-memory constraint, so they can keep more entries cached in memory because Linux is not being forced to push entries out of the cache. Does anyone have any idea of how one might investigate this?
  16. Yes, I did see that suggestion earlier in this thread and had done that in the script before adding it to the go file and rebooting; it still caused a nasty OOM and crash. I take it you are running it now? If so, what flags are you using, and are you running this via the go file or manually starting it after unRAID has booted? I start mine from the go file using a command line of the form
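     (the flags below are only an illustration of a typical go-file invocation, not necessarily the exact options in use - the excluded share name and scan depth are made-up examples)
         /boot/cache_dirs -w -e "Backups" -d 9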
  17. I found that I had to comment out the ulimit line to get cache_dirs to run reliably in the 64-bit environment.
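     In other words, edit your copy of the cache_dirs script and put a # in front of whatever ulimit line it contains, e.g. something along the lines of:
         # ulimit -v 50000    # value shown is only illustrative - comment out the ulimit line as it appears in your copy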
  18. There are options that can be passed to the pre-clear script to run just parts of the pre-clear process, which can save time if you know roughly where it got to. However, running from the start will not do any damage, so it might be the easiest thing to do.
  19. The /dev/sdd device is looking problematical, as the number of sectors pending re-allocation seems to be continually going up. You do not want to use in unRAID any drive that does not finish the pre-clear with 0 pending sectors. The only anomaly is that no sectors are actually being re-allocated, so it is always possible that there is an external factor causing the pending sectors, such as bad cabling at the power/SATA level. The large number of sectors pending re-allocation is a good enough reason to RMA the drive. The other two drives look fine. The key attributes relating to re-allocated sectors are all 0, which is what you want.
  20. The pre-clear tries to put the drive through the same sort of load as is involved in parity rebuilds and/or normal use. If at the end of that there are no signs of any problems, you have reasonable confidence that at this point the drive is showing no problems. That is actually better than you would have for new drives - a significant proportion of those fail when put through their first stress test via pre-clear. I actually have a caddy I can plug in externally via eSATA or USB on demand to do this. You could also use another system, as there is no requirement that the pre-clear be run on the system where the drive is to be used.
  21. I handle this by having a spare drive that has been previously put through a thorough pre-clear to check it out. Using this disk I go through the process of rebuilding the failed drive onto this spare as its replacement. If the rebuild fails for any reason I still have the 'red-balled' disk untouched to attempt data recovery. If the rebuild works I then put the disk that had 'red-balled' through a thorough pre-clear test. I use the results of this to decide if the disk is OK or whether the drive really needs replacing. If the drive appears OK it becomes my new spare disk.
  22. The speed will vary depending on where on the disks the heads are positioned. Speeds will be fastest at the outer edge and progressively slow down as you move inwards. My rule of thumb is about 10 hours per TB for modern drives, which means pre-clearing a 4TB drive takes about 40 hours. This can vary depending on your system specs, in particular how the disks are connected, as controller throughput is an important factor.
  23. I think your statement is too broad! I think the issue is not that a parity check is being started, but that it is a correcting parity check, which can result in writes to the parity disk. If an unclean shutdown is detected (whatever the reason) and the parity check was a non-correcting one, then most users would not notice anything much happening if the shutdown was caused by something like a power failure, but their array would still be checked for integrity. What I do agree with is that a correcting parity check should not be auto-started outside user control. As has been mentioned, this can lead to data loss under certain (albeit rare) circumstances. You also do not want an automatic parity check if any disk has been red-balled due to a write failure, for the same reasons.
  24. If you run the script in a console/telnet session without any parameters it will list all the command line options that are available.
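     For example (assuming the pre-clear script in its usual /boot location):
         cd /boot
         ./preclear_disk.sh
     With no device or options given it simply prints the usage/help text rather than touching any disk.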
  25. If it finished successfully then there should be a report written. As the last phase can easily take about 20 hours on a 4TB drive I would think there is a good chance it did not complete.