Everything posted by itimpi
-
After reading this post and having a moment of lucidity I think it's dawned on me how this can happen. If new files are being written to a cached share, and those files are beyond the 'split level' in the directory hierarchy, the code that prevents splitting kicks in and forces new files to stay on the device where the parent exists, in this case the cache. The free space is not taken into consideration in this case - but with the cache disk it should be - that's the bug. Sounds like this is your scenario? And I was just about to release -beta8.... Sounds like that could be it. Get b8 out and we can look at this for b9. Nice one. Looks as though the fix was easy once the cause was identified, as it appears to have made beta 8.
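To make the suspected bug concrete, here is a rough sketch of the allocation decision being described. This is purely illustrative shell, not actual unRAID code; the function name, parameters, and units are all made up:

```shell
#!/bin/sh
# Hypothetical sketch of the bug described above: when the directory depth
# exceeds the split level, the file is forced onto the disk holding its
# parent directory and the free-space test is never reached.
pick_disk_buggy() {
    depth=$1; split_level=$2; parent_disk=$3; free_kb=$4; min_free_kb=$5
    if [ "$depth" -gt "$split_level" ]; then
        echo "$parent_disk"      # the bug: free space on the cache is ignored here
    elif [ "$free_kb" -lt "$min_free_kb" ]; then
        echo "array"             # normal path: cache too full, bypass it
    else
        echo "cache"
    fi
}
```

With the file's parent directory on the cache and the cache nearly full, the first branch still returns "cache", which matches the behaviour reported above.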
-
I believe that there is a bug in the current beta 7 GUI code to do with recognizing suffixes when trying to set the min free space value. Not sure if that is relevant if the value is already set (or you set it manually via a text editor).
-
As long as you have Min Free Space set to be more than the largest file you want to copy to the share, then unRAID already handles this. Once the free space falls below the Min Free Space value, unRAID starts writing directly to the array data drives.
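The Min Free Space fallback can be sketched like this (illustrative shell only; the function name and units are assumptions, not unRAID's actual implementation):

```shell
#!/bin/sh
# Rough model of the behaviour described above: once cache free space drops
# below Min Free Space, new writes for the share go straight to the array.
pick_write_target() {
    free_kb=$1; min_free_kb=$2
    if [ "$free_kb" -lt "$min_free_kb" ]; then
        echo "array"    # cache below Min Free Space: bypass the cache
    else
        echo "cache"
    fi
}
```

This is also why Min Free Space should be larger than the biggest file you expect to write: the check happens before the write starts, so a file larger than the remaining space can otherwise still be sent to the cache.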
-
Section 508 Compliance (webGUI Indicator Support for Color Blind Users)
itimpi replied to SSD's topic in Unscheduled
That is a nice set of icons! They combine both shape and color indications of what they are, which is a good thing. Just as important, I see they are free for commercial use. -
Preclear.sh results - Questions about your results? Post them here.
itimpi replied to jbuszkie's topic in User Customizations
RobJ - great job! Great info so far, and once complete this will be an outstanding resource!!! I will be adding a link from myMain. I agree. I am hoping that the descriptions for reallocated sectors and pending sectors get added ASAP as these are the ones that are probably of most importance to unRAID users. I would think that some sort of colour coding to indicate ones that are of particular interest would help. -
Preclear.sh results - Questions about your results? Post them here.
itimpi replied to jbuszkie's topic in User Customizations
Sounds correct! Assuming that you are running the preclear script from /boot, that is where the reports will be placed on completion. A suggestion is to run preclear_disk.sh -l before doing the actual preclear to get the list of devices that are available. Helps avoid any accidents. -
Preclear.sh results - Questions about your results? Post them here.
itimpi replied to jbuszkie's topic in User Customizations
You do not want to use a disk in unRAID that is showing any Pending sectors. The good sign is that the number went down during the pre-clear without forcing any re-allocation. More worrying is why they were not all cleared! I would suggest you put it through another pre-clear cycle to see what happens. If you cannot clear the Pending Sectors then you need to consider an RMA. -
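If you want to watch this attribute between pre-clear cycles, smartctl (from smartmontools) reports it. The snippet below parses a made-up sample of its attribute table rather than querying a real drive:

```shell
#!/bin/sh
# On a live system you would run:  smartctl -A /dev/sdX
# The sample output below is invented for illustration only.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8'
pending=$(printf '%s\n' "$sample" | awk '$2 == "Current_Pending_Sector" {print $NF}')
echo "Pending sectors: $pending"
```

A healthy drive should show 0 here (and for Reallocated_Sector_Ct) after a pre-clear completes.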
That will almost certainly be a permissions issue! The easiest way to rectify the permissions is to log in via telnet and then run the 'newperms' command, providing the path as a parameter.
-
Yes, I did see that suggestion earlier in this thread and had done that in the script before adding it to the go file and rebooting; it still caused a nasty OOM and crash. I take it you are running it now? If so, what flags are you using, and are you running this via the go file or manually starting it after unRAID has booted? I start mine from the go file using a command line of the form
-
Preclear.sh results - Questions about your results? Post them here.
itimpi replied to jbuszkie's topic in User Customizations
The /dev/sdd device is looking problematical as the number of 'sectors pending reallocation' seems to be continually going up. You do not want to use in unRAID any drive that does not finish the pre-clear with 0 pending sectors. The only anomaly is that no sectors are actually being re-allocated, so it is always possible that there is an external factor causing the pending sectors, such as bad cabling at the power/SATA level. The large number of sectors pending reallocation is a good enough reason to RMA the drive. The other two drives look fine. The key attributes relating to reallocated sectors are all 0, which is what you want. -
[SOLVED] Seagate with huge Seek Error Rate, RMA?
itimpi replied to CaptainSpalding's topic in General Support (V5 and Older)
The pre-clear tries to put the drive through the same sort of load as is involved in parity rebuilds and/or normal use. If at the end of that there are no signs of any problems, you have reasonable confidence that at this point the drive is healthy. That is actually better than you would have for new drives - a significant proportion of those fail when put through their first stress test via pre-clear. I actually have a caddy I can plug in externally via eSATA or USB on demand to do this. You could also use another system, as there is no requirement that the pre-clear run on the system where the drive is to be used. -
[SOLVED] Seagate with huge Seek Error Rate, RMA?
itimpi replied to CaptainSpalding's topic in General Support (V5 and Older)
I handle this by having a spare drive that has been previously put through a thorough pre-clear to check it out. Using this disk I go through the process of rebuilding the failed drive onto this spare as its replacement. If the rebuild fails for any reason I still have the 'red-balled' disk untouched to attempt data recovery. If the rebuild works I then put the disk that had 'red-balled' through a thorough pre-clear test. I use the results of this to decide if the disk is OK or whether the drive really needs replacing. If the drive appears OK it becomes my new spare disk. -
Preclear.sh results - Questions about your results? Post them here.
itimpi replied to jbuszkie's topic in User Customizations
The speed will vary depending on where on the disk the heads are positioned. Speeds will be fastest at the outer edge and progressively slow down as you move inwards. My rule of thumb is about 10 hours per TB for modern drives, which means pre-clearing a 4TB drive takes about 40 hours. This can vary depending on your system specs, in particular how the disks are connected, as controller throughput is an important factor. -
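The rule of thumb above is just multiplication; as a quick sanity check (the 10 hours/TB figure is the estimate from this post, not a measured constant):

```shell
#!/bin/sh
hours_per_tb=10      # rule-of-thumb estimate from the post above
drive_tb=4
est=$(( hours_per_tb * drive_tb ))
echo "Estimated pre-clear time: ${est} hours"   # 40 hours for a 4TB drive
```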
I think your statement is too broad! I think the issue is not that a parity check is being started, but that it is a correcting parity check, which can result in writes to the parity disk. If an unclean shutdown is detected (whatever the reason) and the parity check was a non-correcting one, then most users would not notice much happening if the shutdown was caused by something like a power failure, but their array would still be checked for integrity. What I do agree with is that a correcting parity check should not be auto-started outside user control. As has been mentioned, this can lead to data loss under certain (albeit rare) circumstances. You also do not want an automatic correcting parity check if any disk has been red-balled due to a write failure, for the same reasons.
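The policy being argued for could be summarised like this (purely illustrative shell; none of these names exist in unRAID):

```shell
#!/bin/sh
# Hypothetical decision table for what to auto-start after boot.
choose_parity_action() {
    unclean_shutdown=$1; disk_redballed=$2
    if [ "$disk_redballed" = "yes" ]; then
        echo "none"              # never auto-check with a failed disk
    elif [ "$unclean_shutdown" = "yes" ]; then
        echo "check-noncorrect"  # verify parity, but never write corrections
    else
        echo "none"              # clean shutdown: nothing to do
    fi
}
```

Under this policy, a correcting check would only ever be started explicitly by the user.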
-
Preclear.sh results - Questions about your results? Post them here.
itimpi replied to jbuszkie's topic in User Customizations
If you run the script in a console/telnet session without any parameters it will list all the command line options that are available. -
Preclear.sh results - Questions about your results? Post them here.
itimpi replied to jbuszkie's topic in User Customizations
If it finished successfully then there should be a report written. As the last phase can easily take about 20 hours on a 4TB drive I would think there is a good chance it did not complete.