Everything posted by itimpi

  1. Not sure if there is a way of getting progress when adding a drive via the GUI. Note that the GUI way is functionally different in operation to the preclear script. All it does is write zeroes to the drive so it can be added without breaking parity. It has no concept of passes, or of reading from the drive to check it is good.
  2. Why do I need to add it quickly? I ask as I am not ready to add it as I want to power it down to install the 500GB WD Black drive and do another preclear. If you have pre-cleared a disk, then when you add it to a parity-protected array you only have to take the array down for a minute or so while you stop the array, add the disk, and then restart the array. Since a pre-cleared disk has been zeroised, unRAID does not need to take any action to keep parity valid. If you have not pre-cleared the disk, then the array will be offline while unRAID zeroises the disk itself (needed to keep parity valid), which can take many hours (with the actual time depending on disk size).
  3. Just a thought - those times sound remarkably fast for drives of the size you mention. Do you realize that there are 3 phases per pass, and that the first one is only about 25-30% of the elapsed time for all three phases? As to how many passes: as these are previously used drives and you do not suspect any problems, one pass should be fine, since you are not trying for the initial 'burn-in' test to detect early-life failures. However, look carefully at the final results to check that no errors are indicated and that there are no pending sectors (or a large number of reallocated sectors).
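As a quick way to apply that final-results check, the relevant attributes can be filtered out of the SMART attribute table. A minimal sketch, using sample lines laid out the way `smartctl -A` typically prints them (the values here are invented for illustration):

```shell
# Sample rows in the usual 'smartctl -A' attribute-table layout (invented values)
cat > /tmp/smart_sample.txt <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
EOF

# Print just the attribute name and its raw count - a non-zero pending-sector
# count (or a large reallocated count) deserves a closer look
awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2, $NF}' /tmp/smart_sample.txt
```

Here the filter would flag the 8 pending sectors on the sample drive.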
  4. At one level I guess you are correct. However there is a vocal set of users who have been complaining that the current GUI is outdated and needs updating to a more modern look. I was thinking that changing the default for v6 to black would make it more obvious to people that this release is something new. It might encourage them to go looking for the new features rather than assuming that it is just 'more of the same'. On a side note, are other colored themes easy to implement for those who like things to be really flamboyant?
  5. Why? It would make it much clearer that there has been a major GUI overhaul. It can always be changed back to white for those who prefer it. Maybe changing between these themes is something that should be made particularly easy and obvious (e.g. an option on the dashboard).
  6. The default for the dynamix GUI in beta12 appears to be the 'white' theme. Would it not be better to default to the 'black' one as it looks much slicker (at least in my opinion).
  7. I have v6 beta 12 installed, and have just added a parity disk and started the sync. I then left the GUI positioned on the dashboard. I noticed that the status line showed that parity sync was in progress and gave a percentage. However the percentage was not updating unless I actively did something in the GUI (even though I have not selected the option to disable updates during parity sync on the Display settings). I think either the GUI should periodically update, or the status should simply read something like "in progress" rather than giving an (incorrect) percentage.
  8. The first disk was reported as successfully pre-cleared as no errors occurred during the read/write process. However, once you start getting SMART reports indicating imminent failure I would not trust the disk. In my experience, if SMART indicates imminent failure it is normally true. Unfortunately the converse is not always true - you can get a disk go bad without warnings at the SMART level. The second disk looks fine.
  9. I have precleared several disks of 3-4TB with v1.13 - is it a big problem? Did that version not completely test disks larger than 2.2TB? I believe that earlier versions tested the whole of the disk. The issue was that on 64-bit unRAID systems the pre-clear signature was not being written correctly, so when you tried to add the disk to an existing array, unRAID still thought it needed to be cleared.
  10. The error message is saying that what was written to the disk is not what was found when the disk was read. As such the disk is unreliable and should not be used in unRAID. A bit strange, though, that nothing appears to be showing up in the SMART report.
  11. Are you using a viewer/editor that understands Linux end-of-lines?
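For anyone wanting to check, the end-of-line style of a file can be confirmed from the command line. A small self-contained example:

```shell
# Create a sample file with Windows (CRLF) line endings
printf 'line one\r\nline two\r\n' > /tmp/crlf_sample.txt

# Count the lines that carry a stray carriage return
grep -c $'\r' /tmp/crlf_sample.txt   # prints 2

# Strip the carriage returns to get Linux (LF-only) endings
tr -d '\r' < /tmp/crlf_sample.txt > /tmp/lf_sample.txt
grep -c $'\r' /tmp/lf_sample.txt     # prints 0
```

If a Windows editor shows the whole file on one line, the carriage returns are usually the culprit.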
  12. You should be able to use the 'lsof' command to see what files are open on the disks.
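A self-contained illustration of the idea (assuming `lsof` is installed; against a real array you would point it at a mount such as `/mnt/disk1` instead of a temp file):

```shell
# Hold a file open from a background process, then ask lsof who has it open
tmpfile=$(mktemp)
tail -f "$tmpfile" &      # tail keeps an open handle on the file
tailpid=$!
sleep 1

lsof "$tmpfile"           # lists the tail process holding the file open

kill "$tailpid"
rm -f "$tmpfile"
```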
  13. Another part of the SMART report says that power on hours are 10951, so those error reports are quite a while ago.
  14. Pre-clearing does not affect rebuild time. What it does is provide a confidence check that the disk is in a good condition before you commit your data to it. The one time when pre-clearing DOES save time is when you are adding a new disk to an existing parity-protected array. In such a scenario it avoids the array being offline for a significant length of time while unRAID does the equivalent of the preclear script.
  15. I am often like that as well. It might be worth trying the switches I mentioned, as they show other ways you may have messed things up. They did on my system.
  16. Thanks for the confirmation that the fix worked for you. I would be interested to know if you have tried the -f/F and -z/Z options and whether they proved of use?
  17. I have updated the first post with v1.3 of the script and updated the usage description appropriately. This should fix the issues that a number of users have encountered with spaces in share names. It also adds a few new features compared to the previously posted version:

      Version 1.2 (13 Sep 2014): Added the -D option to check an extra disk.
      Version 1.3 (01 Oct 2014): Added -f and -F options to list empty (duplicated) directories. Added -z and -Z options to list zero-length (duplicated) files. Fix: allow for shares that have spaces in their names.

      The options to check for empty folders and zero-length files can be quite useful in identifying issues that might have been created if any errors were made when copying files between disks (they found a few on my own system). Please feel free to ask any questions, provide feedback on the current version, or make suggestions for improvement.
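The underlying checks those options perform can be approximated with `find` (this is an illustration of the idea, not the script's actual code):

```shell
# Build a tiny tree with one empty directory and one zero-length file
root=$(mktemp -d)
mkdir -p "$root/Movies/AUDIO_TS"            # empty directory
touch "$root/Movies/broken copy.mkv"        # zero-length file
printf 'data' > "$root/Movies/good.mkv"     # normal file

# Empty directories - the kind of thing -f/-F report
find "$root" -type d -empty

# Zero-length files - the kind of thing -z/-Z report
find "$root" -type f -empty

rm -rf "$root"
```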
  18. Yep. I found the line that stopped the proposed workaround (some missing quotes). I had picked up all the cases where spaces occurred in file names, but not in share names. I have now added a share with spaces to my server so that case gets tested next time.
  19. Thanks - that shows the problem! At the moment the script is not handling correctly user shares with spaces in their names. As can be seen from the verbose output you posted earlier, it thinks there are two shares "HD" and "Movies" whereas there should only be one share of "HD Movies". That would also explain why it completed so quickly, as it could not find any files belonging to the two shares it was looking for. Now that I know what the problem is, it should be easy to fix for the next release. Unexpected spaces (and other special characters like quotes) are notorious for causing problems in shell scripts. It is possible that using -i "HD Movies" as a parameter option to explicitly specify the share should work as a temporary workaround (rather than letting the script derive the share list automatically), but I am not sure.
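The word-splitting bug described above is easy to reproduce. A minimal demonstration of why a missing pair of quotes turns "HD Movies" into two apparent shares:

```shell
# A share name containing a space
share="HD Movies"

# Unquoted expansion is split on the space: the shell sees two words
set -- $share
echo "unquoted: $# word(s)"    # prints: unquoted: 2 word(s)

# Quoting the expansion keeps the name intact as a single word
set -- "$share"
echo "quoted: $# word(s)"      # prints: quoted: 1 word(s)
```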
  20. That is strange - with the -v option active you should be shown for each disk any share that is found on that as it is checked. The output you show suggests that nothing was found. Can you perhaps try running the command ls -ld /mnt/disk*/* to check that there are in fact folders on the disk corresponding to the share names?
  21. The first error is cosmetic and can be ignored if your parameters are valid (it is an error in parameter validation). Not sure what might cause the second one. Can you give me an example of the command line used so I can see if that is relevant. My suspicion is that you gave an incorrect parameter and because of the first error did not get warned about it correctly and the run aborted. I am testing a new version with some additional parameter options (added as a result of feedback) so now is a good time to fix any errors in the core code, and also improve validation of any parameter options.
  22. The only way I can see that error being reported is if there were no files found (the current logic assumes this is not a possibility)! Can you try running with the -v option as that might give more information.
  23. At the moment I have added an option to my test version of the script that shows any empty folders (regardless of whether they are duplicates) as that was easy and quick. Interestingly enough it showed some unexpected empty folders on my own system. I now need to add some additional logic to check if they are duplicates of other non-empty directories, which is most likely to be the case where one wants to tidy up. On my system some of them were duplicates of non-empty folders and others were not (e.g. I have quite a few empty AUDIO_TS folders from DVD rips and this is valid). I am wondering if there is any value in having an option to report empty directories even when they are not duplicates - what do you think? I think I will have an option with this functionality that I am prepared to upload by sometime tomorrow.
  24. Yes - but what are the criteria for identifying these? Are they perhaps ones with no contents? On a normal unRAID system where users shares span multiple disks there are lots of duplicate folders. Listing all duplicates is unlikely to be of much use because if no criteria are applied there will be so many the problem ones are likely to be hard to spot.
  25. I deliberately did not do that as it is quite normal to have duplicate folders if your share settings allow for this. Having said that, such a change is definitely a possibility. I want to clearly understand the Use Case before looking at how something might be coded. Some questions that occur to me: Are you interested in duplicates even if they are allowed by the share settings? Are you interested primarily in folders with no files? I had assumed that when tidying up duplicate files you would also do any associated folders; however I can see that empty folders could easily be missed. Should this be done as a separate pass so that folders are reported in their own section of any report? Is there any other specific scenario you are thinking of?
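One possible interpretation of a duplicate empty folder - a directory that is empty on one disk but populated under the same relative path on another - can be sketched like this (a hypothetical disk layout under a temp directory, not the script's actual logic):

```shell
# Simulate two data disks holding the same user share
root=$(mktemp -d)
mkdir -p "$root/disk1/Films/Oldies"
printf 'x' > "$root/disk1/Films/Oldies/a.avi"
mkdir -p "$root/disk2/Films/Oldies"   # duplicate folder, but empty on this disk

# Report directories that are empty on one disk yet non-empty under the
# same relative path on another disk (the tidy-up candidates)
for d in "$root"/disk*/; do
  find "$d" -type d -empty | while read -r dir; do
    rel=${dir#"$d"}
    for other in "$root"/disk*/; do
      [ "$other" = "$d" ] && continue
      if [ -d "$other$rel" ] && [ -n "$(ls -A "$other$rel")" ]; then
        echo "empty duplicate: $dir (non-empty on $other)"
      fi
    done
  done
done

rm -rf "$root"
```

With this layout only `disk2/Films/Oldies` is reported; the valid empty folders (like lone AUDIO_TS directories) that have no populated twin elsewhere stay silent.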