Posts posted by itimpi

  1. 1 hour ago, RobertP said:

    There was no mention in the msg if the errors were fixed or not.  Yes, I DO have the box checked to fix errors - but my point is this msg gave no indication if fixes had been applied.

    FYI: if the Parity Check Tuning plugin is installed, it will record in the history whether a check was correcting or non-correcting.  It will not change the message you mention that is displayed at the end of the check, but it can still be useful to be able to look this up.

  2. 1 hour ago, Rendo said:

    Hi itimpi,


    First of all, thanks for the plugin! Happy user for some time now :)

    Having an issue with the 'High disk temperatures can shutdown server' setting.

    I can't seem to disable this. After changing to 'Disabled' and hitting 'Apply', the value changes back to 'Enabled'.

    Haven't tried the - uninstall - reinstall fix, since you might need some debug logging I am more than happy to supply.


    Thanks again

    I have been able to reproduce this.  Looking at the underlying configuration file, the value IS being changed despite the settings page saying it is enabled, so this may just be a display issue.  I will need to work out why this is happening and issue a fix.


    EDIT:  The value in the GUI is (incorrectly) displaying the setting associated with the "Pause and Resume array operations if disks overheat" setting.  I can confirm, however, that changing the setting and hitting Apply DOES change the stored setting.



  3. I can confirm that debug level logging is active when it should not be. 

    There is another buglet: for existing users of the plugin, the default for the new Shutdown option is enabled rather than disabled (it is correct for new users).  You might want to check this, as that logging shows the option is active and you may not have intended this.


    I will get both of these issues addressed and an update issued later today.

  4. 7 minutes ago, ClunkClunk said:

    Suggestion - have this default to "No" rather than "Yes."

    Reason - grumpy text message from wife asking why Plex wasn't working :) I do like the feature though! I just had to tweak some settings.

    I thought it did :) (and was certainly meant to).   I’ll check this out as you are almost certainly right if it did not default correctly for you.

  5. It looks as if disk2 has dropped offline (so there is no SMART information for that drive in the diagnostics), and in that case there is no point in trying to continue the rebuild of disk3.


    Doing a power cycle on the server may be required to bring it back online.  As always, you should check the SATA/power cabling to the drive.  New diagnostics after the reboot should allow us to check that it looks OK.

  6. 1 minute ago, Rysz said:

    It was the empty folder, thanks a lot for your help. Weird that an empty folder triggers "some or all files unprotected", one would think an empty user-share folder on the cache is no longer considered as "unprotected" when the last file was deleted from it. 🤨

    I think the point is that if you were doing things over the network this situation would never arise, so it has not been catered for (perhaps for efficiency reasons).

  7. 4 minutes ago, Rysz said:

    I've just been able to replicate this behaviour on my Windows machine as well.

    Putting a file on a cache-enabled user-share puts the file on the cache drive where it belongs.

    The warning sign shows up next to the user-share as there is an unprotected file (in the cache) - this is correct.


    When deleting the file from the user-share (using Windows Explorer, accessing the Samba share) the protection status is not refreshed.

    The warning sign next to the user-share remains until the mover is triggered manually and only then reverts back to green.

    Despite no unprotected file being there and the mover not moving anything (confirmed in logs) the protection status goes back to green.


    So a new file triggers a protection status refresh but a deletion does not, weird.



    I cannot replicate this.  The only way I got the symptoms you describe was by deleting the file but not the folder on the cache drive that corresponded to the share name.  Running mover then removed this (empty) folder and the status went back to green.

  8. Just now, Rysz said:

    I have deleted the file and still get the yellow warning sign - any way to make the warning sign go away after this?

    Or did I just break something? 😞 

    It should auto-correct.  The fact it has not suggests you still have a file or folder in the wrong location.  Did you remember to also delete the containing folder and not just the file?


    You should be able to find the culprit by going to the Shares tab in the GUI and clicking the 'folder' location against the share.  That will show which files/folders are on which drive, and you can drill down if required to find the culprit.
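    If you prefer the command line, the same check can be done by listing the share's folder on each mount directly.  A rough sketch (the share and file names are made up, and a temp-dir mock stands in for the real /mnt mounts so the commands can run anywhere):

```shell
# Sketch only: on a real Unraid server you would list /mnt/cache/<share>
# and /mnt/disk*/<share> directly; here a mock layout under a temp dir
# stands in for those mounts ("MyShare" and "leftover.bin" are made up).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/cache/MyShare" "$ROOT/disk1/MyShare"
touch "$ROOT/cache/MyShare/leftover.bin"
# Equivalent on Unraid: find /mnt/cache/MyShare /mnt/disk*/MyShare -type f
found=$(find "$ROOT/cache" "$ROOT"/disk* -type f)
echo "$found"
```

    Anything listed under the cache path is what is holding the share in the "unprotected" state.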

  9. Just now, Rysz said:

    After moving the file with "mv" (command line), however, I checked under Shares and while the file is in the right user-share it says it is on the cache still.

    This is why it has not worked :( 


    You have encountered a side-effect of the way the mv command operates under Linux.  It first tries to do a simple rename (which is fast), and only if that fails does it do a copy/delete operation.  In this case the rename has worked (the Linux level is not aware of user shares), so the file has stayed on the drive.


    To avoid this happening you need to either do an explicit copy/delete or set the target share to Use Cache=Yes so that mover later moves the file to the array.


    You would not have encountered this issue if you had done it over the network rather than internally within the server.
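    The explicit copy/delete workaround can be sketched as follows (the paths are placeholders; temp directories stand in for the real share paths under /mnt/user):

```shell
# Sketch of the explicit copy/delete workaround; temp dirs stand in for
# the real share paths (names here are hypothetical).
base=$(mktemp -d)
mkdir -p "$base/share-a" "$base/share-b"
echo data > "$base/share-a/file.bin"
# mv here would just rename (same mount point) and leave the data where
# it already is; copying and then deleting forces the data to be
# rewritten, which on Unraid goes through the user-share layer and so
# honours the target share's cache settings:
cp "$base/share-a/file.bin" "$base/share-b/file.bin" && rm "$base/share-a/file.bin"
```

    The trade-off is that the copy takes real time proportional to the file size, whereas a rename is near-instant.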

  10. The file should not exist if it has really been deleted, as deletion happens in real time.

    How did you check that the file does not exist in the share?  If it is on the disk it should show up when you view it via the GUI, as the share is simply an alternative view of the disk contents.


    I think you are going to have to manually delete the file, but it seems a good idea to try to work out why this happened in the first place.  The fact you were building parity at the same time should be irrelevant (except for the performance hit it introduces).  Parity is not aware of files (or file systems) as it works purely at the disk sector level.


    No idea if it will help but posting your system diagnostics zip file (obtained via Tools->Diagnostics) might be worthwhile to see if anyone can spot something.

  11. 1 minute ago, TexasUnraid said:

    I didn't think the parity tuning plugin was an issue, just found it odd that message was spammed just before the issue as I had not seen that happen before.

    OK, but you only got those messages because you had the debug logging level active.  If that had not been the case, none of the messages would have appeared.


    In terms of the array behaviour you want, this is something way beyond what this plugin can achieve, and it may even be beyond what Limetech could achieve in any realistic manner.

  12. You should have been notified that the disk had been disabled, as long as you have notifications enabled - did this happen?

    Just an FYI: the Parity Tuning plugin will not directly cause an issue in such a scenario, as it will only attempt to pause or resume an operation that was already in progress.  In fact, in the log snippet posted it was idling and taking no action.


    Thinking about it, I could consider adding a feature to the parity tuning plugin where it halts any array operation if an array disk becomes disabled while the operation is running.  I would be interested to hear if anyone thinks this might be of use.  If so, should the operation be cancelled, or merely paused indefinitely so that the user can cancel it?  Feedback is welcomed.  This would be quite easy to implement once I had a clear idea of exactly what is wanted.

  13. Many people use cache disks/pools purely for application purposes and do not bother with the original use of caching writes for files that end up on the array.


    The 6.9.0 release supports multiple cache pools, and how each pool is used is configured by the user, so you have complete flexibility.

  14. I think most people take the easy way out and simply change the share to Use Cache=Yes, letting mover handle getting the file onto the array when it runs at a later point in time.  A 'benefit' of the mv behaviour you describe is that, from a user perspective, it completes almost instantly, whereas a copy/delete takes much longer; the user does not see the time mover later takes to get the file onto the array, as that typically happens outside prime time.


    You DO get the behaviour you want if it is done by accessing the shares over the network - it is only moving files locally from within the server that exhibits this behaviour.


  15. This is a by-product of the way that the underlying Linux system implements move.  It first tries to do a 'rename' if it thinks source and target are on the same mount point, and only if that fails does it do a copy/delete.  In this case both appear to Linux to be under '/mnt/user', so it tries the rename, which works, and the file is left on the cache.  In such a case you either need to set the target share to Use Cache=Yes so that mover later moves it to the array, or do an explicit copy/delete yourself.
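    The rename behaviour is easy to verify for yourself: a file moved within one mount point keeps its inode number, showing that no data was actually copied.  A minimal, self-contained sketch using throwaway temp paths:

```shell
# Demonstrate that mv within a single mount point is a rename, not a
# copy/delete: the file's inode number is unchanged by the move.
d=$(mktemp -d)
echo x > "$d/a"
ino_before=$(ls -i "$d/a" | awk '{print $1}')
mv "$d/a" "$d/b"          # same filesystem -> rename, data not copied
ino_after=$(ls -i "$d/b" | awk '{print $1}')
echo "$ino_before $ino_after"
```

    On Unraid, /mnt/user looks like one mount point to mv, which is exactly why a local move between shares can leave the data on the cache.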