Everything posted by itimpi

  1. You can safely update the plugin at any point without problems. The vast majority of the time the plugin is actually inactive, and even if you did the upgrade at the exact instant it was about to attempt a pause or resume, the worst that could happen is that particular pause/resume would not take effect.
  2. Are you running the 6.9.0 beta? Multi-language support (and thus Chinese support) is new in the 6.9.0 release and not available in the 6.8.3 release.
  3. If you use the 6.9.0 beta release it includes drivers for 2.5Gb NICs. The 6.8.3 release is based on an older Linux kernel and does not include the required driver.
  4. FYI: If the Parity Check Tuning plugin is installed then it will record in the history whether it was a correcting or non-correcting check. It will not change the message you mention that is displayed at the end of the check, but it can still be useful to be able to look this up.
  5. I have been able to reproduce this. Looking at the underlying configuration file, it IS being changed despite what the settings page shows, so this may just be a display issue. I will need to work out why this is happening and issue a fix. EDIT: The value in the GUI is (incorrectly) displaying the setting associated with the "Pause and Resume array operations if disks overheat" setting. I can confirm, however, that changing the setting and hitting Apply DOES change the stored setting.
  6. You are probably falling foul of the fact that preclear is not a standard part of Unraid but a 3rd-party utility, so the GUI does not take account of the fact that a Clear has been done outside the control of Unraid.
  7. Update: Fix Debug logging being active even when set to be disabled. Fix default (for existing plugin users) of new Shutdown option defaulting to enabled when it should be disabled.
  8. I can confirm that debug-level logging is active when it should not be. There is another buglet where, for existing users of the plugin, the default for the new Shutdown option is to have it enabled rather than disabled (it is correct for new users), so you might want to check this out, as the logging actually shows the option is active and you may not have intended this. I will get both of these issues addressed and an update issued later today.
  9. I thought it did (and was certainly meant to). I’ll check this out as you are almost certainly right if it did not default correctly for you.
  10. It looks as if disk2 has dropped offline (so there is no SMART information for that drive in the diagnostics), and in that case there is no point in trying to continue the rebuild of disk3. A power cycle of the server may be required to bring it back online. As always, you want to check the SATA/power cabling to the drive. New diagnostics after the reboot should allow us to check that it looks OK.
  11. This is covered here in the online documentation that you can access via the 'Manual' link at the bottom of the Unraid GUI.
  12. You need to give more information about the problems you are getting and the commands you think you have to run. Most people can shut down their server without problems as long as the hardware is not misbehaving.
  13. You would be able to do preclears OK. You will not be able to use VMs or dockers as the supporting services are not started until the array is started.
  14. I guess that might make it happen, as that would create the folder. The yellow status is triggered by there being a folder corresponding to the share name. The system does not look inside the folder to check its contents.
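A minimal sketch of the behaviour described above: the presence check only tests whether the directory itself exists, not whether it has any contents. The share name `MyShare` and the temp directory (standing in for `/mnt/cache` on a real server) are illustrative only.

```shell
# A throwaway directory stands in for /mnt/cache on a real Unraid system.
DEMO_ROOT=$(mktemp -d)
mkdir "$DEMO_ROOT/MyShare"          # empty folder matching the share name

# A presence check like the one described only tests for the directory:
if [ -d "$DEMO_ROOT/MyShare" ]; then
    SHARE_PRESENT=yes               # enough to trigger the yellow status
else
    SHARE_PRESENT=no
fi

rm -rf "$DEMO_ROOT"
```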
  15. I think the point is that if you were doing things over the network this situation would never arise, so it has not been catered for (perhaps for efficiency reasons).
  16. I cannot replicate this. The only way I got the symptoms you describe is if I deleted the file but not the folder on the cache drive that corresponded to the share name. Running mover then removed this (empty) folder and the status went back to green.
  17. It should auto-correct. The fact it has not suggests you still have a file or folder in the wrong location. Did you remember to also delete the containing folder and not just the file? You should be able to find the culprit by going to the Shares tab in the GUI and clicking the 'folder' location against the share. That will show which files/folders are on which drive, and you can drill down if required.
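The same per-drive check the GUI does can be sketched from the command line. On a real server the command would be something like `ls -d /mnt/disk*/ShareName /mnt/cache/ShareName`; below, a throwaway tree stands in for `/mnt`, and the share name `MyShare` and file `leftover.bin` are purely illustrative.

```shell
# Throwaway tree standing in for /mnt on a real Unraid server.
MNT=$(mktemp -d)
mkdir -p "$MNT/disk1/MyShare" "$MNT/disk2" "$MNT/cache/MyShare"
touch "$MNT/cache/MyShare/leftover.bin"   # the "culprit" left on the cache

# List every drive that still has a folder matching the share name:
FOUND=$(ls -d "$MNT"/*/MyShare 2>/dev/null)

rm -rf "$MNT"
```

Here `FOUND` would list both `disk1/MyShare` and `cache/MyShare`, showing exactly which drives still hold something under that share name.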
  18. This is why it has not worked. You have encountered a side-effect of the way the mv command operates under Linux. It first tries to do a simple rename (which is fast), and only if that fails does it do a copy/delete operation. In this case the rename has worked (the Linux level is not aware of user shares), so the file has stayed on the same drive. To avoid this happening you need to either do an explicit copy/delete or set the target share to Use Cache=Yes so that mover later moves the file to the array. You would not have encountered this issue if you had done it over the network rather than internally within the server.
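The rename-vs-copy distinction above can be demonstrated in a short shell session: within one filesystem, mv is a rename, so the data blocks never move, and the unchanged inode number proves it. Paths and filenames here are illustrative stand-ins for the cache-drive layout.

```shell
# Throwaway directory standing in for a share folder on the cache drive.
WORK=$(mktemp -d)
mkdir -p "$WORK/Share"
echo data > "$WORK/Share/file.bin"

INODE_BEFORE=$(ls -i "$WORK/Share/file.bin" | awk '{print $1}')
mv "$WORK/Share/file.bin" "$WORK/Share/renamed.bin"   # same filesystem: a rename
INODE_AFTER=$(ls -i "$WORK/Share/renamed.bin" | awk '{print $1}')

# INODE_BEFORE and INODE_AFTER are identical: no data was copied,
# which is why the file "stays" on the original drive.
rm -rf "$WORK"
```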
  19. I must admit I could not spot anything in the diagnostics to explain what happened, except that as far as parity is concerned the answer is 0.
  20. Unraid is quite happy to boot without a GPU as long as the motherboard BIOS will let it do so. Many users run their Unraid systems headless and in fact until v6 this was the only way to run Unraid.
  21. The file should not exist if it has really been deleted, as that happens in real time. How did you check that the file does not exist in the share? It should show up if you view it via the GUI if it is on the disk, as the share is simply an alternative view of the disk contents. I think you are going to have to manually delete the file, but it seems a good idea to try and work out why this happened in the first place. The fact you were building parity at the same time should be irrelevant (except for the performance hit it introduces). Parity is not aware of files (or file systems) as it works purely at the disk sector level. No idea if it will help, but posting your system diagnostics zip file (obtained via Tools->Diagnostics) might be worthwhile to see if anyone can spot something.
  22. Ok, but you only got those messages because you had the debug logging level active. If that had not been the case none of the messages would have appeared. In terms of the array behaviour you want, this is something that is way beyond what this plugin can achieve, and it may even be beyond what Limetech could achieve in any realistic manner.
  23. You should have been notified that the disk had been disabled as long as you have notifications enabled - did this happen? Just an FYI: the Parity Tuning plugin will not directly cause an issue in such a scenario as happened, as it will only attempt to pause or resume an operation that was already in progress. In fact, in the log snippet posted it was idling and taking no action. Thinking about it, I could consider adding a feature to the Parity Tuning plugin where it will halt any array operation if an array disk becomes disabled while the operation is running. I would be interested to hear if anyone thinks this might be of use. If so, should the operation be cancelled or merely paused indefinitely so that the user can cancel it? Feedback is welcomed. This would be quite easy to implement once I had a clear idea of exactly what is wanted.
  24. Many people use cache disks/pools purely for application purposes and do not bother with the original use of caching writes for files that end up on the array. The 6.9.0 release supports multiple cache pools, and how each pool is to be used is configured by the user, so you have complete flexibility.
  25. I think most people take the easy way out and simply change the share to Use Cache=Yes and let mover handle getting the file onto the array when it runs at a later point in time. A 'benefit' of the mv behaviour you describe is that from a user perspective it completes almost instantly, whereas a copy/delete takes much longer; the user does not see the time that mover later takes to get the file onto the array, as it typically happens outside prime time. You DO get the behaviour you want if it is done by accessing the shares over the network - it is only moving them locally from within the server that exhibits this behaviour.
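The explicit copy/delete alternative mentioned above can be sketched as follows. The paths are illustrative stand-ins; on a real server they would be something like `/mnt/cache/Share/...` and `/mnt/disk1/Share/...`.

```shell
# Throwaway tree standing in for the cache drive and an array drive.
WORK=$(mktemp -d)
mkdir -p "$WORK/cache/Share" "$WORK/disk1/Share"
echo data > "$WORK/cache/Share/file.bin"

# cp writes a genuine second copy on the target, then rm removes the
# original - unlike a bare mv, the data really lands on the destination:
cp "$WORK/cache/Share/file.bin" "$WORK/disk1/Share/file.bin" && \
    rm "$WORK/cache/Share/file.bin"

MOVED=no
if [ -f "$WORK/disk1/Share/file.bin" ] && [ ! -f "$WORK/cache/Share/file.bin" ]; then
    MOVED=yes
fi
rm -rf "$WORK"
```

The trade-off described in the post applies: this takes as long as a full copy, which is why many prefer to let mover do the work later.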