Everything posted by itimpi

  1. I think the closest you will get is by running "plugin --help" from the CLI.
  2. Glad to hear that it is working, but that still leaves open the question of what caused /mnt/user to have the wrong permissions in the first place. It is not something that should ever occur in normal operation.
  3. Yes. In addition to all drives being unmounted at that point, the Docker and VM services will also have been shut down and the User Share sub-system stopped. Certain system-level services such as networking will still be running, but it should not matter if power is lost while these are running. I guess one thing that is still mounted is the flash drive, and whether suddenly losing power can cause a glitch in that area I do not know.
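     For example, from a terminal session on the server (a minimal sketch; the exact usage text printed can vary between Unraid releases, and the ls line is just an assumption about where installed plugin files are normally kept):

        # Show the usage/help text for the built-in plugin command
        plugin --help

        # Installed plugin definitions normally live on the flash drive
        ls /boot/config/plugins/*.plg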
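     If it does happen again, capturing the state before fixing it may help track down the cause. A minimal sketch, assuming the usual Unraid default of nobody:users ownership:

        # Record the ownership and permission bits on the user share mount point
        ls -ld /mnt/user
        # A healthy system normally reports something like:
        #   drwxrwxrwx ... nobody users ... /mnt/user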
  4. I would have thought that this was the state you are in when you tidily stop the array?
  5. I must admit I am not quite sure what state you are trying to end up in. I always think of Halt as being the same as power off.
  6. What is missing? /mnt/user is showing in the screenshot!
  7. I think Unraid may be getting confused by you trying to add a parity drive and a new data drive simultaneously. Have you tried doing them separately? If you do not already have a parity drive then try adding the data drive first and then the parity drive, as that will be faster.
  8. It is a known issue that using "Pause" gives incorrect values for the speed of a parity check. It might be worth pointing out that if you are using the Parity Check Tuning plugin to handle the pause/resume process, one of its capabilities is to correct the speed shown in the history so that it reflects what was actually achieved.
  9. There is a VERY large list of check conditions that have to be satisfied before one can be sure that it is safe to continue a parity check from the position which had previously been reached. Since Limetech are a little paranoid about changes that might have any risk of data loss I think it is this that is holding things up. As always it tends to be all the edge cases that cause the problems rather than the simple mainline one.
  10. This IS difficult. Until Limetech provide the underlying support there is not much that can be done.
  11. The normal reason for wanting the system share to be on the cache is to maximise performance of dockers and/or VMs. There are no shares on the flash drive (only configuration information).
  12. Many of those reporting this also seem to be using encryption together with btrfs so I wonder if it is the combination that matters?
  13. Seems to be working fine for me. Have you tried clearing your browser's cache?
  14. You might want to look into using the Parity Check Tuning plugin so that the check is split into increments that only run outside prime time to minimise the impact on end-users.
  15. FYI: The amount of data in the array should be irrelevant as far as elapsed time for the operation is concerned. The key factor is the size of the largest parity drive.
  16. As far as I know this has always been the behaviour. I think there is a feature request raised to look into changing this.
  17. You cannot set the execute bit for files that are on the flash drive (this was a security change made a few releases ago). The easiest way around this is to use the User Scripts plugin which will handle this for you.
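     If you just want to run an ad-hoc script rather than use the plugin, a generic shell workaround is to avoid needing the execute bit at all. A minimal sketch (myscript.sh is only a placeholder name):

        # Run the script via the interpreter - no execute bit required
        bash /boot/config/myscript.sh

        # Or copy it off the flash drive and make the copy executable
        cp /boot/config/myscript.sh /tmp/myscript.sh
        chmod +x /tmp/myscript.sh
        /tmp/myscript.sh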
  18. When you say you removed the plugin, do you mean that you reverted to a standard Unraid build? Merely removing the plugin would not, by itself, revert you to the standard Unraid build.
  19. You should post your system diagnostics zip file to see if anyone can spot a possible reason. As far as I know you are the only person who has reported anything like this so it must be something specific to your system.
  20. The complete diagnostics.zip file is always a good start.
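     It can be generated from Tools -> Diagnostics in the webGUI, or from a terminal session if the webGUI is unreachable. A minimal sketch (on current releases the zip is written to the logs folder on the flash drive):

        # Generate the diagnostics zip and check where it was written
        diagnostics
        ls -l /boot/logs/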
  21. You need to follow the procedure that has always applied to do this:
        • Stop the array
        • Unassign the disabled disk
        • Start the array so that the missing disk is registered
        • Stop the array
        • Reassign the disabled disk
        • Start the array to begin the rebuild
  22. Sounds as if you have at least one of the dockers misconfigured. You would need to give much more detail on how you have them configured for anyone to give sensible advice. You may also get useful information by clicking on the Container Size button on the Docker tab.
  23. If you move all the drives to a NEW array then the data is NOT cleared. It is when you add them as additional drives to an existing array that already has parity protection in place that their content is not preserved.