Everything posted by trurl

  1. You can go directly to the correct support thread for this container by clicking its icon in your Unraid webUI and selecting Support. Be sure to check out the information in the first post of that thread.
  2. Possibly this happened due to starting Docker and/or VM Manager before you had a pool for them to live on. Nothing can move or delete open files. You will have to disable Docker and VM Manager in Settings before you can clean this up. Dynamix File Manager can help with this.
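     Once Docker and VM Manager are disabled, a quick way to confirm nothing still has those files open before you move or delete them (a sketch, assuming the standard /mnt/user mount points; lsof +D recurses into the directory and can take a while on a large share):
       lsof +D /mnt/user/system
       lsof +D /mnt/user/domains
     No output means nothing is holding the files open and they are safe to clean up.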
  3. Unrelated: your domains and system shares have files on the array. Ideally, appdata, domains, and system would have all their files on a fast pool with nothing on the array, so Dockers/VMs will perform better and so array disks can spin down, since these files are always open and would otherwise keep array disks spun up. Looks like you had your docker.img set to 50G before you switched to a docker folder. Were you having problems filling it? The usual cause of filling docker.img is an application writing to a container path that isn't mapped. docker.img shouldn't be growing in normal use.
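     To see exactly what has landed on the array for those shares, something like this from the command line (a sketch, assuming the standard /mnt/diskN mount points; errors for disks that don't have the folders are discarded):
       du -sh /mnt/disk*/appdata /mnt/disk*/domains /mnt/disk*/system 2>/dev/null
     Anything listed there is what eventually needs to move back to the pool.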
  4. Unclean shutdown means that Unraid thinks the array was not stopped before the server was shut down or rebooted. Unclean shutdowns happen when Unraid can't write the array started/stopped status to flash, either because shutdown happens before the array has been stopped or because flash can't be written. It is always about how things are shut down, and it is detected on the next boot. The consequence is a parity check when the array is first started after booting. Changing things will not cause this, unless you change the timeouts related to shutdown, or unless it causes your flash drive to become read-only. The sticky at the top of this same subforum discusses those timeouts.
  5. Those are your docker templates and are needed to work with your containers from the webUI. Why didn't you copy all of your configuration? That is the usual way, and the reason you keep a flash backup. The config folder is all you need to get your configuration onto a new install. And why didn't you just upgrade from the webUI instead of starting over?
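     Getting your configuration onto a new install from a flash backup is just copying the config folder over, for example (a sketch; /path/to/flash-backup is wherever your backup lives, and /boot is where Unraid mounts the flash drive on a running server):
       cp -r /path/to/flash-backup/config/* /boot/config/
     The same thing can be done from a PC by copying the config folder onto the freshly prepared flash drive.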
  6. What do you get from command line with this? ls -lah /mnt/user/Guacamole
  7. What do you get with this? du -h -d 1 /mnt/disk2/lost+found
  8. Post new diagnostics after disabling Docker and VM Manager.
  9. https://docs.unraid.net/unraid-os/manual/changing-the-flash-device/
  10. If you want us to take a look, attach Diagnostics to your NEXT post in this thread.
  11. You should try with the same flash drive, unless it can't even be reformatted; that way you don't need to transfer the license. And if you have all of your config folder backed up, you might not have to redo anything. Just see if it boots, then try copying config back.
  12. Do you no longer have the share named 'guacamole'?
  13. Your syslog is being flooded with these:
        Mar 26 00:02:14 Goathead nginx: 2024/03/26 00:02:14 [error] 6500#6500: *56373 limiting requests, excess: 20.327 by zone "authlimit", client: 192.168.30.252, server: , request: "GET /login HTTP/1.1", host: "192.168.30.170"
      Any idea what that is about? Can you make it stop?
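     If you want to see how many of those there are and which clients are generating them, something like this against the syslog (a sketch, assuming the default /var/log/syslog location):
       grep 'limiting requests' /var/log/syslog | grep -o 'client: [0-9.]*' | sort | uniq -c | sort -rn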
  14. Doesn't look like anything was lost+found so that's good. Also doesn't look like you did this
  15. The filesystem check has created a lost+found share on disk2 for the things it couldn't figure out where they belong. What do you get from the command line with this? ls -lah /mnt/disk2/lost+found
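     If anything does show up in there, the names are just inode numbers, so identifying the recovered items by content is the next step (a sketch):
       file /mnt/disk2/lost+found/*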
  16. So are domains and system. Nothing can move or delete open files. Disable Docker and VM Manager in Settings until you get this cleaned up.
  17. Does it work if you use the default go file?
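     For reference, the stock go file as shipped with Unraid is essentially just this; anything beyond these lines is a customization worth suspecting:
       #!/bin/bash
       # Start the Management Utility
       /usr/local/sbin/emhttp &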
  18. Start the array in normal (not maintenance) mode and post new diagnostics.
  19. Lots we can't know without the array started. Start the array with that disk still unassigned and post new diagnostics.
  20. Since the only pool you have is unmountable, there is nothing mover can do. Might as well stop it. Disable Docker and VM Manager in Settings until you get cache working again. They have already created new files on the array that you will need to clean up later. Why do you have docker.img set to 100G?