Comments posted by Ryonez

  1. So, for some behavioural updates.

    Noticed my RAM usage was at 68%, not the 60% it normally rests at. The server isn't really doing anything, so I figured it was from the two new containers, diskover and elasticsearch (mostly elasticsearch). So I shut that stack down.

    That caused docker to spike the CPU and RAM a bit.

    The containers turned off in about three seconds, and RAM usage dropped to 60%. Then docker started doing its thing:
    [screenshot: CPU and RAM usage spiking as docker works]

    RAM usage spiked up to 67% in this screenshot, and CPU usage spiked as well.

    After docker settles down from whatever it's doing:
    [screenshot: CPU and RAM usage after docker settles]

    This is from just two containers being turned off. It gets worse the more docker operations you perform.
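
    To put rough numbers on this, here's a minimal sketch (my own script, nothing official; the sampling window is arbitrary) that records overall memory use while stopping the two containers, the same way the spike above was observed:

    ```python
    #!/usr/bin/env python3
    # Sketch: sample system memory around "docker stop" to capture the
    # post-stop spike described above. The container names match this
    # thread; everything else is an assumption.
    import subprocess
    import time

    def mem_used_percent():
        # "Used" = MemTotal - MemAvailable, both read from /proc/meminfo (kB).
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key.strip()] = int(value.split()[0])
        return 100.0 * (info["MemTotal"] - info["MemAvailable"]) / info["MemTotal"]

    print(f"before stop: {mem_used_percent():.1f}%")
    subprocess.run(["docker", "stop", "diskover", "elasticsearch"], check=True)
    # Keep sampling for about a minute while docker "does its thing".
    for _ in range(12):
        time.sleep(5)
        print(f"now: {mem_used_percent():.1f}%")
    ```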

  2. 5 minutes ago, Vr2Io said:

    Diskover (indexing) may suddenly use up all the memory. Once OOM happens, a system crash is also to be expected. I run all my dockers in /tmp (RAM), even the CCTV recordings, and memory usage is really steady, so I haven't had trouble.


    Diskover just happened to be what I was trying out when the symptoms occurred yesterday. It was not present during the earlier times the symptoms occurred. After rebuilding docker yesterday I got diskover working and had it index the array with minimal memory usage (I don't think I saw diskover go above 50MB).


    8 minutes ago, Vr2Io said:

    Try the method below to identify which folder uses up memory; it's best to map it out to an SSD.


    I'm not really sure how this is meant to help with finding folders that use memory. It's actually suggesting moving log folders to RAM, which would increase RAM usage. If the logs are causing a blowout of data usage, that usage would go to the cache/docker location, not memory.
    It might be good to note that I use docker in the directory configuration, not the .img, so I am able to access those folders if needed.
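
    As a sketch of that distinction (the container names, image, and paths here are made up for the example), compare a tmpfs mount, which puts the log folder in RAM, with a bind mount onto the cache SSD:

    ```python
    #!/usr/bin/env python3
    # Sketch: the same log folder mounted two ways. Names and paths are
    # hypothetical; "alpine" is just a stand-in image.
    import subprocess

    # Variant A: logs in RAM (what the suggestion amounts to). Log growth
    # now consumes memory instead of disk.
    subprocess.run([
        "docker", "run", "-d", "--name", "app-logs-in-ram",
        "--tmpfs", "/config/logs:size=256m",
        "alpine", "sleep", "3600",
    ], check=True)

    # Variant B: logs bind-mounted to the cache/docker SSD. A log blowout
    # lands on disk, which is where it would go in my setup.
    subprocess.run([
        "docker", "run", "-d", "--name", "app-logs-on-ssd",
        "-v", "/mnt/cache/appdata/app/logs:/config/logs",
        "alpine", "sleep", "3600",
    ], check=True)
    ```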

  3. 13 minutes ago, Squid said:

     


    I don't believe this to be an issue with the docker containers' memory usage.

    For example, the three images I was working on today:

    1. lscr.io/linuxserver/diskover
    2. docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    3. alpine (elasticsearch-helper)


    diskover wasn't able to connect to elasticsearch, elasticsearch was crashing due to permission issues I was looking into, and elasticsearch-helper runs a single line of code and shuts down.

    I don't see any of these using 10+GB of RAM in a non-functional state (a quick per-container snapshot is sketched below). The system when running steady uses 60% of 25GB of RAM. And this wasn't an issue with the set of containers I was using until the start of this month.

    I believe this to be an issue in docker itself currently.
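
    For reference, here is a one-shot snapshot of per-container memory using the standard docker stats command, which shows what the containers themselves are actually holding:

    ```python
    #!/usr/bin/env python3
    # Sketch: print each container's current memory usage in one shot,
    # to show the containers themselves aren't holding the missing RAM.
    import subprocess

    subprocess.run(
        ["docker", "stats", "--no-stream",
         "--format", "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"],
        check=True,
    )
    ```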

  4. 1 hour ago, trurl said:

    Just changing the SSH port is not a way to secure this. Bots will just find whatever other port you configure. You need a VPN or proxy.


    Also, Diagnostics REQUIRED for bug reports.


    Changing the port was not done to secure it. It was done so a docker container could use it.
    Only specific ports are exposed; wireguard is used to connect when out of network for management, as management ports are not meant to be exposed.

    The issue is that unRaid starts SSH with the default config, creating a window of attack until it restarts the service with the correct config. This is not expected behaviour; a sketch of how that window could be observed is below.

    Diagnostics have been added to the main post.
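
    A minimal sketch of how that window could be caught (the custom port 2222 is a placeholder for whatever port is actually configured): poll both ports after boot and watch sshd come up on 22 before the correct config is applied:

    ```python
    #!/usr/bin/env python3
    # Sketch: poll the default and custom SSH ports after boot to catch the
    # window where sshd is listening on 22 before the real config kicks in.
    # 2222 is a placeholder for the actual custom port.
    import socket
    import time

    def port_open(host, port, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        default = port_open("127.0.0.1", 22)
        custom = port_open("127.0.0.1", 2222)
        print(f"default(22): {default}  custom(2222): {custom}")
        time.sleep(2)
    ```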