Out of Memory Error (SOLVED)


Jayg37

Hello

I'm getting an out-of-memory error from the Fix Common Problems plugin: "Your server has run out of memory, and processes (potentially required) are being killed off. You should post your diagnostics and ask for assistance on the unRaid forums." The error used to occur about once a month but has become more frequent over the past few weeks; today is the second day in a row it has been thrown. I've narrowed it down to the mover, since the error appears around 4 AM, which is when the mover is scheduled to run.

 

Can anyone explain, in layman's terms, what settings I need to change, or why this is occurring?

 

RAM usage sits at around 40% at just about any given moment, so there should be plenty available.

I only run a small VM for Home Assistant with 4 GB allocated to it.

There are a handful of Docker containers running, but they should have negligible RAM usage. Plex transcodes to RAM, but that isn't happening at 4 AM.
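For anyone who wants to sanity-check that their containers really are idle, a one-shot snapshot of per-container memory use can be grabbed with (nothing here is specific to my setup):

docker stats --no-stream
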

Also, as suggested in previous posts, I adjusted a few kernel parameters in an attempt to stop this issue from occurring (vm.dirty_background_ratio to 1 and vm.dirty_ratio to 2).
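For reference, those can be set from the console with sysctl, along the lines of the commands below; as I understand it, on Unraid they would need to go in the go file (or be set by a plugin such as Tips and Tweaks) to survive a reboot, so double-check that part.

sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=2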

 

My sig should have all my server info.

I'm running the latest stable version of Unraid.

 

gserver-diagnostics-20211025-1015.zip


Did some more digging and found some interesting commands to help look for memory hogs.

Check what is using RAM and how much

ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n

RAMDISK sizes

df -h -t tmpfs

Check tmp folder size on RAM

du -sh /tmp

Mine came back as follows (I left off anything under 80 MB or so):

ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
.
.
.
84.9258 MB              /usr/bin/dockerd
757.07 MB               chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.863 MB              chia_full_node
760.871 MB              chia_full_node
760.871 MB              chia_full_node
760.871 MB              chia_full_node
760.875 MB              chia_full_node
4419.98 MB              /usr/bin/qemu-system-x86_64

 

So apparently the Chia plots I started and forgot about are taking up an extensive amount of memory, roughly 10.5 GB across all the chia_full_node processes.
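If it helps anyone else, RAM use can also be totalled per command name rather than per process, which makes a culprit like this jump out immediately (plain awk, nothing Unraid-specific):

ps aux | awk '{sum[$11]+=$6} END {for (c in sum) printf "%.1f MB\t%s\n", sum[c]/1024, c}' | sort -n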

 

My next step was to limit the amount of RAM this Docker container can take.

Add the following to the Docker container's Extra Parameters (use the advanced view switch in the top-right corner of the container's edit page):

--memory=1G
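To confirm the limit actually took effect after restarting the container, the configured value can be read back with docker inspect (the container name "chia" below is just a placeholder for whatever yours is called):

docker inspect -f '{{.HostConfig.Memory}}' chia

A result of 1073741824 (bytes) would mean the 1 GB cap is in place; 0 means no limit was applied.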

 

I'll reboot and see if this continues to be an issue.


After the reboot I got an immediate out-of-memory error even though overall system usage was sitting at just 19%. The container hit the 1 GB cap right away, which I believe confirms the Chia Docker container is the culprit.
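If anyone wants to verify which process the kernel's OOM killer actually went after, the kernel log usually names it; a quick, generic check is:

dmesg | grep -iE 'out of memory|oom'
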

I'm guessing a Chia update broke how the Docker container operates. With interest in the crypto waning, the container doesn't seem to be actively updated to keep up with changes to the platform.

I would assume simply deleting this Docker container will solve my issue. I'll consider this solved and hope the posts above help others troubleshoot their systems.

