Don't know how, but: Out of memory errors detected on your server



I was out of town for a few days on business and came home to find this.  I have never run out of memory before and always have plenty free.  The only difference in the server over the past month is that we have been running PLEX DVR as our primary DVR after I removed the TiVos from our house.  I do run the Folder Caching app with cache pressure set to zero, but I have done that for 10+ years across multiple server versions with plenty of memory and without issue.  The first instance appears to have been on May 23rd at 18:50:29, but there are more after that:

 

[screenshot: syslog showing out of memory errors]

 

My diagnostics are attached.  It seems my Windows 11 VM is very active for some unknown reason (go Microsoft), but it has memory limited to 32GB, so I don't see how that could cause an issue with the 32GB dedicated to unRAID and the Dockers?

 

This came out of the blue on a server that has been running the same, and flawlessly, for a long time.  The only thing I changed recently was adding the MariaDB-Official Docker just last week for Kodi.

 

Thanks for any help.  I'm stumped ATM.

unraid-diagnostics-20230529-1200.zip

39 minutes ago, JorgeB said:

If it's a one-time thing you can ignore it.  If it keeps happening, try further limiting the RAM for VMs and/or Docker containers; the problem is usually not just about not enough RAM but more about fragmented RAM.  Alternatively, a small swap file on disk might help; you can use the swapfile plugin:

 

https://forums.unraid.net/topic/109342-plugin-swapfile-for-691/
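For anyone curious what the plugin does under the hood, it is essentially automating something like the following (a sketch; the path and size here are just examples, and the plugin manages activation across reboots for you):

```shell
# Create and enable an 8G swap file (example path on a disk mount)
fallocate -l 8G /mnt/cache/swapfile
chmod 600 /mnt/cache/swapfile   # swap files must not be world-readable
mkswap /mnt/cache/swapfile      # format the file as swap space
swapon /mnt/cache/swapfile      # activate it
swapon --show                   # confirm the swap space is active
```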

 

Question:  I know how to limit RAM on VMs, but how do you limit RAM for Dockers?
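EDIT:  For reference, container RAM limits use Docker's standard flags; in unRAID they go in the template's Extra Parameters field (Advanced View).  A sketch with example values (container name, image, and paths are examples):

```shell
# Extra Parameters in the unRAID Docker template (example: 8G hard cap):
#   --memory=8g --memory-swap=8g
# Equivalent plain docker run:
docker run -d --name=nzbget \
  --memory=8g --memory-swap=8g \
  -v /mnt/user/appdata/nzbget:/config \
  linuxserver/nzbget
```

Setting --memory-swap equal to --memory also prevents the container from spilling into swap.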

 

FWIW, I have Docker's CPU cores pinned to 1/9, 2/10, 3/11.  unRAID has CPU 0/8 all to itself.

 

[screenshot: CPU pinning assignments]

 

Thanks again.


Thanks for the info and links.  I just watched nzbget while it downloaded a relatively small file of less than 30GB.  It grabbed 16GB of memory before it was finished.  I suspect the other day it was decompressing two large files totaling over 125GB simultaneously, and I bet that caused at least one of my overruns.  I am also watching PLEX as it records two shows, and the memory usage just keeps going up.  I have the PLEX Docker using memory as the temp directory.

 

If I limit the memory PLEX Docker is allowed to use, do you know what happens to PLEX if and when it gets full?


Oh, I don't think I can use the Swapfile plugin.  Both of my SSD pools are RAID; one is RAID0 and the other is RAID1.  My cache drive is not RAID, but it's a 10TB spinner, so not a good place for a swap file in terms of performance.  The cache drive has got to be super fragmented as well.

 

Also, I just discovered the Deluge Docker uses a lot of memory while doing a recheck.  I was probably decompressing in nzbget, recording in PLEX, and rechecking in Deluge all at the same time.  I'm going to have to put memory limits on each Docker as you suggested.  Suddenly 32GB for the Dockers doesn't seem like much anymore 🤔.

7 hours ago, craigr said:

been running PLEX DVR as our primary DVR

If you are doing this for HDHomeRun recordings and have /tmp set as the transcode location, it can contribute to out of memory errors.  /tmp can just keep filling up until the server is out of RAM. 

 

I am using a /tmp location for transcoding but have it limited to 16GB RAM usage.  Many do it with as little as 4GB and it works fine.  When I first started recording with Plex DVR for HDHR, I would frequently run out of RAM on my server which had only 32GB at the time.

 

HDHR can use up to 16GB RAM per hour of HD recording, and it does not reclaim RAM until the recording ends.  It creates and keeps lots of small files in the transcode location so you can scrub the timeline of an in-progress recording.  I never do this, so I force it to reclaim RAM by limiting the /tmp/PlexRamScratch location to 16GB.

11 minutes ago, trurl said:

that would be independent of any RAM limitations you put on the container.

I don't limit the RAM usage via the container.  It is limited by entries in the go file.

 

mkdir /tmp/PlexRamScratch
chmod -R 777 /tmp/PlexRamScratch
mount -t tmpfs -o size=16g tmpfs /tmp/PlexRamScratch
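To confirm the tmpfs cap took effect and see how much of it is currently in use, something like this works:

```shell
# Show the tmpfs mount and its current usage against the 16g cap
mount | grep PlexRamScratch
df -h /tmp/PlexRamScratch
```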

 

57 minutes ago, Hoopster said:

I don't limit the RAM usage via the container.  It is limited by go file entries.

 

mkdir /tmp/PlexRamScratch
chmod -R 777 /tmp/PlexRamScratch
mount -t tmpfs -o size=16g tmpfs /tmp/PlexRamScratch

 

I knew there was a way to do this, but I couldn't remember how or where to look.  I was about to ask.  Thanks for posting this again.

 

EDIT:  I already had it limited to 16GB.  I think I'll lower it to 8GB.

 

However, I'm seeing that nzbget can easily surpass 16GB on its own and Deluge can easily use 8GB, so having PLEX at 8GB could still result in not enough memory.

 

I limited Deluge to 6GB and nzbget to 8GB.  The other Dockers are already limited to 2GB or don't use much memory.  I also installed the swapfile plugin and assigned the swap file to my 10TB cache spinner.  Better that overflow goes there than crashing the server.
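To double-check that the per-container limits and the swap file took effect, one can look at (container names here are examples):

```shell
# Per-container memory usage vs. limit
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}" deluge nzbget
# Active swap devices/files and the overall memory picture
swapon --show
free -h
```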

 

I have a 250GB Samsung SATA SSD sitting here in a drawer but no more SATA ports that are easy to use.  I've got four ports left on my LSI card, but the sixth mini-SAS connector is really not very (not at all) usable because of other hardware on the motherboard.  I might try anyway.  This isn't a problem often, so having a swap file on an SSD would be ideal.  Then I could put nzbget and Deluge back to unlimited memory and retain full performance.

1 hour ago, Hoopster said:

If you are doing this for HDHomeRun recordings and have /tmp set as the transcode location, it can contribute to out of memory errors.  /tmp can just keep filling up until the server is out of RAM. 

 

I am using a /tmp location for transcoding but have it limited to 16GB RAM usage.  Many do it with as little as 4GB and it works fine.  When I first started recording with Plex DVR for HDHR, I would frequently run out of RAM on my server which had only 32GB at the time.

 

HDHR can use up to 16GB RAM per hour of HD recording, and it does not reclaim RAM until the recording ends.  It creates and keeps lots of small files in the transcode location so you can scrub the timeline of an in-progress recording.  I never do this, so I force it to reclaim RAM by limiting the /tmp/PlexRamScratch location to 16GB.

Yeah, I'm running 8 tuners as well.  Often there are three simultaneous recordings (sometimes more during sports), and someone could be watching live TV.  When the temp file size limit is reached, PLEX just writes to the actual hard drive, right?

