SABnzbd eating all my memory.



Is SABnzbd supposed to take up all of my RAM? I've noticed that as it downloads, the amount of RAM in use goes up.

 

From this:

             total       used       free     shared    buffers     cached
Mem:          3668        391       3276          0          0        294
-/+ buffers/cache:         96       3571
Swap:            0          0          0

 

to this after downloading 2 GB worth of files:

             total       used       free     shared    buffers     cached
Mem:          3668       2947        720          0         22       2687
-/+ buffers/cache:        238       3430
Swap:            0          0          0

 

And I don't seem to get the memory back unless I restart the machine. Yesterday it took almost all my memory, to the point where the web UI shut down. If I do an echo 3 > /proc/sys/vm/drop_caches it releases it.
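
A minimal sketch of that cache-dropping step, for reference; the kernel documentation suggests running sync first so dirty pages are written out before the caches are dropped:

sync
echo 3 > /proc/sys/vm/drop_caches    # 3 = drop the page cache plus dentries and inodes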

 

I've got SAB and Sickbeard installed on one of my shares because otherwise I'd lose all their configs if the array went down.

 

I'm pretty new at this, and new at using telnet in general, so I'm not exactly sure what's going on.

 

Edit: Forgot this pertinent info: I'm running a 2TB parity drive, plus a 1TB and a 500GB drive for my shares. I'm using the most recent beta. I only have the Basic license.


*nix is designed to use as much available memory as possible for buffers/caches, and it will give that memory back up to programs that need it.

 

That is what you are seeing on the first (Mem) line.

 

What really matters, as far as you are concerned, is the second line (-/+ buffers/cache).

 

As you can see, you still have 3430 MB free if programs need it. That means (3430 - 720 =) 2710 MB is currently being used for buffers/cache.
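
If you want to read those numbers straight from the shell, a small sketch against this older free output format (the one that still has the -/+ buffers/cache line):

# "used" on the -/+ buffers/cache line excludes reclaimable cache;
# "free" on that line is what programs can actually get.
free -m | awk '/buffers\/cache/ {print "really used: " $3 " MB, available: " $4 " MB"}'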

 

You have plenty of memory available if needed.  No worries.

 

 

  • 3 months later...

I am experiencing this same fault.

 

The memory in use climbs at the rate I'm downloading at (typically 1.6 MB/sec). It stops eating memory when it gets to about 178 MB free, but then if SAB goes to post-process there isn't enough memory and the unRAID system will kill off a process or two, generally SAB, SB or CP. When I do "echo 3 > /proc/sys/vm/drop_caches" the memory is released back to free, but it seems that while this memory is "cached" it isn't being made available to other applications the way it's supposed to be.

 

I'm running the current SABnzbd+ from GitHub and have been having the issue since I installed SAB on my unRAID box 6 months ago. I'm currently on unRAID 5b14. I haven't got a syslog of the crash on hand, but I can replicate the issue, so I'll upload one tomorrow.

 

Anyone else have this issue or even solved it?


Since there are at least three ways to install the program, it'd also help if you said how you installed it or what package you used. You won't get much help without this info and without answers to the questions already posted.

 

All I can say is that my version runs great. It's not the newest, though; I'm a release or two behind.


Answer these questions please.

Where is the download directory located?

Is it on a cache disk, or on another disk?

Inside or outside the array?

Do you start SABnzbd before or after the array comes online?

 

I believe the download directory is not on a disk but located in memory.

 

 

SABnzbd+ is installed on my cache drive (/mnt/cache/app/sabnzbd).

It downloads to the cache drive (/mnt/cache/data/sabnzbd/Downloads/Incomplete/).

SABnzbd+ starts after the array comes online.

It was installed via the plg from "https://github.com/W-W/unRAID/raw/master/sabnzbd_mod.plg" but is running the latest build from the SABnzbd+ master.

Syslog attached.

syslog-2012-09-17-1050am.zip

  • 4 weeks later...

You are most probably running out of "low" memory, or memory was temporarily full. Both situations cause unix to kill off the processes it thinks are least important.

 

The "importance" of a process is (and I am simplifying a LOT here) based on two parameters:

 

1) one calculated by unix, based mainly on how often a process is used;

2) one configured by you.

 

1) is handled by unix; unfortunately, compared to other processes emhttp has very low usage, and there is not much to be done about that.

 

2) is a configuration parameter you can adjust to make a process less likely (or even impossible) to get killed. If you do not configure it, all processes have the same value, so only 1) (the usage of the process) determines which process gets killed.

 

If you add the following to your GO file and reboot, emhttp will never get killed by unix as a result of a lack of memory:

 

pgrep -f "/usr/local/sbin/emhttp" | while read PID; do echo -17 > /proc/$PID/oom_adj; done

 

It effectively sets the configuration parameter to -17, which means "this process may never be killed". You could also set it to +15 to make it the most likely process to get killed (the parameter runs from -17 up to +15 and defaults to 0).
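
To check that the adjustment actually took effect after boot, you can read the value back; a small sketch using the same pgrep pattern as the GO file line above:

# Should print -17 for every emhttp PID once the GO file line has run
pgrep -f "/usr/local/sbin/emhttp" | while read PID; do
    echo "PID $PID oom_adj=$(cat /proc/$PID/oom_adj)"
done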

 

Now remember that unix kills processes for a reason: to prevent the system from crashing. So if you start configuring everything at -17, nothing can be killed, and you can expect your system to crash, with all the negative effects that brings.

 

Personally I have emhttp and SMB both configured at -17, meaning that my shares (SMB) and the unRAID web interface (emhttp) will not die off. That leaves SABnzbd, CouchPotato and Sickbeard as candidates to be killed (and these are the real resource hogs, especially SABnzbd).
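
A sketch of how the GO file lines could look for both; the smbd process name is an assumption, so adjust the pattern to whatever your Samba daemon is called:

# Protect the unRAID web interface and the Samba daemon from the OOM killer
for PATTERN in "/usr/local/sbin/emhttp" "smbd"; do
    pgrep -f "$PATTERN" | while read PID; do echo -17 > /proc/$PID/oom_adj; done
done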

 

For those who are interested in why a memory issue arises in the first place:

 

- When applications start up, they claim a certain amount of memory to run in;

- Unix figures that this memory will not all be used at the same time, so it "administers" the right to that amount of memory for the application but does not actually assign the memory TO the application;

- This happens for every application that starts, and in some cases unix will allow different applications to claim the same memory.

 

The result is that memory in a linux system is "overbooked"; unix does this to make the most of the memory that is available.

 

Now, once in a while, several applications will want to use the memory they requested at the same time. Unix will then try to hand out the memory that was promised; if that memory is not available, an application can end up addressing memory that is assigned to another application, and this will show up as a "segmentation fault" in your syslog. This can cause all kinds of misery, so to avoid it unix starts a process specifically meant to free memory: the OOM killer (Out Of Memory killer). The OOM killer kills processes according to rules 1) and 2) above.
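
The kernel exposes the resulting "badness" score per process, so you can see which processes the OOM killer would go after first; a rough sketch:

# List the ten processes with the highest OOM "badness" scores right now
for P in /proc/[0-9]*; do
    SCORE=$(cat "$P/oom_score" 2>/dev/null) || continue
    NAME=$(cat "$P/comm" 2>/dev/null)
    echo "$SCORE $NAME"
done | sort -rn | head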

 

So: set emhttp to -17 as specified above. What could also work is setting SABnzbd (which we know to be the major resource hog) to a value of +15; that way, in the event of OOM, the main culprit gets killed off first.
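
A sketch of the counterpart GO file line for that; the SABnzbd process pattern is an assumption, so adjust it to match your install:

# Make SABnzbd the preferred victim when the OOM killer runs
pgrep -f "SABnzbd.py" | while read PID; do echo 15 > /proc/$PID/oom_adj; done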

 

That is not much of an issue, since you can just restart SABnzbd. This is the opposite of emhttp, which cannot be restarted and so makes a full system restart necessary (and since you cannot shut down gracefully, this will also trigger a parity check in unRAID).

 

Personally I hate having to press the big reboot button on the box, so I always want emhttp accessible, hence my -17 on emhttp.

 

I also raised this with Tom, because with emhttp not being restartable I actually think this setting could be standard in unRAID... It would solve a lot of unhappiness and also the possible data loss that results from having to hard-reboot the system.


Thank you, Helmonder, for tracking this down. I added the line to my GO file, and I was wondering if there's a way to test or verify that the system has made the oom_adj change. I have been running my system for a little over a month and never had any trouble, but I can see that after a few days of operation a lot of memory has been committed. I also like the suggestion of having SABnzbd pause during post-processing, and I have added that to my settings.

Cheers!

  • 2 months later...

Personally I hate having to press the big reboot button on the box, so I always want emhttp accessible, hence my -17 on emhttp.

 

I also raised this with Tom, because with emhttp not being restartable I actually think this setting could be standard in unRAID... It would solve a lot of unhappiness and also the possible data loss that results from having to hard-reboot the system.

 

Do you know if simplefeatures uses the same emhttp process? If not, which process needs the -17? I can't seem to find anything in the PIDs that says simplefeatures.

 

Thanks,

Timbiotic



simplefeatures is not a separate process; it is emhttp with an alternate CSS skin.
  • 1 year later...

Thing is, it is not even always enough to have a certain amount of low memory available. Look up low-memory fragmentation. Check your syslog when this happens (see the wiki for instructions on how to copy your syslog at the console and how to cleanly shut down the array). You will find in the syslog a section from the OOM-KILLER (Out Of Memory Killer) that details which program made a memory request, how large the request was, what type of memory was requested, how many free chunks of each size were available, and so on. Often the problem is that a modest chunk of low memory was requested, and although more low memory was available in total, there was no single chunk of sufficient size free.
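
Low-memory fragmentation can also be eyeballed directly via /proc/buddyinfo, which lists how many free chunks of each size the kernel has per memory zone; a quick check:

# Columns are counts of free blocks of order 0, 1, 2, ... (4 KB, 8 KB, 16 KB, ...);
# plenty of small blocks but zeros in the higher orders means the memory is fragmented.
cat /proc/buddyinfo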

 

See this topic for how I learned to interpret the OOM-KILLER log:

http://lime-technology.com/forum/index.php?topic=28920.msg258192#msg258192

 

I also found with my 2GB RAM system that adding a swapfile helped reduce the fragmentation enough to avoid the OOM-KILLER (as chunks are swapped out and back in, fragmentation is reduced). YMMV. Probably more importantly, I recently converted to NZBget. Much, MUCH less memory-hungry for the same speed and queue size. Highly recommended. Or wait for 64-bit.
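
For anyone wanting to try the swapfile route, a minimal sketch; the path and the 2 GB size are only examples, and the file should live on a real disk such as the cache drive, not on the flash:

# Create, format and enable a 2 GB swapfile (example path on the cache drive)
dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=2048
chmod 600 /mnt/cache/swapfile
mkswap /mnt/cache/swapfile
swapon /mnt/cache/swapfile
swapon -s    # confirm the swapfile is active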


Sounds like a problem I had once upon a time when I had my download location configured wrong in SAB. SAB would eat all my memory and basically crash when the amount downloaded approached the amount of physical memory I had. I tried adding a swap file and chased it from the wrong angle. From memory, it was just a missing '/' or maybe an extra one.

