
5.0rc16c Out of memory while mover is running



I've gotten this two nights in a row now. Sometime after 3:40am, while the mover is running, transmission invokes the OOM-killer, resulting in sabnzbd being killed off.

 

I'm trying to learn all the time, but at my core I'm still a linux n00b. Help me understand why I'm low on memory.

 

I have 2GB of RAM. Normally, checking with "top", I have ~1.2GB cached. This is with both Transmission and sabnzbd running. How do I find out the root cause of my running out of memory?
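For reference, here's how I've been checking memory beyond plain top - free's -l switch splits the total into Low and High zones, and /proc/meminfo carries the same counters:

free -lm
grep -E 'LowFree|HighFree' /proc/meminfo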

 

Could it be related to that cache pressure switch we use with cache_dirs?

 

This is the relevant snippet from the syslog:

Aug 14 03:40:01 FileServer logger: mover started
Aug 14 04:37:08 FileServer kernel: transmission-da invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0 (Minor Issues)
Aug 14 04:37:08 FileServer kernel: Pid: 22969, comm: transmission-da Not tainted 3.9.6p-unRAID #23 (Errors)
Aug 14 04:37:08 FileServer kernel: Call Trace: (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c106e9d4>] dump_header+0x5a/0x199 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c10496ce>] ? __dequeue_entity+0x23/0x27 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c125d61c>] ? ___ratelimit+0xb4/0xc8 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c106ee49>] oom_kill_process+0x4f/0x2c2 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c10312f2>] ? has_ns_capability_noaudit+0x18/0x1f (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1031308>] ? has_capability_noaudit+0xf/0x11 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c106ed9a>] ? oom_badness+0x7d/0xdd (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c106f2ad>] out_of_memory+0x1f1/0x235 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c107211b>] __alloc_pages_nodemask+0x491/0x52f (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1379ca3>] sk_page_frag_refill+0x72/0xd9 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c13aa0c5>] tcp_sendmsg+0x55a/0xa6f (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1073f46>] ? __do_page_cache_readahead+0xe7/0xff (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c106c5b5>] ? file_read_actor+0x76/0xc3 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c13c5f62>] inet_sendmsg+0x6f/0x77 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c106e313>] ? T.1170+0x3a3/0x3ab (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c13764ac>] sock_aio_write+0xf3/0xfb (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c10961e2>] do_sync_readv_writev+0x7f/0xb3 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1096962>] do_readv_writev+0x95/0x14f (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c13763b9>] ? sock_destroy_inode+0x26/0x26 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c102ea3a>] ? __do_softirq+0x12f/0x151 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c105e00f>] ? handle_irq_event+0x2e/0x3c (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1096a5b>] vfs_writev+0x3f/0x4b (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1096b14>] sys_writev+0x43/0x6e (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c102e55a>] ? sys_gettimeofday+0x2b/0x58 (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1403178>] syscall_call+0x7/0xb (Errors)
Aug 14 04:37:08 FileServer kernel:  [<c1400000>] ? percpu_counter_hotcpu_callback+0x49/0x88 (Errors)
Aug 14 04:37:08 FileServer kernel: Mem-Info:
Aug 14 04:37:08 FileServer kernel: DMA per-cpu:
Aug 14 04:37:08 FileServer kernel: CPU    0: hi:    0, btch:   1 usd:   0
Aug 14 04:37:08 FileServer kernel: CPU    1: hi:    0, btch:   1 usd:   0
Aug 14 04:37:08 FileServer kernel: Normal per-cpu:
Aug 14 04:37:08 FileServer kernel: CPU    0: hi:  186, btch:  31 usd:   0
Aug 14 04:37:08 FileServer kernel: CPU    1: hi:  186, btch:  31 usd:  30
Aug 14 04:37:08 FileServer kernel: HighMem per-cpu:
Aug 14 04:37:08 FileServer kernel: CPU    0: hi:  186, btch:  31 usd:   0
Aug 14 04:37:08 FileServer kernel: CPU    1: hi:  186, btch:  31 usd:   0
Aug 14 04:37:08 FileServer kernel: active_anon:88617 inactive_anon:77 isolated_anon:0
Aug 14 04:37:08 FileServer kernel:  active_file:33146 inactive_file:88251 isolated_file:0
Aug 14 04:37:08 FileServer kernel:  unevictable:134453 dirty:12694 writeback:770 unstable:0
Aug 14 04:37:08 FileServer kernel:  free:81172 slab_reclaimable:23275 slab_unreclaimable:17409
Aug 14 04:37:08 FileServer kernel:  mapped:3207 shmem:404 pagetables:420 bounce:0
Aug 14 04:37:08 FileServer kernel:  free_cma:0
Aug 14 04:37:08 FileServer kernel: DMA free:3312kB min:68kB low:84kB high:100kB active_anon:96kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15964kB managed:15888kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:544kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Aug 14 04:37:08 FileServer kernel: lowmem_reserve[]: 0 820 1975 1975
Aug 14 04:37:08 FileServer kernel: Normal free:305740kB min:3628kB low:4532kB high:5440kB active_anon:153500kB inactive_anon:0kB active_file:37144kB inactive_file:37188kB unevictable:388kB isolated(anon):0kB isolated(file):0kB present:897016kB managed:840064kB mlocked:0kB dirty:44kB writeback:152kB mapped:8kB shmem:4kB slab_reclaimable:93100kB slab_unreclaimable:69092kB kernel_stack:1296kB pagetables:564kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:124598 all_unreclaimable? yes
Aug 14 04:37:08 FileServer kernel: lowmem_reserve[]: 0 0 9239 9239
Aug 14 04:37:08 FileServer kernel: HighMem free:15636kB min:512kB low:1788kB high:3064kB active_anon:200872kB inactive_anon:308kB active_file:95440kB inactive_file:315816kB unevictable:537424kB isolated(anon):0kB isolated(file):0kB present:1182600kB managed:1182600kB mlocked:0kB dirty:50732kB writeback:2928kB mapped:12820kB shmem:1612kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1116kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Aug 14 04:37:08 FileServer kernel: lowmem_reserve[]: 0 0 0 0
Aug 14 04:37:08 FileServer kernel: DMA: 20*4kB (U) 22*8kB (U) 19*16kB (U) 2*32kB (MR) 0*64kB 1*128kB (R) 0*256kB 3*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 3312kB
Aug 14 04:37:08 FileServer kernel: Normal: 24529*4kB (EM) 21851*8kB (UEM) 2044*16kB (M) 0*32kB 1*64kB (R) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 305692kB
Aug 14 04:37:08 FileServer kernel: HighMem: 171*4kB (UR) 82*8kB (UR) 16*16kB (UR) 340*32kB (MR) 50*64kB (MR) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 15676kB
Aug 14 04:37:08 FileServer kernel: 256272 total pagecache pages
Aug 14 04:37:08 FileServer kernel: 0 pages in swap cache
Aug 14 04:37:08 FileServer kernel: Swap cache stats: add 0, delete 0, find 0/0
Aug 14 04:37:08 FileServer kernel: Free swap  = 0kB
Aug 14 04:37:08 FileServer kernel: Total swap = 0kB
Aug 14 04:37:08 FileServer kernel: 523999 pages RAM
Aug 14 04:37:08 FileServer kernel: 295650 pages HighMem
Aug 14 04:37:08 FileServer kernel: 5960 pages reserved
Aug 14 04:37:08 FileServer kernel: 388815 pages shared
Aug 14 04:37:08 FileServer kernel: 333509 pages non-shared
Aug 14 04:37:08 FileServer kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Aug 14 04:37:08 FileServer kernel: [  742]     0   742      594      245       3        0         -1000 udevd
Aug 14 04:37:08 FileServer kernel: [ 1125]     0  1125      518      194       3        0             0 syslogd
Aug 14 04:37:08 FileServer kernel: [ 1129]     0  1129      464       97       3        0             0 klogd
Aug 14 04:37:08 FileServer kernel: [ 1171]     1  1171      713      121       2        0             0 rpc.portmap
Aug 14 04:37:08 FileServer kernel: [ 1175]     0  1175      527      198       3        0             0 rpc.statd
Aug 14 04:37:08 FileServer kernel: [ 1185]     0  1185      475      134       3        0             0 inetd
Aug 14 04:37:08 FileServer kernel: [ 1192]     0  1192     1123      383       4        0             0 ntpd
Aug 14 04:37:08 FileServer kernel: [ 1199]     0  1199      466      159       3        0             0 acpid
Aug 14 04:37:08 FileServer kernel: [ 1209]    81  1209      842      153       3        0             0 dbus-daemon
Aug 14 04:37:08 FileServer kernel: [ 1214]     0  1214      513      177       3        0             0 crond
Aug 14 04:37:08 FileServer kernel: [ 1216]     0  1216      475       73       3        0             0 atd
Aug 14 04:37:08 FileServer kernel: [20739]     0 20739     1422      381       5        0             0 emhttp
Aug 14 04:37:08 FileServer kernel: [21146]     0 21146      661      123       3        0             0 uu
Aug 14 04:37:08 FileServer kernel: [21147]     0 21147      460       62       3        0             0 logger
Aug 14 04:37:08 FileServer kernel: [21150]     0 21150      464      133       3        0             0 agetty
Aug 14 04:37:08 FileServer kernel: [21151]     0 21151      464      134       3        0             0 agetty
Aug 14 04:37:08 FileServer kernel: [21152]     0 21152      464      133       3        0             0 agetty
Aug 14 04:37:08 FileServer kernel: [21153]     0 21153      464      133       3        0             0 agetty
Aug 14 04:37:08 FileServer kernel: [21154]     0 21154      464      134       3        0             0 agetty
Aug 14 04:37:08 FileServer kernel: [21155]     0 21155      464      133       3        0             0 agetty
Aug 14 04:37:08 FileServer kernel: [21156]     0 21156     1609     1304       5        0             0 awk
Aug 14 04:37:08 FileServer kernel: [21172]     0 21172     2493      482       6        0             0 nmbd
Aug 14 04:37:08 FileServer kernel: [21174]     0 21174     4123      961       9        0             0 smbd
Aug 14 04:37:08 FileServer kernel: [21183]     0 21183     4123      517       9        0             0 smbd
Aug 14 04:37:08 FileServer kernel: [21186]     0 21186     1099      267       4        0             0 cnid_metad
Aug 14 04:37:08 FileServer kernel: [21188]     0 21188     1461      448       5        0             0 afpd
Aug 14 04:37:08 FileServer kernel: [21199]    61 21199      763      394       4        0             0 avahi-daemon
Aug 14 04:37:08 FileServer kernel: [21200]    61 21200      728       99       4        0             0 avahi-daemon
Aug 14 04:37:08 FileServer kernel: [21208]     0 21208      509      131       3        0             0 avahi-dnsconfd
Aug 14 04:37:08 FileServer kernel: [21274]     0 21274      593      231       3        0         -1000 udevd
Aug 14 04:37:08 FileServer kernel: [21275]     0 21275      593      214       3        0         -1000 udevd
Aug 14 04:37:08 FileServer kernel: [21423]     0 21423     2825      253       8        0             0 shfs
Aug 14 04:37:08 FileServer kernel: [21432]     0 21432    11109     1752      24        0             0 shfs
Aug 14 04:37:08 FileServer kernel: [21881]     0 21881      790      323       3        0             0 cache_dirs
Aug 14 04:37:08 FileServer kernel: [22159]     0 22159      788      296       4        0             0 screen
Aug 14 04:37:08 FileServer kernel: [22160]     0 22160     1082      425       4        0             0 bash
Aug 14 04:37:08 FileServer kernel: [22928]    99 22928    95567    74512     196        0             0 python
Aug 14 04:37:08 FileServer kernel: [22968]    99 22968    13556     9992      27        0             0 transmission-da
Aug 14 04:37:08 FileServer kernel: [29160]     0 29160     1083      422       4        0             0 bash
Aug 14 04:37:08 FileServer kernel: [32451]     0 32451      627      217       3        0             0 mv
Aug 14 04:37:08 FileServer kernel: [22205]     0 22205      638      240       3        0             0 sh
Aug 14 04:37:08 FileServer kernel: [22206]     0 22206      641      276       3        0             0 mover
Aug 14 04:37:08 FileServer kernel: [22207]     0 22207      460      130       3        0             0 logger
Aug 14 04:37:08 FileServer kernel: [22208]     0 22208      665      195       3        0             0 find
Aug 14 04:37:08 FileServer kernel: [22356]     0 22356      779      321       4        0             0 rsync
Aug 14 04:37:08 FileServer kernel: [22357]     0 22357      678      132       4        0             0 rsync
Aug 14 04:37:08 FileServer kernel: [22358]     0 22358      743      154       4        0             0 rsync
Aug 14 04:37:08 FileServer kernel: [23797]     0 23797      463       61       3        0             0 sleep
Aug 14 04:37:08 FileServer kernel: Out of memory: Kill process 22928 (python) score 144 or sacrifice child (Errors)
Aug 14 04:37:08 FileServer kernel: Killed process 22928 (python) total-vm:382268kB, anon-rss:294348kB, file-rss:3700kB (Errors)
Aug 14 04:37:38 FileServer kernel: mdcmd (59): spindown 5 (Routine)
Aug 14 04:37:39 FileServer kernel: mdcmd (60): spindown 13 (Routine)
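A side note for anyone decoding the process table above: the total_vm and rss columns are in 4KB pages. For the python process (that's sabnzbd) this works out to 95567 × 4KB = 382268kB ≈ 373MB virtual and 74512 × 4KB = 298048kB ≈ 291MB resident, which matches the total-vm and anon-rss + file-rss figures in the kill lines exactly.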

 

Please don't tell me sabnzbd is hogging my memory. I'd like to think I can afford to spend 0.4GB on sabnzbd in my 2GB system.

 

Thanks,

Morten

 

Link to comment

I really want to understand the root cause here. I've been digging myself, but I still want help figuring out the rest.

 

As for interpreting the OOM-killer log, here's what I have found so far. Anyone facing a similar problem should first familiarize themselves with the three memory zones: DMA, Low/Normal, and High memory. The log has lines detailing how much of each is free and what the low and minimum limits are:

DMA free:3312kB min:68kB low:84kB high:100kB active_anon:96kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15964kB managed:15888kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:544kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Normal free:305740kB min:3628kB low:4532kB high:5440kB active_anon:153500kB inactive_anon:0kB active_file:37144kB inactive_file:37188kB unevictable:388kB isolated(anon):0kB isolated(file):0kB present:897016kB managed:840064kB mlocked:0kB dirty:44kB writeback:152kB mapped:8kB shmem:4kB slab_reclaimable:93100kB slab_unreclaimable:69092kB kernel_stack:1296kB pagetables:564kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:124598 all_unreclaimable? yes
HighMem free:15636kB min:512kB low:1788kB high:3064kB active_anon:200872kB inactive_anon:308kB active_file:95440kB inactive_file:315816kB unevictable:537424kB isolated(anon):0kB isolated(file):0kB present:1182600kB managed:1182600kB mlocked:0kB dirty:50732kB writeback:2928kB mapped:12820kB shmem:1612kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1116kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
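If you want to look at these watermarks live rather than waiting for an OOM report, they are also in /proc/zoneinfo - note the values there are in 4KB pages, not kB. On this kernel the first few lines after each zone header should be the free page count and the min/low/high watermarks:

grep -A 5 '^Node' /proc/zoneinfo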

 

In this case, all three zones were above the minimum mark. This is a case of the system invoking the OOM-killer even though there is memory free. The plot thickens...

 

I guess the fact that the LOW/Normal memory status ends with "all_unreclaimable? yes" is also a clue. I don't like the sound of "unreclaimable".

 

The next step worth looking at, I think, is the gfp_mask from this line:

Aug 14 04:37:08 FileServer kernel: transmission-da invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0 (Minor Issues)

 

Minor issue my a$$. Anyway, one would need to head to this page and decode the gfp_mask attached to the failed memory request transmission was making:

http://linux-mm.org/PageAllocation

 

For ease of reference, I'll include this:

/* Zone modifiers in GFP_ZONEMASK (see linux/mmzone.h - low two bits) */
#define __GFP_DMA       0x01u
#define __GFP_HIGHMEM   0x02u

/*
* Action modifiers - doesn't change the zoning
*
* __GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
* _might_ fail.  This depends upon the particular VM implementation.
*
* __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
* cannot handle allocation failures.
*
* __GFP_NORETRY: The VM implementation must not retry indefinitely.
*/
#define __GFP_WAIT      0x10u   /* Can wait and reschedule? */
#define __GFP_HIGH      0x20u   /* Should access emergency pools? */
#define __GFP_IO        0x40u   /* Can start physical IO? */
#define __GFP_FS        0x80u   /* Can call down to low-level FS? */
#define __GFP_COLD      0x100u  /* Cache-cold page required */
#define __GFP_NOWARN    0x200u  /* Suppress page allocation failure warning */
#define __GFP_REPEAT    0x400u  /* Retry the allocation.  Might fail */
#define __GFP_NOFAIL    0x800u  /* Retry for ever.  Cannot fail */
#define __GFP_NORETRY   0x1000u /* Do not retry.  Might fail */
#define __GFP_NO_GROW   0x2000u /* Slab internal usage */
#define __GFP_COMP      0x4000u /* Add compound page metadata */
#define __GFP_ZERO      0x8000u /* Return zeroed page on success */
#define __GFP_NOMEMALLOC 0x10000u /* Don't use emergency reserves */
#define __GFP_NORECLAIM  0x20000u /* No realy zone reclaim during allocation */

 

So, since I had a gfp_mask=0x42d0 the failed memory request has the following flags:

#define __GFP_WAIT      0x10u   /* Can wait and reschedule? */
#define __GFP_IO        0x40u   /* Can start physical IO? */
#define __GFP_FS        0x80u   /* Can call down to low-level FS? */
#define __GFP_NOWARN    0x200u  /* Suppress page allocation failure warning */
#define __GFP_COMP      0x4000u /* Add compound page metadata */
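If you'd rather not decode the mask by hand, a quick shell loop over the values from the header snippet above does it - just a throwaway sketch, only checked against this one mask:

mask=0x42d0
for f in DMA:0x01 HIGHMEM:0x02 WAIT:0x10 HIGH:0x20 IO:0x40 FS:0x80 \
         COLD:0x100 NOWARN:0x200 REPEAT:0x400 NOFAIL:0x800 \
         NORETRY:0x1000 NO_GROW:0x2000 COMP:0x4000 ZERO:0x8000; do
    # print the flag name if its bit is set in the mask
    [ $(( mask & ${f#*:} )) -ne 0 ] && echo "__GFP_${f%%:*}"
done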

 

It is also worth noting that none of the zone modifiers for the DMA or HighMem zones were set, so I guess that means the request was for Low (also called Normal) memory - of which I had 300MB+ available (albeit with the "all_unreclaimable? yes" flag) when the request was made.

 

What I didn't know at first was how large the request was. It turns out it can be read straight off the log line: order=3 means 2^3 = 8 contiguous pages, i.e. a 32KB physically contiguous chunk.

 

But when googling around, I see lots of people dealing with memory fragmentation. Turns out the fragmentation status is also included in the log - this part:

DMA: 20*4kB (U) 22*8kB (U) 19*16kB (U) 2*32kB (MR) 0*64kB 1*128kB (R) 0*256kB 3*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 3312kB
Normal: 24529*4kB (EM) 21851*8kB (UEM) 2044*16kB (M) 0*32kB 1*64kB (R) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 305692kB
HighMem: 171*4kB (UR) 82*8kB (UR) 16*16kB (UR) 340*32kB (MR) 50*64kB (MR) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 15676kB

 

So, out of the 300MB of Low/Normal memory I have available, the largest free block is 64KB, and there is only one of those. I'm guessing that is bad?

 

So, it is looking like Transmission wanted to allocate memory from the Low/Normal zone - an order-3 request, i.e. a 32KB contiguous chunk. The memory manager starts freeing up memory in the Normal zone, but after freeing 300MB it has next to nothing left in blocks of 32KB or larger (a single 64KB block), and this results in an OOM situation, invoking the OOM-killer. Please tell me - am I on the right track here?

 

And if so, what steps can I take to combat memory fragmentation?

 

PS. I'm using cache_dirs with no specific cache_pressure switch, so it would be at the cache_dirs default of 10 (in contrast to the Linux default of 100). But if I'm on the right track with the above, this is probably unrelated, as the cache lives mostly in High memory.
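The live value is easy to verify, by the way:

cat /proc/sys/vm/vfs_cache_pressure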

Link to comment

And if so, what steps can I take to combat memory fragmentation?

 

PS. I'm using cache_dirs with no specific cache_pressure switch, so it would be at the cache_dirs default of 10 (in contrast to the Linux default of 100). But if I'm on the right track with the above, this is probably unrelated, as the cache lives mostly in High memory.

 

I used to have a similar problem with rsyncs of many files.

I had so many files that just running the locate/updatedb tool would cause memory issues.

 

I bet that running cache_dirs at the same time could cause issues, i.e., forcing the kernel to hold onto low memory.

 

What I used to do in my rsync_linked_backups script was alter some parameters temporarily.

 

# Save the current kernel VM settings so they can be restored afterwards.
swappiness=$(</proc/sys/vm/swappiness)
cachepressure=$(</proc/sys/vm/vfs_cache_pressure)

# Drop pagecache/dentries/inodes, then bias the VM toward swapping idle
# pages out and toward reclaiming dentry/inode caches while the job runs.
echo 3              > /proc/sys/vm/drop_caches
echo 100            > /proc/sys/vm/swappiness
echo 200            > /proc/sys/vm/vfs_cache_pressure

nice rsync -aW ${RSYNCOPTS} -z --exclude='*.zip' --exclude='*.gz' --exclude='*.Z' ${BACKUPSRC[*]} ${BACKUPDIR}
nice rsync -aW ${RSYNCOPTS} ${BACKUPSRC[*]} ${BACKUPDIR}
RC=$?
[ ${RC:=0} -gt ${ERC:=0} ] && ERC=${RC}   # remember the worst exit code seen

# Drop caches once more and restore the saved settings.
echo 3              > /proc/sys/vm/drop_caches
echo $swappiness    > /proc/sys/vm/swappiness
echo $cachepressure > /proc/sys/vm/vfs_cache_pressure

 

This had the net effect of dropping the cache and thus losing what cache_dirs was trying to do, but it saved my programs from bombing.

 

You may want to pause cache_dirs, add these to the mover, then see how it works out.

 

This could cause drives to spin up since data in the cache gets dropped.

However your program may continue to run without crashing.
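Another knob that might be worth a try - I haven't needed it myself, and it assumes your kernel was built with compaction support (CONFIG_COMPACTION, mainline since 2.6.35) - is asking the kernel to defragment free memory directly:

sync
echo 1 > /proc/sys/vm/compact_memory   # compact all zones (no-op without CONFIG_COMPACTION)
cat /proc/buddyinfo                    # check whether larger free blocks reappeared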

Link to comment

Also, depending on how many files are on your filesystems, you may want to bump up memory to 4GB, since you are using programs which could read many files in a random manner, i.e. Transmission.

 

2GB is plenty for a regular fileserver, however when you are doing torrenting with all sorts of random reads, it pays to have the extra RAM.

 

It will NOT stop this low memory exhaustion from happening.

The only way I could resolve that was to drop the cache when I knew a job requiring lots of file access was going to take place.

 

I think the only way to resolve the root cause is to migrate to 64-bit Linux in the future.

 

Once I altered my rsyncs, my crashes went away. I had 8GB of RAM.

I could get away with one job, maybe two at the same time. Any more than that and I would get OOM issues.

 

However, I had millions and millions of files.

I was doing a daily rsync with --link-dest as a parameter; it would cause the application to traverse a file tree containing many files.

 

Link to comment

Thanks for the ideas. Rather than add RAM, I could always just reduce the depth I let cache_dirs run to if I had a problem with High zone memory. As you said, with the current 32-bit kernel, adding RAM won't help with Low zone memory.

 

I'm not sure if you were already running a swapfile (and assumed I was as well?), but I wasn't, and I have now added one with 'theone's' plugin. Supposedly this will help with memory fragmentation - in the dire situation my system was in, it would be able to swap out active apps' memory and regain a chunk large enough to satisfy the request. In the event it needs to swap out something being used actively, that would slow down the app, but hey, a lot better than the OOM-killer, I should think. Anyway, the swapfile may help without any slowdowns at all, just because the system would probably swap out the less-frequently used chunks of memory first.

 

I'll update the thread with how it goes. Last night no problem, so fingers crossed  :P

Link to comment

As you said, with the current 32-bit kernel, adding RAM won't help with Low zone memory.

Not only will it not help, it will make matters worse: the kernel will use up more lowmem to track the highmem. I've had some serious problems when running the server with more than 4GB of RAM. As WeeboTech said, the solution is 64-bit, which at the current rate we may get sometime in 2017.
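Rough numbers on why: the kernel keeps a struct page (about 32 bytes on 32-bit) in lowmem for every 4KB page in the system, so 4GB of RAM costs roughly 1M pages × 32 bytes ≈ 32MB of lowmem in bookkeeping alone, and 8GB roughly twice that - before counting the extra buffers and slab that come with actually using the memory.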

 

 

Link to comment

Adding swap probably will not help if you are traversing a filesystem with tons of files. 

The dentry cache (what cache_dirs keeps active) is in low memory and does not get swapped out.

Lowering its maxdepth or pausing it may help.

Dropping the cache (in my example) helped.

As did increasing swappiness and altering vfs_cache_pressure.

Link to comment

The dentry cache (what cache_dirs keeps active) is in low memory and does not get swapped out.

 

No, cache does not get swapped out; it gets released (memory is reclaimed) when there is memory pressure. I guess that's why in my case there was 300MB of low memory free when I ran out of memory. Normally I only have about 50MB of low zone memory available.

 

Also, I'm not sure if you meant to say the dentry cache is ONLY in the low zone? I'm guessing that wasn't what you meant - I have seen people with 4GB+ of dentry cache when googling around, and am pretty sure it can reside in both low and high zones.

 

That being said, this is still somewhat clouded in mystery. Upon closer inspection it appears I have "slab_reclaimable:93100kB" in the low zone. One has to wonder why "all_unreclaimable?" reads "yes" when the same line lists 93MB of reclaimable slab. I wonder if we will ever know what that is supposed to mean.

 

As did increasing swappiness and altering vfs_cache_pressure.

 

Swappiness would only work if you have a swapfile, no? So how can you say adding a swapfile won't help?

 

The reason a swapfile (and increased swappiness) works is that memory pages which get swapped out and later paged back in have a defragmenting effect, much like defragging files on a hard drive.

 

For vfs_cache_pressure, here is what I found:

https://www.kernel.org/doc/Documentation/sysctl/vm.txt

 

vfs_cache_pressure

Controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.

 

So, if the above is accurate, then as long as it is non-zero, the dentry cache can be reclaimed when there is memory pressure - and I think my situation qualifies as memory pressure. As the description reads, this setting only controls the ratio of pagecache/swapcache reclaim to dentry/inode reclaim; i.e., since we do want dentries retained whenever possible, we ask it to release pagecache first.

 

I was able to find a description of what pagecache and swapcache are at http://linux-mm.org/Low_On_Memory. I think this is the memory that shows up as "buffers" in top.

 

Lastly, I don't mean to be assertive, and sorry if my choice of words sounds that way. I'm googling this up as I go. If I'm mistaken, please do put me straight. Just trying to get to the bottom of this :o

Link to comment
Also, I'm not sure if you meant to say the dentry cache is ONLY in the low zone? I'm guessing that wasn't what you meant - I have seen people with 4GB+ of dentry cache when googling around, and am pretty sure it can reside in both low and high zones.

 

Come to think of it, that person might have been using a 64-bit kernel. What do I know. This is getting old, and reading about this memory management system has caused me to lose a little of the respect I had for Linux. It's got some sick skeletons in the closet.  ::)

Link to comment

A few kernels ago (4.7) I tried increasing the dentry hash table via a boot parameter.

The machine I was using had 8GB of memory.

 

While it worked, it was only temporary, and it caused more OOM errors than before.

At the time I was testing, the dentry cache was limited to low memory.

It may be different now.

 

As for swappiness, it's only helpful if you have a swap partition or swap file and applications that use a lot of RAM which sits idle; otherwise it doesn't help all that much. It helped when I used tmpfs, which I'll describe later. I've never seen any other benefit from the swap file (but that's me). Maybe a few idle pages get swapped out and sit there, but not enough for me to dedicate a lot of time to it unless we were moving more data to a tmpfs filesystem. See more below.

 

this setting only controls the ratio of pagecache/swapcache reclaim to dentry/inode reclaim.

At certain settings it prefers to reclaim pagecache/swapcache, which can live in high memory.

At other settings it will prefer to keep dentries and/or not reclaim them.

This can cause issues if you need low memory.

At least that was my experience with filesystems containing a huge number of files.

 

As far as respect for Linux and its memory management goes, we need to consider that we're running a 32-bit OS and that the root filesystem is in RAM (which can never be swapped out).

 

Where are the additional programs that were installed?

Are they on the cache drive? On the root RAM disk?

 

Was the root RAM disk cleaned of all extra files that aren't needed - logs, libraries, man files, headers/includes, share/doc files, etc.?

 

Also worth considering: choosing not to add more RAM when it could possibly help.

I've seen improvements up to 4GB; after that it was a lower return on investment, but for me it worked wonders for caching, especially with a lot of active torrents.

 

Another option is to use tmpfs more and move any larger files from the root RAM disk onto a tmpfs.

tmpfs can be swapped out in a pinch; rootfs cannot. Granted, this is way more than is needed for this case, but I'm unfamiliar with what other programs are installed and running.

 

With my 8GB system many moons ago, I was able to move root to tmpfs.

I was even able to create a file that filled root on tmpfs and watch it swap out. This saved the system from crashing. This isn't the answer, only more information that shows it can be dealt with in different ways.

 

Ideally, a 64-bit kernel is probably the answer.

Link to comment

@MortenSchmidt: Once you have your sabnzbd, transmission, etc., running, what's the output of this command?

du -shx /

 

I get

524M    /

 

482MB of it is in /usr. The 524MB total would be the RAM drive size, correct? It probably corresponds to the "unevictable:537424kB" listed in the High memory line of the OOM-killer log, right? (537424kB / 1024 ≈ 525MB, so the numbers match.)

 

I use the sabnzbd, transmission, and subsonic plugins, which I guess all install to /usr, and I have the data on the cache drive.

 

I wonder, if you had spare RAM (those of you with 4 or 16GB), whether you could put the swapfile on the RAM drive. Might be worth looking into having just a small (say 100MB) swapfile on the RAM drive to get the defragmenting effect without the spinning-disk delays. Anyone know if this is possible?

 

Anyway, my server got through another night with no issues. Swapfile usage varies between 5 and 50MB from what I have seen so far. Memory fragmentation looks decent as well, with the largest block available being 512KB:

root@FileServer:~# free -lm
             total       used       free     shared    buffers     cached
Mem:          2023       1973         50          0        161       1063
Low:           868        820         48
High:         1154       1153          1
-/+ buffers/cache:        748       1275
Swap:         2047         15       2032
root@FileServer:~# cat /proc/buddyinfo
Node 0, zone      DMA      6     28     25     15     11     10      1      1      2      1      0
Node 0, zone   Normal    493   4944      0      1      6      2      2      1      0      0      0
Node 0, zone  HighMem    454      0      0      0      0      0      0      0      0      0      0
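For anyone else reading along: each column in /proc/buddyinfo is the count of free blocks of order 0 through 10, i.e. 4KB, 8KB, ... up to 4MB. A rough one-liner (my own throwaway, not from any tool) to pull out the largest free block in the Normal zone:

awk '$4 == "Normal" { for (i = 5; i <= NF; i++) if ($i > 0) largest = 2^(i-5) * 4 }
     END { print "largest free Normal-zone block: " largest "KB" }' /proc/buddyinfo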

Link to comment

So, this got me thinking: where is the 524MB RAM drive counted in the free output? The answer is, as with so many other things Linux, not immediately obvious.

 

The experiment I made was to drop first the pagecache, and then the dentries/inodes, by following this:

http://linux-mm.org/Drop_Caches

Drop Caches

Kernels 2.6.16 and newer provide a mechanism to have the kernel drop the page cache and/or inode and dentry caches on command, which can help free up a lot of memory. Now you can throw away that script that allocated a ton of memory just to get rid of the cache...

To use /proc/sys/vm/drop_caches, just echo a number to it.

 

To free pagecache:

echo 1 > /proc/sys/vm/drop_caches

 

To free dentries and inodes:

echo 2 > /proc/sys/vm/drop_caches

 

To free pagecache, dentries and inodes:

echo 3 > /proc/sys/vm/drop_caches

 

This is a non-destructive operation and will only free things that are completely unused. Dirty objects will continue to be in use until written out to disk and are not freeable. If you run "sync" first to flush them out to disk, these drop operations will tend to free more memory.

 

Situation before dropping anything:

root@FileServer:~# free -lm
             total       used       free     shared    buffers     cached
Mem:          2023       1972         51          0        165       1060
Low:           868        819         49
High:         1154       1153          1
-/+ buffers/cache:        745       1277
Swap:         2047         15       2032
root@FileServer:~# cat /proc/buddyinfo
Node 0, zone      DMA      6     28     25     15     11     10      1      1      2      1      0
Node 0, zone   Normal   2646   4358     33      3      2      1      2      1      0      0      0
Node 0, zone  HighMem    835     11      0      0      0      0      0      0      0      0      0

 

Dropping the pagecache frees up 580MB (51MB -> 631MB):

root@FileServer:~# echo 1 > /proc/sys/vm/drop_caches
root@FileServer:~# free -lm
             total       used       free     shared    buffers     cached
Mem:          2023       1392        631          0         94        535
Low:           868        501        366
High:         1154        890        264
-/+ buffers/cache:        763       1260
Swap:         2047         15       2032
root@FileServer:~# cat /proc/buddyinfo
Node 0, zone      DMA     23     42     36     51     32     22      7      3      2      1      0
Node 0, zone   Normal  31163  19216   5646    545      4      3      1      1      0      0      0
Node 0, zone  HighMem  21063  14125   2160     55      0      0      0      0      0      0      0

 

Dropping the dentry/inode cache frees up an additional 40MB (631MB -> 671MB):

root@FileServer:~# echo 2 > /proc/sys/vm/drop_caches
root@FileServer:~# free -lm
             total       used       free     shared    buffers     cached
Mem:          2023       1351        671          0         94        584
Low:           868        398        470
High:         1154        953        201
-/+ buffers/cache:        672       1350
Swap:         2047         15       2032
root@FileServer:~# cat /proc/buddyinfo
Node 0, zone      DMA     23     42     36     51     32     22      7      3      2      1      0
Node 0, zone   Normal  31416  20667   7723   1430     77      3      1      1      0      0      0
Node 0, zone  HighMem  13184  14124   2162     51      0      0      0      0      0      0      0

 

Lastly, to see how much of what remains is dirty (unwritten) cache, I sync the filesystem and then drop all caches, freeing up an extra 66MB (671MB -> 737MB):

root@FileServer:~# sync
root@FileServer:~# echo 3 > /proc/sys/vm/drop_caches
root@FileServer:~# free -lm
             total       used       free     shared    buffers     cached
Mem:          2023       1286        737          0         94        533
Low:           868        407        461
High:         1154        878        275
-/+ buffers/cache:        658       1365
Swap:         2047         15       2032
root@FileServer:~# cat /proc/buddyinfo
Node 0, zone      DMA     24     43     37     52     33     23      6      3      2      1      0
Node 0, zone   Normal  30447  20453   7972   1591    130      8      3      1      0      0      0
Node 0, zone  HighMem  22491  14518   2358     79      0      0      0      0      0      0      0

 

So, my original question is answered. After dropping the caches, I still have 533MB of cached memory. So the RAM drive is counted as "cached" in the free output.

 

But new questions arise. Why is it that I free up 580MB by dropping the pagecache, and only an additional 40MB by dropping the dentry/inode cache? I had more dirty cache than I had (non-dirty) dentry/inode cache. Perhaps, when I'm only using a depth of 4 with cache_dirs, it really does take only 40MB, but it surprised me. Guess I can ramp the depth up to 5 or more and see what happens.
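One way to measure instead of guessing: the dentry and inode caches are slab allocations, so their actual size shows up in /proc/slabinfo (object counts and object sizes per row), or interactively via slabtop if it's installed:

grep -E '^dentry|inode_cache' /proc/slabinfo
slabtop -o | head -20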

 

The last thing worth noting here is that dropping all the caches did not significantly help with memory fragmentation - of course there are more blocks available, but the largest available block of low memory is unchanged at 512KB. It still looks to me like the presence of a swapfile (of which the system uses between 5 and 50MB) does help with fragmentation - it is a lot easier to allocate a larger-than-64KB chunk now, either with or without dropping caches.

Link to comment

Swap on the RAM drive?  I'm not sure this is worth it.

 

In the case of your system Morten, I would recommend adding more ram.

 

While 2GB is enough for a normal unRAID system, you could use more, since your RAM drive is half a gig in itself. Then add in the RAM needed for applications, plus the RAM needed for caching, buffering and housekeeping.

 

I can see why you are having memory issues.

 

If you wanted, you might be able to move /usr onto a tmpfs. It would take some work to do that.

At the very least, if there is memory pressure, unused blocks of the tmpfs would definitely be swapped out.
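Very rough sketch of the idea - untested here, and the size and mount point are placeholders; you'd also want to do it before the apps start:

mkdir /mnt/usr-tmpfs                           # staging mount point (name is arbitrary)
mount -t tmpfs -o size=512m tmpfs /mnt/usr-tmpfs
cp -a /usr/. /mnt/usr-tmpfs/                   # copy the current contents across
mount --bind /mnt/usr-tmpfs /usr               # overlay the tmpfs copy on top of /usr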

 

Also, are you using the latest unRAID RC? In the later versions /var/log is on tmpfs, which can also swap out unused blocks under memory pressure.
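Quick way to check, if you're not sure:

mount | grep /var/log
df -h /var/log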

 

Please don't tell me sabnzbd is hogging my memory. I'd like to think I can afford to spend 0.4GB on sabnzbd in my 2GB system.

 

It's more than just 0.4GB. You have to consider that it's taking space on your rootfs RAM drive, and then there's running Python. I've read Python has issues with memory and garbage collection. I was just jumping around the other day and read that once Python requests memory, it never releases it. The same is definitely true for Perl: it manages memory internally, and once it grows, it stays at that size and does not release it. I don't know if this is 100% true for Python, but I did come across some comments that seemed to report this.

 

As far as putting a swap file on the RAM drive: I wouldn't do it in your system unless it was bumped up to 4GB. Speaking frankly, I wouldn't put it on the RAM drive at all. I remember reading that it's counterproductive and can lead to a deadlock.

 

Another option is to install these other apps to the cache drive, or an SSD.

That would allow for fast access with no spinning drive. In addition, putting swap on an SSD also makes sense.

 

BTW, you mention 'defragmenting' your memory.

Dropping caches would also aid in that, since you release all memory that can be released.

Link to comment

As a 7-month-plus follow-up: I did not observe any more OOM-killer events after adding a swapfile. Although I respect WeeboTech's advice that adding more RAM is preferable, I still believe the swapfile has helped reduce low-memory fragmentation and avoid the problems I was encountering.

The problem with adding more RAM is that I would want it 100% tested with memtest, which would mean downtime. It's either a total server overhaul with an updated mobo or nothing for me. Two weeks back I replaced sabnzbd with nzbget, which has a memory footprint a fraction of what sab was hoarding. Highly recommended. I could very likely do without the swapfile now, but I'll leave it in place since it isn't causing any problems at all.

Link to comment
