My unRAID keeps crashing



Hi,

 

I'm having a problem with my unRAID 6.3.2 box. It's annoying, as my trial ends in 5 days and I was all set to continue, but now I'm not so sure. It's not reliable enough at this stage for a production web server and mail server.

 

It crashes and completely locks up. It has happened half a dozen times now. I cannot access the WebUI, nor can I reach the host or its lone VM via SSH, and the web and mail servers running on the FreeBSD VM are down.

 

The machine is a Celeron G3900, 8 GB RAM, 4 x 4TB Hitachi HDDs in dual parity, with two 120 GB SSDs as cache in RAID 1.

 

I am running one FreeBSD VM, allocated 1 CPU core and 2 GB of RAM, plus Deluge, Plex and Sickrage Dockers. Could a problem with the VM cause the whole machine to lock up? Are there any particular logs I should look at? I've checked the Nginx logs, and other than a few bots looking for WordPress and phpMyAdmin login pages that don't exist, there's nothing unusual there.

 

I did have a reasonably sized rsync job (rsnapshot, actually) set up as a cron job on the VM to back up to the array, but I'm unsure whether it was running when the machine crashed, as I don't know exactly what time the crash happened. rsync can be a bit of a resource hog, especially if both ends of it are running on the same physical CPU, so I've commented out the cron job and I'll see if that fixes it.
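
For reference, the cron entry on the VM was something along these lines (the schedule and paths are from memory, so treat them as illustrative):

# /etc/crontab on the FreeBSD VM -- rsnapshot's daily run, currently commented out:
#0	4	*	*	*	root	/usr/local/bin/rsnapshot daily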

 

I would love to see logs or similar from when the crash occurs, but as I cannot access the machine, I cannot get diagnostics until after it has rebooted, and it appears that data is only kept since the last boot. I have attached them anyway, in case they're useful to someone who is able to assist.

apollo-diagnostics-20170316-2016.zip

Link to comment

The Fix Common Problems plugin has a special Troubleshooting Mode that, if enabled, will constantly monitor and save diagnostics and syslogs to the flash drive until you reboot.  You don't want to run it normally, because it bloats the syslog hugely, but it's perfect for this kind of issue, where the machine can't be instructed to save a final diagnostics and syslog.  First, let the plugin run its tests and see if it notes anything that might be causing trouble.

Link to comment

Thanks for that, I'll definitely use those features if/when it happens again.

 

I did some more sleuthing today, and it looks like the rsnapshot job running on the VM was the problem. I can't prove it, but there are several temp files left behind that should have been deleted had the job finished. In hindsight I was doing it wrong: running the program on the VM and using autofs(5) to mount the unRAID array as an on-demand NFS share, pretending the storage was local to the VM, may have tripped it up.
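
Roughly, that setup looked like this (hostname and paths are placeholders, and I'm paraphrasing the map files from memory):

# /etc/auto_master on the FreeBSD VM -- enable an indirect map under /mnt/autofs:
/mnt/autofs	/etc/auto_unraid

# /etc/auto_unraid -- mount the unRAID user share on demand over NFS:
array	-rw,nfsv3	apollo:/mnt/user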

 

Anyhow, thanks to the help of the Nerdpack maintainer, I have resolved the problem. Hopefully it's solved for good!

Link to comment

This is still happening, but I was able to capture the FCP troubleshooting logs this time. Machine configuration when it crashed:

 

  • 1 VM (FreeBSD, 1 core, 2 GB RAM)
  • 4 Dockers (CrashPlan, Plex, Sickrage, Deluge)
  • The tail of the syslog stops at 3:59:47, which is only 13 seconds before an rsnapshot job was due to start.
  • Weirdly, that rsnapshot job appears to have finished the backup but not the rm of the old backup, and it's odd that none of that was captured in the syslog: going by the timestamps on the backup files, the last one was modified at 4:06, yet the syslog stops at 3:59.
  • The system had been reliable until I started running CrashPlan as well.
  • I noticed last night before I went to bed that RAM usage was at 88%, but in the diagnostics it's not that high.

Any hints would be greatly appreciated. Log tail and diagnostics attached.

apollo-diagnostics-20170323-0349.zip

FCPsyslog_tail.txt

Link to comment

OK, so I can get this to happen at will now: basically, just run the Dockers above and run rsync. So I set up all my logging and let rip!

 

I took the screenshot of htop 28 seconds before it crashed. Memory usage was creeping up at a rate of about 0.01 GB per 10 seconds; however, it never actually reached 100%.

 

The syslog tail captured everything in its gory detail. I have absolutely no idea what it is saying, but it doesn't look good. Some sort of kernel panic, I assume? Everything was running fine until Mar 24 08:43:29.

 

Here is the first error and the first call trace; after that it just kept repeating until I turned the machine off. So if anyone can decipher this, please let me know. It would be much appreciated. I think that the

[<ffffffff810c8322>] out_of_memory+0x3aa/0x3e5

probably tells the tale.

 

That said though, shouldn't an out-of-memory condition just start shuffling things off to a swap/page file, a la Windows?
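
(Is there even a swap configured? A couple of quick ways to check from the unRAID console:)

# list configured swap devices (no output means none):
swapon --summary
# or read the totals straight from the kernel:
grep -i swap /proc/meminfo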

 

In the meantime I'm running memtest, and hopefully I'll pick up another 8 GB of memory this arvo.

 

Mar 24 08:43:29 Apollo kernel: find invoked oom-killer: gfp_mask=0x24082c2(GFP_KERNEL|__GFP_HIGHMEM|__GFP_NOWARN|__GFP_ZERO), nodemask=0, order=0, oom_score_adj=0
Mar 24 08:43:29 Apollo kernel: find cpuset=/ mems_allowed=0
Mar 24 08:43:29 Apollo kernel: CPU: 0 PID: 25573 Comm: find Not tainted 4.9.10-unRAID #1
Mar 24 08:43:29 Apollo kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H110M-ITX, BIOS P1.60 07/28/2016
Mar 24 08:43:29 Apollo kernel: ffffc900018378a8 ffffffff813a353e ffffc90001837a90 ffffffff8193cf73
Mar 24 08:43:29 Apollo kernel: ffffc90001837928 ffffffff8111eca4 0000000000000000 0000000000000000
Mar 24 08:43:29 Apollo kernel: ffffffff810b2008 0000000000000000 ffffc90001837900 ffffffff81053fae
Mar 24 08:43:29 Apollo kernel: Call Trace:
Mar 24 08:43:29 Apollo kernel: [<ffffffff813a353e>] dump_stack+0x61/0x7e
Mar 24 08:43:29 Apollo kernel: [<ffffffff8111eca4>] dump_header+0x76/0x20e
Mar 24 08:43:29 Apollo kernel: [<ffffffff810b2008>] ? delayacct_end+0x51/0x5a
Mar 24 08:43:29 Apollo kernel: [<ffffffff81053fae>] ? has_ns_capability_noaudit+0x34/0x3e
Mar 24 08:43:29 Apollo kernel: [<ffffffff810c7b59>] oom_kill_process+0x81/0x377
Mar 24 08:43:29 Apollo kernel: [<ffffffff810c8322>] out_of_memory+0x3aa/0x3e5
Mar 24 08:43:29 Apollo kernel: [<ffffffff810cbfdd>] __alloc_pages_nodemask+0xb5b/0xc71
Mar 24 08:43:29 Apollo kernel: [<ffffffff81102ad2>] alloc_pages_current+0xbe/0xe8
Mar 24 08:43:29 Apollo kernel: [<ffffffff810fa157>] __vmalloc_node_range+0x141/0x1d8
Mar 24 08:43:29 Apollo kernel: [<ffffffff810fa217>] __vmalloc_node+0x29/0x2b
Mar 24 08:43:29 Apollo kernel: [<ffffffff812b23a2>] ? kmem_zalloc_large+0xa9/0xe5
Mar 24 08:43:29 Apollo kernel: [<ffffffff810fa2a8>] __vmalloc+0x1b/0x1d
Mar 24 08:43:29 Apollo kernel: [<ffffffff812b23a2>] kmem_zalloc_large+0xa9/0xe5
Mar 24 08:43:29 Apollo kernel: [<ffffffff812c0af5>] xfs_get_acl+0x5d/0xd9
Mar 24 08:43:29 Apollo kernel: [<ffffffff81162151>] get_acl+0x7a/0xd4
Mar 24 08:43:29 Apollo kernel: [<ffffffff81129c49>] generic_permission+0x85/0x169
Mar 24 08:43:29 Apollo kernel: [<ffffffff81129da0>] __inode_permission+0x73/0xa7
Mar 24 08:43:29 Apollo kernel: [<ffffffff81129e0f>] inode_permission+0x3b/0x3d
Mar 24 08:43:29 Apollo kernel: [<ffffffff8112d4ed>] may_open+0x8e/0xcf
Mar 24 08:43:29 Apollo kernel: [<ffffffff8112e016>] path_openat+0xae8/0xca8
Mar 24 08:43:29 Apollo kernel: [<ffffffff8112e21e>] do_filp_open+0x48/0x9e
Mar 24 08:43:29 Apollo kernel: [<ffffffff811207d1>] do_sys_open+0x137/0x1c6
Mar 24 08:43:29 Apollo kernel: [<ffffffff811207d1>] ? do_sys_open+0x137/0x1c6
Mar 24 08:43:29 Apollo kernel: [<ffffffff81120879>] SyS_open+0x19/0x1b
Mar 24 08:43:29 Apollo kernel: [<ffffffff8167d2b7>] entry_SYSCALL_64_fastpath+0x1a/0xa9
Mar 24 08:43:29 Apollo kernel: Mem-Info:
Mar 24 08:43:29 Apollo kernel: active_anon:1205186 inactive_anon:524165 isolated_anon:0
Mar 24 08:43:29 Apollo kernel: active_file:2885 inactive_file:6164 isolated_file:18
Mar 24 08:43:29 Apollo kernel: unevictable:0 dirty:91 writeback:923 unstable:0
Mar 24 08:43:29 Apollo kernel: slab_reclaimable:6967 slab_unreclaimable:152808
Mar 24 08:43:29 Apollo kernel: mapped:29824 shmem:704157 pagetables:6408 bounce:0
Mar 24 08:43:29 Apollo kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H110M-ITX, BIOS P1.60 07/28/2016
Mar 24 08:43:29 Apollo kernel: ffffc900018378a8 ffffffff813a353e ffffc90001837a90 ffffffff8193cf73
Mar 24 08:43:29 Apollo kernel: ffffc90001837928 ffffffff8111eca4 0000000000000000 0000000000000000
Mar 24 08:43:29 Apollo kernel: ffffffff810b2008 0000000000000000 ffffc90001837900 ffffffff81053fae

 

 

apollo-diagnostics-20170324-0829.zip

FCPsyslog_tail.txt

Screen Shot 2017-03-24 at 8.42.57 am.png

Link to comment

Increased memory to 16 GB and the problem seems solved, for the most part. No crashes, anyway. Looks like 8.79 GB is the magic number!

 

It does hang when doing an rm -rf on a ~5 GB directory, but it recovers when the rm is complete. I think this is because of my weak CPU. I will upgrade that to an i7 in due course.

 

The problem with unRAID is that you go in thinking "I'll just use it as a NAS", but then you realise just how capable it is, you start adding things on, and suddenly your original hardware is inadequate ;)

Screen Shot 2017-03-24 at 11.15.12 am.png

Link to comment

This problem is still unresolved. More FCP logging, and now what I think is a different problem.

 

Diagnostics and syslog tail are attached; however, this time it appears that the mover is what brought the system to a halt.

 

Mar 26 04:40:01 Apollo root: mover started
Mar 26 04:40:01 Apollo root: moving "downloads" to array
Mar 26 04:40:01 Apollo root: .d..t...... ./
Mar 26 04:40:01 Apollo root: .d..t...... downloads/
Mar 26 04:40:01 Apollo root: .d..t...... downloads/P2P/
Mar 26 04:40:01 Apollo root: .d..t...... downloads/P2P/blackhole/
Mar 26 04:40:01 Apollo root: .d..t...... downloads/P2P/
Mar 26 04:40:01 Apollo root: .d..t...... downloads/P2P/incomplete/
Mar 26 04:40:01 Apollo root: .d..t...... downloads/P2P/complete/

It's a bit hard to tell because the output is abbreviated, but are those Apple dot files that it has failed on? Has anybody ever had problems with that before? Any ideas?
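
(Then again, reading rsync(1), I may be answering my own question: those dot-strings look like rsync's --itemize-changes flags rather than filenames.)

# rsync --itemize-changes strings have the form YXcstpoguax, e.g.:
#   .d..t......
#   ||  |
#   ||  't' = the modification time differs
#   |'d' = the item is a directory
#   '.' = no content is being transferred (attribute change only)
# i.e. these appear to be directory timestamp updates, not Apple dot files.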

 

Thanks,

 

Tom

apollo-diagnostics-20170326-1202.zip

FCPsyslog_tail.txt

Link to comment

There have been some OOM errors with kernels 4.8 and 4.9 that *appear* to be eliminated by decreasing the RAM write cache size, so if you want to try this, install the Tips and Tweaks plugin and set the values below like so:

 

vm.dirty_background_ratio = 1

vm.dirty_ratio = 2

 

These are very low values; if they work, you can raise them a little and it should still remain stable, e.g. 5 and 10 respectively. These OOM issues were apparently fixed in kernel 4.10, but unRAID is still on 4.9.
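
If you want to test the effect before installing the plugin, the same values can be applied from the console (they won't survive a reboot, which is why the plugin is the tidier long-term option):

sysctl vm.dirty_background_ratio=1
sysctl vm.dirty_ratio=2

# confirm the running values:
sysctl vm.dirty_background_ratio vm.dirty_ratio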

 

 

Link to comment

Unfortunately it crashed again, and again and again. I'm just about ready to throw in the towel :(

 

I'm pretty sure it's the

rm -rf

that is causing the crash when running rsnapshot. For those who aren't aware, rsnapshot basically rsyncs a series of directories and then cleans up by deleting (with rm -rf) the oldest backup. In my case it is trying to remove a 4.8 GB directory.
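
For the curious, a minimal rsnapshot.conf looks something like this (fields must be separated by actual tabs; the paths here are illustrative, not my real ones):

config_version	1.2
snapshot_root	/mnt/user/backups/
# keep 7 rotations; once daily.6 exists, each run deletes it with rm -rf
retain	daily	7
# everything under /usr/home lands in <snapshot_root>/daily.0/vm/
backup	/usr/home/	vm/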

 

It doesn't crash every time, but roughly every 24-30 hours.

 

What is also interesting is that if I manually run rm -rf on the directory from the command line, the system locks up until the command has finished, usually 5 minutes or so. A web server running in a VM on the machine won't respond to requests, htop stays stationary, and the unRAID webUI is non-responsive, as is SSH. When the command is finished, everything goes back to normal. The problem seems to be that when the rm -rf is run as part of a script, it doesn't always recover; it fails silently, crashing the system with nothing being written to the syslog.

 

Does anyone have any ideas as to what would cause rm to lock the system like this? I've googled and haven't come up with anything.
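
Would throttling the delete even be worth a try? Something like this from the unRAID console (the path is just an example):

# run the delete at idle I/O priority and minimum CPU priority:
ionice -c3 nice -n19 rm -rf /mnt/disk1/backups/daily.6
# or unlink the files first, so the final directory removal has less to do:
find /mnt/disk1/backups/daily.6 -type f -delete
rm -rf /mnt/disk1/backups/daily.6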

 

The machine is a Celeron G3900, 16 GB RAM, 4 x 4TB Hitachi HDDs in dual parity, with two 120 GB SSDs as cache in RAID 1. The directory that I am trying to delete is on a disk share that is on an array disk.

 

 

Link to comment
  • 4 weeks later...

I don't have any grand new ideas beyond your syslog errors, but the results of those errors are extremely similar to mine.  I started experiencing these with 6.3.2.  I believe the memory issues you're seeing happen when a running process loses its mind inside its allocated block of RAM, or somehow becomes unable to manage the memory space it was using.

 

Here's a snippet of mine:

 

Apr  7 23:38:11 unraid emhttp: err: sendOutput: fork failed
Apr  7 23:38:11 unraid kernel: php[24131]: segfault at 0 ip 00000000005f42ad sp 00007ffcb0ff94b0 error 4 in php[400000+724000]
Apr  7 23:38:12 unraid emhttp: err: sendOutput: fork failed
Apr  7 23:38:15 unraid kernel: qemu-system-x86 invoked oom-killer: gfp_mask=0x24000c0(GFP_KERNEL), nodemask=0, order=0, oom_score_adj=0
Apr  7 23:38:15 unraid kernel: qemu-system-x86 cpuset=emulator mems_allowed=0
Apr  7 23:38:15 unraid kernel: CPU: 1 PID: 4873 Comm: qemu-system-x86 Not tainted 4.9.19-unRAID #1
Apr  7 23:38:15 unraid kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H97M Pro4, BIOS P1.50 01/15/2015
Apr  7 23:38:15 unraid kernel: ffffc90003dbb7d8 ffffffff813a3bf2 ffffc90003dbb9c0 ffffffff8193d07a
Apr  7 23:38:15 unraid kernel: ffffc90003dbb858 ffffffff8111ee27 0000000000000000 0000000000000000
Apr  7 23:38:15 unraid kernel: ffffffff810b2041 00000000ffffffff ffffc90003dbb830 ffffffff81053fee
Apr  7 23:38:15 unraid kernel: Call Trace:
Apr  7 23:38:15 unraid kernel: [<ffffffff813a3bf2>] dump_stack+0x61/0x7e
Apr  7 23:38:15 unraid kernel: [<ffffffff8111ee27>] dump_header+0x76/0x20e
Apr  7 23:38:15 unraid kernel: [<ffffffff810b2041>] ? delayacct_end+0x51/0x5a
Apr  7 23:38:15 unraid kernel: [<ffffffff81053fee>] ? has_ns_capability_noaudit+0x34/0x3e
Apr  7 23:38:15 unraid kernel: [<ffffffff810c7bf6>] oom_kill_process+0x81/0x377
Apr  7 23:38:15 unraid kernel: [<ffffffff810c83bf>] out_of_memory+0x3aa/0x3e5
Apr  7 23:38:15 unraid kernel: [<ffffffff810cc07a>] __alloc_pages_nodemask+0xb5b/0xc71
Apr  7 23:38:15 unraid kernel: [<ffffffff81102c75>] alloc_pages_current+0xbe/0xe8
Apr  7 23:38:15 unraid kernel: [<ffffffff810c91e4>] __get_free_pages+0x9/0x37
Apr  7 23:38:15 unraid kernel: [<ffffffff81130ce2>] __pollwait+0x59/0xc7
Apr  7 23:38:15 unraid kernel: [<ffffffff81158eb6>] eventfd_poll+0x27/0x50
Apr  7 23:38:15 unraid kernel: [<ffffffff81131e2f>] do_sys_poll+0x243/0x481
Apr  7 23:38:15 unraid kernel: [<ffffffffa01a131b>] ? kvm_irq_delivery_to_apic+0x6e/0x20f [kvm]
Apr  7 23:38:15 unraid kernel: [<ffffffff81130c89>] ? poll_initwait+0x3f/0x3f
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: qemu-system-x86 invoked oom-killer: gfp_mask=0x24000c0(GFP_KERNEL), nodemask=0, order=0, oom_score_adj=0
Apr  7 23:38:15 unraid kernel: qemu-system-x86 cpuset=emulator mems_allowed=0
Apr  7 23:38:15 unraid kernel: CPU: 1 PID: 4873 Comm: qemu-system-x86 Not tainted 4.9.19-unRAID #1
Apr  7 23:38:15 unraid kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H97M Pro4, BIOS P1.50 01/15/2015
Apr  7 23:38:15 unraid kernel: ffffc90003dbb7d8 ffffffff813a3bf2 ffffc90003dbb9c0 ffffffff8193d07a
Apr  7 23:38:15 unraid kernel: ffffc90003dbb858 ffffffff8111ee27 0000000000000000 0000000000000000
Apr  7 23:38:15 unraid kernel: ffffffff810b2041 00000000ffffffff ffffc90003dbb830 ffffffff81053fee
Apr  7 23:38:15 unraid kernel: Call Trace:
Apr  7 23:38:15 unraid kernel: [<ffffffff813a3bf2>] dump_stack+0x61/0x7e
Apr  7 23:38:15 unraid kernel: [<ffffffff8111ee27>] dump_header+0x76/0x20e
Apr  7 23:38:15 unraid kernel: [<ffffffff810b2041>] ? delayacct_end+0x51/0x5a
Apr  7 23:38:15 unraid kernel: [<ffffffff81053fee>] ? has_ns_capability_noaudit+0x34/0x3e
Apr  7 23:38:15 unraid kernel: [<ffffffff810c7bf6>] oom_kill_process+0x81/0x377
Apr  7 23:38:15 unraid kernel: [<ffffffff810c83bf>] out_of_memory+0x3aa/0x3e5
Apr  7 23:38:15 unraid kernel: [<ffffffff810cc07a>] __alloc_pages_nodemask+0xb5b/0xc71
Apr  7 23:38:15 unraid kernel: [<ffffffff81102c75>] alloc_pages_current+0xbe/0xe8
Apr  7 23:38:15 unraid kernel: [<ffffffff810c91e4>] __get_free_pages+0x9/0x37
Apr  7 23:38:15 unraid kernel: [<ffffffff81130ce2>] __pollwait+0x59/0xc7
Apr  7 23:38:15 unraid kernel: [<ffffffff81158eb6>] eventfd_poll+0x27/0x50
Apr  7 23:38:15 unraid kernel: [<ffffffff81131e2f>] do_sys_poll+0x243/0x481
Apr  7 23:38:15 unraid kernel: [<ffffffffa01a131b>] ? kvm_irq_delivery_to_apic+0x6e/0x20f [kvm]
Apr  7 23:38:15 unraid kernel: [<ffffffff81130c89>] ? poll_initwait+0x3f/0x3f
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81130eaa>] ? poll_select_copy_remaining+0xfe/0xfe
Apr  7 23:38:15 unraid kernel: [<ffffffff81132226>] SyS_ppoll+0xb9/0x135
Apr  7 23:38:15 unraid kernel: [<ffffffff81132226>] ? SyS_ppoll+0xb9/0x135
Apr  7 23:38:15 unraid kernel: [<ffffffff8167dfb7>] entry_SYSCALL_64_fastpath+0x1a/0xa9
Apr  7 23:38:15 unraid kernel: Mem-Info:
Apr  7 23:38:15 unraid kernel: active_anon:7976420 inactive_anon:16791 isolated_anon:0
Apr  7 23:38:15 unraid kernel: active_file:2073 inactive_file:3601 isolated_file:32
Apr  7 23:38:15 unraid kernel: unevictable:0 dirty:16 writeback:1 unstable:0
Apr  7 23:38:15 unraid kernel: slab_reclaimable:6221 slab_unreclaimable:13392
Apr  7 23:38:15 unraid kernel: mapped:20460 shmem:119836 pagetables:19084 bounce:0
Apr  7 23:38:15 unraid kernel: free:66556 free_pcp:211 free_cma:0
Apr  7 23:38:15 unraid kernel: Node 0 active_anon:31905680kB inactive_anon:67164kB active_file:8292kB inactive_file:14404kB unevictable:0kB isolated(anon):0kB isolated(file):128kB mapped:81840kB dirty:64kB writeback:4kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 15593472kB anon_thp: 479344kB writeback_tmp:0kB unstable:0kB pages_scanned:49396 all_unreclaimable? yes
Apr  7 23:38:15 unraid kernel: Node 0 DMA free:15892kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15984kB managed:15900kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:8kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Apr  7 23:38:15 unraid kernel: lowmem_reserve[]: 0 2852 31765 31765
Apr  7 23:38:15 unraid kernel: Node 0 DMA32 free:127584kB min:12132kB low:15164kB high:18196kB active_anon:2934768kB inactive_anon:24kB active_file:180kB inactive_file:60kB unevictable:0kB writepending:0kB present:3079716kB managed:3069728kB mlocked:0kB slab_reclaimable:1256kB slab_unreclaimable:2980kB kernel_stack:304kB pagetables:1156kB bounce:0kB free_pcp:204kB local_pcp:0kB free_cma:0kB
Apr  7 23:38:15 unraid kernel: lowmem_reserve[]: 0 0 28912 28912
Apr  7 23:38:15 unraid kernel: Node 0 Normal free:122748kB min:122968kB low:153708kB high:184448kB active_anon:28970912kB inactive_anon:67140kB active_file:8160kB inactive_file:14328kB unevictable:0kB writepending:68kB present:30128128kB managed:29607292kB mlocked:0kB slab_reclaimable:23628kB slab_unreclaimable:50580kB kernel_stack:11312kB pagetables:75180kB bounce:0kB free_pcp:640kB local_pcp:124kB free_cma:0kB
Apr  7 23:38:15 unraid kernel: lowmem_reserve[]: 0 0 0 0
Apr  7 23:38:15 unraid kernel: Node 0 DMA: 1*4kB (U) 0*8kB 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*4096kB (M) = 15892kB
Apr  7 23:38:15 unraid kernel: Node 0 DMA32: 276*4kB (UE) 316*8kB (UME) 338*16kB (UE) 272*32kB (UE) 243*64kB (UME) 151*128kB (UME) 79*256kB (UME) 51*512kB (UME) 28*1024kB (UME) 0*2048kB 0*4096kB = 127632kB
Apr  7 23:38:15 unraid kernel: Node 0 Normal: 850*4kB (UME) 671*8kB (UME) 413*16kB (UEH) 583*32kB (UMEH) 265*64kB (UMEH) 173*128kB (UMEH) 78*256kB (UMEH) 26*512kB (UMEH) 12*1024kB (M) 2*2048kB (MH) 0*4096kB = 122800kB
Apr  7 23:38:15 unraid kernel: 125551 total pagecache pages
Apr  7 23:38:15 unraid kernel: 0 pages in swap cache
Apr  7 23:38:15 unraid kernel: Swap cache stats: add 0, delete 0, find 0/0
Apr  7 23:38:15 unraid kernel: Free swap  = 0kB
Apr  7 23:38:15 unraid kernel: Total swap = 0kB
Apr  7 23:38:15 unraid kernel: 8305957 pages RAM
Apr  7 23:38:15 unraid kernel: 0 pages HighMem/MovableOnly
Apr  7 23:38:15 unraid kernel: 132727 pages reserved
Apr  7 23:38:15 unraid kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Apr  7 23:38:15 unraid kernel: [ 1291]     0  1291     6688      707      14       3        0         -1000 udevd
Apr  7 23:38:15 unraid kernel: [ 1458]     0  1458    59436      675      25       3        0             0 rsyslogd
Apr  7 23:38:15 unraid kernel: [ 1591]    81  1591     4900       61      14       3        0             0 dbus-daemon
Apr  7 23:38:15 unraid kernel: [ 1599]     1  1599     3342      546      11       3        0             0 rpcbind
Apr  7 23:38:15 unraid kernel: [ 1604]    32  1604     5352     1447      15       3        0             0 rpc.statd
Apr  7 23:38:15 unraid kernel: [ 1614]     0  1614     1619      405       8       3        0             0 inetd
Apr  7 23:38:15 unraid kernel: [ 1623]     0  1623     6120      646      17       3        0         -1000 sshd
Apr  7 23:38:15 unraid kernel: [ 1637]     0  1637    24546     1194      20       3        0             0 ntpd
Apr  7 23:38:15 unraid kernel: [ 1644]     0  1644     1095       29       7       3        0             0 acpid
Apr  7 23:38:15 unraid kernel: [ 1653]     0  1653     1621      397       8       3        0             0 crond
Apr  7 23:38:15 unraid kernel: [ 1655]     0  1655     1618       26       8       3        0             0 atd
Apr  7 23:38:15 unraid kernel: [ 1661]     0  1661    55386     1483     107       3        0             0 nmbd
Apr  7 23:38:15 unraid kernel: [ 1663]     0  1663    75164     3873     143       4        0             0 smbd
Apr  7 23:38:15 unraid kernel: [ 1664]     0  1664    73583     1144     137       4        0             0 smbd-notifyd
Apr  7 23:38:15 unraid kernel: [ 1666]     0  1666    73587     1091     137       4        0             0 cleanupd
Apr  7 23:38:15 unraid kernel: [ 1670]     0  1670    68257     1860     128       3        0             0 winbindd
Apr  7 23:38:15 unraid kernel: [ 1671]     0  1671    68680     3206     130       3        0             0 winbindd
Apr  7 23:38:15 unraid kernel: [ 8454]     0  8454    22481      914      18       3        0             0 emhttp
Apr  7 23:38:15 unraid kernel: [ 8455]     0  8455     1627      395       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8456]     0  8456     1627      403       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8457]     0  8457     1627      408       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8458]     0  8458     1627      399       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8459]     0  8459     1627      413       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8460]     0  8460     1627      416       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8546]     0  8546     5440      975      15       3        0             0 rpc.mountd
Apr  7 23:38:15 unraid kernel: [ 8554]     0  8554    10486      719      23       3        0             0 netatalk
Apr  7 23:38:15 unraid kernel: [ 8561]     0  8561    11421     1229      26       3        0             0 afpd
Apr  7 23:38:15 unraid kernel: [ 8562]     0  8562     8870      966      20       3        0             0 cnid_metad
Apr  7 23:38:15 unraid kernel: [ 8565]    61  8565     8623      591      21       3        0             0 avahi-daemon
Apr  7 23:38:15 unraid kernel: [ 8566]    61  8566     8557       64      21       3        0             0 avahi-daemon
Apr  7 23:38:15 unraid kernel: [ 8574]     0  8574     3185       27      11       3        0             0 avahi-dnsconfd
Apr  7 23:38:15 unraid kernel: [ 9729]     0  9729    38322      151      17       3        0             0 shfs
Apr  7 23:38:15 unraid kernel: [ 9739]     0  9739   275404     1787      48       4        0             0 shfs
Apr  7 23:38:15 unraid kernel: [ 9897]     0  9897   293514    10200      84       6        0          -500 dockerd
Apr  7 23:38:15 unraid kernel: [ 9912]     0  9912    60332     2674      32       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [10662]     0 10662    18821      839      38       3        0             0 virtlockd
Apr  7 23:38:15 unraid kernel: [10668]     0 10668    35734      947      40       3        0             0 virtlogd
Apr  7 23:38:15 unraid kernel: [10683]     0 10683   219397     3555      87       4        0             0 libvirtd
Apr  7 23:38:15 unraid kernel: [10877]    99 10877     4378      495      13       3        0             0 dnsmasq
Apr  7 23:38:15 unraid kernel: [10878]     0 10878     4345       54      13       3        0             0 dnsmasq
Apr  7 23:38:15 unraid kernel: [13173]     0 13173   890575   439357    1075       6        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [25324]     0 25324    68257     1002     128       3        0             0 winbindd
Apr  7 23:38:15 unraid kernel: [19020]     0 19020    75813     3847     145       4        0             0 smbd
Apr  7 23:38:15 unraid kernel: [32548]     0 32548     6120     1157      16       3        0             0 sshd
Apr  7 23:38:15 unraid kernel: [32583]     0 32583     3401      913      10       3        0             0 bash
Apr  7 23:38:15 unraid kernel: [ 4431]     0  4431   439099   238093     631       5        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [ 4873]     0  4873   965368   900878    1902       7        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [23861]     0 23861    22658      932      17       3        0             0 top
Apr  7 23:38:15 unraid kernel: [25596]     0 25596    89436      813      25       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [25614]     0 25614     1043       19       7       3        0             0 tini
Apr  7 23:38:15 unraid kernel: [25629]     0 25629    24792     3217      53       3        0             0 supervisord
Apr  7 23:38:15 unraid kernel: [25900]     0 25900     3953      102      11       3        0             0 crond.sh
Apr  7 23:38:15 unraid kernel: [25904]     0 25904     3260      140      10       3        0             0 crond
Apr  7 23:38:15 unraid kernel: [ 8348]     0  8348   745687   460782    1062       6        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [11411]     0 11411    73116      771      24       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [11429]     0 11429    18754     5479      38       3        0          1000 netdata
Apr  7 23:38:15 unraid kernel: [ 4946]     0  4946     5059      113      13       3        0          1000 bash
Apr  7 23:38:15 unraid kernel: [20722]     0 20722   106172      776      26       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [20740]     0 20740     1043       26       6       3        0             0 tini
Apr  7 23:38:15 unraid kernel: [20759]     0 20759    24727     3174      50       3        0             0 supervisord
Apr  7 23:38:15 unraid kernel: [20914]    99 20914   103484    14024     194       3        0             0 Plex Media Serv
Apr  7 23:38:15 unraid kernel: [20978]    99 20978   443532    10374     125       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [21027]    99 21027    64120     8273     117       3        0             0 Plex DLNA Serve
Apr  7 23:38:15 unraid kernel: [21029]    99 21029   142257      484      62       4        0             0 Plex Tuner Serv
Apr  7 23:38:15 unraid kernel: [21126]    99 21126   221840     8802      97       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22318]    99 22318   255176     8649     100       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22455]    99 22455  5785758  5724187   11297      25        0             0 Plex Media Scan
Apr  7 23:38:15 unraid kernel: [22475]    99 22475   222958     8204      99       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22520]    99 22520   221637     8520      96       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22578]    99 22578    77643    29962     122       4        0             0 Plex Transcoder
Apr  7 23:38:15 unraid kernel: [22580]    99 22580     5407      190      15       3        0             0 EasyAudioEncode
Apr  7 23:38:15 unraid kernel: Out of memory: Kill process 11429 (netdata) score 1000 or sacrifice child
Apr  7 23:38:15 unraid kernel: Killed process 4946 (bash) total-vm:20236kB, anon-rss:452kB, file-rss:0kB, shmem-rss:0kB
Apr  7 23:38:15 unraid liblogging-stdlog: action 'action 0' resumed (module 'builtin:omfile') [v8.23.0 try http://www.rsyslog.com/e/2359 ]
Apr  7 23:38:15 unraid liblogging-stdlog: action 'action 0' resumed (module 'builtin:omfile') [v8.23.0 try http://www.rsyslog.com/e/2359 ]
Apr  7 23:38:15 unraid liblogging-stdlog: action 'action 0' resumed (module 'builtin:omfile') [v8.23.0 try http://www.rsyslog.com/e/2359 ]
Apr  7 23:38:15 unraid liblogging-stdlog: action 'action 0' resumed (module 'builtin:omfile') [v8.23.0 try http://www.rsyslog.com/e/2359 ]
Apr  7 23:38:15 unraid kernel: rs:main Q:Reg invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=0, order=0, oom_score_adj=0
Apr  7 23:38:15 unraid kernel: rs:main Q:Reg cpuset=/ mems_allowed=0
Apr  7 23:38:15 unraid kernel: CPU: 3 PID: 1461 Comm: rs:main Q:Reg Not tainted 4.9.19-unRAID #1
Apr  7 23:38:15 unraid kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H97M Pro4, BIOS P1.50 01/15/2015
Apr  7 23:38:15 unraid kernel: ffffc90003a3fb70 ffffffff813a3bf2 ffffc90003a3fd58 ffffffff8193d07a
Apr  7 23:38:15 unraid kernel: ffffc90003a3fbf0 ffffffff8111ee27 0000000000000000 0000000000000000
Apr  7 23:38:15 unraid kernel: ffffffff810b2041 00000000ffffffff ffffc90003a3fbc8 ffffffff81053fee
Apr  7 23:38:15 unraid kernel: Call Trace:
Apr  7 23:38:15 unraid kernel: [<ffffffff813a3bf2>] dump_stack+0x61/0x7e
Apr  7 23:38:15 unraid kernel: [<ffffffff8111ee27>] dump_header+0x76/0x20e
Apr  7 23:38:15 unraid kernel: [<ffffffff810b2041>] ? delayacct_end+0x51/0x5a
Apr  7 23:38:15 unraid kernel: [<ffffffff81053fee>] ? has_ns_capability_noaudit+0x34/0x3e
Apr  7 23:38:15 unraid kernel: [<ffffffff810c7bf6>] oom_kill_process+0x81/0x377
Apr  7 23:38:15 unraid kernel: [<ffffffff810c83bf>] out_of_memory+0x3aa/0x3e5
Apr  7 23:38:15 unraid kernel: [<ffffffff810cc07a>] __alloc_pages_nodemask+0xb5b/0xc71
Apr  7 23:38:15 unraid kernel: [<ffffffff810dbddb>] ? shmem_getpage+0x16/0x18
Apr  7 23:38:15 unraid kernel: [<ffffffff8110388a>] alloc_pages_vma+0x183/0x1f5
Apr  7 23:38:15 unraid kernel: [<ffffffff810ee511>] handle_mm_fault+0xd74/0xf96
Apr  7 23:38:15 unraid kernel: [<ffffffff81120f5b>] ? __vfs_write+0xc3/0xec
Apr  7 23:38:15 unraid kernel: [<ffffffff810421fc>] __do_page_fault+0x24a/0x3ed
Apr  7 23:38:15 unraid kernel: [<ffffffff810423e2>] do_page_fault+0x22/0x27
Apr  7 23:38:15 unraid kernel: [<ffffffff8167f998>] page_fault+0x28/0x30
Apr  7 23:38:15 unraid kernel: Mem-Info:
Apr  7 23:38:15 unraid kernel: active_anon:7976310 inactive_anon:16791 isolated_anon:0
Apr  7 23:38:15 unraid kernel: active_file:2073 inactive_file:3601 isolated_file:32
Apr  7 23:38:15 unraid kernel: unevictable:0 dirty:16 writeback:1 unstable:0
Apr  7 23:38:15 unraid kernel: slab_reclaimable:6221 slab_unreclaimable:13392
Apr  7 23:38:15 unraid kernel: mapped:20460 shmem:119836 pagetables:19084 bounce:0
Apr  7 23:38:15 unraid kernel: free:66556 free_pcp:380 free_cma:0
Apr  7 23:38:15 unraid kernel: Node 0 active_anon:31905240kB inactive_anon:67164kB active_file:8292kB inactive_file:14404kB unevictable:0kB isolated(anon):0kB isolated(file):128kB mapped:81840kB dirty:64kB writeback:4kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 15593472kB anon_thp: 479344kB writeback_tmp:0kB unstable:0kB pages_scanned:86141 all_unreclaimable? yes
Apr  7 23:38:15 unraid kernel: Node 0 DMA free:15892kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15984kB managed:15900kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:8kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Apr  7 23:38:15 unraid kernel: lowmem_reserve[]: 0 2852 31765 31765
Apr  7 23:38:15 unraid kernel: Node 0 DMA32 free:127584kB min:12132kB low:15164kB high:18196kB active_anon:2934768kB inactive_anon:24kB active_file:180kB inactive_file:60kB unevictable:0kB writepending:0kB present:3079716kB managed:3069728kB mlocked:0kB slab_reclaimable:1256kB slab_unreclaimable:2980kB kernel_stack:304kB pagetables:1156kB bounce:0kB free_pcp:248kB local_pcp:120kB free_cma:0kB
Apr  7 23:38:15 unraid kernel: lowmem_reserve[]: 0 0 28912 28912
Apr  7 23:38:15 unraid kernel: Node 0 Normal free:122748kB min:122968kB low:153708kB high:184448kB active_anon:28970472kB inactive_anon:67140kB active_file:8160kB inactive_file:14328kB unevictable:0kB writepending:68kB present:30128128kB managed:29607292kB mlocked:0kB slab_reclaimable:23628kB slab_unreclaimable:50580kB kernel_stack:11312kB pagetables:75180kB bounce:0kB free_pcp:1272kB local_pcp:120kB free_cma:0kB
Apr  7 23:38:15 unraid kernel: lowmem_reserve[]: 0 0 0 0
Apr  7 23:38:15 unraid kernel: Node 0 DMA: 1*4kB (U) 0*8kB 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*4096kB (M) = 15892kB
Apr  7 23:38:15 unraid kernel: Node 0 DMA32: 276*4kB (UE) 316*8kB (UME) 338*16kB (UE) 272*32kB (UE) 243*64kB (UME) 151*128kB (UME) 79*256kB (UME) 51*512kB (UME) 28*1024kB (UME) 0*2048kB 0*4096kB = 127632kB
Apr  7 23:38:15 unraid kernel: Node 0 Normal: 850*4kB (UME) 671*8kB (UME) 413*16kB (UEH) 583*32kB (UMEH) 265*64kB (UMEH) 173*128kB (UMEH) 78*256kB (UMEH) 26*512kB (UMEH) 12*1024kB (M) 2*2048kB (MH) 0*4096kB = 122800kB
Apr  7 23:38:15 unraid kernel: 125496 total pagecache pages
Apr  7 23:38:15 unraid kernel: 0 pages in swap cache
Apr  7 23:38:15 unraid kernel: Swap cache stats: add 0, delete 0, find 0/0
Apr  7 23:38:15 unraid kernel: Free swap  = 0kB
Apr  7 23:38:15 unraid kernel: Total swap = 0kB
Apr  7 23:38:15 unraid kernel: 8305957 pages RAM
Apr  7 23:38:15 unraid kernel: 0 pages HighMem/MovableOnly
Apr  7 23:38:15 unraid kernel: 132727 pages reserved
Apr  7 23:38:15 unraid kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Apr  7 23:38:15 unraid kernel: [ 1291]     0  1291     6688      707      14       3        0         -1000 udevd
Apr  7 23:38:15 unraid kernel: [ 1458]     0  1458    59436      675      25       3        0             0 rsyslogd
Apr  7 23:38:15 unraid kernel: [ 1591]    81  1591     4900       61      14       3        0             0 dbus-daemon
Apr  7 23:38:15 unraid kernel: [ 1599]     1  1599     3342      546      11       3        0             0 rpcbind
Apr  7 23:38:15 unraid kernel: [ 1604]    32  1604     5352     1447      15       3        0             0 rpc.statd
Apr  7 23:38:15 unraid kernel: [ 1614]     0  1614     1619      405       8       3        0             0 inetd
Apr  7 23:38:15 unraid kernel: [ 1623]     0  1623     6120      646      17       3        0         -1000 sshd
Apr  7 23:38:15 unraid kernel: [ 1637]     0  1637    24546     1194      20       3        0             0 ntpd
Apr  7 23:38:15 unraid kernel: [ 1644]     0  1644     1095       29       7       3        0             0 acpid
Apr  7 23:38:15 unraid kernel: [ 1653]     0  1653     1621      397       8       3        0             0 crond
Apr  7 23:38:15 unraid kernel: [ 1655]     0  1655     1618       26       8       3        0             0 atd
Apr  7 23:38:15 unraid kernel: [ 1661]     0  1661    55386     1483     107       3        0             0 nmbd
Apr  7 23:38:15 unraid kernel: [ 1663]     0  1663    75164     3873     143       4        0             0 smbd
Apr  7 23:38:15 unraid kernel: [ 1664]     0  1664    73583     1144     137       4        0             0 smbd-notifyd
Apr  7 23:38:15 unraid kernel: [ 1666]     0  1666    73587     1091     137       4        0             0 cleanupd
Apr  7 23:38:15 unraid kernel: [ 1670]     0  1670    68257     1860     128       3        0             0 winbindd
Apr  7 23:38:15 unraid kernel: [ 1671]     0  1671    68680     3206     130       3        0             0 winbindd
Apr  7 23:38:15 unraid kernel: [ 8454]     0  8454    22481      914      18       3        0             0 emhttp
Apr  7 23:38:15 unraid kernel: [ 8455]     0  8455     1627      395       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8456]     0  8456     1627      403       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8457]     0  8457     1627      408       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8458]     0  8458     1627      399       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8459]     0  8459     1627      413       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8460]     0  8460     1627      416       8       3        0             0 agetty
Apr  7 23:38:15 unraid kernel: [ 8546]     0  8546     5440      975      15       3        0             0 rpc.mountd
Apr  7 23:38:15 unraid kernel: [ 8554]     0  8554    10486      719      23       3        0             0 netatalk
Apr  7 23:38:15 unraid kernel: [ 8561]     0  8561    11421     1229      26       3        0             0 afpd
Apr  7 23:38:15 unraid kernel: [ 8562]     0  8562     8870      966      20       3        0             0 cnid_metad
Apr  7 23:38:15 unraid liblogging-stdlog: action 'action 0' resumed (module 'builtin:omfile') [v8.23.0 try http://www.rsyslog.com/e/2359 ]
Apr  7 23:38:15 unraid liblogging-stdlog: action 'action 0' resumed (module 'builtin:omfile') [v8.23.0 try http://www.rsyslog.com/e/2359 ]
Apr  7 23:38:15 unraid kernel: [ 8565]    61  8565     8623      591      21       3        0             0 avahi-daemon
Apr  7 23:38:15 unraid kernel: [ 8566]    61  8566     8557       64      21       3        0             0 avahi-daemon
Apr  7 23:38:15 unraid kernel: [ 8574]     0  8574     3185       27      11       3        0             0 avahi-dnsconfd
Apr  7 23:38:15 unraid kernel: [ 9729]     0  9729    38322      151      17       3        0             0 shfs
Apr  7 23:38:15 unraid kernel: [ 9739]     0  9739   275404     1787      48       4        0             0 shfs
Apr  7 23:38:15 unraid kernel: [ 9897]     0  9897   293514    10200      84       6        0          -500 dockerd
Apr  7 23:38:15 unraid kernel: [ 9912]     0  9912    60332     2674      32       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [10662]     0 10662    18821      839      38       3        0             0 virtlockd
Apr  7 23:38:15 unraid kernel: [10668]     0 10668    35734      947      40       3        0             0 virtlogd
Apr  7 23:38:15 unraid kernel: [10683]     0 10683   219397     3555      87       4        0             0 libvirtd
Apr  7 23:38:15 unraid kernel: [10877]    99 10877     4378      495      13       3        0             0 dnsmasq
Apr  7 23:38:15 unraid kernel: [10878]     0 10878     4345       54      13       3        0             0 dnsmasq
Apr  7 23:38:15 unraid kernel: [13173]     0 13173   890575   439357    1075       6        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [25324]     0 25324    68257     1002     128       3        0             0 winbindd
Apr  7 23:38:15 unraid kernel: [19020]     0 19020    75813     3847     145       4        0             0 smbd
Apr  7 23:38:15 unraid kernel: [32548]     0 32548     6120     1157      16       3        0             0 sshd
Apr  7 23:38:15 unraid kernel: [32583]     0 32583     3401      913      10       3        0             0 bash
Apr  7 23:38:15 unraid kernel: [ 4431]     0  4431   439099   238093     631       5        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [ 4873]     0  4873   965368   900878    1902       7        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [23861]     0 23861    22658      932      17       3        0             0 top
Apr  7 23:38:15 unraid kernel: [25596]     0 25596    89436      813      25       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [25614]     0 25614     1043       19       7       3        0             0 tini
Apr  7 23:38:15 unraid kernel: [25629]     0 25629    24792     3217      53       3        0             0 supervisord
Apr  7 23:38:15 unraid kernel: [25900]     0 25900     3953      102      11       3        0             0 crond.sh
Apr  7 23:38:15 unraid kernel: [25904]     0 25904     3260      140      10       3        0             0 crond
Apr  7 23:38:15 unraid kernel: [ 8348]     0  8348   745687   460782    1062       6        0             0 qemu-system-x86
Apr  7 23:38:15 unraid kernel: [11411]     0 11411    73116      771      24       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [11429]     0 11429    18754     5479      38       3        0          1000 netdata
Apr  7 23:38:15 unraid kernel: [20722]     0 20722   106172      776      26       5        0          -500 docker-containe
Apr  7 23:38:15 unraid kernel: [20740]     0 20740     1043       26       6       3        0             0 tini
Apr  7 23:38:15 unraid kernel: [20759]     0 20759    24727     3174      50       3        0             0 supervisord
Apr  7 23:38:15 unraid kernel: [20914]    99 20914   103484    14024     194       3        0             0 Plex Media Serv
Apr  7 23:38:15 unraid kernel: [20978]    99 20978   443532    10374     125       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [21027]    99 21027    64120     8273     117       3        0             0 Plex DLNA Serve
Apr  7 23:38:15 unraid kernel: [21029]    99 21029   142257      484      62       4        0             0 Plex Tuner Serv
Apr  7 23:38:15 unraid kernel: [21126]    99 21126   221840     8802      97       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22318]    99 22318   255176     8649     100       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22455]    99 22455  5785758  5724187   11297      25        0             0 Plex Media Scan
Apr  7 23:38:15 unraid kernel: [22475]    99 22475   222958     8204      99       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22520]    99 22520   221637     8520      96       4        0             0 Plex Script Hos
Apr  7 23:38:15 unraid kernel: [22578]    99 22578    77643    29962     122       4        0             0 Plex Transcoder
Apr  7 23:38:15 unraid kernel: [22580]    99 22580     5407      190      15       3        0             0 EasyAudioEncode
Apr  7 23:38:15 unraid kernel: Out of memory: Kill process 11429 (netdata) score 1000 or sacrifice child
Apr  7 23:38:15 unraid kernel: Killed process 11429 (netdata) total-vm:75016kB, anon-rss:21916kB, file-rss:0kB, shmem-rss:0kB
Apr  7 23:38:15 unraid kernel: oom_reaper: reaped process 11429 (netdata), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Apr  7 23:38:23 unraid kernel: php[24324]: segfault at 0 ip 00000000005f42ad sp 00007ffe79297830 error 4 in php[400000+724000]
Apr  7 23:38:27 unraid kernel: php[24336]: segfault at 0 ip 00000000005f42ad sp 00007fff90976920 error 4 in php[400000+724000]
Apr  7 23:38:27 unraid kernel: php[24337]: segfault at 0 ip 00000000005f42ad sp 00007ffe3e8c7a50 error 4 in php[400000+724000]
Apr  7 23:38:30 unraid kernel: php[24350]: segfault at 0 ip 00000000005f42ad sp 00007ffeff46a9b0 error 4 in php[400000+724000]
Apr  7 23:38:33 unraid emhttp: err: sendOutput: fork failed
Apr  7 23:38:33 unraid emhttp: err: sendOutput: fork failed
Apr  7 23:38:38 unraid emhttp: err: sendOutput: fork failed

 

I have 32 GB of RAM and have never used 50% of what I have installed, so I believe it has to do with memory-allocation issues in the kernel or in a process of significance.  I'm evaluating a drop back to 6.2.4, though there are some VM issues with the fallback that need to be tinkered with to get working.  I've had it crash when the mover starts, and I had an issue today where I couldn't take the array offline because it thought the mover was still invoked, yet I couldn't find any mover process running.  Like you, I am limping along.

 

Link to comment

I ended up reducing the size of my rsnapshot job by about 80%, and it has been fine ever since. It seems it could not cope with an rm -rf of a 5 GB directory, but it can cope with a 1 GB one. Like you, I never used all of the RAM (after I upgraded to 16 GB, anyway). So yeah, I never fixed it, just worked around it.
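
The change was along these lines in rsnapshot.conf (illustrative paths again, not my actual ones):

# before: one big backup of everything (~5 GB deleted per rotation)
#backup	/usr/home/	vm/
# after: only the directories that actually change (~1 GB per rotation)
backup	/usr/home/mail/	vm/
backup	/usr/home/www/	vm/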

Link to comment