FreeMan Posted January 4, 2020

Sometime yesterday, I noticed that one core of my CPU was pinned. I realized this morning that my log file is full as well. The specific core bounces around a bit - it will change to a different core at about 75%, then immediately climb to 100%. I haven't a clue what's causing this; if someone can interpret the attached tea leaves, I'd be most grateful.

nas-diagnostics-20200104-0912.zip
BRiT Posted January 4, 2020

It's your dockers. Look at your ps or top reports.

top - 09:12:37 up 3 days, 22:43, 0 users, load average: 2.45, 3.10, 2.79
Tasks: 422 total, 3 running, 418 sleeping, 0 stopped, 1 zombie
%Cpu(s): 59.1 us, 9.1 sy, 0.0 ni, 31.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 23047.5 total, 274.2 free, 6474.9 used, 16298.4 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 15115.3 avail Mem

  PID  USER   PR NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
23267  nobody 20  0 1651200 291500 28772 S 131.2  1.2   3:08.72 mono
22159  nobody 20  0   21.1g  90960 31256 S 106.2  0.4 313:56.59 EmbyStat
28177  root   20  0  904128 127168  1136 S  18.8  0.5 422:02.25 shfs
 6031  root   20  0  349140   4340  3540 S   6.2  0.0  29:31.27 emhttpd
22956  nobody 20  0   45080  35656  2780 S   6.2  0.2   0:40.92 nzbget
28699  nobody 20  0   62476  21568 18028 S   6.2  0.1 173:39.48 smbd
    1  root   20  0    2468   1840  1736 S   0.0  0.0   0:33.18 init
    2  root   20  0       0      0     0 S   0.0  0.0   0:00.16 kthreadd
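For anyone hitting the same thing: a couple of standard Docker commands make it quick to tie a hot PID back to its container (a sketch; PID 23267 is the mono process from the top output above):

# Per-container CPU and memory usage at a glance
docker stats --no-stream

# The cgroup path of a PID contains the ID of the container that owns it
grep docker /proc/23267/cgroup

# List running containers with untruncated IDs to match against
docker ps --no-trunc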
BRiT Posted January 4, 2020

That mono process is part of Radarr.

root   22837 0.1 0.0  109104   7660 ? Sl  03:36 0:39 | \_ containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7c083e04685d42459fcac88f702479ce4a413bad7e348bf06085a0650e553bdd -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root   22876 0.0 0.0     204      4 ? Ss  03:36 0:00 | |  \_ s6-svscan -t0 /var/run/s6/services
root   23032 0.0 0.0     204      4 ? S   03:36 0:00 | |    \_ s6-supervise s6-fdholderd
root   23264 0.0 0.0     204      4 ? S   03:36 0:00 | |    \_ s6-supervise radarr
nobody 23267 0.9 1.2 1651200 291500 ? Ssl 03:36 3:08 | |      \_ mono --debug Radarr.exe -nobrowser -data=/config
BRiT Posted January 4, 2020

The other docker container:

root   21735  0.1 0.0   109104  7808 ? Sl   03:36 0:39   | \_ containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/9c63bc954f1929047bfefedca0d520e92dbb1ca7dfb2e3fc61a48792a4c26aa9 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root   21752  0.0 0.0      204     4 ? Ss   03:36 0:00   | |  \_ s6-svscan -t0 /var/run/s6/services
root   21826  0.0 0.0      204     4 ? S    03:36 0:00   | |    \_ s6-supervise s6-fdholderd
root   22151  0.0 0.0      204     4 ? S    03:36 0:00   | |    \_ s6-supervise embystat
nobody 22159 93.3 0.3 22089576 90960 ? SLsl 03:36 313:56 | |      \_ /opt/embystat/EmbyStat --no-updates
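The long hash in each containerd-shim -workdir path is the container's full ID, so it can be resolved to a name directly with the stock docker CLI - the ID below is the EmbyStat one from the tree above (the name comes back with a leading slash):

# Resolve a full container ID to its name
docker inspect --format '{{.Name}}' 9c63bc954f1929047bfefedca0d520e92dbb1ca7dfb2e3fc61a48792a4c26aa9

# Or list every running container's untruncated ID alongside its name
docker ps --no-trunc --format '{{.ID}}  {{.Names}}'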
FreeMan Posted January 4, 2020

4 minutes ago, BRiT said:
"It's your dockers. Look at your ps or top reports."

Gah! That makes perfect sense - I just installed EmbyStat a couple of days ago, and I guess it runs non-stop! I stopped the docker and CPU utilization dropped back to normal. I've had Radarr running for a while now - the occasional spike when it's scanning isn't a worry...
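For reference, the stop-and-verify step is just the following - this assumes the container shows up as "EmbyStat" in docker ps; substitute whatever name yours actually uses:

docker stop EmbyStat        # stop the suspect container
docker stats --no-stream    # confirm its CPU share is gone
uptime                      # load average should start falling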
FreeMan Posted January 4, 2020

The log seems to be getting spammed with tons of OOM errors like:

Jan 4 05:39:09 NAS nginx: 2020/01/04 05:39:09 [crit] 32725#32725: ngx_slab_alloc() failed: no memory
Jan 4 05:39:09 NAS nginx: 2020/01/04 05:39:09 [error] 32725#32725: shpool alloc failed
Jan 4 05:39:09 NAS nginx: 2020/01/04 05:39:09 [error] 32725#32725: nchan: Out of shared memory while allocating message of size 11260. Increase nchan_max_reserved_memory.
Jan 4 05:39:09 NAS nginx: 2020/01/04 05:39:09 [error] 32725#32725: *1720414 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Jan 4 05:39:09 NAS nginx: 2020/01/04 05:39:09 [error] 32725#32725: MEMSTORE:00: can't create shared message for channel /disks
Jan 4 05:39:10 NAS nginx: 2020/01/04 05:39:10 [crit] 32725#32725: ngx_slab_alloc() failed: no memory
Jan 4 05:39:10 NAS nginx: 2020/01/04 05:39:10 [error] 32725#32725: shpool alloc failed
Jan 4 05:39:10 NAS nginx: 2020/01/04 05:39:10 [error] 32725#32725: nchan: Out of shared memory while allocating message of size 11260. Increase nchan_max_reserved_memory.
Jan 4 05:39:10 NAS nginx: 2020/01/04 05:39:10 [error] 32725#32725: *1720418 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"

I notice those were at 05:39 this morning and it's currently 09:58, so it seems to have stopped four hours ago. Is that due to a lack of space for logging (log files rotate, don't they?), or did whatever was causing the OOM stop? Where is the log located while the server is running - is it on the RAM disk? Can/should I clear the log, or will the older logs just rotate out, freeing up space? Do I need a reboot just to help things settle down?

(scurries off to ensure EmbyStat is not set to auto-start!)
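On Unraid the syslog lives on a small RAM-backed tmpfs mounted at /var/log, so it can fill up long before the system runs out of memory. Two standard commands show how full it is and what is eating the space (read-only, safe to run as-is):

df -h /var/log                                   # how full is the log filesystem?
du -sh /var/log/* 2>/dev/null | sort -rh | head  # largest entries first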
Squid Posted January 4, 2020

Those aren't OOMs in the "classic" sense. It's more akin to a message being dropped in communication between your browser and the server when the UI is doing something like updating the read/write rates on the drives in real time, etc. From a functional point of view, no processes were killed off.

What was going on in your browsers during that time frame? (I also couldn't easily find a reference in unRaid's GUI to that particular message system.)
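For background: nchan is the nginx pub/sub module the Unraid webGUI uses to push live updates to the browser, and the /pub/disks channel in those log lines is the disk-stats feed the dashboard subscribes to. If it recurs, a sketch of how to watch the errors live and see where the GUI wires nchan up (standard tools; /var/log/syslog is where Unraid keeps its running log):

# watch for new nchan shared-memory errors as they happen
tail -f /var/log/syslog | grep -i nchan

# see where the webGUI's nginx config references nchan (read-only, for reference;
# Unraid regenerates this config, so hand edits likely won't survive a reboot)
grep -rn nchan /etc/nginx/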
FreeMan Posted January 4, 2020 Author Share Posted January 4, 2020 (edited) I have a tenancy of leaving the main Unraid GUI open, as well as others for dockers I regularly use. I suppose that's a bad habit, but I guess I'll need to break it... Other than the web GUI, none update regularly. Sent from my moto g(7) using Tapatalk Edited January 4, 2020 by FreeMan Quote Link to comment