
cores and ram at 100 percent


toonamo


Not sure what happened, but I got a message that my Plex was down. Logged into Unraid and I can see all my cores and RAM are at 100 percent, but nothing is usable. No one is connected, no file transfers, nothing.
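
For reference, this is roughly what I'm planning to run over SSH to find out what is actually pinned, assuming it's one of the Docker apps and not the host itself (container names are just whatever shows in the Docker tab):

# snapshot of the top CPU/memory consumers on the host (sorted by CPU by default)
top -bn1 | head -n 25

# per-container CPU and memory usage, to see if a single Docker app is the culprit
docker stats --no-stream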

 

Not even sure what logs to check or where to begin. I'm worried I'm going to corrupt all the databases again, so I don't want to just shut it down and lose the Plex database yet again.
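
In case it helps anyone following along, this is the rough order I'm thinking of doing things in before any reboot, so the Plex database at least gets a clean stop ("plex" here is just a placeholder for whatever the container is actually named, and I'm going from memory on the diagnostics command):

# stop only the Plex container, giving it up to 60 seconds to flush its database
docker stop -t 60 plex

# grab a full diagnostics zip (should land in /boot/logs on the flash drive)
diagnostics

# the live system log, if the webGUI stays unresponsive
tail -n 200 /var/log/syslog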

 

Model: eVGA X58 I7-950x

M/B: EVGA 121-BL-E756 Version Tylersburg - s/n:

BIOS: Phoenix Technologies, LTD Version 6.00 PG. Dated: 10/26/2010

CPU: Intel® Core™ i7 @ 3066 MHz

HVM: Enabled

IOMMU: Disabled

Cache: 32 KiB, 32 KiB, 1024 KiB

Memory: 12 GiB DDR2 (max. installable capacity 16 GiB*)

Network: bond0: fault-tolerance (active-backup), mtu 1500
 eth0: 1000 Mbps, full duplex, mtu 1500

Kernel: Linux 4.19.56-Unraid x86_64

OpenSSL: 1.1.1c

Uptime: 7 days, 08:37:11

 

Syslog excerpt:

Jul 18 07:06:20 Phoenix avahi-daemon[5021]: New relevant interface vethef54e64.IPv6 for mDNS.
Jul 18 07:06:20 Phoenix avahi-daemon[5021]: Registering new address record for fe80::9c27:96ff:fe91:eed9 on vethef54e64.*.
Jul 18 07:06:23 Phoenix kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 18 07:06:23 Phoenix kernel: caller _nv000939rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Jul 18 07:06:30 Phoenix kernel: eth0: renamed from veth4565153
Jul 18 07:06:30 Phoenix kernel: device br0 entered promiscuous mode
Jul 18 07:06:38 Phoenix kernel: eth0: renamed from vethc5fdd08
Jul 18 07:06:41 Phoenix CA Backup/Restore: #######################
Jul 18 07:06:41 Phoenix CA Backup/Restore: appData Backup complete
Jul 18 07:06:41 Phoenix CA Backup/Restore: #######################
Jul 18 07:06:41 Phoenix CA Backup/Restore: Deleting /mnt/user/backups/appdata backup/[email protected]
Jul 18 07:06:42 Phoenix CA Backup/Restore: Backup / Restore Completed
Jul 18 10:32:50 Phoenix kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 18 10:32:50 Phoenix kernel: caller _nv000939rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Jul 18 10:45:04 Phoenix kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 18 10:45:04 Phoenix kernel: caller _nv000939rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Jul 18 10:45:38 Phoenix kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 18 10:45:38 Phoenix kernel: caller _nv000939rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Jul 18 19:48:04 Phoenix kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
Jul 18 19:48:04 Phoenix kernel: caller _nv000939rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
Jul 19 03:40:01 Phoenix crond[1727]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
Jul 20 03:40:01 Phoenix crond[1727]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
Jul 21 03:40:01 Phoenix crond[1727]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
Jul 21 04:30:01 Phoenix root: Fix Common Problems Version 2019.06.30a
Jul 21 04:30:01 Phoenix root: Fix Common Problems: Warning: Docker Application unifi-controller-local has an update available for it
Jul 21 04:30:01 Phoenix root: Fix Common Problems: Warning: Docker Application unifi-controller-remote has an update available for it
Jul 21 04:30:01 Phoenix root: Fix Common Problems: Warning: Scheduled Parity Checks are not enabled
Jul 22 03:40:01 Phoenix crond[1727]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
Jul 22 19:06:29 Phoenix sshd[24395]: Accepted password for root from 192.168.1.104 port 60156 ssh2
Jul 22 19:09:18 Phoenix nginx: 2019/07/22 19:09:18 [error] 5076#5076: *1365037 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.1.104, server: , request: "GET /Docker HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "phoenix", referrer: "http://phoenix/VMs"
Jul 22 19:13:52 Phoenix nginx: 2019/07/22 19:13:52 [error] 5076#5076: *1365400 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.104, server: , request: "POST /webGui/include/DashboardApps.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "phoenix", referrer: "http://phoenix/Dashboard"
Jul 22 19:18:57 Phoenix emhttpd: req (2): startState=STARTED&file=&cmdCheck=Check&optionCorrect=correct&csrf_token=****************
Jul 22 19:18:57 Phoenix kernel: mdcmd (38): check correct
Jul 22 19:18:57 Phoenix kernel: md: recovery thread: check P ...
Jul 22 19:18:57 Phoenix kernel: md: using 1536k window, over a total of 9766436812 blocks.
Jul 22 19:26:43 Phoenix nginx: 2019/07/22 19:26:43 [error] 5076#5076: *1367385 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.104, server: , request: "POST /webGui/include/DashboardApps.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "phoenix", referrer:

 


