juan11perez Posted December 22, 2019

Good day. I received the above-referenced notification this morning. Looking at Glances, it seems to have happened around 00:00 my time (Dec 22). It's baffling, as I have 64 GB of RAM and use 16 GB for Dockers and 16 GB for a VM.

M/B: ASUSTeK COMPUTER INC. ROG STRIX X470-F GAMING, Version Rev X.0x
CPU: AMD Ryzen 9 3900X 12-Core @ 4000 MHz
Memory: 64 GiB DDR4 (max. installable capacity 128 GiB)
Kernel: Linux 5.3.12-Unraid x86_64

Attached are the system logs. Any guidance is much appreciated. Thank you.

tower-diagnostics-20191222-0449.zip
Frank1940 Posted December 22, 2019

I would suspect that one of your Dockers, plugins, or VMs is writing files to Unraid's RAM disk. (The boot process of Unraid sets up a RAM disk and installs Unraid onto it. To you as a Linux user, and to the Linux OS, this RAM disk appears to be a physical disk. You (and the OS) can read and write to it the same as a physical disk.)

Post up the results of these two commands:

ls -al /mnt
ls -al /mnt/user

You should also look through all of your Dockers, plugins, and VMs and make sure that any mapping used for any type of data storage points at a location under /mnt. If it does not, it is most likely pointing at the RAM disk!
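A quick way to see how full the RAM-backed filesystems actually are (a minimal sketch using standard Linux commands, nothing Unraid-specific):

df -h / /var/log /tmp                          # rootfs and the usual tmpfs mounts
du -sh /tmp/* 2>/dev/null | sort -hr | head    # largest items in /tmp

Anything large that shows up here is sitting in RAM rather than on the array.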
juan11perez Posted December 22, 2019 (Author)

@Frank1940 thank you for looking into this.

root@Tower:~# ls -al /mnt
total 0
drwxr-xr-x  9 root   root   180 Dec 22 17:56 ./
drwxr-xr-x 20 root   root   440 Dec 22 08:49 ../
drwxrwxrwx  5 root   root   120 Dec 22 17:56 RecycleBin/
drwxrwxrwx  5 nobody users   50 Dec 22 10:44 cache/
drwxrwxr-x 11 nobody users  184 Dec 22 10:44 disk1/
drwxrwxr-x  8 nobody users  126 Dec 22 10:44 disk2/
drwxrwxrwt  3 nobody users   60 Dec 20 08:08 disks/
drwxrwxrwx  1 nobody users   50 Dec 22 10:44 user/
drwxrwxr-x  1 nobody users  184 Dec 22 10:44 user0/

root@Tower:~# ls -al /mnt/user
total 20
drwxrwxrwx 1 nobody users   50 Dec 22 10:44 ./
drwxr-xr-x 9 root   root   180 Dec 22 17:57 ../
drwxrwxrwx 1 nobody users 4096 Dec 22 09:19 appdata/
drwxrwxrwx 1 nobody users   55 Dec 18 16:38 archive/
drwxrwxrwx 1 nobody users  134 Dec 18 12:54 backup/
drwxrwxrwx 1 nobody users   76 Dec 14 21:10 domains/
drwxrwxrwx 1 nobody users   72 Nov 19 23:05 downloads/
drwxrwxrwx 1 nobody users 4096 Dec 17 10:15 isos/
drwxrwxrwx 1 nobody users   62 Feb  1  2019 krusader/
drwxrwxrwx 1 nobody users 8192 Oct 26 12:21 lost+found/
drwxrwxrwx 1 nobody users  247 Nov 19 23:24 media/
drwxrwxrwx 1 nobody users  160 Dec 19 09:45 public/
drwxrwxrwx 1 nobody users   35 Dec 11 01:45 system/
drwxrwxrwx 1 nobody users   79 Dec 22 12:20 timemachine/

I rechecked my Dockers and have all storage pointing to /mnt/user/. The only thing I have pointing at RAM is the Plex transcode directory, mapped as:

- /tmp/plex:/transcode

Could it be the culprit??
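If you want to double-check what each container actually mounts from the host, something like this works (a sketch; it assumes the container is named "plex", so adjust the name to yours):

docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' plex

Any host-side source under /tmp or / (rather than /mnt/...) is living on the RAM disk.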
Frank1940 Posted December 22, 2019

What is the output of:

ls -al /mnt/disks

I don't use the Unassigned Devices plugin (and I didn't check to see if you do), but if that is not a physical disk, it is in RAM.

59 minutes ago, juan11perez said:
- /tmp/plex:/transcode
Could it be the culprit??

That is a possibility. You might want to check on the support thread for the Plex install that you are using. You can use the ls -al command to see what is in that folder.
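A quick way to confirm whether anything under /mnt/disks is a real mount rather than a folder sitting on the RAM disk (standard Linux commands; the wildcard path is just an example):

df -h /mnt/disks/*

Entries whose Filesystem column shows rootfs or tmpfs are in RAM; real Unassigned Devices mounts should show an actual device such as /dev/sdX1.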
juan11perez Posted December 22, 2019 (Author)

I am using the Unassigned Devices plugin.

root@Tower:~# ls -al /mnt/disks
total 4
drwxrwxrwt 3 nobody users   60 Dec 20 08:08 ./
drwxr-xr-x 9 root   root   180 Dec 23 00:14 ../
drwxrwxrwx 5 nobody users 4096 Sep  6 09:00 BACKUP/

BACKUP is a spare drive.

I checked the /tmp dir, but it's practically empty:

root@Tower:/tmp# du -h --max-depth=1 | sort -hr
11M     .
5.0M    ./community.applications
4.6M    ./fix.common.problems
552K    ./plugins
516K    ./user.scripts
32K     ./notifications
20K     ./unassigned.devices
16K     ./recycle.bin
4.0K    ./emhttp
0       ./tmux-0
0       ./plex
0       ./jellyfin
0       ./ca.turbo
0       ./ca.backup2
0       ./.preclear
0       ./.X11-unix
0       ./.ICE-unix
bonienl Posted December 22, 2019

"/tmp/plex" will only get filled when Plex starts transcoding video content. You should check it while Plex is actually transcoding.
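One simple way to keep an eye on it during a transcode (a sketch; assumes the host-side path is /tmp/plex as mapped above):

watch -n 10 du -sh /tmp/plex

If that folder grows into the tens of gigabytes during playback, the transcode mapping is what is eating your RAM.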
BRiT Posted December 22, 2019

Looking at the processes using a lot of memory, at least from the virtual-memory viewpoint: that is at least 50 GB between three processes. Some of it is expected, like your VM at 25.7% (qemu) and your database (InfluxDB for Glances), but likely not expected is your MariaDB container (the Java/Tomcat process running alongside MariaDB). You might want to figure out what exactly that Java process for MariaDB is doing as part of its /etc/firstrun/tomcat-wrapper.sh startup, or try applying memory limits to your dockers.

15086 root 20 0 21.7g 881148 7036 S 0.0 1.3 0:29.58 java
root 28845 0.0 0.0 109100 10184 ? Sl Dec20 1:02 | \_ containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/32058f818192f568d243e2881ebc339f68ef8a750b9d75bb74ef037e35e7226f -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 28871 0.0 0.0 4184 84 ? Ss Dec20 0:02 | | \_ /bin/tini -- /usr/bin/supervisord -n -c /etc/supervisor/conf.d/supervisord-mariadb.conf
root 29149 0.0 0.0 57720 13176 ? S Dec20 0:20 | | \_ /usr/bin/python /usr/bin/supervisord -n -c /etc/supervisor/conf.d/supervisord-mariadb.conf
root 15073 0.0 0.0 19876 788 ? S 06:43 0:00 | | \_ /bin/bash /etc/firstrun/tomcat-wrapper.sh
root 15086 0.3 1.3 22734356 881148 ? Sl 06:43 0:29 | | | \_ /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djava.util.logging.config.file=/var/lib/tomcat8/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -classpath /usr/share/tomcat8/bin/bootstrap.jar:/usr/share/tomcat8/bin/tomcat-juli.jar -Dcatalina.base=/var/lib/tomcat8 -Dcatalina.home=/usr/share/tomcat8 -Djava.io.tmpdir=/var/lib/tomcat8/temp org.apache.catalina.startup.Bootstrap start
nobody 32613 0.0 0.0 79032 9776 ? S Dec20 0:00 | | \_ /usr/local/guacamole/sbin/guacd -b 0.0.0.0 -L info -f
root 32617 0.0 0.0 19996 572 ? S Dec20 0:00 | | \_ /bin/bash /usr/bin/mysqld_safe --skip-syslog
nobody 1023 0.0 0.1 1898776 75968 ? Sl Dec20 2:02 | | \_ /usr/sbin/mysqld --basedir=/usr --datadir=/config/databases --plugin-dir=/usr/lib/mysql/plugin --user=nobody --skip-log-error --log-error=/config/databases/32058f818192.err --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306

12822 root 20 0 16.8g 16.2g 19368 S 31.2 25.8 3344:45 qemu-syst+
root 12822 114 25.7 17575796 16964632 ? SLl Dec20 3344:45 /usr/bin/qemu-system-x86_64 -name guest=archlabs

20886 root 20 0 11.0g 582796 20864 S 6.2 0.9 13:07.74 influxd
root 20886 1.1 0.8 11574620 582796 ? Ssl Dec21 13:07 | | \_ influxd

Additional processes using smaller scraps of memory:

8868 root 20 0 1611544 111248 5876 S 0.0 0.2 16:25.43 python3
root 8868 0.5 0.1 1611544 111248 ? S Dec20 16:25 | | \_ python3 /app/intelligence.py
8871 root 20 0 5738412 1.3g 13212 S 0.0 2.1 40:42.89 python3
root 8871 1.4 2.1 5738412 1408524 ? Sl Dec20 40:42 | | \_ python3 /app/intelligence.py
8873 root 20 0 3261176 958952 23048 S 0.0 1.5 333:31.89 python3
root 8873 11.5 1.4 3261176 958952 ? Sl Dec20 333:31 | | \_ python3 /app/intelligence.py
12042 nobody 20 0 8223412 553348 21308 S 0.0 0.8 3:00.97 java
nobody 12042 0.5 0.8 8223412 553348 ? Ssl 00:00 3:00 | | \_ java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
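If you want to cap a container on a running system from the command line (a sketch; the container name here is only an example, use the names from "docker ps"):

docker update --memory=2g --memory-swap=2g guacamole
docker stats --no-stream    # shows each container's resident usage against its applied limit

On Unraid you can also make a limit permanent by adding --memory=2g to the container's Extra Parameters in its template.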
juan11perez Posted December 23, 2019 (Author)

@BRiT thank you. That MariaDB belongs to the Guacamole docker. These dockers (InfluxDB and Guacamole) do have memory caps, but it seems they're not being respected. I'm setting them via docker compose:

mem_limit: 2G
mem_reservation: 1G
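One way to confirm whether a limit actually made it onto the running container (a sketch; swap in your own container name):

docker inspect --format '{{.HostConfig.Memory}}' guacamole    # value in bytes; 0 means no limit was applied

Worth noting that mem_limit constrains resident memory, not the large virtual-memory figures shown above, so a big VIRT value on its own does not mean the cap is being ignored.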