vik2or Posted December 16, 2019

After I upgraded to Unraid 6.8.0, I noticed that one Docker container (a game server for Don't Starve Together) wouldn't start anymore. I tried recreating the container, checked permissions on the shares, and tried the Docker image in a VM, where it worked, so I concluded that under 6.8 the game server could no longer mount its data zips for some reason. I downgraded to 6.7.2, but when I started another Docker game server (Minecraft this time) that I mostly keep running, I noticed that all cores were at 100%, even the ones isolated for VMs, which should not be affected by Docker. I thought something had gone wrong in the downgrade, so I went back to 6.8 (I'll have to run the DST server in a VM, but at least the CPU won't overheat). But it started happening again when I started the Minecraft server. I restarted again and left the machine without any game server running; it looked fine for about an hour, but then it happened again. I don't know what is causing this. I attached an htop screenshot from the last time it happened. The "vik2or" user is only used for FTP.
vik2or Posted December 16, 2019

Here is the diagnostics zip from after it happened again. Stopping the Docker service from Settings restores the CPU to normal. tower-diagnostics-20191217-0014.zip
SnickySnacks Posted December 17, 2019

Is your server exposed to the internet? A few of those process names look suspiciously like malware: /tmp/kdevtmpfsi and /var/tmp/kinsing.
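A quick way to check for these two specifically is a minimal sketch like the following; the paths are just the ones named above, so this is not a general malware scanner:

```shell
#!/bin/sh
# Minimal sketch: check for the two files named in this thread and any
# running process matching them. Not a general malware detector.
for f in /tmp/kdevtmpfsi /var/tmp/kinsing; do
    if [ -e "$f" ]; then
        echo "suspicious file present: $f"
    fi
done

# List matching processes, if any; `grep -v grep` drops this pipeline itself.
ps -eo pid,user,args | grep -E 'kdevtmpfsi|kinsing' | grep -v grep || true
```

On a clean system both checks print nothing; any hit is worth investigating.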
vik2or Posted December 17, 2019

Yes it is. I have an nginx Docker container, which I found to be the problem: if I stop it, the CPU returns to normal.
BRiT Posted December 17, 2019 (edited)

This process was using 136% CPU, but your other process is using 445%.

nobody 29739 136 20.0 13300612 3304304 ? Sl 00:05 12:26 | | \_ /usr/bin/java -Dfile.encoding=utf-8 -d64 -server -XX:+AggressiveOpts -XX:ParallelGCThreads=3 -XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent -XX:MaxGCPauseMillis=10 -XX:GCPauseIntervalMillis=50 -XX:+UseFastAccessorMethods -XX:+OptimizeStringConcat -XX:NewSize=84m -XX:+UseAdaptiveGCBoundary -XX:NewRatio=3 -Dfml.readTimeout=90 -Ddeployment.trace=true -Ddeployment.log=true -Ddeployment.trace.level=all -Xmx7000M -jar ForgeMod.jar nogui

Top shows them as the following:

11488 vik2or 20 0 3097700 2.3g 2704 S 717.6 14.6 8:26.73 kdevtmpfsi
29739 nobody 20 0 12.7g 3.2g 30088 S 41.2 20.1 12:26.83 java

vik2or seems to be from the following docker:

root 27009 0.0 0.0 107696 10384 ? Sl Dec16 0:00 | \_ containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/523b61a28e7c6e439271ec8a091126d8f9687d6c789bd0976c43cfc42782134b -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 27026 0.0 0.1 48004 18524 ? Ss Dec16 0:00 | | \_ /usr/bin/python /usr/bin/supervisord
vik2or 11488 445 14.6 3097700 2410400 ? Ssl 00:13 8:27 | | \_ /tmp/kdevtmpfsi
vik2or 11038 0.2 0.2 470900 41668 ? Sl 00:12 0:00 | | \_ /var/tmp/kinsing
root 28067 0.0 0.0 134344 10460 ? S Dec16 0:00 | | \_ nginx: master process /usr/sbin/nginx
vik2or 28304 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
vik2or 28305 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
vik2or 28306 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
vik2or 28307 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
vik2or 28308 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
vik2or 28309 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
vik2or 28310 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
vik2or 28311 0.0 0.0 134344 3400 ? S Dec16 0:00 | | | \_ nginx: worker process
root 28068 0.0 0.0 65516 5452 ? S Dec16 0:00 | | \_ /usr/sbin/sshd -D
root 28069 0.0 0.0 29032 8636 ? Sl Dec16 0:01 | | \_ /usr/bin/redis-server 127.0.0.1:6379
root 28070 0.0 0.1 510024 32516 ? S Dec16 0:00 | | \_ php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
vik2or 28584 0.8 0.1 512288 18804 ? S Dec16 0:13 | | | \_ php-fpm: pool www
vik2or 28585 0.8 0.0 512288 16024 ? S Dec16 0:14 | | | \_ php-fpm: pool www
vik2or 30426 0.4 0.0 512288 16028 ? S Dec16 0:06 | | | \_ php-fpm: pool www
root 28073 0.0 0.0 23296 1372 ? S Dec16 0:00 | | \_ /usr/bin/beanstalkd

Edited December 17, 2019 by BRiT
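Instead of walking the whole `ps` tree by hand, the offending PID can usually be traced back to its container via its cgroup. A rough sketch (the PID is the one reported above; substitute whatever top/htop shows on your system):

```shell
#!/bin/sh
# Sketch: resolve which Docker container a PID belongs to by reading
# the 64-hex-digit container ID out of /proc/<pid>/cgroup.
PID=11488    # substitute the PID reported by top/htop
CID=$(grep -oE '[0-9a-f]{64}' "/proc/$PID/cgroup" 2>/dev/null | head -n1)
if [ -n "$CID" ]; then
    docker ps --no-trunc --format '{{.ID}} {{.Names}}' | grep "$CID"
else
    echo "PID $PID is not in a Docker container (or no longer exists)"
fi
```

This works because Docker names each container's cgroup after the full container ID, the same long hash visible in the containerd-shim command line above.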
vik2or Posted December 17, 2019 Author Share Posted December 17, 2019 it was exactly what @SnickySnacks said. I removed those 2 files and i found a cron i didn't set, removed that too and as i don't really need to access that nginx docker form the internet, lately i only use it locally, i removed the port forwards from the router. Thank you both very much! Quote Link to comment
Peter Schumacher Posted January 21, 2020

Also had the pleasure of this malware. Noticed that the files were owned by www-data, and the infection stopped after updating an old RoundCube 1.2-RC to the latest version. FYI.