Michael_P

Members
  • Posts: 668

Everything posted by Michael_P

  1. Sounds like a bad board to me; even one of those tiny little LGA pins slightly out of place will do weird things to RAM
  2. If you're limiting it in the container settings, it didn't work. The only other thing is to figure out why it's using so much, whether that's a bad config or a bad install. You can blow it away and reinstall from scratch to see if that fixes it.
  3. Nah, Prowlarr was using ~11GB when it was killed by the reaper, so something is wrong with it
  4. And verify you have two dashes in front of cpus (--cpus=".5")
  5. The word is it's going to be in 6.13
  6. If you have a dashboard window open in a browser for too long, this is the likely cause
  7. Right, which is why I created a bug report instead of posting in each docker container's support thread
  8. It's a field in the container's settings, likely being passed with the docker run command on container start. The 'bug' is that ps.txt needs to be sanitized, or at the very least the user should be warned that sensitive information may be stored there
  9. Here's one in this thread - system/ps.txt shows their user name and password in plain text
  10. I've already warned 2 separate users in their threads in General, and verified the same thing was happening in mine. Note that each of them was using a separate VPN-enabled container, deluge and sabnzbd. Here's one that's still up in the support thread you linked; their password is in plain text in system/ps.txt:
  11. Not sure what you're referring to, but 'Anonymize diagnostics' was checked before generating. It's the user name and password in plain text, as shown in my original post. If you're running a VPN-enabled docker container, such as @binhex delugevpn in my example, the user name and password are included in the generated diagnostics set in 'system/ps.txt'
  12. If you want to re-post, extract it, edit the credentials out of system/ps.txt, then re-zip and post (see the example commands after this list)
  13. @JorgeB Here's another one. @planetwilson, your VPN user and pass are exposed in the diagnostics file; you should delete it from your post and change the password now.
  14. Don't know if this has been reported before, but the saved diagnostics package contains VPN passwords if the server is currently running a docker container using a VPN. That would create an issue for anyone who has posted their diagnostics to the forum (note that I've masked the user and pass here):
      \_ /usr/bin/openvpn --reneg-sec 0 --mute-replay-warnings --auth-nocache --setenv VPN_PROV pia --setenv VPN_CLIENT openvpn --setenv DEBUG false --setenv VPN_DEVICE_TYPE tun0 --setenv VPN_ENABLED yes --setenv VPN_REMOTE_SERVER denmark.privacy.network --setenv APPLICATION deluge --script-security 2 --writepid /root/openvpn.pid --remap-usr1 SIGHUP --log-append /dev/stdout --pull-filter ignore up --pull-filter ignore down --pull-filter ignore route-ipv6 --pull-filter ignore ifconfig-ipv6 --pull-filter ignore tun-ipv6 --pull-filter ignore dhcp-option DNS6 --pull-filter ignore persist-tun --pull-filter ignore reneg-sec --up /root/openvpnup.sh --up-delay --up-restart --keepalive 10 60 --setenv STRICT_PORT_FORWARD yes --setenv VPN_USER xxxxxxxx --setenv VPN_PASS xxxxxxxxxx --down /root/openvpndown.sh --disable-occ --auth-user-pass credentials.conf --cd /config/openvpn --config /config/openvpn/denmark.ovpn
  15. You should pull this down; your VPN user and pass are exposed. You should change them now. @JorgeB or another mod nearby
  16. Misbehaving docker container:
      root 13676 0.0 0.0 722964 18080 ? Sl 06:33 0:07 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 20ad1e7c0dcf5ee1f8fd1218813de7e4f9c1ffba466f4121c06b3ff55278aa55 -address /var/run/docker/containerd/containerd.sock
      root 13695 0.0 0.0 208 20 ? Ss 06:33 0:02 \_ /package/admin/s6/command/s6-svscan -d4 -- /run/service
      root 13737 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6-linux-init-shutdownd
      root 13738 0.0 0.0 200 4 ? Ss 06:33 0:00 | \_ /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -c /run/s6/basedir -g 3000 -C -B
      root 13748 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6rc-oneshot-runner
      root 13760 0.0 0.0 188 4 ? Ss 06:33 0:00 | \_ /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. --
      root 13749 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6rc-fdholder
      root 13750 0.0 0.0 212 64 ? S 06:33 0:00 \_ s6-supervise backend
      root 36746 0.0 0.0 3924 3092 ? Ss 18:34 0:00 | \_ bash ./run backend
      root 36760 0.7 0.0 1283988 95036 ? Sl 18:34 0:02 | \_ node --abort_on_uncaught_exception --max_old_space_size=250 index.js
      root 13751 0.0 0.0 212 16 ? S 06:33 0:00 \_ s6-supervise frontend
      root 13752 0.0 0.0 212 24 ? S 06:33 0:00 \_ s6-supervise nginx
      root 36664 0.1 0.0 133152 43800 ? Ss 18:34 0:00 \_ nginx: master process nginx
      root 36712 0.2 0.0 134436 41980 ? S 18:34 0:00 \_ nginx: worker process
      root 36713 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36714 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36715 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36716 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36717 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36718 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36719 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36720 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36721 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36722 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36723 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36724 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36725 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36726 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36727 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36728 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36729 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36730 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36731 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36732 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36733 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36734 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36735 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36736 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36737 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36738 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36739 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36740 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36741 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36742 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36743 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36744 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36745 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36747 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36748 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36749 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36750 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36753 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36754 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36757 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36758 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36759 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36761 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36762 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36763 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36764 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36765 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36766 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36767 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36768 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36769 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36770 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36771 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36772 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36773 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
      root 36774 0.0 0.0 132620 36848 ? S 18:34 0:00 \_ nginx: cache manager process
  17. Yep, toggle advanced view while editing the container's config and add a limit to the extra parameters line (example after this list)
  18. Word on the street is it's in 6.13 whenever that gets released
  19. Looks that way - you can try to fix it or limit the memory allowed to the container:
      nobody 21229 0.0 0.0 5488 276 ? Ss Mar25 0:00 \_ /bin/bash /launch.sh
      nobody 21287 0.0 0.0 2388 72 ? S Mar25 0:00 \_ sh ./run.sh
      nobody 21288 12.4 23.2 32248888 15301392 ? Sl Mar25 1008:20 \_ java @user_jvm_args.txt @libraries/net/minecraftforge/forge/1.18.2-40.1.84/unix_args.txt
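
For the memory and CPU limits mentioned in a few of the posts above, here's a minimal sketch; the image name and limit values are placeholders, not taken from anyone's setup. On Unraid the flags go on the Extra Parameters line shown in the container template's advanced view, which just gets appended to the docker run command:

    # Extra Parameters line (advanced view) - placeholder limits
    --memory=2g --cpus=".5"

    # equivalent plain docker run, for reference (example image)
    docker run -d --name=prowlarr --memory=2g --cpus=".5" lscr.io/linuxserver/prowlarr

Once the container is running, docker stats will show whether the limit actually took effect.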
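
And a rough sketch of checking for and scrubbing VPN credentials from a diagnostics zip before re-posting it, as described in a couple of the posts above; the zip filename here is just an example:

    unzip tower-diagnostics-20240101.zip -d diag
    grep -iE 'VPN_USER|VPN_PASS|auth-user-pass' diag/system/ps.txt   # see what leaked
    nano diag/system/ps.txt                                          # mask the values by hand
    (cd diag && zip -r ../diagnostics-clean.zip .)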