Michael_P

Members
  • Posts: 660

Everything posted by Michael_P

  1. The word is it's going to be in 6.13
  2. If you leave a dashboard window open in a browser too long, this is the likely cause
  3. Right, which is why I created a bug report instead of posting in each docker container's support thread
  4. It's a field in the container's settings, likely being passed with the docker run command on container start. The 'bug' is that ps.txt needs to be sanitized, or at the very least the user should be warned that sensitive information may be stored there
  5. Here's one in this thread - system/ps.txt shows their user name and password in plain text
  6. I've already warned 2 separate users in their threads in General, and verified the same thing was happening in mine. Note that each of them was using a different VPN-enabled container, deluge and sabnzbd. Here's one that's still up in the support thread you linked - their password is in plain text in system/ps.txt:
  7. Not sure what you're referring to, but 'Anonymize diagnostics' was checked before generating. It's the user name and password in plain text, as shown in my original post. If you're running a VPN-enabled docker container, such as @binhex delugevpn in my example, the user name and password are included in the generated diagnostics set in 'system/ps.txt'
  8. If you want to re-post, extract it and edit it out of system/ps.txt then re-zip and post
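     A rough sketch of that cleanup, assuming the downloaded archive is called something like tower-diagnostics.zip and the credentials appear as --setenv VPN_USER / --setenv VPN_PASS arguments (adjust the file name and sed patterns to whatever is actually in your ps.txt):
        # unpack the diagnostics archive into a working directory
        unzip tower-diagnostics.zip -d diag
        # mask the values that follow --setenv VPN_USER and --setenv VPN_PASS in the process listing
        sed -i -e 's/\(--setenv VPN_USER \)[^ ]*/\1xxxxxxxx/' \
               -e 's/\(--setenv VPN_PASS \)[^ ]*/\1xxxxxxxx/' diag/system/ps.txt
        # confirm nothing sensitive is left before re-zipping
        grep -iE 'VPN_USER|VPN_PASS' diag/system/ps.txt
        cd diag && zip -r ../tower-diagnostics-clean.zip . && cd ..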
  9. @JorgeB Here's another one @planetwilson your VPN user and pass are exposed in the diagnostics file, you should delete it from your post and change the password now.
  10. Don't know if this has been reported before, but the saved diagnostics package contains VPN passwords if the server is currently running a docker container using a VPN. That would create an issue for anyone who has posted their diagnostics to the forum (note that I've masked the user and pass here):
     \_ /usr/bin/openvpn --reneg-sec 0 --mute-replay-warnings --auth-nocache --setenv VPN_PROV pia --setenv VPN_CLIENT openvpn --setenv DEBUG false --setenv VPN_DEVICE_TYPE tun0 --setenv VPN_ENABLED yes --setenv VPN_REMOTE_SERVER denmark.privacy.network --setenv APPLICATION deluge --script-security 2 --writepid /root/openvpn.pid --remap-usr1 SIGHUP --log-append /dev/stdout --pull-filter ignore up --pull-filter ignore down --pull-filter ignore route-ipv6 --pull-filter ignore ifconfig-ipv6 --pull-filter ignore tun-ipv6 --pull-filter ignore dhcp-option DNS6 --pull-filter ignore persist-tun --pull-filter ignore reneg-sec --up /root/openvpnup.sh --up-delay --up-restart --keepalive 10 60 --setenv STRICT_PORT_FORWARD yes --setenv VPN_USER xxxxxxxx --setenv VPN_PASS xxxxxxxxxx --down /root/openvpndown.sh --disable-occ --auth-user-pass credentials.conf --cd /config/openvpn --config /config/openvpn/denmark.ovpn
  11. You should pull this down, your VPN user and pass are exposed. You should change them now. @JorgeB or another mod nearby
  12. Misbehaving docker container
     root 13676 0.0 0.0 722964 18080 ? Sl 06:33 0:07 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 20ad1e7c0dcf5ee1f8fd1218813de7e4f9c1ffba466f4121c06b3ff55278aa55 -address /var/run/docker/containerd/containerd.sock
     root 13695 0.0 0.0 208 20 ? Ss 06:33 0:02 \_ /package/admin/s6/command/s6-svscan -d4 -- /run/service
     root 13737 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6-linux-init-shutdownd
     root 13738 0.0 0.0 200 4 ? Ss 06:33 0:00 | \_ /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -c /run/s6/basedir -g 3000 -C -B
     root 13748 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6rc-oneshot-runner
     root 13760 0.0 0.0 188 4 ? Ss 06:33 0:00 | \_ /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. --
     root 13749 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6rc-fdholder
     root 13750 0.0 0.0 212 64 ? S 06:33 0:00 \_ s6-supervise backend
     root 36746 0.0 0.0 3924 3092 ? Ss 18:34 0:00 | \_ bash ./run backend
     root 36760 0.7 0.0 1283988 95036 ? Sl 18:34 0:02 | \_ node --abort_on_uncaught_exception --max_old_space_size=250 index.js
     root 13751 0.0 0.0 212 16 ? S 06:33 0:00 \_ s6-supervise frontend
     root 13752 0.0 0.0 212 24 ? S 06:33 0:00 \_ s6-supervise nginx
     root 36664 0.1 0.0 133152 43800 ? Ss 18:34 0:00 \_ nginx: master process nginx
     root 36712 0.2 0.0 134436 41980 ? S 18:34 0:00 \_ nginx: worker process
     root 36713 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36714 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36715 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36716 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36717 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36718 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36719 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36720 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36721 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36722 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36723 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36724 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36725 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36726 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36727 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36728 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36729 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36730 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36731 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36732 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36733 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36734 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36735 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36736 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36737 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36738 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36739 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36740 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36741 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36742 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36743 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36744 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36745 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36747 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36748 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36749 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36750 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36753 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36754 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36757 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36758 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36759 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36761 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36762 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36763 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36764 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36765 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36766 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36767 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36768 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36769 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36770 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36771 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36772 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36773 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
     root 36774 0.0 0.0 132620 36848 ? S 18:34 0:00 \_ nginx: cache manager process
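     If you're trying to work out which container is misbehaving in the first place, a one-shot docker stats snapshot from the host terminal shows per-container CPU and memory use (standard docker CLI, nothing Unraid-specific):
        # print a one-time snapshot of each running container's CPU and memory usage
        docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'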
  13. Yep, toggle advanced view while editing the config and add a limit to the extra parameters line
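     As a rough example, the limit is just a normal docker run flag added to the Extra Parameters field (the 4g figure is only an illustration - size it for the container):
        # cap the container at 4 GB of RAM
        --memory=4g
        # optionally cap swap to the same value so it can't spill over
        --memory-swap=4g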
  14. Word on the street is it's in 6.13 whenever that gets released
  15. Looks that way - you can try to fix it or limit the memory allowed to the container
     nobody 21229 0.0 0.0 5488 276 ? Ss Mar25 0:00 \_ /bin/bash /launch.sh
     nobody 21287 0.0 0.0 2388 72 ? S Mar25 0:00 \_ sh ./run.sh
     nobody 21288 12.4 23.2 32248888 15301392 ? Sl Mar25 1008:20 \_ java @user_jvm_args.txt @libraries/net/minecraftforge/forge/1.18.2-40.1.84/unix_args.txt
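     A sketch of the memory-limit route, assuming the Forge server reads its JVM flags from the user_jvm_args.txt shown in the java command above (the heap sizes are examples, tune them for your server):
        # back up the args file, then cap the JVM heap so java can't balloon to 15+ GB like above
        cp user_jvm_args.txt user_jvm_args.txt.bak
        printf '%s\n' '-Xms4G' '-Xmx8G' >> user_jvm_args.txt
        # restart the container afterwards; alternatively add --memory=10g to the container's Extra Parameters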
  16. Nothing jumps out at me in the logs - check to see if any of the adapters are filling up /tmp, or you can try limiting the RAM available to the container. And/or disable un-needed plugins to see if the problem goes away.
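     To check the /tmp angle, something like this from the container's console shows what's filling it (plain coreutils, nothing container-specific):
        # how full is /tmp overall
        df -h /tmp
        # largest directories under /tmp
        du -sh /tmp/* 2>/dev/null | sort -h | tail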
  17. Might also be related to the power management issue
  18. Try running memtest to rule out bad memory
  19. No worries - the first place to look is which process the reaper kills; it will (usually) kill the process using the most RAM at the time the system runs out. From that you can work backwards to see what started that process. Not 100%, but most of the time it's enough to figure it out
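     One way to pull those entries out, assuming the syslog lives at the usual /var/log/syslog path:
        # list what the kernel's OOM killer reaped and how much memory each victim held
        grep -iE 'out of memory|oom-kill' /var/log/syslog | tail -n 20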
  20. Lots of Frigate and ffmpeg processes running, and then the reaper killing ffmpeg for using ~48GB of RAM - all signs point to Frigate being the issue
     Mar 20 15:21:27 z kernel: [ 24893] 0 24893 64344 10672 450560 0 0 ffmpeg
     Mar 20 15:21:27 z kernel: [ 25262] 0 25262 64006 10635 446464 0 0 ffmpeg
     Mar 20 15:21:27 z kernel: [ 25548] 0 25548 83510 13786 598016 0 0 ffmpeg
     Mar 20 15:21:27 z kernel: [ 25610] 0 25610 64344 10672 446464 0 0 ffmpeg
     Mar 20 15:21:27 z kernel: [ 25632] 0 25632 64006 10127 442368 0 0 ffmpeg
     Mar 20 15:21:27 z kernel: [ 25642] 0 25642 64007 10636 442368 0 0 ffmpeg
     Mar 20 15:21:27 z kernel: [ 25653] 0 25653 31705 2258 155648 0 0 ffmpeg
     Mar 20 15:21:27 z kernel: [ 31960] 0 31960 1633221 73281 1323008 0 0 frigate.process
     Mar 20 15:21:27 z kernel: [ 31962] 0 31962 1635247 75319 1339392 0 0 frigate.process
     Mar 20 15:21:27 z kernel: [ 31963] 0 31963 1631413 71403 1306624 0 0 frigate.process
     Mar 20 15:21:27 z kernel: [ 31966] 0 31966 1635433 75509 1339392 0 0 frigate.process
     Mar 20 15:21:27 z kernel: [ 31968] 0 31968 1634820 74102 1335296 0 0 frigate.process
     Mar 20 15:21:27 z kernel: [ 31971] 0 31971 1635217 74896 1339392 0 0 frigate.process
     Mar 20 15:21:27 z kernel: [ 31973] 0 31973 1635260 75360 1343488 0 0 frigate.process
     Mar 20 15:21:27 z kernel: [ 31980] 0 31980 1194088 67613 1048576 0 0 frigate.capture
     Mar 20 15:21:27 z kernel: [ 31986] 0 31986 1194088 67710 1048576 0 0 frigate.capture
     Mar 20 15:21:27 z kernel: [ 31993] 0 31993 1194257 68060 1056768 0 0 frigate.capture
     Mar 20 15:21:27 z kernel: [ 32004] 0 32004 1194088 67412 1048576 0 0 frigate.capture
     Mar 20 15:21:27 z kernel: [ 32013] 0 32013 1218086 67613 1048576 0 0 frigate.capture
     Mar 20 15:21:27 z kernel: [ 32021] 0 32021 1221128 67406 1048576 0 0 frigate.capture
     Mar 20 15:21:27 z kernel: [ 32029] 0 32029 1194088 67951 1052672 0 0 frigate.capture
     Mar 20 15:21:27 z kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0-1,global_oom,task_memcg=/docker/2e75c9f1047141b9b35bf3cc90663194f3be0ffc64a30482e4361cc762487692,task=ffmpeg,pid=10143,uid=0
     Mar 20 15:21:27 z kernel: Out of memory: Killed process 10143 (ffmpeg) total-vm:48965636kB, anon-rss:43494820kB, file-rss:78340kB, shmem-rss:18804kB, UID:0 pgtables:85608kB oom_score_adj:0
  21. No idea, I don't run Frigate - maybe try its support thread. Shouldn't matter for this use case, wear would be the concern - probably another question for their support thread
  22. Check your Frigate config, ffmpeg ran it OOM which implies it's transcoding to RAM. A few users have had the same issue over the past month or so
  23. Whatever Immich is doing is running the system out of RAM - left unrestricted, any container can use whatever system resources it has access to. You can limit the container's available RAM in the advanced settings, but that won't solve whatever the container is doing to run itself out of RAM. As for why Immich is chugging down all that RAM, I can't help you there - you can ask in the container's help thread or try to get help on Immich's site