Posts posted by Michael_P
-
-
1 hour ago, casperse said:
Just wondering are we getting really close
The word is it's going to be in 6.13
-
If you've had the dashboard window open in a browser for too long, that's the likely cause
-
4 minutes ago, binhex said:
this would be true for any env vars for any container, not just this one
Right, which is why I created a bug report instead of each docker container's support thread
-
6 minutes ago, bmartino1 said:
Binhex vpn.
There may be a log or memory leak that shows. Can a client running this go to Tools > Diagnostics, download the log, then in the zip file go to
system/ps.txt
and search for their password to confirm this memory leak?
I couldn't reproduce.
Here's one in this thread - system/ps.txt shows their user name and password in plain text
-
2 minutes ago, planetwilson said:
Thanks, done.
If you want to re-post, extract it and edit it out of system/ps.txt then re-zip and post
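A sketch of that, assuming the file is named tower-diagnostics.zip and the leaked password is MySecretPass (both placeholders):

```shell
# Placeholders: substitute your actual zip name and password.
unzip -o tower-diagnostics.zip system/ps.txt      # pull out just ps.txt
sed -i 's/MySecretPass/REDACTED/g' system/ps.txt  # scrub the secret
zip tower-diagnostics.zip system/ps.txt           # write it back into the zip
```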
-
@JorgeB Here's another one
@planetwilson your VPN user and pass are exposed in the diagnostics file, you should delete it from your post and change the password now.
-
Just now, JorgeB said:
Removed the diags.
Thanks, I've created a bug report based on it
-
20 hours ago, hobbis said:
Diagnostics attached.
You should pull this down, your VPN user and pass are exposed. You should change them now.
@JorgeB or another mod nearby
-
Misbehaving docker container
root 13676 0.0 0.0 722964 18080 ? Sl 06:33 0:07 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 20ad1e7c0dcf5ee1f8fd1218813de7e4f9c1ffba466f4121c06b3ff55278aa55 -address /var/run/docker/containerd/containerd.sock
root 13695 0.0 0.0 208 20 ? Ss 06:33 0:02 \_ /package/admin/s6/command/s6-svscan -d4 -- /run/service
root 13737 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6-linux-init-shutdownd
root 13738 0.0 0.0 200 4 ? Ss 06:33 0:00 | \_ /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -c /run/s6/basedir -g 3000 -C -B
root 13748 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6rc-oneshot-runner
root 13760 0.0 0.0 188 4 ? Ss 06:33 0:00 | \_ /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. --
root 13749 0.0 0.0 212 20 ? S 06:33 0:00 \_ s6-supervise s6rc-fdholder
root 13750 0.0 0.0 212 64 ? S 06:33 0:00 \_ s6-supervise backend
root 36746 0.0 0.0 3924 3092 ? Ss 18:34 0:00 | \_ bash ./run backend
root 36760 0.7 0.0 1283988 95036 ? Sl 18:34 0:02 | \_ node --abort_on_uncaught_exception --max_old_space_size=250 index.js
root 13751 0.0 0.0 212 16 ? S 06:33 0:00 \_ s6-supervise frontend
root 13752 0.0 0.0 212 24 ? S 06:33 0:00 \_ s6-supervise nginx
root 36664 0.1 0.0 133152 43800 ? Ss 18:34 0:00 \_ nginx: master process nginx
root 36712 0.2 0.0 134436 41980 ? S 18:34 0:00 \_ nginx: worker process
root 36713 0.0 0.0 133388 36680 ? S 18:34 0:00 \_ nginx: worker process
[... ~60 more identical "nginx: worker process" entries, PIDs 36714-36773 ...]
root 36774 0.0 0.0 132620 36848 ? S 18:34 0:00 \_ nginx: cache manager process
-
12 hours ago, RandallC said:
Is it somewhere in the docker config?
Yep, toggle Advanced View while editing the container's config and add a memory limit to the Extra Parameters line
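For example, putting this in the Extra Parameters field caps the container at 4 GiB - the value is illustrative, size it to what the container actually needs:

```
--memory=4g
```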
-
12 hours ago, Sero101 said:
I'm curious when lime tech is gonna implement
Word on the street is it's in 6.13 whenever that gets released
-
Agreed. Not Funny.
-
7 hours ago, RandallC said:
I'm sure it's one of several modded minecraft servers acting up
Looks that way - you can try to fix it or limit the memory allowed to the container
nobody 21229 0.0 0.0 5488 276 ? Ss Mar25 0:00 \_ /bin/bash /launch.sh
nobody 21287 0.0 0.0 2388 72 ? S Mar25 0:00 \_ sh ./run.sh
nobody 21288 12.4 23.2 32248888 15301392 ? Sl Mar25 1008:20 \_ java @user_jvm_args.txt @libraries/net/minecraftforge/forge/1.18.2-40.1.84/unix_args.txt
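The ps line shows the JVM pulling its flags from user_jvm_args.txt, so that file is the place to cap the heap if you go the fix-the-server route - 8G here is an example value, not a recommendation:

```
# user_jvm_args.txt - read by the Forge launcher shown in the ps output
-Xmx8G
```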
-
1 hour ago, John321 said:
iobroker docker container
Nothing jumps out at me in the logs. Check whether any of the adapters are filling up /tmp, or you can try limiting the RAM available to the container. And/or disable unneeded plugins to see if the problem goes away.
-
1 hour ago, Fluxonium said:
i am also getting the following in the log from time to time:
Might also be related to the power management issue
-
Try running memtest to rule out bad memory
-
5 minutes ago, BrandonG777 said:
No worries - first place to look is which process the reaper kills, it will (usually) kill the process using the most RAM at the time the system runs out. From that you can work backwards to see what started that process.
Not 100%, but most of the time it's enough to figure it out
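A minimal way to pull those events out of the log - assuming the usual /var/log/syslog location on Unraid:

```shell
# LOG is the kernel log location; /var/log/syslog is the usual spot on Unraid.
LOG=/var/log/syslog
# Show each OOM event and the process the reaper chose to kill.
grep -E 'oom-kill|Out of memory' "$LOG"
```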
-
25 minutes ago, BrandonG777 said:
Would you mind sharing what you looked at to determine that it's Frigate causing the OOM errors? I've dug through the logs but perhaps I'm overlooking it or just looking in the wrong place?
Lots of Frigate and ffmpeg processes running, and then the reaper killing ffmpeg for using ~48GB of RAM - all signs point to Frigate being the issue
Mar 20 15:21:27 z kernel: [ 24893] 0 24893 64344 10672 450560 0 0 ffmpeg
Mar 20 15:21:27 z kernel: [ 25262] 0 25262 64006 10635 446464 0 0 ffmpeg
Mar 20 15:21:27 z kernel: [ 25548] 0 25548 83510 13786 598016 0 0 ffmpeg
Mar 20 15:21:27 z kernel: [ 25610] 0 25610 64344 10672 446464 0 0 ffmpeg
Mar 20 15:21:27 z kernel: [ 25632] 0 25632 64006 10127 442368 0 0 ffmpeg
Mar 20 15:21:27 z kernel: [ 25642] 0 25642 64007 10636 442368 0 0 ffmpeg
Mar 20 15:21:27 z kernel: [ 25653] 0 25653 31705 2258 155648 0 0 ffmpeg
Mar 20 15:21:27 z kernel: [ 31960] 0 31960 1633221 73281 1323008 0 0 frigate.process
Mar 20 15:21:27 z kernel: [ 31962] 0 31962 1635247 75319 1339392 0 0 frigate.process
Mar 20 15:21:27 z kernel: [ 31963] 0 31963 1631413 71403 1306624 0 0 frigate.process
Mar 20 15:21:27 z kernel: [ 31966] 0 31966 1635433 75509 1339392 0 0 frigate.process
Mar 20 15:21:27 z kernel: [ 31968] 0 31968 1634820 74102 1335296 0 0 frigate.process
Mar 20 15:21:27 z kernel: [ 31971] 0 31971 1635217 74896 1339392 0 0 frigate.process
Mar 20 15:21:27 z kernel: [ 31973] 0 31973 1635260 75360 1343488 0 0 frigate.process
Mar 20 15:21:27 z kernel: [ 31980] 0 31980 1194088 67613 1048576 0 0 frigate.capture
Mar 20 15:21:27 z kernel: [ 31986] 0 31986 1194088 67710 1048576 0 0 frigate.capture
Mar 20 15:21:27 z kernel: [ 31993] 0 31993 1194257 68060 1056768 0 0 frigate.capture
Mar 20 15:21:27 z kernel: [ 32004] 0 32004 1194088 67412 1048576 0 0 frigate.capture
Mar 20 15:21:27 z kernel: [ 32013] 0 32013 1218086 67613 1048576 0 0 frigate.capture
Mar 20 15:21:27 z kernel: [ 32021] 0 32021 1221128 67406 1048576 0 0 frigate.capture
Mar 20 15:21:27 z kernel: [ 32029] 0 32029 1194088 67951 1052672 0 0 frigate.capture
Mar 20 15:21:27 z kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0-1,global_oom,task_memcg=/docker/2e75c9f1047141b9b35bf3cc90663194f3be0ffc64a30482e4361cc762487692,task=ffmpeg,pid=10143,uid=0
Mar 20 15:21:27 z kernel: Out of memory: Killed process 10143 (ffmpeg) total-vm:48965636kB, anon-rss:43494820kB, file-rss:78340kB, shmem-rss:18804kB, UID:0 pgtables:85608kB oom_score_adj:0
-
5 minutes ago, BrandonG777 said:
Is that related to these extra parameters?
No idea, I don't run Frigate - maybe try its support thread
6 minutes ago, BrandonG777 said:
However, I'm sure the SSD isn't as fast as system memory
Shouldn't matter for this use case, wear would be the concern - probably another question in their support thread
-
31 minutes ago, BrandonG777 said:
Frigate container,
Check your Frigate config - ffmpeg ran it OOM, which implies it's transcoding to RAM. A few users have had the same issue over the past month or so
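I don't run Frigate myself, but if it does turn out to be shared-memory or transcode related, docker's --shm-size flag is the generic knob, alongside a memory cap - the values here are purely illustrative, Frigate's own docs explain how to size shm for your camera count:

```
--shm-size=256m --memory=8g
```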
-
31 minutes ago, OriginalOne said:
Is it RAM or disk space that is running out?
And how can a docker container take down the host? Is there some method to avoid this?
Whatever Immich is doing is running the system out of RAM - left unrestricted, any container can use whatever system resources it has access to. You can limit the container's available RAM in the advanced settings, but that won't solve whatever the container is doing to run itself out of RAM.
As for why Immich is chugging down all that RAM, I can't help you there - you can ask in the container's help thread or try to get help on Immich's site
-
1 hour ago, Smokie said:
Can anyone even point me in the right direction?
Depends on how critical the data on the server is. If you don't have the capacity to back up the entire system, then figure out what data you can't afford to lose.
I use this script in conjunction with the appdata backup plugin
-
52 minutes ago, LugoCloud said:
How can I get new key for my USB or should I wait until I get a good quality USB
https://docs.unraid.net/unraid-os/manual/changing-the-flash-device/
-
16 minutes ago, LugoCloud said:
Tried a different USB now, but it's a very old one (that worked)
Don't have any other that I can use at this time
What USB model do you suggest I get? I can order from Amazon
Thank you
Can't help you there, but I'd suggest a reputable brand from a reputable seller - there's a lot of shadiness in the world of flash drives. From what I remember, a 16/32GB USB 2.0 device is usually what's recommended, but don't quote me on that
Can I move data now and add Parity drive later?
in General Support
Posted
Yes