Michael_P

Everything posted by Michael_P

  1. Did you try what @JorgeB suggested? The log is telling you to try it, too:
     Feb 20 02:12:55 Tower kernel: nvme nvme1: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"
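     (For reference: on Unraid those kernel parameters are usually added to the flash drive's syslinux configuration. A rough sketch below, assuming the stock boot entry - your append line may look slightly different.)
     # Main -> Flash -> Syslinux Configuration, default boot entry
     label Unraid OS
       menu default
       kernel /bzimage
       append nvme_core.default_ps_max_latency_us=0 pcie_aspm=off initrd=/bzroot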
  2. This is true, but since the reaper will usually kill whatever is using the most RAM at the moment the system goes OOM, that process is a pretty good candidate to start with. Plex should never reach that point unless it bugs out, which makes it the prime suspect in this case. It's easier to spot with multiple OOM events: if it kills the same process each time, that adds even more weight to the likely cause.
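     (If you want to check for repeat offenders, something like this from the console pulls every OOM kill out of the current syslog - a minimal sketch, it just greps for the kernel messages shown in these posts:)
     # each hit ends with the name of the process that was killed
     grep -i "out of memory" /var/log/syslog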
  3. Plex ran you OOM on the 17th - if it happens again, limit the container's memory
  4. Thanks for this clarity - this is the majority of what we've come to hate from software-as-a-service
  5. Why'd y'all wait until you were called out for hiding it in the code? Seems like a lot of people were ringing the warning bells when Unraid connect was announced and were called paranoid conspiracy theorists. Looks like a duck, quacks like a duck, that's a damn duck in my book.
  6. Ah, the ol' bait 'n' switch. FYI, the community makes this product worth using - and pissing them off is not good for business, whichever way you folks decide to go. Absolutely abhorrent that this is a find on reddit and not an official announcement. I'm out.
  7. I really hope Immich adds nested album/hierarchical folder support
  8. I'm no docker guru, either - but you can toggle the advanced view on the docker page and see if any of the IDs listed match any part of that string
  9. If you can figure out which container this is, it's probably what's causing the issue:
     ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0
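     (One way to match an ID like that to a container name, from the Unraid console - a sketch using plain docker CLI commands:)
     # print full, untruncated container IDs next to their names, then look for the one starting with ba41f0af8983
     docker ps --no-trunc --format '{{.ID}}  {{.Names}}'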
  10. It happened back on the 5th and the reaper killed Plex's DLNA server - if it happens again, look for the process it killed and start from there (fix the configuration, limit the container's allowed memory, etc.):
      Feb 5 11:05:25 Tower kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=13dc51434939c086b93a15dee381a8eb0637c5dcb2ed94866efc8c7fbebc0eba,mems_allowed=0,global_oom,task_memcg=/docker/31eeb4a9239178cb651112815262fab86c3a64c9ad5750805a77ec668d3bb3ba,task=Plex DLNA Serve,pid=29993,uid=99
      Feb 5 11:05:25 Tower kernel: Out of memory: Killed process 29993 (Plex DLNA Serve) total-vm:14685364kB, anon-rss:11423304kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:24164kB oom_score_adj:0
  11. Are you running a Firefox container? That's what it killed at 12:37:
      Feb 8 12:37:20 SERVER kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0,mems_allowed=0,oom_memcg=/docker/efd97b97e80ab913b30bb8a672ba8ce3536312968e87a201a1a956f6777253e1/docker/ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0,task_memcg=/docker/efd97b97e80ab913b30bb8a672ba8ce3536312968e87a201a1a956f6777253e1/docker/ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0,task=Isolated Web Co,pid=9987,uid=1000
      Feb 8 12:37:20 SERVER kernel: Memory cgroup out of memory: Killed process 9987 (Isolated Web Co) total-vm:3540908kB, anon-rss:711896kB, file-rss:110832kB, shmem-rss:21052kB, UID:1000 pgtables:4552kB oom_score_adj:100
  12. If you can figure out which ones they are, that's what's running it OOM - it killed two of them yesterday:
      Feb 5 22:33:38 arialis kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a9ec7aa2e1dadd1fa6c282acbce5a61428e03f6427ebe5eeef5e8cbe21cf2118,mems_allowed=0-1,oom_memcg=/docker/a9ec7aa2e1dadd1fa6c282acbce5a61428e03f6427ebe5eeef5e8cbe21cf2118,task_memcg=/docker/a9ec7aa2e1dadd1fa6c282acbce5a61428e03f6427ebe5eeef5e8cbe21cf2118,task=java,pid=17304,uid=988
      Feb 5 22:33:38 arialis kernel: Memory cgroup out of memory: Killed process 17304 (java) total-vm:22928132kB, anon-rss:6264592kB, file-rss:0kB, shmem-rss:64kB, UID:988 pgtables:14240kB oom_score_adj:0
      Feb 5 22:54:59 arialis kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4,mems_allowed=0-1,oom_memcg=/docker/267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4,task_memcg=/docker/267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4,task=java,pid=17894,uid=988
      Feb 5 22:54:59 arialis kernel: Memory cgroup out of memory: Killed process 17894 (java) total-vm:22835064kB, anon-rss:6271064kB, file-rss:0kB, shmem-rss:64kB, UID:988 pgtables:14176kB oom_score_adj:0
  13. Check the settings for whatever docker container this is:
      267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4
      Looks like that's what ran the system out of memory (doing a Java task?)
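      (If you'd rather go straight from the ID to a name, docker inspect takes the full ID directly - a sketch:)
      # prints the container name and the image it was created from
      docker inspect --format '{{.Name}}  (image: {{.Config.Image}})' 267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4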
  14. Never messed with HA, so no help for you there. Try disabling it for a bit and see if the issues go away - then at least you'll know whether it's the problem.
  15. No, it means it's either a bad drive, bad connection, or bad slot - moving it to another slot is a diagnostic step. My money is on bad drive.
  16. You have quite a few call traces in there, too - have you added a script lately that's misbehaving? HA maybe?
  17. Probably, since it's the nginx processes that are chewing up the RAM
  18. Find out whatever docker container this is: a2e8128f65f8dd826c54af5dc2db004697eaf687028a5162fb213399c95b952e and disable it, or just start disabling them one by one. Looks like it goes OOM at around 1530 and 0330, so if there's a scheduled task popping off, that could point to the culprit (TDARR scan?)
  19. Your drive looks like it's failing:
      Jan 25 04:27:11 Tower kernel: BTRFS info (device nvme0n1p1: state EA): forced readonly
  20. If it keeps happening, start disabling plugins and docker containers
  21. You should reboot for sure, but you have a crap ton of nginx processes, which is what's probably causing your issue - do you leave the Unraid GUI open in a browser?
      Jan 23 03:36:10 TresCommas kernel: [ 10686] 0 10686 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10687] 0 10687 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10688] 0 10688 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10689] 0 10689 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10690] 0 10690 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10691] 0 10691 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10692] 0 10692 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10693] 0 10693 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10694] 0 10694 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10695] 0 10695 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10696] 0 10696 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10697] 0 10697 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10698] 0 10698 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10699] 0 10699 41862 17198 212992 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 10700] 0 10700 41862 17225 225280 0 0 nginx
      Jan 23 03:36:10 TresCommas kernel: [ 13506] 0 13506 42166 17528 217088 0 0 nginx
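      (To see how many nginx workers are stacked up and how much memory they're holding, something like this from the console works - a sketch; RSS is reported in kB:)
      ps -C nginx -o pid,rss,etime,cmd      # per-process resident memory and age
      ps -C nginx --no-headers | wc -l      # how many nginx processes are running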
  22. Try limiting the memory for the Plex container, and/or try reverting the Plex version - another user reported a potential bug in Plex
  23. In the container's settings, toggle advanced view and add this into the extra parameters field (whatever amount of RAM you want to limit it to; I just use 4G)
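      (The parameter itself didn't survive the quote above - presumably it's the standard docker run memory flag, which is what's usually pasted into Extra Parameters; 4G here is just the value mentioned in the post:)
      # Docker tab -> edit the container -> toggle Advanced View -> Extra Parameters
      --memory=4G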
  24. Did you try limiting Plex's memory in the container settings?
      Dec 26 03:48:48 rhino9094 kernel: Out of memory: Killed process 98495 (Plex Media Scan) total-vm:86323272kB, anon-rss:86231128kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:168936kB oom_score_adj:0
      Jan 3 04:18:33 rhino9094 kernel: Out of memory: Killed process 61237 (Plex Media Scan) total-vm:87662668kB, anon-rss:86525528kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:169516kB oom_score_adj:0
      Jan 10 05:09:01 rhino9094 kernel: Out of memory: Killed process 129575 (Plex Media Scan) total-vm:88265484kB, anon-rss:87135964kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:170712kB oom_score_adj:0