Michael_P

Everything posted by Michael_P

  1. Add this to NGINX in the advanced tab (custom nginx configuration) to allow large file transfers in Nextcloud:
     proxy_set_header Host $host;
     proxy_set_header X-Forwarded-Proto $scheme;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_buffering off;
     proxy_max_temp_file_size 16384m;
     client_max_body_size 0;
  2. Single-rail power is all well and good, but it's more important to limit the number of drives per connector. And don't use molded-style connector splitters - use only quality, individually wired and pinned connectors.
  3. Sorry, I meant they're blacklisted from the kernel by default. To enable them, you need to create the file corresponding to the driver on your USB boot device.
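     A minimal sketch of that step, assuming an Intel iGPU (so the i915 driver) and Unraid's stock flash layout - the paths are illustrative:
        # An empty conf file named after the driver, placed on the flash drive,
        # overrides the built-in blacklist so the module loads on the next boot.
        touch /boot/config/modprobe.d/i915.conf
        # After rebooting, the iGPU's device nodes should appear:
        ls -l /dev/dri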
  4. The kernel Unraid is currently using doesn't support Arc cards. You can use one in a VM, manually upgrade the kernel, or wait for Unraid to move to a newer kernel.
  5. 1 and 3: To enable the iGPU for use with docker containers, you need to override the default blacklist by creating the i915.conf file. 4 and 5: You can add the iGPU to any VM, but you can only run one VM at a time that uses it. Also, if your VM is using the iGPU and a docker container attempts to use it as well, the host will crash.
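     A minimal sketch of the docker side, assuming the i915 driver is already loaded - the container name and image are illustrative, and in an Unraid template the flag usually goes in Extra Parameters:
        # Pass the iGPU's device nodes through to a container (e.g. for hardware transcoding):
        docker run -d --name jellyfin-example --device=/dev/dri lscr.io/linuxserver/jellyfin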
  6. Because a lot of us have been bitten by the 'boil the frog' type of transitions.
  7. Did you try what @JorgeB suggested? The log is telling you to try it, too:
     Feb 20 02:12:55 Tower kernel: nvme nvme1: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"
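     A minimal sketch of applying those parameters, assuming a stock Unraid flash drive (the same file can be edited from Main > Flash > Syslinux Configuration in the webGUI):
        # Add the parameters to the 'append' line of /boot/syslinux/syslinux.cfg, e.g.:
        #   append nvme_core.default_ps_max_latency_us=0 pcie_aspm=off initrd=/bzroot
        # Then reboot and confirm the running kernel picked them up:
        cat /proc/cmdline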
  8. This is true, but as the reaper will usually kill whatever is using the most RAM at the moment it goes OOM, that's a pretty good candidate to start with. Plex should never reach that point unless it bugs out, which makes it the prime suspect in this case. It's easier to spot with multiple OOM events: if it kills the same process each time, then you've added even more weight to the likely cause.
  9. Plex ran you OOM on the 17th - if it happens again, limit the container's memory.
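     A minimal sketch of a memory cap, with an illustrative 8 GiB limit and container name - in an Unraid template this would go in the Extra Parameters field:
        # Cap the container so the cgroup's limit, not the whole host's RAM, is what gets hit:
        #   --memory=8g
        # The same cap can be applied to a running container without editing the template:
        docker update --memory=8g --memory-swap=8g plex-example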
  10. Thanks for this clarity - this is the majority of what we've come to hate from software-as-a-service
  11. Why'd y'all wait until you were called out for hiding it in the code? Seems like a lot of people were ringing the warning bells when Unraid Connect was announced and were called paranoid conspiracy theorists. Looks like a duck, quacks like a duck, that's a damn duck in my book.
  12. Ah, the ol' bait 'n' switch. FYI, the community makes this product worth using - and pissing them off is not good for business, whichever way you folks decide to go. Absolutely abhorrent that this is a find on Reddit and not an official announcement. I'm out.
  13. I really hope Immich adds nested album/hierarchical folder support
  14. I'm no docker guru, either - but you can toggle the advanced view on the docker page and see if any of the IDs listed match any part of that string
  15. If you can figure out which container this is, it's probably what's causing the issue:
      ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0
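      A minimal sketch of matching that hash to a container name from the console, instead of eyeballing the docker page:
         # List full, untruncated container IDs alongside their names and images:
         docker ps --all --no-trunc --format '{{.ID}}  {{.Names}}  {{.Image}}'
         # Or look the ID from the log up directly:
         docker inspect --format '{{.Name}}' ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0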
  16. It happened back on the 5th and the reaper killed Plex's DLNA server - if it happens again, look for the process it killed and start from there (fix the configuration, limit the container's allowed memory, etc.)
      Feb 5 11:05:25 Tower kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=13dc51434939c086b93a15dee381a8eb0637c5dcb2ed94866efc8c7fbebc0eba,mems_allowed=0,global_oom,task_memcg=/docker/31eeb4a9239178cb651112815262fab86c3a64c9ad5750805a77ec668d3bb3ba,task=Plex DLNA Serve,pid=29993,uid=99
      Feb 5 11:05:25 Tower kernel: Out of memory: Killed process 29993 (Plex DLNA Serve) total-vm:14685364kB, anon-rss:11423304kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:24164kB oom_score_adj:0
  17. Are you running a Firefox container? That's what it killed at 12:37:
      Feb 8 12:37:20 SERVER kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0,mems_allowed=0,oom_memcg=/docker/efd97b97e80ab913b30bb8a672ba8ce3536312968e87a201a1a956f6777253e1/docker/ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0,task_memcg=/docker/efd97b97e80ab913b30bb8a672ba8ce3536312968e87a201a1a956f6777253e1/docker/ba41f0af8983f27bfe3f75b1497a4dbd8c3fd96f676119fbfc61deb937ac49c0,task=Isolated Web Co,pid=9987,uid=1000
      Feb 8 12:37:20 SERVER kernel: Memory cgroup out of memory: Killed process 9987 (Isolated Web Co) total-vm:3540908kB, anon-rss:711896kB, file-rss:110832kB, shmem-rss:21052kB, UID:1000 pgtables:4552kB oom_score_adj:100
  18. If you can figure out which ones they are, that's what's running it OOM - it killed two of them yesterday:
      Feb 5 22:33:38 arialis kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a9ec7aa2e1dadd1fa6c282acbce5a61428e03f6427ebe5eeef5e8cbe21cf2118,mems_allowed=0-1,oom_memcg=/docker/a9ec7aa2e1dadd1fa6c282acbce5a61428e03f6427ebe5eeef5e8cbe21cf2118,task_memcg=/docker/a9ec7aa2e1dadd1fa6c282acbce5a61428e03f6427ebe5eeef5e8cbe21cf2118,task=java,pid=17304,uid=988
      Feb 5 22:33:38 arialis kernel: Memory cgroup out of memory: Killed process 17304 (java) total-vm:22928132kB, anon-rss:6264592kB, file-rss:0kB, shmem-rss:64kB, UID:988 pgtables:14240kB oom_score_adj:0
      Feb 5 22:54:59 arialis kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4,mems_allowed=0-1,oom_memcg=/docker/267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4,task_memcg=/docker/267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4,task=java,pid=17894,uid=988
      Feb 5 22:54:59 arialis kernel: Memory cgroup out of memory: Killed process 17894 (java) total-vm:22835064kB, anon-rss:6271064kB, file-rss:0kB, shmem-rss:64kB, UID:988 pgtables:14176kB oom_score_adj:0
  19. Check the settings for whatever docker container this is:
      267f51398ba53554d26eae3fb5106c73d8d91bffaecddb18b01b4c4515f756c4
      Looks like that's what ran the system out of memory (doing a Java task?)
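      A minimal sketch for watching which container is actually eating the RAM before the next OOM kill - the names in the output are whatever your containers are called:
         # One-shot snapshot of per-container memory usage versus its limit:
         docker stats --no-stream --format '{{.Name}}  {{.MemUsage}}'
         # If it's a Java app, its heap ceiling (-Xmx) should sit comfortably below any container memory cap.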
  20. Never messed with HA, so no help for you there. Try disabling it for a bit and see if the issues go away - then at least you'll know whether it's the problem.
  21. No, it means it's either a bad drive, bad connection, or bad slot - moving it to another slot is a diagnostic step. My money is on bad drive.
  22. You have quite a few call traces in there, too - have you added a script lately that's misbehaving? HA maybe?
  23. Probably, since it's the nginx processes that are chewing up the RAM