Michael_P

Members · 661 posts
Everything posted by Michael_P

  1. Did you try limiting Plex's memory in the container settings?
     Dec 26 03:48:48 rhino9094 kernel: Out of memory: Killed process 98495 (Plex Media Scan) total-vm:86323272kB, anon-rss:86231128kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:168936kB oom_score_adj:0
     Jan 3 04:18:33 rhino9094 kernel: Out of memory: Killed process 61237 (Plex Media Scan) total-vm:87662668kB, anon-rss:86525528kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:169516kB oom_score_adj:0
     Jan 10 05:09:01 rhino9094 kernel: Out of memory: Killed process 129575 (Plex Media Scan) total-vm:88265484kB, anon-rss:87135964kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:170712kB oom_score_adj:0
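     Kills like the ones above are easy to pull out of a syslog-style file; a minimal sketch (the function name and log path are mine, not from the thread):

     ```shell
     #!/bin/sh
     # List processes the kernel OOM-killed, from a syslog-style file.
     # Usage: oom_kills /var/log/syslog
     oom_kills() {
         grep 'Out of memory: Killed process' "$1" \
             | sed 's/.*Killed process \([0-9]*\) (\([^)]*\)).*/PID \1 (\2)/'
     }
     ```

     If the same process name keeps showing up (here, Plex Media Scan), that is the one to cap or investigate.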
  2. In the container's settings, toggle advanced view and add this into the Extra Parameters field (whatever amount of RAM you want to limit it to; I just use 4G): --memory=4G
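     For context, the Extra Parameters field just appends flags to the container's underlying docker run command; a hedged sketch of the equivalent (container name and image are examples, not necessarily your setup):

     ```shell
     # --memory caps the container's RAM, so a runaway Plex Media Scan gets
     # OOM-killed inside the container instead of taking down the host.
     docker run -d --name plex --memory=4G lscr.io/linuxserver/plex
     ```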
  3. "Need a car to commute to work? Also want crazy fast straight line runs at the track on weekends? AND you want to go to space but need an orbiter? The Fiero is the car for you! *some assembly required" -the marketing wanks at GM after watching F9
  4. I'd start with large blocks of files, and narrow it down from there. Or maybe Tdarr's health check could work, too
  5. This suggests it's a video file that can't be properly read. If you added media before it started acting up, start by removing and/or examining those files, then add them back until you find the culprit.
  6. Well, you can try to figure out which file it's choking on, or just limit the RAM to the container (which is what I do)
  7. Limit the Plex container to a reasonable amount of RAM and set the scheduled task to run at the next closest hour to see if it's fixed. If it is, you can change it back to your regular time. If there's a media file that the scanner doesn't like, it'll cause the process to run away, so limiting its RAM will restart the container instead of bringing down the system.
  8. What time is your Plex maintenance scheduled? Try limiting the Plex docker's memory, that will keep it from running away if it encounters a file it doesn't like.
  9. It's actually the reason I bought it - it was cheaper to buy the printer and do it myself than to pay someone else. Rack mount chassis are meant to have push-pull cooling; if you cut a hole in the top you might actually make it worse. Me, I'd make another 'pull' fan panel for the rear if I didn't want the fan noise of the stock internal fans.
  10. I 3D printed one for the front of my Norco 4224 and it keeps the drives reasonably cool
  11. Still not 100% sure that's the cause, but if you do need to replace it, just set all the shares to move to the array, then run mover. When you have the new drive installed, set the shares back to the cache.
  12. In your syslog there's an error that looks the same as when my cache drive showed signs of failing:
      Dec 10 03:18:25 NAS-Disk kernel: BTRFS error (device sdc1): bdev /dev/sdc1 errs: wr 0, rd 0, flush 0, corrupt 6, gen 0
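      You can also read those per-device counters directly with btrfs device stats; a small sketch that flags any nonzero counter (the helper name is mine, and the sample mount point is an example):

      ```shell
      #!/bin/sh
      # Filter `btrfs device stats /mnt/cache` output down to nonzero counters.
      # Output lines look like: [/dev/sdc1].corruption_errs 6
      nonzero_btrfs_errors() {
          awk '$2 != 0 {print $1, $2}'
      }
      # Usage: btrfs device stats /mnt/cache | nonzero_btrfs_errors
      ```

      A climbing corruption_errs count on a cache device is the pattern described above.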
  13. Yep, docker would fail to unmount and generally hose everything until I rebooted and did a file system repair on the drive - then it would be fine again until the next time a lot of writes were made. It was also a 970 Evo Plus.
  14. FWIW, I had an NVMe cache drive that would do that too; after a few months, any writes to it would cause it to go read-only, and the drive was less than a year old.
  15. You'd have to turn off the dockers/VMs manually - and if you forget and both try to access the GPU it will crash the host
  16. My A770 LE idles around 10W, have you tinkered with power management?
  17. Also, from the SMART report it doesn't appear that anything is wrong with the drive itself. CRC errors are almost always connection related.
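      The counter in question is usually SMART attribute 199 (UDMA_CRC_Error_Count); a sketch for pulling its raw value out of smartctl's attribute table (the function name and device path are examples):

      ```shell
      #!/bin/sh
      # Extract the raw UDMA CRC error count from `smartctl -A /dev/sdX` output.
      # In smartctl's attribute table, column 1 is the ID and the last column
      # is the raw value.
      crc_error_count() {
          awk '$1 == 199 {print $NF}'
      }
      # Usage: smartctl -A /dev/sdX | crc_error_count
      ```

      If that number climbs while the other attributes stay clean, reseat or replace the cable rather than the drive.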
  18. Looks right, except yours has 8GB instead of 8G - not sure it matters. Set the Plex scheduled task for the next closest hour, then watch the memory usage as it runs through it (that way at least you won't have to wait overnight).
  19. Eliminate the splitters and you should be fine. The rule of thumb is no more than 4 drives per Molex connector, with more than one run back to the PSU. Toshiba drives are, in my experience, especially finicky about power sag and will drop offline if you look at them funny.
  20. Looks like a power issue, are you using splitters?
  21. Doesn't look like you're putting it in the right spot - undo what you did and do this: toggle basic view to advanced view, then add it to the Extra Parameters line.