
Is it normal for heavy usage (docker containers running heavy stuff) to lock things up?


drumstyx


I'm starting to really stress my server, it seems -- the main things I have running are a bunch of Docker containers: Plex, Sonarr, Radarr, NZBGet, plus a few less taxing ones like Pi-hole, OpenVPN (not currently active, but on and listening), and NoIP. Hardware is pretty weak, but modern -- a Ryzen 3 (I forget whether it's a 1200 or a 1300, but it was the cheapest thing I could get when my last board crapped out) with 8GB of DDR4. I've isolated CPU 1 so I at least have constant access to the web UI, because the lockups would take out even that.
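(In case it matters: the isolation is nothing fancy, just the standard isolcpus kernel parameter on the boot line. A rough sketch of the relevant stanza in /boot/syslinux/syslinux.cfg is below -- the exact label and paths depend on the Unraid version.)

    label Unraid OS
      menu default
      kernel /bzimage
      # isolcpus=1 removes core 1 from the general scheduler, so only tasks
      # explicitly pinned to it (taskset/cpuset) ever run there
      append isolcpus=1 initrd=/bzroot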


I'm running it pretty hard -- I'll queue up a lot of big searches in Sonarr and watch it load NZBGet with a huge queue of large files (7-40GB each). All of that has to be downloaded, extracted, and eventually moved off the cache (a 250GB SSD), and then Plex indexes it on its next scan.

So the main question: Plex frequently craps out completely when a lot is going on -- say I'm downloading one file, unpacking another, and at the same time running the mover to clear up the cache (which usually runs slower than my downloads, so I have to babysit it to make sure the cache doesn't overflow, and sometimes pause downloads). Is this normal? The weird thing is that if I SSH in, htop doesn't even show things working all that hard -- nothing is truly pinned at 100%; usage fluctuates and taps 100% here and there, but it hardly seems like it should lock up like this.
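One thing I plan to check the next time it happens is whether it's actually I/O wait rather than raw CPU, since that would explain lockups without any pinned cores. A minimal sketch of what I'd run over SSH (assuming the stock command set, nothing Unraid-specific):

    # Watch the "wa" (iowait) column while a download + unpack + mover run is in flight.
    # Sustained high wa with mostly idle CPUs means the box is stalling on disk, not compute.
    vmstat 2 10

    # top's summary line also breaks iowait out separately as "wa"
    top -bn1 | head -n 5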

I've been planning to work on fixing this, especially since this is obviously a bad environment for any server-side transcoding -- would I be better off running standalone Plex on a second, mediocre machine with network access to the Unraid shares, or going all out and putting a decent CPU in the current machine?
