Please help: CPU spiked to 100%, Plex unusable


maxse

Recommended Posts

Thank you so much! Although this has been frustrating, I am definitely learning so much, so that's always good :)

 

So after setting the default parameters for nice and ionice when SAB unpacks, and with 1 regular and 1 transcoding stream going, I would sometimes see the CPU go up to about 50%. This is with hardware transcoding enabled. Also, I set transcoding to RAM, and cache is used for everything (Samsung Evo 1TB, not the Pro). So that should have cut down on a lot of IO, no? Since it's staying on cache?
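For reference, the nice/ionice idea above boils down to a command prefix. A minimal sketch, demonstrated on the `nice` command itself rather than a real unpack (SAB exposes these values in its processing-priority settings; the exact settings page varies by version):

```shell
# Run a command at the lowest CPU priority (nice 19) and the lowest
# best-effort IO priority (ionice class 2, level 7) - the values
# discussed in this thread. 'nice' with no command prints the current
# niceness, so this also verifies the setting took effect.
nice -n 19 ionice -c 2 -n 7 nice
# prints: 19
```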

 

I was thinking that once I have a VPN running, and then add more dockers for PIA VPN, letsencrypt, nextcloud... I may run into issues with Plex again... So I decided to just suck it up and purchased an

i9-9900K! It has 8 cores instead of the 6 on mine, and also 2 threads per core, while my i5-8400 only has 1. I'm hoping that processor will resolve these things and allow me to run those dockers easily?

 

*EDIT* So I just ran mover, and have 1 direct stream going (with only audio being transcoded) and SAB running as well. The CPU hovers around 35%, occasionally going up to 60%. Is it normal for it to be that high? I grabbed the diagnostics again during this, just in case. Could you let me know if everything looks okay, please?

 

TL;DR: ordered an i9-9900K, should be arriving this week, to try to resolve these issues and future-proof. Also, CPU usage is getting high; is that normal?

sabertooth-diagnostics-20190113-1032.zip

Edited by maxse

The mover is very aggressive and really should be run overnight on a schedule (Settings -> Scheduler). The CPU going to 100% is not necessarily an issue unless it causes problems with something else, in which case you deal with it by isolating or taming. I've run much more on much less processor, so it really is just a matter of tuning your loads.

 

The diagnostics (top.txt) show you were running mover, with top reporting 26.6% IO wait and 63.8% CPU idle at a system load of 9.01, 9.33, 7.11. That's not CPU bound, it's IO bound. More CPU won't fix that particular issue. Linux reports load differently than Windows: it includes tasks waiting on IO in the load calculation.
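If you want to check this yourself next time a spike happens, two quick Linux one-liners (a sketch; exact top output formatting varies by version):

```shell
# The three load averages count tasks stuck in uninterruptible IO wait
# as well as runnable tasks, which is why load can be high while the
# CPU sits mostly idle.
cat /proc/loadavg

# The 'wa' field here is IO wait and 'id' is idle time.
top -bn1 | grep -i 'cpu(s)'
```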

Edited by unevent

unevent and Squid, thank you so much for helping me, guys. I'm so glad you're helping me with this; it seems like you're the only two, and I honestly would never be able to figure this out or know where to turn...

 

Okay, thanks for explaining. So could I cancel my order for the new i9 then? I was worried because I still need to install Nextcloud, Ombi, letsencrypt, some kind of backup software to back up to a remote Unraid server, etc... I didn't want all those things, once loaded, to give Plex any issues...

 

I have mover scheduled, but I had to run it again because my 1TB cache drive got full!

So with the diagnostics above, are you saying there is still an IO issue, or is everything you see there normal? What happens if I have 5-6 streams going? It seems like my system wouldn't handle it. Or would it have no issues? I keep seeing people write about having 10 streams and being more worried about their internet bandwidth, while I'm having IO issues at the hardware level?

 

Thanks again for explaining everything!


Never really a bad idea to go for more CPU, so that is up to you. My point was it won't fix the immediate problem you were seeing when running the mover. Not sure how many streams you can hardware transcode at a time, so more CPU for Plex is not a bad idea. Plex says it takes roughly 2000 PassMark (CPU) to soft-transcode one 1080p @ 10 Mbit stream.
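That rule of thumb is just a division. A quick sketch, using ~18,000 as a hypothetical i9-9900K PassMark score (an assumed figure, not one from this thread):

```shell
# ~2000 PassMark per 1080p @ 10 Mbit software transcode (Plex's guideline).
passmark=18000     # hypothetical i9-9900K score, for illustration only
per_stream=2000
echo "$(( passmark / per_stream )) simultaneous 1080p soft transcodes"
# prints: 9 simultaneous 1080p soft transcodes
```

In practice you'd leave headroom for the OS and other containers, so the real number is lower.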


Right, but I'm using hardware transcoding, and you're saying I'm having an IO bottleneck, not a CPU bottleneck... I mean, at least before I assigned cores and did the ionice stuff, 4 cores would just go to 100% and 1 Plex stream would buffer, even direct play... That's why I don't get how others can have so many streams when I'm hitting an IO bottleneck so soon.

 

I don't want to waste $400 more on the i9 if you're saying it's an IO bottleneck. I mean, for that money I could almost get a used Dell with Windows, throw a P2000 in there, and not have these issues?

 

I mean, I had an almost 10-year-old AMD Athlon processor in the Unraid box before this, and Plex ran on a 2012 Mac mini with 4-5 streams and never had any issues at all. So I just don't understand this. I thought it was best practice to have one box with a powerful enough processor, which I do have even right now. That's why I'm getting confused about why this is happening to me, and how I can be sure that going forward I won't have a problem with 4-6 streams, most of them even being direct play.

 

I also thought that the CPU usage was high, but thanks for explaining that it's not as relevant, since everything else is running smoothly and the CPU isn't actually being used as much as Unraid is indicating.

Edited by maxse

I use the binhex docker for SAB. I just read the info you linked about par2, and I think it's causing an issue too. I figured that since par2 is part of SAB, and SAB was pinned to 2 cores, all those processes that belong to SAB would be limited to those 2 cores as well... That could explain why all the other cores spike too, not just the 2 I assigned.

 

How can I find out if multicore par2 is installed as part of the binhex docker? I didn't find anything definitive in the support thread.

 

Also, for nice it should be "-n19" with no space after the "n", correct? I ask because your post shows no space, but the wiki has "-n10".
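For what it's worth, GNU nice accepts both spellings; the space is optional, and only the value (10 vs. 19) differs between the two sources. A quick way to verify:

```shell
# Running 'nice' with no command prints the current niceness, so each
# line confirms the flag was parsed the same way with or without a space.
nice -n19 nice   # prints: 19
nice -n 19 nice  # prints: 19
```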


Ahhh, help! 

 

Guys, what's going on? I added nice 19 and the default for ionice, and set par2 to use 1 core as the wiki said.

 

Nothing was unpacking, no mover running, NO Plex streams going, and my CPU usage was at 70%. I have no idea why. This is getting beyond frustrating. I'm concerned that if I tried to watch Plex during this time, it would just freeze up. Why is my CPU being used like this?

 

CPU load for the dockers under the Docker menu was minimal, a couple of percent each, that's it.

 

Attaching diagnostics grabbed right as it was happening.

sabertooth-diagnostics-20190114-1258.zip

Edited by maxse

I have no idea... But I just realized that my "system", "domains", and "isos" shares were set to NOT use cache, and my docker image file was on the array. Could that have made a difference, despite me setting the downloads folder to use cache = yes?

 

Could this have affected things? If so, what's the best way to move those files from the array back to the cache?


I have a much less powerful CPU than you, and I can watch at least 2 transcodes, with a Windows 10 VM, NZBGet, Radarr, Hydra, Sonarr, and so on, at 100% without any lag. (If you start a movie it lags at first, of course, but then it's okay.)

 

CPU: Intel® Core™ i3-4370 CPU @ 3.80GHz (you have double my PassMark score)

 

I tried pinning and such things... it just got worse, and it's not efficient, because if some apps have nothing to do, the others can't use the full resources of your PC.

 

What I did is add c= to each docker command (under Extra Parameters), and remove ANY pinning. CPUs nowadays are pretty good at splitting work themselves.

 

If you don't add c=, docker uses 1024 (which means 100% CPU).

 

With that, you can tell it how much it should use when there is not enough power...

 

E.g. I didn't set anything for Plex, which means 100% for Plex.

 

NZBGet has 128.

All other dockers have 2 or 4.

 

What this does is: if the CPU is at 100%, the most power goes to Plex, some to NZBGet, and so on...

 

It's called CPU shares: https://docs.docker.com/config/containers/resource_constraints/

 

My Plex transcode directory is in RAM, which might reduce CPU load also.
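For anyone wondering how the transcode-to-RAM part is usually set up: point Plex's transcoder temp directory at a tmpfs mount. One common sketch, added to the container's Extra Parameters (the path and size here are assumptions, not values from this thread):

```shell
--mount type=tmpfs,destination=/transcode,tmpfs-size=4g
```

Then set the Transcoder temporary directory inside Plex's transcoder settings to that same path (here `/transcode`).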

Edited by nuhll

Thanks @nuhll. That's what I didn't understand; I selected this CPU after lots of research. Thanks for the tip, I would love to try that. I went to the website you linked, and it has nothing on "c="; they mostly list other commands, it seems, nothing with "c=".

 

How would I set that up the way you did, to designate percentages?

I'm also confused because the guys on here are saying it's an IO issue, not a CPU issue... which I don't get, because I now have everything going to the cache drive, and I do have Plex transcoding to RAM!

7 minutes ago, maxse said:

Thanks @nuhll. That's what I didn't understand; I selected this CPU after lots of research. Thanks for the tip, I would love to try that. I went to the website you linked, and it has nothing on "c="; they mostly list other commands, it seems, nothing with "c=".

He mentioned it is the cpu shares flag.

 

From the linked document:

 

--cpu-shares

 

Set this flag to a value greater or less than the default of 1024 to increase or reduce the container’s weight, and give it access to a greater or lesser proportion of the host machine’s CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access.

 

So, putting together the information from his post, perhaps @nuhll is saying you should add '--cpu-shares=xxx' to the Extra Parameters field (you'll need to enable Advanced View on the docker edit page) in the docker(s) for which you want to control CPU usage.


Oh okay, he mentioned he just wrote 'c=', so I thought it was another flag specific to Unraid, since that website didn't list anything like it, although I did read the entire thing.

 

Yes, I had that; Squid had suggested it earlier, but unevent was saying, based on the diagnostics, that it's an IO issue for some reason, not a CPU issue, so I had removed the "--cpu-shares=2" flag. But I'm not sure @nuhll was referring to that, because he said NZBGet gets 128? I've been reading the explanation by Squid, and some Google results, and it seems confusing for more than 2 dockers and multiple cores. So if I set SAB to "--cpu-shares=512", does that mean it can only use 50% of the CPU at max, but only on one core, and it can go to 100% on the others? The confusing part is, when I have 5 dockers, how would I change the value? I tried setting --cpu-shares=2 for 2 dockers and it errored out; I don't think more than one docker can have the same value...

 

*But I'm still confused, because unevent was saying that I have an IO issue, not a CPU issue?

 

 

I'll place it back in. Okay, so I'm returning the i9 then; I haven't even opened it yet. I'd rather save all that money, especially since there's another issue here and it's not the power of my actual CPU.

Edited by maxse

Unraid uses Docker, and Docker is everywhere the same.

 

-c is the same as --cpu-shares

 

It works like this:

 

YOUR CPU POWER / 1024 = 1 C

 

So if you have 2 dockers with 512 shares each, each docker uses max 50% of all CPU power (if the CPU is fully utilized; if not, then all programs can use 100%. That's the great thing about this).

 

If Plex is most important for you, don't set any c, which means the default 1024 (100%).

 

All other dockers you set to 2, 4, or whatever you want. Just try it.

 

I have 2 or 4 for all dockers, and only NZBGet at 128, so I get full speed even at 100% CPU load.
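To put numbers on that setup, a small sketch of the --cpu-shares arithmetic (weights taken from this thread; "other" stands in for any low-priority docker set to 4):

```shell
# Docker --cpu-shares values are relative weights: under full load each
# container gets weight/total of the CPU. When the CPU is idle, any
# container may still use 100% - shares only apply under contention.
plex=1024; nzbget=128; other=4
total=$(( plex + nzbget + other ))
echo "plex gets $(( 100 * plex / total ))% under full load"     # prints: plex gets 88% under full load
echo "nzbget gets $(( 100 * nzbget / total ))% under full load" # prints: nzbget gets 11% under full load
```

Note the fractions shift as you add or remove running containers, since the denominator is the sum of all running containers' weights.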

 

I can't believe you really have problems with your CPU, since you have double the PassMark score of my X-year-old CPU... and before that I had a much worse one.

 

 

Edited by nuhll
16 minutes ago, nuhll said:

-c is the same as --cpu-shares

 

It works like this:

 

YOUR CPU POWER / 1024 = 1 C

 

So if you have 2 dockers with 512 shares each, each docker uses max 50% of all CPU power (if the CPU is fully utilized; if not, then all programs can use 100%. That's the great thing about this).

Good explanation.

On 1/16/2019 at 12:55 PM, nuhll said:

Unraid uses Docker, and Docker is everywhere the same.

 

-c is the same as --cpu-shares

 

It works like this:

 

YOUR CPU POWER / 1024 = 1 C

 

So if you have 2 dockers with 512 shares each, each docker uses max 50% of all CPU power (if the CPU is fully utilized; if not, then all programs can use 100%. That's the great thing about this).

 

If Plex is most important for you, don't set any c, which means the default 1024 (100%).

 

All other dockers you set to 2, 4, or whatever you want. Just try it.

 

I have 2 or 4 for all dockers, and only NZBGet at 128, so I get full speed even at 100% CPU load.

 

I can't believe you really have problems with your CPU, since you have double the PassMark score of my X-year-old CPU... and before that I had a much worse one.

 

 

Thanks so much @nuhll for explaining! Yes, I can't believe I'm having issues. I even went so far as to order an i9, I was so frustrated, but I think I finally got this working, and I returned the i9!

 

I set CPU shares to 2 for SAB, but that's it. Do you mind explaining, for YOUR setup, what nzbget 128 means in terms of CPU percentage? Does that mean it gets a little more than 10% of CPU usage when the CPU is at max (10% of 1024 = 102.4)? I think I'm missing something, because it depends on how many dockers you're running? I also feel like a value of 2 or 4 means a docker barely has any CPU rights, since the number is so small? Thank you all for helping me with this!


Ehm.

 

I don't know, I didn't calculate anything, lol. Like I said, I set NZBGet high enough that I get full speed when the CPU is maxed out. You have to try it: I started with 2, then 4, then I tried 100, and then I settled on 128.

 

Hydra I also set to 128, so it always answers requests from Radarr and Sonarr and doesn't seem to be down... you have to test that.

 

"but I feel like the number 2 or 4 means it barely has any CPU rights since that number is so small? Thank you all for helping me with this!"

It means they nearly won't get any CPU power when other processes need it, yeah. But if other processes don't need the power, they can take up to 100%.

 

Just leave Plex at the default, so it gets the most shares, then all the other dockers get 2 or 4. This ensures Plex always gets as much power as it needs (that's what you asked for).

Edited by nuhll
  • 9 months later...

Sorry to resurrect an old thread, but I think I'm having similar issues; however, it's not NZBGet/SAB for me or anything like that, as I haven't ventured into that area yet...

 

For me, it APPEARS to be either mariadb, nextcloud, or letsencrypt (or all of them together...), which I added by following SpaceInvaderOne's Nextcloud tutorial. I also recently installed pfSense, but that seems stable until I start these particular dockers.

How do I go about allocating cores to different docker containers? I'm not seeing any options for that. And is that definitely the cause?

 

My CPU is an i5-4690K, so I can't really understand why it would be struggling.

 

I managed to catch it happening, so I collected the diagnostics. Oddly, though, when I PuTTY in and try docker stats, it shows barely any CPU usage, yet my dashboard is pretty much locked at 100% CPU.
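One way to dig into that mismatch (a sketch: docker stats only reports container cgroups, so host-side processes and IO wait won't show up there):

```shell
# List the busiest processes on the host itself, including non-docker
# ones, sorted by CPU usage; compare against the per-container numbers
# from 'docker stats'.
ps -eo pid,ni,pcpu,comm --sort=-pcpu | head -n 10

# Also check the load averages: tasks stuck in IO wait inflate load
# without showing up as container CPU usage.
cat /proc/loadavg
```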

 

It's especially annoying for me, as the server is also running my firewall, so when it overloads it shuts down pfSense as well :(

 

 

babs-diagnostics-20191113-2020.zip

Edited by zapp8rannigan
