rcmpayne Posted March 5, 2018

I have the Plex docker installed and pinned to CPUs with "--cpuset-cpus=4,12,5,13,6,14,7,15". However, when playing one or more videos (transcoding), it is only using CPU core 4, and the dashboard likewise shows only CPU 4 being pegged. Below is "top" with the last-used-CPU column enabled (Last Used Cpu (SMP)); I saved the config with "W" (capital W) so I could grep top afterwards.

top | grep -i plex

Note: the second column is the last-used CPU (SMP).

29564  4 nobody  1.7 0.0  0:00.08 /usr/lib/plexmediaserver/Plex Relay -p 443 -N -R 0:localhost:324+
16034  4 nobody  0.3 0.7 14:47.10 /usr/lib/plexmediaserver/Plex Media Server
16276  4 nobody  0.3 1.0 10:00.99 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resou+
16034  4 nobody  6.9 0.7 14:47.31 /usr/lib/plexmediaserver/Plex Media Server
29666  4 nobody  2.6 0.1  0:00.08 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16276  4 nobody  0.3 1.0 10:01.00 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resou+
29666  4 nobody 77.2 1.1  0:02.42 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  1.0 0.7 14:47.34 /usr/lib/plexmediaserver/Plex Media Server
29666  4 nobody 95.7 1.1  0:05.31 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  1.0 0.7 14:47.37 /usr/lib/plexmediaserver/Plex Media Server
29564  4 nobody  0.3 0.0  0:00.09 /usr/lib/plexmediaserver/Plex Relay -p 443 -N -R 0:localhost:324+
29666  4 nobody 96.0 1.1  0:08.22 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  0.3 0.7 14:47.38 /usr/lib/plexmediaserver/Plex Media Server
29666  4 nobody 96.0 1.1  0:11.13 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  1.0 0.7 14:47.41 /usr/lib/plexmediaserver/Plex Media Server
16276  4 nobody  0.3 1.0 10:01.01 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resou+
29666  4 nobody 97.0 1.1  0:14.08 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  1.0 0.7 14:47.44 /usr/lib/plexmediaserver/Plex Media Server
27752  4 nobody  0.3 0.2  0:01.39 Plex Plug-in [com.plexapp.agents.plexthememusic] /usr/lib/plexme+
29666  4 nobody 95.4 1.1  0:16.97 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  1.0 0.7 14:47.47 /usr/lib/plexmediaserver/Plex Media Server
29564  4 nobody  0.3 0.0  0:00.10 /usr/lib/plexmediaserver/Plex Relay -p 443 -N -R 0:localhost:324+
29666  4 nobody 95.7 1.1  0:19.88 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  0.3 0.8 14:47.48 /usr/lib/plexmediaserver/Plex Media Server
29666  4 nobody 96.4 1.1  0:22.80 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  0.7 0.8 14:47.50 /usr/lib/plexmediaserver/Plex Media Server
16276  4 nobody  0.3 1.0 10:01.02 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resou+
29666  4 nobody 96.4 1.2  0:25.72 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  0.7 0.8 14:47.52 /usr/lib/plexmediaserver/Plex Media Server
27359  4 nobody  0.3 0.3  0:02.10 Plex Plug-in [com.plexapp.agents.thetvdb] /usr/lib/plexmediaserv+
29666  4 nobody 96.4 1.2  0:28.64 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  0.7 0.8 14:47.54 /usr/lib/plexmediaserver/Plex Media Server
27456  4 nobody  0.3 0.3  0:02.78 Plex Plug-in [com.plexapp.agents.localmedia] /usr/lib/plexmedias+
29666  4 nobody 96.0 1.2  0:31.55 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  1.7 0.7 14:47.59 /usr/lib/plexmediaserver/Plex Media Server
16276  4 nobody  0.3 1.0 10:01.03 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resou+
29564  4 nobody  0.3 0.0  0:00.11 /usr/lib/plexmediaserver/Plex Relay -p 443 -N -R 0:localhost:324+
29666  4 nobody 96.4 1.2  0:34.47 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16034  4 nobody  0.7 0.8 14:47.61 /usr/lib/plexmediaserver/Plex Media Server
16276  4 nobody  0.3 1.0 10:01.04 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resou+

top without grep:

Tasks: 484 total, 3 running, 279 sleeping, 0 stopped, 1 zombie
%Cpu(s): 8.5 us, 4.8 sy, 0.9 ni, 85.6 id, 0.1 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 15455624 total, 435464 free, 9284364 used, 5735796 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 4633040 avail Mem

  PID  P USER  %CPU %MEM    TIME+ COMMAND
16315  4 nobody 98.0  1.1  0:17.65 /usr/lib/plexmediaserver/Plex Transcoder -codec:0 h264 -codec:2 +
16410 11 root   15.6 23.7  1385:09 /usr/bin/qemu-system-x86_64 -name guest=Ubuntu,debug-threads=on +
 5573  8 root    3.3  0.2  1:34.96 /usr/local/sbin/guacd -p /var/run/guacd.pid
16894  1 root    3.0  0.0  0:00.09 find /mnt/disk1/Personal -noleaf -maxdepth 19
12902  0 root    2.3  0.3  0:59.71 /usr/bin/cadvisor -logtostderr
 8931 10 root    1.3  0.0 23:36.74 /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/c+
15956  0 107     1.3  3.4  8:49.01 /usr/lib/jvm/default-java/bin/java -Djava.util.logging.config.fi+
12554  9 root    1.0  0.0  0:00.58 top
27111 11 nobody  1.0  5.5  5:07.04 mono --debug Radarr.exe -nobrowser -data=/config
 8756  8 root    0.7  1.0 421:13.17 /usr/local/sbin/shfs /mnt/user -disks 255 10240000000 -o noatime+
 9312  8 nobody  0.7  0.5 13:21.97 /usr/lib/kodi/kodi.bin --headless
11015  9 nobody  0.7  4.4 13:16.35 java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
12133  9 nobody  0.7  1.2 18:52.68 bin/mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSo+
16034  4 nobody  0.7  0.8 14:53.22 /usr/lib/plexmediaserver/Plex Media Server
    9  2 root    0.3  0.0 13:32.07 [rcu_preempt]
   21  2 root    0.3  0.0  3:56.84 [ksoftirqd/2]
   57  9 root    0.3  0.0  2:27.89 [ksoftirqd/9]
   62 10 root    0.3  0.0  3:04.83 [ksoftirqd/10]
 8423  0 root    0.3  0.0 37:24.60 /usr/local/sbin/emhttpd
 8450  3 root    0.3  0.0  7:10.38 ttyd -d 0 -i /var/run/ttyd.sock login -f root
 9825  2 root    0.3  0.4 13:45.24 /usr/bin/dockerd -p /var/run/dockerd.pid --mtu=9000 --storage-dr+
10345  3 nobody  0.3  7.0 29:11.27 /usr/bin/python -OO /usr/bin/sabnzbdplus --config-file /config -+
15535  2 root    0.3  0.0  0:48.97 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 809+
16417  0 root    0.3  0.0  0:44.60 [vhost-16410]
27359  4 nobody  0.3  0.3  0:02.32 Plex Plug-in [com.plexapp.agents.thetvdb] /usr/lib/plexmediaserv+
27752  4 nobody  0.3  0.2  0:01.65 Plex Plug-in [com.plexapp.agents.plexthememusic] /usr/lib/plexme+
    1 11 root    0.0  0.0  0:15.05 init
    2  8 root    0.0  0.0  0:00.09 [kthreadd]
    4  0 root    0.0  0.0  0:00.00 [kworker/0:0H]
    7  0 root    0.0  0.0  0:00.00 [mm_percpu_wq]
    8  0 root    0.0  0.0  8:13.91 [ksoftirqd/0]
   10 10 root    0.0  0.0  0:11.36 [rcu_sched]
   11  2 root    0.0  0.0  0:00.00 [rcu_bh]
   12  0 root    0.0  0.0  0:01.08 [migration/0]
   13  0 root    0.0  0.0  0:00.00 [cpuhp/0]
   14  1 root    0.0  0.0  0:00.00 [cpuhp/1]

HP DL160 G6, 2x Xeon X5550, 16GB memory
7x 4TB WD RED, 2x 240GB cache
1x 1TB laptop HDD in unassigned
Unraid: 6.4.0
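The single pegged core can also be confirmed from the host side: the kernel reports every process's allowed-CPU set in /proc/<pid>/status, so you can tell whether the transcoder is actually restricted to core 4 or merely being scheduled there. A minimal sketch, inspecting the current shell for illustration; for a container process you would substitute the host-side PID of the Plex Transcoder seen in top (e.g. /proc/29666/status):

```shell
# Print the list of CPUs this process is allowed to run on.
# A process pinned with --cpuset-cpus=4,12,5,13,6,14,7,15 should
# report "4-7,12-15"; a single core here means something else
# (e.g. isolcpus) is constraining it.
grep Cpus_allowed_list /proc/self/status
```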
phanb Posted March 5, 2018

I am actually finding the same issue here with --cpuset-cpus=2,10,3,11,4,12,5,13,6,14,7,15. My transcoding is not doing very well, considering I am dedicating so many cores.
rcmpayne Posted March 5, 2018 (Author)

@phanb: if you run top, are you only using core 2? I assume it's using the first one in the list?

Sent from my Pixel 2 using Tapatalk
silversrfr Posted March 6, 2018

I have the same issue with my plexinc/Plex docker: multiple pinned CPU cores, but only one core (the first in the string of CPUs) is being utilized, at as high as 100%. I actually noticed this first when running Snoopy's Ubuntu docker. If I changed the CPU cores specified by cpuset-cpus, the single core being used at a high percentage moved to whichever CPU I assigned first in the list. I suspect this is the same for any docker I have pinned CPUs to; they just don't use enough processor for me to have noticed. I have an HP DL389 g6.

One thought... My HP Unraid server had the processors listed incorrectly in older (6.3.x) Unraid versions. However, pinned CPU usage was normal (distributed across all pinned cores assigned by cpuset-cpus) for me prior to upgrading Unraid to 6.4.1. Is this pinning issue related to whatever was done to fix the old CPU "mapping" issue?
rcmpayne Posted March 6, 2018 (Author)

14 hours ago, silversrfr said:
"I have the same issue with my plexinc/Plex docker, multiple pinned CPU cores but only one core (the first in the string of CPUs) is being utilized..." (quoted above)

Not sure I agree here. I just tested this with sabnzbd and it's working correctly.
Sab Docker setting: --cpuset-cpus=0,8,1,9,2,10,3,11

root@SERVER:~# top | grep -i sab
sab:24.87 docker-containerd -l unix:///var/run/do+
10345  2 nobody  63.2 7.1 30:37.29 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345  2 nobody  25.7 7.1 30:38.07 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345  8 nobody  56.4 7.1 30:39.78 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 10 nobody 113.5 7.1 30:43.23 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 10 nobody  99.0 7.1 30:46.24 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 10 nobody  65.8 7.1 30:48.24 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 10 nobody  35.6 7.1 30:49.32 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 10 nobody  37.1 7.1 30:50.44 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 11 nobody  32.2 7.1 30:51.42 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 11 nobody  26.2 7.1 30:52.21 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 11 nobody  31.0 7.1 30:53.15 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 11 nobody  30.4 7.1 30:54.07 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 11 nobody  23.4 7.1 30:54.78 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345 11 nobody  30.0 7.1 30:55.69 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345  0 nobody  32.3 7.1 30:56.67 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345  0 nobody  42.6 7.1 30:57.96 /usr/bin/python -OO /usr/bin/sabnzbdplu+
10345  0 nobody  53.5 7.2 30:59.58 /usr/bin/python -OO /usr/bin/sabnzbdplu+
rcmpayne Posted March 6, 2018 (Author)

OK, I fixed the issue. I had followed a three-part YouTube series about assigning CPUs in Unraid, and one option talked about setting Unraid to use only CPU 1 and the Plex docker CPU 2 (maybe I confused that with VMs).

How to check:
1. Go to the Main tab in Unraid.
2. Select the flash drive.
3. Under the Syslinux configuration tab, look for:

label unRAID OS
  menu default
  kernel /bzimage
  append isolcpus=4,12,5,13,6,14,7,15 initrd=/bzroot

The isolcpus entry tells the Unraid OS not to use those cores. If you have this setting like I did, any docker pinned to those cores will not work correctly. As soon as I moved Plex to 0,8,1,9,2,10,3,11, it started to work correctly. Next I put Plex back on 4,12,5,13,6,14,7,15 only, removed isolcpus from the append line, and Plex started to work on the second CPU's cores.

So if you are having issues like me, check for and remove isolcpus from the flash Syslinux configuration and let Unraid manage all CPU cores. My misconception was that isolating cores with isolcpus would still work for docker apps; it really only applies to VMs. If you keep isolcpus for your VMs, do not assign those cores to any dockers.
Maybe someone smarter with Linux can vet my findings here:

root@SERVER:~# top | grep -i plex
15558 15   0.3 0.3 0:06.15 Plex Script Hos
17191  6   0.3 0.3 0:03.68 Plex Script Hos
15558 15   0.3 0.3 0:06.16 Plex Script Hos
15192  6   0.3 0.8 0:04.58 Plex Media Serv
15558 15   0.3 0.3 0:06.17 Plex Script Hos
15192  6   7.6 0.8 0:04.81 Plex Media Serv
21104  4 296.0 1.2 0:08.94 Plex Transcoder
15192  6   7.9 0.8 0:05.05 Plex Media Serv
18546 14   0.3 0.0 0:00.14 Plex Relay
21213  5 282.1 1.1 0:08.52 Plex Transcoder
15192  6   2.6 0.8 0:05.13 Plex Media Serv
15558 15   0.3 0.3 0:06.18 Plex Script Hos
18546  6   0.3 0.0 0:00.15 Plex Relay
21213  4 588.4 1.1 0:26.35 Plex Transcoder
15192  6   2.6 0.8 0:05.21 Plex Media Serv
18546 14   0.7 0.0 0:00.17 Plex Relay
15558 15   0.3 0.3 0:06.19 Plex Script Hos
21213  6 596.4 1.2 0:44.36 Plex Transcoder
15192  6   2.3 0.8 0:05.28 Plex Media Serv
18546  6   0.7 0.0 0:00.19 Plex Relay
15558 15   0.3 0.3 0:06.20 Plex Script Hos
21213  4 602.6 1.2 1:02.56 Plex Transcoder
15192  6   2.6 0.8 0:05.36 Plex Media Serv
18546 15   0.7 0.0 0:00.21 Plex Relay
15558 15   0.3 0.3 0:06.21 Plex Script Hos
21213 14 582.2 1.2 1:20.20 Plex Transcoder
15192  6   2.0 0.8 0:05.42 Plex Media Serv
17191  6   0.3 0.3 0:03.69 Plex Script Hos
18546 13   0.3 0.0 0:00.22 Plex Relay
21213 13 577.8 1.2 1:37.65 Plex Transcoder
15192  6   1.3 0.8 0:05.46 Plex Media Serv
15558 15   0.3 0.3 0:06.22 Plex Script Hos
15192  6  11.9 0.8 0:05.82 Plex Media Serv
18546 14   0.7 0.0 0:00.24 Plex Relay
15558 15   0.3 0.3 0:06.23 Plex Script Hos
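The conflict described above (cores handed to --cpuset-cpus that are also listed in isolcpus) can be checked mechanically before restarting anything. A minimal sketch in plain shell, using the values from this thread; the variable names are illustrative:

```shell
#!/bin/sh
# Cores removed from the Unraid scheduler via the syslinux append line.
isolcpus="4,12,5,13,6,14,7,15"
# Cores handed to the Plex container via --cpuset-cpus.
plex_pin="4,12,5,13,6,14,7,15"

# Flag every pinned core that the OS has been told not to schedule on.
for c in $(echo "$plex_pin" | tr ',' ' '); do
  case ",$isolcpus," in
    *",$c,"*) echo "core $c is pinned to the container but isolated from the OS" ;;
  esac
done
```

With the values above it flags all eight cores, which matches the observed behaviour: the container had nowhere valid to run except where the kernel first placed it.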
silversrfr Posted March 6, 2018

To reinforce @rcmpayne above... as per @Squid: isolcpus works by preventing the OS from utilizing certain cores. Since Docker runs directly as processes on the OS, it only has access to the cores you've left for unRaid's use. Therefore, if you want Plex to utilize all of the cores available to unRaid, it is pointless to add the cpuset-cpus parameter to Plex, as you implied you did by stating you assigned the cores to Plex. And by your example, you're assigning Plex access to cores that it can't use, since the OS itself doesn't have access to them. A docker, although similar to a VM, is not a VM: it is constrained first by the limits imposed by the OS and second by the parameters passed to it in the template.
This topic is now archived and is closed to further replies.