jonp

Get Fancy with Docker and CPU Pinning


UPDATE 4/27/2015:  --cpuset will be deprecated in Docker 1.6.  Those of you on unRAID 6 beta 15 are not affected yet, but once we upgrade to Docker 1.6, this will change.  The new method will be to use --cpuset-cpus (the flag is just being renamed).

 

Hey guys, wanted to share something cool we figured out today that can substantially impact how Docker and VMs work together on the same host.  In short, you can force individual containers to be bound to specific CPU cores inside unRAID.

 

Why is this useful?

The number one thing that can affect the user experience of locally-run VMs on an unRAID host is context switching.  When applications compete for access to the CPU, they essentially take turns, and when that happens the processor performs a context switch: it temporarily unloads one process's data from the processor's L1, L2, and L3 caches back into RAM so the other process can load into those caches and do its job, then unloads and reloads the first process.  While this is normal behavior, it can cause some undesirable effects when severely processor-intensive activity is happening in both a container and a VM at the same time.  By pinning specific containers to specific cores, similar to how we can with virtual machines, we can keep those workloads from context switching against each other and avoid the resulting hit to user experience.

 

How to do it

The plan is to implement this in dockerMan in an upcoming release as an advanced configuration option that you can apply to all Docker containers or to individual containers, but for now you can take advantage of this TODAY by modifying your existing containers in dockerMan like so:

 

[Screenshot: the dockerMan container edit page, showing the repository name field]

 

In the "repository name" field, simply add the following code before the name of the author/repo:

 

--cpuset=#

 

If you want to set multiple cores, separate them with commas; to specify a range of cores, use a dash.  Examples:

 

--cpuset=0,2,4,6

--cpuset=0-3
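For anyone curious exactly which cores a mixed value like 0-3,6 resolves to, here's a small shell sketch (a hypothetical helper, not part of Docker or unRAID) that expands the comma/dash syntax into individual core numbers:

```shell
# expand_cpuset: hypothetical helper that expands a --cpuset value
# (comma-separated cores and dash ranges) into individual core numbers.
expand_cpuset() {
  for part in $(echo "$1" | tr ',' ' '); do
    case "$part" in
      *-*) seq "${part%-*}" "${part#*-}" ;;  # dash range, e.g. 0-3
      *)   echo "$part" ;;                   # single core, e.g. 6
    esac
  done
}

expand_cpuset "0-3,6" | paste -sd' ' -   # prints: 0 1 2 3 6
```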

 

Note that cores are numbered starting at 0.  Also note that you can check the total number of cores on your system by typing the following in a command-line session (SSH or Telnet):

 

nproc
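Since numbering starts at 0, the highest valid core is one less than what nproc reports.  A quick one-liner (assuming standard Linux coreutils) to print every valid core number on your box:

```shell
# Print all valid --cpuset core numbers, comma-separated.
# On a 4-core machine this prints: 0,1,2,3
seq 0 $(( $(nproc) - 1 )) | paste -sd, -
```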


Looks really good.

Thanks for the write up jon.

 

So...my CPU has 12 vCPUs.

 

Do you think this split would be acceptable across unRAID, dockers, and VMs?

 

0-1 Unraid

2-3 Docker

4-5 XBMCubuntu

6-11 Win8

 

Also, what happens if, for example, I have unRAID, Docker and XBMCubuntu in use but not the Win8 VM?

 

In my example there would be 6 vCPUs not assigned if Win8 is not active. Do those still get used by unRAID, Docker etc. even though their CPUs are pinned?


I wonder what % of the unRAID user base needs VMs, especially with docker containers available these days.

 

Totally uneducated guess, but to a layman like me it looks like the time LT are spending on VM integration is disproportionate to the number of users who will use it.


"Need" being the important word here.

I do not need a Win8 VM, but for the cost of a GPU I can have a badass gaming PC more powerful than my PS4 inside my server.

 

I do not need an XBMCubuntu VM, but because I have one now, I no longer require a separate HTPC, so I can retire it.

 

I'm sure most people will be happy with docker, but the more I mess around with VMs, the happier I am that they are being integrated.

 

Not to mention it's cool as funk being able to tell people what's running on my PC when they ask  8)

I love the idea of having One-PC-To-Rule-Them-All

 

Horses for courses though.


Manticore pretty much summed it up.  We are working on materials to make our mission easier to understand.  There are still a lot of really cool and useful things that VMs can do that containers cannot.


And yes your setup looks good to me!


So with respect to unRAID OS, Docker, and your VMs sharing system resources, the reality is that unRAID OS is very non-intensive on the CPU.  Even when copying large amounts of data across various network protocols, the OS remains fairly light on CPU resource needs.  That said, plugins can definitely drive this up, especially ones like media servers that perform transcoding on the host.  That is another reason why we recommend Docker containers over plugins for these types of needs: they can be better controlled and managed in how they allocate resources.

 

That said, unRAID itself is not bound to any specific cores by default, and will allocate resources as it sees fit for its system tasks. unRAID OS takes precedence over all other system services, as it is responsible for protecting what matters most: your data.

 

To go a step further, you should also consider your CPU and your workloads.  For example, let's say I want to have two instances of OpenELEC.  I can create OpenELEC twice with both instances given the same two cores, or I could give each instance one core apiece (different cores).  Now on my system, in testing, I didn't notice a difference either way, but to be fair, OpenELEC uses very little CPU as it is, even when playing content (since there isn't any transcoding).

 

My primary example for doing this would be running localized Windows virtual desktops.  If you're running Windows 8.1 with a GPU passed through along with a set of input devices (mouse/keyboard), you're watching YouTube, and your cores are shared with Plex while it's transcoding a video stream to another machine, your YouTube video will stutter, the audio will crackle, and in general it will feel like an awful user experience.  But when you pin vCPUs 0-5 to Windows and use cpuset to force Plex to operate only on cores 6-7, you get great performance for the gaming VM and great performance for transcoding.

 

In a test today, we had the following:

 

- Booted OpenELEC, pinned to cores 6-7 (AMD HD 4350 GPU, onboard analog audio [00:1b.00], USB mouse only, Dell monitor)

 

- Booted Windows 8.1, pinned to cores 0-5 (NVIDIA GTX 780 video & audio, USB mouse/keyboard, G-Sync 4K monitor)

 

- Bound Needo's Plex Media Server to cores 6-7

 

- Started a looping transcode through a browser on my MacBook Air

 

- Started a looping transcode through a browser on Eric's MacBook Pro

 

- Started playing XBMC movie trailers on the OpenELEC VM

 

- Started playing Titanfall at 1440p on Windows 8.1 (pretty cranked settings, too)

 

- Smiles on our faces...priceless...

 

I kid you not, this worked flawlessly, and using htop we monitored the CPU and verified the core pinnings were working.  Frankly, we could probably have gotten even more transcoding going if we wanted.  The key here was not the number of vCPUs assigned to the VMs; it was pinning the cores to keep context switching to a minimum, restricted and isolated within each VM.
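One way to sanity-check a layout like the one above before booting everything is to list each workload's cores and flag any core that appears in two sets.  This is just an illustrative shell sketch using the core splits from our test, not anything unRAID runs for you:

```shell
# Core assignments from the test above (illustrative values).
win_cores="0 1 2 3 4 5"     # Windows 8.1 VM
other_cores="6 7"           # OpenELEC VM + Plex container

# Report any core assigned to both sets; no overlap output means
# the two workloads are fully isolated from each other.
for c in $other_cores; do
  case " $win_cores " in
    *" $c "*) echo "overlap on core $c" ;;
  esac
done
echo "check done"   # prints only "check done" when the sets are disjoint
```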


I think that this is going to be super useful to me in one circumstance which I just found:  MariaDB & Sabnzbd competing for resources.

 

I just tried to use my HTPC (a separate computer) running XBMC, and it took FOREVER to navigate around the movies (minutes).  A little investigation found that Sab was unpacking a 40 GB download at the time and taking up all of the resources, leaving little to none for XBMC to communicate with the MariaDB docker image.  I'm going to try pinning Sab to a single core (I don't really care if unpacking takes a little longer) and see how that works.  Hopefully then everything will stay usable no matter what Sab is doing.

 

I'm assuming that setting the cpuset parameter on the sab docker will still allow my other dockers to use any/all other cores available.

 

If this doesn't work, then I guess the bottleneck is the cache drive itself, and I will have to move Sab to a docker on my other server.


 

 


 

That is correct!

 


Sorry to bump an old topic (well not that much :D)

 

What's the proper/newer way to do this with the latest unRAID 6 beta?

 

Can't find any "Repository" field in the webui...


There is an "Advanced View" button in the top right of each container page that will show the repository field.


I feel dumb now. :D

 

thanks!

 

Don't, I'm grateful you bumped it, never noticed it before.  :D


Actually the proper place to put this now would be in the "Extra Parameters" section (also hidden under "advanced view").


Had a question about pinning.

 

If you run a W7/8 desktop VM and unRAID with half a dozen containers, say Plex, Sab, XBMC, etc., which cores are best to pin them to, if it even matters? Would it be better to push the desktop to hyperthreaded cores, or to keep it on physical cores and force the containers onto the HTs? I doubt there would be a difference, but thought it was worth asking. Or does it only allow pinning to physical cores?

 

Thanks, Kenny.


Umm, stupid question: where is dockerMan? I don't see that option in my webGui.

The name dockerMan is a holdover from a plugin before it was integrated into the UI.  It's just the "Docker" tab now, but old habits die hard, and a number of people (myself included) still refer to the docker tab as dockerMan.


Do you have to assign two cores to a docker, or can just one be used?

 

I only have 4 cores and would like to assign 1 to Plex and 1 to Media Browser.


You can set them however you like.

 

e.g. for Plex you could use --cpuset=3 (assigns core #3)

for Media Browser, --cpuset=2 (assigns core #2)


Thank you sir or madam

 


I keep getting this error:

df173d4fe7fab13f01642ce14f7e6eacfa0defbebea81a27a1ae7263a01d7f12

time="2015-04-15T23:00:28-05:00" level="fatal" msg="Error response from daemon: Cannot start container df173d4fe7fab13f01642ce14f7e6eacfa0defbebea81a27a1ae7263a01d7f12: write /sys/fs/cgroup/cpuset/docker/df173d4fe7fab13f01642ce14f7e6eacfa0defbebea81a27a1ae7263a01d7f12/cpuset.cpus: invalid argument"

 

The command failed.

 

Using this command:

--cpuset=4 mediabrowser/mbserver

 


Your earlier post says you have four cores.  You are trying to assign core 5; cores start their numbering at 0.
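That "invalid argument" error is exactly what you get when the core number is out of range.  A quick pre-check you can run before editing the container, hard-coding 4 total cores to mirror this box (substitute $(nproc) on yours):

```shell
total=4                  # cores on this box; use $(nproc) to detect it
core=4                   # the value that was passed to --cpuset
max=$(( total - 1 ))     # highest valid core number (numbering starts at 0)

if [ "$core" -gt "$max" ] || [ "$core" -lt 0 ]; then
  echo "core $core is invalid: cores are numbered 0-$max"
else
  echo "core $core is valid"
fi
# prints: core 4 is invalid: cores are numbered 0-3
```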


well DAMN IT!!!!

stupid 0 core, thanks geezus

 

Y'all people are way too smart for your own good.

 

Edit: got it going.


Hey everyone, just thought I'd let folks know that Docker is changing the cpuset parameter to --cpuset-cpus in 1.6.  You can use this in 1.5, and I encourage anyone doing this to update their containers now, so they don't get frustrated when 1.6 hits and the old method is FULLY deprecated.

 

CORRECTION:  This does NOT work in beta 15.  My bad...

 

I'll update the OP with this information as well...


So once you release the next beta (or RC ;) ), I am guessing the docker will fail to start on reboot and then I just change the parameter in the docker?

