Get Fancy with Docker and CPU Pinning


jonp


So once you release the next beta (or RC ;) ), I'm guessing the Docker container will fail to start after the reboot, and then I just change the parameter in the container?

 

Actually, I wish that were what happened. Instead, it starts the containers just fine; it just doesn't apply the CPU pinning.

Link to comment

Hmm, tricky. I'd like to say I'll remember to change this, but my brain just doesn't work that way.

 

I'll make sure we put it in the announcement post for the release that includes Docker 1.6.  Eventually we want to offer options to control this from the webGui outside the "extra parameters" field, so that when changes like this happen with Docker, we catch them and apply them for you automatically, but we're just not there yet.

Link to comment

 

Thanks, my brain appreciates that!

Link to comment
  • 2 weeks later...

Thanks for this jonp.

 

It actually came in very handy.

 

digiKam has an option to use only one core for CPU-intensive tasks, but when you run it in Docker, I believe the load still gets spread across all cores, causing digiKam to max out every CPU core. I couldn't even access the unRAID GUI; I had to telnet in and stop the container.

 

By pinning digiKam to 3 of my 4 cores, I'm able to keep other tasks alive and emhttp responsive even while digiKam is doing facial recognition on thousands of pictures (which takes a while).

 

I also pinned cores for SABnzbd, because when it's doing checksums and unraring it tends to hog all resources.
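For anyone wanting to do the same, a rough sketch of the value to pass (cores are numbered from 0, so on a 4-core machine the exact cores below are just an example):

--cpuset=0,1,2

That leaves core 3 free for unRAID and emhttp; a range like --cpuset=0-2 does the same thing. On newer Docker releases the flag is --cpuset-cpus rather than --cpuset.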

Link to comment

Glad it helped!  I probably need to update this, though, because the right place to put the --cpuset parameter is now the extra parameters field, not the repo name field.
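For example (a sketch; the core numbers here are arbitrary), the Extra Parameters field of the container template would hold:

--cpuset=0,1

(or --cpuset-cpus=0,1 on newer Docker releases) instead of the parameter being appended to the repository name as in the original screenshots.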

Link to comment
  • 1 month later...

I have been planning to play with CPU pinning for my containers, because I'm running into problems where my CPU is maxed out by a Docker container and I lose the ability to do anything else with the server.  Clearly, pinning CPUs intelligently will sort this out.  That said, in the name of user friendliness, I think setting this parameter needs to be improved in the webUI, and ideally there should be a way for unRAID to maintain priority for NAS/webUI functionality, whether through default CPU pinning or process prioritization.  In my opinion, add-on applications like Docker containers should not be able to take over to the point where you can't interact with the server anymore.

Link to comment

I just read this thread as my weekend tinkering begins after a shocking week at work. Sounds great - I'm going to play around.

 

@jimbobulator: re your statement about unRAID maintaining enough resources for webGui / NAS functionality - jonp said this in post #6:

 

That said, unRAID itself is not bound to any specific cores by default, and will allocate resources as it sees fit for its system tasks. unRAID OS takes precedence over all other system services as it is responsible for protecting that which matters most: your data.
Link to comment

 

Fair enough, I missed this when I read the thread backwards (facepalm).  Based on my testing, it seems the webUI does not get the same prioritization, and it's not clear whether jonp's term "unRAID OS" covers the webUI.  If a container is going crazy and using 100% of all cores and I can't access the webUI, I can't stop the container.  Well, I can, but not without going to the command line, which it seems LT is trying to avoid making users do.  Not much more than an annoyance for me, but it's an opportunity for improvement.

 

To clarify, my experience is that high CPU load from a Docker container makes the webUI extremely slow.  I haven't seen it completely crash, but it gets slow enough that it's nearly unusable.  I admit I have a low tolerance for this sort of UI behavior...

Link to comment
  • 2 weeks later...

Hi JonP,

 

Perhaps you should update the body of the original post to reflect the new parameter. Now that unRAID 6 is out, I doubt anyone's still on the betas.

 

And even though there is a red disclaimer at the top, people might just follow the screenshots and use the old parameter. Since there is no feedback if the wrong parameter is used, and no easy way to find out whether the container is using the specified cores or all of them, they might not even realize.
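For those comfortable dropping to the command line, there is at least one way to check which cores a running container actually got (a sketch, assuming the standard cgroup layout Docker uses on unRAID 6; the full container ID comes from docker ps):

root@localhost:# docker ps --no-trunc
root@localhost:# cat /sys/fs/cgroup/cpuset/docker/<full-container-id>/cpuset.cpus

If the file lists only the cores you pinned (e.g. 0-2), the restriction was applied; if it lists every core, it was not.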

Link to comment

Yes sir, because I can't get it to work. Please post.

Link to comment

What can't you get to work? 

Link to comment

There is feedback, but you have to remove the container (just the container) and then re-add it.  You'll then see the command line and any warnings / errors.

 

As you can tell, with Docker 1.6 --cpuset still works (although it is deprecated):

 

root@localhost:# /usr/bin/docker run -d --name="MariaDB" --net="bridge" -e TZ="America/New_York" -p 3306:3306/tcp -v "/mnt/cache/appdata/mariadb/":"/db":rw --cpuset=2 needo/mariadb
Warning: '--cpuset' is deprecated, it will be replaced by '--cpuset-cpus' soon. See usage.
b39dd80519b78ee4b0cba5256b3fc6c4114e1f60bb56298a4d9375e255aba070

The command finished successfully!

 

And if I purposely put in a CPU # that doesn't exist, it tells me "invalid argument" on a simple stop/restart:

 

root@localhost:# /usr/bin/docker run -d --name="MariaDB" --net="bridge" -e TZ="America/New_York" -p 3306:3306/tcp -v "/mnt/cache/appdata/mariadb/":"/db":rw --cpuset=5 needo/mariadb
Warning: '--cpuset' is deprecated, it will be replaced by '--cpuset-cpus' soon. See usage.
02097fc0aa1077f4eb1198481489574bed6387c635833283d64201db6b414551
time="2015-06-24T22:59:11-04:00" level=fatal msg="Error response from daemon: Cannot start container 02097fc0aa1077f4eb1198481489574bed6387c635833283d64201db6b414551: [8] System error: write /sys/fs/cgroup/cpuset/docker/02097fc0aa1077f4eb1198481489574bed6387c635833283d64201db6b414551/cpuset.cpus: invalid argument" 

Link to comment

 

 

Huh, I learn something new every day :-) I guess I never noticed that line before.

 

Thanks

Link to comment

Small correction: you don't even need to remove the container; just update the extra parameters section of the existing one and save it.

Oh, OK... I'm still set up using the original instructions and have it in the repository line.  Too lazy to move it, I guess...  ;)
Link to comment

Where do you set the code for the CPU pinning? I was doing it the old way, and with the improvements to the webGui I cannot find the spot. Also, an example would be great.

Link to comment
  • 4 weeks later...

I am trying to use BOINC from aptalca.  When I fire it up and add a project to commit CPUs to, my utilization (according to the dashboard) is 100%.

 

So I decided to try CPU pinning

 

I have put --cpuset=5, and I have also tried --cpuset-5, in the extra parameters field... still my CPU goes to 100%.

 

I have a 6-core CPU, so 5 should be the last core.  Do you have to assign them in order?

Link to comment

Not sure if it makes a difference, but --cpuset has been deprecated.  You should be using --cpuset-cpus.
Link to comment

Right, I mentioned that in my previous post.  By this do you mean --cpuset-5 or --cpuset-cpus=5 or something else?  I guess I'm not clear on the syntax.

Link to comment

You should be able to change the CPU utilization limits within BOINC. I can
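On the cpuset syntax question above, a short sketch (assuming a 6-core CPU, which Docker numbers 0 through 5): to pin a container to the last core only, the extra parameter would be

--cpuset-cpus=5

and cores do not have to be assigned in order - lists (0,2,4) and ranges (2-5) are both accepted. Note that --cpuset-5 is not valid syntax, and --cpuset=5 relies on the deprecated flag.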

Link to comment
