[Solved] docker swarm not working.


jkluch

Recommended Posts

Link to comment where I walk through the fix: https://forums.unraid.net/topic/71259-solved-docker-swarm-not-working/?do=findComment&comment=721957

 

I'm having no trouble running single docker containers, but while following docker tutorials I ran into one that I can't get working.

I haven't been able to find anyone else with this issue, but I also haven't seen evidence of anyone using swarm on unraid either.

Has anybody tried either of these apps and gotten them to work?

 

This example voting app is one:

https://github.com/dockersamples/example-voting-app

docker swarm init
docker stack deploy --compose-file docker-stack.yml vote

The app should be running on port 5000 but I can't get that to load.
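For anyone hitting the same wall, it's worth confirming what the swarm actually brought up before blaming the port. A sketch of the checks (the `vote_vote` service name is an assumption; actual names come from the stack name plus the service names in docker-stack.yml):

```shell
# List the services the stack created and their replica counts
docker stack services vote

# Show where each task landed and whether any are failing
docker stack ps vote --no-trunc

# Hypothetical service name 'vote_vote': dump its published ports
docker service inspect vote_vote --format '{{json .Endpoint.Ports}}'
```

If a service shows 0/1 replicas, `docker service ps <name>` usually reveals the error keeping the task from starting.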

 

 

The other example is from this tutorial:

https://docs.docker.com/get-started/part2/

(Part 2 doesn't use swarm.)

(Part 3 uses swarm; I changed the port value in the yml file to 4000:80 since Unraid already uses port 80.)

Edited by jkluch
Linking to other comment
3 hours ago, Squid said:

Are you using unRaid?  docker compose does not come with it out of the box

Yeah, and I've installed docker-compose even though I'm not sure it's required when using the 'docker stack' command.

 

The containers, service, and network all come up and look fine. I just can't access them on the ports they're set to on the network.

  • 2 weeks later...

Finally got it working. The unRAID kernel is missing configs needed for docker swarm to function, so I had to rebuild /boot/bzimage and /boot/bzmodules.

 

Used the following as guides:

https://wiki.gentoo.org/wiki/Docker#Kernel

https://github.com/moby/moby/blob/master/contrib/check-config.sh

$ curl https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh > check-config.sh

$ bash ./check-config.sh

With that I was able to determine which kernel options needed for full docker functionality were missing.
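For reference, check-config.sh can be run bare (it hunts down the running kernel's config on its own) or pointed at an explicit .config; a sketch, with the config path as a placeholder:

```shell
# Against the running kernel's config (the script searches
# /proc/config.gz, /boot/config-$(uname -r), etc. by itself)
bash ./check-config.sh

# Against a .config you are about to build, e.g. in a kernel source tree
bash ./check-config.sh /path/to/linux/.config

# Narrow the output to just the options flagged as missing
bash ./check-config.sh 2>/dev/null | grep -i missing
```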

 

https://wiki.unraid.net/Building_a_custom_kernel

https://raw.githubusercontent.com/CHBMB/Unraid-DVB/master/build_scripts/kernel-compile-module.sh

 

For whatever reason I couldn't get Unraid to start if my syslinux.cfg was set as:

kernel /bzimage-new
append initrd=/bzroot-new

renaming the files to the defaults:

kernel /bzimage
append initrd=/bzroot

worked, but I didn't want to do this in case the new bzimage and bzroot were corrupt (overwriting the originals leaves no fallback). Maybe bzimage-new isn't a valid filename?

Edited by jkluch
  • 7 months later...
  • 4 weeks later...

I second this. My main server takes a long time to POST, and 200GB of RAM and 12 ethernet ports take a while to come up on my old hardware. I could easily handle 20-25 docker containers on my backup client, but it would be great to utilize the power of my server when it's up; when it goes down, or the power goes out, etc., I have a low-power client that can handle the docker load. This would be a fantastic addition.

  • 1 month later...
On 5/7/2018 at 11:41 PM, jkluch said:

Finally got it working.. The unRAID kernel is missing configs needed for docker swarm to function.  I had to rebuild /boot/bzimage and /boot/bzmodules

 

Please share how you did this. I downloaded the script and got the following:

Generally Necessary:

- CONFIG_NETFILTER_XT_MATCH_IPVS: missing

Optional Features:

- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_NET_CLS_CGROUP: missing
- CONFIG_CGROUP_NET_PRIO: missing
- CONFIG_IP_VS: missing
- CONFIG_IP_VS_NFCT: missing
- CONFIG_IP_VS_PROTO_TCP: missing
- CONFIG_IP_VS_PROTO_UDP: missing
- CONFIG_IP_VS_RR: missing
- CONFIG_EXT4_FS: enabled (as module)
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: missing

 

That's as far as I got. Searching Slackware packages doesn't turn up anything relevant, so I'm not sure what I'm missing.

I would like docker swarm solely for the orchestration. No need for HA. I'd like to orchestrate OpenFaaS.

 

Thank you.

 


Hey, sorry for the late reply. The fix isn't to install package dependencies; you have to rebuild the kernel and enable some options that Unraid doesn't enable by default for some reason.

 

This link: https://wiki.unraid.net/Building_a_custom_kernel was used as a guide to update the kernel.

I used this https://raw.githubusercontent.com/CHBMB/Unraid-DVB/master/build_scripts/kernel-compile-module.sh to work out how the update process works.

 

check-config.sh lists the kernel options needed for a fully functioning docker. I'm not an expert on the kernel, so honestly I don't know why a lot of the optional modules are optional or what they do for the most part. I do know that CONFIG_IP_VS is required for swarm to function properly.
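To make the rebuild step concrete: once you know which options are missing, they can be flipped on in the kernel source tree before compiling. A sketch using the kernel's bundled scripts/config helper; the source path is a placeholder, and which options are tristate (module-capable) versus plain booleans can vary between kernel versions:

```shell
cd /path/to/linux-source   # placeholder: the unpacked kernel source tree

# Tristate options can be built as loadable modules
for opt in IP_VS IP_VS_RR NETFILTER_XT_MATCH_IPVS; do
    ./scripts/config --module "$opt"
done

# These IPVS sub-options are booleans, so enable them outright
for opt in IP_VS_NFCT IP_VS_PROTO_TCP IP_VS_PROTO_UDP; do
    ./scripts/config --enable "$opt"
done

# Let Kconfig resolve any dependencies the new options pulled in
make olddefconfig
```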

 

I had notes from the last time I did this; I just put them on github here: https://github.com/jkluch/unraid-docker-swarm

I do not suggest following this guide unless you understand what the commands are doing, or at the very least you're OK with breaking your install.

Edited by jkluch
fixing links
  • jkluch changed the title to [Solved] docker swarm not working.
  • 3 weeks later...

Is this just needed for the overlay network? As far as I can tell, everything is working for me on Unraid 6.6.6 apart from overlay networks: in a single-node swarm, containers on the same overlay network can't talk to each other even though they're on the same host. Have you had a chance to see if this was fixed in the 6.7.0 RCs at all? If it's just a matter of turning some things on, maybe they would consider it?

  • 1 month later...

I may be running into this too, it seems! This is also doubly weird in that I've already got the NVIDIA kernel loaded. I was working with a friend to join my server into a swarm and ran into networking issues. I'm on 6.7.0-RC7; any chance we could get swarm enabled for the release, if it's not already? From what I can tell there's only one module that needs to be added:

 

CONFIG_NETFILTER_XT_MATCH_IPVS: missing
 

  • 2 months later...

@CHBMB looks like I'm not the only one with Nvidia who needs some kernel compile options to support docker swarm.

Have to say that my compile is further complicated by requiring both the HP RMRR patch and the Nvidia support!
 

 

Edited by aterfax

Probably should have added -

I have this working by doing the following:

Install from the Nvidia Unraid plugin, then boot and check it works, i.e. GPUs show up with 'nvidia-smi'.

Extract and save the Nvidia kernel modules from the bzmodules provided by the Nvidia Unraid plugin; they will be in KERNELVERSION/kernel/drivers/video.
Keep these files for later.

If you need to use the HP RMRR patch, apply the following between step 1 and step 2, making sure to change the file path to the correct one:
 


##Make necessary changes to fix HP RMRR issues
sed -i '/return -EPERM;/d' ./kernel/drivers/iommu/intel-iommu.c

Compile and install bzimage-new and bzmodules-new using the method above in this thread, with the correct options for IP_VS support, replacing the existing files in /boot (they will be called bzimage and bzmodules; make a backup just in case!).


Boot up, run 'modprobe ip_vs', then check that docker swarm is working. (The modprobe might not be needed, but I didn't see it loaded otherwise...)
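If anyone wants to script that check rather than eyeball it, something like this works on generic Linux (nothing Unraid-specific):

```shell
# Load ip_vs only if it is not already resident; the trailing space in
# the pattern keeps ip_vs_rr and friends from matching
if ! lsmod | grep -q '^ip_vs '; then
    modprobe ip_vs
fi

# List everything IPVS-related that is now loaded
lsmod | grep '^ip_vs'
```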

For testing I hosted a default nginx on my nodes, then used curl to check they were alive with 'curl localhost:8888':

docker service create --name my_web \
                        --replicas 3 \
                        --publish published=8888,target=80 \
                        nginx


Assuming you get the standard nginx response from each node, it should be working.

Now to get nvidia stuff working again.

Now navigate back to where you saved the Nvidia modules, 'insmod nvidia.ko', and then insmod the remaining modules.

Now check that they are loaded and your GPUs show up with 'nvidia-smi'.

I've had no success including the Nvidia bits in the kernel compilation itself as yet, but I suspect it is absolutely key that you use the bzroot file from the Unraid Nvidia plugin, as it includes some Nvidia/CUDA software that's required.

I've added the working bzmodules and bzimage that I compiled with the RMRR patch to the post so people can skip compiling if they wish.
Also added the Nvidia modules to the post so you don't need to extract them with 7-Zip etc.
 

bzmodules bzimage

nvidia-modeset.ko nvidia-uvm.ko nvidia.ko nvidia-drm.ko

Edited by aterfax
Added files.

Ideally, when just needing to add kernel modules, you would compile just the required modules into *.ko for the purposes of Unraid. Otherwise you run into the issues we see above with the monolithic nature of Linux and the fixed versioning of Unraid (something I personally disagree with).

DKMS would be the ideal solution in the case of the Nvidia build: instead of installing a new Unraid build custom-tailored with Nvidia baked in, one could select from the available Nvidia driver packages, DKMS would do its magic, and poof, you'd have Nvidia kernel modules for the active running kernel.

 

In the case of standard kernel modules - I think we could probably start to approach it like openwrt and other embedded distributions have in the past:

Compile the kernel with minimal built-ins to save on memory footprint and distribution size (unraid does this)

Compile ALL of the modules as optional packages and make them available through the repository. (nerdpack is our repo, but it'd fill it with a lot of modules)

 

This way, if someone runs into a kernel module they need that would normally require recompiling a new kernel, they could just grab the package for the module, and it can then be added to modules-load.d, or to mkinitcpio.conf for early loading.
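As a concrete sketch of that last step, supposing a packaged ip_vs.ko existed: on a distro with systemd-style module loading it would be a one-line drop-in (Unraid's actual boot flow differs, so treat the paths as illustrative):

```shell
# Register the module for loading on every boot
echo ip_vs > /etc/modules-load.d/ip_vs.conf

# Load it immediately for the current boot
modprobe ip_vs

# Confirm it is resident
lsmod | grep '^ip_vs '
```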

 

Just my 2c from working with embedded Linux distros in the past and seeing how things have been handled there.

 

Here's an example of a set of kmod-* packages for openwrt:

https://archive.openwrt.org/snapshots/trunk/ar71xx/generic/packages/kernel/

Edited by Xaero
11 hours ago, Xaero said:

Ideally, when just needing to add kernel modules, you would compile just the required modules into *.ko for the purposes of unraid. [...]

Here's an example of a set of kmod-* packages for openwrt:

https://archive.openwrt.org/snapshots/trunk/ar71xx/generic/packages/kernel/

I think the custom packaging is necessary for the Nvidia stuff in order for acceleration to be passed through to dockers (i.e. the CUDA libraries, which either need to be present in the install bzroot on the USB or to be installed at boot time).

WRT modules, I agree: we could do with the Unraid devs maintaining and standardising a repo of kernel modules. (That way there's a chain of trust in the compilation rather than just random users, and modules can be tested/tweaked to match releases if needed.)

10 hours ago, aterfax said:

I think the custom packaging is necessary for the nvidia stuff in order for acceleration to be passed through to dockers [...]

To my knowledge they only have the nvidia-docker runtime (which could easily be packaged) and the Nvidia driver blobs (which could easily be packaged into DKMS format).
It would reduce the amount of effort required to maintain those builds. I'd approach it, but there's a reason I don't maintain packages anywhere currently. Not to mention Unraid and Limetech's 100% anti-package stance.

Edited by Xaero
1 hour ago, Xaero said:

to my knowledge they only have the nvidia docker runtime (which could easily be packaged) and the nvidia driver blobs [...]

We should probably summon @CHBMB for his thoughts on that for the unraid nvidia plugin then.
