seer_tenedos Posted May 29, 2019

OK, I found something in /var/log/docker.log:

time="2019-05-29T21:22:05.101290807+10:00" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."
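A log error like the one above can be checked against the running kernel with a short script. This is only a sketch: the paths are the standard Linux ones and may differ on a stripped-down distro, and it checks the three places ip_vs could live (loaded module, built-in, loadable on demand).

```shell
# Sketch: is ip_vs available to the running kernel?
# Standard paths assumed; adjust for your distro.
KVER=$(uname -r)
if grep -q '^ip_vs ' /proc/modules; then
    echo "ip_vs: currently loaded as a module"
elif grep -qs 'ip_vs\.ko' "/lib/modules/$KVER/modules.builtin"; then
    echo "ip_vs: built into the kernel"
elif modprobe -n ip_vs 2>/dev/null; then
    echo "ip_vs: available but not loaded (try: modprobe ip_vs)"
else
    echo "ip_vs: not available in this kernel"
fi
```

`modprobe -n` is a dry run, so the script never changes kernel state; it only reports which of the four cases applies.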
CHBMB Posted May 29, 2019

2 hours ago, seer_tenedos said:

@CHBMB thanks for all the help on this, and sorry it took so long to get a chance to test it. First, I will point out how I am testing it, in case other interested people want to try or can spot my mistake.

1. Create a single-node Docker swarm (docker swarm init).
2. Follow the connectivity tests from https://gist.github.com/alexellis/8e15f2ea1af7281268ec7274686985ba

Sadly, the latest patch above did not quite work for me. The network create and service create look like they work:

$ docker network create --driver=overlay --attachable=true testnet
$ docker service create --network=testnet --name web --publish 80 --replicas=5 nginx:latest

but when I used curl to test, it could not connect:

docker run --name alpine --net=testnet -ti alpine:latest sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/5) Installing ca-certificates (20190108-r0)
(2/5) Installing nghttp2-libs (1.35.1-r0)
(3/5) Installing libssh2 (1.8.2-r0)
(4/5) Installing libcurl (7.64.0-r1)
(5/5) Installing curl (7.64.0-r1)
Executing busybox-1.29.3-r10.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 7 MiB in 19 packages
/ # curl web
curl: (7) Failed to connect to web port 80: Host is unreachable
/ # ping web
PING web (10.0.0.2): 56 data bytes

Either something is still missing or I have set something up wrong.

I don't think the problem is with the kernel and Docker, but I can't help you further, as I don't use swarm. Here's a reference to a working .config for swarm; as you can see, the last patched version has all the same kernel modules present, plus some that aren't present in their config:

https://blog.hypriot.com/post/verify-kernel-container-compatibility/
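The test procedure quoted above can be collected into a single repeatable script. This is a hedged sketch: it defaults to a dry run that only prints the commands (a real run needs a working Docker daemon, so set DRY_RUN=0 yourself), and it swaps the interactive alpine + apk steps for the curlimages/curl image purely as a convenience.

```shell
#!/bin/sh
# Sketch of the single-node swarm connectivity test, following the
# names used in the post (testnet, web). DRY_RUN=1 (the default here)
# just echoes the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run docker swarm init
run docker network create --driver=overlay --attachable=true testnet
run docker service create --network=testnet --name web --publish 80 --replicas=5 nginx:latest
# Give the replicas a moment to converge, then test service-name
# resolution and HTTP reachability from an attached container.
run sleep 10
run docker run --rm --net=testnet curlimages/curl:latest curl -sf http://web
```

If the last command prints the nginx welcome page, the routing mesh works; the "Host is unreachable" failure in the post would show up there instead.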
seer_tenedos Posted May 30, 2019

23 hours ago, seer_tenedos said:

OK, I found something in /var/log/docker.log:

time="2019-05-29T21:22:05.101290807+10:00" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."

@CHBMB based on the above, it looks like it may be the ipvs stuff. I will not have much time until the weekend to do more testing, but could it be that the kernel modules are not loaded? Is there a way to confirm whether they are loaded?

To test this, I just copied the files you provided over the top of the files on the flash drive. Is that all I needed to do? I know it broke my Nvidia support in Docker, so I assumed the copy worked. Sorry, I know basic Linux and can get around, but I come from Windows, so I don't know the more advanced stuff, which also seems to vary between distros for a lot of things.
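One quick sanity check for the "did the copy take" question. This assumes the standard Unraid layout, where the kernel ships as bzimage/bzmodules (plus bzroot/bzfirmware) on the flash drive mounted at /boot; that layout is an assumption here, so adjust the paths if yours differs. Comparing checksums of what is on flash against the files you were given confirms the copy, and a reboot is still required before the new kernel is the one reported by uname -r.

```shell
# Sketch: checksum the kernel files on the flash drive (Unraid layout
# assumed) so they can be compared against the files that were copied.
for f in bzimage bzmodules; do
  [ -e "/boot/$f" ] && md5sum "/boot/$f" || echo "/boot/$f: not found"
done
uname -r   # should match the custom build's version after a reboot
```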
seer_tenedos Posted May 30, 2019 (edited)

@CHBMB I also tried the link you sent to check the kernel config, but I could not get it to work because it could not find /proc/config.gz.

I also found some suggested ipvs checks, but the two commands they suggest returned nothing, whereas one of them should have listed ip_vs if it were loaded, as far as I can tell. This is what I tried, but maybe I am doing it wrong:

root@Tower:~# modprobe ip_vs
modprobe: FATAL: Module ip_vs not found in directory /lib/modules/4.19.43-Unraid
root@Tower:~# wget https://github.com/moby/moby/raw/master/contrib/check-config.sh
--2019-05-30 21:23:03-- https://github.com/moby/moby/raw/master/contrib/check-config.sh
Resolving github.com (github.com)... 13.237.44.5
Connecting to github.com (github.com)|13.237.44.5|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh [following]
--2019-05-30 21:23:03-- https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.28.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.28.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10314 (10K) [text/plain]
Saving to: 'check-config.sh'

check-config.sh 100%[==========>] 10.07K --.-KB/s in 0.001s

2019-05-30 21:23:04 (9.15 MB/s) - 'check-config.sh' saved [10314/10314]

root@Tower:~# chmod +x check-config.sh
root@Tower:~# ./check-config.sh
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
error: cannot find kernel config
  try running this script again, specifying the kernel config:
    CONFIG=/path/to/kernel/.config ./check-config.sh or ./check-config.sh /path/to/kernel/.config
root@Tower:~# zcat /proc/config.gz > kernel.config
gzip: /proc/config.gz: No such file or directory
root@Tower:~# grep -e ipvs -e nf_conntrack_ipv4 /lib/modules/$(uname -r)/modules.builtin
root@Tower:~# lsmod | grep -e ipvs -e nf_conntrack_ipv4

A few more things I tried:

root@Tower:/lib/modules/4.19.43-Unraid# modprobe -- ip_vs
modprobe: FATAL: Module ip_vs not found in directory /lib/modules/4.19.43-Unraid

more /proc/modules
md_mod 49152 4 - Live 0xffffffffa009f000
igb 167936 0 - Live 0xffffffffa00b5000
i2c_algo_bit 16384 1 igb, Live 0xffffffffa000f000
sb_edac 24576 0 - Live 0xffffffffa025a000
x86_pkg_temp_thermal 16384 0 - Live 0xffffffffa0176000
intel_powerclamp 16384 0 - Live 0xffffffffa0044000
coretemp 16384 0 - Live 0xffffffffa002b000
kvm_intel 204800 0 - Live 0xffffffffa0957000
kvm 364544 1 kvm_intel, Live 0xffffffffa02a9000
crct10dif_pclmul 16384 0 - Live 0xffffffffa019a000
crc32_pclmul 16384 0 - Live 0xffffffffa024a000
crc32c_intel 24576 0 - Live 0xffffffffa0165000
ghash_clmulni_intel 16384 0 - Live 0xffffffffa001f000
pcbc 16384 0 - Live 0xffffffffa0030000
aesni_intel 200704 0 - Live 0xffffffffa0209000
aes_x86_64 20480 1 aesni_intel, Live 0xffffffffa015c000
crypto_simd 16384 1 aesni_intel, Live 0xffffffffa0036000
cryptd 20480 3 ghash_clmulni_intel,aesni_intel,crypto_simd, Live 0xffffffffa0019000
ipmi_ssif 24576 0 - Live 0xffffffffa0080000
megaraid_sas 131072 12 - Live 0xffffffffa0112000
glue_helper 16384 1 aesni_intel, Live 0xffffffffa0072000
i2c_i801 24576 0 - Live 0xffffffffa0079000
i2c_core 40960 4 igb,i2c_algo_bit,ipmi_ssif,i2c_i801, Live 0xffffffffa01fe000
intel_cstate 16384 0 - Live 0xffffffffa010d000
intel_uncore 102400 0 - Live 0xffffffffa017c000
ahci 40960 0 - Live 0xffffffffa0145000
nvme 32768 0 - Live 0xffffffffa013c000
intel_rapl_perf 16384 0 - Live 0xffffffffa008b000
nvme_core 45056 1 nvme, Live 0xffffffffa0057000
libahci 28672 1 ahci, Live 0xffffffffa0065000
wmi 20480 0 - Live 0xffffffffa0051000
acpi_power_meter 20480 0 - Live 0xffffffffa0025000
pcc_cpufreq 16384 0 - Live 0xffffffffa00b0000
ipmi_si 53248 0 - Live 0xffffffffa0091000
acpi_pad 20480 0 - Live 0xffffffffa0009000
button 16384 0 - Live 0xffffffffa0000000

I have also attached modules.builtin to show ip_vs is not there either, which, based on the post above, suggests it is not built into the kernel, if I understand things correctly.

modules.builtin

Edited May 30, 2019 by seer_tenedos
Added more test results
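Since /proc/config.gz does not exist on this kernel, an alternative is to grep the kernel build's .config directly for the IPVS options. A sketch, with two caveats: the option list below follows my reading of moby's contrib/check-config.sh (so treat it as an assumption, not the authoritative set), and the sample .config the script writes is only for demonstration; point CONFIG at your real file instead.

```shell
# Sketch: check a kernel .config for the IPVS options swarm's
# routing mesh is believed to need (list assumed from check-config.sh).
CONFIG=${CONFIG:-/tmp/sample.config}

# Demonstration only: write a minimal sample .config if none exists.
# With a real kernel .config, export CONFIG=/path/to/.config instead.
[ -e "$CONFIG" ] || cat > "$CONFIG" <<'EOF'
CONFIG_IP_VS=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_RR=m
EOF

for opt in IP_VS IP_VS_NFCT IP_VS_PROTO_TCP IP_VS_PROTO_UDP IP_VS_RR; do
  if grep -q "^CONFIG_${opt}=[ym]" "$CONFIG"; then
    echo "CONFIG_${opt}: present"
  else
    echo "CONFIG_${opt}: MISSING"
  fi
done
```

Any MISSING line means that option was neither built in (=y) nor built as a module (=m), which matches the empty grep results in the transcript above.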
aterfax Posted September 5, 2020

I too would like these to be added; however, I have solved the problem elsewhere before:

https://forums.unraid.net/topic/71259-solved-docker-swarm-not-working/?do=findComment&comment=760934
local.bin Posted September 9, 2020

+1 for Docker swarm support. Unraid is great for basic Docker tasks and running single instances, but having my three Unraid servers working together to maintain uptime for my clients would be a real plus.
delaney Posted October 1, 2020

On 9/9/2020 at 6:53 PM, local.bin said:

+1 for Docker swarm support. Unraid is great for basic Docker tasks and running single instances, but having my three Unraid servers working together to maintain uptime for my clients would be a real plus.

+1 here as well.
tjb_altf4 Posted October 1, 2020

I don't think Swarm will be around much longer.
BoKKeR Posted October 18, 2020

+1, I would love to have this. Otherwise I might have to run a virtualisation OS and use that to run Unraid.