seer_tenedos

Everything posted by seer_tenedos

  1. @CHBMB I also tried the link you sent to check the kernel config, but I could not get it to work because it could not find /proc/config.gz. I also found some info on ipvs checks, but the two commands they suggest returned nothing, whereas one of them should have listed ip_vs if it was loaded, from what I can tell. This is what I tried, but maybe I am doing it wrong:

root@Tower:~# modprobe ip_vs
modprobe: FATAL: Module ip_vs not found in directory /lib/modules/4.19.43-Unraid
root@Tower:~# wget https://github.com/moby/moby/raw/master/contrib/check-config.sh
--2019-05-30 21:23:03-- https://github.com/moby/moby/raw/master/contrib/check-config.sh
Resolving github.com (github.com)... 13.237.44.5
Connecting to github.com (github.com)|13.237.44.5|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh [following]
--2019-05-30 21:23:03-- https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.28.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.28.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10314 (10K) [text/plain]
Saving to: ‘check-config.sh’
check-config.sh 100%[==================>] 10.07K --.-KB/s in 0.001s
2019-05-30 21:23:04 (9.15 MB/s) - ‘check-config.sh’ saved [10314/10314]
root@Tower:~# chmod +x check-config.sh
root@Tower:~# ./check-config.sh
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
error: cannot find kernel config
try running this script again, specifying the kernel config:
  CONFIG=/path/to/kernel/.config ./check-config.sh or ./check-config.sh /path/to/kernel/.config
root@Tower:~# zcat /proc/config.gz > kernel.config
gzip: /proc/config.gz: No such file or directory
root@Tower:~# grep -e ipvs -e nf_conntrack_ipv4 /lib/modules/$(uname -r)/modules.builtin
root@Tower:~# lsmod | grep -e ipvs -e nf_conntrack_ipv4

A few more things I tried:

root@Tower:/lib/modules/4.19.43-Unraid# modprobe -- ip_vs
modprobe: FATAL: Module ip_vs not found in directory /lib/modules/4.19.43-Unraid

more /proc/modules
md_mod 49152 4 - Live 0xffffffffa009f000
igb 167936 0 - Live 0xffffffffa00b5000
i2c_algo_bit 16384 1 igb, Live 0xffffffffa000f000
sb_edac 24576 0 - Live 0xffffffffa025a000
x86_pkg_temp_thermal 16384 0 - Live 0xffffffffa0176000
intel_powerclamp 16384 0 - Live 0xffffffffa0044000
coretemp 16384 0 - Live 0xffffffffa002b000
kvm_intel 204800 0 - Live 0xffffffffa0957000
kvm 364544 1 kvm_intel, Live 0xffffffffa02a9000
crct10dif_pclmul 16384 0 - Live 0xffffffffa019a000
crc32_pclmul 16384 0 - Live 0xffffffffa024a000
crc32c_intel 24576 0 - Live 0xffffffffa0165000
ghash_clmulni_intel 16384 0 - Live 0xffffffffa001f000
pcbc 16384 0 - Live 0xffffffffa0030000
aesni_intel 200704 0 - Live 0xffffffffa0209000
aes_x86_64 20480 1 aesni_intel, Live 0xffffffffa015c000
crypto_simd 16384 1 aesni_intel, Live 0xffffffffa0036000
cryptd 20480 3 ghash_clmulni_intel,aesni_intel,crypto_simd, Live 0xffffffffa0019000
ipmi_ssif 24576 0 - Live 0xffffffffa0080000
megaraid_sas 131072 12 - Live 0xffffffffa0112000
glue_helper 16384 1 aesni_intel, Live 0xffffffffa0072000
i2c_i801 24576 0 - Live 0xffffffffa0079000
i2c_core 40960 4 igb,i2c_algo_bit,ipmi_ssif,i2c_i801, Live 0xffffffffa01fe000
intel_cstate 16384 0 - Live 0xffffffffa010d000
intel_uncore 102400 0 - Live 0xffffffffa017c000
ahci 40960 0 - Live 0xffffffffa0145000
nvme 32768 0 - Live 0xffffffffa013c000
intel_rapl_perf 16384 0 - Live 0xffffffffa008b000
nvme_core 45056 1 nvme, Live 0xffffffffa0057000
libahci 28672 1 ahci, Live 0xffffffffa0065000
wmi 20480 0 - Live 0xffffffffa0051000
acpi_power_meter 20480 0 - Live 0xffffffffa0025000
pcc_cpufreq 16384 0 - Live 0xffffffffa00b0000
ipmi_si 53248 0 - Live 0xffffffffa0091000
acpi_pad 20480 0 - Live 0xffffffffa0009000
button 16384 0 - Live 0xffffffffa0000000

I have also attached modules.builtin to show that ip_vs is not there either, which, based on the post above, suggests it is not built into the kernel, if I understand things correctly. modules.builtin
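For reference, this is roughly where I would expect ip_vs to show up if it were shipped at all; the paths are my assumption based on the usual layout under /lib/modules, so adjust for your kernel version:

# is ip_vs shipped as a loadable module? (directory is missing on my box)
ls /lib/modules/$(uname -r)/kernel/net/netfilter/ipvs/

# is it compiled directly into the kernel? (no output on my box)
grep -i ip_vs /lib/modules/$(uname -r)/modules.builtin

# if the running config were exposed, the relevant options could be checked too
zcat /proc/config.gz | grep -E 'CONFIG_IP_VS|CONFIG_NETFILTER_XT_MATCH_IPVS'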
  2. @CHBMB based on the above it looks like it may be the ipvs stuff. I will not have much time until the weekend to do more testing, but could it be that the kernel modules are not loaded? Is there a way to confirm whether they are loaded? To test this before, I just copied the files you provided over the top of the files on the flash drive. Is that all I needed to do? I know that broke my Nvidia support in Docker, so I assumed it worked. Sorry, I only know basic Linux and can get around, but I come from Windows, so I don't know the more advanced stuff, and a lot of it also seems to change based on the distro.
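In case it helps anyone following along, this is the sort of check I mean by "are the modules loaded"; it is nothing Unraid specific, just standard tooling, so treat it as a sketch:

# list currently loaded modules and filter for the one we care about
lsmod | grep ip_vs

# show details of the module file on disk; fails if the kernel does not ship it
modinfo ip_vs

# /proc/modules is the raw list that lsmod reads from
grep ip_vs /proc/modules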
  3. OK, I found something in /var/log/docker.log:

time="2019-05-29T21:22:05.101290807+10:00" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."
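If anyone wants to check their own system for the same thing, this is roughly how I spotted it; the log location is just where it lives on my box:

# search the docker daemon log for ipvs-related errors
grep -i ipvs /var/log/docker.log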
  4. @CHBMB thanks for all the help on this, and sorry it took so long to get a chance to test it. Firstly, I will point out how I am testing it in case other interested people want to try it or can spot my mistake:
1. Create a single-node docker swarm (docker swarm init).
2. Follow the connectivity tests from https://gist.github.com/alexellis/8e15f2ea1af7281268ec7274686985ba
Sadly the latest patch above did not quite work for me. The network create and service create look like they work:

$ docker network create --driver=overlay --attachable=true testnet
$ docker service create --network=testnet --name web --publish 80 --replicas=5 nginx:latest

but when I used curl to test it, it could not connect:

docker run --name alpine --net=testnet -ti alpine:latest sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/5) Installing ca-certificates (20190108-r0)
(2/5) Installing nghttp2-libs (1.35.1-r0)
(3/5) Installing libssh2 (1.8.2-r0)
(4/5) Installing libcurl (7.64.0-r1)
(5/5) Installing curl (7.64.0-r1)
Executing busybox-1.29.3-r10.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 7 MiB in 19 packages
/ # curl web
curl: (7) Failed to connect to web port 80: Host is unreachable
/ # ping web
PING web (10.0.0.2): 56 data bytes

Either something is still missing or I have set something up wrong.
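If anyone repeats this test, this is roughly how I clean it back up afterwards so the box is not left in swarm mode (from memory, so double-check before running):

# remove the test service and network, then take the node out of swarm mode
docker service rm web
docker network rm testnet
docker swarm leave --force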
  5. @BLKMGK I have not had a chance to test yet, but I can tell you how I initially did my tests the easy way. Don't do a multi-node swarm; just turn the Unraid box into a single-node swarm of itself. That means you don't need to worry about networking between boxes, etc. When I did that and then created multiple containers, each with just a basic test webpage, on a docker network, I could not talk between them if I docker exec'd into them, or from my host to them, from memory. Sadly I just have my main Unraid server and no spare, but I may still try the kernel on it in a day or two when I get some time. Also, just for reference, I want swarm for OpenFaaS.
  6. The last time I tried to test swarm was on 6.6.x; I have not tried on 6.7.0 or 6.7.1. Maybe I should. I will find my notes, as it is a single command to turn it on (see the sketch below); it was just that most of my docker networking died when I did it before. Apart from that I did not notice other issues, but I only played a little, with no working networking. Everything came up fine but packets never made it.
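From memory, the single command I mean is just the standard swarm init, with the matching command to turn it back off; treat the exact flags as approximate until I dig out my notes:

# put the Unraid box into swarm mode as a single node
docker swarm init

# undo it again if the docker networking breaks
docker swarm leave --force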
  7. @CHBMB while I would love @limetech to add this, I honestly doubt they ever will. This has been requested by different people for at least the last two years with no movement or response. I was just hoping that the Nvidia plugin might have a way to work around this sort of issue with kernel modules, but sadly it sounds like that is not the case. In my case I am just looking to create a single-node swarm to use some of the serverless and other technology that will not run if the docker engine is not in swarm mode or on Kubernetes. I guess the only way is to do this all on a VM that runs full-featured docker so I can run it as a single-node swarm.
  8. I know this is partly off topic, but is there any chance the missing kernel config option needed for docker swarm, "CONFIG_NETFILTER_XT_MATCH_IPVS", could be added? It would be so useful to have docker swarm and Nvidia working on Unraid (see the check below for confirming whether a kernel config has it). Topic talking about missing module
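For anyone building a custom kernel who wants to confirm the option is actually on, this is the sort of check I have in mind; the path to the .config is whatever your build uses, so it is only an example:

# from the kernel build directory, check the swarm-related options are enabled
grep -E 'CONFIG_IP_VS=|CONFIG_NETFILTER_XT_MATCH_IPVS=' .config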
  9. I would love this to be added as well. Maybe the team building the Nvidia kernel could add it if Unraid is unwilling to, though it would be great to know why they don't want to add it.
  10. Also, some containers require docker swarm. On Unraid I found that the docker networks stop working if you enable docker as a single-node swarm.
  11. Is this just needed for the overlay network? As far as I can tell, for me everything is working on Unraid 6.6.6 apart from overlay networks, where in a single-node swarm the containers on the overlay networks can't talk to each other even though they are on the same host. Have you had a chance to see if this was fixed in the 6.7.0 RCs at all? If it is just a matter of turning some things on, maybe they would consider it?
  12. Also, I found this post. It seems like it may be missing kernel modules or something, but honestly it goes over my head. I was hoping it was just a mistake in how I set up my overlay network or swarm.
  13. Using Unraid 6.6.7, I think; the latest release. I needed some containers that only work in docker swarm, so I set up a single-node docker swarm with the command docker swarm init and passed an advertise address of 192.168.10.10, which is my Unraid server's IP. Now any containers I spin up on their own overlay network, whether I use the docker stack command to create everything or manual commands to create the overlay network and the containers on it, can't communicate (roughly what I ran is sketched below). They can resolve name to IP on the overlay network, but all traffic fails, and there are no other hosts involved, so I can't work out why the overlay network is not working. Can anyone give me some ideas of how to sort out this issue? Regards, Chris
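This is roughly the manual version of what I ran, from memory, so take the exact names and flags as examples rather than a copy-paste recipe:

# single-node swarm on the Unraid host
docker swarm init --advertise-addr 192.168.10.10

# an attachable overlay network plus two containers on it
docker network create --driver overlay --attachable testnet
docker run -d --name web --network testnet nginx
docker run -it --rm --network testnet alpine ping web   # resolves, but no replies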
  14. OK, this is really confusing me now. I thought my issue was my syncing stuff, so I did a plain container on Docker Cloud with the setup below and still got the same error trying to set the admin password:

OpenVpnAs:
  image: 'linuxserver/openvpn-as:latest'
  net: host
  ports:
    - '943:943'
    - '1194:1194/udp'
    - '9443:9443'
  privileged: true
  restart: on-failure
  autoredeploy: true

What am I doing that would break the ability to change the admin password on the instance?
  15. Can anyone help me with this setup? I am trying to set up openvpn-as and use btsync to sync/back up the config so that if I lose a host I can bring it up on another host. I am using Docker Cloud and the following stackfile, but when the openvpn container comes up I can't change the admin password, as I get a message saying "System Error" after I confirm the new password. Also, I can access the website but I can't log in, because it says the username or password is wrong using admin/password. Is there a better way to auto backup/restore the /config folder across hosts?

OpenVpnAs:
  image: 'linuxserver/openvpn-as:latest'
  net: host
  ports:
    - '943:943'
    - '1194:1194/udp'
    - '9443:9443'
  privileged: true
  restart: on-failure
  tags:
    - Azure
  volumes_from:
    - btsync
btsync:
  image: 'tutum/btsync:latest'
  restart: on-failure
  roles:
    - global
  target_num_containers: 2
  volumes:
    - /config
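One thing I have not tried yet is setting the admin password from a shell inside the running container instead of through the web UI. This is only a sketch: the container name is whatever Docker Cloud assigns, and the script path is my assumption based on where OpenVPN Access Server normally installs itself:

# open a shell in the running openvpn-as container (name is just an example)
docker exec -it OpenVpnAs /bin/bash

# inside the container, use the Access Server CLI to set the admin password
/usr/local/openvpn_as/scripts/sacli --user admin --new_pass 'newpassword' SetLocalPassword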