Kubernetes cluster



I would like to request better support for running a Kubernetes cluster on Unraid. I'm currently using KinD (Kubernetes in Docker) and it's actually working quite well so far, but it's not exactly natively integrated.

 

https://kind.sigs.k8s.io/

 

Incidentally, I offered to help the OP of the Ultimate Unraid Dashboard with his request for someone familiar with Docker to upgrade the setup and make it easier to set up and use. I've currently set up Grafana, Loki, Promtail, Telegraf, and InfluxDB (up-to-date versions) across five separate Docker containers.

 

The community has adopted Helm charts for multi-app setup and coordination in a cluster. I think a lot of everyone's time could be better spent adopting this paradigm rather than putting more effort into Docker setups; Kubernetes itself has deprecated Docker as a runtime (dockershim) in favor of containerd.


While I look forward to k8s, I suspect the odds are not good.

 

We may have a better chance setting up a new bare-metal k8s server and letting it be managed by Unraid if possible. Let Unraid do what it does best: NAS.

 

I find that even the VMs are somewhat painful. Take a Windows VM, for example: you can't upgrade 10 -> 11 directly, Bluetooth support is lacking, GPU passthrough is fiddly, XML vs. UI, etc.

On 12/21/2022 at 9:13 AM, ctab579 said:

Running KinD is better than nothing. Can you post your cluster config and deployment as an example?

 

I couldn't confidently deploy k3d and ensure that I wouldn't lose the SQLite data. (I guess I could have just tested it.)

 

Happily! It did take some time to set up and figure out maintenance, as I encountered a growing-size issue I'll discuss below, but now I think it's running really well for my simple use cases. I haven't spent too much time with k8s in general, so I would love some help/tips on a couple of things.

 

Setup

I made a folder `/mnt/user/appdata/cluster` where everything lives. Inside it, I believe I simply used `wget` to download the binaries: kind, kubectl, k9s, helm, etc. I also update them to the latest versions by manually downloading the new binaries here with `wget`.
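
For reference, a minimal sketch of how the binaries can be fetched with `wget` (the kind version and URLs below are examples; check each project's releases page for current ones):

cd /mnt/user/appdata/cluster

# kubectl (latest stable release)
wget -O kubectl "https://dl.k8s.io/release/$(wget -qO- https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# kind (pick a release version)
wget -O kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64

chmod +x kubectl kind
# k9s and helm ship as tarballs on their release pages; extract the binary into this folder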

 

I have the following bash scripts to facilitate the overall setup/maintenance, numbered in the order they would be used when creating a new cluster.

Workflow

  1. I `cd` to /mnt/user/appdata/cluster.
  2. Run `source 0-setenv.sh`, which mainly changes the shell's HOME so that k9s, kubectl, etc. work.
  3. Cluster creation (`1-create-cluster.sh`) I only run once at the start, but if I ever need to recreate the cluster with additional port mappings etc., this order should work.
  4. When changing env vars for the cluster, I re-run `2-add-app-envs.sh` to delete and re-apply them (a rough end-to-end sequence is sketched just after this list).
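
Put together, a fresh-cluster session looks roughly like this (script names are the ones listed below):

cd /mnt/user/appdata/cluster
source 0-setenv.sh        # point HOME/KUBECONFIG at this folder
bash 1-create-cluster.sh  # only on first creation (or re-creation)
bash 2-add-app-envs.sh    # re-run whenever env vars change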

0-setenv.sh

#
# Run with: source 0-setenv.sh
#
# Points HOME at this folder so kubectl/k9s/helm keep their config here,
# and puts the downloaded binaries (plus krew plugins) on PATH.
export HOME=$(pwd)
export PATH="$HOME:${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
export KUBECONFIG=$(pwd)/.kube/config.yml
export EDITOR=nano

1-create-cluster.sh

#!/bin/bash
# Create the KinD cluster from cluster.yaml, then install Promtail via Helm
# so pod logs get shipped to the Loki container already running on Unraid.
kind create cluster --config=cluster.yaml
helm repo add grafana https://grafana.github.io/helm-charts
helm install promtail grafana/promtail --set config.lokiAddress=http://MY-UNRAID-RUNNING-LOKI-DOCKER-IP:3100/loki/api/v1/push

2-add-app-envs.sh

# Recreate the app's env-var Secret so updated values take effect
kubectl delete secret my-app-env
kubectl apply -f my-app-env.yml

Config

cluster.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /mnt/user/my-apps-storage
    containerPath: /my-apps-storage
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: HostToContainer
  extraPortMappings:
  - containerPort: 30150
    hostPort: 30150
    # optional: set the bind address on the host
    # 0.0.0.0 is the current default
    listenAddress: "0.0.0.0"
    # optional: set the protocol to one of TCP, UDP, SCTP.
    # TCP is the default
    protocol: TCP
  - containerPort: 30151
    hostPort: 30151
    # optional: set the bind address on the host
    # 0.0.0.0 is the current default
    listenAddress: "0.0.0.0"
    # optional: set the protocol to one of TCP, UDP, SCTP.
    # TCP is the default
    protocol: TCP
  

 

  • I didn't find a way to change a cluster config once the cluster has been created, which is very annoying. If someone has tips on this, it would be greatly appreciated.
  • Here I mapped Unraid's share my-apps-storage into the cluster, as well as multiple ports. The idea being that since I can't change the cluster, I added multiple port mappings at the beginning. This is just 2 port mappings, as I've distilled and scrubbed my actual setup, so you'll need to go through each file and personalize it a bit. You can add more.
  • I also made one share, my-apps-storage, with a subfolder for each app, since cluster creation only lets you map the one folder. Each app is then given a PersistentVolume/PersistentVolumeClaim pointing at a folder inside this mapped folder.
  • I then use these port mappings with my traefik container to map this Kubernetes node to my domain (a rough example of that follows below).
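
As an illustration of that last point, here's a minimal sketch of what the Traefik side could look like with the file provider (Traefik v2 dynamic configuration); the hostname, entrypoint name, and Unraid IP below are placeholders rather than my actual config:

http:
  routers:
    my-first-app:
      rule: "Host(`app.example.com`)"
      entryPoints:
        - websecure
      service: my-first-app
  services:
    my-first-app:
      loadBalancer:
        servers:
          # NodePort 30150 from the extraPortMappings above
          - url: "http://UNRAID-LAN-IP:30150"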

my-app-env.yml

apiVersion: v1
kind: Secret
metadata:
  name: my-app-env
type: Opaque
stringData:
  MY_APP_ENV_VAR_KEY: https://example.com
  # COMMENTED_APP_ENV_VAR_KEY: COMMENTED_VALUE

 

These are just the env vars I want available in the cluster for my app.
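
To sanity-check the Secret after applying it, standard kubectl commands are enough:

# show the Secret (values come back base64-encoded)
kubectl get secret my-app-env -o yaml

# or just confirm it exists and see which keys it holds
kubectl describe secret my-app-env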

Deployment (GitLab)

deployment.yaml (in GitLab repo)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-first-storage
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1000Gi
  hostPath:
    path: /my-apps-storage/first-app
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-first-storage
spec:
  volumeName: pv-first-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1000Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-first-app
  labels:
    app: my-first-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-first-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: my-first-app
    spec:
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: pvc-first-storage
      containers:
        - name: my-first-apps-repo-name
          image: registry.example-domain.com/user/repo:main
          imagePullPolicy: Always
          resources:
            requests:
              memory: '8Gi'
              cpu: '4'
            limits:
              memory: '10Gi'
              cpu: '6'
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: storage
              mountPath: /storage
          envFrom:
            - secretRef:
                name: my-app-env
      imagePullSecrets:
        - name: gitlab-deploy-token
---
apiVersion: v1
kind: Service
metadata:
  name: my-first-app
spec:
  type: NodePort
  ports:
    - name: http
      nodePort: 30150
      port: 8080
  selector:
    app: my-first-app

 

  • Here you can see I mapped one of the extraPortMappings (nodePort 30150) to my app's HTTP port 8080 (a quick verification sketch follows below).
  • Using a PersistentVolume and PersistentVolumeClaim, I'm able to map a folder inside my Unraid share, my-apps-storage/my-first-app, into the k8s node and then as /storage inside the container.
  • I also map my app's env vars from the cluster here; you can see this under the container's envFrom.
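
A quick way to verify everything is wired up after applying the manifest (standard commands; UNRAID-LAN-IP is a placeholder for your server's address):

kubectl get deploy,pod -l app=my-first-app   # pod should be Running
kubectl get svc my-first-app                 # should show 8080:30150/TCP
kubectl logs deploy/my-first-app             # tail the app's logs
curl http://UNRAID-LAN-IP:30150/             # reaches port 8080 in the container via the NodePort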

Project Repo

.gitlab-ci.yml (in GitLab repo)

.kube-context:
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi

stages:
  - build
  - deploy
  - notify

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  GITLAB_DEPLOY_TOKEN_USERNAME: $GITLAB_DEPLOY_TOKEN_USERNAME
  GITLAB_DEPLOY_TOKEN_PASSWORD: $GITLAB_DEPLOY_TOKEN_PASSWORD
  NODE_ENV: $NODE_ENV
  PUBLIC_HOSTNAME: $PUBLIC_HOSTNAME

build:
  stage: build
  script:
    - printenv | sort
    - time docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - 'time docker build
      --progress=plain
      --pull
      --cache-from $IMAGE_TAG
      --tag $IMAGE_TAG
      --build-arg PUBLIC_HOSTNAME
      .'
    - time docker push $IMAGE_TAG

deploy:
  extends: [.kube-context]
  stage: deploy
  image: 'registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications:latest'
  environment:
    name: production
  script:
    - gl-ensure-namespace gitlab-managed-apps
    # - gl-helmfile --file $CI_PROJECT_DIR/helmfile.yaml apply --suppress-secrets
    - kubectl delete secret gitlab-deploy-token || true
    - kubectl create secret docker-registry gitlab-deploy-token
      --docker-server=$CI_REGISTRY
      --docker-username=$GITLAB_DEPLOY_TOKEN_USERNAME
      --docker-password=$GITLAB_DEPLOY_TOKEN_PASSWORD
      --docker-email=user@example.com
    - kubectl apply -f deployment.yaml
    # - gl-helmfile --file $CI_PROJECT_DIR/helmfile.yaml apply --suppress-secrets
    - kubectl rollout restart deploy my-first-app
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

 

This is pretty much my exact .gitlab-ci.yml; I've even left some statements commented out, because someone more experienced might be able to help integrate Helm better. I added Helm at cluster creation for Promtail/Loki so I can ingest the app's logs.
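
For anyone wanting to pick up those commented gl-helmfile lines: they expect a helmfile.yaml in the repo root, and a minimal, untested sketch mirroring my current `helm install promtail` command (the IP is a placeholder) might look like this:

repositories:
  - name: grafana
    url: https://grafana.github.io/helm-charts

releases:
  - name: promtail
    chart: grafana/promtail
    values:
      - config:
          lokiAddress: http://MY-UNRAID-RUNNING-LOKI-DOCKER-IP:3100/loki/api/v1/push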

 

My GitLab setup is pretty simple: just gitlab-ce running in a container, mapped to my domain through traefik/cloudflared, and gitlab-runner running in a separate Docker container, all on my Unraid. Here's my current gitlab-runner config, since access to Docker-in-Docker is needed to build Docker images. I encountered several issues getting this working, including one recently, so if anyone wants to take a second look they can. I had to map /var/run/docker.sock to get it to work correctly.

 

config.toml (in gitlab-runner)

concurrent = 1
check_interval = 0

[session_server]
  #listen_address = "0.0.0.0:8093" #  listen on all available interfaces on port 8093
  #listen_address = "[::]:8093"
  advertise_address = "gitlab-runner-1:8093"
  session_timeout = 1800

[[runners]]
  name = "docker-runner-1"
  url = "https://my-gitlab-instance.com"
  token = "MY_GITLAB_TOKEN"
  executor = "docker"
  builds_dir = "/builds"
  cache_dir = "/cache"
  environment = [
    "GIT_DEPTH=10",
    "GIT_CLONE_PATH=$CI_BUILDS_DIR/$CI_CONCURRENT_ID/$CI_PROJECT_NAME",
    "DOCKER_TLS_CERTDIR=/certs",
    "DOCKER_DRIVER=overlay2"
  ]
  
  [runners.custom_build_dir]
    enabled = true
  [runners.docker]
    tls_verify = false
    tls_cert_path = "/certs"
    image = "docker"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/mnt/user/appdata/gitlab-runner-1/cache:/cache", "/mnt/user/appdata/gitlab-runner-1/builds:/builds", "/mnt/user/appdata/gitlab-runner-1/certs:/certs", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 0
  [[runners.docker.services]]
    name = "docker"
    alias = "docker"

Issues

The only issue I encountered was a growing cache; this is a known issue with KinD, and there is more information on the GitHub issue I created.

 

https://github.com/kubernetes-sigs/kind/issues/2865

 

You can read more about it on GitHub, but it was easily resolved by adding an Unraid User Script that runs the following commands daily:

 

# docker delete dangling images
docker rmi $(docker images --quiet --filter "dangling=true")

# docker prune unused images
docker image prune -f
docker image prune -a --filter "until=4320h" -f

# clear KinD control cache manually
docker exec kind-control-plane /bin/bash -c "crictl rmi --prune"

 

The "clear KinD control cache" part is the main one, but because gitlab-runner rebuilds Docker images on every deploy, I believe the Unraid Docker image store fills up with unused layers over time, so I clean that up too.

 

Conclusion

As you can see, this is definitely not a setup for the faint of heart, and probably not one I would recommend for anyone else unless you understand at least the plumbing. But it works really well. I am able to commit changes, GitLab will deploy my container to the cluster, and I'm able to access my custom codebase running on my Unraid cluster. Using `k9s` I can shell into my container. Also worth noting: I don't have a fancy cluster setup, I just needed one pod to run my app. Inside that pod I have enough CPU allocated for multiple processes doing different things.

 

I should say, after all of this, I really am not very experienced with Kubernetes at all. I just needed a way to run a Docker container on my Unraid. I didn't like the other solutions, such as a separate Docker container that checks whether a new image was pushed to the registry and then updates the running one. I also didn't want to use a VM, because this is exactly what Kubernetes is designed for, with the benefit of no VM overhead. Although it's a really involved setup, it does work.

 

NOTE: this really wasn't meant as a tutorial. If you're not able to figure out what to do from this, just ask questions and I'm happy to answer them. If there's enough interest, I could maybe do a better walkthrough tutorial. Thanks for reading!

 

EXPERTS

I would like some additional things out of this setup. I've currently been looking into how to do a maintenance window (to bring down my app). I also want more control over the networking: because KinD uses the `kind` Docker network, none of my other Docker containers on Unraid can communicate with it directly using the Docker container name, as you would on a Docker custom network. I ended up having to use my Unraid's LAN IP to get the cluster node to use a Postgres database I run in an Unraid Docker container. If anyone has any thoughts on these, please share!
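
For the maintenance window, the simplest thing I can think of is scaling the Deployment down and back up; for the networking question, attaching the other container to the `kind` Docker network might be worth exploring (both untested here; the Postgres container name is a placeholder):

# maintenance window: take the app down, then bring it back later
kubectl scale deployment my-first-app --replicas=0
kubectl scale deployment my-first-app --replicas=1

# networking idea: attach the Postgres container to the `kind` network so the
# node container can reach it by name instead of the Unraid LAN IP
# (whether pod DNS also resolves that name depends on CoreDNS forwarding)
docker network connect kind my-postgres-container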


@gvkhna Thank you for that lengthy writeup! That will probably help many people extensively.

However, I have a question about your config. Am I understanding correctly that you recreate your cluster (and more importantly your database, e.g. etcd) every time Unraid boots? That is what I would like to avoid.

 

Thanks!


I don't believe so; in this setup, I don't do anything custom at boot. I'm pretty sure KinD takes care of all of that: the KinD container starts on its own with Docker and basically starts the cluster.

But I really don't know the internals of what's happening at boot and wouldn't be able to tell you for sure.



I use KinD; it works fine. It's a bit of an annoyance to set up and use, but I have had it running for almost two years, so it's quite stable.

 

Yes, only if the community requests it, but many newer applications are made up of multiple Docker containers, and setup/management is a pain. That's what k8s solves, and it has uses in home labs. Not sure why it's not more requested.

