gvkhna

Everything posted by gvkhna

  1. I use KinD, and it works fine; it's a bit of an annoyance to set up and use, but I have had it running for almost 2 years so it's quite stable. Yes, only if the community requests it, but many newer applications are multiple docker containers and their setup/management is a pain. That's what k8s solves, and it has uses in home labs. Not sure why it's not more requested.
  2. I may do this, but after a couple of years I may switch to TrueNAS SCALE because of this issue. I would rather not switch to a VM; it's a heavier hammer than I would like for the job.
  3. I used another PC, yes: I plugged it into the motherboard of a server running Windows, did the install, took it out, and put it into the Unraid server. Still working!
  4. I'll give this a try and report back. My suspicion is it's not writing anything to cgroups. It's all about getting the right environment for systemd to shut up and load; systemd is failing without the right conditions. As far as I've also read, cgroup2 changes the cgroup namespace default from host to private, so that may have some bearing on the issue. Were you able to get the container working? Can you post your flags for confirmation? That would be helpful. I'll update the readme etc.
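If it helps to compare setups, here are a couple of checks I'd run; a hedged sketch, assuming the container is named mullvadvpn like mine (adjust the name to yours):

docker inspect -f '{{.HostConfig.CgroupnsMode}}' mullvadvpn   # prints host or private
docker exec mullvadvpn cat /proc/1/cgroup                     # shows which cgroup hierarchy systemd (PID 1) actually sees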
  5. @ich777 I just got it working; you can see it in the latest post on that topic. I had to set cgroupns host and mount /sys/fs/cgroup RW, which is a little suspect, but it is working so I'm not too bothered. Any issues you think I should look out for?
  6. After some fiddling, this is how I got it working. I'll update if I have any issues later on.

--name='mullvadvpn' --net='internal' --ip='172.22.251.251' --privileged=true \
  -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Unraid" -e HOST_CONTAINERNAME="mullvadvpn" \
  -e 'VPN_INPUT_PORTS'='8080,8888,9118' -e 'VPN_ALLOW_FORWARDING'='true' -e 'MICROSOCKS_ENABLE'='true' \
  -e 'DEBUG'='true' -e 'MICROSOCKS_AUTH_NONE'='true' -e 'TINYPROXY_ENABLE'='true' \
  -l net.unraid.docker.managed=dockerman -l net.unraid.docker.icon='https://mullvad.net/apple-touch-icon.png' \
  -p '8080:8080/tcp' -p '9118:9118/tcp' -p '8888:8888/tcp' \
  -v '/mnt/user/appdata/mullvadvpn/etc-mullvadvpn/':'/etc/mullvad-vpn/':'rw' \
  -v '/mnt/user/appdata/mullvadvpn/custom-init.d/':'/etc/custom-init.d':'ro' \
  -v '/mnt/user/appdata/mullvadvpn/var-cache/':'/var/cache/mullvad-vpn':'rw' \
  -v '/sys/fs/cgroup':'/sys/fs/cgroup':'rw' \
  --cgroupns host --security-opt seccomp=unconfined \
  --tmpfs /tmp --tmpfs /run --tmpfs /run/lock \
  --restart=always --log-opt max-size=1m --ulimit nofile=80000:90000 \
  'ghcr.io/gvkhna/docker-mullvadvpn'

Key: --cgroupns host, and /sys/fs/cgroup mounted RW. This is on unraidcgroup2.
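For anyone copying these flags, a couple of hedged sanity checks I'd run once the container is up (assuming the mullvadvpn container name above):

docker exec mullvadvpn systemctl is-system-running   # "running" or "degraded" means systemd actually booted
docker exec mullvadvpn mullvad status                # confirms the mullvad daemon is reachable from the CLI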
  7. I did that, and it got me in the right direction, but this container would not start with that error. Yes, the container is running privileged. The container is the mullvadvpn container I linked in my last message:
  8. It looks like I'm having issues related to this and to not being able to set a "hybrid" cgroup setup. This seems to be an issue mostly with Docker, something Podman doesn't exhibit, so I'm pretty annoyed. I set the following run flags on this container:

--cgroup-parent=docker.slice --cgroupns private --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --tmpfs /run/lock --restart=always --log-opt max-size=1m --ulimit nofile=90000:90000

with no volume mount for /sys/fs/cgroup, but am getting the following error:

Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted

It seems setting systemd.unified_cgroup_hierarchy=0 is the solution, but since that is not possible in Unraid, it's unclear what to do. As well, this kind of setup is getting extremely convoluted just to be able to run systemd in a container, so I'm starting to think again that it's probably not worth the headache. @ich777 Do you have any idea what to do here? I see you're following this type of issue on GitHub for LXC as well?
  9. Following up on this. I changed to unraidcgroup2 as specified here, by adding unraidcgroup2 to syslinux.cfg. At first the container wouldn't start; I was receiving these errors:

Failed to create control group inotify object: Too many open files
2023-05-04T19:03:57.371264570Z Failed to allocate manager object: Too many open files

which I was able to solve by running the following command:

sysctl fs.inotify.max_user_instances=512

Now I'm getting the following:

2023-05-04T19:10:46.161263036Z Failed to create /init.scope control group: Read-only file system
2023-05-04T19:10:46.161265019Z Failed to allocate manager object: Read-only file system
2023-05-04T19:10:46.161266536Z [!!!!!!] Failed to allocate manager object.

So it looks like I'm reproducing the errors everyone else is getting. Will report back; saving this for recollection purposes in case anyone else has these issues.
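Note that a sysctl change like this doesn't survive an Unraid reboot, so my understanding is you'd want to re-apply it at boot, e.g. from /boot/config/go or a User Scripts script set to run at first array start; a rough sketch:

# append to /boot/config/go (or a User Script that runs at array start)
sysctl fs.inotify.max_user_instances=512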
  10. Ok great, thank you. That is the only low-hanging fruit I saw. Let me look into cgroups further; I also have limited time for a bit. But at least it's an interesting problem. 👍
  11. I hear you. Honestly it's a crapshoot from my end, as I can't reproduce it without more information about what is going on. I still prefer the setup of systemd running the mullvad deb out of the box, because it's what they expect, instead of a custom service setup that could break in the future. Running systemd in docker is full of issues; if Unraid switched to podman, I hear the situation would be a lot better. I'll look into this as I get time as well; I'd like to understand cgroups better anyway. Please try just starting/running this container; I'm curious whether it starts and what the output is. It has instructions about some tmpfs folders that systemd needs, and I'm curious if that has any impact. https://github.com/bdellegrazie/docker-ubuntu-systemd
  12. Related to cgroups here: @BiGBaLLA Can you run `docker info` and at least state what your cgroups version is? Mine is the following:
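If it's easier than pasting the whole output, something like this shows just the cgroup bits (the --format field names assume a reasonably recent Docker release):

docker info --format 'cgroup driver: {{.CgroupDriver}}, cgroup version: {{.CgroupVersion}}'
# or simply:
docker info | grep -i cgroup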
  13. My `docker info` shows I'm running cgroup v1. Also, my `docker stats` shows 0% CPU for everything. I've been trying to figure this out for a while but gave up, even though I would like to see the CPU usage of containers. I believe this could be related, and it's a starting point for debugging the potential issues others are having with this container as well here: I see in Unraid's release notes a mention of enabling cgroup v2 with a "syslinux append line." Can someone describe this procedure in more detail? I'm not familiar with what exactly that means. Found: https://unraid-dl.sfo2.cdn.digitaloceanspaces.com/stable/unRAIDServer-6.11.1-x86_64.txt I'm on the latest Unraid version, 6.11.5. Thank you.
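For anyone else searching for this later, my understanding of the procedure from those release notes: edit the default boot stanza in /boot/syslinux/syslinux.cfg (it can also be edited from the webUI on the Flash device page, under Syslinux Configuration) and add the unraidcgroup2 keyword to the append line, then reboot. Roughly, as a sketch to double-check against the release notes, the default entry would end up looking like:

label Unraid OS
  menu default
  kernel /bzimage
  append unraidcgroup2 initrd=/bzroot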
  14. Here are my run flags with Unraid docker:

docker run -d --name='mullvadvpn' --net='internal' --ip='172.22.251.251' --privileged=true \
  -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Unraid" -e HOST_CONTAINERNAME="mullvadvpn" \
  -e 'VPN_INPUT_PORTS'='8080,8888,9118' -e 'VPN_ALLOW_FORWARDING'='true' -e 'MICROSOCKS_ENABLE'='true' \
  -e 'DEBUG'='true' -e 'MICROSOCKS_AUTH_NONE'='true' -e 'TINYPROXY_ENABLE'='true' \
  -l net.unraid.docker.managed=dockerman -l net.unraid.docker.icon='https://mullvad.net/apple-touch-icon.png' \
  -p '8080:8080/tcp' -p '9118:9118/tcp' -p '8888:8888/tcp' \
  -v '/sys/fs/cgroup':'/sys/fs/cgroup':'ro' \
  -v '/mnt/user/appdata/mullvadvpn/etc-mullvadvpn/':'/etc/mullvad-vpn/':'rw' \
  -v '/mnt/user/appdata/mullvadvpn/custom-init.d/':'/etc/custom-init.d':'ro' \
  -v '/mnt/user/appdata/mullvadvpn/var-cache/':'/var/cache/mullvad-vpn':'rw' \
  --restart=always --log-opt max-size=1m 'ghcr.io/gvkhna/docker-mullvadvpn'
e3d7185ce10cc32f4f3b3fa56dc8230e39e26cc5ef98d1557b5513cacf7a750b

Let me look into your log; I have/had issues with cgroups on Unraid. It's not well documented/supported, and it's possible I made some out-of-band changes to my Unraid config (not in the stock version) to get cgroups working. Mine still don't work correctly, but the container does start and work fine. Also, please try enabling the DEBUG=true flag; it's unrelated, but it could help if any additional issues crop up. As well, I'll look into MICROSOCKS_ENABLE being required as an option; hopefully that's something a simple template could solve, but I don't believe it is actually required.
  15. I appreciate the help trying to debug issues, but it's concerning that none of that is reproducible for me; in fact I have zero issues running the latest Unraid and the latest docker. Let me restart my container and check my flags. It would also be great to get clarity on your entire run command, since that seems to be the variable. The other consideration could be the setup files. I messaged Squid about the community template forum access but so far have not heard back.
  16. Here are the startup logs for my container. I do believe it's an issue related to the read-only filesystem, and I'm not sure how you have set up the container. It's ridiculously easy to set up in Unraid, so I don't think I will have time to build a template for a while. In your fork, I don't think you want to change ubuntu:latest to ubuntu:focal, but I'm not sure what you're trying to accomplish. Also, I see you disabled the microsocks package; you may want to look into the systemd unit files for microsocks, that could be breaking.

Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/net_prio: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists
Welcome to Ubuntu 22.04.2 LTS!
Queued start job for default target Graphical Interface.
[ OK ] Reached target Path Units.
[ OK ] Reached target Slice Units.
[ OK ] Reached target Swaps.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Listening on Journal Socket.
  17. You'll need to post your flags; I'm not sure why you have a read-only filesystem. You'll need the privileged flag as well, I believe. I can put together a community template for Unraid soon. Glad to see someone finds it useful (hopefully once it works for you), cheers!
  18. Hello everyone, I've been searching for some time for a native docker container with the mullvad-cli client. I have not been able to find one; most community maintainers support a WireGuard setup, which requires pinning to one specific Mullvad server (AFAIK). I would prefer to designate a country/city and let the Mullvad client connect to any suitable server out of the pool. Of course it should support features such as the kill switch etc. Details: Simply installing mullvad-cli.deb into an Ubuntu docker container doesn't work. I found a docker container that adds systemd (which mullvad-cli relies on) into the Ubuntu container, courtesy of gh/j8s. I installed mullvad.deb into that and it does work. There's not much to it, but I'm contacting linuxserver.io to see if they would be able to take this "upstream" and include all of the standard features and support their community provides for their docker containers, including plugins/configuration. I'm hoping this post helps generate interest; I've had some issues with just using WireGuard to connect to Mullvad, and it motivated me to find a way to get their native client into a docker container. You can take a look at the simple dockerfile here: https://github.com/gvkhna/docker-mullvadvpn
  19. I don't believe so; in this setup, I don't do anything custom at boot. I'm pretty sure KinD takes care of all of that: the KinD container starts on its own with Docker and basically starts the cluster. But I really don't know the internals of what's happening at boot and couldn't tell you for sure exactly.
  20. Happily! It did take some time to set up and figure out maintenance, as I encountered a growing-size issue I'll discuss below, but now I think it's running really well for my simple use cases. I haven't spent too much time with k8s in general, so I would love some help/tips on a couple of things.

Setup

I made a folder `/mnt/user/appdata/cluster` where everything lives. Inside, I believe I simply used `wget` to download the following binaries: kind, kubectl, k9s, helm, etc. I also update them to the latest versions by manually downloading the new binaries there with `wget`. I have the following bash scripts to facilitate the overall setup/maintenance, in numbered order of how they would be used in a new cluster creation.

Workflow

I cd to /mnt/user/appdata/cluster and run `source 0-setenv.sh`, which mainly changes the HOME of the shell so that k9s and kubectl work etc. Cluster creation I only run once at the start, but in case I need to recreate the cluster with additional port mappings etc., this order should work. When changing env vars for the cluster, I re-run `2-add-app-envs.sh` to delete/re-apply the env vars.

0-setenv.sh

#
# Run `source setenv.sh`
#
export HOME=$(pwd)
export PATH="$HOME:${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
export KUBECONFIG=$(pwd)/.kube/config.yml
export EDITOR=nano

1-create-cluster.sh

#!/bin/bash
kind create cluster --config=cluster.yaml
helm repo add grafana https://grafana.github.io/helm-charts
helm install promtail grafana/promtail --set config.lokiAddress=http://MY-UNRAID-RUNNING-LOKI-DOCKER-IP:3100/loki/api/v1/push

2-add-app-envs.sh

kubectl delete secret my-app-env
kubectl apply -f my-app-env.yml

Config

cluster.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /mnt/user/my-apps-storage
    containerPath: /my-apps-storage
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: HostToContainer
  extraPortMappings:
  - containerPort: 30150
    hostPort: 30150
    # optional: set the bind address on the host
    # 0.0.0.0 is the current default
    listenAddress: "0.0.0.0"
    # optional: set the protocol to one of TCP, UDP, SCTP.
    # TCP is the default
    protocol: TCP
  - containerPort: 30151
    hostPort: 30151
    # optional: set the bind address on the host
    # 0.0.0.0 is the current default
    listenAddress: "0.0.0.0"
    # optional: set the protocol to one of TCP, UDP, SCTP.
    # TCP is the default
    protocol: TCP

I didn't find a way to change a cluster config once it's already created, which is very annoying; if someone has tips on this it would be greatly appreciated (a rough recreate sketch follows a bit further down). Here I mapped Unraid's share my-apps-storage into the cluster, as well as multiple ports. The idea being that since I can't change the cluster later, I added multiple port mappings at the beginning. This is just 2 port mappings, as I've distilled and scrubbed my actual setup, so you'll need to go through each file and personalize the setup a bit; you can add more. As well, I made one share, my-apps-storage, which has a sub-folder for each app, since the cluster creation lets you map one folder. So each k8s node is given a persistent volume claim to a folder inside of this mapped folder.
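An aside on the "can't change a cluster config once created" point above: as far as I know, the only option with KinD is to delete the cluster and recreate it from the edited config, then re-apply everything into it. A rough sketch, assuming the default cluster name and the file names used in this post (and that you can tolerate the downtime):

# tear down the existing kind cluster (this wipes cluster state)
kind delete cluster
# recreate from the edited cluster.yaml and re-install promtail
bash 1-create-cluster.sh
# re-apply secrets and workloads
kubectl apply -f my-app-env.yml
kubectl apply -f deployment.yaml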
I then use these port mappings with my traefik container to map this kubernetes node to my domain.

my-app-env.yml

apiVersion: v1
kind: Secret
metadata:
  name: my-app-env
type: Opaque
stringData:
  MY_APP_ENV_VAR_KEY: https://example.com
  # COMMENTED_APP_ENV_VAR_KEY: COMMENTED_VALUE

This is just the env vars I want in the cluster for my app.

Deployment (GitLab)

deployment.yaml (in gitlab repo)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-first-storage
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1000Gi
  hostPath:
    path: /my-apps-storage/first-app
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-first-storage
spec:
  volumeName: pv-first-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1000Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-first-app
  labels:
    app: my-first-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-first-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: my-first-app
    spec:
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: pvc-first-storage
      containers:
        - name: my-first-apps-repo-name
          image: registry.example-domain.com/user/repo:main
          imagePullPolicy: Always
          resources:
            requests:
              memory: '8Gi'
              cpu: '4'
            limits:
              memory: '10Gi'
              cpu: '6'
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: storage
              mountPath: /storage
          envFrom:
            - secretRef:
                name: my-app-env
      imagePullSecrets:
        - name: gitlab-deploy-token
---
apiVersion: v1
kind: Service
metadata:
  name: my-first-app
spec:
  type: NodePort
  ports:
    - name: http
      nodePort: 30150
      port: 8080
  selector:
    app: my-first-app

Here you can see I mapped one of the extraPortMappings to my app's HTTP port 8080. Using a PersistentVolume and PersistentVolumeClaim, I'm able to map a folder inside of my Unraid share, my-apps-storage/my-first-app, into the k8s node as /storage inside of the container. I also map my app's env vars from the cluster here; you can see that at the container's envFrom.

Project Repo

.gitlab-ci.yml (in gitlab repo)

.kube-context:
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi

stages:
  - build
  - deploy
  - notify

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  GITLAB_DEPLOY_TOKEN_USERNAME: $GITLAB_DEPLOY_TOKEN_USERNAME
  GITLAB_DEPLOY_TOKEN_PASSWORD: $GITLAB_DEPLOY_TOKEN_PASSWORD
  NODE_ENV: $NODE_ENV
  PUBLIC_HOSTNAME: $PUBLIC_HOSTNAME

build:
  stage: build
  script:
    - printenv | sort
    - time docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - 'time docker build --progress=plain --pull --cache-from $IMAGE_TAG --tag $IMAGE_TAG --build-arg PUBLIC_HOSTNAME .'
    - time docker push $IMAGE_TAG

deploy:
  extends: [.kube-context]
  stage: deploy
  image: 'registry.gitlab.com/gitlab-org/cluster-integration/cluster-applications:latest'
  environment:
    name: production
  script:
    - gl-ensure-namespace gitlab-managed-apps
    # - gl-helmfile --file $CI_PROJECT_DIR/helmfile.yaml apply --suppress-secrets
    - kubectl delete secret gitlab-deploy-token || true
    - kubectl create secret docker-registry gitlab-deploy-token --docker-server=$CI_REGISTRY --docker-username=$GITLAB_DEPLOY_TOKEN_USERNAME --docker-password=$GITLAB_DEPLOY_TOKEN_PASSWORD [email protected]
    - kubectl apply -f deployment.yaml
    # - gl-helmfile --file $CI_PROJECT_DIR/helmfile.yaml apply --suppress-secrets
    - kubectl rollout restart deploy my-first-app
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

This is pretty much my exact gitlab-ci.yml; I've even left some commented statements in, because someone more experienced might be able to help integrate helm better. I added helm at cluster creation for Loki/Promtail so I can ingest the app's logs. My GitLab setup is pretty simple: gitlab-ce running in a container, mapped to my domain through traefik/cloudflared, and gitlab-runner running in a separate docker container, all on my Unraid. Here's my current gitlab-runner config. Access to Docker-in-Docker is needed to build docker images, and I encountered several issues getting this working (including one recently), so if anyone wants to take a second look they can. I had to map /var/run/docker.sock to get it to work correctly.

config.toml (in gitlab-runner)

concurrent = 1
check_interval = 0

[session_server]
  #listen_address = "0.0.0.0:8093" # listen on all available interfaces on port 8093
  #listen_address = "[::]:8093"
  advertise_address = "gitlab-runner-1:8093"
  session_timeout = 1800

[[runners]]
  name = "docker-runner-1"
  url = "https://my-gitlab-instance.com"
  token = "MY_GITLAB_TOKEN"
  executor = "docker"
  builds_dir = "/builds"
  cache_dir = "/cache"
  environment = [
    "GIT_DEPTH=10",
    "GIT_CLONE_PATH=$CI_BUILDS_DIR/$CI_CONCURRENT_ID/$CI_PROJECT_NAME",
    "DOCKER_TLS_CERTDIR=/certs",
    "DOCKER_DRIVER=overlay2"
  ]
  [runners.custom_build_dir]
    enabled = true
  [runners.docker]
    tls_verify = false
    tls_cert_path = "/certs"
    image = "docker"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/mnt/user/appdata/gitlab-runner-1/cache:/cache", "/mnt/user/appdata/gitlab-runner-1/builds:/builds", "/mnt/user/appdata/gitlab-runner-1/certs:/certs", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 0
    [[runners.docker.services]]
      name = "docker"
      alias = "docker"

Issues

The only issue I encountered was a growing-cache issue; this is a known issue with KinD and I have more information in the github issue I created: https://github.com/kubernetes-sigs/kind/issues/2865 You can read more about it there, but it was easily resolved by adding an Unraid User Script that runs the following daily:

# docker delete dangling images
docker rmi $(docker images --quiet --filter "dangling=true")
# docker prune unused images
docker image prune -f
docker image prune -a --filter "until=4320h" -f
# clear KinD control cache manually
docker exec kind-control-plane /bin/bash -c "crictl rmi --prune"

Mainly the clear-KinD-control-cache part, but because gitlab-runner rebuilds docker images on deploy, I believe the Unraid docker image store fills up with unused layers over time, so I clean that up too.
Conclusion

As you can see, this is definitely not a setup for the faint of heart, and probably not one I would recommend for anyone else unless you understand at least the plumbing. But it works really well: I commit changes, GitLab deploys my container to the cluster, and I'm able to access my custom codebase running on my Unraid cluster. Using `k9s` I can shell into my container. Also worth noting, I don't have a fancy cluster setup; I just needed one pod to run my app, and inside of that pod I have enough cpu allocated for multiple processes doing different things. I should say, after all of this, I really am not very experienced with kubernetes at all. I just needed a way to run a docker container on my Unraid. I didn't like the other solutions, such as a separate docker container that checks whether a new image was pushed to the registry and then updates it. I also didn't want to use a VM, because this is exactly what kubernetes is designed for, and without the VM overhead. Although a really involved setup, it does work.

NOTE: this really wasn't meant as a tutorial; if you're not able to understand what to do from this, just ask questions and I'm happy to answer them. If there's enough interest I could maybe do a better walkthrough tutorial. Thanks for reading!

EXPERTS

I would like some additional things out of this setup. I have been looking into how to do a maintenance window (to bring my app down cleanly). I also want more control over the networking: because KinD uses the `kind` docker network, all of my other docker containers on Unraid cannot communicate with it directly using the docker container name, as you would in a docker custom network. I ended up having to use my Unraid's LAN IP to get the cluster node to reach a pg database I run in an Unraid docker. If anyone has any thoughts on these, please share! (Two rough sketches follow below.)
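Two hedged sketches for the EXPERTS questions, not tested against this exact setup. For a crude maintenance window, scaling the Deployment to zero replicas stops the pod and scaling back up restores it (using the my-first-app name from the deployment above). For the networking gap, attaching an existing Unraid container to KinD's `kind` docker network should at least let the kind node reach it by container name instead of the host LAN IP; whether pods resolve that name depends on how CoreDNS forwards lookups, so treat it as a starting point. The container name my-postgres below is hypothetical.

# maintenance window: stop the app's pod, then bring it back afterwards
kubectl scale deployment my-first-app --replicas=0
# ...do maintenance...
kubectl scale deployment my-first-app --replicas=1

# networking: attach an existing docker container to the kind network
docker network connect kind my-postgres
# and detach again if needed:
docker network disconnect kind my-postgres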
  21. I looked but can't find the receipt for the one I purchased. If I remember correctly, I ended up going with the exact model number mentioned in the LTT video by Anthony. It was pricey, but I haven't had any issues, so it's well worth the cost if it lasts many years, which I'm hoping. There's a good reddit thread on the topic with some more info.
  22. Installed the USB DOM, works great.
  23. *sigh* I guess so. It looks like generic FIDO2 keys can be had on Amazon for $22. Not a bad option; the problem flash has is yield, so most USB flash keys are bargain-bin flash. I doubt something as simple as a FIDO key has such issues.
  24. Fair point. I just looked, and Yubikeys, to my surprise, cost almost $50, but their sheer longevity considering their simplicity might be worth it.
  25. This is a good point; many motherboards do have internal USB headers, but most of them go unused, as those servers will probably netboot. The general lack of options is my gripe with the situation: having to use something like a Yubikey or a hardware PCIe card for licensing is relatively fine, but constraining boot options makes it more inconvenient to administer your setup however makes sense to you, IMHO. I guess it's a small issue, but something like a Yubikey for licensing, while allowing booting off of whatever makes sense to you (even netbooting), would be great!