Mullvad-cli Docker Container


gvkhna


Hello everyone,

 

I've been searching for some time for a native Docker container with the mullvad-cli client. I haven't been able to find one; most community maintainers support a WireGuard setup, which (AFAIK) requires pinning to one specific Mullvad server. I would prefer to designate a country/city and let the Mullvad client connect to any suitable server from the pool. It should, of course, support features such as the kill switch.
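For anyone unfamiliar with the native client, the workflow I'm after looks roughly like this. Treat these commands as illustrative: they need the mullvad daemon running, and exact subcommand names vary between client versions (older releases used e.g. `always-require-vpn` instead of `lockdown-mode`):

```shell
# Illustrative mullvad-cli workflow (account number is a placeholder):
mullvad account login 1234123412341234
mullvad relay set location se got   # constrain to a country/city, not one server
mullvad lockdown-mode set on        # kill switch: block traffic outside the tunnel
mullvad auto-connect set on
mullvad connect
mullvad status
```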

 

Details:

 

Simply installing the mullvad-cli .deb into an Ubuntu Docker container doesn't work. I found a Docker container that adds systemd (which mullvad-cli relies on) to the Ubuntu image, courtesy of GitHub user jrei. I installed the Mullvad .deb into that, and it does work.
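The approach can be sketched as a short Dockerfile. This is a hedged sketch rather than the repo's actual file; the jrei base-image tag and the Mullvad download URL are assumptions on my part:

```dockerfile
# Sketch only: start from a systemd-enabled Ubuntu image and install the
# Mullvad .deb into it. Tag and URL below are illustrative, not pinned.
FROM jrei/systemd-ubuntu:22.04

RUN apt-get update \
 && apt-get install -y curl ca-certificates \
 && curl -fsSL -o /tmp/mullvad.deb https://mullvad.net/download/app/deb/latest \
 && apt-get install -y /tmp/mullvad.deb \
 && rm /tmp/mullvad.deb

# systemd must be PID 1 so the mullvad-daemon unit starts at container boot
CMD ["/lib/systemd/systemd"]
```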

 

There's not much to it, but I'm contacting linuxserver.io to ask whether they would be able to take this "upstream" and include all of the standard features and community support they provide for their Docker containers, including plugins/configuration.

 

I'm hoping this post helps generate interest. I've had some issues with using plain WireGuard to connect to Mullvad, which motivated me to find a way to get their native client into a Docker container.

 

You can take a look at the simple Dockerfile here: https://github.com/gvkhna/docker-mullvadvpn

  • 3 months later...

Hi.

I spent a whole month trying to do the same without significant progress, so I am most grateful you have done it.

I must say I can't launch it yet. Do you use Docker or Podman? My Docker setup returns:


Detected virtualization docker.
Detected architecture x86-64.
Welcome to Ubuntu 22.04.2 LTS!
Failed to create /init.scope control group: Read-only file system
Failed to allocate manager object: Read-only file system
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...
systemd 249.11-0ubuntu3.9 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)

 


You’ll need to post your flags.

 

I’m not sure why you have a read only file system.


You’ll need a privileged flag as well I believe.
 

I can put together a community template for unraid soon. Glad to see someone finds it useful (hopefully once it works for you), cheers!


Here are my startup logs for the container. I do believe it's an issue related to the read-only filesystem, and I'm not sure how you have set up the container. It's ridiculously easy to set up in unraid, so I don't think I will have time to set up a template for a while.

 

In your fork, I don't think you want to change ubuntu:latest to ubuntu:focal, but I'm not sure what you're trying to accomplish. Also, I see you disabled the microsocks package; you may want to look into the systemd unit files for microsocks, as that could be what's breaking.

 

Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/net_prio: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists

Welcome to Ubuntu 22.04.2 LTS!

Queued start job for default target Graphical Interface.
[  OK  ] Reached target Path Units.
[  OK  ] Reached target Slice Units.
[  OK  ] Reached target Swaps.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket.

 

Edited by gvkhna

I appreciate the work on this container, as I have been thinking of something like this for a while, and running the GUI in a VM and moving everything there sounds counterproductive. Having spent some time with it, I wanted to add a little insight from my debugging that may help.

 

Since you're using ubuntu:latest, I'm wondering if the latest LTS has shifted since you created this container. Have you tried building without cache recently? I ask because if I build and run jrei's container, I can only get 18.04 to run. Running the following will not start the container:

 

docker run -d --name systemd-ubuntu --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:22.04
docker run -d --name systemd-ubuntu --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:20.04

 

But running 18.04, it starts. So something has to be different between Ubuntu versions.

 

docker run -d --name systemd-ubuntu --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:18.04
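One difference worth checking is the host's cgroup hierarchy: Ubuntu 22.04 ships systemd 249, which defaults to the unified (v2) hierarchy, while 18.04's systemd 237 runs legacy/hybrid. A quick, generic way to see what the host mounts (not specific to this container):

```shell
# Print the filesystem type mounted at /sys/fs/cgroup:
# cgroup2fs -> unified cgroup v2; tmpfs -> legacy/hybrid cgroup v1
stat -fc %T /sys/fs/cgroup
```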

 

When building the Dockerfile, all systemd commands fail because systemd is not running on unraid. You mention a lot about building a community docker template and community maintainers, so I assume you're running this on your unraid host. But if not, that may be where some of the confusion is. If you are in fact running this on your host, are you running systemd on your host? During the build, I get the following errors:

 

Configuration file /usr/lib/systemd/system/coredns.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
Created symlink /etc/systemd/system/multi-user.target.wants/coredns.service -> /usr/lib/systemd/system/coredns.service.
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
Configuration file /usr/lib/systemd/system/microsocks.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
Created symlink /etc/systemd/system/multi-user.target.wants/microsocks.service -> /usr/lib/systemd/system/microsocks.service.
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
Configuration file /usr/lib/systemd/system/mullvad-stdout.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
Created symlink /etc/systemd/system/multi-user.target.wants/mullvad-stdout.service -> /usr/lib/systemd/system/mullvad-stdout.service.
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
Configuration file /usr/lib/systemd/system/tinyproxy.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
Created symlink /etc/systemd/system/multi-user.target.wants/tinyproxy.service -> /usr/lib/systemd/system/tinyproxy.service.
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
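For what it's worth, those build-time messages are two separate things: the symlink creation (`systemctl enable`) succeeds because it needs no running systemd, while the follow-up bus calls fail because PID 1 isn't systemd during `docker build`. What `enable` does on disk is essentially this (demonstrated in a scratch directory so it's safe to run anywhere; `foo.service` is a made-up name):

```shell
# Recreate what `systemctl enable foo.service` does on disk: add a symlink
# into multi-user.target.wants. No running systemd is required for this part.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/systemd/system" \
         "$root/etc/systemd/system/multi-user.target.wants"
touch "$root/usr/lib/systemd/system/foo.service"
ln -s "$root/usr/lib/systemd/system/foo.service" \
      "$root/etc/systemd/system/multi-user.target.wants/foo.service"
readlink "$root/etc/systemd/system/multi-user.target.wants/foo.service"
```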

 

Rebasing the base image to 18.04 produces the same errors, but at least the container starts up, though it 'freezes' with:

 

Failed to insert module 'autofs4': No such file or directory
systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Detected virtualization docker.
Detected architecture x86-64.

Welcome to Ubuntu 18.04.6 LTS!

Set hostname to <containerId>.
Cannot determine cgroup we are running in: No data available
Failed to allocate manager object: No data available
[!!!!!!] Failed to allocate manager object, freezing.

 

Running the container still does not start the app (unless the bind mounts are wrong), as your documentation shifts a little from 'myappdata/mullvadvpn/etc:/etc/mullvad-vpn:rw' to 'appdata/etc-mullvadvpn:/etc/mullvad-vpn:rw', but placing the startup files in both still doesn't start the container. Can you post your full run command after your build? You can omit the left side of your binds for local directories, but I want to ensure the container mounts are correct.

 

I feel like a lot of this is what winterx was trying to do with their fork.

 

On 4/21/2023 at 12:58 PM, gvkhna said:

I’m not sure why you have a read only file system.

 

As for this comment: your documentation (and jrei's) has the cgroup as a read-only bind mount. I would think this is intentional, as I wouldn't want the container modifying it on the host, but that's how it's shown to be set up, and it's why the above error says read-only file system.

 

-v /sys/fs/cgroup:/sys/fs/cgroup:ro 

 

Since I can't run 22.04, but winterx's at least starts: are either of you running the RC build (6.12)? Maybe my kernel is too old on stable 6.11.5?

Edited by BiGBaLLA
Link to comment

I appreciate the help trying to debug the issues, but it's concerning that none of that is reproducible for me; in fact I have zero issues running the latest unraid and latest Docker. Let me restart my container and check my flags. It would also be great to get clarity on your entire run command, since that seems to be the variable. The other consideration could be the setup files.
 

I messaged squid about the community template forum access but so far have not heard back. 


Thanks for the quick response. I did realize that it appears I need to specify the socks environment variable, which previously I had not been providing, as it said it was optional.

 

At least the container started with it, but I'm running into the same thing as above. The main command is below, and you can see all transaction logs from beginning to end in the attachment.

 

docker build --no-cache -t mullvadvpn .

docker run \
  --privileged \
  --name mullvadvpn \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  -v /mnt/user/appdata/mullvadvpn/etc:/etc/mullvad-vpn:rw \
  -v /mnt/user/appdata/mullvadvpn/var:/var/cache/mullvad-vpn:rw \
  -v /mnt/user/appdata/mullvadvpn/custom-init.d:/etc/custom-init.d:ro \
  -e MICROSOCKS_ENABLE='true' \
  mullvadvpn

mull-debug-commands.txt

Edited by BiGBaLLA

Here's my run flags with unraid docker

 

docker run
  -d
  --name='mullvadvpn'
  --net='internal'
  --ip='172.22.251.251'
  --privileged=true
  -e TZ="America/Los_Angeles"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Unraid"
  -e HOST_CONTAINERNAME="mullvadvpn"
  -e 'VPN_INPUT_PORTS'='8080,8888,9118'
  -e 'VPN_ALLOW_FORWARDING'='true'
  -e 'MICROSOCKS_ENABLE'='true'
  -e 'DEBUG'='true'
  -e 'MICROSOCKS_AUTH_NONE'='true'
  -e 'TINYPROXY_ENABLE'='true'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://mullvad.net/apple-touch-icon.png'
  -p '8080:8080/tcp'
  -p '9118:9118/tcp'
  -p '8888:8888/tcp'
  -v '/sys/fs/cgroup':'/sys/fs/cgroup':'ro'
  -v '/mnt/user/appdata/mullvadvpn/etc-mullvadvpn/':'/etc/mullvad-vpn/':'rw'
  -v '/mnt/user/appdata/mullvadvpn/custom-init.d/':'/etc/custom-init.d':'ro'
  -v '/mnt/user/appdata/mullvadvpn/var-cache/':'/var/cache/mullvad-vpn':'rw'
  --restart=always
  --log-opt max-size=1m 'ghcr.io/gvkhna/docker-mullvadvpn'
e3d7185ce10cc32f4f3b3fa56dc8230e39e26cc5ef98d1557b5513cacf7a750b

 

Let me look into your log. I have/had issues with cgroups on unraid; it's not well documented/supported, and it's possible I made some out-of-band changes to my unraid config to get cgroups working that may not be in the stock version. Although my cgroups don't work correctly, the container does start and work fine.

 

Also, please try enabling the DEBUG=true flag. It's unrelated, but it could help if any additional issues crop up.

 

I'll also look into MICROSOCKS_ENABLE being required as an option. Hopefully that's something a simple template could solve, but I don't believe it is actually required.


I'll map it through unraid and try again, but running a slightly modified version of that on the command line produces the same thing for me. I also have cgroupfs version 1, but I'm going to dig into this tomorrow.

 

I'll map your version into a template to compare 1 to 1 if I don't get anywhere when looking at cgroup info. 

 

Thanks for pointing me in the right direction.  I'll report back.

Edited by BiGBaLLA

I'm really not sure about this one... I spent a lot of time on this and am still stumped. Switching to cgroupfs 2 allows it to start, but it gets caught on the read-only file system; /sys/fs/cgroup/ is mapped on boot based on this setting. I also tried doing this in an LXC container (strictly using the .deb) on Debian, which resulted in the service failing to start; the exception was a missing net_cls. But since net_cls does exist in cgroupfs 1, the daemon starts on boot inside LXC. At this point I don't have a ton of time to look back into this, so I can operate using the GUI/CLI through the LXC container on Debian.

 

Putting everything together: net_cls is probably needed by the app, and most likely wouldn't work under cgroupfs 2, as that controller isn't available there. The errors in v1 hint that it's trying to map the controller on startup, but since it already exists, it's fine; that's most likely why gvkhna hasn't hit the issue. v2 doesn't have those errors, but it still has the same two errors below, which are preventing me from loading it and aren't present above.
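A generic way to see which controllers a host actually exposes (net_cls and net_prio are v1-only controllers with no cgroup v2 equivalent, which is consistent with the theory above):

```shell
# /proc/cgroups lists the v1 controllers known to the kernel (look for
# net_cls); on a unified (v2) host the active controllers are advertised
# separately in cgroup.controllers.
cat /proc/cgroups
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    cat /sys/fs/cgroup/cgroup.controllers
fi
```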

 

Welcome to Ubuntu 22.04.2 LTS!

Cannot determine cgroup we are running in: No data available
Failed to allocate manager object: No data available
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...

 

Docker settings could also be playing a role, as could kernel boot params, based on my reading about someone trying to do something similar. For reference, my GUI and non-GUI startup commands are below.

 

kernel /bzimage
append pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

kernel /bzimage
append pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot,/bzroot-gui

 

If I have some time in a couple weeks I'll look back into this as a whole, but my current setup allows me to do everything I wanted.

 


I hear you. Honestly it's a crapshoot from my end, as I can't reproduce it without more information about what's going on.
 

I still actually prefer the setup of systemd running the Mullvad .deb out of the box, because it's what they expect, instead of a custom service setup that could break in the future. Running systemd in Docker is full of issues; if unraid switched to Podman, I hear the situation would be a lot better. I'll look into this as I get time as well. I'd like to understand cgroups better anyway.
 

Please try just starting/running this container; I'm curious whether it starts (and what the output is). It has instructions about some tmpfs folders that systemd needs, and I'm curious if that has any impact.
 

https://github.com/bdellegrazie/docker-ubuntu-systemd


No dig at you, I really appreciate your help and time! Where I ended up isn't perfect either, as I can't get it to run with cgroupfs 2, so the above for me is on legacy software. I'd equally like to learn more about cgroupfs, specifically how it ties into unraid at boot, but like you mentioned earlier, the information seems to be few and far between.

 

That container was a little outdated (I know it's not yours), so I had to modify the Dockerfile to swap "python python-apt" with "python-apt-doc python3-apt python-apt-common 2to3 python-is-python3". Unrelated, but I'm surprised there is still support for moving python2 to 3.

 

But ultimately I get the same error. The errors on startup are the same, and it still can't determine the cgroup, which seems to be my main issue in comparison to yours running.

 

systemd 252.5-2ubuntu3 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Detected virtualization docker.
Detected architecture x86-64.
Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/cpu: File exists
Failed to create symlink /sys/fs/cgroup/net_prio: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists

Welcome to Ubuntu 23.04!

Cannot determine cgroup we are running in: No data available
Failed to allocate manager object: No data available
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...

 

But here are the full logs, from build through run.

 

ubuntu-systemd.txt


Following up on this. I changed to unraidcgroup2 by adding unraidcgroup2 to syslinux.cfg.

 

Now the container wouldn't start. I started receiving these errors:

 

Failed to create control group inotify object: Too many open files
2023-05-04T19:03:57.371264570Z Failed to allocate manager object: Too many open files

Which I was able to solve by running the following command:

sysctl fs.inotify.max_user_instances=512
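The current limit can be read straight from /proc, which is handy for checking whether the fix took (a generic check; note the sysctl change above only lasts until reboot, and persisting it on unraid would need to go somewhere like the go file, which is an assumption on my part):

```shell
# Read the per-user inotify instance limit; the stock default of 128 is
# easily exhausted when systemd runs inside containers.
cat /proc/sys/fs/inotify/max_user_instances
```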

Now I'm getting the following: 

2023-05-04T19:10:46.161263036Z Failed to create /init.scope control group: Read-only file system
2023-05-04T19:10:46.161265019Z Failed to allocate manager object: Read-only file system
2023-05-04T19:10:46.161266536Z [!!!!!!] Failed to allocate manager object.

So it looks like I'm reproducing the errors everyone else is getting. Will report back; saving this for recollection purposes in case anyone else has these issues.


After some fiddling, this is how I got it working. I'll update if I have any issues later on.

 

docker run
  --name='mullvadvpn'
  --net='internal'
  --ip='172.22.251.251'
  --privileged=true
  -e TZ="America/Los_Angeles"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Unraid"
  -e HOST_CONTAINERNAME="mullvadvpn"
  -e 'VPN_INPUT_PORTS'='8080,8888,9118'
  -e 'VPN_ALLOW_FORWARDING'='true'
  -e 'MICROSOCKS_ENABLE'='true'
  -e 'DEBUG'='true'
  -e 'MICROSOCKS_AUTH_NONE'='true'
  -e 'TINYPROXY_ENABLE'='true'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://mullvad.net/apple-touch-icon.png'
  -p '8080:8080/tcp'
  -p '9118:9118/tcp'
  -p '8888:8888/tcp'
  -v '/mnt/user/appdata/mullvadvpn/etc-mullvadvpn/':'/etc/mullvad-vpn/':'rw'
  -v '/mnt/user/appdata/mullvadvpn/custom-init.d/':'/etc/custom-init.d':'ro'
  -v '/mnt/user/appdata/mullvadvpn/var-cache/':'/var/cache/mullvad-vpn':'rw'
  -v '/sys/fs/cgroup':'/sys/fs/cgroup':'rw'
  --cgroupns host
  --security-opt seccomp=unconfined
  --tmpfs /tmp
  --tmpfs /run
  --tmpfs /run/lock
  --restart=always
  --log-opt max-size=1m
  --ulimit nofile=80000:90000 'ghcr.io/gvkhna/docker-mullvadvpn'

 

Key: --cgroupns host, and /sys/fs/cgroup RW

 

This is on unraidcgroup2
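Distilling the command above, the systemd-critical pieces on a cgroup v2 host appear to be just these (a sketch; every other flag in the full command is app-specific):

```shell
# Minimal flags for systemd-as-PID-1 under Docker on a cgroup v2 host:
# writable cgroup mount, host cgroup namespace, and the tmpfs mounts
# systemd expects at /tmp, /run and /run/lock.
docker run -d --privileged \
  --cgroupns host \
  -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
  --tmpfs /tmp --tmpfs /run --tmpfs /run/lock \
  ghcr.io/gvkhna/docker-mullvadvpn
```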


Thanks! I backed up first. As much as I didn't want to give it write permission, it clearly needs to modify net_cls, which is strange because it works fine in a cgroup v1 LXC container but not in v2, so I can't imagine what it's writing to disk. I didn't try v1 with write on cgroups.

 

From my testing, you can probably remove some of those settings. I didn't mess with the tmpfs mounts, mainly because if it's writing a lot I'd prefer to write to RAM over disk, but these two could be removed for me:

  • --cgroupns host

  • --security-opt seccomp=unconfined

 


I’ll give this a try and report back. 
 

My suspicion is it's not writing anything to cgroups; it's all about having the right environment for systemd to shut up and load. systemd fails without the right conditions.
 

From what I've also read, cgroup v2 changes the default cgroup namespace from host to private, so that may have some bearing on the issue.
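That namespace difference is easy to observe: each process's cgroup namespace shows up as an inode under /proc, and two processes in the same namespace report the same inode (a generic check, runnable anywhere):

```shell
# Print the cgroup namespace of the current process; with `--cgroupns host`
# a container process reports the same namespace inode as the host's PID 1.
readlink /proc/self/ns/cgroup
```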
 

Were you able to get the container working? Can you post your flags for confirmation? That would be helpful; I'll update the readme etc.


Yes, sorry, I had an unrelated issue I wanted to track down to make sure it wasn't connected to this. I didn't run it through the unraid GUI, but here is my compose file. More or less the same as your run command.

 

networks:
  secondary:
    external: true

version: '3.9'
services:
  mullvadvpn:
    image: mullvadvpn
    container_name: mullvadvpn
    build:
      context: .
      dockerfile: /mnt/user/Documents/docker-mullvadvpn/Dockerfile
    privileged: true
    environment:
      TZ: ${TIME_ZONE}
      HOST_OS: ${HOST_OS}
      HOST_HOSTNAME: ${HOST_HOSTNAME}
      HOST_CONTAINERNAME: mullvadvpn
      VPN_INPUT_PORTS: 
      VPN_ALLOW_FORWARDING: 'true'
      MICROSOCKS_ENABLE: 'true'
      DEBUG: 'false'
      MICROSOCKS_AUTH_NONE: 'true'
      TINYPROXY_ENABLE: 'true'
    networks:
      - secondary
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:rw
      - /mnt/user/appdata/mullvadvpn/custom-init.d:/etc/custom-init.d:ro
      - /mnt/user/appdata/mullvadvpn/etc:/etc/mullvad-vpn:rw
      - /mnt/user/appdata/mullvadvpn/var:/var/cache/mullvad-vpn:rw
    tmpfs:
      - /tmp:mode=1777,size=256000000
      - /run:mode=1777,size=256000000
      - /run/lock:mode=1777,size=256000000

 

