Posts posted by BiGBaLLA

  1. Yes, sorry, I had a separate issue I wanted to track down first to make sure it wasn't related. I didn't run it through the Unraid GUI, but here is my compose file; it's more or less the same as your run command.

     

    networks:
      secondary:
        external: true
    
    version: '3.9'
    services:
      mullvadvpn:
        image: mullvadvpn
        container_name: mullvadvpn
        build:
          context: .
          dockerfile: /mnt/user/Documents/docker-mullvadvpn/Dockerfile
        privileged: true
        environment:
          TZ: ${TIME_ZONE}
          HOST_OS: ${HOST_OS}
          HOST_HOSTNAME: ${HOST_HOSTNAME}
          HOST_CONTAINERNAME: mullvadvpn
          VPN_INPUT_PORTS: 
          VPN_ALLOW_FORWARDING: 'true'
          MICROSOCKS_ENABLE: 'true'
          DEBUG: 'false'
          MICROSOCKS_AUTH_NONE: 'true'
          TINYPROXY_ENABLE: 'true'
        networks:
          - secondary
        volumes:
          - /sys/fs/cgroup:/sys/fs/cgroup:rw
          - /mnt/user/appdata/mullvadvpn/custom-init.d:/etc/custom-init.d:ro
          - /mnt/user/appdata/mullvadvpn/etc:/etc/mullvad-vpn:rw
          - /mnt/user/appdata/mullvadvpn/var:/var/cache/mullvad-vpn:rw
        tmpfs:
          - /tmp:mode=1777,size=256000000
          - /run:mode=1777,size=256000000
          - /run/lock:mode=1777,size=256000000
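
    A rough sketch of bringing this file up (assuming the Compose v2 plugin is available; the external 'secondary' network has to exist before the stack starts):

    # create the external network once, if it doesn't already exist
    docker network create secondary

    # build the image from the referenced Dockerfile and start the service detached
    docker compose up -d --build

    # follow the container output to see whether systemd comes up cleanly
    docker logs -f mullvadvpn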

     

  2. Thanks! I backed up first. As much as I didn't want to give it write permission, it clearly needs to modify `net_tools`, which is strange because it works fine in a cgroup v1 LXC container but not in v2, so I can't imagine what it's writing to disk. I didn't try v1 with write access on cgroups.

     

    From my testing you can probably remove some of those settings. I didn't mess with the tmpfs mounts, mainly because if it's writing a lot I'd rather it write to RAM than disk, but these two could be removed for me (a trimmed run sketch follows the list):

    • --cgroupns host

    • --security-opt seccomp=unconfined
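
    For reference, roughly what that leaves (a sketch based on my paths above, with the tmpfs sizes trimmed for brevity):

    docker run -d \
      --name mullvadvpn \
      --privileged \
      -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
      -v /mnt/user/appdata/mullvadvpn/custom-init.d:/etc/custom-init.d:ro \
      -v /mnt/user/appdata/mullvadvpn/etc:/etc/mullvad-vpn:rw \
      -v /mnt/user/appdata/mullvadvpn/var:/var/cache/mullvad-vpn:rw \
      --tmpfs /tmp --tmpfs /run --tmpfs /run/lock \
      -e MICROSOCKS_ENABLE='true' \
      mullvadvpn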

     

  3. No dig at you, I really appreciate your help and time! Where I ended up isn't perfect either, as I can't get it to run with cgroup v2, so the above is on legacy software for me. I'd equally like to learn more about cgroups, specifically how they tie into Unraid at boot, but like you mentioned earlier, the information seems to be few and far between.

     

    That container was a little outdated (I know it's not yours), so I had to modify the Dockerfile to swap "python python-apt" for "python-apt-doc python3-apt python-apt-common 2to3 python-is-python3". Unrelated, but I'm surprised there is still support for moving Python 2 to 3.

     

    But ultimately I get the same error. The errors on startup are the same, and it still can't determine the cgroup, which seems to be my main issue compared to yours running.

     

    systemd 252.5-2ubuntu3 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
    Detected virtualization docker.
    Detected architecture x86-64.
    Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
    Failed to create symlink /sys/fs/cgroup/cpu: File exists
    Failed to create symlink /sys/fs/cgroup/net_prio: File exists
    Failed to create symlink /sys/fs/cgroup/net_cls: File exists
    
    Welcome to Ubuntu 23.04!
    
    Cannot determine cgroup we are running in: No data available
    Failed to allocate manager object: No data available
    [!!!!!!] Failed to allocate manager object.
    Exiting PID 1...

     

    But here are the full logs from the beginning of the build through the run attempt.

     

    ubuntu-systemd.txt

  4. I'm really not sure about this one... I spent a lot of time with it and am still stumped. Switching to cgroup v2 allows it to start, but it gets caught on the read-only file system; /sys/fs/cgroup/ is mapped on boot based on this setting. I also tried doing this in an LXC container (strictly using the deb) on Debian, which resulted in the service failing to start with an exception about missing net_cls. But since net_cls does exist in cgroup v1, the daemon starts on boot inside LXC. At this point I don't have a ton of time to look back into this, so I can operate using the GUI / CLI through the LXC container on Debian.

     

    Putting everything together, net_cls is probably needed by the app, and it most likely wouldn't work under cgroup v2 since that controller isn't available there. The errors in v1 hint that it's trying to map the controller on startup, but since it already exists, it's fine, which is most likely why gvkhna hasn't hit the issue. v2 doesn't show those symlink errors, but it still hits the same two errors that are preventing it from loading, which aren't present above.
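
    If anyone wants to check what their own host exposes, a quick sketch (run on the Unraid host):

    # cgroup v1: each controller is its own mount under /sys/fs/cgroup, so net_cls should be listed
    ls /sys/fs/cgroup
    cat /proc/cgroups

    # cgroup v2: a single unified hierarchy; net_cls/net_prio don't exist, only these controllers do
    cat /sys/fs/cgroup/cgroup.controllers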

     

    Welcome to Ubuntu 22.04.2 LTS!
    
    Cannot determine cgroup we are running in: No data available
    Failed to allocate manager object: No data available
    [!!!!!!] Failed to allocate manager object.
    Exiting PID 1...

     

    Docker settings could also be playing a role, as could kernel boot params, based on reading about someone trying to do something similar. For reference, my GUI and non-GUI startup commands are below, with a sketch of checking what Docker itself reports after them.

     

    kernel /bzimage
    append pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
    
    kernel /bzimage
    append pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot,/bzroot-gui
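
    On the Docker side, a quick way to see what the daemon reports (a sketch; the --format fields should be available on Docker 20.10 or newer):

    docker info --format 'cgroup driver: {{.CgroupDriver}}, cgroup version: {{.CgroupVersion}}'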

     

    If I have some time in a couple of weeks I'll look back into this as a whole, but my current setup allows me to do everything I wanted.

     

  5. I'll map it through Unraid and try again, but running a slightly modified version of that on the command line produces the same result for me. I am also on cgroup v1, but I'm going to dig into this tomorrow.

     

    I'll map your version into a template to compare 1:1 if I don't get anywhere when looking at the cgroup info; the checks I have in mind are sketched below.
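
    A sketch of those checks, host side first and then what PID 1 inside the container sees:

    # on the Unraid host: cgroup2fs means the unified v2 hierarchy, tmpfs means the v1 layout
    stat -fc %T /sys/fs/cgroup

    # inside the running container: the cgroup that systemd's PID 1 thinks it belongs to
    docker exec mullvadvpn cat /proc/1/cgroup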

     

    Thanks for pointing me in the right direction.  I'll report back.

  6. Thanks for the quick response. I did realize that I apparently need to specify the SOCKS environment variable, which I had not been providing previously since it was listed as optional.

     

    At least the container started with it, but I'm running into the same thing as above. The main commands are below, and the attached file has the full logs from beginning to end.

     

    docker build --no-cache -t mullvadvpn .
    
    docker run \
      --privileged \
      --name mullvadvpn \
      -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
      -v /mnt/user/appdata/mullvadvpn/etc:/etc/mullvad-vpn:rw \
      -v /mnt/user/appdata/mullvadvpn/var:/var/cache/mullvad-vpn:rw \
      -v /mnt/user/appdata/mullvadvpn/custom-init.d:/etc/custom-init.d:ro \
      -e MICROSOCKS_ENABLE='true' \
      mullvadvpn
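
    If it ever stays up, the sanity checks would be along these lines (a sketch assuming the stock unit name from the Mullvad deb; adjust if the image renames it):

    docker exec -it mullvadvpn systemctl status mullvad-daemon
    docker exec -it mullvadvpn mullvad status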

    mull-debug-commands.txt

  7. I appreciate the work on this container, as I have been thinking of something like this for a while, and running the GUI in a VM and moving everything there sounds counterproductive. Having spent some time with it, I wanted to add a little insight from my debugging that may help.

     

    Since you're using ubuntu:latest, I'm wondering if the latest LTS has shifted since this container was created. Have you tried building without the cache recently? I ask because when I build and run jrei's container I can only get 18.04 to run. Running the following will not start the container:

     

    docker run -d --name systemd-ubuntu --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:22.04
    docker run -d --name systemd-ubuntu --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:20.04

     

    But with 18.04 it starts, so something has to be different between Ubuntu versions.

     

    docker run -d --name systemd-ubuntu --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:18.04
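
    An easy way to compare the three tags side by side is to check which test containers stayed up and what they logged, e.g.:

    docker ps -a --filter name=systemd-ubuntu --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
    docker logs systemd-ubuntu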

     

    When building the Dockerfile, all systemd commands fail because systemd isn't running on Unraid. You mention a lot about building a community docker template and community maintainers, so I assume you're running this on your Unraid host, but if not, that may be where some of the confusion is. If you are in fact running this on your host, are you running systemd on it? During the build I get the following errors.

     

    Configuration file /usr/lib/systemd/system/coredns.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
    Created symlink /etc/systemd/system/multi-user.target.wants/coredns.service -> /usr/lib/systemd/system/coredns.service.
    System has not been booted with systemd as init system (PID 1). Can't operate.
    Failed to connect to bus: Host is down
    Configuration file /usr/lib/systemd/system/microsocks.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
    Created symlink /etc/systemd/system/multi-user.target.wants/microsocks.service -> /usr/lib/systemd/system/microsocks.service.
    System has not been booted with systemd as init system (PID 1). Can't operate.
    Failed to connect to bus: Host is down
    Configuration file /usr/lib/systemd/system/mullvad-stdout.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
    Created symlink /etc/systemd/system/multi-user.target.wants/mullvad-stdout.service -> /usr/lib/systemd/system/mullvad-stdout.service.
    System has not been booted with systemd as init system (PID 1). Can't operate.
    Failed to connect to bus: Host is down
    Configuration file /usr/lib/systemd/system/tinyproxy.service is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
    Created symlink /etc/systemd/system/multi-user.target.wants/tinyproxy.service -> /usr/lib/systemd/system/tinyproxy.service.
    System has not been booted with systemd as init system (PID 1). Can't operate.
    Failed to connect to bus: Host is down
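
    For what it's worth, the 'Created symlink' lines show the enable step itself still works during the build (it only writes symlinks on disk), while anything that needs a live systemd can't; roughly:

    # works in a docker build step: enable just creates the .wants/ symlinks, no daemon needed
    RUN systemctl enable coredns.service

    # cannot work in a build step: these need systemd actually running as PID 1
    # RUN systemctl start coredns.service
    # RUN systemctl daemon-reload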

     

    Rebasing the base image to 18.04 produces the same build errors, but at least the container starts up, though it 'freezes' with

     

    Failed to insert module 'autofs4': No such file or directory
    systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
    Detected virtualization docker.
    Detected architecture x86-64.
    
    Welcome to Ubuntu 18.04.6 LTS!
    
    Set hostname to <containerId>.
    Cannot determine cgroup we are running in: No data available
    Failed to allocate manager object: No data available
    [!!!!!!] Failed to allocate manager object, freezing.

     

    Running the container still does not start the app (unless my bind mounts are wrong). Your documentation shifts a little between 'myappdata/mullvadvpn/etc:/etc/mullvad-vpn:rw' and 'appdata/etc-mullvadvpn:/etc/mullvad-vpn:rw', but placing the startup files in both locations still doesn't start the container. Can you post your full run command after your build? You can omit the left-side binds for your local directories; I just want to make sure the container-side mounts are correct.

     

    I feel like a lot of this is what winterx was trying to do with their fork.

     

    On 4/21/2023 at 12:58 PM, gvkhna said:

    I’m not sure why you have a read only file system.

     

    As for this comment, your documentation (and jrei's) has cgroup as a read-only bind mount. I would think this is intentional, since I wouldn't want the container modifying things outside of itself, but that's how it's shown to be set up and why the error above says read-only file system.

     

    -v /sys/fs/cgroup:/sys/fs/cgroup:ro 

     

    Since I can't run 22.04, but winterx's at least starts, are either of you running the RC build (6.12)? Maybe my kernel is too old on stable (6.11.5)?

  8. On 12/14/2022 at 12:48 PM, lukej33 said:

      -e 'LAN_NETWORK'='192.168.0.0/255'

    This is likely your issue. LAN_NETWORK expects CIDR notation, so the /255 suffix isn't valid. Without knowing your Unraid IP I can't say exactly what the network should be, but it needs to match the subnet your server lives on. 192.168.x.x is the default on most routers, so assuming your server is on 192.168.0.x, you'd want a /24 mask:

     

    ```

    -e 'LAN_NETWORK'='192.168.0.0/24'

    ```
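
    If you're not sure which subnet your server actually sits on, checking on the Unraid host makes the right CIDR obvious, e.g.:

    ```

    # shows each interface's address with its prefix length, e.g. 192.168.0.10/24
    ip -o -4 addr show

    ```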

  9. 2 hours ago, jkpe said:

    Just to confirm... if I have been running 6.10.1 on a MicroServer gen8 with Xeon processor should I definitely expect file corruption or just maybe? How widespread is it?

    To add to this, is there an easy way to check for this on XFS (both my cache and array are XFS)? I have only found info on this issue with btrfs. Should we assume that if a drive does have corruption, it was also written incorrectly to parity and wouldn't show up as an issue during a parity check?

  10. That's really strange.  Assuming the recent change was for the config, that error message looks like it can't find postgres, but I did notice I have two configs in my template.

     

    / $ diff /invidious/config/config.yml /config/config/config.yml 

     

    They're both the same for me, and Invidious is running after the most recent pull, but I'm not sure which one is being used. I'd start by validating that postgres is reachable on that port with those credentials and that you didn't modify anything outside the defaults; for example:
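
    The container, user, and database names here are just the usual Invidious defaults, so swap in whatever your template actually uses:

    # connect to the postgres container with the credentials from config.yml and list the databases
    docker exec -it postgres psql -U kemal -d invidious -c '\l'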

  11. On 10/6/2021 at 9:33 AM, Joshndroid said:

    The only way I could get it to work again was

     

    /invidious/config  -> /mnt/user/appdata/invidious/config/

     

    Is that what you're referring to?

     

    I can confirm this was my missing piece. I was able to get this running instantly through your directions using a custom network, but I wanted to route this container through one of Binhex's VPN containers. Once I changed the network to None and routed through the VPN, it never resolved 'postgres'. I kept changing the config to host IPs / container IPs with no luck and was stuck for a while; I should have read through the posts.

     

    Making the above change fixed my problem instantly. The moment I changed the path variable and restarted the container, it picked up the IP and loaded right away. Invidious looks like it works through a reverse proxy whether or not it has the port, domain, or https_only config values set. I'm guessing it doesn't link correctly without them and would instead give IP:Port/video-id links, but since it looked like it worked, I assumed it was reading the config. Knowing this, it most likely would also work on bridge connections by pointing directly at the postgres container IP; a sketch of the relevant config is below.
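
    For anyone on bridge hitting the same thing, the relevant part of config.yml is just the db block; a sketch with placeholder values (host would be the postgres container's IP, or its name on a custom network):

    db:
      user: kemal          # placeholder, use whatever your postgres container was created with
      password: kemal      # placeholder
      host: 192.168.1.50   # the postgres container IP, or its name on a custom network
      port: 5432
      dbname: invidious

    # the values mentioned above; without them links fall back to IP:Port/video-id
    domain: invidious.example.com
    https_only: true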

     

    Anyways, thanks for putting this container together!

  12. 2 hours ago, binhex said:

    @lzrdking71 @theGrok i have identified the issue, not exactly sure how it was related to my previous change but it has caused a race condition, i have now corrected this and the image is building, please pull it down in around 1 hour from now.

    This did it for me! I updated all my instances and all are working correctly again, thank you for fixing this so quickly!

  13. 1 hour ago, binhex said:

    if you are using AirVPN then i would be interested to see your log, please do the following:-

    https://github.com/binhex/documentation/blob/master/docker/faq/help.md

    I'm not sure if it's helpful, but I attached the log from latest and from the 3.10 version, both with the same config and torrents. 3.10 shows the VPN IP to the tracker, while latest shows 127.0.0.1.

     

    Thanks, I really appreciate your work!

    supervisord_3_10_01.log supervisord_latest.log

  14. On 8/29/2020 at 11:24 PM, lzrdking71 said:

    @binhex It looks like this is happening again https://github.com/binhex/arch-rtorrentvpn/issues/104

     

    2020-08-29 17:10:49,842 DEBG 'start-script' stdout output:
    [warn] Cannot determine external IP address, performing tests before setting to '127.0.0.1'...
    [info] Show name servers defined for container

     

    IP in the rutorrent settings/bittorent shows 127.0.0.1

    You helped fix it before, hopefully an easy fix again.

     

    I started seeing this with the most recent version as well. I'm using AirVPN. Rolling back to version v3.10-01 fixed the issue. Looking at GitHub, the only difference between the two versions is commit ad060d. I don't see how that change would have any impact, but every time I downgrade the issue goes away.

     

    EDIT: Ahh, I didn't see the comments on the commit. Binhex confirmed the change is not related to this.

  15. Thanks for keeping this updated! I really appreciate it. I feel like I'm adding a different vm every month or so.

     

    One note: you may want to update the two comments for the Intel options that are pointing to AMD.

     

    #/intel-ucode.img for amd systems

     
