  • [7.0.0-beta3] Docker Container Networking - Updating Containers


    erf89
    • Solved Minor

I'm using the new Docker container network feature, and I'm unable to update a "source" container while a container that uses it as its network exists - whether that container is running or stopped. It looks like I have to go into the container using the network, set its network to none/something else, update the source container, and then go back in and change it back to using the container as a network. Is there a slicker way for Unraid to do this in the background?
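For reference, a rough CLI equivalent of that manual dance might look like the following sketch. All names here are hypothetical placeholders ("vpn" for the source container, "app" for the container that uses it as its network, and both image names), and the actual recreate steps would come from your templates:

```shell
# Sketch only - placeholder names throughout.
# The dependent container has to be recreated around the update of the source.

docker rm -f app                          # remove the container sharing the namespace
docker pull example/vpn-image             # pull the updated image for the source
docker rm -f vpn                          # replace the source container
docker run -d --name vpn example/vpn-image
docker run -d --name app --net=container:vpn example/app-image
```

This mirrors what the UI currently requires by hand; the question is whether Unraid could sequence it automatically in the background.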

     

[screenshots attached]





    Recommended Comments

I can confirm the same issue. In my case it's even worse: trying to fix the issue, I completely broke my Docker setup by deleting the gluetun container (via docker system prune) - now all dependent containers are constantly recreated and immediately fail, with no way to stop it. The UI freezes in the browser after 3 seconds and is unusable. This was not an issue in beta2.

    Edited by asychev

@erf89 Can you please post a screenshot of how you connected the containers together, along with your Docker run commands for the containers?

     

EDIT: I found the cause of the failing update; it should be fixed in the next beta.

     

I assume you created dedicated networks for the containers, which on my system works just as expected:

[screenshot]

     

    Did you maybe assign the custom network that you created to the secondary container too?

     

For the VPN container you should do something like this (in my case OpenVPN-Client):
[screenshot]

     

All attached containers need to be configured like this (in my case Proxy-Server):

[screenshot]

     

     

    The docker run command for my OpenVPN-Client looks like:
     

    docker run
      -d
      --name='OpenVPN-Client'
      --net='vpn'
      --pids-limit 2048
      -e TZ="Europe/Berlin"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Server"
      -e HOST_CONTAINERNAME="OpenVPN-Client"
      -e 'FIREWALL'=''
      -e 'CONNECTED_CONTAINERS'='27286'
      -e 'PING_INTERVAL'='300'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/ich777/docker-templates/master/ich777/images/openvpn-client.png'
      -p '1234:1234/tcp'
      -v '/mnt/cache/appdata/openvpn-client':'/vpn':'rw'
      --device='/dev/net/tun'
      --cap-add=NET_ADMIN
      --dns=8.8.8.8
      --sysctl net.ipv6.conf.all.disable_ipv6=1
      --restart=unless-stopped 'ich777/openvpn-client'
    
    688f773d5e47f1f4e7533301c22bcf9910155d5ef00b0f93de810a1ae68c3a71

(I removed some ports here, but other than that the docker run command is untouched.)

     

    and here is the docker run command for the Proxy-Server client:

    docker run
      -d
      --name='Proxy-Server'
      --net='container:OpenVPN-Client'
      --pids-limit 2048
      -e TZ="Europe/Berlin"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Server"
      -e HOST_CONTAINERNAME="Proxy-Server"
      -e 'HTTP_PROXY'='true'
      -e 'SOCKS5_PROXY'='true'
      -e 'HTTP_PROXY_USER'=''
      -e 'HTTP_PROXY_PWD'=''
      -e 'SOCKS5_PROXY_USER'=''
      -e 'SOCKS5_PROXY_PWD'=''
      -e 'CONNECTED_CONTAINERS'='127.0.0.1:27286'
      -e 'HTTP_PROXY_PORT'='8118'
      -e 'SOCKS5_PROXY_PORT'='1080'
      -e 'UID'='99'
      -e 'GID'='100'
      -e 'UMASK'='0000'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/ich777/docker-templates/master/ich777/images/proxy-server.png'
      --restart=unless-stopped
      --sysctl net.ipv6.conf.all.disable_ipv6=1
      --log-driver=none 'ich777/proxy-server'
    
    a46b662c9264608cde9794d13a72ea019a382a2381bfb7f2dce46380fdf44cba

     

     

    The attached containers rebuild normally and I can update the container without any issues:

[screenshot]

    1 hour ago, asychev said:

I can confirm the same issue.

Same here - can you please post your docker run commands for the containers and how you configured them?

     

    1 hour ago, asychev said:

I completely broke my Docker setup by deleting the gluetun container (via docker system prune) - now all dependent containers are constantly recreated and immediately fail, with no way to stop it. The UI freezes in the browser after 3 seconds and is unusable. This was not an issue in beta2.

This was the case on all previous versions of Unraid and will be fixed in the next beta release, at least to a certain degree (the fix was contributed by @EDACerton).

     

However, if you mess up your VPN container so badly that it can't start, the fix won't help and you will run into the same issue again.

     

You can fix that by opening a terminal and removing all containers that were attached to the main VPN container by issuing:

    docker rm <CONTAINERNAME>
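Since the container may get recreated between attempts, one way to script the repetition is a small loop that keeps removing until Docker reports the container no longer exists. A sketch, with "my-attached-container" as a placeholder name:

```shell
# Force-remove repeatedly; docker rm fails (non-zero exit) once the
# container no longer exists, which ends the loop.
# "my-attached-container" is a placeholder - repeat for each attached container.
while docker rm -f my-attached-container 2>/dev/null; do
  sleep 1
done
```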

     



@ich777 I have a custom Docker network where all containers are attached, plus Nginx Proxy Manager to expose them; the gluetun container is also in this network, and the other containers use --network=container:gluetun.

    This setup was working well in beta2 and was broken in beta3.

     

The docker rm <CONTAINERNAME> solution does not work in my case; I have to remove everything and start from scratch :(

    1 hour ago, asychev said:

@ich777 I have a custom Docker network where all containers are attached, plus Nginx Proxy Manager to expose them; the gluetun container is also in this network, and the other containers use --network=container:gluetun.

    This setup was working well in beta2 and was broken in beta3.

Again, please post the docker run command; otherwise it's really hard to tell what's going on.

     

EDIT: I found the cause of the failing update; it should be fixed in the next beta.

     

But I think I now know where the issue is, at least from what I understand of your description.

     

                    EXPOSED NET
                   /           \
        VPN-Container           Nginx-Proxy-Manager
       /      |      \
Container 1   Container 2   Container 3
   (each with --net=container:VPN-Container)

     

This is how it should look, but from your explanation you have another dedicated VPN network, as far as I understand it.

     

Containers 1, 2 & 3 don't have a network at all - so to speak, they just have --net=container:VPN-Container (this is basically what the new Container drop-down adds), and ports from the containers which are connected to the VPN are exposed in the VPN container's template.

     

If you have a dedicated network for the VPN like me, only the VPN container is in that network; the connected containers should not be in there because they use the VPN container's network anyway.
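That layout can be sketched with the run commands from above, abbreviated to just the network-related flags (full commands are earlier in the thread):

```shell
# The dedicated network holds ONLY the VPN container.
docker network create vpn

# VPN container joins the dedicated 'vpn' network.
docker run -d --name='OpenVPN-Client' --net='vpn' \
  --cap-add=NET_ADMIN --device='/dev/net/tun' 'ich777/openvpn-client'

# Attached containers do NOT join 'vpn' - they share the VPN
# container's network namespace instead.
docker run -d --name='Proxy-Server' \
  --net='container:OpenVPN-Client' 'ich777/proxy-server'
```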

     

This is how it should be set up - I hope that makes sense; it's really hard to explain.

     

(BTW, sorry - I'm not the best at ASCII art :D)

     

    1 hour ago, asychev said:

The docker rm <CONTAINERNAME> solution does not work in my case; I have to remove everything and start from scratch :(

This does indeed work, but you may have to issue the command repeatedly until Docker says the container doesn't exist any more.

I ran into this multiple times myself; the only thing you have to be aware of is not to be on the Docker page when issuing the command.


     

     

Sorry, just trying to catch up on all these posts. I have a custom Docker network set up for the sole purpose of referencing containers by their names - all my containers use this network, bar the 3 containers using another container as their network.

     

My gluetun container is then used as the network for the Tailscale Docker container (to create an exit node in Tailscale). Pre-beta3 I was just using 'Network: None' and then had the extra arg of

    --network=container:gluetun

but since beta3 I wanted to try the new functionality, so I used Network: Container, Container: gluetun...

     

Getting the docker run for my gluetun container is a nightmare because I can't run it without changing the dependent containers, but it's just the stock gluetun CA template with my variables in, and the network set as custom (my one custom network for all containers):

     

[screenshot]

     

     

     

Docker run for Tailscale with the network type set to Container:

[screenshot]

    docker run
      -d
      --name='tailscale-gluetun'
      --net='container:gluetun-uk'
      --pids-limit 2048
      -e TZ="Europe/London"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="farrosphere"
      -e HOST_CONTAINERNAME="tailscale-gluetun"
      -e 'TS_HOSTNAME'='vpn-uk'
      -e 'TS_AUTHKEY'=''
      -e 'TS_AUTH_ONCE'='true'
      -e 'TS_USERSPACE'='true'
      -e 'TS_ACCEPT_DNS'='false'
  -e 'TS_EXTRA_ARGS'='--advertise-exit-node --accept-routes'
      -e 'TS_STATE_DIR'='/state'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/dkaser/unraid-tailscale/main/logo.png'
      -v '/mnt/user/appdata/tailscale-gluetun-uk':'/state':'rw' 'tailscale/tailscale:stable'
    741733a0ede2f6865b522c15648a18ccdd84c0c79143931a952d17c3594c6ef9

     

     

    Docker Network List:

     

    NETWORK ID     NAME          DRIVER    SCOPE
    e9e1d1fbdc8a   br0           ipvlan    local
    1238e44c05f3   bridge        bridge    local
    60956d51ed99   farrosphere   bridge    local
    e0b0892b2eb4   host          host      local
    72de74c4c178   none          null      local

     

     

    13 minutes ago, erf89 said:

Sorry, just trying to catch up on all these posts. I have a custom Docker network set up for the sole purpose of referencing containers by their names - all my containers use this network, bar the 3 containers using another container as their network.

No worries - this is actually a bug where the container won't update; the change will be rolled back in the next beta, so it will work properly again.

    3 minutes ago, ich777 said:

No worries - this is actually a bug where the container won't update; the change will be rolled back in the next beta, so it will work properly again.

     

Ahh okay - I guess this feature in beta3 just replicates using Network: None plus the extra --network argument anyway... that always worked previously. Updating the container with dependencies did always send those dependent containers into a "rebuilding" spin, but it only ever lasted a few seconds and then it was all good again.

     

So in beta4, will the feature be gone (and I go back to using Network: None and the extra arg), or will it be fixed so that the containers can still update?

    Edited by erf89
    6 minutes ago, erf89 said:

Ahh okay - I guess this feature in beta3 just replicates using Network: None plus the extra --network argument anyway... that always worked previously. Updating the container with dependencies did always send those dependent containers into a "rebuilding" spin, but it only ever lasted a few seconds and then it was all good again.

Sorry, I wasn't clear enough - we are talking about two different things:

1. The Container drop-down isn't going anywhere and is working as it should; as you've guessed, it replaces Network: None plus --net=container:<CONTAINERNAME>.
2. The update not working is caused by another issue in the code, which will be reverted in the next beta so that the rebuild works again.
    1 minute ago, ich777 said:

Sorry, I wasn't clear enough - we are talking about two different things:

1. The Container drop-down isn't going anywhere and is working as it should; as you've guessed, it replaces Network: None plus --net=container:<CONTAINERNAME>.
2. The update not working is caused by another issue in the code, which will be reverted in the next beta so that the rebuild works again.

    Ahh, yes sorry my misunderstanding! Thanks for clarifying and also for finding the fix :) 


    @ich777 

    Quote

This is how it should look, but from your explanation you have another dedicated VPN network, as far as I understand it.

This is not correct - my setup is exactly as you drew in the schema (and the same as @erf89's), with no dedicated VPN Docker network. Looking forward to beta4 - any ETA, maybe?


Just tested on beta4: after updating the source container of a network, the dependent containers go into the brief rebuild loop and then are back up and running again! :)




