  • Lost Access to Host from Docker on Custom br0 after reboot


    ceyo14
    • Minor

    I have a few dockers running on br0 with their own IPs. After a reboot I lose access to the host from these dockers.

     

    If I open the console for the docker and curl to the host, it fails. I then have to stop the docker service, disable "Host access to custom networks", hit Apply, enable "Host access to custom networks" again, hit Apply, and finally start the docker service and hit Apply. Then everything works beautifully. If I curl the host now, it instantly responds.
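
    For reference, the same check and manual restart can be done from the console; a rough sketch only (the host IP is just an example, and the rc.docker stop call is assumed to exist alongside the start call that appears in the syslog excerpts further down):

    # From the container's console: try to reach the Unraid host's web interface (example IP, adjust to yours)
    curl -m 5 http://192.168.0.5

    # From the Unraid console: restart the docker service by hand instead of through the GUI
    # (assumes /etc/rc.d/rc.docker accepts stop/start, matching the start call seen in the syslog)
    /etc/rc.d/rc.docker stop
    /etc/rc.d/rc.docker start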

     

     

    There is more info on this below, and to me it is not solved...

     

     




    User Feedback

    Recommended Comments



    mlody11

    Posted (edited)

    Having the same issue here. I'm using a static IP, and the docker containers stop communicating; disabling, re-enabling, and restarting docker fixes it.

     

    On version 6.11.1

    Edited by mlody11
    vakilando

    Posted

    Just wanted to say that the issue came back last week (Unraid 6.11.1), out of the blue...

    No crash, no reboot, no configuration change.....

    Stopped and started Docker and everything is fine again.

     

    At a loss...

    pixeldoc81

    Posted (edited)

    Having the same issue here.

     

    Happened to me after a server crash caused by wrong power-saving settings.

     

    UNRAID v6.11.5

    Edited by pixeldoc81
    pixeldoc81

    Posted (edited)

    And again the same issue, still on UNRAID v6.11.5.

    Problem

    A Docker container with host networking can't reach a container running on the br0 custom network.

    Main Server - Docker Service does not create shim device

    No shim-br0 device exists:


     

    root@unraid:~# ifconfig
    br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.0.5  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 0c:c4:7a:01:27:34  txqueuelen 1000  (Ethernet)
            RX packets 13517068  bytes 20098500303 (18.7 GiB)
            RX errors 0  dropped 98729  overruns 0  frame 0
            TX packets 8678775  bytes 12182403225 (11.3 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    br1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            ether 0c:c4:7a:01:27:35  txqueuelen 1000  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 2  bytes 164 (164.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
            inet6 fe80::42:1fff:fe41:cacf  prefixlen 64  scopeid 0x20<link>
            ether 02:42:1f:41:ca:cf  txqueuelen 0  (Ethernet)
            RX packets 3527988  bytes 239563584 (228.4 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4245217  bytes 2526698325 (2.3 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
            ether 0c:c4:7a:01:27:34  txqueuelen 1000  (Ethernet)
            RX packets 17924821  bytes 20384341579 (18.9 GiB)
            RX errors 0  dropped 2994  overruns 0  frame 0
            TX packets 16171030  bytes 12291364591 (11.4 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
            device memory 0xf7100000-f717ffff  
    
    eth1: flags=4355<UP,BROADCAST,PROMISC,MULTICAST>  mtu 1500
            ether 0c:c4:7a:01:27:35  txqueuelen 1000  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
            device interrupt 20  memory 0xf7200000-f7220000  
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 92237  bytes 17909535 (17.0 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 92237  bytes 17909535 (17.0 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    ...

     

    root@unraid:~# route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    default         speedport.ip    0.0.0.0         UG    0      0        0 br0
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
    root@unraid:~# ip route
    default via 192.168.0.1 dev br0 
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
    192.168.0.0/24 dev br0 proto kernel scope link src 192.168.0.5 

     

    /boot/config/network.cfg

    # Generated settings:
    IFNAME[0]="br0"
    DHCP_KEEPRESOLV="yes"
    DNS_SERVER1="192.168.0.1"
    DHCP6_KEEPRESOLV="no"
    BRNAME[0]="br0"
    BRNICS[0]="eth0"
    BRSTP[0]="no"
    BRFD[0]="0"
    DESCRIPTION[0]="LAN1"
    PROTOCOL[0]="ipv4"
    USE_DHCP[0]="no"
    IPADDR[0]="192.168.0.5"
    NETMASK[0]="255.255.255.0"
    GATEWAY[0]="192.168.0.1"
    USE_DHCP6[0]="yes"
    IFNAME[1]="br1"     <--- second bond with eth1
    BRNAME[1]="br1"
    BRSTP[1]="no"
    BRFD[1]="0"
    DESCRIPTION[1]="LAN2"
    BRNICS[1]="eth1"
    PROTOCOL[1]="ipv4"
    SYSNICS="2"
    

     

    /boot/config/docker.cfg

    DOCKER_ENABLED="yes"
    DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker.img"
    DOCKER_IMAGE_SIZE="50"
    DOCKER_APP_CONFIG_PATH="/mnt/user/appdata/"
    DOCKER_APP_UNRAID_PATH=""
    DOCKER_CUSTOM_NETWORKS="br1 "   <--- Whitespace after interface !!! ---
    DOCKER_LOG_ROTATION="yes"
    DOCKER_LOG_SIZE="50m"
    DOCKER_LOG_FILES="1"
    DOCKER_AUTHORING_MODE="yes"
    DOCKER_USER_NETWORKS="remove"
    DOCKER_TIMEOUT="10"
    DOCKER_ALLOW_ACCESS="yes"

    I noticed a trailing whitespace in DOCKER_CUSTOM_NETWORKS="br1 "; maybe that is causing the issue?
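
    A quick way to confirm from the console whether the value really ends in a space (just a sketch with plain grep, not an official Unraid check):

    # Match a DOCKER_CUSTOM_NETWORKS value whose last character before the closing quote is a space
    grep -n 'DOCKER_CUSTOM_NETWORKS=".* "$' /boot/config/docker.cfg \
      && echo "trailing whitespace found" \
      || echo "value looks clean"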

     

    The docker service does not create the br0 network or the shim device:

    root@srv:~# grep docker /var/log/syslog
    ...
    Mar  2 15:02:35 srv  emhttpd: shcmd (3419): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 60
    Mar  2 15:02:39 srv  emhttpd: shcmd (3421): /etc/rc.d/rc.docker start
    Mar  2 15:02:39 srv root: starting dockerd ...
    Mar  2 15:02:40 srv  avahi-daemon[23860]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Mar  2 15:02:40 srv  avahi-daemon[23860]: New relevant interface docker0.IPv4 for mDNS.
    Mar  2 15:02:40 srv  avahi-daemon[23860]: Registering new address record for 172.17.0.1 on docker0.IPv4.
    Mar  2 15:02:44 srv kernel: docker0: port 1(vethe796f48) entered blocking state
    Mar  2 15:02:44 srv kernel: docker0: port 1(vethe796f48) entered disabled state
    ...
    root@srv:~# grep shim /var/log/syslog

     

    Maybe it is a problem with "old" config file(s) (network.cfg) retained from updates? The server has been upgraded over time since 5.9.x.

    Test Server (fresh install 6.11.5)

    The br0 network and shim device are created on docker service start:

    root@srv2:~# grep docker /var/log/syslog
    ...
    Mar  2 22:59:07 srv2  emhttpd: shcmd (1621): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 20
    Mar  2 22:59:07 srv2  emhttpd: shcmd (1623): /etc/rc.d/rc.docker start
    Mar  2 22:59:07 srv2 root: starting dockerd ...
    Mar  2 22:59:07 srv2  avahi-daemon[4978]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Mar  2 22:59:07 srv2  avahi-daemon[4978]: New relevant interface docker0.IPv4 for mDNS.
    Mar  2 22:59:07 srv2  avahi-daemon[4978]: Registering new address record for 172.17.0.1 on docker0.IPv4.
    Mar  2 22:59:08 srv2 rc.docker: created network br0 with subnets: 192.168.0.0/24; 
    Mar  2 22:59:08 srv2 rc.docker: created network shim-br0 for host access
    ...

     

    my /boot/config/network.cfg:

    # Generated network settings
    USE_DHCP="yes"
    IPADDR=
    NETMASK=
    GATEWAY=
    BONDING="yes"
    BRIDGING="yes"

    Using DHCP, no static IP on this server.

     

    Possible Issue - Second Bond

    Looks like I had a second bond configured from some testing a while ago.

    After removing the second bond (br1), the docker shim device is created as it is supposed to be.

    I will do some further testing.

     

    Edit

    Can't discern any reason why this issue happens. Created a clean network configuration, but it still happens.

    Edited by pixeldoc81
    Update
    micudaj

    Posted

    I also have the same issue with losing access to custom networks after a reboot on version 6.11.5.

    SeanJW

    Posted

    Same issue on 6.11.5 with all static IPs (Unraid and containers).

    Stopping and restarting docker fixes it, but it happened again after an Unraid reboot.

    pants

    Posted (edited)

    I recently experienced this issue on Unraid 6.10.3 as well. I noticed that I was unable to send packets between docker containers on br0 and the Unraid server (pings to the server's ip address from containers on br0 failed) even though the docker setting "Host access to custom networks" was enabled. I then checked the routing table on the network settings page and saw that no routes referencing the shim interfaces appeared in the table. Stopping and starting the docker service recreated the shim routes and restored connectivity. Unfortunately, I did not think to check if the shim interfaces actually existed by running ifconfig before restarting docker.
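
    The same checks can be run from the console before restarting docker; a rough sketch (192.168.0.5 is only an example host address taken from earlier in this thread, substitute your own):

    # Does the shim interface exist at all?
    ip link show shim-br0

    # Are any routes pointing at the shim interfaces?
    ip route | grep shim

    # From a container on br0: can the Unraid host be reached? (example IP)
    ping -c 3 192.168.0.5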

     

    It looks like some users have patched this issue via a user script (see https://blog.siglerdev.us/unraidos/ for more details)

    Edited by pants
    pixeldoc81

    Posted

    @pants

    Thanks for sharing, the user script works well for me.

     

    Let's hope the Unraid bug will be fixed.

    Ptolemyiv

    Posted

    Have the same issue following an unexpected reboot due to a power cut. The script website above no longer seems to work for me, so I'm unable to try it out while the actual underlying bug is hopefully fixed...

     

    Thanks 

    bmrowe

    Posted (edited)

    Just experienced this myself on 6.12.3, so the bug is definitely not 'closed'. A power outage and battery backup failure led to a non-clean shutdown. Upon restart, this bug was present. Stopping and starting docker fixed the problem, but it took me hours of troubleshooting to find this post. Please reopen the bug and fix it.

    Edited by bmrowe
    HHUBS

    Posted (edited)

    On 1/7/2022 at 12:20 AM, B_Sinn3d said:

    I am on 6.9.2 and I just had this issue.  After reboot it was not working so I had to stop Docker, disable that option, apply, enable, apply and reenable docker to get it working again.

     

    I'm running 6.12.3 and this happened to me as well: my containers on br0 were unreachable after a reboot. So I followed the steps in the quote above and it works again. The developers should look into this.

    Edited by HHUBS
    Ptolemyiv

    Posted

    On 3/15/2023 at 1:10 AM, SeanOne said:

    Same issue on 6.11.5 with all static IPs (Unraid and containers).

    Stopping and restarting docker fixes it, but it happened again after an Unraid reboot.

    Thanks - unfortunately I get a bad gateway trying to access this link - could someone share a copy of the script and any associated instructions?

    aje14700

    Posted

    6.12.4 here. I also had an unclean shutdown (I've seen some bug reports suggesting it's C-state related with a Ryzen 5600, unrelated to this thread's issue), resulting in br0 not being accessible from the host. Stopping docker, toggling the setting, and starting it back up resolved it.

     

    Fortunately, this bug only affects me when I'm wireguarding into my network; however, I can resolve it remotely using the above steps... it's just quite annoying to fix.

    Avsynthe

    Posted (edited)

    The blog post at that site, which is no longer accessible, is below. I found it using the Wayback Machine.

     

    I haven't used it yet, and I'm not sure how it will behave with ipvlan, or whether that matters at all.



    Here is that entire blog post:

    ________________________________________________

     

    UnraidOS host access to custom networks [Fix]

     

    So, a quick little post about my favorite operating system, UnraidOS. A lot of my processes are hosted on this beautiful little machine, but a major issue I had was communicating between the host (my UnraidOS system) and a container running on that host.

     

    By default the setting "Host access to custom networks" in the Settings > Docker tab is set to false, because enabling it allows any docker container running on the host to reach out and communicate with the host. This does go against the concept of containers and best practices, but I swear I had a valid reason... so we turned this setting on!

     

    The way UnraidOS does this is by creating a shim-br0 network that connects all of the devices on the custom network (in my case br0) to the host system. What I decided to do was create a custom user script which checks whether the network was created on startup; if it wasn't (which is often the case), it creates the network for you. At its simplest this is just a pair of commands you can run in the UnraidOS console, shown below.

     

    ip link add shim-br0 link br0 type macvlan mode bridge
    ip link set shim-br0 up

     

    One of the other fixes I found on the forum is to turn off Docker by setting Settings > Docker "Enable Docker" to false, applying it, changing it back to true, and applying it again. Usually toggling "Host access to custom networks" will also force the system to re-create the network.

     

    Once the User Scripts plugin is installed, go to Settings > User Scripts, create a new script, and copy and paste the script below (which checks whether the network exists and creates it if it doesn't).

     

    #!/bin/bash
    # Recreate the shim-br0 interface if the docker service failed to set it up at boot.
    ip link | grep 'shim-br0' &> /dev/null
    if [ $? != 0 ]; then
       echo "No shim-br0 found, creating it!"
       ip link add shim-br0 link br0 type macvlan mode bridge
       ip link set shim-br0 up
       # The routes below match the blog author's LAN; adjust the subnets to your own network.
       ip route add 192.168.1.0/25 dev shim-br0
       ip route add 192.168.1.128/25 dev shim-br0
    else
       echo "shim-br0 network was found!"
    fi

     

    Once saved, set the frequency to "At Startup of Array", which means the script will run whenever you start up your array (i.e. after a restart or when manually stopping and starting it). Even if the network was created correctly, this will still run and make sure.

     

    Hopefully this helps you out and saves you many hours of debugging by always ensuring your container can reach the host!

     

    The other way I'd recommend doing this, so you don't ever have to worry about it, is using the User Scripts plugin. Most UnraidOS users already have it, but if not, just do the old Google of "User Scripts UnraidOS" and you'll get it installed in no time.

     

    ________________________________________________

     

    Credit goes to Alexander Sigler for this blog post

    Edited by Avsynthe
    1unraid_user

    Posted

    I can confirm this. It is indeed a huge pain, as I run my UniFi Controller in a docker and need the other dockers to be able to communicate with it.

    Lucas Massucci

    Posted

    Same problem here. When I stop Docker, stop the array, or reboot, the br0 connections are lost and I can't bring the containers back up anymore; the only way is to stop and disable the docker stacks and bring them up again. It's very frustrating. I'm using the trial version of 6.12.10 and evaluating other options besides Unraid because of this.


    Ohelig

    Posted

    Still happens in 6.12.10 

     

    When using the script above, be sure to change the commands to match your system. In my case, I changed the ip route to 192.168.0.0/16 and the link type to ipvlan.
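
    For anyone else on ipvlan, the adjusted commands would look roughly like this (a sketch only: the /16 route matches what is described above, the L2 mode is an assumption, and both should be checked against your own network first):

    # ipvlan variant of the shim commands (adjust the subnet and parent interface to your setup)
    ip link add shim-br0 link br0 type ipvlan mode l2    # L2 mode assumed here
    ip link set shim-br0 up
    ip route add 192.168.0.0/16 dev shim-br0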



