Omri Posted April 22

These are the settings I used: br1 is working, and all containers are using it. IPv6 on the containers works by adding "--sysctl net.ipv6.conf.all.disable_ipv6=0" to the Extra Parameters of each container. So far, no macvlan traces (fingers crossed).
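For anyone working from the command line rather than the container template, that Extra Parameters flag lands on the docker run invocation roughly like this (the container name, image, and br1 network assignment are placeholders for your own setup):

    docker run -d --name=mycontainer \
      --network=br1 \
      --sysctl net.ipv6.conf.all.disable_ipv6=0 \
      myimage:latest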
thorzeen Posted April 22

I have been using this setup for, I don't know, two months? I have not had any issues. I am running 6.11.5.

- UniFi UDMP pointing to Pi-hole just for DNS.
- Docker DHCP allows for labels.
- The NICs on Unraid and the UDMP are in promiscuous mode.
- I add the flag --mac-address <mac address> to every container under Extra Parameters; I also add --dns.
- I let Docker DHCP assign the IP address, then I add that to the Pi-hole DNS and make it a fixed address in the UDMP.
- I also set the DHCP range that the UDMP hands out really low, like 10 IPs (192.168.x.20 to .30).
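For reference, those two flags look something like this under Extra Parameters (the MAC address and DNS server IP here are placeholders; substitute your own):

    --mac-address 02:42:c0:a8:01:15 --dns 192.168.1.10

--mac-address pins the container to a stable MAC so the router's DHCP reservation keeps handing it the same IP, and --dns points the container at a specific resolver instead of whatever it would inherit from Docker.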
nraygun Posted April 23

I'm on 6.11.5, and br1 did not show up as an option in the Docker containers. I think it's still using whatever networks I had configured originally, so I'll just leave it alone until someone provides suggestions on how to make br1 show up.
Omri Posted April 23

What does "Docker DHCP allows for labels" mean? And why do you need --dns in every container? Thanks.
thorzeen Posted April 23

I am running JUST the br1 network on a UDMP router; all other networks are behind a different router.

I remember reading that Unraid's Docker uses the DNS server configured on the host network (br0) to resolve names (I confirmed this by removing the br1 DNS entry from br0, which left br1 in the dark). So I started adding --dns.

Labels are a way Docker containers can reach each other using hostnames (vs IP addresses) when they are on the same network; in my case, that's the Traefik and Authentik containers. Docker DHCP lets Docker know which container is on the network, thus "hostnames" = labels vs IP addresses. It's just an easier way to configure containers that need to talk to each other.
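What makes the hostname part work is Docker's embedded DNS on user-defined networks: containers attached to the same custom network can resolve each other by container name. A minimal sketch, assuming a macvlan network hung off br1 (the network name, subnet, and images are placeholders):

    docker network create -d macvlan --subnet=192.168.20.0/24 -o parent=br1 mynet
    docker run -d --name traefik --network mynet traefik:latest
    docker run --rm --network mynet busybox ping -c 1 traefik   # resolves "traefik" by container name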
Omri Posted April 23

Well, hostnames work automatically via the Docker container name if no other hostname is defined. In my case, DNS is working fine without br0 (eth0 bridging set to no), but I'll verify. Thanks for the response.
thorzeen Posted April 23

Quote: "until someone provides suggestions on how to make br1 show up."

I had that same problem; I just remembered it when reading your post. IIRC it has to do with how br0 is configured (you have to enable bridging?). It depends on how your networks are set up, and I also think I had to reboot the server after configuring it. It seems most of the Unraid Docker underpinnings are tied to br0. It has been a frustrating ordeal to figure it out and make Docker play nice with the UDMP on something other than br0, and then the UDMP changed OS, but that's another story.

PS: The UDMP changing OS is a good thing!
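A quick way to check whether br1 actually got registered with Docker after a change like this is the standard Docker CLI from the Unraid terminal (if br1 is missing from the list, restarting the Docker service or rebooting after the bridging change is usually what fixes it):

    docker network ls            # br1 should be listed as a custom network
    docker network inspect br1   # shows the subnet, gateway, and parent interface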
thorzeen Posted April 23

Quote: "In my case, DNS is working fine without br0 (eth0 bridging set to no)"

Yeah, I am trying to remember; I had to bridge something in order to get it to show up as a Docker custom network, and then it was definitely tied to br0. We are talking about a separate network port controlled by a separate router.
nraygun Posted April 23

4 hours ago, thorzeen said: "I had that same problem... It seems most of the Unraid Docker underpinnings are tied to br0."

Thanks, but at this point I'm at a loss for what to do. This is above my pay grade! I just don't understand how setting up another Ethernet port (I have 4 NICs on my R710) can still use macvlan and still work. It seems to have something to do with MAC addresses.

I haven't had a warning/crash since the last reboot last night using macvlan. And when I do get syslog entries about a possible macvlan issue, my server continues to operate. I'll see how it goes.

Does anyone know what the overall plan is for this issue? Is this separate-NIC thing a workaround or a permanent fix? I thought I saw somewhere that the macvlan issue was deemed a bug of some sort.
Omri Posted April 24

So far this method seems to work: two-plus days and no macvlan traces. Before, it would take a few hours to get the first error (although I don't think it ever crashed my server). Thanks.
aglyons Posted April 24

First up, I tried looking at the Unraid docs to figure this out myself, but there is nothing there! I followed along, like everyone else. But I ran into something that confuses the heck out of me.

I have a second NIC (always have), and on that NIC I had bridging turned off. Each container assigned to eth1 would be on the 168.202.x network, defined as vlan2 on my UDM Pro SE, with an IP assigned manually to each container. This is the setup for any container that I want to expose to the internet via NPM. But some containers are not exposed and don't need a dedicated IP, so those I stuck on the bridge for local and VPN access only (Radarr, Sonarr, etc.).

But once I turned on bridging for eth1, br1 showed up and eth1 disappeared! All the containers I had set up on eth1 were offline.

On a side note: 'Bridge' is still listed in the networks, but choosing it uses the eth0 NIC's 168.200.x network even though bridging is disabled for eth0. Why isn't that bridge using the eth1 NIC for bridge mode, since bridging IS enabled there? Same for 'Host'.
thorzeen Posted April 25

Quote: "I ran into something that confuses the heck out of me"

One thing I have run into with Unraid networking is that it has a memory. I have not researched this; it might be a bug or it might be a safeguard. I have deleted networks that still called out for DHCP until I literally shut down (not reboot, but shut down).

Clearing the ARP/neighbor cache might help:

    ip -s -s neigh flush all
Omri Posted April 26

Well, it took longer than usual (~4 days), but I got macvlan call traces again. All containers are on br1. Bummer.
thorzeen Posted April 26

9 hours ago, Omri said: "Well, it took longer than usual (~4 days), but I got macvlan call traces again. All containers are on br1. Bummer."

Are your NICs configured for promiscuous mode?
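For anyone checking this on the Unraid side, promiscuous mode can be toggled and verified from the terminal (eth1 here stands in for whichever port carries your Docker network; set this way it does not survive a reboot, so it would need to be reapplied, e.g. from the go file):

    ip link set eth1 promisc on
    ip link show eth1   # the flags between < > should now include PROMISC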
Omri Posted April 27

Might these settings be related to the problem?
ailliano Posted April 27

I have always had this setup and it has been working great: a single 2.5G NIC plus VLANs, with the Docker custom network type set to ipvlan. Putting it here in case anyone is having issues or has a similar setup.
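For comparison, an ipvlan custom network on a VLAN sub-interface looks roughly like this when created by hand (the VLAN tag, subnet, and gateway are placeholder assumptions; Unraid normally creates the network for you once the custom network type is set to ipvlan):

    docker network create -d ipvlan \
      --subnet=192.168.30.0/24 --gateway=192.168.30.1 \
      -o parent=eth0.30 vlan30

Because ipvlan containers share the parent interface's MAC address instead of each getting their own, this driver sidesteps the per-container MACs that make macvlan troublesome on some networks.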
thorzeen Posted May 4

There's an interesting discussion going on over in the prerelease bug reports about this: "6.12.0-rc4 'macvlan call traces found', but not on <=6.11.x" (Prereleases - Unraid).

Quote: "There is a suspicion that there occurs a conflict situation between bridge function and macvlan function"

Basically, if you're not using bridging (for VMs), turn it off on br0 and br1, and eth1 shows up in Docker as a custom network on v6.12.0-rc5.
Omri Posted May 5

I've been trying this method since his post; so far, no macvlan traces.
nik82 Posted May 5

Thanks for the tutorial. After reading it I finally bit the bullet and tried this: I activated the onboard 2.5Gb NIC and assigned it to Docker only.

The problem is that this does not solve any of the issues. With this solution, "Host access to custom networks" does not work at all, with either macvlan or ipvlan. Even if this is stable, it does not solve the issue, and I can achieve the exact same "stable" system using ipvlan on one NIC shared with Unraid's normal traffic.

The problem still stands: either you have macvlan with "Host access to custom networks", which lets you assign Dockers like Guacamole or Unifi a dedicated IP while they can still talk to other Dockers (which is what lets you set up and use an NGINX reverse proxy), BUT, and it is a big freaking but, it crashes all the time. OR you have ipvlan or this dedicated-NIC solution, which causes no crashes but gives you no "Host access to custom networks". As such, I don't see how this "solution" helps vs just using ipvlan.
thorzeen Posted May 5

27 minutes ago, nik82 said: "you have ipvlan or this dedicated-NIC solution, which causes no crashes but gives you no 'Host access to custom networks'"

Just to be clear, does "host access" mean the Unraid host?
nik82 Posted May 5

3 hours ago, thorzeen said: "Just to be clear, does 'host access' mean the Unraid host?"

It's for the Dockers to be able to communicate when one Docker is using a custom IP.
thorzeen Posted May 5

5 hours ago, nik82 said: "It's for the Dockers to be able to communicate when one Docker is using a custom IP."

I am just curious: are you using a custom Docker network on a VLAN off br0, or on a separate Ethernet port?
Omri Posted May 12

The only solution that worked for me is:

- Disable bridging
- Docker uses macvlan through eth0
- Optional: the second network interface is passed through to a VM

Thanks to @bonienl
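For the record, with bridging disabled the custom network ends up as a macvlan network whose parent is the physical port itself. Created by hand it would look roughly like this (the subnet, gateway, and network name are placeholders; Unraid normally builds this for you):

    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=eth0 my_macvlan

With the parent on eth0 rather than a bridge, the macvlan sub-interfaces hang directly off the physical port, which avoids the bridge/macvlan conflict suspected in the prerelease thread quoted above.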