[6.3.0+] How to setup Dockers without sharing unRAID IP address


ken-ji


3 minutes ago, ken-ji said:

Something is wrong with your setup right now...
How did you create the br0.1 interface? The one in your ip commands is created as a VLAN subinterface.

You can reconfigure the pihole container back, delete the homenet Docker network with "docker network rm homenet", then stop the array and disable the VLAN network. Then try again.

 

When I initially started, VLAN was disabled and I still had the same problems; I only enabled it afterwards as a troubleshooting step. I will disable it and try again, but I don't think it will help. To answer your question about br0.1: I followed the same steps you outlined in your OP, but instead of br0, which I couldn't use because it said it was already being used by another interface, I tried br0.1. That's obviously not right, but I was trying to get this working somehow. From an earlier post, this is what happens when I follow your instructions for the single-NIC solution:

 

Error response from daemon: network dm-ba57b5a60b33 is already using parent interface br0

The only other thing I can think of is that I tried 6.4 RC7 for a little while; I wonder if something happened with that before I backed out to 6.3.5 again.


Hmm. Run "docker network ls"; you should have only 3 networks:

bridge

host

none

 

Run "docker network rm <name>" on all the others.

I'm guessing dm-ba57b5a60b33 is an autogenerated Docker network from the 6.4 series. Docker persists the network settings in the docker.img file across unRAID upgrades.

 

Also, telling Docker to use br0.1 when it has not been configured will make Docker create it anyway, but how it's set up is not clear, which can cause problems that are hard to debug. (AFAIK it will try to create a macvlan subinterface, which makes the containers use a subinterface of a subinterface.)

12 minutes ago, ken-ji said:

Hmm. Run "docker network ls"; you should have only 3 networks:

bridge

host

none

 

Run "docker network rm <name>" on all the others.

I'm guessing dm-ba57b5a60b33 is an autogenerated Docker network from the 6.4 series. Docker persists the network settings in the docker.img file across unRAID upgrades.

 

Also, telling Docker to use br0.1 when it has not been configured will make Docker create it anyway, but how it's set up is not clear, which can cause problems that are hard to debug. (AFAIK it will try to create a macvlan subinterface, which makes the containers use a subinterface of a subinterface.)

 

Do you think regenerating the Docker image will help? I may also install 6.4 RC7 again and try to undo whatever I did in the short time span I had it installed. But I'll try to get myself back to a stock setup. Thanks for the help so far; I seem to have gotten myself into a bit of a mess :)


Have you tried unRAID 6.4rc? It has built-in support for creating custom networks based on macvlan.

 

The Docker implementation prohibits any container using a macvlan connection from communicating with the host system. Containers can communicate with each other or use the default gateway to reach the outside.

 

7 hours ago, bonienl said:

Have you tried unRAID 6.4rc? It has built-in support for creating custom networks based on macvlan.

 

The Docker implementation prohibits any container using a macvlan connection from communicating with the host system. Containers can communicate with each other or use the default gateway to reach the outside.

 

 

I did, and things went horribly wrong :) BTW, great job on the new templates.

 

 

5 hours ago, ken-ji said:

AFAIK, bridging is optional for Dockers with the macvlan support, but for me it's easier to keep bridging turned on.

 

So I went back to 6.4 RC7 and tried to undo whatever I might have done there. I deleted the macvlan interface (docker network rm) that was there in 6.4 RC7; I'm not sure if that was something I had done or something RC7 does automatically. I cleaned things up and went back to 6.3.5, but the same thing happens. I disabled VLAN, so that's gone now. With bridging enabled, it won't let me create the Docker network using br0, saying another interface is already using it. With bridging disabled it won't let me create it for another reason I can't recall; I'll have to try it again. Ah well :)

 

Do you know if wiping my Docker image will reset anything that 6.4 RC7 might have done?

1 minute ago, CHBMB said:

I do believe the macvlan stuff is configured within the docker.img

 

 

So if 6.4 RC7 added some macvlan stuff that got stuck in the docker.img and I went back to 6.3.5, would wiping the docker.img get me back to a fresh 6.3.5 state?

1 minute ago, Kewjoe said:

 

So if 6.4 RC7 added some macvlan stuff that got stuck in the docker.img and I went back to 6.3.5, would wiping the docker.img get me back to a fresh 6.3.5 state?

 

Yes, CHBMB is correct: the macvlan information is stored in the Docker image, and the image needs to be deleted to start with a clean sheet.

 

  • 3 weeks later...
On 5/1/2017 at 2:32 PM, bonienl said:

Perhaps you would be interested to know that macvlan support is added in the upcoming version of unRAID, it allows you to select additional 'custom' networks from the GUI.

 

 

@bonienl is there info on how to find this option in 6.4 rc8q? I checked the network settings and it looks pretty much the same. I tried Ken-ji's method in 6.4 rc8q and it works, but it doesn't survive a reboot for some reason. Wondering if there is a better way to do it.

10 hours ago, Kewjoe said:

 

@bonienl is there info on how to find this option in 6.4 rc8q? I checked the network settings and it looks pretty much the same. I tried Ken-ji's method in 6.4 rc8q and it works, but it doesn't survive a reboot for some reason. Wondering if there is a better way to do it.

 

Most of it happens automatically.

 

When the Docker service is started, it scans all available network connections and builds a list of custom networks for those connections that have valid IP settings.

 

When creating/editing a Docker container, the custom network(s) are automatically available in the dropdown list for network type. Choose a custom network here and optionally set a fixed IP address; otherwise a dynamic address is assigned (you can set the range for dynamic assignments under Docker settings and avoid conflicts with the 'regular' DHCP server).

 

 

Edited by bonienl
  • 1 month later...
On 8/6/2017 at 3:08 PM, ebnerjoh said:

Hi, 

 

I have a question about using multiple NICs: with the following command I am able to use the second NIC:

 


# docker network create \
-o parent=br1 \
--driver macvlan \
--subnet 10.0.3.0/24 \
--ip-range 10.0.3.128/25 \
--gateway 10.0.3.1 \
docker1

Why is bridging used? Wouldn't it also work if bridging is disabled in the network settings and "eth1" is used instead of "br1" when creating the Docker network?

 

Br,

Johannes

 I was hoping there was an answer to this because I'm also interested.

 

My server has 1 NIC now, but I've got an Intel dual-gigabit card sitting here doing nothing. I was hoping I could slot it in, give it a static IP, and then use that for the Docker connection. My thought was I could carve out some leases from my DHCP and give them to Docker, i.e. the DHCP range on my router (USG) would be .10 to .219, and I'd give .220 to .253 to Docker containers.

 

Is that possible? The example of a dual-NIC setup only shows the Dockers having their own dedicated network, with different IPs from the rest of my network.
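A side note on the proposed carve-out: Docker's --ip-range takes a single CIDR block, and a span of .220 to .253 isn't CIDR-aligned, so it can't be expressed exactly. This sketch (Python's ipaddress module, using the hypothetical .220-.253 span from the post above) shows how many aligned blocks it would take to cover it:

```python
import ipaddress

# Hypothetical container range from the post: .220 through .253.
first = ipaddress.IPv4Address("192.168.1.220")
last = ipaddress.IPv4Address("192.168.1.253")

# summarize_address_range() yields the CIDR blocks that cover the span exactly.
blocks = list(ipaddress.summarize_address_range(first, last))
for b in blocks:
    print(b)  # five separate blocks, from a /30 up to a /28 and back down to a /31
```

In practice you would round to one aligned block, e.g. 192.168.1.224/27 (.224-.255), and keep the router's DHCP range out of its way.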


I was going to wait until 6.4 to implement this, but I am losing patience, so I want to try this out on my 6.3.5 system. Are there any risks in creating the br0 interface, i.e. could it break anything that I currently have running?

 

I want to set the IP range to 192.168.1.224-255, so I am going to use 192.168.1.224/27, which was pointed out to me earlier in this thread. If I want a bigger range, is 192.168.1.192/26 a valid setting? (I don't totally understand the /24 etc. stuff in networking, but I think this is correct.)
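For what it's worth, the /NN math can be checked with Python's ipaddress module; this sketch confirms that 192.168.1.224/27 spans .224-.255 (32 addresses) and that 192.168.1.192/26 is a valid, larger block spanning .192-.255 (64 addresses):

```python
import ipaddress

# The two candidate --ip-range values from the post above.
for cidr in ("192.168.1.224/27", "192.168.1.192/26"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr}: {net[0]} - {net[-1]} ({net.num_addresses} addresses)")
```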

 

What does the --ip-range option do? Does it set the valid range for IP addresses in homenet, or is it the range of addresses that get assigned automatically in homenet?

 

If I assign static IPs to some of my Dockers when creating them, by including an argument like --ip 192.168.1.193, is the macvlan network smart enough not to assign that address to future Dockers that I create?

 

Can I assign a static IP to a Docker in homenet that is outside of the IP range? Let's say I use the following command to create homenet:

Quote

docker network create -o parent=br0 --driver macvlan --subnet 192.168.1.0/24 --ip-range 192.168.1.224/27 --gateway 192.168.1.1 homenet

Could I then assign a static IP to a Docker with parameters like "--network homenet --ip 192.168.1.200"? Or is that not a valid IP when using the homenet network, because it must use an address in the range defined when homenet was created?
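As far as I know, Docker's default IPAM only requires a static --ip to fall inside the network's --subnet; the --ip-range just bounds automatic assignment, so an address like .200 should be accepted. A quick sketch with Python's ipaddress module showing where 192.168.1.200 falls relative to homenet's two ranges:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")      # --subnet of homenet
ip_range = ipaddress.ip_network("192.168.1.224/27")  # --ip-range of homenet
static_ip = ipaddress.ip_address("192.168.1.200")

print(static_ip in subnet)    # True: inside the subnet, so usable as a static --ip
print(static_ip in ip_range)  # False: outside the automatic-assignment pool
```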

3 hours ago, wayner said:

I was going to wait until 6.4 to implement this

 

unRAID 6.4 makes it a lot easier to use custom (macvlan) networks. It takes care of the available subnets when multiple interfaces or VLANs exist, and does so for both IPv4 and IPv6 addressing. Setting a DHCP range for Docker container assignment, using fixed addresses, or both is greatly simplified in the GUI.

 

In short I would wait :)

 

4 hours ago, dalben said:

I was hoping there was an answer to this because I'm also interested.

 

macvlan can be assigned to either bridge (br) or ethernet (eth) ports; mixing is allowed too. So your first NIC can be set up as a bridge to allow VM communication, while your second NIC can be a plain ethernet port dedicated to Docker container communication.

4 minutes ago, bonienl said:

 

unRAID 6.4 makes it a lot easier to use custom (macvlan) networks. It takes care of the available subnets when multiple interfaces or VLANs exist, and does so for both IPv4 and IPv6 addressing. Setting a DHCP range for Docker container assignment, using fixed addresses, or both is greatly simplified in the GUI.

 

In short I would wait :)

OK, thanks. It just seems to me like 6.4 is taking a very long time to get released; we're up to something like rc10 now, aren't we?

4 minutes ago, wayner said:

OK, thanks. It just seems to me like 6.4 is taking a very long time to get released; we're up to something like rc10 now, aren't we?

 

Development of 6.4 started about 8 months ago. I wouldn't consider that very long, but I agree it is longer than previous 6.x releases.

 

My personal feeling is that the upcoming RC is very close to final, but ultimately it is LT who decides when and what to release.

 

9 hours ago, dalben said:

 I was hoping there was an answer to this because I'm also interested.

 

My server has 1 NIC now, but I've got an Intel dual-gigabit card sitting here doing nothing. I was hoping I could slot it in, give it a static IP, and then use that for the Docker connection. My thought was I could carve out some leases from my DHCP and give them to Docker, i.e. the DHCP range on my router (USG) would be .10 to .219, and I'd give .220 to .253 to Docker containers.

 

Is that possible? The example of a dual-NIC setup only shows the Dockers having their own dedicated network, with different IPs from the rest of my network.

 

Bridging is just a sample (and a recommended setup, since you typically want VMs to use the same physical NICs without having to NAT; the default vmbr0 is a NAT bridge). You can use eth1 just the same, as long as it's not bridged or bonded.

 

I'm sorry I wasn't clear about using the second NIC to attach to the same network. (I thought it was clear from the fact that the unRAID and gateway addresses were defined to be in the same network.)

 

You should only assign one IP to unRAID per logical network or VLAN. Having more will cause you some weird problems down the line.

We'll be using a dedicated interface br1 with only eth1 slaved to it (the native eth1 interface can be used instead if it's not bridged or bonded).
There is no need to assign an IP address to the interface.

We only have one network (i.e. a simple, plain router), no VLANs, no other subnets.

The IP address details are:

  • Network: 192.168.1.0/24
  • Router: 192.168.1.1
  • unRAID: 192.168.1.2 (on eth0/br0)
  • DHCP range: 192.168.1.64-192.168.1.127 (this just simplifies some of the math)
  • Docker container range: 192.168.1.128-192.168.1.191 (192.168.1.128/26)

Running on the terminal:

# docker network create \
-o parent=br1 \
--driver macvlan \
--subnet 192.168.1.0/24 \
--ip-range 192.168.1.128/26 \
--gateway 192.168.1.1 \
localnetwork
  • Modify any Docker via the WebUI in Advanced mode
  • Set Network to None
  • Remove any port mappings
  • Fill in the Extra Parameters with: --network localnetwork
  • Apply and start the docker
  • The Docker is assigned an IP from the pool 192.168.1.128 - 192.168.1.191; typically the first Docker gets the first IP address

The important point is that this Docker network now allows Dockers to have dedicated local-network IP addresses while still being able to talk with unRAID.

Please do not forget to leave the IP address for br1/eth1 unassigned, so as to avoid issues later on (unless you really, really know what you are doing).
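The pool arithmetic in the setup above can be sanity-checked with Python's ipaddress module (using the addresses from the list above):

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")    # the whole network
pool = ipaddress.ip_network("192.168.1.128/26")    # --ip-range from the command above
dhcp_last = ipaddress.ip_address("192.168.1.127")  # top of the router's DHCP range

print(pool[0], "-", pool[-1])  # the container pool spans .128 - .191
print(pool.subnet_of(subnet))  # True: the pool sits inside the /24
print(pool[0] > dhcp_last)     # True: it starts just above the DHCP range
```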

 

Edited by ken-ji

Thanks ken-ji, that all makes sense now, but I'm sure I'll stuff up along the way. Has anyone tried the bridge method with the UniFi container on a different IP subnet from the controllers and router? I do see advantages to having the Dockers on their own subnet, as long as it doesn't cause more problems. Either way, I am going to have to change the settings of all Dockers that talk to each other.


It shouldn't cause any problems; just take note of the settings I posted here.

The containers on this new network will happily coexist with your real network, and you can migrate slowly rather than needing an all-or-nothing approach.

 

That said, I'm currently using all my containers in another subnet (another VLAN actually)

 


I'm now going with the single-NIC solution, and I'm getting this error when I try to create the network:

Error response from daemon: failed to allocate gateway (192.168.1.1): Address already in use

On the unRAID server, the DNS is set to 192.168.1.1. Can I assume that's what's creating the error above? If not, what else could it be? I'm using the following command:

 

I finally got it going. Something was conflicting with the unRAID GUI's ability to set something like this up, I guess. Anyway, I have everything running on eth0.

 

Two quirks:

  1. Once I have Dockers with their own IPs, I get "access denied" when I try to launch the WebUI of any Docker that is still configured with the server's IP.
  2. My Kodi database uses SMB and the source path is //TDM/mnt etc. Kodi can't find that directory. Would that be because there's no access from the dedicated IP to the server, or because TDM, the server's host name, isn't resolving?

 

#2 is painful right now. I'd be tempted to put Kodi back on the old shared IP, but as above that would mean I can't access it anymore.

 

Alternatively, I could set up a bridge, as the switch the server is on supports VLANs. Would that solve the issue?

Edited by dalben