Changed a docker to a virtual IP, can't access from WAN via nginx reverse proxy (RESOLVED)

12 posts in this topic

I wish to change my emby docker so it has its own virtual IP for QoS/routing etc.

I only have the one NIC in my unraid server.

I noticed this was all possible when I recently created a pihole docker based on Space Invader One's video.

So, a simple edit on the docker: changed the network to Custom : br, then put the new IP address in it.

All starts fine, and all works locally.
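For anyone unfamiliar with what that setting does, unRAID's "Custom" network type corresponds roughly to a Docker macvlan network bound to the physical NIC. A minimal sketch of the equivalent plain-Docker commands (the interface name, subnet, addresses and image are placeholders, not taken from this thread):

```shell
# Create a macvlan network on the single physical NIC
# (eth0, subnet and gateway are example values)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 br0

# Run the container with its own LAN IP on that network
docker run -d --name emby --network br0 --ip 192.168.1.50 emby/embyserver
```

On unRAID this is all done from the container's edit page; the commands just show what it maps to under the hood.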

I then changed the nginx reverse proxy config for the emby subdomain to use the new IP address.

Restarted nginx
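For context, the nginx change described here is just the upstream address in the server block for the emby subdomain, along these lines (the domain and IP are placeholders; 8096 is emby's default port):

```nginx
server {
    listen 4443 ssl;
    server_name emby.example.com;

    location / {
        # was the unRAID host IP; now the container's own virtual IP
        proxy_pass http://192.168.1.50:8096;
    }
}
```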

My emby server is no longer contactable from the WAN.  All others remain fine.

I'm thinking this is some sort of routing issue with this virtual IP but I have no idea where to look.

As I can't have emby down for long, I set it back as it was and all was good.

I am now using my ombi docker instead for this testing.



OK - I did a load of googling/searching and I think the issue is that a container with its own virtual IP is not allowed to communicate with the unraid host IP (where nginx is running as a container).


What I have is an nginx container on host networking which currently proxies to all the containers, also on host, so when I separated the emby container onto its own VIP, nginx could no longer communicate with it.


Apart from adding actual physical NICs to the unraid server, or buying a smart VLAN-capable switch, what are my other options?


If I give every docker used by nginx, and the nginx container itself, a virtual IP address, can they communicate with each other then?


nginx, sonarr, radarr, sab, deluge, ombi and a web server 



7 hours ago, vw-kombi said:

If I give every docker used by nginx, and the nginx container also a virtual IP address, can they communicate with each other then ?


This is the best solution available for you. All the containers that need to interact with each other need to be on the same class of network. This also grants a bit of security, as none of these containers can interact with unRAID except via disk path mappings.


I should also clarify for you that a VLAN-aware switch is not enough; you need a VLAN-capable router, as only the router can pass packets between VLANs - a switch will not be able to do it.
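In plain Docker terms, the suggestion above amounts to putting nginx and every proxied container on the same user-defined macvlan network, each with its own LAN IP. A sketch under assumed names and addresses (none of these values come from the thread):

```shell
# Both containers on the same macvlan network, each with its own LAN IP
docker run -d --name nginx --network br0 --ip 192.168.1.5  nginx
docker run -d --name emby  --network br0 --ip 192.168.1.50 emby/embyserver

# nginx can now reach emby directly at 192.168.1.50:8096;
# only the Docker host itself cannot reach these macvlan IPs.
```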


Nice one.  Thanks for that.

Will get that done.

I will one day get a new VLAN switch.  Router is pfsense so no worries there.


hhhhmmmm - so I am missing something here.

The containers I change to have their own IP address cannot connect to the internet.

Do they have to have the gateway set somewhere?


Sorry - my bad.  The ones I was testing with had individual issues with the router VPN affecting their internet.

I tested the ombi container, started it with the new virtual IP address, used it, and it can connect to the internet fine.


So my plan is to change the following containers to have br Custom IP addresses - as they are all related to each other and many of them need access to each other:

emby, nginx, sonarr, radarr, sab, deluge and ombi

I also have a VM web server that also needs to be reachable from the nginx container, but I believe this will be fine.


Then they can all communicate with each other, and I believe they can only access the host via the container options on start up.

Or so I believe.  I'll have to plan a time for this test as nginx is in use often.


If anyone thinks I am on the wrong track, please advise.




OK - so I need some help with what I am trying to do - a simple yes/no on whether this is possible, as I seem to have an issue with my proposed config.


To recap the end solution - no VLANs, one NIC in unraid - I just want to set the IP address on my emby container.


Currently, all docker containers are on bridge mode and work (obviously), and talk to each other and the unraid host.

I have a router port forward of 80 and 443 to my nginx container on 85 and 4443.  It's all been working for ages.


After getting some answers in here (from a helpful @ken-ji), I found out that once I set an IP address on a container (emby), it can no longer talk to the unraid host IP.


So the solution as posted earlier was to set IP addresses on ALL my containers.


So, I set an IP on my nginx container, and on my emby container.  Updated the nginx config with this IP address for emby, and changed the port forwards on the router from (unraid host) to (nginx container).


It does not work - I can't access nginx remotely.  I checked the nginx logs and nothing is getting to it.  I used the router to test the port from the WAN side (pfsense), and it's not getting through to nginx.  My port forwarding is correct - I only changed a .7 to a .5.


I have surmised that the WAN can't get access to the nginx container IP.


Is there an issue with routing / port forwarding to a docker container that has a br: Custom IP address set?


Please help.






You might not have noticed, but when a container is assigned its own IP address, port mappings are ignored. Port mappings are actually used only in the default bridge mode, where the container is given an IP in an internal hidden network.


So when you changed the port forwards on your router, you mapped port 80 straight to the container (same for 443) - ports 85 and 4443 are now ignored.
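The distinction can be seen in plain Docker terms: `-p` port publishing only applies to the default NAT'ed bridge network, while a container with its own macvlan IP exposes its real ports directly (image names and addresses below are illustrative):

```shell
# Bridge mode: host port 85 is NAT'ed to the container's port 80
docker run -d --name nginx-bridge -p 85:80 nginx

# Custom IP on a macvlan network: -p has no effect here;
# clients connect straight to 192.168.1.5:80
docker run -d --name nginx-vip --network br0 --ip 192.168.1.5 nginx
```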


I have this on one of my simpler setups where we have VLAN support on the router, but we didn't bother to implement it.


Ahhhhh. @ken-ji - Thanks for that.....


I did not know that.

So, in the nginx container setup, the ports 85 and 4443 I enter are negated once the nginx container is changed to its own IP address.

So I just needed to change the port forward in my router to send 80 and 443 to the new nginx container's IP address -

If I am understanding you correctly, then that would link the internet to the nginx container.


How then do the other containers work, with all their ports, once they also have their own IP addresses?

As they are 'talked to' locally from nginx - that will all be fine, correct?


For example :

emby - 8096 default in the app

sonarr - 8989 default in the app

radarr - 7878 default in the app 

etc etc
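Put together, the nginx side then just points at each container's own IP on the app's default port, roughly like this (addresses are placeholders; the ports are the defaults listed above):

```nginx
# Each app reached at its own container IP on the app's default port
location /emby/   { proxy_pass http://192.168.1.50:8096; }
location /sonarr/ { proxy_pass http://192.168.1.51:8989; }
location /radarr/ { proxy_pass http://192.168.1.52:7878; }
```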


Once a container has its own IP address, everything on the LAN, including other containers, can reach it using the app's default ports.

Only the unRAID host will not be able to reach any of the containers with dedicated IP addresses.


Just wanted to say to @ken-ji - thanks heaps - all working perfectly and now I can track and QoS everything via IP address in my router.

