vw-kombi Posted April 13, 2019
I wish to change my emby docker so it has its own virtual IP for QoS/routing etc. I only have the one NIC in my unraid server. I noticed this was all possible when I recently created a pihole docker based on SpaceInvaderOne's video. So, a simple edit on the docker, changed the network to Custom : br, and put the new IP address in. It all starts fine, and all works locally.
I then changed the nginx reverse proxy for the emby subdomain to the new IP address (simply changing 192.168.1.7 to 192.168.1.5) and restarted nginx. My emby server is no longer contactable from the WAN; all the others remain fine. I'm thinking this is some sort of routing issue with the virtual IP, but I have no idea where to look. As I can't have emby down for long, I set it back as it was and all is good again. I am now using my ombi docker instead for this testing. Thanks.
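For anyone following along: Unraid's "Custom" network setting corresponds to a Docker macvlan network under the hood. A rough plain-CLI equivalent is sketched below; the interface name (eth0), subnet, gateway, and image are assumptions for illustration, not taken from this thread:

```shell
# Sketch only: create a macvlan network bound to the single physical NIC,
# then give the container a dedicated LAN IP on it. Adjust the subnet,
# gateway, and parent interface to match your own LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 br0

# Run emby with its own LAN IP instead of sharing the host's
docker run -d --name emby \
  --network br0 --ip 192.168.1.5 \
  emby/embyserver
```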
vw-kombi Posted April 15, 2019 (Author)
OK - I did a load of googling/searching and I think the issue is that a container with its own virtual IP is not allowed to communicate with the unraid host IP (where nginx is running as a container). What I have is an nginx container in host mode which currently redirects to all the other containers, also in host mode, so when I separated the emby container onto its own VIP, nginx could no longer communicate with it.
Apart from adding physical NICs to the unraid server, or buying a smart VLAN-capable switch, what are my other options? If I give every docker used by nginx, and the nginx container itself, a virtual IP address, can they all communicate with each other then?
nginx, sonarr, radarr, sab, deluge, ombi and a web server
Thanks
ken-ji Posted April 15, 2019
7 hours ago, vw-kombi said:
If I give every docker used by nginx, and the nginx container also a virtual IP address, can they communicate with each other then ?
This is the best solution available to you. All the containers that need to interact with each other need to be on the same network. This also grants a bit of security, as none of these containers can interact with unRAID except via disk path mappings. I should also clarify that a VLAN-aware switch is not enough; you need a VLAN-capable router, as only the router can pass packets between VLANs. A switch will not be able to do it.
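A quick way to sanity-check this once the containers share a custom network - the container names and IPs follow the thread, but the commands themselves are just an illustrative sketch:

```shell
# Container-to-container on the same macvlan network: should succeed
docker exec nginx ping -c 1 192.168.1.200

# Container to the Unraid host IP: expected to fail, since macvlan
# deliberately isolates the host from its own macvlan containers
docker exec nginx ping -c 1 -W 2 192.168.1.7
```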
vw-kombi Posted April 15, 2019 (Author)
Nice one - thanks for that, will get it done. I will one day get a new VLAN switch. The router is pfSense, so no worries there.
vw-kombi Posted April 16, 2019 (Author)
Hmmm - so I am missing something here. The containers I change to have their own IP address cannot connect to the internet. Do they have to have a gateway set somewhere?
vw-kombi Posted April 17, 2019 Author Share Posted April 17, 2019 Sorry - My Bad. The ones I was testing with had individual issues with the router vpn affecting their internet. I tested the ombi container, started with new virtual IP address, used it and it can connect to the internet fine. So my plan is to change the following containers to have BR Custom IP addresses - as they are all related to each other and need access between many of them : nginx 192.168.1.5 emby 192.168.1.200 ombi 192.168.1.201 tvheadend 192.168.1.202 sab 192.168.1.203 deluge 192.168.1.204 sonarr 192.168.1.205 radarr 192.168.1.206 Jackett 192.168.1.207 I also have a VM web server that is also needed to be connected from the nginx container but I believe this will be fine. Then they can all communicate to each other and I believe only access host via the container options on start up. Or so I believe. Have to plan a time for this test as the nginx is in use often If anyone thinks I am on the wrong track, please advise. Quote Link to comment
vw-kombi Posted April 19, 2019 (Author)
OK - so I need some help with what I am trying to do - a simple yes/no on whether this is possible, as I seem to have an issue with my proposed config. To recap the end solution - no VLANs, one NIC in unraid - I just want to set the IP address on my emby container.
Currently, all docker containers are in bridge mode and work (obviously) and talk to each other and the unraid host. I have a router port forward of 80 and 443 to my nginx container on 85 and 4443. All has been working for ages. After getting some answers in here (from a helpful @ken-ji), I found out that once I set an IP address on a container (emby), it can no longer talk to the unraid host IP. So the solution, as posted earlier, was to set IP addresses on ALL my containers.
So, I set IP 192.168.1.5 on my nginx container and 192.168.1.200 on my emby container, updated the nginx config with the new emby IP address, and changed the port forwards on the router from 192.168.1.7 (unraid host) to 192.168.1.5 (nginx container). It does not work - I can't access nginx remotely. I checked the nginx logs and nothing is getting to it. I used the router to test the port from the WAN side (pfSense), and it is not getting through to nginx. My port forwarding is correct - I only changed a .7 to a .5. I have surmised that the WAN can't get access to the nginx container IP. Is there an issue with routing/port forwarding to a docker container that has a Custom : br IP address set? Please help.
ken-ji Posted April 19, 2019
@vw-kombi You might not have noticed, but when a container is assigned its own IP address, port mappings are ignored. Port mappings are only used in the default bridge mode, where the container is actually given an IP in an internal hidden network. So when you change the port forwards on your router, map port 80 straight to 192.168.1.5:80 (same for 443 to 192.168.1.5:443) - the 85 and 4443 mappings are now ignored. I have this on one of my simpler setups, where we have VLAN support on the router but didn't bother to implement it.
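In other words, with a dedicated IP the container's services listen on their native ports at that IP, and any -p flags are silently ignored. A minimal sketch, assuming the network and image names used earlier in the thread:

```shell
# These -p mappings have no effect once the container is on a macvlan
# network with its own IP: nginx answers on 80/443 at 192.168.1.5
# directly, not on 85/4443.
docker run -d --name nginx \
  --network br0 --ip 192.168.1.5 \
  -p 85:80 -p 4443:443 \
  nginx:latest
```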
vw-kombi Posted April 19, 2019 (Author)
Ahhhh. @ken-ji - thanks for that, I did not know that. So the ports I enter in the nginx container setup - 85 and 4443 - are negated once the nginx container is changed to its own IP address. I just needed to change the port forwards in my router for 80 and 443 to the new nginx container's IP address, 192.168.1.5. If I am understanding you correctly, that would link the internet to the nginx container.
How then do the other containers work with all their ports, once they also have their own IP addresses? As they are talked to locally from nginx, that will all be fine, correct? For example:
emby - 8096 default in the app
sonarr - 8989 default in the app
radarr - 7878 default in the app
etc.
ken-ji Posted April 19, 2019
Once a container has its own IP address, everything on the LAN, including the other containers, can reach it using its default ports. Only the Unraid host will not be able to reach any of the containers with dedicated IP addresses.
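So, assuming the IP plan from earlier in the thread, connectivity checks from any LAN machine (but not from the Unraid host itself) would look roughly like:

```shell
# Each app answers on its native port at its dedicated IP
curl -sI http://192.168.1.200:8096 | head -n 1   # emby
curl -sI http://192.168.1.205:8989 | head -n 1   # sonarr

# Run from the Unraid host, the same commands time out: macvlan blocks
# host <-> container traffic over the same NIC.
```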
vw-kombi Posted April 19, 2019 (Author)
Thanks heaps. Third time is hopefully the charm tomorrow.
vw-kombi Posted April 19, 2019 (Author)
Just wanted to say to @ken-ji - thanks heaps - all working perfectly, and now I can track and QoS everything via IP address in my router.