
ken-ji

Members
  • Content Count

    881
  • Joined

  • Last visited

  • Days Won

    4

ken-ji last won the day on June 27 2018

ken-ji had the most liked content!

Community Reputation

106 Very Good

About ken-ji

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Philippines


  1. Once a container has its own IP address, everything on the LAN, including other containers, can reach it using the default ports. Only the Unraid host will not be able to reach any of the containers with dedicated IP addresses.
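    A minimal sketch of that behavior, assuming a custom Docker network named br0 on a 192.168.1.0/24 LAN using the default macvlan driver (the container name, image, and addresses are placeholders):
      # give the container its own LAN IP on the custom network
      docker run -d --name web --network br0 --ip 192.168.1.7 nginx
      # from another machine or container on the LAN, the default port works directly
      curl http://192.168.1.7/
      # from the Unraid host itself, macvlan blocks the traffic, so this gets no replies
      ping 192.168.1.7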
  2. @vw-kombi You might not have noticed, but when a container is assigned its own IP address, port mappings are ignored. Port mappings are only used in the default bridge mode, where the container is given an IP in an internal hidden network. So when you change the port forwards on your router, map port 80 to 192.168.1.7:80 (same for 443); the port 85 and 4443 mappings are now ignored. I have this setup on one of my simpler installs, where the router has VLAN support but we didn't bother to implement it.
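    To illustrate the difference (a rough sketch; the ports, image, and network name are placeholders):
      # default bridge mode: the mapping matters, host port 85 forwards to container port 80
      docker run -d -p 85:80 nginx
      # dedicated IP on a custom network: -p has no effect, the container answers on its own port 80
      docker run -d --network br0 --ip 192.168.1.7 -p 85:80 nginx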
  3. This is the best solution available for you. All the containers that need to interact with each other need to be on the same network. This also grants a bit of security, as none of these containers can interact with unRAID except via disk path mappings. I should also clarify that a VLAN-aware switch is not enough; you need a VLAN-capable router, as only the router can pass packets between VLANs. A switch will not be able to do it.
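    A rough example of the first point, with made-up container names, image, and subnet:
      # two containers on the same custom network can reach each other directly
      docker run -d --name app --network br0.10 --ip 192.168.10.50 nginx
      docker run -d --name api --network br0.10 --ip 192.168.10.51 nginx
      # traffic between this VLAN and any other must pass through the VLAN-capable router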
  4. There's the interface rules section in Settings | Network to renumber the network interfaces. But you didn't listen to my initial advice: do not set two interfaces with the same subnet. Unpredictable things happen - like the OS getting confused as to which interface it should really use to talk to the rest of the world. That is normally OK, but it causes issues with Docker and macvlan subinterfaces, which, simply put, block Docker containers with their own IPs from talking to the host on the same subnet. See here for more elaboration:
  5. You need to post your diagnostics for complete info, but I can guess the problem and solution. Do not assign an IP to the br2 network interface; it is causing Unraid to be accessible on both interfaces, while the Docker containers are only on one, and probably the wrong one now. Recreate the custom Docker network on br2 (after removing the Docker network on eth0).
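    Roughly along these lines from the command line (Unraid's Docker settings page normally handles this; the network names, subnet, and gateway here are assumptions, so substitute your own):
      # remove the custom network that was created on the wrong interface
      docker network rm eth0
      # recreate it as a macvlan network whose parent is br2
      docker network create -d macvlan --subnet 192.168.2.0/24 --gateway 192.168.2.1 -o parent=br2 br2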
  6. ken-ji

    LAN Routing Issue

    The script is useful for custom routes, but you don't have any in this case. Also, the correct key to stop most Linux commands is Ctrl+C, not Ctrl+Z. I had a few until I migrated to a real router and left the custom routes there.
  7. ken-ji

    LAN Routing Issue

    And that is the totally wrong script to be using, since you shouldn't be specifying directly attached local subnets, especially when the interface does not have an IP address. When you assign an IP to an interface, the networking stack will automatically define a route to the subnet on that interface. You never define this route manually.
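    For example (the interface name and address are only illustrative):
      # assigning an address to an interface...
      ip addr add 192.168.80.5/24 dev br2
      # ...makes the kernel add the connected route by itself; no manual "ip route add" is needed
      ip route show dev br2
      # 192.168.80.0/24 proto kernel scope link src 192.168.80.5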
  8. ken-ji

    LAN Routing Issue

    Just curious, but have you restarted Unraid? Because your routing table has entries for the subnets, but no IP addresses are assigned. A quick test is to run ip route del 192.168.80.0/24 and then ping 192.168.80.15.
  9. ken-ji

    LAN Routing Issue

    Can you reach the gateway? Try ping 192.168.1.1. Also post the output of ip addr, because your routing table mentions the various subnets for some reason.
  10. ken-ji

    LAN Routing Issue

    You have two interfaces for the subnet 192.168.100.0/24 - br0 and br0.100. You can see it in the routing table: there are two entries for 192.168.100.0/24, and there should only be one unless you know what you are doing. The default metric is 0, which makes br0.100 the default interface to use to talk to the gateway, but for the other subnets the routing table indicates to use br0 to talk to the gateway. I think you have asymmetric routing going on here - packets go out one interface and the response comes back on another. You must delete the IP from br0.100 so it is not considered a possible route for packets on the 192.168.100.0/24 subnet. The fact that all your interfaces have routes means you have IP addresses on all the VLANs, which, as I mentioned previously, tends to be a confusing and messy config, particularly if you are trying to perform VLAN segregation. Additionally, if the Docker networks were autocreated (they will be if the VLAN interfaces have IP addresses), they might need to be deleted when you remove the IP from the interface (it's been a while since I configured this).
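    A sketch of the cleanup from the command line (the Unraid way is to clear the IP for br0.100 in Settings | Network; the address shown here is a placeholder):
      # confirm there are currently two routes for the subnet, one via br0 and one via br0.100
      ip route show | grep 192.168.100.0
      # drop the address from the VLAN subinterface so only br0 carries that subnet
      ip addr del 192.168.100.101/24 dev br0.100   # placeholder, use the address actually set on br0.100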
  11. ken-ji

    LAN Routing Issue

    There's something wrong with your config. Can you show the output of ip route? Because from your screenshots it seems the default route is to 192.168.1.1, but br0 has an IP of 192.168.100.100, which is not in the same subnet as your gateway.
  12. ken-ji

    LAN Routing Issue

    Sorry, since you pulled me into the discussion: what exactly is your issue? Unraid cannot ping the other subnets? Try grabbing the output of traceroute -n <unpingable ip>. Also, I'm amazed you were allowed to define br0.100, as I'm fairly sure Docker won't let you create networks to the same gateway.
  13. ken-ji

    LAN Routing Issue

    @surfshack66 Simple. Configure Docker networks on the VLANs you have defined. Do take note that the VLAN subinterfaces preferably should not have an IP address, as that will cause confusion with asymmetric routing on Unraid. It will look like this (sorry, my only server has 2 network interfaces, but it should be identical): place the containers on the VLANs, while keeping Unraid on the unbridged main network eth0/br0/bond0. So when container A (e.g. 192.168.95.129) in VLAN 3 talks to Unraid (192.168.2.5), it will always go through the router (192.168.95.1) instead of trying to talk to Unraid directly (which the lack of an IP prevents).
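    A rough command-line sketch of one such Docker network, reusing the example addresses from this post (the network and container names are placeholders):
      # macvlan network on the VLAN 3 subinterface; br0.3 itself keeps no IP address
      docker network create -d macvlan --subnet 192.168.95.0/24 --gateway 192.168.95.1 -o parent=br0.3 vlan3
      # container A gets its own IP in VLAN 3 and reaches Unraid (192.168.2.5) via the router
      docker run -d --name containerA --network vlan3 --ip 192.168.95.129 nginx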
  14. The net-define command already makes it persistent. Run virsh net-autostart lab-network to make the corresponding bridge auto-start.
  15. You just need to create (and persist) a bridge device for your VMs to use. Create an XML file (e.g. /tmp/lab-network.xml):
      <network ipv6='yes'>
        <name>lab-network</name>
        <bridge name="virbr1" stp="on" delay="0"/>
      </network>
    Then enable the network with:
      virsh net-define /tmp/lab-network.xml
      virsh net-start lab-network
    This will create a bridge virbr1, which you can assign to your VMs. A host interface virbr1-nic will also appear (but it will not be assigned an IP or anything else automatically). Refer to https://libvirt.org/formatnetwork.html for more details on the XML file format.
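    To actually put a VM on the new bridge, something like the following should work (the domain name myvm is a placeholder):
      # attach a virtio NIC backed by the new bridge to an existing VM definition
      virsh attach-interface --domain myvm --type bridge --source virbr1 --model virtio --config
      # or, equivalently, reference the libvirt network by name in the VM's XML:
      # <interface type='network'><source network='lab-network'/><model type='virtio'/></interface>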