alturismo

multiple network interfaces for a docker


hi, a quick question: is something like this "easily" possible? ;)

 

in my case the tvheadend docker would need more than 1 IP address to get more SAT>IP streams from my Fritz!Box 6490 (4 built-in cable tuners, only 1 stream per IP)

 

the tvheadend docker is running in host mode (not on a separate IP, because setting up IPs for dockers and then routing them so they can still talk to the host and to each other is over my head ...)

 

the fritz is my modem, running on 192.168.178.1 -> only connected to ->

behind it is a Netgear R9000 router (also DHCP, WiFi, ...) running on 192.168.1.1

from here my network starts,

e.g. unRAID on 192.168.1.2

 

i already set up tvheadend so it sees and can use the fritz DVB-C tuner (1 stream only), so that looks good. now, as the fritz 6490 only allows 1 stream per IP, i would need to

assign a different IP to each tuner, so my TVH docker would need more IPs ;)

 

so, is that "easily" possible? on plain linux i assume i would assign them in /etc/network/interfaces, but here ... ;) and ideally without losing host mode ...

 

i'm using nginx, plex, tvhproxy etc. ... which are all used together, so starting with separate IPs would probably mean routing them all together, and when i tried that after it was introduced in 6.4 i got completely lost and reverted back ;)

 

thanks in advance for any tip


You might be able to assign multiple interfaces using pipework.

2 hours ago, saarg said:

You might be able to assign multiple interfaces using pipework.

 

got pipework working, and host communication works too; i just have to figure out the multiple-NIC command ;)

when i add more lines it only ever applies the last one; i'll take a look at the docs to see if i can figure it out.

 

thanks for the hint.


pipework isn't really needed. It would interfere with the built-in custom (macvlan) network creation of unRAID.

 

Docker containers can participate in multiple networks using the command

docker network connect [OPTIONS] NETWORK CONTAINER

The above command can be repeated for any network the container needs to talk to.
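For example (the container and network names below are made up, and need a running Docker daemon):

```shell
# Connect an already-running container to an additional network.
# "br1" is an example custom network, "tvheadend" an example container.
docker network connect br1 tvheadend

# Optionally pin a specific IP on that network:
docker network connect --ip 10.0.101.104 br1 tvheadend

# List the networks the container is now attached to:
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' tvheadend
```

Each `connect` adds another interface (eth1, eth2, ...) inside the container.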


well, maybe a hint where to look: i can assign an IP address and that works out of the box, but where would i now add this command

 

docker network connect [OPTIONS] NETWORK CONTAINER

so that my tvheadend docker can communicate with the host again

 

my network config looks like this:

 

[screenshot: unRAID network settings]

 

the next step would then be how to add more interfaces inside the docker, because when i assign an IP i don't see /etc/network/interfaces

and i don't see an option to add more IPs in the unRAID UI.

 

thanks again for any hint.

# docker network connect bridge Tonido

# docker inspect Tonido
...
...
            "Networks": {
                "br1": {
                    "IPAMConfig": {
                        "IPv4Address": "10.0.101.104"
                    },
                    "Links": null,
                    "Aliases": [
                        "0a602a7896a8"
                    ],
                    "NetworkID": "1c23fd91938349a1d6fd145ba064517539fe2a82767beb0263a19c084c2fefc1",
                    "EndpointID": "8e0cd148eda5f37d479313e036ec2ef2f967680d330ca9dc186c8fe62462037a",
                    "Gateway": "10.0.101.1",
                    "IPAddress": "10.0.101.104",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "2a02:a448:32d5:101::1",
                    "GlobalIPv6Address": "2a02:a448:32d5:101::5",
                    "GlobalIPv6PrefixLen": 64,
                    "MacAddress": "02:42:0a:00:65:68",
                    "DriverOpts": null
                },
                "bridge": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [],
                    "NetworkID": "26797d1c47da870e2cad448cb1cbddc47d681bf96bd3652d2353f3a84d2ecda1",
                    "EndpointID": "3ae12786b363aa32903ac096dc834bec338e6620fb33ac125afc72da2b492560",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }

 

Inside the container it looks like:

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.101.1      0.0.0.0         UG    0      0        0 eth0
10.0.101.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth1

 


when i set an IP in the tvheadend docker settings

 

then i do

 

root@AlsServer:~# docker network connect bridge tvheadend

 

i still can't ping ... so no access from the host

 

root@AlsServer:~# docker exec -ti tvheadend bash
root@f79300b7868c:/$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
root@f79300b7868c:/$ exit
exit
root@AlsServer:~# ping 192.168.1.210
PING 192.168.1.210 (192.168.1.210) 56(84) bytes of data.
From 192.168.1.2 icmp_seq=1 Destination Host Unreachable
From 192.168.1.2 icmp_seq=2 Destination Host Unreachable
From 192.168.1.2 icmp_seq=3 Destination Host Unreachable

...

 

i'm pretty sure the command has to be specified differently?

 

that's my tvheadend IP setup

[screenshot: tvheadend docker IP settings]

 

thanks in advance


A bridge network does not allow pinging from the outside into the container.

 

You can, however, ping from inside the container:

# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:00:65:68
          inet addr:10.0.101.104  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:6568/64 Scope:Link
          inet6 addr: 2a02:a448:32d5:101:42:aff:fe00:6568/64 Scope:Global
          inet6 addr: 2a02:a448:32d5:101::5/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:9198  Metric:1
          RX packets:286411 errors:0 dropped:0 overruns:0 frame:0
          TX packets:77330 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:20487070 (20.4 MB)  TX bytes:168488081 (168.4 MB)

eth1      Link encap:Ethernet  HWaddr 02:42:ac:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:9198  Metric:1
          RX packets:3247 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:177336 (177.3 KB)  TX bytes:0 (0.0 B)

# ping 10.0.101.1
PING 10.0.101.1 (10.0.101.1) 56(84) bytes of data.
64 bytes from 10.0.101.1: icmp_seq=1 ttl=64 time=0.423 ms
64 bytes from 10.0.101.1: icmp_seq=2 ttl=64 time=0.346 ms
64 bytes from 10.0.101.1: icmp_seq=3 ttl=64 time=0.333 ms
64 bytes from 10.0.101.1: icmp_seq=4 ttl=64 time=0.319 ms
^C
# ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.108 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.056 ms
64 bytes from 172.17.0.1: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 172.17.0.1: icmp_seq=4 ttl=64 time=0.067 ms
^C

 


that's sadly what i thought; then it doesn't fit my use case. for example, i also run nginx and tvheadend behind a reverse proxy ...

so i would need the docker to be reachable from the host ...

i knew i had issues before, because i once wanted to assign every docker a unique IP and ended up in the same dead end ;)

 

thanks anyway


Does tvheadend even know how to use multiple IP addresses?

Most apps just let the OS routing tables figure out which outgoing IP/interface to use.

And if you have multiple IP addresses on the same subnet toward the same target IP, either the first or the last defined IP gets used.
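As a side note, you can ask the kernel which source IP it would pick for a given destination (on the host or inside the container, assuming iproute2 is available):

```shell
# Show the route the kernel would use for an example destination.
# The "src" field in the output is the source IP it selects.
ip route get 192.168.178.1
```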

You probably need four tvheadend dockers, all on the default bridge network with a secondary connection to your own network;

that way they talk to whatever is on your LAN using their own IP addresses, but can still be reached by the host via the internal IP.
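That idea could be sketched roughly like this; the image name, container names, network name, and IPs are all examples (and it needs a running Docker daemon plus an existing custom network named "br0"):

```shell
# Run four tvheadend instances on the default bridge, then give each
# its own LAN-facing IP on the custom (macvlan) network "br0".
for i in 1 2 3 4; do
  docker run -d --name tvheadend$i linuxserver/tvheadend
  docker network connect --ip 192.168.1.21$i br0 tvheadend$i
done
```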

 

It's a mess, unfortunately.


If you're open to something besides Docker, use KVM or VirtualBox, which will make the networking easier. VirtualBox definitely will, but it's not standard with unRAID and the plugin has not been updated for the latest build yet. I haven't messed with KVM much, but it should have the same/similar capability.

Using a Fritz!Box 6490 as Tuner for Tvheadend

 


that's exactly the goal: using the sub-interfaces, because the fritz only allows 1 stream per IP ... Fritz 6490 as tuner for TVH ...

never mind, i have enough other tuners; this was an attempt to free up the tuners from my vu+ box ... and a better setup than the iptv streams from the vu+.

 

i may give it a try with a libreelec VM, tvheadend is integrated there too, but the iptv setup was a real pain and i'm pretty sure i can't copy over the config ... ;)

some day i will give it another shot.

 

thanks a lot for trying to help.

14 hours ago, alturismo said:

that's exactly the goal: using the sub-interfaces, because the fritz only allows 1 stream per IP ... Fritz 6490 as tuner for TVH ...

never mind, i have enough other tuners; this was an attempt to free up the tuners from my vu+ box ... and a better setup than the iptv streams from the vu+.

 

i may give it a try with a libreelec VM, tvheadend is integrated there too, but the iptv setup was a real pain and i'm pretty sure i can't copy over the config ... ;)

some day i will give it another shot.

 

thanks a lot for trying to help.

 

You can copy over the tvheadend config and it should work. The only thing to worry about is getting it into the correct folder: the content of /config in the container goes into the .hts folder.
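Something like this should do for the copy; both paths below are just examples, adjust them to your appdata layout:

```shell
# Copy an existing tvheadend configuration (the .hts folder) into the
# host folder that is mapped to /config in the container.
SRC="${SRC:-/path/to/old/.hts}"            # example source path
DST="${DST:-/mnt/user/appdata/tvheadend}"  # example host path mapped to /config
cp -a "$SRC/." "$DST/"
```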

I'll try to get multiple interfaces into the tvheadend container when I get home.

Posted (edited)

huge thanks in advance for trying ;)

 

i also asked Fritz support whether it's possible to turn this limit off, which i don't think will happen ... but i asked ;)

Edited by alturismo

22 hours ago, alturismo said:

huge thanks in advance for trying ;)

 

i also asked Fritz support whether it's possible to turn this limit off, which i don't think will happen ... but i asked ;)

 

I was going to suggest that!

I can't see any reason why they have this limit. 


ok, it seems i have my 4 IPs now with host mode active ;) funny, never mind.

 

tomorrow i have to reconfigure my routers because i noticed i'm behind double NAT ... i use the FRITZ only as a modem,

but if i want to use the TV tuners now ... my 2nd router is always the receiving client, so my 4 IPs are still useless ;)

and i have no idea how to pass the IPs through to the outer router ...

 

let's see if i find time tomorrow; i'll report back whether it worked out.


ok, i changed my 2nd router to AP mode and TVHeadend now works with its IP addresses ;) so, thanks for the tip with pipework.

what i did: i started TVHeadend with the pipework trigger AND in host mode; for each additional IP i added

-e 'pipework_cmd=br0 -i eth4 @CONTAINER_NAME@ 192.168.1.211/24@192.168.1.1'

-e 'pipework_cmd=br0 -i eth5 @CONTAINER_NAME@ 192.168.1.212/24@192.168.1.1'

-e 'pipework_cmd=br0 -i eth6 @CONTAINER_NAME@ 192.168.1.213/24@192.168.1.1'

 

so now TVH is listening on 192.168.1.2 (host) and on 192.168.1.211 ... .212 ... .213 (extra interfaces); no issue with the other dockers, because host mode is active and the extra IPs are just added on top ...
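For the record, this is roughly what pipework does under the hood for each of those lines (illustrative only; pipework handles all of this for you, the names mirror the example above, and it needs root plus a running Docker daemon):

```shell
# Create a veth pair, attach the host end to br0, and move the other
# end into the container's network namespace as eth4 with its own IP.
PID=$(docker inspect -f '{{.State.Pid}}' tvheadend)
ip link add veth4pl$PID type veth peer name veth4pg$PID
ip link set veth4pl$PID master br0 up          # host end joins bridge br0
ip link set veth4pg$PID netns $PID name eth4   # container end becomes eth4
nsenter -t $PID -n ip addr add 192.168.1.211/24 dev eth4
nsenter -t $PID -n ip link set eth4 up
```

That matches the `eth4@veth4pl...` pairs visible in the `ip addr` output below.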

 

now TVHeadend has its IPs and even kept them after a docker update; also, when i start TVHeadend without any pipework command now, the IPs stay active ... ;)

 

sadly, changing the router setup messed something up in general; i couldn't resolve my IPv4 address externally anymore, so for now i had to revert to my double-NAT setup, another story to fix.

 

my ip addr in TVH now looks like this:

 

root@AlsServer:~# docker exec -ti tvheadend bash
root@AlsServer:/$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
10: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP100> mtu 1500 qdisc mq master br0 state UP qlen 1000
    link/ether b4:99:ba:5d:b4:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b699:baff:fe5d:b470/64 scope link
       valid_lft forever preferred_lft forever
11: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP100> mtu 1500 qdisc mq master br1 state DOWN qlen 1000
    link/ether b4:99:ba:5d:b4:74 brd ff:ff:ff:ff:ff:ff
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 12:13:41:c1:45:73 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::8c19:40ff:fe1a:969f/64 scope link
       valid_lft forever preferred_lft forever
13: br1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether b4:99:ba:5d:b4:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::6441:91ff:fe56:2b3f/64 scope link
       valid_lft forever preferred_lft forever
14: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:14:7c:b9:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:14ff:fe7c:b999/64 scope link
       valid_lft forever preferred_lft forever
16: veth0b2605c@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 62:fb:64:a4:d0:27 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::60fb:64ff:fea4:d027/64 scope link
       valid_lft forever preferred_lft forever
18: veth6f3e79b@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 06:4a:ee:a9:03:f8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::44a:eeff:fea9:3f8/64 scope link
       valid_lft forever preferred_lft forever
20: vethd612b77@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether a6:9a:ab:0d:ad:87 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a49a:abff:fe0d:ad87/64 scope link
       valid_lft forever preferred_lft forever
22: veth6626e7d@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether e6:51:21:0e:f5:12 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e451:21ff:fe0e:f512/64 scope link
       valid_lft forever preferred_lft forever
24: vethf9270a8@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 8a:58:75:de:4e:a4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8858:75ff:fede:4ea4/64 scope link
       valid_lft forever preferred_lft forever
26: vethb3fbfb0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether fe:c4:c3:40:5a:6d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcc4:c3ff:fe40:5a6d/64 scope link
       valid_lft forever preferred_lft forever
28: veth72f610d@if27: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 7a:f7:cb:4c:c2:f8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::78f7:cbff:fe4c:c2f8/64 scope link
       valid_lft forever preferred_lft forever
30: veth68a9cf6@if29: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 0e:67:d0:1e:20:de brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c67:d0ff:fe1e:20de/64 scope link
       valid_lft forever preferred_lft forever
32: veth58a9f7b@if31: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 02:70:79:4c:53:cc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::70:79ff:fe4c:53cc/64 scope link
       valid_lft forever preferred_lft forever
34: veth1d11dd9@if33: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether a6:22:eb:08:c6:f1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a422:ebff:fe08:c6f1/64 scope link
       valid_lft forever preferred_lft forever
36: veth19c0fe2@if35: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether ae:0b:3c:f5:b1:f1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ac0b:3cff:fef5:b1f1/64 scope link
       valid_lft forever preferred_lft forever
37: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:9b:87:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
38: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:9b:87:26 brd ff:ff:ff:ff:ff:ff
39: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 1000
    link/ether fe:54:00:74:f7:ef brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe74:f7ef/64 scope link
       valid_lft forever preferred_lft forever
40: eth6@veth6pl32569: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 82:c8:48:2c:2f:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.213/24 scope global eth6
       valid_lft forever preferred_lft forever
    inet6 fe80::80c8:48ff:fe2c:2fe2/64 scope link
       valid_lft forever preferred_lft forever
41: veth6pl32569@eth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP qlen 1000
    link/ether 96:b5:e5:c6:97:d1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::94b5:e5ff:fec6:97d1/64 scope link
       valid_lft forever preferred_lft forever
42: veth6pg1146@veth6pl1146: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether e6:34:85:7c:8f:6e brd ff:ff:ff:ff:ff:ff
43: veth6pl1146@veth6pg1146: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue master br0 state LOWERLAYERDOWN qlen 1000
    link/ether 12:13:41:c1:45:73 brd ff:ff:ff:ff:ff:ff
44: veth6pg2080@veth6pl2080: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether ee:61:8b:31:95:fb brd ff:ff:ff:ff:ff:ff
45: veth6pl2080@veth6pg2080: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue master br0 state LOWERLAYERDOWN qlen 1000
    link/ether e2:19:d6:3e:f0:f0 brd ff:ff:ff:ff:ff:ff
46: eth5@veth5pl3302: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether be:aa:11:87:07:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.212/24 scope global eth5
       valid_lft forever preferred_lft forever
    inet6 fe80::bcaa:11ff:fe87:74f/64 scope link
       valid_lft forever preferred_lft forever
47: veth5pl3302@eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP qlen 1000
    link/ether c2:5d:27:06:92:80 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c05d:27ff:fe06:9280/64 scope link
       valid_lft forever preferred_lft forever
48: veth6pg4147@veth6pl4147: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 02:a7:cf:e1:a4:50 brd ff:ff:ff:ff:ff:ff
49: veth6pl4147@veth6pg4147: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue master br0 state LOWERLAYERDOWN qlen 1000
    link/ether 8e:de:48:53:65:43 brd ff:ff:ff:ff:ff:ff
50: eth4@veth4pl6261: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether fa:79:02:da:a0:0f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.211/24 scope global eth4
       valid_lft forever preferred_lft forever
    inet6 fe80::f879:2ff:feda:a00f/64 scope link
       valid_lft forever preferred_lft forever
51: veth4pl6261@eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP qlen 1000
    link/ether 3a:8f:c9:f3:11:02 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::388f:c9ff:fef3:1102/64 scope link
       valid_lft forever preferred_lft forever

 

thanks again for the tip with pipework; i hope it didn't break anything else ;) now i have to check my router setup to get this sorted with my IPv4 addresses and so on.


ok, reconfigured the routers and all is good; the result with 4 tuners active ;)

 

[screenshots: TVHeadend showing 4 tuners; Fritz!Box showing 4 active streams]

