Kewjoe

Everything posted by Kewjoe

  1. Do you think regenerating the Docker image will help? I may also install 6.4 RC7 again and try to undo whatever I did in the short time span I had it installed. But I'll try to get myself back to a stock setup. Thanks for the help so far. I seem to have gotten myself into a bit of a mess.
  2. When I initially started, VLAN was disabled and I still had the same problems. I only enabled it afterwards as a troubleshooting step. I will disable it and try again, but I don't think it will help. To answer your question about br0.1: it's the same steps you outline in your OP, but instead of br0, which I couldn't use because it says it is already being used by another interface, I tried br0.1. That's obviously not right, but I was trying to see how to get this working. From an earlier post, this is what happens when I follow your instructions for the single-NIC solution:
     Error response from daemon: network dm-ba57b5a60b33 is already using parent interface br0
     The only other thing I can think of is that I tried 6.4 RC7 for a little while; I wonder if something happened with that before I backed out to 6.3.5 again.
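     As a general note (not specific to this guide), that error usually means Docker still has a user-defined network left over from an earlier attempt that claims br0 as its parent. A minimal sketch of how such a leftover network could be found and removed before re-running the create command, assuming no containers are still attached to it (the dm-ba57b5a60b33 name is simply taken from the error above):

       # list user-defined networks and inspect the one using br0 as its parent
       docker network ls
       docker network inspect dm-ba57b5a60b33

       # remove it so br0 becomes available as a parent interface again
       docker network rm dm-ba57b5a60b33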
  3. This is in your original post: "With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network." I took the latter part of that paragraph to mean it would still work as long as I wasn't trying to have the container talk to unRAID. Shouldn't the container still be able to talk to the outside world, or did I misunderstand what you're saying? In what cases does your single-NIC example work? Is it not feasible if your network doesn't support VLANs? I do have a second NIC and can try your two-NIC recommendation, but I was hoping to dedicate the second NIC to a VM running pfSense (which I haven't started yet).
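     For context, the host-to-container restriction is inherent to the macvlan driver rather than to this guide, and container-to-internet traffic is unaffected either way; the containers go out through the parent interface like any other LAN client. One workaround that is sometimes used to restore host-to-container traffic (this is not part of the OP's instructions, and the interface name and addresses below are only illustrative, borrowed from the ranges that appear elsewhere in this thread) is to give the host its own macvlan "shim" interface on the same parent:

       # create a host-side macvlan interface on the same parent the docker network uses
       ip link add shim0 link br0 type macvlan mode bridge
       ip addr add 192.168.79.250/32 dev shim0
       ip link set shim0 up

       # route the container IP range via the shim so the host can reach the containers
       ip route add 192.168.79.200/30 dev shim0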
  4. "ip -d link show" 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0 ipip remote any local any ttl inherit nopmtudisc 3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1 link/gre 0.0.0.0 brd 0.0.0.0 promiscuity 0 gre remote any local any ttl inherit nopmtudisc 4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff promiscuity 0 gretap remote any local any ttl inherit nopmtudisc 5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0 vti remote any local any ikey 0.0.0.0 okey 0.0.0.0 6: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP mode DEFAULT group default qlen 1000 link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 2 bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on 37: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 0 bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 38: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether 02:42:7b:d0:d6:90 brd ff:ff:ff:ff:ff:ff promiscuity 0 bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 68: vethcffe18d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 6a:b9:83:53:9e:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 70: vethf36d2cb@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether c2:c0:45:06:69:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 72: veth85c2f81@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether b2:fb:fe:40:09:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 2 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 74: veth8841ad0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 2e:f3:d0:ca:18:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 76: veth43be249@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 3a:e7:11:e4:3a:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off 
learning on flood on 78: vethb23822a@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 5e:15:4e:4f:e8:27 brd ff:ff:ff:ff:ff:ff link-netnsid 5 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 84: veth8ea80ff@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether a6:b5:c8:54:17:43 brd ff:ff:ff:ff:ff:ff link-netnsid 8 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 86: veth00d62c7@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 36:f6:b3:2f:15:7a brd ff:ff:ff:ff:ff:ff link-netnsid 9 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 88: veth6cf8a31@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 86:34:77:55:ad:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 11 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 90: veth1cfac99@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether da:8f:52:a8:1f:25 brd ff:ff:ff:ff:ff:ff link-netnsid 12 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 92: veth8e022a0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 02:47:3d:0a:27:07 brd ff:ff:ff:ff:ff:ff link-netnsid 13 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 94: veth16c6eaa@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 4a:a5:da:63:12:00 brd ff:ff:ff:ff:ff:ff link-netnsid 14 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 96: vethff638b9@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether da:6b:d2:56:31:75 brd ff:ff:ff:ff:ff:ff link-netnsid 15 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 100: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000 link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff promiscuity 0 bridge forward_delay 200 hello_time 200 max_age 2000 ageing_time 30000 stp_state 1 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 101: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000 link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff promiscuity 1 tun bridge_slave state disabled priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on 102: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether fe:54:00:44:b4:34 brd ff:ff:ff:ff:ff:ff promiscuity 1 tun 
bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on 104: veth8e6e43c@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether ae:b8:63:19:65:4a brd ff:ff:ff:ff:ff:ff link-netnsid 10 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 106: vethe9ce18e@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 6a:b6:eb:40:23:1b brd ff:ff:ff:ff:ff:ff link-netnsid 16 promiscuity 1 veth bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 110: br0.1@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 1 <REORDER_HDR> and the docker inspect for pihole is attached. BTW, the Vlan i enabled after i posted while troubleshooting. I should disable it but forgot to. But i was having the issues before i enabled it. pihole inspect.txt
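     Since the question here is which network pihole actually landed on, the network-relevant part of the inspect output can also be pulled out directly rather than attaching the whole file; a small sketch using Docker's built-in Go-template filter:

       # show only the networks, addresses, and gateway the container was given
       docker inspect --format '{{json .NetworkSettings.Networks}}' pihole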
  5. In case it's helpful, here are my diagnostics too. tower-diagnostics-20170816-2028.zip
  6. Thanks @ken-ji. network.cfg attached.
     Output of "docker network inspect homenet":
     [
         {
             "Name": "towernet",
             "Id": "e700c906426a27f3dd1d61279ab286ae0c403eff7d72cb2ccfc9c50cbc819c54",
             "Scope": "local",
             "Driver": "macvlan",
             "EnableIPv6": false,
             "IPAM": {
                 "Driver": "default",
                 "Options": {},
                 "Config": [
                     {
                         "Subnet": "192.168.79.0/24",
                         "IPRange": "192.168.79.200/30",
                         "Gateway": "192.168.79.83"
                     }
                 ]
             },
             "Internal": false,
             "Containers": {},
             "Options": {
                 "parent": "br0.1"
             },
             "Labels": {}
         }
     ]
     Output of "ip addr":
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/32 scope host lo valid_lft forever preferred_lft forever inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever
     2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0
     3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1 link/gre 0.0.0.0 brd 0.0.0.0
     4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
     5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0
     6: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000 link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
     37: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff inet 192.168.79.15/24 scope global br0 valid_lft forever preferred_lft forever
     38: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:7b:d0:d6:90 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 scope global docker0 valid_lft forever preferred_lft forever
     68: vethcffe18d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 6a:b9:83:53:9e:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0
     70: vethf36d2cb@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether c2:c0:45:06:69:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
     72: veth85c2f81@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether b2:fb:fe:40:09:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 2
     74: veth8841ad0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 2e:f3:d0:ca:18:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3
     76: veth43be249@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 3a:e7:11:e4:3a:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4
     78: vethb23822a@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 5e:15:4e:4f:e8:27 brd ff:ff:ff:ff:ff:ff link-netnsid 5
     84: veth8ea80ff@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether a6:b5:c8:54:17:43 brd ff:ff:ff:ff:ff:ff link-netnsid 8
     86: veth00d62c7@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 36:f6:b3:2f:15:7a brd ff:ff:ff:ff:ff:ff link-netnsid 9
     88: veth6cf8a31@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 86:34:77:55:ad:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 11
     90: veth1cfac99@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether da:8f:52:a8:1f:25 brd ff:ff:ff:ff:ff:ff link-netnsid 12
     92: veth8e022a0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 02:47:3d:0a:27:07 brd ff:ff:ff:ff:ff:ff link-netnsid 13
     94: veth16c6eaa@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 4a:a5:da:63:12:00 brd ff:ff:ff:ff:ff:ff link-netnsid 14
     96: vethff638b9@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether da:6b:d2:56:31:75 brd ff:ff:ff:ff:ff:ff link-netnsid 15
     100: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft forever
     101: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000 link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff
     102: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN group default qlen 1000 link/ether fe:54:00:44:b4:34 brd ff:ff:ff:ff:ff:ff
     104: veth8e6e43c@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether ae:b8:63:19:65:4a brd ff:ff:ff:ff:ff:ff link-netnsid 10
     106: vethe9ce18e@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 6a:b6:eb:40:23:1b brd ff:ff:ff:ff:ff:ff link-netnsid 16
     110: br0.1@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
     "ip link":
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0
     3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1 link/gre 0.0.0.0 brd 0.0.0.0
     4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
     5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0
     6: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP mode DEFAULT group default qlen 1000 link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
     37: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
     38: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether 02:42:7b:d0:d6:90 brd ff:ff:ff:ff:ff:ff
     68: vethcffe18d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 6a:b9:83:53:9e:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0
     70: vethf36d2cb@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether c2:c0:45:06:69:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
     72: veth85c2f81@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether b2:fb:fe:40:09:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 2
     74: veth8841ad0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 2e:f3:d0:ca:18:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3
     76: veth43be249@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 3a:e7:11:e4:3a:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4
     78: vethb23822a@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 5e:15:4e:4f:e8:27 brd ff:ff:ff:ff:ff:ff link-netnsid 5
     84: veth8ea80ff@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether a6:b5:c8:54:17:43 brd ff:ff:ff:ff:ff:ff link-netnsid 8
     86: veth00d62c7@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 36:f6:b3:2f:15:7a brd ff:ff:ff:ff:ff:ff link-netnsid 9
     88: veth6cf8a31@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 86:34:77:55:ad:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 11
     90: veth1cfac99@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether da:8f:52:a8:1f:25 brd ff:ff:ff:ff:ff:ff link-netnsid 12
     92: veth8e022a0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 02:47:3d:0a:27:07 brd ff:ff:ff:ff:ff:ff link-netnsid 13
     94: veth16c6eaa@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 4a:a5:da:63:12:00 brd ff:ff:ff:ff:ff:ff link-netnsid 14
     96: vethff638b9@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether da:6b:d2:56:31:75 brd ff:ff:ff:ff:ff:ff link-netnsid 15
     100: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000 link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff
     101: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000 link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff
     102: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether fe:54:00:44:b4:34 brd ff:ff:ff:ff:ff:ff
     104: veth8e6e43c@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether ae:b8:63:19:65:4a brd ff:ff:ff:ff:ff:ff link-netnsid 10
     106: vethe9ce18e@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default link/ether 6a:b6:eb:40:23:1b brd ff:ff:ff:ff:ff:ff link-netnsid 16
     110: br0.1@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
     network.cfg
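     For reference, a macvlan network with the settings shown in that inspect output would normally come from a docker network create call along these lines; this is a reconstruction from the output above, not necessarily the exact command that was run:

       docker network create -d macvlan \
           --subnet=192.168.79.0/24 \
           --ip-range=192.168.79.200/30 \
           --gateway=192.168.79.83 \
           -o parent=br0.1 \
           towernet

     One thing worth double-checking in that output: if the LAN's actual router is not 192.168.79.83, containers on this network would come up with a dead default gateway, which by itself would explain DNS and internet failures like the ones reported for pihole.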
  7. That's exactly what an SSD Serial Killer would say... Sent from my ONEPLUS A3000 using Tapatalk
  8. I tried creating this with -o parent=br0.1 and it created it. But when I assign an IP to a docker, it is not reachable.
     root@Tower:/mnt/user/appdata/pihole# docker inspect pihole | grep IPAddress
     "SecondaryIPAddresses": null,
     "IPAddress": "",
     "IPAddress": "192.168.79.128",
     root@Tower:/mnt/user/appdata/pihole# docker exec pihole ping www.google.com
     ping: bad address 'www.google.com'
     I can't reach the web UI of the docker either.
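     A few follow-up checks that might narrow this down ("bad address" is a DNS lookup failure, so the next question is whether the container can reach its gateway and what resolver it was handed). This is only a sketch, and it assumes the pihole image ships the usual ip and ping tools:

       # what address, routes, and resolver did the container actually get?
       docker exec pihole ip addr
       docker exec pihole ip route
       docker exec pihole cat /etc/resolv.conf

       # can it reach the gateway configured on the macvlan network?
       docker exec pihole ping -c 3 192.168.79.83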
  9. When I try to run the command in the OP, I get the following error:
     Error response from daemon: network dm-ba57b5a60b33 is already using parent interface br0
     I attached my network settings and also an ifconfig readout with Docker enabled but all dockers turned off. Not sure if I have things set up right. I have 2 NICs (both Intel onboard), but I have the second NIC detached from unRAID and available to a VM using append vfio-pci.ids=etc.
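     For anyone reading along, that stub-out is typically done on the append line in the flash drive's syslinux.cfg. A sketch of the general form only; the 8086:1539 vendor:device ID below is a placeholder, and the real one comes from lspci -nn:

       # /boot/syslinux/syslinux.cfg (unRAID boot entry)
       append vfio-pci.ids=8086:1539 initrd=/bzroot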
  10. Your wife's betrayal should force her to watch it a second time with you. That's only fair.
  11. CHBMB is correct. I've been running this way for the past year. I use Syncthing to back up various devices (and my family's devices) to my server, and then use the one client on the individual license to back up to CrashPlan.
  12. Is there a particular format needed for the Exclude Folders section? If I typed "Appdata\EmbyServer\metadata", would that work? And if I want more than one folder, do I separate them with commas?
  13. Easy enough. Did the same. Thanks
  14. Hi Binhex, I've started using Privoxy in my browser. It works great, except that for some odd reason it blocks the banner in unRAID. Everything else local works, but the banner gets blocked. If I turn off the proxy it comes back. Obviously not a huge issue, but I thought I'd bring it up. Edit: Eh, it was a simple fix. I just added my unRAID machine to the browser's list of addresses that bypass the proxy.
  15. Ha! I have the same problem. Definitely blocked by Privoxy. @wgstarks I think it's wise we ask Binhex (assuming you are using Privoxy through one of his containers).
  16. A lot of great videos by @gridrunner here to get you started: https://www.youtube.com/channel/UCZDfnUn74N0WeAPvMqTOrtA
  17. I found the high thread priority option, but not the max thread. Where did you see that one?
  18. Have you tried SeaBIOS? Sent from my ONEPLUS A3000 using Tapatalk
  19. Boom! It works. You da man. Enjoy the small, long-overdue donation.
  20. Thanks Binhex! Log attached. Could it possibly be the fact that I edited rtorrent.rc and specified a port? I noticed that you are randomizing the port at each start. What should the port_range be, or should it just be commented out? supervisord.log
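     For context, the two rtorrent.rc directives in play look like the lines below. Whether the container rewrites them on startup is really the question for Binhex, so treat this purely as an illustration of the syntax, with the 49160 value carried over from the later post:

       # rtorrent.rc - pin the listening port instead of letting it be randomized
       port_range = 49160-49160
       port_random = no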
  21. Still closed. Just to check: in the config file I have 49160-49160 as the port. My router doesn't have anything open, and I've just removed the VPN_INCOMING_PORT. Here's my full Docker config:
  22. I'm already connected to CA Toronto. Should ruTorrent say the port is open? Or does the message saying it's closed not matter?