[Support] Clowrym's Docker Repository


clowrym

Recommended Posts

7 hours ago, Fierce Waffle said:

Transmission becomes unresponsive (last time was after I ran "delete local data"). I can't stop or kill the docker container. Stopping the docker service doesn't cause the container to stop either (disabling docker in settings). I've tried deleting the docker vdisk img and starting from scratch and I'm getting the same issues. Trying to stop or shut down the array causes the server to hang. I have to hard-shutdown the server using the power button. 

 

What on earth could be causing this?

It's not an Intel NUC despite the diagnostic name. 

When I try to get into the container after it's become unresponsive I get the following error: 
 


root@nuc:~# docker exec -it transmission bash
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: process_linux.go:112: executing setns process caused: exit status 1: unknown

 

nuc-diagnostics-20210518-1203 (1).zip



Reposting my response here so we don't keep bumping that other thread:
 

From diag.zip -> system/ps.txt:

nobody   31750  0.3  0.0      0     0 ?        Zl   May17   4:31      \_ [transmission-da] <defunct>



The transmission-daemon process (shown truncated as transmission-da in the ps output) has become a zombie. Interestingly, the container uses dumb-init, which should be handling zombie process cleanup with a wait() syscall, but it doesn't seem to be doing so. Typically this indicates the offending zombie process is stuck waiting on I/O of some kind. I also note in your syslog that the BTRFS docker image was corrupted (twice) and recreated. I'm assuming a write operation inside docker.img is hanging, and forcibly rebooting the server is what corrupts it.

Double-check that none of the paths you have mapped into this container point inside the docker image. If they do, that could be the culprit: the docker image fills up and write operations then fail repeatedly.
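One way to check this from the command line, a sketch assuming the container is named Transmission_VPN: list the host-side sources of the container's mounts and flag anything not under /mnt/, since on Unraid anything outside /mnt/ ends up inside docker.img.

```shell
#!/bin/bash
# List the host paths mapped into the container and warn about any that are
# not under /mnt/ (Unraid keeps array and cache shares there); anything else
# lives inside docker.img and can fill it up.
command -v docker >/dev/null 2>&1 || exit 0  # skip on systems without docker

# Returns 0 (true) when a host path would land inside docker.img.
path_inside_docker_img() {
  case "$1" in
    /mnt/*) return 1 ;;  # array or cache pool: fine
    *)      return 0 ;;  # anywhere else is inside docker.img
  esac
}

docker inspect -f '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' Transmission_VPN |
while read -r src; do
  [ -n "$src" ] || continue
  if path_inside_docker_img "$src"; then
    echo "WARNING: $src is mapped from inside docker.img"
  fi
done
```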

Link to comment
On 5/18/2021 at 2:49 PM, clowrym said:

 

 

I have this issue now and again with PIA. I run the following script every ~15 minutes to look for the error and restart the container:

 


#!/bin/bash
#Check for error in Transmission log and restarts docker if found
#change /Search_String/ to suit the required error
echo checking for Transmission_vpn Fatal Error
docker logs --tail 50 Transmission_VPN 2>&1 | awk '/fatal_error/ {print | "docker restart Transmission_VPN"}'

 

 

smart, I'll try this

Link to comment
On 5/18/2021 at 2:49 PM, clowrym said:

 

 

I have this issue now and again with PIA. I run the following script every ~15 minutes to look for the error and restart the container:

 


#!/bin/bash
#Check for error in Transmission log and restarts docker if found
#change /Search_String/ to suit the required error
echo checking for Transmission_vpn Fatal Error
docker logs --tail 50 Transmission_VPN 2>&1 | awk '/fatal_error/ {print | "docker restart Transmission_VPN"}'

 

 

I wonder if this can be built into the docker healthcheck so it is restarted when unhealthy?
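For what it's worth, the same log check could be expressed as a health-check script, a sketch assuming the container name and search string from the post above. Note that Docker only marks a container unhealthy; the restart itself has to come from a watcher such as the willfarrell/autoheal container.

```shell
#!/bin/bash
# Health-check sketch: exit 1 (unhealthy) if fatal_error appears in the
# recent log tail, 0 (healthy) otherwise. Wire it in via --health-cmd or a
# HEALTHCHECK, and pair with an autoheal-style watcher for the restart.
command -v docker >/dev/null 2>&1 || exit 0  # skip on systems without docker

check_log() {
  # $1: text of the recent log tail
  if printf '%s\n' "$1" | grep -q 'fatal_error'; then
    return 1  # unhealthy
  fi
  return 0    # healthy
}

check_log "$(docker logs --tail 50 Transmission_VPN 2>&1)"
```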

Link to comment
Posted (edited)

I can't connect to / access the web GUI unless my location is set to Switzerland. When it's Netherlands, it just says it can't connect to the web GUI. When I open it with Switzerland, my logs come through clean, say success, and I can use it normally.

 

This is what my logs say once I switch my location to Netherlands (it just keeps repeating):

Wed May 26 15:06:56 2021 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Wed May 26 15:06:56 2021 TCP/UDP: Preserving recently used remote address: [AF_INET]191.96.168.102:502
Wed May 26 15:06:56 2021 Attempting to establish TCP connection with [AF_INET]191.96.168.102:502 [nonblock]
Wed May 26 15:06:57 2021 TCP connection established with [AF_INET]191.96.168.102:502
Wed May 26 15:06:57 2021 TCP_CLIENT link local: (not bound)
Wed May 26 15:06:57 2021 TCP_CLIENT link remote: [AF_INET]191.96.168.102:502
Wed May 26 15:06:58 2021 VERIFY ERROR: depth=0, error=certificate is not yet valid: C=US, ST=CA, L=LosAngeles, O=Private Internet Access, OU=Private Internet Access, CN=amsterdam426, name=amsterdam426
Wed May 26 15:06:58 2021 OpenSSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Wed May 26 15:06:58 2021 TLS_ERROR: BIO read tls_read_plaintext error
Wed May 26 15:06:58 2021 TLS Error: TLS object -> incoming plaintext read error
Wed May 26 15:06:58 2021 TLS Error: TLS handshake failed
Wed May 26 15:06:58 2021 Fatal TLS error (check_tls_errors_co), restarting
Wed May 26 15:06:58 2021 SIGUSR1[soft,tls-error] received, process restarting

 

Solved:

https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md

Followed the steps in Q19/A19
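For anyone hitting the same VERIFY ERROR: "certificate is not yet valid" generally means the local clock is behind the certificate's notBefore date, or the downloaded config bundle is stale. A quick sanity check, where the cert path is a made-up example; point it at whatever CA file your .ovpn references:

```shell
#!/bin/bash
# Compare the local clock against a certificate's validity window.
# /config/ca.crt is a hypothetical example path.
date -u
if [ -f /config/ca.crt ]; then
  openssl x509 -noout -dates -in /config/ca.crt
fi
```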

Edited by tential
Link to comment
Posted (edited)
1 hour ago, tential said:

I can't connect to / access the web GUI unless my location is set to Switzerland. When it's Netherlands, it just says it can't connect to the web GUI. When I open it with Switzerland, my logs come through clean, say success, and I can use it normally.

 

This is what my logs say once I switch my location to Netherlands (it just keeps repeating):

Wed May 26 15:06:56 2021 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Wed May 26 15:06:56 2021 TCP/UDP: Preserving recently used remote address: [AF_INET]191.96.168.102:502
Wed May 26 15:06:56 2021 Attempting to establish TCP connection with [AF_INET]191.96.168.102:502 [nonblock]
Wed May 26 15:06:57 2021 TCP connection established with [AF_INET]191.96.168.102:502
Wed May 26 15:06:57 2021 TCP_CLIENT link local: (not bound)
Wed May 26 15:06:57 2021 TCP_CLIENT link remote: [AF_INET]191.96.168.102:502
Wed May 26 15:06:58 2021 VERIFY ERROR: depth=0, error=certificate is not yet valid: C=US, ST=CA, L=LosAngeles, O=Private Internet Access, OU=Private Internet Access, CN=amsterdam426, name=amsterdam426
Wed May 26 15:06:58 2021 OpenSSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Wed May 26 15:06:58 2021 TLS_ERROR: BIO read tls_read_plaintext error
Wed May 26 15:06:58 2021 TLS Error: TLS object -> incoming plaintext read error
Wed May 26 15:06:58 2021 TLS Error: TLS handshake failed
Wed May 26 15:06:58 2021 Fatal TLS error (check_tls_errors_co), restarting
Wed May 26 15:06:58 2021 SIGUSR1[soft,tls-error] received, process restarting

 

Solved:

https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md

Followed the steps in Q19/A19

This thread isn't for binhex's docker, it's for haugene/transmission-openvpn. The variable 

PIA_OPENVPN_CONFIG_BUNDLE:

was added to this template a while back when PIA changed to automatically downloading whichever configs you selected!!
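For reference, a minimal sketch of how that variable looks when running the image directly (the bundle value and credentials here are illustrative examples, per the haugene project's docs):

```shell
# Sketch: the same template variable expressed as docker run flags.
# PIA_OPENVPN_CONFIG_BUNDLE picks which config set the container downloads
# from PIA (e.g. openvpn, openvpn-tcp); values shown are examples only.
docker run -d --name Transmission_VPN \
  -e OPENVPN_PROVIDER=PIA \
  -e PIA_OPENVPN_CONFIG_BUNDLE=openvpn \
  -e OPENVPN_CONFIG=netherlands \
  -e OPENVPN_USERNAME=user -e OPENVPN_PASSWORD=pass \
  --cap-add=NET_ADMIN \
  haugene/transmission-openvpn
```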

 

 

Glad it worked for you though!!

 

 

Edited by clowrym
Link to comment
  • 2 weeks later...

I'm trying to figure out how to get ALL transmission traffic to go through one of my internet connections rather than another.

 

One way I do that, using policy-based routing, is to differentiate by IP address... so that some of my VMs can only communicate through the slower but cheaper (unlimited) ISP.

 

Is it possible to make Transmission use a different IP address than the rest of the Unraid Server?

 

Or, barring that, is there a single outgoing port that is used? Or even a range of ports?

This is counter to the VPN goal, isn't it?

 

Slightly confused I know, but any insight would be appreciated.

 

Arbadacarba

Link to comment
Posted (edited)

Trying to get minecraft_server.1.17.jar to work with this. I've been able to download it, but when I try to start the server, nothing happens. I verified eula.txt has true... what could I be missing?

 

No log file is created either.  If I use a different version it starts up fine.

Edited by Saiba Samurai
Link to comment

Saiba,
I'm having the same issue.

I backed up my world and reset all the non-relevant chunks using the MCA editor program.

Now I'm trying to move my server to 1.17 in MineOS and it will not start.

I have seen other instances of this happening - looks like it could be some incompatibility with the old version of Java.


Need some help to get this resolved though - just wanted to let you know you are not alone :)

Link to comment

From the docker console you can run:
 

java -version
openjdk version "1.8.0_292"


As you can see, mine is running 1.8.
From the Minecraft website: https://help.minecraft.net/hc/en-us/articles/360035131371-Minecraft-Java-Edition-system-requirements-
you can see at the bottom that Minecraft 1.17 requires Java 16 or newer.
So the docker needs a Java upgrade before a 1.17 server will be able to run successfully.
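Along the lines of the check above, a small script can report whether the installed Java is new enough before you try to launch a 1.17 server (a sketch; the version parsing handles both the old "1.8.0_292" scheme, where the major version is 8, and the new "16.0.1" scheme):

```shell
#!/bin/bash
# Report whether the installed Java can run a Minecraft 1.17 server.
command -v java >/dev/null 2>&1 || exit 0  # skip where java isn't installed

# Pull the quoted version string out of `java -version`, then take the major
# version: "1.8.0_292" -> 8, "16.0.1" -> 16.
major=$(java -version 2>&1 | awk -F'"' '/version/ {print $2}' |
        awk -F. '{ if ($1 == 1) print $2; else print $1 }')
if [ "$major" -lt 16 ]; then
  echo "Java $major found; Minecraft 1.17 needs Java 16 or newer"
else
  echo "Java $major is new enough for 1.17"
fi
```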

Link to comment
1 hour ago, UNRAID5 said:

From the docker console you can run:
 


java -version
openjdk version "1.8.0_292"


As you can see, mine is running 1.8.
From the Minecraft website:  
you can see at the bottom that Minecraft 1.17 requires Java 16 or newer.
So the docker needs a Java upgrade before a 1.17 server will be able to run successfully.

 

4 hours ago, MarshallU said:

Saiba,
I'm having the same issue.

I backed up my world and reset all the non-relevant chunks using the MCA editor program.

Now I'm trying to move my server to 1.17 in MineOS and it will not start.

I have seen other instances of this happening - looks like it could be some incompatibility with the old version of Java.


Need some help to get this resolved though - just wanted to let you know you are not alone :)

 

I would post your issue here https://github.com/hexparrot/mineos-node 

Link to comment
Posted (edited)
22 hours ago, Arbadacarba said:

I'm trying to figure out how to get ALL transmission traffic to go through one of my internet connections rather than another.

 

One way I do that, using policy-based routing, is to differentiate by IP address... so that some of my VMs can only communicate through the slower but cheaper (unlimited) ISP.

 

Is it possible to make Transmission use a different IP address than the rest of the Unraid Server?

 

Or, barring that, is there a single outgoing port that is used? Or even a range of ports?

This is counter to the VPN goal, isn't it?

 

Slightly confused I know, but any insight would be appreciated.

 

Arbadacarba

You can give each docker its own IP address if you want. I'm not sure about trying to route your traffic through one connection... I've never tried, or had the need to do it.

 

Switch your network to custom and specify an ip in the template:

 

[screenshot: docker template network settings, with Network Type set to Custom and a fixed IP specified]
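For anyone doing this outside the template, a rough CLI equivalent of the "Custom" network setting; the parent interface (br0), subnet, and address here are made-up examples for a typical Unraid LAN:

```shell
# Create a macvlan network bridged to the host's br0 interface, then give
# the container a fixed address on it. Values are illustrative.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br0 br0net
docker run -d --name Transmission_VPN \
  --network br0net --ip 192.168.1.50 \
  haugene/transmission-openvpn
```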

Edited by clowrym
Link to comment
Posted (edited)

@clowrym, perhaps you can educate me here. Wouldn't the version of Java running inside this docker be best updated through the hub? Or is upgrading Java manually inside the docker truly the best way (as mentioned by hexparrot in his post on github)? I looked at binhex's post as well, on this forum, and he has some valid hesitations listed there. Just trying to make sense of all the data to make an informed decision.

Edited by UNRAID5
Link to comment
4 hours ago, UNRAID5 said:

@clowrym, perhaps you can educate me here. Wouldn't the version of Java running inside this docker be best updated through the hub? Or is upgrading Java manually inside the docker truly the best way (as mentioned by hexparrot in his post on github)? I looked at binhex's post as well, on this forum, and he has some valid hesitations listed there. Just trying to make sense of all the data to make an informed decision.

That would be a question for hexparrot... I created the Unraid docker template, not the GitHub repo!!

I have in the past updated Java from within the docker with no issue....

Link to comment
4 hours ago, clowrym said:

That would be a question for hexparrot... I created the Unraid docker template, not the GitHub repo!!

I have in the past updated Java from within the docker with no issue....


Ah, gotcha. Thanks for educating me. :)

Link to comment
On 6/8/2021 at 8:56 PM, clowrym said:

You can give each docker its own IP address if you want. I'm not sure about trying to route your traffic through one connection... I've never tried, or had the need to do it.

 

Switch your network to custom and specify an ip in the template:

 

[screenshot: docker template network settings, with Network Type set to Custom and a fixed IP specified]

 

Does it make sense that I would need to clear the LOCAL_NETWORK: variable? I tried assigning the IP address and could not get into the WebUI.

 

Thanks

Link to comment
2 hours ago, Arbadacarba said:

 

Does it make sense that I would need to clear the LOCAL_NETWORK: variable? I tried assigning the IP address and could not get into the WebUI.

 

Thanks

I have a script I run to add other IPs / subnets when I use a VPN to connect, or change IPs, etc... otherwise I don't get access to the web GUI. Maybe you need something similar? I haven't tried / tested giving this docker its own IP.

 

#!/bin/bash
echo adding 10.1.0.0/24 LAN_NETWORK
# ip r a = ip route add: route the 10.1.0.0/24 subnet via the docker bridge gateway
docker exec Transmission_VPN /bin/sh -c "/sbin/ip r a 10.1.0.0/24 via 172.17.0.1 dev eth0"
echo Network added
exit
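If it helps, the same route-add can be made idempotent so the script is safe to run on a schedule without erroring when the route already exists (same example subnet and docker bridge gateway as the script above):

```shell
#!/bin/bash
# Only add the 10.1.0.0/24 route inside the container when it isn't
# already present, so repeated runs don't fail.
command -v docker >/dev/null 2>&1 || exit 0  # skip on systems without docker

docker exec Transmission_VPN /bin/sh -c \
  "ip route show | grep -q '^10.1.0.0/24' || ip route add 10.1.0.0/24 via 172.17.0.1 dev eth0"
```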

 

Link to comment
12 hours ago, Arbadacarba said:

Zeroing out the LOCAL_NETWORK variable has it working correctly, I think... I was just unsure whether that made sense or whether the change could cause other problems.

I would confirm that your docker is connecting through the VPN. The point of the LOCAL_NETWORK variable is to allow your local network access to the VPN subnet.... Personally I use Check my VPN IP.... 

Link to comment

Seems to be good... The site shows me connecting through the vpn location.

 

Seems to be working.

 

The goal is that while I want to lock the torrent server out of the more expensive internet connection, I want the Unraid server itself to be able to get updates through whichever internet connection is working.

 

Thanks for all the help.

Link to comment

Recently I installed transmission-vpn. It was a painful process, and the template could definitely be improved: it's inconsistent, and the naming of parameters/variables and their placement/grouping (directories etc.) are all over the place. Nevertheless, I managed to configure it and it works like a charm. A couple of things I had to hack, like OPENVPN_PROVIDER and OPENVPN_CONFIG. I use ExpressVPN and the docker supports it, but it's missing from the default values. I played with the defaults, set EXPRESSVPN, and added ExpressVPN-specific configs. So far so good. Except that at night the template reverts to its defaults, and both OPENVPN_PROVIDER and OPENVPN_CONFIG revert to the original values (the first values on the list). Also the download directory sometimes gets messed up. Any ideas how to fix this? I switched off automatic updates so the container does not get restarted, but the values have changed nevertheless. 

 

Also, what I do not understand is what /mnt/user/T_Media/Torrent/ is for? It seems to have no use besides automatically creating the share. I am also confused by the 3 separate download parameters: I understand the container path /Downloads, but what is /Download for? Not to mention /mnt/user/T_Media/Torrent/ ;-) 

 

Thanks! 

Link to comment
  • 2 weeks later...
Posted (edited)

Who can help me with the following?

 

Transmission_VPN is working properly, but my log is being spammed by these kinds of messages, and after a few days my syslog is full.

 

Thanks in advance :)

 

spam text:

kernel: docker0: port 1(veth6712fad) entered blocking state
kernel: docker0: port 1(veth6712fad) entered disabled state
kernel: device veth6712fad entered promiscuous mode
kernel: docker0: port 1(veth6712fad) entered blocking state
kernel: docker0: port 1(veth6712fad) entered forwarding state
kernel: docker0: port 1(veth6712fad) entered disabled state
kernel: eth0: renamed from veth5c249c5
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6712fad: link becomes ready
kernel: docker0: port 1(veth6712fad) entered blocking state
kernel: docker0: port 1(veth6712fad) entered forwarding state
avahi-daemon[28188]: Joining mDNS multicast group on interface veth6712fad.IPv6 with address fe80::e8ca:52ff:fef8:4323.
avahi-daemon[28188]: New relevant interface veth6712fad.IPv6 for mDNS.
avahi-daemon[28188]: Registering new address record for fe80::e8ca:52ff:fef8:4323 on veth6712fad.*.
[... the same kernel blocking/disabled/forwarding sequence and avahi-daemon mDNS join/leave messages repeat for docker0 ports 2 through 13, for every veth interface that comes and goes ...]
kernel: docker0: port 12(veth716a41e) entered forwarding state
avahi-daemon[28188]: Joining mDNS multicast group on interface veth716a41e.IPv6 with address fe80::e0cb:c7ff:fead:a8b9.
avahi-daemon[28188]: New relevant interface veth716a41e.IPv6 for mDNS.
avahi-daemon[28188]: Registering new address record for fe80::e0cb:c7ff:fead:a8b9 on veth716a41e.*.
kernel: docker0: port 12(veth716a41e) entered disabled state
kernel: vethe453ff8: renamed from eth0
avahi-daemon[28188]: Interface veth716a41e.IPv6 no longer relevant for mDNS.
avahi-daemon[28188]: Leaving mDNS multicast group on interface veth716a41e.IPv6 with address fe80::e0cb:c7ff:fead:a8b9.
kernel: docker0: port 12(veth716a41e) entered disabled state
kernel: device veth716a41e left promiscuous mode
kernel: docker0: port 12(veth716a41e) entered disabled state
avahi-daemon[28188]: Withdrawing address record for fe80::e0cb:c7ff:fead:a8b9 on veth716a41e.
kernel: docker0: port 12(veth861ce35) entered blocking state
kernel: docker0: port 12(veth861ce35) entered disabled state
kernel: device veth861ce35 entered promiscuous mode
kernel: docker0: port 12(veth861ce35) entered blocking state
kernel: docker0: port 12(veth861ce35) entered forwarding state
kernel: docker0: port 12(veth861ce35) entered disabled state
kernel: eth0: renamed from veth30940ef
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth861ce35: link becomes ready
kernel: docker0: port 12(veth861ce35) entered blocking state
kernel: docker0: port 12(veth861ce35) entered forwarding state
avahi-daemon[28188]: Joining mDNS multicast group on interface veth861ce35.IPv6 with address fe80::689f:41ff:fec2:8d4.
avahi-daemon[28188]: New relevant interface veth861ce35.IPv6 for mDNS.
avahi-daemon[28188]: Registering new address record for fe80::689f:41ff:fec2:8d4 on veth861ce35.*.
kernel: docker0: port 12(veth861ce35) entered disabled state
kernel: veth30940ef: renamed from eth0
avahi-daemon[28188]: Interface veth861ce35.IPv6 no longer relevant for mDNS.
avahi-daemon[28188]: Leaving mDNS multicast group on interface veth861ce35.IPv6 with address fe80::689f:41ff:fec2:8d4.
kernel: docker0: port 12(veth861ce35) entered disabled state
kernel: device veth861ce35 left promiscuous mode
kernel: docker0: port 12(veth861ce35) entered disabled state
avahi-daemon[28188]: Withdrawing address record for fe80::689f:41ff:fec2:8d4 on veth861ce35.
kernel: docker0: port 12(veth86004e6) entered blocking state
kernel: docker0: port 12(veth86004e6) entered disabled state
kernel: device veth86004e6 entered promiscuous mode
kernel: docker0: port 12(veth86004e6) entered blocking state
kernel: docker0: port 12(veth86004e6) entered forwarding state
kernel: docker0: port 12(veth86004e6) entered disabled state
kernel: eth0: renamed from veth5913a56
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth86004e6: link becomes ready
kernel: docker0: port 12(veth86004e6) entered blocking state
kernel: docker0: port 12(veth86004e6) entered forwarding state
avahi-daemon[28188]: Joining mDNS multicast group on interface veth86004e6.IPv6 with address fe80::a078:b8ff:fefe:27a0.
avahi-daemon[28188]: New relevant interface veth86004e6.IPv6 for mDNS.
avahi-daemon[28188]: Registering new address record for fe80::a078:b8ff:fefe:27a0 on veth86004e6.*.
kernel: docker0: port 12(veth86004e6) entered disabled state
kernel: veth5913a56: renamed from eth0
avahi-daemon[28188]: Interface veth86004e6.IPv6 no longer relevant for mDNS.
avahi-daemon[28188]: Leaving mDNS multicast group on interface veth86004e6.IPv6 with address fe80::a078:b8ff:fefe:27a0.
kernel: docker0: port 12(veth86004e6) entered disabled state
kernel: device veth86004e6 left promiscuous mode
kernel: docker0: port 12(veth86004e6) entered disabled state
avahi-daemon[28188]: Withdrawing address record for fe80::a078:b8ff:fefe:27a0 on veth86004e6.
kernel: docker0: port 12(veth2eae628) entered blocking state
kernel: docker0: port 12(veth2eae628) entered disabled state
kernel: device veth2eae628 entered promiscuous mode
kernel: docker0: port 12(veth2eae628) entered blocking state
kernel: docker0: port 12(veth2eae628) entered forwarding state
kernel: docker0: port 12(veth2eae628) entered disabled state
kernel: eth0: renamed from veth99b8f62
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2eae628: link becomes ready
kernel: docker0: port 12(veth2eae628) entered blocking state
kernel: docker0: port 12(veth2eae628) entered forwarding state
kernel: docker0: port 12(veth2eae628) entered disabled state
kernel: veth99b8f62: renamed from eth0
kernel: docker0: port 12(veth2eae628) entered disabled state
kernel: device veth2eae628 left promiscuous mode
kernel: docker0: port 12(veth2eae628) entered disabled state
mammon-diagnostics-20210705-1117.zip

Edited by Mondeus
Link to comment
  • 1 month later...

After updating, getting spam of:

Starting container with revision: 0bd47cb67bf0d106096f844cc5b9fe1b4f444c09
Creating TUN device /dev/net/tun
mknod: /dev/net/tun: File exists
Using OpenVPN provider: PIA
Running with VPN_CONFIG_SOURCE auto
Provider PIA has a bundled setup script. Defaulting to internal config
Executing setup script for PIA
Downloading OpenVPN config bundle openvpn-strong into temporary file /tmp/tmp.G48iYLG3yl
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

Also getting cert error when going to PIA site:

(I'm sure these are related, just don't know the fix)

www.privateinternetaccess.com has a security policy called HTTP Strict Transport Security (HSTS), which means that Firefox can only connect to it securely. You can’t add an exception to visit this site.
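For anyone wanting to narrow this down, a few diagnostic commands can confirm whether the container's CA bundle is the culprit. This is only a sketch — the container name "transmission" and the certificate paths are assumptions based on a typical Debian/Ubuntu-based image; adjust to match your setup:

```shell
# Open a shell in the container (name assumed to be "transmission")
docker exec -it transmission bash

# Inside the container: check that the CA bundle curl relies on exists
# (path assumes a Debian/Ubuntu base image)
ls -l /etc/ssl/certs/ca-certificates.crt

# Re-run the request that is failing; curl exit code 60 means
# certificate verification failed against the local CA store
curl -v https://www.privateinternetaccess.com/ -o /dev/null

# If the bundle is missing or stale, refreshing it may help
apt-get update && apt-get install --reinstall -y ca-certificates
update-ca-certificates
```

If the same curl command succeeds with `-k` (verification disabled) but fails without it, that points at the container's CA store rather than PIA's certificate itself.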

Edited by Joker169
Link to comment