Jenardo's Achievements


  1. @aptalca Now it works, except that everything goes to the main Unraid UI. Basically, anything ending in "" goes to Unraid's main web page. Is this because of the "wildcard" that I set for subdomains in the docker configuration? By the way, I am using the subdomain conf files as described in my previous post.
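For context, a subdomain conf of the kind mentioned above usually follows the linuxserver.io proxy-conf layout. This is only a sketch: the container name (`sonarr`) and port (`8989`) are assumptions and must match the actual container. Note that when no `server_name` matches the request host, nginx serves its default site instead, which is one way every hostname can end up on the same page.

```nginx
# Hypothetical sonarr.subdomain.conf; server_name and upstream values are assumptions.
server {
    listen 443 ssl;
    server_name sonarr.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS
        set $upstream_app sonarr;        # must match the sonarr container name
        set $upstream_port 8989;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```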
  2. @aptalca Just moving this to the right thread. To recap, I wanted to set up letsencrypt as an internal reverse proxy without exposing it outside my LAN. Accordingly, I cannot use HTTP validation, so I attempted DuckDNS validation. Here is what I have done:
     - Created a custom user network
     - Moved sonarr to the new custom network
     - Set up the letsencrypt container with the following params: Network: new custom network; domain name: ; subdomains: wildcard; Only subdomains: true; validation: duckdns; DUCKDNSTOKEN: my duckdns token
     - Added a sonarr.subdomain.conf to proxy-confs (made sure it points to the correct sonarr container name)
     The log from the letsencrypt container looks fine; I don't see any errors. Trying to access yields nothing (not found). Am I missing anything? One thing I can think of: when a reverse proxy is set up for external access, external requests are routed to it. In this case there are no external requests, so what directs requests to the reverse proxy? Also, do I need to add any DNS records to my duckdns domain?
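As a reference point, the parameters listed above map onto a docker run invocation roughly like the following. This is a sketch based on the documented linuxserver/letsencrypt variables; the network name, domain, paths, and token are all placeholders, not values from the thread:

```shell
# Sketch of the setup described above; every concrete value is a placeholder.
docker network create proxynet

docker run -d \
  --name=letsencrypt \
  --net=proxynet \
  --cap-add=NET_ADMIN \
  -e URL=yourdomain.duckdns.org \
  -e SUBDOMAINS=wildcard \
  -e ONLY_SUBDOMAINS=true \
  -e VALIDATION=duckdns \
  -e DUCKDNSTOKEN=your-token \
  -p 443:443 \
  -v /path/to/config:/config \
  linuxserver/letsencrypt
```

With DNS-based validation, certificate issuance itself needs no inbound port; port 443 is published only so LAN clients can reach the proxy.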
  3. @aptalca Maybe this is the wrong topic; let me know if I should move this somewhere else. I already have a DuckDNS domain set up, and I assume we established that I cannot use HTTP validation because I do not want to expose a public port to my reverse proxy and only want to use it internally. Is there a guide for how I can set this up? I tried "quickly" searching the letsencrypt topic but couldn't find what I am looking for. Thanks.
  4. @aptalca Correct me if I am wrong, but letsencrypt needs to verify ownership of a domain in order to issue a certificate. This is not a problem in itself, but doesn't it mean that I will have to forward ports to the letsencrypt container so that it can verify my publicly resolvable domain? In that case, how can I keep the reverse proxy unexposed outside my LAN, as you suggested?
  5. Do you mean using a reverse proxy with a VPN as well? I assume you mean using a reverse proxy without a VPN. Isn't that less secure? And why would I do reverse proxying if I am the only person who wants to remotely access my services?
  6. When I first set up my environment, custom bridges were there as an option, so I said "why not?". It seemed much cleaner to deal with IPs than with ports, and it also seemed that everything would be a piece of cake from there; obviously not the case. Honestly, I didn't even consider my options; this just seemed easy and straightforward. Maybe I can do bridge networking with a local DNS server to work with hostnames instead of host-ip:port. But this time I would like to consider my options. So far:
     - Containers on a custom bridge, with openvpn-as
     - All containers on bridge networking, with something like a local DNS server
     What other options do I have? And if I ever decide to give public access to any of the containers, is it just a matter of throwing in a reverse proxy, or do I have to take this into account now somehow?
     Edit 1: BTW, won't a DNS container need a separate IP so that I can properly configure my router's DNS servers?
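Worth noting for the options above: a user-defined bridge already gives name-based addressing without a separate DNS container, because Docker runs an embedded DNS server on such networks. A minimal sketch, with the network and container names as placeholders:

```shell
# Sketch only; "mynet" and the container names are placeholders.
docker network create mynet
docker run -d --name sonarr --net mynet linuxserver/sonarr
docker run -d --name radarr --net mynet linuxserver/radarr

# On a user-defined network, containers resolve each other by name via
# Docker's embedded DNS (127.0.0.11), e.g. from inside the radarr container:
#   ping sonarr
```

This only covers container-to-container addressing; clients elsewhere on the LAN would still need port mappings or a local DNS entry pointing at the host.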
  7. Thanks for taking the time. The bright side is that it wasn't a bad configuration on my end. And it seems, from what you have said, that it's getting more complicated than it needs to be. Regarding the "hoping somebody else works out the issue" part, I have seen very few complaints sitting unanswered for months now, so I am not very optimistic about that. With that said, I have a question for you. What I want is simply the following:
     - Easily addressable containers
     - The ability to remotely reach into my home network, including VMs and containers
     My thought process was: put all containers on a custom bridge so they get their own IPs and are easily addressable, and use the openvpn-as container to VPN into my network and reach VMs and containers. Obviously, this is not working at the moment, or let's say it's getting more complicated than it needs to be. So the question is: what simpler alternative setup should I use to achieve what I want?
  8. I appreciate the effort. The thing is, this seems like a traditional "required" setup to me: containers have their own IPs, and openvpn gives clients access to both the host and the containers. Nevertheless, nobody seems to be complaining about it (or just a handful who have gone silent). Also, I would have tested with openvpn-as running on custom:br1; however, the container does not seem to allow that anymore (unresolved dependencies error). Should I revert to a much older version of the container, for instance? I don't even know if that would work. I can't really think of a decent solution here.
  9. @ken-ji Here are a few things that I found while attempting to debug the issue. I am sticking to host mode since it's the most promising so far. I am testing this through a terminal on my phone, which is connected to the openvpn server.
     - I can ping the server, a VM on br0, and my laptop, which is connected to my home network.
     - I cannot ping any of the br1 containers (I can still ping them from the openvpn-as container though).
     I used wireshark to look at packets leaving my server in a few scenarios:
     - Ping an invalid IP on the network: an ARP packet to find the IP -> expected.
     - Ping one of the br1 containers: an ICMP packet for the ping request with a "no response found" -> isn't this strange? I was expecting these packets to be routed directly to the br1 containers.
     Any ideas?
     Edit: In the network settings of openvpn, I don't see br1. Is that expected? When I do an 'ifconfig' inside the openvpn-as container, I see all the available interfaces (as0t0, br0, br1, docker0, eth0, eth1, lo, virbr0, vnet0). However, br0 has an IPv4 address and a few IPv6 addresses defined, while br1 only has the IPv6 ones. Expected? I assume that's the reason I don't see br1 in the network settings.
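The wireshark observations above can also be reproduced from the server itself with tcpdump, which makes it easier to see on which interface the ICMP requests die. A diagnostic sketch, assuming the interface names from the post and a placeholder container IP:

```shell
# Run as root on the host. Interface names match the post; the IP is a placeholder.
# Watch the bridge the containers live on while pinging a br1 container:
tcpdump -ni br1 'arp or icmp'

# Compare with traffic on the VPN tunnel interface and the physical NIC:
tcpdump -ni as0t0 icmp
tcpdump -ni eth1 'arp or icmp'
```

If the echo requests appear on as0t0 but never on br1, the packets are being dropped between the tunnel and the bridge rather than by the containers.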
  10. I tried all three options:
     - Custom:br1 - the VPN server does not start; it gives the "service failed to start due to unresolved dependencies" error that everyone has been complaining about.
     - Bridge mode - the VPN server starts, but all the custom:br1 containers are unreachable from the VPN client. I tried to ping/telnet the custom:br1 containers through the openvpn-as container's shell, but couldn't.
     - Host mode - the VPN server starts, and I can ping/telnet the custom:br1 containers successfully from the openvpn-as container's shell. However, all the custom:br1 containers are still unreachable from the VPN client.
     Edit: @ken-ji any ideas?
  11. I read your earlier posts, and they said that. Interestingly, I initially configured the container for bridge mode and it worked. I changed to host mode, and it still worked. Maybe it's a glitch on my side!
  12. Check your container configuration. I usually get this error when I am not using host or bridge network mode (as other users have also described earlier in this thread).
  13. Setup:
     - 2 NICs
     - Followed @ken-ji's solution to sidestep the macvlan security restriction
     - No bonding between interfaces
     - No IP assigned to eth1
     - Replaced docker's eth0/br0 settings with eth1/br1
     - Moved all containers that were on custom:br0 to custom:br1
     - Set up the openvpn-as container: version 2.6.1-ls11 (seems to be the most stable), bridge mode
     - VPN settings: added my subnet to the routing section
     Test:
     - My openvpn client can connect to the server
     - I can reach my Unraid GUI
     Problem: I cannot access any of the containers running on custom:br1.
     I went through the last ~25 pages of this topic. There were a few posts complaining about a similar issue, and then they went silent. I couldn't see any replies to their questions (unless I missed them, of course). Any help is appreciated. @jfrancais you seem to have had a similar issue. Did you ever manage to resolve it?
     Edit 1: I tried to ping/telnet the custom:br1 containers through the openvpn-as container's shell, but couldn't. I believe this means there is a problem with the network settings. I am sure I followed the steps that @ken-ji outlined.
     Edit 2: Changing the openvpn-as container to host mode allows me to ping/telnet custom:br1 containers through the shell. However, VPN clients still cannot connect to custom:br1 containers!
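A few quick checks that narrow down where the setup above breaks; the container name and interfaces come from these posts, and the IP is a placeholder for an actual br1 container address:

```shell
# Hypothetical checks run from the Unraid host shell.
docker exec openvpn-as ifconfig br1            # does br1 carry an IPv4 address inside the container?
docker exec openvpn-as ping -c 1 192.168.2.10  # placeholder: one of the custom:br1 container IPs
ifconfig br1                                   # compare with the host's own view of br1
```

If br1 has no IPv4 address, the VPN server has no address on that network to offer in its routing settings, which would match br1 not appearing in the openvpn network settings noted earlier.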