Everything posted by Jenardo

  1. @aptalca Now it works, except that everything goes to the main unRAID UI. XYZ.duckdns.org, sonarr.XYZ.duckdns.org, and bla123.XYZ.duckdns.org all land on unRAID's main web page: basically, anything ending in "XYZ.duckdns.org" does. Is this because of the "wildcard" I set for subdomains in the docker configuration? By the way, I am using the subdomain conf files as described in the previous post. (A curl test for checking what is actually answering is sketched below.)
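     A quick way to check what is actually answering these requests is to force name resolution with curl. A minimal sketch, assuming the letsencrypt container's nginx is reachable at 192.168.0.50 (a placeholder IP):

     ```sh
     # Force the name to resolve to the proxy container's LAN IP; if this
     # returns sonarr while the normal lookup lands on the unRAID UI, the
     # problem is DNS/port routing rather than the wildcard cert.
     curl -kI --resolve sonarr.XYZ.duckdns.org:443:192.168.0.50 \
       https://sonarr.XYZ.duckdns.org/

     # Compare against the bare domain pointed at the same IP:
     curl -kI --resolve XYZ.duckdns.org:443:192.168.0.50 \
       https://XYZ.duckdns.org/
     ```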
  2. @aptalca Just moving this to the right thread. To recap, I wanted to set up letsencrypt as an internal reverse proxy without exposing it outside my LAN. That rules out HTTP validation, so I attempted DuckDNS validation instead. Here is what I have done:
       • Created a custom user network
       • Moved sonarr to the new custom network
       • Set up the letsencrypt container with the following params: Network: new custom network; Domain name: XYZ.duckdns.org; Subdomains: wildcard; Only subdomains: true; Validation: duckdns; DUCKDNSTOKEN: my duckdns token
       • Added a sonarr.subdomain.conf to proxy-confs (made sure it points to the correct sonarr container name)
     The log from the letsencrypt container looks fine; I don't see any errors. But trying to access sonarr.XYZ.duckdns.org yields nothing (not found). Am I missing anything? One thing I can think of: when a reverse proxy is set up for external access, requests are routed to it from outside. In this case there are no external requests, so what directs the requests to the reverse proxy? Also, do I need to add any DNS records to my duckdns domain? (The docker equivalent of this setup is sketched below.)
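     For reference, a docker CLI sketch of roughly this configuration, assuming the linuxserver letsencrypt image and its documented URL/SUBDOMAINS/ONLY_SUBDOMAINS/VALIDATION/DUCKDNSTOKEN parameters; the network name, container names, and token are placeholders:

     ```sh
     # Custom user-defined network so containers can reach each other by name.
     docker network create proxynet

     # Sonarr joins the same network.
     docker run -d --name=sonarr --net=proxynet linuxserver/sonarr

     # letsencrypt container using DNS (duckdns) validation, so no inbound
     # port forward is needed to obtain the certificate. The published
     # ports are what lets LAN clients reach nginx at all.
     docker run -d --name=letsencrypt --net=proxynet \
       -p 80:80 -p 443:443 \
       -e URL=XYZ.duckdns.org \
       -e SUBDOMAINS=wildcard \
       -e ONLY_SUBDOMAINS=true \
       -e VALIDATION=duckdns \
       -e DUCKDNSTOKEN=<my-duckdns-token> \
       linuxserver/letsencrypt
     ```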
  3. @aptalca Maybe this is the wrong topic; let me know if I should move this somewhere else. I already have a duckdns domain set up. And I assume we established that I cannot use HTTP validation, because I do not want to expose a public port to my reverse proxy and only want to use it internally. Is there a guide to how I can set this up? I tried "quickly" searching the letsencrypt topic but couldn't find what I am looking for. Thanks.
  4. @aptalca Correct me if I am wrong, but letsencrypt needs to verify ownership of a domain in order to issue a certificate. That is not a problem in itself, but doesn't it mean I will have to forward ports to the letsencrypt container so that it can verify my publicly resolvable domain? How can I avoid exposing the reverse proxy outside my LAN, as you suggested, in that case? (A note on DNS validation is sketched below.)
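     As it turns out, the DNS-01 challenge answers this: ownership is proven by publishing a TXT record rather than by serving a file over HTTP, so no inbound port is needed. A sketch of how one could watch it happen (the record only exists while a challenge is in flight; domain is a placeholder):

     ```sh
     # During a DNS-01 challenge, the ACME client publishes a token as a
     # TXT record under _acme-challenge, and Let's Encrypt looks it up
     # over public DNS. Only outbound requests are involved.
     dig +short TXT _acme-challenge.XYZ.duckdns.org
     ```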
  5. Do you mean using a reverse proxy with VPN as well? I assume you mean a reverse proxy without VPN. Isn't that less secure? And why would I do reverse proxying if I am the only person who wants to remotely access my services?
  6. When I first set up my environment, custom bridges were there as an option, so I said "why not?". It seemed much cleaner to deal with IPs than with ports, and it also seemed that everything would be a piece of cake from there ... obviously not the case. Honestly, I didn't even consider my options; this just seemed easy and straightforward. Maybe I can do plain bridge networking with a local DNS server so I can work with hostnames instead of host-ip:port. But this time I would like to consider my options. So far:
       • Containers on a custom bridge, with openvpn-as
       • All containers on bridge networking, with something as a local DNS server
     What other options do I have? And if I ever decide to give public access to any of the containers, is it just a matter of throwing in a reverse proxy, or do I have to take that into account now somehow? Edit 1: BTW, won't a DNS container need a separate IP so that I can properly configure my router's DNS servers? (A sketch of giving one container its own LAN IP follows.)
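     On that last question: one way to give just the DNS container its own LAN address is a macvlan network, which is roughly what unRAID's custom bridge does under the hood. A sketch, assuming a 192.168.0.0/24 LAN on parent interface br0 (subnet, gateway, network name, and IP are placeholders):

     ```sh
     # macvlan network attached to the host's bridge; containers on it get
     # real addresses on the LAN, visible to the router.
     docker network create -d macvlan \
       --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
       -o parent=br0 lannet

     # Pin the DNS container to a fixed LAN IP so the router can point at it.
     docker run -d --name=pihole --net=lannet --ip=192.168.0.53 pihole/pihole
     ```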
  7. Thanks for taking the time. The bright side is that it wasn't a bad configuration on my end. And it seems, from what you have said, that it's getting more complicated than it needs to be. Regarding the "hoping somebody else works out the issue" part: I have seen a few complaints like this sitting unanswered for months now, so I am not very optimistic about that. With that said, I have a question for you. What I want is simply the following:
       • Easily addressable containers
       • Remote access into my home network, including VMs and containers
     My thought process was: put all containers on a custom bridge so they get their own IPs and are easily addressable, and use the openvpn-as container to VPN into my network and reach VMs and containers. Obviously, this is not working at the moment, or let's say it's getting more complicated than it needs to be. So the question is: what simpler alternative setup should I use to achieve what I want?
  8. I appreciate the effort. The thing is, this seems like a pretty standard "required" setup to me: containers have their own IPs, and openvpn gives clients access to both host and containers. Nevertheless, nobody seems to be complaining about it (or just a handful, who have gone silent). Also, I would have tested with openvpn-as running on custom:br1, but the container does not seem to allow that anymore (unresolved dependencies error). Should I revert to a much older version of the container, for instance? I don't even know if that would work. I can't really think of a decent solution here.
  9. @ken-ji here are a few things that I found in an attempt to debug the issue. I am sticking to host mode since it's the most promising so far. I am testing through a terminal on my phone, which is connected to the openvpn server.
       • I can ping the server, a VM on br0, and my laptop (which is connected to my home network).
       • I cannot ping any of the br1 containers (I can still ping them from the openvpn-as container though).
     I used wireshark to look at packets leaving my server in a couple of scenarios:
       • Ping an invalid IP on the network: an ARP packet trying to find the IP -> expected.
       • Ping one of the br1 containers: an ICMP packet for the PING request with "no response found" -> isn't this strange? I was expecting these packets to be routed directly to the br1 containers.
     Any ideas? Edit: In the network settings of openvpn, I don't see br1. Is that expected? When I run 'ifconfig' inside the openvpn-as container, I see all the available interfaces (as0t0, br0, br1, docker0, eth0, eth1, lo, virbr0, vnet0). However, br0 has an IPv4 address and a few IPv6 addresses defined, while br1 only has the IPv6 ones. Expected? I assume that's why I don't see br1 in the network settings. (A tcpdump sketch for narrowing this down is below.)
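     To watch the same traffic without wireshark, tcpdump on the unRAID host can show whether the echo requests ever make it onto br1. A sketch, assuming one of the br1 containers sits at 192.168.0.60 (a placeholder):

     ```sh
     # On the host: do ICMP requests from the VPN client reach br1 at all?
     tcpdump -ni br1 icmp

     # And is anything answering ARP for the container's address?
     tcpdump -ni br1 arp and host 192.168.0.60

     # Inside the openvpn-as container, confirm the IPv4 situation per
     # interface (a missing IPv4 address on br1 matches the symptom).
     docker exec -it openvpn-as ip -4 addr show
     ```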
  10. I tried all three options:
       • Custom:br1 - the VPN server does not start; it gives the "service failed to start due to unresolved dependencies" error that everyone has been complaining about.
       • Bridge mode - the VPN server starts, but all the custom:br1 containers are unreachable from the VPN client. I tried to ping/telnet the custom:br1 containers from the openvpn-as container's shell, but couldn't.
       • Host mode - the VPN server starts, and I can ping/telnet the custom:br1 containers successfully from the openvpn-as container's shell. However, all the custom:br1 containers are still unreachable from the VPN client.
     Edit: @ken-ji any ideas?
  11. I read your earlier posts, and they said as much. Interestingly, I initially configured the container for bridge mode and it worked. I changed to host mode and it still worked. Maybe it's a glitch on my side!
  12. Check your container configuration. I usually get this error when I am not using host or bridge network mode (as also described by other users earlier in this thread).
  13. Setup:
       • 2 NICs
       • Followed @ken-ji's solution to sidestep the macvlan security restriction:
         • No bonding between interfaces
         • No IP assigned to eth1
         • Replaced docker's eth0/br0 settings with eth1/br1
         • Moved all containers that were on custom:br0 to custom:br1
       • Set up the openvpn-as container:
         • Version: 2.6.1-ls11 (seems to be the most stable)
         • Bridge mode
         • VPN settings: added my subnet to the routing section
     Test: my openvpn client can connect to the server, and I can reach my unRAID GUI. => Problem: I cannot access any of the containers running on custom:br1. I went through the last ~25 pages of this topic. There were a few posts complaining about a similar issue, and then they went silent; I couldn't see any replies to their questions (unless I missed them, of course). Any help is appreciated. @jfrancais you seem to have had a similar issue. Did you ever manage to resolve it?
     Edit 1: I tried to ping/telnet the custom:br1 containers from the openvpn-as container's shell, but couldn't. I believe this means a problem with the network settings. I am sure I followed the steps that @ken-ji outlined.
     Edit 2: Changing the openvpn-as container to host mode allows me to ping/telnet custom:br1 containers from the shell. However, VPN clients still cannot connect to custom:br1 containers! (A sketch of the macvlan restriction workaround follows.)
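     For context, the restriction being sidestepped is that the host (and anything routed through it, such as VPN clients when the server runs in host mode) cannot talk to containers on a macvlan/custom network over the same parent interface. One generic workaround, which I am not claiming is @ken-ji's exact recipe, is a macvlan "shim" interface on the host; a sketch, with the shim IP and container IP as placeholders:

     ```sh
     # Host-side macvlan shim so host <-> macvlan-container traffic is allowed.
     ip link add br1-shim link br1 type macvlan mode bridge
     ip addr add 192.168.0.250/32 dev br1-shim
     ip link set br1-shim up

     # Route the container addresses (here a single host, 192.168.0.60)
     # via the shim instead of the blocked parent interface.
     ip route add 192.168.0.60/32 dev br1-shim
     ```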
  14. @binhex Sorry for the delayed response. I just got around to looking at this. Are you 100% sure that that's the case? I have two points against your explanation:
       1- I have a MariaDB container with a fixed IP, and I connect other containers to it using that fixed IP.
       2- I tested Sonarr with TransmissionVPN. I used the fixed IP of the TransmissionVPN container and it works.
     Something is different about how qbittorrentVPN and delugeVPN behave. I have no idea what it is though.
  15. Here is sonarr's config: [screenshot] And here is qbittorrent's connection config in Sonarr: [screenshot] @binhex Thanks for your help! (Note: the extra downloads path that I added is "after the fact", to test with transmission.)
  16. @binhex Do you have any idea what might be going on here? I have the same problem with delugevpn.
  17. I have attempted to narrow this down even further. My laptop is on the same subnet as both containers (192.168.0.0/24). From my laptop, I can open the UI and log in. For debugging purposes, I used a curl POST request to log in from my laptop, and that works too. From sonarr's console, I can ping the qbittorrentvpn container, but the same curl login request just times out. Does that mean that qbittorrentvpn is rejecting the requests from the sonarr container? Why would that happen when it accepts them from my laptop? (The curl request is sketched below.)
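     The login request in question, roughly: qBittorrent's WebUI API v2 exposes /api/v2/auth/login as a form POST. The address (192.168.0.xxx, kept redacted as in the later post) and credentials are placeholders:

     ```sh
     # Works from the laptop, times out from inside the sonarr container:
     curl -v --max-time 10 \
       -d 'username=admin&password=<password>' \
       http://192.168.0.xxx:8080/api/v2/auth/login

     # Same request issued from sonarr's side, where ping succeeds:
     docker exec -it sonarr curl -v --max-time 10 \
       -d 'username=admin&password=<password>' \
       http://192.168.0.xxx:8080/api/v2/auth/login
     ```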
  18. My sonarr container cannot connect to my delugevpn container when the VPN is enabled. I use PIA. When VPN_ENABLED is set to 'no', sonarr can connect to deluge without issues. I also tried VPN_ENABLED set to 'yes' with STRICT_PORT_FORWARD set to 'no'; sonarr fails to connect to deluge in that case too. I use fixed IPs for the docker containers. Disclaimer: I have the same issue with qbittorrentvpn. It seems that deluge is more popular, hence I gave it a try too (and the cross-posting). I will update both posts once I figure this out. (An iptables check is sketched below.)
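     Since the symptom only appears with the VPN up, my first suspect would be the container's kill-switch firewall; the binhex VPN containers take a LAN_NETWORK parameter for exactly this. A way to inspect what the running container actually allows (container name is a placeholder):

     ```sh
     # Dump the firewall rules the VPN container applied; traffic from the
     # LAN_NETWORK subnet should be accepted on the WebUI port.
     docker exec -it binhex-delugevpn iptables -S

     # Check which LAN_NETWORK value the container was started with.
     docker inspect binhex-delugevpn --format '{{.Config.Env}}' \
       | tr ' ' '\n' | grep LAN_NETWORK
     ```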
  19. That's the first thing I double-checked after reading this thread. It is set to 192.168.0.0/24. Update: I gave delugevpn a shot and I am getting the exact same timeout behavior. 2nd update: If I configure the container with VPN_ENABLED set to 'no', sonarr can connect normally to qbittorrent. 3rd update: To rule out port forwarding as the issue, I tested with VPN_ENABLED set to 'yes' and STRICT_PORT_FORWARD set to 'no'. Sonarr still cannot connect to qbittorrent. I can't figure out what's wrong. Did anyone get this working using fixed IPs for the two containers?
  20. I have the exact same problem. The VPN is connected, I can use the web UI, I can download stuff, etc. However, all my attempts to connect sonarr to qbittorrent have failed. My qbittorrentvpn docker uses a custom bridge with a fixed IP, 192.168.0.xxx. I use the following in sonarr: Host: 192.168.0.xxx, Port: 8080, plus my qbittorrentvpn credentials. Can someone help me figure this out? Update: checking the sonarr logs, I see this: The operation has timed out: 'http://192.168.0.xxx:8080/api/v2/app/webapiVersion'. Note: copying and pasting the URL into a browser works just fine. (A quick reproduction from inside the sonarr container is sketched below.)
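     The endpoint from the log is handy for reproducing this outside of sonarr itself (192.168.0.xxx is kept redacted, as in the post):

     ```sh
     # From the host or laptop this returns the API version, just like the browser:
     curl --max-time 10 http://192.168.0.xxx:8080/api/v2/app/webapiVersion

     # Run the same request from inside the sonarr container; if this
     # times out while the host succeeds, the block is specifically
     # container-to-container traffic.
     docker exec -it sonarr curl --max-time 10 \
       http://192.168.0.xxx:8080/api/v2/app/webapiVersion
     ```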
  21. My pihole container is working properly; I can see the dashboard updating, etc. I followed the exact same steps mentioned above and then restarted the container. However, I still cannot use the local hostnames; they are not recognized. Update: I can use the hostname "pi.hole", which apparently comes pre-configured inside the container. I still cannot access the ones I manually added though. Another update: Got this working once I started using proper hostnames with "dots". As far as I understand, names without a domain (e.g., gallery) should also work. I am happy it works now, and I cannot justify, to myself, the time/effort of looking into why non-domain names do not work. (A dig test that separates Pi-hole from the client is sketched below.)
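     One way to tell whether the single-label names fail in Pi-hole itself or in the client's resolver is to query the Pi-hole directly; many client stacks refuse or rewrite single-label lookups even when the server would answer them. A sketch, assuming the Pi-hole is at 192.168.0.53 and "gallery" / "gallery.home" are the locally added names (all placeholders):

     ```sh
     # Ask the Pi-hole directly, bypassing the client's resolver:
     dig @192.168.0.53 +short gallery
     dig @192.168.0.53 +short gallery.home

     # If the direct queries answer but normal resolution fails, the
     # client side (search domains / single-label handling) is the culprit:
     nslookup gallery
     ```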