Jenardo


Posts posted by Jenardo

  1. 21 hours ago, aptalca said:

    On the duckdns website, set the IP to the local IP of your unraid server

    @aptalca Now it works except that everything goes to the main unraid UI.

    XYZ.duckdns.org

    sonarr.XYZ.duckdns.org

    bla123.XYZ.duckdns.org

    Basically, anything ending in "XYZ.duckdns.org" goes to unraid's main web page.

    Is this because of the "wildcard" that I set for subdomains in the docker's configuration?

    By the way, I am using the subdomain conf files as I have described in the previous post.
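
    For what it's worth, here are the checks I can run on the host if that helps narrow it down (assuming the container is simply named letsencrypt; adjust the name to match your setup):

    # Which host ports the proxy actually listens on
    docker port letsencrypt
    # Confirm the sonarr proxy conf is in place (not just the .sample)
    docker exec letsencrypt ls /config/nginx/proxy-confs
    # Look for nginx or certificate errors
    docker logs letsencrypt | tail -n 50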

     

  2. Quote

    Github and docker hub pages linked in the first post have the most up to date info

     

    You can also check out this blog article for some examples: https://blog.linuxserver.io/2019/04/25/letsencrypt-nginx-starter-guide/

    @aptalca just moving this to the right thread.

    To recap, I wanted to set up letsencrypt to be used as an internal reverse proxy without exposing it outside my LAN. Accordingly, I cannot use http validation, so I attempted duckdns validation instead. Here is what I have done:

    • create a custom user network
    • moved sonarr to the new custom network
    • set up the letsencrypt container with the following params:
      • Network: new custom network
      • domain name: XYZ.duckdns.org
      • subdomains: wildcard
      • Only subdomains: true
      • validation: duckdns
      • DUCKDNSTOKEN: my duckdns token
    • added a sonarr.subdomain.conf to proxy-confs (made sure it points to the correct sonarr container name)

    The log from the letsencrypt container looks fine; I basically don't see any errors. However, trying to access sonarr.XYZ.duckdns.org yields nothing (not found). Am I missing anything? (One thing I can think of: when a reverse proxy is set up for external access, requests from outside are routed to the reverse proxy. In this case there are no external requests, so what directs the requests to the reverse proxy? Also, do I need to add any DNS records to my duckdns domain?)
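
    For reference, I believe the container setup above corresponds roughly to the following docker run (PUID/PGID, paths, timezone and the token are placeholders):

    docker run -d --name=letsencrypt \
      --cap-add=NET_ADMIN \
      --net=my-custom-net \
      -e PUID=99 -e PGID=100 -e TZ=Europe/Berlin \
      -e URL=XYZ.duckdns.org \
      -e SUBDOMAINS=wildcard \
      -e ONLY_SUBDOMAINS=true \
      -e VALIDATION=duckdns \
      -e DUCKDNSTOKEN=<my duckdns token> \
      -v /mnt/user/appdata/letsencrypt:/config \
      -p 443:443 -p 80:80 \
      linuxserver/letsencrypt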

     

  3. On 9/27/2019 at 7:07 PM, aptalca said:

    Not if you do dns or duckdns validation

    @aptalca Maybe this is the wrong topic, let me know if I should be moving this somewhere else.

    I already have a duckdns domain set up. And I assume we have established that I cannot use http validation, because I do not want to expose a public port for my reverse proxy and only want to use it internally.

    Is there a guide on how I can set this up? I tried "quickly" searching the letsencrypt topic but couldn't find what I was looking for. Thanks.

  4. On 9/23/2019 at 6:41 PM, aptalca said:

    Reverse proxy for pretty addresses for containers. You don't have to expose your reverse proxy url outside of your lan.

     

    You can vpn in like you do, then enter sonarr.domain.com in the browser and you get sonarr

    @aptalca Correct me if I am wrong, but letsencrypt needs to verify ownership of a domain in order to issue a certificate. This is not a problem in itself, but doesn't it mean that I will have to forward ports to the letsencrypt container so that it can verify my publicly resolvable domain? How can I avoid exposing the reverse proxy outside my LAN, as you suggested, in that case?
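
    (If I understand DNS validation correctly, the check would happen via a DNS TXT lookup rather than an inbound connection, roughly as sketched below, which would explain why no port forward is needed. Is that right?)

    # With dns/duckdns validation, the client publishes a challenge token as a
    # TXT record (duckdns does this via its API), and Let's Encrypt only has to
    # look it up from the outside:
    dig +short TXT _acme-challenge.XYZ.duckdns.org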

  5. 8 hours ago, aptalca said:

    I don't understand why you'd need a local dns server. With bridge networking, everything is going to be on your server ip. You just reach them at different ports.

     

    Just set up our letsencrypt image and reverse proxy them at subdomains. Then set up heimdall as your homepage with pretty buttons for them all, voila

    Do you mean using a reverse proxy together with the VPN?

    I assume you actually mean using a reverse proxy without a VPN. Isn't that less secure? And why would I set up reverse proxying if I am the only person who needs remote access to my services?

  6. 3 hours ago, aptalca said:

    If all you want are those two things, why don't you just run everything with bridge networking? No need to overcomplicate your setup.

     

    I only have two containers on macvlan, and the only reason for that is, my whole internet connection goes through a vpn and I wanted to be able to bypass the vpn gateway for those two containers. I do it via an IP based routing rule in pfsense. Everything else is on bridge.

    When I first set up my environment, custom bridges were there as an option, so I said "why not?" It seemed much cleaner to deal with IPs than with ports, and it also seemed that everything would be a piece of cake from there ... obviously not the case.

     

    Honestly, I didn't even consider my options at the time; this seemed easy and straightforward. Maybe I can just do bridge networking with a local DNS server so I can use hostnames instead of host-ip:port. But this time I would like to consider my options. So far:

    1. Containers on custom bridge with openvpn-as
    2. All containers with bridge networking and use something as a local dns server

    What other options do I have?

    And if I ever decide to give public access to any of the containers, is it just a matter of throwing in a reverse proxy? Or do I have to take this into account now somehow?

     

    Edit 1: BTW, won't a DNS container need a separate IP so that I can properly configure my router's DNS servers?

  7. 7 hours ago, ken-ji said:

    Finally took a look and I probably won't be using this thing as a docker - it requires way more capabilities than I'd like to grant it.

    Its very nature is that the docker needs to be in host mode to create multiple bridges, connect the client to a bridge, and then mess with the firewall rules to allow whatever you have. I'm sure I was hitting conflicts with my setup, but yeah, I never got it to work with my LAN at all. This might be one of those applications I'd rather run as a VM. But I might have a better look at this when I have time, hoping somebody else works out the issue.

     

    In hindsight, I just realized the reason I couldn't even get it to work is that I set the thing to routed mode for everything, but OpenVPN-AS does not readily show you all the subnets it generated, which needed to be programmed into my router. Talk about complicated if you are trying to do all of this remotely. :P

     

    Thanks for taking the time. The bright side is that it wasn't a bad configuration on my end.

    And it seems, from what you have said, that it's getting more complicated than it needs to be.

    Regarding the "hoping somebody else works out the issue" part, I have seen a few complaints sitting unanswered for months now ... so I am not very optimistic about that.

     

    With that said, I have a question for you.

    What I want is simply the following:

    • Easily addressable containers
    • Remotely reach into my home network including VMs and containers

    My thought process was: put all containers on a custom bridge so that they get their own IPs and are easily addressable, and use the openvpn-as container to VPN into my network and reach VMs and containers. Obviously, this is not working at the moment ... or let's say it is getting more complicated than it needs to be. So the question is: what simpler alternative setup should I use to achieve this?
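
    (For concreteness, my understanding is that an unraid "custom bridge" with its own IPs is essentially a macvlan network; on plain docker it would look roughly like the sketch below, with the subnet, gateway, parent interface, names and addresses all placeholders:)

    # Create a network whose containers get their own LAN IPs
    docker network create -d macvlan \
      --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
      -o parent=br0 customlan
    # Attach a container to it with a fixed address
    docker run -d --name=sonarr --net=customlan --ip=192.168.0.201 linuxserver/sonarr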

  8. 22 hours ago, ken-ji said:

    I'm going to have to give this a try. I'm not using the openvpn-as container myself (though I used to) as I've left VPN capabilities to a VPS that my router has an IPSEC connection with - since my provider is slowly rolling out CGNAT and I got selected as an early bird with no way out it seems. (Business grade plans need you to be a real business and no other non CGNAT ISP provider in the area)

    I appreciate the effort.

    The thing is, this seems like a fairly standard "required" setup to me: containers have their own IPs, and openvpn gives clients access to both the host and the containers. Nevertheless, nobody seems to be complaining about it (or just a handful who have gone silent).

    Also, I would have tested with openvpn-as running on custom:br1; however, the container does not seem to allow that anymore (unresolved dependencies error). Should I revert to a much older version of the container, for instance? I don't even know whether that would work. I can't really think of a decent solution here.

  9. On 9/6/2019 at 3:48 PM, Jenardo said:

    I tried all three options:

    • Custom:br1 - vpn server does not start ... gives the "service failed to start due to unresolved dependencies" error that everyone has been complaining about.
    • Bridge mode - vpn server starts but all the custom:br1 containers are unreachable from the vpn client. I tried to ping/telnet the custom:br1 containers through the openvpn-as container's shell, but couldn't.
    • Host mode - vpn server starts and I can ping/telnet the custom:br1 containers successfully from the openvpn-as container's shell. However, all the custom:br1 containers are unreachable from the vpn client.

    Edit: @ken-ji any ideas?

    @ken-ji here are a few things that I found while attempting to debug the issue. I am sticking to host mode since it's the most promising so far. I am testing this through a terminal on my phone, which is connected to the OpenVPN server.

    • I can ping the server, a VM on br0, and my laptop, which is connected to my home network.
    • I cannot ping any of the br1 containers (can still ping them from the openvpn-as container though)
    • I used wireshark to take a look at packets leaving my server in a couple of scenarios (a tcpdump sketch to reproduce this follows the list):
      • Ping an invalid IP on the network -- ARP packet to find the IP -> Expected
      • Ping one of the br1 containers -- ICMP packet for the PING request with a "no response found" -> Isn't this strange? I was expecting these packets to be routed directly to the br1 containers.
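
    (In case anyone wants to reproduce that capture without wireshark, something like the following on the unraid host should show the same thing; the interface names are taken from the ifconfig output in the edit below:)

    tcpdump -ni as0t0 icmp   # pings as they arrive from the VPN client
    tcpdump -ni br1 icmp     # whether they ever make it onto the br1 side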

    Any ideas?

     

    Edit:

    • In the network settings of open vpn, I don't see br1. Is that expected?
    • When I do an 'ifconfig' inside the openvpn-as container, I see all the available interfaces (as0t0, br0, br1, docker0, eth0, eth1, lo, virbr0, vnet0). However, br0 has an IPv4 address and a few IPv6 addresses defined, while br1 only has the IPv6 ones. Is that expected? I assume that's why I don't see br1 in the network settings.
  10. On 9/6/2019 at 9:09 AM, ken-ji said:

    The only thing I see that could be wrong is that your openvpn-as container is in host mode, right? If so, make sure it's bound to br0, not eth0 (I think that's how bridges should be used). Does the openvpn-as container work in custom network mode (set to br1 with its own IP address)?

    I tried all three options:

    • Custom:br1 - vpn server does not start ... gives the "service failed to start due to unresolved dependencies" error that everyone has been complaining about.
    • Bridge mode - vpn server starts but all the custom:br1 containers are unreachable from the vpn client. I tried to ping/telnet the custom:br1 containers through the openvpn-as container's shell, but couldn't.
    • Host mode - vpn server starts and I can ping/telnet the custom:br1 containers successfully from the openvpn-as container's shell. However, all the custom:br1 containers are unreachable from the vpn client.

    Edit: @ken-ji any ideas?

  11. 6 hours ago, aptalca said:

    Host networking doesn't work on the latest unraid. Plenty of posts in this thread if you search.

    Use bridge networking

    I read your earlier posts and saw that.
    Interestingly, I initially configured the container for bridge mode and it worked. I then changed it to host mode and it still worked. Maybe it's a glitch on my side!

  12. 11 hours ago, Jeffarese said:

    I'm getting this error: 

     

    
    service failed to start due to unresolved dependencies: set(['user', 'iptables_live', 'iptables_openvpn'])

     

    Has anybody been able to make this work with the latest versions?

     

    I've already tried an old version recommended in the thread.

     

     

    Check your container configuration. I usually get this error when I am not using host or bridge network mode (as also described by other users earlier in this thread).
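
    (As a rough illustration of what I mean, assuming the linuxserver image and with ports, volumes and capabilities omitted; the network mode is the part that matters, run one of these, not all three:)

    docker run -d --name=openvpn-as --net=bridge linuxserver/openvpn-as   # bridge: worked for me
    docker run -d --name=openvpn-as --net=host   linuxserver/openvpn-as   # host
    docker run -d --name=openvpn-as --net=br1    linuxserver/openvpn-as   # custom network: gave me this exact error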

  13. Setup:

    • 2 NICs
    • Followed @ken-ji's solution to sidestep the macvlan security restriction
      • No bonding between interfaces
      • No IP assigned to eth1
      • Replaced docker's eth0/br0 settings with eth1/br1
      • Moved all containers that were on custom:br0 to custom:br1
    • Set up the openvpn-as container
      • Version: 2.6.1-ls11 (seems to be the most stable)
      • Bridge mode
    • VPN settings
      • Added my subnet to the routing section (what I believe this corresponds to is sketched right after this list)
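
    (As far as I can tell, the routing entry above boils down to the standard OpenVPN server directive below; the subnet is mine, adjust as needed:)

    # Tell connecting VPN clients to route my LAN through the tunnel
    push "route 192.168.0.0 255.255.255.0"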

    Test:

    • My openvpn client can connect to the server
    • I can reach my unraid GUI
    • => Problem: I cannot access any of the containers running on custom:br1

    I went through the last ~25 pages of this topic. There were a few posts complaining about a similar issue, and then they went silent. I couldn't see any replies to their questions (unless I missed them, of course). Any help is appreciated.

     

    @jfrancais you seem to have had a similar issue. Did you ever manage to resolve it?

     

    Edit 1: I tried to ping/telnet the custom:br1 containers through the openvpn-as container's shell, but couldn't. I believe this points to a problem with the network settings. I am sure I followed the steps that @ken-ji outlined.

     

    Edit 2: Changing the openvpn-as container to host mode allows me to ping/telnet custom:br1 containers through the shell. However, vpn clients still cannot connect to custom:br1 containers!

  14. On 4/11/2019 at 4:50 AM, binhex said:

    I'm assuming from the sonarr config screen (not unraid) that 192.168.0.165 is the IP address you have assigned to qbittorrentvpn on the custom bridge 192.168.0.0/24, right? If so, this is the problem: you cannot talk directly from fixed IP to fixed IP. It's still a bridge network, so you need to specify the host's IP address (as in unraid's IP).

     

    All a custom bridge allows you to do is define the INTERNAL network for the container's virtual adapter; it's not a fixed IP on the host's network.

    @binhex Sorry for the delayed response. I just got around to looking at this.

    Are you 100% sure that that's the case?

    I have two points against your explanation:

    1- I have MariaDB in a container with a fixed IP, and I connect other containers to it using that fixed IP.

    2- I tested Sonarr with TransmissionVPN. I used the fixed IP of the TransmissionVPN container and it works. Something is different about how qbittorrentVPN and delugeVPN behave; I have no idea what it is, though.

  15. 4 hours ago, binhex said:

    I got around to testing this: created a custom bridge and then attached sonarr and qbittorrentvpn to it. I then plugged the details into sonarr and could successfully connect to qbittorrent, so I'm confident the issue is misconfiguration. Please post a screenshot of your sonarr config.

    Here is sonarr's config:

    [screenshot of the Sonarr configuration attached]

     

    And here is qbittorrent's connection config in Sonarr:

    [screenshot of qbittorrent's connection settings in Sonarr attached]

     

    @binhex Thanks for your help!

    (Note: the extra downloads path that I added is "after the fact", to test with transmission.)

  16. On 4/7/2019 at 12:35 AM, Jenardo said:

    I have attempted to narrow this down even further.

    My laptop is on the same subnet as both containers (192.168.0.0/24).

    From my laptop, I can open the UI and login. And for debugging purposes, I used a curl post request to login through my laptop and that works too.

    Through sonarr's console, I can ping the qbittorrentvpn container. However, the curl login request just times out.

    Does that mean that the qbittorrentvpn is just rejecting the requests from the sonarr container? Why would that happen when it accepts them from my laptop?

    @binhex Do you have any idea what might be going on here? I have the same problem with delugevpn.

  17. I have attempted to narrow this down even further.

    My laptop is on the same subnet as both containers (192.168.0.0/24).

    From my laptop, I can open the UI and login. And for debugging purposes, I used a curl post request to login through my laptop and that works too.

    Through sonarr's console, I can ping the qbittorrentvpn container. However, the curl login request just times out.

    Does that mean that the qbittorrentvpn is just rejecting the requests from the sonarr container? Why would that happen when it accepts them from my laptop?
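
    (For reference, the login test looks roughly like the sketch below; the IP placeholder and credentials are just examples. From my laptop it returns immediately; from the sonarr console it times out:)

    # qBittorrent WebUI v2 login endpoint
    curl -v -d "username=admin&password=<password>" \
      http://192.168.0.xxx:8080/api/v2/auth/login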

  18. My sonarr container cannot connect to my delugevpn container when vpn is enabled. I use pia.
    When VPN_ENABLED is set to 'no', sonarr can connect to deluge without issues.

    I also tried VPN_ENABLED set to 'yes' and STRICT_PORT_FORWARD set to 'no'; sonarr fails to connect to deluge in this case too.

    I use fixed IPs for the docker containers.

     

    Disclaimer: I have the same issue with qbittorrentvpn. It seems that deluge is more popular, hence I gave it a try too (and the cross-posting). I will update both posts once I figure this out.
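
    (For reference, the settings being discussed correspond roughly to these container parameters; the network name, IP and image are placeholders from my setup, and other options are omitted:)

    docker run -d --name=delugevpn \
      --cap-add=NET_ADMIN \
      --net=customlan --ip=192.168.0.xxx \
      -e VPN_ENABLED=yes \
      -e VPN_PROV=pia \
      -e STRICT_PORT_FORWARD=no \
      -e LAN_NETWORK=192.168.0.0/24 \
      binhex/arch-delugevpn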

  19. On 4/5/2019 at 2:31 AM, binhex said:

    I would suspect that you don't have lan_network defined correctly, what do you currently have it set to?

    That's the first thing I double checked after reading this thread.

    It is set to 192.168.0.0/24

     

    Update: I gave delugevpn a shot. I am getting the exact same timeout behavior.

     

    2nd Update: If I configure the container with VPN_ENABLED set to 'no', sonarr can connect normally to qbittorrent.

     

    3rd update: To rule out port forwarding as the issue, I tested with VPN_ENABLED set to 'yes' and STRICT_PORT_FORWARD set to 'no'. Sonarr still cannot connect to qbittorrent.

     

    I can't figure out what's wrong. Did anyone get this working using fixed IPs for the two containers?

  20. On 12/5/2018 at 9:44 PM, Dolce said:

    I'm trying to access qbittorrent via sonarr and I've tried multiple attempts at remotely connecting. Is there a setting that must be enabled via qBittorrent for this to work? I am able to remotely log in via the web GUI but no luck with sonarr. Does anyone else have this working?

     

    Thanks.

    I have the exact same problem. VPN is connected. I can use the web UI. I can download stuff, etc.

    However, all my attempts to connect sonarr to qbittorrent have failed.

    My qbittorrentvpn docker uses a custom bridge with a fixed IP 192.168.0.xxx

    I use the following in sonarr:
    Host: 192.168.0.xxx
    Port: 8080
    + my qbittorrentvpn credentials.

    Can someone help me figure this out?

     

    Update: Checking the sonarr logs, I see this:
    The operation has timed out: 'http://192.168.0.xxx:8080/api/v2/app/webapiVersion'
    Note: copying and pasting the URL into a browser works just fine.

  21. On 4/2/2019 at 7:20 AM, L0rdRaiden said:

    add this

    
    addn-hosts=/etc/pihole/lan.list

    go to /Pihole

    create lan.list

    add your local server, for example:

    
    192.168.1.220 abc.duckdns.org

    reboot

    My pihole container is working properly. I can see the dashboard updating, etc.

    I followed the exact same steps mentioned above. Then restarted the container.

    However, I still cannot use the local hostnames. They are not recognized.

     

    Update: I can use the hostname "pi.hole", which apparently comes pre-configured inside the container.
    I still cannot access the ones I manually added, though.

     

    Another update: I got this working when I started using proper hostnames with "dots". As far as I understand, names without a domain (e.g., gallery) should also work, but I am happy it works now, and I cannot justify to myself the time/effort of looking into why non-domain names do not work :)
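
    (In case it helps anyone else, the working lan.list ended up looking roughly like this; the IPs and names are just examples:)

    # /etc/pihole/lan.list - plain hosts-file format read via addn-hosts
    192.168.1.220 abc.duckdns.org
    192.168.1.230 gallery.home     # a name with a dot: resolves fine
    192.168.1.230 gallery          # a bare name: did not resolve for me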