aptalca

Community Developer

Posts posted by aptalca

  1. 4 hours ago, martinjuhasz said:

    Is there some way to expose a locally running webserver that I started in the code-server terminal for development?

    I have code-server running with SWAG as a reverse proxy, and everything is fine. I have a command to spin up a local web server for web-dev needs (hot reloading, etc.). The command runs fine and starts a server on some port (let's say 8000). Of course, I'm unable to access that website.

     

    Is there some recommended way to do this with code-server? I was looking for plugins that would run a headless Chrome or something inside VS Code, but they all seem to connect on the client side between browser and server. I also thought about exposing a specific port on the docker container and on the proxy, but without luck. Even though a web server is running on port 8000 in the code-server terminal, I don't seem to be able to access it locally (without SWAG), even though the port is exposed in the docker config.

    Code-server already has that functionality built in: it can proxy a service running on a port by using the port number as a sub-subdomain.

     

    Look into the PROXY_DOMAIN env var.
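    As a rough sketch of how that works (domain names and paths here are placeholders, and the exact variable behavior should be confirmed against the image's readme): with a proxy domain set, a dev server on port 8000 inside the container becomes reachable at a port-prefixed sub-subdomain.

```shell
# Illustrative docker run for the linuxserver code-server image.
# code-server.example.com is a placeholder for your own subdomain.
docker run -d \
  --name=code-server \
  -e PUID=1000 -e PGID=1000 \
  -e PROXY_DOMAIN=code-server.example.com \
  -p 8443:8443 \
  -v /path/to/config:/config \
  lscr.io/linuxserver/code-server

# A dev server listening on port 8000 inside the container should then
# be reachable at https://8000.code-server.example.com
# (this assumes wildcard DNS and a cert covering *.code-server.example.com).
```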

  2. 31 minutes ago, bobokun said:

    Anyone know how to get Heimdall working locally without any reverse proxy? I don't want to reverse proxy sonarr, radarr, and tautulli just to get Heimdall to work. However, whenever I click on any of the applications in Heimdall, it links me to an invalid URL in the format

     

    http://UNRAIDIP:HEIMDALPORT/UNRAIDIP:APPPORT

    For example clicking on tautulli from heimdall brings me to a page

    http://10.0.0.1:1234/10.0.0.1:8181

     

    where I want it to only open up http://10.0.0.1:8181 to go to tautulli

    Am I configuring something wrong?

    Enter the full URL, including http://, in the app settings.

  3. 15 hours ago, frodr said:

    I am not able to get the Nvidia GPU in the app. 

     

    Adding: 

    <slot id='1' type='GPU'/>

     

    to:

     

      <!-- Folding Slots -->
      <slot id='0' type='CPU'/>
    </config>

     

    only the AMD GPU in the server appears, for some reason. Tried with slot id 2 and 0; not working.

     

    Ideas?

     

    Cheers,

     

    Frode

     

     

     

    Screenshot 2020-08-26 at 23.45.38.png

    This container should detect all GPUs active on the host system, whether or not they're accessible by the container, because (iirc) it queries the kernel modules/drivers.

     

    If the Nvidia card is available on the host (driver loaded), it should be detected and listed here even without the GPU UUID passed. But without the UUID passed, the container won't be able to access it.

     

    If you're not seeing the card listed, it suggests the driver is not loaded on the host. Perhaps it is stubbed, or passed through to a VM.
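    A quick way to check this on the host (assuming a typical Linux host with the Nvidia driver package installed):

```shell
# Check whether the nvidia kernel modules are loaded on the host
lsmod | grep -i nvidia

# If the driver is loaded, nvidia-smi lists the card and its UUID,
# which is the value you pass into the container
nvidia-smi --query-gpu=name,uuid --format=csv

# If the card is stubbed (e.g. bound to vfio-pci) or passed through
# to a VM, it will not show up here and containers cannot see it either
```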

  4. 58 minutes ago, cardo said:

    Thanks for the response. So if I have a reverse proxy set up for Ombi like request.domain.com, will adding the following to ombi.subdomain.conf block someone from connecting to domain.com too?

     

    allow 192.168.0.0/16;

    deny all;

     

    I have the SWAG container set to subdomains only, with a CNAME record only for request.domain.com.

    In your previous question you were asking about subfolders; they are handled differently.

     

    The basics: server blocks are parents of location blocks. If you put the deny in the server block for ombi, it will apply to that subdomain and all of its child location blocks.

     

    A subfolder proxy conf is a child location block of the main domain's server block.

     

    So to answer your last question: if you add the allow/deny into the ombi subdomain's server block, it will only affect that subdomain, not the main domain, as the main domain is served from a different server block.
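    A minimal sketch of that structure (domain names are placeholders, not the poster's actual setup): the allow/deny in a subdomain's server block scopes only to that block, while a subfolder conf is just a location block inside the main domain's server block.

```nginx
# Subdomain conf (e.g. ombi.subdomain.conf): its own server block,
# so allow/deny here affects only request.example.com
server {
    server_name request.example.com;

    allow 192.168.0.0/16;  # LAN only
    deny all;

    location / {
        # proxy_pass to ombi ...
    }
}

# The main domain is a *different* server block; a subfolder proxy
# conf becomes a child location block of it, e.g.:
server {
    server_name example.com;

    location /ombi {
        # allow/deny placed here would affect only example.com/ombi
    }
}
```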

  5. 11 hours ago, cardo said:

    Hi All,

     

    I set up the SWAG docker container last weekend and have reverse proxied all of the services I want except one, Pi-hole. I had it working when I was using a physical Pi-hole on my 192.168.0.0 network, and I have Pi-hole running fine when I use the custom network as per @SpaceInvaderOne's video, but I am unable to use the network that is shared with the SWAG container for all of the reverse proxy containers, as it is on the internal 172.18.0.0 network and I need it to be on my 192.168 network. The other issue is that Unraid is already using ports 80 and 443. I know I can change those, but port 67 is still being used by something and I'm not sure what. I tried searching this thread, but didn't have much luck. I'm certain it's something easy I'm missing, but just don't know what.

     

    EDIT: After some more digging I determined that libvirt binds to port 67, which makes Pi-hole not start unless I disable my VM manager. I was able to get Pi-hole to work by specifying the letsencrypt custom interface and specifying the IP for the Pi-hole docker container, but now VM Manager won't start because the Pi-hole docker now has port 67 bound.

     

    I also just realized that my Pi-hole is using the Unraid default internal IP and not the one I specified, so that won't work.

     

    Any recommendations/best practices here?

     

     

    Also, I set up Plex to reverse proxy via a subfolder as required, so I'm reverse proxying the root domain. Is there a .conf file where I can add the allow/deny entry so the root site domain.com is only accessible from my internal network? I have all of the other services locked down via the appropriate files in proxy-confs.

    If you give pihole its own IP, it will use the macvlan network type. That type blocks connections between the container and the host (and everything else bridged on the host) as a security feature, so SWAG won't be able to connect to pihole. We highly recommend running pihole on bare metal (an RPi gets the job done) instead of in docker.

     

    The subfolder confs get included in the main server block in the default site conf. You can edit that.
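    In SWAG's default site conf, that inclusion looks roughly like this (an abridged sketch; paths and directives may differ between versions), so a restriction added here covers the root domain and every subfolder app:

```nginx
# /config/nginx/site-confs/default (abridged sketch)
server {
    listen 443 ssl;
    server_name _;

    # A restriction placed here covers the root domain
    # and all subfolder apps:
    # allow 192.168.0.0/16;
    # deny all;

    # Every *.subfolder.conf becomes a location block of this server
    include /config/nginx/proxy-confs/*.subfolder.conf;
}
```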

  6. 1 hour ago, EvilTiger said:

    2 issues ...

    1) Downloading GeoIP2 City database.
    tar: invalid tar magic

     I've added my API key for MaxMind, but I'm getting an "invalid tar magic" error.

     

    2) nginx isn't routing requests to my downstream app container on the same subnet

    The letsencrypt component seems to be working; nginx is just giving me the welcome page. Not seeing issues in the container log file. The app-specific sample .conf files have been changed to map to the specific container names in my environment (no other change than renaming the files to remove .sample).

     

    Any pointers as to why nginx isn't forwarding the request to my downstream app container? Or where to look for log files?

     

    Thank you in advance

    Likely your API key is not added correctly, or the key itself is invalid.

    If you're getting the default landing page, then likely the proxy conf is not activated correctly. Check its name, and check the server_name directive.

  7. 6 minutes ago, BoxOfSnoo said:

     

    I agree, and have had the same experience.  I think the intention of PlexPass releases is that they are stable, just released early to subscribers.

    Yeah, the time for a release to go from beta to stable is pretty short. I guess they're more like release candidates, but that could change; it's all up to Plex.

  8. 8 hours ago, Greygoose said:

    I have Let's Encrypt running on Nginx Proxy Manager, and I'm looking to come back to this docker as my Let's Encrypt certs are set to expire and they won't renew.

     

    I have followed SpaceInvaderOne's guide; when I start the letsencrypt docker I get

     

    Challenge failed for domain nextcloud.mydomain.co.uk

     

    ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

     

    The confusing part is that my dockers are currently working, so it's like the port forward settings work but aren't allowing certificate renewal.

     

     

    Check your port forwarding for port 80

    Follow this: https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/
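    One quick sanity check, run from outside your LAN (the domain is a placeholder for your own subdomain), is whether port 80 reaches nginx at all:

```shell
# From outside your network, verify port 80 reaches the container.
# nextcloud.example.com is a placeholder for your own subdomain.
curl -sI http://nextcloud.example.com/ | head -n 5
# A response from nginx (often a 301 redirect to https) means port 80
# is forwarded; a timeout points to port forwarding or firewall issues.
```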

  9. 18 minutes ago, HALPtech said:

    How can I update to the latest stable version of PMS 1.20 using this container? I don't want to be on the beta release because I don't want to test new features or bugs, but I'd like access to the new Plex Movie agent and installing this container from community apps only pulls down v1.19.

    There is no stable 1.20 yet. When there is, our builder will push out a new image with it.

    You can set the VERSION env var to docker and it will use the version the image ships with.
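    For the linuxserver Plex image, that would look something like this (illustrative docker run; paths are placeholders):

```shell
# Pin Plex to the build bundled in the image instead of auto-updating
docker run -d \
  --name=plex \
  --net=host \
  -e PUID=1000 -e PGID=1000 \
  -e VERSION=docker \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  lscr.io/linuxserver/plex
```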

  10. 3 hours ago, akamemmnon said:

    Hi, I set this up also using that guide. The problem is that when I'm using bridge mode it works, but when I use a custom proxynet (which letsencrypt is working on), it won't work. So if I want to keep the letsencrypt docker on the proxynet but still have it work with the openvpn docker (on bridge mode), I think we have to make some changes to the .conf file that you guys provide. What are those changes? I think this is the problem everyone is having. My ports on my router are open, so when openvpn is on the proxynet it is accessible through my domain, but when it's on bridge mode, it's not (I get an nginx error).

    You need to use the host IP and the host-mapped port in the proxy conf for it, instead of the container name and container port.
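    In a SWAG-style proxy conf, that change would look roughly like this (the IP and port are placeholders for your host's LAN IP and the port you mapped on the container; the variable names follow the pattern the sample confs use):

```nginx
# In the app's proxy conf, point at the host instead of the container:
location / {
    include /config/nginx/proxy.conf;

    # Instead of: set $upstream_app openvpn-as; set $upstream_port 943;
    set $upstream_app 192.168.1.100;  # host LAN IP (placeholder)
    set $upstream_port 943;           # host-mapped port (placeholder)
    set $upstream_proto https;

    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```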

  11. 36 minutes ago, sickb0y said:

    Anybody else using nzbToSickbeard.py to sort your TV shows? After 3 years without a single issue, the latest sabnzbd is giving me this error when processing TV shows:

     

    /usr/bin/env: ‘python2’: No such file or directory

     

    Was there a change in the python version or path in the latest docker? This issue started five days ago.

    Screenshot_2.jpg

    Python 2 is EOL and has therefore been removed. It's in the changelog.

  12. 7 hours ago, Jerky_san said:

    Sorry, I don't know why I said blank. HTTP challenge over port 80. Even though the port is totally accessible, it seems to have trouble completing the challenges, stating "Timeout during connect (likely firewall problem)". It will even fail the challenge on subdomains it just completed a few minutes ago when adding another subdomain to the list. But if I spin up NginxProxyManager as a test container just to see if other containers fail, it is able to challenge via HTTP without issue. To my knowledge, when it does the HTTP challenge, the server redirects to the letsencrypt folder where the challenges are stored, but for some reason it times out on one or more subdomains and succeeds on others. I almost wonder if fail2ban is kicking in because I have so many subdomains.

    Follow this: https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/

  13. 12 hours ago, Jerky_san said:

    @aptalca Hey, sorry to bother you. I was wondering: to do an HTTP blank setup on Let's Encrypt, does it have to have anything special set anywhere besides the standard stuff in the docker, like subdomains and such? I had an issue trying to add a subdomain, but another container would set it properly, which made me think I might have something configured improperly, though I couldn't for the life of me figure it out. The error I was getting was "Timeout during connect (likely firewall problem)". But if I just pointed my ports to the other container, HTTP worked. The other strange thing is that sometimes it would work for a subdomain and other times it wouldn't after just a restart. I assume it's something I'm doing, but just wondering if you've heard of this ever happening. I ended up doing a DNS challenge and it all worked fine. Thanks for any insights.

     

     

    Edit

    I should also mention I only use Cloudflare for my DNS now and no longer use it as a pass-through, so it shouldn't be that, to my knowledge. Also, the other container shouldn't have worked if that was the case. I have 6-7 subdomains.

    I don't follow. What's "HTTP blank"?

     

    You'll have to provide a clearer description of the issues you're having.

  14. 5 hours ago, Wong said:

    OHHH MYYY GODDDD, it worked. So the problem was that when I saved nextcloud.subdomain.conf, Notepad++ saved it as a text file. I changed the save type to all types, and then it worked. It feels good to get things working. Thank you for the awesome Unraid community support.

    In Windows, make sure you enable the setting for displaying file extensions even for known file types.
