Kash76

Members
  • Posts: 108

Posts posted by Kash76

  1. 1 hour ago, Xoron said:

    First: @binhex, thanks for all the work you put into your dockers and the support you offer on this forum.

     

    Forum users, looking for help with a VPN issue I'm having. 

     

    I had delugevpn set up and working for months with my VPN provider (Mullvad) using openvpn. It's been working great up until now, but since a recent server shutdown, I can't get the VPN tunnel to come up in the docker and I can't access the deluge GUI. I'm at a loss as to what the issue is. What I've done so far (with no success):

    1. When I set the Container Variable: VPN_ENABLED to NO, I can then access the GUI on port 8112. 
    2. Downloaded and implemented a new config file for openvpn from Mullvad
    3. I've confirmed the openvpn config file by using the exact same config file in my openvpn client on my PC.  So I know the file is structured correctly.  As well, my PC and Unraid server are on the same network segment, and have the same firewall rules applied to both.
    4. Tried using Mullvad VPN servers in another region
    5. Pulled an older version of the docker to see if openvpn 2.5.5 client was the issue.  (binhex/arch-delugevpn:2.0.4-2-01)
    6. Running ifconfig from inside the docker, I don't see tun0 up (which I think is the correct interface for the VPN tunnel)
    7. I've completely wiped the docker, cleaned up the files in appdata and pulled the full delugevpn again. 
    8. I've stopped and restarted docker services on my unraid server. 
    9. I've looked over the supervisord.log so many times, I don't see any errors or anything that could explain why the tunnel isn't coming up (See the attached supervisord.log file)

    What am I missing / what else can I look at to see why this isn't working? 

     

    supervisord.log (11.57 kB, attached)

    Same for me. Hoping someone finds a fix for this awesome docker!
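    Step 6 in the quoted list checks `ifconfig` for tun0; the same check can be scripted. A minimal sketch, assuming a hypothetical helper run inside the container on Linux (where interfaces appear under /sys/class/net):

```python
import os

def vpn_tunnel_up(iface: str = "tun0") -> bool:
    """True if the given network interface exists on this Linux host/container."""
    return os.path.isdir(f"/sys/class/net/{iface}")

# Equivalent to eyeballing `ifconfig` output for tun0:
print("tunnel up" if vpn_tunnel_up() else "tunnel down; check supervisord.log for openvpn errors")
```

    You could run this via `docker exec` against the container; if it reports the tunnel down while openvpn claims to be connected, the problem is inside the container rather than with Mullvad.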

  2. Simple question, why do you need to transcode? Can your clients not handle direct play? It may be far simpler to upgrade your client hardware than your server. That can enable direct play and eliminate the need for transcoding completely. But it depends on the circumstances.



    I use Roku units so I don't have to transcode today. I think my concern lies in the 4K ATSC 3.0 capabilities of my new HDHomeRun, and not knowing whether I can play those streams natively.


    Sent from my iPhone using Tapatalk Pro
  3. Hey all, I'm looking to upgrade from my old AMD 8350, which has onboard video, to an AMD 3900XT, which does not. I understand that only Intel chips are currently supported for offloading video transcoding in containers such as Plex.

     

    My questions

    1. Can I successfully transcode video for Plex using the 3900x?

    2. Are there any limitations to running headless or running something simple like this - https://www.microcenter.com/product/462020/visiontek-radeon-hd-5450-low-profile-passive-cooled-1gb-ddr3-pcie-21-graphics-card

    3. Is there any value in getting something like an 8GB RX 580?

     

    I run many dockers including NextCloud, Plex, and others. I only run a single Windows VM but not for gaming.

     

    Thanks for any help you can offer!

  4. 2 hours ago, Kash76 said:

    Thank you! Making progress, did that and am now getting "ERR_SSL_PROTOCOL_ERROR" in Chrome and "SSL_ERROR_RX_RECORD_TOO_LONG" in Firefox. I usually do not have issues like this but am having a hell of a time troubleshooting this.

     

    Nothing in my error log, access log has entries like this...

    
    10.x.x.x - - [27/Nov/2019:12:15:12 -0600] "\x16\x03\x01\x01.\x01\x00\x01*\x04\x03H\xC4z\xDE\x0B(\xF8\x9E-\x88\xD0l0\x8EC\xC9\x14\xBD\xC2\xD0\xFEq{\xE8\x07H\x9EX\xFDs\xF6D\x00\x00\x88\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 157 "-" "-"

     

    Well this is embarrassing. I had the http and https ports for LetsEncrypt crossed. Thanks for the support and sorry for the bother!!
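    For anyone who lands here with the same errors: the bytes in that access-log line are a raw TLS ClientHello arriving on a plain-HTTP listener, which is exactly what crossed http/https ports produce. A small sketch decoding the standard TLS record header (hypothetical helper, not part of any of these containers):

```python
# First bytes of the logged request: \x16 = handshake record,
# \x03\x01 = legacy record version 3.1 (TLS 1.0 framing).
TLS_CONTENT_TYPES = {0x14: "change_cipher_spec", 0x15: "alert",
                     0x16: "handshake", 0x17: "application_data"}

def decode_tls_record_header(data: bytes) -> str:
    """Describe the first TLS record in `data` (content type + legacy version)."""
    ctype, major, minor = data[0], data[1], data[2]
    name = TLS_CONTENT_TYPES.get(ctype, "unknown")
    return f"{name} record, legacy version {major}.{minor}"

print(decode_tls_record_header(b"\x16\x03\x01\x01\x2e"))
# handshake record, legacy version 3.1
```

    Seeing `\x16\x03\x01` in an HTTP access log is a reliable sign that a browser is speaking TLS to a port nginx is serving as plain HTTP.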

  5. 37 minutes ago, aptalca said:

    Turn off cloudflare proxy (click on the orange cloud)

    Thank you! Making progress, did that and am now getting "ERR_SSL_PROTOCOL_ERROR" in Chrome and "SSL_ERROR_RX_RECORD_TOO_LONG" in Firefox. I usually do not have issues like this but am having a hell of a time troubleshooting this.

     

    Nothing in my error log, access log has entries like this...

    10.x.x.x - - [27/Nov/2019:12:15:12 -0600] "\x16\x03\x01\x01.\x01\x00\x01*\x04\x03H\xC4z\xDE\x0B(\xF8\x9E-\x88\xD0l0\x8EC\xC9\x14\xBD\xC2\xD0\xFEq{\xE8\x07H\x9EX\xFDs\xF6D\x00\x00\x88\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 157 "-" "-"

     

  6. 5 hours ago, saarg said:

    @Kash76
    You do not change the port in the proxy conf when using a custom bridge, as letsencrypt and nextcloud are talking internally and don't use the port forwards.

    Change it from 8000 back to 443.

    Thanks much for the response. I changed it back to this and am still getting 522 errors on-network and 523 off-network.

     

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name cloud.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_nextcloud nextcloud;
            proxy_max_temp_file_size 2048m;
            proxy_pass https://$upstream_nextcloud:443;
        }
    }

     

  7. Hey everyone, I had to change DNS configurations recently due to port 80 being blocked and I'm having a heck of a time since moving to Cloudflare and trying to use DNS authentication. I have tried many things and am getting 522 errors from Cloudflare and am hoping that you can help me.

     

    I most recently started over with the letsencrypt container; here are my configs...

     

    Log file output

    Variables set:
    PUID=99
    PGID=100
    TZ=America/Chicago
    URL=xxx.net
    SUBDOMAINS=cloud
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=false
    DHLEVEL=2048
    VALIDATION=dns
    DNSPLUGIN=cloudflare
    [email protected]
    STAGING=
    
    2048 bit DH parameters present
    SUBDOMAINS entered, processing
    Sub-domains processed are: -d cloud.xxx.net
    E-mail address entered: [email protected]
    dns validation via cloudflare plugin is selected
    Certificate exists; parameters unchanged; starting nginx
    [cont-init.d] 50-config: exited 0.
    [cont-init.d] 99-custom-files: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-files: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
    Server ready

     

    My cloudflare.ini is set up correctly, judging by the cert being issued, so I'm skipping that.

     

    Proxy config for nextcloud (also the container name in my docker settings):

    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name cloud.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_nextcloud nextcloud;
            proxy_max_temp_file_size 2048m;
            proxy_pass https://$upstream_nextcloud:8000;
        }
    }

     

    Nextcloud and Letsencrypt docker configs are attached. The Unraid web interface runs on another port, so I do actually use 443 for Letsencrypt.

     

    My Cloudflare settings are also attached. I'm not sure if my subdomains should be proxied or not and what my SSL setting should be.

     

    Thanks for any help you can offer!

     

     

    Annotation 2019-11-26 183616.png

    cf_list.png

    cf_ssl.png

     

     

    letsencrypt.png
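    A Cloudflare 522 means Cloudflare could not complete a TCP connection to the origin, so a useful first check is whether the forwarded port is reachable at all. A minimal sketch, assuming a hypothetical helper (for the off-network case, run it from outside your LAN):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("cloud.yourdomain.net", 443) - False from the internet side
# would explain Cloudflare's 522 regardless of the nginx config.
```

    If the port is open externally but Cloudflare still returns 522, the usual suspects are the Cloudflare SSL mode (Full vs. Flexible) and the proxied/orange-cloud setting on the DNS records.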

  8. 4 hours ago, Pim Bliek said:

    I just installed this docker container and I have a little problem. I did change the parameter --env GITLAB_OMNIBUS_CONFIG="external_url 'http://192.168.178.15:9080/'" to reflect my situation. This works fine, with one exception (that I have found so far).

    When you try to create a new milestone it tries to go to: http://unraid:9080/pim/pimbliek.nl/milestones/new  which is not reachable since that is not resolvable.

     

    How to make sure the 192.168.178.15 is used in the *whole* application?

    I've had the same issue and have not gotten past it. This makes for erratic behavior in the GUI.

  9. Thank you for the response. This is all I see when I click on the thumbs-down icon in the dashboard...

    It means the SSD reached its estimated life. It doesn't mean it's failing; it can work for a long time without issues. You can acknowledge the SMART attribute by clicking on the dashboard warning.
    16d774a19dc49aa1425692790c4a9114.jpg

    Sent from my ONEPLUS A6013 using Tapatalk

  10. Hey guys, I had an error show up on one of my cache drives (Crucial 275GB SSD) - percent lifetime remain (failing now) is 99. 

     

    There is a 'thumbs down' next to the drive in my dashboard. Do I replace the drive? If it's okay, how do I clear the error?

     

    Thanks!

     

    I do not see any errors reported in the SMART log. Here is what I see in the drive info...

     

    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Vendor (0xff)       Completed without error       00%     17049         -
    # 2  Vendor (0xff)       Completed without error       00%     16971         -
    # 3  Vendor (0xff)       Completed without error       00%     16396         -
    # 4  Vendor (0xff)       Completed without error       00%     15846         -
    # 5  Vendor (0xff)       Completed without error       00%     15335         -
    # 6  Vendor (0xff)       Completed without error       00%     14859         -
    # 7  Vendor (0xff)       Completed without error       00%     14352         -
    # 8  Vendor (0xff)       Completed without error       00%     13846         -
    # 9  Vendor (0xff)       Completed without error       00%     13703         -
    #10  Vendor (0xff)       Completed without error       00%     13614         -
    #11  Vendor (0xff)       Completed without error       00%     13151         -
    #12  Vendor (0xff)       Completed without error       00%     12715         -
    #13  Vendor (0xff)       Completed without error       00%     12447         -
    #14  Vendor (0xff)       Completed without error       00%     12311         -
    #15  Vendor (0xff)       Completed without error       00%     11901         -
    #16  Vendor (0xff)       Completed without error       00%     11419         -
    #17  Vendor (0xff)       Completed without error       00%     10831         -
    #18  Vendor (0xff)       Completed without error       00%     10084         -
    #19  Vendor (0xff)       Completed without error       00%      9613         -
    #20  Vendor (0xff)       Completed without error       00%      9248         -
    #21  Vendor (0xff)       Completed without error       00%      8886         -

     

  11. On 10/2/2018 at 5:16 PM, TCosta29 said:

    Hello guys!
    I've been reading this thread in order to figure out how I can set up my GitLab docker with my letsencrypt docker.
     

    Right now my letsencrypt docker is like this

    
      location /gitlab {
          include /config/nginx/proxy.conf;
          proxy_pass http://192.168.1.104:9080;
          proxy_set_header X-Forwarded-Proto https;
          proxy_set_header X-Forwarded-Ssl on;
      }

    and my GitLab docker is like this: http://prntscr.com/l1gatc

    Extra Parameters:

    --env GITLAB_OMNIBUS_CONFIG="external_url 'https://"subdomain".duckdns.org/'; nginx['listen_port'] = 9080; nginx['listen_https'] = false"

     

    But for some reason, when I try to connect to my GitLab through https://"subdomain".duckdns.org/gitlab it shows me a 502 Bad Gateway error.

     

    If someone could help me, I would be very grateful.
    Also, I'm still new to this unraid and docker stuff, so please be patient with me.

    Did you ever get this working?
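    In case it helps anyone searching later: one common cause of a 502 when serving GitLab under a sub-path is that external_url does not include the /gitlab path, so omnibus nginx only answers at the root while the proxy forwards /gitlab. A sketch of the extra parameter, assuming the same ports as above (untested, adjust to your setup):

```
--env GITLAB_OMNIBUS_CONFIG="external_url 'https://"subdomain".duckdns.org/gitlab'; nginx['listen_port'] = 9080; nginx['listen_https'] = false"
```

    Omnibus GitLab derives its relative-URL configuration from the path component of external_url, so the location /gitlab block in letsencrypt would then line up with what GitLab itself serves.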

  12. This is the first docker I've installed that's restricting me from accessing its config and log files. If I change the permissions, is the program going to freak out?
    It looks like all settings have a # in front; is this config file actually being used for anything?
    Those # lines are currently commented out. Remove the # and set the correct values, and you should be set.

    Sent from my ONEPLUS A6003 using Tapatalk