aptalca

Community Developer
Posts posted by aptalca

  1. 2 hours ago, adambeck7 said:
    
    Error: listen EADDRNOTAVAIL: address not available 192.168.1.200:3001
        at Server.setupListenHandle [as _listen2] (net.js:1292:21)
        at listenInCluster (net.js:1357:12)
        at doListen (net.js:1496:7)
        at processTicksAndRejections (internal/process/task_queues.js:85:21)
    Emitted 'error' event on Server instance at:
        at emitErrorNT (net.js:1336:8)
        at processTicksAndRejections (internal/process/task_queues.js:84:21) {
      code: 'EADDRNOTAVAIL',
      errno: 'EADDRNOTAVAIL',
      syscall: 'listen',
      address: '192.168.1.200',
      port: 3001
    }

    Not code-server related, but any idea why it would say the address isn't available? I tried a few different ports I know are open, but it doesn't like any of them, so I'm assuming it's having an issue with the IP, not the port. No matter which IP on my local network I use, it crashes saying the address isn't available. Thanks for all your help!

    Try 0.0.0.0
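    The difference can be demonstrated with a plain socket (a minimal sketch, not code-server specific; 192.0.2.1 is a reserved TEST-NET address standing in for an IP the machine doesn't own):

```python
import errno
import socket

def try_bind(host: str, port: int = 0) -> str:
    """Attempt to bind a TCP socket; return 'ok' or the errno name."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return "ok"
    except OSError as e:
        return errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()

# 0.0.0.0 means "listen on every interface", so the bind always succeeds.
print(try_bind("0.0.0.0"))    # ok
# Binding to an address no local interface owns typically fails with
# EADDRNOTAVAIL on Linux, matching the log above.
print(try_bind("192.0.2.1"))
```

    Inside a Docker container in bridge mode, the host's LAN IP (e.g. 192.168.1.200) isn't assigned to the container's interface, so binding to it fails; 0.0.0.0 binds every interface the container actually has.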

    • Like 1
  2. 5 hours ago, BomB191 said:

    OMFG! OK, so I shouldn't do stuff on 3 hours of sleep. I was under the impression I could use ports 180 and 1443 externally but couldn't figure out how they were mapped. Forwarding ports 80 and 443 works. I am an idiot. Thanks for the assistance :)

     

    Makes me wonder though: can one change the external port used to not be 80 or 443? Or is that something embedded within the protocol?

    The HTTPS default is port 443. If you use a different port, you'll have to include it in the URL to browse, like https://domain.com:1443
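    You can see the default with Python's standard urllib (just an illustration, nothing container-specific):

```python
from urllib.parse import urlsplit

# With no explicit port, an https URL falls back to the scheme default, 443.
print(urlsplit("https://domain.com").port)       # None -> browser uses 443
# A nonstandard external port has to be spelled out in the URL itself.
print(urlsplit("https://domain.com:1443").port)  # 1443
```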

  3. 10 hours ago, BomB191 said:

    I've been banging my head on this all day. I have my domain's DNS linked up with Cloudflare, and Cloudflare pointing to DuckDNS pointing to me.

    Let's Encrypt got its cert fine and is happy.

    I'm currently only trying to get Ombi and Nextcloud sorted out, and I have followed SpaceInvader One's videos on Let's Encrypt and DNS certs.

    What else do you guys need to help? It should just work, but I only get 522 errors :(

    I feel IPv6 might be screwing things up, but I have both IPv4 and IPv6 forwarded. I'm at a complete loss.

    [Screenshots attached: Cloudflare DNS settings, Docker container settings, Docker tab]

    Did you Google error 522? It tells you exactly what the problem is: Cloudflare can't reach your server. Check your port forwarding.

     

    https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/

    • Thanks 1
  4. 13 hours ago, puncho said:

    Doesn't seem like my certs are renewing for some reason... Thanks in advance for any insight.
     

    User uid: 99
    User gid: 100
    -------------------------------------

    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    Variables set:
    PUID=99
    PGID=100
    TZ=America/Los_Angeles
    URL=mydomain.duckdns.org
    SUBDOMAINS=nextcloud,home,heimdall
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=false
    DHLEVEL=2048
    VALIDATION=http
    DNSPLUGIN=
    [email protected]
    STAGING=

    2048 bit DH parameters present
    SUBDOMAINS entered, processing
    SUBDOMAINS entered, processing
    Sub-domains processed are: -d nextcloud.mydomain.duckdns.org -d home.mydomain.duckdns.org -d heimdall.mydomain.duckdns.org
    E-mail address entered: [email protected]
    http validation is selected
    Certificate exists; parameters unchanged; starting nginx
    [cont-init.d] 50-config: exited 0.
    [cont-init.d] 60-renew: executing...
    The cert is either expired or it expires within the next day. Attempting to renew. This could take up to 10 minutes.
    <------------------------------------------------->

    <------------------------------------------------->
    cronjob running on Thu Apr 2 22:27:57 PDT 2020
    Running certbot renew
    Saving debug log to /var/log/letsencrypt/letsencrypt.log

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Processing /etc/letsencrypt/renewal/mydomain.duckdns.org.conf
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    Traceback (most recent call last):
      File "/usr/lib/python3.8/site-packages/certbot/_internal/renewal.py", line 63, in _reconstitute
        renewal_candidate = storage.RenewableCert(full_path, config)
      File "/usr/lib/python3.8/site-packages/certbot/_internal/storage.py", line 445, in __init__
        raise errors.CertStorageError(
    certbot.errors.CertStorageError: renewal config file {} is missing a required file reference
    Renewal configuration file /etc/letsencrypt/renewal/mydomain.duckdns.org.conf is broken. Skipping.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    No renewals were attempted.
    No hooks were run.

    Additionally, the following renewal configurations were invalid:
    /etc/letsencrypt/renewal/mydomain.duckdns.org.conf (parsefail)
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    0 renew failure(s), 1 parse failure(s)
    [cont-init.d] 60-renew: exited 0.
    [cont-init.d] 99-custom-files: executing...
    [custom-init] no custom files found exiting...
    [cont-init.d] 99-custom-files: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
    nginx: [error] lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:
    no field package.preload['resty.core']
    no file './resty/core.lua'
    no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
    no file '/usr/local/share/lua/5.1/resty/core.lua'
    no file '/usr/local/share/lua/5.1/resty/core/init.lua'
    no file '/usr/share/lua/5.1/resty/core.lua'
    no file '/usr/share/lua/5.1/resty/core/init.lua'
    no file '/usr/share/lua/common/resty/core.lua'
    no file '/usr/share/lua/common/resty/core/init.lua'
    no file './resty/core.so'
    no file '/usr/local/lib/lua/5.1/resty/core.so'
    no file '/usr/lib/lua/5.1/resty/core.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
    no file './resty.so'
    no file '/usr/local/lib/lua/5.1/resty.so'
    no file '/usr/lib/lua/5.1/resty.so'
    no file '/usr/local/lib/lua/5.1/loadall.so')
    Server ready

    Your renewal conf file is broken for some reason, perhaps a bad backup/restore. Change the container's parameters and recreate it to force a renewal.

  5. 3 minutes ago, CoZ said:

    If I can ping my duckdns subdomains from within unRaid and Windows 10 (e.g. ping subdomain.duckdns.org) and it returns results, but I can't access subdomain.duckdns.org in a browser window, then it must be an Nginx Proxy Manager issue, correct?

     

    I'm trying to eliminate some issues here I'm having.

    Possibly

  6. 1 hour ago, anongum said:

    OK, I reinstalled everything and it looks like letsencrypt works fine now. I get the "website currently being setup under this address" page, which confirms (I guess) that the reverse proxy is working, and I managed to make it work for Nextcloud. Now I'm trying to install Plex. The plex conf file says:
     

    
    # if plex is running in bridge mode and the container is named "plex", the below config should work as is
    # if not, replace the line "set $upstream_app plex;" with "set $upstream_app <containername>;"
    # or "set $upstream_app <HOSTIP>;" for host mode, HOSTIP being the IP address of plex
    # in plex server settings, under network, fill in "Custom server access URLs" with your domain (ie. "https://plex.yourdomain.url:443")
      
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name plex.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
        proxy_redirect off;
        proxy_buffering off;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /login;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app plex;
            set $upstream_port 32400;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
    
            proxy_set_header X-Plex-Client-Identifier $http_x_plex_client_identifier;
            proxy_set_header X-Plex-Device $http_x_plex_device;
            proxy_set_header X-Plex-Device-Name $http_x_plex_device_name;
            proxy_set_header X-Plex-Platform $http_x_plex_platform;
            proxy_set_header X-Plex-Platform-Version $http_x_plex_platform_version;
            proxy_set_header X-Plex-Product $http_x_plex_product;
            proxy_set_header X-Plex-Token $http_x_plex_token;
            proxy_set_header X-Plex-Version $http_x_plex_version;
            proxy_set_header X-Plex-Nocache $http_x_plex_nocache;
            proxy_set_header X-Plex-Provides $http_x_plex_provides;
            proxy_set_header X-Plex-Device-Vendor $http_x_plex_device_vendor;
            proxy_set_header X-Plex-Model $http_x_plex_model;
        }
    }

    Now, considering that I'm installing Plex Media Server from the plexinc/pms-docker repo, what should be my move? Do I simply name the container "plex" when adding it from the Community Apps plugin, and then just edit server_name to <plexsubdomain>.*? Or is it better to try the second option, leaving the network on host? In that case, is <HOSTIP> the regular localurl:32400 of the webgui, or some other IP?

    You don't have to change the server_name; that sets your subdomain. Leave it as plex.*

     

    Change the container name to plex so you don't have to modify the proxy conf.

     

    Follow the rest of the steps outlined at the top of the proxy conf.

  7. 3 minutes ago, grandprix said:

    I somewhat wish I had the issue others are experiencing with the LSIO BOINC docker utilizing "too many" resources, as mine is the complete opposite: it's using only two CPUs/threads. I admit to being ignorant of how BOINC (or any shared-computing program) works, so perhaps the "task" or "work" only needs two CPUs/threads? Though it seems like a waste of the 16/32 machine it is running on.

    Each task uses 1 core/thread. If you're only assigned 1 task, it will only use 1 core at a time.

     

    You select the max number of cores boinc can use through computing preferences by entering in a percentage.

     

    So if, on a 4-thread machine, you set 50% in settings, it will run a max of 2 tasks at a time and use 2 threads total.
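    In other words (a tiny sketch of the arithmetic; max_concurrent_tasks is a made-up name for illustration, not a BOINC API):

```python
import math

def max_concurrent_tasks(total_threads: int, cpu_percent: float) -> int:
    """Each BOINC task occupies one thread, so the '% of CPUs' preference
    caps concurrent tasks at the floored fraction of available threads."""
    return max(1, math.floor(total_threads * cpu_percent / 100))

print(max_concurrent_tasks(4, 50))    # 2 - the example above
print(max_concurrent_tasks(32, 100))  # 32 - the full 16/32 machine
```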

  8. 4 hours ago, SeaMax said:

    Hello,

     

    I have two problems with openvpn-as:

    FIRST PROBLEM

    i've also got the Error

    
    SESSION ERROR: SESSION: Your session has expired, please reauthenticate (9007)

    and I am at a loss of what exactly i have to do to fix it.

     

    My setup:

    (1) I installed the openvpn-as container in bridge mode. I set up another user name (also with admin access), then logged in as said user and deleted the standard admin user.

    (2) I switched the network mode in the container to a custom proxynet (nginx setup from the SpaceInvader One video) so that I can reach my openvpn user and admin login from anywhere.

    (3) I edited the as.config file entry "boot_pam_users.0=" and put random characters in, so that my admin account is not accessible if it was reset during the switching of the network mode.

    (4) I go to my openvpn web interface login: openvpn.***.* -> it opens the user login page

    -> I can log in as my created user

    (5) I go to openvpn.***.*/admin and it opens the admin login page

    -> I get said error on a login attempt with my created admin user

     

    Now, people linked to this POST a couple of posts back.

    There it says, regarding error solution:

    "

    1. iptables issues on host (either not installed or missing kernel modules)

    2. you didn’t add cap-add NET_ADMIN

    3. you’re using an unsupported networking method (host or macvlan)

    "

    1) I do not know what this means or what I have to check and possibly fix.

    2) I've checked in advanced view; the docker container is still created with "cap-add NET_ADMIN".

    3) I do not know exactly what this means. Is it possible that you cannot run openvpn on a custom unraid network (in my case "proxynet" with letsencrypt)? Does it only run in "bridge" mode?

     

    SECOND PROBLEM

    Maybe related to first problem.

     

    With my setup (as explained above) I can go on my mobile, go to my openvpn domain, and download the access file for the mobile openvpn client.

    BUT when I try to connect to my openvpn server, the connection times out.

    Openvpn is configured on UDP 1194 and I've forwarded this port to my unraid server (as per the SpaceInvader One video).

    Any idea what could prevent it from getting a connection?

     

    Thanks to the people reading this and, in general, developing this container.


    Try accessing it via the IP directly, not via the reverse proxy.

  9. 5 minutes ago, anongum said:

    Today something weird happened to letsencrypt. 
    I had a clean installation of unraid, on docker just Plex, Nextcloud, Mariadb, duckdns and letsencrypt. Everything perfectly worked until this afternoon, when things just stopped working. Nextcloud and plex would kept working when trying to access them locally, but would timeout whenever trying to use the reverse proxy. So, since I'm far from being an expert user, and one time I already broke my docker containers by messing too much, I deleted my docker image, all my folders in appdata relative to docker containers, and just installed plex and letsencrypt, to see if the problem went away. But it still doesn't work - tried to change domain, issue new certificates - no luck.

    Then I started thinking. For the sake of explaining, I'll call my plex domain plex.duckdns.org. I issued one certificate for this subdomain but never actually used it. Yet, for the sake of testing, I tried to access my machine remotely by typing plex.duckdns.org:32400, which is the port used by Plex for its webgui, and it worked. I could access Plex remotely just fine. Then I went and created a conf file in letsencrypt, which I'm posting:
     

    
    # make sure that your dns has a cname set for plex
    # if plex is running in bridge mode and the container is named "plex", the below config should work as is
    # if not, replace the line "set $upstream_app plex;" with "set $upstream_app <containername>;"
    # or "set $upstream_app <HOSTIP>;" for host mode, HOSTIP being the IP address of plex
    # in plex server settings, under network, fill in "Custom server access URLs" with your domain (ie. "https://plex.yourdomain.url:443")
    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name plex.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
        proxy_redirect off;
        proxy_buffering off;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /login;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app PlexMediaServer;
            set $upstream_port 32400;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
    
            proxy_set_header X-Plex-Client-Identifier $http_x_plex_client_identifier;
            proxy_set_header X-Plex-Device $http_x_plex_device;
            proxy_set_header X-Plex-Device-Name $http_x_plex_device_name;
            proxy_set_header X-Plex-Platform $http_x_plex_platform;
            proxy_set_header X-Plex-Platform-Version $http_x_plex_platform_version;
            proxy_set_header X-Plex-Product $http_x_plex_product;
            proxy_set_header X-Plex-Token $http_x_plex_token;
            proxy_set_header X-Plex-Version $http_x_plex_version;
            proxy_set_header X-Plex-Nocache $http_x_plex_nocache;
            proxy_set_header X-Plex-Provides $http_x_plex_provides;
            proxy_set_header X-Plex-Device-Vendor $http_x_plex_device_vendor;
            proxy_set_header X-Plex-Model $http_x_plex_model;
        }
    }

    I changed the name from plex to the docker container name, PlexMediaServer, and obviously changed plex.* to the actual subdomain. And it worked! I shared the link with my friend so that he could access my Plex webgui remotely.
    Everything was fine, but tonight everything times out and I can't wrap my head around it.

    The port forwarding is working fine. The certs are issued without issues, the letsencrypt log looks normal, and the server is up ("Server ready" at the end of the log). Before tonight, when trying to access one of the domains for which I issued a certificate, I would get a simple white HTML page saying "the site or server is under construction, for more info contact the server admin", but now, no matter the certs I issue, everything just times out.

    Letsencrypt is in a custom "proxynet" network (yes, I too followed, or rather bought unraid thanks to, the SpaceInvader One tutorials), and the command is the same as when the reverse proxy worked. Since the port forwarding is fine and the plex container itself is fine, the problem is either duckdns or letsencrypt; tertium non datur.

    What can I do to see what the problem is? I tried some minimal troubleshooting, but I'm not an expert user, and I'm already disheartened at how this could even happen without touching the nas or any settings.

    Please, help me.

    A few things wrong here.

     

    Using uppercase letters in the container name will prevent nginx from properly resolving it; you'll get a 502.
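    If you want a quick sanity check, something like this catches the problem names (the exact pattern is my own conservative assumption based on lowercase DNS-style labels, not an official Docker or nginx rule):

```python
import re

# Container names that nginx's resolver handles reliably look like lowercase
# DNS labels; anything with uppercase (e.g. "PlexMediaServer") is suspect.
SAFE_NAME = re.compile(r"^[a-z0-9][a-z0-9_.-]*$")

def nginx_friendly(name: str) -> bool:
    """Return True if the container name is safe to use in a proxy conf."""
    return bool(SAFE_NAME.match(name))

print(nginx_friendly("plex"))             # True
print(nginx_friendly("PlexMediaServer"))  # False - rename or lowercase it
```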

     

    If you're trying to access https://url:32400, you're not going through the reverse proxy, as letsencrypt is accessed on port 443, not 32400. You are giving direct access to Plex via that port.

     

    Start over and follow our guides: https://blog.linuxserver.io/2019/04/25/letsencrypt-nginx-starter-guide/

    And for troubleshooting: https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/

  10. 49 minutes ago, Snubbers said:

    I have been over there and saw it affects a few people, so I added to a thread or two! Thanks!

     

    The other issue, which I've also just had, is the DNS rebind issue that only affects EAC3, so that's two silent ways it won't work:

    1. The EasyAudioEncoder executable (x) flag being set incorrectly

    2. EAE uses URIs (*.plex.direct) that my router, for one, sees as a DNS rebind attempt and blocks.


    I sat there scratching my head when the same EAC3 audio tracks wouldn't play, and finally stumbled across this while on the Plex forums. Just adding *.plex.direct as an exception on the router and all is well again!

     

    This seems like something so easy for Plex to fix, but they just seem to sit on it. Sometimes you feel like just doing it for them!


    If it were open source you could PR it, but it's not. I guess you'll have to let them know and wait until they get to it.

  11. 1 hour ago, oskarax said:

    Hi! I've been trying for days now to get this going, and I've followed every guide I could find. I want to be able to access my Nextcloud from outside my network. First I tried the "regular" way with http and duckdns, but no luck. After that I followed SpaceInvader One's new guide using a wildcard SSL cert with my own domain name. I'm not very good at this, but I've followed a lot of guides from SpaceInvader One and this is the first one that I just can't get working. I'll post the log file from letsencrypt. I'm really stuck and I think I've tried everything.

     

     

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 01-envfile: executing...
    [cont-init.d] 01-envfile: exited 0.
    [cont-init.d] 10-adduser: executing...

    -------------------------------------
    [linuxserver.io ASCII logo]


    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------

    User uid: 99
    User gid: 100
    -------------------------------------

    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    generating self-signed keys in /config/keys, you can replace these with your own keys if required
    Generating a RSA private key
    ........+++++
    ....................+++++
    writing new private key to '/config/keys/cert.key'
    -----
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    Variables set:
    PUID=99
    PGID=100
    TZ=Europe/Berlin
    URL=reverseproxy.nu
    SUBDOMAINS=wildcard
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=true
    DHLEVEL=2048
    VALIDATION=dns
    DNSPLUGIN=cloudflare
    [email protected]
    STAGING=

    Created donoteditthisfile.conf
    Creating DH parameters for additional security. This may take a very long time. There will be another message once this process is completed
    Generating DH parameters, 2048 bit long safe prime, generator 2
    This is going to take a long time
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.
    [DH parameter generation progress output truncated]
    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 01-envfile: executing...
    [cont-init.d] 01-envfile: exited 0.
    [cont-init.d] 10-adduser: executing...
    usermod: no changes

    -------------------------------------
    [linuxserver.io ASCII logo]


    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------

    User uid: 99
    User gid: 100
    -------------------------------------

    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    Variables set:
    PUID=99
    PGID=100
    TZ=Europe/Berlin
    URL=reverseproxy.nu
    SUBDOMAINS=wildcard
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=true
    DHLEVEL=2048
    VALIDATION=dns
    DNSPLUGIN=cloudflare
    [email protected]
    STAGING=

    2048 bit DH parameters present
    SUBDOMAINS entered, processing
    Wildcard cert for only the subdomains of reverseproxy.nu will be requested
    E-mail address entered: [email protected]
    dns validation via cloudflare plugin is selected
    Generating new certificate
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator dns-cloudflare, Installer None
    Obtaining a new certificate
    Performing the following challenges:
    dns-01 challenge for reverseproxy.nu
    Unsafe permissions on credentials configuration file: /config/dns-conf/cloudflare.ini
    Waiting 10 seconds for DNS changes to propagate
    Waiting for verification...
    Waiting for verification...
    Challenge failed for domain reverseproxy.nu
    dns-01 challenge for reverseproxy.nu
    Cleaning up challenges
    Some challenges have failed.
    IMPORTANT NOTES:
    - The following errors were reported by the server:

    Domain: reverseproxy.nu
    Type: dns
    Detail: DNS problem: SERVFAIL looking up TXT for
    _acme-challenge.reverseproxy.nu - the domain's nameservers may be
    malfunctioning
    - Your account credentials have been saved in your Certbot
    configuration directory at /etc/letsencrypt. You should make a
    secure backup of this folder now. This configuration directory will
    also contain certificates and private keys obtained by Certbot so
    making regular backups of this folder is ideal.
    ERROR: Cert does not exist! Please see the validation error above. Make sure you entered correct credentials into the /config/dns-conf/cloudflare.ini file.
     

     

    AND the above error is a mystery, as I have edited it with the API key and email address as the guide states.

     

    Please I need help.

    A couple of others on Discord mentioned a Cloudflare outage that resulted in the same outcome as above: no errors setting TXT records, but they can't be verified.

  12. 32 minutes ago, B8NU4TK6 said:

    Yes, I made that modification a few months ago when I set up my Plex docker container, and it has worked fine there. I am unsure whether both dockers can access /dev/dri simultaneously, so I am currently keeping my Plex docker turned off.

    Docker containers can share the GPU.

  13. 2 hours ago, Snubbers said:

    I've just realised I have/had the EAC3 'issue', i.e. any video file with EAC3 audio requiring an audio transcode down to 2 channels (most of my client apps) won't play; I get the following log entry:

    
    "ERROR - [Transcoder] [eac3_eae @ 0x7e9840] EAE timeout! EAE not running, or wrong folder? Could not read '/tmp/pms-198c89ec-c5fa-4ceb-99dc-409b57434d00/EasyAudioEncoder/Convert to WAV (to 8ch or less)/C02939D8-5F8B-432B-9FD9-6E7F76C40456_522-0-21.wav'"

    I found a solution in this thread: just delete the appdata\plex\..\Codecs folder and restart so it recreates it, and everything seems fine now!

     

    Instead of deleting, I just renamed the folder to "Codecs_OLD" so I could see what the difference was. There are only two differences:

    1. The licence file has different contents

    2. (probably the most crucial!) the "EasyAudioEncoder" file (2.5 MB, no extension) does not have the executable flag set in the old non-working version!

     

    I think this happened after the update a day or so ago, or that's when I noticed it!

     

    Obviously it's fixed for now, but I'm just wondering if anyone has any idea how it might have happened, in case it comes back at a later date?

     

     

    Long-time Plex issue. Report it to them.

  14. 1 hour ago, bwnautilus said:

    I posted this in the blog forum.  In case it was missed I'm re-posting here.

     

    I updated the BOINC docker this morning and noticed it wasn't getting any new tasks.  Reset the project and still no new tasks.  Anyone else notice this?

     

    EDIT: my Windows 10 BOINC client is still getting new tasks.  Looks like the Linux docker version is broken.

    EDIT2: Duh! Rosetta doesn't have any more tasks in the queue.  Nevermind.

     


    Too many users, the well dried up 😅

  15. 4 hours ago, bobokun said:

    I used a docker mod to install python3; however, I would like to install some python libraries and want to make sure I'm doing this the best way.

    Do I create a folder called custom-cont-init.d and inside create a file with the library name? For example, 99-pandas, with a script something like:

    
    #!/usr/bin/with-contenv bash
    
    echo "**** installing pandas****"
    pip3 install pandas

    Edit: I found this doesn't work because it runs my custom-cont-init.d script prior to the docker mods, which throws an error: pip3: command not found

    You bring up an interesting point I had not considered. The s6 supervisor executes the init files in alphabetical order. The custom files in that folder are executed by a script named "99-custom-files", and the python3 docker mod creates an init file named "99-python3"; that's why your custom file is executed before the python3 installation.

     

    I guess I'll have to update all the mods to use "98-blah" so they execute before the custom files. Until then, restarting the container should fix it for you because on second start, pip will already be installed when your custom file runs.
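    The ordering issue is plain string sorting (a minimal illustration using the filenames discussed above):

```python
# s6 runs cont-init.d scripts in lexicographic filename order.
before_fix = sorted(["10-adduser", "99-custom-files", "99-python3"])
print(before_fix)  # custom files run BEFORE the python3 mod ('c' < 'p')

# Renaming the mod to 98-python3 puts it ahead of the custom files.
after_fix = sorted(["10-adduser", "98-python3", "99-custom-files"])
print(after_fix)
```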

  16. 7 hours ago, Mikey160984 said:

     

    Another question for 2 GPUs

    If I enter only the ID of the GPU I want to use in the docker settings, the system still uses both GPUs (like the "all" argument). Tested with a reinstall (yes, I deleted the folder in appdata).
     

    Is this normal?

    If you set it to only one gpu's id, f@h will still see both gpus, but it won't be able to start the job on one of them. You'll see an error in the log, something like " no compute devices matched gpu #0 blah blah you may need to update your graphics drivers".

     

    I paused that gpu so it no longer receives jobs it won't be able to complete.
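    As an aside, pinning at the container level avoids the mismatch entirely. A minimal sketch, assuming you first look up the device UUID with `nvidia-smi -L` (the UUID below is a placeholder, and other run options are omitted for brevity):

    ```shell
    # Expose only one GPU to the container; F@H then sees a single device.
    # GPU-xxxxxxxx is a placeholder UUID taken from `nvidia-smi -L` output.
    docker run -d \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx \
      linuxserver/foldingathome
    ```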

    • Like 1
  17. 3 hours ago, d2dyno said:

    I'm getting this on a fresh install. Fresh because the update wiped out my config folder: https://github.com/linuxserver/docker-openvpn-as/issues/108

     

    Getting this in container log

     

    Unpacking openvpn-as (2.8.3-f28d2eae-Ubuntu18) ...
    Setting up openvpn-as-bundled-clients (7) ...
    Setting up openvpn-as (2.8.3-f28d2eae-Ubuntu18) ...
    Automatic configuration failed, see /usr/local/openvpn_as/init.log
    You can configure manually using the /usr/local/openvpn_as/bin/ovpn-init tool.
    /var/lib/dpkg/info/openvpn-as.postinst: line 68: systemctl: command not found
    Stopping openvpn-as now; will start again later after configuring
    cat: /var/run/openvpnas.pid: No such file or directory
    kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]

    Did you read my linked thread above?

    Post a full log, post your docker run and check the openvpn log in the config folder

  18. 1 hour ago, Mikey160984 said:

    Did anyone try to use more than one nvidia gpu with this container? Editing the config for a second gpu slot is no problem, but do both gpus work as they should?

    I tested with 2 gpus and it works

    There's a screenshot of it in this article: https://blog.linuxserver.io/2020/03/21/covid-19-a-quick-update/

     

    By the way, you don't need to edit the config at all (in fact, don't). If you allow both gpus via nvidia arguments, they'll both be used automatically.

    • Like 1
  19. 5 hours ago, mschindl said:

    Hello,

     

    it works well for CPU processing, but did someone get it running on ubuntu with docker and GPU (e.g. an M2200)?

     

    What I did:

     

    # Add the package repositories
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
    sudo systemctl restart docker

     

    docker run -d -it \
      --name=foldingathome \
      -e PUID=1000 \
      -e PGID=1000 \
      -e TZ=Europe/Berlin \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -p 7396:7396 \
      -v /DATAINT/Docker-Conf/foldinghome:/config \
      --restart unless-stopped \
      linuxserver/foldingathome

     

    But I got following error with newest driver in Ubuntu 18.04:

     

    root@Server:~# nvidia-smi

    Thu Mar 26 14:47:54 2020

    +-----------------------------------------------------------------------------+

    | NVIDIA-SMI 440.64       Driver Version: 440.64       CUDA Version: 10.2     |

     

    root@Server:~# docker logs -f foldingathome

    13:40:16:ERROR:WU01:FS01:Failed to start core: OpenCL device matching slot 1 not found, try setting 'opencl-index' manually

     

    10:53:15:******************************* System ********************************

    10:53:15:        CPU: Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz

    10:53:15:     CPU ID: GenuineIntel Family 6 Model 158 Stepping 9

    10:53:15:       CPUs: 8

    10:53:15:     Memory: 31.14GiB

    10:53:15:Free Memory: 29.29GiB

    10:53:15:    Threads: POSIX_THREADS

    10:53:15: OS Version: 4.15

    10:53:15:Has Battery: true

    10:53:15: On Battery: false

    10:53:15: UTC Offset: 1

    10:53:15:        PID: 259

    10:53:15:        CWD: /config

    10:53:15:         OS: Linux 4.15.0-91-generic x86_64

    10:53:15:    OS Arch: AMD64

    10:53:15:       GPUs: 1

    10:53:15:      GPU 0: Bus:1 Slot:0 Func:0 NVIDIA:5 GM206 [Quadro M2200]

    10:53:15:       CUDA: Not detected: cuInit() returned 100

    10:53:15:     OpenCL: Not detected: clGetPlatformIDs() returned -1001

     

    12:22:50:<config>

    12:22:50:  <!-- Remote Command Server -->

    12:22:50:  <password v='********'/>

    12:22:50:

    12:22:50:  <!-- Slot Control -->

    12:22:50:  <power v='FULL'/>

    12:22:50:

    12:22:50:  <!-- User Information -->

    12:22:50:  <passkey v='********************************'/>

    12:22:50:  <team v='xxx'/>

    12:22:50:  <user v='xxx'/>

    12:22:50:

    12:22:50:  <!-- Folding Slots -->

    12:22:50:  <slot id='0' type='CPU'>

    12:22:50:    <paused v='true'/>

    12:22:50:  </slot>

    12:22:50:  <slot id='1' type='GPU'>

    12:22:50:    <paused v='true'/>

    12:22:50:  </slot>

    12:22:50:</config>

     


    You forgot "--runtime=nvidia"
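    For reference, the same run command with that flag added (everything else kept as in the post above):

    ```shell
    # --runtime=nvidia is required so the container can reach the GPU;
    # without it, CUDA/OpenCL detection fails inside the container.
    docker run -d -it \
      --name=foldingathome \
      --runtime=nvidia \
      -e PUID=1000 \
      -e PGID=1000 \
      -e TZ=Europe/Berlin \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -p 7396:7396 \
      -v /DATAINT/Docker-Conf/foldinghome:/config \
      --restart unless-stopped \
      linuxserver/foldingathome
    ```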

    • Like 1
  20. 3 hours ago, njdowdy said:

    I'm looking for advice on how to set up a subdomain.conf for a custom docker. 

    I'm trying to emulate what's described here: https://pgsnake.blogspot.com/2019/07/reverse-proxying-to-pgadmin.html

     

    I've also tried to use some other of the provided templates to build from. Here's what I have:

    
    # filename: pgsql.subdomain.conf
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name pgsql.*;
    
        include /config/nginx/ssl.conf;
        proxy_redirect off;
        proxy_buffering off;
        client_max_body_size 0;
    
        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            # custom docker's name: pgadmin4
            set $upstream_pgadmin4 pgadmin4;
            proxy_pass http://$upstream_pgadmin4:5050;
        }
    }

    In the custom docker the network type is set to custom and pointed at my proxy network. Letsencrypt docker has pgsql as a subdomain to look out for. 

    When I restart the letsencrypt docker and visit the subdomain (pgsql.mydomain.com), I get an nginx 502 bad gateway. Have I forgotten something in my configuration? The only difference I can see from other templates is that I'm not including any proxy_set_header directives, but I'm not really sure what those are or whether they're needed. Thanks in advance!
     

    Doesn't pgadmin listen on port 80?
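    If it does, the fix would be a one-line change in the location block (container name kept from the post above; this assumes the pgadmin4 container's internal listener really is on 80):

    ```nginx
    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_pgadmin4 pgadmin4;
        # point at the container's actual internal port, not 5050
        proxy_pass http://$upstream_pgadmin4:80;
    }
    ```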

    • Thanks 1