aptalca

Community Developer

Posts posted by aptalca

  1. 4 hours ago, cosmicrelish said:

    Hi,

    I just started with unraid and have been following SpaceinvaderOne on YouTube. I was attempting to create a reverse proxy and followed his instructions exactly. However, I am getting an error that the challenges have failed and that a cert does not exist.

     

    I started using my own domain name with a CNAME pointing to duckdns.org. To troubleshoot, I removed that and am just trying to get it to work through duckdns only. I can't find any info on how to solve this issue, and I'm not sure whether it's telling me my port forwarding isn't working. I set it up the same as in the video, but I don't have the ability to select both http and https for the destination. That is the only difference.

     

    This is what I see:

    
    Brought to you by linuxserver.io
    -------------------------------------
    
    To support the app dev(s) visit:
    Let's Encrypt: https://letsencrypt.org/donate/
    
    To support LSIO projects visit:
    https://www.linuxserver.io/donate/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    Variables set:
    PUID=99
    PGID=100
    TZ=America/Los_Angeles
    URL=duckdns.org
    SUBDOMAINS=xxxxxxxserver
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=true
    VALIDATION=http
    DNSPLUGIN=
    [email protected]
    STAGING=
    
    SUBDOMAINS entered, processing
    SUBDOMAINS entered, processing
    Only subdomains, no URL in cert
    Sub-domains processed are: -d xxxxserver.duckdns.org
    E-mail address entered: [email protected]
    http validation is selected
    Generating new certificate
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator standalone, Installer None
    Obtaining a new certificate
    Performing the following challenges:
    http-01 challenge for xxxxxxserver.duckdns.org
    Waiting for verification...
    Challenge failed for domain xxxxxxserver.duckdns.org
    
    http-01 challenge for xxxxxxxserver.duckdns.org
    Cleaning up challenges
    Some challenges have failed.
    
    IMPORTANT NOTES:
    - The following errors were reported by the server:
    
    Domain: xxxxxxxxserver.duckdns.org
    Type: connection
    Detail: Fetching
    http://xxxxxxxserver.duckdns.org/.well-known/acme-challenge/HoaCFK90SDgQaw2iuma2cx4BtMENmLm5vgXzS39iybw:
    Timeout during connect (likely firewall problem)
    
    To fix these errors, please make sure that your domain name was
    entered correctly and the DNS A/AAAA record(s) for that domain
    contain(s) the right IP address. Additionally, please check that
    your computer has a publicly routable IP address and that no
    firewalls are preventing the server from communicating with the
    client. If you're using the webroot plugin, you should also verify
    that you are serving files from the webroot path you provided.
    ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container
    
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

     

    I'm not sure what else to try at this point. I am on an AT&T network using their router, so I don't have a lot of control over it.

     

    Ideally I would love to type in my own domain and get redirected to the Heimdall page that has all my apps ready to click on. If not, I can still do things through the webUIs.

     

    thanks!

    Use the guide linked in the post above yours

  2. 1 hour ago, draeh said:

    Sorry if I didn't make that clear.

     

    I have an existing apache server that my firewall pointed to. That server managed a letsencrypt certificate. I decided to employ the letsencrypt reverse proxy docker on my unraid server to manage the certificate to make it easier to host multiple named servers and subdomains. As a first step I simply used the docker to reverse proxy the original server which is working great, but I've lost the ability to audit my server in the original way that I did. I would audit the apache access logs for undesired behavior and sometimes blacklist other domains or ips based on the addresses listed in those logs. Now the apache server's access logs only show the unraid server's ip address as the one making the requests. Is there somewhere within the reverse proxy docker where I can view a kind of access log that will show me what internet addresses are trying to access the proxy?

    Nginx logs in letsencrypt will show you all the connections. They're in the config folder.

     

    Also, if you reverse proxied with all the correct headers, letsencrypt will pass the original IP in there. You may have to tell Apache to trust those headers. For nginx, you do it via the "real ip" module and its settings. Not sure what Apache needs.
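
    For reference, here's a minimal sketch of the nginx side (the bundled proxy.conf already sets these headers on the proxy; the backend address below is an assumption, adjust it to your proxy's LAN IP):

    # on the proxy (letsencrypt) side - the included proxy.conf already does this
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # on an nginx backend, the real_ip module restores the client address for logging
    set_real_ip_from 192.168.1.5;   # LAN IP of the letsencrypt proxy (assumption)
    real_ip_header X-Forwarded-For;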

  3. 3 hours ago, draeh said:

    Just started using this instead of having my server handle the SSL certificate directly. Now that this is running, my server's access log shows all requests as having come from the reverse proxy. Is there an access log on the reverse proxy where I can see the outside addresses using the server?

    You need to provide more context. Are you reverse proxying the server? And by server do you mean unraid?

  4. 8 hours ago, lusitopp said:

    I'm new to Linux systems, but I'm eager to learn. This is the output I get:

     

    
    root@0d9237f2d370:/config/www/wordpress/wp-content# ls -la
    total 8
    drwxr-xr-x 1 root root   67 Jul  2 07:59 .
    drwxr-xr-x 1 root root 4096 Jul  2 07:50 ..
    -rw-r--r-- 1 root root   28 Jul  1 17:36 index.php
    drwxr-xr-x 1 root root   80 Jul  2 18:00 plugins
    drwxr-xr-x 1 root root  108 Jul  2 18:00 themes
    drwxr-xr-x 1 abc  abc    54 Jul  1 18:03 uploads

     

    Restart the container and it should fix the permissions
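
    If a restart doesn't clear it up, a manual fix from the unraid terminal would look roughly like this (the container name, appdata path and the default PUID/PGID of 99/100 are assumptions, adjust to your setup):

    # restart so the container's init scripts re-apply ownership
    docker restart letsencrypt

    # or set ownership by hand on the host
    chown -R 99:100 /mnt/user/appdata/letsencrypt/www/wordpress/wp-content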

  5. 7 hours ago, lusitopp said:

      

    As soon as I copy/pasted my docker run I saw what I did wrong: 'only subdomains' was set to true. After changing it to false I now get a certificate for https://mysite.io.

    But here is another question that someone might be able to help me with.
    With WordPress there are often updates to plugins, themes and WordPress itself. Trying to update from the admin page prompts me for an FTP username and password, and I don't have FTP.
    I understand that this is because the user that runs the page doesn't have access to the folders in WordPress. Does anyone know how to set that up?

     

    The user is abc and its uid is set to 99 (unless you changed it). It should have access to those folders on the host.

  6. 52 minutes ago, lusitopp said:

    Hi,

     

    I currently use this docker for my nextcloud and bitwarden dockers, and that works great.
    Now I'm trying to set up WordPress inside the www folder in the letsencrypt docker, and I want to redirect www.mysite.io to mysite.io.

    But if I use www in the subdomain field, the certificate will be for www.mysite.io, and then visitors get redirected to mysite.io and a cert warning shows.

    If I don't enter www in the subdomain field, I get this error:

     

    
    No subdomains defined
    E-mail address entered: [email protected]
    http validation is selected
    Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    No match found for cert-path /config/etc/letsencrypt/live/www.mysite.io/fullchain.pem!
    Generating new certificate
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator standalone, Installer None
    Obtaining a new certificate
    Performing the following challenges:
    http-01 challenge for mysite.io
    Waiting for verification...
    Cleaning up challenges
    IMPORTANT NOTES:
    - Congratulations! Your certificate and chain have been saved at:
    /etc/letsencrypt/live/mysite.io/fullchain.pem
    Your key file has been saved at:
    /etc/letsencrypt/live/mysite.io/privkey.pem
    Your cert will expire on 2020-09-29. To obtain a new or tweaked
    version of this certificate in the future, simply run certbot
    again. To non-interactively renew *all* of your certificates, run
    "certbot renew"
    - Your account credentials have been saved in your Certbot
    configuration directory at /etc/letsencrypt. You should make a
    secure backup of this folder now. This configuration directory will
    also contain certificates and private keys obtained by Certbot so
    making regular backups of this folder is ideal.
    - If you like Certbot, please consider supporting our work by:
    
    Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
    Donating to EFF: https://eff.org/donate-le
    
    ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

     

    
    # redirect www to https://[domain.com]
    server {
     listen 80;
     listen 443 ssl http2;
     server_name www.mysite.io; 
     return 301 https://mysite.io$request_uri;
    }
    
    # redirect http to https://[domain.com]
    server {
        listen 80;
        server_name mysite.io; 
        return 301 https://mysite.io$request_uri;
    }
    
    # server config
    server {
     listen 443 ssl http2;
     server_name mysite.io;

    Anyone know what I have done wrong?

    post your docker run

  7. Just a general comment. I'm seeing quite a few people here with the comment "followed spaceinvaderone video, it doesn't work".

     

    Perhaps you should ask him for support; maybe there is an issue with the directions there.

     

    If you use the default template as is, and follow the directions we provide in the readme (linked in the first post here), it works. I've been using it for years. It only crapped out on me once, during an image update; I restored from a backup and it has worked just fine since.

     

    Also keep in mind that when you update the image, it has to connect to the openvpn-as repo to download the package. If you have networking issues (dns config, mtu issue, or something like pihole blocking it) you'll see in the logs that it is unable to connect to the repo.

     

    To ask for support from us, post your docker run, and a full docker log on pastebin or the like and drop links here. Also let us know how you're trying to access it (the address) and what settings you changed in the gui. "I followed X guide and it doesn't work" is not going to get you support from us.
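
    For example, from the unraid terminal (the container name is an assumption, adjust it to whatever yours is called):

    # full container log, suitable for pastebin
    docker logs openvpn-as > /tmp/openvpn-as.log 2>&1

    The docker run command itself is printed by the unraid gui whenever you apply the container template, so you can copy it from there.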

  8. 7 hours ago, Energen said:

    Am I doing this wrong or what don't I understand here....   ? (which is a lot)

     

    I'm playing with a Gotify docker container for push notifications.

    I'm playing with this letsencrypt docker for SSL certificates.

     

    Is it possible to (and how do I) use the SSL certs from the letsencrypt container in the Gotify container?

     

    The Gotify config file has an area for SSL

     

    
      ssl:
        enabled: false # if https should be enabled
        redirecttohttps: true # redirect to https if site is accessed by http
        listenaddr: "" # the address to bind on, leave empty to bind on all addresses
        port: 443 # the https port
        certfile: # the cert file (leave empty when using letsencrypt)
        certkey: # the cert key (leave empty when using letsencrypt)
        letsencrypt:
          enabled: false # if the certificate should be requested from letsencrypt
          accepttos: false # if you accept the tos from letsencrypt
          cache: data/certs # the directory of the cache from letsencrypt
    
    

     

    But this seems to require that letsencrypt is running within the same docker container?

     

    I've tried just copying the files from appdata/letsencrypt to a folder in appdata/gotify, but the files "weren't found", so I'm not sure where gotify was looking for them. The main config file is found in appdata/gotify/config; I tried the certs there also.

     

    Gotify doesn't have a support thread here so I'll try in the letsencrypt thread, since I need letsencrypt files ;)

     

    Thanks for any assistance.

    It's explained in the readme, but you really should reverse proxy rather than share certs
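
    Roughly, a subdomain proxy conf for gotify would follow the same pattern as the bundled samples. This is only a sketch: the container name "gotify", its internal port 80, a gotify CNAME at your dns provider, and both containers being on the same custom docker network are assumptions. Leave ssl disabled in gotify's own config and let letsencrypt terminate https.

    # /config/nginx/proxy-confs/gotify.subdomain.conf (sketch)
    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name gotify.*;

        include /config/nginx/ssl.conf;

        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app gotify;
            set $upstream_port 80;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }
    }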

  9. 1 hour ago, Jerky_san said:

    Not that I see, no... it just says "server ready" until I restart it. Ports go completely down but the docker itself is still running. Error logs don't show anything either, but it was able to renew the cert last night, so it must have gone down after that happened.

    Can you post the output of "ps -ef" from inside the container when that happens?
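
    For example, from the unraid terminal (the container name is an assumption):

    docker exec letsencrypt ps -ef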

  10. 8 hours ago, Sain said:

    I have never had OpenDNS running on any of my machines. In fact, I didn't know what OpenDNS is; I thought it's something like Cloudflare DNS or Google DNS. I use Google or my local ISP as my DNS. (I use Pi-hole but I bypassed it completely.) I still don't know what the problem is. OpenVPN used to work fine for months until one day it stopped working entirely. No matter how many times I try a fresh install, or try different machines for both client and server, I still don't have access to the OpenVPN WebGUI, and I get this error repeated in the log: "./run: line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory"

    That line alone is not helpful. All it tells you is that there was an issue with openvpn install. Post a full log if you seek assistance, and post a docker run.

     

    Pihole is known to cause such issues

  11. 2 hours ago, Cpt. Chaz said:

    Hey guys, I read through the thread here and saw folks dealing with a few reverse proxy problems, but didn't see a solution for me. I've kept my setup fairly simple with a custom domain "mysite.com". I've got no issues getting this to work for other CNAME instances, but can't seem to get it working for heimdall. I'm using linuxserver's letsencrypt default subdomain config:

     

    
    # make sure that your dns has a cname set for heimdall
    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name heimdall.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
    
        # enable for Authelia
        #include /config/nginx/authelia-server.conf;
    
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /ldaplogin;
    
            # enable for Authelia
            #include /config/nginx/authelia-location.conf;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app heimdall;
            set $upstream_port 443;
            set $upstream_proto https;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    
        }
    }

    here's a screenshot of my container config:

    [screenshot of container config attached]

     

    I've got heimdall set up and working just fine on the LAN, I just can't get it working on the reverse proxy side. During troubleshooting, I tried changing the upstream port to match the container's 2443:

    
    set $upstream_port 2443

    but it didn't make any difference, so I reverted back to the default 443 until I get some guidance. I've triple-checked my CNAME in Cloudflare at heimdall.mysite.com.

     

    Any help is much appreciated, thanks!

     

    See here https://blog.linuxserver.io/2019/04/25/letsencrypt-nginx-starter-guide/

  12. 5 hours ago, CORNbread said:

    I'm trying to get a subdomain reverse proxy working for airsonic...  all of my other apps work fine but I get 400 Bad Request errors with airsonic.  The CONTEXT_PATH in the airsonic container was originally /airsonic so I removed that but maybe I screwed something up there?  Hopefully someone can help!

     

    My site-confs default has:

    
    server {
    	listen 443 ssl;
    
    	root /config/www;
    	index index.html index.htm index.php;
    
    	server_name music.mydomain.ca;
    
    	include /config/nginx/ssl.conf;
    
    	client_max_body_size 0;
    
    	location / {
    		include /config/nginx/proxy.conf;
    		proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-Host $http_host;
            proxy_set_header Host $http_host;
            proxy_max_temp_file_size 0;
    		proxy_pass http://192.168.1.10:4040/;
    		proxy_redirect http:// https://;
    	}
    	
    	
    }

     

    Why don't you use the preset conf we provide?
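
    As a sketch, switching to the bundled sample would look like this (the sample file name and the appdata path are assumptions based on the image defaults; remove your custom server block from site-confs/default so the two don't clash):

    cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs
    cp airsonic.subdomain.conf.sample airsonic.subdomain.conf
    # adjust server_name in the copy if your subdomain isn't airsonic.*, then
    docker restart letsencrypt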

  13. 12 hours ago, Snipe3000 said:

    I'm getting errors trying to renew the cert, so as I attempt to fix the problem via the router, I need to test the changes by trying to renew the cert again.

    I'm opening up a console in the letsencrypt docker and editing the root file in crontabs.

    I'm changing the last line from 8 2 * * * to something like 45 20 * * *, and I leave the command section as is. I save, restart the docker and wait for the time to come around, but no cron job starts.

    Don't run commands manually inside the container.

     

    Crontab is in the config folder. Edit that
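
    In other words, edit the copy that lives in the config folder on the host so the change actually persists (paths assume the default appdata location):

    nano /mnt/user/appdata/letsencrypt/crontabs/root
    # change the schedule, e.g. 8 2 * * *  ->  45 20 * * *, keep the command as is
    docker restart letsencrypt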

  14. 2 hours ago, bunkermagnus said:

    First, thank you for putting this container together to tap into the potential of our unused CPU cycles.

     

    I've noticed a "quirk": the container does its job and uploads the WU fine, but it fails to clean up the work folder after it's done, and the log gets spammed with the below.

    If I restart the container it cleans up fine on startup, but when the next WU is done it fails to clean up again.

     

    
    13:02:30:WU00:FS00:0xa7: Version: 0.0.18
    
    13:02:30:WU00:FS00:0xa7: Author: Joseph Coffland <joseph@cauldrondevelopment.com>
    13:02:30:WU00:FS00:0xa7: Copyright: 2019 foldingathome.org
    13:02:30:WU00:FS00:0xa7: Homepage: https://foldingathome.org/
    13:02:30:WU00:FS00:0xa7: Date: Nov 5 2019
    13:02:30:WU00:FS00:0xa7: Time: 06:13:26
    13:02:30:WU00:FS00:0xa7: Revision: 490c9aa2957b725af319379424d5c5cb36efb656
    13:02:30:WU00:FS00:0xa7: Branch: master
    13:02:30:WU00:FS00:0xa7: Compiler: GNU 8.3.0
    13:02:30:WU00:FS00:0xa7: Options: -std=c++11 -O3 -funroll-loops -fno-pie
    13:02:30:WU00:FS00:0xa7: Platform: linux2 4.19.0-5-amd64
    13:02:30:WU00:FS00:0xa7: Bits: 64
    13:02:30:WU00:FS00:0xa7: Mode: Release
    13:02:30:WU00:FS00:0xa7:************************************ Build *************************************
    13:02:30:WU00:FS00:0xa7: SIMD: avx_256
    13:02:30:WU00:FS00:0xa7:********************************************************************************
    13:02:30:WU00:FS00:0xa7:Project: 16806 (Run 7, Clone 236, Gen 36)
    13:02:30:WU00:FS00:0xa7:Unit: 0x0000002d82ed0b915eb41c47fe4bf238
    13:02:30:WU00:FS00:0xa7:Reading tar file core.xml
    13:02:30:WU00:FS00:0xa7:Reading tar file frame36.tpr
    13:02:30:WU00:FS00:0xa7:Digital signatures verified
    13:02:30:WU00:FS00:0xa7:Calling: mdrun -s frame36.tpr -o frame36.trr -cpt 15 -nt 2
    13:02:30:WU00:FS00:0xa7:Steps: first=18000000 total=500000
    13:02:31:WU00:FS00:0xa7:Completed 1 out of 500000 steps (0%)
    13:02:35:WU01:FS00:Upload 50.94%
    13:02:41:WU01:FS00:Upload complete
    13:02:42:WU01:FS00:Server responded WORK_ACK (400)
    13:02:42:WU01:FS00:Final credit estimate, 3275.00 points
    13:02:42:WU01:FS00:Cleaning up
    13:02:42:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:02:42:WU01:FS00:Cleaning up
    13:02:42:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:03:42:WU01:FS00:Cleaning up
    13:03:42:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:05:19:WU01:FS00:Cleaning up
    13:05:19:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:07:56:WU01:FS00:Cleaning up
    13:07:56:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:08:20:WU00:FS00:0xa7:Completed 5000 out of 500000 steps (1%)
    13:12:11:WU01:FS00:Cleaning up
    13:12:11:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:14:09:WU00:FS00:0xa7:Completed 10000 out of 500000 steps (2%)
    13:19:02:WU01:FS00:Cleaning up
    13:19:02:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:19:58:WU00:FS00:0xa7:Completed 15000 out of 500000 steps (3%)
    13:25:47:WU00:FS00:0xa7:Completed 20000 out of 500000 steps (4%)
    13:30:08:WU01:FS00:Cleaning up
    13:30:08:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:31:35:WU00:FS00:0xa7:Completed 25000 out of 500000 steps (5%)
    13:37:25:WU00:FS00:0xa7:Completed 30000 out of 500000 steps (6%)
    13:43:13:WU00:FS00:0xa7:Completed 35000 out of 500000 steps (7%)
    13:48:04:WU01:FS00:Cleaning up
    13:48:04:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    13:49:02:WU00:FS00:0xa7:Completed 40000 out of 500000 steps (8%)
    13:54:51:WU00:FS00:0xa7:Completed 45000 out of 500000 steps (9%)
    14:00:40:WU00:FS00:0xa7:Completed 50000 out of 500000 steps (10%)
    14:06:29:WU00:FS00:0xa7:Completed 55000 out of 500000 steps (11%)
    14:12:19:WU00:FS00:0xa7:Completed 60000 out of 500000 steps (12%)
    14:17:07:WU01:FS00:Cleaning up
    14:17:07:ERROR:WU01:FS00:Exception: Failed to remove directory 'work/01': Directory not empty
    
    14:18:09:WU00:FS00:0xa7:Completed 65000 out of 500000 steps (13%)
    14:23:58:WU00:FS00:0xa7:Completed 70000 out of 500000 steps (14%)
    14:29:47:WU00:FS00:0xa7:Completed 75000 out of 500000 steps (15%)
    14:35:35:WU00:FS00:0xa7:Completed 80000 out of 500000 steps (16%)
    14:41:24:WU00:FS00:0xa7:Completed 85000 out of 500000 steps (17%)
    14:47:14:WU00:FS00:0xa7:Completed 90000 out of 500000 steps (18%)

     

    That's really an upstream issue as opposed to a docker container one. You should report it to folding@home

  15. 7 hours ago, Vesko said:

    Hi, first I'd like to thank you for all the great work and support of this docker.
    I use it for 7 dockers, and accessing them from the WAN works perfectly.

    Now I'm trying to set up the Rocket.Chat docker, but I can't find a rocketchat.subdomain.conf file in the folder in letsencrypt.

    Is it possible to use another subdomain.conf file and just change the name to rocketchat? Or please give me some advice.

    Sorry, I don't have very good skills in this.

    Thank you very much.

     

    [screenshot attached: Screenshot_20200609-225657_Firefox.jpg]

    Sure, you can copy another one and modify as needed
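
    For example, starting from one of the bundled samples (file names, paths and the Rocket.Chat port are assumptions; the official Rocket.Chat image listens on 3000 by default):

    cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs
    cp heimdall.subdomain.conf.sample rocketchat.subdomain.conf
    # in the copy, change: server_name rocketchat.*;  set $upstream_app rocketchat;
    #                      set $upstream_port 3000;   set $upstream_proto http;
    docker restart letsencrypt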

  16. 7 hours ago, mrMTB said:

    Firstly let me thank the developer(s) for their work and an excellent container.

     

    My unRAID server is only about a month old, and I'm slowly working through optimizing everything in it. I have a pair of 1660 Supers in my machine, one I've isolated for use in a Win10 gaming VM (primary position) and a second I use for transcoding (currently passed through to Plex and Handbrake, though the latter is generally not running).

     

    Transcoding works like a charm in Plex, and I've scaled it to twenty HEVC>1080p transcodes with resources left to spare on the card. When the transcoding stops, however, the card stays in P0 consuming about 35W compared to my isolated card that sits in P8 consuming about 11W. I have persistence mode enabled on both cards. If I restart the Plex container the card will immediately drop to P8.

     

    Does anyone else see this behavior or have thoughts on how I might get the card to idle properly without having to manually restart the container?

    It's a Plex thing

  17. 6 hours ago, Danietech said:

    Hi Noob User here,

    I am currently using lets encrypt as a docker within my unraid server, for the most part it is working fine.

    The current issue is one I read in my logs; I believe it is a note letting me know that I can add a GeoIP2 license if I add an environment variable.

     

     

    “Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind,
    and add a new env variable “MAXMINDDB_LICENSE_KEY”, set to your license key.”

     

     

    I would like to utilize the feature, so I signed up to MaxMind and, working through it with my limited knowledge, managed to generate a license.

     

    This is where I have got stuck, mainly because I don't want to break what is seemingly working for the most part.

    I want to add the env variable, so I opened a console to the letsencrypt docker and opened the environment text file in nano. But after searching the internet to learn how to add the command to call for the database, I have found several ways to do this, which is causing me some confusion. I am hoping I could get a little help to add the right syntax to enable the feature, as I don't want to break it.

     

     

    I always say any help would be greatly appreciated, so: any help would be greatly appreciated.

    Just edit the container settings in unraid gui and add a variable
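
    In docker run terms it amounts to one extra -e flag; in the unraid gui, add a new Variable with the key below (the value is a placeholder from your MaxMind account). Sketch only - keep all of your existing ports, variables and mappings:

    docker run -d --name=letsencrypt \
      -e MAXMINDDB_LICENSE_KEY=your_license_key_here \
      ... \
      linuxserver/letsencrypt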

  18. 5 hours ago, SiRMarlon said:

    Hmmm, I think there is a disconnect here... I was updating Unraid from 6.8.0 to 6.8.3 and I knew it was going to break, as it had done this before when I switched from 6.7 to 6.8. I get that part. After I upgraded to 6.8.3 I knew I was going to have to update the Nvidia drivers to the latest version for 6.8.3. That is when the issues started. It definitely was not my ISP, as I ran a few speed tests to verify it wasn't my end that was slow. But regardless, like I stated, I rolled back to 6.8.0 after I couldn't get the Nvidia drivers to download and install correctly... it is what it is. Just figured I'd let you all know.

    I think you misunderstand how this addon works. It doesn't download nvidia drivers. It downloads and installs a custom unraid (the whole operating system). You don't have to upgrade or downgrade unraid itself. You just install an nvidia build (or stock) through the addon.

  19. 1 hour ago, mfwade said:

    Interestingly enough, I too am having issues as of late - starting a few days ago. My docker.img wavered around 27% used or so (20G size) and all of a sudden it crept up to 90% utilized. I removed the img, recreated it at 30G, reinstalled all of the dockers, and saw that Plex was at a small 423MB size (yesterday). 24 hours or so later it is sitting at 7.2G. I suspect it may double in size again, as it did previously (yesterday when I got the utilization report/alert), to over 14G in 24 hours.

     

    Transcode resides on its own share on cache. I created a text file (touch test.txt) to verify in the Plex console that Plex is actually seeing and able to write to that directory, and it is. Nothing other than reinstalling Plex has changed for a long time... This is net-new behavior.

     

    Docker run command:  root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='Plex' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VERSION'='docker' -e 'NVIDIA_VISIBLE_DEVICES'='' -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/movies/':'/movies':'rw' -v '/mnt/user/tvshows/':'/tv':'rw' -v '/mnt/user/music/':'/music':'rw' -v '/mnt/user/transcode/':'/transcode':'rw' -v '/mnt/user/appdata/plex/':'/config':'rw' 'linuxserver/plex' 

    7b62f9fb6c0ba757bbe78663cd8155d3a29bb848162f0c40004eb0b9b8cfb640

     

    Any help this group can provide is appreciated.

     

    -MW

     

     

    [attachments: plex_today.png, plex_yesterday.png, tower-diagnostics-20200602-1022.zip]

    Check the plex server gui settings and make sure transcoding folder is set to "/transcode"