njdowdy


Posts posted by njdowdy

  1. I am experiencing an issue with calibre-web when reverse proxied through nginx via this container.

    These are the nginx configs I have set for this subdomain:
    Scheme: http
    HTTP/2 Support: On

    The calibre-web container is accessible via the internet and works fine for a short time. I do get a message that "You are not securely connected to this site" and that "parts of this page are insecure (such as images)" (mixed content).

    After some amount of activity (or 2.5-5 minutes of inactivity), I get re-directed to the login page. This re-direct tends to trigger more often if I am clicking around the application a lot -- for example if I follow 6-10 links (e.g., "Books" to "Settings" to "Top Rated" to "Account", etc) within about 30-45 seconds. It's not unusable, but it is very annoying, especially on mobile where entering my credentials over and over is a chore!

    This does not occur via local IP, so the issue is with the reverse proxy setup. Is there some custom Nginx configuration I need to include to stop this from happening? I also use Cloudflare. Could it be something with caching that is making it unhappy? I don't understand how it would control my credentials and log-in status though... I don't see anything in any logs to help troubleshoot the problem. Can anyone suggest where I could look for more info?
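    For reference, my understanding is that session loss and mixed content behind a proxy often come down to the upstream app not being told it is served over HTTPS. A sketch of the kind of custom Nginx location config I mean (the calibre-web port 8083 and whether these headers cure this particular setup are assumptions on my part):

```nginx
# Sketch only: forward the real host/scheme so calibre-web builds
# https:// URLs and keeps its session cookie valid behind the proxy.
location / {
    proxy_pass http://<unraid_ip>:8083;   # assumed calibre-web port
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```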

    Thanks!

  2. SOLVED: Turned out to be addressed via a later firmware update to the router.


    Hello! Thanks in advance for checking out my question.

    TL;DR: I installed a new router, and now my Plugins and Community Applications pages are very slow to load, and pulling new docker containers either fails or takes a very long time. I tried many steps to resolve the issue, but it persists.

    Before yesterday I was running my unraid server (version 6.9.0-rc2) on my home network. I purchased a new router (QNAP QHora-301W w/ up-to-date firmware) and plugged my unraid server into a 10Gbit LAN port. Nothing else was changed. I updated the LAN IP range in the router (192.168.100.X > 10.20.30.X), set my unraid server to the same static IP it was on prior to the new router (10.20.30.222; managed by the router), and set DNS to 1.1.1.1 (was 8.8.8.8 before). I did NOT bring over my old port forwarding mappings, because I felt I could use a fresh start on that (I probably had more open ports than I really needed). Everything ISP-related stayed exactly the same. I access some docker containers via the internet using Nginx Proxy Manager (NPM), and so I did forward port 80 > 1880 and 443 > 18443.

    At this point, everything on the server was seemingly working great. However, I tried to log into the NPM web GUI and discovered that I could not, with the error "Bad Gateway". I Googled that error and didn't find any good leads. I figured I'd just back up my appdata folder for NPM, delete the docker container, and start fresh to see if that fixed anything. I then went to Community Applications (CA) and tried to re-download the NPM container. CA (and also the "Plugins" tab) was much slower than usual to load. I also could not pull down any containers (NPM or anything else). These were the errors from Tools > Diagnostics:
     

    time="2021-04-07T08:39:55.878436185-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:33386->104.18.123.25:443: read: connection timed out"
    time="2021-04-07T08:39:55.878487095-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:33382->104.18.123.25:443: read: connection timed out"
    time="2021-04-07T08:39:55.878441744-07:00" level=error msg="Not continuing with pull after error: error pulling image configuration: read tcp 10.20.30.222:33384->104.18.123.25:443: read: connection timed out"
    time="2021-04-07T08:39:56.390430850-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:33392->104.18.123.25:443: read: connection timed out"
    time="2021-04-07T08:40:49.127467179-07:00" level=error msg="Not continuing with pull after error: error pulling image configuration: read tcp 10.20.30.222:56906->104.18.125.25:443: read: connection timed out"
    time="2021-04-07T08:40:57.830529626-07:00" level=error msg="Not continuing with pull after error: error pulling image configuration: read tcp 10.20.30.222:56916->104.18.125.25:443: read: connection timed out"
    time="2021-04-07T08:41:50.054561692-07:00" level=error msg="Not continuing with pull after error: error pulling image configuration: read tcp 10.20.30.222:46292->104.18.124.25:443: read: connection timed out"


    And on the docker pull page from community applications:
     

    IMAGE ID [1610037955]: Pulling from jlesage/nginx-proxy-manager.
    IMAGE ID [df20fa9351a1]: Already exists.
    IMAGE ID [c29f2a9687c5]: Already exists.
    IMAGE ID [f2b10fbfc380]: Already exists.
    IMAGE ID [529722c2e3cf]: Already exists.
    IMAGE ID [f0cf5f38d987]: Already exists.
    IMAGE ID [21fb739242f4]: Already exists.
    IMAGE ID [b17e90563eea]: Pulling fs layer. Downloading 92% of 532 KB.
    IMAGE ID [bff4d859ae50]: Pulling fs layer. Downloading 67% of 4 MB.
    IMAGE ID [fd1567abff3c]: Pulling fs layer. Downloading 92% of 34 MB.
    IMAGE ID [90a3e8820aa7]: Pulling fs layer.
    IMAGE ID [4704d454c63e]: Pulling fs layer.
    IMAGE ID [138ba29e4057]: Pulling fs layer.

     

    It is strange to me that some data is coming through (92% of 34MB), but not enough! Those downloads stall at those percentages after about 5-10 seconds. After about 1 minute elapses, this appears:

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='NginxProxyManager' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'USER_ID'='99' -e 'GROUP_ID'='100' -e 'UMASK'='000' -e 'APP_NICENESS'='' -e 'DISABLE_IPV6'='0' -p '7818:8181/tcp' -p '1880:8080/tcp' -p '18443:4443/tcp' -v '/mnt/user/appdata/NginxProxyManager':'/config':'rw' 'jlesage/nginx-proxy-manager'
    Please wait ...
    
    Unable to find image 'jlesage/nginx-proxy-manager:latest' locally
    latest: Pulling from jlesage/nginx-proxy-manager
    df20fa9351a1: Already exists
    c29f2a9687c5: Already exists
    f2b10fbfc380: Already exists
    529722c2e3cf: Already exists
    f0cf5f38d987: Already exists
    21fb739242f4: Already exists
    b17e90563eea: Pulling fs layer
    bff4d859ae50: Pulling fs layer
    fd1567abff3c: Pulling fs layer
    90a3e8820aa7: Pulling fs layer
    4704d454c63e: Pulling fs layer
    138ba29e4057: Pulling fs layer
    4704d454c63e: Waiting
    90a3e8820aa7: Waiting
    138ba29e4057: Waiting


    And then we wait for a long, long time. These are the abbreviated steps I took next, more or less in this order:
     

    • Unraid is able to ping Google just fine
    • All docker containers (except NPM's "Bad Gateway") are working perfectly fine via local ports (but not from outside the network, because NPM is gone now)
    • Checked "Fix Common Problems" and removed the preclear plugin (probably unrelated); no other problems show up; problem persists
    • Googled around and determined it might be a DNS issue; tried OpenDNS (208.67.222.222), Google DNS (8.8.8.8), and a different Cloudflare DNS (1.0.0.1), all set via the router (Unraid set to obtain automatically); problem persists
    • Checked that Docker Hub was online and functional
    • Checked that GitHub was online and functional
    • Shut down server, shut down router, started router, started server; problem persists
    • Turned off port forwarding of 80 and 443 to the nginx ports (since the docker container is gone)
    • Shut down server, shut down router, started router, started server; problem persists
    • Opened up some high-numbered ports near those referenced in the docker diagnostics (~30000-50000); problem persists
    • Switched to a 1Gbit port on the router (was a 10Gbit port); problem persists
    • Settings > Docker > scrubbed the docker btrfs image with "correct errors" checked; no errors found; problem persists
    • Rebooted server in GUI Safe Mode, installed CA, tried again; problem persists
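    One more idea occurred to me while writing this up: downloads that start and then stall mid-transfer can be a symptom of an MTU mismatch introduced by the new router. A sketch of how I'd probe that from the Unraid console (the 28-byte overhead is the IPv4 + ICMP headers; the ping flags are the iputils ones and may differ on other builds):

```shell
#!/bin/bash
# Largest ICMP payload that fits in a given MTU without fragmentation:
# payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
mtu_payload() {
    echo $(( $1 - 28 ))
}

mtu_payload 1500   # standard Ethernet MTU -> 1472

# Then probe with the "don't fragment" bit set (not run here):
#   ping -M do -s "$(mtu_payload 1500)" -c 3 registry-1.docker.io
# If that times out but smaller payloads get replies, the router is
# dropping large frames and its MTU setting deserves a look.
```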
       

    In GUI Safe Mode, I decided to just leave the docker container download going for a long time (~1 hour). I came back to this:
     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='NginxProxyManager' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'USER_ID'='99' -e 'GROUP_ID'='100' -e 'UMASK'='000' -e 'APP_NICENESS'='' -e 'DISABLE_IPV6'='0' -p '7818:8181/tcp' -p '1880:8080/tcp' -p '18443:4443/tcp' -v '/mnt/user/appdata/NginxProxyManager':'/config':'rw' 'jlesage/nginx-proxy-manager'
    Please wait .
    
    Unable to find image 'jlesage/nginx-proxy-manager:latest' locally
    latest: Pulling from jlesage/nginx-proxy-manager
    df20fa9351a1: Already exists
    c29f2a9687c5: Already exists
    f2b10fbfc380: Already exists
    529722c2e3cf: Already exists
    f0cf5f38d987: Already exists
    21fb739242f4: Already exists
    b17e90563eea: Pulling fs layer
    bff4d859ae50: Pulling fs layer
    fd1567abff3c: Pulling fs layer
    90a3e8820aa7: Pulling fs layer
    4704d454c63e: Pulling fs layer
    138ba29e4057: Pulling fs layer
    138ba29e4057: Waiting
    90a3e8820aa7: Waiting
    4704d454c63e: Waiting
    b17e90563eea: Retrying in 5 seconds
    bff4d859ae50: Retrying in 5 seconds
    b17e90563eea: Retrying in 4 seconds
    bff4d859ae50: Retrying in 4 seconds
    b17e90563eea: Retrying in 3 seconds
    bff4d859ae50: Retrying in 3 seconds
    fd1567abff3c: Retrying in 5 seconds
    b17e90563eea: Retrying in 2 seconds
    bff4d859ae50: Retrying in 2 seconds
    fd1567abff3c: Retrying in 4 seconds
    b17e90563eea: Retrying in 1 second
    bff4d859ae50: Retrying in 1 second
    fd1567abff3c: Retrying in 3 seconds
    fd1567abff3c: Retrying in 2 seconds
    bff4d859ae50: Download complete
    b17e90563eea: Verifying Checksum
    b17e90563eea: Download complete
    b17e90563eea: Pull complete
    bff4d859ae50: Pull complete
    fd1567abff3c: Retrying in 1 second
    fd1567abff3c: Download complete
    138ba29e4057: Verifying Checksum
    138ba29e4057: Download complete
    fd1567abff3c: Pull complete


    and in docker logs via diagnostics:
     

    time="2021-04-07T18:43:31.729223056-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:43780->104.18.124.25:443: read: connection timed out"
    time="2021-04-07T18:43:31.729239234-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:43782->104.18.124.25:443: read: connection timed out"
    time="2021-04-07T18:43:34.289240096-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:43784->104.18.124.25:443: read: connection timed out"
    time="2021-04-07T18:48:39.953230819-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:43930->104.18.124.25:443: read: connection timed out"
    time="2021-04-07T18:48:40.978206276-07:00" level=error msg="Download failed, retrying: read tcp 10.20.30.222:43928->104.18.124.25:443: read: connection timed out"


    Amazingly, the docker container is installed at this point. I guess it constantly tried different ports until it could make a connection to 104.18.124.25:443? That IP address (104.18.124.25:443) appears to be a cloudflare server, not sure if there's significance to that, or that's just what docker hub uses. However, NPM still displays the "Bad Gateway" error from before (even with new appdata path).

     

    So, I fixed my short-term problem of accessing my containers via the internet by re-forwarding ports for NPM. However, I am unable to install new docker containers (at least in a reasonable amount of time), I probably can't update them (not tested yet), and I can't add new subdomains to NPM (due to the "Bad Gateway" problem). I don't see much in the NPM logs except:
     

    [4/7/2021] [8:03:10 PM] [Migrate ] › ℹ info Current database version: none


    Nothing else on my network is having any issues with this new router. What have I missed on the Unraid side?

    Thanks so much!!

     

  3. 7 hours ago, jonathanm said:

    I wrote a quick little script to deal with my specific 429 errors. If you are routing youtubedl through one of the vpn enabled containers, this should work for you with some modifications.

    
    #!/bin/bash
    # This script will attempt to get a new IP for the Youtube DL Material container.
    # It assumes you are using the binhex-delugevpn for your connection, and already
    # have that process working, by setting youtube-dl-material network to none,
    # and adding --net=container:binhex-delugevpn to the Extra Parameters: field.
    # You also have to add a mapping for port 17442 to binhex-delugevpn container
    # and use your own shortcut to access the WebGUI for Youtube DL Material.
    # If you are using a different VPN enabled container you will need to alter this
    # script for your specific setup. If you don't want to be notified, comment out
    # the /usr/local/emhttp/webGui/scripts/notify line.
    # Set this script to run however often you want to check for 429 errors
    
    if docker ps | grep -q youtube-dl-material
    then
        echo "Youtube DL Material container running, checking for error 429"
        if /usr/bin/docker logs youtube-dl-material --tail 1 | /usr/bin/grep -q "HTTP Error 429"
        then
            echo "Restarting VPN container and Youtube DL Material"
            echo "_stopping Youtube DL Material"
            /usr/bin/docker stop --time=60 youtube-dl-material
            echo "_restarting Binhex DelugeVPN"
            /usr/bin/docker restart --time=60 binhex-delugevpn
            echo "_waiting 5 minutes"
            sleep 300
            echo "_starting Youtube DL Material"
            /usr/bin/docker start youtube-dl-material
            echo "_sending notifications"
            /usr/local/emhttp/webGui/scripts/notify -s "Youtube DL Material restarted" -i "warning" -m "Too many requests error detected" -d "Too many requests error detected"
        else
            echo "No 429 error found"
        fi
    else
        echo "Youtube DL Material container not running"
    fi

     


    Actually, I was going to figure out how to set up a VPN container, but then I discovered how to pass the youtube API key and cookies.txt from within the youtube-dl container. Once I did that, the problems stopped. At least, for now!

    I also saw there are some options in there to include a sleep interval between downloads to keep the requests/time lower.
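    For anyone who finds this later, a sketch of the youtube-dl side of it (the --cookies, --sleep-interval, and --max-sleep-interval flags are standard youtube-dl options; the /config path and how the container template passes extra arguments through are my assumptions):

```shell
#!/bin/bash
# Sketch: pass a cookies file and pace requests to avoid HTTP 429.
# The cookies path is an assumption for illustration.
ARGS=(
  --cookies /config/cookies.txt
  --sleep-interval 5
  --max-sleep-interval 15
)

# Example invocation (not run here):
#   youtube-dl "${ARGS[@]}" "https://www.youtube.com/watch?v=..."
printf '%s\n' "${ARGS[@]}"
```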

    Thanks!

  4. On 9/18/2020 at 5:51 PM, jonathanm said:

    Sorry, I already readded all the subscriptions. I only have the default log level set, so I wasn't expecting much info there.

     

    When I went to the subs page, it displayed the same message of no items for both playlists and subscriptions, I've never put in any playlists, but the message was displayed for both. When I readded the subs, they showed up properly, both on the left column and the subs list. One anomaly to note is that even though adding the sub closes the add sub popup, the new sub doesn't show in the list until the browser is refreshed.

     

    At the moment I'm getting the 429 too many requests, so I've shut down the container until tomorrow, I'll post back. Prior to shutting down I added a cookie and a youtube API key and the 429's continued, but I don't know if that makes a difference once the 429 is already happening.

     

     


    Did the "too many requests" issue resolve for you? I'm in the same boat. Can I ask how you added the cookie and youtube API key to the docker? NOTE: I have a cookies.txt from the firefox extension and an API key. Just wondering how to input them into the template.

    Thanks!

  5. 12 hours ago, ich777 said:

    What resolution gave you problems? The picture that you've posted looks as if you took a picture at something like 300x150 and resized it to 1000x500.

     

    Do you use any stretch or scaling options within noVNC?

     

    Are you stretching the image? I don't experience this in the DebianBuster container, only if you stretch or zoom the image drastically.

    Here is a screenshot:

    grafik.png.7ecb353016581ec17dca764a0e1f49f8.png


    The ferdi resolution is set to 1920x1080 for these images. I did take a screenshot of only a small region of that, just to demonstrate the artifacts. That was blown up to a high resolution only to make the artifacts clearly visible. They are noticeable at the resolution of 1920x1080 in the browser window.

    I don't think I'm stretching the image with novnc. Scaling mode: none. I have made no other adjustments elsewhere. Only setting the resolution in the template. Here are my novnc settings:

    image.png.ba06e5e30fdb3514a1b4dfed4f12b6dc.png

    To me, your posted image actually has these noticeable artifacts at the posted resolution - specifically around darker text / outlines. The blue hashtag text actually seems fine to me. See below.

     

    image.thumb.png.40631b3ca4a90f535dde08ecda6c5d12.png

    This graphic attempts to demonstrate how a range of quality/compression levels affects the display of text in the other docker I use that incorporates noVNC. All these images captured at the same resolution.

    image.thumb.png.5670c229e6198cb69e90312029707e41.png

    Here are a few other comparisons between Ferdi's Tweetdeck versus visiting tweetdeck.twitter.com directly. Both are same browser, same resolution. You can see the artifacts appear in Ferdi, but not in the native site. 

    image.thumb.png.a4f25d33799c2992cf4781e0bc87c762.png
     

    image.png.a545c6a57dec6db8b141fee59ac2e05d.png



    Sorry to ask you to spend your time on such a minor issue. But it bothers me and I'm wondering if I can fix it and learn something in the process. As I said, I have experienced this with other noVNC dockers too, so it may be something applicable to other projects as well. Thanks!

     

     

  6. On 1/14/2021 at 9:22 PM, ich777 said:

    May I ask to which resolution you want to set it?

    Can you anyway post the logs after setting the resolution? You should also have a novnc.log and an x11nc.log in the appdata directory; can you post them as well?


    Thank you for your reply! While I was trying to obtain the information you requested, this functionality started working! It now adjusts the resolution, at least to 1600x900 and 1920x1080. So, that's great! I have no idea what changed...

    However, I am experiencing what I assume are compression artifacts. Is there anything I can do about this? See:

    image.png.525b6237b6e70e0ea594fedfb822fabd.png

    I have experienced this in other dockers using novnc, but those dockers included an option in the side-bar menu > "Settings" > "Advanced" to adjust compression level. That seems to be missing here (other docker on left; ferdi docker on right).

    image.png.61b8993ddc22f7a5dc8e1505df2db7bf.pngimage.png.61274a290b6da0942fcb390f1a422e04.png

    Thank you again!

  7. I can't get the ferdi-client docker to display at any resolution higher than 1280x768 (the default setting). Whenever I change it in the docker template, the container starts fine (no errors in the logs) and I can even establish a connection via noVNC. However, I just get a black screen with the dimensions of the desired resolution.

    Is it possible to increase the resolution? This docker is not usable at the default resolution for me :(

    Thanks!

  8. On 8/27/2020 at 9:30 AM, GilbN said:

    Check your container logs 

    !SOLVED: see below if you have this problem

    Sorry to be such a noob, but I'm also having trouble finding the default user and password.

    I'm aware the documentation states that these are logged to STDOUT, however I cannot seem to find any credentials in the logs. I checked the container logs with:

    docker logs Mango > /boot/MangoLog.txt

    Which yields only:

                  _|      _|
                  _|_|  _|_|    _|_|_|  _|_|_|      _|_|_|    _|_|
                  _|  _|  _|  _|    _|  _|    _|  _|    _|  _|    _|
                  _|      _|  _|    _|  _|    _|  _|    _|  _|    _|
                  _|      _|    _|_|_|  _|    _|    _|_|_|    _|_|
                                                        _|
                                                    _|_|
    
    Mango - Manga Server and Web Reader. Version 0.18.2
    [INFO]  2021/01/11 20:35:28 | Starting DB optimization
    [INFO]  2021/01/11 20:35:28 | DB optimization finished
    [INFO]  2021/01/11 20:35:28 | Scanned 0 titles in 4.195587ms
    [INFO]  2021/01/11 20:36:28 | Starting thumbnail generation
    [INFO]  2021/01/11 20:36:28 | Thumbnail generation finished
    [INFO]  2021/01/11 20:40:28 | Scanned 0 titles in 2.073916ms
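    In hindsight, one possible reason the redirect above showed so little: docker logs writes part of a container's output to stderr, and a plain > captures only stdout. A minimal demonstration of the pitfall (generic commands, not Mango-specific):

```shell
#!/bin/bash
# "cmd > file" keeps stdout only; stderr goes elsewhere.
demo() {
    echo "on stdout"
    echo "on stderr" >&2
}

demo > /tmp/only_stdout.txt 2>/dev/null

if grep -q "on stderr" /tmp/only_stdout.txt; then
    echo "stderr captured"
else
    echo "stderr was lost"
fi

# So for the container, capturing both streams would be:
#   docker logs Mango > /boot/MangoLog.txt 2>&1
```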

    Finally, from the console of the container, I tried:

    find . -name "*stdout*"

    and only found "./dev/stdout". However, I can't access it from unraid with:

    docker cp Mango:dev/stdout /boot/mango.stdout

    And from within the container:

    cat stdout

    yields no information.

    Finally, I changed the log_level to "debug" and tried "admin" as a username. Apparently that is the username, however I don't know the password.

                  _|      _|
                  _|_|  _|_|    _|_|_|  _|_|_|      _|_|_|    _|_|
                  _|  _|  _|  _|    _|  _|    _|  _|    _|  _|    _|
                  _|      _|  _|    _|  _|    _|  _|    _|  _|    _|
                  _|      _|    _|_|_|  _|    _|    _|_|_|    _|_|
                                                        _|
                                                    _|_|
    
    
    Mango - Manga Server and Web Reader. Version 0.18.2
    
    [DEBUG] 2021/01/11 21:08:00 | We are in release mode. Using embedded static files.
    [DEBUG] 2021/01/11 21:08:00 | Starting Kemal server
    [INFO]  2021/01/11 21:08:00 | Starting DB optimization
    [INFO]  2021/01/11 21:08:00 | DB optimization finished
    [DEBUG] 2021/01/11 21:08:00 | Scan completed
    [INFO]  2021/01/11 21:08:00 | Scanned 0 titles in 3.915386ms
    [DEBUG] 2021/01/11 21:08:12 | 302 GET / 836.62µs
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /login 36.27µs
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /css/mango.css 49.43µs
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /css/uikit.css 1.69ms
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /js/common.js 39.77µs
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /js/fontawesome.min.js 261.47µs
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /js/uikit.min.js 654.11µs
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /js/uikit-icons.min.js 418.27µs
    [DEBUG] 2021/01/11 21:08:12 | 200 GET /js/solid.min.js 18.33ms
    [DEBUG] 2021/01/11 21:08:16 | Password does not match the hash
    [DEBUG] 2021/01/11 21:08:16 | 302 POST /login 139.43ms
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /login 32.36µs
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /css/uikit.css 820.8µs
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /css/mango.css 42.45µs
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /js/fontawesome.min.js 273.5µs
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /js/common.js 49.3µs
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /js/uikit.min.js 640.04µs
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /js/uikit-icons.min.js 328.12µs
    [DEBUG] 2021/01/11 21:08:16 | 200 GET /js/solid.min.js 9.39ms


    Is there some other log I'm supposed to be looking for to find the password?

    Thank you!!

    SOLUTION: I never did find the default password posted anywhere. However, this approach worked for me.

    1. Edit the config file in appdata so that "disable_login" is set to "true" and "default_username" is set to "admin". Save.
    2. Restart the container.
    3. You will now be logged in. You can use the user management options to change the admin password.
    4. Edit the config file in appdata to set "disable_login" back to "false" and "default_username" back to "". Save.
    5. Restart the container.
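    For clarity, during step 1 the relevant lines of Mango's config file end up looking something like this (a sketch; every other key stays exactly as it was, and these two values are reverted in step 4):

```yaml
# Temporary values for the login-bypass trick (revert afterwards!)
disable_login: true
default_username: admin
```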
     

  9. On 12/14/2020 at 7:56 AM, mattie112 said:

    You can try to stop your docker container and then use the `exec` step so that you are the only one running certbot. I assume a restart of the container did not work? You can check to see if your DNS is configured correctly by using https://dnscheck.ripe.net/ for example. (Or sharing your domain here)

    I believe the issue was with my new ISP. I'm using CloudFlare now without issue. Thanks for taking the time to respond.

  10. On 5/4/2018 at 8:04 PM, wayner said:

    Can you run Jupyter Notebook with this docker from within Pycharm?   I tried following these directions here:  https://www.jetbrains.com/help/pycharm/using-ipython-notebook-with-product.html

     

    But that didn't seem to work as Jupyter by default only likes to run on the localhost and I don't know how you do that in a docker.

     

    It might be nice to have a docker that just runs Jupyter Notebook (FKA IPython).

    I'm trying to do the same (also to serve a development version of a Flask application). I looked at @waster's recommendation; however, the options mentioned in the documentation don't appear for me. It says at the top of the page that this is a "Professional" feature, and this PyCharm seems to be the Community Edition.

    Is this possible with this docker? Thanks in advance!

  11. I'm trying to replace pgAdmin 4. The GitHub page for this project says it can do:

    - Data export/migration in multiple formats (is this functionality similar to pg_dump & pg_restore?)
    - Generate entity diagrams

    I'm not able to do these out of the box. I can export tables one at a time, but not as a dump. And I don't see a "diagram" tab on entities which should support a diagram representation. Running :latest.

    There's no info on the docker page about additional configuration steps. Am I missing something or are these things not supported? Thanks for the info!

  12. I "resolved" the issue described in my previous post. For those facing similar errors renewing certificates, check your ISP's policies. My new ISP has a stricter port policy than my previous one: it blocks port 80, which breaks the Let's Encrypt certificate renewal process.

    My solution was to integrate Cloudflare with NPM. That provides a workaround for the ISP blocking port 80. I hope that helps others.
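    For those asking how: Let's Encrypt also supports a DNS-01 challenge, which never touches port 80. In NPM that's the "Use a DNS Challenge" option when requesting a certificate; with Cloudflare selected, the credentials box expects an ini snippet roughly like this (the key name is what certbot's Cloudflare plugin uses; treat the exact dialog wording as my recollection):

```ini
# Cloudflare API token with Zone / DNS / Edit permission (value redacted)
dns_cloudflare_api_token = <your_api_token>
```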

  13. I recently moved and am now having problems renewing my certificates. My issues are similar to what others have posted here, but I am having a difficult time finding whether a solution was found. The problem is:

    1. When the docker is first started the log says:

    ⚠ warning Command failed: /usr/bin/certbot certonly --non-interactive --config "/etc/letsencrypt.ini" --cert-name "npm-31" --agree-tos --email "[email protected]" --preferred-challenges "dns,http" --domains "mysubdomain.mydomain.com"
    Another instance of Certbot is already running

    And then a bunch of challenges fail.

    2. When I attempt to manually renew or add SSL certificates from within the interface I get an "Internal Error" notification and the same message as in #1 in the docker log.

    3. When I go to the console and attempt "certbot renew --dry-run" as suggested by @mattie112, the challenges fail and I get the following:
     

    IMPORTANT NOTES:
     - The following errors were reported by the server:
    
       Domain: mysubdomain.mydomain.com
       Type:   connection
       Detail: Fetching
       http://mysubdomain.mydomain.com/.well-known/acme-challenge/hlQQ3HIdDm_aurZNHIpTu3jjgUe3KwBRcOtRtwhk5Vg:
       Timeout during connect (likely firewall problem)
    
       To fix these errors, please make sure that your domain name was
       entered correctly and the DNS A/AAAA record(s) for that domain
       contain(s) the right IP address. Additionally, please check that
       your computer has a publicly routable IP address and that no
       firewalls are preventing the server from communicating with the
       client. If you're using the webroot plugin, you should also verify
       that you are serving files from the webroot path you provided.

    I can ping from within the nginxproxymanager docker console. My ports 80 and 443 are forwarded to 180 and 1443 and those are mapped to the nginxproxymanager docker just as they were when things were functional prior to the move.

    When I set things up in my new location I did register my new WAN IP address with duckdns.org to reflect this IP change. My websites are accessible via the internet, but some give me a warning that they are unsafe because of self-signed certificates. Some (e.g. nextcloud) won't allow me to upload files to the server and they time out.

    I'm not sure what else needs to be done. Could this be something with the new ISP or am I missing something?

    Thanks!

  14. Is there any trick to getting this to work with nginx proxy manager?

    EDIT: Never mind, solved. The Nginx scheme needs to be http, not https. Not sure why, though; nextcloud does that to me too. If I visit https://subdomain.mydomain.com it seems fine.

    I had to use the docker link fix with MONGO_URL earlier in the thread, which worked great (so mongo and wekan on same docker network as nginxproxymanager). I changed the ROOT_URL to https://subdomain.mydomain.com

    In nginx config I've got https://<unraid_ip>:5555 linked to subdomain.mydomain.com

    But I keep getting 502 bad gateway errors when I visit https://subdomain.mydomain.com

    Thanks!

  15. Can someone please advise me on how to pass my postgres docker to the Taskcafe docker template? 

    I have a postgres11 docker named "postgres-taskcafe" running on <my_unraid_ip>:5433

    In the Taskcafe template I have tried passing the following into TASKCAFE_DATABASE_HOST: postgres-taskcafe; <my_unraid_ip>:5433
     

    With the following error: error="dial tcp: lookup postgres-taskcafe on 1.1.1.1:53: no such host"

    Network Type set to "Bridge" like the rest of my dockers. I'm sure I'm making a noob mistake. Thanks in advance!
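    To spell out what I suspect: the "no such host" error means the container name isn't resolvable from the default bridge network, and the host field probably shouldn't carry the port at all. A sketch of the two template variants I have in mind (TASKCAFE_DATABASE_PORT is my guess at the variable name; only TASKCAFE_DATABASE_HOST appears in the template for sure):

```ini
# Variant 1: point at the host IP, with the port split out
TASKCAFE_DATABASE_HOST=<my_unraid_ip>
TASKCAFE_DATABASE_PORT=5433

# Variant 2 (assumption): put taskcafe and postgres-taskcafe on the same
# user-defined docker network, so the container name resolves:
# TASKCAFE_DATABASE_HOST=postgres-taskcafe
```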

  16. Do we need a docker mod to get remote development tools? I did not see anything like that in the list of currently available docker mods. See snapshot of my local install of VS that lists some of these. These don't show up as search results using the extensions search from within this docker. Can anyone share info on how to set up ssh for remote development from within this docker? Thanks!!

    remote.PNG

  17. @phreeq, thanks so much for the nginx config. It's working great.

    I was wondering, though, what you meant by "# Make sure that your dns has a cname set for dokuwiki". Can you elaborate on the dokuwiki thing, or was that a leftover comment from a different template you copied as a base for this one?

    Also, can anyone confirm if the default behavior for token images is to point to "icons/svg/mystery-man.svg"? I see most of the token images point to this placeholder and I wasn't sure if this was proper default behavior or if I've misconfigured something. When I change the path to a new token, the token on the map changes but not the small profile image on the "Actors Directory" menu which lists all the currently imported tokens. 

    Thanks again!

  18. On 6/3/2020 at 5:39 AM, binhex said:

    this will most probably be a port conflict as vnc will use port 5900, and if you are running multiple of my ui enabled docker containers then you will also need to change the port that novnc runs on which is port 6080.

     

    so change the host port for both of the above to be unique for each container and they should all start fine then.

    This wasn't the issue as far as I can tell. I run this docker on port 6081, your krusader docker on 6082, and the Filebot was on 7813 and 7913. Your dockers don't conflict with each other's VNC - I can run both simultaneously. Not sure why Filebot broke, but for now I'm just using the Windows installed version and accessing files for renaming via shares. Maybe I'll figure it out someday :D

  19. I got this docker up and running, but it seems it has broken my FileBot and Krusader (djaydev) dockers. I can't access them after installing this docker. I think it has something to do with vnc, because the dockers come up, but I get the following error displayed: "server disconnected (code: 1006)". Additionally, the log file of Krusader was displaying "Not connected to D-Bus server". 

    I removed the djaydev Krusader and replaced with binhex's Krusader docker - that one works fine. There's no alternative to the FileBot image in community applications though. I removed it and re-installed it, but no dice. No other dockers I have seem to have been affected.

    Any advice on what is causing the conflict and how to resolve?

  20. I'm experiencing a strange problem. Whenever I try to play .m4a files through the browser (Chrome & Firefox), the transcode fails and playback does not start. The logs say:

    "DEBG 'plexmediaserver' stderr output:
    Jobs: failed with chdir, errno=2"

    However, if I use the plex android app, the files play fine. FLAC/mp3/etc formats work fine in browser and in app. What could be going wrong?