[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



6 hours ago, KoNeko said:

I didn't change anything in the template. Every docker has its own IP.

 

Even if it was a port problem, which it didn't say, it should not have removed the docker at all; it should just have stopped it.

 

I had reinstalled it and had it stopped, so it wasn't running. But when I just checked, it was removed again.

When a container is updated, unraid first pulls the new image, then stops the old container, deletes it, and creates a new container. If there is an error creating the container, it disappears from the list because it was never created. This is how unraid works, so there is nothing we can do about it.

 

You have to find out why the container can't be created again. Force update it and you will see the error. We can't help when you don't post logs or error messages.

Link to post

By default, after installing the letsencrypt docker, when accessing mydomain.com or *.mydomain.com (except a defined subdomain), you land on a page with the following message:

<div class="message">
                <h1>Welcome to our server</h1>
                <p>The website is currently being setup under this address.</p>
                <p>For help and support, please contact: <a href="me@example.com">me@example.com</a></p>
            </div>

Is there a way to return a 404 or 444 (or any error) for pages I have not defined, so they are totally inaccessible? For example, I defined cloud.mydomain.com under "letsencrypt\nginx\proxy-confs" to access my nextcloud docker, so only cloud.mydomain.com actually leads to a meaningful page.

 

What I want is for only cloud.mydomain.com to redirect to the expected site, while everything else returns an error page, as if you had typed a totally wrong URL, instead of the default HTML block shown above.

 

I tried to edit "nginx\site-confs\default", but that blocked all the pages and I couldn't access anything. Would anyone please give some advice? Thanks!

Edited by PzrrL
Link to post

Okay, I've been wrestling with Owncloud for the last 9 hours straight because I didn't suspect Let's Encrypt could actually have a derp.

 

I hate myself for updating.

It just dropped its job, i.e. providing a certificate for owncloud on a domain I own. I have been trying to set up a temporary owncloud, even tried switching to nextcloud (been thinking about it for a while now), but with either of them, the instant I restart them on the docker network so letsencrypt can proxy and cert the subdomain, it goes haywire.

Edited by Keexrean
Link to post
2 hours ago, Keexrean said:

Okay, I've been wrestling with Owncloud for the last 9 hours straight because I didn't suspect Let's Encrypt could actually have a derp. […]

If that is all the info you can give, there isn't much for us to go on.

Link to post
14 hours ago, PzrrL said:

By default, after installing the letsencrypt docker, when accessing mydomain.com or *.mydomain.com (except defined subdomain), we would land on a page with the following message: […]

Now, what I want is that only cloud.mydomain.com is redirecting to the expected site, but other thing else return error page […]

Create a new server block as a catch-all, set it as the default, and serve the 404 (or block the connection) there. That way, anything that doesn't match another server block will land on it.
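A minimal sketch of such a catch-all (assuming the image's stock ssl.conf supplies the certificate paths, and that the filename below is your choice; if another server block already carries the `default_server` flag, e.g. in site-confs/default, remove it there first or nginx will refuse to start with a "duplicate default server" error):

```
# /config/nginx/site-confs/catchall.conf (hypothetical filename)
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    # "_" matches any hostname not claimed by another server block
    server_name _;

    include /config/nginx/ssl.conf;

    # 444 closes the connection without sending a response;
    # use "return 404;" instead if you want a normal error page
    return 444;
}
```

Restart the container (or reload nginx inside it) to apply.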

Link to post
5 hours ago, aptalca said:

Create a new server block as a catch-all, set it as the default, and serve the 404 (or block the connection) there. That way, anything that doesn't match another server block will land on it.

Thanks for your reply. I'm sorry, I'm kind of a noob at nginx and not sure how to do this. Would you please guide me on how to do it, or tell me what I should search for on Google? Thanks!

Link to post

@saarg well, basically there wasn't much point detailing it at the moment because, trying stuff, I already overran the limit of 50 certificates/week, so I'm already effed and any kind of support I would get now would be basically useless.

 

For some reason it tried to renew the certificate many times despite it being valid, the domain and subdomains staying the same, and due to expire in 2034.


So now I'm on hold a week. Great.

 

Before you ask, my setup was functionally pretty similar to Spaceinvader One's nextcloud setup, but for owncloud.

docker network, edited owncloud's config.php, edited nextcloud's subdomain sample file into one that works for owncloud, using a no-ip domain to track my dynamic IP and a CNAME redirect to a subdomain of a domain I own; everything functionally the same.

 

I noticed that since the last update (or sometime since summer 2018 when I set up this whole thing) the default nextcloud.subdomain sample file changed.
old:

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;
        proxy_max_temp_file_size 2048m;
        proxy_pass https://$upstream_nextcloud:443;
    }

new:

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_max_temp_file_size 2048m;
    }

Replicating those changes in my custom owncloud.subdomain file yielded no result.

 

 

Chrome's reaction is just "cunnekshion not private, NET::ERR_CERT_AUTHORITY_INVALID", and the "read more" just says "You cannot visit censored.censored.dork right now because the website uses HSTS". So it's truly down for everyone who isn't the host of the whole thing (not CRITICAL, since only 3 people use it besides me, but widely bothering: it's their off-site backup and their way of accessing their data when not home, and one uses it to keep their game development sources backed up and versioned daily). No amount of Firefox, Opera, Vivaldi or Edge gives access.

 

I made a full appdata folder duplicate of the whole thing before starting to tinker with anything, obviously. Example of life: I killed an SSD by sending 12v into 5v just 2 hours ago, so yes, I do backups, since I'm a dork.

 

Thing is, it went from a configuration (again, nothing fancy, similar to what SIO did in a video a few months later) that worked flawlessly for 1 year and 10 months to something that doesn't work, after an update.

Edited by Keexrean
Link to post
1 hour ago, Keexrean said:

@saarg well basically there wasn't much point detailing it atm because, trying stuff, I already over-ran the limit of 50 certificates / week […]

So far, only thing that changed between when it worked and now, is updating letsencrypt.

If the cert was made using letsencrypt, it's not valid until 2034. It's valid for 90 days.
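If in doubt, openssl will print what a cert actually says. A sketch (the demo generates a throwaway 90-day self-signed cert so the command has something to inspect; for the real thing you would point `-in` at the live fullchain.pem inside the container's /config/keys/ folder instead):

```shell
# Generate a throwaway self-signed cert valid for 90 days
# (the same lifetime Let's Encrypt issues), then print its expiry.
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=demo.example.com" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

openssl x509 -enddate -noout -in /tmp/demo.crt
# prints something like: notAfter=Sep  1 12:00:00 2020 GMT
```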

 

If you had read the documentation on github, you would have seen that you can use the staging variable to test things without getting rate limited.
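As a sketch of where that variable goes (the flag syntax is for a docker run line; in the unraid template it is just an added Variable with the same name and value):

```
# environment for the letsencrypt container (illustrative):
-e STAGING=true
```

Certificates issued while staging is enabled come from Let's Encrypt's test CA and are not browser-trusted, so remove the variable and let the container reissue once the config validates.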

 

We don't watch spaceinvader videos, so we don't know your setup. If you want help, you need to supply information.

Link to post

Greetings, not really sure how to troubleshoot my problem, so any help is appreciated. I had letsencrypt set up with my registered domain and reverse proxies working with 3 docker containers (calibre-web is one). Sometime in the past couple of days it stopped working. I am getting a site-cannot-be-reached error, and port 443 is no longer showing open for me. Checking the letsencrypt logs I see this:

Quote

 

nginx: [emerg] unknown "upstream_protop" variable
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)

Repeated endlessly. I glanced at my pihole and saw that my unraid server has been spamming ocsp.int-x3.letsencrypt.org about 2500 times every 10 minutes since yesterday; that's about 4 times a second. Some googling on the LuaJIT error essentially tells me to ignore it and that letsencrypt still works fine.

 

My port forwarding has not changed, and I have not touched the configs for letsencrypt, etc. My best guess is that something stopped nginx from serving the sites, and thus the ports are not showing open. I am not sure where else to look to resolve this issue. Thanks.

 

Edit:
Found the problem: there was a typo (aka user error) in one of the conf files. I had replaced http with $upstream_proto and left the trailing p from http in place. Sorry for the distraction!
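The nginx emerg line names the unknown variable verbatim, so grepping the proxy confs for it points straight at the offending file. A sketch (it recreates a hypothetical broken conf under /tmp so the command has something to find; on a real setup you would grep appdata/letsencrypt/nginx/ instead):

```shell
# Recreate the typo described above: http was replaced by $upstream_proto,
# but the trailing "p" of "http" survived, yielding $upstream_protop.
mkdir -p /tmp/proxy-confs
printf 'proxy_pass $upstream_protop://$upstream_app:$upstream_port;\n' \
  > /tmp/proxy-confs/broken.subdomain.conf

# Grep for the exact variable name from the nginx error message.
grep -rn 'upstream_protop' /tmp/proxy-confs
# prints the file and line number containing the typo
```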

Edited by EvilNuff
Found my answer
Link to post

Okay, now it's been a week and I can renew certificates again. I had backups of the old configs; they're not working now, but they used to. I tried redoing my config, the one that held for 2 years, from scratch: not working either.

 

So I'm going to detail each step I took, both 2 years ago and 1 hour ago, which are basically the same with just name differences, plus an added docker, well supported and freshly installed, for reference, since my owncloud is a clusterfluke.


It's thus now a double setup of owncloud (my true install, tinkered with) and nextcloud (a bog-standard sqlite install with no modifications).

Owncloud and Nextcloud share a lot of code, and used to be one thing before the fork. My old Owncloud config for letsencrypt was basically a copy of nextcloud's sample config, tweaked slightly for it.

 

I went to my router, cleared the old port forwards, port forwarded host180 to 80 and host1443 to 443. I have multiple other services port-forwarded; they all work, so this should too. Port-probing tools show them open and not blocked by my ISP.
[screenshot: router port-forwarding rules]

On the host server, which you now know is called Procyon, I opened a console and ran:

    docker network create proxynet

 

Installing letsencrypt from scratch with these settings:

 

[Edited for post length and privacy]

 

Yes this mail address exists and works too.
 

Owncloud and Nextcloud are both restarted on this network:

 

[Edited for post length and privacy]

 

The xxx.fr domain belongs to me, through OVH. The nextcloud and ark-cloud subdomains are created (ark-cloud is 2 years old and never modified), CNAME'd to a no-ip dynamic subdomain, xxx.servehttp.com; pinging either xxx.xxx.net or xxx.noipdomain.com pings my actual IP.

[...]> nextcloud IN CNAME XXXX.servehttp.com.

[...]> ark-cloud IN CNAME XXXXX.servehttp.com.

 

Yes, I know, there is nothing on the main domain. A project that hasn't happened yet. Whatever.

 

 

nextcloud.subdomain.conf and owncloud.subdomain.conf, and their respective config.php files, are configured as follows:


appdata\letsencrypt\nginx\proxy-confs\nextcloud.subdomain.conf: (untouched on purpose, and valid in its default state)

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name nextcloud.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_max_temp_file_size 2048m;
    }
}

appdata\Tempnextcloud\www\nextcloud\config\config.php:

<?php
$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'datadirectory' => '/data',
  'instanceid' => 'XXXXXXXXXXXXXX',
  'passwordsalt' => 'XXXXXXXXXXXXXXXXXXXXXX',
  'secret' => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
  'trusted_domains' => 
  array (
    0 => '10.10.7.70',
    1 => 'xxx.servehttp.com',
    2 => 'nextcloud.xxx.fr',
  ),
  'datadirectory' => '/data',
  'trusted_proxies' => ['letsencrypt'],
  'overwrite.cli.url' => 'https://nextcloud.xxx.fr/',
  'overwritehost' => 'nextcloud.xxx.fr',
  'overwriteprotocol' => 'https',
  'dbtype' => 'sqlite3',
  'version' => '18.0.4.2',
  'installed' => true,
);

 

appdata\letsencrypt\nginx\proxy-confs\owncloud.subdomain.conf: (basically a derivation of the nextcloud file, and similar to the original I used)

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name ark-cloud.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app owncloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_max_temp_file_size 2048m;
    }
}

appdata\ownCloud\www\owncloud\config\config.php : (untouched for basically 2 years, just commenting stuff on and off when troubleshooting)

<?php
$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'filelocking.enabled' => true,
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => 
  array (
    'host' => '/var/run/redis/redis-server.sock',
    'port' => 0,
    'timeout' => 0.0,
  ),
  'instanceid' => 'XXXXXXXXX',
  'passwordsalt' => 'XXXXXXXXXXXXXXXXXXXXXXXXXX',
  'secret' => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
  'trusted_domains' => 
  array (
    0 => '10.10.7.70',
    1 => 'xxx.servehttp.com',
    2 => 'ark-cloud.xxx.fr',
  ),
  'datadirectory' => '/data',
  'trusted_proxies' => ['letsencrypt'],
  'overwrite.cli.url' => 'https://ark-cloud.xxx.fr/',
  'overwritehost' => 'ark-cloud.xxx.fr',
  'overwriteprotocol' => 'https',
  'dbtype' => 'mysql',
  'version' => '10.3.2.2',
  'dbname' => 'owncloud',
  'dbhost' => 'localhost',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'owncloud',
  'maintenance' => false,
  'dbpassword' => 'owncloud',
  'logtimezone' => 'UTC',
  'installed' => true,
  'loglevel' => 1,
  'theme' => '',
);

 

The names indeed correspond to the docker names, with no capitalized characters (they used to have caps, and worked with caps):

[screenshot: docker container list]

 

letsencrypt's console is clean and shows everything working as intended, like it did before the failure, and even while it was failing:

 

[Edited for post length and privacy]

 

No, I don't care about GeoIP.

 

Both my owncloud docker and my nextcloud 'reference' docker work perfectly when taken out of the proxynet and told to run insecure on the LAN by editing the config files.

They also both work in unencrypted mode when port-forwarded and accessed as xxx.servehttp.com:port, my no-ip.com subdomain.

 

Owncloud has been running in tandem with letsencrypt on the ark-cloud subdomain across multiple updates, 2 different routers, 1 host server migration, and several unraid updates without a single hassle, on the exact doppelganger of this config. I can even check (it's backed up) that it is a carbon copy of the one I just re-did, just with different network and docker names (yeah, I like to rename my stuff).

 

Then the authelia update came, and it just crapped itself.

 

https://ark-cloud.xxx.fr/index.php/login went:

Your connection is not private

Attackers might be trying to steal your information from ark-cloud.xxx.fr (for example, passwords, messages, or credit cards). Learn more

NET::ERR_CERT_AUTHORITY_INVALID

[ADVANCED]

Blabla, dumbed-down man-in-the-middle explanation from chromium

You cannot visit ark-cloud.xxx.fr right now because the website uses HSTS. Network errors and attacks are usually temporary, so this page will probably work later.

 

and the newly made for testing purposes nextcloud docker https://nextcloud.xxx.fr/ returns:

[screenshot: browser error page from nextcloud.xxx.fr]

 

 

Different computer, different browser, even a smartphone with Wi-Fi disabled, on mobile data: same result.

 

Here's where I'm at. Everything looks like it works. Everything shows up like it works. Everything has been working like this for almost two years. And now it doesn't. If it at least threw a bunch of errors at me, that I could deal with.


I went through the same setup process dozens of times for the sole purpose of it finally throwing an error at me, something I could chew on, something that would be a starting point.

 

So far, the only thing that changed between when it worked and now is updating letsencrypt.

Not even jumping versions; I applied updates at worst a week late, and now it's a from-scratch, freshly created new install, with no possible remnant of the past.

 

Where is something wrong? If something's wrong, it used to work as-is, but I'll hammer into myself that it's not okay anymore that way. At this point I HOPE something I did is wrong.

Or maybe OVH now requires the use of the DNS plugin? I don't know. Interesting behavior though: neither of the messages changes when I actually shut down all 3 containers. They still return the same invalid certificate and "didn't send any data", respectively. Yes, even after emptying the browser's cache, or trying from a computer that never visited them.

 


Edited by Keexrean
Link to post

If you get the same errors in the browser when the containers are shut down, the issue is your port forwarding, or your ISP has started blocking ports.

 

Your port forwarding looks backwards to me, but you didn't post more from the router, so it's hard to say. Logically the WAN port would be on the left side and the internal port on the right. Did you only choose from a list of computers, or did you type in the name of the server yourself? You might try the IP instead.

Edited by jonathanm
Quoted content removed by original author request
Link to post

Yeah, I know it looks backward, but this router wasn't built by geniuses.

[screenshot: router port-forwarding rule details]

[screenshot: router device list]
And I chose the computer from the list of devices I assigned a name and static IP to through the router's DHCP server.

 

Checking with websites like https://canyouseeme.org/ shows 80 and 443 as open. Again, it all went haywire after a letsencrypt update.

 

At the same time I also made modifications to the host (changing the network card, setting the right port to eth0, deleting its old self from the DHCP server, adding its new MAC, setting all its port forwarding back up, etc.), but other services work as before after this "migration".

So I'm not sure of anything anymore... maybe it's a fluke of communication between docker and the new network card? ...that makes no sense. Sad.

 

 

EDIT: I tried something I rarely do, and only out of frustration: I pulled the plug. On both the router and the server (after shutting down the VMs and important services properly, though).
Plugged them back in.

It actually made a difference... #$@&%*!#$@&%*!? 

 

So what changed? Well, looking from an IP other than mine (to avoid loopback), nextcloud now shows up! That's stupid, but that's the case!

Properly encrypted and all.

owncloud, though, displays the "Welcome to our server" letsencrypt message.

 

I swear, I feel like my computers are haunted sometimes, and out to mess with my nerves.

 

Edit²: I changed the owncloud docker's name to "Cloud" (yes, with a cap, like it was originally), edited owncloud.subdomain.conf accordingly, and restarted the thing as just a shoot-for-nothing attempt... it #$@&%*!ing worked... I'm not even happy it worked at this point...

 

 

SO! Well, that's fixed for everyone else at least! Up and working and all. But I used to be able to visit these addresses with no issues from devices connected to this network, and it seems like it won't let me now, apparently because of a loopback issue.

So I dug a little more. My ISP released a firmware update around the 20th of May... and another yesterday... #$@&%*. That means neither letsencrypt nor the migration was likely the cause, and I just spent time wrestling my dockers and config files for nothing... I did nothing wrong... yay, *underwhelmed happy noises*

 


Bug reports on their forums suggest the first firmware update was a buggy mess (the reason it all went haywire; since it was rolled out silently and only affected ports 80 and 443, and since it coincided with my migration and the letsencrypt update, I assumed the cause was letsencrypt or the network card swap and rule recreation). The second update fixed most issues, but neither has a working loopback on port 80... Their previous router had a lunatic loopback that worked on and off; I had it for 3 years and it worked most of the time for me. I upgraded to their latest router for 2Gbps down / 1.5Gbps up fiber (from 900Mbps down / 750Mbps up) in March... #$@&%*!

 

So basically my case here is solved; the rest I'll have to deal with through my ISP, joining the mob to pressure them into releasing less buggy firmware... or actually using my VPN subscription for something other than my qBittorrent docker, just to access my own services.

 

Thanks all anyways!

Edited by Keexrean
Link to post

Is there any way I can use multiple certificates if I have multiple domains? Right now I have 2 domains; for simplicity let's call them domain1 and domain2.
Domain1 is my personal stuff, while domain2 is my friend's domain, which I'm hosting for him from a VM. Right now both of them seem to be using the cert issued to domain1, but I want domain2 to have its own cert. Is that possible?

Link to post
3 hours ago, Ephoxia said:

Is there any way I can use multiple certificates if I have multiple domains? Right now I have 2 domains; for simplicity let's call them domain1 and domain2.
Domain1 is my personal stuff, while domain2 is my friend's domain, which I'm hosting for him from a VM. Right now both of them seem to be using the cert issued to domain1, but I want domain2 to have its own cert. Is that possible?

No.

Link to post
On 11/7/2016 at 9:56 PM, linuxserver.io said:


 

Application Name: Letsencrypt (Nginx)

Application Site: https://letsencrypt.org/ https://www.nginx.com/

Docker Hub: https://hub.docker.com/r/linuxserver/letsencrypt/

Github: https://github.com/linuxserver/docker-letsencrypt

 

Please post any questions/issues relating to this docker you have in this thread.

 

If you are not using Unraid (and you should be!) then please do not post here, instead head to linuxserver.io to see how to get support.

Hi, noob user here,

I am currently using letsencrypt as a docker within my unraid server; for the most part it is working fine.

The current issue is from my logs. I believe it is a note letting me know that I can add a GeoIP2 license if I add an environment variable:

 

 

“Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind,
and add a new env variable “MAXMINDDB_LICENSE_KEY”, set to your license key.”

 

 

I would like to use the feature, so I signed up with MaxMind and, filtering through with my limited knowledge, managed to generate a license.

 

This is where I got stuck, for the firm reason that I don't want to break what is seemingly working for the most part.

I want to add the env variable, so I opened a console to the letsencrypt docker and opened the environment text file in nano. But after searching the internet to learn how to add it, I found several ways to do this, which confused me. I am hoping I could get a little help with the correct syntax to enable the feature, as I don't want to break anything.

 

 

I always say any help would be greatly appreciated, so: any help would be greatly appreciated.

Edited by Danietech
Link to post
6 hours ago, Danietech said:

Hi, noob user here. I am currently using letsencrypt as a docker within my unraid server […] I want to add the env variable […] I am hoping I could get a little help with the correct syntax to enable the feature, as I don't want to break anything.

Just edit the container settings in the unraid gui and add a variable.

Link to post

Hi all

 

I was trying to set up nextcloud / letsencrypt via Spaceinvader's videos.

 

I ran into some problems. When I add the line 'overwritehost' => 'MYSUBDOMAIN.*DNS*.org',

the nextcloud webui (local on the unraid server) stops working.

 

I can't get to it from the website "https://MYSUBDOMAIN.*DNS*.org" either.

Letsencrypt generates a certificate.

 

The dns update docker seems to work fine; it says it updated the IP.

 

I can't get to the nextcloud setup (webui) from the internal network; via smartphone there are no problems (internally).

 

Can you please help me? I have reinstalled it 3-5 times and I can't see why it isn't working.

Link to post
7 hours ago, WoooW said:

I was trying to set up nextcloud / letsencrypt via Spaceinvader's videos […] I can't get to the nextcloud setup (webui) from the internal network […] I have reinstalled it 3-5 times and I can't see why it isn't working.

To do list:

 

Method of attack 1: copy the file you are about to edit, so you can revert if it goes wrong.

 

Method of attack 2: use a good text editor when editing and saving your config files.

 

If you notice that the file you are editing differs from the tutorial examples you have already watched (e.g. colour differences in the text), that can indicate where you have entered incorrect syntax.

 

Text editors, if you have not already sourced them, are: TextMate (Mac) and Notepad++ (Windows).

 

Basic checks: are the correct spaces and characters used in your commands, making your syntax 100%? Rule of thumb: check, check, check again.

 

Note: if the subdomain is your own, have you given your domain provider enough time to set up your CNAME? Without this knowledge you can drive yourself mad wondering why it's not working when a little coffee time is needed (up to 24 hours sometimes), "even if the DNS pings back to your IP 🙂".

 

If you are not able to open the docker locally, it's time to check your IPs and ports. Are all your IPs aligned with the custom proxy network? For example:

 

custom proxynet IP range 192.168.1.100-200;

 

The dockers have to be on the custom docker network so they can see each other.

 

Nextcloud should be on 192.168.1.101

Letsencrypt should on 192.168.1.102

 

For the most part I am guessing at your issues, but these are some of the things that catch me out. That said, it's nice to get a fresh pair of eyes, from one noob to another. It would be more helpful if you put your config online so it could be spotted by others; it's the stuff between the lines that catches out the up-and-coming and even the aficionados.

 
Link to post
On 6/5/2020 at 8:30 PM, Danietech said:

thanks?

To add this variable to a docker container, do the following.

 

  • Edit the container
  • Switch view to advanced at top right
  • In "extra parameters", add the following text: 
-e MAXMINDDB_LICENSE_KEY=[type key here, no brackets]
  • The "-e" means it's an environment variable
  • The next thing after that is the name of the variable
  • The value of the variable is then set to the right of the equals sign.

I'm also really new to Docker containers and have been figuring this out on the fly.
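As a sketch of the equivalent plain docker run line (everything here except the -e flag is illustrative, not a full invocation of the container):

```
docker run -d --name=letsencrypt \
  -e MAXMINDDB_LICENSE_KEY=<your-license-key> \
  linuxserver/letsencrypt
```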

Link to post
25 minutes ago, bigmak said:

To add this variable to a docker container, do the following: […] -e MAXMINDDB_LICENSE_KEY=[type key here, no brackets] […]

Or you can just click "Add another Path, Port, Variable, Label or Device" at the bottom of the template, choose Variable, and fill out the fields.

Link to post
