[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



2 minutes ago, alturismo said:

assuming your unraid server is running on ip 10.0.0.195

 

you need to forward external port 80 to internal 180, and external 443 to internal 1443,

 

this looks more like a simple 180 forward rule, so you could reach your swag only via http://whatever.mydomain.com:180 or http://whatever.mydomain.com:1443, which is probably not what you want ... and it won't work, as swag wants to generate cert(s) and needs both http and https

 

for usage like https://whatever.yourdomain.com

 

as described in the readme and many times here in this thread, so check how port forwarding works on your router and forward like this:

 

[screenshots: example port forwarding rules]

Got it! Unfortunately, Comcast has removed full management access on the freebie router they provide, so now you can only create a simple forward rule from port X to port X, NOT port X to port Y.

 

I have forwarded 443 on my router and it now seems to be working right. I think I'll just buy a proper router I can manage and ditch the Comcast-supplied one.

 

Thanks!

Link to comment
3 minutes ago, Lygris said:

When you try to go to subdomain.duckdns.org, that points to your router on port 80 or 443, so you need to forward 80 and 443 from your router to your docker host at ports 180 and 1443.

 

From a quick Google search it doesn't look like your Comcast router supports port forwarding to different destination ports, so you need to put the container on a custom network. It looks like you already did that, but didn't assign an IP. Once you assign an IP to the container, you just forward 80 and 443 on the router to that IP address.

I'm still learning and loving it, lol. I didn't realize I could set up a unique IP address for the docker container and use that for navigation/forwarding. I'll try that out. Thanks!

Link to comment
5 minutes ago, moose1207 said:

I'm still learning and loving it, lol. I didn't realize I could set up a unique IP address for the docker container and use that for navigation/forwarding. I'll try that out. Thanks!

You will have to enable bridging on the Network Settings page for your interface, set the "IPv4 custom network on interface brX" option in Docker settings to your internal network space, and then assign the container an IP in your internal range. At that point it will appear just like any other device connected to your network, and you can use port 80 since it's on a different IP than unraid is using.

 

 

Another option would be to change the port unraid uses for the webgui to something else so the container can use 80, but the bridge network approach is the one I prefer.

Edited by Lygris
Link to comment

I noticed this while checking the swag logs: why does the cert use the most recent subdomain listed in the container? The last subdomain I added was read (read.matchettmedia.com), and now that is being used as the certificate name.

 

[screenshot of the swag log]

 

Can I change this to a wildcard cert (*.matchettmedia.com), or is this how it's designed?
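From the swag docs, a wildcard cert requires switching to dns validation with SUBDOMAINS set to wildcard. A sketch of the relevant container variables (the cloudflare plugin here is an assumption; use the plugin matching your DNS provider):

```
URL=matchettmedia.com
SUBDOMAINS=wildcard
VALIDATION=dns
DNSPLUGIN=cloudflare   # assumption: pick the plugin for your DNS provider
```

The container then requests *.matchettmedia.com via a DNS TXT challenge instead of http validation.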

Link to comment

I'm new to the whole reverse proxy/letsencrypt thing. So this might be a really dumb question. Sorry in advance. :)

 

Is there any way to have SWAG point to other computers on my network? For instance, if I want to install a Bitwarden server on an rpi, is there a way to have SWAG redirect traffic from some subdomain (like bitwarden.mydomain.com) to the rpi?

 

And, if not, what is the correct way to have multiple computers behind a reverse proxy on my network?

Link to comment
15 hours ago, Hollandex said:

Is there any way to have SWAG point to other computers on my network? For instance, if I want to install a Bitwarden server on an rpi, is there a way to have SWAG redirect traffic from some subdomain (like bitwarden.mydomain.com) to the rpi?

Short answer: yes. Just replace the container name with the IP and look up the proper port, that's it.
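A minimal sketch for that rpi case, modeled on the shipped subdomain samples (the server_name, the LAN IP 192.168.1.50 and port 80 are placeholders for your own values):

```nginx
# e.g. /config/nginx/proxy-confs/bitwarden.subdomain.conf (illustrative)
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name bitwarden.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        # a LAN IP instead of a container name; no docker resolver needed
        set $upstream_app 192.168.1.50;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```

Since the upstream is a plain IP rather than a container name, the include of resolver.conf can be dropped.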

Link to comment

Solved!!!!! (fix at the bottom of this post)

 

HELP

i followed the SpaceInvader One tutorial on how to use a reverse proxy, but I only get 502 Bad Gateway

everything else works fine, and my swag container is getting its certificate

 

my settings 

## Version 2021/05/18
# make sure that your dns has a cname set for nextcloud
# assuming this container is called "swag", edit your nextcloud container's config
# located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
#  'trusted_proxies' => ['swag'],
#  'overwrite.cli.url' => 'https://nextcloud.your-domain.com/',
#  'overwritehost' => 'nextcloud.your-domain.com',
#  'overwriteprotocol' => 'https',
#
# Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
#  array (
#    0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
#    1 => 'nextcloud.your-domain.com',
#  ),

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name cloud.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_max_temp_file_size 2048m;
    }
}

I changed the server name from nextcloud to cloud because of my domain name; it's cloud.xxxxxxxxx.com

 

<?php
$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'datadirectory' => '/data',
  'instanceid' => 'xxx',
  'passwordsalt' => 'xxx',
  'secret' => 'xxx',
  'trusted_domains' => 
  array (
    0 => '192.168.178.120:444',
    1 => 'cloud.xxx-xxx.com',
    ),
  'dbtype' => 'mysql',
  'version' => '20.0.2.2',
  'trusted_proxies' => ['swag'],
  'overwrite.cli.url' => 'https://cloud.xxx-xxx.com/',
  'overwritehost' => 'cloud.xxx-xxx.com',
  'overwriteprotocol' => 'https',
  'dbname' => 'nextcloud',
  'dbhost' => '192.168.178.120:3306',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'NextCloud',
  'dbpassword' => 'xxx-xxx-xxx',
  'installed' => true,
  'app_install_overwrite' => 
  array (
    0 => 'keeporsweep',
    1 => 'blockscores',
  ),
  'twofactor_enforced' => 'true',
  'twofactor_enforced_groups' => 
  array (
    0 => 'Gebruikers',
    1 => 'admin',
  ),
  'twofactor_enforced_excluded_groups' => 
  array (
  ),
  'maintenance' => false,
);

I am really lost over here; I just don't know what to do now.

 

swag log

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d cloud.xxx-xxx.com
E-mail address entered: xxx@xxx.com
http validation is selected
Certificate exists; parameters unchanged; starting nginx
Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind,
and add a new env variable "MAXMINDDB_LICENSE_KEY", set to your license key.
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing...
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 70-templates: executing...
**** The following nginx confs have different version dates than the defaults that are shipped. ****

**** This may be due to user customization or an update to the defaults. ****
**** To update them to the latest defaults shipped within the image, delete these files and restart the container. ****
**** If they are user customized, check the date version at the top and compare to the upstream changelog via the link. ****

/config/nginx/ssl.conf
/config/nginx/site-confs/default
/config/nginx/proxy.conf
/config/nginx/nginx.conf
/config/nginx/authelia-server.conf
/config/nginx/authelia-location.conf

[cont-init.d] 70-templates: exited 0.
[cont-init.d] 90-custom-folders: executing...
[cont-init.d] 90-custom-folders: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready

[screenshots: nextcloud container settings, swag container settings, and the resulting error page]

 

 

If someone knows how to get out of this, that would be really nice. Thank you so much in advance.

 

With kind regards, Pascal

 

 

 

 

 

FIXED IT!!!

 

The problem was that the swag container had already run before I assigned a custom network, so the /config/nginx/resolver.conf file had the wrong resolver in it (in my case 8.8.8.8 8.8.4.4, Google DNS). If you have this problem, delete the file and restart the container so it is regenerated.
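For reference, on a user-defined docker network the regenerated file should point at Docker's embedded DNS rather than an external resolver; it ends up looking something like this (exact contents may vary between image versions):

```nginx
# /config/nginx/resolver.conf, regenerated after the container restart
resolver 127.0.0.11 valid=30s;
```

That embedded resolver (127.0.0.11) is what lets nginx turn container names like "nextcloud" into IPs.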

Edited by SnootyPascal
solved
Link to comment

Sorry if this question has already been asked, but I couldn't find it.

I am trying to get my Home Assistant setup to work. Home Assistant runs as a VM with the static IP 192.168.188.35 and is completely reachable from the LAN.

I set up my homeassistant.subdomain.conf:

## Version 2021/07/13
# make sure that your dns has a cname set for homeassistant and that your homeassistant container is not using a base url

# As of homeassistant 2021.7.0, it is now required to define the network range your proxy resides in, this is done in Homeassitants configuration.yaml
# https://www.home-assistant.io/integrations/http/#trusted_proxies
# Example below uses the default dockernetwork ranges, you may need to update this if you dont use defaults.
#
# http:
#   use_x_forwarded_for: true
#   trusted_proxies:
#     - 172.16.0.0/12

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name homeassistanturl.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app 192.168.188.35;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location /api {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app 192.168.188.35;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }

    location /local {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app 192.168.188.35;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}

and in Home Assistant the configuration.yaml:

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 192.168.188.10
    - 172.18.0.4

where 192.168.188.10 is the IP of my unraid server (and hence the IP the docker ports map to) and 172.18.0.4 is the IP of swag inside docker.

Settings in Docker:

172.18.0.4:443/TCP -> 192.168.188.10:1443
172.18.0.4:80/TCP -> 192.168.188.10:180

 

But when I try to access Home Assistant from the web I get "ERR_CONNECTION_REFUSED".

What am I doing wrong?

Thank you all in advance

Link to comment

I'm using swag for nextcloud. It works great when I connect from an external network, but when I'm on the same network the connection times out. I followed the tutorial by spaceinvader one (https://youtu.be/I0lhZc25Sro) and am using duckdns.org for my WAN IP. I used the subdomains approach with duckdns.org and then set up my own domain. Both work externally, but time out when I'm on the same LAN as my unRAID server.

 

I'm guessing it is a DNS issue, but that is beyond me. I haven't been able to find anything to fix this issue, nor have I seen anything in the logs to troubleshoot. Let me know what other information you need to diagnose this issue.

Link to comment
3 hours ago, sonic6 said:

@bat2o if it works externally and you try to access your nextcloud from the same LAN with your DuckDNS URL then maybe DNS-Rebind Protection / DNS Hairpinning (nat loopback) on your router/firewall can help you.

 

Thanks @sonic6. I also found that this approach was recommended earlier in this forum too. Thanks for the pointer.

 

Looks like I might need to get a new router, because I cannot see a way to set this up on my system and couldn't find any recommendations on the web. My setup is a Deco TP-Link mesh network behind a technicolor C2100T modem/router. Both of these are quite limited. I'll keep searching.

 

I had to use this setup for SWAG because centurylink's routers don't let you change the incoming WAN port to a different LAN port (443 -> 1443), unless you can identify the incoming/remote IP. For my SWAG setup I forward 443 (C2100T) to the mesh network (Deco TP-Link) router, and there I'm able to port forward 443 to 1443.

 

Since I couldn't find a way to turn on NAT loopback on either router, I tried a different approach: I assigned all the dockers a specific LAN IP (192.168.0.xx) so I can create local DNS records through Pi-hole. I had to give them their own LAN IPs since it wouldn't let me use ports. This worked for some dockers, but not for nextcloud, which gave an internal error.
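For anyone trying the same Pi-hole approach: its Local DNS Records are plain dnsmasq host mappings underneath, along the lines of the following (domain and IP are placeholders):

```
# e.g. a custom dnsmasq conf on the Pi-hole host
address=/nextcloud.mydomain.com/192.168.0.50
address=/swag.mydomain.com/192.168.0.50
```

Every LAN client using Pi-hole as its DNS server then resolves those names to the LAN IP instead of the WAN IP, sidestepping the missing NAT loopback.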

 

[screenshot of the nextcloud internal error]

 

Link to comment

Hi all,

 

Just started getting this error with the container. Any ideas? No changes were made to the container aside from periodic updates.

 

cronjob running on Sun Aug 1 13:18:41 CDT 2021
Running certbot renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/xxx.duckdns.org.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Renewal configuration file /etc/letsencrypt/renewal/xxx.duckdns.org.conf is broken.
The error was: expected /etc/letsencrypt/live/xxx.duckdns.org/cert.pem to be a symlink


.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
No renewals were attempted.
No hooks were run.

Additionally, the following renewal configurations were invalid:
/etc/letsencrypt/renewal/xxx.duckdns.org.conf (parsefail)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
0 renew failure(s), 1 parse failure(s)
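For context on the "expected ... to be a symlink" message: certbot requires every file under /etc/letsencrypt/live/&lt;domain&gt;/ to be a symlink into the matching archive/ directory, and a regular file there (for example from a copy-based backup restore) makes the renewal conf parse as broken. A throwaway sketch of the expected layout (paths are illustrative, not your real config):

```shell
# Build the live/ -> archive/ layout certbot expects, in a temp dir
root=$(mktemp -d)
mkdir -p "$root/archive/example.duckdns.org" "$root/live/example.duckdns.org"
touch "$root/archive/example.duckdns.org/cert1.pem"
# entries under live/ must be symlinks into archive/, never plain files
ln -s "../../archive/example.duckdns.org/cert1.pem" \
      "$root/live/example.duckdns.org/cert.pem"
ls -l "$root/live/example.duckdns.org/"
```

The same applies to privkey.pem, chain.pem and fullchain.pem in a real config folder.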

 

Link to comment
17 hours ago, bat2o said:

Since I couldn't find a way to turn on NAT loopback on either router, I tried a different approach: I assigned all the dockers a specific LAN IP (192.168.0.xx) so I can create local DNS records through Pi-hole. I had to give them their own LAN IPs since it wouldn't let me use ports. This worked for some dockers, but not for nextcloud, which gave an internal error.

I think this relates to the spaceinvader guide and the rewrite from the nextcloud IP to the WAN URL inside the nextcloud conf.

Link to comment

Hello, 

I have successfully set up the SWAG docker on one of my unraid servers and it has been working well for some time. I set up another unraid server on which I would like to run some dockers, separate from my current server.

 

These servers are on the same network/subnet
I have added the port forwarding rules and firewall rules 


I added swag to this new server and attempted to get the cert for the subdomain I would like to use on the second server, however I always receive the following error:

 

Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
Domain: *****************
Type: unauthorized
Detail: Invalid response from https:*************/.well-known/acme-challenge/GDsJPauIBpmR07lLXweaxJDIqW3wgFA10Fd3dKSUr1w [WAN IP ADDRESS]: " <html>\n <head>\n <title>Welcome to our server</title>\n <style>\n body{\n "

Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.


Some challenges have failed.

Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container


I can ping this domain (DNS) from the internet and it replies as well.

If I add this same subdomain on the SWAG docker I already have set up, it gets the cert with no issues... the problem with that is I cannot point it to the docker on the other server...

 

I cannot figure out why it is not working; I have tried different ports, rules, etc., and nothing seems to work.
Can I have 2 separate SWAG docker instances running on the same network/subnet? I don't think there is a way to use my existing SWAG docker to point to another docker container on another unraid server.

Going to have to set it up on my server already running swag until I can figure this out or someone can assist.

Any ideas?

 

Edited by bombz
Link to comment
On 7/23/2021 at 10:24 PM, Hollandex said:

I'm new to the whole reverse proxy/letsencrypt thing. So this might be a really dumb question. Sorry in advance. :)

 

Is there any way to have SWAG point to other computers on my network? For instance, if I want to install a Bitwarden server on an rpi, is there a way to have SWAG redirect traffic from some subdomain (like bitwarden.mydomain.com) to the rpi?

 

And, if not, what is the correct way to have multiple computers behind a reverse proxy on my network?

+1
I would like to know this too!

To be able to point SWAG running on unraid -> another unraid server running other dockers

Link to comment
36 minutes ago, bombz said:

+1
I would like to know this too!

To be able to point SWAG running on unraid -> another unraid server running other dockers

As answered before: of course. Just add the IP and port corresponding to the service you like; they don't have to be on the same machine ...

 

you can use it for almost anything, like the web frontend of your router etc. ... but please be careful about what you expose, security !!!

Link to comment
24 minutes ago, alturismo said:

As answered before: of course. Just add the IP and port corresponding to the service you like; they don't have to be on the same machine ...

 

you can use it for almost anything, like the web frontend of your router etc. ... but please be careful about what you expose, security !!!

Cool. 
Is there a step-by-step guide that explains this? I've been trying to point my SWAG at another docker service / system.

I have been trying to set up another swag instance on another unraid server without success. Not sure if I can run 2 instances on the same network on 2 different servers.

Edited by bombz
Link to comment

You could do so, but it would be pointless, as you want swag to listen for remote traffic and serve internal service(s).

 

For a sample, check your unraid appdata folder with the templates; there is a config sample for subdomain and subfolder, just adjust the IP, port, ...

Link to comment
1 hour ago, alturismo said:

You could do so, but it would be pointless, as you want swag to listen for remote traffic and serve internal service(s).

 

For a sample, check your unraid appdata folder with the templates; there is a config sample for subdomain and subfolder, just adjust the IP, port, ...

Not a bad idea.
I will keep that in mind.

I went another route, since the second server is old hardware that I am using solely for storage:
01. Create an NFS share
02. Set up the docker on the primary NAS with the host path pointing to the other NAS's NFS share
03. Follow the guide to set up the docker on the primary NAS
04. Done

Why I couldn't get SWAG on the secondary NAS to pull certs is beyond me.

Thanks!

Link to comment

I am not getting my certs issued in my docker on my unRAID machine.

The server itself is at 192.168.2.*; I have already forwarded ports 80/443 to 180/1443 and I get the following output when running my docker image:

 

-------------------------------------


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=Europe/Berlin
URL=xxx.nl
SUBDOMAINS=serverb,serverd,servers,serverso,
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=http
CERTPROVIDER=
DNSPLUGIN=
EMAIL=xxx@gmail.com
STAGING=false

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d serverb.xxx.nl -d serverd.xxx.nl -d servers.xxx.nl -d serverso.xxx.nl
E-mail address entered: xxx@gmail.com
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for serverb.xxx.nl and 3 more domains

Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:

Domain: serverd.xxx.nl
Type: connection
Detail: Fetching http://serverd.xxx.nl/.well-known/acme-challenge/2lXEHSD3cpIQLhVQrkg1cIx9JyRiPqn9-nAccQ6p9nI: Timeout during connect (likely firewall problem)

Domain: servers.xxx.nl
Type: connection
Detail: Fetching http://servers.xxx.nl/.well-known/acme-challenge/RNAtDuspfWv8neRJm1AI7AqBeuAd3TiecHGh5T05Q-A: Timeout during connect (likely firewall problem)

Domain: serverso.xxx.nl
Type: connection
Detail: Fetching http://serverso.xxx.nl/.well-known/acme-challenge/jC4OeIIX1u4oLEIpuTW7zwbchHE-K8ZMsPjWGGs53oY: Timeout during connect (likely firewall problem)

Domain: serverb.xxx.nl
Type: connection
Detail: Fetching http://serverb.xxx.nl/.well-known/acme-challenge/1WoF3A9D-U-UW0KsWi-lWCQjUaQi7RotiKmKN7mNYuA: Timeout during connect (likely firewall problem)

Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.





Some challenges have failed.

Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

 

 

The domain has a CNAME to my DuckDNS subdomain, and DuckDNS is set to the public IP of my unRAID box.
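In zone-file terms, the chain described there should resolve like this (mysub is a placeholder for the actual DuckDNS name):

```
serverb.xxx.nl.      CNAME   mysub.duckdns.org.
mysub.duckdns.org.   A       <public IP of the unRAID box>
```

If any of the serverX names lacks its CNAME, http validation for that name fails even though the others resolve.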

What am I doing wrong when setting up my swag container?

I followed most of this video on YT:

 

 

and I still don't get it working...

I hope someone can help me out with this one, because I am a noob regarding docker images, Let's Encrypt and unRAID... :S

Edited by rikdegraaff
Link to comment
