[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


Recommended Posts

As with someone else, I would like to request some conf.sample files for the LinuxServer Ubooquity container.

 

I am trying to set it up with DuckDNS validation and am utterly failing to make it work. I've tried everything that used to work for me, and that I have found in this topic and the Ubooquity topic that pertain to it.
 

Link to comment
3 hours ago, smashingtool said:

As with someone else, I would like to request some conf.sample files for the LinuxServer Ubooquity container.

 

I am trying to set it up with DuckDNS validation and am utterly failing to make it work. I've tried everything that used to work for me, and that I have found in this topic and the Ubooquity topic that pertain to it.
 

This works here for a subfolder setup.

You need to run it in a custom docker bridge, as with all the rest of our reverse proxy configs.


    # subfolder reverse proxy for Ubooquity (container name "ubooquity", default library port 2202)
    location ^~ /ubooquity {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;   # docker's embedded DNS, resolves the container name
        set $upstream_ubooquity ubooquity;
        proxy_pass http://$upstream_ubooquity:2202;
    }

    # the Ubooquity admin UI listens on its own port (2203 by default)
    location ^~ /ubooquity/admin {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_ubooquity ubooquity;
        proxy_pass http://$upstream_ubooquity:2203;
    }
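
If you don't have a custom bridge yet, creating one and attaching the containers looks something like this (the network name is just an example, later posts in this thread use "proxynet"; the container names are whatever yours are called):

    docker network create proxynet
    docker network connect proxynet letsencrypt
    docker network connect proxynet ubooquity

On Unraid you would normally just set the container's Network Type to the custom network in its template instead of running docker network connect by hand.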

 

Link to comment

Thanks to those who helped! After a couple more hours of testing and remembering to turn off advanced docker settings to see the logs, I realized that I couldn't get certs because by default all my traffic goes out through the VPN. Once I created a WAN rule for the letsencrypt docker it all worked great. Thanks again! 

Link to comment

See attachment... on a scale of 1 to 10, how bad of an idea is this?

This exposes Unraid to the internet, which is not normally the case.

It adds HTTPS, but relies solely on whatever protections are built into Unraid.

It seems to work, but opening terminal windows (either to the host or to docker containers) doesn't.

 

Is there a better way to do this?

 

I looked at Serveo but cannot wrap my head around how there is any security there.  I think you're trusting someone else to be a man in the middle.

unraid.subdomain.conf.sample

Edited by eric.frederich
Link to comment
8 hours ago, trurl said:

11

Yeah.  Turns out you don't need to rely on anyone else's infrastructure.

If you have any machine on your home network which is accessible via SSH, just do something like this:

ssh -L 9000:10.10.1.99:80 home-computer

Where 10.10.1.99 is the local IP of your Unraid server and home-computer is something you have set up in ~/.ssh/config to connect to your home machine.

Then I can just point my browser at http://localhost:9000 and everything seems to work.
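
For completeness, the ~/.ssh/config entry behind that alias would look something like this (the host name and user are placeholders):

    # "home-computer" is the alias used in the ssh -L command above;
    # HostName and User are placeholders for your own setup
    Host home-computer
        HostName your-home-ip-or-ddns-name
        User youruser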

Link to comment

Hi,

 

Finally had enough of manually creating certs for my server, so I bought a throwaway domain and pointed its NS records to Cloudflare (for reasons, I don't want to use my personal domains with CF). The cert gets created 👍

 

My Unraid webUI is only available on my local network but resolves via server.domain.tld

 

The one thing left to do now is to copy priv-fullchain-bundle.pem over to \flash\config\ssl\certs so I can access the Unraid webUI with it. How would I do this?
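
(Just a sketch of that copy, not from the thread: the flash share is mounted at /boot on Unraid, and the source path assumes the default letsencrypt /config mapping under appdata.)

    # adjust the appdata path to match your own /config mapping
    cp /mnt/user/appdata/letsencrypt/keys/letsencrypt/priv-fullchain-bundle.pem /boot/config/ssl/certs/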

Edited by Aluavin
Link to comment

Good Evening,

 

I am trying to get a reverse proxy to allow me to use Sonarr and Radarr from outside my LAN. I can get Sonarr and Radarr set up correctly, yet the connections to Deluge and Jackett fail.

 

I am following this tutorial here: https://www.youtube.com/watch?v=I0lhZc25Sro

 

and have included both Deluge and Jackett in the letsencrypt settings. Whenever I try to test an indexer I get a ReadDoneAsync2 error:

 

[screenshot: Sonarr indexer test error]

 

Any help would be appreciated as I have been trying to solve this for hours.

Link to comment
19 minutes ago, edmoisaa said:

Good Evening,

 

I am trying to get a reverse proxy to allow me to use Sonarr and Radarr from outside my LAN. I can get Sonarr and Radarr set up correctly, yet the connections to Deluge and Jackett fail.

 

I am following this tutorial here: https://www.youtube.com/watch?v=I0lhZc25Sro

 

and have included both Deluge and Jackett in the letsencrypt settings. Whenever I try to test an indexer I get a ReadDoneAsync2 error:

 

[screenshot: Sonarr indexer test error]

 

Any help would be appreciated as I have been trying to solve this for hours.

Don't expect us to watch videos to figure out what you may or may not have done. Post what you did, your config, full logs, etc., and then we can try to figure out what went wrong.

Link to comment
7 hours ago, dkerlee said:

Wondering if it's possible to set this up without being able to forward external ports to different internal ports, as Spaceinvader One suggests port 80 > 180, 443 > 1443. My crap-tastic router doesn't support it! So, will I be able to get my Nextcloud visible outside my LAN? thanks!

Yes, it's possible. You will have to ensure 80 and 443 aren't already in use on the IP you are using for this docker.
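
One way to do that (my own sketch, not something confirmed above): give the letsencrypt container its own LAN IP on Unraid's br0 custom network, so 80/443 on that IP are free even though the router can't remap ports. The IP, URL, subdomain and validation values below are placeholders.

    # placeholders: adjust IP, URL, SUBDOMAINS, VALIDATION and the appdata path to your setup
    docker run -d --name=letsencrypt --net=br0 --ip=192.168.1.200 \
      --cap-add=NET_ADMIN \
      -e PUID=99 -e PGID=100 \
      -e URL=yourdomain.com -e SUBDOMAINS=nextcloud -e VALIDATION=http \
      -v /mnt/user/appdata/letsencrypt:/config \
      linuxserver/letsencrypt

With the container on its own IP, the router just forwards WAN:80 -> 192.168.1.200:80 and WAN:443 -> 192.168.1.200:443, no port remapping needed.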

Link to comment
17 hours ago, aptalca said:

Don't expect us to watch videos to figure out what you may or may not have done. Post what you did, your config, full logs, etc., and then we can try to figure out what went wrong.

My indexer settings simply require copying the Torznab feed into the URL and setting the API key correctly (which was done). I have also tried changing ports, removing letsencrypt and resetting my containers (with no luck, the error still persists), so it has simply messed up my whole docker library. All tutorials for letsencrypt perform simple changes that just seem to work on their specific configs. This error also occurs when connecting to a download client (in my case Deluge), where a similar ReadDoneAsync2 error appears, so it is something with Sonarr.

 

I cannot post the whole config for Sonarr, however here is the error, docker and Sonarr indexer info:

 

Error Log:

19-3-28 15:41:43.0|Warn|Torznab|Unable to connect to indexer [v3.0.1.418] System.Net.WebException: Error getting response stream (ReadDoneAsync2): ReceiveFailure: 'http://192.168.1.135:9111/api/v2.0/indexers/ettv/results/torznab/api?t=caps&apikey=(removed) ---> System.Net.WebException: Error getting response stream (ReadDoneAsync2): ReceiveFailure at System.Net.WebResponseStream.InitReadAsync (System.Threading.CancellationToken cancellationToken) [0x000f3] in <f74b9f2fea8d47a7922726600e468191>:0 at System.Net.WebOperation.Run () [0x001d9] in <f74b9f2fea8d47a7922726600e468191>:0 at System.Net.WebCompletionSource`1[T].WaitForCompletion () [0x00094] in <f74b9f2fea8d47a7922726600e468191>:0 at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task`1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func`1[TResult] aborted, System.Threading.CancellationTokenSource cts) [0x000f8] in <f74b9f2fea8d47a7922726600e468191>:0 at System.Net.HttpWebRequest.GetResponse () [0x00016] in <f74b9f2fea8d47a7922726600e468191>:0 at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x0011b] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:74 --- End of inner exception stack trace --- at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x001c3] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:102 at NzbDrone.Common.Http.Dispatchers.FallbackHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x000b5] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\FallbackHttpDispatcher.cs:53 at NzbDrone.Common.Http.HttpClient.ExecuteRequest (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookieContainer) [0x0007e] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:121 at NzbDrone.Common.Http.HttpClient.Execute (NzbDrone.Common.Http.HttpRequest request) [0x00008] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:57 at NzbDrone.Common.Http.HttpClient.Get (NzbDrone.Common.Http.HttpRequest request) [0x00007] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:264 at NzbDrone.Core.Indexers.Newznab.NewznabCapabilitiesProvider.FetchCapabilities (NzbDrone.Core.Indexers.Newznab.NewznabSettings indexerSettings) [0x0009a] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\NewznabCapabilitiesProvider.cs:62 at NzbDrone.Core.Indexers.Newznab.NewznabCapabilitiesProvider+<>c__DisplayClass4_0.<GetCapabilities>b__0 () [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\NewznabCapabilitiesProvider.cs:35 at NzbDrone.Common.Cache.Cached`1[T].Get (System.String key, System.Func`1[TResult] function, System.Nullable`1[T] lifeTime) [0x00069] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Cache\Cached.cs:81 at NzbDrone.Core.Indexers.Newznab.NewznabCapabilitiesProvider.GetCapabilities (NzbDrone.Core.Indexers.Newznab.NewznabSettings indexerSettings) [0x00020] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Newznab\NewznabCapabilitiesProvider.cs:35 at NzbDrone.Core.Indexers.Torznab.Torznab.get_PageSize () [0x00000] in 
M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Torznab\Torznab.cs:23 at NzbDrone.Core.Indexers.Torznab.Torznab.GetRequestGenerator () [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\Torznab\Torznab.cs:27 at NzbDrone.Core.Indexers.HttpIndexerBase`1[TSettings].TestConnection () [0x00007] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\HttpIndexerBase.cs:334 19-3-28 15:41:43.0|Warn|SonarrErrorPipeline|Invalid request Validation failed: -- Unable to connect to indexer, check the log for more details

 

[screenshots: docker containers and Sonarr indexer settings]

 

 

Edited by edmoisaa
Link to comment

So, I'm beating my head against my keyboard trying to get UNMS to work with the letsencrypt docker and I can't seem to figure it out. I am posting here and in a few other places, as I'm not sure if this is a failure in my LE setup, my UNMS docker, or something else, so sorry if this doesn't fully belong. For the record: I think I'm only just moving out of noob territory when it comes to Unraid/Linux usage, so apologies if I fail to include something or miss something obvious.

 

Issue:

I cannot get my ER4 to connect to UNMS, even though, if I go through the Discovery Manager, the UNMS key *will* be automatically populated into my ER4 system settings.

 

Setup (hopefully in a concise order):

FQDN for unms access = unms.my.domain (UMD)

LetsEncrypt Docker (LED) setup =

- port forward on router = WAN:443 --> tower:643 / WAN:80 --> tower:280

- LetsEncrypt ports = tower:643 --> LED:443 / tower:280 --> LED:80

- LED is on a custom Docker network "proxynet", along with Ombi and UNMS docker

- config for UMD =

# make sure that your dns has a cname set for unifi and that your unifi-controller container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name unms.my.domain;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_unifi unms;
        proxy_pass https://$upstream_unifi:8443;
    }

    location /wss {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_unifi unms;
        proxy_pass https://$upstream_unifi:8443;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_ssl_verify off;
    }

}

UNMS docker (UD) setup (oznu/unms docker template added through Community Apps):

- UD port mapping: tower:643 --> UD:443 / tower:6080 --> UD:80

- LetsEncrypt within UD is currently disabled

 

Currently Working:

1) Lets Encrypt docker is currently working:

  - I can access ombi from outside the network, as can my Plex users.

 

2) UNMS docker is working from within the local network:

  - I can access the WebUI and I have set up UNMS.

 

3) LED --> UD:

  - If I change the port in unms.subdomain.conf from :8443 to :443 (EDIT: wrong port #), I can access the WebUI from the internet at unms.my.domain with no problem.

 

The Failure:

When I go into UD and use the Discovery Manager to add my er4, the er4 never completes the connection to UNMS (connection times out).

What is confusing to me is that, when I initiate the connection from the Discovery Manager, the credentials (and at least some of the forwarding/connects/etc.) are clearly correct, because the UNMS key then gets populated in my ER4. However, it always hangs on "device connecting".

 

(EDIT: added this line from more testing) Additionally, if I manually change the FQDN in the UNMS key to tower.lan:6443, the ER4 connects to UNMS fine.

 

Is there anything glaring in my configuration that anyone can see as to why connecting to the ER4 isn't working? Is there something about port forwarding 443 to LED that's messing with the UNMS connection? I'm just not versed enough in how the actual UNMS --> device connection occurs to troubleshoot it any better.

Edited by phil1c
Link to comment
5 hours ago, edmoisaa said:

My indexer settings simply require copying the Torznab feed into the URL and setting the API key correctly (which was done). I have also tried changing ports, removing letsencrypt and resetting my containers (with no luck, the error still persists), so it has simply messed up my whole docker library. All tutorials for letsencrypt perform simple changes that just seem to work on their specific configs. This error also occurs when connecting to a download client (in my case Deluge), where a similar ReadDoneAsync2 error appears, so it is something with Sonarr.

 

I cannot post the whole config for Sonarr, however here is the error, docker and Sonarr indexer info:

 

Error Log:

19-3-28 15:41:43.0|Warn|Torznab|Unable to connect to indexer [v3.0.1.418] System.Net.WebException: Error getting response stream (ReadDoneAsync2): ReceiveFailure [...] (full stack trace in the quoted post above)

 

[screenshots: docker containers and Sonarr indexer settings]

 

 

I don't see how any of it relates to letsencrypt. The log snippet you posted shows one container trying to connect to the other directly via the host IP. I don't see any proxying involved.
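
To illustrate the distinction (my own example; container name and port are assumptions): with Sonarr and Jackett on the same custom bridge, the Torznab URL in Sonarr would use the container name and container port rather than the host IP and host port, e.g.

    # container-to-container over the docker network, no host IP and no reverse proxy involved
    http://jackett:9117/api/v2.0/indexers/ettv/results/torznab/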

Link to comment

Hi, I also have the issue where Nextcloud downloads are limited in size... I think I read about it here as well but can't find it anymore.

 

What I did in the meantime:

 

Set the limit in the Nextcloud webGUI to 10G, and changed nextcloud.subfolder.conf -> proxy_max_temp_file_size 8192m;

 

Maybe a hint where else the limiter could be? Thanks ahead.

Link to comment
Hi, I also have the issue where Nextcloud downloads are limited in size... I think I read about it here as well but can't find it anymore.
 
What I did in the meantime:
 
Set the limit in the Nextcloud webGUI to 10G, and changed nextcloud.subfolder.conf -> proxy_max_temp_file_size 8192m;
 
Maybe a hint where else the limiter could be? Thanks ahead.
Google it and it will pull up the link on how to fix it.

Sent from my SM-N960U using Tapatalk

Link to comment
2 hours ago, alturismo said:

Hi, I also have the issue where Nextcloud downloads are limited in size... I think I read about it here as well but can't find it anymore.

 

What I did in the meantime:

 

Set the limit in the Nextcloud webGUI to 10G, and changed nextcloud.subfolder.conf -> proxy_max_temp_file_size 8192m;

 

Maybe a hint where else the limiter could be? Thanks ahead.

Also check the proxy.conf, which used to have a limiter a while back.
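
For reference, the directives to look for are along these lines (example values, not necessarily what your confs contain today):

    # in proxy.conf and/or nextcloud.subfolder.conf -- example values
    client_max_body_size 0;             # 0 removes the request body (upload) size check
    proxy_max_temp_file_size 8192m;     # as set above; the nginx default is 1024m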

Link to comment

Hello everyone,

 

I followed the tutorial to install nginx and take advantage of the reverse proxy for my nextcloud application.

When I try to connect to Nextcloud from my local network, I cannot access it...

The loading bar just spins and nothing is displayed, while everything works fine from the outside.

 

 

 

 

Can you tell me what you need, such as logs or screenshots that I can send, so I can troubleshoot this.

 

Thank You.

Edited by snake382
Link to comment

I'm having an issue and I'm not really sure where to start. I'm not even 100% certain it's an issue with letsencrypt, which I run as a docker on Unraid.

 

Anyway, here is my situation. I recently moved and all my infrastructure (UniFi) moved with me, including my AT&T gateway. Everything was shut down at my home before I moved and I just booted everything back up (about 2 1/2 weeks later) at my new location. I have everything back up and running again, and all settings and configs are the same, including my WAN IP.

 

All my dockers are up and running and fully functional, except that I can no longer resolve any of my external applications that utilize letsencrypt with the nginx reverse proxy. I can ping and resolve my domain name fine, and can even access Plex externally, but nothing that was fronted by the letsencrypt docker seems to work, about 8 applications in all with the same generic error. No status code, nothing, just a "domain name" refused to connect message followed by ERR_CONNECTION_REFUSED.

 

Has anyone experienced any similar issues before where everything just stopped working?

 

Thanks for any input.

 

 

Link to comment

Hi,

I followed Spaceinvader One's YouTube guide and I'm getting this in the logs after I start the letsencrypt container:

Server ready
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"

When I try to access my subdomain I get the generic "Welcome to our server" page rather than the Nextcloud GUI. Any pointers?

 

***EDIT [warn] message***

Updating the container to the latest version cleared the above-mentioned nginx [warn], but I am still getting sent to that generic "Welcome..." page. Any help is appreciated.

 

 

****SOLVED**** no more "Welcome" page - getting the container GUI

I was editing nextcloud.subdomain.conf.sample via krusader and failed to notice the "sample" part.... facepalm....

Edited and saved as nextcloud.subdomain.conf, restarted letsencrypt, working as intended. Getting Nextcloud's GUI on my subdomain.
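
For anyone else who lands here, the fix boils down to copying the sample to a live conf and restarting the container (the host-side appdata path is an assumption, adjust to your own mapping):

    cd /mnt/user/appdata/letsencrypt/nginx/proxy-confs    # assumed appdata location of /config/nginx/proxy-confs
    cp nextcloud.subdomain.conf.sample nextcloud.subdomain.conf
    docker restart letsencrypt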

Edited by Lavoslav
SOLVED
Link to comment
