[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


Recommended Posts

I migrated from letsencrypt to SWAG and didn't change anything in the config.

Four subdomains (airsonic, plex, tautulli, nextcloud) are all working just fine.

Five subdomains (sonarr, 4kradarr, radarr, transmission, nzbget) all come back with "file not found."

I don't know what's going on with these.

Any help?

Link to comment

I was trying to get Collabora running as a separate Docker container: I installed the container with the default name (collabora), renamed the sample proxy conf, added the subdomain in SWAG and as a CNAME at my DNS provider. The SWAG log is showing:


[cont-init.d] done.
[services.d] starting services
nginx: [emerg] "server" directive is not allowed here in /config/nginx/proxy-confs/collabora.subdomain.conf:3
Server ready
nginx: [emerg] "server" directive is not allowed here in /config/nginx/proxy-confs/collabora.subdomain.conf:3
nginx: [emerg] "server" directive is not allowed here in /config/nginx/proxy-confs/collabora.subdomain.conf:3
nginx: [emerg] "server" directive is not allowed here in /config/nginx/proxy-confs/collabora.subdomain.conf:3
nginx: [emerg] "server" directive is not allowed here in /config/nginx/proxy-confs/collabora.subdomain.conf:3
......

I thought I followed @SpaceInvaderOne's guide quite closely, but obviously I made a mistake somewhere...

Link to comment

Hey y'all.  I'm trying to add a second domain to my container, and I need a wordpress container to be where www. goes for the new domain.

 

domain a.com is working, and I have both subdomain and subfolder conf files working, and the same subdomains will work with the new b.com domain

 

My wife does not want to have to use wordpress.b.com; she wants to just use b.com and/or www.b.com. I am not quite sure how to set this up in the container without rebuilding my entire proxynet.

 

Is there a way to set this up?

 

I have figured out that I cannot use *.b.com in the extra_domains field, so the cert is now good for a.com and b.com. But I am unsure how to force www.b.com and b.com to go to a hosted WordPress container without having a.com and www.a.com go there too. Is there a config I can change so it looks at both and routes based on the domain?

Edited by PsiKoTicK
Link to comment
On 11/3/2020 at 8:28 PM, PsiKoTicK said:

Hey y'all.  I'm trying to add a second domain to my container, and I need a wordpress container to be where www. goes for the new domain.

 

domain a.com is working, and I have both subdomain and subfolder conf files working, and the same subdomains will work with the new b.com domain

 

My wife does not want to have to use wordpress.b.com; she wants to just use b.com and/or www.b.com. I am not quite sure how to set this up in the container without rebuilding my entire proxynet.

 

Is there a way to set this up?

 

I have figured out that I cannot use *.b.com in the extra_domains field, so the cert is now good for a.com and b.com. But I am unsure how to force www.b.com and b.com to go to a hosted WordPress container without having a.com and www.a.com go there too. Is there a config I can change so it looks at both and routes based on the domain?

If you're doing dns validation, you can get wildcard for b by setting extra domains to "b.com,*.b.com"

 

Then to serve wordpress at b.com, set the server name for the wordpress server block to b.com
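As a minimal sketch, assuming the WordPress container is named "wordpress", listens on port 80, and is on the same custom Docker network as SWAG, that server block could look something like this:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    # answer for both the bare domain and the www subdomain
    server_name b.com www.b.com;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        # container name and port are assumptions; adjust to your setup
        set $upstream_app wordpress;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}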

Link to comment
12 hours ago, aptalca said:

If you're doing dns validation, you can get wildcard for b by setting extra domains to "b.com,*.b.com"

 

Then to serve wordpress at b.com, set the server name for the wordpress server block to b.com

Ah, I just have the default certificate validation; I don't "need" the *., just the main b.com and www.b.com, and it errors if I use *., so I am fine not having it.

 

So should I make a new Proxy Config?  My server block seems to only exist in the proxy subdomain/subfolder.conf files - I admit I am a n00b with this stuff but I am a tech guy, just...  I still find this hard to wrap my head around, I'm getting there, though!

Link to comment
7 hours ago, PsiKoTicK said:

Ah, I just have the default certificate validation; I don't "need" the *., just the main b.com and www.b.com, and it errors if I use *., so I am fine not having it.

 

So should I make a new Proxy Config?  My server block seems to only exist in the proxy subdomain/subfolder.conf files - I admit I am a n00b with this stuff but I am a tech guy, just...  I still find this hard to wrap my head around, I'm getting there, though!

The main conf is /config/nginx/nginx.conf, which includes (imports) /config/nginx/site-confs/default; that file contains the main server block and in turn includes (imports) all the proxy confs.

 

Check out the examples in the default site conf
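Roughly, the include chain looks like this (paths may differ slightly between SWAG versions):

# /config/nginx/nginx.conf, inside the http block
include /config/nginx/site-confs/*;

# /config/nginx/site-confs/default, inside the main server block
include /config/nginx/proxy-confs/*.subfolder.conf;

# /config/nginx/site-confs/default, after the main server block
include /config/nginx/proxy-confs/*.subdomain.conf;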

Edited by aptalca
Link to comment

I just got this container set up yesterday morning and man, it is so great.  I have a handful of my containers proxied already, but I've hit a snag that I can't figure out and I'm hoping someone can help.  These 5-7 other servers I'm trying to proxy fall into 3 categories:

  1. On this Unraid server, but assigned to a different network and VLAN
  2. On my other Unraid server
  3. On a Raspberry Pi

I'm hoping that some combination of editing the configuration files and my OPNsense firewall rules will solve items #2 and #3. I'm wondering if #1 acts differently, though, and if so how I am supposed to make those proxyable by Swag. Here are a few more details; if you can help guide me I would appreciate it.

  • Swag and currently proxied containers - br1.60, network 192.168.60.0/24
  • Non-working containers - br0.20, network 192.168.20.0/24

Thanks for any help.

 

UPDATE: Getting item 1 handled turned out to be easier than I feared; using an IP for the upstream app worked great. I thought #3 would then be easy, but it's not working. Specifically, I'm trying to reach my Hass (Home Assistant Supervised) instance on it, which is available when I hit it directly via its 192.168.60.X:8123 address. If I try to hit it via the proxy I get the Nginx default page, and I don't see any traffic trying to proxy from the swag IP to the Hass server. It's like it isn't recognizing hass.mydomain.me even though I edited the conf to reflect the hass subdomain name and the app IP. Any ideas as to what could be up there?

Edited by BurntOC
UPDATE
Link to comment
13 hours ago, aptalca said:

The main conf is /config/nginx/nginx.conf, which includes (imports) /config/nginx/site-confs/default; that file contains the main server block and in turn includes (imports) all the proxy confs.

 

Check out the examples in the default site conf

Hey, man, thank you.  It was not a direct answer, but I was able to find enough to realize I needed a second file in the site-confs for the other server, not just another block in the default (cuz it wouldn't let me, who knew?) - and it's up and going, and my wife is happy, and now I know new things.  Appreciate your help.

Link to comment
2 hours ago, PsiKoTicK said:

Hey, man, thank you.  It was not a direct answer, but I was able to find enough to realize I needed a second file in the site-confs for the other server, not just another block in the default (cuz it wouldn't let me, who knew?) - and it's up and going, and my wife is happy, and now I know new things.  Appreciate your help.

To clarify, you CAN edit the default site conf to modify or add server blocks.

Alternatively you can add more site conf files like you did. Either works
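For example (the filenames are just illustrative; anything under site-confs gets included):

/config/nginx/site-confs/default         # default site conf; extra server blocks can be appended here
/config/nginx/site-confs/wordpress.conf  # or each additional site can live in its own file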

Link to comment

Hi All

 

I am moving my last reverse proxy from my Synology box (built-in functionality with Let's Encrypt).

I already have it working for all my Dockers, following the instructions to set up and use the special "Proxynet" network in Docker.

 

But my last servers are running as VMs, not Dockers, so I cannot use the network type "Proxynet".

 

I have two servers left running as VMs with fixed IPs (both on the virtual LAN on br0).

I found the template for Home Assistant:

 

# make sure that your dns has a cname set for homeassistant and that your homeassistant container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name home.mydomain.dk;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app homeassistant;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }
}

But I am not sure how to point it to a specific mysub.mydomain.dk?

Or how to add the IP. (I would prefer not to have a mysub.* wildcard; I have 4 different domains added and they work fine.)

I tried looking through the posts but so far have not succeeded in getting this working (trial and error).

 

Hoping to adjust the above to work with the Synology (HTTPS on port 5001) running as a VM.

Thanks!

Link to comment
You'd just need to change the proxy_pass line to http://vmip:port

Sent from my Pixel 4 XL using Tapatalk

Link to comment
21 minutes ago, H2O_King89 said:

You'd just need to change the proxy_pass line to http://vmip:port

Sent from my Pixel 4 XL using Tapatalk
 

Yes, I see the Docker name should be the IP (I am making this too complicated).

 

I tried, but it doesn't seem to work:

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app 192.168.0.12;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

I tested and it's working when accessing http://192.168.0.12:8123/lovelace/default_view directly.

(The domain is getting a cert according to the log file, and I also tried the mysub.* wildcard, but with no effect.)

Link to comment



Am I reading this right? You are running hass.io in a VM? If so you need to change the proxy config line proxy_pass to https://192.168.0.12:8123

Sent from my Pixel 4 XL using Tapatalk

Link to comment
20 minutes ago, H2O_King89 said:

Am I reading this right? You are running hass.io in a VM? If so you need to change the proxy config line proxy_pass to https://192.168.0.12:8123

Sent from my Pixel 4 XL using Tapatalk
 

Yes, it's the same thing. 🙂

set $upstream_app 192.168.0.12;
set $upstream_port 8123;
set $upstream_proto http;
proxy_pass $upstream_proto://$upstream_app:$upstream_port;

I just used the variables in the last line, which expands to:

proxy_pass http://192.168.0.12:8123;

 

But I can see that it is sort of working; there seems to be some difference between this reverse proxy and the one Synology sets up. I do get to the webpage:

[screenshot of the page]

Looking at the log from my app I can see a new error message: shared.webhookError 1

Very strange?

 

I made this for the Synology VM:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name mydomain.dk;

    include /config/nginx/ssl.conf;
#   add_header X-Frame-Options "SAMEORIGIN" always; 
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";


    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        proxy_pass https://192.168.0.10:5001;
        proxy_max_temp_file_size 2048m;
    }
}

The Synology domain looks to be working perfectly; are there any changes that I need to make?

It's for a domain, not a subdomain, and I used the template for Nextcloud as a base.

 

Link to comment
7 minutes ago, casperse said:

Yes, it's the same thing. 🙂


set $upstream_app 192.168.0.12;
set $upstream_port 8123;
set $upstream_proto http;
proxy_pass $upstream_proto://$upstream_app:$upstream_port;

I just used the variables in the last line, which expands to:

proxy_pass http://192.168.0.12:8123;

 

But I can see that it is sort of working; there seems to be some difference between this reverse proxy and the one Synology sets up. I do get to the webpage:

[screenshot of the page]

Looking at the log from my app I can see a new error message: shared.webhookError 1

Very strange?

 

I made this for the Synology VM:


server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name mydomain.dk;

    include /config/nginx/ssl.conf;
#   add_header X-Frame-Options "SAMEORIGIN" always; 
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";


    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        proxy_pass https://192.168.0.10:5001;
        proxy_max_temp_file_size 2048m;
    }
}

The Synology domain looks to be working perfectly; are there any changes that I need to make?

It's for a domain, not a subdomain, and I used the template for Nextcloud as a base.

 

Here is mine:

 


server {
    listen 443 ssl;
    listen [::]:443 ssl;


    server_name hass.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;
    
    # enable for ldap auth, fill in ldap details in ldap.conf 
    #include /config/nginx/ldap.conf;

     location / {
        proxy_pass http://10.1.60.2:8123;
        proxy_set_header Host $host;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api/websocket {
        proxy_pass http://10.1.60.2:8123/api/websocket;
        proxy_set_header Host $host;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    }
}

 

Link to comment
11 hours ago, aptalca said:

To clarify, you CAN edit the default site conf to modify or add server blocks.

Alternatively you can add more site conf files like you did. Either works

Huh...  guess I just did it wrong, lol, it was easy enough to copy the default, rename, and edit, at least...  now I'm trying to figure out wordpress, so thank you, it's like 98% of the way there :)

Link to comment

I have recently decided to set up separate instances of Radarr and Sonarr exclusively to handle my 4K content. I installed a second Docker instance of each and named them radarr4k and sonarr4k respectively. I set the appdata folder to the same name and placed them on my custom dockernet along with the rest of the others and SWAG itself. I duplicated my old configs for each to save me some time. I then added the new subdomains to my DNS and to my SWAG docker settings. Next I duplicated the proxy confs and saved them as radarr4k.subdomain.conf and sonarr4k.subdomain.conf. I edited the file to match the new Docker name; I'll paste that below.

However, after waiting the 48 hours to ensure my DNS entries propagated fully, my subdomains still don't work. I had the same issue a while back with my speedtest docker and subdomain. Is there a step I'm missing?

I also have this weird issue where sometimes sonarr4k.mydomain.com forwards to heimdall.mydomain.com even though I haven't had Heimdall installed for a long time. I even ensured I removed the heimdall.subdomain.conf file and dns entries to try and resolve it. Unfortunately that did not work.

Finally, after adding the sonarr4k and radarr4k files, I now get this error in swag:
nginx: [warn] conflicting server name "radarr.*" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "sonarr.*" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "radarr.*" on [::]:443, ignored
nginx: [warn] conflicting server name "sonarr.*" on [::]:443, ignored

Unraid 6.8.3
I recently migrated to swag from letsencrypt with no issues. The issues described here were also happening in letsencrypt prior to migration.

sonarr4k.subdomain.conf

EDIT: I should add that my original subdomains for sonarr, radarr, and others still work fine. Just the new ones don't.

Edited by DeathByDentures
Link to comment
4 hours ago, DeathByDentures said:

I have recently decided to set up separate instances of Radarr and Sonarr exclusively to handle my 4K content. I installed a second Docker instance of each and named them radarr4k and sonarr4k respectively. I set the appdata folder to the same name and placed them on my custom dockernet along with the rest of the others and SWAG itself. I duplicated my old configs for each to save me some time. I then added the new subdomains to my DNS and to my SWAG docker settings. Next I duplicated the proxy confs and saved them as radarr4k.subdomain.conf and sonarr4k.subdomain.conf. I edited the file to match the new Docker name; I'll paste that below.

However, after waiting the 48 hours to ensure my DNS entries propagated fully, my subdomains still don't work. I had the same issue a while back with my speedtest docker and subdomain. Is there a step I'm missing?

I also have this weird issue where sometimes sonarr4k.mydomain.com forwards to heimdall.mydomain.com even though I haven't had Heimdall installed for a long time. I even ensured I removed the heimdall.subdomain.conf file and dns entries to try and resolve it. Unfortunately that did not work.

Finally, after adding the sonarr4k and radarr4k files, I now get this error in swag:
nginx: [warn] conflicting server name "radarr.*" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "sonarr.*" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "radarr.*" on [::]:443, ignored
nginx: [warn] conflicting server name "sonarr.*" on [::]:443, ignored

Unraid 6.8.3
I recently migrated to swag from letsencrypt with no issues. The issues described here were also happening in letsencrypt prior to migration.

sonarr4k.subdomain.conf

EDIT: I should add that my original subdomains for sonarr, radarr, and others still work fine. Just the new ones don't.

Nginx complains about conflicting server names, so you need to fix that.

Link to comment
11 minutes ago, saarg said:

Nginx complains about conflicting server names, so you need to fix that.

I've changed the server_name in each .subdomain.conf file but it still says there is a conflict. I don't understand how, since the containers and storage locations have different names. Where else do I need to change it for this subdomain?
 

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name radarr4k.*;

EDIT: I have confirmed it's the presence of both sonarr.subdomain.conf and sonarr4k.subdomain.conf that creates the error, as removing sonarr4k clears it. I just got it working for ombi with no problems, so there is a solution.

Edited by DeathByDentures
Adding clarification
Link to comment
On 11/5/2020 at 4:06 PM, BurntOC said:

I just got this container set up yesterday morning and man, it is so great.  I have a handful of my containers proxied already, but I've hit a snag that I can't figure out and I'm hoping someone can help.  These 5-7 other servers I'm trying to proxy fall into 3 categories:

  1. On this Unraid server, but assigned to a different network and VLAN
  2. On my other Unraid server
  3. On a Raspberry Pi

I'm hoping that some combination of editing the configuration files and my OPNsense firewall rules will solve items #2 and #3. I'm wondering if #1 acts differently, though, and if so how I am supposed to make those proxyable by Swag. Here are a few more details; if you can help guide me I would appreciate it.

  • Swag and currently proxied containers - br1.60, network 192.168.60.0/24
  • Non-working containers - br0.20, network 192.168.20.0/24

Thanks for any help.

 

UPDATE: Getting item 1 handled turned out to be easier than I feared; using an IP for the upstream app worked great. I thought #3 would then be easy, but it's not working. Specifically, I'm trying to reach my Hass (Home Assistant Supervised) instance on it, which is available when I hit it directly via its 192.168.60.X:8123 address. If I try to hit it via the proxy I get the Nginx default page, and I don't see any traffic trying to proxy from the swag IP to the Hass server. It's like it isn't recognizing hass.mydomain.me even though I edited the conf to reflect the hass subdomain name and the app IP. Any ideas as to what could be up there?

So I've gotten it all operating fine, EXCEPT Home Assistant Supervised on my Pi. I still get the 502 Gateway error and I don't see it even trying to proxy the request to the Pi. I know there are some pointers to ensure the Hass instance accepts the proxy, but why the heck would it not even be forwarding the proxy requests like it does for the other dozen servers and containers I'm running just fine?

Link to comment
13 hours ago, BurntOC said:

So I've gotten it all operating fine, EXCEPT Home Assistant Supervised on my Pi. I still get the 502 Gateway error and I don't see it even trying to proxy the request to the Pi. I know there are some pointers to ensure the Hass instance accepts the proxy, but why the heck would it not even be forwarding the proxy requests like it does for the other dozen servers and containers I'm running just fine?

If you have set up the proxy conf correctly (hard to say, since you didn't post it), then you probably don't have your routing correct.

Can you ping the RPI from inside swag?
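For example, something along these lines from the Unraid host (assuming the container is named "swag" and the Pi is at 192.168.60.4):

docker exec -it swag ping -c 3 192.168.60.4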

Link to comment
6 hours ago, saarg said:

If you have set up the proxy conf correctly (hard to say, since you didn't post it), then you probably don't have your routing correct.

Can you ping the RPI from inside swag?

Fair observation. I thought about including it originally, but if the connectivity is there, it seems like this would be some well-known trick that I don't know about. To that point, your question is a great one, and I believed the answer was "Yes, I've tested it." But if so I'd have been wrong, as checking right now it is not getting a response. I'm up to 15 other devices that are working just fine across the other 2 situations I included in my initial post on this. Since it is working for other servers in that same domain, it would seem like the traffic should have no problem getting from my Unraid server to the firewall and on to the Pi, but clearly I have one. Here's my proxy conf, in any event (I use hassio.mydomain.me and the device is on 192.168.60.4 in this example):

 

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name hassio.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
#        set $upstream_app homeassistant;
        set $upstream_app 192.168.60.4;
        set $upstream_port 8123;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }
}

 

Link to comment

Maybe a stupid question:

 

But is it okay to add multiple subdomains like this?

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name photo.domain.dk;
    server_name photos.domain.dk;
    server_name piwigo.domain.dk;

And could I just add a piwigo.domain2.dk also?

It might work, but I don't want to go against the approved structure.
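(I assume the more conventional form would be a single server_name directive listing everything, e.g.:

    server_name photo.domain.dk photos.domain.dk piwigo.domain.dk piwigo.domain2.dk;

but I'm not sure whether repeating the directive is frowned upon.)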

Link to comment
