Posts posted by casperse


  1. Hi All

     

    I have started to see some weird things: after a few weeks the server "slows" down, and on my main page the CPU monitoring is blank (in any browser).

    image.png.60b1d49b6db7251428ed9b110171d786.png

    The system runs with all the Dockers and VMs, but I can't get a diagnostic before I do a reboot or, worst case, cut the power?

    (I will try to get a diagnostic after the next reboot)

     

    Has anyone here seen this before?

     

    Br

    Casperse

     

    After a reboot I got the diagnostic files; I hope someone can help me find the cause?

     

     

    plexzone-diagnostics-20200609-0910.zip


  2. 5 hours ago, squelch said:

    I just updated plex and now it won't start again.

     

    Execution error

    Server error

     

    That is the window Unraid spits out when attempting to start.

     

    
    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VERSION'='docker' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-8746aba1-0848-abca-4528-4f640b678b58' -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/plexshares/Movies/':'/movies':'rw' -v '/mnt/user/plexshares/TV Shows/':'/tv':'rw' -v '/mnt/user/Media/Music/':'/music':'rw' -v '/tmp':'/transcode':'rw' -v '/mnt/user/appdata/plex':'/config':'rw' --runtime=nvidia 'linuxserver/plex'
    
    854f2503e585b2eb9817e8b67040d73a6038955308f2d9b633b39bc297d1dbb1
    /usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: cuda error: invalid device ordinal\\n\""": unknown.
    
    The command failed.

     

    Not saying it's the same problem, but I also couldn't start Plex (I run the official Plex docker). I added the log option and then it started again:

    --runtime=nvidia --log-opt max-size=50m --log-opt max-file=1

    (I also have a backup docker installed with the linuxserver Plex docker that I can start if the other one doesn't start after an update. I recommend this setup: you can point both of them at the same media metadata folder, so it doesn't really take up much extra storage, and one of them always works :-)


  3. 37 minutes ago, Squid said:

    Deluge / Radarr container paths don't match.  Change Radarr's container path of /downloads -> /mnt/disks/SEED/downloads to be /data -> /mnt/disks/SEED/downloads

     

    Deluge is telling Radarr the movie is in /data/movies/....  But Radarr doesn't have a /data, hence the message.

     

    https://forums.unraid.net/topic/57181-docker-faq/?tab=comments#comment-566086

     

    Argh, I need to stop thinking that these "default" docker inputs are mandatory; everything is configurable!

    Thanks! 
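    The fix Squid describes can be sketched as plain docker run mappings (this is not the exact Unraid template; the image names and the :rw flags are illustrative):

```shell
# Both containers mount the same host folder at the SAME container-side
# path, /data, so the paths Deluge reports to Radarr also exist in Radarr.
docker run -d --name=deluge -v '/mnt/disks/SEED/downloads':'/data':'rw' linuxserver/deluge
docker run -d --name=radarr -v '/mnt/disks/SEED/downloads':'/data':'rw' linuxserver/radarr
```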


  4. Hi All

     

    So far everything works when I set it up on an external server, but now I am trying to move it to my local Unraid server.

    I also read many posts about the advantages of having a separate unassigned (UAD) drive for downloads, so I did that too,

    keeping the array from spinning up every time you download files and move them later.

     

    Mapping is always what causes everyone problems, or else it's access rights! :-)

    And that is probably also the case here; I just can't put my finger on where I screwed up!

     

    Deluge mappings: (Unassigned disk)

    image.png.93a5620ec695da937d0800d350e68df4.png

    Radarr:

    image.png.822e150a56e8f00e2118aaa9d409ac90.png

    Same path!

     

    Deluge moves them correctly from .incomplete to movies:

    image.png.e987a94385dd5537b2e3ec8851e76c2f.png

    Radarr also shows the correct path in the UI:

    image.png.f6a071b714f34dad2f71aa6c294bfaa5.png

     

    But in the log files in Radarr it says:

    Quote

    Import failed, path does not exist or is not accessible by Radarr: /data/movies/George. Ensure the path exists and the user running Radarr has the correct permissions to access this file/folder

     

    Then looking at the permissions I can't see anything wrong:

    image.png.e0179a5ec0be44f2aab5ff9ee666e4c8.png

     

    Summary:

    Deluge /data --> /mnt/disks/SEED/downloads/

    Radarr /downloads -> /mnt/disks/SEED/downloads/

     

    Deluge moves files from .incomplete --> movies

    /mnt/disks/SEED/downloads/.incomplete

    /mnt/disks/SEED/downloads/movies

     

    Radarr moving files to the array... not working.

     

    That should work - shouldn't it?

    It's driving me nuts... I have tried so many options, using sub-level folders instead of the root folder of a UAD drive; nothing works.

    The last option is to use the path mappings! But I am running everything locally, so that shouldn't be needed, should it?

    (localhost doesn't work? I am using the IP for the download clients)

     

    As always new eyes on the problem and inputs are most welcome!


  5. Hi Everyone

     

    I have installed Nextcloud and everything is working. I would now like to map shared Unraid drives into Nextcloud, and I think there are some "mapping" problems.

     

    Unraid has a great feature where you can copy a share's read settings to other shares (making sure they are the same!)

     

    Example: two shared folders with the exact same SMB settings in the Unraid configuration; one works, the other doesn't, and any other folder I try to share also does not work.

    image.png.a602618554dac334d2d110bc608b1c06.png

     

    image.png.bd1ea45003a3188d297c3959d3e6124e.png

    Again, the read rights are copied from the one that works?

     

    One positive side is that there is also another option for sharing local files.

    In the Nextcloud Docker you set a path:

     

    image.png.95516580cfc0c92137d40d68664dc798.png

     

    Then in Nextcloud:

    image.png.7a2b286dca9d415f656396c33648aa47.png

    And this would work...

     

    Can anyone give me an idea of what to do next to get SMB working for more than one share?

    Has anyone got this working with more than one SMB share?

    Are there any docker commands I can use to see the internal mappings?

    This doesn't work: docker exec -it name nextcloud config
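    A couple of host-side commands that may help with that last question (a sketch: `nextcloud` is the assumed container name, and `occ files_external:list` only exists with the External storage app enabled; depending on the image, `occ` may need to be invoked as `php occ` from the Nextcloud directory, as the web user):

```shell
# See what the container actually has mounted:
docker exec -it nextcloud ls -l /mnt
# List the external storages Nextcloud has configured:
docker exec -it nextcloud occ files_external:list
```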

     

    Thanks!

     


  6. 17 hours ago, dariusz65 said:

    Can you post your settings for 3615 6.2.3? I'm getting hard disk not found. I'm using 1.3b bootloader.

    Then you forgot to change the controller in the XML (I had the same problem when I started).

    Change the HDD bus SATA controller from 0 to 1:
    
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/user/domains/XPEnology_3/vdisk2.img'/>
          <target dev='hdd' bus='sata'/>
          <address type='drive' controller='1' bus='0' target='0' unit='3'/>
        </disk>
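    If you prefer to make that change outside the Unraid form view, the domain XML can be edited directly with virsh (the VM name here is taken from the example path above and may differ on your system):

```shell
# Opens the libvirt domain XML in $EDITOR; change controller='0' to
# controller='1' on the hdd <address> line, then save and restart the VM.
virsh edit XPEnology_3
```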

     


  7. Mounting unassigned devices as SMB shares?

     

    Normally I don't have any problems with UAD shares,

    and I can mount any internal Unraid shared folder in Nextcloud,

    but for some reason I can't get UAD shares working with Nextcloud?

     

    Could this be related to the SMB v1/v2 thing? Or because it's a shared drive and not a folder share?

    If I type \\192.168.0.6\ I get all shares listed (except UAD drive shares), but typing the share drive name works:

    \\192.168.0.6\domains_ssd\ Or is this just not possible because it's not a folder share but a whole-drive share?

     

    Example:

    image.png.49d408de92000657d651fe1e90933b17.png
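    One way to check from another machine what the server is actually exporting is smbclient (a sketch; -N attempts an anonymous login, so drop it if the shares require credentials):

```shell
# List all shares the SMB server at 192.168.0.6 announces,
# independent of Windows network browsing:
smbclient -L //192.168.0.6 -N
```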

     


  8. Hi All

     

    I now have a pfSense router as a VM, so if this VM doesn't autostart there is no internet to the server?

    The Dockers autostart perfectly.

     

    The log from the boot:

    Apr 22 19:21:44 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface veth4efa0a4.IPv6 with address fe80::a01b:a4ff:fe7d:884.
    Apr 22 19:21:44 SERVER avahi-daemon[10180]: New relevant interface veth4efa0a4.IPv6 for mDNS.
    Apr 22 19:21:44 SERVER avahi-daemon[10180]: Registering new address record for fe80::a01b:a4ff:fe7d:884 on veth4efa0a5.*.
    Apr 22 19:23:23 SERVER kernel: veth84a7df6: renamed from eth0
    Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
    Apr 22 19:23:23 SERVER avahi-daemon[10180]: Interface vethd967554.IPv6 no longer relevant for mDNS.
    Apr 22 19:23:23 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface vethd967554.IPv6 with address fe80::410:92ff:fe6c:114e.
    Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
    Apr 22 19:23:23 SERVER kernel: device vethd967554 left promiscuous mode
    Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
    Apr 22 19:23:23 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::410:92ff:fe6c:115e on vethd967554.
    Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
    Apr 22 19:23:32 SERVER kernel: veth715f555: renamed from eth0
    Apr 22 19:23:32 SERVER avahi-daemon[10180]: Interface veth1d3fcb8.IPv6 no longer relevant for mDNS.
    Apr 22 19:23:32 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface veth1d3fcb8.IPv6 with address fe80::5469:97ff:feac:a308.
    Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
    Apr 22 19:23:32 SERVER kernel: device veth1d3fcb8 left promiscuous mode
    Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
    Apr 22 19:23:32 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::5469:97ff:feac:a308 on veth1d3fcb8.
    Apr 22 19:27:28 SERVER kernel: vfio-pci 0000:0a:00.0: enabling device (0000 -> 0003)
    Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0a:00.1: enabling device (0000 -> 0003)
    Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.0: enabling device (0000 -> 0003)
    Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.1: enabling device (0000 -> 0003)
    Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
    Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered disabled state
    Apr 22 19:27:31 SERVER kernel: device vnet0 entered promiscuous mode
    Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
    Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered forwarding state
    Apr 22 19:27:32 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe2c:e872.
    Apr 22 19:27:32 SERVER avahi-daemon[10180]: New relevant interface vnet0.IPv6 for mDNS.
    Apr 22 19:27:32 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc54:ff:fe2c:e873 on vnet0.*.
    Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
    Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered disabled state
    Apr 22 19:27:35 SERVER kernel: device vnet1 entered promiscuous mode
    Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
    Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered forwarding state
    Apr 22 19:27:37 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc27:ebff:feb8:e5c9.
    Apr 22 19:27:37 SERVER avahi-daemon[10180]: New relevant interface vnet1.IPv6 for mDNS.
    Apr 22 19:27:37 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc28:ebef:feb8:e5c9 on vnet1.*.
    Apr 22 19:28:00 SERVER root: Fix Common Problems Version 2020.04.19

    I have looked on the forum, and the things I found had no effect on this?

     

    Running: 6.8.3

     

    Br

    Casperse


  9. So I went back experimenting with this in Unraid 6.8.3:

    And I got the following results:

     

    DS3615xs works up to DSM_DS3615xs_25423_6.2.3! Performance seems really good, even on LAN.

    DS3617xs I can only get to DSM_DS3617xs_23739_6.2.0; anything higher and it breaks?

     

    If anyone has had better luck then please share 😏


  10. On 4/17/2020 at 6:14 AM, aptalca said:

    Edit the default site conf and comment out the block with listen 80 

     

    Also, if you use dns validation, you don't have to forward port 80

    Do you mean these lines?

     

    # redirect all traffic to https
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }


    Wow, that seems really complicated! Is this to fix the LAN speed?

     

    I just followed the guide on this page and the link here to the XPEnology forum.

    Yes, it's trial and error; I have tried them all, and the highest DSM I could get working was:

     

    XPEnology_3 DSM_DS3615xs_24922 v6.2.2

    XPEnology_2 DSM_DS3617xs_23739_6.2

    XPEnology DSM_DS3615xs_6.1.7

     

    https://xpenology.com/forum/topic/24168-dsm-621-on-unraid-vm/

     

    The biggest problem is LAN speed on the virtual LAN?

    A virtual LAN is needed for MAC addresses if you want to use licences with DS Cam.

    (I have 4 hardware LAN ports passed through to the VM, so I thought this might solve the problem, but since I need specific MAC addresses it's a no-go.)

     

    QUESTION: I would really like to know how people use HDD storage with an XPEnology virtual server.

    Do you create one large terabyte vdisk2.qcow2 and mount this?

    Or do you use an unassigned drive and mount that to the VM?

     

    I would like to have:

    DS Cam (nice app, low CPU usage) - I have bought licenses for my old Synology

    DS Photo/Moments

    DS Backup tools/DS Cloud

    DS Note

     

    All have nice apps and low power requirements

     

    UPDATE: OK, I didn't know about the docker project... https://github.com/segator/xpenology-docker

    But what's the advantage over building your own VM?

    Docker: latest commit 3e18362 on Mar 8, 2019

     

     


  12. Hi All

     

    I keep having access problems caused by file permissions. I have the following in Radarr:

    image.png.2e4d3a664452036bff4d7312f3ca09d5.png

     

    And in the docker settings:

    image.thumb.png.adaa8d7ca3bb6985052ba54d578418ab.png

     

    My only option is to keep running "New Permissions" under Tools.

    The strange thing is that it's mostly .srt files that can't be read by the player before updating permissions?
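    Rather than rerunning the full New Permissions tool, a narrower fix could target just the subtitle files. A sketch, demoed in a temp directory; on the server you would point find at the media share instead:

```shell
# Demo: simulate an .srt created with overly restrictive permissions.
demo=$(mktemp -d)
touch "$demo/movie.srt"
chmod 600 "$demo/movie.srt"

# Give everyone read access to subtitle files only:
find "$demo" -name '*.srt' -exec chmod a+r {} +

stat -c '%a' "$demo/movie.srt"   # 600 with a+r becomes 644
rm -r "$demo"
```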


  13. 1 minute ago, Squid said:

    What's the error?  Don't copy / paste from the forum.  Type it in instead

    Sorry Squid, my mistake! (long day)

    I found the error: I had some limits on logs etc. on the same docker under Extra Parameters and had made an error; fixed, and it's running again.

    Syncthing is back using 100%, but now the other apps and the Unraid UI should still stay responsive, I hope! :-)

    Thanks! Of all the dockers, Syncthing is the most demanding, even more than Plex.


  14. On 11/24/2016 at 5:12 PM, Squid said:
    On 11/24/2016 at 5:07 PM, xxredxpandaxx said:

    So while browsing the forum I just found that you can set up priority for CPU on your dockers!!! So I'm hoping that this will fix the problem I have been having where sabnzbd hogs the CPU while unpacking a downloaded file, basically making it so Plex won't play anything. But my question is: is the number you set --cpu-shares to basically a percentage of how much CPU it will use while competing with other dockers? For example, if I set Plex to 2048 and sabnzbd to 512, will Plex use 75% and sab use 25%? (while both are trying to use the most CPU they can, of course)

     

    Oh, also: is unRAID set to a number? I think I read somewhere that the OS has priority over dockers and plugins, but when the CPU is at 100% the web GUI becomes unresponsive.

     

    I have an example in the docker FAQ about CPU shares.  For even more examples, and further options to prioritize docker apps over unRaid / VMs / etc., google "docker run reference" for the parameters to pop into the Extra Parameters section.

    I just had a situation yesterday where Syncthing was causing 100% CPU load, making everything unresponsive!

    Stopping that docker, my CPU usage fell to 34%.

    So would the best solution here be to prioritize the other Dockers above it, making sure they always have enough power to run?

    Or somehow cap the maximum CPU allocation Syncthing can use?
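    Both approaches go in the Extra Parameters field of the docker template; the flags below are real docker run options, but the numbers are illustrative, not a tested recommendation:

```shell
# Illustrative Extra Parameters (docker run flags):
#   Plex      -> --cpu-shares=2048          # higher weight under contention
#   Syncthing -> --cpu-shares=512 --cpus=2  # lower weight, hard cap at 2 CPUs
```

    Note that --cpu-shares only matters when CPUs are contended, while --cpus is an absolute ceiling.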

     


    I moved from Resilio to Syncthing because Resilio didn't work: when moving files that were already synced, it would download them all over again! Syncthing supports atomic moves!

     

    But I have a major problem with CPU usage: it spikes to 100%, and Unraid and even Plex get unresponsive. So can someone recommend some settings to "control" Syncthing overload?


  16. 4 hours ago, aptalca said:

    The heimdall subfolder method is only for setting the homepage of the main domain. You don't need to do that for the homepage of a secondary domain because it is not already set up.

     

    For ombi as the homepage of the second domain, just use the ombi subdomain conf, and edit the server name to read "seconddomain.com"

    Perfect, that did it! So there is NO need to change anything in the default conf for the # main server block? I thought you said that was needed?

     

    Are there any security implications? I can see that any subdomain I can think of will now always point to domain_1:

     

    anything*.domain_1

    anything*.domain_2

    anything*.domain_3

     

    all --> will point to the domain set for the "Heimdall subfolder sample", which was for domain_1 (Nextcloud).

    Normally I guess you would get a "This site can’t be reached".

    Or is this because each domain has an A record and a CNAME *.domain1 -> A record, so Letsencrypt just forwards everything to domain_1?

     

    I have been playing with this all day :-) hoping to replace my old Synology setup.

     

    [UPDATE]: Nextcloud works but can't connect to the iOS app; switching Nextcloud to Domain_2 and using Domain_1 with Emby resolved that. Nextcloud wants the sample file for the subdomain, not the subfolder?

     

    Everything seems to work!

    But I am getting a lot of Unraid log errors?

    I can see that the IP is from the laptop that I used to test with:

     


    Apr 12 20:52:59 SERVER nginx: 2020/04/12 20:52:59 [error] 10389#10389: *34579 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
    Apr 12 20:53:01 SERVER nginx: 2020/04/12 20:53:01 [error] 10389#10389: *34593 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
    Apr 12 20:53:02 SERVER nginx: 2020/04/12 20:53:02 [error] 10389#10389: *34599 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
    Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34604 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
    Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34607 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
    Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34612 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
    Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34615 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
    Apr 12 20:53:04 SERVER nginx: 2020/04/12 20:53:04 [error] 10389#10389: *34618 recv() failed (104: Connection reset by peer) while reading upstre
    Apr 12 21:59:13 SERVER nginx: 2020/04/12 21:59:13 [error] 10389#10389: *56034 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
    Apr 12 21:59:13 SERVER nginx: 2020/04/12 21:59:13 [error] 10389#10389: *56036 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: ::1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost"


  17. On 4/11/2020 at 3:19 AM, aptalca said:

    Server name directive.

     

    Create a new subdomain conf for the new server name

     

    OK, I have almost read through the entire thread, and on page 167 I found the missing parameter for inserting the extra domain names! LOL

    I now have 3 domains added and am getting certificates!

     

    Domain_1 --> Nextcloud (OK)

    Domain_2 --> Ombi (Not working)

    sub-domain.Domain_2 (OK)

    sub-domain.Domain_3 (OK)

     

    But I still can't get the two main domains to co-exist...

    I know it's down to how I add the two servers to the default conf?

     

    I have created the two main domains from the Heimdall.subfolder.conf.sample and created:

     

    "nextcloud.subfolder.conf"

    "ombi.subfolder.conf"

     

    I just need some help on how to define the servers in the appdata\letsencrypt\nginx\site-confs\defaults (conf).

    My additions are in yellow:

    Quote

    ## Version 2020/03/05 - Changelog: https://github.com/linuxserver/docker-letsencrypt/commits/master/root/defaults/default

    # redirect all traffic to https
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }

    # main server block
    server {
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;

        root /config/www;
        index index.html index.htm index.php;

        server_name _;

        # enable subfolder method reverse proxy confs
        include /config/nginx/proxy-confs/*.subfolder.conf;

        # all ssl related config moved to ssl.conf
        include /config/nginx/ssl.conf;

        # enable for ldap auth
        #include /config/nginx/ldap.conf;

        client_max_body_size 0;

    #    location / {
    #        try_files $uri $uri/ /index.html /index.php?$args =404;
    #    }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
        }


    }

    # sample reverse proxy config without url base, but as a subdomain "cp", ip and port same as above
    # notice this is a new server block, you need a new server block for each subdomain 

    server {

     listen 443 ssl http2;
     listen [::]:443 ssl http2;

     root /config/www;
     index index.html index.htm index.php;

     server_name DOMAIN_Ombi;
     include /config/nginx/ssl.conf;
     client_max_body_size 0;

     location / {

    #  auth_basic "Restricted";
    #  auth_basic_user_file /config/nginx/.htpasswd;

      include /config/nginx/proxy.conf;
      proxy_pass http://192.168.0.6:3579;

     }

    }

    server {

     listen 443 ssl http2;
     listen [::]:443 ssl http2;

     root /config/www;
     index index.html index.htm index.php;

     server_name DOMAIN_Nextcloud;
     include /config/nginx/ssl.conf;
     client_max_body_size 0;

     location / {

    #  auth_basic "Restricted";
    #  auth_basic_user_file /config/nginx/.htpasswd;

      include /config/nginx/proxy.conf;
      proxy_pass http://192.168.0.6:443;

     }

    }

     


  18. On 4/11/2020 at 3:19 AM, aptalca said:

    Server name directive.

     

    Create a new subdomain conf for the new server name

     So again, copying the sample from Heimdall.subfolder.conf.sample and creating the "nextcloud.subfolder.conf":

    Quote

    # In order to use this location block you need to edit the default file one folder up and comment out the / location

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    Then adding the two servers to the appdata\letsencrypt\nginx\site-confs\defaults conf

    (removing the two lines for the htpasswd in the example below):

    #  auth_basic "Restricted";

    #  auth_basic_user_file /config/nginx/.htpasswd;

     

    Quote

    # sample reverse proxy config without url base, but as a subdomain "cp", ip and port same as above

    # notice this is a new server block, you need a new server block for each subdomain

     

    server {

     listen 443 ssl http2;

     listen [::]:443 ssl http2;

     

     root /config/www;

     index index.html index.htm index.php;

     

     server_name domain_1;

     include /config/nginx/ssl.conf;

     client_max_body_size 0;

     

     location / {

    #  auth_basic "Restricted";

    #  auth_basic_user_file /config/nginx/.htpasswd;

      include /config/nginx/proxy.conf;

      proxy_pass http://192.168.0.6:3579;

     }

    }

     

    server {

     listen 443 ssl http2;

     listen [::]:443 ssl http2;

     

     root /config/www;

     index index.html index.htm index.php;

     

     server_name domain_2;

     include /config/nginx/ssl.conf;

     client_max_body_size 0;

     

     location / {

    #  auth_basic "Restricted";

    #  auth_basic_user_file /config/nginx/.htpasswd;

      include /config/nginx/proxy.conf;

      proxy_pass http://192.168.0.6:443;

     }

    }

    Then of course updating the Nextcloud PHP configuration to the domain and not the sub.domain.

    I have been reading your old posts today :-)

    Did I forget something?

     

    Would sub.domains still work? e.g. bitwarden.domain_2

    Or would I need to define them as servers too?

     

    Update: Adding a domain should look like this, right?

    image.thumb.png.d3ccce284067c7011678062f3921efa9.png

     

    I thought I had made an A record wrong, but if I just enter one domain it works; if I add more domains I get this error:

     

    ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

     

    On page 167 I found a note about creating this extra field for more domains?

    But it talks about subdomains? Would I be able to do as shown below?

    image.png.c0dc63a155d066c2f8ddb8970c2259dd.png