Posts posted by MrChunky

  1. 17 hours ago, jasgud said:

    Your issue is the same one I was having. It's because letsencrypt disabled a service. Under your container, hit advanced options and add the code below to your extra parameters, then restart/start the service. Note that this means HTTP (TCP 80) will need to be forwarded as well as HTTPS (TCP 443) for validation.

    
    -e "HTTPVAL"="true"

     

     

    I have what seems like the same problem, so I applied the suggested fix. FYI, the required variable is already set to false by default in the docker config; there is no need to add a new variable, just change the existing one.

     

    But I am getting connection refused on port 80. Should I change something in the nginx config as well?

     

    Domain: www.xxx.com
    Type: connection
    Detail: Fetching
    http://www.xxx.com/.well-known/acme-challenge/xxx:
    Connection refused

    Here is my current nginx config... listening on port 80 appears to be enabled as per the instructions.

    server {
    	listen 80;
    	server_name www.xxx.com;
    	return 301 https://$host$request_uri;
    }
    
    server {
    
    	listen 443 ssl default_server;
    	
    	root /config/www;
    	index index.html index.htm index.php;
    
    	server_name www.xxx.com;

    Edit: I have figured out that the problem started after the last update of the letsencrypt docker. I still don't know how to fix it.
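
    For anyone else debugging this, a quick external check of the validation path (a sketch; the domain is the placeholder from the error above, and the test should be run from outside your LAN):

    # A 404 here is fine (the challenge file only exists during validation);
    # what matters is that you don't get "connection refused".
    curl -v http://www.xxx.com/.well-known/acme-challenge/test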

  2. On 5/31/2017 at 1:37 AM, aaronhong13 said:

    I think I'm having the same issue; it causes Ombi to crash after a while.

     

    It doesn't look like my log is helpful; it's filled with messages like the ones below. I can attach the full log, but I have since restarted the docker and this is all that shows up right now.

     

    2b252c3ea000-2b2530000000 ---p 00000000 00:00 0
    2b2530000000-2b253023e000 rw-p 00000000 00:00 0
    2b2530300000-2b253053e000 rw-p 00000000 00:00 0
    2b2530600000-2b253083e000 rw-p 00000000 00:00 0
    2b2530900000-2b2530b3e000 rw-p 00000000 00:00 0
    2b2530c00000-2b2530d1f000 rw-p 00000000 00:00 0
    2b2531300000-2b2531a00000 rw-p 00000000 00:00 0
    2b2531b00000-2b2531c1f000 rw-p 00000000 00:00 0
    2b2531c1f000-2b2531c20000 ---p 00000000 00:00 0
    2b2531c20000-2b2531e20000 rw-p 00000000 00:00 0
    7ffe98bba000-7ffe98bdb000 rw-p 00000000 00:00 0 [stack]
    7ffe98be1000-7ffe98be3000 r--p 00000000 00:00 0 [vvar]
    7ffe98be3000-7ffe98be5000 r-xp 00000000 00:00 0 [vdso]
    ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]

    Just took some screenshots of the openfiles plugin. 12,000 files open :/ Gonna shut that down until it's fixed.

     

     

     

    OMBI.PNG

    OMBI2.PNG

    OMBI is keeping thousands of files open, which is leading to the system becoming unstable (running into the allowed open-files limit). Is this expected behaviour, or can it be fixed somehow in the future?

     

    I can provide logs if necessary.
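
    For reference, an equivalent check from the shell instead of the openfiles plugin (a sketch; the container name "ombi" is an assumption, check docker ps):

    # Resolve the container's main PID, then count its open file descriptors.
    pid=$(docker inspect --format '{{.State.Pid}}' ombi)
    ls /proc/$pid/fd | wc -l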

    I ran into a problem when trying to edit files that were renamed and placed into folders by the Dolphin or SABnzbd dockers...

    Some of these files have nobody as the owner:

    drwxrwxrwx 1 theuser1 users   30 Feb 18 16:12 Out\ of\ Sight/
    drwxr-xr-x 1 nobody   users   51 Mar  5 22:25 Pan's\ Labyrinth\ (2006)\ 1080p/
    drwxrwxrwx 1 theuser1 users   44 Mar  5 22:25 Papillon\ (1973)\ 1080p/
    drwxr-xr-x 1 nobody   users   53 Apr 16 16:03 Paprika\ (2006)/
    drwxrwxrwx 1 theuser2 users   72 Mar  6 00:06 Passengers\ (2016)/

    The way I edit these files is by mounting the share over SMB and logging in as theuser1. However, when I try to edit the files owned by nobody, I get a permission error. I see that people suggest using the New Permissions tool to fix permission issues, but I am reluctant to do so, as I would like the ownership to remain with the theuser1 account rather than be set to nobody...

     

    How can I prevent dockers from creating files with user nobody, or at least allow write permissions on these files? How can I fix the files that have already been affected by this?
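
    To illustrate what I mean, a one-off manual fix for a single affected folder would look something like this (a sketch; the path is an example based on the listing above), but I would rather prevent the problem in the first place:

    # Hypothetical one-off fix: re-own an affected folder to theuser1
    # so that SMB edits as theuser1 work again.
    chown -R theuser1:users "/mnt/user/Movies/Paprika (2006)"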

     

    Thanks in advance for your time...

     

     

  5. On 4/12/2017 at 10:48 PM, ljm42 said:

     

    For my part, I don't know how you are measuring iowait or how you determined CrashPlan was the problem.

     

    But... the service.log.0 you uploaded contains many lines that look like this:

    
    [04.12.17 10:46:41.721 WARN  Thread-127   ode42.jna.inotify.InotifyManager] Unable to add watch for path /mnt/user/...

    According to the link above, that means you need to increase your inotify watch value.

     

    Whether it will fix the problem you are trying to solve, I don't know :) but it will fix *a* problem.

     

    I suggested the Fix Common Problems plugin because it reduces a bunch of complex troubleshooting steps down to the click of a button.  I can't guarantee it will solve this problem, but it will almost certainly help.

     

    If you are still having problems after that you might need to contact Crashplan. If you find the solution, please let us know.

    I have increased the inotify value yet again and haven't had the locking-up behaviour since. Not sure if it solved the problem, but I haven't seen it for a while. Thanks for the suggestions.
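
    For reference, my understanding is that the plugin is just adjusting a sysctl, so the manual equivalent would be something like this (the value is only an example):

    # Show the current inotify watch limit, then raise it for the current boot.
    sysctl fs.inotify.max_user_watches
    sysctl -w fs.inotify.max_user_watches=1048576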

  6. 3 hours ago, Leifgg said:

    Firstly, I have been running CrashPlan for a bit more than two years, but I am unfortunately not able to give much help. Here are my thoughts…

     

    The CrashPlan Java app is developed and maintained by CrashPlan, and gfjardim (the maintainer of this Docker container) downloads the app and makes the adaptations needed for this Docker container. I would assume that any adaptation for multicore processing is best done by CrashPlan.

     

    Regarding the inotify problems, I can only say that I haven't seen them myself, and I have around 450,000 files (12 TB). From what I can see in your logs, a lot of the errors are related to Plex, and I don't have any of these errors in my logs (I use Plex as well).

     

    Here is a link that talks about inotify:

     

    https://support.code42.com/CrashPlan/4/Troubleshooting/Linux_Real-Time_File_Watching_Errors

     

    The only thought I have about this is whether it could be related to permissions (access rights) on the files in the appdata share?

     

    And BTW, some of the log files you provided contain (sensitive) personal information, so you might want to edit your post...

     

     

    Forgive me if I am talking nonsense, as I am not particularly well versed in this.

     

    In my post I mentioned that I see high iowait values, which I assume are related to a large number of requests to a storage drive. How does inotify relate to this issue?

     

    I have already doubled the inotify watch value from the original default (something around 500,000) through the Tips and Tweaks plugin.

  7. I am having several "issues" with the docker.

     

    Firstly, I would like to clarify whether CrashPlan is supposed to run on only a single core. I see it pinning whichever single core I allow it, then moving on to another, etc. Is there any way to allow for a multicore process?

     

    Secondly, once in a while UnRaid becomes unresponsive due to large iowait values. I have narrowed the issue down to the crashplan docker, but I have no idea where the crazy I/O requests are coming from. The problem is usually solved by restarting the docker. I have attached the docker logs.
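
    In case it helps with reproducing this, the kind of commands I would use to chase down the I/O source (a sketch; pidstat comes from the sysstat package, which may not be installed by default):

    # Per-process disk I/O, sampled every 5 seconds.
    pidstat -d 5
    # The BLOCK I/O column gives a rough per-container total (Ctrl-C to exit).
    docker stats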

     

    It would be great if someone could chime in on these issues...

    log (2).zip

  8. 22 hours ago, DoeBoye said:

    I installed it over the weekend and found I needed to increase "inotify max_user_watches". If you install the Tips and Tweaks app, there is a simple option in there to increase it from the default (if it doesn't state the default, click on the help button; I think it is 524288). The higher the number, the more RAM is going to be consumed. My understanding is that the default uses roughly 512MB of RAM.

    What number did you increase it to in order to get it to work? 524288 seems to be on the high end already...

  9. I am attempting to start the cAdvisor container downloaded through the Community Applications monitoring section. I haven't changed any settings for the docker from the default config. I get the following errors in the log:

     

    I0403 12:36:51.843171 1 storagedriver.go:50] Caching stats in memory for 2m0s
    I0403 12:36:51.843318 1 manager.go:143] cAdvisor running in container: "/docker/94b54043c31aca01be6c1bc9cea0d44c306aedf3e85cc65d00e80a0957834a3c"
    W0403 12:36:51.852249 1 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
    I0403 12:36:51.867089 1 fs.go:117] Filesystem partitions: map[/dev/sdb1:{mountpoint:/rootfs/mnt/cache major:0 minor:28 fsType:btrfs blockSize:0} /dev/loop1:{mountpoint:/rootfs/etc/libvirt major:0 minor:240 fsType:btrfs blockSize:0} /dev/loop0:{mountpoint:/var/lib/docker/btrfs major:0 minor:33 fsType:btrfs blockSize:0} /dev/md1:{mountpoint:/rootfs/mnt/disk1 major:9 minor:1 fsType:xfs blockSize:0} /dev/md2:{mountpoint:/rootfs/mnt/disk2 major:9 minor:2 fsType:xfs blockSize:0} /dev/md3:{mountpoint:/rootfs/mnt/disk3 major:9 minor:3 fsType:xfs blockSize:0} /dev/md4:{mountpoint:/rootfs/mnt/disk4 major:9 minor:4 fsType:xfs blockSize:0} /dev/md5:{mountpoint:/rootfs/mnt/disk5 major:9 minor:5 fsType:xfs blockSize:0}]
    I0403 12:36:51.870386 1 info.go:47] Couldn't collect info from any of the files in "/rootfs/etc/machine-id,/var/lib/dbus/machine-id"
    I0403 12:36:51.870445 1 manager.go:198] Machine: {NumCores:8 CpuFrequency:3900000 MemoryCapacity:16560410624 MachineID: SystemUUID:259D1280-2980-11E6-BA3C-D05099C162B1 BootID:98c37790-c644-4824-8346-fb14e7be7b3b Filesystems:[{Device:/dev/loop0 Capacity:21474836480 Type:vfs Inodes:0 HasInodes:true} {Device:/dev/md1 Capacity:3998833471488 Type:vfs Inodes:390701824 HasInodes:true} {Device:/dev/md2 Capacity:3998833471488 Type:vfs Inodes:390701824 HasInodes:true} {Device:/dev/md3 Capacity:3999321845760 Type:vfs Inodes:390701824 HasInodes:true} {Device:/dev/md4 Capacity:3998833471488 Type:vfs Inodes:390701824 HasInodes:true} {Device:/dev/md5 Capacity:3998833471488 Type:vfs Inodes:390701824 HasInodes:true} {Device:/dev/sdb1 Capacity:525112680448 Type:vfs Inodes:0 HasInodes:true} {Device:/dev/loop1 Capacity:1073741824 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[43:0:{Name:nbd0 Major:43 Minor:0 Size:0 Scheduler:none} 9:1:{Name:md1 Major:9 Minor:1 Size:4000786976768 Scheduler:none} 9:5:{Name:md5 Major:9 Minor:5 Size:4000786976768 Scheduler:none} 8:80:{Name:sdf Major:8 Minor:80 Size:4000787030016 Scheduler:noop} 9:2:{Name:md2 Major:9 Minor:2 Size:4000786976768 Scheduler:none} 43:1792:{Name:nbd14 Major:43 Minor:1792 Size:0 Scheduler:none} 43:1152:{Name:nbd9 Major:43 Minor:1152 Size:0 Scheduler:none} 8:32:{Name:sdc Major:8 Minor:32 Size:525112713216 Scheduler:deadline} 8:96:{Name:sdg Major:8 Minor:96 Size:4000787030016 Scheduler:noop} 43:128:{Name:nbd1 Major:43 Minor:128 Size:0 Scheduler:none} 43:256:{Name:nbd2 Major:43 Minor:256 Size:0 Scheduler:none} 8:48:{Name:sdd Major:8 Minor:48 Size:4000787030016 Scheduler:noop} 8:112:{Name:sdh Major:8 Minor:112 Size:4000787030016 Scheduler:noop} 9:4:{Name:md4 Major:9 Minor:4 Size:4000786976768 Scheduler:none} 43:1280:{Name:nbd10 Major:43 Minor:1280 Size:0 Scheduler:none} 43:384:{Name:nbd3 Major:43 Minor:384 Size:0 Scheduler:none} 43:512:{Name:nbd4 Major:43 Minor:512 Size:0 Scheduler:none} 43:896:{Name:nbd7 Major:43 Minor:896 Size:0 Scheduler:none} 43:1408:{Name:nbd11 Major:43 Minor:1408 Size:0 Scheduler:none} 43:1536:{Name:nbd12 Major:43 Minor:1536 Size:0 Scheduler:none} 43:1024:{Name:nbd8 Major:43 Minor:1024 Size:0 Scheduler:none} 8:16:{Name:sdb Major:8 Minor:16 Size:525112713216 Scheduler:deadline} 8:128:{Name:sdi Major:8 Minor:128 Size:4000787030016 Scheduler:noop} 43:1920:{Name:nbd15 Major:43 Minor:1920 Size:0 Scheduler:none} 43:640:{Name:nbd5 Major:43 Minor:640 Size:0 Scheduler:none} 43:768:{Name:nbd6 Major:43 Minor:768 Size:0 Scheduler:none} 8:64:{Name:sde Major:8 Minor:64 Size:4000787030016 Scheduler:noop} 9:3:{Name:md3 Major:9 Minor:3 Size:4000786976768 Scheduler:none} 43:1664:{Name:nbd13 Major:43 Minor:1664 Size:0 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:15693664256 Scheduler:noop}] NetworkDevices:[{Name:as0t0 MacAddress: Speed:10 Mtu:1500} {Name:as0t1 MacAddress: Speed:10 Mtu:1500} {Name:as0t10 MacAddress: Speed:10 Mtu:1500} {Name:as0t11 MacAddress: Speed:10 Mtu:1500} {Name:as0t12 MacAddress: Speed:10 Mtu:1500} {Name:as0t13 MacAddress: Speed:10 Mtu:1500} {Name:as0t14 MacAddress: Speed:10 Mtu:1500} {Name:as0t15 MacAddress: Speed:10 Mtu:1500} {Name:as0t2 MacAddress: Speed:10 Mtu:1500} {Name:as0t3 MacAddress: Speed:10 Mtu:1500} {Name:as0t4 MacAddress: Speed:10 Mtu:1500} {Name:as0t5 MacAddress: Speed:10 Mtu:1500} {Name:as0t6 MacAddress: Speed:10 Mtu:1500} {Name:as0t7 MacAddress: Speed:10 Mtu:1500} {Name:as0t8 MacAddress: Speed:10 Mtu:1500} {Name:as0t9 MacAddress: Speed:10 Mtu:1500} {Name:br0 MacAddress:d0:50:99:c1:62:b1 Speed:0 Mtu:1500} 
{Name:eth0 MacAddress:d0:50:99:c1:62:b1 Speed:1000 Mtu:1500} {Name:eth1 MacAddress:d0:50:99:c1:62:b2 Speed:0 Mtu:1500} {Name:gre0 MacAddress:00:00:00:00 Speed:0 Mtu:1476} {Name:gretap0 MacAddress:00:00:00:00:00:00 Speed:0 Mtu:1462} {Name:ip_vti0 MacAddress:00:00:00:00 Speed:0 Mtu:1364} {Name:tunl0 MacAddress:00:00:00:00 Speed:0 Mtu:1480} {Name:virbr0 MacAddress:52:54:00:a9:af:35 Speed:0 Mtu:1500} {Name:virbr0-nic MacAddress:52:54:00:a9:af:35 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:16560410624 Cores:[{Id:0 Threads:[0 4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1 5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[2 6] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[3 7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:8388608 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
    I0403 12:36:51.870963 1 manager.go:204] Version: {KernelVersion:4.9.10-unRAID ContainerOsVersion:Alpine Linux v3.4 DockerVersion:1.12.6 CadvisorVersion:v0.25.0 CadvisorRevision:17543be}
    
    I0403 12:36:51.882253 1 factory.go:309] Registering Docker factory
    W0403 12:36:51.882268 1 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
    
    I0403 12:36:51.882271 1 factory.go:54] Registering systemd factory
    I0403 12:36:51.889569 1 factory.go:86] Registering Raw factory
    I0403 12:36:51.890556 1 manager.go:1106] Started watching for new ooms in manager
    W0403 12:36:51.890663 1 manager.go:275] Could not configure a source for OOM detection, disabling OOM events: unable to find any kernel log file available from our set: [/var/log/kern.log /var/log/messages /var/log/syslog]
    I0403 12:36:51.890999 1 manager.go:288] Starting recovery of all containers
    I0403 12:36:51.936687 1 manager.go:293] Recovery completed
    F0403 12:36:51.936702 1 cadvisor.go:151] Failed to start container manager: inotify_add_watch /var/lib/docker/btrfs/subvolumes/955ed28cbda2db4d867294a5ab43737aca06ef0148e4b8922fa3de2d9e740daa/sys/fs/cgroup/blkio: no space left on device

    I checked, and my docker image is less than the specified max size (20G).

    Could someone help me out here? :)
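
    Edit: from what I have read, "no space left on device" from inotify_add_watch usually means the inotify watch limit is exhausted rather than the disk being full. A sketch for checking which processes hold inotify instances (instances, not individual watches) and what the current limit is:

    # Count inotify instances per PID (GNU find).
    find /proc/*/fd -lname 'anon_inode:inotify' -printf '%h\n' 2>/dev/null \
        | cut -d/ -f3 | sort | uniq -c | sort -rn | head
    # Current watch limit:
    sysctl fs.inotify.max_user_watches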

  10. 40 minutes ago, Squid said:

    Nothing in the ps report in the diagnostics outright jumps out at me as the offending process, so at this point you can either wait it out (hopefully it will complete) or, at the command prompt, run

    
    powerdown -r

    to hopefully restart the system.

     Is there any way to restart emhttp separately? I've looked around, but could not find anything.

     

    It's weird that emhttp is not even showing high usage. I was particularly concerned about the broken pipe message in the syslog that showed up around the time the GUI crashed.

    As mentioned in the title, I have recently encountered a weird problem where the web GUI becomes unresponsive (loading forever) while all other components work perfectly.

     

    This morning I connected to my router's VPN and tried to access the GUI; the /main page started loading but never loaded fully, and when I tried to switch to /dashboard the interface did not load at all. Switching from the router VPN to the openvpn-as docker VPN did not solve the issue, nor did using a different client. It seems this is not the first time this has happened to my server, so I was wondering if someone could provide some clarity on the issue. The diagnostics file is attached.

     

    I am running the latest stable version of UnRaid.

     

    Thanks in advance.

    tower-diagnostics-20170321-1009.zip

  12. On 1/23/2017 at 4:44 AM, Codeh said:

    I just started using this ruTorrent docker and I'm confused about a few things, mainly how to connect it to CouchPotato/SickBeard and how to set up authentication.

     

    After a few hours of tinkering I ended up using nginx basic auth and a virtual server. So far ruTorrent works amazingly!

    Could you be so kind as to describe your config...? I am hitting a wall here.

     

    I've got the basic pass-through in the site-confs file:

    location /rutorrent {
    	auth_basic "Restricted";
    	auth_basic_user_file /config/nginx/.htpasswd;
    	include /config/nginx/proxy.conf;
    	proxy_pass http://192.168.0.xxx:xxxx/rutorrent;
    }

    But I am getting a 404 error.
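
    One way to narrow down whether the 404 comes from nginx or from ruTorrent itself is to hit the backend directly (the IP and port are the same placeholders as in the block above):

    # If this also returns 404, the backend is not serving under /rutorrent
    # and the proxy_pass path needs adjusting.
    curl -I http://192.168.0.xxx:xxxx/rutorrent/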

    Hey guys,

     

    I am using mover to get everything off the cache drive before I replace it, but I am getting very slow copy speeds of 1-2 MB/s.

     

    Simultaneously, my docker/flash log is permanently showing 100%, and I am getting the following in the log file:

     

    Feb 2 20:25:24 Tower inotifywait[2891]: Couldn't watch new directory /mnt/disk2/appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle: No space left on device
    Feb 2 20:25:24 Tower root: cd+++++++++ appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle/
    Feb 2 20:25:24 Tower root: cd+++++++++ appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle/Contents/
    Feb 2 20:25:24 Tower root: cd+++++++++ appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle/Contents/Thumbnails/
    Feb 2 20:25:24 Tower root: >f+++++++++ appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle/Contents/Thumbnails/thumb1.jpg
    Feb 2 20:25:24 Tower root: .d..t...... appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle/Contents/Thumbnails/
    Feb 2 20:25:24 Tower root: .d..t...... appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle/Contents/
    Feb 2 20:25:24 Tower root: cd+++++++++ appdata/PMS-Docker/Library/Application Support/Plex Media Server/Media/localhost/5/1dd80a0084800b660b9223e9a5f5d1f2c4de78e.bundle/Contents/Art/
    Feb 2 20/usr/bin/tail: inotify resources exhausted
    /usr/bin/tail: inotify cannot be used, reverting to polling
    

     

    It would be great if someone could chime in on the situation and how to resolve it. Thanks!

  14. Hey guys, I am having the oddest issue lately...

     

    I set everything up as per the instructions and it was working perfectly. However, since midday today, while I was configuring a "front hub" page for Nginx/Letsencrypt, the NextCloud docker has been extremely slow to load. The login page loads after approximately a minute, often missing parts like the logo, and when I attempt to log in the website times out with a 504. All the other entries are working fine and fast. Here are my configs:

     

    letsencrypt/nginx/site-confs/default

    upstream backend {
    server local_ip:19999;
    keepalive 64;
    }
    
    server {
    listen 443 ssl default_server;
    listen 80 default_server;
    root /config/www;
    index index.html index.htm index.php;
    
    server_name _;
    
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    
    client_max_body_size 0;
    
    location /robots.txt {
    add_header Content-type text/plain;
    return 200 "User-agent: *\nDisallow: /\n";
    }
    
    location = / {
    	auth_basic "Restricted";
    	auth_basic_user_file /config/nginx/.htpasswd;
    	try_files $uri $uri/ /index.html /index.php?$args =404;
    }
    
    location /sonarr {
    	auth_basic "Restricted";
    	auth_basic_user_file /config/nginx/.htpasswd;
    	include /config/nginx/proxy.conf;
    	proxy_pass http://local_ip:8989/sonarr;
    }
    
    
    location /transmission {
    	include /config/nginx/proxy.conf;
    	proxy_pass http://local_ip:9091/transmission;
    }
    
    #PLEX
    location /web {
    	# serve the CSS code
    	proxy_pass http://local_ip:32400;
    }
    
    # Main /plex rewrite
    location /plex {
    	# proxy request to plex server
    	proxy_pass http://local_ip:32400/web;
    }
    
    location /nextcloud {
    	include /config/nginx/proxy.conf;
    	proxy_pass https://local_ip:444/nextcloud;
    }
    
    location /requests {
    	auth_basic "Restricted";
    	auth_basic_user_file /config/nginx/.htpasswd;
    	include /config/nginx/proxy.conf;
    	proxy_pass http://local_ip:3000/requests;
    }
    
    location ~ /netdata/(?<ndpath>.*) {
    	proxy_set_header X-Forwarded-Host $host;
    	proxy_set_header X-Forwarded-Server $host;
    	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    	proxy_pass http://backend/$ndpath$is_args$args;
    	proxy_http_version 1.1;
    	proxy_pass_request_headers on;
    	proxy_set_header Connection "keep-alive";
    	proxy_store off;
    }
    }

     

    letsencrypt/nginx/proxy.conf

    client_max_body_size 4000M;
    client_body_buffer_size 128k;
    
    #Timeout if the real server is dead
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    
    # Advanced Proxy Config
    send_timeout 5m;
    proxy_read_timeout 240;
    proxy_send_timeout 240;
    proxy_connect_timeout 240;
    
    # Basic Proxy Config
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect  http://  $scheme://;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;
    proxy_buffers 32 4k;
    
    

     

    /nextcloud/www/nextcloud/config/config.php

    <?php
    $CONFIG = array (
      'memcache.local' => '\\OC\\Memcache\\APCu',
      'datadirectory' => '/data',
      'instanceid' => 'ocsl2k5v7dsp',
      'passwordsalt' => 'BgfEwYx7QOC4P/73FmzHuoBb7Eb3ea',
      'secret' => 'lzdo7LGtrhzaN7d6yr4el8Sto+sKJa9F7jQA0r3rU4CfC7YH',
      'trusted_domains' => 
      array (
        0 => 'local_ip',
        1 => 'www.xxx.com',
      ),
      'trusted_proxies' =>
      array (
      0 => 'local_ip',
      ),
      'overwrite.cli.url' => '/nextcloud',
    #  'overwritehost' => 'xxx.duckdns.org',
    #  'overwriteprotocol' => 'https',
      'overwritewebroot' => '/nextcloud',
      'dbtype' => 'mysql',
      'version' => '11.0.0.10',
      'dbname' => 'nextcloud',
      'dbhost' => 'local_ip:3305',
      'dbport' => '',
      'dbtableprefix' => 'oc_',
      'dbuser' => 'oc_admin',
      'dbpassword' => 'OVNb8gG8Ta30pRiEQI0gpM8D8XoA0Q',
      'logtimezone' => 'UTC',
      'installed' => true,
    );
    
    

     

    Any help would be greatly appreciated.
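
    Edit: one test that might narrow this down is timing a request to the NextCloud backend directly, bypassing the proxy (a sketch; local_ip and port 444 are the placeholders from the config above, and -k skips verification of the backend's self-signed certificate):

    # If this is also slow, the problem is in NextCloud/MySQL
    # rather than in the nginx proxy layer.
    curl -k -o /dev/null -s -w 'total: %{time_total}s\n' https://local_ip:444/nextcloud/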

     
