Everything posted by dalben

  1. No. I deleted the old one and set it all up from scratch. I've tried reinstalling a couple of other times too. I'll keep sniffing around and see if I can spot anything.
  2. Is anyone seeing huge CPU usage with this container? With no cameras connected, Zoneminder was pushing 2 of my 4 i5-2500 cores to around 80%. Stopping the container brings the CPU back to low single digits idle.
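To compare notes on this, a quick way to pin down which container is burning CPU is `docker stats`. A minimal sketch — the 50% threshold is arbitrary, the `high_cpu` helper is hypothetical, and the awk fields assume the default `docker stats --no-stream` column layout (CONTAINER ID, NAME, CPU %, ...):

```shell
# Hypothetical helper: flag containers above 50% CPU in
# `docker stats --no-stream` output (default column layout assumed).
high_cpu() {
  awk 'NR>1 { cpu=$3; sub(/%/,"",cpu); if (cpu+0 > 50) print $2, $3 }'
}

# Only run the live check when docker is actually available.
if command -v docker >/dev/null 2>&1; then
  docker stats --no-stream | high_cpu
fi
```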
  3. Does no one else use this combo of containers and see this problem? Just me and one other person so far. I'm surprised, as they are both popular containers.
  4. Reposting this from the nginx thread, where it didn't get any answers. I'm not sure where the problem is: when the nginx docker stops and restarts (nightly backup, etc.), the port forwarding from my router (a USG managed by the LSIO Unifi docker) no longer works. I have to restart the Unifi container/controller for the port forwarding to work again. Does anyone else run this combo of containers and see the same problem?
  5. I have an issue where, when this docker stops and restarts (nightly backup, etc.), the port forwarding from my router (a USG managed by the LSIO Unifi docker) no longer works. I have to restart the Unifi container/controller for the port forwarding to work again. Does anyone else run this combo of containers and see the same problem?
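Until the root cause turns up, one workaround is a User Scripts job that bounces the Unifi controller after the nightly backup window. A sketch only — the container name `unifi` and the 04:00 schedule are assumptions, and the `restart_controller` helper is hypothetical:

```shell
# Hypothetical workaround: restart the Unifi controller after the nightly
# backup so the USG re-provisions its port forwards.
# CONTAINER is an assumption - match it to your container name.
CONTAINER=unifi

restart_controller() {
  # Guarded so the sketch is a no-op on machines without docker.
  if command -v docker >/dev/null 2>&1; then
    docker restart "$1"
  else
    echo "skipped: docker not available"
  fi
}

restart_controller "$CONTAINER"
# Schedule via the User Scripts plugin with a custom cron entry, e.g.
#   0 4 * * *   (daily at 04:00, after the backup window)
```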
  6. Is it possible to add the unRAID GUI behind this letsencrypt/nginx reverse proxy? I've tried, but the formatting is all out of whack. I'd also need to turn off restricted access and rely on the unRAID webGui for authentication.
  7. Is it possible to get the unRAID GUI working through this letsencrypt/nginx reverse proxy? I tried, but there were some pretty bad formatting errors that made it unusable.
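For anyone digging into this: the unRAID webGui streams its live updates over websockets, and missing websocket upgrade headers are a common cause of this kind of broken rendering behind an nginx proxy. A hedged sketch of a location block, assuming the tower is at 192.168.1.10 with SSL on 443 (all values here are assumptions, not a tested config):

```nginx
# Sketch only - IP, port, and header set are assumptions.
location / {
    proxy_pass https://192.168.1.10:443;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # The unRAID GUI pushes live updates over websockets; without these
    # upgrade headers the page can render broken or stop updating.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

Even with this, GUI authentication still has to come from unRAID itself, as noted above.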
  8. Thanks, changing localhost to 0.0.0.0 worked fine. Looking at my logs, up until Dec 31 2017 my container was using 0.0.0.0; from Jan 1 2018 onwards it was loading with localhost. I can only assume it was a change in a docker update.
  9. I came home to find the server had hung. Power cycled it and it came back with no problem. Started the letsencrypt docker so I could see the logs again and, FMD, letsencrypt started fine, pulled the certs and installed everything. I just checked from my phone and the site is accessible. I have no idea why it didn't work before or why it works now; nothing has changed. But I now remember why "have you tried restarting your machine" was the first question support people asked. Thanks, and sorry for wasting your time.
  10. Two logs attached: one fresh start after zapping the container, one after a restart (in case it makes a difference). Also a couple of screenshots showing access to the nginx container from the internet. log_new.txt log_restart.txt
  11. Correct. When I have the standalone nginx docker running, it gives me the default web pages whether I use http or https. This is consistent whether I'm on the intranet or coming in from the internet on my phone. I stop that docker and start the letsencrypt docker (same ports being used, so no router or DNS changes) and I get nothing. Again, this is consistent whether intranet or internet. I've killed the container a couple of times and recreated it, but the symptoms are the same.
  12. OK, done. But https://192.168.1.10:7443/ gives me nothing. Here's the run command:

      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
        --name='br-letsencrypt' --net='bridge' --privileged=true \
        -e TZ="Asia/Singapore" -e HOST_OS="unRAID" \
        -e 'EMAIL'='[email protected]' -e 'URL'='mydomain.com' \
        -e 'SUBDOMAINS'='www' -e 'ONLY_SUBDOMAINS'='false' \
        -e 'DHLEVEL'='2048' -e 'VALIDATION'='http' -e 'DNSPLUGIN'='' \
        -e 'PUID'='99' -e 'PGID'='100' \
        -p '81:80/tcp' -p '7443:443/tcp' \
        -v '/mnt/cache/appdata/letsencrypt':'/config':'rw' \
        'linuxserver/letsencrypt'

      I have also confirmed my port forwarding / DNS settings are fine: installing the plain nginx docker from LSIO, I can access the default site it serves via 80 and 443.
  13. Quick question before I get too detailed. If I install the docker and start it, should I at least get some sort of page when I go to https://192.168.1.10 and/or http://192.168.1.10:81? I have forwarded router port 443 to host port 443 and router port 80 to host port 81; host 81 maps to the container's 80 and host 443 to the container's 443. I am getting errors in the log file about not being able to get validation data etc., but before I delve into that I just want to make sure that nginx and the port forwarding work at least internally before looking at the outside world. This was originally set up with its own IP address, but it's now back on the server's IP in case that was a prerequisite.
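A quick way to answer the "should I get a page at all" question is to probe both mapped ports with curl from another machine on the LAN. A sketch, assuming the host IP and ports from the post; the `explain_curl` helper is hypothetical, just mapping a few common curl exit codes to hints:

```shell
# Hypothetical helper: translate common curl exit codes into a hint.
explain_curl() {
  case "$1" in
    0)  echo "got a response - nginx is answering" ;;
    7)  echo "connection refused - nothing listening on that port" ;;
    28) echo "timed out - likely a firewall or port-forward problem" ;;
    *)  echo "curl exit $1 - see EXIT CODES in 'man curl'" ;;
  esac
}

# Probe the two mapped ports (host IP and ports assumed from the post).
for url in http://192.168.1.10:81 https://192.168.1.10:7443; do
  curl -ksS -o /dev/null --max-time 5 "$url" 2>/dev/null
  echo "$url: $(explain_curl $?)"
done
```

If both probes answer from inside the LAN, the remaining suspects are the router's port forwards and external DNS, not the container.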
  14. Tried the docker; maybe worse than the plugin.
      Docker: Download 576.49 Mbit/s, Upload 95.03 Mbit/s
      Plugin: Download 385 Mbit/s, Upload 180 Mbit/s
      PC: Download 753 Mbit/s, Upload 897 Mbit/s
  15. Looking into my random disconnects, I found a very old IP address that keeps being referenced no matter how many times I try to kill it. It seems to stem from the container somehow. Here are some screenshots. Note: the correct IP should be 192.168.1.10. The unRAID docker config page shows the WebUI setting. After saving, I go to the main docker page and hover over the WebUI option; looking at the address at the bottom of the browser, it wants to go to 192.168.1.240... So I changed the docker WebUI setting to a fixed IP, saved, then looked at the WebUI address again and still got .240. SSHing into the devices, I found the .240 address in the /cfg/mgmt file. Removing it works fine for a while, but it returns after a docker restart (I suspect). Can anyone tell me where this .240 is coming from and how I can remove it?
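When a stale inform address survives in /cfg/mgmt like this, the usual fix is to point each device back at the controller with `set-inform` over ssh. A sketch — the `inform_url` helper and the controller IP are my assumptions, and 8080 is the controller's default inform port:

```shell
# Hypothetical helper: build the inform URL for a given controller IP.
inform_url() { echo "http://$1:8080/inform"; }

# On each affected device, ssh in with the device credentials and
# point it back at the controller, e.g.:
#   ssh <admin-user>@<device-ip>
#   set-inform "$(inform_url 192.168.1.10)"
# then let the controller re-provision the device so the stale
# address stops being rewritten into /cfg/mgmt.
inform_url 192.168.1.10
```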
  16. Haven't tried yet. Will give it a shot during the week and post the results.
  17. My connection is 1Gb/1Gb. No, this is plugin related. Here is a test I did just now. Plugin: 285 down, 146 up; PC client results are in the attached screenshot. And here's a view of the daily tests that run at 03:30am.
  18. This morning I woke to find my 4 devices (1 x USG, 3 x AP-AC) intermittently disconnecting, then after a while reconnecting, then disconnecting again. Repeat. Not all at the same time, just whenever they feel like it, of course. Before I start set-informing, re-adopting, resetting, etc., has anyone seen this behaviour before in what was a perfectly stable, functioning Unifi setup? Controller software is 5.6.36. I've stopped and started the container a few times as well as restarted the devices, but the behaviour continues.
  19. No, I have a 1Gb connection and the speeds shown by the plugin just aren't reliable; I get 200-300 down. But now that it's a known limitation, it's not a problem.
  20. For people with the Kodi/NZBGet/Medusa/Radarr etc. ecosystem, has anyone worked out what can sit on a separate docker IP and what will be tough? I'm assuming anything that needs SMB access will have problems, i.e. headless Kodi. The other tools like NZBGet, Medusa, Sonarr, Radarr, Transmission, MariaDB, etc. should be OK, because they either talk to each other via ip:port and/or read/write to the unRAID server via mounts inside the container. Is that the correct understanding?
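For reference, that understanding matches how a custom macvlan network behaves. A hedged sketch of the setup involved — subnet, gateway, parent interface, and names are all assumptions to adjust for your LAN:

```shell
# Sketch: a macvlan network so selected containers get their own LAN IPs.
# All values here are assumptions - match them to your LAN.
if command -v docker >/dev/null 2>&1; then
  docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=br0 containers-lan

  # Containers on this network talk to each other over ip:port as usual:
  docker run -d --network=containers-lan --ip=192.168.1.50 \
    linuxserver/radarr
fi
# Caveat: by default macvlan blocks traffic between a container and its
# own host, which is exactly why anything needing SMB back to the unRAID
# host (e.g. headless Kodi) is the hard case.
```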
  21. Ken-ji had written up how to do this very thing using VLANs, but it seems the last update to unRAID trashes any custom docker network settings you may have created. So it's either a script to recreate it all again before launching the containers, or hacking away at some unRAID scripts to disable the removal of docker networks. I really don't know the details, as it all became a bit too hard for me. Annoying, as I went and bought a second NIC and a smart switch to handle the VLAN, but such is life.
  22. Sounds all a bit too much for me. I’ll have to live with the minor annoyances I have until this becomes child proof enough that I can drive it. Thanks for the previous help Ken-ji. Much appreciated.
  23. So the ability to have a container with its own IP still accessing the host via a separate VLAN is now crippled? That sucks.