Kopernikus

Members
  • Posts

    88
  • Joined

  • Last visited

About Kopernikus

  • Birthday 04/26/1979


  • Gender
    Male
  • Location
    Belgium


Kopernikus's Achievements

Rookie

Rookie (2/14)

4

Reputation

  1. Thx. There's no way to make this permanent? I can imagine I'm not the only person who accesses his Docker containers from another (V)LAN than the one where the Unraid server resides. Personally I like to segment my network into VLANs, aka "Trusted/Servers/IoT/Dockers/VM/Guests/Management etc..." For now I'm running the WireGuard connection inside the Docker container, and for the containers that don't have built-in support I forward them or use a proxy, but it would be better to assign them directly to the "wg?" interface.
  2. Why can I reach my other containers from my "trusted VLAN"? Only the ones assigned to wg1 are not reachable. Would I be able to fix this with a static route? Just like I did for my WireGuard tunnel (wg0): when I want to reach my network from outside my home, I can reach my Docker containers/VMs that are on a different VLAN.
  3. I did some more tests and found the issue. My Unraid runs on my untagged server VLAN; when I set my client to this same VLAN I am able to reach the Docker container. (I think) this is caused by the iptables rules added in the WireGuard config: they only allow traffic from my server VLAN, but of course I'm accessing the server from my trusted VLAN. For example, the docker container qbittorrentvpn lets you define the trusted networks so those are added as well.
  4. Hi, I upgraded to 6.10.0-rc5 to test out this new functionality. I'm using TorGuard as a commercial VPN, so I created a config file and imported it (which created wg1), and when I activate it, it seems to connect fine (I'm able to ping the peer endpoint). However, when I want to use this connection for a container, for example Firefox, I set the network type to custom: wg1, but as soon as the container is started I can't reach it anymore; I tried other containers with the same result. Any idea? @bonienl @ljm42 To be more complete: my Unraid runs untagged on my server VLAN and my containers/VMs run on their own VLAN. As a test I tried it with AirVPN, with the same result: the tunnel connects fine, but as soon as it is connected I can't reach the Docker container that uses it. Could it have something to do with the iptables rules that are set?
  5. @mgutt After doing some more research I've found that the problem must be inside the Docker container. Link to the report I filed: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/1982
  6. ssl_session_timeout 5m;
     ssl_session_cache shared:SSL:50m;
     # intermediate configuration. tweak to your needs.
     ssl_protocols TLSv1.2 TLSv1.3;
     ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA512:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA512:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';
     ssl_prefer_server_ciphers on;
     It's the same as the ssl-ciphers.conf already in the NPM docker, except insecure ciphers are removed and ssl_prefer_server_ciphers is set to on.
  7. Hi, For the headers this seems to be working, thx. However, for the TLS I found the solution but don't know how to implement it. I need to edit /etc/nginx/conf.d/include/ssl-ciphers.conf with a "tighter" SSL cipher list. Of course I could edit the file and save it to a new container, but then I would have to do this every time the container is updated. So I thought of mounting (overwriting) the file directly into the container, something like this: /etc/nginx/conf.d/include/ssl-ciphers.conf:/path/to/local/file/myown-custom-ssl-ciphers.conf I tried it through the mount path in Unraid, but that doesn't seem to work. Any idea? Ideally it would be good if we could set this as an option in /data/nginx/custom/ssl-ciphers.conf or through the UI of NPM.
  8. @mgutt Hi, I've been using this container for a few months now, and all is working fine. However, I wanted to run a public instance of the SearxNG metasearch docker. I set this up as usual in NPM and all seems to be working. However, to be allowed on the public instance list (https://searx.space/#) it needs to have an A+ TLS grade and an A+ HTML grade. For nginx the config would be: https://ssl-config.mozilla.org/ and for the HTML: https://github.com/searxng/searx-docker/blob/master/Caddyfile#L33-L84 Is this possible with Nginx Proxy Manager, or will this require a full Nginx docker?
  9. @xthursdayx Hi, I found an error in your SearxNG template. You have to change Container Path: /etc/searx to Container Path: /etc/searxng, otherwise "settings.yml" and "uwsgi.ini" will not be created. Thx
  10. Hi, I'm running this docker as a custom network on my VLAN, and all is working fine. However, when I change my port to 443 and enable the VPN I can't reach the web UI anymore. This happens because the iptables rules only allow the standard ports. Can this be changed? For example, I use binhex-qbittorrentvpn, and there the port is set to 443 in the iptables rules as well. A solution would be creating a WEBUI variable. A temporary fix is adding 443 to the additional ports, but then the default ports are also left open in the iptables rules. Thx
  11. Hi, Any plans for an nzbgetvpn version? I could of course route through another container, but I think a fresh connection is better? Thx
  12. Hi, First of all, thx to Binhex for his excellent Docker containers. I run most of my containers on a separate (Docker) VLAN. For example, with the qbittorrent docker I can access https://qbittorrent.mydomain.com, which points to ip:443, where certificates are installed for my domain. Now the problem with Sabnzbd: I can enable HTTPS and install the certificate, but I can't seem to be able to change the port from 8090 to 443. I know I can use nginx, and I use it for my external access, but internally I like to have a direct connection. Also, the HTTP port can't be saved; it reverts to the default 8080. Is there any variable I need to add? Or should I change it manually in a config file? It seems to be the same issue in the Linuxserver version of the container: https://github.com/linuxserver/docker-sabnzbd/issues/90 Thx
  13. Hi, I have an Unraid server running 7 x 8TB HDDs + 1 x 8TB HDD as parity. I also have two 1TB NVMe SSD cache drives in RAID1 for my Docker/VMs, and two 1TB SATA SSD cache drives that I use for my Downloads/Shares. I use the server for automatic media management (with the *arrs and hardlinks), some VMs, backups of our PCs, and some Docker containers. The problem I'm having now is that the Downloads/Shares cache pool fills up too fast, so I would like to upgrade. What would be the best setup?
     Leave the two 1TB NVMe drives in RAID1 for my Docker/VMs and replace the 1TB SATA drives with 2TB SATA drives? Or set the two 1TB SATA drives to RAID0, so I have 2TB but no redundancy?
     Alternative: leave the two RAID1 cache pools and add an extra SATA SSD or HDD just for the Downloads? Thx
  14. All working fine now; it seems it was a wrongly formatted PEM. Thx for the help!
  15. If I run the command I get:
     root@Unraid-Server:~# /usr/bin/openssl x509 -noout -subject -nameopt multiline -in /boot/config/ssl/certs/Unraid-Server_unraid_bundle.pem
     subject=
         countryName = GB
         stateOrProvinceName = Greater Manchester
         localityName = Salford
         organizationName = Sectigo Limited
         commonName = Sectigo RSA Domain Validation Secure Server CA
     root@Unraid-Server:~#
     What could be wrong? It's a wildcard cert that I use for Nginx, my UDM Pro, and my Piholes, all working fine.
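The iptables limitation diagnosed in posts 3 and 4 above can be sketched as a tunnel config. This is a hypothetical illustration, not the thread's actual config: the wg1 name, the subnets 192.168.10.0/24 (server VLAN) and 192.168.20.0/24 (trusted VLAN), and the endpoint are all placeholders, and the exact rules Unraid auto-generates may differ.

```ini
# Hypothetical wg1 tunnel config (all values are placeholders).
# The auto-generated rules typically only permit the server VLAN, e.g.:
#   PostUp = iptables -A INPUT -s 192.168.10.0/24 -j ACCEPT
# Adding an equivalent pair for the trusted VLAN is one way to let that
# subnet reach containers assigned to the tunnel:
[Interface]
PrivateKey = <private key>
Address = 10.2.0.2/32
PostUp = iptables -A INPUT -s 192.168.20.0/24 -j ACCEPT
PostDown = iptables -D INPUT -s 192.168.20.0/24 -j ACCEPT

[Peer]
PublicKey = <provider public key>
AllowedIPs = 0.0.0.0/0
Endpoint = vpn.example.com:51820
```

The PostDown line removes the rule again when the tunnel goes down, so repeated restarts don't accumulate duplicate rules.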
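For the ssl-ciphers.conf overlay discussed in posts 6 and 7 above, a read-only bind mount of a single file is one way to survive container updates. A sketch in docker-compose form, assuming the stock jc21/nginx-proxy-manager image and hypothetical host paths; in Unraid's template the same mapping would be a path entry whose container path is the file inside the container:

```yaml
# Sketch (paths are examples): overlay the container's bundled cipher
# list with a customised copy kept on the host. Mounted read-only, so
# image updates never clobber it; the host file must exist before the
# container starts, or Docker will create a directory in its place.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
    volumes:
      - /mnt/user/appdata/npm/data:/data
      - /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt
      - /mnt/user/appdata/npm/myown-custom-ssl-ciphers.conf:/etc/nginx/conf.d/include/ssl-ciphers.conf:ro
```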
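The symptom in post 15 above (the bundle's subject showing the Sectigo CA rather than the wildcard domain) fits the "wrongly formatted PEM" resolution in post 14: `openssl x509` only reads the first certificate in a file, and the leaf certificate should come first in a bundle. This sketch shows how to list every certificate in a bundle instead; the self-signed demo cert and /tmp paths are placeholders so the snippet is self-contained, not the thread's actual files.

```shell
# Generate a throwaway self-signed cert so the demo is self-contained
# (demo.example.com and the /tmp paths are placeholders).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo_bundle.pem \
  -subj "/CN=demo.example.com"

# `openssl x509` only shows the FIRST certificate in a bundle; this
# pipeline prints subject/issuer for EVERY certificate, making a
# wrong order (CA first, leaf later) immediately visible.
openssl crl2pkcs7 -nocrl -certfile /tmp/demo_bundle.pem \
  | openssl pkcs7 -print_certs -noout
```

If the first subject printed is a CA rather than your own (wildcard) domain, the bundle's certificate order needs fixing.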