tinynja98

Everything posted by tinynja98

  1. Alright, I finally got it to work!!! It only took 2 full days of messing around and a corrupt bzmodules file on the flash drive, but I got there haha. After much digging, I found a way to completely disable the routing function of my Fizz router/modem, and I am only using the TP-Link with OpenWRT now. I didn't know about the "NAT loopback" concept; it would make sense that this was the problem with the Fizz router. I'm also thinking it could have been a firewall issue in that router, as I did encounter connection problems with my work-related setup, and the solution ended up being something regarding its firewall settings (I don't know what they use, but it's a very locked-down environment, for security reasons I suppose). Anyway... I thank you again for the tips you've provided; it was very nice to have at least someone to brainstorm with about this issue. Wish you all the best!
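The "NAT loopback" idea mentioned above can be checked directly: from inside the LAN, resolve the public hostname and try to connect to the resulting public IP. If the router doesn't hairpin the connection back in, the attempt times out even though the same ports work fine from outside. A minimal Python sketch, assuming a placeholder domain `test.subdomain.org`:

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # timeout, refused, unreachable
        return False

if __name__ == "__main__":
    # Placeholder domain -- substitute your own dynamic-DNS name.
    domain = "test.subdomain.org"
    ip = socket.gethostbyname(domain)  # the public IP, as seen from the LAN
    for port in (80, 443):
        state = "reachable" if probe(ip, port) else "timed out / refused"
        print(f"{domain} ({ip}) port {port}: {state}")
```

Run it once from inside the LAN and once from an outside network: "reachable from outside, timed out from inside" is the classic missing-NAT-loopback signature.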
  2. Hey heyyy I just got my first *glimmer* of hope! If I understood correctly, it seems the problem is coming from my Fizz router. Let me explain how I came to this conclusion. My ISP is Fizz, so I have a Fizz router/modem (in a single unit). It sucks, so I'm using a TP-Link router "on top" of that Fizz router, and I have OpenWRT on that TP-Link router to give me access to a whole bunch of features I don't have with the Fizz router/modem. When I forward ports for my applications, I make sure to do it on both routers (as you saw, my NPM host worked with HTTP). So I decided to test each router individually as much as possible (even though I cannot completely remove the Fizz router/modem, because I need it for its "modem" component). Here are the tests I performed (please follow along with the attached diagram). Note that I made sure to forward the ports to the correct local IP every time, and I am still using the subdomain from freedns.afraid.org.
Scenario A is for reference: this is the setup I have been using up to this point, which yields successful access over HTTP but ERR_CONNECTION_TIMED_OUT over HTTPS.
Scenario B: my intent was to remove the TP-Link router from the equation by placing the NPM server right behind the Fizz router. This resulted in exactly the same outcome as scenario A.
Scenario C: this is where it gets interesting. My intent was to remove the Fizz router from the equation. I put my computer "outside" the TP-Link router, and I added a line in my /etc/hosts which redirects "test.subdomain.org" to the TP-Link router. This way (if I'm understanding things correctly) the request never goes through the Fizz router, and the domain name still matches the one in the certificate. And it finally worked! Successful access via HTTP and HTTPS, and the connection is secured.
All of this leads me to believe there is something wrong in the configuration of the Fizz router (please confirm my conclusions).
If this is right, the remaining question is what kind of setting I should be looking for in my Fizz router that would result in this sort of behavior. Since I think most people have no experience with Fizz routers, I've attached a PDF file with screenshots of every page of the admin panel. Thanks a lot!! Fizz Admin Panel.pdf
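Scenario C's /etc/hosts trick can also be reproduced without editing any file: handshake against a chosen IP while presenting the real hostname via SNI, so certificate validation still uses the domain name. A small Python sketch, where the domain and the TP-Link's LAN IP (192.168.1.2) are placeholders:

```python
import socket
import ssl

def tls_check(ip: str, hostname: str, port: int = 443, timeout: float = 5.0) -> str:
    """Handshake with ip:port while presenting `hostname` via SNI, verifying
    the served certificate against the system trust store. Returns the
    negotiated TLS version on success; raises on any failure."""
    ctx = ssl.create_default_context()  # enables cert + hostname verification
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version()

if __name__ == "__main__":
    # Placeholders: the certificate's domain and the inner router's LAN IP.
    print(tls_check("192.168.1.2", "test.subdomain.org"))
```

This is the same check `curl --resolve test.subdomain.org:443:192.168.1.2 https://test.subdomain.org` performs: the request goes straight to the target IP while the certificate's hostname still matches.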
  3. Hey mgutt, thanks a lot for your quick response! My bad... I forgot to mention I only followed the NPM GUI part of the explanation, didn't touch any config file. Indeed! I created a custom bridge network so I can use the hostname instead of the IP, because I noticed it changes every time I start it up. However, I have no idea if I am using docker compose; I don't know what this is, and I don't remember taking specific actions in order to use docker compose. So I followed your suggestion and I am now using another subdomain I got from freedns.afraid.org, just to remove Cloudflare completely from the equation. Let's call it "test.subdomain.org". Also, I'm setting this up for another docker container (Deluge), since I feel like it is a "simpler" webserver, just for testing things out. And I also reinstalled NPM to restart with a blank slate. So when I set up a host in NPM as shown in the attached picture, with a newly generated Let's Encrypt SSL certificate, I am able to access Deluge from the internet when typing "http://test.subdomain.org", but I still get ERR_CONNECTION_TIMED_OUT when I try to access it with "https://test.subdomain.org". Again, I verified whether ports 80 (TCP) and 443 (TCP) are open with canyouseeme.org, and both are indeed open. Just to see if another webserver was using those ports, I stopped the NPM container, and sure enough both ports became closed. If this can be of any help, I also attached the various nginx config files associated with this host (I didn't modify any of these manually). What do you think could be causing this timeout issue? Thanks again for your help! EDIT: If I use a local subdomain (say deluge.lan) instead of "test.subdomain.org", I can access Deluge with HTTP, and I can also access it with HTTPS, but I get the "Your connection is not private" message from Chrome, so I have to click "Show advanced settings" and "Proceed to deluge.lan (unsafe)".
proxy.conf block-exploits.conf ssl-ciphers.conf letsencrypt-acme-challenge.conf 1.conf
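A timeout on 443 even though canyouseeme.org reports the port open usually means the TCP connection is accepted somewhere but nothing ever answers the TLS handshake. A staged probe separates the possibilities — DNS, then TCP, then TLS. A Python sketch (the hostname is a placeholder for whatever you're testing):

```python
import socket
import ssl

def diagnose(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Staged probe: DNS, then TCP connect, then TLS handshake.
    Returns a verdict describing the first stage that fails."""
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return "DNS resolution failed"
    try:
        sock = socket.create_connection((ip, port), timeout=timeout)
    except OSError:
        return f"TCP connect to {ip}:{port} failed"
    try:
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return f"TLS {tls.version()} established with {ip}"
    except (ssl.SSLError, OSError):
        return f"TCP reached {ip}:{port} but the TLS handshake failed"
    finally:
        sock.close()

if __name__ == "__main__":
    print(diagnose("test.subdomain.org"))  # placeholder domain
```

Run from outside the LAN: "TCP reached ... but the TLS handshake failed" points at the proxy or certificate, while a plain TCP failure points at port forwarding on one of the routers.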
  4. Hello there! I've been at this for a little more than 10 hours straight now, but I couldn't for the life of me set up Nextcloud with HTTPS via NginxProxyManager, so I figured I would try to ask for some help over here as a last resort. So here's where I'm at... I'm able to access my Nextcloud server from the internet by entering my domain name "example.org" when I configure the Proxy Host in NPM to use HTTP. See the attached image below for the NPM configuration I used and Nextcloud's config.php file. Here are some of the things I tried to get it to work through HTTPS (please don't judge if you see some nonsense here, I don't know what I'm doing):
  • Opened ports 80 and 443 on my router, and verified with http://canyouseeme.org that they are indeed both open.
  • As per this video, I kept the NPM scheme as http and the port as 80, but created an SSL certificate with Let's Encrypt from the SSL tab and ticked "Force SSL", "HTTP/2 Support", and "HSTS Enabled". This results in a timeout.
  • Tried all combinations of NPM scheme http/https and port 80/443, because why not. Same result: timeout.
  • Created a Cloudflare account, used it as a nameserver for my domain instead, created a CNAME record for my domain, and enabled the proxy option. Then I went back to the "HTTP:80:noSSL" NPM configuration that I mentioned earlier. Now I can connect to my Nextcloud server, and I do get the "Connection is secured" lock icon in my Chrome browser. However, if I block port 443 on my router, I can still access my Nextcloud server, and I still get the "Connection is secured" lock icon. If I close port 80 and keep 443 open, I can no longer access it (timeout), so I very much doubt that this "connection is secured".
  • Then I thought maybe I needed to put my own key and certificate on the Nextcloud server (under appdata/nextcloud/keys); that didn't change anything, still timeout.
  • Also tried to download the SSL/TLS client certificates from my Cloudflare dashboard and added that key and certificate as a custom SSL certificate in NPM, using it instead of the Let's Encrypt auto-generated certificate. Same result: timeout.
I probably tried a bunch of other stuff as well, but I think this is a good starting point. Surely I'm missing something here, but I have no idea what, and I'm really out of things to try at this point. It would be immensely appreciated if someone could point me in the right direction to get Nextcloud to work with HTTPS via NginxProxyManager. Thanks a lot! config_http.php
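One way to see why the lock icon survives a blocked port 443 is to check who actually serves the certificate: with the Cloudflare proxy enabled, the browser's TLS session terminates at Cloudflare's edge, not at your NPM box, so the padlock says nothing about your own port 443. A sketch that prints the issuer of the served certificate (the domain is a placeholder):

```python
import socket
import ssl

def issuer_org(cert: dict) -> str:
    """Extract the issuer's organizationName from an ssl getpeercert() dict."""
    issuer = dict(pair for rdn in cert.get("issuer", ()) for pair in rdn)
    return issuer.get("organizationName", "unknown")

def served_cert_issuer(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Connect to host:port and report who issued the certificate it serves."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_org(tls.getpeercert())

if __name__ == "__main__":
    # Placeholder domain. An issuer like "Cloudflare, Inc." means Cloudflare's
    # edge answered; "Let's Encrypt" means the connection reached the
    # certificate NPM obtained for you.
    print(served_cert_issuer("example.org"))
```

The same information is visible in the browser by clicking the padlock and viewing the certificate's issuer.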
  5. Well, it seems my motherboard (Asus P5K) recognizes the USB stick as sort of a hard drive, because it places it in the hard disk drives group when I go into the boot options... So I guess unRAID assumes my USB stick is a hard drive too? Do you have anything to suggest to fix this? EDIT: Is there any way I can boot from a certain drive and use this same drive as a cache device in unRAID?
  6. Hello, I want to test unRAID Server, so I have the trial version installed on my personal USB thumb drive. On my server, I have 2 PATA drives (I want both to be in the array) and 1 SATA drive (I want this to be a cache drive). When I boot up the unRAID server, it says that I cannot start the array because there are too many drives connected, but I have only those 3 internal drives and the USB stick used for booting unRAID connected to the PC. The unRAID website does state that the trial version can support up to 3 drives, which is what I have in my PC :\ What am I doing wrong?