Ryonez

Members
  • Content Count: 105
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About Ryonez
  • Rank: Advanced Member
  • Birthday: April 19

Converted
  • Gender: Male

  1. I found this: https://blog.linuxserver.io/2019/04/25/letsencrypt-nginx-starter-guide/#simplehtmlwebpagehosting and then I noticed the readme file in /nginx/proxy-confs. Both of these were created long after I first started using the container >.<

     As for creating the network, I used this: `docker network create [networkname]`. That, paired with using lowercase characters in the container names and switching the containers onto that network, brought container-name DNS resolution up. Then I changed unRAID's web ports and switched letsencrypt's to be exposed in their place. With pihole managing DNS for my network, I set up internal domain names pointing at unRAID's IP, which gave me internal resolution for the sites I wanted accessed internally. To secure those sites I added the following to the location blocks of the internal site confs: `allow [internal IP]/24; deny all;` (a sketch of these steps is included after this list).

     Now I have HTTPS with a valid cert that can only be accessed on my internal network, and external sites are still accessible as well. This is a massive improvement over what I had before. While I do control my network, it always sat wrong with me that unsecured traffic could be intercepted before. I hated it, especially considering the changes I'd have to make to some dockers to secure traffic that should be secured by default. Looking at you, LDAP, postgres and mariadb. I feel so much more comfortable now.

     The only two containers still outside the bridge are pihole and openvpn-as. I'd like to secure the web interface for pihole better than its default later. openvpn-as is on a self-signed cert, but I'm okay with it atm.
  2. Got it. I learned about Docker networks a bit better, and learned that this was happening just because the default network unRAID makes doesn't have DNS resolution on it.
  3. Jesus, this really makes it off-putting for people to ask for help. Isn't noticing I missed something, and asking if it was the issue, taking responsibility? *I noticed a mistake I made, and asked if that was causing silent failures.* I also outlined how and why I missed it, and offered an improvement that would make it harder to miss. I wasn't blaming anyone else for that in the slightest.

     You need to keep in mind you guys are good at this shit. I'm not dumb by any means, but I don't know everything, and something like this can be overwhelming, especially when you're trying to piece together several dockers. I'm happy to stop the conversation here, I just wanted to get that off my chest because of the way things started to turn. "We do our best to lower the bar" definitely felt like it would naturally follow on to "but if you're too dumb you're outta luck". Logically I know that's not what you mean, so don't worry, but emotionally it does feel like an attack, kinda.

     But, we think we have the issue. I'll look into it some other time, and if there's still a problem I'll pop back and provide stuff like the run commands and see if we can hammer it out. As it stands, I feel like the mistake I made is what's feked it up. Thank you very much for your help guys. Even if I got a little upset, I do appreciate the help a lot. I hope you have a good day!
  4. As mentioned in the section you quoted: gone, as in, I had like 5-6 places I was getting instructions from. Something that's normally used as a description field ended up being glossed over, especially as it didn't look like instructions among all the instructions being looked at. I had already identified that I missed it, why, and how it could've been prevented by adding it to the dockerhub instructions. Telling me it's in the template again doesn't help, if I'm to be frank.
  5. I followed the dockerhub instructions for the most part, which don't mention this limitation. And I must have glossed over the instructions in the template because it looked like a two-line description, not instructions like the ones in elasticsearch's template. After juggling three docker image setups, it was just gone. Mmm, please definitely add that to the dockerhub instructions, maybe under `Application Setup`? I wish this didn't fail silently as well, but I feel that's more an issue with how diskover is written than with the image itself. I'll keep that in mind and add it to future help requests. At the time it was asked of me, I had given up and removed the images completely. Maybe after I sort out network isolation with my docker containers I'll give this all another go.
  6. The template information that's filled in when using CA. Yes, the instructions for this are in two places. I don't even know what the full command is, and I've since removed the docker images after trying three full times to get it working from start to finish. Again, nothing is throwing an error. They all seem to function, but just not do anything. Just noticed though, the template I was looking at most was elasticsearch's. diskover had this:

     ```
     Elasticsearch is needed for this container. Use 5.6.x.
     ```

     I just used what was in the template, which seems to point to 6.6.2. Did this really silently fail because of this? (See the version-pinning sketch after this list.)
  7. I'm aware of that, I followed the instructions on dockerhub. Redis and Elasticsearch were set up, and there are no errors other than redis saying it's not going to have high performance on unRAID. It gets here:

     ```
     Once running the URL will be http://host-ip/ initial application spinup will take some time so please reload if you get an empty response.
     ```

     There's a response, as I mentioned above, but otherwise nothing happens. No info, nothing saying it's processing, nada. If it's failing because of this:

     ```
     If you simply want the application to work you can mount these to folders with 0777 permissions, otherwise you will need to create these users host level and set the folder ownership properly.
     ```

     then the documentation needs instructions for unRAID added; it's not in the template info (see the permissions sketch after this list). Also, again, it should error on permission issues, which it doesn't.
  8. Fair enough. It just doesn't seem to work on my end, and as mentioned there's no apparent reason why. There's no errors, and it doesn't seem to actually do anything. Without it being in the docs I'm also not sure what to expect, so I can't do anything with this. For now I'm going to use WizTree64. It's a tad easier, maybe I'll hit this up again sometime to see if I can get it to do something.
  9. Hi there! I'd like some assistance figuring out how to secure my docker network and some of the various services. There are some services I'd like to secure away from my home network, and possibly from other docker images. This includes services that only provide non-secure connections (such as plain HTTP, which instead gets run through a reverse proxy to secure it), or services that are just too hard to get secured (such as SSL on postgres, or LDAP, omg LDAP just fights so much). For example, I'd like to make something like this (a sketch of the network commands follows after this list):

     Network: br0
     Image | Access
     pihole | IP: LAN IP

     Network: Docker Bridge (Default Network)
     I probably won't even use this if I can get the other stuff going.

     Network: Docker Secure
     Image | Access
     Postgres | Internal only
     Mariadb | Internal only
     Keycloak | Internal only
     LDAP | Internal only
     searx | Internal only
     letsencrypt | Ports 80 and 443 exposed

     Network: Docker Test
     Image | Access
     Postgres | Port 5432
     searx | Internal only
     letsencrypt | Ports 80 and 443 exposed

     Network: host
     Image | Access
     Plex | Host

     Sadly I can't really look into VLANs atm. In terms of hardware, I have a FritzBox 7490 (seems to not support VLANs or work as a managed switch) and my server as listed in my sig. I don't have a spare network card, and I can't really earn money, so saving for hardware takes a really long time. Not complaining mind, but it limits what I can do, sorry.

     During tests I found some weird things I can't explain, like giving the letsencrypt docker its own IP stopped it from serving any site from the server, citing the gateway was bad. Also, I notice linuxserver's images use names to talk to other images a lot, however that doesn't work for me and I often have to replace the names with the server's LAN IP. Any help would be greatly appreciated!
  10. And still getting the same issue. Yup, noticed this very quickly, and it's something not mentioned in the documentation. I'm done with this. It shouldn't be hard, everything is there. Yet it just spits out `No diskover indices found in Elasticsearch. Please run a crawl and come back.` There are no indications of anything actually doing anything, no guides as to what to expect at the start. It just doesn't work, for no obvious reason. In my opinion right now, this needs to either be updated with better docs on dealing with issues and how to find out what's actually going wrong, or looked at for removal. And I'm not saying that lightly. I'm a firm supporter of linuxserver's work for the community, but this thing is just a mess.
  11. Came back to it after leaving it for a night, still no clue what's happening.
  12. How do you know if things are working? Atm I'm seeing this: My assumption is it needs to do an initial scan, but there are no proper indicators as to what's happening. Should I be worried that almost nothing shows on the dashboard?
  13. Are those of us on the stable release of unRAID missing any important updates with Dynamix System Statistics? You have it restricted to release candidates atm.
  14. Thankfully that was a horribly false alert. It seems I made the mistake of copying another folder that had those files missing instead. It was from an instance that broke, and I could never narrow down exactly what caused it. Checking the backups, I can confirm the correct folder was backed up safely, sorry for the trouble. Now I can relax with it back up >.<
  15. This plugin critically fails to back up poste, WITHOUT USER NOTIFICATION. Unfortunately, this has gutted my email system, as I deleted the appdata folder for it after a change didn't work and was going to restore from backups. As far as I can tell, files and folders with the mail group and mem user are what's missed (a sketch for enumerating these follows below):
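
Sketches for a few of the posts above follow. First, a minimal sketch of the steps described in post 1, assuming a network called `proxynet` and a 192.168.1.0/24 LAN; every name and subnet here is a placeholder rather than something from the original post:

```
# Create a user-defined bridge network. Unlike Docker's default bridge,
# user-defined networks get built-in DNS, so containers can reach each
# other by (lowercase) container name.
docker network create proxynet

# Attach existing containers to it (container names are examples).
docker network connect proxynet letsencrypt
docker network connect proxynet nextcloud

# To lock a site to the internal network, its proxy conf under
# /nginx/proxy-confs gets this inside the location block:
#
#   allow 192.168.1.0/24;   # permit the internal LAN only
#   deny all;               # refuse everyone else
```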
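
On the version mismatch in post 6: if diskover expects Elasticsearch 5.6.x but the template points at 6.6.2, the fix would be pinning the image tag. A hedged sketch; the exact 5.6.x tag below is an assumption:

```
# Pull a 5.6.x Elasticsearch rather than the template's 6.6.2 default.
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.16
# On unRAID this amounts to editing the container's "Repository" field
# to a 5.6.x tag and recreating the container.
```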
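
For the permissions workaround quoted in post 7, a rough sketch on unRAID; the appdata path is an assumption and should match whatever host folders are mapped into the diskover container:

```
# Open up the mapped folders so the container's internal users can
# write to them (the "simply want it to work" route from the docs).
chmod -R 0777 /mnt/user/appdata/diskover

# The stricter alternative: create matching users at the host level and
# hand them ownership instead, e.g.
# chown -R <user>:<group> /mnt/user/appdata/diskover
```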
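
One way to get the "internal only" behaviour asked about in post 9 is Docker's `--internal` flag. This sketch uses made-up network names and assumes the letsencrypt container already exists and publishes ports 80/443:

```
# An internal network has no route out of the host: containers on it
# can only talk to each other, by name.
docker network create --internal docker_secure

# A normal bridge carries the published ports.
docker network create docker_ingress

# letsencrypt sits on both: the bridge for ingress on 80/443, and the
# internal network so it can reach postgres, mariadb, etc. by name.
docker network connect docker_secure letsencrypt
docker network connect docker_ingress letsencrypt
```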
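
Finally, for the missed files in post 15, a hedged way to enumerate what the backup skipped; the poste appdata path is an assumption, and if the `mail`/`mem` names don't exist on the host, numeric IDs (`-gid`/`-uid`) would be needed instead:

```
# List everything under poste's appdata owned by the mail group or the
# mem user, i.e. the files the backup plugin missed.
find /mnt/user/appdata/poste -group mail -o -user mem
```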