Everything posted by Ryonez

  1. My current codimd instance is broken. Looking in the container, no such file is there:
  2. Running my small server would be so much more of a hassle if it weren't for you guys. Thank you for creating something that just works and lets me focus on other things.
  3. I found this: https://blog.linuxserver.io/2019/04/25/letsencrypt-nginx-starter-guide/#simplehtmlwebpagehosting And then I noticed the readme file in /nginx/proxy-confs. Both of these were created long after I first started using the container >.<

     As for creating the network, I used this: `docker network create [networkname]` That, paired with using lowercase characters in the container names and switching the containers onto that network, brought container-name DNS resolution up. Then I changed unRAID's web ports and switched letsencrypt's to be exposed in their place. With pihole managing DNS for my network, I set up internal domain names pointing at unRAID's IP. This gave me internal site resolution for the sites I wanted accessed internally. To secure those sites I added the following to the location blocks of the internal site confs: `allow [internal IP]/24; deny all;`

     Now I have https with a valid cert that can only be accessed on my internal network, and external sites are still accessible as well. This is a massive improvement over what I had before. While I do control my network, it always sat wrong with me that unsecured traffic could be intercepted before. I hated it, especially considering the changes I'd have to make to some dockers to secure traffic that should be secured by default. Looking at you, LDAP, postgres and mariadb. I feel so much more comfortable now.

     The only two containers still outside the bridge are pihole and openvpn-as. I'd like to secure pihole's web interface better than its default later. openvpn-as uses a self-signed cert, but I'm okay with that atm.
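     For anyone wanting the same setup, the network part boils down to something like this (the network and container names here are just examples):
     ```
     # Create a user-defined bridge network; unlike unRAID's default
     # bridge, it gives containers DNS resolution by name.
     docker network create proxynet

     # Attach the containers that need to talk to each other.
     docker network connect proxynet letsencrypt
     docker network connect proxynet codimd

     # Verify name resolution from inside the proxy container.
     docker exec letsencrypt ping -c 1 codimd
     ```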
  4. Got it. Learned about docker networks a bit better, and learned that this was just because the default network unRAID makes doesn't have DNS resolution on it.
  5. Jesus, this really makes it off-putting for people to ask for help. Isn't noticing I missed something, and asking if it was the issue, taking responsibility? *I noticed a mistake I made, and asked if it was causing the silent failures.* I also outlined how and why I missed it, and offered an improvement that would make it harder to miss. I wasn't blaming anyone else for it in the slightest.

     You need to keep in mind you guys are good at this shit. I'm not dumb by any means, but I don't know everything, and something like this can be overwhelming, especially when you're trying to piece together several dockers. I'm happy to stop the conversation here, I just want to get that off my chest because of the way things started to turn. "we do our best to lower the bar" definitely felt like it would naturally follow on to "but if you're too dumb you're outta luck". Logically I know that's not what you mean, so don't worry, but emotionally it does feel kinda like an attack.

     But, we think we have the issue. I'll look into it some other time, and if there's still a problem I'll pop back and provide stuff like the run commands and see if we can hammer it out. As it stands, I feel like the mistake I made is what's messed it up. Thank you very much for your help guys. Even if I got a little upset, I do appreciate the help a lot. I hope you have a good day!
  6. As mentioned in the section you quoted: gone, as in, I had like 5-6 places I was getting instructions from. Something that's normally used as a description field ended up being glossed over, especially as it didn't look like instructions among all the instructions being looked at. I had already identified that I missed it, and why, and how it could've been prevented by adding it to the dockerhub instructions. Telling me it's in the template again doesn't help, if I'm to be frank.
  7. I followed the dockerhub instructions for the most part, which don't mention this limitation. And I must have glossed over the instructions in the template because they looked like a two-line description, not instructions like the ones in elasticsearch's template. After juggling three docker image setups, it was just gone. Mmm, please definitely add that to the dockerhub instructions, maybe under `Application Setup`? I wish this didn't fail silently as well, but I feel that's more an issue with how diskover is written than with the image itself. I'll keep that in mind and add it to future help requests. At the time it was asked of me, I had given up and removed the images completely. Maybe after I sort out network isolation with my docker containers I'll give this all another go.
  8. The template information that's filled in when using CA. Yes, the instructions for this are in two places. I don't even know what the full command is, and I've since removed the docker images after trying three full times to get it working from start to finish. Again, nothing is throwing an error. They all seem to function, but just not do anything. Just noticed though, the template I was looking at most was elasticsearch's. diskover had this: ``` Elasticsearch is needed for this container. Use 5.6.x. ``` I just used what was in the template, which seems to point to 6.6.2. Did this really silently fail because of this?
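     If anyone wants to double-check which version their Elasticsearch container is actually running, a quick probe like this should do it (host and port are whatever your setup uses):
     ```
     # Elasticsearch reports its version on the root endpoint;
     # diskover wants the 5.6.x line.
     curl -s http://192.168.1.10:9200 | grep '"number"'
     ```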
  9. I'm aware of that, I followed the instructions on dockerhub. Redis and Elasticsearch were set up, and there are no errors other than redis saying it won't have high performance on unRAID. It gets here: ``` Once running the URL will be http://host-ip/ initial application spinup will take some time so please reload if you get an empty response. ``` There's the response I mentioned above, but otherwise nothing happens. No info, nothing saying it's processing, nada. If it's failing because of this: ``` If you simply want the application to work you can mount these to folders with 0777 permissions, otherwise you will need to create these users host level and set the folder ownership properly. ``` then the documentation needs instructions for unRAID added. It's not in the template info. And again, it should error on perm issues, which it doesn't.
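     For reference, the 0777 workaround the docs hint at is roughly this on unRAID (the appdata path is an assumption based on the usual layout):
     ```
     # Quick-and-dirty: world-writable folders so the container's
     # internal users can write. Creating matching users host-side
     # and chown-ing the folders is the cleaner route.
     mkdir -p /mnt/user/appdata/diskover
     chmod -R 0777 /mnt/user/appdata/diskover
     ```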
  10. Fair enough. It just doesn't seem to work on my end, and as mentioned there's no apparent reason why. There's no errors, and it doesn't seem to actually do anything. Without it being in the docs I'm also not sure what to expect, so I can't do anything with this. For now I'm going to use WizTree64. It's a tad easier, maybe I'll hit this up again sometime to see if I can get it to do something.
  11. Hi there! I'd like some assistance figuring out how to secure my docker network and some of the various services. There are some services I'd like to secure away from my home network, and possibly from other docker images. This includes services that only provide insecure connections (such as http, which instead get run through a reverse proxy to secure them), or services that are just too hard to get secured (such as ssl on postgres, or LDAP, omg LDAP just fights so much). For example, I'd like to make something like this:

     Network: br0
     Image | Access
     pihole | IP: LAN IP

     Network: Docker Bridge (default network)
     I probably won't even use this if I can get the other stuff going.

     Network: Docker Secure
     Image | Access
     Postgres | Internal only
     Mariadb | Internal only
     Keycloak | Internal only
     LDAP | Internal only
     searx | Internal only
     letsencrypt | Ports 80 and 443 exposed

     Network: Docker Test
     Image | Access
     Postgres | Port 5432
     searx | Internal only
     letsencrypt | Ports 80 and 443 exposed

     Network: host
     Image | Access
     Plex | Host

     Sadly I can't really look into vlans atm. In terms of HW, I have a FritzBox 7490 (seems to not support vlan or work as a managed switch) and my server as listed in my sig. I don't have a spare network card, and I can't really earn money, so saving for hardware takes a really long time. Not complaining mind, but it limits what I can do, sorry.

     During tests I found some weird things I can't explain, like giving the letsencrypt docker its own IP stopped it from serving any site from the server, citing the gateway was bad. Also, I notice linuxserver's images use names to talk to other images a lot, however that doesn't work for me and I often have to replace the names with the server's LAN IP. Any help would be greatly appreciated!
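     From what I've read so far, I think the layout above would translate to something like this — a sketch only, the names are made up, and I haven't tested how --internal plays with unRAID's networking:
     ```
     # An --internal network has no route out, so backends on it are
     # unreachable from the LAN and can't reach the internet either.
     docker network create --internal docker-secure
     docker network create docker-test

     # Backends join the internal network with no published ports.
     docker run -d --name postgres --network docker-secure postgres

     # The proxy publishes 80/443 via the normal network, then gets
     # dual-homed so it can reach the backends by container name.
     docker run -d --name letsencrypt -p 80:80 -p 443:443 \
         --network docker-test linuxserver/letsencrypt
     docker network connect docker-secure letsencrypt
     ```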
  12. And still getting the same issue. Yup, noticed this very quickly, and it's something not mentioned in the documentation. I'm done with this. It shouldn't be hard, everything is there. Yet it just spits out `No diskover indices found in Elasticsearch. Please run a crawl and come back.` There's no indication of anything actually doing anything, no guide as to what to expect at the start. It just doesn't work, for no obvious reason. In my opinion right now, this needs to either be updated with better docs on dealing with issues and how to find out what's actually going wrong, or looked at for removal. And I'm not saying that lightly. I'm a firm supporter of linuxserver's work for the community, but this thing is just a mess.
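     For anyone else hitting this, one way to at least see whether a crawl ever wrote anything to Elasticsearch (host and port are whatever yours are):
     ```
     # List all indices; a successful crawl should leave diskover-*
     # entries here. If this comes back empty, the crawler never ran.
     curl -s 'http://192.168.1.10:9200/_cat/indices?v'
     ```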
  13. Came back to it after leaving it for a night, still no clue what's happening.
  14. How do you know if things are working? Atm I'm seeing this: My assumption is it needs to do an initial scan, but there are no proper indicators as to what's happening. Should I be worried that almost nothing shows on the dashboard?
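     The only signal I've found so far is the container log, so I'm watching that (container name is whatever yours is called):
     ```
     # Follow the container's output live; any crawl activity or
     # errors should show up here.
     docker logs -f diskover
     ```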
  15. Are those of us on the stable release of unRAID missing any important updates with Dynamix System Statistics? You have it restricted to release candidates atm.
  16. Thankfully that was a horribly false alert. It seems I made the mistake of copying another folder that had those files missing instead. It was from an instance that broke, and I could never narrow down exactly what caused it. Checking the backups, I can confirm the correct folder was backed up safely, sorry for the trouble. Now I can relax with it back up >.<
  17. This plugin critically fails to back up poste, WITHOUT USER NOTIFICATION. Unfortunately, this has gutted my email system, as I deleted the appdata folder for it after a change didn't work and was going to restore from backups. As far as I can tell, files and folders with the mail group and mem user are what's missed:
  18. It was just pure deduction. All of the reporting tools were saying it was fine, but it kept having issues, which meant I had to look into it further. In the end, it was manually testing the device that allowed me to narrow it down to the point where I could make the ssd fail 100% of the time under one of my test conditions. At that point it doesn't matter what SMART says; I can reproduce a failure at will with something that's within the device's specs, proving the device is faulty. Hardware failure can be tricky. Not everything is going to be easily visible as a concrete red line somewhere. There will be times you have to just test things yourself to try and figure it out.
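     In my case the reproducible test was just sustained sequential writes, something in the spirit of this (the path is an example, and it writes ~40GB, so make sure there's room):
     ```
     # Hammer the drive with direct sequential writes and watch for
     # stalls or I/O errors; a healthy drive should finish cleanly.
     dd if=/dev/zero of=/mnt/cache/stress.bin bs=1M count=40960 \
         oflag=direct status=progress
     rm /mnt/cache/stress.bin
     ```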
  19. I like to be there when updating, just in case something breaks. Not sure if the auto update plugin would fix it though, because it wouldn't be updating the plugin, would it? Or does it suppress the notifications anyway?
  20. Dynamix System Statistics has a minimum unRAID version of 6.7.0 as of its last update. The latest stable unRAID version is 6.6.6. I've been getting a notification about this update every day for a week.
  21. Would someone happen to have a guide on how to set up gitlab pages with the letsencrypt reverse proxy docker? I've tried to go through it a few times, but with it wanting another IP and another domain (I have one I can use), I'm really not sure how to proceed.
  22. And done. So in the end, it was a faulty ssd. Even though SMART was reporting nothing wrong, it'd fail under heavy write conditions. I replaced the drive, copying the data that was on the old cache onto the new one. With the help of the appdata backup, I went through all the dockers and they look safe, with no errors being reported from them. Testing the services the dockers provide yields no problems. On a side note, the gui seems to report the right loads for the CPU now; not sure why that was screwing up because of the drive. As everything seems good now, I'm going to mark this as solved.
  23. Alright, I had a look. I actually got myself a new ssd for my desktop for Christmas, and the old ssd was a sister to the one I had in the server. Popping it in and checking the cables, so far it's not a cable failure but an ssd one. Even though the one from the desktop has 2 reallocated sectors, it works, and given the one in the server is failing while reporting no issues, I have to replace it now. I'm shifting files back onto the cache now, will report the results back here later.
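     For anyone curious, the counter I was looking at comes straight from smartctl (device path is an example):
     ```
     # Full SMART dump; Reallocated_Sector_Ct is attribute 5.
     smartctl -a /dev/sdb
     ```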
  24. Alright, testing with copying just the 40GB docker image to the cache, this should be the only activity on the server, and I'm seeing this: Even 5 minutes in this is taking... Ahah, I've actually managed to trigger a failure during testing, let me throw the diagnostics onto this. Is this an issue with the controller, the drive, or the kernel? atlantis-diagnostics-20190104-2243.zip