Marshalleq

Members
  • Content Count

    573
  • Joined

  • Last visited

Community Reputation

52 Good

About Marshalleq

  • Rank
    Advanced Member
  • Birthday October 17

Converted

  • Gender
    Male
  • URL
    https://www.tech-knowhow.com
  • Location
    New Zealand
  • Personal Text
    TT

Recent Profile Visitors

901 profile views
  1. All good and thanks so much! I'm so tired of cloud mail. Finally realised how to get around the lack of PTR records on home ISPs. Amazing what happens when you sit down and actually work stuff out! Marshalleq
  2. Thanks - yeah, in my original it says IMAP, but I recognise that's easy to overlook; you have a huge job responding to all these requests! Many thanks for the info, will check it out! Marshalleq
  3. I was more thinking along these lines: https://www.nginx.com/resources/wiki/start/topics/examples/imapproxyexample/ Apparently nginx needs to be compiled with the mail module enabled to support the mail directive.
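For reference, the mail directive the example above relies on looks roughly like this. This is only a sketch, assuming nginx was built with --with-mail; the hostname, auth endpoint and ports are all placeholders, not values from the thread:

```nginx
# Hypothetical nginx mail{} block for proxying IMAP and SMTP.
# Requires nginx compiled with the mail module (--with-mail).
mail {
    server_name mail.example.com;          # placeholder hostname
    auth_http   localhost:9000/auth;       # assumed auth service endpoint

    server {
        listen   143;
        protocol imap;
    }
    server {
        listen    25;
        protocol  smtp;
        smtp_auth login plain;
    }
}
```

If the letsencrypt container's nginx build doesn't include the mail module, this block will fail at config test time, which is a quick way to check.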
  4. Hi all, anyone know if the letsencrypt container supports the mail directive? I'm trying to use it to proxy IMAP and SMTP. Many thanks.
  5. Yeah, I'm finding I'm just outgrowing the Unraid Docker GUI. I need to create multi-container applications and such. Trying to install something as 5 separate containers when Unraid has little ability to offer any dependency mapping is a nightmare, especially during updates. As far as I know, it can't work. So docker-compose solves it, apparently. But for some reason it's been pulled from Nerd Pack; I'd certainly vote to have it back rather than have to use some bodgy script to keep it running.
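To illustrate the dependency mapping the GUI can't express, here's a minimal docker-compose sketch. All service names and images are made up for the example; only depends_on is the point:

```yaml
# Hypothetical docker-compose.yml: five related containers with startup
# ordering that separate Unraid templates can't declare.
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example   # placeholder, don't ship this
  cache:
    image: redis:6
  app:
    image: myorg/app:latest        # placeholder image
    depends_on:
      - db
      - cache
  worker:
    image: myorg/worker:latest     # placeholder image
    depends_on:
      - db
  proxy:
    image: nginx:stable
    depends_on:
      - app
    ports:
      - "8080:80"
```

A single `docker-compose up -d` then brings the whole stack up in dependency order, and `docker-compose pull && docker-compose up -d` handles updates as a unit.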
  6. It does take a little while for that change to take effect. Basically, with the cloud on it proxies a Cloudflare IP to your real IP, so if you ping your domain it will come up with a Cloudflare address, whereas if you turn the cloud off, a ping will come back with your real IP address. It would pay to test that on your client before confirming it doesn't work. I assume it's working internally OK? Also, I strongly recommend changing Unraid's 80 and 443 ports so that the letsencrypt container can use them. Things just work better and are more consistent, particularly when you're internal. Failing that, I'd suggest you share a little more of your config.
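A quick way to do that test is with dig rather than ping, since you can point it at a specific resolver. A sketch, assuming dig is installed and with example.com standing in for your domain:

```shell
# With the Cloudflare cloud ON, this returns Cloudflare edge IPs;
# with it OFF, it should return your real public IP.
dig +short example.com A

# Query Cloudflare's resolver directly to bypass a stale local or
# ISP resolver cache while the change propagates.
dig +short example.com A @1.1.1.1
```

If the two answers differ, your local resolver is still serving the old record and it's just a matter of waiting out the TTL.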
  7. You gotta disable the Cloudflare proxy (the cloud next to your domain). And don't use CNAMEs.
  8. @dlandon thanks for your answer. I get where you're going with it, but in my case UD wasn't mounting the NFS share, so at first your response seemed unhelpful, though technically it's correct. And this is the problem throughout this thread whenever this question is asked: no-one ever really answers it. So after my research I thought I'd help others by answering it here.
     Main point: as far as I know, NFS doesn't support username/password. Instead it will mount anything and handle permissions by matching the UID/GID of the local and remote accounts. There seem to be two cases where this becomes impossible though:
     1 - When a 3rd party has assigned you an NFS share and hasn't used the mapall option
     2 - When a 3rd party has assigned you an NFS share and hasn't shared which UID it's under
     So in summary, a functioning NFS setup relies on a functioning network and a correctly set up NFS export at the other end, which the 3rd party has now resolved for me. I effectively had both of those. And just to add one for good measure: even though I confirmed the NFS client connects on the firewall, if I connect across my firewall from a client I get "server <IPADDRESS> requires stronger authentication". As far as I know the firewall is completely opened up to this host, and I am still to resolve this one, but it just goes to show that the error messages aren't always very accurate. Hope it's helpful to someone. Marshalleq
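Since permissions come down to UID/GID mapping rather than credentials, the fix lives on the server side. A minimal sketch of the Linux equivalent of that mapall option, with assumed paths, network and IDs (mapall itself is the FreeBSD/FreeNAS spelling):

```
# Hypothetical Linux /etc/exports entry: all_squash maps every client
# user to anonuid/anongid, so the client's UID no longer has to match.
# (On FreeBSD/FreeNAS the equivalent is the -mapall export option.)
/srv/share 192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=1000)
```

After editing, `exportfs -ra` on the server reloads the export table without restarting the NFS service.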
  9. Hi all, I'm trying to connect to a remote NFS share that has a specific username/password. I've seen this question asked multiple times, but each time someone says they don't know the answer, or the question is completely ignored. I've been scouring this thread for 90 minutes so far and haven't found an answer. So, does anyone know how to use the Unassigned Devices plugin to connect to NFS with a username/password? Of course I could use fstab, but I'd prefer not to, and I have no idea whether that would survive a reboot either. And of course it would mean a plain-text password on show. Many thanks, Marshalleq
  10. Oh, so this is not that? I guess I got mixed up somehow. My apologies.
  12. Hands down nicest dev EVER.
  13. So, giving a little back - here's how to get GitLab working with the Unraid letsencrypt/nginx reverse proxy and SSL. Obviously the letsencrypt container is covered elsewhere, so not going into that. I wouldn't be surprised to find there are a few extra things to configure in NGINX to get everything working better, but anyway, once you have a domain, change the following settings in gitlab.rb and reconfigure, then point a standard nginx proxy config at it on port 80. There's an official proxy config to base it from here.
      Configure for reverse proxy by editing gitlab.rb in your docker config location. Find the following values and change them to:
      nginx['listen_port'] = 80
      nginx['listen_https'] = false
      external_url 'https://gitlab.yourdomain.com'
      Add to the existing trusted proxies so that the logging doesn't all come from a single IP address, e.g.:
      gitlab_rails['trusted_proxies'] = ['192.168.1.0/24', '172.18.0.0/24']
      Reconfigure as per the standard procedure, e.g.:
      # docker exec -it GitLab-CE bash
      # gitlab-ctl reconfigure
      That's all that's needed to get the page to come up, anyway. I assume Mattermost will be similar - hoping I can easily rename that to chat.domain.com
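For the "standard nginx proxy config" on the letsencrypt side, something along these lines should do it. This is a sketch, not the official config: the server_name, upstream container name and port are assumptions to match the gitlab.rb values above:

```nginx
# Hypothetical site config for the letsencrypt container, terminating SSL
# and proxying to the GitLab container listening on plain HTTP port 80.
server {
    listen 443 ssl;
    server_name gitlab.yourdomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://GitLab-CE:80;   # container name assumed
    }
}
```

The X-Forwarded-* headers are what make the trusted_proxies setting above worthwhile: GitLab uses them to log the real client IP instead of the proxy's.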
  14. Also, I see the Mattermost page has a good NGINX config for it; however, GitLab comes with NGINX built in. Does anyone have a functioning NGINX setup using the letsencrypt container? It's a lot to learn when you don't know how the internals of an omnibus container work. Further, I've noted that as soon as I add the external URL as https, it enables its own HTTPS stack, which I don't want. I assume I just add the external URL as plain http and let the letsencrypt container proxy it, like with everything else. However, I'd bet that this external URL defines all sorts of outgoing addresses that need it to be accurate, and I'm wondering if it's even possible to do this. Here was me thinking this was going to be easy. Edit: Seems like I can use this: disable the bundled NGINX by setting nginx['enable'] = false in /etc/gitlab/gitlab.rb. Taken from here, which has links to external web server settings. I guess I'll give this a go.
  15. @frakman1 seems like we're both trying to do this; maybe we can help each other. I do run the awesome letsencrypt/nginx container for HTTPS, but figure I should at least be able to get it to work just on an IP address - yet I can't even get the package to start after editing gitlab.rb and running the reconfigure. Does yours start after adding it? I note that Unraid says it's started, but after a page refresh it's actually stopped - something to do with crashes not updating the GUI, I guess.