Everything posted by Marshalleq

  1. Oh, so this is not that? I guess I got mixed up somehow. My apologies.
  2. Hi all, I'm trying to connect to a remote NFS share that has a specific username/password. I've seen this question asked multiple times, but each time someone either says they don't know the answer or ignores the question completely. I've been scouring this thread for 90 minutes so far and haven't found an answer. So, does anyone know how to use the Unassigned Devices plugin to connect to NFS with a username/password? Of course I could use fstab (roughly as sketched below), but I'd prefer not to - I have no idea whether that survives a reboot, and it would mean a plain-text password on show. Many thanks, Marshalleq
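     A minimal sketch of the fstab route mentioned above, with the host, export and mount point as placeholders. Worth noting: plain NFS (sec=sys) authenticates by client host and UID rather than by username and password, so true per-user credentials would mean Kerberos (sec=krb5) on both ends rather than a credential in the mount line.
        # /etc/fstab entry (placeholder server and paths):
        nas.example.lan:/export/media  /mnt/remote/media  nfs  defaults  0  0
        # Or, as a one-off test from the console:
        mount -t nfs nas.example.lan:/export/media /mnt/remote/media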
  3. So, giving a little back - here's how to get GitLab working with the Unraid letsencrypt/nginx reverse proxy and SSL. The Let's Encrypt container itself is covered elsewhere, so I'm not going into that. I wouldn't be surprised if there are a few extra things to configure in NGINX to get everything working better, but anyway: once you have a domain, change the following settings in gitlab.rb and reconfigure, then point a standard nginx proxy config at it on port 80 (there's an official proxy config to base it on here; a rough sketch of the proxy side follows at the end of this post).
     Configure for the reverse proxy by editing gitlab.rb in your docker config location. Find the following values and change them as below:
     nginx['listen_port'] = 80
     nginx['listen_https'] = false
     external_url 'https://gitlab.yourdomain.com'
     Add your networks to the existing trusted proxies so that the logging doesn't all come from a single IP address, for example:
     gitlab_rails['trusted_proxies'] = ['192.168.1.0/24', '172.18.0.0/24']
     Then reconfigure as per the standard procedure, e.g.:
     # docker exec -it GitLab-CE bash
     # gitlab-ctl reconfigure
     That's all that's needed to get the page to come up, anyway. I assume Mattermost will be similar - hoping I can easily rename that to chat.domain.com.
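     A minimal sketch of the nginx proxy side referred to above, assuming the letsencrypt/nginx container terminates SSL for gitlab.yourdomain.com and forwards to the GitLab container on port 80; the upstream name is an assumption, and the container's own SSL include lines are omitted.
        server {
            listen 443 ssl;
            server_name gitlab.yourdomain.com;
            # ssl_certificate / include lines from the container's defaults go here

            location / {
                proxy_pass http://GitLab-CE:80;   # container name or IP is an assumption
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
            }
        }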
  4. Also, I see the Mattermost page has a good NGINX config for it, however GitLab comes with NGINX built in. Does anyone have a functioning NGINX setup using the Let's Encrypt container? It's a lot to learn when you don't know how the internals of an omnibus container work. Further, I've noted that as soon as I add the external URL as https, it enables its own HTTPS stack, which I don't want. I assume I just add the external URL as plain http and let the Let's Encrypt container proxy it, like with everything else. However, I'd bet that this external URL defines all sorts of outgoing addresses that need it to be accurate, and I'm wondering if it's even possible to do this. Here was me thinking this was going to be easy.
     Edit: Seems like I can use this - disable the bundled NGINX by setting, in /etc/gitlab/gitlab.rb:
     nginx['enable'] = false
     Taken from here, which has links to external web server settings. I guess I'll give this a go. (A sketch of the related gitlab.rb settings follows below.)
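     A minimal gitlab.rb sketch for that external web server route, based on GitLab's non-bundled web server documentation; the Workhorse listen address and port are assumptions, and a gitlab-ctl reconfigure is needed afterwards.
        nginx['enable'] = false
        # Workhorse has to listen on TCP so the outside proxy can reach GitLab:
        gitlab_workhorse['listen_network'] = "tcp"
        gitlab_workhorse['listen_addr'] = "0.0.0.0:8181"   # assumed port
        external_url 'https://gitlab.yourdomain.com'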
  5. @frakman1 seems like we're both trying to do this, maybe we can help each other. I do run the awesome letsencrypt/nginx container for HTTPS, but figure I should at least be able to get it working on a plain IP address; however, I can't even get the package to start after editing gitlab.rb and kicking off the reconfigure. Does yours start after adding it? I note that Unraid says it's started, but after a page refresh it's actually stopped - something to do with crashes not updating the GUI, I guess (a couple of quick log checks are sketched below).
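     A quick way to see why the container is dying, assuming the container name GitLab-CE from the earlier post; docker logs works even when the container has already stopped.
        docker logs --tail 100 GitLab-CE                 # startup/crash output
        # If it stays up long enough to get a shell:
        docker exec -it GitLab-CE gitlab-ctl status      # which omnibus services are running
        docker exec -it GitLab-CE gitlab-ctl tail        # follow all service logs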
  6. Did anyone get Mattermost running with this docker? I actually hadn't realised you could, and have been concentrating on a standalone Mattermost docker, but I'm finding a few issues that ultimately stop it from running. I did get GitLab running quite easily, and supposedly you edit one file and Mattermost works - but when I do that, it breaks (the change I mean is sketched below). So I'm interested in what others have achieved. I'm spinning up a startup and chat is really important for us, with all our remote workers.
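     For reference, the single gitlab.rb change usually cited for enabling the bundled Mattermost looks roughly like the sketch below; the hostname is a placeholder and a reconfigure is needed afterwards.
        mattermost_external_url 'https://chat.yourdomain.com'
        # then:
        docker exec -it GitLab-CE gitlab-ctl reconfigure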
  7. Well, for the first time I am now also getting the dreaded Warning: file_get_contents(https://lsio.ams3.digitaloceanspaces.com/?prefix=unraid-nvidia/): failed to open stream: HTTP request failed! in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 45 etc. I checked it yesterday - I needed to do some encoding, so thought I'd try running it, and it worked, but the server was busy, so I thought I'd do it this morning. This morning it gives this error. That to me says it's something external to our networks and more likely on the Digital Ocean side. I don't run Pi-hole or any kind of filtering; it's direct to the internet over a Gigabit fibre connection, and no updates have been applied to the firewall since yesterday. Just thought this might help to narrow down the cause for someone at some point (a quick reachability check is sketched below).
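     A quick reachability test from the Unraid console against the same endpoint the plugin fetches, which helps separate a local network problem from an outage on the Digital Ocean side:
        curl -sSI "https://lsio.ams3.digitaloceanspaces.com/?prefix=unraid-nvidia/"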
  8. In case it helps to rule anything out, I have a bond between an onboard 1Gb NIC and one port of a dual-port 10Gb NIC (also running at 1Gb). No issues with downloading the plugin.
  9. The point is you're not the only one, and it's unlikely to be specific to this docker.
  10. Easy to find others on Google: https://forum.level1techs.com/t/troubleshooting-steam-cache-speed-when-downloading-non-cached-games/132384/7 This one highlights being careful with your DNS, which I agree with - I have mine set via DHCP to point at my lancache, which then points upstream to my INTERNAL DNS, NOT an internet DNS, which in turn points out to the internet from my firewall (a quick check of that chain is sketched below). I'd recommend you do the same. This one claims to have helped fix the issue on FreeNAS.
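     A quick sanity check of that DNS chain; the resolver IPs are placeholders, and the hostname is the usual lancache test entry for Steam (swap in any domain from the uklans cache-domains list if needed). Queries via the lancache DNS should return the cache's LAN IP, while the upstream resolver should return real internet addresses.
        dig +short lancache.steamcontent.com @192.168.1.10   # lancache DNS (placeholder IP)
        dig +short lancache.steamcontent.com @192.168.1.1    # internal/upstream DNS (placeholder IP)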
  11. The speed-on-first-download issue used to be a consequence of the steamcache itself, so I expect it's something upstream. My recent testing, though, seemed to indicate that it was resolved in this particular bundle - this one even maxes out my bandwidth when cached, which the previous one never did. If I recall correctly, there was some issue where the download speed halved because two copies were downloaded simultaneously - one for your gaming client and one for the cache. I'm sure this is not normal proxy behaviour; however, this isn't a typical proxy, given that it uses DNS instead of standard traffic-routing methods. To that end I'd suggest someone looks upstream for a resolution. I'm overseas for a few weeks, so it's a bit difficult for me.
  12. I have endless problems with this not deleting snapshots when it's supposed to, so I thought I'd update it in case that refreshes something. However, I just get the error below - the first plugin error I've ever had. Is it because I'm not running the beta?
  13. If I'm reading correctly and you just need the firmware updated, it may be possible to boot from something other than Unraid to do that...
  14. Hands down, the best and most capable tool / docker is Tdarr. Supports GPU / CPU encoding and all sorts of other stuff.
  15. Dear @limetech, I note the changelog line below - does it require us to re-set up the VM template, or is it a kernel fix? I vaguely recall something about an XML option that enabled this, but could be wrong (a sketch of the libvirt option I'm thinking of follows below).
     webgui: VMs: enable cpu cache passthrough; AMD + multithreaded
     Any info you can link us to? Many thanks, Marshalleq
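     The libvirt domain XML option I believe this refers to looks roughly like the sketch below (host CPU cache passthrough); treat it as an assumption rather than what the webgui change actually writes out.
        <cpu mode='host-passthrough'>
          <cache mode='passthrough'/>
        </cpu>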
  16. I just updated and forgot to check this, lucky!
  17. Yeah, I don't know why Bridge mode is called Bridge, because in Unraid it's actually NAT as far as I know - I have a vague recollection it comes from some confusing Docker terminology. I'd suggest you set up both containers on br0 with their own IP addresses; then you can point one at the other (rough CLI equivalent below). In 'Bridge' mode, only one container on the whole bridge can claim a given host port, whereas on br0 the same port can be in use on multiple IP addresses.
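     Roughly what the br0 suggestion above looks like on the docker CLI; Unraid normally does this through the template's Network Type and Fixed IP fields, and the network name, addresses and image here are placeholders.
        docker run -d --name app1 --network br0 --ip 192.168.1.50 nginx:alpine
        docker run -d --name app2 --network br0 --ip 192.168.1.51 nginx:alpine
        # Both can listen on the same port (e.g. 80) because each has its own LAN address.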
  18. Or possibly set them up on their own IP addresses by using the bridge option.
  19. Seems to be a common problem for many these days. I believe there are manual download instructions earlier in the thread.
  20. Yep, your gut was right, it was a faulty memory stick. Through this exercise I have learnt the following: I don't know how long the memory was faulty - my assumption is many months; usage was at about 79GB, so it may not always have been in use. Also that:
     • All file systems were impacted.
     • The BTRFS file system had to be reformatted to be recovered, because it wouldn't rebalance.
     • The BTRFS docker image had to be deleted and recreated (or an older version restored from backup), because it wouldn't repair.
     • ZFS pointed me directly at the corrupted file (on my single-disk ZFS volume) so I could restore it, which was nice.
     • The ZFS mirror healed with a simple scrub (sketched below).
     • XFS of course just ran an fsck-type check, so something could still be lingering, but that data is not very important - that's why it's on XFS. I'd like something more robust, but short of memory errors like this and cold reboots, it's probably pretty safe for what it is.
     I do believe BTRFS will also point me at the corrupted file; maybe it did, my memory on that is struggling. It has been a good exercise. It's possible someone with more knowledge of BTRFS could have fixed it, though a corrupted image file didn't sound very fixable to me, so I elected to start again because I don't trust it based on past experience. I could be completely wrong, but that's where I landed.
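     The ZFS side of that recovery as a sketch, with the pool name as a placeholder: a scrub repairs whatever the redundancy allows, and zpool status -v lists any files with unrecoverable errors so they can be restored by hand.
        zpool scrub tank
        zpool status -v tank      # per-device error counts and affected files
        zpool clear tank          # reset the error counters once files are restored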
  21. Seems like someone is on it. Unsure if it's implemented or not. https://github.com/uklans/cache-domains/pull/118