opticon

Members
  • Content Count

    25
  • Joined

  • Last visited

Community Reputation

7 Neutral

About opticon

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed


  1. It works, thank you! Running a full benchmark on everything now.
  2. Hi @jbartlett, thanks for putting the docker together. I've got it installed on 2x unRAID boxes; one works fine but the other does not. Motherboard: Asus B85M-E. HBA: H310 (flashed to LSI HBA mode). The SSDs were originally connected to the HBA, but I've moved them to the onboard SATA controller and I get the same error message. I think the SSD make/model being identical up until the last 3 digits might be the issue? Any help would be appreciated.
  3. I'm currently working on a ZT docker that will allow you to route to internal networks; I'm testing it in bridge mode (container with its own IP). It's a fork of someone else's work, which I'll write an unRAID config for. It's a best-effort thing right now and I'll try to finish it ASAP. One thing I discovered while testing my docker: your internal network clients usually get a DHCP IP address from your router, and your router actually has no idea about the ZT network. So you'll need to log onto your firewall and create a route (see the routing sketch after this list) to send all ZT network traffic to the IP address of you
  4. I think that might be because you're on the light theme? I've just left mine at the new default dark theme. If there's a fix etc. let me know. First time posting with this new forum, so I don't know if it's an issue for everyone.
  5. @malvarez00 Any idea when you can commit @aartr's fix so we don't have to do the workarounds?
  6. So you're getting the unRAID Main page? What ports have you assigned to the GitLab HTTP and HTTPS ports? The docker's mapped ports are what you need to point the proxy_pass URL line at (see the sketch after this list). Perhaps post your config with redacted IPs, plus whether you're running the docker in host mode or with its own IP, and we can help you from there.
  7. I was testing you all to see if you followed the instructions. I do have that as well, actually; I just forgot to put it in the instructions!
  8. Are you using the external letsencrypt/nginx docker or the built-in one to access GitLab? I run GitLab from its own subdomain and best practice is to do just that; if at all possible it would be easier for you to change your config to git.mydomain.bla. From what I understand, you're wanting to change the relative root path for GitLab. It's not something I've done, but going by the link below you just need to change your setting to: --env GITLAB_OMNIBUS_CONFIG="external_url 'https://mydomain.bla:9080/gitlab'" If that doesn't work, try removing the port number and
  9. Here are some instructions on how to configure gitlab-rake to create backups and automatically purge backups older than 31 days (a condensed sketch also follows after this list). ##### How to configure GitLab-CE docker to run rake backups Summary: The following steps will create a dedicated backup folder for gitlab-rake to export the backups to and automatically purge backups older than 31 days. If you wish, you can use the default folder and retention time by removing those config lines so it uses the defaults from gitlab.rb. This backs up the REPOSITORY DATA ONLY and not your configuration/secre
  10. Sorry, I missed the other guy's post. Can you attach a copy of your nginx conf file, the docker/GitLab log after it's started and fully running, and a copy of the extra params you're using? Remember to rename your domain etc. to just genericdomain.com or something.
  11. From what I understand you shouldn't edit gitlab.rb at all; it WILL get destroyed on the next update. You need to put all the config you want to override in the Extra Parameters section on the unRAID GitLab docker config page. I have all the stuff I mentioned plus more in a single line and it works fine (see the example after this list).
  12. Are you using your ISP's SMTP server? Does it 100% require auth? If so, you may need to set it (see the SMTP sketch after this list). I have 'true' in quotes for my config, which is the only difference I can see? gitlab_rails['gitlab_email_enabled'] = 'true'; gitlab_rails['gitlab_email_display_name'] = 'Gitlab'; gitlab_rails['gitlab_email_reply_to'] = 'noreply@domain.com'; gitlab_rails['smtp_enable'] = 'true'; gitlab_rails['smtp_user'] = 'user@domain.net.au'; gitlab_rails['smtp_address'] = 'mail.domain.net.au'; gitlab_rails['smtp_port'] = '25'; gitlab_rails[
  13. I've wondered the same and also had issues when using the --memory flag. It's supposed to run just fine on an RPi 2/3 with 1GB of RAM (non-docker) using the Omnibus package, but I guess there's additional config built into it. I started to play around with it but ran out of time. Google around and check the documentation; there are options to limit worker processes etc., and this is what I was starting off with (see the sketch after this list): postgresql['shared_buffers'] = '256MB'; unicorn['worker_processes'] = '3'; unicorn['worker_timeout'] = '60'; Can't remember how well it worked, other life t
  14. I had a similar issue. Here's how I fixed it (the full commands are also written out after this list): 1. SSH to unRAID. 2. Start the docker and run 'docker ps' to get the container ID before it crashes again. 3. Keep the SSH window open and get ready to do this as quickly as possible: a) Start the docker again, either from the CLI or the WebGUI. b) Replace the 'DockerID' text below with the container ID you got from step 2 and run the following lines in the SSH window: docker exec -it DockerID chmod -R 0770 /var/opt/gitlab/git-data docker exec -it DockerID chmod -R 2770 /var/opt/gitlab/git-data/reposi
  15. No idea on this one specifically. I run the CA nightly backup plugin and keep 30 days' worth, in case auto or manual updates break the docker and I lose files.
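
On the routing sketch mentioned in post 3: this is roughly what the route on your firewall/router needs to look like. The ZT subnet 10.147.17.0/24 and the ZT container's LAN IP 192.168.1.50 are made-up values for illustration; substitute your own.

```
# Hypothetical static route on a Linux-based firewall: send traffic for the
# ZeroTier subnet to the LAN IP of the box running the ZT container.
ip route add 10.147.17.0/24 via 192.168.1.50

# Most consumer routers expose the same thing as a "static route" entry:
#   destination 10.147.17.0, netmask 255.255.255.0, gateway 192.168.1.50
```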
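
On the proxy_pass note in post 6: a quick way to check which host port the GitLab container's HTTP port is mapped to, and that it actually answers, before pointing nginx at it. The container name GitLab-CE and port 9080 are assumptions; use whatever your template shows.

```
# Show the host ports mapped to the GitLab container (name is an assumption)
docker port GitLab-CE

# If HTTP is mapped to 9080, this should return a response from GitLab
curl -I http://SERVER-IP:9080/

# The proxy_pass line in the nginx conf then points at that same host:port,
# e.g. (nginx syntax, shown here only as a comment):
#   proxy_pass http://SERVER-IP:9080;
```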
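
A condensed sketch of the backup setup from post 9, assuming a dedicated /backups folder and the container name GitLab-CE (both are placeholders). The Omnibus settings go in the unRAID Extra Parameters field; the rake command is what you schedule with cron or the User Scripts plugin.

```
# Extra Parameters field: dedicated backup folder plus 31-day retention
# (2678400 seconds = 31 days)
--env GITLAB_OMNIBUS_CONFIG="gitlab_rails['backup_path'] = '/backups'; gitlab_rails['backup_keep_time'] = 2678400;"

# Scheduled job: creates a new backup archive and purges local archives
# older than backup_keep_time
docker exec -t GitLab-CE gitlab-rake gitlab:backup:create
```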
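
For post 11, this is what that single Extra Parameters line looks like on the unRAID GitLab-CE docker config page; it's the same mechanism as the backup sketch above, just with several unrelated overrides strung together. The specific settings shown are placeholders for whatever you need to change.

```
# One --env, many settings: separate each Omnibus setting with a semicolon.
# Nothing here touches gitlab.rb, so it survives container updates.
--env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.mydomain.bla'; gitlab_rails['time_zone'] = 'UTC'; gitlab_rails['backup_keep_time'] = 2678400;"
```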
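
For the SMTP auth question in post 12: if the ISP relay does require authentication, these are the usual Omnibus keys for it, in the same single-line style. All values are placeholders, and note that the official examples use smtp_user_name (not smtp_user) for the login name.

```
# Hypothetical authenticated-SMTP override (placeholder values throughout)
--env GITLAB_OMNIBUS_CONFIG="gitlab_rails['smtp_enable'] = true; gitlab_rails['smtp_address'] = 'mail.domain.net.au'; gitlab_rails['smtp_port'] = 587; gitlab_rails['smtp_user_name'] = 'user@domain.net.au'; gitlab_rails['smtp_password'] = 'secret'; gitlab_rails['smtp_authentication'] = 'login'; gitlab_rails['smtp_enable_starttls_auto'] = true;"
```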
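
For the memory question in post 13, a sketch of how the --memory limit and those worker-tuning settings fit together in the Extra Parameters field. The 2g cap and the worker values are starting points to experiment with, not something I've verified works well.

```
# Cap the container's RAM and trim GitLab's workers (values are guesses)
--memory=2g --env GITLAB_OMNIBUS_CONFIG="postgresql['shared_buffers'] = '256MB'; unicorn['worker_processes'] = 3; unicorn['worker_timeout'] = 60;"
```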
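
And the full command sequence from post 14, written out for copy/paste. Completing the truncated second path with the standard Omnibus repositories directory (/var/opt/gitlab/git-data/repositories) is my assumption about what the original post said.

```
# Grab the container ID while the docker is up
docker ps | grep -i gitlab

# Straight after (re)starting the docker, fix the data permissions.
# Replace DockerID with the ID from 'docker ps'.
docker exec -it DockerID chmod -R 0770 /var/opt/gitlab/git-data
docker exec -it DockerID chmod -R 2770 /var/opt/gitlab/git-data/repositories
```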