[Support] GitLab-CE


Recommended Posts

Hi guys,

 

I configured GitLab-CE about a week ago, and it was working smoothly until today, when I noticed it was down.

 

When trying to get it back up, I get the following in the logs:
 

Spoiler


      * execute[create gitlab postgresql user] action run

		=========================================================================
        Error executing action `run` on resource 'execute[create gitlab postgres'
        =========================================================================

        RuntimeError
        ------------
        Exhausted service checks and database is still not available

        Cookbook Trace:
        ---------------
        /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/helpers/'
        /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/helpers/'
        /opt/gitlab/embedded/cookbooks/cache/cookbooks/postgresql/resources/user'

        Resource Declaration:
        ---------------------
        # In /opt/gitlab/embedded/cookbooks/cache/cookbooks/postgresql/resourcesb

         11:   execute "create #{new_resource.username} postgresql user" do
         12:     command %(/opt/gitlab/bin/#{new_resource.helper.service_cmd} -d)
         13:     user account_helper.postgresql_user
         14:     only_if { new_resource.helper.is_running? && new_resource.helpe}
         15:     not_if { new_resource.helper.is_offline_or_readonly? || new_res}
         16:   end
         17:

        Compiled Resource:
        ------------------
        # Declared in /opt/gitlab/embedded/cookbooks/cache/cookbooks/postgresql/'

        execute("create gitlab postgresql user") do
          action [:run]
          default_guard_interpreter :execute
          command "/opt/gitlab/bin/gitlab-psql -d template1 -c \"CREATE USER \\\"
          backup 5
          declared_type :execute
          cookbook_name "postgresql"
          domain nil
          user "gitlab-psql"
          not_if { #code block }
          only_if { #code block }
        end

        System Info:
        ------------
        chef_version=15.17.4
        platform=ubuntu
        platform_version=20.04
        ruby=ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [x86_64-linux]
        program_name=/opt/gitlab/embedded/bin/chef-client
        executable=/opt/gitlab/embedded/bin/chef-client

 

From what I can tell, it's trying to run some startup scripts, and at some point Postgres either dies or stops answering. I can't figure out what is happening, since the container won't stay up for more than 2 minutes.

 

How do I debug/fix it?

 

The repos are almost empty, and I have copies of everything involved. The only thing I don't want to lose is the user accounts, since I've already had my teammates register. If there's any way to nuke the thing while keeping the users and start over, that's fine by me too.

 

Thank you!

Link to comment
56 minutes ago, ondono said:

How do I debug/fix it?

 

I don't know, but Googling the error message took me to this forum discussion:

 

>  Initializing to bash, running gitlab-ctl reconfigure (waiting for the db to fail, then start accepting connections, which took ~12 minutes for me) and then running reconfigure again allows it to start.
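That two-pass workaround can be scripted from the host. This is only a hedged sketch — the container name `gitlab-ce` and the use of `docker exec` are assumptions about your setup, and it defaults to a dry run that just prints the commands:

```shell
# Sketch: run 'gitlab-ctl reconfigure' twice inside the container,
# giving PostgreSQL a chance to come up between attempts.
CONTAINER="${CONTAINER:-gitlab-ce}"   # assumption: adjust to your container name
DRY_RUN="${DRY_RUN:-1}"               # set DRY_RUN= (empty) to actually execute

RUNS=0
LAST_CMD=""
run_in_container() {
  LAST_CMD="docker exec $CONTAINER $*"
  RUNS=$((RUNS + 1))
  if [ -n "$DRY_RUN" ]; then
    echo "would run: $LAST_CMD"
  else
    docker exec "$CONTAINER" "$@"
  fi
}

# First pass: expected to fail while PostgreSQL is still starting.
run_in_container gitlab-ctl reconfigure || true
# Check the database; in practice you may be waiting ~10+ minutes here.
run_in_container gitlab-ctl status postgresql || true
# Second pass: should now get past the "create gitlab postgresql user" step.
run_in_container gitlab-ctl reconfigure
```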

 

 

Link to comment
12 minutes ago, frakman1 said:

I don't know, but Googling the error message took me to this forum discussion:

 

First of all thank you for taking the time to respond!

 

I had seen that, but it doesn't resolve my issue. I can run gitlab-ctl reconfigure once (until it fails), but then the container stops itself, so I can't run it a second time.

Link to comment
  • 3 weeks later...

I'm trying, but I don't really understand this...
 

I set in /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml my parameters:

production: &base
  #
  # 1. GitLab app settings
  # ==========================

  ## GitLab settings
  gitlab:
    ## Web server settings (note: host is the FQDN, do not include http://)
    host: mydomain.org
    port: 443
    https: true

 

I understand I have to put these in /etc/gitlab/gitlab.rb so my settings survive every restart...

 

but what are these variables?

 

If I use external_url='https://mydomain.org', the data in /etc/gitlab/gitlab.rb changes on restart... but the container doesn't work!

 

Instead... if I manually change JUST /etc/gitlab/gitlab.rb, everything works!

 

 

I can't find the correct variables... I think I need something like:

gitlab_rails['host'] = 'mydomain.org'

gitlab_rails['port'] = 443

gitlab_rails['https'] = true

gitlab_rails['max_request_duration_seconds'] = 33

 

but only max_request_duration_seconds works and gets changed in gitlab.rb

 

what is the correct gitlab_rails['????'] variable for host?

 

thanks
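For anyone landing here later: the omnibus gitlab.rb template does expose gitlab_-prefixed keys for this trio, but they are normally derived from external_url, which is the supported way to set them. A hedged sketch, not verified on this particular container:

```ruby
# /etc/gitlab/gitlab.rb -- the supported route is external_url, from
# which host/port/https are derived on reconfigure:
external_url 'https://mydomain.org'

# The raw keys exist in the omnibus template, but setting them directly
# is normally unnecessary:
# gitlab_rails['gitlab_host'] = 'mydomain.org'
# gitlab_rails['gitlab_port'] = 443
# gitlab_rails['gitlab_https'] = true
```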

Edited by Jack_T
better
Link to comment
  • 1 month later...
  • 4 weeks later...

Greetings. I've just installed this container and got it up and running, but there are 2 things I'm not sure how to do. 1. I want to set up email. Do I have to use Gmail? 2. Can I change the data directory from the cache to the array? Thanks in advance for your help!

Edited by Spectral Force
Link to comment
  • 2 weeks later...
  • 2 weeks later...
  • 3 weeks later...
On 6/5/2022 at 5:03 PM, Cemion said:

The same question lol.
Don't understand why people didn't write it in the description/comments... Not the first app with this problem.

 

There's a file in the gitlab-ce config folder. It's called initial_root_password. It has the root password you need to initially login. Hoping this helps everyone who looks for this.

Link to comment
  • 3 weeks later...

Tried to install Socat with default parameters and I get this:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='socat' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -p '443:443/tcp' -p '100:PGID/tcp' --expose 443 'tynor88/socat'

docker: Invalid containerPort: PGID.
See 'docker run --help'.

The command failed.

I want socat so my HomeAssistant VM can talk to an iTach. Should I be using parameters other than the defaults?

 

Cheers, Richard

Link to comment
  • 1 month later...

I'm about to panic -

 

I was in the middle of the update process for the gitlab-ce image when a power outage occurred. Unraid is up, but I'm not sure how healthy my docker environment is. One thing is for sure: gitlab-ce is definitely not 😞 - it doesn't even show as installed anymore - only the inactive container. Will Docker pick up everything that was already there, or do I have a serious problem??

 


 

 

 

EDIT:

Trying this at the moment - not having fun (because I'm too lazy to sort out my backups)

 

 

EDIT2:

Panic attack over - gitlab-ce is running fine - no data lost there.

Not sure if this is common knowledge or the best way, but I make my gitlab-ce backups like this:

* Open the console of the gitlab-ce container

# cd /var/opt/gitlab/backups (create it if it does not exist)

# gitlab-ctl backup-etc --backup-path /var/opt/gitlab/backups

# gitlab-backup create

 

The backups can be found at /mnt/cache/appdata/gitlab-ce/data/backups and can be moved to a backup disk
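The steps above can be wrapped into one script run from the Unraid host. A hedged sketch — the container name and backup path are assumptions, and it defaults to a dry run that only prints the commands:

```shell
# Sketch of the backup sequence, run from the host via docker exec.
CONTAINER="${CONTAINER:-gitlab-ce}"       # assumption: your container name
BACKUP_PATH="/var/opt/gitlab/backups"     # path inside the container
DRY_RUN="${DRY_RUN:-1}"                   # set DRY_RUN= (empty) to actually execute

STEPS=0
step() {
  STEPS=$((STEPS + 1))
  if [ -n "$DRY_RUN" ]; then
    echo "would run: docker exec $CONTAINER $*"
  else
    docker exec "$CONTAINER" "$@"
  fi
}

step mkdir -p "$BACKUP_PATH"                             # ensure target exists
step gitlab-ctl backup-etc --backup-path "$BACKUP_PATH"  # config (/etc/gitlab)
step gitlab-backup create                                # repositories + database
```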

 

Edited by sjoerd
Link to comment
  • 2 weeks later...
On 6/24/2022 at 3:37 PM, Phycoforce said:

There's a file in the gitlab-ce config folder. It's called initial_root_password. It has the root password you need to initially login. Hoping this helps everyone who looks for this.

Thanks for the help. By default no file is generated; you must uncomment it in the config.

Link to comment
  • 2 weeks later...
  • 1 month later...
On 9/26/2022 at 2:38 AM, Figrol said:

Ok, this might be a really stupid question, I have the service up and running and can get to the login page, however, I don't know what the default login credentials are! Does anyone know?

I saw this question posted twice on this page and thought I'd assist. I too had to figure this out... I first tried registering an account, but it wasn't obvious that this didn't work.

Unfortunately there aren't any hints/tutorials when installing/configuring the docker. I've seen some other apps do this, and it's very helpful.

 

First, confirm where your Unraid system stores its docker configurations. Navigate to Unraid's Docker WebGUI page and left-click the GitLab-CE application. Select "Edit" from the pop-up menu, scroll down to "Show more settings" and click it. This shows you the "Config Storage Path". Next, open a terminal from the WebGUI (or SSH in, or whatever your preferred way is).

cd /mnt/cache/appdata/gitlab-ce/config

cat initial_root_password

 

# WARNING: This value is valid only in the following conditions
#          1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
#          2. Password hasn't been changed manually, either via UI or via command line.
#
#          If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

Password: <YOUR PASSWORD WILL BE HERE>

 

Select the password, copy it (if the WebGUI didn't already).

Open the GitLab-CE WebGUI.

Use the account "root" with the password provided to log in.

 

Have fun from here!

 

Link to comment
  • 3 months later...

Hi everyone, I have this docker instance running on my Unraid server on the latest version, and I just noticed that my GitLab is missing the Container Registry. What are the prerequisites for this to work? I cannot find it anywhere.

I have the Package Registry working for npm.

 

Thanks in advance! 

Edited by Tiago TA
Link to comment
  • 3 weeks later...

I am not sure what happened in the last update (March 2023) to gitlab-ce.

The latest version produces tons of debug output, fills up the cache disk overnight, and Unraid becomes unusable.
In my case 300+ GB.

 

Unfortunately I do not have the time to dig into the issue and rebuild the container, so to be safe:

 

DO NOT UPGRADE to the latest version !!!!

Link to comment
15 hours ago, Germanus4711 said:

I am not sure what happened in the last update (March 2023) to gitlab-ce.

The latest version produces tons of debug output, fills up the cache disk overnight, and Unraid becomes unusable.
In my case 300+ GB.

 

Unfortunately I do not have the time to dig into the issue and rebuild the container, so to be safe:

 

DO NOT UPGRADE to the latest version !!!!

So far I found the following sequence(s) in the logs:
--- log clipping start ---

==> /var/log/gitlab/gitlab-rails/application.log <==
2023-03-09T06:31:12.575Z: writing value to /dev/shm/gitlab/puma/gauge_all_puma_2-0.db failed with unmapped file

==> /var/log/gitlab/gitlab-rails/application_json.log <==
{"severity":"WARN","time":"2023-03-09T06:31:12.575Z","correlation_id":null,"message":"writing value to /dev/shm/gitlab/puma/gauge_all_puma_2-0.db failed with unmapped file"}

==> /var/log/gitlab/gitlab-rails/application.log <==
2023-03-09T06:31:12.575Z: /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_dict.rb:46:in `upsert_entry'
/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_dict.rb:46:in `write_value'
/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:139:in `write_value'
/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:48:in `block in set'
/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:44:in `synchronize'
/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:44:in `set'
/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/gauge.rb:28:in `set'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:41:in `set_running_threads'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:34:in `block in sample'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:30:in `each'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:30:in `sample'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/base_sampler.rb:30:in `safe_sample'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/base_sampler.rb:80:in `run_thread'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/daemon.rb:58:in `block (2 levels) in start'

==> /var/log/gitlab/gitlab-rails/application_json.log <==
{"severity":"DEBUG","time":"2023-03-09T06:31:12.575Z","correlation_id":null,"message":"/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_dict.rb:46:in `upsert_entry'\n/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_dict.rb:46:in `write_value'\n/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:139:in `write_value'\n/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:48:in `block in set'\n/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:44:in `synchronize'\n/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:44:in `set'\n/opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/gauge.rb:28:in `set'\n/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:41:in `set_running_threads'\n/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:34:in `block in sample'\n/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:30:in `each'\n/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:30:in `sample'\n/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/base_sampler.rb:30:in `safe_sample'\n/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/base_sampler.rb:80:in `run_thread'\n/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/daemon.rb:58:in `block (2 levels) in start'"}

 

--- log clipping end ---
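One hedged observation that may or may not apply here: those "writing value to /dev/shm/gitlab/puma/..." failures are a known symptom of an undersized /dev/shm. Docker defaults to 64 MB, while GitLab's own Docker install docs suggest at least 256 MB for the Prometheus metrics files, and the failed writes are what floods the logs. On Unraid this would mean adding --shm-size to the container's "Extra Parameters". Illustration only:

```shell
# Hedged sketch: compose the extra docker parameter that enlarges /dev/shm.
# 256m matches the minimum GitLab's Docker docs suggest; tune as needed.
SHM_SIZE="256m"
EXTRA_PARAMS="--shm-size=$SHM_SIZE"
echo "Extra Parameters: $EXTRA_PARAMS"
```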

 

Link to comment

Hi, it seems my users and repositories are gone. I tried logging in both as root and my users and got an "Invalid login or password" error.

I had to use GitLab docker's console to run gitlab-rake "gitlab:password:reset[root]" to reset the root password. My other accounts did not exist.


I did two things recently, I'm not sure which one caused the problem:

  • I used the "Update all" button from the docker management screen.
  • I used the mover to move data from my cache drive to my array.

I checked both /mnt/user/appdata and /mnt/cache/appdata for the data and there are no traces of the repositories, usernames, or data.
I did not have a local copy of the data because I was using the webIDE.

*edit* changing from */mnt/cache/appdata/gitlab-ce/* to */mnt/user/appdata/gitlab-ce/* for /data, /config, and /log fixed it!

*edit 2* Nope, I could see the projects, but ended up getting internal 500 errors when I tried to access the repos. It seems I won't get this data back.

Edited by byb
Link to comment
  • 4 weeks later...

I am trying to learn GitLab in a single-user environment (my Unraid box on my home LAN). Although I have the resources (for now), I noticed RAM usage was high: 8 Puma cluster workers taking up just under 800 MB each. Definitely high for my single-user needs.

I read up on the documentation (here and here) and changed the following in "/etc/gitlab/gitlab.rb"
from

# puma['worker_processes'] = 2
# puma['min_threads'] = 4
# puma['max_threads'] = 4 

to

# puma['worker_processes'] = 0
# puma['min_threads'] = 0 
# puma['max_threads'] = 4

 

then ran "gitlab-ctl reconfigure", and also restarted the docker when that failed, but I still have 8 workers gobbling up 6 GB of RAM.

Any tips?

Link to comment
6 hours ago, Mark_LAN said:

I am trying to learn GitLab in a single-user environment (my Unraid box on my home LAN). Although I have the resources (for now), I noticed RAM usage was high: 8 Puma cluster workers taking up just under 800 MB each. Definitely high for my single-user needs.

I read up on the documentation (here and here) and changed the following in "/etc/gitlab/gitlab.rb"
from

# puma['worker_processes'] = 2
# puma['min_threads'] = 4
# puma['max_threads'] = 4 

to

# puma['worker_processes'] = 0
# puma['min_threads'] = 0 
# puma['max_threads'] = 4

 

then ran "gitlab-ctl reconfigure", and also restarted the docker when that failed, but I still have 8 workers gobbling up 6 GB of RAM.

Any tips?

If you leave the hash symbol (#) in front of a variable assignment, it is still a comment and has no effect.
My tip would be to remove the # symbol and try again.

Looking at the values, I am not sure what you want to accomplish. Setting the number of worker processes to 0 means that gitlab* does not have any "helper" to process any request (overly simplified).
Maybe a more traditional value like 1 could be beneficial.

Secondly, a minimum thread count of 0 might look good when nothing is being processed, but I would put in at least 1 so something gets in and out of the server in a timely fashion. To then limit resource consumption, given your use case, I would drop the max value down to 1 as well.
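To make that concrete, a hedged sketch of the uncommented lines (the values are suggestions for a single-user box, not a tested recommendation):

```ruby
# /etc/gitlab/gitlab.rb -- note: no leading '#' on lines you want active
puma['worker_processes'] = 1  # one worker; 0 switches to the experimental single mode
puma['min_threads'] = 1
puma['max_threads'] = 4
```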

 

But for memory limits, it probably looks more like this:
puma['per_worker_max_memory_mb'] = 1024 # 1GB

 

and of course the mandatory:
sudo gitlab-ctl reconfigure
 

There is documentation available on the gitlab* website, look for:
"Configure the bundled Puma instance of the GitLab package"

On that page, you will also find an interesting sentence for the worker == 0 situation (v. 15.11):

"This is an experimental Alpha feature and subject to change without notice."

As a final thought, I would suggest staying as close as possible to a vanilla default setup on a server when learning a new technology. Once the defaults are understood, then the slimming down can be played with.

Link to comment

Alright homies, I have searched the thread and CANNOT find an answer that works. Please help me.

I am trying to get the container to register my custom external URL (https://git.example.com)

I edited the extra variable --env GITLAB_OMNIBUS_CONFIG="external_url 'http://unraid:9080'" and changed the URL to 'https://git.example.com'.

That doesn't work. I use the SWAG container and its included NGINX as my reverse proxy, going through Cloudflare, with a CNAME record created on Cloudflare. When it is set to the Unraid address I can access it through my custom git.example.com URL, but then all the references and git commands in GitLab show unraid:9080 instead of my URL, which is why I want to fix it.

The proxy.conf is setup to forward the containers http 9080 address and redirect to HTTPS

I have tried looking at the gitlab.rb file, but everything is commented out, so I am not sure what I would even edit, or if it would have an effect anyway.

 

I also ran gitlab-ctl reconfigure after changing the address both times.

Please help me; I have no clue why it isn't working, or why the entire gitlab.rb file is commented out.

Link to comment
18 hours ago, Germanus4711 said:

If you leave the hash symbol (#) in front of a variable assignment, it is still a comment and has no effect.
My tip would be to remove the # symbol and try again.

Looking at the values, I am not sure what you want to accomplish. Setting the number of worker processes to 0 means that gitlab* does not have any "helper" to process any request (overly simplified).
Maybe a more traditional value like 1 could be beneficial.

Secondly, a minimum thread count of 0 might look good when nothing is being processed, but I would put in at least 1 so something gets in and out of the server in a timely fashion. To then limit resource consumption, given your use case, I would drop the max value down to 1 as well.

 

But for memory limits, it probably looks more like this:
puma['per_worker_max_memory_mb'] = 1024 # 1GB

 

and of course the mandatory:
sudo gitlab-ctl reconfigure
 

There is documentation available on the gitlab* website, look for:
"Configure the bundled Puma instance of the GitLab package"

On that page, you will also find an interesting sentence for the worker == 0 situation (v. 15.11):

"This is an experimental Alpha feature and subject to change without notice."

As a final thought, I would suggest staying as close as possible to a vanilla default setup on a server when learning a new technology. Once the defaults are understood, then the slimming down can be played with.

Every single value has a '#' in front of it. That seems like an odd way to do a config file. It looks like comments require ##, but I'll give it a shot.

The docs state that '0' for "workers" disables clustered workers and runs a single process for low-RAM environments, which is more or less what I am chasing. '0' for min also lets the system scale; I'm not sure if that only applies in a clustered environment, but either way it's supposed to scale.

Link to comment
On 4/8/2023 at 3:12 PM, Mark_LAN said:

Every single value has a '#' in front of it. That seems like an odd way to do a config file. It looks like comments require ##, but I'll give it a shot.

The docs state that '0' for "workers" disables clustered workers and runs a single process for low-RAM environments, which is more or less what I am chasing. '0' for min also lets the system scale; I'm not sure if that only applies in a clustered environment, but either way it's supposed to scale.

I guess the odd way is actually the "default" config file: the author documents the options in text blocks marked with '##', denoting an actual comment, while a single '#' marks an option that is present but not enabled by default. A style that has been around since the 80s.

Omitting such a file from a distribution, combined with the usual outdated documentation, would not be beneficial.

But if the docs are always up to date, or the source code is available, such a file is not strictly needed; it is just a nicety the author provided. I totally agree that it is far better to read up on an option and evaluate its use.

As for the "workers", that is an architectural part of the software in question. The developers decided to "thread-out" specific parts of the "application", mostly for performance reasons and sometimes due to coding styles.

When one disables a worker thread, the functionality either becomes unavailable, gets migrated to something called a "green thread", or falls back to some hard-coded code block, e.g. just throwing an exception.

The statement "'0' for min also lets the system scale", made me think for a second where the scaling might come from. In the case that 0 disables the option, as outlined, it will not exist and scales nicely as it does nothing no matter how much it scales.

Assuming 0 doesn't really disable the option but green-threads it, the RAM requirements will not be eliminated; the "needed" memory is simply allocated in the same thread as the main app. With that said, a possible scaling route would be some sort of containerization like Docker or Kubernetes. Now the problem escalates a bit, as each container allocates memory per instance: not only for the worker, but ALSO for the app again, versus running the app once with multiple memory pools. The next thing that might happen in this scenario is blocking I/O due to resource bottlenecks that the container needs to sort out, and the scaling may go, let's say, badly.

Coming back to the issue at hand, we are talking about git. That means the workers are there to take in or spill out streams of data from a client app; let's say CLI git to keep it easy. The git server and the git client speak a "protocol" which tells both parties how the data is represented and what to do with it. It does some magic compress/decompress with diffing, besides the security things, etc.

Knowing that, we can now answer the question when and how much memory will be allocated, threaded or not.

If and only if (iff) the client connects to the server and transfers data.

So just using the server without a client, even with hundreds of workers configured, will do practically nothing to the memory on the server. It will allocate 1 x worker memory to store and execute the code, and while active (not zombied, a Z process) it will allocate the communication buffers and other needed resources. The same would be true if a green thread or hard coding were used; only the executable code section is not copied out to worker memory. Disregarding execution in place (XIP), copy-on-write, or other optimizations.

Allocating more than one worker would not happen unless client load requires the system to spin up another one. I would read the '0' above as: no worker is spun up when the system starts and/or is idle. When a client request is received on a port, a worker starts up iff no worker existed before. Consequently, under increasing client load, more workers spin up and shut down as needed. I saw some documentation about worker sizing somewhere in the docs, but 'htop' would be sufficient to observe that behavior on the server.

Now, what is the difference between 0 and 1 workers? Keeping at least one worker active at all times allows immediate processing of incoming data, versus detecting a request, spinning up a worker, routing the request to it, processing, and shutting down when done in the 0-worker case.

TBH it is quite a bit more complicated, but as I am looking for teachable bits and pieces for my YT channel, this one is a worthy candidate on the topic of how to do config files and what they do in general.

Enjoy your discoveries! 🙂

 

Link to comment
On 4/7/2023 at 11:34 PM, kyaustad said:

Alright homies, I have searched the thread and CANNOT find an answer that works. Please help me.

I am trying to get the container to register my custom external URL (https://git.example.com)

I edited the extra variable --env GITLAB_OMNIBUS_CONFIG="external_url 'http://unraid:9080'" and changed the URL to 'https://git.example.com'.

That doesn't work. I use the SWAG container and its included NGINX as my reverse proxy, going through Cloudflare, with a CNAME record created on Cloudflare. When it is set to the Unraid address I can access it through my custom git.example.com URL, but then all the references and git commands in GitLab show unraid:9080 instead of my URL, which is why I want to fix it.

The proxy.conf is setup to forward the containers http 9080 address and redirect to HTTPS

I have tried looking at the gitlab.rb file, but everything is commented out, so I am not sure what I would even edit, or if it would have an effect anyway.

 

I also ran gitlab-ctl reconfigure after changing the address both times.

Please help me; I have no clue why it isn't working, or why the entire gitlab.rb file is commented out.

Please forgive me if I may not have understood the whole situation.

I do assume that 'https://git.example.com' is a literal example, as example.com is a real, reserved domain. Let's substitute it with gitlab.myownserver.xyz for discussion purposes.

The easiest option is to edit your hosts file: find the entry looking like "192.168.x.yyy unraid" and add the desired lookup name to it, e.g. "192.168.x.yyy unraid gitlab.myownserver.xyz"

With that, your Unraid server is now also known as (aka) gitlab.myow.... on every machine where you edit the hosts file. I find that nice for development, but it is not good to maintain. The next step up would be something like Pi-hole, with an entry there for gitlab.myown... pointing to the Unraid IP address (local), or some dynamic DNS service set up similarly (external).

The HTTP-to-HTTPS protocol change is just a matter of taste on a local or private network; for public exposure it's a must. Just make sure you have the certs in place, etc.

When using nginx, just lay out a route from gitlab.myown... to unraid:9080 (much like the Pi-hole entry). Docs link:
https://nginx.org/en/docs/http/request_processing.html
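On the GitLab side, the omnibus docs describe pairing an https external_url with plain-HTTP listening when a proxy such as SWAG terminates TLS in front of the container. A hedged sketch (the port 9080 is taken from the template's default above):

```ruby
# In GITLAB_OMNIBUS_CONFIG or /etc/gitlab/gitlab.rb, when the reverse
# proxy terminates TLS:
external_url 'https://git.example.com'
nginx['listen_port'] = 9080    # the port the proxy forwards to
nginx['listen_https'] = false  # GitLab speaks plain HTTP behind the proxy
```

This makes clone URLs and in-app references show the https domain while the container itself still listens on 9080.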

 

The other part you may want to look at is the ssh config for your client. Look at ~/.ssh/config; it should contain something similar to the following:
 

#Host *
#UseKeychain yes

Host unraid
        User git
        Hostname unraid
        PreferredAuthentications publickey
        AddKeysToAgent yes
        TCPKeepAlive yes
        IdentitiesOnly yes
        # point at the private key (not the .pub)
        IdentityFile ~/.ssh/id_rsa
 

Once you have the keys right, test with ssh first:
https://docs.github.com/en/authentication/connecting-to-github-with-ssh/testing-your-ssh-connection

Link to comment
