Germanus4711

Everything posted by Germanus4711

  1. Please forgive me if I have not understood the whole situation. I assume that 'https://git.example.com' is a literal placeholder, since example.com is an actual registered (reserved) domain; let's substitute gitlab.myownserver.xyz for discussion purposes.

The easiest approach is to edit your hosts file: find the entry that looks like "192.168.x.yyy unraid" and append the desired lookup name to it, e.g. "192.168.x.yyy unraid gitlab.myownserver.xyz". With that, your Unraid server is also known as (aka) gitlab.myow... on every machine whose hosts file you edit. I find that fine for development, but it is hard to maintain.

The next step up would be something like Pi-hole with an entry for gitlab.myown... pointing to the Unraid IP address (local setup), or some dynamic DNS service doing the same thing (external setup).

The HTTP-to-HTTPS protocol change is a matter of taste on a local or private bastion cloud; on a public network it is a must. Just make sure you have the certs in place, etc. When using nginx, lay out a route from gitlab.myown... to unraid:9080 (much like the Pi-hole entry; see the sketch after this post). docs-link: https://nginx.org/en/docs/http/request_processing.html

The other part you may want to look at is the ssh config on your client. ~/.ssh/config should contain something like/similar to the following:

    #Host *
    #UseKeychain yes
    Host unraid
        User git
        Hostname unraid
        PreferredAuthentications publickey
        AddKeysToAgent yes
        TCPKeepAlive yes
        IdentitiesOnly yes
        IdentityFile ~/.ssh/id_rsa.pub

Once you have the keys right, test it with ssh first: https://docs.github.com/en/authentication/connecting-to-github-with-ssh/testing-your-ssh-connection
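A minimal sketch of such an nginx route, assuming the gitlab.myownserver.xyz name and the unraid:9080 upstream from above (TLS/certs left out for brevity; adjust the names to your own setup):

    server {
        listen 80;
        server_name gitlab.myownserver.xyz;

        location / {
            # forward everything to the GitLab HTTP port on the Unraid box
            proxy_pass http://unraid:9080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

With the ssh config above in place, a quick connectivity test could look like 'ssh -T unraid' (the Host alias already carries the git user; the exact greeting depends on your git server).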
  2. I guess the odd way is actually the "default" config file the author provided, with 'some or more' options described in text blocks marked by a double '#' ('##'), denoting an actual comment; a style that has been around since the 80s. A single '#' marks an optional setting that is not enabled by default — again, old school (see the small illustration after this post). Omitting such a file from a distribution while, as is "normal", shipping outdated documentation is not beneficial. But if the docs are always up to date, or the source code is available, such a file is not actually needed; it is just a nicety the author provided. I totally agree that it is far better to read up on an option and evaluate its use.

As for the "workers": that is an architectural part of the software in question. The developers decided to "thread out" specific parts of the application, mostly for performance reasons and sometimes due to coding style. When one disables a worker thread, the functionality either becomes unavailable, or gets migrated to something called a "green thread", or falls back to some hard-coded code path, e.g. just throwing an exception.

The statement "'0' for min also lets the system scale" made me think for a second about where the scaling might come from. If 0 disables the option, as outlined, the worker will not exist and "scales" nicely in the sense that it does nothing no matter the load. Assuming 0 does not really disable the worker but green-threads it, the RAM requirement is not eliminated; it only allocates the "needed" memory in the same thread as the main app. With that said, a possible source of scaling would be some sort of containerization like Docker or Kuber*. But now the problem escalates a bit: each container allocates memory not only for the worker but ALSO for the app itself, so you run the app N times instead of running it once with multiple memory pools. The next thing that may happen in this scenario is blocking I/O from the resource bottlenecks the containers have to sort out, and the scaling may go, let's say, badly.

Coming back to the issue at hand, we are talking about some git* software. That means the workers are there to take in or spill out streams of data from a client app — say, CLI git, to keep it easy. The git server and the git client speak a "protocol" that tells both parties how the data is represented and what to do with it; it does some magic compression/decompression with diffing, besides the security things, etc. Knowing that, we can answer when and how much memory will be allocated, threaded or not: if and only if (iff) a client connects to the server and transfers data. So just running the server without a client, even with hundreds of workers configured, does practically nothing to the memory on the server. It allocates 1 x worker memory to store and execute the code, and while active — not zombied (a Z process) — it allocates the communication buffers and other resource artifacts needed. The same would be true with a green thread or a hard-coded path, except the executable code section is not copied out to separate worker memory (disregarding execute-in-place (XIP), VM copy-on-write and other optimizations). More than one worker would only be allocated when client load requires the system to spin up another one.

I would actually read the '0' above as: no worker is spun up when the system starts and/or is idle. When a client request is received on a port, a worker starts up iff no worker existed before. Consequently, under increasing client load, more workers would spin up and shut down as needed. I saw some documentation about worker sizing in the docs somewhere, but 'htop' is sufficient to observe that behavior on the server.

Now, what is the difference between 0 and 1 workers? Keeping at least one worker active at all times allows immediate processing of incoming data, versus — in the 0-worker case — detecting a request, spinning up a worker, routing the request to it, processing, and shutting down when done.

TBH it is quite a bit more complicated, but as I am looking for teachable bits and pieces for my YT channel, this one is a worthy candidate on the topic of how to do config files and what they do, in general. Enjoy your discoveries! 🙂
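To make that comment convention concrete, here is a tiny made-up config file (the option names are hypothetical, not from any real product):

    ## Worker pool settings.
    ## Double-hash lines are pure documentation and never parsed as options.

    ## Uncomment the next line to override the built-in default of 2:
    # worker_processes = 4

    ## The single-hash line above is an option that exists but is disabled;
    ## the line below is an active setting that takes effect as written.
    worker_min_threads = 0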
  3. If you leave the hash symbol (#) in front of a variable-to-value assignment, it is still a comment and will not have any effect. My tip would be to remove the # before the setting and try again.

Looking at the values, I am not sure what you want to accomplish. Setting the number of worker processes to 0 means that gitlab* does not have any "helper" to process requests (overly simplified); maybe a more traditional value like 1 would be beneficial. Secondly, a value of 0 for the minimum I/O threading might look good as long as nothing is being processed, but I would probably put in a 1 there as well, to at least get something in and out of the server in a timely fashion. To then limit the "resource consumption", given your use case, I would drop the max value down to 1 too. For the memory setting, it probably looks more like this:

    puma['per_worker_max_memory_mb'] = 1024 # 1GB

and of course the mandatory:

    sudo gitlab-ctl reconfigure

There is documentation available on the gitlab* website; look for "Configure the bundled Puma instance of the GitLab package". On that page, you will also find an interesting sentence for the worker == 0 situation (v. 15.11): "This is an experimental Alpha feature and subject to change without notice."

As a final thought, I would suggest staying as close as possible to a vanilla default setup on a server when learning a new technology. Once the defaults are understood, the slimming down can be played with; see the sketch below for the settings discussed.
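Putting those suggestions together, a conservative /etc/gitlab/gitlab.rb sketch could look like the following (key names taken from the Puma documentation page mentioned above; verify them against your GitLab version before applying):

    # one worker instead of the experimental 0-worker mode
    puma['worker_processes'] = 1
    # keep at least one I/O thread alive, and cap it there to limit resources
    puma['min_threads'] = 1
    puma['max_threads'] = 1
    # restart any worker that grows beyond 1 GB
    puma['per_worker_max_memory_mb'] = 1024

followed by 'sudo gitlab-ctl reconfigure' to apply.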
  4. So far I found the following sequence(s) in the logs:

    --- log clipping start ---
    ==> /var/log/gitlab/gitlab-rails/application.log <==
    2023-03-09T06:31:12.575Z: writing value to /dev/shm/gitlab/puma/gauge_all_puma_2-0.db failed with unmapped file

    ==> /var/log/gitlab/gitlab-rails/application_json.log <==
    {"severity":"WARN","time":"2023-03-09T06:31:12.575Z","correlation_id":null,"message":"writing value to /dev/shm/gitlab/puma/gauge_all_puma_2-0.db failed with unmapped file"}

    ==> /var/log/gitlab/gitlab-rails/application.log <==
    2023-03-09T06:31:12.575Z: /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_dict.rb:46:in `upsert_entry'
    /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_dict.rb:46:in `write_value'
    /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:139:in `write_value'
    /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:48:in `block in set'
    /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:44:in `synchronize'
    /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/mmaped_value.rb:44:in `set'
    /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/prometheus-client-mmap-0.17.0/lib/prometheus/client/gauge.rb:28:in `set'
    /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:41:in `set_running_threads'
    /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:34:in `block in sample'
    /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:30:in `each'
    /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/threads_sampler.rb:30:in `sample'
    /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/base_sampler.rb:30:in `safe_sample'
    /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/samplers/base_sampler.rb:80:in `run_thread'
    /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/daemon.rb:58:in `block (2 levels) in start'

    ==> /var/log/gitlab/gitlab-rails/application_json.log <==
    {"severity":"DEBUG","time":"2023-03-09T06:31:12.575Z","correlation_id":null,"message":"<the same stack trace as above, \n-escaped into a single JSON log line>"}
    --- log clipping end ---
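The "failed with unmapped file" lines come from the Prometheus metrics files GitLab keeps under /dev/shm/gitlab. As a first diagnostic step (my assumption about where to look, based purely on the paths in the trace), checking that tmpfs from inside the container could look like:

    df -h /dev/shm
    ls -lh /dev/shm/gitlab/puma/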
  5. I am not sure what happened in the last update (March 2023) to gitlab-ce. The latest version produces tons of debug output, filled my cache disk overnight (300+ GB in my case), and made Unraid unusable. Unfortunately I do not have the time to dig into the issue and rebuild the container, so to be safe: DO NOT UPGRADE to the latest version!
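To check whether the container's logs are what is eating the cache, something like the following should show it quickly (the appdata path is an assumption based on a typical Unraid layout; adjust it to wherever your gitlab-ce container stores its data):

    du -sh /mnt/cache/appdata/gitlab-ce/log/* | sort -h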
  6. I had some success with the following inserted into the <disk> section:

    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sr0'/>
      <target dev='hdg' bus='sata'/>
      <readonly/>
    </disk>

Given that lsscsi revealed:

    [4:0:0:0] cd/dvd TSSTcorp DVDWBD SH-B123L SB04 /dev/sr0
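If the VM is edited outside the Unraid GUI, the same snippet goes inside the <devices> element of the domain XML and can be applied with (VM name is a placeholder):

    virsh edit <vm-name>

The target dev name ('hdg' above) only has to be unique among the VM's disk targets.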