Configure the bundled Puma instance of the GitLab package

Puma is a fast, multi-threaded, and highly concurrent HTTP 1.1 server for Ruby applications. It runs the core Rails application that provides the user-facing features of GitLab.

Reducing memory use

To reduce memory use, Puma forks worker processes. Each time a worker is created, it shares memory with the primary process. The worker uses additional memory only when it changes or adds to its memory pages. This can lead to Puma workers using more physical memory over time as workers handle additional web requests. The amount of memory used over time depends on the use of GitLab. The more features used by GitLab users, the higher the expected memory use over time.

To stop uncontrolled memory growth, the GitLab Rails application runs a supervision thread that automatically restarts workers if they exceed a given resident set size (RSS) threshold for a certain amount of time.

GitLab sets a default memory limit of 1,200 MB. To override the default, set per_worker_max_memory_mb to the new RSS limit in megabytes:

  1. Edit /etc/gitlab/gitlab.rb:

    puma['per_worker_max_memory_mb'] = 1024 # 1GB
    
  2. Reconfigure GitLab:

    sudo gitlab-ctl reconfigure
    

When workers are restarted, GitLab's capacity to serve requests is reduced for a short period of time. If workers are replaced too frequently, set per_worker_max_memory_mb to a higher value.

The worker count is calculated from the number of CPU cores. A small GitLab deployment with 4 to 8 workers may experience performance issues if workers are restarted too often (once or more per minute).

If the server has free memory, a higher limit of 1,200 MB or more can be beneficial.
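
If you prefer to tune capacity explicitly rather than rely on the CPU-based default, you can pin the worker count alongside the memory limit in /etc/gitlab/gitlab.rb. The values below are illustrative only; choose them based on the cores and memory available, and reconfigure GitLab afterwards:

puma['worker_processes'] = 4
puma['per_worker_max_memory_mb'] = 1400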

Monitor worker restarts

GitLab emits log events if workers are restarted due to high memory use.

The following is an example of one of these log events in /var/log/gitlab/gitlab-rails/application_json.log:

{
  "severity": "WARN",
  "time": "2023-01-04T09:45:16.173Z",
  "correlation_id": null,
  "pid": 2725,
  "worker_id": "puma_0",
  "memwd_handler_class": "Gitlab::Memory::Watchdog::PumaHandler",
  "memwd_sleep_time_s": 5,
  "memwd_rss_bytes": 1077682176,
  "memwd_max_rss_bytes": 629145600,
  "memwd_max_strikes": 5,
  "memwd_cur_strikes": 6,
  "message": "rss memory limit exceeded"
}

memwd_rss_bytes is the actual amount of memory consumed, and memwd_max_rss_bytes is the RSS limit set through per_worker_max_memory_mb.
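
Because application_json.log contains one JSON object per line, you can filter for these restart events with jq, assuming jq is installed on the host:

sudo jq 'select(.message == "rss memory limit exceeded")' /var/log/gitlab/gitlab-rails/application_json.log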

Change the worker timeout

The default Puma timeout is 60 seconds.

note
The puma['worker_timeout'] setting does not set the maximum request duration.

To change the worker timeout to 600 seconds:

  1. Edit /etc/gitlab/gitlab.rb:

    gitlab_rails['env'] = {
       'GITLAB_RAILS_RACK_TIMEOUT' => 600
     }
    
  2. Reconfigure GitLab:

    sudo gitlab-ctl reconfigure
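
To verify that the variable was applied after reconfiguring, you can inspect the environment of a running Puma worker. The process name pattern below is an assumption and may differ across Puma versions:

sudo cat /proc/$(pgrep -fn 'puma: cluster worker')/environ | tr '\0' '\n' | grep GITLAB_RAILS_RACK_TIMEOUT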
    

Disable Puma clustered mode in memory-constrained environments

caution
This is an experimental Alpha feature and subject to change without notice. The feature is not ready for production use. If you want to use this feature, you should test outside of production first. See the known issues for additional details.

In a memory-constrained environment with less than 4 GB of RAM available, consider disabling Puma clustered mode.

Set the number of workers to 0 to reduce memory usage by hundreds of MB:

  1. Edit /etc/gitlab/gitlab.rb:

    puma['worker_processes'] = 0
    
  2. Reconfigure GitLab:

    sudo gitlab-ctl reconfigure
    

Unlike the default clustered mode, in this configuration only a single Puma process serves the application. For details on Puma worker and thread settings, see the Puma requirements.

The downside of this configuration is reduced throughput, which can be a fair tradeoff in a memory-constrained environment.

Remember to have sufficient swap available to avoid out of memory (OOM) conditions. See the memory requirements for details.
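
After reconfiguring, you can confirm that Puma is running in single mode by listing its processes; in single mode there should be no cluster worker entries. The exact process titles vary by Puma version:

pgrep -fa puma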

Puma single mode known issues

When running Puma in single mode, some features are not supported. For details on the affected features, see epic 5303.

Performance caveat when using Puma with Rugged

For deployments where NFS is used to store Git repositories, GitLab uses direct Git access to improve performance by using Rugged.

Rugged usage is automatically enabled if direct Git access is available and Puma is running single threaded, unless it is disabled by a feature flag.

MRI Ruby uses a Global VM Lock (GVL). The GVL allows MRI Ruby to be multi-threaded, but only one thread can execute Ruby code at a time, so a process effectively runs on a single core.

Git operations are I/O intensive. When a Rugged call occupies a thread for a long period of time, other threads that might be processing requests can starve. Puma running in single-threaded mode does not have this issue, because at most one request is processed at a time.

GitLab is working to remove Rugged usage. Even though performance without Rugged is acceptable today, in some cases it might still be beneficial to run with it.

Given this caveat, and the acceptable performance of Gitaly, GitLab disables Rugged usage when Puma is configured to run with more than one thread.

This default may not be optimal in all situations. If Rugged plays an important role in your deployment, benchmark to find the best configuration:

  • The safest option is to start with single-threaded Puma (see the sketch after this list).
  • To force Rugged to be used with multi-threaded Puma, you can use a feature flag.
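
For example, a minimal sketch of the single-threaded starting point in /etc/gitlab/gitlab.rb (values shown for benchmarking only; reconfigure GitLab afterwards):

puma['min_threads'] = 1
puma['max_threads'] = 1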

Configuring Puma to listen over SSL

Puma, when deployed with Omnibus GitLab, listens over a Unix socket by default. To configure Puma to listen over an HTTPS port instead, follow the steps below:

  1. Generate an SSL certificate key-pair for the address where Puma will listen. For the example below, this is 127.0.0.1.

    note
    If using a self-signed certificate from a custom Certificate Authority (CA), follow the documentation to make them trusted by other GitLab components.
  2. Edit /etc/gitlab/gitlab.rb:

    puma['ssl_listen'] = '127.0.0.1'
    puma['ssl_port'] = 9111
    puma['ssl_certificate'] = '<path_to_certificate>'
    puma['ssl_certificate_key'] = '<path_to_key>'
    
    # Disable UNIX socket
    puma['socket'] = ""
    
  3. Reconfigure GitLab:

    sudo gitlab-ctl reconfigure
    
note
In addition to the Unix socket, Puma also listens over HTTP on port 8080 to provide metrics for Prometheus to scrape. It is not currently possible to make Prometheus scrape them over HTTPS, and support for it is being discussed in this issue. As a result, this HTTP listener cannot be turned off without losing Prometheus metrics.
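
To confirm that Puma serves HTTPS after reconfiguring, you can probe the configured address and port with curl. The example reuses the address and port from the steps above; -k skips certificate verification, which is convenient with self-signed certificates, and /-/readiness is used because it normally responds without authentication from localhost:

curl -kv https://127.0.0.1:9111/-/readiness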

Switch from Unicorn to Puma

note
For Helm-based deployments, see the webservice chart documentation.

Starting with GitLab 13.0, Puma is the default web server and Unicorn has been disabled. In GitLab 14.0, Unicorn was removed from the Linux package and is no longer supported.

Puma has a multi-threaded architecture that uses less memory than a multi-process application server like Unicorn. On GitLab.com, the switch resulted in a 40% reduction in memory consumption. Most Rails application requests include a proportion of I/O wait time.

During I/O wait time, MRI Ruby releases the GVL to other threads. Multi-threaded Puma can therefore still serve more requests than a single process.

When switching to Puma, Unicorn server configuration does not carry over automatically because of differences between the two application servers.

To switch from Unicorn to Puma:

  1. Determine suitable Puma worker and thread settings.
  2. Convert any custom Unicorn settings to Puma in /etc/gitlab/gitlab.rb.

    The table below summarizes which Unicorn configuration keys correspond to those in Puma when using the Linux package, and which ones have no counterpart. An example conversion follows these steps.

    | Unicorn                            | Puma                             |
    |------------------------------------|----------------------------------|
    | unicorn['enable']                  | puma['enable']                   |
    | unicorn['worker_timeout']          | puma['worker_timeout']           |
    | unicorn['worker_processes']        | puma['worker_processes']         |
    | Not applicable                     | puma['ha']                       |
    | Not applicable                     | puma['min_threads']              |
    | Not applicable                     | puma['max_threads']              |
    | unicorn['listen']                  | puma['listen']                   |
    | unicorn['port']                    | puma['port']                     |
    | unicorn['socket']                  | puma['socket']                   |
    | unicorn['pidfile']                 | puma['pidfile']                  |
    | unicorn['tcp_nopush']              | Not applicable                   |
    | unicorn['backlog_socket']          | Not applicable                   |
    | unicorn['somaxconn']               | puma['somaxconn']                |
    | Not applicable                     | puma['state_path']               |
    | unicorn['log_directory']           | puma['log_directory']            |
    | unicorn['worker_memory_limit_min'] | Not applicable                   |
    | unicorn['worker_memory_limit_max'] | puma['per_worker_max_memory_mb'] |
    | unicorn['exporter_enabled']        | puma['exporter_enabled']         |
    | unicorn['exporter_address']        | puma['exporter_address']         |
    | unicorn['exporter_port']           | puma['exporter_port']            |
  3. Reconfigure GitLab:

    sudo gitlab-ctl reconfigure
    
  4. Optional. For multi-node deployments, configure the load balancer to use the readiness check.
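
As an illustration of the mapping in step 2, here is a hypothetical Unicorn configuration and its Puma equivalent in /etc/gitlab/gitlab.rb (the values and socket path are examples only):

# Before: Unicorn
# unicorn['worker_processes'] = 4
# unicorn['worker_timeout'] = 60
# unicorn['socket'] = '/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket'

# After: Puma
puma['worker_processes'] = 4
puma['worker_timeout'] = 60
puma['socket'] = '/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket'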

Troubleshooting Puma

502 Gateway Timeout after Puma spins at 100% CPU

This error occurs when the web server times out (default: 60 seconds) after not hearing back from the Puma worker. If the CPU spins to 100% while this is happening, something may be taking longer than it should.

To fix this issue, we first need to figure out what is happening. The following tips are only recommended if you do not mind users being affected by downtime. Otherwise, skip to the next section.

  1. Load the problematic URL.
  2. Run sudo gdb -p <PID> to attach to the Puma process (see the example after these steps for one way to find worker PIDs).
  3. In the GDB window, type:

    call (void) rb_backtrace()
    
  4. This forces the process to generate a Ruby backtrace. Check /var/log/gitlab/puma/puma_stderr.log for the backtrace. For example, you may see:

    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `block in start'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `loop'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:36:in `block (2 levels) in start'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:44:in `sample'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `sample_objects'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each_with_object'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `block in sample_objects'
    from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `name'
    
  5. To see the current threads, run:

    thread apply all bt
    
  6. Once you’re done debugging with gdb, be sure to detach from the process and exit:

    detach
    exit
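
If you are unsure which PID to attach to, you can list the Puma worker processes first. The process title pattern below is an assumption and may vary across Puma versions:

pgrep -fa 'puma: cluster worker'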
    

GDB reports an error if the Puma process terminates before you can run these commands. To buy more time, you can raise the worker timeout. For Omnibus GitLab installations, edit /etc/gitlab/gitlab.rb and increase it from 60 seconds to 600:

gitlab_rails['env'] = {
        'GITLAB_RAILS_RACK_TIMEOUT' => 600
}

For source installations, set the environment variable. Refer to Puma Worker timeout.

Reconfigure GitLab for the changes to take effect.

Troubleshooting without affecting other users

The previous section attached to a running Puma process, which may have undesirable effects on users trying to access GitLab at the time. If you are concerned about affecting other users on a production system, you can run a separate Rails process to debug the issue:

  1. Log in to your GitLab account.
  2. Copy the URL that is causing problems (for example, https://gitlab.com/ABC).
  3. Create a Personal Access Token for your user (User Settings -> Access Tokens).
  4. Bring up the GitLab Rails console.
  5. At the Rails console, run:

    app.get '<URL FROM STEP 2>/?private_token=<TOKEN FROM STEP 3>'
    

    For example:

    app.get 'https://gitlab.com/gitlab-org/gitlab-foss/-/issues/1?private_token=123456'
    
  6. In a new window, run top. It should show this Ruby process using 100% CPU. Write down the PID.
  7. Follow step 2 from the previous section on using GDB.

GitLab: API is not accessible

This often occurs when GitLab Shell attempts to request authorization via the internal API (for example, http://localhost:8080/api/v4/internal/allowed), and something in the check fails. There are many reasons why this may happen:

  1. Timeout connecting to a database (for example, PostgreSQL or Redis)
  2. Error in Git hooks or push rules
  3. Error accessing the repository (for example, stale NFS handles)

To diagnose this, try to reproduce the problem, then use top to check whether a Puma worker is spinning at 100% CPU. Try the GDB techniques above. In addition, strace may help isolate issues:

strace -ttTfyyy -s 1024 -p <PID of puma worker> -o /tmp/puma.txt

If you cannot isolate which Puma worker is the issue, try to run strace on all the Puma workers to see where the /internal/allowed endpoint gets stuck:

ps auwx | grep puma | awk '{ print " -p " $2}' | xargs strace -ttTfyyy -s 1024 -o /tmp/puma.txt

The output in /tmp/puma.txt may help diagnose the root cause.
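
To narrow the trace down to the internal API call, you can search the output for the endpoint path with a plain grep, adjusting the number of context lines as needed:

grep -A 10 'internal/allowed' /tmp/puma.txt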