# GitLab {#module-services-gitlab}

GitLab is a feature-rich git hosting service.

## Prerequisites {#module-services-gitlab-prerequisites}

The `gitlab` service exposes only a Unix socket at
`/run/gitlab/gitlab-workhorse.socket`. You need to
configure a webserver to proxy HTTP requests to the socket.

For instance, the following configuration could be used to set up nginx as a
frontend proxy:

```nix
{
  services.nginx = {
    enable = true;
    recommendedGzipSettings = true;
    recommendedOptimisation = true;
    recommendedProxySettings = true;
    recommendedTlsSettings = true;
    virtualHosts."git.example.com" = {
      enableACME = true;
      forceSSL = true;
      locations."/" = {
        proxyPass = "http://unix:/run/gitlab/gitlab-workhorse.socket";
        proxyWebsockets = true;
      };
    };
  };
}
```

## Configuring {#module-services-gitlab-configuring}

GitLab depends on both PostgreSQL and Redis and will automatically enable
both services. In the case of PostgreSQL, a database and a role will be
created.

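If you prefer to manage PostgreSQL yourself (for example, on a separate host), the automatic setup can be skipped. A minimal sketch, assuming the `databaseCreateLocally` and `databaseHost` options are available in your nixpkgs version; the host name and file path are placeholders:

```nix
{
  services.gitlab = {
    enable = true;
    # Do not create the database and role locally; point GitLab at an
    # externally managed PostgreSQL instance instead.
    databaseCreateLocally = false;
    databaseHost = "db.example.com";
    databasePasswordFile = "/var/keys/gitlab/db_password";
  };
}
```
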
The default state directory is `/var/gitlab/state`. This is where
all data such as repositories and uploads will be stored.

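The state directory can be relocated, e.g. onto a dedicated data volume. A sketch, assuming the `statePath` option is available in your nixpkgs version; the path is a placeholder:

```nix
{
  # Store repositories and uploads on a dedicated data volume
  # instead of the default /var/gitlab/state.
  services.gitlab.statePath = "/mnt/data/gitlab";
}
```
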
A basic configuration with some custom settings could look like this:

```nix
{
  services.gitlab = {
    enable = true;
    databasePasswordFile = "/var/keys/gitlab/db_password";
    initialRootPasswordFile = "/var/keys/gitlab/root_password";
    https = true;
    host = "git.example.com";
    port = 443;
    user = "git";
    group = "git";
    smtp = {
      enable = true;
      address = "localhost";
      port = 25;
    };
    secrets = {
      dbFile = "/var/keys/gitlab/db";
      secretFile = "/var/keys/gitlab/secret";
      otpFile = "/var/keys/gitlab/otp";
      jwsFile = "/var/keys/gitlab/jws";
    };
    extraConfig = {
      gitlab = {
        email_from = "gitlab-no-reply@example.com";
        email_display_name = "Example GitLab";
        email_reply_to = "gitlab-no-reply@example.com";
        default_projects_features = {
          builds = false;
        };
      };
    };
  };
}
```

If you're setting up a new GitLab instance, generate new
secrets. You can, for instance, use
`tr -dc A-Za-z0-9 < /dev/urandom | head -c 128 > /var/keys/gitlab/db` to
generate a new db secret. Make sure the files can be read by, and
only by, the user specified by
[services.gitlab.user](#opt-services.gitlab.user). GitLab
encrypts sensitive data stored in the database. If you're restoring
an existing GitLab instance, you must specify the secrets
from `config/secrets.yml`, located in your GitLab
state folder.

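The secret generation described above can be sketched as a small script. Note that this is an illustrative example: it writes to a temporary directory (`/tmp/gitlab-keys`); in a real deployment you would write to your keys directory (e.g. `/var/keys/gitlab`) and `chown` the files to the GitLab user.

```shell
#!/bin/sh
# Generate the four GitLab secrets into a directory readable only by the
# creating user. The target directory is an illustrative placeholder.
set -eu
umask 077                     # newly created files become mode 600
keydir=/tmp/gitlab-keys
mkdir -p "$keydir"
for secret in db secret otp jws; do
  tr -dc A-Za-z0-9 < /dev/urandom | head -c 128 > "$keydir/$secret"
done
```
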
When `incoming_mail.enabled` is set to `true`
in [extraConfig](#opt-services.gitlab.extraConfig), an additional
service called `gitlab-mailroom` is enabled for fetching incoming mail.

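A hedged sketch of such a configuration, using GitLab's upstream `incoming_mail` settings (host names and addresses are placeholders; the exact values depend on your mail setup, and the password should be supplied via a secret mechanism rather than in plain text):

```nix
{
  services.gitlab.extraConfig.incoming_mail = {
    enabled = true;
    # Upstream GitLab incoming_mail settings; adjust to your mail host.
    address = "gitlab-incoming+%{key}@example.com";
    host = "imap.example.com";
    port = 993;
    ssl = true;
    user = "gitlab-incoming@example.com";
    mailbox = "inbox";
  };
}
```
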
Refer to [](#ch-options) for all available configuration
options for the [services.gitlab](#opt-services.gitlab.enable) module.

## Maintenance {#module-services-gitlab-maintenance}

### Backups {#module-services-gitlab-maintenance-backups}

Backups can be configured with the options in
[services.gitlab.backup](#opt-services.gitlab.backup.keepTime). Use
the [services.gitlab.backup.startAt](#opt-services.gitlab.backup.startAt)
option to configure regular backups.

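For instance, a sketch of a daily backup schedule with a retention window, assuming the `startAt` and `keepTime` options of your nixpkgs version:

```nix
{
  services.gitlab.backup = {
    # Run a backup every day at 03:00 and keep backups for 48 hours.
    startAt = "03:00";
    keepTime = 48;
  };
}
```
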
To run a manual backup, start the `gitlab-backup` service:

```ShellSession
$ systemctl start gitlab-backup.service
```

### Rake tasks {#module-services-gitlab-maintenance-rake}

You can run GitLab's rake tasks with `gitlab-rake`,
which will be available on the system when GitLab is enabled. You
will have to run the command as the user that you configured to run
GitLab with.

A list of all available rake tasks can be obtained by running:

```ShellSession
$ sudo -u git -H gitlab-rake -T
```

## Runner {#module-services-gitlab-runner}

GitLab Runner is a CI runner, an executable that you can host yourself.
A GitLab pipeline runs its operations on a GitLab Runner. These can include
building an executable, running a test suite, pushing a docker image, etc. The
GitLab Runner receives jobs from GitLab, which it then dispatches to the
configured executors
([`docker` (or `podman`), `shell`, or `kubernetes`](https://docs.gitlab.com/runner/executors)).

The
[services.gitlab-runner.services](https://search.nixos.org/options?query=services.gitlab-runner.services)
option documents a number of typical setups to configure multiple runners with
different executors.

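For orientation, a minimal sketch of one registered runner using the `docker` executor. Option names, in particular how the registration or authentication token is provided, differ between nixpkgs releases, so treat this as a starting point and consult the option search linked above; the file path and image are placeholders:

```nix
{
  services.gitlab-runner = {
    enable = true;
    services.default = {
      # File containing the runner's registration variables
      # (e.g. CI_SERVER_URL and the token); keep it out of the Nix store.
      registrationConfigFile = "/var/keys/gitlab-runner/default.env";
      executor = "docker";
      dockerImage = "alpine:latest";
    };
  };
}
```
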
The [example below](#ex-gitlab-runner-podman) shows in more detail how to
configure a GitLab Runner with caching and reasonably good security practices.

::: {#ex-gitlab-runner-podman .example}

## GitLab Runner with `podman` and Nix Store Caching

The [VM-tested `podman-runner`](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests/gitlab/runner/podman-runner/default.nix)
(a NixOS module meant for reuse) configures an advanced GitLab runner with the following features:

- The executor is `podman`, which gives you additional safety compared to
  `docker`. That means every job is run in a `podman` container.

- The following container **images** are built with Nix:

  **Container Images for GitLab Jobs**:
  - `local/alpine`: An image based on Alpine with a Nix installation
    (attribute `jobImages.alpine`).
  - `local/ubuntu`: An image based on Ubuntu with a Nix installation
    (attribute `jobImages.ubuntu`).
  - `local/nix`: An image based on Nix which only comes with `nix`
    installed (attribute `jobImages.nix`).

  **Images for VM Setup**:
  - `local/nix-daemon-image`: An image with a Nix daemon which is
    used to share the `/nix/store` across jobs (variable `nixDaemonImage`),
    set up with some essential derivations (`bootstrapPkgs`).
  - `local/podman-daemon-image`: An image with `podman` running as a daemon which is
    used to run `podman` inside the above job containers
    (variable `podmanDaemonImage`).

- Every job container runs in a `podman` container instance based by default on
  `jobImages.ubuntu`. A pipeline job can override this with `image: local/alpine`.
  - Each job container will have the `/nix/store` mounted from the container
    `nix-daemon-container` (see the registration flag
    `--docker-volumes-from "nix-daemon-container:ro"`).

    The `nix-daemon-container` is a single container instance of the
    `nixDaemonImage`. This enables caching of `/nix/store` paths across all jobs
    in **all** runners. It also makes **the host VM's `/nix/store` independent of the
    Nix store used in the jobs**, which improves isolation.

    ::: {.note}
    **Security:** If you do not want jobs to share a single store, you need a separate
    `nixDaemonImage` container for each registered runner (`gitlab-runner.services.<name>`).
    :::

  - Each job container will have the `/run/podman/podman.sock` socket mounted from the
    `podman-daemon-container`.

    The `podman-daemon-container` is a single container of the `podmanDaemonImage` which runs
    `podman` as a daemon. Job containers can use this daemon to spawn nested containers
    (podman-in-podman).
    **Keep in mind that `bind` mounts are local to the `podman-daemon-container`**;
    this can be worked around with `podman volume create <vol>` and manually copying
    to/from that volume `<vol>`.

    If you only need to build container images, you do not need this feature
    (`podman-daemon-container`); see the note in the next point.

    Container configuration files (`auxRootFiles`) are copied to all containers to
    ensure `podman` works consistently inside the job containers.

  - For security reasons, the job containers do **not** have the `podman` socket
    of the host (the NixOS VM) mounted.

    ::: {.note}
    Building container images with `buildah` (a stripped-down
    `podman` for building images) inside a job which runs `jobImages.alpine`
    is still possible.
    :::

  - **Cleanup Disk Space**:

    With this setup it is easy to clean the `nix-daemon-container`
    (e.g. if you run out of disk space), then reboot and have the runner in a clean state.
    You can do the following to effectively clean everything and start with fresh volumes:

    ```bash
    # Stop the GitLab runner.
    systemctl stop gitlab-runner.service
    # Stop `systemd`-managed containers, such that they are not recreated
    # when deleted below.
    systemctl stop podman-podman-daemon-container.service \
      podman-nix-daemon-container.service \
      podman-nix-container.service \
      podman-alpine-container.service \
      podman-ubuntu-container.service || true

    podman container rm -f --all
    podman image rm -f --all
    podman volume rm -f --all

    reboot
    # Systemd will restart all containers and create volumes etc.
    ```

:::