···
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).

#### Roles

If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``. The references will turn into links when a mapping exists in [`doc/manpage-urls.json`](./manpage-urls.json).
···
In an effort to keep the Nixpkgs manual in a consistent style, please follow the conventions below, unless they prevent you from properly documenting something.
In that case, please open an issue about the particular documentation convention and tag it with a "needs: documentation" label.

- Put each sentence in its own line.
  This makes reviews and suggestions much easier, since GitHub's review system is based on lines.
···
  }
  ```

-- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments. For example:

  ```markdown
  # pkgs.coolFunction

  Description of what `coolFunction` does.

  `coolFunction` expects a single argument which should be an attribute set, with the following possible attributes:

-  `name`

  : The name of the resulting image.

-  `tag` _optional_

  : Tag of the generated image.

-  _Default value:_ the output path's hash.
-  ```

## Getting help
···
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).

+#### HTML
+
+Inlining HTML is not allowed. Parts of the documentation get rendered to various non-HTML formats, such as man pages in the case of the NixOS manual.
+
#### Roles

If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``. The references will turn into links when a mapping exists in [`doc/manpage-urls.json`](./manpage-urls.json).
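As an illustration of the role described above, a sentence in a manual source file might read (a hypothetical sentence; the role and man page reference are the ones already shown):

```markdown
Nix's settings are described in {manpage}`nix.conf(5)`.
```

When `doc/manpage-urls.json` contains an entry for `nix.conf(5)`, the rendered manual turns this reference into a link; otherwise it is rendered as plain text.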
···
In an effort to keep the Nixpkgs manual in a consistent style, please follow the conventions below, unless they prevent you from properly documenting something.
In that case, please open an issue about the particular documentation convention and tag it with a "needs: documentation" label.
+When needed, each convention explains why it exists, so you can decide whether to follow it based on your particular case.
+Note that these conventions are about the **structure** of the manual (and its source files), not about the content that goes in it.
+You, as the writer of documentation, are still in charge of its content.

- Put each sentence in its own line.
  This makes reviews and suggestions much easier, since GitHub's review system is based on lines.
···
  }
  ```

+- When showing inputs/outputs of any [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop), such as a shell or the Nix REPL, use a format as you'd see in the REPL, while trying to visually separate inputs from outputs.
+  This means that for a shell, you should use a format like the following:
+  ```shell
+  $ nix-build -A hello '<nixpkgs>' \
+      --option require-sigs false \
+      --option trusted-substituters file:///tmp/hello-cache \
+      --option substituters file:///tmp/hello-cache
+  /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1
+  ```
+  Note how the input is preceded by `$` on the first line and indented on subsequent lines, and how the output is provided as you'd see on the shell.
+
+  For the Nix REPL, you should use a format like the following:
+  ```shell
+  nix-repl> builtins.attrNames { a = 1; b = 2; }
+  [ "a" "b" ]
+  ```
+  Note how the input is preceded by `nix-repl>` and the output is provided as you'd see on the Nix REPL.
+
+- When documenting functions or anything that has inputs/outputs and example usage, use nested headings to clearly separate inputs, outputs, and examples.
+  Keep examples as the last nested heading, and link to the examples wherever applicable in the documentation.
+
+  The purpose of this convention is to provide a familiar structure for navigating the manual, so any reader can expect to find content related to inputs in an "inputs" heading, examples in an "examples" heading, and so on.
+  An example:
+  ```
+  ## buildImage
+
+  Some explanation about the function here.
+  Describe a particular scenario, and point to [](#ex-dockerTools-buildImage), which is an example demonstrating it.
+
+  ### Inputs
+
+  Documentation for the inputs of `buildImage`.
+  Perhaps even point to [](#ex-dockerTools-buildImage) again when talking about something specifically linked to it.
+
+  ### Passthru outputs
+
+  Documentation for any passthru outputs of `buildImage`.
+
+  ### Examples
+
+  Note that this is the last nested heading in the `buildImage` section.
+
+  :::{.example #ex-dockerTools-buildImage}
+
+  # Using `buildImage`
+
+  Example of how to use `buildImage` goes here.
+
+  :::
+  ```
+
+- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments as well as their [types](https://nixos.org/manual/nix/stable/language/values).
+  For example:

  ```markdown
  # pkgs.coolFunction

  Description of what `coolFunction` does.

+  ## Inputs
+
  `coolFunction` expects a single argument which should be an attribute set, with the following possible attributes:

+  `name` (String)

  : The name of the resulting image.

+  `tag` (String; _optional_)

  : Tag of the generated image.

+  _Default:_ the output path's hash.
+  ```
+
+#### Examples
+
+To define a referenceable example, use the following fencing:
+
+```markdown
+:::{.example #an-attribute-set-example}
+# An attribute set example
+
+You can add text before
+
+  ```nix
+  { a = 1; b = 2; }
+  ```
+
+and after code fencing
+:::
+```
+
+Defining examples through the `example` fencing class adds them to a "List of Examples" section after the Table of Contents.
+However, this list is not shown in the rendered documentation on nixos.org.
+
+#### Figures
+
+To define a referenceable figure, use the following fencing:
+
+```markdown
+::: {.figure #nixos-logo}
+# NixOS Logo
+
+:::
+```
+
+Defining figures through the `figure` fencing class adds them to a "List of Figures" section after the Table of Contents.
+However, this list is not shown in the rendered documentation on nixos.org.
+
+#### Footnotes
+
+To add a footnote explanation, use the following syntax:
+
+```markdown
+Sometimes it's better to add context [^context] in a footnote.
+
+[^context]: This explanation will be rendered at the end of the chapter.
+```
+
+#### Inline comments
+
+Inline comments are supported with the following syntax:
+
+```markdown
+<!-- This is an inline comment -->
+```
+
+The comments will not appear in the rendered HTML.
+
+#### Link reference definitions
+
+Links can reference a label, for example, to make the link target reusable:
+
+```markdown
+::: {.note}
+Reference links can also be used to [shorten URLs][url-id] and keep the markdown readable.
+:::
+
+[url-id]: https://github.com/NixOS/nixpkgs/blob/19d4f7dc485f74109bd66ef74231285ff797a823/doc/README.md
+```
+
+This syntax is taken from [CommonMark](https://spec.commonmark.org/0.30/#link-reference-definitions).
+
+#### Typographic replacements
+
+Typographic replacements are enabled. Check the [list of possible replacement patterns](https://github.com/executablebooks/markdown-it-py/blob/3613e8016ecafe21709471ee0032a90a4157c2d1/markdown_it/rules_core/replacements.py#L1-L15).

## Getting help

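As a sketch of what this means in practice (the linked file is the authoritative list; `(c)` and `--` are two of its patterns), source text like:

```markdown
Copyright (c) the contributors -- see the license for details.
```

is rendered with `(c)` replaced by the copyright sign and `--` by an en dash.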
+156 −31
doc/build-helpers/images/dockertools.section.md
···
dockerTools.streamLayeredImage {
  name = "hello";
  contents = [ hello ];
}
```
···
```
:::
-## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}
-
-This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.
-
-Its parameters are described in the example below:
-
-```nix
-pullImage {
-  imageName = "nixos/nix";
-  imageDigest =
-    "sha256:473a2b527958665554806aea24d0131bacec46d23af09fef4598eeab331850fa";
-  finalImageName = "nix";
-  finalImageTag = "2.11.1";
-  sha256 = "sha256-qvhj+Hlmviz+KEBVmsyPIzTB3QlVAFzwAY1zDPIBGxc=";
-  os = "linux";
-  arch = "x86_64";
-}
-```
-
-- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.
-
-- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.
-
-- `finalImageName`, if specified, this is the name of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.
-
-- `finalImageTag`, if specified, this is the tag of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's `latest`.
-
-- `sha256` is the checksum of the whole fetched image. This argument is required.
-
-- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.
-
-- `arch`, if specified, is the cpu architecture of the fetched image. By default it's `x86_64`.
-
-`nix-prefetch-docker` command can be used to get required image parameters:
-
-```ShellSession
-$ nix run nixpkgs#nix-prefetch-docker -- --image-name mysql --image-tag 5
-```
-
-Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
-
-```ShellSession
-$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
-```
-
-Desired image name and tag can be set using `--final-image-name` and `--final-image-tag` arguments:
-
-```ShellSession
-$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
-```
## exportImage {#ssec-pkgs-dockerTools-exportImage}
···
```

Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.

## fakeNss {#ssec-pkgs-dockerTools-fakeNss}
···
dockerTools.streamLayeredImage {
  name = "hello";
  contents = [ hello ];
+  includeStorePaths = false;
}
```
···
```
:::
+[]{#ssec-pkgs-dockerTools-fetchFromRegistry}
+## pullImage {#ssec-pkgs-dockerTools-pullImage}
+
+This function is similar to the `docker pull` command, which means it can be used to pull a Docker image from a registry that implements the [Docker Registry HTTP API V2](https://distribution.github.io/distribution/spec/api/).
+By default, the `docker.io` registry is used.
+
+The image will be downloaded as an uncompressed Docker-compatible repository tarball, which is suitable for use with other `dockerTools` functions such as [`buildImage`](#ssec-pkgs-dockerTools-buildImage), [`buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage), and [`streamLayeredImage`](#ssec-pkgs-dockerTools-streamLayeredImage).
+
+This function requires two different types of hashes/digests to be specified:
+
+- One of them is used to identify a unique image within the registry (see the documentation for the `imageDigest` attribute).
+- The other is used by Nix to ensure the contents of the output haven't changed (see the documentation for the `sha256` attribute).
+
+Both hashes are required because they must uniquely identify some content in two completely different systems (the Docker registry and the Nix store), but their values will not be the same.
+See [](#ex-dockerTools-pullImage-nixprefetchdocker) for a tool that can help gather these values.
+
+### Inputs {#ssec-pkgs-dockerTools-pullImage-inputs}
+
+`pullImage` expects a single argument with the following attributes:
+
+`imageName` (String)
+
+: Specifies the name of the image to be downloaded, as well as the registry endpoint.
+  By default, the `docker.io` registry is used.
+  To specify a different registry, prepend the endpoint to `imageName`, separated by a slash (`/`).
+  See [](#ex-dockerTools-pullImage-differentregistry) for how to do that.
+
+`imageDigest` (String)
+
+: Specifies the digest of the image to be downloaded.
+
+  :::{.tip}
+  **Why can't I specify a tag to pull from, and have to use a digest instead?**
+
+  Tags are often updated to point to different image contents.
+  The most common example is the `latest` tag, which is usually updated whenever a newer image version is available.
+
+  An image tag isn't enough to guarantee the contents of an image won't change, but a digest guarantees this.
+  Providing a digest helps ensure that you will still be able to build the same Nix code and get the same output even if newer versions of an image are released.
+  :::
+
+`sha256` (String)
+
+: The hash of the image after it is downloaded.
+  Internally, this is passed to the [`outputHash`](https://nixos.org/manual/nix/stable/language/advanced-attributes#adv-attr-outputHash) attribute of the resulting derivation.
+  This is needed to provide a guarantee to Nix that the contents of the image haven't changed, because Nix doesn't support the value in `imageDigest`.
+
+`finalImageName` (String; _optional_)
+
+: Specifies the name that will be used for the image after it has been downloaded.
+  This only applies after the image is downloaded, and is not used to identify the image to be downloaded in the registry.
+  Use `imageName` for that instead.
+
+  _Default value:_ the same value specified in `imageName`.
+
+`finalImageTag` (String; _optional_)
+
+: Specifies the tag that will be used for the image after it has been downloaded.
+  This only applies after the image is downloaded, and is not used to identify the image to be downloaded in the registry.
+
+  _Default value:_ `"latest"`.
+
+`os` (String; _optional_)
+
+: Specifies the operating system of the image to pull.
+  If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
+  According to the linked specification, all possible values for `$GOOS` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `darwin` or `linux`.
+
+  _Default value:_ `"linux"`.
+
+`arch` (String; _optional_)
+
+: Specifies the architecture of the image to pull.
+  If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
+  According to the linked specification, all possible values for `$GOARCH` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `386`, `amd64`, `arm`, or `arm64`.
+
+  _Default value:_ the same value from `pkgs.go.GOARCH`.
+
+`tlsVerify` (Boolean; _optional_)
+
+: Used to enable or disable HTTPS and TLS certificate verification when communicating with the chosen Docker registry.
+  Setting this to `false` will make `pullImage` connect to the registry through HTTP.
+
+  _Default value:_ `true`.
+
+`name` (String; _optional_)
+
+: The name used for the output in the Nix store path.
+
+  _Default value:_ a value derived from `finalImageName` and `finalImageTag`, with some symbols replaced.
+  It is recommended to treat the default as an opaque value.
+
+### Examples {#ssec-pkgs-dockerTools-pullImage-examples}
+
+::: {.example #ex-dockerTools-pullImage-niximage}
+# Pulling the nixos/nix Docker image from the default registry
+
+This example pulls the [`nixos/nix` image](https://hub.docker.com/r/nixos/nix) and saves it in the Nix store.
+
+```nix
+{ dockerTools }:
+dockerTools.pullImage {
+  imageName = "nixos/nix";
+  imageDigest = "sha256:b8ea88f763f33dfda2317b55eeda3b1a4006692ee29e60ee54ccf6d07348c598";
+  finalImageName = "nix";
+  finalImageTag = "2.19.3";
+  sha256 = "zRwlQs1FiKrvHPaf8vWOR/Tlp1C5eLn1d9pE4BZg3oA=";
+}
+```
+:::
+
+::: {.example #ex-dockerTools-pullImage-differentregistry}
+# Pulling the nixos/nix Docker image from a specific registry
+
+This example pulls the [`coreos/etcd` image](https://quay.io/repository/coreos/etcd) from the `quay.io` registry.
+
+```nix
+{ dockerTools }:
+dockerTools.pullImage {
+  imageName = "quay.io/coreos/etcd";
+  imageDigest = "sha256:24a23053f29266fb2731ebea27f915bb0fb2ae1ea87d42d890fe4e44f2e27c5d";
+  finalImageName = "etcd";
+  finalImageTag = "v3.5.11";
+  sha256 = "Myw+85f2/EVRyMB3axECdmQ5eh9p1q77FWYKy8YpRWU=";
+}
```
+:::
+
+::: {.example #ex-dockerTools-pullImage-nixprefetchdocker}
+# Finding the digest and hash values to use for `dockerTools.pullImage`
+
+Since [`dockerTools.pullImage`](#ssec-pkgs-dockerTools-pullImage) requires two different hashes, one can run the `nix-prefetch-docker` tool to find out the values for the hashes.
+The tool outputs some text for an attribute set which you can pass directly to `pullImage`.
+
+```shell
+$ nix run nixpkgs#nix-prefetch-docker -- --image-name nixos/nix --image-tag 2.19.3 --arch amd64 --os linux
+(some output removed for clarity)
+Writing manifest to image destination
+-> ImageName: nixos/nix
+-> ImageDigest: sha256:498fa2d7f2b5cb3891a4edf20f3a8f8496e70865099ba72540494cd3e2942634
+-> FinalImageName: nixos/nix
+-> FinalImageTag: latest
+-> ImagePath: /nix/store/4mxy9mn6978zkvlc670g5703nijsqc95-docker-image-nixos-nix-latest.tar
+-> ImageHash: 1q6cf2pdrasa34zz0jw7pbs6lvv52rq2aibgxccbwcagwkg2qj1q
+{
+  imageName = "nixos/nix";
+  imageDigest = "sha256:498fa2d7f2b5cb3891a4edf20f3a8f8496e70865099ba72540494cd3e2942634";
+  sha256 = "1q6cf2pdrasa34zz0jw7pbs6lvv52rq2aibgxccbwcagwkg2qj1q";
+  finalImageName = "nixos/nix";
+  finalImageTag = "latest";
+}
```
+
+It is important to supply the `--arch` and `--os` arguments to `nix-prefetch-docker` to filter to a single image, in case there are multiple architectures and/or operating systems supported by the image name and tags specified.
+By default, `nix-prefetch-docker` will set `os` to `linux` and `arch` to `amd64`.
+
+Run `nix-prefetch-docker --help` for a list of all supported arguments:
+```shell
+$ nix run nixpkgs#nix-prefetch-docker -- --help
+(output removed for clarity)
```
+:::
## exportImage {#ssec-pkgs-dockerTools-exportImage}
···
```

Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.
+
+When using `buildLayeredImage`, you can put this in `fakeRootCommands` if you enable `enableFakechroot`:
+
+```nix
+buildLayeredImage {
+  name = "shadow-layered";
+
+  fakeRootCommands = ''
+    ${pkgs.dockerTools.shadowSetup}
+  '';
+  enableFakechroot = true;
+}
+```

## fakeNss {#ssec-pkgs-dockerTools-fakeNss}
+1 −1
doc/languages-frameworks/dotnet.section.md
···
  projectReferences = [ referencedProject ]; # `referencedProject` must contain `nupkg` in the folder structure.

-  dotnet-sdk = dotnetCorePackages.sdk_6.0;
  dotnet-runtime = dotnetCorePackages.runtime_6_0;

  executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.
···
  projectReferences = [ referencedProject ]; # `referencedProject` must contain `nupkg` in the folder structure.

+  dotnet-sdk = dotnetCorePackages.sdk_6_0;
  dotnet-runtime = dotnetCorePackages.runtime_6_0;

  executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.
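The one-character fix above matters because of Nix attribute-path syntax: `dotnetCorePackages.sdk_6.0` is parsed as selecting attribute `0` from `dotnetCorePackages.sdk_6`, not as an attribute named `sdk_6.0`. A minimal sketch of the corrected fragment in context (`pname` and `version` are hypothetical placeholders; the other attributes come from the example above):

```nix
buildDotnetModule {
  pname = "foo";
  version = "0.1.0";

  # `sdk_6_0` is a single attribute name; `sdk_6.0` would be an
  # attribute-path lookup of `0` inside `sdk_6` and fail to evaluate.
  dotnet-sdk = dotnetCorePackages.sdk_6_0;
  dotnet-runtime = dotnetCorePackages.runtime_6_0;
}
```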
···
In addition to exposing the Idris2 compiler itself, Nixpkgs exposes an `idris2Packages.buildIdris` helper to make it a bit more ergonomic to build Idris2 executables or libraries.

-The `buildIdris` function takes a package set that defines at a minimum the `src` and `projectName` of the package to be built and any `idrisLibraries` required to build it. The `src` is the same source you're familiar with but the `projectName` must be the name of the `ipkg` file for the project (omitting the `.ipkg` extension). The `idrisLibraries` is a list of other library derivations created with `buildIdris`. You can optionally specify other derivation properties as needed but sensible defaults for `configurePhase`, `buildPhase`, and `installPhase` are provided.

Importantly, `buildIdris` does not create a single derivation but rather an attribute set with two properties: `executable` and `library`. The `executable` property is a derivation and the `library` property is a function that will return a derivation for the library with or without source code included. Source code need not be included unless you are aiming to use IDE or LSP features that are able to jump to definitions within an editor.
···
```nix
{ fetchFromGitHub, idris2Packages }:
let lspLibPkg = idris2Packages.buildIdris {
-  projectName = "lsp-lib";
  src = fetchFromGitHub {
    owner = "idris-community";
    repo = "LSP-lib";
···
# Assuming the previous example lives in `lsp-lib.nix`:
let lspLib = callPackage ./lsp-lib.nix { };
    lspPkg = idris2Packages.buildIdris {
-      projectName = "idris2-lsp";
      src = fetchFromGitHub {
        owner = "idris-community";
        repo = "idris2-lsp";
···
In addition to exposing the Idris2 compiler itself, Nixpkgs exposes an `idris2Packages.buildIdris` helper to make it a bit more ergonomic to build Idris2 executables or libraries.

+The `buildIdris` function takes an attribute set that defines at a minimum the `src` and `ipkgName` of the package to be built and any `idrisLibraries` required to build it. The `src` is the same source you're familiar with and the `ipkgName` must be the name of the `ipkg` file for the project (omitting the `.ipkg` extension). The `idrisLibraries` is a list of other library derivations created with `buildIdris`. You can optionally specify other derivation properties as needed but sensible defaults for `configurePhase`, `buildPhase`, and `installPhase` are provided.

Importantly, `buildIdris` does not create a single derivation but rather an attribute set with two properties: `executable` and `library`. The `executable` property is a derivation and the `library` property is a function that will return a derivation for the library with or without source code included. Source code need not be included unless you are aiming to use IDE or LSP features that are able to jump to definitions within an editor.
···
```nix
{ fetchFromGitHub, idris2Packages }:
let lspLibPkg = idris2Packages.buildIdris {
+  ipkgName = "lsp-lib";
  src = fetchFromGitHub {
    owner = "idris-community";
    repo = "LSP-lib";
···
# Assuming the previous example lives in `lsp-lib.nix`:
let lspLib = callPackage ./lsp-lib.nix { };
    lspPkg = idris2Packages.buildIdris {
+      ipkgName = "idris2-lsp";
      src = fetchFromGitHub {
        owner = "idris-community";
        repo = "idris2-lsp";
···
# Contributing to this manual {#chap-contributing}

-The [DocBook] and CommonMark sources of the NixOS manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.
This manual uses the [Nixpkgs manual syntax](https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup).

You can quickly check your edits with the following:
···
# Contributing to this manual {#chap-contributing}

+The sources of the NixOS manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.
This manual uses the [Nixpkgs manual syntax](https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup).

You can quickly check your edits with the following:
···
## Building the Manual {#sec-writing-docs-building-the-manual}

-The DocBook sources of the [](#book-nixos-manual) are in the
[`nixos/doc/manual`](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual)
subdirectory of the Nixpkgs repository.
···
When this command successfully finishes, it will tell you where the
manual got generated. The HTML will be accessible through the `result`
symlink at `./result/share/doc/nixos/index.html`.
-
-## Editing DocBook XML {#sec-writing-docs-editing-docbook-xml}
-
-For general information on how to write in DocBook, see [DocBook 5: The
-Definitive Guide](https://tdg.docbook.org/tdg/5.1/).
-
-Emacs nXML Mode is very helpful for editing DocBook XML because it
-validates the document as you write, and precisely locates errors. To
-use it, see [](#sec-emacs-docbook-xml).
-
-[Pandoc](https://pandoc.org/) can generate DocBook XML from a multitude of
-formats, which makes a good starting point. Here is an example of Pandoc
-invocation to convert GitHub-Flavoured MarkDown to DocBook 5 XML:
-
-```ShellSession
-pandoc -f markdown_github -t docbook5 docs.md -o my-section.md
-```
-
-Pandoc can also quickly convert a single `section.xml` to HTML, which is
-helpful when drafting.
-
-Sometimes writing valid DocBook is too difficult. In this case,
-submit your documentation updates in a [GitHub
-Issue](https://github.com/NixOS/nixpkgs/issues/new) and someone will
-handle the conversion to XML for you.
-
-## Creating a Topic {#sec-writing-docs-creating-a-topic}
-
-You can use an existing topic as a basis for the new topic or create a
-topic from scratch.
-
-Keep the following guidelines in mind when you create and add a topic:
-
-- The NixOS [`book`](https://tdg.docbook.org/tdg/5.0/book.html)
-  element is in `nixos/doc/manual/manual.xml`. It includes several
-  [`parts`](https://tdg.docbook.org/tdg/5.0/book.html) which are in
-  subdirectories.
-
-- Store the topic file in the same directory as the `part` to which it
-  belongs. If your topic is about configuring a NixOS module, then the
-  XML file can be stored alongside the module definition `nix` file.
-
-- If you include multiple words in the file name, separate the words
-  with a dash. For example: `ipv6-config.xml`.
-
-- Make sure that the `xml:id` value is unique. You can use abbreviations
-  if the ID is too long. For example: `nixos-config`.
-
-- Determine whether your topic is a chapter or a section. If you are
-  unsure, open an existing topic file and check whether the main
-  element is chapter or section.
-
-## Adding a Topic to the Book {#sec-writing-docs-adding-a-topic}
-
-Open the parent CommonMark file and add a line to the list of
-chapters with the file name of the topic that you created. If you
-created a `section`, you add the file to the `chapter` file. If you created
-a `chapter`, you add the file to the `part` file.
-
-If the topic is about configuring a NixOS module, it can be
-automatically included in the manual by using the `meta.doc` attribute.
-See [](#sec-meta-attributes) for an explanation.
···
## Building the Manual {#sec-writing-docs-building-the-manual}

+The sources of the [](#book-nixos-manual) are in the
[`nixos/doc/manual`](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual)
subdirectory of the Nixpkgs repository.
···
When this command successfully finishes, it will tell you where the
manual got generated. The HTML will be accessible through the `result`
symlink at `./result/share/doc/nixos/index.html`.
+31 −6
nixos/doc/manual/release-notes/rl-2405.section.md
···
- [ollama](https://ollama.ai), server for running large language models locally.

- [Anki Sync Server](https://docs.ankiweb.net/sync-server.html), the official sync server built into recent versions of Anki. Available as [services.anki-sync-server](#opt-services.anki-sync-server.enable).
  The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been marked deprecated and will be dropped after 24.05 due to lack of maintenance of the anki-sync-server software.
···
- `idris2` was updated to v0.7.0. This version introduces breaking changes. Check out the [changelog](https://github.com/idris-lang/Idris2/blob/v0.7.0/CHANGELOG.md#v070) for details.

- `nitter` requires a `guest_accounts.jsonl` to be provided as a path or loaded into the default location at `/var/lib/nitter/guest_accounts.jsonl`. See [Guest Account Branch Deployment](https://github.com/zedeus/nitter/wiki/Guest-Account-Branch-Deployment) for details.

- Invidious has changed its default database username from `kemal` to `invidious`. Setups involving an externally provisioned database (i.e. `services.invidious.database.createLocally == false`) should adjust their configuration accordingly. The old `kemal` user will not be removed automatically even when the database is provisioned automatically. (https://github.com/NixOS/nixpkgs/pull/265857)
···
- `services.avahi.nssmdns` got split into `services.avahi.nssmdns4` and `services.avahi.nssmdns6` which enable the mDNS NSS switch for IPv4 and IPv6 respectively.
  Since most mDNS responders only register IPv4 addresses, most users want to keep the IPv6 support disabled to avoid long timeouts.

-- `multi-user.target` no longer depends on `network-online.target`.
-  This will potentially break services that assumed this was the case in the past.
-  This was changed for consistency with other distributions as well as improved boot times.
-
-  We have added a warning for services that are
-  `after = [ "network-online.target" ]` but do not depend on it (e.g. using `wants`).

- `services.archisteamfarm` no longer uses the abbreviation `asf` for its state directory (`/var/lib/asf`), user and group (both `asf`). Instead the long name `archisteamfarm` is used.
  Configurations with `system.stateVersion` 23.11 or earlier, default to the old stateDirectory until the 24.11 release and must either set the option explicitly or move the data to the new directory.
···
  - The `-data` path is no longer required to run the package, and will be set to point to a folder in `$TMP` if missing.

## Other Notable Changes {#sec-release-24.05-notable-changes}

<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
···
- A new hardening flag, `zerocallusedregs`, was made available, corresponding to the gcc/clang option `-fzero-call-used-regs=used-gpr`.

- The Yama LSM is now enabled by default in the kernel, which prevents ptracing
  non-child processes. This means you will not be able to attach gdb to an
  existing process, but will need to start that process from gdb (so it is a
  child). Or you can set `boot.kernel.sysctl."kernel.yama.ptrace_scope"` to 0.

- [Nginx virtual hosts](#opt-services.nginx.virtualHosts) using `forceSSL` or
  `globalRedirect` can now have redirect codes other than 301 through
···5455- [ollama](https://ollama.ai), server for running large language models locally.
5657+- [hebbot](https://github.com/haecker-felix/hebbot), a Matrix bot to generate "This Week in X" like blog posts. Available as [services.hebbot](#opt-services.hebbot.enable).
58+59- [Anki Sync Server](https://docs.ankiweb.net/sync-server.html), the official sync server built into recent versions of Anki. Available as [services.anki-sync-server](#opt-services.anki-sync-server.enable).
60The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been marked deprecated and will be dropped after 24.05 due to lack of maintenance of the anki-sync-server software.
61···88- `idris2` was updated to v0.7.0. This version introduces breaking changes. Check out the [changelog](https://github.com/idris-lang/Idris2/blob/v0.7.0/CHANGELOG.md#v070) for details.
8990- `nitter` requires a `guest_accounts.jsonl` to be provided as a path or loaded into the default location at `/var/lib/nitter/guest_accounts.jsonl`. See [Guest Account Branch Deployment](https://github.com/zedeus/nitter/wiki/Guest-Account-Branch-Deployment) for details.
91+92+- `services.aria2.rpcSecret` has been replaced with `services.aria2.rpcSecretFile`.
93+ This was done so that secrets aren't stored in the world-readable nix store.
94+ To migrate, you will have to create a file containing the exact same string, and change
95+ your module options to point to that file. For example, `services.aria2.rpcSecret =
96+ "mysecret"` becomes `services.aria2.rpcSecretFile = "/path/to/secret_file"`
97+ where the file `secret_file` contains the string `mysecret`.
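  Sketched as a before/after snippet (the secret file path is an example; any location readable only by root works):

  ```nix
  # Before (removed in 24.05):
  # services.aria2.rpcSecret = "mysecret";

  # After: the file contains exactly the string "mysecret".
  services.aria2.rpcSecretFile = "/path/to/secret_file";
  ```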
9899- Invidious has changed its default database username from `kemal` to `invidious`. Setups involving an externally provisioned database (i.e. `services.invidious.database.createLocally == false`) should adjust their configuration accordingly. The old `kemal` user will not be removed automatically even when the database is provisioned automatically ([#265857](https://github.com/NixOS/nixpkgs/pull/265857)).
100···151- `services.avahi.nssmdns` got split into `services.avahi.nssmdns4` and `services.avahi.nssmdns6` which enable the mDNS NSS switch for IPv4 and IPv6 respectively.
152 Since most mDNS responders only register IPv4 addresses, most users want to keep the IPv6 support disabled to avoid long timeouts.
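  For a typical IPv4-only setup, the replacement is a one-line rename (sketch):

  ```nix
  # Was: services.avahi.nssmdns = true;
  services.avahi.nssmdns4 = true;
  # services.avahi.nssmdns6 stays disabled to avoid long timeouts.
  ```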
153154+- A warning has been added for services that are
155+ `after = [ "network-online.target" ]` but do not depend on it (e.g. using
156+ `wants`), because the dependency that `multi-user.target` has on
157+ `network-online.target` is planned for removal.
00158159- `services.archisteamfarm` no longer uses the abbreviation `asf` for its state directory (`/var/lib/asf`), user and group (both `asf`). Instead the long name `archisteamfarm` is used.
160 Configurations with `system.stateVersion` 23.11 or earlier default to the old stateDirectory until the 24.11 release and must either set the option explicitly or move the data to the new directory.
···207208 - The `-data` path is no longer required to run the package, and will be set to point to a folder in `$TMP` if missing.
209210+- `nomad` has been updated - note that HashiCorp recommends updating one minor version at a time. Please check [their upgrade guide](https://developer.hashicorp.com/nomad/docs/upgrade) for information on safely updating clusters and potential breaking changes.
211+212+ - `nomad` is now Nomad 1.7.x.
213+214+ - `nomad_1_4` has been removed, as it is now unsupported upstream.
215+216+- The `livebook` package is now built as a `mix release` instead of an `escript`.
217+ This means that configuration now has to be done using [environment variables](https://hexdocs.pm/livebook/readme.html#environment-variables) instead of command line arguments.
218+ This has the further implication that the `livebook` service configuration has changed:
219+220+ - The `erlang_node_short_name`, `erlang_node_name`, `port` and `options` configuration parameters are gone, and have been replaced with an `environment` parameter.
221+ Use the appropriate [environment variables](https://hexdocs.pm/livebook/readme.html#environment-variables) inside `environment` to configure the service instead.
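  For example, a configuration that previously used the `port` option might be migrated like this (values are illustrative; see the linked list of environment variables):

  ```nix
  services.livebook = {
    enableUserService = true;
    # Was: port = 20123;
    environment.LIVEBOOK_PORT = 20123;
  };
  ```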
222+223## Other Notable Changes {#sec-release-24.05-notable-changes}
224225<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
···263264- A new hardening flag, `zerocallusedregs` was made available, corresponding to the gcc/clang option `-fzero-call-used-regs=used-gpr`.
265266+- New options were added to the dnsdist module to enable and configure a DNSCrypt endpoint (see `services.dnsdist.dnscrypt.enable`, etc.).
267+ The module can generate the DNSCrypt provider key pair, certificates and also performs their rotation automatically with no downtime.
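  A minimal sketch of enabling the endpoint with the options added by this change:

  ```nix
  services.dnsdist = {
    enable = true;
    # Serves DNSCrypt on 0.0.0.0:443 by default; a provider key pair is
    # generated in /var/lib/dnsdist on first run when none is supplied.
    dnscrypt.enable = true;
  };
  ```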
268+269- The Yama LSM is now enabled by default in the kernel, which prevents ptracing
270 non-child processes. This means you will not be able to attach gdb to an
271 existing process, but will need to start that process from gdb (so it is a
272 child). Or you can set `boot.kernel.sysctl."kernel.yama.ptrace_scope"` to 0.
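  If you need to attach debuggers to already-running processes, the previous behaviour can be restored with the sysctl mentioned above:

  ```nix
  # 0 restores classic ptrace permissions (processes of the same UID).
  boot.kernel.sysctl."kernel.yama.ptrace_scope" = 0;
  ```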
273+274+- The netbird module now allows running multiple tunnels in parallel through [`services.netbird.tunnels`](#opt-services.netbird.tunnels).
275276- [Nginx virtual hosts](#opt-services.nginx.virtualHosts) using `forceSSL` or
277 `globalRedirect` can now have redirect codes other than 301 through
···768 self.booted = False
769 self.connected = False
770771+ def wait_for_qmp_event(
772+ self, event_filter: Callable[[dict[str, Any]], bool], timeout: int = 60 * 10
773+ ) -> dict[str, Any]:
774+ """
775+ Wait for a QMP event which you can filter with the `event_filter` function.
776+ The filter function takes the event dictionary as input; if it returns True, we return that event,
777+ otherwise we wait for the next event and retry.
778779+ It will skip all events received in the meantime; if you want to keep them,
780+ you have to do the bookkeeping yourself and store them somewhere.
781+782+ By default, it will wait up to 10 minutes, `timeout` is in seconds.
783+ """
784+ if self.qmp_client is None:
785+ raise RuntimeError("QMP API is not ready yet, is the VM ready?")
786787+ start = time.time()
788+ while True:
789+ remaining = timeout - (time.time() - start)
790+ if remaining <= 0:
791+ raise TimeoutError
792+ evt = self.qmp_client.wait_for_event(timeout=remaining)
793+ if event_filter(evt):
794+ return evt
796+797 def get_tty_text(self, tty: str) -> str:
798 status, output = self.execute(
799 f"fold -w$(stty -F /dev/tty{tty} size | "
···39 # Allow the user to log in as root without a password.
40 users.users.root.initialHashedPassword = "";
4100042 # Allow passwordless sudo from nixos user
43 security.sudo = {
44 enable = mkDefault true;
···39 # Allow the user to log in as root without a password.
40 users.users.root.initialHashedPassword = "";
4142+ # Don't require sudo/root to `reboot` or `poweroff`.
43+ security.polkit.enable = true;
44+45 # Allow passwordless sudo from nixos user
46 security.sudo = {
47 enable = mkDefault true;
···9{
10 options = {
11 programs.light = {
12+13 enable = mkOption {
14 default = false;
15 type = types.bool;
···18 and udev rules granting access to members of the "video" group.
19 '';
20 };
21+22+ brightnessKeys = {
23+ enable = mkOption {
24+ type = types.bool;
25+ default = false;
26+ description = ''
27+ Whether to enable brightness control with keyboard keys.
28+29+ This is mainly useful for minimalistic (desktop) environments. You
30+ may want to leave this disabled if you run a feature-rich desktop
31+ environment such as KDE, GNOME or Xfce as those handle the
32+ brightness keys themselves. However, enabling brightness control
33+ with this setting makes the control independent of X, so the keys
34+ work in non-graphical ttys, and you might want to consider using this
35+ instead of the default offered by the desktop environment.
36+37+ Enabling this will turn on {option}`services.actkbd`.
38+ '';
39+ };
40+41+ step = mkOption {
42+ type = types.int;
43+ default = 10;
44+ description = ''
45+ The percentage value by which to increase/decrease brightness.
46+ '';
47+ };
48+49+ };
50+51 };
52 };
5354 config = mkIf cfg.enable {
55 environment.systemPackages = [ pkgs.light ];
56 services.udev.packages = [ pkgs.light ];
57+ services.actkbd = mkIf cfg.brightnessKeys.enable {
58+ enable = true;
59+ bindings = let
60+ light = "${pkgs.light}/bin/light";
61+ step = toString cfg.brightnessKeys.step;
62+ in [
63+ {
64+ keys = [ 224 ];
65+ events = [ "key" ];
66+ # Use minimum brightness 0.1 so the display won't go totally black.
67+ command = "${light} -N 0.1 && ${light} -U ${step}";
68+ }
69+ {
70+ keys = [ 225 ];
71+ events = [ "key" ];
72+ command = "${light} -A ${step}";
73+ }
74+ ];
75+ };
76 };
77}
+15
nixos/modules/programs/mouse-actions.nix
···000000000000000
···1+{ config, lib, pkgs, ... }:
2+3+let
4+ cfg = config.programs.mouse-actions;
5+in
6+ {
7+ options.programs.mouse-actions = {
8+ enable = lib.mkEnableOption ''
9+ mouse-actions udev rules. This is a prerequisite for using mouse-actions without being root.
10+ '';
11+ };
12+ config = lib.mkIf cfg.enable {
13+ services.udev.packages = [ pkgs.mouse-actions ];
14+ };
15+ }
···15{
16 services.livebook = {
17 enableUserService = true;
18- port = 20123;
00019 # See note below about security
20- environmentFile = pkgs.writeText "livebook.env" ''
21- LIVEBOOK_PASSWORD = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
22- '';
23 };
24}
25```
···30is running under, so securing access to it with a password is highly
31recommended.
3233-Putting the password in the Nix configuration like above is an easy
34-way to get started but it is not recommended in the real world because
35-the `livebook.env` file will be added to the world-readable Nix store.
36-A better approach would be to put the password in some secure
37-user-readable location and set `environmentFile = /home/user/secure/livebook.env`.
3839:::
000004041### Extra dependencies {#module-services-livebook-extra-dependencies}
42
···15{
16 services.livebook = {
17 enableUserService = true;
18+ environment = {
19+ LIVEBOOK_PORT = 20123;
20+ LIVEBOOK_PASSWORD = "mypassword";
21+ };
22 # See note below about security
23+ environmentFile = "/var/lib/livebook.env";
0024 };
25}
26```
···31is running under, so securing access to it with a password is highly
32recommended.
3334+Putting the password in the Nix configuration like above is an easy way to get
35+started but it is not recommended in the real world because the resulting
36+environment variables can be read by unprivileged users. A better approach
37+would be to put the password in some secure user-readable location and set
38+`environmentFile = /home/user/secure/livebook.env`.
3940:::
41+42+The [Livebook
43+documentation](https://hexdocs.pm/livebook/readme.html#environment-variables)
44+lists all the applicable environment variables. It is recommended to at least
45+set `LIVEBOOK_PASSWORD` or `LIVEBOOK_TOKEN_ENABLED=false`.
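One way to create such a file is sketched below; the path is an example, pick any location readable by the user running the service:

```shell
# Create the environment file with owner-only permissions so other
# local users cannot read the password.
umask 077
mkdir -p "$HOME/secure"
cat > "$HOME/secure/livebook.env" <<'EOF'
LIVEBOOK_PASSWORD=change-me-to-a-long-random-password
EOF
```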
4647### Extra dependencies {#module-services-livebook-extra-dependencies}
48
+51-50
nixos/modules/services/development/livebook.nix
···1415 package = mkPackageOption pkgs "livebook" { };
1617- environmentFile = mkOption {
18- type = types.path;
019 description = lib.mdDoc ''
20- Environment file as defined in {manpage}`systemd.exec(5)` passed to the service.
00002122- This must contain at least `LIVEBOOK_PASSWORD` or
23- `LIVEBOOK_TOKEN_ENABLED=false`. See `livebook server --help`
24- for other options.'';
00000000000000025 };
2627- erlang_node_short_name = mkOption {
28- type = with types; nullOr str;
29 default = null;
30- example = "livebook";
31- description = "A short name for the distributed node.";
32- };
3334- erlang_node_name = mkOption {
35- type = with types; nullOr str;
36- default = null;
37- example = "livebook@127.0.0.1";
38- description = "The name for the app distributed node.";
39- };
04041- port = mkOption {
42- type = types.port;
43- default = 8080;
44- description = "The port to start the web application on.";
45- };
4647- address = mkOption {
48- type = types.str;
49- default = "127.0.0.1";
50- description = lib.mdDoc ''
51- The address to start the web application on. Must be a valid IPv4 or
52- IPv6 address.
53- '';
54- };
5556- options = mkOption {
57- type = with types; attrsOf str;
58- default = { };
59- description = lib.mdDoc ''
60- Additional options to pass as command-line arguments to the server.
61- '';
62- example = literalExpression ''
63- {
64- cookie = "a value shared by all nodes in this cluster";
65- }
66 '';
067 };
6869 extraPackages = mkOption {
···81 serviceConfig = {
82 Restart = "always";
83 EnvironmentFile = cfg.environmentFile;
84- ExecStart =
85- let
86- args = lib.cli.toGNUCommandLineShell { } ({
87- inherit (cfg) port;
88- ip = cfg.address;
89- name = cfg.erlang_node_name;
90- sname = cfg.erlang_node_short_name;
91- } // cfg.options);
92- in
93- "${cfg.package}/bin/livebook server ${args}";
94 };
00095 path = [ pkgs.bash ] ++ cfg.extraPackages;
96 wantedBy = [ "default.target" ];
97 };
···1415 package = mkPackageOption pkgs "livebook" { };
1617+ environment = mkOption {
18+ type = with types; attrsOf (nullOr (oneOf [ bool int str ]));
19+ default = { };
20 description = lib.mdDoc ''
21+ Environment variables to set.
22+23+ Livebook is configured through the use of environment variables. The
24+ available configuration options can be found in the [Livebook
25+ documentation](https://hexdocs.pm/livebook/readme.html#environment-variables).
2627+ Note that all environment variables set through this configuration
28+ parameter will be readable by anyone with access to the host
29+ machine. Therefore, sensitive information like {env}`LIVEBOOK_PASSWORD`
30+ or {env}`LIVEBOOK_COOKIE` should never be set using this configuration
31+ option, but should instead use
32+ [](#opt-services.livebook.environmentFile). See the documentation for
33+ that option for more information.
34+35+ Any environment variables specified in the
36+ [](#opt-services.livebook.environmentFile) will supersede environment
37+ variables specified in this option.
38+ '';
39+40+ example = literalExpression ''
41+ {
42+ LIVEBOOK_PORT = 8080;
43+ }
44+ '';
45 };
4647+ environmentFile = mkOption {
48+ type = with types; nullOr types.path;
49 default = null;
50+ description = lib.mdDoc ''
51+ Additional environment file as defined in {manpage}`systemd.exec(5)`.
05253+ Secrets like {env}`LIVEBOOK_PASSWORD` (which is used to specify the
54+ password needed to access the livebook site) or {env}`LIVEBOOK_COOKIE`
55+ (which is used to specify the
56+ [cookie](https://www.erlang.org/doc/reference_manual/distributed.html#security)
57+ used to connect to the running Elixir system) may be passed to the
58+ service without making them readable to everyone with access to
59+ systemctl by using this configuration parameter.
6061+ Note that this file needs to be available on the host on which
62+ `livebook` is running.
0006364+ For security purposes, this file should contain at least
65+ {env}`LIVEBOOK_PASSWORD` or {env}`LIVEBOOK_TOKEN_ENABLED=false`.
0000006667+ See the [Livebook
68+ documentation](https://hexdocs.pm/livebook/readme.html#environment-variables)
69+ and the [](#opt-services.livebook.environment) configuration parameter
70+ for further options.
00000071 '';
72+ example = "/var/lib/livebook.env";
73 };
7475 extraPackages = mkOption {
···87 serviceConfig = {
88 Restart = "always";
89 EnvironmentFile = cfg.environmentFile;
90+ ExecStart = "${cfg.package}/bin/livebook start";
91+ KillMode = "mixed";
0000000092 };
93+ environment = mapAttrs (name: value:
94+ if isBool value then boolToString value else toString value)
95+ cfg.environment;
96 path = [ pkgs.bash ] ++ cfg.extraPackages;
97 wantedBy = [ "default.target" ];
98 };
···312 ipfs.gid = config.ids.gids.ipfs;
313 };
314315- systemd.tmpfiles.rules = [
316- "d '${cfg.dataDir}' - ${cfg.user} ${cfg.group} - -"
317- ] ++ optionals cfg.autoMount [
318- "d '${cfg.settings.Mounts.IPFS}' - ${cfg.user} ${cfg.group} - -"
319- "d '${cfg.settings.Mounts.IPNS}' - ${cfg.user} ${cfg.group} - -"
320- ];
0321322 # The hardened systemd unit breaks the fuse-mount function according to documentation in the unit file itself
323 systemd.packages = if cfg.autoMount
···312 ipfs.gid = config.ids.gids.ipfs;
313 };
314315+ systemd.tmpfiles.settings."10-kubo" = let
316+ defaultConfig = { inherit (cfg) user group; };
317+ in {
318+ ${cfg.dataDir}.d = defaultConfig;
319+ ${cfg.settings.Mounts.IPFS}.d = mkIf (cfg.autoMount) defaultConfig;
320+ ${cfg.settings.Mounts.IPNS}.d = mkIf (cfg.autoMount) defaultConfig;
321+ };
322323 # The hardened systemd unit breaks the fuse-mount function according to documentation in the unit file itself
324 systemd.packages = if cfg.autoMount
+10-5
nixos/modules/services/networking/aria2.nix
···18 dir=${cfg.downloadDir}
19 listen-port=${concatStringsSep "," (rangesToStringList cfg.listenPortRange)}
20 rpc-listen-port=${toString cfg.rpcListenPort}
21- rpc-secret=${cfg.rpcSecret}
22 '';
2324in
25{
000026 options = {
27 services.aria2 = {
28 enable = mkOption {
···65 default = 6800;
66 description = lib.mdDoc "Specify a port number for JSON-RPC/XML-RPC server to listen to. Possible Values: 1024-65535";
67 };
68- rpcSecret = mkOption {
69- type = types.str;
70- default = "aria2rpc";
71 description = lib.mdDoc ''
72- Set RPC secret authorization token.
73 Read https://aria2.github.io/manual/en/html/aria2c.html#rpc-auth to know how this option value is used.
74 '';
75 };
···117 touch "${sessionFile}"
118 fi
119 cp -f "${settingsFile}" "${settingsDir}/aria2.conf"
0120 '';
121122 serviceConfig = {
···125 ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
126 User = "aria2";
127 Group = "aria2";
0128 };
129 };
130 };
···18 dir=${cfg.downloadDir}
19 listen-port=${concatStringsSep "," (rangesToStringList cfg.listenPortRange)}
20 rpc-listen-port=${toString cfg.rpcListenPort}
021 '';
2223in
24{
25+ imports = [
26+ (mkRemovedOptionModule [ "services" "aria2" "rpcSecret" ] "Use services.aria2.rpcSecretFile instead")
27+ ];
28+29 options = {
30 services.aria2 = {
31 enable = mkOption {
···68 default = 6800;
69 description = lib.mdDoc "Specify a port number for JSON-RPC/XML-RPC server to listen to. Possible Values: 1024-65535";
70 };
71+ rpcSecretFile = mkOption {
72+ type = types.path;
73+ example = "/run/secrets/aria2-rpc-token.txt";
74 description = lib.mdDoc ''
75+ A file containing the RPC secret authorization token.
76 Read https://aria2.github.io/manual/en/html/aria2c.html#rpc-auth to know how this option value is used.
77 '';
78 };
···120 touch "${sessionFile}"
121 fi
122 cp -f "${settingsFile}" "${settingsDir}/aria2.conf"
123+ echo "rpc-secret=$(cat "$CREDENTIALS_DIRECTORY/rpcSecretFile")" >> "${settingsDir}/aria2.conf"
124 '';
125126 serviceConfig = {
···129 ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
130 User = "aria2";
131 Group = "aria2";
132+ LoadCredential = "rpcSecretFile:${cfg.rpcSecretFile}";
133 };
134 };
135 };
···45let
6 cfg = config.services.dnsdist;
7+8+ toLua = lib.generators.toLua {};
9+10+ mkBind = cfg: toLua "${cfg.listenAddress}:${toString cfg.listenPort}";
11+12 configFile = pkgs.writeText "dnsdist.conf" ''
13+ setLocal(${mkBind cfg})
14+ ${lib.optionalString cfg.dnscrypt.enable dnscryptSetup}
15 ${cfg.extraConfig}
16 '';
17+18+ dnscryptSetup = ''
19+ last_rotation = 0
20+ cert_serial = 0
21+ provider_key = ${toLua cfg.dnscrypt.providerKey}
22+ cert_lifetime = ${toLua cfg.dnscrypt.certLifetime} * 60
23+24+ function file_exists(name)
25+ local f = io.open(name, "r")
26+ return f ~= nil and io.close(f)
27+ end
28+29+ function dnscrypt_setup()
30+ -- generate provider keys on first run
31+ if provider_key == nil then
32+ provider_key = "/var/lib/dnsdist/private.key"
33+ if not file_exists(provider_key) then
34+ generateDNSCryptProviderKeys("/var/lib/dnsdist/public.key",
35+ "/var/lib/dnsdist/private.key")
36+ print("DNSCrypt: generated provider keypair")
37+ end
38+ end
39+40+ -- generate resolver certificate
41+ local now = os.time()
42+ generateDNSCryptCertificate(
43+ provider_key, "/run/dnsdist/resolver.cert", "/run/dnsdist/resolver.key",
44+ cert_serial, now - 60, now + cert_lifetime)
45+ addDNSCryptBind(
46+ ${mkBind cfg.dnscrypt}, ${toLua cfg.dnscrypt.providerName},
47+ "/run/dnsdist/resolver.cert", "/run/dnsdist/resolver.key")
48+ end
49+50+ function maintenance()
51+ -- certificate rotation
52+ local now = os.time()
53+ local dnscrypt = getDNSCryptBind(0)
54+55+ if ((now - last_rotation) > 0.9 * cert_lifetime) then
56+ -- generate and start using a new certificate
57+ dnscrypt:generateAndLoadInMemoryCertificate(
58+ provider_key, cert_serial + 1,
59+ now - 60, now + cert_lifetime)
60+61+ -- stop advertising the last certificate
62+ dnscrypt:markInactive(cert_serial)
63+64+ -- remove the second to last certificate
65+ if (cert_serial > 1) then
66+ dnscrypt:removeInactiveCertificate(cert_serial - 1)
67+ end
68+69+ print("DNSCrypt: rotated certificate")
70+71+ -- increment serial number
72+ cert_serial = cert_serial + 1
73+ last_rotation = now
74+ end
75+ end
76+77+ dnscrypt_setup()
78+ '';
79+80in {
81 options = {
82 services.dnsdist = {
···8485 listenAddress = mkOption {
86 type = types.str;
87+ description = lib.mdDoc "Listen IP address";
88 default = "0.0.0.0";
89 };
90 listenPort = mkOption {
91+ type = types.port;
92 description = lib.mdDoc "Listen port";
93 default = 53;
94 };
9596+ dnscrypt = {
97+ enable = mkEnableOption (lib.mdDoc "a DNSCrypt endpoint to dnsdist");
98+99+ listenAddress = mkOption {
100+ type = types.str;
101+ description = lib.mdDoc "Listen IP address of the endpoint";
102+ default = "0.0.0.0";
103+ };
104+105+ listenPort = mkOption {
106+ type = types.port;
107+ description = lib.mdDoc "Listen port of the endpoint";
108+ default = 443;
109+ };
110+111+ providerName = mkOption {
112+ type = types.str;
113+ default = "2.dnscrypt-cert.${config.networking.hostName}";
114+ defaultText = literalExpression "2.dnscrypt-cert.\${config.networking.hostName}";
115+ example = "2.dnscrypt-cert.myresolver";
116+ description = lib.mdDoc ''
117+ The name that will be given to this DNSCrypt resolver.
118+119+ ::: {.note}
120+ The provider name must start with `2.dnscrypt-cert.`.
121+ :::
122+ '';
123+ };
124+125+ providerKey = mkOption {
126+ type = types.nullOr types.path;
127+ default = null;
128+ description = lib.mdDoc ''
129+ The filepath to the provider secret key.
130+ If not given a new provider key pair will be generated in
131+ /var/lib/dnsdist on the first run.
132+133+ ::: {.note}
134+ The file must be readable by the dnsdist user/group.
135+ :::
136+ '';
137+ };
138+139+ certLifetime = mkOption {
140+ type = types.ints.positive;
141+ default = 15;
142+ description = lib.mdDoc ''
143+ The lifetime (in minutes) of the resolver certificate.
144+ This will be automatically rotated before expiration.
145+ '';
146+ };
147+148+ };
149+150 extraConfig = mkOption {
151 type = types.lines;
152 default = "";
···158 };
159160 config = mkIf cfg.enable {
161+ users.users.dnsdist = {
162+ description = "dnsdist daemon user";
163+ isSystemUser = true;
164+ group = "dnsdist";
165+ };
166+167+ users.groups.dnsdist = {};
168+169 systemd.packages = [ pkgs.dnsdist ];
170171 systemd.services.dnsdist = {
···173174 startLimitIntervalSec = 0;
175 serviceConfig = {
176+ User = "dnsdist";
177+ Group = "dnsdist";
178+ RuntimeDirectory = "dnsdist";
179+ StateDirectory = "dnsdist";
180 # upstream overrides for better nixos compatibility
181 ExecStartPre = [ "" "${pkgs.dnsdist}/bin/dnsdist --check-config --config ${configFile}" ];
182 ExecStart = [ "" "${pkgs.dnsdist}/bin/dnsdist --supervised --disable-syslog --config ${configFile}" ];
+8-4
nixos/modules/services/networking/headscale.nix
···444 tls_letsencrypt_cache_dir = "${dataDir}/.cache";
445 };
446447- # Setup the headscale configuration in a known path in /etc to
448- # allow both the Server and the Client use it to find the socket
449- # for communication.
450- environment.etc."headscale/config.yaml".source = configFile;
0000451452 users.groups.headscale = mkIf (cfg.group == "headscale") {};
453
···444 tls_letsencrypt_cache_dir = "${dataDir}/.cache";
445 };
446447+ environment = {
448+ # Setup the headscale configuration in a known path in /etc to
449+ # allow both the Server and the Client use it to find the socket
450+ # for communication.
451+ etc."headscale/config.yaml".source = configFile;
452+453+ systemPackages = [ cfg.package ];
454+ };
455456 users.groups.headscale = mkIf (cfg.group == "headscale") {};
457
···395 };
396 };
397398- systemd.tmpfiles.rules = [
399- "d /var/log/jitsi/jibri 755 jibri jibri"
400- ];
401-402-403404 # Configure Chromium to not show the "Chrome is being controlled by automatic test software" message.
405 environment.etc."chromium/policies/managed/managed_policies.json".text = builtins.toJSON { CommandLineFlagSecurityWarningsEnabled = false; };
···395 };
396 };
397398+ systemd.tmpfiles.settings."10-jibri"."/var/log/jitsi/jibri".d = {
399+ user = "jibri";
400+ group = "jibri";
401+ mode = "755";
402+ };
403404 # Configure Chromium to not show the "Chrome is being controlled by automatic test software" message.
405 environment.etc."chromium/policies/managed/managed_policies.json".text = builtins.toJSON { CommandLineFlagSecurityWarningsEnabled = false; };
···1+# Netbird {#module-services-netbird}
2+3+## Quickstart {#module-services-netbird-quickstart}
4+5+The absolute minimal configuration for the netbird daemon looks like this:
6+7+```nix
8+services.netbird.enable = true;
9+```
11+This will set up a netbird service listening on port `51820`, associated with the
12+`wt0` interface.
13+14+It is strictly equivalent to setting:
15+16+```nix
17+services.netbird.tunnels.wt0.stateDir = "netbird";
18+```
19+20+The `enable` option is mainly kept for backward compatibility, as defining netbird
21+tunnels through the `tunnels` option is more expressive.
22+23+## Multiple connections setup {#module-services-netbird-multiple-connections}
24+25+Using the `services.netbird.tunnels` option, it is also possible to define more than
26+one netbird service running at the same time.
27+28+The following configuration will start a netbird daemon using the interface `wt1` and
29+the port 51830. Its configuration file will then be located at `/var/lib/netbird-wt1/config.json`.
30+31+```nix
32+services.netbird.tunnels = {
33+ wt1 = {
34+ port = 51830;
35+ };
36+};
37+```
38+39+To interact with it, you will need to specify the correct daemon address:
40+41+```bash
42+netbird --daemon-addr unix:///var/run/netbird-wt1/sock ...
43+```
45+The daemon address will by default be `unix:///var/run/netbird-<name>/sock`.
46+47+It is also possible to overwrite default options passed to the service, for
48+example:
49+50+```nix
51+services.netbird.tunnels.wt1.environment = {
52+ NB_DAEMON_ADDR = "unix:///var/run/toto.sock";
53+};
54+```
55+56+This will set the socket to interact with the netbird service to `/var/run/toto.sock`.
···277278 # The systemd service will fail to execute the preStart hook
279 # if the WorkingDirectory does not exist
280- systemd.tmpfiles.rules = [
281- ''d "${cfg.statePath}" -''
282- ];
283284 systemd.services.mattermost = {
285 description = "Mattermost chat service";
···277278 # The systemd service will fail to execute the preStart hook
279 # if the WorkingDirectory does not exist
280+ systemd.tmpfiles.settings."10-mattermost".${cfg.statePath}.d = { };
00281282 systemd.services.mattermost = {
283 description = "Mattermost chat service";
···163 Please do not disable HTTPS mode in production. In this mode, access to the nifi is opened without authentication.
164 '';
165166- systemd.tmpfiles.rules = [
167- "d '/var/lib/nifi/conf' 0750 ${cfg.user} ${cfg.group}"
168- "L+ '/var/lib/nifi/lib' - - - - ${cfg.package}/lib"
169- ];
00000170171172 systemd.services.nifi = {
···163 Please do not disable HTTPS mode in production. In this mode, access to the nifi is opened without authentication.
164 '';
165166+ systemd.tmpfiles.settings."10-nifi" = {
167+ "/var/lib/nifi/conf".d = {
168+ inherit (cfg) user group;
169+ mode = "0750";
170+ };
171+ "/var/lib/nifi/lib"."L+" = {
172+ argument = "${cfg.package}/lib";
173+ };
174+ };
175176177 systemd.services.nifi = {
···1415Secrets are pinned against the presence of a TPM2 device, for example:
16```
17-echo hi | clevis encrypt tpm2 '{}' > hi.jwe
18```
192) Tang policies
2021Secrets are pinned against the presence of a Tang server, for example:
22```
23-echo hi | clevis encrypt tang '{"url": "http://tang.local"}' > hi.jwe
24```
25263) Shamir Secret Sharing
2728Using Shamir's Secret Sharing ([sss](https://en.wikipedia.org/wiki/Shamir%27s_secret_sharing)), secrets are pinned using a combination of the two preceding policies. For example:
29```
30-echo hi | clevis encrypt sss \
31'{"t": 2, "pins": {"tpm2": {"pcr_ids": "0"}, "tang": {"url": "http://tang.local"}}}' \
32> hi.jwe
33```
···1415Secrets are pinned against the presence of a TPM2 device, for example:
16```
17+echo -n hi | clevis encrypt tpm2 '{}' > hi.jwe
18```
192) Tang policies
2021Secrets are pinned against the presence of a Tang server, for example:
22```
23+echo -n hi | clevis encrypt tang '{"url": "http://tang.local"}' > hi.jwe
24```
25263) Shamir Secret Sharing
2728Using Shamir's Secret Sharing ([sss](https://en.wikipedia.org/wiki/Shamir%27s_secret_sharing)), secrets are pinned using a combination of the two preceding policies. For example:
29```
30+echo -n hi | clevis encrypt sss \
31'{"t": 2, "pins": {"tpm2": {"pcr_ids": "0"}, "tang": {"url": "http://tang.local"}}}' \
32> hi.jwe
33```
···64 # This is not supported at the moment.
65 # https://trello.com/b/HHs01Pab/cinnamon-wayland
66 machine.execute("${su "cinnamon-screensaver-command -l >&2 &"}")
67- machine.wait_until_succeeds("journalctl -b --grep 'Cinnamon Screensaver is unavailable on Wayland'")
6869 with subtest("Open GNOME Terminal"):
70 machine.succeed("${su "dbus-launch gnome-terminal"}")
···64 # This is not supported at the moment.
65 # https://trello.com/b/HHs01Pab/cinnamon-wayland
66 machine.execute("${su "cinnamon-screensaver-command -l >&2 &"}")
67+ machine.wait_until_succeeds("journalctl -b --grep 'cinnamon-screensaver is disabled in wayland sessions'")
6869 with subtest("Open GNOME Terminal"):
70 machine.succeed("${su "dbus-launch gnome-terminal"}")
···1-{ lib, buildGoModule, fetchFromGitHub }:
23# SHA of ${version} for the tool's help output. Unfortunately this is needed in build flags.
4-let rev = "6f9e27f1795f10475c9f6f5decdff692e1e228da";
05in
6buildGoModule rec {
7 pname = "sonobuoy";
···2728 subPackages = [ "." ];
2900000000030 meta = with lib; {
31- description = ''
32- Diagnostic tool that makes it easier to understand the
33- state of a Kubernetes cluster.
34- '';
35 longDescription = ''
36 Sonobuoy is a diagnostic tool that makes it easier to understand the state of
37 a Kubernetes cluster by running a set of Kubernetes conformance tests in an
···39 '';
4041 homepage = "https://sonobuoy.io";
042 license = licenses.asl20;
043 maintainers = with maintainers; [ carlosdagos saschagrunert wilsonehusin ];
44 };
45}
···1+{ lib, buildGoModule, fetchFromGitHub, testers, sonobuoy }:
23# SHA of ${version} for the tool's help output. Unfortunately this is needed in build flags.
4+# The update script can update this automatically; the comment is used to find the line.
5+let rev = "6f9e27f1795f10475c9f6f5decdff692e1e228da"; # update-commit-sha
6in
7buildGoModule rec {
8 pname = "sonobuoy";
···2829 subPackages = [ "." ];
3031+ passthru = {
32+ updateScript = ./update.sh;
33+ tests.version = testers.testVersion {
34+ package = sonobuoy;
35+ command = "sonobuoy version";
36+ version = "v${version}";
37+ };
38+ };
39+40 meta = with lib; {
41+ description = "Diagnostic tool that makes it easier to understand the state of a Kubernetes cluster";
00042 longDescription = ''
43 Sonobuoy is a diagnostic tool that makes it easier to understand the state of
44 a Kubernetes cluster by running a set of Kubernetes conformance tests in an
···46 '';
4748 homepage = "https://sonobuoy.io";
49+ changelog = "https://github.com/vmware-tanzu/sonobuoy/releases/tag/v${version}";
50 license = licenses.asl20;
51+ mainProgram = "sonobuoy";
52 maintainers = with maintainers; [ carlosdagos saschagrunert wilsonehusin ];
53 };
54}
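The `passthru.tests.version` attribute added above follows a common nixpkgs pattern: `testers.testVersion` builds a small check derivation that runs the packaged binary and verifies the expected version string appears in its output. A minimal sketch of the pattern, where `somepkg` is a hypothetical package attribute and not part of the change above:

```nix
# Sketch only: `somepkg` is a placeholder package.
{ testers, somepkg }:
testers.testVersion {
  package = somepkg;                 # derivation whose binary is run
  command = "somepkg --version";     # defaults to "${package.meta.mainProgram} --version"
  version = "v${somepkg.version}";   # string that must appear in the output
}
```

Such a test can then be built on its own with `nix-build -A somepkg.passthru.tests.version`.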
```diff
   # should come from or be proposed to upstream. This list will probably never
   # be empty since dependencies update all the time.
   packageUpgradePatches = [
+    # https://github.com/sagemath/sage/pull/37123, to land in 10.3.beta7
+    (fetchpatch {
+      name = "scipy-1.12-upgrade.patch";
+      url = "https://github.com/sagemath/sage/commit/54eec464e9fdf18b411d9148aecb918178e95909.diff";
+      sha256 = "sha256-9wyNrcSfF6mYFTIV4ev2OdD7igb0AeyZZYWSc/+JrIU=";
+    })
   ];

   patches = nixPatches ++ bugfixPatches ++ packageUpgradePatches;
```
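The sage change above uses `fetchpatch` to pull an already-merged upstream commit in as a patch; the `name`, the `.diff` URL, and the output hash pin it reproducibly, and a comment records where the change will land upstream. A generic sketch of the idiom, with placeholder owner/repo/commit values:

```nix
# Sketch: URL and commit are placeholders for whatever upstream change is needed.
(fetchpatch {
  name = "descriptive-name.patch"; # store name, useful when the URL is opaque
  url = "https://github.com/OWNER/REPO/commit/COMMITHASH.diff";
  # Use lib.fakeHash on the first build, then copy the real hash
  # from the resulting hash-mismatch error.
  sha256 = lib.fakeHash;
})
```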
```diff
 }:

 let
-  version = "1.17.5";
+  version = "1.17.6";
   # Using two URLs as the first one will break as soon as a new version is released
   src_bin = fetchurl {
     urls = [
       "http://www.makemkv.com/download/makemkv-bin-${version}.tar.gz"
       "http://www.makemkv.com/download/old/makemkv-bin-${version}.tar.gz"
     ];
-    sha256 = "ywCcMfaWAeL2bjFZJaCa0XW60EHyfFCW17Bt1QBN8E8=";
+    sha256 = "KHZGAFAp93HTZs8OT76xf88QM0UtlVVH3q57CZm07Rs=";
   };
   src_oss = fetchurl {
     urls = [
       "http://www.makemkv.com/download/makemkv-oss-${version}.tar.gz"
       "http://www.makemkv.com/download/old/makemkv-oss-${version}.tar.gz"
     ];
-    sha256 = "/C9LDcUxF6tJkn2aQV+nMILRpK5H3wxOMMxHEMTC/CI=";
+    sha256 = "2dtNdyv0+QYWQrfrIu5RQKSN4scSWKuLFNlJZXpxDUM=";
   };

 in mkDerivation {
```
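The bump above only has to touch `version` and the two hashes because `fetchurl` accepts a list of `urls` and tries them in order: the current download location is listed before the `old/` archive location, so the expression keeps working after upstream moves a release. A minimal sketch of the fallback idiom, with illustrative URLs:

```nix
# Sketch: hypothetical package using fetchurl's ordered URL fallback.
src = fetchurl {
  urls = [
    "https://example.com/download/foo-${version}.tar.gz"     # tried first
    "https://example.com/download/old/foo-${version}.tar.gz" # fallback
  ];
  # One hash covers both URLs, since they must serve identical contents.
  sha256 = lib.fakeHash;
};
```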
```diff
   meta = {
     description = "PC emulator";
+    longDescription = ''
+      VirtualBox is an x86 and AMD64/Intel64 virtualization product for enterprise and home use.
+
+      To install on NixOS, please use the option `virtualisation.virtualbox.host.enable = true`.
+      Please also check other options under `virtualisation.virtualbox`.
+    '';
     sourceProvenance = with lib.sourceTypes; [
       fromSource
       binaryNativeCode
```
```diff
-{ lib, stdenv
-, fetchurl, perl, gcc
-, ncurses5
-, ncurses6, gmp, libiconv, numactl
-, llvmPackages
-, coreutils
-, targetPackages
-
-  # minimal = true; will remove files that aren't strictly necessary for
-  # regular builds and GHC bootstrapping.
-  # This is "useful" for staying within hydra's output limits for at least the
-  # aarch64-linux architecture.
-, minimal ? false
-}:
-
-# Prebuilt only does native
-assert stdenv.targetPlatform == stdenv.hostPlatform;
-
-let
-  downloadsUrl = "https://downloads.haskell.org/ghc";
-
-  version = "8.10.2";
-
-  # Information about available bindists that we use in the build.
-  #
-  # # Bindist library checking
-  #
-  # The field `archSpecificLibraries` also provides a way for us get notified
-  # early when the upstream bindist changes its dependencies (e.g. because a
-  # newer Debian version is used that uses a new `ncurses` version).
-  #
-  # Usage:
-  #
-  # * You can find the `fileToCheckFor` of libraries by running `readelf -d`
-  #   on the compiler binary (`exePathForLibraryCheck`).
-  # * To skip library checking for an architecture,
-  #   set `exePathForLibraryCheck = null`.
-  # * To skip file checking for a specific arch specfic library,
-  #   set `fileToCheckFor = null`.
-  ghcBinDists = {
-    # Binary distributions for the default libc (e.g. glibc, or libSystem on Darwin)
-    # nixpkgs uses for the respective system.
-    defaultLibc = {
-      i686-linux = {
-        variantSuffix = "";
-        src = {
-          url = "${downloadsUrl}/${version}/ghc-${version}-i386-deb9-linux.tar.xz";
-          sha256 = "0bvwisl4w0z5z8z0da10m9sv0mhm9na2qm43qxr8zl23mn32mblx";
-        };
-        exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
-        archSpecificLibraries = [
-          { nixPackage = gmp; fileToCheckFor = null; }
-          # The i686-linux bindist provided by GHC HQ is currently built on Debian 9,
-          # which link it against `libtinfo.so.5` (ncurses 5).
-          # Other bindists are linked `libtinfo.so.6` (ncurses 6).
-          { nixPackage = ncurses5; fileToCheckFor = "libtinfo.so.5"; }
-        ];
-      };
-      x86_64-linux = {
-        variantSuffix = "";
-        src = {
-          url = "${downloadsUrl}/${version}/ghc-${version}-x86_64-deb10-linux.tar.xz";
-          sha256 = "0chnzy9j23b2wa8clx5arwz8wnjfxyjmz9qkj548z14cqf13slcl";
-        };
-        exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
-        archSpecificLibraries = [
-          { nixPackage = gmp; fileToCheckFor = null; }
-          { nixPackage = ncurses6; fileToCheckFor = "libtinfo.so.6"; }
-        ];
-      };
-      armv7l-linux = {
-        variantSuffix = "";
-        src = {
-          url = "${downloadsUrl}/${version}/ghc-${version}-armv7-deb10-linux.tar.xz";
-          sha256 = "1j41cq5d3rmlgz7hzw8f908fs79gc5mn3q5wz277lk8zdf19g75v";
-        };
-        exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
-        archSpecificLibraries = [
-          { nixPackage = gmp; fileToCheckFor = null; }
-          { nixPackage = ncurses6; fileToCheckFor = "libtinfo.so.6"; }
-        ];
-      };
-      aarch64-linux = {
-        variantSuffix = "";
-        src = {
-          url = "${downloadsUrl}/${version}/ghc-${version}-aarch64-deb10-linux.tar.xz";
-          sha256 = "14smwl3741ixnbgi0l51a7kh7xjkiannfqx15b72svky0y4l3wjw";
-        };
-        exePathForLibraryCheck = "ghc/stage2/build/tmp/ghc-stage2";
-        archSpecificLibraries = [
-          { nixPackage = gmp; fileToCheckFor = null; }
-          { nixPackage = ncurses6; fileToCheckFor = "libtinfo.so.6"; }
-          { nixPackage = numactl; fileToCheckFor = null; }
-        ];
-      };
-      x86_64-darwin = {
-        variantSuffix = "";
-        src = {
-          url = "${downloadsUrl}/${version}/ghc-${version}-x86_64-apple-darwin.tar.xz";
-          sha256 = "1hngyq14l4f950hzhh2d204ca2gfc98pc9xdasxihzqd1jq75dzd";
-        };
-        exePathForLibraryCheck = null; # we don't have a library check for darwin yet
-        archSpecificLibraries = [
-          { nixPackage = gmp; fileToCheckFor = null; }
-          { nixPackage = ncurses6; fileToCheckFor = null; }
-          { nixPackage = libiconv; fileToCheckFor = null; }
-        ];
-      };
-    };
-    # Binary distributions for the musl libc for the respective system.
-    musl = {
-      x86_64-linux = {
-        variantSuffix = "-musl";
-        src = {
-          url = "${downloadsUrl}/${version}/ghc-${version}-x86_64-alpine3.10-linux-integer-simple.tar.xz";
-          sha256 = "0xpcbyaxqyhbl6f0i3s4rp2jm67nqpkfh2qlbj3i2fiaix89ml0l";
-        };
-        exePathForLibraryCheck = "bin/ghc";
-        archSpecificLibraries = [
-          { nixPackage = gmp; fileToCheckFor = null; }
-          # In contrast to glibc builds, the musl-bindist uses `libncursesw.so.*`
-          # instead of `libtinfo.so.*.`
-          { nixPackage = ncurses6; fileToCheckFor = "libncursesw.so.6"; }
-        ];
-        isHadrian = true;
-      };
-    };
-  };
-
-  distSetName = if stdenv.hostPlatform.isMusl then "musl" else "defaultLibc";
-
-  binDistUsed = ghcBinDists.${distSetName}.${stdenv.hostPlatform.system}
-    or (throw "cannot bootstrap GHC on this platform ('${stdenv.hostPlatform.system}' with libc '${distSetName}')");
-
-  useLLVM = !stdenv.targetPlatform.isx86;
-
-  libPath =
-    lib.makeLibraryPath (
-      # Add arch-specific libraries.
-      map ({ nixPackage, ... }: nixPackage) binDistUsed.archSpecificLibraries
-    );
-
-  libEnvVar = lib.optionalString stdenv.hostPlatform.isDarwin "DY"
-    + "LD_LIBRARY_PATH";
-
-  runtimeDeps = [
-    targetPackages.stdenv.cc
-    targetPackages.stdenv.cc.bintools
-    coreutils # for cat
-  ]
-  ++ lib.optionals useLLVM [
-    (lib.getBin llvmPackages.llvm)
-  ]
-  # On darwin, we need unwrapped bintools as well (for otool)
-  ++ lib.optionals (stdenv.targetPlatform.linker == "cctools") [
-    targetPackages.stdenv.cc.bintools.bintools
-  ];
-
-in
-
-stdenv.mkDerivation rec {
-  inherit version;
-  pname = "ghc-binary${binDistUsed.variantSuffix}";
-
-  src = fetchurl binDistUsed.src;
-
-  # Note that for GHC 8.10 versions <= 8.10.5, the GHC HQ musl bindist
-  # has a `gmp` dependency:
-  # https://gitlab.haskell.org/ghc/ghc/-/commit/8306501020cd66f683ad9c215fa8e16c2d62357d
-  # Related nixpkgs issues:
-  # * https://github.com/NixOS/nixpkgs/pull/130441#issuecomment-922452843
-
-  nativeBuildInputs = [ perl ];
-  propagatedBuildInputs =
-    # Because musl bindists currently provide no way to tell where
-    # libgmp is (see not [musl bindists have no .buildinfo]), we need
-    # to propagate `gmp`, otherwise programs built by this ghc will
-    # fail linking with `cannot find -lgmp` errors.
-    # Concrete cases are listed in:
-    # https://github.com/NixOS/nixpkgs/pull/130441#issuecomment-922459988
-    #
-    # Also, as of writing, the release pages of musl bindists claim
-    # that they use `integer-simple` and do not require `gmp`; however
-    # that is incorrect, so `gmp` is required until a release has been
-    # made that includes https://gitlab.haskell.org/ghc/ghc/-/issues/20059.
-    # (Note that for packaging the `-binary` compiler, nixpkgs does not care
-    # about whether or not `gmp` is used; this comment is just here to explain
-    # why the `gmp` dependency exists despite what the release page says.)
-    #
-    # For GHC >= 8.10.6, `gmp` was switched out for `integer-simple`
-    # (https://gitlab.haskell.org/ghc/ghc/-/commit/8306501020cd66f683ad9c215fa8e16c2d62357d),
-    # fixing the above-mentioned release issue,
-    # and for GHC >= 9.* it is not clear as of writing whether that switch
-    # will be made there too.
-    lib.optionals stdenv.hostPlatform.isMusl [ gmp ]; # musl bindist needs this
-
-  # Set LD_LIBRARY_PATH or equivalent so that the programs running as part
-  # of the bindist installer can find the libraries they expect.
-  # Cannot patchelf beforehand due to relative RPATHs that anticipate
-  # the final install location.
-  ${libEnvVar} = libPath;
-
-  postUnpack =
-    # Verify our assumptions of which `libtinfo.so` (ncurses) version is used,
-    # so that we know when ghc bindists upgrade that and we need to update the
-    # version used in `libPath`.
-    lib.optionalString
-      (binDistUsed.exePathForLibraryCheck != null)
-      # Note the `*` glob because some GHCs have a suffix when unpacked, e.g.
-      # the musl bindist has dir `ghc-VERSION-x86_64-unknown-linux/`.
-      # As a result, don't shell-quote this glob when splicing the string.
-      (let buildExeGlob = ''ghc-${version}*/"${binDistUsed.exePathForLibraryCheck}"''; in
-        lib.concatStringsSep "\n" [
-          (''
-            shopt -u nullglob
-            echo "Checking that ghc binary exists in bindist at ${buildExeGlob}"
-            if ! test -e ${buildExeGlob}; then
-              echo >&2 "GHC binary ${binDistUsed.exePathForLibraryCheck} could not be found in the bindist build directory (at ${buildExeGlob}) for arch ${stdenv.hostPlatform.system}, please check that ghcBinDists correctly reflect the bindist dependencies!"; exit 1;
-            fi
-          '')
-          (lib.concatMapStringsSep
-            "\n"
-            ({ fileToCheckFor, nixPackage }:
-              lib.optionalString (fileToCheckFor != null) ''
-                echo "Checking bindist for ${fileToCheckFor} to ensure that is still used"
-                if ! readelf -d ${buildExeGlob} | grep "${fileToCheckFor}"; then
-                  echo >&2 "File ${fileToCheckFor} could not be found in ${binDistUsed.exePathForLibraryCheck} for arch ${stdenv.hostPlatform.system}, please check that ghcBinDists correctly reflect the bindist dependencies!"; exit 1;
-                fi
-
-                echo "Checking that the nix package ${nixPackage} contains ${fileToCheckFor}"
-                if ! test -e "${lib.getLib nixPackage}/lib/${fileToCheckFor}"; then
-                  echo >&2 "Nix package ${nixPackage} did not contain ${fileToCheckFor} for arch ${stdenv.hostPlatform.system}, please check that ghcBinDists correctly reflect the bindist dependencies!"; exit 1;
-                fi
-              ''
-            )
-            binDistUsed.archSpecificLibraries
-          )
-        ])
-    # GHC has dtrace probes, which causes ld to try to open /usr/lib/libdtrace.dylib
-    # during linking
-    + lib.optionalString stdenv.isDarwin ''
-      export NIX_LDFLAGS+=" -no_dtrace_dof"
-      # not enough room in the object files for the full path to libiconv :(
-      for exe in $(find . -type f -executable); do
-        isScript $exe && continue
-        ln -fs ${libiconv}/lib/libiconv.dylib $(dirname $exe)/libiconv.dylib
-        install_name_tool -change /usr/lib/libiconv.2.dylib @executable_path/libiconv.dylib -change /usr/local/lib/gcc/6/libgcc_s.1.dylib ${gcc.cc.lib}/lib/libgcc_s.1.dylib $exe
-      done
-    '' +
-
-    # Some scripts used during the build need to have their shebangs patched
-    ''
-      patchShebangs ghc-${version}/utils/
-      patchShebangs ghc-${version}/configure
-      test -d ghc-${version}/inplace/bin && \
-        patchShebangs ghc-${version}/inplace/bin
-    '' +
-    # We have to patch the GMP paths for the integer-gmp package.
-    # Note [musl bindists have no .buildinfo]
-    # Note that musl bindists do not contain them; unclear if that's intended;
-    # see: https://gitlab.haskell.org/ghc/ghc/-/issues/20073#note_363231
-    ''
-      find . -name integer-gmp.buildinfo \
-        -exec sed -i "s@extra-lib-dirs: @extra-lib-dirs: ${gmp.out}/lib@" {} \;
-    '' + lib.optionalString stdenv.isDarwin ''
-      find . -name base.buildinfo \
-        -exec sed -i "s@extra-lib-dirs: @extra-lib-dirs: ${libiconv}/lib@" {} \;
-    '' +
-    # aarch64 does HAVE_NUMA so -lnuma requires it in library-dirs in rts/package.conf.in
-    # FFI_LIB_DIR is a good indication of places it must be needed.
-    lib.optionalString stdenv.hostPlatform.isAarch64 ''
-      find . -name package.conf.in \
-        -exec sed -i "s@FFI_LIB_DIR@FFI_LIB_DIR ${numactl.out}/lib@g" {} \;
-    '' +
-    # Rename needed libraries and binaries, fix interpreter
-    lib.optionalString stdenv.isLinux ''
-      find . -type f -executable -exec patchelf \
-        --interpreter ${stdenv.cc.bintools.dynamicLinker} {} \;
-    '' +
-    # The hadrian install Makefile uses 'xxx' as a temporary placeholder in path
-    # substitution. Which can break the build if the store path / prefix happens
-    # to contain this string. This will be fixed with 9.4 bindists.
-    # https://gitlab.haskell.org/ghc/ghc/-/issues/21402
-    ''
-      # Detect hadrian Makefile by checking for the target that has the problem
-      if grep '^update_package_db' ghc-${version}*/Makefile > /dev/null; then
-        echo Hadrian bindist, applying workaround for xxx path substitution.
-        # based on https://gitlab.haskell.org/ghc/ghc/-/commit/dd5fecb0e2990b192d92f4dfd7519ecb33164fad.patch
-        substituteInPlace ghc-${version}*/Makefile --replace 'xxx' '\0xxx\0'
-      else
-        echo Not a hadrian bindist, not applying xxx path workaround.
-      fi
-    '';
-
-  # fix for `configure: error: Your linker is affected by binutils #16177`
-  preConfigure = lib.optionalString
-    stdenv.targetPlatform.isAarch32
-    "LD=ld.gold";
-
-  configurePlatforms = [ ];
-  configureFlags = [
-    "--with-gmp-includes=${lib.getDev gmp}/include"
-    # Note `--with-gmp-libraries` does nothing for GHC bindists:
-    # https://gitlab.haskell.org/ghc/ghc/-/merge_requests/6124
-  ] ++ lib.optional stdenv.isDarwin "--with-gcc=${./gcc-clang-wrapper.sh}"
-    # From: https://github.com/NixOS/nixpkgs/pull/43369/commits
-    ++ lib.optional stdenv.hostPlatform.isMusl "--disable-ld-override";
-
-  # No building is necessary, but calling make without flags ironically
-  # calls install-strip ...
-  dontBuild = true;
-
-  # Patch scripts to include runtime dependencies in $PATH.
-  postInstall = ''
-    for i in "$out/bin/"*; do
-      test ! -h "$i" || continue
-      isScript "$i" || continue
-      sed -i -e '2i export PATH="${lib.makeBinPath runtimeDeps}:$PATH"' "$i"
-    done
-  '';
-
-  # Apparently necessary for the ghc Alpine (musl) bindist:
-  # When we strip, and then run the
-  #   patchelf --set-rpath "${libPath}:$(patchelf --print-rpath $p)" $p
-  # below, running ghc (e.g. during `installCheckPhase)` gives some apparently
-  # corrupted rpath or whatever makes the loader work on nonsensical strings:
-  #   running install tests
-  #   Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: : symbol not found
-  #   Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: ir6zf6c9f86pfx8sr30n2vjy-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/../lib/x86_64-linux-ghc-8.10.5/libHSexceptions-0.10.4-ghc8.10.5.so: symbol not found
-  #   Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: y/lib/ghc-8.10.5/bin/../lib/x86_64-linux-ghc-8.10.5/libHStemplate-haskell-2.16.0.0-ghc8.10.5.so: symbol not found
-  #   Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: 8.10.5/libHStemplate-haskell-2.16.0.0-ghc8.10.5.so: symbol not found
-  #   Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: �: symbol not found
-  #   Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: �?: symbol not found
-  #   Error relocating /nix/store/...-ghc-8.10.2-binary/lib/ghc-8.10.5/bin/ghc: 64-linux-ghc-8.10.5/libHSexceptions-0.10.4-ghc8.10.5.so: symbol not found
-  # This is extremely bogus and should be investigated.
-  dontStrip = if stdenv.hostPlatform.isMusl then true else false; # `if` for explicitness
-
-  # On Linux, use patchelf to modify the executables so that they can
-  # find editline/gmp.
-  postFixup = lib.optionalString stdenv.isLinux
-    (if stdenv.hostPlatform.isAarch64 then
-      # Keep rpath as small as possible on aarch64 for patchelf#244. All Elfs
-      # are 2 directories deep from $out/lib, so pooling symlinks there makes
-      # a short rpath.
-      ''
-        (cd $out/lib; ln -s ${ncurses6.out}/lib/libtinfo.so.6)
-        (cd $out/lib; ln -s ${gmp.out}/lib/libgmp.so.10)
-        (cd $out/lib; ln -s ${numactl.out}/lib/libnuma.so.1)
-        for p in $(find "$out/lib" -type f -name "*\.so*"); do
-          (cd $out/lib; ln -s $p)
-        done
-
-        for p in $(find "$out/lib" -type f -executable); do
-          if isELF "$p"; then
-            echo "Patchelfing $p"
-            patchelf --set-rpath "\$ORIGIN:\$ORIGIN/../.." $p
-          fi
-        done
-      ''
-    else
-      ''
-        for p in $(find "$out" -type f -executable); do
-          if isELF "$p"; then
-            echo "Patchelfing $p"
-            patchelf --set-rpath "${libPath}:$(patchelf --print-rpath $p)" $p
-          fi
-        done
-      '') + lib.optionalString stdenv.isDarwin ''
-    # not enough room in the object files for the full path to libiconv :(
-    for exe in $(find "$out" -type f -executable); do
-      isScript $exe && continue
-      ln -fs ${libiconv}/lib/libiconv.dylib $(dirname $exe)/libiconv.dylib
-      install_name_tool -change /usr/lib/libiconv.2.dylib @executable_path/libiconv.dylib -change /usr/local/lib/gcc/6/libgcc_s.1.dylib ${gcc.cc.lib}/lib/libgcc_s.1.dylib $exe
-    done
-
-    for file in $(find "$out" -name setup-config); do
-      substituteInPlace $file --replace /usr/bin/ranlib "$(type -P ranlib)"
-    done
-  '' +
-  lib.optionalString minimal ''
-    # Remove profiling files
-    find $out -type f -name '*.p_o' -delete
-    find $out -type f -name '*.p_hi' -delete
-    find $out -type f -name '*_p.a' -delete
-    # `-f` because e.g. musl bindist does not have this file.
-    rm -f $out/lib/ghc-*/bin/ghc-iserv-prof
-    # Hydra will redistribute this derivation, so we have to keep the docs for
-    # legal reasons (retaining the legal notices etc)
-    # As a last resort we could unpack the docs separately and symlink them in.
-    # They're in $out/share/{doc,man}.
-  '';
-
-  # In nixpkgs, musl based builds currently enable `pie` hardening by default
-  # (see `defaultHardeningFlags` in `make-derivation.nix`).
-  # But GHC cannot currently produce outputs that are ready for `-pie` linking.
-  # Thus, disable `pie` hardening, otherwise `recompile with -fPIE` errors appear.
-  # See:
-  # * https://github.com/NixOS/nixpkgs/issues/129247
-  # * https://gitlab.haskell.org/ghc/ghc/-/issues/19580
-  hardeningDisable = lib.optional stdenv.targetPlatform.isMusl "pie";
-
-  doInstallCheck = true;
-  installCheckPhase = ''
-    # Sanity check, can ghc create executables?
-    cd $TMP
-    mkdir test-ghc; cd test-ghc
-    cat > main.hs << EOF
-      {-# LANGUAGE TemplateHaskell #-}
-      module Main where
-      main = putStrLn \$([|"yes"|])
-    EOF
-    # can't use env -i here because otherwise we don't find -lgmp on musl
-    env ${libEnvVar}= PATH= \
-      $out/bin/ghc --make main.hs || exit 1
-    echo compilation ok
-    [ $(./main) == "yes" ]
-  '';
-
-  passthru = {
-    targetPrefix = "";
-    enableShared = true;
-
-    inherit llvmPackages;
-
-    # Our Cabal compiler name
-    haskellCompilerName = "ghc-${version}";
-  }
-  # We duplicate binDistUsed here since we have a sensible default even if no bindist is avaible,
-  # this makes sure that getting the `meta` attribute doesn't throw even on unsupported platforms.
-  // lib.optionalAttrs (ghcBinDists.${distSetName}.${stdenv.hostPlatform.system}.isHadrian or false) {
-    # Normal GHC derivations expose the hadrian derivation used to build them
-    # here. In the case of bindists we just make sure that the attribute exists,
-    # as it is used for checking if a GHC derivation has been built with hadrian.
-    # The isHadrian mechanism will become obsolete with GHCs that use hadrian
-    # exclusively, i.e. 9.6 (and 9.4?).
-    hadrian = null;
-  };
-
-  meta = rec {
-    homepage = "http://haskell.org/ghc";
-    description = "The Glasgow Haskell Compiler";
-    license = lib.licenses.bsd3;
-    # HACK: since we can't encode the libc / abi in platforms, we need
-    # to make the platform list dependent on the evaluation platform
-    # in order to avoid eval errors with musl which supports less
-    # platforms than the default libcs (i. e. glibc / libSystem).
-    # This is done for the benefit of Hydra, so `packagePlatforms`
-    # won't return any platforms that would cause an evaluation
-    # failure for `pkgsMusl.haskell.compiler.ghc8102Binary`, as
-    # long as the evaluator runs on a platform that supports
-    # `pkgsMusl`.
-    platforms = builtins.attrNames ghcBinDists.${distSetName};
-    maintainers = with lib.maintainers; [
-      guibou
-    ] ++ lib.teams.haskell.members;
-  };
-}
```
```diff
     # platforms than the default libcs (i. e. glibc / libSystem).
     # This is done for the benefit of Hydra, so `packagePlatforms`
     # won't return any platforms that would cause an evaluation
-    # failure for `pkgsMusl.haskell.compiler.ghc8102Binary`, as
+    # failure for `pkgsMusl.haskell.compiler.ghc8107Binary`, as
     # long as the evaluator runs on a platform that supports
     # `pkgsMusl`.
     platforms = builtins.attrNames ghcBinDists.${distSetName};
```
pkgs/development/compilers/ghc/8.6.5-binary.nix
```diff
     "x86_64-darwin"
     "powerpc64le-linux"
   ];
-  # build segfaults, use ghc8102Binary which has proper musl support instead
+  # build segfaults, use ghc8107Binary which has proper musl support instead
   broken = stdenv.hostPlatform.isMusl;
   maintainers = with lib.maintainers; [
     guibou
```
```diff
-diff --git a/rts/win32/OSMem.c b/rts/win32/OSMem.c
---- a/rts/win32/OSMem.c
-+++ b/rts/win32/OSMem.c
-@@ -41,7 +41,7 @@ static block_rec* free_blocks = NULL;
- typedef LPVOID(WINAPI *VirtualAllocExNumaProc)(HANDLE, LPVOID, SIZE_T, DWORD, DWORD, DWORD);
-
- /* Cache NUMA API call. */
--VirtualAllocExNumaProc VirtualAllocExNuma;
-+VirtualAllocExNumaProc _VirtualAllocExNuma;
-
- void
- osMemInit(void)
-@@ -52,8 +52,8 @@ osMemInit(void)
-     /* Resolve and cache VirtualAllocExNuma. */
-     if (osNumaAvailable() && RtsFlags.GcFlags.numa)
-     {
--        VirtualAllocExNuma = (VirtualAllocExNumaProc)GetProcAddress(GetModuleHandleW(L"kernel32"), "VirtualAllocExNuma");
--        if (!VirtualAllocExNuma)
-+        _VirtualAllocExNuma = (VirtualAllocExNumaProc)(void*)GetProcAddress(GetModuleHandleW(L"kernel32"), "VirtualAllocExNuma");
-+        if (!_VirtualAllocExNuma)
-         {
-             sysErrorBelch(
-                 "osBindMBlocksToNode: VirtualAllocExNuma does not exist. How did you get this far?");
-@@ -569,7 +569,7 @@ void osBindMBlocksToNode(
-        On windows also -xb is broken, it does nothing so that can't
-        be used to tweak it (see #12577). So for now, just let the OS decide.
-     */
--    temp = VirtualAllocExNuma(
-+    temp = _VirtualAllocExNuma(
-         GetCurrentProcess(),
-         NULL, // addr? See base memory
-         size,
```
```diff
-From 8c747d3157df2830eed9205e7caf1203b345de17 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 4 Feb 2023 13:54:41 -0800
-Subject: [PATCH] cmake: Enable 64bit off_t on 32bit glibc systems
-
-Pass -D_FILE_OFFSET_BITS=64 to compiler flags on 32bit glibc based
-systems. This will make sure that 64bit versions of LFS functions are
-used e.g. seek will behave same as lseek64. Also revert [1] partially
-because this added a cmake test to detect lseek64 but then forgot to
-pass the needed macro to actual compile, this test was incomplete too
-since libc implementations like musl has 64bit off_t by default on 32bit
-systems and does not bundle[2] -D_LARGEFILE64_SOURCE under -D_GNU_SOURCE
-like glibc, which means the compile now fails on musl because the cmake
-check passes but we do not have _LARGEFILE64_SOURCE defined. Using the
-*64 function was transitional anyways so use -D_FILE_OFFSET_BITS=64
-instead
-
-[1] https://github.com/llvm/llvm-project/commit/8db7e5e4eed4c4e697dc3164f2c9351d8c3e942b
-[2] https://git.musl-libc.org/cgit/musl/commit/?id=25e6fee27f4a293728dd15b659170e7b9c7db9bc
-
-Reviewed By: MaskRay
-
-Differential Revision: https://reviews.llvm.org/D139752
-
-(cherry picked from commit 5cd554303ead0f8891eee3cd6d25cb07f5a7bf67)
----
- cmake/config-ix.cmake              | 13 ++++++++++---
- include/llvm/Config/config.h.cmake |  3 ---
- lib/Support/raw_ostream.cpp        |  2 --
- 3 files changed, 10 insertions(+), 8 deletions(-)
-
-diff --git a/cmake/config-ix.cmake b/cmake/config-ix.cmake
-index 18977d9950ff..b558aa83fa62 100644
---- a/cmake/config-ix.cmake
-+++ b/cmake/config-ix.cmake
-@@ -197,9 +197,6 @@ check_symbol_exists(posix_fallocate fcntl.h HAVE_POSIX_FALLOCATE)
- if( HAVE_SIGNAL_H AND NOT LLVM_USE_SANITIZER MATCHES ".*Address.*" AND NOT APPLE )
-   check_symbol_exists(sigaltstack signal.h HAVE_SIGALTSTACK)
- endif()
--set(CMAKE_REQUIRED_DEFINITIONS "-D_LARGEFILE64_SOURCE")
--check_symbol_exists(lseek64 "sys/types.h;unistd.h" HAVE_LSEEK64)
--set(CMAKE_REQUIRED_DEFINITIONS "")
- check_symbol_exists(mallctl malloc_np.h HAVE_MALLCTL)
- check_symbol_exists(mallinfo malloc.h HAVE_MALLINFO)
- check_symbol_exists(malloc_zone_statistics malloc/malloc.h
-@@ -237,6 +234,16 @@ if( PURE_WINDOWS )
-   check_function_exists(__main HAVE___MAIN)
-   check_function_exists(__cmpdi2 HAVE___CMPDI2)
- endif()
-+
-+check_symbol_exists(__GLIBC__ stdio.h LLVM_USING_GLIBC)
-+if( LLVM_USING_GLIBC )
-+# enable 64bit off_t on 32bit systems using glibc
-+  if (CMAKE_SIZEOF_VOID_P EQUAL 4)
-+    add_compile_definitions(_FILE_OFFSET_BITS=64)
-+    list(APPEND CMAKE_REQUIRED_DEFINITIONS "-D_FILE_OFFSET_BITS=64")
-+  endif()
-+endif()
-+
- if( HAVE_DLFCN_H )
-   if( HAVE_LIBDL )
-     list(APPEND CMAKE_REQUIRED_LIBRARIES dl)
-diff --git a/include/llvm/Config/config.h.cmake b/include/llvm/Config/config.h.cmake
-index e934617d7ec7..3c39c373b3c1 100644
---- a/include/llvm/Config/config.h.cmake
-+++ b/include/llvm/Config/config.h.cmake
-@@ -112,9 +112,6 @@
- /* Define to 1 if you have the <link.h> header file. */
- #cmakedefine HAVE_LINK_H ${HAVE_LINK_H}
-
--/* Define to 1 if you have the `lseek64' function. */
--#cmakedefine HAVE_LSEEK64 ${HAVE_LSEEK64}
--
- /* Define to 1 if you have the <mach/mach.h> header file. */
- #cmakedefine HAVE_MACH_MACH_H ${HAVE_MACH_MACH_H}
-
-diff --git a/lib/Support/raw_ostream.cpp b/lib/Support/raw_ostream.cpp
-index 038ad00bd608..921ab8409008 100644
---- a/lib/Support/raw_ostream.cpp
-+++ b/lib/Support/raw_ostream.cpp
-@@ -677,8 +677,6 @@ uint64_t raw_fd_ostream::seek(uint64_t off) {
-   flush();
- #ifdef _WIN32
-   pos = ::_lseeki64(FD, off, SEEK_SET);
--#elif defined(HAVE_LSEEK64)
--  pos = ::lseek64(FD, off, SEEK_SET);
- #else
-   pos = ::lseek(FD, off, SEEK_SET);
- #endif
---
-2.37.1
-
```
```diff
   owner = "thery";
   inherit version;
   defaultVersion = with lib.versions; lib.switch coq.coq-version [
-    { case = range "8.14" "8.18"; out = "8.18"; }
+    { case = range "8.14" "8.19"; out = "8.18"; }
     { case = range "8.12" "8.16"; out = "8.15"; }
     { case = range "8.10" "8.11"; out = "8.10"; }
     { case = range "8.8" "8.9"; out = "8.8"; }
```
```diff
   owner = "snu-sf";
   inherit version;
   defaultVersion = with lib.versions; lib.switch coq.coq-version [
-    { case = range "8.13" "8.18"; out = "4.2.0"; }
+    { case = range "8.13" "8.19"; out = "4.2.0"; }
     { case = range "8.12" "8.17"; out = "4.1.2"; }
     { case = range "8.9" "8.13"; out = "4.1.1"; }
     { case = range "8.6" "8.13"; out = "4.0.2"; }
```
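Both Coq package bumps above follow the same `defaultVersion` idiom used throughout `coqPackages`: `lib.switch` returns the `out` of the first `case` whose predicate matches, and `lib.versions.range` builds an inclusive version-range predicate, so declaring compatibility with a new Coq release only means widening the upper bound of the topmost range. A sketch of the idiom with made-up version numbers:

```nix
# Sketch: picks "2.0" for Coq 8.16-8.19, "1.0" for 8.10-8.15,
# and falls back to null (no compatible default) otherwise.
defaultVersion = with lib.versions; lib.switch coq.coq-version [
  { case = range "8.16" "8.19"; out = "2.0"; }
  { case = range "8.10" "8.15"; out = "1.0"; }
] null;
```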
```diff
-Date: 2017-09-02 13:03:15.353403096 +0200
-From: Jan Engelhardt <jengelh@inai.de>
-
-Stop redefining libc definitions that cause build failures under glibc-2.26.
-
-[   46s] In file included from /usr/include/sys/types.h:156:0,
-[   46s]                  from /usr/include/stdlib.h:279,
-[   46s]                  from malloc.c:15:
-[   46s] /usr/include/bits/stdint-intn.h:27:19: error: conflicting types for 'int64_t'
-[   46s]  typedef __int64_t int64_t;
-[   46s]                    ^~~~~~~
-[   46s] In file included from ../include/aal/libaal.h:17:0,
-[   46s]                  from malloc.c:6:
-[   46s] ../include/aal/types.h:35:33: note: previous declaration of 'int64_t' was here
-[   46s]  typedef long long int int64_t;
-
-
----
- include/aal/types.h | 48 ++----------------------------------------------
- 1 file changed, 2 insertions(+), 46 deletions(-)
-
-Index: libaal-1.0.6/include/aal/types.h
-===================================================================
---- libaal-1.0.6.orig/include/aal/types.h
-+++ libaal-1.0.6/include/aal/types.h
-@@ -26,24 +26,7 @@
- #undef ESTRUCT
- #define ESTRUCT 50
-
--#ifndef __int8_t_defined
--#define __int8_t_defined
--typedef signed char int8_t;
--typedef short int int16_t;
--typedef int int32_t;
--__extension__
--typedef long long int int64_t;
--#endif
--
--typedef unsigned char uint8_t;
--typedef unsigned short int uint16_t;
--#ifndef __uint32_t_defined
--#define __uint32_t_defined
--typedef unsigned int uint32_t;
--__extension__
--typedef unsigned long long int uint64_t;
--#endif
--
-+#include <stdint.h>
- #define MAX_UINT8 ((uint8_t)~0)
- #define MAX_UINT16 ((uint16_t)~0)
- #define MAX_UINT32 ((uint32_t)~0)
-@@ -53,36 +36,9 @@ typedef unsigned long long int uint64_t
-    because we don't want use gcc builtins in minimal mode for achive as small
-    binary size as possible. */
-
--#ifndef ENABLE_MINIMAL
- # include <stdarg.h>
--#else
--#ifndef _VA_LIST_
--#define _VA_LIST_
--typedef char *va_list;
--#endif
--#undef va_arg
--#undef va_end
--#undef va_start
--
--#define va_end(ap) \
--	do {} while(0);
--
--#define va_start(ap, p) \
--	(ap = (char *)(&(p)+1))
--
--#define va_arg(ap, type) \
--	((type *)(ap += sizeof(type)))[-1]
--#endif
--
--/* As libaal may be used without any standard headers, we need to declare NULL
--   macro here in order to avoid compilation errors. */
--#undef NULL
-
--#if defined(__cplusplus)
--# define NULL 0
--#else
--# define NULL ((void *)0)
--#endif
-+#include <stdio.h>
-
- /* Simple type for direction denoting */
- enum aal_dir {
```
Removed file: the bootstrap `patchelf` derivation (every line deleted):

```diff
-{ stdenv, fetchurl, patchelf }:
-
-# Note: this package is used for bootstrapping fetchurl, and thus
-# cannot use fetchpatch! All mutable patches (generated by GitHub or
-# cgit) that are needed here should be included directly in Nixpkgs as
-# files.
-
-stdenv.mkDerivation rec {
-  pname = "patchelf";
-  version = "0.13.1";
-
-  src = fetchurl {
-    url = "https://github.com/NixOS/${pname}/releases/download/${version}/${pname}-${version}.tar.bz2";
-    sha256 = "sha256-OeiuzNdJXVTfCU0rSnwIAQ/3d3A2+q8k8o4Hd30VmOI=";
-  };
-
-  setupHook = [ ./setup-hook.sh ];
-
-  # fails 8 out of 24 tests, problems when loading libc.so.6
-  doCheck = stdenv.name == "stdenv-linux";
-
-  inherit (patchelf) meta;
-}
```
```diff
     jinja2
     pyicu
     datrie
+    pyosmium
   ]))
   # python3Packages.pylint # We don't want to run pylint because the package could break on pylint bumps which is really annoying.
   # python3Packages.pytest # disabled since I can't get it to run tests anyway
```
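For context, a hedged sketch of the structure the hunk above appears to sit in: the trailing `]))` suggests a `python3.withPackages` call, so the new entry becomes importable by the packaged tool (the attribute name and surrounding derivation are assumptions, not shown in the diff):

```nix
# Hedged sketch; `nativeBuildInputs` and the enclosing derivation are assumed.
nativeBuildInputs = [
  (python3.withPackages (ps: with ps; [
    jinja2
    pyicu
    datrie
    pyosmium  # newly added binding
  ]))
];
```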
```diff
   pkgs.pkgsMusl.pkgsCross.gnu64.hello

   # Two web browsers -- exercises almost the entire packageset
-  pkgs.pkgsCross.aarch64-multiplatform.qt5.qutebrowser
+  pkgs.pkgsCross.aarch64-multiplatform.qutebrowser-qt5
   pkgs.pkgsCross.aarch64-multiplatform.firefox

   # Uses pkgsCross.riscv64-embedded; see https://github.com/NixOS/nixpkgs/issues/267859
```
```diff
 # ```
 # {
 #   ghc810 = "ghc810";
-#   ghc8102Binary = "ghc8102Binary";
-#   ghc8102BinaryMinimal = "ghc8102BinaryMinimal";
+#   ghc8107Binary = "ghc8107Binary";
 #   ghc8107 = "ghc8107";
 #   ghc924 = "ghc924";
 #   ...
```

```diff
 {
   # remove musl ghc865Binary since it is known to be broken and
   # causes an evaluation error on darwin.
-  # TODO: remove ghc865Binary altogether and use ghc8102Binary
   ghc865Binary = {};

   ghcjs = {};
```

```diff
       ];
     };
     constituents = accumulateDerivations [
-      jobs.pkgsMusl.haskell.compiler.ghc8102Binary
       jobs.pkgsMusl.haskell.compiler.ghc8107Binary
       jobs.pkgsMusl.haskell.compiler.ghc8107
       jobs.pkgsMusl.haskell.compiler.ghc902
```
`pkgs/top-level/release.nix` (+2, -2):
```diff
     };
   };
 in {
-  inherit (bootstrap) dist test;
+  inherit (bootstrap) build dist test;
 }
 else if hasSuffix "-darwin" config then
 let
```

```diff
   };
 in {
   # Lightweight distribution and test
-  inherit (bootstrap) dist test;
+  inherit (bootstrap) build dist test;
   # Test a full stdenv bootstrap from the bootstrap tools definition
   # TODO: Re-enable once the new bootstrap-tools are in place.
   #inherit (bootstrap.test-pkgs) stdenv;
```
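The `inherit` change is Nix shorthand for copying attributes from `bootstrap`; desugared, the updated line is equivalent to:

```nix
# Equivalent desugared form of `inherit (bootstrap) build dist test;`
{
  build = bootstrap.build;  # newly exposed job
  dist = bootstrap.dist;
  test = bootstrap.test;
}
```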