
Merge master into staging-next

Authored by github-actions[bot], committed by GitHub · dc4a7c97 5d5d432e

+1825 -558
+2 -2
.github/workflows/basic-eval.yml
```diff
···
   # we don't limit this action to only NixOS repo since the checks are cheap and useful developer feedback
   steps:
     - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
-    - uses: cachix/cachix-action@6a2e08b5ebf7a9f285ff57b1870a4262b06e0bee # v13
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
+    - uses: cachix/cachix-action@18cf96c7c98e048e10a83abd92116114cd8504be # v14
       with:
         # This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
         name: nixpkgs-ci
```
+1 -1
.github/workflows/check-by-name.yml
```diff
···
           base=$(mktemp -d)
           git worktree add "$base" "$(git rev-parse HEAD^1)"
           echo "base=$base" >> "$GITHUB_ENV"
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
     - name: Fetching the pinned tool
       # Update the pinned version using pkgs/test/nixpkgs-check-by-name/scripts/update-pinned-tool.sh
       run: |
```
+1 -1
.github/workflows/check-maintainers-sorted.yaml
```diff
···
       with:
         # pull_request_target checks out the base branch by default
         ref: refs/pull/${{ github.event.pull_request.number }}/merge
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
       with:
         # explicitly enable sandbox
         extra_nix_config: sandbox = true
```
+1 -1
.github/workflows/editorconfig.yml
```diff
···
       with:
         # pull_request_target checks out the base branch by default
         ref: refs/pull/${{ github.event.pull_request.number }}/merge
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
       with:
         # nixpkgs commit is pinned so that it doesn't break
         # editorconfig-checker 2.4.0
```
+2 -2
.github/workflows/manual-nixos.yml
```diff
···
       with:
         # pull_request_target checks out the base branch by default
         ref: refs/pull/${{ github.event.pull_request.number }}/merge
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
       with:
         # explicitly enable sandbox
         extra_nix_config: sandbox = true
-    - uses: cachix/cachix-action@6a2e08b5ebf7a9f285ff57b1870a4262b06e0bee # v13
+    - uses: cachix/cachix-action@18cf96c7c98e048e10a83abd92116114cd8504be # v14
       with:
         # This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
         name: nixpkgs-ci
```
+2 -2
.github/workflows/manual-nixpkgs.yml
```diff
···
       with:
         # pull_request_target checks out the base branch by default
         ref: refs/pull/${{ github.event.pull_request.number }}/merge
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
       with:
         # explicitly enable sandbox
         extra_nix_config: sandbox = true
-    - uses: cachix/cachix-action@6a2e08b5ebf7a9f285ff57b1870a4262b06e0bee # v13
+    - uses: cachix/cachix-action@18cf96c7c98e048e10a83abd92116114cd8504be # v14
       with:
         # This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
         name: nixpkgs-ci
```
+1 -1
.github/workflows/nix-parse.yml
```diff
···
         # pull_request_target checks out the base branch by default
         ref: refs/pull/${{ github.event.pull_request.number }}/merge
       if: ${{ env.CHANGED_FILES && env.CHANGED_FILES != '' }}
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
       with:
         nix_path: nixpkgs=channel:nixpkgs-unstable
     - name: Parse all changed or added nix files
```
+1 -1
.github/workflows/update-terraform-providers.yml
```diff
···
     runs-on: ubuntu-latest
     steps:
     - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
-    - uses: cachix/install-nix-action@7ac1ec25491415c381d9b62f0657c7a028df52a7 # v24
+    - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
       with:
         nix_path: nixpkgs=channel:nixpkgs-unstable
     - name: setup
```
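Every workflow change above follows the same convention: each third-party action is pinned to a full, immutable commit SHA, with the human-readable release tag kept in a trailing comment. A minimal sketch of that convention (hypothetical workflow fragment, not part of this commit; the SHAs are taken from the diffs above):

```yaml
# Hypothetical fragment illustrating the pinning style used in these workflows:
# reference actions by commit SHA, keep the release tag as a comment for readability.
steps:
  - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
  - uses: cachix/install-nix-action@6004951b182f8860210c8d6f0d808ec5b1a33d28 # v25
  - uses: cachix/cachix-action@18cf96c7c98e048e10a83abd92116114cd8504be # v14
```

Pinning to a SHA rather than a mutable tag (such as `@v25`) ensures the workflow keeps running exactly the audited code even if the tag is later moved.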
+585 -110
doc/build-helpers/images/dockertools.section.md
````diff
 # pkgs.dockerTools {#sec-pkgs-dockerTools}
 
-`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
+`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.3.0](https://github.com/moby/moby/blob/46f7ab808b9504d735d600e259ca0723f76fb164/image/spec/spec.md#image-json-field-descriptions).
+Docker itself is not used to perform any of the operations done by these functions.
 
 ## buildImage {#ssec-pkgs-dockerTools-buildImage}
 
-This function is analogous to the `docker build` command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with `docker load`.
+This function builds a Docker-compatible repository tarball containing a single image.
+As such, the result is suitable for being loaded in Docker with `docker load` (see [](#ex-dockerTools-buildImage) for how to do this).
+
+This function will create a single layer for all files (and dependencies) that are specified in its argument.
+Only new dependencies that are not already in the existing layers will be copied.
+If you prefer to create multiple layers for the files and dependencies you want to add to the image, see [](#ssec-pkgs-dockerTools-buildLayeredImage) or [](#ssec-pkgs-dockerTools-streamLayeredImage) instead.
+
+This function allows a script to be run during the layer generation process, allowing custom behaviour to affect the final results of the image (see the documentation of the `runAsRoot` and `extraCommands` attributes).
+
+The resulting repository tarball will list a single image as specified by the `name` and `tag` attributes.
+By default, that image will use a static creation date (see documentation for the `created` attribute).
+This allows `buildImage` to produce reproducible images.
+
+:::{.tip}
+When running an image built with `buildImage`, you might encounter certain errors depending on what you included in the image, especially if you did not start with any base image.
+
+If you encounter errors similar to `getProtocolByName: does not exist (no such protocol name: tcp)`, you may need to add the contents of `pkgs.iana-etc` in the `copyToRoot` attribute.
+Similarly, if you encounter errors similar to `Error_Protocol ("certificate has unknown CA",True,UnknownCa)`, you may need to add the contents of `pkgs.cacert` in the `copyToRoot` attribute.
+:::
+
+### Inputs {#ssec-pkgs-dockerTools-buildImage-inputs}
+
+`buildImage` expects an argument with the following attributes:
+
+`name` (String)
+
+: The name of the generated image.
+
+`tag` (String or Null; _optional_)
+
+: Tag of the generated image.
+  If `null`, the hash of the nix derivation will be used as the tag.
+
+  _Default value:_ `null`.
+
+`fromImage` (Path or Null; _optional_)
+
+: The repository tarball of an image to be used as the base for the generated image.
+  It must be a valid Docker image, such as one exported by `docker save`, or another image built with the `dockerTools` utility functions.
+  This can be seen as an equivalent of `FROM fromImage` in a `Dockerfile`.
+  A value of `null` can be seen as an equivalent of `FROM scratch`.
+
+  If specified, the layer created by `buildImage` will be appended to the layers defined in the base image, resulting in an image with at least two layers (one or more layers from the base image, and the layer created by `buildImage`).
+  Otherwise, the resulting image will contain the single layer created by `buildImage`.
+
+  _Default value:_ `null`.
+
+`fromImageName` (String or Null; _optional_)
+
+: Used to specify the image within the repository tarball in case it contains multiple images.
+  A value of `null` means that `buildImage` will use the first image available in the repository.
+
+  :::{.note}
+  This must be used with `fromImageTag`. Using only `fromImageName` without `fromImageTag` will make `buildImage` use the first image available in the repository.
+  :::
+
+  _Default value:_ `null`.
+
+`fromImageTag` (String or Null; _optional_)
+
+: Used to specify the image within the repository tarball in case it contains multiple images.
+  A value of `null` means that `buildImage` will use the first image available in the repository.
+
+  :::{.note}
+  This must be used with `fromImageName`. Using only `fromImageTag` without `fromImageName` will make `buildImage` use the first image available in the repository.
+  :::
+
+  _Default value:_ `null`.
+
+`copyToRoot` (Path, List of Paths, or Null; _optional_)
+
+: Files to add to the generated image.
+  Anything that coerces to a path (e.g. a derivation) can also be used.
+  This can be seen as an equivalent of `ADD contents/ /` in a `Dockerfile`.
 
-The parameters of `buildImage` with relative example values are described below:
+  _Default value:_ `null`.
 
-[]{#ex-dockerTools-buildImage}
-[]{#ex-dockerTools-buildImage-runAsRoot}
+`keepContentsDirlinks` (Boolean; _optional_)
+
+: When adding files to the generated image (as specified by `copyToRoot`), this attribute controls whether to preserve symlinks to directories.
+  If `false`, the symlinks will be transformed into directories.
+  This behaves the same as `rsync -k` when `keepContentsDirlinks` is `false`, and the same as `rsync -K` when `keepContentsDirlinks` is `true`.
+
+  _Default value:_ `false`.
+
+`runAsRoot` (String or Null; _optional_)
+
+: A bash script that will run as root inside a VM that contains the existing layers of the base image and the new generated layer (including the files from `copyToRoot`).
+  The script will be run with a working directory of `/`.
+  This can be seen as an equivalent of `RUN ...` in a `Dockerfile`.
+  A value of `null` means that this step in the image generation process will be skipped.
+
+  See [](#ex-dockerTools-buildImage-runAsRoot) for how to work with this attribute.
+
+  :::{.caution}
+  Using this attribute requires the `kvm` device to be available, see [`system-features`](https://nixos.org/manual/nix/stable/command-ref/conf-file.html#conf-system-features).
+  If the `kvm` device isn't available, you should consider using [`buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage) or [`streamLayeredImage`](#ssec-pkgs-dockerTools-streamLayeredImage) instead.
+  Those functions allow scripts to be run as root without access to the `kvm` device.
+  :::
+
+  :::{.note}
+  At the time the script in `runAsRoot` is run, the files specified directly in `copyToRoot` will be present in the VM, but their dependencies might not be there yet.
+  Copying their dependencies into the generated image is a step that happens after `runAsRoot` finishes running.
+  :::
+
+  _Default value:_ `null`.
+
+`extraCommands` (String; _optional_)
+
+: A bash script that will run before the layer created by `buildImage` is finalised.
+  The script will be run on some (opaque) working directory which will become `/` once the layer is created.
+  This is similar to `runAsRoot`, but the script specified in `extraCommands` is **not** run as root, and does not involve creating a VM.
+  It is simply run as part of building the derivation that outputs the layer created by `buildImage`.
+
+  See [](#ex-dockerTools-buildImage-extraCommands) for how to work with this attribute, and subtle differences compared to `runAsRoot`.
+
+  _Default value:_ `""`.
+
+`config` (Attribute Set; _optional_)
+
+: Used to specify the configuration of the containers that will be started off the generated image.
+  Must be an attribute set, with each attribute as listed in the [Docker Image Specification v1.3.0](https://github.com/moby/moby/blob/46f7ab808b9504d735d600e259ca0723f76fb164/image/spec/spec.md#image-json-field-descriptions).
+
+  _Default value:_ `null`.
+
+`architecture` (String; _optional_)
+
+: Used to specify the image architecture.
+  This is useful for multi-architecture builds that don't need cross compiling.
+  If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
+  According to the linked specification, all possible values for `$GOARCH` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `386`, `amd64`, `arm`, or `arm64`.
+
+  _Default value:_ the same value from `pkgs.go.GOARCH`.
+
+`diskSize` (Number; _optional_)
+
+: Controls the disk size (in megabytes) of the VM used to run the script specified in `runAsRoot`.
+  This attribute is ignored if `runAsRoot` is `null`.
+
+  _Default value:_ 1024.
+
+`buildVMMemorySize` (Number; _optional_)
+
+: Controls the amount of memory (in megabytes) provisioned for the VM used to run the script specified in `runAsRoot`.
+  This attribute is ignored if `runAsRoot` is `null`.
+
+  _Default value:_ 512.
+
+`created` (String; _optional_)
+
+: Specifies the time of creation of the generated image.
+  This should be either a date and time formatted according to [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601) or `"now"`, in which case `buildImage` will use the current date.
+
+  See [](#ex-dockerTools-buildImage-creatednow) for how to use `"now"`.
+
+  :::{.caution}
+  Using `"now"` means that the generated image will not be reproducible anymore (because the date will always change whenever it's built).
+  :::
+
+  _Default value:_ `"1970-01-01T00:00:01Z"`.
+
+`uid` (Number; _optional_)
+
+: The uid of the user that will own the files packed in the new layer built by `buildImage`.
+
+  _Default value:_ 0.
+
+`gid` (Number; _optional_)
+
+: The gid of the group that will own the files packed in the new layer built by `buildImage`.
+
+  _Default value:_ 0.
+
+`contents` **DEPRECATED**
+
+: This attribute is deprecated, and users are encouraged to use `copyToRoot` instead.
+
+### Passthru outputs {#ssec-pkgs-dockerTools-buildImage-passthru-outputs}
+
+`buildImage` defines a few [`passthru`](#var-stdenv-passthru) attributes:
+
+`buildArgs` (Attribute Set)
+
+: The argument passed to `buildImage` itself.
+  This allows you to inspect all attributes specified in the argument, as described above.
+
+`layer` (Attribute Set)
+
+: The derivation with the layer created by `buildImage`.
+  This allows easier inspection of the contents added by `buildImage` in the generated image.
+
+`imageTag` (String)
+
+: The tag of the generated image.
+  This is useful if no tag was specified in the attributes of the argument to `buildImage`, because an automatic tag will be used instead.
+  `imageTag` allows you to retrieve the value of the tag used in this case.
+
+### Examples {#ssec-pkgs-dockerTools-buildImage-examples}
+
+:::{.example #ex-dockerTools-buildImage}
+# Building a Docker image
+
+The following package builds a Docker image that runs the `redis-server` executable from the `redis` package.
+The Docker image will have name `redis` and tag `latest`.
 
 ```nix
-buildImage {
+{ dockerTools, buildEnv, redis }:
+dockerTools.buildImage {
   name = "redis";
   tag = "latest";
 
-  fromImage = someBaseImage;
-  fromImageName = null;
-  fromImageTag = "latest";
-
-  copyToRoot = pkgs.buildEnv {
+  copyToRoot = buildEnv {
     name = "image-root";
-    paths = [ pkgs.redis ];
+    paths = [ redis ];
     pathsToLink = [ "/bin" ];
   };
 
   runAsRoot = ''
-    #!${pkgs.runtimeShell}
    mkdir -p /data
   '';
···
     WorkingDir = "/data";
     Volumes = { "/data" = { }; };
   };
-
-  diskSize = 1024;
-  buildVMMemorySize = 512;
 }
 ```
 
-The above example will build a Docker image `redis/latest` from the given base image. Loading and running this image in Docker results in `redis-server` being started automatically.
+The result of building this package is a `.tar.gz` file that can be loaded into Docker:
 
-- `name` specifies the name of the resulting image. This is the only required argument for `buildImage`.
+```shell
+$ nix-build
+(some output removed for clarity)
+building '/nix/store/yw0adm4wpsw1w6j4fb5hy25b3arr9s1v-docker-image-redis.tar.gz.drv'...
+Adding layer...
+tar: Removing leading `/' from member names
+Adding meta...
+Cooking the image...
+Finished.
+/nix/store/p4dsg62inh9d2ksy3c7bv58xa851dasr-docker-image-redis.tar.gz
 
-- `tag` specifies the tag of the resulting image. By default it's `null`, which indicates that the nix output hash will be used as tag.
+$ docker load -i /nix/store/p4dsg62inh9d2ksy3c7bv58xa851dasr-docker-image-redis.tar.gz
+(some output removed for clarity)
+Loaded image: redis:latest
+```
+:::
 
-- `fromImage` is the repository tarball containing the base image. It must be a valid Docker image, such as exported by `docker save`. By default it's `null`, which can be seen as equivalent to `FROM scratch` of a `Dockerfile`.
+:::{.example #ex-dockerTools-buildImage-runAsRoot}
+# Building a Docker image with `runAsRoot`
 
-- `fromImageName` can be used to further specify the base image within the repository, in case it contains multiple images. By default it's `null`, in which case `buildImage` will peek the first image available in the repository.
+The following package builds a Docker image with the `hello` executable from the `hello` package.
+It uses `runAsRoot` to create a directory and a file inside the image.
 
-- `fromImageTag` can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's `null`, in which case `buildImage` will peek the first tag available for the base image.
+This works the same as [](#ex-dockerTools-buildImage-extraCommands), but uses `runAsRoot` instead of `extraCommands`.
 
-- `copyToRoot` is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as `ADD contents/ /` in a `Dockerfile`. By default it's `null`.
-
-- `runAsRoot` is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied `contents` derivation. This can be similarly seen as `RUN ...` in a `Dockerfile`.
-
-> **_NOTE:_** Using this parameter requires the `kvm` device to be available.
-
-- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
-
-- `architecture` is _optional_ and used to specify the image architecture, this is useful for multi-architecture builds that don't need cross compiling. If not specified it will default to `hostPlatform`.
+```nix
+{ dockerTools, buildEnv, hello }:
+dockerTools.buildImage {
+  name = "hello";
+  tag = "latest";
 
-- `diskSize` is used to specify the disk size of the VM used to build the image in megabytes. By default it's 1024 MiB.
+  copyToRoot = buildEnv {
+    name = "image-root";
+    paths = [ hello ];
+    pathsToLink = [ "/bin" ];
+  };
 
-- `buildVMMemorySize` is used to specify the memory size of the VM to build the image in megabytes. By default it's 512 MiB.
+  runAsRoot = ''
+    mkdir -p /data
+    echo "some content" > my-file
+  '';
 
-After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied.
+  config = {
+    Cmd = [ "/bin/hello" ];
+    WorkingDir = "/data";
+  };
+}
+```
+:::
 
-At the end of the process, only one new single layer will be produced and added to the resulting image.
+:::{.example #ex-dockerTools-buildImage-extraCommands}
+# Building a Docker image with `extraCommands`
 
-The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`.
+The following package builds a Docker image with the `hello` executable from the `hello` package.
+It uses `extraCommands` to create a directory and a file inside the image.
 
-It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
+This works the same as [](#ex-dockerTools-buildImage-runAsRoot), but uses `extraCommands` instead of `runAsRoot`.
+Note that with `extraCommands`, we can't directly reference `/` and must create files and directories as if we were already on `/`.
 
-> **_NOTE:_** If you see errors similar to `getProtocolByName: does not exist (no such protocol name: tcp)` you may need to add `pkgs.iana-etc` to `contents`.
+```nix
+{ dockerTools, buildEnv, hello }:
+dockerTools.buildImage {
+  name = "hello";
+  tag = "latest";
 
-> **_NOTE:_** If you see errors similar to `Error_Protocol ("certificate has unknown CA",True,UnknownCa)` you may need to add `pkgs.cacert` to `contents`.
+  copyToRoot = buildEnv {
+    name = "image-root";
+    paths = [ hello ];
+    pathsToLink = [ "/bin" ];
+  };
 
-By default `buildImage` will use a static date of one second past the UNIX Epoch. This allows `buildImage` to produce binary reproducible images. When listing images with `docker images`, the newly created images will be listed like this:
+  extraCommands = ''
+    mkdir -p data
+    echo "some content" > my-file
+  '';
 
-```ShellSession
-$ docker images
-REPOSITORY TAG IMAGE ID CREATED SIZE
-hello latest 08c791c7846e 48 years ago 25.2MB
+  config = {
+    Cmd = [ "/bin/hello" ];
+    WorkingDir = "/data";
+  };
+}
 ```
+:::
 
-You can break binary reproducibility but have a sorted, meaningful `CREATED` column by setting `created` to `now`.
+:::{.example #ex-dockerTools-buildImage-creatednow}
+# Building a Docker image with a creation date set to the current time
+
+Note that using a value of `"now"` in the `created` attribute will break reproducibility.
 
 ```nix
-pkgs.dockerTools.buildImage {
+{ dockerTools, buildEnv, hello }:
+dockerTools.buildImage {
   name = "hello";
   tag = "latest";
+
   created = "now";
-  copyToRoot = pkgs.buildEnv {
+
+  copyToRoot = buildEnv {
     name = "image-root";
-    paths = [ pkgs.hello ];
+    paths = [ hello ];
     pathsToLink = [ "/bin" ];
   };
···
 }
 ```
 
-Now the Docker CLI will display a reasonable date and sort the images as expected:
+After importing the generated repository tarball with Docker, its CLI will display a reasonable date and sort the images as expected:
 
 ```ShellSession
 $ docker images
 REPOSITORY TAG IMAGE ID CREATED SIZE
 hello latest de2bf4786de6 About a minute ago 25.2MB
 ```
+:::
 
-However, the produced images will not be binary reproducible.
+## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
+
+`buildLayeredImage` uses [`streamLayeredImage`](#ssec-pkgs-dockerTools-streamLayeredImage) underneath to build a compressed Docker-compatible repository tarball.
+Basically, `buildLayeredImage` runs the script created by `streamLayeredImage` to save the compressed image in the Nix store.
+`buildLayeredImage` supports the same options as `streamLayeredImage`, see [`streamLayeredImage`](#ssec-pkgs-dockerTools-streamLayeredImage) for details.
+
+:::{.note}
+Despite the similar name, [`buildImage`](#ssec-pkgs-dockerTools-buildImage) works completely differently from `buildLayeredImage` and `streamLayeredImage`.
+
+Even though some of the arguments may seem related, they cannot be interchanged.
+:::
+
+You can use this function to load an image in Docker with `docker load`.
+See [](#ex-dockerTools-buildLayeredImage-hello) to see how to do that.
+
+### Examples {#ssec-pkgs-dockerTools-buildLayeredImage-examples}
+
+:::{.example #ex-dockerTools-buildLayeredImage-hello}
+# Building a layered Docker image
+
+The following package builds a layered Docker image that runs the `hello` executable from the `hello` package.
+The Docker image will have name `hello` and tag `latest`.
+
+```nix
+{ dockerTools, hello }:
+dockerTools.buildLayeredImage {
+  name = "hello";
+  tag = "latest";
+
+  contents = [ hello ];
+
+  config.Cmd = [ "/bin/hello" ];
+}
+```
+
+The result of building this package is a `.tar.gz` file that can be loaded into Docker:
+
+```shell
+$ nix-build
+(some output removed for clarity)
+building '/nix/store/bk8bnrbw10nq7p8pvcmdr0qf57y6scha-hello.tar.gz.drv'...
+No 'fromImage' provided
+Creating layer 1 from paths: ['/nix/store/i93s7xxblavsacpy82zdbn4kplsyq48l-libunistring-1.1']
+Creating layer 2 from paths: ['/nix/store/ji01n9vinnj22nbrb86nx8a1ssgpilx8-libidn2-2.3.4']
+Creating layer 3 from paths: ['/nix/store/ldrslljw4rg026nw06gyrdwl78k77vyq-xgcc-12.3.0-libgcc']
+Creating layer 4 from paths: ['/nix/store/9y8pmvk8gdwwznmkzxa6pwyah52xy3nk-glibc-2.38-27']
+Creating layer 5 from paths: ['/nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1']
+Creating layer 6 with customisation...
+Adding manifests...
+Done.
+/nix/store/hxcz7snvw7f8rzhbh6mv8jq39d992905-hello.tar.gz
+
+$ docker load -i /nix/store/hxcz7snvw7f8rzhbh6mv8jq39d992905-hello.tar.gz
+(some output removed for clarity)
+Loaded image: hello:latest
+```
+:::
+
+## streamLayeredImage {#ssec-pkgs-dockerTools-streamLayeredImage}
+
+`streamLayeredImage` builds a **script** which, when run, will stream to stdout a Docker-compatible repository tarball containing a single image, using multiple layers to improve sharing between images.
+This means that `streamLayeredImage` does not output an image into the Nix store, but only a script that builds the image, saving on IO and disk/cache space, particularly with large images.
+
+You can use this function to load an image in Docker with `docker load`.
+See [](#ex-dockerTools-streamLayeredImage-hello) to see how to do that.
+
+For this function, you specify a [store path](https://nixos.org/manual/nix/stable/store/store-path) or a list of store paths to be added to the image, and the function will automatically include any dependencies of those paths in the image.
+The function will attempt to create one layer per object in the Nix store that needs to be added to the image.
+In case there are more objects to include than available layers, the function will put the most ["popular"](https://github.com/NixOS/nixpkgs/tree/release-23.11/pkgs/build-support/references-by-popularity) objects in their own layers, and group all remaining objects into a single layer.
+
+An additional layer will be created with symlinks to the store paths you specified to be included in the image.
+These symlinks are built with [`symlinkJoin`](#trivial-builder-symlinkJoin), so they will be included in the root of the image.
+See [](#ex-dockerTools-streamLayeredImage-exploringlayers) to understand how these symlinks are laid out in the generated image.
+
+`streamLayeredImage` allows scripts to be run when creating the additional layer with symlinks, allowing custom behaviour to affect the final results of the image (see the documentation of the `extraCommands` and `fakeRootCommands` attributes).
+
+The resulting repository tarball will list a single image as specified by the `name` and `tag` attributes.
+By default, that image will use a static creation date (see documentation for the `created` attribute).
+This allows the function to produce reproducible images.
 
-## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
+### Inputs {#ssec-pkgs-dockerTools-streamLayeredImage-inputs}
 
-Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use `streamLayeredImage` instead, which this function uses internally.
+`streamLayeredImage` expects one argument with the following attributes:
 
-`name`
+`name` (String)
 
-: The name of the resulting image.
+: The name of the generated image.
 
-`tag` _optional_
+`tag` (String; _optional_)
 
 : Tag of the generated image.
+  If `null`, the hash of the nix derivation will be used as the tag.
 
-*Default:* the output path's hash
+  _Default value:_ `null`.
 
-`fromImage` _optional_
+`fromImage` (Path or Null; _optional_)
 
-: The repository tarball containing the base image. It must be a valid Docker image, such as one exported by `docker save`.
+: The repository tarball of an image to be used as the base for the generated image.
+  It must be a valid Docker image, such as one exported by `docker save`, or another image built with the `dockerTools` utility functions.
+  This can be seen as an equivalent of `FROM fromImage` in a `Dockerfile`.
+  A value of `null` can be seen as an equivalent of `FROM scratch`.
 
-*Default:* `null`, which can be seen as equivalent to `FROM scratch` of a `Dockerfile`.
+  If specified, the created layers will be appended to the layers defined in the base image.
+
+  _Default value:_ `null`.
+
+`contents` (Path or List of Paths; _optional_) []{#dockerTools-buildLayeredImage-arg-contents}
+
+: Directories whose contents will be added to the generated image.
+  Things that coerce to paths (e.g. a derivation) can also be used.
+  This can be seen as an equivalent of `ADD contents/ /` in a `Dockerfile`.
+
+  All the contents specified by `contents` will be added as a final layer in the generated image.
+  They will be added as links to the actual files (e.g. links to the store paths).
+  The actual files will be added in previous layers.
+
+  _Default value:_ `[]`.
+
+`config` (Attribute Set; _optional_) []{#dockerTools-buildLayeredImage-arg-config}
+
+: Used to specify the configuration of the containers that will be started off the generated image.
+  Must be an attribute set, with each attribute as listed in the [Docker Image Specification v1.3.0](https://github.com/moby/moby/blob/46f7ab808b9504d735d600e259ca0723f76fb164/image/spec/spec.md#image-json-field-descriptions).
+
+  If any packages are used directly in `config`, they will be automatically included in the generated image.
+  See [](#ex-dockerTools-streamLayeredImage-configclosure) for an example.
+
+  _Default value:_ `null`.
+
+`architecture` (String; _optional_)
+
+: Used to specify the image architecture.
+  This is useful for multi-architecture builds that don't need cross compiling.
+  If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
+  According to the linked specification, all possible values for `$GOARCH` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `386`, `amd64`, `arm`, or `arm64`.
+
+  _Default value:_ the same value from `pkgs.go.GOARCH`.
+
+`created` (String; _optional_)
+
+: Specifies the time of creation of the generated image.
+  This should be either a date and time formatted according to [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601) or `"now"`, in which case the current date will be used.
+
+  :::{.caution}
+  Using `"now"` means that the generated image will not be reproducible anymore (because the date will always change whenever it's built).
+  :::
+
+  _Default value:_ `"1970-01-01T00:00:01Z"`.
+
+`maxLayers` (Number; _optional_) []{#dockerTools-buildLayeredImage-arg-maxLayers}
+
+: The maximum number of layers that will be used by the generated image.
+  If a `fromImage` was specified, the number of layers used by `fromImage` will be subtracted from `maxLayers` to ensure that the image generated will have at most `maxLayers`.
 
-`contents` _optional_
+  :::{.caution}
+  Depending on the tool/runtime where the image will be used, there might be a limit to the number of layers that an image can have.
+  For Docker, see [this issue on GitHub](https://github.com/docker/docs/issues/8230).
+  :::
+
+  _Default value:_ 100.
+
+`extraCommands` (String; _optional_)
+
+: A bash script that will run in the context of the layer created with the contents specified by `contents`.
+  At the moment this script runs, only the contents directly specified by `contents` will be available as links.
+
+  _Default value:_ `""`.
 
-: Top-level paths in the container. Either a single derivation, or a list of derivations.
+`fakeRootCommands` (String; _optional_)
 
-*Default:* `[]`
+: A bash script that will run in the context of the layer created with the contents specified by `contents`.
+  During the process to generate that layer, the script in `extraCommands` will be run first, if specified.
+  After that, a {manpage}`fakeroot(1)` environment will be entered.
+  The script specified in `fakeRootCommands` runs inside the fakeroot environment, and the layer is then generated from the view of the files inside the fakeroot environment.
 
-`config` _optional_
+  This is useful to change the owners of the files in the layer (by running `chown`, for example), or performing any other privileged operations related to file manipulation (by default, all files in the layer will be owned by root, and the build environment doesn't have enough privileges to directly perform privileged operations on these files).
 
-`architecture` is _optional_ and used to specify the image architecture, this is useful for multi-architecture builds that don't need cross compiling. If not specified it will default to `hostPlatform`.
+  For more details, see the manpage for {manpage}`fakeroot(1)`.
 
-: Run-time configuration of the container. A full list of the options available is in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
+  :::{.caution}
+  Due to how fakeroot works, static binaries cannot perform privileged file operations in `fakeRootCommands`, unless `enableFakechroot` is set to `true`.
+  :::
 
-*Default:* `{}`
+  _Default value:_ `""`.
 
-`created` _optional_
+`enableFakechroot` (Boolean; _optional_)
 
-: Date and time the layers were created.
````
Follows the same `now` exception supported by `buildImage`. 541 + : By default, the script specified in `fakeRootCommands` only runs inside a fakeroot environment. 542 + If `enableFakechroot` is `true`, a more complete chroot environment will be created using [`proot`](https://proot-me.github.io/) before running the script in `fakeRootCommands`. 543 + Files in the Nix store will be available. 544 + This allows scripts that perform installation in `/` to work as expected. 545 + This can be seen as an equivalent of `RUN ...` in a `Dockerfile`. 155 546 156 - *Default:* `1970-01-01T00:00:01Z` 547 + _Default value:_ `false` 157 548 158 - `maxLayers` _optional_ 549 + `includeStorePaths` (Boolean; _optional_) 159 550 160 - : Maximum number of layers to create. 551 + : The files specified in `contents` are put into layers in the generated image. 552 + If `includeStorePaths` is `false`, the actual files will not be included in the generated image, and only links to them will be added instead. 553 + It is **not recommended** to set this to `false` unless you have other tooling to insert the store paths via other means (such as bind mounting the host store) when running containers with the generated image. 554 + If you don't provide any extra tooling, the generated image won't run properly. 161 555 162 - *Default:* `100` 556 + See [](#ex-dockerTools-streamLayeredImage-exploringlayers) to understand the impact of setting `includeStorePaths` to `false`. 163 557 164 - *Maximum:* `125` 558 + _Default value:_ `true` 165 559 166 - `extraCommands` _optional_ 560 + `passthru` (Attribute Set; _optional_) 167 561 168 - : Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so can create additional directories and files. 562 + : Use this to pass any attributes as [passthru](#var-stdenv-passthru) for the resulting derivation. 
169 563 170 - `fakeRootCommands` _optional_ 564 + _Default value:_ `{}` 171 565 172 - : Shell commands to run while creating the archive for the final layer in a fakeroot environment. Unlike `extraCommands`, you can run `chown` to change the owners of the files in the archive, changing fakeroot's state instead of the real filesystem. The latter would require privileges that the build user does not have. Static binaries do not interact with the fakeroot environment. By default all files in the archive will be owned by root. 566 + ### Passthru outputs {#ssec-pkgs-dockerTools-streamLayeredImage-passthru-outputs} 173 567 174 - `enableFakechroot` _optional_ 568 + `streamLayeredImage` also defines its own [`passthru`](#var-stdenv-passthru) attributes: 175 569 176 - : Whether to run in `fakeRootCommands` in `fakechroot`, making programs behave as though `/` is the root of the image being created, while files in the Nix store are available as usual. This allows scripts that perform installation in `/` to work as expected. Considering that `fakechroot` is implemented via the same mechanism as `fakeroot`, the same caveats apply. 570 + `imageTag` (String) 177 571 178 - *Default:* `false` 572 + : The tag of the generated image. 573 + This is useful if no tag was specified in the attributes of the argument to the function, because an automatic tag will be used instead. 574 + `imageTag` allows you to retrieve the value of the tag used in this case. 179 575 180 - ### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents} 576 + ### Examples {#ssec-pkgs-dockerTools-streamLayeredImage-examples} 181 577 182 - Each path directly listed in `contents` will have a symlink in the root of the image. 
578 + :::{.example #ex-dockerTools-streamLayeredImage-hello} 579 + # Streaming a layered Docker image 183 580 184 - For example: 581 + The following package builds a **script** which, when run, will stream a layered Docker image that runs the `hello` executable from the `hello` package. 582 + The Docker image will have name `hello` and tag `latest`. 185 583 186 584 ```nix 187 - pkgs.dockerTools.buildLayeredImage { 585 + { dockerTools, hello }: 586 + dockerTools.streamLayeredImage { 188 587 name = "hello"; 189 - contents = [ pkgs.hello ]; 588 + tag = "latest"; 589 + 590 + contents = [ hello ]; 591 + 592 + config.Cmd = [ "/bin/hello" ]; 190 593 } 191 594 ``` 192 595 193 - will create symlinks for all the paths in the `hello` package: 596 + The result of building this package is a script. 597 + Running this script and piping it into `docker load` gives you the same image that was built in [](#ex-dockerTools-buildLayeredImage-hello). 598 + Note that in this case, the image is never added to the Nix store, but instead streamed directly into Docker. 
194 599 195 - ```ShellSession 196 - /bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello 197 - /share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info 198 - /share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo 600 + ```shell 601 + $ nix-build 602 + (output removed for clarity) 603 + /nix/store/wsz2xl8ckxnlb769irvq6jv1280dfvxd-stream-hello 604 + 605 + $ /nix/store/wsz2xl8ckxnlb769irvq6jv1280dfvxd-stream-hello | docker load 606 + No 'fromImage' provided 607 + Creating layer 1 from paths: ['/nix/store/i93s7xxblavsacpy82zdbn4kplsyq48l-libunistring-1.1'] 608 + Creating layer 2 from paths: ['/nix/store/ji01n9vinnj22nbrb86nx8a1ssgpilx8-libidn2-2.3.4'] 609 + Creating layer 3 from paths: ['/nix/store/ldrslljw4rg026nw06gyrdwl78k77vyq-xgcc-12.3.0-libgcc'] 610 + Creating layer 4 from paths: ['/nix/store/9y8pmvk8gdwwznmkzxa6pwyah52xy3nk-glibc-2.38-27'] 611 + Creating layer 5 from paths: ['/nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1'] 612 + Creating layer 6 with customisation... 613 + Adding manifests... 614 + Done. 615 + (some output removed for clarity) 616 + Loaded image: hello:latest 199 617 ``` 618 + ::: 200 619 201 - ### Automatic inclusion of `config` references {#dockerTools-buildLayeredImage-arg-config} 202 - 203 - The closure of `config` is automatically included in the closure of the final image. 620 + :::{.example #ex-dockerTools-streamLayeredImage-exploringlayers} 621 + # Exploring the layers in an image built with `streamLayeredImage` 204 622 205 - This allows you to make very simple Docker images with very little code. This container will start up and run `hello`: 623 + Assume the following package, which builds a layered Docker image with the `hello` package. 
206 624 207 625 ```nix 208 - pkgs.dockerTools.buildLayeredImage { 626 + { dockerTools, hello }: 627 + dockerTools.streamLayeredImage { 209 628 name = "hello"; 210 - config.Cmd = [ "${pkgs.hello}/bin/hello" ]; 629 + contents = [ hello ]; 211 630 } 212 631 ``` 213 632 214 - ### Adjusting `maxLayers` {#dockerTools-buildLayeredImage-arg-maxLayers} 633 + The `hello` package depends on 4 other packages: 215 634 216 - Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images. 635 + ```shell 636 + $ nix-store --query -R $(nix-build -A hello) 637 + /nix/store/i93s7xxblavsacpy82zdbn4kplsyq48l-libunistring-1.1 638 + /nix/store/ji01n9vinnj22nbrb86nx8a1ssgpilx8-libidn2-2.3.4 639 + /nix/store/ldrslljw4rg026nw06gyrdwl78k77vyq-xgcc-12.3.0-libgcc 640 + /nix/store/9y8pmvk8gdwwznmkzxa6pwyah52xy3nk-glibc-2.38-27 641 + /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1 642 + ``` 217 643 218 - Modern Docker installations support up to 128 layers, but older versions support as few as 42. 644 + This means that all these packages will be included in the image generated by `streamLayeredImage`. 645 + It will put each package in its own layer, for a total of 5 layers with actual files in them. 646 + A final layer will be created only with symlinks for the `hello` package. 219 647 220 - If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further. 648 + The image generated will have the following directory structure (some directories were collapsed for readability): 221 649 222 - The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration. 
650 + ``` 651 + ├── bin 652 + │ └── hello → /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1/bin/hello 653 + ├── nix 654 + │ └── store 655 + │ ├─⊕ 9y8pmvk8gdwwznmkzxa6pwyah52xy3nk-glibc-2.38-27 656 + │ ├─⊕ i93s7xxblavsacpy82zdbn4kplsyq48l-libunistring-1.1 657 + │ ├─⊕ ji01n9vinnj22nbrb86nx8a1ssgpilx8-libidn2-2.3.4 658 + │ ├─⊕ ldrslljw4rg026nw06gyrdwl78k77vyq-xgcc-12.3.0-libgcc 659 + │ └─⊕ zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1 660 + └── share 661 + ├── info 662 + │ └── hello.info → /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1/share/info/hello.info 663 + ├─⊕ locale 664 + └── man 665 + └── man1 666 + └── hello.1.gz → /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1/share/man/man1/hello.1.gz 667 + ``` 223 668 224 - Docker's Layers are not inherently ordered, they are content-addressable and are not explicitly layered until they are composed in to an Image. 669 + Each of the packages in `/nix/store` comes from a layer in the image. 670 + The final layer adds the `/bin` and `/share` directories, but they only contain links to the actual files in `/nix/store`. 225 671 226 - ## streamLayeredImage {#ssec-pkgs-dockerTools-streamLayeredImage} 672 + If our package sets `includeStorePaths` to `false`, we'll end up with only the final layer with the links, but the actual files won't exist in the image: 227 673 228 - Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for `buildLayeredImage`. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images. 
674 + ```nix 675 + { dockerTools, hello }: 676 + dockerTools.streamLayeredImage { 677 + name = "hello"; 678 + contents = [ hello ]; + includeStorePaths = false; 679 + } 680 + ``` 229 681 230 - The image produced by running the output script can be piped directly into `docker load`, to load it into the local docker daemon: 682 + After building this package, the image will have the following directory structure: 231 683 232 - ```ShellSession 233 - $(nix-build) | docker load 684 + ``` 685 + ├── bin 686 + │ └── hello → /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1/bin/hello 687 + └── share 688 + ├── info 689 + │ └── hello.info → /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1/share/info/hello.info 690 + ├─⊕ locale 691 + └── man 692 + └── man1 693 + └── hello.1.gz → /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1/share/man/man1/hello.1.gz 234 694 ``` 235 695 236 - Alternatively, the image be piped via `gzip` into `skopeo`, e.g., to copy it into a registry: 696 + Note how the links point to paths in `/nix/store`, but they're not included in the image itself. 697 + This is why you need extra tooling when setting `includeStorePaths` to `false`: 698 + a container created from such an image won't find any of the files it needs to run otherwise. 699 + ::: 700 + 701 + ::: {.example #ex-dockerTools-streamLayeredImage-configclosure} 702 + # Building a layered Docker image with packages directly in `config` 703 + 704 + The closure of `config` is automatically included in the generated image. 705 + The following package shows a more compact way to create the same output generated in [](#ex-dockerTools-streamLayeredImage-hello). 
237 706 238 - ```ShellSession 239 - $(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag 707 + ```nix 708 + { dockerTools, hello, lib }: 709 + dockerTools.streamLayeredImage { 710 + name = "hello"; 711 + tag = "latest"; 712 + config.Cmd = [ "${lib.getExe hello}" ]; 713 + } 240 714 ``` 715 + ::: 241 716 242 717 ## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry} 243 718
+4
doc/build-helpers/special/mkshell.section.md
··· 29 29 30 30 ... all the attributes of `stdenv.mkDerivation`. 31 31 32 + ## Variants {#sec-pkgs-mkShell-variants} 33 + 34 + `pkgs.mkShellNoCC` is a variant that uses `stdenvNoCC` instead of `stdenv` as the base environment. This is useful if no C compiler is needed in the shell environment. 35 + 32 36 ## Building the shell {#sec-pkgs-mkShell-building} 33 37 34 38 This derivation output will contain a text file that contains a reference to
+1
doc/languages-frameworks/javascript.section.md
··· 354 354 355 355 - The `echo 9` steps comes from this answer: <https://stackoverflow.com/a/49139496> 356 356 - Exporting the headers in `npm_config_nodedir` comes from this issue: <https://github.com/nodejs/node-gyp/issues/1191#issuecomment-301243919> 357 + - `offlineCache` (described [above](#javascript-yarn2nix-preparation)) must be specified to avoid [Import From Derivation](#ssec-import-from-derivation) (IFD) when used inside Nixpkgs. 357 358 358 359 ## Outside Nixpkgs {#javascript-outside-nixpkgs} 359 360
+11
doc/languages-frameworks/qt.section.md
··· 26 26 27 27 Additionally all Qt packages must include `wrapQtAppsHook` in `nativeBuildInputs`, or you must explicitly set `dontWrapQtApps`. 28 28 29 + `pkgs.callPackage` does not provide injections for `qtbase` or the like. 30 + Instead you want to use either `pkgs.libsForQt5.callPackage` or `pkgs.qt6Packages.callPackage`, depending on the Qt version you want to use. 31 + 32 + For example (from [here](https://github.com/NixOS/nixpkgs/blob/2f9286912cb215969ece465147badf6d07aa43fe/pkgs/top-level/all-packages.nix#L30106)): 33 + 34 + ```nix 35 + zeal-qt5 = libsForQt5.callPackage ../data/documentation/zeal { }; 36 + zeal-qt6 = qt6Packages.callPackage ../data/documentation/zeal { }; 37 + zeal = zeal-qt5; 38 + ``` 39 + 29 40 ## Locating runtime dependencies {#qt-runtime-dependencies} 30 41 31 42 Qt applications must be wrapped to find runtime dependencies.
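To make the two requirements above concrete, here is a minimal package sketch combining them. This is illustrative only: the `my-qt-app` name, version, `src` URL, and hash are placeholders, not taken from nixpkgs.

```nix
# Hypothetical minimal Qt 5 package. The key points from the text above:
# qtbase comes from the Qt-scoped package set, and wrapQtAppsHook is in
# nativeBuildInputs so the installed binaries get wrapped.
{ stdenv, fetchurl, qtbase, wrapQtAppsHook }:

stdenv.mkDerivation rec {
  pname = "my-qt-app";
  version = "1.0.0";

  src = fetchurl {
    url = "https://example.org/my-qt-app-${version}.tar.gz";
    hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; # placeholder
  };

  buildInputs = [ qtbase ];
  nativeBuildInputs = [ wrapQtAppsHook ];
}
```

Such a file would then be instantiated with `libsForQt5.callPackage ./my-qt-app.nix { }` (rather than `pkgs.callPackage`), so that `qtbase` and `wrapQtAppsHook` both resolve to a matching Qt 5 package set.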
+10 -9
doc/languages-frameworks/rust.section.md
··· 44 44 } 45 45 ``` 46 46 47 - `buildRustPackage` requires either the `cargoSha256` or the 48 - `cargoHash` attribute which is computed over all crate sources of this 49 - package. `cargoHash256` is used for traditional Nix SHA-256 hashes, 50 - such as the one in the example above. `cargoHash` should instead be 51 - used for [SRI](https://www.w3.org/TR/SRI/) hashes. For example: 47 + `buildRustPackage` requires either the `cargoHash` or the `cargoSha256` 48 + attribute which is computed over all crate sources of this package. 49 + `cargoSha256` is used for traditional Nix SHA-256 hashes. `cargoHash` should 50 + instead be used for [SRI](https://www.w3.org/TR/SRI/) hashes and should be 51 + preferred. For example: 52 + 53 + ```nix 54 + cargoHash = "sha256-l1vL2ZdtDRxSGvP0X/l3nMw8+6WF67KPutJEzUROjg8="; 55 + ``` 52 56 53 57 Exception: If the application has cargo `git` dependencies, the `cargoHash`/`cargoSha256` 54 58 approach will not work, and you will need to copy the `Cargo.lock` file of the application 55 - to nixpkgs and continue with the next section for specifying the options of the`cargoLock` 59 + to nixpkgs and continue with the next section for specifying the options of the `cargoLock` 56 60 section. 57 61 58 - ```nix 59 - cargoHash = "sha256-l1vL2ZdtDRxSGvP0X/l3nMw8+6WF67KPutJEzUROjg8="; 60 - ``` 61 62 62 63 Both types of hashes are permitted when contributing to nixpkgs. The 63 64 Cargo hash is obtained by inserting a fake checksum into the
+2 -2
doc/stdenv/stdenv.chapter.md
··· 475 475 ```nix 476 476 passthru.updateScript = writeScript "update-zoom-us" '' 477 477 #!/usr/bin/env nix-shell 478 - #!nix-shell -i bash -p curl pcre common-updater-scripts 478 + #!nix-shell -i bash -p curl pcre2 common-updater-scripts 479 479 480 480 set -eu -o pipefail 481 481 482 - version="$(curl -sI https://zoom.us/client/latest/zoom_x86_64.tar.xz | grep -Fi 'Location:' | pcregrep -o1 '/(([0-9]\.?)+)/')" 482 + version="$(curl -sI https://zoom.us/client/latest/zoom_x86_64.tar.xz | grep -Fi 'Location:' | pcre2grep -o1 '/(([0-9]\.?)+)/')" 483 483 update-source-version zoom-us "$version" 484 484 ''; 485 485 ```
+1 -1
nixos/doc/manual/administration/imperative-containers.section.md
··· 77 77 78 78 There are several ways to change the configuration of the container. 79 79 First, on the host, you can edit 80 - `/var/lib/container/name/etc/nixos/configuration.nix`, and run 80 + `/var/lib/nixos-containers/foo/etc/nixos/configuration.nix`, and run 81 81 82 82 ```ShellSession 83 83 # nixos-container update foo
+39
nixos/doc/manual/development/unit-handling.section.md
··· 63 63 is **restart**ed with the others. If it is set, both the service and the 64 64 socket are **stop**ped and the socket is **start**ed, leaving socket 65 65 activation to start the service when it's needed. 66 + 67 + ## Sysinit reactivation {#sec-sysinit-reactivation} 68 + 69 + [`sysinit.target`](https://www.freedesktop.org/software/systemd/man/latest/systemd.special.html#sysinit.target) 70 + is a systemd target that encodes system initialization (i.e. early startup). A 71 + few units that need to run very early in the bootup process are ordered to 72 + finish before this target is reached. Probably the most notable one of these is 73 + `systemd-tmpfiles-setup.service`. We will refer to these units as "sysinit 74 + units". 75 + 76 + "Normal" systemd units, by default, are ordered AFTER `sysinit.target`. In 77 + other words, these "normal" units expect all services ordered before 78 + `sysinit.target` to have finished without explicitly declaring this dependency 79 + relationship for each dependency. See the [systemd 80 + bootup documentation](https://www.freedesktop.org/software/systemd/man/latest/bootup.html) 81 + for more details on the bootup process. 82 + 83 + When restarting both a unit ordered before `sysinit.target` as well as one 84 + after, this presents a problem because they would be started at the same time 85 + as they do not explicitly declare their dependency relations. 86 + 87 + To solve this, NixOS has an artificial `sysinit-reactivation.target` which 88 + allows you to ensure that services ordered before `sysinit.target` are 89 + restarted correctly. This applies both to the ordering between these sysinit 90 + services as well as ensuring that sysinit units are restarted before "normal" 91 + units.
92 + 93 + To make an existing sysinit service restart correctly during system switch, you 94 + have to declare: 95 + 96 + ```nix 97 + systemd.services.my-sysinit = { 98 + requiredBy = [ "sysinit-reactivation.target" ]; 99 + before = [ "sysinit-reactivation.target" ]; 100 + restartTriggers = [ config.environment.etc."my-sysinit.d".source ]; 101 + }; 102 + ``` 103 + 104 + You need to configure appropriate `restartTriggers` specific to your service.
+1 -1
nixos/doc/manual/development/what-happens-during-a-system-switch.chapter.md
··· 37 37 - Forget about the failed state of units (`systemctl reset-failed`) 38 38 - Reload systemd (`systemctl daemon-reload`) 39 39 - Reload systemd user instances (`systemctl --user daemon-reload`) 40 - - Set up tmpfiles (`systemd-tmpfiles --create`) 40 + - Reactivate sysinit (`systemctl restart sysinit-reactivation.target`) 41 41 - Reload units (`systemctl reload`) 42 42 - Restart units (`systemctl restart`) 43 43 - Start units (`systemctl start`)
+5
nixos/doc/manual/release-notes/rl-2405.section.md
··· 116 116 117 117 - The executable file names for `firefox-devedition`, `firefox-beta`, `firefox-esr` now matches their package names, which is consistent with the `firefox-*-bin` packages. The desktop entries are also updated so that you can have multiple editions of firefox in your app launcher. 118 118 119 + - switch-to-configuration does not directly call systemd-tmpfiles anymore. 120 + Instead, the new artificial sysinit-reactivation.target is introduced, which 121 + allows restarting multiple services that are ordered before sysinit.target 122 + while respecting the ordering between the services. 123 + 119 124 - The `systemd.oomd` module behavior is changed as: 120 125 121 126 - Raise ManagedOOMMemoryPressureLimit from 50% to 80%. This should make systemd-oomd kill things less often, and fix issues like [this](https://pagure.io/fedora-workstation/issue/358).
+3 -3
nixos/maintainers/option-usages.nix
··· 9 9 10 10 # This file is made to be used as follow: 11 11 # 12 - # $ nix-instantiate ./option-usage.nix --argstr testOption service.xserver.enable -A txtContent --eval 12 + # $ nix-instantiate ./option-usages.nix --argstr testOption service.xserver.enable -A txtContent --eval 13 13 # 14 14 # or 15 15 # 16 - # $ nix-build ./option-usage.nix --argstr testOption service.xserver.enable -A txt -o service.xserver.enable._txt 16 + # $ nix-build ./option-usages.nix --argstr testOption service.xserver.enable -A txt -o service.xserver.enable._txt 17 17 # 18 18 # Other targets exists such as `dotContent`, `dot`, and `pdf`. If you are 19 19 # looking for the option usage of multiple options, you can provide a list 20 20 # as argument. 21 21 # 22 - # $ nix-build ./option-usage.nix --arg testOptions \ 22 + # $ nix-build ./option-usages.nix --arg testOptions \ 23 23 # '["boot.loader.gummiboot.enable" "boot.loader.gummiboot.timeout"]' \ 24 24 # -A txt -o gummiboot.list 25 25 #
+9 -3
nixos/modules/system/activation/switch-to-configuration.pl
··· 889 889 890 890 close($list_active_users) || die("Unable to close the file handle to loginctl"); 891 891 892 - # Set the new tmpfiles 893 - print STDERR "setting up tmpfiles\n"; 894 - system("$new_systemd/bin/systemd-tmpfiles", "--create", "--remove", "--exclude-prefix=/dev") == 0 or $res = 3; 892 + # Restart sysinit-reactivation.target. 893 + # This target only exists to restart services ordered before sysinit.target. We 894 + # cannot use X-StopOnReconfiguration to restart sysinit.target because then ALL 895 + # services of the system would be restarted since all normal services have a 896 + # default dependency on sysinit.target. sysinit-reactivation.target ensures 897 + # that services ordered BEFORE sysinit.target get re-started in the correct 898 + # order. Ordering between these services is respected. 899 + print STDERR "restarting sysinit-reactivation.target\n"; 900 + system("$new_systemd/bin/systemctl", "restart", "sysinit-reactivation.target") == 0 or $res = 4; 895 901 896 902 # Before reloading we need to ensure that the units are still active. They may have been 897 903 # deactivated because one of their requirements got stopped. If they are inactive
+7
nixos/modules/system/boot/systemd.nix
··· 569 569 unitConfig.X-StopOnReconfiguration = true; 570 570 }; 571 571 572 + # This target only exists so that services ordered before sysinit.target 573 + # are restarted in the correct order, notably BEFORE the other services, 574 + # when switching configurations. 575 + systemd.targets.sysinit-reactivation = { 576 + description = "Reactivate sysinit units"; 577 + }; 578 + 572 579 systemd.units = 573 580 mapAttrs' (n: v: nameValuePair "${n}.path" (pathToUnit n v)) cfg.paths 574 581 // mapAttrs' (n: v: nameValuePair "${n}.service" (serviceToUnit n v)) cfg.services
+35
nixos/modules/system/boot/systemd/tmpfiles.nix
··· 150 150 "systemd-tmpfiles-setup.service" 151 151 ]; 152 152 153 + # Allow systemd-tmpfiles to be restarted by switch-to-configuration. This 154 + # service is not pulled into the normal boot process. It only exists for 155 + # switch-to-configuration. 156 + # 157 + # This needs to be a separate unit because it does not execute 158 + # systemd-tmpfiles with `--boot` as that is supposed to only be executed 159 + # once at boot time. 160 + # 161 + # Keep this aligned with the upstream `systemd-tmpfiles-setup.service` unit. 162 + systemd.services."systemd-tmpfiles-resetup" = { 163 + description = "Re-setup tmpfiles on a system that is already running."; 164 + 165 + requiredBy = [ "sysinit-reactivation.target" ]; 166 + after = [ "local-fs.target" "systemd-sysusers.service" "systemd-journald.service" ]; 167 + before = [ "sysinit-reactivation.target" "shutdown.target" ]; 168 + conflicts = [ "shutdown.target" ]; 169 + restartTriggers = [ config.environment.etc."tmpfiles.d".source ]; 170 + 171 + unitConfig.DefaultDependencies = false; 172 + 173 + serviceConfig = { 174 + Type = "oneshot"; 175 + RemainAfterExit = true; 176 + ExecStart = "systemd-tmpfiles --create --remove --exclude-prefix=/dev"; 177 + SuccessExitStatus = "DATAERR CANTCREAT"; 178 + ImportCredential = [ 179 + "tmpfiles.*" 180 + "login.motd" 181 + "login.issue" 182 + "network.hosts" 183 + "ssh.authorized_keys.root" 184 + ]; 185 + }; 186 + }; 187 + 153 188 environment.etc = { 154 189 "tmpfiles.d".source = (pkgs.symlinkJoin { 155 190 name = "tmpfiles.d";
+5 -4
nixos/modules/virtualisation/incus.nix
··· 150 150 after = [ 151 151 "network-online.target" 152 152 "lxcfs.service" 153 - ] ++ (lib.optional cfg.socketActivation "incus.socket"); 153 + "incus.socket" 154 + ]; 154 155 requires = [ 155 156 "lxcfs.service" 156 - ] ++ (lib.optional cfg.socketActivation "incus.socket"); 157 + "incus.socket" 158 + ]; 157 159 wants = [ 158 160 "network-online.target" 159 161 ]; ··· 183 185 }; 184 186 }; 185 187 186 - systemd.sockets.incus = lib.mkIf cfg.socketActivation { 188 + systemd.sockets.incus = { 187 189 description = "Incus UNIX socket"; 188 190 wantedBy = [ "sockets.target" ]; 189 191 ··· 191 193 ListenStream = "/var/lib/incus/unix.socket"; 192 194 SocketMode = "0660"; 193 195 SocketGroup = "incus-admin"; 194 - Service = "incus.service"; 195 196 }; 196 197 }; 197 198
+2 -4
nixos/modules/virtualisation/lxd.nix
··· 214 214 LimitNPROC = "infinity"; 215 215 TasksMax = "infinity"; 216 216 217 - Restart = "on-failure"; 218 - TimeoutStartSec = "${cfg.startTimeout}s"; 219 - TimeoutStopSec = "30s"; 220 - 221 217 # By default, `lxd` loads configuration files from hard-coded 222 218 # `/usr/share/lxc/config` - since this is a no-go for us, we have to 223 219 # explicitly tell it where the actual configuration files are 224 220 Environment = lib.mkIf (config.virtualisation.lxc.lxcfs.enable) 225 221 "LXD_LXC_TEMPLATE_CONFIG=${pkgs.lxcfs}/share/lxc/config"; 226 222 }; 223 + 224 + unitConfig.ConditionPathExists = "!/var/lib/incus/.migrated-from-lxd"; 227 225 }; 228 226 229 227 systemd.services.lxd-preseed = lib.mkIf (cfg.preseed != null) {
+1
nixos/tests/all-tests.nix
··· 820 820 syncthing-init = handleTest ./syncthing-init.nix {}; 821 821 syncthing-many-devices = handleTest ./syncthing-many-devices.nix {}; 822 822 syncthing-relay = handleTest ./syncthing-relay.nix {}; 823 + sysinit-reactivation = runTest ./sysinit-reactivation.nix; 823 824 systemd = handleTest ./systemd.nix {}; 824 825 systemd-analyze = handleTest ./systemd-analyze.nix {}; 825 826 systemd-binfmt = handleTestOn ["x86_64-linux"] ./systemd-binfmt.nix {};
+2 -3
nixos/tests/incus/default.nix
··· 6 6 }: 7 7 { 8 8 container = import ./container.nix { inherit system pkgs; }; 9 + lxd-to-incus = import ./lxd-to-incus.nix { inherit system pkgs; }; 9 10 preseed = import ./preseed.nix { inherit system pkgs; }; 10 11 socket-activated = import ./socket-activated.nix { inherit system pkgs; }; 11 - virtual-machine = handleTestOn [ "x86_64-linux" ] ./virtual-machine.nix { 12 - inherit system pkgs; 13 - }; 12 + virtual-machine = handleTestOn [ "x86_64-linux" ] ./virtual-machine.nix { inherit system pkgs; }; 14 13 }
+112
nixos/tests/incus/lxd-to-incus.nix
··· 1 + import ../make-test-python.nix ( 2 + 3 + { pkgs, lib, ... }: 4 + 5 + let 6 + releases = import ../../release.nix { configuration.documentation.enable = lib.mkForce false; }; 7 + 8 + container-image-metadata = releases.lxdContainerMeta.${pkgs.stdenv.hostPlatform.system}; 9 + container-image-rootfs = releases.lxdContainerImage.${pkgs.stdenv.hostPlatform.system}; 10 + in 11 + { 12 + name = "lxd-to-incus"; 13 + 14 + meta = { 15 + maintainers = lib.teams.lxc.members; 16 + }; 17 + 18 + nodes.machine = 19 + { lib, ... }: 20 + { 21 + environment.systemPackages = [ pkgs.lxd-to-incus ]; 22 + 23 + virtualisation = { 24 + diskSize = 6144; 25 + cores = 2; 26 + memorySize = 2048; 27 + 28 + lxd.enable = true; 29 + lxd.preseed = { 30 + networks = [ 31 + { 32 + name = "nixostestbr0"; 33 + type = "bridge"; 34 + config = { 35 + "ipv4.address" = "10.0.100.1/24"; 36 + "ipv4.nat" = "true"; 37 + }; 38 + } 39 + ]; 40 + profiles = [ 41 + { 42 + name = "default"; 43 + devices = { 44 + eth0 = { 45 + name = "eth0"; 46 + network = "nixostestbr0"; 47 + type = "nic"; 48 + }; 49 + root = { 50 + path = "/"; 51 + pool = "nixostest_pool"; 52 + size = "35GiB"; 53 + type = "disk"; 54 + }; 55 + }; 56 + } 57 + { 58 + name = "nixos_notdefault"; 59 + devices = { }; 60 + } 61 + ]; 62 + storage_pools = [ 63 + { 64 + name = "nixostest_pool"; 65 + driver = "dir"; 66 + } 67 + ]; 68 + }; 69 + 70 + incus.enable = true; 71 + }; 72 + }; 73 + 74 + testScript = '' 75 + def lxd_wait_for_preseed(_) -> bool: 76 + _, output = machine.systemctl("is-active lxd-preseed.service") 77 + return ("inactive" in output) 78 + 79 + def lxd_instance_is_up(_) -> bool: 80 + status, _ = machine.execute("lxc exec container --disable-stdin --force-interactive /run/current-system/sw/bin/true") 81 + return status == 0 82 + 83 + def incus_instance_is_up(_) -> bool: 84 + status, _ = machine.execute("incus exec container --disable-stdin --force-interactive /run/current-system/sw/bin/true") 85 + return status == 0 86 + 87 + with 
machine.nested("initialize lxd and resources"): 88 + machine.wait_for_unit("sockets.target") 89 + machine.wait_for_unit("lxd.service") 90 + retry(lxd_wait_for_preseed) 91 + 92 + machine.succeed("lxc image import ${container-image-metadata}/*/*.tar.xz ${container-image-rootfs}/*/*.tar.xz --alias nixos") 93 + machine.succeed("lxc launch nixos container") 94 + retry(lxd_instance_is_up) 95 + 96 + machine.wait_for_unit("incus.service") 97 + 98 + with machine.nested("run migration"): 99 + machine.succeed("lxd-to-incus --yes") 100 + 101 + with machine.nested("verify resources migrated to incus"): 102 + machine.succeed("incus config show container") 103 + retry(incus_instance_is_up) 104 + machine.succeed("incus exec container -- true") 105 + machine.succeed("incus profile show default | grep nixostestbr0") 106 + machine.succeed("incus profile show default | grep nixostest_pool") 107 + machine.succeed("incus profile show nixos_notdefault") 108 + machine.succeed("incus storage show nixostest_pool") 109 + machine.succeed("incus network show nixostestbr0") 110 + ''; 111 + } 112 + )
+107
nixos/tests/sysinit-reactivation.nix
··· 1 + # This runs to two scenarios but in one tests: 2 + # - A post-sysinit service needs to be restarted AFTER tmpfiles was restarted. 3 + # - A service needs to be restarted BEFORE tmpfiles is restarted 4 + 5 + { lib, ... }: 6 + 7 + let 8 + makeGeneration = generation: { 9 + "${generation}".configuration = { 10 + systemd.services.pre-sysinit-before-tmpfiles.environment.USER = 11 + lib.mkForce "${generation}-tmpfiles-user"; 12 + 13 + systemd.services.pre-sysinit-after-tmpfiles.environment = { 14 + NEEDED_PATH = lib.mkForce "/run/${generation}-needed-by-pre-sysinit-after-tmpfiles"; 15 + PATH_TO_CREATE = lib.mkForce "/run/${generation}-needed-by-post-sysinit"; 16 + }; 17 + 18 + systemd.services.post-sysinit.environment = { 19 + NEEDED_PATH = lib.mkForce "/run/${generation}-needed-by-post-sysinit"; 20 + PATH_TO_CREATE = lib.mkForce "/run/${generation}-created-by-post-sysinit"; 21 + }; 22 + 23 + systemd.tmpfiles.settings.test = lib.mkForce { 24 + "/run/${generation}-needed-by-pre-sysinit-after-tmpfiles".f.user = 25 + "${generation}-tmpfiles-user"; 26 + }; 27 + }; 28 + }; 29 + in 30 + 31 + { 32 + 33 + name = "sysinit-reactivation"; 34 + 35 + meta.maintainers = with lib.maintainers; [ nikstur ]; 36 + 37 + nodes.machine = { config, lib, pkgs, ... 
}: { 38 + systemd.services.pre-sysinit-before-tmpfiles = { 39 + wantedBy = [ "sysinit.target" ]; 40 + requiredBy = [ "sysinit-reactivation.target" ]; 41 + before = [ "systemd-tmpfiles-setup.service" "systemd-tmpfiles-resetup.service" ]; 42 + unitConfig.DefaultDependencies = false; 43 + serviceConfig.Type = "oneshot"; 44 + serviceConfig.RemainAfterExit = true; 45 + environment.USER = "tmpfiles-user"; 46 + script = "${pkgs.shadow}/bin/useradd $USER"; 47 + }; 48 + 49 + systemd.services.pre-sysinit-after-tmpfiles = { 50 + wantedBy = [ "sysinit.target" ]; 51 + requiredBy = [ "sysinit-reactivation.target" ]; 52 + after = [ "systemd-tmpfiles-setup.service" "systemd-tmpfiles-resetup.service" ]; 53 + unitConfig.DefaultDependencies = false; 54 + serviceConfig.Type = "oneshot"; 55 + serviceConfig.RemainAfterExit = true; 56 + environment = { 57 + NEEDED_PATH = "/run/needed-by-pre-sysinit-after-tmpfiles"; 58 + PATH_TO_CREATE = "/run/needed-by-post-sysinit"; 59 + }; 60 + script = '' 61 + if [[ -e $NEEDED_PATH ]]; then 62 + touch $PATH_TO_CREATE 63 + fi 64 + ''; 65 + }; 66 + 67 + systemd.services.post-sysinit = { 68 + wantedBy = [ "default.target" ]; 69 + serviceConfig.Type = "oneshot"; 70 + serviceConfig.RemainAfterExit = true; 71 + environment = { 72 + NEEDED_PATH = "/run/needed-by-post-sysinit"; 73 + PATH_TO_CREATE = "/run/created-by-post-sysinit"; 74 + }; 75 + script = '' 76 + if [[ -e $NEEDED_PATH ]]; then 77 + touch $PATH_TO_CREATE 78 + fi 79 + ''; 80 + }; 81 + 82 + systemd.tmpfiles.settings.test = { 83 + "/run/needed-by-pre-sysinit-after-tmpfiles".f.user = 84 + "tmpfiles-user"; 85 + }; 86 + 87 + specialisation = lib.mkMerge [ 88 + (makeGeneration "second") 89 + (makeGeneration "third") 90 + ]; 91 + }; 92 + 93 + testScript = { nodes, ... 
}: '' 94 + def switch(generation): 95 + toplevel = "${nodes.machine.system.build.toplevel}"; 96 + machine.succeed(f"{toplevel}/specialisation/{generation}/bin/switch-to-configuration switch") 97 + 98 + machine.wait_for_unit("default.target") 99 + machine.succeed("test -e /run/created-by-post-sysinit") 100 + 101 + switch("second") 102 + machine.succeed("test -e /run/second-created-by-post-sysinit") 103 + 104 + switch("third") 105 + machine.succeed("test -e /run/third-created-by-post-sysinit") 106 + ''; 107 + }
+22 -3
pkgs/applications/editors/vim/plugins/overrides.nix
··· 969 969 dependencies = with self; [ nvim-lspconfig ]; 970 970 }; 971 971 972 - nvim-spectre = super.nvim-spectre.overrideAttrs { 973 - dependencies = with self; [ plenary-nvim ]; 974 - }; 972 + nvim-spectre = super.nvim-spectre.overrideAttrs (old: 973 + let 974 + spectre_oxi = rustPlatform.buildRustPackage { 975 + pname = "spectre_oxi"; 976 + inherit (old) version src; 977 + sourceRoot = "source/spectre_oxi"; 978 + 979 + cargoHash = "sha256-y2ZIgOApIShkIesXmItPKDO6XjFrG4GS5HCPncJUmN8="; 980 + 981 + 982 + preCheck = '' 983 + mkdir tests/tmp/ 984 + ''; 985 + }; 986 + in 987 + (lib.optionalAttrs stdenv.isLinux { 988 + dependencies = with self; 989 + [ plenary-nvim ]; 990 + postInstall = '' 991 + ln -s ${spectre_oxi}/lib/libspectre_oxi.* $out/lua/spectre_oxi.so 992 + ''; 993 + })); 975 994 976 995 nvim-teal-maker = super.nvim-teal-maker.overrideAttrs { 977 996 postPatch = ''
+5 -5
pkgs/applications/misc/1password-gui/default.nix
··· 9 9 let 10 10 11 11 pname = "1password"; 12 - version = if channel == "stable" then "8.10.23" else "8.10.24-6.BETA"; 12 + version = if channel == "stable" then "8.10.23" else "8.10.24-35.BETA"; 13 13 14 14 sources = { 15 15 stable = { ··· 33 33 beta = { 34 34 x86_64-linux = { 35 35 url = "https://downloads.1password.com/linux/tar/beta/x86_64/1password-${version}.x64.tar.gz"; 36 - hash = "sha256-vrC+JzcRQnXTB0KDoIpYTJjoQCNFgFaZuV+8BXTwwmk="; 36 + hash = "sha256-NO8jxXvdjDn7uTyboav8UnHfc0plHDLoKQ/FHZJqpsE="; 37 37 }; 38 38 aarch64-linux = { 39 39 url = "https://downloads.1password.com/linux/tar/beta/aarch64/1password-${version}.arm64.tar.gz"; 40 - hash = "sha256-4v5gtaPWjyBs5VV5quuq77MzjcYQN1k/Ju0NYB44gYM="; 40 + hash = "sha256-9qnODNE3kNRZyj5+2nfoz9zBmY2MqxVPo3rpLOCFAsI="; 41 41 }; 42 42 x86_64-darwin = { 43 43 url = "https://downloads.1password.com/mac/1Password-${version}-x86_64.zip"; 44 - hash = "sha256-SSGg8zLiEaYFTWRb4K145nG/dDQCQw2di8bD59xoTrA="; 44 + hash = "sha256-gU11xBIGOCRbQshOQ4ktYVgHe6dxJ0GnONkVnZkCiEE="; 45 45 }; 46 46 aarch64-darwin = { 47 47 url = "https://downloads.1password.com/mac/1Password-${version}-aarch64.zip"; 48 - hash = "sha256-SgTv1gYPBAr/LPeAtHGBZUw35TegpaVW1M84maT8BdY="; 48 + hash = "sha256-YcnVIgV+2MZOS+a+3lFuNMgnLaGVrOP53B/k70zRoTI="; 49 49 }; 50 50 }; 51 51 };
+3 -3
pkgs/applications/misc/gpu-viewer/default.nix
··· 19 19 20 20 python3.pkgs.buildPythonApplication rec { 21 21 pname = "gpu-viewer"; 22 - version = "2.26"; 22 + version = "2.32"; 23 23 24 24 format = "other"; 25 25 26 26 src = fetchFromGitHub { 27 27 owner = "arunsivaramanneo"; 28 28 repo = pname; 29 - rev = "v${version}"; 30 - hash = "sha256-3GYJq76g/pU8dt+OMGBeDcw47z5Xv3AGkLsACcBCELs="; 29 + rev = "refs/tags/v${version}"; 30 + hash = "sha256-zv53tvFQ0NAqFPYp7qZVmbuM1fBJwC4t43YJDZdqSPU="; 31 31 }; 32 32 33 33 nativeBuildInputs = [
+2 -2
pkgs/applications/misc/logseq/default.nix
··· 14 14 15 15 in { 16 16 pname = "logseq"; 17 - version = "0.10.3"; 17 + version = "0.10.4"; 18 18 19 19 src = fetchurl { 20 20 url = "https://github.com/logseq/logseq/releases/download/${version}/logseq-linux-x64-${version}.AppImage"; 21 - hash = "sha256-aduFqab5cpoXR3oFOHzsXJwogm1bZ9KgT2Mt6G9kbBA="; 21 + hash = "sha256-vFCNhnhfxlSLeieB1DJgym5nbzPKO1ngArTUXvf+DAU="; 22 22 name = "${pname}-${version}.AppImage"; 23 23 }; 24 24
+3 -3
pkgs/applications/networking/cluster/atmos/default.nix
··· 2 2 3 3 buildGoModule rec { 4 4 pname = "atmos"; 5 - version = "1.53.0"; 5 + version = "1.54.0"; 6 6 7 7 src = fetchFromGitHub { 8 8 owner = "cloudposse"; 9 9 repo = pname; 10 10 rev = "v${version}"; 11 - sha256 = "sha256-2T5LCtycTBnJntcKQoJqNwTczWR8bC1SBAqjMN+3Qd4="; 11 + sha256 = "sha256-WGOuFqkrX3/5RINdsegTSxJ28W4iEMPuLVrCjtmCkTw="; 12 12 }; 13 13 14 - vendorHash = "sha256-piK9IVwGAidDhBNAEnu9hD7Ng67ZKxZMcNqgOXLCkq0="; 14 + vendorHash = "sha256-kR13BVbjgQoEjb2xwH8LkxLeMp30h6mbWum9RbzzSGE="; 15 15 16 16 ldflags = [ "-s" "-w" "-X github.com/cloudposse/atmos/cmd.Version=v${version}" ]; 17 17
+3 -3
pkgs/applications/networking/cluster/terraform/default.nix
··· 167 167 mkTerraform = attrs: pluggable (generic attrs); 168 168 169 169 terraform_1 = mkTerraform { 170 - version = "1.6.6"; 171 - hash = "sha256-fYFmHypzSbSgut9Wij6Sz8xR97DVOwPLQap6pan7IRA="; 172 - vendorHash = "sha256-fQsxTX1v8HsMDIkofeCVfNitJAaTWHwppC7DniXlvT4="; 170 + version = "1.7.0"; 171 + hash = "sha256-oF0osIC/ti9ZkWDTBIQuBHreIBVfeo4f/naGFdaMxJE="; 172 + vendorHash = "sha256-77W0x6DENB+U3yB4LI3PwJU9bTuH7Eqz2a9FNoERuJg="; 173 173 patches = [ ./provider-path-0_15.patch ]; 174 174 passthru = { 175 175 inherit plugins;
+2 -2
pkgs/applications/office/appflowy/default.nix
··· 13 13 14 14 stdenv.mkDerivation rec { 15 15 pname = "appflowy"; 16 - version = "0.4.1"; 16 + version = "0.4.3"; 17 17 18 18 src = fetchzip { 19 19 url = "https://github.com/AppFlowy-IO/appflowy/releases/download/${version}/AppFlowy-${version}-linux-x86_64.tar.gz"; 20 - hash = "sha256-9wv7/3wtR1xiOHRYXP29Qbom1Xl9xZbhCFEPf0LJitg="; 20 + hash = "sha256-JrcqVPlFr8zD9ZSBxk9WqN7KCLKq+yCjMfA4QbIfDZE="; 21 21 stripRoot = false; 22 22 }; 23 23
+6 -2
pkgs/applications/science/misc/snakemake/default.nix
··· 6 6 7 7 python3.pkgs.buildPythonApplication rec { 8 8 pname = "snakemake"; 9 - version = "8.0.1"; 9 + version = "8.2.1"; 10 10 format = "setuptools"; 11 11 12 12 src = fetchFromGitHub { 13 13 owner = "snakemake"; 14 14 repo = pname; 15 15 rev = "refs/tags/v${version}"; 16 - hash = "sha256-F4c/lgp7J6LLye+f3FpzaXz3zM7R+jXxTziPlVbxFxA="; 16 + hash = "sha256-NpsDJuxH+NHvE735OCHaISPSOhYDxWiKqCb4Yk9DHf4="; 17 + # https://github.com/python-versioneer/python-versioneer/issues/217 18 + postFetch = '' 19 + sed -i "$out"/snakemake/_version.py -e 's#git_refnames = ".*"#git_refnames = " (tag: v${version})"#' 20 + ''; 17 21 }; 18 22 19 23 postPatch = ''
+2 -2
pkgs/applications/video/obs-studio/plugins/advanced-scene-switcher/default.nix
··· 23 23 24 24 stdenv.mkDerivation rec { 25 25 pname = "advanced-scene-switcher"; 26 - version = "1.24.0"; 26 + version = "1.24.2"; 27 27 28 28 src = fetchFromGitHub { 29 29 owner = "WarmUpTill"; 30 30 repo = "SceneSwitcher"; 31 31 rev = version; 32 - hash = "sha256-Xnf8Vz6I5EfiiVoG0JRd0f0IJHw1IVkTLL4Th/hWYrc="; 32 + hash = "sha256-J5Qcs2eoKMeO1O/MCsR5wfmfbtndRaZmHrbleEZqqOo="; 33 33 }; 34 34 35 35 nativeBuildInputs = [
+5 -10
pkgs/applications/video/obs-studio/plugins/obs-ndi/default.nix
··· 4 4 pname = "obs-ndi"; 5 5 version = "4.13.0"; 6 6 7 - nativeBuildInputs = [ cmake ]; 7 + nativeBuildInputs = [ cmake qtbase ]; 8 8 buildInputs = [ obs-studio qtbase ndi ]; 9 9 10 10 src = fetchFromGitHub { 11 11 owner = "Palakis"; 12 12 repo = "obs-ndi"; 13 - rev = "dummy-tag-${version}"; 13 + rev = version; 14 14 sha256 = "sha256-ugAMSTXbbIZ61oWvoggVJ5kZEgp/waEcWt89AISrSdE="; 15 15 }; 16 16 ··· 19 19 ]; 20 20 21 21 postPatch = '' 22 - # Add path (variable added in hardcode-ndi-path.patch) 23 - sed -i -e s,@NDI@,${ndi},g src/obs-ndi.cpp 22 + # Add path (variable added in hardcode-ndi-path.patch 23 + sed -i -e s,@NDI@,${ndi},g src/plugin-main.cpp 24 24 25 25 # Replace bundled NDI SDK with the upstream version 26 26 # (This fixes soname issues) ··· 28 28 ln -s ${ndi}/include lib/ndi 29 29 ''; 30 30 31 - postInstall = '' 32 - mkdir $out/lib $out/share 33 - mv $out/obs-plugins/64bit $out/lib/obs-plugins 34 - rm -rf $out/obs-plugins 35 - mv $out/data $out/share/obs 36 - ''; 31 + cmakeFlags = [ "-DENABLE_QT=ON" ]; 37 32 38 33 dontWrapQtApps = true; 39 34
+14 -17
pkgs/applications/video/obs-studio/plugins/obs-ndi/hardcode-ndi-path.patch
··· 1 - diff --git a/src/obs-ndi.cpp b/src/obs-ndi.cpp 2 - index 1a8aeb3..9a36ea9 100644 3 - --- a/src/obs-ndi.cpp 4 - +++ b/src/obs-ndi.cpp 5 - @@ -132,13 +132,7 @@ const NDIlib_v5 *load_ndilib() 6 - const char *redistFolder = std::getenv(NDILIB_REDIST_FOLDER); 7 - if (redistFolder) 8 - libraryLocations.push_back(redistFolder); 1 + diff --git a/src/plugin-main.cpp b/src/plugin-main.cpp 2 + index 0d94add..617af73 100644 3 + --- a/src/plugin-main.cpp 4 + +++ b/src/plugin-main.cpp 5 + @@ -244,10 +244,7 @@ const NDIlib_v4 *load_ndilib() 6 + if (!path.isEmpty()) { 7 + locations << path; 8 + } 9 9 -#if defined(__linux__) || defined(__APPLE__) 10 - - libraryLocations.push_back("/usr/lib"); 11 - - libraryLocations.push_back("/usr/lib64"); 12 - - libraryLocations.push_back("/usr/lib/x86_64-linux-gnu"); 13 - - libraryLocations.push_back("/usr/local/lib"); 14 - - libraryLocations.push_back("/usr/local/lib64"); 10 + - locations << "/usr/lib"; 11 + - locations << "/usr/local/lib"; 15 12 -#endif 16 - + libraryLocations.push_back("@NDI@/lib"); 17 - 18 - for (std::string path : libraryLocations) { 19 - blog(LOG_DEBUG, "[load_ndilib] Trying library path: '%s'", path.c_str()); 13 + + locations << "@NDI@/lib"; 14 + for (QString location : locations) { 15 + path = QDir::cleanPath( 16 + QDir(location).absoluteFilePath(NDILIB_LIBRARY_NAME));
+5 -5
pkgs/by-name/co/codeium/package.nix
··· 13 13 }.${system} or throwSystem; 14 14 15 15 hash = { 16 - x86_64-linux = "sha256-zJsgYjmnGT9Ye5hnhqtv5piGM1/HT+DFhVivKLlvE1Q="; 17 - aarch64-linux = "sha256-RjIiSgSxkejS+Dun1xMCZ6C9SPH9AahudQMICH3thC0="; 18 - x86_64-darwin = "sha256-PrfHusjA6o1L60eMblnydTKAYe8vKvK2W3jQZYp5dPc="; 19 - aarch64-darwin = "sha256-LpyXsdjPpdoIqFzm3sLOlBBQdJgrNl8cPehNAVqFvXg="; 16 + x86_64-linux = "sha256-vr/c7kYXoKlZh7+f1ZPHcmIGw0nB8x1wJt/iR2F9bQI="; 17 + aarch64-linux = "sha256-mKLbxj5LSztjHtLWdZFlW4T6S+kN56SZnJNxKZDQIQ4="; 18 + x86_64-darwin = "sha256-AllKEadf+1s3XGCXD0PRycvDUyYNL6HLaViBwwaYswU="; 19 + aarch64-darwin = "sha256-6Pik3uYLfbeAW4Q4ZxJFt90IH+jhXWKY6kpDA6NAmaA="; 20 20 }.${system} or throwSystem; 21 21 22 22 bin = "$out/bin/codeium_language_server"; ··· 24 24 in 25 25 stdenv.mkDerivation (finalAttrs: { 26 26 pname = "codeium"; 27 - version = "1.6.22"; 27 + version = "1.6.23"; 28 28 src = fetchurl { 29 29 name = "${finalAttrs.pname}-${finalAttrs.version}.gz"; 30 30 url = "https://github.com/Exafunction/codeium/releases/download/language-server-v${finalAttrs.version}/language_server_${plat}.gz";
+3 -3
pkgs/by-name/fl/flarectl/package.nix
··· 5 5 6 6 buildGoModule rec { 7 7 pname = "flarectl"; 8 - version = "0.85.0"; 8 + version = "0.86.0"; 9 9 10 10 src = fetchFromGitHub { 11 11 owner = "cloudflare"; 12 12 repo = "cloudflare-go"; 13 13 rev = "v${version}"; 14 - hash = "sha256-mXbWiHU28MlcYbS+RLHToJZpVMWsQ7qY6dAyY+ulwjw="; 14 + hash = "sha256-BGjay9DTlIU563bCSjprq5YwF47Xqj+ZulCda5t2t5I="; 15 15 }; 16 16 17 - vendorHash = "sha256-v6xhhufqxfFvY3BpcM6Qvpljf/vE8ZwPG47zhx+ilb0="; 17 + vendorHash = "sha256-Bn2SDvFWmmMYDpOe+KBuzyTZLpdDtYDPc8HixgEgX+M="; 18 18 19 19 subPackages = [ "cmd/flarectl" ]; 20 20
+54
pkgs/by-name/gl/glauth/package.nix
··· 1 + { lib 2 + , fetchFromGitHub 3 + , buildGoModule 4 + , oath-toolkit 5 + , openldap 6 + }: 7 + 8 + buildGoModule rec { 9 + pname = "glauth"; 10 + version = "2.3.0"; 11 + 12 + src = fetchFromGitHub { 13 + owner = "glauth"; 14 + repo = "glauth"; 15 + rev = "v${version}"; 16 + hash = "sha256-XYNNR3bVLNtAl+vbGRv0VhbLf+em8Ay983jqcW7KDFU="; 17 + }; 18 + 19 + vendorHash = "sha256-SFmGgxDokIbVl3ANDPMCqrB0ck8Wyva2kSV2mgNRogo="; 20 + 21 + nativeCheckInputs = [ 22 + oath-toolkit 23 + openldap 24 + ]; 25 + 26 + modRoot = "v2"; 27 + 28 + # Disable go workspaces to fix build. 29 + env.GOWORK = "off"; 30 + 31 + # Fix this build error: 32 + # main module (github.com/glauth/glauth/v2) does not contain package github.com/glauth/glauth/v2/vendored/toml 33 + excludedPackages = [ "vendored/toml" ]; 34 + 35 + # Based on ldflags in <glauth>/Makefile. 36 + ldflags = [ 37 + "-s" 38 + "-w" 39 + "-X main.GitClean=1" 40 + "-X main.LastGitTag=v${version}" 41 + "-X main.GitTagIsCommit=1" 42 + ]; 43 + 44 + # Tests fail in the sandbox. 45 + doCheck = false; 46 + 47 + meta = with lib; { 48 + description = "A lightweight LDAP server for development, home use, or CI"; 49 + homepage = "https://github.com/glauth/glauth"; 50 + license = licenses.mit; 51 + maintainers = with maintainers; [ bjornfor ]; 52 + mainProgram = "glauth"; 53 + }; 54 + }
+61
pkgs/by-name/lo/lorem/package.nix
··· 1 + { lib 2 + , cargo 3 + , desktop-file-utils 4 + , fetchFromGitLab 5 + , glib 6 + , gtk4 7 + , libadwaita 8 + , meson 9 + , ninja 10 + , pkg-config 11 + , rustPlatform 12 + , rustc 13 + , stdenv 14 + , wrapGAppsHook4 15 + }: 16 + 17 + stdenv.mkDerivation rec { 18 + pname = "lorem"; 19 + version = "1.3"; 20 + 21 + src = fetchFromGitLab { 22 + domain = "gitlab.gnome.org"; 23 + owner = "World/design"; 24 + repo = pname; 25 + rev = version; 26 + hash = "sha256-+Dp/o1rZSHWihLLLe6CzV6c7uUnSsE8Ct3tbLNqlGF0="; 27 + }; 28 + 29 + cargoDeps = rustPlatform.fetchCargoTarball { 30 + inherit src; 31 + name = "${pname}-${version}"; 32 + hash = "sha256-YYjPhlPp211i+ECPu1xgDumz8nVqWRO8YzcZXy8uunI="; 33 + }; 34 + 35 + nativeBuildInputs = [ 36 + cargo 37 + desktop-file-utils 38 + meson 39 + ninja 40 + pkg-config 41 + rustPlatform.cargoSetupHook 42 + rustc 43 + wrapGAppsHook4 44 + ]; 45 + 46 + buildInputs = [ 47 + glib 48 + gtk4 49 + libadwaita 50 + ]; 51 + 52 + meta = with lib; { 53 + description = "Generate placeholder text"; 54 + homepage = "https://gitlab.gnome.org/World/design/lorem"; 55 + changelog = "https://gitlab.gnome.org/World/design/lorem/-/releases/${version}"; 56 + license = licenses.gpl3Plus; 57 + maintainers = with maintainers; [ michaelgrahamevans ]; 58 + mainProgram = "lorem"; 59 + platforms = platforms.linux; 60 + }; 61 + }
+9
pkgs/by-name/lx/lxd-to-incus/package.nix
··· 1 1 { lib 2 2 , buildGoModule 3 3 , fetchFromGitHub 4 + , fetchpatch 4 5 , nix-update-script 5 6 }: 6 7 ··· 14 15 rev = "refs/tags/v${version}"; 15 16 hash = "sha256-crWepf5j3Gd1lhya2DGIh/to7l+AnjKJPR+qUd9WOzw="; 16 17 }; 18 + 19 + patches = [ 20 + # create migration touch file, remove > 0.4.0 21 + (fetchpatch { 22 + url = "https://github.com/lxc/incus/commit/edc5fd2a9baccfb7b6814a440e2947cbb580afcf.diff"; 23 + hash = "sha256-ffQfMFrKDPuLU4jVbG/VGHSO3DmeHw30ATJ8yxJAoHQ="; 24 + }) 25 + ]; 17 26 18 27 modRoot = "cmd/lxd-to-incus"; 19 28
+22 -30
pkgs/by-name/ni/nickel/Cargo.lock
··· 405 405 checksum = "3362992a0d9f1dd7c3d0e89e0ab2bb540b7a95fea8cd798090e758fda2899b5e" 406 406 dependencies = [ 407 407 "codespan-reporting", 408 + "serde", 408 409 ] 409 410 410 411 [[package]] ··· 424 425 source = "registry+https://github.com/rust-lang/crates.io-index" 425 426 checksum = "3538270d33cc669650c4b093848450d380def10c331d38c768e34cac80576e6e" 426 427 dependencies = [ 428 + "serde", 427 429 "termcolor", 428 430 "unicode-width", 429 431 ] ··· 1642 1644 1643 1645 [[package]] 1644 1646 name = "nickel-lang-cli" 1645 - version = "1.3.0" 1647 + version = "1.4.0" 1646 1648 dependencies = [ 1647 1649 "clap 4.4.7", 1648 1650 "clap_complete", ··· 1660 1662 1661 1663 [[package]] 1662 1664 name = "nickel-lang-core" 1663 - version = "0.3.0" 1665 + version = "0.4.0" 1664 1666 dependencies = [ 1665 1667 "ansi_term", 1666 1668 "assert_matches", ··· 1706 1708 "toml", 1707 1709 "topiary", 1708 1710 "topiary-queries", 1709 - "tree-sitter-nickel 0.1.0", 1711 + "tree-sitter-nickel", 1710 1712 "typed-arena", 1711 1713 "unicode-segmentation", 1712 1714 "void", ··· 1715 1717 1716 1718 [[package]] 1717 1719 name = "nickel-lang-lsp" 1718 - version = "1.3.0" 1720 + version = "1.4.0" 1719 1721 dependencies = [ 1720 1722 "anyhow", 1721 1723 "assert_cmd", ··· 1760 1762 1761 1763 [[package]] 1762 1764 name = "nickel-wasm-repl" 1763 - version = "0.3.0" 1765 + version = "0.4.0" 1764 1766 dependencies = [ 1765 1767 "nickel-lang-core", 1766 1768 ] ··· 2106 2108 2107 2109 [[package]] 2108 2110 name = "pyckel" 2109 - version = "1.3.0" 2111 + version = "1.4.0" 2110 2112 dependencies = [ 2111 2113 "codespan-reporting", 2112 2114 "nickel-lang-core", ··· 2984 2986 2985 2987 [[package]] 2986 2988 name = "topiary" 2987 - version = "0.2.3" 2988 - source = "git+https://github.com/tweag/topiary.git?rev=8299a04bf83c4a2774cbbff7a036c022efa939b3#8299a04bf83c4a2774cbbff7a036c022efa939b3" 2989 + version = "0.3.0" 2990 + source = 
"git+https://github.com/tweag/topiary.git?rev=9ae9ef49c2fa968d15107b817864ff6627e0983e#9ae9ef49c2fa968d15107b817864ff6627e0983e" 2989 2991 dependencies = [ 2990 2992 "clap 4.4.7", 2991 2993 "futures", ··· 3001 3003 "tree-sitter-bash", 3002 3004 "tree-sitter-facade", 3003 3005 "tree-sitter-json", 3004 - "tree-sitter-nickel 0.0.1", 3006 + "tree-sitter-nickel", 3005 3007 "tree-sitter-ocaml", 3006 3008 "tree-sitter-ocamllex", 3007 3009 "tree-sitter-query", ··· 3013 3015 3014 3016 [[package]] 3015 3017 name = "topiary-queries" 3016 - version = "0.2.3" 3017 - source = "git+https://github.com/tweag/topiary.git?rev=8299a04bf83c4a2774cbbff7a036c022efa939b3#8299a04bf83c4a2774cbbff7a036c022efa939b3" 3018 + version = "0.3.0" 3019 + source = "git+https://github.com/tweag/topiary.git?rev=9ae9ef49c2fa968d15107b817864ff6627e0983e#9ae9ef49c2fa968d15107b817864ff6627e0983e" 3018 3020 3019 3021 [[package]] 3020 3022 name = "tree-sitter" ··· 3058 3060 3059 3061 [[package]] 3060 3062 name = "tree-sitter-nickel" 3061 - version = "0.0.1" 3062 - source = "git+https://github.com/nickel-lang/tree-sitter-nickel?rev=b1a4718601ebd29a62bf3a7fd1069a99ccf48093#b1a4718601ebd29a62bf3a7fd1069a99ccf48093" 3063 - dependencies = [ 3064 - "cc", 3065 - "tree-sitter", 3066 - ] 3067 - 3068 - [[package]] 3069 - name = "tree-sitter-nickel" 3070 3063 version = "0.1.0" 3071 - source = "registry+https://github.com/rust-lang/crates.io-index" 3072 - checksum = "8e95267764f0648c768e4da3e4c31b96bc5716446497dfa8b6296924b149f64a" 3064 + source = "git+https://github.com/nickel-lang/tree-sitter-nickel?rev=091b5dcc7d138901bcc162da9409c0bb626c0d27#091b5dcc7d138901bcc162da9409c0bb626c0d27" 3073 3065 dependencies = [ 3074 3066 "cc", 3075 3067 "tree-sitter", ··· 3078 3070 [[package]] 3079 3071 name = "tree-sitter-ocaml" 3080 3072 version = "0.20.4" 3081 - source = "git+https://github.com/tree-sitter/tree-sitter-ocaml.git#694c57718fd85d514f8b81176038e7a4cfabcaaf" 3073 + source = 
"git+https://github.com/tree-sitter/tree-sitter-ocaml.git#4abfdc1c7af2c6c77a370aee974627be1c285b3b" 3082 3074 dependencies = [ 3083 3075 "cc", 3084 3076 "tree-sitter", ··· 3105 3097 [[package]] 3106 3098 name = "tree-sitter-rust" 3107 3099 version = "0.20.4" 3108 - source = "git+https://github.com/tree-sitter/tree-sitter-rust.git#48e053397b587de97790b055a1097b7c8a4ef846" 3100 + source = "git+https://github.com/tree-sitter/tree-sitter-rust.git#79456e6080f50fc1ca7c21845794308fa5d35a51" 3109 3101 dependencies = [ 3110 3102 "cc", 3111 3103 "tree-sitter", ··· 3197 3189 3198 3190 [[package]] 3199 3191 name = "unsafe-libyaml" 3200 - version = "0.2.9" 3192 + version = "0.2.10" 3201 3193 source = "registry+https://github.com/rust-lang/crates.io-index" 3202 - checksum = "f28467d3e1d3c6586d8f25fa243f544f5800fec42d97032474e17222c2b75cfa" 3194 + checksum = "ab4c90930b95a82d00dc9e9ac071b4991924390d46cbd0dfe566148667605e4b" 3203 3195 3204 3196 [[package]] 3205 3197 name = "url" ··· 3566 3558 3567 3559 [[package]] 3568 3560 name = "zerocopy" 3569 - version = "0.7.18" 3561 + version = "0.7.31" 3570 3562 source = "registry+https://github.com/rust-lang/crates.io-index" 3571 - checksum = "ede7d7c7970ca2215b8c1ccf4d4f354c4733201dfaaba72d44ae5b37472e4901" 3563 + checksum = "1c4061bedbb353041c12f413700357bec76df2c7e2ca8e4df8bac24c6bf68e3d" 3572 3564 dependencies = [ 3573 3565 "zerocopy-derive", 3574 3566 ] 3575 3567 3576 3568 [[package]] 3577 3569 name = "zerocopy-derive" 3578 - version = "0.7.18" 3570 + version = "0.7.31" 3579 3571 source = "registry+https://github.com/rust-lang/crates.io-index" 3580 - checksum = "4b27b1bb92570f989aac0ab7e9cbfbacdd65973f7ee920d9f0e71ebac878fd0b" 3572 + checksum = "b3c129550b3e6de3fd0ba67ba5c81818f9805e58b8d7fee80a3a59d2c9fc601a" 3581 3573 dependencies = [ 3582 3574 "proc-macro2 1.0.69", 3583 3575 "quote 1.0.33",
+6 -6
pkgs/by-name/ni/nickel/package.nix
··· 8 8 9 9 rustPlatform.buildRustPackage rec { 10 10 pname = "nickel"; 11 - version = "1.3.0"; 11 + version = "1.4.0"; 12 12 13 13 src = fetchFromGitHub { 14 14 owner = "tweag"; 15 15 repo = "nickel"; 16 16 rev = "refs/tags/${version}"; 17 - hash = "sha256-MBonps3yFEpw9l3EAJ6BXNNjY2fUGzWCP+7h0M8LEAY="; 17 + hash = "sha256-YPS+Szj0T8mbcrYBdAuoQupv1x0EIq4rFS2Wk5oYVsY="; 18 18 }; 19 19 20 20 cargoLock = { 21 21 lockFile = ./Cargo.lock; 22 22 outputHashes = { 23 - "topiary-0.2.3" = "sha256-EgDFjJeGJb36je/be7DXvzvpBYDUaupOiQxtL7bN/+Q="; 23 + "topiary-0.3.0" = "sha256-1leQLRohX0iDiOOO96ETM2L3yOElW8OwR5IcrsoxfOo="; 24 24 "tree-sitter-bash-0.20.4" = "sha256-VP7rJfE/k8KV1XN1w5f0YKjCnDMYU1go/up0zj1mabM="; 25 25 "tree-sitter-facade-0.9.3" = "sha256-M/npshnHJkU70pP3I4WMXp3onlCSWM5mMIqXP45zcUs="; 26 - "tree-sitter-nickel-0.0.1" = "sha256-aYsEx1Y5oDEqSPCUbf1G3J5Y45ULT9OkD+fn6stzrOU="; 26 + "tree-sitter-nickel-0.1.0" = "sha256-HyHdameEgET5UXKMgw7EJvZsJxToc9Qz26XHvc5qmU0="; 27 27 "tree-sitter-query-0.1.0" = "sha256-5N7FT0HTK3xzzhAlk3wBOB9xlEpKSNIfakgFnsxEi18="; 28 28 "tree-sitter-json-0.20.1" = "sha256-Msnct7JzPBIR9+PIBZCJTRdVMUzhaDTKkl3JaDUKAgo="; 29 - "tree-sitter-ocaml-0.20.4" = "sha256-j3Hv2qOMxeBNOW+WIgIYzG3zMIFWPQpoHe94b2rT+A8="; 29 + "tree-sitter-ocaml-0.20.4" = "sha256-ycmjIKfrsVSVHmPP3HCxfk5wcBIF/JFH8OnU8mY1Cc8="; 30 30 "tree-sitter-ocamllex-0.20.2" = "sha256-YhmEE7I7UF83qMuldHqc/fD/no/7YuZd6CaAIaZ1now="; 31 31 "tree-sitter-toml-0.5.1" = "sha256-5nLNBxFeOGE+gzbwpcrTVnuL1jLUA0ZLBVw2QrOLsDQ="; 32 - "tree-sitter-rust-0.20.4" = "sha256-ht0l1a3esvBbVHNbUosItmqxwL7mDp+QyhIU6XTUiEk="; 32 + "tree-sitter-rust-0.20.4" = "sha256-57CuGp7gP+AVYIR3HbMXnmmSAbtlpWrOHRYpMbmWfds="; 33 33 "web-tree-sitter-sys-1.3.0" = "sha256-9rKB0rt0y9TD/HLRoB9LjEP9nO4kSWR9ylbbOXo2+2M="; 34 34 35 35 };
+6
pkgs/by-name/oc/ocenaudio/package.nix
··· 42 42 dpkg -x $src $out 43 43 cp -av $out/opt/ocenaudio/* $out 44 44 rm -rf $out/opt 45 + mv $out/usr/share $out/share 46 + rm -rf $out/usr 47 + substituteInPlace $out/share/applications/ocenaudio.desktop \ 48 + --replace "/opt/ocenaudio/bin/ocenaudio" "ocenaudio" 49 + mkdir -p $out/share/licenses/ocenaudio 50 + mv $out/bin/ocenaudio_license.txt $out/share/licenses/ocenaudio/LICENSE 45 51 46 52 # Create symlink bzip2 library 47 53 ln -s ${bzip2.out}/lib/libbz2.so.1 $out/lib/libbz2.so.1.0
+52
pkgs/by-name/pf/pfft/package.nix
··· 1 + { autoreconfHook 2 + , fetchFromGitHub 3 + , fftwMpi 4 + , lib 5 + , llvmPackages 6 + , mpi 7 + , precision ? "double" 8 + , stdenv 9 + }: 10 + 11 + assert lib.elem precision [ "single" "double" "long-double" ]; 12 + 13 + let 14 + fftw' = fftwMpi.override { inherit precision; }; 15 + in 16 + stdenv.mkDerivation (finalAttrs: { 17 + pname = "pfft-${precision}"; 18 + version = "1.0.8-alpha"; 19 + 20 + src = fetchFromGitHub { 21 + owner = "mpip"; 22 + repo = "pfft"; 23 + rev = "v${finalAttrs.version}"; 24 + hash = "sha256-T5nPlkPKjYYRCuT1tSzXNJTPs/o6zwJMv9lPCWOwabw="; 25 + }; 26 + 27 + outputs = [ "out" "dev" ]; 28 + 29 + nativeBuildInputs = [ autoreconfHook ]; 30 + 31 + preConfigure = '' 32 + export FCFLAGS="-I${lib.getDev fftw'}/include" 33 + ''; 34 + 35 + configureFlags = [ 36 + "--enable-portable-binary" 37 + ] ++ lib.optional (precision != "double") "--enable-${precision}"; 38 + 39 + buildInputs = lib.optional stdenv.cc.isClang llvmPackages.openmp; 40 + 41 + propagatedBuildInputs = [ fftw' mpi ]; 42 + 43 + doCheck = true; 44 + 45 + meta = { 46 + description = "Parallel fast Fourier transforms"; 47 + homepage = "https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software.php.en#pfft"; 48 + license = lib.licenses.gpl3Plus; 49 + maintainers = with lib.maintainers; [ hmenke ]; 50 + platforms = lib.platforms.linux; 51 + }; 52 + })
+53
pkgs/by-name/pn/pnfft/package.nix
··· 1 + { autoreconfHook 2 + , fetchurl 3 + , fftwMpi 4 + , gsl 5 + , lib 6 + , llvmPackages 7 + , pfft 8 + , precision ? "double" 9 + , stdenv 10 + }: 11 + 12 + assert lib.elem precision [ "single" "double" "long-double" ]; 13 + 14 + let 15 + fftw' = fftwMpi.override { inherit precision; }; 16 + pfft' = pfft.override { inherit precision; }; 17 + in 18 + stdenv.mkDerivation (finalAttrs: { 19 + pname = "pnfft-${precision}"; 20 + version = "1.0.7-alpha"; 21 + 22 + src = fetchurl { 23 + url = "https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software/pnfft-${finalAttrs.version}.tar.gz"; 24 + hash = "sha256-/aVY/1fuMRl1Q2O7bmc5M4aA0taGD+fcQgCdhVYr1no="; 25 + }; 26 + 27 + outputs = [ "out" "dev" ]; 28 + 29 + nativeBuildInputs = [ autoreconfHook ]; 30 + 31 + preConfigure = '' 32 + export FCFLAGS="-I${lib.getDev fftw'}/include -I${lib.getDev pfft'}/include" 33 + ''; 34 + 35 + configureFlags = [ 36 + "--enable-threads" 37 + "--enable-portable-binary" 38 + ] ++ lib.optional (precision != "double") "--enable-${precision}"; 39 + 40 + buildInputs = [ gsl ] ++ lib.optional stdenv.cc.isClang llvmPackages.openmp; 41 + 42 + propagatedBuildInputs = [ pfft' ]; 43 + 44 + doCheck = true; 45 + 46 + meta = { 47 + description = "Parallel nonequispaced fast Fourier transforms"; 48 + homepage = "https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software.php.en#pnfft"; 49 + license = lib.licenses.gpl3Plus; 50 + maintainers = with lib.maintainers; [ hmenke ]; 51 + platforms = lib.platforms.linux; 52 + }; 53 + })
+12
pkgs/by-name/po/poptracker/assets-path.diff
··· 1 + diff --git a/src/poptracker.cpp b/src/poptracker.cpp 2 + index dbf477b..6ccfac2 100644 3 + --- a/src/poptracker.cpp 4 + +++ b/src/poptracker.cpp 5 + @@ -217,6 +217,7 @@ PopTracker::PopTracker(int argc, char** argv, bool cli, const json& args) 6 + Pack::addOverrideSearchPath(os_pathcat(appPath, "user-override")); // portable/system overrides 7 + Assets::addSearchPath(os_pathcat(appPath, "assets")); // system assets 8 + } 9 + + Assets::addSearchPath("@assets@"); 10 + 11 + _asio = new asio::io_service(); 12 + HTTP::certfile = asset("cacert.pem"); // https://curl.se/docs/caextract.html
+74
pkgs/by-name/po/poptracker/package.nix
··· 1 + { lib 2 + , stdenv 3 + , fetchFromGitHub 4 + , util-linux 5 + , SDL2 6 + , SDL2_ttf 7 + , SDL2_image 8 + , openssl 9 + , which 10 + , libsForQt5 11 + , makeWrapper 12 + }: 13 + 14 + stdenv.mkDerivation (finalAttrs: { 15 + pname = "poptracker"; 16 + version = "0.25.7"; 17 + 18 + src = fetchFromGitHub { 19 + owner = "black-sliver"; 20 + repo = "PopTracker"; 21 + rev = "v${finalAttrs.version}"; 22 + hash = "sha256-wP2d8cWNg80KUyw1xPQMriNRg3UyXgKaSoJ17U5vqCE="; 23 + fetchSubmodules = true; 24 + }; 25 + 26 + patches = [ ./assets-path.diff ]; 27 + 28 + postPatch = '' 29 + substituteInPlace src/poptracker.cpp --replace "@assets@" "$out/share/$pname/" 30 + ''; 31 + 32 + enableParallelBuilding = true; 33 + 34 + nativeBuildInputs = [ 35 + util-linux 36 + makeWrapper 37 + ]; 38 + 39 + buildInputs = [ 40 + SDL2 41 + SDL2_ttf 42 + SDL2_image 43 + openssl 44 + ]; 45 + 46 + buildFlags = [ 47 + "native" 48 + "CONF=RELEASE" 49 + "VERSION=v${finalAttrs.version}" 50 + ]; 51 + 52 + installPhase = '' 53 + runHook preInstall 54 + install -m555 -Dt $out/bin build/linux-x86_64/poptracker 55 + install -m444 -Dt $out/share/${finalAttrs.pname} assets/* 56 + wrapProgram $out/bin/poptracker --prefix PATH : ${lib.makeBinPath [ which libsForQt5.kdialog ]} 57 + runHook postInstall 58 + ''; 59 + 60 + meta = with lib; { 61 + description = "Scriptable tracker for randomized games"; 62 + longDescription = '' 63 + Universal, scriptable randomizer tracking solution that is open source. Supports auto-tracking. 64 + 65 + PopTracker packs should be placed in `~/PopTracker/packs` or `./packs`. 66 + ''; 67 + homepage = "https://github.com/black-sliver/PopTracker"; 68 + changelog = "https://github.com/black-sliver/PopTracker/releases/tag/v${finalAttrs.version}"; 69 + license = licenses.gpl3Only; 70 + maintainers = with maintainers; [ freyacodes ]; 71 + mainProgram = "poptracker"; 72 + platforms = [ "x86_64-linux" ]; 73 + }; 74 + })
+3 -3
pkgs/by-name/ri/rimgo/package.nix
··· 6 6 }: 7 7 buildGoModule rec { 8 8 pname = "rimgo"; 9 - version = "1.2.1"; 9 + version = "1.2.3"; 10 10 11 11 src = fetchFromGitea { 12 12 domain = "codeberg.org"; 13 13 owner = "rimgo"; 14 14 repo = "rimgo"; 15 15 rev = "v${version}"; 16 - hash = "sha256-C6xixULZCDs+rIP7IWBVQNo34Yk/8j9ell2D0nUoHBg="; 16 + hash = "sha256-nokXM+lnTiaWKwglmFYLBpnGHJn1yFok76tqb0nulVA="; 17 17 }; 18 18 19 - vendorHash = "sha256-u5N7aI9RIQ3EmiyHv0qhMcKkvmpp+5G7xbzdQcbhybs="; 19 + vendorHash = "sha256-wDTSqfp1Bb1Jb9XX3A3/p5VUcjr5utpe6l/3pXfZpsg="; 20 20 21 21 nativeBuildInputs = [ tailwindcss ]; 22 22
+40
pkgs/by-name/sh/sha2wordlist/package.nix
··· 1 + { lib 2 + , stdenv 3 + , fetchFromGitHub 4 + , libbsd 5 + }: 6 + 7 + stdenv.mkDerivation { 8 + pname = "sha2wordlist"; 9 + version = "unstable-2023-02-20"; 10 + 11 + src = fetchFromGitHub { 12 + owner = "kirei"; 13 + repo = "sha2wordlist"; 14 + rev = "2017b7ac786cfb5ad7f35f3f9068333b426d65f7"; 15 + hash = "sha256-A5KIXvwllzUcUm52lhw0QDjhEkCVTcbLQGFZWmHrFpU="; 16 + }; 17 + 18 + postPatch = '' 19 + substituteInPlace Makefile \ 20 + --replace "gcc" "$CC" 21 + ''; 22 + 23 + buildInputs = [ 24 + libbsd 25 + ]; 26 + 27 + installPhase = '' 28 + mkdir -p $out/bin 29 + install -m 755 sha2wordlist $out/bin 30 + ''; 31 + 32 + meta = with lib; { 33 + description = "Display SHA-256 as PGP words"; 34 + homepage = "https://github.com/kirei/sha2wordlist"; 35 + maintainers = with maintainers; [ baloo ]; 36 + license = [ licenses.bsd2 ]; 37 + platforms = platforms.all; 38 + mainProgram = "sha2wordlist"; 39 + }; 40 + }
+3 -3
pkgs/by-name/us/usql/package.nix
··· 11 11 12 12 buildGoModule rec { 13 13 pname = "usql"; 14 - version = "0.17.4"; 14 + version = "0.17.5"; 15 15 16 16 src = fetchFromGitHub { 17 17 owner = "xo"; 18 18 repo = "usql"; 19 19 rev = "v${version}"; 20 - hash = "sha256-mEx0RMfPNRvsgjVcZDTzr74G7l5C8UcTZ15INNX4Kuo="; 20 + hash = "sha256-Lh5CProffPB/GEYvU1h7St8zgmnS1QOjBgvdUXlsGzc="; 21 21 }; 22 22 23 23 buildInputs = [ unixODBC icu ]; 24 24 25 - vendorHash = "sha256-zVSgrlTWDaN5uhA0iTcYMer4anly+m0BRTa6uuiLIjk="; 25 + vendorHash = "sha256-IdqSTwQeMRjB5sE53VvTVAXPyIyN+pMj4XziIT31rV0="; 26 26 proxyVendor = true; 27 27 28 28 # Exclude broken genji, hive & impala drivers (bad group)
+1
pkgs/desktops/lomiri/default.nix
··· 34 34 35 35 #### Services 36 36 biometryd = callPackage ./services/biometryd { }; 37 + content-hub = callPackage ./services/content-hub { }; 37 38 hfd-service = callPackage ./services/hfd-service { }; 38 39 history-service = callPackage ./services/history-service { }; 39 40 lomiri-download-manager = callPackage ./services/lomiri-download-manager { };
+179
pkgs/desktops/lomiri/services/content-hub/default.nix
··· 1 + { stdenv 2 + , lib 3 + , fetchFromGitLab 4 + , fetchpatch 5 + , fetchpatch2 6 + , gitUpdater 7 + , testers 8 + , cmake 9 + , cmake-extras 10 + , dbus-test-runner 11 + , gettext 12 + , glib 13 + , gsettings-qt 14 + , gtest 15 + , libapparmor 16 + , libnotify 17 + , lomiri-api 18 + , lomiri-app-launch 19 + , lomiri-download-manager 20 + , lomiri-ui-toolkit 21 + , pkg-config 22 + , properties-cpp 23 + , qtbase 24 + , qtdeclarative 25 + , qtfeedback 26 + , qtgraphicaleffects 27 + , wrapGAppsHook 28 + , xvfb-run 29 + }: 30 + 31 + stdenv.mkDerivation (finalAttrs: { 32 + pname = "content-hub"; 33 + version = "1.1.0"; 34 + 35 + src = fetchFromGitLab { 36 + owner = "ubports"; 37 + repo = "development/core/content-hub"; 38 + rev = finalAttrs.version; 39 + hash = "sha256-IntEpgPCBmOL6K6TU+UhgGb6OHVA9pYurK5VN3woIIw="; 40 + }; 41 + 42 + outputs = [ 43 + "out" 44 + "dev" 45 + "examples" 46 + ]; 47 + 48 + patches = [ 49 + # Remove when https://gitlab.com/ubports/development/core/content-hub/-/merge_requests/33 merged & in release 50 + (fetchpatch { 51 + name = "0001-content-hub-Migrate-to-GetConnectionCredentials.patch"; 52 + url = "https://gitlab.com/ubports/development/core/content-hub/-/commit/9c0eae42d856b4b6e24fa609ade0e674c7a84cfe.patch"; 53 + hash = "sha256-IWoCQKSCCk26n7133oG0Ht+iEjavn/IiOVUM+tCLX2U="; 54 + }) 55 + 56 + # Remove when https://gitlab.com/ubports/development/core/content-hub/-/merge_requests/34 merged & in release 57 + (fetchpatch { 58 + name = "0002-content-hub-import-Lomiri-Content-CMakeLists-Drop-qt-argument-to-qmlplugindump.patch"; 59 + url = "https://gitlab.com/ubports/development/core/content-hub/-/commit/63a4baf1469de31c4fd50c69ed85d061f5e8e80a.patch"; 60 + hash = "sha256-T+6T9lXne6AhDFv9d7L8JNwdl8f0wjDmvSoNVPkHza4="; 61 + }) 62 + 63 + # Remove when https://gitlab.com/ubports/development/core/content-hub/-/merge_requests/35 merged & in release 64 + # fetchpatch2 due to renames, https://github.com/NixOS/nixpkgs/issues/32084 65 + (fetchpatch2 { 
66 + name = "0003-content-hub-Add-more-better-GNUInstallDirs-variables-usage.patch"; 67 + url = "https://gitlab.com/ubports/development/core/content-hub/-/commit/3c5ca4a8ec125e003aca78c14521b70140856c25.patch"; 68 + hash = "sha256-kYN0eLwMyM/9yK+zboyEsoPKZMZ4SCXodVYsvkQr2F8="; 69 + }) 70 + ]; 71 + 72 + postPatch = '' 73 + substituteInPlace import/*/Content/CMakeLists.txt \ 74 + --replace "\''${CMAKE_INSTALL_LIBDIR}/qt5/qml" "\''${CMAKE_INSTALL_PREFIX}/${qtbase.qtQmlPrefix}" 75 + 76 + # Look for peer files in running system 77 + substituteInPlace src/com/lomiri/content/service/registry-updater.cpp \ 78 + --replace '/usr' '/run/current-system/sw' 79 + 80 + # Don't override default theme search path (which honours XDG_DATA_DIRS) with a FHS assumption 81 + substituteInPlace import/Lomiri/Content/contenthubplugin.cpp \ 82 + --replace 'QIcon::setThemeSearchPaths(QStringList() << ("/usr/share/icons/"));' "" 83 + ''; 84 + 85 + strictDeps = true; 86 + 87 + nativeBuildInputs = [ 88 + cmake 89 + gettext 90 + pkg-config 91 + qtdeclarative # qmlplugindump 92 + wrapGAppsHook 93 + ]; 94 + 95 + buildInputs = [ 96 + cmake-extras 97 + glib 98 + gsettings-qt 99 + libapparmor 100 + libnotify 101 + lomiri-api 102 + lomiri-app-launch 103 + lomiri-download-manager 104 + lomiri-ui-toolkit 105 + properties-cpp 106 + qtbase 107 + qtdeclarative 108 + qtfeedback 109 + qtgraphicaleffects 110 + ]; 111 + 112 + nativeCheckInputs = [ 113 + dbus-test-runner 114 + xvfb-run 115 + ]; 116 + 117 + checkInputs = [ 118 + gtest 119 + ]; 120 + 121 + dontWrapQtApps = true; 122 + 123 + cmakeFlags = [ 124 + (lib.cmakeBool "GSETTINGS_COMPILE" true) 125 + (lib.cmakeBool "GSETTINGS_LOCALINSTALL" true) 126 + (lib.cmakeBool "ENABLE_TESTS" finalAttrs.finalPackage.doCheck) 127 + (lib.cmakeBool "ENABLE_DOC" false) # needs Qt5 qdoc: https://github.com/NixOS/nixpkgs/pull/245379 128 + (lib.cmakeBool "ENABLE_UBUNTU_COMPAT" true) # in case something still depends on it 129 + ]; 130 + 131 + preBuild = let 132 + listToQtVar 
= list: suffix: lib.strings.concatMapStringsSep ":" (drv: "${lib.getBin drv}/${suffix}") list; 133 + in '' 134 + # Executes qmlplugindump 135 + export QT_PLUGIN_PATH=${listToQtVar [ qtbase ] qtbase.qtPluginPrefix} 136 + export QML2_IMPORT_PATH=${listToQtVar [ qtdeclarative lomiri-ui-toolkit qtfeedback qtgraphicaleffects ] qtbase.qtQmlPrefix} 137 + ''; 138 + 139 + doCheck = stdenv.buildPlatform.canExecute stdenv.hostPlatform; 140 + 141 + # Starts & talks to D-Bus services, breaks under parallelism 142 + enableParallelChecking = false; 143 + 144 + preFixup = '' 145 + for exampleExe in content-hub-test-{importer,exporter,sharer}; do 146 + moveToOutput bin/$exampleExe $examples 147 + moveToOutput share/applications/$exampleExe.desktop $examples 148 + done 149 + moveToOutput share/icons $examples 150 + ''; 151 + 152 + postFixup = '' 153 + for exampleBin in $examples/bin/*; do 154 + wrapGApp $exampleBin 155 + done 156 + ''; 157 + 158 + passthru = { 159 + tests.pkg-config = testers.testMetaPkgConfig finalAttrs.finalPackage; 160 + updateScript = gitUpdater { }; 161 + }; 162 + 163 + meta = with lib; { 164 + description = "Content sharing/picking service"; 165 + longDescription = '' 166 + content-hub is a mediation service to let applications share content between them, 167 + even if they are not running at the same time. 168 + ''; 169 + homepage = "https://gitlab.com/ubports/development/core/content-hub"; 170 + license = with licenses; [ gpl3Only lgpl3Only ]; 171 + mainProgram = "content-hub-service"; 172 + maintainers = teams.lomiri.members; 173 + platforms = platforms.linux; 174 + pkgConfigModules = [ 175 + "libcontent-hub" 176 + "libcontent-hub-glib" 177 + ]; 178 + }; 179 + })
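The content-hub derivation's `preBuild` relies on a small helper, `listToQtVar`, to assemble the colon-separated Qt search paths that `qmlplugindump` needs. A standalone sketch of that helper, shown in isolation (the surrounding `lib` binding and the example call are assumptions, not part of the commit):

```nix
# Sketch: for each derivation in the list, take its bin output, append
# the given suffix, and join the results with ":" -- the shape that
# QT_PLUGIN_PATH / QML2_IMPORT_PATH expect.
let
  listToQtVar = list: suffix:
    lib.strings.concatMapStringsSep ":" (drv: "${lib.getBin drv}/${suffix}") list;
in
{
  # e.g. listToQtVar [ qtbase qtdeclarative ] qtbase.qtQmlPrefix
  # yields "/nix/store/...-qtbase/<qml-prefix>:/nix/store/...-qtdeclarative/<qml-prefix>"
}
```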
+2 -2
pkgs/desktops/xfce/panel-plugins/xfce4-whiskermenu-plugin/default.nix
··· 17 17 mkXfceDerivation { 18 18 category = "panel-plugins"; 19 19 pname = "xfce4-whiskermenu-plugin"; 20 - version = "2.8.2"; 20 + version = "2.8.3"; 21 21 rev-prefix = "v"; 22 22 odd-unstable = false; 23 - sha256 = "sha256-v1YvmdL1AUyzJjbU9/yIYAAuQfbVlJCcdagM5yhKMuU="; 23 + sha256 = "sha256-xRLvjRu/I+wsTWXUhrJUcrQz+JkZCYqoJSqYAYOztgg="; 24 24 25 25 nativeBuildInputs = [ 26 26 cmake
+3 -3
pkgs/development/compilers/gleam/default.nix
··· 12 12 13 13 rustPlatform.buildRustPackage rec { 14 14 pname = "gleam"; 15 - version = "0.33.0"; 15 + version = "0.34.0"; 16 16 17 17 src = fetchFromGitHub { 18 18 owner = "gleam-lang"; 19 19 repo = pname; 20 20 rev = "refs/tags/v${version}"; 21 - hash = "sha256-fAI4GKdMg2FlNLqXtqAEpmvi63RApRZdQEWPqEf+Dyw="; 21 + hash = "sha256-cqJNNSN3x2tr6/i7kXAlvIaU9SfyPWBE4c6twc/p1lY="; 22 22 }; 23 23 24 24 nativeBuildInputs = [ git pkg-config ]; ··· 26 26 buildInputs = [ openssl ] ++ 27 27 lib.optionals stdenv.isDarwin [ Security SystemConfiguration ]; 28 28 29 - cargoHash = "sha256-Ogjt6lIOvoTPWQhtNFqMgACNrH/27+8JRDlFb//9oUg="; 29 + cargoHash = "sha256-mCMfVYbpUik8oc7TLLAXPBmBUchy+quAZLmd9pqCZ7Y="; 30 30 31 31 passthru.updateScript = nix-update-script { }; 32 32
+9 -8
pkgs/development/compilers/shaderc/default.nix
··· 8 8 glslang = fetchFromGitHub { 9 9 owner = "KhronosGroup"; 10 10 repo = "glslang"; 11 - rev = "728c689574fba7e53305b475cd57f196c1a21226"; 12 - hash = "sha256-BAgDQosiO3e4yy2DpQ6SjrJNrHTUDSduHFRvzWvd4v0="; 11 + rev = "a91631b260cba3f22858d6c6827511e636c2458a"; 12 + hash = "sha256-7kIIU45pe+IF7lGltpIKSvQBmcXR+TWFvmx7ztMNrpc="; 13 13 }; 14 14 spirv-tools = fetchFromGitHub { 15 15 owner = "KhronosGroup"; 16 16 repo = "SPIRV-Tools"; 17 - rev = "d9446130d5165f7fafcb3599252a22e264c7d4bd"; 18 - hash = "sha256-fuYhzfkWXDm1icLHifc32XZCNQ6Dj5f5WJslT2JoMbc="; 17 + rev = "f0cc85efdbbe3a46eae90e0f915dc1509836d0fc"; 18 + hash = "sha256-RzGvoDt1Qc+f6mZsfs99MxX4YB3yFc5FP92Yx/WGrsI="; 19 19 }; 20 20 spirv-headers = fetchFromGitHub { 21 21 owner = "KhronosGroup"; 22 22 repo = "SPIRV-Headers"; 23 - rev = "c214f6f2d1a7253bb0e9f195c2dc5b0659dc99ef"; 24 - hash = "sha256-/9EDOiqN6ZzDhRKP/Kv8D/BT2Cs7G8wyzEsGATLpmrA="; 23 + rev = "1c6bb2743599e6eb6f37b2969acc0aef812e32e3"; 24 + hash = "sha256-/I9dJlBE0kvFvqooKuqMETtOE72Jmva3zIGnq0o4+aE="; 25 25 }; 26 26 in 27 27 stdenv.mkDerivation rec { 28 28 pname = "shaderc"; 29 - version = "2022.4"; 29 + version = "2023.8"; 30 30 31 31 outputs = [ "out" "lib" "bin" "dev" "static" ]; 32 32 ··· 34 34 owner = "google"; 35 35 repo = "shaderc"; 36 36 rev = "v${version}"; 37 - hash = "sha256-/p2gJ7Lnh8IfvwBwHPDtmfLJ8j+Rbv+Oxu9lxY6fxfk="; 37 + hash = "sha256-c8mJ361DY2VlSFZ4/RCrV+nqB9HblbOdfMkI4cM1QzM="; 38 38 }; 39 39 40 40 patchPhase = '' 41 41 cp -r --no-preserve=mode ${glslang} third_party/glslang 42 42 cp -r --no-preserve=mode ${spirv-tools} third_party/spirv-tools 43 43 ln -s ${spirv-headers} third_party/spirv-tools/external/spirv-headers 44 + patchShebangs --build utils/ 44 45 ''; 45 46 46 47 nativeBuildInputs = [ cmake python3 ]
+3 -1
pkgs/development/coq-modules/smtcoq/cvc4.nix
··· 3 3 , python3 4 4 }: 5 5 6 + let cln' = cln.override { gccStdenv = stdenv; }; in 7 + 6 8 stdenv.mkDerivation rec { 7 9 pname = "cvc4"; 8 10 version = "1.6"; ··· 15 17 # Build fails with GNUmake 4.4 16 18 nativeBuildInputs = [ autoreconfHook gnumake42 pkg-config ]; 17 19 buildInputs = [ gmp swig libantlr3c boost python3 ] 18 - ++ lib.optionals stdenv.isLinux [ cln ]; 20 + ++ lib.optionals stdenv.isLinux [ cln' ]; 19 21 20 22 configureFlags = [ 21 23 "--enable-language-bindings=c"
+2 -2
pkgs/development/interpreters/ruby/default.nix
··· 300 300 }; 301 301 302 302 ruby_3_2 = generic { 303 - version = rubyVersion "3" "2" "2" ""; 304 - hash = "sha256-lsV1WIcaZ0jeW8nydOk/S1qtBs2PN776Do2U57ikI7w="; 303 + version = rubyVersion "3" "2" "3" ""; 304 + hash = "sha256-r38XV9ndtjA0WYgTkhHx/VcP9bqDDe8cx8Rorptlybo="; 305 305 cargoHash = "sha256-6du7RJo0DH+eYMOoh3L31F3aqfR5+iG1iKauSV1uNcQ="; 306 306 }; 307 307
+5 -10
pkgs/development/libraries/SDL2_image/default.nix
··· 1 1 { lib, stdenv, fetchurl 2 2 , pkg-config 3 3 , SDL2, libpng, libjpeg, libtiff, giflib, libwebp, libXpm, zlib, Foundation 4 - , version ? "2.8.2" 5 - , hash ? "sha256-j0hrv7z4Rk3VjJ5dkzlKsCVc5otRxalmqRgkSCCnbdw=" 6 4 }: 7 5 8 - let 6 + stdenv.mkDerivation (finalAttrs: { 9 7 pname = "SDL2_image"; 10 - in 11 - 12 - stdenv.mkDerivation { 13 - inherit pname version; 8 + version = "2.8.2"; 14 9 15 10 src = fetchurl { 16 - url = "https://www.libsdl.org/projects/SDL_image/release/${pname}-${version}.tar.gz"; 17 - inherit hash; 11 + url = "https://www.libsdl.org/projects/SDL_image/release/SDL2_image-${finalAttrs.version}.tar.gz"; 12 + hash = "sha256-j0hrv7z4Rk3VjJ5dkzlKsCVc5otRxalmqRgkSCCnbdw="; 18 13 }; 19 14 20 15 nativeBuildInputs = [ pkg-config ]; ··· 44 39 license = licenses.zlib; 45 40 maintainers = with maintainers; [ cpages ]; 46 41 }; 47 - } 42 + })
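The SDL2_image rewrite above drops the `let`/`inherit` plumbing in favour of `mkDerivation` taking a function of `finalAttrs`. A minimal sketch of that idiom (package name, URL, and hash are hypothetical placeholders):

```nix
# Sketch of the finalAttrs pattern: attributes can refer to the
# derivation's *final* values (i.e. after any overrideAttrs), unlike
# `rec`, where references are fixed at definition time.
stdenv.mkDerivation (finalAttrs: {
  pname = "example";
  version = "1.0";
  src = fetchurl {
    # finalAttrs.version tracks a later version override; a plain
    # `rec`-style ${version} reference would not.
    url = "https://example.org/example-${finalAttrs.version}.tar.gz";
    hash = lib.fakeHash; # placeholder
  };
})
```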
+2 -2
pkgs/development/libraries/faudio/default.nix
··· 4 4 5 5 stdenv.mkDerivation rec { 6 6 pname = "faudio"; 7 - version = "23.12"; 7 + version = "24.01"; 8 8 9 9 src = fetchFromGitHub { 10 10 owner = "FNA-XNA"; 11 11 repo = "FAudio"; 12 12 rev = version; 13 - sha256 = "sha256-bftS5gcIzvJlv9K2hKIIXl5lzP4RVwSK5/kxpQrJe/A="; 13 + sha256 = "sha256-9/hgGrMtEz2CXZUPVMT1aSwDMlb+eQ9soTp1X1uME7I="; 14 14 }; 15 15 16 16 nativeBuildInputs = [cmake];
+5 -4
pkgs/development/libraries/libayatana-common/default.nix
··· 4 4 , gitUpdater 5 5 , testers 6 6 , cmake 7 - , cmake-extras 8 7 , glib 9 8 , gobject-introspection 10 9 , gtest 11 10 , intltool 11 + , lomiri 12 12 , pkg-config 13 13 , systemd 14 14 , vala ··· 28 28 postPatch = '' 29 29 # Queries via pkg_get_variable, can't override prefix 30 30 substituteInPlace data/CMakeLists.txt \ 31 - --replace 'DESTINATION "''${SYSTEMD_USER_UNIT_DIR}"' 'DESTINATION "${placeholder "out"}/lib/systemd/user"' 31 + --replace 'pkg_get_variable(SYSTEMD_USER_UNIT_DIR systemd systemd_user_unit_dir)' 'set(SYSTEMD_USER_UNIT_DIR ''${CMAKE_INSTALL_PREFIX}/lib/systemd/user)' 32 32 ''; 33 33 34 34 strictDeps = true; ··· 42 42 ]; 43 43 44 44 buildInputs = [ 45 - cmake-extras 45 + lomiri.cmake-extras 46 46 glib 47 + lomiri.lomiri-url-dispatcher 47 48 systemd 48 49 ]; 49 50 ··· 53 54 54 55 cmakeFlags = [ 55 56 "-DENABLE_TESTS=${lib.boolToString finalAttrs.finalPackage.doCheck}" 56 - "-DENABLE_LOMIRI_FEATURES=OFF" 57 + "-DENABLE_LOMIRI_FEATURES=ON" 57 58 "-DGSETTINGS_LOCALINSTALL=ON" 58 59 "-DGSETTINGS_COMPILE=ON" 59 60 ];
+11 -5
pkgs/development/libraries/ndi/default.nix
··· 2 2 3 3 let 4 4 versionJSON = lib.importJSON ./version.json; 5 + ndiPlatform = 6 + if stdenv.isAarch64 then "aarch64-rpi4-linux-gnueabi" 7 + else if stdenv.isAarch32 then "arm-rpi2-linux-gnueabihf" 8 + else if stdenv.isx86_64 then "x86_64-linux-gnu" 9 + else if stdenv.isi686 then "i686-linux-gnu" 10 + else throw "unsupported platform for NDI SDK"; 5 11 in 6 12 stdenv.mkDerivation rec { 7 13 pname = "ndi"; ··· 35 41 36 42 installPhase = '' 37 43 mkdir $out 38 - mv bin/x86_64-linux-gnu $out/bin 44 + mv bin/${ndiPlatform} $out/bin 39 45 for i in $out/bin/*; do 46 + if [ -L "$i" ]; then continue; fi 40 47 patchelf --set-interpreter "$(cat $NIX_CC/nix-support/dynamic-linker)" "$i" 41 48 done 42 49 patchelf --set-rpath "${avahi}/lib:${stdenv.cc.libc}/lib" $out/bin/ndi-record 43 - mv lib/x86_64-linux-gnu $out/lib 50 + mv lib/${ndiPlatform} $out/lib 44 51 for i in $out/lib/*; do 45 52 if [ -L "$i" ]; then continue; fi 46 53 patchelf --set-rpath "${avahi}/lib:${stdenv.cc.libc}/lib" "$i" ··· 48 55 mv include examples $out/ 49 56 mkdir -p $out/share/doc/${pname}-${version} 50 57 mv licenses $out/share/doc/${pname}-${version}/licenses 51 - mv logos $out/share/doc/${pname}-${version}/logos 52 58 mv documentation/* $out/share/doc/${pname}-${version}/ 53 59 ''; 54 60 ··· 61 67 passthru.updateScript = ./update.py; 62 68 63 69 meta = with lib; { 64 - homepage = "https://ndi.tv/sdk/"; 70 + homepage = "https://ndi.video/ndi-sdk/"; 65 71 description = "NDI Software Developer Kit"; 66 - platforms = ["x86_64-linux"]; 72 + platforms = ["x86_64-linux" "i686-linux" "aarch64-linux" "armv7l-linux"]; 67 73 hydraPlatforms = []; 68 74 sourceProvenance = with sourceTypes; [ binaryNativeCode ]; 69 75 license = licenses.unfree;
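The new `ndiPlatform` if-chain maps the host architecture onto the SDK's per-platform directory names. Assuming `stdenv.hostPlatform.system` carries the same information as the `isAarch64`/`isAarch32`/… predicates, the dispatch could equivalently be written as an attribute-set lookup; this is only an illustrative sketch, not what the commit does:

```nix
# Equivalent sketch of the platform dispatch as an attrset lookup with
# a fallback via the `or` operator.
ndiPlatform =
  {
    aarch64-linux = "aarch64-rpi4-linux-gnueabi";
    armv7l-linux = "arm-rpi2-linux-gnueabihf";
    x86_64-linux = "x86_64-linux-gnu";
    i686-linux = "i686-linux-gnu";
  }.${stdenv.hostPlatform.system}
    or (throw "unsupported platform for NDI SDK");
```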
+1 -1
pkgs/development/libraries/ndi/version.json
··· 1 - {"hash": "sha256:70e04c2e7a629a9854de2727e0f978175b7a4ec6cf4cd9799a22390862f6fa27", "version": "5.5.2"} 1 + {"hash": "sha256:4ff4b92f2c5f42d234aa7d142e2de7e9b045c72b46ad5149a459d48efd9218de", "version": "5.6.0"}
-56
pkgs/development/python-modules/aioquic-mitmproxy/default.nix
··· 1 - { lib 2 - , buildPythonPackage 3 - , certifi 4 - , cryptography 5 - , fetchFromGitHub 6 - , pylsqpack 7 - , pyopenssl 8 - , pytestCheckHook 9 - , pythonOlder 10 - , service-identity 11 - , setuptools 12 - , wheel 13 - }: 14 - 15 - buildPythonPackage rec { 16 - pname = "aioquic-mitmproxy"; 17 - version = "0.9.21.1"; 18 - pyproject = true; 19 - 20 - disabled = pythonOlder "3.8"; 21 - 22 - src = fetchFromGitHub { 23 - owner = "meitinger"; 24 - repo = "aioquic_mitmproxy"; 25 - rev = "refs/tags/${version}"; 26 - hash = "sha256-eD3eICE9jS1jyqMgWwcv6w3gkR0EyGcKwgSXhasXNeA="; 27 - }; 28 - 29 - nativeBuildInputs = [ 30 - setuptools 31 - wheel 32 - ]; 33 - 34 - propagatedBuildInputs = [ 35 - certifi 36 - cryptography 37 - pylsqpack 38 - pyopenssl 39 - service-identity 40 - ]; 41 - 42 - nativeCheckInputs = [ 43 - pytestCheckHook 44 - ]; 45 - 46 - pythonImportsCheck = [ 47 - "aioquic" 48 - ]; 49 - 50 - meta = with lib; { 51 - description = "QUIC and HTTP/3 implementation in Python"; 52 - homepage = "https://github.com/meitinger/aioquic_mitmproxy"; 53 - license = licenses.bsd3; 54 - maintainers = with maintainers; [ fab ]; 55 - }; 56 - }
+4 -6
pkgs/development/python-modules/aiounifi/default.nix
··· 11 11 , segno 12 12 , setuptools 13 13 , trustme 14 - , wheel 15 14 }: 16 15 17 16 buildPythonPackage rec { 18 17 pname = "aiounifi"; 19 - version = "68"; 20 - format = "pyproject"; 18 + version = "69"; 19 + pyproject = true; 21 20 22 21 disabled = pythonOlder "3.11"; 23 22 24 23 src = fetchFromGitHub { 25 24 owner = "Kane610"; 26 - repo = pname; 25 + repo = "aiounifi"; 27 26 rev = "refs/tags/v${version}"; 28 - hash = "sha256-fMTkk2+4RQzE8V4Nemkh2/0Keum+3eMKO5LlPQB9kOU="; 27 + hash = "sha256-XYwdnG3OprHRZm3zQgoPw4VOzvvVflsQzi7+XQiASAU="; 29 28 }; 30 29 31 30 postPatch = '' ··· 38 37 39 38 nativeBuildInputs = [ 40 39 setuptools 41 - wheel 42 40 ]; 43 41 44 42 propagatedBuildInputs = [
+2 -2
pkgs/development/python-modules/botocore-stubs/default.nix
··· 9 9 10 10 buildPythonPackage rec { 11 11 pname = "botocore-stubs"; 12 - version = "1.34.20"; 12 + version = "1.34.21"; 13 13 format = "pyproject"; 14 14 15 15 disabled = pythonOlder "3.7"; ··· 17 17 src = fetchPypi { 18 18 pname = "botocore_stubs"; 19 19 inherit version; 20 - hash = "sha256-6FwnFoWMvtW5NRM/1oFTe2S7mRrU+0PVUpXt//r0lOk="; 20 + hash = "sha256-xc3pikb8lNUNTs1GXdXGRQEiHJT+KJWmBt5cReyDdkM="; 21 21 }; 22 22 23 23 nativeBuildInputs = [
+5
pkgs/development/python-modules/deal-solver/default.nix
··· 7 7 , astroid 8 8 , pytestCheckHook 9 9 , hypothesis 10 + , pythonRelaxDepsHook 10 11 }: 11 12 12 13 buildPythonPackage rec { ··· 25 26 26 27 nativeBuildInputs = [ 27 28 flit-core 29 + pythonRelaxDepsHook 28 30 ]; 31 + 32 + # z3 does not provide a dist-info, so python-runtime-deps-check will fail 33 + pythonRemoveDeps = [ "z3-solver" ]; 29 34 30 35 postPatch = '' 31 36 substituteInPlace pyproject.toml \
-12
pkgs/development/python-modules/django-mailman3/default.nix
··· 1 1 { lib 2 2 , buildPythonPackage 3 3 , fetchPypi 4 - , fetchpatch 5 4 6 5 # propagates 7 6 , django-gravatar2 ··· 24 23 inherit pname version; 25 24 hash = "sha256-uIjJaZHWL2evj+oISLprvKWT5Sm5f2EKgUD1twL1VbQ="; 26 25 }; 27 - 28 - patches = [ 29 - (fetchpatch { 30 - url = "https://gitlab.com/mailman/django-mailman3/-/commit/840d0d531a0813de9a30e72427e202aea21b40fe.patch"; 31 - hash = "sha256-vltvsIP/SWpQZeXDUB+GWlTu+ghFMUqIT8i6CrYcmGo="; 32 - }) 33 - (fetchpatch { 34 - url = "https://gitlab.com/mailman/django-mailman3/-/commit/25c55e31d28f2fa8eb23f0e83c12f9b0a05bfbf0.patch"; 35 - hash = "sha256-ug5tBmnVfJTn5ufDDVg/cEtsZM59jQYJpQZV51T3qIc="; 36 - }) 37 - ]; 38 26 39 27 postPatch = '' 40 28 substituteInPlace setup.py \
+2
pkgs/development/python-modules/django-q/default.nix
··· 16 16 , pytestCheckHook 17 17 , pythonOlder 18 18 , redis 19 + , setuptools 19 20 }: 20 21 21 22 buildPythonPackage rec { ··· 40 41 41 42 nativeBuildInputs = [ 42 43 poetry-core 44 + setuptools 43 45 ]; 44 46 45 47 propagatedBuildInputs = [
+2 -2
pkgs/development/python-modules/karton-core/default.nix
··· 10 10 11 11 buildPythonPackage rec { 12 12 pname = "karton-core"; 13 - version = "5.3.0"; 13 + version = "5.3.2"; 14 14 format = "setuptools"; 15 15 16 16 disabled = pythonOlder "3.7"; ··· 19 19 owner = "CERT-Polska"; 20 20 repo = "karton"; 21 21 rev = "refs/tags/v${version}"; 22 - hash = "sha256-sf8O4Y/yMoTFCibQRtNDX3pXdQ0Xzor3WqeU4xp3WuU="; 22 + hash = "sha256-/MPD83sBo9n/dI1uXbHbjvz6upJSJrssMGmGwfQ+KE8="; 23 23 }; 24 24 25 25 propagatedBuildInputs = [
+17 -16
pkgs/development/python-modules/mitmproxy/default.nix
··· 3 3 , fetchFromGitHub 4 4 , buildPythonPackage 5 5 , pythonOlder 6 + , pythonRelaxDepsHook 6 7 # Mitmproxy requirements 7 - , aioquic-mitmproxy 8 + , aioquic 8 9 , asgiref 9 10 , blinker 10 11 , brotli ··· 56 57 hash = "sha256-BO7oQ4TVuZ4dCtROq2M24V6HVo0jzyBdQfb67dYA07U="; 57 58 }; 58 59 60 + nativeBuildInputs = [ 61 + pythonRelaxDepsHook 62 + ]; 63 + 64 + pythonRelaxDeps = [ 65 + "aioquic" 66 + ]; 67 + 59 68 propagatedBuildInputs = [ 60 - aioquic-mitmproxy 69 + aioquic 61 70 asgiref 62 71 blinker 63 72 brotli ··· 109 118 "test_get_version" 110 119 # https://github.com/mitmproxy/mitmproxy/commit/36ebf11916704b3cdaf4be840eaafa66a115ac03 111 120 # Tests require terminal 112 - "test_integration" 121 + "test_commands_exist" 113 122 "test_contentview_flowview" 114 123 "test_flowview" 115 - # ValueError: Exceeds the limit (4300) for integer string conversion 116 - "test_roundtrip_big_integer" 124 + "test_integration" 125 + "test_statusbar" 126 + # FileNotFoundError: [Errno 2] No such file or directory 127 + # likely wireguard is also not working in the sandbox 117 128 "test_wireguard" 118 - "test_commands_exist" 119 - "test_statusbar" 120 - # AssertionError: Playbook mismatch! 121 - "test_untrusted_cert" 122 - "test_mitmproxy_ca_is_untrusted" 123 - ]; 124 - 125 - disabledTestPaths = [ 126 - # teardown of half the tests broken 127 - "test/mitmproxy/addons/test_onboarding.py" 128 129 ]; 129 130 130 131 dontUsePytestXdist = true; ··· 136 137 homepage = "https://mitmproxy.org/"; 137 138 changelog = "https://github.com/mitmproxy/mitmproxy/blob/${version}/CHANGELOG.md"; 138 139 license = licenses.mit; 139 - maintainers = with maintainers; [ kamilchm SuperSandro2000 ]; 140 + maintainers = with maintainers; [ SuperSandro2000 ]; 140 141 }; 141 142 }
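The mitmproxy change swaps the forked `aioquic-mitmproxy` for plain `aioquic` and uses `pythonRelaxDepsHook` to loosen the pinned requirement so the build accepts the nixpkgs-provided version. A minimal sketch of that pattern (package name and `src` are hypothetical):

```nix
# Sketch: pythonRelaxDepsHook rewrites the package's dependency
# metadata at build time, so a nixpkgs-provided dependency satisfies
# it even when upstream pins a different version.
buildPythonPackage rec {
  pname = "example";
  version = "1.0";
  src = ./.; # placeholder
  nativeBuildInputs = [ pythonRelaxDepsHook ];
  pythonRelaxDeps = [ "aioquic" ]; # drop the version constraint
  # pythonRemoveDeps would delete a listed requirement entirely,
  # as the deal-solver change elsewhere in this commit does for z3-solver.
}
```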
+8 -3
pkgs/development/python-modules/pytado/default.nix
··· 4 4 , pytestCheckHook 5 5 , requests 6 6 , pythonOlder 7 + , setuptools 7 8 }: 8 9 9 10 buildPythonPackage rec { 10 11 pname = "pytado"; 11 - version = "0.17.3"; 12 - format = "setuptools"; 12 + version = "0.17.4"; 13 + pyproject = true; 13 14 14 15 disabled = pythonOlder "3.7"; 15 16 ··· 17 18 owner = "wmalgadey"; 18 19 repo = "PyTado"; 19 20 rev = "refs/tags/${version}"; 20 - sha256 = "sha256-whpNYiAb2cqKI4m0HJN2lPt51FLuEzrkrRTSWs6uznU="; 21 + hash = "sha256-Wdd9HdsQjaYlL8knhMuO87+dom+aTsmrLRK0UdrpsbQ="; 21 22 }; 23 + 24 + nativeBuildInputs = [ 25 + setuptools 26 + ]; 22 27 23 28 propagatedBuildInputs = [ 24 29 requests
+2 -2
pkgs/development/python-modules/pytensor/default.nix
··· 24 24 25 25 buildPythonPackage rec { 26 26 pname = "pytensor"; 27 - version = "2.18.5"; 27 + version = "2.18.6"; 28 28 pyproject = true; 29 29 30 30 disabled = pythonOlder "3.9"; ··· 33 33 owner = "pymc-devs"; 34 34 repo = "pytensor"; 35 35 rev = "refs/tags/rel-${version}"; 36 - hash = "sha256-0xwzFmYsec7uQaq6a4BAA6MYy2zIVZ0cTwodVJQ6yMs="; 36 + hash = "sha256-SMh4wVZwmc87ztFn2OOI234VP3JzmxVMBkn7lYwVu6M="; 37 37 }; 38 38 39 39 postPatch = ''
+11 -6
pkgs/development/python-modules/sfrbox-api/default.nix
··· 14 14 15 15 buildPythonPackage rec { 16 16 pname = "sfrbox-api"; 17 - version = "0.0.8"; 18 - format = "pyproject"; 17 + version = "0.0.9"; 18 + pyproject = true; 19 19 20 20 disabled = pythonOlder "3.8"; 21 21 22 22 src = fetchFromGitHub { 23 23 owner = "hacf-fr"; 24 - repo = pname; 24 + repo = "sfrbox-api"; 25 25 rev = "refs/tags/v${version}"; 26 - hash = "sha256-yvVoWBupHRbMoXmun/pj0bPpujWKfH1SknEhvgIsPzk="; 26 + hash = "sha256-rMfX9vA8IuWxXvVs4WYNHO6neeoie/3gABwhXyJoAF8="; 27 27 }; 28 28 29 29 postPatch = '' ··· 36 36 ]; 37 37 38 38 propagatedBuildInputs = [ 39 - click 40 39 defusedxml 41 40 httpx 42 41 pydantic 43 42 ]; 44 43 44 + passthru.optional-dependencies = { 45 + cli = [ 46 + click 47 + ]; 48 + }; 49 + 45 50 nativeCheckInputs = [ 46 51 pytest-asyncio 47 52 pytestCheckHook 48 53 respx 49 - ]; 54 + ] ++ lib.flatten (builtins.attrValues passthru.optional-dependencies); 50 55 51 56 pythonImportsCheck = [ 52 57 "sfrbox_api"
-24
pkgs/development/python-modules/torchmetrics/0001-remove-illegal-name-from-extra-dependencies.patch
··· 1 - From 3ae04e8b9be879cf25fb5b51a48c8a1263a4844d Mon Sep 17 00:00:00 2001 2 - From: Gaetan Lepage <gaetan@glepage.com> 3 - Date: Mon, 15 Jan 2024 10:05:40 +0100 4 - Subject: [PATCH] remove-illegal-name-from-extra-dependencies 5 - 6 - --- 7 - setup.py | 1 + 8 - 1 file changed, 1 insertion(+) 9 - 10 - diff --git a/setup.py b/setup.py 11 - index 968c32d6..c98ee9f8 100755 12 - --- a/setup.py 13 - +++ b/setup.py 14 - @@ -190,6 +190,7 @@ def _prepare_extras(skip_pattern: str = "^_", skip_files: Tuple[str] = ("base.tx 15 - # create an 'all' keyword that install all possible dependencies 16 - extras_req["all"] = list(chain([pkgs for k, pkgs in extras_req.items() if k not in ("_test", "_tests")])) 17 - extras_req["dev"] = extras_req["all"] + extras_req["_tests"] 18 - + extras_req.pop("_tests") 19 - return extras_req 20 - 21 - 22 - -- 23 - 2.42.0 24 -
+2 -9
pkgs/development/python-modules/torchmetrics/default.nix
··· 20 20 21 21 let 22 22 pname = "torchmetrics"; 23 - version = "1.3.0"; 23 + version = "1.3.0.post"; 24 24 in 25 25 buildPythonPackage { 26 26 inherit pname version; ··· 32 32 owner = "Lightning-AI"; 33 33 repo = "torchmetrics"; 34 34 rev = "refs/tags/v${version}"; 35 - hash = "sha256-xDUT9GSOn6ZNDFRsFws3NLxBsILKDHPKeEANwM8NXj8="; 35 + hash = "sha256-InwXOeQ/u7sdq/+gjm0CSCiuB/9YXP+rPVbvOSH16Dk="; 36 36 }; 37 - 38 - patches = [ 39 - # The extra dependencies dictionary contains an illegally named entry '_tests'. 40 - # The build fails because of this. 41 - # Issue has been opened upstream: https://github.com/Lightning-AI/torchmetrics/issues/2305 42 - ./0001-remove-illegal-name-from-extra-dependencies.patch 43 - ]; 44 37 45 38 propagatedBuildInputs = [ 46 39 numpy
+2 -2
pkgs/development/python-modules/vallox-websocket-api/default.nix
··· 13 13 14 14 buildPythonPackage rec { 15 15 pname = "vallox-websocket-api"; 16 - version = "4.0.2"; 16 + version = "4.0.3"; 17 17 format = "pyproject"; 18 18 19 19 disabled = pythonOlder "3.8"; ··· 22 22 owner = "yozik04"; 23 23 repo = "vallox_websocket_api"; 24 24 rev = "refs/tags/${version}"; 25 - hash = "sha256-a9cYYRAKX9sY9fQhefLWgyvk0vQl7Ao3zvw0SAtFW/Q="; 25 + hash = "sha256-L6uLA8iVYzh3wFVSwxzleHhu22sQeomq9N9A1oAxpf4="; 26 26 }; 27 27 28 28 nativeBuildInputs = [
+2 -2
pkgs/development/tools/analysis/checkov/default.nix
··· 5 5 6 6 python3.pkgs.buildPythonApplication rec { 7 7 pname = "checkov"; 8 - version = "3.1.63"; 8 + version = "3.1.66"; 9 9 pyproject = true; 10 10 11 11 src = fetchFromGitHub { 12 12 owner = "bridgecrewio"; 13 13 repo = "checkov"; 14 14 rev = "refs/tags/${version}"; 15 - hash = "sha256-MQAREb3ivMTQGE/ktHDxz6r2t7LnsVoIEoZtv7rfC2U="; 15 + hash = "sha256-hvl29/K4qHvDiXM0Ufmi3ExMq+2JXQbSzaFYCCP0OhU="; 16 16 }; 17 17 18 18 patches = [
+3 -3
pkgs/development/tools/build-managers/corrosion/default.nix
··· 10 10 11 11 stdenv.mkDerivation rec { 12 12 pname = "corrosion"; 13 - version = "0.4.5"; 13 + version = "0.4.6"; 14 14 15 15 src = fetchFromGitHub { 16 16 owner = "corrosion-rs"; 17 17 repo = "corrosion"; 18 18 rev = "v${version}"; 19 - hash = "sha256-eE3RNLK5xKOjXeA+vDQmM1hvw92TbmPEDLdeqimgwcA="; 19 + hash = "sha256-WPMxewswSRc1ULBgGTrdZmWeFDWVzHk2jzqGChkRYKE="; 20 20 }; 21 21 22 22 cargoRoot = "generator"; ··· 25 25 inherit src; 26 26 sourceRoot = "${src.name}/${cargoRoot}"; 27 27 name = "${pname}-${version}"; 28 - hash = "sha256-j9tsRho/gWCGwXUYZSbs3rudT6nYHh0FSfBCAemZHmw="; 28 + hash = "sha256-R09sgCjwqc22zXg1T7iMx9qmyMz9xlnEuOelPB4O7jw="; 29 29 }; 30 30 31 31 buildInputs = lib.optional stdenv.isDarwin libiconv;
+8 -3
pkgs/development/tools/djlint/default.nix
··· 11 11 src = fetchFromGitHub { 12 12 owner = "Riverside-Healthcare"; 13 13 repo = "djlint"; 14 - rev = "v${version}"; 14 + rev = "refs/tags/v${version}"; 15 15 hash = "sha256-p9RIzX9zoZxBrhiNaIeCX9OgfQm/lXNwYsh6IcsnIVk="; 16 16 }; 17 17 18 - nativeBuildInputs = [ 19 - python3.pkgs.poetry-core 18 + nativeBuildInputs = with python3.pkgs; [ 19 + poetry-core 20 + pythonRelaxDepsHook 21 + ]; 22 + 23 + pythonRelaxDeps = [ 24 + "pathspec" 20 25 ]; 21 26 22 27 propagatedBuildInputs = with python3.pkgs; [
+2 -2
pkgs/development/tools/symfony-cli/default.nix
··· 10 10 11 11 buildGoModule rec { 12 12 pname = "symfony-cli"; 13 - version = "5.8.1"; 13 + version = "5.8.2"; 14 14 vendorHash = "sha256-bscRqFYV2qzTmu04l00/iMsFQR5ITPBFVr9BQwVGFU8="; 15 15 16 16 src = fetchFromGitHub { 17 17 owner = "symfony-cli"; 18 18 repo = "symfony-cli"; 19 19 rev = "v${version}"; 20 - hash = "sha256-GJPUYza1LhWZP9U3JKoe3i0npLgypo3DkKex9DFo1U4="; 20 + hash = "sha256-P5VitZL6KYplMpWdwTkzJEqf5UoSB5HaH/0kL2CbUEA="; 21 21 }; 22 22 23 23 ldflags = [
+18 -17
pkgs/os-specific/linux/kmscube/default.nix
··· 1 - { lib, stdenv, fetchgit, fetchpatch, autoreconfHook, libdrm, libX11, libGL, mesa, pkg-config }: 1 + { lib, stdenv, fetchFromGitLab, meson, ninja, libdrm, libX11, libGL, mesa, pkg-config, gst_all_1 }: 2 2 3 3 stdenv.mkDerivation { 4 4 pname = "kmscube"; 5 - version = "unstable-2018-06-17"; 5 + version = "unstable-2023-09-25"; 6 6 7 - src = fetchgit { 8 - url = "git://anongit.freedesktop.org/mesa/kmscube"; 9 - rev = "9dcce71e603616ee7a54707e932f962cdf8fb20a"; 10 - sha256 = "1q5b5yvyfj3127385mp1bfmcbnpnbdswdk8gspp7g4541xk4k933"; 7 + src = fetchFromGitLab { 8 + domain = "gitlab.freedesktop.org"; 9 + owner = "mesa"; 10 + repo = "kmscube"; 11 + rev = "96d63eb59e34c647cda1cbb489265f8c536ae055"; 12 + hash = "sha256-kpnn4JBNvwatrcCF/RGk/fQ7qiKD26iLBr9ovDmAKBo="; 11 13 }; 12 14 13 - patches = [ 14 - # Pull upstream patch for -fno-common toolchains. 15 - (fetchpatch { 16 - name = "fno-common.patch"; 17 - url = "https://gitlab.freedesktop.org/mesa/kmscube/-/commit/908ef39864442c0807954af5d3f88a3da1a6f8a5.patch"; 18 - sha256 = "1gxn3b50mvjlc25234839v5z29r8fd9di4176a3yx4gbsz8cc5vi"; 19 - }) 20 - ]; 21 - 22 - nativeBuildInputs = [ autoreconfHook pkg-config ]; 23 - buildInputs = [ libdrm libX11 libGL mesa ]; 15 + nativeBuildInputs = [ meson pkg-config ninja ]; 16 + buildInputs = [ 17 + libdrm 18 + libX11 19 + libGL 20 + mesa 21 + ] ++ (with gst_all_1; [ 22 + gstreamer 23 + gst-plugins-base 24 + ]); 24 25 25 26 meta = with lib; { 26 27 description = "Example OpenGL app using KMS/GBM";
+5
pkgs/servers/nosql/surrealdb/default.nix
··· 43 43 buildInputs = [ openssl ] 44 44 ++ lib.optionals stdenv.isDarwin [ SystemConfiguration ]; 45 45 46 + checkFlags = [ 47 + # flaky 48 + "--skip=ws_integration::none::merge" 49 + ]; 50 + 46 51 passthru.tests.version = testers.testVersion { 47 52 package = surrealdb; 48 53 command = "surreal version";
+10 -4
pkgs/tools/admin/azure-cli/python-packages.nix
··· 99 99 azure-mgmt-advisor = overrideAzureMgmtPackage super.azure-mgmt-advisor "9.0.0" "zip" "sha256-/ECLNzFf6EeBtRkST4yxuKwQsvQkHkOdDT4l/WyhjXs="; 100 100 azure-mgmt-apimanagement = overrideAzureMgmtPackage super.azure-mgmt-apimanagement "4.0.0" "zip" "sha256-AiTjLJ28g80xnrRFLfPUevJgeaxLpuGmvkd3+FskNiw="; 101 101 azure-mgmt-authorization = overrideAzureMgmtPackage super.azure-mgmt-authorization "4.0.0" "zip" "sha256-abhavAmuZPxyl1vUNDEXDYx+tdFmdUuYqsXzhF3lfcQ="; 102 - azure-mgmt-batch = overrideAzureMgmtPackage super.azure-mgmt-batch "17.0.0" "zip" "sha256-hkM4WVLuwxj4qgXsY8Ya7zu7/v37gKdP0Xbf2EqrsWo="; 103 102 azure-mgmt-billing = overrideAzureMgmtPackage super.azure-mgmt-billing "6.0.0" "zip" "sha256-1PXFpBiKRW/h6zK2xF9VyiBpx0vkHrdpIYQLOfL1wH8="; 104 103 azure-mgmt-botservice = overrideAzureMgmtPackage super.azure-mgmt-botservice "2.0.0b3" "zip" "sha256-XZGQOeMw8usyQ1tl8j57fZ3uqLshomHY9jO/rbpQOvM="; 105 104 azure-mgmt-cognitiveservices = overrideAzureMgmtPackage super.azure-mgmt-cognitiveservices "13.5.0" "zip" "sha256-RK8LGbH4J+nN6gnGBUweZgkqUcMrwe9aVtvZtAvFeBU="; ··· 138 137 139 138 azure-mgmt-appcontainers = overrideAzureMgmtPackage super.azure-mgmt-appcontainers "2.0.0" "zip" 140 139 "sha256-ccdIdvdgTYPWEZCWqkLc8lEuMuAEERvl5B1huJyBkvU="; 140 + 141 + azure-mgmt-batch = (overrideAzureMgmtPackage super.azure-mgmt-batch "17.0.0" "zip" 142 + "sha256-hkM4WVLuwxj4qgXsY8Ya7zu7/v37gKdP0Xbf2EqrsWo=").overridePythonAttrs (attrs: { 143 + propagatedBuildInputs = attrs.propagatedBuildInputs or [ ] ++ [ self.msrest ]; 144 + }); 141 145 142 146 azure-mgmt-batchai = overrideAzureMgmtPackage super.azure-mgmt-batchai "7.0.0b1" "zip" 143 147 "sha256-mT6vvjWbq0RWQidugR229E8JeVEiobPD3XA/nDM3I6Y="; ··· 181 185 "sha256-WvyNgfiliEt6qawqy8Le8eifhxusMkoZbf6YcyY1SBA="; 182 186 183 187 azure-mgmt-netapp = overrideAzureMgmtPackage super.azure-mgmt-netapp "10.1.0" "zip" 184 - "sha256-eJiWTOCk2C79Jotku9bKlu3vU6H8004hWrX+h76MjQM="; 188 + "sha256-eJiWTOCk2C79Jotku9bKlu3vU6H8004hWrX+h76MjQM="; 185 189 186 190 azure-mgmt-signalr = overrideAzureMgmtPackage super.azure-mgmt-signalr "2.0.0b2" "tar.gz" 187 191 "sha256-05PUV8ouAKq/xhGxVEWIzDop0a7WDTV5mGVSC4sv9P4="; ··· 204 208 azure-mgmt-applicationinsights = overrideAzureMgmtPackage super.azure-mgmt-applicationinsights "1.0.0" "zip" 205 209 "sha256-woeix9703hn5LAwxugKGf6xvW433G129qxkoi7RV/Fs="; 206 210 207 - azure-mgmt-servicefabric = overrideAzureMgmtPackage super.azure-mgmt-servicefabric "1.0.0" "zip" 208 - "sha256-3jXhF5EoMsGp6TEJqNJMq5T1VwOpCHsuscWwZVs7GRM="; 211 + azure-mgmt-servicefabric = (overrideAzureMgmtPackage super.azure-mgmt-servicefabric "1.0.0" "zip" 212 + "sha256-3jXhF5EoMsGp6TEJqNJMq5T1VwOpCHsuscWwZVs7GRM=").overridePythonAttrs (attrs: { 213 + propagatedBuildInputs = attrs.propagatedBuildInputs or [ ] ++ [ self.msrest ]; 214 + }); 209 215 210 216 azure-mgmt-servicelinker = overrideAzureMgmtPackage super.azure-mgmt-servicelinker "1.2.0b1" "zip" 211 217 "sha256-RK1Q51Q0wAG55oKrFmv65/2AUKl+gRdp27t/EcuMONk=";
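The azure-mgmt-batch and azure-mgmt-servicefabric hunks above share one pattern: keep the existing version/hash override but wrap it in `overridePythonAttrs` to append a dependency that upstream stopped declaring. A minimal standalone sketch of that pattern (`somePackage` and `extraDep` are illustrative placeholders, not real nixpkgs attributes):

```nix
{
  # Hedged sketch: appending a missing runtime dependency to an existing
  # Python package derivation via overridePythonAttrs.
  somePackageFixed = somePackage.overridePythonAttrs (attrs: {
    # `or [ ]` guards against the original derivation not setting the list;
    # `or` binds tighter than `++`, so this parses as (attrs.x or [ ]) ++ [ ... ]
    propagatedBuildInputs = attrs.propagatedBuildInputs or [ ] ++ [ extraDep ];
  });
}
```

Unlike `.override`, which re-invokes the package function with different arguments, `overridePythonAttrs` edits the final derivation attributes, so it composes cleanly with the `overrideAzureMgmtPackage` helper already applied above.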
+3 -3
pkgs/tools/admin/copilot-cli/default.nix
··· 2 2 3 3 buildGoModule rec { 4 4 pname = "copilot-cli"; 5 - version = "1.32.1"; 5 + version = "1.33.0"; 6 6 7 7 src = fetchFromGitHub { 8 8 owner = "aws"; 9 9 repo = pname; 10 10 rev = "v${version}"; 11 - hash = "sha256-OdzycH+52F6lfCErKlsVFiPE2gxU22ySV5uPA6zBXUg="; 11 + hash = "sha256-4LDeilWi3FzvrvHjEyQKQi1GxouSlzDY96yBuMfpsXM="; 12 12 }; 13 13 14 - vendorHash = "sha256-5Nlo5Ol4YdO3XI5RhpFfBgprVUV5DUkySvCXeFZqulk="; 14 + vendorHash = "sha256-EqgOyjb2raE5hW3h+czbsi/F9SVNDwPWM1L6GC7v6IY="; 15 15 16 16 nativeBuildInputs = [ installShellFiles ]; 17 17
+1
pkgs/tools/admin/lxd/default.nix
··· 76 76 ''; 77 77 78 78 passthru.tests.lxd = nixosTests.lxd; 79 + passthru.tests.lxd-to-incus = nixosTests.incus.lxd-to-incus; 79 80 passthru.ui = callPackage ./ui.nix { }; 80 81 passthru.updateScript = gitUpdater { 81 82 url = "https://github.com/canonical/lxd.git";
+3 -3
pkgs/tools/admin/qovery-cli/default.nix
··· 8 8 9 9 buildGoModule rec { 10 10 pname = "qovery-cli"; 11 - version = "0.80.0"; 11 + version = "0.81.0"; 12 12 13 13 src = fetchFromGitHub { 14 14 owner = "Qovery"; 15 15 repo = "qovery-cli"; 16 16 rev = "refs/tags/v${version}"; 17 - hash = "sha256-HEOv58cUF/U/fa52cxre4HXXXNONSfHqbInI5nYvk0Q="; 17 + hash = "sha256-Me2UIyBJ/TFP6M7zqQvJ/NDYoiOWop8Lkh8e1KbD9eU="; 18 18 }; 19 19 20 - vendorHash = "sha256-Vvc2YoZnoCzIU/jE6XSg/eVkWTwl6i04Fd5RHTaS1WM="; 20 + vendorHash = "sha256-IDKJaWnQsOtghpCh7UyO6RzWgSZS0S0jdF5hVV7xVbs="; 21 21 22 22 nativeBuildInputs = [ 23 23 installShellFiles
+10
pkgs/tools/misc/addlicense/default.nix
··· 1 1 { lib 2 2 , buildGoModule 3 3 , fetchFromGitHub 4 + , fetchpatch 4 5 }: 5 6 6 7 buildGoModule rec { ··· 13 14 rev = "v${version}"; 14 15 sha256 = "sha256-YMMHj6wctKtJi/rrcMIrLmNw/uvO6wCwokgYRQxcsFw="; 15 16 }; 17 + 18 + patches = [ 19 + # Add support for Nix files. Upstream is slow with responding to PRs, 20 + # patch backported from PR https://github.com/google/addlicense/pull/153. 21 + (fetchpatch { 22 + url = "https://github.com/google/addlicense/commit/e0fb3f44cc7670dcc5cbcec2211c9ad238c5f9f1.patch"; 23 + hash = "sha256-XCAvL+HEa1hGc0GAnl+oYHKzBJ3I5ArS86vgABrP/Js="; 24 + }) 25 + ]; 16 26 17 27 vendorHash = "sha256-2mncc21ecpv17Xp8PA9GIodoaCxNBacbbya/shU8T9Y="; 18 28
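The addlicense change carries a not-yet-merged upstream commit via `fetchpatch`. The general shape, with placeholder URL and hash (the real values must come from the actual upstream commit):

```nix
{
  patches = [
    (fetchpatch {
      # A single upstream commit rendered as a patch; the hash pins its content.
      url = "https://github.com/OWNER/REPO/commit/COMMITSHA.patch";
      hash = "sha256-...";  # placeholder; Nix reports the expected hash on mismatch
    })
  ];
}
```

`fetchpatch` normalizes the downloaded patch (stripping volatile metadata) before hashing, which is why it is preferred over plain `fetchurl` for GitHub/GitLab commit patches.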
+6 -1
pkgs/tools/misc/dooit/default.nix
··· 8 8 python3.pkgs.buildPythonApplication rec { 9 9 pname = "dooit"; 10 10 version = "2.1.1"; 11 - format = "pyproject"; 11 + pyproject = true; 12 12 13 13 src = fetchFromGitHub { 14 14 owner = "kraanzu"; ··· 19 19 20 20 nativeBuildInputs = with python3.pkgs; [ 21 21 poetry-core 22 + pythonRelaxDepsHook 23 + ]; 24 + 25 + pythonRelaxDeps = [ 26 + "tzlocal" 22 27 ]; 23 28 24 29 propagatedBuildInputs = with python3.pkgs; [
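The dooit hunk uses `pythonRelaxDepsHook` to drop an over-strict upstream version pin on `tzlocal` rather than patching `pyproject.toml` by hand. A reduced sketch of the mechanism inside a `buildPythonApplication`/`buildPythonPackage` body:

```nix
{
  # Hedged sketch: the hook rewrites the built wheel's dependency metadata.
  nativeBuildInputs = [ pythonRelaxDepsHook ];

  # Strip the version bound entirely for these dependencies:
  pythonRelaxDeps = [ "tzlocal" ];

  # The companion option removes a dependency altogether:
  # pythonRemoveDeps = [ "unneeded-dep" ];
}
```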
+15 -7
pkgs/tools/misc/xstow/default.nix
··· 1 - { stdenv, lib, fetchurl, ncurses, autoreconfHook }: 1 + { stdenv 2 + , lib 3 + , fetchFromGitHub 4 + , ncurses 5 + , autoreconfHook 6 + }: 7 + 2 8 stdenv.mkDerivation rec { 3 9 pname = "xstow"; 4 - version = "1.1.0"; 10 + version = "1.1.1"; 5 11 6 - src = fetchurl { 7 - url = "http://downloads.sourceforge.net/sourceforge/${pname}/${pname}-${version}.tar.bz2"; 8 - sha256 = "sha256-wXQ5XSmogAt1torfarrqIU4nBYj69MGM/HBYqeIE+dw="; 12 + src = fetchFromGitHub { 13 + owner = "majorkingleo"; 14 + repo = "xstow"; 15 + rev = version; 16 + fetchSubmodules = true; 17 + hash = "sha256-c89+thw5N3Cgl1Ww+W7c3YsyhNJMLlreedvdWJFY3WY="; 9 18 }; 10 19 11 20 nativeBuildInputs = [ autoreconfHook ]; ··· 23 32 ]; 24 33 25 34 meta = with lib; { 26 - broken = stdenv.isDarwin; 27 35 description = "A replacement of GNU Stow written in C++"; 28 - homepage = "https://xstow.sourceforge.net"; 36 + homepage = "https://github.com/majorkingleo/xstow"; 29 37 license = licenses.gpl2Only; 30 38 maintainers = with maintainers; [ nzbr ]; 31 39 platforms = platforms.unix;
+4 -8
pkgs/tools/networking/dae/default.nix
··· 2 2 , clang 3 3 , fetchFromGitHub 4 4 , buildGoModule 5 - , installShellFiles 6 5 }: 7 6 buildGoModule rec { 8 7 pname = "dae"; 9 - version = "0.5.0"; 8 + version = "0.4.0"; 10 9 11 10 src = fetchFromGitHub { 12 11 owner = "daeuniverse"; 13 12 repo = "dae"; 14 13 rev = "v${version}"; 15 - hash = "sha256-DxGKfxu13F7+5zV/31GP9gkbGHrz5RdRe84J3DQ0iUs="; 14 + hash = "sha256-hvAuWCacaWxXwxx5ktj57hnWt8fcnwD6rUuRj1+ZtFA="; 16 15 fetchSubmodules = true; 17 16 }; 18 17 19 - vendorHash = "sha256-UQRM3/JSsPDAGqYZ43bVYVvSLvqqZ/BJE6hwx5wzfcQ="; 18 + vendorHash = "sha256-qK+x6ciAebwIWHRjRpNXCAqsfnmEx37evS4+7kwcFIs="; 20 19 21 20 proxyVendor = true; 22 21 23 - nativeBuildInputs = [ clang installShellFiles ]; 24 - 25 - CGO_ENABLED = 0; 22 + nativeBuildInputs = [ clang ]; 26 23 27 24 ldflags = [ 28 25 "-s" ··· 44 41 install -Dm444 install/dae.service $out/lib/systemd/system/dae.service 45 42 substituteInPlace $out/lib/systemd/system/dae.service \ 46 43 --replace /usr/bin/dae $out/bin/dae 47 - installShellCompletion install/shell-completion/dae.{bash,zsh,fish} 48 44 ''; 49 45 50 46 meta = with lib; {
+8 -4
pkgs/tools/networking/dnstwist/default.nix
··· 5 5 6 6 python3.pkgs.buildPythonApplication rec { 7 7 pname = "dnstwist"; 8 - version = "20230918"; 9 - format = "setuptools"; 8 + version = "20240116"; 9 + pyproject = true; 10 10 11 11 src = fetchFromGitHub { 12 12 owner = "elceef"; 13 - repo = pname; 13 + repo = "dnstwist"; 14 14 rev = "refs/tags/${version}"; 15 - hash = "sha256-LGeDb0++9Zsal9HOXjfjF18RFQS+6i578EfD3YTtlS4="; 15 + hash = "sha256-areFRDi728SedArhUy/rbPzhoFabNoT/WdyyN+6OQK0="; 16 16 }; 17 + 18 + nativeBuildInputs = with python3.pkgs; [ 19 + setuptools 20 + ]; 17 21 18 22 propagatedBuildInputs = with python3.pkgs; [ 19 23 dnspython
+3 -3
pkgs/tools/networking/ockam/default.nix
··· 12 12 13 13 let 14 14 pname = "ockam"; 15 - version = "0.115.0"; 15 + version = "0.116.0"; 16 16 in 17 17 rustPlatform.buildRustPackage { 18 18 inherit pname version; ··· 21 21 owner = "build-trust"; 22 22 repo = pname; 23 23 rev = "ockam_v${version}"; 24 - sha256 = "sha256-DPRMPGxOuF4FwDXyVNxv9j2qy3K1p/9AVmrp0pPUQXM="; 24 + sha256 = "sha256-dcSH/mO3cUamjOCuvEB/C24n7K5T1KnUMvTn8fVu+YM="; 25 25 }; 26 26 27 - cargoHash = "sha256-SeBv2yO0E60C4xMGf/7LOOyTOXf8vZCxIBC1dU2CAX0="; 27 + cargoHash = "sha256-9UwPPOKg+Im+vfQFiYKS68tONYkKz1TqX7ukbtmLcRk="; 28 28 nativeBuildInputs = [ git pkg-config ]; 29 29 buildInputs = [ openssl dbus ] 30 30 ++ lib.optionals stdenv.isDarwin [ Security ];
+3 -3
pkgs/tools/networking/sniffglue/default.nix
··· 2 2 3 3 rustPlatform.buildRustPackage rec { 4 4 pname = "sniffglue"; 5 - version = "0.15.0"; 5 + version = "0.16.0"; 6 6 7 7 src = fetchFromGitHub { 8 8 owner = "kpcyrd"; 9 9 repo = pname; 10 10 rev = "v${version}"; 11 - sha256 = "sha256-8SkwdPaKHf0ZE/MeM4yOe2CpQvZzIHf5d06iM7KPAT8="; 11 + sha256 = "sha256-MOw0WBdpo6dYXsjbUrqoIJl/sjQ4wSAcm4dPxDgTYgY="; 12 12 }; 13 13 14 - cargoSha256 = "sha256-UGvFLW48sakNuV3eXBpCxaHOrveQPXkynOayMK6qs4g="; 14 + cargoHash = "sha256-vnfviiXJ4L/j5M3N+LegOIvLuD6vYJB1QeBgZJVfDnI="; 15 15 16 16 nativeBuildInputs = [ pkg-config ]; 17 17
-49
pkgs/tools/text/transifex-client/default.nix
··· 1 - { lib 2 - , buildPythonApplication 3 - , fetchPypi 4 - , python-slugify 5 - , requests 6 - , urllib3 7 - , six 8 - , setuptools 9 - , gitpython 10 - , pythonRelaxDepsHook 11 - }: 12 - 13 - buildPythonApplication rec { 14 - pname = "transifex-client"; 15 - version = "0.14.4"; 16 - 17 - src = fetchPypi { 18 - inherit pname version; 19 - sha256 = "11dc95cefe90ebf0cef3749c8c7d85b9d389c05bd0e3389bf117685df562bd5c"; 20 - }; 21 - 22 - # https://github.com/transifex/transifex-client/issues/323 23 - nativeBuildInputs = [ 24 - pythonRelaxDepsHook 25 - ]; 26 - 27 - pythonRelaxDeps = [ 28 - "python-slugify" 29 - ]; 30 - 31 - propagatedBuildInputs = [ 32 - gitpython 33 - python-slugify 34 - requests 35 - setuptools 36 - six 37 - urllib3 38 - ]; 39 - 40 - # Requires external resources 41 - doCheck = false; 42 - 43 - meta = with lib; { 44 - description = "Transifex translation service client"; 45 - homepage = "https://www.transifex.com/"; 46 - license = licenses.gpl2Only; 47 - maintainers = with maintainers; [ sikmir ]; 48 - }; 49 - }
+1
pkgs/top-level/aliases.nix
··· 1036 1036 tokodon = plasma5Packages.tokodon; 1037 1037 tor-browser-bundle-bin = tor-browser; # Added 2023-09-23 1038 1038 transfig = fig2dev; # Added 2022-02-15 1039 + transifex-client = transifex-cli; # Added 2023-12-29 1039 1040 trezor_agent = trezor-agent; # Added 2024-01-07 1040 1041 trustedGrub = throw "trustedGrub has been removed, because it is not maintained upstream anymore"; # Added 2023-05-10 1041 1042 trustedGrub-for-HP = throw "trustedGrub-for-HP has been removed, because it is not maintained upstream anymore"; # Added 2023-05-10
+15 -12
pkgs/top-level/all-packages.nix
··· 14077 14077 14078 14078 tracefilesim = callPackage ../development/tools/analysis/garcosim/tracefilesim { }; 14079 14079 14080 - transifex-client = python39.pkgs.callPackage ../tools/text/transifex-client { }; 14081 - 14082 14080 transifex-cli = callPackage ../applications/misc/transifex-cli { }; 14083 14081 14084 14082 translatelocally = callPackage ../applications/misc/translatelocally { }; ··· 22433 22431 22434 22432 libavif = callPackage ../development/libraries/libavif { }; 22435 22433 22436 - libayatana-common = callPackage ../development/libraries/libayatana-common { 22437 - inherit (lomiri) cmake-extras; 22438 - }; 22434 + libayatana-common = callPackage ../development/libraries/libayatana-common { }; 22439 22435 22440 22436 libb2 = callPackage ../development/libraries/libb2 { }; 22441 22437 ··· 24889 24885 SDL2_image = callPackage ../development/libraries/SDL2_image { 24890 24886 inherit (darwin.apple_sdk.frameworks) Foundation; 24891 24887 }; 24892 - SDL2_image_2_0_5 = SDL2_image.override({ # Pinned for pygame, toppler 24888 + # Pinned for pygame, toppler 24889 + SDL2_image_2_0 = SDL2_image.overrideAttrs (oldAttrs: { 24893 24890 version = "2.0.5"; 24894 - hash = "sha256-vdX24CZoL31+G+C2BRsgnaL0AqLdi9HEvZwlrSYxCNA"; 24891 + src = fetchurl { 24892 + inherit (oldAttrs.src) url; 24893 + hash = "sha256-vdX24CZoL31+G+C2BRsgnaL0AqLdi9HEvZwlrSYxCNA"; 24894 + }; 24895 24895 }); 24896 - SDL2_image_2_6 = SDL2_image.override({ 24897 - # Pinned for hedgewars: 24898 - # https://github.com/NixOS/nixpkgs/pull/274185#issuecomment-1856764786 24896 + # Pinned for hedgewars: 24897 + # https://github.com/NixOS/nixpkgs/pull/274185#issuecomment-1856764786 24898 + SDL2_image_2_6 = SDL2_image.overrideAttrs (oldAttrs: { 24899 24899 version = "2.6.3"; 24900 - hash = "sha256-kxyb5b8dfI+um33BV4KLfu6HTiPH8ktEun7/a0g2MSw="; 24900 + src = fetchurl { 24901 + inherit (oldAttrs.src) url; 24902 + hash = "sha256-kxyb5b8dfI+um33BV4KLfu6HTiPH8ktEun7/a0g2MSw="; 24903 + }; 24901 24904 }); 
24902 24905 24903 24906 SDL2_mixer = callPackage ../development/libraries/SDL2_mixer { ··· 38391 38394 tome4 = callPackage ../games/tome4 { }; 38392 38395 38393 38396 toppler = callPackage ../games/toppler { 38394 - SDL2_image = SDL2_image_2_0_5; 38397 + SDL2_image = SDL2_image_2_0; 38395 38398 }; 38396 38399 38397 38400 torus-trooper = callPackage ../games/torus-trooper { };
+1
pkgs/top-level/python-aliases.nix
··· 39 39 aioh2 = throw "aioh2 has been removed because it is abandoned and broken."; # Added 2022-03-30 40 40 aionotify = throw "aionotify has been removed because is unmaintained and incompatible with python3.11."; # Added 2023-10-27 41 41 aiosenseme = throw "aiosenseme has been removed, because it does no longer work with the latest firmware and has become unmaintained"; # Added 2023-07-05 42 + aioquic-mitmproxy = throw "aioquic-mitmproxy has been removed because mitmproxy no longer uses it"; # Added 2024-01-16 42 43 amazon_kclpy = amazon-kclpy; # added 2023-08-08 43 44 ansible-base = throw "ansible-base has been removed, because it is end of life"; # added 2022-03-30 44 45 ansible-doctor = throw "ansible-doctor has been promoted to a top-level attribute"; # Added 2023-05-16
+1 -3
pkgs/top-level/python-packages.nix
··· 349 349 350 350 aioquic = callPackage ../development/python-modules/aioquic { }; 351 351 352 - aioquic-mitmproxy = callPackage ../development/python-modules/aioquic-mitmproxy { }; 353 - 354 352 aiorecollect = callPackage ../development/python-modules/aiorecollect { }; 355 353 356 354 aioredis = callPackage ../development/python-modules/aioredis { }; ··· 10462 10460 10463 10461 pygame = callPackage ../development/python-modules/pygame { 10464 10462 inherit (pkgs.darwin.apple_sdk.frameworks) AppKit; 10465 - SDL2_image = pkgs.SDL2_image_2_0_5; 10463 + SDL2_image = pkgs.SDL2_image_2_0; 10466 10464 }; 10467 10465 10468 10466 pygame-sdl2 = callPackage ../development/python-modules/pygame-sdl2 { };