Merge pull request #305120 from RuRo/fix-triton-max-jobs

python311Packages.openai-triton: use requested number of cores for build

Authored by Aleksana, committed by GitHub (33156229, 8b31f30e)

+3 pkgs/development/python-modules/openai-triton/default.nix

@@ -115,6 +115,9 @@
 
   # Avoid GLIBCXX mismatch with other cuda-enabled python packages
   preConfigure = ''
+    # Ensure that the build process uses the requested number of cores
+    export MAX_JOBS="$NIX_BUILD_CORES"
+
     # Upstream's setup.py tries to write cache somewhere in ~/
     export HOME=$(mktemp -d)
 
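For context on why exporting MAX_JOBS helps: PyTorch-style build scripts (a convention Triton's setup.py follows) usually honor a MAX_JOBS environment variable and fall back to the full host CPU count when it is unset. A minimal sketch of that fallback logic, assuming the upstream convention (the function name build_parallelism is illustrative, not from the source):

```python
import multiprocessing
import os

def build_parallelism() -> int:
    """Pick the number of parallel compile jobs: honor MAX_JOBS if set,
    otherwise fall back to all available CPUs (the upstream convention)."""
    max_jobs = os.environ.get("MAX_JOBS")
    if max_jobs is not None:
        return int(max_jobs)
    return multiprocessing.cpu_count()

# With MAX_JOBS exported (as this patch does in preConfigure), the
# sandbox's NIX_BUILD_CORES value wins over the host's CPU count.
os.environ["MAX_JOBS"] = "4"
print(build_parallelism())  # → 4
```

Without the export, a build on a many-core host could spawn far more compile jobs than the Nix `cores` setting requested, which is exactly what the patch prevents.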