python3Packages.llama-cpp-python: disable pythonImportsCheck when cudaSupport is enabled (#465751)
Well... Unfortunately, …

If you really care, one approach could be to propagate a hook that, at the least, displays a meaningful error message if …
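The comment above is truncated, but the idea of a hook that surfaces a meaningful error seems to be: catch the dlopen failure at import time and explain where `libcuda.so` is expected to come from. A minimal sketch of that idea, with a hypothetical helper name (`import_with_driver_hint` is not part of llama-cpp-python or nixpkgs):

```python
from importlib import import_module


def import_with_driver_hint(module_name: str):
    """Import module_name, re-raising dlopen failures with a hint.

    Hypothetical helper: shared-library load failures during import
    surface as OSError, so we attach a message pointing at the host
    CUDA driver location used on NixOS.
    """
    try:
        return import_module(module_name)
    except OSError as exc:
        raise OSError(
            f"Failed to import {module_name}: a shared library it "
            "dlopens (e.g. libcuda.so) was not found. On NixOS the "
            "driver exposes it under /run/opengl-driver/lib."
        ) from exc
```

On a working system the helper behaves like a plain import; it only changes what the user sees when the driver library is missing.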
`llama_cpp` fails to import in the sandbox when `cudaSupport` is enabled. Indeed, `libllama.so` is dlopen-ed at import time and, when `cudaSupport` is enabled, loads `libcuda.so`, which should be provided by the driver (in `/run/opengl-driver/lib`). This cannot work inside the sandbox.
I also added `autoAddDriverRunpath` so that the driver runpath is effectively added to `libllama.so`'s rpath.

https://hydra.nixos-cuda.org/build/26753/log/tail
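The two changes described above could look roughly like this in the derivation. This is a hedged sketch, not the exact nixpkgs expression; the surrounding attribute layout and argument names (`cudaSupport`, `lib`, `autoAddDriverRunpath`) are assumed to be in scope as usual for nixpkgs:

```nix
# Illustrative excerpt from a llama-cpp-python build, not the actual diff.
{
  # autoAddDriverRunpath rewrites the rpath of built libraries (here
  # libllama.so) to include the driver path, /run/opengl-driver/lib,
  # where NixOS exposes libcuda.so at runtime.
  nativeBuildInputs = lib.optionals cudaSupport [ autoAddDriverRunpath ];

  # Importing llama_cpp dlopens libllama.so, which in turn loads
  # libcuda.so when cudaSupport is enabled. libcuda.so comes from the
  # host driver and is unavailable in the build sandbox, so the import
  # check can only run in the non-CUDA case.
  pythonImportsCheck = lib.optionals (!cudaSupport) [ "llama_cpp" ];
}
```

The effect is that CUDA builds skip the sandboxed import check entirely, while the import still gets exercised for the default (non-CUDA) variant.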
cc @SomeoneSerge