I’m looking for guidance on how to use Pipenv to install llama-cpp-python with CUDA support. The installation command using pip is:
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
Could someone explain how to achieve the same with Pipenv?
Thanks in advance!
@michaelsheka have you tried the same environment-variable prefix with Pipenv?

CMAKE_ARGS="-DGGML_CUDA=on" pipenv install llama-cpp-python