
CI : Add coverage for talk-llama when WHISPER_CUBLAS=1 #1672

Merged: 2 commits into ggerganov:master on Dec 21, 2023

Conversation

bobqianic (Collaborator) commented:
It's normal for this PR's CI to fail for now. We should merge #1669 first, and then this PR's CI will pass, because there is an issue with talk-llama's CMake.
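The added coverage presumably amounts to building the talk-llama example inside the CUDA CI job. A hedged sketch of what such a workflow step could look like is below; the job name, runner image, and step layout are assumptions for illustration, not taken from the repository's actual .github/workflows files (at the time, WHISPER_CUBLAS was the build flag for cuBLAS support, and talk-llama additionally requires SDL2):

```yaml
# Hypothetical excerpt of a CUDA CI job -- names and images are assumptions.
ubuntu-22-cmake-cuda:
  runs-on: ubuntu-22.04
  container: nvidia/cuda:12.2.0-devel-ubuntu22.04
  steps:
    - uses: actions/checkout@v4
    - name: Install dependencies
      run: apt-get update && apt-get install -y build-essential cmake libsdl2-dev
    - name: Build with cuBLAS, including talk-llama
      run: |
        cmake -B build -DWHISPER_CUBLAS=1 -DWHISPER_SDL2=ON
        cmake --build build -j --target talk-llama
```

The point of targeting talk-llama explicitly is that a plain library build would not have caught the CMake breakage referenced in #1669; the example only fails when its own target is actually configured and compiled.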

ggerganov (Owner) left a comment:


Merge if CI is green

@bobqianic bobqianic merged commit db8ccdb into ggerganov:master Dec 21, 2023
37 checks passed
@bobqianic bobqianic deleted the cuda-ci branch December 22, 2023 20:58
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Dec 25, 2023
* ggerganov/master:
  whisper : Replace WHISPER_PRINT_DEBUG with WHISPER_LOG_DEBUG (ggerganov#1681)
  sync : ggml (ggml_scale, ggml_row_size, etc.) (ggerganov#1677)
  docker :  Dockerize whisper.cpp (ggerganov#1674)
  CI : Add coverage for talk-llama when WHISPER_CUBLAS=1 (ggerganov#1672)
  examples : Revert CMakeLists.txt for talk-llama (ggerganov#1669)
  cmake : set default CUDA architectures (ggerganov#1667)
viktor-silakov pushed a commit to viktor-silakov/whisper_node_mic.cpp that referenced this pull request May 11, 2024