
Conversation

@MollySophia
Collaborator

The initial implementation didn't add special tokens when converting RWKV v6 models. It's not really an urgent problem, but it's better to fix it, as discussed in #9315.

This should also close #9315.
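
For context, here is a minimal sketch of what writing special-token metadata with gguf-py can look like in a conversion script. The helper name `add_rwkv_special_tokens` is hypothetical, and this is not the exact change in this PR, just an illustration of the mechanism:

```python
import gguf  # gguf-py package shipped with llama.cpp

# Hypothetical helper for illustration; in convert_hf_to_gguf.py this
# kind of logic lives in the model class's set_vocab() method.
def add_rwkv_special_tokens(dir_model, gguf_writer):
    # Read special-token info (eos/bos/etc.) from the model directory's
    # tokenizer/config files; RWKV doesn't use BPE merges, so skip them.
    special_vocab = gguf.SpecialVocab(dir_model, load_merges=False)
    # Write the resolved special-token ids into the GGUF metadata so
    # inference code can detect end-of-text correctly.
    special_vocab.add_to_gguf(gguf_writer)
```

Without this metadata the converted GGUF leaves those token ids unset, which is presumably what #9315 runs into.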

@MollySophia
Collaborator Author

BTW, I'm sorry for not being able to solve all the problems at once :P

@github-actions github-actions bot added the python python script changes label Sep 11, 2024
@ggerganov ggerganov merged commit 39f852f into ggml-org:master Sep 12, 2024
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024
Nexesenex pushed a commit to Nexesenex/croco.cpp that referenced this pull request Feb 25, 2025

Development

Successfully merging this pull request may close these issues.

Bug: RWKV 6 Finch 3B+ models crash llama.cpp with CPU backend
