
bartowski Reflection Llama 3.1 70b creates just weird characters as output on M4 Pro 48GB #3190

Open
incredibleole opened this issue Nov 15, 2024 · 1 comment
Labels
bug-unconfirmed chat gpt4all-chat issues

Comments


incredibleole commented Nov 15, 2024

When using the 70b Llama model, it just generates garbage (random characters) as output. This seems to be the case with other 70b models as well. Using a MacBook Pro with M4 Pro and 48 GB of RAM.

Your Environment

  • GPT4All version: 3.4.2
  • Operating System: macOS Sequoia 15.1
  • Chat model used (if applicable): bartowski Reflection Llama 3.1 70b
AndriyMulyar (Contributor) commented

What tok/s are you getting on the 48 GB M4?

It is likely a prompt template issue for the GGUF; we will investigate.
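For context on why a template mismatch produces garbage: Llama 3.1 instruct models expect their input wrapped in specific special tokens (per Meta's published prompt format), and a frontend that applies the wrong template feeds the model text it was never trained to continue. A minimal sketch of the expected format — the helper name is illustrative, not GPT4All's actual code:

```python
# Sketch of the Llama 3.1 chat prompt format, using the special tokens
# from Meta's model card. This is NOT GPT4All's implementation; it only
# illustrates what a correctly templated prompt looks like.

def build_llama31_prompt(messages):
    """Render a list of {"role", "content"} dicts into the raw prompt
    string a Llama 3.1 instruct model expects."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn is wrapped in header tokens and closed with <|eot_id|>.
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"])
        parts.append("<|eot_id|>")
    # Open an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama31_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

If the frontend's stored template for this GGUF diverges from this format (wrong tokens, missing headers), the model's output can degenerate into random characters.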
