Increasing token limit breaks co-pilot #1
When increasing the token limit of co-pilot above 1000, I do not get any response from chat-gpt, not even after a restart. Setting the token limit below 1000 fixes the issue again.

Comments
Thanks for reporting the issue @noahevers97! Hmm, I can't seem to reproduce this. I tried 2000 for both gpt3.5 and gpt4, and they are working fine for me. Do you see any console errors?
I had mine set to 5000 and had to go back to 1000, otherwise it wouldn't work and complained about credentials/API key. I love this concept of interacting with GPT. Thanks for the plugin!
Thanks for trying it out! Sorry about the confusing error message; that API key complaint currently covers all OpenAI errors. If you open your console you will see the true error message, which is quite long. I suspect the 5000 token limit is above what is allowed for the model you are using. Feel free to copy and paste the console error message here and I can take a look. I will add better error messages soon.
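For illustration, here is a minimal sketch of how the underlying OpenAI error could be surfaced instead of a generic API key complaint. The function shape and the `notifyUser` helper are assumptions for the sketch, not the plugin's actual code:

```ts
// Sketch: surface the real OpenAI error instead of a generic API key complaint.
// `notifyUser` is a hypothetical stand-in for the plugin's notification helper.
async function chatCompletion(
  apiKey: string,
  model: string,
  maxTokens: number,
  prompt: string
): Promise<unknown> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      max_tokens: maxTokens,
      messages: [{ role: "user", content: prompt }],
    }),
  });

  if (!res.ok) {
    // OpenAI error bodies carry the real reason in error.message,
    // e.g. that max_tokens exceeds the model's context length.
    const body = await res.json().catch(() => null);
    const message = body?.error?.message ?? `HTTP ${res.status}`;
    notifyUser(`OpenAI request failed: ${message}`);
    throw new Error(message);
  }
  return res.json();
}

// Hypothetical helper; in an Obsidian plugin this might wrap `new Notice(...)`.
declare function notifyUser(message: string): void;
```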
Basically, the issue is that the model you picked does not support a token limit as big as 5000. I added a better error message like this, hope it helps. In the meantime, please keep the OpenAI limitations in mind. You can find the API token limits in their docs: https://platform.openai.com/docs/models/overview I'm planning to support unlimited context in the near future, please stay tuned!
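For reference, a minimal sketch of checking the configured token limit against a per-model maximum before sending the request. The `MODEL_MAX_TOKENS` map and its values are assumptions based on OpenAI's published context windows at the time, not the plugin's actual implementation:

```ts
// Assumed per-model context limits; check OpenAI's model docs for current values.
const MODEL_MAX_TOKENS: Record<string, number> = {
  "gpt-3.5-turbo": 4096,
  "gpt-4": 8192,
};

// Hypothetical validation step run before the request is sent, so a setting
// like 5000 tokens on a 4096-token model fails with a clear message.
function validateTokenLimit(model: string, maxTokens: number): void {
  const limit = MODEL_MAX_TOKENS[model];
  if (limit !== undefined && maxTokens > limit) {
    throw new Error(
      `Token limit ${maxTokens} exceeds the maximum of ${limit} for ${model}. ` +
        `Please lower it in the plugin settings.`
    );
  }
}
```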
Closing this one for now; please open a new issue with the new error messages if you still encounter problems.
Thank you for the info. Appreciated |