This model's maximum context length is 8191 tokens, however you requested 21485 tokens (21485 in your prompt; 0 for the completion). Please reduce your prompt; or completion length. #4233
Comments
You will want to split the CSV file into sections that the LLM can handle: make that 5-8 parts to be on the safe side, so there is still space for the prompt itself. For starters, consider coming up with a challenge based on this, so that it can be added as a benchmark. PS: and check out #3031
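A rough sketch of what that chunking could look like (illustrative, not AutoGPT's own code; the tokenizer choice and the 6000-token budget are assumptions that leave headroom for the prompt inside an 8191-token window):

```python
import csv
import tiktoken

ENCODING = tiktoken.get_encoding("cl100k_base")  # tokenizer used by gpt-3.5/gpt-4
TOKEN_BUDGET = 6000  # assumed budget; leaves ~2k tokens for the prompt and reply

def count_tokens(text: str) -> int:
    return len(ENCODING.encode(text))

def split_csv(path: str) -> list[str]:
    """Return the CSV as text chunks, each under TOKEN_BUDGET tokens,
    with the header repeated so every chunk stays self-describing."""
    with open(path, newline="") as f:
        # joining with "," is a simplification; a sketch, not quote-safe CSV output
        rows = [",".join(row) for row in csv.reader(f)]
    header, body = rows[0], rows[1:]
    chunks: list[str] = []
    current, used = [header], count_tokens(header)
    for row in body:
        needed = count_tokens(row)
        if used + needed > TOKEN_BUDGET and len(current) > 1:
            chunks.append("\n".join(current))  # close off the full chunk
            current, used = [header], count_tokens(header)
        current.append(row)
        used += needed
    chunks.append("\n".join(current))
    return chunks
```

Each chunk can then be summarized on its own and the per-chunk results combined in a final pass, the usual map-reduce pattern for inputs longer than the context window.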
I had a similar error, but in my case it came not from reading one file but from having it browse the internet. I want it to search the internet, collect information for me into separate files, and save those locally.
Marked as a challenge and talked to @merwanehamadi about capturing this via corresponding challenge coverage.
We could gather some massive CSV datasets for this one to get people started with the data:
I am working on a chatbot with personal data. |
Trying to get AutoGPT to read a .csv file within the auto_gpt_workspace path, and receiving the following error:
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 21485 tokens (21485 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Can I edit anything in the .env file to get around the token limit? I have Pinecone set up, but my knowledge is limited. Apologies if this has already been solved; I'm new to coding and to GitHub.
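For what it's worth: the 8191-token ceiling is the model's own context window, so no .env value can raise it past that; AutoGPT's token settings (FAST_TOKEN_LIMIT / SMART_TOKEN_LIMIT, if I remember the names right) only control how much of that window it tries to use. What you can do is measure a prompt with tiktoken before sending it and chunk the input when it will not fit. A minimal sketch, with the 1000-token completion reserve as an assumption:

```python
import tiktoken

MODEL_MAX_TOKENS = 8191  # the context window named in the error above
ENCODING = tiktoken.get_encoding("cl100k_base")  # tokenizer for gpt-3.5/gpt-4

def fits_in_context(prompt: str, completion_reserve: int = 1000) -> bool:
    """True if the prompt plus a reserved completion budget fits the model."""
    return len(ENCODING.encode(prompt)) + completion_reserve <= MODEL_MAX_TOKENS
```

Pinecone helps with the same problem from the other side: chunks are embedded and stored, and only the few most relevant ones are retrieved into the prompt, which keeps it under the limit.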