incomplete extracted result #717
Yes @haluwong - the gpt-4 model has an output token limit of 4096 tokens.
@VikashPratheepan -- Curious, how do we handle data extraction that is larger than the LLM's output token limit? Most LLMs are scaling up input size, not so much output size.
@ashwanthkumar we handle this by internally splitting the context, making multiple requests, and responding with a concatenated result. However, this feature is only available in the enterprise version.
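The split-and-concatenate approach described above can be sketched roughly as follows. This is a minimal illustration, not Unstract's actual implementation: `extract` stands in for whatever LLM call runs the extraction prompt, and the chunk size is a hypothetical knob you would tune so each chunk's output stays under the model's 4096-token output limit.

```python
# Hypothetical sketch of the chunk-and-merge workaround: split the source
# pages into chunks, run the extraction prompt once per chunk, and
# concatenate the per-chunk results. `extract` is a placeholder for a real
# LLM call; it is an assumption, not part of Unstract's public API.

def split_into_chunks(pages, pages_per_chunk=2):
    """Group page texts into chunks small enough that each chunk's
    extracted output fits within the model's output token limit."""
    return [pages[i:i + pages_per_chunk]
            for i in range(0, len(pages), pages_per_chunk)]

def extract_all_items(pages, extract, pages_per_chunk=2):
    """Run `extract` on each chunk and concatenate the item lists."""
    items = []
    for chunk in split_into_chunks(pages, pages_per_chunk):
        # One LLM request per chunk; each response is well under the limit.
        items.extend(extract("\n".join(chunk)))
    return items
```

The trade-off is extra requests (and cost), plus the need to deduplicate or re-order items if one logical item spans a chunk boundary.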
I have the same problem. This should never happen, since ChatGPT simply asks you whether you want to proceed. Why is this feature not built into the software? It makes the tool nearly useless for anything of useful size, apart perhaps from receipts and short bank statements.
Hi all, we have a 7-page PDF delivery note, and we would like to extract the item information from it.
There are 13 items, but Unstract only extracts 6 of them.
I can use a prompt to get the total number of items, which shows that all the pages are being read.
But for the item details, it cannot extract all the data.
Here is my prompt:
Items after "007" cannot be extracted.
Is there any limitation on the output size?
Here is the JSON output for the above prompt:
result.json