Output interrupted #145
Could you activate the debug option in the add-on preferences, run it again, and post the last lines of the log?
Yes - will post as soon as it happens again.
Here we go:
This now happened with this prompt:
The output was just a bit less than one line of text, interrupted in the middle of a sentence.
It's not the prompt. I can get a full response without problems.
As I said, it is non-deterministic: whenever it happens, I can just repeat the same request and it works. It is just inconvenient and doubles the cost (in tokens).
You're right, sorry. Could I give you a version with more logging, so we can try to catch the problem?
Yes, please do. While you are at it: is there a way to place the text string inside a prompt? It would be fantastic (and actually required) to have a shortcode that can be used inside a prompt text, for example [t]: "answer the following email: [t] and express this answer: ", and then check "Ask for additional text"... I'll test this too, if you like ;-)
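The shortcode idea above amounts to simple template substitution. A minimal sketch of what it could look like, assuming a `[t]` placeholder; the function name `applyShortcode` and the fallback behavior are illustrative, not the add-on's actual API:

```javascript
// Hypothetical sketch: substitute the [t] shortcode in a prompt
// template with the user-supplied text. If the template contains no
// placeholder, append the text at the end as a fallback (an assumed
// design choice, not confirmed by the thread).
function applyShortcode(promptTemplate, additionalText) {
  if (promptTemplate.includes("[t]")) {
    // Replace every occurrence of the placeholder.
    return promptTemplate.split("[t]").join(additionalText);
  }
  return promptTemplate + " " + additionalText;
}

// Usage, following the example in the thread:
const prompt = applyShortcode(
  "answer the following email: [t] and express this answer: ",
  "Dear Sir, ..."
);
```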
Ok, I'll post here the version to try. It will be based on version 2.2.0, which I'm working on. Your suggestion is great; could you file a separate issue? Thank you.
You can find version 2.2.0_i145_v1 here. Please follow these steps:
This version prints to the console log the lines received from ChatGPT. If you need the source code, you can find it in the branch: https://github.com/micz/ThunderAI/tree/issue_145 (full diff) Thank you.
I finally got it once.
The previous one, which was complete, looked like this:
I will show an error message in this case, but I can't do anything to prevent it.
#146 😉
...
...
Here's a quick answer from Kagi Search. I am guessing it must be 3 or 4; I can pretty much rule out 1 and 2. Quick Answer: Yes, the OpenAI API can sometimes interrupt chunked output. This issue may arise due to several factors:
To mitigate these issues, consider breaking down requests into smaller parts or adjusting your input to stay within token limits.
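One way to tell a genuinely finished response from an interrupted one: in the OpenAI streaming API, a completed response ends with a chunk whose `choices[0].finish_reason` is `"stop"` (or `"length"` when the token limit was hit). A stream that closes without ever delivering a finish reason was cut off. A sketch of that check, assuming the chunks have already been parsed from JSON; the function name `classifyStreamEnd` is illustrative:

```javascript
// Classify how a streamed chat completion ended, given the array of
// parsed chunk objects received before the connection closed.
function classifyStreamEnd(chunks) {
  for (const chunk of chunks) {
    const reason = chunk.choices?.[0]?.finish_reason;
    if (reason === "stop") return "complete";     // model finished normally
    if (reason === "length") return "token-limit"; // hit max_tokens
  }
  // The stream closed without any finish_reason: likely interrupted,
  // which matches the behavior described in this issue.
  return "interrupted";
}
```

This would let the add-on show a targeted error ("output interrupted") rather than silently presenting a truncated reply.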
Are you setting a token limit? I don't think that's the reason, because it broke with both long and short prompts.
I'm not setting a token limit, but I got the error with a very small prompt.
This version tries to recover the missing part of the chunk.
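Recovering a "missing part" of a chunk usually means buffering: network chunks from a streamed response do not align with server-sent-event line boundaries, so an incomplete trailing line must be held back and prepended to the next chunk before parsing. A minimal sketch of that buffering, with an illustrative name (`makeSseLineSplitter`) that is not necessarily how ThunderAI implements it:

```javascript
// Returns a feed() function that accumulates decoded chunk text and
// emits only complete lines, keeping any partial trailing line in an
// internal buffer until the rest of it arrives.
function makeSseLineSplitter() {
  let buffer = "";
  return function feed(chunkText) {
    buffer += chunkText;
    const lines = buffer.split("\n");
    // The last element is either "" (chunk ended exactly on a newline)
    // or an incomplete line: keep it for the next call.
    buffer = lines.pop();
    return lines.filter((line) => line.length > 0);
  };
}
```

With this in place, a `data: {...}` line split across two network reads is parsed only once it is whole, instead of failing JSON parsing on a fragment.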
The problem
Expected behavior:
Wrong behavior:
The wrong behavior occurs about half of the time I use the add-on.
At first, I thought it could be related to the number of tokens. But it happens randomly, with no correlation to the length of the input or output. Completely non-deterministic for now.
Which Operating System are you using?
Windows 10
Which version of Thunderbird are you using?
128.2.3esr (64-bit)
Which version of ThunderAI has the issue?
2.1.5
Which integration are you using?
OpenAI ChatGPT API
Anything in the Thunderbird console logs that might be useful?
Will add when it happens next...
Additional information
No response