
Output interrupted #145

Open
schucan opened this issue Oct 1, 2024 · 14 comments

Comments

@schucan commented Oct 1, 2024

The problem

Expected behavior:

  1. A ThunderAI command is clicked
  2. the chat dialog opens, with the prompt consisting of the prompt template and the input text
  3. ChatGPT writes its output
  4. user can continue chatting or use the output

Wrong behavior:

  • At step 3, the output just stops. There is no error message. At the bottom of the dialog window, the "Receiving Data..." indicator remains, but nothing further happens and nothing changes.

The wrong behavior occurs about half the time I use the add-on.

At first, I thought it could be related to the number of tokens. But it happens randomly, with no correlation to the length of the input or output. Completely non-deterministic so far.

Which operating system are you using?

Windows 10

Which version of Thunderbird are you using?

128.2.3esr (64-bit)

Which version of ThunderAI has the issue?

2.1.5

Which integration are you using?

OpenAI ChatGPT API

Anything in the Thunderbird console logs that might be useful?

Will add when it happens next...

Additional information

No response

@micz (Owner) commented Oct 1, 2024

Could you activate the debug option in the add-on preferences, run it again, and post the last lines of the log?
Thanks.

@schucan (Author) commented Oct 1, 2024

Yes - will post as soon as it happens again.

@schucan (Author) commented Oct 1, 2024

Here we go:

20:43:33.493 [ThunderAI Logger | mzta-background] [ThunderAI] Prompt length: 890 [mzta-logger.js:35:44](moz-extension://79bdee5f.../js/mzta-logger.js)
20:43:34.236 [ThunderAI Logger | mzta-background] [OpenAI ChatGPI] Connection succeded! [mzta-logger.js:35:44](moz-extension://79bdee5f-0e0d-44ba-97c3-5bf8da3c199e/js/mzta-logger.js)
20:43:35.977 SyntaxError: JSON.parse: unterminated string at line 1 column 154 of the JSON data
[model-worker-openai.js:132:37](moz-extension://79bdee5f.../api_webchat/model-worker-openai.js)

This now happened with this prompt:

Expand these notes into a full email with a greeting and a salutation. You must use the same language as is used in the notes (example: English notes must be made into an English email!). Try to make it formal or informal according to the content. If German, write it for Switzerland, without "scharfes S" (ß), and make it in "Du Form" for informal email, and in polite "Sie Form" for a formal matter. Make the email polite, but simple and clear. The author is the man X Y. End with just the first name X for informal, with the full name X Y for a formal email. Do not add a subject line - just the message text in Markdown format. Here are the notes: "[some text]" 

The output was just a bit less than one line of text, interrupted in the middle of a sentence.

@micz (Owner) commented Oct 1, 2024

It's not the prompt. I can get full response without problems.
Does it happen all the time with any email?

@schucan (Author) commented Oct 1, 2024

As I said, it is non-deterministic: whenever it happens, I can just repeat the same request and it works. It is just inconvenient and doubles the cost (in tokens).

@micz (Owner) commented Oct 1, 2024

You're right, sorry.

Could I give you a version with more logging, so we can try to catch the problem?
I've tried the prompt many times, but never got the error.

@schucan (Author) commented Oct 1, 2024

Yes please do.

While you are at it: is there a way to place the text string inside a prompt? It would be fantastic (and actually required) to have a shortcode that can be used inside the prompt text, for example [t]: "answer the following email: [t] and express this answer: ", combined with the "Ask for additional text" option... I'll test this too, if you like ;-)

@micz (Owner) commented Oct 1, 2024

Ok, I'll post the version to try here. It will be based on version 2.2.0, which I'm working on.

Your suggestion is great; could you file a separate issue? Thank you.

@micz (Owner) commented Oct 1, 2024

You can find version 2.2.0_i145_v1 here.

Please follow these steps:

  1. Download the file thunderai-v2.2.0_i145_v1.zip
  2. Rename it from *.zip to *.xpi
  3. Backup your custom prompts if you have any
  4. Install this version

This version prints the lines received from ChatGPT to the console log.
Before posting the log here, check it for any sensitive information that might appear in the reply.

If you need the source code, you can find it in the branch: https://github.com/micz/ThunderAI/tree/issue_145 (full diff)

Thank you.

@micz (Owner) commented Oct 1, 2024

I finally got it once.
The connection was closed by ChatGPT; this is the line received:

>>>>>>>>>>>>> [ThunderAI] line: "{\"id\":\"xxxxxxxxxxxxx\",\"object\":\"chat.completion.chunk\",\"created\":1727813166,\"model\":\"gpt-4o-mini-2024-07-18\",\"system_fingerprint\":\"fp_f85bea6784\",\"choices\":[{\"in"

The previous line, which was complete, looked like this:

>>>>>>>>>>>>> [ThunderAI] line: "{\"id\":\"xxxxxxxxxxxxxxxxxxxxxx\",\"object\":\"chat.completion.chunk\",\"created\":1727813166,\"model\":\"gpt-4o-mini-2024-07-18\",\"system_fingerprint\":\"fp_f85bea6784\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\" me\"},\"logprobs\":null,\"finish_reason\":null}]}"

I will show an error message in this case, but I can't do anything to prevent it.

@schucan (Author) commented Oct 1, 2024

Your suggestion is great; could you file a separate issue? Thank you.

#146 😉

@schucan (Author) commented Oct 1, 2024

I finally got it once. The connection was closed by ChatGPT; this is the line received:

...

The previous line, which was complete, looked like this:

...

I will show an error message in this case, but I can't do anything to prevent it.

Here's a quick answer from Kagi Search. I am guessing it must be cause 3 or 4; I can pretty much rule out 1 and 2:

Quick Answer

Yes, the OpenAI API can sometimes interrupt chunked output. This issue may arise due to several factors:

  1. Response Limits: The API has a maximum token limit for responses. If the input tokens are high, the output may be truncated unexpectedly [1].
  2. Connection Issues: Streaming responses can lead to errors like ChunkedEncodingError, indicating that the connection was broken during data transmission [2].
  3. Mid-Event Chunking: There have been reports of the API chunking responses mid-event, which can disrupt JSON deserialization and lead to incomplete outputs [3].
  4. Unreliable Responses: Some users have experienced instability with the API, particularly when using function calls, resulting in random failures [4].

To mitigate these issues, consider breaking down requests into smaller parts or adjusting your input to stay within token limits.
References

  1. Openai response getting truncated - API - OpenAI Developer Forum community.openai.com
  2. Getting ChunkedEncodingError in every stream request to GPT-4 - API community.openai.com
  3. Bugs - OpenAI Developer Forum - OpenAI API Community Forum community.openai.com
  4. API unstable with function calls (on both 4K and 16K 3.5 models) community.openai.com

Are you setting a token limit? I don't think that's the reason, because it broke with both long and short prompts.

@micz (Owner) commented Oct 1, 2024

I'm not setting a token limit, and I got the error with a very small prompt.
I'm trying to keep the truncated line and check whether the missing part is resent by ChatGPT, but now I never get the error.

@micz (Owner) commented Oct 1, 2024

thunderai-v2.2.0_i145_v2.zip

This version tries to recover the missing part of the chunk.
I never got the error after implementing this. Please try this one.
Thank you!
