To stream via Groq you'd want to set `stream: true` in the completions call. But then you have to face the challenge of how TTS will handle the stream. You can't begin synthesis immediately, because you'd be generating speech from just a few tokens that may not form a complete sentence, and that will sound very bad to the user.
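For reference, this is roughly what the streaming call looks like with the `groq-sdk` package (untested sketch; the model name is just an example):

```ts
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function streamCompletion(prompt: string): Promise<void> {
  const stream = await groq.chat.completions.create({
    model: "llama-3.1-8b-instant", // example model, use whichever you prefer
    messages: [{ role: "user", content: prompt }],
    stream: true, // deltas arrive token by token instead of one full response
  });

  for await (const chunk of stream) {
    // Each chunk carries only a few tokens, which is exactly why you
    // can't hand it straight to TTS.
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}
```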
If you choose to tokenize the stream chunks into sentences, you'll have to add logic to queue/dequeue sentences as they arrive before sending them for synthesis. This would work, but it adds computational overhead and complexity. I added something like this in an older project.
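Roughly the shape of that logic (untested sketch; `synthesize` is a stand-in for whatever TTS call you use):

```ts
async function speakStream(
  textStream: AsyncIterable<string>,
  synthesize: (sentence: string) => Promise<void>,
): Promise<void> {
  let buffer = "";
  const queue: string[] = [];

  for await (const delta of textStream) {
    buffer += delta;
    // Naive split on sentence-ending punctuation; a real tokenizer also
    // needs to handle abbreviations, decimals, etc.
    const parts = buffer.split(/(?<=[.!?])\s+/);
    buffer = parts.pop() ?? ""; // keep the unfinished tail buffered
    queue.push(...parts);       // enqueue completed sentences

    // Dequeue and synthesize in arrival order so playback stays sequential.
    while (queue.length > 0) {
      await synthesize(queue.shift()!);
    }
  }

  if (buffer.trim()) await synthesize(buffer); // flush the final fragment
}
```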
The best-case scenario here, since Groq is really fast, is to send the full text response to the speech API and just stream the speech itself back to the front end.
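Untested sketch of that flow as a route handler (the TTS endpoint and env vars are placeholders for whatever provider you use):

```ts
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

export async function POST(req: Request): Promise<Response> {
  const { prompt } = await req.json();

  // Non-streaming completion: Groq is fast enough that waiting for the
  // full text is simpler than juggling sentence chunks.
  const completion = await groq.chat.completions.create({
    model: "llama-3.1-8b-instant", // example model
    messages: [{ role: "user", content: prompt }],
  });
  const text = completion.choices[0]?.message?.content ?? "";

  // Placeholder TTS request; swap in your provider's speech endpoint.
  const ttsRes = await fetch("https://api.example-tts.com/v1/speech", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TTS_API_KEY}`,
    },
    body: JSON.stringify({ text }),
  });

  // Forward the audio bytes to the client as they arrive, no buffering.
  return new Response(ttsRes.body, {
    headers: { "Content-Type": "audio/mpeg" },
  });
}
```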
Wouldn't it be better to use streaming interfaces in both the LLM and speech systems?
For example:
elevenlabs/elevenlabs-js#4 (comment)
Vercel should support this:
https://vercel.com/docs/functions/streaming
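Rough shape of what that end-to-end streaming could look like on a Vercel function (untested sketch; the model name and TTS endpoint are placeholders):

```ts
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

// Placeholder streaming TTS call: POSTs one sentence, yields audio chunks.
async function* synthesize(sentence: string): AsyncGenerator<Uint8Array> {
  const res = await fetch("https://api.example-tts.com/v1/speech", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TTS_API_KEY}`,
    },
    body: JSON.stringify({ text: sentence }),
  });
  const reader = res.body!.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    yield value;
  }
}

export async function POST(req: Request): Promise<Response> {
  const { prompt } = await req.json();

  const audio = new ReadableStream<Uint8Array>({
    async start(controller) {
      const llmStream = await groq.chat.completions.create({
        model: "llama-3.1-8b-instant", // example model
        messages: [{ role: "user", content: prompt }],
        stream: true,
      });

      let buffer = "";
      for await (const chunk of llmStream) {
        buffer += chunk.choices[0]?.delta?.content ?? "";
        const sentences = buffer.split(/(?<=[.!?])\s+/);
        buffer = sentences.pop() ?? ""; // keep the unfinished tail

        // Audio for each finished sentence streams out immediately, so
        // playback can start after the first sentence, not the last.
        for (const s of sentences) {
          for await (const bytes of synthesize(s)) controller.enqueue(bytes);
        }
      }
      if (buffer.trim()) {
        for await (const bytes of synthesize(buffer)) controller.enqueue(bytes);
      }
      controller.close();
    },
  });

  return new Response(audio, { headers: { "Content-Type": "audio/mpeg" } });
}
```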