The BE really ought to stream audio to the FE in chunks.
Alternatively, we could send audio chunks to the FE as they are generated.
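A minimal sketch of the chunked-streaming idea, assuming a hypothetical `generate_audio` TTS step and an arbitrary chunk size (both placeholders, not the project's real API):

```python
# Sketch: send generated audio to the FE in fixed-size chunks instead of
# waiting for the full clip. `generate_audio` and CHUNK_SIZE are assumptions.

CHUNK_SIZE = 4096  # bytes per chunk sent to the FE

def generate_audio() -> bytes:
    # Placeholder for the real TTS output.
    return b"\x00" * 10_000

def stream_audio_chunks(audio: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield the audio in chunks so the FE can start playback early."""
    for offset in range(0, len(audio), chunk_size):
        yield audio[offset:offset + chunk_size]

chunks = list(stream_audio_chunks(generate_audio()))
```

In a real BE this generator would feed a streaming HTTP response or a WebSocket, so playback on the FE can begin before generation finishes.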
FE could store all generated audio on-device.
We could track the audio chunk currently in use so that we can send its transcript to the LLM. (Or, if you keep using LLMs with large context windows, you could just send the entire transcript. Going further, you could estimate the token count of the payload and fill the remaining context with as much of the transcript as fits.)
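The "fill the remaining context" variant could look like this sketch; the 4-characters-per-token heuristic and the budget value are assumptions, not measured figures:

```python
# Sketch: estimate token counts and keep the most recent transcript
# chunks that fit within a context budget. The ~4 chars/token heuristic
# is a rough assumption; a real implementation would use the model's tokenizer.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English text.
    return max(1, len(text) // 4)

def pack_transcript(chunks: list[str], budget: int) -> list[str]:
    """Select the newest transcript chunks whose estimated cost fits `budget`."""
    selected: list[str] = []
    used = 0
    for chunk in reversed(chunks):  # walk newest-first
        cost = estimate_tokens(chunk)
        if used + cost > budget:
            break
        selected.append(chunk)
        used += cost
    selected.reverse()  # restore chronological order for the prompt
    return selected
```

Walking newest-first means the freshest context is always kept and older history is what gets dropped when the budget runs out.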
However you want to solve it.