use inference gateway in the readme #3665
Conversation
```python
    cli,
    function_tool,
)
from livekit.plugins import deepgram, elevenlabs, openai, silero
```
you can remove this
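For reference, a rough sketch of what the trimmed import block could look like once the unused provider plugins are dropped (whether silero stays around for local VAD is an assumption here, not something this diff shows):

```python
# Sketch only, not the exact README code: gateway model strings replace the
# provider plugin constructors, so the provider plugin imports can go away.
from livekit.agents import (
    Agent,
    AgentSession,
    JobContext,
    WorkerOptions,
    cli,
    function_tool,
)
from livekit.plugins import silero  # keep only if the example still loads a local VAD
```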
```markdown
You'll need the following environment variables for this example:

- DEEPGRAM_API_KEY
- OPENAI_API_KEY
- ELEVEN_API_KEY
```
remove this
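Presumably this is because, with the inference gateway, the per-provider keys are no longer needed for this example and only the standard LiveKit credentials remain. A small illustrative check of that assumption (the exact requirement isn't stated in this diff):

```python
# Assumption based on the review comment above: DEEPGRAM_API_KEY, OPENAI_API_KEY
# and ELEVEN_API_KEY are no longer required once the gateway handles the
# providers; only the LiveKit credentials are.
import os

for var in ("LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET"):
    if not os.environ.get(var):
        raise RuntimeError(f"missing required environment variable: {var}")
```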
```diff
-        llm=openai.LLM(model="gpt-4o-mini"),
-        tts=openai.TTS(voice="echo"),
+        stt="deepgram/nova-3",
+        llm="openai/gpt-4o",
```
We're recommending openai/gpt-4.1-mini as the new default.
This model hallucinates a lot; I can't get the drive-thru tests to pass with it.
Oh, we should probably not be using it as the default then. I'll revive the convo in Slack.
@bcherry @theomonnom I can't find the Slack thread where you're discussing what the default should be. It looks like you landed on 4o. Should we have the examples and the quickstart also use 4o by default?

When we were talking about this for the gateway before launching, I remember DZ saying we should use 4.1 because OpenAI will eventually deprecate 4o. But I was also just talking to a user who said, "4o 2024-05-13 is better at tool calling, and oai models are not robotic, and the intelligence doesn’t drop with large context, all for the best trade off on latency. 4.1 and gemini 2.5 flash comes close, but the may version of 4o is the best. They even charge more for it." Lots of users have different preferences, but it was interesting to hear this user back up what he likes about 4o and call out the May version specifically.
I posted in Slack but got no replies on it: https://live-kit.slack.com/archives/C05P6DRC6MV/p1760744284787699

I don't have an opinion; I'm happy to do whatever the team thinks is the right answer, but I want to be consistent across the board and also get this PR landed!
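For anyone catching up on this thread, here is roughly the shape of the gateway-based setup being debated; the llm descriptor is the line in question, and the surrounding code is a sketch rather than the exact README contents:

```python
from livekit.agents import Agent, AgentSession, JobContext


async def entrypoint(ctx: JobContext):
    await ctx.connect()
    session = AgentSession(
        # With the inference gateway, each component is selected by a
        # "provider/model" descriptor string instead of a plugin constructor.
        stt="deepgram/nova-3",
        llm="openai/gpt-4o",  # default under discussion; "openai/gpt-4.1-mini" is the proposed alternative
        tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
    )
    await session.start(
        agent=Agent(instructions="You are a helpful voice assistant."),
        room=ctx.room,
    )
```

Whatever the team settles on, swapping the default only means changing that one llm string.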
```diff
-        tts=openai.TTS(voice="echo"),
+        stt="deepgram/nova-3",
+        llm="openai/gpt-4o",
+        tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
```
| tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2", | |
| tts="cartesia/sonic-2:9626c31c-bec5-4cca-baa8-f8ba9e84c8bc", |