
Conversation

@theomonnom
Member

No description provided.

@theomonnom requested a review from davidzhao on October 16, 2025 at 06:07
cli,
function_tool,
)
from livekit.plugins import deepgram, elevenlabs, openai, silero
Contributor


you can remove this

Comment on lines 115 to 119
You'll need the following environment variables for this example:

- DEEPGRAM_API_KEY
- OPENAI_API_KEY
- ELEVEN_API_KEY
Contributor


remove this

llm=openai.LLM(model="gpt-4o-mini"),
tts=openai.TTS(voice="echo"),
stt="deepgram/nova-3",
llm="openai/gpt-4o",
Contributor


we're recommending openai/gpt-4.1-mini as the new default

Member Author

@theomonnom Oct 17, 2025


This model hallucinates a lot; I can't get the drive-thru tests to pass

Contributor


oh we should probably not be using it as default then - will revive the convo in slack


@bcherry @theomonnom I can't find the slack thread where you're discussing what the default should be. Looks like you landed on 4o. Should we have the examples and the quickstart also use 4o by default?

When we were talking about this for the gateway before launching, I remember DZ saying we should use 4.1 because OAI will eventually deprecate 4o. But I was also just talking to a user who said, "4o 2024-05-13 is better at tool calling, and oai models are not robotic, and the intelligence doesn't drop with large context, all for the best trade-off on latency. 4.1 and gemini 2.5 flash come close, but the May version of 4o is the best. They even charge more for it." Lots of users have different preferences, but it was interesting to hear this user back up what he likes about 4o and single out the May version specifically.


Contributor


I posted in Slack but got no replies: https://live-kit.slack.com/archives/C05P6DRC6MV/p1760744284787699

I don't have an opinion; happy to do whatever the team thinks is the right answer, but I want to be consistent across the board and also get this PR landed!
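
For anyone following this thread from the diff above: the change swaps explicit plugin constructors for descriptor strings. Below is a minimal side-by-side sketch, assuming AgentSession accepts either form as these hunks suggest (the parameter shapes are inferred from the diff, not confirmed against the final API):

from livekit.agents import AgentSession
from livekit.plugins import deepgram, openai, silero

# Old style: construct each plugin explicitly and import every provider.
session_plugins = AgentSession(
    stt=deepgram.STT(model="nova-3"),
    llm=openai.LLM(model="gpt-4o-mini"),
    tts=openai.TTS(voice="echo"),
    vad=silero.VAD.load(),
)

# New style in this PR: "provider/model" descriptor strings, no per-provider imports.
session_strings = AgentSession(
    stt="deepgram/nova-3",
    llm="openai/gpt-4o",
    tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
)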

tts=openai.TTS(voice="echo"),
stt="deepgram/nova-3",
llm="openai/gpt-4o",
tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
Contributor


Suggested change:
- tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
+ tts="cartesia/sonic-2:9626c31c-bec5-4cca-baa8-f8ba9e84c8bc",
