ElevenLabs integration #18967
Conversation
```python
    client,
    AGENT_ID,
    requires_auth=bool(API_KEY),
    audio_interface=DefaultAudioInterface(),
```
Just thinking for the future, we might need to design our own (or adopt an existing) audio interface that works across voice integrations
With the callback functions defined in utils, we already do use AudioBlock :)
I'm planning on scaling the support across the various integrations
```python
conversation.start_session()

# RUN UNTIL CTRL+C IS HIT
signal.signal(signal.SIGINT, lambda sig, frame: conversation.end_session())
```
I think a more useful demo might show a `while True` loop with some kind of exit condition? If I were to use this in my own script, that's probably what I would do
Makes sense! I'll try to implement it :))
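For reference, a minimal sketch of the suggested pattern, assuming a hypothetical session object; the stub below stands in for the real ElevenLabs `Conversation`, and all names here are illustrative:

```python
# Sketch of the reviewer's suggestion: an explicit loop with an exit
# condition instead of a SIGINT handler. StubConversation is a stand-in
# for the real ElevenLabs Conversation object.
class StubConversation:
    def __init__(self):
        self.active = False

    def start_session(self):
        self.active = True

    def end_session(self):
        self.active = False


def run_until_quit(conversation, commands):
    """Drive the session until a 'quit' command (or the input runs out)."""
    conversation.start_session()
    try:
        for command in commands:  # in a real script: while True: input()
            if command.strip().lower() == "quit":
                break
    finally:
        # The session is always closed, even on an unexpected error.
        conversation.end_session()


convo = StubConversation()
run_until_quit(convo, ["hello", "quit"])
print(convo.active)  # prints False: the session was ended cleanly
```

The `try`/`finally` mirrors what the SIGINT handler was buying: the session is torn down no matter how the loop exits.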
```python
        self._conversation_id = None
        self._last_interrupt_id = 0

    def _handle_message(self, message, ws):
```
Missing type annotations on functions (even if we don't use mypy in CI/CD, I've started to use mypy locally to sanity-check my new code haha)
Oh shoot thanks a lot, adding them right away!
```python
            if event["ping_ms"] is None:
                event["ping_ms"] = 0
            self._callback_latency_measurement(self._latencies, int(event["ping_ms"]))
        elif message["type"] == "client_tool_call":
```
Do tool calls need to be in the chat history as the conversation runs, or are they already? (Just trying to follow the logic of what the voice agent actually sees.)
As far as I know, they're not in the chat history of ElevenLabs, that's why I stored them in the LlamaIndex-formatted one as text
But I'm happy to give it a second look :)
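The pattern described — mirroring tool calls into the LlamaIndex-formatted history as text — could be sketched like this. `ChatMessage` below is a minimal local stand-in for the real `llama_index.core.llms.ChatMessage` so the snippet runs standalone, and `record_tool_call` is a hypothetical helper, not the PR's actual code:

```python
from dataclasses import dataclass


# Minimal stand-in for llama_index's ChatMessage (role + content); the
# integration would import the real class instead.
@dataclass
class ChatMessage:
    role: str
    content: str


def record_tool_call(history, tool_name, params):
    # ElevenLabs' own transcript may not retain tool calls, so mirror
    # them into the LlamaIndex-formatted history as assistant text.
    history.append(
        ChatMessage(
            role="assistant",
            content=f"Calling tool {tool_name} with parameters {params}",
        )
    )


history = []
record_tool_call(history, "get_weather", {"city": "Paris"})
print(history[0].content)
```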
I think this is fine to merge as a beta.
So far from reading this and my experience from openai, some ideas for what a base class should provide:
- voice APIs often operate on events -- should we design our own event schema that works across providers? I.e. interrupt, tool call, etc.?
- the object for voice/device input/output should be common across all providers if we can
- the methods/UX for starting/ending a session should be common across all providers
- since voice APIs commonly include tool-calling, we can probably brand these "llama-index voice agents"
- the concept of "handling events" reminds me of workflows honestly. I'm not sure if there's a tie in there or not, but maybe worth thinking about (should the base class be a workflow? Not sure lol)
I agree on pretty much everything, and as the voice-agent integrations progress I'll look into the possibility of subclassing Workflows, which seems interesting! I'll just push some changes related to the example and re-version it as beta
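As a thought experiment on the event-schema point above, a provider-agnostic layer might look roughly like this; every name and the message-type mapping below are assumptions for illustration, not an actual design:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any


# Shared event vocabulary across providers (interrupt, tool call, etc.).
class VoiceEventType(Enum):
    AUDIO = auto()
    INTERRUPT = auto()
    TOOL_CALL = auto()
    LATENCY = auto()


@dataclass
class VoiceEvent:
    type: VoiceEventType
    payload: dict[str, Any] = field(default_factory=dict)


def normalize_elevenlabs(message: dict) -> VoiceEvent:
    # Each provider integration would map its raw websocket messages
    # onto the shared schema; this particular mapping is only a guess.
    mapping = {
        "audio": VoiceEventType.AUDIO,
        "interruption": VoiceEventType.INTERRUPT,
        "client_tool_call": VoiceEventType.TOOL_CALL,
        "ping": VoiceEventType.LATENCY,
    }
    return VoiceEvent(type=mapping[message["type"]], payload=message)


event = normalize_elevenlabs({"type": "client_tool_call", "tool_name": "search"})
print(event.type.name)  # prints TOOL_CALL
```

A base class could then expose `start_session`/`end_session` plus a single `handle_event(VoiceEvent)` hook, which is also where the resemblance to Workflows comes in.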
Description
This package adds a LlamaIndex wrapper for ElevenLabs that allows users to chat with their ElevenLabs Agents in real time, recording their interactions with the agent as LlamaIndex `ChatMessage`s.

New Package?
Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?

Version Bump?
Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull-request will likely not be merged unless it is covered by some form of impactful unit testing.
Suggested Checklist:
Ran `uv run make format; uv run make lint` to appease the lint gods