llm-compiler support stream_step/astream_step #18809
Hey @linshibo, I added some small corrections. You can try it yourself, with and without this last fix that I added, using the following code for the async method:

```python
from llama_index.agent.llm_compiler import LLMCompilerAgentWorker
from llama_index.core.agent import AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.llms.anthropic import Anthropic
from llama_index.llms.groq import Groq
from llama_index.llms.google_genai import GoogleGenAI
from typing import List
from llama_index.core.agent.types import TaskStep


def get_weather(location: str) -> str:
    """
    Useful to get the weather of a location.

    Args:
        location (str): The location to get the weather from

    Returns:
        str: the description of the weather for the given location
    """
    return f"Weather for {location}: Cloudy and windy, with 7°C of min temperature and 15°C of max temperature. Humidity at 70%, precipitation probability at 55%."


def get_local_time(location: str) -> str:
    """
    Useful to get the local time of a location.

    Args:
        location (str): The location to get the local time from

    Returns:
        str: the time at a given location
    """
    return f"The current time in {location} is: 12.45.00"


async def main():
    openai = OpenAI(model="gpt-4.1")
    gemini = GoogleGenAI(model="gemini-2.0-flash")
    llama = Groq(model="llama-3.3-70b-versatile")
    claude = Anthropic(model="claude-sonnet-4-20250514")
    llms: List[GoogleGenAI | OpenAI | Groq | Anthropic] = [openai, gemini, llama, claude]
    for llm in llms:
        print(f"========{llm.model.upper()}========")
        agent_worker = LLMCompilerAgentWorker.from_tools(
            tools=[FunctionTool.from_defaults(get_weather), FunctionTool.from_defaults(get_local_time)],
            llm=llm,
            verbose=True,
        )
        agent_runner = AgentRunner(agent_worker=agent_worker)
        task = agent_runner.create_task("What is the weather in New York? What is the local time there?")
        step = TaskStep(task_id=task.task_id, step_id="1", input=task.input)
        await agent_worker.astream_step(task=task, step=step)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```

And this code for the sync method:

```python
from llama_index.agent.llm_compiler import LLMCompilerAgentWorker
from llama_index.core.agent import AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.llms.anthropic import Anthropic
from llama_index.llms.groq import Groq
from llama_index.llms.google_genai import GoogleGenAI
from typing import List
from llama_index.core.agent.types import TaskStep


def get_weather(location: str) -> str:
    """
    Useful to get the weather of a location.

    Args:
        location (str): The location to get the weather from

    Returns:
        str: the description of the weather for the given location
    """
    return f"Weather for {location}: Cloudy and windy, with 7°C of min temperature and 15°C of max temperature. Humidity at 70%, precipitation probability at 55%."


def get_local_time(location: str) -> str:
    """
    Useful to get the local time of a location.

    Args:
        location (str): The location to get the local time from

    Returns:
        str: the time at a given location
    """
    return f"The current time in {location} is: 12.45.00"


def main():
    openai = OpenAI(model="gpt-4.1")
    gemini = GoogleGenAI(model="gemini-2.0-flash")
    llama = Groq(model="llama-3.3-70b-versatile")
    claude = Anthropic(model="claude-sonnet-4-20250514")
    llms: List[GoogleGenAI | OpenAI | Groq | Anthropic] = [openai, gemini, llama, claude]
    for llm in llms:
        print(f"========{llm.model.upper()}========")
        agent_worker = LLMCompilerAgentWorker.from_tools(
            tools=[FunctionTool.from_defaults(get_weather), FunctionTool.from_defaults(get_local_time)],
            llm=llm,
            verbose=True,
        )
        agent_runner = AgentRunner(agent_worker=agent_worker)
        task = agent_runner.create_task("What is the weather in New York? What is the local time there?")
        step = TaskStep(task_id=task.task_id, step_id="1", input=task.input)
        agent_worker.stream_step(task=task, step=step)


if __name__ == "__main__":
    main()
```
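Side note: to actually read the streamed tokens, here is a minimal consumption sketch continuing from the snippets above. It assumes that `stream_step`/`astream_step` return a `TaskStepOutput` whose `output` is a `StreamingAgentChatResponse`, as in the core agent interfaces; that part is not shown in this PR.

```python
# Minimal consumption sketch, continuing from the snippets above.
# Assumption: stream_step/astream_step return a TaskStepOutput whose
# .output is a StreamingAgentChatResponse (sync + async token generators).
step_output = agent_worker.stream_step(task=task, step=step)
for token in step_output.output.response_gen:  # blocks until tokens arrive
    print(token, end="", flush=True)

# Async variant (inside an async def):
step_output = await agent_worker.astream_step(task=task, step=step)
async for token in step_output.output.async_response_gen():
    print(token, end="", flush=True)
```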
AstraBert left a comment:
If there is no further comment, I'd say this is lgtm!
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
New Package?
Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?

Version Bump?
Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull-request will likely not be merged unless it is covered by some form of impactful unit testing.
Suggested Checklist:
uv run make format; uv run make lint to appease the lint gods
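As a minimal illustration of the kind of unit test that could back this change, here is a sketch under the assumption that CI has no live LLM credentials, so it only checks that the streaming entry points are actually overridden (this is not the PR's actual test suite):

```python
# Hedged sketch, not the PR's real tests: with no live LLM available,
# assert that LLMCompilerAgentWorker overrides the streaming entry
# points instead of inheriting the base-class stubs.
from llama_index.agent.llm_compiler import LLMCompilerAgentWorker
from llama_index.core.agent.types import BaseAgentWorker


def test_streaming_steps_are_overridden():
    assert LLMCompilerAgentWorker.stream_step is not BaseAgentWorker.stream_step
    assert LLMCompilerAgentWorker.astream_step is not BaseAgentWorker.astream_step
```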