Conversation

@linshibo (Contributor) commented May 22, 2025

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

New Package?

Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?

  • Yes
  • No

Version Bump?

Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)

  • Yes
  • No

Type of Change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Your pull request will likely not be merged unless it is covered by some form of impactful unit testing.

  • I added new unit tests to cover this change
  • I believe this change is already covered by existing unit tests

Suggested Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added Google Colab support for the newly added notebooks.
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran uv run make format; uv run make lint to appease the lint gods

@dosubot added the size:L label (This PR changes 100-499 lines, ignoring generated files) on May 22, 2025
@AstraBert self-assigned this on May 23, 2025
@AstraBert self-requested a review on May 23, 2025 at 11:45
@AstraBert (Member) commented:

Hey @linshibo

I added some small corrections to the way _run_step, _run_step_stream, and _arun_step_stream read values from a dictionary: instead of indexing directly (dict[key]), the code now checks whether the key exists in the step.step_state dictionary and, if it does not, adds it first, so the code no longer crashes when the key is missing :))
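
To make the pattern concrete, here is a minimal standalone sketch of the defensive lookup described above (the dictionary and key names are illustrative, not the exact ones touched in this PR):

step_state: dict = {}

# Direct indexing raises KeyError when the key was never initialized:
# n_calls = step_state["n_function_calls"]

# Check the key and initialize it if missing, then read it:
if "n_function_calls" not in step_state:
    step_state["n_function_calls"] = 0
n_calls = step_state["n_function_calls"]
print(n_calls)  # 0

The same effect could be had with step_state.setdefault("n_function_calls", 0), but the explicit check mirrors the description above.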

You can try it yourself with and without this last fix that I added, using the following code for the async method:

from llama_index.agent.llm_compiler import LLMCompilerAgentWorker
from llama_index.core.agent import AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.llms.anthropic import Anthropic
from llama_index.llms.groq import Groq
from llama_index.llms.google_genai import GoogleGenAI
from typing import List
from llama_index.core.agent.types import TaskStep

def get_weather(location: str) -> str:
    """
    Useful to get the weather of a location.
    
    Args:
        location (str): The location to get the weather from
    
    Returns:
        str: the description of the weather for the given location
    """
    return f"Weather for {location}: Cloudy and windy, with 7°C of min temperature and 15°C of max temperature. Humidity at 70%, precipitation probability at 55%."

def get_local_time(location: str) -> str:
    """
    Useful to get the local time of a location.
    
    Args:
        location (str): The location to get the local time for
    
    Returns:
        str: the time at a given location
    """
    return f"The current time in {location} is: 12.45.00"


async def main():
    openai = OpenAI(model="gpt-4.1")
    gemini = GoogleGenAI(model="gemini-2.0-flash")
    llama = Groq(model="llama-3.3-70b-versatile")
    claude = Anthropic(model="claude-sonnet-4-20250514")
    llms: List[GoogleGenAI | OpenAI | Groq | Anthropic] = [openai, gemini, llama, claude]

    for llm in llms:
        print(f"========{llm.model.upper()}========")
        agent_worker = LLMCompilerAgentWorker.from_tools(
            tools=[FunctionTool.from_defaults(get_weather), FunctionTool.from_defaults(get_local_time)], llm=llm, verbose=True,
        )
        agent_runner = AgentRunner(agent_worker=agent_worker)
        task = agent_runner.create_task("What is the weather in New York? What is the local time there?")
        step = TaskStep(task_id=task.task_id, step_id="1", input=task.input)
        await agent_worker.astream_step(task=task, step=step)


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())            

And this code for the sync method:

from llama_index.agent.llm_compiler import LLMCompilerAgentWorker
from llama_index.core.agent import AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.llms.anthropic import Anthropic
from llama_index.llms.groq import Groq
from llama_index.llms.google_genai import GoogleGenAI
from typing import List
from llama_index.core.agent.types import TaskStep

def get_weather(location: str) -> str:
    """
    Useful to get the weather of a location.
    
    Args:
        location (str): The location to get the weather from
    
    Returns:
        str: the description of the weather for the given location
    """
    return f"Weather for {location}: Cloudy and windy, with 7°C of min temperature and 15°C of max temperature. Humidity at 70%, precipitation probability at 55%."

def get_local_time(location: str) -> str:
    """
    Useful to get the local time of a location.
    
    Args:
        location (str): The location to get the local time for
    
    Returns:
        str: the time at a given location
    """
    return f"The current time in {location} is: 12.45.00"


def main():
    openai = OpenAI(model="gpt-4.1")
    gemini = GoogleGenAI(model="gemini-2.0-flash")
    llama = Groq(model="llama-3.3-70b-versatile")
    claude = Anthropic(model="claude-sonnet-4-20250514")
    llms: List[GoogleGenAI | OpenAI | Groq | Anthropic] = [openai, gemini, llama, claude]

    for llm in llms:
        print(f"========{llm.model.upper()}========")
        agent_worker = LLMCompilerAgentWorker.from_tools(
            tools=[FunctionTool.from_defaults(get_weather), FunctionTool.from_defaults(get_local_time)], llm=llm, verbose=True,
        )
        agent_runner = AgentRunner(agent_worker=agent_worker)
        task = agent_runner.create_task("What is the weather in New York? What is the local time there?")
        step = TaskStep(task_id=task.task_id, step_id="1", input=task.input)
        agent_worker.stream_step(task=task, step=step)


if __name__ == "__main__":
    main()          

@AstraBert (Member) left a comment:

If there are no further comments, I'd say this is lgtm!

@dosubot added the lgtm label (This PR has been approved by a maintainer) on May 26, 2025
@AstraBert merged commit 0aa57ab into run-llama:main on May 26, 2025
9 of 10 checks passed
@colca mentioned this pull request on Jun 9, 2025