WebSearchTool no LifeCycle Events on Agent via AgentHook subclass #1889

@yvanaquino-alchemy

Description

Please read this first

Have you read the docs? Yes - Agents SDK docs
Have you searched for related issues? Yes - found issue #778 which mentions similar problems in earlier versions, but this persists in v0.3.3

Describe the bug

The on_tool_start and on_tool_end lifecycle hooks in AgentHooks are not being invoked when tools (specifically WebSearchTool) are executed by an agent. LLM lifecycle hooks (on_llm_start and on_llm_end) work correctly, but tool lifecycle hooks are completely silent with no output or errors.
The tools themselves are being executed successfully (web search results appear in agent responses), but the lifecycle events are never triggered. This makes it impossible to track, log, or intercept tool execution in custom hooks.
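For context, one plausible explanation (an assumption on my part, not confirmed against the SDK source) is that hosted tools such as WebSearchTool are executed server-side by the Responses API, so the local run loop never has a tool invocation to wrap in hooks, whereas locally executed function tools do pass through the dispatch path. A minimal plain-Python sketch of that dispatch pattern, with entirely hypothetical names (this is not the SDK's actual code):

```python
# Hypothetical sketch: a run loop that only fires tool hooks for
# locally executed tools. Hosted tools run remotely, so the loop
# never sees a call site to wrap. Names are illustrative only.

class Hooks:
    def on_tool_start(self, tool_name):
        print(f"TOOL START: {tool_name}")

    def on_tool_end(self, tool_name, result):
        print(f"TOOL END: {tool_name} -> {result}")

class LocalTool:
    hosted = False
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

class HostedTool:
    hosted = True  # executed remotely; no local call site
    def __init__(self, name):
        self.name = name

def run_tools(tools, hooks):
    results = []
    for tool in tools:
        if tool.hosted:
            # Remote execution: the result arrives inside the model
            # response, so no local hook is ever invoked.
            results.append(f"<{tool.name} handled server-side>")
            continue
        hooks.on_tool_start(tool.name)
        result = tool.fn()
        hooks.on_tool_end(tool.name, result)
        results.append(result)
    return results

hooks = Hooks()
out = run_tools([LocalTool("add", lambda: 3), HostedTool("web_search")], hooks)
```

If this is the cause, the hooks would fire for a @function_tool in the same agent while staying silent for WebSearchTool, which matches the observed output.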

Debug information

Agents SDK version: v0.3.3
Python version: Python 3.12
OpenAI version: 1.109.1

Repro steps

import asyncio
from agents import Agent, AgentHooks, Runner, WebSearchTool, RunContextWrapper, TContext, Tool
from agents.lifecycle import TAgent
from agents.models.openai_responses import OpenAIResponsesModel
from openai import AsyncOpenAI
from typing import Optional

class TestHooks(AgentHooks):
    async def on_llm_start(self, context: RunContextWrapper[TContext], 
                          agent: Agent[TContext], system_prompt: Optional[str], 
                          input_items: list) -> None:
        print("✅ LLM START - This prints correctly")
        await super().on_llm_start(context, agent, system_prompt, input_items)

    async def on_llm_end(self, context: RunContextWrapper[TContext], 
                        agent: Agent[TContext], response) -> None:
        print("✅ LLM END - This prints correctly")
        await super().on_llm_end(context, agent, response)

    async def on_tool_start(self, context: RunContextWrapper[TContext], 
                           agent: TAgent, tool: Tool) -> None:
        print(f"❌ TOOL START - This never prints: {tool.name}")
        await super().on_tool_start(context, agent, tool)

    async def on_tool_end(self, context: RunContextWrapper[TContext], 
                         agent: TAgent, tool: Tool, result: str) -> None:
        print(f"❌ TOOL END - This never prints: {tool.name}")
        await super().on_tool_end(context, agent, tool, result)

async def main():
    client = AsyncOpenAI(api_key='your-api-key')
    model = OpenAIResponsesModel(model='gpt-4o-mini', openai_client=client)
    
    agent = Agent(
        model=model,
        name="TestAgent",
        instructions="Use web search to answer the question.",
        hooks=TestHooks(),
        tools=[WebSearchTool()]
    )
    
    response = await Runner.run(agent, "What is the capital of France?")
    print(response.final_output)

asyncio.run(main())

Output:

✅ LLM START - This prints correctly
✅ LLM END - This prints correctly
✅ LLM START - This prints correctly
✅ LLM END - This prints correctly
Paris

Notice: the tool hooks never print, even though the web search is performed (the agent correctly answers using web data).

Expected behavior

The on_tool_start and on_tool_end hooks should be invoked when the agent executes the WebSearchTool (or any tool), just as on_llm_start and on_llm_end are invoked around LLM calls. This would allow developers to monitor, log, and intercept tool execution in their custom hook implementations.

Expected output:

✅ LLM START - This prints correctly
❌ TOOL START - This never prints: web_search
❌ TOOL END - This never prints: web_search
✅ LLM END - This prints correctly
Paris

Additional context: This appears to be related to issue #778, but that was reportedly fixed in earlier versions. The problem still exists in v0.3.3, suggesting either a regression or incomplete fix.
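Until the hooks fire, one possible workaround (hedged: I have not verified this against v0.3.3, and the item class name is from memory) is to scan the run result's new_items for tool-call items after the run completes. A plain-Python mock of that post-hoc scan, with simplified stand-in item types rather than the SDK's real ones:

```python
# Mock of post-hoc tool-call detection. The real SDK exposes
# result.new_items; the item shapes below are simplified stand-ins.
from dataclasses import dataclass

@dataclass
class ToolCallItem:
    tool_name: str

@dataclass
class MessageItem:
    text: str

def tool_calls_in(items):
    """Return the names of tools invoked during a run."""
    return [i.tool_name for i in items if isinstance(i, ToolCallItem)]

items = [ToolCallItem("web_search"), MessageItem("Paris")]
names = tool_calls_in(items)
```

This only detects tool use after the fact; it cannot intercept a call in flight the way on_tool_start could.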
