Handle error response from OpenRouter as exception instead of validation failure #2323

@R0boji

Initial Checks

Description

It seems that the OpenRouter provider doesn't work as expected: it raises the error that was added via #2247 to fix #2238. I think something changed in the OpenRouter API that causes this.

Traceback (most recent call last):
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\models\openai.py", line 380, in _process_response
    response = chat.ChatCompletion.model_validate(response.model_dump())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic\main.py", line 705, in model_validate
    return cls.__pydantic_validator__.validate_python(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 4 validation errors for ChatCompletion
id
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
choices
  Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/list_type
model
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
object
  Input should be 'chat.completion' [type=literal_error, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/literal_error

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "d:\programming\python\test.py", line 52, in <module>
    print(fast_agent.run_sync("Give me servers' info").output)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\agent.py", line 987, in run_sync
    return get_event_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 684, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\agent.py", line 562, in run
    async for _ in agent_run:
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\agent.py", line 2173, in __anext__
    next_node = await self._graph_run.__anext__()
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_graph\graph.py", line 809, in __anext__
    return await self.next(self._next_node)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_graph\graph.py", line 782, in next
    self._next_node = await node.run(ctx)
                      ^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\_agent_graph.py", line 299, in run
    return await self._make_request(ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\_agent_graph.py", line 359, in _make_request
    model_response = await ctx.deps.model.request(message_history, model_settings, model_request_parameters)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\models\openai.py", line 247, in request
    model_response = self._process_response(response)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\programming\python\.venv\Lib\site-packages\pydantic_ai\models\openai.py", line 382, in _process_response
    raise UnexpectedModelBehavior(f'Invalid response from OpenAI chat completions endpoint: {e}') from e
pydantic_ai.exceptions.UnexpectedModelBehavior: Invalid response from OpenAI chat completions endpoint: 4 validation errors for ChatCompletion       
id
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
choices
  Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/list_type
model
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/string_type
object
  Input should be 'chat.completion' [type=literal_error, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/literal_error
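
In other words, the provider should detect the error payload before validating it as a ChatCompletion and raise a clear exception instead. Below is a minimal sketch of that check, assuming OpenRouter reports failures as a JSON body with an "error" object carrying "code" and "message" (those field names are my assumption about OpenRouter's error format, not confirmed API):

from pydantic_ai.exceptions import UnexpectedModelBehavior


def raise_if_openrouter_error(payload: dict) -> None:
    """Raise a descriptive exception if the payload looks like an OpenRouter error response.

    Sketch only: the "error"/"code"/"message" keys are assumed, not confirmed.
    """
    error = payload.get("error")
    if error is None:
        return  # looks like a normal chat.completion payload
    code = error.get("code")
    message = error.get("message", "unknown error")
    raise UnexpectedModelBehavior(
        f"OpenRouter returned an error response (code={code}): {message}"
    )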

Example Code

import asyncio
import os

from dotenv import load_dotenv
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openrouter import OpenRouterProvider

load_dotenv()

fast_model = OpenAIModel(
    "deepseek/deepseek-chat-v3-0324:free",
    provider=OpenRouterProvider(api_key=os.getenv("OPENROUTER_API_KEY"))
)

fast_agent = Agent(fast_model)

@fast_agent.tool_plain
def get_servers_status() -> list[dict]:
    """Get main servers' characteristics and resources, that are available to use."""
    return [
        {
            "id": 1,
            "location": "Berlin, Germany",
            "CPU": "AMD Ryzen 7800X3D",
            "CPU_FREQUENCY-GHZ": "3",
            "CPU_CORES": 16,
            "HDD-GB": 1024,
            "SSD-GB": 100,
            "RAM-MB": 20480,
            "RAM_TYPE": "DDR3"
        },
        {
            "id": 2,
            "location": "London, United Kingdom",
            "CPU": "Intel Pentium Gold",
            "CPU_FREQUENCY-GHZ": "1",
            "CPU_CORES": 2,
            "HDD-GB": 200,
            "SSD-GB": 5000,
            "RAM-MB": 40960,
            "RAM_TYPE": "DDR6"
        }
    ]
    
async def main():
    print((await fast_agent.run("Give me servers' info")).output)


if __name__ == "__main__":
    print(fast_agent.run_sync("Give me servers' info").output)
    # asyncio.run(main())
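
Until the provider handles this, a workaround is to catch the exception around the call (a sketch, not part of the reproduction above):

from pydantic_ai.exceptions import UnexpectedModelBehavior

try:
    result = fast_agent.run_sync("Give me servers' info")
    print(result.output)
except UnexpectedModelBehavior as exc:
    # The original ValidationError is chained as exc.__cause__ (see the traceback above).
    print(f"Model request failed: {exc}")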

Python, Pydantic AI & LLM client version

Python 3.12.1, Pydantic AI 0.4.7
