
feat(ovhcloud): Add support of responses API #22902

Merged
krrishdholakia merged 3 commits into BerriAI:litellm_oss_staging_03_06_2026 from eliasto:ovhcloud/support-responses-api on Mar 6, 2026

Conversation

@eliasto
Contributor

@eliasto eliasto commented Mar 5, 2026

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added tests in the tests/test_litellm/ directory. Adding at least 1 test is a hard requirement (see details)
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible; it solves only 1 specific problem
  • I have requested a Greptile review by commenting @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable with minor issues.
  • 45-49 passing tests: acceptable, but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:

  • CI run for the last commit
    Link:

  • Merge / cherry-pick CI run
    Links:

Type

🆕 New Feature

Changes

Add support for /v1/responses on OVHcloud AI Endpoints

@vercel

vercel bot commented Mar 5, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: litellm | Deployment: Ready | Actions: Preview, Comment | Updated (UTC): Mar 5, 2026 2:04pm

Request Review

@greptile-apps
Contributor

greptile-apps bot commented Mar 5, 2026

Greptile Summary

This PR adds /v1/responses endpoint support for OVHCloud AI Endpoints by introducing OVHCloudResponsesAPIConfig, a thin subclass of OpenAIResponsesAPIConfig that wires in OVHCloud-specific auth, URL construction, and model-capability-aware parameter filtering. Registration plumbing in __init__.py, _lazy_imports_registry.py, and utils.py follows the established pattern used by other providers (Perplexity, Databricks, OpenRouter, etc.).

Key changes:

  • New litellm/llms/ovhcloud/responses/transformation.py implementing validate_environment, get_complete_url, and get_supported_openai_params with tools/tool_choice filtering based on get_model_info and debug logging for lookup failures
  • Provider config registration across three existing files
  • Unit tests covering URL construction, auth header injection, and missing-key error path
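A minimal, self-contained sketch of what such a config subclass can look like. The class and method names follow the PR summary, but the litellm base class and global-key fallbacks are stubbed out here, and the default endpoint URL is a placeholder, not OVHcloud's actual one:

```python
import os
from typing import Dict, Optional

# Placeholder default; the real OVHcloud endpoint URL is defined in the PR, not here.
DEFAULT_API_BASE = "https://example-ovhcloud-endpoint.invalid/v1"


class OVHCloudResponsesAPIConfigSketch:
    """Illustrative stand-in for the OVHCloudResponsesAPIConfig added by the PR."""

    def validate_environment(
        self, headers: Dict[str, str], model: str, api_key: Optional[str] = None
    ) -> Dict[str, str]:
        # The real code also falls back to the litellm.api_key and
        # litellm.ovhcloud_key globals before checking the env var.
        key = api_key or os.environ.get("OVHCLOUD_API_KEY")
        if not key:
            raise ValueError("OVHcloud AI Endpoints API key is required")
        headers["Authorization"] = f"Bearer {key}"
        return headers

    def get_complete_url(self, api_base: Optional[str] = None) -> str:
        # Resolve the base URL, then append the Responses API path.
        base = api_base or os.environ.get("OVHCLOUD_API_BASE") or DEFAULT_API_BASE
        return base.rstrip("/") + "/responses"
```

This mirrors the pattern other litellm providers use: auth resolution lives in validate_environment, URL construction in get_complete_url, and both stay independent of the chat config.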

Minor issue found:

  • get_complete_url includes litellm.api_base (a global setting typically used for other providers) in its fallback chain, creating inconsistent behavior compared to OVHCloudChatConfig which does not read the global. This can cause OVHCloud responses requests to silently use the wrong endpoint if litellm.api_base was set for another provider.

Confidence Score: 4/5

  • Safe to merge with one minor fix: remove the global litellm.api_base fallback to match the chat config pattern.
  • The implementation is clean, additive, and follows the established provider pattern. No backwards-incompatible changes. One logic concern exists: the global litellm.api_base fallback in get_complete_url is inconsistent with OVHCloudChatConfig, which could cause silent routing errors if that global is set for another provider. This is easily fixable and not a blocker. The test coverage verifies primary flows with properly mocked calls.
  • litellm/llms/ovhcloud/responses/transformation.py — review the api_base fallback order to exclude the global litellm.api_base
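The flagged fallback difference reduces to a small before/after sketch. The helper names and the module-level global below are illustrative, not the PR's actual code:

```python
import os
from typing import Optional

# Simulates litellm.api_base having been set globally for a different provider.
GLOBAL_API_BASE: Optional[str] = "https://other-provider.example/v1"


def resolve_with_global(api_base: Optional[str]) -> Optional[str]:
    # Behavior flagged by the review: the cross-provider global leaks into
    # OVHCloud routing whenever no explicit api_base is passed.
    return api_base or GLOBAL_API_BASE or os.environ.get("OVHCLOUD_API_BASE")


def resolve_without_global(api_base: Optional[str]) -> Optional[str]:
    # Suggested fix: skip the global, matching OVHCloudChatConfig's behavior.
    return api_base or os.environ.get("OVHCLOUD_API_BASE")
```

With the global set for another provider, the first variant silently returns the wrong endpoint while the second falls through to the provider default.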

Sequence Diagram

sequenceDiagram
    participant Caller
    participant ProviderConfigManager
    participant OVHCloudResponsesAPIConfig
    participant OpenAIResponsesAPIConfig
    participant get_model_info

    Caller->>ProviderConfigManager: get_provider_responses_api_config(model, OVHCLOUD)
    ProviderConfigManager-->>Caller: OVHCloudResponsesAPIConfig()

    Caller->>OVHCloudResponsesAPIConfig: validate_environment(headers, model, litellm_params)
    OVHCloudResponsesAPIConfig->>OVHCloudResponsesAPIConfig: resolve api_key (params → litellm.api_key → litellm.ovhcloud_key → OVHCLOUD_API_KEY env)
    OVHCloudResponsesAPIConfig-->>Caller: headers with Authorization: Bearer <key>

    Caller->>OVHCloudResponsesAPIConfig: get_complete_url(api_base, litellm_params)
    OVHCloudResponsesAPIConfig->>OVHCloudResponsesAPIConfig: resolve api_base (param → litellm.api_base → OVHCLOUD_API_BASE env → default)
    OVHCloudResponsesAPIConfig-->>Caller: https://...ovh.net/v1/responses

    Caller->>OVHCloudResponsesAPIConfig: get_supported_openai_params(model)
    OVHCloudResponsesAPIConfig->>OpenAIResponsesAPIConfig: super().get_supported_openai_params(model)
    OpenAIResponsesAPIConfig-->>OVHCloudResponsesAPIConfig: full OpenAI Responses API param list
    OVHCloudResponsesAPIConfig->>get_model_info: get_model_info(model, "ovhcloud")
    get_model_info-->>OVHCloudResponsesAPIConfig: supports_function_calling (true/false/exception)
    OVHCloudResponsesAPIConfig->>OVHCloudResponsesAPIConfig: remove tools, tool_choice if not supported
    OVHCloudResponsesAPIConfig-->>Caller: filtered param list
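The capability-aware filtering step at the end of the diagram can be sketched as follows. The parameter list is a small illustrative subset and the capability table is hypothetical; the real code calls litellm's get_model_info and logs lookup failures at debug level:

```python
from typing import List

# Subset of the OpenAI Responses API params, for illustration only.
RESPONSES_PARAMS = ["input", "model", "temperature", "max_output_tokens", "tools", "tool_choice"]

# Hypothetical capability table; the real code asks get_model_info(model, "ovhcloud").
SUPPORTS_FUNCTION_CALLING = {"gpt-oss-120b": True, "tiny-model": False}


def get_supported_openai_params_sketch(model: str) -> List[str]:
    params = list(RESPONSES_PARAMS)
    # Unknown models are treated as not supporting tools; the real code
    # logs a debug message when the model-info lookup fails.
    if not SUPPORTS_FUNCTION_CALLING.get(model, False):
        params = [p for p in params if p not in ("tools", "tool_choice")]
    return params
```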

Last reviewed commit: e3eed6e

@eliasto force-pushed the ovhcloud/support-responses-api branch from 8ec1f3f to b7b5051 on March 5, 2026 13:45
@eliasto
Contributor Author

eliasto commented Mar 5, 2026

@greptileai Do a review of my PR please

Comment on lines +73 to +88
)

assert "Authorization" in result
assert result["Authorization"] == "Bearer test-api-key-123"

def test_validate_environment_missing_api_key(self):
"""Test that validate_environment raises error when API key is missing"""
config = OVHCloudResponsesAPIConfig()

headers = {}

with pytest.raises(ValueError, match="OVHcloud AI Endpoints API key is required"):
config.validate_environment(
headers=headers,
model="ovhcloud/gpt-oss-120b",
litellm_params=None

Test may not raise due to uncleared global state

test_validate_environment_missing_api_key relies on litellm.api_key, litellm.ovhcloud_key, and the OVHCLOUD_API_KEY environment variable all being falsy. If any other test in the suite has set litellm.api_key (a common global that many providers fall back to), this test will silently pass through validate_environment without hitting the raise ValueError and pytest will fail with DID NOT RAISE.

The fix is to mock/patch the global keys and the secret getter for the duration of this test:

import litellm
import pytest
from unittest.mock import patch

def test_validate_environment_missing_api_key(self):
    """Test that validate_environment raises error when API key is missing"""
    config = OVHCloudResponsesAPIConfig()
    headers = {}

    with (
        patch.object(litellm, "api_key", None),
        patch.object(litellm, "ovhcloud_key", None),
        patch("litellm.llms.ovhcloud.responses.transformation.get_secret_str", return_value=None),
    ):
        with pytest.raises(ValueError, match="OVHcloud AI Endpoints API key is required"):
            config.validate_environment(
                headers=headers,
                model="ovhcloud/gpt-oss-120b",
                litellm_params=None,
            )

@krrishdholakia krrishdholakia changed the base branch from main to litellm_oss_staging_03_06_2026 March 6, 2026 04:42
@krrishdholakia krrishdholakia merged commit 3c95588 into BerriAI:litellm_oss_staging_03_06_2026 Mar 6, 2026
28 of 38 checks passed
