Merged
10 changes: 10 additions & 0 deletions docs/my-website/docs/proxy/config_settings.md
@@ -452,6 +452,8 @@ router_settings:
| BERRISPEND_ACCOUNT_ID | Account ID for BerriSpend service
| BRAINTRUST_API_KEY | API key for Braintrust integration
| BRAINTRUST_API_BASE | Base URL for Braintrust API. Default is https://api.braintrustdata.com/v1
| BRAINTRUST_MOCK | Enable mock mode for Braintrust integration testing. When set to true, intercepts Braintrust API calls and returns mock responses without making actual network calls. Default is false
| BRAINTRUST_MOCK_LATENCY_MS | Mock latency in milliseconds for Braintrust API calls when mock mode is enabled. Simulates network round-trip time. Default is 100ms
| CACHED_STREAMING_CHUNK_DELAY | Delay in seconds for cached streaming chunks. Default is 0.02
| CHATGPT_API_BASE | Base URL for ChatGPT API. Default is https://chatgpt.com/backend-api/codex
| CHATGPT_AUTH_FILE | Filename for ChatGPT authentication data. Default is "auth.json"
@@ -511,6 +513,8 @@ router_settings:
| DD_ENV | Environment identifier for Datadog logs. Only supported for `datadog_llm_observability` callback
| DD_SERVICE | Service identifier for Datadog logs. Defaults to "litellm-server"
| DD_VERSION | Version identifier for Datadog logs. Defaults to "unknown"
| DATADOG_MOCK | Enable mock mode for Datadog integration testing. When set to true, intercepts Datadog API calls and returns mock responses without making actual network calls. Default is false
| DATADOG_MOCK_LATENCY_MS | Mock latency in milliseconds for Datadog API calls when mock mode is enabled. Simulates network round-trip time. Default is 100ms
| DEBUG_OTEL | Enable debug mode for OpenTelemetry
| DEFAULT_ALLOWED_FAILS | Maximum failures allowed before cooling down a model. Default is 3
| DEFAULT_A2A_AGENT_TIMEOUT | Default timeout in seconds for A2A (Agent-to-Agent) protocol requests. Default is 6000
@@ -675,6 +679,8 @@ router_settings:
| HCP_VAULT_CERT_ROLE | Role for [Hashicorp Vault Secret Manager Auth](../secret.md#hashicorp-vault)
| HELICONE_API_KEY | API key for Helicone service
| HELICONE_API_BASE | Base URL for Helicone service, defaults to `https://api.helicone.ai`
| HELICONE_MOCK | Enable mock mode for Helicone integration testing. When set to true, intercepts Helicone API calls and returns mock responses without making actual network calls. Default is false
| HELICONE_MOCK_LATENCY_MS | Mock latency in milliseconds for Helicone API calls when mock mode is enabled. Simulates network round-trip time. Default is 100ms
| HOSTNAME | Hostname for the server, this will be [emitted to `datadog` logs](https://docs.litellm.ai/docs/proxy/logging#datadog)
| HOURS_IN_A_DAY | Hours in a day for calculation purposes. Default is 24
| HIDDENLAYER_API_BASE | Base URL for HiddenLayer API. Defaults to `https://api.hiddenlayer.ai`
@@ -713,6 +719,8 @@ router_settings:
| LANGSMITH_PROJECT | Project name for Langsmith integration
| LANGSMITH_SAMPLING_RATE | Sampling rate for Langsmith logging
| LANGSMITH_TENANT_ID | Tenant ID for Langsmith multi-tenant deployments
| LANGSMITH_MOCK | Enable mock mode for Langsmith integration testing. When set to true, intercepts Langsmith API calls and returns mock responses without making actual network calls. Default is false
| LANGSMITH_MOCK_LATENCY_MS | Mock latency in milliseconds for Langsmith API calls when mock mode is enabled. Simulates network round-trip time. Default is 100ms
| LANGTRACE_API_KEY | API key for Langtrace service
| LASSO_API_BASE | Base URL for Lasso API
| LASSO_API_KEY | API key for Lasso service
@@ -845,6 +853,8 @@ router_settings:
| POD_NAME | Pod name for the server, this will be [emitted to `datadog` logs](https://docs.litellm.ai/docs/proxy/logging#datadog) as `POD_NAME`
| POSTHOG_API_KEY | API key for PostHog analytics integration
| POSTHOG_API_URL | Base URL for PostHog API (defaults to https://us.i.posthog.com)
| POSTHOG_MOCK | Enable mock mode for PostHog integration testing. When set to true, intercepts PostHog API calls and returns mock responses without making actual network calls. Default is false
| POSTHOG_MOCK_LATENCY_MS | Mock latency in milliseconds for PostHog API calls when mock mode is enabled. Simulates network round-trip time. Default is 100ms
| PREDIBASE_API_BASE | Base URL for Predibase API
| PRESIDIO_ANALYZER_API_BASE | Base URL for Presidio Analyzer service
| PRESIDIO_ANONYMIZER_API_BASE | Base URL for Presidio Anonymizer service
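The four `*_MOCK` / `*_MOCK_LATENCY_MS` pairs documented above all follow the same convention. A minimal sketch of how such flags are typically parsed — helper names here are illustrative, not the actual API; the real parsing lives in `mock_client_factory`:

```python
import os

# Illustrative only: exercises the documented env-var convention.
os.environ["DATADOG_MOCK"] = "true"
os.environ["DATADOG_MOCK_LATENCY_MS"] = "50"

def mock_enabled(env_var: str) -> bool:
    """Treat 'true'/'1'/'yes' (case-insensitive) as enabled; default is off."""
    return os.getenv(env_var, "false").lower() in ("true", "1", "yes")

def mock_latency_seconds(env_var: str, default_ms: int = 100) -> float:
    """Convert the millisecond env var to seconds, defaulting to 100 ms."""
    return float(os.getenv(env_var, str(default_ms))) / 1000.0

print(mock_enabled("DATADOG_MOCK"), mock_latency_seconds("DATADOG_MOCK_LATENCY_MS"))  # → True 0.05
```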
12 changes: 12 additions & 0 deletions litellm/integrations/braintrust_logging.py
@@ -9,6 +9,10 @@

import litellm
from litellm import verbose_logger
from litellm.integrations.braintrust_mock_client import (
should_use_braintrust_mock,
create_mock_braintrust_client,
)
from litellm.integrations.custom_logger import CustomLogger
from litellm.llms.custom_httpx.http_handler import (
HTTPHandler,
@@ -34,6 +38,10 @@ def __init__(
self, api_key: Optional[str] = None, api_base: Optional[str] = None
) -> None:
super().__init__()
self.is_mock_mode = should_use_braintrust_mock()
if self.is_mock_mode:
create_mock_braintrust_client()
verbose_logger.info("[BRAINTRUST MOCK] Braintrust logger initialized in mock mode")
self.validate_environment(api_key=api_key)
self.api_base = api_base or os.getenv("BRAINTRUST_API_BASE") or API_BASE
self.default_project_id = None
@@ -254,6 +262,8 @@ def log_success_event(  # noqa: PLR0915
json={"events": [request_data]},
headers=self.headers,
)
if self.is_mock_mode:
print_verbose("[BRAINTRUST MOCK] Sync event successfully mocked")
except httpx.HTTPStatusError as e:
raise Exception(e.response.text)
except Exception as e:
@@ -399,6 +409,8 @@ async def async_log_success_event(  # noqa: PLR0915
json={"events": [request_data]},
headers=self.headers,
)
if self.is_mock_mode:
print_verbose("[BRAINTRUST MOCK] Async event successfully mocked")
except httpx.HTTPStatusError as e:
raise Exception(e.response.text)
except Exception as e:
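Both the sync and async paths above post to the same events endpoint and, in mock mode, get back a stand-in response object instead of a real `httpx.Response`. A simplified sketch of what such a stand-in looks like — the real `MockResponse` lives in `mock_client_factory` and carries more fields (e.g. `url`, elapsed time):

```python
class MockResponse:
    """Simplified stand-in for httpx.Response, used when mock mode is enabled."""

    def __init__(self, status_code: int = 200, json_data: dict = None, text: str = ""):
        self.status_code = status_code
        self._json = json_data or {}
        self.text = text or str(self._json)

    def json(self) -> dict:
        return self._json

    def raise_for_status(self) -> None:
        # Mock responses are always successful, so this never raises.
        return None

resp = MockResponse(200, {"status": "success"})
print(resp.status_code, resp.json()["status"])  # → 200 success
```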
131 changes: 131 additions & 0 deletions litellm/integrations/braintrust_mock_client.py
@@ -0,0 +1,131 @@
"""
Mock HTTP client for Braintrust integration testing.

This module intercepts Braintrust API calls and returns successful mock responses,
allowing full code execution without making actual network calls.

Usage:
Set BRAINTRUST_MOCK=true in environment variables or config to enable mock mode.
"""

import os
import time
from urllib.parse import urlparse

from litellm._logging import verbose_logger
from litellm.integrations.mock_client_factory import MockClientConfig, MockResponse, create_mock_client_factory

# Use factory for should_use_mock and MockResponse
# Braintrust uses both HTTPHandler (sync) and AsyncHTTPHandler (async)
# Braintrust needs endpoint-specific responses, so we use custom HTTPHandler.post patching
_config = MockClientConfig(
"BRAINTRUST",
"BRAINTRUST_MOCK",
default_latency_ms=100,
default_status_code=200,
default_json_data={"id": "mock-project-id", "status": "success"},
url_matchers=[
".braintrustdata.com",
"braintrustdata.com",
".braintrust.dev",
"braintrust.dev",
],
patch_async_handler=True, # Patch AsyncHTTPHandler.post for async calls
patch_sync_client=False, # HTTPHandler uses self.client.send(), not self.client.post()
patch_http_handler=False, # We use custom patching for endpoint-specific responses
)

# Get should_use_mock and create_mock_client from factory
# We need to call the factory's create_mock_client to patch AsyncHTTPHandler.post
create_mock_braintrust_factory_client, should_use_braintrust_mock = create_mock_client_factory(_config)

# Store original HTTPHandler.post method (Braintrust-specific for sync calls with custom logic)
_original_http_handler_post = None
_mocks_initialized = False

# Default mock latency in seconds
_MOCK_LATENCY_SECONDS = float(os.getenv("BRAINTRUST_MOCK_LATENCY_MS", "100")) / 1000.0


def _is_braintrust_url(url: str) -> bool:
"""Check if URL is a Braintrust API URL."""
if not isinstance(url, str):
return False

parsed = urlparse(url)
host = (parsed.hostname or "").lower()

if not host:
return False

return (
host == "braintrustdata.com"
or host.endswith(".braintrustdata.com")
or host == "braintrust.dev"
or host.endswith(".braintrust.dev")
)


def _mock_http_handler_post(self, url, data=None, json=None, params=None, headers=None, timeout=None, stream=False, files=None, content=None, logging_obj=None):
"""Monkey-patched HTTPHandler.post that intercepts Braintrust calls with endpoint-specific responses."""
# Only mock Braintrust API calls
if isinstance(url, str) and _is_braintrust_url(url):
verbose_logger.info(f"[BRAINTRUST MOCK] POST to {url}")
time.sleep(_MOCK_LATENCY_SECONDS)
# Return appropriate mock response based on endpoint
if "/project" in url:
# Project creation/retrieval/register endpoint
project_name = json.get("name", "litellm") if json else "litellm"
mock_data = {"id": f"mock-project-id-{project_name}", "name": project_name}
elif "/project_logs" in url:
# Log insertion endpoint
mock_data = {"status": "success"}
else:
mock_data = _config.default_json_data
return MockResponse(
status_code=_config.default_status_code,
json_data=mock_data,
url=url,
elapsed_seconds=_MOCK_LATENCY_SECONDS
)
if _original_http_handler_post is not None:
return _original_http_handler_post(self, url=url, data=data, json=json, params=params, headers=headers, timeout=timeout, stream=stream, files=files, content=content, logging_obj=logging_obj)
raise RuntimeError("Original HTTPHandler.post not available")


def create_mock_braintrust_client():
"""
Monkey-patch HTTPHandler.post to intercept Braintrust sync calls.

Braintrust uses HTTPHandler for sync calls and AsyncHTTPHandler for async calls.
HTTPHandler.post uses self.client.send(), not self.client.post(), so we need
custom patching for sync (similar to Helicone).
AsyncHTTPHandler.post is patched by the factory.

We use custom patching instead of factory's patch_http_handler because we need
endpoint-specific responses (different for /project vs /project_logs).

This function is idempotent - it only initializes mocks once, even if called multiple times.
"""
global _original_http_handler_post, _mocks_initialized

if _mocks_initialized:
return

verbose_logger.debug("[BRAINTRUST MOCK] Initializing Braintrust mock client...")

from litellm.llms.custom_httpx.http_handler import HTTPHandler

if _original_http_handler_post is None:
_original_http_handler_post = HTTPHandler.post
HTTPHandler.post = _mock_http_handler_post # type: ignore
verbose_logger.debug("[BRAINTRUST MOCK] Patched HTTPHandler.post")

# CRITICAL: Call the factory's initialization function to patch AsyncHTTPHandler.post
# This is required for async calls to be mocked
create_mock_braintrust_factory_client()

verbose_logger.debug(f"[BRAINTRUST MOCK] Mock latency set to {_MOCK_LATENCY_SECONDS*1000:.0f}ms")
verbose_logger.debug("[BRAINTRUST MOCK] Braintrust mock client initialization complete")

_mocks_initialized = True
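The host check in `_is_braintrust_url` above is deliberately strict about subdomain boundaries (it parses the hostname rather than substring-matching the URL, so `evilbraintrust.dev` does not match). The same logic, runnable standalone — a copy for illustration, not an import from the module:

```python
from urllib.parse import urlparse

def is_braintrust_url(url) -> bool:
    """Match braintrustdata.com / braintrust.dev and their subdomains only."""
    if not isinstance(url, str):
        return False
    host = (urlparse(url).hostname or "").lower()
    if not host:
        return False
    return (
        host in ("braintrustdata.com", "braintrust.dev")
        or host.endswith((".braintrustdata.com", ".braintrust.dev"))
    )

print(is_braintrust_url("https://api.braintrustdata.com/v1/project_logs"))  # → True
print(is_braintrust_url("https://evilbraintrust.dev/steal"))                # → False
```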
28 changes: 23 additions & 5 deletions litellm/integrations/datadog/datadog.py
@@ -27,6 +27,10 @@
from litellm._logging import verbose_logger
from litellm._uuid import uuid
from litellm.integrations.custom_batch_logger import CustomBatchLogger
from litellm.integrations.datadog.datadog_mock_client import (
should_use_datadog_mock,
create_mock_datadog_client,
)
from litellm.integrations.datadog.datadog_handler import (
get_datadog_hostname,
get_datadog_service,
@@ -81,6 +85,12 @@ def __init__(
"""
try:
verbose_logger.debug("Datadog: in init datadog logger")

self.is_mock_mode = should_use_datadog_mock()

if self.is_mock_mode:
create_mock_datadog_client()
verbose_logger.debug("[DATADOG MOCK] Datadog logger initialized in mock mode")

#########################################################
# Handle datadog_params set as litellm.datadog_params
@@ -220,6 +230,9 @@ async def async_send_batch(self):
len(self.log_queue),
self.intake_url,
)

if self.is_mock_mode:
verbose_logger.debug("[DATADOG MOCK] Mock mode enabled - API calls will be intercepted")

response = await self.async_send_compressed_data(self.log_queue)
if response.status_code == 413:
@@ -232,11 +245,16 @@
f"Response from datadog API status_code: {response.status_code}, text: {response.text}"
)

verbose_logger.debug(
"Datadog: Response from datadog API status_code: %s, text: %s",
response.status_code,
response.text,
)
if self.is_mock_mode:
verbose_logger.debug(
f"[DATADOG MOCK] Batch of {len(self.log_queue)} events successfully mocked"
)
else:
verbose_logger.debug(
"Datadog: Response from datadog API status_code: %s, text: %s",
response.status_code,
response.text,
)
except Exception as e:
verbose_logger.exception(
f"Datadog Error sending batch API - {str(e)}\n{traceback.format_exc()}"
31 changes: 28 additions & 3 deletions litellm/integrations/datadog/datadog_llm_obs.py
@@ -18,6 +18,10 @@
import litellm
from litellm._logging import verbose_logger
from litellm.integrations.custom_batch_logger import CustomBatchLogger
from litellm.integrations.datadog.datadog_mock_client import (
should_use_datadog_mock,
create_mock_datadog_client,
)
from litellm.integrations.datadog.datadog_handler import (
get_datadog_service,
get_datadog_tags,
@@ -44,6 +48,19 @@ class DataDogLLMObsLogger(CustomBatchLogger):
def __init__(self, **kwargs):
try:
verbose_logger.debug("DataDogLLMObs: Initializing logger")

self.is_mock_mode = should_use_datadog_mock()

if self.is_mock_mode:
create_mock_datadog_client()
verbose_logger.debug("[DATADOG MOCK] DataDogLLMObs logger initialized in mock mode")

if os.getenv("DD_API_KEY", None) is None:
raise Exception("DD_API_KEY is not set, set 'DD_API_KEY=<>'")
if os.getenv("DD_SITE", None) is None:
raise Exception(
"DD_SITE is not set, set 'DD_SITE=<>', example site = `us5.datadoghq.com`"
)
# Configure DataDog endpoint (Agent or Direct API)
# Use LITELLM_DD_AGENT_HOST to avoid conflicts with ddtrace's DD_AGENT_HOST
dd_agent_host = os.getenv("LITELLM_DD_AGENT_HOST")
@@ -170,6 +187,9 @@ async def async_send_batch(self):
verbose_logger.debug(
f"DataDogLLMObs: Flushing {len(self.log_queue)} events"
)

if self.is_mock_mode:
verbose_logger.debug("[DATADOG MOCK] Mock mode enabled - API calls will be intercepted")

# Prepare the payload
payload = {
@@ -210,9 +230,14 @@
f"DataDogLLMObs: Unexpected response - status_code: {response.status_code}, text: {response.text}"
)

verbose_logger.debug(
f"DataDogLLMObs: Successfully sent batch - status_code: {response.status_code}"
)
if self.is_mock_mode:
verbose_logger.debug(
f"[DATADOG MOCK] Batch of {len(self.log_queue)} events successfully mocked"
)
else:
verbose_logger.debug(
f"DataDogLLMObs: Successfully sent batch - status_code: {response.status_code}"
)
self.log_queue.clear()
except httpx.HTTPStatusError as e:
verbose_logger.exception(
28 changes: 28 additions & 0 deletions litellm/integrations/datadog/datadog_mock_client.py
@@ -0,0 +1,28 @@
"""
Mock client for Datadog integration testing.

This module intercepts Datadog API calls and returns successful mock responses,
allowing full code execution without making actual network calls.

Usage:
Set DATADOG_MOCK=true in environment variables or config to enable mock mode.
"""

from litellm.integrations.mock_client_factory import MockClientConfig, create_mock_client_factory

# Create mock client using factory
_config = MockClientConfig(
name="DATADOG",
env_var="DATADOG_MOCK",
default_latency_ms=100,
default_status_code=202,
default_json_data={"status": "ok"},
url_matchers=[
".datadoghq.com",
"datadoghq.com",
],
patch_async_handler=True,
patch_sync_client=True,
)

create_mock_datadog_client, should_use_datadog_mock = create_mock_client_factory(_config)
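The `url_matchers` list above mixes a bare host (exact match) with a dot-prefixed entry (subdomain match). A sketch of how a factory might apply such matchers — `url_matches` is a hypothetical helper written for illustration; the real matching logic lives in `mock_client_factory`:

```python
from urllib.parse import urlparse

def url_matches(url: str, matchers: list) -> bool:
    """Dot-prefixed matchers match subdomains; bare matchers match the host exactly."""
    host = (urlparse(url).hostname or "").lower()
    if not host:
        return False
    return any(
        host.endswith(m) if m.startswith(".") else host == m
        for m in matchers
    )

dd_matchers = [".datadoghq.com", "datadoghq.com"]
print(url_matches("https://http-intake.logs.us5.datadoghq.com/api/v2/logs", dd_matchers))  # → True
print(url_matches("https://mydatadoghq.com/logs", dd_matchers))                            # → False
```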