add tests for fix#23649

yuneng-jiang merged 4 commits into BerriAI:litellm_internal_dev_03_14_2026
Conversation
* fix(bedrock): respect s3_region_name for batch file uploads (BerriAI#23569)
  * fix(bedrock): respect s3_region_name for batch file uploads (GovCloud fix)
  * fix: s3_region_name always wins over aws_region_name for S3 signing (Greptile feedback)
* fix: _filter_headers_for_aws_signature - Bedrock KB (BerriAI#23571)
  * fix: _filter_headers_for_aws_signature
  * fix: filter None header values in all post-signing re-merge paths

    Addresses Greptile feedback: None-valued headers were being filtered during SigV4 signing but re-merged back into the final headers dict afterward, which would cause downstream HTTP client failures. Made-with: Cursor
* feat(router): tag_regex routing — route by User-Agent regex without per-developer tag config (BerriAI#23594)
  * feat(router): add tag_regex support for header-based routing

    Adds a new `tag_regex` field to litellm_params that lets operators route requests based on regex patterns matched against request headers — primarily User-Agent — without requiring per-developer tag configuration.

    Use case: route all Claude Code traffic (User-Agent: claude-code/x.y.z) to a dedicated deployment by setting `tag_regex: ["^User-Agent: claude-code\\/"]` in the deployment's litellm_params.

    Works alongside existing `tags` routing; exact tag match takes precedence over regex match. Unmatched requests fall through to deployments tagged `default`. The matched deployment, pattern, and user_agent are recorded in `metadata["tag_routing"]` so they flow through to SpendLogs automatically.
  * fix(tag_regex): address backwards-compat, metadata overwrite, and warning noise

    Three issues from code review:
    1. Backwards-compat: `has_tag_filter` was widened to activate on any non-empty User-Agent, which would raise ValueError for existing deployments using plain tags without a `default` fallback. Fix: only activate header-based regex filtering when at least one candidate deployment has `tag_regex` configured.
    2. Metadata overwrite: `metadata["tag_routing"]` was overwritten for every matching deployment in the loop, leaving inaccurate provenance when multiple deployments match. Fix: write only for the first match.
    3. Warning noise: an invalid regex pattern logged one warning per header string rather than once per pattern. Fix: compile first (catching re.error once), then iterate over header strings.

    Also adds two new tests covering these cases, and adds a docs page for tag_regex routing with a Claude Code walk-through.
  * refactor(tag_regex): remove unnecessary _healthy_list copy
  * docs: merge tag_regex section into tag_routing.md, remove standalone page
    - Add a "Regex-based tag routing (tag_regex)" section to the existing tag_routing.md instead of a separate page
    - Remove the tag_regex_routing.md standalone doc (odd UX to have a separate page for a sub-feature)
    - Remove proxy/tag_regex_routing from sidebars.js
    - Add a match_any=False debug warning in tag_based_routing.py when regex routing fires under strict mode (regex always uses OR semantics)
  * fix(tag_regex): address greptile review - security docs, strict-mode enforcement, validation order
    - Strengthen the security note in tag_routing.md: explicitly state that User-Agent is client-supplied and can be set to any value; frame tag_regex as a traffic-classification hint, not an access-control mechanism
    - Move tag_regex startup validation before _add_deployment() so an invalid pattern never leaves partial router state
    - Enforce the match_any=False strict-tag policy: when a deployment has both tags and tag_regex and the strict tag check fails, skip the regex fallback rather than silently bypassing the operator's intent
    - Extract per-deployment match logic into a _match_deployment() helper to keep get_deployments_for_tag() readable
    - Add two new tests: strict mode blocks the regex fallback, and a regex-only deployment still matches under match_any=False
  * fix(ci): apply Black formatting to 14 files and stabilize flaky caplog tests
    - Run the Black formatter on 14 files that were failing the lint check
    - Replace caplog-based assertions in TestAliasConflicts with unittest.mock.patch on verbose_logger.warning for xdist compatibility
    - The caplog fixture can produce empty text in pytest-xdist workers in certain CI environments, causing flaky test failures

  Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>
  Co-authored-by: Cursor Agent <cursoragent@cursor.com>
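The warning-noise fix above specifies a compile-once matching order: compile each pattern a single time (catching `re.error` once per pattern), then scan the header strings. A minimal sketch of that order, with a hypothetical function name and signature rather than the router's actual helpers:

```python
import re
from typing import List, Optional


def match_tag_regex(patterns: List[str], header_strings: List[str]) -> Optional[str]:
    """Return the first pattern matching any 'Name: value' header string.

    Compiling before the inner loop means an invalid pattern is rejected
    exactly once, not once per header string.
    """
    for pattern in patterns:
        try:
            compiled = re.compile(pattern)
        except re.error:
            # In the router this would be a single verbose_logger warning.
            continue
        for header in header_strings:
            if compiled.search(header):
                return pattern
    return None


# A Claude Code request matches the pattern from the commit message:
print(match_tag_regex([r"^User-Agent: claude-code\/"],
                      ["User-Agent: claude-code/1.2.3"]))
```

An invalid pattern such as `"("` simply yields no match instead of raising, mirroring the compile-first error handling described above.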
* fix: restore offline tiktoken cache for non-root envs. Made-with: Cursor
* chore: mkdir for custom tiktoken cache dir. Made-with: Cursor
* test: patch tiktoken.get_encoding in custom-dir test to avoid network. Made-with: Cursor
* test: clear CUSTOM_TIKTOKEN_CACHE_DIR in helper for test isolation. Made-with: Cursor
* test: restore default_encoding module state after custom-dir test. Made-with: Cursor
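The tiktoken commits above combine two ideas: honor a `CUSTOM_TIKTOKEN_CACHE_DIR` override and create the directory before use, since non-root environments may not be able to write the default cache path. A sketch under those assumptions (the helper name is hypothetical; `TIKTOKEN_CACHE_DIR` is the environment variable tiktoken itself reads for its offline cache):

```python
import os
import tempfile


def ensure_tiktoken_cache_dir(custom_dir=None):
    """Resolve a tiktoken cache directory, create it, and export it.

    Priority: explicit argument > CUSTOM_TIKTOKEN_CACHE_DIR env var >
    a per-user temp fallback that non-root processes can write.
    """
    cache_dir = custom_dir or os.environ.get("CUSTOM_TIKTOKEN_CACHE_DIR")
    if not cache_dir:
        cache_dir = os.path.join(tempfile.gettempdir(), "tiktoken_cache")
    os.makedirs(cache_dir, exist_ok=True)  # mkdir before tiktoken touches it
    os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir
    return cache_dir
```

Tests that exercise the custom-dir path should clear `CUSTOM_TIKTOKEN_CACHE_DIR` afterward, as the isolation commit above notes, so state does not leak between tests.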
* fix: map the provider finish_reason "content_filtered" to the OpenAI-compatible "content_filter" and extend core_helpers tests to cover this case. Made-with: Cursor
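That normalization is a one-line mapping. A standalone illustration (litellm's real helper lives in core_helpers; this function is a hypothetical stand-in, not its actual implementation):

```python
def map_finish_reason(finish_reason: str) -> str:
    """Normalize provider-specific finish reasons to OpenAI-compatible ones.

    Only the mapping named in the commit message is shown here; passthrough
    for already-valid values like "stop" or "length".
    """
    if finish_reason == "content_filtered":
        return "content_filter"
    return finish_reason


print(map_finish_reason("content_filtered"))  # content_filter
print(map_finish_reason("stop"))  # stop
```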
Greptile Summary

This PR is a follow-up to #22188, adding unit tests to verify that sensitive fields (e.g., `api_key`, `secret_fields`) are not leaked into Langfuse traces.

Changes:
Confidence Score: 3/5
| Filename | Overview |
|---|---|
| tests/logging_callback_tests/test_langfuse_unit_tests.py | Removes unused imports and debug prints (clean), adds two new tests for secret-leakage fix; the first test (test_langfuse_model_parameters_no_secret_leakage) correctly validates ModelParamHelper.get_standard_logging_model_parameters, but the second (test_langfuse_v2_uses_standard_logging_model_parameters) never calls _log_langfuse_v2 and is tautological — it would pass even if the underlying bug were still present. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant T as Test
    participant MPH as ModelParamHelper
    participant LFLogger as LangFuseLogger
    participant TG as trace.generation()
    Note over T,TG: test_langfuse_model_parameters_no_secret_leakage (REAL coverage)
    T->>MPH: get_standard_logging_model_parameters(params_with_secrets)
    MPH-->>T: {temperature, top_p, max_tokens, stream} (secrets filtered)
    T->>T: assert secrets absent ✅
    Note over T,TG: test_langfuse_v2_uses_standard_logging_model_parameters (TAUTOLOGICAL)
    T->>T: standard_logging_object["model_parameters"] = safe_params
    T->>T: dict.get("model_parameters", optional_params_with_secrets)
    T-->>T: returns safe_params (trivially, never touches _log_langfuse_v2)
    T->>T: assert secrets absent ✅ (but _log_langfuse_v2 never called!)
    Note over T,TG: What the test SHOULD do
    T->>LFLogger: _log_langfuse_v2(..., optional_params=params_with_secrets, ...)
    LFLogger->>TG: generation(model_parameters=???)
    TG-->>T: capture call_args
    T->>T: assert "api_key" not in call_args["model_parameters"] ✅
```
Last reviewed commit: d7c9ec6
```python
def test_langfuse_v2_uses_standard_logging_model_parameters():
    """
    Test that _log_langfuse_v2 uses sanitized model_parameters from
    standard_logging_object instead of raw optional_params, preventing
    secret leakage to Langfuse traces.
    """
    standard_logging_object = create_standard_logging_payload()
    # Simulate standard_logging_object having safe model_parameters
    standard_logging_object["model_parameters"] = {"temperature": 0.5, "stream": True}

    # optional_params has secrets — these should NOT be used
    optional_params_with_secrets = {
        "temperature": 0.5,
        "api_key": "sk-secret-key-12345",
        "secret_fields": {"raw_headers": {"Authorization": "Bearer sk-secret"}},
    }

    # When standard_logging_object is available, its model_parameters should be used
    sanitized = standard_logging_object.get(
        "model_parameters", optional_params_with_secrets
    )
    assert "api_key" not in sanitized
    assert "secret_fields" not in sanitized
    assert sanitized["temperature"] == 0.5

    # When standard_logging_object is None, ModelParamHelper should filter
    from litellm.litellm_core_utils.model_param_helper import ModelParamHelper

    fallback_sanitized = ModelParamHelper.get_standard_logging_model_parameters(
        optional_params_with_secrets
    )
    assert "api_key" not in fallback_sanitized
    assert "secret_fields" not in fallback_sanitized
    assert fallback_sanitized["temperature"] == 0.5
```
Test is tautological and doesn't exercise `_log_langfuse_v2`

The test name and docstring claim to verify that `_log_langfuse_v2` uses sanitized `model_parameters` from `standard_logging_object` instead of raw `optional_params`. However, the test never calls `_log_langfuse_v2` at all.
The core assertion:

```python
sanitized = standard_logging_object.get(
    "model_parameters", optional_params_with_secrets
)
assert "api_key" not in sanitized
```

...is trivially true because `standard_logging_object["model_parameters"]` was explicitly set to `{"temperature": 0.5, "stream": True}` two lines earlier. This is just testing Python's `dict.get()`, not the actual Langfuse logging code.
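The triviality is easy to demonstrate in isolation: when the key is present, `dict.get` never consults its default, so the secret-bearing dict is never even looked at, and the assertion passes regardless of any Langfuse code.

```python
# Stand-in for the test's standard_logging_object and secret-bearing params.
slo = {"model_parameters": {"temperature": 0.5, "stream": True}}
secrets = {"api_key": "sk-secret-key-12345"}

# The key exists, so the default (secrets) is ignored entirely.
sanitized = slo.get("model_parameters", secrets)
assert "api_key" not in sanitized  # passes without touching any logger code
print(sanitized)  # {'temperature': 0.5, 'stream': True}
```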
Crucially, looking at `litellm/integrations/langfuse/langfuse.py` line 840:

```python
"model_parameters": optional_params,
```

`_log_langfuse_v2` still passes the raw (caller-side filtered) `optional_params` directly. This test would pass even if the bug described in the docstring were still fully present. To actually verify the fix, the test should mock `trace.generation` and assert the `model_parameters` argument does not contain secrets, e.g.:
```python
with patch.object(langfuse_logger.Langfuse, "trace") as mock_trace:
    mock_trace_instance = Mock()
    mock_trace.return_value = mock_trace_instance
    langfuse_logger._log_langfuse_v2(
        ...,
        optional_params=optional_params_with_secrets,
        kwargs=kwargs_with_standard_logging_object,
        ...,
    )
    call_kwargs = mock_trace_instance.generation.call_args.kwargs
    assert "api_key" not in call_kwargs["model_parameters"]
```

0066ad7 into BerriAI:litellm_internal_dev_03_14_2026
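The capture pattern the review suggests can be shown end-to-end in a self-contained sketch. The logger class below is a hypothetical stand-in, not litellm's actual `LangFuseLogger`; only the mocking mechanics carry over: mock the client, invoke the logger, then assert on the `model_parameters` actually handed to `generation()`.

```python
from unittest.mock import MagicMock


class FakeLangfuseLogger:
    """Stand-in that prefers sanitized params from standard_logging_object."""

    def __init__(self, client):
        self.client = client

    def _log_langfuse_v2(self, optional_params, standard_logging_object):
        safe = standard_logging_object.get("model_parameters", optional_params)
        trace = self.client.trace(name="demo")
        trace.generation(model_parameters=safe)


mock_client = MagicMock()
logger = FakeLangfuseLogger(mock_client)
logger._log_langfuse_v2(
    optional_params={"temperature": 0.5, "api_key": "sk-secret"},
    standard_logging_object={"model_parameters": {"temperature": 0.5}},
)

# Inspect what generation() was actually called with.
call_kwargs = mock_client.trace.return_value.generation.call_args.kwargs
assert "api_key" not in call_kwargs["model_parameters"]
print(call_kwargs["model_parameters"])  # {'temperature': 0.5}
```

Unlike the tautological `dict.get` assertion, this fails the moment the logger passes raw `optional_params` through, which is exactly the regression the review wants covered.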
Follow-up for tests: #22188