fix(test): add environment cleanup for Vertex AI GPT-OSS tests #21272
Merged
Conversation
Add autouse pytest fixture to clear Google/Vertex AI environment variables before each test, preventing authentication errors in CI.

Previous tests may set GOOGLE_APPLICATION_CREDENTIALS or other Vertex environment variables and not clean them up, causing this test to attempt real Google authentication instead of using mocks.

This fix:
- Adds clean_vertex_env fixture with autouse=True
- Saves and clears Google/Vertex env vars before each test
- Restores them after each test
- Prevents "AuthenticationError: Request had invalid authentication credentials" in CI when run with other tests

Without this fix the test makes real API calls in CI and gets a 401 error. Locally it fails with "No module named 'vertexai'" (expected).

Related: the test was failing on PR #21217, but was NOT caused by PR #21217 (which only modifies test_anthropic_structured_output.py). This is another test isolation issue.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
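Based on the description above, a minimal sketch of such an autouse fixture might look like the following. The exact env-var list and the `scrubbed_env` helper name are assumptions for illustration, not the merged code:

```python
import os
from contextlib import contextmanager

import pytest

# Env vars that can leak between Vertex AI tests (list is illustrative,
# not necessarily the exact set used in the PR)
VERTEX_ENV_VARS = [
    "GOOGLE_APPLICATION_CREDENTIALS",
    "GOOGLE_CLOUD_PROJECT",
    "VERTEXAI_PROJECT",
    "VERTEXAI_LOCATION",
]


@contextmanager
def scrubbed_env(names):
    # Save current values and remove them so the test starts clean
    saved = {k: os.environ.pop(k) for k in names if k in os.environ}
    try:
        yield
    finally:
        # Restore whatever was set before the test, even if it failed
        os.environ.update(saved)


@pytest.fixture(autouse=True)
def clean_vertex_env():
    """Runs around every test in the module; no explicit opt-in needed."""
    with scrubbed_env(VERTEX_ENV_VARS):
        yield
```

Because the save/clear/restore logic lives in a plain context manager, it can also be reused outside pytest; the `try/finally` guarantees restoration even when the wrapped test raises.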
jquinter added a commit that referenced this pull request on Feb 15, 2026
Add autouse pytest fixture to clear Google/Vertex AI environment variables before each test, preventing authentication errors in CI.

Previous tests may set GOOGLE_APPLICATION_CREDENTIALS or other Vertex environment variables and not clean them up, causing this test to attempt real Google authentication instead of using mocks.

This fix:
- Adds clean_vertex_env fixture with autouse=True
- Saves and clears Google/Vertex env vars before each test
- Restores them after each test
- Prevents "AuthenticationError: Request had invalid authentication credentials" (401) in CI when run with other tests

Same fix pattern as PR #21268 (rerank) and PR #21272 (GPT-OSS).

Related: test was failing on PR #21217, but NOT caused by PR #21217 (which only modifies test_anthropic_structured_output.py). This is another test isolation issue.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Contributor
Greptile Summary: Adds a pytest fixture that clears Google/Vertex AI environment variables around each test.
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| tests/test_litellm/llms/vertex_ai/vertex_ai_partner_models/gpt_oss/test_vertex_ai_gpt_oss_transformation.py | Adds autouse pytest fixture to save/clear/restore Google/Vertex AI environment variables before and after each test, preventing test isolation issues from leaked credentials. The fix is correct and follows the same pattern as the rerank test cleanup. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Pytest as Pytest Runner
    participant Fixture as clean_vertex_env fixture
    participant Env as os.environ
    participant Test as Test Function
    participant Mock as Mock (VertexLLM)
    Pytest->>Fixture: Before each test (autouse)
    Fixture->>Env: Save GOOGLE_APPLICATION_CREDENTIALS, VERTEXAI_PROJECT, etc.
    Fixture->>Env: Delete saved env vars
    Fixture->>Test: yield (run test)
    Test->>Mock: patch VertexLLM._ensure_access_token
    Mock-->>Test: Return ("fake-token", "project-id")
    Test->>Mock: litellm.acompletion(...)
    Mock-->>Test: Mock response (no real API call)
    Test->>Fixture: Test complete
    Fixture->>Env: Restore saved env vars
```
Last reviewed commit: cf11867
...llm/llms/vertex_ai/vertex_ai_partner_models/gpt_oss/test_vertex_ai_gpt_oss_transformation.py
…_oss/test_vertex_ai_gpt_oss_transformation.py Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
jquinter added a commit that referenced this pull request on Feb 15, 2026
…ution

Implements three key improvements to reduce test flakiness from parallel execution:

1. **Split Vertex AI tests into separate group** (workers: 1)
   - Vertex AI tests often have environment variable pollution issues
   - Running serially prevents cross-test interference with GOOGLE_APPLICATION_CREDENTIALS
   - Isolates authentication-related test failures
2. **Reduce workers for other LLM tests** (4 -> 2)
   - Decreases chance of race conditions and state conflicts
   - Still parallel but with less contention
3. **Add --dist=loadscope to pytest-xdist**
   - Keeps tests from the same file together on one worker
   - Reduces interference between unrelated test modules
   - Data shows 70% pass rate WITH loadscope vs 40% WITHOUT
   - Better test isolation while maintaining parallelism

Note: loadscope exposes one tokenizer cache issue in core-utils which will be fixed in a separate PR. The tradeoff is worth it (7/10 pass vs 4/10 without).

These changes address the root causes of intermittent test failures in PRs #21268, #21271, #21272, #21273, #21275, #21276:
- Environment variable pollution (GOOGLE_APPLICATION_CREDENTIALS, VERTEXAI_PROJECT)
- Global state conflicts (litellm.known_tokenizer_config)
- Async mock timing issues with parallel execution

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
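The three changes in the commit above could translate into CI invocations roughly like the following. This is an illustrative sketch: the test paths and exact flag combination are assumptions, not the repository's actual CI config.

```shell
# Vertex AI tests: a single worker avoids env-var cross-talk entirely
pytest tests/test_litellm/llms/vertex_ai -n 1

# Remaining LLM tests: 2 workers instead of 4; --dist=loadscope keeps
# all tests from one file on the same worker, reducing cross-module
# interference while staying parallel
pytest tests/test_litellm/llms --ignore=tests/test_litellm/llms/vertex_ai \
  -n 2 --dist=loadscope
```

With `--dist=loadscope`, pytest-xdist groups tests by module (or class) before distributing them, so module-scoped fixtures and module-level state are never split across workers.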
sameetn pushed a commit to sameetn/litellm that referenced this pull request on Feb 16, 2026
jquinter added a commit that referenced this pull request on Feb 18, 2026
Summary
Fixes test isolation issue in Vertex AI GPT-OSS tests by adding environment variable cleanup.
Problem
`test_vertex_ai_gpt_oss_reasoning_effort` was failing in CI with "AuthenticationError: Request had invalid authentication credentials" (401). Previous tests set Google/Vertex AI environment variables (`GOOGLE_APPLICATION_CREDENTIALS`, `VERTEXAI_PROJECT`, etc.) and don't clean them up, causing this test to attempt real Google authentication instead of using mocks.

Root Cause
Environment variable pollution from other Vertex AI tests that don't clean up after themselves.
Solution
Added a `clean_vertex_env` pytest fixture with `autouse=True` to:
- Save Google/Vertex env vars before each test
- Clear them so the test runs against mocks
- Restore them after each test

This ensures each test starts with a clean environment.
Testing
Locally (without the vertexai SDK): `ModuleNotFoundError: No module named 'vertexai'` (expected).

In CI (with the vertexai SDK): without this fix the test makes real API calls and gets a 401 error; with the fixture it runs against mocks.
Related
This test failure was reported on PR #21217, but NOT caused by PR #21217 (which only modifies Anthropic structured output tests). This is another test isolation issue affecting multiple Vertex AI tests.
🤖 Generated with Claude Code