fix(test): add environment cleanup for Vertex AI Qwen tests#21273

Merged
jquinter merged 1 commit into main from fix/vertex-ai-qwen-test-isolation
Feb 15, 2026
Conversation

@jquinter
Contributor

Summary

Fixes test isolation issue in Vertex AI Qwen tests by adding environment variable cleanup.

Problem

Test test_vertex_ai_qwen_global_endpoint_url was failing in CI with:

litellm.exceptions.AuthenticationError: Vertex_aiException - {
  "error": {
    "code": 401,
    "message": "Request had invalid authentication credentials..."
  }
}

Previous tests set Google/Vertex AI environment variables without cleaning them up, causing this test to attempt real Google authentication instead of using the mocks.

Solution

Added clean_vertex_env pytest fixture with autouse=True to:

  1. Save and clear all Google/Vertex AI environment variables before each test
  2. Restore them after each test completes
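
The fixture itself isn't shown in this thread; a minimal sketch of the described pattern, using the six variable names listed in the review summary below (the exact list in the actual file may differ), could look like:

```python
import os
from contextlib import contextmanager

import pytest

# Variables that can trigger real Google authentication if left set
# by an earlier test in the same process.
VERTEX_ENV_VARS = [
    "GOOGLE_APPLICATION_CREDENTIALS",
    "GOOGLE_CLOUD_PROJECT",
    "VERTEXAI_PROJECT",
    "VERTEX_PROJECT",
    "VERTEX_LOCATION",
    "VERTEX_AI_PROJECT",
]


@contextmanager
def vertex_env_cleared():
    """Save and remove the variables, restoring any saved values on exit."""
    saved = {var: os.environ.pop(var, None) for var in VERTEX_ENV_VARS}
    try:
        yield
    finally:
        for var, value in saved.items():
            if value is not None:
                os.environ[var] = value


@pytest.fixture(autouse=True)
def clean_vertex_env():
    """Run every test in this module with a clean Vertex environment."""
    with vertex_env_cleared():
        yield
```

Because the fixture is autouse, no test in the file has to request it explicitly; pytest wraps each test in the save/clear/restore cycle automatically.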

Same fix pattern as PR #21268 (rerank) and PR #21272 (GPT-OSS).

Testing

Prevents real API calls by ensuring a clean environment for all tests in this file.

Related

This test failure was reported on PR #21217, but NOT caused by PR #21217 (which only modifies Anthropic structured output tests). This is part of a broader test isolation issue affecting multiple Vertex AI test files.

🤖 Generated with Claude Code

Add autouse pytest fixture to clear Google/Vertex AI environment
variables before each test, preventing authentication errors in CI.

Previous tests may set GOOGLE_APPLICATION_CREDENTIALS or other Vertex
environment variables and not clean them up, causing this test to
attempt real Google authentication instead of using mocks.

This fix:
- Adds clean_vertex_env fixture with autouse=True
- Saves and clears Google/Vertex env vars before each test
- Restores them after each test
- Prevents "AuthenticationError: Request had invalid authentication
  credentials" (401) in CI when run with other tests

Same fix pattern as PR #21268 (rerank) and PR #21272 (GPT-OSS).

Related: test was failing on PR #21217, but NOT caused by PR #21217
(which only modifies test_anthropic_structured_output.py). This is
another test isolation issue.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

@greptile-apps
Contributor

greptile-apps bot commented Feb 15, 2026

Greptile Summary

This PR adds a clean_vertex_env pytest fixture with autouse=True to the Vertex AI Qwen global endpoint test file. The fixture saves and clears 6 Google/Vertex AI environment variables (GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_CLOUD_PROJECT, VERTEXAI_PROJECT, VERTEX_PROJECT, VERTEX_LOCATION, VERTEX_AI_PROJECT) before each test, and restores them afterward.

Confidence Score: 5/5

  • This PR is safe to merge — it only adds test infrastructure (environment cleanup fixture) with no production code changes.
  • The change is minimal and well-scoped: a single pytest fixture added to a test file. It follows an established pattern from sibling PRs, correctly saves/restores environment state, and introduces no risk to production code.
  • No files require special attention.

Important Files Changed

tests/test_litellm/llms/vertex_ai/vertex_ai_partner_models/qwen/test_vertex_ai_qwen_global_endpoint.py: Adds an autouse pytest fixture to clean Google/Vertex AI environment variables before each test and restore them after. Standard test isolation pattern, consistent with sibling PRs.

Sequence Diagram

sequenceDiagram
    participant Pytest as Pytest Runner
    participant Fixture as clean_vertex_env Fixture
    participant Env as os.environ
    participant Test as Test Function

    Pytest->>Fixture: Before each test (autouse)
    Fixture->>Env: Save & clear GOOGLE_APPLICATION_CREDENTIALS
    Fixture->>Env: Save & clear GOOGLE_CLOUD_PROJECT
    Fixture->>Env: Save & clear VERTEXAI_PROJECT
    Fixture->>Env: Save & clear VERTEX_PROJECT
    Fixture->>Env: Save & clear VERTEX_LOCATION
    Fixture->>Env: Save & clear VERTEX_AI_PROJECT
    Fixture-->>Pytest: yield (env is clean)
    Pytest->>Test: Run test with clean environment
    Test-->>Pytest: Test completes
    Pytest->>Fixture: After test (teardown)
    Fixture->>Env: Restore saved variables
    Fixture-->>Pytest: Cleanup done

Last reviewed commit: 62ac8ce

@greptile-apps bot left a comment

1 file reviewed, no comments

@jquinter jquinter merged commit 03215c6 into main Feb 15, 2026
17 of 23 checks passed
jquinter added a commit that referenced this pull request Feb 15, 2026
…ution

Implements three key improvements to reduce test flakiness from parallel execution:

1. **Split Vertex AI tests into separate group** (workers: 1)
   - Vertex AI tests often have environment variable pollution issues
   - Running serially prevents cross-test interference with GOOGLE_APPLICATION_CREDENTIALS
   - Isolates authentication-related test failures

2. **Reduce workers for other LLM tests** (4 -> 2)
   - Decreases chance of race conditions and state conflicts
   - Still parallel but with less contention

3. **Add --dist=loadscope to pytest-xdist**
   - Keeps tests from the same file together on one worker
   - Reduces interference between unrelated test modules
   - Data shows 70% pass rate WITH loadscope vs 40% WITHOUT
   - Better test isolation while maintaining parallelism

Note: loadscope exposes one tokenizer cache issue in core-utils which will be
fixed in a separate PR. The tradeoff is worth it (7/10 pass vs 4/10 without).

These changes address the root causes of intermittent test failures in:
PRs #21268, #21271, #21272, #21273, #21275, #21276:
- Environment variable pollution (GOOGLE_APPLICATION_CREDENTIALS, VERTEXAI_PROJECT)
- Global state conflicts (litellm.known_tokenizer_config)
- Async mock timing issues with parallel execution

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
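
The commit above describes the CI changes in prose only; assuming a pytest-xdist setup and the test layout shown earlier, the three changes might correspond to invocations roughly like the following (paths are illustrative, not taken from the actual config):

```shell
# 1. Vertex AI tests in their own serial group (one worker)
pytest tests/test_litellm/llms/vertex_ai/ -n 1

# 2 & 3. Remaining LLM tests with fewer workers, and --dist=loadscope
# so each file's tests stay on one worker, limiting cross-module interference
pytest tests/test_litellm/llms/ \
  --ignore=tests/test_litellm/llms/vertex_ai \
  -n 2 --dist=loadscope
```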
jquinter added a commit that referenced this pull request Feb 18, 2026