Unify SamplingHandler and promote OpenAI handler (#2616)

Merged
jlowin merged 3 commits into main from unified-sampling-handler on Dec 14, 2025

Conversation


@jlowin (Member) commented on Dec 14, 2025

Consolidates the separate ServerSamplingHandler and ClientSamplingHandler type aliases into a single unified SamplingHandler type. This simplifies the API since both client and server use the same handler signature for sampling.

Also promotes OpenAISamplingHandler from experimental to fastmcp.client.sampling.handlers.openai as its canonical location.

```python
# Before: separate types
from fastmcp.server.sampling.handler import ServerSamplingHandler
from fastmcp.client.sampling import ClientSamplingHandler

# After: one unified type
from fastmcp.client.sampling import SamplingHandler

# OpenAI handler moved from experimental
from fastmcp.client.sampling.handlers.openai import OpenAISamplingHandler
```

Backwards compatibility maintained for imports from fastmcp.experimental.sampling.handlers.openai.


Handling sampling is fundamentally a client concern, which is why these classes live in the client submodule. When a server uses a sampling handler, it does so explicitly as a "client fallback".

Consolidates ServerSamplingHandler and ClientSamplingHandler into a single
SamplingHandler type alias. Moves OpenAISamplingHandler from experimental
to fastmcp.client.sampling.handlers.openai as the canonical location.

Backwards compatibility maintained for imports from experimental.
@marvin-context-protocol (bot) added the enhancement label ("Improvement to existing functionality. For issues and smaller PR improvements.") on Dec 14, 2025

@coderabbitai (bot) commented on Dec 14, 2025

Warning

Rate limit exceeded

@jlowin has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 2 minutes and 9 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 1dd83b9 and e1329c6.

📒 Files selected for processing (1)
  • src/fastmcp/client/sampling/handlers/openai.py (1 hunks)

Walkthrough

This PR restructures the sampling subsystem: it removes the old client-level sampling module, adds a new package at src/fastmcp/client/sampling/ that defines unified SamplingHandler types and create_sampling_callback, and implements a new OpenAISamplingHandler under src/fastmcp/client/sampling/handlers/openai.py. Client and Server constructors and setters now accept the unified SamplingHandler type. Docs and examples updated to the new import paths, and backward-compatible re-exports are provided under the experimental package.

Possibly related PRs

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 28.57%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title clearly summarizes the main changes: unifying SamplingHandler types and promoting the OpenAI handler from experimental to its canonical location.
  • Description check: ✅ Passed. The description explains the consolidation and promotion clearly, includes code examples showing before/after imports, and mentions backwards compatibility. However, it does not follow the provided template with required sections like Contributors Checklist and Review Checklist.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
examples/advanced_sampling/client_sampling_test.py (1)

23-87: Import update is good; fix messages type to match step.history to keep the example type-clean.

```diff
@@
-    messages: list[str] = [question]
+    messages: list[str | SamplingMessage] = [question]
@@
-        messages = step.history
+        messages = list(step.history)
```
docs/servers/sampling.mdx (1)

460-474: Use AsyncOpenAI, not OpenAI, in the sampling handler example.

The OpenAISamplingHandler class requires an AsyncOpenAI client (client: AsyncOpenAI | None = None) and uses await self.client.chat.completions.create(...) in its async __call__ method. Passing the sync OpenAI client will cause a runtime error.

Update the example to:

```python
import os
from openai import AsyncOpenAI
from fastmcp import FastMCP
from fastmcp.client.sampling.handlers.openai import OpenAISamplingHandler

server = FastMCP(
    name="My Server",
    sampling_handler=OpenAISamplingHandler(
        default_model="gpt-4o-mini",
        client=AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY")),
    ),
    sampling_handler_behavior="fallback",
)
```
examples/sampling_fallback.py (1)

8-26: Critical: example passes sync OpenAI client to async handler—fix required.

The OpenAISamplingHandler explicitly requires AsyncOpenAI (line 51 of src/fastmcp/client/sampling/handlers/openai.py) and awaits self.client.chat.completions.create() (line 82). Passing the sync OpenAI client will cause a runtime error.

Update the import and client instantiation:

```diff
@@
-from openai import OpenAI
+from openai import AsyncOpenAI
@@
         sampling_handler=OpenAISamplingHandler(
             default_model=os.getenv("MODEL") or "gpt-4o-mini",  # pyright: ignore[reportArgumentType]
-            client=OpenAI(
-                api_key=os.getenv("API_KEY"),
+            client=AsyncOpenAI(
+                api_key=os.getenv("OPENAI_API_KEY") or os.getenv("API_KEY"),
                 base_url=os.getenv("BASE_URL"),
             ),
         ),
```
🧹 Nitpick comments (4)
examples/advanced_sampling/structured_output.py (1)

20-45: Nice update to the canonical OpenAISamplingHandler import; consider removing the type: ignore with a cast + precise return type.

```diff
@@
-from fastmcp import Client, Context, FastMCP
+from typing import Any, cast
+
+from fastmcp import Client, Context, FastMCP
 from fastmcp.client.sampling.handlers.openai import OpenAISamplingHandler
@@
 @mcp.tool
-async def analyze_sentiment(text: str, ctx: Context) -> dict:
+async def analyze_sentiment(text: str, ctx: Context) -> dict[str, Any]:
@@
-    return result.result.model_dump()  # type: ignore[attr-defined]
+    analysis = cast(SentimentAnalysis, result.result)
+    return analysis.model_dump()
```
examples/advanced_sampling/tool_use.py (1)

20-66: Good import update; consider tightening the example typing and dropping type: ignore.

```diff
@@
 import asyncio

+from typing import Any, cast
+
 from pydantic import BaseModel, Field

 from fastmcp import Client, Context, FastMCP
 from fastmcp.client.sampling.handlers.openai import OpenAISamplingHandler
@@
 @mcp.tool
-async def research(question: str, ctx: Context) -> dict:
+async def research(question: str, ctx: Context) -> dict[str, Any]:
@@
-    return result.result.model_dump()  # type: ignore[attr-defined]
+    report = cast(ResearchReport, result.result)
+    return report.model_dump()
```
src/fastmcp/client/sampling/__init__.py (1)

44-67: Add type annotation for context parameter.

The context parameter on line 48 lacks a type annotation. While Python allows this, the coding guidelines specify full type annotations for Python ≥3.10 code.

```diff
 def create_sampling_callback(
     sampling_handler: SamplingHandler,
 ) -> SamplingFnT:
-    async def _sampling_handler(
-        context,
+    async def _sampling_handler(
+        context: RequestContext[SessionT, LifespanContextT],
         params: SamplingParams,
     ) -> CreateMessageResult | CreateMessageResultWithTools | mcp.types.ErrorData:
```

Note: The broad except Exception on line 63 is intentional here—the MCP protocol requires returning ErrorData rather than propagating exceptions from sampling handlers, so this catch-all is appropriate for protocol compliance.
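That error-wrapping intent can be sketched in isolation. Here ErrorData is a hypothetical stand-in for mcp.types.ErrorData, and -32603 is the standard JSON-RPC internal-error code; the real callback signature differs:

```python
import asyncio


class ErrorData:
    """Stand-in for mcp.types.ErrorData (assumed shape)."""

    def __init__(self, code: int, message: str) -> None:
        self.code = code
        self.message = message


def create_sampling_callback(handler):
    # Wrap a user-supplied handler so any exception becomes a protocol-level
    # ErrorData payload instead of propagating up through the session.
    async def _callback(context, params):
        try:
            return await handler(context, params)
        except Exception as exc:  # broad by design: every failure must become ErrorData
            return ErrorData(code=-32603, message=str(exc))

    return _callback


async def failing_handler(context, params):
    raise ValueError("boom")


out = asyncio.run(create_sampling_callback(failing_handler)(None, None))
print(type(out).__name__, out.message)  # ErrorData boom
```

The catch-all lives only inside the protocol boundary; handler authors can still raise ordinary exceptions and rely on the callback to translate them.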

src/fastmcp/client/sampling/handlers/openai.py (1)

138-144: Unreachable code path.

The messages parameter is typed as Sequence[SamplingMessage], so the isinstance(messages, str) check on line 138 will never be true. This appears to be defensive code for a case that cannot occur given the type signature.

Consider removing this unreachable branch to reduce confusion:

```diff
-        if isinstance(messages, str):
-            openai_messages.append(
-                ChatCompletionUserMessageParam(
-                    role="user",
-                    content=messages,
-                )
-            )
-
         if isinstance(messages, list):
```

Alternatively, if you want to support string inputs as a convenience, update the type signature to Sequence[SamplingMessage] | str.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41ec7ee and 0060531.

⛔ Files ignored due to path filters (3)
  • tests/client/sampling/__init__.py is excluded by none and included by none
  • tests/client/sampling/handlers/__init__.py is excluded by none and included by none
  • tests/client/sampling/handlers/test_openai_handler.py is excluded by none and included by none
📒 Files selected for processing (16)
  • docs/clients/sampling.mdx (2 hunks)
  • docs/servers/sampling.mdx (1 hunks)
  • examples/advanced_sampling/client_sampling_test.py (1 hunks)
  • examples/advanced_sampling/structured_output.py (1 hunks)
  • examples/advanced_sampling/tool_use.py (1 hunks)
  • examples/sampling_fallback.py (1 hunks)
  • src/fastmcp/client/client.py (2 hunks)
  • src/fastmcp/client/sampling.py (0 hunks)
  • src/fastmcp/client/sampling/__init__.py (1 hunks)
  • src/fastmcp/client/sampling/handlers/openai.py (1 hunks)
  • src/fastmcp/experimental/sampling/handlers/__init__.py (1 hunks)
  • src/fastmcp/experimental/sampling/handlers/base.py (0 hunks)
  • src/fastmcp/experimental/sampling/handlers/openai.py (1 hunks)
  • src/fastmcp/server/sampling/__init__.py (0 hunks)
  • src/fastmcp/server/sampling/handler.py (0 hunks)
  • src/fastmcp/server/server.py (3 hunks)
💤 Files with no reviewable changes (4)
  • src/fastmcp/server/sampling/handler.py
  • src/fastmcp/client/sampling.py
  • src/fastmcp/server/sampling/__init__.py
  • src/fastmcp/experimental/sampling/handlers/base.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Write Python code with Python ≥3.10 and include full type annotations
Use specific exception types in error handling - never use bare except
Prioritize readable, understandable code - clarity over cleverness; avoid obfuscated or confusing patterns even if they're shorter

Files:

  • examples/advanced_sampling/tool_use.py
  • src/fastmcp/experimental/sampling/handlers/__init__.py
  • src/fastmcp/experimental/sampling/handlers/openai.py
  • src/fastmcp/server/server.py
  • examples/advanced_sampling/structured_output.py
  • src/fastmcp/client/sampling/__init__.py
  • examples/sampling_fallback.py
  • examples/advanced_sampling/client_sampling_test.py
  • src/fastmcp/client/sampling/handlers/openai.py
  • src/fastmcp/client/client.py
docs/**/*.mdx

📄 CodeRabbit inference engine (docs/.cursor/rules/mintlify.mdc)

docs/**/*.mdx: Use clear, direct language appropriate for technical audiences
Write in second person ('you') for instructions and procedures in MDX documentation
Use active voice over passive voice in MDX technical documentation
Employ present tense for current states and future tense for outcomes in MDX documentation
Maintain consistent terminology throughout all MDX documentation
Keep sentences concise while providing necessary context in MDX documentation
Use parallel structure in lists, headings, and procedures in MDX documentation
Lead with the most important information using inverted pyramid structure in MDX documentation
Use progressive disclosure in MDX documentation: present basic concepts before advanced ones
Break complex procedures into numbered steps in MDX documentation
Include prerequisites and context before instructions in MDX documentation
Provide expected outcomes for each major step in MDX documentation
End sections with next steps or related information in MDX documentation
Use descriptive, keyword-rich headings for navigation and SEO in MDX documentation
Focus on user goals and outcomes rather than system features in MDX documentation
Anticipate common questions and address them proactively in MDX documentation
Include troubleshooting for likely failure points in MDX documentation
Provide multiple pathways (beginner vs advanced) but offer an opinionated path to avoid overwhelming users in MDX documentation
Always include complete, runnable code examples that users can copy and execute in MDX documentation
Show proper error handling and edge case management in MDX code examples
Use realistic data instead of placeholder values in MDX code examples
Include expected outputs and results for verification in MDX code examples
Test all code examples thoroughly before publishing in MDX documentation
Specify language and include filename when relevant in MDX code examples
Add explanatory comments for complex logic in MDX code examples
Document all API...

Files:

  • docs/clients/sampling.mdx
  • docs/servers/sampling.mdx
🧠 Learnings (1)
📚 Learning: 2025-12-13T19:58:20.851Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-13T19:58:20.851Z
Learning: Applies to tests/**/*.py : Pass FastMCP servers directly to clients using in-memory transport for testing; only use HTTP transport when explicitly testing network features

Applied to files:

  • src/fastmcp/server/server.py
🧬 Code graph analysis (6)
examples/advanced_sampling/tool_use.py (1)
src/fastmcp/client/sampling/handlers/openai.py (1)
  • OpenAISamplingHandler (45-417)
src/fastmcp/experimental/sampling/handlers/__init__.py (1)
src/fastmcp/client/sampling/handlers/openai.py (1)
  • OpenAISamplingHandler (45-417)
src/fastmcp/experimental/sampling/handlers/openai.py (1)
src/fastmcp/client/sampling/handlers/openai.py (1)
  • OpenAISamplingHandler (45-417)
examples/advanced_sampling/structured_output.py (1)
src/fastmcp/client/sampling/handlers/openai.py (1)
  • OpenAISamplingHandler (45-417)
examples/advanced_sampling/client_sampling_test.py (1)
src/fastmcp/client/sampling/handlers/openai.py (1)
  • OpenAISamplingHandler (45-417)
src/fastmcp/client/sampling/handlers/openai.py (3)
src/fastmcp/tools/tool.py (1)
  • Tool (123-272)
src/fastmcp/server/server.py (4)
  • name (368-369)
  • tool (1897-1912)
  • tool (1915-1930)
  • tool (1932-2075)
src/fastmcp/server/sampling/run.py (2)
  • tool_calls (85-92)
  • text (72-82)
🪛 Ruff (0.14.8)
src/fastmcp/client/sampling/__init__.py

62-62: Consider moving this statement to an else block

(TRY300)


63-63: Do not catch blind exception: Exception

(BLE001)

src/fastmcp/client/sampling/handlers/openai.py

39-42: Avoid specifying long messages outside the exception class

(TRY003)


60-60: Unused method argument: context

(ARG002)


286-286: Avoid specifying long messages outside the exception class

(TRY003)


295-295: Avoid specifying long messages outside the exception class

(TRY003)


306-306: Avoid specifying long messages outside the exception class

(TRY003)


352-352: Avoid specifying long messages outside the exception class

(TRY003)


360-360: Avoid specifying long messages outside the exception class

(TRY003)


394-397: Avoid specifying long messages outside the exception class

(TRY003)


410-410: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Run tests: Python 3.10 on ubuntu-latest
  • GitHub Check: Run tests: Python 3.10 on windows-latest
  • GitHub Check: Run tests: Python 3.13 on ubuntu-latest
  • GitHub Check: Run tests with lowest-direct dependencies
🔇 Additional comments (7)
docs/clients/sampling.mdx (1)

186-194: Import path update looks correct.

src/fastmcp/experimental/sampling/handlers/__init__.py (1)

1-5: Backwards-compatible re-export looks good.

src/fastmcp/experimental/sampling/handlers/openai.py (1)

1-5: Re-export approach is clean and keeps the legacy import path working.

src/fastmcp/client/client.py (1)

49-52: Type unification looks correct.

The import and type annotation changes consistently replace ClientSamplingHandler with the unified SamplingHandler type across the constructor parameter, the set_sampling_callback method, and the __all__ exports. The behavioral logic using create_sampling_callback remains unchanged.

Also applies to: 90-90, 249-249, 369-369

src/fastmcp/server/server.py (1)

97-97: Type unification applied correctly to server.

The server-side sampling handler type is now unified with the client-side type. The import correctly references the new canonical location at fastmcp.client.sampling.

Note: The generic [LifespanResultT] parameter was removed from the sampling handler type. This simplification is appropriate since the SamplingHandler type alias uses a TypeVar constrained to ClientSession | ServerSession, which handles the session polymorphism without needing the lifespan context type parameter.

Also applies to: 211-211, 291-291
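The constrained-TypeVar pattern that note describes can be sketched with stand-in session classes (names assumed; the real alias's signature is richer than shown):

```python
from collections.abc import Awaitable, Callable
from typing import TypeVar


# Stand-ins for the real session classes (assumptions)
class ClientSession: ...


class ServerSession: ...


# One TypeVar constrained to both session types lets a single
# SamplingHandler alias serve client-side and server-side callers,
# with no separate lifespan-context type parameter needed.
SessionT = TypeVar("SessionT", ClientSession, ServerSession)

SamplingHandler = Callable[..., Awaitable[str]]  # signature simplified

print(SessionT.__constraints__)
```

A constrained TypeVar (as opposed to a bound one) restricts SessionT to exactly those two session types, which is what makes the single alias sound for both sides.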

src/fastmcp/client/sampling/handlers/openai.py (2)

56-62: Unused context parameter is protocol-compliant.

The static analysis flags context as unused (ARG002). This is expected—the parameter is required by the SamplingHandler protocol signature, and not all handler implementations need to use it. The OpenAI handler can fulfill requests without accessing the MCP context.


82-92: > Likely an incorrect or invalid review comment.

Comment thread docs/clients/sampling.mdx

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/fastmcp/client/sampling/handlers/openai.py (2)

99-122: Improve clarity with elif statements.

The method uses sequential if statements to handle different types of model_preferences, but these cases are mutually exclusive. Using elif would make the logic clearer and signal that only one branch executes.

Apply this diff:

```diff
     @staticmethod
     def _iter_models_from_preferences(
         model_preferences: ModelPreferences | str | list[str] | None,
     ) -> Iterator[str]:
         if model_preferences is None:
             return

         if isinstance(model_preferences, str) and model_preferences in get_args(
             ChatModel
         ):
             yield model_preferences

-        if isinstance(model_preferences, list):
+        elif isinstance(model_preferences, list):
             yield from model_preferences

-        if isinstance(model_preferences, ModelPreferences):
+        elif isinstance(model_preferences, ModelPreferences):
             if not (hints := model_preferences.hints):
                 return

             for hint in hints:
                 if not (name := hint.name):
                     continue

                 yield name
```

124-270: Consider extracting duplicate text extraction logic.

The logic for extracting text from ToolResultContent.content is duplicated at lines 167-171 and 236-240. Consider extracting this into a small helper method to reduce duplication.

Example helper method to add:

```python
@staticmethod
def _extract_text_from_content(content: list[TextContent | Any]) -> str:
    """Extract text parts from content list."""
    result_texts = []
    for item in content:
        if isinstance(item, TextContent):
            result_texts.append(item.text)
    return "\n".join(result_texts)
```

Then replace both occurrences:

                         content_text = ""
                         if item.content:
-                            result_texts = []
-                            for sub_item in item.content:
-                                if isinstance(sub_item, TextContent):
-                                    result_texts.append(sub_item.text)
-                            content_text = "\n".join(result_texts)
+                            content_text = self._extract_text_from_content(item.content)

And similarly for lines 236-245.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0060531 and 1dd83b9.

📒 Files selected for processing (1)
  • src/fastmcp/client/sampling/handlers/openai.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Write Python code with Python ≥3.10 and include full type annotations
Use specific exception types in error handling - never use bare except
Prioritize readable, understandable code - clarity over cleverness; avoid obfuscated or confusing patterns even if they're shorter

Files:

  • src/fastmcp/client/sampling/handlers/openai.py
🧬 Code graph analysis (1)
src/fastmcp/client/sampling/handlers/openai.py (3)
src/fastmcp/tools/tool.py (1)
  • Tool (123-272)
src/fastmcp/server/server.py (4)
  • name (368-369)
  • tool (1897-1912)
  • tool (1915-1930)
  • tool (1932-2075)
src/fastmcp/server/sampling/run.py (2)
  • tool_calls (85-92)
  • text (72-82)
🪛 Ruff (0.14.8)
src/fastmcp/client/sampling/handlers/openai.py

39-42: Avoid specifying long messages outside the exception class

(TRY003)


60-60: Unused method argument: context

(ARG002)


268-268: Avoid specifying long messages outside the exception class

(TRY003)


277-277: Avoid specifying long messages outside the exception class

(TRY003)


288-288: Avoid specifying long messages outside the exception class

(TRY003)


334-334: Avoid specifying long messages outside the exception class

(TRY003)


342-342: Avoid specifying long messages outside the exception class

(TRY003)


376-379: Avoid specifying long messages outside the exception class

(TRY003)


392-392: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Run tests: Python 3.10 on ubuntu-latest
  • GitHub Check: Run tests: Python 3.10 on windows-latest
  • GitHub Check: Run tests: Python 3.13 on ubuntu-latest
  • GitHub Check: Run tests with lowest-direct dependencies
🔇 Additional comments (6)
src/fastmcp/client/sampling/handlers/openai.py (6)

45-54: LGTM!

The constructor properly initializes the client with sensible defaults and type annotations are complete.


272-288: LGTM!

The method properly validates the completion response and handles error cases with clear error messages.


290-298: LGTM!

The model selection logic correctly validates preferences against available ChatModel options and falls back to the default model when needed. The type ignore comment is justified.


300-320: LGTM!

The tool conversion properly handles the schema structure and ensures OpenAI-required fields are present with appropriate defaults.


336-399: LGTM!

The method comprehensively handles conversion from OpenAI completion to MCP result format, including proper error handling for JSON parsing, validation of required fields, and mapping of finish reasons to stop reasons.


56-62: The unused context parameter is required by the SamplingHandler protocol and is intentional.

The SamplingHandler TypeAlias (defined in src/fastmcp/client/sampling/__init__.py) requires three parameters, including RequestContext. The OpenAISamplingHandler.__call__ method implements this protocol, so the parameter must be present for interface compliance even though this implementation doesn't use it. No changes needed.

Comment thread src/fastmcp/client/sampling/handlers/openai.py Outdated
@marvin-context-protocol (bot) commented:

Test Failure Analysis

Summary: Integration test timed out due to a 429 rate limit error from the GitHub Copilot MCP API.

Root Cause: The test failure is unrelated to the PR changes. This PR only reorganizes sampling handler imports and doesn't affect the GitHub MCP remote integration tests. The failure occurred because:

  1. The test was making a request to https://api.githubcopilot.com/mcp/
  2. The external API returned a 429 (Too Many Requests) rate limit error
  3. The test timed out after 30 seconds waiting for a response that never completed
  4. The actual error only appeared in the teardown logs, not during the test execution

Suggested Solution:

This is a flaky test caused by external API rate limiting. Consider:

  1. Retry the workflow - The rate limit may have cleared
  2. Add rate limit handling to the test - Catch 429 errors and mark as xfail or skip
  3. Use mocking for this test instead of hitting the real API to avoid rate limits in CI

The PR changes are correct and the failure is environmental, not code-related.

Detailed Analysis

Error from logs:

```
httpx.HTTPStatusError: Client error '429 Too Many Requests' for url 'https://api.githubcopilot.com/mcp/'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
```

Test that failed: tests/integration_tests/test_github_mcp_remote.py::TestGithubMCPRemote::test_call_tool_list_commits

The test calls the list_commits tool on the GitHub Copilot MCP server. The request hung waiting for a response, triggering the 30-second pytest timeout; the 429 error only surfaced during async cleanup in the teardown phase.

Related Files
  • tests/integration_tests/test_github_mcp_remote.py - Contains the failing test
  • This test hits external GitHub Copilot API which has rate limits

@jlowin merged commit da77cfa into main on Dec 14, 2025
12 of 13 checks passed
@jlowin deleted the unified-sampling-handler branch on December 14, 2025 at 20:22