fix(google-genai): update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation #3329
Conversation
- Right now we use `wrap_method == "generate_content_async"` to decide whether to use `awrap` or `wrap`
- This `wrap_method` doesn't exist, so we never call `awrap`
- This leads to some weird behavior when trying to add input and output attributes to the span when using the async Gemini APIs
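The failure mode described above can be sketched outside the SDK: when a sync wrapper is applied to an async method, the wrapper body runs before the coroutine is awaited, so it sees a coroutine object rather than the response. The `sync_wrapper`/`async_wrapper` names below are illustrative stand-ins for the package's `_wrap`/`_awrap`, not its real API.

```python
import asyncio

# Illustrative stand-ins for the instrumentation's _wrap/_awrap helpers;
# these names are hypothetical, not the package's real API.
def sync_wrapper(fn):
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        # For an async fn, `result` is an unawaited coroutine here, so any
        # attempt to read response attributes for span enrichment fails.
        return result, type(result).__name__
    return wrapper

def async_wrapper(fn):
    async def wrapper(*args, **kwargs):
        result = await fn(*args, **kwargs)
        # Here `result` is the awaited response, so attributes are readable.
        return result, type(result).__name__
    return wrapper

async def generate_content():
    return "response-text"

coro, seen_by_sync = sync_wrapper(generate_content)()
asyncio.run(coro)  # clean up the unawaited coroutine
_, seen_by_async = asyncio.run(async_wrapper(generate_content)())
print(seen_by_sync, seen_by_async)  # coroutine str
```

This is why the span attributes came out wrong for async calls: the sync wrapper was inspecting a coroutine, not a response.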
Walkthrough
The instrumentation updates change how async vs. sync wrappers are selected: instead of checking a method name, the code now checks whether the target belongs to `AsyncModels` to apply `_awrap`, and to `Models` for `_wrap`. This alters the control flow for async streaming handling only; other logic remains the same.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant I as Instrumentor
    participant M as Models
    participant AM as AsyncModels
    participant W as _wrap (sync)
    participant AW as _awrap (async)
    Note over I: During _instrument()
    I->>M: Detect target (Models)
    I->>W: Apply sync wrapper
    Note over M,W: generate_content (sync) wrapped
    I->>AM: Detect target (AsyncModels)
    I->>AW: Apply async wrapper
    Note over AM,AW: generate_content (async) wrapped
    rect rgba(230,245,255,0.5)
        Note right of AM: Changed logic: selection by class (AsyncModels) rather than method name
    end
```
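Concretely, the selection the diagram describes can be sketched as follows. This is a minimal sketch: the `WRAPPED_METHODS` dict shape is an assumption about the file's wrapper table, not its exact contents.

```python
# Minimal sketch of the fixed selection: branch on the class ("object") the
# method belongs to, not on a method name that never occurs. The dict shape
# here is assumed, not copied from the instrumentation file.
WRAPPED_METHODS = [
    {"object": "Models", "method": "generate_content"},
    {"object": "AsyncModels", "method": "generate_content"},
]

def select_wrapper(wrapped_method, wrap, awrap):
    # Old, broken check: wrapped_method["method"] == "generate_content_async"
    # (that value never appears in the table, so `awrap` was never chosen).
    return awrap if wrapped_method["object"] == "AsyncModels" else wrap
```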
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Important
Looks good to me! 👍
Reviewed everything up to c7f62a4 in 1 minute and 27 seconds.
- Reviewed 13 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment; view it below
1. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:282
- Draft comment: The change correctly switches from checking a non-existent method name ('generate_content_async') to verifying if the wrapping object is 'AsyncModels'. This fix ensures the async instrumentation uses _awrap as intended.
- Reason this comment was not posted: Comment did not seem useful. Confidence it is useful = 0% <= threshold 50%. This comment is purely informative, explaining what the change does without providing any actionable feedback or suggestions. It doesn't ask for confirmation or suggest improvements, so it should be removed according to the rules.
Actionable comments posted: 0
🧹 Nitpick comments (2)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)
279-287: Make async/sync selection data-driven (avoid string checks)

Relying on `"AsyncModels"` as a string is brittle. Prefer encoding async-ness in `WRAPPED_METHODS` and branching on that flag. This improves robustness if class names evolve. Apply this diff within the selected lines:

```diff
             (
-                _awrap(tracer, event_logger, wrapped_method)
-                if wrap_object == "AsyncModels"
-                else _wrap(tracer, event_logger, wrapped_method)
+                _awrap(tracer, event_logger, wrapped_method)
+                if wrapped_method.get("is_async")
+                else _wrap(tracer, event_logger, wrapped_method)
             ),
```

And update `WRAPPED_METHODS` accordingly (outside this hunk):

```python
WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
        "is_async": False,
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
        "is_async": True,
    },
]
```
279-287: Add regression tests to cover the async path (_awrap) and streaming

The bug surfaced because `_awrap` was never exercised. Add tests that:
- Verify `AsyncModels.generate_content` is wrapped with `_awrap` (returns a coroutine).
- Validate span attributes are set and the span ends for:
  - a non-streaming async response
  - an async streaming response (AsyncGenerator), ensuring the span ends after iteration.

I can provide a test that injects a fake `google.genai.models` module into `sys.modules`, instruments, and asserts wrapper behavior without hitting external services. Do you want me to draft the test module with a fake `AsyncModels` and `Models`, plus a minimal in-memory tracer provider to assert spans?
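As a starting point, here is a self-contained sketch of such a test. The `_awrap` below is a stand-in (the real wrapper also starts a span and records attributes), since exercising the actual instrumentor end-to-end would additionally require the OpenTelemetry SDK and an in-memory span exporter.

```python
import asyncio
import inspect
import sys
import types
import unittest

def _awrap(fn):
    # Stand-in for the instrumentation's async wrapper; the real one also
    # opens a span and records attributes around the awaited call.
    async def wrapper(*args, **kwargs):
        return await fn(*args, **kwargs)
    return wrapper

class TestAsyncWrapping(unittest.TestCase):
    def setUp(self):
        # Inject a fake google.genai.models so no SDK or network is needed.
        fake = types.ModuleType("google.genai.models")

        class AsyncModels:
            async def generate_content(self):
                return "ok"

        fake.AsyncModels = AsyncModels
        sys.modules["google.genai.models"] = fake

    def test_awrap_preserves_coroutine_behavior(self):
        models = sys.modules["google.genai.models"]
        models.AsyncModels.generate_content = _awrap(
            models.AsyncModels.generate_content
        )
        call = models.AsyncModels().generate_content()
        # The wrapped method must still return an awaitable coroutine...
        self.assertTrue(inspect.iscoroutine(call))
        # ...and awaiting it must yield the underlying response.
        self.assertEqual(asyncio.run(call), "ok")
```

A span-asserting variant would swap the stand-in `_awrap` for the real one and pair it with an in-memory exporter to check attributes and span lifetime.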
📒 Files selected for processing (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🔇 Additional comments (3)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3)
279-287: Correct fix: route AsyncModels to the async wrapper
Switching the wrapper selection to `wrap_object == "AsyncModels"` correctly ensures `_awrap` is used for the async client and fixes the bug where `_awrap` was never called.
279-287: Symbol names verified for google-genai v1.x
Both `google.genai.models.Models.generate_content` (sync) and `google.genai.models.AsyncModels.generate_content` (async) are present in the v1.x SDK, matching the wrapped methods. No changes needed.
279-287: No stale `generate_content_async` references in google_generativeai instrumentation
A repo-wide search shows all occurrences of `generate_content_async` live outside the google_generativeai package (in the Vertex AI instrumentation and sample-app examples) and are intentional. The google_generativeai instrumentation now only wraps `Models.generate_content` and `AsyncModels.generate_content`. No further changes are needed.
@nirga @galkleinman Saw you recently made changes to the Google Generative AI instrumentation. Would you be the right people to review this change? The instrumentation doesn't work with async usage of the google-genai package right now, which is making it hard for us to debug issues we're having in our agents.
Important
Fixes logic in Google Generative AI instrumentation to correctly use `_awrap` for `AsyncModels`, ensuring proper async API handling.
- Updates `_instrument()` in `__init__.py` to use `_awrap` for `AsyncModels` instead of checking for the non-existent `wrap_method == "generate_content_async"`.

This description was created by Ellipsis for c7f62a4.