
fix(google-genai): update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation #3329

Merged
nirga merged 2 commits into traceloop:main from ashwin153:patch-1 on Aug 22, 2025
Conversation

@ashwin153 (Contributor) commented Aug 20, 2025

  • Right now we use `wrap_method == "generate_content_async"` to decide whether to use `awrap` or `wrap`
  • This `wrap_method` doesn't exist, so we never call `awrap`
  • This leads to some weird behavior when trying to add input and output attributes to the span when using the async Gemini APIs
  • I tested this by monkey patching it locally and it works
  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Important

Fixes logic in Google Generative AI instrumentation to correctly use awrap for AsyncModels, ensuring proper async API handling.

  • Behavior:
    • Fixes logic in _instrument() in __init__.py to use awrap for AsyncModels instead of checking for non-existent wrap_method == "generate_content_async".
    • Ensures correct handling of input and output attributes for async Gemini APIs.
  • Misc:
    • No new tests or documentation updates included.
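The bug and the fix can be sketched as follows. This is a simplified, hypothetical reconstruction, not the package's actual code: the real `_wrap`/`_awrap` build OpenTelemetry spans, and `WRAPPED_METHODS` here is trimmed to the fields the selection logic needs. It only illustrates why the old string comparison never fired.

```python
# Simplified stand-ins for the instrumentation's wrapped-method registry
# and wrapper factories (names mirror the PR; bodies are illustrative).
WRAPPED_METHODS = [
    {"object": "Models", "method": "generate_content"},
    {"object": "AsyncModels", "method": "generate_content"},
]

def _wrap(tracer, event_logger, wrapped_method):
    return ("sync", wrapped_method["object"])

def _awrap(tracer, event_logger, wrapped_method):
    return ("async", wrapped_method["object"])

def select_wrapper(wrapped_method, tracer=None, event_logger=None):
    wrap_object = wrapped_method["object"]
    wrap_method = wrapped_method["method"]
    # Old check: wrap_method is always "generate_content", so the string
    # "generate_content_async" never matches and _awrap is never chosen.
    buggy = (
        _awrap(tracer, event_logger, wrapped_method)
        if wrap_method == "generate_content_async"
        else _wrap(tracer, event_logger, wrapped_method)
    )
    # Fixed check: route by the wrapped object's class name instead, so
    # AsyncModels gets the async wrapper.
    fixed = (
        _awrap(tracer, event_logger, wrapped_method)
        if wrap_object == "AsyncModels"
        else _wrap(tracer, event_logger, wrapped_method)
    )
    return buggy, fixed

for method in WRAPPED_METHODS:
    print(method["object"], select_wrapper(method))
```

With the old check, both entries route to the sync wrapper; with the fixed check, the `AsyncModels` entry routes to the async one.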

This description was created by Ellipsis for c7f62a4.

Summary by CodeRabbit

  • Bug Fixes
    • Corrected instrumentation for asynchronous usage of Google Generative AI, ensuring async and streaming calls are traced reliably.
    • Maintains existing behavior for synchronous calls with no impact on current setups.
    • No public API changes.

…ogle Generative AI Instrumentation

- Right now we use `wrap_method == "generate_content_async"` to decide whether to use `awrap` or `wrap`
- This `wrap_method` doesn't exist, so we never call `awrap`
- This leads to some weird behavior when trying to add input and output attributes to the span when using the async Gemini APIs
@CLAassistant

CLAassistant commented Aug 20, 2025

CLA assistant check
All committers have signed the CLA.

@coderabbitai

coderabbitai Bot commented Aug 20, 2025

Walkthrough

The instrumentation updates change how async vs sync wrappers are selected: instead of checking a method name, the code now checks whether the target belongs to AsyncModels to apply _awrap, and to Models for _wrap. This alters the control flow for async streaming handling only; other logic remains the same.

Changes

  • Instrumentation routing update — packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py: Switches wrapper selection from a method-name check to a class/object check. AsyncModels routes to _awrap; Models continues to use _wrap. Adjusts control flow specific to async streaming.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant I as Instrumentor
    participant M as Models
    participant AM as AsyncModels
    participant W as _wrap (sync)
    participant AW as _awrap (async)

    Note over I: During _instrument()
    I->>M: Detect target (Models)
    I->>W: Apply sync wrapper
    Note over M,W: generate_content (sync) wrapped

    I->>AM: Detect target (AsyncModels)
    I->>AW: Apply async wrapper
    Note over AM,AW: generate_content (async) wrapped

    rect rgba(230,245,255,0.5)
    Note right of AM: Changed logic: selection by class (AsyncModels) rather than method name
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • galkleinman

Poem

In cables and clouds, I twitch my ears,
Swapping paths for async gears—
Models hop left, Async bounds right,
Wrappers snug in day or night.
Trace by trace, I tap my paw,
Instrumented streams—no flaw! 🐇✨


@ashwin153 ashwin153 changed the title fix: update logic for deciding whether to use awrap or wrap in the Go… fix: update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation Aug 20, 2025
@ashwin153 ashwin153 changed the title fix: update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation fix(instrumentation): update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation Aug 20, 2025
Contributor

@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to c7f62a4 in 1 minute and 27 seconds.
  • Reviewed 13 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
1. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:282
  • Draft comment:
    The change correctly switches from checking a non-existent method name ('generate_content_async') to verifying if the wrapping object is 'AsyncModels'. This fix ensures the async instrumentation uses _awrap as intended.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, explaining what the change does without providing any actionable feedback or suggestions. It doesn't ask for confirmation or suggest improvements, so it should be removed according to the rules.

Workflow ID: wflow_9qP9nA4XEyGkuTqx


@ashwin153 ashwin153 changed the title fix(instrumentation): update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation fix(google-generativeai): update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation Aug 20, 2025

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)

279-287: Make async/sync selection data-driven (avoid string checks)

Relying on "AsyncModels" as a string is brittle. Prefer encoding async-ness in WRAPPED_METHODS and branching on that flag. This improves robustness if class names evolve.

Apply this diff within the selected lines:

                 (
-                    _awrap(tracer, event_logger, wrapped_method)
-                    if wrap_object == "AsyncModels"
-                    else _wrap(tracer, event_logger, wrapped_method)
+                    _awrap(tracer, event_logger, wrapped_method)
+                    if wrapped_method.get("is_async")
+                    else _wrap(tracer, event_logger, wrapped_method)
                 ),

And update WRAPPED_METHODS accordingly (outside this hunk):

WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
        "is_async": False,
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
        "is_async": True,
    },
]

279-287: Add regression tests to cover the async path (awrap) and streaming

The bug surfaced because _awrap was never exercised. Add tests that:

  • Verify AsyncModels.generate_content is wrapped with _awrap (returns a coroutine).
  • Validate span attributes are set and span ends for:
    • non-streaming async response
    • async streaming response (AsyncGenerator), ensuring span ends after iteration.

I can provide a test that injects a fake google.genai.models module into sys.modules, instruments, and asserts wrapper behavior without hitting external services.

Do you want me to draft the test module with a fake AsyncModels and Models, plus a minimal in-memory tracer provider to assert spans?
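The regression-test idea above can be sketched in miniature. This is a hypothetical stand-in, not the repository's real test: `FakeAsyncModels` and `awrap` are invented here to show the shape of the check — that an async wrapper applied to an `AsyncModels`-style method preserves awaitability and brackets the awaited call the way a span would.

```python
import asyncio

calls = []

class FakeAsyncModels:
    """Stand-in for google.genai.models.AsyncModels (hypothetical)."""
    async def generate_content(self, prompt):
        return f"response to {prompt}"

def awrap(func):
    # Simplified async wrapper: where the real _awrap would start and end
    # an OpenTelemetry span, this just records the bracketing order.
    async def wrapper(self, *args, **kwargs):
        calls.append("span-start")
        result = await func(self, *args, **kwargs)
        calls.append("span-end")
        return result
    return wrapper

# Apply the wrapper the way an instrumentor would patch the class.
FakeAsyncModels.generate_content = awrap(FakeAsyncModels.generate_content)

result = asyncio.run(FakeAsyncModels().generate_content("hi"))
print(result)
print(calls)
```

A fuller version along CodeRabbit's suggestion would inject a fake google.genai.models module into sys.modules, run the real instrumentor, and assert on spans from an in-memory tracer provider; the streaming case would additionally check that the "span-end" entry only appears after the async generator is exhausted.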

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between fd5e4dc and c7f62a4.

📒 Files selected for processing (1)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1 hunks)
🔇 Additional comments (3)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3)

279-287: Correct fix: route AsyncModels to the async wrapper

Switching the wrapper selection to wrap_object == "AsyncModels" correctly ensures _awrap is used for the async client and fixes the bug where awrap was never called.


279-287: Symbol names verified for google-genai v1.x
Both google.genai.models.Models.generate_content (sync) and google.genai.models.AsyncModels.generate_content (async) are present in the v1.x SDK, matching the wrapped methods. No changes needed.


279-287: No stale generate_content_async references in google_generativeai instrumentation

A repo-wide search shows all occurrences of generate_content_async live outside the google_generativeai package (in the Vertex AI instrumentation and sample-app examples) and are intentional. The google_generativeai instrumentation now only wraps Models.generate_content and AsyncModels.generate_content. No further changes are needed.

@ashwin153 ashwin153 changed the title fix(google-generativeai): update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation fix(google-genai): update logic for deciding whether to use awrap or wrap in the Google Generative AI Instrumentation Aug 20, 2025
@ashwin153 (Contributor, Author)

ashwin153 commented Aug 22, 2025

@nirga @galkleinman Saw you guys recently made changes to the Google Generative AI instrumentation. Would you guys be the right people to review this change? The instrumentation doesn't work with async usage of the google-genai package right now which is making it hard for us to debug issues we're having in our agents.

Member

@nirga nirga left a comment

Thanks @ashwin153!

@nirga nirga merged commit 8fefb87 into traceloop:main Aug 22, 2025
9 checks passed