
Revert "Revert "fix(google-genai): migrate Google Generative AI instrumentation to googleapis/python-genai""#3289

Merged
galkleinman merged 3 commits into main from revert-3285-revert-3282-fix/google-genai-migration-3281
Aug 17, 2025

Conversation

@nirga
Member

@nirga nirga commented Aug 17, 2025

Reverts #3285


Important

Reverts previous revert to migrate Google Generative AI instrumentation to googleapis/python-genai, updating imports, dependencies, and tests.

  • Behavior:
    • Reverts previous revert, reinstating migration to googleapis/python-genai.
    • Updates imports from google.generativeai to google.genai in __init__.py and event_emitter.py.
    • Removes legacy support for google.generativeai package.
  • Dependencies:
    • Removes google-generativeai from pyproject.toml.
    • Updates instrumentation_dependencies() to return google-genai >= 1.0.0.
  • Tests:
    • Adds test_new_library_instrumentation.py to verify google-genai instrumentation.
    • Updates conftest.py to configure genai.Client and related fixtures.

This description was created by Ellipsis for 632ce0f. You can customize this summary. It will automatically update as commits are pushed.

Summary by CodeRabbit

  • Refactor
    • Standardized instrumentation on the latest Google GenAI client-based API, removing legacy support and simplifying wrapper selection.
  • Chores
    • Updated dependency to require google-genai >= 1.0.0; removed google-generativeai from optional and test dependencies.
  • Tests
    • Added coverage for instrumentation using the new client-based Models and AsyncModels APIs.
    • Updated test fixtures to use the client workflow and validated dependency reporting.

Note: Users should ensure google-genai >= 1.0.0 is installed; legacy google-generativeai is no longer supported.

@coderabbitai

coderabbitai Bot commented Aug 17, 2025

Walkthrough

Migrates instrumentation from google.generativeai to google.genai, removes legacy support and wrappers, updates imports and dependency declarations, and adds tests validating the new client-based API and wrapper selection. Instrumentation now exclusively targets google.genai Models/AsyncModels generate_content methods, with updated pyproject extras and tests reflecting the new API.

Changes

Cohort / File(s) Summary of Changes
Instrumentation core
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
Dropped legacy google.generativeai paths and checks; removed LEGACY_WRAPPED_METHODS; unified WRAPPED_METHODS targeting google.genai Models/AsyncModels; simplified _wrapped_methods to return WRAPPED_METHODS; adjusted _instrument/_uninstrument to iterate over _wrapped_methods; updated GenerateContentResponse import to google.genai.types; dependency function returns google-genai >= 1.0.0.
Event emitter import update
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
Updated GenerateContentResponse import path to google.genai.types; no functional logic changes.
Project configuration
packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml
Removed google-generativeai from test deps and extras; retained google-genai ^1.0.0 as the sole dependency.
Tests migration to google.genai
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py, packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
Switched imports to google.genai; moved to Client-based API and models access; added tests verifying wrapping of Models/AsyncModels, instrumentation_dependencies content, and _wrapped_methods contents; removed reliance on legacy configuration and GenerativeModel.

Sequence Diagram(s)

sequenceDiagram
  participant Dev as Application
  participant Inst as GoogleGenerativeAiInstrumentor
  participant Wrap as Wrapper
  participant GenAI as google.genai.Models

  Dev->>Inst: instrument()
  Inst->>GenAI: wrap Models.generate_content
  Dev->>GenAI: Models.generate_content(request)
  GenAI-->>Wrap: call intercepted
  Wrap-->>GenAI: invoke original
  GenAI-->>Wrap: response
  Wrap-->>Dev: traced response
sequenceDiagram
  participant Dev as Application
  participant Inst as GoogleGenerativeAiInstrumentor
  participant Wrap as Async Wrapper
  participant GenAI as google.genai.AsyncModels

  Dev->>Inst: instrument()
  Inst->>GenAI: wrap AsyncModels.generate_content (async)
  Dev->>GenAI: await AsyncModels.generate_content(request)
  GenAI-->>Wrap: call intercepted
  Wrap-->>GenAI: await original
  GenAI-->>Wrap: response
  Wrap-->>Dev: traced response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • doronkopit5

Poem

I thump my paw: old paths, begone!
New genai trails, we bound along.
Wrappers snug on models two,
Async hops and traces true.
Dependency carrots trimmed just right—
Ship it quick, then moonlit byte. 🥕✨


Contributor

@ellipsis-dev ellipsis-dev Bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 632ce0f in 38 seconds. Click for details.
  • Reviewed 221 lines of code in 5 files
  • Skipped 1 files when reviewing.
  • Skipped posting 7 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:7
  • Draft comment:
    Update import path for GenerateContentResponse to 'google.genai.types' (removes legacy import).
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it only states what was done without providing any suggestion or asking for confirmation. It doesn't align with the rules for useful comments.
2. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:36
  • Draft comment:
    Legacy wrapped methods block removed; ensure dropping support for 'google.generativeai' is acceptable.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the author to ensure that dropping support for 'google.generativeai' is acceptable. This falls under asking the author to confirm their intention, which is against the rules. Therefore, this comment should be removed.
3. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:257
  • Draft comment:
    Instrumentation dependencies now return a static 'google-genai >= 1.0.0'; confirm no legacy support needed.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to confirm that no legacy support is needed for a dependency change. This falls under the rule of not asking the author to confirm their intention or ensure behavior is intended. Therefore, this comment should be removed.
4. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:312
  • Draft comment:
    Loop now uses self._wrapped_methods() for consistency with dependency updates.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py:5
  • Draft comment:
    Update import for GenerateContentResponse to use 'google.genai.types'.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, suggesting an update to an import statement without providing any reasoning or context. It doesn't ask for a specific change or suggest an improvement, nor does it highlight a potential issue with the current code. Therefore, it violates the rule against making purely informative comments.
6. packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml:37
  • Draft comment:
    Dependency updated: removed 'google-generativeai' and now only 'google-genai'; verify compatibility.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is about a dependency change, specifically the removal of 'google-generativeai' and the use of 'google-genai'. It asks to verify compatibility, which is not allowed according to the rules. The comment does not provide a specific suggestion or point out a specific issue with the code.
7. packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py:55
  • Draft comment:
    Update genai_client fixture: use genai.Client(api_key) and return client.models per new API.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, describing a change in the code without suggesting any improvements or asking for clarification. It doesn't provide any actionable feedback or raise any concerns about the code.

Workflow ID: wflow_JBdhgr5Z9Stn0eIy

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (2)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1)

62-69: Be resilient to finish_reason being a string in the new SDK

In google-genai, finish_reason may already be a string. Accessing .name could raise AttributeError. Use getattr fallback.

Apply this diff:

-                finish_reason=candidate.finish_reason.name,
+                finish_reason=getattr(candidate.finish_reason, "name", candidate.finish_reason),
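To make the suggested fallback concrete, here is a minimal, self-contained demonstration of why `getattr(x, "name", x)` handles both shapes. `FinishReason` below is a stand-in enum for illustration; the real type lives in the google-genai SDK.

```python
from enum import Enum


class FinishReason(Enum):  # illustrative stand-in for the SDK's enum type
    STOP = 1


def normalize(finish_reason):
    # Enum members expose .name; plain strings have no .name attribute,
    # so getattr falls through to the default and returns them unchanged.
    return getattr(finish_reason, "name", finish_reason)


enum_value = normalize(FinishReason.STOP)  # enum input -> "STOP"
str_value = normalize("STOP")              # string input -> "STOP"
```

Either way the emitted event carries a plain string, so the change is backward compatible with SDK versions that still return an enum.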
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1)

279-287: Bug: AsyncModels.generate_content is async but is wrapped with the sync wrapper

The selection logic uses the method name to decide async wrapping, but both entries are named generate_content. AsyncModels.generate_content must use _awrap, otherwise it returns an unawaited coroutine and misses instrumentation.

Apply this diff to select by object name:

             wrap_function_wrapper(
                 wrap_package,
                 f"{wrap_object}.{wrap_method}",
-                (
-                    _awrap(tracer, event_logger, wrapped_method)
-                    if wrap_method == "generate_content_async"
-                    else _wrap(tracer, event_logger, wrapped_method)
-                ),
+                _awrap(tracer, event_logger, wrapped_method)
+                if wrap_object == "AsyncModels"
+                else _wrap(tracer, event_logger, wrapped_method),
             )

Optionally, encode this intent explicitly in WRAPPED_METHODS with an "is_async" flag to avoid future regressions.
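A sketch of that "is_async" idea, using illustrative field names (the actual `WRAPPED_METHODS` schema in the package may differ): the wrapper choice keys off an explicit flag rather than the method name, which is what caused the mismatch here, since both entries are named `generate_content`.

```python
# Illustrative method table: asynchrony is declared, not inferred from names.
WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "is_async": False,
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "is_async": True,
    },
]


def select_wrapper(entry, sync_wrapper, async_wrapper):
    """Pick the wrapper from the explicit flag, never the method name."""
    return async_wrapper if entry["is_async"] else sync_wrapper


# Both entries share the name "generate_content", yet each resolves correctly.
choices = [select_wrapper(m, "sync", "async") for m in WRAPPED_METHODS]
```

With the flag in the table, adding a future async method cannot silently fall into the sync branch.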

🧹 Nitpick comments (4)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1)

35-55: Handle non-string Content objects in prompts (compat with google-genai signatures)

google-genai accepts structured Content objects for contents; MessageEvent currently forwards whatever is passed (potentially non-serializable). Consider normalizing to strings or dicts when should_send_prompts() is True.

I can propose a conversion that detects google.genai.types.Content and maps it into a serializable dict via part_to_dict for each part; want me to draft it?
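One possible shape for that normalization, with hypothetical stand-in classes (`Part`/`Content` here only mimic the role/parts layout of google-genai content; the real conversion would dispatch on `google.genai.types.Content`):

```python
class Part:  # hypothetical stand-in for a content part with a text field
    def __init__(self, text):
        self.text = text


class Content:  # hypothetical stand-in mirroring the role/parts layout
    def __init__(self, role, parts):
        self.role = role
        self.parts = parts


def content_to_dict(content):
    """Normalize a prompt entry into a JSON-serializable dict."""
    if isinstance(content, str):
        # Bare strings are treated as a single-part user message.
        return {"role": "user", "parts": [{"text": content}]}
    return {
        "role": content.role,
        "parts": [{"text": part.text} for part in content.parts],
    }


normalized = content_to_dict(Content("user", [Part("hi")]))
```

Emitting dicts of this shape keeps MessageEvent payloads serializable regardless of which contents form the caller used.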

packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)

61-69: Fixture name suggests legacy but uses current instrumentation

instrument_legacy now wires the current (non-legacy) behavior. Consider renaming to instrument_default or instrument_current to avoid confusion.

I can send a small follow-up PR to rename and update references in tests.

packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)

54-60: Streaming detection may miss google-genai stream types

google-genai streaming responses may not be plain GeneratorType/AsyncGeneratorType. Consider a more general check (e.g., hasattr(response, "iter")/hasattr(response, "aiter")) or detecting known stream wrapper types.

I can draft a conservative helper that avoids misclassifying simple containers while catching generator-like streams.
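One conservative heuristic along those lines (an assumption-laden sketch, not the package's actual check): treat anything exposing `__aiter__`, or `__iter__` without `__len__`, as stream-like, so sized containers such as lists and strings are never misclassified.

```python
import types


def is_streaming_response(obj):
    """Heuristic stream detection; real SDK stream wrappers may need
    explicit type checks on top of this."""
    if isinstance(obj, (types.GeneratorType, types.AsyncGeneratorType)):
        return True
    if hasattr(obj, "__aiter__"):  # async-iterable wrapper types
        return True
    # Iterable but unsized: generator-like. Lists/strings define __len__.
    return hasattr(obj, "__iter__") and not hasattr(obj, "__len__")


def chunks():
    yield "partial"


streamy = is_streaming_response(chunks())  # generator -> True
plain = is_streaming_response([1, 2])      # sized container -> False
```

The trade-off: an unsized custom iterable would be flagged as a stream, which for instrumentation purposes is the safer failure mode.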


101-119: Type import used only for annotations; avoid hard import to reduce import-time failures

If the instrumentation package is imported without google-genai installed, this hard import (and the annotation use) will raise at import time. If you want to remain import-safe, gate the type under TYPE_CHECKING or use a typing alias fallback.

Proposed pattern:

-from google.genai.types import GenerateContentResponse
+from typing import TYPE_CHECKING, Any
+if TYPE_CHECKING:
+    from google.genai.types import GenerateContentResponse  # pragma: no cover
+else:
+    GenerateContentResponse = Any  # type: ignore

This preserves annotations while avoiding mandatory runtime dependency.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9668097 and 632ce0f.

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-google-generativeai/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (1 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (2 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🧬 Code Graph Analysis (1)
packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (3)
  • GoogleGenerativeAiInstrumentor (249-296)
  • instrumentation_dependencies (257-258)
  • _wrapped_methods (260-261)
🔇 Additional comments (5)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1)

5-5: Import path migration to google.genai.types looks good

Switching GenerateContentResponse to google.genai.types aligns with the rest of the migration.

packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (1)

47-47: Extras narrowed to google-genai only — aligned with removal of legacy support

This matches the migration and avoids pulling the deprecated google-generativeai package.

packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)

55-58: Fixture migrated to Client-based API is correct

Using genai.Client(api_key=...) and returning client.models matches the new library surface.

packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)

38-51: Wrapping targets updated to google.genai.Models/AsyncModels — good

The WRAPPED_METHODS list correctly points at google.genai.models.Models and AsyncModels with generate_content.


257-262: Deps narrowed to google-genai >= 1.0.0 — consistent with migration

The unified dependency declaration is clear and matches tests.

Comment on lines +8 to +41
def test_library_instrumentation():
    """Test that the google-genai library gets properly instrumented."""
    # Import the library
    from google import genai
    from google.genai.models import Models

    # Set up instrumentor
    instrumentor = GoogleGenerativeAiInstrumentor()

    # Verify methods are not wrapped initially
    assert not hasattr(Models.generate_content, '__wrapped__')

    try:
        instrumentor.instrument()

        # Verify methods are now wrapped
        assert hasattr(Models.generate_content, '__wrapped__')

        # Verify it's our wrapper
        wrapped_method = Models.generate_content
        assert callable(wrapped_method)

        # Test that we can create a client
        client = genai.Client(api_key="test_key")
        assert client is not None
        assert hasattr(client, 'models')
        assert isinstance(client.models, Models)

    finally:
        instrumentor.uninstrument()

        # Verify methods are unwrapped
        assert not hasattr(Models.generate_content, '__wrapped__')


🛠️ Refactor suggestion

Covers sync Models wrapping; add async coverage to catch wrapper type mismatches

Great assertions for sync Models. Add a test to verify AsyncModels.generate_content remains a coroutine (wrapped with an async wrapper). This will guard against accidentally applying a sync wrapper to async functions.

Apply this diff to extend coverage:

@@
 def test_library_instrumentation():
@@
         assert not hasattr(Models.generate_content, '__wrapped__')
@@
 
 
 def test_instrumentation_dependencies():
@@
     assert "google-genai >= 1.0.0" in deps[0]
 
@@
 def test_wrapped_methods():
@@
     assert "AsyncModels" in objects
+
+def test_async_models_wrapper_is_async():
+    """Ensure AsyncModels.generate_content is wrapped with an async wrapper."""
+    import inspect
+    from google.genai.models import AsyncModels
+
+    instrumentor = GoogleGenerativeAiInstrumentor()
+    try:
+        instrumentor.instrument()
+        assert hasattr(AsyncModels.generate_content, "__wrapped__")
+        # Wrapped function should still be a coroutine function
+        assert inspect.iscoroutinefunction(AsyncModels.generate_content)
+    finally:
+        instrumentor.uninstrument()
+        assert not hasattr(AsyncModels.generate_content, "__wrapped__")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def test_library_instrumentation():
    """Test that the google-genai library gets properly instrumented."""
    # Import the library
    from google import genai
    from google.genai.models import Models

    # Set up instrumentor
    instrumentor = GoogleGenerativeAiInstrumentor()

    # Verify methods are not wrapped initially
    assert not hasattr(Models.generate_content, '__wrapped__')

    try:
        instrumentor.instrument()

        # Verify methods are now wrapped
        assert hasattr(Models.generate_content, '__wrapped__')

        # Verify it's our wrapper
        wrapped_method = Models.generate_content
        assert callable(wrapped_method)

        # Test that we can create a client
        client = genai.Client(api_key="test_key")
        assert client is not None
        assert hasattr(client, 'models')
        assert isinstance(client.models, Models)

    finally:
        instrumentor.uninstrument()

        # Verify methods are unwrapped
        assert not hasattr(Models.generate_content, '__wrapped__')


def test_wrapped_methods():
    # existing setup…
    assert "AsyncModels" in objects


def test_async_models_wrapper_is_async():
    """Ensure AsyncModels.generate_content is wrapped with an async wrapper."""
    import inspect
    from google.genai.models import AsyncModels

    instrumentor = GoogleGenerativeAiInstrumentor()
    try:
        instrumentor.instrument()
        assert hasattr(AsyncModels.generate_content, "__wrapped__")
        # Wrapped function should still be a coroutine function
        assert inspect.iscoroutinefunction(AsyncModels.generate_content)
    finally:
        instrumentor.uninstrument()
        assert not hasattr(AsyncModels.generate_content, "__wrapped__")
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
around lines 8 to 41, add an async test branch that mirrors the sync assertions
for Models but targets AsyncModels: import AsyncModels from google.genai.models,
assert AsyncModels.generate_content is not wrapped initially, call
instrumentor.instrument(), then assert AsyncModels.generate_content is wrapped
and that the wrapped function is an async coroutine (use
inspect.iscoroutinefunction or asyncio.iscoroutinefunction on the wrapped
method) to ensure the wrapper preserves async behavior, and finally
instrumentor.uninstrument() and assert AsyncModels.generate_content is unwrapped
again.
