
Revert "fix(google-genai): migrate Google Generative AI instrumentation to googleapis/python-genai"#3285

Merged
galkleinman merged 1 commit into main from revert-3282-fix/google-genai-migration-3281
Aug 17, 2025

Conversation


@galkleinman galkleinman commented Aug 17, 2025

Reverts #3282


Important

Reverts migration of Google Generative AI instrumentation from google.genai back to google.generativeai, affecting imports, dependencies, and tests.

  • Revert Migration:
    • Reverts imports from google.genai back to google.generativeai in __init__.py and event_emitter.py.
    • Reverts WRAPPED_METHODS and LEGACY_WRAPPED_METHODS logic in __init__.py.
    • Reverts instrumentation_dependencies() and _wrapped_methods() logic in GoogleGenerativeAiInstrumentor.
  • Dependencies:
    • Re-adds google-generativeai to pyproject.toml dependencies and extras.
  • Tests:
    • Reverts test setup in conftest.py to use google.generativeai.
    • Removes test_new_library_instrumentation.py which tested the new package.

This description was created by Ellipsis for 2080019. You can customize this summary. It will automatically update as commits are pushed.

Summary by CodeRabbit

  • New Features
    • Added compatibility with both google-genai and google-generativeai clients, including async generate_content.
    • Automatic detection of installed Google Generative AI library to enable appropriate instrumentation.
    • Exposed constants to reflect wrapped methods for legacy and modern libraries.
  • Chores
    • Updated dependencies and extras to include google-generativeai (min version aligned).
  • Tests
    • Updated fixtures to use the current Generative AI client patterns.
    • Removed obsolete tests tied to the previous client API and dependency expectations.


coderabbitai Bot commented Aug 17, 2025

Walkthrough

Adds dual-library support for Google Generative AI instrumentation: introduces WRAPPED_METHODS and LEGACY_WRAPPED_METHODS, updates dependency detection via is_package_installed, adjusts wrapping/unwrapping loops, and updates GenerateContentResponse import paths. Test fixtures switch to google.generativeai usage; one test module is removed. pyproject.toml adds google-generativeai dependencies and extras.

Changes

Cohort / File(s) Summary
Instrumentation core
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
Adds WRAPPED_METHODS and LEGACY_WRAPPED_METHODS; dynamic instrumentation_dependencies via is_package_installed; _wrapped_methods selects by installed package; wrap/unwrap loops iterate WRAPPED_METHODS; updates GenerateContentResponse import path.
Event emitter import fix
.../opentelemetry/instrumentation/google_generativeai/event_emitter.py
Changes GenerateContentResponse import to google.generativeai.types.generation_types.
Project configuration
.../pyproject.toml
Adds google-generativeai dependency and test dependency; updates extras to include google-generativeai.
Tests migration
.../tests/conftest.py
Switches to import google.generativeai as genai; configures API key globally; returns GenerativeModel instance instead of client.models.
Tests removal
.../tests/test_new_library_instrumentation.py
Removes tests targeting google.genai Models/AsyncModels instrumentation and dependency assertions.

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant Instrumentor
  participant Utils as is_package_installed
  participant GenAI as google.genai
  participant GAI as google.generativeai

  User->>Instrumentor: instrument()
  Instrumentor->>Utils: is_package_installed("google.genai")
  alt google.genai installed
    Note over Instrumentor,GenAI: instrumentation_dependencies -> ["google-genai >= 0.1.0"]
  else google.generativeai installed
    Note over Instrumentor,GAI: instrumentation_dependencies -> ["google-generativeai >= 0.5.0"]
  else none
    Note over Instrumentor: instrumentation_dependencies -> []
  end
  Note over Instrumentor: Iterate WRAPPED_METHODS
  Instrumentor->>GenAI: wrap generate_content (sync)
  Instrumentor->>GenAI: wrap generate_content (async if defined)
  User->>Instrumentor: uninstrument()
  Instrumentor->>GenAI: unwrap methods from WRAPPED_METHODS
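The dependency-selection branch in the diagram corresponds to logic like the sketch below. The version pins are the ones shown in the diagram; `is_package_installed` is assumed to behave like the project's utility.

```python
import importlib.util
from typing import Collection


def is_package_installed(name: str) -> bool:
    # Stand-in for the project's utility of the same name.
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False


def instrumentation_dependencies() -> Collection[str]:
    """Report which instrumentable package (if any) must be present."""
    if is_package_installed("google.genai"):
        return ("google-genai >= 0.1.0",)
    if is_package_installed("google.generativeai"):
        return ("google-generativeai >= 0.5.0",)
    return ()
```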

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Suggested reviewers

  • doronkopit5
  • nirga

Poem

I thump my paws, two paths align,
GenAI and GenerativeAI, both now fine.
Wrappers hop where versions stray,
Spans sprout carrots all the way.
With tidy imports, deps in tow,
This bunny traces every flow. 🥕🐇


@ellipsis-dev ellipsis-dev Bot left a comment


Caution

Changes requested ❌

Reviewed everything up to 2080019 in 1 minute and 41 seconds. Click for details.
  • Reviewed 221 lines of code in 5 files
  • Skipped 1 file when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py:56
  • Draft comment:
    The genai_client fixture uses google.generativeai (Legacy API) while _instrument always iterates over WRAPPED_METHODS for google.genai. This mismatch may result in instrumentation not being applied as expected in tests.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
2. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:20
  • Draft comment:
    Typographical error: There's an extraneous closing parenthesis on line 20. It looks like it might have been left accidentally and should be removed to match the intended import block formatting.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_ET1ZZe8aNRsQBCsv

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

-for wrapped_method in self._wrapped_methods():
+for wrapped_method in WRAPPED_METHODS:


Use self._wrapped_methods() here instead of hardcoding WRAPPED_METHODS. This bypasses the branch that selects legacy methods based on the installed package.

Suggested change:
-for wrapped_method in WRAPPED_METHODS:
+for wrapped_method in self._wrapped_methods():

    },
]

WRAPPED_METHODS = [


Duplicate definitions of WRAPPED_METHODS detected. The second declaration (lines 69–82) overrides the first one, which may be unintended.

 def instrumentation_dependencies(self) -> Collection[str]:
-    return ("google-genai >= 1.0.0",)
+    if is_package_installed("google.genai"):
+        return ("google-genai >= 0.1.0",)


The instrumentation_dependencies method returns a tuple for 'google-genai' but a list for 'google.generativeai'. Consider using a consistent container type for clarity.

Suggested change:
-return ("google-genai >= 0.1.0",)
+return ["google-genai >= 0.1.0"]


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (2)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)

69-82: Duplicate WRAPPED_METHODS block overrides the previous definition.

This second definition silently replaces the above one. It will also discard the proposed is_async metadata, breaking async wrapping. Remove this duplicated block.

Apply this diff to remove the redundant block:

- WRAPPED_METHODS = [
-     {
-         "package": "google.genai.models",
-         "object": "Models",
-         "method": "generate_content",
-         "span_name": "gemini.generate_content",
-     },
-     {
-         "package": "google.genai.models",
-         "object": "AsyncModels",
-         "method": "generate_content",
-         "span_name": "gemini.generate_content",
-     },
- ]

315-328: Instrumentation ignores selected wrapper set and mis-identifies async methods.

  • The loop hardcodes WRAPPED_METHODS, bypassing the dynamic selection in _wrapped_methods(). As a result, installs with only google.generativeai won’t be wrapped.
  • Async detection via method name ("generate_content_async") misses google.genai AsyncModels.generate_content.

Apply this diff to honor dynamic selection and use explicit async flags:

-        for wrapped_method in WRAPPED_METHODS:
+        selected = self._wrapped_methods()
+        for wrapped_method in selected:
             wrap_package = wrapped_method.get("package")
             wrap_object = wrapped_method.get("object")
             wrap_method = wrapped_method.get("method")

             wrap_function_wrapper(
                 wrap_package,
                 f"{wrap_object}.{wrap_method}",
                 (
-                    _awrap(tracer, event_logger, wrapped_method)
-                    if wrap_method == "generate_content_async"
-                    else _wrap(tracer, event_logger, wrapped_method)
+                    _awrap(tracer, event_logger, wrapped_method)
+                    if wrapped_method.get("is_async")
+                    else _wrap(tracer, event_logger, wrapped_method)
                 ),
             )
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1)

288-294: Normalize return types for instrumentation_dependencies.

The function currently returns a tuple in one branch and a list in another. While both are Collections, returning the same type (tuple) improves consistency.

Apply this diff:

-        if is_package_installed("google.genai"):
-            return ("google-genai >= 0.1.0",)
-        elif is_package_installed("google.generativeai"):
-            return ["google-generativeai >= 0.5.0"]
-        else:
-            return []
+        if is_package_installed("google.genai"):
+            return ("google-genai >= 0.1.0",)
+        elif is_package_installed("google.generativeai"):
+            return ("google-generativeai >= 0.5.0",)
+        else:
+            return ()
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between a912e20 and 2080019.

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-google-generativeai/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (5 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (1 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (2 hunks)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (0 hunks)
💤 Files with no reviewable changes (1)
  • packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: Python code must conform to Flake8 linting rules
Do not hardcode API keys in source code; read them from environment variables or a secure vault

Files:

  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
  • packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py
  • packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🧬 Code Graph Analysis (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/utils.py (1)
  • is_package_installed (74-75)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build Packages (3.11)
  • GitHub Check: Lint
🔇 Additional comments (3)
packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (2)

40-42: Packaging/runtime alignment: ensure imports remain optional.

You added google-generativeai to test deps (good for tests), but core code (event_emitter and init) currently import google.generativeai unconditionally. Either:

  • adopt the suggested optional import guards in code (preferred), or
  • move google-generativeai to a runtime dependency or enforce via extras when the instrumentor is used.

Given dual-library support, optional guarded imports are the safer route to avoid import errors in environments with only google-genai installed.


48-48: LGTM: extras declare both instrumentable libs.

Declaring both "google-generativeai" and "google-genai" under instruments aligns with the dual support strategy.

packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)

55-59: LGTM: Fixture updated to the GenerativeModel API.

Using genai.configure and returning genai.GenerativeModel("gemini-1.5-flash") matches the google.generativeai usage and works with the instrumentor flow.

 from typing import Collection

-from google.genai.types import GenerateContentResponse
+from google.generativeai.types.generation_types import GenerateContentResponse


⚠️ Potential issue

Same import pitfall as event_emitter: hard dependency on google.generativeai.

This will raise when only google-genai is installed. Make it resilient as suggested below.

Apply this diff (and update the typing import to include Any, see note below):

-from google.generativeai.types.generation_types import GenerateContentResponse
+from typing import Any
+try:
+    from google.generativeai.types.generation_types import GenerateContentResponse  # type: ignore
+except Exception:  # pragma: no cover
+    try:
+        from google.genai.types import GenerateContentResponse  # type: ignore
+    except Exception:
+        GenerateContentResponse = Any  # type: ignore

Outside the selected line, amend the typing import to include Any:

-from typing import Collection
+from typing import Collection, Any
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (in packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py):

from typing import Collection, Any

try:
    from google.generativeai.types.generation_types import GenerateContentResponse  # type: ignore
except Exception:  # pragma: no cover
    try:
        from google.genai.types import GenerateContentResponse  # type: ignore
    except Exception:
        GenerateContentResponse = Any  # type: ignore
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
around line 7, the direct import from google.generativeai.types.generation_types
creates a hard dependency and will raise if only google-genai is installed;
replace the direct import with a safe optional import pattern (e.g., import
typing.Any and then either use a try/except ImportError to assign
GenerateContentResponse = Any on import failure, or guard the real import with
TYPE_CHECKING so runtime does not require google.generativeai), and update the
typing import at the top of the file to include Any so type annotations remain
valid.
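The guarded-import pattern suggested above degrades gracefully: with neither library installed, the name falls back to `typing.Any` so annotations stay valid. A runnable sketch of the suggested guard:

```python
from typing import Any

try:
    # Prefer the legacy library's response type when available.
    from google.generativeai.types.generation_types import GenerateContentResponse  # type: ignore
except Exception:  # pragma: no cover - library is optional
    try:
        # Fall back to the newer google-genai package.
        from google.genai.types import GenerateContentResponse  # type: ignore
    except Exception:
        # Neither library present: keep type annotations importable.
        GenerateContentResponse = Any  # type: ignore
```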

Comment on lines +54 to +68:

WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
]



🛠️ Refactor suggestion

Wrapper config lacks async metadata and is duplicated below; leads to incorrect wrapping.

  • Async detection is currently inferred from the method name "generate_content_async". This fails for google.genai’s AsyncModels.generate_content (async method with the same name as sync).
  • This list is duplicated again at Lines 69–82, overriding any updates you make here.

Add an explicit is_async flag and remove the duplicate definition.

Apply this diff to add explicit async flags:

 WRAPPED_METHODS = [
     {
         "package": "google.genai.models",
         "object": "Models",
         "method": "generate_content",
         "span_name": "gemini.generate_content",
+        "is_async": False,
     },
     {
         "package": "google.genai.models",
         "object": "AsyncModels",
         "method": "generate_content",
         "span_name": "gemini.generate_content",
+        "is_async": True,
     },
 ]

And similarly for LEGACY_WRAPPED_METHODS (above):

 LEGACY_WRAPPED_METHODS = [
     {
         "package": "google.generativeai.generative_models",
         "object": "GenerativeModel",
         "method": "generate_content",
         "span_name": "gemini.generate_content",
+        "is_async": False,
     },
     {
         "package": "google.generativeai.generative_models",
         "object": "GenerativeModel",
         "method": "generate_content_async",
         "span_name": "gemini.generate_content_async",
+        "is_async": True,
     },
 ]

Then delete the duplicate WRAPPED_METHODS block at Lines 69–82 (see next comment).
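The explicit `is_async` flag lets the wrap loop dispatch without inspecting method names, which matters because google.genai's sync and async methods share the name `generate_content`. A minimal sketch, using stand-in `_wrap`/`_awrap` factories rather than the real tracer-aware ones:

```python
WRAPPED_METHODS = [
    {"object": "Models", "method": "generate_content", "is_async": False},
    {"object": "AsyncModels", "method": "generate_content", "is_async": True},
]


def _wrap(cfg):
    # Stand-in for the real sync wrapper factory.
    return f"sync-wrapper:{cfg['object']}.{cfg['method']}"


def _awrap(cfg):
    # Stand-in for the real async wrapper factory.
    return f"async-wrapper:{cfg['object']}.{cfg['method']}"


def select_wrappers(methods):
    """Choose a wrapper per entry from the explicit flag,
    not from the method name (identical for both entries here)."""
    return [
        _awrap(cfg) if cfg.get("is_async") else _wrap(cfg)
        for cfg in methods
    ]
```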

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
        "is_async": False,
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
        "is_async": True,
    },
]
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
around lines 54 to 68, the WRAPPED_METHODS entries for google.genai.models lack
explicit async metadata (causing AsyncModels.generate_content to be misdetected)
and the same WRAPPED_METHODS block is duplicated later (lines 69–82) which
overrides changes; add an explicit "is_async": False to the Models entry and
"is_async": True to the AsyncModels entry, make the same explicit is_async
adjustments in the LEGACY_WRAPPED_METHODS section above, and remove the
duplicate WRAPPED_METHODS block at lines 69–82 so there is a single
authoritative definition.

 from typing import Union

-from google.genai.types import GenerateContentResponse
+from google.generativeai.types.generation_types import GenerateContentResponse


⚠️ Potential issue

Hard dependency on google.generativeai breaks legacy-only installs (ImportError risk).

This import forces google-generativeai at runtime even when only google-genai is installed, contradicting the dual-library support introduced in this PR. Make the import optional to keep runtime compatibility.

Apply this diff to make the type import resilient to either library (and fall back to Any when neither is present):

-from google.generativeai.types.generation_types import GenerateContentResponse
+from typing import Any
+try:
+    from google.generativeai.types.generation_types import GenerateContentResponse  # type: ignore
+except Exception:  # pragma: no cover
+    try:
+        from google.genai.types import GenerateContentResponse  # type: ignore
+    except Exception:
+        GenerateContentResponse = Any  # type: ignore

Additionally (outside the selected lines), consider updating signatures that accept event_logger to use Optional[EventLogger] instead of Union[EventLogger], since None is passed in some paths:

# outside selected lines
from typing import Optional
def emit_message_events(args, kwargs, event_logger: Optional[EventLogger]):
    ...
def emit_choice_events(response: GenerateContentResponse, event_logger: Optional[EventLogger]):
    ...
def emit_event(event: Union[MessageEvent, ChoiceEvent], event_logger: Optional[EventLogger]) -> None:
    ...
def _emit_message_event(event: MessageEvent, event_logger: Optional[EventLogger]) -> None:
    ...
def _emit_choice_event(event: ChoiceEvent, event_logger: Optional[EventLogger]) -> None:
    ...
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
around line 5, the direct import from google.generativeai.types.generation_types
creates a hard runtime dependency; wrap the import in a safe optional pattern
(use try/except ImportError or guard with typing.TYPE_CHECKING) and fall back to
typing.Any for GenerateContentResponse when the library is not available so
importing this module won't raise ImportError; additionally, update any function
signatures that accept event_logger to use Optional[EventLogger] instead of
Union[EventLogger] because None is passed in some call paths.
