Revert "fix(google-genai): migrate Google Generative AI instrumentation to googleapis/python-genai" (#3285)
Conversation
This reverts commit a912e20.
Walkthrough

Adds dual-library support for Google Generative AI instrumentation: introduces WRAPPED_METHODS and LEGACY_WRAPPED_METHODS, updates dependency detection via is_package_installed, adjusts the wrapping/unwrapping loops, and updates GenerateContentResponse import paths. Test fixtures switch to google.generativeai usage; one test module is removed. pyproject.toml adds google-generativeai dependencies and extras.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Instrumentor
    participant Utils as is_package_installed
    participant GenAI as google.genai
    participant GAI as google.generativeai
    User->>Instrumentor: instrument()
    Instrumentor->>Utils: is_package_installed("google.genai")
    alt google.genai installed
        Note over Instrumentor,GenAI: instrumentation_dependencies -> ["google-genai >= 0.1.0"]
    else google.generativeai installed
        Note over Instrumentor,GAI: instrumentation_dependencies -> ["google-generativeai >= 0.5.0"]
    else none
        Note over Instrumentor: instrumentation_dependencies -> []
    end
    Note over Instrumentor: Iterate WRAPPED_METHODS
    Instrumentor->>GenAI: wrap generate_content (sync)
    Instrumentor->>GenAI: wrap generate_content (async if defined)
    User->>Instrumentor: uninstrument()
    Instrumentor->>GenAI: unwrap methods from WRAPPED_METHODS
```
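The selection flow in the diagram can be sketched as follows. This is a minimal, self-contained sketch: the method tables are abbreviated, and the `installed` predicate is injectable so the logic can be exercised without either google library present (the real code uses `is_package_installed` from the package's utils).

```python
# Sketch of the dual-library selection described in the diagram above.
# `installed` stands in for utils.is_package_installed so the branching
# can be tested without google.genai or google.generativeai installed.
import importlib.util


def is_package_installed(name: str) -> bool:
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # find_spec raises when a parent package (e.g. "google") is absent
        return False


WRAPPED_METHODS = [
    {"package": "google.genai.models", "object": "Models",
     "method": "generate_content"},
    {"package": "google.genai.models", "object": "AsyncModels",
     "method": "generate_content"},
]

LEGACY_WRAPPED_METHODS = [
    {"package": "google.generativeai.generative_models",
     "object": "GenerativeModel", "method": "generate_content"},
    {"package": "google.generativeai.generative_models",
     "object": "GenerativeModel", "method": "generate_content_async"},
]


def wrapped_methods(installed=is_package_installed):
    # Prefer the new library, fall back to the legacy one, else nothing.
    if installed("google.genai"):
        return WRAPPED_METHODS
    if installed("google.generativeai"):
        return LEGACY_WRAPPED_METHODS
    return []


def instrumentation_dependencies(installed=is_package_installed):
    if installed("google.genai"):
        return ("google-genai >= 0.1.0",)
    if installed("google.generativeai"):
        return ("google-generativeai >= 0.5.0",)
    return ()
```

Injecting the predicate makes each branch deterministic in tests, which is exactly the mismatch the review comments below flag when the loop hardcodes one table.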
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Caution
Changes requested ❌
Reviewed everything up to 2080019 in 1 minute and 41 seconds.

- Reviewed 221 lines of code in 5 files
- Skipped 1 file when reviewing
- Skipped posting 2 draft comments; view those below.
1. packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py:56
- Draft comment: The genai_client fixture uses google.generativeai (legacy API) while _instrument always iterates over WRAPPED_METHODS for google.genai. This mismatch may result in instrumentation not being applied as expected in tests.
- Reason this comment was not posted: Comment looked like it was already resolved.
2. packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py:20
- Draft comment: Typographical error: there is an extraneous closing parenthesis on line 20. It looks like it was left there accidentally and should be removed to match the intended import block formatting.
- Reason this comment was not posted: Comment was on unchanged code.
```diff
 )

-for wrapped_method in self._wrapped_methods():
+for wrapped_method in WRAPPED_METHODS:
```
Use self._wrapped_methods() here instead of hardcoding WRAPPED_METHODS. This bypasses the branch that selects legacy methods based on the installed package.
```diff
-for wrapped_method in WRAPPED_METHODS:
+for wrapped_method in self._wrapped_methods():
```
```diff
 },
 ]

+WRAPPED_METHODS = [
```
Duplicate definitions of WRAPPED_METHODS detected. The second declaration (lines 69–82) overrides the first one, which may be unintended.
```diff
 def instrumentation_dependencies(self) -> Collection[str]:
-    return ("google-genai >= 1.0.0",)
+    if is_package_installed("google.genai"):
+        return ("google-genai >= 0.1.0",)
```
The instrumentation_dependencies method returns a tuple for 'google-genai' but a list for 'google.generativeai'. Consider using a consistent container type for clarity.
```diff
-return ("google-genai >= 0.1.0",)
+return ["google-genai >= 0.1.0"]
```
Actionable comments posted: 3
🔭 Outside diff range comments (2)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (2)
69-82: Duplicate WRAPPED_METHODS block overrides the previous definition. This second definition silently replaces the one above and will also discard the proposed is_async metadata, breaking async wrapping. Remove this duplicated block.
Apply this diff to remove the redundant block:
```diff
-WRAPPED_METHODS = [
-    {
-        "package": "google.genai.models",
-        "object": "Models",
-        "method": "generate_content",
-        "span_name": "gemini.generate_content",
-    },
-    {
-        "package": "google.genai.models",
-        "object": "AsyncModels",
-        "method": "generate_content",
-        "span_name": "gemini.generate_content",
-    },
-]
```
315-328: Instrumentation ignores selected wrapper set and mis-identifies async methods.
- The loop hardcodes WRAPPED_METHODS, bypassing the dynamic selection in _wrapped_methods(). As a result, installs with only google.generativeai won’t be wrapped.
- Async detection via method name ("generate_content_async") misses google.genai AsyncModels.generate_content.
Apply this diff to honor dynamic selection and use explicit async flags:
```diff
-        for wrapped_method in WRAPPED_METHODS:
+        selected = self._wrapped_methods()
+        for wrapped_method in selected:
             wrap_package = wrapped_method.get("package")
             wrap_object = wrapped_method.get("object")
             wrap_method = wrapped_method.get("method")
             wrap_function_wrapper(
                 wrap_package,
                 f"{wrap_object}.{wrap_method}",
                 (
-                    _awrap(tracer, event_logger, wrapped_method)
-                    if wrap_method == "generate_content_async"
-                    else _wrap(tracer, event_logger, wrapped_method)
+                    _awrap(tracer, event_logger, wrapped_method)
+                    if wrapped_method.get("is_async")
+                    else _wrap(tracer, event_logger, wrapped_method)
                 ),
             )
```
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1)
288-294: Normalize return types for instrumentation_dependencies. The function currently returns a tuple in one branch and a list in another. While both are Collections, returning the same type (tuple) improves consistency.
Apply this diff:
```diff
-    if is_package_installed("google.genai"):
-        return ("google-genai >= 0.1.0",)
-    elif is_package_installed("google.generativeai"):
-        return ["google-generativeai >= 0.5.0"]
-    else:
-        return []
+    if is_package_installed("google.genai"):
+        return ("google-genai >= 0.1.0",)
+    elif is_package_installed("google.generativeai"):
+        return ("google-generativeai >= 0.5.0",)
+    else:
+        return ()
```
📜 Review details
⛔ Files ignored due to path filters (1)
packages/opentelemetry-instrumentation-google-generativeai/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (5 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1 hunk)
- packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (1 hunk)
- packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (2 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (0 hunks)
💤 Files with no reviewable changes (1)
- packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: Python code must conform to Flake8 linting rules
Do not hardcode API keys in source code; read them from environment variables or a secure vault
Files:
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
- packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py
- packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
🧬 Code Graph Analysis (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (1)
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/utils.py (1)
is_package_installed (74-75)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Build Packages (3.11)
- GitHub Check: Lint
🔇 Additional comments (3)
packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (2)
40-42: Packaging/runtime alignment: ensure imports remain optional. You added google-generativeai to test deps (good for tests), but core code (event_emitter.py and __init__.py) currently imports google.generativeai unconditionally. Either:
- adopt the suggested optional import guards in code (preferred), or
- move google-generativeai to a runtime dependency or enforce via extras when the instrumentor is used.
Given dual-library support, optional guarded imports are the safer route to avoid import errors in environments with only google-genai installed.
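A guarded import along those lines could look like the following sketch. It prefers the legacy library, falls back to the new one, and degrades to typing.Any when neither is importable, so the module itself never raises ImportError at import time.

```python
# Sketch of the optional import guard discussed above: the module stays
# importable in environments with only one (or neither) google library.
from typing import Any

try:
    from google.generativeai.types.generation_types import GenerateContentResponse
except ImportError:
    try:
        from google.genai.types import GenerateContentResponse
    except ImportError:
        # Type-only fallback; runtime code paths that need the class
        # are only reached when one of the libraries is installed.
        GenerateContentResponse = Any
```

The trade-off is that GenerateContentResponse becomes Any for type checkers in the fallback case, which is acceptable here since it is only used in annotations and isinstance-free code paths.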
48-48: LGTM: extras declare both instrumentable libs. Declaring both "google-generativeai" and "google-genai" under instruments aligns with the dual support strategy.
packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (1)
55-59: LGTM: Fixture updated to the GenerativeModel API. Using genai.configure and returning genai.GenerativeModel("gemini-1.5-flash") matches the google.generativeai usage and works with the instrumentor flow.
```diff
 from typing import Collection

-from google.genai.types import GenerateContentResponse
+from google.generativeai.types.generation_types import GenerateContentResponse
```
Same import pitfall as event_emitter: hard dependency on google.generativeai.
This will raise when only google-genai is installed. Make it resilient as suggested below.
Apply this diff (and update the typing import to include Any, see note below):
```diff
-from google.generativeai.types.generation_types import GenerateContentResponse
+from typing import Any
+try:
+    from google.generativeai.types.generation_types import GenerateContentResponse  # type: ignore
+except Exception:  # pragma: no cover
+    try:
+        from google.genai.types import GenerateContentResponse  # type: ignore
+    except Exception:
+        GenerateContentResponse = Any  # type: ignore
```

Outside the selected line, amend the typing import to include Any:

```diff
-from typing import Collection
+from typing import Collection, Any
```
+from typing import Collection, Any📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| from google.generativeai.types.generation_types import GenerateContentResponse | |
| ++ b/packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py | |
| @@ | |
| from typing import Collection, Any | |
| @@ | |
| try: | |
| from google.generativeai.types.generation_types import GenerateContentResponse # type: ignore | |
| except Exception: # pragma: no cover | |
| try: | |
| from google.genai.types import GenerateContentResponse # type: ignore | |
| except Exception: | |
| GenerateContentResponse = Any # type: ignore |
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
around line 7, the direct import from google.generativeai.types.generation_types
creates a hard dependency and will raise if only google-genai is installed;
replace the direct import with a safe optional import pattern (e.g., import
typing.Any and then either use a try/except ImportError to assign
GenerateContentResponse = Any on import failure, or guard the real import with
TYPE_CHECKING so runtime does not require google.generativeai), and update the
typing import at the top of the file to include Any so type annotations remain
valid.
```python
WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
]
```
🛠️ Refactor suggestion
Wrapper config lacks async metadata and is duplicated below; leads to incorrect wrapping.
- Async detection is currently inferred from the method name "generate_content_async". This fails for google.genai’s AsyncModels.generate_content (async method with the same name as sync).
- This list is duplicated again at Lines 69–82, overriding any updates you make here.
Add an explicit is_async flag and remove the duplicate definition.
Apply this diff to add explicit async flags:
```diff
 WRAPPED_METHODS = [
     {
         "package": "google.genai.models",
         "object": "Models",
         "method": "generate_content",
         "span_name": "gemini.generate_content",
+        "is_async": False,
     },
     {
         "package": "google.genai.models",
         "object": "AsyncModels",
         "method": "generate_content",
         "span_name": "gemini.generate_content",
+        "is_async": True,
     },
 ]
```
]And similarly for LEGACY_WRAPPED_METHODS (above):
```diff
 LEGACY_WRAPPED_METHODS = [
     {
         "package": "google.generativeai.generative_models",
         "object": "GenerativeModel",
         "method": "generate_content",
         "span_name": "gemini.generate_content",
+        "is_async": False,
     },
     {
         "package": "google.generativeai.generative_models",
         "object": "GenerativeModel",
         "method": "generate_content_async",
         "span_name": "gemini.generate_content_async",
+        "is_async": True,
     },
 ]
```

Then delete the duplicate WRAPPED_METHODS block at Lines 69–82 (see next comment).
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py
around lines 54 to 68, the WRAPPED_METHODS entries for google.genai.models lack
explicit async metadata (causing AsyncModels.generate_content to be misdetected)
and the same WRAPPED_METHODS block is duplicated later (lines 69–82) which
overrides changes; add an explicit "is_async": False to the Models entry and
"is_async": True to the AsyncModels entry, make the same explicit is_async
adjustments in the LEGACY_WRAPPED_METHODS section above, and remove the
duplicate WRAPPED_METHODS block at lines 69–82 so there is a single
authoritative definition.
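With explicit flags in place, the dispatch could look like the following sketch. Here wrap_function_wrapper and the wrapper factories are stubs (a recorder and simple tuples) so the branching logic runs without wrapt or either google library; the real code uses wrapt's wrap_function_wrapper and the package's _wrap/_awrap helpers.

```python
# Sketch of dispatching on an explicit "is_async" flag instead of
# inferring async-ness from the method name, as suggested above.
WRAPPED_METHODS = [
    {"package": "google.genai.models", "object": "Models",
     "method": "generate_content", "is_async": False},
    {"package": "google.genai.models", "object": "AsyncModels",
     "method": "generate_content", "is_async": True},
]

calls = []


def wrap_function_wrapper(package, name, wrapper):
    # Stand-in recorder for wrapt.wrap_function_wrapper.
    calls.append((package, name, wrapper))


def _wrap(method):
    # Stand-in for the sync wrapper factory.
    return ("sync", method["object"])


def _awrap(method):
    # Stand-in for the async wrapper factory.
    return ("async", method["object"])


for m in WRAPPED_METHODS:
    wrap_function_wrapper(
        m["package"],
        f'{m["object"]}.{m["method"]}',
        _awrap(m) if m.get("is_async") else _wrap(m),
    )
```

Because both entries share the method name generate_content, only the explicit flag (not the name) can tell the sync and async variants apart — which is the core of the review comment.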
```diff
 from typing import Union

-from google.genai.types import GenerateContentResponse
+from google.generativeai.types.generation_types import GenerateContentResponse
```
Hard dependency on google.generativeai breaks legacy-only installs (ImportError risk).
This import forces google-generativeai at runtime even when only google-genai is installed, contradicting the dual-library support introduced in this PR. Make the import optional to keep runtime compatibility.
Apply this diff to make the type import resilient to either library (and fall back to Any when neither is present):
```diff
-from google.generativeai.types.generation_types import GenerateContentResponse
+from typing import Any
+try:
+    from google.generativeai.types.generation_types import GenerateContentResponse  # type: ignore
+except Exception:  # pragma: no cover
+    try:
+        from google.genai.types import GenerateContentResponse  # type: ignore
+    except Exception:
+        GenerateContentResponse = Any  # type: ignore
```

Additionally (outside the selected lines), consider updating signatures that accept event_logger to use Optional[EventLogger] instead of Union[EventLogger], since None is passed in some paths:
```python
# outside selected lines
from typing import Optional

def emit_message_events(args, kwargs, event_logger: Optional[EventLogger]):
    ...

def emit_choice_events(response: GenerateContentResponse, event_logger: Optional[EventLogger]):
    ...

def emit_event(event: Union[MessageEvent, ChoiceEvent], event_logger: Optional[EventLogger]) -> None:
    ...

def _emit_message_event(event: MessageEvent, event_logger: Optional[EventLogger]) -> None:
    ...

def _emit_choice_event(event: ChoiceEvent, event_logger: Optional[EventLogger]) -> None:
    ...
```

🤖 Prompt for AI Agents
...🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py
around line 5, the direct import from google.generativeai.types.generation_types
creates a hard runtime dependency; wrap the import in a safe optional pattern
(use try/except ImportError or guard with typing.TYPE_CHECKING) and fall back to
typing.Any for GenerateContentResponse when the library is not available so
importing this module won't raise ImportError; additionally, update any function
signatures that accept event_logger to use Optional[EventLogger] instead of
Union[EventLogger] because None is passed in some call paths.
Reverts #3282
Important
Reverts migration of Google Generative AI instrumentation from
google.genaiback togoogle.generativeai, affecting imports, dependencies, and tests.google.genaiback togoogle.generativeaiin__init__.pyandevent_emitter.py.WRAPPED_METHODSandLEGACY_WRAPPED_METHODSlogic in__init__.py.instrumentation_dependencies()and_wrapped_methods()logic inGoogleGenerativeAiInstrumentor.google-generativeaitopyproject.tomldependencies and extras.conftest.pyto usegoogle.generativeai.test_new_library_instrumentation.pywhich tested the new package.This description was created by
for 2080019. You can customize this summary. It will automatically update as commits are pushed.