
3451 bug report switch to log events on recent opentelemetry sdks #3679

Open
gofert9911-alt wants to merge 4 commits into traceloop:main from
gofert9911-alt:3451-bug-report-switch-to-log-events-on-recent-opentelemetry-sdks

Conversation


gofert9911-alt commented Feb 13, 2026

  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Summary by CodeRabbit

  • Chores
    • Updated core OpenTelemetry dependencies to v1.38.0+ across all instrumentation packages.
    • Migrated event logging system from Event-based to LogRecord-based format for improved compatibility with OpenTelemetry v1.38+.
    • Updated event name access in logs to use direct event_name field instead of attributes lookup.


coderabbitai Bot commented Feb 13, 2026

📝 Walkthrough


This PR migrates all OpenTelemetry instrumentation packages from the deprecated Events API to the Logger/LogRecord API. It updates type hints from EventLogger to Logger, renames event_logger_provider to logger_provider, replaces get_event_logger calls with get_logger, and updates event emission from Event objects to LogRecord instances. OpenTelemetry dependencies are upgraded to 1.38.0+. Test assertions are updated to use log_record.event_name directly instead of attribute lookups.
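The shape of the migration can be sketched with the following stand-in classes. This is an illustrative sketch, not the real opentelemetry-api implementation: the names `Logger`, `LogRecord`, and `event_name` mirror the API the PR moves to, but the minimal bodies here are hypothetical so the pattern is self-contained.

```python
# Illustrative stand-ins for the opentelemetry-api types involved in the
# migration. Before: events were emitted through an EventLogger as Event
# objects; after: through a Logger as LogRecord instances that carry an
# event_name field directly.
from dataclasses import dataclass, field


@dataclass
class LogRecord:
    # In OpenTelemetry >= 1.38 the event name is a first-class field on
    # the log record rather than an entry buried in attributes.
    event_name: str
    body: dict = field(default_factory=dict)
    attributes: dict = field(default_factory=dict)


class Logger:
    def __init__(self):
        self.emitted = []

    def emit(self, record: LogRecord):
        self.emitted.append(record)


def emit_message_event(logger: Logger, role: str, content: str):
    # Old style (removed by this PR):
    #   event_logger.emit(Event(name=f"gen_ai.{role}.message", body=...))
    # New style:
    logger.emit(
        LogRecord(
            event_name=f"gen_ai.{role}.message",
            body={"content": content},
        )
    )


logger = Logger()
emit_message_event(logger, "user", "Hello!")
print(logger.emitted[0].event_name)  # -> gen_ai.user.message
```

In the real code the logger comes from `opentelemetry._logs.get_logger(...)` instead of `get_event_logger(...)`, which is the rename the walkthrough describes.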

Changes

Cohort / File(s) Summary
Event logger API migration - alephalpha
packages/opentelemetry-instrumentation-alephalpha/opentelemetry/instrumentation/alephalpha/__init__.py, event_emitter.py
Replaced EventLogger with Logger, get_event_logger with get_logger, event_logger_provider with logger_provider. Updated event emission from Event to LogRecord. Relaxed aleph_alpha_client version constraint from <8 to unbounded.
Event logger API migration - anthropic
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py, event_emitter.py, streaming.py
Replaced EventLogger with Logger across type hints and imports. Updated event emission to LogRecord. Adjusted function signatures for sync/async handlers (_handle_input, _ahandle_input, _handle_response, _ahandle_response, _wrap, _awrap).
Event logger API migration - bedrock
packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/__init__.py, event_emitter.py, span_utils.py
Replaced EventLogger with Logger and Event with LogRecord. Added header-based token counting support (x-amzn-bedrock-input-token-count, x-amzn-bedrock-output-token-count) to Anthropic span attribute setters alongside existing usage-based fallback.
Event logger API migration - core packages
packages/opentelemetry-instrumentation-{chromadb,cohere,crewai,google-generativeai,groq,haystack,lancedb,langchain,llamaindex,marqo,mcp,milvus,mistralai,ollama,openai-agents,openai-{shared,v0,v1},pinecone,qdrant,replicate,sagemaker,together,transformers,vertexai,watsonx,weaviate,writer}/*/(__init__.py|event_emitter.py|config.py|utils.py)
Consistent migration across all packages: EventLogger→Logger, get_event_logger→get_logger, event_logger_provider→logger_provider, Event→LogRecord. Replaced event_logger parameter types and updated event emission paths.
Event logger API migration - test fixtures
packages/opentelemetry-instrumentation-{alephalpha,anthropic,bedrock,cohere,google-generativeai,groq,langchain,llamaindex,mistralai,ollama,openai,replicate,sagemaker,together,transformers,vertexai,watsonx,writer}/tests/conftest.py
Renamed event_logger_provider fixture to logger_provider, changed return type from EventLoggerProvider to LoggerProvider. Updated instrument_with_content/no_content signatures and instrumentation calls.
Log record assertion updates
packages/opentelemetry-instrumentation-*/tests/{test_*.py,traces/test_*.py}
Removed EventAttributes imports and replaced log.log_record.attributes.get(EventAttributes.EVENT_NAME) with log.log_record.event_name across all test assertion helpers.
Dependency version upgrades - pyproject.toml
packages/opentelemetry-instrumentation-*/pyproject.toml, packages/opentelemetry-semantic-conventions-ai/pyproject.toml, packages/traceloop-sdk/pyproject.toml
Bumped opentelemetry-api from ^1.28.0 to ^1.38.0, opentelemetry-sdk from ^1.27.0 to ^1.38.0, opentelemetry-instrumentation from >=0.50b0 to >=0.59b0, opentelemetry-semantic-conventions from >=0.50b0 to >=0.59b0 across all packages.
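The test-assertion change summarized above amounts to reading `event_name` directly instead of going through the attributes mapping. The `log_record` below is a hypothetical stand-in for a captured record, built with `SimpleNamespace` so the comparison is self-contained:

```python
from types import SimpleNamespace

# Hypothetical stand-in for a captured log record: in older SDKs the
# event name lived in the attributes mapping; in >= 1.38 it is a
# first-class field on the record.
log_record = SimpleNamespace(
    event_name="gen_ai.choice",
    attributes={"event.name": "gen_ai.choice"},  # legacy location
)

# Old assertion style (removed by this PR):
legacy_name = log_record.attributes.get("event.name")
# New assertion style:
assert log_record.event_name == "gen_ai.choice"
assert legacy_name == log_record.event_name
```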

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested reviewers

  • nirga

Poem

🐰 From Events to Logs, our whiskers twitch with glee,
Logger's new path shines bright as morning dew,
EventLogger fades like shadows under trees,
LogRecord blooms, with event_name's fresh hue,
Across all packages, consistent and true! 🪵✨

🚥 Pre-merge checks | ✅ 2 | ❌ 2
❌ Failed checks (2 warnings)
Check name | Status | Explanation / Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 12.37%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Merge Conflict Detection | ⚠️ Warning | ❌ Merge conflicts detected (268 files):

⚔️ .cz.toml (content)
⚔️ .github/dependabot.yml (content)
⚔️ .github/workflows/ci.yml (content)
⚔️ .github/workflows/release.yml (content)
⚔️ .gitignore (content)
⚔️ CHANGELOG.md (content)
⚔️ CLAUDE.md (content)
⚔️ CONTRIBUTING.md (content)
⚔️ README.md (content)
⚔️ package-lock.json (content)
⚔️ packages/opentelemetry-instrumentation-alephalpha/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-alephalpha/opentelemetry/instrumentation/alephalpha/version.py (content)
⚔️ packages/opentelemetry-instrumentation-alephalpha/project.json (content)
⚔️ packages/opentelemetry-instrumentation-alephalpha/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-alephalpha/tests/test_completion.py (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/version.py (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/project.json (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/tests/test_completion.py (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py (content)
⚔️ packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/version.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/project.json (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/tests/traces/test_ai21.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/tests/traces/test_anthropic.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/tests/traces/test_cohere.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/tests/traces/test_imported_model.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/tests/traces/test_meta.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/tests/traces/test_nova.py (content)
⚔️ packages/opentelemetry-instrumentation-bedrock/tests/traces/test_titan.py (content)
⚔️ packages/opentelemetry-instrumentation-chromadb/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-chromadb/opentelemetry/instrumentation/chromadb/version.py (content)
⚔️ packages/opentelemetry-instrumentation-chromadb/opentelemetry/instrumentation/chromadb/wrapper.py (content)
⚔️ packages/opentelemetry-instrumentation-chromadb/project.json (content)
⚔️ packages/opentelemetry-instrumentation-chromadb/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-cohere/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-cohere/opentelemetry/instrumentation/cohere/version.py (content)
⚔️ packages/opentelemetry-instrumentation-cohere/project.json (content)
⚔️ packages/opentelemetry-instrumentation-cohere/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-cohere/tests/test_chat.py (content)
⚔️ packages/opentelemetry-instrumentation-cohere/tests/test_completion.py (content)
⚔️ packages/opentelemetry-instrumentation-cohere/tests/test_rerank.py (content)
⚔️ packages/opentelemetry-instrumentation-crewai/project.json (content)
⚔️ packages/opentelemetry-instrumentation-crewai/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/span_utils.py (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/project.json (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/tests/conftest.py (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/tests/test_generate_content.py (content)
⚔️ packages/opentelemetry-instrumentation-google-generativeai/tests/test_new_library_instrumentation.py (content)
⚔️ packages/opentelemetry-instrumentation-groq/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/version.py (content)
⚔️ packages/opentelemetry-instrumentation-groq/project.json (content)
⚔️ packages/opentelemetry-instrumentation-groq/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-groq/tests/traces/test_chat_tracing.py (content)
⚔️ packages/opentelemetry-instrumentation-haystack/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-haystack/opentelemetry/instrumentation/haystack/version.py (content)
⚔️ packages/opentelemetry-instrumentation-haystack/project.json (content)
⚔️ packages/opentelemetry-instrumentation-haystack/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-lancedb/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-lancedb/opentelemetry/instrumentation/lancedb/version.py (content)
⚔️ packages/opentelemetry-instrumentation-lancedb/project.json (content)
⚔️ packages/opentelemetry-instrumentation-lancedb/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/callback_handler.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/config.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/span_utils.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/version.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/project.json (content)
⚔️ packages/opentelemetry-instrumentation-langchain/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_chains/test_astream.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_chains/test_astream_with_events_with_content.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_chains/test_astream_with_events_with_no_content.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_chains/test_stream.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_chains/test_stream_with_events_with_content.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_chains/test_stream_with_events_with_no_content.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_documents_chains/test_sequential_chain.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_documents_chains/test_sequential_chain_with_events_with_content.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/cassettes/test_documents_chains/test_sequential_chain_with_events_with_no_content.yaml (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/conftest.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/metrics/test_langchain_metrics.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/test_agents.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/test_chains.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/test_documents_chains.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/test_lcel.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/test_llms.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/test_structured_output.py (content)
⚔️ packages/opentelemetry-instrumentation-langchain/tests/test_tool_calls.py (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/dispatcher_wrapper.py (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/span_utils.py (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/version.py (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/project.json (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/cassettes/test_agents/test_agent_with_multiple_tools.yaml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/cassettes/test_agents/test_agents_and_tools.yaml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/cassettes/test_chroma_vector_store/test_rag_with_chroma.yaml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/cassettes/test_llamaparse/test_llamaparse_aload_data_instrumentation.yaml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/cassettes/test_llamaparse/test_llamaparse_load_data_instrumentation.yaml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/cassettes/test_structured_llm/test_structured_llm_achat_model_attributes.yaml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/cassettes/test_structured_llm/test_structured_llm_model_attributes.yaml (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/test_agents.py (content)
⚔️ packages/opentelemetry-instrumentation-llamaindex/tests/test_llamaparse.py (content)
⚔️ packages/opentelemetry-instrumentation-marqo/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-marqo/opentelemetry/instrumentation/marqo/version.py (content)
⚔️ packages/opentelemetry-instrumentation-marqo/project.json (content)
⚔️ packages/opentelemetry-instrumentation-marqo/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-mcp/opentelemetry/instrumentation/mcp/instrumentation.py (content)
⚔️ packages/opentelemetry-instrumentation-mcp/opentelemetry/instrumentation/mcp/version.py (content)
⚔️ packages/opentelemetry-instrumentation-mcp/project.json (content)
⚔️ packages/opentelemetry-instrumentation-mcp/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-milvus/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/version.py (content)
⚔️ packages/opentelemetry-instrumentation-milvus/project.json (content)
⚔️ packages/opentelemetry-instrumentation-milvus/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-mistralai/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/version.py (content)
⚔️ packages/opentelemetry-instrumentation-mistralai/project.json (content)
⚔️ packages/opentelemetry-instrumentation-mistralai/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-mistralai/tests/test_chat.py (content)
⚔️ packages/opentelemetry-instrumentation-mistralai/tests/test_embeddings.py (content)
⚔️ packages/opentelemetry-instrumentation-ollama/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/version.py (content)
⚔️ packages/opentelemetry-instrumentation-ollama/project.json (content)
⚔️ packages/opentelemetry-instrumentation-ollama/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-ollama/tests/test_chat.py (content)
⚔️ packages/opentelemetry-instrumentation-ollama/tests/test_embeddings.py (content)
⚔️ packages/opentelemetry-instrumentation-ollama/tests/test_generation.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/version.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/project.json (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/tests/conftest.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/tests/test_complete_handoff_with_tools.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/tests/test_openai_agents.py (content)
⚔️ packages/opentelemetry-instrumentation-openai-agents/tests/test_recipe_agents_hierarchy.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/version.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/project.json (content)
⚔️ packages/opentelemetry-instrumentation-openai/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_assistant.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_azure.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_chat_parse.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_completions.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_embeddings.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_functions.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_prompt_caching.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_span_context_propagation.py (content)
⚔️ packages/opentelemetry-instrumentation-openai/tests/traces/test_vision.py (content)
⚔️ packages/opentelemetry-instrumentation-pinecone/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-pinecone/opentelemetry/instrumentation/pinecone/version.py (content)
⚔️ packages/opentelemetry-instrumentation-pinecone/project.json (content)
⚔️ packages/opentelemetry-instrumentation-pinecone/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-qdrant/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-qdrant/opentelemetry/instrumentation/qdrant/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-qdrant/opentelemetry/instrumentation/qdrant/version.py (content)
⚔️ packages/opentelemetry-instrumentation-qdrant/project.json (content)
⚔️ packages/opentelemetry-instrumentation-qdrant/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-replicate/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-replicate/opentelemetry/instrumentation/replicate/version.py (content)
⚔️ packages/opentelemetry-instrumentation-replicate/project.json (content)
⚔️ packages/opentelemetry-instrumentation-replicate/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-replicate/tests/test_image_generation.py (content)
⚔️ packages/opentelemetry-instrumentation-replicate/tests/test_llama.py (content)
⚔️ packages/opentelemetry-instrumentation-sagemaker/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/version.py (content)
⚔️ packages/opentelemetry-instrumentation-sagemaker/project.json (content)
⚔️ packages/opentelemetry-instrumentation-sagemaker/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-sagemaker/tests/test_invocation.py (content)
⚔️ packages/opentelemetry-instrumentation-together/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-together/opentelemetry/instrumentation/together/version.py (content)
⚔️ packages/opentelemetry-instrumentation-together/project.json (content)
⚔️ packages/opentelemetry-instrumentation-together/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-together/tests/test_chat.py (content)
⚔️ packages/opentelemetry-instrumentation-together/tests/test_completion.py (content)
⚔️ packages/opentelemetry-instrumentation-transformers/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/version.py (content)
⚔️ packages/opentelemetry-instrumentation-transformers/project.json (content)
⚔️ packages/opentelemetry-instrumentation-transformers/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-transformers/tests/test_pipeline.py (content)
⚔️ packages/opentelemetry-instrumentation-vertexai/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-vertexai/opentelemetry/instrumentation/vertexai/version.py (content)
⚔️ packages/opentelemetry-instrumentation-vertexai/project.json (content)
⚔️ packages/opentelemetry-instrumentation-vertexai/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-vertexai/tests/disabled_test_bison.py (content)
⚔️ packages/opentelemetry-instrumentation-vertexai/tests/disabled_test_gemini.py (content)
⚔️ packages/opentelemetry-instrumentation-watsonx/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/version.py (content)
⚔️ packages/opentelemetry-instrumentation-watsonx/project.json (content)
⚔️ packages/opentelemetry-instrumentation-watsonx/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-watsonx/tests/traces/test_generate.py (content)
⚔️ packages/opentelemetry-instrumentation-weaviate/.python-version (content)
⚔️ packages/opentelemetry-instrumentation-weaviate/opentelemetry/instrumentation/weaviate/__init__.py (content)
⚔️ packages/opentelemetry-instrumentation-weaviate/opentelemetry/instrumentation/weaviate/version.py (content)
⚔️ packages/opentelemetry-instrumentation-weaviate/project.json (content)
⚔️ packages/opentelemetry-instrumentation-weaviate/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-weaviate/tests/conftest.py (content)
⚔️ packages/opentelemetry-instrumentation-weaviate/tests/test_weaviate_instrumentation.py (content)
⚔️ packages/opentelemetry-instrumentation-writer/opentelemetry/instrumentation/writer/version.py (content)
⚔️ packages/opentelemetry-instrumentation-writer/project.json (content)
⚔️ packages/opentelemetry-instrumentation-writer/pyproject.toml (content)
⚔️ packages/opentelemetry-instrumentation-writer/tests/test_chat.py (content)
⚔️ packages/opentelemetry-instrumentation-writer/tests/test_completions.py (content)
⚔️ packages/opentelemetry-semantic-conventions-ai/.python-version (content)
⚔️ packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (content)
⚔️ packages/opentelemetry-semantic-conventions-ai/project.json (content)
⚔️ packages/opentelemetry-semantic-conventions-ai/pyproject.toml (content)
⚔️ packages/sample-app/.python-version (content)
⚔️ packages/sample-app/project.json (content)
⚔️ packages/sample-app/pyproject.toml (content)
⚔️ packages/sample-app/sample_app/dataset_example.py (content)
⚔️ packages/sample-app/sample_app/experiment/experiment_example.py (content)
⚔️ packages/sample-app/sample_app/langchain_agent.py (content)
⚔️ packages/sample-app/sample_app/langchain_lcel.py (content)
⚔️ packages/sample-app/sample_app/langchain_watsonx.py (content)
⚔️ packages/sample-app/sample_app/llama_index_chroma_huggingface_app.py (content)
⚔️ packages/traceloop-sdk/.python-version (content)
⚔️ packages/traceloop-sdk/project.json (content)
⚔️ packages/traceloop-sdk/pyproject.toml (content)
⚔️ packages/traceloop-sdk/tests/conftest.py (content)
⚔️ packages/traceloop-sdk/tests/datasets/test_create_dataset.py (content)
⚔️ packages/traceloop-sdk/tests/datasets/test_datasets_operations.py (content)
⚔️ packages/traceloop-sdk/tests/experiment/test_experiment.py (content)
⚔️ packages/traceloop-sdk/tests/test_association_properties.py (content)
⚔️ packages/traceloop-sdk/tests/test_sdk_initialization.py (content)
⚔️ packages/traceloop-sdk/tests/test_user_feedback.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/__init__.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/annotation/base_annotation.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/annotation/user_feedback.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/client/__init__.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/client/client.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/client/http.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/datasets/datasets.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/decorators/__init__.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/decorators/base.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/evaluator/__init__.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/evaluator/evaluator.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/evaluator/model.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/evaluator/stream_client.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/experiment/experiment.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/experiment/model.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/fetcher.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/images/image_uploader.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/instruments.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/logging/logging.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/metrics/metrics.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/prompts/client.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/tracing/__init__.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/tracing/tracing.py (content)
⚔️ packages/traceloop-sdk/traceloop/sdk/version.py (content)
⚔️ scripts/build-release.sh (content)

These conflicts must be resolved before merging into main.
Resolve conflicts locally and push changes to this branch.
✅ Passed checks (2 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title check | ✅ Passed | The title clearly describes the primary objective: migrating event logging to work with recent OpenTelemetry SDKs, which aligns with the extensive changes shown across 40+ packages updating the logging API.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
  • ⚔️ Resolve merge conflicts (beta)
    • Auto-commit resolved conflicts to branch 3451-bug-report-switch-to-log-events-on-recent-opentelemetry-sdks
    • Post resolved changes as copyable diffs in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (7)
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/event_emitter.py (1)

69-72: ⚠️ Potential issue | 🟠 Major

Pre-existing bug: or LLMRequestTypeValues.EMBEDDING is always truthy.

This condition evaluates as (llm_request_type == LLMRequestTypeValues.COMPLETION) or (LLMRequestTypeValues.EMBEDDING) — the second operand is always truthy, so this branch is always taken and the else at line 75 is unreachable.

Proposed fix
-    elif (
-        llm_request_type == LLMRequestTypeValues.COMPLETION
-        or LLMRequestTypeValues.EMBEDDING
-    ):
+    elif llm_request_type in (
+        LLMRequestTypeValues.COMPLETION,
+        LLMRequestTypeValues.EMBEDDING,
+    ):
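The pitfall generalizes: `x == A or B` parses as `(x == A) or B`, and an enum member is always truthy, so the branch fires for every input. A minimal, self-contained demonstration (the enum here is a stand-in mirroring `LLMRequestTypeValues`):

```python
from enum import Enum


class LLMRequestTypeValues(Enum):
    COMPLETION = "completion"
    EMBEDDING = "embedding"
    CHAT = "chat"


llm_request_type = LLMRequestTypeValues.CHAT

# Buggy form: parses as (x == COMPLETION) or EMBEDDING; the enum member
# EMBEDDING is truthy, so the expression is truthy for any request type.
buggy = (
    llm_request_type == LLMRequestTypeValues.COMPLETION
    or LLMRequestTypeValues.EMBEDDING
)

# Fixed form: a membership test against the intended set of values.
fixed = llm_request_type in (
    LLMRequestTypeValues.COMPLETION,
    LLMRequestTypeValues.EMBEDDING,
)

print(bool(buggy), fixed)  # -> True False
```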
packages/opentelemetry-instrumentation-langchain/tests/test_llms.py (1)

21-23: ⚠️ Potential issue | 🟡 Minor

Remove unused EventAttributes import.

The event_attributes as EventAttributes import is not referenced anywhere in this file and should be removed. The two separate imports from opentelemetry.semconv._incubating.attributes can also be consolidated.

Proposed fix
-from opentelemetry.semconv._incubating.attributes import (
-    event_attributes as EventAttributes,
-)
-from opentelemetry.semconv._incubating.attributes import (
+from opentelemetry.semconv._incubating.attributes import (
     gen_ai_attributes as GenAIAttributes,
 )
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/streaming.py (1)

226-287: ⚠️ Potential issue | 🟠 Major

Pre-existing bug: duplicated logic and broken nesting in _handle_completion.

Lines 237–246 duplicate lines 228–235 (same metric_attributes, same set_span_attribute, same duration_histogram.record). More critically, lines 248–287 (token usage, event emission, span.end(), _instrumentation_completed = True) are nested inside the if self._duration_histogram: block at line 230. When duration_histogram is None (metrics disabled), the span is never ended and instrumentation is never marked complete.

Compare with AnthropicAsyncStream._complete_instrumentation (lines 392–447) which correctly keeps these steps at the top level.

🐛 Proposed fix — remove duplication and fix nesting
     def _handle_completion(self):
         """Handle completion logic"""
         metric_attributes = shared_metrics_attributes(self._complete_response)
         set_span_attribute(self._span, GenAIAttributes.GEN_AI_RESPONSE_ID, self._complete_response.get("id"))
+
         if self._duration_histogram:
             duration = time.time() - self._start_time
             self._duration_histogram.record(
                 duration,
                 attributes=metric_attributes,
             )
 
-            # This mirrors the logic from build_from_streaming_response
-            metric_attributes = shared_metrics_attributes(self._complete_response)
-            set_span_attribute(self._span, GenAIAttributes.GEN_AI_RESPONSE_ID, self._complete_response.get("id"))
-
-            if self._duration_histogram:
-                duration = time.time() - self._start_time
-                self._duration_histogram.record(
-                    duration,
-                    attributes=metric_attributes,
-                )
-
-            # Calculate token usage
-            if Config.enrich_token_usage:
-                try:
-                    if usage := self._complete_response.get("usage"):
-                        prompt_tokens = usage.get("input_tokens", 0) or 0
-                    else:
-                        prompt_tokens = count_prompt_tokens_from_request(self._instance, self._kwargs)
-
-                    if usage := self._complete_response.get("usage"):
-                        completion_tokens = usage.get("output_tokens", 0) or 0
-                    else:
-                        completion_content = ""
-                        if self._complete_response.get("events"):
-                            model_name = self._complete_response.get("model") or None
-                            for event in self._complete_response.get("events"):
-                                if event.get("text"):
-                                    completion_content += event.get("text")
-
-                            if model_name and hasattr(self._instance, "count_tokens"):
-                                completion_tokens = self._instance.count_tokens(completion_content)
-
-                    _set_token_usage(
-                        self._span,
-                        self._complete_response,
-                        prompt_tokens,
-                        completion_tokens,
-                        metric_attributes,
-                        self._token_histogram,
-                        self._choice_counter,
-                    )
-                except Exception as e:
-                    logger.warning("Failed to set token usage, error: %s", e)
-
-            _handle_streaming_response(self._span, self._event_logger, self._complete_response)
-
-            if self._span.is_recording():
-                self._span.set_status(Status(StatusCode.OK))
-                self._span.end()
-
-            self._instrumentation_completed = True
+        # Calculate token usage
+        if Config.enrich_token_usage:
+            try:
+                if usage := self._complete_response.get("usage"):
+                    prompt_tokens = usage.get("input_tokens", 0) or 0
+                else:
+                    prompt_tokens = count_prompt_tokens_from_request(self._instance, self._kwargs)
+
+                if usage := self._complete_response.get("usage"):
+                    completion_tokens = usage.get("output_tokens", 0) or 0
+                else:
+                    completion_content = ""
+                    if self._complete_response.get("events"):
+                        model_name = self._complete_response.get("model") or None
+                        for event in self._complete_response.get("events"):
+                            if event.get("text"):
+                                completion_content += event.get("text")
+
+                        if model_name and hasattr(self._instance, "count_tokens"):
+                            completion_tokens = self._instance.count_tokens(completion_content)
+
+                _set_token_usage(
+                    self._span,
+                    self._complete_response,
+                    prompt_tokens,
+                    completion_tokens,
+                    metric_attributes,
+                    self._token_histogram,
+                    self._choice_counter,
+                )
+            except Exception as e:
+                logger.warning("Failed to set token usage, error: %s", e)
+
+        _handle_streaming_response(self._span, self._event_logger, self._complete_response)
+
+        if self._span.is_recording():
+            self._span.set_status(Status(StatusCode.OK))
+            self._span.end()
+
+        self._instrumentation_completed = True
packages/opentelemetry-instrumentation-cohere/opentelemetry/instrumentation/cohere/event_emitter.py (1)

162-164: ⚠️ Potential issue | 🟠 Major

Pre-existing bug: condition is always truthy.

This elif always evaluates to True because LLMRequestTypeValues.COMPLETION is a truthy enum member. The or applies to the bare value, not a comparison. This means rerank responses would incorrectly enter this branch.

🐛 Proposed fix
-    elif (
-        llm_request_type == LLMRequestTypeValues.CHAT or LLMRequestTypeValues.COMPLETION
-    ):
+    elif llm_request_type in (
+        LLMRequestTypeValues.CHAT, LLMRequestTypeValues.COMPLETION
+    ):
packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/event_emitter.py (1)

246-277: ⚠️ Potential issue | 🟡 Minor

event_logger typed Optional[Logger] but dereferenced without a None guard.

_emit_message_event and _emit_choice_event accept Optional[Logger] but call event_logger.emit(log_record) unconditionally. While callers in __init__.py currently guard with if should_emit_events() and event_logger:, the function signatures advertise that None is acceptable, which could lead to an AttributeError if a future caller omits the guard.

Compare with the replicate emitter (Line 78) which correctly handles this: if not should_emit_events() or event_logger is None: return.

Option A: Add a guard in `emit_event` (preferred, single choke point)
 def emit_event(
     event: Union[MessageEvent, ChoiceEvent], event_logger: Optional[Logger]
 ) -> None:
-    if not should_emit_events():
+    if not should_emit_events() or event_logger is None:
         return
Option B: Tighten the type hints to non-Optional on internal helpers
-def _emit_message_event(
-    event: MessageEvent, event_logger: Optional[Logger]
-) -> None:
+def _emit_message_event(
+    event: MessageEvent, event_logger: Logger
+) -> None:

(and similarly for _emit_choice_event)

packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py (2)

362-370: ⚠️ Potential issue | 🔴 Critical

Bug: _emit_response_events doesn't accept or forward event_logger.

This function calls emit_event(ChoiceEvent(...)) without the required event_logger argument (see emit_event signature in event_emitter.py Line 32-33). Additionally, the call at Line 517 passes event_logger as a second argument, but this function's signature only accepts response: dict — so that call will raise TypeError at runtime (though @dont_throw on _handle_response will silently swallow it, meaning events are silently never emitted).

Proposed fix
-def _emit_response_events(response: dict):
+def _emit_response_events(response: dict, event_logger):
     for i, message in enumerate(response.get("results", [])):
         emit_event(
             ChoiceEvent(
                 index=i,
                 message={"content": message.get("generated_text"), "role": "assistant"},
                 finish_reason=message.get("stop_reason", "unknown"),
-            )
+            ),
+            event_logger,
         )

Also update the call in _handle_stream_response (Line 536) which currently doesn't pass event_logger:

Fix the call in _handle_stream_response
         _emit_response_events(
             {
                 "results": [
                     {
                         "stop_reason": stream_stop_reason,
                         "generated_text": stream_generated_text,
                     }
                 ]
             },
+            event_logger,
         )

581-581: ⚠️ Potential issue | 🔴 Critical

Bug: _handle_input called with wrong number of arguments — response_counter is missing.

_handle_input at line 490 expects 7 parameters: (span, event_logger, name, instance, response_counter, args, kwargs). The call at line 581 passes only 6, binding args to the response_counter parameter and kwargs to args, with the actual kwargs missing entirely.

Fix:

Proposed fix
-    _handle_input(span, event_logger, name, instance, args, kwargs)
+    _handle_input(span, event_logger, name, instance, response_counter, args, kwargs)

Note: response_counter is available in the _wrap function scope. However, the parameter is not used within _handle_input's body (lines 490–510), so an alternative is to remove it from the function signature entirely rather than adding it to the call.

🤖 Fix all issues with AI agents
In
`@packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/config.py`:
- Around line 3-9: The code imports opentelemetry._logs.Logger (unstable API)
and uses it in the Config class (event_logger: Optional[Logger]) which risks
breakage; replace the direct runtime import with a stability-safe approach:
remove the direct from opentelemetry._logs import Logger, keep Optional typing
for event_logger/exception_logger, and annotate event_logger using a stable type
(e.g., typing.Any) or a TYPE_CHECKING-only import of Logger so the unstable
symbol is only referenced for type checking; update references to
Config.event_logger and Config.exception_logger accordingly to use the new safe
annotation.

In
`@packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/event_emitter.py`:
- Line 4: Replace the private import of LogRecord from
opentelemetry.sdk._logs._internal with the public API import from
opentelemetry._logs so the module uses the stable public symbol; specifically,
change the import statement that references LogRecord (currently from
opentelemetry.sdk._logs._internal import LogRecord) to import LogRecord from
opentelemetry._logs instead, ensuring any references to LogRecord in this module
(e.g., type checks or annotations) continue to work.

In `@packages/opentelemetry-instrumentation-transformers/pyproject.toml`:
- Around line 27-29: Update the dev-dependency constraint for opentelemetry-sdk
in pyproject.toml from "^1.34.1" to "^1.38.0" so it matches opentelemetry-api
and other packages; edit the opentelemetry-sdk entry under
[tool.poetry.dev-dependencies] (or equivalent) to use "^1.38.0" and then run
your dependency tooling (poetry lock/poetry update or pip-tools) to refresh the
lockfile and ensure consistency across the repo.
🧹 Nitpick comments (35)
packages/opentelemetry-instrumentation-mcp/pyproject.toml (1)

33-33: Consider bumping opentelemetry-exporter-otlp to ^1.38.0 for consistency.

The SDK and API are now pinned to ^1.38.0, but the exporter is still at ^1.34.1 in both dev and test groups. While ^1.34.1 technically allows 1.38.x, aligning the floor keeps the dependency set explicit and avoids accidentally resolving an older exporter during local development.

Suggested diff
 [tool.poetry.group.dev.dependencies]
-opentelemetry-exporter-otlp = "^1.34.1"
+opentelemetry-exporter-otlp = "^1.38.0"

 [tool.poetry.group.test.dependencies]
-opentelemetry-exporter-otlp = "^1.34.1"
+opentelemetry-exporter-otlp = "^1.38.0"

Also applies to: 43-43

packages/traceloop-sdk/pyproject.toml (2)

32-37: Unbounded lower-only constraints on beta instrumentation packages.

These use >=0.59b0 with no upper bound, whereas the core packages use ^1.38.0 (bounded). OTel instrumentation packages are pre-release betas that can introduce breaking changes between minor versions. Since OTel versioning aligns core ↔ contrib (1.38.0 ↔ 0.59b0, 1.39.0 ↔ 0.60b0), an unbounded >= could pull in a future contrib version that's incompatible with the pinned core range.

Consider adding an upper bound, e.g. >=0.59b0,<1.0 or a tighter range like >=0.59b0,<0.61, to avoid surprise breakage. That said, Poetry's caret (^) doesn't work well with pre-release identifiers for 0.x versions, so the current approach may be pragmatic.
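For illustration, a bounded entry might look like the following in pyproject.toml (the package name is a placeholder, not one of the SDK's actual dependencies):

```toml
[tool.poetry.dependencies]
# Hypothetical entry: floor at the current contrib beta, ceiling before the
# next contrib minor that would pair with a newer core release
opentelemetry-instrumentation-example = ">=0.59b0,<0.61"
```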


39-67: Large number of local path-based develop = true dependencies added.

These local path references are fine for development, but they make pyproject.toml quite long and tightly couple the SDK package to all instrumentation packages in the monorepo. This is not blocking, but worth being aware of for maintainability — any new instrumentation package will require a line here.

packages/opentelemetry-instrumentation-together/opentelemetry/instrumentation/together/event_emitter.py (2)

163-169: Nit: stray blank line inside constructor call.

Line 167 has an extra blank line before the closing parenthesis, inconsistent with the equivalent block at lines 139–143.

Suggested fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
-        event_name="gen_ai.choice"
-
+        event_name="gen_ai.choice",
     )

139-144: Trailing comma missing after last argument.

Line 142 is missing a trailing comma after event_name=name. While not a syntax error, it's inconsistent with Python style conventions and makes future diffs noisier.

Suggested fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
-        event_name=name
+        event_name=name,
     )
packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/__init__.py (1)

48-56: Stale variable name: event_logger now holds a Logger, not an EventLogger.

After the migration, get_logger() returns a Logger instance, but the local variable is still called event_logger. Consider renaming to logger (or log_emitter) for clarity — though note it's also referenced downstream in wrappers and event_emitter.py, so a rename would span multiple files.

packages/opentelemetry-instrumentation-together/opentelemetry/instrumentation/together/__init__.py (1)

7-7: Migration to Logs API is correct per OpenTelemetry 1.38.0+ guidelines.

The import change from opentelemetry._events.get_event_logger to opentelemetry._logs.get_logger follows the official deprecation path documented in OpenTelemetry Python 1.39.0 release notes, where the Events API was explicitly deprecated in favor of the Logs API. The module opentelemetry._logs is the documented public API for this.

The downstream integration in event_emitter.py is already compatible—it correctly uses the Logger.emit(log_record) pattern with LogRecord objects (not the old Events API), as confirmed by the type annotations at line 96 (event_logger: Union[Logger, None]) and actual calls at lines 144 and 169.

Minor semantic issue: The variable event_logger holds a Logger instance (returned by get_logger), not an EventLogger. Renaming it to logger throughout would improve clarity and align with the actual type, though the code is functionally correct as-is.

API breaking change: The kwargs key change from "event_logger_provider" to "logger_provider" is a breaking change for callers, but this appears to be an intentional, library-wide migration pattern consistent across other instrumentations.

packages/opentelemetry-instrumentation-vertexai/opentelemetry/instrumentation/vertexai/__init__.py (1)

337-345: Variable name event_logger is now misleading — it holds a Logger, not an EventLogger.

After the migration, get_logger returns a Logger instance, but the variable is still called event_logger throughout this file (and passed under that name into _wrap, _awrap, _build_from_streaming_response, _handle_request, _handle_response, etc.). Consider renaming it to logger or log_event_logger to reflect the new type. This is a low-priority naming nit given the scope of the rename across the codebase.

packages/opentelemetry-instrumentation-vertexai/opentelemetry/instrumentation/vertexai/event_emitter.py (1)

167-173: Minor: extra blank line inside LogRecord constructor.

Line 171 has an extraneous blank line inside the constructor call. This is inconsistent with _emit_message_event (lines 143–147) which doesn't have one.

🧹 Remove extra blank line
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/event_emitter.py (1)

101-107: Minor: stray blank line inside the LogRecord(...) constructor.

Line 105 has an empty line within the constructor call. Harmless but inconsistent with the _emit_message_event block above (lines 77-81).

Proposed fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
-        event_name="gen_ai.choice"
-
+        event_name="gen_ai.choice",
     )
packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/__init__.py (1)

8-8: Variable event_logger naming is inconsistent with its new type — consider renaming for clarity.

The import of get_logger from opentelemetry._logs and its usage at lines 498-503 are correct. However, the variable is still called event_logger across ~32 places in this file, despite now holding a Logger object rather than the previous EventLogger. Renaming to otel_logger or log_emitter would better reflect the actual type and reduce confusion.

packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/event_emitter.py (1)

152-158: Stray blank line inside LogRecord(...) constructor.

Line 156 has a blank line between event_name="gen_ai.choice" and the closing parenthesis. While it doesn't affect functionality, it's inconsistent with the _emit_message_event constructor on lines 128–132 which doesn't have this blank line.

🧹 Suggested cleanup
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-sagemaker/opentelemetry/instrumentation/sagemaker/__init__.py (1)

188-194: Consider renaming event_logger to logger for consistency with the new API.

The variable event_logger (line 188) now holds a Logger instance, not an EventLogger. While I understand this would ripple through _wrap, _instrumented_endpoint_invoke, _handle_call, _handle_stream_call, etc., it would improve clarity. This can be deferred to a follow-up.

packages/opentelemetry-instrumentation-sagemaker/tests/conftest.py (1)

95-99: Environment fixture doesn't clean up dummy credentials on teardown.

The environment fixture sets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to "test" but never restores the original state. Since it's autouse=True, these dummy credentials can leak into subsequent tests running in the same process. Consider using a yield and restoring the environment afterward, or using pytest's monkeypatch fixture.
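pytest's monkeypatch fixture provides exactly this restore-on-teardown behavior; the same idea can be sketched as a plain context manager (names here are illustrative, not part of the test suite):

```python
import os
from contextlib import contextmanager


@contextmanager
def patched_env(**overrides):
    """Set environment variables, restoring the previous state on exit."""
    saved = {key: os.environ.get(key) for key in overrides}
    os.environ.update(overrides)
    try:
        yield
    finally:
        for key, value in saved.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value


before = os.environ.get("AWS_ACCESS_KEY_ID")
with patched_env(AWS_ACCESS_KEY_ID="test", AWS_SECRET_ACCESS_KEY="test"):
    inside = os.environ["AWS_ACCESS_KEY_ID"]
after = os.environ.get("AWS_ACCESS_KEY_ID")
# `after` equals `before` regardless of what was set beforehand
```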

packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/event_emitter.py (1)

187-193: Nit: stray blank line inside constructor call.

Line 191 has an extra blank line inside the LogRecord(...) constructor.

Suggested fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-langchain/tests/conftest.py (1)

94-101: Minor inconsistency in how the logger is obtained across test fixtures.

This file uses logger_provider.get_logger(__name__, __version__) (the provider method), while the OpenAI conftest at lines 152-168 uses get_logger(__name__, __version__, logger_provider=logger_provider) (the module-level function). Both are functionally equivalent, but it would be cleaner to standardize on one approach across all test fixtures for consistency.

Also applies to: 111-118

packages/opentelemetry-instrumentation-langchain/tests/test_tool_calls.py (1)

275-276: Duplicate @pytest.mark.skip decorators on multiple tests.

Several tests in this file have duplicate (or even triplicate, e.g., lines 777-779, 928-930) @pytest.mark.skip decorators with the same reason. While functionally harmless, it adds noise. Consider deduplicating them in a follow-up.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/event_emitter.py (1)

79-84: Consider emitting via keyword arguments instead of constructing an intermediate LogRecord.

The Logger.emit() method accepts event_name, body, and attributes directly as keyword arguments, which would simplify this and avoid constructing an intermediate API-level LogRecord:

Config.event_logger.emit(body=body, attributes=EVENT_ATTRIBUTES, event_name=name)

That said, the current approach works correctly and is consistent across the PR.

packages/opentelemetry-instrumentation-vertexai/tests/conftest.py (1)

82-84: Nit: reader parameter is unused in both fixtures (also flagged by Ruff ARG001).

The reader fixture is already created transitively via the meter_provider dependency. The explicit parameter is redundant. This also applies to the pre-existing instrument_legacy fixture at line 69.

♻️ Optional cleanup
@pytest.fixture(scope="function")
 def instrument_with_content(
-    reader, tracer_provider, logger_provider, meter_provider
+    tracer_provider, logger_provider, meter_provider
 ):
@pytest.fixture(scope="function")
 def instrument_with_no_content(
-    reader, tracer_provider, logger_provider, meter_provider
+    tracer_provider, logger_provider, meter_provider
 ):

Also applies to: 100-103

packages/opentelemetry-instrumentation-alephalpha/opentelemetry/instrumentation/alephalpha/__init__.py (1)

72-72: Nit: inconsistent type hint style — Optional[Logger] vs Union[Logger, None].

Line 72 uses Optional[Logger] while line 144 uses Union[Logger, None]. Both are equivalent but it would be cleaner to pick one style consistently within the file.

Also applies to: 144-144

packages/opentelemetry-instrumentation-google-generativeai/opentelemetry/instrumentation/google_generativeai/event_emitter.py (1)

144-150: Stray blank line inside LogRecord constructor.

Line 148 is a blank line inside the LogRecord(...) call. This is a minor formatting inconsistency — the same pattern appears in other emitter files (bedrock line 300, replicate line 141). The _emit_message_event method at Line 119 doesn't have this extra blank line.

Suggested cleanup
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/event_emitter.py (1)

296-302: Stray blank line inside LogRecord constructor (Line 300).

Same formatting inconsistency as the google-generativeai emitter.

Suggested cleanup
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-bedrock/tests/conftest.py (1)

102-114: Static analysis: reader parameter is unused in instrument_with_content and instrument_with_no_content.

Ruff flags reader as unused (ARG001) at Lines 104 and 125. Since meter_provider already depends on reader through the fixture graph, explicitly listing reader here is redundant. This is pre-existing and not introduced by this PR, but worth noting for a future cleanup.

Also applies to: 123-135

packages/opentelemetry-instrumentation-replicate/opentelemetry/instrumentation/replicate/event_emitter.py (1)

137-143: Stray blank line inside LogRecord constructor (Line 141).

Same minor formatting inconsistency as in the other emitter files.

Suggested cleanup
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py (1)

805-812: Stale comment: "event_logger is inited here" — the variable is now a Logger, not an EventLogger.

The comment on line 805 is a leftover from the previous API. Consider updating it to reflect that this is now a Logger instance obtained via get_logger. Also, the variable name event_logger throughout the file is now slightly misleading since it holds a Logger, not an EventLogger — though renaming it would be a larger change that could be deferred.

packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/event_emitter.py (1)

235-241: Stray blank line inside LogRecord constructor.

Line 239 has an extra blank line between event_name="gen_ai.choice" and the closing parenthesis. This is a minor formatting inconsistency compared to the _emit_message_event function above (lines 211-216).

Proposed fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-anthropic/tests/conftest.py (1)

98-101: Unused reader argument in fixture functions.

Static analysis (Ruff ARG001) flags reader as unused in instrument_with_content (line 100) and instrument_with_no_content (line 126). If it's only there to force fixture ordering, add a brief comment explaining that, or request it explicitly inside the fixture via request.getfixturevalue("reader"). Note that @pytest.mark.usefixtures has no effect when applied to a fixture function, so it is not an option here.

Also applies to: 124-127

packages/opentelemetry-instrumentation-writer/tests/test_completions.py (1)

346-348: Nit: duplicate assertion on Line 347.

assert writer_span.name == "writerai.completions" appears twice (Lines 346–347). This is pre-existing but could be cleaned up.

packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/event_emitter.py (1)

100-106: Minor: trailing blank line inside LogRecord constructor call.

Line 104 has a stray blank line inside the constructor arguments. Inconsistent with _emit_message_event (Lines 76-80) and other packages.

Formatting fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-groq/tests/traces/conftest.py (1)

94-107: Pre-existing: reader parameter is unused in these fixtures.

Ruff flags reader as unused (ARG001) in both instrument_with_content (Line 96) and instrument_with_no_content (Line 117). Since meter_provider already depends on reader through pytest's fixture graph, the explicit parameter is redundant. This is pre-existing and not introduced by this PR, so low priority.

Also applies to: 115-133

packages/opentelemetry-instrumentation-writer/tests/conftest.py (1)

91-107: Pre-existing: reader parameter is unused in these fixtures.

Same as the groq conftest — Ruff flags reader as unused (ARG001) on Lines 93 and 112. Since meter_provider already depends on reader via the fixture graph, the explicit parameter is redundant. Pre-existing; not introduced by this PR.

Also applies to: 110-126

packages/opentelemetry-instrumentation-writer/opentelemetry/instrumentation/writer/event_emitter.py (1)

144-150: Minor: trailing blank line inside LogRecord constructor (Line 148).

Same cosmetic inconsistency as the watsonx event_emitter.

Formatting fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )
packages/opentelemetry-instrumentation-ollama/tests/conftest.py (2)

92-108: reader parameter is unused.

reader is accepted but never referenced in the function body. Since meter_provider already depends on reader, pytest will instantiate the reader fixture transitively — no need to list it here.

♻️ Proposed fix
@pytest.fixture(scope="function")
 def instrument_with_content(
-    reader, tracer_provider, logger_provider, meter_provider
+    tracer_provider, logger_provider, meter_provider
 ):

111-127: Same unused reader parameter here.

♻️ Proposed fix
@pytest.fixture(scope="function")
 def instrument_with_no_content(
-    reader, tracer_provider, logger_provider, meter_provider
+    tracer_provider, logger_provider, meter_provider
 ):
packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/event_emitter.py (1)

146-152: Stray blank line inside the LogRecord(...) constructor call.

Line 150 has an empty line before the closing parenthesis. Minor formatting nit — all other instrumentation packages omit it.

♻️ Proposed fix
     log_record = LogRecord(
         body=body,
         attributes=EVENT_ATTRIBUTES,
         event_name="gen_ai.choice"
-
     )

Comment on lines +3 to +9
 from opentelemetry._logs import Logger


 class Config:
     exception_logger = None
     use_legacy_attributes = True
-    event_logger: Optional[EventLogger] = None
+    event_logger: Optional[Logger] = None

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

opentelemetry._logs.Logger public API stable

💡 Result:

opentelemetry._logs.Logger is part of the documented OpenTelemetry Python API surface (it appears in the “stable” API docs and is obtained via get_logger() / LoggerProvider.get_logger()) [6].

However, that does not mean the Python Logs signal is stable:

  • The OpenTelemetry Logs API specification is marked “Status: Stable” [1].
  • But the OpenTelemetry project status explicitly lists Python Logs = Development [2], and the Python docs still say the logs API & SDK are under development [3].
  • The opentelemetry-python repo also flags Logs = Development and warns stabilization will involve deprecations / breaking changes [4].
  • Additionally, the Python logs examples warn that opentelemetry.sdk._logs APIs are experimental and may change without backward-compat guarantees [5].

So in practice (today), treat opentelemetry._logs.Logger / Python logging signal as not yet a stable public API guarantee in Python, even though the cross-language/spec API is “stable.”


🏁 Script executed:

# Find how Logger is imported in the repository
rg "from opentelemetry.*Logger" --type py

# Also check for any existing Logger type hints
rg "Logger\s*=" --type py -A 2 -B 2

Repository: traceloop/openllmetry

Length of output: 5998


opentelemetry._logs.Logger import carries stability risk.
This import pattern is consistent across the codebase, but the OpenTelemetry Python Logs API is still in Development status (not yet stable). The OpenTelemetry project explicitly warns that Python logging APIs are experimental and may change with breaking changes or deprecations in future versions. Be aware this type is not guaranteed to remain backward-compatible across OTel releases.

🤖 Prompt for AI Agents
In
`@packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/config.py`
around lines 3 - 9, The code imports opentelemetry._logs.Logger (unstable API)
and uses it in the Config class (event_logger: Optional[Logger]) which risks
breakage; replace the direct runtime import with a stability-safe approach:
remove the direct from opentelemetry._logs import Logger, keep Optional typing
for event_logger/exception_logger, and annotate event_logger using a stable type
(e.g., typing.Any) or a TYPE_CHECKING-only import of Logger so the unstable
symbol is only referenced for type checking; update references to
Config.event_logger and Config.exception_logger accordingly to use the new safe
annotation.
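A minimal sketch of that TYPE_CHECKING-only import (the Config fields mirror the snippet under review; the string annotation keeps Logger out of the runtime path entirely):

```python
from typing import TYPE_CHECKING, Optional

if TYPE_CHECKING:
    # Only evaluated by static type checkers; never imported at runtime,
    # so changes to the experimental opentelemetry._logs module cannot
    # break imports of this config module.
    from opentelemetry._logs import Logger


class Config:
    exception_logger = None
    use_legacy_attributes = True
    event_logger: "Optional[Logger]" = None
```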

 from typing import Union

-from opentelemetry._events import Event
+from opentelemetry.sdk._logs._internal import LogRecord

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's verify the current imports and usage in the event_emitter.py file
cd packages/opentelemetry-instrumentation-transformers && cat -n opentelemetry/instrumentation/transformers/event_emitter.py | head -30

Repository: traceloop/openllmetry

Length of output: 1050


🏁 Script executed:

# Check if LogRecord is exported from the public module
rg -n 'LogRecord' --type py opentelemetry-api/src/opentelemetry/sdk/_logs/__init__.py

Repository: traceloop/openllmetry

Length of output: 162


🏁 Script executed:

# Check the structure of the SDK _logs module
fd -t f . opentelemetry-api/src/opentelemetry/sdk/_logs/ 2>/dev/null | head -20

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# Search for where LogRecord is defined in the SDK
rg -l 'class LogRecord' --type py

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# List the top-level directories
ls -la | grep -E '^d'

Repository: traceloop/openllmetry

Length of output: 460


🏁 Script executed:

# Search for LogRecord class definition across the entire repo
rg 'class LogRecord' --type py -g '**/*'

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# Find SDK _logs modules
fd '_logs' --type d

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# List contents of packages directory
ls -la packages/

Repository: traceloop/openllmetry

Length of output: 2995


🏁 Script executed:

# Read the full event_emitter.py file to see LogRecord usage
cat -n packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/event_emitter.py

Repository: traceloop/openllmetry

Length of output: 5566


🏁 Script executed:

# Check conftest.py to verify its imports
find . -name conftest.py -type f | xargs cat -n

Repository: traceloop/openllmetry

Length of output: 50378


🏁 Script executed:

# Check the dependencies in the transformers package
cat packages/opentelemetry-instrumentation-transformers/pyproject.toml

Repository: traceloop/openllmetry

Length of output: 1228


🏁 Script executed:

# Search for LogRecord usage in the entire codebase
rg 'LogRecord' --type py -C 2

Repository: traceloop/openllmetry

Length of output: 50377


🏁 Script executed:

# Check if LogRecord is exported from opentelemetry._logs API
rg 'from opentelemetry._logs import.*LogRecord' --type py | head -20

Repository: traceloop/openllmetry

Length of output: 2653


🏁 Script executed:

# Check API-level logging init to understand exports
find . -name '__init__.py' -path '*/opentelemetry/_logs*' | head -5

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# Verify if opentelemetry.sdk._logs is the public re-export module
rg 'from opentelemetry.sdk._logs import LogRecord' --type py | head -10

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# Check if there's documentation or imports that clarify the public API
rg 'from opentelemetry\.sdk\._logs import' --type py -A 1 | head -30

Repository: traceloop/openllmetry

Length of output: 2475


Import LogRecord from the public API module, not the internal one.

opentelemetry.sdk._logs._internal is a private implementation detail and may break without notice. The correct public import (used consistently across all other instrumentations in this codebase) is opentelemetry._logs.

Proposed fix
-from opentelemetry.sdk._logs._internal import LogRecord
+from opentelemetry._logs import LogRecord
🤖 Prompt for AI Agents
In
`@packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/event_emitter.py`
at line 4, Replace the private import of LogRecord from
opentelemetry.sdk._logs._internal with the public API import from
opentelemetry._logs so the module uses the stable public symbol; specifically,
change the import statement that references LogRecord (currently from
opentelemetry.sdk._logs._internal import LogRecord) to import LogRecord from
opentelemetry._logs instead, ensuring any references to LogRecord in this module
(e.g., type checks or annotations) continue to work.
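To illustrate the test-side change that accompanies this migration (reading event_name directly off the record instead of looking it up in attributes), here is a stand-in sketch. FakeLogRecord and the event name are hypothetical, not the real SDK class:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional


@dataclass
class FakeLogRecord:
    # In OTel 1.38+ the event name is a dedicated field on the record,
    # which is why assertions switch to `log_record.event_name`.
    event_name: Optional[str] = None
    body: Any = None
    attributes: Dict[str, Any] = field(default_factory=dict)


def read_event_name(record: FakeLogRecord) -> Optional[str]:
    # Prefer the dedicated field; fall back to the legacy attribute lookup.
    if record.event_name is not None:
        return record.event_name
    return record.attributes.get("event.name")


old_style = FakeLogRecord(attributes={"event.name": "gen_ai.choice"})
new_style = FakeLogRecord(event_name="gen_ai.choice")
```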

Comment on lines +27 to +29
opentelemetry-api = "^1.38.0"
opentelemetry-instrumentation = ">=0.59b0"
opentelemetry-semantic-conventions = ">=0.59b0"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, check the actual current state of the file in question
echo "=== Checking packages/opentelemetry-instrumentation-transformers/pyproject.toml ==="
cat -n packages/opentelemetry-instrumentation-transformers/pyproject.toml | sed -n '25,40p'

Repository: traceloop/openllmetry

Length of output: 665


🏁 Script executed:

#!/bin/bash
# Search for opentelemetry-sdk across all pyproject.toml files to identify the pattern
echo "=== All opentelemetry-sdk versions in pyproject.toml files ==="
rg 'opentelemetry-sdk\s*=' -g 'pyproject.toml' -n

Repository: traceloop/openllmetry

Length of output: 586


🏁 Script executed:

#!/bin/bash
# Also search for opentelemetry-api to see the API version pattern across packages
echo "=== All opentelemetry-api versions in pyproject.toml files ==="
rg 'opentelemetry-api\s*=' -g 'pyproject.toml' -n

Repository: traceloop/openllmetry

Length of output: 586


🏁 Script executed:

#!/bin/bash
# Find all pyproject.toml files and search for opentelemetry versions
echo "=== opentelemetry-sdk versions across all packages ==="
find . -name "pyproject.toml" -exec grep -n "opentelemetry-sdk" {} + | head -20

echo ""
echo "=== opentelemetry-api versions across all packages ==="
find . -name "pyproject.toml" -exec grep -n "opentelemetry-api" {} + | head -20

Repository: traceloop/openllmetry

Length of output: 4093


Update opentelemetry-sdk to ^1.38.0 to match the API version.

opentelemetry-api was bumped to ^1.38.0, but the dev dependency opentelemetry-sdk remains at ^1.34.1. All other packages in this repository have been updated to ^1.38.0, making this an oversight. While version resolution would still work (the solver would upgrade SDK to a compatible version), the stale constraint should be updated for consistency.

Suggested fix
-opentelemetry-sdk = "^1.34.1"
+opentelemetry-sdk = "^1.38.0"
🤖 Prompt for AI Agents
In `@packages/opentelemetry-instrumentation-transformers/pyproject.toml` around
lines 27 - 29, Update the dev-dependency constraint for opentelemetry-sdk in
pyproject.toml from "^1.34.1" to "^1.38.0" so it matches opentelemetry-api and
other packages; edit the opentelemetry-sdk entry under
[tool.poetry.dev-dependencies] (or equivalent) to use "^1.38.0" and then run
your dependency tooling (poetry lock/poetry update or pip-tools) to refresh the
lockfile and ensure consistency across the repo.
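For context, a sketch of how the aligned constraints would read after the bump; the exact table name in this package's pyproject.toml may differ (the dev-dependencies group name below is an assumption):

```toml
[tool.poetry.dependencies]
opentelemetry-api = "^1.38.0"

[tool.poetry.group.dev.dependencies]
# Kept in lockstep with opentelemetry-api so the solver never has to
# bridge a four-minor-version gap between API and SDK.
opentelemetry-sdk = "^1.38.0"
```

After editing, refresh the lockfile (e.g. `poetry lock`) so the constraint change is reflected consistently.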
