feat(anthropic): conform instrumentation to OTel GenAI semantic conventions#3835
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
📝 Walkthrough
Migrates Anthropic instrumentation from the legacy LLM semconv to GenAI semantic-convention keys, consolidates per-index prompt/completion attributes into JSON message payloads, updates provider/operation span attributes and finish-reason mapping, revises streaming/output serialization, and updates tests and dependency ranges.
Sequence Diagram(s): (omitted)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (2)
packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py (1)
1274-1275: Assert the exact `finish_reasons` payload here.
`in` is too weak for the new array semantics; it will still pass if the instrumentation emits duplicates or unrelated finish reasons. These fixtures look single-choice, so an exact assertion will catch the regression this PR is trying to prevent. Based on learnings: follow the OpenTelemetry GenAI semantic specification at https://opentelemetry.io/docs/specs/semconv/gen-ai/
Also applies to: 1613-1614
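The difference can be sketched as a tiny standalone example. The variables below are synthetic stand-ins for the fixture value and the span attribute tuple, not the actual test objects:

```python
# Sketch: weak membership check vs. the exact assertion suggested above.
# Synthetic stand-ins; the real test reads the tuple from
# anthropic_span.attributes[GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS].
stop_reason = "end_turn"
finish_reasons = ("end_turn",)  # single-choice fixture: exactly one entry

# Weak: still passes if duplicates or unrelated reasons are emitted alongside.
assert stop_reason in finish_reasons

# Exact: fails on duplicates or extra values, catching the regression.
assert finish_reasons == (stop_reason,)
```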
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py` around lines 1274-1275, replace the weak membership check with an exact payload assertion: instead of asserting `response.stop_reason in finish_reasons`, assert that `anthropic_span.attributes[GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS]` equals the exact list expected (e.g. `[response.stop_reason]`) so duplicates or extra values fail; do the same change for the other occurrence referenced (around lines 1613-1614) and keep the reference to `GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS` to locate the test.

packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py (1)
228-237: Consider simplifying the content parsing logic.
The nested conditional parsing (`if isinstance(content, str)` → `json.loads` check → `if isinstance(content_parsed, list)`) is complex. Since you control the span_utils output, you could assert a more specific expected structure. However, this defensive approach does handle edge cases, so it's acceptable as-is.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py` around lines 228 - 237, The current nested parsing around assistant_msg["content"] (variables content → content_parsed) is more defensive than needed; simplify by asserting the expected shape up front, e.g. require content to be a str containing JSON when produced by span_utils, parse it once into content_parsed, then assert isinstance(content_parsed, list) before building types_in_content and checking "tool_use" is not present; update the test code that references assistant_msg, content, content_parsed, and types_in_content to remove the nested isinstance checks and make the expected structure explicit.
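A minimal sketch of the up-front shape assertion described here; the payload is synthetic, and the variable names merely mirror the test's:

```python
import json

# Synthetic stand-in for span_utils output: a JSON string of content parts.
content = json.dumps([{"type": "text", "text": "hello"}])

# Assert the expected shape up front instead of nesting isinstance checks.
assert isinstance(content, str)          # span_utils emits a JSON string
content_parsed = json.loads(content)     # parse exactly once
assert isinstance(content_parsed, list)  # top level must be a list of parts

types_in_content = {part.get("type") for part in content_parsed}
assert "tool_use" not in types_in_content
```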
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/opentelemetry-instrumentation-anthropic/pyproject.toml`:
- Line 16: The pyproject.toml entry under tool.uv.sources masks the PyPI package
opentelemetry-semantic-conventions-ai so our tests never validate that the
published 0.5.0 wheel actually exports the symbols we import
(opentelemetry.semconv_ai.SpanAttributes, Meters,
SUPPRESS_LANGUAGE_MODEL_INSTRUMENTATION_KEY, and _testing); remove or
conditionally disable the local source override for
opentelemetry-semantic-conventions-ai in tool.uv.sources (or add an integration
test that installs the released wheel and imports those symbols) so CI/monorepo
tests exercise the real published artifact rather than the sibling checkout.
In `@packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py`:
- Around line 1918-1932: The test fails because streaming builds separate
output.message entries for the same assistant turn (one for text and another for
tool_calls) instead of consolidating them like the non-streaming path; update
the streaming serialization to accumulate all blocks for a single assistant turn
and emit one message object with both "content" and "tool_calls" before writing
GEN_AI_OUTPUT_MESSAGES. Locate the streaming code that appends to
output_messages (the logic that produces the separate text_msg and tool_msgs)
and change it to merge content fragments and tool call entries into a single
assistant message (matching the non-streaming behavior shown around the
serialize logic in span_utils.py lines ~220–223), ensuring output_messages
yields one assistant message containing "content", "role":"assistant", and a
combined "tool_calls" array with arguments serialized the same way the
non-streaming path does.
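The consolidation described above can be sketched as follows; `consolidate_assistant_turn` and the block shapes are illustrative, not the actual streaming.py code:

```python
import json

# Merge all streamed blocks for one assistant turn into a single message,
# mirroring the non-streaming path: one object with "content" plus a
# combined "tool_calls" array, arguments serialized as JSON strings.
def consolidate_assistant_turn(blocks):
    content_fragments, tool_calls = [], []
    for block in blocks:
        if block.get("type") == "text":
            content_fragments.append(block.get("text", ""))
        elif block.get("type") == "tool_use":
            tool_calls.append({
                "id": block.get("id"),
                "name": block.get("name"),
                "arguments": json.dumps(block.get("input", {})),
            })
    message = {"role": "assistant", "content": "".join(content_fragments)}
    if tool_calls:
        message["tool_calls"] = tool_calls
    return [message]  # exactly one entry per assistant turn
```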
---
Nitpick comments:
In `@packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py`:
- Around line 1274-1275: Replace the weak membership check with an exact payload
assertion: instead of asserting response.stop_reason in finish_reasons, assert
that anthropic_span.attributes[GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS]
equals the exact list expected (e.g. [response.stop_reason]) so duplicates or
extra values fail; do the same change for the other occurrence referenced (lines
around 1613-1614) and keep the reference to
GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS to locate the test.
In
`@packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py`:
- Around line 228-237: The current nested parsing around
assistant_msg["content"] (variables content → content_parsed) is more defensive
than needed; simplify by asserting the expected shape up front, e.g. require
content to be a str containing JSON when produced by span_utils, parse it once
into content_parsed, then assert isinstance(content_parsed, list) before
building types_in_content and checking "tool_use" is not present; update the
test code that references assistant_msg, content, content_parsed, and
types_in_content to remove the nested isinstance checks and make the expected
structure explicit.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8fd99eb7-5d57-4e35-9b6e-765b533d3d31
⛔ Files ignored due to path filters (1)
packages/opentelemetry-instrumentation-anthropic/uv.lock is excluded by `!**/*.lock`
📒 Files selected for processing (12)
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/streaming.py
- packages/opentelemetry-instrumentation-anthropic/pyproject.toml
- packages/opentelemetry-instrumentation-anthropic/tests/test_bedrock_with_raw_response.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_completion.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_compliance.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_structured_outputs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py
OzBenSimhonTraceloop
left a comment
There was a problem hiding this comment.
Furthermore, the current anthropic SDK version is
    stop_reason = response.get("stop_reason")

    if stop_reason:
        span.set_attribute(GenAIAttributes.GEN_AI_RESPONSE_FINISH_REASONS, [stop_reason])
The OTel FinishReason enum defines: stop, length, content_filter, tool_call, error, while Anthropic's stop reasons are different AFAIK, so you need a mapping here instead of taking Anthropic's values as-is.
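A minimal sketch of such a mapping, using the value pairs the follow-up commits adopt; the helper name is hypothetical, and the pass-through fallback for unknown values is an assumption, not necessarily what the instrumentation does:

```python
# Map Anthropic stop_reason values onto the OTel GenAI FinishReason enum.
_ANTHROPIC_TO_OTEL_FINISH_REASON = {
    "end_turn": "stop",
    "stop_sequence": "stop",
    "max_tokens": "length",
    "tool_use": "tool_call",
}

def map_finish_reason(stop_reason):
    # Assumption: unknown reasons fall through unchanged rather than erroring.
    return _ANTHROPIC_TO_OTEL_FINISH_REASON.get(stop_reason, stop_reason)
```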
Bump to latest
Address PR #3835 review comments from OzBenSimhonTraceloop:
- Input/output messages now use "parts" array structure per gen-ai-input-messages.json and gen-ai-output-messages.json schemas
- Map Anthropic finish reasons to OTel enum: end_turn→stop, tool_use→tool_call, max_tokens→length, stop_sequence→stop
- Thinking content is now a ReasoningPart (type: "reasoning") inside the assistant message's parts array, not a separate message
- Tool calls are now tool_call parts, not a separate "tool_calls" key
- Streaming events consolidated into a single assistant message
- Rewrote test_semconv_span_attrs.py as spec-driven unit tests (TDD)
- Updated all VCR integration test assertions to match

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (2)
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py (2)
401-402: Remove dead code.
Lines 401-402 are a no-op; `tool_arguments` is already `None` if the condition is true.
🧹 Remove redundant lines:

        if isinstance(tool_arguments, str):
            try:
                tool_arguments = json.loads(tool_arguments)
            except (json.JSONDecodeError, TypeError):
                pass
    -   elif tool_arguments is None:
    -       tool_arguments = None
        parts.append({

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py` around lines 401 - 402, Remove the redundant no-op branch that checks "elif tool_arguments is None: tool_arguments = None" in span_utils.py; delete those two lines so the surrounding if/elif logic no longer contains this dead branch and the behavior for tool_arguments remains handled by the other existing branches.
207-220: Async operations inside a list comprehension are hard to read.
The list comprehension on lines 207-220 contains `await` inside a regular list comprehension. Python 3.6+ allows `await` in list comprehensions within async functions, and each item is awaited sequentially here, which is correct. However, the structure is complex and hard to read. Consider using an explicit loop or `asyncio.gather` for clarity:
♻️ Suggested simplification
    -    content = [
    -        (
    -            await _process_image_item(
    -                model_as_dict(item),
    -                span.context.trace_id,
    -                span.context.span_id,
    -                message_index,
    -                j,
    -            )
    -            if _is_base64_image(model_as_dict(item))
    -            else model_as_dict(item)
    -        )
    -        for j, item in enumerate(content)
    -    ]
    +    processed = []
    +    for j, item in enumerate(content):
    +        item_dict = model_as_dict(item)
    +        if _is_base64_image(item_dict):
    +            processed.append(await _process_image_item(
    +                item_dict,
    +                span.context.trace_id,
    +                span.context.span_id,
    +                message_index,
    +                j,
    +            ))
    +        else:
    +            processed.append(item_dict)
    +    content = processed

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py` around lines 207 - 220, The list comprehension mixes await with conditional logic (using _process_image_item, _is_base64_image, model_as_dict and span.context.trace_id/span_id) which is hard to read; replace it by first building a list of coroutines for items that need processing (e.g., for j,item in enumerate(content) append _process_image_item(...) when _is_base64_image(model_as_dict(item)) else append a completed value or use asyncio.sleep(0)/None), then run them with asyncio.gather(...) to concurrently await all results and assign the returned list back to content; alternatively use an explicit async for loop to sequentially await _process_image_item for clarity.
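A runnable sketch of the `asyncio.gather` variant suggested above; `_process_image_item` and `_is_base64_image` here are simplified stand-ins for the real helpers in span_utils.py:

```python
import asyncio

async def _process_image_item(item):
    # Stand-in for the real image upload/processing helper.
    await asyncio.sleep(0)
    return {**item, "processed": True}

def _is_base64_image(item):
    # Stand-in predicate.
    return item.get("type") == "image"

async def process_content(content):
    async def handle(item):
        return await _process_image_item(item) if _is_base64_image(item) else item
    # gather awaits all items concurrently instead of one-by-one
    # inside a comprehension.
    return await asyncio.gather(*(handle(item) for item in content))

result = asyncio.run(process_content([{"type": "image"}, {"type": "text"}]))
```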
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In
`@packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py`:
- Around line 401-402: Remove the redundant no-op branch that checks "elif
tool_arguments is None: tool_arguments = None" in span_utils.py; delete those
two lines so the surrounding if/elif logic no longer contains this dead branch
and the behavior for tool_arguments remains handled by the other existing
branches.
- Around line 207-220: The list comprehension mixes await with conditional logic
(using _process_image_item, _is_base64_image, model_as_dict and
span.context.trace_id/span_id) which is hard to read; replace it by first
building a list of coroutines for items that need processing (e.g., for j,item
in enumerate(content) append _process_image_item(...) when
_is_base64_image(model_as_dict(item)) else append a completed value or use
asyncio.sleep(0)/None), then run them with asyncio.gather(...) to concurrently
await all results and assign the returned list back to content; alternatively
use an explicit async for loop to sequentially await _process_image_item for
clarity.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 90a921e4-eef5-4e54-84b5-ba910e5af755
📒 Files selected for processing (8)
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_bedrock_with_raw_response.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_completion.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_structured_outputs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py
✅ Files skipped from review due to trivial changes (1)
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
@max-deygin-servicenow please bump the Anthropic SDK from 0.76 to 0.86 + fix the lint issue :)
Force-pushed f1971c3 to 10a5f47
🧹 Nitpick comments (2)
packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py (2)
690-714: Same observation as the sync streaming test.
The `thinking` variable is accumulated (lines 691, 699) but not asserted. See the previous comment about the sync streaming test for the same consideration.
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py` around lines 690 - 714, The test accumulates a `thinking` string from streaming events but never asserts it; update the test (in the async streaming block where `text` and `thinking` are built and where `anthropic_span` and `output_messages` are inspected) to assert that the span/output reasoning part contains the collected `thinking` content—e.g., locate `thinking` and `text` accumulation in the async for loop and add an assertion comparing `thinking` to the reasoning part(s) in `output_messages[0]["parts"]` (or to the relevant attribute on `anthropic_span`) to ensure the streamed thinking deltas are represented in the recorded spans/messages.
449-473: Consider adding an assertion for the accumulated `thinking` content.
The `thinking` variable accumulates streamed thinking deltas (lines 450, 458) but is not used in any assertion. While the `any()` check on line 471 verifies that reasoning parts exist, it doesn't validate that the content matches the accumulated thinking.
If this is intentional due to streaming consolidation differences, consider adding a brief comment explaining why the `thinking` variable is accumulated but not asserted. Alternatively, if it's unused dead code, remove the accumulation.
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py` around lines 449 - 473, The test accumulates streaming thinking deltas into the variable thinking inside the response loop but never asserts or documents its purpose; either remove the unused accumulation or add an assertion or explanatory comment: locate the loop that builds text and thinking (the for event in response block) and either (a) add an assertion that the reasoning part in output_messages[0]["parts"] contains the accumulated thinking (compare thinking to the appropriate reasoning part content), or (b) delete the thinking accumulation if not needed, or (c) add a short comment above the accumulation explaining why streamed thinking is not asserted (e.g., due to consolidation differences), so the intent is clear to future readers.
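The suggested assertion can be sketched as a standalone example; `thinking` and `output_messages` below are synthetic stand-ins for the values the test builds from the stream and the span attribute:

```python
# Assert accumulated streamed thinking against the recorded reasoning part.
thinking = "step one" + " step two"  # stand-in for the per-delta accumulation
output_messages = [{
    "role": "assistant",
    "parts": [{"type": "reasoning", "content": "step one step two"}],
}]

reasoning_parts = [p for p in output_messages[0]["parts"]
                   if p["type"] == "reasoning"]
assert reasoning_parts, "expected at least one reasoning part"
assert reasoning_parts[0]["content"] == thinking
```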
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py`:
- Around line 690-714: The test accumulates a `thinking` string from streaming
events but never asserts it; update the test (in the async streaming block where
`text` and `thinking` are built and where `anthropic_span` and `output_messages`
are inspected) to assert that the span/output reasoning part contains the
collected `thinking` content—e.g., locate `thinking` and `text` accumulation in
the async for loop and add an assertion comparing `thinking` to the reasoning
part(s) in `output_messages[0]["parts"]` (or to the relevant attribute on
`anthropic_span`) to ensure the streamed thinking deltas are represented in the
recorded spans/messages.
- Around line 449-473: The test accumulates streaming thinking deltas into the
variable thinking inside the response loop but never asserts or documents its
purpose; either remove the unused accumulation or add an assertion or
explanatory comment: locate the loop that builds text and thinking (the for
event in response block) and either (a) add an assertion that the reasoning part
in output_messages[0]["parts"] contains the accumulated thinking (compare
thinking to the appropriate reasoning part content), or (b) delete the thinking
accumulation if not needed, or (c) add a short comment above the accumulation
explaining why streamed thinking is not asserted (e.g., due to consolidation
differences), so the intent is clear to future readers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e7a92ea5-daca-4a2a-a4e0-75ff6051f497
⛔ Files ignored due to path filters (1)
packages/opentelemetry-instrumentation-anthropic/uv.lock is excluded by `!**/*.lock`
📒 Files selected for processing (12)
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/streaming.py
- packages/opentelemetry-instrumentation-anthropic/pyproject.toml
- packages/opentelemetry-instrumentation-anthropic/tests/test_bedrock_with_raw_response.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_completion.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_compliance.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_span_attrs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_structured_outputs.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_thinking.py
✅ Files skipped from review due to trivial changes (6)
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/streaming.py
- packages/opentelemetry-instrumentation-anthropic/pyproject.toml
- packages/opentelemetry-instrumentation-anthropic/tests/test_semconv_compliance.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_completion.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_prompt_caching.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_bedrock_with_raw_response.py
🚧 Files skipped from review as they are similar to previous changes (4)
- packages/opentelemetry-instrumentation-anthropic/tests/test_structured_outputs.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
- packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/span_utils.py
Force-pushed 10a5f47 to 3ec1309
Force-pushed 827dd3b to 543070a
…→ GEN_AI_
- Import GEN_AI_REQUEST_FREQUENCY/PRESENCE_PENALTY from upstream gen_ai_attributes
- Use SpanAttributes.GEN_AI_USAGE_TOTAL_TOKENS/IS_STREAMING/RESPONSE_FINISH_REASON/RESPONSE_STOP_REASON (renamed from LLM_*)
- Remove duplicate LLM_REQUEST_TYPE dict entry (operation_name var already handles it)
- Update test_messages.py and test_bedrock_with_raw_response.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…pan_attrs.py Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- llm.usage.total_tokens → gen_ai.usage.total_tokens
- gen_ai.usage.cache_creation_input_tokens → gen_ai.usage.cache_creation.input_tokens
- gen_ai.usage.cache_read_input_tokens → gen_ai.usage.cache_read.input_tokens

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- system_instructions: emit JSON array-of-parts instead of plain string
- images: emit OTel BlobPart/UriPart instead of OpenAI image_url format
- finish_reason: omit key when absent (both streaming and non-streaming) instead of serializing null
- cache token attrs: use non-deprecated dotted constants (gen_ai.usage.cache_creation.input_tokens)
- remove redundant _DEPRECATED cache token aliases from semconv-ai
- bump semconv-ai dependency lower bound to >=0.5.1
- add comprehensive semconv compliance unit tests
- update VCR test assertions to match new compliant formats
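For illustration, an output payload in the parts-array shape described above might serialize like this; the field values are invented, and the exact part schemas are defined by the gen-ai-output-messages.json spec, not this sketch:

```python
import json

# Invented example of a GEN_AI_OUTPUT_MESSAGES payload in the parts-array
# shape: reasoning, text, and tool_call parts inside one assistant message.
output_messages = [{
    "role": "assistant",
    "parts": [
        {"type": "reasoning", "content": "User wants the weather."},
        {"type": "text", "content": "Let me check that for you."},
        {
            "type": "tool_call",
            "id": "toolu_01",
            "name": "get_weather",
            "arguments": {"city": "Paris"},
        },
    ],
    "finish_reason": "tool_call",  # Anthropic's tool_use mapped to the enum
}]

# The span attribute value is the JSON serialization of this structure.
serialized = json.dumps(output_messages)
```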
Force-pushed 543070a to 53cbe2e
Force-pushed 911f2f1 to c5c0fe9
What
Refactors the Anthropic instrumentation package to emit span attributes that comply with the OpenTelemetry GenAI semantic conventions spec.
Changes
New attributes (replaces legacy flat gen_ai.prompt.* / gen_ai.completion.* keys):
Bug fixes:
Checklist
I have added tests that cover my changes.
If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
(If applicable) I have updated the documentation accordingly.
Validated locally against a running agent using ConsoleSpanExporter.
Confirmed all appear correctly on anthropic.chat spans.

Summary by CodeRabbit
Refactor
Tests
Chores