Conversation

@wconti27 wconti27 (Contributor) commented May 29, 2024

Fixes #6055.

Checklist

  • Change(s) are motivated and described in the PR description
  • Testing strategy is described if automated tests are not included in the PR
  • Risks are described (performance impact, potential for breakage, maintainability)
  • Change is maintainable (easy to change, telemetry, documentation)
  • Library release note guidelines are followed or label changelog/no-changelog is set
  • Documentation is included (in-code, generated user docs, public corp docs)
  • Backport labels are set (if applicable)
  • If this PR changes the public interface, I've notified @DataDog/apm-tees.

Reviewer Checklist

  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Description motivates each change
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Change is maintainable (easy to change, telemetry, documentation)
  • Release note makes sense to a user of the library
  • Author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

@wconti27 wconti27 requested a review from Yun-Kim May 29, 2024 00:22
datadog-dd-trace-py-rkomorn bot commented May 29, 2024

Datadog Report

Branch report: conti/support-anthropic
Commit report: c118944
Test service: dd-trace-py

❌ 1 Failed (0 Known Flaky), 171831 Passed, 1139 Skipped, 11h 16m 6.38s Total duration (24m 50.11s time saved)

❌ Failed Tests (1)

  • test_anthropic_llm_error - test_anthropic.py - Details

     At request <Request GET /test/session/snapshot >:
        At snapshot (token='tests.contrib.anthropic.test_anthropic.test_anthropic_llm_error'):
         - Directory: /snapshots
         - CI mode: 1
         - Trace File: /snapshots/tests.contrib.anthropic.test_anthropic.test_anthropic_llm_error.json
         - Stats File: /snapshots/tests.contrib.anthropic.test_anthropic.test_anthropic_llm_error_tracestats.json
         At compare of 1 expected trace(s) to 1 received trace(s):
          At trace 'anthropic.request' (1 spans):
           At snapshot compare of span 'anthropic.request' at position 1 in trace:
            - Expected span:
     ...
    


@Yun-Kim Yun-Kim left a comment


Looking great! I added some comments to provide context and a few suggestions.

@wconti27 wconti27 requested a review from Yun-Kim May 31, 2024 20:07
def record_usage(self, span: Span, usage: Dict[str, Any]) -> None:
    if not usage or not self.metrics_enabled:
        return
    for token_type in ("prompt", "completion"):

Suggested change:
-    for token_type in ("prompt", "completion"):
+    for token_type in ("input", "output"):

@wconti27 (Author) replied:

why? I thought we were going with prompt and completion for the tag names?
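
For context, the loop under discussion can be sketched as a self-contained example. The `FakeSpan` class and the `anthropic.response.usage.*` metric-name pattern are illustrative assumptions for this sketch, not the actual dd-trace-py API; the thread above is debating whether the token types should be named "prompt"/"completion" or "input"/"output".

```python
class FakeSpan:
    """Minimal stand-in for a tracer span that records numeric metrics."""

    def __init__(self):
        self.metrics = {}

    def set_metric(self, key, value):
        self.metrics[key] = value


def record_usage(span, usage):
    """Tag token counts on the span, one metric per token type."""
    if not usage:
        return
    # The review thread debates "prompt"/"completion" vs "input"/"output" here.
    for token_type in ("prompt", "completion"):
        tokens = usage.get(token_type)
        if tokens is not None:
            span.set_metric("anthropic.response.usage.%s_tokens" % token_type, tokens)


span = FakeSpan()
record_usage(span, {"prompt": 12, "completion": 34})
print(span.metrics)
# {'anthropic.response.usage.prompt_tokens': 12, 'anthropic.response.usage.completion_tokens': 34}
```

Renaming the tuple to `("input", "output")` would only change the suffix of each metric key, which is why the naming question matters: it is user-visible in the emitted metric names.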

Comment on lines 61 to 64
self.record_usage(
    span,
    {"prompt": _get_attr(usage, "input_tokens", 0), "completion": getattr(usage, "output_tokens", 0)},
)
Reviewer comment:

AnthropicIntegration.record_usage() should be used to tag span metrics for the input/output/total token counts. We should have a separate AnthropicIntegration._get_llmobs_metrics_tags() that returns the recorded span metric values, i.e.

def _get_llmobs_metrics_tags(span):
    return {"input_tokens": span.get_metric("anthropic.response.usage.input_tokens"), "output_tokens": ..., "total_tokens": ...}

And set that on the span, i.e. span.set_tag_str(METRICS, json.dumps(self._get_llmobs_metrics_tags(span)))

@wconti27 (Author) replied:

and we set this tag in the LLMObs Integration correct?

@wconti27 wconti27 closed this Jun 3, 2024

Development

Successfully merging this pull request may close these issues.

Add Anthropic support similar to OpenAI

4 participants