Implement Cohere instrumentation #3081
```python
response_format = kwargs.get("response_format")
if response_format:
    # TODO: Add to sem conv
```
Should we make the openai one generic instead?
so you mean remove the openai one and make a generic one for this entry: https://github.com/open-telemetry/semantic-conventions/blob/abd92c153b627efd3222e6b46828c5e94bf1b2cd/model/gen-ai/spans.yaml#L88
I think it can make sense, and in any case we can open an issue for any Cohere-specific attributes and add it to the TODO comment. That issue could be closed either by normalizing things from openai, by Cohere-specific additions, or by a combination.
@karthikscale3 you happen to know how many other LLM providers have a concept of response_format?
> so you mean remove the openai one and make a generic one for this entry: https://github.com/open-telemetry/semantic-conventions/blob/abd92c153b627efd3222e6b46828c5e94bf1b2cd/model/gen-ai/spans.yaml#L88
Yes
@lmolkova Is it possible to have a generic attribute but have each implementation (openai, cohere) have their own list of possible values?
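For illustration, one way a generic attribute with per-implementation value sets could look, sketched with plain dicts. The attribute name `gen_ai.request.response_format` and both value lists are assumptions for the sake of the example, not anything ratified in the spec:

```python
# Hypothetical sketch: a single generic attribute, with each
# instrumentation declaring the values its provider accepts.
GEN_AI_REQUEST_RESPONSE_FORMAT = "gen_ai.request.response_format"

# Assumed per-provider value sets (examples only).
OPENAI_RESPONSE_FORMATS = {"text", "json_object", "json_schema"}
COHERE_RESPONSE_FORMATS = {"text", "json_object"}

def response_format_attribute(kwargs, allowed):
    """Return the attribute dict, keeping only provider-defined values."""
    fmt = kwargs.get("response_format")
    if fmt in allowed:
        return {GEN_AI_REQUEST_RESPONSE_FORMAT: fmt}
    return {}
```

Each instrumentation would then pass its own `allowed` set, so the attribute key stays common while the value space stays provider-specific.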
| "stop_sequences" | ||
| ), | ||
| GenAIAttributes.GEN_AI_REQUEST_TEMPERATURE: kwargs.get("temperature"), | ||
| # TODO: Add to sem conv |
👍
I'd merge it and `gen_ai.openai.request.seed` into a single `gen_ai.request.seed`
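A minimal sketch of what the merged attribute could look like, assuming the `gen_ai.request.seed` name proposed in this thread (not yet a ratified convention):

```python
# Sketch only: "gen_ai.request.seed" is the merged name proposed
# above, replacing the openai-specific gen_ai.openai.request.seed.
GEN_AI_REQUEST_SEED = "gen_ai.request.seed"

def seed_attribute(kwargs):
    """Map a provider-agnostic `seed` kwarg onto the generic attribute."""
    seed = kwargs.get("seed")
    if seed is None:
        return {}
    return {GEN_AI_REQUEST_SEED: seed}
```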
```python
if getattr(result.usage, "tokens"):
    span.set_attribute(
        GenAIAttributes.GEN_AI_USAGE_INPUT_TOKENS,
        result.usage.tokens.input_tokens,
```
PTAL at the https://github.com/open-telemetry/semantic-conventions/issues/1279
Cohere reports billed tokens in addition to input/output
https://docs.cohere.com/v2/reference/chat#response.body.usage
I'm not sure what the difference is, or whether we need to report one or the other (or maybe both?)
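One way the instrumentation could cope with both shapes, sketched with plain dicts. The field names (`tokens`, `billed_units`, `input_tokens`, `output_tokens`) follow the Cohere v2 docs linked above; the preference order here is an assumption, since which count to report is exactly the open question:

```python
def usage_attributes(usage):
    """Prefer raw token counts, falling back to billed units.

    `usage` mirrors the Cohere v2 chat response usage object.
    The fallback order is an assumption pending the spec discussion.
    """
    counts = usage.get("tokens") or usage.get("billed_units") or {}
    attrs = {}
    if "input_tokens" in counts:
        attrs["gen_ai.usage.input_tokens"] = counts["input_tokens"]
    if "output_tokens" in counts:
        attrs["gen_ai.usage.output_tokens"] = counts["output_tokens"]
    return attrs
```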
```python
def get_span_name(span_attributes):
    name = span_attributes.get(GenAIAttributes.GEN_AI_OPERATION_NAME, "")
    model = span_attributes.get(GenAIAttributes.GEN_AI_REQUEST_MODEL, "")
```
model might not be available in some cases; we should report just f"{name}" then
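The suggested fallback could look like this, sketched with plain string keys in place of the `GenAIAttributes` constants:

```python
# Sketch of the fallback suggested above: drop the model from the
# span name when it is absent. String keys stand in for the
# GenAIAttributes constants used in the real code.
def get_span_name(span_attributes):
    name = span_attributes.get("gen_ai.operation.name", "")
    model = span_attributes.get("gen_ai.request.model", "")
    if not model:
        return f"{name}"
    return f"{name} {model}"
```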
```python
span.set_attribute(
    ErrorAttributes.ERROR_TYPE, type(error).__qualname__
)
span.end()
```
```python
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer(__name__)
```
should we set up logs/events and mention how to turn on content capture?
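If the example grows that setup, it could also document the content-capture switch. Assuming the Cohere instrumentation follows the same opt-in the OpenAI instrumentation uses (worth verifying the exact variable name before documenting it):

```shell
# Assumed to mirror the OpenAI instrumentation's opt-in; verify the
# exact variable for the Cohere instrumentation before relying on it.
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
```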
This PR has been automatically marked as stale because it has not had any activity for 14 days. It will be closed if no further activity occurs within 14 days of this comment.

This PR has been closed due to inactivity. Please reopen if you would like to continue working on it.
Part of #3050
TODO:
- Add tests
- Support streaming