diff --git a/.chloggen/gen-ai-system-naming.yaml b/.chloggen/gen-ai-system-naming.yaml
new file mode 100644
index 0000000000..b595b5d1fb
--- /dev/null
+++ b/.chloggen/gen-ai-system-naming.yaml
@@ -0,0 +1,10 @@
+change_type: breaking
+component: gen-ai
+note: |
+ Follow system-specific naming policy in GenAI semantic conventions.
+ - Rename `gen_ai.system` to `gen_ai.provider.name`.
+ - Remove the `gen_ai` prefix from `gen_ai.openai.*` attributes.
+ - Rename `az.ai.*` attribute names to `azure.ai.*`.
+ - Rename the `xai` provider value to `x_ai`.
+
+issues: [ 2046 ]
+subtext:
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yaml b/.github/ISSUE_TEMPLATE/bug_report.yaml
index e4cc277b4a..126dac97a8 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yaml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yaml
@@ -72,6 +72,7 @@ body:
- area:network
- area:nodejs
- area:oci
+ - area:openai
- area:opentracing
- area:os
- area:otel
diff --git a/.github/ISSUE_TEMPLATE/change_proposal.yaml b/.github/ISSUE_TEMPLATE/change_proposal.yaml
index f9b4f2bd53..82f7ea55d9 100644
--- a/.github/ISSUE_TEMPLATE/change_proposal.yaml
+++ b/.github/ISSUE_TEMPLATE/change_proposal.yaml
@@ -64,6 +64,7 @@ body:
- area:network
- area:nodejs
- area:oci
+ - area:openai
- area:opentracing
- area:os
- area:otel
diff --git a/.github/ISSUE_TEMPLATE/new-conventions.yaml b/.github/ISSUE_TEMPLATE/new-conventions.yaml
index 1ac60df843..a655c59c26 100644
--- a/.github/ISSUE_TEMPLATE/new-conventions.yaml
+++ b/.github/ISSUE_TEMPLATE/new-conventions.yaml
@@ -75,6 +75,7 @@ body:
- area:network
- area:nodejs
- area:oci
+ - area:openai
- area:opentracing
- area:os
- area:otel
diff --git a/docs/gen-ai/README.md b/docs/gen-ai/README.md
index 11ed92b099..0a0b588660 100644
--- a/docs/gen-ai/README.md
+++ b/docs/gen-ai/README.md
@@ -7,9 +7,25 @@ linkTitle: Generative AI
**Status**: [Development][DocumentStatus]
> [!Warning]
-> The semantic conventions for GenAI and LLM are currently in development.
-> We encourage instrumentation libraries and telemetry consumers developers to
-> use the conventions in limited non-critical workloads and share the feedback
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attribute, metric, span, and event
+> names, span kind, and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (1.36.0 or prior).
+>
+> This transition plan will be updated to include the stable version before the
+> GenAI conventions are marked as stable.
Semantic conventions for Generative AI operations are defined for the following signals:
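The opt-in mechanism described in the warning above can be sketched as a small helper that an instrumentation might use to decide which convention version to emit. This is a minimal sketch under the assumption that a single boolean switch suffices; the function name is illustrative and not part of the conventions:

```python
import os


def use_latest_genai_conventions() -> bool:
    """Return True when OTEL_SEMCONV_STABILITY_OPT_IN selects the latest
    experimental GenAI conventions; otherwise the instrumentation keeps
    emitting whatever version it emitted before (v1.36.0 or prior)."""
    opt_in = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    # The variable is a comma-separated list of category-specific values.
    values = {v.strip() for v in opt_in.split(",") if v.strip()}
    return "gen_ai_latest_experimental" in values
```

An instrumentation would call this once at setup time and branch its attribute/metric/span naming accordingly.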
diff --git a/docs/gen-ai/aws-bedrock.md b/docs/gen-ai/aws-bedrock.md
index 92edf6b57c..03302cf0ff 100644
--- a/docs/gen-ai/aws-bedrock.md
+++ b/docs/gen-ai/aws-bedrock.md
@@ -6,12 +6,33 @@ linkTitle: AWS Bedrock
**Status**: [Development][DocumentStatus]
+> [!Warning]
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attribute, metric, span, and event
+> names, span kind, and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (1.36.0 or prior).
+>
+> This transition plan will be updated to include the stable version before the
+> GenAI conventions are marked as stable.
+
+## AWS Bedrock Spans
+
The Semantic Conventions for [AWS Bedrock](https://aws.amazon.com/bedrock/) extend and override the semantic conventions
for [Gen AI Spans](gen-ai-spans.md).
-`gen_ai.system` MUST be set to `"aws.bedrock"`.
-
-## AWS Bedrock Spans
+`gen_ai.provider.name` MUST be set to `"aws.bedrock"`.
These attributes track input data and metadata for a request to an AWS Bedrock model. The attributes include general Generative AI
attributes and ones specific to AWS Bedrock.
@@ -35,7 +56,7 @@ Describes an AWS Bedrock operation span.
|---|---|---|---|---|---|
| [`aws.bedrock.guardrail.id`](/docs/registry/attributes/aws.md) | string | The unique identifier of the AWS Bedrock Guardrail. A [guardrail](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html) helps safeguard and prevent unwanted behavior from model responses or user messages. | `sgi5gkybzqak` | `Required` |  |
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.conversation.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation. [4] | `conv_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` when available |  |
| [`gen_ai.output.type`](/docs/registry/attributes/gen-ai.md) | string | Represents the content type requested by the client. [5] | `text`; `json`; `image` | `Conditionally Required` [6] |  |
@@ -60,17 +81,24 @@ Describes an AWS Bedrock operation span.
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
+
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of exception that occurred, or another low-cardinality error identifier.
@@ -139,31 +167,31 @@ Additional output format details may be recorded in the future in the `gen_ai.ou
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [11] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [11] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [12] |  |
-| `gcp.vertex_ai` | Vertex AI [13] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [13] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[11]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[11]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[12]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[12]:** May be used when specific backend is unknown.
-**[13]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[13]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
diff --git a/docs/gen-ai/azure-ai-inference.md b/docs/gen-ai/azure-ai-inference.md
index c89e28cb14..c24042cb5f 100644
--- a/docs/gen-ai/azure-ai-inference.md
+++ b/docs/gen-ai/azure-ai-inference.md
@@ -6,13 +6,45 @@ linkTitle: Azure AI Inference
**Status**: [Development][DocumentStatus]
+
+
+- [Spans](#spans)
+ - [Inference](#inference)
+ - [Embedding](#embedding)
+- [Metrics](#metrics)
+
+
+
+> [!Warning]
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attribute, metric, span, and event
+> names, span kind, and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (1.36.0 or prior).
+>
+> This transition plan will be updated to include the stable version before the
+> GenAI conventions are marked as stable.
+
The Semantic Conventions for [Azure AI Inference](https://learn.microsoft.com/azure/ai-studio) extend and override the [GenAI Semantic Conventions](README.md).
## Spans
### Inference
-
+`gen_ai.provider.name` MUST be set to `"azure.ai.inference"` and SHOULD be provided **at span creation time**.
+
+
@@ -23,7 +55,7 @@ The Semantic Conventions for [Azure AI Inference](https://learn.microsoft.com/az
Semantic Conventions for [Azure AI Inference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-api) client spans extend and override the semantic conventions for [Gen AI Spans](gen-ai-spans.md).
-`gen_ai.system` MUST be set to `"az.ai.inference"` and SHOULD be provided **at span creation time**.
+`gen_ai.provider.name` MUST be set to `"azure.ai.inference"` and SHOULD be provided **at span creation time**.
**Span name** SHOULD be `{gen_ai.operation.name} {gen_ai.request.model}` when the
model name is available and `{gen_ai.operation.name}` otherwise.
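The span-naming rule above can be sketched as a tiny helper. This is a sketch of the stated formula only; the function name is illustrative:

```python
from typing import Optional


def inference_span_name(operation: str, model: Optional[str]) -> str:
    # "{gen_ai.operation.name} {gen_ai.request.model}" when the model name
    # is available, and just the operation name otherwise.
    return f"{operation} {model}" if model else operation
```

For example, a chat request against a known model yields a name like `chat gpt-4`, while a request with no known model falls back to `chat`.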
diff --git a/docs/gen-ai/gen-ai-agent-spans.md b/docs/gen-ai/gen-ai-agent-spans.md
index 83cdbb4826..d1e492dcc2 100644
--- a/docs/gen-ai/gen-ai-agent-spans.md
+++ b/docs/gen-ai/gen-ai-agent-spans.md
@@ -6,17 +6,36 @@ linkTitle: Agent spans
**Status**: [Development][DocumentStatus]
-
-
- [Spans](#spans)
- [Create agent span](#create-agent-span)
- - [Invoke Agent Span](#invoke-agent-span)
-- [Agent execute tool span](#agent-execute-tool-span)
+ - [Invoke agent span](#invoke-agent-span)
+- [Execute tool span](#execute-tool-span)
+> [!Warning]
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attribute, metric, span, and event
+> names, span kind, and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (1.36.0 or prior).
+>
+> This transition plan will be updated to include the stable version before the
+> GenAI conventions are marked as stable.
+
Generative AI models can be trained to use tools to access real-time information or suggest a real-world action. For example, a model can leverage a database retrieval tool to access specific information, like a customer's purchase history, so it can generate tailored shopping recommendations. Alternatively, based on a user's query, a model can make various API calls to send an email response to a colleague or complete a financial transaction on your behalf. To do so, the model must not only have access to a set of external tools, it needs the ability to plan and execute any task in a self-directed fashion. This combination of reasoning, logic, and access to external information that are all connected to a Generative AI model invokes the concept of an agent.
This document defines semantic conventions for GenAI agent calls that are defined by this [whitepaper](https://www.kaggle.com/whitepaper-agents).
@@ -52,7 +71,7 @@ Semantic conventions for individual GenAI systems and frameworks MAY specify dif
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.agent.description`](/docs/registry/attributes/gen-ai.md) | string | Free-form description of the GenAI agent provided by the application. | `Helps with math problems`; `Generates fiction stories` | `Conditionally Required` If provided by the application. |  |
| [`gen_ai.agent.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier of the GenAI agent. | `asst_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` if applicable. |  |
@@ -63,17 +82,24 @@ Semantic conventions for individual GenAI systems and frameworks MAY specify dif
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
+
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of exception that occurred, or another low-cardinality error identifier.
@@ -109,38 +135,38 @@ Instrumentations SHOULD document the list of errors they report.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [7] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [7] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [8] |  |
-| `gcp.vertex_ai` | Vertex AI [9] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [9] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[7]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[7]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[8]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[8]:** May be used when specific backend is unknown.
-**[9]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[9]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
-### Invoke Agent Span
+### Invoke agent span
@@ -165,7 +191,7 @@ Semantic conventions for individual GenAI systems and frameworks MAY specify dif
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.agent.description`](/docs/registry/attributes/gen-ai.md) | string | Free-form description of the GenAI agent provided by the application. | `Helps with math problems`; `Generates fiction stories` | `Conditionally Required` when available |  |
| [`gen_ai.agent.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier of the GenAI agent. | `asst_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` if applicable. |  |
@@ -192,17 +218,24 @@ Semantic conventions for individual GenAI systems and frameworks MAY specify dif
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
+
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of exception that occurred, or another low-cardinality error identifier.
@@ -273,38 +306,38 @@ Additional output format details may be recorded in the future in the `gen_ai.ou
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [12] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [12] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [13] |  |
-| `gcp.vertex_ai` | Vertex AI [14] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [14] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[12]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[12]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[13]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[13]:** May be used when specific backend is unknown.
-**[14]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[14]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
-## Agent execute tool span
+## Execute tool span
If your agent uses tools, refer to [Execute Tool Span](./gen-ai-spans.md#execute-tool-span).
diff --git a/docs/gen-ai/gen-ai-events.md b/docs/gen-ai/gen-ai-events.md
index 95f15e7af6..bec087be34 100644
--- a/docs/gen-ai/gen-ai-events.md
+++ b/docs/gen-ai/gen-ai-events.md
@@ -21,6 +21,27 @@ linkTitle: Events
+> [!Warning]
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attribute, metric, span, and event
+> names, span kind, and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (1.36.0 or prior).
+>
+> This transition plan will be updated to include the stable version before the
+> GenAI conventions are marked as stable.
+
GenAI instrumentations MAY capture user inputs sent to the model and responses received from it as [events](https://github.com/open-telemetry/opentelemetry-specification/tree/v1.46.0/specification/logs/data-model.md#events).
> Note:
@@ -61,47 +82,54 @@ This event describes the system instructions passed to the GenAI model.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [1] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Recommended` |  |
+
+**[1] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
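The discriminator rule above can be sketched as a small helper. The function and its shape are illustrative assumptions, not normative; only the attribute names and the well-known values come from these conventions:

```python
# Well-known gen_ai.provider.name values (mirrors the table in this section).
KNOWN_PROVIDER_NAMES = {
    "anthropic", "aws.bedrock", "azure.ai.inference", "azure.ai.openai",
    "cohere", "deepseek", "gcp.gemini", "gcp.gen_ai", "gcp.vertex_ai",
    "groq", "ibm.watsonx.ai", "mistral_ai", "openai", "perplexity", "x_ai",
}

def genai_span_attributes(provider_name: str, request_model: str,
                          server_address: str) -> dict:
    """Assemble the common GenAI attributes for a span, metric, or event.

    A well-known provider name MUST be used when one applies; otherwise a
    custom value MAY be used as-is.
    """
    return {
        "gen_ai.provider.name": provider_name,
        # These help identify the actual system in use when a provider
        # proxies or hosts models from elsewhere.
        "gen_ai.request.model": request_model,
        "server.address": server_address,
    }
```

For instance, telemetry for a call made with an OpenAI client against an Azure OpenAI endpoint would carry the provider name reflecting the instrumentation's best knowledge, with `server.address` disambiguating the actual backend.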
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [2] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [2] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [3] |  |
-| `gcp.vertex_ai` | Vertex AI [4] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [4] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[2]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[2]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[3]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[3]:** May be used when the specific backend is unknown.
-**[4]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[4]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
**Body fields:**
@@ -136,47 +164,54 @@ This event describes the user message passed to the GenAI model.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [1] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Recommended` |  |
+
+**[1] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [2] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [2] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [3] |  |
-| `gcp.vertex_ai` | Vertex AI [4] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [4] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[2]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[2]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[3]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[3]:** May be used when the specific backend is unknown.
-**[4]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[4]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
**Body fields:**
@@ -211,47 +246,54 @@ This event describes the assistant message passed to GenAI system.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [1] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Recommended` |  |
-**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[1] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
+
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [2] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [2] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [3] |  |
-| `gcp.vertex_ai` | Vertex AI [4] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [4] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[2]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[2]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[3]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[3]:** May be used when the specific backend is unknown.
-**[4]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[4]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
**Body fields:**
@@ -301,47 +343,54 @@ This event describes the response from a tool or function call passed to the Gen
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [1] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Recommended` |  |
+
+**[1] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [2] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [2] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [3] |  |
-| `gcp.vertex_ai` | Vertex AI [4] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [4] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[2]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[2]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[3]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[3]:** May be used when the specific backend is unknown.
-**[4]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[4]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
**Body fields:**
@@ -377,47 +426,54 @@ This event describes the Gen AI response message.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [1] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Recommended` |  |
+
+**[1] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [2] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [2] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [3] |  |
-| `gcp.vertex_ai` | Vertex AI [4] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [4] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[2]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[2]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[3]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[3]:** May be used when the specific backend is unknown.
-**[4]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[4]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
**Body fields:**
@@ -466,7 +522,7 @@ Semantic conventions for individual systems MAY specify a different type for arg
## Custom events
System-specific events that are not covered in this document SHOULD be documented in corresponding Semantic Conventions extensions and
-SHOULD follow `gen_ai.{gen_ai.system}.*` naming pattern for system-specific events.
+SHOULD follow the `{gen_ai.provider.name}.*` naming pattern.
## Examples
@@ -497,7 +553,7 @@ sequenceDiagram
| Attribute name | Value |
|---------------------------------|--------------------------------------------|
| Span name | `"chat gpt-4"` |
-| `gen_ai.system` | `"openai"` |
+| `gen_ai.provider.name` | `"openai"` |
| `gen_ai.request.model` | `"gpt-4"` |
| `gen_ai.request.max_tokens` | `200` |
| `gen_ai.request.top_p` | `1.0` |
@@ -513,21 +569,21 @@ sequenceDiagram
| Property | Value |
|---------------------|-------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (with content enabled) | `{"content": "You're a helpful bot"}` |
2. `gen_ai.user.message`
| Property | Value |
|---------------------|-------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (with content enabled) | `{"content":"Tell me a joke about OpenTelemetry"}` |
3. `gen_ai.choice`
| Property | Value |
|---------------------|-------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (with content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
| Event body (without content) | `{"index":0,"finish_reason":"stop","message":{}}` |
@@ -568,7 +624,7 @@ Here's the telemetry generated for each step in this scenario:
| Attribute name | Value |
|---------------------|-------------------------------------------------------|
| Span name | `"chat gpt-4"` |
-| `gen_ai.system` | `"openai"` |
+| `gen_ai.provider.name` | `"openai"` |
| `gen_ai.request.model`| `"gpt-4"` |
| `gen_ai.request.max_tokens`| `200` |
| `gen_ai.request.top_p`| `1.0` |
@@ -586,14 +642,14 @@ Here's the telemetry generated for each step in this scenario:
| Property | Value |
|---------------------|-------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body | `{"content":"What's the weather in Paris?"}` |
2. `gen_ai.choice`
| Property | Value |
|---------------------|-------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
| Event body (without content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |
@@ -602,7 +658,7 @@ Here's the telemetry generated for each step in this scenario:
| Attribute name | Value |
|---------------------------------|-------------------------------------------------------|
| Span name | `"chat gpt-4"` |
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| `gen_ai.request.model` | `"gpt-4"` |
| `gen_ai.request.max_tokens` | `200` |
| `gen_ai.request.top_p` | `1.0` |
@@ -622,14 +678,14 @@ Here's the telemetry generated for each step in this scenario:
| Property | Value |
|----------------------------------|------------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body | `{"content":"What's the weather in Paris?"}` |
2. `gen_ai.assistant.message`
| Property | Value |
|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
| Event body (content not enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |
@@ -637,7 +693,7 @@ Here's the telemetry generated for each step in this scenario:
| Property | Value |
|----------------------------------|------------------------------------------------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (content enabled) | `{"content":"rainy, 57°F","id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
| Event body (content not enabled) | `{"id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
@@ -645,7 +701,7 @@ Here's the telemetry generated for each step in this scenario:
| Property | Value |
|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"The weather in Paris is rainy and overcast, with temperatures around 57°F"}}` |
| Event body (content not enabled) | `{"index":0,"finish_reason":"stop","message":{}}` |
@@ -676,7 +732,7 @@ sequenceDiagram
| Attribute name | Value |
|---------------------|--------------------------------------------|
| Span name | `"chat gpt-4"` |
-| `gen_ai.system` | `"openai"` |
+| `gen_ai.provider.name` | `"openai"` |
| `gen_ai.request.model`| `"gpt-4"` |
| `gen_ai.request.max_tokens`| `200` |
| `gen_ai.request.top_p`| `1.0` |
@@ -696,14 +752,14 @@ All events are parented to the GenAI chat span above.
| Property | Value |
|------------------------------|-------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
4. `gen_ai.choice`
| Property | Value |
|------------------------------|-------------------------------------------------------|
- | `gen_ai.system` | `"openai"` |
+ | `gen_ai.provider.name` | `"openai"` |
| Event body (content enabled) | `{"index":1,"finish_reason":"stop","message":{"content":"Why did OpenTelemetry get promoted? It had great span of control!"}}` |
[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
diff --git a/docs/gen-ai/gen-ai-metrics.md b/docs/gen-ai/gen-ai-metrics.md
index d4d22b97f9..0fd5f32dac 100644
--- a/docs/gen-ai/gen-ai-metrics.md
+++ b/docs/gen-ai/gen-ai-metrics.md
@@ -18,6 +18,27 @@ linkTitle: Metrics
+> [!Warning]
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attributes; metric, span, and
+> event names; span kind; and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (v1.36.0 or prior).
+>
+> This transition plan will be updated to include the stable version before the
+> GenAI conventions are marked as stable.
+
## Generative AI client metrics
The conventions described in this section are specific to Generative AI client
@@ -60,7 +81,7 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of [1, 4, 16, 64
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`gen_ai.token.type`](/docs/registry/attributes/gen-ai.md) | string | The type of token being counted. | `input`; `output` | `Required` |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [3] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
@@ -69,17 +90,24 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of [1, 4, 16, 64
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
+
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.
@@ -101,31 +129,31 @@ If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [5] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [5] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [6] |  |
-| `gcp.vertex_ai` | Vertex AI [7] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [7] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[5]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[5]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[6]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[6]:** May be used when specific backend is unknown.
-**[7]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[7]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
---
@@ -161,7 +189,7 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of [0.01, 0.02,
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [4] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
@@ -170,17 +198,24 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of [0.01, 0.02,
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
+
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of exception that occurred, or another low-cardinality error identifier.
@@ -214,31 +249,31 @@ Instrumentations SHOULD document the list of errors they report.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [6] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [6] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [7] |  |
-| `gcp.vertex_ai` | Vertex AI [8] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [8] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[6]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[6]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[7]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[7]:** May be used when specific backend is unknown.
-**[8]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[8]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
@@ -272,7 +307,7 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [4] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
@@ -281,17 +316,24 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
+
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI service,
the canonical name of exception that occurred, or another low-cardinality error identifier.
@@ -325,31 +367,31 @@ Instrumentations SHOULD document the list of errors they report.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [6] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [6] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [7] |  |
-| `gcp.vertex_ai` | Vertex AI [8] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [8] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[6]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[6]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[7]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[7]:** May be used when specific backend is unknown.
-**[8]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[8]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
@@ -383,7 +425,7 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [3] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` |  |
@@ -391,17 +433,24 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
+
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.
@@ -423,31 +472,31 @@ If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [5] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [5] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [6] |  |
-| `gcp.vertex_ai` | Vertex AI [7] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [7] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[5]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[5]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[6]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[6]:** May be used when specific backend is unknown.
-**[7]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[7]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
@@ -480,7 +529,7 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [3] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` |  |
@@ -488,17 +537,24 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
+
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should have `gen_ai.provider.name` set to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
**[3] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.
@@ -520,31 +576,31 @@ If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [5] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [5] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [6] |  |
-| `gcp.vertex_ai` | Vertex AI [7] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [7] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[5]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[5]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[6]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[6]:** May be used when specific backend is unknown.
-**[7]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[7]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
diff --git a/docs/gen-ai/gen-ai-spans.md b/docs/gen-ai/gen-ai-spans.md
index 74988211b3..b384b81653 100644
--- a/docs/gen-ai/gen-ai-spans.md
+++ b/docs/gen-ai/gen-ai-spans.md
@@ -16,6 +16,27 @@ linkTitle: Spans
+> [!Warning]
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attributes; metric, span, and event names;
+> span kind; and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (v1.36.0 or prior).
+>
+> This transition plan will be updated to include a stable version before the
+> GenAI conventions are marked as stable.
+
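The opt-in check above boils down to parsing one comma-separated environment variable. A minimal sketch in Python follows; the variable name and the `gen_ai_latest_experimental` value come from this document, while the helper names are illustrative and not part of any OpenTelemetry SDK:

```python
import os

def use_latest_genai_conventions() -> bool:
    """Return True when 'gen_ai_latest_experimental' appears in
    OTEL_SEMCONV_STABILITY_OPT_IN (a comma-separated list)."""
    opt_in = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    values = {value.strip() for value in opt_in.split(",")}
    return "gen_ai_latest_experimental" in values

def provider_attribute_name() -> str:
    # Emit the renamed attribute only when the user opted in; otherwise
    # keep emitting the old (v1.36.0 or prior) convention by default.
    return (
        "gen_ai.provider.name"
        if use_latest_genai_conventions()
        else "gen_ai.system"
    )
```

Instrumentations would consult such a check once at startup and route all GenAI attribute, metric, and span naming through it.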
## Spans
### Inference
@@ -45,7 +66,7 @@ client or when the GenAI call happens over instrumented protocol such as HTTP.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
-| [`gen_ai.system`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | `Required` |  |
+| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.conversation.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation. [4] | `conv_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` when available |  |
| [`gen_ai.output.type`](/docs/registry/attributes/gen-ai.md) | string | Represents the content type requested by the client. [5] | `text`; `json`; `image` | `Conditionally Required` [6] |  |
@@ -69,17 +90,24 @@ client or when the GenAI call happens over instrumented protocol such as HTTP.
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
-**[2] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
+
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should set `gen_ai.provider.name` to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
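As a sketch of the discriminator rule above, the attribute set of an AWS Bedrock inference span can be modeled as a plain dictionary. The attribute names come from these conventions; the helper function and the example model identifier are illustrative:

```python
def bedrock_span_attributes(request_model: str, response_model: str) -> dict:
    """Illustrative attribute set for an AWS Bedrock inference span."""
    return {
        # Discriminator: selects the provider-specific telemetry flavor.
        "gen_ai.provider.name": "aws.bedrock",
        # These help identify the actual model behind a proxying provider.
        "gen_ai.request.model": request_model,
        "gen_ai.response.model": response_model,
    }

attrs = bedrock_span_attributes(
    "anthropic.claude-3-sonnet-20240229-v1:0",
    "anthropic.claude-3-sonnet-20240229-v1:0",
)
# Bedrock telemetry is not expected to carry openai.* attributes.
assert not any(key.startswith("openai.") for key in attrs)
```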
**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of exception that occurred, or another low-cardinality error identifier.
@@ -148,31 +176,31 @@ Additional output format details may be recorded in the future in the `gen_ai.ou
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [11] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [11] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [12] |  |
-| `gcp.vertex_ai` | Vertex AI [13] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [13] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[11]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[11]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[12]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[12]:** May be used when the specific backend is unknown.
-**[13]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[13]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
diff --git a/docs/gen-ai/openai.md b/docs/gen-ai/openai.md
index 0c0b656152..e06b16bd82 100644
--- a/docs/gen-ai/openai.md
+++ b/docs/gen-ai/openai.md
@@ -17,13 +17,36 @@ linkTitle: OpenAI
+> [!Warning]
+>
+> Existing GenAI instrumentations that are using
+> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
+> (or prior):
+>
+> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
+> Conventions include, but are not limited to, attributes; metric, span, and event names;
+> span kind; and unit of measure.
+> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
+> as a comma-separated list of category-specific values. The list of values
+> includes:
+> * `gen_ai_latest_experimental` - emit the latest experimental version of
+> GenAI conventions (supported by the instrumentation) and do not emit the
+> old one (v1.36.0 or prior).
+> * The default behavior is to continue emitting whatever version of the GenAI
+> conventions the instrumentation was emitting (v1.36.0 or prior).
+>
+> This transition plan will be updated to include a stable version before the
+> GenAI conventions are marked as stable.
+
The Semantic Conventions for [OpenAI](https://openai.com/) extend and override the [Gen AI Semantic Conventions](/docs/gen-ai/README.md).
## Spans
+`gen_ai.provider.name` MUST be set to `"openai"`.
+
### Inference
-
+
@@ -34,7 +57,7 @@ The Semantic Conventions for [OpenAI](https://openai.com/) extend and override t
Semantic Conventions for [OpenAI](https://openai.com/) client spans extend and override the semantic conventions for [Gen AI Spans](gen-ai-spans.md).
-`gen_ai.system` MUST be set to `"openai"` and SHOULD be provided **at span creation time**.
+`gen_ai.provider.name` MUST be set to `"openai"` and SHOULD be provided **at span creation time**.
**Span name** SHOULD be `{gen_ai.operation.name} {gen_ai.request.model}`.
@@ -48,13 +71,12 @@ Semantic Conventions for [OpenAI](https://openai.com/) client spans extend and o
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. [2] | `gpt-4` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.conversation.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation. [4] | `conv_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` when available |  |
-| [`gen_ai.openai.request.service_tier`](/docs/registry/attributes/gen-ai.md) | string | The service tier requested. May be a specific tier, default, or auto. | `auto`; `default` | `Conditionally Required` [5] |  |
-| [`gen_ai.openai.response.service_tier`](/docs/registry/attributes/gen-ai.md) | string | The service tier used for the response. | `scale`; `default` | `Conditionally Required` [6] |  |
-| [`gen_ai.output.type`](/docs/registry/attributes/gen-ai.md) | string | Represents the content type requested by the client. [7] | `text`; `json`; `image` | `Conditionally Required` [8] |  |
+| [`gen_ai.output.type`](/docs/registry/attributes/gen-ai.md) | string | Represents the content type requested by the client. [5] | `text`; `json`; `image` | `Conditionally Required` [6] |  |
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) | int | The target number of candidate completions to return. | `3` | `Conditionally Required` if available, in the request, and !=1 |  |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) | int | Requests with same seed value more likely to return same result. | `100` | `Conditionally Required` if applicable and if the request includes a seed |  |
+| [`openai.request.service_tier`](/docs/registry/attributes/openai.md) | string | The service tier requested. May be a specific tier, default, or auto. | `auto`; `default` | `Conditionally Required` [7] |  |
+| [`openai.response.service_tier`](/docs/registry/attributes/openai.md) | string | The service tier used for the response. | `scale`; `default` | `Conditionally Required` [8] |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [9] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
-| [`gen_ai.openai.response.system_fingerprint`](/docs/registry/attributes/gen-ai.md) | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` | `Recommended` |  |
| [`gen_ai.request.frequency_penalty`](/docs/registry/attributes/gen-ai.md) | double | The frequency penalty setting for the GenAI request. | `0.1` | `Recommended` |  |
| [`gen_ai.request.max_tokens`](/docs/registry/attributes/gen-ai.md) | int | The maximum number of tokens the model generates for a request. | `100` | `Recommended` |  |
| [`gen_ai.request.presence_penalty`](/docs/registry/attributes/gen-ai.md) | double | The presence penalty setting for the GenAI request. | `0.1` | `Recommended` |  |
@@ -66,6 +88,7 @@ Semantic Conventions for [OpenAI](https://openai.com/) client spans extend and o
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. [10] | `gpt-4-0613` | `Recommended` |  |
| [`gen_ai.usage.input_tokens`](/docs/registry/attributes/gen-ai.md) | int | The number of tokens used in the GenAI input (prompt). | `100` | `Recommended` |  |
| [`gen_ai.usage.output_tokens`](/docs/registry/attributes/gen-ai.md) | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` |  |
+| [`openai.response.system_fingerprint`](/docs/registry/attributes/openai.md) | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` | `Recommended` |  |
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [11] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
@@ -90,11 +113,7 @@ Application developers that manage conversation history MAY add conversation id
spans or logs using custom span or log record processors or hooks provided by instrumentation
libraries.
-**[5] `gen_ai.openai.request.service_tier`:** if the request includes a service_tier and the value is not 'auto'
-
-**[6] `gen_ai.openai.response.service_tier`:** if the response was received and includes a service_tier
-
-**[7] `gen_ai.output.type`:** This attribute SHOULD be set to the output type requested by the client:
+**[5] `gen_ai.output.type`:** This attribute SHOULD be set to the output type requested by the client:
- `json` for structured outputs with defined or undefined schema
- `image` for image output
- `speech` for speech output
@@ -107,7 +126,11 @@ URL pointing to an image file.
Additional output format details may be recorded in the future in the
`gen_ai.output.{type}.*` attributes.
-**[8] `gen_ai.output.type`:** when applicable and if the request includes an output format.
+**[6] `gen_ai.output.type`:** when applicable and if the request includes an output format.
+
+**[7] `openai.request.service_tier`:** if the request includes a service_tier and the value is not 'auto'
+
+**[8] `openai.response.service_tier`:** if the response was received and includes a service_tier
**[9] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.
@@ -125,15 +148,6 @@ Additional output format details may be recorded in the future in the
---
-`gen_ai.openai.request.service_tier` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
-
-| Value | Description | Stability |
-|---|---|---|
-| `auto` | The system will utilize scale tier credits until they are exhausted. |  |
-| `default` | The system will utilize the default scale tier. |  |
-
----
-
`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
@@ -157,6 +171,15 @@ Additional output format details may be recorded in the future in the
| `speech` | Speech |  |
| `text` | Plain text |  |
+---
+
+`openai.request.service_tier` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `auto` | The system will utilize scale tier credits until they are exhausted. |  |
+| `default` | The system will utilize the default scale tier. |  |
+
@@ -177,7 +200,7 @@ Reports the usage of tokens following the common [gen_ai.client.token.usage](./g
Additional attributes:
-
+
@@ -186,8 +209,8 @@ Additional attributes:
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
-| [`gen_ai.openai.response.service_tier`](/docs/registry/attributes/gen-ai.md) | string | The service tier used for the response. | `scale`; `default` | `Recommended` |  |
-| [`gen_ai.openai.response.system_fingerprint`](/docs/registry/attributes/gen-ai.md) | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` | `Recommended` |  |
+| [`openai.response.service_tier`](/docs/registry/attributes/openai.md) | string | The service tier used for the response. | `scale`; `default` | `Recommended` |  |
+| [`openai.response.system_fingerprint`](/docs/registry/attributes/openai.md) | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` | `Recommended` |  |
@@ -200,7 +223,7 @@ Measures the to complete an operation following the common [gen_ai.client.operat
Additional attributes:
-
+
@@ -209,8 +232,8 @@ Additional attributes:
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
-| [`gen_ai.openai.response.service_tier`](/docs/registry/attributes/gen-ai.md) | string | The service tier used for the response. | `scale`; `default` | `Recommended` |  |
-| [`gen_ai.openai.response.system_fingerprint`](/docs/registry/attributes/gen-ai.md) | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` | `Recommended` |  |
+| [`openai.response.service_tier`](/docs/registry/attributes/openai.md) | string | The service tier used for the response. | `scale`; `default` | `Recommended` |  |
+| [`openai.response.system_fingerprint`](/docs/registry/attributes/openai.md) | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` | `Recommended` |  |
diff --git a/docs/registry/attributes/README.md b/docs/registry/attributes/README.md
index 653fbb40ca..12c8d61416 100644
--- a/docs/registry/attributes/README.md
+++ b/docs/registry/attributes/README.md
@@ -82,6 +82,7 @@ Currently, the following namespaces exist:
- [Network](network.md)
- [NodeJS](nodejs.md)
- [OCI](oci.md)
+- [OpenAI](openai.md)
- [OpenTracing](opentracing.md)
- [OS](os.md)
- [OTel](otel.md)
diff --git a/docs/registry/attributes/gen-ai.md b/docs/registry/attributes/gen-ai.md
index eae9d1acb2..4425237c90 100644
--- a/docs/registry/attributes/gen-ai.md
+++ b/docs/registry/attributes/gen-ai.md
@@ -4,7 +4,6 @@
# Gen AI
- [GenAI Attributes](#genai-attributes)
-- [OpenAI Attributes](#openai-attributes)
- [Deprecated GenAI Attributes](#deprecated-genai-attributes)
- [Deprecated OpenAI GenAI Attributes](#deprecated-openai-genai-attributes)
@@ -21,8 +20,9 @@ This document defines the attributes used to describe telemetry in the context o
| `gen_ai.data_source.id` | string | The data source identifier. [1] | `H7STPQYOND` |  |
| `gen_ai.operation.name` | string | The name of the operation being performed. [2] | `chat`; `generate_content`; `text_completion` |  |
| `gen_ai.output.type` | string | Represents the content type requested by the client. [3] | `text`; `json`; `image` |  |
+| `gen_ai.provider.name` | string | The Generative AI provider as identified by the client or server instrumentation. [4] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` |  |
| `gen_ai.request.choice.count` | int | The target number of candidate completions to return. | `3` |  |
-| `gen_ai.request.encoding_formats` | string[] | The encoding formats requested in an embeddings operation, if specified. [4] | `["base64"]`; `["float", "binary"]` |  |
+| `gen_ai.request.encoding_formats` | string[] | The encoding formats requested in an embeddings operation, if specified. [5] | `["base64"]`; `["float", "binary"]` |  |
| `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the GenAI request. | `0.1` |  |
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` |  |
| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. | `gpt-4` |  |
@@ -35,7 +35,6 @@ This document defines the attributes used to describe telemetry in the context o
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]`; `["stop", "length"]` |  |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` |  |
| `gen_ai.response.model` | string | The name of the model that generated the response. | `gpt-4-0613` |  |
-| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [5] | `openai` |  |
| `gen_ai.token.type` | string | The type of token being counted. | `input`; `output` |  |
| `gen_ai.tool.call.id` | string | The tool call identifier. | `call_mszuSIzqtI65i1wAUOE8w5H4` |  |
| `gen_ai.tool.description` | string | The tool description. | `Multiply two numbers` |  |
@@ -52,19 +51,26 @@ This document defines the attributes used to describe telemetry in the context o
This attribute specifies the output modality and not the actual output format. For example, if an image is requested, the actual output could be a URL pointing to an image file.
Additional output format details may be recorded in the future in the `gen_ai.output.{type}.*` attributes.
-**[4] `gen_ai.request.encoding_formats`:** In some GenAI systems the encoding formats are called embedding types. Also, some GenAI systems only accept a single format per request.
+**[4] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
+knowledge and may differ from the actual model provider.
-**[5] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
-by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+are accessible using the OpenAI REST API and corresponding client libraries,
+but may proxy or host models from different providers.
-The actual GenAI product may differ from the one identified by the client.
-Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
-libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
-instrumentation's best knowledge, instead of the actual system. The `server.address`
-attribute may help identify the actual system in use for `openai`.
+The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+attributes may help identify the actual system in use.
-For custom model, a custom friendly name SHOULD be used.
-If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+The `gen_ai.provider.name` attribute acts as a discriminator that
+identifies the GenAI telemetry format flavor specific to that provider
+within GenAI semantic conventions.
+It SHOULD be set consistently with provider-specific attributes and signals.
+For example, GenAI spans, metrics, and events related to AWS Bedrock
+should set `gen_ai.provider.name` to `aws.bedrock` and include
+applicable `aws.bedrock.*` attributes; they are not expected to include
+`openai.*` attributes.
+
+**[5] `gen_ai.request.encoding_formats`:** In some GenAI systems the encoding formats are called embedding types. Also, some GenAI systems only accept a single format per request.
**[6] `gen_ai.tool.type`:** Extension: A tool executed on the agent-side to directly call external APIs, bridging the gap between the agent and real-world systems.
Agent-side operations involve actions that are performed by the agent on the server or within the agent's controlled environment.
@@ -99,31 +105,31 @@ Datastore: A tool used by the agent to access and query structured or unstructur
---
-`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `anthropic` | Anthropic |  |
-| `aws.bedrock` | AWS Bedrock |  |
+| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
+| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
-| `azure.ai.openai` | Azure OpenAI |  |
-| `cohere` | Cohere |  |
-| `deepseek` | DeepSeek |  |
-| `gcp.gemini` | Gemini [7] |  |
+| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
+| `cohere` | [Cohere](https://cohere.com/) |  |
+| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
+| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [7] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [8] |  |
-| `gcp.vertex_ai` | Vertex AI [9] |  |
-| `groq` | Groq |  |
-| `ibm.watsonx.ai` | IBM Watsonx AI |  |
-| `mistral_ai` | Mistral AI |  |
-| `openai` | OpenAI |  |
-| `perplexity` | Perplexity |  |
-| `xai` | xAI |  |
+| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [9] |  |
+| `groq` | [Groq](https://groq.com/) |  |
+| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
+| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
+| `openai` | [OpenAI](https://openai.com/) |  |
+| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
+| `x_ai` | [xAI](https://x.ai/) |  |
-**[7]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[7]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.
-**[8]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[8]:** May be used when the specific backend is unknown.
-**[9]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
+**[9]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.
---
@@ -134,35 +140,46 @@ Datastore: A tool used by the agent to access and query structured or unstructur
| `input` | Input tokens (prompt, input, etc.) |  |
| `output` | Output tokens (completion, response, etc.) |  |
-## OpenAI Attributes
+## Deprecated GenAI Attributes
-This group defines attributes for OpenAI.
+Describes deprecated `gen_ai` attributes.
| Attribute | Type | Description | Examples | Stability |
|---|---|---|---|---|
-| `gen_ai.openai.request.service_tier` | string | The service tier requested. May be a specific tier, default, or auto. | `auto`; `default` |  |
-| `gen_ai.openai.response.service_tier` | string | The service tier used for the response. | `scale`; `default` |  |
-| `gen_ai.openai.response.system_fingerprint` | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` |  |
+| `gen_ai.completion` | string | Deprecated, use Event API to report completions contents. | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | Removed, no replacement at this time. |
+| `gen_ai.prompt` | string | Deprecated, use Event API to report prompt contents. | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | Removed, no replacement at this time. |
+| `gen_ai.system` | string | Deprecated, use `gen_ai.provider.name` instead. | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | Replaced by `gen_ai.provider.name`. |
+| `gen_ai.usage.completion_tokens` | int | Deprecated, use `gen_ai.usage.output_tokens` instead. | `42` | Replaced by `gen_ai.usage.output_tokens`. |
+| `gen_ai.usage.prompt_tokens` | int | Deprecated, use `gen_ai.usage.input_tokens` instead. | `42` | Replaced by `gen_ai.usage.input_tokens`. |
---
-`gen_ai.openai.request.service_tier` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
|---|---|---|
-| `auto` | The system will utilize scale tier credits until they are exhausted. |  |
-| `default` | The system will utilize the default scale tier. |  |
+| `anthropic` | Anthropic |  |
+| `aws.bedrock` | AWS Bedrock |  |
+| `az.ai.inference` | Azure AI Inference |  |
+| `az.ai.openai` | Azure OpenAI |  |
+| `azure.ai.inference` | Azure AI Inference |  |
+| `azure.ai.openai` | Azure OpenAI |  |
+| `cohere` | Cohere |  |
+| `deepseek` | DeepSeek |  |
+| `gcp.gemini` | Gemini [10] |  |
+| `gcp.gen_ai` | Any Google generative AI endpoint [11] |  |
+| `gcp.vertex_ai` | Vertex AI [12] |  |
+| `groq` | Groq |  |
+| `ibm.watsonx.ai` | IBM Watsonx AI |  |
+| `mistral_ai` | Mistral AI |  |
+| `openai` | OpenAI |  |
+| `perplexity` | Perplexity |  |
-## Deprecated GenAI Attributes
+**[10]:** This refers to the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API. May use common attributes prefixed with 'gcp.gen_ai.'.
-Describes deprecated `gen_ai` attributes.
+**[11]:** May be used when specific backend is unknown. May use common attributes prefixed with 'gcp.gen_ai.'.
-| Attribute | Type | Description | Examples | Stability |
-|---|---|---|---|---|
-| `gen_ai.completion` | string | Deprecated, use Event API to report completions contents. | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | Removed, no replacement at this time. |
-| `gen_ai.prompt` | string | Deprecated, use Event API to report prompt contents. | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | Removed, no replacement at this time. |
-| `gen_ai.usage.completion_tokens` | int | Deprecated, use `gen_ai.usage.output_tokens` instead. | `42` | Replaced by `gen_ai.usage.output_tokens`. |
-| `gen_ai.usage.prompt_tokens` | int | Deprecated, use `gen_ai.usage.input_tokens` instead. | `42` | Replaced by `gen_ai.usage.input_tokens`. |
+**[12]:** This refers to the 'aiplatform.googleapis.com' endpoint. May use common attributes prefixed with 'gcp.gen_ai.'.
## Deprecated OpenAI GenAI Attributes
@@ -172,6 +189,9 @@ Describes deprecated `gen_ai.openai` attributes.
|---|---|---|---|---|
| `gen_ai.openai.request.response_format` | string | Deprecated, use `gen_ai.output.type`. | `text`; `json_object`; `json_schema` | Replaced by `gen_ai.output.type`. |
| `gen_ai.openai.request.seed` | int | Deprecated, use `gen_ai.request.seed`. | `100` | Replaced by `gen_ai.request.seed`. |
+| `gen_ai.openai.request.service_tier` | string | Deprecated, use `openai.request.service_tier`. | `auto`; `default` | Replaced by `openai.request.service_tier`. |
+| `gen_ai.openai.response.service_tier` | string | Deprecated, use `openai.response.service_tier`. | `scale`; `default` | Replaced by `openai.response.service_tier`. |
+| `gen_ai.openai.response.system_fingerprint` | string | Deprecated, use `openai.response.system_fingerprint`. | `fp_44709d6fcb` | Replaced by `openai.response.system_fingerprint`. |
---
@@ -182,3 +202,12 @@ Describes deprecated `gen_ai.openai` attributes.
| `json_object` | JSON object response format |  |
| `json_schema` | JSON schema response format |  |
| `text` | Text response format |  |
+
+---
+
+`gen_ai.openai.request.service_tier` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `auto` | The system will utilize scale tier credits until they are exhausted. |  |
+| `default` | The system will utilize the default scale tier. |  |
diff --git a/docs/registry/attributes/openai.md b/docs/registry/attributes/openai.md
new file mode 100644
index 0000000000..07f53b86d7
--- /dev/null
+++ b/docs/registry/attributes/openai.md
@@ -0,0 +1,23 @@
+
+
+
+# OpenAI
+
+## OpenAI Attributes
+
+This group defines attributes for OpenAI.
+
+| Attribute | Type | Description | Examples | Stability |
+|---|---|---|---|---|
+| `openai.request.service_tier` | string | The service tier requested. May be a specific tier, default, or auto. | `auto`; `default` |  |
+| `openai.response.service_tier` | string | The service tier used for the response. | `scale`; `default` |  |
+| `openai.response.system_fingerprint` | string | A fingerprint to track any eventual change in the Generative AI environment. | `fp_44709d6fcb` |  |
+
+---
+
+`openai.request.service_tier` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `auto` | The system will utilize scale tier credits until they are exhausted. |  |
+| `default` | The system will utilize the default scale tier. |  |
diff --git a/model/gen-ai/deprecated/registry-deprecated.yaml b/model/gen-ai/deprecated/registry-deprecated.yaml
index 27efee6463..876412a7ff 100644
--- a/model/gen-ai/deprecated/registry-deprecated.yaml
+++ b/model/gen-ai/deprecated/registry-deprecated.yaml
@@ -36,6 +36,103 @@ groups:
note: Removed, no replacement at this time.
brief: "Deprecated, use Event API to report completions contents."
examples: ["[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]"]
+ - id: gen_ai.system
+ stability: development
+ type:
+ members:
+ - id: openai
+ stability: development
+ value: "openai"
+ brief: 'OpenAI'
+ - id: gcp.gen_ai
+ stability: development
+ value: "gcp.gen_ai"
+ brief: "Any Google generative AI endpoint"
+ note: >
+ May be used when specific backend is unknown.
+ May use common attributes prefixed with 'gcp.gen_ai.'.
+ - id: gcp.vertex_ai
+ stability: development
+ value: "gcp.vertex_ai"
+ brief: 'Vertex AI'
+ note: >
+ This refers to the 'aiplatform.googleapis.com' endpoint.
+ May use common attributes prefixed with 'gcp.gen_ai.'.
+ - id: gcp.gemini
+ stability: development
+ value: "gcp.gemini"
+ brief: 'Gemini'
+ note: >
+ This refers to the 'generativelanguage.googleapis.com' endpoint.
+ Also known as the AI Studio API.
+ May use common attributes prefixed with 'gcp.gen_ai.'.
+ - id: vertex_ai
+ stability: development
+ value: "vertex_ai"
+ brief: 'Vertex AI'
+ deprecated: "Use 'gcp.vertex_ai' instead."
+ - id: gemini
+ stability: development
+ value: "gemini"
+ brief: 'Gemini'
+ deprecated: "Use 'gcp.gemini' instead."
+ - id: anthropic
+ stability: development
+ value: "anthropic"
+ brief: 'Anthropic'
+ - id: cohere
+ stability: development
+ value: "cohere"
+ brief: 'Cohere'
+ - id: az.ai.inference
+ stability: development
+ value: "az.ai.inference"
+ brief: 'Azure AI Inference'
+ - id: az.ai.openai
+ stability: development
+ value: "az.ai.openai"
+ brief: 'Azure OpenAI'
+ - id: azure.ai.inference
+ stability: development
+ value: "azure.ai.inference"
+ brief: 'Azure AI Inference'
+ - id: azure.ai.openai
+ stability: development
+ value: "azure.ai.openai"
+ brief: 'Azure OpenAI'
+ - id: ibm.watsonx.ai
+ stability: development
+ value: "ibm.watsonx.ai"
+ brief: 'IBM Watsonx AI'
+ - id: aws.bedrock
+ stability: development
+ value: "aws.bedrock"
+ brief: 'AWS Bedrock'
+ - id: perplexity
+ stability: development
+ value: "perplexity"
+ brief: 'Perplexity'
+ - id: xai
+ stability: development
+ value: "xai"
+ brief: 'xAI'
+ deprecated: "Use 'x_ai' instead."
+ - id: deepseek
+ stability: development
+ value: "deepseek"
+ brief: 'DeepSeek'
+ - id: groq
+ stability: development
+ value: "groq"
+ brief: 'Groq'
+ - id: mistral_ai
+ stability: development
+ value: "mistral_ai"
+ brief: 'Mistral AI'
+ brief: "Deprecated, use `gen_ai.provider.name` instead."
+ deprecated:
+ reason: renamed
+ renamed_to: gen_ai.provider.name
- id: registry.gen_ai.openai.deprecated
type: attribute_group
brief: Describes deprecated `gen_ai.openai` attributes.
@@ -70,3 +167,35 @@ groups:
deprecated:
reason: renamed
renamed_to: gen_ai.output.type
+ - id: gen_ai.openai.request.service_tier
+ stability: development
+ type:
+ members:
+ - id: auto
+ value: "auto"
+ brief: The system will utilize scale tier credits until they are exhausted.
+ stability: development
+ - id: default
+ value: "default"
+ brief: The system will utilize the default scale tier.
+ stability: development
+ brief: "Deprecated, use `openai.request.service_tier`."
+ deprecated:
+ reason: renamed
+ renamed_to: openai.request.service_tier
+ - id: gen_ai.openai.response.service_tier
+ stability: development
+ type: string
+ brief: "Deprecated, use `openai.response.service_tier`."
+ examples: ['scale', 'default']
+ deprecated:
+ reason: renamed
+ renamed_to: openai.response.service_tier
+ - id: gen_ai.openai.response.system_fingerprint
+ stability: development
+ type: string
+ brief: "Deprecated, use `openai.response.system_fingerprint`."
+ examples: ["fp_44709d6fcb"]
+ deprecated:
+ reason: renamed
+ renamed_to: openai.response.system_fingerprint
diff --git a/model/gen-ai/events.yaml b/model/gen-ai/events.yaml
index 4c1811a86e..0f72e19b5e 100644
--- a/model/gen-ai/events.yaml
+++ b/model/gen-ai/events.yaml
@@ -5,7 +5,7 @@ groups:
brief: >
Describes common Gen AI event attributes.
attributes:
- - ref: gen_ai.system
+ - ref: gen_ai.provider.name
- id: event.gen_ai.system.message
name: gen_ai.system.message
diff --git a/model/gen-ai/metrics.yaml b/model/gen-ai/metrics.yaml
index 180924e5b4..a54319e252 100644
--- a/model/gen-ai/metrics.yaml
+++ b/model/gen-ai/metrics.yaml
@@ -15,7 +15,7 @@ groups:
- ref: gen_ai.request.model
requirement_level:
conditionally_required: If available.
- - ref: gen_ai.system
+ - ref: gen_ai.provider.name
requirement_level: required
- ref: gen_ai.operation.name
requirement_level: required
@@ -31,13 +31,13 @@ groups:
The `error.type` SHOULD match the error code returned by the Generative AI service,
the canonical name of exception that occurred, or another low-cardinality error identifier.
Instrumentations SHOULD document the list of errors they report.
- - id: metric_attributes.gen_ai.openai
+ - id: metric_attributes.openai
type: attribute_group
brief: 'This group describes GenAI server metrics attributes'
attributes:
- - ref: gen_ai.openai.response.service_tier
+ - ref: openai.response.service_tier
requirement_level: recommended
- - ref: gen_ai.openai.response.system_fingerprint
+ - ref: openai.response.system_fingerprint
requirement_level: recommended
- id: metric.gen_ai.client.token.usage
type: metric
diff --git a/model/gen-ai/registry.yaml b/model/gen-ai/registry.yaml
index d8ac5f414e..433d5736bd 100644
--- a/model/gen-ai/registry.yaml
+++ b/model/gen-ai/registry.yaml
@@ -5,54 +5,41 @@ groups:
brief: >
This document defines the attributes used to describe telemetry in the context of Generative Artificial Intelligence (GenAI) Models requests and responses.
attributes:
- - id: gen_ai.system
+ - id: gen_ai.provider.name
stability: development
type:
members:
- id: openai
stability: development
value: "openai"
- brief: 'OpenAI'
+ brief: '[OpenAI](https://openai.com/)'
- id: gcp.gen_ai
stability: development
value: "gcp.gen_ai"
brief: "Any Google generative AI endpoint"
note: >
May be used when specific backend is unknown.
- May use common attributes prefixed with 'gcp.gen_ai.'.
- id: gcp.vertex_ai
stability: development
value: "gcp.vertex_ai"
- brief: 'Vertex AI'
+ brief: "[Vertex AI](https://cloud.google.com/vertex-ai)"
note: >
- This refers to the 'aiplatform.googleapis.com' endpoint.
- May use common attributes prefixed with 'gcp.gen_ai.'.
+ Used when accessing the 'aiplatform.googleapis.com' endpoint.
- id: gcp.gemini
stability: development
value: "gcp.gemini"
- brief: 'Gemini'
+ brief: '[Gemini](https://cloud.google.com/products/gemini)'
note: >
- This refers to the 'generativelanguage.googleapis.com' endpoint.
+ Used when accessing the 'generativelanguage.googleapis.com' endpoint.
Also known as the AI Studio API.
- May use common attributes prefixed with 'gcp.gen_ai.'.
- - id: vertex_ai
- stability: development
- value: "vertex_ai"
- brief: 'Vertex AI'
- deprecated: "Use 'gcp.vertex_ai' instead."
- - id: gemini
- stability: development
- value: "gemini"
- brief: 'Gemini'
- deprecated: "Use 'gcp.gemini' instead."
- id: anthropic
stability: development
value: "anthropic"
- brief: 'Anthropic'
+ brief: '[Anthropic](https://www.anthropic.com/)'
- id: cohere
stability: development
value: "cohere"
- brief: 'Cohere'
+ brief: '[Cohere](https://cohere.com/)'
- id: azure.ai.inference
stability: development
value: "azure.ai.inference"
@@ -60,60 +47,57 @@ groups:
- id: azure.ai.openai
stability: development
value: "azure.ai.openai"
- brief: 'Azure OpenAI'
- - id: az.ai.inference
- stability: development
- value: "az.ai.inference"
- deprecated: "Replaced by azure.ai.inference"
- brief: 'Azure AI Inference'
- - id: az.ai.openai
- stability: development
- value: "azure.ai.openai"
- brief: 'Azure OpenAI'
- deprecated: "Replaced by azure.ai.openai"
+ brief: '[Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/)'
- id: ibm.watsonx.ai
stability: development
value: "ibm.watsonx.ai"
- brief: 'IBM Watsonx AI'
+ brief: '[IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai)'
- id: aws.bedrock
stability: development
value: "aws.bedrock"
- brief: 'AWS Bedrock'
+ brief: '[AWS Bedrock](https://aws.amazon.com/bedrock)'
- id: perplexity
stability: development
value: "perplexity"
- brief: 'Perplexity'
- - id: xai
+ brief: '[Perplexity](https://www.perplexity.ai/)'
+ - id: x_ai
stability: development
- value: "xai"
- brief: 'xAI'
+ value: "x_ai"
+ brief: '[xAI](https://x.ai/)'
- id: deepseek
stability: development
value: "deepseek"
- brief: 'DeepSeek'
+ brief: '[DeepSeek](https://www.deepseek.com/)'
- id: groq
stability: development
value: "groq"
- brief: 'Groq'
+ brief: '[Groq](https://groq.com/)'
- id: mistral_ai
stability: development
value: "mistral_ai"
- brief: 'Mistral AI'
+ brief: '[Mistral AI](https://mistral.ai/)'
- brief: The Generative AI product as identified by the client or server instrumentation.
+ brief: The Generative AI provider as identified by the client
+ or server instrumentation.
note: |
- The `gen_ai.system` describes a family of GenAI models with specific model identified
- by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+ The attribute SHOULD be set based on the instrumentation's best
+ knowledge and may differ from the actual model provider.
+
+ Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
+ are accessible using the OpenAI REST API and corresponding client libraries,
+ but may proxy or host models from different providers.
- The actual GenAI product may differ from the one identified by the client.
- Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client
- libraries. In such cases, the `gen_ai.system` is set to `openai` based on the
- instrumentation's best knowledge, instead of the actual system. The `server.address`
- attribute may help identify the actual system in use for `openai`.
+ The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
+ attributes may help identify the actual system in use.
- For custom model, a custom friendly name SHOULD be used.
- If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
- examples: 'openai'
+ The `gen_ai.provider.name` attribute acts as a discriminator that
+ identifies the GenAI telemetry format flavor specific to that provider
+ within GenAI semantic conventions.
+ It SHOULD be set consistently with provider-specific attributes and signals.
+ For example, GenAI spans, metrics, and events related to AWS Bedrock
+ should have `gen_ai.provider.name` set to `aws.bedrock` and include
+ applicable `aws.bedrock.*` attributes; they are not expected to include
+ `openai.*` attributes.
- id: gen_ai.request.model
stability: development
type: string
@@ -226,7 +210,7 @@ groups:
type: string
brief: The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation.
examples: ["conv_5j66UpCpwteGg4YSxUnt7lPY"]
- - id: gen_ai.agent.id # alternatives: assistant (openai)
+ - id: gen_ai.agent.id
stability: development
type: string
brief: The unique identifier of the GenAI agent.
@@ -348,33 +332,3 @@ groups:
Additional output format details may be recorded in the future in the
`gen_ai.output.{type}.*` attributes.
- - id: registry.gen_ai.openai
- type: attribute_group
- display_name: OpenAI Attributes
- brief: >
- This group defines attributes for OpenAI.
- attributes:
- - id: gen_ai.openai.request.service_tier
- stability: development
- type:
- members:
- - id: auto
- value: "auto"
- brief: The system will utilize scale tier credits until they are exhausted.
- stability: development
- - id: default
- value: "default"
- brief: The system will utilize the default scale tier.
- stability: development
- brief: The service tier requested. May be a specific tier, default, or auto.
- examples: ['auto', 'default']
- - id: gen_ai.openai.response.service_tier
- stability: development
- type: string
- brief: The service tier used for the response.
- examples: ['scale', 'default']
- - id: gen_ai.openai.response.system_fingerprint
- stability: development
- type: string
- brief: A fingerprint to track any eventual change in the Generative AI environment.
- examples: ["fp_44709d6fcb"]
diff --git a/model/gen-ai/spans.yaml b/model/gen-ai/spans.yaml
index a07d454ef3..3cfdd407db 100644
--- a/model/gen-ai/spans.yaml
+++ b/model/gen-ai/spans.yaml
@@ -107,7 +107,7 @@ groups:
client or when the GenAI call happens over instrumented protocol such as HTTP.
extends: attributes.gen_ai.inference.client
attributes:
- - ref: gen_ai.system
+ - ref: gen_ai.provider.name
# TODO: Not adding to common attributes because of https://github.com/open-telemetry/weaver/issues/479
requirement_level: required
- ref: gen_ai.request.top_k
@@ -135,7 +135,7 @@ groups:
Additional output format details may be recorded in the future in the
`gen_ai.output.{type}.*` attributes.
- - id: span.gen_ai.openai.inference.client
+ - id: span.openai.inference.client
extends: attributes.gen_ai.inference.openai_based
stability: development
span_kind: client
@@ -144,22 +144,22 @@ groups:
Semantic Conventions for [OpenAI](https://openai.com/) client spans extend
and override the semantic conventions for [Gen AI Spans](gen-ai-spans.md).
note: |
- `gen_ai.system` MUST be set to `"openai"` and SHOULD be provided **at span creation time**.
+ `gen_ai.provider.name` MUST be set to `"openai"` and SHOULD be provided **at span creation time**.
**Span name** SHOULD be `{gen_ai.operation.name} {gen_ai.request.model}`.
attributes:
- ref: gen_ai.request.model
requirement_level: required
- - ref: gen_ai.openai.request.service_tier
+ - ref: openai.request.service_tier
requirement_level:
conditionally_required: if the request includes a service_tier and the value is not 'auto'
- - ref: gen_ai.openai.response.service_tier
+ - ref: openai.response.service_tier
requirement_level:
conditionally_required: if the response was received and includes a service_tier
- - ref: gen_ai.openai.response.system_fingerprint
+ - ref: openai.response.system_fingerprint
requirement_level: recommended
- - id: span.gen_ai.azure.ai.inference.client
+ - id: span.azure.ai.inference.client
extends: attributes.gen_ai.inference.openai_based
stability: development
type: span
@@ -168,7 +168,7 @@ groups:
Semantic Conventions for [Azure AI Inference](https://learn.microsoft.com/azure/ai-studio/reference/reference-model-inference-api)
client spans extend and override the semantic conventions for [Gen AI Spans](gen-ai-spans.md).
note: |
- `gen_ai.system` MUST be set to `"az.ai.inference"` and SHOULD be provided **at span creation time**.
+ `gen_ai.provider.name` MUST be set to `"azure.ai.inference"` and SHOULD be provided **at span creation time**.
**Span name** SHOULD be `{gen_ai.operation.name} {gen_ai.request.model}` when the
model name is available and `{gen_ai.operation.name}` otherwise.
@@ -220,7 +220,7 @@ groups:
Semantic conventions for individual GenAI systems and frameworks MAY specify different span name format.
extends: attributes.gen_ai.common.client
attributes:
- - ref: gen_ai.system
+ - ref: gen_ai.provider.name
requirement_level: required
- ref: gen_ai.agent.id
requirement_level:
@@ -246,7 +246,7 @@ groups:
Semantic conventions for individual GenAI systems and frameworks MAY specify different span name format.
extends: attributes.gen_ai.inference.client
attributes:
- - ref: gen_ai.system
+ - ref: gen_ai.provider.name
requirement_level: required
- ref: gen_ai.agent.id
requirement_level:
diff --git a/model/openai/registry.yaml b/model/openai/registry.yaml
new file mode 100644
index 0000000000..c36ad2b4c8
--- /dev/null
+++ b/model/openai/registry.yaml
@@ -0,0 +1,31 @@
+groups:
+ - id: registry.openai
+ type: attribute_group
+ display_name: OpenAI Attributes
+ brief: >
+ This group defines attributes for OpenAI.
+ attributes:
+ - id: openai.request.service_tier
+ stability: development
+ type:
+ members:
+ - id: auto
+ value: "auto"
+ brief: The system will utilize scale tier credits until they are exhausted.
+ stability: development
+ - id: default
+ value: "default"
+ brief: The system will utilize the default scale tier.
+ stability: development
+ brief: The service tier requested. May be a specific tier, default, or auto.
+ examples: ['auto', 'default']
+ - id: openai.response.service_tier
+ stability: development
+ type: string
+ brief: The service tier used for the response.
+ examples: ['scale', 'default']
+ - id: openai.response.system_fingerprint
+ stability: development
+ type: string
+ brief: A fingerprint to track any eventual change in the Generative AI environment.
+ examples: ["fp_44709d6fcb"]
diff --git a/schema-next.yaml b/schema-next.yaml
index 67ab49986f..e738f8a8af 100644
--- a/schema-next.yaml
+++ b/schema-next.yaml
@@ -2,6 +2,15 @@ file_format: 1.1.0
schema_url: https://opentelemetry.io/schemas/next
versions:
next:
+ all:
+ changes:
+ # https://github.com/open-telemetry/semantic-conventions/pull/2046
+ - rename_attributes:
+ attribute_map:
+ gen_ai.system: gen_ai.provider.name
+ gen_ai.openai.request.service_tier: openai.request.service_tier
+ gen_ai.openai.response.service_tier: openai.response.service_tier
+ gen_ai.openai.response.system_fingerprint: openai.response.system_fingerprint
1.36.0:
1.35.0:
all:
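The `rename_attributes` entry added to `schema-next.yaml` above can be exercised with a small sketch. Assuming telemetry attributes arrive as a flat dict of key/value pairs, a consumer upgrading data across this schema version could apply the mapping like this (the `upgrade_attributes` helper is illustrative only, not part of any real schema-translation tool; the mapping itself is copied from the schema change):

```python
# Illustrative sketch: applies the rename_attributes mapping declared in
# schema-next.yaml to a flat dict of recorded attribute keys.
# The RENAMED mapping is copied verbatim from the schema change above;
# upgrade_attributes is a hypothetical helper, not a real API.

RENAMED = {
    "gen_ai.system": "gen_ai.provider.name",
    "gen_ai.openai.request.service_tier": "openai.request.service_tier",
    "gen_ai.openai.response.service_tier": "openai.response.service_tier",
    "gen_ai.openai.response.system_fingerprint": "openai.response.system_fingerprint",
}

def upgrade_attributes(attributes: dict) -> dict:
    """Return a copy with deprecated keys mapped to their replacements."""
    return {RENAMED.get(key, key): value for key, value in attributes.items()}

old = {
    "gen_ai.system": "openai",
    "gen_ai.openai.response.system_fingerprint": "fp_44709d6fcb",
    "gen_ai.request.model": "gpt-4",
}
new = upgrade_attributes(old)
# deprecated keys are renamed; keys outside the mapping pass through unchanged
```
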
diff --git a/templates/registry/markdown/weaver.yaml b/templates/registry/markdown/weaver.yaml
index ec0c76351e..99c570fde1 100644
--- a/templates/registry/markdown/weaver.yaml
+++ b/templates/registry/markdown/weaver.yaml
@@ -42,6 +42,7 @@ acronyms:
- JVM
- NodeJS
- OCI
+ - OpenAI
- OpenTracing
- OracleDB
- OS
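Taken together, the renames in this change alter what an OpenAI inference span's attributes look like. A minimal sketch under the new names, using a plain dict with the example values from the attribute tables above (not real telemetry):

```python
# Sketch of the attribute set for an OpenAI chat span under the renamed
# conventions in this PR. Values are the documented example values;
# the old names are noted in comments for comparison.
span_attributes = {
    "gen_ai.provider.name": "openai",           # was: gen_ai.system
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": "gpt-4",
    "openai.request.service_tier": "auto",      # was: gen_ai.openai.request.service_tier
    "openai.response.service_tier": "default",  # was: gen_ai.openai.response.service_tier
    "openai.response.system_fingerprint": "fp_44709d6fcb",  # was: gen_ai.openai.response.system_fingerprint
}

# After migration, no attribute should remain under the deprecated names.
assert "gen_ai.system" not in span_attributes
assert not any(k.startswith("gen_ai.openai.") for k in span_attributes)
```
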