diff --git a/.chloggen/reasoning_tokens.yaml b/.chloggen/reasoning_tokens.yaml
new file mode 100644
index 0000000000..188a614c7a
--- /dev/null
+++ b/.chloggen/reasoning_tokens.yaml
@@ -0,0 +1,23 @@
+# Use this changelog template to create an entry for release notes.
+#
+# If your change doesn't affect end users you should instead start
+# your pull request title with [chore] or use the "Skip Changelog" label.
+
+# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
+change_type: "enhancement"
+
+# The name of the area of concern in the attributes-registry (e.g. http, cloud, db)
+component: gen-ai
+
+# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
+note: "Added the `gen_ai.usage.reasoning_tokens` attribute to the GenAI semantic conventions."
+
+# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
+# The values here must be integers.
+issues: [3194]
+
+# (Optional) One or more lines of additional information to render under the primary note.
+# These lines will be padded with 2 spaces and then inserted directly into the document.
+# Use pipe (|) for multiline entries.
+subtext: |
+ The `gen_ai.usage.reasoning_tokens` attribute was added to capture the number of tokens used in the reasoning or thinking process of GenAI models.
diff --git a/docs/gen-ai/aws-bedrock.md b/docs/gen-ai/aws-bedrock.md
index 170c3fbb34..82e93563db 100644
--- a/docs/gen-ai/aws-bedrock.md
+++ b/docs/gen-ai/aws-bedrock.md
@@ -63,6 +63,7 @@ Describes an AWS Bedrock operation span.
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if available, in the request, and !=1 | int | The target number of candidate completions to return. | `3` |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` If available. | string | The name of the GenAI model a request is being made to. [7] | `gpt-4` |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if applicable and if the request includes a seed | int | Requests with same seed value more likely to return same result. | `100` |
+| [`gen_ai.usage.reasoning_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` when available | int | The number of tokens used in the GenAI reasoning or thinking. Only set this if the GenAI model provides a separate count for reasoning tokens and they are not also included in input or output tokens. | `180` |
| [`server.port`](/docs/registry/attributes/server.md) |  | `Conditionally Required` If `server.address` is set. | int | GenAI server port. [8] | `80`; `8080`; `443` |
| [`aws.bedrock.knowledge_base.id`](/docs/registry/attributes/aws.md) |  | `Recommended` | string | The unique identifier of the AWS Bedrock Knowledge base. A [knowledge base](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html) is a bank of information that can be queried by models to generate more relevant responses and augment prompts. | `XFWUPB9PAW` |
| [`gen_ai.request.frequency_penalty`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | double | The frequency penalty setting for the GenAI request. | `0.1` |
diff --git a/docs/gen-ai/azure-ai-inference.md b/docs/gen-ai/azure-ai-inference.md
index 8660c00d2a..1a50ff7cb3 100644
--- a/docs/gen-ai/azure-ai-inference.md
+++ b/docs/gen-ai/azure-ai-inference.md
@@ -73,6 +73,7 @@ model name is available and `{gen_ai.operation.name}` otherwise.
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if available, in the request, and !=1 | int | The target number of candidate completions to return. | `3` |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` If available. | string | The name of the GenAI model a request is being made to. [6] | `gpt-4` |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if applicable and if the request includes a seed | int | Requests with same seed value more likely to return same result. | `100` |
+| [`gen_ai.usage.reasoning_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` when available | int | The number of reasoning tokens as reported in the usage reasoning_tokens property of the response. | `180` |
| [`server.port`](/docs/registry/attributes/server.md) |  | `Conditionally Required` If not default (443). | int | GenAI server port. [7] | `80`; `8080`; `443` |
| [`azure.resource_provider.namespace`](/docs/registry/attributes/azure.md) |  | `Recommended` | string | [Azure Resource Provider Namespace](https://learn.microsoft.com/azure/azure-resource-manager/management/azure-services-resource-providers) as recognized by the client. [8] | `Microsoft.CognitiveServices` |
| [`gen_ai.request.frequency_penalty`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | double | The frequency penalty setting for the GenAI request. | `0.1` |
diff --git a/docs/gen-ai/gen-ai-agent-spans.md b/docs/gen-ai/gen-ai-agent-spans.md
index 48f662d089..bd3378e340 100644
--- a/docs/gen-ai/gen-ai-agent-spans.md
+++ b/docs/gen-ai/gen-ai-agent-spans.md
@@ -212,6 +212,7 @@ Examples of span kinds for different agent scenarios:
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if available, in the request, and !=1 | int | The target number of candidate completions to return. | `3` |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` If available. | string | The name of the GenAI model a request is being made to. [8] | `gpt-4` |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if applicable and if the request includes a seed | int | Requests with same seed value more likely to return same result. | `100` |
+| [`gen_ai.usage.reasoning_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` when available | int | The number of tokens used in the GenAI reasoning or thinking. Only set this if the GenAI model provides a separate count for reasoning tokens and they are not also included in input or output tokens. | `180` |
| [`server.port`](/docs/registry/attributes/server.md) |  | `Conditionally Required` If `server.address` is set. | int | GenAI server port. [9] | `80`; `8080`; `443` |
| [`gen_ai.request.frequency_penalty`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | double | The frequency penalty setting for the GenAI request. | `0.1` |
| [`gen_ai.request.max_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | int | The maximum number of tokens the model generates for a request. | `100` |
diff --git a/docs/gen-ai/gen-ai-events.md b/docs/gen-ai/gen-ai-events.md
index 6cf7976513..5a5f750505 100644
--- a/docs/gen-ai/gen-ai-events.md
+++ b/docs/gen-ai/gen-ai-events.md
@@ -65,6 +65,7 @@ This event is opt-in and could be used to store input and output details indepen
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if available, in the request, and !=1 | int | The target number of candidate completions to return. | `3` |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` If available. | string | The name of the GenAI model a request is being made to. [6] | `gpt-4` |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if applicable and if the request includes a seed | int | Requests with same seed value more likely to return same result. | `100` |
+| [`gen_ai.usage.reasoning_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` when available | int | The number of tokens used in the GenAI reasoning or thinking. Only set this if the GenAI model provides a separate count for reasoning tokens and they are not also included in input or output tokens. | `180` |
| [`server.port`](/docs/registry/attributes/server.md) |  | `Conditionally Required` If `server.address` is set. | int | GenAI server port. [7] | `80`; `8080`; `443` |
| [`gen_ai.request.frequency_penalty`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | double | The frequency penalty setting for the GenAI request. | `0.1` |
| [`gen_ai.request.max_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | int | The maximum number of tokens the model generates for a request. | `100` |
diff --git a/docs/gen-ai/gen-ai-spans.md b/docs/gen-ai/gen-ai-spans.md
index c4e10aaa7f..1b31dee059 100644
--- a/docs/gen-ai/gen-ai-spans.md
+++ b/docs/gen-ai/gen-ai-spans.md
@@ -77,6 +77,7 @@ client or when the GenAI call happens over instrumented protocol such as HTTP.
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if available, in the request, and !=1 | int | The target number of candidate completions to return. | `3` |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` If available. | string | The name of the GenAI model a request is being made to. [7] | `gpt-4` |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if applicable and if the request includes a seed | int | Requests with same seed value more likely to return same result. | `100` |
+| [`gen_ai.usage.reasoning_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` when available | int | The number of tokens used in the GenAI reasoning or thinking. Only set this if the GenAI model provides a separate count for reasoning tokens and they are not also included in input or output tokens. | `180` |
| [`server.port`](/docs/registry/attributes/server.md) |  | `Conditionally Required` If `server.address` is set. | int | GenAI server port. [8] | `80`; `8080`; `443` |
| [`gen_ai.request.frequency_penalty`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | double | The frequency penalty setting for the GenAI request. | `0.1` |
| [`gen_ai.request.max_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Recommended` | int | The maximum number of tokens the model generates for a request. | `100` |
diff --git a/docs/gen-ai/openai.md b/docs/gen-ai/openai.md
index e29d73e761..431241933e 100644
--- a/docs/gen-ai/openai.md
+++ b/docs/gen-ai/openai.md
@@ -74,6 +74,7 @@ Semantic Conventions for [OpenAI](https://openai.com/) client spans extend and o
| [`gen_ai.output.type`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` [5] | string | Represents the content type requested by the client. [6] | `text`; `json`; `image` |
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if available, in the request, and !=1 | int | The target number of candidate completions to return. | `3` |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` if applicable and if the request includes a seed | int | Requests with same seed value more likely to return same result. | `100` |
+| [`gen_ai.usage.reasoning_tokens`](/docs/registry/attributes/gen-ai.md) |  | `Conditionally Required` when available | int | The number of tokens used in the GenAI reasoning or thinking. Only set this if the GenAI model provides a separate count for reasoning tokens and they are not also included in input or output tokens. | `180` |
| [`openai.request.service_tier`](/docs/registry/attributes/openai.md) |  | `Conditionally Required` [7] | string | The service tier requested. May be a specific tier, default, or auto. | `auto`; `default` |
| [`openai.response.service_tier`](/docs/registry/attributes/openai.md) |  | `Conditionally Required` [8] | string | The service tier used for the response. | `scale`; `default` |
| [`server.port`](/docs/registry/attributes/server.md) |  | `Conditionally Required` If `server.address` is set. | int | GenAI server port. [9] | `80`; `8080`; `443` |
diff --git a/docs/registry/attributes/gen-ai.md b/docs/registry/attributes/gen-ai.md
index 34a73fa908..97990ad18b 100644
--- a/docs/registry/attributes/gen-ai.md
+++ b/docs/registry/attributes/gen-ai.md
@@ -55,6 +55,7 @@ This document defines the attributes used to describe telemetry in the context o
| `gen_ai.tool.type` |  | string | Type of the tool utilized by the agent [13] | `function`; `extension`; `datastore` |
| `gen_ai.usage.input_tokens` |  | int | The number of tokens used in the GenAI input (prompt). | `100` |
| `gen_ai.usage.output_tokens` |  | int | The number of tokens used in the GenAI response (completion). | `180` |
+| `gen_ai.usage.reasoning_tokens` |  | int | The number of tokens used in the GenAI reasoning or thinking. Only set this if the GenAI model provides a separate count for reasoning tokens and they are not also included in input or output tokens. | `180` |
**[1] `gen_ai.data_source.id`:** Data sources are used by AI agents and RAG applications to store grounding data. A data source may be an external database, object store, document collection, website, or any other storage system used by the GenAI agent or application. The `gen_ai.data_source.id` SHOULD match the identifier used by the GenAI system rather than a name specific to the external storage, such as a database or object store. Semantic conventions referencing `gen_ai.data_source.id` MAY also leverage additional attributes, such as `db.*`, to further identify and describe the data source.
diff --git a/model/gen-ai/registry.yaml b/model/gen-ai/registry.yaml
index 8b3fa51dd4..42e077cea0 100644
--- a/model/gen-ai/registry.yaml
+++ b/model/gen-ai/registry.yaml
@@ -187,6 +187,12 @@ groups:
type: int
brief: The number of tokens used in the GenAI response (completion).
examples: [180]
+ - id: gen_ai.usage.reasoning_tokens
+ stability: development
+ type: int
+ brief: The number of tokens used in the GenAI reasoning or thinking. Only set this if
+ the GenAI model provides a separate count for reasoning tokens and they are not also included in input or output tokens.
+ examples: [180]
- id: gen_ai.token.type
stability: development
type:
diff --git a/model/gen-ai/spans.yaml b/model/gen-ai/spans.yaml
index 542de5fdff..1642f0e911 100644
--- a/model/gen-ai/spans.yaml
+++ b/model/gen-ai/spans.yaml
@@ -71,6 +71,9 @@ groups:
requirement_level: recommended
- ref: gen_ai.usage.output_tokens
requirement_level: recommended
+ - ref: gen_ai.usage.reasoning_tokens
+ requirement_level:
+ conditionally_required: when available
- ref: gen_ai.conversation.id
requirement_level:
conditionally_required: when available
@@ -193,6 +196,9 @@ groups:
- ref: gen_ai.usage.output_tokens
brief: >
The number of completion tokens as reported in the usage completion_tokens property of the response.
+ - ref: gen_ai.usage.reasoning_tokens
+ brief: >
+ The number of reasoning tokens as reported in the usage reasoning_tokens property of the response.
- ref: server.port
requirement_level:
conditionally_required: If not default (443).
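Taken together, the registry definition and the span overrides above make `gen_ai.usage.reasoning_tokens` conditionally required: set only when the provider reports a separate reasoning-token count. A minimal sketch of that conditional logic in plain Python (the `usage` payload shape and helper name are hypothetical, not part of this change):

```python
def usage_token_attributes(usage: dict) -> dict:
    """Build GenAI usage span attributes from a provider usage payload.

    Adds gen_ai.usage.reasoning_tokens only when the provider reports a
    separate reasoning-token count, per the conditionally_required rule.
    """
    attrs = {
        "gen_ai.usage.input_tokens": usage["input_tokens"],
        "gen_ai.usage.output_tokens": usage["output_tokens"],
    }
    reasoning = usage.get("reasoning_tokens")
    if reasoning is not None:  # "conditionally required: when available"
        attrs["gen_ai.usage.reasoning_tokens"] = reasoning
    return attrs
```

An instrumentation would then pass these attributes to `span.set_attribute` (or equivalent); when the model does not expose reasoning tokens, the attribute is simply absent rather than set to zero.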