gen-ai: add judgment boundary attributes to evaluation result #3297
Closed
Nick-heo-eg wants to merge 3 commits into open-telemetry:main
Conversation
This commit adds attributes to the gen_ai.evaluation.result event to support traceability of decision boundaries where multiple alternative outcomes were evaluated. Implements discussion from open-telemetry/semantic-conventions#3244
…t intent

Adds two concrete JSON examples demonstrating judgment boundary attribute usage:
- Content safety pre-execution check with automatic decision
- Cost boundary evaluation with human escalation

Clarifies that judgment boundary attributes are intended for event-level auditability and post-hoc inspection rather than high-cardinality metric aggregation.
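The example payloads themselves are not visible on this page. A hedged sketch of what the content-safety example might look like, using the attribute names proposed in this PR; the event envelope shape and all attribute values here are illustrative assumptions, not the PR's actual JSON:

```json
{
  "event.name": "gen_ai.evaluation.result",
  "attributes": {
    "gen_ai.evaluation.judgment.phase": "pre_execution",
    "gen_ai.evaluation.judgment.selected_path": "allow",
    "gen_ai.evaluation.judgment.alternatives_evaluated": 2,
    "gen_ai.evaluation.judgment.human_in_loop": false
  }
}
```

In the cost-boundary variant described in the commit message, `human_in_loop` would presumably be `true` to record the escalation.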
Summary
This PR adds a small set of attributes to the existing gen_ai.evaluation.result event to improve traceability of decision boundaries where multiple alternative outcomes were evaluated.
The design follows the direction discussed in open-telemetry/semantic-conventions-genai#72 and does not introduce new events.
Changes
- model/gen-ai/registry.yaml: new attributes gen_ai.evaluation.judgment.phase, gen_ai.evaluation.judgment.selected_path, gen_ai.evaluation.judgment.alternatives_evaluated, and gen_ai.evaluation.judgment.human_in_loop
- model/gen-ai/events.yaml: the new attributes are referenced from the gen_ai.evaluation.result event

Motivation
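A minimal sketch of how these attribute definitions might appear in model/gen-ai/registry.yaml, following the general shape of the semantic-conventions registry format. The group id, types, stability level, briefs, and example values below are assumptions for illustration, not the PR's actual diff:

```yaml
groups:
  - id: registry.gen_ai.evaluation.judgment
    type: attribute_group
    brief: Attributes describing a judgment boundary in a GenAI evaluation.
    attributes:
      - id: gen_ai.evaluation.judgment.phase
        type: string
        stability: development
        brief: Phase at which the judgment was made (assumed values).
        examples: ["pre_execution", "post_execution"]
      - id: gen_ai.evaluation.judgment.selected_path
        type: string
        stability: development
        brief: The outcome selected among the evaluated alternatives.
        examples: ["allow", "block"]
      - id: gen_ai.evaluation.judgment.alternatives_evaluated
        type: int
        stability: development
        brief: Number of alternative outcomes that were evaluated.
        examples: [2]
      - id: gen_ai.evaluation.judgment.human_in_loop
        type: boolean
        stability: development
        brief: Whether a human participated in the decision.
```

Keeping these as event attributes rather than metric attributes matches the stated intent: they are meant for per-event auditability, and values like selected_path could be high-cardinality in aggregate.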
Current GenAI traces capture execution outcomes but do not provide an explicit signal that alternative paths (e.g., allow vs block) were evaluated.
These attributes allow systems to demonstrate that such evaluations occurred, without exposing internal reasoning or policy logic.
Design Rationale
- Extends the existing gen_ai.evaluation.result event rather than creating a new event

Note
Documentation files under docs/gen-ai/ are autogenerated from the registry and event schemas and are not edited directly in this PR.

Related Issue