feat(langgraph): Usage attributes on invocation spans #5211
Conversation
Codecov Report
❌ Patch coverage is
Additional details and impacted files

```
@@            Coverage Diff             @@
##           master    #5211      +/-   ##
==========================================
+ Coverage   84.17%   84.22%   +0.04%
==========================================
  Files         181      181
  Lines       18443    18486      +43
  Branches     3283     3295      +12
==========================================
+ Hits        15524    15569      +45
+ Misses       1904     1899       -5
- Partials     1015     1018       +3
```
Bug: Usage data miscounted when PII collection disabled
When `should_send_default_pii()` is false or `include_prompts` is false, `input_messages` remains `None`. This causes `_get_new_messages` in `_set_response_attributes` to return all output messages instead of just the new ones. Since LangGraph state accumulates messages, usage data will include tokens from all messages in the response rather than only the new messages added during this invocation. The input messages need to be parsed unconditionally (at least for `_get_new_messages`) to correctly calculate usage data regardless of PII settings.
sentry_sdk/integrations/langgraph.py#L184-L204
sentry-python/sentry_sdk/integrations/langgraph.py
Lines 184 to 204 in 4f3fab3

```python
# Store input messages to later compare with output
input_messages = None
if (
    len(args) > 0
    and should_send_default_pii()
    and integration.include_prompts
):
    input_messages = _parse_langgraph_messages(args[0])
    if input_messages:
        normalized_input_messages = normalize_message_roles(input_messages)
        scope = sentry_sdk.get_current_scope()
        messages_data = truncate_and_annotate_messages(
            normalized_input_messages, span, scope
        )
        if messages_data is not None:
            set_data_normalized(
                span,
                SPANDATA.GEN_AI_REQUEST_MESSAGES,
                messages_data,
                unpack=False,
            )
```
sentry_sdk/integrations/langgraph.py#L240-L260
sentry-python/sentry_sdk/integrations/langgraph.py
Lines 240 to 260 in 4f3fab3

```python
input_messages = None
if (
    len(args) > 0
    and should_send_default_pii()
    and integration.include_prompts
):
    input_messages = _parse_langgraph_messages(args[0])
    if input_messages:
        normalized_input_messages = normalize_message_roles(input_messages)
        scope = sentry_sdk.get_current_scope()
        messages_data = truncate_and_annotate_messages(
            normalized_input_messages, span, scope
        )
        if messages_data is not None:
            set_data_normalized(
                span,
                SPANDATA.GEN_AI_REQUEST_MESSAGES,
                messages_data,
                unpack=False,
            )
```
This is okay; input messages do not have token info attached to them.
Description
Add prompt, response, and total token counts to LangGraph invocation spans.
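A rough sketch of how such counts can be aggregated from LangChain-style `usage_metadata` on the messages added by an invocation (plain dicts stand in for message objects; the helper name is illustrative, not the PR's actual code):

```python
def aggregate_usage(new_messages):
    """Sum token usage over the messages added by this invocation."""
    totals = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
    for message in new_messages:
        # Not every message carries usage info (e.g. tool messages).
        usage = message.get("usage_metadata") or {}
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals


messages = [
    {
        "role": "assistant",
        "usage_metadata": {
            "input_tokens": 12,
            "output_tokens": 7,
            "total_tokens": 19,
        },
    },
    {"role": "tool"},  # no usage info attached
]

# These sums would then be recorded as gen_ai.usage.* attributes on the
# invocation span.
usage = aggregate_usage(messages)
assert usage == {"input_tokens": 12, "output_tokens": 7, "total_tokens": 19}
```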
Issues
Contributes to #5170
Reminders
- Run `tox -e linters`.
- Use a conventional commit prefix in the PR title (`feat:`, `fix:`, `ref:`, `meta:`).