
Conversation

tedzhouhk
Contributor

@tedzhouhk tedzhouhk commented Jun 4, 2025

Use the comment field in Annotated to pass metric-related information; this allows Annotated to be aligned with SSE.

Summary by CodeRabbit

  • Refactor
    • Removed direct token count fields from response objects and data structures.
    • Centralized token metrics into structured annotations rather than separate fields.
  • Bug Fixes
    • Improved extraction and processing of token metrics for accurate reporting.
  • Tests
    • Updated tests and helpers to reflect the new token metrics annotation approach.


copy-pr-bot bot commented Jun 4, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Contributor

coderabbitai bot commented Jun 4, 2025

Walkthrough

This change removes the chunk_tokens, input_tokens, and output_tokens fields from the Annotated struct and all related code across multiple modules. Token count metadata is now stored and parsed via the comment field as formatted strings using the new LLMMetricAnnotation abstraction instead of dedicated struct fields. Test cases and conversion logic were updated accordingly.
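
To make the mechanism concrete, here is a minimal, self-contained sketch of the shape the approach ends up taking (a JSON payload carried in the Annotated comment field, which is what the review below converges on). The struct layout, constant value, and helper names are illustrative assumptions rather than the PR's actual API; the real Annotated<R> lives in lib/runtime/src/protocols/annotated.rs and the real LLMMetricAnnotation in lib/llm/src/preprocessor.rs.

// Sketch only: hypothetical stand-ins for the real types, using serde/serde_json.
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
struct LLMMetricAnnotation {
    input_tokens: usize,
    output_tokens: usize,
    chunk_tokens: usize,
}

// Only the fields relevant to this PR; the real struct carries more.
#[derive(Debug)]
struct Annotated<R> {
    data: Option<R>,
    event: Option<String>,
    comment: Option<Vec<String>>,
}

const ANNOTATION_LLM_METRICS: &str = "llm_metrics";

// Producer side (preprocessor): serialize the metrics into the comment field and
// tag the response with the metrics event name.
fn attach_metrics<R>(
    mut ann: Annotated<R>,
    m: &LLMMetricAnnotation,
) -> serde_json::Result<Annotated<R>> {
    ann.event = Some(ANNOTATION_LLM_METRICS.to_string());
    ann.comment = Some(vec![serde_json::to_string(m)?]);
    Ok(ann)
}

// Consumer side (HTTP service metrics): parse the metrics back out; responses
// without the metrics event return None and flow through unchanged.
fn extract_metrics<R>(ann: &Annotated<R>) -> Option<LLMMetricAnnotation> {
    if ann.event.as_deref() != Some(ANNOTATION_LLM_METRICS) {
        return None;
    }
    ann.comment
        .as_ref()
        .and_then(|c| c.first())
        .and_then(|s| serde_json::from_str::<LLMMetricAnnotation>(s).ok())
}

fn main() {
    let metrics = LLMMetricAnnotation { input_tokens: 12, output_tokens: 4, chunk_tokens: 1 };
    let ann: Annotated<String> =
        attach_metrics(Annotated { data: None, event: None, comment: None }, &metrics)
            .expect("plain integers always serialize");
    assert_eq!(extract_metrics(&ann).map(|m| m.output_tokens), Some(4));
}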

Changes

  • lib/runtime/src/protocols/annotated.rs: Removed the chunk_tokens, input_tokens, and output_tokens fields from the Annotated<R> struct and all related methods.
  • lib/llm/src/engines.rs, lib/engines/mistralrs/src/lib.rs, lib/llm/src/protocols/codec.rs: Removed the setting of token-related fields when constructing Annotated; these fields are now omitted entirely.
  • lib/llm/src/http/service/openai.rs: Token metric extraction now parses token counts from comment strings using LLMMetricAnnotation.
  • lib/llm/src/preprocessor.rs: Introduced LLMMetricAnnotation to serialize/deserialize token counts into/from the Annotated comment field; replaced direct token fields with annotations.
  • lib/llm/src/protocols/openai/chat_completions/aggregator.rs, lib/llm/src/protocols/openai/completions/aggregator.rs: Updated test helpers and cases to omit token fields when constructing Annotated instances.

Sequence Diagram(s)

sequenceDiagram
    participant Engine
    participant Preprocessor
    participant Annotated
    participant HTTPService

    Engine->>Preprocessor: Generate response with token counts
    Preprocessor->>Annotated: Serialize token counts into comment field (LLMMetricAnnotation)
    Annotated->>HTTPService: Yield response (token counts in comment)
    HTTPService->>HTTPService: Parse token counts from comment for metrics via LLMMetricAnnotation


Poem

A hop and a skip, some tokens we drop,
From fields to comments, they now softly plop.
Annotated’s lighter, its burden set free,
Token counts hidden, but still there to see.
With strings in the comments, the code’s feeling spry—
Hooray for the change, let the bunnies hop by! 🐇✨



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
lib/llm/src/preprocessor.rs (1)

254-260: Consider using constants for token metric prefixes to ensure consistency.

The hardcoded string prefixes ("chunk_tokens: ", "input_tokens: ", "output_tokens: ") create a dependency between this encoding logic and the parsing logic in other modules. Consider defining these as constants to prevent inconsistencies and make future changes easier.

+const TOKEN_COMMENT_PREFIX_CHUNK: &str = "chunk_tokens: ";
+const TOKEN_COMMENT_PREFIX_INPUT: &str = "input_tokens: ";
+const TOKEN_COMMENT_PREFIX_OUTPUT: &str = "output_tokens: ";
+
                    // Store token information in comment field
                    let token_info = vec![
-                        format!("chunk_tokens: {}", chunk_tokens),
-                        format!("input_tokens: {}", isl),
-                        format!("output_tokens: {}", current_osl),
+                        format!("{}{}", TOKEN_COMMENT_PREFIX_CHUNK, chunk_tokens),
+                        format!("{}{}", TOKEN_COMMENT_PREFIX_INPUT, isl),
+                        format!("{}{}", TOKEN_COMMENT_PREFIX_OUTPUT, current_osl),
                    ];
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e31f8d9 and 27e1be0.

📒 Files selected for processing (8)
  • lib/engines/mistralrs/src/lib.rs (0 hunks)
  • lib/llm/src/engines.rs (2 hunks)
  • lib/llm/src/http/service/openai.rs (1 hunks)
  • lib/llm/src/preprocessor.rs (1 hunks)
  • lib/llm/src/protocols/codec.rs (0 hunks)
  • lib/llm/src/protocols/openai/chat_completions/aggregator.rs (0 hunks)
  • lib/llm/src/protocols/openai/completions/aggregator.rs (0 hunks)
  • lib/runtime/src/protocols/annotated.rs (0 hunks)
💤 Files with no reviewable changes (5)
  • lib/llm/src/protocols/openai/completions/aggregator.rs
  • lib/llm/src/protocols/codec.rs
  • lib/llm/src/protocols/openai/chat_completions/aggregator.rs
  • lib/engines/mistralrs/src/lib.rs
  • lib/runtime/src/protocols/annotated.rs
🧰 Additional context used
🧬 Code Graph Analysis (1)
lib/llm/src/http/service/openai.rs (1)
lib/bindings/python/rust/lib.rs (1)
  • comments (846-848)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: pre-merge-rust (lib/bindings/python)
  • GitHub Check: pre-merge-rust (.)
  • GitHub Check: pre-merge-rust (lib/runtime/examples)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (1)
lib/llm/src/engines.rs (1)

205-205: LGTM: Annotated struct initialization correctly updated.

The removal of explicit token field assignments aligns perfectly with the struct refactoring. The simplified initialization is cleaner and maintains the same functionality.

Also applies to: 213-213, 237-237, 241-241

@ryanolson
Contributor

How about something like

pub const LLM_METRICS_ANNOTATION: &str  = "llm_metrics: ";

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LLMMetricAnnotation {
    pub input_tokens: usize,
    pub output_tokens: usize, 
    pub chunk_tokens: usize,
}

impl LLMMetricAnnotation {
    pub fn to_annotation<T>(&self) -> Annotated<T> {
        Annotated::from_annotation(LLM_METRICS_ANNOTATION, self)
    }

    
    pub fn from_annotation<T>(annotation: Annotated<T>) -> Result<Option<LLMMetricAnnotation>> {
         if !annotation.is_event() { return Ok(None); }
         if annotation.event() != LLM_METRICS_ANNOTATION { return Ok(None); }
         if annotation.comments().len() != 1 { return Err("malformed comments block".into()); }
         let metrics: LLMMetricAnnotation = serde_json::from_str(&annotation.comments()[0])?;
         Ok(Some(metrics))
    }
}

then to parse, use from_annotation
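
For illustration, consuming such annotations might look roughly like this (assuming the reference-taking signature the code ends up with; the surrounding function and slice are made up for the example):

// Hypothetical consumer-side helper, e.g. in the HTTP service metrics path.
fn total_chunk_tokens<T>(responses: &[Annotated<T>]) -> usize {
    responses
        .iter()
        // Responses without the llm_metrics event yield Ok(None) and are skipped.
        .filter_map(|resp| LLMMetricAnnotation::from_annotation(resp).ok().flatten())
        .map(|m| m.chunk_tokens)
        .sum()
}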

@tedzhouhk
Contributor Author

tedzhouhk commented Jun 5, 2025

How about something like

pub const LLM_METRICS_ANNOTATION: &str  = "llm_metrics: ";

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LLMMetricAnnotation {
    pub input_tokens: usize,
    pub output_tokens: usize, 
    pub chunk_tokens: usize,
}

impl LLMMetricAnnotation {
    pub fn to_annotation<T>(&self) -> Annotated<T> {
        Annotated::from_annotation(LLM_METRICS_ANNOTATION, self)
    }

    
    pub fn from_annotation<T>(annotation: Annotated<T>) -> Result<Option<LLMMetricAnnotation>> {
         if !annotation.is_event() { return Ok(None); }
         if annotation.event() != LLM_METRICS_ANNOTATION { return Ok(None); }
         if annotation.comments().len() != 1 { return Err("malformed comments block".into()); }
         let metrics: LLMMetricAnnotation = serde_json::from_str(&annotation.comments()[0])?;
         Ok(Some(metrics))
    }
}

then to parse, use from_annotation

@ryanolson thanks for the suggestion. I'm not sure I fully understand the to_annotation part, but I made some changes and it seems to work; please give it another look. The part I don't understand well is whether we want to reuse the event field of a normal response (since a normal response will have event None) or send out a new response carrying only the metrics. Currently I am doing the first approach.
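
For reference, the second option (a separate metrics-only response) would look roughly like this, reusing the hypothetical types from the sketch near the top of this page; the first option is essentially what the attach_metrics sketch there does:

// Illustrative only: emit an extra, data-less response that carries nothing but
// the metrics annotation, instead of tagging an existing response.
fn metrics_only_response<R>(m: &LLMMetricAnnotation) -> serde_json::Result<Annotated<R>> {
    Ok(Annotated {
        data: None,
        event: Some(ANNOTATION_LLM_METRICS.to_string()),
        comment: Some(vec![serde_json::to_string(m)?]),
    })
}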

@paulhendricks
Member

@ryanolson we do have annotations in the NvExt field: https://github.com/ai-dynamo/dynamo/blob/main/lib/llm/src/protocols/openai/nvext.rs#L63

This is available to use for Completions Requests and for ChatCompletions Requests.

We can choose to add NvExt to both Completions Responses and ChatCompletions Responses as well?

@tedzhouhk
Contributor Author

@ryanolson @paulhendricks based on our discussion today, I think the current approach is the right way to do it, IIUC. Could you please take another look? Thanks

@pull-request-size pull-request-size bot added size/L and removed size/M labels Jun 11, 2025
Member

@paulhendricks paulhendricks left a comment


This is great - LGTM

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
lib/llm/src/preprocessor.rs (1)

76-96: Minor: tighten error handling & return types in from_annotation

  1. Returning Box<dyn std::error::Error> introduces dynamic dispatch for a simple parsing routine; the rest of the crate predominantly uses anyhow::Error.
  2. Re-creating ad-hoc validation logic duplicates behaviour already enforced by Annotated::from_annotation.
  3. comments.len() != 1 is a strong restriction – consider at least allowing >= 1 and picking the first element to stay forward-compatible.
-pub fn from_annotation<T>(
-    annotation: &Annotated<T>,
-) -> Result<Option<LLMMetricAnnotation>, Box<dyn std::error::Error>> {
+pub fn from_annotation<T>(
+    annotation: &Annotated<T>,
+) -> anyhow::Result<Option<Self>> {
     if annotation.event.as_deref() != Some(ANNOTATION_LLM_METRICS) {
         return Ok(None);
     }
-    let comments = annotation
-        .comment
-        .as_ref()
-        .ok_or("missing comments block")?;
-    if comments.len() != 1 {
-        return Err("malformed comments block - expected exactly 1 comment".into());
-    }
-    let metrics: LLMMetricAnnotation = serde_json::from_str(&comments[0])?;
-    Ok(Some(metrics))
+    let comment = annotation
+        .comment
+        .as_ref()
+        .and_then(|v| v.first())
+        .context("llm_metrics annotation missing comment payload")?;
+
+    Ok(Some(serde_json::from_str(comment)?))
 }

This keeps error semantics consistent and removes a few unwrap/manual checks.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8d440f7 and 113d958.

📒 Files selected for processing (1)
  • lib/llm/src/preprocessor.rs (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: pre-merge-rust (lib/runtime/examples)
  • GitHub Check: pre-merge-rust (lib/bindings/python)
  • GitHub Check: Build and Test - vllm
  • GitHub Check: pre-merge-rust (.)

@tedzhouhk tedzhouhk enabled auto-merge (squash) June 11, 2025 17:35
@tedzhouhk tedzhouhk merged commit 227a0e7 into main Jun 11, 2025
9 checks passed
@tedzhouhk tedzhouhk deleted the hzhou/annotation-fix branch June 11, 2025 17:50
@coderabbitai coderabbitai bot mentioned this pull request Jun 15, 2025
