Conversation

@Rish-it Rish-it (Contributor) commented Sep 19, 2025

Context-Aware Chat

A context-aware chat system that tracks conversation state and provides visual feedback on context-window usage, with a clean, minimal visual design.

Overview

The context-aware chat integrates with the editor to provide intelligent assistance while maintaining awareness of conversation length and token usage. It includes visual indicators and context management so users can see how close a conversation is to the model's context limit.

Key Components

Context Indicator (context-indicator.tsx)

  • Visual: Circular progress ring showing context usage percentage
  • Color: Consistent gray (#6b7280) across all usage levels for minimal distraction
  • Behavior: Shows tooltip with detailed token usage, auto-hides when empty
  • Location: Top-right of chat input area
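
For orientation, here is a minimal sketch of how such a ring can be drawn. It assumes the gray stroke (#6b7280) and behavior described above; the component name, sizes, and track color are illustrative, not the PR's exact code in context-indicator.tsx.

```tsx
// Hedged sketch: an SVG donut showing `percentage` of the context window used.
export function ContextRing({ percentage }: { percentage: number }) {
    const radius = 8;
    const circumference = 2 * Math.PI * radius;
    const clamped = Math.min(Math.max(percentage, 0), 100);
    return (
        <svg width={20} height={20}>
            {/* Track circle */}
            <circle cx={10} cy={10} r={radius} fill="none" stroke="#e5e7eb" strokeWidth={2} />
            {/* Progress arc: offset the dash by the unused fraction, starting at 12 o'clock */}
            <circle
                cx={10}
                cy={10}
                r={radius}
                fill="none"
                stroke="#6b7280"
                strokeWidth={2}
                strokeDasharray={circumference}
                strokeDashoffset={circumference * (1 - clamped / 100)}
                transform="rotate(-90 10 10)"
            />
        </svg>
    );
}
```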

Context Tracking (use-context-tracking.tsx)

  • Monitors token usage across conversation
  • Calculates percentage of context window used
  • Supports multiple model types (default: openai:gpt-4)
  • Real-time updates via MobX observers
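
Pieced together from the review excerpts later in this thread, the hook has roughly the following shape; the workspace import specifiers are assumptions, and the real file contains additional detail.

```tsx
'use client';

import { useEffect, useState } from 'react';
import type { ChatMessage } from '@onlook/models'; // package name assumed
import {
    getContextUsage,
    getModelLimits,
    type ModelLimits,
    type TokenUsage,
} from '@onlook/ai/tokens'; // subpath assumed

interface ContextTrackingState {
    usage: TokenUsage;
    limits: ModelLimits;
    percentage: number;
}

export function useContextTracking(messages: ChatMessage[], modelId?: string) {
    const [contextState, setContextState] = useState<ContextTrackingState>(() => ({
        usage: { inputTokens: 0, outputTokens: 0, totalTokens: 0 },
        limits: getModelLimits(modelId),
        percentage: 0,
    }));

    useEffect(() => {
        // Recompute usage whenever the conversation or model changes.
        const updateContextUsage = async () => {
            const contextUsage = await getContextUsage(messages, modelId);
            setContextState(contextUsage);
        };
        void updateContextUsage();
    }, [messages, modelId]);

    return { ...contextState };
}
```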

Chat Input Integration

  • Context pills for active context items
  • Drag-and-drop image support with context validation
  • Screenshot capture with automatic context addition
  • Mode switching between Ask/Edit modes

Design

[Screenshot: 2025-09-19 at 11:03:28 AM]

Important

Introduces a context-aware chat system with a ContextIndicator component and useContextTracking hook for tracking token usage, and updates model limits in packages/ai/src/tokens/index.ts.

  • Behavior:
    • Adds ContextIndicator component in context-indicator.tsx to show context usage percentage with a tooltip for token details.
    • Integrates ContextIndicator into chat-input/index.tsx.
  • Implements useContextTracking hook in use-context-tracking.tsx for real-time token usage tracking.
    • Updates ChatContext class in context.ts to include getContextTokenCount() and getContextSummary() methods.
  • Models:
    • Defines ModelLimits and MODEL_LIMITS in packages/ai/src/tokens/index.ts for various models.
    • Adds getModelLimits() function to retrieve model limits.
  • Misc:
    • Adds @ai-sdk/ui-utils dependency in packages/ui/package.json.

This description was created by Ellipsis for 4d34d65.
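
For reference, the ModelLimits shape and fallback behavior mentioned in the Models bullets above look roughly like this; only the default entry is shown here, since the full MODEL_LIMITS registry is quoted verbatim in the review further down.

```ts
// Hedged sketch of the shapes in packages/ai/src/tokens/index.ts.
export interface ModelLimits {
    maxTokens: number;
    contextWindow: number;
    inputLimit: number;
    outputLimit: number;
}

export const MODEL_LIMITS: Record<string, ModelLimits> = {
    // ...full registry elided; see the review excerpt below.
    default: {
        maxTokens: 128000,
        contextWindow: 128000,
        inputLimit: 100000,
        outputLimit: 4000,
    },
};

export function getModelLimits(modelId?: string): ModelLimits {
    // Unknown or missing model ids fall back to the default limits.
    return MODEL_LIMITS[modelId ?? 'default'] ?? MODEL_LIMITS.default!;
}
```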

Summary by CodeRabbit

  • New Features

    • Added a live context-usage indicator in chat showing percent of context window used (with hover tooltip for token and window details); supports multiple models.
  • Improvements

    • Refined chat input layout to better balance context pills and controls; indicator placement adapts based on available context.
  • Chores

    • Added a UI utility dependency to support the new indicator.

vercel bot commented Sep 19, 2025

@Rish-it is attempting to deploy a commit to the Onlook Team on Vercel.

A member of the Team first needs to authorize it.

coderabbitai bot commented Sep 19, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds token-aware context tracking and a compact inline ContextIndicator to the chat input. Implements a useContextTracking hook and model limits/token-usage utilities, extends ChatContext with token-summary helpers, and wires the indicator into the chat input layout; also adds a UI dependency.

Changes

  • Chat input UI integration
    Files: apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx, apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
    Adds and renders ContextIndicator in the chat input; the layout is adjusted to conditionally show the indicator on the left or right based on existing context. ContextIndicator is a client React component that displays a donut progress ring and a tooltip, driven by messages and modelId.
  • Context tracking hook
    Files: apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
    Adds a useContextTracking(messages, modelId) client hook that asynchronously computes token usage, model limits, and percentage via getContextUsage.
  • Token utilities & model limits
    Files: packages/ai/src/tokens/index.ts
    Adds the ModelLimits and TokenUsage interfaces, the MODEL_LIMITS registry, getModelLimits, countTokensInString, and getContextUsage(messages, modelId), which returns usage, limits, and percentage. Minor cleanup in the token-counting helper.
  • Chat context store additions
    Files: apps/web/client/src/components/store/editor/chat/context.ts
    Adds getContextTokenCount() and getContextSummary() methods to ChatContext to compute total tokens and counts per context type (read-only).
  • UI package dependency
    Files: packages/ui/package.json
    Adds the @ai-sdk/ui-utils ^1.2.11 dependency.
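
A hedged usage sketch for the token utilities summarized above; the import specifiers are assumptions about the workspace package names.

```ts
import type { ChatMessage } from '@onlook/models'; // package name assumed
import { getContextUsage } from '@onlook/ai/tokens'; // subpath assumed

// Log how much of the model's context window a conversation consumes.
async function logContextUsage(messages: ChatMessage[]) {
    const { usage, limits, percentage } = await getContextUsage(messages, 'openai/gpt-5');
    console.log(
        `${usage.totalTokens} / ${limits.contextWindow} tokens ` +
            `(${percentage.toFixed(1)}% of context window)`,
    );
}
```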

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant User
  participant ChatInput
  participant ContextIndicator
  participant Hook as useContextTracking
  participant Tokens as tokens/index.ts

  User->>ChatInput: type / edit message
  ChatInput->>ContextIndicator: render(props: messages, modelId?)
  ContextIndicator->>Hook: useContextTracking(messages, modelId)
  Hook->>Tokens: getContextUsage(messages, modelId)
  Tokens-->>Hook: { usage, limits, percentage }
  Hook-->>ContextIndicator: context state
  ContextIndicator-->>ChatInput: display inline donut + percent

  User-->>ContextIndicator: hover
  ContextIndicator-->>User: tooltip with token details

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • Kitenite

Poem

I hop and tally every token bright,
A tiny donut glowing in the night.
I nibble context, count with glee,
“Watch the window!” — squeak from me.
Hop, show, and vanish — code and carrot sprite. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)
  • Title Check ✅ Passed: The PR title "Context aware chat" succinctly captures the primary change (adding context tracking and a ContextIndicator to the chat input) without extraneous detail, making it clear to reviewers what the main feature is. It is concise, relevant, and easy to scan in history.
  • Description Check ✅ Passed: The PR description provides a clear overview, detailed component summaries, behavioral notes, and a screenshot, so it covers the core information reviewers need. However, it does not follow the repository's template exactly: the "Related Issues" field and the "Type of Change" checkboxes are not populated, and there is no explicit "Testing" section describing verification steps. Given the overall thoroughness this is marked as a pass, with a recommendation to add the missing template fields for consistency and reviewer convenience.


@Rish-it Rish-it requested a review from Kitenite September 19, 2025 05:50
}> {
    const totalTokens = await countTokensWithRoles(messages);
    const limits = getModelLimits(modelId);
    const inputTokens = Math.floor(totalTokens * 0.8);
The fixed 80/20 split for input/output tokens may not accurately reflect actual token usage. Consider clarifying or revisiting this logic.

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (1)

14-19: Consider avoiding synchronous getModelLimits in useState initializer.

While getModelLimits appears to be synchronous, calling functions in useState initializers can be risky if the function changes behavior later or has side effects.

-    const [contextState, setContextState] = useState<ContextTrackingState>({
-        usage: { inputTokens: 0, outputTokens: 0, totalTokens: 0 },
-        limits: getModelLimits(modelId),
-        percentage: 0
-    });
+    const [contextState, setContextState] = useState<ContextTrackingState>(() => ({
+        usage: { inputTokens: 0, outputTokens: 0, totalTokens: 0 },
+        limits: getModelLimits(modelId),
+        percentage: 0
+    }));
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1)

30-31: Consider making colors configurable.

The hardcoded colors work well for the current design, but consider making them configurable or theme-aware for future flexibility.

-    const colors = { stroke: '#6b7280', text: 'text-gray-500' };
+    const colors = { 
+        stroke: 'rgb(107 114 128)', // text-gray-500 equivalent
+        text: 'text-gray-500' 
+    };

Or better yet, use CSS custom properties that can be themed.

packages/ai/src/tokens/index.ts (1)

115-141: Consider more sophisticated token distribution.

The current 80/20 split between input and output tokens is a simple approximation. For better accuracy, consider analyzing actual message content vs expected response length.

-    const inputTokens = Math.floor(totalTokens * 0.8);
-    const outputTokens = totalTokens - inputTokens;
+    // More sophisticated calculation based on actual message roles
+    const inputTokens = await countInputTokens(messages);
+    const outputTokens = totalTokens - inputTokens;

However, the current approach is acceptable for usage estimation purposes.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65db8bb and 4d34d65.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1 hunks)
  • apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (1 hunks)
  • apps/web/client/src/components/store/editor/chat/context.ts (2 hunks)
  • packages/ai/src/tokens/index.ts (2 hunks)
  • packages/ui/package.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
apps/web/client/src/app/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

apps/web/client/src/app/**/*.tsx: Default to Server Components; add 'use client' when using events, state/effects, browser APIs, or client‑only libraries
Do not use process.env in client code; import env from @/env instead

Avoid hardcoded user-facing text; use next-intl messages/hooks

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
  • apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
apps/web/client/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

apps/web/client/src/**/*.{ts,tsx}: Use path aliases @/* and ~/* for imports that map to apps/web/client/src/*
Avoid hardcoded user-facing text; use next-intl messages/hooks instead

Use path aliases @/* and ~/* for imports mapping to src/*

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
  • apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
  • apps/web/client/src/components/store/editor/chat/context.ts
apps/web/client/src/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable references across renders
Keep the active MobX store in a useRef and perform async cleanup with setTimeout(() => storeRef.current?.clear(), 0) to avoid route-change races
Avoid useMemo for creating MobX store instances
Avoid putting the MobX store instance in effect dependency arrays if it causes loops; split concerns by domain

apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable identities across renders
Keep the active MobX store in a useRef and clean up asynchronously with setTimeout(() => storeRef.current?.clear(), 0)
Do not use useMemo to create MobX stores
Avoid placing MobX store instances in effect dependency arrays if it causes loops; split concerns instead
observer components must be client components; place a single client boundary at the feature entry; child observers need not repeat 'use client'

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
  • apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Do not use the any type unless necessary

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
  • apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
  • apps/web/client/src/components/store/editor/chat/context.ts
  • packages/ai/src/tokens/index.ts
apps/web/client/src/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Default to Server Components; add 'use client' only when using events, state/effects, browser APIs, or client-only libs

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
  • apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
{apps,packages}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Avoid using the any type unless absolutely necessary

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
  • apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
  • apps/web/client/src/components/store/editor/chat/context.ts
  • packages/ai/src/tokens/index.ts
🧬 Code graph analysis (5)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-pills/input-context-pills.tsx (1)
  • InputContextPills (19-60)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1)
  • ContextIndicator (23-72)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (2)
packages/models/src/chat/message/message.ts (1)
  • ChatMessage (18-18)
apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (1)
  • useContextTracking (14-33)
apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (2)
packages/ai/src/tokens/index.ts (4)
  • TokenUsage (105-109)
  • ModelLimits (5-10)
  • getModelLimits (81-83)
  • getContextUsage (115-141)
packages/models/src/chat/message/message.ts (1)
  • ChatMessage (18-18)
apps/web/client/src/components/store/editor/chat/context.ts (1)
packages/ai/src/tokens/index.ts (1)
  • countTokensInString (111-113)
packages/ai/src/tokens/index.ts (2)
packages/db/src/schema/chat/message.ts (1)
  • messages (11-27)
packages/models/src/chat/message/message.ts (1)
  • ChatMessage (18-18)
🔇 Additional comments (20)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)

25-25: LGTM! Clean import addition.

The import follows the established pattern and aligns with the modular component structure.


353-359: LGTM! Well-structured layout integration.

The layout changes properly accommodate the new ContextIndicator with justify-between, maintaining clear visual separation between context pills and the usage indicator.

apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (4)

1-1: Follow coding guidelines for client directives.

The 'use client' directive is correctly placed since this hook uses React state and effects.


3-6: LGTM! Clean imports with proper path aliases.

Imports follow the coding guidelines by using path aliases (@/) and importing from the appropriate packages.


21-28: LGTM! Proper async effect implementation.

The async function properly handles the context usage updates with correct dependency array.


30-33: LGTM! Clean return pattern.

Using object spread to return the state is clean and maintains proper typing.

apps/web/client/src/components/store/editor/chat/context.ts (3)

15-15: LGTM! Proper import for token counting.

The import follows the established pattern and brings in the needed functionality from the AI package.


316-320: LGTM! Efficient token counting implementation.

The method correctly uses reduce to sum token counts from all context items, with proper fallback for empty content.
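
The described logic amounts to something like the following standalone sketch; the context-item shape is an assumption based on the notes above.

```ts
import { countTokensInString } from '@onlook/ai/tokens'; // subpath assumed

// Sum token counts across context items, treating missing content as empty.
function getContextTokenCount(context: Array<{ content?: string }>): number {
    return context.reduce(
        (total, item) => total + countTokensInString(item.content ?? ''),
        0,
    );
}
```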


322-339: LGTM! Comprehensive context summary.

The method provides a useful summary including total contexts, token count, and type distribution. The implementation is clean and follows good practices.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (5)

1-1: LGTM! Proper client directive.

The component correctly uses 'use client' since it's an observer component with interactive elements.


14-21: LGTM! Clean token formatting utility.

The formatTokens function provides appropriate human-readable formatting for token counts with proper decimal precision.
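
The helper's implementation is not shown in this thread, but a plausible shape consistent with the description is:

```ts
// Hedged sketch: compact, human-readable token counts with one decimal place.
function formatTokens(count: number): string {
    if (count >= 1_000_000) return `${(count / 1_000_000).toFixed(1)}M`;
    if (count >= 1_000) return `${(count / 1_000).toFixed(1)}k`;
    return `${count}`;
}

// formatTokens(987)     -> "987"
// formatTokens(1234)    -> "1.2k"
// formatTokens(2500000) -> "2.5M"
```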


23-32: LGTM! Proper observer implementation with early return.

The component correctly uses MobX observer pattern and includes a sensible early return when no tokens are present.


36-65: LGTM! Well-implemented progress indicator.

The SVG-based circular progress indicator is properly implemented with smooth transitions and good visual design.


67-69: LGTM! Clear tooltip content.

The tooltip provides useful contextual information using the formatting helper.

packages/ai/src/tokens/index.ts (5)

5-10: LGTM! Well-defined interface.

The ModelLimits interface appropriately captures the key constraints for different AI models.


81-83: LGTM! Proper fallback implementation.

The function correctly falls back to default limits for unknown models.


86-96: LGTM! Clean code improvement.

The removal of inline comments improves code readability while maintaining functionality.


105-109: LGTM! Clear token usage interface.

The interface properly captures input, output, and total token usage.


111-113: LGTM! Simple and effective utility.

The function provides a clean interface for token counting.

packages/ui/package.json (1)

56-56: Verify internal @ai-sdk/ui-utils usage & prefer public APIs

  • packages/ui/package.json lists "@ai-sdk/ui-utils": "^1.2.11".
  • The package is explicitly marked as an internal API ("Expect breaking changes") and the published/latest release is 1.2.11 (published May 7, 2025). (npmjs.com)
  • Action: Confirm whether this dependency is required (search the repo for imports/usages); if unused, remove it; if used, migrate to public AI SDK APIs (e.g., the public "ai" / @ai-sdk/react hooks) or a maintained chat UI kit to reduce instability. (ai-sdk.dev)

Comment on lines +12 to +79
export const MODEL_LIMITS: Record<string, ModelLimits> = {
    'claude-sonnet-4-20250514': {
        maxTokens: 1000000,
        contextWindow: 1000000,
        inputLimit: 800000,
        outputLimit: 200000,
    },
    'claude-3-5-haiku-20241022': {
        maxTokens: 200000,
        contextWindow: 200000,
        inputLimit: 180000,
        outputLimit: 200000,
    },
    'anthropic/claude-sonnet-4': {
        maxTokens: 1000000,
        contextWindow: 1000000,
        inputLimit: 800000,
        outputLimit: 200000,
    },
    'anthropic/claude-3.5-haiku': {
        maxTokens: 200000,
        contextWindow: 200000,
        inputLimit: 180000,
        outputLimit: 200000,
    },
    'openai/gpt-5': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 400000,
    },
    'openai/gpt-5-mini': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 400000,
    },
    'openai/gpt-5-nano': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 400000,
    },
    'gpt-5': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 128000,
    },
    'claude-sonnet-4': {
        maxTokens: 1000000,
        contextWindow: 1000000,
        inputLimit: 800000,
        outputLimit: 200000,
    },
    'claude-3-5-haiku': {
        maxTokens: 200000,
        contextWindow: 200000,
        inputLimit: 180000,
        outputLimit: 8000,
    },
    default: {
        maxTokens: 128000,
        contextWindow: 128000,
        inputLimit: 100000,
        outputLimit: 4000,
    },
};

⚠️ Potential issue

🧩 Analysis chain

Verify model limit accuracy and consider maintenance.

The model limits contain specific values for various AI models. These may become outdated as providers update their offerings.

Please verify:

  1. The accuracy of these model limits, especially for newer models like GPT-5
  2. Consider documenting the source of these limits
  3. Plan for regular updates as model providers change their limits

🌐 Web query:

What are the current context window sizes for GPT-5 and Claude Sonnet 4?

💡 Result:

  • GPT-5 (API): total context 400,000 tokens (up to 272,000 input + up to 128,000 output). [1] (OpenAI, Aug 7, 2025)
  • Claude Sonnet 4: default/standard context 200,000 tokens; Sonnet 4 also supports a 1,000,000-token context window (public beta / tiered availability). [2][3] (Anthropic, Aug 2025)

Fix token-limit inaccuracies and add provenance

packages/ai/src/tokens/index.ts (lines 12–79):

  • openai/gpt-5, openai/gpt-5-mini, openai/gpt-5-nano — outputLimit is 400000; change to 128000 and keep inputLimit=272000, contextWindow/maxTokens=400000 (OpenAI: total 400,000 = up to 272,000 input + up to 128,000 output; Aug 7, 2025).
  • gpt-5 (alias) already uses outputLimit=128000 — consolidate duplicates to a single canonical entry to avoid conflicting values.
  • claude-sonnet-4* variants — currently set to 1,000,000; Anthropic lists default/standard 200,000 with 1,000,000 available only as a tier/beta (Aug 2025). Set default to 200000 or add explicit tiered entries + availability notes.
  • Add inline comments citing provider docs (with dates), and add a validation test that flags configs where outputLimit > provider-declared output limit or > (contextWindow - inputLimit). Add a short README/update cadence for keeping limits current.
🤖 Prompt for AI Agents
In packages/ai/src/tokens/index.ts around lines 12–79, several model token
limits are incorrect or duplicated: adjust openai/gpt-5, openai/gpt-5-mini, and
openai/gpt-5-nano outputLimit from 400000 to 128000 while keeping
inputLimit=272000 and contextWindow/maxTokens=400000; consolidate the duplicate
"gpt-5" alias into a single canonical entry (remove conflicting duplicate);
change claude-sonnet-4* default entries from 1,000,000 to 200,000 (or add
explicit tiered entries with availability notes) and add short inline comments
citing provider docs with dates for each provider; then add a unit/validation
test that fails if any config has outputLimit greater than the provider-declared
output limit or greater than contextWindow - inputLimit, and include a brief
README note describing update cadence for keeping limits current.
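
A sketch of the suggested validation test, assuming Bun's built-in test runner and a sibling index.test.ts file; with the registry as currently written, the entries flagged above would fail the second assertion, which is exactly what the guard is for.

```ts
// packages/ai/src/tokens/index.test.ts (hypothetical location)
import { describe, expect, test } from 'bun:test';
import { MODEL_LIMITS } from './index';

describe('MODEL_LIMITS invariants', () => {
    for (const [model, limits] of Object.entries(MODEL_LIMITS)) {
        test(`${model}: limits are internally consistent`, () => {
            // The input budget must fit inside the context window...
            expect(limits.inputLimit).toBeLessThanOrEqual(limits.contextWindow);
            // ...and the output budget must fit in whatever the input leaves over.
            expect(limits.outputLimit).toBeLessThanOrEqual(
                limits.contextWindow - limits.inputLimit,
            );
        });
    }
});
```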

Comment on lines +22 to +28
    const updateContextUsage = async () => {
        const contextUsage = await getContextUsage(messages, modelId);
        setContextState(contextUsage);
    };

    updateContextUsage();
}, [messages, modelId]);

Async setup race condition: The useEffect calls an async function updateContextUsage() but doesn't await it or handle the Promise properly. This can cause the component to render with stale state while the async operation is still pending. The async function should be properly awaited or the Promise should be handled to prevent race conditions between state updates and renders.

Spotted by Diamond
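
One conventional remedy, sketched here rather than taken from the PR, is a cancellation flag inside the effect so a stale response cannot overwrite newer state:

```tsx
useEffect(() => {
    let cancelled = false;
    const updateContextUsage = async () => {
        const contextUsage = await getContextUsage(messages, modelId);
        // Skip the update if messages/modelId changed while we were awaiting.
        if (!cancelled) {
            setContextState(contextUsage);
        }
    };
    void updateContextUsage();
    return () => {
        cancelled = true;
    };
}, [messages, modelId]);
```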


Comment on lines +125 to +126
const inputTokens = Math.floor(totalTokens * 0.8);
const outputTokens = totalTokens - inputTokens;

Logic error in token calculation: The code arbitrarily assigns 80% of total tokens as input tokens and 20% as output tokens (lines 125-126). This is incorrect because it's calculating token distribution from existing messages, not predicting future usage. For existing messages, all tokens should be considered input tokens since they're already part of the conversation context. The output tokens should represent available space for the model's response, not a portion of existing messages.

Suggested change
-const inputTokens = Math.floor(totalTokens * 0.8);
-const outputTokens = totalTokens - inputTokens;
+const inputTokens = totalTokens;
+const outputTokens = maxTokens - inputTokens;

Spotted by Diamond


coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)

150-151: Hardcoded user‑facing strings; use next‑intl.
Replace literals with t()/transKeys per repo guidelines.

Examples:

-    toast.error('Failed to send message. Please try again.');
+    toast.error(t(transKeys.editor.panels.edit.tabs.chat.input.sendError));

-        if (chatMode === ChatType.ASK) {
-            return 'Ask a question about your project...';
-        }
+        if (chatMode === ChatType.ASK) {
+            return t(transKeys.editor.panels.edit.tabs.chat.input.askPlaceholder);
+        }

-            <TooltipContent>{'Stop response'}</TooltipContent>
+            <TooltipContent>{t(transKeys.common.stopResponse)}</TooltipContent>

Apply the same pattern to: “Pasted image”, “Dropped image”, “No active frame available for screenshot”, “No screenshot data”, “Screenshot”, “Screenshot added to chat”.

Also applies to: 157-159, 172-173, 189-190, 234-235, 258-269, 427-427

🧹 Nitpick comments (5)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (5)

25-25: Prefer alias import for consistency with repo guidelines.
Switch to @/* to match path‑alias convention in apps/web/client.

-import { ContextIndicator } from '../context-indicator';
+import { ContextIndicator } from '@/app/project/[id]/_components/right-panel/chat-tab/context-indicator';

353-364: ContextIndicator placement conflicts with PR spec (“top‑right”).
Current logic renders it on the left when context exists; spec says it should live at the top‑right regardless. Recommend always rendering on the right and letting the indicator self‑hide when empty.

-                <div className="flex flex-row flex-wrap items-center justify-between gap-1.5 mb-1">
-                    <div className="flex flex-row flex-wrap items-center gap-1.5">
-                        {editorEngine.chat.context.context.length > 0 && (
-                            <ContextIndicator messages={messages} />
-                        )}
-                        {/* <ContextWheel /> */}
-                        <InputContextPills />
-                    </div>
-                    {editorEngine.chat.context.context.length === 0 && (
-                        <ContextIndicator messages={messages} />
-                    )}
-                </div>
+                <div className="flex flex-row flex-wrap items-center justify-between gap-1.5 mb-1">
+                    <div className="flex flex-row flex-wrap items-center gap-1.5">
+                        {/* <ContextWheel /> */}
+                        <InputContextPills />
+                    </div>
+                    <ContextIndicator messages={messages} />
+                </div>

81-90: Double invocation of handleEnterSelection().
You call it in the condition and again inside the block; this can execute selection twice. Call once and branch on the result.

-        const handleGlobalKeyDown = (e: KeyboardEvent) => {
-            if (e.key === 'Enter' && suggestionRef.current?.handleEnterSelection()) {
-                e.preventDefault();
-                e.stopPropagation();
-                // Stop the event from bubbling to the canvas
-                e.stopImmediatePropagation();
-                // Handle the suggestion selection
-                suggestionRef.current.handleEnterSelection();
-            }
-        };
+        const handleGlobalKeyDown = (e: KeyboardEvent) => {
+            if (e.key === 'Enter') {
+                const handled = suggestionRef.current?.handleEnterSelection();
+                if (handled) {
+                    e.preventDefault();
+                    e.stopPropagation();
+                    // Stop the event from bubbling to the canvas
+                    e.stopImmediatePropagation();
+                }
+            }
+        };

270-271: Avoid surfacing raw error objects to users.
Show a localized generic message; log the error to console for diagnostics.

-            toast.error('Failed to capture screenshot. Error: ' + error);
+            console.error('Failed to capture screenshot', error);
+            toast.error(t(transKeys.editor.panels.edit.tabs.chat.input.screenshotError));

415-426: Add accessible labels to icon‑only buttons.
Ensure SR users can identify actions.

-                                <Button
+                                <Button
+                                    aria-label={t(transKeys.common.stopResponse)}
+                                    title={t(transKeys.common.stopResponse)}
                                     size={'icon'}
                                     variant={'secondary'}
                                     className="text-smallPlus w-fit h-full py-0.5 px-2.5 text-primary"
-                        <Button
+                        <Button
+                            aria-label={t(transKeys.common.send)}
+                            title={t(transKeys.common.send)}
                             size={'icon'}
                             variant={'secondary'}
                             className="text-smallPlus w-fit h-full py-0.5 px-2.5 text-primary"

Also applies to: 430-439

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4d34d65 and 81f60a6.

📒 Files selected for processing (2)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (3 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
🧰 Additional context used
📓 Path-based instructions (6)
apps/web/client/src/app/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

apps/web/client/src/app/**/*.tsx: Default to Server Components; add 'use client' when using events, state/effects, browser APIs, or client‑only libraries
Do not use process.env in client code; import env from @/env instead

Avoid hardcoded user-facing text; use next-intl messages/hooks

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
apps/web/client/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

apps/web/client/src/**/*.{ts,tsx}: Use path aliases @/* and ~/* for imports that map to apps/web/client/src/*
Avoid hardcoded user-facing text; use next-intl messages/hooks instead

Use path aliases @/* and ~/* for imports mapping to src/*

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
apps/web/client/src/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable references across renders
Keep the active MobX store in a useRef and perform async cleanup with setTimeout(() => storeRef.current?.clear(), 0) to avoid route-change races
Avoid useMemo for creating MobX store instances
Avoid putting the MobX store instance in effect dependency arrays if it causes loops; split concerns by domain

apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable identities across renders
Keep the active MobX store in a useRef and clean up asynchronously with setTimeout(() => storeRef.current?.clear(), 0)
Do not use useMemo to create MobX stores
Avoid placing MobX store instances in effect dependency arrays if it causes loops; split concerns instead
observer components must be client components; place a single client boundary at the feature entry; child observers need not repeat 'use client'

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Do not use the any type unless necessary

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
apps/web/client/src/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Default to Server Components; add 'use client' only when using events, state/effects, browser APIs, or client-only libs

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
{apps,packages}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Avoid using the any type unless absolutely necessary

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
🧬 Code graph analysis (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1)
  • ContextIndicator (23-92)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-pills/input-context-pills.tsx (1)
  • InputContextPills (19-60)
🔇 Additional comments (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)

381-383: LGTM: removed unused event parameter.
Cleaner handler; no behavior change.

@Rish-it Rish-it closed this Sep 22, 2025