Context aware chat #2876
Conversation
@Rish-it is attempting to deploy a commit to the Onlook Team on Vercel. A member of the Team first needs to authorize it.
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds token-aware context tracking and a compact inline ContextIndicator to the chat input. Implements a useContextTracking hook and model limits/token-usage utilities, extends ChatContext with token-summary helpers, and wires the indicator into the chat input layout; also adds a UI dependency.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant User
participant ChatInput
participant ContextIndicator
participant Hook as useContextTracking
participant Tokens as tokens/index.ts
User->>ChatInput: type / edit message
ChatInput->>ContextIndicator: render(props: messages, modelId?)
ContextIndicator->>Hook: useContextTracking(messages, modelId)
Hook->>Tokens: getContextUsage(messages, modelId)
Tokens-->>Hook: { usage, limits, percentage }
Hook-->>ContextIndicator: context state
ContextIndicator-->>ChatInput: display inline donut + percent
User-->>ContextIndicator: hover
ContextIndicator-->>User: tooltip with token details
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
```ts
}> {
    const totalTokens = await countTokensWithRoles(messages);
    const limits = getModelLimits(modelId);
    const inputTokens = Math.floor(totalTokens * 0.8);
```
The fixed 80/20 split for input/output tokens may not accurately reflect actual token usage. Consider clarifying or revisiting this logic.
Actionable comments posted: 1
🧹 Nitpick comments (3)
apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (1)

14-19: Consider avoiding synchronous getModelLimits in useState initializer.

While getModelLimits appears to be synchronous, calling functions in useState initializers can be risky if the function changes behavior later or has side effects.

```diff
-    const [contextState, setContextState] = useState<ContextTrackingState>({
-        usage: { inputTokens: 0, outputTokens: 0, totalTokens: 0 },
-        limits: getModelLimits(modelId),
-        percentage: 0
-    });
+    const [contextState, setContextState] = useState<ContextTrackingState>(() => ({
+        usage: { inputTokens: 0, outputTokens: 0, totalTokens: 0 },
+        limits: getModelLimits(modelId),
+        percentage: 0
+    }));
```

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1)
30-31: Consider making colors configurable.

The hardcoded colors work well for the current design, but consider making them configurable or theme-aware for future flexibility.

```diff
-    const colors = { stroke: '#6b7280', text: 'text-gray-500' };
+    const colors = {
+        stroke: 'rgb(107 114 128)', // text-gray-500 equivalent
+        text: 'text-gray-500'
+    };
```

Or better yet, use CSS custom properties that can be themed, as sketched below.
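A minimal sketch of the theme-aware variant. The `--context-indicator-stroke` custom property is hypothetical, not an existing token in the repo's theme:

```tsx
// Hypothetical sketch: read the stroke color from a themed CSS custom
// property instead of hardcoding it; falls back to the current gray if unset.
const colors = {
    stroke: 'var(--context-indicator-stroke, rgb(107 114 128))',
    text: 'text-gray-500',
};

// Usage in the SVG donut stays the same apart from the stroke value:
// <circle stroke={colors.stroke} ... />
```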
packages/ai/src/tokens/index.ts (1)
115-141: Consider more sophisticated token distribution.

The current 80/20 split between input and output tokens is a simple approximation. For better accuracy, consider analyzing actual message content vs expected response length.

```diff
-    const inputTokens = Math.floor(totalTokens * 0.8);
-    const outputTokens = totalTokens - inputTokens;
+    // More sophisticated calculation based on actual message roles
+    const inputTokens = await countInputTokens(messages);
+    const outputTokens = totalTokens - inputTokens;
```

However, the current approach is acceptable for usage estimation purposes.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1 hunks)
- apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (1 hunks)
- apps/web/client/src/components/store/editor/chat/context.ts (2 hunks)
- packages/ai/src/tokens/index.ts (2 hunks)
- packages/ui/package.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
apps/web/client/src/app/**/*.tsx
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/app/**/*.tsx: Default to Server Components; add 'use client' when using events, state/effects, browser APIs, or client‑only libraries
Do not use process.env in client code; import env from @/env instead
Avoid hardcoded user-facing text; use next-intl messages/hooks
Files:
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
- apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
apps/web/client/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/**/*.{ts,tsx}: Use path aliases @/* and ~/* for imports that map to apps/web/client/src/*
Avoid hardcoded user-facing text; use next-intl messages/hooks instead
Use path aliases @/* and ~/* for imports mapping to src/*
Files:
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
- apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
- apps/web/client/src/components/store/editor/chat/context.ts
apps/web/client/src/**/*.tsx
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable references across renders
Keep the active MobX store in a useRef and perform async cleanup with setTimeout(() => storeRef.current?.clear(), 0) to avoid route-change races
Avoid useMemo for creating MobX store instances
Avoid putting the MobX store instance in effect dependency arrays if it causes loops; split concerns by domain
apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable identities across renders
Keep the active MobX store in a useRef and clean up asynchronously with setTimeout(() => storeRef.current?.clear(), 0)
Do not use useMemo to create MobX stores
Avoid placing MobX store instances in effect dependency arrays if it causes loops; split concerns instead
observer components must be client components; place a single client boundary at the feature entry; child observers need not repeat 'use client'
Files:
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
- apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Do not use the any type unless necessary
Files:
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
- apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
- apps/web/client/src/components/store/editor/chat/context.ts
- packages/ai/src/tokens/index.ts
apps/web/client/src/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Default to Server Components; add 'use client' only when using events, state/effects, browser APIs, or client-only libs
Files:
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
- apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
{apps,packages}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Avoid using the any type unless absolutely necessary
Files:
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
- apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx
- apps/web/client/src/components/store/editor/chat/context.ts
- packages/ai/src/tokens/index.ts
🧬 Code graph analysis (5)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-pills/input-context-pills.tsx (1)
InputContextPills (19-60)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1)
ContextIndicator (23-72)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (2)
packages/models/src/chat/message/message.ts (1)
ChatMessage (18-18)
apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (1)
useContextTracking (14-33)
apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (2)
packages/ai/src/tokens/index.ts (4)
TokenUsage (105-109), ModelLimits (5-10), getModelLimits (81-83), getContextUsage (115-141)
packages/models/src/chat/message/message.ts (1)
ChatMessage (18-18)
apps/web/client/src/components/store/editor/chat/context.ts (1)
packages/ai/src/tokens/index.ts (1)
countTokensInString (111-113)
packages/ai/src/tokens/index.ts (2)
packages/db/src/schema/chat/message.ts (1)
messages (11-27)
packages/models/src/chat/message/message.ts (1)
ChatMessage (18-18)
🔇 Additional comments (20)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)
25-25: LGTM! Clean import addition.

The import follows the established pattern and aligns with the modular component structure.

353-359: LGTM! Well-structured layout integration.

The layout changes properly accommodate the new ContextIndicator with justify-between, maintaining clear visual separation between context pills and the usage indicator.

apps/web/client/src/app/project/[id]/_hooks/use-context-tracking.tsx (4)
1-1: Follow coding guidelines for client directives.

The 'use client' directive is correctly placed since this hook uses React state and effects.

3-6: LGTM! Clean imports with proper path aliases.

Imports follow the coding guidelines by using path aliases (@/) and importing from the appropriate packages.

21-28: LGTM! Proper async effect implementation.

The async function properly handles the context usage updates with a correct dependency array.

30-33: LGTM! Clean return pattern.

Using object spread to return the state is clean and maintains proper typing.
apps/web/client/src/components/store/editor/chat/context.ts (3)
15-15: LGTM! Proper import for token counting.

The import follows the established pattern and brings in the needed functionality from the AI package.

316-320: LGTM! Efficient token counting implementation.

The method correctly uses reduce to sum token counts from all context items, with proper fallback for empty content.

322-339: LGTM! Comprehensive context summary.

The method provides a useful summary including total contexts, token count, and type distribution. The implementation is clean and follows good practices.
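For readers without the diff open, a rough sketch of what the two helpers described above could look like. The context-item shape and the chars/4 token stand-in are assumptions, not the exact implementation:

```ts
// Sketch only: approximates the getContextTokenCount/getContextSummary
// helpers reviewed above. countTokensInString here is a crude chars/4
// stand-in for the real tokenizer-backed utility in packages/ai.
const countTokensInString = (text: string): number => Math.ceil(text.length / 4);

interface ContextItem {
    type: string;
    content?: string;
}

function getContextTokenCount(context: ContextItem[]): number {
    // Sum token counts across items, treating missing content as empty.
    return context.reduce(
        (total, item) => total + countTokensInString(item.content ?? ''),
        0,
    );
}

function getContextSummary(context: ContextItem[]) {
    // Total items, total tokens, and a per-type distribution.
    const byType: Record<string, number> = {};
    for (const item of context) {
        byType[item.type] = (byType[item.type] ?? 0) + 1;
    }
    return {
        totalContexts: context.length,
        totalTokens: getContextTokenCount(context),
        byType,
    };
}
```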
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (5)
1-1: LGTM! Proper client directive.

The component correctly uses 'use client' since it's an observer component with interactive elements.

14-21: LGTM! Clean token formatting utility.

The formatTokens function provides appropriate human-readable formatting for token counts with proper decimal precision.
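A plausible shape for such a formatter, for illustration only; the 1k/1M thresholds and one-decimal precision are assumptions rather than the actual implementation:

```ts
// Illustrative sketch of a human-readable token formatter.
function formatTokens(count: number): string {
    if (count >= 1_000_000) return `${(count / 1_000_000).toFixed(1)}M`;
    if (count >= 1_000) return `${(count / 1_000).toFixed(1)}k`;
    return String(count);
}

// formatTokens(950)       -> "950"
// formatTokens(12_400)    -> "12.4k"
// formatTokens(1_050_000) -> "1.1M"
```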
23-32: LGTM! Proper observer implementation with early return.

The component correctly uses the MobX observer pattern and includes a sensible early return when no tokens are present.

36-65: LGTM! Well-implemented progress indicator.

The SVG-based circular progress indicator is properly implemented with smooth transitions and good visual design.

67-69: LGTM! Clear tooltip content.

The tooltip provides useful contextual information using the formatting helper.
packages/ai/src/tokens/index.ts (5)
5-10: LGTM! Well-defined interface.

The ModelLimits interface appropriately captures the key constraints for different AI models.

81-83: LGTM! Proper fallback implementation.

The function correctly falls back to default limits for unknown models.

86-96: LGTM! Clean code improvement.

The removal of inline comments improves code readability while maintaining functionality.

105-109: LGTM! Clear token usage interface.

The interface properly captures input, output, and total token usage.

111-113: LGTM! Simple and effective utility.

The function provides a clean interface for token counting.
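As a rough illustration of what a string token counter can look like, assuming a tokenizer library such as js-tiktoken is available; the encoding choice is an assumption, not necessarily what packages/ai actually uses:

```ts
// Hedged sketch: token counting via js-tiktoken. The cl100k_base encoding
// is an assumption; the real utility may use a different tokenizer.
import { getEncoding } from 'js-tiktoken';

const encoding = getEncoding('cl100k_base');

export function countTokensInString(text: string): number {
    return encoding.encode(text).length;
}
```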
packages/ui/package.json (1)
56-56: Verify internal @ai-sdk/ui-utils usage & prefer public APIs
- packages/ui/package.json lists "@ai-sdk/ui-utils": "^1.2.11".
- The package is explicitly marked as an internal API ("Expect breaking changes") and the published/latest release is 1.2.11 (published May 7, 2025). (npmjs.com)
- Action: Confirm whether this dependency is required (search the repo for imports/usages); if unused, remove it; if used, migrate to public AI SDK APIs (e.g., the public "ai" / @ai-sdk/react hooks) or a maintained chat UI kit to reduce instability. (ai-sdk.dev)
```ts
export const MODEL_LIMITS: Record<string, ModelLimits> = {
    'claude-sonnet-4-20250514': {
        maxTokens: 1000000,
        contextWindow: 1000000,
        inputLimit: 800000,
        outputLimit: 200000,
    },
    'claude-3-5-haiku-20241022': {
        maxTokens: 200000,
        contextWindow: 200000,
        inputLimit: 180000,
        outputLimit: 200000,
    },
    'anthropic/claude-sonnet-4': {
        maxTokens: 1000000,
        contextWindow: 1000000,
        inputLimit: 800000,
        outputLimit: 200000,
    },
    'anthropic/claude-3.5-haiku': {
        maxTokens: 200000,
        contextWindow: 200000,
        inputLimit: 180000,
        outputLimit: 200000,
    },
    'openai/gpt-5': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 400000,
    },
    'openai/gpt-5-mini': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 400000,
    },
    'openai/gpt-5-nano': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 400000,
    },
    'gpt-5': {
        maxTokens: 400000,
        contextWindow: 400000,
        inputLimit: 272000,
        outputLimit: 128000,
    },
    'claude-sonnet-4': {
        maxTokens: 1000000,
        contextWindow: 1000000,
        inputLimit: 800000,
        outputLimit: 200000,
    },
    'claude-3-5-haiku': {
        maxTokens: 200000,
        contextWindow: 200000,
        inputLimit: 180000,
        outputLimit: 8000,
    },
    default: {
        maxTokens: 128000,
        contextWindow: 128000,
        inputLimit: 100000,
        outputLimit: 4000,
    },
};
```
🧩 Analysis chain
Verify model limit accuracy and consider maintenance.
The model limits contain specific values for various AI models. These may become outdated as providers update their offerings.
Please verify:
- The accuracy of these model limits, especially for newer models like GPT-5
- Consider documenting the source of these limits
- Plan for regular updates as model providers change their limits
🌐 Web query:
What are the current context window sizes for GPT-5 and Claude Sonnet 4?
💡 Result:
- GPT-5 (API): total context 400,000 tokens (up to 272,000 input + up to 128,000 output). [1] (OpenAI, Aug 7, 2025)
- Claude Sonnet 4: default/standard context 200,000 tokens; Sonnet 4 also supports a 1,000,000-token context window (public beta / tiered availability). [2][3] (Anthropic, Aug 2025)
Fix token-limit inaccuracies and add provenance
packages/ai/src/tokens/index.ts (lines 12–79):
- openai/gpt-5, openai/gpt-5-mini, openai/gpt-5-nano — outputLimit is 400000; change to 128000 and keep inputLimit=272000, contextWindow/maxTokens=400000 (OpenAI: total 400,000 = up to 272,000 input + up to 128,000 output; Aug 7, 2025).
- gpt-5 (alias) already uses outputLimit=128000 — consolidate duplicates to a single canonical entry to avoid conflicting values.
- claude-sonnet-4* variants — currently set to 1,000,000; Anthropic lists default/standard 200,000 with 1,000,000 available only as a tier/beta (Aug 2025). Set default to 200000 or add explicit tiered entries + availability notes.
- Add inline comments citing provider docs (with dates), and add a validation test that flags configs where outputLimit > provider-declared output limit or > (contextWindow - inputLimit); see the sketch after the prompt below. Add a short README/update cadence for keeping limits current.
🤖 Prompt for AI Agents
In packages/ai/src/tokens/index.ts around lines 12–79, several model token
limits are incorrect or duplicated: adjust openai/gpt-5, openai/gpt-5-mini, and
openai/gpt-5-nano outputLimit from 400000 to 128000 while keeping
inputLimit=272000 and contextWindow/maxTokens=400000; consolidate the duplicate
"gpt-5" alias into a single canonical entry (remove conflicting duplicate);
change claude-sonnet-4* default entries from 1,000,000 to 200,000 (or add
explicit tiered entries with availability notes) and add short inline comments
citing provider docs with dates for each provider; then add a unit/validation
test that fails if any config has outputLimit greater than the provider-declared
output limit or greater than contextWindow - inputLimit, and include a brief
README note describing update cadence for keeping limits current.
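A minimal sketch of the suggested validation test, assuming bun:test (the repo ships a bun.lock) and the exported MODEL_LIMITS map above. Note that with the current values some entries (e.g., claude-3-5-haiku-20241022) would fail, which is the point of the guard:

```ts
// Sketch of the suggested guard test; bun:test is an assumption.
// Flags entries whose outputLimit exceeds the space left in the context window.
import { describe, expect, test } from 'bun:test';
import { MODEL_LIMITS } from './index';

describe('MODEL_LIMITS sanity', () => {
    for (const [model, limits] of Object.entries(MODEL_LIMITS)) {
        test(`${model}: output fits alongside input in the context window`, () => {
            expect(limits.inputLimit).toBeLessThanOrEqual(limits.contextWindow);
            // Output budget must not exceed what remains after max input.
            expect(limits.outputLimit).toBeLessThanOrEqual(
                limits.contextWindow - limits.inputLimit,
            );
        });
    }
});
```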
```ts
        const updateContextUsage = async () => {
            const contextUsage = await getContextUsage(messages, modelId);
            setContextState(contextUsage);
        };

        updateContextUsage();
    }, [messages, modelId]);
```
Async setup race condition: The useEffect calls an async function updateContextUsage() but doesn't await it or handle the Promise properly. This can cause the component to render with stale state while the async operation is still pending. The async function should be properly awaited or the Promise should be handled to prevent race conditions between state updates and renders.
Spotted by Diamond
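One common way to address this inside the hook's effect, sketched under the assumption that only the latest result should win (the cancellation flag is illustrative, not the PR's code):

```ts
// Sketch: guard against out-of-order async updates with a cancellation flag.
useEffect(() => {
    let cancelled = false;

    const updateContextUsage = async () => {
        const contextUsage = await getContextUsage(messages, modelId);
        // Ignore results from a stale effect run.
        if (!cancelled) {
            setContextState(contextUsage);
        }
    };

    void updateContextUsage();

    return () => {
        cancelled = true;
    };
}, [messages, modelId]);
```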
```ts
    const inputTokens = Math.floor(totalTokens * 0.8);
    const outputTokens = totalTokens - inputTokens;
```
Logic error in token calculation: The code arbitrarily assigns 80% of total tokens as input tokens and 20% as output tokens (lines 125-126). This is incorrect because it's calculating token distribution from existing messages, not predicting future usage. For existing messages, all tokens should be considered input tokens since they're already part of the conversation context. The output tokens should represent available space for the model's response, not a portion of existing messages.
```diff
-    const inputTokens = Math.floor(totalTokens * 0.8);
-    const outputTokens = totalTokens - inputTokens;
+    const inputTokens = totalTokens;
+    const outputTokens = maxTokens - inputTokens;
```
Spotted by Diamond
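Putting the suggestion in context, a sketch of how getContextUsage might look with that fix applied. countTokensWithRoles, getModelLimits, and the limits shape are taken from this review; the zero-clamp and percentage formula are added assumptions:

```ts
// Sketch of getContextUsage with the suggested fix: all existing message
// tokens count as input; output is the remaining window budget.
// Relies on countTokensWithRoles/getModelLimits defined in the same module.
async function getContextUsage(messages: ChatMessage[], modelId?: string) {
    const totalTokens = await countTokensWithRoles(messages);
    const limits = getModelLimits(modelId);

    const inputTokens = totalTokens;
    // Remaining space for the model's response, clamped at zero (assumption).
    const outputTokens = Math.max(limits.maxTokens - inputTokens, 0);

    return {
        usage: { inputTokens, outputTokens, totalTokens },
        limits,
        percentage: Math.min((inputTokens / limits.contextWindow) * 100, 100),
    };
}
```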
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)
150-151: Hardcoded user‑facing strings; use next‑intl.
Replace literals with t()/transKeys per repo guidelines. Examples:

```diff
- toast.error('Failed to send message. Please try again.');
+ toast.error(t(transKeys.editor.panels.edit.tabs.chat.input.sendError));

- if (chatMode === ChatType.ASK) {
-     return 'Ask a question about your project...';
- }
+ if (chatMode === ChatType.ASK) {
+     return t(transKeys.editor.panels.edit.tabs.chat.input.askPlaceholder);
+ }

- <TooltipContent>{'Stop response'}</TooltipContent>
+ <TooltipContent>{t(transKeys.common.stopResponse)}</TooltipContent>
```

Apply the same pattern to: “Pasted image”, “Dropped image”, “No active frame available for screenshot”, “No screenshot data”, “Screenshot”, “Screenshot added to chat”.
Also applies to: 157-159, 172-173, 189-190, 234-235, 258-269, 427-427
🧹 Nitpick comments (5)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (5)
25-25: Prefer alias import for consistency with repo guidelines.
Switch to @/* to match path‑alias convention in apps/web/client.

```diff
-import { ContextIndicator } from '../context-indicator';
+import { ContextIndicator } from '@/app/project/[id]/_components/right-panel/chat-tab/context-indicator';
```
353-364: ContextIndicator placement conflicts with PR spec (“top‑right”).
Current logic renders it on the left when context exists; spec says it should live at the top‑right regardless. Recommend always rendering on the right and letting the indicator self‑hide when empty.

```diff
-    <div className="flex flex-row flex-wrap items-center justify-between gap-1.5 mb-1">
-        <div className="flex flex-row flex-wrap items-center gap-1.5">
-            {editorEngine.chat.context.context.length > 0 && (
-                <ContextIndicator messages={messages} />
-            )}
-            {/* <ContextWheel /> */}
-            <InputContextPills />
-        </div>
-        {editorEngine.chat.context.context.length === 0 && (
-            <ContextIndicator messages={messages} />
-        )}
-    </div>
+    <div className="flex flex-row flex-wrap items-center justify-between gap-1.5 mb-1">
+        <div className="flex flex-row flex-wrap items-center gap-1.5">
+            {/* <ContextWheel /> */}
+            <InputContextPills />
+        </div>
+        <ContextIndicator messages={messages} />
+    </div>
```
81-90: Double invocation of handleEnterSelection().
You call it in the condition and again inside the block; this can execute selection twice. Call once and branch on the result.

```diff
-    const handleGlobalKeyDown = (e: KeyboardEvent) => {
-        if (e.key === 'Enter' && suggestionRef.current?.handleEnterSelection()) {
-            e.preventDefault();
-            e.stopPropagation();
-            // Stop the event from bubbling to the canvas
-            e.stopImmediatePropagation();
-            // Handle the suggestion selection
-            suggestionRef.current.handleEnterSelection();
-        }
-    };
+    const handleGlobalKeyDown = (e: KeyboardEvent) => {
+        if (e.key === 'Enter') {
+            const handled = suggestionRef.current?.handleEnterSelection();
+            if (handled) {
+                e.preventDefault();
+                e.stopPropagation();
+                // Stop the event from bubbling to the canvas
+                e.stopImmediatePropagation();
+            }
+        }
+    };
```
270-271: Avoid surfacing raw error objects to users.
Show a localized generic message; log the error to console for diagnostics.

```diff
-    toast.error('Failed to capture screenshot. Error: ' + error);
+    console.error('Failed to capture screenshot', error);
+    toast.error(t(transKeys.editor.panels.edit.tabs.chat.input.screenshotError));
```
415-426: Add accessible labels to icon‑only buttons.
Ensure SR users can identify actions.

```diff
-    <Button
+    <Button
+        aria-label={t(transKeys.common.stopResponse)}
+        title={t(transKeys.common.stopResponse)}
         size={'icon'}
         variant={'secondary'}
         className="text-smallPlus w-fit h-full py-0.5 px-2.5 text-primary"
```

```diff
-    <Button
+    <Button
+        aria-label={t(transKeys.common.send)}
+        title={t(transKeys.common.send)}
         size={'icon'}
         variant={'secondary'}
         className="text-smallPlus w-fit h-full py-0.5 px-2.5 text-primary"
```

Also applies to: 430-439
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx
🧰 Additional context used
📓 Path-based instructions (6)
apps/web/client/src/app/**/*.tsx
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/app/**/*.tsx: Default to Server Components; add 'use client' when using events, state/effects, browser APIs, or client‑only libraries
Do not use process.env in client code; import env from @/env instead
Avoid hardcoded user-facing text; use next-intl messages/hooks
Files:
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
apps/web/client/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/**/*.{ts,tsx}: Use path aliases @/* and ~/* for imports that map to apps/web/client/src/*
Avoid hardcoded user-facing text; use next-intl messages/hooks instead
Use path aliases @/* and ~/* for imports mapping to src/*
Files:
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
apps/web/client/src/**/*.tsx
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable references across renders
Keep the active MobX store in a useRef and perform async cleanup with setTimeout(() => storeRef.current?.clear(), 0) to avoid route-change races
Avoid useMemo for creating MobX store instances
Avoid putting the MobX store instance in effect dependency arrays if it causes loops; split concerns by domain
apps/web/client/src/**/*.tsx: Create MobX store instances with useState(() => new Store()) for stable identities across renders
Keep the active MobX store in a useRef and clean up asynchronously with setTimeout(() => storeRef.current?.clear(), 0)
Do not use useMemo to create MobX stores
Avoid placing MobX store instances in effect dependency arrays if it causes loops; split concerns instead
observer components must be client components; place a single client boundary at the feature entry; child observers need not repeat 'use client'
Files:
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Do not use the any type unless necessary
Files:
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
apps/web/client/src/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Default to Server Components; add 'use client' only when using events, state/effects, browser APIs, or client-only libs
Files:
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
{apps,packages}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Avoid using the any type unless absolutely necessary
Files:
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx
🧬 Code graph analysis (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-indicator.tsx (1)
ContextIndicator (23-92)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/context-pills/input-context-pills.tsx (1)
InputContextPills (19-60)
🔇 Additional comments (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)
381-383: LGTM: removed unused event parameter.
Cleaner handler; no behavior change.
Context-Aware Chat
A context-aware chat system that tracks conversation state and provides visual feedback on context window usage with a clean, minimal design approach.
Overview
The context-aware chat integrates with the editor to provide intelligent assistance while maintaining awareness of conversation length and token usage. It includes visual indicators and context management for optimal user experience.
Key Components
Context Indicator (context-indicator.tsx)
- Single neutral gray (#6b7280) across all usage levels for minimal distraction

Context Tracking (use-context-tracking.ts)
- Tracks token usage against the active model's limits (model IDs like openai:gpt-4)

Chat Input Integration
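A sketch of how a component might consume the hook, based on the signature shown in this PR (useContextTracking(messages, modelId) returning { usage, limits, percentage }); the markup and import paths are illustrative:

```tsx
'use client';

// Illustrative consumer of useContextTracking. Only the hook signature and
// return shape come from this PR; import paths follow the repo's @/ alias
// convention but are assumptions.
import { useContextTracking } from '@/app/project/[id]/_hooks/use-context-tracking';
import type { ChatMessage } from '@onlook/models';

export function ContextUsageLabel({ messages }: { messages: ChatMessage[] }) {
    const { usage, limits, percentage } = useContextTracking(messages, 'openai:gpt-4');

    if (usage.totalTokens === 0) return null;

    return (
        <span title={`${usage.totalTokens} of ${limits.contextWindow} tokens`}>
            {Math.round(percentage)}% of context used
        </span>
    );
}
```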
Design
Important
Introduces a context-aware chat system with a ContextIndicator component and useContextTracking hook for tracking token usage, and updates model limits in packages/ai/src/tokens/index.ts.

- ContextIndicator component in context-indicator.tsx to show context usage percentage with a tooltip for token details.
- Integrates ContextIndicator into chat-input/index.tsx.
- useContextTracking hook in use-context-tracking.ts for real-time token usage tracking.
- Extends the ChatContext class in context.ts to include getContextTokenCount() and getContextSummary() methods.
- Adds ModelLimits and MODEL_LIMITS in packages/ai/src/tokens/index.ts for various models.
- getModelLimits() function to retrieve model limits.
- Adds the @ai-sdk/ui-utils dependency in packages/ui/package.json.

This description was created by
for 4d34d65. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
Improvements
Chores