feat: allow reasoning and fix some performance issue #2828
Conversation
This pull request has been ignored for the connected project Preview Branches by Supabase.
Walkthrough

The PR updates model selections across several API routes and a stream helper, adds a new OpenRouter model enum, adjusts chat message rendering to handle streaming and reasoning parts, and changes a PagesManager reaction to trigger page scans based on sandbox indexing status.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant ChatUI as Chat Messages UI
    participant Engine as Engine Messages
    note over ChatUI: Compute messagesToRender
    User->>ChatUI: Open chat / view messages
    ChatUI->>Engine: Read engineMessages, isWaiting
    alt isWaiting and last UI msg is assistant
        ChatUI->>ChatUI: Exclude streaming assistant message
    else
        ChatUI->>ChatUI: Use engineMessages as-is
    end
    loop Render list
        ChatUI->>ChatUI: renderMessage(message, index) with key id-index
    end
    note over ChatUI: For content parts
    ChatUI->>ChatUI: If part.type == reasoning<br/>strip "[REDACTED]" and render if non-empty
```
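Expressed in code, the exclusion step in the first diagram might look roughly like the following; `engineMessages`, `isWaiting`, `renderMessage`, and `ChatMessageRole` are names taken from the diagram and the review comments below, not verified against the actual component:

```ts
// Sketch only: hide the assistant message that is still streaming, since the
// dedicated streaming UI renders it separately. All identifiers here mirror
// the review comments and may differ from the real component.
const lastUiMessage = engineMessages.at(-1);
const streamingAssistantId =
    isWaiting && lastUiMessage?.role === ChatMessageRole.ASSISTANT
        ? lastUiMessage.id
        : undefined;
const messagesToRender = streamingAssistantId
    ? engineMessages.filter((m) => m.id !== streamingAssistantId)
    : engineMessages;

// Render with an id-index compound key so repeated ids stay unique.
messagesToRender.map((message, index) => renderMessage(message, index));
```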
```mermaid
sequenceDiagram
    autonumber
    participant Store as PagesManager
    participant Sandbox as Sandbox Status
    participant UI as Left Panel
    participant Scanner as scanPages()
    Sandbox-->>Store: isIndexing / isIndexed change
    Store->>Store: Reaction(sandboxStatus)
    Store->>UI: Check left panel tab
    alt tab == PAGES and isIndexed && !isIndexing
        Store->>Scanner: scanPages()
    else
        Store->>Store: No action
    end
```
Estimated code review effort
🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks (1 passed, 2 warnings)
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (5)
apps/web/client/src/app/api/chat/helperts/stream.ts (1)
19-23: Consider cheaper default for ASK/EDIT or make default model configurable

ASK/EDIT often don’t need full GPT‑5; using MINI can cut latency/cost. Alternatively read a default from env/feature flag.
Example:
```diff
- model = await initModel({
-     provider: LLMProvider.OPENROUTER,
-     model: OPENROUTER_MODELS.OPEN_AI_GPT_5,
- });
+ model = await initModel({
+     provider: LLMProvider.OPENROUTER,
+     model: OPENROUTER_MODELS.OPEN_AI_GPT_5_MINI,
+ });
```
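If the env/feature-flag route is preferred instead, a minimal sketch could look like this; the variable name `ONLOOK_DEFAULT_CHAT_MODEL` is a hypothetical placeholder, not an existing setting:

```ts
// Hypothetical env-driven default model; the env var name is illustrative.
const DEFAULT_CHAT_MODEL =
    (process.env.ONLOOK_DEFAULT_CHAT_MODEL as OPENROUTER_MODELS | undefined) ??
    OPENROUTER_MODELS.OPEN_AI_GPT_5_MINI;

model = await initModel({
    provider: LLMProvider.OPENROUTER,
    model: DEFAULT_CHAT_MODEL,
});
```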
apps/web/client/src/components/store/editor/pages/index.ts (1)

32-44: Guard optional sandbox and store disposer to prevent leaks
- activeSandbox can be undefined; optional-chain the flags.
- Keep the reaction disposer and clear it when tearing down.
Within this block:
```diff
- () => {
-     return {
-         isIndexing: this.editorEngine.activeSandbox.isIndexing,
-         isIndexed: this.editorEngine.activeSandbox.isIndexed,
-     };
- },
+ () => ({
+     isIndexing: this.editorEngine.activeSandbox?.isIndexing ?? false,
+     isIndexed: this.editorEngine.activeSandbox?.isIndexed ?? false,
+ }),
```

Also assign the disposer:
```diff
- reaction(
+ this.disposer = reaction(
```

Outside this range (class members):
```ts
// add field
private disposer?: import('mobx').IReactionDisposer;

// dispose when appropriate (e.g., in clear() or a new destroy())
this.disposer?.();
this.disposer = undefined;
```
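Assembled in one place, a sketch of how the guarded reaction and disposer could fit together; the `editorEngine` shape, the `'pages'` tab value, and `scanPages` are assumptions drawn from the diagram and review, not the actual class:

```ts
import { reaction, type IReactionDisposer } from 'mobx';

// Sketch under assumed types; the real PagesManager will differ in detail.
class PagesManagerSketch {
    private disposer?: IReactionDisposer;

    constructor(
        private readonly editorEngine: {
            activeSandbox?: { isIndexing: boolean; isIndexed: boolean };
            state: { leftPanelTab: string };
        },
    ) {
        this.disposer = reaction(
            () => ({
                isIndexing: this.editorEngine.activeSandbox?.isIndexing ?? false,
                isIndexed: this.editorEngine.activeSandbox?.isIndexed ?? false,
            }),
            ({ isIndexing, isIndexed }) => {
                // Scan only once indexing has finished and the Pages tab is open.
                if (this.editorEngine.state.leftPanelTab === 'pages' && isIndexed && !isIndexing) {
                    void this.scanPages();
                }
            },
        );
    }

    private async scanPages(): Promise<void> {
        // ...scan logic elided...
    }

    // Call from clear()/destroy() so the reaction does not leak.
    dispose(): void {
        this.disposer?.();
        this.disposer = undefined;
    }
}
```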
apps/web/client/src/server/api/routers/chat/suggestion.ts (1)

24-25: Tune for cost and consistency: include providerOptions; reduce max tokens

Pass providerOptions like other routes, and lower an excessive 10k token cap for simple suggestions.
```diff
- const { model, headers } = await initModel({
+ const { model, headers, providerOptions } = await initModel({
      provider: LLMProvider.OPENROUTER,
      model: OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO,
  });
```

And later:
```diff
- maxOutputTokens: 10000,
+ providerOptions,
+ maxOutputTokens: 512,
```
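For context, roughly how the call site might read with both tweaks applied; the `generateText` helper and `suggestionPrompt` are stand-ins for whatever the route actually uses:

```ts
import { generateText } from 'ai';

// Sketch assuming the route uses the AI SDK's generateText;
// `suggestionPrompt` is a placeholder for the real prompt.
const { model, headers, providerOptions } = await initModel({
    provider: LLMProvider.OPENROUTER,
    model: OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO,
});

const { text } = await generateText({
    model,
    headers,
    providerOptions,
    maxOutputTokens: 512, // suggestions are short; 10k was far more than needed
    prompt: suggestionPrompt,
});
```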
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1)

53-69: Sanitize reasoning more robustly and avoid redundant child key
- Replace all “[REDACTED]” occurrences and trim.
- Use type="reasoning" for semantics.
- Don’t key the inner MarkdownRenderer; the container key suffices.
```diff
- const processedText = part.text.replace('[REDACTED]', '');
- if (processedText === '') {
+ const processedText = part.text.replaceAll('[REDACTED]', '').trim();
+ if (processedText.length === 0) {
      return null;
  }
  return (
      <div key={`reasoning-${idx}`} className="my-2 px-3 py-2 border-l-1 max-h-32 overflow-y-auto">
          <MarkdownRenderer
              messageId={messageId}
-             type="text"
-             key={processedText}
+             type="reasoning"
              content={processedText}
              applied={applied}
              isStream={isStream}
              className="text-xs text-foreground-secondary p-0 m-0"
          />
      </div>
  );
```
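The suggested sanitization as a small pure helper; the name `sanitizeReasoning` is illustrative, not from the codebase:

```ts
// Strip every redaction marker and surrounding whitespace before rendering.
function sanitizeReasoning(text: string): string {
    return text.replaceAll('[REDACTED]', '').trim();
}

sanitizeReasoning('[REDACTED] weighing two layout options [REDACTED]');
// => 'weighing two layout options'
```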
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (1)

39-49: Prefer enum over string literal for role check

Avoid typos and keep type safety.
```diff
- const streamingAssistantId = isWaiting && lastUiMessage?.role === 'assistant' ? lastUiMessage.id : undefined;
+ const streamingAssistantId =
+     isWaiting && lastUiMessage?.role === ChatMessageRole.ASSISTANT
+         ? lastUiMessage.id
+         : undefined;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- apps/web/client/src/app/api/chat/helperts/stream.ts (1 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (3 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1 hunks)
- apps/web/client/src/components/store/editor/pages/index.ts (1 hunks)
- apps/web/client/src/server/api/routers/chat/conversation.ts (1 hunks)
- apps/web/client/src/server/api/routers/chat/suggestion.ts (1 hunks)
- apps/web/client/src/server/api/routers/project/project.ts (1 hunks)
- packages/models/src/llm/index.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.tsx:
- In React providers managing MobX stores, create the observable store with useState(() => new Store()) to ensure a stable instance
- Keep a ref (e.g., storeRef.current) to the MobX store to avoid stale closures in effects
- Use setTimeout(() => store.clear(), 0) for delayed cleanup of MobX stores to avoid race conditions
- Separate project changes from branch updates by using proper effect dependency arrays
- Do not use useMemo to hold MobX observable instances; React may drop memoized values, causing data loss
- Do not clean up MobX stores synchronously during navigation; perform delayed cleanup instead
- Do not include the MobX store instance in effect dependency arrays when it causes infinite loops (a provider sketch follows below)
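A minimal provider sketch following these guidelines; `EditorStore` and `EditorContext` are stand-ins, not exports from this repo:

```tsx
import { createContext, useEffect, useRef, useState, type ReactNode } from 'react';

// Placeholder store; the real store would hold MobX observables.
class EditorStore {
    clear() {
        /* dispose reactions, reset state */
    }
}

const EditorContext = createContext<EditorStore | null>(null);

function EditorProvider({ children }: { children: ReactNode }) {
    // useState's initializer runs once, giving a stable store instance
    // (useMemo could be dropped by React, losing data).
    const [store] = useState(() => new EditorStore());

    // A ref avoids stale closures over the store inside effects.
    const storeRef = useRef(store);
    storeRef.current = store;

    useEffect(() => {
        return () => {
            // Delayed cleanup avoids races when unmounting during navigation.
            setTimeout(() => storeRef.current.clear(), 0);
        };
    }, []);

    return <EditorContext.Provider value={store}>{children}</EditorContext.Provider>;
}
```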
Files:
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx
🧠 Learnings (1)
📚 Learning: 2025-09-07T23:36:29.687Z
Learnt from: CR
PR: onlook-dev/onlook#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-07T23:36:29.687Z
Learning: Applies to **/*.tsx : Do not clean up MobX stores synchronously during navigation; perform delayed cleanup instead
Applied to files:
apps/web/client/src/components/store/editor/pages/index.ts
🧬 Code graph analysis (2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/markdown-renderer.tsx (1)
MarkdownRenderer(5-48)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (1)
packages/models/src/chat/message/message.ts (1)
ChatMessage(34-34)
🔇 Additional comments (4)
packages/models/src/llm/index.ts (1)
14-17: Add MODEL_MAX_TOKENS entry for OPENROUTER_MODELS.CLAUDE_3_5_HAIKU

OPENROUTER_MODELS.CLAUDE_3_5_HAIKU is missing from MODEL_MAX_TOKENS in packages/models/src/llm/index.ts — lookups will return undefined.
Apply:
```diff
  export const MODEL_MAX_TOKENS = {
      [OPENROUTER_MODELS.CLAUDE_4_SONNET]: 200000,
+     [OPENROUTER_MODELS.CLAUDE_3_5_HAIKU]: 200000,
      [OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO]: 400000,
      [OPENROUTER_MODELS.OPEN_AI_GPT_5_MINI]: 400000,
      [OPENROUTER_MODELS.OPEN_AI_GPT_5]: 400000,
      [ANTHROPIC_MODELS.SONNET_4]: 200000,
      [ANTHROPIC_MODELS.HAIKU]: 200000,
  } as const;
```
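A defensive lookup illustrating the failure mode; the helper name and fallback constant are assumptions, not project policy:

```ts
// Without the CLAUDE_3_5_HAIKU entry, the indexed lookup yields undefined
// and the fallback kicks in; with the entry it returns 200000.
const FALLBACK_MAX_TOKENS = 200_000;

function maxTokensFor(model: string): number {
    return (MODEL_MAX_TOKENS as Record<string, number | undefined>)[model]
        ?? FALLBACK_MAX_TOKENS;
}
```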
apps/web/client/src/server/api/routers/project/project.ts (1)

326-327: LGTM: switch to GPT‑5 Nano for naming

Appropriate trade-off for short prompts; telemetry preserved.
apps/web/client/src/server/api/routers/chat/conversation.ts (1)
71-74: Verify token-cap mapping exists for CLAUDE_3_5_HAIKU via OpenRouter

initModel may consult MODEL_MAX_TOKENS; add the missing OPENROUTER_MODELS.CLAUDE_3_5_HAIKU entry to avoid undefined caps.
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (1)
23-36: Keys look good; uniqueness preserved with id-index combo

No action needed.
Also applies to: 74-75
Description
Related Issues
Type of Change
Testing
Screenshots (if applicable)
Additional Notes
Important
Enhances reasoning in chat messages, updates model configurations, and improves page scanning logic.
- Update `getModelFromType` in `stream.ts` to use `OPENROUTER_MODELS.OPEN_AI_GPT_5` for default cases.
- Update `renderMessage` in `index.tsx` to include `index` in the key for message rendering.
- Update `MessageContent` in `index.tsx` to process and render reasoning parts, removing '[REDACTED]'.
- Update `PagesManager` in `pages/index.ts` to scan pages only when indexing is complete.
- Add `CLAUDE_3_5_HAIKU` to `OPENROUTER_MODELS` in `index.ts`.
- Switch `conversation.ts` to `CLAUDE_3_5_HAIKU` for conversation title generation.
- Switch `suggestion.ts` to `OPEN_AI_GPT_5_NANO` for suggestion generation.
- Switch `project.ts` to `OPEN_AI_GPT_5_NANO` for project name generation.

This description was created automatically for 79479c0 and will update as commits are pushed.
Summary by CodeRabbit

New Features
- Chat messages now render model reasoning content during streaming.

Bug Fixes
- Page scans trigger only after sandbox indexing completes.

Chores
- Updated model selections across chat, suggestion, conversation, and project routes.