
Conversation


@Kitenite Kitenite commented Sep 12, 2025

Description

Related Issues

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Release
  • Refactor
  • Other (please describe):

Testing

Screenshots (if applicable)

Additional Notes


Important

Enhances reasoning in chat messages, updates model configurations, and improves page scanning logic.

  • Behavior:
    • Updates getModelFromType in stream.ts to use OPENROUTER_MODELS.OPEN_AI_GPT_5 for default cases.
    • Modifies renderMessage in index.tsx to include index in key for message rendering.
    • Enhances MessageContent in index.tsx to process and render reasoning parts, removing '[REDACTED]'.
    • Updates PagesManager in pages/index.ts to scan pages only when indexing is complete.
  • Models:
    • Adds CLAUDE_3_5_HAIKU to OPENROUTER_MODELS in index.ts.
  • Misc:
    • Changes model in conversation.ts to CLAUDE_3_5_HAIKU for conversation title generation.
    • Updates model in suggestion.ts to OPEN_AI_GPT_5_NANO for suggestion generation.
    • Changes model in project.ts to OPEN_AI_GPT_5_NANO for project name generation.

This description was created by Ellipsis for 79479c0. You can customize this summary. It will automatically update as commits are pushed.

Summary by CodeRabbit

  • New Features

    • Reasoning sections in assistant messages are now displayed (previously hidden).
  • Bug Fixes

    • Prevented duplicate/placeholder streaming assistant messages from appearing in chat.
    • Pages panel now scans only after indexing completes, reducing unnecessary rescans and improving stability.
  • Chores

    • Updated AI model selections for chat, suggestions, and project naming to improve consistency and responsiveness.


vercel bot commented Sep 12, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| docs | Ready | Preview | Comment | Sep 12, 2025 0:15am |
| web | Error | — | — | Sep 12, 2025 0:15am |


supabase bot commented Sep 12, 2025

This pull request has been ignored for the connected project wowaemfasoptxrdjhilu because there are no changes detected in apps/backend/supabase directory. You can change this behaviour in Project Integrations Settings ↗︎.


Preview Branches by Supabase.
Learn more about Supabase Branching ↗︎.


coderabbitai bot commented Sep 12, 2025

Walkthrough

The PR updates model selections across several API routes and a stream helper, adds a new OpenRouter model enum, adjusts chat message rendering to handle streaming and reasoning parts, and changes a PagesManager reaction to trigger page scans based on sandbox indexing status.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Model selection updates**<br>`apps/web/client/src/app/api/chat/helperts/stream.ts`, `apps/web/client/src/server/api/routers/chat/conversation.ts`, `apps/web/client/src/server/api/routers/chat/suggestion.ts`, `apps/web/client/src/server/api/routers/project/project.ts`, `packages/models/src/llm/index.ts` | Default/create/fix model in the stream helper switched to OPEN_AI_GPT_5. Conversation title model changed to CLAUDE_3_5_HAIKU. Suggestions model changed from GPT-5 Mini to GPT-5 Nano. Project name model changed from Sonnet to GPT-5 Nano. Added CLAUDE_3_5_HAIKU enum to OpenRouter models. |
| **Chat UI rendering**<br>`apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx`, `.../chat-messages/message-content/index.tsx` | Messages list now computed via useMemo, excluding a streaming assistant message when waiting; renderMessage receives index and uses composite keys. Reasoning parts now render sanitized text (stripping "[REDACTED]") using Markdown instead of being skipped. |
| **PagesManager reaction**<br>`apps/web/client/src/components/store/editor/pages/index.ts` | Reaction now watches sandboxStatus (isIndexing, isIndexed) and triggers scanPages() only when indexed and not indexing, and the left panel tab is PAGES. |
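The reaction's gating condition can be expressed as a pure predicate. A minimal sketch, assuming the function name and the non-PAGES tab values, which are not from the PR:

```typescript
type LeftPanelTab = 'PAGES' | 'LAYERS' | 'IMAGES';

// Scan only when indexing has finished and the Pages tab is visible,
// mirroring the reaction's condition described above.
function shouldScanPages(
    isIndexed: boolean,
    isIndexing: boolean,
    activeTab: LeftPanelTab,
): boolean {
    return isIndexed && !isIndexing && activeTab === 'PAGES';
}
```

Keeping the condition as a pure function makes the reaction trivial to unit-test independently of MobX.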

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor User
  participant ChatUI as Chat Messages UI
  participant Engine as Engine Messages
  note over ChatUI: Compute messagesToRender
  User->>ChatUI: Open chat / view messages
  ChatUI->>Engine: Read engineMessages, isWaiting
  alt isWaiting and last UI msg is assistant
    ChatUI->>ChatUI: Exclude streaming assistant message
  else
    ChatUI->>ChatUI: Use engineMessages as-is
  end
  loop Render list
    ChatUI->>ChatUI: renderMessage(message, index) with key id-index
  end
  note over ChatUI: For content parts
  ChatUI->>ChatUI: If part.type == reasoning<br/>strip "[REDACTED]" and render if non-empty
```
```mermaid
sequenceDiagram
  autonumber
  participant Store as PagesManager
  participant Sandbox as Sandbox Status
  participant UI as Left Panel
  participant Scanner as scanPages()

  Sandbox-->>Store: isIndexing / isIndexed change
  Store->>Store: Reaction(sandboxStatus)
  Store->>UI: Check left panel tab
  alt tab == PAGES and isIndexed && !isIndexing
    Store->>Scanner: scanPages()
  else
    Store->>Store: No action
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Pre-merge checks (1 passed, 2 warnings)

❌ Failed checks (2 warnings)

  • Description Check (⚠️ Warning): The PR body contains the repository template, but the structured sections are not filled: Related Issues are not linked, the Type of Change checkboxes are all unchecked, and Testing steps/screenshots are missing despite the appended Ellipsis-generated summary describing the changes. Because the required template fields are incomplete, the description does not meet the repository's expected PR template. Resolution: populate the template before merging by adding a concise Description (or moving the Ellipsis summary into the Description section), linking any Related Issues (e.g., "closes #..."), marking the applicable Type of Change checkbox(es), and providing explicit Testing steps and screenshots or verification instructions so reviewers can reproduce and validate the changes.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (1 passed)

  • Title Check (✅ Passed): The title "feat: allow reasoning" is concise, follows the conventional "feat:" prefix, and directly reflects the PR's primary functional change of rendering reasoning parts in chat messages; it accurately summarizes the main developer-visible intent even though the changeset also includes model and scanning tweaks. The phrasing is clear and not misleading for someone scanning commit history.

Poem

A rabbit tapped keys with a gentle thrum,
Swapped out the models—hum hum hum.
Messages trimmed when streams run long,
Reasoning whispers now join the song.
Pages wait till the index is done—
Hop, review, merge—another one won. 🐇✨

✨ Finishing touches
  • 📝 Generate Docstrings

🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/reasoning-messages

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@Kitenite Kitenite changed the title from "feat: allow reasoning" to "feat: allow reasoning and fix some performance issue" on Sep 12, 2025
@Kitenite Kitenite merged commit 0908f7e into main Sep 12, 2025
5 of 7 checks passed
@Kitenite Kitenite deleted the feat/reasoning-messages branch September 12, 2025 00:21

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
apps/web/client/src/app/api/chat/helperts/stream.ts (1)

19-23: Consider cheaper default for ASK/EDIT or make default model configurable

ASK/EDIT often don't need full GPT‑5; using the MINI variant can cut latency and cost. Alternatively, read the default model from an env var or feature flag.

Example:

```diff
-            model = await initModel({
-                provider: LLMProvider.OPENROUTER,
-                model: OPENROUTER_MODELS.OPEN_AI_GPT_5,
-            });
+            model = await initModel({
+                provider: LLMProvider.OPENROUTER,
+                model: OPENROUTER_MODELS.OPEN_AI_GPT_5_MINI,
+            });
```
apps/web/client/src/components/store/editor/pages/index.ts (1)

32-44: Guard optional sandbox and store disposer to prevent leaks

  • activeSandbox can be undefined; optional-chain the flags.
  • Keep the reaction disposer and clear it when tearing down.

Within this block:

```diff
-            () => {
-                return {
-                    isIndexing: this.editorEngine.activeSandbox.isIndexing,
-                    isIndexed: this.editorEngine.activeSandbox.isIndexed,
-                };
-            },
+            () => ({
+                isIndexing: this.editorEngine.activeSandbox?.isIndexing ?? false,
+                isIndexed: this.editorEngine.activeSandbox?.isIndexed ?? false,
+            }),
```

Also assign the disposer:

```diff
-        reaction(
+        this.disposer = reaction(
```

Outside this range (class members):

```ts
// add field
private disposer?: import('mobx').IReactionDisposer;

// dispose when appropriate (e.g., in clear() or a new destroy())
this.disposer?.();
this.disposer = undefined;
```
apps/web/client/src/server/api/routers/chat/suggestion.ts (1)

24-25: Tune for cost and consistency: include providerOptions; reduce max tokens

Pass providerOptions like the other routes do, and lower the excessive 10,000-token cap, which is far more than simple suggestions need.

```diff
-            const { model, headers } = await initModel({
+            const { model, headers, providerOptions } = await initModel({
                 provider: LLMProvider.OPENROUTER,
                 model: OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO,
             });
```

And later:

```diff
-                maxOutputTokens: 10000,
+                providerOptions,
+                maxOutputTokens: 512,
```
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1)

53-69: Sanitize reasoning more robustly and avoid redundant child key

  • Replace all “[REDACTED]” occurrences and trim.
  • Use type="reasoning" for semantics.
  • Don’t key the inner MarkdownRenderer; the container key suffices.
```diff
-                const processedText = part.text.replace('[REDACTED]', '');
-                if (processedText === '') {
+                const processedText = part.text.replaceAll('[REDACTED]', '').trim();
+                if (processedText.length === 0) {
                     return null;
                 }
                 return (
                     <div key={`reasoning-${idx}`} className="my-2 px-3 py-2 border-l-1 max-h-32 overflow-y-auto">
                         <MarkdownRenderer
                             messageId={messageId}
-                            type="text"
-                            key={processedText}
+                            type="reasoning"
                             content={processedText}
                             applied={applied}
                             isStream={isStream}
                             className="text-xs text-foreground-secondary p-0 m-0"
                         />
                     </div>
                 );
```
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (1)

39-49: Prefer enum over string literal for role check

Avoid typos and keep type safety.

```diff
-        const streamingAssistantId = isWaiting && lastUiMessage?.role === 'assistant' ? lastUiMessage.id : undefined;
+        const streamingAssistantId =
+            isWaiting && lastUiMessage?.role === ChatMessageRole.ASSISTANT
+                ? lastUiMessage.id
+                : undefined;
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5831d5d and 79479c0.

📒 Files selected for processing (8)
  • apps/web/client/src/app/api/chat/helperts/stream.ts (1 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (3 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1 hunks)
  • apps/web/client/src/components/store/editor/pages/index.ts (1 hunks)
  • apps/web/client/src/server/api/routers/chat/conversation.ts (1 hunks)
  • apps/web/client/src/server/api/routers/chat/suggestion.ts (1 hunks)
  • apps/web/client/src/server/api/routers/project/project.ts (1 hunks)
  • packages/models/src/llm/index.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.tsx

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.tsx: In React providers managing MobX stores, create the observable store with useState(() => new Store()) to ensure a stable instance
Keep a ref (e.g., storeRef.current) to the MobX store to avoid stale closures in effects
Use setTimeout(() => store.clear(), 0) for delayed cleanup of MobX stores to avoid race conditions
Separate project changes from branch updates by using proper effect dependency arrays
Do not use useMemo to hold MobX observable instances; React may drop memoized values causing data loss
Do not clean up MobX stores synchronously during navigation; perform delayed cleanup instead
Do not include the MobX store instance in effect dependency arrays when it causes infinite loops

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx
🧠 Learnings (1)
📚 Learning: 2025-09-07T23:36:29.687Z
Learnt from: CR
PR: onlook-dev/onlook#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-07T23:36:29.687Z
Learning: Applies to **/*.tsx : Do not clean up MobX stores synchronously during navigation; perform delayed cleanup instead

Applied to files:

  • apps/web/client/src/components/store/editor/pages/index.ts
🧬 Code graph analysis (2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/markdown-renderer.tsx (1)
  • MarkdownRenderer (5-48)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (1)
packages/models/src/chat/message/message.ts (1)
  • ChatMessage (34-34)
🔇 Additional comments (4)
packages/models/src/llm/index.ts (1)

14-17: Add MODEL_MAX_TOKENS entry for OPENROUTER_MODELS.CLAUDE_3_5_HAIKU

OPENROUTER_MODELS.CLAUDE_3_5_HAIKU is missing from MODEL_MAX_TOKENS in packages/models/src/llm/index.ts — lookups will return undefined.

Apply:

```diff
 export const MODEL_MAX_TOKENS = {
     [OPENROUTER_MODELS.CLAUDE_4_SONNET]: 200000,
+    [OPENROUTER_MODELS.CLAUDE_3_5_HAIKU]: 200000,
     [OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO]: 400000,
     [OPENROUTER_MODELS.OPEN_AI_GPT_5_MINI]: 400000,
     [OPENROUTER_MODELS.OPEN_AI_GPT_5]: 400000,
     [ANTHROPIC_MODELS.SONNET_4]: 200000,
     [ANTHROPIC_MODELS.HAIKU]: 200000,
 } as const;
```
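A minimal illustration of the undefined-lookup risk the reviewer flags. The model slugs and the fallback helper below are assumptions for the sketch, not code from the repo:

```typescript
// Hypothetical slice of the token-cap map; one entry deliberately omitted
// to mirror the missing CLAUDE_3_5_HAIKU case described above.
const MODEL_MAX_TOKENS: Record<string, number> = {
    'anthropic/claude-sonnet-4': 200000,
    // 'anthropic/claude-3.5-haiku' is absent, so a raw lookup yields undefined.
};

// Fall back to a conservative default instead of letting an undefined
// cap leak into downstream token-budget arithmetic.
function maxTokensFor(model: string, fallback = 8192): number {
    return MODEL_MAX_TOKENS[model] ?? fallback;
}
```

A defensive fallback like this would mask the bug rather than fix it; adding the map entry, as the review suggests, is the real remedy.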
apps/web/client/src/server/api/routers/project/project.ts (1)

326-327: LGTM: switch to GPT‑5 Nano for naming

Appropriate trade-off for short prompts; telemetry preserved.

apps/web/client/src/server/api/routers/chat/conversation.ts (1)

71-74: Verify token-cap mapping exists for CLAUDE_3_5_HAIKU via OpenRouter

initModel may consult MODEL_MAX_TOKENS; add the missing OPENROUTER_MODELS.CLAUDE_3_5_HAIKU entry to avoid undefined caps.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/index.tsx (1)

23-36: Keys look good; uniqueness preserved with id-index combo

No action needed.

Also applies to: 74-75

@coderabbitai coderabbitai bot mentioned this pull request Oct 21, 2025
5 tasks