
Conversation


@spartan-vutrannguyen spartan-vutrannguyen commented Aug 21, 2025

Description

Related Issues

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Release
  • Refactor
  • Other (please describe):

Testing

Screenshots (if applicable)

Additional Notes


Important

Upgrade project dependencies to version 5, refactor chat message handling, and align schemas and types for consistency.

  • Dependencies:
    • Upgraded ai to version 5.0.0 in package.json files of apps/web/client, packages/ai, packages/models, and packages/ui.
    • Upgraded zod to version 4.0.17 in package.json files of apps/web/client, apps/web/server, packages/ai, packages/models, and packages/ui.
    • Upgraded @ai-sdk/react to version 2.0.0 in apps/web/client/package.json.
    • Upgraded @ai-sdk/anthropic, @ai-sdk/google, and @ai-sdk/openai to version 2.0.0 in packages/ai/package.json.
  • Refactor:
    • Refactored chat message handling to use UIMessage and UIMessagePart in packages/ai/src/chat/providers.ts and packages/ai/src/prompt/provider.ts.
    • Replaced parameters with inputSchema in tool definitions in packages/ai/src/tools/cli.ts, packages/ai/src/tools/plan.ts, and packages/ai/src/tools/web.ts.
    • Updated ChatMessageContent to extend MastraMessageContentV3 in packages/models/src/chat/message/message.ts.
  • Schema and Type Alignment:
    • Aligned message types and tool schemas across app and AI layer for consistency in apps/web/client/src/app/api/chat/route.ts and packages/db/src/schema/project/chat/message.ts.
    • Updated format in ChatMessageContent to 3 in packages/db/src/dto/message.ts and packages/db/src/schema/project/chat/message.ts.

This description was created by Ellipsis for 9e48f72. You can customize this summary. It will automatically update as commits are pushed.


Summary by CodeRabbit

  • New Features

    • Mode-aware tools with clear toasts when a tool isn’t available in the current chat mode.
    • Chat now distinguishes “Thinking…” vs “Introspecting…” states during streaming.
  • Improvements

    • Smoother, more reliable chat streaming with clearer error messages and usage limit handling.
    • Tool call responses render more consistently, with broader tool-part support.
    • Chat mode is managed centrally in the editor and easier to toggle.
    • Conversation suggestions are more structured and consistent.
  • Bug Fixes

    • Safer handling of optional data (e.g., screenshots, metadata) to prevent runtime errors.
  • Chores

    • Preserve environment variables across deploy, cancel, and unpublish flows.


vercel bot commented Aug 21, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project   Deployment   Preview   Comments   Updated (UTC)
docs      Ready        Preview   Comment    Aug 27, 2025 6:05pm
web       Ready        Preview   Comment    Aug 27, 2025 6:05pm

@spartan-vutrannguyen spartan-vutrannguyen marked this pull request as draft August 21, 2025 07:48

supabase bot commented Aug 21, 2025

This pull request has been ignored for the connected project wowaemfasoptxrdjhilu because there are no changes detected in apps/backend/supabase directory. You can change this behaviour in Project Integrations Settings ↗︎.


Preview Branches by Supabase.
Learn more about Supabase Branching ↗︎.


coderabbitai bot commented Aug 21, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Caution

Review failed

The pull request is closed.

Walkthrough

This PR upgrades AI SDK and zod, removes Mastra, migrates message types from Message/content to UIMessage/parts, renames tool schemas from parameters to inputSchema, refactors chat streaming and hooks to new APIs, adjusts server routes, updates tokens/tests, adds publish envVars propagation, and revises models/DB types.
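The Message/content to UIMessage/parts migration described above can be sketched as a shape change (the field names mirror the walkthrough, but the types here are illustrative, not the full SDK types):

```typescript
// v4 messages carried a string `content`; v5 UIMessages carry a top-level
// `parts` array. A minimal text-only conversion:
type MessageV4 = { id: string; role: 'user' | 'assistant'; content: string };
type UIMessageV5 = {
    id: string;
    role: 'user' | 'assistant';
    parts: Array<{ type: 'text'; text: string }>;
};

const toUIMessage = (message: MessageV4): UIMessageV5 => ({
    id: message.id,
    role: message.role,
    // The old string content becomes a single text part.
    parts: [{ type: 'text', text: message.content }],
});

console.log(toUIMessage({ id: '1', role: 'user', content: 'hi' }).parts[0]?.text); // hi
```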

Changes

Cohort / File(s) Change Summary
Dependencies
apps/web/client/package.json, apps/web/server/package.json, packages/ai/package.json, packages/models/package.json, packages/ui/package.json, packages/db/package.json
Upgrade ai to 5.x and zod to 4.x; add @ai-sdk/provider-utils; add @onlook/rpc; remove Mastra deps; pin versions as noted.
Remove Mastra
apps/web/client/src/mastra/index.ts, .../mastra/agents/index.ts, .../mastra/memory/index.ts, .../mastra/storage/index.ts
Delete Mastra initialization, agent, memory, and storage modules; remove exported symbols.
Message model migration (UIMessage/parts)
packages/models/src/chat/message/message.ts, packages/db/src/dto/message.ts, packages/ai/src/stream/index.ts, packages/ai/src/prompt/provider.ts, packages/ai/src/tokens/index.ts, packages/models/src/chat/request.ts, packages/db/src/schema/project/chat/message.ts, packages/ai/test/*, apps/web/client/src/components/store/editor/chat/*, apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/*, apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx
Replace Message/content with UIMessage/parts across types, DTOs, stream conversion, prompt hydration, token counting, tests, store, and UI. Update APIs and guards accordingly.
Tools API: parameters → inputSchema; ToolCall typing
packages/ai/src/tools/*, packages/ai/test/tools/web-search.test.ts, apps/web/client/src/components/tools/tools.ts, apps/web/client/src/app/api/chat/helperts/stream.ts
Rename tool config field parameters→inputSchema; switch ToolCall type source to @ai-sdk/provider-utils; use toolCall.input; add tool availability by mode; add runtime schema checks and repair flow updates.
Chat streaming route refactor
apps/web/client/src/app/api/chat/route.ts
Switch to UIMessage, convertToModelMessages, streamText with stopWhen(stepCountIs); restructure model retrieval; update response via toUIMessageStreamResponse; revise error handling and headers.
Client chat hook/transport
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx
Introduce DefaultChatTransport to /api/chat; expose sendMessageToChat; handle onToolCall with addToolResult; rework onFinish/onError; map messages to/from Vercel; derive isWaiting; use UIMessage generics.
UI: chat components alignment
.../chat-input/index.tsx, .../chat-messages/index.tsx, .../assistant-message.tsx, .../user-message.tsx, .../message-content/index.tsx, .../message-content/tool-call-display.tsx, .../message-content/tool-call-simple.tsx, .../stream-message.tsx, .../overlay/elements/buttons/chat.tsx, .../right-panel/chat-tab/error.tsx
Use sendMessageToChat; read top-level parts/metadata; broaden tool part detection to type starting with 'tool-'; adapt ToolUIPart input/output; memoize rendering; add introspecting UI branch.
Models and enums
packages/models/src/llm/index.ts, packages/ai/src/chat/providers.ts
Replace LanguageModelV1 with LanguageModel; rename maxTokens→maxOutputTokens; add OPEN_AI_GPT_5_MINI and token caps; provider init adjustments.
Server routers (tokens/options/model)
apps/web/client/src/server/api/routers/chat/conversation.ts, .../project/project.ts, .../chat/suggestion.ts
Use maxOutputTokens; change suggestion model to OPEN_AI_GPT_5_MINI; include headers; update prompt; type-only role import.
Publish env vars propagation
apps/web/client/src/server/api/routers/publish/{deployment.ts,helpers/publish.ts,helpers/unpublish.ts,manager.ts}
Add envVars to publish manager input and all deployment updates; preserve envVars on success/fail/cancel; thread envVars through helper flows.
Editor/store/context tweaks
apps/web/client/src/components/store/editor/chat/{conversation.ts,index.ts,message.ts,suggestions.ts,context.ts}, .../state/index.ts
Use top-level parts/metadata; handle vercelId replacement; wrap conversation updates; add chatMode to state; minor await removal; logging added.
Tool repair flow
apps/web/client/src/app/api/chat/helperts/stream.ts
Import ToolCall from provider-utils; enforce inputSchema presence; change repair model to OPEN_AI_GPT_5_NANO; use generateObject with tool.inputSchema; return shape {type:'tool-call', toolCallId, toolName, input}.
DB schemas and DTOs
packages/db/src/dto/conversation.ts, packages/db/src/schema/feedback/feedback.ts
Remove resourceId alias; map projectId directly; feedback schema: pageUrl uses z.url; metadata keys typed as string.
Tests alignment
packages/ai/test/{stream/convert.test.ts,prompt/prompt.test.ts,tokens.test.ts}
Update to parts-based messages, tool- prefixed parts, new hydrator signature, and token logic expectations.
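The route refactor above swaps a fixed step cap for v5's stopWhen(stepCountIs(n)). A minimal re-implementation of the predicate's semantics (the names mirror the AI SDK v5 API, but this is an illustrative sketch, not the library code):

```typescript
// stopWhen receives stop conditions; stepCountIs(n) stops the multi-step
// tool-call loop once n steps have completed.
type StopCondition = (options: { steps: unknown[] }) => boolean;

const stepCountIs = (n: number): StopCondition =>
    ({ steps }) => steps.length >= n;

// The streaming loop continues until some condition returns true:
const shouldStop = (conditions: StopCondition[], steps: unknown[]): boolean =>
    conditions.some((condition) => condition({ steps }));

console.log(shouldStop([stepCountIs(3)], [{}, {}]));     // false: keep streaming
console.log(shouldStop([stepCountIs(3)], [{}, {}, {}])); // true: stop
```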

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant UI as Client UI
  participant Hook as useChat (DefaultChatTransport)
  participant API as POST /api/chat
  participant LLM as Model (LanguageModel)
  participant Tools as Tool Handlers

  UI->>Hook: sendMessageToChat(type, uiMessages)
  Hook->>Hook: toVercelMessageFromOnlook(messages)
  Hook->>API: streamText({ messages: convertToModelMessages(...) })
  API->>LLM: streamText(model, stopWhen(stepCountIs(MAX_STEPS)))
  LLM-->>API: stream events (assistant parts/tool- parts)
  API-->>Hook: UI message stream (toUIMessageStreamResponse)
  Hook->>Hook: onToolCall(part)
  Hook->>Tools: handleToolCall(toolName, input)
  Tools-->>Hook: addToolResult(output)
  Hook->>LLM: (continues/resumes stream)
  LLM-->>Hook: final assistant message
  Hook->>UI: onFinish({ message, metadata })
  note over Hook,UI: Updates conversation, suggestions, clears state
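The "tool- parts" streamed back in the diagram are what the UI components guard on (the walkthrough notes tool part detection was broadened to any type starting with 'tool-'). A sketch of that guard over the parts-based message shape; the concrete part types are illustrative:

```typescript
// v5 tool parts are typed 'tool-<toolName>', so detection checks the prefix
// instead of matching a fixed list of part types.
type UIMessagePart = { type: string } & Record<string, unknown>;

const isToolPart = (part: UIMessagePart): boolean =>
    part.type.startsWith('tool-');

const parts: UIMessagePart[] = [
    { type: 'text', text: 'Listing files…' },
    { type: 'tool-cli', input: { command: 'ls' } },              // hypothetical tool part
    { type: 'tool-web', input: { url: 'https://example.com' } }, // hypothetical tool part
];
console.log(parts.filter(isToolPart).map((part) => part.type)); // only the tool- parts
```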
sequenceDiagram
  autonumber
  participant API as /api/chat/helperts/stream.repairToolCall
  participant Tools as tools[inputSchema]
  participant LLM as initModel(OPEN_AI_GPT_5_NANO)

  API->>Tools: resolve tool by toolName
  alt missing inputSchema
    API-->>API: throw invalid-parameter
  else valid
    API->>LLM: generateObject(schema = tool.inputSchema, input = toolCall.input)
    LLM-->>API: repaired input
    API-->>API: return { type:'tool-call', toolCallId, toolName, input }
  end
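The repair branch in the second diagram can be sketched as follows. The real flow calls generateObject with tool.inputSchema to produce the repaired input; a pass-through stands in here, and the names are taken from the diagram:

```typescript
// Return shape described in the diagram: { type, toolCallId, toolName, input }.
type RepairedToolCall = {
    type: 'tool-call';
    toolCallId: string;
    toolName: string;
    input: unknown;
};

// Minimal registry standing in for the real tools map keyed by toolName.
const tools: Record<string, { inputSchema?: unknown }> = {
    cli: { inputSchema: { command: 'string' } }, // a zod schema in the real code
    broken: {},                                  // missing inputSchema
};

function repairToolCall(toolCallId: string, toolName: string, rawInput: unknown): RepairedToolCall {
    const tool = tools[toolName];
    if (!tool?.inputSchema) {
        // Mirrors the diagram's invalid-parameter branch.
        throw new Error(`invalid-parameter: no inputSchema for tool "${toolName}"`);
    }
    // Real flow: generateObject({ schema: tool.inputSchema, ... }) repairs rawInput.
    return { type: 'tool-call', toolCallId, toolName, input: rawInput };
}

console.log(repairToolCall('call_1', 'cli', { command: 'ls' }).type); // tool-call
```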

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Poem

Thump-thump, my paws tap code so neat,
Messages now in parts—what a treat!
Tools wear new coats, inputSchema chic,
Streams flow true, no longer weak.
Mastra hops away; I stash env vars tight—
Carrots compiled, we ship tonight! 🥕✨


📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 29c17e5 and 649bd50.

📒 Files selected for processing (6)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (3 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-display.tsx (7 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx (1 hunks)
  • apps/web/client/src/components/store/editor/chat/conversation.ts (5 hunks)
  • apps/web/client/src/components/tools/tools.ts (4 hunks)
  • packages/models/src/llm/index.ts (3 hunks)


...message,
parts: message.content.parts,
-content: messageContent,
+// content: messageContent,

In toVercelMessageFromOnlook, the 'content' field for assistant messages is commented out. Verify that with the new message structure the 'parts' field is sufficient.

import { useEditorEngine } from '@/components/store/editor';
import { handleToolCall } from '@/components/tools';
-import { useChat, type UseChatHelpers } from '@ai-sdk/react';
+import { useChat, type UseChatHelpers} from '@ai-sdk/react';

There's a minor spacing inconsistency in the import statement on line 5: import { useChat, type UseChatHelpers} from '@ai-sdk/react';. It would be clearer and more consistent to include a space before the closing brace, e.g. type UseChatHelpers }. Consider fixing this typographical error.

@@ -889,7 +889,7 @@ var Ec = class extends Error {
if (e) return;
g.push(yc(l, i, r, n));
let { remoteProxy: G, destroy: L } = Cc(l, r, n);
-g.push(L), clearTimeout(u), (e = !0), c({ remoteProxy: G, destroy: X });
+(g.push(L), clearTimeout(u), (e = !0), c({ remoteProxy: G, destroy: X }));

Typographical suggestion: In the function call to c(), the object property is set as 'destroy: X', but the variable destructured earlier is 'L'. Consider verifying if 'X' is the correct variable or if it should be renamed to 'L' for consistency.

@@ -10767,7 +10782,7 @@ class E {
return this._refinement(l);
}
constructor(l) {
-(this.spa = this.safeParseAsync),
+((this.spa = this.safeParseAsync),

Typo: It looks like there's an extra opening parenthesis in ((this.spa = this.safeParseAsync),. Please confirm if the double parenthesis is intentional or if it should be corrected to a single one.

@@ -10949,10 +10964,10 @@ class Gl extends E {
exact: !1,
message: n.message,
}),
-t.dirty();
+t.dirty());

There seems to be an extra closing parenthesis in t.dirty()); — it likely should be t.dirty();.

@@ -10961,7 +10976,7 @@ class Gl extends E {
exact: !1,
message: n.message,
}),
-t.dirty();
+t.dirty());

Typographical error: There is an extra closing parenthesis on this line. It should likely be t.dirty(); instead of t.dirty()); so please remove the superfluous ).

}
if (r.minLength !== null) {
if (i.data.length < r.minLength.value)
-x(i, {
+(x(i, {

Typo: There's an unnecessary opening parenthesis before the call to x(i, { ... }. It should probably be x(i, { without the extra parenthesis.

@@ -12889,7 +12913,7 @@ function sh(l, i = {}, t) {
var K4 = { object: tl.lazycreate },
W;
(function (l) {
-(l.ZodString = 'ZodString'),
+((l.ZodString = 'ZodString'),

There appears to be an extra opening parenthesis at the start of this line. It looks like ((l.ZodString = 'ZodString') may be a typo. Please verify if the extra parenthesis is intended or if it should be removed.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (7)
apps/web/client/src/server/api/routers/chat/conversation.ts (1)

51-56: Incorrect Drizzle update invocation—passing an object instead of a table.

db.update expects the table, not a spread object. As written, this likely fails type-checking and at runtime.

Fix:

-            const [conversation] = await ctx.db.update({
-                ...conversations,
-                updatedAt: new Date(),
-            }).set(input.conversation)
-                .where(eq(conversations.id, input.conversationId)).returning();
+            const [conversation] = await ctx.db
+                .update(conversations)
+                .set({ ...input.conversation, updatedAt: new Date() })
+                .where(eq(conversations.id, input.conversationId))
+                .returning();
packages/models/src/llm/index.ts (1)

8-17: Update LLM model enums and token‐limit constants to match provider specs

The model identifiers in your enums are correct, but the associated context-window limits (MODEL_MAX_TOKENS) must be updated to prevent runtime truncation or inference errors.

Anthropic Sonnet 4 (Direct vs. OpenRouter)
– Direct API ID: claude-sonnet-4-20250514 supports 1 000 000 tokens.
– OpenRouter ID: anthropic/claude-sonnet-4 supports 200 000 tokens and does not recognize the dated Anthropic endpoint.
Action: set
• ANTHROPIC_MODELS.SONNET_4 → maxTokens = 1_000_000
• OPENROUTER_MODELS.CLAUDE_4_SONNET → maxTokens = 200_000

Anthropic 3.5 Haiku
claude-3-5-haiku-20241022 universally supports 200 000 tokens.
Action: set ANTHROPIC_MODELS.HAIKU → maxTokens = 200_000

OpenAI GPT-5 & GPT-5-Nano
– No public context-window documentation as of SDK v5.
Action: add a // TODO: confirm maxTokens with OpenAI docs or error payloads comment or use a safe default + runtime fallback

Suggested diff (in packages/models/src/llm/index.ts):

 export enum ANTHROPIC_MODELS {
     SONNET_4 = 'claude-sonnet-4-20250514',
     HAIKU     = 'claude-3-5-haiku-20241022',
 }

 export const MODEL_MAX_TOKENS: Record<ANTHROPIC_MODELS|OPENROUTER_MODELS, number> = {
-    [ANTHROPIC_MODELS.SONNET_4]: 200_000,
+    [ANTHROPIC_MODELS.SONNET_4]: 1_000_000,        // direct Anthropic API supports 1M tokens
     [ANTHROPIC_MODELS.HAIKU]: 200_000,            // universal for Haiku

     [OPENROUTER_MODELS.CLAUDE_4_SONNET]: 200_000, // OpenRouter limit
-    [OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO]: /*?*/,
-    [OPENROUTER_MODELS.OPEN_AI_GPT_5]: /*?*/,
+    [OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO]: 0,    // TODO: confirm with OpenAI
+    [OPENROUTER_MODELS.OPEN_AI_GPT_5]: 0,         // TODO: confirm with OpenAI
 };

• Verify that OpenRouter isn’t being called with claude-sonnet-4-20250514 (it only recognizes anthropic/claude-sonnet-4).
• Add runtime guards or fallbacks for unknown GPT-5 limits to avoid silent truncation.

apps/web/client/src/app/api/chat/route.ts (1)

41-42: Await streamResponse to keep POST’s try/catch effective.

Returning the promise without awaiting means errors thrown after the first await inside streamResponse (e.g., await req.json()) won’t be caught by the POST handler’s catch. Awaiting ensures consistent error responses and logging.

Apply this diff:

-        return streamResponse(req, user.id);
+        return await streamResponse(req, user.id);
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)

68-76: Enter handler calls handleEnterSelection twice (global keydown).

The condition already invokes the selection; calling it again executes duplicate actions.

Apply this diff:

-        const handleGlobalKeyDown = (e: KeyboardEvent) => {
-            if (e.key === 'Enter' && suggestionRef.current?.handleEnterSelection()) {
-                e.preventDefault();
-                e.stopPropagation();
-                // Stop the event from bubbling to the canvas
-                e.stopImmediatePropagation();
-                // Handle the suggestion selection
-                suggestionRef.current.handleEnterSelection();
-            }
-        };
+        const handleGlobalKeyDown = (e: KeyboardEvent) => {
+            if (e.key !== 'Enter') return;
+            const selected = suggestionRef.current?.handleEnterSelection();
+            if (selected) {
+                e.preventDefault();
+                e.stopPropagation();
+                e.stopImmediatePropagation();
+            }
+        };

Also applies to: 79-82

apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx (1)

28-37: Add submitting guard, await sendMessageToChat, and localize error toast

To ensure errors aren’t swallowed, prevent duplicate sends, and keep toast messages translatable:

• Prevent double-submits by checking and toggling inputState.isSubmitting (the InputState already includes this flag).
• Await sendMessageToChat(...) so rejections are caught by your try/catch (its signature is async (type: ChatType) => Promise<string | null | undefined>).
• Reset isSubmitting in a finally block so the button always becomes active again.
• Replace the hard-coded English toast with your useTranslations key.

File: apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx
Lines: ~28–37

 const handleSubmit = async () => {
-    try {
+    // Prevent double sends
+    if (inputState.isSubmitting) return;
+    setInputState(prev => ({ ...prev, isSubmitting: true }));
+    try {
         editorEngine.state.rightPanelTab = EditorTabValue.CHAT;
         await editorEngine.chat.addEditMessage(inputState.value);
-        sendMessageToChat(ChatType.EDIT);
+        // Await so errors propagate to this catch
+        await sendMessageToChat(ChatType.EDIT);
         setInputState(DEFAULT_INPUT_STATE);
-    } catch (error) {
+    } catch (error) {
         console.error('Error sending message', error);
-        toast.error('Failed to send message. Please try again.');
+        toast.error(
+          // Translations key; fallback to English if missing
+          t(transKeys.editor.panels.edit.tabs.chat.errors.sendFailed) ??
+          'Failed to send message. Please try again.'
+        );
+    } finally {
+        // Always re-enable submit
+        setInputState(prev => ({ ...prev, isSubmitting: false }));
     }
 };
packages/ai/src/chat/providers.ts (1)

18-22: Guard against missing MODEL_MAX_TOKENS entries.

If a new model key is added to OPENROUTER_MODELS/ANTHROPIC_MODELS without a corresponding entry in MODEL_MAX_TOKENS, maxTokens becomes undefined and may propagate silently. Fail fast here.

Apply:

-    let maxTokens: number = MODEL_MAX_TOKENS[requestedModel];
+    const maxTokens = MODEL_MAX_TOKENS[requestedModel];
+    if (maxTokens == null) {
+        throw new Error(`MODEL_MAX_TOKENS missing for model: ${requestedModel}`);
+    }
packages/ai/src/stream/index.ts (1)

66-84: Remove unused getAssistantParts export

The getAssistantParts function and its associated toolCallSignatures logic in packages/ai/src/stream/index.ts aren’t referenced anywhere in the codebase (no imports or calls were found), so it can be safely removed to reduce surface area.

• File to update:

  • packages/ai/src/stream/index.ts
    • Action:
  • Delete the entire getAssistantParts function (lines 66–84)
  • Remove its export statement
🧹 Nitpick comments (35)
apps/web/client/src/server/api/routers/chat/conversation.ts (6)

79-85: Token cap vs. “2–4 words” mismatch—tighten to a smaller cap and lower max title length.

Fifty output tokens is far more than needed for a 2–4 word title; you’ll still get long titles. Recommend a smaller cap and a slightly lower character limit to better enforce brevity.

Apply:

-            const MAX_NAME_LENGTH = 50;
+            const MAX_NAME_LENGTH = 40;
@@
-                maxOutputTokens: 50,
+                maxOutputTokens: 12,

98-104: Enforce 2–4 word constraint post-generation.

Add a lightweight server-side guard to match the prompt instructions and avoid persisting overly long titles.

-            const generatedName = result.text.trim();
-            if (generatedName && generatedName.length > 0 && generatedName.length <= MAX_NAME_LENGTH) {
+            const generatedName = result.text.trim().replace(/\s+/g, ' ');
+            const wordCount = generatedName.split(' ').filter(Boolean).length;
+            if (generatedName.length > 0 && generatedName.length <= MAX_NAME_LENGTH && wordCount >= 2 && wordCount <= 4) {
                 await ctx.db.update(conversations).set({
                     displayName: generatedName,
                 }).where(eq(conversations.id, input.conversationId));
                 return generatedName;
             }

86-95: Avoid sending raw userId in telemetry—hash or pseudonymize it first.

If experimental telemetry is routed to a third party, emitting userId may be undesirable. Hashing preserves cohort analysis without exposing PII.

Apply:

               experimental_telemetry: {
                 isEnabled: true,
                 metadata: {
                   conversationId: input.conversationId,
-                  userId: ctx.user.id,
+                  anonUserId: hashUserId(ctx.user.id),
                   tags: ['conversation-title-generation'],
                   sessionId: input.conversationId,
                   langfuseTraceId: uuidv4(),
                 },
               },

Add the helper (outside the selected lines, near the top-level of this module):

import crypto from 'node:crypto';

const HASH_SALT = process.env.TELEMETRY_HASH_SALT ?? 'onlook-default-salt';
function hashUserId(input: string): string {
  return crypto.createHash('sha256').update(HASH_SALT).update(input).digest('hex');
}

Please confirm your telemetry destination and policy before adopting this.


100-103: Also bump updatedAt when persisting a generated title.

Your list view orders by updatedAt (Line 21). Persisting a new title without updating the timestamp could keep the conversation out of order.

-                await ctx.db.update(conversations).set({
-                    displayName: generatedName,
-                }).where(eq(conversations.id, input.conversationId));
+                await ctx.db.update(conversations).set({
+                    displayName: generatedName,
+                    updatedAt: new Date(),
+                }).where(eq(conversations.id, input.conversationId));

80-97: Harden error handling around the model call.

A provider/network failure will throw and bubble up as a 500. Consider graceful handling that returns null so the client can retry.

-            const result = await generateText({
-                model,
-                headers,
-                prompt: `Generate a concise and meaningful conversation title (2-4 words maximum) that reflects the main purpose or theme of the conversation based on user's creation prompt. Generate only the conversation title, nothing else. Keep it short and descriptive. User's creation prompt: <prompt>${input.content}</prompt>`,
-                providerOptions,
-                maxOutputTokens: 50,
-                experimental_telemetry: {
-                    isEnabled: true,
-                    metadata: {
-                        conversationId: input.conversationId,
-                        userId: ctx.user.id,
-                        tags: ['conversation-title-generation'],
-                        sessionId: input.conversationId,
-                        langfuseTraceId: uuidv4(),
-                    },
-                },
-            });
+            let result;
+            try {
+                result = await generateText({
+                    model,
+                    headers,
+                    prompt: `Generate a concise and meaningful conversation title (2-4 words maximum) that reflects the main purpose or theme of the conversation based on user's creation prompt. Generate only the conversation title, nothing else. Keep it short and descriptive. User's creation prompt: <prompt>${input.content}</prompt>`,
+                    providerOptions,
+                    maxOutputTokens: 50,
+                    experimental_telemetry: {
+                        isEnabled: true,
+                        metadata: {
+                            conversationId: input.conversationId,
+                            userId: ctx.user.id,
+                            tags: ['conversation-title-generation'],
+                            sessionId: input.conversationId,
+                            langfuseTraceId: uuidv4(),
+                        },
+                    },
+                });
+            } catch (err) {
+                console.error('generateTitle: model call failed', { conversationId: input.conversationId, err });
+                return null;
+            }

106-107: Avoid logging full model result—log minimal context.

The full result may contain provider-specific metadata or sensitive prompt echoes. Log identifiers instead.

-            console.error('Error generating conversation title', result);
+            console.error('Error generating conversation title', { conversationId: input.conversationId });
apps/web/client/src/components/store/editor/chat/conversation.ts (2)

53-55: Avoid using exceptions for control flow when the current conversation is already empty.

Throwing here leads to user-facing error toasts for a benign case. Early-return instead to reuse the empty conversation.

-            if (this.current?.messages.length === 0 && !this.current?.conversation.title) {
-                throw new Error('Current conversation is already empty.');
-            }
+            if (this.current && this.current.messages.length === 0 && !this.current.conversation.title) {
+                // Reuse the empty, untitled conversation; no-op.
+                return;
+            }

56-60: Omit empty suggestions when calling conversation.upsert

We’ve verified that the suggestions column has a default of [] in the DB schema and that conversationInsertSchema (used by the TRPC input) makes it optional. Passing an explicit empty array will work, but it’s redundant and generates unnecessary writes. Consider omitting it when empty:

• File: apps/web/client/src/components/store/editor/chat/conversation.ts
• Around lines 56–60, replace

const newConversation = await api.chat.conversation.upsert.mutate({
    projectId: this.editorEngine.projectId,
    suggestions: [],          // always an empty array here
});

with

const suggestions: string[] = []; // currently always empty at this call site
const payload = {
  projectId: this.editorEngine.projectId,
  ...(suggestions.length > 0 && { suggestions }),
};
const newConversation = await api.chat.conversation.upsert.mutate(payload);

This tweak reduces noisy writes and leverages the schema’s default. If you’d rather be explicit about defaults, leaving suggestions: [] is harmless.
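The conditional-spread guard relies on object spread treating `false`/`undefined` as a no-op; a minimal standalone sketch (ids and values here are illustrative, not the real call site):

```typescript
// Spreading `false` into an object literal adds no properties, so the
// `suggestions` key only appears when the array is non-empty.
function buildUpsertPayload(projectId: string, suggestions: string[]) {
    return {
        projectId,
        ...(suggestions.length > 0 && { suggestions }),
    };
}
```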

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (3)

91-97: Await sendMessageToChat to preserve ordering and catch errors.

This call isn’t awaited here, while other call sites (e.g., ChatInput) await it. If sendMessageToChat returns a Promise, lack of await can cause race conditions and unhandled rejections.

Apply this diff:

-            sendMessageToChat(ChatType.EDIT);
+            await sendMessageToChat(ChatType.EDIT);

69-74: Wrap clipboard write in try/catch to handle permission errors.

navigator.clipboard can reject (e.g., on HTTP origins or policy denials). Provide a graceful fallback/toast.

Apply this diff:

-    async function handleCopyClick() {
-        const text = getUserMessageContent(message);
-        await navigator.clipboard.writeText(text);
-        setIsCopied(true);
-        setTimeout(() => setIsCopied(false), 2000);
-    }
+    async function handleCopyClick() {
+        try {
+            const text = getUserMessageContent(message);
+            await navigator.clipboard.writeText(text);
+            setIsCopied(true);
+            setTimeout(() => setIsCopied(false), 2000);
+        } catch (err) {
+            toast.error('Copy failed. Please try again.');
+        }
+    }

213-216: Avoid unstable keys for list items.

Generating a new nanoid() on each render forces unnecessary re-mounts and can affect focus/animation. Prefer a stable identifier or fall back to index.

Apply this diff:

-                            {message.content.metadata.context.map((context) => (
-                                <SentContextPill key={nanoid()} context={context} />
+                            {message.content.metadata.context.map((context, idx) => (
+                                <SentContextPill key={(context as any)?.id ?? idx} context={context} />
                             ))}
packages/models/src/llm/index.ts (1)

31-36: Consider deriving maxTokens from MODEL_MAX_TOKENS to avoid divergence.

Today, ModelConfig carries a maxTokens value independent of MODEL_MAX_TOKENS. Consider making maxTokens optional and defaulting from the mapping at the call site to prevent config drift.

Example change:

-export type ModelConfig = {
-    model: LanguageModel;
-    providerOptions?: Record<string, any>;
-    headers?: Record<string, string>;
-    maxTokens: number;
-};
+export type ModelConfig = {
+    model: LanguageModel;
+    providerOptions?: Record<string, any>;
+    headers?: Record<string, string>;
+    maxTokens?: number; // default from MODEL_MAX_TOKENS by model id
+};

And a small helper (in a suitable utils module):

export function resolveMaxTokens(modelId: string, override?: number) {
  return override ?? MODEL_MAX_TOKENS[modelId as keyof typeof MODEL_MAX_TOKENS];
}

Also applies to: 38-44
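A self-contained sketch of that fallback; the model ids and token limits below are illustrative stand-ins for the real MODEL_MAX_TOKENS table:

```typescript
// Illustrative mapping; the real MODEL_MAX_TOKENS lives in the models package.
const MODEL_MAX_TOKENS: Record<string, number> = {
    'claude-sonnet-4': 64000,
    'gpt-4o': 16384,
};

// Prefer an explicit override, else fall back to the per-model mapping.
function resolveMaxTokens(modelId: string, override?: number): number | undefined {
    return override ?? MODEL_MAX_TOKENS[modelId];
}
```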

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1)

22-55: Harden keys and add an explicit default case in parts map.

  • Use stable keys; part.text and toolCallId can collide or be missing.
  • Return null for unhandled part types to avoid inserting undefined into the render array.

Apply this diff:

        const lastToolInvocationIdx = parts.map(p => p.type).lastIndexOf('tool-invocation');
        return parts.map((part, idx) => {
             if (part.type === 'text') {
                 return (
                     <MarkdownRenderer
                         messageId={messageId}
                         type="text"
-                        key={part.text}
+                        key={`${messageId}-text-${idx}`}
                         content={part.text}
                         applied={applied}
                         isStream={isStream}
                     />
                 );
             } else if (part.type === 'tool-invocation') {
                 return (
                     <ToolCallDisplay
                         messageId={messageId}
                         index={idx}
                         lastToolInvocationIdx={lastToolInvocationIdx}
                         toolInvocationData={part.toolInvocation}
-                        key={part.toolInvocation.toolCallId}
+                        key={part.toolInvocation.toolCallId ?? `${messageId}-tool-${idx}`}
                         isStream={isStream}
                         applied={applied}
                     />
                 );
             } else if (part.type === 'reasoning') {
                 if (!isStream) {
                     return null;
                 }
                 return (
-                    <p>Introspecting...</p>
+                    <p key={`${messageId}-reasoning-${idx}`}>Introspecting...</p>
                 );
            }
+            return null;
         });
apps/web/client/src/app/api/chat/route.ts (1)

4-4: Step-based termination with stepCountIs is a good v5-aligned change.

Nice simplification. Consider making MAX_STEPS configurable per environment to ease tuning across deployments.

Apply this diff:

-const MAX_STEPS = 20;
+const MAX_STEPS = Number(process.env.AI_MAX_STEPS ?? 20);

Also applies to: 8-8, 82-82

packages/models/src/chat/request.ts (2)

11-16: DRY up StreamRequest/StreamRequestV2 to prevent drift.

Both types share requestType and useAnalytics. Factor out a base to keep them in sync and ease future edits.

 export enum StreamRequestType {
   CHAT = 'chat',
   CREATE = 'create',
   ERROR_FIX = 'error-fix',
   SUGGESTIONS = 'suggestions',
   SUMMARY = 'summary',
 }
 
-export type StreamRequest = {
-    messages: ModelMessage[];
-    systemPrompt: string;
-    requestType: StreamRequestType;
-    useAnalytics: boolean;
-};
+type BaseStreamRequest = {
+  requestType: StreamRequestType;
+  useAnalytics: boolean;
+};
+
+export type StreamRequest = BaseStreamRequest & {
+  messages: ModelMessage[];
+  systemPrompt: string;
+};
 
-export type StreamRequestV2 = {
-    messages: ModelMessage[];
-    requestType: StreamRequestType;
-    useAnalytics: boolean;
-};
+export type StreamRequestV2 = BaseStreamRequest & {
+  messages: ModelMessage[];
+};

Also applies to: 18-22


1-1: No CoreMessage usages in repo; optional: decouple ‘ai’ types in chat, llm, and db packages

Verified that there are no remaining CoreMessage references in the monorepo and that StreamRequest/StreamRequestV2 are only defined (not consumed) in packages/models/src/chat/request.ts. To insulate downstream consumers from future-breaking changes in the external ai package, you may optionally re-export any imported types locally.

• packages/models/src/chat/request.ts

-import type { ModelMessage } from 'ai';
+import type { ModelMessage as AiModelMessage } from 'ai';
+// Re-export to decouple external packages from direct 'ai' imports
+export type ModelMessage = AiModelMessage;

• packages/models/src/llm/index.ts

-import type { LanguageModel } from 'ai';
+import type { LanguageModel as AiLanguageModel } from 'ai';
+export type LanguageModel = AiLanguageModel;

• packages/db/src/dto/message.ts

-import type { UIMessage as VercelMessage } from 'ai';
+import type { UIMessage as AiUIMessage } from 'ai';
+export type UIMessage = AiUIMessage;

Also remember to run your existing ripgrep commands (or your IDE’s “find references”) against any downstream packages that consume these public APIs to ensure nothing breaks with the new ModelMessage surface.

packages/ai/src/tools/sandbox.ts (1)

5-8: Optional: rename SANDBOX_TOOL_PARAMETERS to SANDBOX_TOOL_INPUT_SCHEMA for consistency.

The constant name now mismatches the inputSchema property. Not required, but renaming reduces cognitive overhead going forward.

-export const SANDBOX_TOOL_PARAMETERS = z.object({
+export const SANDBOX_TOOL_INPUT_SCHEMA = z.object({
     command: ALLOWED_SANDBOX_COMMANDS.describe('The allowed command to run'),
 });
 export const sandboxTool = tool({
   description:
       'Restart the development server. This should only be used if absolutely necessary such as if updating dependencies, clearing next cache, or if the server is not responding.',
-  inputSchema: SANDBOX_TOOL_PARAMETERS,
+  inputSchema: SANDBOX_TOOL_INPUT_SCHEMA,
 });

Also applies to: 12-13

apps/web/client/src/components/tools/tools.ts (1)

61-63: Broaden inputSchema type to accept any Zod schema (not just ZodObject).

Future schemas might be wrapped with ZodEffects/ZodPipeline or differ in shape. Using ZodTypeAny avoids unnecessary friction.

-interface ClientToolMap extends Record<string, {
-    name: string;
-    inputSchema: z.ZodObject<any>;
-    handler: (args: any, editorEngine: EditorEngine) => Promise<any>;
-}> { }
+interface ClientToolMap extends Record<string, {
+  name: string;
+  inputSchema: z.ZodTypeAny;
+  handler: (args: any, editorEngine: EditorEngine) => Promise<any>;
+}> {}
apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (1)

98-99: Await the asynchronous sendMessageToChat call to guarantee correct order

The sendMessageToChat helper is declared as an async function returning a Promise, so invoking it without await may allow subsequent state updates (like marking the creation request complete) to run before the chat message is actually sent or fails. To ensure errors are handled in sequence and UI state remains consistent, await the call.

Location:

  • File: apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx
  • Lines: ~98–99

Suggested change:

-            sendMessageToChat(ChatType.CREATE);
+            await sendMessageToChat(ChatType.CREATE);
packages/models/src/chat/message/message.ts (2)

1-6: Drop unused V2 imports to avoid noUnusedLocals issues.

The file no longer references V2 types.

-import type {
-    MastraMessageContentV2,
-    MastraMessageContentV3,
-    MastraMessageV3,
-} from '@mastra/core/agent';
-import type { MastraMessageV2 } from '@mastra/core/memory';
+import type {
+    MastraMessageContentV3,
+    MastraMessageV3,
+} from '@mastra/core/agent';

22-26: Remove leftover V2 type imports in message.ts

We’ve verified via ripgrep that there are no downstream references to MastraMessageV2 or MastraMessageContentV2 outside of this file. The only remaining V2 mentions are two unused imports here, which should be removed to avoid confusion and keep the codebase clean.

– packages/models/src/chat/message/message.ts
• Remove the import of MastraMessageContentV2 from @mastra/core/agent
• Remove the import of MastraMessageV2 from @mastra/core/memory

Confirmed no other MastraMessage(V2|ContentV2) usages across packages/ or apps/ after running:

rg -nP -C3 '\bMastraMessage(V2|ContentV2)\b' packages/ apps/
packages/ai/src/tools/read.ts (2)

12-13: Rename to inputSchema is correct.

No logic change; wiring is good.

Optional: tighten validation to prevent negative or fractional offsets/limits.

Example (outside this hunk):

export const READ_FILE_TOOL_PARAMETERS = z.object({
  file_path: z.string().describe('Absolute path to file'),
  offset: z.number().int().nonnegative().optional().describe('Starting line number (0-based)'),
  limit: z.number().int().positive().optional().describe('Number of lines to read'),
});

22-23: listFilesTool: inputSchema rename looks good.

Consider whether ignore should default to common patterns (e.g., node_modules, build artifacts) to reduce noise; can be added later if desired.

packages/ai/src/tools/plan.ts (1)

4-15: Remove the unused taskTool to avoid dead code and potential TS lint errors.

The comment says “Not used” and the const isn’t exported or referenced.

-// Not used
-const TASK_TOOL_NAME = 'task';
-const TASK_TOOL_PARAMETERS = z.object({
-    description: z.string().min(3).max(50).describe('Short task description (3-5 words)'),
-    prompt: z.string().describe('Detailed task for the agent'),
-    subagent_type: z.enum(['general-purpose']).describe('Agent type'),
-});
-const taskTool = tool({
-    description: 'Launch specialized agents for analysis tasks',
-    inputSchema: TASK_TOOL_PARAMETERS,
-});
+// (Removed unused task tool)
packages/ai/src/chat/providers.ts (1)

53-56: Anthropic provider: add fast-fail API key check & use providerOptions for cache control

  • Add a fast-fail guard for ANTHROPIC_API_KEY, since createAnthropic uses this env var for its apiKey option and failing fast prevents confusing 401 errors (sdk.vercel.ai).
  • Acknowledge that v5 dropped the top-level cacheControl setting on provider creation; to request ephemeral caching you must pass
    providerOptions: {
      anthropic: { cacheControl: { type: 'ephemeral' } },
    }
    on the specific message or message-part (sdk.vercel.ai).

Example patch:

 async function getAnthropicProvider(model: ANTHROPIC_MODELS): Promise<LanguageModel> {
+  if (!process.env.ANTHROPIC_API_KEY) {
+    throw new Error('ANTHROPIC_API_KEY must be set');
+  }
   const anthropic = createAnthropic();
   return anthropic(model);
 }

Ensure any existing code relying on provider-level cacheControl is migrated to the per-message providerOptions approach—and upgrade to a post-#5043 v5 release to get the message-part providerOptions forwarding fix.

packages/ai/test/tools/web-search.test.ts (2)

10-14: Nit: test title doesn’t match its assertions.

The test mentions inputSchema but only checks name/exports here (inputSchema equality is asserted later). Either rename the title or add a light assertion.

Apply one of:

-it('should have the correct tool name and inputSchema', () => {
+it('should have the correct tool name and exports', () => {

or

 it('should have the correct tool name and inputSchema', () => {
   expect(WEB_SEARCH_TOOL_NAME).toBe('web_search');
   expect(WEB_SEARCH_TOOL_PARAMETERS).toBeDefined();
   expect(webSearchTool).toBeDefined();
+  expect(webSearchTool.inputSchema).toBeDefined();
 });

33-45: Nit: rename for clarity.

“optional inputSchema” reads oddly. “optional fields” is clearer in test output.

-        it('should validate all optional inputSchema', () => {
+        it('should validate all optional fields', () => {
packages/db/src/dto/message.ts (3)

18-22: Avoid leaking DB-only fields into ChatMessage via spread.

Spreading ...message brings DB-specific properties (checkpoints, snapshots, etc.) onto a ChatMessage object. Prefer a minimal base to avoid accidental coupling.

-    const baseMessage = {
-        ...message,
-        content,
-        threadId: message.conversationId,
-    }
+    const baseMessage = {
+        id: message.id,
+        createdAt: message.createdAt,
+        content,
+        threadId: message.conversationId,
+    };

47-53: fromMessage() discards non-text parts in content aggregation.

This is fine for a human-readable content string, but be aware that files/images/tools won’t be represented here. If any consumer expects content to reflect those parts, consider including markers (e.g., “[file: X.png]”) or leaving content empty when non-text parts dominate.

Would you like a small helper to stringify parts with minimal markers?
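One possible shape for such a helper; the part fields below are a simplified assumption of the v5 UIMessage parts shape, not the exact SDK types:

```typescript
type SimplePart = { type: string; text?: string; url?: string };

// Render text parts verbatim and non-text parts as short bracketed markers.
function stringifyParts(parts: SimplePart[]): string {
    return parts
        .map((part) => {
            if (part.type === 'text') return part.text ?? '';
            if (part.type === 'file') return `[file: ${part.url ?? 'unknown'}]`;
            return `[${part.type}]`;
        })
        .filter((line) => line.length > 0)
        .join('\n');
}
```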


61-72: Ensure consistent handling of vercelId across mapping functions

The current mappings don’t preserve the original Vercel UI message ID through a save/load cycle:

  • toOnlookMessageFromVercel() sets metadata.vercelId = message.id (the Vercel UI ID).
  • fromMessage() (ChatMessage → DbMessage) omits vercelId, so it’s not stored in the database.
  • toMessage() (DbMessage → ChatMessage) then repurposes metadata.vercelId = message.id as the DB record’s ID, overwriting the original value.

If downstream logic ever inspects content.metadata.vercelId expecting the original UI ID, it will now see the database ID instead.

Options to address this:

• Persist the Vercel ID in the database
– Add a vercelId column to the DbMessage model (or embed it in a JSON column), then update
fromMessage(message: ChatMessage): DbMessage {
    …
    vercelId: message.content.metadata.vercelId,
    …
}

and in toMessage():

metadata: {
    vercelId: message.vercelId, // restores original UI ID
    context: …,
    checkpoints: …,
}

• Rename the field in one of the mappings to avoid overloading the meaning
– e.g. in toMessage(), use metadata.sourceId = message.id (DB ID), leaving metadata.vercelId exclusively for the original UI ID.

• Drop metadata.vercelId in toMessage() if preserving the Vercel ID isn’t required downstream.

packages/ai/src/tools/cli.ts (1)

31-39: Optional: tighten Zod schemas (timeout, command).

Consider:

  • timeout: add .int().positive() to ensure whole milliseconds and non-negative values.
  • command: basic refinement to ensure the first token is in allowed_commands when provided, or at least non-empty/trimmed.

This can live here or in the tool executor layer depending on architecture.

Would you like me to draft a small shared helper (validateCommandAgainstAllowlist(command, enum)) used by both bash_read and bash_edit?

Also applies to: 58-66, 83-99
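A sketch of that shared helper, assuming the allowlist is a flat list of command names (the real schema uses a Zod enum, so this would sit behind a thin adapter):

```typescript
// Accept a command only when its first whitespace-delimited token is allowlisted.
function validateCommandAgainstAllowlist(
    command: string,
    allowedCommands: readonly string[],
): boolean {
    const first = command.trim().split(/\s+/)[0];
    return first !== undefined && first.length > 0 && allowedCommands.includes(first);
}
```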

packages/ai/src/stream/index.ts (3)

3-8: Type aliasing is fine; minor naming nit to reduce confusion.

Alias type UIMessage as VercelMessage works, but it introduces a mental mapping cost across the codebase. Consider standardizing on UIMessage terminology everywhere to avoid dual naming (VercelMessage/UIMessage), or export a local UIMessage alias instead.


11-30: Switch to ModelMessage[] looks correct; watch for lost tool-call de-dup behavior.

The new convertToStreamMessages returns ModelMessage[] via convertToModelMessages(uiMessages), which is aligned with the v5 flow. Previously we had infrastructure to avoid repeating identical tool invocations across assistant messages (via a toolCallSignatures map). That de-dup is no longer wired here. If repetition avoidance is still required, consider re-integrating the logic (e.g., by folding it into toVercelMessageFromOnlook or introducing a small pre-pass).

If you want, I can scan for any remaining references of the old toolCallSignatures pattern and propose a minimal reintegration based on actual usage sites.
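If re-integration is wanted, a pre-pass over the message parts could look roughly like this; the `tool-` type prefix and `input` field are assumptions about the v5 part shape, not confirmed SDK fields:

```typescript
type ToolishPart = { type: string; input?: unknown };

// Drop repeated tool parts whose (type, input) signature was already seen.
function dedupeToolParts<T extends ToolishPart>(parts: T[]): T[] {
    const seen = new Set<string>();
    return parts.filter((part) => {
        if (!part.type.startsWith('tool-')) return true;
        const signature = `${part.type}:${JSON.stringify(part.input ?? null)}`;
        if (seen.has(signature)) return false;
        seen.add(signature);
        return true;
    });
}
```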


38-42: Avoid returning both parts and content; strip content to match UIMessage shape.

Spreading ...message also brings over content from the Onlook message. Returning both parts and a non-string content can confuse downstream consumers expecting the v5 parts shape. Remove content when constructing the UI message.

Apply this diff:

   if (message.role === ChatMessageRole.ASSISTANT) {
-        return {
-            ...message,
-            parts: message.content.parts,
-            // content: messageContent,
-        } satisfies VercelMessage;
+        const { content: _omitContent, ...rest } = message;
+        return {
+            ...rest,
+            parts: message.content.parts,
+        } satisfies VercelMessage;
   }
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)

34-55: Guard access to message.metadata.finishReason.

Depending on the exact UIMessage shape your provider returns, metadata may be absent or differently shaped. Add a safe access to avoid runtime errors and consider tightening the message type to include your metadata extension.

Apply this diff:

-        onFinish: ({message}) => {
-            const finishReason = message.metadata.finishReason;
+        onFinish: ({ message }) => {
+            const finishReason = (message as any)?.metadata?.finishReason as string | undefined;
             console.log('finishReason', finishReason);
-            console.log('message', message.metadata);
+            console.log('message', (message as any)?.metadata);

Optional follow-up: define a local type OnlookUIMessage = UIMessage & { metadata?: { finishReason?: string } } and type UseChatHelpers<OnlookUIMessage> to keep things typed.
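A sketch of that local type with a null-safe accessor; field names beyond finishReason are illustrative:

```typescript
type OnlookUIMessage = {
    id: string;
    role: 'system' | 'user' | 'assistant';
    metadata?: { finishReason?: string };
};

// Safe even when the provider omits metadata entirely.
function readFinishReason(message: OnlookUIMessage): string | undefined {
    return message.metadata?.finishReason;
}
```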

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between aedc548 and 43939d2.

⛔ Files ignored due to path filters (3)
  • apps/web/server/bun.lock is excluded by !**/*.lock
  • bun.lock is excluded by !**/*.lock
  • docs/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (34)
  • apps/web/client/package.json (4 hunks)
  • apps/web/client/src/app/api/chat/helperts/stream.ts (1 hunks)
  • apps/web/client/src/app/api/chat/route.ts (3 hunks)
  • apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx (1 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (2 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (1 hunks)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx (1 hunks)
  • apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (2 hunks)
  • apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (2 hunks)
  • apps/web/client/src/components/store/editor/chat/conversation.ts (1 hunks)
  • apps/web/client/src/components/tools/tools.ts (2 hunks)
  • apps/web/client/src/mastra/index.ts (1 hunks)
  • apps/web/client/src/server/api/routers/chat/conversation.ts (1 hunks)
  • apps/web/client/src/server/api/routers/project/project.ts (1 hunks)
  • apps/web/server/package.json (1 hunks)
  • packages/ai/package.json (1 hunks)
  • packages/ai/src/chat/providers.ts (2 hunks)
  • packages/ai/src/prompt/provider.ts (3 hunks)
  • packages/ai/src/stream/index.ts (2 hunks)
  • packages/ai/src/tools/cli.ts (5 hunks)
  • packages/ai/src/tools/edit.ts (4 hunks)
  • packages/ai/src/tools/guides.ts (1 hunks)
  • packages/ai/src/tools/plan.ts (3 hunks)
  • packages/ai/src/tools/read.ts (2 hunks)
  • packages/ai/src/tools/sandbox.ts (1 hunks)
  • packages/ai/src/tools/web.ts (2 hunks)
  • packages/ai/test/tools/web-search.test.ts (3 hunks)
  • packages/db/src/dto/message.ts (1 hunks)
  • packages/models/package.json (1 hunks)
  • packages/models/src/chat/message/message.ts (2 hunks)
  • packages/models/src/chat/request.ts (2 hunks)
  • packages/models/src/llm/index.ts (2 hunks)
  • packages/ui/package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx (2)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
  • useChatContext (95-100)
apps/web/client/src/components/store/editor/index.tsx (1)
  • useEditorEngine (9-13)
apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (1)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
  • useChatContext (95-100)
packages/ai/src/chat/providers.ts (1)
packages/models/src/llm/index.ts (2)
  • InitialModelPayload (24-29)
  • ModelConfig (31-36)
packages/ai/test/tools/web-search.test.ts (1)
packages/ai/src/tools/web.ts (2)
  • webSearchTool (43-46)
  • WEB_SEARCH_TOOL_PARAMETERS (38-42)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (1)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
  • useChatContext (95-100)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
  • useChatContext (95-100)
apps/web/client/src/components/tools/tools.ts (8)
packages/ai/src/tools/read.ts (2)
  • LIST_FILES_TOOL_PARAMETERS (16-19)
  • READ_FILE_TOOL_PARAMETERS (5-9)
packages/ai/src/tools/guides.ts (1)
  • READ_STYLE_GUIDE_TOOL_NAME (10-10)
apps/web/client/src/components/tools/helpers.ts (1)
  • EMPTY_TOOL_PARAMETERS (3-3)
packages/ai/src/tools/edit.ts (3)
  • SEARCH_REPLACE_EDIT_FILE_TOOL_PARAMETERS (5-10)
  • WRITE_FILE_TOOL_PARAMETERS (39-42)
  • FUZZY_EDIT_FILE_TOOL_PARAMETERS (50-63)
packages/ai/src/tools/cli.ts (5)
  • TERMINAL_COMMAND_TOOL_PARAMETERS (5-7)
  • BASH_READ_TOOL_PARAMETERS (31-39)
  • GLOB_TOOL_PARAMETERS (73-76)
  • GREP_TOOL_PARAMETERS (83-99)
  • BASH_EDIT_TOOL_PARAMETERS (58-66)
packages/ai/src/tools/web.ts (2)
  • SCRAPE_URL_TOOL_PARAMETERS (5-29)
  • WEB_SEARCH_TOOL_PARAMETERS (38-42)
packages/ai/src/tools/sandbox.ts (1)
  • SANDBOX_TOOL_PARAMETERS (6-8)
packages/ai/src/tools/plan.ts (2)
  • TODO_WRITE_TOOL_PARAMETERS (17-28)
  • EXIT_PLAN_MODE_TOOL_PARAMETERS (35-37)
packages/models/src/chat/message/message.ts (2)
packages/models/src/chat/message/context.ts (1)
  • MessageContext (42-47)
packages/models/src/chat/message/checkpoint.ts (1)
  • MessageCheckpoints (15-15)
packages/ai/src/stream/index.ts (3)
packages/db/src/schema/project/chat/message.ts (1)
  • messages (11-27)
packages/models/src/chat/message/message.ts (1)
  • ChatMessage (38-38)
packages/ai/src/prompt/provider.ts (1)
  • HydrateMessageOptions (19-24)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (3)
apps/web/client/src/components/store/editor/index.tsx (1)
  • useEditorEngine (9-13)
apps/web/client/src/components/tools/tools.ts (1)
  • handleToolCall (170-184)
packages/ai/src/stream/index.ts (1)
  • toVercelMessageFromOnlook (32-53)
🔇 Additional comments (37)
apps/web/server/package.json (1)

23-23: Zod v4 upgrade verified—no legacy ^3 dependencies remain

All package.json files now declare Zod at ^4.0.17 (no ^3.x entries found):

  • packages/ui/package.json
  • packages/models/package.json
  • packages/ai/package.json
  • apps/web/server/package.json
  • apps/web/client/package.json

Next steps before merging:

  • Run the full test suite (including TRPC endpoint and UI validator tests) to catch any runtime incompatibilities introduced by Zod v4.
  • Scan your Zod schemas for any deprecated v3 patterns (e.g. changed defaults for .optional(), differences in error formatting) and update as needed.
  • Optionally, confirm at install time that only one version of Zod is present (e.g. via npm ls zod or pnpm why zod) to avoid duplicate copies.

With version alignment confirmed, this bump is ready to land once compatibility is validated.

apps/web/client/src/mastra/index.ts (1)

7-7: LGTM: formatting-only change.

Trailing comma removal is harmless. No behavioral impact.

packages/models/package.json (2)

39-39: All Zod dependencies upgraded to v4—no action needed

I ran the suggested ripgrep command across all package.json files to look for any "zod": "^3…" entries; it returned no matches, confirming that all packages now reference Zod v4.

• No ^3 occurrences of Zod found in any package.json.
• Cross-package alignment with Zod v4 is confirmed.


36-36: No runtime imports of “ai”; devDependency remains appropriate

Verified that all mentions of ai in packages/models/src are type-only imports—no runtime imports or require calls were found:

  • packages/models/src/llm/index.ts: import type { LanguageModel } from 'ai';
  • packages/models/src/chat/request.ts: import type { ModelMessage } from 'ai';

Since ai is only used for type definitions, it can stay in devDependencies.
Optionally, if you’d like to pick up patch fixes, you can loosen the version to "ai": "^5.0.0" in devDependencies.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx (1)

14-14: No action needed: useChatContext correctly exposes sendMessageToChat.

Verification shows:

  • ChatContext is created with a type that includes sendMessageToChat.
  • ChatProvider passes sendMessageToChat into the context value.
  • useChatContext() returns { ...context, isWaiting }, which spreads in sendMessageToChat.
apps/web/client/src/app/api/chat/helperts/stream.ts (1)

80-84: Confirm the expected type for ToolCall.args to avoid double JSON stringification

It’s unclear whether in AI SDK v5 the ToolCall.args field is typed as a string or an object. If it’s defined as an object, wrapping your repairedArgs in JSON.stringify will introduce an extra layer of JSON encoding, forcing consumers to parse twice. If it’s a string, you’ll need the JSON output to match the original shape.

Action items:

  • Inspect the SDK’s ToolCall declaration (e.g. in your hoisted node_modules/ai folder or TypeScript types) to confirm whether args is string or object.
  • Only apply JSON.stringify when the SDK expects a string; otherwise, pass the raw object.

Suggested refactor:

-    return {
-        ...toolCall,
-        args: JSON.stringify(repairedArgs),
-        toolCallType: 'function' as const
-    };
+    const normalizedArgs = typeof toolCall.args === 'string'
+      ? JSON.stringify(repairedArgs)
+      : repairedArgs;
+    return {
+      ...toolCall,
+      args: normalizedArgs,
+      toolCallType: 'function' as const
+    };

Please verify and align this change with the actual ToolCall.args type in your AI SDK v5.
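The failure mode is easy to reproduce in isolation (plain JS values, not SDK types):

```typescript
// One stringify yields a JSON object string; a second wraps it in a JSON
// string literal, forcing consumers to parse twice.
const args = { path: '/tmp/a.txt' };
const once = JSON.stringify(args);
const twice = JSON.stringify(once);

const parsedOnce = JSON.parse(twice);        // back to the object-string
const parsedTwice = JSON.parse(parsedOnce);  // finally the object again
```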

apps/web/client/src/server/api/routers/project/project.ts (1)

304-311: Remaining maxTokens occurrences found – please verify rename consistency

I ran a sweep and found four instances of the old maxTokens field. If we’re standardizing on the AI SDK v5 name maxOutputTokens across the codebase, we should confirm whether these should be renamed or intentionally left as-is:

• apps/web/client/src/server/api/routers/chat/suggestion.ts:44 (maxTokens: 10000,)
• packages/models/src/llm/index.ts:35 (maxTokens: number;)
• packages/ai/src/chat/providers.ts:21 (let maxTokens: number = MODEL_MAX_TOKENS[requestedModel];)
• packages/ai/src/chat/providers.ts:49 (maxTokens,)

Aside from consistency, consider lowering the 50-token cap in project.ts to around 12–16 tokens for a 2–4 word name to save latency and cost (though 50 is harmless). Please verify these references and update as needed.

packages/ai/package.json (1)

35-49: AI SDK & Zod versions verified – all consistent

No conflicting versions or duplicates detected.

apps/web/client/package.json (1)

31-31: Validate integration after AI SDK v5, Mastra, and Zod v4 bumps

Based on the search results:

  • References to the old parameters API still exist in:
    • packages/utility/test/urls.test.ts and packages/utility/test/image.test.ts (describe('…parameters…'))
    • packages/ai/test/tools/web.test.ts (checks for tool parameters)
    • The generated apps/web/template/public/onlook-preload-script.js (static bundle – safe to ignore for source integration but worth regenerating)
  • maxTokens is still used in:
    • apps/web/client/src/server/api/routers/chat/suggestion.ts (maxTokens: 10000)
    • packages/models/src/llm/index.ts (type/interface)
    • packages/ai/src/chat/providers.ts (provider defaults and usage)
  • No plain useChat or unqualified sendMessage calls were found in apps/web/client/src (note: the look-around regex needs PCRE2 support for finer filtering).
  • The pnpm-lock.yaml file wasn’t located by the script, so Zod versions should be confirmed manually in your lockfile (ensure only v4 entries are present).

Please manually verify that:

  • All AI SDK hooks/components (streaming, tool calls, etc.) in @ai-sdk/[email protected] and [email protected] are updated to the new method signatures.
  • The special @mastra/*@ai-v5 packages are compatible and stable with Next 15/React 19.
  • All Zod schemas—especially in your TRPC routers—have been migrated to the v4 API (check for any lingering v3/v5 imports or signatures).

Affected areas to review:

  • apps/web/client/src/server/api/routers/chat/suggestion.ts
  • packages/ai/src/chat/providers.ts
  • packages/models/src/llm/index.ts
  • packages/utility/test/**/*.ts
  • packages/ai/test/tools/web.test.ts
  • Your root lockfile (pnpm-lock.yaml or equivalent)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (1)

30-30: Rename to sendMessageToChat aligns with new ChatContext API.

Destructuring looks correct and consistent with the PR’s broader API update.

packages/models/src/llm/index.ts (1)

1-1: Lingering LanguageModelV1 References: None found in the codebase.
getModelFromType Call Sites:

  • apps/web/client/src/app/api/chat/route.ts (line 72)
  • apps/web/client/src/app/api/chat/helperts/stream.ts (line 5)

To complete verification, please provide the implementation of getModelFromType (in apps/web/client/src/app/api/chat/helperts/stream.ts) so we can confirm it returns a LanguageModel instance rather than just a model ID or string.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (2)

1-1: Type update to UIMessage['parts'] matches AI SDK v5.

This aligns the component with v5 UIMessage typing. No runtime impact expected.

Also applies to: 14-14


22-23: lastToolInvocationIdx can be -1. Ensure ToolCallDisplay accepts it.

If none exist, lastIndexOf returns -1. Confirm ToolCallDisplay gracefully handles this sentinel.
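A plain-TS sketch of the index computation and the -1 sentinel being flagged — part shapes are simplified from the v5 UIMessage parts union, and `isLastToolPart` is a hypothetical consumer-side guard:

```typescript
// Simplified part shape; real UIMessage parts carry more fields.
type UIPart = { type: string };

function lastToolPartIndex(parts: UIPart[]): number {
    for (let i = parts.length - 1; i >= 0; i--) {
        if (parts[i]!.type.startsWith('tool-')) return i;
    }
    return -1; // no tool invocation in this message
}

// A consumer such as ToolCallDisplay should treat -1 as "nothing to highlight"
// rather than indexing with it (parts[-1] is undefined, not an error).
function isLastToolPart(parts: UIPart[], idx: number): boolean {
    return idx !== -1 && idx === lastToolPartIndex(parts);
}
```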

apps/web/client/src/app/api/chat/route.ts (2)

55-60: No internal maxSteps references found; verify external callers

A repository-wide search for maxSteps in .ts, .tsx, .js, and .jsx files returned no matches, confirming that no internal client code still sends this field.

• Ensure any external or third-party clients (mobile apps, integrations, API consumers) have been updated to remove maxSteps from their request payloads and that your API documentation or changelog reflects this breaking change.
• Consider adding a deprecation notice or compatibility shim if you need to support older clients during a transition period.
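If a transition shim is wanted, a minimal sketch — the helper name is hypothetical; `stopWhen`/`stepCountIs` are the v5 server-side replacements for client-supplied step limits:

```typescript
// Hypothetical compatibility shim: drop the removed `maxSteps` field from an
// older client's request body before schema validation, with a deprecation
// warning. Step limits now live server-side in v5 (stopWhen/stepCountIs).
function stripDeprecatedChatFields(body: Record<string, unknown>): Record<string, unknown> {
    if (!('maxSteps' in body)) return body;
    console.warn('Deprecated "maxSteps" in chat request body ignored.');
    const { maxSteps: _ignored, ...rest } = body;
    return rest;
}
```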


79-109: Experimental flag names confirmed
The streamText options experimental_repairToolCall and top-level providerOptions are correct for Vercel AI SDK v5 and match the official naming. No changes needed.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)

29-29: Context API rename consumption looks correct.

sendMessageToChat, stop, isWaiting are destructured per the new API.


221-224: Optional chaining on captureScreenshot result is a solid safety improvement.

Avoids null access when integrations vary by frame/view.

packages/ai/src/tools/sandbox.ts (1)

12-13: All .parameters references removed—rename to inputSchema is safe

I ran a repository-wide search for both property accesses (.parameters) and object keys (parameters:), excluding build artifacts, and found zero occurrences. This confirms that every consumer now references inputSchema consistently.

apps/web/client/src/components/tools/tools.ts (2)

38-38: Type-only import for z is correct here.

Only used for z.infer and types; keeps runtime bundle lean.


65-168: Verification complete – no leftover .parameters references found
All occurrences of the deprecated .parameters API have been replaced with inputSchema, and no .parameters usages remain in the apps/web directory. Approving these changes.

packages/ai/src/tools/guides.ts (1)

7-8: parameters → inputSchema migration looks correct and consistent.

No behavior change; matches other tool modules.

Also applies to: 13-14
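A shape-level sketch of the v4 → v5 tool-definition rename for readers skimming the diff. The real tools use zod schemas and the `tool()` helper from 'ai'; the stand-in Schema type here keeps the example dependency-free:

```typescript
// Minimal stand-in for a zod schema; illustrative only.
type Schema<T> = { parse: (value: unknown) => T };
const runCommandInput: Schema<{ command: string }> = {
    parse: (value) => value as { command: string },
};

// v4 (deprecated): { description, parameters, execute }
// v5:              { description, inputSchema, execute }
const runCommandTool = {
    description: 'Run a read-only command in the sandbox',
    inputSchema: runCommandInput,
    execute: async (input: { command: string }) => `ran: ${input.command}`,
};
```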

apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx (1)

26-27: Context API rename to sendMessageToChat verified

  • The useChatContext hook in
    apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx defines and exports
    sendMessageToChat as part of its provider value.
  • All consumers (canvas overlay button, chat-input, error section, user messages) now destructure and invoke
    sendMessageToChat with the appropriate ChatType.
  • There are no leftover references to a context‐based sendMessage()—the only calls to a sendMessage() function are local, component-scoped helpers and do not refer to the old context API.
packages/ai/src/tools/web.ts (2)

45-46: webSearchTool: inputSchema rename is consistent with the migration.

Matches the pattern used across other tools in this PR.


34-35: Manual verification required: residual .parameters references

The automated scans did not complete successfully, so please manually confirm that all instances of the old parameters property and any consumers of .parameters have been removed or updated.

• In packages/ai/src, run:

    grep -R "\bparameters\s*:" packages/ai/src

  to ensure no tool configs still declare parameters:.
• In apps/ and other packages/, run:

    grep -R "\.parameters\b" apps packages

  to ensure no downstream code is accessing .parameters.

Once you’ve verified that there are zero matches, you can consider this change fully approved.

apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (1)

33-34: ✔ ChatContext sendMessageToChat is correctly exposed and all call sites updated

I’ve confirmed that:

  • ChatProvider’s context value spreads chat and includes the new sendMessageToChat helper (lines 67–93 in use-chat.tsx).
  • useChatContext() returns that helper (alongside isWaiting) as expected.
  • use-start-project.tsx now destructures { sendMessageToChat } = useChatContext() and no longer references any legacy sendMessage from context.
  • There are no remaining destructurings of a sendMessage helper from useChatContext, nor any direct context.sendMessage(...) calls anywhere in the codebase.

Everything looks correctly migrated—no further changes needed here.

packages/models/src/chat/message/message.ts (1)

15-21: Upgrade to MastraMessageContentV3: confirm metadata compatibility.

If V3’s base content already defines metadata, your extension should refine (not conflict) with its shape. If there’s a type mismatch, we may need to intersect rather than extend.

Please confirm CI type-check passes for this interface change. If needed, we can switch to:

export type ChatMessageContent = MastraMessageContentV3 & {
  metadata: {
    vercelId?: string;
    context: MessageContext[];
    checkpoints: MessageCheckpoints[];
  };
};
packages/ai/src/tools/plan.ts (2)

31-32: todoWriteTool: migrated to inputSchema — LGTM.

Consistent with the repo-wide change.


40-41: exitPlanModeTool: inputSchema rename — LGTM.

No behavior changes.

packages/ai/src/chat/providers.ts (2)

58-64: OpenRouter provider path looks good.

Key presence check + provider instantiation is consistent with the SDK. No concerns here.


12-12: Lingering LanguageModelV1 References Removed?
I ran the provided ripgrep command and saw no matches for LanguageModelV1 across your TypeScript files. It looks like the migration to LanguageModel is complete, but please manually verify that:

• There are no stray references in other file types (e.g. .js, .jsx, .md, .json).
• Configuration or documentation files have been updated accordingly.

Once you’ve confirmed those, we can consider this fully resolved.

packages/ai/test/tools/web-search.test.ts (1)

67-69: Schema wiring assertion is correct.

Asserting identity with WEB_SEARCH_TOOL_PARAMETERS is the right guard after the rename to inputSchema.

packages/ai/src/prompt/provider.ts (2)

8-9: Return-type migration to UIMessage with parts[] looks good.

The new shape (parts: [{ type: 'text', text }]) aligns with the UIMessage direction across the PR.

Also applies to: 69-69


115-119: Restore image attachments with file UIMessage.parts

We've confirmed that in Vercel AI SDK v5 the correct UIMessage.parts discriminator for binary files (including images) is type: 'file'. To preserve image context, uncomment and adapt the attachment mapping so it’s included alongside the text part. Apply this change in both affected sections (lines 115–119 and 124–126) of packages/ai/src/prompt/provider.ts, and—if the ImageMessageContext type exists in @onlook/models—add its import near the other types.

• File: packages/ai/src/prompt/provider.ts
– Lines 115–119 (and similarly 124–126): restore image parts
– Near other imports:

    import type { ImageMessageContext } from '@onlook/models';

-    // const attachments = images.map((i) => ({
-    //     type: 'file',
-    //     mimeType: i.mimeType,
-    //     data: i.content,
-    // }));
+    const imageParts = images.map((i) => ({
+        type: 'file',
+        mimeType: i.mimeType,
+        data: i.content,
+    }));

@@
     return {
         id,
         role: 'user',
-        parts: [{ type: 'text', text: prompt }],
-        // attachments,
+        parts: [{ type: 'text', text: prompt }, ...imageParts],
     };

This restores the dropped image context using the correct file type in v5.

packages/ai/src/tools/cli.ts (1)

10-11: No lingering parameters references detected

Ripgrep searches across packages/ai and its tests found no occurrences of parameters: or .parameters—all tools now consistently use inputSchema. Changes can be approved.

packages/ai/src/tools/edit.ts (1)

13-14: No remaining parameters: declarations found

I ran comprehensive scans across .ts, .tsx, and markdown files for any lingering parameters: entries and found none. The migration to inputSchema appears fully applied in both code and documentation.

apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (2)

25-33: Transport migration to DefaultChatTransport looks correct.

sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls + transport body shape align with the v5 flow.
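For readers new to the v5 flow, an illustrative re-implementation of the predicate behind `lastAssistantMessageIsCompleteWithToolCalls` (the real helper ships in 'ai'; part and message shapes are simplified from the UIMessage types):

```typescript
// Simplified UIMessage shapes; the real union has more part kinds.
type ToolState = 'input-streaming' | 'input-available' | 'output-available' | 'output-error';
type UIPart =
    | { type: 'text'; text: string }
    | { type: `tool-${string}`; state: ToolState };
type Msg = { role: 'user' | 'assistant' | 'system'; parts: UIPart[] };

function shouldAutoSend(messages: Msg[]): boolean {
    const last = messages[messages.length - 1];
    if (!last || last.role !== 'assistant') return false;
    const toolParts = last.parts.filter(
        (p): p is Extract<UIPart, { state: ToolState }> => p.type.startsWith('tool-'),
    );
    // Auto-resubmit only when every tool call has produced an output.
    return toolParts.length > 0 && toolParts.every((p) => p.state === 'output-available');
}
```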


92-93: Context provider shape LGTM.

Exposing sendMessageToChat alongside the useChat helpers keeps the surface tidy and backward-friendly.

Comment on lines 67 to 81
     const { object: repairedArgs } = await generateObject({
         model,
-        schema: tool?.parameters,
+        schema: tool?.inputSchema,
         prompt: [
             `The model tried to call the tool "${toolCall.toolName}"` +
                 ` with the following arguments:`,
             JSON.stringify(toolCall.args),
             `The tool accepts the following schema:`,
-            JSON.stringify(tool?.parameters),
+            JSON.stringify(tool?.inputSchema),
             'Please fix the arguments.',

⚠️ Potential issue

Guard against missing tool schema; JSON.stringify on a Zod schema will throw or yield “{}”.

In v5, tools expose a Zod schema via inputSchema. Two issues:

  • If tool or tool.inputSchema is undefined, generateObject will fail.
  • JSON.stringify(tool.inputSchema) often throws (circular) or becomes {}, degrading the repair prompt.

Add a hard guard and serialize the schema for the prompt via zod-to-json-schema.

     const { object: repairedArgs } = await generateObject({
       model,
-      schema: tool?.inputSchema,
+      schema: (() => {
+        if (!tool || !('inputSchema' in tool) || !tool.inputSchema) {
+          throw new Error(`Tool "${toolCall.toolName}" is missing inputSchema; cannot repair args.`);
+        }
+        return tool.inputSchema;
+      })(),
       prompt: [
         `The model tried to call the tool "${toolCall.toolName}"` +
         ` with the following arguments:`,
         JSON.stringify(toolCall.args),
         `The tool accepts the following schema:`,
-        JSON.stringify(tool?.inputSchema),
+        // Use a JSON representation of the Zod schema for clarity in the prompt
+        JSON.stringify(zodToJsonSchema(tool.inputSchema), null, 2),
         'Please fix the arguments.',
       ].join('\n'),
     });

Add this import at the top of the file:

import { zodToJsonSchema } from 'zod-to-json-schema';

And add zod-to-json-schema to the web client package.json dependencies.

 "dependencies": {
+  "zod-to-json-schema": "^3.23.0",
   ...
 }
🤖 Prompt for AI Agents
In apps/web/client/src/app/api/chat/helperts/stream.ts around lines 67 to 76,
the code calls JSON.stringify(tool?.inputSchema) without guarding for missing
tool or Zod schema and JSON.stringify on a Zod schema can throw or yield "{}";
modify to first import zodToJsonSchema from 'zod-to-json-schema' (and add the
package to web client dependencies), then compute a safeSchema variable that is
null or a serialized JSON Schema: if tool and tool.inputSchema exist convert
with zodToJsonSchema(tool.inputSchema) and JSON.stringify that result, otherwise
set safeSchema to a clear fallback string like "no schema provided"; use
safeSchema in the prompt and ensure generateObject is only called with schema
when present (or pass undefined explicitly) so generateObject won’t receive an
invalid value.

},
"dependencies": {
"@fastify/websocket": "^11.0.2",
"@onlook/rpc": "*",

💡 Verification agent

🧩 Analysis chain

Avoid unpinned dependency: replace "*" with a stable range (or workspace link).

Using "*" for @onlook/rpc makes installs non-deterministic and can break builds unexpectedly. If this is an internal workspace package, prefer "workspace:*". Otherwise, pin to a stable range like ^x.y.z.

Apply one of the following:

-        "@onlook/rpc": "*",
+        "@onlook/rpc": "workspace:*",

or

-        "@onlook/rpc": "*",
+        "@onlook/rpc": "^0.1.0",

Verification script to confirm it’s a workspace package:


🏁 Script executed:

#!/bin/bash
# List local packages named @onlook/rpc
fd package.json -t f | xargs -I{} jq -r '(.name // empty) + " @ " + input_filename' {} | rg '^@onlook/rpc @'

Length of output: 154


Pin internal workspace dependency to workspace:*

Since @onlook/rpc is a local workspace package (found in packages/rpc/package.json), using "*" makes installs non-deterministic. Please update the version in apps/web/server/package.json:

-        "@onlook/rpc": "*",
+        "@onlook/rpc": "workspace:*",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        "@onlook/rpc": "*",
+        "@onlook/rpc": "workspace:*",
🤖 Prompt for AI Agents
In apps/web/server/package.json around line 20, the internal workspace
dependency "@onlook/rpc" is pinned to "*" making installs non-deterministic;
change its version string from "*" to "workspace:*" so the package manager
resolves it from the monorepo workspace. After updating the dependency entry,
run your package manager install (npm/yarn/pnpm) to refresh the lockfile so the
workspace reference is recorded.

@ellipsis-dev
Contributor

ellipsis-dev bot commented Aug 22, 2025

⚠️ This PR is too big for Ellipsis, but support for larger PRs is coming soon. If you want us to prioritize this feature, let us know at [email protected]


Generated with ❤️ by ellipsis.dev


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
packages/db/src/dto/message.ts (1)

9-17: Don’t spread DbMessage into UI ChatMessage; explicitly project allowed fields

Spreading reintroduces DB-only/deprecated props (e.g., content, snapshots) into the UI/domain shape.

Apply:

-    const baseMessage = {
-        ...message,
-        threadId: message.conversationId,
-        metadata: {
-            vercelId: message.id,
-            context: message.context ?? [],
-            checkpoints: message.checkpoints ?? [],
-        },
-        parts: message.parts ?? [],
-    }
+    const baseMessage = {
+        id: message.id,
+        createdAt: message.createdAt,
+        threadId: message.conversationId,
+        metadata: {
+            vercelId: message.id,
+            context: message.context ?? [],
+            checkpoints: message.checkpoints ?? [],
+        },
+        parts: message.parts ?? [],
+    }
🧹 Nitpick comments (2)
packages/ai/package.json (1)

36-42: Confirm AI SDK v2 migration and standardize semver ranges

  • No remaining v1-era “parameters:” keys in the entire codebase (checked via ripgrep; zero hits)
  • All imports from @ai-sdk/anthropic|google|openai occur only in packages/ai/src/chat/providers.ts, indicating providers have been updated to the v2 API
  • Critical deps use exact versions in multiple package.json files, e.g.:
    • packages/ai/package.json (lines 36–42)
    • packages/models/package.json (line 36)
    • apps/web/client/package.json (line 69)

To avoid duplicated installs and ensure consistent dependency resolution, consider switching these to caret ranges. For example, in packages/ai/package.json:

-        "@ai-sdk/anthropic": "2.0.0",
-        "@ai-sdk/google": "2.0.0",
-        "@ai-sdk/openai": "2.0.0",
-        "ai": "5.0.26",
+        "@ai-sdk/anthropic": "^2.0.0",
+        "@ai-sdk/google": "^2.0.0",
+        "@ai-sdk/openai": "^2.0.0",
+        "ai": "^5.0.26",

Apply similar updates in packages/models/package.json and apps/web/client/package.json to align semver style across the monorepo.

packages/db/src/dto/message.ts (1)

41-44: Derive DB content more robustly; avoid empty fragments and add a separator

Current join('') collapses text parts without spacing and inserts empty strings for non-text parts.

Apply:

-        content: message.parts.map((part) => {
-            if (part.type === 'text') {
-                return part.text;
-            }
-            return '';
-        }).join(''),
+        content: message.parts
+            .filter((p) => p.type === 'text')
+            .map((p) => p.text)
+            .join(' '),

Please confirm DB consumers expect a plain-text summary and don’t require non-text parts to be serialized into content.
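A dependency-free sketch of the suggested derivation — keep only text parts and join with a space so adjacent fragments don't run together; the part shape is simplified from the real union:

```typescript
// Simplified part union; real UIMessage parts carry more variants and fields.
type Part = { type: 'text'; text: string } | { type: string };

function toDbContent(parts: Part[]): string {
    return parts
        .filter((p): p is { type: 'text'; text: string } => p.type === 'text')
        .map((p) => p.text)
        .join(' ');
}

console.log(toDbContent([
    { type: 'text', text: 'Hello' },
    { type: 'tool-web_search' },
    { type: 'text', text: 'world' },
])); // Hello world
```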

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between f460b73 and 29c17e5.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • apps/web/client/package.json (3 hunks)
  • packages/ai/package.json (1 hunks)
  • packages/db/src/dto/message.ts (3 hunks)
  • packages/models/package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/web/client/package.json
  • packages/models/package.json
🧰 Additional context used
🧬 Code graph analysis (1)
packages/db/src/dto/message.ts (1)
packages/models/src/chat/message/message.ts (1)
  • ChatMessage (34-34)
🔇 Additional comments (4)
packages/ai/package.json (2)

47-47: Dependency usage confirmed – keep the openai SDK

A quick search shows a direct import from openai in packages/ai/src/apply/client.ts:

• packages/ai/src/apply/client.ts:

import OpenAI from 'openai';

Since the package is still in use, the "openai": "^4.103.0" dependency should remain.


48-48: No Zod version discrepancy detected

All package.json files reference zod at ^4.1.3 (packages/ai, packages/models, apps/web/server, apps/web/client), so there’s no mixed 4.0.17 usage in the workspace. The original suggestion to align versions can be dismissed.

Likely an incorrect or invalid review comment.

packages/db/src/dto/message.ts (2)

50-50: LGTM: checkpoint mapping moved into metadata

This matches the new top-level metadata contract.


72-72: Return is fine once base object is corrected

No further action after applying the baseMessage refactor.

 import { ChatMessageRole, type AssistantChatMessage, type ChatMessage, type UserChatMessage } from "@onlook/models";
 import { assertNever } from '@onlook/utility';
-import type { Message as VercelMessage } from 'ai';
+import type { UIMessage as VercelMessage } from 'ai';

💡 Verification agent

🧩 Analysis chain

Prefer neutral alias: use UIMessage directly instead of VercelMessage

Removes vendor naming and aligns with package export. Also keeps signatures consistent.

Apply:

-import type { UIMessage as VercelMessage } from 'ai';
+import type { UIMessage } from 'ai';
-export const toOnlookMessageFromVercel = (message: VercelMessage, conversationId: string): ChatMessage => {
+export const toOnlookMessageFromVercel = (message: UIMessage, conversationId: string): ChatMessage => {
-export const toDbMessageFromVercel = (message: VercelMessage, conversationId: string): DbMessage => {
+export const toDbMessageFromVercel = (message: UIMessage, conversationId: string): DbMessage => {

To confirm repository-wide consistency, run:

Also applies to: 57-57, 75-75


🏁 Script executed:

#!/bin/bash
# Find any remaining aliasing to VercelMessage
rg -nP -C2 $'\\bUIMessage\\s+as\\s+VercelMessage\\b|\\bVercelMessage\\b' --type=ts

Length of output: 2245


Rename VercelMessage alias to UIMessage across packages

To remove the vendor-specific alias and align with the exported UIMessage type, apply these refactors in both packages/db and packages/ai.

• packages/db/src/dto/message.ts
– Change the import and all occurrences of VercelMessage to UIMessage
– Update the signatures of toOnlookMessageFromVercel and toDbMessageFromVercel accordingly

packages/db/src/dto/message.ts
-import type { UIMessage as VercelMessage } from 'ai';
+import type { UIMessage } from 'ai';

-export const toOnlookMessageFromVercel = (message: VercelMessage, conversationId: string): ChatMessage => {
+export const toOnlookMessageFromVercel = (message: UIMessage, conversationId: string): ChatMessage => {

-export const toDbMessageFromVercel = (message: VercelMessage, conversationId: string): DbMessage => {
+export const toDbMessageFromVercel = (message: UIMessage, conversationId: string): DbMessage => {

• packages/ai/src/stream/index.ts
– Remove the as VercelMessage alias in the import
– Change the function’s return type and any satisfies VercelMessage clauses to use UIMessage

packages/ai/src/stream/index.ts
-import type { UIMessage as VercelMessage, ModelMessage, TextPart } from 'ai';
+import type { UIMessage, ModelMessage, TextPart } from 'ai';

-export function toVercelStream(message: ChatMessage, opt: HydrateMessageOptions): VercelMessage {
+export function toVercelStream(message: ChatMessage, opt: HydrateMessageOptions): UIMessage {

-        } satisfies VercelMessage;
+        } satisfies UIMessage;

Run this final check to ensure no stray VercelMessage references remain:

rg -nP -C2 '\bUIMessage\s+as\s+VercelMessage\b|\bVercelMessage\b' --type=ts
🤖 Prompt for AI Agents
In packages/db/src/dto/message.ts around line 4 and in
packages/ai/src/stream/index.ts, replace the vendor-specific alias VercelMessage
with the exported UIMessage: change the import to import type { UIMessage } from
'ai' (remove any "as VercelMessage"), rename all occurrences of VercelMessage to
UIMessage, and update the function signatures (toOnlookMessageFromVercel,
toDbMessageFromVercel) and any return types or "satisfies" clauses to use
UIMessage instead; run the provided ripgrep check to ensure no VercelMessage
references remain.

Comment on lines +63 to 71
     const baseMessage: ChatMessage = {
         ...message,
         id: uuidv4(),
-        createdAt: message.createdAt ?? new Date(),
+        createdAt: new Date(),
         threadId: conversationId,
-        content,
-    }
-
-    switch (message.role) {
-        case ChatMessageRole.ASSISTANT:
-            return {
-                ...baseMessage,
-                role: message.role as ChatMessageRole.ASSISTANT,
-            } satisfies AssistantChatMessage;
-        case ChatMessageRole.USER:
-            return {
-                ...baseMessage,
-                role: message.role as ChatMessageRole.USER,
-            } satisfies UserChatMessage;
-        default:
-            throw new Error(`Unsupported message role: ${message.role}`);
+        metadata,
+        parts: message.parts ?? [],
+        role: message.role as ChatMessageRole,
     }

🛠️ Refactor suggestion

Avoid spreading UIMessage into ChatMessage; preserve createdAt if provided

Prevents leaking UI-only fields and keeps chronology when upstream supplies timestamps.

Apply:

-    const baseMessage: ChatMessage = {
-        ...message,
-        id: uuidv4(),
-        createdAt: new Date(),
-        threadId: conversationId,
-        metadata,
-        parts: message.parts ?? [],
-        role: message.role as ChatMessageRole,
-    }
+    const baseMessage: ChatMessage = {
+        id: uuidv4(),
+        createdAt: (message as any).createdAt ?? new Date(),
+        threadId: conversationId,
+        metadata,
+        parts: message.parts ?? [],
+        role: message.role as ChatMessageRole,
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    const baseMessage: ChatMessage = {
-        ...message,
-        id: uuidv4(),
-        createdAt: new Date(),
-        threadId: conversationId,
-        metadata,
-        parts: message.parts ?? [],
-        role: message.role as ChatMessageRole,
-    }
+    const baseMessage: ChatMessage = {
+        id: uuidv4(),
+        createdAt: (message as any).createdAt ?? new Date(),
+        threadId: conversationId,
+        metadata,
+        parts: message.parts ?? [],
+        role: message.role as ChatMessageRole,
+    }
🤖 Prompt for AI Agents
In packages/db/src/dto/message.ts around lines 63 to 71, avoid spreading the
entire UIMessage into the ChatMessage (which can leak UI-only fields) and ensure
createdAt from upstream is preserved when present; instead construct baseMessage
by explicitly mapping only the allowed ChatMessage fields from message (e.g.,
content/parts, role, metadata) and set id to uuidv4(), threadId to
conversationId, and createdAt to message.createdAt ?? new Date(); remove the
object spread of message so UI-only properties are not copied.

@Kitenite Kitenite changed the title Feat/update from v4 to v5 feat: update ai sdk from v4 to v5 Aug 27, 2025
@Kitenite Kitenite merged commit 1f6e025 into main Aug 27, 2025
4 of 7 checks passed
@Kitenite Kitenite deleted the feat/update-from-v4-to-v5 branch August 27, 2025 17:58
