feat: update ai sdk from v4 to v5 #2718
Conversation
This pull request has been ignored for the connected project Preview Branches by Supabase.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Caution: Review failed. The pull request is closed.

Walkthrough

This PR upgrades the AI SDK and zod, removes Mastra, migrates message types from Message/content to UIMessage/parts, renames tool schemas from `parameters` to `inputSchema`, refactors chat streaming and hooks to new APIs, adjusts server routes, updates tokens/tests, adds publish envVars propagation, and revises models/DB types.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant UI as Client UI
    participant Hook as useChat (DefaultChatTransport)
    participant API as POST /api/chat
    participant LLM as Model (LanguageModel)
    participant Tools as Tool Handlers
    UI->>Hook: sendMessageToChat(type, uiMessages)
    Hook->>Hook: toVercelMessageFromOnlook(messages)
    Hook->>API: streamText({ messages: convertToModelMessages(...) })
    API->>LLM: streamText(model, stopWhen(stepCountIs(MAX_STEPS)))
    LLM-->>API: stream events (assistant parts/tool-parts)
    API-->>Hook: UI message stream (toUIMessageStreamResponse)
    Hook->>Hook: onToolCall(part)
    Hook->>Tools: handleToolCall(toolName, input)
    Tools-->>Hook: addToolResult(output)
    Hook->>LLM: (continues/resumes stream)
    LLM-->>Hook: final assistant message
    Hook->>UI: onFinish({ message, metadata })
    note over Hook,UI: Updates conversation, suggestions, clears state
```
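The Hook → Tools → addToolResult hop in the diagram boils down to a name-keyed dispatch. A minimal sketch of that shape, with a hypothetical handler map standing in for Onlook's actual tool registry (real handlers are async and names come from the tool definitions):

```typescript
// Minimal dispatch sketch of the Hook -> Tools hop. The tool name and handler
// below are illustrative stand-ins, not the project's real registry.
type ToolHandler = (input: Record<string, unknown>) => unknown;

const toolHandlers: Record<string, ToolHandler> = {
    // Hypothetical client-side tool for illustration only.
    read_file: (input) => ({ path: input.file_path, content: '<file contents>' }),
};

function handleToolCall(toolName: string, input: Record<string, unknown>): unknown {
    const handler = toolHandlers[toolName];
    if (!handler) {
        // Returning an error result (rather than throwing) lets the stream
        // resume with a tool output the model can react to.
        return { error: `Unknown tool: ${toolName}` };
    }
    return handler(input);
}
```

The return value is what the hook then feeds back via addToolResult so the model can continue the stream.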
```mermaid
sequenceDiagram
    autonumber
    participant API as /api/chat/helpers/stream.repairToolCall
    participant Tools as tools[inputSchema]
    participant LLM as initModel(OPEN_AI_GPT_5_NANO)
    API->>Tools: resolve tool by toolName
    alt missing inputSchema
        API-->>API: throw invalid-parameter
    else valid
        API->>LLM: generateObject(schema = tool.inputSchema, input = toolCall.input)
        LLM-->>API: repaired input
        API-->>API: return { type: 'tool-call', toolCallId, toolName, input }
    end
```
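Stripped of the model call, the repair flow above is: look up the tool, try to validate the raw input, and regenerate it on failure. A sketch under that assumption, with a plain validator and a regeneration callback standing in for the generateObject round-trip (all names illustrative):

```typescript
// Simplified stand-in for the repairToolCall control flow. The real version
// asks a model (generateObject) to regenerate the input against the tool's
// inputSchema; here parseInput/regenerate model the same decision points.
type ToolDef = {
    // Returns the parsed input, or null when the raw input does not conform.
    parseInput: (raw: unknown) => Record<string, unknown> | null;
};

type RepairedToolCall = {
    type: 'tool-call';
    toolCallId: string;
    toolName: string;
    input: Record<string, unknown>;
};

function repairToolCall(
    tools: Record<string, ToolDef>,
    toolCall: { toolCallId: string; toolName: string; input: unknown },
    regenerate: (raw: unknown) => Record<string, unknown>, // stands in for the LLM round-trip
): RepairedToolCall {
    const tool = tools[toolCall.toolName];
    if (!tool) throw new Error(`invalid-parameter: unknown tool ${toolCall.toolName}`);
    // Keep valid input as-is; only fall back to regeneration on failure.
    const input = tool.parseInput(toolCall.input) ?? regenerate(toolCall.input);
    return { type: 'tool-call', toolCallId: toolCall.toolCallId, toolName: toolCall.toolName, input };
}
```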
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
packages/ai/src/stream/index.ts (outdated)

```diff
     ...message,
     parts: message.content.parts,
-    content: messageContent,
+    // content: messageContent,
```
In `toVercelMessageFromOnlook`, the `content` field for assistant messages is commented out. Verify that with the new message structure the `parts` field is sufficient.
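For context on why dropping `content` can be safe: a plain-text content string can generally be re-derived from the text parts. A simplified sketch (the shapes below are illustrative, not the SDK's exact UIMessage union):

```typescript
// Derive a legacy-style 'content' string from v5-style message parts.
// Only text parts contribute; tool parts are skipped.
type MessagePart =
    | { type: 'text'; text: string }
    | { type: 'tool-invocation'; toolName: string };

function contentFromParts(parts: MessagePart[]): string {
    return parts
        .filter((p): p is { type: 'text'; text: string } => p.type === 'text')
        .map((p) => p.text)
        .join('\n');
}
```

If any consumer still reads `content`, deriving it this way avoids storing two sources of truth.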
```diff
 import { useEditorEngine } from '@/components/store/editor';
 import { handleToolCall } from '@/components/tools';
-import { useChat, type UseChatHelpers } from '@ai-sdk/react';
+import { useChat, type UseChatHelpers} from '@ai-sdk/react';
```
There's a minor spacing inconsistency in the import statement on line 5: `import { useChat, type UseChatHelpers} from '@ai-sdk/react';`. It would be clearer and more consistent to include a space before the closing brace, e.g. `type UseChatHelpers }`. Consider fixing this typographical error.
```diff
@@ -889,7 +889,7 @@ var Ec = class extends Error {
       if (e) return;
       g.push(yc(l, i, r, n));
       let { remoteProxy: G, destroy: L } = Cc(l, r, n);
-      g.push(L), clearTimeout(u), (e = !0), c({ remoteProxy: G, destroy: X });
+      (g.push(L), clearTimeout(u), (e = !0), c({ remoteProxy: G, destroy: X }));
```
Typographical suggestion: In the function call to c(), the object property is set as 'destroy: X', but the variable destructured earlier is 'L'. Consider verifying if 'X' is the correct variable or if it should be renamed to 'L' for consistency.
```diff
@@ -10767,7 +10782,7 @@ class E {
     return this._refinement(l);
   }
   constructor(l) {
-    (this.spa = this.safeParseAsync),
+    ((this.spa = this.safeParseAsync),
```
Typo: It looks like there's an extra opening parenthesis in ((this.spa = this.safeParseAsync),. Please confirm if the double parenthesis is intentional or if it should be corrected to a single one.
```diff
@@ -10949,10 +10964,10 @@ class Gl extends E {
           exact: !1,
           message: n.message,
         }),
-        t.dirty();
+        t.dirty());
```
There seems to be an extra closing parenthesis in t.dirty()); — it likely should be t.dirty();.
```diff
@@ -10961,7 +10976,7 @@ class Gl extends E {
           exact: !1,
           message: n.message,
         }),
-        t.dirty();
+        t.dirty());
```
Typographical error: There is an extra closing parenthesis on this line. It should likely be t.dirty(); instead of t.dirty()); so please remove the superfluous ).
```diff
       }
       if (r.minLength !== null) {
         if (i.data.length < r.minLength.value)
-          x(i, {
+          (x(i, {
```
Typo: There's an unnecessary opening parenthesis before the call to x(i, { ... }. It should probably be x(i, { without the extra parenthesis.
```diff
@@ -12889,7 +12913,7 @@ function sh(l, i = {}, t) {
   var K4 = { object: tl.lazycreate },
     W;
   (function (l) {
-    (l.ZodString = 'ZodString'),
+    ((l.ZodString = 'ZodString'),
```
There appears to be an extra opening parenthesis at the start of this line. It looks like ((l.ZodString = 'ZodString') may be a typo. Please verify if the extra parenthesis is intended or if it should be removed.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
apps/web/client/src/server/api/routers/chat/conversation.ts (1)
51-56: Incorrect Drizzle update invocation—passing an object instead of a table.
`db.update` expects the table, not a spread object. As written, this likely fails type-checking and at runtime.

Fix:

```diff
-const [conversation] = await ctx.db.update({
-    ...conversations,
-    updatedAt: new Date(),
-}).set(input.conversation)
-    .where(eq(conversations.id, input.conversationId)).returning();
+const [conversation] = await ctx.db
+    .update(conversations)
+    .set({ ...input.conversation, updatedAt: new Date() })
+    .where(eq(conversations.id, input.conversationId))
+    .returning();
```

packages/models/src/llm/index.ts (1)
8-17: Update LLM model enums and token-limit constants to match provider specs

The model identifiers in your enums are correct, but the associated context-window limits (`MODEL_MAX_TOKENS`) must be updated to prevent runtime truncation or inference errors.

• Anthropic Sonnet 4 (Direct vs. OpenRouter)
  – Direct API ID: `claude-sonnet-4-20250514` supports 1,000,000 tokens.
  – OpenRouter ID: `anthropic/claude-sonnet-4` supports 200,000 tokens and does not recognize the dated Anthropic endpoint.
  Action: set
  • ANTHROPIC_MODELS.SONNET_4 → maxTokens = 1_000_000
  • OPENROUTER_MODELS.CLAUDE_4_SONNET → maxTokens = 200_000

• Anthropic 3.5 Haiku
  – `claude-3-5-haiku-20241022` universally supports 200,000 tokens.
  Action: set ANTHROPIC_MODELS.HAIKU → maxTokens = 200_000

• OpenAI GPT-5 & GPT-5-Nano
  – No public context-window documentation as of SDK v5.
  Action: add a `// TODO: confirm maxTokens with OpenAI docs or error payloads` comment or use a safe default + runtime fallback

Suggested diff (in packages/models/src/llm/index.ts):

```diff
 export enum ANTHROPIC_MODELS {
     SONNET_4 = 'claude-sonnet-4-20250514',
     HAIKU = 'claude-3-5-haiku-20241022',
 }

 export const MODEL_MAX_TOKENS: Record<ANTHROPIC_MODELS | OPENROUTER_MODELS, number> = {
-    [ANTHROPIC_MODELS.SONNET_4]: 200_000,
+    [ANTHROPIC_MODELS.SONNET_4]: 1_000_000, // direct Anthropic API supports 1M tokens
     [ANTHROPIC_MODELS.HAIKU]: 200_000, // universal for Haiku
     [OPENROUTER_MODELS.CLAUDE_4_SONNET]: 200_000, // OpenRouter limit
-    [OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO]: /*?*/,
-    [OPENROUTER_MODELS.OPEN_AI_GPT_5]: /*?*/,
+    [OPENROUTER_MODELS.OPEN_AI_GPT_5_NANO]: 0, // TODO: confirm with OpenAI
+    [OPENROUTER_MODELS.OPEN_AI_GPT_5]: 0, // TODO: confirm with OpenAI
 };
```

• Verify that OpenRouter isn't being called with `claude-sonnet-4-20250514` (it only recognizes `anthropic/claude-sonnet-4`).
• Add runtime guards or fallbacks for unknown GPT-5 limits to avoid silent truncation.

apps/web/client/src/app/api/chat/route.ts (1)
41-42: Await streamResponse to keep POST's try/catch effective.

Returning the promise without awaiting means errors thrown after the first await inside streamResponse (e.g., await req.json()) won't be caught by the POST handler's catch. Awaiting ensures consistent error responses and logging.

Apply this diff:

```diff
-    return streamResponse(req, user.id);
+    return await streamResponse(req, user.id);
```

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)
68-76: Enter handler calls handleEnterSelection twice (global keydown).

The condition already invokes the selection; calling it again executes duplicate actions.

Apply this diff:

```diff
-const handleGlobalKeyDown = (e: KeyboardEvent) => {
-    if (e.key === 'Enter' && suggestionRef.current?.handleEnterSelection()) {
-        e.preventDefault();
-        e.stopPropagation();
-        // Stop the event from bubbling to the canvas
-        e.stopImmediatePropagation();
-        // Handle the suggestion selection
-        suggestionRef.current.handleEnterSelection();
-    }
-};
+const handleGlobalKeyDown = (e: KeyboardEvent) => {
+    if (e.key !== 'Enter') return;
+    const selected = suggestionRef.current?.handleEnterSelection();
+    if (selected) {
+        e.preventDefault();
+        e.stopPropagation();
+        e.stopImmediatePropagation();
+    }
+};
```

Also applies to: 79-82
apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx (1)
28-37: Add submitting guard, await `sendMessageToChat`, and localize error toast

To ensure errors aren't swallowed, prevent duplicate sends, and keep toast messages translatable:

• Prevent double-submits by checking and toggling `inputState.isSubmitting` (the `InputState` already includes this flag).
• Await `sendMessageToChat(...)` so rejections are caught by your `try/catch` (its signature is `async (type: ChatType) => Promise<string | null | undefined>`).
• Reset `isSubmitting` in a `finally` block so the button always becomes active again.
• Replace the hard-coded English toast with your `useTranslations` key.

File: apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx
Lines: ~28–37

```diff
 const handleSubmit = async () => {
-    try {
+    // Prevent double sends
+    if (inputState.isSubmitting) return;
+    setInputState(prev => ({ ...prev, isSubmitting: true }));
+    try {
         editorEngine.state.rightPanelTab = EditorTabValue.CHAT;
         await editorEngine.chat.addEditMessage(inputState.value);
-        sendMessageToChat(ChatType.EDIT);
+        // Await so errors propagate to this catch
+        await sendMessageToChat(ChatType.EDIT);
         setInputState(DEFAULT_INPUT_STATE);
     } catch (error) {
         console.error('Error sending message', error);
-        toast.error('Failed to send message. Please try again.');
+        toast.error(
+            // Translations key; fallback to English if missing
+            t(transKeys.editor.panels.edit.tabs.chat.errors.sendFailed) ??
+            'Failed to send message. Please try again.'
+        );
+    } finally {
+        // Always re-enable submit
+        setInputState(prev => ({ ...prev, isSubmitting: false }));
     }
 };
```

packages/ai/src/chat/providers.ts (1)
18-22: Guard against missing MODEL_MAX_TOKENS entries.

If a new model key is added to OPENROUTER_MODELS/ANTHROPIC_MODELS without a corresponding entry in MODEL_MAX_TOKENS, maxTokens becomes undefined and may propagate silently. Fail fast here.

Apply:

```diff
-    let maxTokens: number = MODEL_MAX_TOKENS[requestedModel];
+    const maxTokens = MODEL_MAX_TOKENS[requestedModel];
+    if (maxTokens == null) {
+        throw new Error(`MODEL_MAX_TOKENS missing for model: ${requestedModel}`);
+    }
```

packages/ai/src/stream/index.ts (1)
66-84: Remove unused `getAssistantParts` export

The `getAssistantParts` function and its associated `toolCallSignatures` logic in `packages/ai/src/stream/index.ts` aren't referenced anywhere in the codebase (no imports or calls were found), so it can be safely removed to reduce surface area.

• File to update: packages/ai/src/stream/index.ts
• Action:
  - Delete the entire `getAssistantParts` function (lines 66–84)
  - Remove its export statement
🧹 Nitpick comments (35)
apps/web/client/src/server/api/routers/chat/conversation.ts (6)
79-85: Token cap vs. "2–4 words" mismatch—tighten to a smaller cap and lower max title length.

Fifty output tokens is far more than needed for a 2–4 word title; you'll still get long titles. Recommend a smaller cap and a slightly lower character limit to better enforce brevity.

Apply:

```diff
-const MAX_NAME_LENGTH = 50;
+const MAX_NAME_LENGTH = 40;
@@
-    maxOutputTokens: 50,
+    maxOutputTokens: 12,
```
98-104: Enforce 2–4 word constraint post-generation.

Add a lightweight server-side guard to match the prompt instructions and avoid persisting overly long titles.

```diff
-    const generatedName = result.text.trim();
-    if (generatedName && generatedName.length > 0 && generatedName.length <= MAX_NAME_LENGTH) {
+    const generatedName = result.text.trim().replace(/\s+/g, ' ');
+    const wordCount = generatedName.split(' ').filter(Boolean).length;
+    if (generatedName.length > 0 && generatedName.length <= MAX_NAME_LENGTH && wordCount >= 2 && wordCount <= 4) {
         await ctx.db.update(conversations).set({
             displayName: generatedName,
         }).where(eq(conversations.id, input.conversationId));
         return generatedName;
     }
```
86-95: Avoid sending raw userId in telemetry—hash or pseudonymize it first.

If experimental telemetry is routed to a third party, emitting `userId` may be undesirable. Hashing preserves cohort analysis without exposing PII.

Apply:

```diff
 experimental_telemetry: {
     isEnabled: true,
     metadata: {
         conversationId: input.conversationId,
-        userId: ctx.user.id,
+        anonUserId: hashUserId(ctx.user.id),
         tags: ['conversation-title-generation'],
         sessionId: input.conversationId,
         langfuseTraceId: uuidv4(),
     },
 },
```

Add the helper (outside the selected lines, near the top-level of this module):

```ts
import crypto from 'node:crypto';

const HASH_SALT = process.env.TELEMETRY_HASH_SALT ?? 'onlook-default-salt';

function hashUserId(input: string): string {
    return crypto.createHash('sha256').update(HASH_SALT).update(input).digest('hex');
}
```

Please confirm your telemetry destination and policy before adopting this.
100-103: Also bump updatedAt when persisting a generated title.

Your list view orders by updatedAt (Line 21). Persisting a new title without updating the timestamp could keep the conversation out of order.

```diff
-await ctx.db.update(conversations).set({
-    displayName: generatedName,
-}).where(eq(conversations.id, input.conversationId));
+await ctx.db.update(conversations).set({
+    displayName: generatedName,
+    updatedAt: new Date(),
+}).where(eq(conversations.id, input.conversationId));
```
80-97: Harden error handling around the model call.

A provider/network failure will throw and bubble up as a 500. Consider graceful handling that returns null so the client can retry.

```diff
-    const result = await generateText({
-        model,
-        headers,
-        prompt: `Generate a concise and meaningful conversation title (2-4 words maximum) that reflects the main purpose or theme of the conversation based on user's creation prompt. Generate only the conversation title, nothing else. Keep it short and descriptive. User's creation prompt: <prompt>${input.content}</prompt>`,
-        providerOptions,
-        maxOutputTokens: 50,
-        experimental_telemetry: {
-            isEnabled: true,
-            metadata: {
-                conversationId: input.conversationId,
-                userId: ctx.user.id,
-                tags: ['conversation-title-generation'],
-                sessionId: input.conversationId,
-                langfuseTraceId: uuidv4(),
-            },
-        },
-    });
+    let result;
+    try {
+        result = await generateText({
+            model,
+            headers,
+            prompt: `Generate a concise and meaningful conversation title (2-4 words maximum) that reflects the main purpose or theme of the conversation based on user's creation prompt. Generate only the conversation title, nothing else. Keep it short and descriptive. User's creation prompt: <prompt>${input.content}</prompt>`,
+            providerOptions,
+            maxOutputTokens: 50,
+            experimental_telemetry: {
+                isEnabled: true,
+                metadata: {
+                    conversationId: input.conversationId,
+                    userId: ctx.user.id,
+                    tags: ['conversation-title-generation'],
+                    sessionId: input.conversationId,
+                    langfuseTraceId: uuidv4(),
+                },
+            },
+        });
+    } catch (err) {
+        console.error('generateTitle: model call failed', { conversationId: input.conversationId, err });
+        return null;
+    }
```
106-107: Avoid logging full model result—log minimal context.

The full result may contain provider-specific metadata or sensitive prompt echoes. Log identifiers instead.

```diff
-    console.error('Error generating conversation title', result);
+    console.error('Error generating conversation title', { conversationId: input.conversationId });
```

apps/web/client/src/components/store/editor/chat/conversation.ts (2)
53-55: Avoid using exceptions for control flow when the current conversation is already empty.

Throwing here leads to user-facing error toasts for a benign case. Early-return instead to reuse the empty conversation.

```diff
-    if (this.current?.messages.length === 0 && !this.current?.conversation.title) {
-        throw new Error('Current conversation is already empty.');
-    }
+    if (this.current && this.current.messages.length === 0 && !this.current.conversation.title) {
+        // Reuse the empty, untitled conversation; no-op.
+        return;
+    }
```
56-60: Omit empty `suggestions` when calling `conversation.upsert`

We've verified that the `suggestions` column has a default of `[]` in the DB schema and that `conversationInsertSchema` (used by the TRPC input) makes it optional. Passing an explicit empty array will work, but it's redundant and generates unnecessary writes. Consider guarding it out when empty:

• File: apps/web/client/src/components/store/editor/chat/conversation.ts
• Around lines 56–60, replace

```ts
const newConversation = await api.chat.conversation.upsert.mutate({
    projectId: this.editorEngine.projectId,
    suggestions: [], // always an empty array here
});
```

with

```ts
const payload = {
    projectId: this.editorEngine.projectId,
    ...(suggestions.length > 0 && { suggestions }),
};
const newConversation = await api.chat.conversation.upsert.mutate(payload);
```

This tweak reduces noisy writes and leverages the schema's default. If you'd rather be explicit about defaults, leaving `suggestions: []` is harmless.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (3)
91-97: Await sendMessageToChat to preserve ordering and catch errors.

This call isn't awaited here, while other call sites (e.g., ChatInput) await it. If sendMessageToChat returns a Promise, lack of await can cause race conditions and unhandled rejections.

Apply this diff:

```diff
-    sendMessageToChat(ChatType.EDIT);
+    await sendMessageToChat(ChatType.EDIT);
```
69-74: Wrap clipboard write in try/catch to handle permission errors.

navigator.clipboard can reject (e.g., on HTTP origins or policy denials). Provide a graceful fallback/toast.

Apply this diff:

```diff
-async function handleCopyClick() {
-    const text = getUserMessageContent(message);
-    await navigator.clipboard.writeText(text);
-    setIsCopied(true);
-    setTimeout(() => setIsCopied(false), 2000);
-}
+async function handleCopyClick() {
+    try {
+        const text = getUserMessageContent(message);
+        await navigator.clipboard.writeText(text);
+        setIsCopied(true);
+        setTimeout(() => setIsCopied(false), 2000);
+    } catch (err) {
+        toast.error('Copy failed. Please try again.');
+    }
+}
```
213-216: Avoid unstable keys for list items.

Generating a new nanoid() on each render forces unnecessary re-mounts and can affect focus/animation. Prefer a stable identifier or fall back to index.

Apply this diff:

```diff
-{message.content.metadata.context.map((context) => (
-    <SentContextPill key={nanoid()} context={context} />
+{message.content.metadata.context.map((context, idx) => (
+    <SentContextPill key={(context as any)?.id ?? idx} context={context} />
 ))}
```

packages/models/src/llm/index.ts (1)
31-36: Consider deriving maxTokens from MODEL_MAX_TOKENS to avoid divergence.

Today, ModelConfig carries a maxTokens value independent of MODEL_MAX_TOKENS. Consider making maxTokens optional and defaulting from the mapping at the call site to prevent config drift.

Example change:

```diff
-export type ModelConfig = {
-    model: LanguageModel;
-    providerOptions?: Record<string, any>;
-    headers?: Record<string, string>;
-    maxTokens: number;
-};
+export type ModelConfig = {
+    model: LanguageModel;
+    providerOptions?: Record<string, any>;
+    headers?: Record<string, string>;
+    maxTokens?: number; // default from MODEL_MAX_TOKENS by model id
+};
```

And a small helper (in a suitable utils module):

```ts
export function resolveMaxTokens(modelId: string, override?: number) {
    return override ?? MODEL_MAX_TOKENS[modelId as keyof typeof MODEL_MAX_TOKENS];
}
```

Also applies to: 38-44
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (1)
22-55: Harden keys and add an explicit default case in parts map.
- Use stable keys; part.text and toolCallId can collide or be missing.
- Return null for unhandled part types to avoid inserting undefined into the render array.
Apply this diff:
```diff
-    const lastToolInvocationIdx = parts.map(p => p.type).lastIndexOf('tool-invocation');
-    return parts.map((part, idx) => {
+    const lastToolInvocationIdx = parts.map(p => p.type).lastIndexOf('tool-invocation');
+    return parts.map((part, idx) => {
         if (part.type === 'text') {
             return (
                 <MarkdownRenderer
                     messageId={messageId}
                     type="text"
-                    key={part.text}
+                    key={`${messageId}-text-${idx}`}
                     content={part.text}
                     applied={applied}
                     isStream={isStream}
                 />
             );
         } else if (part.type === 'tool-invocation') {
             return (
                 <ToolCallDisplay
                     messageId={messageId}
                     index={idx}
                     lastToolInvocationIdx={lastToolInvocationIdx}
                     toolInvocationData={part.toolInvocation}
-                    key={part.toolInvocation.toolCallId}
+                    key={part.toolInvocation.toolCallId ?? `${messageId}-tool-${idx}`}
                     isStream={isStream}
                     applied={applied}
                 />
             );
         } else if (part.type === 'reasoning') {
             if (!isStream) {
                 return null;
             }
             return (
-                <p>Introspecting...</p>
+                <p key={`${messageId}-reasoning-${idx}`}>Introspecting...</p>
             );
-        }
+        }
+        return null;
     });
```

apps/web/client/src/app/api/chat/route.ts (1)
4-4: Step-based termination with stepCountIs is a good v5-aligned change.

Nice simplification. Consider making MAX_STEPS configurable per environment to ease tuning across deployments.

Apply this diff:

```diff
-const MAX_STEPS = 20;
+const MAX_STEPS = Number(process.env.NEXT_PUBLIC_AI_MAX_STEPS ?? process.env.AI_MAX_STEPS ?? 20);
```

Also applies to: 8-8, 82-82
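One caveat with the one-liner above: `Number(undefined ?? 20)` is fine, but `Number('oops')` yields NaN. A factored sketch that hardens the fallback chain (the env var names follow the suggestion and are not an existing contract):

```typescript
// Resolve the step cap from env with a safe fallback. Invalid or non-positive
// values (NaN, 0, negatives) fall back to the default instead of propagating.
function resolveMaxSteps(env: Record<string, string | undefined>, fallback = 20): number {
    const raw = env.NEXT_PUBLIC_AI_MAX_STEPS ?? env.AI_MAX_STEPS;
    const parsed = Number(raw);
    return raw !== undefined && Number.isFinite(parsed) && parsed > 0 ? parsed : fallback;
}
```

Called as `resolveMaxSteps(process.env)` at module load, this keeps a bad deployment variable from silently disabling the step limit.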
packages/models/src/chat/request.ts (2)
11-16: DRY up StreamRequest/StreamRequestV2 to prevent drift.

Both types share requestType and useAnalytics. Factor out a base to keep them in sync and ease future edits.

```diff
 export enum StreamRequestType {
     CHAT = 'chat',
     CREATE = 'create',
     ERROR_FIX = 'error-fix',
     SUGGESTIONS = 'suggestions',
     SUMMARY = 'summary',
 }

-export type StreamRequest = {
-    messages: ModelMessage[];
-    systemPrompt: string;
-    requestType: StreamRequestType;
-    useAnalytics: boolean;
-};
+type BaseStreamRequest = {
+    requestType: StreamRequestType;
+    useAnalytics: boolean;
+};
+
+export type StreamRequest = BaseStreamRequest & {
+    messages: ModelMessage[];
+    systemPrompt: string;
+};

-export type StreamRequestV2 = {
-    messages: ModelMessage[];
-    requestType: StreamRequestType;
-    useAnalytics: boolean;
-};
+export type StreamRequestV2 = BaseStreamRequest & {
+    messages: ModelMessage[];
+};
```

Also applies to: 18-22
1-1: No CoreMessage usages in repo; optional: decouple 'ai' types in chat, llm, and db packages

Verified that there are no remaining `CoreMessage` references in the monorepo and that `StreamRequest`/`StreamRequestV2` are only defined (not consumed) in `packages/models/src/chat/request.ts`. To insulate downstream consumers from future-breaking changes in the external `ai` package, you may optionally re-export any imported types locally.

• packages/models/src/chat/request.ts

```diff
-import type { ModelMessage } from 'ai';
+import type { ModelMessage as AiModelMessage } from 'ai';
+// Re-export to decouple external packages from direct 'ai' imports
+export type ModelMessage = AiModelMessage;
```

• packages/models/src/llm/index.ts

```diff
-import type { LanguageModel } from 'ai';
+import type { LanguageModel as AiLanguageModel } from 'ai';
+export type LanguageModel = AiLanguageModel;
```

• packages/db/src/dto/message.ts

```diff
-import type { UIMessage as VercelMessage } from 'ai';
+import type { UIMessage as AiUIMessage } from 'ai';
+export type UIMessage = AiUIMessage;
```

Also remember to run your existing ripgrep commands (or your IDE's "find references") against any downstream packages that consume these public APIs to ensure nothing breaks with the new `ModelMessage` surface.

packages/ai/src/tools/sandbox.ts (1)
5-8: Optional: rename SANDBOX_TOOL_PARAMETERS to SANDBOX_TOOL_INPUT_SCHEMA for consistency.

The name now mismatches the property. Not required, but it prevents cognitive overhead going forward.

```diff
-export const SANDBOX_TOOL_PARAMETERS = z.object({
+export const SANDBOX_TOOL_INPUT_SCHEMA = z.object({
     command: ALLOWED_SANDBOX_COMMANDS.describe('The allowed command to run'),
 });

 export const sandboxTool = tool({
     description:
         'Restart the development server. This should only be used if absolutely necessary such as if updating dependencies, clearing next cache, or if the server is not responding.',
-    inputSchema: SANDBOX_TOOL_PARAMETERS,
+    inputSchema: SANDBOX_TOOL_INPUT_SCHEMA,
 });
```

Also applies to: 12-13
apps/web/client/src/components/tools/tools.ts (1)
61-63: Broaden inputSchema type to accept any Zod schema (not just ZodObject).

Future schemas might be wrapped with ZodEffects/ZodPipeline or differ in shape. Using ZodTypeAny avoids unnecessary friction.

```diff
-interface ClientToolMap extends Record<string, {
-    name: string;
-    inputSchema: z.ZodObject<any>;
-    handler: (args: any, editorEngine: EditorEngine) => Promise<any>;
-}> { }
+interface ClientToolMap extends Record<string, {
+    name: string;
+    inputSchema: z.ZodTypeAny;
+    handler: (args: any, editorEngine: EditorEngine) => Promise<any>;
+}> {}
```

apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (1)
98-99: Await the asynchronous sendMessageToChat call to guarantee correct order

The `sendMessageToChat` helper is declared as an `async` function returning a `Promise`, so invoking it without `await` may allow subsequent state updates (like marking the creation request complete) to run before the chat message is actually sent or fails. To ensure errors are handled in sequence and UI state remains consistent, await the call.

Pinpoint:
- File: apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx
- Lines: ~98–99

Suggested change:

```diff
-    sendMessageToChat(ChatType.CREATE);
+    await sendMessageToChat(ChatType.CREATE);
```

packages/models/src/chat/message/message.ts (2)
1-6: Drop unused V2 imports to avoid noUnusedLocals issues.

The file no longer references V2 types.

```diff
-import type {
-    MastraMessageContentV2,
-    MastraMessageContentV3,
-    MastraMessageV3,
-} from '@mastra/core/agent';
-import type { MastraMessageV2 } from '@mastra/core/memory';
+import type {
+    MastraMessageContentV3,
+    MastraMessageV3,
+} from '@mastra/core/agent';
```
22-26: Remove leftover V2 type imports in `message.ts`

We've verified via ripgrep that there are no downstream references to `MastraMessageV2` or `MastraMessageContentV2` outside of this file. The only remaining V2 mentions are two unused imports here, which should be removed to avoid confusion and keep the codebase clean.

– packages/models/src/chat/message/message.ts
  • Remove the import of `MastraMessageContentV2` from `@mastra/core/agent`
  • Remove the import of `MastraMessageV2` from `@mastra/core/memory`

Confirmed no other `MastraMessage(V2|ContentV2)` usages across `packages/` or `apps/` after running:

```sh
rg -nP -C3 '\bMastraMessage(V2|ContentV2)\b' packages/ apps/
```

packages/ai/src/tools/read.ts (2)
12-13: Rename to inputSchema is correct.

No logic change; wiring is good. Optional: tighten validation to prevent negative or fractional offsets/limits.

Example (outside this hunk):

```ts
export const READ_FILE_TOOL_PARAMETERS = z.object({
    file_path: z.string().describe('Absolute path to file'),
    offset: z.number().int().nonnegative().optional().describe('Starting line number (0-based)'),
    limit: z.number().int().positive().optional().describe('Number of lines to read'),
});
```
22-23: listFilesTool: inputSchema rename looks good.

Consider whether `ignore` should default to common patterns (e.g., node_modules, build artifacts) to reduce noise; can be added later if desired.

packages/ai/src/tools/plan.ts (1)
4-15: Remove the unused taskTool to avoid dead code and potential TS lint errors.

The comment says "Not used" and the const isn't exported or referenced.

```diff
-// Not used
-const TASK_TOOL_NAME = 'task';
-const TASK_TOOL_PARAMETERS = z.object({
-    description: z.string().min(3).max(50).describe('Short task description (3-5 words)'),
-    prompt: z.string().describe('Detailed task for the agent'),
-    subagent_type: z.enum(['general-purpose']).describe('Agent type'),
-});
-const taskTool = tool({
-    description: 'Launch specialized agents for analysis tasks',
-    inputSchema: TASK_TOOL_PARAMETERS,
-});
+// (Removed unused task tool)
```

packages/ai/src/chat/providers.ts (1)
53-56: Anthropic provider: add fast-fail API key check & use providerOptions for cache control
- Add a fast-fail guard for `ANTHROPIC_API_KEY`, since `createAnthropic` uses this env var for its `apiKey` option and failing fast prevents confusing 401 errors (sdk.vercel.ai).
- Acknowledge that v5 dropped the top-level `cacheControl` setting on provider creation; to request ephemeral caching you must pass

```ts
providerOptions: {
    anthropic: { cacheControl: { type: 'ephemeral' } },
}
```

on the specific message or message-part (sdk.vercel.ai).

Example patch:

```diff
 async function getAnthropicProvider(model: ANTHROPIC_MODELS): Promise<LanguageModel> {
+    if (!process.env.ANTHROPIC_API_KEY) {
+        throw new Error('ANTHROPIC_API_KEY must be set');
+    }
     const anthropic = createAnthropic();
     return anthropic(model);
 }
```

Ensure any existing code relying on provider-level cacheControl is migrated to the per-message `providerOptions` approach, and upgrade to a post-#5043 v5 release to get the message-part `providerOptions` forwarding fix.

packages/ai/test/tools/web-search.test.ts (2)
10-14: Nit: test title doesn’t match its assertions. The test mentions inputSchema but only checks name/exports here (inputSchema equality is asserted later). Either rename the title or add a light assertion.
Apply one of:
```diff
-it('should have the correct tool name and inputSchema', () => {
+it('should have the correct tool name and exports', () => {
```

or
```diff
 it('should have the correct tool name and inputSchema', () => {
     expect(WEB_SEARCH_TOOL_NAME).toBe('web_search');
     expect(WEB_SEARCH_TOOL_PARAMETERS).toBeDefined();
     expect(webSearchTool).toBeDefined();
+    expect(webSearchTool.inputSchema).toBeDefined();
 });
```
33-45: Nit: rename for clarity. “optional inputSchema” reads oddly. “optional fields” is clearer in test output.
```diff
-it('should validate all optional inputSchema', () => {
+it('should validate all optional fields', () => {
```

packages/db/src/dto/message.ts (3)
18-22: Avoid leaking DB-only fields into ChatMessage via spread. Spreading ...message brings DB-specific properties (checkpoints, snapshots, etc.) onto a ChatMessage object. Prefer a minimal base to avoid accidental coupling.
```diff
-    const baseMessage = {
-        ...message,
-        content,
-        threadId: message.conversationId,
-    }
+    const baseMessage = {
+        id: message.id,
+        createdAt: message.createdAt,
+        content,
+        threadId: message.conversationId,
+    };
```
47-53: fromMessage() discards non-text parts in content aggregation. This is fine for a human-readable content string, but be aware that files/images/tools won’t be represented here. If any consumer expects content to reflect those parts, consider including markers (e.g., “[file: X.png]”) or leaving content empty when non-text parts dominate.
Would you like a small helper to stringify parts with minimal markers?
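If useful, here is a minimal sketch of such a helper. The part shapes below are simplified stand-ins for the real UIMessage part types, so the discriminators (`text`, `file`, `tool-*`) would need to be checked against the actual SDK types:

```typescript
// Simplified stand-ins for UIMessage parts (illustrative, not the SDK types).
type Part =
    | { type: 'text'; text: string }
    | { type: 'file'; filename?: string }
    | { type: string; [key: string]: unknown };

// Aggregate parts into a human-readable content string, marking non-text parts.
function stringifyParts(parts: Part[]): string {
    return parts
        .map((part) => {
            if (part.type === 'text') return (part as { text: string }).text;
            if (part.type === 'file') return `[file: ${(part as { filename?: string }).filename ?? 'attachment'}]`;
            if (part.type.startsWith('tool-')) return `[tool: ${part.type.slice('tool-'.length)}]`;
            return `[${part.type}]`;
        })
        .join('\n');
}
```

This keeps `content` informative without trying to serialize binary or tool payloads.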
61-72: Ensure consistent handling of `vercelId` across mapping functions

The current mappings don’t preserve the original Vercel UI message ID through a save/load cycle:

- `toOnlookMessageFromVercel()` sets `metadata.vercelId = message.id` (the Vercel UI ID).
- `fromMessage()` (ChatMessage → DbMessage) omits `vercelId`, so it’s not stored in the database.
- `toMessage()` (DbMessage → ChatMessage) then repurposes `metadata.vercelId = message.id` as the DB record’s ID, overwriting the original value.

If downstream logic ever inspects `content.metadata.vercelId` expecting the original UI ID, it will now see the database ID instead.

Options to address this:

• Persist the Vercel ID in the database
– Add a `vercelId` column to the `DbMessage` model (or embed it in a JSON column), then update

```ts
fromMessage(message: ChatMessage): DbMessage {
    …;
    vercelId: message.content.metadata.vercelId;
    …
}
```

and in `toMessage()`:

```ts
metadata: {
    vercelId: message.vercelId, // restores original UI ID
    context: …,
    checkpoints: …,
}
```

• Rename the field in one of the mappings to avoid overloading the meaning
– e.g. in `toMessage()`, use `metadata.sourceId = message.id` (DB ID), leaving `metadata.vercelId` exclusively for the original UI ID.

• Drop `metadata.vercelId` in `toMessage()` if preserving the Vercel ID isn’t required downstream.

packages/ai/src/tools/cli.ts (1)
31-39: Optional: tighten Zod schemas (timeout, command). Consider:
- timeout: add .int().positive() to ensure whole milliseconds and non-negative values.
- command: basic refinement to ensure the first token is in allowed_commands when provided, or at least non-empty/trimmed.
This can live here or in the tool executor layer depending on architecture.
Would you like me to draft a small shared helper (validateCommandAgainstAllowlist(command, enum)) used by both bash_read and bash_edit?
Also applies to: 58-66, 83-99
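Happy to; a minimal sketch of that shared helper could look like the following (the name and return shape are illustrative, not an existing API):

```typescript
// Validate that a command's first token is in the tool's allow-list.
// Hypothetical helper, shared by the bash_read and bash_edit executors.
function validateCommandAgainstAllowlist(
    command: string,
    allowedCommands: readonly string[],
): { ok: true } | { ok: false; reason: string } {
    const trimmed = command.trim();
    if (trimmed.length === 0) {
        return { ok: false, reason: 'Command is empty' };
    }
    const firstToken = trimmed.split(/\s+/)[0]!;
    if (!allowedCommands.includes(firstToken)) {
        return { ok: false, reason: `Command "${firstToken}" is not in the allow-list` };
    }
    return { ok: true };
}
```

Whether this lives in the schema (via `.refine()`) or in the executor layer is the architectural choice mentioned above.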
packages/ai/src/stream/index.ts (3)
3-8: Type aliasing is fine; minor naming nit to reduce confusion. Aliasing `type UIMessage as VercelMessage` works, but it introduces a mental mapping cost across the codebase. Consider standardizing on `UIMessage` terminology everywhere to avoid dual naming (VercelMessage/UIMessage), or export a local `UIMessage` alias instead.
11-30: Switch to ModelMessage[] looks correct; watch for lost tool-call de-dup behavior.

The new `convertToStreamMessages` returns `ModelMessage[]` via `convertToModelMessages(uiMessages)`, which is aligned with the v5 flow. Previously we had infrastructure to avoid repeating identical tool invocations across assistant messages (via a `toolCallSignatures` map). That de-dup is no longer wired here. If repetition avoidance is still required, consider re-integrating the logic (e.g., by folding it into `toVercelMessageFromOnlook` or introducing a small pre-pass).

If you want, I can scan for any remaining references of the old `toolCallSignatures` pattern and propose a minimal reintegration based on actual usage sites.
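For reference, the de-dup idea can be sketched in a self-contained way (the part shape below is a simplified stand-in, not the real SDK type):

```typescript
// Skip tool parts whose (toolName, input) signature was already emitted
// in an earlier message; non-tool parts are always kept.
type ToolPart = { type: string; toolName?: string; input?: unknown };

function dedupeToolParts(messages: ToolPart[][]): ToolPart[][] {
    const seen = new Set<string>();
    return messages.map((parts) =>
        parts.filter((part) => {
            if (!part.toolName) return true;
            const signature = `${part.toolName}:${JSON.stringify(part.input ?? null)}`;
            if (seen.has(signature)) return false;
            seen.add(signature);
            return true;
        }),
    );
}
```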
38-42: Avoid returning both parts and content; strip content to match UIMessage shape.

Spreading `...message` also brings over `content` from the Onlook message. Returning both `parts` and a non-string `content` can confuse downstream consumers expecting the v5 `parts` shape. Remove `content` when constructing the UI message.

Apply this diff:

```diff
 if (message.role === ChatMessageRole.ASSISTANT) {
-    return {
-        ...message,
-        parts: message.content.parts,
-        // content: messageContent,
-    } satisfies VercelMessage;
+    const { content: _omitContent, ...rest } = message;
+    return {
+        ...rest,
+        parts: message.content.parts,
+    } satisfies VercelMessage;
 }
```

apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
34-55: Guard access to message.metadata.finishReason.

Depending on the exact `UIMessage` shape your provider returns, `metadata` may be absent or differently shaped. Add a safe access to avoid runtime errors and consider tightening the message type to include your metadata extension.

Apply this diff:

```diff
-onFinish: ({message}) => {
-    const finishReason = message.metadata.finishReason;
+onFinish: ({ message }) => {
+    const finishReason = (message as any)?.metadata?.finishReason as string | undefined;
     console.log('finishReason', finishReason);
-    console.log('message', message.metadata);
+    console.log('message', (message as any)?.metadata);
```

Optional follow-up: define a local `type OnlookUIMessage = UIMessage & { metadata?: { finishReason?: string } }` and type `UseChatHelpers<OnlookUIMessage>` to keep things typed.
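A self-contained sketch of that follow-up, with `UIMessage` stubbed locally (in the real code it comes from the `ai` package):

```typescript
// Stub of UIMessage for illustration; import the real type from 'ai'.
type UIMessage = { id: string; role: 'user' | 'assistant' | 'system'; parts: unknown[] };

// Local extension carrying the provider's finish metadata.
type OnlookUIMessage = UIMessage & {
    metadata?: { finishReason?: string };
};

// Safe accessor: returns undefined instead of throwing when metadata is absent.
function getFinishReason(message: OnlookUIMessage): string | undefined {
    return message.metadata?.finishReason;
}
```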
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (3)
- `apps/web/server/bun.lock` is excluded by `!**/*.lock`
- `bun.lock` is excluded by `!**/*.lock`
- `docs/bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (34)
- apps/web/client/package.json (4 hunks)
- apps/web/client/src/app/api/chat/helperts/stream.ts (1 hunks)
- apps/web/client/src/app/api/chat/route.ts (3 hunks)
- apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx (1 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (2 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (1 hunks)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx (1 hunks)
- apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (2 hunks)
- apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (2 hunks)
- apps/web/client/src/components/store/editor/chat/conversation.ts (1 hunks)
- apps/web/client/src/components/tools/tools.ts (2 hunks)
- apps/web/client/src/mastra/index.ts (1 hunks)
- apps/web/client/src/server/api/routers/chat/conversation.ts (1 hunks)
- apps/web/client/src/server/api/routers/project/project.ts (1 hunks)
- apps/web/server/package.json (1 hunks)
- packages/ai/package.json (1 hunks)
- packages/ai/src/chat/providers.ts (2 hunks)
- packages/ai/src/prompt/provider.ts (3 hunks)
- packages/ai/src/stream/index.ts (2 hunks)
- packages/ai/src/tools/cli.ts (5 hunks)
- packages/ai/src/tools/edit.ts (4 hunks)
- packages/ai/src/tools/guides.ts (1 hunks)
- packages/ai/src/tools/plan.ts (3 hunks)
- packages/ai/src/tools/read.ts (2 hunks)
- packages/ai/src/tools/sandbox.ts (1 hunks)
- packages/ai/src/tools/web.ts (2 hunks)
- packages/ai/test/tools/web-search.test.ts (3 hunks)
- packages/db/src/dto/message.ts (1 hunks)
- packages/models/package.json (1 hunks)
- packages/models/src/chat/message/message.ts (2 hunks)
- packages/models/src/chat/request.ts (2 hunks)
- packages/models/src/llm/index.ts (2 hunks)
- packages/ui/package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx (2)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
- useChatContext (95-100)
apps/web/client/src/components/store/editor/index.tsx (1)
useEditorEngine(9-13)
apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (1)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
useChatContext(95-100)
packages/ai/src/chat/providers.ts (1)
packages/models/src/llm/index.ts (2)
- InitialModelPayload (24-29)
- ModelConfig (31-36)
packages/ai/test/tools/web-search.test.ts (1)
packages/ai/src/tools/web.ts (2)
- webSearchTool (43-46)
- WEB_SEARCH_TOOL_PARAMETERS (38-42)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (1)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
useChatContext(95-100)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (1)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (1)
useChatContext(95-100)
apps/web/client/src/components/tools/tools.ts (8)
packages/ai/src/tools/read.ts (2)
- LIST_FILES_TOOL_PARAMETERS (16-19)
- READ_FILE_TOOL_PARAMETERS (5-9)
packages/ai/src/tools/guides.ts (1)
- READ_STYLE_GUIDE_TOOL_NAME (10-10)
apps/web/client/src/components/tools/helpers.ts (1)
- EMPTY_TOOL_PARAMETERS (3-3)
packages/ai/src/tools/edit.ts (3)
- SEARCH_REPLACE_EDIT_FILE_TOOL_PARAMETERS (5-10)
- WRITE_FILE_TOOL_PARAMETERS (39-42)
- FUZZY_EDIT_FILE_TOOL_PARAMETERS (50-63)
packages/ai/src/tools/cli.ts (5)
- TERMINAL_COMMAND_TOOL_PARAMETERS (5-7)
- BASH_READ_TOOL_PARAMETERS (31-39)
- GLOB_TOOL_PARAMETERS (73-76)
- GREP_TOOL_PARAMETERS (83-99)
- BASH_EDIT_TOOL_PARAMETERS (58-66)
packages/ai/src/tools/web.ts (2)
- SCRAPE_URL_TOOL_PARAMETERS (5-29)
- WEB_SEARCH_TOOL_PARAMETERS (38-42)
packages/ai/src/tools/sandbox.ts (1)
- SANDBOX_TOOL_PARAMETERS (6-8)
packages/ai/src/tools/plan.ts (2)
- TODO_WRITE_TOOL_PARAMETERS (17-28)
- EXIT_PLAN_MODE_TOOL_PARAMETERS (35-37)
packages/models/src/chat/message/message.ts (2)
packages/models/src/chat/message/context.ts (1)
- MessageContext (42-47)
packages/models/src/chat/message/checkpoint.ts (1)
MessageCheckpoints(15-15)
packages/ai/src/stream/index.ts (3)
packages/db/src/schema/project/chat/message.ts (1)
- messages (11-27)
packages/models/src/chat/message/message.ts (1)
- ChatMessage (38-38)
packages/ai/src/prompt/provider.ts (1)
HydrateMessageOptions(19-24)
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (3)
apps/web/client/src/components/store/editor/index.tsx (1)
- useEditorEngine (9-13)
apps/web/client/src/components/tools/tools.ts (1)
- handleToolCall (170-184)
packages/ai/src/stream/index.ts (1)
toVercelMessageFromOnlook(32-53)
🔇 Additional comments (37)
apps/web/server/package.json (1)
23-23: Zod v4 upgrade verified; no legacy ^3 dependencies remain

All package.json files now declare Zod at `^4.0.17` (no `^3.x` entries found):
- packages/ui/package.json
- packages/models/package.json
- packages/ai/package.json
- apps/web/server/package.json
- apps/web/client/package.json
Next steps before merging:
- Run the full test suite (including TRPC endpoint and UI validator tests) to catch any runtime incompatibilities introduced by Zod v4.
- Scan your Zod schemas for any deprecated v3 patterns (e.g. changed defaults for `.optional()`, differences in error formatting) and update as needed.
- Optionally, confirm at install time that only one version of Zod is present (e.g. via `npm ls zod` or `pnpm why zod`) to avoid duplicate copies.

With version alignment confirmed, this bump is ready to land once compatibility is validated.
apps/web/client/src/mastra/index.ts (1)
7-7: LGTM: formatting-only change. Trailing comma removal is harmless. No behavioral impact.
packages/models/package.json (2)
39-39: All Zod dependencies upgraded to v4; no action needed

I ran the suggested ripgrep command across all package.json files to look for any `"zod": "^3…"` entries; it returned no matches, confirming that all packages now reference Zod v4.

• No `^3` occurrences of Zod found in any package.json.
• Cross-package alignment with Zod v4 is confirmed.
36-36: No runtime imports of “ai”; devDependency remains appropriate

Verified that all mentions of `ai` in packages/models/src are type-only imports; no runtime imports or `require` calls were found:

- packages/models/src/llm/index.ts: `import type { LanguageModel } from 'ai';`
- packages/models/src/chat/request.ts: `import type { ModelMessage } from 'ai';`

Since `ai` is only used for type definitions, it can stay in devDependencies. Optionally, if you’d like to pick up patch fixes, you can loosen the version to `"ai": "^5.0.0"` in devDependencies.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx (1)
14-14: No action needed: `useChatContext` correctly exposes `sendMessageToChat`.

Verification shows:

- `ChatContext` is created with a type that includes `sendMessageToChat`.
- `ChatProvider` passes `sendMessageToChat` into the context value.
- `useChatContext()` returns `{ ...context, isWaiting }`, which spreads in `sendMessageToChat`.

apps/web/client/src/app/api/chat/helperts/stream.ts (1)
80-84: Confirm the expected type for ToolCall.args to avoid double JSON stringification

It’s unclear whether in AI SDK v5 the `ToolCall.args` field is typed as a `string` or an `object`. If it’s defined as an object, wrapping your `repairedArgs` in `JSON.stringify` will introduce an extra layer of JSON encoding, forcing consumers to parse twice. If it’s a string, you’ll need the JSON output to match the original shape.

Action items:

- Inspect the SDK’s `ToolCall` declaration (e.g. in your hoisted `node_modules/ai` folder or TypeScript types) to confirm whether `args` is `string` or `object`.
- Only apply `JSON.stringify` when the SDK expects a string; otherwise, pass the raw object.

Suggested refactor:

```diff
-    return {
-        ...toolCall,
-        args: JSON.stringify(repairedArgs),
-        toolCallType: 'function' as const
-    };
+    const normalizedArgs = typeof toolCall.args === 'string'
+        ? JSON.stringify(repairedArgs)
+        : repairedArgs;
+    return {
+        ...toolCall,
+        args: normalizedArgs,
+        toolCallType: 'function' as const
+    };
```

Please verify and align this change with the actual `ToolCall.args` type in your AI SDK v5.

apps/web/client/src/server/api/routers/project/project.ts (1)
304-311: Remaining `maxTokens` occurrences found – please verify rename consistency

I ran a sweep and found four instances of the old `maxTokens` field. If we’re standardizing on `maxOutputTokens` (Anthropic AI SDK v5) across the codebase, we should confirm whether these should be renamed or intentionally left as-is:

• apps/web/client/src/server/api/routers/chat/suggestion.ts:44 (`maxTokens: 10000,`)
• packages/models/src/llm/index.ts:35 (`maxTokens: number;`)
• packages/ai/src/chat/providers.ts:21 (`let maxTokens: number = MODEL_MAX_TOKENS[requestedModel];`)
• packages/ai/src/chat/providers.ts:49 (`maxTokens,`)

Aside from consistency, consider lowering the 50-token cap in `project.ts` to around 12–16 tokens for a 2–4 word name to save latency and cost (though 50 is harmless). Please verify these references and update as needed.

packages/ai/package.json (1)
35-49: AI SDK & Zod versions verified – all consistent
The `ai` package is only consumed by:

• packages/ai/package.json → [email protected]
• apps/web/client/package.json → [email protected]

Both are on 5.0.0, so no mismatched major versions.

All `@ai-sdk/*` providers share major version 2:

• packages/ai/package.json
– [email protected]
– [email protected]
– [email protected]
– [email protected]
– [email protected]
• apps/web/client/package.json – [email protected]

Zod is uniformly pinned to ^4.0.17 in every package that declares it (no v3 or v5 elsewhere).
No conflicting versions or duplicates detected.
apps/web/client/package.json (1)
31-31: Validate integration after AI SDK v5, Mastra, and Zod v4 bumps

Based on the search results:

- References to the old `parameters` API still exist in:
  - `packages/utility/test/urls.test.ts` and `packages/utility/test/image.test.ts` (`describe('…parameters…')`)
  - `packages/ai/test/tools/web.test.ts` (checks for tool parameters)
- The generated `apps/web/template/public/onlook-preload-script.js` (static bundle – safe to ignore for source integration but worth regenerating)
- `maxTokens` is still used in:
  - `apps/web/client/src/server/api/routers/chat/suggestion.ts` (`maxTokens: 10000`)
  - `packages/models/src/llm/index.ts` (type/interface)
  - `packages/ai/src/chat/providers.ts` (provider defaults and usage)
- No plain `useChat` or unqualified `sendMessage` calls were found in `apps/web/client/src` (note: the look-around regexes need PCRE2 support for finer filtering).
- The `pnpm-lock.yaml` file wasn’t located by the script, so Zod versions should be confirmed manually in your lockfile (ensure only v4 entries are present).

Please manually verify that:

- All AI SDK hooks/components (streaming, tool calls, etc.) in `@ai-sdk/[email protected]` and `[email protected]` are updated to the new method signatures.
- The special `@mastra/*@ai-v5` packages are compatible and stable with Next 15/React 19.
- All Zod schemas, especially in your TRPC routers, have been migrated to the v4 API (check for any lingering v3/v5 imports or signatures).

Affected areas to review:

- apps/web/client/src/server/api/routers/chat/suggestion.ts
- packages/ai/src/chat/providers.ts
- packages/models/src/llm/index.ts
- packages/utility/test/**/*.ts
- packages/ai/test/tools/web.test.ts
- Your root lockfile (`pnpm-lock.yaml` or equivalent)

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/user-message.tsx (1)
30-30: Rename to sendMessageToChat aligns with new ChatContext API.Destructuring looks correct and consistent with the PR’s broader API update.
packages/models/src/llm/index.ts (1)
1-1: Lingering `LanguageModelV1` references: none found in the codebase.
`getModelFromType` call sites:
- apps/web/client/src/app/api/chat/route.ts (line 72)
- apps/web/client/src/app/api/chat/helperts/stream.ts (line 5)
To complete verification, please provide the implementation of `getModelFromType` (in apps/web/client/src/app/api/chat/helperts/stream.ts) so we can confirm it returns a `LanguageModel` instance rather than just a model ID or string.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/index.tsx (2)
1-1: Type update to UIMessage['parts'] matches AI SDK v5. This aligns the component with v5 UIMessage typing. No runtime impact expected.
Also applies to: 14-14
22-23: lastToolInvocationIdx can be -1. Ensure ToolCallDisplay accepts it. If none exist, lastIndexOf returns -1. Confirm ToolCallDisplay gracefully handles this sentinel.
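A tiny self-contained illustration of the sentinel (the helper is hypothetical; the point is that a last-index search yields -1 when nothing matches):

```typescript
// Find the index of the last part whose type marks a tool invocation;
// returns -1 when no such part exists, mirroring lastIndexOf semantics.
function lastToolPartIndex(partTypes: string[]): number {
    let last = -1;
    for (let i = 0; i < partTypes.length; i++) {
        if (partTypes[i]!.startsWith('tool-')) last = i;
    }
    return last;
}
```

Any consumer like ToolCallDisplay should branch on the -1 case explicitly.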
apps/web/client/src/app/api/chat/route.ts (2)
55-60: No internal `maxSteps` references found; verify external callers

A repository-wide search for `maxSteps` in `.ts`, `.tsx`, `.js`, and `.jsx` files returned no matches, confirming that no internal client code still sends this field.

• Ensure any external or third-party clients (mobile apps, integrations, API consumers) have been updated to remove `maxSteps` from their request payloads and that your API documentation or changelog reflects this breaking change.
• Consider adding a deprecation notice or compatibility shim if you need to support older clients during a transition period.
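A minimal sketch of such a shim, assuming the route would rather tolerate than reject legacy payloads (the helper name and placement are illustrative):

```typescript
// Strip the removed `maxSteps` field from incoming legacy request bodies
// so older clients keep working during the transition.
function normalizeChatRequestBody<T extends Record<string, unknown>>(
    body: T,
): Omit<T, 'maxSteps'> {
    const { maxSteps: _legacyMaxSteps, ...rest } = body;
    return rest;
}
```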
79-109: Experimental flag names confirmed

The `streamText` options `experimental_repairToolCall` and top-level `providerOptions` are correct for Vercel AI SDK v5 and match the official naming. No changes needed.

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-input/index.tsx (2)
29-29: Context API rename consumption looks correct. sendMessageToChat, stop, isWaiting are destructured per the new API.
221-224: Optional chaining on captureScreenshot result is a solid safety improvement. Avoids null access when integrations vary by frame/view.

packages/ai/src/tools/sandbox.ts (1)
packages/ai/src/tools/sandbox.ts (1)
12-13: All.parametersreferences removed—rename toinputSchemais safeI ran a repository-wide search for both property accesses (
.parameters) and object keys (parameters:), excluding build artifacts, and found zero occurrences. This confirms that every consumer now referencesinputSchemaconsistently.apps/web/client/src/components/tools/tools.ts (2)
38-38: Type-only import for z is correct here.Only used for z.infer and types; keeps runtime bundle lean.
65-168: Verification complete – no leftover.parametersreferences found
All occurrences of the deprecated.parametersAPI have been replaced withinputSchema, and no.parametersusages remain in theapps/webdirectory. Approving these changes.packages/ai/src/tools/guides.ts (1)
7-8: parameters → inputSchema migration looks correct and consistent.No behavior change; matches other tool modules.
Also applies to: 13-14
apps/web/client/src/app/project/[id]/_components/canvas/overlay/elements/buttons/chat.tsx (1)
26-27: Context API rename tosendMessageToChatverified
- The
useChatContexthook in
apps/web/client/src/app/project/[id]/_hooks/use-chat.tsxdefines and exports
sendMessageToChatas part of its provider value.- All consumers (canvas overlay button, chat-input, error section, user messages) now destructure and invoke
sendMessageToChatwith the appropriateChatType.- There are no leftover references to a context‐based
sendMessage()—the only calls to asendMessage()function are local, component-scoped helpers and do not refer to the old context API.packages/ai/src/tools/web.ts (2)
45-46: webSearchTool: inputSchema rename is consistent with the migration.Matches the pattern used across other tools in this PR.
34-35: Manual verification required: residual.parametersreferencesThe automated scans did not complete successfully, so please manually confirm that all instances of the old
parametersproperty and any consumers of.parametershave been removed or updated.• In packages/ai/src:
• Run
bash grep -R "\bparameters\s*:" packages/ai/src
to ensure no tool configs still declareparameters:.
• In apps/ and other packages/:
• Run
bash grep -R "\.parameters\b" apps packages
to ensure no downstream code is accessing.parameters.Once you’ve verified that there are zero matches, you can consider this change fully approved.
apps/web/client/src/app/project/[id]/_hooks/use-start-project.tsx (1)
33-34: ✔ ChatContext `sendMessageToChat` is correctly exposed and all call sites updated

I’ve confirmed that:

- `ChatProvider`’s context value spreads `chat` and includes the new `sendMessageToChat` helper (lines 67–93 in use-chat.tsx).
- `useChatContext()` returns that helper (alongside `isWaiting`) as expected.
- use-start-project.tsx now destructures `{ sendMessageToChat } = useChatContext()` and no longer references any legacy `sendMessage` from context.
- There are no remaining destructurings of a `sendMessage` helper from `useChatContext`, nor any direct `context.sendMessage(...)` calls anywhere in the codebase.

Everything looks correctly migrated; no further changes needed here.

packages/models/src/chat/message/message.ts (2)
packages/models/src/chat/message/message.ts (1)
15-21: Upgrade to MastraMessageContentV3: confirm metadata compatibility.

If V3’s base content already defines `metadata`, your extension should refine (not conflict) with its shape. If there’s a type mismatch, we may need to intersect rather than extend.

Please confirm CI type-check passes for this interface change. If needed, we can switch to:

```typescript
export type ChatMessageContent = MastraMessageContentV3 & {
    metadata: {
        vercelId?: string;
        context: MessageContext[];
        checkpoints: MessageCheckpoints[];
    };
};
```

packages/ai/src/tools/plan.ts (2)
31-32: todoWriteTool: migrated to inputSchema — LGTM. Consistent with the repo-wide change.
40-41: exitPlanModeTool: inputSchema rename — LGTM. No behavior changes.

packages/ai/src/chat/providers.ts (2)
packages/ai/src/chat/providers.ts (2)
58-64: OpenRouter provider path looks good. Key presence check + provider instantiation is consistent with the SDK. No concerns here.
12-12: Lingering `LanguageModelV1` references removed?

I ran the provided ripgrep command and saw no matches for `LanguageModelV1` across your TypeScript files. It looks like the migration to `LanguageModel` is complete, but please manually verify that:

• There are no stray references in other file types (e.g. `.js`, `.jsx`, `.md`, `.json`).
• Configuration or documentation files have been updated accordingly.

Once you’ve confirmed those, we can consider this fully resolved.

packages/ai/test/tools/web-search.test.ts (1)
packages/ai/test/tools/web-search.test.ts (1)
67-69: Schema wiring assertion is correct. Asserting identity with WEB_SEARCH_TOOL_PARAMETERS is the right guard after the rename to inputSchema.

packages/ai/src/prompt/provider.ts (2)
packages/ai/src/prompt/provider.ts (2)
8-9: Return-type migration to UIMessage with parts[] looks good. The new shape (parts: [{ type: 'text', text }]) aligns with the UIMessage direction across the PR.
Also applies to: 69-69
115-119: Restore image attachments with `file` UIMessage.parts

We've confirmed that in Vercel AI SDK v5 the correct `UIMessage.parts` discriminator for binary files (including images) is `type: 'file'`. To preserve image context, uncomment and adapt the attachment mapping so it’s included alongside the text part. Apply this change in both affected sections (lines 115–119 and 124–126) of packages/ai/src/prompt/provider.ts, and, if the `ImageMessageContext` type exists in `@onlook/models`, add its import near the other types.

• File: packages/ai/src/prompt/provider.ts
– Lines 115–119 (and similarly 124–126): restore image parts
– Near other imports:

```ts
import type { ImageMessageContext } from '@onlook/models';
```

```diff
-    // const attachments = images.map((i) => ({
-    //     type: 'file',
-    //     mimeType: i.mimeType,
-    //     data: i.content,
-    // }));
+    const imageParts = images.map((i) => ({
+        type: 'file',
+        mimeType: i.mimeType,
+        data: i.content,
+    }));
@@
     return {
         id,
         role: 'user',
-        parts: [{ type: 'text', text: prompt }],
-        // attachments,
+        parts: [{ type: 'text', text: prompt }, ...imageParts],
     };
```

This restores the dropped image context using the correct `file` type in v5.

packages/ai/src/tools/cli.ts (1)
10-11: No lingering `parameters` references detected

Ripgrep searches across packages/ai and its tests found no occurrences of `parameters:` or `.parameters`; all tools now consistently use `inputSchema`. Changes can be approved.

packages/ai/src/tools/edit.ts (1)
13-14: No remaining `parameters:` declarations found

I ran comprehensive scans across `.ts`, `.tsx`, and markdown files for any lingering `parameters:` entries and found none. The migration to `inputSchema` appears fully applied in both code and documentation.

apps/web/client/src/app/project/[id]/_hooks/use-chat.tsx (2)
25-33: Transport migration to DefaultChatTransport looks correct.
`sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls` + the `transport` body shape align with the v5 flow.
92-93: Context provider shape LGTM. Exposing `sendMessageToChat` alongside the `useChat` helpers keeps the surface tidy and backward-friendly.
```diff
 const { object: repairedArgs } = await generateObject({
     model,
-    schema: tool?.parameters,
+    schema: tool?.inputSchema,
     prompt: [
         `The model tried to call the tool "${toolCall.toolName}"` +
             ` with the following arguments:`,
         JSON.stringify(toolCall.args),
         `The tool accepts the following schema:`,
-        JSON.stringify(tool?.parameters),
+        JSON.stringify(tool?.inputSchema),
         'Please fix the arguments.',
```
Guard against missing tool schema; JSON.stringify on a Zod schema will throw or yield “{}”.
In v5, tools expose a Zod schema via inputSchema. Two issues:
- If `tool` or `tool.inputSchema` is undefined, `generateObject` will fail.
- `JSON.stringify(tool.inputSchema)` often throws (circular) or becomes `{}`, degrading the repair prompt.
Add a hard guard and serialize the schema for the prompt via zod-to-json-schema.
```diff
 const { object: repairedArgs } = await generateObject({
     model,
-    schema: tool?.inputSchema,
+    schema: (() => {
+        if (!tool || !('inputSchema' in tool) || !tool.inputSchema) {
+            throw new Error(`Tool "${toolCall.toolName}" is missing inputSchema; cannot repair args.`);
+        }
+        return tool.inputSchema;
+    })(),
     prompt: [
         `The model tried to call the tool "${toolCall.toolName}"` +
             ` with the following arguments:`,
         JSON.stringify(toolCall.args),
         `The tool accepts the following schema:`,
-        JSON.stringify(tool?.inputSchema),
+        // Use a JSON representation of the Zod schema for clarity in the prompt
+        JSON.stringify(zodToJsonSchema(tool.inputSchema), null, 2),
         'Please fix the arguments.',
     ].join('\n'),
 });
```

Add this import at the top of the file:
```typescript
import { zodToJsonSchema } from 'zod-to-json-schema';
```

And add zod-to-json-schema to the web client package.json dependencies.
"dependencies": {
+ "zod-to-json-schema": "^3.23.0",
...
}🤖 Prompt for AI Agents
In apps/web/client/src/app/api/chat/helperts/stream.ts around lines 67 to 76,
the code calls JSON.stringify(tool?.inputSchema) without guarding for missing
tool or Zod schema and JSON.stringify on a Zod schema can throw or yield "{}";
modify to first import zodToJsonSchema from 'zod-to-json-schema' (and add the
package to web client dependencies), then compute a safeSchema variable that is
null or a serialized JSON Schema: if tool and tool.inputSchema exist convert
with zodToJsonSchema(tool.inputSchema) and JSON.stringify that result, otherwise
set safeSchema to a clear fallback string like "no schema provided"; use
safeSchema in the prompt and ensure generateObject is only called with schema
when present (or pass undefined explicitly) so generateObject won’t receive an
invalid value.
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/error.tsx
```json
},
"dependencies": {
    "@fastify/websocket": "^11.0.2",
    "@onlook/rpc": "*",
```
💡 Verification agent
🧩 Analysis chain
Avoid unpinned dependency: replace "*" with a stable range (or workspace link).
Using "" for @onlook/rpc makes installs non-deterministic and can break builds unexpectedly. If this is an internal workspace package, prefer "workspace:". Otherwise, pin to a stable range like ^x.y.z.
Apply one of the following:
- "@onlook/rpc": "*",
+ "@onlook/rpc": "workspace:*",or
- "@onlook/rpc": "*",
+ "@onlook/rpc": "^0.1.0",Verification script to confirm it’s a workspace package:
🏁 Script executed:
```bash
#!/bin/bash
# List local packages named @onlook/rpc
fd package.json -t f | xargs -I{} jq -r '(.name // empty) + " @ " + input_filename' {} | rg '^@onlook/rpc @'
```

Length of output: 154
Pin internal workspace dependency to workspace:*
Since @onlook/rpc is a local workspace package (found in packages/rpc/package.json), using "*" makes installs non-deterministic. Please update the version in apps/web/server/package.json:
- "@onlook/rpc": "*",
+ "@onlook/rpc": "workspace:*",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "@onlook/rpc": "*", | |
| "@onlook/rpc": "workspace:*", |
🤖 Prompt for AI Agents
In apps/web/server/package.json around line 20, the internal workspace
dependency "@onlook/rpc" is pinned to "*" making installs non-deterministic;
change its version string from "*" to "workspace:*" so the package manager
resolves it from the monorepo workspace. After updating the dependency entry,
run your package manager install (npm/yarn/pnpm) to refresh the lockfile so the
workspace reference is recorded.
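As a sketch, the resulting dependencies block in apps/web/server/package.json would look like this (only the two entries quoted above are shown; the surrounding fields are elided):

```json
{
    "dependencies": {
        "@fastify/websocket": "^11.0.2",
        "@onlook/rpc": "workspace:*"
    }
}
```

With the `workspace:` protocol, the package manager resolves `@onlook/rpc` from the monorepo rather than the registry, so the lockfile records a deterministic local link.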
Generated with ❤️ by ellipsis.dev
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/db/src/dto/message.ts (1)
9-17: Don’t spread DbMessage into UI ChatMessage; explicitly project allowed fields

Spreading reintroduces DB-only/deprecated props (e.g., content, snapshots) into the UI/domain shape.
Apply:
```diff
-    const baseMessage = {
-        ...message,
-        threadId: message.conversationId,
-        metadata: {
-            vercelId: message.id,
-            context: message.context ?? [],
-            checkpoints: message.checkpoints ?? [],
-        },
-        parts: message.parts ?? [],
-    }
+    const baseMessage = {
+        id: message.id,
+        createdAt: message.createdAt,
+        threadId: message.conversationId,
+        metadata: {
+            vercelId: message.id,
+            context: message.context ?? [],
+            checkpoints: message.checkpoints ?? [],
+        },
+        parts: message.parts ?? [],
+    }
```
🧹 Nitpick comments (2)
packages/ai/package.json (1)
36-42: Confirm AI SDK v2 migration and standardize semver ranges
- No remaining v1-era “parameters:” keys in the entire codebase (checked via ripgrep; zero hits)
- All imports from `@ai-sdk/anthropic|google|openai` occur only in `packages/ai/src/chat/providers.ts`, indicating providers have been updated to the v2 API
- Critical deps use exact versions in multiple `package.json` files, e.g.:
  • packages/ai/package.json (lines 36–42)
  • packages/models/package.json (line 36)
  • apps/web/client/package.json (line 69)

To avoid duplicated installs and ensure consistent dependency resolution, consider switching these to caret ranges. For example, in `packages/ai/package.json`:

```diff
- "@ai-sdk/anthropic": "2.0.0",
- "@ai-sdk/google": "2.0.0",
- "@ai-sdk/openai": "2.0.0",
- "ai": "5.0.26",
+ "@ai-sdk/anthropic": "^2.0.0",
+ "@ai-sdk/google": "^2.0.0",
+ "@ai-sdk/openai": "^2.0.0",
+ "ai": "^5.0.26",
```

Apply similar updates in `packages/models/package.json` and `apps/web/client/package.json` to align semver style across the monorepo.

packages/db/src/dto/message.ts (1)
41-44: Derive DB content more robustly; avoid empty fragments and add a separator

Current join('') collapses text parts without spacing and inserts empty strings for non-text parts.
Apply:
```diff
-    content: message.parts.map((part) => {
-        if (part.type === 'text') {
-            return part.text;
-        }
-        return '';
-    }).join(''),
+    content: message.parts
+        .filter((p) => p.type === 'text')
+        .map((p) => p.text)
+        .join(' '),
```

Please confirm DB consumers expect a plain-text summary and don’t require non-text parts to be serialized into `content`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- `bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- apps/web/client/package.json (3 hunks)
- packages/ai/package.json (1 hunks)
- packages/db/src/dto/message.ts (3 hunks)
- packages/models/package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/web/client/package.json
- packages/models/package.json
🧰 Additional context used
🧬 Code graph analysis (1)
packages/db/src/dto/message.ts (1)
packages/models/src/chat/message/message.ts (1)
ChatMessage(34-34)
🔇 Additional comments (4)
packages/ai/package.json (2)
47-47: Dependency usage confirmed – keep the `openai` SDK

A quick search shows a direct import from `openai` in `packages/ai/src/apply/client.ts`:

```ts
import OpenAI from 'openai';
```

Since the package is still in use, the `"openai": "^4.103.0"` dependency should remain.
48-48: No Zod version discrepancy detected

All `package.json` files reference `zod` at `^4.1.3` (packages/ai, packages/models, apps/web/server, apps/web/client), so there’s no mixed `4.0.17` usage in the workspace. The original suggestion to align versions can be dismissed.

Likely an incorrect or invalid review comment.
packages/db/src/dto/message.ts (2)
50-50: LGTM: checkpoint mapping moved into metadata

This matches the new top-level metadata contract.
72-72: Return is fine once base object is corrected

No further action after applying the baseMessage refactor.
```diff
  import { ChatMessageRole, type AssistantChatMessage, type ChatMessage, type UserChatMessage } from "@onlook/models";
  import { assertNever } from '@onlook/utility';
- import type { Message as VercelMessage } from 'ai';
+ import type { UIMessage as VercelMessage } from 'ai';
```
💡 Verification agent
🧩 Analysis chain
Prefer neutral alias: use UIMessage directly instead of VercelMessage
Removes vendor naming and aligns with package export. Also keeps signatures consistent.
Apply:
```diff
-import type { UIMessage as VercelMessage } from 'ai';
+import type { UIMessage } from 'ai';
```

```diff
-export const toOnlookMessageFromVercel = (message: VercelMessage, conversationId: string): ChatMessage => {
+export const toOnlookMessageFromVercel = (message: UIMessage, conversationId: string): ChatMessage => {
```

```diff
-export const toDbMessageFromVercel = (message: VercelMessage, conversationId: string): DbMessage => {
+export const toDbMessageFromVercel = (message: UIMessage, conversationId: string): DbMessage => {
```

To confirm repository-wide consistency, run:
Also applies to: 57-57, 75-75
🏁 Script executed:
```bash
#!/bin/bash
# Find any remaining aliasing to VercelMessage
rg -nP -C2 $'\\bUIMessage\\s+as\\s+VercelMessage\\b|\\bVercelMessage\\b' --type=ts
```

Length of output: 2245
Rename VercelMessage alias to UIMessage across packages
To remove the vendor-specific alias and align with the exported UIMessage type, apply these refactors in both packages/db and packages/ai.
• packages/db/src/dto/message.ts
– Change the import and all occurrences of VercelMessage to UIMessage
– Update the signatures of toOnlookMessageFromVercel and toDbMessageFromVercel accordingly
packages/db/src/dto/message.ts

```diff
-import type { UIMessage as VercelMessage } from 'ai';
+import type { UIMessage } from 'ai';
-export const toOnlookMessageFromVercel = (message: VercelMessage, conversationId: string): ChatMessage => {
+export const toOnlookMessageFromVercel = (message: UIMessage, conversationId: string): ChatMessage => {
-export const toDbMessageFromVercel = (message: VercelMessage, conversationId: string): DbMessage => {
+export const toDbMessageFromVercel = (message: UIMessage, conversationId: string): DbMessage => {
```

• packages/ai/src/stream/index.ts
– Remove the as VercelMessage alias in the import
– Change the function’s return type and any satisfies VercelMessage clauses to use UIMessage
packages/ai/src/stream/index.ts

```diff
-import type { UIMessage as VercelMessage, ModelMessage, TextPart } from 'ai';
+import type { UIMessage, ModelMessage, TextPart } from 'ai';
-export function toVercelStream(message: ChatMessage, opt: HydrateMessageOptions): VercelMessage {
+export function toVercelStream(message: ChatMessage, opt: HydrateMessageOptions): UIMessage {
- } satisfies VercelMessage;
+ } satisfies UIMessage;
```

Run this final check to ensure no stray VercelMessage references remain:

```bash
rg -nP -C2 '\bUIMessage\s+as\s+VercelMessage\b|\bVercelMessage\b' --type=ts
```

🤖 Prompt for AI Agents
In packages/db/src/dto/message.ts around line 4 and in
packages/ai/src/stream/index.ts, replace the vendor-specific alias VercelMessage
with the exported UIMessage: change the import to import type { UIMessage } from
'ai' (remove any "as VercelMessage"), rename all occurrences of VercelMessage to
UIMessage, and update the function signatures (toOnlookMessageFromVercel,
toDbMessageFromVercel) and any return types or "satisfies" clauses to use
UIMessage instead; run the provided ripgrep check to ensure no VercelMessage
references remain.
```diff
     const baseMessage: ChatMessage = {
         ...message,
         id: uuidv4(),
-        createdAt: message.createdAt ?? new Date(),
+        createdAt: new Date(),
         threadId: conversationId,
-        content,
-    }
-
-    switch (message.role) {
-        case ChatMessageRole.ASSISTANT:
-            return {
-                ...baseMessage,
-                role: message.role as ChatMessageRole.ASSISTANT,
-            } satisfies AssistantChatMessage;
-        case ChatMessageRole.USER:
-            return {
-                ...baseMessage,
-                role: message.role as ChatMessageRole.USER,
-            } satisfies UserChatMessage;
-        default:
-            throw new Error(`Unsupported message role: ${message.role}`);
+        metadata,
+        parts: message.parts ?? [],
+        role: message.role as ChatMessageRole,
     }
```
🛠️ Refactor suggestion
Avoid spreading UIMessage into ChatMessage; preserve createdAt if provided
Prevents leaking UI-only fields and keeps chronology when upstream supplies timestamps.
Apply:
```diff
-    const baseMessage: ChatMessage = {
-        ...message,
-        id: uuidv4(),
-        createdAt: new Date(),
-        threadId: conversationId,
-        metadata,
-        parts: message.parts ?? [],
-        role: message.role as ChatMessageRole,
-    }
+    const baseMessage: ChatMessage = {
+        id: uuidv4(),
+        createdAt: (message as any).createdAt ?? new Date(),
+        threadId: conversationId,
+        metadata,
+        parts: message.parts ?? [],
+        role: message.role as ChatMessageRole,
+    }
```
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```ts
const baseMessage: ChatMessage = {
    id: uuidv4(),
    createdAt: (message as any).createdAt ?? new Date(),
    threadId: conversationId,
    metadata,
    parts: message.parts ?? [],
    role: message.role as ChatMessageRole,
}
```
🤖 Prompt for AI Agents
In packages/db/src/dto/message.ts around lines 63 to 71, avoid spreading the
entire UIMessage into the ChatMessage (which can leak UI-only fields) and ensure
createdAt from upstream is preserved when present; instead construct baseMessage
by explicitly mapping only the allowed ChatMessage fields from message (e.g.,
content/parts, role, metadata) and set id to uuidv4(), threadId to
conversationId, and createdAt to message.createdAt ?? new Date(); remove the
object spread of message so UI-only properties are not copied.
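The explicit projection can be sketched as follows. The shapes are reduced to the fields named in the review; `UIMessageLike`, `ChatMessageLike`, and the `uiOnlyFlag` field are hypothetical stand-ins for the real types in `ai` and `@onlook/models`:

```typescript
import { randomUUID } from 'node:crypto';

// Simplified stand-ins for UIMessage and ChatMessage, for illustration only.
type UIMessageLike = {
    role: 'user' | 'assistant';
    parts?: { type: string; text?: string }[];
    createdAt?: Date;
    uiOnlyFlag?: boolean; // a UI-only field we deliberately do NOT copy
};

type ChatMessageLike = {
    id: string;
    createdAt: Date;
    threadId: string;
    metadata: Record<string, unknown>;
    parts: { type: string; text?: string }[];
    role: 'user' | 'assistant';
};

function toBaseMessage(
    message: UIMessageLike,
    conversationId: string,
    metadata: Record<string, unknown>,
): ChatMessageLike {
    // Project only the allowed fields instead of spreading `message`,
    // and preserve an upstream timestamp when one is provided.
    return {
        id: randomUUID(),
        createdAt: message.createdAt ?? new Date(),
        threadId: conversationId,
        metadata,
        parts: message.parts ?? [],
        role: message.role,
    };
}

const base = toBaseMessage({ role: 'user', uiOnlyFlag: true }, 'conv-1', {});
console.log('uiOnlyFlag' in base); // → false: UI-only fields are not copied
```

Because the object literal is built field by field, any extra properties on the incoming message simply never reach the ChatMessage shape, and an upstream `createdAt` survives instead of being overwritten.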
Description
Related Issues
Type of Change
Testing
Screenshots (if applicable)
Additional Notes
Important
Upgrade project dependencies to version 5, refactor chat message handling, and align schemas and types for consistency.
- `ai` to version 5.0.0 in `package.json` files of `apps/web/client`, `packages/ai`, `packages/models`, and `packages/ui`.
- `zod` to version 4.0.17 in `package.json` files of `apps/web/client`, `apps/web/server`, `packages/ai`, `packages/models`, and `packages/ui`.
- `@ai-sdk/react` to version 2.0.0 in `apps/web/client/package.json`.
- `@ai-sdk/anthropic`, `@ai-sdk/google`, and `@ai-sdk/openai` to version 2.0.0 in `packages/ai/package.json`.
- `UIMessage` and `UIMessagePart` in `packages/ai/src/chat/providers.ts` and `packages/ai/src/prompt/provider.ts`.
- `parameters` replaced with `inputSchema` in tool definitions in `packages/ai/src/tools/cli.ts`, `packages/ai/src/tools/plan.ts`, and `packages/ai/src/tools/web.ts`.
- `ChatMessageContent` updated to extend `MastraMessageContentV3` in `packages/models/src/chat/message/message.ts`.
- `apps/web/client/src/app/api/chat/route.ts` and `packages/db/src/schema/project/chat/message.ts` updated.
- `format` in `ChatMessageContent` set to 3 in `packages/db/src/dto/message.ts` and `packages/db/src/schema/project/chat/message.ts`.

This description was created by
for 9e48f72. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Chores