Walkthrough

Removes the CUSTOM provider and its routing logic, standardizes OpenRouter option application in default model selection (merging any existing openrouter settings), and replaces the default providerOptions initialization with an empty object.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant App as Web App
    participant Env as Env Vars
    participant Model as selectDefaultModel()
    participant OR as OpenRouter Options
    User->>App: triggers default model selection
    App->>Model: selectDefaultModel()
    Model->>Env: read DEFAULT_LLM_PROVIDER and DEFAULT_OPENROUTER_PROVIDERS
    alt Provider == OPENROUTER
        Model->>OR: createOpenRouterProviderOptions(env.DEFAULT_OPENROUTER_PROVIDERS || "")
        OR-->>Model: openRouterOptions (may be empty)
        Model->>Model: merge providerOptions with openRouterOptions (preserve existing openrouter & reasoning fields)
    else Provider != OPENROUTER
        Model->>Model: set providerOptions = {}
    end
    Model-->>App: return selected model + providerOptions
```
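The merge step in the diagram can be sketched as follows. The names mirror the PR (`createOpenRouterProviderOptions`, `providerOptions`), but this is an illustrative reconstruction, not the exact source:

```typescript
// Sketch of the default-model OpenRouter flow described above.
// Names follow the PR; the implementation details are assumptions.
type ProviderOptions = Record<string, any>;

function createOpenRouterProviderOptions(providers: string): ProviderOptions {
  // "anthropic, openai" -> ["anthropic", "openai"]
  const order = providers
    .split(",")
    .map((p) => p.trim())
    .filter(Boolean);
  return {
    openrouter: {
      provider: order.length > 0 ? { order } : undefined,
      reasoning: { max_tokens: 20 },
    },
  };
}

function mergeOpenRouterOptions(
  providerOptions: ProviderOptions,
  openRouterOptions: ProviderOptions,
): ProviderOptions {
  // Preserve any openrouter options set earlier; existing fields win,
  // and reasoning is deep-merged so it always exists.
  const existing = providerOptions.openrouter ?? {};
  return {
    ...providerOptions,
    openrouter: {
      ...openRouterOptions.openrouter,
      ...existing,
      reasoning: {
        ...openRouterOptions.openrouter.reasoning,
        ...(existing.reasoning ?? {}),
      },
    },
  };
}
```

With this shape, a previously set `reasoning` field survives the merge while the provider `order` from the env var is still applied.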
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (2)
apps/web/utils/llms/model.ts (2)
289-318: Add a safety shim for persisted "custom" providers to avoid crashes.

Users or legacy records may still have aiProvider === "custom". Normalize it before use to avoid the “LLM provider not supported” throw.

```diff
 if (aiApiKey) {
   aiProvider = userAi.aiProvider || env.DEFAULT_LLM_PROVIDER;
   aiModel = userAi.aiModel || null;
 } else {
   aiProvider = env.DEFAULT_LLM_PROVIDER;
   aiModel = env.DEFAULT_LLM_MODEL || null;
 }
+// Backward-compat: normalize deprecated "custom" to a supported default
+if (aiProvider === "custom") {
+  logger.warn('Deprecated provider "custom" detected. Falling back to "anthropic".', {
+    source: "selectDefaultModel",
+  });
+  aiProvider = Provider.ANTHROPIC;
+}
```
195-200: Nit: avoid `provider: undefined` in options payload.

Prefer omitting the key entirely to reduce noise in logs/requests.

```diff
- return {
-   openrouter: {
-     provider: order.length > 0 ? { order } : undefined,
-     reasoning: { max_tokens: 20 },
-   },
- };
+ return {
+   openrouter: {
+     ...(order.length > 0 ? { provider: { order } } : {}),
+     reasoning: { max_tokens: 20 },
+   },
+ };
```
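For illustration, a standalone snippet (not project code) showing why omitting the key differs from setting it to `undefined`:

```typescript
// Standalone illustration: `provider: undefined` vs. omitting the key.
const order: string[] = []; // empty provider order, as when no providers are configured

// Style the nit flags: the key exists even though its value is undefined.
const withUndefined = {
  provider: order.length > 0 ? { order } : undefined,
  reasoning: { max_tokens: 20 },
};

// Suggested style: conditional spread, so the key is absent entirely.
const omitted = {
  ...(order.length > 0 ? { provider: { order } } : {}),
  reasoning: { max_tokens: 20 },
};

console.log("provider" in withUndefined); // true — key present, value undefined
console.log("provider" in omitted); // false — key absent
// JSON.stringify drops undefined-valued keys, but key enumeration
// (Object.keys, `in`, and some validators) still sees the difference.
```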
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- apps/web/env.ts (1 hunks)
- apps/web/utils/llms/config.ts (0 hunks)
- apps/web/utils/llms/model.test.ts (0 hunks)
- apps/web/utils/llms/model.ts (2 hunks)
💤 Files with no reviewable changes (2)
- apps/web/utils/llms/config.ts
- apps/web/utils/llms/model.test.ts
🧰 Additional context used
📓 Path-based instructions (13)
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (apps/web/CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use @/ for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX
Files:
apps/web/env.ts, apps/web/utils/llms/model.ts
apps/web/**/{.env.example,env.ts,turbo.json}
📄 CodeRabbit inference engine (apps/web/CLAUDE.md)
Add environment variables to .env.example, env.ts, and turbo.json
Files:
apps/web/env.ts
apps/web/**/{.env.example,env.ts}
📄 CodeRabbit inference engine (apps/web/CLAUDE.md)
Client-side environment variables: Prefix with NEXT_PUBLIC_
Files:
apps/web/env.ts
!{.cursor/rules/*.mdc}
📄 CodeRabbit inference engine (.cursor/rules/cursor-rules.mdc)
Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location
Files:
apps/web/env.tsapps/web/utils/llms/model.ts
apps/web/env.ts
📄 CodeRabbit inference engine (.cursor/rules/environment-variables.mdc)
apps/web/env.ts: When adding a new environment variable, add it to apps/web/env.ts in the appropriate section: use server for server-only variables, and for client-side variables, use the client section and also add to experimental__runtimeEnv.
Client-side environment variables must be prefixed with NEXT_PUBLIC_ and added to both the client and experimental__runtimeEnv sections in apps/web/env.ts.
Files:
apps/web/env.ts
**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/form-handling.mdc)
**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod
Files:
apps/web/env.ts, apps/web/utils/llms/model.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/logging.mdc)
**/*.{ts,tsx}: Use createScopedLogger for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using createScopedLogger
Only use .with() on a logger instance within a specific function, not for a global logger
Import Prisma in the project using import prisma from "@/utils/prisma";
**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either T[] or Array consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.
Files:
apps/web/env.ts, apps/web/utils/llms/model.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/ultracite.mdc)
**/*.{js,jsx,ts,tsx}: Don't use elements in Next.js projects.
Don't use elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...
Files:
apps/web/env.ts, apps/web/utils/llms/model.ts
!pages/_document.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/ultracite.mdc)
!pages/_document.{js,jsx,ts,tsx}: Don't import next/document outside of pages/_document.jsx in Next.js projects.
Don't import next/document outside of pages/_document.jsx in Next.js projects.
Files:
apps/web/env.ts, apps/web/utils/llms/model.ts
apps/web/utils/**
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Create utility functions in
utils/folder for reusable logic
Files:
apps/web/utils/llms/model.ts
apps/web/utils/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size
Files:
apps/web/utils/llms/model.ts
apps/web/utils/{ai,llms}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm.mdc)
apps/web/utils/{ai,llms}/**/*.ts: Place LLM-related implementation code under apps/web/utils/ai or apps/web/utils/llms
Keep system and user prompts separate; system defines role/task, user contains data/context
Always validate LLM responses with a specific Zod schema
Use descriptive scoped loggers per feature and log inputs/outputs with appropriate levels and context
Implement early returns for invalid inputs and use proper error types with logging
Add fallbacks for AI failures and include retry logic for transient errors using withRetry
Format prompts with XML-like tags; remove excessive whitespace; truncate overly long inputs; keep formatting consistent
Use TypeScript types for all parameters/returns and define interfaces for complex IO structures
Keep related AI functions co-located; extract shared logic into utilities; document complex AI logic with clear comments
Call LLMs via createGenerateObject; pass system, prompt, and a Zod schema; return the validated result.object
Derive model options using getModel(...) and pass them to createGenerateObject and the generate call
Files:
apps/web/utils/llms/model.ts
apps/web/utils/llms/model.ts
📄 CodeRabbit inference engine (.cursor/rules/llm.mdc)
Define model configurations and helpers in apps/web/utils/llms/model.ts
Files:
apps/web/utils/llms/model.ts
🧠 Learnings (2)
📚 Learning: 2025-08-17T16:57:25.834Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-17T16:57:25.834Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Derive model options using getModel(...) and pass them to createGenerateObject and the generate call
Applied to files:
apps/web/utils/llms/model.ts
📚 Learning: 2025-08-17T16:57:25.834Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-17T16:57:25.834Z
Learning: Applies to apps/web/utils/llms/model.ts : Define model configurations and helpers in apps/web/utils/llms/model.ts
Applied to files:
apps/web/utils/llms/model.ts
🧬 Code graph analysis (1)
apps/web/utils/llms/model.ts (2)
apps/web/utils/llms/config.ts (2)
providerOptions (37-48), Provider (7-15)
apps/web/env.ts (1)
env (16-236)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: cubic · AI code reviewer
🔇 Additional comments (1)
apps/web/utils/llms/model.ts (1)
177-181: Good defensive default for unknown providers.

Clear error + log; once "custom" normalization is added, this should only trigger for truly invalid input.
```ts
DEFAULT_LLM_PROVIDER: z
  // custom is deprecated
  .enum([...llmProviderEnum.options, "custom"])
  .default("anthropic"),
DEFAULT_LLM_MODEL: z.string().optional(),
```
Blocker: Env schema still allows "custom" while runtime support was removed. Fail fast instead of crashing later.
Provider.CUSTOM is gone (apps/web/utils/llms/config.ts), but the env schema still accepts "custom". If DEFAULT_LLM_PROVIDER=custom (or persisted configs set it), selectModel will hit the default case and throw at runtime. Remove "custom" from the enum so misconfigs are caught at startup.
Apply this diff:
```diff
- DEFAULT_LLM_PROVIDER: z
-   // custom is deprecated
-   .enum([...llmProviderEnum.options, "custom"])
-   .default("anthropic"),
+ DEFAULT_LLM_PROVIDER: llmProviderEnum.default("anthropic"),
```

Optional: if you prefer a friendlier error message during migration, keep accepting the literal but reject it explicitly:
```diff
- DEFAULT_LLM_PROVIDER: z
-   .enum([...llmProviderEnum.options, "custom"])
-   .default("anthropic"),
+ DEFAULT_LLM_PROVIDER: z
+   .enum(llmProviderEnum.options)
+   .or(z.literal("custom"))
+   .refine((v) => v !== "custom", {
+     message:
+       'DEFAULT_LLM_PROVIDER "custom" is deprecated. Use one of: ' +
+       llmProviderEnum.options.join(", "),
+   })
+   .default("anthropic"),
```

(No transform back to "anthropic" is needed here: the refine already rejects "custom" at parse time, so a transform mapping it would be dead code.)
+ .default("anthropic"),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
DEFAULT_LLM_PROVIDER: llmProviderEnum.default("anthropic"),
DEFAULT_LLM_MODEL: z.string().optional(),
```
🤖 Prompt for AI Agents
In apps/web/env.ts around lines 30 to 34, the zod env schema still includes the
deprecated "custom" value in DEFAULT_LLM_PROVIDER which causes a runtime crash
if used; remove "custom" from the enum options so invalid configs fail at
startup. Update the enum to exclude "custom" (or, if you want a
migration-friendly message, accept the literal but add an explicit refine/check
that throws a clear error instructing to change "custom" to a supported
provider). Ensure DEFAULT_LLM_PROVIDER still has the same default after the
change and run schema validation in startup to catch bad env values early.
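The fail-fast behavior suggested above can be sketched without Zod. The provider list here is assumed for illustration; in the project it lives in llmProviderEnum:

```typescript
// Dependency-free sketch of "reject deprecated value at startup".
// The option list is an assumption for illustration, not the real enum.
const llmProviderChoices = ["anthropic", "openai", "google", "openrouter"] as const;
type LlmProvider = (typeof llmProviderChoices)[number];

function parseDefaultLlmProvider(raw: string | undefined): LlmProvider {
  if (raw === undefined || raw === "") return "anthropic"; // schema default
  if (raw === "custom") {
    // Fail fast with a clear migration message instead of crashing later
    // in selectModel's default case.
    throw new Error(
      `DEFAULT_LLM_PROVIDER "custom" is deprecated. Use one of: ${llmProviderChoices.join(", ")}`,
    );
  }
  if (!(llmProviderChoices as readonly string[]).includes(raw)) {
    throw new Error(`Unknown DEFAULT_LLM_PROVIDER: ${raw}`);
  }
  return raw as LlmProvider;
}
```

Running this once during env validation surfaces a misconfigured `DEFAULT_LLM_PROVIDER=custom` at boot rather than on the first model selection.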
apps/web/utils/llms/model.ts (Outdated)
```diff
 if (aiProvider === Provider.OPENROUTER) {
   const openRouterOptions = createOpenRouterProviderOptions(
-    env.DEFAULT_OPENROUTER_PROVIDERS,
+    env.DEFAULT_OPENROUTER_PROVIDERS || "",
   );
   Object.assign(providerOptions, openRouterOptions);
```
🧩 Analysis chain
Unconditionally injecting OpenRouter providerOptions (with reasoning tokens) — verify safety and align with other flows.
Default flow now always adds providerOptions for OpenRouter, which includes reasoning: { max_tokens: 20 }. This may be ignored by non‑reasoning models, but could also be rejected by some routes. Also, economy/chat only set options when their providers list is present, leading to inconsistent behavior.
- Action: verify that sending reasoning is universally safe across your OpenRouter models.
- Suggestion: either gate reasoning behind an env/flag or include OpenRouter options consistently across default/economy/chat.
Proposed alignment (always build options with empty string fallback for economy/chat):
```diff
@@ function selectEconomyModel(userAi: UserAIFields): SelectModel {
-  let providerOptions: Record<string, any> | undefined;
-  if (
-    env.ECONOMY_LLM_PROVIDER === Provider.OPENROUTER &&
-    env.ECONOMY_OPENROUTER_PROVIDERS
-  ) {
-    providerOptions = createOpenRouterProviderOptions(
-      env.ECONOMY_OPENROUTER_PROVIDERS,
-    );
-  }
+  const providerOptions =
+    env.ECONOMY_LLM_PROVIDER === Provider.OPENROUTER
+      ? createOpenRouterProviderOptions(env.ECONOMY_OPENROUTER_PROVIDERS || "")
+      : undefined;
@@ function selectChatModel(userAi: UserAIFields): SelectModel {
-  let providerOptions: Record<string, any> | undefined;
-  if (
-    env.CHAT_LLM_PROVIDER === Provider.OPENROUTER &&
-    env.CHAT_OPENROUTER_PROVIDERS
-  ) {
-    providerOptions = createOpenRouterProviderOptions(
-      env.CHAT_OPENROUTER_PROVIDERS,
-    );
-  }
+  const providerOptions =
+    env.CHAT_LLM_PROVIDER === Provider.OPENROUTER
+      ? createOpenRouterProviderOptions(env.CHAT_OPENROUTER_PROVIDERS || "")
+      : undefined;
```

Optional: make reasoning opt‑in to avoid surprises:
```diff
-function createOpenRouterProviderOptions(
-  providers: string,
-): Record<string, any> {
+function createOpenRouterProviderOptions(
+  providers: string,
+  includeReasoning = false,
+): Record<string, any> {
@@
-  return {
-    openrouter: {
-      provider: order.length > 0 ? { order } : undefined,
-      reasoning: { max_tokens: 20 },
-    },
-  };
+  return {
+    openrouter: {
+      ...(order.length > 0 ? { provider: { order } } : {}),
+      ...(includeReasoning ? { reasoning: { max_tokens: 20 } } : {}),
+    },
+  };
 }
```

To check library behavior quickly:
🌐 Web query:
For @openrouter/ai-sdk-provider, is it safe to always include `openrouter.reasoning` in params for non-reasoning models, or should it be omitted unless the target model supports reasoning?
💡 Result:
Short answer: omit it unless the target model supports reasoning.
Why: OpenRouter exposes a unified reasoning param but model metadata shows which models accept it; unsupported reasoning settings can be ignored by some backends or rejected by others (some providers return an "unsupported parameter" 400). Use the model's supported_parameters or model list to check for reasoning support and only add reasoning (or include_reasoning) when it's supported. [1][2][3]
Sources:
- OpenRouter parameters / reasoning docs. [1]
- OpenRouter "Reasoning Tokens" docs (controls, include/exclude, model support). [2]
- Example of providers rejecting unsupported reasoning parameters (error reported on OpenAI client). [3]
Do not unconditionally include openrouter.reasoning — gate it or ensure the model supports it
OpenRouter can reject unsupported reasoning params (some backends return 400); only add reasoning when the model's supported_parameters include it or make it explicitly opt‑in via an env/flag. Fix: change apps/web/utils/llms/model.ts (lines ~303–307) so OpenRouter options are built consistently and reasoning is only included when safe.
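A sketch of such a gate. `supportedParameters` mirrors OpenRouter's per-model `supported_parameters` metadata, but the types here are assumptions for illustration, not the provider package's actual API:

```typescript
// Illustrative gating of the reasoning option on model metadata.
// `ModelMeta` is an assumed shape, not @openrouter/ai-sdk-provider types.
interface ModelMeta {
  id: string;
  supportedParameters?: string[];
}

interface OpenRouterOptions {
  openrouter: {
    provider?: { order: string[] };
    reasoning?: { max_tokens: number };
  };
}

function buildOpenRouterOptions(model: ModelMeta, order: string[]): OpenRouterOptions {
  // Only attach reasoning when the model's metadata advertises support,
  // so backends that reject the parameter never receive it.
  const includeReasoning = model.supportedParameters?.includes("reasoning") ?? false;
  return {
    openrouter: {
      ...(order.length > 0 ? { provider: { order } } : {}),
      ...(includeReasoning ? { reasoning: { max_tokens: 20 } } : {}),
    },
  };
}
```

An env flag (opt-in) could replace the metadata check if model metadata isn't available at call time.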
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/web/utils/llms/model.ts around lines 303–307, the code unconditionally
assigns openRouter options which can include a `reasoning` param that some
OpenRouter backends reject; change the logic to always build openRouterOptions
using createOpenRouterProviderOptions(env.DEFAULT_OPENROUTER_PROVIDERS || "")
but only merge/include `reasoning` into providerOptions when the target model
advertises support (e.g., model.supported_parameters?.includes("reasoning")) or
when an explicit opt‑in env flag (like DEFAULT_OPENROUTER_INCLUDE_REASONING ===
"true") is set; implement the check and conditionally Object.assign the
reasoning field (or the whole openRouterOptions) accordingly so unsupported
models never receive `reasoning`.
Actionable comments posted: 0
♻️ Duplicate comments (1)
apps/web/utils/llms/model.ts (1)
301-315: Don’t unconditionally include openrouter.reasoning — gate or omit to avoid 400s

Sending openrouter.reasoning for non‑reasoning models can be rejected by some OpenRouter routes. Also, the default path always injects reasoning while economy/chat only do so when providers are set, creating inconsistent behavior. Make reasoning opt‑in and stop forcing it in the default flow.

Apply this diff to the default-model block to stop forcing reasoning and to tolerate providerOptions being undefined (pairs with the function change below):

```diff
@@
-  if (aiProvider === Provider.OPENROUTER) {
-    const openRouterOptions = createOpenRouterProviderOptions(
-      env.DEFAULT_OPENROUTER_PROVIDERS || "",
-    );
-
-    // Preserve any custom options set earlier; always ensure reasoning exists.
-    const existingOpenRouterOptions = providerOptions.openrouter || {};
-    providerOptions.openrouter = {
-      ...openRouterOptions.openrouter,
-      ...existingOpenRouterOptions,
-      reasoning: {
-        ...openRouterOptions.openrouter.reasoning,
-        ...(existingOpenRouterOptions.reasoning ?? {}),
-      },
-    };
-  }
+  if (aiProvider === Provider.OPENROUTER) {
+    const openRouterOptions = createOpenRouterProviderOptions(
+      env.DEFAULT_OPENROUTER_PROVIDERS || "",
+      /* includeReasoning */ false,
+    );
+    providerOptions = {
+      ...(providerOptions ?? {}),
+      ...openRouterOptions,
+    };
+  }
```

And update the OpenRouter options builder to make reasoning opt‑in and avoid emitting provider: undefined:

```diff
-function createOpenRouterProviderOptions(
-  providers: string,
-): Record<string, any> {
+function createOpenRouterProviderOptions(
+  providers: string,
+  includeReasoning = false,
+): Record<string, any> {
@@
-  return {
-    openrouter: {
-      provider: order.length > 0 ? { order } : undefined,
-      reasoning: { max_tokens: 20 },
-    },
-  };
+  return {
+    openrouter: {
+      ...(order.length > 0 ? { provider: { order } } : {}),
+      ...(includeReasoning ? { reasoning: { max_tokens: 20 } } : {}),
+    },
+  };
```

Optional alignment (economy/chat – outside changed hunk): always build OpenRouter options with empty‑string fallback to keep behavior consistent, and pass includeReasoning explicitly per use case.
To verify blast radius, search for any other places that add reasoning:

```shell
#!/bin/bash
rg -nC2 -g '!**/node_modules/**' -e '\breasoning\s*:\s*{'
```
🧹 Nitpick comments (1)
apps/web/utils/llms/model.ts (1)
289-289: Keep providerOptions undefined until needed

Tiny nit: avoid passing an empty object downstream; initialize lazily so callers can check truthiness cleanly.

```diff
- const providerOptions: Record<string, any> = {};
+ let providerOptions: Record<string, any> | undefined;
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/web/utils/llms/model.ts (2 hunks)
- version.txt (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- version.txt
Summary by CodeRabbit

- Refactor
- Tests
- Chores
- Impact