> Caution: Review failed. The pull request is closed.

Walkthrough

The changes update environment variable defaults and configuration for LLM providers and models, add new environment variables, adjust model selection logic to use these variables, consolidate Gemini model cost definitions, and update the environment variable lists in build configurations. A subproject commit reference is also updated in the marketing app directory.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ModelSelector
    participant EnvConfig
    User->>ModelSelector: Request default LLM model
    ModelSelector->>EnvConfig: Check if user API key is present
    alt User API key present
        ModelSelector->>ModelSelector: Use user's model
    else No API key
        ModelSelector->>EnvConfig: Retrieve DEFAULT_LLM_MODEL
        alt DEFAULT_LLM_MODEL exists
            ModelSelector->>User: Return DEFAULT_LLM_MODEL
        else
            ModelSelector->>User: Return null
        end
    end
```
Actionable comments posted: 1
🔭 Outside diff range comments (1)
apps/web/utils/llms/model.ts (1)
185-196: 💡 Verification agent

🧩 Analysis chain
Validate cross-compatibility between `DEFAULT_LLM_MODEL` and `DEFAULT_LLM_PROVIDER`.

`aiProvider` is set to `defaultProvider`, while `aiModel` is optionally pulled from `env.DEFAULT_LLM_MODEL`. Nothing enforces that the model string actually belongs to the chosen provider. A misconfigured `.env` (e.g. `DEFAULT_LLM_PROVIDER=anthropic` but `DEFAULT_LLM_MODEL=gpt-4o`) will silently flow into the switch statement and hit the wrong `case`, throwing the generic "LLM provider not supported" error.

Consider adding an upfront guard (or schema refinement in `env.ts`) that validates the pair, or at minimum logs a clear warning.

```diff
+if (env.DEFAULT_LLM_MODEL && !env.DEFAULT_LLM_MODEL.startsWith(aiProvider)) {
+  logger.warn(
+    "DEFAULT_LLM_MODEL (%s) appears incompatible with DEFAULT_LLM_PROVIDER (%s)",
+    env.DEFAULT_LLM_MODEL,
+    aiProvider,
+  );
+}
```
I'd like to verify how `DEFAULT_LLM_PROVIDER` and `DEFAULT_LLM_MODEL` are defined and used before finalizing. Let's pull up their definitions in `env.ts` and see where `DEFAULT_LLM` is referenced in `model.ts`:

🏁 Script executed:

```bash
#!/bin/bash
# Locate env.ts and show lines defining the defaults
env_file=$(find . -type f -name "env.ts" | head -n1)
echo "env.ts: $env_file"
grep -n "DEFAULT_LLM_PROVIDER" "$env_file"
grep -n "DEFAULT_LLM_MODEL" "$env_file"

# Show where DEFAULT_LLM* and defaultProvider appear in model.ts
grep -n "DEFAULT_LLM" apps/web/utils/llms/model.ts
grep -n "defaultProvider" apps/web/utils/llms/model.ts
```

Length of output: 715
I'll locate all `env.ts` files and inspect where `DEFAULT_LLM_PROVIDER` and `DEFAULT_LLM_MODEL` are defined, plus see how `env` is imported in `model.ts`:

🏁 Script executed:

```bash
#!/bin/bash
# List every env.ts in the repo
echo "=== env.ts files ==="
find . -type f -name 'env.ts'

# Search for DEFAULT_LLM_* definitions across the codebase
echo "=== DEFAULT_LLM_PROVIDER definitions ==="
grep -R "DEFAULT_LLM_PROVIDER" -n .
echo "=== DEFAULT_LLM_MODEL definitions ==="
grep -R "DEFAULT_LLM_MODEL" -n .

# Preview imports in model.ts to confirm which env file is used
echo "=== model.ts import section ==="
sed -n '1,30p' apps/web/utils/llms/model.ts
```

Length of output: 2232
Ensure `DEFAULT_LLM_MODEL` actually matches `DEFAULT_LLM_PROVIDER`

We currently validate each env var independently via Zod, but never enforce that the chosen default model belongs to the selected provider. A misconfigured `.env` (e.g. `DEFAULT_LLM_PROVIDER=anthropic` with `DEFAULT_LLM_MODEL=gpt-4`) will only surface later as a generic "LLM provider not supported" error.

Two ways to catch this sooner:

In `apps/web/env.ts` – refine the Zod schema to check the model against a provider→models map. Cross-field checks need `superRefine` on the object schema, since a field-level `refine` callback only receives its own value:

```ts
import { z } from "zod";
import { Provider } from "@/utils/llms/config";

const providerModels: Record<Provider, string[]> = {
  [Provider.OPENAI]: [/* …openai models… */],
  [Provider.ANTHROPIC]: [/* …anthropic models… */],
  /* etc. */
};

const envSchema = z
  .object({
    DEFAULT_LLM_PROVIDER: z.nativeEnum(Provider),
    DEFAULT_LLM_MODEL: z.string().optional(),
    // …
  })
  .superRefine((vals, ctx) => {
    const provider = vals.DEFAULT_LLM_PROVIDER;
    const model = vals.DEFAULT_LLM_MODEL;
    if (model && !providerModels[provider]?.includes(model)) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        path: ["DEFAULT_LLM_MODEL"],
        message: `DEFAULT_LLM_MODEL (${model}) isn't valid for provider ${provider}`,
      });
    }
  });
```

Or in `apps/web/utils/llms/model.ts` – add a runtime guard before instantiation:

```ts
const defaultProvider = env.DEFAULT_LLM_PROVIDER as Provider;
const defaultModel = env.DEFAULT_LLM_MODEL;

if (defaultModel && !providerModels[defaultProvider]?.includes(defaultModel)) {
  logger.warn(
    "DEFAULT_LLM_MODEL (%s) is not compatible with DEFAULT_LLM_PROVIDER (%s)",
    defaultModel,
    defaultProvider,
  );
}
// …then proceed with switch (defaultProvider) …
```

Locations to update:
- apps/web/env.ts (Zod schema)
- apps/web/utils/llms/model.ts (pre-switch validation)
This will fail fast on invalid `.env` combos or at least emit a clear warning instead of a generic error later.
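The guard logic itself can be sketched standalone. The provider names and model ids below are illustrative placeholders, not the repo's real configuration:

```typescript
// Standalone sketch of the provider/model compatibility guard.
// Provider names and model ids are placeholders for illustration.
type Provider = "openai" | "anthropic";

const providerModels: Record<Provider, string[]> = {
  openai: ["gpt-4o", "gpt-4o-mini"],
  anthropic: ["claude-3-7-sonnet"],
};

// A model is acceptable when it is unset (the provider default applies)
// or when it appears in the provider's known-model list.
function isCompatible(provider: Provider, model?: string): boolean {
  return !model || providerModels[provider].includes(model);
}

console.log(isCompatible("anthropic", "gpt-4o")); // mismatch, logs false
console.log(isCompatible("openai", undefined)); // unset model, logs true
```

Whether this runs as a Zod refinement at startup or as a pre-switch guard, the check is the same membership test.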
🧹 Nitpick comments (4)
apps/web/.env.example (1)
16-19: Add guidance for the new `DEFAULT_LLM_MODEL` placeholder.

End-users may not know the expected format (provider-qualified vs raw model id). A short comment, mirroring accepted examples (`anthropic/claude-3.7-sonnet`, `gpt-4o`, etc.), would avoid misconfiguration.

apps/web/utils/usage.ts (2)
51-58: Duplicated literals – extract to `const` or reuse existing table.

You introduced `gemini2_5flash` / `gemini2_5pro`, which is good, but they still live inside the same file as ad-hoc objects. To keep the cost table maintainable, consider moving these constants to a shared `pricing.ts` (or similar) and importing them, so they aren't re-declared if other files start referencing them.
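If the constants do move, a shared module could look roughly like this. The file name, shape, and the per-token dollar rates are all illustrative placeholders, not the repo's actual pricing:

```typescript
// Hypothetical shared pricing module, e.g. apps/web/utils/llms/pricing.ts.
// The dollar rates below are placeholder numbers for illustration only.
export interface ModelCost {
  input: number; // $ per million input tokens
  output: number; // $ per million output tokens
}

export const gemini2_5flash: ModelCost = { input: 0.15, output: 0.6 };
export const gemini2_5pro: ModelCost = { input: 1.25, output: 10 };

// Single lookup table other files can import instead of re-declaring costs.
export const modelCosts: Record<string, ModelCost> = {
  "gemini-2.5-flash": gemini2_5flash,
  "gemini-2.5-pro": gemini2_5pro,
};
```

Consumers like `usage.ts` would then import `modelCosts` rather than keeping private copies of the same numbers.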
134-147: Spelling mistake in function name – public API ergonomics.

`calcuateCost` → `calculateCost`. Even though unchanged in this diff, now is a good time to fix before it leaks further.

```diff
-const cost = calcuateCost(model, usage);
+const cost = calculateCost(model, usage);
 ...
-function calcuateCost(
+function calculateCost(
```

apps/web/env.ts (1)
26-44: Default economy config is internally inconsistent.

`ECONOMY_LLM_PROVIDER` defaults to `openrouter`, while `ECONOMY_LLM_MODEL` defaults to a direct Google model id (`google/gemini-2.5-flash-preview-05-20`). This works only because OpenRouter proxies that model, but newcomers reading the defaults may assume a mismatch or file a bug.

Recommend documenting the coupling or, better, aligning the defaults (e.g. `anthropic/…` when provider = `openrouter`).
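One way to document the coupling inline, assuming Zod-style declarations like the rest of `env.ts` (the exact field shapes in the repo may differ):

```ts
// ECONOMY_LLM_MODEL uses an OpenRouter-style "<vendor>/<model>" id because
// ECONOMY_LLM_PROVIDER defaults to "openrouter", which proxies vendor models.
ECONOMY_LLM_PROVIDER: z.string().default("openrouter"),
ECONOMY_LLM_MODEL: z.string().default("google/gemini-2.5-flash-preview-05-20"),
```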
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- apps/web/.env.example (1 hunks)
- apps/web/app/(marketing) (1 hunks)
- apps/web/env.ts (1 hunks)
- apps/web/utils/llms/model.ts (1 hunks)
- apps/web/utils/usage.ts (2 hunks)
- turbo.json (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
apps/web/utils/llms/model.ts (1)
apps/web/env.ts (1)
`env` (5-203)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Jit Security
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (2)
apps/web/app/(marketing) (1)
1-1: Confirm submodule pointer is correct
Please verify that commit `a57f611d113dd505cd06432f52562bfea06dc474` exists in the marketing subproject's repository and that your `.gitmodules` (or submodule configuration) is updated to point to this SHA.

turbo.json (1)
14-21: Ensure new env vars are whitelisted for all affected tasks.

Only the `build` task has `DEFAULT_LLM_MODEL`, `ECONOMY_LLM_PROVIDER`, and `ECONOMY_LLM_MODEL`. If the `dev` or `test` pipelines rely on default model selection, they will read empty values at runtime. Verify whether they need to be added to the `dev`/`test` env arrays as well.
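If `dev`/`test` do turn out to need the defaults, the whitelisting could look something like this (task names and surrounding keys are illustrative; the repo's actual `turbo.json` layout may differ):

```json
{
  "tasks": {
    "build": {
      "env": ["DEFAULT_LLM_MODEL", "ECONOMY_LLM_PROVIDER", "ECONOMY_LLM_MODEL"]
    },
    "dev": {
      "env": ["DEFAULT_LLM_MODEL", "ECONOMY_LLM_PROVIDER", "ECONOMY_LLM_MODEL"]
    },
    "test": {
      "env": ["DEFAULT_LLM_MODEL", "ECONOMY_LLM_PROVIDER", "ECONOMY_LLM_MODEL"]
    }
  }
}
```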
Summary by CodeRabbit
New Features
Improvements
Chores