
Default llm model env var #500

Merged

elie222 merged 2 commits into main from feat/default-model on Jun 11, 2025

Conversation

@elie222 (Owner) commented Jun 11, 2025

Summary by CodeRabbit

  • New Features

    • Added support for specifying a default LLM model through a new environment variable.
    • Enhanced configuration options for economy LLM provider and model selection.
  • Improvements

    • Updated the default LLM provider to Anthropic.
    • Refined model selection logic to use the default model when no user API key is set.
    • Standardized and consolidated cost rates for Gemini model variants.
  • Chores

    • Improved organization and clarity of environment variable configuration files.
    • Updated project version to v1.4.3.

vercel bot commented Jun 11, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name         Status    Updated (UTC)
inbox-zero   ✅ Ready   Jun 11, 2025 7:11am

coderabbitai bot (Contributor) commented Jun 11, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

The changes update environment variable defaults and configuration for LLM providers and models, add new environment variables, adjust model selection logic to use these variables, consolidate Gemini model cost definitions, and update the environment variable lists in build configurations. A subproject commit reference is also updated in the marketing app directory.

Changes

  • apps/web/.env.example, turbo.json: Added DEFAULT_LLM_MODEL, reordered and added LLM-related env vars, changed the default LLM provider.
  • apps/web/env.ts: Moved and updated LLM provider/model env var declarations, added DEFAULT_LLM_MODEL, changed defaults.
  • apps/web/utils/llms/model.ts: Updated model selection logic to use DEFAULT_LLM_MODEL as a fallback when no user API key is present.
  • apps/web/utils/usage.ts: Consolidated Gemini model cost definitions, introduced cost constants, and updated references accordingly.
  • apps/web/app/(marketing): Updated the subproject commit reference to a specific commit hash.
  • version.txt: Updated the version from v1.4.2 to v1.4.3.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ModelSelector
    participant EnvConfig

    User->>ModelSelector: Request default LLM model
    ModelSelector->>EnvConfig: Check if user API key is present
    alt User API key present
        ModelSelector->>ModelSelector: Use user's model
    else No API key
        ModelSelector->>EnvConfig: Retrieve DEFAULT_LLM_MODEL
        alt DEFAULT_LLM_MODEL exists
            ModelSelector->>User: Return DEFAULT_LLM_MODEL
        else
            ModelSelector->>User: Return null
        end
    end
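
In code, the fallback described by this diagram amounts to something like the following TypeScript sketch; the function name, parameters, and import path are illustrative, not the exact API of apps/web/utils/llms/model.ts:

    import { env } from "@/env"; // assumed alias for apps/web/env.ts

    // Prefer the user's own model when they supply an API key; otherwise fall
    // back to the deployment-wide DEFAULT_LLM_MODEL, which may be unset.
    function selectModel(
      userApiKey: string | null,
      userModel: string | null,
    ): string | null {
      if (userApiKey && userModel) return userModel;
      return env.DEFAULT_LLM_MODEL ?? null;
    }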

Poem

In the warren where configs dwell,
New models and providers now ring the bell.
Gemini’s costs, neat and tight,
Defaults set for day and night.
With every hop, our logic grows—
The rabbit’s code, as always, flows! 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a1dca02 and ffe38df.

📒 Files selected for processing (1)
  • version.txt (1 hunks)


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (1)
apps/web/utils/llms/model.ts (1)

185-196: 💡 Verification agent

🧩 Analysis chain

Validate cross-compatibility between DEFAULT_LLM_MODEL and DEFAULT_LLM_PROVIDER.

aiProvider is set to defaultProvider, while aiModel is optionally pulled from env.DEFAULT_LLM_MODEL. Nothing enforces that the model string actually belongs to the chosen provider. A mis-configured .env (e.g. DEFAULT_LLM_PROVIDER=anthropic but DEFAULT_LLM_MODEL=gpt-4o) will silently flow into the switch-statement and hit the wrong case, throwing the generic “LLM provider not supported” error.

Consider adding an upfront guard (or schema refinement in env.ts) that validates the pair, or at minimum logs a clear warning.

+if (env.DEFAULT_LLM_MODEL && !env.DEFAULT_LLM_MODEL.startsWith(aiProvider)) {
+  logger.warn(
+    "DEFAULT_LLM_MODEL (%s) appears incompatible with DEFAULT_LLM_PROVIDER (%s)",
+    env.DEFAULT_LLM_MODEL,
+    aiProvider,
+  );
+}

I’d like to verify how DEFAULT_LLM_PROVIDER and DEFAULT_LLM_MODEL are defined and used before finalizing. Let’s pull up their definitions in env.ts and see where DEFAULT_LLM is referenced in model.ts:


🏁 Script executed:

#!/bin/bash
# Locate env.ts and show lines defining the defaults
env_file=$(find . -type f -name "env.ts" | head -n1)
echo "env.ts: $env_file"
grep -n "DEFAULT_LLM_PROVIDER" "$env_file"
grep -n "DEFAULT_LLM_MODEL" "$env_file"

# Show where DEFAULT_LLM* and defaultProvider appear in model.ts
grep -n "DEFAULT_LLM" apps/web/utils/llms/model.ts
grep -n "defaultProvider" apps/web/utils/llms/model.ts

Length of output: 715


I’ll locate all env.ts files and inspect where DEFAULT_LLM_PROVIDER and DEFAULT_LLM_MODEL are defined, plus see how env is imported in model.ts:


🏁 Script executed:

#!/bin/bash
# List every env.ts in the repo
echo "=== env.ts files ==="
find . -type f -name 'env.ts'

# Search for DEFAULT_LLM_* definitions across the codebase
echo "=== DEFAULT_LLM_PROVIDER definitions ==="
grep -R "DEFAULT_LLM_PROVIDER" -n .
echo "=== DEFAULT_LLM_MODEL definitions ==="
grep -R "DEFAULT_LLM_MODEL" -n .

# Preview imports in model.ts to confirm which env file is used
echo "=== model.ts import section ==="
sed -n '1,30p' apps/web/utils/llms/model.ts

Length of output: 2232


Ensure DEFAULT_LLM_MODEL actually matches DEFAULT_LLM_PROVIDER

We currently validate each env var independently via Zod, but never enforce that the chosen default model belongs to the selected provider. A mis-configured .env (e.g. DEFAULT_LLM_PROVIDER=anthropic with DEFAULT_LLM_MODEL=gpt-4) will only surface later as a generic “LLM provider not supported” error.

Two ways to catch this sooner:

  • In apps/web/env.ts – refine the Zod schema to check the model against a provider→models map:

    import { z } from "zod";
    import { Provider } from "@/utils/llms/config";

    const providerModels: Record<Provider, string[]> = {
      [Provider.OPENAI]: [/* …openai models… */],
      [Provider.ANTHROPIC]: [/* …anthropic models… */],
      /* etc. */
    };

    const envSchema = z
      .object({
        DEFAULT_LLM_PROVIDER: z.nativeEnum(Provider),
        DEFAULT_LLM_MODEL: z.string().optional(),
        // …
      })
      // Cross-field checks need an object-level refinement; a field-level
      // .refine() cannot see sibling values.
      .superRefine((vals, ctx) => {
        const { DEFAULT_LLM_PROVIDER: provider, DEFAULT_LLM_MODEL: model } = vals;
        if (model && !providerModels[provider]?.includes(model)) {
          ctx.addIssue({
            code: z.ZodIssueCode.custom,
            path: ["DEFAULT_LLM_MODEL"],
            message: `DEFAULT_LLM_MODEL (${model}) isn't valid for provider ${provider}`,
          });
        }
      });
  • Or in apps/web/utils/llms/model.ts – add a runtime guard before instantiation:

    const defaultProvider = env.DEFAULT_LLM_PROVIDER as Provider;
    const defaultModel    = env.DEFAULT_LLM_MODEL;
    
    if (defaultModel && !providerModels[defaultProvider]?.includes(defaultModel)) {
      logger.warn(
        "DEFAULT_LLM_MODEL (%s) is not compatible with DEFAULT_LLM_PROVIDER (%s)",
        defaultModel,
        defaultProvider,
      );
    }
    // …then proceed with switch(defaultProvider) …

Locations to update:

  • apps/web/env.ts (Zod schema)
  • apps/web/utils/llms/model.ts (pre-switch validation)

This will fail fast on invalid .env combos or at least emit a clear warning instead of a generic error later.

🧹 Nitpick comments (4)
apps/web/.env.example (1)

16-19: Add guidance for the new DEFAULT_LLM_MODEL placeholder.

End-users may not know the expected format (provider-qualified vs raw model id). A short comment, mirroring accepted examples (anthropic/claude-3.7-sonnet, gpt-4o, etc.), would avoid misconfiguration.
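
For instance, a short comment in .env.example along these lines (the ids shown mirror the examples above and are illustrative, not an exhaustive list):

    # Optional: default model used when the user has not set their own API key.
    # Must be valid for DEFAULT_LLM_PROVIDER, e.g.
    #   anthropic/claude-3.7-sonnet   (provider-qualified)
    #   gpt-4o                        (raw model id)
    DEFAULT_LLM_MODEL=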

apps/web/utils/usage.ts (2)

51-58: Duplicated literals – extract to const or reuse existing table.

You introduced gemini2_5flash / gemini2_5pro, which is good, but they still live inside the same file as ad-hoc objects.
To keep the cost table maintainable, consider moving these constants to a shared pricing.ts (or similar) and importing them, so they aren’t re-declared if other files start referencing them.
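
A minimal sketch of that extraction, assuming a hypothetical pricing.ts next to the existing LLM utilities; the dollar rates below are placeholders, not real prices:

    // apps/web/utils/llms/pricing.ts (hypothetical location)
    // Single source of truth for per-model cost rates.
    export interface ModelCost {
      input: number;  // USD per 1M input tokens (placeholder value)
      output: number; // USD per 1M output tokens (placeholder value)
    }

    export const gemini2_5flash: ModelCost = { input: 0.15, output: 0.6 };
    export const gemini2_5pro: ModelCost = { input: 1.25, output: 10 };

usage.ts (and any future caller) would then import these constants instead of re-declaring the literals.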


134-147: Spelling mistake in function name – public API ergonomics.

calcuateCost → calculateCost. Even though unchanged in this diff, now is a good time to fix before it leaks further.

-const cost = calcuateCost(model, usage);
+const cost = calculateCost(model, usage);
...
-function calcuateCost(
+function calculateCost(
apps/web/env.ts (1)

26-44: Default economy config is internally inconsistent.

ECONOMY_LLM_PROVIDER defaults to openrouter, while ECONOMY_LLM_MODEL defaults to a direct Google model id (google/gemini-2.5-flash-preview-05-20).
This works only because OpenRouter proxies that model, but newcomers reading the defaults may assume a mismatch or file a bug.

Recommend documenting the coupling or, better, aligning the defaults (e.g. provider google or model prefixed with anthropic/… when provider=openrouter).
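
For example, a clarifying comment next to the defaults (reusing the current model id, which OpenRouter serves under the google/ namespace):

    # These two are coupled: ECONOMY_LLM_MODEL must be an id that
    # ECONOMY_LLM_PROVIDER can serve. OpenRouter proxies Google models
    # under the google/ prefix, so this pairing is intentional.
    ECONOMY_LLM_PROVIDER=openrouter
    ECONOMY_LLM_MODEL=google/gemini-2.5-flash-preview-05-20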

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e162219 and a1dca02.

📒 Files selected for processing (6)
  • apps/web/.env.example (1 hunks)
  • apps/web/app/(marketing) (1 hunks)
  • apps/web/env.ts (1 hunks)
  • apps/web/utils/llms/model.ts (1 hunks)
  • apps/web/utils/usage.ts (2 hunks)
  • turbo.json (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
apps/web/utils/llms/model.ts (1)
apps/web/env.ts (1)
  • env (5-203)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Jit Security
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (2)
apps/web/app/(marketing) (1)

1-1: Confirm submodule pointer is correct
Please verify that commit a57f611d113dd505cd06432f52562bfea06dc474 exists in the marketing subproject’s repository and that your .gitmodules (or submodule configuration) is updated to point to this SHA.

turbo.json (1)

14-21: Ensure new env vars are whitelisted for all affected tasks.

Only the build task has DEFAULT_LLM_MODEL, ECONOMY_LLM_PROVIDER, and ECONOMY_LLM_MODEL.
If the dev or test pipelines rely on default model selection, they will read empty values at runtime. Verify whether they need to be added to the dev/test env arrays as well.
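
If they do, the addition is mechanical; a sketch assuming Turborepo v2's tasks key (v1 uses pipeline) and that a dev task already exists:

    {
      "tasks": {
        "dev": {
          "env": ["DEFAULT_LLM_MODEL", "ECONOMY_LLM_PROVIDER", "ECONOMY_LLM_MODEL"]
        }
      }
    }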

@elie222 elie222 merged commit fbe578d into main Jun 11, 2025
5 of 6 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Jul 17, 2025
@coderabbitai coderabbitai bot mentioned this pull request Aug 13, 2025
@coderabbitai coderabbitai bot mentioned this pull request Sep 16, 2025
This was referenced Nov 21, 2025
@coderabbitai coderabbitai bot mentioned this pull request Dec 4, 2025
@elie222 elie222 deleted the feat/default-model branch December 18, 2025 23:02
