**Walkthrough**

The changes refactor model selection logic by introducing a typed `modelType` parameter (`"default" | "chat" | "economy"`) in place of the previous `useEconomyModel` boolean flag, with per-type provider/model pairs configured through environment variables.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant User
    participant AI_Module
    participant ModelSelector
    participant EnvConfig
    User->>AI_Module: Request (chat, economy, or default)
    AI_Module->>ModelSelector: getModel(user, modelType)
    ModelSelector->>EnvConfig: Fetch provider/model for modelType
    EnvConfig-->>ModelSelector: Return provider/model (chat/economy/default)
    ModelSelector-->>AI_Module: Selected provider/model
    AI_Module-->>User: Processed response
```
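The selection flow in the diagram can be sketched as below. This is an illustrative assumption, not the actual implementation in `apps/web/utils/llms/model.ts`: the `EnvConfig` shape, the fallback-to-default behavior, and the function name `selectModelByType` are all invented for the example.

```typescript
// Sketch of per-type model selection from environment configuration.
type ModelType = "default" | "chat" | "economy";

interface EnvConfig {
  DEFAULT_LLM_PROVIDER: string;
  DEFAULT_LLM_MODEL?: string;
  CHAT_LLM_PROVIDER?: string;
  CHAT_LLM_MODEL?: string;
  ECONOMY_LLM_PROVIDER?: string;
  ECONOMY_LLM_MODEL?: string;
}

function selectModelByType(env: EnvConfig, modelType: ModelType) {
  switch (modelType) {
    case "chat":
      // Use the chat pair when configured, otherwise fall back to default.
      return {
        provider: env.CHAT_LLM_PROVIDER ?? env.DEFAULT_LLM_PROVIDER,
        model: env.CHAT_LLM_MODEL ?? env.DEFAULT_LLM_MODEL,
      };
    case "economy":
      return {
        provider: env.ECONOMY_LLM_PROVIDER ?? env.DEFAULT_LLM_PROVIDER,
        model: env.ECONOMY_LLM_MODEL ?? env.DEFAULT_LLM_MODEL,
      };
    default:
      return {
        provider: env.DEFAULT_LLM_PROVIDER,
        model: env.DEFAULT_LLM_MODEL,
      };
  }
}
```

A caller would pass the requested `modelType` through from the AI module, keeping the environment lookup in one place.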
**Claude finished @elie222's task** — View job

**Pull Request Review** — Reviewing PR #577: "Add seperate chat model"

**Code Review Feedback**

✅ Positive Aspects
Actionable comments posted: 1
🧹 Nitpick comments (2)
apps/web/utils/usage.ts (1)
**129-133**: Model cost entry added correctly, but clarify provider information.

The cost entry for `moonshotai/kimi-k2` is properly structured and follows the established pattern. However, the comment mentions "Kimi K2 Groq via OpenRouter" while the model identifier suggests MoonShot AI. Consider clarifying whether this is a Groq model or a MoonShot AI model accessed through OpenRouter.
apps/web/utils/llms/model.ts (1)
**187-208**: Consider enhancing the JSDoc comment for better documentation.

The implementation is solid and follows the established pattern. However, the JSDoc comment could be more descriptive about specific use cases for the chat model, similar to the detailed comment for `selectEconomyModel`.

```diff
 /**
- * Selects the appropriate chat model for fast conversational tasks
+ * Selects the appropriate chat model for fast conversational tasks
+ * Optimized for interactive chat experiences and real-time responses
+ *
+ * Use cases:
+ * - AI assistant conversations
+ * - Real-time chat interactions
+ * - Tasks requiring fast response times
+ * - Interactive user assistance
  */
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- apps/web/env.ts (2 hunks)
- apps/web/utils/ai/assistant/chat.ts (1 hunks)
- apps/web/utils/ai/assistant/process-user-request.ts (1 hunks)
- apps/web/utils/ai/knowledge/extract-from-email-history.ts (1 hunks)
- apps/web/utils/ai/knowledge/extract.ts (1 hunks)
- apps/web/utils/llms/config.ts (1 hunks)
- apps/web/utils/llms/index.ts (15 hunks)
- apps/web/utils/llms/model.test.ts (2 hunks)
- apps/web/utils/llms/model.ts (4 hunks)
- apps/web/utils/usage.ts (1 hunks)
- version.txt (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
apps/web/**/*.{ts,tsx}
Instructions used from:
Sources:
📄 CodeRabbit Inference Engine
- apps/web/CLAUDE.md
**/*.{ts,tsx}
Instructions used from:
Sources:
📄 CodeRabbit Inference Engine
- .cursor/rules/form-handling.mdc
apps/web/**/{.env.example,env.ts,turbo.json}
Instructions used from:
Sources:
📄 CodeRabbit Inference Engine
- apps/web/CLAUDE.md
apps/web/**/{.env.example,env.ts}
Instructions used from:
Sources:
📄 CodeRabbit Inference Engine
- apps/web/CLAUDE.md
apps/web/env.ts
Instructions used from:
Sources:
📄 CodeRabbit Inference Engine
- .cursor/rules/environment-variables.mdc
🧠 Learnings (3)
apps/web/utils/llms/model.test.ts (7)
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: When writing tests for LLM-related functionality in TypeScript (e.g., in 'apps/web/__tests__/*.test.ts'), always create helper functions for common test data to ensure consistency and reduce duplication.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Test both AI and non-AI code paths in LLM-related functionality to ensure comprehensive coverage.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Standard test cases for LLM-related functionality should include: happy path with expected input, error handling, edge cases (such as empty input or null values), different user configurations, and various input formats.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Use descriptive 'console.debug' statements to log generated content in LLM-related tests for easier debugging and traceability.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-23T12:27:30.570Z
Learning: When mocking Prisma in Vitest, import the Prisma mock from '@/utils/__mocks__/prisma', mock '@/utils/prisma', and clear all mocks in a beforeEach hook to ensure test isolation.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Do not mock the LLM call in LLM-related tests; always call the actual LLM to ensure test reliability and real-world coverage.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/posthog-feature-flags.mdc:0-0
Timestamp: 2025-06-23T12:26:51.119Z
Learning: For A/B test feature flags, always define a TypeScript type for the possible variants and ensure the hook returns a default fallback value (such as 'control') if the flag is not set.
apps/web/utils/llms/index.ts (5)
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: When writing tests for LLM-related functionality in TypeScript (e.g., in 'apps/web/__tests__/*.test.ts'), always create helper functions for common test data to ensure consistency and reduce duplication.
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-03T12:02:16.137Z
Learning: Applies to apps/web/**/types/**/*.{ts,d.ts} : Centralize types in dedicated type files when shared
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-06-23T12:26:36.694Z
Learning: Use TypeScript types for all parameters and return values, and define clear interfaces for complex input and output structures to ensure type safety.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-06-23T12:26:36.694Z
Learning: When implementing LLM-related functions in TypeScript, follow a standard structure: create a scoped logger, define a Zod schema for output validation, perform early input validation and returns, separate system and user prompts, log inputs and outputs, call the LLM with proper configuration, and return the validated result.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/logging.mdc:0-0
Timestamp: 2025-06-23T12:26:43.177Z
Learning: In TypeScript backend code, use the `createScopedLogger` function from `@/utils/logger` to perform logging. Typically, instantiate the logger at the top of the file with a relevant scope string.
apps/web/env.ts (7)
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/environment-variables.mdc:0-0
Timestamp: 2025-07-03T12:02:45.107Z
Learning: Applies to apps/web/env.ts : For server-only environment variables, add them to the `server` section in `apps/web/env.ts`
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-06-23T12:26:36.694Z
Learning: When implementing LLM-related functions in TypeScript, follow a standard structure: create a scoped logger, define a Zod schema for output validation, perform early input validation and returns, separate system and user prompts, log inputs and outputs, call the LLM with proper configuration, and return the validated result.
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-03T12:02:16.137Z
Learning: Applies to apps/web/**/{.env.example,env.ts,turbo.json} : Add environment variables to `.env.example`, `env.ts`, and `turbo.json`
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/environment-variables.mdc:0-0
Timestamp: 2025-07-03T12:02:45.107Z
Learning: Applies to apps/web/env.ts : For client-side environment variables, prefix them with `NEXT_PUBLIC_` and add them to both the `client` and `experimental__runtimeEnv` sections in `apps/web/env.ts`
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: When writing tests for LLM-related functionality in TypeScript (e.g., in 'apps/web/__tests__/*.test.ts'), always create helper functions for common test data to ensure consistency and reduce duplication.
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-03T12:02:16.137Z
Learning: Applies to apps/web/**/{.env.example,env.ts} : Client-side environment variables must be prefixed with `NEXT_PUBLIC_`
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/environment-variables.mdc:0-0
Timestamp: 2025-07-03T12:02:45.107Z
Learning: Applies to apps/web/.env.example : Add new environment variables to `.env.example` with an example value
🧬 Code Graph Analysis (2)
apps/web/utils/llms/index.ts (2)
- apps/web/utils/llms/types.ts (1): `UserAIFields` (3-9)
- apps/web/utils/llms/model.ts (1): `ModelType` (16-16)

apps/web/utils/llms/model.ts (2)
- apps/web/utils/llms/types.ts (1): `UserAIFields` (3-9)
- apps/web/env.ts (1): `env` (15-206)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Jit Security
- GitHub Check: claude-review
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (17)
version.txt (1)
**1-1**: Version bump looks appropriate for the feature additions.

The increment from v1.9.17 to v1.9.18 correctly reflects the scope of changes for adding separate chat model functionality and refactoring model selection logic.
apps/web/utils/ai/assistant/process-user-request.ts (1)

**203-203**: Correctly implements the new modelType parameter.

The addition of `modelType: "chat"` aligns with the refactoring from boolean flags to typed parameters, appropriately specifying chat model usage for this assistant function.

apps/web/utils/ai/assistant/chat.ts (1)

**435-435**: Correct implementation of modelType parameter for chat functionality.

The addition of `modelType: "chat"` is appropriately placed in the `chatCompletionStream` call within the `aiProcessAssistantChat` function, following the established pattern for the model type refactoring.

apps/web/utils/llms/config.ts (1)

**35-35**: New model identifier added correctly.

The addition of `KIMI_K2_OPENROUTER: "moonshotai/kimi-k2"` follows the established naming convention and is consistent with the corresponding cost entry in usage.ts.

apps/web/utils/ai/knowledge/extract-from-email-history.ts (1)

**106-106**: LGTM - Model type refactoring applied correctly.

The change from `useEconomyModel: true` to `modelType: "economy"` is consistent with the codebase refactoring and appropriate for this use case.

apps/web/utils/ai/knowledge/extract.ts (1)

**108-108**: LGTM - Consistent model type refactoring.

The update to `modelType: "economy"` aligns with the broader refactoring and is appropriate for knowledge extraction tasks.

apps/web/utils/llms/model.test.ts (2)
**41-44**: Good test coverage for new environment variables.

The mock environment correctly includes all the new LLM provider and model variables needed to test the expanded model type functionality.

**219-262**: Excellent test coverage for the new modelType parameter.

The test cases comprehensively verify the behavior for each model type ("chat", "economy", "default") and properly use null `UserAIFields` to ensure selection is based solely on the `modelType` parameter and environment configuration.
apps/web/env.ts (2)
**5-13**: Excellent use of reusable enum for consistency.

The `llmProviderEnum` provides a centralized definition of valid LLM providers, improving maintainability and ensuring consistency across all provider-related environment variables.

**30-35**: Well-structured environment variables for new model types.

The addition of `CHAT_LLM_PROVIDER` and `CHAT_LLM_MODEL` with helpful comments follows the coding guidelines and provides clear configuration options for the new chat model type.

apps/web/utils/llms/index.ts (3)

**31-31**: Good import addition for type safety.

The import of `ModelType` ensures type safety for the new parameter across all function signatures.

**44-44**: Consistent refactoring of function signatures.

The systematic replacement of `useEconomyModel?: boolean` with `modelType?: ModelType` with a default value of `"default"` is well-executed and maintains backward compatibility.

Also applies to: 51-51, 90-90, 115-115, 158-158, 170-170, 232-232, 256-256, 303-303, 313-313

**59-61**: Correct parameter passing to getModel function.

All calls to `getModel` have been consistently updated to pass the `modelType` parameter instead of the previous boolean flag.

Also applies to: 125-127, 186-188, 267-269, 323-325
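The signature refactoring described above can be sketched as follows. This is a minimal illustration, not the real code: the actual functions in `apps/web/utils/llms/index.ts` take many more options, and `resolveModelType` is an invented helper name.

```typescript
type ModelType = "default" | "chat" | "economy";

// Before (hypothetical): callers could only toggle a boolean.
// function chatCompletionObject(options: { useEconomyModel?: boolean }) { ... }

// After: a typed parameter with a safe default. Old call sites that pass
// no modelType keep working, because the value falls back to "default".
function resolveModelType(options: { modelType?: ModelType }): ModelType {
  return options.modelType ?? "default";
}
```

The union type also lets the compiler reject typos like `"econmy"` that a boolean flag could never catch.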
apps/web/utils/llms/model.ts (4)
**16-21**: LGTM! Good refactoring for type safety and extensibility.

The new `ModelType` union type provides better type safety compared to the previous boolean flag, and the default parameter ensures backward compatibility.

**34-43**: Clean implementation with proper enum handling.

The switch statement correctly handles all `ModelType` values with a sensible default fallback.

**297-307**: Good generalization that improves reusability.

The function signature change from a union type to `string` makes it more flexible for new providers while maintaining proper fallback behavior (returning `undefined` for unknown providers).

**25-25**: Logging update correctly reflects the refactored parameter.

The logging statement properly uses the new `modelType` parameter instead of the previous boolean flag.
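The union-to-`string` generalization noted above can be illustrated with a hypothetical lookup. The map contents and function name below are invented for the example; only the pattern (plain-string key, `undefined` for unknown providers) reflects the review comment.

```typescript
// Hypothetical API-key lookup keyed by provider name. Accepting a plain
// string means adding a new provider needs no type changes at call sites;
// an unknown provider simply resolves to undefined instead of failing
// to compile.
const providerApiKeys: Record<string, string | undefined> = {
  anthropic: "test-anthropic-key",
  openrouter: "test-openrouter-key",
};

function getProviderApiKey(provider: string): string | undefined {
  return providerApiKeys[provider];
}
```

The trade-off is that the compiler no longer flags misspelled provider names, so callers should handle the `undefined` case explicitly.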
```diff
 DEFAULT_LLM_PROVIDER: z
-  .enum([
-    "anthropic",
-    // "bedrock",
-    "google",
-    "openai",
-    "openrouter",
-    "groq",
-    "ollama",
-    "custom",
-  ])
+  .enum([...llmProviderEnum.options, "custom"])
   .default("anthropic"),
 DEFAULT_LLM_MODEL: z.string().optional(),
 // Economy LLM configuration (for large context windows where cost efficiency matters)
-ECONOMY_LLM_PROVIDER: z
-  .enum([
-    "anthropic",
-    "google",
-    "openai",
-    "bedrock",
-    "openrouter",
-    "groq",
-    "ollama",
-  ])
-  .optional()
-  .default("openrouter"),
-ECONOMY_LLM_MODEL: z
-  .string()
-  .optional()
-  .default("google/gemini-2.5-flash-preview-05-20"),
 // Set this to a cheaper model like Gemini Flash
+ECONOMY_LLM_PROVIDER: llmProviderEnum.optional(),
+ECONOMY_LLM_MODEL: z.string().optional(),
```
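The "one canonical provider list" idea behind this refactor can be sketched without zod using const assertions. Everything below is an assumption for illustration; the real schema derives its enums from `llmProviderEnum.options` with `z.enum`, and unlike this sketch, zod would reject an invalid value rather than silently falling back.

```typescript
// Single source of truth for provider names.
const llmProviders = [
  "anthropic",
  "google",
  "openai",
  "bedrock",
  "openrouter",
  "groq",
  "ollama",
] as const;
type LlmProvider = (typeof llmProviders)[number];

// The default-provider list extends the base list with "custom",
// mirroring z.enum([...llmProviderEnum.options, "custom"]).
const defaultProviders = [...llmProviders, "custom"] as const;
type DefaultProvider = (typeof defaultProviders)[number];

function parseDefaultProvider(value: string | undefined): DefaultProvider {
  // Unset values fall back to "anthropic", matching .default("anthropic").
  // (Invalid values also fall back here; a zod schema would throw instead.)
  if (value && (defaultProviders as readonly string[]).includes(value)) {
    return value as DefaultProvider;
  }
  return "anthropic";
}
```

Deriving both types from one array means adding a provider is a one-line change that flows to every schema that reuses the list.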
💡 Verification agent
🧩 Analysis chain
Request verification of environment variable additions
Please ensure these new environment variables are also added to .env.example and turbo.json as specified in the coding guidelines.
🏁 Script executed:

```shell
#!/bin/bash
# Description: Check if new environment variables are properly added to required files
# Expected: Variables should be present in both .env.example and turbo.json

echo "Checking .env.example for new environment variables..."
fd -t f "\.env\.example$" --exec grep -l "ECONOMY_LLM_PROVIDER\|ECONOMY_LLM_MODEL\|CHAT_LLM_PROVIDER\|CHAT_LLM_MODEL" {} \;

echo -e "\nChecking turbo.json for new environment variables..."
fd -t f "turbo\.json$" --exec grep -l "ECONOMY_LLM_PROVIDER\|ECONOMY_LLM_MODEL\|CHAT_LLM_PROVIDER\|CHAT_LLM_MODEL" {} \;
```

Length of output: 507
**Add missing environment variables to .env.example**

The new variables are present in turbo.json but aren't in .env.example. Please add them with example values to satisfy our environment-variable guidelines.

- File: `.env.example` — add entries for:
  - `ECONOMY_LLM_PROVIDER=`
  - `ECONOMY_LLM_MODEL=`
  - `CHAT_LLM_PROVIDER=`
  - `CHAT_LLM_MODEL=`
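Hedged example entries for `.env.example` might look like the fragment below. The values are illustrative only: the economy pair mirrors the defaults removed from the old schema, and the chat pair reuses the Kimi K2 model identifier mentioned elsewhere in this PR.

```dotenv
# Economy LLM (large context windows where cost efficiency matters)
ECONOMY_LLM_PROVIDER=openrouter
ECONOMY_LLM_MODEL=google/gemini-2.5-flash-preview-05-20

# Chat LLM (fast conversational tasks)
CHAT_LLM_PROVIDER=openrouter
CHAT_LLM_MODEL=moonshotai/kimi-k2
```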
🤖 Prompt for AI Agents
In apps/web/env.ts around lines 26 to 32, new environment variables
ECONOMY_LLM_PROVIDER, ECONOMY_LLM_MODEL, CHAT_LLM_PROVIDER, and CHAT_LLM_MODEL
are defined but missing from the .env.example file. To fix this, open the
.env.example file and add these variables with appropriate example values
following the existing format and environment-variable guidelines to ensure
consistency and clarity for users.
Actionable comments posted: 0
🧹 Nitpick comments (1)
apps/web/utils/llms/model.test.ts (1)
**222-354**: Consider creating helper functions for test data consistency.

Based on the retrieved learnings, you should create helper functions for common test data to ensure consistency and reduce duplication. The new tests repeat similar patterns of setting up `userAi` objects and environment variables.

Consider creating helper functions to reduce duplication:

```typescript
// Helper function for creating test user AI data
const createTestUserAi = (
  overrides: Partial<UserAIFields> = {},
): UserAIFields => ({
  aiApiKey: null,
  aiProvider: null,
  aiModel: null,
  ...overrides,
});

// Helper function for setting up environment variables
const setupEnvForModelType = (
  modelType: string,
  provider: string,
  model: string,
) => {
  const providerVar = `${modelType.toUpperCase()}_LLM_PROVIDER`;
  const modelVar = `${modelType.toUpperCase()}_LLM_MODEL`;
  vi.mocked(env)[providerVar] = provider;
  vi.mocked(env)[modelVar] = model;
};
```

This would align with the learning about creating helper functions for common test data.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- apps/web/env.ts (2 hunks)
- apps/web/utils/llms/model.test.ts (2 hunks)
- apps/web/utils/llms/model.ts (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/web/utils/llms/model.ts
- apps/web/env.ts
🧰 Additional context used
📓 Path-based instructions (2)
apps/web/**/*.{ts,tsx}
Instructions used from:
Sources:
📄 CodeRabbit Inference Engine
- apps/web/CLAUDE.md
**/*.{ts,tsx}
Instructions used from:
Sources:
📄 CodeRabbit Inference Engine
- .cursor/rules/form-handling.mdc
🧠 Learnings (1)
apps/web/utils/llms/model.test.ts (7)
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: When writing tests for LLM-related functionality in TypeScript (e.g., in 'apps/web/__tests__/*.test.ts'), always create helper functions for common test data to ensure consistency and reduce duplication.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Test both AI and non-AI code paths in LLM-related functionality to ensure comprehensive coverage.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-23T12:27:30.570Z
Learning: When mocking Prisma in Vitest, import the Prisma mock from '@/utils/__mocks__/prisma', mock '@/utils/prisma', and clear all mocks in a beforeEach hook to ensure test isolation.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Standard test cases for LLM-related functionality should include: happy path with expected input, error handling, edge cases (such as empty input or null values), different user configurations, and various input formats.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Use descriptive 'console.debug' statements to log generated content in LLM-related tests for easier debugging and traceability.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-06-23T12:26:27.693Z
Learning: Do not mock the LLM call in LLM-related tests; always call the actual LLM to ensure test reliability and real-world coverage.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/posthog-feature-flags.mdc:0-0
Timestamp: 2025-06-23T12:26:51.119Z
Learning: For A/B test feature flags, always define a TypeScript type for the possible variants and ensure the hook returns a default fallback value (such as 'control') if the flag is not set.
🧬 Code Graph Analysis (1)
apps/web/utils/llms/model.test.ts (4)
- apps/web/utils/llms/types.ts (1): `UserAIFields` (3-9)
- apps/web/env.ts (1): `env` (15-209)
- apps/web/utils/llms/model.ts (1): `getModel` (18-32)
- apps/web/utils/llms/config.ts (1): `Provider` (7-15)
🪛 GitHub Check: test
apps/web/utils/llms/model.test.ts
[failure] 325-325: utils/llms/model.test.ts > Models > getModel > should use OpenRouter with provider options for default model

```
AssertionError: expected undefined to deeply equal [ 'Google Vertex', 'Anthropic' ]
- Expected:
[
  "Google Vertex",
  "Anthropic",
]
- Received:
undefined
❯ utils/llms/model.test.ts:325:67
```
🪛 GitHub Actions: Run Tests
apps/web/utils/llms/model.test.ts
[error] 325-325: Test failure in 'should use OpenRouter with provider options for default model': AssertionError - expected undefined to deeply equal [ 'Google Vertex', 'Anthropic' ]. The test expected the provider options order array to be defined but received undefined.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: claude-review
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (8)
apps/web/utils/llms/model.test.ts (8)
**41-47**: Environment variables correctly updated for new modelType functionality.

The mock environment variables properly reflect the new schema with separate provider/model configurations for chat, economy, and default model types, including OpenRouter provider options.

**222-236**: Test correctly validates chat model selection.

The test properly verifies that when `modelType` is "chat", the function selects the configured chat provider and model from environment variables.

**238-257**: Test correctly validates OpenRouter provider options for chat.

The test properly verifies that OpenRouter provider options are correctly parsed and applied when using the chat model type.

**259-274**: Test correctly validates economy model selection.

The test properly verifies that when `modelType` is "economy", the function selects the configured economy provider and model from environment variables.

**276-296**: Test correctly validates OpenRouter provider options for economy.

The test properly verifies that OpenRouter provider options are correctly parsed and applied when using the economy model type.

**298-308**: Test correctly validates default model selection.

The test properly verifies that when `modelType` is "default", the function falls back to the default provider and model configuration.

**331-354**: Test correctly validates custom provider logic preservation.

The test properly verifies that when using a custom provider, the default OpenRouter provider options don't override the custom logic. This ensures the custom provider behavior is preserved.

**310-329**: Incorrect assumption about DEFAULT_OPENROUTER_PROVIDERS handling.

The model.ts implementation already injects `env.DEFAULT_OPENROUTER_PROVIDERS` when `aiProvider === Provider.OPENROUTER` and no `providerOptions.openrouter` exists:

```typescript
if (
  aiProvider === Provider.OPENROUTER &&
  env.DEFAULT_OPENROUTER_PROVIDERS &&
  !providerOptions.openrouter
) {
  providerOptions.openrouter = createOpenRouterProviderOptions(
    env.DEFAULT_OPENROUTER_PROVIDERS,
  );
}
```

The failing test isn't due to missing logic in `selectModelByType` or `getModel`, but rather to how the Vitest mock of `@/env` is applied after the module has already been imported. To fix the test:

- In `apps/web/utils/llms/model.test.ts`, move or wrap your `vi.mock("@/env", …)` call so it runs before any import of `getModel`.
  - Either place the mock at the very top of the file
  - Or call `vi.resetModules()` followed by your mock, then dynamically `import` the module under test within each `it` block.

Example adjustment:

```typescript
// At the very top before any imports
vi.mock("@/env", () => ({
  env: {
    DEFAULT_LLM_PROVIDER: "openrouter",
    DEFAULT_LLM_MODEL: "anthropic/claude-3.5-sonnet",
    DEFAULT_OPENROUTER_PROVIDERS: "Google Vertex,Anthropic",
    OPENROUTER_API_KEY: "test-openrouter-key",
    // …other defaults
  },
}));

import { getModel } from "@/utils/llms/model";
```

Or per-test:

```typescript
it("should use OpenRouter …", async () => {
  vi.resetModules();
  vi.mock("@/env", () => ({
    env: {
      DEFAULT_LLM_PROVIDER: "openrouter",
      DEFAULT_LLM_MODEL: "anthropic/claude-3.5-sonnet",
      DEFAULT_OPENROUTER_PROVIDERS: "Google Vertex,Anthropic",
      OPENROUTER_API_KEY: "test-openrouter-key",
    },
  }));
  const { getModel } = await import("@/utils/llms/model");
  // …test logic…
});
```

With the mock in place before module initialization, `providerOptions.openrouter` will correctly be populated and the test will pass.

Likely an incorrect or invalid review comment.
**Summary by CodeRabbit**

- New Features
- Improvements
- Tests
- Chores