Walkthrough

Adds AI Gateway as a new LLM provider across the env schema, config, and model-selection logic; introduces `AI_GATEWAY_API_KEY`; updates the Gemini OpenRouter model IDs; adds the `@ai-sdk/gateway` dependency; wires the new env var into the build; bumps the version to v2.4.8.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant WebApp as Web App
    participant Selector as LLM Model Selector
    participant GatewaySDK as AI Gateway SDK
    participant AIGW as AI Gateway API
    User->>WebApp: Initiates LLM request
    WebApp->>Selector: selectModel(provider, aiModel, aiApiKey)
    alt provider == AI_GATEWAY
        Selector->>Selector: resolve apiKey (aiApiKey || env.AI_GATEWAY_API_KEY)
        Selector->>GatewaySDK: createGateway(apiKey)
        Selector->>GatewaySDK: gateway(modelName)
        GatewaySDK->>AIGW: Invoke model
        AIGW-->>GatewaySDK: Response
        GatewaySDK-->>WebApp: Model handle/response
    else other providers
        Selector->>WebApp: Delegate to existing provider flow
    end
    WebApp-->>User: Return LLM result
```
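As a rough sketch of the gateway branch in that flow (a minimal, hypothetical reconstruction; the helper name `selectGatewayModel`, the `@/env` import path, and the returned shape are assumptions, and the real `selectModel` in `apps/web/utils/llms/model.ts` handles all providers):

```ts
import { createGateway } from "@ai-sdk/gateway";
import { env } from "@/env";

// Sketch of the AI Gateway branch only; other provider branches omitted.
function selectGatewayModel(aiModel: string | null, aiApiKey: string | null) {
  // Prefer the user-supplied key, falling back to the server env var.
  const apiKey = aiApiKey || env.AI_GATEWAY_API_KEY;
  if (!apiKey) throw new Error("AI Gateway API key is not configured");

  const gateway = createGateway({ apiKey });
  // Gateway model IDs are slash-delimited, e.g. "google/gemini-2.5".
  const modelName = aiModel || "google/gemini-2.5";
  return { provider: "aigateway", modelName, llmModel: gateway(modelName) };
}
```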
Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes
Claude finished @elie222's task

Code Review for AI Gateway Support

I've reviewed the AI Gateway implementation across the changed files. Here's my comprehensive feedback:

Overall Assessment ✅

The AI Gateway integration is well implemented and follows existing patterns in the codebase. The changes are minimal, focused, and maintain backward compatibility.

- Code Quality & Best Practices: ✅ Good
- Potential Issues: ✅ No major issues found
- Performance Considerations: ✅ Excellent
- Security Concerns: ✅ Secure implementation; ✅ environment variable security
- Technical Implementation: ✅ Well structured; configuration files updated appropriately

Summary

This is a clean, well-implemented addition that maintains code quality standards and follows established patterns. The changes are minimal and focused, making it easy to review and maintain. No blocking issues were identified.

Approval Status: ✅ LGTM. Ready for merge after addressing minor suggestions if desired.
Actionable comments posted: 3
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (6)
- `apps/web/env.ts` (2 hunks)
- `apps/web/package.json` (1 hunk)
- `apps/web/utils/llms/config.ts` (3 hunks)
- `apps/web/utils/llms/model.ts` (3 hunks)
- `turbo.json` (1 hunk)
- `version.txt` (1 hunk)
🧰 Additional context used
📓 Path-based instructions (15)
!{.cursor/rules/*.mdc}
📄 CodeRabbit Inference Engine (.cursor/rules/cursor-rules.mdc)
Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location
Files:
`version.txt`, `apps/web/package.json`, `turbo.json`, `apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`, `apps/web/env.ts`
!pages/_document.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)
!pages/_document.{js,jsx,ts,tsx}: Don't import `next/document` outside of `pages/_document.jsx` in Next.js projects.
Files:
`version.txt`, `apps/web/package.json`, `turbo.json`, `apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`, `apps/web/env.ts`
turbo.json
📄 CodeRabbit Inference Engine (.cursor/rules/environment-variables.mdc)
When adding a new environment variable, add it to `turbo.json` under `tasks.build.env` to ensure it is included in the build process.
Files:
turbo.json
apps/web/**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use `@/` for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`, `apps/web/env.ts`
**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/form-handling.mdc)
**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`, `apps/web/env.ts`
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/logging.mdc)
**/*.{ts,tsx}: Use `createScopedLogger` for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using `createScopedLogger`
Only use `.with()` on a logger instance within a specific function, not for a global logger
Import Prisma in the project using `import prisma from "@/utils/prisma";`
**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either `T[]` or `Array<T>` consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`, `apps/web/env.ts`
apps/web/utils/**
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Create utility functions in the `utils/` folder for reusable logic
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`
apps/web/utils/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)
**/*.{js,jsx,ts,tsx}: Don't use `<img>` elements in Next.js projects.
Don't use `<head>` elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`, `apps/web/env.ts`
apps/web/utils/{ai,llms}/**/*
📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)
LLM-related code must be organized in the directories: apps/web/utils/ai/, apps/web/utils/llms/, and apps/web/tests/ for LLM-specific tests.
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`
apps/web/utils/{ai,llms}/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)
apps/web/utils/{ai,llms}/**/*.ts: Keep system prompts and user prompts separate in LLM-related functions.
System prompt should define the LLM's role and task specifications.
User prompt should contain the actual data and context.
Always define a Zod schema for response validation in LLM-related functions.
Make Zod schemas as specific as possible to guide the LLM output.
Use descriptive scoped loggers for each LLM feature.
Log inputs and outputs with appropriate log levels in LLM-related functions.
Include relevant context in log messages for LLM-related code.
Implement early returns for invalid inputs in LLM-related functions.
Use proper error types and logging in LLM-related code.
Implement fallbacks for AI failures in LLM-related functions.
Add retry logic for transient failures using withRetry in LLM-related functions.
Use XML-like tags to structure data in LLM prompts.
Remove excessive whitespace and truncate long inputs in LLM prompts.
Format data consistently across similar LLM-related functions.
Use TypeScript types for all parameters and return values in LLM-related functions.
Define clear interfaces for complex input/output structures in LLM-related code.
Keep related AI functions in the same file or directory.
Extract common patterns into utility functions in LLM-related code.
Document complex AI logic with clear comments in LLM-related code.
Files:
`apps/web/utils/llms/config.ts`, `apps/web/utils/llms/model.ts`
apps/web/utils/llms/model.ts
📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)
Model definitions and configurations must be implemented in utils/llms/model.ts.
Files:
apps/web/utils/llms/model.ts
apps/web/**/{.env.example,env.ts,turbo.json}
📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)
Add environment variables to `.env.example`, `env.ts`, and `turbo.json`
Files:
apps/web/env.ts
apps/web/**/{.env.example,env.ts}
📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)
Client-side environment variables: Prefix with `NEXT_PUBLIC_`
Files:
apps/web/env.ts
apps/web/env.ts
📄 CodeRabbit Inference Engine (.cursor/rules/environment-variables.mdc)
apps/web/env.ts: When adding a new environment variable, add it to `apps/web/env.ts` in the appropriate section: use `server` for server-only variables; for client-side variables, use the `client` section and also add to `experimental__runtimeEnv`.
Client-side environment variables must be prefixed with `NEXT_PUBLIC_` and added to both the `client` and `experimental__runtimeEnv` sections in `apps/web/env.ts`.
Files:
apps/web/env.ts
🧠 Learnings (7)
📚 Learning: 2025-07-18T15:04:50.520Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/environment-variables.mdc:0-0
Timestamp: 2025-07-18T15:04:50.520Z
Learning: Applies to turbo.json : When adding a new environment variable, add it to `turbo.json` under `tasks.build.env` to ensure it is included in the build process.
Applied to files:
turbo.json
📚 Learning: 2025-07-18T15:04:30.467Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-18T15:04:30.467Z
Learning: Applies to apps/web/**/{.env.example,env.ts,turbo.json} : Add environment variables to `.env.example`, `env.ts`, and `turbo.json`
Applied to files:
turbo.json
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.
Applied to files:
apps/web/env.ts
📚 Learning: 2025-07-18T15:04:50.520Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/environment-variables.mdc:0-0
Timestamp: 2025-07-18T15:04:50.520Z
Learning: Applies to apps/web/env.ts : When adding a new environment variable, add it to `apps/web/env.ts` in the appropriate section: use `server` for server-only variables, and for client-side variables, use the `client` section and also add to `experimental__runtimeEnv`.
Applied to files:
apps/web/env.ts
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Include relevant context in log messages for LLM-related code.
Applied to files:
apps/web/env.ts
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use TypeScript types for all parameters and return values in LLM-related functions.
Applied to files:
apps/web/env.ts
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.
Applied to files:
apps/web/env.ts
🧬 Code Graph Analysis (1)
apps/web/utils/llms/model.ts (2)
apps/web/utils/llms/config.ts (2)
`Provider` (7-16), `Model` (18-36)
apps/web/env.ts (1)
`env` (16-227)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Jit Security
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (8)
version.txt (1)
1-1: LGTM: version bump to v2.4.8

Version increment matches the scope of the changes introduced in this PR.
apps/web/package.json (1)
18-18: Confirmed: @ai-sdk/gateway@1.0.5 is compatible with ai@5.0.0 and all provider packages

According to the official AI SDK 5.0 migration guide and package documentation, the following stack is fully supported:
- ai@5.0.0
- @ai-sdk/gateway@1.0.5
- @ai-sdk/anthropic@2.x
- @ai-sdk/google@2.x
- @ai-sdk/openai@2.x
No further changes needed for version alignment—this dependency addition is safe to merge.
apps/web/env.ts (1)
12-12: LGTM: `aigateway` added to `llmProviderEnum`

This wires the new provider into all env-backed selections using the enum.
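A sketch of what that enum addition plausibly looks like (hypothetical reconstruction; the existing entries of `llmProviderEnum` in `apps/web/env.ts` are not shown in this review):

```ts
import { z } from "zod";

// The real enum lists every supported provider value; only the new one
// is spelled out here.
const llmProviderEnum = z.enum([
  // ...existing provider values
  "aigateway", // new in this PR
]);
```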
apps/web/utils/llms/config.ts (2)
13-16: LGTM: `Provider.AI_GATEWAY` registered

Provider mapping and value align with `llmProviderEnum` ("aigateway").
45-49: LGTM: provider option added to UI list

The "AI Gateway" entry will surface the new provider in selection UIs.
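Taken together, the config.ts additions these two comments describe look roughly like this (a sketch; `providerOptions` and the elided entries are assumptions, and the repo's rules favor `as const` objects over TypeScript enums):

```ts
// apps/web/utils/llms/config.ts (sketch)
export const Provider = {
  // ...existing providers
  AI_GATEWAY: "aigateway",
} as const;

export const Model = {
  // ...existing models
  GEMINI_2_0_FLASH_OPENROUTER: "google/gemini-2.0-flash",
  GEMINI_2_5_PRO_OPENROUTER: "google/gemini-2.5",
} as const;

// Option entry that surfaces the provider in selection UIs
// (list name is illustrative).
export const providerOptions = [
  // ...existing options
  { label: "AI Gateway", value: Provider.AI_GATEWAY },
];
```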
apps/web/utils/llms/model.ts (3)
8-8: AI Gateway provider import — OK

Importing `createGateway` from `@ai-sdk/gateway` is correct and aligns with the added provider path.
387-388: `getProviderApiKey` mapping for AI Gateway — OK

Adds AI_GATEWAY → `env.AI_GATEWAY_API_KEY`, enabling economy/chat/default selectors to resolve the key consistently.
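The key-resolution mapping likely reads along these lines (a sketch only; the actual `getProviderApiKey` may be a lookup table rather than a switch, and the import paths are assumptions):

```ts
import { env } from "@/env";
import { Provider } from "@/utils/llms/config";

// Resolve the server-side API key for a given provider.
function getProviderApiKey(provider: string): string | undefined {
  switch (provider) {
    case Provider.AI_GATEWAY:
      return env.AI_GATEWAY_API_KEY;
    // ...existing provider cases
    default:
      return undefined;
  }
}
```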
119-130: AI Gateway model IDs are slash-based; no change needed for the default

Based on the Vercel AI Gateway docs, slash-delimited IDs like `"google/gemini-2.5"` are fully supported by `createGateway`. We can continue using `Model.GEMINI_2_5_PRO_OPENROUTER` (which equals `"google/gemini-2.5"`) as the default without risking mismatches.

However, there are still hard-coded `"google/gemini-2.5-pro"` strings in this file that should be replaced with the corresponding `Model` constant to keep things DRY and avoid drift:

- apps/web/utils/llms/model.ts line 324
- apps/web/utils/llms/model.ts line 333

Replace each `"google/gemini-2.5-pro"` with `Model.GEMINI_2_5_PRO` (or whatever `Model` key matches that exact ID).
```ts
AI_GATEWAY_API_KEY: z.string().optional(),
OLLAMA_BASE_URL: z.string().optional(),
```
💡 Verification agent
🧩 Analysis chain
Server env added: AI_GATEWAY_API_KEY — verify .env.example and align OpenRouter defaults
- Correctly scoped as server-only (not in experimental__runtimeEnv).
- Please add to apps/web/.env.example too.
Also, cross-check the OpenRouter model defaults: config.ts updated to "google/gemini-2.0-flash" and "google/gemini-2.5", but env.ts still defaults OPENROUTER_BACKUP_MODEL to "google/gemini-2.5-flash" (Line 49), which may be invalid on OpenRouter.
Suggested default (pick one based on your performance/price target):
- "google/gemini-2.0-flash" for a flash-tier default
- "google/gemini-2.5" for a pro-tier default
Run these to verify consistency and locate any lingering old/invalid model IDs:
And confirm whether "google/gemini-2.5-flash" is a valid OpenRouter model:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Check OPENROUTER defaults in env.ts:"
rg -n 'OPENROUTER_BACKUP_MODEL|2\.5-flash' apps/web/env.ts -n -A 2 -B 2 || true
echo
echo "Search for old OpenRouter Gemini IDs:"
rg -n -S '"google/gemini-2.0-flash-001"|"google/gemini-2.5-pro-preview-03-25"' -A 2 -B 2 || true
```

Length of output: 1188
🌐 Web query:
Does OpenRouter expose a model with the exact ID "google/gemini-2.5-flash"? If not, what are the currently valid Google Gemini model IDs on OpenRouter?
💡 Result:
OpenRouter does expose a model with the exact ID "google/gemini-2.5-flash", making it a valid selection for API requests[9]. This model is listed as available, and its ID matches Google's official naming scheme for Gemini models[10].
Currently valid Google Gemini model IDs on OpenRouter include:
- google/gemini-2.5-pro
- google/gemini-2.5-flash[9]
- google/gemini-2.5-flash-lite
- google/gemini-2.5-flash-lite-preview-06-17
- google/gemini-2.0-flash
- google/gemini-1.5-pro
- google/gemini-1.5-pro-latest
- google/gemini-1.5-flash
- google/gemini-1.5-flash-latest
- google/gemini-1.5-flash-8b
- google/gemini-1.5-flash-8b-latest[3][6][7]
These represent popular, officially supported model IDs. For the latest and exhaustive list, consult OpenRouter's model directory[7], but the above covers currently supported Gemini IDs. Note that naming may change or expand as Google and OpenRouter introduce model updates, typically following the format `google/gemini-<version>-<variant>` for stable releases[10].
Citations:
- 1: https://huggingface.co/docs/smolagents/en/examples/using_different_models
- 2: https://www.sohamkamani.com/java/openrouter/
- 3: https://ai-sdk.dev/providers/ai-sdk-providers/google-generative-ai
- 4: https://docs.typingmind.com/manage-and-connect-ai-models/openrouter-models
- 5: https://ai.google.dev/gemini-api/docs/openai
- 6: https://v5.ai-sdk.dev/providers/ai-sdk-providers/google-generative-ai
- 7: https://www.promptfoo.dev/docs/providers/openrouter/
- 8: https://openrouter.ai/google/gemini-2.5-pro
- 9: https://langbase.com/docs/llms.txt
- 10: https://ai.google.dev/gemini-api/docs/models
🔧 Add new env vars to .env.example & align OpenRouter defaults

- `apps/web/.env.example`: add `AI_GATEWAY_API_KEY=` and `OLLAMA_BASE_URL=`.
- `apps/web/env.ts` (lines 59–60): `AI_GATEWAY_API_KEY` and `OLLAMA_BASE_URL` are now declared server-side; mirror them in your .env.example.
- `apps/web/env.ts` (line 49) vs. `apps/web/config.ts`: `OPENROUTER_BACKUP_MODEL` defaults to `"google/gemini-2.5-flash"`, which is valid on OpenRouter, while config.ts sets model defaults to `"google/gemini-2.0-flash"` (flash-tier) and `"google/gemini-2.5"` (pro-tier). For consistency, pick one default across both files: flash-tier `"google/gemini-2.0-flash"` or pro-tier `"google/gemini-2.5"`. Or, if you intend to keep `"google/gemini-2.5-flash"` as the explicit backup, update your config.ts defaults to match.
🤖 Prompt for AI Agents
In apps/web/env.ts around lines 59–60 add that AI_GATEWAY_API_KEY and
OLLAMA_BASE_URL are declared server-side and must be added to
apps/web/.env.example (add blank entries for AI_GATEWAY_API_KEY and
OLLAMA_BASE_URL); also reconcile the OPENROUTER_BACKUP_MODEL default mismatch
between apps/web/env.ts (line 49) and apps/web/config.ts by choosing one
consistent default across both files — either set both to the flash-tier
"google/gemini-2.0-flash" and pro-tier "google/gemini-2.5" as needed, or update
config.ts to match "google/gemini-2.5-flash" if that is the intended backup —
make the same string value appear in both places.
```ts
GEMINI_2_0_FLASH_OPENROUTER: "google/gemini-2.0-flash",
GEMINI_2_5_PRO_OPENROUTER: "google/gemini-2.5",
GROQ_LLAMA_3_3_70B: "llama-3.3-70b-versatile",
```
💡 Verification agent
🧩 Analysis chain
Updated OpenRouter Gemini IDs — verify no stale IDs remain
Model IDs look current. Ensure all old identifiers are removed elsewhere to avoid 404s or misroutes.
Run this to find any stale IDs:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
rg -n -S '"google/gemini-2.0-flash-001"|"google/gemini-2.5-pro-preview-03-25"' -A 2 -B 2 || true
```

Length of output: 616
Remove stale OpenRouter Gemini IDs in usage.ts
The regex search uncovered obsolete model keys still present in apps/web/utils/usage.ts. Please remove or consolidate these entries to match the updated constants in config.ts.
• File: apps/web/utils/usage.ts
• Lines: 116–120
Suggested diff:
```diff
@@ apps/web/utils/usage.ts:114-120
-  "google/gemini-2.0-flash-001": gemini2_5flash,
-  "google/gemini-2.5-flash-preview-05-20": gemini2_5flash,
-  "google/gemini-2.5-pro-preview-03-25": gemini2_5pro,
-  "google/gemini-2.5-pro-preview-06-05": gemini2_5pro,
-  "google/gemini-2.5-pro-preview": gemini2_5pro,
+  "google/gemini-2.0-flash": gemini2_5flash,
+  "google/gemini-2.5": gemini2_5pro,
```

Ensure all downstream references use the new IDs from config.ts (`GEMINI_2_0_FLASH_OPENROUTER` and `GEMINI_2_5_PRO_OPENROUTER`).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
GEMINI_2_0_FLASH_OPENROUTER: "google/gemini-2.0-flash",
GEMINI_2_5_PRO_OPENROUTER: "google/gemini-2.5",
GROQ_LLAMA_3_3_70B: "llama-3.3-70b-versatile",
"google/gemini-2.0-flash": gemini2_5flash,
"google/gemini-2.5": gemini2_5pro,
```
🤖 Prompt for AI Agents
In apps/web/utils/usage.ts around lines 116 to 120 there are stale OpenRouter
Gemini model keys; remove those obsolete entries and replace them with the
current constants exported from apps/web/utils/llms/config.ts
(GEMINI_2_0_FLASH_OPENROUTER and GEMINI_2_5_PRO_OPENROUTER), update any
import/usage to reference those constants instead of hardcoded strings, and
consolidate duplicate mappings; also run a quick repo-wide search to update any
other references to the old IDs so all downstream uses point to the new config
constants.
| "GOOGLE_API_KEY", | ||
| "GROQ_API_KEY", | ||
| "OPENROUTER_API_KEY", | ||
| "AI_GATEWAY_API_KEY", |
💡 Verification agent
🧩 Analysis chain
Added AI_GATEWAY_API_KEY to build env — verify .env.example is updated too
Good addition to ensure the key is available at build time. Please also confirm it’s documented in apps/web/.env.example per guidelines.
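For reference, the wiring these lines extend sits under `tasks.build.env` in turbo.json; a trimmed sketch (other tasks and env entries omitted, so the real file is larger):

```json
{
  "tasks": {
    "build": {
      "env": [
        "GOOGLE_API_KEY",
        "GROQ_API_KEY",
        "OPENROUTER_API_KEY",
        "AI_GATEWAY_API_KEY"
      ]
    }
  }
}
```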
Run this to verify presence across the repo and specifically in .env.example:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Search for AI_GATEWAY_API_KEY across the repo:"
rg -n "AI_GATEWAY_API_KEY" -S -A 2 -B 2 || true
echo
echo "Verify it's listed in any .env.example files (should minimally include apps/web/.env.example):"
fd -t f -g ".env.example" | while read -r f; do
  if rg -n "AI_GATEWAY_API_KEY" "$f" > /dev/null; then
    echo "OK in $f"
  else
    echo "MISSING in $f"
  fi
done
```

Length of output: 1744
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Locating all .env.example files (including hidden):"
# Include hidden files to find .env.example in apps/web
fd --hidden -t f -g ".env.example" | while read -r f; do
  if rg -q "AI_GATEWAY_API_KEY" "$f"; then
    echo "OK in $f"
  else
    echo "MISSING in $f"
  fi
done
```

Length of output: 635
Missing AI_GATEWAY_API_KEY in .env.example files
The new `AI_GATEWAY_API_KEY` isn't documented in any of our .env.example files; per our guidelines we need at least apps/web/.env.example updated. Please add a placeholder entry so developers know to supply this value:

- apps/web/.env.example (required):

  ```
  AI_GATEWAY_API_KEY=your_api_key_here
  ```

- apps/mcp-server/.env.example (optional): add only if that service uses this key at runtime
- apps/unsubscriber/.env.example (optional): same as above
🤖 Prompt for AI Agents
In turbo.json around line 41 the new AI_GATEWAY_API_KEY entry is present but the
key is not documented in any .env.example files; update apps/web/.env.example
(required) by adding a placeholder line "AI_GATEWAY_API_KEY=your_api_key_here"
so developers know to supply it, and optionally add the same placeholder to
apps/mcp-server/.env.example and apps/unsubscriber/.env.example only if those
services consume the key at runtime.
Summary by CodeRabbit
New Features
Improvements
Chores