
Add ai gateway support #674

Merged: elie222 merged 1 commit into main from feat/ai-gateway, Aug 13, 2025

Conversation

@elie222 (Owner) commented Aug 13, 2025

Summary by CodeRabbit

  • New Features

    • Added AI Gateway as a selectable LLM provider, visible in the provider list.
    • Supports configuration via AI_GATEWAY_API_KEY for authenticated use.
  • Improvements

    • Updated Gemini OpenRouter model identifiers to the latest names for improved compatibility and selection accuracy.
  • Chores

    • Bumped app version to v2.4.8.

@vercel vercel bot commented Aug 13, 2025

The latest updates on your projects.

Project: inbox-zero
Deployment: Ready (Preview)
Updated (UTC): Aug 13, 2025, 9:58am

@coderabbitai coderabbitai bot (Contributor) commented Aug 13, 2025

Walkthrough

Adds AI Gateway as a new LLM provider across env schema, config, and model-selection logic; introduces AI_GATEWAY_API_KEY; updates Gemini OpenRouter model IDs; adds @ai-sdk/gateway dependency; wires the new env var into build; bumps version to v2.4.8.
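For orientation, this is roughly how the new pieces fit together at a call site. It is a hedged sketch, not code from the diff, using only what the PR introduces (createGateway from @ai-sdk/gateway and the AI_GATEWAY_API_KEY env var):

```ts
// Illustrative only: how a gateway-backed model would be used with the ai SDK.
import { createGateway } from "@ai-sdk/gateway";
import { generateText } from "ai";

// The key comes from the new server-only env var added in this PR.
const gateway = createGateway({ apiKey: process.env.AI_GATEWAY_API_KEY });

const { text } = await generateText({
  // "google/gemini-2.5" is the PR's default model ID (GEMINI_2_5_PRO_OPENROUTER).
  model: gateway("google/gemini-2.5"),
  prompt: "Summarize this email thread in one sentence.",
});
```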

Changes

  • Env and build wiring (apps/web/env.ts, turbo.json): Added provider enum member "aigateway"; introduced optional server env AI_GATEWAY_API_KEY; included AI_GATEWAY_API_KEY in the turbo build env (see the sketch after this list).
  • LLM provider config (apps/web/utils/llms/config.ts): Added Provider.AI_GATEWAY and a providerOptions entry; updated Gemini OpenRouter IDs: GEMINI_2_0_FLASH_OPENROUTER → google/gemini-2.0-flash, GEMINI_2_5_PRO_OPENROUTER → google/gemini-2.5.
  • Model selection logic (apps/web/utils/llms/model.ts): Integrated an AI Gateway branch using createGateway; resolves the API key from the parameter or env; selects the model (defaulting to GEMINI_2_5_PRO_OPENROUTER); maps AI_GATEWAY to AI_GATEWAY_API_KEY (a sketch of this branch follows the sequence diagram below).
  • Dependencies (apps/web/package.json): Added dependency @ai-sdk/gateway@1.0.5.
  • Version bump (version.txt): Updated version from v2.4.7 to v2.4.8.
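A minimal sketch of the env wiring, assuming the Zod/@t3-oss/env-nextjs pattern apps/web/env.ts already uses; the surrounding fields and every enum member other than "aigateway" and AI_GATEWAY_API_KEY are illustrative:

```ts
// Sketch of the apps/web/env.ts additions; not the literal diff.
import { createEnv } from "@t3-oss/env-nextjs";
import { z } from "zod";

// "aigateway" joins the existing provider enum (llmProviderEnum).
const llmProviderEnum = z.enum(["openai", "anthropic", "openrouter", "aigateway"]);

export const env = createEnv({
  server: {
    DEFAULT_LLM_PROVIDER: llmProviderEnum.optional(),
    // Server-only and optional, matching the other provider API keys.
    AI_GATEWAY_API_KEY: z.string().optional(),
  },
  // Client vars (NEXT_PUBLIC_*) would also be mirrored here; none are needed for this change.
  experimental__runtimeEnv: {},
});
```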

Sequence Diagram(s)

sequenceDiagram
  actor User
  participant WebApp as Web App
  participant Selector as LLM Model Selector
  participant GatewaySDK as AI Gateway SDK
  participant AIGW as AI Gateway API

  User->>WebApp: Initiates LLM request
  WebApp->>Selector: selectModel(provider, aiModel, aiApiKey)
  alt provider == AI_GATEWAY
    Selector->>Selector: resolve apiKey (aiApiKey || env.AI_GATEWAY_API_KEY)
    Selector->>GatewaySDK: createGateway(apiKey)
    Selector->>GatewaySDK: gateway(modelName)
    GatewaySDK->>AIGW: Invoke model
    AIGW-->>GatewaySDK: Response
    GatewaySDK-->>WebApp: Model handle/response
  else other providers
    Selector->>WebApp: Delegate to existing provider flow
  end
  WebApp-->>User: Return LLM result
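Reading the diagram back into code, the gateway branch presumably looks something like the following. The helper name and return shape are assumptions; createGateway, the key-resolution order, and the GEMINI_2_5_PRO_OPENROUTER default come from the PR:

```ts
// Sketch of the AI Gateway branch in apps/web/utils/llms/model.ts; illustrative only.
import { createGateway } from "@ai-sdk/gateway";
import { env } from "@/env";
import { Model, Provider } from "@/utils/llms/config";

function selectAiGatewayModel(aiModel: string | null, aiApiKey: string | null) {
  // Per-user key first, then the server env var.
  const apiKey = aiApiKey || env.AI_GATEWAY_API_KEY;
  if (!apiKey) throw new Error("AI Gateway API key is not set");

  const gateway = createGateway({ apiKey });
  // Defaults to "google/gemini-2.5" when no model is specified.
  const modelName = aiModel || Model.GEMINI_2_5_PRO_OPENROUTER;

  return {
    provider: Provider.AI_GATEWAY,
    modelName,
    model: gateway(modelName),
  };
}
```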

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

A gateway opens, whiskers twitch—
New keys in burrows, quick-switch.
Gemini signs now neatly aligned,
Models hop on a streamlined bind.
v2.4.8—carrot raised high,
Through the Gateway, bunnies fly! 🥕✨



@claude claude bot commented Aug 13, 2025

Claude finished @elie222's task.


Code Review for AI Gateway Support

I've reviewed the AI gateway implementation across the changed files. Here's my comprehensive feedback:

Overall Assessment ✅

The AI gateway integration is well-implemented and follows existing patterns in the codebase. The changes are minimal, focused, and maintain backward compatibility.

Code Quality & Best Practices

✅ Good:

  • Consistent naming conventions (aigateway, AI_GATEWAY, AI Gateway)
  • Follows existing provider pattern in apps/web/utils/llms/model.ts:119-130
  • Proper environment variable validation in apps/web/env.ts:59
  • Clean integration with the @ai-sdk/gateway package

⚠️ Minor Improvement Suggestion:

  • Consider adding JSDoc comments for the AI Gateway case in the selectModel() function, for consistency with other providers

Potential Issues

✅ No Major Issues Found

⚠️ Minor Considerations:

  1. Default Model Selection (apps/web/utils/llms/model.ts:120): Using Model.GEMINI_2_5_PRO_OPENROUTER as the default for AI Gateway might be confusing, since it references an OpenRouter constant. Consider:

    • Adding a dedicated AI Gateway model constant (sketched after this list)
    • Or documenting why this specific model is used as the default
  2. Error Handling: The implementation inherits error handling from getBackupModel(), which is appropriate
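A minimal sketch of the dedicated-constant idea from point 1; the AI_GATEWAY_DEFAULT name is hypothetical, and the string value is the default the PR already uses:

```ts
// Hypothetical addition to the Model map in apps/web/utils/llms/config.ts.
export const Model = {
  // ...existing entries...
  GEMINI_2_5_PRO_OPENROUTER: "google/gemini-2.5",
  // Dedicated default for the AI Gateway path, so it no longer reads as
  // if it depended on an OpenRouter-specific model.
  AI_GATEWAY_DEFAULT: "google/gemini-2.5",
} as const;
```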

Performance Considerations

✅ Excellent:

  • No performance impact - follows the same lazy loading pattern as other providers
  • Proper backup model integration maintains fallback reliability
  • No additional network calls or overhead introduced

Security Concerns

✅ Secure Implementation:

  • API key properly handled through environment variables with optional fallback
  • Uses the established pattern for API key management in getProviderApiKey() function at apps/web/utils/llms/model.ts:380-391
  • No secrets or sensitive data exposed in the code

✅ Environment Variable Security:

  • AI_GATEWAY_API_KEY is optional and server-side only (not exposed to client)
  • Follows the existing security model used by other provider API keys

Technical Implementation

✅ Well Structured:

  • Clean integration with @ai-sdk/gateway package (version 1.0.5)
  • Maintains consistency with other provider implementations
  • Proper TypeScript usage throughout

Configuration Files Updated Appropriately:

  • turbo.json: Added AI_GATEWAY_API_KEY to build environment variables
  • package.json: Added @ai-sdk/gateway dependency
  • env.ts: Added proper validation and enum inclusion

Recommendations

  1. Documentation: Consider adding a comment explaining the AI Gateway provider's purpose and use cases
  2. Model Constant: Consider adding a dedicated Model.AI_GATEWAY_DEFAULT constant instead of reusing OpenRouter model names
  3. Testing: Ensure integration tests cover the new AI Gateway provider path (a rough sketch follows)
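For point 3, a rough vitest sketch; the selectModel export and its return shape are assumptions taken from the sequence diagram above, not the actual API:

```ts
// Hypothetical test; adjust to selectModel's real signature and return type.
import { describe, expect, it } from "vitest";
import { selectModel } from "@/utils/llms/model"; // assumed export name

describe("AI Gateway provider", () => {
  it("defaults to google/gemini-2.5 when no model is specified", () => {
    const selected = selectModel("aigateway", null, "test-api-key");
    // Assumes the selection result exposes the resolved model name.
    expect(selected.modelName).toBe("google/gemini-2.5");
  });
});
```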

Summary

This is a clean, well-implemented addition that maintains code quality standards and follows established patterns. The changes are minimal and focused, making it easy to review and maintain. No blocking issues identified.

Approval Status: ✅ LGTM - Ready for merge after addressing minor suggestions if desired.


@socket-security socket-security bot commented

Review the following changes in direct dependencies.

Package: @ai-sdk/gateway@1.0.5 (added)
Scores: Supply Chain Security 100, Vulnerability 100, Quality 100, Maintenance 100, License 100


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7aa0b51 and 73ec7c6.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (6)
  • apps/web/env.ts (2 hunks)
  • apps/web/package.json (1 hunks)
  • apps/web/utils/llms/config.ts (3 hunks)
  • apps/web/utils/llms/model.ts (3 hunks)
  • turbo.json (1 hunks)
  • version.txt (1 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
!{.cursor/rules/*.mdc}

📄 CodeRabbit Inference Engine (.cursor/rules/cursor-rules.mdc)

Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location

Files:

  • version.txt
  • apps/web/package.json
  • turbo.json
  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
  • apps/web/env.ts
!pages/_document.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

!pages/_document.{js,jsx,ts,tsx}: Don't import next/document outside of pages/_document.jsx in Next.js projects.

Files:

  • version.txt
  • apps/web/package.json
  • turbo.json
  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
  • apps/web/env.ts
turbo.json

📄 CodeRabbit Inference Engine (.cursor/rules/environment-variables.mdc)

When adding a new environment variable, add it to turbo.json under tasks.build.env to ensure it is included in the build process.

Files:

  • turbo.json
apps/web/**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use @/ for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
  • apps/web/env.ts
**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/form-handling.mdc)

**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
  • apps/web/env.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/logging.mdc)

**/*.{ts,tsx}: Use createScopedLogger for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using createScopedLogger
Only use .with() on a logger instance within a specific function, not for a global logger

Import Prisma in the project using import prisma from "@/utils/prisma";

**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either T[] or Array<T> consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
  • apps/web/env.ts
apps/web/utils/**

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Create utility functions in utils/ folder for reusable logic

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
apps/web/utils/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

**/*.{js,jsx,ts,tsx}: Don't use <img> elements in Next.js projects.
Don't use <head> elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
  • apps/web/env.ts
apps/web/utils/{ai,llms}/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

LLM-related code must be organized in the directories: apps/web/utils/ai/, apps/web/utils/llms/, and apps/web/tests/ for LLM-specific tests.

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
apps/web/utils/{ai,llms}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*.ts: Keep system prompts and user prompts separate in LLM-related functions.
System prompt should define the LLM's role and task specifications.
User prompt should contain the actual data and context.
Always define a Zod schema for response validation in LLM-related functions.
Make Zod schemas as specific as possible to guide the LLM output.
Use descriptive scoped loggers for each LLM feature.
Log inputs and outputs with appropriate log levels in LLM-related functions.
Include relevant context in log messages for LLM-related code.
Implement early returns for invalid inputs in LLM-related functions.
Use proper error types and logging in LLM-related code.
Implement fallbacks for AI failures in LLM-related functions.
Add retry logic for transient failures using withRetry in LLM-related functions.
Use XML-like tags to structure data in LLM prompts.
Remove excessive whitespace and truncate long inputs in LLM prompts.
Format data consistently across similar LLM-related functions.
Use TypeScript types for all parameters and return values in LLM-related functions.
Define clear interfaces for complex input/output structures in LLM-related code.
Keep related AI functions in the same file or directory.
Extract common patterns into utility functions in LLM-related code.
Document complex AI logic with clear comments in LLM-related code.

Files:

  • apps/web/utils/llms/config.ts
  • apps/web/utils/llms/model.ts
apps/web/utils/llms/model.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

Model definitions and configurations must be implemented in utils/llms/model.ts.

Files:

  • apps/web/utils/llms/model.ts
apps/web/**/{.env.example,env.ts,turbo.json}

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

Add environment variables to .env.example, env.ts, and turbo.json

Files:

  • apps/web/env.ts
apps/web/**/{.env.example,env.ts}

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

Client-side environment variables: Prefix with NEXT_PUBLIC_

Files:

  • apps/web/env.ts
apps/web/env.ts

📄 CodeRabbit Inference Engine (.cursor/rules/environment-variables.mdc)

apps/web/env.ts: When adding a new environment variable, add it to apps/web/env.ts in the appropriate section: use server for server-only variables, and for client-side variables, use the client section and also add to experimental__runtimeEnv.
Client-side environment variables must be prefixed with NEXT_PUBLIC_ and added to both the client and experimental__runtimeEnv sections in apps/web/env.ts.

Files:

  • apps/web/env.ts
🧠 Learnings (7)
📚 Learning: 2025-07-18T15:04:50.520Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/environment-variables.mdc:0-0
Timestamp: 2025-07-18T15:04:50.520Z
Learning: Applies to turbo.json : When adding a new environment variable, add it to `turbo.json` under `tasks.build.env` to ensure it is included in the build process.

Applied to files:

  • turbo.json
📚 Learning: 2025-07-18T15:04:30.467Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-18T15:04:30.467Z
Learning: Applies to apps/web/**/{.env.example,env.ts,turbo.json} : Add environment variables to `.env.example`, `env.ts`, and `turbo.json`

Applied to files:

  • turbo.json
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.

Applied to files:

  • apps/web/env.ts
📚 Learning: 2025-07-18T15:04:50.520Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/environment-variables.mdc:0-0
Timestamp: 2025-07-18T15:04:50.520Z
Learning: Applies to apps/web/env.ts : When adding a new environment variable, add it to `apps/web/env.ts` in the appropriate section: use `server` for server-only variables, and for client-side variables, use the `client` section and also add to `experimental__runtimeEnv`.

Applied to files:

  • apps/web/env.ts
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Include relevant context in log messages for LLM-related code.

Applied to files:

  • apps/web/env.ts
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use TypeScript types for all parameters and return values in LLM-related functions.

Applied to files:

  • apps/web/env.ts
📚 Learning: 2025-08-10T22:08:49.231Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-10T22:08:49.231Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.

Applied to files:

  • apps/web/env.ts
🧬 Code Graph Analysis (1)
apps/web/utils/llms/model.ts (2)
apps/web/utils/llms/config.ts (2)
  • Provider (7-16)
  • Model (18-36)
apps/web/env.ts (1)
  • env (16-227)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Jit Security
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (8)
version.txt (1)

1-1: LGTM: version bump to v2.4.8

Version increment matches the scope of the changes introduced in this PR.

apps/web/package.json (1)

18-18: Confirmed: @ai-sdk/gateway@1.0.5 is compatible with ai@5.0.0 and all provider packages

According to the official AI SDK 5.0 migration guide and package documentation, the following stack is fully supported:

No further changes needed for version alignment—this dependency addition is safe to merge.

apps/web/env.ts (1)

12-12: LGTM: ‘aigateway’ added to llmProviderEnum

This wires the new provider into all env-backed selections using the enum.

apps/web/utils/llms/config.ts (2)

13-16: LGTM: Provider.AI_GATEWAY registered

Provider mapping and value align with llmProviderEnum ("aigateway").


45-49: LGTM: Provider option added to UI list

"AI Gateway" entry will surface the new provider in selection UIs.

apps/web/utils/llms/model.ts (3)

8-8: AI Gateway provider import — OK

Importing createGateway from @ai-sdk/gateway is correct and aligns with the added provider path.


387-388: getProviderApiKey mapping for AI Gateway — OK

Adds AI_GATEWAY → env.AI_GATEWAY_API_KEY, enabling economy/chat/default selectors to resolve the key consistently.


119-130: AI Gateway model IDs are slash-based—no change needed for the default

Based on the Vercel AI Gateway docs, slash-delimited IDs like "google/gemini-2.5" are fully supported for createGateway. We can continue using Model.GEMINI_2_5_PRO_OPENROUTER (which equals "google/gemini-2.5") as the default without risking mismatches.

However, there are still hard-coded "google/gemini-2.5-pro" strings in this file that should be replaced with the corresponding Model constant to keep things DRY and avoid drift:

• apps/web/utils/llms/model.ts line 324
• apps/web/utils/llms/model.ts line 333

Replace each

"google/gemini-2.5-pro"

with

Model.GEMINI_2_5_PRO

(or whatever Model enum key matches that exact ID).
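Concretely the swap is tiny; the Model.GEMINI_2_5_PRO key name below is illustrative, per the caveat above:

```ts
// apps/web/utils/llms/model.ts (illustrative, around the lines flagged above)
import { Model } from "@/utils/llms/config";

// Before: a raw string that can drift from config.ts
// const modelId = "google/gemini-2.5-pro";
// After: the shared constant (assumed key name)
const modelId = Model.GEMINI_2_5_PRO;
```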

Comment on lines +59 to 60
AI_GATEWAY_API_KEY: z.string().optional(),
OLLAMA_BASE_URL: z.string().optional(),

💡 Verification agent

🧩 Analysis chain

Server env added: AI_GATEWAY_API_KEY — verify .env.example and align OpenRouter defaults

  • Correctly scoped as server-only (not in experimental__runtimeEnv).
  • Please add to apps/web/.env.example too.

Also, cross-check the OpenRouter model defaults: config.ts updated to "google/gemini-2.0-flash" and "google/gemini-2.5", but env.ts still defaults OPENROUTER_BACKUP_MODEL to "google/gemini-2.5-flash" (Line 49), which may be invalid on OpenRouter.

Suggested default (pick one based on your performance/price target):

  • "google/gemini-2.0-flash" for a flash-tier default
  • "google/gemini-2.5" for a pro-tier default

Run these to verify consistency and locate any lingering old/invalid model IDs:

And confirm whether "google/gemini-2.5-flash" is a valid OpenRouter model:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Check OPENROUTER defaults in env.ts:"
rg -n 'OPENROUTER_BACKUP_MODEL|2\.5-flash' apps/web/env.ts -n -A 2 -B 2 || true

echo
echo "Search for old OpenRouter Gemini IDs:"
rg -n -S '"google/gemini-2.0-flash-001"|"google/gemini-2.5-pro-preview-03-25"' -A 2 -B 2 || true

Length of output: 1188


🌐 Web query:

Does OpenRouter expose a model with the exact ID "google/gemini-2.5-flash"? If not, what are the currently valid Google Gemini model IDs on OpenRouter?

💡 Result:

OpenRouter does expose a model with the exact ID "google/gemini-2.5-flash", making it a valid selection for API requests[9]. This model is listed as available, and its ID matches Google's official naming scheme for Gemini models[10].

Currently valid Google Gemini model IDs on OpenRouter include:

  • google/gemini-2.5-pro
  • google/gemini-2.5-flash[9]
  • google/gemini-2.5-flash-lite
  • google/gemini-2.5-flash-lite-preview-06-17
  • google/gemini-2.0-flash
  • google/gemini-1.5-pro
  • google/gemini-1.5-pro-latest
  • google/gemini-1.5-flash
  • google/gemini-1.5-flash-latest
  • google/gemini-1.5-flash-8b
  • google/gemini-1.5-flash-8b-latest[3][6][7]

These represent popular, officially supported model IDs. For the latest and exhaustive list, consult OpenRouter's model directory[7], but the above covers currently supported Gemini IDs. Note that naming may change or expand as Google and OpenRouter introduce model updates, typically following the format google/gemini-<version>-<variant> for stable releases[10].



🔧 Add new env vars to .env.example & align OpenRouter defaults

  • apps/web/.env.example
    • Add

    AI_GATEWAY_API_KEY=  
    OLLAMA_BASE_URL=  
  • apps/web/env.ts (lines 59–60)
    • AI_GATEWAY_API_KEY and OLLAMA_BASE_URL are now declared server-side—mirror them in your .env.example.

  • apps/web/env.ts (line 49) vs. apps/web/config.ts
    OPENROUTER_BACKUP_MODEL defaults to "google/gemini-2.5-flash", which is valid on OpenRouter.
    • In config.ts you’ve set model defaults to "google/gemini-2.0-flash" (flash-tier) and "google/gemini-2.5" (pro-tier).
    • For consistency, pick one default across both files (a sketch of the env.ts change follows):

    • Flash-tier: "google/gemini-2.0-flash"
    • Pro-tier: "google/gemini-2.5"
      Or, if you intend to keep "google/gemini-2.5-flash" as the explicit backup, update the config.ts defaults to match.
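If the flash-tier option is picked, the env.ts default would presumably change like this; the z.string().default(...) shape is an assumption about how the file declares it:

```ts
import { z } from "zod";

// Illustrative: the server section of apps/web/env.ts (line 49 per the note above).
const server = {
  OPENROUTER_BACKUP_MODEL: z.string().default("google/gemini-2.0-flash"),
};
```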
🤖 Prompt for AI Agents
In apps/web/env.ts around lines 59–60 add that AI_GATEWAY_API_KEY and
OLLAMA_BASE_URL are declared server-side and must be added to
apps/web/.env.example (add blank entries for AI_GATEWAY_API_KEY and
OLLAMA_BASE_URL); also reconcile the OPENROUTER_BACKUP_MODEL default mismatch
between apps/web/env.ts (line 49) and apps/web/config.ts by choosing one
consistent default across both files — either set both to the flash-tier
"google/gemini-2.0-flash" and pro-tier "google/gemini-2.5" as needed, or update
config.ts to match "google/gemini-2.5-flash" if that is the intended backup —
make the same string value appear in both places.

Comment on lines +31 to 33
GEMINI_2_0_FLASH_OPENROUTER: "google/gemini-2.0-flash",
GEMINI_2_5_PRO_OPENROUTER: "google/gemini-2.5",
GROQ_LLAMA_3_3_70B: "llama-3.3-70b-versatile",

💡 Verification agent

🧩 Analysis chain

Updated OpenRouter Gemini IDs — verify no stale IDs remain

Model IDs look current. Ensure all old identifiers are removed elsewhere to avoid 404s or misroutes.

Run this to find any stale IDs:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

rg -n -S '"google/gemini-2.0-flash-001"|"google/gemini-2.5-pro-preview-03-25"' -A 2 -B 2 || true

Length of output: 616


Remove stale OpenRouter Gemini IDs in usage.ts

The regex search uncovered obsolete model keys still present in apps/web/utils/usage.ts. Please remove or consolidate these entries to match the updated constants in config.ts.

• File: apps/web/utils/usage.ts
• Lines: 116–120

Suggested diff:

@@ apps/web/utils/usage.ts:114-120
-  "google/gemini-2.0-flash-001": gemini2_5flash,
-  "google/gemini-2.5-flash-preview-05-20": gemini2_5flash,
-  "google/gemini-2.5-pro-preview-03-25": gemini2_5pro,
-  "google/gemini-2.5-pro-preview-06-05": gemini2_5pro,
-  "google/gemini-2.5-pro-preview": gemini2_5pro,
+  "google/gemini-2.0-flash": gemini2_5flash,
+  "google/gemini-2.5":       gemini2_5pro,

Ensure all downstream references use the new IDs from config.ts (GEMINI_2_0_FLASH_OPENROUTER and GEMINI_2_5_PRO_OPENROUTER).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (removed lines, then replacement):

-  GEMINI_2_0_FLASH_OPENROUTER: "google/gemini-2.0-flash",
-  GEMINI_2_5_PRO_OPENROUTER: "google/gemini-2.5",
-  GROQ_LLAMA_3_3_70B: "llama-3.3-70b-versatile",
+  "google/gemini-2.0-flash": gemini2_5flash,
+  "google/gemini-2.5": gemini2_5pro,
🤖 Prompt for AI Agents
In apps/web/utils/usage.ts around lines 116 to 120 there are stale OpenRouter
Gemini model keys; remove those obsolete entries and replace them with the
current constants exported from apps/web/utils/llms/config.ts
(GEMINI_2_0_FLASH_OPENROUTER and GEMINI_2_5_PRO_OPENROUTER), update any
import/usage to reference those constants instead of hardcoded strings, and
consolidate duplicate mappings; also run a quick repo-wide search to update any
other references to the old IDs so all downstream uses point to the new config
constants.

"GOOGLE_API_KEY",
"GROQ_API_KEY",
"OPENROUTER_API_KEY",
"AI_GATEWAY_API_KEY",

💡 Verification agent

🧩 Analysis chain

Added AI_GATEWAY_API_KEY to build env — verify .env.example is updated too

Good addition to ensure the key is available at build time. Please also confirm it’s documented in apps/web/.env.example per guidelines.

Run this to verify presence across the repo and specifically in .env.example:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Search for AI_GATEWAY_API_KEY across the repo:"
rg -n "AI_GATEWAY_API_KEY" -S -A 2 -B 2 || true

echo
echo "Verify it's listed in any .env.example files (should minimally include apps/web/.env.example):"
fd -t f -g ".env.example" | while read -r f; do
  if rg -n "AI_GATEWAY_API_KEY" "$f" > /dev/null; then
    echo "OK in $f"
  else
    echo "MISSING in $f"
  fi
done

Length of output: 1744


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Locating all .env.example files (including hidden):"
# Include hidden files to find .env.example in apps/web
fd --hidden -t f -g ".env.example" | while read -r f; do
  if rg -q "AI_GATEWAY_API_KEY" "$f"; then
    echo "OK in $f"
  else
    echo "MISSING in $f"
  fi
done

Length of output: 635


Missing AI_GATEWAY_API_KEY in .env.example files

The new AI_GATEWAY_API_KEY isn’t documented in any of our .env.example files—per our guidelines we need at least apps/web/.env.example updated. Please add a placeholder entry so developers know to supply this value:

• apps/web/.env.example (required)

AI_GATEWAY_API_KEY=your_api_key_here

• apps/mcp-server/.env.example (optional) – add only if that service uses this key at runtime
• apps/unsubscriber/.env.example (optional) – same as above

🤖 Prompt for AI Agents
In turbo.json around line 41 the new AI_GATEWAY_API_KEY entry is present but the
key is not documented in any .env.example files; update apps/web/.env.example
(required) by adding a placeholder line "AI_GATEWAY_API_KEY=your_api_key_here"
so developers know to supply it, and optionally add the same placeholder to
apps/mcp-server/.env.example and apps/unsubscriber/.env.example only if those
services consume the key at runtime.

@elie222 elie222 merged commit 2d2ffa9 into main Aug 13, 2025
18 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Sep 16, 2025
@coderabbitai coderabbitai bot mentioned this pull request Nov 19, 2025
@coderabbitai coderabbitai bot mentioned this pull request Dec 9, 2025
@elie222 elie222 deleted the feat/ai-gateway branch December 18, 2025 23:01