
Limit reasoning tokens used #789

Merged

elie222 merged 1 commit into main from feat/limit-reasoning
Sep 16, 2025

Conversation

@elie222 elie222 (Owner) commented Sep 16, 2025

Summary by CodeRabbit

  • Refactor

    • Improved AI model routing with ordered provider preferences and a default reasoning limit, delivering more consistent, predictable responses. Behavior remains unchanged when no provider is specified.
  • Chores

    • Bumped version to v2.9.36.

@vercel vercel bot commented Sep 16, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: inbox-zero | Deployment: Building | Preview | Updated (UTC): Sep 16, 2025 11:59am

@coderabbitai coderabbitai bot commented Sep 16, 2025

Walkthrough

Adds a helper to build OpenRouter provider options (ordered providers from comma-separated input and a reasoning max_tokens=20), integrates it into the default model path, and bumps version.txt from v2.9.35 to v2.9.36. No public signatures changed.
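
Based on that description, a minimal sketch of what the helper might look like (hypothetical; the actual code in apps/web/utils/llms/model.ts may differ in naming and typing):

// Hypothetical sketch of the helper described in the walkthrough,
// not the verbatim implementation from the PR.
function createOpenRouterProviderOptions(providers: string) {
  // "Google Vertex, Google AI Studio" -> ["Google Vertex", "Google AI Studio"]
  const order = providers
    .split(",")
    .map((p) => p.trim())
    .filter(Boolean);

  return {
    openrouter: {
      // Only specify an ordered provider list when one was supplied.
      provider: order.length > 0 ? { order } : undefined,
      // Cap reasoning output at 20 tokens on every call.
      reasoning: { max_tokens: 20 },
    },
  };
}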

Changes

  • OpenRouter provider options construction (apps/web/utils/llms/model.ts)
    Adds createOpenRouterProviderOptions(providers) to parse a comma-separated string into provider order; sets openrouter.provider to { order } when non-empty, otherwise undefined; always includes reasoning: { max_tokens: 20 }. Uses this helper for default providerOptions.
  • Version bump (version.txt)
    Updates version from v2.9.35 to v2.9.36.

Sequence Diagram(s)

sequenceDiagram
  actor User
  participant UI as Web UI
  participant MS as ModelSelector
  participant OR as OpenRouter API

  User->>UI: Initiate model usage
  UI->>MS: Request model with provider input (e.g., "a,b,c")
  MS->>MS: createOpenRouterProviderOptions(input)\n- split, trim, filter -> order\n- set provider: {order} or undefined\n- set reasoning: {max_tokens: 20}
  MS->>OR: Call with providerOptions (provider?, reasoning)
  OR-->>MS: Response
  MS-->>UI: Model result
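
As a concrete illustration of the flow above, using the hypothetical sketch from earlier (the provider names are examples only):

// With an ordered provider list supplied:
createOpenRouterProviderOptions("Google Vertex, Google AI Studio");
// => { openrouter: { provider: { order: ["Google Vertex", "Google AI Studio"] },
//                    reasoning: { max_tokens: 20 } } }

// With no providers supplied, provider stays undefined but the
// reasoning limit is still applied:
createOpenRouterProviderOptions("");
// => { openrouter: { provider: undefined, reasoning: { max_tokens: 20 } } }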

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • Custom model routing #503 — Also modifies apps/web/utils/llms/model.ts to adjust OpenRouter providerOptions and model selection logic.

Poem

A bunny taps keys with careful delight,
Sorting providers in tidy, swift light.
Twenty thoughts max, to reason just right,
Hops to v2.9.36—what a sight! 🐇✨
Carrots in order, models take flight.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, which is insufficient. The required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The title "Limit reasoning tokens used" succinctly and accurately reflects the primary intent of this PR — adding a reasoning token limit (max_tokens: 20) to the OpenRouter provider options and adjusting providerOptions initialization. It is concise, focused, and conveys the main change a reviewer would care about.

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).


@elie222 elie222 merged commit 8179d44 into main Sep 16, 2025
10 of 13 checks passed
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/web/utils/llms/model.ts (1)

336-351: Merge OpenRouter defaults instead of overwriting them

Direct assignment to providerOptions.openrouter replaces the base openrouter options (dropping reasoning: { max_tokens: 20 }). Merge the base to preserve reasoning.

File: apps/web/utils/llms/model.ts (lines 336-351)

-        providerOptions.openrouter = {
-          models: [
-            "google/gemini-2.5-pro",
-            // "anthropic/claude-sonnet-4",
-            // "anthropic/claude-3.7-sonnet",
-          ],
-          provider: {
-            // max 3 options
-            order: [
-              "Google Vertex",
-              "Google AI Studio",
-              // "Anthropic",
-              // "Amazon Bedrock",
-            ],
-          },
-        };
+        providerOptions.openrouter = {
+          ...createOpenRouterProviderOptions("").openrouter, // keeps reasoning
+          models: [
+            "google/gemini-2.5-pro",
+            // "anthropic/claude-sonnet-4",
+            // "anthropic/claude-3.7-sonnet",
+          ],
+          provider: {
+            order: [
+              "Google Vertex",
+              "Google AI Studio",
+              // "Anthropic",
+              // "Amazon Bedrock",
+            ],
+          },
+        };
🧹 Nitpick comments (1)
apps/web/utils/llms/model.ts (1)

190-194: Tiny cleanup: dedupe providers and keep code compact.
No behavior change; just guards against accidental repeats.

-  const order = providers
-    .split(",")
-    .map((p: string) => p.trim())
-    .filter(Boolean);
+  const order = Array.from(
+    new Set(
+      providers
+        .split(",")
+        .map((p) => p.trim())
+        .filter(Boolean),
+    ),
+  );
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 053953d and 9830c53.

📒 Files selected for processing (2)
  • apps/web/utils/llms/model.ts (2 hunks)
  • version.txt (1 hunks)
🧰 Additional context used
📓 Path-based instructions (10)
!{.cursor/rules/*.mdc}

📄 CodeRabbit inference engine (.cursor/rules/cursor-rules.mdc)

Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location

Files:

  • version.txt
  • apps/web/utils/llms/model.ts
!pages/_document.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/ultracite.mdc)

!pages/_document.{js,jsx,ts,tsx}: Don't import next/document outside of pages/_document.jsx in Next.js projects.
Don't import next/document outside of pages/_document.jsx in Next.js projects.

Files:

  • version.txt
  • apps/web/utils/llms/model.ts
apps/web/**/*.{ts,tsx}

📄 CodeRabbit inference engine (apps/web/CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use @/ for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX

Files:

  • apps/web/utils/llms/model.ts
**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/form-handling.mdc)

**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod

Files:

  • apps/web/utils/llms/model.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/logging.mdc)

**/*.{ts,tsx}: Use createScopedLogger for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using createScopedLogger
Only use .with() on a logger instance within a specific function, not for a global logger

Import Prisma in the project using import prisma from "@/utils/prisma";

**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either T[] or Array<T> consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.

Files:

  • apps/web/utils/llms/model.ts
apps/web/utils/**

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Create utility functions in utils/ folder for reusable logic

Files:

  • apps/web/utils/llms/model.ts
apps/web/utils/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size

Files:

  • apps/web/utils/llms/model.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/ultracite.mdc)

**/*.{js,jsx,ts,tsx}: Don't use elements in Next.js projects.
Don't use elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...

Files:

  • apps/web/utils/llms/model.ts
apps/web/utils/{ai,llms}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*.ts: Place LLM-related implementation code under apps/web/utils/ai or apps/web/utils/llms
Keep system and user prompts separate; system defines role/task, user contains data/context
Always validate LLM responses with a specific Zod schema
Use descriptive scoped loggers per feature and log inputs/outputs with appropriate levels and context
Implement early returns for invalid inputs and use proper error types with logging
Add fallbacks for AI failures and include retry logic for transient errors using withRetry
Format prompts with XML-like tags; remove excessive whitespace; truncate overly long inputs; keep formatting consistent
Use TypeScript types for all parameters/returns and define interfaces for complex IO structures
Keep related AI functions co-located; extract shared logic into utilities; document complex AI logic with clear comments
Call LLMs via createGenerateObject; pass system, prompt, and a Zod schema; return the validated result.object
Derive model options using getModel(...) and pass them to createGenerateObject and the generate call

Files:

  • apps/web/utils/llms/model.ts
apps/web/utils/llms/model.ts

📄 CodeRabbit inference engine (.cursor/rules/llm.mdc)

Define model configurations and helpers in apps/web/utils/llms/model.ts

Files:

  • apps/web/utils/llms/model.ts
🧠 Learnings (1)
📚 Learning: 2025-08-17T16:57:25.834Z
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-08-17T16:57:25.834Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Derive model options using getModel(...) and pass them to createGenerateObject and the generate call

Applied to files:

  • apps/web/utils/llms/model.ts
🧬 Code graph analysis (1)
apps/web/utils/llms/model.ts (1)
apps/web/utils/llms/config.ts (1)
  • providerOptions (38-49)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: cubic · AI code reviewer
🔇 Additional comments (2)
version.txt (1)

1-1: Version bump looks good.
Matches the code changes scope.

apps/web/utils/llms/model.ts (1)

197-199: Omit empty provider and verify reasoning.max_tokens compatibility

File: apps/web/utils/llms/model.ts (lines 197–199)
Use a conditional spread to avoid emitting provider: undefined. Confirm target model(s) accept OpenRouter reasoning.max_tokens — OpenRouter supports reasoning.max_tokens for Anthropic and Gemini “thinking” models but may map/ignore it for other providers (e.g., OpenAI o-series, Gemini Flash Thinking); verify for models like google/gemini-2.5-pro.

-      provider: order.length > 0 ? { order } : undefined,
-      reasoning: { max_tokens: 20 },
+      ...(order.length > 0 ? { provider: { order } } : {}),
+      reasoning: { max_tokens: 20 },

Comment on lines +289 to 291

  const providerOptions: Record<string, any> =
    createOpenRouterProviderOptions("");

⚠️ Potential issue

Regression: DEFAULT_OPENROUTER_PROVIDERS never applied.
Pre‑initializing providerOptions.openrouter prevents the later merge (guarded by !providerOptions.openrouter). Result: default provider ordering from env is ignored.

Minimal fix: initialize as empty, and always layer OpenRouter options (ensuring reasoning is present) only when aiProvider is OPENROUTER.

-  const providerOptions: Record<string, any> =
-    createOpenRouterProviderOptions("");
+  const providerOptions: Record<string, any> = {};
-  if (
-    aiProvider === Provider.OPENROUTER &&
-    env.DEFAULT_OPENROUTER_PROVIDERS &&
-    !providerOptions.openrouter
-  ) {
-    const openRouterOptions = createOpenRouterProviderOptions(
-      env.DEFAULT_OPENROUTER_PROVIDERS,
-    );
-    Object.assign(providerOptions, openRouterOptions);
-  }
+  if (aiProvider === Provider.OPENROUTER) {
+    const openRouterOptions = createOpenRouterProviderOptions(
+      env.DEFAULT_OPENROUTER_PROVIDERS || "",
+    );
+    // Preserve any custom options set earlier; always ensure reasoning exists.
+    providerOptions.openrouter = {
+      ...openRouterOptions.openrouter,          // reasoning + env order
+      ...(providerOptions.openrouter || {}),    // custom models/provider override env if present
+      reasoning: {
+        ...openRouterOptions.openrouter.reasoning,
+        ...(providerOptions.openrouter?.reasoning ?? {}),
+      },
+    };
+  }

Also applies to: 365-373

🤖 Prompt for AI Agents
In apps/web/utils/llms/model.ts around lines 289-291 (and similarly lines
365-373), providerOptions is pre-initialized with
createOpenRouterProviderOptions(""), which prevents the later merge guarded by
!providerOptions.openrouter and thus stops DEFAULT_OPENROUTER_PROVIDERS from
being applied; change to initialize providerOptions as an empty object (e.g.,
{}) and only add/merge OpenRouter options when aiProvider === OPENROUTER
(ensuring to include the required reasoning flag), so default ordering from env
can be merged correctly.

@cubic-dev-ai cubic-dev-ai bot left a comment

1 issue found across 2 files

Prompt for AI agents (all 1 issues)

Understand the root cause of the following 1 issues and fix them.


<file name="apps/web/utils/llms/model.ts">

<violation number="1" location="apps/web/utils/llms/model.ts:290">
Initializing providerOptions with createOpenRouterProviderOptions("") sets openrouter and prevents DEFAULT_OPENROUTER_PROVIDERS override from applying.</violation>
</file>



-  const providerOptions: Record<string, any> = {};
+  const providerOptions: Record<string, any> =
+    createOpenRouterProviderOptions("");
@cubic-dev-ai cubic-dev-ai bot Sep 16, 2025

Initializing providerOptions with createOpenRouterProviderOptions("") sets openrouter and prevents DEFAULT_OPENROUTER_PROVIDERS override from applying.

Prompt for AI agents
Address the following comment on apps/web/utils/llms/model.ts at line 290:

<comment>Initializing providerOptions with createOpenRouterProviderOptions("") sets openrouter and prevents DEFAULT_OPENROUTER_PROVIDERS override from applying.</comment>

<file context>
@@ -282,7 +286,8 @@ function selectDefaultModel(userAi: UserAIFields): SelectModel {
 
-  const providerOptions: Record<string, any> = {};
+  const providerOptions: Record<string, any> =
+    createOpenRouterProviderOptions("");
 
   // If user has not api key set, then use default model
</file context>
Suggested change:

-    createOpenRouterProviderOptions("");
+    {};

@coderabbitai coderabbitai bot mentioned this pull request Sep 16, 2025
@elie222 elie222 deleted the feat/limit-reasoning branch December 18, 2025 23:03