
Improved LLM calls #660

Merged
elie222 merged 17 commits into main from feat/ai-fixes on Aug 7, 2025

Conversation


elie222 (Owner) commented Aug 7, 2025

Summary by CodeRabbit

  • New Features

    • Enhanced AI functionality with a new function-calling style, improving AI tool response handling.
    • Added new factory functions for generating AI text and objects with integrated error handling.
    • Introduced backup AI model support to improve reliability when primary models are unavailable.
  • Bug Fixes

    • Corrected snippet data access in rule generation to fix prompt creation.
    • Ensured appended rule prompts start on a new line to improve editor formatting.
  • Chores

    • Updated version number to v2.1.4.
    • Standardized AI model selection with clearer naming conventions and unified return types.
    • Refactored AI calls across multiple utilities to use unified generation functions, removing manual usage tracking and logging for streamlined performance.
    • Updated environment variables to support new backup model configuration and removed deprecated keys.
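The backup-model support mentioned above can be sketched as a simple primary/fallback wrapper. This is a hypothetical illustration: the `Model` shape and `generateWithBackup` name are assumptions, not the project's actual API.

```typescript
// Hypothetical sketch of primary/backup model fallback. The Model shape and
// function name are illustrative assumptions, not the repository's API.
type Model = { name: string; generate: (prompt: string) => Promise<string> };

async function generateWithBackup(
  primary: Model,
  backup: Model,
  prompt: string,
): Promise<string> {
  try {
    return await primary.generate(prompt);
  } catch {
    // Primary model unavailable (rate limit, outage): retry on the backup.
    return backup.generate(prompt);
  }
}

const primary: Model = {
  name: "primary",
  generate: async () => {
    throw new Error("unavailable");
  },
};
const backup: Model = {
  name: "backup",
  generate: async (p) => `backup says: ${p}`,
};

generateWithBackup(primary, backup, "hello").then((out) => console.log(out));
// "backup says: hello"
```

A real implementation would likely also log which model served the request, but the control flow is just a try/catch around the primary call.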


vercel bot commented Aug 7, 2025

The latest updates on your projects:

Name       | Status  | Preview       | Updated (UTC)
inbox-zero | ✅ Ready | Visit Preview | Aug 7, 2025 9:20am


coderabbitai bot commented Aug 7, 2025

Warning

Rate limit exceeded

@elie222 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 7 minutes and 46 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 46d76c6 and f99b072.

📒 Files selected for processing (6)
  • apps/web/utils/ai/choose-rule/ai-choose-args.ts (3 hunks)
  • apps/web/utils/ai/rule/generate-rules-prompt.ts (3 hunks)
  • apps/web/utils/llms/index.ts (5 hunks)
  • apps/web/utils/llms/model.test.ts (15 hunks)
  • apps/web/utils/llms/model.ts (9 hunks)
  • version.txt (1 hunks)

Walkthrough

This change refactors AI invocation patterns across multiple utilities to use new factory functions createGenerateText and createGenerateObject that encapsulate model options, user context, and error handling. It removes explicit usage tracking and logging, updates schema handling to a tool-calling style API, corrects snippet access, standardizes model selection return types and naming, and increments the application version.
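The factory pattern the walkthrough describes can be sketched minimally. All names and shapes below are illustrative assumptions; the real `createGenerateText` wraps the AI SDK's `generateText` and persists usage to the database rather than logging it.

```typescript
// Minimal sketch of the createGenerateText factory pattern (shapes and names
// are assumptions for illustration, not the repository's implementation).
type Usage = { inputTokens: number; outputTokens: number };
type GenResult = { text: string; usage: Usage };

// Stand-in for the AI SDK call; the real one invokes an LLM.
async function generateText(opts: { prompt: string }): Promise<GenResult> {
  return {
    text: `echo: ${opts.prompt}`,
    usage: { inputTokens: 3, outputTokens: 3 },
  };
}

// The factory captures user context once, so every call gets usage tracking
// and error handling without each call site repeating that boilerplate.
function createGenerateText(ctx: { userEmail: string; label: string }) {
  return async (opts: { prompt: string }): Promise<GenResult> => {
    try {
      const result = await generateText(opts);
      // The real code saves usage asynchronously for billing/monitoring.
      console.log(`[usage] ${ctx.userEmail} / ${ctx.label}`, result.usage);
      return result;
    } catch (error) {
      console.error(`[error] ${ctx.label}`, error);
      throw error;
    }
  };
}

const gen = createGenerateText({
  userEmail: "user@example.com",
  label: "choose-args",
});
gen({ prompt: "hi" }).then((r) => console.log(r.text)); // "echo: hi"
```

The payoff is at the call sites: each utility that previously did its own logging and usage saving now just calls the configured generator.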

Changes

Cohort / File(s) Change Summary
AI Argument Selection Refactor
apps/web/utils/ai/choose-rule/ai-choose-args.ts
Refactored to use createGenerateText for AI calls, adopting a function-calling style with tool schema, updated response handling, and new imports.
LLM Generation Factories
apps/web/utils/llms/index.ts
Added createGenerateText and createGenerateObject factory functions wrapping generation with usage tracking and error handling; removed chatCompletion and chatCompletionObject.
AI Reply Generation Update
apps/web/utils/ai/reply/generate-nudge.ts
Replaced direct chatCompletion calls with createGenerateText factory usage, removing logger traces and adapting to new generation pattern.
Rules Prompt UI Fix
apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
Prepends a newline before appending generated rules prompt text to ensure proper formatting and handles undefined safely.
Rules Prompt Generation Fix
apps/web/utils/actions/ai-rule.ts
Corrected snippet array access by referencing snippetsResult.snippets instead of snippetsResult directly.
Rules Prompt Generation Refactor
apps/web/utils/ai/rule/generate-rules-prompt.ts
Changed AI call to use createGenerateText with tool schema and new response extraction; updated parsing and output formatting; return type now allows undefined.
AI Snippets Generation Refactor
apps/web/utils/ai/snippets/find-snippets.ts
Refactored to use createGenerateObject with updated schema expecting a snippets array; removed logging and usage saving; updated imports accordingly.
Model Selection Standardization
apps/web/utils/llms/model.ts
Introduced SelectModel type; standardized naming from model to modelName and llmModel to model; enabled previously commented chat case; updated all selection functions.
AI Categorization Refactors
apps/web/utils/ai/categorize-sender/ai-categorize-senders.ts, apps/web/utils/ai/categorize-sender/ai-categorize-single-sender.ts
Replaced direct generateObject calls with createGenerateObject factory; removed logging and usage saving; simplified model options passing and schema handling.
AI Rule Utilities Refactor
apps/web/utils/ai/choose-rule/ai-choose-rule.ts, apps/web/utils/ai/choose-rule/ai-detect-recurring-pattern.ts, apps/web/utils/ai/rule/create-rule.ts, apps/web/utils/ai/rule/diff-rules.ts, apps/web/utils/ai/rule/find-existing-rules.ts, apps/web/utils/ai/rule/generate-prompt-on-delete-rule.ts, apps/web/utils/ai/rule/generate-prompt-on-update-rule.ts
Refactored AI calls to use createGenerateText or createGenerateObject with tool schemas; removed explicit usage saving and logging; updated schema definitions and response parsing to new tool call style.
AI Clean and Knowledge Extraction Refactors
apps/web/utils/ai/clean/ai-clean.ts, apps/web/utils/ai/clean/ai-clean-select-labels.ts, apps/web/utils/ai/knowledge/extract.ts, apps/web/utils/ai/knowledge/extract-from-email-history.ts, apps/web/utils/ai/knowledge/writing-style.ts
Replaced direct generateObject calls with createGenerateObject factory; removed usage saving and logging; renamed constants for clarity; simplified model options handling.
AI Reply and Draft Refactors
apps/web/utils/ai/reply/check-if-needs-reply.ts, apps/web/utils/ai/reply/draft-with-knowledge.ts
Refactored AI calls to use createGenerateObject; moved schema inline; removed logging and usage tracking; simplified model options passing.
AI Digest Summarization Refactor
apps/web/utils/ai/digest/summarize-email-for-digest.ts
Refactored to use createGenerateObject with model options; removed usage saving and detailed logging; retained conditional info logs.
Cold Email Detection Refactor
apps/web/utils/cold-email/is-cold-email.ts
Replaced direct generateObject calls with createGenerateObject factory; removed usage saving and logging; simplified model options handling.
Version Bump
version.txt
Updated version from v2.1.3 to v2.1.4.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant aiGenerateArgs
    participant createGenerateText
    participant generateText
    participant AI
    participant UsageTracker

    User->>aiGenerateArgs: Call with params
    aiGenerateArgs->>createGenerateText: Create generateText with user/model options
    aiGenerateArgs->>generateText: Call with prompt, tools, model options
    generateText->>AI: Generate text with tool schema
    AI-->>generateText: Return toolCalls with input
    generateText-->>createGenerateText: Return response with usage
    createGenerateText->>UsageTracker: (async) Save usage details
    createGenerateText-->>aiGenerateArgs: Return AI response
    aiGenerateArgs-->>User: Return toolCall.input as result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes


Poem

A rabbit hopped through lines anew,
With tools and wrappers, clever too.
AI now calls with schema clear,
Tracks its usage, far and near.
Version hops up, code refined—
More carrot bytes for peace of mind! 🥕✨


claude bot commented Aug 7, 2025

Claude finished @elie222's task.


PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management
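The response-processing change in item 2 can be shown standalone. The object shapes below are mocked for illustration; the real values come from the AI SDK's `generateText` response.

```typescript
// Mocked shapes illustrating the switch from aiResponse.object to
// aiResponse.toolCalls[0].input (the real types come from the AI SDK).
type ToolCall = { toolName: string; input: Record<string, unknown> };
type AiResponse = { toolCalls: ToolCall[] };

function extractArgs(
  aiResponse: AiResponse,
): Record<string, unknown> | undefined {
  const toolCall = aiResponse.toolCalls[0];
  // The silent-return path flagged later in this review: no tool call means
  // the caller receives undefined with no log entry.
  if (!toolCall?.toolName) return;
  return toolCall.input;
}

console.log(
  extractArgs({
    toolCalls: [{ toolName: "apply_rule", input: { subject: "Hi" } }],
  }),
); // logs the input object
console.log(extractArgs({ toolCalls: [] })); // logs undefined
```

Note that optional chaining makes the empty-array case safe at runtime, which is why the bounds-check concern below is about observability (logging) more than about crashes.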

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved
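The retry strategy noted above (3 retries, 1s delay) can be sketched as a generic helper. This `withRetry` is illustrative only; the repository's actual helper may differ in signature and backoff behavior.

```typescript
// Hedged sketch of a retry helper matching the described strategy
// (3 retries, fixed 1s delay). Illustrative, not the project's code.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  delayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  // One initial attempt plus `retries` additional attempts.
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// Demo: fails once, then succeeds on the second attempt.
let attempts = 0;
withRetry(
  async () => {
    attempts += 1;
    if (attempts < 2) throw new Error("transient failure");
    return "ok";
  },
  3,
  10,
).then((result) => console.log(result)); // "ok"
```

A fixed delay is the simplest choice; exponential backoff is a common alternative when the failure is rate limiting.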

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
apps/web/utils/ai/choose-rule/ai-choose-args.ts (1)

91-103: Improve tool description for better AI guidance

The tool description "Apply the rule with the given arguments." is too generic and doesn't provide sufficient context for the AI model.

  tools: {
    apply_rule: {
-     description: "Apply the rule with the given arguments.",
+     description: "Generate the required arguments to apply the selected email rule. Fill in all template variables with appropriate content based on the email context and rule instructions.",
      inputSchema: z.object(
        Object.fromEntries(
          parameters.map((p) => [
            `${p.type}-${p.actionId}`,
            p.parameters,
          ]),
        ),
      ),
    },
  },
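The dynamic `inputSchema` construction in the diff above can be illustrated standalone. Zod is omitted here; plain string descriptors stand in for the `p.parameters` schemas, and the sample data is hypothetical.

```typescript
// How the tool's dynamic schema keys are assembled: one `${type}-${actionId}`
// entry per action parameter. Sample data is hypothetical; in the real code
// each value is a Zod schema rather than a plain descriptor.
const parameters = [
  { type: "label", actionId: "a1", parameters: { name: "string" } },
  { type: "draft", actionId: "a2", parameters: { content: "string" } },
];

const inputSchemaShape = Object.fromEntries(
  parameters.map((p) => [`${p.type}-${p.actionId}`, p.parameters]),
);

console.log(Object.keys(inputSchemaShape)); // [ 'label-a1', 'draft-a2' ]
```

Keying by `type-actionId` keeps the arguments for each action distinct even when two actions share a type.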
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 66e58dc and f5a667e.

📒 Files selected for processing (3)
  • apps/web/utils/ai/choose-rule/ai-choose-args.ts (3 hunks)
  • apps/web/utils/llms/index.ts (1 hunks)
  • version.txt (1 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
!{.cursor/rules/*.mdc}

📄 CodeRabbit Inference Engine (.cursor/rules/cursor-rules.mdc)

Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location

Files:

  • version.txt
  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
!pages/_document.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

!pages/_document.{js,jsx,ts,tsx}: Don't import next/document outside of pages/_document.jsx in Next.js projects.

Files:

  • version.txt
  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
apps/web/**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use @/ for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/form-handling.mdc)

**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/{ai,llms}/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*: LLM-related code must be organized in the directories: apps/web/utils/ai/, apps/web/utils/llms/, and apps/web/tests/ for LLM-specific tests.
Keep related AI functions in the same file or directory.

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/{ai,llms}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*.ts: Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.
Keep system prompts and user prompts separate in LLM-related code.
System prompt should define the LLM's role and task specifications.
User prompt should contain the actual data and context.
Always define a Zod schema for response validation in LLM-related functions.
Make Zod schemas as specific as possible to guide the LLM output.
Use descriptive scoped loggers for each LLM feature.
Log inputs and outputs with appropriate log levels in LLM-related functions.
Include relevant context in log messages for LLM-related code.
Implement early returns for invalid inputs in LLM-related functions.
Use proper error types and logging in LLM-related code.
Implement fallbacks for AI failures in LLM-related functions.
Add retry logic for transient failures using withRetry in LLM-related code.
Use XML-like tags to structure data in LLM prompts.
Remove excessive whitespace and truncate long inputs in LLM prompts.
Format data consistently across similar LLM-related functions.
Use TypeScript types for all parameters and return values in LLM-related code.
Define clear interfaces for complex input/output structures in LLM-related code.
Extract common patterns into utility functions in LLM-related code.
Document complex AI logic with clear comments in LLM-related code.

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/logging.mdc)

**/*.{ts,tsx}: Use createScopedLogger for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using createScopedLogger
Only use .with() on a logger instance within a specific function, not for a global logger

Import Prisma in the project using import prisma from "@/utils/prisma";

**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either T[] or Array consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/**

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Create utility functions in utils/ folder for reusable logic

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

**/*.{js,jsx,ts,tsx}: Don't use elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...

Files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/llms/index.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

The core LLM functionality must be implemented in utils/llms/index.ts.

Files:

  • apps/web/utils/llms/index.ts
🧠 Learnings (18)
📓 Common learnings
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/* : Keep related AI functions in the same file or directory.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : follow the standard structure for llm-related function...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : make zod schemas as specific as possible to guide the ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Make Zod schemas as specific as possible to guide the LLM output.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : implement fallbacks for ai failures in llm-related fun...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : document complex ai logic with clear comments in llm-r...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use typescript types for all parameters and return val...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use TypeScript types for all parameters and return values in LLM-related code.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : extract common patterns into utility functions in llm-...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Extract common patterns into utility functions in LLM-related code.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : define clear interfaces for complex input/output struc...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Define clear interfaces for complex input/output structures in LLM-related code.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : format data consistently across similar llm-related fu...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Format data consistently across similar LLM-related functions.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : user prompt should contain the actual data and context...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : User prompt should contain the actual data and context.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : system prompt should define the llm's role and task sp...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : System prompt should define the LLM's role and task specifications.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : always define a zod schema for response validation in ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Always define a Zod schema for response validation in LLM-related functions.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use proper error types and logging in llm-related code...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use proper error types and logging in LLM-related code.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/__tests__/**/*.test.ts : test both ai and non-ai paths in llm-related tests...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-07-18T15:06:10.570Z
Learning: Applies to apps/web/__tests__/**/*.test.ts : Test both AI and non-AI paths in LLM-related tests

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : keep system prompts and user prompts separate in llm-r...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Keep system prompts and user prompts separate in LLM-related code.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use xml-like tags to structure data in llm prompts....
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use XML-like tags to structure data in LLM prompts.

Applied to files:

  • apps/web/utils/ai/choose-rule/ai-choose-args.ts
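The XML-tag convention above can be sketched as follows. This is a minimal illustration; the helper name and field names are hypothetical, not taken from the codebase:

```typescript
// Hypothetical sketch of structuring prompt data with XML-like tags,
// as the learning above recommends. All names here are illustrative only.
function buildUserPrompt(email: {
  from: string;
  subject: string;
  content: string;
}): string {
  // Wrap each piece of data in a named tag so the LLM can reliably
  // distinguish fields from instructions.
  return [
    "<email>",
    `<from>${email.from}</from>`,
    `<subject>${email.subject}</subject>`,
    `<content>${email.content}</content>`,
    "</email>",
  ].join("\n");
}

const prompt = buildUserPrompt({
  from: "alice@example.com",
  subject: "Invoice",
  content: "Please find the invoice attached.",
});
console.log(prompt);
```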
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : include relevant context in log messages for llm-relat...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Include relevant context in log messages for LLM-related code.

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/__tests__/**/*.test.ts : always create helper functions for common test data in ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-07-18T15:06:10.570Z
Learning: Applies to apps/web/__tests__/**/*.test.ts : Always create helper functions for common test data in LLM-related tests

Applied to files:

  • apps/web/utils/llms/index.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (1)
version.txt (1)

1-1: Version bump looks good

Appropriate version increment for the API changes introduced.

Comment on lines 112 to 116

🛠️ Refactor suggestion

Add logging and validation for tool call response

The function silently returns undefined when no tool is called, and doesn't validate the structure of toolCall.input.

  const toolCall = aiResponse.toolCalls[0];

-  if (!toolCall?.toolName) return;
+  if (!toolCall?.toolName) {
+    logger.warn("No tool call in AI response", loggerOptions);
+    return;
+  }
+
+  if (!toolCall.input) {
+    logger.error("Tool call missing input", { ...loggerOptions, toolCall });
+    throw new Error("AI tool call did not provide required input");
+  }

  const result = toolCall.input;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const toolCall = aiResponse.toolCalls[0];
if (!toolCall?.toolName) return;
const result = toolCall.input;
const toolCall = aiResponse.toolCalls[0];
if (!toolCall?.toolName) {
logger.warn("No tool call in AI response", loggerOptions);
return;
}
if (!toolCall.input) {
logger.error("Tool call missing input", { ...loggerOptions, toolCall });
throw new Error("AI tool call did not provide required input");
}
const result = toolCall.input;
🤖 Prompt for AI Agents
In apps/web/utils/ai/choose-rule/ai-choose-args.ts around lines 112 to 116, the
code silently returns undefined if no tool call or toolName exists and does not
validate the structure of toolCall.input. Add logging to record when toolCall or
toolName is missing to aid debugging. Also, validate that toolCall.input exists
and has the expected structure before using it, returning early or throwing an
error if validation fails.

Comment on lines 44 to 104
export function createGenerateText({
userEmail,
label,
modelOptions,
}: {
userEmail: string;
label: string;
modelOptions: ReturnType<typeof getModel>;
}): typeof generateText {
return async (...args) => {
try {
const result = await generateText(...args);

if (result.usage) {
await saveAiUsage({
email: userEmail,
usage: result.usage,
provider: modelOptions.provider,
model: modelOptions.model,
label,
});
}

return result;
} catch (error) {
await handleError(error, userEmail);
throw error;
}
};
}

export function createGenerateObject({
userEmail,
label,
modelOptions,
}: {
userEmail: string;
label: string;
modelOptions: ReturnType<typeof getModel>;
}): typeof generateObject {
return async (...args) => {
try {
const result = await generateObject(...args);

if (result.usage) {
await saveAiUsage({
email: userEmail,
usage: result.usage,
provider: modelOptions.provider,
model: modelOptions.model,
label,
});
}

return result;
} catch (error) {
await handleError(error, userEmail);
throw error;
}
};
}

🛠️ Refactor suggestion

Extract common pattern to reduce duplication

Both createGenerateText and createGenerateObject follow an identical pattern. This violates the DRY principle and makes maintenance harder.

Consider extracting a common factory function:

function createGenerateWrapper<T extends typeof generateText | typeof generateObject>(
  generateFn: T,
  {
    userEmail,
    label,
    modelOptions,
  }: {
    userEmail: string;
    label: string;
    modelOptions: ReturnType<typeof getModel>;
  }
): T {
  // Validate inputs early
  if (!userEmail || !label || !modelOptions) {
    throw new Error(`Missing required parameters for ${generateFn.name} wrapper`);
  }

  const scopedLogger = logger.with({ userEmail, label, fn: generateFn.name });

  return (async (...args) => {
    scopedLogger.trace("Generating with AI", { args });
    try {
      const result = await generateFn(...args);

      if (result.usage) {
        await saveAiUsage({
          email: userEmail,
          usage: result.usage,
          provider: modelOptions.provider,
          model: modelOptions.model,
          label,
        });
      }

      scopedLogger.trace("AI generation complete", { usage: result.usage });
      return result;
    } catch (error) {
      scopedLogger.error("AI generation failed", { error });
      await handleError(error, userEmail);
      throw error;
    }
  }) as T;
}

export const createGenerateText = (options: Parameters<typeof createGenerateWrapper>[1]) =>
  createGenerateWrapper(generateText, options);

export const createGenerateObject = (options: Parameters<typeof createGenerateWrapper>[1]) =>
  createGenerateWrapper(generateObject, options);
🤖 Prompt for AI Agents
In apps/web/utils/llms/index.ts from lines 44 to 104, the functions
createGenerateText and createGenerateObject share nearly identical logic,
causing code duplication. Refactor by extracting a common factory function that
accepts the generate function (generateText or generateObject) and the options
object, then returns the wrapped async function with shared error handling,
usage saving, and logging. Replace the original functions with calls to this new
factory to adhere to the DRY principle and improve maintainability.

Comment on lines 44 to 73
export function createGenerateText({
userEmail,
label,
modelOptions,
}: {
userEmail: string;
label: string;
modelOptions: ReturnType<typeof getModel>;
}): typeof generateText {
return async (...args) => {
try {
const result = await generateText(...args);

if (result.usage) {
await saveAiUsage({
email: userEmail,
usage: result.usage,
provider: modelOptions.provider,
model: modelOptions.model,
label,
});
}

return result;
} catch (error) {
await handleError(error, userEmail);
throw error;
}
};
}

🛠️ Refactor suggestion

Add logging and input validation per LLM coding standards

According to the retrieved learnings, LLM-related functions should follow the standard structure including scoped logging, input validation, and logging of inputs/outputs.

 export function createGenerateText({
   userEmail,
   label,
   modelOptions,
 }: {
   userEmail: string;
   label: string;
   modelOptions: ReturnType<typeof getModel>;
 }): typeof generateText {
+  // Validate inputs early
+  if (!userEmail || !label || !modelOptions) {
+    throw new Error("Missing required parameters for createGenerateText");
+  }
+
+  const scopedLogger = logger.with({ userEmail, label });
+
   return async (...args) => {
+    scopedLogger.trace("Generating text", { args });
     try {
       const result = await generateText(...args);

       if (result.usage) {
         await saveAiUsage({
           email: userEmail,
           usage: result.usage,
           provider: modelOptions.provider,
           model: modelOptions.model,
           label,
         });
       }

+      scopedLogger.trace("Text generation complete", { usage: result.usage });
       return result;
     } catch (error) {
+      scopedLogger.error("Text generation failed", { error });
       await handleError(error, userEmail);
       throw error;
     }
   };
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/web/utils/llms/index.ts between lines 44 and 73, the createGenerateText
function lacks scoped logging and input validation as per LLM coding standards.
Add input validation at the start of the returned async function to check the
validity of inputs before calling generateText. Implement scoped logging to log
the function invocation and its inputs, as well as the outputs or errors. Ensure
that all logs are appropriately scoped to this function for better traceability.

@claude
Copy link

claude bot commented Aug 7, 2025

Claude finished @elie222's task —— View job



PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.
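The bounds-checking suggestion above can be made concrete with a small self-contained sketch. The `ToolCall`/`AiResponse` shapes here are simplified stand-ins for the real AI SDK types, and the logger is a minimal stub — this is an illustration of the guard pattern, not the project's actual implementation:

```typescript
// Simplified stand-ins for the AI SDK response types.
type ToolCall = { toolName: string; input: unknown };
type AiResponse = { toolCalls: ToolCall[] };

// Minimal logger stub in place of the project's scoped logger.
const logger = {
  warn: (msg: string, meta?: unknown) => console.warn(msg, meta ?? ""),
};

function extractToolCallInput(aiResponse: AiResponse): unknown {
  const toolCalls = aiResponse.toolCalls;
  // Guard against a missing or empty array before indexing into it,
  // and log so the failure is not silent.
  if (!toolCalls || toolCalls.length === 0) {
    logger.warn("No tool calls received from AI");
    return undefined;
  }
  const toolCall = toolCalls[0];
  if (!toolCall.toolName) {
    logger.warn("Tool call missing toolName", { toolCall });
    return undefined;
  }
  return toolCall.input;
}
```

Returning `undefined` with a logged warning keeps the caller's control flow unchanged while making the empty-tool-call case observable.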

@claude
Copy link

claude bot commented Aug 7, 2025

Claude finished @elie222's task —— View job


PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.


@claude
Copy link

claude bot commented Aug 7, 2025

Claude finished @elie222's task —— View job


PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.


@claude
Copy link

claude bot commented Aug 7, 2025

Claude finished @elie222's task —— View job


PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.


@claude
Copy link

claude bot commented Aug 7, 2025

Claude finished @elie222's task —— View job


PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.


@claude
Copy link

claude bot commented Aug 7, 2025

Claude finished @elie222's task —— View job


PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.
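
As an illustration of suggestion 3, JSDoc on such a factory might look like this. The signature is a hypothetical simplification of `createGenerateText`, not the actual one:

```typescript
type GenerateFn = (prompt: string) => Promise<string>;

/**
 * Wraps a text-generation function with per-call user context.
 *
 * @param generate - The underlying generation function to delegate to.
 * @param userEmail - Email used for usage tracking and audit trails.
 * @returns An async function that generates text and tags each call
 *          with the user it was made for.
 */
function createGenerateText(generate: GenerateFn, userEmail: string) {
  return async (prompt: string): Promise<{ user: string; text: string }> => {
    const text = await generate(prompt);
    return { user: userEmail, text };
  };
}
```

Even a short doc comment like this surfaces the injected context in editor tooltips, which is the main DX win.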

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (1)
apps/web/utils/llms/model.ts (1)

31-36: Fix logging property name

The logger still references data.model but should use data.modelName based on the new naming convention.

  logger.info("Using model", {
    modelType,
    provider: data.provider,
-   model: data.model,
+   modelName: data.modelName,
    providerOptions: data.providerOptions,
  });
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 77e0be7 and 256bd61.

📒 Files selected for processing (6)
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx (1 hunks)
  • apps/web/utils/actions/ai-rule.ts (1 hunks)
  • apps/web/utils/ai/rule/generate-rules-prompt.ts (3 hunks)
  • apps/web/utils/ai/snippets/find-snippets.ts (2 hunks)
  • apps/web/utils/llms/index.ts (8 hunks)
  • apps/web/utils/llms/model.ts (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/web/utils/llms/index.ts
🧰 Additional context used
📓 Path-based instructions (22)
apps/web/**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use @/ for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
!{.cursor/rules/*.mdc}

📄 CodeRabbit Inference Engine (.cursor/rules/cursor-rules.mdc)

Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/form-handling.mdc)

**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
apps/web/utils/{ai,llms}/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*: LLM-related code must be organized in the directories: apps/web/utils/ai/, apps/web/utils/llms/, and apps/web/tests/ for LLM-specific tests.
Keep related AI functions in the same file or directory.

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/llms/model.ts
apps/web/utils/{ai,llms}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*.ts: Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.
Keep system prompts and user prompts separate in LLM-related code.
System prompt should define the LLM's role and task specifications.
User prompt should contain the actual data and context.
Always define a Zod schema for response validation in LLM-related functions.
Make Zod schemas as specific as possible to guide the LLM output.
Use descriptive scoped loggers for each LLM feature.
Log inputs and outputs with appropriate log levels in LLM-related functions.
Include relevant context in log messages for LLM-related code.
Implement early returns for invalid inputs in LLM-related functions.
Use proper error types and logging in LLM-related code.
Implement fallbacks for AI failures in LLM-related functions.
Add retry logic for transient failures using withRetry in LLM-related code.
Use XML-like tags to structure data in LLM prompts.
Remove excessive whitespace and truncate long inputs in LLM prompts.
Format data consistently across similar LLM-related functions.
Use TypeScript types for all parameters and return values in LLM-related code.
Define clear interfaces for complex input/output structures in LLM-related code.
Extract common patterns into utility functions in LLM-related code.
Document complex AI logic with clear comments in LLM-related code.

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/llms/model.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/logging.mdc)

**/*.{ts,tsx}: Use createScopedLogger for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using createScopedLogger
Only use .with() on a logger instance within a specific function, not for a global logger

Import Prisma in the project using import prisma from "@/utils/prisma";

**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either T[] or Array&lt;T&gt; consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
apps/web/utils/**

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Create utility functions in utils/ folder for reusable logic

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
apps/web/utils/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

**/*.{js,jsx,ts,tsx}: Don't use &lt;img&gt; elements in Next.js projects.
Don't use &lt;head&gt; elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
!pages/_document.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

!pages/_document.{js,jsx,ts,tsx}: Don't import next/document outside of pages/_document.jsx in Next.js projects.

Files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
apps/web/app/**

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

NextJS app router structure with (app) directory

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
apps/web/**/*.tsx

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

apps/web/**/*.tsx: Follow tailwindcss patterns with prettier-plugin-tailwindcss
Prefer functional components with hooks
Use shadcn/ui components when available
Ensure responsive design with mobile-first approach
Follow consistent naming conventions (PascalCase for components)
Use LoadingContent component for async data
Use result?.serverError with toastError and toastSuccess
Use LoadingContent component to handle loading and error states consistently
Pass loading, error, and children props to LoadingContent

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
**/*.tsx

📄 CodeRabbit Inference Engine (.cursor/rules/form-handling.mdc)

**/*.tsx: Use React Hook Form with Zod for validation
Validate form inputs before submission
Show validation errors inline next to form fields

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
apps/web/app/(app)/*/**

📄 CodeRabbit Inference Engine (.cursor/rules/page-structure.mdc)

Components for the page are either put in page.tsx, or in the apps/web/app/(app)/PAGE_NAME folder

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
apps/web/app/(app)/*/**/*.tsx

📄 CodeRabbit Inference Engine (.cursor/rules/page-structure.mdc)

If you need to use onClick in a component, that component is a client component and file must start with 'use client'

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
apps/web/app/(app)/*/**/**/*.tsx

📄 CodeRabbit Inference Engine (.cursor/rules/page-structure.mdc)

If we're in a deeply nested component we will use swr to fetch via API

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
apps/web/app/**/*.tsx

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Components with onClick must be client components with use client directive

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
**/*.{jsx,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

**/*.{jsx,tsx}: Don't destructure props inside JSX components in Solid projects.
Don't use both children and dangerouslySetInnerHTML props on the same element.
Don't use Array index in keys.
Don't assign to React component props.
Don't define React components inside other components.
Don't use event handlers on non-interactive elements.
Don't assign JSX properties multiple times.
Don't add extra closing tags for components without children.
Use &lt;&gt;...&lt;/&gt; instead of &lt;Fragment&gt;...&lt;/Fragment&gt;.
Don't insert comments as text nodes.
Don't use the return value of React.render.
Make sure all dependencies are correctly specified in React hooks.
Make sure all React hooks are called from the top level of component functions.
Don't use unnecessary fragments.
Don't pass children as props.
Use semantic elements instead of role attributes in JSX.

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
**/*.{html,jsx,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

**/*.{html,jsx,tsx}: Don't use &lt;marquee&gt; or &lt;blink&gt; elements.
Don't use accessKey attribute on any HTML element.
Don't set aria-hidden="true" on focusable elements.
Don't add ARIA roles, states, and properties to elements that don't support them.
Only use the scope prop on &lt;th&gt; elements.
Don't assign non-interactive ARIA roles to interactive HTML elements.
Make sure label elements have text content and are associated with an input.
Don't assign interactive ARIA roles to non-interactive HTML elements.
Don't assign tabIndex to non-interactive HTML elements.
Don't use positive integers for tabIndex property.
Don't include "image", "picture", or "photo" in img alt prop.
Don't use explicit role property that's the same as the implicit/default role.
Make static elements with click handlers use a valid role attribute.
Always include a title element for SVG elements.
Give all elements requiring alt text meaningful information for screen readers.
Make sure anchors have content that's accessible to screen readers.
Assign tabIndex to non-interactive HTML elements with aria-activedescendant.
Include all required ARIA attributes for elements with ARIA roles.
Make sure ARIA properties are valid for the element's supported roles.
Always include a type attribute for button elements.
Make elements with interactive roles and handlers focusable.
Give heading elements content that's accessible to screen readers (not hidden with aria-hidden).
Always include a lang attribute on the html element.
Always include a title attribute for iframe elements.
Accompany onClick with at least one of: onKeyUp, onKeyDown, or onKeyPress.
Accompany onMouseOver/onMouseOut with onFocus/onBlur.
Include caption tracks for audio and video elements.
Make sure all anchors are valid and navigable.
Ensure all ARIA properties (aria-*) are valid.
Use valid, non-abstract ARIA roles for elements with ARIA roles.
Use valid ARIA state and property values.
Use valid values for the autocomplete attribute on input eleme...

Files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
apps/web/utils/actions/**/*.ts

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

apps/web/utils/actions/**/*.ts: Use server actions for all mutations (create/update/delete operations)
next-safe-action provides centralized error handling
Use Zod schemas for validation on both client and server
Use revalidatePath in server actions for cache invalidation

apps/web/utils/actions/**/*.ts: Use server actions (with next-safe-action) for all mutations (create/update/delete operations); do NOT use POST API routes for mutations.
Use revalidatePath in server actions to invalidate cache after mutations.

Files:

  • apps/web/utils/actions/ai-rule.ts
apps/web/utils/actions/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/server-actions.mdc)

apps/web/utils/actions/*.ts: Implement all server actions using the next-safe-action library for type safety, input validation, context management, and error handling. Refer to apps/web/utils/actions/safe-action.ts for client definitions (actionClient, actionClientUser, adminActionClient).
Use actionClientUser when only authenticated user context (userId) is needed.
Use actionClient when both authenticated user context and a specific emailAccountId are needed. The emailAccountId must be bound when calling the action from the client.
Use adminActionClient for actions restricted to admin users.
Access necessary context (like userId, emailAccountId, etc.) provided by the safe action client via the ctx object in the .action() handler.
Server Actions are strictly for mutations (operations that change data, e.g., creating, updating, deleting). Do NOT use Server Actions for data fetching (GET operations). For data fetching, use dedicated GET API Routes combined with SWR Hooks.
Use SafeError for expected/handled errors within actions if needed. next-safe-action provides centralized error handling.
Use the .metadata({ name: "actionName" }) method to provide a meaningful name for monitoring. Sentry instrumentation is automatically applied via withServerActionInstrumentation within the safe action clients.
If an action modifies data displayed elsewhere, use revalidatePath or revalidateTag from next/cache within the action handler as needed.

Server action files must start with use server

Files:

  • apps/web/utils/actions/ai-rule.ts
apps/web/utils/llms/model.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

Model definitions and configurations must be placed in utils/llms/model.ts.

Files:

  • apps/web/utils/llms/model.ts
🧠 Learnings (26)
📓 Common learnings
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/* : Keep related AI functions in the same file or directory.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : System prompt should define the LLM's role and task specifications.
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : document complex ai logic with clear comments in llm-r...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : implement fallbacks for ai failures in llm-related fun...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : extract common patterns into utility functions in llm-...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Extract common patterns into utility functions in LLM-related code.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : follow the standard structure for llm-related function...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : format data consistently across similar llm-related fu...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Format data consistently across similar LLM-related functions.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : remove excessive whitespace and truncate long inputs i...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Remove excessive whitespace and truncate long inputs in LLM prompts.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/* : keep related ai functions in the same file or directory....
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/* : Keep related AI functions in the same file or directory.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : make zod schemas as specific as possible to guide the ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Make Zod schemas as specific as possible to guide the LLM output.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/__tests__/**/*.test.ts : test both ai and non-ai paths in llm-related tests...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-07-18T15:06:10.570Z
Learning: Applies to apps/web/__tests__/**/*.test.ts : Test both AI and non-AI paths in LLM-related tests

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/actions/ai-rule.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : define clear interfaces for complex input/output struc...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Define clear interfaces for complex input/output structures in LLM-related code.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use descriptive scoped loggers for each llm feature....
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use descriptive scoped loggers for each LLM feature.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : always define a zod schema for response validation in ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Always define a Zod schema for response validation in LLM-related functions.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use proper error types and logging in llm-related code...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use proper error types and logging in LLM-related code.

Applied to files:

  • apps/web/utils/ai/snippets/find-snippets.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : keep system prompts and user prompts separate in llm-r...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Keep system prompts and user prompts separate in LLM-related code.

Applied to files:

  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : system prompt should define the llm's role and task sp...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : System prompt should define the LLM's role and task specifications.

Applied to files:

  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : user prompt should contain the actual data and context...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : User prompt should contain the actual data and context.

Applied to files:

  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use xml-like tags to structure data in llm prompts....
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use XML-like tags to structure data in LLM prompts.

Applied to files:

  • apps/web/utils/ai/rule/generate-rules-prompt.ts
  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
  • apps/web/utils/actions/ai-rule.ts
📚 Learning: in packages/resend/emails/digest.tsx, the digestemailprops type uses `[key: string]: digestitem[] | ...
Learnt from: edulelis
PR: elie222/inbox-zero#576
File: packages/resend/emails/digest.tsx:78-83
Timestamp: 2025-07-17T04:19:57.099Z
Learning: In packages/resend/emails/digest.tsx, the DigestEmailProps type uses `[key: string]: DigestItem[] | undefined | string | Date | undefined` instead of intersection types like `& Record<string, DigestItem[] | undefined>` due to implementation constraints. This was the initial implementation approach and cannot be changed to more restrictive typing.

Applied to files:

  • apps/web/utils/ai/rule/generate-rules-prompt.ts
📚 Learning: applies to **/*.{html,jsx,tsx} : don't use explicit role property that's the same as the implicit/de...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/ultracite.mdc:0-0
Timestamp: 2025-07-20T09:03:06.318Z
Learning: Applies to **/*.{html,jsx,tsx} : Don't use explicit role property that's the same as the implicit/default role.

Applied to files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
📚 Learning: applies to apps/web/**/*.tsx : follow tailwindcss patterns with prettier-plugin-tailwindcss...
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-18T15:04:30.467Z
Learning: Applies to apps/web/**/*.tsx : Follow tailwindcss patterns with prettier-plugin-tailwindcss

Applied to files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
📚 Learning: applies to {app,components}/**/*.{ts,tsx} : use `result?.servererror` with `toasterror` and `toastsu...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/data-fetching.mdc:0-0
Timestamp: 2025-07-18T15:04:44.818Z
Learning: Applies to {app,components}/**/*.{ts,tsx} : Use `result?.serverError` with `toastError` and `toastSuccess` for error handling; success toast is optional

Applied to files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
📚 Learning: applies to apps/web/**/*.tsx : use `result?.servererror` with `toasterror` and `toastsuccess`...
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-18T15:04:30.467Z
Learning: Applies to apps/web/**/*.tsx : Use `result?.serverError` with `toastError` and `toastSuccess`

Applied to files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
📚 Learning: applies to apps/web/components/**/*form.tsx : use `result?.servererror` with `toasterror` and `toast...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/fullstack-workflow.mdc:0-0
Timestamp: 2025-07-18T15:05:16.146Z
Learning: Applies to apps/web/components/**/*Form.tsx : Use `result?.serverError` with `toastError` and `toastSuccess` for error and success notifications in form submission handlers.

Applied to files:

  • apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use typescript types for all parameters and return val...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use TypeScript types for all parameters and return values in LLM-related code.

Applied to files:

  • apps/web/utils/actions/ai-rule.ts
  • apps/web/utils/llms/model.ts
📚 Learning: applies to apps/web/utils/llms/model.ts : model definitions and configurations must be placed in uti...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/llms/model.ts : Model definitions and configurations must be placed in utils/llms/model.ts.

Applied to files:

  • apps/web/utils/llms/model.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: claude-review
🔇 Additional comments (4)
apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx (1)

311-313: LGTM! Good defensive programming

Adding the newline ensures proper formatting when appending to existing content, and the explicit fallback to empty string prevents potential runtime issues.

apps/web/utils/actions/ai-rule.ts (1)

555-555: Critical fix: Correct property access

The change correctly accesses the nested snippets array, aligning with the updated response schema from aiFindSnippets.

apps/web/utils/ai/rule/generate-rules-prompt.ts (1)

129-134: Good error handling with undefined return

The change to return undefined when no result is available provides clearer error semantics than returning an empty array.

apps/web/utils/llms/model.ts (1)

18-23: Excellent type standardization

The introduction of the SelectModel type provides consistency and type safety across all model selection functions.

Comment on lines +109 to +127
const modelOptions = getModel(emailAccount.user, "chat");

const generateText = createGenerateText({
  userEmail: emailAccount.email,
  label: "Generate rules prompt",
  modelOptions,
});

const aiResponse = await generateText({
  ...modelOptions,
  system,
  prompt,
  tools: {
    generate_rules: {
      description: "Generate a list of email management rules",
      inputSchema: hasSnippets ? parametersSnippets : parameters,
    },
  },
});

⚠️ Potential issue

Duplicate model configuration issue

Similar to the previous file, modelOptions are passed to createGenerateText and then spread again in the call.

Remove the spread operator:

  const aiResponse = await generateText({
-   ...modelOptions,
    system,
    prompt,
    tools: {
      generate_rules: {
        description: "Generate a list of email management rules",
        inputSchema: hasSnippets ? parametersSnippets : parameters,
      },
    },
  });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

const modelOptions = getModel(emailAccount.user, "chat");

const generateText = createGenerateText({
  userEmail: emailAccount.email,
  label: "Generate rules prompt",
  modelOptions,
});

const aiResponse = await generateText({
  ...modelOptions,
  system,
  prompt,
  tools: {
    generate_rules: {
      description: "Generate a list of email management rules",
      inputSchema: hasSnippets ? parametersSnippets : parameters,
    },
  },
});

After:

const modelOptions = getModel(emailAccount.user, "chat");

const generateText = createGenerateText({
  userEmail: emailAccount.email,
  label: "Generate rules prompt",
  modelOptions,
});

const aiResponse = await generateText({
  system,
  prompt,
  tools: {
    generate_rules: {
      description: "Generate a list of email management rules",
      inputSchema: hasSnippets ? parametersSnippets : parameters,
    },
  },
});
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/generate-rules-prompt.ts around lines 109 to 127, the
modelOptions object is passed both to createGenerateText and then spread again
in the generateText call, causing duplication. To fix this, remove the spread
operator for modelOptions in the generateText call and rely on the configuration
already set in createGenerateText.
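The redundancy flagged here can be seen in isolation. Below is a minimal, self-contained sketch of the factory pattern involved; the names are hypothetical stand-ins, not the repo's actual `createGenerateText`. The point is that options captured at creation time are merged into every call, so spreading them again at the call site adds nothing and risks conflicts if the two copies ever diverge.

```typescript
// Hypothetical stand-in for the repo's factory; illustrates why re-spreading
// modelOptions at the call site is redundant.
type ModelOptions = { model: string; temperature: number };

function createGenerate(modelOptions: ModelOptions) {
  // The factory closes over modelOptions and merges them into each call,
  // with per-call args taking precedence.
  return (args: { prompt: string }) => ({ ...modelOptions, ...args });
}

const generate = createGenerate({ model: "gpt-4o-mini", temperature: 0 });

// The call site does NOT need to spread modelOptions again:
const call = generate({ prompt: "hello" });
// call is { model: "gpt-4o-mini", temperature: 0, prompt: "hello" }
```

If the call site also spread a (possibly stale) copy of the options, the later spread would silently win, which is exactly the conflict the review warns about.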

Comment on lines +52 to 72
const modelOptions = getModel(emailAccount.user, "chat");

const { provider, model, llmModel, providerOptions } = getModel(
  emailAccount.user,
  "chat",
);
const generateObject = createGenerateObject({
  userEmail: emailAccount.email,
  label: "ai-find-snippets",
  modelOptions,
});

const aiResponse = await generateObject({
  model: llmModel,
  ...modelOptions,
  system,
  prompt,
  providerOptions,
  output: "array",
  schemaName: "Find snippets",
  schemaDescription: "Snippets",
  schema: z.object({
    text: z.string(),
    count: z.number(),
    snippets: z.array(
      z.object({
        text: z.string(),
        count: z.number(),
      }),
    ),
  }),
});

⚠️ Potential issue

Potential duplicate model configuration

The modelOptions are passed both to createGenerateObject and spread again in the generateObject call. This could lead to duplicate or conflicting configuration.

Consider removing the spread operator on line 61:

  const aiResponse = await generateObject({
-   ...modelOptions,
    system,
    prompt,
    schema: z.object({
      snippets: z.array(
        z.object({
          text: z.string(),
          count: z.number(),
        }),
      ),
    }),
  });

The model configuration should already be encapsulated in the factory function created with createGenerateObject.


Suggested change

Before:

const modelOptions = getModel(emailAccount.user, "chat");

const { provider, model, llmModel, providerOptions } = getModel(
  emailAccount.user,
  "chat",
);
const generateObject = createGenerateObject({
  userEmail: emailAccount.email,
  label: "ai-find-snippets",
  modelOptions,
});

const aiResponse = await generateObject({
  model: llmModel,
  ...modelOptions,
  system,
  prompt,
  providerOptions,
  output: "array",
  schemaName: "Find snippets",
  schemaDescription: "Snippets",
  schema: z.object({
    text: z.string(),
    count: z.number(),
    snippets: z.array(
      z.object({
        text: z.string(),
        count: z.number(),
      }),
    ),
  }),
});

After:

const modelOptions = getModel(emailAccount.user, "chat");
const generateObject = createGenerateObject({
  userEmail: emailAccount.email,
  label: "ai-find-snippets",
  modelOptions,
});
const aiResponse = await generateObject({
  system,
  prompt,
  schema: z.object({
    snippets: z.array(
      z.object({
        text: z.string(),
        count: z.number(),
      }),
    ),
  }),
});
🤖 Prompt for AI Agents
In apps/web/utils/ai/snippets/find-snippets.ts between lines 52 and 72, the
modelOptions are passed twice: once to createGenerateObject and again spread
into the generateObject call, causing potential duplication or conflicts. To fix
this, remove the spread of modelOptions from the generateObject call on line 61,
relying on the modelOptions already encapsulated within the generateObject
instance.

@claude
claude bot commented Aug 7, 2025

Claude finished @elie222's task


PR Review: Revert generate args to tool call

Review Progress

  • Read and analyze the changed files to understand the revert
  • Review code quality and best practices
  • Check for potential bugs or issues
  • Assess performance considerations
  • Evaluate security concerns
  • Provide constructive feedback in GitHub comment

Summary

This PR reverts from using chatCompletionObject with structured schema output back to using generateText with tool calls for AI argument generation. The change affects the core AI processing pipeline in ai-choose-args.ts.

Code Quality & Best Practices ✅

  • Clean Architecture: The revert maintains good separation of concerns with proper logging, error handling, and retry logic
  • Type Safety: Strong TypeScript typing is preserved with proper interfaces and Zod schemas
  • Error Handling: Robust error handling with withRetry and InvalidArgumentError catching
  • Logging: Comprehensive logging with scoped logger and trace-level debugging
  • Factory Pattern: Good use of createGenerateText factory for dependency injection

Analysis of Changes

What Changed:

  1. API Approach: Switched from chatCompletionObject (structured output) to generateText with tools
  2. Response Processing: Changed from aiResponse.object to aiResponse.toolCalls[0].input
  3. New Utilities: Added createGenerateText and createGenerateObject factory functions
  4. Model Handling: Added explicit getModel call and model options management

Why This Might Be Better:

  • Tool Calls: More explicit control over AI function calling vs. structured output
  • Flexibility: Tool calls allow for more complex multi-step interactions
  • Fallback: Better handling when AI doesn't provide expected tool calls (if (!toolCall?.toolName) return)

Potential Issues ⚠️

  1. Silent Failure Risk (ai-choose-args.ts:114):

    if (!toolCall?.toolName) return; // Returns undefined silently

    Consider logging when this happens or returning a more explicit error state.

  2. Array Access Without Bounds Check (ai-choose-args.ts:112):

    const toolCall = aiResponse.toolCalls[0]; // Could be empty array

    Should validate aiResponse.toolCalls.length > 0 first.

  3. Unused Import: The generateObject import is unused in the current code, though createGenerateObject is exported.

Performance Considerations ✅

  • Minimal Overhead: Factory functions add negligible overhead
  • Usage Tracking: Proper AI usage tracking preserved for billing/monitoring
  • Retry Logic: Same retry strategy maintained (3 retries, 1s delay)
  • No Breaking Changes: Model selection logic preserved

Security Concerns ✅

  • Input Validation: Zod schemas provide robust input validation
  • API Key Handling: Secure credential management through modelOptions
  • Error Sanitization: Error handling doesn't leak sensitive information
  • User Context: Proper user email tracking for audit trails

Suggestions for Improvement

  1. Add bounds checking:

    const toolCalls = aiResponse.toolCalls;
    if (!toolCalls || toolCalls.length === 0) {
      logger.warn("No tool calls received from AI", loggerOptions);
      return;
    }
    const toolCall = toolCalls[0];
  2. Consider more descriptive return types instead of undefined for clarity.

  3. Add JSDoc comments to the new factory functions for better developer experience.

Overall Assessment ✅

This is a well-executed revert that:

  • Maintains code quality and architectural patterns
  • Preserves important functionality (logging, error handling, usage tracking)
  • Adds useful utility functions for future use
  • Has minimal risk with only minor improvements needed

The switch from structured output to tool calls suggests a strategic decision to have more control over AI interactions, which is reasonable for a rule processing system.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🔭 Outside diff range comments (6)
apps/web/utils/ai/categorize-sender/ai-categorize-single-sender.ts (1)

8-70: Add missing error handling and logging

According to the retrieved learnings, LLM-related functions should:

  1. Use a scoped logger for debugging
  2. Implement try/catch blocks for proper error handling
  3. Log inputs and outputs with appropriate log levels
  4. Validate inputs early

Add these missing elements:

+import { createScopedLogger } from "@/utils/logger";
+
+const logger = createScopedLogger("ai/categorize-sender/single");
+
 export async function aiCategorizeSender({
   emailAccount,
   sender,
   previousEmails,
   categories,
 }: {
   emailAccount: EmailAccountWithAI;
   sender: string;
   previousEmails: { subject: string; snippet: string }[];
   categories: Pick<Category, "name" | "description">[];
 }) {
+  try {
+    // Validate inputs early
+    if (!sender?.trim()) {
+      logger.warn("Empty sender provided");
+      return null;
+    }
+    
+    if (!categories.length) {
+      logger.warn("No categories provided");
+      return null;
+    }
apps/web/utils/ai/reply/check-if-needs-reply.ts (1)

12-79: Add missing error handling and logging

According to the retrieved learnings, LLM-related functions should:

  1. Use descriptive scoped loggers
  2. Implement proper error handling with try/catch blocks
  3. Log inputs and outputs with appropriate log levels
  4. Implement fallbacks for AI failures

Add the missing error handling and logging:

+import { createScopedLogger } from "@/utils/logger";
+
+const logger = createScopedLogger("ai/reply/check-needs-reply");
+
 export async function aiCheckIfNeedsReply({
   emailAccount,
   messageToSend,
   threadContextMessages,
 }: {
   emailAccount: EmailAccountWithAI;
   messageToSend: EmailForLLM;
   threadContextMessages: EmailForLLM[];
 }) {
+  try {
     // If messageToSend somehow is null/undefined, default to no reply needed.
     if (!messageToSend)
       return { needsReply: false, rationale: "No message provided" };
+
+    logger.trace("Checking if email needs reply", {
+      messageId: messageToSend.id,
+      threadLength: threadContextMessages.length
+    });

     // ... existing logic ...

     const aiResponse = await generateObject({
       // ... existing call ...
     });

+    logger.trace("AI response for reply check", {
+      needsReply: aiResponse.object.needsReply,
+      rationale: aiResponse.object.rationale
+    });

     return aiResponse.object;
+  } catch (error) {
+    logger.error("Failed to check if email needs reply", { error });
+    // Fallback: assume reply needed for safety
+    return { needsReply: true, rationale: "Error occurred, assuming reply needed" };
+  }
 }
apps/web/utils/ai/rule/diff-rules.ts (1)

20-29: Add early input validation per LLM standards

According to the learnings, LLM-related functions should validate inputs early before processing.

 export async function aiDiffRules({
   emailAccount,
   oldPromptFile,
   newPromptFile,
 }: {
   emailAccount: EmailAccountWithAI;
   oldPromptFile: string;
   newPromptFile: string;
 }) {
+  // Validate inputs early
+  if (!emailAccount?.email || !emailAccount?.user) {
+    logger.error("Invalid email account configuration");
+    throw new Error("Invalid email account configuration");
+  }
+  
+  if (!oldPromptFile || !newPromptFile) {
+    logger.error("Missing prompt files");
+    throw new Error("Missing prompt files for diff");
+  }
+
   const diff = createPatch("prompt", oldPromptFile, newPromptFile);
apps/web/utils/ai/knowledge/extract-from-email-history.ts (1)

74-89: Enhance input validation per LLM standards

The function should validate all inputs early, not just historicalMessages length.

 export async function aiExtractFromEmailHistory({
   currentThreadMessages,
   historicalMessages,
   emailAccount,
 }: {
   currentThreadMessages: EmailForLLM[];
   historicalMessages: EmailForLLM[];
   emailAccount: EmailAccountWithAI;
 }): Promise<string | null> {
+  // Validate inputs early
+  if (!emailAccount?.email || !emailAccount?.user) {
+    logger.error("Invalid email account configuration");
+    return null;
+  }
+  
+  if (!currentThreadMessages || currentThreadMessages.length === 0) {
+    logger.error("No current thread messages provided");
+    return null;
+  }
+
   try {
     logger.info("Extracting information from email history", {
       currentThreadCount: currentThreadMessages.length,
       historicalCount: historicalMessages.length,
     });
apps/web/utils/ai/rule/find-existing-rules.ts (2)

26-41: Add early input validation

Validate inputs early before processing per LLM standards.

 export async function aiFindExistingRules({
   emailAccount,
   promptRulesToEdit,
   promptRulesToRemove,
   databaseRules,
 }: {
   emailAccount: EmailAccountWithAI;
   promptRulesToEdit: { oldRule: string; newRule: string }[];
   promptRulesToRemove: string[];
   databaseRules: (Rule & { actions: Action[] })[];
 }) {
+  // Validate inputs early
+  if (!emailAccount?.email || !emailAccount?.user) {
+    logger.error("Invalid email account configuration");
+    throw new Error("Invalid email account configuration");
+  }
+  
+  if (!databaseRules || databaseRules.length === 0) {
+    logger.warn("No database rules to compare against");
+    return { editedRules: [], removedRules: [] };
+  }
+
   const promptRules = [
     ...promptRulesToEdit.map((r) => r.oldRule),
     ...promptRulesToRemove,
   ];

54-99: Add error handling with fallback

Wrap the AI invocation in proper error handling per LLM standards.

   const modelOptions = getModel(emailAccount.user, "chat");
 
   const generateText = createGenerateText({
     userEmail: emailAccount.email,
     label: "Find existing rules",
     modelOptions,
   });
 
+  try {
     const result = await generateText({
       ...modelOptions,
       system,
       prompt,
       tools: {
         findExistingRules,
       },
     });
     
     // ... rest of parsing logic ...
     
     return {
       editedRules: existingRules.filter((rule) => rule.toEdit),
       removedRules: existingRules.filter((rule) => rule.toRemove),
     };
+  } catch (error) {
+    logger.error("Failed to find existing rules", { 
+      error, 
+      userEmail: emailAccount.email 
+    });
+    
+    // Fallback: return empty results
+    return { editedRules: [], removedRules: [] };
+  }
♻️ Duplicate comments (2)
apps/web/utils/llms/index.ts (2)

44-83: Still missing input validation and scoped logging

The previous review comment about adding input validation and proper scoped logging has not been addressed. This is critical for LLM functions per the coding standards.


85-121: Code duplication not addressed

The previous review comment about extracting a common factory function to reduce duplication between createGenerateText and createGenerateObject has not been addressed.

🧹 Nitpick comments (4)
apps/web/utils/ai/choose-rule/ai-detect-recurring-pattern.ts (1)

131-131: Consider adding input/output logging for consistency.

While error logging is preserved, the refactor removed info-level logging that would help with debugging and monitoring. According to the retrieved learnings, LLM-related functions should "log inputs and outputs with appropriate log levels."

Consider adding logging for the AI input and successful responses:

  try {
+   logger.info("Detecting recurring pattern", {
+     senderEmail,
+     emailCount: emails.length,
+     rulesCount: rules.length
+   });
+
    const modelOptions = getModel(emailAccount.user, "chat");
    
    const generateObject = createGenerateObject({
      userEmail: emailAccount.email,
      label: "Detect recurring pattern",
      modelOptions,
    });

    const aiResponse = await generateObject({
      ...modelOptions,
      system,
      prompt,
      schema,
    });

+   logger.info("Pattern detection completed", {
+     matchedRule: aiResponse.object.matchedRule,
+     explanation: aiResponse.object.explanation
+   });

    return aiResponse.object;
apps/web/utils/ai/knowledge/extract.ts (1)

10-38: Consider the impact of removing input/output logging

The refactoring removes explicit logging that was previously used for tracing AI inputs and outputs. According to the retrieved learnings, LLM-related functions should "log inputs and outputs with appropriate log levels" and "include relevant context in log messages."

While the createGenerateObject factory may handle some logging internally, ensure that sufficient debugging information is available for troubleshooting knowledge extraction issues.

apps/web/utils/ai/choose-rule/ai-choose-rule.ts (1)

18-113: Consider adding logging back for debugging

While the factory function may handle some logging internally, the retrieved learnings emphasize that LLM-related functions should "log inputs and outputs with appropriate log levels" and "use descriptive scoped loggers for each LLM feature."

Consider adding a scoped logger for debugging rule selection issues:

+import { createScopedLogger } from "@/utils/logger";
+
+const logger = createScopedLogger("ai/choose-rule");
+
 async function getAiResponse(options: GetAiResponseOptions) {
   const { email, emailAccount, rules, modelType = "default" } = options;
+
+  logger.trace("Choosing rule for email", {
+    emailId: email.id,
+    rulesCount: rules.length,
+    modelType
+  });
apps/web/utils/ai/clean/ai-clean.ts (1)

21-113: Consider re-adding logging for debugging

The retrieved learnings specify that LLM-related functions should "use descriptive scoped loggers for each LLM feature" and "log inputs and outputs with appropriate log levels." The clean functionality would benefit from logging for debugging email categorization decisions.

Add logging back for debugging purposes:

+import { createScopedLogger } from "@/utils/logger";
+
+const logger = createScopedLogger("ai/clean");
+
 export async function aiClean({
   emailAccount,
   messageId: _messageId,
   messages,
   instructions,
   skips,
 }: {
   // ... existing interface
 }): Promise<{ archive: boolean }> {
+  logger.trace("Analyzing email for cleaning", {
+    hasInstructions: !!instructions,
+    skips,
+    messageCount: messages.length
+  });

Comment on lines 97 to 110
const modelOptions = getModel(emailAccount.user, "economy");

// const result = await chatCompletionObject({
//   system,
//   prompt,
//   schema: extractionSchema,
//   usageLabel: "Email history extraction",
//   userAi: emailAccount.user,
//   userEmail: emailAccount.email,
//   modelType: "economy",
// });

const { provider, model, llmModel, providerOptions } = getModel(
  emailAccount.user,
  "economy",
);
const generateObject = createGenerateObject({
  userEmail: emailAccount.email,
  label: "Email history extraction",
  modelOptions,
});

const result = await generateObject({
  model: llmModel,
  ...modelOptions,
  system,
  prompt,
  schema: extractionSchema,
  providerOptions,
  schema,
});

🛠️ Refactor suggestion

Add required logging for AI inputs and outputs

According to the learnings, LLM functions must log inputs and outputs with appropriate log levels.

   const modelOptions = getModel(emailAccount.user, "economy");
 
   const generateObject = createGenerateObject({
     userEmail: emailAccount.email,
     label: "Email history extraction",
     modelOptions,
   });
 
+  logger.trace("Generating email history extraction", {
+    promptLength: prompt.length,
+    userEmail: emailAccount.email,
+  });
+
   const result = await generateObject({
     ...modelOptions,
     system,
     prompt,
     schema,
   });
+  
+  logger.trace("Email history extraction complete", {
+    hasContext: result.object.hasHistoricalContext,
+    summaryLength: result.object.summary?.length,
+  });

Suggested change

Before:

const modelOptions = getModel(emailAccount.user, "economy");

// const result = await chatCompletionObject({
//   system,
//   prompt,
//   schema: extractionSchema,
//   usageLabel: "Email history extraction",
//   userAi: emailAccount.user,
//   userEmail: emailAccount.email,
//   modelType: "economy",
// });

const { provider, model, llmModel, providerOptions } = getModel(
  emailAccount.user,
  "economy",
);
const generateObject = createGenerateObject({
  userEmail: emailAccount.email,
  label: "Email history extraction",
  modelOptions,
});

const result = await generateObject({
  model: llmModel,
  ...modelOptions,
  system,
  prompt,
  schema: extractionSchema,
  providerOptions,
  schema,
});

After:

const modelOptions = getModel(emailAccount.user, "economy");

const generateObject = createGenerateObject({
  userEmail: emailAccount.email,
  label: "Email history extraction",
  modelOptions,
});

logger.trace("Generating email history extraction", {
  promptLength: prompt.length,
  userEmail: emailAccount.email,
});

const result = await generateObject({
  ...modelOptions,
  system,
  prompt,
  schema,
});

logger.trace("Email history extraction complete", {
  hasContext: result.object.hasHistoricalContext,
  summaryLength: result.object.summary?.length,
});
🤖 Prompt for AI Agents
In apps/web/utils/ai/knowledge/extract-from-email-history.ts between lines 97
and 110, the code calls the LLM function without logging its inputs and outputs.
Add logging statements before calling generateObject to log the input parameters
(modelOptions, system, prompt, schema) at an appropriate log level, and after
the call to log the result output. Use suitable log levels such as info or debug
to ensure traceability of AI function usage.

Comment on lines +99 to +105
const modelOptions = getModel(emailAccount.user, "economy");

// const result = await chatCompletionObject({
//   system,
//   prompt,
//   schema: extractionSchema,
//   usageLabel: "Knowledge extraction",
//   userAi: emailAccount.user,
//   userEmail: emailAccount.email,
//   modelType: "economy",
// });

const { provider, model, llmModel, providerOptions } = getModel(
  emailAccount.user,
  "economy",
);
const generateObject = createGenerateObject({
  userEmail: emailAccount.email,
  label: "Knowledge extraction",
  modelOptions,
});

💡 Verification agent

🧩 Analysis chain

Verify error handling in factory function

The refactored code relies on createGenerateObject for error handling and usage tracking. Ensure the factory function implements proper fallback mechanisms as required by the learnings.


🏁 Script executed:

#!/bin/bash
# Description: Verify that createGenerateObject implements proper error handling and fallbacks
# Expected: Find error handling, retry logic, and fallback implementations

ast-grep --pattern $'export function createGenerateObject($$$) {
  $$$
}'

# Also check for withRetry usage
rg -A 10 "withRetry|retry" apps/web/utils/llms/

Length of output: 1664


🏁 Script executed:

#!/bin/bash
# Locate the definition of createGenerateObject across the repo
rg -n "createGenerateObject" -A20

Length of output: 55979


🏁 Script executed:

#!/bin/bash
# Print full createGenerateObject implementation to verify error handling, retry and fallback logic
rg -n "export function createGenerateObject" -A50 -B5 apps/web/utils/llms/index.ts

Length of output: 1334


Add retry logic in createGenerateObject for transient errors

I confirmed that in apps/web/utils/llms/index.ts (createGenerateObject, lines 94–120) you’re already tracing calls, saving usage, and delegating errors to handleError before re-throwing. Call sites in the AI utils (e.g. extract.ts) correctly catch and return fallbacks (null).

However, per our LLM-retry learning you should wrap the internal call in withRetry to handle transient failures. For example:

import { withRetry } from "./index";

export function createGenerateObject(...) {
  return async (...args) => {
    try {
      logger.trace("Generating object", {...});

      const result = await withRetry(
        () => generateObject(...args),
        {
          retryIf: (err) => /* inspect err and return true for transient errors */,
          maxRetries: 2,
          delayMs: 500,
        }
      );

      /* usage tracking… */
      return result;
    } catch (error) {
      await handleError(error, userEmail);
      throw error;
    }
  };
}

• File needing update: apps/web/utils/llms/index.ts (createGenerateObject)
• Retain existing fallbacks at AI call sites (catch → return null)

🤖 Prompt for AI Agents
In apps/web/utils/llms/index.ts around lines 94 to 120, the createGenerateObject
function lacks retry logic for transient errors. Modify createGenerateObject to
wrap the internal generateObject call with the withRetry utility, specifying a
retryIf function that detects transient errors, setting maxRetries to 2 and
delayMs to 500. Ensure existing error handling with handleError remains and that
errors are re-thrown after retries. Keep the current fallback behavior at AI
call sites like apps/web/utils/ai/knowledge/extract.ts unchanged.
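The `retryIf` placeholder in the sketch above needs a concrete predicate. One plausible version follows, under the assumption that provider errors expose an HTTP `status` and/or a Node.js `code` field; the actual error shape thrown by the AI SDK may differ, so adjust accordingly:

```typescript
// Hypothetical transient-error predicate for a withRetry retryIf option.
// Assumes errors may carry an HTTP status and/or a Node.js error code.
const TRANSIENT_CODES = new Set(["ECONNRESET", "ETIMEDOUT", "EAI_AGAIN"]);

function isTransientError(err: unknown): boolean {
  const e = err as { status?: number; code?: string };
  if (e.status === 429) return true; // rate limited: worth retrying
  if (typeof e.status === "number" && e.status >= 500) return true; // server error
  return e.code !== undefined && TRANSIENT_CODES.has(e.code);
}
```

With a predicate like this, permanent failures (4xx validation errors, auth failures) still reach handleError immediately, while rate limits and dropped connections get the two retries.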

Comment on lines +37 to +38
return aiResponse.toolCalls?.[0]
?.input as CreateOrUpdateRuleSchemaWithCategories;

💡 Verification agent

🧩 Analysis chain

Verify type safety of tool call response casting.

The response handling accesses toolCalls[0]?.input and casts it to the expected schema type. While this aligns with the tool-calling approach, ensure that the type casting is safe and that error handling exists for cases where toolCalls might be empty or malformed.


🏁 Script executed:

#!/bin/bash
# Description: Check if there are error handling patterns for toolCalls in similar files
# Test: Look for toolCalls usage patterns and error handling

echo "Searching for toolCalls usage patterns..."
rg -A 3 -B 1 "toolCalls\?" --type ts

echo -e "\nSearching for error handling around toolCalls..."
rg -A 5 -B 5 "toolCalls.*\[0\]" --type ts

Length of output: 5682


Add runtime validation for toolCalls[0]?.input in create-rule.ts

The current code in apps/web/utils/ai/rule/create-rule.ts simply does:

return aiResponse.toolCalls?.[0]
  ?.input as CreateOrUpdateRuleSchemaWithCategories;

Without checking that toolCalls exists or that input is defined, you risk returning undefined (silently cast as your schema) or causing hard‐to‐trace errors downstream. In other parts of this utility (e.g. generate-rules-prompt.ts) you guard against missing results, and in others you run Zod parsing. To keep things consistent and safe:

• In apps/web/utils/ai/rule/create-rule.ts (around lines 37–38):

const toolCall = aiResponse.toolCalls?.[0];
if (!toolCall?.input) {
  throw new Error("create-rule: no tool response input received");
}
// Validate & cast via Zod or your schema helper
return createRuleSchema.parse(toolCall.input);

• Or at minimum, early-exit or throw if aiResponse.toolCalls?.[0]?.input is falsy before the cast.

This ensures you never return undefined as a CreateOrUpdateRuleSchemaWithCategories and surfaces issues immediately.

🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/create-rule.ts around lines 37 to 38, the code
currently casts aiResponse.toolCalls?.[0]?.input directly without checking if it
exists, risking undefined values or runtime errors. To fix this, first assign
aiResponse.toolCalls?.[0] to a variable, then check if its input property
exists; if not, throw an error. After this validation, parse and validate the
input using the createRuleSchema parser before returning it, ensuring safe and
consistent runtime validation.
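
The validation pattern the review asks for can be sketched generically: never cast a tool call's input blindly; check presence, then run a type guard before returning. The shapes below (`ToolCallResult`, `RuleInput`) are illustrative assumptions, not the repo's actual types.

```typescript
// Hedged sketch of safe tool-call extraction; shapes are illustrative only.
type ToolCall = { toolName: string; input: unknown };
type ToolCallResult = { toolCalls?: ToolCall[] };

function getToolCallInput<T>(
  result: ToolCallResult,
  label: string,
  isValid: (value: unknown) => value is T,
): T {
  const input = result.toolCalls?.[0]?.input;
  if (input === undefined) {
    // Surfaces a missing tool call immediately instead of returning undefined.
    throw new Error(`${label}: no tool call input received`);
  }
  if (!isValid(input)) {
    throw new Error(`${label}: tool call input failed validation`);
  }
  return input;
}

// Minimal stand-in for a Zod schema: a hand-rolled type guard.
type RuleInput = { name: string };
const isRuleInput = (value: unknown): value is RuleInput =>
  typeof value === "object" &&
  value !== null &&
  typeof (value as RuleInput).name === "string";
```

In the real file this would wrap `aiResponse.toolCalls` and use `createRuleSchema.parse` (or `safeParse`) in place of the hand-written guard.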

Comment on lines +1 to +6
import z from "zod";
import { createPatch } from "diff";
import { generateObject } from "ai";
import type { EmailAccountWithAI } from "@/utils/llms/types";
import { createScopedLogger } from "@/utils/logger";
import { getModel } from "@/utils/llms/model";
import { saveAiUsage } from "@/utils/usage";

const logger = createScopedLogger("ai-diff-rules");
import { createGenerateText } from "@/utils/llms";


⚠️ Potential issue

Missing required scoped logger for LLM function

According to the learnings, LLM-related functions must use a scoped logger. Add the logger import and initialization.

 import z from "zod";
 import { createPatch } from "diff";
 import type { EmailAccountWithAI } from "@/utils/llms/types";
 import { getModel } from "@/utils/llms/model";
 import { createGenerateText } from "@/utils/llms";
+import { createScopedLogger } from "@/utils/logger";
+
+const logger = createScopedLogger("ai.rule.diff-rules");
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

import z from "zod";
import { createPatch } from "diff";
import { generateObject } from "ai";
import type { EmailAccountWithAI } from "@/utils/llms/types";
import { createScopedLogger } from "@/utils/logger";
import { getModel } from "@/utils/llms/model";
import { saveAiUsage } from "@/utils/usage";

const logger = createScopedLogger("ai-diff-rules");
import { createGenerateText } from "@/utils/llms";

After:

import z from "zod";
import { createPatch } from "diff";
import type { EmailAccountWithAI } from "@/utils/llms/types";
import { getModel } from "@/utils/llms/model";
import { createGenerateText } from "@/utils/llms";
import { createScopedLogger } from "@/utils/logger";

const logger = createScopedLogger("ai.rule.diff-rules");
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/diff-rules.ts around lines 1 to 6, the LLM-related
functions lack a scoped logger as required. Import the scoped logger utility and
initialize a logger instance specific to this module or function. Replace any
generic or missing logging with this scoped logger to ensure consistent and
contextual logging for LLM operations.
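
The scoped-logger pattern this comment (and several others below) points at can be sketched as follows. This is a minimal stand-in under stated assumptions; the real `@/utils/logger` helper presumably writes to a transport rather than returning strings, and its API may differ.

```typescript
// Hypothetical sketch of a createScopedLogger helper; not the repo's actual API.
type LogMeta = Record<string, unknown>;

function createScopedLogger(scope: string) {
  const emit = (level: "trace" | "error", message: string, meta: LogMeta = {}) =>
    JSON.stringify({ scope, level, message, ...meta });
  return {
    // Each entry carries the scope, so log lines from one feature are greppable.
    trace: (message: string, meta?: LogMeta) => emit("trace", message, meta),
    error: (message: string, meta?: LogMeta) => emit("error", message, meta),
  };
}

const logger = createScopedLogger("ai.rule.diff-rules");
```

Initializing the logger once at the top of the module gives every log line in the file consistent context.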

Comment on lines +58 to 83

⚠️ Potential issue

Add error handling and fallback mechanism per LLM standards

According to the learnings, LLM-related functions must implement proper error handling and fallbacks for AI failures.

Wrap the AI invocation in a try-catch block:

   const modelOptions = getModel(emailAccount.user, "chat");
 
   const generateObject = createGenerateText({
     userEmail: emailAccount.email,
     label: "Diff rules",
     modelOptions,
   });
 
+  try {
     const result = await generateObject({
       ...modelOptions,
       system,
       prompt,
       tools: {
         diff_rules: {
           description:
             "Analyze two prompt files and their diff to return the differences",
           inputSchema,
         },
       },
     });
 
     const parsedRules = result.toolCalls?.[0]?.input as z.infer<
       typeof inputSchema
     >;
     return parsedRules;
+  } catch (error) {
+    logger.error("Failed to generate diff rules", { error, userEmail: emailAccount.email });
+    
+    // Fallback: return empty diff if AI fails
+    return {
+      addedRules: [],
+      editedRules: [],
+      removedRules: [],
+    };
+  }
📝 Committable suggestion


Suggested change

Before:

const modelOptions = getModel(emailAccount.user, "chat");

const generateObject = createGenerateText({
  userEmail: emailAccount.email,
  label: "Diff rules",
  modelOptions,
});

const result = await generateObject({
  ...modelOptions,
  system,
  prompt,
  tools: {
    diff_rules: {
      description:
        "Analyze two prompt files and their diff to return the differences",
      inputSchema,
    },
  },
});

const parsedRules = result.toolCalls?.[0]?.input as z.infer<
  typeof inputSchema
>;
return parsedRules;
}

After:

const modelOptions = getModel(emailAccount.user, "chat");

const generateObject = createGenerateText({
  userEmail: emailAccount.email,
  label: "Diff rules",
  modelOptions,
});

try {
  const result = await generateObject({
    ...modelOptions,
    system,
    prompt,
    tools: {
      diff_rules: {
        description:
          "Analyze two prompt files and their diff to return the differences",
        inputSchema,
      },
    },
  });

  const parsedRules = result.toolCalls?.[0]?.input as z.infer<
    typeof inputSchema
  >;
  return parsedRules;
} catch (error) {
  logger.error("Failed to generate diff rules", {
    error,
    userEmail: emailAccount.email,
  });

  // Fallback: return empty diff if AI fails
  return {
    addedRules: [],
    editedRules: [],
    removedRules: [],
  };
}
}
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/diff-rules.ts between lines 58 and 83, the AI
invocation lacks error handling which is required for robustness. Wrap the call
to generateObject in a try-catch block to catch any exceptions during the AI
call. In the catch block, log or handle the error appropriately and provide a
fallback return value or behavior to ensure the function does not fail silently
or crash.
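
The try/catch-with-fallback pattern can also be factored into a small reusable wrapper, which keeps individual LLM functions short. This is a hedged sketch assuming nothing about the repo's API; `DiffRules` and `withFallback` are illustrative names.

```typescript
// Minimal fallback wrapper: run an async AI call, report the failure, and
// return a safe default so callers never see an unhandled rejection.
type DiffRules = {
  addedRules: string[];
  editedRules: string[];
  removedRules: string[];
};

const emptyDiff: DiffRules = { addedRules: [], editedRules: [], removedRules: [] };

async function withFallback<T>(
  run: () => Promise<T>,
  fallback: T,
  onError: (error: unknown) => void,
): Promise<T> {
  try {
    return await run();
  } catch (error) {
    onError(error);
    return fallback; // Degrade gracefully instead of crashing the caller.
  }
}
```

A diff-rules call would then pass its generate invocation as `run` and `emptyDiff` as the fallback.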

Comment on lines +60 to +77
userEmail: emailAccount.email,
label: "Diff rules",
modelOptions,
});

const result = await generateObject({
...modelOptions,
system,
prompt,
tools: {
diff_rules: {
description:
"Analyze two prompt files and their diff to return the differences",
inputSchema,
},
},
});


🛠️ Refactor suggestion

Confusing function naming - createGenerateText used for object/tool generation

The function createGenerateText is being used with a tools-based API pattern, which typically requires object generation. This naming mismatch could cause confusion. Consider renaming to createGenerateWithTools or using createGenerateObject if available.

Also, add logging per LLM standards:

   const generateObject = createGenerateText({
     userEmail: emailAccount.email,
     label: "Diff rules",
     modelOptions,
   });
 
+  logger.trace("Generating diff rules", {
+    userEmail: emailAccount.email,
+    oldPromptLength: oldPromptFile.length,
+    newPromptLength: newPromptFile.length,
+  });
+
   const result = await generateObject({
📝 Committable suggestion


Suggested change

Before:

const generateObject = createGenerateText({
  userEmail: emailAccount.email,
  label: "Diff rules",
  modelOptions,
});

const result = await generateObject({
  ...modelOptions,
  system,
  prompt,
  tools: {
    diff_rules: {
      description:
        "Analyze two prompt files and their diff to return the differences",
      inputSchema,
    },
  },
});

After:

const generateObject = createGenerateText({
  userEmail: emailAccount.email,
  label: "Diff rules",
  modelOptions,
});

logger.trace("Generating diff rules", {
  userEmail: emailAccount.email,
  oldPromptLength: oldPromptFile.length,
  newPromptLength: newPromptFile.length,
});

const result = await generateObject({
  ...modelOptions,
  system,
  prompt,
  tools: {
    diff_rules: {
      description:
        "Analyze two prompt files and their diff to return the differences",
      inputSchema,
    },
  },
});
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/diff-rules.ts around lines 60 to 77, the function name
createGenerateText is misleading because it is used to generate an object with
tools, not just text. Rename createGenerateText to createGenerateWithTools or
createGenerateObject to better reflect its purpose. Additionally, add logging
statements following LLM standards to track the generation process and results
for better observability.

Comment on lines +79 to +82
typeof inputSchema
>;
return parsedRules;
}

⚠️ Potential issue

Unsafe type assertion and missing result validation

The code unsafely casts the result without validation, which could cause runtime errors if the AI response is malformed.

-  const parsedRules = result.toolCalls?.[0]?.input as z.infer<
-    typeof inputSchema
-  >;
-  return parsedRules;
+  const toolCallInput = result.toolCalls?.[0]?.input;
+  
+  if (!toolCallInput) {
+    logger.error("No tool call input in AI response");
+    throw new Error("Invalid AI response: missing tool call");
+  }
+  
+  // Validate the response against the schema
+  const parsedRules = inputSchema.safeParse(toolCallInput);
+  
+  if (!parsedRules.success) {
+    logger.error("Failed to parse AI response", { error: parsedRules.error });
+    throw new Error("Invalid AI response format");
+  }
+  
+  logger.trace("Successfully parsed diff rules", { 
+    addedCount: parsedRules.data.addedRules.length,
+    editedCount: parsedRules.data.editedRules.length,
+    removedCount: parsedRules.data.removedRules.length,
+  });
+  
+  return parsedRules.data;
📝 Committable suggestion


Suggested change

Before:

const parsedRules = result.toolCalls?.[0]?.input as z.infer<
  typeof inputSchema
>;
return parsedRules;

After:

const toolCallInput = result.toolCalls?.[0]?.input;

if (!toolCallInput) {
  logger.error("No tool call input in AI response");
  throw new Error("Invalid AI response: missing tool call");
}

// Validate the response against the schema
const parsedRules = inputSchema.safeParse(toolCallInput);

if (!parsedRules.success) {
  logger.error("Failed to parse AI response", { error: parsedRules.error });
  throw new Error("Invalid AI response format");
}

logger.trace("Successfully parsed diff rules", {
  addedCount: parsedRules.data.addedRules.length,
  editedCount: parsedRules.data.editedRules.length,
  removedCount: parsedRules.data.removedRules.length,
});

return parsedRules.data;
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/diff-rules.ts around lines 79 to 82, the code unsafely
casts result.toolCalls?.[0]?.input to the expected type without validating the
structure, risking runtime errors if the AI response is malformed. To fix this,
add proper validation of result.toolCalls and its input before casting, using
the inputSchema's safe parsing or validation methods to ensure the data matches
the expected format, and handle validation failures gracefully.

Comment on lines 1 to +7
import { z } from "zod";
import { generateText, tool } from "ai";
import { tool } from "ai";
import type { EmailAccountWithAI } from "@/utils/llms/types";
import type { Action, Rule } from "@prisma/client";
import { getModel } from "@/utils/llms/model";
import { saveAiUsage } from "@/utils/usage";
import { isDefined } from "@/utils/types";

const schema = z
  .object({
    ruleId: z.string().describe("The id of the existing rule"),
    promptNumber: z
      .number()
      .describe("The index of the prompt that matches the rule"),
  })
  .describe("The existing rules that match the prompt rules");
import { createGenerateText } from "@/utils/llms";


⚠️ Potential issue

Missing required scoped logger

LLM-related functions must use a scoped logger per the coding standards.

 import { z } from "zod";
 import { tool } from "ai";
 import type { EmailAccountWithAI } from "@/utils/llms/types";
 import type { Action, Rule } from "@prisma/client";
 import { getModel } from "@/utils/llms/model";
 import { createGenerateText } from "@/utils/llms";
+import { createScopedLogger } from "@/utils/logger";
+
+const logger = createScopedLogger("ai.rule.find-existing");
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/find-existing-rules.ts within the first 7 lines, the
imports and initial setup lack a scoped logger as required by coding standards
for LLM-related functions. Add an import for the scoped logger utility and
initialize a scoped logger instance specific to this module or function. Replace
any generic or missing logging with this scoped logger to ensure consistent and
contextual logging.

Comment on lines +56 to 69
const generateText = createGenerateText({
  userEmail: emailAccount.email,
  label: "Find existing rules",
  modelOptions,
});

const result = await generateText({
  model: llmModel,
  ...modelOptions,
  system,
  prompt,
  providerOptions,
  tools: {
    findExistingRules,
  },
});

🛠️ Refactor suggestion

Function naming confusion and missing logging

Using createGenerateText with a tools-based pattern is semantically confusing, and the call is also missing the required logging.

-  const generateText = createGenerateText({
+  // Consider renaming to createGenerateWithTools for clarity
+  const generateWithTools = createGenerateText({
     userEmail: emailAccount.email,
     label: "Find existing rules",
     modelOptions,
   });
 
+  logger.trace("Finding existing rules", {
+    promptRulesCount: promptRules.length,
+    databaseRulesCount: databaseRules.length,
+    userEmail: emailAccount.email,
+  });
+
-  const result = await generateText({
+  const result = await generateWithTools({
     ...modelOptions,
     system,
     prompt,
     tools: {
       findExistingRules,
     },
   });
📝 Committable suggestion


Suggested change

Before:

const generateText = createGenerateText({
  userEmail: emailAccount.email,
  label: "Find existing rules",
  modelOptions,
});

const result = await generateText({
  model: llmModel,
  ...modelOptions,
  system,
  prompt,
  providerOptions,
  tools: {
    findExistingRules,
  },
});

After:

// Consider renaming to createGenerateWithTools for clarity
const generateWithTools = createGenerateText({
  userEmail: emailAccount.email,
  label: "Find existing rules",
  modelOptions,
});

logger.trace("Finding existing rules", {
  promptRulesCount: promptRules.length,
  databaseRulesCount: databaseRules.length,
  userEmail: emailAccount.email,
});

const result = await generateWithTools({
  ...modelOptions,
  system,
  prompt,
  tools: {
    findExistingRules,
  },
});
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/find-existing-rules.ts around lines 56 to 69, the
function name createGenerateText is confusing when used with a tools pattern,
and the code lacks necessary logging. Rename createGenerateText to a more
descriptive name that reflects its use with tools, such as
createTextGeneratorWithTools, and add appropriate logging statements to capture
key events or errors during the generateText execution to improve observability.

Comment on lines +71 to +94
const parsedRules = result.toolCalls?.[0]?.input as z.infer<typeof schema>;

const existingRules = parsedRules.existingRules.map((rule) => {
  const promptRule = rule.promptNumber
    ? promptRules[rule.promptNumber - 1]
    : null;

  const toRemove = promptRule
    ? promptRulesToRemove.includes(promptRule)
    : null;

  const toEdit = promptRule
    ? promptRulesToEdit.find((r) => r.oldRule === promptRule)
    : null;

  return {
    rule: databaseRules.find((dbRule) => dbRule.id === rule.ruleId),
    promptNumber: rule.promptNumber,
    promptRule,
    toRemove: !!toRemove,
    toEdit: !!toEdit,
    updatedPromptRule: toEdit?.newRule,
  };
});

⚠️ Potential issue

Unsafe result parsing without validation

The code unsafely casts and accesses nested properties without proper validation, which could cause runtime errors.

-  const parsedRules = result.toolCalls?.[0]?.input as z.infer<typeof schema>;
+  const toolCallInput = result.toolCalls?.[0]?.input;
+  
+  if (!toolCallInput) {
+    logger.error("No tool call input in AI response");
+    return { editedRules: [], removedRules: [] };
+  }
+  
+  const parseResult = schema.safeParse(toolCallInput);
+  
+  if (!parseResult.success) {
+    logger.error("Failed to parse AI response", { error: parseResult.error });
+    return { editedRules: [], removedRules: [] };
+  }
+  
+  const parsedRules = parseResult.data;
 
   const existingRules = parsedRules.existingRules.map((rule) => {
🤖 Prompt for AI Agents
In apps/web/utils/ai/rule/find-existing-rules.ts around lines 71 to 94, the code
unsafely casts result.toolCalls?.[0]?.input to the schema type without
validating it, risking runtime errors. To fix this, validate
result.toolCalls?.[0]?.input against the schema using schema.safeParse or a
similar method before accessing its properties. Handle the case where validation
fails by returning an error or fallback value to ensure safe parsing.

@claude

claude bot commented Aug 7, 2025

Claude finished @elie222's task —— View job


I'll analyze this and get back to you.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
apps/web/utils/llms/index.ts (2)

78-81: Improve tool call logging to capture all tool calls.

The current implementation only logs the first tool call input, potentially missing important debugging information for functions with multiple tool calls.

-      if (args[0].tools) {
-        const toolCallInput = result.toolCalls?.[0]?.input;
-        logger.trace("Result", { result: toolCallInput });
-      }
+      if (args[0].tools && result.toolCalls?.length) {
+        const toolCallInputs = result.toolCalls.map(call => ({
+          name: call.toolName,
+          input: call.input
+        }));
+        logger.trace("Tool calls", { toolCalls: toolCallInputs });
+      }

226-229: Remove or implement the disabled chatCompletionTools functionality.

The commented-out chatCompletionTools function should either be removed if no longer needed or properly implemented with the new factory pattern.

Would you like me to help implement a createGenerateTextWithTools factory function using the new pattern, or should this be removed entirely?
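
If the factory route is taken, one possible shape is sketched below. All names and types here are assumptions for illustration (the repo's factory and the AI SDK result shape may differ); the point is that it logs every tool call rather than only `toolCalls[0]`, per the nitpick above.

```typescript
// Hypothetical createGenerateTextWithTools factory; not the repo's actual API.
type ToolCall = { toolName: string; input: unknown };
type GenerateArgs = { prompt: string; tools?: Record<string, unknown> };
type GenerateResult = { toolCalls?: ToolCall[] };

function createGenerateTextWithTools(options: {
  label: string;
  generate: (args: GenerateArgs) => Promise<GenerateResult>;
  trace: (message: string, meta: Record<string, unknown>) => void;
}) {
  return async (args: GenerateArgs): Promise<GenerateResult> => {
    const result = await options.generate(args);
    if (args.tools && result.toolCalls?.length) {
      // Capture every tool call for debugging, not just the first.
      options.trace(`${options.label}: tool calls`, {
        toolCalls: result.toolCalls.map((call) => ({
          name: call.toolName,
          input: call.input,
        })),
      });
    }
    return result;
  };
}
```

Callers would inject the underlying generate function and a scoped logger's trace method, keeping usage tracking in one place.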

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b4bb746 and 16c34e8.

📒 Files selected for processing (15)
  • .cursor/rules/llm.mdc (2 hunks)
  • apps/web/utils/ai/assistant/process-user-request.ts (3 hunks)
  • apps/web/utils/ai/categorize-sender/ai-categorize-senders.ts (2 hunks)
  • apps/web/utils/ai/categorize-sender/ai-categorize-single-sender.ts (2 hunks)
  • apps/web/utils/ai/clean/ai-clean-select-labels.ts (2 hunks)
  • apps/web/utils/ai/clean/ai-clean.ts (4 hunks)
  • apps/web/utils/ai/digest/summarize-email-for-digest.ts (3 hunks)
  • apps/web/utils/ai/example-matches/find-example-matches.ts (2 hunks)
  • apps/web/utils/ai/group/create-group.ts (3 hunks)
  • apps/web/utils/ai/knowledge/extract-from-email-history.ts (4 hunks)
  • apps/web/utils/ai/reply/draft-with-knowledge.ts (3 hunks)
  • apps/web/utils/ai/rule/generate-prompt-on-delete-rule.ts (3 hunks)
  • apps/web/utils/ai/rule/generate-prompt-on-update-rule.ts (3 hunks)
  • apps/web/utils/ai/rule/prompt-to-rules.ts (3 hunks)
  • apps/web/utils/llms/index.ts (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • apps/web/utils/ai/categorize-sender/ai-categorize-senders.ts
  • apps/web/utils/ai/reply/draft-with-knowledge.ts
  • apps/web/utils/ai/rule/generate-prompt-on-update-rule.ts
  • apps/web/utils/ai/clean/ai-clean.ts
  • apps/web/utils/ai/digest/summarize-email-for-digest.ts
  • apps/web/utils/ai/knowledge/extract-from-email-history.ts
  • apps/web/utils/ai/rule/generate-prompt-on-delete-rule.ts
  • apps/web/utils/ai/categorize-sender/ai-categorize-single-sender.ts
  • apps/web/utils/ai/clean/ai-clean-select-labels.ts
🧰 Additional context used
📓 Path-based instructions (12)
apps/web/**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use @/ for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
!{.cursor/rules/*.mdc}

📄 CodeRabbit Inference Engine (.cursor/rules/cursor-rules.mdc)

Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/form-handling.mdc)

**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/{ai,llms}/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*: LLM-related code must be organized in the directories: apps/web/utils/ai/, apps/web/utils/llms/, and apps/web/tests/ for LLM-specific tests.
Keep related AI functions in the same file or directory.

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/{ai,llms}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

apps/web/utils/{ai,llms}/**/*.ts: Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.
Keep system prompts and user prompts separate in LLM-related code.
System prompt should define the LLM's role and task specifications.
User prompt should contain the actual data and context.
Always define a Zod schema for response validation in LLM-related functions.
Make Zod schemas as specific as possible to guide the LLM output.
Use descriptive scoped loggers for each LLM feature.
Log inputs and outputs with appropriate log levels in LLM-related functions.
Include relevant context in log messages for LLM-related code.
Implement early returns for invalid inputs in LLM-related functions.
Use proper error types and logging in LLM-related code.
Implement fallbacks for AI failures in LLM-related functions.
Add retry logic for transient failures using withRetry in LLM-related code.
Use XML-like tags to structure data in LLM prompts.
Remove excessive whitespace and truncate long inputs in LLM prompts.
Format data consistently across similar LLM-related functions.
Use TypeScript types for all parameters and return values in LLM-related code.
Define clear interfaces for complex input/output structures in LLM-related code.
Extract common patterns into utility functions in LLM-related code.
Document complex AI logic with clear comments in LLM-related code.

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/logging.mdc)

**/*.{ts,tsx}: Use createScopedLogger for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using createScopedLogger
Only use .with() on a logger instance within a specific function, not for a global logger

Import Prisma in the project using import prisma from "@/utils/prisma";

**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either T[] or Array consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/**

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Create utility functions in utils/ folder for reusable logic

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
apps/web/utils/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

**/*.{js,jsx,ts,tsx}: Don't use elements in Next.js projects.
Don't use elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
!pages/_document.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)

!pages/_document.{js,jsx,ts,tsx}: Don't import next/document outside of pages/_document.jsx in Next.js projects.
Don't import next/document outside of pages/_document.jsx in Next.js projects.

Files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
.cursor/rules/*.mdc

📄 CodeRabbit Inference Engine (.cursor/rules/cursor-rules.mdc)

.cursor/rules/*.mdc: Always place rule files in PROJECT_ROOT/.cursor/rules/
Follow the naming convention: use kebab-case for filenames, always use .mdc extension, and make names descriptive of the rule's purpose
Cursor rules must have a frontmatter section with description, globs, and alwaysApply fields, followed by markdown content explaining the rule

Files:

  • .cursor/rules/llm.mdc
apps/web/utils/llms/index.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm.mdc)

The core LLM functionality must be implemented in utils/llms/index.ts.

Files:

  • apps/web/utils/llms/index.ts
🧠 Learnings (38)
📓 Common learnings
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Extract common patterns into utility functions in LLM-related code.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Make Zod schemas as specific as possible to guide the LLM output.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use proper error types and logging in LLM-related code.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Format data consistently across similar LLM-related functions.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Include relevant context in log messages for LLM-related code.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/* : Keep related AI functions in the same file or directory.
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : User prompt should contain the actual data and context.
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : keep system prompts and user prompts separate in llm-r...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Keep system prompts and user prompts separate in LLM-related code.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : user prompt should contain the actual data and context...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : User prompt should contain the actual data and context.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : system prompt should define the llm's role and task sp...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : System prompt should define the LLM's role and task specifications.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : follow the standard structure for llm-related function...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Follow the standard structure for LLM-related functions: use a scoped logger, define a Zod schema for output, validate inputs early, separate system and user prompts, log inputs and outputs, call chatCompletionObject with proper configuration, and return validated results.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : document complex ai logic with clear comments in llm-r...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Document complex AI logic with clear comments in LLM-related code.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : implement fallbacks for ai failures in llm-related fun...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement fallbacks for AI failures in LLM-related functions.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : remove excessive whitespace and truncate long inputs i...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Remove excessive whitespace and truncate long inputs in LLM prompts.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use xml-like tags to structure data in llm prompts....
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use XML-like tags to structure data in LLM prompts.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use typescript types for all parameters and return val...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use TypeScript types for all parameters and return values in LLM-related code.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : extract common patterns into utility functions in llm-...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Extract common patterns into utility functions in LLM-related code.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : make zod schemas as specific as possible to guide the ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Make Zod schemas as specific as possible to guide the LLM output.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : always define a zod schema for response validation in ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Always define a Zod schema for response validation in LLM-related functions.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/actions/**/*.ts : use zod schemas for validation on both client and server...
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-18T15:04:30.467Z
Learning: Applies to apps/web/utils/actions/**/*.ts : Use Zod schemas for validation on both client and server

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/actions/*.validation.ts : define zod schemas for validation in dedicated f...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/fullstack-workflow.mdc:0-0
Timestamp: 2025-07-18T15:05:16.146Z
Learning: Applies to apps/web/utils/actions/*.validation.ts : Define Zod schemas for validation in dedicated files and use them for both client and server validation.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : define clear interfaces for complex input/output struc...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Define clear interfaces for complex input/output structures in LLM-related code.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/actions/*.validation.ts : define input validation schemas using zod in the...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/server-actions.mdc:0-0
Timestamp: 2025-07-18T17:27:58.249Z
Learning: Applies to apps/web/utils/actions/*.validation.ts : Define input validation schemas using Zod in the corresponding `.validation.ts` file. These schemas are used by `next-safe-action` (`.schema()`) and can also be reused on the client for form validation.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use descriptive scoped loggers for each llm feature....
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use descriptive scoped loggers for each LLM feature.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : include relevant context in log messages for llm-relat...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Include relevant context in log messages for LLM-related code.

Applied to files:

  • apps/web/utils/ai/rule/prompt-to-rules.ts
  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : log inputs and outputs with appropriate log levels in ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Log inputs and outputs with appropriate log levels in LLM-related functions.

Applied to files:

  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : use proper error types and logging in llm-related code...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Use proper error types and logging in LLM-related code.

Applied to files:

  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : format data consistently across similar llm-related fu...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Format data consistently across similar LLM-related functions.

Applied to files:

  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : implement early returns for invalid inputs in llm-rela...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Implement early returns for invalid inputs in LLM-related functions.

Applied to files:

  • .cursor/rules/llm.mdc
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/__tests__/**/*.test.ts : test both ai and non-ai paths in llm-related tests...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-07-18T15:06:10.570Z
Learning: Applies to apps/web/__tests__/**/*.test.ts : Test both AI and non-AI paths in LLM-related tests

Applied to files:

  • apps/web/utils/ai/assistant/process-user-request.ts
  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/llms/model.ts : model definitions and configurations must be placed in uti...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/llms/model.ts : Model definitions and configurations must be placed in utils/llms/model.ts.

Applied to files:

  • apps/web/utils/ai/assistant/process-user-request.ts
📚 Learning: applies to apps/web/utils/llms/index.ts : the core llm functionality must be implemented in utils/ll...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/llms/index.ts : The core LLM functionality must be implemented in utils/llms/index.ts.

Applied to files:

  • apps/web/utils/ai/assistant/process-user-request.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/* : keep related ai functions in the same file or directory....
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/* : Keep related AI functions in the same file or directory.

Applied to files:

  • apps/web/utils/ai/group/create-group.ts
📚 Learning: the `utils` folder also contains core app logic such as next.js server actions and gmail api request...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-19T17:50:28.270Z
Learning: The `utils` folder also contains core app logic such as Next.js Server Actions and Gmail API requests.

Applied to files:

  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/ai/example-matches/find-example-matches.ts
📚 Learning: applies to apps/web/utils/gmail/**/*.ts : keep provider-specific implementation details isolated in ...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/gmail-api.mdc:0-0
Timestamp: 2025-07-18T15:05:34.899Z
Learning: Applies to apps/web/utils/gmail/**/*.ts : Keep provider-specific implementation details isolated in the appropriate utils subfolder (e.g., 'apps/web/utils/gmail/')

Applied to files:

  • apps/web/utils/ai/group/create-group.ts
📚 Learning: the clean onboarding page in apps/web/app/(app)/[emailaccountid]/clean/onboarding/page.tsx is intent...
Learnt from: elie222
PR: elie222/inbox-zero#537
File: apps/web/app/(app)/[emailAccountId]/clean/onboarding/page.tsx:30-34
Timestamp: 2025-07-08T13:14:07.449Z
Learning: The clean onboarding page in apps/web/app/(app)/[emailAccountId]/clean/onboarding/page.tsx is intentionally Gmail-specific and should show an error for non-Google email accounts rather than attempting to support multiple providers.

Applied to files:

  • apps/web/utils/ai/group/create-group.ts
  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/app/api/**/*.{ts,js} : request bodies in api routes should use zod schemas for v...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/security-audit.mdc:0-0
Timestamp: 2025-07-20T09:00:41.968Z
Learning: Applies to apps/web/app/api/**/*.{ts,js} : Request bodies in API routes should use Zod schemas for validation.

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: form handling using react hook form and zod...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/index.mdc:0-0
Timestamp: 2025-07-18T15:05:56.644Z
Learning: Form handling using React Hook Form and Zod

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: applies to **/api/**/route.ts : request bodies in api routes must be validated using zod schemas bef...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/security.mdc:0-0
Timestamp: 2025-07-18T17:27:46.389Z
Learning: Applies to **/api/**/route.ts : Request bodies in API routes MUST be validated using Zod schemas before use.

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: applies to **/*.ts : define validation schemas using zod...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/form-handling.mdc:0-0
Timestamp: 2025-07-18T15:04:57.115Z
Learning: Applies to **/*.ts : Define validation schemas using Zod

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/components/**/*.tsx : use react hook form with zod validation for form handling...
Learnt from: CR
PR: elie222/inbox-zero#0
File: apps/web/CLAUDE.md:0-0
Timestamp: 2025-07-18T15:04:30.467Z
Learning: Applies to apps/web/components/**/*.tsx : Use React Hook Form with Zod validation for form handling

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/components/**/*form.tsx : use react hook form with zod resolver for form handlin...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/fullstack-workflow.mdc:0-0
Timestamp: 2025-07-18T15:05:16.146Z
Learning: Applies to apps/web/components/**/*Form.tsx : Use React Hook Form with Zod resolver for form handling and validation.

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: run ai tests with: pnpm test-ai your-feature...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm-test.mdc:0-0
Timestamp: 2025-07-18T15:06:10.570Z
Learning: Run AI tests with: pnpm test-ai your-feature

Applied to files:

  • apps/web/utils/llms/index.ts
📚 Learning: applies to apps/web/utils/{ai,llms}/**/*.ts : add retry logic for transient failures using withretry...
Learnt from: CR
PR: elie222/inbox-zero#0
File: .cursor/rules/llm.mdc:0-0
Timestamp: 2025-07-18T15:06:40.272Z
Learning: Applies to apps/web/utils/{ai,llms}/**/*.ts : Add retry logic for transient failures using withRetry in LLM-related code.

Applied to files:

  • apps/web/utils/llms/index.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (17)
.cursor/rules/llm.mdc (3)

31-31: Documentation updated to reflect new AI generation pattern.

The removal of explicit logger and schema declarations in the example aligns with the broader refactor to use createGenerateText and createGenerateObject factories that encapsulate these concerns internally.


57-64: Inline schema definition follows new pattern.

The schema is now defined directly within the tool call, which aligns with the new function-calling style API and reduces module-level boilerplate.


69-69: Simplified return statement reflects improved API design.

The direct return of result.object indicates that the new AI generation functions provide cleaner result handling compared to the previous approach.

apps/web/utils/ai/rule/prompt-to-rules.ts (3)

3-3: Import changes align with new AI generation pattern.

The imports for createGenerateText and getModel correctly replace the previous direct AI completion imports, following the centralized utilities approach.

Also applies to: 12-12


60-66: Model configuration follows new standardized pattern.

The use of getModel and createGenerateText with user email, label, and model options centralizes AI configuration and removes explicit usage tracking, following the established pattern.


68-78: AI generation call updated while preserving functionality.

The new generateText call with spread model options maintains the same tool definitions and functionality while using the improved abstraction layer.
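The factory pattern reviewed here can be sketched as follows. The names `createGenerateText`, `label`, and `userEmail` follow the PR, but the bodies, types, and error format are illustrative assumptions, not the repo's implementation:

```typescript
type ModelOptions = { modelName: string };
type GenerateFn = (args: ModelOptions & { prompt: string }) => Promise<string>;

// The factory closes over label/userEmail so call sites no longer thread
// usage-tracking context through every generation call.
function createGenerateText({
  label,
  userEmail,
  generate,
}: {
  label: string;
  userEmail: string;
  generate: GenerateFn;
}) {
  return async (args: ModelOptions & { prompt: string }) => {
    try {
      return await generate(args);
    } catch (error) {
      // Centralized error handling with user context, per the PR's goals.
      throw new Error(
        `[${label}] generation failed for ${userEmail}: ${String(error)}`,
      );
    }
  };
}

// Call sites then spread model options into a plain generateText call:
const generateText = createGenerateText({
  label: "Prompt to rules",
  userEmail: "user@example.com",
  generate: async ({ modelName, prompt }) => `${modelName} -> ${prompt}`,
});

generateText({ modelName: "default-model", prompt: "archive newsletters" })
  .then((text) => console.log(text)); // "default-model -> archive newsletters"
```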

apps/web/utils/ai/example-matches/find-example-matches.ts (3)

1-1: Import changes support new generation pattern with stopping criteria.

The addition of stepCountIs, createGenerateText, and getModel imports correctly supports the new AI generation pattern with enhanced stopping control.

Also applies to: 4-4, 7-7


95-101: Model configuration follows established pattern.

The model options retrieval and createGenerateText setup correctly implements the standardized AI generation pattern with proper user context and labeling.


103-115: Enhanced stopping criteria improves AI generation control.

The use of stopWhen: stepCountIs(10) provides more explicit and declarative control over generation stopping compared to the previous maxSteps parameter.
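The declarative stop condition can be understood as a predicate over the steps taken so far. A minimal re-implementation of the `stepCountIs` shape (the real export comes from the `ai` SDK; this loop driver is a stand-in for illustration):

```typescript
// A stop condition is just a predicate over the loop state.
type StopCondition = (state: { steps: unknown[] }) => boolean;

const stepCountIs = (n: number): StopCondition =>
  ({ steps }) => steps.length >= n;

// The multi-step tool loop checks the condition before each new step:
function runLoop(stop: StopCondition, maxIterations = 100): number {
  const steps: unknown[] = [];
  while (!stop({ steps }) && steps.length < maxIterations) {
    steps.push({}); // stand-in for one generate/tool-call step
  }
  return steps.length;
}

console.log(runLoop(stepCountIs(10))); // 10
```

Expressing the limit as `stopWhen: stepCountIs(10)` rather than a bare `maxSteps` number makes the stopping rule composable with other conditions, which is the control improvement the comment highlights.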

apps/web/utils/ai/assistant/process-user-request.ts (3)

1-1: Import changes consistent with refactor pattern.

The imports for stepCountIs, createGenerateText, and getModel align with the standardized AI generation approach used across the codebase.

Also applies to: 4-4, 38-38


202-208: Model configuration setup follows standardized pattern.

The model options retrieval and createGenerateText configuration correctly implements the established pattern with appropriate user context and labeling.


210-634: AI generation call preserves complex tool functionality.

The new generateText call maintains all existing tool definitions while using the improved abstraction. The stopWhen: stepCountIs(5) provides appropriate generation control for this assistant context.

apps/web/utils/ai/group/create-group.ts (4)

1-1: Import changes align with established refactor pattern.

The imports for stepCountIs, createGenerateText, and getModel are consistent with the standardized AI generation approach used throughout the codebase.

Also applies to: 4-4, 9-9


92-104: AI generation call properly structured.

The generateText call with spread model options and stopWhen: stepCountIs(10) correctly implements the new pattern while preserving the tool functionality.


154-174: Verification function follows pattern with same model type issue.

The verifyGroupItems function correctly implements the new AI generation pattern, but has the same missing model type parameter in the getModel call as the main function.


84-84: No explicit modelType needed for getModel in create-group.ts

The getModel(emailAccount.user) calls at lines 84 and 154 intentionally rely on the default "default" modelType, matching the pattern used in other utilities (e.g. summarization, single-sender categorization). No change is required here—please disregard the previous flag.

Likely an incorrect or invalid review comment.
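The default-modelType behavior described in that resolution can be sketched as a defaulted parameter. The model tiers and lookup table below are illustrative assumptions, not the repo's actual configuration:

```typescript
type ModelType = "default" | "economy" | "chat";

// Hypothetical tier-to-model table.
const MODEL_TABLE: Record<ModelType, string> = {
  default: "primary-model",
  economy: "small-model",
  chat: "chat-model",
};

function getModel(
  user: { aiModel?: string },
  modelType: ModelType = "default",
): { modelName: string } {
  // In this sketch, a user's model override only applies to the default tier.
  if (modelType === "default" && user.aiModel) {
    return { modelName: user.aiModel };
  }
  return { modelName: MODEL_TABLE[modelType] };
}

console.log(getModel({}).modelName); // "primary-model"
```

Because the second argument defaults to `"default"`, `getModel(emailAccount.user)` with one argument is intentional rather than a missing parameter.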

apps/web/utils/llms/index.ts (1)

299-299: LGTM! Good addition of contextual logging.

The addition of error logging with user context improves debugging capabilities and follows the coding guidelines for including relevant context in log messages.

@claude
Copy link

claude bot commented Aug 7, 2025

Claude encountered an error —— View job


I'll analyze this and get back to you.


@elie222 elie222 changed the title Revert generate args to tool call Improved LLM calls Aug 7, 2025
@elie222 elie222 merged commit 0816a83 into main Aug 7, 2025
10 of 12 checks passed
@elie222 elie222 deleted the feat/ai-fixes branch August 7, 2025 09:19
@coderabbitai coderabbitai bot mentioned this pull request Aug 13, 2025
@coderabbitai coderabbitai bot mentioned this pull request Aug 24, 2025
@coderabbitai coderabbitai bot mentioned this pull request Sep 17, 2025
This was referenced Nov 8, 2025