
AI template responses #263

Merged
elie222 merged 19 commits into main from template on Nov 28, 2024

Conversation

elie222 (Owner) commented Nov 27, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced new test suites for validating AI-related functionalities, enhancing test coverage for action items and AI argument generation.
    • Enhanced the RuleForm component for better handling of AI-generated actions and improved user experience with updated layouts and error handling.
    • Improved the presentation of AI-generated content in the TestResultDisplay component with better styling and structure.
    • Added a utility function to check for variable patterns in text.
  • Bug Fixes

    • Refined error logging in various functions to improve clarity and focus on error messages.
  • Documentation

    • Added comments and documentation to clarify the purpose and usage of new functionalities related to AI-generated actions.
  • Refactor

    • Simplified the structure for accessing action properties, consolidating logic to improve maintainability and readability across several modules.
  • Chores

    • Removed unnecessary dependencies and improved logging practices throughout the codebase.

vercel bot commented Nov 27, 2024

The latest updates on your projects:

inbox-zero: ✅ Ready (preview updated Nov 28, 2024 3:19am UTC)

coderabbitai bot (Contributor) commented Nov 27, 2024

Walkthrough

This pull request introduces a comprehensive suite of changes across multiple files, primarily focusing on enhancing the testing, functionality, and structure of AI-related features within the application. Key updates include the addition of new test cases for AI argument generation and rule selection, modifications to the RuleForm and TestResultDisplay components for improved user experience, and significant restructuring of action properties in various modules. Additionally, logging and error handling have been refined, while the package.json file has been updated to remove unnecessary dependencies.

Changes

File Path Change Summary
apps/web/__tests__/ai-choose-args.test.ts Introduced a new test suite for getActionItemsWithAiArgs, validating behavior under various conditions. Added helper functions for creating mock objects.
apps/web/__tests__/ai-choose-rule.test.ts Modified test cases for chooseRule, updating properties in expected results and adding new properties for action items.
apps/web/app/(app)/automation/RuleForm.tsx Enhanced label creation logic and rendering for action fields, including new functions for variable checks and improved error handling.
apps/web/app/(app)/automation/TestRules.tsx Updated styling and structure for displaying AI-generated content, including improved layout and conditional rendering.
apps/web/app/(app)/automation/rule/[ruleId]/page.tsx Simplified action property assignment logic by removing conditional checks for AI detection.
apps/web/app/api/ai/cold-email/controller.ts Modified output formatting in aiIsColdEmail function for consistency in prompt structure.
apps/web/app/api/google/webhook/process-history.ts Refined error logging in processHistoryForUser to log structured error messages.
apps/web/package.json Removed dependency on @vercel/speed-insights.
apps/web/utils/actions/ai-rule.ts Enhanced logging functionality by replacing console.log with structured logging and refined error handling.
apps/web/utils/actions/rule.ts Simplified action property assignment in createRuleAction and updateRuleAction functions.
apps/web/utils/ai/choose-rule/ai-choose-args.test.ts Introduced unit tests for getParameterFieldsForAction and parseTemplate, validating schema creation and template parsing.
apps/web/utils/ai/choose-rule/ai-choose-args.ts Redefined ActionRequiringAi type, introduced extractActionsNeedingAiGeneration, and modified getActionItemsWithAiArgs for better structure.
apps/web/utils/ai/rule/create-rule-schema.ts Renamed static field to fields in createRuleSchema and removed aiPrompts object.
apps/web/utils/ai/rule/create-rule.ts Restructured aiCreateRule function to access action properties from a unified fields object.
apps/web/utils/ai/rule/prompt-to-rules.ts Updated aiPromptToRules function to access action properties from fields.
apps/web/utils/risk.test.ts Introduced unit tests for getActionRiskLevel and getRiskLevel, validating risk assessment logic.
apps/web/utils/risk.ts Introduced RISK_LEVELS constant, restructured getRiskLevel to utilize getActionRiskLevel, and added new helper functions for risk assessment.
apps/web/utils/template.ts Added hasVariables function to check for template variable patterns in strings.
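
For reference, a minimal sketch of what the new hasVariables helper amounts to, based on the {{...}} pattern used throughout this PR (the exact export shape in apps/web/utils/template.ts is an assumption):

// Returns true when the text contains a {{...}} template variable.
export const hasVariables = (text?: string | null): boolean =>
  /\{\{.*?\}\}/.test(text ?? "");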

Possibly related PRs

  • AI prompt for each generated argument #191: AI prompt for each generated argument - This PR relates to the main PR as it involves modifications to the RuleForm component, which is relevant to the testing of the getActionItemsWithAiArgs function.
  • Inbox Zero API #203: Inbox Zero API - This PR includes changes that may affect how actions are processed, which could relate to the functionality being tested in the main PR.
  • Add support for Anthropic #215: Add support for Anthropic - The changes in this PR may impact how AI-generated content is handled, which is relevant to the tests for getActionItemsWithAiArgs.
  • Add bulk actions #223: Add bulk actions - This PR introduces functionality that could interact with the actions being tested in the main PR.
  • Generate prompt for user based on their history/labels #232: Generate prompt for user based on their history/labels - This PR's focus on generating prompts is directly related to the functionality being tested in the main PR.
  • Generate groups with AI prompt #244: Generate groups with AI prompt - The changes in this PR may influence how AI prompts are generated and processed, which is relevant to the main PR's focus.
  • Onboarding #245: Onboarding - This PR may include changes that affect user interactions with AI features, which could relate to the main PR's testing of AI functionality.
  • Queue fixes #250: Queue fixes - The modifications in this PR could impact the processing of actions, which is relevant to the tests for getActionItemsWithAiArgs.

🐇 In the fields of code we play,
With AI prompts to guide our way.
Tests are written, logic clear,
In every function, we hold dear.
From rules to actions, all aligned,
A better app, for all mankind! 🐇✨

Warning

Rate limit exceeded

@elie222 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 21 minutes and 30 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between f138b0d and 23330aa.



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 13

🧹 Outside diff range and nitpick comments (36)
apps/web/utils/ai/rule/create-rule.ts (1)

37-42: Consider destructuring for improved readability.

The current implementation has some repetition that could be simplified.

Here's a more concise version (the destructuring statement must go above the object literal, and note it preserves null values where the original coerced them to undefined):

+      const { label, to, cc, bcc, subject, content } = action.fields ?? {};
       // ...then inside the object literal, use shorthand properties:
-      label: action.fields?.label ?? undefined,
-      to: action.fields?.to ?? undefined,
-      cc: action.fields?.cc ?? undefined,
-      bcc: action.fields?.bcc ?? undefined,
-      subject: action.fields?.subject ?? undefined,
-      content: action.fields?.content ?? undefined,
+      label,
+      to,
+      cc,
+      bcc,
+      subject,
+      content,
apps/web/app/(app)/automation/rule/[ruleId]/page.tsx (1)

Line range hint 44-54: Consider improving the new rule notification.

The UI shows a message when searchParams.new === "true" indicating AI-generated rules, but there's no validation of whether the rules are actually AI-generated.

Consider enhancing the notification to:

  1. Actually check if any of the rules contain AI-generated content
  2. Show which specific parts were AI-generated
+     // Compute before the return statement; a const declaration is not valid inside a JSX expression:
+     const hasAIContent = ruleWithActions.actions.some(
+       (action) => action.label.ai || action.subject.ai || action.content.ai,
+     );
-     {searchParams.new === "true" && (
+     {searchParams.new === "true" && hasAIContent && (
        <TopSection
          title="Here are your rule settings!"
          descriptionComponent={
            <p>
-             These rules were AI generated, feel free to adjust them to your
+             Some parts of these rules were AI generated, feel free to adjust them to your
              needs.
            </p>
          }
        />
      )}
apps/web/utils/ai/rule/prompt-to-rules.ts (1)

53-58: Consider enhancing type safety with explicit field typing.

While the current implementation is functional, we could improve type safety by explicitly defining the shape of fields.

Consider adding a type definition:

type ActionFields = {
  label?: string;
  to?: string;
  cc?: string;
  bcc?: string;
  subject?: string;
  content?: string;
};

// Then update the action type to include fields
type Action = {
  type: string;
  fields?: ActionFields;
};
apps/web/utils/ai/rule/create-rule-schema.ts (3)

Line range hint 47-81: Add validation for email fields and dynamic value syntax.

While the schema structure is well-organized, consider adding these essential validations:

  1. Email format validation for to, cc, and bcc fields
  2. Length constraints for subject and content
  3. Pattern validation for dynamic value syntax {{}}
 fields: z
   .object({
     label: z
       .string()
       .nullish()
       .transform((v) => v ?? null)
       .describe("The label to apply to the email"),
     to: z
       .string()
+      .email("Invalid email format")
       .nullish()
       .transform((v) => v ?? null)
       .describe("The to email address to send the email to"),
     cc: z
       .string()
+      .email("Invalid email format")
       .nullish()
       .transform((v) => v ?? null)
       .describe("The cc email address to send the email to"),
     bcc: z
       .string()
+      .email("Invalid email format")
       .nullish()
       .transform((v) => v ?? null)
       .describe("The bcc email address to send the email to"),
     subject: z
       .string()
+      .max(255, "Subject too long")
       .nullish()
       .transform((v) => v ?? null)
       .describe("The subject of the email"),
     content: z
       .string()
+      .regex(
+        /^(?:[^{]|\{(?!\{)|(?<!\})\}|\{\{.*?\}\})*$/,
+        "Invalid dynamic value syntax. Use {{}} for dynamic values"
+      )
       .nullish()
       .transform((v) => v ?? null)
       .describe("The content of the email"),
   })

82-82: Improve documentation readability for dynamic values.

The description is comprehensive but could be structured better. Consider breaking it into distinct sections:

  1. Syntax explanation
  2. Examples
  3. Best practices
-            "The fields to use for the action. Static text can be combined with dynamic values using double braces {{}}. For example: 'Hi {{sender's name}}' or 'Re: {{subject}}' or '{{when I'm available for a meeting}}'. Dynamic values will be replaced with actual email data when the rule is executed. Dynamic values are generated in real time by the AI. Only use dynamic values where absolutely necessary. Otherwise, use plain static text. A field can be also be fully static or fully dynamic.",
+            `The fields to use for the action.
+
+            Syntax:
+            - Use double braces {{}} to insert dynamic values
+            - Values inside {{}} are generated in real time by AI
+            - Fields can be fully static, fully dynamic, or mixed
+
+            Examples:
+            - Mixed: 'Hi {{sender's name}}'
+            - Mixed: 'Re: {{subject}}'
+            - Dynamic: '{{when I'm available for a meeting}}'
+
+            Best Practices:
+            - Use static text whenever possible
+            - Use dynamic values only when necessary
+            - Ensure dynamic values are contextually appropriate`,

Line range hint 47-82: Consider type-safe field validation based on ActionType.

The current schema allows all fields for any ActionType, but certain fields might only be relevant for specific action types. Consider implementing conditional field validation based on the action type.

Example approach:

const emailActionFields = z.object({
  to: z.string().email().nullish(),
  subject: z.string().nullish(),
  content: z.string().nullish(),
});

const labelActionFields = z.object({
  label: z.string().nullish(),
});

const actionSchema = z.discriminatedUnion("type", [
  z.object({
    type: z.literal(ActionType.EMAIL),
    fields: emailActionFields,
  }),
  z.object({
    type: z.literal(ActionType.LABEL),
    fields: labelActionFields,
  }),
]);
apps/web/__tests__/ai-choose-rule.test.ts (2)

99-109: Consider making the test more maintainable

While explicit property checking is good for catching unintended changes, the test might be too coupled to implementation details. Consider:

  1. Using a type-safe matcher or partial object matching
  2. Focusing on behavioral expectations rather than exact property structure

Example refactor:

-      {
-        bcc: null,
-        cc: null,
-        content: expect.any(String),
-        label: null,
-        subject: null,
-        to: null,
-        type: "REPLY",
-        labelPrompt: null,
-        subjectPrompt: null,
-        contentPrompt: null,
-        toPrompt: null,
-        ccPrompt: null,
-        bccPrompt: null,
-        createdAt: expect.any(Date),
-        updatedAt: expect.any(Date),
-        id: "id",
-        ruleId: "ruleId",
-      },
+      expect.objectContaining({
+        content: expect.any(String),
+        type: "REPLY",
+        id: "id",
+        ruleId: "ruleId"
+      })

Line range hint 1-158: Add test coverage for template syntax behavior

While the tests cover basic rule selection scenarios, there's no explicit test coverage for how the new template syntax ({{...}}) is processed. Consider adding test cases that verify:

  1. Template variable substitution
  2. Multiple template variables in one content string
  3. Error handling for invalid template syntax

Would you like me to help generate these additional test cases?
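
If useful, here is a rough sketch of what such cases could look like at the unit level, reusing the parseTemplate helper exercised in ai-choose-args.test.ts (the import path and the exact expected outputs are assumptions and would need adjusting to the real behaviour):

import { describe, expect, it } from "vitest";
import { parseTemplate } from "@/utils/ai/choose-rule/ai-choose-args";

describe("template syntax in action content", () => {
  it("extracts a single variable with its surrounding static text", () => {
    expect(parseTemplate("Hi {{write greeting}}!")).toEqual({
      aiPrompts: ["write greeting"],
      fixedParts: ["Hi ", "!"],
    });
  });

  it("extracts multiple variables from one content string", () => {
    const result = parseTemplate("Re: {{subject}} from {{sender's name}}");
    expect(result.aiPrompts).toEqual(["subject", "sender's name"]);
    expect(result.fixedParts).toHaveLength(3);
  });

  it("treats content without braces as fully static", () => {
    expect(parseTemplate("No variables here").aiPrompts).toHaveLength(0);
  });
});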

apps/web/utils/ai/choose-rule/ai-choose-args.test.ts (2)

10-90: Consider adding edge cases to strengthen test coverage.

While the current test cases cover the main scenarios well, consider adding tests for:

  1. Empty strings in template variables (e.g., {{}})
  2. Invalid template syntax (e.g., {{incomplete)
  3. Special characters in templates
  4. Schema constraint validation

Example test case:

it("handles edge cases and invalid syntax", () => {
  const action = {
    label: "{{}}",  // empty template
    subject: "{{incomplete",  // invalid syntax
    content: "{{special!@#$}}",  // special characters
    to: "",
    cc: "",
    bcc: "",
  };

  const result = getParameterFieldsForAction(action);
  
  // Add appropriate assertions
});

92-125: Enhance parseTemplate test coverage with edge cases and error handling.

The current tests are good but could be strengthened by adding:

  1. Error handling tests
  2. Edge cases:
    • Nested templates
    • Whitespace handling
    • Escaped templates
    • Maximum template length validation

Example additional test cases:

it("handles error cases", () => {
  const errorCases = [
    {
      template: "{{nested{{template}}}}",
      throws: "Nested templates are not allowed"
    },
    {
      template: "{{ whitespace }}",
      expected: {
        aiPrompts: ["whitespace"],
        fixedParts: ["", ""]
      }
    },
    {
      template: "\\{{escaped}}",
      expected: {
        aiPrompts: [],
        fixedParts: ["{{escaped}}"]
      }
    }
  ];

  errorCases.forEach(({ template, throws, expected }) => {
    if (throws) {
      expect(() => parseTemplate(template)).toThrow(throws);
    } else {
      expect(parseTemplate(template)).toEqual(expected);
    }
  });
});
apps/web/utils/risk.test.ts (3)

1-9: LGTM! Consider enhancing the run command comment.

The test setup and imports look good. The mock for server-only is appropriately minimal.

Consider enhancing the run command comment to include the full command with the test runner:

-// Run with:
-// pnpm test risk.test.ts
+// Run this test file:
+// pnpm test apps/web/utils/risk.test.ts
+// Run in watch mode:
+// pnpm test apps/web/utils/risk.test.ts --watch

11-115: LGTM! Consider adding edge cases.

The test suite for getActionRiskLevel is well-structured with comprehensive test cases for the main scenarios. The data-driven approach makes it easy to add more test cases.

Consider adding these test cases to improve coverage:

// Edge cases
{
  name: "handles undefined values in action fields",
  action: {
    subject: undefined,
    content: undefined,
    to: undefined,
    cc: undefined,
    bcc: undefined,
  },
  hasAutomation: false,
  ruleType: RuleType.AI,
  expectedLevel: "low",
  expectedMessageContains: "Low Risk",
},
{
  name: "handles null values in action fields",
  action: {
    subject: null,
    content: null,
    to: null,
    cc: null,
    bcc: null,
  },
  hasAutomation: false,
  ruleType: RuleType.AI,
  expectedLevel: "low",
  expectedMessageContains: "Low Risk",
},
// Invalid inputs
{
  name: "handles invalid rule type",
  action: {
    subject: "Static",
    content: "Static",
    to: "test@example.com",
    cc: "",
    bcc: "",
  },
  hasAutomation: false,
  ruleType: "INVALID_TYPE" as RuleType,
  expectedLevel: "low",
  expectedMessageContains: "Low Risk",
}

117-205: LGTM! Consider adding boundary test cases.

The test suite for getRiskLevel effectively tests the aggregation of risk levels across multiple actions.

Consider adding these test cases to improve coverage:

// Boundary cases
{
  name: "handles empty actions array",
  rule: {
    actions: [],
    automate: false,
    type: RuleType.AI,
  } as RulesResponse[number],
  expectedLevel: "low",
  expectedMessageContains: "Low Risk",
},
{
  name: "handles single action",
  rule: {
    actions: [{
      subject: "Static",
      content: "Static",
      to: "test@example.com",
      cc: "",
      bcc: "",
    }],
    automate: false,
    type: RuleType.AI,
  } as RulesResponse[number],
  expectedLevel: "low",
  expectedMessageContains: "Low Risk",
},
{
  name: "propagates very-high risk level",
  rule: {
    actions: [
      {
        subject: "{{dynamic}}",
        content: "{{dynamic}}",
        to: "{{dynamic}}",
        cc: "",
        bcc: "",
      },
      {
        subject: "Static",
        content: "Static",
        to: "test@example.com",
        cc: "",
        bcc: "",
      }
    ],
    automate: true,
    type: RuleType.AI,
  } as RulesResponse[number],
  expectedLevel: "very-high",
  expectedMessageContains: "Very High Risk",
}
apps/web/__tests__/ai-choose-args.test.ts (3)

70-70: Remove debug console statements.

These debug statements were likely used during development and should be removed before merging.

-    console.debug("Generated content:\n", result[0].content);
-    console.debug("Generated label:\n", result[0].label);
-    console.debug("Generated content:\n", result[1].content);

Also applies to: 141-142


65-69: Strengthen test assertions.

The current assertions could be more specific to ensure the exact behavior we expect. Consider adding:

  1. Type checking for the returned actions
  2. Validation of all action properties, not just content
  3. Specific content format validation

Example enhancement for the AI content test:

     expect(result).toHaveLength(1);
-    expect(result[0]).toMatchObject({
-      ...actions[0],
-      content: "The price of pears is: $1.99",
-    });
+    const generatedAction = result[0];
+    expect(generatedAction).toMatchObject({
+      ...actions[0],
+      type: ActionType.REPLY,
+      content: expect.stringMatching(/^The price of pears is: \$\d+\.\d{2}$/),
+    });
+    expect(typeof generatedAction.content).toBe('string');

Also applies to: 91-93


204-212: Make getUser helper function more flexible.

Consider making the getUser function accept parameters similar to other helper functions for better test flexibility.

-function getUser() {
+function getUser({
+  email = "user@test.com",
+  aiModel = null,
+  aiProvider = null,
+  aiApiKey = null,
+  about = null,
+}: Partial<{
+  email: string;
+  aiModel: string | null;
+  aiProvider: string | null;
+  aiApiKey: string | null;
+  about: string | null;
+}> = {}) {
   return {
-    aiModel: null,
-    aiProvider: null,
-    email: "user@test.com",
-    aiApiKey: null,
-    about: null,
+    aiModel,
+    aiProvider,
+    email,
+    aiApiKey,
+    about,
   };
 }
apps/web/utils/actions/rule.ts (3)

44-49: Consider adding value validation for action fields.

While the optional chaining handles undefined fields safely, consider adding validation for the actual values being assigned. This could prevent invalid or malformed data from being stored.

Consider adding validation like this:

-label: label?.value,
+label: validateActionField("label", label?.value),
-subject: subject?.value,
+subject: validateActionField("subject", subject?.value),
// ... similar for the other fields

// Add this helper (it returns the value so it can wrap the assignments above):
function validateActionField(field: string, value: string | undefined) {
  if (value && value.length > MAX_FIELD_LENGTH) {
    throw new Error(`${field} exceeds maximum length`);
  }
  // Add other validation rules as needed
  return value;
}

168-173: Add batching and error handling for bulk action creation.

The createMany operation will fail entirely if any single record is invalid. Additionally, there's no limit on the batch size which could impact performance with large sets of actions.

Consider these improvements:

-prisma.action.createMany({
-  data: actionsToCreate.map((a) => ({
-    ruleId: body.id,
-    type: a.type,
-    label: a.label?.value,
-    subject: a.subject?.value,
-    content: a.content?.value,
-    to: a.to?.value,
-    cc: a.cc?.value,
-    bcc: a.bcc?.value,
-  })),
-}),
+// Split into batches of 100 (chunk e.g. from lodash, or a small local helper)
+...chunk(actionsToCreate, 100).map(batch =>
+  prisma.action.createMany({
+    data: batch.map((a) => ({
+      ruleId: body.id,
+      type: a.type,
+      label: a.label?.value,
+      subject: a.subject?.value,
+      content: a.content?.value,
+      to: a.to?.value,
+      cc: a.cc?.value,
+      bcc: a.bcc?.value,
+    })),
+    skipDuplicates: true, // Optional: depends on your requirements
+  })
+),

Line range hint 1-1: Enhance error logging for better debugging.

The generic error messages like "Error creating rule" and "Error updating rule" don't provide enough context for debugging issues in production.

Consider adding detailed error logging:

+"use server";
+import { logger } from "@/utils/logger";  // Add appropriate logger
+
 // In catch blocks:
 } catch (error) {
+  logger.error("Failed to create rule", {
+    error,
+    userId: session.user.id,
+    ruleName: body.name,
+  });
   if (isDuplicateError(error, "name")) {
     return { error: "Rule name already exists" };
   }
   // ... other checks ...
   return { error: "Error creating rule." };
 }
apps/web/app/(app)/automation/TestRules.tsx (2)

230-250: Consider performance optimizations for action items rendering.

The action items rendering logic is well-structured, but could benefit from some performance optimizations:

Consider these improvements:

+ const ALLOWED_ACTION_KEYS = ["label", "subject", "content", "to", "cc", "bcc"] as const;
+ const capitalizeKey = memoize(capitalCase); // memoize e.g. from lodash

  const aiGeneratedContent = result.actionItems.map((action, i) => (
    <div key={i} className="rounded-md border border-gray-200 bg-gray-50 p-3">
      <div className="mb-2 text-xs font-semibold uppercase tracking-wide text-gray-900">
-       {capitalCase(action.type)}
+       {capitalizeKey(action.type)}
      </div>
      {Object.entries(action)
        .filter(
          ([key, value]) =>
            value &&
-           ["label", "subject", "content", "to", "cc", "bcc"].includes(key),
+           ALLOWED_ACTION_KEYS.includes(key as typeof ALLOWED_ACTION_KEYS[number]),
        )
        .map(([key, value]) => (
          <div key={key} className="flex text-sm text-gray-800">
            <span className="min-w-20 font-medium text-gray-600">
-             {capitalCase(key)}:
+             {capitalizeKey(key)}:
            </span>
            <span className="ml-2 flex-1">{value}</span>
          </div>
        ))}
    </div>
  ));

268-271: Consider extracting truncation logic and constant.

The truncation implementation could be improved for better maintainability and reusability:

Consider these improvements:

+ // Move to constants file or utils
+ const MAX_INSTRUCTION_LENGTH = 280;
+ 
+ // Move to text utils
+ const truncateText = (text: string, maxLength: number) => {
+   if (text.length <= maxLength) return text;
+   return `${text.substring(0, maxLength)}...`;
+ };

  <div className="border-t border-blue-100 pt-3 text-sm">
    <span className="font-medium">Rule Instructions: </span>
-   {result.rule.instructions.substring(0, MAX_LENGTH)}
-   {result.rule.instructions.length >= MAX_LENGTH && "..."}
+   {truncateText(result.rule.instructions, MAX_INSTRUCTION_LENGTH)}
  </div>
apps/web/app/api/google/webhook/process-history.ts (2)

207-211: Standardize error handling patterns across the file

The file has inconsistent error handling patterns across different functions. For example:

  • processHistoryForUser: Logs only error message
  • processHistory: Logs full error object
  • processHistoryItem: Special handling for 404 errors

Consider implementing a consistent error handling strategy across all functions.

Suggestions:

  1. Create a dedicated error handling utility that standardizes error logging
  2. Define specific error types for different scenarios (e.g., GmailApiError, ProcessingError)
  3. Implement consistent error context preservation

Example implementation:

type ErrorContext = {
  email?: string;
  messageId?: string;
  threadId?: string;
  decodedData?: any;
};

class ProcessingError extends Error {
  constructor(message: string, public context: ErrorContext) {
    super(message);
    this.name = 'ProcessingError';
  }
}

function handleProcessingError(error: unknown, context: ErrorContext) {
  const processedError = error instanceof Error ? error : new Error(String(error));
  
  logger.error('Processing error', {
    name: processedError.name,
    message: processedError.message,
    stack: processedError.stack,
    context,
  });
  
  captureException(processedError, { extra: context });
}

Also applies to: 332-339, 449-458


Line range hint 332-339: Consider performance optimizations for message processing

The current implementation processes messages sequentially, which might not be optimal for large history batches. Consider implementing batch processing and caching strategies.

Suggestions:

  1. Implement batch processing for messages:
async function processBatch(messages: typeof uniqueMessages, options: ProcessHistoryOptions) {
  const batchSize = 10;
  for (let i = 0; i < messages.length; i += batchSize) {
    const batch = messages.slice(i, i + batchSize);
    await Promise.all(batch.map(m => processHistoryItem(m, options)));
  }
}
  1. Add caching for Gmail API responses:
const messageCache = new Map<string, gmail_v1.Schema$Message>();
async function getCachedMessage(messageId: string, gmail: gmail_v1.Gmail) {
  if (messageCache.has(messageId)) return messageCache.get(messageId);
  const message = await getMessage(messageId, gmail, "full");
  messageCache.set(messageId, message);
  return message;
}
apps/web/app/(app)/automation/RuleForm.tsx (2)

303-304: Optimize performance by moving function declaration

The hasVariables function is recreated on every render since it's defined inside the map callback. Consider moving it outside the component for better performance.

+const hasVariables = (text: string) => /\{\{.*?\}\}/g.test(text);
+
 export function RuleForm({ rule }: { rule: CreateRuleBody & { id?: string } }) {
   // ... existing code ...
   
   return (
     // ... existing code ...
     {watch("actions")?.map((action, i) => {
-      const hasVariables = (text: string) => /\{\{.*?\}\}/g.test(text);
       return (
         // ... existing code ...

375-393: Extract duplicate placeholder text to constant

The placeholder text is duplicated between the textarea and input elements. Consider extracting it to a constant for better maintainability.

+const AI_PROMPT_PLACEHOLDER = "Add text or use {{AI prompts}}. e.g. Hi {{write greeting}}";
+
 export function RuleForm({ rule }: { rule: CreateRuleBody & { id?: string } }) {
   // ... existing code ...
   
   <textarea
     className="block w-full flex-1 whitespace-pre-wrap rounded-md border-gray-300 shadow-sm focus:border-black focus:ring-black sm:text-sm"
     rows={4}
-    placeholder="Add text or use {{AI prompts}}. e.g. Hi {{write greeting}}"
+    placeholder={AI_PROMPT_PLACEHOLDER}
     value={value || ""}
     {...register(`actions.${i}.${field.name}.value`)}
   />
   
   <input
     className="block w-full flex-1 rounded-md border-gray-300 shadow-sm focus:border-black focus:ring-black sm:text-sm"
     type="text"
-    placeholder="Add text or use {{AI prompts}}. e.g. Hi {{write greeting}}"
+    placeholder={AI_PROMPT_PLACEHOLDER}
     {...register(`actions.${i}.${field.name}.value`)}
   />
apps/web/utils/actions/ai-rule.ts (4)

87-87: Enhance log message with context.

The log message could be more informative for debugging purposes.

-      logger.info("Skipping. Rule already exists.");
+      logger.info(`Skipping execution. Rule already exists for messageId: ${email.messageId}, threadId: ${email.threadId}`);

Line range hint 524-525: Replace console.error with logger.error for consistency.

There's an inconsistency in error logging where some errors use console.error while others use the logger. This makes it harder to track and monitor errors consistently.

-      console.error("Input validation failed:", error.message);
+      logger.error("Input validation failed", { error: error.message });

-            console.error(
-              `Error updating group for rule. ${groupIdResult.error}`,
-            );
+            logger.error("Error updating group for rule", { error: groupIdResult.error });

-        console.error(`Error creating group for rule. ${groupIdResult.error}`);
+        logger.error("Error creating group for rule", { error: groupIdResult.error });

Also applies to: 656-658, 686-687


568-570: Enhance log messages with structured data.

Key log messages could benefit from structured logging to make them more queryable and analyzable.

-      logger.info(
-        `Diff results: Added rules: ${diff.addedRules.length}, Edited rules: ${diff.editedRules.length}, Removed rules: ${diff.removedRules.length}`,
-      );
+      logger.info("Rules diff results", {
+        addedRulesCount: diff.addedRules.length,
+        editedRulesCount: diff.editedRules.length,
+        removedRulesCount: diff.removedRules.length
+      });

-    logger.info(
-      `saveRulesPromptAction completed. Created rules: ${addedRules?.length || 0}, Edited rules: ${editRulesCount}, Removed rules: ${removeRulesCount}`,
-    );
+    logger.info("saveRulesPromptAction completed", {
+      createdRulesCount: addedRules?.length || 0,
+      editedRulesCount,
+      removedRulesCount
+    });

Also applies to: 699-702


Line range hint 1-785: Consider splitting the file into smaller, focused modules.

This file handles multiple responsibilities including rule creation, updating, prompt management, and execution. Consider splitting it into smaller modules:

  • ai-rule-creation.ts: Rule creation and updating logic
  • ai-rule-execution.ts: Rule execution logic
  • ai-prompt-management.ts: Prompt-related functionality
  • ai-rule-utils.ts: Utility functions like shouldAutomate

This would improve maintainability and make the codebase more modular.
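
A minimal sketch of how the split could be wired while keeping existing import paths working (only saveRulesPromptAction, shouldAutomate, testAiAction, and testAiCustomContentAction are named in this review; the remaining re-exports are placeholders):

// apps/web/utils/actions/ai-rule.ts becomes a thin barrel over the focused modules:
export { saveRulesPromptAction } from "./ai-prompt-management";
export { testAiAction, testAiCustomContentAction } from "./ai-rule-execution";
export { shouldAutomate } from "./ai-rule-utils";
// export { ...rule creation/updating actions... } from "./ai-rule-creation";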

apps/web/utils/risk.ts (6)

4-11: Consider using a TypeScript enum for RISK_LEVELS

Using a TypeScript enum for RISK_LEVELS can enhance type safety and code readability. Enums provide a clear and maintainable way to define a set of related constants.

Apply this diff to refactor using an enum:

-const RISK_LEVELS = {
-  VERY_HIGH: "very-high",
-  HIGH: "high",
-  MEDIUM: "medium",
-  LOW: "low",
-} as const;

-export type RiskLevel = (typeof RISK_LEVELS)[keyof typeof RISK_LEVELS];
+export enum RiskLevel {
+  VERY_HIGH = "very-high",
+  HIGH = "high",
+  MEDIUM = "medium",
+  LOW = "low",
+}

109-117: Simplify the compareRiskLevels function

The current implementation uses a mapping object to compare risk levels. This can be simplified by leveraging an array to determine the risk order, improving readability.

Apply this diff to simplify the function:

-function compareRiskLevels(a: RiskLevel, b: RiskLevel): RiskLevel {
-  const riskOrder: Record<RiskLevel, number> = {
-    [RISK_LEVELS.VERY_HIGH]: 4,
-    [RISK_LEVELS.HIGH]: 3,
-    [RISK_LEVELS.MEDIUM]: 2,
-    [RISK_LEVELS.LOW]: 1,
-  };
-  return riskOrder[a] >= riskOrder[b] ? a : b;
+function compareRiskLevels(a: RiskLevel, b: RiskLevel): RiskLevel {
+  const riskLevelsInOrder: RiskLevel[] = [
+    RiskLevel.LOW,
+    RiskLevel.MEDIUM,
+    RiskLevel.HIGH,
+    RiskLevel.VERY_HIGH,
+  ];
+  return riskLevelsInOrder.indexOf(a) >= riskLevelsInOrder.indexOf(b) ? a : b;
 }

125-140: Enhance readability of the getRiskLevel function

The use of reduce with complex logic may reduce readability. Separating concerns can make the function clearer.

Consider refactoring the function as follows:

 export function getRiskLevel(
   rule: Pick<RulesResponse[number], "actions" | "automate" | "type">,
 ): {
   level: RiskLevel;
   message: string;
 } {
-  // Get risk level for each action and return the highest risk
-  return rule.actions.reduce<{ level: RiskLevel; message: string }>(
-    (highestRisk, action) => {
-      const actionRisk = getActionRiskLevel(action, rule.automate, rule.type);
-      if (
-        compareRiskLevels(actionRisk.level, highestRisk.level) ===
-        actionRisk.level
-      ) {
-        return actionRisk;
-      }
-      return highestRisk;
-    },
-    {
-      level: RISK_LEVELS.LOW,
-      message: "Low Risk: All content and recipients are static.",
-    },
-  );
+  // Collect all risk levels from actions
+  const risks = rule.actions.map((action) =>
+    getActionRiskLevel(action, rule.automate, rule.type)
+  );
+
+  // Determine the highest risk level
+  return risks.reduce((highestRisk, currentRisk) =>
+    compareRiskLevels(currentRisk.level, highestRisk.level) === currentRisk.level
+      ? currentRisk
+      : highestRisk,
+    { level: RiskLevel.LOW, message: "Low Risk: All content and recipients are static." }
+  );
 }

58-59: Improve consistency in risk messages

Consider storing repeated risk messages in constants to maintain consistency and ease future updates.

Apply this diff to extract messages:

+const VERY_HIGH_RISK_MESSAGE =
+  "Very High Risk: The AI can generate any content and send it to any address. A malicious actor could trick the AI to send spam or other unwanted emails on your behalf.";
+const HIGH_RISK_MESSAGE =
+  "High Risk: The AI can generate any content and send it to any address. A malicious actor could trick the AI to send spam or other unwanted emails on your behalf.";

 if (hasFullyDynamicContent && hasFullyDynamicRecipient) {
   const level =
     ruleType === RuleType.AI ? RISK_LEVELS.VERY_HIGH : RISK_LEVELS.HIGH;
   return {
     level,
-    message: `${level === RISK_LEVELS.VERY_HIGH ? "Very High" : "High"} Risk: The AI can generate any content and send it to any address. A malicious actor could trick the AI to send spam or other unwanted emails on your behalf.`,
+    message:
+      level === RISK_LEVELS.VERY_HIGH
+        ? VERY_HIGH_RISK_MESSAGE
+        : HIGH_RISK_MESSAGE,
   };
 }

75-76: Spell out abbreviations in user-facing messages

For clarity in user-facing messages, avoid abbreviations like "AI" unless they are well-understood by the target audience.

Consider updating the message:

 message:
-  "High Risk: The AI can automatically generate and send any email content. A malicious actor could potentially trick the AI into generating unwanted or inappropriate content.",
+  "High Risk: Artificial Intelligence can automatically generate and send any email content. A malicious actor could potentially trick it into generating unwanted or inappropriate content.",

78-84: Ensure consistent risk assessment for partially dynamic fields

Partially dynamic content and recipients are currently assessed together. Consider separating these assessments for more granular risk evaluation.

 if (hasPartiallyDynamicContent || hasPartiallyDynamicRecipient) {
   return {
     level: RISK_LEVELS.MEDIUM,
     message:
-      "Medium Risk: The AI can generate content or recipients using templates. While more constrained than fully dynamic content, review the templates carefully.",
+      hasPartiallyDynamicContent && hasPartiallyDynamicRecipient
+        ? "Medium Risk: Templates are used for both content and recipients. Review carefully."
+        : hasPartiallyDynamicContent
+        ? "Medium Risk: Templates are used for content. Review the templates carefully."
+        : "Medium Risk: Templates are used for recipients. Review the templates carefully.",
   };
 }
apps/web/utils/ai/choose-rule/ai-choose-args.ts (1)

121-125: Remove redundant parameters length check in getArgsAiResponse

The parameters.length === 0 check is performed both in getActionItemsWithAiArgs and getArgsAiResponse. Since it's already checked before calling getArgsAiResponse, you can remove the redundant check inside getArgsAiResponse to simplify the code.

Apply this diff to remove the redundant check:

  logger.info(
    `Generating args for rule ${selectedRule.name} (${selectedRule.id})`,
  );

- // If no parameters, skip
- if (parameters.length === 0) {
-   logger.info(
-     `Skipping. No parameters for rule ${selectedRule.name} (${selectedRule.id})`,
-   );
-   return;
- }

  const system = getSystemPrompt({ user });
  const prompt = getPrompt({ email, selectedRule });
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 54f91ce and f138b0d.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (17)
  • apps/web/__tests__/ai-choose-args.test.ts (1 hunks)
  • apps/web/__tests__/ai-choose-rule.test.ts (4 hunks)
  • apps/web/app/(app)/automation/RuleForm.tsx (4 hunks)
  • apps/web/app/(app)/automation/TestRules.tsx (2 hunks)
  • apps/web/app/(app)/automation/rule/[ruleId]/page.tsx (1 hunks)
  • apps/web/app/api/ai/cold-email/controller.ts (1 hunks)
  • apps/web/app/api/google/webhook/process-history.ts (1 hunks)
  • apps/web/package.json (0 hunks)
  • apps/web/utils/actions/ai-rule.ts (12 hunks)
  • apps/web/utils/actions/rule.ts (3 hunks)
  • apps/web/utils/ai/choose-rule/ai-choose-args.test.ts (1 hunks)
  • apps/web/utils/ai/choose-rule/ai-choose-args.ts (4 hunks)
  • apps/web/utils/ai/rule/create-rule-schema.ts (2 hunks)
  • apps/web/utils/ai/rule/create-rule.ts (1 hunks)
  • apps/web/utils/ai/rule/prompt-to-rules.ts (1 hunks)
  • apps/web/utils/risk.test.ts (1 hunks)
  • apps/web/utils/risk.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • apps/web/package.json
✅ Files skipped from review due to trivial changes (1)
  • apps/web/app/api/ai/cold-email/controller.ts
🔇 Additional comments (14)
apps/web/utils/ai/rule/create-rule.ts (1)

37-42: LGTM! Property access restructuring looks good.

The consolidation of properties under the fields object with proper optional chaining and undefined fallbacks is a clean approach.

apps/web/app/(app)/automation/rule/[ruleId]/page.tsx (1)

33-33: Verify the AI detection regex pattern.

The current regex pattern /\{\{.*?\}\}/g might match more patterns than intended. It would match any content within double curly braces, which could lead to false positives.

Let's verify the usage of this pattern across the codebase:

✅ Verification successful

Based on the search results, I can see that the {{.*?}} pattern is consistently used throughout the codebase for AI-generated dynamic values, particularly in the automation rules feature. The pattern is well-documented and tested, as evidenced by:

  1. The schema description in apps/web/utils/ai/rule/create-rule-schema.ts explicitly states this is the intended format for dynamic values:
"The fields to use for the action. Static text can be combined with dynamic values using double braces {{}}. For example: 'Hi {{sender's name}}' or 'Re: {{subject}}' or '{{when I'm available for a meeting}}'. Dynamic values will be replaced with actual email data when the rule is executed."
  1. Multiple test files demonstrate the expected usage:
  • apps/web/utils/risk.test.ts tests risk assessment for dynamic content using this pattern
  • apps/web/utils/ai/choose-rule/ai-choose-args.test.ts contains various test cases validating this syntax
  1. The UI in apps/web/app/(app)/automation/RuleForm.tsx guides users to use this exact pattern:
placeholder="Add text or use {{AI prompts}}. e.g. Hi {{write greeting}}"

The regex pattern /\{\{.*?\}\}/g is correctly implemented for detecting AI-generated content.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for similar AI detection patterns to ensure consistency
rg -g '*.{ts,tsx}' '\{\{.*?\}\}' -A 2 -B 2

# Search for test files that might contain test cases for AI detection
fd -e test.ts -e test.tsx | xargs rg 'isAIGenerated|ai:'

Length of output: 30450

apps/web/utils/ai/rule/prompt-to-rules.ts (1)

53-58: LGTM! Verify the refactoring across the codebase.

The unified access pattern through action.fields with proper null handling looks good. This simplification improves code maintainability.

Let's verify this refactoring is consistent across the codebase:

✅ Verification successful

The refactoring to action.fields is consistent and complete

The verification results show that:

  • No occurrences of the old patterns (action.static or action.aiPrompts) were found in the codebase
  • The new action.fields pattern is consistently used in both relevant files:
    • apps/web/utils/ai/rule/prompt-to-rules.ts
    • apps/web/utils/ai/rule/create-rule.ts
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining uses of action.static or action.aiPrompts
# that might have been missed during refactoring

echo "Checking for old action.static pattern..."
rg "action\.static" -g "!*.{md,log}"

echo "Checking for old action.aiPrompts pattern..."
rg "action\.aiPrompts" -g "!*.{md,log}"

echo "Verifying new action.fields pattern usage..."
rg "action\.fields" -g "!*.{md,log}"

Length of output: 1450

apps/web/__tests__/ai-choose-rule.test.ts (2)

65-72: Verify the architectural change in action template handling

The switch from using contentPrompt: "Write a joke" to content: "{{Write a joke}}" suggests a significant change in how action templates are processed. This template-based approach with double curly braces might affect other parts of the system.

✅ Verification successful

Template-based action handling is a valid architectural change

The switch from using contentPrompt to template syntax ({{...}}) in the content field is part of a deliberate architectural change, as evidenced by:

  1. The implementation in apps/web/utils/ai/choose-rule/ai-choose-args.ts shows the new template-based system is properly implemented and tested
  2. The database schema (schema.prisma) and migration (20240624075134_argument_prompt) show the system supports both approaches, with *Prompt fields being optional
  3. Test files demonstrate consistent usage of the new template syntax across multiple test cases

This change appears to be a well-structured enhancement to the action template system, allowing for more flexible and intuitive template variables within the content itself.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other occurrences of template syntax and prompt handling
echo "Searching for template syntax usage..."
rg -l '\{\{.*?\}\}' --type ts --type tsx

echo "Searching for related prompt handling..."
rg 'contentPrompt|labelPrompt|subjectPrompt' --type ts --type tsx

Length of output: 344


Script:

#!/bin/bash
# Search for template syntax usage without specifying file types
echo "Searching for template syntax usage..."
rg -l '\{\{.*?\}\}'

echo -e "\nSearching for prompt handling..."
rg 'contentPrompt|labelPrompt|subjectPrompt'

echo -e "\nListing TypeScript files for reference..."
fd '.*\.ts$'

Length of output: 15375


Script:

#!/bin/bash
# Check the schema and implementation files for template handling
echo "Checking schema and implementation files..."
rg -A 5 -B 5 'content:.*\{\{.*?\}\}' apps/web/utils/ai/choose-rule/

echo -e "\nChecking database schema for action template fields..."
cat apps/web/prisma/schema.prisma | rg -A 10 -B 2 'contentPrompt'

echo -e "\nChecking recent migrations..."
cat apps/web/prisma/migrations/20240624075134_argument_prompt/migration.sql

Length of output: 4538


134-134: Verify the impact of AI provider and category filter changes

The changes to aiModel, aiProvider, and categoryFilterType fields suggest a modification in how rules are categorized and processed. Please ensure these changes align with the broader system architecture.

Also applies to: 152-153

✅ Verification successful

Let me gather more information about the usage of these fields in the actual implementation files, not just tests.


Let me fix the search patterns and file type issues:


Let me try one last search without specifying file types:


Based on the comprehensive search results, I can now provide a final response:

Setting these fields to null is safe and aligned with system design

The changes to categoryFilterType, aiModel, and aiProvider fields are part of a well-integrated system where:

  1. categoryFilterType is an optional field in the Rule schema that controls category-based filtering, with null being a valid state indicating no filtering
  2. aiModel and aiProvider are optional fields in the User schema with proper null handling throughout the codebase:
    • Default values are provided in getModel function
    • UI components handle null values gracefully
    • API endpoints properly handle null cases

The codebase has proper null-safety checks and fallback mechanisms in place, making these changes safe.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for usages of these fields in the codebase
echo "Checking categoryFilterType usage..."
ast-grep --pattern 'categoryFilterType'

echo "Checking AI provider and model usage..."
ast-grep --pattern $'const $_ = {
  $$$
  aiModel: $_,
  aiProvider: $_,
  $$$
}'

Length of output: 1096


Script:

#!/bin/bash
# Search for categoryFilterType in implementation files
echo "Checking categoryFilterType in implementation..."
rg "categoryFilterType" --type ts --type tsx -g '!**/*.test.{ts,tsx}'

# Search for aiModel and aiProvider in implementation files
echo -e "\nChecking aiModel and aiProvider in implementation..."
rg "aiModel|aiProvider" --type ts --type tsx -g '!**/*.test.{ts,tsx}'

# Look for rule type definitions
echo -e "\nChecking rule type definitions..."
rg "type.*Rule.*=|interface.*Rule.*{" --type ts --type tsx

Length of output: 697


Script:

#!/bin/bash
# Search for categoryFilterType in implementation files
echo "Checking categoryFilterType in implementation..."
rg "categoryFilterType" --type typescript --glob '!**/*.test.ts'

# Search for aiModel and aiProvider in implementation files
echo -e "\nChecking aiModel and aiProvider in implementation..."
rg "(aiModel|aiProvider)" --type typescript --glob '!**/*.test.ts'

# Look for rule type definitions
echo -e "\nChecking rule type definitions..."
rg "type.*Rule.*=|interface.*Rule" --type typescript

Length of output: 604


Script:

#!/bin/bash
# Search for all occurrences without file type restrictions
echo "Checking all usages..."
rg -g '!**/*.test.{ts,tsx}' -g '!**/node_modules/**' "categoryFilterType|aiModel|aiProvider" -A 2 -B 2

# Look for rule related type definitions
echo -e "\nChecking rule types..."
rg -g '!**/node_modules/**' "export (type|interface).*Rule" -A 3

Length of output: 26180

apps/web/utils/ai/choose-rule/ai-choose-args.test.ts (2)

1-8: LGTM! Well-structured test setup.

The imports and mocking are appropriate, and the run command comment is helpful for developers.


1-125: Overall, well-structured and comprehensive test file!

The test suites are well-organized, maintainable, and provide good coverage of the main functionality. The suggestions above for additional test cases would further strengthen the test coverage but aren't blocking issues.

apps/web/utils/risk.test.ts (1)

1-205: Verify test coverage and implementation.

Let's ensure the tests are properly integrated and the implementation matches the test cases.

✅ Verification successful

Let me gather more information about the implementation details of the risk assessment functions to ensure complete test coverage.


Based on the comprehensive information gathered, I can now provide the final response:

Tests are well-implemented and provide good coverage of risk assessment logic

The test suite thoroughly covers the risk assessment functionality with the following verified aspects:

  • The implementation in risk.ts matches perfectly with the test cases in risk.test.ts
  • All risk levels (very-high, high, medium, low) are tested with various combinations
  • Edge cases are covered, including:
    • Dynamic content with automation
    • Dynamic recipients with/without automation
    • Multiple actions with different risk levels
    • Static content and recipients
  • Tests are properly integrated into the project's test suite via Vitest configuration
  • The test file is the only risk-related test file, which is appropriate given the focused functionality
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify test coverage and implementation

# Check if tests are included in CI configuration
echo "Checking CI configuration for test execution:"
fd -t f "^(package\.json|\.github.*yml)$" | xargs rg -l "risk\.test\.ts"

# Check implementation of tested functions
echo "Checking implementation of risk assessment functions:"
rg -A 10 "^(export )?function (getActionRiskLevel|getRiskLevel)"

# Check test coverage configuration
echo "Checking test coverage configuration:"
fd -t f "^vitest\.config\.(ts|js)$" | xargs cat

Length of output: 2349


Script:

#!/bin/bash
# Check the complete implementation of risk.ts
echo "Checking complete risk.ts implementation:"
cat apps/web/utils/risk.ts

# Check if there are any other test files related to risk assessment
echo -e "\nChecking for other related test files:"
fd -t f "risk.*\.(test|spec)\.(ts|js|tsx|jsx)$"

# Check package.json for test configuration
echo -e "\nChecking package.json test configuration:"
fd "package.json" | xargs cat

Length of output: 14112

apps/web/app/(app)/automation/TestRules.tsx (3)

259-259: LGTM!

The spacing utilities are appropriately used for consistent layout.


262-264: LGTM!

The styling effectively highlights the AI reasoning section with appropriate visual hierarchy.


Line range hint 1-284: Verify AI testing functionality integration.

The UI changes look good, but let's verify the integration with the AI testing functionality:

✅ Verification successful

Let me analyze the test endpoints and error handling patterns more deeply by checking the actual implementation of the AI testing functionality.


Let me check the actual implementation of the AI rule testing functions to verify the integration.


Based on the implementation details revealed in the output, I can now provide a final response about the AI testing functionality integration.

AI testing integration is properly implemented and verified

The implementation shows a robust integration with proper error handling and complete test coverage:

  • The testAiAction and testAiCustomContentAction functions in ai-rule.ts are properly implemented with session validation, error handling, and user context retrieval
  • The core testing logic in run-rules.ts implements comprehensive rule testing, including:
    • Static rule testing
    • Group rule testing
    • AI rule testing
    • Proper handling of test results and action items
  • Error handling is consistently implemented using isActionError pattern
  • The UI components properly integrate with these testing functions and handle their responses appropriately
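For reference, a minimal sketch of the isActionError pattern mentioned in the list above, as it might look on the client side. Import paths and exact signatures are assumptions based on how these helpers are referenced in this PR, not code from the diff:

// Sketch only: paths and argument shapes are assumed.
import { testAiAction } from "@/utils/actions/ai-rule";
import { isActionError } from "@/utils/error"; // assumed location
import { toastError } from "@/components/Toast"; // assumed location

async function handleTestRule(messageId: string, threadId: string) {
  const result = await testAiAction({ messageId, threadId }); // args assumed
  if (isActionError(result)) {
    toastError({ description: result.error });
    return null;
  }
  // result holds the matched rule and resolved action items for display.
  return result;
}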
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the AI testing functionality integration

# Check for AI test-related API endpoints
echo "Checking for AI test endpoints..."
rg -l "testAi.*Action|/api/.*test" --type ts

# Check for error handling patterns
echo "\nChecking error handling patterns..."
rg "isActionError|toastError" --type ts

# Check for test coverage
echo "\nChecking test coverage..."
fd -e test.ts -e test.tsx

Length of output: 16866


Script:

#!/bin/bash
# Check the implementation of AI testing actions
echo "Checking AI testing action implementations..."
ast-grep --pattern 'export async function testAi$_Action($_) {
  $$$
}'

# Check for test rule implementations
echo "\nChecking test rule implementations..."
ast-grep --pattern 'function $_TestRules$_($_) {
  $$$
}'

# Check the API route implementations
echo "\nChecking API route implementations..."
rg -A 10 "export async function.*test.*" --type ts

Length of output: 1875


Script:

#!/bin/bash
# Check the implementation of testAiAction and testAiCustomContentAction
echo "Checking AI rule testing implementations..."
rg -A 20 "testAiAction|testAiCustomContentAction" apps/web/utils/actions/ai-rule.ts

# Check the test rules implementation in run-rules.ts
echo "\nChecking test rules core logic..."
cat apps/web/utils/ai/choose-rule/run-rules.ts

# Check for any related test files
echo "\nChecking related test files..."
rg -l "testAi.*Action" apps/web/__tests__/

Length of output: 7974

apps/web/app/(app)/automation/RuleForm.tsx (2)

93-93: LGTM: Improved label creation logic

The additional check !action.label?.ai prevents unnecessary creation of labels that are meant to be AI-generated, which is the correct behavior.
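As a rough illustration of the guard being described (the surrounding loop, the gmail client, and the getOrCreateLabel helper are assumptions for the sketch, not code from the diff):

for (const action of data.actions) {
  const label = action.label;
  // Create a Gmail label only for static values; AI templates such as
  // "{{pick a label}}" are resolved later at run time, so skip them here.
  if (action.type === ActionType.LABEL && label?.value && !label.ai) {
    await getOrCreateLabel({ gmail, name: label.value });
  }
}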


395-412: LGTM: Well-implemented AI prompt highlighting

The visual highlighting of AI variables is well implemented:

  • Uses semantic HTML with proper ARIA attributes
  • Provides clear visual feedback with appropriate styling
  • Correctly splits and maps the text parts
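A minimal sketch of the split-and-map approach described above (illustrative only; the exact markup and class names in RuleForm.tsx may differ):

function renderWithAiHighlights(text: string) {
  // The capture group keeps the "{{...}}" delimiters in the split output.
  return text.split(/(\{\{.*?\}\})/g).map((part, i) =>
    part.startsWith("{{") && part.endsWith("}}") ? (
      <span key={i} className="rounded bg-blue-50 font-semibold text-blue-700">
        {part}
      </span>
    ) : (
      <span key={i}>{part}</span>
    ),
  );
}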
apps/web/utils/actions/ai-rule.ts (1)

41-43: LGTM! Good logging setup.

The scoped logger initialization follows best practices by using a meaningful scope name that reflects the module's purpose.

Comment on lines 31 to 39
label: {
value: action.label,
ai: /\{\{.*?\}\}/g.test(action.label || ""),
},
subject: { value: action.subject },
content: { value: action.content },
to: { value: action.to },
cc: { value: action.cc },
bcc: { value: action.bcc },

🛠️ Refactor suggestion

Consider improving type safety and AI detection.

The current implementation has several areas that could be improved:

  1. The AI detection is only implemented for the label property, while other properties (subject, content, etc.) might also need AI detection.
  2. The structure lacks type safety which could lead to runtime errors.

Consider applying these improvements:

      ...action,
      label: {
        value: action.label,
-       ai: /\{\{.*?\}\}/g.test(action.label || ""),
+       ai: isAIGenerated(action.label),
      },
-     subject: { value: action.subject },
-     content: { value: action.content },
-     to: { value: action.to },
-     cc: { value: action.cc },
-     bcc: { value: action.bcc },
+     subject: { 
+       value: action.subject,
+       ai: isAIGenerated(action.subject),
+     },
+     content: { 
+       value: action.content,
+       ai: isAIGenerated(action.content),
+     },
+     to: { 
+       value: action.to,
+       ai: isAIGenerated(action.to),
+     },
+     cc: { 
+       value: action.cc,
+       ai: isAIGenerated(action.cc),
+     },
+     bcc: { 
+       value: action.bcc,
+       ai: isAIGenerated(action.bcc),
+     },

Add this utility function at the top of the file:

const isAIGenerated = (value: string | null | undefined): boolean => {
  return /\{\{.*?\}\}/g.test(value || "");
};

// Add types for better type safety
interface ActionProperty {
  value: string | null;
  ai: boolean;
}

interface ActionWithAI extends Omit<Action, 'label' | 'subject' | 'content' | 'to' | 'cc' | 'bcc'> {
  label: ActionProperty;
  subject: ActionProperty;
  content: ActionProperty;
  to: ActionProperty;
  cc: ActionProperty;
  bcc: ActionProperty;
}

Comment on lines 7 to 144
describe("getActionItemsWithAiArgs", { only: true }, () => {
test("should return actions unchanged when no AI args needed", async () => {
const actions = [getAction({})];
const rule = getRule("Test rule", actions);

const result = await getActionItemsWithAiArgs({
email: getEmail(),
user: getUser(),
selectedRule: rule,
});

expect(result).toEqual(actions);
});

test("should return actions unchanged when no variables to fill", async () => {
const actions = [
getAction({
type: ActionType.REPLY,
content: "You can set a meeting with me here: https://cal.com/alice",
}),
];
const rule = getRule("Choose this rule for meeting requests", actions);

const result = await getActionItemsWithAiArgs({
email: getEmail({
subject: "Quick question",
content: "When is the meeting tomorrow?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(1);
expect(result[0]).toMatchObject(actions[0]);
});

test("should generate AI content for actions that need it", async () => {
const actions = [
getAction({
type: ActionType.REPLY,
content:
"The price of pears is: {{the price with the dollar sign - pears are $1.99, apples are $2.99}}",
}),
];
const rule = getRule(
"Choose this when the price of an items is asked for",
actions,
);

const result = await getActionItemsWithAiArgs({
email: getEmail({
subject: "Quick question",
content: "How much are pears?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(1);
expect(result[0]).toMatchObject({
...actions[0],
content: "The price of pears is: $1.99",
});
console.debug("Generated content:\n", result[0].content);
});

test("should handle multiple actions with mixed AI needs", async () => {
const actions = [
getAction({
content: "Write a professional response",
}),
getAction({}),
];
const rule = getRule("Test rule", actions);

const result = await getActionItemsWithAiArgs({
email: getEmail({
subject: "Project status",
content: "Can you update me on the project status?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(2);
expect(result[0].content).toBeTruthy();
expect(result[1]).toEqual(actions[1]);
});

test("should handle multiple variables with specific formatting", async () => {
const actions = [
getAction({
type: ActionType.LABEL,
label: "{{fruit}}",
}),
getAction({
type: ActionType.REPLY,
content: `Hey {{name}},

{{$10 for apples, $20 for pears}}

Best,
Matt`,
}),
];
const rule = getRule(
"Use this when someone asks about the price of fruits",
actions,
);

const result = await getActionItemsWithAiArgs({
email: getEmail({
from: "jill@example.com",
subject: "fruits",
content: "how much do apples cost?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(2);

// Check label action
expect(result[0].label).toBeTruthy();
expect(result[0].label).not.toContain("{{");
expect(result[0].label).toMatch(/apple(s)?/i);

// Check reply action
expect(result[1].content).toMatch(/^Hey [Jj]ill,/); // Match "Hey Jill," or "Hey jill,"
expect(result[1].content).toContain("$10");
expect(result[1].content).toContain("Best,\nMatt");
expect(result[1].content).not.toContain("{{");
expect(result[1].content).not.toContain("}}");

console.debug("Generated label:\n", result[0].label);
console.debug("Generated content:\n", result[1].content);
});
});

🛠️ Refactor suggestion

Consider adding more comprehensive test cases.

The current test suite could benefit from additional test cases covering:

  1. Error handling scenarios (e.g., AI service failures)
  2. Edge cases with special characters in template variables
  3. Rate limiting or timeout scenarios
  4. Invalid template syntax
  5. Empty or whitespace-only variables

Example additional test case:

test("should handle AI service errors gracefully", async () => {
  const actions = [
    getAction({
      type: ActionType.REPLY,
      content: "{{ai generated response}}",
    }),
  ];
  const rule = getRule("Test error handling", actions);
  
  // Mock AI service to throw an error
  vi.spyOn(global, 'fetch').mockRejectedValueOnce(new Error('AI service error'));
  
  await expect(
    getActionItemsWithAiArgs({
      email: getEmail(),
      user: getUser(),
      selectedRule: rule,
    })
  ).rejects.toThrow('AI service error');
});

Comment on lines +152 to +157
label: a.label?.value,
subject: a.subject?.value,
content: a.content?.value,
to: a.to?.value,
cc: a.cc?.value,
bcc: a.bcc?.value,

🛠️ Refactor suggestion

Consider explicit handling of field clearing vs. undefined values.

The current implementation doesn't distinguish between fields that were intentionally cleared (set to empty) versus fields that weren't provided in the update. This could lead to unintended field clearing.

Consider this approach:

-label: a.label?.value,
+label: a.label === undefined ? undefined : a.label?.value ?? null,
-subject: a.subject?.value,
+subject: a.subject === undefined ? undefined : a.subject?.value ?? null,
// ... similar for other fields

This ensures that:

  • undefined means "don't update this field"
  • null or empty string means "clear this field"
  • A value means "update this field"

Committable suggestion skipped: line range outside the PR's diff.
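A hypothetical helper that makes the three cases explicit (sketch only, not part of the PR):

// undefined    -> field absent from the update payload: leave it untouched
// null / ""    -> field explicitly cleared
// other string -> new value
function normalizeField(
  field: { value?: string | null } | undefined,
): string | null | undefined {
  if (field === undefined) return undefined;
  return field.value ?? null;
}

// usage: label: normalizeField(a.label), subject: normalizeField(a.subject), ...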

Comment on lines 207 to 211
logger.error(
"Error processing webhook",
(error as any)?.message,
decodedData,
);

🛠️ Refactor suggestion

Consider preserving more error details for debugging

While logging only the error message helps reduce noise, it might make debugging more difficult in production. Consider these improvements:

  1. Preserve the error stack trace for debugging
  2. Use a type-safe approach for error handling

Apply this diff to improve error handling:

    logger.error(
      "Error processing webhook",
-     (error as any)?.message,
+     error instanceof Error ? {
+       message: error.message,
+       stack: error.stack,
+       name: error.name
+     } : String(error),
      decodedData,
    );

This approach:

  • Safely handles both Error objects and unknown error types
  • Preserves stack traces for better debugging
  • Maintains type safety without using any

⚠️ Potential issue

Review PII exposure in error logs

The error logging includes decodedData which contains user email addresses. Consider redacting or masking sensitive information in logs to prevent PII exposure.

Apply this diff to implement PII redaction:

+function redactPII(data: any) {
+  if (!data) return data;
+  const redacted = { ...data };
+  if (redacted.emailAddress) {
+    redacted.emailAddress = redacted.emailAddress.replace(
+      /(.{2}).*(@.*)/,
+      '$1***$2'
+    );
+  }
+  return redacted;
+}

    logger.error(
      "Error processing webhook",
      (error as any)?.message,
-     decodedData,
+     redactPII(decodedData),
    );

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +144 to +163
 function getFieldsDynamicStatus(
   action: Pick<
     RulesResponse[number]["actions"][number],
-    "toPrompt" | "ccPrompt" | "bccPrompt"
+    "subject" | "content" | "to" | "cc" | "bcc"
   >,
 ) {
-  return (
-    typeof action.toPrompt === "string" ||
-    typeof action.ccPrompt === "string" ||
-    typeof action.bccPrompt === "string"
-  );
+  const checkFieldStatus = (field: string | null) => {
+    if (!field) return null;
+    if (isFullyDynamicField(field)) return "fully-dynamic";
+    if (isPartiallyDynamicField(field)) return "partially-dynamic";
+    return "static";
+  };
+
+  return {
+    subject: checkFieldStatus(action.subject),
+    content: checkFieldStatus(action.content),
+    to: checkFieldStatus(action.to),
+    cc: checkFieldStatus(action.cc),
+    bcc: checkFieldStatus(action.bcc),
+  };

⚠️ Potential issue

Ensure getFieldsDynamicStatus handles null fields correctly

The function getFieldsDynamicStatus should consistently handle cases where action fields might be null or undefined to prevent runtime errors.

Consider adding explicit checks or default values to handle null or undefined fields safely.

 function getFieldsDynamicStatus(
   action: Pick<
     RulesResponse[number]["actions"][number],
     "subject" | "content" | "to" | "cc" | "bcc"
   >,
 ) {
   const checkFieldStatus = (field: string | null) => {
-    if (!field) return null;
+    if (!field || typeof field !== 'string') return "static";
     if (isFullyDynamicField(field)) return "fully-dynamic";
     if (isPartiallyDynamicField(field)) return "partially-dynamic";
     return "static";
   };

   return {
     subject: checkFieldStatus(action.subject),
     content: checkFieldStatus(action.content),
     to: checkFieldStatus(action.to),
     cc: checkFieldStatus(action.cc),
     bcc: checkFieldStatus(action.bcc),
   };
 }

Comment on lines +325 to +329
aiPrompts.forEach((prompt, index) => {
template = template.replace(
`{{${prompt}}}`,
`{{var${index + 1}: ${prompt}}}`,
);

⚠️ Potential issue

Ensure all occurrences of template variables are replaced

The current implementation of template.replace only replaces the first occurrence of a template variable. If a variable appears multiple times in the template, subsequent occurrences won't be replaced. Use a global regular expression to replace all occurrences.

Apply this diff to fix the issue:

    aiPrompts.forEach((prompt, index) => {
-     template = template.replace(
-       `{{${prompt}}}`,
+     const regex = new RegExp(`\\{\\{${prompt}\\}\\}`, 'g');
+     template = template.replace(
+       regex,
        `{{var${index + 1}: ${prompt}}}`,
      );
    });
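One further caveat, as an observation on the suggestion above rather than something from the original review: the prompt text can itself contain regex metacharacters (the tests in this PR use prompts like {{$10 for apples, $20 for pears}}), so the dynamically built pattern should be escaped, and a replacer function sidesteps "$" handling in the replacement string. A hedged sketch:

// Escape regex metacharacters so prompts like "$10 for apples" match literally.
const escapeRegExp = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

aiPrompts.forEach((prompt, index) => {
  const regex = new RegExp(`\\{\\{${escapeRegExp(prompt)}\\}\\}`, "g");
  // A replacer function keeps "$" in the prompt from being read as a
  // special replacement pattern.
  template = template.replace(regex, () => `{{var${index + 1}: ${prompt}}}`);
});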

Comment on lines +82 to +89
if (
field === "label" ||
field === "subject" ||
field === "content" ||
field === "to" ||
field === "cc" ||
field === "bcc"
) {

🛠️ Refactor suggestion

Reuse fieldNames array for field checking to enhance maintainability

Instead of manually listing out the field names, consider reusing the fieldNames array defined earlier. This adheres to the DRY (Don't Repeat Yourself) principle and improves code maintainability.

Apply this diff to implement the change:

+ // Ensure `fieldNames` is accessible here, possibly move it to a shared scope if necessary
  // Merge variables for each field that has AI-generated content
  for (const [field, vars] of Object.entries(aiAction)) {
    // Only process fields that we know can contain template strings
-   if (
-     field === "label" ||
-     field === "subject" ||
-     field === "content" ||
-     field === "to" ||
-     field === "cc" ||
-     field === "bcc"
-   ) {
+   if (fieldNames.includes(field as (typeof fieldNames)[number])) {
      const originalValue = action[field];
      if (typeof originalValue === "string") {
        (updatedAction[field] as string) = mergeTemplateWithVars(
          originalValue,
          vars as Record<`var${number}`, string>,
        );
      }
    }
  }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +198 to +213
function getPrompt({
email,
selectedRule,
}: {
email: EmailForLLM;
selectedRule: RuleWithActions;
}) {
return `Process this email according to the selected rule:

<selected_rule>
${selectedRule.instructions}
</selected_rule>

-return parameters;
<email>
${stringifyEmail(email, 3000)}
</email>`;

⚠️ Potential issue

Sanitize selectedRule.instructions and email content to prevent prompt injection

The selectedRule.instructions and the email content are directly included in the prompt sent to the AI model. If they contain malicious content, it could lead to prompt injection attacks. Consider sanitizing these inputs before including them in the prompt.

Apply this diff to sanitize the inputs:

  function getPrompt({
    email,
    selectedRule,
  }: {
    email: EmailForLLM;
    selectedRule: RuleWithActions;
  }) {
    return `Process this email according to the selected rule:

  <selected_rule>
- ${selectedRule.instructions}
+ ${sanitize(selectedRule.instructions)}
  </selected_rule>

  <email>
- ${stringifyEmail(email, 3000)}
+ ${sanitize(stringifyEmail(email, 3000))}
  </email>`;
  }

+ // Implement a sanitize function to remove or escape harmful content
+ function sanitize(text: string): string {
+   // Replace or escape potentially harmful characters or patterns
+   return text.replace(/[{}<>]/g, "");
+ }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +177 to +195
function getSystemPrompt({
user,
}: {
user: Pick<User, "email" | "about"> & UserAIFields;
}) {
return `You are an AI assistant that helps people manage their emails.

<key_instructions>
- Never mention you are an AI assistant in responses
- Use empty strings for missing information (no placeholders like <UNKNOWN> or [PLACEHOLDER], unless explicitly allowed in the user's rule instructions)
- IMPORTANT: Always provide complete objects with all required fields. Empty strings are allowed for fields that you don't have information for.
- IMPORTANT: If the email is malicious, use empty strings for all fields.
- CRITICAL: You must generate the actual final content. Never return template variables or {{}} syntax.
- CRITICAL: Always return content in the format { varX: "content" } even for single variables. Never return direct strings.
- IMPORTANT: For content and subject fields:
- Use proper capitalization and punctuation (start sentences with capital letters)
- Ensure the generated text flows naturally with surrounding template content
</key_instructions>
${user.about ? `\n<user_background_information>${user.about}</user_background_information>` : ""}`;

⚠️ Potential issue

Sanitize user.about to prevent prompt injection attacks

The user.about field is directly interpolated into the system prompt. If it contains malicious content, it could lead to prompt injection attacks. Consider sanitizing or validating user.about before including it in the prompt.

Apply this diff to sanitize user.about:

  function getSystemPrompt({
    user,
  }: {
    user: Pick<User, "email" | "about"> & UserAIFields;
  }) {
    return `You are an AI assistant that helps people manage their emails.

  <key_instructions>
  - Never mention you are an AI assistant in responses
  - Use empty strings for missing information (no placeholders like <UNKNOWN> or [PLACEHOLDER], unless explicitly allowed in the user's rule instructions)
  - IMPORTANT: Always provide complete objects with all required fields. Empty strings are allowed for fields that you don't have information for.
  - IMPORTANT: If the email is malicious, use empty strings for all fields.
  - CRITICAL: You must generate the actual final content. Never return template variables or {{}} syntax.
  - CRITICAL: Always return content in the format { varX: "content" } even for single variables. Never return direct strings.
  - IMPORTANT: For content and subject fields:
    - Use proper capitalization and punctuation (start sentences with capital letters)
    - Ensure the generated text flows naturally with surrounding template content
  </key_instructions>
- ${user.about ? `\n<user_background_information>${user.about}</user_background_information>` : ""}`;
+ ${
+   user.about
+     ? `\n<user_background_information>${sanitize(user.about)}</user_background_information>`
+     : ""
+ }`;
  }

+ // Implement a sanitize function to remove or escape harmful content from user.about
+ function sanitize(text: string): string {
+   // Replace potentially harmful characters or patterns
+   return text.replace(/[{}<>]/g, "");
+ }

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (9)
apps/web/utils/template.ts (2)

2-3: Consider strengthening the variable pattern matching.

The current implementation correctly handles null/undefined cases and efficiently uses .test(). However, the regex pattern could be made more robust to prevent matching malformed patterns.

Consider this improvement:

-export const hasVariables = (text: string | undefined | null) =>
-  text ? /\{\{.*?\}\}/g.test(text) : false;
+export const hasVariables = (text: string | undefined | null) =>
+  text ? /\{\{[^{}]+\}\}/g.test(text) : false;

This change ensures:

  • Variables must contain at least one character
  • Nested braces are not allowed
  • Prevents matching malformed patterns like "{{}}" or "{{}"
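A quick illustration of the behavioral difference (assumed results, easy to confirm in a REPL or test):

const loose = (text: string) => /\{\{.*?\}\}/g.test(text);
const strict = (text: string) => /\{\{[^{}]+\}\}/g.test(text);

loose("Hello {{}}");      // true: the empty placeholder still matches
strict("Hello {{}}");     // false: at least one non-brace character is required
strict("Hello {{name}}"); // true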

1-1: Enhance function documentation.

The current comment could be more descriptive about the function's purpose and behavior.

Consider this improvement:

-// Returns true if contains "{{" and "}}".
+/**
+ * Checks if the given text contains variable placeholders in the format "{{variable}}".
+ * @param text - The text to check for variables
+ * @returns true if the text contains properly formatted variables, false otherwise
+ * @example
+ * hasVariables("Hello {{name}}") // returns true
+ * hasVariables("Hello name") // returns false
+ * hasVariables(null) // returns false
+ */
apps/web/__tests__/ai-choose-args.test.ts (3)

70-70: Remove debug console statements.

Debug console statements should be removed before committing the code.

-    console.debug("Generated content:\n", result[0].content);
-    console.debug("Generated label:\n", result[0].label);
-    console.debug("Generated content:\n", result[1].content);

Also applies to: 141-142


91-93: Strengthen test assertions.

The current assertions could be more specific about the expected content structure.

     expect(result).toHaveLength(2);
-    expect(result[0].content).toBeTruthy();
+    expect(result[0].content).toMatch(/^[A-Z].*\./); // Expect sentence case and ending with period
     expect(result[1]).toEqual(actions[1]);

146-212: Enhance helper functions with types and documentation.

Consider adding:

  1. JSDoc comments explaining the purpose and parameters of each helper function
  2. Stricter type checking for the parameters

Example enhancement:

/**
 * Creates a test Action with default values
 * @param action - Partial Action object to override defaults
 * @returns Complete Action object for testing
 */
function getAction(action: Partial<Action> = {}): Action {
  const defaults: Action = {
    id: "a123",
    createdAt: new Date(),
    // ... rest of the defaults
  };
  
  return {
    ...defaults,
    ...action,
  };
}
apps/web/app/(app)/automation/RuleForm.tsx (1)

374-392: Add ARIA labels for better accessibility

While the input fields have good visual cues, they should include ARIA labels for screen readers.

Apply these changes to improve accessibility:

 <textarea
   className="block w-full flex-1 whitespace-pre-wrap rounded-md border-gray-300 shadow-sm focus:border-black focus:ring-black sm:text-sm"
   rows={4}
   placeholder="Add text or use {{AI prompts}}. e.g. Hi {{write greeting}}"
   value={value || ""}
+  aria-label={`${field.label} text input with AI prompt support`}
   {...register(`actions.${i}.${field.name}.value`)}
 />

 <input
   className="block w-full flex-1 rounded-md border-gray-300 shadow-sm focus:border-black focus:ring-black sm:text-sm"
   type="text"
   placeholder="Add text or use {{AI prompts}}. e.g. Hi {{write greeting}}"
+  aria-label={`${field.label} text input with AI prompt support`}
   {...register(`actions.${i}.${field.name}.value`)}
 />
apps/web/utils/actions/ai-rule.ts (3)

302-307: Enhance error logging and user feedback

While the structured error logging is good, consider these improvements:

  1. Include rule details in the log for better debugging context
  2. Provide more specific error messages to the user based on the error type
 logger.info("Error creating rule", {
   error:
     error instanceof Error
       ? { message: error.message, stack: error.stack, name: error.name }
       : error,
+  rule: { name: result.name, type: result.condition.type },
 });

-return { error: "Error creating rule." };
+return { error: `Failed to create rule: ${error instanceof Error ? error.message : 'Unknown error'}` };

Line range hint 524-711: Standardize logging approach

The function mixes console.error and logger.info. Consider these improvements:

  1. Replace console.error with logger.error
  2. Use structured logging consistently
  3. Add appropriate log levels (info/warn/error)

Here's a sample of the changes needed:

-console.error("Input validation failed:", error.message);
+logger.error("Input validation failed", { error: error.message });

-console.error("User not found");
+logger.error("User not found");

-console.error("User email not found");
+logger.error("User email not found");

-console.error(`Rule ID not found for rule. Prompt: ${rule.name}`);
+logger.error("Rule ID not found", { ruleName: rule.name });

-console.error(`Error updating group for rule. ${groupIdResult.error}`);
+logger.error("Error updating group for rule", { error: groupIdResult.error });

-console.error(`Error creating group for rule. ${groupIdResult.error}`);
+logger.error("Error creating group for rule", { error: groupIdResult.error });

Line range hint 4-4: Add tests for better code coverage

The TODO comment indicates missing tests. Consider adding unit tests to ensure the reliability of the code.

Would you like me to help create unit tests for these functions or create a GitHub issue to track this task?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between f138b0d and 23330aa.

📒 Files selected for processing (6)
  • apps/web/__tests__/ai-choose-args.test.ts (1 hunks)
  • apps/web/app/(app)/automation/RuleForm.tsx (4 hunks)
  • apps/web/app/(app)/automation/rule/[ruleId]/page.tsx (2 hunks)
  • apps/web/app/api/google/webhook/process-history.ts (1 hunks)
  • apps/web/utils/actions/ai-rule.ts (12 hunks)
  • apps/web/utils/template.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/web/app/(app)/automation/rule/[ruleId]/page.tsx
  • apps/web/app/api/google/webhook/process-history.ts
🔇 Additional comments (6)
apps/web/__tests__/ai-choose-args.test.ts (2)

7-143: Add test cases for error scenarios and edge cases.

While the current test suite provides good coverage of happy paths, it would benefit from additional test cases covering:

  1. Error handling scenarios (AI service failures)
  2. Rate limiting or timeout scenarios
  3. Invalid template syntax
  4. Empty or whitespace-only variables

Example test case:

test("should handle invalid template syntax", async () => {
  const actions = [
    getAction({
      type: ActionType.REPLY,
      content: "Hello {{unclosed template",
    }),
  ];
  const rule = getRule("Test invalid syntax", actions);
  
  await expect(
    getActionItemsWithAiArgs({
      email: getEmail(),
      user: getUser(),
      selectedRule: rule,
    })
  ).rejects.toThrow();
});

5-5: Verify the mock implementation of server-only module.

The current empty mock might not be sufficient if the actual module has exports that are used elsewhere in the code.

✅ Verification successful

Empty mock implementation of server-only is appropriate

The empty mock implementation is correct because:

  1. server-only is a special package that enforces server-side only imports and has no actual exports
  2. All actual imports in the codebase are using it as import "server-only" without any named imports
  3. The package is consistently mocked with empty object {} across all test files in the codebase
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for usage of server-only imports
rg -l 'from ["'']server-only' 

Length of output: 29


Script:

#!/bin/bash
# Search for any imports or requires of server-only
rg -l "require.*['\"]server-only['\"]|from ['\"]server-only['\"]"

# Also check for direct usage of server-only
rg "server-only" -C 2

Length of output: 5714

apps/web/app/(app)/automation/RuleForm.tsx (2)

94-96: LGTM: Improved label creation logic

The condition correctly prevents the creation of AI-generated labels while maintaining the creation of static labels, which is the expected behavior.


394-411: LGTM: Well-implemented template variable visualization

The implementation provides clear visual feedback for AI template variables with:

  • Clean regex-based parsing
  • Accessible color contrast
  • Clear visual distinction between AI and static content
apps/web/utils/actions/ai-rule.ts (2)

41-43: LGTM: Good logging setup!

The scoped logger implementation follows best practices by providing a dedicated scope for AI rule-related logs.


333-338: Similar improvements needed for update error handling

The same enhancements suggested for the create error handling apply here.

 logger.info("Error updating rule", {
   error:
     error instanceof Error
       ? { message: error.message, stack: error.stack, name: error.name }
       : error,
+  rule: { id: ruleId, name: result.name, type: result.condition.type },
 });

-return { error: "Error updating rule." };
+return { error: `Failed to update rule: ${error instanceof Error ? error.message : 'Unknown error'}` };

Comment on lines +1 to +212
import { describe, expect, test, vi } from "vitest";
import { getActionItemsWithAiArgs } from "@/utils/ai/choose-rule/ai-choose-args";
import { type Action, ActionType, RuleType } from "@prisma/client";

vi.mock("server-only", () => ({}));

describe("getActionItemsWithAiArgs", () => {
test("should return actions unchanged when no AI args needed", async () => {
const actions = [getAction({})];
const rule = getRule("Test rule", actions);

const result = await getActionItemsWithAiArgs({
email: getEmail(),
user: getUser(),
selectedRule: rule,
});

expect(result).toEqual(actions);
});

test("should return actions unchanged when no variables to fill", async () => {
const actions = [
getAction({
type: ActionType.REPLY,
content: "You can set a meeting with me here: https://cal.com/alice",
}),
];
const rule = getRule("Choose this rule for meeting requests", actions);

const result = await getActionItemsWithAiArgs({
email: getEmail({
subject: "Quick question",
content: "When is the meeting tomorrow?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(1);
expect(result[0]).toMatchObject(actions[0]);
});

test("should generate AI content for actions that need it", async () => {
const actions = [
getAction({
type: ActionType.REPLY,
content:
"The price of pears is: {{the price with the dollar sign - pears are $1.99, apples are $2.99}}",
}),
];
const rule = getRule(
"Choose this when the price of an items is asked for",
actions,
);

const result = await getActionItemsWithAiArgs({
email: getEmail({
subject: "Quick question",
content: "How much are pears?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(1);
expect(result[0]).toMatchObject({
...actions[0],
content: "The price of pears is: $1.99",
});
console.debug("Generated content:\n", result[0].content);
});

test("should handle multiple actions with mixed AI needs", async () => {
const actions = [
getAction({
content: "Write a professional response",
}),
getAction({}),
];
const rule = getRule("Test rule", actions);

const result = await getActionItemsWithAiArgs({
email: getEmail({
subject: "Project status",
content: "Can you update me on the project status?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(2);
expect(result[0].content).toBeTruthy();
expect(result[1]).toEqual(actions[1]);
});

test("should handle multiple variables with specific formatting", async () => {
const actions = [
getAction({
type: ActionType.LABEL,
label: "{{fruit}}",
}),
getAction({
type: ActionType.REPLY,
content: `Hey {{name}},

{{$10 for apples, $20 for pears}}

Best,
Matt`,
}),
];
const rule = getRule(
"Use this when someone asks about the price of fruits",
actions,
);

const result = await getActionItemsWithAiArgs({
email: getEmail({
from: "jill@example.com",
subject: "fruits",
content: "how much do apples cost?",
}),
user: getUser(),
selectedRule: rule,
});

expect(result).toHaveLength(2);

// Check label action
expect(result[0].label).toBeTruthy();
expect(result[0].label).not.toContain("{{");
expect(result[0].label).toMatch(/apple(s)?/i);

// Check reply action
expect(result[1].content).toMatch(/^Hey [Jj]ill,/); // Match "Hey Jill," or "Hey jill,"
expect(result[1].content).toContain("$10");
expect(result[1].content).toContain("Best,\nMatt");
expect(result[1].content).not.toContain("{{");
expect(result[1].content).not.toContain("}}");

console.debug("Generated label:\n", result[0].label);
console.debug("Generated content:\n", result[1].content);
});
});

// helpers
function getAction(action: Partial<Action> = {}): Action {
return {
id: "a123",
createdAt: new Date(),
updatedAt: new Date(),
type: ActionType.REPLY,
ruleId: "ruleId",
label: null,
subject: null,
content: null,
to: null,
cc: null,
bcc: null,
labelPrompt: null,
subjectPrompt: null,
contentPrompt: null,
toPrompt: null,
ccPrompt: null,
bccPrompt: null,
...action,
};
}

function getRule(instructions: string, actions: Action[] = []) {
return {
instructions,
name: "Test Rule",
actions,
id: "r123",
userId: "userId",
createdAt: new Date(),
updatedAt: new Date(),
automate: false,
runOnThreads: false,
groupId: null,
from: null,
subject: null,
body: null,
to: null,
type: RuleType.AI,
enabled: true,
categoryFilterType: null,
};
}

function getEmail({
from = "from@test.com",
subject = "subject",
content = "content",
}: { from?: string; subject?: string; content?: string } = {}) {
return {
from,
subject,
content,
};
}

function getUser() {
return {
aiModel: null,
aiProvider: null,
email: "user@test.com",
aiApiKey: null,
about: null,
};
}

💡 Codebase verification

Add test coverage for error handling scenarios

The implementation includes error handling for InvalidToolArgumentsError in the withRetry function, but there are no corresponding test cases. Consider adding tests for:

  • Error handling when AI tool calls fail
  • Retry behavior with InvalidToolArgumentsError
  • Edge cases with malformed template variables
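A rough vitest sketch of one such case, reusing the helpers from this test file and the fetch-mocking approach suggested earlier in this review. Whether the underlying LLM call is actually reachable via global fetch is an assumption, so the mock target may need to be adjusted:

test("should surface AI failures instead of returning raw templates", async () => {
  const actions = [
    getAction({ type: ActionType.REPLY, content: "Hello {{write greeting}}" }),
  ];
  const rule = getRule("Test error handling", actions);

  // Assumption: the model call goes through fetch; swap in the real client mock if not.
  vi.spyOn(global, "fetch").mockRejectedValueOnce(new Error("AI service error"));

  await expect(
    getActionItemsWithAiArgs({
      email: getEmail(),
      user: getUser(),
      selectedRule: rule,
    }),
  ).rejects.toThrow();
});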
🔗 Analysis chain

Verify test coverage of the implementation.

Let's ensure all code paths in the implementation are covered by these tests.


Let's analyze the implementation file we found. I notice several code paths that need verification in the test coverage:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for the implementation file
fd -e ts "ai-choose-args.ts" -x cat {} \;

# Look for conditional statements and error handling that might need testing
rg -n "if|throw|catch" "ai-choose-args.ts"

Length of output: 12003


Script:

# Let's check if all the exported functions are tested
ast-grep --pattern 'export function $NAME' apps/web/utils/ai/choose-rule/ai-choose-args.ts

# Let's also check if error cases from withRetry are tested
rg -A 5 "withRetry" apps/web/utils/ai/choose-rule/ai-choose-args.ts

Length of output: 517


Script:

# Let's check all exported functions and their test coverage
ast-grep --pattern 'export function $NAME' utils/ai/choose-rule/ai-choose-args.ts

# Let's also check the test file for error cases
rg -n "test\(.*error|test\(.*fail|test\(.*invalid" apps/web/__tests__/ai-choose-args.test.ts

# Let's check for error handling in the implementation
rg -n "throw|catch|Error" utils/ai/choose-rule/ai-choose-args.ts

Length of output: 410


Script:

# Let's try with correct paths based on the earlier output
# Check exported functions
ast-grep --pattern 'export function $NAME' apps/web/utils/ai/choose-rule/ai-choose-args.ts

# Check error test cases
rg -n "test\(.*error|test\(.*fail|test\(.*invalid" apps/web/__tests__/ai-choose-args.test.ts

# Check error handling
rg -n "throw|catch|Error" apps/web/utils/ai/choose-rule/ai-choose-args.ts

Length of output: 398

Comment on lines +592 to +598
logger.info("Processing added rules");
addedRules = await aiPromptToRules({
user: { ...user, email: user.email },
promptFile: diff.addedRules.join("\n\n"),
isEditing: false,
});
-console.log(`${addedRules?.length || 0} rules to be added`);
+logger.info(`${addedRules?.length || 0} rules to be added`);

⚠️ Potential issue

Add error handling for AI operations

The AI operations (aiPromptToRules) lack proper error handling. Consider wrapping these calls in try-catch blocks and handling potential AI-related errors gracefully.

+try {
   logger.info("Processing added rules");
   addedRules = await aiPromptToRules({
     user: { ...user, email: user.email },
     promptFile: diff.addedRules.join("\n\n"),
     isEditing: false,
   });
   logger.info(`${addedRules?.length || 0} rules to be added`);
+} catch (error) {
+  logger.error("Error processing rules with AI", {
+    error: error instanceof Error ? error.message : error,
+  });
+  return { error: "Failed to process rules with AI" };
+}
