
Add Google Gemini Support#297

Merged
elie222 merged 1 commit into main from google-llm-support
Jan 9, 2025

Conversation

elie222 (Owner) commented Jan 9, 2025

Closes: #82

Summary by CodeRabbit

  • New Features

    • Added support for Google AI and Ollama providers
    • Introduced two new Google AI models: Gemini 1.5 Pro and Gemini 1.5 Flash
    • Expanded AI settings validation to include new providers
  • Dependencies

    • Added @ai-sdk/google package
  • Improvements

    • Enhanced logging for AI rule generation
    • Updated cost calculations for new AI models


coderabbitai bot (Contributor) commented Jan 9, 2025

Walkthrough

This pull request introduces support for Google's AI models by expanding the existing AI provider configuration. The changes involve updating configuration files, validation schemas, and utility functions to include Google's Gemini models. The modifications allow users to select Google as an AI provider, with support for two models: Gemini 1.5 Pro and Gemini 1.5 Flash. The implementation maintains the existing error handling and validation logic while providing more flexibility in AI model selection.

Changes

File — Change Summary
  • apps/web/utils/llms/config.ts — Added the GOOGLE provider and the GEMINI_1_5_PRO and GEMINI_1_5_FLASH models
  • apps/web/app/api/user/settings/validation.ts — Expanded the aiProvider enum to include GOOGLE and, conditionally, OLLAMA
  • apps/web/app/api/user/settings/route.ts — Updated the getModel function to handle Google provider model selection
  • apps/web/utils/llms/index.ts — Added the import and selection logic for the Google Generative AI provider
  • apps/web/utils/usage.ts — Added cost calculations for the Gemini 1.5 Pro and Flash models
  • apps/web/package.json — Added the @ai-sdk/google dependency
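The config.ts additions in the table above can be sketched as follows. The constant names come from the change summary; the string values for the model identifiers are assumptions (the review below notes they use a "latest" suffix), not verified against the diff.

```typescript
// Sketch of the provider/model constants described in the change summary.
// Names mirror the PR; string values are assumptions for illustration.
export const Provider = {
  OPEN_AI: "openai",
  ANTHROPIC: "anthropic",
  GOOGLE: "google", // new in this PR
  OLLAMA: "ollama", // conditionally exposed
} as const;

export const Model = {
  GEMINI_1_5_PRO: "gemini-1.5-pro-latest", // new
  GEMINI_1_5_FLASH: "gemini-1.5-flash-latest", // new
} as const;

// Options shown in the model picker when the Google provider is selected.
export const googleModelOptions = [
  { label: "Gemini 1.5 Pro", value: Model.GEMINI_1_5_PRO },
  { label: "Gemini 1.5 Flash", value: Model.GEMINI_1_5_FLASH },
];
```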

Sequence Diagram

sequenceDiagram
    participant User
    participant Settings
    participant AIProvider
    participant ModelSelector

    User->>Settings: Select Google Provider
    Settings->>ModelSelector: Validate Provider
    ModelSelector-->>Settings: Validate Success
    Settings->>AIProvider: Configure Google AI
    AIProvider-->>Settings: Confirm Configuration
    Settings->>User: Save Settings Confirmed


Poem

🐰 A Rabbit's Ode to Google's AI Might

With Gemini's spark, our code takes flight,
New models dancing, oh so bright!
From Pro to Flash, we expand our view,
CodeRabbit's magic, forever true! 🌟
Hop into progress, with AI's delight! 🚀


coderabbitai bot (Contributor) left a review comment
Actionable comments posted: 1

🔭 Outside diff range comments (1)
apps/web/utils/llms/index.ts (1)

Line range hint 307-347: Add Google-specific error handling.

The handleError function handles specific error cases for OpenAI and Anthropic, but lacks Google-specific error handling. Consider adding error handling for Google API-specific errors.

Example implementation:

 async function handleError(error: unknown, userEmail: string) {
   if (APICallError.isInstance(error)) {
+    if (isGoogleAPIError(error)) {
+      return await addUserErrorMessage(
+        userEmail,
+        ErrorType.GOOGLE_API_ERROR,
+        error.message,
+      );
+    }
+
     if (isIncorrectOpenAIAPIKeyError(error)) {
       // ... existing code
     }
     // ... rest of the error handling
   }
 }
🧹 Nitpick comments (2)
apps/web/utils/llms/config.ts (1)

19-20: Consider versioning strategy for Gemini models.

Using "latest" in model identifiers might cause issues with version tracking and reproducibility. Consider using specific version identifiers like we do with Claude models (e.g., "gemini-1.5-pro-20240101").
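The versioning concern can be illustrated with a small sketch. The dated identifier below is a hypothetical example of a pinned release tag, not a verified Gemini version string.

```typescript
// Floating vs. pinned model identifiers, per the nitpick above.
// A "latest" alias silently follows upstream updates; a dated tag
// keeps behavior and billing reproducible. The pinned tag here is
// hypothetical, for illustration only.
const GEMINI_1_5_PRO_FLOATING = "gemini-1.5-pro-latest"; // tracks upstream
const GEMINI_1_5_PRO_PINNED = "gemini-1.5-pro-002";      // hypothetical pinned release

// A floating alias can be flagged at config-validation time:
function isPinned(modelId: string): boolean {
  return !modelId.endsWith("-latest");
}
```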

apps/web/utils/usage.ts (1)

106-106: Fix typo in function name.

The function name calcuateCost contains a typo and should be calculateCost.

-function calcuateCost(
+function calculateCost(
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b3735ad and 5331bd6.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (7)
  • apps/web/app/api/user/settings/route.ts (1 hunks)
  • apps/web/app/api/user/settings/validation.ts (1 hunks)
  • apps/web/package.json (1 hunks)
  • apps/web/utils/ai/rule/generate-rules-prompt.ts (1 hunks)
  • apps/web/utils/llms/config.ts (3 hunks)
  • apps/web/utils/llms/index.ts (2 hunks)
  • apps/web/utils/usage.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • apps/web/utils/ai/rule/generate-rules-prompt.ts
🔇 Additional comments (6)
apps/web/app/api/user/settings/validation.ts (1)

6-11: LGTM! Provider enum extension is well-implemented.

The addition of Provider.GOOGLE and conditional inclusion of Provider.OLLAMA maintains proper type safety while extending provider support.

apps/web/utils/llms/config.ts (1)

45-54: LGTM! Model options are well-structured.

The model options for Google provider are properly configured with clear labels.

apps/web/utils/usage.ts (1)

94-103: LGTM! Cost calculations are accurate and well-documented.

The pricing implementation for Gemini models is correct and includes helpful reference URLs.
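The cost accounting being reviewed can be sketched as below. The per-million-token rates are assumptions based on Google's published Gemini pricing around the time of the PR; the reference URLs in usage.ts are the authoritative source, and the function name uses the corrected spelling suggested in the nitpick above.

```typescript
// Hedged sketch of per-call cost accounting for the new Gemini models.
// Rates are in dollars per token (derived from assumed $/1M-token
// pricing) and may not match the values in usage.ts exactly.
const costs: Record<string, { input: number; output: number }> = {
  "gemini-1.5-pro-latest":   { input: 1.25 / 1_000_000, output: 5.0 / 1_000_000 },
  "gemini-1.5-flash-latest": { input: 0.075 / 1_000_000, output: 0.3 / 1_000_000 },
};

// cost = prompt tokens * input rate + completion tokens * output rate
function calculateCost(
  model: string,
  usage: { promptTokens: number; completionTokens: number },
): number {
  const rate = costs[model];
  if (!rate) return 0; // unknown model: no cost recorded
  return usage.promptTokens * rate.input + usage.completionTokens * rate.output;
}
```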

apps/web/utils/llms/index.ts (2)

13-13: LGTM! Import statement is properly placed.

The import for Google's AI SDK is correctly positioned alongside other provider imports.


74-83: Verify error handling for Google API key.

The implementation looks good and follows the established pattern. However, let's verify the error handling behavior is consistent with other providers.

Run this script to check error handling consistency across providers:

✅ Verification successful

Error handling for Google API key is appropriate and consistent

The implementation correctly validates the API key requirement, similar to how other providers handle their specific authentication needs:

  • OpenAI falls back to environment variables
  • Anthropic has dual-path validation for direct API or Bedrock credentials
  • Google requires an explicit API key
  • Ollama uses a different authentication mechanism
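The per-provider credential rules summarized in the bullets above can be sketched as a small resolver. Provider names mirror the PR; the environment-variable names, error message, and function shape are assumptions for illustration, not the actual implementation in apps/web/utils/llms/index.ts.

```typescript
// Minimal sketch of per-provider API-key resolution, assuming the
// rules listed in the verification summary. The env parameter stands
// in for process.env so the sketch stays self-contained.
type ProviderName = "openai" | "anthropic" | "google" | "ollama";

function resolveApiKey(
  provider: ProviderName,
  userApiKey?: string,
  env: Record<string, string | undefined> = {},
): string | undefined {
  switch (provider) {
    case "openai":
      // OpenAI: user-supplied key, falling back to the server's env.
      return userApiKey ?? env.OPENAI_API_KEY;
    case "anthropic":
      // Anthropic: direct key or (elsewhere) Bedrock credentials.
      return userApiKey ?? env.ANTHROPIC_API_KEY;
    case "google":
      // Google: an explicit key is required; fail fast otherwise.
      if (!userApiKey) throw new Error("Google API key is required");
      return userApiKey;
    case "ollama":
      // Ollama: local server, no API key needed.
      return undefined;
  }
}
```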
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check error handling patterns across providers
# Look for error handling patterns in the codebase

# Check error handling patterns
rg -A 2 "throw new Error\(.*API key.*\)" 

# Check if Google-specific errors are handled in error utility functions
rg -l "isGoogleError|GoogleError" 

Length of output: 1297


Script:

#!/bin/bash
# Get all provider implementations in the LLM utils
ast-grep --pattern 'if (provider === $_) {
  $$$
}'

# Also check for any provider-specific error handling
rg -A 5 "provider === " apps/web/utils/llms/index.ts

Length of output: 3933

apps/web/package.json (1)

18-18: Verify @ai-sdk/google package version.

The package is correctly added, but let's verify if 1.0.12 is the latest stable version and check for any known security issues.

Comment on lines +31 to +34
case Provider.GOOGLE:
return options.aiModel || Model.GEMINI_1_5_PRO;
case Provider.OLLAMA:
return Model.OLLAMA;

🛠️ Refactor suggestion

Add model validation for Google provider.

While the default model is appropriate, consider validating that options.aiModel is a valid Google model when provided to prevent runtime errors.

 case Provider.GOOGLE:
-  return options.aiModel || Model.GEMINI_1_5_PRO;
+  const model = options.aiModel || Model.GEMINI_1_5_PRO;
+  if (![Model.GEMINI_1_5_PRO, Model.GEMINI_1_5_FLASH].includes(model)) {
+    throw new Error("Invalid Google model");
+  }
+  return model;

@elie222 elie222 merged commit e5ad1f5 into main Jan 9, 2025
3 of 4 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Nov 19, 2025
@elie222 elie222 deleted the google-llm-support branch December 18, 2025 23:07


Development

Successfully merging this pull request may close these issues.

Support AI's other than OpenAI like Mistral

1 participant