
Feat: AI provider suite#1080

Closed
duartebarbosadev wants to merge 26 commits into elie222:main from duartebarbosadev:feature/ai-provider-suite
Closed


Conversation


@duartebarbosadev duartebarbosadev commented Dec 9, 2025

This pull request adds support for local and custom AI providers (Ollama, LM Studio, and OpenAI-compatible servers) to the app's model selection and configuration UI, and improves the bulk rule-running feature by allowing users to include read emails. The changes enhance flexibility for users and admins, making it easier to connect to different AI backends and configure them per account.

AI Provider Configuration & Model Selection

Environment Configuration

  • Updated .env.example to document new environment variables for custom and local AI providers, including OPENAI_BASE_URL, OLLAMA_BASE_URL, LM_STUDIO_BASE_URL, and ALLOW_USER_AI_PROVIDER_URL. [1] [2]
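As a rough illustration of these variables in a local .env, the values below are placeholders using each tool's common default port; they are not the canonical entries from .env.example:

```shell
# Illustrative example values; consult apps/web/.env.example for the canonical entries.
# OpenAI-compatible server (path typically ends in /v1; port is an assumption here):
OPENAI_BASE_URL=http://localhost:8080/v1
# Ollama (URL includes the /api path):
OLLAMA_BASE_URL=http://localhost:11434/api
# LM Studio local server:
LM_STUDIO_BASE_URL=http://localhost:1234
# Security gate: when false, users cannot supply their own provider URLs.
ALLOW_USER_AI_PROVIDER_URL=false
```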

Bulk Rule-Running Feature

Error Handling & User Feedback


These changes make it easier for users to connect to a variety of AI providers, configure them flexibly, and run bulk operations with more control.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for Ollama and LM Studio local LLM providers with dynamic model loading
    • Enabled custom OpenAI-compatible API endpoints for AI providers
    • Added "Include read emails" toggle for bulk email processing rules
    • Added connection testing for AI settings with retry capability
  • Improvements

    • Enhanced AI settings validation and error messaging
    • Added model refresh functionality for local LLM providers
  • Documentation

    • Updated environment variable documentation with new LLM provider configurations
    • Added Ollama local setup guide with security notes


Copilot AI review requested due to automatic review settings December 9, 2025 07:27

vercel bot commented Dec 9, 2025

@duartebarbosadev is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Dec 9, 2025

Walkthrough

The PR adds support for custom AI provider base URLs (OpenAI-compatible, Ollama, LM Studio) via environment variables and optional user-provided URLs with a security feature gate. It introduces new API routes to fetch Ollama and LM Studio models, updates the ModelSection UI for dynamic model loading, adds database migrations for per-user base URLs, and propagates aiBaseUrl across AI action handlers.

Changes

Cohort / File(s) Summary
Environment Configuration
apps/web/.env.example, apps/web/env.ts, apps/web/app/(app)/config/page.tsx, turbo.json
Added OPENAI_BASE_URL, OLLAMA_BASE_URL, LM_STUDIO_BASE_URL, and ALLOW_USER_AI_PROVIDER_URL environment variables. Added allowUserAiProviderUrl UI display in admin config page. Extended turbo.json build env.
Database Schema
apps/web/prisma/schema.prisma, apps/web/prisma/migrations/20251204150811_add_user_ai_base_url/*
Added aiBaseUrl optional String field to User model and corresponding SQL migration.
API Routes for Model Discovery
apps/web/app/api/ai/ollama-models/route.ts, apps/web/app/api/ai/lmstudio-models/route.ts, apps/web/app/api/ai/models/route.ts, apps/web/app/api/user/me/route.ts
Introduced new GET endpoints to dynamically fetch models from Ollama and LM Studio servers. Updated /models route to accept optional baseUrl parameter. Extended /me response with supportsOllama, supportsLmStudio, allowUserAiProviderUrl, and aiBaseUrl.
LLM Provider Infrastructure
apps/web/utils/llms/config.shared.ts, apps/web/utils/llms/config.ts, apps/web/utils/llms/model.ts, apps/web/utils/llms/types.ts, apps/web/utils/llms/index.ts
Extracted shared provider config. Added normalizeOpenAiBaseUrl, createLmStudioFetch, createOpenAICompatible support. Updated model selection to propagate baseURL for OpenAI-compatible, Ollama, and LM Studio providers. Made supportsOllama and supportsLmStudio URL-based. Added providerRequiresApiKey logic. Enhanced error logging.
AI Settings & Validation
apps/web/utils/actions/settings.ts, apps/web/utils/actions/settings.validation.ts
Extended SaveAiSettingsBody schema with optional aiBaseUrl. Added testAiSettingsAction for connection validation with retry logic. Implemented provider-specific field nulling and conditional aiBaseUrl storage. Updated API key requirement logic per provider.
Data Access Layer
apps/web/utils/user/get.ts, apps/web/utils/user/validate.ts, apps/web/utils/actions/ai-rule.ts, apps/web/utils/actions/assess.ts, apps/web/utils/actions/cold-email.ts, apps/web/utils/assistant/process-assistant-email.ts, apps/web/utils/webhook/validate-webhook-account.ts, apps/web/app/api/ai/analyze-sender-pattern/route.ts
Added aiBaseUrl to Prisma user selections across AI-related queries and actions. No control-flow changes; purely extends data retrieval scope.
UI Components
apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx, apps/web/app/(app)/[emailAccountId]/assistant/BulkRunRules.tsx
ModelSection: added dynamic model fetching for Ollama/LM Studio, provider-specific URL inputs, refresh buttons, test connection flow. BulkRunRules: introduced includeRead toggle to control processing of read emails in addition to unread.
Dependencies
apps/web/package.json
Added @ai-sdk/openai-compatible and replaced ollama-ai-provider with ollama-ai-provider-v2.
Tests & Documentation
apps/web/utils/llms/model.test.ts, docs/hosting/environment-variables.md, docs/hosting/self-hosting.md
Extended UserAIFields type with aiBaseUrl. Updated env docs with new variable descriptions and security warnings for ALLOW_USER_AI_PROVIDER_URL. Added Ollama local setup documentation.
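As a rough sketch of the model-discovery flow the routes above implement, a minimal fetch against Ollama's /api/tags endpoint might look like the following; the function names and the 5-second timeout are assumptions for illustration, not the PR's actual code:

```typescript
// Shape of Ollama's /api/tags response (fields beyond `name` omitted).
type OllamaTagsResponse = { models: { name: string }[] };

// Pure helper so the parsing step is testable without a live server.
function parseOllamaModels(body: OllamaTagsResponse): string[] {
  return body.models.map((m) => m.name);
}

// Hypothetical fetch wrapper; baseUrl is expected to already include /api,
// e.g. "http://localhost:11434/api", so only /tags is appended.
async function fetchOllamaModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl.replace(/\/$/, "")}/tags`, {
    signal: AbortSignal.timeout(5000),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  return parseOllamaModels((await res.json()) as OllamaTagsResponse);
}
```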

Sequence Diagram

sequenceDiagram
    actor User
    participant UI as ModelSection UI
    participant Server as Next.js API
    participant Provider as Ollama/LM Studio<br/>Server
    participant DB as Database
    
    rect rgb(200, 240, 255)
    Note over User,Provider: Dynamic Model Fetching Flow
    User->>UI: Select Ollama/LM Studio provider
    activate UI
    UI->>Server: GET /api/ai/ollama-models?baseUrl=...
    activate Server
    Server->>Server: Validate feature flags &<br/>resolve base URL
    Server->>Provider: GET {baseUrl}/api/tags
    activate Provider
    Provider-->>Server: Model list
    deactivate Provider
    Server-->>UI: { models: [...] }
    deactivate Server
    UI->>UI: Display models in dropdown<br/>or text input fallback
    deactivate UI
    end
    
    rect rgb(255, 240, 200)
    Note over User,DB: Settings Save & Test
    User->>UI: Select model & test connection
    activate UI
    UI->>Server: POST /api/ai/settings/test<br/>(provider, model, baseUrl, apiKey)
    activate Server
    Server->>Server: Call generateText with<br/>provider config + retry logic
    Server->>Provider: Attempt AI request
    activate Provider
    Provider-->>Server: Success/Error
    deactivate Provider
    alt Connection Success
        Server-->>UI: { success: true }
    else Connection Failed (Retry)
        Server->>Server: Wait 5s, retry once
        Server->>Provider: Retry request
        Provider-->>Server: Result
        alt Still Failed
            Server-->>UI: { error: message }
        else Retry Success
            Server-->>UI: { success: true }
        end
    end
    deactivate Server
    UI->>UI: Show test result toast
    User->>UI: Confirm & save settings
    UI->>Server: POST /api/ai/settings<br/>(provider, model, baseUrl, aiApiKey)
    activate Server
    Server->>DB: Update user { aiProvider,<br/>aiModel, aiBaseUrl, ... }
    activate DB
    DB-->>Server: Success
    deactivate DB
    Server-->>UI: Settings saved
    deactivate Server
    deactivate UI
    end

Estimated Code review effort

🎯 4 (Complex) | ⏱️ ~45–60 minutes

Areas requiring extra attention:

  • apps/web/utils/llms/model.ts — High logic density: URL normalization for OpenAI-compatible endpoints, LM Studio fetch wrapper with field stripping, provider routing with baseURL propagation, and interactions with multiple provider SDKs (OpenAI, Ollama, Anthropic, etc.)
  • apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx — Significant UI refactor: dynamic model fetching via SWR for Ollama/LM Studio, provider-specific URL inputs, refresh logic, and two-attempt test connection flow with state management and toast notifications
  • apps/web/utils/actions/settings.ts — New testAiSettingsAction with retry/backoff logic, provider-specific field nulling, conditional aiBaseUrl storage, and integration with various AI SDKs
  • apps/web/app/api/ai/ollama-models/route.ts and apps/web/app/api/ai/lmstudio-models/route.ts — New public API routes with URL validation, base URL resolution logic gated by allowUserAiProviderUrl, error handling, and external service calls with timeout
  • Prisma type and query consistency — Verify aiBaseUrl is correctly propagated across all affected Prisma selections in UserAIFields, EmailAccountWithAI, and related query helper types to avoid runtime type mismatches
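The two-attempt test-connection behavior called out above (wait 5 seconds, retry once) can be sketched generically; `withOneRetry` is a name invented here for illustration, not the PR's actual helper:

```typescript
// Run an async operation; on failure, wait `delayMs` and retry exactly once.
// A second failure propagates to the caller.
async function withOneRetry<T>(fn: () => Promise<T>, delayMs = 5000): Promise<T> {
  try {
    return await fn();
  } catch {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    return await fn();
  }
}
```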

Possibly related PRs

  • PR #577: Modifies apps/web/utils/llms/model.ts (getModel/SelectModel types and provider selection) — directly overlaps with provider routing and model handling logic
  • PR #479: Modifies apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx — touches the same model selection UI and may have conflicting state/fetching approaches
  • PR #269: Adds Ollama LLM support and modifies env.ts, .env.example, package.json, and utils/llms/* — foundational Ollama integration that this PR builds upon

Suggested reviewers

  • elie222

🐰 A rabbit hops through the config with glee,
Ollama models dance wild and LM free,
User URLs now flow where they please,
With base paths and tests put minds at ease! 🚀

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 19.35%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: The title 'Feat: AI provider suite' clearly and concisely summarizes the main change: adding support for multiple AI providers (Ollama, LM Studio, OpenAI-compatible) to the application.



macroscopeapp bot commented Dec 9, 2025

Add AI provider suite by enabling Ollama and LM Studio with user-specified base URLs across settings, model selection, and APIs

Introduce conditional provider options and base URL handling, add LM Studio and Ollama selection with model listing endpoints, persist user.aiBaseUrl, and route model selection through normalized OpenAI-compatible URLs with provider-specific API key rules. Update settings UI to configure provider, model, and base URL with a test flow, and extend server env and validation to support OPENAI_BASE_URL, LM_STUDIO_BASE_URL, and ALLOW_USER_AI_PROVIDER_URL.

📍Where to Start

Start with shared provider configuration in config.shared.ts, then review model selection logic in model.ts, and settings flow in ModelSection.tsx.


📊 Macroscope summarized 95bb179. 23 files reviewed, 24 issues evaluated, 21 issues filtered, 0 comments posted.


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 32 files

Prompt for AI agents (all 2 issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx">

<violation number="1" location="apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx:233">
P2: Redundant return statement makes the validation check pointless. Both branches lead to an immediate return, so the `if (result !== "validation")` check has no effect. Consider removing the duplicate `return;` or clarifying the intended logic.</violation>

<violation number="2" location="apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx:345">
P3: The explainText mentions "LM Studio" but this input only appears for the Ollama provider. Consider updating it to reference only Ollama, e.g., "Custom Ollama server URL. Save first, then refresh models."</violation>
</file>


registerProps={register("aiBaseUrl")}
error={errors.aiBaseUrl}
placeholder="http://localhost:11434"
explainText="Custom Ollama or LM Studio server URL. Save first, then refresh models."

@cubic-dev-ai cubic-dev-ai bot Dec 9, 2025


P3: The explainText mentions "LM Studio" but this input only appears for Ollama provider. Consider updating to just reference Ollama, e.g., "Custom Ollama server URL. Save first, then refresh models."

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx, line 345:

<comment>The explainText mentions "LM Studio" but this input only appears for the Ollama provider. Consider updating it to reference only Ollama, e.g., "Custom Ollama server URL. Save first, then refresh models."</comment>

<file context>
@@ -136,16 +323,93 @@ function ModelSectionForm(props: {
+                registerProps={register("aiBaseUrl")}
+                error={errors.aiBaseUrl}
+                placeholder="http://localhost:11434"
+                explainText="Custom Ollama or LM Studio server URL. Save first, then refresh models."
+              />
+              <Button
</file context>

try {
  const result = await runTestOnce(data);
  if (result !== "validation") {
    return;

@cubic-dev-ai cubic-dev-ai bot Dec 9, 2025


P2: Redundant return statement makes the validation check pointless. Both branches lead to an immediate return, so the if (result !== "validation") check has no effect. Consider removing the duplicate return; or clarifying the intended logic.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx, line 233:

<comment>Redundant return statement makes the validation check pointless. Both branches lead to an immediate return, so the `if (result !== "validation")` check has no effect. Consider removing the duplicate `return;` or clarifying the intended logic.</comment>

<file context>
@@ -101,16 +186,118 @@ function ModelSectionForm(props: {
+            try {
+              const result = await runTestOnce(data);
+              if (result !== "validation") {
+                return;
+              }
+              return;
</file context>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/web/utils/llms/model.ts (1)

385-397: createOpenRouterProviderOptions should omit or conditionally structure the provider option instead of setting it to null.

According to OpenRouter SDK documentation, the provider configuration option accepts an object with fields like order, allowFallbacks, requireParameters, sort, and data_collection. Setting provider: null when order.length === 0 is not a documented pattern and may cause the SDK to behave unexpectedly. Either omit the provider key entirely or restructure the return value to avoid passing null values for configuration objects.

🧹 Nitpick comments (16)
apps/web/utils/actions/settings.validation.ts (1)

3-3: AI settings schema and validation look coherent; consider centralizing provider predicates

The extended aiProvider enum, aiBaseUrl field, and superRefine logic (API key required for non-default/non-local providers; URL only allowed for Ollama/OpenAI/LM Studio) are consistent and match the intended feature set.

To reduce drift over time, you could centralize the “requires API key” and “supports custom base URL” checks in helpers (e.g., providerRequiresApiKey, providerSupportsBaseUrl) in config.shared and reuse them here and in server actions.

Also applies to: 26-42, 44-74
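The centralization suggested above might look like the following sketch. The provider identifiers are assumed lowercase strings for illustration and may not match the codebase's actual Provider enum values:

```typescript
// Hypothetical shared predicates (names taken from the suggestion above;
// the provider string values are assumptions, not the codebase's constants).
const LOCAL_PROVIDERS = new Set(["ollama", "lmstudio"]);
const BASE_URL_PROVIDERS = new Set(["ollama", "openai", "lmstudio"]);

// Default (null) provider uses server-managed keys; local providers need none.
function providerRequiresApiKey(provider: string | null): boolean {
  return provider !== null && !LOCAL_PROVIDERS.has(provider);
}

// Custom base URLs only apply to OpenAI-compatible and local providers.
function providerSupportsBaseUrl(provider: string | null): boolean {
  return provider !== null && BASE_URL_PROVIDERS.has(provider);
}
```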

apps/web/utils/actions/settings.ts (1)

220-253: Helper mappings for AI fields and URL redaction are well-structured

toUserAiFields mirrors the validation rules (nulls for default, no API key for Ollama/LM Studio, base URL only for supported providers), and redactUrls does a reasonable job of stripping server URLs from error messages when using system-managed providers.

If you find similar provider-conditional mapping logic elsewhere, consider reusing toUserAiFields (or a shared helper) to keep behavior consistent between “test” and “save” flows.

docs/hosting/self-hosting.md (1)

47-60: Ollama self-hosting docs are clear; optional Docker networking note

The new Ollama section cleanly explains required env vars and restart steps, and correctly notes that no API key is needed. As an optional refinement, you could add a short note that host.docker.internal may need explicit configuration on some Linux Docker setups, but this isn’t blocking.

apps/web/utils/llms/index.ts (1)

340-386: APICallError logging enhances diagnostics; consider extra care with payload content

Logging statusCode, url, and a truncated responseBody for APICallError will be very helpful when debugging provider issues, and it’s wired in before the existing error-type checks without changing behavior. Given this is LLM traffic, you might optionally consider:

  • Ensuring logs can’t be accessed by untrusted operators, and/or
  • Further restricting the logged responseBody (e.g., only for 4xx/5xx, or mapping to known error codes/messages instead of raw text)

Not a blocker, but worth thinking about for deployments with stricter data-in-logs policies. Based on learnings, this still fits the expectation to have good, scoped logging around LLM calls.

docs/hosting/environment-variables.md (1)

53-65: Document LM_STUDIO_BASE_URL alongside other LLM env vars

The new entries for OPENAI_BASE_URL, OLLAMA_BASE_URL, and ALLOW_USER_AI_PROVIDER_URL plus the security section read well, but LM_STUDIO_BASE_URL (present in apps/web/env.ts) isn’t listed in the “All Environment Variables” table. Consider adding a row for it with an example (e.g., http://localhost:1234) so LM Studio setup is discoverable and consistent with the code.

Also applies to: 105-129

apps/web/env.ts (1)

52-54: Consider stricter URL validation for new base URL env vars

OPENAI_BASE_URL and LM_STUDIO_BASE_URL are currently z.string().optional(). To catch typos and malformed URLs earlier, consider switching both to z.string().url().optional(), which aligns with how DATABASE_URL is validated and matches the expected http(s)://… format used by the model code.

Also applies to: 62-64, 125-125
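For reference, zod's .url() check is roughly equivalent to parsing with the WHATWG URL constructor. A standalone sketch of that check (an illustration, not the project's code) would be:

```typescript
// Accept only values the URL parser can handle with an http(s) scheme.
function isValidBaseUrl(value: string): boolean {
  try {
    const url = new URL(value);
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    // new URL throws on values with no parseable scheme.
    return false;
  }
}
```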

apps/web/utils/llms/model.test.ts (1)

191-228: Avoid non-null assertions on Provider enum values

The uses of Provider.OLLAMA! can safely drop the ! – enum members should already be non-nullable, and relying on ! is discouraged in this codebase.

Also applies to: 253-305

apps/web/app/api/ai/lmstudio-models/route.ts (1)

6-21: Missing response type export for client-side type safety.

Per coding guidelines, GET API routes should export response types using the Awaited<ReturnType<>> pattern. Add a type export for client consumption.

 export type LmStudioModelsResponse = {
   object: string;
   data: LmStudioModel[];
 };

+export type GetLmStudioModelsResponse = LmStudioModel[];
apps/web/app/api/ai/ollama-models/route.ts (1)

6-24: Missing response type export for client-side type safety.

Similar to the LM Studio route, add a response type export per coding guidelines.

 export type OllamaModelsResponse = {
   models: OllamaModel[];
 };

+export type GetOllamaModelsResponse = OllamaModel[];
apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx (3)

154-161: LM Studio model fetching logic may be confusing.

When allowUserAiProviderUrl is true but no aiBaseUrl is provided, lmStudioModelsUrl becomes null and no models are fetched. This is intentional per the comment, but the UX could be clearer - the user sees "Refresh models" button but clicking it won't work until they enter a URL.

Consider disabling the refresh button or showing a clearer message when URL is required but not provided.


224-241: Simplify the retry loop logic.

The current retry logic has some redundancy with the early returns on lines 232-235. Both paths return immediately, making the continue on line 238 unreachable after a successful result.

           for (let attempt = 0; attempt < 2; attempt++) {
             if (attempt === 1) {
               setTestingMessage("Retrying in 5 seconds...");
               await new Promise((resolve) => setTimeout(resolve, 5000));
             }

             try {
               const result = await runTestOnce(data);
-              if (result !== "validation") {
-                return;
-              }
+              // Exit on success or validation error (no point retrying validation)
               return;
             } catch (error) {
               lastError = error;
               if (attempt === 0) continue;
               throw error;
             }
           }

90-91: Consider typing modelsError more specifically.

Using any for error types reduces type safety. Consider using Error | undefined or the specific error type from SWR.

-  modelsError?: any;
+  modelsError?: Error;
apps/web/utils/llms/model.ts (3)

265-296: Ollama provider implementation looks correct with proper URL normalization.

Good validation requiring model name and proper security gating via allowUserAiProviderUrl. The URL normalization correctly handles the /api suffix.

Minor note: Line 291 has unnecessary non-null assertion (Provider.OLLAMA!) since Provider.OLLAMA is a string constant that cannot be null.

       return {
-        provider: Provider.OLLAMA!,
+        provider: Provider.OLLAMA,
         modelName,

542-544: providerRequiresApiKey helper is useful but could be more explicit.

Consider using a Set or exhaustive check for clarity and future maintainability.

 function providerRequiresApiKey(provider: string) {
-  return provider !== Provider.OLLAMA && provider !== Provider.LM_STUDIO;
+  const localProviders = new Set([Provider.OLLAMA, Provider.LM_STUDIO]);
+  return !localProviders.has(provider);
 }

114-140: URL normalization logic has overlapping conditions and a problematic path truncation behavior.

Line 123's condition (path.startsWith("/v1/")) discards everything after /v1/ when returning ${url.origin}/v1, which loses path prefixes for nested base URLs like https://example.com/proxy/v1/completions. Additionally, lines 122 and 130-131 both handle paths ending with /v1, creating redundancy.

The logic should prioritize finding the /v1 segment and preserving any legitimate path prefix, rather than unconditionally truncating to just /v1. A regex-based approach could simplify this to: find the first /v1 segment and truncate there, or append /v1 if absent.
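One way to sketch the regex-based approach suggested above: find the first /v1 path segment, truncate after it while preserving any prefix, and append /v1 when absent. This is an illustration of the suggestion, not the PR's normalizeOpenAiBaseUrl:

```typescript
function normalizeBaseUrl(raw: string): string {
  const url = new URL(raw);
  // Keep everything up to and including the first "/v1" segment.
  const match = url.pathname.match(/^(.*?\/v1)(\/|$)/);
  // If no /v1 segment exists, append one (dropping any trailing slash first).
  const path = match ? match[1] : `${url.pathname.replace(/\/$/, "")}/v1`;
  return `${url.origin}${path}`;
}
```

With this approach a nested base like https://example.com/proxy/v1/completions normalizes to https://example.com/proxy/v1 instead of losing the /proxy prefix.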

apps/web/utils/llms/config.ts (1)

28-35: Important note about static vs dynamic usage.

The comment at lines 29-30 is helpful for preventing misuse in client components. Consider adding a @server-only directive or using a naming convention to make this clearer.

To enforce server-only usage more explicitly:

+import "server-only";
+
 import { env } from "@/env";

This would cause a build error if accidentally imported in a client component.

Comment on lines +54 to +61
const { searchParams } = new URL(req.url);
const userProvidedUrl = searchParams.get("baseUrl");

// Use user-provided URL if allowed and provided, otherwise fall back to env
const baseUrl =
  allowUserAiProviderUrl && userProvidedUrl
    ? userProvidedUrl
    : env.LM_STUDIO_BASE_URL;


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n apps/web/app/api/ai/lmstudio-models/route.ts | head -100

Repository: elie222/inbox-zero

Length of output: 180


🏁 Script executed:

sed -n '1,100p' apps/web/app/api/ai/lmstudio-models/route.ts | cat -n

Repository: elie222/inbox-zero

Length of output: 180


🏁 Script executed:

wc -l apps/web/app/api/ai/lmstudio-models/route.ts

Repository: elie222/inbox-zero

Length of output: 180


🏁 Script executed:

cat apps/web/app/api/ai/lmstudio-models/route.ts

Repository: elie222/inbox-zero

Length of output: 180




🌐 Web query:

SSRF prevention Node.js private IP ranges validation user-provided URLs fetch

💡 Result:

Short checklist + minimal Node.js pattern to prevent SSRF when fetching user URLs:

  1. Parse & validate URL: only allow expected protocols (https/http) and ports.
  2. Resolve DNS to IP(s) server-side (don’t trust hostname string). Use dns.promises.lookup/all. Re-resolve after any redirect.
  3. Check each resolved IP against a deny-list of reserved ranges (loopback, RFC1918 private, link-local, multicast, unspecified, CGNAT, RFC4193 ULA, IPv4‑mapped IPv6, etc.). Use a maintained IP library (ipaddr.js) for correct IPv4/IPv6 checks. (github.com)
  4. Block requests that resolve to any denied range; also block if any redirect target resolves to denied ranges.
  5. Enforce network-level egress controls (firewall, proxy) as a last line of defense. (nodejs-security.com)

Minimal Node.js example (conceptual):

  • npm install ipaddr.js
  • use dns.promises.lookup/all, ipaddr.js to check ranges, and only then fetch.

Example (outline — adapt to your app):
const dns = require('dns').promises;
const ipaddr = require('ipaddr.js');
const fetch = require('node-fetch');

async function isDeniedIP(ip) {
  const addr = ipaddr.process(ip); // handles IPv4-mapped IPv6
  const range = addr.range(); // e.g. 'loopback', 'private', 'linkLocal', 'multicast', 'reserved', etc.
  // treat anything not 'unicast' as denied, and explicitly deny known ranges:
  return range !== 'unicast';
}

async function safeFetch(urlString, opts) {
  const url = new URL(urlString);
  if (!['http:', 'https:'].includes(url.protocol)) throw new Error('Invalid protocol');
  const addrs = await dns.lookup(url.hostname, { all: true });
  for (const a of addrs) {
    if (await isDeniedIP(a.address)) throw new Error('Destination IP is prohibited');
  }
  // perform fetch but do NOT follow redirects automatically — handle them manually:
  const res = await fetch(url.href, { ...opts, redirect: 'manual' });
  if (res.status >= 300 && res.status < 400 && res.headers.get('location')) {
    const loc = new URL(res.headers.get('location'), url);
    // repeat DNS resolution + checks for loc.hostname before following
    // (re-run the above checks against loc)
    throw new Error('Redirects must be validated before following');
  }
  return res;
}

Notes / pitfalls:

  • Don’t rely on simple string checks or blacklist packages with known bugs — use ipaddr.js or equivalent to correctly classify ranges (IPv6, IPv4-mapped addresses, multicast, link-local, etc.). There have been real-world pitfalls in some libraries. (nodejs-security.com)
  • Time-of-check → time-of-use: re-resolve after redirects and just before making the connection.
  • Use egress filtering (host firewall / proxy) to enforce rules even if app validation fails. (nodejs-security.com)

Want a fully runnable example (with complete redirect handling + recommended deny-list and unit tests)? I can provide one tailored to your environment (Node version, fetch lib, proxy).



Add IP range validation to prevent SSRF when allowUserAiProviderUrl is enabled.

URL format validation (lines 74-81) does not prevent SSRF attacks. Users can still probe internal services by providing URLs that resolve to private IP ranges (10.x, 172.16-31.x, 192.168.x, 127.x, etc.). Implement DNS resolution and IP range validation using a library like ipaddr.js, blocking loopback, RFC1918 private ranges, link-local, and multicast addresses. Also validate redirect targets before following them, as attackers can use redirects to bypass initial URL validation.

Comment on lines +47 to +57
export const GET = withEmailAccount("api/ai/ollama-models", async (req) => {
  // Only allow custom URL if the feature is enabled via env var
  const { searchParams } = new URL(req.url);
  const customUrl = searchParams.get("baseUrl");

  // Security: Only use custom URL if ALLOW_USER_AI_PROVIDER_URL is enabled
  // Note: URL should include /api (e.g., http://localhost:11434/api)
  const baseUrl =
    allowUserAiProviderUrl && customUrl
      ? customUrl
      : env.OLLAMA_BASE_URL || "http://localhost:11434/api";


⚠️ Potential issue | 🟡 Minor

Inconsistency: Missing enablement check compared to LM Studio route.

The LM Studio route checks supportsLmStudio and returns 403 if disabled, but this route defaults to localhost:11434 even when Ollama isn't configured. This could lead to unexpected behavior or expose the endpoint when Ollama support isn't intended.

Consider adding a consistent enablement check:

 export const GET = withEmailAccount("api/ai/ollama-models", async (req) => {
+  // Import supportsOllama from config
+  if (!supportsOllama) {
+    return NextResponse.json(
+      { error: "Ollama is not enabled on this server" },
+      { status: 403 },
+    );
+  }
+
   // Only allow custom URL if the feature is enabled via env var
   const { searchParams } = new URL(req.url);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/web/app/api/ai/ollama-models/route.ts around lines 47 to 57, add the
same enablement guard used in the LM Studio route: check the feature
flag/boolean that indicates Ollama support (e.g., supportsOllama or equivalent
env flag) before resolving baseUrl; if Ollama support is disabled, immediately
return a 403 response (forbidden). Keep the existing
allowUserAiProviderUrl/customUrl logic only when the feature is enabled and fall
back to env.OLLAMA_BASE_URL or the localhost default otherwise, ensuring you do
not expose the endpoint when Ollama is not configured.


Copilot AI left a comment


Pull request overview

This PR adds support for local and custom AI providers (Ollama, LM Studio, and OpenAI-compatible servers), along with improvements to the bulk rule-running feature. The implementation includes dynamic model fetching, custom server URLs, and enhanced UI for configuration.

Key changes:

  • Adds Ollama and LM Studio as AI provider options with dynamic model discovery
  • Introduces ALLOW_USER_AI_PROVIDER_URL flag to allow users to configure custom AI provider URLs
  • Updates bulk rule-running to optionally include read emails
  • Adds comprehensive test coverage for new providers

Reviewed changes

Copilot reviewed 31 out of 32 changed files in this pull request and generated 6 comments.

Summary per file:
  • apps/web/utils/llms/model.ts: Core provider logic for Ollama, LM Studio, and OpenAI-compatible servers with URL normalization
  • apps/web/app/api/ai/ollama-models/route.ts: API endpoint to fetch available Ollama models
  • apps/web/app/api/ai/lmstudio-models/route.ts: API endpoint to fetch available LM Studio models
  • apps/web/app/api/ai/models/route.ts: Updated OpenAI models endpoint to support custom base URLs
  • apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx: Enhanced UI for provider selection with dynamic model fetching and connection testing
  • apps/web/utils/actions/settings.validation.ts: Validation schema for AI settings including base URL
  • apps/web/utils/actions/settings.ts: Server actions for saving and testing AI settings
  • apps/web/app/(app)/[emailAccountId]/assistant/BulkRunRules.tsx: Added toggle to include read emails in bulk operations
  • apps/web/prisma/schema.prisma: Added aiBaseUrl field to User model
  • apps/web/env.ts: Added environment variables for custom provider URLs
  • apps/web/.env.example: Documented new configuration options with security warnings
  • docs/hosting/environment-variables.md: Comprehensive security documentation for ALLOW_USER_AI_PROVIDER_URL
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported


Comment on lines +50 to +57
const customUrl = searchParams.get("baseUrl");

// Security: Only use custom URL if ALLOW_USER_AI_PROVIDER_URL is enabled
// Note: URL should include /api (e.g., http://localhost:11434/api)
const baseUrl =
  allowUserAiProviderUrl && customUrl
    ? customUrl
    : env.OLLAMA_BASE_URL || "http://localhost:11434/api";
Copilot AI Dec 9, 2025

Security: Missing URL validation for SSRF protection

When ALLOW_USER_AI_PROVIDER_URL is enabled, the customUrl from query parameters is used without validation. This could allow users to make requests to:

  • Internal network resources (e.g., http://localhost:6379, http://169.254.169.254)
  • Private IP ranges (http://192.168.x.x, http://10.x.x.x)
  • Cloud metadata endpoints

Add URL validation to:

  1. Validate it's a properly formatted URL
  2. Block private IP ranges and localhost (unless explicitly allowed)
  3. Block cloud metadata endpoints
  4. Consider using an allowlist approach instead

Similar validation is needed in the LM Studio endpoint.
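
A minimal sketch of such a guard, assuming a literal-hostname blocklist is acceptable (the helper name `isForbiddenAiBaseUrl` is hypothetical and not part of the PR; a hardened deployment should also resolve DNS names and re-validate the resulting IP to handle rebinding):

```typescript
// Hypothetical SSRF guard: reject URLs whose literal hostname points at
// loopback, private, link-local, or cloud-metadata addresses.
function parseUrl(raw: string): URL | null {
  try {
    return new URL(raw);
  } catch {
    return null;
  }
}

function isForbiddenAiBaseUrl(raw: string): boolean {
  const url = parseUrl(raw);
  if (url === null) return true; // malformed URLs are rejected outright
  if (url.protocol !== "http:" && url.protocol !== "https:") return true;
  const host = url.hostname.toLowerCase();
  if (host === "localhost" || host === "[::1]" || host === "0.0.0.0") return true;
  if (host === "169.254.169.254") return true; // cloud metadata endpoint
  const m = host.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/);
  if (m) {
    const a = Number(m[1]);
    const b = Number(m[2]);
    if (a === 127 || a === 10) return true; // loopback, 10.0.0.0/8
    if (a === 192 && b === 168) return true; // 192.168.0.0/16
    if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
    if (a === 169 && b === 254) return true; // link-local / metadata range
  }
  return false;
}
```

Note this only inspects the literal hostname; it does not follow redirects or resolve DNS, both of which an allowlist-based approach would sidestep entirely.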

Copilot uses AI. Check for mistakes.
Comment on lines +73 to +81
// Validate URL format (defense-in-depth, frontend also validates)
try {
  new URL(baseUrl);
} catch {
  return NextResponse.json(
    { error: "Invalid URL format for baseUrl parameter" },
    { status: 400 },
  );
}
Copilot AI Dec 9, 2025

Security: Insufficient URL validation for SSRF protection

The URL validation at lines 74-81 only checks if it's a valid URL format, but doesn't prevent SSRF attacks. When ALLOW_USER_AI_PROVIDER_URL is enabled, users can still target:

  • Internal services (http://localhost:6379, internal APIs)
  • Private IP ranges (http://192.168.x.x, http://10.x.x.x, http://172.16.x.x)
  • Cloud metadata endpoints (http://169.254.169.254)

This is the same critical security issue as in the Ollama endpoint. Add proper URL validation to block private IPs and sensitive endpoints.

Comment on lines +272 to +278
// Security: Only use user's custom URL if ALLOW_USER_AI_PROVIDER_URL is enabled
// Normalize URL - strip /api if present, then add it back
const rawUrl =
  allowUserAiProviderUrl && aiBaseUrl
    ? aiBaseUrl
    : env.OLLAMA_BASE_URL || "http://localhost:11434";
const baseURL = `${rawUrl.replace(/\/api\/?$/, "")}/api`;
Copilot AI Dec 9, 2025

Security: SSRF vulnerability in Ollama provider configuration

Similar to the API endpoints, when allowUserAiProviderUrl is true and aiBaseUrl is user-provided, there's no validation to prevent SSRF attacks. The user-provided URL is directly used to make requests, which could target internal services.

This critical vulnerability exists in three places:

  1. Ollama provider (this location)
  2. LM Studio provider (line 307)
  3. OpenAI provider with custom baseURL (line 190)

All three need URL validation before use to prevent SSRF.
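
Setting the SSRF question aside, the strip-then-append normalization in the quoted snippet can be exercised on its own (a standalone extraction of that one expression, not the repo's code):

```typescript
// Mirror of the Ollama baseURL normalization: strip a trailing /api
// (with or without a trailing slash), then append /api exactly once.
function withApiSuffix(rawUrl: string): string {
  return `${rawUrl.replace(/\/api\/?$/, "")}/api`;
}
```

Inputs with or without the suffix land on one canonical form, so applying it twice is harmless.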

  ]),
  aiModel: z.string(),
  aiApiKey: z.string().optional(),
  aiBaseUrl: z.string().url().optional().or(z.literal("")),
Copilot AI Dec 9, 2025

The validation uses z.string().url() which validates URL format but doesn't prevent SSRF attacks. The validation should also check for:

  • Private IP addresses (localhost, 127.0.0.1, 192.168.x.x, 10.x.x.x, 172.16-31.x.x)
  • Cloud metadata endpoints (169.254.169.254)
  • Other non-routable addresses

Consider adding a custom Zod validator that checks against an SSRF blocklist or only allows specific patterns.

Comment on lines +18 to +20
const baseURL = normalizeOpenAiBaseUrl(
  (allowUserAiProviderUrl && baseUrl) || env.OPENAI_BASE_URL,
);
Copilot AI Dec 9, 2025

Security: User-provided URL used without validation

The baseUrl from the database (which can be user-provided when ALLOW_USER_AI_PROVIDER_URL is enabled) is passed directly to OpenAI client without SSRF validation. This could allow requests to internal services.

Same SSRF vulnerability applies here as in the Ollama and LM Studio endpoints.

    return `${url.origin}${normalizedPath}`;
  } catch {
    const cleaned = baseUrl.replace(/\/+$/, "");
    return cleaned.endsWith("/v1") ? cleaned : `${cleaned}/v1`;
Copilot AI Dec 9, 2025

The normalizeOpenAiBaseUrl function has a potential bug in the catch block. When URL parsing fails (in the catch), it falls back to string manipulation but doesn't validate the result is a valid URL. This could lead to malformed URLs being returned.

Consider adding validation after the string manipulation in the catch block, or returning null if the URL cannot be properly normalized.

Suggested change
-      return cleaned.endsWith("/v1") ? cleaned : `${cleaned}/v1`;
+      const normalized = cleaned.endsWith("/v1") ? cleaned : `${cleaned}/v1`;
+      try {
+        // Validate the normalized string is a valid URL
+        new URL(normalized);
+        return normalized;
+      } catch {
+        return null;
+      }

aiProvider: true,
aiModel: true,
aiApiKey: true,
aiBaseUrl: true,
Owner

you sure we need this stored at the user level?

think it's best if we just support ollama/lmstudio at the env level. or is there another need for this here?

@@ -69,8 +71,18 @@ export function BulkRunRules() {
{data && (
<>
<SectionDescription>
Owner

Unrelated changes to this PR.

Comment on lines +70 to +73
// Server-side config passed to client
supportsOllama,
supportsLmStudio,
allowUserAiProviderUrl,
Owner

This doesn't have anything to do with the user, so not sure why we need this here?

@elie222
Owner

elie222 commented Dec 9, 2025

Thanks for the PR. The main issue is that I don't want to support this at the user level. Ollama/LM Studio can stay for self-hosters, but it's not something that each user of the product needs to change.

Supporting it at the user level adds complexity, makes the db call slightly slower, and it's not something I see a real need for.

Owner

@elie222 elie222 left a comment

Remove unrelated code from the PR. Don't enable Ollama/LM Studio per user. Aim to make the PR way simpler. Stick with one change at a time, i.e. only adding Ollama. You can add LM Studio and other changes in separate PRs.

@elie222
Owner

elie222 commented Dec 9, 2025

Added a PR here, but I haven't tested it at all:
#1083

@elie222 elie222 closed this Dec 10, 2025


3 participants