Conversation
@duartebarbosadev is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

The PR adds support for custom AI provider base URLs (OpenAI-compatible, Ollama, LM Studio) via environment variables and optional user-provided URLs with a security feature gate. It introduces new API routes to fetch Ollama and LM Studio models, updates the ModelSection UI for dynamic model loading, adds database migrations for per-user base URLs, and propagates aiBaseUrl across AI action handlers.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant UI as ModelSection UI
    participant Server as Next.js API
    participant Provider as Ollama/LM Studio<br/>Server
    participant DB as Database

    rect rgb(200, 240, 255)
        Note over User,Provider: Dynamic Model Fetching Flow
        User->>UI: Select Ollama/LM Studio provider
        activate UI
        UI->>Server: GET /api/ai/ollama-models?baseUrl=...
        activate Server
        Server->>Server: Validate feature flags &<br/>resolve base URL
        Server->>Provider: GET {baseUrl}/api/tags
        activate Provider
        Provider-->>Server: Model list
        deactivate Provider
        Server-->>UI: { models: [...] }
        deactivate Server
        UI->>UI: Display models in dropdown<br/>or text input fallback
        deactivate UI
    end

    rect rgb(255, 240, 200)
        Note over User,DB: Settings Save & Test
        User->>UI: Select model & test connection
        activate UI
        UI->>Server: POST /api/ai/settings/test<br/>(provider, model, baseUrl, apiKey)
        activate Server
        Server->>Server: Call generateText with<br/>provider config + retry logic
        Server->>Provider: Attempt AI request
        activate Provider
        Provider-->>Server: Success/Error
        deactivate Provider
        alt Connection Success
            Server-->>UI: { success: true }
        else Connection Failed (Retry)
            Server->>Server: Wait 5s, retry once
            Server->>Provider: Retry request
            Provider-->>Server: Result
            alt Still Failed
                Server-->>UI: { error: message }
            else Retry Success
                Server-->>UI: { success: true }
            end
        end
        deactivate Server
        UI->>UI: Show test result toast
        User->>UI: Confirm & save settings
        UI->>Server: POST /api/ai/settings<br/>(provider, model, baseUrl, aiApiKey)
        activate Server
        Server->>DB: Update user { aiProvider,<br/>aiModel, aiBaseUrl, ... }
        activate DB
        DB-->>Server: Success
        deactivate DB
        Server-->>UI: Settings saved
        deactivate Server
        deactivate UI
    end
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45–60 minutes

Areas requiring extra attention:
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Add AI provider suite by enabling Ollama and LM Studio with user-specified base URLs across settings, model selection, and APIs. Introduce conditional provider options and base URL handling, add LM Studio and Ollama selection with model listing endpoints, persist …

📍 Where to Start

Start with shared provider configuration in config.shared.ts, then review model selection logic in model.ts, and the settings flow in ModelSection.tsx.

📊 Macroscope summarized 95bb179. 23 files reviewed, 24 issues evaluated, 21 issues filtered, 0 comments posted. View details
2 issues found across 32 files
Prompt for AI agents (all 2 issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx">
<violation number="1" location="apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx:233">
P2: Redundant return statement makes the validation check pointless. Both branches lead to an immediate return, so the `if (result !== "validation")` check has no effect. Consider removing the duplicate `return;` or clarifying the intended logic.</violation>
<violation number="2" location="apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx:345">
P3: The explainText mentions "LM Studio" but this input only appears for Ollama provider. Consider updating to just reference Ollama, e.g., "Custom Ollama server URL. Save first, then refresh models."</violation>
</file>
Reply to cubic to teach it or ask questions. Re-run a review with `@cubic-dev-ai review this PR`.
```tsx
registerProps={register("aiBaseUrl")}
error={errors.aiBaseUrl}
placeholder="http://localhost:11434"
explainText="Custom Ollama or LM Studio server URL. Save first, then refresh models."
```
P3: The explainText mentions "LM Studio" but this input only appears for Ollama provider. Consider updating to just reference Ollama, e.g., "Custom Ollama server URL. Save first, then refresh models."
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx, line 345:
<comment>The explainText mentions "LM Studio" but this input only appears for Ollama provider. Consider updating to just reference Ollama, e.g., "Custom Ollama server URL. Save first, then refresh models."</comment>
<file context>

```diff
@@ -136,16 +323,93 @@ function ModelSectionForm(props: {
+ registerProps={register("aiBaseUrl")}
+ error={errors.aiBaseUrl}
+ placeholder="http://localhost:11434"
+ explainText="Custom Ollama or LM Studio server URL. Save first, then refresh models."
+ />
+ <Button
```

</file context>
```tsx
try {
  const result = await runTestOnce(data);
  if (result !== "validation") {
    return;
```
P2: Redundant return statement makes the validation check pointless. Both branches lead to an immediate return, so the if (result !== "validation") check has no effect. Consider removing the duplicate return; or clarifying the intended logic.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx, line 233:
<comment>Redundant return statement makes the validation check pointless. Both branches lead to an immediate return, so the `if (result !== "validation")` check has no effect. Consider removing the duplicate `return;` or clarifying the intended logic.</comment>
<file context>

```diff
@@ -101,16 +186,118 @@ function ModelSectionForm(props: {
+ try {
+   const result = await runTestOnce(data);
+   if (result !== "validation") {
+     return;
+   }
+   return;
```

</file context>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/web/utils/llms/model.ts (1)
385-397: `createOpenRouterProviderOptions` should omit or conditionally structure the `provider` option instead of setting it to `null`.

According to OpenRouter SDK documentation, the `provider` configuration option accepts an object with fields like `order`, `allowFallbacks`, `requireParameters`, `sort`, and `data_collection`. Setting `provider: null` when `order.length === 0` is not a documented pattern and may cause the SDK to behave unexpectedly. Either omit the `provider` key entirely or restructure the return value to avoid passing `null` values for configuration objects.
🧹 Nitpick comments (16)
apps/web/utils/actions/settings.validation.ts (1)
3-3: AI settings schema and validation look coherent; consider centralizing provider predicates

The extended `aiProvider` enum, `aiBaseUrl` field, and `superRefine` logic (API key required for non-default/non-local providers; URL only allowed for Ollama/OpenAI/LM Studio) are consistent and match the intended feature set.

To reduce drift over time, you could centralize the "requires API key" and "supports custom base URL" checks in helpers (e.g., `providerRequiresApiKey`, `providerSupportsBaseUrl`) in `config.shared` and reuse them here and in server actions.

Also applies to: 26-42, 44-74
apps/web/utils/actions/settings.ts (1)
220-253: Helper mappings for AI fields and URL redaction are well-structured

`toUserAiFields` mirrors the validation rules (nulls for default, no API key for Ollama/LM Studio, base URL only for supported providers), and `redactUrls` does a reasonable job of stripping server URLs from error messages when using system-managed providers.

If you find similar provider-conditional mapping logic elsewhere, consider reusing `toUserAiFields` (or a shared helper) to keep behavior consistent between "test" and "save" flows.

docs/hosting/self-hosting.md (1)
47-60: Ollama self-hosting docs are clear; optional Docker networking note

The new Ollama section cleanly explains required env vars and restart steps, and correctly notes that no API key is needed. As an optional refinement, you could add a short note that `host.docker.internal` may need explicit configuration on some Linux Docker setups, but this isn't blocking.

apps/web/utils/llms/index.ts (1)
340-386: APICallError logging enhances diagnostics; consider extra care with payload content

Logging `statusCode`, `url`, and a truncated `responseBody` for `APICallError` will be very helpful when debugging provider issues, and it's wired in before the existing error-type checks without changing behavior. Given this is LLM traffic, you might optionally consider:

- Ensuring logs can't be accessed by untrusted operators, and/or
- Further restricting the logged `responseBody` (e.g., only for 4xx/5xx, or mapping to known error codes/messages instead of raw text)

Not a blocker, but worth thinking about for deployments with stricter data-in-logs policies. Based on learnings, this still fits the expectation to have good, scoped logging around LLM calls.
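The restriction suggested in the second bullet can be sketched as a small helper. This is a hypothetical illustration (the function name and shape are not from the PR): log the body only for 4xx/5xx statuses, truncated to a fixed length.

```typescript
// Sketch of stricter APICallError logging: only include the response body
// for error statuses (>= 400), and truncate it to a fixed maximum length.
// Hypothetical helper, not the PR's actual logging code.
const MAX_LOGGED_BODY = 500;

function buildApiCallErrorLog(error: {
  statusCode?: number;
  url?: string;
  responseBody?: string;
}): { statusCode?: number; url?: string; responseBody?: string } {
  const { statusCode, url, responseBody } = error;
  const isErrorStatus = statusCode !== undefined && statusCode >= 400;
  return {
    statusCode,
    url,
    // Omit bodies for non-error statuses; truncate the rest.
    responseBody: isErrorStatus
      ? responseBody?.slice(0, MAX_LOGGED_BODY)
      : undefined,
  };
}
```

The same shape could be extended to map known provider error codes to fixed messages instead of passing raw text through.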
docs/hosting/environment-variables.md (1)
53-65: Document LM_STUDIO_BASE_URL alongside other LLM env vars

The new entries for `OPENAI_BASE_URL`, `OLLAMA_BASE_URL`, and `ALLOW_USER_AI_PROVIDER_URL` plus the security section read well, but `LM_STUDIO_BASE_URL` (present in `apps/web/env.ts`) isn't listed in the "All Environment Variables" table. Consider adding a row for it with an example (e.g., `http://localhost:1234`) so LM Studio setup is discoverable and consistent with the code.

Also applies to: 105-129
apps/web/env.ts (1)
52-54: Consider stricter URL validation for new base URL env vars
OPENAI_BASE_URLandLM_STUDIO_BASE_URLare currentlyz.string().optional(). To catch typos and malformed URLs earlier, consider switching both toz.string().url().optional(), which aligns with howDATABASE_URLis validated and matches the expectedhttp(s)://…format used by the model code.Also applies to: 62-64, 125-125
apps/web/utils/llms/model.test.ts (1)
191-228: Avoid non-null assertions on Provider enum values

The uses of `Provider.OLLAMA!` can safely drop the `!`; enum members should already be non-nullable, and relying on `!` is discouraged in this codebase.

Also applies to: 253-305
apps/web/app/api/ai/lmstudio-models/route.ts (1)
6-21: Missing response type export for client-side type safety.

Per coding guidelines, GET API routes should export response types using the `Awaited<ReturnType<>>` pattern. Add a type export for client consumption.

```diff
 export type LmStudioModelsResponse = {
   object: string;
   data: LmStudioModel[];
 };
+export type GetLmStudioModelsResponse = LmStudioModel[];
```

apps/web/app/api/ai/ollama-models/route.ts (1)
6-24: Missing response type export for client-side type safety.

Similar to the LM Studio route, add a response type export per coding guidelines.

```diff
 export type OllamaModelsResponse = {
   models: OllamaModel[];
 };
+export type GetOllamaModelsResponse = OllamaModel[];
```

apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx (3)
154-161: LM Studio model fetching logic may be confusing.

When `allowUserAiProviderUrl` is true but no `aiBaseUrl` is provided, `lmStudioModelsUrl` becomes `null` and no models are fetched. This is intentional per the comment, but the UX could be clearer: the user sees a "Refresh models" button, but clicking it won't work until they enter a URL.

Consider disabling the refresh button or showing a clearer message when a URL is required but not provided.
224-241: Simplify the retry loop logic.

The current retry logic has some redundancy with the early returns on lines 232-235. Both paths return immediately, making the `continue` on line 238 unreachable after a successful result.

```diff
 for (let attempt = 0; attempt < 2; attempt++) {
   if (attempt === 1) {
     setTestingMessage("Retrying in 5 seconds...");
     await new Promise((resolve) => setTimeout(resolve, 5000));
   }
   try {
     const result = await runTestOnce(data);
-    if (result !== "validation") {
-      return;
-    }
+    // Exit on success or validation error (no point retrying validation)
     return;
   } catch (error) {
     lastError = error;
     if (attempt === 0) continue;
     throw error;
   }
 }
```
90-91: Consider typing `modelsError` more specifically.

Using `any` for error types reduces type safety. Consider using `Error | undefined` or the specific error type from SWR.

```diff
- modelsError?: any;
+ modelsError?: Error;
```

apps/web/utils/llms/model.ts (3)
265-296: Ollama provider implementation looks correct with proper URL normalization.

Good validation requiring a model name and proper security gating via `allowUserAiProviderUrl`. The URL normalization correctly handles the `/api` suffix.

Minor note: line 291 has an unnecessary non-null assertion (`Provider.OLLAMA!`) since `Provider.OLLAMA` is a string constant that cannot be null.

```diff
 return {
-  provider: Provider.OLLAMA!,
+  provider: Provider.OLLAMA,
   modelName,
```
542-544: `providerRequiresApiKey` helper is useful but could be more explicit.

Consider using a Set or exhaustive check for clarity and future maintainability.

```diff
 function providerRequiresApiKey(provider: string) {
-  return provider !== Provider.OLLAMA && provider !== Provider.LM_STUDIO;
+  const localProviders = new Set([Provider.OLLAMA, Provider.LM_STUDIO]);
+  return !localProviders.has(provider);
 }
```
114-140: URL normalization logic has overlapping conditions and a problematic path truncation behavior.

Line 123's condition (`path.startsWith("/v1/")`) discards everything after `/v1/` when returning `${url.origin}/v1`, which loses path prefixes for nested base URLs like `https://example.com/proxy/v1/completions`. Additionally, lines 122 and 130-131 both handle paths ending with `/v1`, creating redundancy.

The logic should prioritize finding the `/v1` segment and preserving any legitimate path prefix, rather than unconditionally truncating to just `/v1`. A regex-based approach could simplify this to: find the first `/v1` segment and truncate there, or append `/v1` if absent.

apps/web/utils/llms/config.ts (1)
28-35: Important note about static vs dynamic usage.

The comment at lines 29-30 is helpful for preventing misuse in client components. Consider adding a `@server-only` directive or using a naming convention to make this clearer.

To enforce server-only usage more explicitly:

```diff
+import "server-only";
+
 import { env } from "@/env";
```

This would cause a build error if accidentally imported in a client component.
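The regex-based simplification suggested for the `/v1` normalization in the model.ts nitpick above could look like the following. This is a sketch under the stated assumptions, not the PR's implementation: find the first `/v1` path segment and truncate there (preserving any prefix such as `/proxy`), or append `/v1` when no such segment exists.

```typescript
// Hypothetical regex-based normalization: keep any path prefix before the
// first "/v1" segment, drop everything after it, or append "/v1" if absent.
function normalizeOpenAiBaseUrl(baseUrl: string): string {
  const url = new URL(baseUrl);
  // Lazy match up to the first "/v1" that is followed by "/" or end-of-path.
  const match = url.pathname.match(/^(.*?\/v1)(?=\/|$)/);
  const path = match
    ? match[1]
    : `${url.pathname.replace(/\/+$/, "")}/v1`;
  return `${url.origin}${path}`;
}
```

This keeps a nested base URL like `https://example.com/proxy/v1/completions` intact as `https://example.com/proxy/v1` instead of collapsing it to `https://example.com/v1`.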
```ts
const { searchParams } = new URL(req.url);
const userProvidedUrl = searchParams.get("baseUrl");

// Use user-provided URL if allowed and provided, otherwise fall back to env
const baseUrl =
  allowUserAiProviderUrl && userProvidedUrl
    ? userProvidedUrl
    : env.LM_STUDIO_BASE_URL;
```
🧩 Analysis chain

🏁 Scripts executed (inspecting apps/web/app/api/ai/lmstudio-models/route.ts):

```shell
cat -n apps/web/app/api/ai/lmstudio-models/route.ts | head -100
sed -n '1,100p' apps/web/app/api/ai/lmstudio-models/route.ts | cat -n
wc -l apps/web/app/api/ai/lmstudio-models/route.ts
cat apps/web/app/api/ai/lmstudio-models/route.ts
```

Repository: elie222/inbox-zero
Length of output: 180
🌐 Web query:

> SSRF prevention Node.js private IP ranges validation user-provided URLs fetch

💡 Result:
Short checklist + minimal Node.js pattern to prevent SSRF when fetching user URLs:
- Parse & validate URL: only allow expected protocols (https/http) and ports.
- Resolve DNS to IP(s) server-side (don’t trust hostname string). Use dns.promises.lookup/all. Re-resolve after any redirect.
- Check each resolved IP against a deny-list of reserved ranges (loopback, RFC1918 private, link-local, multicast, unspecified, CGNAT, RFC4193 ULA, IPv4‑mapped IPv6, etc.). Use a maintained IP library (ipaddr.js) for correct IPv4/IPv6 checks. (github.com)
- Block requests that resolve to any denied range; also block if any redirect target resolves to denied ranges.
- Enforce network-level egress controls (firewall, proxy) as a last line of defense. (nodejs-security.com)
Minimal Node.js example (conceptual):
- npm install ipaddr.js
- use dns.promises.lookup/all, ipaddr.js to check ranges, and only then fetch.
Example (outline — adapt to your app):
```js
const dns = require('dns').promises;
const ipaddr = require('ipaddr.js');
const fetch = require('node-fetch');

async function isDeniedIP(ip) {
  const addr = ipaddr.process(ip); // handles IPv4-mapped IPv6
  const range = addr.range(); // e.g. 'loopback','private','linkLocal','multicast','reserved', etc.
  // treat anything not 'unicast' as denied, and explicitly deny known ranges:
  return range !== 'unicast';
}

async function safeFetch(urlString, opts) {
  const url = new URL(urlString);
  if (!['http:', 'https:'].includes(url.protocol)) throw new Error('Invalid protocol');
  const addrs = await dns.lookup(url.hostname, { all: true });
  for (const a of addrs) {
    if (await isDeniedIP(a.address)) throw new Error('Destination IP is prohibited');
  }
  // perform fetch but do NOT follow redirects automatically — handle them manually:
  const res = await fetch(url.href, { ...opts, redirect: 'manual' });
  if (res.status >= 300 && res.status < 400 && res.headers.get('location')) {
    const loc = new URL(res.headers.get('location'), url);
    // repeat DNS resolution + checks for loc.hostname before following
    // (re-run above checks)
    throw new Error('Redirects must be validated before following');
  }
  return res;
}
```
Notes / pitfalls:
- Don’t rely on simple string checks or blacklist packages with known bugs — use ipaddr.js or equivalent to correctly classify ranges (IPv6, IPv4-mapped addresses, multicast, link-local, etc.). There have been real-world pitfalls in some libraries. (nodejs-security.com)
- Time-of-check → time-of-use: re-resolve after redirects and just before making the connection.
- Use egress filtering (host firewall / proxy) to enforce rules even if app validation fails. (nodejs-security.com)
Want a fully runnable example (with complete redirect handling + recommended deny-list and unit tests)? I can provide one tailored to your environment (Node version, fetch lib, proxy).
Citations:
- 1: https://github.com/whitequark/ipaddr.js/?utm_source=openai
- 2: https://www.nodejs-security.com/learn/ssrf-in-nodejs/use-a-lib-for-public-private-ip-namespaces?utm_source=openai
- 3: https://www.nodejs-security.com/blog/dont-be-fooled-multicast-ssrf-bypass-private-ip?utm_source=openai
- 4: https://www.nodejs-security.com/learn/ssrf-in-nodejs/use-a-lib-for-public-private-ip-namespaces?utm_source=openai
Add IP range validation to prevent SSRF when allowUserAiProviderUrl is enabled.
URL format validation (lines 74-81) does not prevent SSRF attacks. Users can still probe internal services by providing URLs that resolve to private IP ranges (10.x, 172.16-31.x, 192.168.x, 127.x, etc.). Implement DNS resolution and IP range validation using a library like ipaddr.js, blocking loopback, RFC1918 private ranges, link-local, and multicast addresses. Also validate redirect targets before following them, as attackers can use redirects to bypass initial URL validation.
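The deny-list classification described above can be reduced to a minimal synchronous check for IPv4 literals. This is a hypothetical sketch only: real protection also requires resolving hostnames via DNS, handling IPv6 and IPv4-mapped addresses (e.g. with ipaddr.js), and re-checking redirect targets.

```typescript
// Sketch of an IPv4 deny-list check: classify a dotted-quad string as
// denied if it falls in loopback, RFC1918 private, link-local (which
// includes the 169.254.169.254 metadata endpoint), unspecified, or
// multicast/reserved ranges. Invalid input is denied by default.
function isDeniedIPv4(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (
    parts.length !== 4 ||
    parts.some((p) => !Number.isInteger(p) || p < 0 || p > 255)
  ) {
    return true; // not a valid IPv4 literal: deny by default
  }
  const [a, b] = parts;
  return (
    a === 127 || // loopback
    a === 10 || // RFC1918
    (a === 172 && b >= 16 && b <= 31) || // RFC1918
    (a === 192 && b === 168) || // RFC1918
    (a === 169 && b === 254) || // link-local / cloud metadata
    a === 0 || // unspecified
    a >= 224 // multicast and reserved
  );
}
```

A check like this would run after DNS resolution on every resolved address, not on the hostname string the user supplied.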
```ts
export const GET = withEmailAccount("api/ai/ollama-models", async (req) => {
  // Only allow custom URL if the feature is enabled via env var
  const { searchParams } = new URL(req.url);
  const customUrl = searchParams.get("baseUrl");

  // Security: Only use custom URL if ALLOW_USER_AI_PROVIDER_URL is enabled
  // Note: URL should include /api (e.g., http://localhost:11434/api)
  const baseUrl =
    allowUserAiProviderUrl && customUrl
      ? customUrl
      : env.OLLAMA_BASE_URL || "http://localhost:11434/api";
```
Inconsistency: Missing enablement check compared to LM Studio route.
The LM Studio route checks supportsLmStudio and returns 403 if disabled, but this route defaults to localhost:11434 even when Ollama isn't configured. This could lead to unexpected behavior or expose the endpoint when Ollama support isn't intended.
Consider adding a consistent enablement check:
```diff
 export const GET = withEmailAccount("api/ai/ollama-models", async (req) => {
+  // Import supportsOllama from config
+  if (!supportsOllama) {
+    return NextResponse.json(
+      { error: "Ollama is not enabled on this server" },
+      { status: 403 },
+    );
+  }
+
   // Only allow custom URL if the feature is enabled via env var
   const { searchParams } = new URL(req.url);
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/web/app/api/ai/ollama-models/route.ts around lines 47 to 57, add the
same enablement guard used in the LM Studio route: check the feature
flag/boolean that indicates Ollama support (e.g., supportsOllama or equivalent
env flag) before resolving baseUrl; if Ollama support is disabled, immediately
return a 403 response (forbidden). Keep the existing
allowUserAiProviderUrl/customUrl logic only when the feature is enabled and fall
back to env.OLLAMA_BASE_URL or the localhost default otherwise, ensuring you do
not expose the endpoint when Ollama is not configured.
Pull request overview
This PR adds support for local and custom AI providers (Ollama, LM Studio, and OpenAI-compatible servers), along with improvements to the bulk rule-running feature. The implementation includes dynamic model fetching, custom server URLs, and enhanced UI for configuration.
Key changes:

- Adds Ollama and LM Studio as AI provider options with dynamic model discovery
- Introduces `ALLOW_USER_AI_PROVIDER_URL` flag to allow users to configure custom AI provider URLs
- Updates bulk rule-running to optionally include read emails
- Adds comprehensive test coverage for new providers
Reviewed changes
Copilot reviewed 31 out of 32 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| apps/web/utils/llms/model.ts | Core provider logic for Ollama, LM Studio, and OpenAI-compatible servers with URL normalization |
| apps/web/app/api/ai/ollama-models/route.ts | API endpoint to fetch available Ollama models |
| apps/web/app/api/ai/lmstudio-models/route.ts | API endpoint to fetch available LM Studio models |
| apps/web/app/api/ai/models/route.ts | Updated OpenAI models endpoint to support custom base URLs |
| apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx | Enhanced UI for provider selection with dynamic model fetching and connection testing |
| apps/web/utils/actions/settings.validation.ts | Validation schema for AI settings including base URL |
| apps/web/utils/actions/settings.ts | Server actions for saving and testing AI settings |
| apps/web/app/(app)/[emailAccountId]/assistant/BulkRunRules.tsx | Added toggle to include read emails in bulk operations |
| apps/web/prisma/schema.prisma | Added aiBaseUrl field to User model |
| apps/web/env.ts | Added environment variables for custom provider URLs |
| apps/web/.env.example | Documented new configuration options with security warnings |
| docs/hosting/environment-variables.md | Comprehensive security documentation for ALLOW_USER_AI_PROVIDER_URL |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
```ts
const customUrl = searchParams.get("baseUrl");

// Security: Only use custom URL if ALLOW_USER_AI_PROVIDER_URL is enabled
// Note: URL should include /api (e.g., http://localhost:11434/api)
const baseUrl =
  allowUserAiProviderUrl && customUrl
    ? customUrl
    : env.OLLAMA_BASE_URL || "http://localhost:11434/api";
```
Security: Missing URL validation for SSRF protection

When `ALLOW_USER_AI_PROVIDER_URL` is enabled, the `customUrl` from query parameters is used without validation. This could allow users to make requests to:

- Internal network resources (e.g., `http://localhost:6379`, `http://169.254.169.254`)
- Private IP ranges (`http://192.168.x.x`, `http://10.x.x.x`)
- Cloud metadata endpoints

Add URL validation to:

- Validate it's a properly formatted URL
- Block private IP ranges and localhost (unless explicitly allowed)
- Block cloud metadata endpoints
- Consider using an allowlist approach instead

Similar validation is needed in the LM Studio endpoint.
```ts
// Validate URL format (defense-in-depth, frontend also validates)
try {
  new URL(baseUrl);
} catch {
  return NextResponse.json(
    { error: "Invalid URL format for baseUrl parameter" },
    { status: 400 },
  );
}
```
Security: Insufficient URL validation for SSRF protection

The URL validation at lines 74-81 only checks that it's a valid URL format, but doesn't prevent SSRF attacks. When `ALLOW_USER_AI_PROVIDER_URL` is enabled, users can still target:

- Internal services (`http://localhost:6379`, internal APIs)
- Private IP ranges (`http://192.168.x.x`, `http://10.x.x.x`, `http://172.16.x.x`)
- Cloud metadata endpoints (`http://169.254.169.254`)

This is the same critical security issue as in the Ollama endpoint. Add proper URL validation to block private IPs and sensitive endpoints.
```ts
// Security: Only use user's custom URL if ALLOW_USER_AI_PROVIDER_URL is enabled
// Normalize URL - strip /api if present, then add it back
const rawUrl =
  allowUserAiProviderUrl && aiBaseUrl
    ? aiBaseUrl
    : env.OLLAMA_BASE_URL || "http://localhost:11434";
const baseURL = `${rawUrl.replace(/\/api\/?$/, "")}/api`;
```
Security: SSRF vulnerability in Ollama provider configuration

Similar to the API endpoints, when `allowUserAiProviderUrl` is true and `aiBaseUrl` is user-provided, there's no validation to prevent SSRF attacks. The user-provided URL is directly used to make requests, which could target internal services.

This critical vulnerability exists in three places:

- Ollama provider (this location)
- LM Studio provider (line 307)
- OpenAI provider with custom baseURL (line 190)

All three need URL validation before use to prevent SSRF.
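Separately from the SSRF concern, the `/api` normalization expression shown in the diff above (`rawUrl.replace(/\/api\/?$/, "")` followed by appending `/api`) is idempotent, which is easy to confirm in isolation. The function wrapper here is for illustration only, not a name from the PR.

```typescript
// The /api normalization from the diff above, extracted for illustration:
// strip a trailing "/api" (with optional trailing slash), then append "/api".
// Applying it twice yields the same result as applying it once.
function normalizeOllamaBaseUrl(rawUrl: string): string {
  return `${rawUrl.replace(/\/api\/?$/, "")}/api`;
}
```

Because the expression is self-correcting for both `http://host` and `http://host/api` inputs, it tolerates users entering the base URL with or without the suffix.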
```ts
  ]),
  aiModel: z.string(),
  aiApiKey: z.string().optional(),
  aiBaseUrl: z.string().url().optional().or(z.literal("")),
```
The validation uses `z.string().url()`, which validates URL format but doesn't prevent SSRF attacks. The validation should also check for:

- Private IP addresses (localhost, 127.0.0.1, 192.168.x.x, 10.x.x.x, 172.16-31.x.x)
- Cloud metadata endpoints (169.254.169.254)
- Other non-routable addresses

Consider adding a custom Zod validator that checks against an SSRF blocklist or only allows specific patterns.
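A plain validator along those lines, which could back a Zod `.refine()`, might look like the following sketch. The blocklist and function name are hypothetical; hostname-level checks only catch IP literals and well-known names, so DNS-resolved checks at request time remain necessary.

```typescript
// Sketch of a blocklist check that could back a custom Zod refinement.
// Only catches IP literals and obvious hostnames; full SSRF protection
// still requires DNS resolution at request time.
const BLOCKED_HOSTNAMES = new Set(["localhost", "169.254.169.254"]);

function isAllowedAiBaseUrl(value: string): boolean {
  let url: URL;
  try {
    url = new URL(value);
  } catch {
    return false; // not a parseable URL
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  if (BLOCKED_HOSTNAMES.has(url.hostname)) return false;
  // Reject private/link-local IPv4 literals (10/8, 172.16/12, 192.168/16,
  // 127/8, 169.254/16).
  const m = url.hostname.match(/^(\d+)\.(\d+)\.\d+\.\d+$/);
  if (m) {
    const a = Number(m[1]);
    const b = Number(m[2]);
    if (
      a === 127 ||
      a === 10 ||
      (a === 172 && b >= 16 && b <= 31) ||
      (a === 192 && b === 168) ||
      (a === 169 && b === 254)
    ) {
      return false;
    }
  }
  return true;
}
```

Self-hosted deployments that legitimately point at localhost would need an explicit allowance, which is exactly the "unless explicitly allowed" caveat raised in the other review comments.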
```ts
const baseURL = normalizeOpenAiBaseUrl(
  (allowUserAiProviderUrl && baseUrl) || env.OPENAI_BASE_URL,
);
```
Security: User-provided URL used without validation

The `baseUrl` from the database (which can be user-provided when `ALLOW_USER_AI_PROVIDER_URL` is enabled) is passed directly to the OpenAI client without SSRF validation. This could allow requests to internal services.

The same SSRF vulnerability applies here as in the Ollama and LM Studio endpoints.
```ts
    return `${url.origin}${normalizedPath}`;
  } catch {
    const cleaned = baseUrl.replace(/\/+$/, "");
    return cleaned.endsWith("/v1") ? cleaned : `${cleaned}/v1`;
```
The `normalizeOpenAiBaseUrl` function has a potential bug in the catch block. When URL parsing fails, it falls back to string manipulation but doesn't validate that the result is a valid URL, which could lead to malformed URLs being returned.

Consider adding validation after the string manipulation in the catch block, or returning null if the URL cannot be properly normalized.
```diff
-    return cleaned.endsWith("/v1") ? cleaned : `${cleaned}/v1`;
+    const normalized = cleaned.endsWith("/v1") ? cleaned : `${cleaned}/v1`;
+    try {
+      // Validate the normalized string is a valid URL
+      new URL(normalized);
+      return normalized;
+    } catch {
+      return null;
+    }
```
```ts
  aiProvider: true,
  aiModel: true,
  aiApiKey: true,
  aiBaseUrl: true,
```
you sure we need this stored at the user level?
think it's best if we just support ollama/lmstudio at the env level. or is there another need for this here?
```diff
@@ -69,8 +71,18 @@ export function BulkRunRules() {
   {data && (
     <>
       <SectionDescription>
```

```ts
// Server-side config passed to client
supportsOllama,
supportsLmStudio,
allowUserAiProviderUrl,
```
this doesn't have anything to do with the user, so not sure why we need this here?
Thanks for the PR. The main issue is that I don't want to support this at the user level. Ollama/LM Studio can stay for self-hosters, but it's not something that each user of the product needs to change. Supporting it at the user level adds complexity, makes the db call slightly slower, and it's not something I see a real need for.
elie222 left a comment:
Remove unrelated code from the PR. Don't enable Ollama/LMStudio for individual users. Aim to make the PR way simpler. Stick with one change at a time, i.e. only adding Ollama. You can add LMStudio and other changes in separate PRs.
Added a PR here, but I haven't tested it at all:
This pull request adds support for local and custom AI providers (Ollama, LM Studio, and OpenAI-compatible servers) to the app's model selection and configuration UI, and improves the bulk rule-running feature by allowing users to include read emails. The changes enhance flexibility for users and admins, making it easier to connect to different AI backends and configure them per account.
AI Provider Configuration & Model Selection

Updated `ModelSection.tsx` to support Ollama and LM Studio providers, including dynamic fetching of available models and custom server URLs. The form now adapts based on provider type and admin/user permissions, with improved error handling and connection testing. (apps/web/app/(app)/[emailAccountId]/settings/ModelSection.tsx R19-R49, R88-R116, L76-R167, R189-R300, R326-R412, L158-R423)

Environment Configuration

Updated `.env.example` to document new environment variables for custom and local AI providers, including `OPENAI_BASE_URL`, `OLLAMA_BASE_URL`, `LM_STUDIO_BASE_URL`, and `ALLOW_USER_AI_PROVIDER_URL`. [1] [2]

Bulk Rule-Running Feature

Added a toggle (`BulkRunRules.tsx`) to let users choose whether to include read emails, updated descriptions, and adjusted backend logic to filter emails accordingly. (apps/web/app/(app)/[emailAccountId]/assistant/BulkRunRules.tsx R46, L72-R85, R100-R124, L134-R171, L166-R205, L186-R229, L210-R253)

Error Handling & User Feedback
These changes make it easier for users to connect to a variety of AI providers, configure them flexibly, and run bulk operations with more control.
Summary by CodeRabbit
Release Notes
New Features
Improvements
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.