Add back ollama support #1083

Merged: elie222 merged 3 commits into main from feat/ollama on Dec 10, 2025

Conversation

@elie222 elie222 (Owner) commented Dec 9, 2025

Restore the Ollama provider in the web app by loading Provider.OLLAMA from the server environment and selecting models via selectModel in utils/llms/model.ts, using OLLAMA_MODEL and OLLAMA_BASE_URL.

Switch to ollama-ai-provider-v2, wire Provider.OLLAMA to the server environment via OLLAMA_MODEL, and implement the Provider.OLLAMA branch in selectModel to create the model from createOllama({ baseURL: env.OLLAMA_BASE_URL })(env.OLLAMA_MODEL). Update env validation, docs, and the Turbo passthrough to use OLLAMA_MODEL instead of NEXT_PUBLIC_OLLAMA_MODEL.
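The wiring described above can be sketched as follows; the createOllama factory is stubbed here with an assumed shape, since the real implementation imports it from ollama-ai-provider-v2:

```typescript
// Stub standing in for createOllama from ollama-ai-provider-v2 (assumed shape for this sketch).
type LanguageModel = { provider: string; modelId: string };

const createOllama =
  (options: { baseURL?: string }) =>
  (modelId: string): LanguageModel => ({ provider: "ollama", modelId });

const env = {
  OLLAMA_MODEL: process.env.OLLAMA_MODEL ?? "llama3.1",
  // Optional; the provider's default base URL applies when unset.
  OLLAMA_BASE_URL: process.env.OLLAMA_BASE_URL,
};

// Mirrors the Provider.OLLAMA branch added to selectModel in utils/llms/model.ts.
function selectOllamaModel(): { model: LanguageModel; backupModel: null } {
  if (!env.OLLAMA_MODEL) {
    throw new Error("OLLAMA_MODEL environment variable is not set");
  }
  const model = createOllama({ baseURL: env.OLLAMA_BASE_URL })(env.OLLAMA_MODEL);
  // Backup models are disabled for local Ollama.
  return { model, backupModel: null };
}
```

Keeping both variables server-side means the model name and base URL never reach the browser bundle.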

📍 Where to Start

Start with the selectModel implementation for Provider.OLLAMA in model.ts, then review provider exposure in config.ts and env parsing in env.ts.


📊 Macroscope summarized 2941364. 4 files reviewed, 6 issues evaluated, 6 issues filtered, 0 comments posted

🗂️ Filtered Issues

apps/web/env.ts — 0 comments posted, 2 evaluated, 2 filtered
  • line 62: Contract change: client env NEXT_PUBLIC_OLLAMA_MODEL was removed from client and experimental__runtimeEnv, while a server-only OLLAMA_MODEL was added (line 62). Any client code that previously relied on env.client.NEXT_PUBLIC_OLLAMA_MODEL or the variable being exposed to the browser will now receive undefined and may misbehave. If the intent was to move this to server-only, ensure all client usages are removed or replaced. [ Low confidence ]
  • line 151: Client env var NEXT_PUBLIC_FREE_UNSUBSCRIBE_CREDITS is defined as z.number().default(5) (line 151) but environment variables are strings, and experimental__runtimeEnv passes through process.env.NEXT_PUBLIC_FREE_UNSUBSCRIBE_CREDITS as a string (line 228). If the variable is set (e.g., "5" or "0"), z.number() will fail validation instead of coercing. Use z.coerce.number() to accept numeric strings or ensure the var is unset to use the default. [ Out of scope ]
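The coercion pitfall flagged above can be illustrated without zod; a plain-TypeScript analogue of z.number() versus z.coerce.number().default(5), with illustrative helper names:

```typescript
// Env vars always arrive as strings, so strict numeric validation rejects "5".
function strictNumber(value: unknown): number {
  if (typeof value !== "number") {
    throw new Error("Expected number, received " + typeof value);
  }
  return value;
}

// Analogue of z.coerce.number().default(5): coerce the string before validating.
function coercedNumber(value: unknown, fallback = 5): number {
  if (value === undefined) return fallback;
  const n = Number(value);
  if (Number.isNaN(n)) throw new Error("Expected a numeric string");
  return n;
}
```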
apps/web/utils/actions/settings.validation.ts — 0 comments posted, 1 evaluated, 1 filtered
  • line 28: saveAiSettingsBody's enum omits Provider.AI_GATEWAY, while the UI exposes it via providerOptions. This means any submission with aiProvider: 'aigateway' will fail validation. Include Provider.AI_GATEWAY in the enum or remove it from the options to maintain contract parity. [ Out of scope ]
apps/web/utils/llms/config.ts — 0 comments posted, 1 evaluated, 1 filtered
  • line 23: The form options include Provider.AI_GATEWAY via providerOptions, allowing users to select "AI Gateway", but the schema saveAiSettingsBody enum does not include Provider.AI_GATEWAY. This creates a contract mismatch: selecting AI Gateway in the UI will fail Zod validation on submit. Align the enum with the options or remove the option. [ Low confidence ]
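One way to keep the options and the validation schema from drifting, as both of the mismatch findings above suggest, is to derive the allowed values from a single source. This is a sketch with assumed option values, not the actual code:

```typescript
// Single source of truth for the provider options shown in the UI (values assumed).
const providerOptions = [
  { label: "Anthropic", value: "anthropic" },
  { label: "OpenAI", value: "openai" },
  { label: "AI Gateway", value: "aigateway" },
] as const;

type AiProvider = (typeof providerOptions)[number]["value"];

const allowedProviders = new Set<string>(providerOptions.map((o) => o.value));

// Validation derived from the same list: adding an option updates the schema too.
function validateAiProvider(value: string): AiProvider {
  if (!allowedProviders.has(value)) {
    throw new Error(
      `aiProvider must be one of: ${[...allowedProviders].join(", ")}`,
    );
  }
  return value as AiProvider;
}
```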
apps/web/utils/llms/model.ts — 0 comments posted, 2 evaluated, 2 filtered
  • line 141: In the Provider.OLLAMA switch case, the guard if (!provider) throw new Error("Provider.OLLAMA is not defined"); is ineffective. When env.OLLAMA_MODEL is unset, Provider.OLLAMA is undefined, so the case Provider.OLLAMA label becomes case undefined and can never match an input of aiProvider === 'ollama'. The code falls through to the default case and throws a misleading "LLM provider not supported" error instead of the intended "OLLAMA_MODEL environment variable is not set" message. Consider handling the string literal 'ollama' explicitly or validating earlier with a clear error. [ Already posted ]
  • line 165: In the BEDROCK case, the code uses the non-null assertions env.BEDROCK_ACCESS_KEY! and env.BEDROCK_SECRET_KEY!, but these env vars are optional. If selectModel is called with aiProvider set to bedrock while the credentials are unset, this yields undefined credentials at runtime, likely causing provider initialization failures. Add explicit validation and a clear error before constructing the provider. [ Out of scope ]
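The unreachable-case finding above can be demonstrated in isolation. The names below follow the review, not the actual code; the requireEnv helper is a hypothetical illustration of the explicit validation suggested for the Bedrock credentials:

```typescript
// Provider.OLLAMA is derived from an env var, so it may be undefined.
type ProviderMap = { OLLAMA: string | undefined };

function selectModelSketch(aiProvider: string, Provider: ProviderMap): string {
  switch (aiProvider) {
    // When Provider.OLLAMA is undefined this label is `case undefined`,
    // which a string input can never match, so control falls to `default`.
    case Provider.OLLAMA:
      return "ollama model selected";
    default:
      throw new Error("LLM provider not supported");
  }
}

// With OLLAMA_MODEL unset, 'ollama' hits the misleading default error.
function demoUnreachable(): string {
  try {
    return selectModelSketch("ollama", { OLLAMA: undefined });
  } catch (e) {
    return (e as Error).message;
  }
}

// Hypothetical guard of the kind suggested for the optional Bedrock credentials.
function requireEnv(name: string, value: string | undefined): string {
  if (!value) throw new Error(`${name} must be set for this provider`);
  return value;
}
```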

Summary by CodeRabbit

  • New Features

    • Ollama model configuration now managed server-side for improved security and control.
  • Chores

    • Updated Ollama provider dependency to latest version.
    • Restructured environment variable configuration; refer to updated documentation for setup instructions.
    • Version bumped to v2.21.59.



vercel bot commented Dec 9, 2025

The latest updates on your projects:

inbox-zero: Ready (Preview), updated Dec 10, 2025 2:19am UTC

Contributor

coderabbitai bot commented Dec 9, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

This pull request refactors Ollama model configuration from public client-side to private server-side environment variables, updates the Ollama provider package to ollama-ai-provider-v2 (1.5.5), and adjusts provider validation, configuration, and model selection logic accordingly. The version is bumped to v2.21.59.

Changes

  • Environment & Build Configuration (apps/web/env.ts, turbo.json, docs/hosting/environment-variables.md): Moved OLLAMA_MODEL from the client-side public schema (NEXT_PUBLIC_OLLAMA_MODEL) to the server-side private schema; updated the turbo.json env lists and documentation to match.
  • Ollama Provider Package (apps/web/package.json): Replaced ollama-ai-provider 1.2.0 with ollama-ai-provider-v2 1.5.5.
  • Provider Validation & Configuration (apps/web/utils/actions/settings.validation.ts, apps/web/utils/llms/config.ts): Removed Provider.OLLAMA from the allowed aiProvider enum in validation; updated the Provider object to conditionally include OLLAMA based on env.OLLAMA_MODEL; removed the Ollama entry from providerOptions.
  • Model Selection & Tests (apps/web/utils/llms/model.ts, apps/web/utils/llms/model.test.ts): Implemented Ollama provider support in selectModel using ollama-ai-provider-v2, reading OLLAMA_MODEL and OLLAMA_BASE_URL from the environment; updated test mocks and added an env-based Ollama configuration test.
  • Version Bump (version.txt): Incremented from v2.21.58 to v2.21.59.
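The conditional OLLAMA inclusion in config.ts can be sketched with a conditional object spread (the env shape and provider values are assumptions for illustration):

```typescript
// OLLAMA only appears on the Provider object when OLLAMA_MODEL is configured.
function buildProvider(ollamaModel: string | undefined) {
  return {
    ANTHROPIC: "anthropic",
    OPENAI: "openai",
    ...(ollamaModel ? { OLLAMA: "ollama" } : {}),
  };
}
```

This is why the validation enum no longer lists Ollama statically: the provider key itself only exists when the server env enables it.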

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

  • apps/web/utils/llms/model.ts: Verify Ollama provider implementation correctly instantiates and handles OLLAMA_MODEL/OLLAMA_BASE_URL environment variables with proper validation.
  • apps/web/utils/llms/config.ts: Confirm conditional logic for OLLAMA provider inclusion based on env.OLLAMA_MODEL is correct and doesn't create edge cases with missing env vars.
  • apps/web/package.json & imports: Ensure ollama-ai-provider-v2 1.5.5 is compatible with the new implementation; verify all import statements match the package upgrade.
  • Provider enum removal: Validate that removing Provider.OLLAMA from settings validation doesn't break existing workflows or cause runtime errors.


Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title 'Add back ollama support' accurately reflects the primary objective of the pull request, which re-enables Ollama integration through environment configuration changes, dependency updates, and provider logic modifications.
  • Description check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.


@elie222 elie222 mentioned this pull request Dec 9, 2025

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 10 files


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
apps/web/utils/llms/model.ts (1)

9-9: Ollama integration wiring is coherent; consider a tiny cleanup

  • The new Ollama branch correctly:
    • Uses env.OLLAMA_MODEL and fails fast if it’s missing.
    • Uses env.OLLAMA_BASE_URL for the provider.
    • Disables backup models by setting backupModel: null, which makes sense for local Ollama.
  • getProviderApiKey returning "ollama-local" is a reasonable way to satisfy existing selection logic when Provider.OLLAMA is enabled.

Two optional nits:

  • The if (!provider) throw new Error("Provider.OLLAMA is not defined"); check is effectively unreachable given how Provider.OLLAMA is defined; you could drop it to reduce noise.
  • If you intend to support ECONOMY_LLM_PROVIDER / CHAT_LLM_PROVIDER = "ollama" with different models, note that the current implementation always uses env.OLLAMA_MODEL and ignores the aiModel argument for Ollama; if that’s intentional, it might be worth a brief comment in the Ollama case for future readers.

Also applies to: 141-152, 347-347
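If per-tier Ollama models are ever wanted, the branch could prefer an explicit aiModel and fall back to the env default. This is a hypothetical helper sketching that suggestion; the current implementation always uses env.OLLAMA_MODEL:

```typescript
// Hypothetical: prefer a caller-supplied model, else the env-configured default.
function resolveOllamaModel(
  aiModel: string | null,
  envModel: string | undefined,
): string {
  const model = aiModel ?? envModel;
  if (!model) {
    throw new Error("OLLAMA_MODEL environment variable is not set");
  }
  return model;
}
```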

apps/web/utils/llms/model.test.ts (1)

34-36: Ollama tests correctly exercise env‑driven config; optional mock cleanup

  • The new ollama-ai-provider-v2 mock and env mock fields (OLLAMA_BASE_URL, OLLAMA_MODEL) are consistent with the implementation.
  • The "should configure Ollama model correctly via env vars" test covers the main happy path (provider, modelName, model existence, and backupModel === null) and gives good regression coverage for the new wiring.

Minor optional cleanup:

  • The vi.mock("./config", ...) adding supportsOllama: true is now redundant since supportsOllama was removed from config.ts. You can safely drop that mock to simplify the test setup.

Also applies to: 53-58, 63-69, 156-172

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b5f1239 and 4ac6d76.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (9)
  • apps/web/env.ts (1 hunks)
  • apps/web/package.json (1 hunks)
  • apps/web/utils/actions/settings.validation.ts (0 hunks)
  • apps/web/utils/llms/config.ts (1 hunks)
  • apps/web/utils/llms/model.test.ts (3 hunks)
  • apps/web/utils/llms/model.ts (3 hunks)
  • docs/hosting/environment-variables.md (1 hunks)
  • turbo.json (1 hunks)
  • version.txt (1 hunks)
💤 Files with no reviewable changes (1)
  • apps/web/utils/actions/settings.validation.ts
🧰 Additional context used
📓 Path-based instructions (20)
🧠 Learnings (26)

Applied to files:

  • apps/web/utils/llms/model.test.ts
📚 Learning: 2025-11-25T14:40:00.833Z
Learnt from: CR
Repo: elie222/inbox-zero PR: 0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-11-25T14:40:00.833Z
Learning: Applies to **/*.test.{ts,tsx} : Use `vi.clearAllMocks()` in `beforeEach` to clean up mocks between tests

Applied to files:

  • apps/web/utils/llms/model.test.ts
📚 Learning: 2025-11-25T14:40:00.833Z
Learnt from: CR
Repo: elie222/inbox-zero PR: 0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-11-25T14:40:00.833Z
Learning: Applies to **/*.test.{ts,tsx} : Mock external dependencies in tests

Applied to files:

  • apps/web/utils/llms/model.test.ts
🧬 Code graph analysis (1)
apps/web/utils/llms/model.ts (2)
apps/web/env.ts (1)
  • env (17-246)
apps/web/utils/llms/config.ts (1)
  • Provider (5-14)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: test
  • GitHub Check: Review for correctness
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (6)
version.txt (1)

1-1: Version bump is consistent with PR scope

Version update looks good and matches the Ollama-related changes elsewhere.

turbo.json (1)

45-45: Adding OLLAMA_MODEL to build env is correct

Including OLLAMA_MODEL in tasks.build.env keeps Turbo’s build dependencies in sync with the new server env var. Based on learnings, this is the right place for it.
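As a sketch, the relevant `turbo.json` fragment would look roughly like this (surrounding keys omitted; only `OLLAMA_MODEL` is confirmed by this PR, the rest is assumed shape):

```json
{
  "tasks": {
    "build": {
      "env": ["OLLAMA_MODEL"]
    }
  }
}
```

Listing the variable under `tasks.build.env` tells Turborepo that build outputs depend on it, so changing `OLLAMA_MODEL` correctly invalidates the build cache.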

apps/web/utils/llms/config.ts (1)

13-13: Conditional exposure of Provider.OLLAMA is a good safeguard

Tying Provider.OLLAMA to env.OLLAMA_MODEL keeps the provider only available when it’s actually configured, which matches the server-only env design.

docs/hosting/environment-variables.md (1)

63-67: Docs now correctly reflect server‑side Ollama config and QStash keys

Switching to OLLAMA_MODEL in the Ollama section and documenting QSTASH_NEXT_SIGNING_KEY keeps this page aligned with the env schema and server-only model config.

apps/web/package.json (1)

136-136: Verify Ollama provider package migration

Switching to ollama-ai-provider-v2@1.5.5 appears consistent with the codebase changes. Ensure:

  • No remaining imports from the old ollama-ai-provider package exist
  • The new package is properly imported where needed (e.g., utils/llms/model.ts)
apps/web/env.ts (1)

6-15: Env schema for Ollama is consistent with server-only configuration

Adding "ollama" to llmProviderEnum and defining OLLAMA_MODEL as a server-only optional string aligns with the new Ollama integration. Verify that OLLAMA_MODEL is also present in apps/web/.env.example with a sample value for self-hosters.

@@ -139,16 +139,17 @@ function selectModel(
};
}
case Provider.OLLAMA: {
case Provider.OLLAMA is unreachable when env.OLLAMA_MODEL is unset (since Provider.OLLAMA is then undefined). This makes aiProvider === 'ollama' hit the default with a misleading error. Consider matching the literal 'ollama' (or validate earlier) so the intended OLLAMA_MODEL error is thrown.

Suggested change
case Provider.OLLAMA: {
case "ollama": {


@elie222 elie222 merged commit de6861d into main Dec 10, 2025
13 checks passed
@elie222 elie222 deleted the feat/ollama branch December 10, 2025 02:37