
Chore/update docker workflow for fork #1051

Closed
duartebarbosadev wants to merge 6 commits into elie222:main from duartebarbosadev:chore/update-docker-workflow-for-fork

Conversation


@duartebarbosadev duartebarbosadev commented Dec 3, 2025

This pull request updates the codebase to support the new ollama-ai-provider-v2 package and adds first-class support for local Ollama models as an AI provider. It also improves environment variable handling, validation logic, and documentation for easier self-hosting with Ollama. The workflow configuration is updated to reflect repository ownership changes.

Ollama provider upgrade and integration:

  • Replaced the old ollama-ai-provider package with ollama-ai-provider-v2 in apps/web/package.json, all imports, and mocks, updating code to use the new API.
  • Refactored Ollama model selection logic in apps/web/utils/llms/model.ts to support specifying the model via environment variables and to use a default base URL if not set. Ollama is now selectable without an API key.
  • Added and improved tests for Ollama support in model.test.ts, including validation for missing models and base URLs.
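
The refactored selection logic described above can be sketched as a small helper. This is an illustration only: the env var names (`NEXT_PUBLIC_OLLAMA_MODEL`, `OLLAMA_BASE_URL`) and the default base URL come from this PR, but the helper name `resolveOllamaConfig` and the exact error message are hypothetical.

```typescript
// Sketch of env-driven Ollama model selection as described in this PR.
// resolveOllamaConfig is a hypothetical helper; the env var names and
// the default base URL are taken from the PR description.
const DEFAULT_OLLAMA_BASE_URL = "http://localhost:11434";

interface OllamaEnv {
  NEXT_PUBLIC_OLLAMA_MODEL?: string;
  OLLAMA_BASE_URL?: string;
}

function resolveOllamaConfig(env: OllamaEnv): { model: string; baseURL: string } {
  const model = env.NEXT_PUBLIC_OLLAMA_MODEL;
  if (!model) {
    // Mirrors the "missing model" validation the PR adds in model.ts.
    throw new Error("Ollama model not set");
  }
  return { model, baseURL: env.OLLAMA_BASE_URL ?? DEFAULT_OLLAMA_BASE_URL };
}
```

With this shape, a missing model is a hard error while a missing base URL silently falls back to the local default, which matches the tests the PR adds for both cases.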

Validation and environment variable improvements:

  • Updated validation logic in settings.validation.ts so Ollama does not require an API key, and improved provider API key checks for economy/chat model selection.
  • Added documentation for configuring Ollama as a local provider, including environment variable setup and Docker instructions in self-hosting.md.

Repository and workflow updates:

  • Changed Docker workflow configuration and permissions to use the new repository owner, updating all occurrences of elie222 to duartebarbosadev in .github/workflows/build_and_publish_docker.yml.

Dependency and lockfile updates:

  • Updated pnpm-lock.yaml to reflect new dependencies, removed unused packages, and upgraded related provider utilities to match the new Ollama provider.

Summary by CodeRabbit

  • New Features

    • Added support for using Ollama as a local LLM provider without requiring an API key.
  • Documentation

    • Added comprehensive guide for configuring and self-hosting with Ollama, including environment variables and docker-compose setup instructions.


Copilot AI review requested due to automatic review settings December 3, 2025 15:33

vercel bot commented Dec 3, 2025

@duartebarbosadev is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Dec 3, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

The PR migrates Ollama provider support from ollama-ai-provider to ollama-ai-provider-v2, implements actual Ollama initialization and model handling instead of error-throwing, updates validation to exclude Ollama from API key requirements, and switches GitHub Actions workflow repository context from elie222 to duartebarbosadev.

Changes

  • GitHub Actions workflow update (.github/workflows/build_and_publish_docker.yml): Updated the Docker build workflow to reference the duartebarbosadev repository context instead of elie222 across the DOCKER_USERNAME environment variable and repository conditionals.
  • Ollama provider dependency migration (apps/web/package.json): Replaced ollama-ai-provider (v1.2.0) with ollama-ai-provider-v2 (^1.5.5).
  • AI settings validation (apps/web/utils/actions/settings.validation.ts): Refined validation to exclude the OLLAMA provider from the API key requirement via a new requiresApiKey flag; only non-default, non-Ollama providers require API keys.
  • LLM model implementation (apps/web/utils/llms/model.ts): Implemented Ollama provider support with runtime initialization via createOllama, added provider-specific API key validation through a providerRequiresApiKey helper, and updated model selection logic to allow Ollama without an API key.
  • LLM model tests (apps/web/utils/llms/model.test.ts): Updated Ollama mocks to use ollama-ai-provider-v2 and createOllama, enabled the real Ollama configuration test path, and added tests for the missing-Ollama-model error and the default base URL fallback.
  • Ollama self-hosting documentation (docs/hosting/self-hosting.md): Added a new subsection with Ollama configuration instructions, environment variables (NEXT_PUBLIC_OLLAMA_MODEL, OLLAMA_BASE_URL), a docker-compose example, and notes on API key requirements.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Areas requiring extra attention:

  • apps/web/utils/llms/model.ts: Core logic changes for Ollama initialization, provider-specific validation, and model selection flow require careful verification of null/undefined handling and baseURL defaults
  • apps/web/utils/actions/settings.validation.ts: Validation gate logic change to exclude OLLAMA from API key requirement should be cross-verified against model.ts logic
  • apps/web/utils/llms/model.test.ts: New test coverage for Ollama paths should be reviewed for completeness and proper mock isolation from v2 provider


Poem

🐰 A rabbit hops through code so bright,
Where Ollama runs local through the night,
No API keys for self-hosted cheer,
V2 provider makes models clear,
Local LLMs now within our reach! 🎉

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between de3bc27 and 188e2d0.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (6)
  • .github/workflows/build_and_publish_docker.yml (3 hunks)
  • apps/web/package.json (1 hunks)
  • apps/web/utils/actions/settings.validation.ts (1 hunks)
  • apps/web/utils/llms/model.test.ts (5 hunks)
  • apps/web/utils/llms/model.ts (6 hunks)
  • docs/hosting/self-hosting.md (1 hunks)


@duartebarbosadev
Author

My bad, I intended to do this on my fork to test the Ollama integration.
Will do this right once it is properly tested.


macroscopeapp bot commented Dec 3, 2025

Update Docker workflow for fork and add Ollama provider support across model selection and validation

  • Point CI Docker publish checks and username to duartebarbosadev/inbox-zero in build_and_publish_docker.yml.
  • Add Provider.OLLAMA support in Models.getModel, requiring aiModel, using process.env.OLLAMA_BASE_URL defaulting to http://localhost:11434, and setting no backup model in model.ts.
  • Gate API key checks behind providerRequiresApiKey, allowing OLLAMA without an API key in model selection paths in model.ts.
  • Stop requiring an API key for OLLAMA in settings validation in settings.validation.ts.
  • Switch to ollama-ai-provider-v2@^1.5.5 and update tests for Ollama configuration in model.test.ts.
  • Document Ollama self-hosting setup in self-hosting.md.

📍 Where to Start

Start with the OLLAMA branch in Models.getModel and related helpers in model.ts.
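
A self-contained sketch of that branch follows. createOllama is stubbed locally here as a stand-in for the ollama-ai-provider-v2 factory (the real factory returns language-model instances and its signature may differ), and getOllamaModel is a hypothetical extraction of the OLLAMA branch, not the actual Models.getModel code.

```typescript
// Local stand-in for the createOllama factory from ollama-ai-provider-v2;
// it just records the configuration so the branch logic can be shown.
function createOllama(opts: { baseURL: string }) {
  return (modelId: string) => ({ modelId, baseURL: opts.baseURL });
}

// Sketch of the OLLAMA branch summarized above: requires aiModel,
// defaults the base URL to the local endpoint, and sets no backup model.
function getOllamaModel(aiModel: string | undefined, ollamaBaseUrl?: string) {
  if (!aiModel) {
    throw new Error("Ollama model must be specified");
  }
  const ollama = createOllama({
    baseURL: ollamaBaseUrl ?? "http://localhost:11434",
  });
  return { model: ollama(aiModel), backupModel: null };
}
```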


Macroscope summarized 188e2d0.


Copilot AI left a comment


Pull request overview

This PR migrates from the deprecated ollama-ai-provider to ollama-ai-provider-v2, restoring first-class support for local Ollama models and updating the repository ownership references for a fork.

  • Upgraded Ollama provider package and updated all related code to use the new v2 API
  • Enhanced Ollama configuration to not require API keys and support environment-based model selection
  • Updated GitHub Actions workflow to reflect new repository ownership (duartebarbosadev/inbox-zero)

Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 2 comments.

Summary per file:

  • apps/web/package.json: Updated dependency from ollama-ai-provider@1.2.0 to ollama-ai-provider-v2@^1.5.5
  • pnpm-lock.yaml: Reflected dependency changes, including new provider-utils versions and removal of deprecated packages
  • apps/web/utils/llms/model.ts: Restored the Ollama provider implementation with the updated API, default base URL fallback, and API key exemption logic
  • apps/web/utils/llms/model.test.ts: Re-enabled and expanded Ollama test coverage, including validation for missing models and base URL fallback behavior
  • apps/web/utils/actions/settings.validation.ts: Updated validation to exempt Ollama from API key requirements
  • docs/hosting/self-hosting.md: Added comprehensive documentation for configuring and using Ollama as a local LLM provider
  • .github/workflows/build_and_publish_docker.yml: Updated repository references from elie222 to duartebarbosadev for fork ownership
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported


Comment on lines +51 to +56
1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in `apps/web/.env` to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`).
2. (Optional) Set `OLLAMA_BASE_URL` if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`.
3. Restart the stack so the updated environment variables are loaded:

```bash
NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file apps/web/.env --profile all up -d
```

Copilot AI Dec 3, 2025


The documentation instructs users to set environment variables in apps/web/.env, but the location should be clearer. The instructions reference both the root .env file (line 40) and apps/web/.env (line 51). For consistency and clarity, it should be specified that Ollama environment variables should go in the root .env file (or whichever is the correct location based on the project setup).

Additionally, line 56 uses --env-file apps/web/.env, which suggests the environment file is at apps/web/.env, not the root. Ensure this path is correct and consistent with the instructions.

Suggested change: specify the root `.env` file consistently.

Before:

1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in `apps/web/.env` to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`).
2. (Optional) Set `OLLAMA_BASE_URL` if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`.
3. Restart the stack so the updated environment variables are loaded:

```bash
NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file apps/web/.env --profile all up -d
```

After:

1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in your root `.env` file to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`).
2. (Optional) Set `OLLAMA_BASE_URL` in your root `.env` file if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`.
3. Restart the stack so the updated environment variables are loaded:

```bash
NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file .env --profile all up -d
```

```diff
  "nodemailer": "7.0.9",
  "nuqs": "2.7.2",
- "ollama-ai-provider": "1.2.0",
+ "ollama-ai-provider-v2": "^1.5.5",
```

Copilot AI Dec 3, 2025


[nitpick] The version specifier uses a caret (^1.5.5), which allows minor and patch version updates. This could introduce breaking changes if the package doesn't follow semantic versioning strictly. Consider using a more restrictive version specifier like ~1.5.5 (only patch updates) or an exact version 1.5.5 for more predictable behavior, especially since this is a major provider change.

Suggested change

```diff
- "ollama-ai-provider-v2": "^1.5.5",
+ "ollama-ai-provider-v2": "1.5.5",
```


CLAassistant commented Dec 3, 2025

CLA assistant check
All committers have signed the CLA.


@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 7 files

3 participants