Chore/update docker workflow for fork #1051

duartebarbosadev wants to merge 6 commits into elie222:main from …pport-if-missing

Conversation

Support Ollama model selection
@duartebarbosadev is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel. A member of the Team first needs to authorize it.
Caution: Review failed. The pull request is closed.

Walkthrough: The PR migrates Ollama provider support from `ollama-ai-provider` to `ollama-ai-provider-v2`.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
My bad, I intended to do this on my fork to test the Ollama integration.
Update Docker workflow for fork and add Ollama provider support across model selection and validation
📍 Where to Start: Start with the OLLAMA branch in

Macroscope summarized 188e2d0.
Pull request overview

This PR migrates from the deprecated `ollama-ai-provider` to `ollama-ai-provider-v2`, restoring first-class support for local Ollama models and updating the repository ownership references for a fork.

- Upgraded the Ollama provider package and updated all related code to use the new v2 API
- Enhanced the Ollama configuration to not require API keys and to support environment-based model selection
- Updated the GitHub Actions workflow to reflect new repository ownership (duartebarbosadev/inbox-zero)
Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 2 comments.

| File | Description |
|---|---|
| `apps/web/package.json` | Updated dependency from `ollama-ai-provider@1.2.0` to `ollama-ai-provider-v2@^1.5.5` |
| `pnpm-lock.yaml` | Reflected dependency changes, including new provider-utils versions and removal of deprecated packages |
| `apps/web/utils/llms/model.ts` | Restored the Ollama provider implementation with the updated API, a default base URL fallback, and API key exemption logic |
| `apps/web/utils/llms/model.test.ts` | Re-enabled and expanded Ollama test coverage, including validation for missing models and base URL fallback behavior |
| `apps/web/utils/actions/settings.validation.ts` | Updated validation to exempt Ollama from API key requirements |
| `docs/hosting/self-hosting.md` | Added comprehensive documentation for configuring and using Ollama as a local LLM provider |
| `.github/workflows/build_and_publish_docker.yml` | Updated repository references from `elie222` to `duartebarbosadev` for fork ownership |

Files not reviewed (1)

- `pnpm-lock.yaml`: Language not supported
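The `model.ts` behavior the table describes (env-based model selection, base URL fallback, no API key) can be sketched in isolation. This is an illustrative reconstruction under stated assumptions, not the PR's actual code; the function name `resolveOllamaConfig` and the error message are hypothetical.

```typescript
// Illustrative sketch of the described behavior; names are hypothetical,
// not taken from the PR's source.
const DEFAULT_OLLAMA_BASE_URL = "http://localhost:11434";

interface OllamaConfig {
  model: string;
  baseURL: string;
}

function resolveOllamaConfig(env: Record<string, string | undefined>): OllamaConfig {
  const model = env.NEXT_PUBLIC_OLLAMA_MODEL;
  // The PR's tests validate that a missing model is rejected.
  if (!model) {
    throw new Error("NEXT_PUBLIC_OLLAMA_MODEL must be set to use Ollama");
  }
  return {
    model,
    // Fall back to the default Ollama endpoint when OLLAMA_BASE_URL is unset.
    baseURL: env.OLLAMA_BASE_URL ?? DEFAULT_OLLAMA_BASE_URL,
  };
}
```

Note that no API key appears anywhere in this config shape — that absence is exactly what the `settings.validation.ts` exemption has to account for.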
1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in `apps/web/.env` to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`).
2. (Optional) Set `OLLAMA_BASE_URL` if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`.
3. Restart the stack so the updated environment variables are loaded:

```bash
NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file apps/web/.env --profile all up -d
```
The documentation instructs users to set environment variables in `apps/web/.env`, but the location should be clearer: the instructions reference both the root `.env` file (line 40) and `apps/web/.env` (line 51). For consistency, specify that the Ollama environment variables belong in the root `.env` file (or whichever location is correct for the project setup).

Additionally, line 56 uses `--env-file apps/web/.env`, which implies the environment file lives at `apps/web/.env`, not the root. Ensure this path is correct and consistent with the instructions.
Suggested change — replace:

1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in `apps/web/.env` to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`).
2. (Optional) Set `OLLAMA_BASE_URL` if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`.
3. Restart the stack so the updated environment variables are loaded:

```bash
NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file apps/web/.env --profile all up -d
```

With:

1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in your root `.env` file to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`).
2. (Optional) Set `OLLAMA_BASE_URL` in your root `.env` file if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`.
3. Restart the stack so the updated environment variables are loaded:

```bash
NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file .env --profile all up -d
```
  "nodemailer": "7.0.9",
  "nuqs": "2.7.2",
- "ollama-ai-provider": "1.2.0",
+ "ollama-ai-provider-v2": "^1.5.5",
[nitpick] The version specifier uses a caret (`^1.5.5`), which allows minor and patch version updates. This could introduce breaking changes if the package doesn't follow semantic versioning strictly. Consider a more restrictive specifier like `~1.5.5` (patch updates only) or the exact version `1.5.5` for more predictable behavior, especially since this is a major provider change.
Suggested change:

- "ollama-ai-provider-v2": "^1.5.5",
+ "ollama-ai-provider-v2": "1.5.5",
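To make the nitpick concrete, here is a small TypeScript sketch of what the two range specifiers admit for this package. It is a deliberately simplified model of npm's semver ranges (ignoring prereleases and build metadata), not the real resolver.

```typescript
type Version = [major: number, minor: number, patch: number];

// ^1.5.5: same major, and at least 1.5.5 — minor and patch may float upward.
function satisfiesCaret_1_5_5([maj, min, pat]: Version): boolean {
  return maj === 1 && (min > 5 || (min === 5 && pat >= 5));
}

// ~1.5.5: same major and minor — only the patch may float upward.
function satisfiesTilde_1_5_5([maj, min, pat]: Version): boolean {
  return maj === 1 && min === 5 && pat >= 5;
}
```

So `^1.5.5` would pull in a hypothetical `1.6.0` automatically, while `~1.5.5` would not — which is the predictability the comment is asking for.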
This pull request updates the codebase to support the new `ollama-ai-provider-v2` package and adds first-class support for local Ollama models as an AI provider. It also improves environment variable handling, validation logic, and documentation for easier self-hosting with Ollama. The workflow configuration is updated to reflect repository ownership changes.

Ollama provider upgrade and integration:

- Replaced the `ollama-ai-provider` package with `ollama-ai-provider-v2` in `apps/web/package.json`, all imports, and mocks, updating code to use the new API. [1] [2] [3] [4] [5] [6] [7]
- Updated `apps/web/utils/llms/model.ts` to support specifying the model via environment variables and to use a default base URL if not set. Ollama is now selectable without an API key. [1] [2] [3]
- Re-enabled and expanded the Ollama tests in `model.test.ts`, including validation for missing models and base URLs. [1] [2] [3]

Validation and environment variable improvements:

- Updated `settings.validation.ts` so Ollama does not require an API key, and improved provider API key checks for economy/chat model selection. [1] [2] [3]
- Documented the Ollama setup in `self-hosting.md`.

Repository and workflow updates:

- Updated repository references from `elie222` to `duartebarbosadev` in `.github/workflows/build_and_publish_docker.yml`. [1] [2] [3]

Dependency and lockfile updates:

- Updated `pnpm-lock.yaml` to reflect the new dependencies, removed unused packages, and upgraded related provider utilities to match the new Ollama provider. [1] [2] [3] [4] [5] [6] [7] [8]
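The `settings.validation.ts` change above amounts to a provider-level exemption from the API key requirement. A minimal sketch, assuming a simple provider/key shape; `requiresApiKey` and `isValidProviderConfig` are illustrative names, not the PR's identifiers.

```typescript
// Hypothetical provider union; the real app may support a different set.
type Provider = "openai" | "anthropic" | "google" | "ollama";

// Ollama runs locally, so it is the one provider exempt from the key check.
function requiresApiKey(provider: Provider): boolean {
  return provider !== "ollama";
}

// Accept a config when either the provider needs no key, or a
// non-empty key was supplied.
function isValidProviderConfig(provider: Provider, apiKey?: string): boolean {
  if (!requiresApiKey(provider)) return true;
  return typeof apiKey === "string" && apiKey.trim().length > 0;
}
```

The same predicate can then back both the economy-model and chat-model selection checks the summary mentions, so the exemption lives in one place.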