# Chore/update docker workflow for fork #1051
Changes from all commits: 341a91a, 2e143b7, 993eebb, ab40bfd, cf4bd32, 188e2d0
@@ -44,6 +44,20 @@ If doing this manually edit then you'll need to configure:

- **LLM Provider**: Uncomment one provider block and add your API key
- **Optional**: Microsoft OAuth, external Redis, etc.

#### Using Ollama (local LLM)

To use a locally hosted Ollama model instead of a cloud LLM provider:

1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in `apps/web/.env` to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`).
2. (Optional) Set `OLLAMA_BASE_URL` if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`.
3. Restart the stack so the updated environment variables are loaded:

```bash
NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file apps/web/.env --profile all up -d
```
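As a concrete illustration of steps 1 and 2, the two variables might look like this in `apps/web/.env` (the model name and host below are example values, not taken from the PR):

```shell
# Example apps/web/.env excerpt (model name and host are illustrative):
NEXT_PUBLIC_OLLAMA_MODEL=llama3
# Only needed when Ollama is not at the default http://localhost:11434;
# from inside a Docker container, reach a host-side Ollama via
# host.docker.internal:
OLLAMA_BASE_URL=http://host.docker.internal:11434
```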
|
Comment on lines
+51
to
+56
|
||||||||||||||||||||||||||
| 1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in `apps/web/.env` to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`). | |
| 2. (Optional) Set `OLLAMA_BASE_URL` if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`. | |
| 3. Restart the stack so the updated environment variables are loaded: | |
| ```bash | |
| NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file apps/web/.env --profile all up -d | |
| 1. Set `NEXT_PUBLIC_OLLAMA_MODEL` in your root `.env` file to the exact model name you have pulled in Ollama (e.g., `llama3` or `qwen2.5`). | |
| 2. (Optional) Set `OLLAMA_BASE_URL` in your root `.env` file if your Ollama server is not on the default `http://localhost:11434`. When running the app in Docker but Ollama is on the host, use `http://host.docker.internal:11434`. | |
| 3. Restart the stack so the updated environment variables are loaded: | |
| ```bash | |
| NEXT_PUBLIC_BASE_URL=https://yourdomain.com docker compose --env-file .env --profile all up -d |
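Whichever `.env` location is used, it is worth confirming the file actually supplies the variables before restarting the stack. A minimal sketch (the throwaway `.env` created here is for illustration only; in the repo you would source the real root `.env` or `apps/web/.env`):

```shell
# Sketch: verify the env file defines the expected variables before
# running `docker compose`. A temporary .env stands in for the real one.
tmp=$(mktemp -d)
cat > "$tmp/.env" <<'EOF'
NEXT_PUBLIC_OLLAMA_MODEL=llama3
OLLAMA_BASE_URL=http://host.docker.internal:11434
EOF
set -a            # export everything sourced below
. "$tmp/.env"
set +a
# Fail loudly if the model variable is missing:
: "${NEXT_PUBLIC_OLLAMA_MODEL:?set NEXT_PUBLIC_OLLAMA_MODEL in your .env}"
echo "model=$NEXT_PUBLIC_OLLAMA_MODEL base=$OLLAMA_BASE_URL"
```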
[nitpick] The version specifier uses a caret (`^1.5.5`), which allows minor and patch version updates. This could introduce breaking changes if the package doesn't follow semantic versioning strictly. Consider using a more restrictive version specifier like `~1.5.5` (only patch updates) or an exact version `1.5.5` for more predictable behavior, especially since this is a major provider change.
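The range semantics the reviewer refers to follow npm's node-semver rules: `^1.5.5` means `>=1.5.5 <2.0.0`, while `~1.5.5` means `>=1.5.5 <1.6.0`. A small shell sketch of the difference (the `satisfies` helper below is hypothetical and hard-codes only the `1.5.5` ranges for illustration; real tooling would use node-semver itself):

```shell
# Illustrative semver-range check (rules per npm's node-semver docs).
satisfies() {  # usage: satisfies VERSION RANGE -> prints yes or no
  v=$1 r=$2
  maj=${v%%.*}; rest=${v#*.}; min=${rest%%.*}; pat=${rest#*.}
  case "$r" in
    '^1.5.5')  # >=1.5.5 <2.0.0 -- minor and patch updates allowed
      if [ "$maj" -eq 1 ] && { [ "$min" -gt 5 ] || { [ "$min" -eq 5 ] && [ "$pat" -ge 5 ]; }; }
      then echo yes; else echo no; fi ;;
    '~1.5.5')  # >=1.5.5 <1.6.0 -- patch updates only
      if [ "$maj" -eq 1 ] && [ "$min" -eq 5 ] && [ "$pat" -ge 5 ]
      then echo yes; else echo no; fi ;;
    '1.5.5')   # exact pin -- no updates at all
      if [ "$v" = 1.5.5 ]; then echo yes; else echo no; fi ;;
  esac
}

satisfies 1.6.0 '^1.5.5'   # caret admits the minor bump -> yes
satisfies 1.6.0 '~1.5.5'   # tilde rejects it -> no
satisfies 1.5.9 '~1.5.5'   # patch-only bump -> yes
```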