22 changes: 21 additions & 1 deletion .env.example
@@ -121,12 +121,32 @@ VITE_SHOW_DEVTOOLS=false
PROD=false


# Cloud Provider Configuration (Optional - Can also be configured via UI)
# ⭐ RECOMMENDED: Configure these through Settings > Cloud Providers in the UI for easier management
#
# Azure OpenAI:
# AZURE_OPENAI_ENDPOINT: Your Azure OpenAI resource endpoint
# AZURE_OPENAI_API_KEY: Your Azure OpenAI API key
# AZURE_OPENAI_API_VERSION: API version (e.g., 2024-02-15-preview)
# AZURE_OPENAI_DEPLOYMENT: Your deployment name
#
# AWS Bedrock:
# AWS_ACCESS_KEY_ID: Your AWS IAM Access Key ID
# AWS_SECRET_ACCESS_KEY: Your AWS IAM Secret Access Key
# AWS_REGION: AWS region for Bedrock (e.g., us-east-1, us-west-2)
# AWS_BEDROCK_MODEL_ID: Bedrock model ID (e.g., anthropic.claude-3-sonnet-20240229-v1:0)
#
# Note: UI configuration is preferred as it provides encrypted storage and easier management
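For local development without the UI, the variables above can be set directly in `.env`. A sketch with placeholder values (not working credentials):

```shell
# Example .env entries (placeholder values only)
AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com
AZURE_OPENAI_API_KEY=<your-azure-openai-key>
AZURE_OPENAI_API_VERSION=2024-02-15-preview
AZURE_OPENAI_DEPLOYMENT=<your-deployment-name>

AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
AWS_REGION=us-east-1
AWS_BEDROCK_MODEL_ID=anthropic.claude-3-sonnet-20240229-v1:0
```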

# NOTE: All other configuration has been moved to database management!
# Run the credentials_setup.sql file in your Supabase SQL editor to set up the credentials table.
# Then use the Settings page in the web UI to manage:
-# - OPENAI_API_KEY (encrypted)
+# - OPENAI_API_KEY (encrypted) - or use Azure OpenAI credentials above
# - AZURE_OPENAI_API_KEY (encrypted) - if using Azure OpenAI
# - MODEL_CHOICE
# - TRANSPORT settings
# - LLM_PROVIDER (set to "azure-openai" if using Azure OpenAI)
# - EMBEDDING_PROVIDER (set to "azure-openai" if using Azure OpenAI for embeddings)
# - RAG strategy flags (USE_CONTEXTUAL_EMBEDDINGS, USE_HYBRID_SEARCH, etc.)
# - Crawler settings:
# * CRAWL_MAX_CONCURRENT (default: 10) - Max concurrent pages per crawl operation
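The provider flags above imply a dispatch on `LLM_PROVIDER` somewhere in the application. A minimal illustrative sketch, with all function names hypothetical and not taken from Archon's actual code:

```python
import os

def resolve_llm_config() -> dict:
    """Hypothetical helper: pick API settings based on LLM_PROVIDER."""
    provider = os.environ.get("LLM_PROVIDER", "openai")
    if provider == "azure-openai":
        # Azure OpenAI needs endpoint, api-version, and deployment name
        # in addition to the key.
        return {
            "api_key": os.environ["AZURE_OPENAI_API_KEY"],
            "endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
            "api_version": os.environ.get(
                "AZURE_OPENAI_API_VERSION", "2024-02-15-preview"
            ),
            "deployment": os.environ["AZURE_OPENAI_DEPLOYMENT"],
        }
    # Default: plain OpenAI, key only.
    return {"api_key": os.environ["OPENAI_API_KEY"]}

# Demo with placeholder values:
os.environ["LLM_PROVIDER"] = "azure-openai"
os.environ["AZURE_OPENAI_API_KEY"] = "example-key"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://example.openai.azure.com"
os.environ["AZURE_OPENAI_DEPLOYMENT"] = "my-deployment"
cfg = resolve_llm_config()
print(cfg["deployment"])
```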
22 changes: 11 additions & 11 deletions README.md
@@ -61,7 +61,7 @@ This new vision for Archon replaces the old one (the agenteer). Archon used to b
- [Docker Desktop](https://www.docker.com/products/docker-desktop/)
- [Node.js 18+](https://nodejs.org/) (for hybrid development mode)
- [Supabase](https://supabase.com/) account (free tier or local Supabase both work)
-- [OpenAI API key](https://platform.openai.com/api-keys) (Gemini and Ollama are supported too!)
+- [OpenAI API key](https://platform.openai.com/api-keys) (Azure OpenAI, Gemini, and Ollama are supported too!)
- (OPTIONAL) [Make](https://www.gnu.org/software/make/) (see [Installing Make](#installing-make) below)

### Setup Instructions
@@ -160,16 +160,16 @@ sudo yum install make
<summary><strong>🚀 Quick Command Reference for Make</strong></summary>
<br/>

| Command           | Description                                            |
| ----------------- | ------------------------------------------------------ |
| `make dev`        | Start hybrid dev (backend in Docker, frontend local) ⭐ |
| `make dev-docker` | Everything in Docker                                   |
| `make stop`       | Stop all services                                      |
| `make test`       | Run all tests                                          |
| `make lint`       | Run linters                                            |
| `make install`    | Install dependencies                                   |
| `make check`      | Check environment setup                                |
| `make clean`      | Remove containers and volumes (with confirmation)      |
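A typical session using the targets from the table above might look like this (command sequence is illustrative):

```shell
make check   # verify Docker, Node, and env configuration
make dev     # backend in Docker, frontend served locally
make test    # run the test suites
make stop    # shut all services down
```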

</details>

@@ -248,7 +248,7 @@ To upgrade Archon to the latest version:

- **Model Context Protocol (MCP)**: Connect any MCP-compatible client (Claude Code, Cursor, even non-AI coding assistants like Claude Desktop)
- **MCP Tools**: Comprehensive yet simple set of tools for RAG queries, task management, and project operations
-- **Multi-LLM Support**: Works with OpenAI, Ollama, and Google Gemini models
+- **Multi-LLM Support**: Works with OpenAI, Azure OpenAI, Ollama, and Google Gemini models
- **RAG Strategies**: Hybrid search, contextual embeddings, and result reranking for optimal AI responses
- **Real-time Streaming**: Live responses from AI agents with progress tracking
