Merged
20 changes: 13 additions & 7 deletions .env.example
Original file line number Diff line number Diff line change
Expand Up @@ -4,16 +4,17 @@
# OpenRouter: https://openrouter.ai/api/v1
BASE_URL=

# Get your Open AI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# Even if using OpenRouter/Ollama, you still need to set this for the embedding model.
# Future versions of Archon will be more flexible with this.
OPENAI_API_KEY=

# For OpenAI: https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# For OpenRouter: https://openrouter.ai/keys
# For Ollama, no need to set this unless you specifically configured an API key
LLM_API_KEY=

# Get your Open AI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# Even if using OpenRouter, you still need to set this for the embedding model.
# No need to set this if using Ollama.
OPENAI_API_KEY=

# For the Supabase version (sample_supabase_agent.py), set your Supabase URL and Service Key.
# Get your SUPABASE_URL from the API section of your Supabase project settings -
# https://supabase.com/dashboard/project/<your project ID>/settings/api
Expand All @@ -32,4 +33,9 @@ REASONER_MODEL=
# The LLM you want to use for the primary agent/coder.
# Example: gpt-4o-mini
# Example: qwen2.5:14b-instruct-8k
PRIMARY_MODEL=
PRIMARY_MODEL=

# Embedding model you want to use
# Example for Ollama: nomic-embed-text
# Example for OpenAI: text-embedding-3-small
EMBEDDING_MODEL=
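The variables in this `.env.example` are read at startup with `os.getenv` plus fallback defaults. A minimal sketch of that loading logic (the `load_settings` helper name is my own; the variable names and defaults come from the diff):

```python
import os

def load_settings(env=os.environ):
    """Read the .env.example variables with the fallback defaults used in the code."""
    base_url = env.get("BASE_URL", "https://api.openai.com/v1")
    return {
        "base_url": base_url,
        "llm_api_key": env.get("LLM_API_KEY", "no-llm-api-key-provided"),
        "embedding_model": env.get("EMBEDDING_MODEL", "text-embedding-3-small"),
        "primary_model": env.get("PRIMARY_MODEL", "gpt-4o-mini"),
        # Ollama is detected purely from the base URL pointing at localhost
        "is_ollama": "localhost" in base_url.lower(),
    }
```

In the repo itself, `load_dotenv()` populates `os.environ` from `.env` before these lookups run, so unset variables silently fall back to the OpenAI-oriented defaults shown here.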
39 changes: 39 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,39 @@
---
name: Bug Report
about: Create a report to help improve Archon
title: '[BUG] '
labels: bug
assignees: ''
---

## Description
A clear and concise description of the issue.

## Steps to Reproduce
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

## Expected Behavior
A clear and concise description of what you expected to happen.

## Actual Behavior
A clear and concise description of what actually happened.

## Screenshots
If applicable, add screenshots to help explain your problem.

## Environment
- OS: [e.g. Windows 10, macOS Monterey, Ubuntu 22.04]
- Python Version: [e.g. Python 3.13, Python 3.12]
- Using MCP or Streamlit (or something else)

## Additional Context
Add any other context about the problem here, such as:
- Does this happen consistently or intermittently?
- Were there any recent changes that might be related?
- Any workarounds you've discovered?

## Possible Solution
If you have suggestions on how to fix the issue or what might be causing it.
5 changes: 5 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Archon Community
url: https://thinktank.ottomator.ai/c/archon/30
about: Please ask questions and start conversations about Archon here in the oTTomator Think Tank!
19 changes: 19 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,19 @@
---
name: Feature Request
about: Suggest an idea for Archon
title: '[FEATURE] '
labels: enhancement
assignees: ''
---

## Describe the feature you'd like and why
A clear and concise description of what you want to happen.

## User Impact
Who would benefit from this feature and how?

## Implementation Details (optional)
Any thoughts on how this might be implemented?

## Additional context
Add any other screenshots, mockups, or context about the feature request here.
27 changes: 23 additions & 4 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -18,7 +18,13 @@ Archon will be developed in iterations, starting with just a simple Pydantic AI
all the way to a full agentic workflow using LangGraph that can build other AI agents with any framework.
Through its iterative development, Archon showcases the power of planning, feedback loops, and domain-specific knowledge in creating robust AI agents.

The current version of Archon is V3 as mentioned above - see [V3 Documentation](iterations/v3-mcp-support/README.md) for details.
## Important Links

- The current version of Archon is V3 as mentioned above - see [V3 Documentation](iterations/v3-mcp-support/README.md) for details.

- I **just** created the [Archon community](https://thinktank.ottomator.ai/c/archon/30) forum over in the oTTomator Think Tank! Please post any questions you have there!

- [GitHub Kanban board](https://github.com/users/coleam00/projects/1) for feature implementation and bug squashing.

## Vision

Expand Down Expand Up @@ -61,7 +67,6 @@ Archon demonstrates three key principles in modern AI development:
- LangSmith
- Other frameworks besides Pydantic AI
- Other vector databases besides Supabase
- Alternative embedding models besides OpenAI

## Getting Started with V3 (current version)

Expand Down Expand Up @@ -146,6 +151,7 @@ This will:
1. Set up the database:
- Execute `utils/site_pages.sql` in your Supabase SQL Editor
- This creates tables and enables vector similarity search
- See the Database Setup section for more details

2. Crawl documentation:
```bash
Expand Down Expand Up @@ -196,8 +202,12 @@ The interface will be available at `http://localhost:8501`
- `utils/`: Utility functions and database setup
- `utils.py`: Shared utility functions
- `site_pages.sql`: Database setup commands
- `site_pages_ollama.sql`: Database setup commands with vector dimensions updated for nomic-embed-text

### Database Setup

The Supabase database uses the following schema:

### Database Schema
```sql
CREATE TABLE site_pages (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
Expand All @@ -207,10 +217,19 @@ CREATE TABLE site_pages (
summary TEXT,
content TEXT,
metadata JSONB,
embedding VECTOR(1536)
embedding VECTOR(1536) -- Adjust dimensions as necessary (i.e. 768 for nomic-embed-text)
);
```

Execute the SQL commands in `utils/site_pages.sql` to:
1. Create the necessary tables
2. Enable vector similarity search
3. Set up Row Level Security policies

In Supabase, do this by going to the "SQL Editor" tab, pasting the SQL into the editor, and clicking "Run".

If using Ollama with the nomic-embed-text embedding model (or another model with 768 dimensions), either update `utils/site_pages.sql` so that the dimensions are 768 instead of 1536, or use `utils/site_pages_ollama.sql` instead.

## Contributing

We welcome contributions! Whether you're fixing bugs, adding features, or improving documentation, please feel free to submit a Pull Request.
Expand Down
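The `VECTOR(...)` dimension in `site_pages` must match the embedding model's output size, or inserts and similarity searches will fail. A small guard that makes the 1536-vs-768 constraint from the README explicit (the helper name and the dimension table are my own sketch, not code from the repo):

```python
# Known output sizes for the two models the README mentions.
EXPECTED_DIMENSIONS = {
    "text-embedding-3-small": 1536,  # OpenAI default in this repo
    "nomic-embed-text": 768,         # common Ollama choice
}

def check_embedding_dims(model: str, embedding: list) -> None:
    """Raise if the embedding length disagrees with the site_pages column."""
    expected = EXPECTED_DIMENSIONS.get(model)
    if expected is not None and len(embedding) != expected:
        raise ValueError(
            f"{model} embedding has {len(embedding)} dims, "
            f"but the site_pages table expects {expected}"
        )
```

Unknown models pass through unchecked, so the guard stays permissive for other providers.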
8 changes: 7 additions & 1 deletion archon/archon_graph.py
Original file line number Diff line number Diff line change
Expand Up @@ -48,7 +48,13 @@
system_prompt='Your job is to end a conversation for creating an AI agent by giving instructions for how to execute the agent and then saying a nice goodbye to the user.',
)

openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
openai_client = None

if is_ollama:
    openai_client = AsyncOpenAI(base_url=base_url, api_key=api_key)
else:
    openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

supabase: Client = Client(
os.getenv("SUPABASE_URL"),
os.getenv("SUPABASE_SERVICE_KEY")
Expand Down
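The branch added above picks client settings from `BASE_URL`: Ollama reuses `BASE_URL`/`LLM_API_KEY`, while everything else falls back to the stock OpenAI endpoint with `OPENAI_API_KEY`. The same selection, factored into a testable helper (the function name is my own; the logic mirrors the diff):

```python
def resolve_client_kwargs(base_url: str, llm_api_key: str, openai_api_key: str) -> dict:
    """Return the kwargs for AsyncOpenAI(...), mirroring the is_ollama branch."""
    if "localhost" in base_url.lower():
        # Ollama: point the OpenAI-compatible client at the local server
        return {"base_url": base_url, "api_key": llm_api_key}
    # Default: official OpenAI endpoint with its own key
    return {"api_key": openai_api_key}
```

Usage would then be `AsyncOpenAI(**resolve_client_kwargs(base_url, api_key, os.getenv("OPENAI_API_KEY")))`. Note the detection is purely string-based, so a remote Ollama host would not be recognized without adjusting the check.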
32 changes: 29 additions & 3 deletions archon/crawl_pydantic_ai_docs.py
Original file line number Diff line number Diff line change
Expand Up @@ -17,7 +17,20 @@
load_dotenv()

# Initialize OpenAI and Supabase clients
openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

base_url = os.getenv('BASE_URL', 'https://api.openai.com/v1')
api_key = os.getenv('LLM_API_KEY', 'no-llm-api-key-provided')
is_ollama = "localhost" in base_url.lower()

embedding_model = os.getenv('EMBEDDING_MODEL', 'text-embedding-3-small')

openai_client = None

if is_ollama:
    openai_client = AsyncOpenAI(base_url=base_url, api_key=api_key)
else:
    openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

supabase: Client = create_client(
os.getenv("SUPABASE_URL"),
os.getenv("SUPABASE_SERVICE_KEY")
Expand Down Expand Up @@ -88,7 +101,7 @@ async def get_title_and_summary(chunk: str, url: str) -> Dict[str, str]:

try:
response = await openai_client.chat.completions.create(
model=os.getenv("LLM_MODEL", "gpt-4o-mini"),
model=os.getenv("PRIMARY_MODEL", "gpt-4o-mini"),
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": f"URL: {url}\n\nContent:\n{chunk[:1000]}..."} # Send first 1000 chars for context
Expand All @@ -104,7 +117,7 @@ async def get_embedding(text: str) -> List[float]:
"""Get embedding vector from OpenAI."""
try:
response = await openai_client.embeddings.create(
model="text-embedding-3-small",
model=embedding_model,
input=text
)
return response.data[0].embedding
Expand Down Expand Up @@ -231,7 +244,20 @@ def get_pydantic_ai_docs_urls() -> List[str]:
print(f"Error fetching sitemap: {e}")
return []

async def clear_existing_records():
"""Clear all existing records with source='pydantic_ai_docs' from the site_pages table."""
try:
result = supabase.table("site_pages").delete().eq("metadata->>source", "pydantic_ai_docs").execute()
print("Cleared existing pydantic_ai_docs records from site_pages")
return result
except Exception as e:
print(f"Error clearing existing records: {e}")
return None

async def main():
# Clear existing records first
await clear_existing_records()

# Get URLs from Pydantic AI docs
urls = get_pydantic_ai_docs_urls()
if not urls:
Expand Down
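The new `clear_existing_records` deletes rows where the JSONB column's `metadata->>source` equals `'pydantic_ai_docs'`, so each crawl starts from a clean slate. That selection can be mimicked over plain dicts to see which rows a run would remove (a local sketch only; the real call goes through supabase-py's `.delete().eq(...)`):

```python
def rows_to_clear(rows, source="pydantic_ai_docs"):
    """Select rows whose metadata 'source' key matches, like metadata->>source = ..."""
    return [r for r in rows if r.get("metadata", {}).get("source") == source]
```

Rows from other crawl sources, or with no `source` key at all, are left untouched, which is why the delete is safe to run before every crawl.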
7 changes: 5 additions & 2 deletions archon/pydantic_ai_coder.py
Original file line number Diff line number Diff line change
Expand Up @@ -19,8 +19,11 @@
base_url = os.getenv('BASE_URL', 'https://api.openai.com/v1')
api_key = os.getenv('LLM_API_KEY', 'no-llm-api-key-provided')
model = OpenAIModel(llm, base_url=base_url, api_key=api_key)
embedding_model = os.getenv('EMBEDDING_MODEL', 'text-embedding-3-small')

# logfire.configure(send_to_logfire='if-token-present')
logfire.configure(send_to_logfire='if-token-present')

is_ollama = "localhost" in base_url.lower()

@dataclass
class PydanticAIDeps:
Expand Down Expand Up @@ -88,7 +91,7 @@ async def get_embedding(text: str, openai_client: AsyncOpenAI) -> List[float]:
"""Get embedding vector from OpenAI."""
try:
response = await openai_client.embeddings.create(
model="text-embedding-3-small",
model=embedding_model,
input=text
)
return response.data[0].embedding
Expand Down
7 changes: 5 additions & 2 deletions iterations/v2-agentic-workflow/.env.example
Original file line number Diff line number Diff line change
Expand Up @@ -30,6 +30,9 @@ SUPABASE_SERVICE_KEY=
REASONER_MODEL=

# The LLM you want to use for the primary agent/coder.
# Example: gpt-4o-mini
# Example: qwen2.5:14b-instruct-8k
PRIMARY_MODEL=
PRIMARY_MODEL=

# Embedding model you want to use
# Example for Ollama: nomic-embed-text
# Example for OpenAI: text-embedding-3-small
EMBEDDING_MODEL=
1 change: 1 addition & 0 deletions iterations/v2-agentic-workflow/.gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
.env
8 changes: 7 additions & 1 deletion iterations/v2-agentic-workflow/archon_graph.py
Original file line number Diff line number Diff line change
Expand Up @@ -45,7 +45,13 @@
system_prompt='Your job is to end a conversation for creating an AI agent by giving instructions for how to execute the agent and then saying a nice goodbye to the user.',
)

openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
openai_client = None

if is_ollama:
    openai_client = AsyncOpenAI(base_url=base_url, api_key=api_key)
else:
    openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

supabase: Client = Client(
os.getenv("SUPABASE_URL"),
os.getenv("SUPABASE_SERVICE_KEY")
Expand Down
19 changes: 16 additions & 3 deletions iterations/v2-agentic-workflow/crawl_pydantic_ai_docs.py
Original file line number Diff line number Diff line change
Expand Up @@ -17,7 +17,20 @@
load_dotenv()

# Initialize OpenAI and Supabase clients
openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

base_url = os.getenv('BASE_URL', 'https://api.openai.com/v1')
api_key = os.getenv('LLM_API_KEY', 'no-llm-api-key-provided')
is_ollama = "localhost" in base_url.lower()

embedding_model = os.getenv('EMBEDDING_MODEL', 'text-embedding-3-small')

openai_client = None

if is_ollama:
    openai_client = AsyncOpenAI(base_url=base_url, api_key=api_key)
else:
    openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

supabase: Client = create_client(
os.getenv("SUPABASE_URL"),
os.getenv("SUPABASE_SERVICE_KEY")
Expand Down Expand Up @@ -88,7 +101,7 @@ async def get_title_and_summary(chunk: str, url: str) -> Dict[str, str]:

try:
response = await openai_client.chat.completions.create(
model=os.getenv("LLM_MODEL", "gpt-4o-mini"),
model=os.getenv("PRIMARY_MODEL", "gpt-4o-mini"),
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": f"URL: {url}\n\nContent:\n{chunk[:1000]}..."} # Send first 1000 chars for context
Expand All @@ -104,7 +117,7 @@ async def get_embedding(text: str) -> List[float]:
"""Get embedding vector from OpenAI."""
try:
response = await openai_client.embeddings.create(
model="text-embedding-3-small",
model=embedding_model,
input=text
)
return response.data[0].embedding
Expand Down