# Multi-Agent Channel

🚀 A powerful multi-agent system built with Pydantic AI that enables seamless orchestration of specialized AI agents, with support for 100+ models through OpenRouter and direct provider integrations.
## Features

- 🤖 Multi-Model Support: Use any AI model from OpenRouter, OpenAI, Anthropic, Google, and more
- 🎯 Specialized Agents: Code generation, research, analysis, refinement, and custom agents
- 🧠 Knowledge Management: Vector storage, semantic search, and validation systems
- ⚡ Real-time Progress: Live streaming updates and progress visualization
- 🛠️ Extensive Tool Library: 25+ built-in tools with template system
- 🎮 Multiple Interfaces: CLI, Web UI (Streamlit), and API
- 🐳 Production Ready: Docker support with monitoring and scaling
## Project Structure

```
Multi-Agent Channel/
├── src/
│   ├── agents/        # Specialized Pydantic AI agents
│   ├── core/          # Core infrastructure (models, orchestration)
│   ├── tools/         # Tool library with 25+ tools
│   ├── knowledge/     # Vector store and knowledge management
│   ├── progress/      # Progress tracking and visualization
│   ├── ui/            # Streamlit UI components
│   ├── api/           # FastAPI backend
│   ├── cli/           # Command-line interface
│   └── monitoring/    # Metrics and observability
├── docker/            # Docker configuration
├── tests/             # Comprehensive test suite
└── examples/          # Usage examples and demos
```
## Quick Start

```bash
# Clone the repository
git clone https://github.com/vedantparmar12/Multi-agent-channel.git
cd Multi-agent-channel

# Install dependencies
pip install -r requirements.txt

# Set up environment variables
cp .env.example .env
# Add your API keys to the .env file
```
### Basic Usage

```python
from src.agents.base_agent import BaseAgent
from src.core.model_provider import ModelConfig

# Configure with your API keys
config = ModelConfig(
    openrouter_api_key="your-openrouter-key",
    openai_api_key="your-openai-key",
    anthropic_api_key="your-anthropic-key",
)

# Create an agent with any model
agent = BaseAgent(
    model="anthropic/claude-3.5-sonnet",  # or any model from 100+ options
    model_config=config,
)

# Run the agent (`deps` are the agent's dependencies; see the
# specialized agents below for concrete examples)
result = await agent.run("Your prompt here", deps)
```
## Specialized Agents

### Code Generator Agent

Generates high-quality code with documentation, tests, and security checks.

```python
from src.agents.code_generator import CodeGeneratorAgent, CodeGeneratorDeps

agent = CodeGeneratorAgent(model="openai/gpt-4-turbo")
deps = CodeGeneratorDeps(
    language="python",
    include_tests=True,
    include_docs=True,
    security_check=True,
)
result = await agent.run("Create a REST API", deps)
```
### Research Agent

Conducts comprehensive research with source tracking and analysis.

```python
from src.agents.research_agent import ResearchAgent, ResearchDeps

agent = ResearchAgent(model="anthropic/claude-3-opus")
deps = ResearchDeps(
    topic="quantum computing",
    depth="comprehensive",
    include_sources=True,
)
result = await agent.run("Research latest developments", deps)
```
### Analysis Agent

Performs deep analysis on code, data, or systems.

```python
from src.agents.analysis_agent import AnalysisAgent, AnalysisDeps

agent = AnalysisAgent(model="anthropic/claude-3.5-sonnet")
deps = AnalysisDeps(
    analysis_type="code_quality",
    include_recommendations=True,
)
result = await agent.run("Analyze this codebase", deps)
```
### Tools Refiner Agent

Enhances code with additional tools and capabilities.

```python
from src.agents.tools_refiner import ToolsRefinerAgent, ToolsRefinerDeps

agent = ToolsRefinerAgent(model="groq/mixtral-8x7b-32768")
deps = ToolsRefinerDeps(
    code="your code here",
    requested_tools=["logging", "error_handling", "caching"],
)
result = await agent.run("Add production features", deps)
```
## Model Support

Access 100+ AI models through a unified interface:
- OpenRouter: Access to all OpenRouter models
- OpenAI: GPT-4, GPT-3.5, and other OpenAI models
- Anthropic: Claude 3 (Opus, Sonnet, Haiku)
- Google: Gemini Pro models
- Meta: Llama models
- Mistral: Mixtral and other models
- Groq: Ultra-fast inference
```python
# Automatic model recommendations
from src.core.model_provider import ModelProvider

provider = ModelProvider(config=model_config)

# Get the best model for coding
coding_model = provider.recommend_model(
    task_type="coding",
    budget_priority=False,
)

# Get the fastest model
fast_model = provider.recommend_model(
    task_type="general",
    speed_priority=True,
)

# Get the most affordable model
budget_model = provider.recommend_model(
    task_type="general",
    budget_priority=True,
)
```
## Tool Library

25+ built-in tools organized by category:

**Development Tools**
- `code_analyzer`: Analyze code structure and quality
- `code_formatter`: Format code according to standards
- `dependency_manager`: Manage project dependencies
- `test_generator`: Generate unit tests
- `documentation_generator`: Create documentation

**Research Tools**
- `web_searcher`: Search the web for information
- `arxiv_searcher`: Search academic papers
- `news_aggregator`: Aggregate news from multiple sources
- `trend_analyzer`: Analyze trends and patterns

**Data Tools**
- `data_cleaner`: Clean and preprocess data
- `data_transformer`: Transform data formats
- `schema_validator`: Validate data schemas
- `data_visualizer`: Create data visualizations

**System Tools**
- `file_manager`: Manage files and directories
- `process_monitor`: Monitor system processes
- `log_analyzer`: Analyze log files
- `performance_profiler`: Profile performance

**Security Tools**
- `vulnerability_scanner`: Scan for vulnerabilities
- `encryption_tool`: Encrypt/decrypt data
- `auth_manager`: Manage authentication
- `security_auditor`: Audit security
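A library of this size is typically backed by a name-keyed registry. The sketch below shows one common pattern (a decorator that registers tools by name and category); it is illustrative only, and the project's actual tool API may look quite different.

```python
# Hypothetical registry pattern -- not the project's actual API
TOOLS = {}

def tool(name, category):
    """Decorator that registers a function in the tool registry."""
    def register(fn):
        TOOLS[name] = {"fn": fn, "category": category}
        return fn
    return register

@tool("code_formatter", category="development")
def code_formatter(code: str) -> str:
    """Strip trailing whitespace (a stand-in for a real formatter)."""
    return "\n".join(line.rstrip() for line in code.splitlines())

def run_tool(name, *args, **kwargs):
    """Dispatch a call to a registered tool by name."""
    return TOOLS[name]["fn"](*args, **kwargs)

print(run_tool("code_formatter", "x = 1   \ny = 2"))
```

The decorator keeps registration next to each tool's definition, which is why the pattern scales well to dozens of tools across categories.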
## Command-Line Interface

```bash
# Run a single agent
agent-cli run "Your prompt" --model "anthropic/claude-3.5-sonnet"

# Use a specific tool
agent-cli tools search-web --query "latest AI news"

# Generate from template
agent-cli generate rest-api --name "UserAPI"

# Index codebase for search
agent-cli index ./src --output index.json
```
## Web UI

```bash
# Start the Streamlit UI
streamlit run src/ui/app.py
```

Features:
- Visual model selection
- Real-time progress tracking
- Multi-agent orchestration
- Results visualization
## API Server

```bash
# Start the FastAPI server
uvicorn src.api.main:app --reload
```

Endpoints:
- `POST /agents/run`
- `GET /models/list`
- `POST /tools/execute`
- `GET /progress/{task_id}`
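A request to the run endpoint might look like the following. The path comes from the list above, but the request-body fields (`agent`, `prompt`, `model`) and the port are assumptions; check the interactive FastAPI docs at `/docs` for the real schema.

```python
import json
import urllib.request

# Hypothetical request body -- verify field names against /docs
payload = {
    "agent": "code_generator",
    "prompt": "Create a REST API",
    "model": "anthropic/claude-3.5-sonnet",
}

req = urllib.request.Request(
    "http://localhost:8000/agents/run",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment with the server running
```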
## Docker Deployment

```bash
# Build and run with Docker Compose
docker-compose up -d
```

Services included:
- API server
- Streamlit UI
- Redis for caching
- PostgreSQL for persistence
- Prometheus for monitoring
- Grafana for visualization
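As a rough sketch, a `docker-compose.yml` covering those services could be shaped like this. Service names mirror the list above, but the image tags, ports, and commands are assumptions; see the `docker/` directory for the real configuration.

```yaml
# Sketch only -- check docker/ for the project's actual compose file
services:
  api:
    build: .
    command: uvicorn src.api.main:app --host 0.0.0.0
    ports: ["8000:8000"]
  ui:
    build: .
    command: streamlit run src/ui/app.py
    ports: ["8501:8501"]
  redis:
    image: redis:7
  postgres:
    image: postgres:16
  prometheus:
    image: prom/prometheus
  grafana:
    image: grafana/grafana
```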
## Knowledge Management

```python
from src.knowledge.vector_store import VectorStore

# Initialize the vector store
store = VectorStore(
    supabase_url="your-url",
    supabase_key="your-key",
    openai_api_key="your-key",
)

# Add documents
await store.add_document(
    content="Your content",
    metadata={"category": "docs", "tags": ["api", "guide"]},
)

# Semantic search
results = await store.search(
    query="How to implement authentication",
    limit=5,
)
```
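Under the hood, semantic search ranks documents by the similarity of their embeddings to the query embedding. This standalone sketch illustrates the ranking step with tiny hand-made vectors; the real `VectorStore` computes embeddings via the OpenAI API and stores them in Supabase.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" -- real ones have hundreds of dimensions
docs = {
    "auth guide": [0.9, 0.1, 0.0],
    "billing faq": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # stand-in embedding for an authentication query

# Rank documents by similarity to the query, most similar first
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])
```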
### Validation Gate

```python
from src.knowledge.validation_gate import ValidationGate

# Create a validation gate
gate = ValidationGate()

# Validate code
result = await gate.validate(
    data={"code": "your code"},
    rules=["syntax", "security", "performance"],
)
```
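Conceptually, a validation gate is a pipeline of named checks that each return a list of issues; the gate passes only when every check comes back clean. This self-contained sketch shows that shape with two toy rules; the project's actual `"syntax"`, `"security"`, and `"performance"` rules surely do far more.

```python
import ast

def check_syntax(code):
    """Return a list of syntax issues (empty if the code parses)."""
    try:
        ast.parse(code)
        return []
    except SyntaxError as e:
        return [f"syntax: {e.msg}"]

def check_security(code):
    """Toy rule: flag any use of eval()."""
    return ["security: eval() detected"] if "eval(" in code else []

RULES = {"syntax": check_syntax, "security": check_security}

def validate(code, rules):
    issues = [issue for rule in rules for issue in RULES[rule](code)]
    return {"passed": not issues, "issues": issues}

print(validate("x = eval(input())", ["syntax", "security"]))
```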
## Grok Heavy Mode

Deep analysis mode for complex tasks:

```python
from src.progress.grok_mode import GrokHeavyMode, GrokContext

grok = GrokHeavyMode()
context = GrokContext(
    file_path="complex_system.py",
    analysis_depth="DEEP",
    include_patterns=True,
)
result = await grok.analyze(context)
```
## Progress Tracking

Real-time progress updates:

```python
from src.progress.progress_tracker import ProgressTracker

tracker = ProgressTracker()

# Create hierarchical tasks
main_task = await tracker.create_task("main", "Main Task")
sub_task = await tracker.create_task("sub", "Sub Task", parent_id="main")

# Update progress
await tracker.update_progress("sub", 50, "Processing...")
```
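With hierarchical tasks, a parent's progress can be derived by averaging its children. This standalone sketch shows that roll-up; how the real `ProgressTracker` aggregates (and streams) progress is an assumption here.

```python
# Toy task table: each task knows its parent and its own progress (0-100)
tasks = {
    "main": {"parent": None, "progress": 0},
    "sub1": {"parent": "main", "progress": 50},
    "sub2": {"parent": "main", "progress": 100},
}

def rollup(task_id):
    """Parent progress = mean of child progress; leaves report their own."""
    children = [t for t in tasks.values() if t["parent"] == task_id]
    if not children:
        return tasks[task_id]["progress"]
    return sum(c["progress"] for c in children) / len(children)

print(rollup("main"))
```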
## Monitoring

Built-in monitoring with Prometheus and Grafana:
- Request metrics
- Model usage statistics
- Cost tracking
- Performance metrics
- Error rates
## Testing

```bash
# Run all tests
pytest tests/

# Run a specific test category
pytest tests/test_agents.py
pytest tests/test_tools.py
pytest tests/test_knowledge.py

# Run with coverage
pytest tests/ --cov=src --cov-report=html
```
## Examples

Check the `examples/` directory for:
- Multi-model usage examples
- Agent orchestration patterns
- Tool integration examples
- Knowledge management demos
- Production deployment guides
## Contributing

We welcome contributions! Please see our Contributing Guide for details.
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Acknowledgments

- Built with Pydantic AI
- Powered by OpenRouter for multi-model access
- Vector storage by Supabase
- Monitoring with Prometheus and Grafana
## Support

- Documentation: `docs/`
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Ready to build with multiple AI models? 🚀

```bash
# Get started now
git clone https://github.com/vedantparmar12/Multi-agent-channel.git
cd Multi-agent-channel
pip install -r requirements.txt
```