A comprehensive Model Context Protocol (MCP) Server and Client for Juniper's Routing Director with GPT-4 powered intelligent service management.
This project provides an intelligent interface to Juniper's Routing Director through MCP (Model Context Protocol), enabling natural language service creation, deletion, and management with GPT-4 powered analysis. The system includes both a server component and a user-friendly Streamlit web client.
- Natural Language Processing: Create and delete services using simple English commands
- Intelligent Query Analysis: Automatic detection of service types and required operations
- Smart Service Routing: GPT-4 determines the best tool for each request
- L2 Circuit Services: Full lifecycle management with interactive forms
- L3VPN Services: Viewing and management (creation/deletion coming soon)
- EVPN Services: Viewing and management (creation/deletion coming soon)
- Streamlit Chat Interface: Natural conversation flow for service management
- Interactive Forms: User-friendly forms with default values for service configuration
- Real-time Status: Monitor service creation and deletion progress
- Configuration Preview: Review JSON configs and JUNOS CLI before deployment
- 2-Step Workflow: Upload → Deploy for service creation
- 3-Step Deletion: Modify → Create Order → Execute for service deletion
- Bulk Operations: Handle multiple services efficiently
- Data Visualization: Enhanced tables with quality analysis and export options
- CLI Generation: Automatic JUNOS CLI configuration generation
- Python 3.8 or higher
- Access to Juniper Routing Director
- OpenAI API key (for GPT-4 functionality)
- Clone the repository: `git clone <repository-url>`
- Install dependencies: `pip install -r requirements.txt`
- Create a `.env` file in the root directory with your credentials (see the example below)
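The `.env` file should look like this (placeholder values shown; substitute your own credentials):

```
USERNAME=your_routing_director_username
PASSWORD=your_routing_director_password
ORG_ID=your_organization_id
OPENAI_API_KEY=your_openai_api_key
```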
Start the web client with `streamlit run mcpClient.py`. The web interface will open at http://localhost:8501.
- Connect to MCP Server: Click "🔌 Connect" in the web interface
- Natural Language Commands: Type commands like:
- "Create 2 L2 circuits from PNH-ACX7024-A1 to TH-ACX7100-A6 for customer SINET"
- "Delete l2circuit1-135006"
- "Show me all L3VPN services"
 
Create an L2 circuit for customer SINET
Create L2 circuit from PNH-ACX7024-A1 to TH-ACX7100-A6 for customer SINET with service name test-l2ckt
Create 3 L2 circuits from PNH-ACX7024-A1 to TH-ACX7100-A6 for customer SINET
Delete l2circuit1-135006
Remove the L2 circuit service named test-circuit
Terminate service l2circuit1-135006
Show me all L3VPN services
Display L2 circuit instances
What EVPN services are running?
Show me the order history
| Variable | Description | Example | 
|---|---|---|
| USERNAME | Routing Director username | [email protected] | 
| PASSWORD | Routing Director password | xyz | 
| ORG_ID | Organization ID in Routing Director | 123456789 | 
| OPENAI_API_KEY | OpenAI API key for GPT-4 | sk-proj-eaVqS7tRMA... | 
The MCP server talks to the Routing Director API at https://x.x.x.x:48800 by default. Modify the BASE_URL in mcpServer.py if needed, as sketched below.
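A minimal sketch of how that configuration might be wired up, assuming python-dotenv is used to read the `.env` file; the actual layout of mcpServer.py may differ:

```python
# Sketch only; the real mcpServer.py may organize this differently.
import os
from dotenv import load_dotenv

load_dotenv()  # pulls USERNAME, PASSWORD, ORG_ID, OPENAI_API_KEY from .env

# Routing Director API endpoint; change this to your controller's address
BASE_URL = "https://x.x.x.x:48800"

USERNAME = os.getenv("USERNAME")
PASSWORD = os.getenv("PASSWORD")
ORG_ID = os.getenv("ORG_ID")
```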
- mcpServer.py: Main MCP server with GPT-4 integration
- mcpClient.py: Streamlit web client
- l3vpn_parser.py: L3VPN service parser
- l2ckt_parser.py: L2 Circuit service parser
- l2vpn_evpn_parser.py: EVPN service parser
- services_generator.py: Automatic service configuration generator
- junos_cli_generator.py: JUNOS CLI configuration generator
- Query Analysis: GPT-4 analyzes natural language input
- Tool Selection: System determines appropriate action
- Form Generation: Interactive forms for missing details
- Configuration: JSON and CLI configurations generated
- Confirmation: User reviews configurations
- Execution: 2-step deployment workflow (see the sketch after this list)
- Monitoring: Real-time status updates
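A rough illustration of that Execution step, assuming basic authentication and the order endpoints listed under API Endpoints below; the exact execute path is abbreviated in that list, so the path built here is only a guess:

```python
# Hedged sketch of the Upload -> Deploy workflow; paths, payloads, and field names are assumptions.
import requests

def deploy_service(base_url, auth, order_payload):
    # Step 1 (Upload): submit the service order
    upload = requests.post(f"{base_url}/order", json=order_payload,
                           auth=auth, verify=False)  # verify=False only for self-signed lab certs
    upload.raise_for_status()
    order_id = upload.json().get("id")  # response field name is a guess

    # Step 2 (Deploy): execute the uploaded order.
    # The README lists this as POST /order/.../exec; the middle segment here is a placeholder.
    execute = requests.post(f"{base_url}/order/{order_id}/exec",
                            auth=auth, verify=False)
    execute.raise_for_status()
    return execute.json()
```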
- Credentials stored in the `.env` file (not committed to version control)
- Basic authentication with Routing Director
- HTTPS communication with API endpoints
- Input validation and error handling
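To keep the credentials file out of version control, make sure it is listed in `.gitignore` (assuming the repository does not already include this entry):

```
# never commit credentials
.env
```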
- Connection Failed
  - Verify `.env` credentials
  - Check network connectivity to Routing Director
  - Ensure Routing Director is accessible
- GPT-4 Not Available
  - Verify OPENAI_API_KEY in `.env`
  - Check OpenAI API quota and billing
- Service Creation Failed
  - Verify the customer exists in Routing Director
  - Check device names in inventory
  - Review service configuration parameters
Enable debug mode in the web interface to see:
- Session state information
- API request/response details
- Service generator status
The server provides access to these Routing Director endpoints:
- `GET /order/instances` - Service instances
- `GET /order/orders` - Service orders
- `GET /order/customers` - Customer information
- `POST /order` - Create service
- `POST /order/.../exec` - Execute service
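For instance, listing service instances with basic authentication might look like the following sketch; the headers, query parameters, and response shape are assumptions:

```python
# Sketch: list service instances from Routing Director (details may differ).
import os
import requests
from dotenv import load_dotenv

load_dotenv()
BASE_URL = "https://x.x.x.x:48800"  # match BASE_URL in mcpServer.py
auth = (os.getenv("USERNAME"), os.getenv("PASSWORD"))

resp = requests.get(f"{BASE_URL}/order/instances", auth=auth,
                    verify=False)  # verify=False only for self-signed lab certificates
resp.raise_for_status()
for instance in resp.json():  # assumes the endpoint returns a JSON list
    print(instance)
```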
This project is licensed under the MIT License - see the LICENSE file for details.
For issues and questions:
- Check the troubleshooting section
- Review logs in the web interface
- Use debug mode for detailed information
- Create an issue in the repository
- v1.0.0: Initial release with GPT-4 integration
- Support for L2 Circuit creation and deletion
- Interactive web interface
- Natural language processing
Note: Remember to keep your .env file secure and never commit it to version control. The .env file should contain your actual credentials for Routing Director and OpenAI API access.