AI Customer Support Chatbot – A microservices-based system using FastAPI, Redis, MLflow, spaCy, and FAISS for intent classification, NER, and RAG. Includes a Next.js frontend and Dockerized deployment for showcasing end-to-end AI-powered support.

abhicode/ai-customer-support


AI Customer Support Chatbot

An end-to-end AI-powered support system built with FastAPI, MLflow, FAISS, and Next.js.
The chatbot handles customer queries, retrieves answers from a knowledge base, tracks conversation context, and connects to human support when needed.

Demo video: ai-customer-assistant.mp4

Features

  • Conversational AI: Handles customer queries via /v1/conversations.
  • Knowledge Base (FAISS): Stores documents and provides context-aware responses.
  • Intent Recognition: MLflow-hosted Logistic Regression model to classify intents.
  • Entity Extraction: spaCy-based NER for extracting order IDs, dates, products, etc.
  • Redis Session Store: Keeps conversation context for personalized follow-ups.
  • Frontend (Next.js + MUI): Responsive chat UI for interaction.
  • Dockerized Microservices: Orchestrator, Intent Service, Response Service, KB Service, MLflow, and Redis.
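The Knowledge Base feature works by embedding documents and returning the closest match for a query. A minimal sketch of that retrieval step, with toy hand-made vectors standing in for Sentence Transformer embeddings and a brute-force search standing in for the FAISS index:

```python
import math

# Toy 3-dimensional "embeddings"; a real deployment would embed the
# text with Sentence Transformers and search a FAISS index instead.
DOC_VECTORS = {
    "Refunds for orders can take 5-7 business days.": [0.9, 0.1, 0.0],
    "Premium users get priority responses within 2 hours.": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, top_k=1):
    # Brute-force nearest-neighbour search; FAISS does this at scale.
    ranked = sorted(DOC_VECTORS.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:top_k]]
```

For example, a query vector close to the "refund" direction, such as `[0.8, 0.2, 0.1]`, retrieves the refund-policy document.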

Architecture

  1. Frontend (Next.js) → Calls orchestrator API.
  2. Orchestrator (FastAPI) → Routes queries to intent-service, response-service, and kb-service.
  3. Intent Service → Uses MLflow model for intent classification.
  4. Response Service → Generates AI responses (OpenAI API).
  5. KB Service → Manages FAISS vector DB and document embeddings.
  6. Redis → Stores session context.
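The fan-out in step 2 can be sketched as a plain function. The service stubs below are hypothetical in-process stand-ins for the HTTP calls the FastAPI orchestrator would make to the real containers:

```python
# Hypothetical stubs for the three downstream services.
def intent_service(message: str) -> str:
    return "refund_request" if "refund" in message.lower() else "general"

def kb_service(message: str) -> list[str]:
    return ["Refunds for orders can take 5-7 business days."]

def response_service(message: str, intent: str, context_docs: list[str]) -> str:
    return f"[{intent}] Based on our policy: {context_docs[0]}"

def orchestrate(message: str) -> str:
    # Mirrors the orchestrator's routing: intent -> KB retrieval -> response.
    intent = intent_service(message)
    docs = kb_service(message)
    return response_service(message, intent, docs)
```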

Tech Stack

  • Backend: FastAPI, Redis, MLflow, FAISS, spaCy
  • Frontend: Next.js, Material UI
  • AI Models: Sentence Transformers, Logistic Regression (MLflow), OpenAI GPT
  • Deployment: Docker Compose

Demo Flow

  1. User sends a message from the frontend.
  2. Orchestrator routes request → intent detection, KB retrieval, LLM response.
  3. The response is returned to the user, with the conversation context persisted in Redis.
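The context persistence in step 3 can be illustrated with a dict standing in for the Redis session store; a real deployment would use a Redis client pointed at the REDIS_HOST/REDIS_PORT settings, and the "session:&lt;id&gt;" key scheme here is a hypothetical example:

```python
import json

# In-memory stand-in for Redis; values are JSON strings, as they
# would be when stored in Redis.
store = {}

def append_turn(session_id, role, text):
    key = f"session:{session_id}"
    history = json.loads(store.get(key, "[]"))
    history.append({"role": role, "text": text})
    store[key] = json.dumps(history)
    return history

append_turn("abc123", "user", "Where is my order?")
append_turn("abc123", "assistant", "Could you share your order ID?")
```

Follow-up turns then read the stored history back, which is what enables personalized follow-ups.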

Getting Started

Prerequisites

  • Docker & Docker Compose installed
  • OpenAI API key (OPENAI_API_KEY)

Run Services

1. Change the current directory to "infra"

cd infra

and create a .env file there with the following contents:

# Redis
REDIS_HOST=redis
REDIS_PORT=6379

# MLflow
MLFLOW_TRACKING_URI=http://mlflow:5000
MLFLOW_MODEL_NAME=customer_intent_classifier
MLFLOW_MODEL_STAGE=Production

# OpenAI
OPENAI_API_KEY=sk-proj-***

# KB Service
DATA_DIR=/app/data
INDEX_FILE=/app/data/faiss.index
DOCS_FILE=/app/data/docs.pkl

2. Run MLflow

docker-compose up -d --build mlflow

The MLflow UI runs on http://localhost:5000/

3. Run Logistic Regression Trainer

docker-compose run --rm trainer

4. Open the MLflow UI and promote the trained model to the "Production" stage

5. Run the frontend container, which also brings up the rest of the service containers

docker-compose up -d --build frontend

Runs on http://localhost:3000

6. Add FAQs or policies to the KB service by sending a request to http://localhost:8003/add with a JSON body such as:

{
  "docs": [
    "Refunds for orders can take 5-7 business days.",
    "Premium users get priority responses within 2 hours."
  ]
}
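A request like the one above can be sent from Python with the standard library; the URL assumes the default Docker Compose port mapping, and the POST method is assumed from the JSON body:

```python
import json
import urllib.request

payload = {
    "docs": [
        "Refunds for orders can take 5-7 business days.",
        "Premium users get priority responses within 2 hours.",
    ]
}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8003/add",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with the KB service running
```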

Future Improvements

  • Integration with external databases (orders, accounts).
  • Authentication & RBAC.
  • Analytics dashboard for query trends.
