Use an AI agent to get information about your Kubernetes deployments!

Kubernetes AI Agent Using Flask and OpenAI's API

This project implements an AI agent capable of interpreting and answering Kubernetes-related queries using natural language. The agent leverages Flask for the web server, Kubernetes Python client for cluster interactions, and OpenAI’s GPT-4 model for natural language processing.

Approach

1. Flask Web Server and Endpoint Setup

The agent runs on a Flask web server and provides a POST endpoint (/query) for receiving user queries. Queries are submitted in JSON format, and the server responds with relevant information about the Kubernetes cluster.

  • Why Flask? Flask is lightweight and suitable for microservices, with a minimal setup that allows for efficient request handling and easy API structuring.
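A minimal sketch of what such a /query endpoint could look like (the handler body and response shape here are assumptions, not the project's exact code):

```python
# Minimal sketch of a Flask POST /query endpoint; the response shape
# and handler body are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/query", methods=["POST"])
def query():
    # Expect a JSON body like {"query": "List all pods"}
    payload = request.get_json(silent=True) or {}
    question = payload.get("query", "")
    if not question:
        return jsonify({"error": "missing 'query' field"}), 400
    # ...hand the question off to the query processor here...
    return jsonify({"query": question, "answer": "..."})

# To serve locally on port 8000:
# app.run(host="0.0.0.0", port=8000)
```

Flask's request.get_json(silent=True) returns None instead of raising on a malformed body, which keeps the endpoint from crashing on bad input.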

2. Kubernetes Client Initialization and Error Handling

The agent uses the Kubernetes Python client API (client.CoreV1Api and client.AppsV1Api) to connect to the Kubernetes cluster, loading the .kube/config file for secure configuration access.

  • Error Management: A custom context manager (k8s_error_handling) is implemented to manage API exceptions. This logs both Kubernetes-specific errors and unexpected issues, allowing the application to handle errors gracefully and providing clear logging for debugging.
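A context manager of this kind could be sketched as follows (the logging details are assumptions; only the name k8s_error_handling comes from the description above):

```python
# Hedged sketch of a k8s_error_handling context manager; the project's
# actual implementation may differ.
import logging
from contextlib import contextmanager

try:
    from kubernetes.client.exceptions import ApiException
except ImportError:  # allow the sketch to run without the k8s client installed
    class ApiException(Exception):
        status = None
        reason = None

logger = logging.getLogger(__name__)

@contextmanager
def k8s_error_handling(operation: str):
    """Log Kubernetes API errors with context instead of crashing the server."""
    try:
        yield
    except ApiException as e:
        # Kubernetes-specific failure: log status and reason for debugging
        logger.error("Kubernetes API error during %s: %s %s",
                     operation, e.status, e.reason)
        raise
    except Exception:
        # Unexpected failure: keep the full traceback in the logs
        logger.exception("Unexpected error during %s", operation)
        raise
```

Usage would look like `with k8s_error_handling("list pods"): v1.list_namespaced_pod("default")`.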

3. Agent Design: Kubernetes Query Processor

The main logic resides within the KubernetesQueryProcessor class, which encapsulates Kubernetes-related functions needed to fulfill common queries. The agent maps user queries to functions that can:

  • List pods, services, nodes, deployments, and namespaces
  • Retrieve status, logs, or pods generated by specific deployments
  • Count the number of running pods and nodes

Each function uses the Kubernetes API to return data in the expected format, keeping identifiers concise. For example, get_pods_by_deployment strips the generated suffixes from pod names so that only base names are returned.
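The suffix-stripping step can be illustrated with a small helper (the function name and regex here are assumptions; Deployment-managed pods end in two generated segments, a ReplicaSet hash and a pod suffix):

```python
# Illustrative helper for trimming generated suffixes from pod names so
# only the base (deployment) name remains; the name and pattern are
# assumptions, not the project's exact code.
import re

def strip_pod_suffix(pod_name: str) -> str:
    """E.g. 'nginx-65f9c7b4d6-x2bkq' -> 'nginx'.

    Deployment pods end with two generated segments: an alphanumeric
    ReplicaSet hash and a 5-character pod suffix.
    """
    return re.sub(r"(-[a-z0-9]{5,10}){2}$", "", pod_name)
```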

4. Natural Language Understanding with GPT-4

OpenAI’s GPT-4 model is used to interpret natural language queries. It translates each query into a structured JSON format containing:

  • operation: The name of the Kubernetes function to execute

  • parameters: The parameters needed for the operation

    • Prompting Technique: A detailed system prompt provides GPT-4 with a dictionary of valid operations and their parameters. This helps map user queries accurately to specific functions within KubernetesQueryProcessor.
    • Function Invocation: Once the query is parsed, the agent dynamically calls the required function, passing in parameters as needed.
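The parse-and-dispatch step could be sketched like this (the method and dispatcher names are assumptions based on the description above; a real processor would call the Kubernetes API instead of returning placeholders):

```python
# Sketch of mapping the model's JSON reply to a processor method; names
# and the placeholder return value are illustrative assumptions.
import json

class KubernetesQueryProcessor:
    def list_pods(self, namespace="default"):
        return ["nginx", "redis"]  # placeholder for a real API call

def dispatch(processor, model_reply: str):
    """Parse {'operation': ..., 'parameters': {...}} and call the matching method."""
    parsed = json.loads(model_reply)
    operation = parsed["operation"]
    parameters = parsed.get("parameters", {})
    handler = getattr(processor, operation, None)
    if handler is None:
        raise ValueError(f"Unknown operation: {operation}")
    return handler(**parameters)

reply = '{"operation": "list_pods", "parameters": {"namespace": "default"}}'
result = dispatch(KubernetesQueryProcessor(), reply)
```

Using getattr with a None default lets the agent reject operations the model hallucinates instead of raising an AttributeError.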

5. Query Processing Flow

Upon receiving a query:

  1. It is parsed with GPT-4, which determines the required operation and parameters.
  2. The relevant Kubernetes function is invoked with the specified parameters.
  3. The response is formatted for clarity and returned to the user.
  • Output Formatting: For list-based queries, results are presented as comma-separated strings. Counts or single values are returned as plain text.
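The formatting rule above amounts to a small helper along these lines (the name format_answer is illustrative):

```python
# Sketch of the output-formatting step: lists become comma-separated
# strings, counts and single values become plain text. Helper name is
# an assumption.
def format_answer(result):
    if isinstance(result, (list, tuple)):
        return ", ".join(str(item) for item in result)
    return str(result)
```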

6. Deployment and Testing Recommendations

To ensure robustness, the agent can be tested on Minikube with test applications deployed to confirm responses match expectations. Testing on Minikube provides a realistic Kubernetes environment, allowing potential issues to be identified and resolved before wider deployment.

Local Setup

Prerequisites

  1. Python 3.10: Make sure Python 3.10 is installed.

    • Install Python 3.10 from python.org.
  2. Kubernetes Cluster Access: Ensure you have access to a Kubernetes cluster (e.g., using Minikube or a remote cluster).

    • Minikube: install it by following the official Minikube installation guide.
    • After starting Minikube, confirm the Kubernetes config file is located at ~/.kube/config.
  3. OpenAI API Key: You’ll need an API key from OpenAI to use GPT-4.

    • Sign up for OpenAI and generate an API key.
    • Set the API key as an environment variable:
      export OPENAI_API_KEY="your_openai_api_key"
  4. Install Dependencies: Install the required Python libraries.

    pip install -r requirements.txt

Running the AI Agent

Start the Flask Server. To start the server, navigate to the directory containing main.py and run:

python main.py

The server should now be running locally on http://localhost:8000. If successful, you should see output similar to:

 * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)

Local Testing

Testing the Agent Locally: Example Queries Using curl. Once the server is running, you can test the agent by making HTTP POST requests to http://localhost:8000/query. Each query should be submitted in JSON format with a query key, as shown in the examples below.

For each example, open a terminal and replace your_query_here with the specific question or command you want to test.

  1. Check the Status of a Pod
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "What is the status of the pod named nginx?"}'
  2. List All Pods in the Default Namespace
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all pods in the default namespace"}'
  3. Show Logs of a Specific Pod
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "Show me logs for pod nginx"}'
  4. List All Nodes in the Cluster
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all nodes in the cluster"}'
  5. List All Services in the Default Namespace
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all services in the default namespace"}'
  6. Count Running Pods in the Default Namespace
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "How many pods are running in the default namespace?"}'
  7. List All Deployments in the Default Namespace
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "List all deployments in the default namespace"}'

About

Use an AI agent to get information about your Kubernetes deployments!
