AI Testing Agent: Open Source AI Agent for Software Testing

This repository contains an AI Testing Agent that interacts with an LLM (via OpenRouter) to automatically:

  1. Generate a Test Plan for your API
  2. Generate Python test code (using pytest) based on that plan
  3. Run the generated tests
  4. Accept user feedback to refine or extend the test suite

By default, the AI agent assumes your API has some specific REST routes (e.g., /api/endpoint), but you can customize everything using prompts.


Contents

• Overview
• How It Works
• Project Structure
• Installation
• Environment Variables
• Usage
• Better Prompting Tips
• Troubleshooting

Overview

The AI Testing Agent leverages Large Language Models to automatically generate test plans and test code for your API endpoints. You can iteratively improve the generated tests by providing “feedback” to the agent in natural language.


How It Works

  1. You (the user) interact with agent.py, which is a LangChain-based chatbot.
  2. When you request an action (e.g., “Plan,” “Generate,” “Run,” “Feedback”), the agent calls the corresponding “tool” in agent_tools.py.
  3. Each tool calls api_tester.py in a subprocess, running commands like “python api_tester.py plan” or “python api_tester.py generate” (a minimal sketch of such a wrapper follows this list).
  4. api_tester.py sends a prompt to the LLM (via OpenRouter) instructing it to produce test plans or code.
  5. The output is saved in generated_tests.py, which you can then execute with pytest.
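
For illustration, one of these tool wrappers can be very small. The sketch below is an assumption about how agent_tools.py might be written (the function name run_plan_tool and its signature are hypothetical), not the repository's actual code:

  # Hypothetical sketch of a tool wrapper in agent_tools.py -- names are illustrative.
  import subprocess
  import sys

  def run_plan_tool(_: str = "") -> str:
      """Run `python api_tester.py plan` in a subprocess and return its output."""
      completed = subprocess.run(
          [sys.executable, "api_tester.py", "plan"],
          capture_output=True,
          text=True,
      )
      # Surface stderr to the agent if the command failed.
      return completed.stdout if completed.returncode == 0 else completed.stderr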

Project Structure

• main.py
  • A simple FastAPI application with a few routes (an illustrative sketch follows this list).
• api_tester.py
  • Command-line script containing logic to plan, generate, run, and integrate feedback.
• agent_tools.py
  • Python functions that call api_tester.py in a subprocess (for use with LangChain).
• agent.py
  • A LangChain agent that exposes tools for planning, generating, running tests, and processing feedback.
• generated_tests.py
  • The Python test file automatically generated by the LLM (overwritten each time you run “generate”).
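
The repository's main.py may define different routes; purely as an illustration of the kind of app the agent targets, a minimal FastAPI version of the /api/endpoint route referenced later in this README could look like this:

  # Illustrative sketch only; the actual main.py may differ.
  from fastapi import FastAPI

  app = FastAPI()

  @app.get("/api/endpoint")
  def read_endpoint(param: str = "max"):
      # A real implementation would branch on `param`; this stub always succeeds.
      return {"result": "success"}

Run it with uvicorn as described in the Usage section below.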

Installation

  1. Clone this repository.

  2. Install dependencies (example below, adjust as needed):
    » pip install fastapi uvicorn requests pytest langchain openai

  3. Obtain an OpenRouter key from https://openrouter.ai/ and set it:
    » export OPENROUTER_API_KEY="YOUR_OPENROUTER_KEY"

  4. (Optional) If you have a real API environment instead of localhost:8000, set:
    » export TEST_API_URL="https://your-api-domain"


Environment Variables

• OPENROUTER_API_KEY
  • The key used to authenticate calls to the OpenRouter LLM endpoint.
• TEST_API_URL (optional)
  • If set, generated tests will target this base URL; otherwise it defaults to http://localhost:8000. (A sketch of how these variables might be consumed follows.)
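
As a hedged sketch of how these variables are typically consumed (the actual api_tester.py and generated tests may differ, and the model name below is a placeholder):

  # Illustrative only: reading the two environment variables.
  import os
  import requests

  # Base URL that generated tests target; falls back to the local FastAPI app.
  BASE_URL = os.environ.get("TEST_API_URL", "http://localhost:8000")

  def ask_llm(prompt: str) -> str:
      """Send a prompt to OpenRouter's OpenAI-compatible chat completions endpoint."""
      response = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
          json={
              "model": "openai/gpt-4o-mini",  # placeholder model name
              "messages": [{"role": "user", "content": prompt}],
              "temperature": 0.0,
          },
          timeout=120,
      )
      response.raise_for_status()
      return response.json()["choices"][0]["message"]["content"]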

Usage

1. Start Your API

If you have a FastAPI app in main.py, run it with uvicorn:

uvicorn main:app --host 0.0.0.0 --port 8000

This ensures your API is reachable at http://localhost:8000 for testing.

2. Run the AI Agent

In a separate terminal, start the agent:

python agent.py

You’ll see a prompt:
AI Testing Agent is running. Type 'quit' or 'exit' to stop.
User:

You can now type in commands or instructions to the agent.

3. Commands in agent.py

• “Plan”
  • Calls “python api_tester.py plan” to generate a test plan (text only).
• “Generate”
  • Calls “python api_tester.py generate” to generate the test code file (generated_tests.py).
• “Run”
  • Calls “python api_tester.py run” to run pytest on generated_tests.py.
• “Feedback: <your feedback>”
  • Calls “python api_tester.py feedback '<your feedback>'” to update or refine tests based on your comments.
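
The exact wiring inside agent.py is up to the repository; assuming tool wrappers exported by agent_tools.py (the imported names below are hypothetical, matching the earlier sketch), a classic LangChain setup could look roughly like this:

  # Hypothetical sketch; the actual agent.py may be structured differently.
  import os
  from langchain.agents import AgentType, Tool, initialize_agent
  from langchain.chat_models import ChatOpenAI
  from agent_tools import run_plan_tool, run_generate_tool, run_tests_tool, run_feedback_tool

  # Point LangChain's OpenAI-compatible chat model at OpenRouter.
  llm = ChatOpenAI(
      openai_api_key=os.environ["OPENROUTER_API_KEY"],
      openai_api_base="https://openrouter.ai/api/v1",
  )

  tools = [
      Tool(name="Plan", func=run_plan_tool, description="Generate a test plan for the API."),
      Tool(name="Generate", func=run_generate_tool, description="Write pytest code to generated_tests.py."),
      Tool(name="Run", func=run_tests_tool, description="Run pytest on generated_tests.py."),
      Tool(name="Feedback", func=run_feedback_tool, description="Refine the tests based on user feedback."),
  ]

  agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)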

4. Run Tests Manually

Instead of going through agent.py, you can also directly run:

  1. python api_tester.py plan
  2. python api_tester.py generate
  3. python api_tester.py run
  4. python api_tester.py feedback "Please add boundary tests."
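
Because generated_tests.py is a plain pytest file, the run step presumably just shells out to pytest, and you can also invoke pytest on the file yourself. A sketch of what that step might do (an assumption, not the actual implementation):

  # Roughly what `python api_tester.py run` might do.
  import subprocess
  import sys

  def run_generated_tests() -> int:
      completed = subprocess.run([sys.executable, "-m", "pytest", "generated_tests.py", "-v"])
      return completed.returncode  # 0 means every test passed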

Better Prompting Tips

The LLM’s output quality depends heavily on how you prompt it. Here are some guidelines:

  1. Be Explicit About Your Endpoints

    • If your API has specific routes (/api/endpoint with param=“max” or “min”), describe them in detail.
    • E.g., “GET /api/endpoint?param=max ⇒ 200 with {‘result’: ‘success’}”
  2. Provide Desired Test Names and Structure

    • Example: “Generate test_endpoint_with_max(), test_endpoint_with_min(), etc. They must check the status code and JSON body.”
  3. Offer a Skeleton or Example

    • Show the model an example of the final test code you want, so it follows the same style (a sample skeleton appears at the end of this section).
  4. Set Temperature = 0.0

    • In api_tester.py, you can set "temperature": 0.0 to reduce randomness and get more deterministic results.
  5. Use Feedback Iterations

    • If initial tests differ from your real logic, provide feedback: “Change test x to expect 401 instead of 404,” or “Remove references to /api/items; only use /api/endpoint.”

With these approaches, you can achieve stable, correct tests that match your real API.
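
For tip 3, the skeleton you paste into the prompt can be short. This one assumes the /api/endpoint example above and uses requests as the HTTP client; adjust routes, parameters, and assertions to your real API:

  # Example skeleton to include in your prompt.
  import os
  import requests

  BASE_URL = os.environ.get("TEST_API_URL", "http://localhost:8000")

  def test_endpoint_with_max():
      response = requests.get(f"{BASE_URL}/api/endpoint", params={"param": "max"})
      assert response.status_code == 200
      assert response.json() == {"result": "success"}

  def test_endpoint_with_min():
      response = requests.get(f"{BASE_URL}/api/endpoint", params={"param": "min"})
      assert response.status_code == 200
      assert "result" in response.json()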


Troubleshooting

• Getting 401 instead of 200?

  • Perhaps your API requires an Authorization header. Update the prompt or main.py accordingly (a placeholder example appears at the end of this section).

• The LLM outputs /api/items but your real API doesn’t have that route?

  • Provide a very explicit prompt about the actual route names.

• “ERROR: The environment variable OPENROUTER_API_KEY is not set.”

  • Make sure you exported OPENROUTER_API_KEY properly.

• “Tests always fail.”

  • Verify your main.py logic and the generated tests align. Use the “Feedback” action or incorporate a more detailed prompt that references your real endpoints.
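
For the first item above, the usual fix is to have the generated tests send credentials. In this placeholder the Bearer scheme, the token value, and the API_TOKEN variable are all assumptions; adapt them to your API's auth scheme, or pass the same instruction to the agent via “Feedback”:

  # Placeholder auth example -- adjust the header and token source to your API.
  import os
  import requests

  BASE_URL = os.environ.get("TEST_API_URL", "http://localhost:8000")

  def test_endpoint_with_auth():
      headers = {"Authorization": f"Bearer {os.environ.get('API_TOKEN', 'test-token')}"}
      response = requests.get(f"{BASE_URL}/api/endpoint", params={"param": "max"}, headers=headers)
      assert response.status_code == 200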

© 2023 AI Testing Agent. Released under an open-source license. Feel free to adapt as needed.
