This project aims to integrate Large Language Models (LLMs) into the EvoSuite test framework. This repository specifically serves as the request handling component of the proposed framework. The goal is to enhance the understandability of generated tests and assist in resolving coverage stalls, inspired by methods like CodaMosa. The current phase focuses on handling requests through Strawberry to interact with LLMs.
- Python >= 3.8
- Pip (Python package installer)
- ollama >= v0.1.12 (when running models locally)
- Clone the Repository: Since the repository is private, ensure you have access. Once public, it can be cloned as follows:

  ```shell
  git clone https://github.com/amirdeljouyi/LLM-server
  ```
- Install Dependencies: Navigate to the project directory and install the required Python packages:

  ```shell
  pip install 'strawberry-graphql[debug-server]' requests
  ```

  (On PyPI, the Strawberry GraphQL library is published as `strawberry-graphql`; the `debug-server` extra provides the `strawberry server` command used below.)
- Set Non-Local Use: In `main.py`, change the value of the variable `USE_LOCAL_LLM` to `False`.
- Token Setup: In `main.py`, change the value of `api_token` to your Hugging Face token:

  ```python
  api_token = "hf_XXXXXXX" # <- your hugging face token
  ```

  Replace `hf_XXXXXXX` with your actual API token.
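For non-local use, the two edits above amount to a configuration block like the following sketch. Only `USE_LOCAL_LLM` and `api_token` come from the steps above; `HF_HEADERS` is a hypothetical illustration of how a token is typically attached to Hugging Face API calls:

```python
# Sketch of the configuration section of main.py for non-local use.
# USE_LOCAL_LLM and api_token are the variables named in the steps above;
# HF_HEADERS is hypothetical and only illustrates how a token is usually sent.

USE_LOCAL_LLM = False  # query the Hugging Face API instead of a local model

api_token = "hf_XXXXXXX"  # <- your Hugging Face token

# The Hugging Face Inference API expects the token as a Bearer header.
HF_HEADERS = {"Authorization": f"Bearer {api_token}"}
```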
- Set Local Use: In `main.py`, change the value of the variable `USE_LOCAL_LLM` to `True`.
- Installing Ollama: For a detailed tutorial on setting up ollama, we suggest referring to their repository. First, install ollama from the CLI:

  ```shell
  curl https://ollama.ai/install.sh | sh
  ```

  Then install the LLM that you want to use. By default, we use `codellama:7b-instruct`, which is from the Llama 2 family of LLMs:

  ```shell
  ollama run codellama:7b-instruct
  ```

  After the installation has completed, you will see the following in your CLI:

  ```
  >>> Send a message (/? for help)
  ```

  This indicates that the model has been fetched properly. You can exit this prompt by typing `/bye`. You are now ready to work with LLM-server.
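Once the model is pulled, ollama serves it over a local HTTP API (by default on port 11434). As a sketch of how a request to it can be assembled, the snippet below builds a body for ollama's `/api/generate` endpoint; the endpoint and field names follow ollama's documented API, but the prompt and usage here are illustrative only:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint
MODEL = "codellama:7b-instruct"                     # the model pulled above

def build_generate_payload(prompt: str) -> dict:
    """Build the JSON body for ollama's /api/generate endpoint."""
    return {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # request a single JSON response instead of a stream
    }

payload = build_generate_payload("Write a unit test for a stack class.")
body = json.dumps(payload)
# Sending it (requires a running ollama instance) would look like:
#   requests.post(OLLAMA_URL, data=body)
```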
- Set the Environment Variables: Ensure that your Python environment is set up properly.
- Start the Server: Run the following command to start the Strawberry server:

  ```shell
  strawberry server main
  ```
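By default, Strawberry's debug server listens on `http://127.0.0.1:8000/graphql`. A quick way to smoke-test it is to POST a GraphQL query; in the sketch below, the field and argument names are placeholders, so use the query format shipped with the repository against the real schema:

```python
import json

GRAPHQL_URL = "http://127.0.0.1:8000/graphql"  # strawberry debug server's default address

# Placeholder query: the field and argument names below are illustrative only;
# consult the repository's query-format file for the real schema.
query = """
query {
  llmResponse(prompt: "Generate a test for Stack.push") {
    text
  }
}
"""

body = json.dumps({"query": query})
# Sending it to the running server would look like:
#   requests.post(GRAPHQL_URL, data=body, headers={"Content-Type": "application/json"})
```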
- `strawberry`: Handles GraphQL requests and responses.
- `requests`: Manages HTTP requests to the Hugging Face API.
- `Prompt`: A class for structuring the input prompt.
- `Response`: A class for structuring the LLM response.
- `Query`: Contains the logic for the GraphQL query and interaction with the LLM.
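To make the roles of `Prompt` and `Response` concrete, here is a plain-dataclass sketch of their likely shape. In `main.py` they are defined as Strawberry GraphQL types, and the actual field names may differ:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    text: str  # the input sent to the LLM

@dataclass
class Response:
    text: str   # the LLM's generated output
    model: str  # which model produced it (hypothetical field)

# Example round trip: a prompt goes in, a structured response comes back.
p = Prompt(text="Generate a JUnit test for Stack.push")
r = Response(text="// generated test ...", model="codellama:7b-instruct")
```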
- Contains the names of the project's dependencies.
- Contains the format of the query that you can use to test the LLM-server.
The project is designed for easy expansion. When adding new features, modifying existing ones, or removing components:
- Update Code: Implement the changes in the appropriate files.
- Document Changes: Reflect these changes in this README, detailing new dependencies, environment variables, or usage instructions.
This project is part of a larger research effort to integrate LLMs with the EvoSuite framework. Future updates will include:
- Advanced LLM interactions.
- Enhanced request handling capabilities.
- The ability to run models locally.
- More comprehensive integration with EvoSuite.
As the project is intended for open-source contribution after becoming public, we welcome contributions. Please read CONTRIBUTING.md
(TODO) for details on our code of conduct and the process for submitting pull requests.
This project is licensed under the terms described in the [LICENSE](LICENSE) file - see that file for details.
For any inquiries or contributions, please contact [email protected]