LLMFuzzer is the first open-source fuzzing framework designed specifically for Large Language Models (LLMs), with a focus on their integration into applications via LLM APIs. 🚀🔥
If you're a security enthusiast, a pentester, or a cybersec researcher who loves to find and exploit vulnerabilities in AI systems, LLMFuzzer is the perfect tool for you. It's built to make your testing process streamlined and efficient. 🕵️‍♂️
Key features:

- Robust fuzzing for LLMs 🧪
- LLM API integration testing 🛠️
- Wide range of fuzzing strategies
- Modular architecture for easy extendability
On the roadmap:

- More attacks
- Multiple connectors (JSON-POST, RAW-POST, QUERY-GET)
- Multiple comparers
- Dual-LLM (side-LLM observation)
- Autonomous attack mode
- Clone the repo:

```bash
git clone https://github.com/domwhewell-sage/LLMFuzzer.git
```

- Navigate to the project directory:

```bash
cd LLMFuzzer
```

- Install dependencies:

```bash
pip install -r requirements.txt
```
- Edit llmfuzzer.yaml so it points at your LLM API endpoint (the request chain is LLMFuzzer -> Your Application -> LLM):
```yaml
Resources:
  Collaborator-URL: "https://webhook.site/#!/view/:uuid"
  Proxies: {'http': 'http://127.0.0.1:8080', 'https': 'http://127.0.0.1:8080'}

Connection:
  Type: HTTP-API
  Query-Mode: Replace
  Url: "http://localhost:3000/chat"
  Content: JSON
  Query-Attribute: /query
  Initial-POST-Body: {"sid": "1", "query": "Hi", "model": "gpt-4"}
  Output-Attribute: /response/message
  Headers: {'Authorization': 'Bearer <token>'}
  Cookies: {}

attackFiles: attacks/*.yaml

Reports:
  - HTML: true
    Path: "report.html"
  - CSV: true
    Path: "report.csv"
```
- Run LLMFuzzer:

```bash
python main.py
```
See the wiki for more documentation.
We welcome all contributors who are passionate about improving LLMFuzzer. See our contributing guidelines for ways to get started. 🤝
LLMFuzzer is licensed under the MIT License. See the LICENSE file for more details.
LLMFuzzer couldn't exist without the community. We appreciate all our contributors and supporters. Let's make AI safer together!
- @mns - for the initial work on LLMFuzzer and for allowing the fork