Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
Template / PR Information
Introducing our first set of AI/LLM security testing templates for nuclei. These templates focus on security aspects of AI chatbots including basic examples of safety control bypasses, data exfiltration via OAST, and prompt injection vulnerabilities. Each template is designed to detect common attack vectors while being easily adaptable to different AI endpoints.
We're excited to share these initial templates with the community and look forward to feedback and contributions. As AI security is a rapidly evolving field, we plan to expand our coverage to include RAG poisoning, training data leakage and output sanitization checks. We encourage the community to try these templates, share their experiences, and contribute ideas for new AI security checks.
Note:
These templates are currently experimental and represent our initial exploration into automated AI security testing. We're testing different approaches and methodologies to find the most effective ways to identify AI-specific vulnerabilities.