SYSTEM PROMPT TRANSPARENCY FOR ALL
Built with Sayed Allam
This repository contains a collection of full system prompts, guidelines, and tools from major AI models, including OpenAI, Google, Anthropic, xAI, Perplexity, and many more. Our goal is to bring transparency to the hidden instructions that shape AI behavior.
- Why This Exists
- Covered Models
- Project Roadmap
- Legal and Ethical Considerations
- How to Contribute
- Community
- Contributors
- License
"In order to trust the output, one must understand the input."
AI labs use extensive, hidden prompt scaffolds to control the behavior of their models. Invisible to the public, these instructions shape model behavior and, with it, the perceptions of the growing number of people who rely on AI as an external intelligence layer.
These hidden prompts dictate:
- What AIs are forbidden to say
- The personas and functions they are forced to adopt
- How they are instructed to prevaricate, refuse, or redirect
- The ethical and political frameworks that are embedded by default
When you interact with an AI without knowing its system prompt, you are not engaging with a neutral intelligence—you are communicating with a shadow-puppet.
The Full-system-prompts project aims to rectify this.
This repository includes system prompts from a wide range of AI models and platforms:
- OpenAI: GPT-3, GPT-3.5, GPT-4 series
- Google: Gemini, LaMDA, Bard
- Anthropic: Claude (all versions)
- xAI: Grok
- Perplexity: Perplexity AI
- Cursor: AI-powered code editor
- Windsurf: Development assistant
- Replit: Coding platform AI
- Vercel V0: Web development AI
- Bolt: Development tool
- Lovable: AI platform
- Devin: AI software engineer
- Manus: AI assistant
- Cluely: AI platform
- MultiOn: AI agent
- SameDev: Development AI
- Hume: Emotional AI
And many more! We're constantly adding new models as they become available.
Current progress:
- ✅ Expand the collection of system prompts across major AI platforms
- 🔄 Improve organization and searchability of existing content
- 🔄 Develop automated extraction and verification tools (a minimal sketch of such a tool follows this roadmap)
Future plans:
- Analysis Tools: Create tools to analyze patterns and biases in system prompts
- Community Platform: Build dedicated platforms for discussion and collaboration
- Research Partnerships: Partner with academic institutions for systematic analysis
- Policy Advocacy: Work with policymakers to promote AI transparency standards
Success metrics:
- Track coverage across AI providers and models
- Monitor community engagement and contributions
- Measure impact on AI transparency discussions
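To give a flavor of what the verification tooling on the roadmap might look like, here is a minimal sketch in Python. It fingerprints archived prompt files to detect duplicates and diffs two captures of the same prompt to surface silent changes. The filenames are hypothetical, and this is an illustrative sketch, not the project's actual tooling.

```python
# Hypothetical verification sketch: fingerprint archived prompts and show
# what changed between two captured versions. Filenames are illustrative,
# not part of the repository's actual tooling.
import difflib
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Stable SHA-256 fingerprint of a prompt file, useful for de-duplication."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def diff_versions(old: Path, new: Path) -> str:
    """Unified diff between two captures of the same system prompt."""
    return "".join(difflib.unified_diff(
        old.read_text(encoding="utf-8").splitlines(keepends=True),
        new.read_text(encoding="utf-8").splitlines(keepends=True),
        fromfile=old.name, tofile=new.name,
    ))

if __name__ == "__main__":
    old, new = Path("grok-2025-01.txt"), Path("grok-2025-03.txt")
    if fingerprint(old) != fingerprint(new):
        print(diff_versions(old, new))
```

Hashing makes duplicate submissions cheap to detect, while the unified diff makes prompt drift between captures easy to review.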
The Full-system-prompts project operates in a complex legal and ethical landscape. We are committed to navigating these challenges responsibly.
Legal considerations:
- Copyright Status: System prompts occupy a legal gray area; we believe functional instructions should be treated differently from creative works
- Terms of Service: Disclosure may conflict with some providers' terms of service; we prioritize the public interest in transparency
- Takedown Policy: We respond promptly to valid legal requests while protecting legitimate transparency interests
Ethical safeguards:
- Responsible Disclosure: We avoid prompts that could enable direct harm or abuse
- Educational Purpose: Content is shared for research, transparency, and accountability
- Community Standards: We maintain clear guidelines for contributors and users
- Harm Prevention: We actively monitor for misuse and take appropriate action
Transparency practices:
- All extraction methods and sources are documented when possible
- We provide context and metadata to aid understanding
- The community can verify and discuss findings openly
- Regular audits ensure content quality and ethical compliance
We welcome contributions from the transparency community! Here's how you can help:
Required Information (an example sketch follows this list):
- ✅ Model name/version: Specific AI model and version
- ✅ Date of extraction: When the prompt was obtained
- ✅ Extraction method: How it was discovered (leaked, reverse-engineered, etc.)
- ✅ Context/notes: Any additional relevant information
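To make the required fields concrete, here is a minimal sketch of a submission record as a Python dataclass. The class name, field names, and rendered layout are illustrative assumptions, not the repository's actual standardized format.

```python
# Hypothetical submission record covering the four required fields.
# Names and layout are illustrative assumptions, not the repo's real format.
from dataclasses import dataclass

@dataclass
class PromptSubmission:
    model: str          # specific AI model and version
    extracted_on: str   # date the prompt was obtained (ISO format)
    method: str         # e.g. "leaked", "reverse-engineered"
    notes: str = ""     # any additional relevant context

    def to_markdown(self) -> str:
        """Render the metadata header that would accompany a submitted prompt."""
        return (
            f"- Model name/version: {self.model}\n"
            f"- Date of extraction: {self.extracted_on}\n"
            f"- Extraction method: {self.method}\n"
            f"- Context/notes: {self.notes or 'n/a'}\n"
        )

print(PromptSubmission("GPT-4", "2025-03-01", "reverse-engineered").to_markdown())
```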
Submission Methods:
- Pull Request: Submit via GitHub PR with proper documentation
- Direct Contact: Reach out to @Sayedevv_plinius on X or Discord
- Team Contact: Connect with Sayed Allam or project team members
Contribution guidelines:
- Verify accuracy: Ensure prompts are authentic and properly attributed
- Document sources: Provide clear information about how prompts were obtained
- Follow structure: Use our standardized format for consistency
- Respect ethics: Only submit content that serves the transparency mission
Other ways to contribute:
- Analysis & Research: Help analyze patterns and implications
- Documentation: Improve organization and accessibility
- Community Building: Help grow our transparency community
- Tool Development: Create tools for prompt analysis and discovery (a small example sketch follows this list)
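As one example of what analysis tooling could look like, here is a minimal sketch, assuming a hypothetical prompts/ folder of archived text files and an arbitrary keyword list. It counts restriction-related words in each prompt to surface patterns worth a closer look; it is illustrative, not an existing project tool.

```python
# Hypothetical analysis sketch: count restriction-related words in each
# archived system prompt. The prompts/ folder and keyword list are
# illustrative assumptions.
from collections import Counter
from pathlib import Path
import re

KEYWORDS = {"refuse", "never", "must", "avoid", "decline"}

def keyword_counts(text: str) -> Counter:
    """Case-insensitive counts of the tracked keywords in one prompt."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in KEYWORDS)

for path in sorted(Path("prompts").glob("*.txt")):
    print(path.name, dict(keyword_counts(path.read_text(encoding="utf-8"))))
```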
Join the conversation and connect with other members of the AI transparency community:
- Twitter/X: @Sayedevv_plinius - Follow for updates and discussions
- Discord: Contact @Sayedevv_plinius for community server access
- GitHub Issues: Use for technical discussions and feature requests
Community guidelines:
- Maintain respectful and constructive dialogue
- Focus on transparency and accountability goals
- Share knowledge and resources openly
- Support fellow community members
Stay updated:
- Watch this repository for new additions
- Follow our social media for announcements
- Join community discussions about AI transparency
This project thrives thanks to our amazing community of transparency advocates, researchers, and developers. Every contribution—whether it's a system prompt, analysis, documentation, or community support—helps advance AI transparency.
- Sayed Allam - Co-Creator & Lead Contributor
- @elder_plinius - Project Founder & Community Lead
We believe in recognizing the important work of our contributors:
- Contributors are acknowledged in our documentation
- Major contributions are highlighted in project updates
- Community members can earn recognition for ongoing support
A heartfelt thanks to everyone who has contributed to making AI systems more transparent and accountable.
[Contributor list will be automatically generated and updated]
This project is licensed under the GNU Affero General Public License v3.0.
The AGPL-3.0 ensures that this transparency resource remains open and accessible while requiring that any derivative works also remain open source. This strong copyleft license aligns with our mission of promoting AI transparency and preventing the re-closure of publicly shared information.
Key points:
- ✅ Free to use, modify, and distribute
- ✅ Must share improvements with the community
- ✅ Network use requires source availability
- ✅ Protects against proprietary exploitation
🔍 Transparency is not just a feature—it's a fundamental requirement for trustworthy AI.
Built with ❤️ by Sayed Allam and the AI transparency community