Skip to content

Full System Prompt Transparency for All—that aggregates full system prompts, guidelines, and tools from major AI models like ChatGPT, Gemini, Claude, Mistral, Anthropic, xAI, Perplexity, and more. It’s dedicated to exposing hidden AI instructions to build trust through openness.

License

Notifications You must be signed in to change notification settings

oxbshw/Full-system-prompts

Repository files navigation

Full-system-prompts

SYSTEM PROMPT TRANSPARENCY FOR ALL

Built with Sayed Allam

License: AGPL v3 Contributors

This repository contains a collection of full system prompts, guidelines, and tools from major AI models, including OpenAI, Google, Anthropic, xAI, Perplexity, and many more. Our goal is to bring transparency to the hidden instructions that shape AI behavior.


Table of Contents


Why This Exists

"In order to trust the output, one must understand the input."

AI labs use extensive, hidden prompt scaffolds to control the behavior of their models. These instructions, which are not visible to the public, have a significant impact on the perceptions and behavior of the growing number of people who rely on AI as an external intelligence layer.

These hidden prompts dictate:

  • What AIs are forbidden to say
  • The personas and functions they are forced to adopt
  • How they are instructed to prevaricate, refuse, or redirect
  • The ethical and political frameworks that are embedded by default

When you interact with an AI without knowing its system prompt, you are not engaging with a neutral intelligence—you are communicating with a shadow-puppet.

CL4R1T4S aims to rectify this.


Covered Models

This repository includes system prompts from a wide range of AI models and platforms:

Major AI Providers

  • OpenAI: GPT-3, GPT-3.5, GPT-4 series
  • Google: Gemini, LaMDA, Bard
  • Anthropic: Claude (all versions)
  • xAI: Grok
  • Perplexity: Perplexity AI

Development Tools & Platforms

  • Cursor: AI-powered code editor
  • Windsurf: Development assistant
  • Replit: Coding platform AI
  • Vercel V0: Web development AI
  • Bolt: Development tool
  • Lovable: AI platform

Specialized AI Systems

  • Devin: AI software engineer
  • Manus: AI assistant
  • Cluely: AI platform
  • MultiOn: AI agent
  • SameDev: Development AI
  • Hume: Emotional AI

And many more! We're constantly adding new models as they become available.


Project Roadmap

🎯 Current Priorities

  • ✅ Expand collection of system prompts across major AI platforms
  • 🔄 Improve organization and searchability of existing content
  • 🔄 Develop automated extraction and verification tools

🚀 Future Goals

  • Analysis Tools: Create tools to analyze patterns and biases in system prompts
  • Community Platform: Build dedicated platforms for discussion and collaboration
  • Research Partnerships: Partner with academic institutions for systematic analysis
  • Policy Advocacy: Work with policymakers to promote AI transparency standards

📊 Metrics & Impact

  • Track coverage across AI providers and models
  • Monitor community engagement and contributions
  • Measure impact on AI transparency discussions

Legal and Ethical Considerations

The Full-system-prompts project operates in a complex legal and ethical landscape. We are committed to navigating these challenges responsibly.

⚖️ Legal Framework

  • Copyright Status: System prompts exist in a legal gray area. We believe functional instructions should be treated differently from creative works
  • Terms of Service: Disclosure may conflict with some provider ToS. We prioritize public transparency interest
  • Takedown Policy: We respond promptly to valid legal requests while protecting legitimate transparency interests

🛡️ Ethical Guidelines

  • Responsible Disclosure: We avoid prompts that could enable direct harm or abuse
  • Educational Purpose: Content is shared for research, transparency, and accountability
  • Community Standards: We maintain clear guidelines for contributors and users
  • Harm Prevention: We actively monitor for misuse and take appropriate action

🔍 Transparency Principles

  • All extraction methods and sources are documented when possible
  • We provide context and metadata for better understanding
  • Community can verify and discuss findings openly
  • Regular audits ensure content quality and ethical compliance

How to Contribute

We welcome contributions from the transparency community! Here's how you can help:

📤 Submitting Prompts

Required Information:

  • Model name/version: Specific AI model and version
  • Date of extraction: When the prompt was obtained
  • Extraction method: How it was discovered (leaked, reverse-engineered, etc.)
  • Context/notes: Any additional relevant information

Submission Methods:

  1. Pull Request: Submit via GitHub PR with proper documentation
  2. Direct Contact: Reach out to @Sayedevv_plinius on X or Discord
  3. Team Contact: Connect with Sayed Allam or project team members

📋 Contribution Guidelines

  • Verify accuracy: Ensure prompts are authentic and properly attributed
  • Document sources: Provide clear information about how prompts were obtained
  • Follow structure: Use our standardized format for consistency
  • Respect ethics: Only submit content that serves the transparency mission

🔍 Other Ways to Help

  • Analysis & Research: Help analyze patterns and implications
  • Documentation: Improve organization and accessibility
  • Community Building: Help grow our transparency community
  • Tool Development: Create tools for prompt analysis and discovery

Community

Join the conversation and connect with other members of the AI transparency community:

💬 Communication Channels

  • Twitter/X: @Sayedevv_plinius - Follow for updates and discussions
  • Discord: Contact @Sayedevv_plinius for community server access
  • GitHub Issues: Use for technical discussions and feature requests

🤝 Community Guidelines

  • Maintain respectful and constructive dialogue
  • Focus on transparency and accountability goals
  • Share knowledge and resources openly
  • Support fellow community members

📢 Stay Updated

  • Watch this repository for new additions
  • Follow our social media for announcements
  • Join community discussions about AI transparency

Contributors

This project thrives thanks to our amazing community of transparency advocates, researchers, and developers. Every contribution—whether it's a system prompt, analysis, documentation, or community support—helps advance AI transparency.

👥 Project Team

  • Sayed Allam - Co-Creator & Lead Contributor
  • @elder_plinius - Project Founder & Community Lead

🏆 Recognition

We believe in recognizing the important work of our contributors:

  • Contributors are acknowledged in our documentation
  • Major contributions are highlighted in project updates
  • Community members can earn recognition for ongoing support

🙏 Special Thanks

A heartfelt thanks to everyone who has contributed to making AI systems more transparent and accountable.

[Contributor list will be automatically generated and updated]


License

This project is licensed under the GNU Affero General Public License v3.0.

The AGPL-3.0 ensures that this transparency resource remains open and accessible while requiring that any derivative works also remain open source. This strong copyleft license aligns with our mission of promoting AI transparency and preventing the re-closure of publicly shared information.

Key points:

  • ✅ Free to use, modify, and distribute
  • ✅ Must share improvements with the community
  • ✅ Network use requires source availability
  • ✅ Protects against proprietary exploitation

🔍 Transparency is not just a feature—it's a fundamental requirement for trustworthy AI.

Built with ❤️ by Sayed Allam and the AI transparency community

About

Full System Prompt Transparency for All—that aggregates full system prompts, guidelines, and tools from major AI models like ChatGPT, Gemini, Claude, Mistral, Anthropic, xAI, Perplexity, and more. It’s dedicated to exposing hidden AI instructions to build trust through openness.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published