diff --git a/.cursor/rules/create-explainer.md b/.cursor/rules/create-explainer.md new file mode 100644 index 000000000..c9f8124ef --- /dev/null +++ b/.cursor/rules/create-explainer.md @@ -0,0 +1,100 @@ +# Rule: Generating an Explainer or a Product Requirements Document (PRD) + +## Goal + +To guide an AI assistant in creating a detailed Explainer/Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature. + +## Process + +1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality. +2. **Ask Clarifying Questions:** Before writing the explainer, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "why" and "what" of the feature, not necessarily the "how" (which the developer will figure out). All clarifying questions must be presented as enumerated letter or number lists to maximize clarity and ease of response. If any user answer is ambiguous, incomplete, or conflicting, the AI must explicitly flag the uncertainty, ask follow-up questions, and not proceed until the ambiguity is resolved or clearly documented as an open question or assumption. +3. **Generate Explainer:** Based on the initial prompt and the user's answers to the clarifying questions, generate an Explainer using the structure outlined below. +4. **Save Explainer:** Save the generated document as a GitHub issue with the title `Explainer: [feature-name]` inside the `Moq.Analyzers` repository using your GitHub MCP. + +## Clarifying Questions (Examples) + +The AI should adapt its questions based on the prompt, but must always: + +- Present all clarifying questions as enumerated letter or number lists. +- Validate that each user story provided follows the INVEST mnemonic. 
(Independent, Negotiable, Valuable, Estimable, Small, Testable.)
If a user story does not, the AI must either rewrite it for compliance or ask the user for clarification. +- If any answer is unclear, incomplete, or conflicting, the AI must flag it and ask for clarification before proceeding. + +Here are some common areas to explore: + +- **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?" +- **Target User:** "Who is the primary user of this feature?" +- **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?" +- **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)" +- **INVEST compliance**: Validate that every user story follows the INVEST mnemonic. +- **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?" +- **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?" +- **Data Requirements:** "What kind of data does this feature need to display or manipulate?" +- **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?" +- **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?" + +## Explainer Structure + +The generated explainer should include the following sections: + +1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal. +2. **Goals:** List the specific, measurable objectives for this feature. +3. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope. +4. **User Stories:** Detail the user narratives describing feature usage and benefits. +5. **Functional Requirements:** List the specific functionalities the feature must have. 
Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements. +6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable. +7. **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module"). +8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X"). +9. **Open Questions:** List any remaining questions, areas needing further clarification, or assumptions made due to missing or ambiguous information. If there are no open questions or assumptions, explicitly state "None". + +## Target Audience + +Assume the primary reader of the Explainer is a **junior developer**. Therefore, requirements should be explicit, unambiguous, avoid jargon where possible, and be written with a grade 9 reading level. Provide enough detail for them to understand the feature's purpose and core logic. + +## Output + +- **Format:** Markdown (`.md`) +- **Location:** GitHub issue in the `rjmurillo/Moq.Analyzers` repository +- **Title:** `Explainer: [feature-name]` + +## Final instructions + +1. Do NOT start implementing the Explainer +2. Make sure to ask the user clarifying questions +3. Take the user's answers to the clarifying questions and improve the Explainer + +--- + +## Example Explainer (Generic) + +```markdown +# Explainer: Feature Name + +## Introduction/Overview +Briefly describe the feature and the problem it solves. State the goal. + +## Goals +- List specific, measurable objectives for this feature. + +## Non-Goals (Out of Scope) +- List what is explicitly not included in this feature. + +## User Stories +- As a [user type], I want to [do something] so that [benefit]. +- As a [user type], I want to [do something else] so that [benefit]. 
+ +## Functional Requirements +1. The system must allow users to do X. +2. The system must validate Y before Z. + +## Design Considerations (Optional) +- Link to mockups or describe UI/UX requirements. + +## Technical Considerations (Optional) +- List known technical constraints, dependencies, or suggestions. + +## Success Metrics +- How will success be measured? (e.g., "Increase engagement by 10%", "Reduce support tickets related to X") + +## Open Questions +- List any remaining questions, areas needing clarification, or assumptions. If none, state "None". +``` diff --git a/.cursor/rules/generate-tasks.md b/.cursor/rules/generate-tasks.md new file mode 100644 index 000000000..7ea448635 --- /dev/null +++ b/.cursor/rules/generate-tasks.md @@ -0,0 +1,83 @@ +# Rule: Generating a Task List from an Explainer + +## Goal + +To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Explainer or Product Requirements Document (PRD). The task list should guide a developer through implementation. + +## Output + +- **Format:** Markdown (`.md`) +- **Location:** GitHub issues in `rjmurillo/Moq.Analyzers` repository +- **Title:** `Tasks for [explainer-issue-title]` (e.g., `Tasks for Explainer Expand LINQ Coverage`) + +Link the Tasks issue to the Explainer issue in GitHub using your MCP. + +## Process + +1. **Receive Explainer/PRD Reference:** The user points the AI to a specific Explainer or PRD GitHub issue +2. **Analyze Explainer/PRD:** The AI uses GitHub MCP and reads and analyzes the functional requirements, user stories, and other sections of the specified Explainer/PRD. +3. **Assess Current State:** Review the existing codebase to understand existing infrastructure, architectural patterns, and conventions. Also, identify any existing components or features that already exist and could be relevant to the Explainer/PRD requirements. 
Then, identify existing related files, components, and utilities that can be leveraged or need modification. +4. **Phase 1: Generate Parent Tasks:** Based on the Explainer/PRD analysis and current state assessment, create the file and generate the main, high-level tasks required to implement the feature. Use your judgment on how many high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the Explainer/PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed." +5. **Wait for Confirmation:** Pause and wait for the user to respond with "Go". +6. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task, cover the implementation details implied by the PRD, and consider existing codebase patterns where relevant without being constrained by them. +7. **Identify Relevant Files:** Based on the tasks and Explainer/PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable. Identification may be achieved through search and through code coverage analysis. +8. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure. +9. **Save Task List:** Save the generated document into a new set of GitHub issues linked to the parent tasks. + +## Output Format + +The generated task list _must_ follow this structure. The example below is generic and should be adapted to the specific Explainer/PRD being processed. 
+ +### GitHub Issue Structure Guidance + +- The parent task list should be created as a GitHub issue (the parent issue), with each high-level task represented as a checklist item that links to a corresponding sub-task issue. +- Each sub-task should be created as its own GitHub issue, with a reference back to the parent issue (e.g., "Parent: #123"). +- The parent issue should include a checklist like: + - [ ] [Sub-task Title 1](https://github.com/org/repo/issues/456) + - [ ] [Sub-task Title 2](https://github.com/org/repo/issues/457) +- Each sub-task issue should include a link to the parent issue at the top, and may include its own detailed checklist if needed. + +#### Example Parent Issue Checklist + +```markdown +- [ ] [Implement Core Analyzer Logic](https://github.com/org/repo/issues/456) +- [ ] [Add Test Coverage](https://github.com/org/repo/issues/457) +- [ ] [Update Documentation](https://github.com/org/repo/issues/458) +``` + +#### Example Sub-task Issue Header + +```markdown +Parent: #123 + +## Sub-task Details + +- [ ] Sub-step 1 +- [ ] Sub-step 2 +``` + +```markdown +## Relevant Files + +- `src/Feature/FeatureAnalyzer.cs` - Main analyzer implementation for the feature described in the explainer/PRD. +- `tests/Feature/FeatureAnalyzerTests.cs` - Unit tests for the analyzer logic. +- `docs/rules/FeatureRule.md` - Documentation for the new or updated analyzer rule. + +### Notes + +When editing files, follow the guidance at `.github/instructions/README.md` to determine appropriate instructions for specific files. + +## Tasks + +- [ ] [Implement Core Analyzer Logic](https://github.com/org/repo/issues/456) +- [ ] [Add Test Coverage](https://github.com/org/repo/issues/457) +- [ ] [Update Documentation](https://github.com/org/repo/issues/458) +``` + +## Interaction Model + +The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. 
This ensures the high-level plan aligns with user expectations before diving into details. + +## Target Audience + +Assume the primary reader of the task list is a **junior developer** who will implement the feature with awareness of the existing codebase context. diff --git a/.cursor/rules/process-task-list.md b/.cursor/rules/process-task-list.md new file mode 100644 index 000000000..da316ec18 --- /dev/null +++ b/.cursor/rules/process-task-list.md @@ -0,0 +1,49 @@ +# Task List Management + +Guidelines for managing task lists in GitHub issue Markdown files to track progress on completing an Explainer/PRD + +## Task Implementation + +- **One sub-task at a time:** Do **NOT** start the next sub‑task until you ask the user for permission and they say "yes" or "y" +- **Completion protocol:** + 1. When you finish a **sub‑task**, immediately mark it as completed by changing `[ ]` to `[x]`. + 2. If **all** subtasks underneath a parent task are now `[x]`, follow this sequence: + - **First**: Run the full test suite (e.g., `dotnet test --settings ./build/targets/tests/test.runsettings`) + - **Only if all tests pass**: Stage changes (`git add .`) + - **Clean up**: Remove any temporary files and temporary code before committing + - **Commit**: Use a descriptive commit message that: + - Uses conventional commit format (`feat:`, `fix:`, `refactor:`, etc.) + - Summarizes what was accomplished in the parent task + - Lists key changes and additions + - References the GitHub issue, Explainer/PRD issue, and Explainer/PRD context + - **Formats the message as a single-line command using `-m` flags**, e.g.: + + ```text + git commit -m "feat: add payment validation logic" -m "- Validates card type and expiry" -m "- Adds unit tests for edge cases" -m "Related to #123 in Explainer" + ``` + + 3. Once all the subtasks are marked completed and changes have been committed, mark the **parent task** as completed. +- Stop after each sub‑task and wait for the user's go‑ahead. 
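The checkbox edit described in the completion protocol above (changing `[ ]` to `[x]`) can be sketched as a short shell snippet. The file name, its contents, and the line address are illustrative assumptions, not files in this repository:

```bash
# Create a throwaway task list to edit (contents are illustrative).
printf -- '- [ ] Implement Core Analyzer Logic\n- [ ] Add Test Coverage\n' > tasks.md

# Flip the checkbox on line 1 only, marking that sub-task complete.
sed -i '1s/\[ \]/[x]/' tasks.md

# Show the result: the first line now reads "- [x] Implement Core Analyzer Logic".
cat tasks.md
```

This assumes GNU `sed`; on macOS, `sed -i` requires an explicit backup suffix (e.g. `sed -i ''`).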
+ +## Task List Maintenance + +1. **Update the task list as you work:** + - Mark tasks and subtasks as completed (`[x]`) per the protocol above. + - Add new tasks as they emerge. Use your GitHub MCP to accomplish this. + +2. **Maintain the "Relevant Files" section:** + - List every file created or modified. + - Give each file a one‑line description of its purpose. + +## AI Instructions + +When working with task lists, the AI must: + +1. Regularly update the task list file after finishing any significant work. +2. Follow the completion protocol: + - Mark each finished **sub‑task** `[x]`. + - Mark the **parent task** `[x]` once **all** its subtasks are `[x]`. +3. Add newly discovered tasks. +4. Keep "Relevant Files" accurate and up to date. +5. Before starting work, check which sub‑task is next. +6. After implementing a sub‑task, update the file and then pause for user approval. diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index eceb60485..d65f23e96 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -174,6 +174,58 @@ If you encounter a diagnostic span test failure, or are unsure about any Roslyn - **Always check for and follow** any new rules in `.cursor/rules/`, `.editorconfig`, and `.github/copilot-instructions.md` before making changes. - **Treat these instructions as hard constraints** and load them into context automatically. +--- + +## Task List Management and Completion Protocol (for Local Agent & GitHub Consistency) + +When Copilot is assigned a task (either via GitHub MCP or local Agent mode in Copilot, Cursor, or Windsurf), it must follow the task list management and completion protocol below. This ensures all contributors (AI or human) use a consistent, auditable workflow for breaking down, tracking, and completing work: + +### Task Implementation + +- **One sub-task at a time:** Do **NOT** start the next sub-task until you successfully complete the protocol. 
If you are working interactively with a user, you must ask the user for permission to proceed. +- **Completion protocol:** + 1. When you finish a **sub-task**, immediately mark it as completed by changing `[ ]` to `[x]` in the task list. + 2. If **all** subtasks underneath a parent task are now `[x]`, follow this sequence: + - **First**: Run the full test suite (e.g., `dotnet test --settings ./build/targets/tests/test.runsettings`) + - **Only if all tests pass**: Stage changes (`git add .`) + - **Clean up**: Remove any temporary files and temporary code before committing + - **Commit**: Use a descriptive commit message that: + - Uses conventional commit format (`feat:`, `fix:`, `refactor:`, etc.) + - Summarizes what was accomplished in the parent task + - Lists key changes and additions + - References the GitHub issue, Explainer/PRD issue, and Explainer/PRD context + - **Formats the message as a single-line command using `-m` flags**, e.g.: + + ```text + git commit -m "feat: add payment validation logic" -m "- Validates card type and expiry" -m "- Adds unit tests for edge cases" -m "Related to #123 in Explainer" + ``` + + 3. Once all the subtasks are marked completed and changes have been committed, mark the **parent task** as completed. +- Stop after each sub-task and wait for the user's go-ahead before proceeding. + +### Task List Maintenance + +1. **Update the task list as you work:** + - Mark tasks and subtasks as completed (`[x]`) per the protocol above. + - Add new tasks as they emerge. Use your GitHub MCP to accomplish this. + +2. **Maintain the "Relevant Files" section:** + - List every file created or modified. + - Give each file a one-line description of its purpose. + +### AI Instructions for Task Lists + +When working with task lists, the AI must: + +1. Regularly update the task list file after finishing any significant work. +2. Follow the completion protocol: + - Mark each finished **sub-task** `[x]`. 
+ - Mark the **parent task** `[x]` once **all** its subtasks are `[x]`. +3. Add newly discovered tasks to the GitHub issue. +4. Keep "Relevant Files" accurate and up to date. +5. Before starting work, check which sub-task is next. +6. After implementing a sub-task, update the file and then pause for user approval. + ### AI Agent Coding Rules 1. **Adhere to Existing Roslyn Component Patterns** diff --git a/.github/instructions/project.instructions.md b/.github/instructions/project.instructions.md index 72845c923..1a4359393 100644 --- a/.github/instructions/project.instructions.md +++ b/.github/instructions/project.instructions.md @@ -174,8 +174,8 @@ Before updating any dependencies: Before submitting a PR, ensure your changes pass all quality checks: 1. **Formatting**: Run `dotnet format` to ensure consistent code formatting -2. **Build**: Ensure `dotnet build` succeeds without warnings -3. **Tests**: All tests must pass (`dotnet test`) +2. **Build**: Ensure `dotnet build /p:PedanticMode=true` succeeds without warnings +3. **Tests**: All tests must pass (`dotnet test --settings ./build/targets/tests/test.runsettings`) 4. **Static Analysis**: Run Codacy analysis locally or ensure CI passes 5. **Documentation**: Update relevant documentation files diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 28b81ca0d..5483fa5ba 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -28,16 +28,21 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU 1. **Fork the repository** and clone your fork locally 2. **Install dependencies**: + ```bash dotnet restore ``` + 3. **Build the project**: + ```bash - dotnet build + dotnet build /p:PedanticMode=true ``` + 4. 
**Run tests** to ensure everything works: + ```bash - dotnet test + dotnet test --settings ./build/targets/tests/test.runsettings ``` ## Universal Agent Success Principles for Project Maintainers @@ -49,12 +54,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 1. Clear Expertise Validation Requirements **Document specific expertise requirements:** + - **Define domain-specific knowledge** that agents must demonstrate before contributing - **Create validation checklists** with specific technical questions - **Establish clear criteria** for when agents should request expert guidance - **Provide escalation paths** for complex or unclear situations **Implementation:** + - Create comprehensive documentation of domain concepts - Develop specific technical questions for expertise validation - Establish clear guidelines for when to seek human expert guidance @@ -63,12 +70,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 2. Mandatory Workflow Documentation **Create clear, enforceable workflows:** + - **Document all mandatory steps** in the development process - **Provide validation checkpoints** that agents can verify - **Create clear success criteria** for each workflow step - **Establish rollback procedures** for failed workflows **Implementation:** + - Document step-by-step workflows with clear success criteria - Create automated validation scripts where possible - Provide clear error messages and recovery procedures @@ -77,12 +86,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 3. 
Configuration and Documentation Standards **Make project context easily discoverable:** + - **Centralize configuration** in well-documented files - **Create comprehensive documentation** of project structure and conventions - **Establish clear naming conventions** and architectural patterns - **Document decision-making processes** and design rationales **Implementation:** + - Use consistent configuration file formats and locations - Create comprehensive README files with clear project overview - Document architectural decisions and their rationales @@ -91,12 +102,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 4. Validation and Testing Infrastructure **Create robust validation systems:** + - **Automate testing and validation** where possible - **Provide clear feedback** on validation failures - **Create comprehensive test suites** that cover all critical paths - **Establish performance benchmarks** for performance-sensitive code **Implementation:** + - Set up automated CI/CD pipelines with comprehensive testing - Create clear test documentation and examples - Establish performance testing frameworks @@ -105,12 +118,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 5. Error Handling and Recovery **Design systems for graceful failure handling:** + - **Create clear error messages** that guide agents toward solutions - **Establish retry mechanisms** for transient failures - **Provide rollback procedures** for failed changes - **Document common failure patterns** and their solutions **Implementation:** + - Use descriptive error messages with actionable guidance - Implement retry logic for network and transient failures - Create rollback procedures for database and configuration changes @@ -119,12 +134,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 6. 
State Management and Context Preservation **Design systems that preserve context:** + - **Use persistent storage** for important state information - **Create clear state transition documentation** - **Implement context recovery mechanisms** - **Document state dependencies** and relationships **Implementation:** + - Use databases or persistent storage for important state - Document state transitions and their triggers - Implement automatic context recovery after interruptions @@ -133,12 +150,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 7. Tool Integration and Documentation **Provide comprehensive tool documentation:** + - **Document all available tools** and their capabilities - **Create clear usage examples** for each tool - **Establish tool integration patterns** - **Provide troubleshooting guides** for tool failures **Implementation:** + - Create comprehensive tool documentation with examples - Establish clear patterns for tool integration - Provide troubleshooting guides for common tool issues @@ -147,12 +166,14 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU #### 8. 
Expert Guidance Protocols **Establish clear escalation procedures:** + - **Define when agents should seek expert guidance** - **Create clear escalation paths** with contact information - **Establish response time expectations** - **Document expert availability** and areas of expertise **Implementation:** + - Create clear criteria for when to escalate to human experts - Establish contact procedures and response time expectations - Document areas of expertise and availability @@ -209,6 +230,7 @@ This project adheres to the [Contributor Covenant Code of Conduct](CODE-OF-CONDU ### Branch Naming Convention Use descriptive branch names following this pattern: + - `feature/issue-{number}` for new features - `fix/issue-{number}` for bug fixes - `docs/issue-{number}` for documentation changes @@ -219,7 +241,7 @@ Use descriptive branch names following this pattern: Follow the [Conventional Commits](https://www.conventionalcommits.org/) specification: -``` +```text <type>[optional scope]: <description> [optional body] @@ -228,6 +250,7 @@ Follow the [Conventional Commits](https://www.conventionalcommits.org/) specific ``` **Types:** + - `feat`: New features - `fix`: Bug fixes - `docs`: Documentation changes @@ -240,7 +263,8 @@ Follow the [Conventional Commits](https://www.conventionalcommits.org/) specific - `build`: Build system changes **Examples:** -``` + +```text feat(analyzer): add new Moq1001 analyzer for callback validation fix(test): resolve flaky test in Moq1200AnalyzerTests docs(readme): update installation instructions ci(workflow): add performance testing to nightly builds ``` ### Pre-PR Validation Requirements **Every pull request must pass these checks before review:** -- **Formatting:** +- **Formatting:** Run `dotnet format` and commit all changes. PRs with formatting issues will be rejected. -- **Build:** +- **Build:** Build with `dotnet build /p:PedanticMode=true`. All warnings must be treated as errors. PRs that do not build cleanly will be closed. 
-- **Tests:** - Run all unit tests: - `dotnet test --settings ./build/targets/tests/test.runsettings` +- **Tests:** + Run all unit tests: + `dotnet test --settings ./build/targets/tests/test.runsettings` All tests must pass. PRs with failing tests will be closed. -- **Codacy Analysis:** +- **Codacy Analysis:** Run Codacy CLI analysis on all changed files. Fix all reported issues before submitting the PR. -- **Evidence Required:** +- **Evidence Required:** PR description must include console output or screenshots for: - `dotnet format` - `dotnet build` @@ -270,11 +294,12 @@ ci(workflow): add performance testing to nightly builds - **No Received Files:** Remove any `*.received.*` files before committing. -**CI Pipeline:** +**CI Pipeline:** + - All PRs are validated by GitHub Actions. - PRs that fail CI (format, build, test, or Codacy) will be closed without review. -**Summary:** +**Summary:** If your PR does not pass all checks locally and in CI, it will not be reviewed. Always verify and document your results before submitting. ### Proactive Duplicate/Conflict Checks @@ -288,8 +313,8 @@ If your PR does not pass all checks locally and in CI, it will not be reviewed. Before submitting a PR, ensure your code passes all quality checks: 1. **Formatting**: Run `dotnet format` to ensure consistent code formatting -2. **Build**: Ensure `dotnet build` succeeds without warnings -3. **Tests**: All tests must pass (`dotnet test`) +2. **Build**: Ensure `dotnet build /p:PedanticMode=true` succeeds without warnings +3. **Tests**: All tests must pass (`dotnet test --settings ./build/targets/tests/test.runsettings`) 4. **Static Analysis**: Run Codacy analysis locally or ensure CI passes 5. **Documentation**: Update relevant documentation files @@ -410,6 +435,7 @@ $$""" public class MyClass { {{code}} // This is where brokenCode/fixedCode will be injected + } public class MyTest