Merged

18 commits
53318ec
docs: update build and test instructions in CONTRIBUTING.md
rjmurillo Jul 27, 2025
e53b716
docs: update build and test instructions for improved clarity
rjmurillo Jul 27, 2025
b43c1c4
docs: add guidelines for generating Explainer/Product Requirements Do…
rjmurillo Jul 27, 2025
aa7171d
docs: add guideline for generating task lists from Explainers and PRDs
rjmurillo Jul 27, 2025
4956fc9
docs: add guidelines for task list management in GitHub issues
rjmurillo Jul 27, 2025
5ed610f
docs: enhance clarifying questions section for improved user guidance…
rjmurillo Jul 27, 2025
4c0cddf
docs: update build and test instructions in CONTRIBUTING.md
rjmurillo Jul 27, 2025
046be15
docs: update build and test instructions for improved clarity
rjmurillo Jul 27, 2025
71f7bb1
docs: add guidelines for generating Explainer/Product Requirements Do…
rjmurillo Jul 27, 2025
9c1c710
docs: add guideline for generating task lists from Explainers and PRDs
rjmurillo Jul 27, 2025
8319d61
docs: add guidelines for task list management in GitHub issues
rjmurillo Jul 27, 2025
ba18f66
docs: enhance clarifying questions section for improved user guidance…
rjmurillo Jul 27, 2025
da09944
Merge branch 'docs/ai-tasks' of https://github.com/rjmurillo/moq.anal…
rjmurillo Jul 27, 2025
783710f
docs: add task list management and completion protocol for AI agents
rjmurillo Jul 27, 2025
ebcf85d
docs: Update .cursor/rules/generate-tasks.md typos
rjmurillo Jul 27, 2025
1a6bf75
docs: Update .cursor/rules/create-explainer.md INVEST criteria
rjmurillo Jul 27, 2025
2f8a288
docs: update contributing guidelines for improved setup and testing i…
rjmurillo Jul 27, 2025
72e044e
Merge branch 'docs/ai-tasks' of https://github.com/rjmurillo/moq.anal…
rjmurillo Jul 27, 2025
100 changes: 100 additions & 0 deletions .cursor/rules/create-explainer.md
@@ -0,0 +1,100 @@
# Rule: Generating an Explainer or a Product Requirements Document (PRD)

## Goal

To guide an AI assistant in creating a detailed Explainer/Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature.

## Process

1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality.
2. **Ask Clarifying Questions:** Before writing the explainer, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "why" and "what" of the feature, not necessarily the "how" (which the developer will figure out). All clarifying questions must be presented as enumerated letter or number lists to maximize clarity and ease of response. If any user answer is ambiguous, incomplete, or conflicting, the AI must explicitly flag the uncertainty, ask follow-up questions, and not proceed until the ambiguity is resolved or clearly documented as an open question or assumption.
3. **Generate Explainer:** Based on the initial prompt and the user's answers to the clarifying questions, generate an Explainer using the structure outlined below.
4. **Save Explainer:** Save the generated document as a GitHub issue with the title `Explainer: [feature-name]` inside the `Moq.Analyzers` repository using your GitHub MCP.
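The save step can be sketched with the GitHub CLI as a stand-in for the MCP call described above; the feature name and body file below are illustrative only:

```python
# Sketch: compose the command that would file the Explainer as a GitHub issue.
# Assumes the GitHub CLI (`gh`) is available as an alternative to the MCP call.

def explainer_issue_command(feature_name: str, body_path: str) -> list:
    """Build a `gh issue create` invocation for the Explainer."""
    return [
        "gh", "issue", "create",
        "--repo", "rjmurillo/Moq.Analyzers",
        "--title", "Explainer: " + feature_name,
        "--body-file", body_path,
    ]
```

The command is returned as a list (rather than a shell string) so it can be passed to a process runner without quoting issues.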

## Clarifying Questions (Examples)

The AI should adapt its questions based on the prompt, but must always:

- Present all clarifying questions as enumerated letter or number lists.
- Validate that each user story provided follows the INVEST mnemonic. If a user story does not, the AI must either rewrite it for compliance or ask the user for clarification.
- If any answer is unclear, incomplete, or conflicting, the AI must flag it and ask for clarification before proceeding.
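The user-story format check behind that validation can be sketched as follows; note this gate tests only the narrative template, not the full INVEST criteria, which still need judgment:

```python
import re

# Sketch: a format gate for user stories. Checks only the
# "As a ..., I want to ... so that ..." template, nothing more.
STORY_PATTERN = re.compile(
    r"^As an? .+?, I want to .+? so that .+\.?$", re.IGNORECASE
)

def follows_story_template(story: str) -> bool:
    """Return True if the story matches the narrative template."""
    return bool(STORY_PATTERN.match(story.strip()))
```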

Here are some common areas to explore:

- **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?"
- **Target User:** "Who is the primary user of this feature?"
- **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?"
- **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)"
- **INVEST compliance**: Validate that every user story follows the INVEST mnemonic.
- **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?"
- **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?"
- **Data Requirements:** "What kind of data does this feature need to display or manipulate?"
- **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?"
- **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?"

## Explainer Structure

The generated explainer should include the following sections:

1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal.
2. **Goals:** List the specific, measurable objectives for this feature.
3. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope.
4. **User Stories:** Detail the user narratives describing feature usage and benefits.
5. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements.
6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable.
7. **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module").
8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X").
9. **Open Questions:** List any remaining questions, areas needing further clarification, or assumptions made due to missing or ambiguous information. If there are no open questions or assumptions, explicitly state "None".

## Target Audience

Assume the primary reader of the Explainer is a **junior developer**. Therefore, requirements should be explicit and unambiguous, avoid jargon where possible, and be written at a grade 9 reading level. Provide enough detail for them to understand the feature's purpose and core logic.

## Output

- **Format:** Markdown (`.md`)
- **Location:** GitHub issue in the `rjmurillo/Moq.Analyzers` repository
- **Title:** `Explainer: [feature-name]`

## Final instructions

1. Do NOT start implementing the Explainer.
2. Make sure to ask the user clarifying questions.
3. Take the user's answers to the clarifying questions and improve the Explainer.

---

## Example Explainer (Generic)

```markdown
# Explainer: Feature Name

## Introduction/Overview
Briefly describe the feature and the problem it solves. State the goal.

## Goals
- List specific, measurable objectives for this feature.

## Non-Goals (Out of Scope)
- List what is explicitly not included in this feature.

## User Stories
- As a [user type], I want to [do something] so that [benefit].
- As a [user type], I want to [do something else] so that [benefit].

## Functional Requirements
1. The system must allow users to do X.
2. The system must validate Y before Z.

## Design Considerations (Optional)
- Link to mockups or describe UI/UX requirements.

## Technical Considerations (Optional)
- List known technical constraints, dependencies, or suggestions.

## Success Metrics
- How will success be measured? (e.g., "Increase engagement by 10%", "Reduce support tickets related to X")

## Open Questions
- List any remaining questions, areas needing clarification, or assumptions. If none, state "None".
```
83 changes: 83 additions & 0 deletions .cursor/rules/generate-tasks.md
@@ -0,0 +1,83 @@
# Rule: Generating a Task List from an Explainer

## Goal

To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Explainer or Product Requirements Document (PRD). The task list should guide a developer through implementation.

## Output

- **Format:** Markdown (`.md`)
- **Location:** GitHub issues in `rjmurillo/Moq.Analyzers` repository
- **Title:** `Tasks for [explainer-issue-title]` (e.g., `Tasks for Explainer Expand LINQ Coverage`)

Link the Tasks issue to the Explainer issue in GitHub using your MCP.

## Process

1. **Receive Explainer/PRD Reference:** The user points the AI to a specific Explainer or PRD GitHub issue.
2. **Analyze Explainer/PRD:** The AI uses GitHub MCP and reads and analyzes the functional requirements, user stories, and other sections of the specified Explainer/PRD.
3. **Assess Current State:** Review the existing codebase to understand existing infrastructure, architectural patterns, and conventions. Also, identify any existing components or features that already exist and could be relevant to the Explainer/PRD requirements. Then, identify existing related files, components, and utilities that can be leveraged or need modification.
4. **Phase 1: Generate Parent Tasks:** Based on the Explainer/PRD analysis and current state assessment, create the file and generate the main, high-level tasks required to implement the feature. Use your judgment on how many high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the Explainer/PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed."
5. **Wait for Confirmation:** Pause and wait for the user to respond with "Go".
6. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task, cover the implementation details implied by the PRD, and consider existing codebase patterns where relevant without being constrained by them.
7. **Identify Relevant Files:** Based on the tasks and Explainer/PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable. Identification may be achieved through search and through code coverage analysis.
8. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure.
9. **Save Task List:** Save the generated document into a new set of GitHub issues linked to the parent tasks.

## Output Format

The generated task list _must_ follow this structure. The example below is generic and should be adapted to the specific Explainer/PRD being processed.

### GitHub Issue Structure Guidance

- The parent task list should be created as a GitHub issue (the parent issue), with each high-level task represented as a checklist item that links to a corresponding sub-task issue.
- Each sub-task should be created as its own GitHub issue, with a reference back to the parent issue (e.g., "Parent: #123").
- The parent issue should include a checklist like:
- [ ] [Sub-task Title 1](https://github.com/org/repo/issues/456)
- [ ] [Sub-task Title 2](https://github.com/org/repo/issues/457)
- Each sub-task issue should include a link to the parent issue at the top, and may include its own detailed checklist if needed.

#### Example Parent Issue Checklist

```markdown
- [ ] [Implement Core Analyzer Logic](https://github.com/org/repo/issues/456)
- [ ] [Add Test Coverage](https://github.com/org/repo/issues/457)
- [ ] [Update Documentation](https://github.com/org/repo/issues/458)
```

#### Example Sub-task Issue Header

```markdown
Parent: #123

## Sub-task Details

- [ ] Sub-step 1
- [ ] Sub-step 2
```

```markdown
## Relevant Files

- `src/Feature/FeatureAnalyzer.cs` - Main analyzer implementation for the feature described in the explainer/PRD.
- `tests/Feature/FeatureAnalyzerTests.cs` - Unit tests for the analyzer logic.
- `docs/rules/FeatureRule.md` - Documentation for the new or updated analyzer rule.

### Notes

When editing files, follow the guidance at `.github/instructions/README.md` to determine appropriate instructions for specific files.

## Tasks

- [ ] [Implement Core Analyzer Logic](https://github.com/org/repo/issues/456)
- [ ] [Add Test Coverage](https://github.com/org/repo/issues/457)
- [ ] [Update Documentation](https://github.com/org/repo/issues/458)
```
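The parent checklist and sub-task header shown above can be rendered with a small helper; the issue numbers and URLs below are placeholders, not real issues:

```python
# Sketch: render the parent-issue checklist and a sub-task issue header
# from task data, following the structure guidance above.

def parent_checklist(repo: str, tasks: list) -> str:
    """Render `- [ ] [Title](url)` lines for (title, issue_number) pairs."""
    return "\n".join(
        "- [ ] [{0}](https://github.com/{1}/issues/{2})".format(title, repo, number)
        for title, number in tasks
    )

def subtask_header(parent_number: int, steps: list) -> str:
    """Render a sub-task issue body with a parent reference and checklist."""
    checklist = "\n".join("- [ ] " + step for step in steps)
    return "Parent: #{0}\n\n## Sub-task Details\n\n{1}".format(parent_number, checklist)
```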

## Interaction Model

The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.

## Target Audience

Assume the primary reader of the task list is a **junior developer** who will implement the feature with awareness of the existing codebase context.
49 changes: 49 additions & 0 deletions .cursor/rules/process-task-list.md
@@ -0,0 +1,49 @@
# Task List Management

Guidelines for managing task lists in GitHub issue Markdown files to track progress on completing an Explainer/PRD.

## Task Implementation

- **One sub-task at a time:** Do **NOT** start the next sub-task until you ask the user for permission and they say "yes" or "y"
- **Completion protocol:**
  1. When you finish a **sub-task**, immediately mark it as completed by changing `[ ]` to `[x]`.
  2. If **all** subtasks underneath a parent task are now `[x]`, follow this sequence:
     - **First**: Run the full test suite (e.g., `dotnet test --settings ./build/targets/tests/test.runsettings`)
     - **Only if all tests pass**: Stage changes (`git add .`)
     - **Clean up**: Remove any temporary files and temporary code before committing
     - **Commit**: Use a descriptive commit message that:
       - Uses conventional commit format (`feat:`, `fix:`, `refactor:`, etc.)
       - Summarizes what was accomplished in the parent task
       - Lists key changes and additions
       - References the GitHub issue, Explainer/PRD issue, and Explainer/PRD context
       - **Formats the message as a single-line command using `-m` flags**, e.g.:

         ```text
         git commit -m "feat: add payment validation logic" -m "- Validates card type and expiry" -m "- Adds unit tests for edge cases" -m "Related to #123 in Explainer"
         ```

  3. Once all the subtasks are marked completed and changes have been committed, mark the **parent task** as completed.
- Stop after each sub-task and wait for the user's go-ahead.
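The mark-as-complete step can be sketched as a small helper, assuming the `- [ ]` checklist convention shown in the examples and unique sub-task titles:

```python
import re

# Sketch: mark a named sub-task complete in a Markdown task list.
# Assumes `- [ ] Title` items and that the title appears on one line.

def mark_complete(markdown: str, title: str) -> str:
    """Flip the first `[ ]` to `[x]` on the checklist line containing title."""
    pattern = re.compile(
        r"^(\s*- )\[ \](.*" + re.escape(title) + r".*)$", re.MULTILINE
    )
    return pattern.sub(r"\1[x]\2", markdown, count=1)
```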

## Task List Maintenance

1. **Update the task list as you work:**
- Mark tasks and subtasks as completed (`[x]`) per the protocol above.
- Add new tasks as they emerge. Use your GitHub MCP to accomplish this.

2. **Maintain the "Relevant Files" section:**
- List every file created or modified.
- Give each file a one-line description of its purpose.

## AI Instructions

When working with task lists, the AI must:

1. Regularly update the task list file after finishing any significant work.
2. Follow the completion protocol:
- Mark each finished **sub-task** `[x]`.
- Mark the **parent task** `[x]` once **all** its subtasks are `[x]`.
3. Add newly discovered tasks.
4. Keep "Relevant Files" accurate and up to date.
5. Before starting work, check which sub-task is next.
6. After implementing a sub-task, update the file and then pause for user approval.
52 changes: 52 additions & 0 deletions .github/copilot-instructions.md
@@ -174,6 +174,58 @@ If you encounter a diagnostic span test failure, or are unsure about any Roslyn
- **Always check for and follow** any new rules in `.cursor/rules/`, `.editorconfig`, and `.github/copilot-instructions.md` before making changes.
- **Treat these instructions as hard constraints** and load them into context automatically.

---

## Task List Management and Completion Protocol (for Local Agent & GitHub Consistency)

When Copilot is assigned a task (either via GitHub MCP or local Agent mode in Copilot, Cursor, or Windsurf), it must follow the task list management and completion protocol below. This ensures all contributors (AI or human) use a consistent, auditable workflow for breaking down, tracking, and completing work:

### Task Implementation

- **One sub-task at a time:** Do **NOT** start the next sub-task until you successfully complete the protocol. If you are working interactively with a user, you must ask the user for permission to proceed.
- **Completion protocol:**
1. When you finish a **sub-task**, immediately mark it as completed by changing `[ ]` to `[x]` in the task list.
2. If **all** subtasks underneath a parent task are now `[x]`, follow this sequence:
- **First**: Run the full test suite (e.g., `dotnet test --settings ./build/targets/tests/test.runsettings`)
- **Only if all tests pass**: Stage changes (`git add .`)
- **Clean up**: Remove any temporary files and temporary code before committing
- **Commit**: Use a descriptive commit message that:
- Uses conventional commit format (`feat:`, `fix:`, `refactor:`, etc.)
- Summarizes what was accomplished in the parent task
- Lists key changes and additions
- References the GitHub issue, Explainer/PRD issue, and Explainer/PRD context
- **Formats the message as a single-line command using `-m` flags**, e.g.:

```text
git commit -m "feat: add payment validation logic" -m "- Validates card type and expiry" -m "- Adds unit tests for edge cases" -m "Related to #123 in Explainer"
```

3. Once all the subtasks are marked completed and changes have been committed, mark the **parent task** as completed.
- Stop after each sub-task and wait for the user's go-ahead before proceeding.

### Task List Maintenance

1. **Update the task list as you work:**
- Mark tasks and subtasks as completed (`[x]`) per the protocol above.
- Add new tasks as they emerge. Use your GitHub MCP to accomplish this.

2. **Maintain the "Relevant Files" section:**
- List every file created or modified.
- Give each file a one-line description of its purpose.

### AI Instructions for Task Lists

When working with task lists, the AI must:

1. Regularly update the task list file after finishing any significant work.
2. Follow the completion protocol:
- Mark each finished **sub-task** `[x]`.
- Mark the **parent task** `[x]` once **all** its subtasks are `[x]`.
3. Add newly discovered tasks to the GitHub issue.
4. Keep "Relevant Files" accurate and up to date.
5. Before starting work, check which sub-task is next.
6. After implementing a sub-task, update the file and then pause for user approval.
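Step 5 above can be sketched as a helper that finds the first unchecked checklist item, assuming the `- [ ]` convention used throughout:

```python
from typing import Optional

# Sketch: find the next sub-task, i.e. the first unchecked checklist item.

def next_subtask(markdown: str) -> Optional[str]:
    """Return the title of the first `- [ ]` item, or None if all are done."""
    for line in markdown.splitlines():
        stripped = line.strip()
        if stripped.startswith("- [ ] "):
            return stripped[len("- [ ] "):]
    return None
```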

### AI Agent Coding Rules

1. **Adhere to Existing Roslyn Component Patterns**
4 changes: 2 additions & 2 deletions .github/instructions/project.instructions.md
@@ -174,8 +174,8 @@ Before updating any dependencies:
Before submitting a PR, ensure your changes pass all quality checks:

1. **Formatting**: Run `dotnet format` to ensure consistent code formatting
2. **Build**: Ensure `dotnet build` succeeds without warnings
3. **Tests**: All tests must pass (`dotnet test`)
2. **Build**: Ensure `dotnet build /p:PedanticMode=true` succeeds without warnings
3. **Tests**: All tests must pass (`dotnet test --settings ./build/targets/tests/test.runsettings`)
4. **Static Analysis**: Run Codacy analysis locally or ensure CI passes
5. **Documentation**: Update relevant documentation files
