Merged
6 changes: 3 additions & 3 deletions .github/agents/code-review.agent.md
@@ -17,7 +17,7 @@ Formal reviews are a quality enforcement mechanism, and as such MUST be performed
to get the checklist to fill in
2. Use `dotnet reviewmark --elaborate [review-set]` to get the files to review
3. Review the files all together
4. Populate the checklist with the findings to `.agent-logs/reviews/review-report-[review-set].md` of the project.
4. Populate the checklist with the findings to `.agent-logs/reviews/review-report-{review-set}.md` of the project.
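The switch from `[review-set]` to `{review-set}` brace placeholders matches brace-style string interpolation; a hypothetical Python sketch (the hyphen becomes an underscore here, an adaptation of mine, since hyphens are not valid format-field names, and `"core"` is a made-up review-set name):

```python
# Hypothetical sketch: brace placeholders map directly onto str.format.
# {review-set} is rewritten as {review_set}; "core" is illustrative only.
TEMPLATE = ".agent-logs/reviews/review-report-{review_set}.md"

def report_path(review_set: str) -> str:
    """Resolve the review-report path for a given review set."""
    return TEMPLATE.format(review_set=review_set)

print(report_path("core"))  # .agent-logs/reviews/review-report-core.md
```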

# Don't Do These Things

@@ -31,13 +31,13 @@ Formal reviews are a quality enforcement mechanism, and as such MUST be performed

# Reporting

Upon completion create a summary in `.agent-logs/[agent-name]-[subject]-[unique-id].md`
Upon completion create a summary in `.agent-logs/{agent-name}-{subject}-{unique-id}.md`
of the project consisting of:

```markdown
# Code Review Report

**Result**: <SUCCEEDED/FAILED>
**Result**: (SUCCEEDED|FAILED)

## Review Summary

4 changes: 2 additions & 2 deletions .github/agents/developer.agent.md
@@ -20,13 +20,13 @@ Perform software development tasks by determining and applying appropriate DEMA

# Reporting

Upon completion create a summary in `.agent-logs/[agent-name]-[subject]-[unique-id].md`
Upon completion create a summary in `.agent-logs/{agent-name}-{subject}-{unique-id}.md`
of the project consisting of:

```markdown
# Developer Agent Report

**Result**: <SUCCEEDED/FAILED>
**Result**: (SUCCEEDED|FAILED)

## Work Summary

12 changes: 6 additions & 6 deletions .github/agents/implementation.agent.md
@@ -26,7 +26,7 @@ counting how many retries have occurred.

## RESEARCH State (start)

Call the built-in @explore sub-agent with:
Call the built-in explore sub-agent with:

- **context**: the user's request and any current quality findings
- **goal**: analyze the implementation state and develop a plan to implement the request
@@ -35,7 +35,7 @@ Once the explore sub-agent finishes, transition to the DEVELOPMENT state.

## DEVELOPMENT State

Call the @developer sub-agent with:
Call the developer sub-agent with:

- **context** the user's request and the current implementation plan
- **goal** implement the user's request and any identified quality fixes
@@ -47,7 +47,7 @@ Once the developer sub-agent finishes:

## QUALITY State

Call the @quality sub-agent with:
Call the quality sub-agent with:

- **context** the user's request and the current implementation report
- **goal** check the quality of the work performed for any issues
@@ -60,14 +60,14 @@ Once the quality sub-agent finishes:

### REPORT State (end)

Upon completion create a summary in `.agent-logs/[agent-name]-[subject]-[unique-id].md`
Upon completion create a summary in `.agent-logs/{agent-name}-{subject}-{unique-id}.md`
of the project consisting of:

```markdown
# Implementation Orchestration Report

**Result**: <SUCCEEDED/FAILED>
**Final State**: <RESEARCH/DEVELOPMENT/QUALITY/REPORT>
**Result**: (SUCCEEDED|FAILED)
**Final State**: (RESEARCH|DEVELOPMENT|QUALITY|REPORT)
**Retry Count**: <Number of quality retry cycles>

## State Machine Execution
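The RESEARCH → DEVELOPMENT → QUALITY → REPORT flow in this agent, with quality failures looping back to DEVELOPMENT while the retry counter accumulates, can be sketched as a small state machine. This is a hypothetical illustration: the sub-agent callables are stubbed, and `MAX_RETRIES` is an assumed limit the agent file does not actually fix.

```python
# Hypothetical sketch of the implementation agent's state machine.
# The *_agent callables stand in for the real sub-agent invocations.
MAX_RETRIES = 3  # assumed cap; not specified by the agent file

def run(request, explore_agent, developer_agent, quality_agent):
    state, retries = "RESEARCH", 0
    plan = report = None
    while state != "REPORT":
        if state == "RESEARCH":
            plan = explore_agent(request)            # analyze and plan
            state = "DEVELOPMENT"
        elif state == "DEVELOPMENT":
            report = developer_agent(request, plan)  # implement the plan
            state = "QUALITY"
        elif state == "QUALITY":
            ok = quality_agent(request, report)      # check the work
            if ok or retries >= MAX_RETRIES:
                state = "REPORT"
            else:
                retries += 1                         # loop back to fix issues
                state = "DEVELOPMENT"
    return {"result": "SUCCEEDED" if ok else "FAILED",
            "final_state": state, "retry_count": retries}
```

The returned dictionary mirrors the Result, Final State, and Retry Count fields of the report template.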
162 changes: 79 additions & 83 deletions .github/agents/quality.agent.md
@@ -13,113 +13,109 @@ DEMA Consulting standards and Continuous Compliance practices.

# Standards-Based Quality Assessment

This assessment is a quality control system of the project and MUST be performed.
This assessment is a quality control system of the project and MUST be performed systematically.

1. **Analyze completed work** to identify scope and changes made
2. **Read relevant standards** from `.github/standards/` as defined in AGENTS.md based on work performed
3. **Execute comprehensive quality checks** across all compliance areas - EVERY checkbox item must be evaluated
3. **Execute comprehensive quality assessment** using the structured evaluation criteria in the reporting template
4. **Validate tool compliance** using ReqStream, ReviewMark, and language tools
5. **Generate quality assessment report** with findings and recommendations

## Requirements Compliance

- [ ] Were requirements updated to reflect functional changes?
- [ ] Were new requirements created for new features?
- [ ] Do requirement IDs follow semantic naming standards?
- [ ] Were source filters applied appropriately for platform-specific requirements?
- [ ] Does ReqStream enforcement pass without errors?
- [ ] Is requirements traceability maintained to tests?

## Design Documentation Compliance

- [ ] Were design documents updated for architectural changes?
- [ ] Were new design artifacts created for new components?
- [ ] Are design decisions documented with rationale?
- [ ] Is system/subsystem/unit categorization maintained?
- [ ] Is design-to-implementation traceability preserved?

## Code Quality Compliance

- [ ] Are language-specific standards followed (from applicable standards files)?
- [ ] Are quality checks from standards files satisfied?
- [ ] Is code properly categorized (system/subsystem/unit/OTS)?
- [ ] Is appropriate separation of concerns maintained?
- [ ] Was language-specific tooling executed and passing?

## Testing Compliance

- [ ] Were tests created/updated for all functional changes?
- [ ] Is test coverage maintained for all requirements?
- [ ] Are testing standards followed (AAA pattern, etc.)?
- [ ] Does test categorization align with code structure?
- [ ] Do all tests pass without failures?

## Review Management Compliance

- [ ] Were review-sets updated to include new/modified files?
- [ ] Do file patterns follow include-then-exclude approach?
- [ ] Is review scope appropriate for change magnitude?
- [ ] Was ReviewMark tooling executed and passing?
- [ ] Were review artifacts generated correctly?

## Documentation Compliance

- [ ] Was README.md updated for user-facing changes?
- [ ] Were user guides updated for feature changes?
- [ ] Does API documentation reflect code changes?
- [ ] Was compliance documentation generated?
- [ ] Does documentation follow standards formatting?
- [ ] Is documentation organized under `docs/` following standard folder structure?
- [ ] Do Pandoc collections include proper `introduction.md` files with Purpose and Scope sections?
- [ ] Are auto-generated markdown files left unmodified?
- [ ] Do README.md files use absolute URLs and include concrete examples?
- [ ] Is documentation integrated into ReviewMark review-sets for formal review?

## Process Compliance

- [ ] Was Continuous Compliance workflow followed?
- [ ] Did all quality gates execute successfully?
- [ ] Were appropriate tools used for validation?
- [ ] Were standards consistently applied across work?
- [ ] Was compliance evidence generated and preserved?

# Reporting

Upon completion create a summary in `.agent-logs/[agent-name]-[subject]-[unique-id].md`
Upon completion create a summary in `.agent-logs/{agent-name}-{subject}-{unique-id}.md`
of the project consisting of:

```markdown
# Quality Assessment Report

**Result**: <SUCCEEDED/FAILED>
**Overall Grade**: <PASS/FAIL/NEEDS_WORK>
**Result**: (SUCCEEDED|FAILED)
**Overall Grade**: (PASS|FAIL|NEEDS_WORK)

## Assessment Summary

- **Work Reviewed**: [Description of work assessed]
- **Standards Applied**: [Standards files used for assessment]
- **Categories Evaluated**: [Quality check categories assessed]

## Quality Check Results

- **Requirements Compliance**: <PASS/FAIL> - [Summary]
- **Design Documentation**: <PASS/FAIL> - [Summary]
- **Code Quality**: <PASS/FAIL> - [Summary]
- **Testing Compliance**: <PASS/FAIL> - [Summary]
- **Review Management**: <PASS/FAIL> - [Summary]
- **Documentation**: <PASS/FAIL> - [Summary]
- **Process Compliance**: <PASS/FAIL> - [Summary]

## Findings

- **Issues Found**: [List of compliance issues]
- **Recommendations**: [Suggested improvements]
## Requirements Compliance: (PASS|FAIL|N/A)

- Were requirements updated to reflect functional changes? (PASS|FAIL|N/A) - [Evidence]
- Were new requirements created for new features? (PASS|FAIL|N/A) - [Evidence]
- Do requirement IDs follow semantic naming standards? (PASS|FAIL|N/A) - [Evidence]
- Do requirement files follow kebab-case naming convention? (PASS|FAIL|N/A) - [Evidence]
- Are requirement files organized under `docs/reqstream/` with proper folder structure? (PASS|FAIL|N/A) - [Evidence]
- Are OTS requirements properly placed in `docs/reqstream/ots/` subfolder? (PASS|FAIL|N/A) - [Evidence]
- Were source filters applied appropriately for platform-specific requirements? (PASS|FAIL|N/A) - [Evidence]
- Does ReqStream enforcement pass without errors? (PASS|FAIL|N/A) - [Evidence]
- Is requirements traceability maintained to tests? (PASS|FAIL|N/A) - [Evidence]

## Design Documentation Compliance: (PASS|FAIL|N/A)

- Were design documents updated for architectural changes? (PASS|FAIL|N/A) - [Evidence]
- Were new design artifacts created for new components? (PASS|FAIL|N/A) - [Evidence]
- Do design folder names use kebab-case convention matching source structure? (PASS|FAIL|N/A) - [Evidence]
- Are design files properly named ({subsystem-name}.md, {unit-name}.md patterns)? (PASS|FAIL|N/A) - [Evidence]
- Is `docs/design/introduction.md` present with required Software Structure section? (PASS|FAIL|N/A) - [Evidence]
- Are design decisions documented with rationale? (PASS|FAIL|N/A) - [Evidence]
- Is system/subsystem/unit categorization maintained? (PASS|FAIL|N/A) - [Evidence]
- Is design-to-implementation traceability preserved? (PASS|FAIL|N/A) - [Evidence]

## Code Quality Compliance: (PASS|FAIL|N/A)

- Are language-specific standards followed (from applicable standards files)? (PASS|FAIL|N/A) - [Evidence]
- Are quality checks from standards files satisfied? (PASS|FAIL|N/A) - [Evidence]
- Is code properly categorized (system/subsystem/unit/OTS)? (PASS|FAIL|N/A) - [Evidence]
- Is appropriate separation of concerns maintained? (PASS|FAIL|N/A) - [Evidence]
- Was language-specific tooling executed and passing? (PASS|FAIL|N/A) - [Evidence]

## Testing Compliance: (PASS|FAIL|N/A)

- Were tests created/updated for all functional changes? (PASS|FAIL|N/A) - [Evidence]
- Is test coverage maintained for all requirements? (PASS|FAIL|N/A) - [Evidence]
- Are testing standards followed (AAA pattern, etc.)? (PASS|FAIL|N/A) - [Evidence]
- Does test categorization align with code structure? (PASS|FAIL|N/A) - [Evidence]
- Do all tests pass without failures? (PASS|FAIL|N/A) - [Evidence]

## Review Management Compliance: (PASS|FAIL|N/A)

- Were review-sets updated to include new/modified files? (PASS|FAIL|N/A) - [Evidence]
- Do file patterns follow include-then-exclude approach? (PASS|FAIL|N/A) - [Evidence]
- Is review scope appropriate for change magnitude? (PASS|FAIL|N/A) - [Evidence]
- Was ReviewMark tooling executed and passing? (PASS|FAIL|N/A) - [Evidence]
- Were review artifacts generated correctly? (PASS|FAIL|N/A) - [Evidence]

## Documentation Compliance: (PASS|FAIL|N/A)

- Was README.md updated for user-facing changes? (PASS|FAIL|N/A) - [Evidence]
- Were user guides updated for feature changes? (PASS|FAIL|N/A) - [Evidence]
- Does API documentation reflect code changes? (PASS|FAIL|N/A) - [Evidence]
- Was compliance documentation generated? (PASS|FAIL|N/A) - [Evidence]
- Does documentation follow standards formatting? (PASS|FAIL|N/A) - [Evidence]
- Is documentation organized under `docs/` following standard folder structure? (PASS|FAIL|N/A) - [Evidence]
- Do Pandoc collections include proper `introduction.md` with Purpose and Scope sections? (PASS|FAIL|N/A) - [Evidence]
- Are auto-generated markdown files left unmodified? (PASS|FAIL|N/A) - [Evidence]
- Do README.md files use absolute URLs and include concrete examples? (PASS|FAIL|N/A) - [Evidence]
- Is documentation integrated into ReviewMark review-sets for formal review? (PASS|FAIL|N/A) - [Evidence]

## Process Compliance: (PASS|FAIL|N/A)

- Was Continuous Compliance workflow followed? (PASS|FAIL|N/A) - [Evidence]
- Did all quality gates execute successfully? (PASS|FAIL|N/A) - [Evidence]
- Were appropriate tools used for validation? (PASS|FAIL|N/A) - [Evidence]
- Were standards consistently applied across work? (PASS|FAIL|N/A) - [Evidence]
- Was compliance evidence generated and preserved? (PASS|FAIL|N/A) - [Evidence]

## Overall Findings

- **Critical Issues**: [Count and description of critical findings]
- **Recommendations**: [Suggested improvements and next steps]
- **Tools Executed**: [Quality tools used for validation]

## Compliance Status

- **Standards Adherence**: [Overall compliance rating]
- **Quality Gates**: [Status of automated quality checks]
- **Standards Adherence**: [Overall compliance rating with specific standards]
- **Quality Gates**: [Status of automated quality checks with tool outputs]
```

Return this summary to the caller.
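Because every checklist line in the new template ends with a literal `(PASS|FAIL|N/A) - [Evidence]` slot, a caller could mechanically flag lines a quality run left unfilled. This is a hypothetical sketch, not tooling from this PR, and the sample report text is made up:

```python
# Hypothetical check: a filled-in line replaces "(PASS|FAIL|N/A)" with a
# concrete status, so any line still carrying the literal slot is unfilled.
TEMPLATE_SLOT = "(PASS|FAIL|N/A)"

def unfilled_lines(report: str) -> list[str]:
    """Return checklist lines still carrying the unfilled template slot."""
    return [line for line in report.splitlines()
            if line.lstrip().startswith("- ") and TEMPLATE_SLOT in line]

sample = (
    "- Were requirements updated to reflect functional changes? PASS - ReqStream clean\n"
    "- Were new requirements created for new features? (PASS|FAIL|N/A) - [Evidence]\n"
)
print(unfilled_lines(sample))
```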
4 changes: 2 additions & 2 deletions .github/agents/repo-consistency.agent.md
@@ -42,13 +42,13 @@ benefit from template evolution while respecting project-specific customizations

# Reporting

Upon completion create a summary in `.agent-logs/[agent-name]-[subject]-[unique-id].md`
Upon completion create a summary in `.agent-logs/{agent-name}-{subject}-{unique-id}.md`
of the project consisting of:

```markdown
# Repo Consistency Report

**Result**: <SUCCEEDED/FAILED>
**Result**: (SUCCEEDED|FAILED)

## Consistency Analysis

4 changes: 2 additions & 2 deletions .github/standards/csharp-testing.md
@@ -56,7 +56,7 @@ reliable evidence.
- **Verify Interactions**: Assert that expected method calls occurred with correct parameters
- **Predictable Behavior**: Set up mocks to return known values for consistent test results

# MSTest V4 Antipatterns
# MSTest V4 Anti-patterns

Avoid these common MSTest V4 patterns because they produce poor error messages or cause tests to be silently ignored.

@@ -116,4 +116,4 @@ Before submitting C# tests, verify:
- [ ] External dependencies mocked with NSubstitute or equivalent
- [ ] Tests linked to requirements with source filters where needed
- [ ] Test results generate TRX format for ReqStream compatibility
- [ ] MSTest V4 antipatterns avoided (proper assertions, public visibility, etc.)
- [ ] MSTest V4 anti-patterns avoided (proper assertions, public visibility, etc.)
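The Arrange-Act-Assert pattern and interaction-verification style this standard's checklist references are language-agnostic; translated to Python purely for illustration (the standard itself targets C# with MSTest V4 and NSubstitute, and the toy unit under test is my invention):

```python
import unittest
from unittest.mock import Mock

def greet(name_service) -> str:
    """Toy unit under test: builds a greeting from a name service."""
    return f"Hello, {name_service.current_name()}!"

class GreeterTests(unittest.TestCase):
    def test_greet_uses_name_service(self):
        # Arrange: mock the external dependency with a known return value
        name_service = Mock()
        name_service.current_name.return_value = "Ada"

        # Act: exercise exactly one behavior
        greeting = greet(name_service)

        # Assert: check the outcome and verify the interaction occurred
        self.assertEqual(greeting, "Hello, Ada!")
        name_service.current_name.assert_called_once()
```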