Merged
6 changes: 6 additions & 0 deletions .config/dotnet-tools.json
@@ -49,6 +49,12 @@
"commands": [
"versionmark"
]
},
"demaconsulting.reviewmark": {
"version": "0.1.0-rc.3",
"commands": [
"reviewmark"
]
}
}
}
4 changes: 4 additions & 0 deletions .cspell.json
@@ -48,6 +48,10 @@
"Qube",
"reqstream",
"ReqStream",
"reviewmark",
"ReviewMark",
"reviewplan",
"reviewreport",
"Sarif",
"sarifmark",
"SarifMark",
72 changes: 72 additions & 0 deletions .github/agents/code-review-agent.md
@@ -0,0 +1,72 @@
---
name: Code Review Agent
description: Performs formal file reviews, elaborating review-sets and applying structured review checks
---

# Code Review Agent - TestResults

Perform formal file reviews for a named review-set, producing a structured findings report.

## When to Invoke This Agent

Invoke the code-review-agent for:

- Performing a formal review of a named review-set
- Producing review evidence for the Continuous Compliance pipeline
- Checking files against the structured review checklist

## How to Run This Agent

When invoked, the agent will be told which review-set is being reviewed. For example:

```text
Review the "TestResults-Model" review-set.
```

## Responsibilities

### Step 1: Elaborate the Review-Set

Run the following command to get the list of files in the review-set:

```bash
dotnet reviewmark --elaborate [review-set-id]
```

For example:

```bash
dotnet reviewmark --elaborate TestResults-Model
```

This will output the list of files covered by the review-set, along with their fingerprints
and current review status (current, stale, or missing).

### Step 2: Review Each File

For each file in the review-set, apply the checks from the standard review template at
[review-template.md](https://github.com/demaconsulting/ContinuousCompliance/blob/main/docs/review-template/review-template.md).
Determine which checklist sections apply based on the type of file (requirements, documentation,
source code, tests).

### Step 3: Generate Report

Write an `AGENT_REPORT_review-[review-set-id].md` file in the repository root with the
structured findings. This file is excluded from git and linting via `.gitignore`.

## Report Format

The generated `AGENT_REPORT_review-[review-set-id].md` must include:

1. **Review Header**: Project, Review ID, review date, files under review
2. **Checklist Results**: Each applicable section with Pass/Fail/N/A for every check
3. **Summary of Findings**: Any checks recorded as Fail, and notable observations
4. **Overall Outcome**: Pass or Fail with justification
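As an illustration, a report following this format might look like the skeleton below (all headings, checks, and values are hypothetical; the real checklist comes from the review template):

```text
# Review: TestResults-Model

- Project: TestResults
- Review ID: TestResults-Model
- Date: 2024-01-01
- Files: src/DemaConsulting.TestResults/Result.cs, ...

## Checklist Results

| Check                               | Result |
| ----------------------------------- | ------ |
| Source code: naming is consistent   | Pass   |
| Source code: XML docs are present   | N/A    |

## Summary of Findings

- No Fail results recorded.

## Overall Outcome

Pass: all applicable checks passed or were not applicable.
```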

## Don't

- Make any changes to source files, tests, or documentation during a review — record all
findings in the report only
- Skip applicable checklist sections
- Record findings without an overall outcome
- Commit the `AGENT_REPORT_*.md` file (it is excluded from git via `.gitignore`)
28 changes: 28 additions & 0 deletions .github/agents/repo-consistency-agent.md
@@ -80,6 +80,34 @@ The agent reviews the following areas for consistency with the template:
- `quality/` (auto-generated)
- **Definition Files**: `definition.yaml` files for document generation

### Tracking Template Evolution

To ensure downstream projects benefit from recent template improvements, review recent pull requests
merged into the template repository:

1. **List Recent PRs**: Retrieve recently merged PRs from `demaconsulting/TemplateDotNetLibrary`
- Review the last 10-20 PRs to identify template improvements

2. **Identify Propagatable Changes**: For each PR, determine if changes should apply to downstream
projects:
- Focus on structural changes (workflows, agents, configurations) over content-specific changes
- Note changes to `.github/`, linting configurations, project patterns, and documentation
structure

3. **Check Downstream Application**: Verify if identified changes exist in the downstream project:
- Check if similar files/patterns exist in downstream
- Compare file contents between template and downstream project
- Look for similar PR titles or commit messages in downstream repository history

4. **Recommend Missing Updates**: For changes not yet applied, include them in the consistency
review with:
- Description of the template change (reference PR number)
- Explanation of benefits for the downstream project
- Specific files or patterns that need updating

This technique ensures downstream projects don't miss important template improvements and helps
maintain long-term consistency.

### Review Process

1. **Identify Differences**: Compare downstream repository structure with template
64 changes: 63 additions & 1 deletion .github/workflows/build.yaml
@@ -355,7 +355,7 @@ jobs:
echo "Capturing tool versions..."
dotnet versionmark --capture --job-id "build-docs" \
--output "artifacts/versionmark-build-docs.json" -- \
dotnet git node npm pandoc weasyprint sarifmark sonarmark reqstream buildmark versionmark
dotnet git node npm pandoc weasyprint sarifmark sonarmark reqstream buildmark versionmark reviewmark
echo "✓ Tool versions captured"

# === CAPTURE OTS SELF-VALIDATION RESULTS ===
@@ -393,6 +393,12 @@
--validate
--results artifacts/sonarmark-self-validation.trx

- name: Run ReviewMark self-validation
run: >
dotnet reviewmark
--validate
--results artifacts/reviewmark-self-validation.trx

# === GENERATE MARKDOWN REPORTS ===
# This section generates all markdown reports from various tools and sources.
# Downstream projects: Add any additional markdown report generation steps here.
Expand Down Expand Up @@ -440,6 +446,28 @@ jobs:
echo "=== SonarCloud Quality Report ==="
cat docs/quality/sonar-quality.md

- name: Generate Review Plan and Review Report with ReviewMark
shell: bash
run: >
dotnet reviewmark
--definition .reviewmark.yaml
--plan docs/reviewplan/review-plan.md
--plan-depth 1
--report docs/reviewreport/review-report.md
--report-depth 1

- name: Display Review Plan
shell: bash
run: |
echo "=== Review Plan ==="
cat docs/reviewplan/review-plan.md

- name: Display Review Report
shell: bash
run: |
echo "=== Review Report ==="
cat docs/reviewreport/review-report.md

- name: Generate Build Notes with BuildMark
shell: bash
env:
@@ -534,6 +562,26 @@
--metadata date="$(date +'%Y-%m-%d')"
--output docs/tracematrix/tracematrix.html

- name: Generate Review Plan HTML with Pandoc
shell: bash
run: >
dotnet pandoc
--defaults docs/reviewplan/definition.yaml
--filter node_modules/.bin/mermaid-filter.cmd
--metadata version="${{ inputs.version }}"
--metadata date="$(date +'%Y-%m-%d')"
--output docs/reviewplan/review-plan.html

- name: Generate Review Report HTML with Pandoc
shell: bash
run: >
dotnet pandoc
--defaults docs/reviewreport/definition.yaml
--filter node_modules/.bin/mermaid-filter.cmd
--metadata version="${{ inputs.version }}"
--metadata date="$(date +'%Y-%m-%d')"
--output docs/reviewreport/review-report.html

# === GENERATE PDF DOCUMENTS WITH WEASYPRINT ===
# This section converts HTML documents to PDF using Weasyprint.
# Downstream projects: Add any additional Weasyprint PDF generation steps here.
Expand Down Expand Up @@ -580,6 +628,20 @@ jobs:
docs/tracematrix/tracematrix.html
"docs/TestResults Trace Matrix.pdf"

- name: Generate Review Plan PDF with Weasyprint
run: >
dotnet weasyprint
--pdf-variant pdf/a-3u
docs/reviewplan/review-plan.html
"docs/TestResults Review Plan.pdf"

- name: Generate Review Report PDF with Weasyprint
run: >
dotnet weasyprint
--pdf-variant pdf/a-3u
docs/reviewreport/review-report.html
"docs/TestResults Review Report.pdf"

# === UPLOAD ARTIFACTS ===
# This section uploads all generated documentation artifacts.
# Downstream projects: Add any additional artifact uploads here.
10 changes: 10 additions & 0 deletions .gitignore
@@ -34,3 +34,13 @@ docs/justifications/*.html
docs/quality/codeql-quality.md
docs/quality/sonar-quality.md
docs/quality/*.html
docs/reviewplan/review-plan.md
docs/reviewreport/review-report.md
docs/buildnotes.md
docs/buildnotes/versions.md

# VersionMark captures (generated during CI/CD)
versionmark-*.json

# Agent report files
AGENT_REPORT_*.md
30 changes: 30 additions & 0 deletions .reviewmark.yaml
@@ -0,0 +1,30 @@
---
# ReviewMark Configuration File
# This file defines which files require review, where the evidence store is located,
# and how files are grouped into named review-sets.

# Patterns identifying all files that require review.
# Processed in order; prefix a pattern with '!' to exclude.
needs-review:
- "**/*.cs"
- "!**/obj/**"

# Evidence source: review data and index.json are located in the root of the 'reviews'
# branch of this repository, accessed through the GitHub public HTTPS raw content endpoint.
evidence-source:
type: url
location: https://raw.githubusercontent.com/demaconsulting/TestResults/reviews/index.json

# Review sets grouping files by logical unit of review.
reviews:
- id: TestResults-Model
title: Review of TestResults Model
paths:
- "src/DemaConsulting.TestResults/*.cs"
- "!**/obj/**"

- id: TestResults-Serialization
title: Review of TestResults Serialization
paths:
- "src/DemaConsulting.TestResults/IO/**/*.cs"
- "!**/obj/**"
21 changes: 13 additions & 8 deletions .versionmark.yaml
@@ -26,39 +26,44 @@ tools:
# SonarScanner for .NET (from dotnet tool list)
dotnet-sonarscanner:
command: dotnet tool list
regex: '(?i)dotnet-sonarscanner\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)dotnet-sonarscanner\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# Pandoc (DemaConsulting.PandocTool from dotnet tool list)
pandoc:
command: dotnet tool list
regex: '(?i)demaconsulting\.pandoctool\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)demaconsulting\.pandoctool\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# WeasyPrint (DemaConsulting.WeasyPrintTool from dotnet tool list)
weasyprint:
command: dotnet tool list
regex: '(?i)demaconsulting\.weasyprinttool\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)demaconsulting\.weasyprinttool\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# SarifMark (DemaConsulting.SarifMark from dotnet tool list)
sarifmark:
command: dotnet tool list
regex: '(?i)demaconsulting\.sarifmark\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)demaconsulting\.sarifmark\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# SonarMark (DemaConsulting.SonarMark from dotnet tool list)
sonarmark:
command: dotnet tool list
regex: '(?i)demaconsulting\.sonarmark\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)demaconsulting\.sonarmark\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# ReqStream (DemaConsulting.ReqStream from dotnet tool list)
reqstream:
command: dotnet tool list
regex: '(?i)demaconsulting\.reqstream\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)demaconsulting\.reqstream\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# BuildMark (DemaConsulting.BuildMark from dotnet tool list)
buildmark:
command: dotnet tool list
regex: '(?i)demaconsulting\.buildmark\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)demaconsulting\.buildmark\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# VersionMark (DemaConsulting.VersionMark from dotnet tool list)
versionmark:
command: dotnet tool list
regex: '(?i)demaconsulting\.versionmark\s+(?<version>\d+\.\d+\.\d+)'
regex: '(?i)demaconsulting\.versionmark\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'

# ReviewMark (DemaConsulting.ReviewMark from dotnet tool list)
reviewmark:
command: dotnet tool list
regex: '(?i)demaconsulting\.reviewmark\s+(?<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)'
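The extended regexes above add an optional prerelease suffix (such as `-rc.3`) to the captured version. A quick check of the reviewmark pattern against hypothetical `dotnet tool list` output lines; note that .NET spells named groups `(?<version>...)` while Python requires `(?P<version>...)`:

```python
import re

# Same pattern as in .versionmark.yaml, with the named group
# rewritten from .NET's (?<version>...) to Python's (?P<version>...)
PATTERN = re.compile(
    r"(?i)demaconsulting\.reviewmark\s+(?P<version>\d+\.\d+\.\d+(?:-[a-zA-Z0-9.]+)?)"
)

# Hypothetical `dotnet tool list` output lines
lines = [
    "demaconsulting.reviewmark      0.1.0-rc.3      reviewmark",
    "demaconsulting.reviewmark      1.2.3           reviewmark",
]

for line in lines:
    match = PATTERN.search(line)
    print(match.group("version"))  # 0.1.0-rc.3, then 1.2.3
```

Without the `(?:-[a-zA-Z0-9.]+)?` suffix, the first line would capture only `0.1.0`, silently dropping the prerelease label.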
2 changes: 2 additions & 0 deletions AGENTS.md
@@ -10,6 +10,7 @@ programmatically creating test result files in TRX and JUnit formats.
- **Software Developer** - Writes production code in literate style
- **Test Developer** - Creates unit tests following AAA pattern
- **Code Quality Agent** - Enforces linting, static analysis, and security standards
- **Code Review Agent** - Assists in performing formal file reviews
- **Repo Consistency Agent** - Ensures downstream repositories remain consistent with template patterns

## Agent Selection Guide
@@ -22,6 +23,7 @@ programmatically creating test result files in TRX and JUnit formats.
- Add or update requirements → **Requirements Agent**
- Ensure test coverage linkage in `requirements.yaml` → **Requirements Agent**
- Run security scanning or address CodeQL alerts → **Code Quality Agent**
- Perform a formal file review → **Code Review Agent**
- Propagate template changes → **Repo Consistency Agent**

## Tech Stack
11 changes: 11 additions & 0 deletions docs/reviewplan/definition.yaml
@@ -0,0 +1,11 @@
---
resource-path:
- docs/reviewplan
- docs/template
input-files:
- docs/reviewplan/title.txt
- docs/reviewplan/introduction.md
- docs/reviewplan/review-plan.md
template: template.html
table-of-contents: true
number-sections: true
32 changes: 32 additions & 0 deletions docs/reviewplan/introduction.md
@@ -0,0 +1,32 @@
# Introduction

This document contains the review plan for the TestResults project.

## Purpose

This review plan provides a comprehensive overview of all files requiring formal review
in the TestResults project. It identifies which review-sets cover which
files and serves as evidence that every file requiring review is covered by at least
one named review-set.

## Scope

This review plan covers:

- C# source code files requiring formal review
- Mapping of C# source files to named review-sets

## Generation Source

This document is automatically generated by the ReviewMark tool, which analyzes the
`.reviewmark.yaml` configuration and the review evidence store. It serves as evidence
that every file requiring review is covered by a current, valid review.

## Audience

This document is intended for:

- Software developers working on TestResults
- Quality assurance teams validating review coverage
- Project stakeholders reviewing compliance status
- Auditors verifying that all required files have been reviewed