diff --git a/.github/workflows/health-72-template-sync.yml b/.github/workflows/health-72-template-sync.yml
new file mode 100644
index 000000000..27deaa74d
--- /dev/null
+++ b/.github/workflows/health-72-template-sync.yml
@@ -0,0 +1,37 @@
+name: Health 72 Template Sync
+
+on:
+ pull_request:
+ paths:
+ - '.github/scripts/**/*.js'
+ - 'templates/consumer-repo/.github/scripts/**/*.js'
+ push:
+ branches: [main]
+ paths:
+ - '.github/scripts/**/*.js'
+ - 'templates/consumer-repo/.github/scripts/**/*.js'
+
+jobs:
+ validate:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Set up Python
+ uses: actions/setup-python@v5
+ with:
+ python-version: '3.11'
+
+ - name: Validate template sync
+ run: |
+ if ! python scripts/validate_template_sync.py; then
+ echo ""
+ echo "❌ FAILED: Template files are out of sync!"
+ echo ""
+ echo "To fix this error:"
+ echo " 1. Run: ./scripts/sync_templates.sh"
+ echo " 2. Commit the changes to templates/consumer-repo/"
+ echo " 3. Push to your branch"
+ echo ""
+ exit 1
+ fi
diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md
index e5c773fb5..1376baf58 100644
--- a/docs/CONTRIBUTING.md
+++ b/docs/CONTRIBUTING.md
@@ -75,8 +75,27 @@ Before submitting changes, run the validation scripts:
# Comprehensive check (30-120 seconds)
./scripts/check_branch.sh
+
+# Template sync validation (if you modified .github/scripts/)
+python scripts/validate_template_sync.py
```
+
+### ⚠️ CRITICAL: Template Sync Guard
+
+**If you modify any file in `.github/scripts/`, you MUST also update the template:**
+
+```bash
+# After editing .github/scripts/*.js, run:
+./scripts/sync_templates.sh
+
+# Verify sync:
+python scripts/validate_template_sync.py
+```
+
+**Why?** Consumer repos receive workflow updates from `templates/consumer-repo/`. If you update `.github/scripts/` but not the template, no sync PRs are created and your changes never reach consumer repos.
+
+The CI will fail if templates are out of sync with source files.
+
## Code Style
### Python
diff --git a/docs/SYNC_WORKFLOW.md b/docs/SYNC_WORKFLOW.md
index e9388f832..076a17141 100644
--- a/docs/SYNC_WORKFLOW.md
+++ b/docs/SYNC_WORKFLOW.md
@@ -5,6 +5,29 @@ Prevent propagating bugs to consumer repos by validating changes in the source (
## Before Any Sync
+### 0. Verify Template Sync (CRITICAL)
+
+**If you modified `.github/scripts/`, ensure templates are synced:**
+
+```bash
+# Check for out-of-sync templates
+python scripts/validate_template_sync.py
+
+# If validation fails:
+./scripts/sync_templates.sh
+
+# Verify fixed:
+python scripts/validate_template_sync.py
+
+# Commit template changes:
+git add templates/consumer-repo/.github/scripts/
+git commit -m "sync: update templates with latest script changes"
+```
+
+**Why?** Consumer repos sync from `templates/consumer-repo/`. If source scripts are changed but templates aren't updated, no sync PRs will be created.
+
+The CI workflow `health-72-template-sync.yml` enforces this, but check manually before triggering sync.
+
### 1. Validate in Workflows Repo
```bash
# In /workspaces/Workflows
diff --git a/docs/ci/WORKFLOWS.md b/docs/ci/WORKFLOWS.md
index a8165f7ba..53ea6bedc 100644
--- a/docs/ci/WORKFLOWS.md
+++ b/docs/ci/WORKFLOWS.md
@@ -168,6 +168,7 @@ Scheduled health jobs keep the automation ecosystem aligned:
* [`health-67-integration-sync-check.yml`](../../.github/workflows/health-67-integration-sync-check.yml) validates that Workflows-Integration-Tests repo stays in sync with templates (push, `repository_dispatch`, daily schedule).
* [`health-70-validate-sync-manifest.yml`](../../.github/workflows/health-70-validate-sync-manifest.yml) validates that sync-manifest.yml is complete - ensures all sync-able files are declared (PR, push).
* [`health-71-sync-health-check.yml`](../../.github/workflows/health-71-sync-health-check.yml) monitors sync workflow health daily - creates issues if all recent runs failed or sync is stale (daily schedule, manual dispatch).
+* [`health-72-template-sync.yml`](../../.github/workflows/health-72-template-sync.yml) validates that template files are in sync with source scripts - fails if `.github/scripts/` changes but `templates/consumer-repo/` isn't updated (PR, push on script changes).
* [`maint-68-sync-consumer-repos.yml`](../../.github/workflows/maint-68-sync-consumer-repos.yml) pushes workflow template updates to registered consumer repos (release, template push, manual dispatch).
* [`maint-69-sync-integration-repo.yml`](../../.github/workflows/maint-69-sync-integration-repo.yml) syncs integration-repo templates to Workflows-Integration-Tests repository (template push, manual dispatch with dry-run support).
* [`maint-70-fix-integration-formatting.yml`](../../.github/workflows/maint-70-fix-integration-formatting.yml) applies Black and Ruff formatting fixes to Integration-Tests repository files (manual dispatch for CI formatting failures).
diff --git a/docs/ci/WORKFLOW_SYSTEM.md b/docs/ci/WORKFLOW_SYSTEM.md
index 1d6489c04..f55062930 100644
--- a/docs/ci/WORKFLOW_SYSTEM.md
+++ b/docs/ci/WORKFLOW_SYSTEM.md
@@ -687,6 +687,7 @@ Keep this table handy when you are triaging automation: it confirms which workfl
| **Health 67 Integration Sync Check** (`health-67-integration-sync-check.yml`, maintenance bucket) | `push` (templates), `repository_dispatch`, `schedule` (daily) | Validate that Workflows-Integration-Tests repo stays in sync with templates. Creates issues when drift detected. | ⚪ Automatic/scheduled | [Integration sync runs](https://github.com/stranske/Workflows/actions/workflows/health-67-integration-sync-check.yml) |
| **Health 70 Validate Sync Manifest** (`health-70-validate-sync-manifest.yml`, maintenance bucket) | `pull_request`, `push` | Validate that sync-manifest.yml includes all sync-able files. Fails PRs that add workflows/prompts/scripts without updating manifest. | ⚪ Required on PRs | [Manifest validation runs](https://github.com/stranske/Workflows/actions/workflows/health-70-validate-sync-manifest.yml) |
| **Health 71 Sync Health Check** (`health-71-sync-health-check.yml`, maintenance bucket) | `schedule` (daily), `workflow_dispatch` | Monitor sync workflow health and create issues when all recent runs failed or sync is stale. | ⚪ Scheduled/manual | [Sync health check runs](https://github.com/stranske/Workflows/actions/workflows/health-71-sync-health-check.yml) |
+| **Health 72 Template Sync** (`health-72-template-sync.yml`, maintenance bucket) | `pull_request`, `push` (`.github/scripts/`, `templates/`) | Validate that template files are in sync with source scripts. Fails if `.github/scripts/*.js` changes but `templates/consumer-repo/` isn't updated. | ⚪ Required on PRs | [Template sync validation runs](https://github.com/stranske/Workflows/actions/workflows/health-72-template-sync.yml) |
| **Maint 68 Sync Consumer Repos** (`maint-68-sync-consumer-repos.yml`, maintenance bucket) | `release`, `push` (templates), `workflow_dispatch` | Push workflow template updates to registered consumer repositories. Creates PRs in consumer repos when templates change. | ⚪ Automatic/manual | [Consumer sync runs](https://github.com/stranske/Workflows/actions/workflows/maint-68-sync-consumer-repos.yml) |
| **Maint 69 Sync Integration Repo** (`maint-69-sync-integration-repo.yml`, maintenance bucket) | `push` (templates), `workflow_dispatch` | Sync integration-repo templates to Workflows-Integration-Tests repository. Resolves drift detected by Health 67. Supports dry-run mode. | ⚪ Automatic/manual | [Integration sync runs](https://github.com/stranske/Workflows/actions/workflows/maint-69-sync-integration-repo.yml) |
| **Fix Integration Tests Formatting** (`maint-70-fix-integration-formatting.yml`, maintenance bucket) | `workflow_dispatch` | Manually triggered workflow to apply Black and Ruff formatting fixes to Python files in the Workflows-Integration-Tests repository when CI formatting checks fail. | ⚪ Manual only | [Formatting fix runs](https://github.com/stranske/Workflows/actions/workflows/maint-70-fix-integration-formatting.yml) |
diff --git a/docs/keepalive/SETUP_CHECKLIST.md b/docs/keepalive/SETUP_CHECKLIST.md
index d5f9d9bce..44bd5d60b 100644
--- a/docs/keepalive/SETUP_CHECKLIST.md
+++ b/docs/keepalive/SETUP_CHECKLIST.md
@@ -310,6 +310,43 @@ Copy from:
---
+
+## 📌 Important: Template Sync Process
+
+Before configuring workflows, understand how this consumer repo receives updates:
+
+### How Workflow Updates Work
+
+1. **Source**: Workflow scripts live in `stranske/Workflows/templates/consumer-repo/`
+2. **Sync**: The Workflows repo has a sync process that creates PRs to update consumer repos
+3. **Trigger**: Sync happens when template files change or on manual trigger
+
+### Template Sync Validation (For Workflows Repo Contributors)
+
+If you're contributing to the Workflows repo and modifying `.github/scripts/`:
+
+```bash
+# After editing workflow scripts
+./scripts/sync_templates.sh
+
+# Verify templates are in sync
+python scripts/validate_template_sync.py
+```
+
+**Why this matters**: Consumer repos only get updates when template files change. If you modify `.github/scripts/` but forget to update `templates/consumer-repo/.github/scripts/`, no sync PRs are created.
+
+The CI enforces this with `.github/workflows/health-72-template-sync.yml`.
+
+### As a Consumer Repo User
+
+- Watch for sync PRs from the Workflows repo
+- Review and merge them to get the latest workflow improvements
+- Don't manually edit workflow files in `.github/workflows/` or `.github/scripts/`; changes will be overwritten on the next sync
+
+---
+
## Workflow Configuration
### Step 12: Configure Workflow Files
diff --git a/scripts/sync_templates.sh b/scripts/sync_templates.sh
new file mode 100755
index 000000000..51208c6c4
--- /dev/null
+++ b/scripts/sync_templates.sh
@@ -0,0 +1,39 @@
+#!/bin/bash
+# Sync source scripts to template directory
+set -e
+
+SOURCE_DIR=".github/scripts"
+TEMPLATE_DIR="templates/consumer-repo/.github/scripts"
+
+echo "🔄 Syncing scripts to template directory..."
+
+# Get list of files to sync (exclude tests)
+FILES=$(find "$SOURCE_DIR" -name "*.js" -type f \
+ | grep -v "__tests__" \
+ | grep -v '\.test\.js$' \
+ | sed "s|^$SOURCE_DIR/||")
+
+synced=0
+for file in $FILES; do
+ source_file="$SOURCE_DIR/$file"
+ template_file="$TEMPLATE_DIR/$file"
+
+ # Create parent directory if it doesn't exist
+ mkdir -p "$(dirname "$template_file")"
+
+ if [ ! -f "$template_file" ]; then
+ echo " ✓ Creating $file (new file)"
+ cp "$source_file" "$template_file"
+ synced=$((synced + 1))
+ elif ! cmp -s "$source_file" "$template_file"; then
+ echo " ✓ Syncing $file"
+ cp "$source_file" "$template_file"
+ synced=$((synced + 1))
+ fi
+done
+
+if [ $synced -eq 0 ]; then
+ echo "✅ All files already in sync"
+else
+ echo "✅ Synced $synced file(s)"
+fi
diff --git a/scripts/validate_template_sync.py b/scripts/validate_template_sync.py
new file mode 100755
index 000000000..601629198
--- /dev/null
+++ b/scripts/validate_template_sync.py
@@ -0,0 +1,73 @@
+#!/usr/bin/env python3
+"""
+Validate that template files are in sync with source files.
+
+This prevents the common mistake of updating .github/scripts/ without
+updating templates/consumer-repo/.github/scripts/, which prevents sync
+PRs from being created for consumer repos.
+"""
+import hashlib
+import sys
+from pathlib import Path
+
+
+def hash_file(path: Path) -> str:
+ """Compute SHA256 hash of a file."""
+ return hashlib.sha256(path.read_bytes()).hexdigest()
+
+
+def main() -> int:
+ repo_root = Path(__file__).parent.parent
+ source_dir = repo_root / ".github" / "scripts"
+ template_dir = repo_root / "templates" / "consumer-repo" / ".github" / "scripts"
+
+ if not source_dir.exists():
+ print(f"❌ Source directory not found: {source_dir}")
+ return 1
+
+ if not template_dir.exists():
+ print(f"❌ Template directory not found: {template_dir}")
+ return 1
+
+ # Files that should be synced (exclude test files and some specific scripts)
+ exclude_patterns = ["__tests__", ".test.js", "deploy", "release"]
+
+ mismatches = []
+ source_files = [
+ f
+ for f in source_dir.rglob("*.js")
+ if not any(pattern in str(f) for pattern in exclude_patterns)
+ ]
+
+ for source_file in source_files:
+ relative_path = source_file.relative_to(source_dir)
+ template_file = template_dir / relative_path
+
+ if not template_file.exists():
+ mismatches.append(relative_path)
+ continue
+
+ source_hash = hash_file(source_file)
+ template_hash = hash_file(template_file)
+
+ if source_hash != template_hash:
+ mismatches.append(relative_path)
+
+ if mismatches:
+ print("❌ Template files out of sync with source files:\n")
+ for path in mismatches:
+ template_file = template_dir / path
+ if not template_file.exists():
+ print(f" • {path} (MISSING - needs to be created)")
+ else:
+ print(f" • {path} (out of sync)")
+ print("\n💡 To fix: ./scripts/sync_templates.sh")
+ print(" Then: git add templates/consumer-repo/.github/scripts/")
+ return 1
+
+ print("✅ All template files in sync")
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/templates/consumer-repo/.github/scripts/agents-guard.js b/templates/consumer-repo/.github/scripts/agents-guard.js
index fddae9c0a..62945e8db 100644
--- a/templates/consumer-repo/.github/scripts/agents-guard.js
+++ b/templates/consumer-repo/.github/scripts/agents-guard.js
@@ -27,6 +27,9 @@ const ALLOW_REMOVED_PATHS = new Set(
'.github/workflows/agents-pr-meta.yml',
'.github/workflows/agents-pr-meta-v2.yml',
'.github/workflows/agents-pr-meta-v3.yml',
+ // v1 verify-to-issue workflow deprecated; v2 is the active version.
+ // Archived to archives/deprecated-workflows/
+ '.github/workflows/agents-verify-to-issue.yml',
].map((entry) => entry.toLowerCase()),
);
@@ -447,7 +450,11 @@ function evaluateGuard({
const hasCodeownerApproval = hasExternalApproval || authorIsCodeowner;
const hasProtectedChanges = modifiedProtectedPaths.size > 0;
- const needsApproval = hasProtectedChanges && !hasCodeownerApproval;
+ // Security note: Allow `agents:allow-change` label to bypass CODEOWNER approval
+ // ONLY for automated dependency PRs from known bots (dependabot, renovate).
+ // Human PRs or other bot PRs still require CODEOWNER approval even with label.
+ const isAutomatedPR = normalizedAuthor && (normalizedAuthor === 'dependabot[bot]' || normalizedAuthor === 'renovate[bot]');
+ const needsApproval = hasProtectedChanges && !hasCodeownerApproval && !(hasAllowLabel && isAutomatedPR);
const needsLabel = hasProtectedChanges && !hasAllowLabel && !hasCodeownerApproval;
const failureReasons = [];
diff --git a/templates/consumer-repo/.github/scripts/agents_pr_meta_keepalive.js b/templates/consumer-repo/.github/scripts/agents_pr_meta_keepalive.js
index 893938410..8fc1ea938 100644
--- a/templates/consumer-repo/.github/scripts/agents_pr_meta_keepalive.js
+++ b/templates/consumer-repo/.github/scripts/agents_pr_meta_keepalive.js
@@ -264,9 +264,7 @@ async function detectKeepalive({ core, github, context, env = process.env }) {
if (!owner || !repo) {
outputs.reason = 'missing-repo';
core.info('Keepalive dispatch skipped: unable to resolve repository owner/name for PR lookup.');
- if (typeof finalise === 'function') {
- return finalise(false);
- }
+ // Early exit: finalise is not defined yet at this point, so return false directly.
return false;
}
const body = comment?.body || '';
diff --git a/templates/consumer-repo/.github/scripts/agents_pr_meta_update_body.js b/templates/consumer-repo/.github/scripts/agents_pr_meta_update_body.js
index cb8c07730..0481c8d7e 100644
--- a/templates/consumer-repo/.github/scripts/agents_pr_meta_update_body.js
+++ b/templates/consumer-repo/.github/scripts/agents_pr_meta_update_body.js
@@ -378,7 +378,7 @@ async function fetchConnectorCheckboxStates(github, owner, repo, prNumber, core)
// A checkbox checked in any connector comment stays checked; later comments cannot uncheck it.
for (const comment of connectorComments) {
const commentStates = parseCheckboxStates(comment.body);
- for (const [key, value] of commentStates) {
+ for (const [key] of commentStates) {
states.set(key, true);
}
}
@@ -866,18 +866,18 @@ async function run({github, context, core, inputs}) {
return;
}
- const prInfo = await discoverPr({github, context, core, inputs});
- if (!prInfo) {
- core.info('No pull request context detected; skipping update.');
- return;
- }
+ const prInfo = await discoverPr({github, context, core, inputs});
+ if (!prInfo) {
+ core.info('No pull request context detected; skipping update.');
+ return;
+ }
+
+ const prResponse = await withRetries(
+ () => github.rest.pulls.get({owner, repo, pull_number: prInfo.number}),
+ {description: `pulls.get #${prInfo.number}`, core},
+ );
+ const pr = prResponse.data;
- const prResponse = await withRetries(
- () => github.rest.pulls.get({owner, repo, pull_number: prInfo.number}),
- {description: `pulls.get #${prInfo.number}`, core},
- );
- const pr = prResponse.data;
-
if (pr.state === 'closed') {
core.info(`Pull request #${pr.number} is closed; skipping update.`);
return;
diff --git a/templates/consumer-repo/.github/scripts/checkout_source.js b/templates/consumer-repo/.github/scripts/checkout_source.js
new file mode 100644
index 000000000..d01e2aaab
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/checkout_source.js
@@ -0,0 +1,65 @@
+'use strict';
+
+const normaliseRepo = (value) => {
+ if (!value) {
+ return '';
+ }
+ if (typeof value === 'string') {
+ return value.trim();
+ }
+ const owner = value.owner?.login || value.owner?.name || '';
+ const name = value.name || '';
+ return String(value.full_name || value.fullName || (owner && name ? `${owner}/${name}` : ''));
+};
+
+const normaliseSha = (value) => (typeof value === 'string' && value.trim()) || '';
+
+function resolveCheckoutSource({ core, context, fallbackRepo, fallbackRef }) {
+ const warnings = [];
+ const fallbackRepository = normaliseRepo(fallbackRepo) || `${context.repo.owner}/${context.repo.repo}`;
+ const fallbackSha = normaliseSha(fallbackRef) || context.sha || '';
+
+ let repository = fallbackRepository;
+ let ref = fallbackSha;
+
+ const pull = context.payload?.pull_request;
+ const workflowRun = context.payload?.workflow_run;
+
+ if (pull) {
+ const repoCandidate = normaliseRepo(pull.head?.repo);
+ const refCandidate = normaliseSha(pull.head?.sha);
+
+ repository = repoCandidate || fallbackRepository;
+ ref = refCandidate || fallbackSha;
+
+ if (!repoCandidate) {
+ warnings.push('pull_request head repository missing; defaulting to base repository.');
+ }
+ if (!refCandidate) {
+ warnings.push('pull_request head SHA missing; defaulting to workflow SHA.');
+ }
+ } else if (workflowRun) {
+ const pullRequests = Array.isArray(workflowRun.pull_requests) ? workflowRun.pull_requests : [];
+ const primaryPull = pullRequests.length > 0 ? pullRequests[0] : null;
+
+ const repoCandidate =
+ normaliseRepo(primaryPull?.head?.repo) || normaliseRepo(workflowRun.head_repository);
+ const refCandidate = normaliseSha(primaryPull?.head?.sha) || normaliseSha(workflowRun.head_sha);
+
+ repository = repoCandidate || fallbackRepository;
+ ref = refCandidate || fallbackSha;
+
+ if (!repoCandidate) {
+ warnings.push('workflow_run head repository missing; defaulting to base repository.');
+ }
+ if (!refCandidate) {
+ warnings.push('workflow_run head SHA missing; defaulting to workflow SHA.');
+ }
+ } else {
+ warnings.push('No pull_request or workflow_run context; defaulting checkout to base repository.');
+ }
+
+ return { repository, ref, warnings };
+}
+
+module.exports = { resolveCheckoutSource };
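The resolution order above (pull_request head, then workflow_run head, then the base-repo/workflow-SHA fallback) can be sketched standalone. This is a minimal mirror of the logic rather than the module itself; the payload shapes are the standard GitHub webhook fields (`head.repo.full_name`, `head.sha`, `head_repository`, `head_sha`):

```javascript
'use strict';

// Minimal mirror of the resolution order in resolveCheckoutSource:
// 1. pull_request.head  2. workflow_run.head  3. base repo + workflow SHA.
function pickSource(payload, fallback) {
  const pull = payload.pull_request;
  if (pull) {
    return {
      repository: pull.head?.repo?.full_name || fallback.repository,
      ref: pull.head?.sha || fallback.ref,
    };
  }
  const run = payload.workflow_run;
  if (run) {
    return {
      repository: run.head_repository?.full_name || fallback.repository,
      ref: run.head_sha || fallback.ref,
    };
  }
  return { ...fallback };
}

const base = { repository: 'octo/base', ref: 'abc123' };
console.log(pickSource({ pull_request: { head: { repo: { full_name: 'fork/base' }, sha: 'def456' } } }, base));
console.log(pickSource({}, base));
```

Unlike this sketch, the real module also records warnings whenever it has to fall back, which is what surfaces the "head repository missing" messages in workflow logs.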
diff --git a/templates/consumer-repo/.github/scripts/conflict_detector.js b/templates/consumer-repo/.github/scripts/conflict_detector.js
new file mode 100644
index 000000000..7d49f841f
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/conflict_detector.js
@@ -0,0 +1,437 @@
+'use strict';
+
+/**
+ * Conflict detector module for keepalive pipeline.
+ * Detects merge conflicts on PRs to trigger conflict-specific prompts.
+ */
+
+/**
+ * Files to exclude from conflict detection.
+ * These files have special merge strategies (e.g., merge=ours in .gitattributes)
+ * or are .gitignored and should not block PR mergeability.
+ */
+const IGNORED_CONFLICT_FILES = [
+ 'pr_body.md',
+ 'ci/autofix/history.json',
+ 'keepalive-metrics.ndjson',
+ 'coverage-trend-history.ndjson',
+ 'metrics-history.ndjson',
+ 'residual-trend-history.ndjson',
+];
+
+// Comments from automation often mention "conflict" but should not block execution.
+const IGNORED_COMMENT_AUTHORS = new Set([
+ 'github-actions[bot]',
+ 'github-merge-queue[bot]',
+ 'dependabot[bot]',
+ 'github',
+]);
+
+const IGNORED_COMMENT_MARKERS = [];
+
+const COMMENT_MARKER = '';
+
+/**
+ * Format a failure comment for a PR
+ * @param {Object} params - Comment parameters
+ * @param {string} params.mode - Codex mode (keepalive/autofix/verifier)
+ * @param {string} params.exitCode - Exit code from Codex
+ * @param {string} params.errorCategory - Error category (transient/auth/resource/logic/unknown)
+ * @param {string} params.errorType - Error type (codex/infrastructure/auth/unknown)
+ * @param {string} params.recovery - Recovery guidance
+ * @param {string} params.summary - Output summary (truncated)
+ * @param {string} params.runUrl - URL to the workflow run
+ * @returns {string} Formatted comment body
+ */
+function formatFailureComment({
+ mode = 'unknown',
+ exitCode = 'unknown',
+ errorCategory = 'unknown',
+ errorType = 'unknown',
+ recovery = 'Check logs for details.',
+ summary = 'No output captured',
+ runUrl = '',
+}) {
+ const runLink = runUrl ? `[View logs](${runUrl})` : 'N/A';
+ const truncatedSummary = summary.length > 500 ? summary.slice(0, 500) + '...' : summary;
+
+ return `${COMMENT_MARKER}
+## ⚠️ Codex ${mode} run failed
+
+| Field | Value |
+|-------|-------|
+| Exit Code | \`${exitCode}\` |
+| Error Category | \`${errorCategory}\` |
+| Error Type | \`${errorType}\` |
+| Run | ${runLink} |
+
+### 🔧 Suggested Recovery
+
+${recovery}
+
+### 📝 What to do
+
+1. Check the [workflow logs](${runUrl || '#'}) for detailed error output
+2. If this is a configuration issue, update the relevant settings
+3. If the error persists, consider adding the \`needs-human\` label for manual review
+4. Re-run the workflow once the issue is resolved
+
+
+<details>
+<summary>Output summary</summary>
+
+\`\`\`
+${truncatedSummary}
+\`\`\`
+
+</details>
+
+ `;
+}
+
+/**
+ * Check if a comment body contains the failure marker
+ * @param {string} body - Comment body to check
+ * @returns {boolean} True if this is a failure notification comment
+ */
+function isFailureComment(body) {
+ return typeof body === 'string' && body.includes(COMMENT_MARKER);
+}
+
+module.exports = {
+ COMMENT_MARKER,
+ formatFailureComment,
+ isFailureComment,
+};
diff --git a/templates/consumer-repo/.github/scripts/gate-docs-only.js b/templates/consumer-repo/.github/scripts/gate-docs-only.js
new file mode 100644
index 000000000..316e15aa8
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/gate-docs-only.js
@@ -0,0 +1,67 @@
+'use strict';
+
+const DEFAULT_MARKER = '';
+const BASE_MESSAGE = 'Gate fast-pass: docs-only change detected; heavy checks skipped.';
+const NO_CHANGES_MESSAGE = 'Gate fast-pass: no changes detected; heavy checks skipped.';
+
+function normalizeReason(reason) {
+ if (reason === null || reason === undefined) {
+ return '';
+ }
+ if (typeof reason === 'string') {
+ return reason.trim();
+ }
+ return String(reason).trim();
+}
+
+function buildDocsOnlyMessage(reason) {
+ const normalized = normalizeReason(reason);
+ if (!normalized || normalized === 'docs_only') {
+ return BASE_MESSAGE;
+ }
+ if (normalized === 'no_changes') {
+ return NO_CHANGES_MESSAGE;
+ }
+ return `${BASE_MESSAGE} Reason: ${normalized}.`;
+}
+
+async function handleDocsOnlyFastPass({ core, reason, marker = DEFAULT_MARKER, summaryHeading = 'Gate docs-only fast-pass' } = {}) {
+ const message = buildDocsOnlyMessage(reason);
+ const outputs = {
+ state: 'success',
+ description: message,
+ comment_body: `${message}\n\n${marker}`,
+ marker,
+ base_message: BASE_MESSAGE,
+ };
+
+ if (core && typeof core.setOutput === 'function') {
+ for (const [key, value] of Object.entries(outputs)) {
+ core.setOutput(key, value);
+ }
+ }
+
+ if (core && typeof core.info === 'function') {
+ core.info(message);
+ }
+
+ const summary = core?.summary;
+ if (summary && typeof summary.addHeading === 'function' && typeof summary.addRaw === 'function' && typeof summary.write === 'function') {
+ await summary.addHeading(summaryHeading, 3).addRaw(`${message}\n`).write();
+ }
+
+ return {
+ message,
+ outputs,
+ marker,
+ baseMessage: BASE_MESSAGE,
+ };
+}
+
+module.exports = {
+ handleDocsOnlyFastPass,
+ buildDocsOnlyMessage,
+ DEFAULT_MARKER,
+ BASE_MESSAGE,
+ NO_CHANGES_MESSAGE,
+};
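The reason-to-message mapping above is small enough to exercise in isolation. The sketch below restates the module's constants (copied from `BASE_MESSAGE`/`NO_CHANGES_MESSAGE` above) so the mapping can run without importing the module:

```javascript
'use strict';

// Restates buildDocsOnlyMessage's mapping: empty or 'docs_only' -> base message,
// 'no_changes' -> no-changes message, anything else -> base message plus the reason.
const BASE = 'Gate fast-pass: docs-only change detected; heavy checks skipped.';
const NONE = 'Gate fast-pass: no changes detected; heavy checks skipped.';

function docsOnlyMessage(reason) {
  const normalized = String(reason ?? '').trim();
  if (!normalized || normalized === 'docs_only') return BASE;
  if (normalized === 'no_changes') return NONE;
  return `${BASE} Reason: ${normalized}.`;
}

console.log(docsOnlyMessage(undefined));
console.log(docsOnlyMessage('no_changes'));
console.log(docsOnlyMessage('markdown_only'));
```

The same message is reused for the commit status description and the sticky comment body, so any new reason string only needs to be handled in this one mapping.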
diff --git a/templates/consumer-repo/.github/scripts/github-api-with-retry.js b/templates/consumer-repo/.github/scripts/github-api-with-retry.js
new file mode 100755
index 000000000..b4a31f0fc
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/github-api-with-retry.js
@@ -0,0 +1,120 @@
+#!/usr/bin/env node
+
+/**
+ * GitHub API Retry Wrapper
+ *
+ * Wraps Octokit API calls with exponential backoff retry logic for rate limit errors.
+ * This prevents workflows from failing when hitting GitHub API rate limits.
+ *
+ * Usage in github-script actions:
+ * const { withRetry } = require('./.github/scripts/github-api-with-retry.js');
+ * const data = await withRetry(() => github.rest.issues.get({...}));
+ */
+
+/**
+ * Exponential backoff retry wrapper for GitHub API calls
+ *
+ * @param {Function} fn - Async function that makes GitHub API call
+ * @param {Object} options - Retry options
+ * @param {number} options.maxRetries - Maximum number of retries (default: 5)
+ * @param {number} options.initialDelay - Initial delay in ms (default: 1000)
+ * @param {number} options.maxDelay - Maximum delay in ms (default: 60000)
+ * @param {Function} options.onRetry - Callback on retry (receives attempt, error, delay)
+ * @returns {Promise} - Result of the API call
+ */
+async function withRetry(fn, options = {}) {
+ const {
+ maxRetries = 5,
+ initialDelay = 1000,
+ maxDelay = 60000,
+ onRetry = null
+ } = options;
+
+ let lastError;
+
+ for (let attempt = 0; attempt <= maxRetries; attempt++) {
+ try {
+ return await fn();
+ } catch (error) {
+ lastError = error;
+
+ // Check if it's a rate limit error
+ const isRateLimit =
+ error.status === 403 &&
+ (error.message?.includes('rate limit') ||
+ error.message?.includes('API rate limit exceeded'));
+
+ // Check if it's a secondary rate limit (abuse detection)
+ const isSecondaryRateLimit =
+ error.status === 403 &&
+ error.message?.includes('secondary rate limit');
+
+ // Don't retry on non-rate-limit errors
+ if (!isRateLimit && !isSecondaryRateLimit) {
+ throw error;
+ }
+
+ // Don't retry if we've exhausted attempts
+ if (attempt === maxRetries) {
+ console.error(`Max retries (${maxRetries}) reached for rate limit error`);
+ throw error;
+ }
+
+ // Calculate delay with exponential backoff
+ const baseDelay = isSecondaryRateLimit
+ ? initialDelay * 2 // Secondary rate limits need longer delays
+ : initialDelay;
+
+ const delay = Math.min(
+ baseDelay * Math.pow(2, attempt),
+ maxDelay
+ );
+
+ // Add jitter to prevent thundering herd
+ const jitter = Math.random() * 0.3 * delay;
+ const actualDelay = delay + jitter;
+
+ console.log(
+ `Rate limit hit (attempt ${attempt + 1}/${maxRetries + 1}). ` +
+ `Retrying in ${Math.round(actualDelay / 1000)}s...`
+ );
+
+ if (onRetry) {
+ onRetry(attempt + 1, error, actualDelay);
+ }
+
+ await sleep(actualDelay);
+ }
+ }
+
+ throw lastError;
+}
+
+/**
+ * Sleep for specified milliseconds
+ */
+function sleep(ms) {
+ return new Promise(resolve => setTimeout(resolve, ms));
+}
+
+/**
+ * Wrap paginate calls with retry logic
+ *
+ * @param {Object} github - Octokit instance
+ * @param {Function} method - Octokit method to paginate
+ * @param {Object} params - Parameters for the API call
+ * @param {Object} options - Retry options (same as withRetry)
+ * @returns {Promise} - Paginated results
+ */
+async function paginateWithRetry(github, method, params, options = {}) {
+ return withRetry(
+ () => github.paginate(method, params),
+ options
+ );
+}
+
+module.exports = {
+ withRetry,
+ paginateWithRetry,
+ sleep
+};
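Before each retry, `withRetry` computes `min(initialDelay * 2^attempt, maxDelay)` and adds up to 30% jitter. With the jitter removed for determinism, the schedule looks like this (a sketch of the arithmetic only, not the module itself):

```javascript
'use strict';

// Deterministic view of withRetry's backoff: delay(attempt) = min(initial * 2^attempt, max).
// The random jitter (up to +30%) is omitted here so the schedule is reproducible.
function backoffDelays({ maxRetries = 5, initialDelay = 1000, maxDelay = 60000 } = {}) {
  const delays = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(Math.min(initialDelay * Math.pow(2, attempt), maxDelay));
  }
  return delays;
}

console.log(backoffDelays()); // [ 1000, 2000, 4000, 8000, 16000 ]
console.log(backoffDelays({ maxRetries: 8 })); // caps at 60000 from the 7th retry on
```

Secondary rate limits double `initialDelay` before this formula is applied, which shifts the whole schedule up one step.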
diff --git a/templates/consumer-repo/.github/scripts/issue_scope_parser.js b/templates/consumer-repo/.github/scripts/issue_scope_parser.js
index 6e2fa1572..3927bf446 100644
--- a/templates/consumer-repo/.github/scripts/issue_scope_parser.js
+++ b/templates/consumer-repo/.github/scripts/issue_scope_parser.js
@@ -3,11 +3,23 @@
const normalizeNewlines = (value) => String(value || '').replace(/\r\n/g, '\n');
const stripBlockquotePrefixes = (value) =>
String(value || '').replace(/^[ \t]*>+[ \t]?/gm, '');
-const escapeRegExp = (value) => String(value ?? '').replace(/[\\^$.*+?()[\]{}|]/g, '\\$&');
+
+/**
+ * Check if a line is a code fence delimiter (``` or ~~~).
+ * Used to track code block boundaries when processing content.
+ */
+const isCodeFenceLine = (line) => /^(`{3,}|~{3,})/.test(line.trim());
+
+const LIST_ITEM_REGEX = /^(\s*)([-*+]|\d+[.)])\s+(.*)$/;
const SECTION_DEFS = [
{ key: 'scope', label: 'Scope', aliases: ['Scope', 'Issue Scope', 'Why', 'Background', 'Context', 'Overview'], optional: true },
- { key: 'tasks', label: 'Tasks', aliases: ['Tasks', 'Task List', 'Implementation', 'Implementation notes'], optional: false },
+ {
+ key: 'tasks',
+ label: 'Tasks',
+ aliases: ['Tasks', 'Task', 'Task List', 'Implementation', 'Implementation notes', 'To Do', 'Todo', 'To-Do'],
+ optional: false,
+ },
{
key: 'acceptance',
label: 'Acceptance Criteria',
@@ -31,6 +43,7 @@ const PR_META_FALLBACK_PLACEHOLDERS = {
};
const CHECKBOX_SECTIONS = new Set(['tasks', 'acceptance']);
+const ACCEPTANCE_CUE_REGEX = /(acceptance|definition of done|done criteria)/i;
function normaliseSectionContent(sectionKey, content) {
const trimmed = String(content || '').trim();
@@ -78,8 +91,21 @@ function normaliseChecklist(content) {
const lines = raw.split('\n');
let mutated = false;
+ let insideCodeBlock = false;
+
const updated = lines.map((line) => {
- const match = line.match(/^(\s*)([-*])\s+(.*)$/);
+ // Track code fence boundaries - don't add checkboxes inside code blocks
+ if (isCodeFenceLine(line)) {
+ insideCodeBlock = !insideCodeBlock;
+ return line;
+ }
+
+ // Skip checkbox normalization for lines inside code blocks
+ if (insideCodeBlock) {
+ return line;
+ }
+
+ const match = line.match(LIST_ITEM_REGEX);
if (!match) {
return line;
}
@@ -99,6 +125,291 @@ function normaliseChecklist(content) {
return mutated ? updated.join('\n') : raw;
}
+function stripHeadingMarkers(rawLine) {
+ if (!rawLine) {
+ return '';
+ }
+ let text = String(rawLine).trim();
+ if (!text) {
+ return '';
+ }
+ text = text.replace(/^#{1,6}\s+/, '');
+ text = text.replace(/\s*:\s*$/, '');
+
+ const boldMatch = text.match(/^(?:\*\*|__)(.+)(?:\*\*|__)$/);
+ if (boldMatch) {
+ text = boldMatch[1].trim();
+ }
+
+ text = text.replace(/\s*:\s*$/, '');
+ return text.trim();
+}
+
+function extractHeadingLabel(rawLine) {
+ const cleaned = stripHeadingMarkers(rawLine);
+ if (!cleaned) {
+ const listMatch = String(rawLine || '').match(LIST_ITEM_REGEX);
+ if (!listMatch) {
+ return '';
+ }
+ const remainder = listMatch[3]?.trim() || '';
+ if (!remainder || /^\[[ xX]\]/.test(remainder)) {
+ return '';
+ }
+ return stripHeadingMarkers(remainder);
+ }
+ return cleaned;
+}
+
+function isExplicitHeadingLine(rawLine) {
+ const line = String(rawLine || '').trim();
+ if (!line) {
+ return false;
+ }
+ if (/^#{1,6}\s+\S/.test(line)) {
+ return true;
+ }
+ return /^(?:\*\*|__)(.+?)(?:\*\*|__)\s*:?\s*$/.test(line);
+}
+
+function extractListBlocks(lines) {
+ const blocks = [];
+ let current = [];
+ let insideCodeBlock = false;
+
+ const flush = () => {
+ if (current.length) {
+ const block = current.join('\n').trim();
+ if (block) {
+ blocks.push(block);
+ }
+ current = [];
+ }
+ };
+
+ for (const line of lines) {
+ // Track code fence boundaries
+ if (isCodeFenceLine(line)) {
+ insideCodeBlock = !insideCodeBlock;
+ // Include code fence lines in current block if we're building one
+ if (current.length) {
+ current.push(line);
+ }
+ continue;
+ }
+
+ // Lines inside code blocks are treated as continuation (don't break the block)
+ if (insideCodeBlock) {
+ if (current.length) {
+ current.push(line);
+ }
+ continue;
+ }
+
+ if (LIST_ITEM_REGEX.test(line)) {
+ current.push(line);
+ continue;
+ }
+ if (current.length) {
+ if (!line.trim()) {
+ current.push(line);
+ continue;
+ }
+ flush();
+ }
+ }
+ flush();
+
+ return blocks;
+}
+
+function extractListBlocksWithOffsets(lines) {
+ const blocks = [];
+ let current = [];
+ let blockStart = null;
+ let blockEnd = null;
+ let offset = 0;
+ let insideCodeBlock = false;
+
+ const flush = () => {
+ if (!current.length) {
+ return;
+ }
+ const content = current.join('\n').trim();
+ if (content) {
+ blocks.push({ start: blockStart, end: blockEnd, content });
+ }
+ current = [];
+ blockStart = null;
+ blockEnd = null;
+ };
+
+ for (const line of lines) {
+ // Track code fence boundaries
+ if (isCodeFenceLine(line)) {
+ insideCodeBlock = !insideCodeBlock;
+ if (current.length) {
+ current.push(line);
+ blockEnd = offset + line.length;
+ }
+ offset += line.length + 1;
+ continue;
+ }
+
+ // Lines inside code blocks continue the current block
+ if (insideCodeBlock) {
+ if (current.length) {
+ current.push(line);
+ blockEnd = offset + line.length;
+ }
+ offset += line.length + 1;
+ continue;
+ }
+
+ const isList = LIST_ITEM_REGEX.test(line);
+ if (isList) {
+ if (!current.length) {
+ blockStart = offset;
+ }
+ current.push(line);
+ blockEnd = offset + line.length;
+ } else if (current.length) {
+ if (!line.trim()) {
+ current.push(line);
+ blockEnd = offset + line.length;
+ } else {
+ flush();
+ }
+ }
+ offset += line.length + 1;
+ }
+ flush();
+
+ return blocks;
+}
+
+function removeTrailingBlock(content, block) {
+ const trimmedContent = String(content || '').trim();
+ const trimmedBlock = String(block || '').trim();
+ if (!trimmedContent || !trimmedBlock) {
+ return content;
+ }
+ if (!trimmedContent.endsWith(trimmedBlock)) {
+ return content;
+ }
+ const index = trimmedContent.lastIndexOf(trimmedBlock);
+ if (index === -1) {
+ return content;
+ }
+ return trimmedContent.slice(0, index).trimEnd();
+}
+
+function blockInsideRange(block, range) {
+ if (!range) {
+ return false;
+ }
+ return block.start >= range.start && block.end <= range.end;
+}
+
+function hasAcceptanceCue(content) {
+ const lines = String(content || '').split('\n');
+ return lines.some((line) => {
+ if (LIST_ITEM_REGEX.test(line)) {
+ return false;
+ }
+ const cleaned = stripHeadingMarkers(line);
+ if (!cleaned) {
+ return false;
+ }
+ return ACCEPTANCE_CUE_REGEX.test(cleaned);
+ });
+}
+
+function inferSectionsFromLists(segment) {
+ const sections = { scope: '', tasks: '', acceptance: '' };
+ const lines = String(segment || '').split('\n');
+ const firstListIndex = lines.findIndex((line) => LIST_ITEM_REGEX.test(line));
+ if (firstListIndex === -1) {
+ return sections;
+ }
+
+ const preListText = lines.slice(0, firstListIndex).join('\n').trim();
+ if (preListText) {
+ sections.scope = preListText;
+ }
+
+ const listBlocks = extractListBlocks(lines.slice(firstListIndex));
+ if (listBlocks.length > 0) {
+ sections.tasks = listBlocks[0];
+ }
+ if (listBlocks.length > 1) {
+ sections.acceptance = listBlocks[1];
+ }
+
+ return sections;
+}
+
+function applyListFallbacks({ segment, sections, listBlocks, ranges }) {
+ const updated = { ...sections };
+ const tasksMissing = !String(updated.tasks || '').trim();
+ const acceptanceMissing = !String(updated.acceptance || '').trim();
+
+ if (!tasksMissing && !acceptanceMissing) {
+ return updated;
+ }
+
+ if (tasksMissing && String(updated.scope || '').trim()) {
+ const inferred = inferSectionsFromLists(updated.scope);
+ if (inferred.tasks) {
+ updated.tasks = inferred.tasks;
+ updated.scope = inferred.scope || '';
+ if (acceptanceMissing && inferred.acceptance) {
+ updated.acceptance = inferred.acceptance;
+ }
+ }
+ }
+
+ const acceptanceStillMissing = !String(updated.acceptance || '').trim();
+ if (acceptanceStillMissing && String(updated.tasks || '').trim()) {
+ const taskBlocks = extractListBlocks(String(updated.tasks || '').split('\n'));
+ if (taskBlocks.length > 1 && hasAcceptanceCue(updated.tasks)) {
+ const acceptanceBlock = taskBlocks[taskBlocks.length - 1];
+ updated.acceptance = acceptanceBlock;
+ updated.tasks = removeTrailingBlock(updated.tasks, acceptanceBlock);
+ }
+ }
+
+ const tasksStillMissing = !String(updated.tasks || '').trim();
+ if (tasksStillMissing && listBlocks.length) {
+ const acceptanceRange = ranges.acceptance;
+ const candidates = listBlocks.filter((block) => !blockInsideRange(block, acceptanceRange));
+ if (candidates.length) {
+ if (acceptanceRange) {
+ const before = candidates.filter((block) => block.end <= acceptanceRange.start);
+ updated.tasks = (before.length ? before[before.length - 1] : candidates[0]).content;
+ } else {
+ updated.tasks = candidates[0].content;
+ }
+ }
+ }
+
+ const acceptanceStillMissingAfter = !String(updated.acceptance || '').trim();
+ if (acceptanceStillMissingAfter && listBlocks.length) {
+ const tasksRange = ranges.tasks;
+ const candidates = listBlocks.filter((block) => !blockInsideRange(block, tasksRange));
+ if (candidates.length) {
+ if (tasksRange) {
+ const after = candidates.filter((block) => block.start >= tasksRange.end);
+ updated.acceptance = (after.length ? after[0] : candidates[candidates.length - 1]).content;
+ } else {
+ updated.acceptance = candidates[candidates.length - 1].content;
+ }
+ }
+ }
+
+ return updated;
+}
+
function collectSections(source) {
const normalized = stripBlockquotePrefixes(normalizeNewlines(source));
if (!normalized.trim()) {
@@ -115,17 +426,6 @@ function collectSections(source) {
segment = normalized.slice(startIndex + startMarker.length, endIndex);
}
- const headingLabelPattern = SECTION_DEFS
- .flatMap((section) => section.aliases)
- .map((title) => escapeRegExp(title))
- .join('|');
-
- // Match headings that may be markdown headers (# H), bold (**H**), or plain text (with optional colon).
- const headingRegex = new RegExp(
- `^\\s*(?:#{1,6}\\s+|\\*\\*)?(${headingLabelPattern})(?:\\*\\*|:)?\\s*$`,
- 'gim'
- );
-
const aliasLookup = SECTION_DEFS.reduce((acc, section) => {
section.aliases.forEach((alias) => {
acc[alias.toLowerCase()] = section;
@@ -134,21 +434,47 @@ function collectSections(source) {
}, {});
const headings = [];
- let match;
- while ((match = headingRegex.exec(segment)) !== null) {
- const matchedLabel = (match[1] || '').trim();
- const title = matchedLabel.toLowerCase();
- if (!title || !aliasLookup[title]) {
+ const allHeadings = [];
+ const lines = segment.split('\n');
+ const listBlocks = extractListBlocksWithOffsets(lines);
+ let offset = 0;
+ let insideCodeBlock = false;
+ for (const line of lines) {
+ // Track code fence boundaries - skip heading detection inside code blocks
+ if (isCodeFenceLine(line)) {
+ insideCodeBlock = !insideCodeBlock;
+ offset += line.length + 1;
continue;
}
- const section = aliasLookup[title];
- headings.push({
- title: section.key,
- label: section.label,
- index: match.index,
- length: match[0].length,
- matchedLabel,
- });
+
+ if (!insideCodeBlock) {
+ const matchedLabel = extractHeadingLabel(line);
+ if (matchedLabel) {
+ const title = matchedLabel.toLowerCase();
+ if (aliasLookup[title]) {
+ const section = aliasLookup[title];
+ headings.push({
+ title: section.key,
+ label: section.label,
+ index: offset,
+ length: line.length,
+ matchedLabel,
+ });
+ }
+ }
+ if (isExplicitHeadingLine(line)) {
+ allHeadings.push({ index: offset, length: line.length });
+ }
+ }
+ offset += line.length + 1;
+ }
+
+ const headingIndexSet = new Set(allHeadings.map((heading) => heading.index));
+ for (const header of headings) {
+ if (!headingIndexSet.has(header.index)) {
+ allHeadings.push({ index: header.index, length: header.length });
+ headingIndexSet.add(header.index);
+ }
}
const extracted = SECTION_DEFS.reduce((acc, section) => {
@@ -159,9 +485,20 @@ function collectSections(source) {
acc[section.key] = section.label;
return acc;
}, {});
+ const ranges = SECTION_DEFS.reduce((acc, section) => {
+ acc[section.key] = null;
+ return acc;
+ }, {});
if (headings.length === 0) {
- return { segment, sections: extracted, labels };
+ const inferred = inferSectionsFromLists(segment);
+ const merged = {
+ ...extracted,
+ ...Object.fromEntries(
+ Object.entries(inferred).filter(([, value]) => String(value || '').trim())
+ ),
+ };
+ return { segment, sections: merged, labels };
}
for (const section of SECTION_DEFS) {
@@ -170,7 +507,7 @@ function collectSections(source) {
if (!header) {
continue; // Skip missing sections instead of failing
}
- const nextHeader = headings
+ const nextHeader = allHeadings
.filter((entry) => entry.index > header.index)
.sort((a, b) => a.index - b.index)[0];
const contentStart = (() => {
@@ -184,9 +521,11 @@ function collectSections(source) {
const content = normalizeNewlines(segment.slice(contentStart, contentEnd)).trim();
extracted[section.key] = content;
labels[section.key] = header.matchedLabel?.trim() || canonicalTitle;
+ ranges[section.key] = { start: contentStart, end: contentEnd };
}
- return { segment, sections: extracted, labels };
+ const sections = applyListFallbacks({ segment, sections: extracted, listBlocks, ranges });
+ return { segment, sections, labels };
}
/**
diff --git a/templates/consumer-repo/.github/scripts/keepalive_instruction_template.js b/templates/consumer-repo/.github/scripts/keepalive_instruction_template.js
index 521863012..55f0ee911 100644
--- a/templates/consumer-repo/.github/scripts/keepalive_instruction_template.js
+++ b/templates/consumer-repo/.github/scripts/keepalive_instruction_template.js
@@ -2,63 +2,109 @@
const fs = require('fs');
const path = require('path');
+const { resolvePromptMode } = require('./keepalive_prompt_routing');
/**
- * Path to the canonical keepalive instruction template.
- * Edit .github/templates/keepalive-instruction.md to change the instruction text.
+ * Path to the fallback keepalive instruction template.
+ * Edit .github/templates/keepalive-instruction.md to change the fallback text.
*/
const TEMPLATE_PATH = path.resolve(__dirname, '../templates/keepalive-instruction.md');
+const NEXT_TASK_TEMPLATE_PATH = path.resolve(__dirname, '../codex/prompts/keepalive_next_task.md');
+const FIX_TEMPLATE_PATH = path.resolve(__dirname, '../codex/prompts/fix_ci_failures.md');
+const VERIFY_TEMPLATE_PATH = path.resolve(__dirname, '../codex/prompts/verifier_acceptance_check.md');
+
+const TEMPLATE_PATHS = {
+ normal: NEXT_TASK_TEMPLATE_PATH,
+ fix_ci: FIX_TEMPLATE_PATH,
+ verify: VERIFY_TEMPLATE_PATH,
+};
/**
* Cached instruction text (loaded once per process).
- * @type {string|null}
+ * @type {Map<string, string>}
*/
-let cachedInstruction = null;
+const instructionCache = new Map();
-/**
- * Returns the canonical keepalive instruction directive text.
- * The text is loaded from .github/templates/keepalive-instruction.md.
- *
- * @returns {string} The instruction directive (without @agent prefix)
- */
-function getKeepaliveInstruction() {
- if (cachedInstruction !== null) {
- return cachedInstruction;
+function normalise(value) {
+ return String(value ?? '').trim();
+}
+
+function resolveTemplatePath({ templatePath, mode, action, reason, scenario } = {}) {
+ const explicit = normalise(templatePath);
+ if (explicit) {
+ return { mode: 'custom', path: explicit };
+ }
+ const resolvedMode = resolvePromptMode({ mode, action, reason, scenario });
+ return { mode: resolvedMode, path: TEMPLATE_PATHS[resolvedMode] || TEMPLATE_PATH };
+}
+
+function getFallbackInstruction() {
+ return [
+ 'Your objective is to satisfy the **Acceptance Criteria** by completing each **Task** within the defined **Scope**.',
+ '',
+ '**This round you MUST:**',
+ '1. Implement actual code or test changes that advance at least one incomplete task toward acceptance.',
+ '2. Commit meaningful source code (.py, .yml, .js, etc.)—not just status/docs updates.',
+ '3. **UPDATE THE CHECKBOXES** in the Tasks and Acceptance Criteria sections below to mark completed items.',
+ '4. Change `- [ ]` to `- [x]` for items you have completed and verified.',
+ '',
+ '**CRITICAL - Checkbox Updates:**',
+ 'When you complete a task or acceptance criterion, update its checkbox directly in this prompt file.',
+ 'Change the `[ ]` to `[x]` for completed items. The automation will read these checkboxes and update the PR status summary.',
+ '',
+ '**Example:**',
+ 'Before: `- [ ] Add validation for user input`',
+ 'After: `- [x] Add validation for user input`',
+ '',
+ '**DO NOT:**',
+ '- Commit only status files, markdown summaries, or documentation when tasks require code.',
+ '- Mark checkboxes complete without actually implementing and verifying the work.',
+ '- Close the round without source-code changes when acceptance criteria require them.',
+ '- Change the text of checkboxes—only change `[ ]` to `[x]`.',
+ '',
+ 'Review the Scope/Tasks/Acceptance below, identify the next incomplete task that requires code, implement it, then **update the checkboxes** to mark completed items.',
+ ].join('\n');
+}
+
+function loadInstruction(templatePath, { allowDefaultFallback = true } = {}) {
+ const resolvedPath = templatePath || TEMPLATE_PATH;
+ if (instructionCache.has(resolvedPath)) {
+ return instructionCache.get(resolvedPath);
}
+ let content = '';
try {
- cachedInstruction = fs.readFileSync(TEMPLATE_PATH, 'utf8').trim();
+ content = fs.readFileSync(resolvedPath, 'utf8').trim();
} catch (err) {
- // Fallback if template file is missing
- console.warn(`Warning: Could not load keepalive instruction template from ${TEMPLATE_PATH}: ${err.message}`);
- cachedInstruction = [
- 'Your objective is to satisfy the **Acceptance Criteria** by completing each **Task** within the defined **Scope**.',
- '',
- '**This round you MUST:**',
- '1. Implement actual code or test changes that advance at least one incomplete task toward acceptance.',
- '2. Commit meaningful source code (.py, .yml, .js, etc.)—not just status/docs updates.',
- '3. **UPDATE THE CHECKBOXES** in the Tasks and Acceptance Criteria sections below to mark completed items.',
- '4. Change `- [ ]` to `- [x]` for items you have completed and verified.',
- '',
- '**CRITICAL - Checkbox Updates:**',
- 'When you complete a task or acceptance criterion, update its checkbox directly in this prompt file.',
- 'Change the `[ ]` to `[x]` for completed items. The automation will read these checkboxes and update the PR status summary.',
- '',
- '**Example:**',
- 'Before: `- [ ] Add validation for user input`',
- 'After: `- [x] Add validation for user input`',
- '',
- '**DO NOT:**',
- '- Commit only status files, markdown summaries, or documentation when tasks require code.',
- '- Mark checkboxes complete without actually implementing and verifying the work.',
- '- Close the round without source-code changes when acceptance criteria require them.',
- '- Change the text of checkboxes—only change `[ ]` to `[x]`.',
- '',
- 'Review the Scope/Tasks/Acceptance below, identify the next incomplete task that requires code, implement it, then **update the checkboxes** to mark completed items.',
- ].join('\n');
+ if (allowDefaultFallback && resolvedPath !== TEMPLATE_PATH) {
+ try {
+ content = fs.readFileSync(TEMPLATE_PATH, 'utf8').trim();
+ } catch (fallbackError) {
+        console.warn(`Warning: Could not load keepalive instruction template from ${resolvedPath}: ${err.message}`);
+        console.warn(`Warning: Fallback template ${TEMPLATE_PATH} also failed: ${fallbackError.message}`);
+ content = getFallbackInstruction();
+ }
+ } else {
+ console.warn(`Warning: Could not load keepalive instruction template from ${resolvedPath}: ${err.message}`);
+ content = getFallbackInstruction();
+ }
}
- return cachedInstruction;
+ instructionCache.set(resolvedPath, content);
+ return content;
+}
+
+/**
+ * Returns the keepalive instruction directive text for the resolved prompt mode.
+ * The text is loaded from the template matching the requested mode, falling
+ * back to .github/templates/keepalive-instruction.md when no mode template applies.
+ *
+ * @param {object} [options] - Optional routing hints: templatePath, mode, action, reason, scenario
+ * @returns {string} The instruction directive (without @agent prefix)
+ */
+function getKeepaliveInstruction(options = {}) {
+ const params = options && typeof options === 'object' ? options : {};
+ const resolved = resolveTemplatePath(params);
+ return loadInstruction(resolved.path, { allowDefaultFallback: true });
}
/**
@@ -67,20 +113,31 @@ function getKeepaliveInstruction() {
* @param {string} [agent='codex'] - The agent alias to mention
* @returns {string} The full instruction with @agent prefix
*/
-function getKeepaliveInstructionWithMention(agent = 'codex') {
- const alias = String(agent || '').trim() || 'codex';
- return `@${alias} ${getKeepaliveInstruction()}`;
+function getKeepaliveInstructionWithMention(agent = 'codex', options = {}) {
+ let resolvedAgent = agent;
+ let params = options;
+
+ if (agent && typeof agent === 'object') {
+ params = agent;
+ resolvedAgent = params.agent;
+ }
+
+ const alias = String(resolvedAgent || '').trim() || 'codex';
+ return `@${alias} ${getKeepaliveInstruction(params)}`;
}
/**
* Clears the cached instruction (useful for testing).
*/
function clearCache() {
- cachedInstruction = null;
+ instructionCache.clear();
}
module.exports = {
TEMPLATE_PATH,
+ NEXT_TASK_TEMPLATE_PATH,
+ FIX_TEMPLATE_PATH,
+ VERIFY_TEMPLATE_PATH,
getKeepaliveInstruction,
getKeepaliveInstructionWithMention,
clearCache,
diff --git a/templates/consumer-repo/.github/scripts/keepalive_loop.js b/templates/consumer-repo/.github/scripts/keepalive_loop.js
new file mode 100644
index 000000000..278d0a0de
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/keepalive_loop.js
@@ -0,0 +1,2675 @@
+'use strict';
+
+const fs = require('fs');
+const path = require('path');
+
+const { parseScopeTasksAcceptanceSections } = require('./issue_scope_parser');
+const { loadKeepaliveState, formatStateComment } = require('./keepalive_state');
+const { resolvePromptMode } = require('./keepalive_prompt_routing');
+const { classifyError, ERROR_CATEGORIES } = require('./error_classifier');
+const { formatFailureComment } = require('./failure_comment_formatter');
+const { detectConflicts } = require('./conflict_detector');
+const { parseTimeoutConfig } = require('./timeout_config');
+
+const ATTEMPT_HISTORY_LIMIT = 5;
+const ATTEMPTED_TASK_LIMIT = 6;
+
+const TIMEOUT_VARIABLE_NAMES = [
+ 'WORKFLOW_TIMEOUT_DEFAULT',
+ 'WORKFLOW_TIMEOUT_EXTENDED',
+ 'WORKFLOW_TIMEOUT_WARNING_RATIO',
+ 'WORKFLOW_TIMEOUT_WARNING_MINUTES',
+];
+
+const PROMPT_ROUTES = {
+ fix_ci: {
+ mode: 'fix_ci',
+ file: '.github/codex/prompts/fix_ci_failures.md',
+ },
+ conflict: {
+ mode: 'conflict',
+ file: '.github/codex/prompts/fix_merge_conflicts.md',
+ },
+ verify: {
+ mode: 'verify',
+ file: '.github/codex/prompts/verifier_acceptance_check.md',
+ },
+ normal: {
+ mode: 'normal',
+ file: '.github/codex/prompts/keepalive_next_task.md',
+ },
+};
+
+function normalise(value) {
+ return String(value ?? '').trim();
+}
+
+function resolvePromptRouting({ scenario, mode, action, reason } = {}) {
+ const resolvedMode = resolvePromptMode({ scenario, mode, action, reason });
+ return PROMPT_ROUTES[resolvedMode] || PROMPT_ROUTES.normal;
+}
+
+function toBool(value, defaultValue = false) {
+ const raw = normalise(value);
+ if (!raw) return Boolean(defaultValue);
+ if (['true', 'yes', '1', 'on', 'enabled'].includes(raw.toLowerCase())) {
+ return true;
+ }
+ if (['false', 'no', '0', 'off', 'disabled'].includes(raw.toLowerCase())) {
+ return false;
+ }
+ return Boolean(defaultValue);
+}
+
+function toNumber(value, fallback = 0) {
+ if (value === null || value === undefined || value === '') {
+ return Number.isFinite(fallback) ? Number(fallback) : 0;
+ }
+ const parsed = Number(value);
+ if (Number.isFinite(parsed)) {
+ return parsed;
+ }
+ const int = parseInt(String(value), 10);
+ if (Number.isFinite(int)) {
+ return int;
+ }
+ return Number.isFinite(fallback) ? Number(fallback) : 0;
+}
+
+function toOptionalNumber(value) {
+ if (value === null || value === undefined || value === '') {
+ return null;
+ }
+ const parsed = Number(value);
+ if (Number.isFinite(parsed)) {
+ return parsed;
+ }
+ const int = parseInt(String(value), 10);
+ if (Number.isFinite(int)) {
+ return int;
+ }
+ return null;
+}
+
+function normaliseWarningRatio(value) {
+ if (!Number.isFinite(value)) {
+ return null;
+ }
+ if (value > 1 && value <= 100) {
+ return value / 100;
+ }
+ return value;
+}
+
+function buildAttemptEntry({
+ iteration,
+ action,
+ reason,
+ runResult,
+ promptMode,
+ promptFile,
+ gateConclusion,
+ errorCategory,
+ errorType,
+}) {
+ const actionValue = normalise(action) || 'unknown';
+ const reasonValue = normalise(reason) || actionValue;
+ const entry = {
+ iteration: Math.max(0, toNumber(iteration, 0)),
+ action: actionValue,
+ reason: reasonValue,
+ };
+
+ if (runResult) {
+ entry.run_result = normalise(runResult);
+ }
+ if (promptMode) {
+ entry.prompt_mode = normalise(promptMode);
+ }
+ if (promptFile) {
+ entry.prompt_file = normalise(promptFile);
+ }
+ if (gateConclusion) {
+ entry.gate = normalise(gateConclusion);
+ }
+ if (errorCategory) {
+ entry.error_category = normalise(errorCategory);
+ }
+ if (errorType) {
+ entry.error_type = normalise(errorType);
+ }
+
+ return entry;
+}
+
+function updateAttemptHistory(existing, nextEntry, limit = ATTEMPT_HISTORY_LIMIT) {
+ const history = Array.isArray(existing)
+ ? existing.filter((item) => item && typeof item === 'object')
+ : [];
+ if (!nextEntry || typeof nextEntry !== 'object') {
+ return history.slice(-limit);
+ }
+ const trimmed = history.slice(-limit);
+ const last = trimmed[trimmed.length - 1];
+ if (
+ last &&
+ last.iteration === nextEntry.iteration &&
+ last.action === nextEntry.action &&
+ last.reason === nextEntry.reason
+ ) {
+ return [...trimmed.slice(0, -1), { ...last, ...nextEntry }];
+ }
+ return [...trimmed, nextEntry].slice(-limit);
+}
+
+function normaliseTaskText(value) {
+ return String(value ?? '').replace(/\s+/g, ' ').trim();
+}
+
+function normaliseTaskKey(value) {
+ return normaliseTaskText(value).toLowerCase();
+}
+
+function normaliseAttemptedTasks(value) {
+ if (!Array.isArray(value)) {
+ return [];
+ }
+ const entries = [];
+ value.forEach((entry) => {
+ if (typeof entry === 'string') {
+ const task = normaliseTaskText(entry);
+ if (task) {
+ entries.push({ task, key: normaliseTaskKey(task) });
+ }
+ return;
+ }
+ if (entry && typeof entry === 'object') {
+ const task = normaliseTaskText(entry.task || entry.text || '');
+ if (!task) {
+ return;
+ }
+ entries.push({
+ ...entry,
+ task,
+ key: normaliseTaskKey(entry.key || task),
+ });
+ }
+ });
+ return entries;
+}
+
+function updateAttemptedTasks(existing, nextTask, iteration, limit = ATTEMPTED_TASK_LIMIT) {
+ const history = normaliseAttemptedTasks(existing);
+ const taskText = normaliseTaskText(nextTask);
+ if (!taskText) {
+ return history.slice(-limit);
+ }
+ const key = normaliseTaskKey(taskText);
+ const trimmed = history.filter((entry) => entry.key !== key).slice(-limit);
+ const entry = {
+ task: taskText,
+ key,
+ iteration: Math.max(0, toNumber(iteration, 0)),
+ timestamp: new Date().toISOString(),
+ };
+ return [...trimmed, entry].slice(-limit);
+}
+
+function resolveDurationMs({ durationMs, startTs }) {
+ if (Number.isFinite(durationMs)) {
+ return Math.max(0, Math.floor(durationMs));
+ }
+ if (!Number.isFinite(startTs)) {
+ return 0;
+ }
+ const startMs = startTs > 1e12 ? startTs : startTs * 1000;
+ const delta = Date.now() - startMs;
+ return Math.max(0, Math.floor(delta));
+}
+
+async function fetchPrLabels({ github, context, prNumber, core }) {
+ if (!github?.rest?.pulls?.get || !context?.repo?.owner || !context?.repo?.repo) {
+ return [];
+ }
+ if (!Number.isFinite(prNumber) || prNumber <= 0) {
+ return [];
+ }
+ try {
+ const { data } = await github.rest.pulls.get({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ pull_number: prNumber,
+ });
+ const rawLabels = Array.isArray(data?.labels) ? data.labels : [];
+ return rawLabels.map((label) => normalise(label?.name).toLowerCase()).filter(Boolean);
+ } catch (error) {
+ if (core) {
+ core.info(`Failed to fetch PR labels for timeout config: ${error.message}`);
+ }
+ return [];
+ }
+}
+
+async function fetchRepoVariables({ github, context, core, names = [] }) {
+ if (!github?.rest?.actions?.listRepoVariables || !context?.repo?.owner || !context?.repo?.repo) {
+ return {};
+ }
+
+ const wanted = new Set((names || []).map((name) => normalise(name)).filter(Boolean));
+ if (!wanted.size) {
+ return {};
+ }
+
+ const results = {};
+ let page = 1;
+ const perPage = 100;
+
+ try {
+ while (true) {
+ const { data } = await github.rest.actions.listRepoVariables({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ per_page: perPage,
+ page,
+ });
+ const variables = Array.isArray(data?.variables) ? data.variables : [];
+ for (const variable of variables) {
+ const name = normalise(variable?.name);
+ if (!wanted.has(name)) {
+ continue;
+ }
+ results[name] = normalise(variable?.value);
+ }
+ if (variables.length < perPage || Object.keys(results).length === wanted.size) {
+ break;
+ }
+ page += 1;
+ }
+ } catch (error) {
+ if (core) {
+ core.info(`Failed to fetch repository variables for timeout config: ${error.message}`);
+ }
+ }
+
+ return results;
+}
+
+async function resolveWorkflowRunStartMs({ github, context, core }) {
+ const payloadStartedAt =
+ context?.payload?.workflow_run?.run_started_at ??
+ context?.payload?.workflow_run?.created_at;
+ if (payloadStartedAt) {
+ const parsed = Date.parse(payloadStartedAt);
+ if (Number.isFinite(parsed)) {
+ return parsed;
+ }
+ }
+
+ if (!github?.rest?.actions?.getWorkflowRun) {
+ return null;
+ }
+ const runId = context?.runId || context?.run_id;
+ if (!runId || !context?.repo?.owner || !context?.repo?.repo) {
+ return null;
+ }
+ try {
+ const { data } = await github.rest.actions.getWorkflowRun({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ run_id: runId,
+ });
+ const startedAt = data?.run_started_at;
+ if (!startedAt) {
+ return null;
+ }
+ const parsed = Date.parse(startedAt);
+ return Number.isFinite(parsed) ? parsed : null;
+ } catch (error) {
+ if (core) {
+ core.info(`Failed to fetch workflow run start time: ${error.message}`);
+ }
+ return null;
+ }
+}
+
+async function resolveElapsedMs({ github, context, inputs, core }) {
+ const durationMs = resolveDurationMs({
+ durationMs: toOptionalNumber(
+ inputs?.elapsed_ms ??
+ inputs?.elapsedMs ??
+ inputs?.duration_ms ??
+ inputs?.durationMs
+ ),
+ startTs: toOptionalNumber(inputs?.start_ts ?? inputs?.startTs),
+ });
+ if (durationMs > 0) {
+ return durationMs;
+ }
+ const runStartMs = await resolveWorkflowRunStartMs({ github, context, core });
+ if (!Number.isFinite(runStartMs)) {
+ return 0;
+ }
+ return Math.max(0, Date.now() - runStartMs);
+}
+
+function buildTimeoutStatus({
+ timeoutConfig,
+ elapsedMs,
+ warningRatio = 0.8,
+ warningRemainingMs = 5 * 60 * 1000,
+} = {}) {
+ const resolvedMinutes = Number.isFinite(timeoutConfig?.resolvedMinutes)
+ ? timeoutConfig.resolvedMinutes
+ : null;
+ const timeoutMs = Number.isFinite(resolvedMinutes) ? resolvedMinutes * 60 * 1000 : null;
+ const elapsedSafe = Number.isFinite(elapsedMs) && elapsedMs > 0 ? elapsedMs : null;
+ let remainingMs = null;
+ let usageRatio = null;
+ let warning = null;
+
+ if (timeoutMs && elapsedSafe !== null) {
+ remainingMs = Math.max(0, timeoutMs - elapsedSafe);
+ usageRatio = Math.min(1, elapsedSafe / timeoutMs);
+ const remainingMinutes = Math.ceil(remainingMs / 60000);
+ const usagePercent = Math.round(usageRatio * 100);
+ const thresholdPercent = Number.isFinite(warningRatio) ? Math.round(warningRatio * 100) : null;
+ const thresholdRemainingMinutes = Number.isFinite(warningRemainingMs)
+ ? Math.ceil(warningRemainingMs / 60000)
+ : null;
+ const warnByRatio = usageRatio >= warningRatio;
+ const warnByRemaining = remainingMs <= warningRemainingMs;
+ if (warnByRatio || warnByRemaining) {
+ warning = {
+ percent: usagePercent,
+ remaining_minutes: remainingMinutes,
+ threshold_percent: thresholdPercent,
+ threshold_remaining_minutes: thresholdRemainingMinutes,
+ reason: warnByRemaining ? 'remaining' : 'usage',
+ };
+ }
+ }
+
+ return {
+ defaultMinutes: timeoutConfig?.defaultMinutes ?? null,
+ extendedMinutes: timeoutConfig?.extendedMinutes ?? null,
+ overrideMinutes: timeoutConfig?.overrideMinutes ?? null,
+ resolvedMinutes,
+ source: timeoutConfig?.source ?? '',
+ label: timeoutConfig?.label ?? null,
+ timeoutMs,
+ elapsedMs: elapsedSafe,
+ remainingMs,
+ usageRatio,
+ warning,
+ };
+}
+
+function resolveTimeoutWarningConfig({ inputs = {}, env = process.env, variables = {} } = {}) {
+ const warningMinutes = toOptionalNumber(
+ inputs.timeout_warning_minutes ??
+ inputs.timeoutWarningMinutes ??
+ env.WORKFLOW_TIMEOUT_WARNING_MINUTES ??
+ variables.WORKFLOW_TIMEOUT_WARNING_MINUTES ??
+ env.TIMEOUT_WARNING_MINUTES ??
+ variables.TIMEOUT_WARNING_MINUTES
+ );
+ const warningRatioRaw = toOptionalNumber(
+ inputs.timeout_warning_ratio ??
+ inputs.timeoutWarningRatio ??
+ env.WORKFLOW_TIMEOUT_WARNING_RATIO ??
+ variables.WORKFLOW_TIMEOUT_WARNING_RATIO ??
+ env.TIMEOUT_WARNING_RATIO ??
+ variables.TIMEOUT_WARNING_RATIO
+ );
+ const warningRatio = normaliseWarningRatio(warningRatioRaw);
+ const config = {};
+ if (Number.isFinite(warningMinutes) && warningMinutes > 0) {
+ config.warningRemainingMs = warningMinutes * 60 * 1000;
+ }
+ if (Number.isFinite(warningRatio) && warningRatio > 0 && warningRatio <= 1) {
+ config.warningRatio = warningRatio;
+ }
+ return config;
+}
+
+function resolveTimeoutInputs({ inputs = {}, context } = {}) {
+ const payloadInputs = context?.payload?.inputs;
+ if (!payloadInputs || typeof payloadInputs !== 'object') {
+ return inputs;
+ }
+ return { ...payloadInputs, ...inputs };
+}
+
+function formatTimeoutMinutes(minutes) {
+ if (!Number.isFinite(minutes)) {
+ return '0';
+ }
+ return String(Math.max(0, Math.round(minutes)));
+}
+
+function formatTimeoutUsage({ elapsedMs, usageRatio, remainingMs }) {
+ if (!Number.isFinite(elapsedMs) || !Number.isFinite(usageRatio)) {
+ return '';
+ }
+ const elapsedMinutes = Math.floor(elapsedMs / 60000);
+ const usagePercent = Math.round(usageRatio * 100);
+ const remainingMinutes = Number.isFinite(remainingMs)
+ ? Math.ceil(Math.max(0, remainingMs) / 60000)
+ : null;
+ if (remainingMinutes === null) {
+ return `${elapsedMinutes}m elapsed (${usagePercent}%)`;
+ }
+ return `${elapsedMinutes}m elapsed (${usagePercent}%, ${remainingMinutes}m remaining)`;
+}
+
+function formatTimeoutWarning(warning) {
+ if (!warning || typeof warning !== 'object') {
+ return '';
+ }
+ const percent = Number.isFinite(warning.percent) ? warning.percent : null;
+ const remaining = Number.isFinite(warning.remaining_minutes) ? warning.remaining_minutes : null;
+ const reason = warning.reason === 'remaining' ? 'remaining threshold' : 'usage threshold';
+ const parts = [];
+ if (percent !== null) {
+ parts.push(`${percent}% consumed`);
+ }
+ if (remaining !== null) {
+ parts.push(`${remaining}m remaining`);
+ }
+ if (!parts.length) {
+ return '';
+ }
+ return `${parts.join(', ')} (${reason})`;
+}
+
+function buildMetricsRecord({
+ prNumber,
+ iteration,
+ action,
+ errorCategory,
+ durationMs,
+ tasksTotal,
+ tasksComplete,
+}) {
+ return {
+ pr_number: toNumber(prNumber, 0),
+ iteration: Math.max(1, toNumber(iteration, 0)),
+ timestamp: new Date().toISOString(),
+ action: normalise(action) || 'unknown',
+ error_category: normalise(errorCategory) || 'none',
+ duration_ms: Math.max(0, toNumber(durationMs, 0)),
+ tasks_total: Math.max(0, toNumber(tasksTotal, 0)),
+ tasks_complete: Math.max(0, toNumber(tasksComplete, 0)),
+ };
+}
+
+function emitMetricsRecord({ core, record }) {
+ if (core && typeof core.setOutput === 'function') {
+ core.setOutput('metrics_record_json', JSON.stringify(record));
+ }
+}
+
+function resolveMetricsPath(inputs) {
+ const explicitPath = normalise(
+ inputs.metrics_path ??
+ inputs.metricsPath ??
+ process.env.KEEPALIVE_METRICS_PATH ??
+ process.env.keepalive_metrics_path
+ );
+ if (explicitPath) {
+ return explicitPath;
+ }
+ const githubActions = normalise(process.env.GITHUB_ACTIONS).toLowerCase();
+ const workspace = normalise(process.env.GITHUB_WORKSPACE);
+ if (githubActions === 'true' && workspace) {
+ return path.join(workspace, 'keepalive-metrics.ndjson');
+ }
+ return '';
+}
+
+async function appendMetricsRecord({ core, record, metricsPath }) {
+ const targetPath = normalise(metricsPath);
+ if (!targetPath) {
+ return;
+ }
+ try {
+ const absolutePath = path.resolve(targetPath);
+ await fs.promises.mkdir(path.dirname(absolutePath), { recursive: true });
+ await fs.promises.appendFile(absolutePath, `${JSON.stringify(record)}\n`, 'utf8');
+ } catch (error) {
+ if (core && typeof core.warning === 'function') {
+ core.warning(`keepalive metrics write failed: ${error.message}`);
+ }
+ }
+}
+
+async function writeStepSummary({
+ core,
+ iteration,
+ maxIterations,
+ tasksTotal,
+ tasksUnchecked,
+ tasksCompletedDelta,
+ agentFilesChanged,
+ outcome,
+}) {
+ if (!core?.summary || typeof core.summary.addRaw !== 'function') {
+ return;
+ }
+ const total = Number.isFinite(tasksTotal) ? tasksTotal : 0;
+ const unchecked = Number.isFinite(tasksUnchecked) ? tasksUnchecked : 0;
+ const completed = Math.max(0, total - unchecked);
+ const iterationLabel = maxIterations > 0 ? `${iteration}/${maxIterations}` : `${iteration}/∞`;
+ const filesChanged = Number.isFinite(agentFilesChanged) ? agentFilesChanged : 0;
+ const delta = Number.isFinite(tasksCompletedDelta) ? tasksCompletedDelta : null;
+ const rows = [
+ `| Iteration | ${iterationLabel} |`,
+ `| Tasks completed | ${completed}/${total} |`,
+ ];
+ if (delta !== null) {
+ rows.push(`| Tasks completed this run | ${delta} |`);
+ }
+ rows.push(`| Files changed | ${filesChanged} |`);
+ rows.push(`| Outcome | ${outcome || 'unknown'} |`);
+ const summaryLines = [
+ '### Keepalive iteration summary',
+ '',
+ '| Field | Value |',
+ '| --- | --- |',
+ ...rows,
+ ];
+ await core.summary.addRaw(summaryLines.join('\n')).addEOL().write();
+}
+
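+/**
+ * Count Markdown task-list checkboxes in a body of text.
+ * Illustrative example:
+ *   countCheckboxes('- [ ] write tests\n- [x] add docs')
+ *   // => { total: 2, checked: 1, unchecked: 1 }
+ */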
+function countCheckboxes(markdown) {
+ const result = { total: 0, checked: 0, unchecked: 0 };
+ const regex = /(?:^|\n)\s*(?:[-*+]|\d+[.)])\s*\[( |x|X)\]/g;
+ const content = String(markdown || '');
+ let match;
+ while ((match = regex.exec(content)) !== null) {
+ result.total += 1;
+ if ((match[1] || '').toLowerCase() === 'x') {
+ result.checked += 1;
+ } else {
+ result.unchecked += 1;
+ }
+ }
+ return result;
+}
+
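+/**
+ * Convert bare bullet/numbered items into unchecked task checkboxes,
+ * preserving items that already carry a checkbox.
+ * Illustrative example:
+ *   normaliseChecklistSection('- add parser\n- [x] wire CLI')
+ *   // => '- [ ] add parser\n- [x] wire CLI'
+ */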
+function normaliseChecklistSection(content) {
+ const raw = String(content || '');
+ if (!raw.trim()) {
+ return raw;
+ }
+ const lines = raw.split('\n');
+ let mutated = false;
+
+ const updated = lines.map((line) => {
+ // Match bullet points (-, *, +) or numbered lists, for example: 1., 2., 3. or 1), 2), 3).
+ const match = line.match(/^(\s*)([-*+]|\d+[.)])\s+(.*)$/);
+ if (!match) {
+ return line;
+ }
+ const [, indent, bullet, remainderRaw] = match;
+ const remainder = remainderRaw.trim();
+ if (!remainder) {
+ return line;
+ }
+ // If already a checkbox, preserve it
+ if (/^\[[ xX]\]/.test(remainder)) {
+ return `${indent}${bullet} ${remainder}`;
+ }
+
+ mutated = true;
+ return `${indent}${bullet} [ ] ${remainder}`;
+ });
+ return mutated ? updated.join('\n') : raw;
+}
+
+function normaliseChecklistSections(sections = {}) {
+ return {
+ ...sections,
+ tasks: normaliseChecklistSection(sections.tasks),
+ acceptance: normaliseChecklistSection(sections.acceptance),
+ };
+}
+
+function classifyFailureDetails({ action, runResult, summaryReason, agentExitCode, agentSummary }) {
+ const runFailed = action === 'run' && runResult && runResult !== 'success';
+ const shouldClassify = runFailed || (action && action !== 'run' && summaryReason);
+ if (!shouldClassify) {
+ return { category: '', type: '', recovery: '', message: '' };
+ }
+
+ const message = [agentSummary, summaryReason, runResult].filter(Boolean).join(' ');
+ const errorInfo = classifyError({ message, code: agentExitCode });
+ let category = errorInfo.category;
+ const isGateCancelled = normalise(summaryReason).startsWith('gate-cancelled');
+
+ // If the agent runner reports failure with exit code 0, that strongly suggests
+ // an infrastructure/control-plane hiccup rather than a code/tool failure.
+ if (runFailed && summaryReason === 'agent-run-failed' && (!agentExitCode || agentExitCode === '0')) {
+ category = ERROR_CATEGORIES.transient;
+ }
+
+ // Detect dirty git state issues - agent saw unexpected changes before starting.
+ // These are typically workflow artifacts (.workflows-lib, codex-session-*.jsonl)
+ // that should have been cleaned up but weren't. Classify as transient.
+ const dirtyGitPatterns = [
+ /unexpected\s*changes/i,
+ /\.workflows-lib.*modified/i,
+ /codex-session.*untracked/i,
+ /existing\s*changes/i,
+ /how\s*would\s*you\s*like\s*me\s*to\s*proceed/i,
+ /before\s*making\s*edits/i,
+ ];
+ if (dirtyGitPatterns.some(pattern => pattern.test(message))) {
+ category = ERROR_CATEGORIES.transient;
+ }
+
+ if (runFailed && (runResult === 'cancelled' || runResult === 'skipped')) {
+ category = ERROR_CATEGORIES.transient;
+ }
+ if (!runFailed && isGateCancelled) {
+ category = ERROR_CATEGORIES.transient;
+ }
+
+ let type = '';
+ if (runFailed) {
+ if (category === ERROR_CATEGORIES.transient) {
+ type = 'infrastructure';
+ } else if (agentExitCode && agentExitCode !== '0') {
+ type = 'codex';
+ } else {
+ type = 'infrastructure';
+ }
+ } else {
+ type = 'infrastructure';
+ }
+
+ return {
+ category,
+ type,
+ recovery: errorInfo.recovery,
+ message: errorInfo.message,
+ };
+}
+
+/**
+ * Extract Source section from PR/issue body that contains links to parent issues/PRs.
+ * @param {string} body - PR or issue body text
+ * @returns {string|null} Source section content or null if not found
+ */
+function extractSourceSection(body) {
+ const text = String(body || '');
+ // Match a "## Source" or "### Source" heading and capture its body
+ const match = text.match(/#{2,3}\s*Source\s*\n([\s\S]*?)(?=\n##|\n---|\n\n\n|$)/i);
+ if (match && match[1]) {
+ const content = match[1].trim();
+ // Only return if it has meaningful content (links to issues/PRs)
+ if (/#\d+|github\.com/.test(content)) {
+ return content;
+ }
+ }
+ return null;
+}
+
+function extractChecklistItems(markdown) {
+ const items = [];
+ const content = String(markdown || '');
+ const regex = /(?:^|\n)\s*(?:[-*+]|\d+[.)])\s*\[( |x|X)\]\s*(.+)/g;
+ let match;
+ while ((match = regex.exec(content)) !== null) {
+ const checked = (match[1] || '').toLowerCase() === 'x';
+ const text = normaliseTaskText(match[2] || '');
+ if (text) {
+ items.push({ text, checked });
+ }
+ }
+ return items;
+}
+
+/**
+ * Build the task appendix that gets passed to the agent prompt.
+ * This provides explicit, structured tasks and acceptance criteria.
+ * @param {object} sections - Parsed scope/tasks/acceptance sections
+ * @param {object} checkboxCounts - { total, checked, unchecked }
+ * @param {object} [state] - Optional keepalive state for reconciliation info
+ * @param {object} [options] - Additional options
+ * @param {string} [options.prBody] - Full PR body to extract Source section from
+ */
+function buildTaskAppendix(sections, checkboxCounts, state = {}, options = {}) {
+ const lines = [];
+
+ lines.push('---');
+ lines.push('## PR Tasks and Acceptance Criteria');
+ lines.push('');
+ lines.push(`**Progress:** ${checkboxCounts.checked}/${checkboxCounts.total} tasks complete, ${checkboxCounts.unchecked} remaining`);
+ lines.push('');
+
+ // Add reconciliation reminder if the previous iteration made changes but didn't check off tasks
+ if (state.needs_task_reconciliation) {
+ lines.push('### ⚠️ IMPORTANT: Task Reconciliation Required');
+ lines.push('');
+ lines.push(`The previous iteration changed **${state.last_files_changed || 'some'} file(s)** but did not update task checkboxes.`);
+ lines.push('');
+ lines.push('**Before continuing, you MUST:**');
+ lines.push('1. Review the recent commits to understand what was changed');
+ lines.push('2. Determine which task checkboxes should be marked complete');
+ lines.push('3. Update the PR body to check off completed tasks');
+ lines.push('4. Then continue with remaining tasks');
+ lines.push('');
+ lines.push('_Failure to update checkboxes means progress is not being tracked properly._');
+ lines.push('');
+ }
+
+ if (sections?.scope) {
+ lines.push('### Scope');
+ lines.push(sections.scope);
+ lines.push('');
+ }
+
+ if (sections?.tasks) {
+ lines.push('### Tasks');
+ lines.push('Complete these in order. Mark checkbox done ONLY after implementation is verified:');
+ lines.push('');
+ lines.push(sections.tasks);
+ lines.push('');
+ }
+
+ if (sections?.acceptance) {
+ lines.push('### Acceptance Criteria');
+ lines.push('The PR is complete when ALL of these are satisfied:');
+ lines.push('');
+ lines.push(sections.acceptance);
+ lines.push('');
+ }
+
+ const attemptedTasks = normaliseAttemptedTasks(state?.attempted_tasks);
+ const candidateSource = sections?.tasks || sections?.acceptance || '';
+ const taskItems = extractChecklistItems(candidateSource);
+ const unchecked = taskItems.filter((item) => !item.checked);
+ const attemptedKeys = new Set(attemptedTasks.map((entry) => entry.key));
+ const suggested = unchecked.find((item) => !attemptedKeys.has(normaliseTaskKey(item.text))) || unchecked[0];
+
+ if (attemptedTasks.length > 0) {
+ lines.push('### Recently Attempted Tasks');
+ lines.push('Avoid repeating these unless a task needs explicit follow-up:');
+ lines.push('');
+ attemptedTasks.slice(-3).forEach((entry) => {
+ lines.push(`- ${entry.task}`);
+ });
+ lines.push('');
+ }
+
+ if (suggested?.text) {
+ lines.push('### Suggested Next Task');
+ lines.push(`- ${suggested.text}`);
+ lines.push('');
+ }
+
+ // Add Source section if PR body contains links to parent issues/PRs
+ if (options.prBody) {
+ const sourceSection = extractSourceSection(options.prBody);
+ if (sourceSection) {
+ lines.push('### Source Context');
+ lines.push('_For additional background, check these linked issues/PRs:_');
+ lines.push('');
+ lines.push(sourceSection);
+ lines.push('');
+ }
+ }
+
+ lines.push('---');
+
+ return lines.join('\n');
+}
+
+async function fetchPrBody({ github, context, prNumber, core }) {
+ if (!github?.rest?.pulls?.get || !context?.repo?.owner || !context?.repo?.repo) {
+ return '';
+ }
+ try {
+ const { data } = await github.rest.pulls.get({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ pull_number: prNumber,
+ });
+ return String(data?.body || '');
+ } catch (error) {
+ if (core) {
+ core.info(`Failed to fetch PR body for task focus: ${error.message}`);
+ }
+ return '';
+ }
+}
+
+function extractConfigSnippet(body) {
+ const source = String(body || '');
+ if (!source.trim()) {
+ return '';
+ }
+
+ // Config may be embedded in an HTML comment block in the PR body,
+ // e.g. <!-- keepalive-config ... --> (marker names assumed; adjust to
+ // the repo's actual convention if it differs).
+ const commentBlockPatterns = [
+ /<!--\s*keepalive-config\s*([\s\S]*?)-->/i,
+ /<!--\s*codex-config\s*([\s\S]*?)-->/i,
+ ];
+ for (const pattern of commentBlockPatterns) {
+ const match = source.match(pattern);
+ if (match && match[1]) {
+ return match[1].trim();
+ }
+ }
+
+ const headingBlock = source.match(
+ /(#+\s*(?:Keepalive|Codex)\s+config[^\n]*?)\n+```[a-zA-Z0-9_-]*\n([\s\S]*?)```/i
+ );
+ if (headingBlock && headingBlock[2]) {
+ return headingBlock[2].trim();
+ }
+
+ return '';
+}
+
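+/**
+ * Parse a config snippet: JSON first, then `key: value` / `key=value` lines.
+ * Illustrative example:
+ *   parseConfigFromSnippet('max_iterations: 8\nautofix: true')
+ *   // => { max_iterations: 8, autofix: true }
+ */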
+function parseConfigFromSnippet(snippet) {
+ const trimmed = normalise(snippet);
+ if (!trimmed) {
+ return {};
+ }
+
+ try {
+ const parsed = JSON.parse(trimmed);
+ if (parsed && typeof parsed === 'object') {
+ return parsed;
+ }
+ } catch (error) {
+ // fall back to key/value parsing
+ }
+
+ const result = {};
+ const lines = trimmed.split(/\r?\n/);
+ for (const line of lines) {
+ const candidate = line.trim();
+ if (!candidate || candidate.startsWith('#')) {
+ continue;
+ }
+ const match = candidate.match(/^([^:=\s]+)\s*[:=]\s*(.+)$/);
+ if (!match) {
+ continue;
+ }
+ const key = match[1].trim();
+ const rawValue = match[2].trim();
+ const cleanedValue = rawValue.replace(/\s+#.*$/, '').replace(/\s+\/\/.*$/, '').trim();
+ if (!key) {
+ continue;
+ }
+ const lowered = cleanedValue.toLowerCase();
+ if (['true', 'false', 'yes', 'no', 'on', 'off'].includes(lowered)) {
+ result[key] = ['true', 'yes', 'on'].includes(lowered);
+ } else if (!Number.isNaN(Number(cleanedValue))) {
+ result[key] = Number(cleanedValue);
+ } else {
+ result[key] = cleanedValue;
+ }
+ }
+
+ return result;
+}
+
+function normaliseConfig(config = {}) {
+ const cfg = config && typeof config === 'object' ? config : {};
+ const trace = normalise(cfg.trace || cfg.keepalive_trace);
+ const promptMode = normalise(cfg.prompt_mode ?? cfg.promptMode);
+ const promptFile = normalise(cfg.prompt_file ?? cfg.promptFile);
+ const promptScenario = normalise(cfg.prompt_scenario ?? cfg.promptScenario);
+ return {
+ keepalive_enabled: toBool(
+ cfg.keepalive_enabled ?? cfg.enable_keepalive ?? cfg.keepalive,
+ true
+ ),
+ autofix_enabled: toBool(cfg.autofix_enabled ?? cfg.autofix, false),
+ iteration: toNumber(cfg.iteration ?? cfg.keepalive_iteration, 0),
+ max_iterations: toNumber(cfg.max_iterations ?? cfg.keepalive_max_iterations, 5),
+ failure_threshold: toNumber(cfg.failure_threshold ?? cfg.keepalive_failure_threshold, 3),
+ trace,
+ prompt_mode: promptMode,
+ prompt_file: promptFile,
+ prompt_scenario: promptScenario,
+ };
+}
+
+function parseConfig(body) {
+ const snippet = extractConfigSnippet(body);
+ const parsed = parseConfigFromSnippet(snippet);
+ return normaliseConfig(parsed);
+}
+
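+/**
+ * Render a fixed-width ASCII progress bar.
+ * Illustrative example:
+ *   formatProgressBar(3, 10)
+ *   // => '[###-------] 3/10'
+ */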
+function formatProgressBar(current, total, width = 10) {
+ if (!Number.isFinite(total) || total <= 0) {
+ return 'n/a';
+ }
+ const safeWidth = Number.isFinite(width) && width > 0 ? Math.floor(width) : 10;
+ const bounded = Math.max(0, Math.min(current, total));
+ const filled = Math.round((bounded / total) * safeWidth);
+ const empty = Math.max(0, safeWidth - filled);
+ return `[${'#'.repeat(filled)}${'-'.repeat(empty)}] ${bounded}/${total}`;
+}
+
+async function resolvePrNumber({ github, context, core, payload: overridePayload }) {
+ const payload = overridePayload || context.payload || {};
+ const eventName = context.eventName;
+
+ // Support explicit PR number from override payload (for workflow_dispatch)
+ if (overridePayload?.workflow_run?.pull_requests?.[0]?.number) {
+ return overridePayload.workflow_run.pull_requests[0].number;
+ }
+
+ if (eventName === 'pull_request' && payload.pull_request) {
+ return payload.pull_request.number;
+ }
+
+ if (eventName === 'workflow_run' && payload.workflow_run) {
+ const pr = Array.isArray(payload.workflow_run.pull_requests)
+ ? payload.workflow_run.pull_requests[0]
+ : null;
+ if (pr && pr.number) {
+ return pr.number;
+ }
+ const headSha = payload.workflow_run.head_sha;
+ if (headSha && github?.rest?.repos?.listPullRequestsAssociatedWithCommit) {
+ try {
+ const { data } = await github.rest.repos.listPullRequestsAssociatedWithCommit({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ commit_sha: headSha,
+ });
+ if (Array.isArray(data) && data[0]?.number) {
+ return data[0].number;
+ }
+ } catch (error) {
+ if (core) core.info(`Unable to resolve PR from head sha: ${error.message}`);
+ }
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Classify gate failure type to determine appropriate fix mode
+ * @returns {Object} { failureType: 'test'|'mypy'|'lint'|'none'|'unknown', shouldFixMode: boolean, failedJobs: string[] }
+ */
+async function classifyGateFailure({ github, context, pr, core }) {
+ if (!pr) {
+ return { failureType: 'unknown', shouldFixMode: false, failedJobs: [] };
+ }
+
+ try {
+ // Get the latest Gate workflow run
+ const { data } = await github.rest.actions.listWorkflowRuns({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ workflow_id: 'pr-00-gate.yml',
+ branch: pr.head.ref,
+ event: 'pull_request',
+ per_page: 5,
+ });
+
+ const run = data?.workflow_runs?.find((r) => r.head_sha === pr.head.sha);
+ if (!run || run.conclusion === 'success') {
+ return { failureType: 'none', shouldFixMode: false, failedJobs: [] };
+ }
+
+ // Get jobs for this run to identify what failed
+ const { data: jobsData } = await github.rest.actions.listJobsForWorkflowRun({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ run_id: run.id,
+ });
+
+ const failedJobs = (jobsData?.jobs || [])
+ .filter((job) => job.conclusion === 'failure')
+ .map((job) => job.name.toLowerCase());
+
+ if (failedJobs.length === 0) {
+ return { failureType: 'unknown', shouldFixMode: false, failedJobs: [] };
+ }
+
+ // Classify failure type based on job names
+ const hasTestFailure = failedJobs.some((name) =>
+ name.includes('test') || name.includes('pytest') || name.includes('unittest')
+ );
+ const hasMypyFailure = failedJobs.some((name) =>
+ name.includes('mypy') || name.includes('type') || name.includes('typecheck')
+ );
+ const hasLintFailure = failedJobs.some((name) =>
+ name.includes('lint') || name.includes('ruff') || name.includes('black') || name.includes('format')
+ );
+
+ // Determine primary failure type (prioritize test > mypy > lint)
+ let failureType = 'unknown';
+ if (hasTestFailure) {
+ failureType = 'test';
+ } else if (hasMypyFailure) {
+ failureType = 'mypy';
+ } else if (hasLintFailure) {
+ failureType = 'lint';
+ }
+
+ // Route test/mypy failures (and unclassified ones) to fix mode;
+ // lint failures are left for autofix to handle.
+ const shouldFixMode = failureType === 'test' || failureType === 'mypy' || failureType === 'unknown';
+
+ if (core) {
+ core.info(`[keepalive] Gate failure classification: type=${failureType}, shouldFixMode=${shouldFixMode}, failedJobs=[${failedJobs.join(', ')}]`);
+ }
+
+ return { failureType, shouldFixMode, failedJobs };
+ } catch (error) {
+ if (core) core.info(`Failed to classify gate failure: ${error.message}`);
+ return { failureType: 'unknown', shouldFixMode: true, failedJobs: [] };
+ }
+}
+
+
+async function resolveGateConclusion({ github, context, pr, eventName, payload, core }) {
+ const run = await resolveGateRun({ github, context, pr, eventName, payload, core });
+ return run.conclusion;
+}
+
+async function resolveGateRun({ github, context, pr, eventName, payload, core }) {
+ if (eventName === 'workflow_run') {
+ return {
+ conclusion: normalise(payload?.workflow_run?.conclusion),
+ runId: payload?.workflow_run?.id ? Number(payload.workflow_run.id) : 0,
+ };
+ }
+
+ if (!pr) {
+ return { conclusion: '', runId: 0 };
+ }
+
+ try {
+ const { data } = await github.rest.actions.listWorkflowRuns({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ workflow_id: 'pr-00-gate.yml',
+ branch: pr.head.ref,
+ event: 'pull_request',
+ per_page: 20,
+ });
+ if (Array.isArray(data?.workflow_runs)) {
+ const match = data.workflow_runs.find((run) => run.head_sha === pr.head.sha);
+ if (match) {
+ return {
+ conclusion: normalise(match.conclusion),
+ runId: Number(match.id) || 0,
+ };
+ }
+ const latest = data.workflow_runs[0];
+ if (latest) {
+ return {
+ conclusion: normalise(latest.conclusion),
+ runId: Number(latest.id) || 0,
+ };
+ }
+ }
+ } catch (error) {
+ if (core) core.info(`Failed to resolve Gate conclusion: ${error.message}`);
+ }
+
+ return { conclusion: '', runId: 0 };
+}
+
+function extractCheckRunId(job) {
+ const directId = Number(job?.check_run_id);
+ if (Number.isFinite(directId) && directId > 0) {
+ return directId;
+ }
+ const url = normalise(job?.check_run_url ?? job?.check_run?.url);
+ const match = url.match(/\/check-runs\/(\d+)/i);
+ if (match) {
+ return Number(match[1]) || 0;
+ }
+ return 0;
+}
+
+const RATE_LIMIT_PATTERNS = [
+ /rate limit/i,
+ /rate[-\s]limit/i,
+ /rate[-\s]limited/i,
+ /secondary rate limit/i,
+ /abuse detection/i,
+ /too many requests/i,
+ /api rate/i,
+ /exceeded.*rate limit/i,
+];
+
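+/**
+ * Illustrative example:
+ *   hasRateLimitSignal('API rate limit exceeded for installation')  // => true
+ */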
+function hasRateLimitSignal(text) {
+ const candidate = normalise(text);
+ if (!candidate) {
+ return false;
+ }
+ return RATE_LIMIT_PATTERNS.some((pattern) => pattern.test(candidate));
+}
+
+function annotationsContainRateLimit(annotations = []) {
+ for (const annotation of annotations) {
+ const combined = [
+ annotation?.message,
+ annotation?.title,
+ annotation?.raw_details,
+ ]
+ .filter(Boolean)
+ .join(' ');
+ if (hasRateLimitSignal(combined)) {
+ return true;
+ }
+ }
+ return false;
+}
+
+function extractRateLimitLogText(data) {
+ if (!data) {
+ return '';
+ }
+ const buffer = Buffer.isBuffer(data) ? data : Buffer.from(data);
+ if (buffer.length >= 2 && buffer[0] === 0x1f && buffer[1] === 0x8b) {
+ try {
+ const zlib = require('zlib');
+ return zlib.gunzipSync(buffer).toString('utf8');
+ } catch (error) {
+ return buffer.toString('utf8');
+ }
+ }
+ return buffer.toString('utf8');
+}
+
+function logContainsRateLimit(data) {
+ const text = extractRateLimitLogText(data);
+ if (!text) {
+ return false;
+ }
+ const sample = text.length > 500000 ? `${text.slice(0, 250000)}\n${text.slice(-250000)}` : text;
+ return hasRateLimitSignal(sample);
+}
+
+async function detectRateLimitCancellation({ github, context, runId, core }) {
+ const targetRunId = Number(runId) || 0;
+ if (!targetRunId || !github?.rest?.actions?.listJobsForWorkflowRun) {
+ return false;
+ }
+ const canCheckAnnotations = Boolean(github?.rest?.checks?.listAnnotations);
+ const canCheckLogs = Boolean(github?.rest?.actions?.downloadJobLogsForWorkflowRun);
+ if (!canCheckAnnotations && !canCheckLogs) {
+ if (core) core.info('Rate limit detection skipped; no annotations or logs API available.');
+ return false;
+ }
+
+ try {
+ const { data } = await github.rest.actions.listJobsForWorkflowRun({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ run_id: targetRunId,
+ per_page: 100,
+ });
+ const jobs = Array.isArray(data?.jobs) ? data.jobs : [];
+ for (const job of jobs) {
+ if (canCheckAnnotations) {
+ const checkRunId = extractCheckRunId(job);
+ if (checkRunId) {
+ const params = {
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ check_run_id: checkRunId,
+ per_page: 100,
+ };
+ const annotations = github.paginate
+ ? await github.paginate(github.rest.checks.listAnnotations, params)
+ : (await github.rest.checks.listAnnotations(params))?.data;
+ if (annotationsContainRateLimit(annotations)) {
+ return true;
+ }
+ }
+ }
+
+ if (canCheckLogs) {
+ const jobId = Number(job?.id) || 0;
+ if (jobId) {
+ try {
+ const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ job_id: jobId,
+ });
+ if (logContainsRateLimit(logs?.data)) {
+ return true;
+ }
+ } catch (error) {
+ if (core) core.info(`Failed to inspect Gate job logs for rate limits: ${error.message}`);
+ }
+ }
+ }
+ }
+ } catch (error) {
+ if (core) core.info(`Failed to inspect Gate cancellation signals for rate limits: ${error.message}`);
+ }
+
+ return false;
+}
+
+async function evaluateKeepaliveLoop({ github, context, core, payload: overridePayload, overridePrNumber, forceRetry }) {
+ const payload = overridePayload || context.payload || {};
+ let prNumber = overridePrNumber || await resolvePrNumber({ github, context, core, payload });
+ if (!prNumber) {
+ return {
+ prNumber: 0,
+ action: 'skip',
+ reason: 'pr-not-found',
+ };
+ }
+
+ const { data: pr } = await github.rest.pulls.get({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ pull_number: prNumber,
+ });
+
+ const gateRun = await resolveGateRun({
+ github,
+ context,
+ pr,
+ eventName: context.eventName,
+ payload,
+ core,
+ });
+ const gateConclusion = gateRun.conclusion;
+ const gateNormalized = normalise(gateConclusion).toLowerCase();
+ let gateRateLimit = false;
+
+ const config = parseConfig(pr.body || '');
+ const labels = Array.isArray(pr.labels) ? pr.labels.map((label) => normalise(label.name).toLowerCase()) : [];
+
+ // Extract agent type from agent:* labels (supports agent:codex, agent:claude, etc.)
+ const agentLabel = labels.find((label) => label.startsWith('agent:'));
+ const agentType = agentLabel ? agentLabel.replace('agent:', '') : '';
+ const hasAgentLabel = Boolean(agentType);
+ const keepaliveEnabled = config.keepalive_enabled && hasAgentLabel;
+
+ const sections = parseScopeTasksAcceptanceSections(pr.body || '');
+ const normalisedSections = normaliseChecklistSections(sections);
+ const combinedChecklist = [normalisedSections?.tasks, normalisedSections?.acceptance]
+ .filter(Boolean)
+ .join('\n');
+ const checkboxCounts = countCheckboxes(combinedChecklist);
+ const tasksPresent = checkboxCounts.total > 0;
+ const tasksRemaining = checkboxCounts.unchecked > 0;
+ const allComplete = tasksPresent && !tasksRemaining;
+
+ const stateResult = await loadKeepaliveState({
+ github,
+ context,
+ prNumber,
+ trace: config.trace,
+ });
+ const state = stateResult.state || {};
+ // Prefer state iteration unless config explicitly sets it (0 from config is default, not explicit)
+ const configHasExplicitIteration = config.iteration > 0;
+ const iteration = configHasExplicitIteration ? config.iteration : toNumber(state.iteration, 0);
+ const maxIterations = toNumber(config.max_iterations ?? state.max_iterations, 5);
+ const failureThreshold = toNumber(config.failure_threshold ?? state.failure_threshold, 3);
+
+ // Evidence-based productivity tracking
+ // Uses multiple signals to determine if work is being done:
+ // 1. File changes (primary signal)
+ // 2. Task completion progress
+ // 3. Historical productivity trend
+ const lastFilesChanged = toNumber(state.last_files_changed, 0);
+ const prevFilesChanged = toNumber(state.prev_files_changed, 0);
+ const hasRecentFailures = Boolean(state.failure?.count > 0);
+
+ // Track task completion trend
+ const previousTasks = state.tasks || {};
+ const prevUnchecked = toNumber(previousTasks.unchecked, checkboxCounts.unchecked);
+ const tasksCompletedSinceLastRound = prevUnchecked - checkboxCounts.unchecked;
+
+ // Calculate productivity score (0-100)
+ // This is evidence-based: higher score = more confidence work is happening
+ let productivityScore = 0;
+ if (lastFilesChanged > 0) productivityScore += Math.min(40, lastFilesChanged * 10);
+ if (tasksCompletedSinceLastRound > 0) productivityScore += Math.min(40, tasksCompletedSinceLastRound * 20);
+ if (prevFilesChanged > 0 && iteration > 1) productivityScore += 10; // Recent historical activity
+ if (!hasRecentFailures) productivityScore += 10; // No failures is a positive signal
+
+ // An iteration is productive if it has a reasonable productivity score
+ const isProductive = productivityScore >= 20 && !hasRecentFailures;
+
+ // Early detection: Check for diminishing returns pattern
+ // If we had activity before but now have none, might be naturally completing
+ const diminishingReturns =
+ iteration >= 2 &&
+ prevFilesChanged > 0 &&
+ lastFilesChanged === 0 &&
+ tasksCompletedSinceLastRound === 0;
+
+ // max_iterations is a "stuck detection" threshold, not a hard cap
+ // Continue past max if productive work is happening
+ // But stop earlier if we detect diminishing returns pattern
+ const shouldStopForMaxIterations = iteration >= maxIterations && !isProductive;
+ const shouldStopEarly = diminishingReturns && iteration >= Math.ceil(maxIterations * 0.6);
+
+ // Build task appendix for the agent prompt (after state load for reconciliation info)
+ const taskAppendix = buildTaskAppendix(normalisedSections, checkboxCounts, state, { prBody: pr.body });
+
+ // Check for merge conflicts - this takes priority over other work
+ let conflictResult = { hasConflict: false };
+ try {
+ conflictResult = await detectConflicts(github, context, prNumber, pr.head.sha);
+ if (conflictResult.hasConflict && core) {
+ core.info(`Merge conflict detected via ${conflictResult.primarySource}. Files: ${conflictResult.files?.join(', ') || 'unknown'}`);
+ }
+ } catch (conflictError) {
+ if (core) core.warning(`Conflict detection failed: ${conflictError.message}`);
+ }
+
+ let action = 'wait';
+ let reason = 'pending';
+ const verificationStatus = normalise(state?.verification?.status)?.toLowerCase();
+ const verificationDone = ['done', 'verified', 'complete'].includes(verificationStatus);
+ const verificationAttempted = Boolean(state?.verification?.iteration);
+ // Only try verification once - if it fails, that's OK, tasks are still complete
+ const needsVerification = allComplete && !verificationDone && !verificationAttempted;
+
+ // Only treat GitHub API conflicts as definitive (mergeable_state === 'dirty')
+ // CI-log based conflict detection has too many false positives from commit messages
+ // and should not block fix_ci mode when Gate fails with actual code errors
+ const hasDefinitiveConflict = conflictResult.hasConflict &&
+ conflictResult.primarySource === 'github-api';
+
+ // Conflict resolution takes highest priority ONLY for definitive conflicts
+ if (hasDefinitiveConflict && hasAgentLabel && keepaliveEnabled) {
+ action = 'conflict';
+ reason = `merge-conflict-${conflictResult.primarySource || 'detected'}`;
+ } else if (!hasAgentLabel) {
+ action = 'wait';
+ reason = 'missing-agent-label';
+ } else if (!keepaliveEnabled) {
+ action = 'skip';
+ reason = 'keepalive-disabled';
+ } else if (!tasksPresent) {
+ action = 'stop';
+ reason = 'no-checklists';
+ } else if (gateNormalized !== 'success') {
+ if (gateNormalized === 'cancelled') {
+ gateRateLimit = await detectRateLimitCancellation({
+ github,
+ context,
+ runId: gateRun.runId,
+ core,
+ });
+ // Rate limits are infrastructure noise, not code quality issues
+ // Proceed with work if Gate only failed due to rate limits
+ if (gateRateLimit && tasksRemaining) {
+ action = 'run';
+ reason = 'bypass-rate-limit-gate';
+ if (core) core.info('Gate cancelled due to rate limits only - proceeding with work');
+ } else if (forceRetry && tasksRemaining) {
+ action = 'run';
+ reason = 'force-retry-cancelled';
+ if (core) core.info(`Force retry enabled: bypassing cancelled gate (rate_limit=${gateRateLimit})`);
+ } else {
+ action = gateRateLimit ? 'defer' : 'wait';
+ reason = gateRateLimit ? 'gate-cancelled-rate-limit' : 'gate-cancelled';
+ }
+ } else {
+ // Gate failed - check if failure is rate-limit related vs code quality
+ const gateFailure = await classifyGateFailure({ github, context, pr, core });
+ if (gateFailure.shouldFixMode && gateNormalized === 'failure') {
+ action = 'fix';
+ reason = `fix-${gateFailure.failureType}`;
+ } else if (forceRetry && tasksRemaining) {
+ // forceRetry can also bypass non-success gates (user explicitly wants to retry)
+ action = 'run';
+ reason = 'force-retry-gate';
+ if (core) core.info(`Force retry enabled: bypassing gate conclusion '${gateNormalized}'`);
+ } else {
+ action = 'wait';
+ reason = gateNormalized ? 'gate-not-success' : 'gate-pending';
+ }
+ }
+ } else if (allComplete) {
+ if (needsVerification) {
+ action = 'run';
+ reason = 'verify-acceptance';
+ } else {
+ action = 'stop';
+ reason = 'tasks-complete';
+ }
+ } else if (shouldStopEarly) {
+ // Evidence-based early stopping: diminishing returns detected
+ action = 'stop';
+ reason = 'diminishing-returns';
+ } else if (shouldStopForMaxIterations) {
+ action = 'stop';
+ reason = isProductive ? 'max-iterations' : 'max-iterations-unproductive';
+ } else if (tasksRemaining) {
+ action = 'run';
+ reason = iteration >= maxIterations ? 'ready-extended' : 'ready';
+ }
+
+ const promptScenario = normalise(config.prompt_scenario);
+ const promptModeOverride = normalise(config.prompt_mode);
+ const promptFileOverride = normalise(config.prompt_file);
+ const promptRoute = resolvePromptRouting({
+ scenario: promptScenario,
+ mode: promptModeOverride,
+ action,
+ reason,
+ });
+ const promptMode = promptModeOverride || promptRoute.mode;
+ const promptFile = promptFileOverride || promptRoute.file;
+
+ return {
+ prNumber,
+ prRef: pr.head.ref || '',
+ headSha: pr.head.sha || '',
+ action,
+ reason,
+ promptMode,
+ promptFile,
+ gateConclusion,
+ config,
+ iteration,
+ maxIterations,
+ failureThreshold,
+ checkboxCounts,
+ hasAgentLabel,
+ agentType,
+ taskAppendix,
+ keepaliveEnabled,
+ stateCommentId: stateResult.commentId || 0,
+ state,
+ forceRetry: Boolean(forceRetry),
+ hasConflict: conflictResult.hasConflict,
+ conflictSource: conflictResult.primarySource || null,
+ conflictFiles: conflictResult.files || [],
+ };
+}
+
+async function updateKeepaliveLoopSummary({ github, context, core, inputs }) {
+ const prNumber = Number(inputs.prNumber || inputs.pr_number || 0);
+ if (!Number.isFinite(prNumber) || prNumber <= 0) {
+ if (core) core.info('No PR number available for summary update.');
+ return;
+ }
+
+ const gateConclusion = normalise(inputs.gateConclusion || inputs.gate_conclusion);
+ const action = normalise(inputs.action);
+ const reason = normalise(inputs.reason);
+ const tasksTotal = toNumber(inputs.tasksTotal ?? inputs.tasks_total, 0);
+ const tasksUnchecked = toNumber(inputs.tasksUnchecked ?? inputs.tasks_unchecked, 0);
+ const keepaliveEnabled = toBool(inputs.keepaliveEnabled ?? inputs.keepalive_enabled, false);
+ const autofixEnabled = toBool(inputs.autofixEnabled ?? inputs.autofix_enabled, false);
+ const agentType = normalise(inputs.agent_type ?? inputs.agentType) || 'codex';
+ const iteration = toNumber(inputs.iteration, 0);
+ const maxIterations = toNumber(inputs.maxIterations ?? inputs.max_iterations, 0);
+ const failureThreshold = Math.max(1, toNumber(inputs.failureThreshold ?? inputs.failure_threshold, 3));
+ const runResult = normalise(inputs.runResult || inputs.run_result);
+ const stateTrace = normalise(inputs.trace || inputs.keepalive_trace || '');
+
+ // Agent output details (agent-agnostic, with fallback to old codex_ names)
+ const agentExitCode = normalise(inputs.agent_exit_code ?? inputs.agentExitCode ?? inputs.codex_exit_code ?? inputs.codexExitCode);
+ const agentChangesMade = normalise(inputs.agent_changes_made ?? inputs.agentChangesMade ?? inputs.codex_changes_made ?? inputs.codexChangesMade);
+ const agentCommitSha = normalise(inputs.agent_commit_sha ?? inputs.agentCommitSha ?? inputs.codex_commit_sha ?? inputs.codexCommitSha);
+ const agentFilesChanged = toNumber(inputs.agent_files_changed ?? inputs.agentFilesChanged ?? inputs.codex_files_changed ?? inputs.codexFilesChanged, 0);
+ const agentSummary = normalise(inputs.agent_summary ?? inputs.agentSummary ?? inputs.codex_summary ?? inputs.codexSummary);
+ const runUrl = normalise(inputs.run_url ?? inputs.runUrl);
+ const promptModeInput = normalise(inputs.prompt_mode ?? inputs.promptMode);
+ const promptFileInput = normalise(inputs.prompt_file ?? inputs.promptFile);
+ const promptScenarioInput = normalise(inputs.prompt_scenario ?? inputs.promptScenario);
+ const promptRoute = resolvePromptRouting({
+ scenario: promptScenarioInput,
+ mode: promptModeInput,
+ action,
+ reason,
+ });
+ const promptMode = promptModeInput || promptRoute.mode;
+ const promptFile = promptFileInput || promptRoute.file;
+
+ // LLM task analysis details
+ const llmProvider = normalise(inputs.llm_provider ?? inputs.llmProvider);
+ const llmConfidence = toNumber(inputs.llm_confidence ?? inputs.llmConfidence, 0);
+ const llmAnalysisRun = toBool(inputs.llm_analysis_run ?? inputs.llmAnalysisRun, false);
+
+ // Quality metrics for BS detection and evidence-based decisions
+ const llmRawConfidence = toNumber(inputs.llm_raw_confidence ?? inputs.llmRawConfidence, llmConfidence);
+ const llmConfidenceAdjusted = toBool(inputs.llm_confidence_adjusted ?? inputs.llmConfidenceAdjusted, false);
+ const llmQualityWarnings = normalise(inputs.llm_quality_warnings ?? inputs.llmQualityWarnings);
+ const sessionDataQuality = normalise(inputs.session_data_quality ?? inputs.sessionDataQuality);
+ const sessionEffortScore = toNumber(inputs.session_effort_score ?? inputs.sessionEffortScore, 0);
+ const analysisTextLength = toNumber(inputs.analysis_text_length ?? inputs.analysisTextLength, 0);
+
+ const labels = await fetchPrLabels({ github, context, prNumber, core });
+ const timeoutRepoVariables = await fetchRepoVariables({
+ github,
+ context,
+ core,
+ names: TIMEOUT_VARIABLE_NAMES,
+ });
+ const timeoutInputs = resolveTimeoutInputs({ inputs, context });
+ const timeoutConfig = parseTimeoutConfig({
+ env: process.env,
+ inputs: timeoutInputs,
+ labels,
+ variables: timeoutRepoVariables,
+ });
+ const elapsedMs = await resolveElapsedMs({ github, context, inputs, core });
+ const timeoutWarningConfig = resolveTimeoutWarningConfig({
+ inputs: timeoutInputs,
+ env: process.env,
+ variables: timeoutRepoVariables,
+ });
+ const timeoutStatus = buildTimeoutStatus({
+ timeoutConfig,
+ elapsedMs,
+ ...timeoutWarningConfig,
+ });
+
+ const { state: previousState, commentId } = await loadKeepaliveState({
+ github,
+ context,
+ prNumber,
+ trace: stateTrace,
+ });
+ const previousFailure = previousState?.failure || {};
+ const prBody = await fetchPrBody({ github, context, prNumber, core });
+ const focusSections = prBody ? normaliseChecklistSections(parseScopeTasksAcceptanceSections(prBody)) : {};
+ const focusItems = extractChecklistItems(focusSections.tasks || focusSections.acceptance || '');
+ const focusUnchecked = focusItems.filter((item) => !item.checked);
+ const currentFocus = normaliseTaskText(previousState?.current_focus || '');
+ const fallbackFocus = focusUnchecked[0]?.text || '';
+
+ // Use the iteration from the CURRENT persisted state, not the stale value from evaluate.
+ // This prevents race conditions where another run updated state between evaluate and summary.
+ const currentIteration = toNumber(previousState?.iteration ?? iteration, 0);
+ let nextIteration = currentIteration;
+ let failure = { ...previousFailure };
+ // Stop conditions:
+ // - tasks-complete: SUCCESS, don't need needs-human label
+ // - no-checklists: neutral, agent has nothing to do
+ // - max-iterations: possible issue, MAY need attention
+ // - agent-run-failed-repeat: definite issue, needs attention
+ const isSuccessStop = reason === 'tasks-complete';
+ const isNeutralStop = reason === 'no-checklists' || reason === 'keepalive-disabled';
+ let stop = action === 'stop' && !isSuccessStop && !isNeutralStop;
+ let summaryReason = reason || action || 'unknown';
+ const baseReason = summaryReason;
+ const transientDetails = classifyFailureDetails({
+ action,
+ runResult,
+ summaryReason,
+ agentExitCode,
+ agentSummary,
+ });
+ const runFailed =
+ action === 'run' &&
+ runResult &&
+ !['success', 'skipped', 'cancelled'].includes(runResult);
+ const isTransientFailure =
+ runFailed && transientDetails.category === ERROR_CATEGORIES.transient;
+ const waitLikeAction = action === 'wait' || action === 'defer';
+ const waitIsTransientReason = [
+ 'gate-pending',
+ 'missing-agent-label',
+ 'gate-cancelled',
+ 'gate-cancelled-rate-limit',
+ ].includes(baseReason);
+ const isTransientWait =
+ waitLikeAction &&
+ (transientDetails.category === ERROR_CATEGORIES.transient || waitIsTransientReason);
+
+ // Task reconciliation: detect when agent made changes but didn't update checkboxes
+ const previousTasks = previousState?.tasks || {};
+ const previousUnchecked = toNumber(previousTasks.unchecked, tasksUnchecked);
+ const tasksCompletedThisRound = previousUnchecked - tasksUnchecked;
+ const madeChangesButNoTasksChecked =
+ action === 'run' &&
+ runResult === 'success' &&
+ agentChangesMade === 'true' &&
+ agentFilesChanged > 0 &&
+ tasksCompletedThisRound <= 0;
+
+ if (action === 'run') {
+ if (runResult === 'success') {
+ nextIteration = currentIteration + 1;
+ failure = {};
+ } else if (runResult) {
+ // If the job was skipped/cancelled, it usually means the workflow condition
+ // prevented execution (e.g. gate not ready, label missing, concurrency).
+ // Don't treat this as an agent failure.
+ if (runResult === 'skipped') {
+ failure = {};
+ summaryReason = 'agent-run-skipped';
+ } else if (runResult === 'cancelled') {
+ failure = {};
+ summaryReason = 'agent-run-cancelled';
+ } else if (isTransientFailure) {
+ failure = {};
+ summaryReason = 'agent-run-transient';
+ } else {
+ const same = failure.reason === 'agent-run-failed';
+ const count = same ? toNumber(failure.count, 0) + 1 : 1;
+ failure = { reason: 'agent-run-failed', count };
+ if (count >= failureThreshold) {
+ stop = true;
+ summaryReason = 'agent-run-failed-repeat';
+ } else {
+ summaryReason = 'agent-run-failed';
+ }
+ }
+ }
+ } else if (action === 'stop') {
+ // Differentiate between terminal states:
+ // - tasks-complete: Success! Clear failure state
+ // - no-checklists / keepalive-disabled: Neutral, nothing to do
+ // - max-iterations: Could be a problem, count as failure
+ if (isSuccessStop) {
+ // Tasks complete is success, clear any failure state
+ failure = {};
+ } else if (isNeutralStop) {
+ // Neutral states don't need failure tracking
+ failure = {};
+ } else {
+ // max-iterations type stops should count as potential issues
+ const sameReason = failure.reason && failure.reason === summaryReason;
+ const count = sameReason ? toNumber(failure.count, 0) + 1 : 1;
+ failure = { reason: summaryReason, count };
+ if (count >= failureThreshold) {
+ summaryReason = `${summaryReason}-repeat`;
+ }
+ }
+ } else if (waitLikeAction) {
+ // Wait states are NOT failures - they're transient conditions
+ // Don't increment failure counter for: gate-pending, gate-not-success, missing-agent-label
+ // These are expected states that will resolve on their own
+ // Check if this is a transient error (from error classification)
+ if (isTransientWait) {
+ failure = {};
+ summaryReason = `${summaryReason}-transient`;
+ } else if (failure.reason && !failure.reason.startsWith('gate-') && failure.reason !== 'missing-agent-label') {
+ // Keep the failure from a previous real failure (like agent-run-failed)
+ // but don't increment for wait states
+ } else {
+ // Clear failure state for transient wait conditions
+ failure = {};
+ }
+ }
+
+ const failureDetails = classifyFailureDetails({
+ action,
+ runResult,
+ summaryReason,
+ agentExitCode,
+ agentSummary,
+ });
+ const errorCategory = failureDetails.category;
+ const errorType = failureDetails.type;
+ const errorRecovery = failureDetails.recovery;
+ const tasksComplete = Math.max(0, tasksTotal - tasksUnchecked);
+ const allTasksComplete = tasksUnchecked === 0 && tasksTotal > 0;
+ const metricsIteration = action === 'run' ? currentIteration + 1 : currentIteration;
+ const durationMs = resolveDurationMs({
+ durationMs: toOptionalNumber(inputs.duration_ms ?? inputs.durationMs),
+ startTs: toOptionalNumber(inputs.start_ts ?? inputs.startTs),
+ });
+ const metricsRecord = buildMetricsRecord({
+ prNumber,
+ iteration: metricsIteration,
+ action,
+ errorCategory,
+ durationMs,
+ tasksTotal,
+ tasksComplete,
+ });
+ emitMetricsRecord({ core, record: metricsRecord });
+ await appendMetricsRecord({
+ core,
+ record: metricsRecord,
+ metricsPath: resolveMetricsPath(inputs),
+ });
+
+ // Capitalize agent name for display
+ const agentDisplayName = agentType.charAt(0).toUpperCase() + agentType.slice(1);
+
+ // Determine if we're in extended mode (past max_iterations but still productive)
+ const inExtendedMode = nextIteration > maxIterations && maxIterations > 0;
+ const extendedCount = inExtendedMode ? nextIteration - maxIterations : 0;
+ const iterationDisplay = inExtendedMode
+ ? `**${maxIterations}+${extendedCount}** 🚀 extended`
+ : `${nextIteration}/${maxIterations || '∞'}`;
+
+ const dispositionLabel = (() => {
+ if (action === 'defer') {
+ return 'deferred (transient)';
+ }
+ if (action === 'wait') {
+ return isTransientWait ? 'skipped (transient)' : 'skipped (failure)';
+ }
+ if (action === 'skip') {
+ return 'skipped';
+ }
+ return '';
+ })();
+ const actionReason = waitLikeAction
+ ? (baseReason || summaryReason)
+ : (summaryReason || baseReason);
+
+ const summaryLines = [
+ '',
+ `## 🤖 Keepalive Loop Status`,
+ '',
+ `**PR #${prNumber}** | Agent: **${agentDisplayName}** | Iteration ${iterationDisplay}`,
+ '',
+ '### Current State',
+ `| Metric | Value |`,
+ `|--------|-------|`,
+ `| Iteration progress | ${
+ maxIterations > 0
+ ? inExtendedMode
+ ? `${formatProgressBar(maxIterations, maxIterations)} ${maxIterations} base + ${extendedCount} extended = **${nextIteration}** total`
+ : formatProgressBar(nextIteration, maxIterations)
+ : 'n/a (unbounded)'
+ } |`,
+ `| Action | ${action || 'unknown'} (${actionReason || 'n/a'}) |`,
+ ...(dispositionLabel ? [`| Disposition | ${dispositionLabel} |`] : []),
+ ...(allTasksComplete ? [`| Agent status | ✅ ALL TASKS COMPLETE |`] : runFailed ? [`| Agent status | ❌ AGENT FAILED |`] : []),
+ `| Gate | ${gateConclusion || 'unknown'} |`,
+ `| Tasks | ${tasksComplete}/${tasksTotal} complete |`,
+ `| Timeout | ${formatTimeoutMinutes(timeoutStatus.resolvedMinutes)} min (${timeoutStatus.source || 'default'}) |`,
+ `| Keepalive | ${keepaliveEnabled ? '✅ enabled' : '❌ disabled'} |`,
+ `| Autofix | ${autofixEnabled ? '✅ enabled' : '❌ disabled'} |`,
+ ];
+
+ const timeoutUsage = formatTimeoutUsage({
+ elapsedMs: timeoutStatus.elapsedMs,
+ usageRatio: timeoutStatus.usageRatio,
+ remainingMs: timeoutStatus.remainingMs,
+ });
+ if (timeoutUsage) {
+ summaryLines.splice(summaryLines.length - 2, 0, `| Timeout usage | ${timeoutUsage} |`);
+ }
+ if (timeoutStatus.warning) {
+ const timeoutWarning = formatTimeoutWarning(timeoutStatus.warning);
+ const warningValue = timeoutWarning ? `⚠️ ${timeoutWarning}` : `⚠️ ${timeoutStatus.warning.remaining_minutes}m remaining`;
+ summaryLines.splice(
+ summaryLines.length - 2,
+ 0,
+ `| Timeout warning | ${warningValue} |`,
+ );
+ }
+
+ if (timeoutStatus.warning && core && typeof core.warning === 'function') {
+ const percent = timeoutStatus.warning.percent ?? 0;
+ const remaining = timeoutStatus.warning.remaining_minutes ?? 0;
+ const warningReason = timeoutStatus.warning.reason === 'remaining' ? 'remaining threshold' : 'usage threshold';
+ const thresholdParts = [];
+ const thresholdPercent = timeoutStatus.warning.threshold_percent;
+ const thresholdRemaining = timeoutStatus.warning.threshold_remaining_minutes;
+ if (Number.isFinite(thresholdPercent)) {
+ thresholdParts.push(`${thresholdPercent}% threshold`);
+ }
+ if (Number.isFinite(thresholdRemaining)) {
+ thresholdParts.push(`${thresholdRemaining}m threshold`);
+ }
+ const thresholdSuffix = thresholdParts.length ? ` (thresholds: ${thresholdParts.join(', ')})` : '';
+ core.warning(`Timeout warning (${warningReason}): ${percent}% consumed, ${remaining}m remaining${thresholdSuffix}.`);
+ }
+
+ // Add agent run details if we ran an agent
+ if (action === 'run' && runResult) {
+ const runLinkText = runUrl ? ` ([view logs](${runUrl}))` : '';
+ summaryLines.push('', `### Last ${agentDisplayName} Run${runLinkText}`);
+
+ if (runResult === 'success') {
+ const changesIcon = agentChangesMade === 'true' ? '✅' : '⚪';
+ summaryLines.push(
+ `| Result | Value |`,
+ `|--------|-------|`,
+ `| Status | ✅ Success |`,
+ `| Changes | ${changesIcon} ${agentChangesMade === 'true' ? `${agentFilesChanged} file(s)` : 'No changes'} |`,
+ );
+ if (agentCommitSha) {
+ summaryLines.push(`| Commit | [\`${agentCommitSha.slice(0, 7)}\`](../commit/${agentCommitSha}) |`);
+ }
+ } else if (runResult === 'skipped') {
+ summaryLines.push(
+ `| Result | Value |`,
+ `|--------|-------|`,
+ `| Status | ⏭️ Skipped |`,
+ `| Reason | ${summaryReason || 'agent-run-skipped'} |`,
+ );
+ } else if (runResult === 'cancelled') {
+ summaryLines.push(
+ `| Result | Value |`,
+ `|--------|-------|`,
+ `| Status | 🚫 Cancelled |`,
+ `| Reason | ${summaryReason || 'agent-run-cancelled'} |`,
+ );
+ } else {
+ summaryLines.push(
+ `| Result | Value |`,
+ `|--------|-------|`,
+ `| Status | ❌ AGENT FAILED |`,
+ `| Reason | ${summaryReason || runResult || 'unknown'} |`,
+ `| Exit code | ${agentExitCode || 'unknown'} |`,
+ `| Failures | ${failure.count || 1}/${failureThreshold} before pause |`,
+ );
+ }
+
+ // Add agent output summary if available
+ if (agentSummary && agentSummary.length > 10) {
+ const truncatedSummary = agentSummary.length > 300
+ ? agentSummary.slice(0, 300) + '...'
+ : agentSummary;
+ summaryLines.push('', `**${agentDisplayName} output:**`, `> ${truncatedSummary}`);
+ }
+
+ // Task reconciliation warning: agent made changes but didn't check off tasks
+ if (madeChangesButNoTasksChecked) {
+ summaryLines.push(
+ '',
+ '### 📋 Task Reconciliation Needed',
+ '',
+ `⚠️ ${agentDisplayName} changed **${agentFilesChanged} file(s)** but didn't check off any tasks.`,
+ '',
+ '**Next iteration should:**',
+ '1. Review the changes made and determine which tasks were addressed',
+ '2. Update the PR body to check off completed task checkboxes',
+ '3. If work was unrelated to tasks, continue with remaining tasks',
+ );
+ }
+ }
+
+ if (errorType || errorCategory) {
+ summaryLines.push(
+ '',
+ '### 🔍 Failure Classification',
+ `| Field | Value |`,
+ `|-------|-------|`,
+ `| Error type | ${errorType || 'unknown'} |`,
+ `| Error category | ${errorCategory || 'unknown'} |`,
+ );
+ if (errorRecovery) {
+ summaryLines.push(`| Suggested recovery | ${errorRecovery} |`);
+ }
+ }
+
+ // LLM analysis details - show which provider was used for task completion detection
+ if (llmAnalysisRun && llmProvider) {
+ const providerIcon = llmProvider === 'github-models' ? '✅' :
+ llmProvider === 'openai' ? '⚠️' :
+ llmProvider === 'regex-fallback' ? '🔶' : 'ℹ️';
+ const providerLabel = llmProvider === 'github-models' ? 'GitHub Models (primary)' :
+ llmProvider === 'openai' ? 'OpenAI (fallback)' :
+ llmProvider === 'regex-fallback' ? 'Regex (fallback)' : llmProvider;
+ const confidencePercent = Math.round(llmConfidence * 100);
+
+ summaryLines.push(
+ '',
+ '### 🧠 Task Analysis',
+ `| Field | Value |`,
+ `|-------|-------|`,
+ `| Provider | ${providerIcon} ${providerLabel} |`,
+ `| Confidence | ${confidencePercent}% |`,
+ );
+
+ // Show quality metrics if available
+ if (sessionDataQuality) {
+ const qualityIcon = sessionDataQuality === 'high' ? '🟢' :
+ sessionDataQuality === 'medium' ? '🟡' :
+ sessionDataQuality === 'low' ? '🟠' : '🔴';
+ summaryLines.push(`| Data Quality | ${qualityIcon} ${sessionDataQuality} |`);
+ }
+ if (sessionEffortScore > 0) {
+ summaryLines.push(`| Effort Score | ${sessionEffortScore}/100 |`);
+ }
+
+ // Show BS detection warnings if confidence was adjusted
+ if (llmConfidenceAdjusted && llmRawConfidence !== llmConfidence) {
+ const rawPercent = Math.round(llmRawConfidence * 100);
+ summaryLines.push(
+ '',
+ `> ⚠️ **Confidence adjusted**: Raw confidence was ${rawPercent}%, adjusted to ${confidencePercent}% based on session quality metrics.`
+ );
+ }
+
+ // Show specific quality warnings if present
+ if (llmQualityWarnings) {
+ summaryLines.push(
+ '',
+ '#### Quality Warnings',
+ );
+ // Parse warnings (could be a JSON array or a semicolon-separated string)
+ let warnings = [];
+ try {
+ const parsed = JSON.parse(llmQualityWarnings);
+ warnings = Array.isArray(parsed) ? parsed : [parsed];
+ } catch {
+ warnings = llmQualityWarnings.split(';').filter(w => w.trim());
+ }
+ for (const warning of warnings) {
+ summaryLines.push(`- ⚠️ ${String(warning).trim()}`);
+ }
+ }
+
+ // Analysis data health check
+ if (analysisTextLength > 0 && analysisTextLength < 200 && agentFilesChanged > 0) {
+ summaryLines.push(
+ '',
+ `> 🔴 **Data Loss Alert**: Analysis text was only ${analysisTextLength} chars despite ${agentFilesChanged} file changes. Task detection may be inaccurate.`
+ );
+ }
+
+ if (llmProvider !== 'github-models') {
+ summaryLines.push(
+ '',
+ `> ⚠️ Primary provider (GitHub Models) was unavailable; used ${providerLabel} instead.`,
+ );
+ }
+ }
+
+ if (isTransientFailure) {
+ summaryLines.push(
+ '',
+ '### ♻️ Transient Issue Detected',
+ 'This run failed due to a transient issue. The failure counter has been reset to avoid pausing the loop.',
+ );
+ }
+
+ if (action === 'defer') {
+ summaryLines.push(
+ '',
+ '### ⏳ Deferred',
+ 'Keepalive deferred due to a transient Gate cancellation (likely rate limits). It will retry later.',
+ );
+ }
+
+ // Show failure tracking prominently if there are failures
+ if (failure.count > 0) {
+ summaryLines.push(
+ '',
+ '### ⚠️ Failure Tracking',
+ `| Field | Value |`,
+ `|-------|-------|`,
+ `| Consecutive failures | ${failure.count}/${failureThreshold} |`,
+ `| Reason | ${failure.reason || 'unknown'} |`,
+ );
+ }
+
+ if (stop) {
+ summaryLines.push(
+ '',
+ '### 🛑 Paused – Human Attention Required',
+ '',
+ 'The keepalive loop has paused due to repeated failures.',
+ '',
+ '**To resume:**',
+ '1. Investigate the failure reason above',
+ '2. Fix any issues in the code or prompt',
+ '3. Remove the `needs-human` label from this PR',
+ '4. The next Gate pass will restart the loop',
+ '',
+ '_Or manually edit this comment to reset `failure: {}` in the state below._',
+ );
+ }
+
+ const focusTask = currentFocus || fallbackFocus;
+ const shouldRecordAttempt = action === 'run' && reason !== 'verify-acceptance';
+ let attemptedTasks = normaliseAttemptedTasks(previousState?.attempted_tasks);
+ if (shouldRecordAttempt && focusTask) {
+ attemptedTasks = updateAttemptedTasks(attemptedTasks, focusTask, metricsIteration);
+ }
+
+ let verification = previousState?.verification && typeof previousState.verification === 'object'
+ ? { ...previousState.verification }
+ : {};
+ if (tasksUnchecked > 0) {
+ verification = {};
+ } else if (reason === 'verify-acceptance') {
+ verification = {
+ status: runResult === 'success' ? 'done' : 'failed',
+ iteration: nextIteration,
+ last_result: runResult || '',
+ updated_at: new Date().toISOString(),
+ };
+ }
+
+ const newState = {
+ trace: stateTrace || previousState?.trace || '',
+ pr_number: prNumber,
+ iteration: nextIteration,
+ max_iterations: maxIterations,
+ last_action: action,
+ last_reason: summaryReason,
+ failure,
+ error_type: errorType,
+ error_category: errorCategory,
+ tasks: { total: tasksTotal, unchecked: tasksUnchecked },
+ gate_conclusion: gateConclusion,
+ failure_threshold: failureThreshold,
+ // Track task reconciliation for next iteration
+ needs_task_reconciliation: madeChangesButNoTasksChecked,
+ // Productivity tracking for evidence-based decisions
+ last_files_changed: agentFilesChanged,
+ prev_files_changed: toNumber(previousState?.last_files_changed, 0),
+ // Quality metrics for analysis validation
+ last_effort_score: sessionEffortScore,
+ last_data_quality: sessionDataQuality,
+ attempted_tasks: attemptedTasks,
+ last_focus: focusTask || '',
+ verification,
+ timeout: {
+ resolved_minutes: timeoutStatus.resolvedMinutes,
+ default_minutes: timeoutStatus.defaultMinutes,
+ extended_minutes: timeoutStatus.extendedMinutes,
+ override_minutes: timeoutStatus.overrideMinutes,
+ source: timeoutStatus.source,
+ label: timeoutStatus.label,
+ elapsed_minutes: timeoutStatus.elapsedMs ? Math.floor(timeoutStatus.elapsedMs / 60000) : null,
+ remaining_minutes: timeoutStatus.remainingMs ? Math.ceil(timeoutStatus.remainingMs / 60000) : null,
+ usage_ratio: timeoutStatus.usageRatio,
+ warning: timeoutStatus.warning || null,
+ },
+ };
+ const attemptEntry = buildAttemptEntry({
+ iteration: metricsIteration,
+ action,
+ reason: summaryReason,
+ runResult,
+ promptMode,
+ promptFile,
+ gateConclusion,
+ errorCategory,
+ errorType,
+ });
+ newState.attempts = updateAttemptHistory(previousState?.attempts, attemptEntry);
+
+ const summaryOutcome = runResult || summaryReason || action || 'unknown';
+ if (action === 'run' || runResult) {
+ await writeStepSummary({
+ core,
+ iteration: nextIteration,
+ maxIterations,
+ tasksTotal,
+ tasksUnchecked,
+ tasksCompletedDelta: tasksCompletedThisRound,
+ agentFilesChanged,
+ outcome: summaryOutcome,
+ });
+ }
+
+ const previousAttention = previousState?.attention && typeof previousState.attention === 'object'
+ ? previousState.attention
+ : {};
+ if (Object.keys(previousAttention).length > 0) {
+ newState.attention = { ...previousAttention };
+ }
+
+ if (core && typeof core.setOutput === 'function') {
+ core.setOutput('error_type', errorType || '');
+ core.setOutput('error_category', errorCategory || '');
+ }
+
+ const shouldEscalate =
+ (action === 'run' && runResult && runResult !== 'success' && errorCategory !== ERROR_CATEGORIES.transient) ||
+ (action === 'stop' && !isSuccessStop && !isNeutralStop && errorCategory !== ERROR_CATEGORIES.transient);
+
+ // NOTE: Failure comment posting removed - handled by reusable-codex-run.yml with proper deduplication
+ // This prevents duplicate failure notifications on PRs
+
+ summaryLines.push('', formatStateComment(newState));
+ const body = summaryLines.join('\n');
+
+ if (commentId) {
+ await github.rest.issues.updateComment({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ comment_id: commentId,
+ body,
+ });
+ } else {
+ await github.rest.issues.createComment({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ issue_number: prNumber,
+ body,
+ });
+ }
+
+ if (shouldEscalate) {
+ try {
+ await github.rest.issues.addLabels({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ issue_number: prNumber,
+ labels: ['agent:needs-attention'],
+ });
+ } catch (error) {
+ if (core) core.warning(`Failed to add agent:needs-attention label: ${error.message}`);
+ }
+ }
+
+ if (stop) {
+ try {
+ await github.rest.issues.addLabels({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ issue_number: prNumber,
+ labels: ['needs-human'],
+ });
+ } catch (error) {
+ if (core) core.warning(`Failed to add needs-human label: ${error.message}`);
+ }
+ }
+}
+
+/**
+ * Mark that an agent is currently running by updating the summary comment.
+ * This provides real-time visibility into the keepalive loop's activity.
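+ *
+ * Illustrative call from an actions/github-script step. The input keys match
+ * how this function reads them below; the values are hypothetical:
+ * @example
+ * await markAgentRunning({
+ *   github, context, core,
+ *   inputs: { pr_number: '42', agent_type: 'codex', iteration: '1', max_iterations: '10' },
+ * });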
+ */
+async function markAgentRunning({ github, context, core, inputs }) {
+ const prNumber = Number(inputs.prNumber || inputs.pr_number || 0);
+ if (!Number.isFinite(prNumber) || prNumber <= 0) {
+ if (core) core.info('No PR number available for running status update.');
+ return;
+ }
+
+ const agentType = normalise(inputs.agent_type ?? inputs.agentType) || 'codex';
+ const iteration = toNumber(inputs.iteration, 0);
+ const maxIterations = toNumber(inputs.maxIterations ?? inputs.max_iterations, 0);
+ const tasksTotal = toNumber(inputs.tasksTotal ?? inputs.tasks_total, 0);
+ const tasksUnchecked = toNumber(inputs.tasksUnchecked ?? inputs.tasks_unchecked, 0);
+ const stateTrace = normalise(inputs.trace || inputs.keepalive_trace || '');
+ const runUrl = normalise(inputs.run_url ?? inputs.runUrl);
+
+ const { state: previousState, commentId } = await loadKeepaliveState({
+ github,
+ context,
+ prNumber,
+ trace: stateTrace,
+ });
+ const prBody = await fetchPrBody({ github, context, prNumber, core });
+ const focusSections = prBody ? normaliseChecklistSections(parseScopeTasksAcceptanceSections(prBody)) : {};
+ const focusItems = extractChecklistItems(focusSections.tasks || focusSections.acceptance || '');
+ const focusUnchecked = focusItems.filter((item) => !item.checked);
+ const attemptedTasks = normaliseAttemptedTasks(previousState?.attempted_tasks);
+ const attemptedKeys = new Set(attemptedTasks.map((entry) => entry.key));
+ const suggestedFocus = focusUnchecked.find((item) => !attemptedKeys.has(normaliseTaskKey(item.text))) || focusUnchecked[0];
+
+ // Capitalize agent name for display
+ const agentDisplayName = agentType.charAt(0).toUpperCase() + agentType.slice(1);
+
+ // Show iteration we're starting (current + 1)
+ const displayIteration = iteration + 1;
+
+ const runLinkText = runUrl ? ` ([view logs](${runUrl}))` : '';
+
+ // Determine if in extended mode for display
+ const inExtendedMode = displayIteration > maxIterations && maxIterations > 0;
+ const iterationText = inExtendedMode
+ ? `${maxIterations}+${displayIteration - maxIterations} (extended)`
+ : `${displayIteration} of ${maxIterations || '∞'}`;
+
+ const tasksCompleted = Math.max(0, tasksTotal - tasksUnchecked);
+ const progressPct = tasksTotal > 0 ? Math.round((tasksCompleted / tasksTotal) * 100) : 0;
+
+ const summaryLines = [
+ '',
+ `## 🤖 Keepalive Loop Status`,
+ '',
+ `**PR #${prNumber}** | Agent: **${agentDisplayName}** | Iteration **${iterationText}**`,
+ '',
+ '### 🔄 Agent Running',
+ '',
+ `**${agentDisplayName} is actively working on this PR**${runLinkText}`,
+ '',
+ `| Status | Value |`,
+ `|--------|-------|`,
+ `| Agent | ${agentDisplayName} |`,
+ `| Iteration | ${iterationText} |`,
+ `| Task progress | ${tasksCompleted}/${tasksTotal} (${progressPct}%) |`,
+ `| Started | ${new Date().toISOString().replace('T', ' ').slice(0, 19)} UTC |`,
+ '',
+ '_This comment will be updated when the agent completes._',
+ ];
+
+ // Preserve state from previous summary (don't modify state while running)
+ const preservedState = previousState || {};
+ preservedState.running = true;
+ preservedState.running_since = new Date().toISOString();
+ if (suggestedFocus?.text) {
+ preservedState.current_focus = suggestedFocus.text;
+ preservedState.current_focus_set_at = new Date().toISOString();
+ }
+
+ summaryLines.push('', formatStateComment(preservedState));
+ const body = summaryLines.join('\n');
+
+ if (commentId) {
+ await github.rest.issues.updateComment({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ comment_id: commentId,
+ body,
+ });
+ if (core) core.info(`Updated summary comment ${commentId} with running status`);
+ } else {
+ const { data } = await github.rest.issues.createComment({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ issue_number: prNumber,
+ body,
+ });
+ if (core) core.info(`Created summary comment ${data.id} with running status`);
+ }
+}
+
+/**
+ * Analyze commits and files changed to infer which tasks may have been completed.
+ * Uses keyword matching and file path analysis to suggest task completions.
+ * @param {object} params - Parameters
+ * @param {object} params.github - GitHub API client
+ * @param {object} params.context - GitHub Actions context
+ * @param {number} params.prNumber - PR number
+ * @param {string} params.baseSha - Base SHA to compare from
+ * @param {string} params.headSha - Head SHA to compare to
+ * @param {string} params.taskText - The raw task/acceptance text from PR body
+ * @param {object} [params.core] - Optional core for logging
+ * @returns {Promise<{matches: Array<{task: string, reason: string, confidence: string}>, summary: string}>}
+ */
+async function analyzeTaskCompletion({ github, context, prNumber, baseSha, headSha, taskText, core }) {
+ const matches = [];
+ // Prefer core.info when available; the previous `core?.info?.(msg) || console.log(msg)`
+ // double-logged, because core.info returns undefined and the fallback always fired.
+ const log = (msg) => { if (core?.info) core.info(msg); else console.log(msg); };
+
+ if (!context?.repo?.owner || !context?.repo?.repo) {
+ log('Skipping task analysis: missing repo context.');
+ return { matches, summary: 'Missing repo context for task analysis' };
+ }
+
+ if (!taskText || !baseSha || !headSha) {
+ log('Skipping task analysis: missing task text or commit range.');
+ return { matches, summary: 'Insufficient data for task analysis' };
+ }
+
+ // Get commits between base and head
+ let commits = [];
+ try {
+ const { data } = await github.rest.repos.compareCommits({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ base: baseSha,
+ head: headSha,
+ });
+ commits = data.commits || [];
+ } catch (error) {
+ log(`Failed to get commits: ${error.message}`);
+ return { matches, summary: `Failed to analyze: ${error.message}` };
+ }
+
+ // Get files changed
+ let filesChanged = [];
+ try {
+ const { data } = await github.rest.pulls.listFiles({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ pull_number: prNumber,
+ per_page: 100,
+ });
+ filesChanged = data.map(f => f.filename);
+ } catch (error) {
+ log(`Failed to get files: ${error.message}`);
+ }
+
+ // Parse unchecked tasks ("- [ ]" items) into individual text entries
+ const taskLines = taskText.split('\n')
+ .filter(line => /^\s*[-*+]\s*\[\s*\]/.test(line))
+ .map(line => {
+ const match = line.match(/^\s*[-*+]\s*\[\s*\]\s*(.+)$/);
+ return match ? match[1].trim() : null;
+ })
+ .filter(Boolean);
+
+ log(`Analyzing ${commits.length} commits against ${taskLines.length} unchecked tasks`);
+
+ // Common action synonyms for better matching
+ const SYNONYMS = {
+ add: ['create', 'implement', 'introduce', 'build'],
+ create: ['add', 'implement', 'introduce', 'build'],
+ implement: ['add', 'create', 'build'],
+ fix: ['repair', 'resolve', 'correct', 'patch'],
+ update: ['modify', 'change', 'revise', 'edit'],
+ remove: ['delete', 'drop', 'eliminate'],
+ test: ['tests', 'testing', 'spec', 'specs'],
+ config: ['configuration', 'settings', 'configure'],
+ doc: ['docs', 'documentation', 'document'],
+ };
+
+ // Helper to split camelCase/PascalCase into words
+ function splitCamelCase(str) {
+ return str
+ .replace(/([a-z])([A-Z])/g, '$1 $2')
+ .replace(/([A-Z]+)([A-Z][a-z])/g, '$1 $2')
+ .toLowerCase()
+ .split(/[\s_-]+/)
+ .filter(w => w.length > 2);
+ }
+
+ // Build keyword map from commits
+ const commitKeywords = new Set();
+ const commitMessages = commits
+ .map(c => c.commit.message.toLowerCase())
+ .join(' ');
+
+ // Extract meaningful words from commit messages
+ const words = commitMessages.match(/\b[a-z_-]{3,}\b/g) || [];
+ words.forEach(w => commitKeywords.add(w));
+
+ // Also split camelCase words from commit messages
+ const camelWords = commits
+ .map(c => c.commit.message)
+ .join(' ')
+ .match(/[a-zA-Z][a-z]+[A-Z][a-zA-Z]*/g) || [];
+ camelWords.forEach(w => splitCamelCase(w).forEach(part => commitKeywords.add(part)));
+
+ // Also extract from file paths
+ filesChanged.forEach(f => {
+ const parts = f.toLowerCase().replace(/[^a-z0-9_/-]/g, ' ').split(/[\s/]+/);
+ parts.forEach(p => p.length > 2 && commitKeywords.add(p));
+ // Extract camelCase from file names
+ const fileName = f.split('/').pop() || '';
+ splitCamelCase(fileName.replace(/\.[^.]+$/, '')).forEach(w => commitKeywords.add(w));
+ });
+
+ // Add synonyms for all commit keywords
+ const expandedKeywords = new Set(commitKeywords);
+ for (const keyword of commitKeywords) {
+ const synonymList = SYNONYMS[keyword];
+ if (synonymList) {
+ synonymList.forEach(syn => expandedKeywords.add(syn));
+ }
+ }
+
+ // Build module-to-test-file map for better test task matching
+ // e.g., tests/test_adapter_base.py -> ["adapter", "base", "adapters"]
+ const testFileModules = new Map();
+ filesChanged.forEach(f => {
+ const match = f.match(/tests\/test_([a-z_]+)\.py$/i);
+ if (match) {
+ const moduleParts = match[1].toLowerCase().split('_');
+ // Add both singular and plural forms, plus the full module name
+ const modules = [...moduleParts];
+ moduleParts.forEach(p => {
+ if (!p.endsWith('s')) modules.push(p + 's');
+ if (p.endsWith('s')) modules.push(p.slice(0, -1));
+ });
+ modules.push(match[1]); // full module name like "adapter_base"
+ testFileModules.set(f, modules);
+ }
+ });
+
+ // Match tasks to commits/files
+ for (const task of taskLines) {
+ const taskLower = task.toLowerCase();
+ const taskWords = taskLower.match(/\b[a-z_-]{3,}\b/g) || [];
+ const isTestTask = /\b(test|tests|unit\s*test|coverage)\b/i.test(task);
+
+ // Calculate overlap score using expanded keywords (with synonyms)
+ const matchingWords = taskWords.filter(w => expandedKeywords.has(w));
+ const score = taskWords.length > 0 ? matchingWords.length / taskWords.length : 0;
+
+ // Extract explicit file references from task (e.g., `filename.js` or filename.test.js)
+ const fileRefs = taskLower.match(/`([^`]+\.[a-z]+)`|([a-z0-9_./-]+(?:\.test)?\.(?:js|ts|py|yml|yaml|md))/g) || [];
+ const cleanFileRefs = fileRefs.map(f => f.replace(/`/g, '').toLowerCase());
+
+ // Check for explicit file creation (high confidence if exact file was created)
+ const exactFileMatch = cleanFileRefs.some(ref => {
+ const refBase = ref.split('/').pop(); // Get just filename
+ return filesChanged.some(f => {
+ const fBase = f.split('/').pop().toLowerCase();
+ return fBase === refBase || f.toLowerCase().endsWith(ref);
+ });
+ });
+
+ // Special check for test tasks: match module references to test files
+ // e.g., "Add unit tests for `adapters/` module" should match tests/test_adapter_base.py
+ let testModuleMatch = false;
+ if (isTestTask) {
+ // Extract module references from task (e.g., `adapters/`, `etl/`)
+ const moduleRefs = taskLower.match(/`([a-z_\/]+)`|for\s+([a-z_]+)\s+module/gi) || [];
+ const cleanModuleRefs = moduleRefs.map(m => m.replace(/[`\/]/g, '').toLowerCase().trim())
+ .flatMap(m => [m, m.replace(/s$/, ''), m + 's']); // singular/plural
+
+ for (const [testFile, modules] of testFileModules.entries()) {
+ if (cleanModuleRefs.some(ref => modules.some(mod => mod.includes(ref) || ref.includes(mod)))) {
+ testModuleMatch = true;
+ break;
+ }
+ }
+ }
+
+ // Check for specific file mentions (partial match)
+ const fileMatch = filesChanged.some(f => {
+ const fLower = f.toLowerCase();
+ return taskWords.some(w => fLower.includes(w));
+ });
+
+ // Check for specific commit message matches
+ const commitMatch = commits.some(c => {
+ const msg = c.commit.message.toLowerCase();
+ return taskWords.some(w => w.length > 4 && msg.includes(w));
+ });
+
+ let confidence = 'low';
+ let reason = '';
+
+ // Exact file match is very high confidence
+ if (exactFileMatch) {
+ confidence = 'high';
+      const matchedFile = cleanFileRefs.find(ref => filesChanged.some(f => f.toLowerCase().endsWith(ref) || f.split('/').pop().toLowerCase() === ref.split('/').pop()));
+ reason = `Exact file created: ${matchedFile}`;
+ matches.push({ task, reason, confidence });
+ } else if (isTestTask && testModuleMatch) {
+ confidence = 'high';
+ reason = 'Test file created matching module reference';
+ matches.push({ task, reason, confidence });
+ } else if (score >= 0.35 && (fileMatch || commitMatch)) {
+ // Lowered threshold from 0.5 to 0.35 to catch more legitimate completions
+ confidence = 'high';
+ reason = `${Math.round(score * 100)}% keyword match, ${fileMatch ? 'file match' : 'commit match'}`;
+ matches.push({ task, reason, confidence });
+ } else if (score >= 0.25 && fileMatch) {
+ // File match with moderate keyword overlap is high confidence
+ confidence = 'high';
+ reason = `${Math.round(score * 100)}% keyword match with file match`;
+ matches.push({ task, reason, confidence });
+ } else if (score >= 0.2 || fileMatch) {
+ confidence = 'medium';
+ reason = `${Math.round(score * 100)}% keyword match${fileMatch ? ', file touched' : ''}`;
+ matches.push({ task, reason, confidence });
+ }
+ }
+
+ const summary = matches.length > 0
+ ? `Found ${matches.length} potential task completion(s): ${matches.filter(m => m.confidence === 'high').length} high, ${matches.filter(m => m.confidence === 'medium').length} medium confidence`
+ : 'No clear task matches found in commits';
+
+ log(summary);
+ return { matches, summary };
+}
+
+/**
+ * Auto-reconcile task checkboxes in PR body based on commit analysis.
+ * Updates the PR body to check off tasks that appear to be completed.
+ * @param {object} params - Parameters
+ * @param {object} params.github - GitHub API client
+ * @param {object} params.context - GitHub Actions context
+ * @param {number} params.prNumber - PR number
+ * @param {string} params.baseSha - Base SHA (before agent work)
+ * @param {string} params.headSha - Head SHA (after agent work)
+ * @param {string[]} [params.llmCompletedTasks] - Tasks marked complete by LLM analysis
+ * @param {object} [params.core] - Optional core for logging
+ * @returns {Promise<{updated: boolean, tasksChecked: number, details: string, sources: {llm: number, commit: number}}>}
+ */
+async function autoReconcileTasks({ github, context, prNumber, baseSha, headSha, llmCompletedTasks, core }) {
+  const log = (msg) => (core?.info ? core.info(msg) : console.log(msg));
+ const sources = { llm: 0, commit: 0 };
+
+ if (!context?.repo?.owner || !context?.repo?.repo || !prNumber) {
+ log('Skipping reconciliation: missing repo context or PR number.');
+ return {
+ updated: false,
+ tasksChecked: 0,
+ details: 'Missing repo context or PR number',
+ sources,
+ };
+ }
+
+ // Get current PR body
+ let pr;
+ try {
+ const { data } = await github.rest.pulls.get({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ pull_number: prNumber,
+ });
+ pr = data;
+ } catch (error) {
+ log(`Failed to get PR: ${error.message}`);
+    return { updated: false, tasksChecked: 0, details: `Failed to get PR: ${error.message}`, sources };
+ }
+
+ const sections = parseScopeTasksAcceptanceSections(pr.body || '');
+ const taskText = [sections.tasks, sections.acceptance].filter(Boolean).join('\n');
+
+ if (!taskText) {
+ log('Skipping reconciliation: no tasks found in PR body.');
+ return { updated: false, tasksChecked: 0, details: 'No tasks found in PR body', sources };
+ }
+
+ // Build high-confidence matches from multiple sources
+ let highConfidence = [];
+
+ // Source 1: LLM analysis (highest priority if available)
+ if (llmCompletedTasks && Array.isArray(llmCompletedTasks) && llmCompletedTasks.length > 0) {
+ log(`LLM analysis found ${llmCompletedTasks.length} completed task(s)`);
+ for (const task of llmCompletedTasks) {
+ highConfidence.push({
+ task,
+ reason: 'LLM session analysis',
+ confidence: 'high',
+ source: 'llm',
+ });
+ sources.llm += 1;
+ }
+ }
+
+ // Source 2: Commit/file analysis (fallback or supplementary)
+ const analysis = await analyzeTaskCompletion({
+ github, context, prNumber, baseSha, headSha, taskText, core
+ });
+
+ // Add commit-based matches that aren't already covered by LLM
+ const llmTasksLower = new Set((llmCompletedTasks || []).map(t => t.toLowerCase()));
+ const commitMatches = analysis.matches
+ .filter(m => m.confidence === 'high')
+ .filter(m => !llmTasksLower.has(m.task.toLowerCase()));
+
+ if (commitMatches.length > 0) {
+ log(`Commit analysis found ${commitMatches.length} additional task(s)`);
+ for (const match of commitMatches) {
+ highConfidence.push({ ...match, source: 'commit' });
+ sources.commit += 1;
+ }
+ }
+
+ if (highConfidence.length === 0) {
+ log('No high-confidence task matches to auto-check');
+ return {
+ updated: false,
+ tasksChecked: 0,
+ details: analysis.summary + ' (no high-confidence matches for auto-check)',
+ sources,
+ };
+ }
+
+ // Update PR body to check off matched tasks
+ let updatedBody = pr.body;
+ let checkedCount = 0;
+
+ for (const match of highConfidence) {
+ // Escape special regex characters in task text
+ const escaped = match.task.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
+ const pattern = new RegExp(`([-*+]\\s*)\\[\\s*\\](\\s*${escaped})`, 'i');
+
+ if (pattern.test(updatedBody)) {
+ updatedBody = updatedBody.replace(pattern, '$1[x]$2');
+ checkedCount++;
+ log(`Auto-checked task: ${match.task.slice(0, 50)}... (${match.reason})`);
+ }
+ }
+
+ if (checkedCount === 0) {
+ log('Matched tasks but no checkbox patterns found to update.');
+ return {
+ updated: false,
+ tasksChecked: 0,
+ details: 'Tasks matched but patterns not found in body',
+ sources,
+ };
+ }
+
+ // Update the PR body
+ try {
+ await github.rest.pulls.update({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ pull_number: prNumber,
+ body: updatedBody,
+ });
+ log(`Updated PR body, checked ${checkedCount} task(s)`);
+ } catch (error) {
+ log(`Failed to update PR body: ${error.message}`);
+ return {
+ updated: false,
+ tasksChecked: 0,
+ details: `Failed to update PR: ${error.message}`,
+ sources,
+ };
+ }
+
+ // Build detailed description
+ const sourceDesc = [];
+ if (sources.llm > 0) sourceDesc.push(`${sources.llm} from LLM analysis`);
+ if (sources.commit > 0) sourceDesc.push(`${sources.commit} from commit analysis`);
+ const sourceInfo = sourceDesc.length > 0 ? ` (${sourceDesc.join(', ')})` : '';
+
+ return {
+ updated: true,
+ tasksChecked: checkedCount,
+ details: `Auto-checked ${checkedCount} task(s)${sourceInfo}: ${highConfidence.map(m => m.task.slice(0, 30) + '...').join(', ')}`,
+ sources,
+ };
+}
+
+module.exports = {
+ countCheckboxes,
+ parseConfig,
+ buildTaskAppendix,
+ extractSourceSection,
+ evaluateKeepaliveLoop,
+ markAgentRunning,
+ updateKeepaliveLoopSummary,
+ analyzeTaskCompletion,
+ autoReconcileTasks,
+ normaliseChecklistSection,
+};
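The checkbox reconciliation above escapes each matched task string and flips its unchecked box in the PR body. A minimal, self-contained sketch of that round trip (the task strings here are hypothetical examples, not from any real PR):

```javascript
// Sketch of the checkbox-toggling step in autoReconcileTasks:
// escape regex metacharacters in the task text, find its unchecked
// box, and flip it to [x] while preserving the list marker.
function checkOffTask(body, task) {
  const escaped = task.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const pattern = new RegExp(`([-*+]\\s*)\\[\\s*\\](\\s*${escaped})`, 'i');
  return pattern.test(body) ? body.replace(pattern, '$1[x]$2') : body;
}

const body = '- [ ] Add retry logic (fixes #12)\n- [ ] Write docs';
const updated = checkOffTask(body, 'Add retry logic (fixes #12)');
```

Because the replacement is non-global, only the first matching checkbox is toggled per call, which mirrors the one-match-per-task loop in the real function.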
diff --git a/templates/consumer-repo/.github/scripts/keepalive_prompt_routing.js b/templates/consumer-repo/.github/scripts/keepalive_prompt_routing.js
new file mode 100644
index 000000000..eed7f2cf4
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/keepalive_prompt_routing.js
@@ -0,0 +1,99 @@
+'use strict';
+
+function normalise(value) {
+ return String(value ?? '').trim().toLowerCase();
+}
+
+const FIX_SCENARIOS = new Set([
+ 'ci',
+ 'ci-failure',
+ 'ci_failure',
+ 'fix',
+ 'fix-ci',
+ 'fix_ci',
+ 'fix-ci-failure',
+]);
+
+const VERIFY_SCENARIOS = new Set([
+ 'verify',
+ 'verification',
+ 'verify-acceptance',
+ 'acceptance',
+]);
+
+const CONFLICT_SCENARIOS = new Set([
+ 'conflict',
+ 'merge-conflict',
+ 'merge_conflict',
+ 'conflicts',
+ 'fix-conflict',
+ 'fix_conflict',
+ 'resolve-conflict',
+ 'resolve_conflict',
+]);
+
+const FEATURE_SCENARIOS = new Set([
+ 'feature',
+ 'feature-work',
+ 'feature_work',
+ 'task',
+ 'next-task',
+ 'next_task',
+ 'nexttask',
+]);
+
+const FIX_MODES = new Set(['fix', 'fix-ci', 'fix_ci', 'ci', 'ci-failure', 'ci_failure', 'fix-ci-failure']);
+const VERIFY_MODES = new Set(['verify', 'verification', 'verify-acceptance', 'acceptance']);
+const CONFLICT_MODES = new Set(['conflict', 'merge-conflict', 'merge_conflict', 'fix-conflict', 'fix_conflict']);
+
+function resolvePromptMode({ scenario, mode, action, reason } = {}) {
+ const modeValue = normalise(mode);
+ if (modeValue) {
+ // Conflict mode takes highest priority - merge conflicts block all other work
+ if (CONFLICT_MODES.has(modeValue)) {
+ return 'conflict';
+ }
+ if (FIX_MODES.has(modeValue)) {
+ return 'fix_ci';
+ }
+ if (VERIFY_MODES.has(modeValue)) {
+ return 'verify';
+ }
+ }
+
+ const actionValue = normalise(action);
+ const reasonValue = normalise(reason);
+
+ // Check for conflict-related actions/reasons first
+ if (actionValue === 'conflict' || reasonValue.startsWith('conflict') || reasonValue.includes('merge-conflict')) {
+ return 'conflict';
+ }
+ if (actionValue === 'fix' || reasonValue.startsWith('fix-')) {
+ return 'fix_ci';
+ }
+ if (actionValue === 'verify' || reasonValue === 'verify-acceptance') {
+ return 'verify';
+ }
+
+ const scenarioValue = normalise(scenario);
+ if (scenarioValue) {
+ if (CONFLICT_SCENARIOS.has(scenarioValue)) {
+ return 'conflict';
+ }
+ if (FIX_SCENARIOS.has(scenarioValue)) {
+ return 'fix_ci';
+ }
+ if (VERIFY_SCENARIOS.has(scenarioValue)) {
+ return 'verify';
+ }
+ if (FEATURE_SCENARIOS.has(scenarioValue)) {
+ return 'normal';
+ }
+ }
+
+ return 'normal';
+}
+
+module.exports = {
+ resolvePromptMode,
+};
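`resolvePromptMode` resolves in a fixed precedence: an explicit `mode` wins, then `action`/`reason` hints, then `scenario`, with conflict outranking fix, which outranks verify, at every tier. A condensed, self-contained sketch of that precedence (not the full synonym tables above):

```javascript
// Condensed sketch of the prompt-mode precedence: explicit mode first,
// then action, then scenario; within each tier conflict > fix_ci > verify.
// An unrecognized value falls through to the next tier, as in the real routing.
function sketchResolve({ mode = '', action = '', scenario = '' } = {}) {
  const tiers = [mode, action, scenario].map(v => String(v).trim().toLowerCase());
  for (const value of tiers) {
    if (!value) continue; // empty tier: consult the next one
    if (value.includes('conflict')) return 'conflict';
    if (value === 'fix' || value.includes('ci')) return 'fix_ci';
    if (value.startsWith('verif') || value === 'acceptance') return 'verify';
  }
  return 'normal';
}
```

For example, a conflicting merge always routes to the conflict prompt even when a fix action is also present, since conflicts block all other work.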
diff --git a/templates/consumer-repo/.github/scripts/keepalive_state.js b/templates/consumer-repo/.github/scripts/keepalive_state.js
new file mode 100644
index 000000000..7854a754d
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/keepalive_state.js
@@ -0,0 +1,417 @@
+'use strict';
+
+const STATE_MARKER = 'keepalive-state';
+const STATE_VERSION = 'v1';
+const STATE_REGEX = /<!--\s*keepalive-state:(v\d+)\s+(.*?)\s*-->/s;
+const LOG_PREFIX = '[keepalive_state]';
+
+function logInfo(message) {
+ console.info(`${LOG_PREFIX} ${message}`);
+}
+
+function normalise(value) {
+ return String(value ?? '').trim();
+}
+
+function normaliseLower(value) {
+ return normalise(value).toLowerCase();
+}
+
+function deepMerge(target, source) {
+ const base = target && typeof target === 'object' && !Array.isArray(target) ? { ...target } : {};
+ const updates = source && typeof source === 'object' && !Array.isArray(source) ? source : {};
+ const result = { ...base };
+
+ for (const [key, value] of Object.entries(updates)) {
+ if (value && typeof value === 'object' && !Array.isArray(value)) {
+ result[key] = deepMerge(base[key], value);
+ } else if (value === undefined) {
+ continue;
+ } else {
+ result[key] = value;
+ }
+ }
+
+ return result;
+}
+
+function toNumber(value, fallback = 0) {
+ const parsed = Number(value);
+ return Number.isFinite(parsed) ? parsed : fallback;
+}
+
+function resolveTimestampMs(value) {
+ if (value instanceof Date) {
+ const parsed = value.getTime();
+ return Number.isFinite(parsed) ? parsed : null;
+ }
+ if (typeof value === 'number') {
+ return Number.isFinite(value) ? value : null;
+ }
+ const text = normalise(value);
+ if (!text) {
+ return null;
+ }
+ if (/^\d+(\.\d+)?$/.test(text)) {
+ const parsedNumber = Number(text);
+ return Number.isFinite(parsedNumber) ? parsedNumber : null;
+ }
+ const parsed = Date.parse(text);
+ return Number.isFinite(parsed) ? parsed : null;
+}
+
+function formatDuration(seconds) {
+ const totalSeconds = Math.max(0, Math.floor(seconds));
+ const hours = Math.floor(totalSeconds / 3600);
+ const minutes = Math.floor((totalSeconds % 3600) / 60);
+ const remainingSeconds = totalSeconds % 60;
+ const parts = [];
+ if (hours > 0) {
+ parts.push(`${hours}h`);
+ }
+ if (hours > 0 || minutes > 0) {
+ parts.push(`${minutes}m`);
+ }
+ parts.push(`${remainingSeconds}s`);
+ return parts.join(' ');
+}
+
+function calculateElapsedTime(startTime, now) {
+ const startMs = resolveTimestampMs(startTime);
+ if (!Number.isFinite(startMs)) {
+ return '0s';
+ }
+ const nowMs = resolveTimestampMs(now);
+ const resolvedNow = Number.isFinite(nowMs) ? nowMs : Date.now();
+ const deltaMs = resolvedNow - startMs;
+ if (!Number.isFinite(deltaMs) || deltaMs <= 0) {
+ return '0s';
+ }
+ return formatDuration(deltaMs / 1000);
+}
+
+function applyIterationTracking(state) {
+ if (!state || typeof state !== 'object') {
+ return;
+ }
+ const nowMs = Date.now();
+ const nowIso = new Date(nowMs).toISOString();
+ state.current_iteration_at = nowIso;
+ const iteration = toNumber(state.iteration, 0);
+ if (!state.first_iteration_at && iteration === 1) {
+ state.first_iteration_at = nowIso;
+ }
+}
+
+function formatTimestamp(value = new Date(), { debug = false } = {}) {
+ const date = value instanceof Date ? value : new Date(value);
+ const iso = date.toISOString();
+ if (debug) {
+ return iso;
+ }
+ return iso.replace(/\.\d{3}Z$/, 'Z');
+}
+
+function parseStateComment(body) {
+ if (typeof body !== 'string' || !body.includes(STATE_MARKER)) {
+ return null;
+ }
+ const match = body.match(STATE_REGEX);
+ if (!match) {
+ return null;
+ }
+ const version = normalise(match[1]) || STATE_VERSION;
+ const payloadText = normalise(match[2]);
+ if (!payloadText) {
+ return { version, data: {} };
+ }
+ try {
+ const data = JSON.parse(payloadText);
+ if (data && typeof data === 'object') {
+ return { version, data };
+ }
+ } catch (error) {
+ // fall through to null
+ }
+ return { version, data: {} };
+}
+
+function formatStateComment(data) {
+ const payload = data && typeof data === 'object' ? { ...data } : {};
+ const version = normalise(payload.version) || STATE_VERSION;
+ payload.version = version;
+  return `<!-- ${STATE_MARKER}:${version} ${JSON.stringify(payload)} -->`;
+}
+
+function upsertStateCommentBody(body, stateComment) {
+ const existing = String(body ?? '');
+ const marker = String(stateComment ?? '').trim();
+ if (!marker) {
+ return existing;
+ }
+ if (!existing.trim()) {
+ return marker;
+ }
+ if (STATE_REGEX.test(existing)) {
+ return existing.replace(STATE_REGEX, () => marker);
+ }
+ const trimmed = existing.trimEnd();
+ const separator = trimmed ? '\n\n' : '';
+ return `${trimmed}${separator}${marker}`;
+}
+
+async function listAllComments({ github, owner, repo, prNumber }) {
+ if (!github?.paginate || !github?.rest?.issues?.listComments) {
+ return [];
+ }
+ try {
+ const comments = await github.paginate(github.rest.issues.listComments, {
+ owner,
+ repo,
+ issue_number: prNumber,
+ per_page: 100,
+ });
+ return Array.isArray(comments) ? comments : [];
+ } catch (error) {
+ return [];
+ }
+}
+
+async function findStateComment({ github, owner, repo, prNumber, trace }) {
+ if (!Number.isFinite(prNumber) || prNumber <= 0) {
+ return null;
+ }
+ const comments = await listAllComments({ github, owner, repo, prNumber });
+ if (!comments.length) {
+ return null;
+ }
+ const traceNorm = normaliseLower(trace);
+ for (let index = comments.length - 1; index >= 0; index -= 1) {
+ const comment = comments[index];
+ const parsed = parseStateComment(comment?.body);
+ if (!parsed) {
+ continue;
+ }
+ const candidate = parsed.data || {};
+ if (traceNorm) {
+ const candidateTrace = normaliseLower(candidate.trace);
+ if (candidateTrace !== traceNorm) {
+ continue;
+ }
+ }
+ return {
+ comment,
+ state: candidate,
+ version: parsed.version,
+ };
+ }
+ return null;
+}
+
+async function createKeepaliveStateManager({ github, context, prNumber, trace, round }) {
+ const owner = context?.repo?.owner;
+ const repo = context?.repo?.repo;
+ if (!owner || !repo || !Number.isFinite(prNumber) || prNumber <= 0) {
+ return {
+ state: {},
+ commentId: 0,
+ commentUrl: '',
+ async save() {
+ return { state: {}, commentId: 0, commentUrl: '' };
+ },
+ };
+ }
+
+ const existing = await findStateComment({ github, owner, repo, prNumber, trace });
+ let state = existing?.state && typeof existing.state === 'object' ? { ...existing.state } : {};
+ let commentId = existing?.comment?.id ? Number(existing.comment.id) : 0;
+ let commentUrl = existing?.comment?.html_url || '';
+ let commentBody = existing?.comment?.body || '';
+
+ const ensureDefaults = () => {
+ if (trace && normalise(state.trace) !== trace) {
+ state.trace = trace;
+ }
+ if (round && normalise(state.round) !== normalise(round)) {
+ state.round = normalise(round);
+ }
+ if (Number.isFinite(prNumber)) {
+ state.pr_number = Number(prNumber);
+ }
+ state.version = STATE_VERSION;
+ };
+
+ ensureDefaults();
+ applyIterationTracking(state);
+
+ const save = async (updates = {}) => {
+ state = deepMerge(state, updates);
+ ensureDefaults();
+ state.iteration_duration = calculateElapsedTime(state.current_iteration_at);
+ const body = formatStateComment(state);
+
+ if (commentId) {
+ let latestBody = commentBody;
+ if (github?.rest?.issues?.getComment) {
+ try {
+ const response = await github.rest.issues.getComment({
+ owner,
+ repo,
+ comment_id: commentId,
+ });
+ if (response?.data?.body) {
+ latestBody = response.data.body;
+ }
+ } catch (error) {
+ // fall back to cached body if lookup fails
+ }
+ }
+ const updatedBody = upsertStateCommentBody(latestBody, body);
+ await github.rest.issues.updateComment({
+ owner,
+ repo,
+ comment_id: commentId,
+ body: updatedBody,
+ });
+ commentBody = updatedBody;
+ } else {
+ const { data } = await github.rest.issues.createComment({
+ owner,
+ repo,
+ issue_number: prNumber,
+ body,
+ });
+ commentId = data?.id ? Number(data.id) : 0;
+ commentUrl = data?.html_url || '';
+ commentBody = body;
+ }
+
+ return { state: { ...state }, commentId, commentUrl };
+ };
+
+ return {
+ state: { ...state },
+ commentId,
+ commentUrl,
+ save,
+ };
+}
+
+async function saveKeepaliveState({ github, context, prNumber, trace, round, updates }) {
+ const manager = await createKeepaliveStateManager({ github, context, prNumber, trace, round });
+ return manager.save(updates);
+}
+
+async function loadKeepaliveState({ github, context, prNumber, trace }) {
+ const owner = context?.repo?.owner;
+ const repo = context?.repo?.repo;
+ if (!owner || !repo || !Number.isFinite(prNumber) || prNumber <= 0) {
+ return { state: {}, commentId: 0, commentUrl: '' };
+ }
+ const existing = await findStateComment({ github, owner, repo, prNumber, trace });
+ if (!existing) {
+ return { state: {}, commentId: 0, commentUrl: '' };
+ }
+ const loadedState = existing.state && typeof existing.state === 'object' ? { ...existing.state } : {};
+ applyIterationTracking(loadedState);
+ return {
+ state: loadedState,
+ commentId: existing.comment?.id ? Number(existing.comment.id) : 0,
+ commentUrl: existing.comment?.html_url || '',
+ };
+}
+
+async function resetState({ github, context, prNumber, trace, round }) {
+ const startTime = Date.now();
+ const timestamp = new Date(startTime).toISOString();
+ const issueNumber = Number.isFinite(prNumber) ? String(prNumber) : normalise(prNumber);
+ logInfo(`resetState starting: ts=${timestamp} issue=${issueNumber || 'unknown'}`);
+
+ let success = false;
+ try {
+ const owner = context?.repo?.owner;
+ const repo = context?.repo?.repo;
+ if (
+ !owner ||
+ !repo ||
+ !Number.isFinite(prNumber) ||
+ prNumber <= 0 ||
+ !github?.rest?.issues?.createComment ||
+ !github?.rest?.issues?.updateComment
+ ) {
+ return { state: {}, commentId: 0, commentUrl: '' };
+ }
+
+ const existing = await findStateComment({ github, owner, repo, prNumber, trace });
+ const state = {};
+ if (trace) {
+ state.trace = trace;
+ }
+ if (round) {
+ state.round = normalise(round);
+ }
+ state.pr_number = Number(prNumber);
+ state.version = STATE_VERSION;
+ const body = formatStateComment(state);
+
+ if (existing?.comment?.id) {
+ let latestBody = existing.comment.body || '';
+ if (github?.rest?.issues?.getComment) {
+ try {
+ const response = await github.rest.issues.getComment({
+ owner,
+ repo,
+ comment_id: existing.comment.id,
+ });
+ if (response?.data?.body) {
+ latestBody = response.data.body;
+ }
+ } catch (error) {
+ // fall back to cached body if lookup fails
+ }
+ }
+ const updatedBody = upsertStateCommentBody(latestBody, body);
+ await github.rest.issues.updateComment({
+ owner,
+ repo,
+ comment_id: existing.comment.id,
+ body: updatedBody,
+ });
+ success = true;
+ return {
+ state: { ...state },
+ commentId: Number(existing.comment.id),
+ commentUrl: existing.comment.html_url || '',
+ };
+ }
+
+ const { data } = await github.rest.issues.createComment({
+ owner,
+ repo,
+ issue_number: prNumber,
+ body,
+ });
+ success = true;
+ return {
+ state: { ...state },
+ commentId: data?.id ? Number(data.id) : 0,
+ commentUrl: data?.html_url || '',
+ };
+ } finally {
+ const durationMs = Date.now() - startTime;
+ logInfo(`resetState finished: status=${success ? 'success' : 'failure'} duration_ms=${durationMs}`);
+ }
+}
+
+module.exports = {
+ createKeepaliveStateManager,
+ saveKeepaliveState,
+ loadKeepaliveState,
+ calculateElapsedTime,
+ resetState,
+ parseStateComment,
+ formatStateComment,
+ upsertStateCommentBody,
+ deepMerge,
+ formatTimestamp,
+};
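The state manager above persists keepalive state by embedding JSON inside a marker in a PR comment body and recovering it by regex. A minimal sketch of that round trip, assuming an HTML-comment marker of the form `<!-- keepalive-state:v1 {...} -->` (the exact layout is an assumption drawn from `STATE_MARKER` and `STATE_VERSION`):

```javascript
// Sketch of the keepalive state round trip: serialize state into an
// HTML-comment marker, then recover version and payload by regex.
const MARKER_REGEX = /<!--\s*keepalive-state:(v\d+)\s+(.*?)\s*-->/s;

function formatState(data) {
  const payload = { version: 'v1', ...data };
  return `<!-- keepalive-state:${payload.version} ${JSON.stringify(payload)} -->`;
}

function parseState(body) {
  const match = String(body ?? '').match(MARKER_REGEX);
  if (!match) return null;
  try {
    return { version: match[1], data: JSON.parse(match[2]) };
  } catch {
    return { version: match[1], data: {} }; // malformed payload degrades safely
  }
}

const body = `Status update\n\n${formatState({ trace: 'abc', iteration: 3 })}`;
const parsed = parseState(body);
```

Hiding the state in a comment keeps it invisible in the rendered PR thread while staying editable through the ordinary issue-comment API, which is why `upsertStateCommentBody` only has to splice the marker, not the surrounding prose.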
diff --git a/templates/consumer-repo/.github/scripts/keepalive_worker_gate.js b/templates/consumer-repo/.github/scripts/keepalive_worker_gate.js
index c0045b1c4..5828538c2 100644
--- a/templates/consumer-repo/.github/scripts/keepalive_worker_gate.js
+++ b/templates/consumer-repo/.github/scripts/keepalive_worker_gate.js
@@ -293,7 +293,7 @@ async function evaluateKeepaliveWorkerGate({ core, github, context, env = proces
const lastProcessed = extractLastProcessedState(stateInfo?.state);
let action = 'execute';
- let reason = 'missing-history';
+ let reason;
if (!latestInstruction) {
reason = 'missing-instruction';
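The one-line change above drops the eager `'missing-history'` default so `reason` is only ever set by a branch that actually fired. A tiny sketch of why that matters (the `'proceed'` fallback here is a hypothetical illustration, not from the gate itself):

```javascript
// Sketch: with an eager default, a branch that forgets to assign leaves
// a misleading reason; with undefined, the gap is detectable downstream.
function pickReason(latestInstruction, lastProcessed) {
  let reason;
  if (!latestInstruction) {
    reason = 'missing-instruction';
  } else if (!lastProcessed) {
    reason = 'missing-history';
  }
  // Callers can treat undefined as "no skip condition matched".
  return reason ?? 'proceed';
}
```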
diff --git a/templates/consumer-repo/.github/scripts/maint-post-ci.js b/templates/consumer-repo/.github/scripts/maint-post-ci.js
new file mode 100644
index 000000000..db3efaff1
--- /dev/null
+++ b/templates/consumer-repo/.github/scripts/maint-post-ci.js
@@ -0,0 +1,963 @@
+'use strict';
+
+const fs = require('fs');
+const path = require('path');
+
+async function discoverWorkflowRuns({ github, context, core }) {
+ const { owner, repo } = context.repo;
+ const workflowRun = context.payload.workflow_run || {};
+ const prFromPayload = Array.isArray(workflowRun.pull_requests)
+ ? workflowRun.pull_requests.find(item => item && item.head && item.head.sha)
+ : null;
+ const headSha = (prFromPayload?.head?.sha || workflowRun.head_sha || context.sha || '').trim();
+
+ const parseJsonInput = (raw, fallback) => {
+ if (!raw) {
+ return fallback;
+ }
+ try {
+ return JSON.parse(raw);
+ } catch (error) {
+ core.warning(`Failed to parse JSON input: ${error}`);
+ return fallback;
+ }
+ };
+
+ const defaultWorkflowTargets = [
+ { key: 'gate', display_name: 'Gate', workflow_path: '.github/workflows/pr-00-gate.yml' },
+ ];
+
+ const workflowTargetsRaw = process.env.WORKFLOW_TARGETS_JSON;
+ const workflowTargetsInput = parseJsonInput(workflowTargetsRaw, defaultWorkflowTargets);
+ const workflowTargetsSource = Array.isArray(workflowTargetsInput) ? workflowTargetsInput : defaultWorkflowTargets;
+
+ function normalizeTargetProps(target) {
+ return {
+ key: target.key,
+ displayName: target.display_name || target.displayName || target.key || 'workflow',
+ workflowPath: target.workflow_path || target.workflowPath || '',
+ workflowFile: target.workflow_file || target.workflowFile || target.workflow_id || target.workflowId || '',
+ workflowName: target.workflow_name || target.workflowName || '',
+      workflowIds: Array.isArray(target.workflow_ids)
+        ? target.workflow_ids
+        : (Array.isArray(target.workflowIds) ? target.workflowIds : []),
+ };
+ }
+
+ const workflowTargets = workflowTargetsSource
+ .map(normalizeTargetProps)
+ .filter(target => target && target.key);
+
+ const normalizePath = (value) => {
+ if (!value) return '';
+ return String(value).replace(/^\.\//, '').replace(/^\/+/, '');
+ };
+
+ async function loadWorkflowRun(identifier) {
+ if (!identifier) {
+ return null;
+ }
+ try {
+ const response = await github.rest.actions.listWorkflowRuns({
+ owner,
+ repo,
+ workflow_id: identifier,
+ head_sha: headSha || undefined,
+ event: 'pull_request',
+ per_page: 10,
+ });
+ const runs = response.data.workflow_runs || [];
+ if (!runs.length) {
+ return null;
+ }
+ if (!headSha) {
+ return runs[0];
+ }
+ const exact = runs.find(item => item.head_sha === headSha);
+ return exact || runs[0];
+ } catch (error) {
+ core.warning(`Failed to query workflow runs for "${identifier}": ${error}`);
+ return null;
+ }
+ }
+
+ async function loadJobs(runId) {
+ if (!runId) {
+ return [];
+ }
+ try {
+ const jobs = await github.paginate(
+ github.rest.actions.listJobsForWorkflowRun,
+ {
+ owner,
+ repo,
+ run_id: runId,
+ per_page: 100,
+ },
+ );
+ return jobs
+ .filter(job => job)
+ .map(job => ({
+ name: job.name,
+ conclusion: job.conclusion,
+ status: job.status,
+ html_url: job.html_url,
+ }));
+ } catch (error) {
+ core.warning(`Failed to query jobs for workflow run ${runId}: ${error}`);
+ return [];
+ }
+ }
+
+ async function resolveRun(target) {
+ const candidates = [];
+ if (Array.isArray(target.workflowIds) && target.workflowIds.length) {
+ for (const id of target.workflowIds) {
+ if (id) {
+ candidates.push(id);
+ }
+ }
+ }
+ if (target.workflowPath) {
+ candidates.push(normalizePath(target.workflowPath));
+ }
+ if (target.workflowFile) {
+ candidates.push(normalizePath(target.workflowFile));
+ }
+ if (target.workflowName) {
+ candidates.push(target.workflowName);
+ }
+ if (!candidates.length) {
+ candidates.push(target.key);
+ }
+
+ for (const identifier of candidates) {
+ const run = await loadWorkflowRun(identifier);
+ if (run) {
+ return run;
+ }
+ }
+ return null;
+ }
+
+ const collected = [];
+ for (const target of workflowTargets) {
+ const run = await resolveRun(target);
+ if (run) {
+ const jobs = await loadJobs(run.id);
+ collected.push({
+ key: target.key,
+ displayName: target.displayName,
+ present: true,
+ id: run.id,
+ run_attempt: run.run_attempt,
+ conclusion: run.conclusion,
+ status: run.status,
+ html_url: run.html_url,
+ jobs,
+ });
+ } else {
+ collected.push({
+ key: target.key,
+ displayName: target.displayName,
+ present: false,
+ jobs: [],
+ });
+ }
+ }
+
+ const gateRun = collected.find(entry => entry.key === 'gate' && entry.present);
+ const gateRunId = gateRun ? String(gateRun.id) : '';
+
+ core.setOutput('runs', JSON.stringify(collected));
+ core.setOutput('ci_run_id', gateRunId);
+ core.setOutput('gate_run_id', gateRunId);
+ core.setOutput('head_sha', headSha || '');
+ core.notice(`Collected ${collected.filter(entry => entry.present).length} Gate workflow runs for head ${headSha}`);
+}
+
+async function propagateGateCommitStatus({ github, context, core }) {
+ const { owner, repo } = context.repo;
+ const sha = process.env.HEAD_SHA || '';
+ if (!sha) {
+ core.info('Head SHA missing; skipping Gate commit status update.');
+ return;
+ }
+
+ const conclusion = (process.env.RUN_CONCLUSION || '').toLowerCase();
+ const status = (process.env.RUN_STATUS || '').toLowerCase();
+ let state = 'pending';
+ let description = 'Gate workflow status pending.';
+
+ if (conclusion === 'success') {
+ state = 'success';
+ description = 'Gate workflow succeeded.';
+ } else if (conclusion === 'failure') {
+ state = 'failure';
+ description = 'Gate workflow failed.';
+ } else if (conclusion === 'cancelled') {
+ state = 'error';
+ description = 'Gate workflow was cancelled.';
+ } else if (conclusion === 'timed_out') {
+ state = 'error';
+ description = 'Gate workflow timed out.';
+ } else if (conclusion === 'action_required') {
+ state = 'pending';
+ description = 'Gate workflow requires attention.';
+ } else if (!conclusion) {
+ if (status === 'completed') {
+ description = 'Gate workflow completed with unknown result.';
+ } else if (status === 'in_progress') {
+ description = 'Gate workflow is still running.';
+ } else if (status === 'queued') {
+ description = 'Gate workflow is queued.';
+ }
+ } else {
+ description = `Gate workflow concluded with ${conclusion}.`;
+ }
+
+ const MAX_DESCRIPTION_LENGTH = 140;
+ const trimmed = description.length > MAX_DESCRIPTION_LENGTH
+ ? `${description.slice(0, MAX_DESCRIPTION_LENGTH - 3)}...`
+ : description;
+ const runId = context.payload?.workflow_run?.id || context.runId;
+ const baseUrl = process.env.GITHUB_SERVER_URL || 'https://github.com';
+ const targetUrl = process.env.GATE_RUN_URL || `${baseUrl.replace(/\/$/, '')}/${owner}/${repo}/actions/runs/${runId}`;
+
+ try {
+ await github.rest.repos.createCommitStatus({
+ owner,
+ repo,
+ sha,
+ state,
+ context: 'Gate / gate',
+ description: trimmed,
+ target_url: targetUrl,
+ });
+ core.info(`Propagated Gate commit status (${state}) for ${sha}.`);
+ } catch (error) {
+ core.warning(`Failed to propagate Gate commit status: ${error.message}`);
+ }
+}
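The conclusion-to-state mapping in `propagateGateCommitStatus` can be factored as a pure function, which makes the branch table easier to test in isolation (function name is hypothetical):

```javascript
// Sketch of the conclusion/status -> commit-status mapping above.
function mapConclusionToStatus(conclusion, status) {
  const c = (conclusion || '').toLowerCase();
  const s = (status || '').toLowerCase();
  if (c === 'success') return { state: 'success', description: 'Gate workflow succeeded.' };
  if (c === 'failure') return { state: 'failure', description: 'Gate workflow failed.' };
  if (c === 'cancelled') return { state: 'error', description: 'Gate workflow was cancelled.' };
  if (c === 'timed_out') return { state: 'error', description: 'Gate workflow timed out.' };
  if (c === 'action_required') return { state: 'pending', description: 'Gate workflow requires attention.' };
  if (!c) {
    if (s === 'completed') return { state: 'pending', description: 'Gate workflow completed with unknown result.' };
    if (s === 'in_progress') return { state: 'pending', description: 'Gate workflow is still running.' };
    if (s === 'queued') return { state: 'pending', description: 'Gate workflow is queued.' };
    return { state: 'pending', description: 'Gate workflow status pending.' };
  }
  return { state: 'pending', description: `Gate workflow concluded with ${c}.` };
}
```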
+
+async function resolveAutofixContext({ github, context, core }) {
+ const run = context.payload.workflow_run;
+ const owner = context.repo.owner;
+ const repo = context.repo.repo;
+ const prefix = process.env.COMMIT_PREFIX || 'chore(autofix):';
+ const payloadPr = run && Array.isArray(run.pull_requests)
+ ? run.pull_requests.find(item => item && typeof item.number === 'number')
+ : null;
+ const branch = (payloadPr?.head?.ref || run?.head_branch || '').trim();
+ const headSha = (payloadPr?.head?.sha || run?.head_sha || '').trim();
+
+ const result = {
+ found: 'false',
+ pr: '',
+ head_ref: branch || '',
+ head_sha: headSha || '',
+ same_repo: 'false',
+ loop_skip: 'false',
+ small_eligible: 'false',
+ file_count: '0',
+ change_count: '0',
+ safe_paths: '',
+ unsafe_paths: '',
+ safe_file_count: '0',
+ unsafe_file_count: '0',
+ safe_change_count: '0',
+ unsafe_change_count: '0',
+ all_safe: 'false',
+ has_opt_in: 'false',
+ has_patch_label: 'false',
+ is_draft: run.event === 'pull_request' && run.head_repository ? (run.pull_requests?.[0]?.draft ? 'true' : 'false') : 'false',
+ run_conclusion: run.conclusion || '',
+ actor: (run.triggering_actor?.login || run.actor?.login || '').toLowerCase(),
+ head_subject: '',
+ failure_tracker_skip: 'false',
+ };
+
+ if (!branch || !headSha) {
+ core.info('Workflow run missing branch or head SHA; skipping.');
+ for (const [key, value] of Object.entries(result)) {
+ core.setOutput(key, value);
+ }
+ return;
+ }
+
+ const headShaLower = (headSha || '').toLowerCase();
+ let pr = null;
+ let prNumber = null;
+
+ if (payloadPr) {
+ prNumber = Number(payloadPr.number);
+ if (!Number.isNaN(prNumber)) {
+ try {
+ const prResponse = await github.rest.pulls.get({ owner, repo, pull_number: prNumber });
+ pr = prResponse.data;
+ } catch (error) {
+ core.warning(`Failed to load PR #${prNumber} from workflow payload: ${error.message}`);
+ }
+ }
+ }
+
+ let openPrs = [];
+ if (!pr) {
+ openPrs = await github.paginate(github.rest.pulls.list, {
+ owner,
+ repo,
+ state: 'open',
+ per_page: 100,
+ });
+
+ if (headShaLower) {
+ pr = openPrs.find(item => (item.head?.sha || '').toLowerCase() === headShaLower) || null;
+ }
+
+ if (!pr && branch) {
+ const branchLower = branch.toLowerCase();
+ pr = openPrs.find(item => (item.head?.ref || '').toLowerCase() === branchLower) || null;
+ }
+
+ if (!pr && openPrs.length) {
+ pr = openPrs[0];
+ }
+ }
+
+ if (!pr) {
+ core.info(`Unable to locate an open PR for workflow run (head_sha=${headSha}, branch=${branch || 'n/a'})`);
+ for (const [key, value] of Object.entries(result)) {
+ core.setOutput(key, value);
+ }
+ return;
+ }
+
+ prNumber = Number(pr.number);
+ result.found = 'true';
+ result.pr = String(prNumber);
+ result.head_ref = pr.head?.ref || branch;
+ result.head_sha = pr.head?.sha || headSha;
+ result.same_repo = pr.head?.repo?.full_name === `${owner}/${repo}` ? 'true' : 'false';
+ result.is_draft = pr.draft ? 'true' : 'false';
+
+ const resolvedHeadRef = result.head_ref || branch || 'unknown-ref';
+ const resolvedHeadSha = result.head_sha || headSha || 'unknown-sha';
+ const gateRunId = run?.id ? String(run.id) : 'unknown-run';
+ core.notice(
+ `Resolved PR #${result.pr} (${resolvedHeadRef} @ ${resolvedHeadSha}) for Gate run ${gateRunId}.`,
+ );
+
+ const failureTrackerSkipPrs = new Set([10, 12]);
+ if (failureTrackerSkipPrs.has(prNumber)) {
+ core.info(`PR #${prNumber} flagged to skip failure tracker updates (legacy duplicate).`);
+ result.failure_tracker_skip = 'true';
+ }
+
+ const labels = Array.isArray(pr.labels)
+ ? pr.labels
+ .filter(label => label && typeof label.name === 'string')
+ .map(label => label.name)
+ : [];
+ const optLabel = process.env.AUTOFIX_LABEL || 'autofix:clean';
+ const patchLabel = process.env.AUTOFIX_PATCH_LABEL || 'autofix:patch';
+ result.has_opt_in = labels.includes(optLabel) ? 'true' : 'false';
+ result.has_patch_label = labels.includes(patchLabel) ? 'true' : 'false';
+ try {
+ result.labels_json = JSON.stringify(labels);
+ } catch (error) {
+ core.warning(`Failed to serialise label list: ${error}`);
+ result.labels_json = '[]';
+ }
+ result.title = pr.title || '';
+
+ try {
+ const commit = await github.rest.repos.getCommit({ owner, repo, ref: result.head_sha });
+ const subject = (commit.data.commit.message || '').split('\n')[0];
+ result.head_subject = subject;
+ const actor = result.actor;
+ const isAutomation = actor === 'github-actions' || actor === 'github-actions[bot]';
+ const subjectLower = subject.toLowerCase();
+ const prefixLower = prefix.toLowerCase();
+ if (isAutomation && prefixLower && subjectLower.startsWith(prefixLower)) {
+ core.info(`Loop guard engaged for actor ${actor}: detected prior autofix commit.`);
+ result.loop_skip = 'true';
+ }
+ } catch (error) {
+ core.warning(`Unable to inspect commit message for loop guard: ${error.message}`);
+ }
+
+ if (result.found === 'true') {
+ const files = await github.paginate(github.rest.pulls.listFiles, {
+ owner,
+ repo,
+ pull_number: pr.number,
+ per_page: 100,
+ });
+ const safeSuffixes = ['.py', '.pyi', '.toml', '.cfg', '.ini'];
+ const safeBasenames = new Set([
+ 'pyproject.toml',
+ 'ruff.toml',
+ '.ruff.toml',
+ 'mypy.ini',
+ '.pre-commit-config.yaml',
+ 'pytest.ini',
+ '.coveragerc',
+ ].map(name => name.toLowerCase()));
+ const isSafePath = (filepath) => {
+ const lower = filepath.toLowerCase();
+ if (safeSuffixes.some(suffix => lower.endsWith(suffix))) {
+ return true;
+ }
+ for (const name of safeBasenames) {
+ if (lower === name || lower.endsWith(`/${name}`)) {
+ return true;
+ }
+ }
+ return false;
+ };
+ const totalFiles = files.length;
+ const totalChanges = files.reduce((acc, file) => acc + (file.changes || 0), 0);
+ const safeFiles = files.filter(file => isSafePath(file.filename));
+ const unsafeFiles = files.filter(file => !isSafePath(file.filename));
+ const safeChanges = safeFiles.reduce((acc, file) => acc + (file.changes || 0), 0);
+ const unsafeChanges = totalChanges - safeChanges;
+ const allSafe = unsafeFiles.length === 0;
+ const limitFiles = Number(process.env.AUTOFIX_MAX_FILES || 40);
+ const limitChanges = Number(process.env.AUTOFIX_MAX_CHANGES || 800);
+ const baseEligible = labels.includes(optLabel);
+ const safeEligible = baseEligible && safeFiles.length > 0 && safeFiles.length <= limitFiles && safeChanges <= limitChanges;
+ result.small_eligible = safeEligible ? 'true' : 'false';
+ result.file_count = String(totalFiles);
+ result.change_count = String(totalChanges);
+ result.safe_paths = safeFiles.map(file => file.filename).join('\n');
+ result.unsafe_paths = unsafeFiles.map(file => file.filename).join('\n');
+ result.safe_file_count = String(safeFiles.length);
+ result.unsafe_file_count = String(unsafeFiles.length);
+ result.safe_change_count = String(safeChanges);
+ result.unsafe_change_count = String(unsafeChanges);
+ result.all_safe = allSafe ? 'true' : 'false';
+ }
+
+ for (const [key, value] of Object.entries(result)) {
+ core.setOutput(key, value ?? '');
+ }
+}
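The safe-path classifier inside `resolveAutofixContext` can be lifted out for a standalone check; the suffix and basename lists below are copied from the function above:

```javascript
// Standalone sketch of the autofix safe-path classifier: a path is "safe"
// when it has a safe suffix or ends with a known config basename.
const safeSuffixes = ['.py', '.pyi', '.toml', '.cfg', '.ini'];
const safeBasenames = new Set([
  'pyproject.toml', 'ruff.toml', '.ruff.toml', 'mypy.ini',
  '.pre-commit-config.yaml', 'pytest.ini', '.coveragerc',
]);
function isSafePath(filepath) {
  const lower = filepath.toLowerCase();
  if (safeSuffixes.some(suffix => lower.endsWith(suffix))) return true;
  for (const name of safeBasenames) {
    if (lower === name || lower.endsWith(`/${name}`)) return true;
  }
  return false;
}
```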
+
+async function inspectFailingJobs({ github, context, core }) {
+ const run = context.payload.workflow_run;
+ const owner = context.repo.owner;
+ const repo = context.repo.repo;
+ const conclusion = (run.conclusion || '').toLowerCase();
+
+ const setOutputs = ({
+ trivial = 'false',
+ names = '',
+ count = '0',
+ incomplete = 'false',
+ hasJobs = 'false',
+ } = {}) => {
+ core.setOutput('trivial', trivial);
+ core.setOutput('names', names);
+ core.setOutput('count', count);
+ core.setOutput('incomplete', incomplete);
+ core.setOutput('has_jobs', hasJobs);
+ };
+
+ if (!run.id) {
+ setOutputs({ incomplete: 'true' });
+ return;
+ }
+
+ if (conclusion === 'success') {
+ setOutputs();
+ return;
+ }
+
+ if (conclusion && conclusion !== 'failure') {
+ setOutputs({ incomplete: 'true' });
+ return;
+ }
+
+ const keywords = (process.env.AUTOFIX_TRIVIAL_KEYWORDS || 'lint,format,style,doc,ruff,mypy,type,black,isort,label,test').split(',')
+ .map(str => str.trim().toLowerCase())
+ .filter(Boolean);
+
+ const jobs = await github.paginate(github.rest.actions.listJobsForWorkflowRun, {
+ owner,
+ repo,
+ run_id: run.id,
+ per_page: 100,
+ });
+
+ const failing = jobs.filter(job => {
+ const c = (job.conclusion || '').toLowerCase();
+ return c && c !== 'success' && c !== 'skipped';
+ });
+
+ if (!failing.length) {
+ setOutputs();
+ return;
+ }
+
+ const actionableConclusions = new Set(['failure']);
+ const incomplete = failing.some(job => !actionableConclusions.has((job.conclusion || '').toLowerCase()));
+ const allTrivial = failing.every(job => {
+ const name = (job.name || '').toLowerCase();
+ return keywords.some(keyword => name.includes(keyword));
+ });
+
+ setOutputs({
+ trivial: allTrivial ? 'true' : 'false',
+ names: failing.map(job => job.name).join(', '),
+ count: String(failing.length),
+ incomplete: incomplete ? 'true' : 'false',
+ hasJobs: 'true',
+ });
+}
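The classification rule in `inspectFailingJobs` reduces to: drop successful and skipped jobs, then require every remaining job name to contain at least one trivial keyword. A minimal sketch, using the default keyword list from the script:

```javascript
// Sketch of the trivial-failure classification for a list of jobs.
const trivialKeywords = 'lint,format,style,doc,ruff,mypy,type,black,isort,label,test'
  .split(',').map(str => str.trim().toLowerCase()).filter(Boolean);
function classifyFailing(jobs) {
  const failing = jobs.filter(job => {
    const c = (job.conclusion || '').toLowerCase();
    return c && c !== 'success' && c !== 'skipped';
  });
  const allTrivial = failing.length > 0 && failing.every(job =>
    trivialKeywords.some(keyword => (job.name || '').toLowerCase().includes(keyword)));
  return { failing, allTrivial };
}
```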
+
+async function evaluateAutofixRerunGuard({ github, context, core }) {
+ const prNumber = Number(process.env.PR_NUMBER || '0');
+ const headSha = (process.env.HEAD_SHA || '').toLowerCase();
+ const sameRepo = (process.env.SAME_REPO || '').toLowerCase() === 'true';
+ const hasPatchLabel = (process.env.HAS_PATCH_LABEL || '').toLowerCase() === 'true';
+ const markerPrefix = '` : null;
+ const prLine = prNumber ? `Tracked PR: #${prNumber}` : null;
+
+ const slugify = (value) => {
+ if (!value) {
+ return 'unknown-workflow';
+ }
+ const slug = String(value)
+ .toLowerCase()
+ .replace(/[^a-z0-9]+/g, '-')
+ .replace(/^-+|-+$/g, '')
+ .replace(/--+/g, '-')
+ .trim();
+ return slug ? slug.slice(0, 80) : 'unknown-workflow';
+ };
+
+ const RATE_LIMIT_MINUTES = parseInt(process.env.RATE_LIMIT_MINUTES || '15', 10);
+ const STACK_TOKENS_ENABLED = /^true$/i.test(process.env.STACK_TOKENS_ENABLED || 'true');
+ const STACK_TOKEN_MAX_LEN = parseInt(process.env.STACK_TOKEN_MAX_LEN || '160', 10);
+ const FAILURE_INACTIVITY_HEAL_HOURS = parseFloat(process.env.FAILURE_INACTIVITY_HEAL_HOURS || '0');
+ const HEAL_THRESHOLD_DESC = `Auto-heal after ${process.env.AUTO_HEAL_INACTIVITY_HOURS || '24'}h stability (success path)`;
+
+ const jobsResp = await github.rest.actions.listJobsForWorkflowRun({ owner, repo, run_id: runId, per_page: 100 });
+ const failedJobs = jobsResp.data.jobs.filter(j => {
+ const c = (j.conclusion || '').toLowerCase();
+ return c && c !== 'success' && c !== 'skipped';
+ });
+ if (!failedJobs.length) {
+ core.info('No failed jobs found despite run-level failure — aborting.');
+ return;
+ }
+
+ let stackTokenNote = 'Stack tokens disabled';
+ let stackToken = null;
+ if (STACK_TOKENS_ENABLED) {
+ const zlib = require('zlib');
+ const STACK_TOKEN_RAW = /^true$/i.test(process.env.STACK_TOKEN_RAW || 'false');
+ function normalizeToken(raw, maxLen) {
+ if (STACK_TOKEN_RAW) return (raw || 'no-stack').slice(0, maxLen);
+ if (!raw) return 'no-stack';
+ let t = raw;
+ const ISO_TIMESTAMP_START_REGEX = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?Z\s*/;
+ t = t.replace(ISO_TIMESTAMP_START_REGEX, '');
+ t = t.replace(/\s+\[[0-9]{1,3}%\]\s*/g, ' ');
+ t = t.replace(/\s+/g, ' ').trim();
+ const m = t.match(/^[^:]+: [^:]+/);
+ if (m) t = m[0];
+ if (!t) t = 'no-stack';
+ return t.slice(0, maxLen);
+ }
+ async function extractStackToken(job) {
+ try {
+ const logs = await github.rest.actions.downloadJobLogsForWorkflowRun({ owner, repo, job_id: job.id });
+ const buffer = Buffer.from(logs.data);
+ // Detect gzip by its magic bytes instead of guessing from the job name.
+ const text = buffer[0] === 0x1f && buffer[1] === 0x8b
+ ? zlib.gunzipSync(buffer).toString('utf8')
+ : buffer.toString('utf8');
+ const lines = text.split(/\r?\n/);
+ for (const line of lines) {
+ if (line.includes('Traceback') || line.includes('Error:')) {
+ return normalizeToken(line, STACK_TOKEN_MAX_LEN);
+ }
+ }
+ } catch (error) {
+ core.info(`Failed to extract stack token for job ${job.id}: ${error.message}`);
+ }
+ return null;
+ }
+
+ for (const job of failedJobs) {
+ stackToken = await extractStackToken(job);
+ if (stackToken) {
+ break;
+ }
+ }
+
+ if (stackToken) {
+ stackTokenNote = `Stack token: ${stackToken}`;
+ } else {
+ stackTokenNote = 'Stack token unavailable';
+ }
+ }
+
+ const signatureParts = failedJobs.map(job => `${job.name} (${job.conclusion || job.status || 'unknown'})`);
+ const title = `${slugify(run.name || run.display_title || 'Gate')} failure: ${signatureParts.join(', ')}`;
+ const descriptionLines = [
+ `Workflow: ${run.name || run.display_title || 'Gate'}`,
+ `Run ID: ${runId}`,
+ `Run URL: ${run.html_url || ''}`,
+ prLine,
+ stackTokenNote,
+ ].filter(Boolean);
+
+ const labels = ['ci-failure'];
+ const cooldownHours = parseFloat(process.env.NEW_ISSUE_COOLDOWN_HOURS || '12');
+ const retryMs = parseInt(process.env.COOLDOWN_RETRY_MS || '3000', 10);
+
+ async function attemptCooldownAppend(stage) {
+ try {
+ const listName = `failure-cooldown-${slugify(run.name || 'gate')}`;
+ const response = await github.rest.actions.getRepoVariable({ owner, repo, name: listName });
+ const lastEntries = response.data.value ? JSON.parse(response.data.value) : [];
+ const now = Date.now();
+ const recent = lastEntries.find(entry => now - entry.timestamp < cooldownHours * 3_600_000);
+ if (recent) {
+ core.info(`Cooldown active (${stage}); skipping failure issue creation.`);
+ return 'cooldown';
+ }
+ lastEntries.push({ timestamp: now, run_id: runId });
+ await github.rest.actions.updateRepoVariable({ owner, repo, name: listName, value: JSON.stringify(lastEntries.slice(-25)) });
+ core.info(`Recorded cooldown entry for run ${runId} (${stage}).`);
+ return 'recorded';
+ } catch (error) {
+ core.info(`Cooldown list lookup failed (${stage}): ${error.message}`);
+ }
+ return 'error';
+ }
+
+ let cooldownState = await attemptCooldownAppend('initial');
+ if (cooldownState === 'error' && cooldownHours > 0 && retryMs > 0) {
+ await new Promise(r => setTimeout(r, retryMs));
+ cooldownState = await attemptCooldownAppend('retry');
+ }
+ if (cooldownState === 'cooldown') return;
+
+ const nowIso = new Date().toISOString();
+ const headerMeta = [
+ 'Occurrences: 1',
+ `Last seen: ${nowIso}`,
+ `Healing threshold: ${HEAL_THRESHOLD_DESC}`,
+ '',
+ ].join('\n');
+ const bodyBlock = [
+ '## Failure summary',
+ ...failedJobs.map(job => `- ${job.name} (${job.conclusion || job.status || 'unknown'})`),
+ '',
+ stackTokenNote,
+ '',
+ ...(prTag ? [prTag] : []),
+ ].join('\n');
+ const created = await github.rest.issues.create({ owner, repo, title, body: headerMeta + bodyBlock, labels });
+ core.info(`Created new failure issue #${created.data.number}`);
+}
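The cooldown test at the heart of `attemptCooldownAppend` is a simple recency predicate over recorded timestamps; extracted as a pure function (name hypothetical, `now` injectable for testing):

```javascript
// An entry younger than the cooldown window suppresses a new failure issue.
function isWithinCooldown(entries, cooldownHours, now = Date.now()) {
  return entries.some(entry => now - entry.timestamp < cooldownHours * 3_600_000);
}
```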
+
+async function resolveFailureIssuesForRecoveredPR({ github, context, core }) {
+ const pr = parseInt(process.env.PR_NUMBER || '', 10);
+ if (!Number.isFinite(pr) || pr <= 0) {
+ core.info('No PR number detected; skipping failure issue resolution.');
+ return;
+ }
+ const { owner, repo } = context.repo;
+ const tag = ``;
+ const query = `repo:${owner}/${repo} is:issue is:open label:ci-failure "${tag}"`;
+ const search = await github.rest.search.issuesAndPullRequests({ q: query, per_page: 10 });
+ if (!search.data.items.length) {
+ core.info(`No open failure issues tagged for PR #${pr}.`);
+ return;
+ }
+ const runUrl = process.env.RUN_URL || (context.payload.workflow_run && context.payload.workflow_run.html_url) || '';
+ const nowIso = new Date().toISOString();
+ for (const item of search.data.items) {
+ const issue_number = item.number;
+ const issue = await github.rest.issues.get({ owner, repo, issue_number });
+ let body = issue.data.body || '';
+ body = body
+ .replace(/^Resolved:.*$/gim, '')
+ .replace(/\n{3,}/g, '\n\n')
+ .replace(/^\n+/, '')
+ .replace(/\s+$/, '');
+ body = `Resolved: ${nowIso}\n${body}`.replace(/\n{3,}/g, '\n\n').replace(/\s+$/, '');
+ if (body) {
+ body = `${body}\n`;
+ }
+
+ const commentLines = [
+ `Resolution: Gate run succeeded for PR #${pr}.`,
+ runUrl ? `Success run: ${runUrl}` : null,
+ `Timestamp: ${nowIso}`,
+ ].filter(Boolean);
+ if (commentLines.length) {
+ await github.rest.issues.createComment({
+ owner,
+ repo,
+ issue_number,
+ body: commentLines.join('\n'),
+ });
+ }
+ await github.rest.issues.update({ owner, repo, issue_number, state: 'closed', body });
+ core.info(`Closed failure issue #${issue_number} for PR #${pr}.`);
+ }
+}
+
+async function autoHealFailureIssues({ github, context, core }) {
+ const { owner, repo } = context.repo;
+ const INACTIVITY_HOURS = parseFloat(process.env.AUTO_HEAL_INACTIVITY_HOURS || '24');
+ const now = Date.now();
+ const q = `repo:${owner}/${repo} is:issue is:open label:ci-failure`;
+ const search = await github.rest.search.issuesAndPullRequests({ q, per_page: 100 });
+ for (const item of search.data.items) {
+ const issue_number = item.number;
+ const issue = await github.rest.issues.get({ owner, repo, issue_number });
+ const body = issue.data.body || '';
+ const m = body.match(/Last seen:\s*(.+)/i);
+ if (!m) continue;
+ const lastSeenTs = Date.parse(m[1].trim());
+ if (Number.isNaN(lastSeenTs)) continue;
+ const hours = (now - lastSeenTs) / 3_600_000;
+ if (hours >= INACTIVITY_HOURS) {
+ const comment = `Auto-heal: no reoccurrence for ${hours.toFixed(1)}h (>= ${INACTIVITY_HOURS}h). Closing.`;
+ await github.rest.issues.createComment({ owner, repo, issue_number, body: comment });
+ await github.rest.issues.update({ owner, repo, issue_number, state: 'closed' });
+ core.info(`Closed healed failure issue #${issue_number}`);
+ }
+ }
+ core.summary.addHeading('Success Run Summary');
+ core.summary.addRaw('Checked for stale failure issues and applied auto-heal where applicable.');
+ await core.summary.write();
+}
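The inactivity test in `autoHealFailureIssues` can be isolated the same way: parse the `Last seen:` timestamp and compare elapsed hours against the threshold (function name and injectable `now` are illustrative):

```javascript
// An issue heals when its "Last seen" timestamp is older than the threshold.
function shouldAutoHeal(lastSeenIso, inactivityHours, now = Date.now()) {
  const lastSeenTs = Date.parse(lastSeenIso);
  if (Number.isNaN(lastSeenTs)) return false;
  return (now - lastSeenTs) / 3_600_000 >= inactivityHours;
}
```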
+
+async function snapshotFailureIssues({ github, context, core }) {
+ const { owner, repo } = context.repo;
+ const q = `repo:${owner}/${repo} is:issue is:open label:ci-failure`;
+ const search = await github.rest.search.issuesAndPullRequests({ q, per_page: 100 });
+ const issues = [];
+ for (const item of search.data.items) {
+ const issue = await github.rest.issues.get({ owner, repo, issue_number: item.number });
+ const body = issue.data.body || '';
+ const occ = (body.match(/Occurrences:\s*(\d+)/i) || [])[1] || null;
+ const lastSeen = (body.match(/Last seen:\s*(.*)/i) || [])[1] || null;
+ issues.push({
+ number: issue.data.number,
+ title: issue.data.title,
+ occurrences: occ ? parseInt(occ, 10) : null,
+ last_seen: lastSeen,
+ url: issue.data.html_url,
+ created_at: issue.data.created_at,
+ updated_at: issue.data.updated_at,
+ });
+ }
+ fs.mkdirSync('artifacts', { recursive: true });
+ fs.writeFileSync(
+ path.join('artifacts', 'ci_failures_snapshot.json'),
+ JSON.stringify({ generated_at: new Date().toISOString(), issues }, null, 2),
+ );
+ core.info(`Snapshot written with ${issues.length} open failure issues.`);
+}
+
+function parsePullNumber(value) {
+ const pr = Number(value || 0);
+ return Number.isFinite(pr) && pr > 0 ? pr : null;
+}
+
+async function applyCiFailureLabel({ github, context, core, prNumber, label }) {
+ const pr = parsePullNumber(prNumber ?? process.env.PR_NUMBER);
+ if (!pr) {
+ core.info('No PR number detected; skipping ci-failure label application.');
+ return;
+ }
+
+ const { owner, repo } = context.repo;
+ const targetLabel = label || 'ci-failure';
+ try {
+ await github.rest.issues.addLabels({ owner, repo, issue_number: pr, labels: [targetLabel] });
+ core.info(`Applied ${targetLabel} label to PR #${pr}.`);
+ } catch (error) {
+ if (error?.status === 422) {
+ core.info(`${targetLabel} label already present on PR #${pr}.`);
+ } else {
+ throw error;
+ }
+ }
+}
+
+async function removeCiFailureLabel({ github, context, core, prNumber, label }) {
+ const pr = parsePullNumber(prNumber ?? process.env.PR_NUMBER);
+ if (!pr) {
+ core.info('No PR number detected; skipping ci-failure label removal.');
+ return;
+ }
+
+ const { owner, repo } = context.repo;
+ const targetLabel = label || 'ci-failure';
+ try {
+ await github.rest.issues.removeLabel({ owner, repo, issue_number: pr, name: targetLabel });
+ core.info(`Removed ${targetLabel} label from PR #${pr}.`);
+ } catch (error) {
+ if (error?.status === 404) {
+ core.info(`${targetLabel} label not present on PR #${pr}.`);
+ } else {
+ throw error;
+ }
+ }
+}
+
+async function ensureAutofixComment({ github, context, core }) {
+ const prNumber = parsePullNumber(process.env.PR_NUMBER);
+ const headShaRaw = (process.env.HEAD_SHA || '').trim();
+ if (!prNumber || !headShaRaw) {
+ core.info('Autofix comment prerequisites missing; skipping.');
+ return;
+ }
+
+ const headShaLower = headShaRaw.toLowerCase();
+ const fileListRaw = process.env.FILE_LIST || '';
+ const gateUrl = (process.env.GATE_RUN_URL || '').trim();
+ const rerunTriggered = /^true$/i.test(process.env.GATE_RERUN_TRIGGERED || '');
+ const runId = context.payload?.workflow_run?.id;
+
+ const markerParts = [`head=${headShaLower}`];
+ if (runId) {
+ markerParts.push(`run=${runId}`);
+ }
+ const marker = ``;
+
+ const { owner, repo } = context.repo;
+ const comments = await github.paginate(github.rest.issues.listComments, {
+ owner,
+ repo,
+ issue_number: prNumber,
+ per_page: 100,
+ });
+
+ const existing = comments.find(comment => {
+ const body = (comment.body || '').toLowerCase();
+ if (!body.includes('/g, '[HTML_COMMENT_REMOVED]');
+
+ // Remove zero-width characters
+ sanitized = sanitized.replace(/[\u200B-\u200D\uFEFF\u2060-\u2064\u00AD]/g, '');
+
+ // Mask potential secrets (simple patterns)
+ sanitized = sanitized.replace(/\b(ghp_|gho_|ghs_|sk-)[A-Za-z0-9]{20,}/g, '[SECRET_MASKED]');
+
+ // Truncate very long base64-like strings
+ sanitized = sanitized.replace(/[A-Za-z0-9+/]{100,}={0,2}/g, '[BASE64_TRUNCATED]');
+
+ return sanitized;
+}
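The last three sanitization steps (zero-width stripping, secret masking, base64 truncation) can be exercised standalone; the regexes below are copied from the function above, and the helper name is illustrative:

```javascript
// Sketch of the zero-width, secret, and base64 sanitization steps above.
function maskSensitive(input) {
  let sanitized = input.replace(/[\u200B-\u200D\uFEFF\u2060-\u2064\u00AD]/g, '');
  sanitized = sanitized.replace(/\b(ghp_|gho_|ghs_|sk-)[A-Za-z0-9]{20,}/g, '[SECRET_MASKED]');
  sanitized = sanitized.replace(/[A-Za-z0-9+/]{100,}={0,2}/g, '[BASE64_TRUNCATED]');
  return sanitized;
}
```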
+
+// ---------------------------------------------------------------------------
+// Comprehensive guard evaluation
+// ---------------------------------------------------------------------------
+
+/**
+ * Evaluate all prompt injection guards for a PR.
+ * @param {object} params
+ * @param {object} params.github - GitHub API client
+ * @param {object} params.context - GitHub Actions context
+ * @param {object} params.pr - Pull request object
+ * @param {string} params.actor - The actor triggering the workflow
+ * @param {string} [params.promptContent] - Optional prompt content to scan
+ * @param {object} [params.config] - Configuration options
+ * @param {string[]} [params.config.allowedUsers] - Explicitly allowed users
+ * @param {string[]} [params.config.allowedBots] - Explicitly allowed bots
+ * @param {boolean} [params.config.requireCollaborator] - Require collaborator status (default: true)
+ * @param {boolean} [params.config.blockForks] - Block forked PRs (default: true)
+ * @param {boolean} [params.config.scanContent] - Scan content for red flags (default: true)
+ * @param {object} [params.core] - GitHub Actions core for logging
+ * @returns {Promise<{
+ * allowed: boolean,
+ * blocked: boolean,
+ * reason: string,
+ * details: {
+ * fork: { isFork: boolean, reason: string },
+ * actor: { allowed: boolean, reason: string },
+ * collaborator: { isCollaborator: boolean, permission: string, reason: string },
+ * content: { flagged: boolean, matches: Array<{ pattern: string, match: string, index: number }> }
+ * }
+ * }>}
+ */
+async function evaluatePromptInjectionGuard({
+ github,
+ context,
+ pr,
+ actor,
+ promptContent = '',
+ config = {},
+ core,
+}) {
+ const blockForks = config.blockForks !== false;
+ const requireCollaborator = config.requireCollaborator !== false;
+ const scanContent = config.scanContent !== false;
+
+ const details = {
+ fork: { isFork: false, reason: '' },
+ actor: { allowed: true, reason: '' },
+ collaborator: { isCollaborator: false, permission: '', reason: '' },
+ content: { flagged: false, matches: [] },
+ };
+
+ // 1. Fork detection
+ if (blockForks && pr) {
+ details.fork = detectFork(pr);
+ if (details.fork.isFork) {
+ if (core) core.warning(`Blocked: PR is from a fork - ${details.fork.reason}`);
+ return {
+ allowed: false,
+ blocked: true,
+ reason: 'fork-pr-blocked',
+ details,
+ };
+ }
+ }
+
+ // 2. Actor allow-list check
+ details.actor = validateActorAllowList(actor, {
+ allowedUsers: config.allowedUsers,
+ allowedBots: config.allowedBots,
+ });
+
+ // 3. Collaborator check (if required and not already allowed)
+ if (requireCollaborator && !details.actor.allowed) {
+ details.collaborator = await checkCollaborator({ github, context, actor });
+
+ if (!details.collaborator.isCollaborator) {
+ if (core) core.warning(`Blocked: Actor ${actor} is not a collaborator - ${details.collaborator.reason}`);
+ return {
+ allowed: false,
+ blocked: true,
+ reason: 'non-collaborator-blocked',
+ details,
+ };
+ }
+ }
+
+ // 4. Check for bypass label
+ const prLabels = (pr?.labels || []).map(l => (typeof l === 'string' ? l : l.name || '').toLowerCase());
+ const hasBypassLabel = prLabels.includes(BYPASS_LABEL.toLowerCase());
+ if (hasBypassLabel) {
+ if (core) core.info(`Security gate bypassed via ${BYPASS_LABEL} label`);
+ return {
+ allowed: true,
+ blocked: false,
+ reason: 'bypass-label',
+ details,
+ };
+ }
+
+ // 5. Content red-flag scanning - SKIP for collaborators (they're trusted)
+ const isCollaborator = details.actor.allowed || details.collaborator.isCollaborator;
+ if (scanContent && promptContent && !isCollaborator) {
+ details.content = scanForRedFlags(promptContent);
+
+ if (details.content.flagged) {
+ if (core) {
+ core.warning(`Blocked: Prompt content contains red-flag patterns`);
+ for (const match of details.content.matches) {
+ core.warning(` - Pattern: ${match.pattern.substring(0, 50)}... Match: ${match.match}`);
+ }
+ }
+ return {
+ allowed: false,
+ blocked: true,
+ reason: 'red-flag-content-detected',
+ details,
+ };
+ }
+ }
+
+ // All checks passed
+ const finalAllowed = details.actor.allowed || details.collaborator.isCollaborator;
+ return {
+ allowed: finalAllowed,
+ blocked: !finalAllowed,
+ reason: finalAllowed ? 'all-checks-passed' : 'actor-not-authorized',
+ details,
+ };
+}
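Step 4's bypass check normalizes labels that may arrive either as strings or as `{ name }` objects. A minimal sketch; the `BYPASS_LABEL` value below is a placeholder, since the real constant is defined elsewhere in this module:

```javascript
// Sketch of the bypass-label check with mixed label shapes.
const BYPASS_LABEL = 'security:bypass-injection-guard'; // hypothetical value
function hasBypassLabel(labels) {
  const names = (labels || []).map(l => (typeof l === 'string' ? l : l.name || '').toLowerCase());
  return names.includes(BYPASS_LABEL.toLowerCase());
}
```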
+
+// ---------------------------------------------------------------------------
+// Exports
+// ---------------------------------------------------------------------------
+
+module.exports = {
+ // Core functions
+ detectFork,
+ validateActorAllowList,
+ checkCollaborator,
+ scanForRedFlags,
+ sanitizeForDisplay,
+ evaluatePromptInjectionGuard,
+
+ // Constants for testing/customization
+ DEFAULT_ALLOWED_BOTS,
+ DEFAULT_RED_FLAG_PATTERNS,
+ BYPASS_LABEL,
+};
diff --git a/templates/consumer-repo/.github/scripts/verifier_ci_query.js b/templates/consumer-repo/.github/scripts/verifier_ci_query.js
index 61710cc35..f8024236b 100644
--- a/templates/consumer-repo/.github/scripts/verifier_ci_query.js
+++ b/templates/consumer-repo/.github/scripts/verifier_ci_query.js
@@ -67,7 +67,8 @@ async function withRetry(apiCall, options = {}) {
? delays
: buildRetryDelays(maxRetries, baseDelayMs);
let lastError = null;
- for (let attempt = 0; attempt <= retryDelays.length; attempt += 1) {
+ // Loop allows retryDelays.length retries (attempt 0 = first try, then retries)
+ for (let attempt = 0; attempt < retryDelays.length + 1; attempt += 1) {
try {
return await apiCall();
} catch (error) {
diff --git a/tests/scripts/test_validate_template_sync.py b/tests/scripts/test_validate_template_sync.py
new file mode 100644
index 000000000..f259b8680
--- /dev/null
+++ b/tests/scripts/test_validate_template_sync.py
@@ -0,0 +1,170 @@
+"""Tests for scripts/validate_template_sync.py"""
+
+import shutil
+import subprocess
+import sys
+from pathlib import Path
+
+
+def create_test_structure(tmp_path: Path) -> tuple[Path, Path]:
+ """Create temporary source and template directories."""
+ source = tmp_path / ".github" / "scripts"
+ template = tmp_path / "templates" / "consumer-repo" / ".github" / "scripts"
+ source.mkdir(parents=True)
+ template.mkdir(parents=True)
+
+ # Copy validator script to tmp_path so it can find paths relative to cwd
+ script_dir = tmp_path / "scripts"
+ script_dir.mkdir(parents=True)
+ shutil.copy("scripts/validate_template_sync.py", script_dir / "validate_template_sync.py")
+
+ return source, template
+
+
+def test_validator_passes_when_files_match(tmp_path):
+ """Validator should exit 0 when source and template files match."""
+ source, template = create_test_structure(tmp_path)
+
+ # Create matching files
+ (source / "test.js").write_text("console.log('test');")
+ (template / "test.js").write_text("console.log('test');")
+
+ result = subprocess.run(
+ [sys.executable, "scripts/validate_template_sync.py"],
+ cwd=tmp_path,
+ capture_output=True,
+ text=True,
+ )
+
+ assert result.returncode == 0
+ assert "✅ All template files in sync" in result.stdout
+
+
+def test_validator_fails_on_hash_mismatch(tmp_path):
+ """Validator should exit 1 when files have different content."""
+ source, template = create_test_structure(tmp_path)
+
+ # Create mismatched files
+ (source / "test.js").write_text("console.log('source');")
+ (template / "test.js").write_text("console.log('template');")
+
+ result = subprocess.run(
+ [sys.executable, "scripts/validate_template_sync.py"],
+ cwd=tmp_path,
+ capture_output=True,
+ text=True,
+ )
+
+ assert result.returncode == 1
+ assert "❌ Template files out of sync" in result.stdout
+ assert "test.js" in result.stdout
+
+
+def test_validator_fails_on_missing_template(tmp_path):
+ """Validator should exit 1 when source file exists but template doesn't."""
+ source, template = create_test_structure(tmp_path)
+
+ # Create source file without template counterpart
+ (source / "new_file.js").write_text("console.log('new');")
+
+ result = subprocess.run(
+ [sys.executable, "scripts/validate_template_sync.py"],
+ cwd=tmp_path,
+ capture_output=True,
+ text=True,
+ )
+
+ assert result.returncode == 1
+ assert "❌ Template files out of sync" in result.stdout
+ assert "new_file.js" in result.stdout
+ assert "(MISSING - needs to be created)" in result.stdout
+
+
+def test_validator_handles_missing_template_directory(tmp_path):
+ """Validator should handle missing template directory gracefully."""
+ source, _ = create_test_structure(tmp_path)
+
+ # Remove template directory entirely
+ shutil.rmtree(tmp_path / "templates")
+
+ (source / "test.js").write_text("console.log('test');")
+
+ result = subprocess.run(
+ [sys.executable, "scripts/validate_template_sync.py"],
+ cwd=tmp_path,
+ capture_output=True,
+ text=True,
+ )
+
+ # Should fail with clear error
+ assert result.returncode == 1
+ assert "Template directory not found" in result.stdout or "test.js" in result.stdout
+
+
+def test_validator_suggests_sync_command(tmp_path):
+ """Validator should suggest running sync script when validation fails."""
+ source, template = create_test_structure(tmp_path)
+
+ (source / "test.js").write_text("console.log('source');")
+ (template / "test.js").write_text("console.log('template');")
+
+ result = subprocess.run(
+ [sys.executable, "scripts/validate_template_sync.py"],
+ cwd=tmp_path,
+ capture_output=True,
+ text=True,
+ )
+
+ assert result.returncode == 1
+ assert "./scripts/sync_templates.sh" in result.stdout
+
+
+def test_validator_handles_multiple_mismatches(tmp_path):
+ """Validator should report all mismatched files."""
+ source, template = create_test_structure(tmp_path)
+
+ # Create multiple mismatches
+ (source / "file1.js").write_text("console.log('1');")
+ (template / "file1.js").write_text("console.log('old1');")
+
+ (source / "file2.js").write_text("console.log('2');")
+ (template / "file2.js").write_text("console.log('old2');")
+
+ (source / "file3.js").write_text("console.log('3');") # Missing in template
+
+ result = subprocess.run(
+ [sys.executable, "scripts/validate_template_sync.py"],
+ cwd=tmp_path,
+ capture_output=True,
+ text=True,
+ )
+
+ assert result.returncode == 1
+ assert "file1.js" in result.stdout
+ assert "file2.js" in result.stdout
+ assert "file3.js" in result.stdout
+ assert "(MISSING - needs to be created)" in result.stdout
+
+
+def test_validator_ignores_non_js_files(tmp_path):
+ """Validator should only check .js files."""
+ source, template = create_test_structure(tmp_path)
+
+ # Create .js file that matches
+ (source / "test.js").write_text("console.log('test');")
+ (template / "test.js").write_text("console.log('test');")
+
+ # Create non-.js files that don't match (should be ignored)
+ (source / "README.md").write_text("# Source")
+ (template / "README.md").write_text("# Template")
+
+ result = subprocess.run(
+ [sys.executable, "scripts/validate_template_sync.py"],
+ cwd=tmp_path,
+ capture_output=True,
+ text=True,
+ )
+
+ # Should pass because only .js files are checked
+ assert result.returncode == 0
+ assert "✅ All template files in sync" in result.stdout
diff --git a/tests/workflows/test_workflow_naming.py b/tests/workflows/test_workflow_naming.py
index cdb2c4060..bc881b2ca 100644
--- a/tests/workflows/test_workflow_naming.py
+++ b/tests/workflows/test_workflow_naming.py
@@ -213,6 +213,7 @@ def test_workflow_display_names_are_unique():
"health-67-integration-sync-check.yml": "Health 67 Integration Sync Check",
"health-70-validate-sync-manifest.yml": "Validate Sync Manifest",
"health-71-sync-health-check.yml": "Health 71 Sync Health Check",
+ "health-72-template-sync.yml": "Health 72 Template Sync",
"maint-68-sync-consumer-repos.yml": "Maint 68 Sync Consumer Repos",
"maint-69-sync-integration-repo.yml": "Maint 69 Sync Integration Repo",
"maint-60-release.yml": "Maint 60 Release",