
chore: sync workflow templates#150

Closed
stranske wants to merge 1 commit into main from sync/workflows-e11fa23a80b6

Conversation

@stranske (Owner) commented Jan 5, 2026

Sync Summary

Files Updated

  • keepalive_loop.js: Core keepalive loop logic
  • keepalive_prompt_routing.js: Prompt routing logic for keepalive - determines which prompt template to use
  • issue_formatter.py: Issue formatter - converts raw text to AGENT_ISSUE_TEMPLATE format
  • format_issue.md: Prompt template for LLM-based issue formatting
  • agents-guard.js: Guards against unauthorized agent workflow file changes

Files Skipped

  • pr-00-gate.yml: File exists and sync_mode is create_only
  • ci.yml: File exists and sync_mode is create_only
  • dependabot.yml: File exists and sync_mode is create_only
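The skip behaviour above (existing files are left untouched when their `sync_mode` is `create_only`) could be sketched as follows. This is an illustration, not the actual sync implementation; the function name and mode strings are assumptions based on the summary.

```python
from pathlib import Path


def should_write(dest: Path, sync_mode: str) -> bool:
    """Decide whether a synced template should be written to dest.

    'create_only' files are written only when the destination is missing,
    which is why pr-00-gate.yml, ci.yml, and dependabot.yml were skipped.
    Any other mode (e.g. an assumed 'overwrite') always syncs.
    """
    if sync_mode == "create_only":
        return not dest.exists()
    return True
```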

Review Checklist

  • CI passes with updated workflows
  • No repo-specific customizations were overwritten

Source: stranske/Workflows
Manifest: .github/sync-manifest.yml

Automated sync from stranske/Workflows
Template hash: e11fa23a80b6

Changes synced from sync-manifest.yml
stranske added the sync (Automated sync from Workflows) label Jan 5, 2026
Copilot AI review requested due to automatic review settings January 5, 2026 17:11
stranske added the automated (Automated sync from Workflows) label Jan 5, 2026
github-actions bot (Contributor) commented Jan 5, 2026

⚠️ Action Required: Unable to determine the source issue for PR #150. The PR title, branch name, or body must contain the issue number (e.g. `#123`, branch `issue-123`, or the hidden marker ).

github-actions bot (Contributor) commented Jan 5, 2026

🤖 Keepalive Loop Status

PR #150 | Agent: Codex | Iteration 0/5

Current State

| Metric | Value |
| --- | --- |
| Iteration progress | `[----------]` 0/5 |
| Action | wait (missing-agent-label) |
| Disposition | skipped (transient) |
| Gate | success |
| Tasks | 0/8 complete |
| Keepalive | ❌ disabled |
| Autofix | ❌ disabled |

🔍 Failure Classification

| Field | Value |
| --- | --- |
| Error type | infrastructure |
| Error category | resource |
| Suggested recovery | Confirm the referenced resource exists (repo, PR, branch, workflow, or file). |
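A classification like the one above maps an (error type, category) pair to a recovery hint. The sketch below is hypothetical; the real routing lives in keepalive_loop.js and is not reproduced here.

```python
# Hypothetical lookup table mirroring the classification shown above.
# Only the ("infrastructure", "resource") entry is taken from this PR's
# bot output; the other entries are illustrative.
RECOVERY_HINTS: dict[tuple[str, str], str] = {
    ("infrastructure", "resource"): (
        "Confirm the referenced resource exists (repo, PR, branch, workflow, or file)."
    ),
    ("infrastructure", "auth"): "Check token scopes and expiry.",
    ("code", "test"): "Re-run the failing test locally and inspect the diff.",
}


def suggest_recovery(error_type: str, category: str) -> str:
    """Return a recovery hint for a classified failure, with a generic fallback."""
    return RECOVERY_HINTS.get((error_type, category), "Inspect the workflow logs for details.")
```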

github-actions bot added the autofix (Triggers autofix on PR) label Jan 5, 2026
github-actions bot (Contributor) commented Jan 5, 2026

| Field | Value |
| --- | --- |
| Status | ✅ no new diagnostics |
| History points | 0 |
| Timestamp | 2026-01-05 17:12:11 UTC |
| Report artifact | autofix-report-pr-150 |
| Remaining | ∅ |
| New | ∅ |

No additional artifacts

chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c3d5d91bfd

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +105 to +107
    from tools.llm_provider import DEFAULT_MODEL, GITHUB_MODELS_BASE_URL

    if github_token:

P1: Missing llm_provider dependency crashes formatter with LLM enabled

issue_formatter.py imports tools.llm_provider, but that module does not exist anywhere in the repo (searched with rg "llm_provider"). When langchain_openai is installed and either GITHUB_TOKEN or OPENAI_API_KEY is present (the default use case for LLM formatting), this import raises ModuleNotFoundError before the fallback formatter runs, causing the CLI to crash instead of emitting formatted output.
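One minimal way to resolve this finding is to add the module that issue_formatter.py expects, exposing the two names it imports. The sketch below uses placeholder values; the actual model id and base URL used by the Workflows repo are not visible in this PR and must be substituted.

```python
# tools/llm_provider.py: a minimal stub providing the two constants that
# issue_formatter.py imports. Both values below are placeholder
# assumptions, not the repo's real configuration.
DEFAULT_MODEL = "gpt-4o-mini"  # placeholder model id; verify before use
GITHUB_MODELS_BASE_URL = "https://models.github.ai/inference"  # assumed endpoint; verify
```

Alternatively, as Copilot notes further down, the constants could be defined directly in issue_formatter.py, avoiding the extra module.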


Copilot AI (Contributor) left a comment


Pull request overview

This PR syncs workflow templates and supporting scripts from the central Workflows repository, adding new LLM-based issue formatting capabilities and improving existing keepalive and guard logic.

Key Changes:

  • Adds a new Python script and prompt template for formatting GitHub issues into a structured agent-friendly template format
  • Expands CI failure mode detection aliases in keepalive prompt routing
  • Refines verification status checking logic in keepalive loop
  • Clarifies security documentation in the agents-guard script

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| scripts/langchain/issue_formatter.py | New Python script that converts raw GitHub issue text into AGENT_ISSUE_TEMPLATE format using LLM or fallback parsing logic |
| scripts/langchain/prompts/format_issue.md | New prompt template for LLM-based issue formatting with structured section headers and formatting rules |
| .github/scripts/keepalive_prompt_routing.js | Expands the FIX_MODES set to include additional CI failure mode aliases ('ci_failure', 'fix-ci-failure') |
| .github/scripts/keepalive_loop.js | Removes the .toLowerCase() call from the verification status check, making the comparison case-sensitive |
| .github/scripts/agents-guard.js | Updates a comment to clarify the security policy for label-based approval bypass |

    if not github_token and not openai_token:
        return None

    from tools.llm_provider import DEFAULT_MODEL, GITHUB_MODELS_BASE_URL

Copilot AI Jan 5, 2026


This import will fail because the tools.llm_provider module does not exist in the repository. The code attempts to import DEFAULT_MODEL and GITHUB_MODELS_BASE_URL from this non-existent module, which will cause an ImportError when _get_llm_client() is called and a GitHub or OpenAI token is available. Either create the missing module with these constants, or move these constants directly into this file.

Copilot uses AI. Check for mistakes.
  let reason = 'pending';
  const verificationStatus = normalise(state?.verification?.status);
- const verificationDone = ['done', 'verified', 'complete'].includes(verificationStatus.toLowerCase());
+ const verificationDone = ['done', 'verified', 'complete'].includes(verificationStatus);

Copilot AI Jan 5, 2026


Removing .toLowerCase() changes the behavior from case-insensitive to case-sensitive comparison. The normalise() function (line 31) returns a trimmed string but does not convert to lowercase. If state?.verification?.status contains values like "Done", "VERIFIED", or "Complete" (different casing), they will no longer match the lowercase strings in the array. Since line 1664 explicitly sets the status to lowercase values ('done' or 'pending'), this change should be safe if that's the only place the status is set. However, if the verification status can be set manually or from other sources with different casing, this could break the verification logic.

Suggested change
- const verificationDone = ['done', 'verified', 'complete'].includes(verificationStatus);
+ const verificationDone = ['done', 'verified', 'complete'].includes((verificationStatus || '').toLowerCase());

Comment on lines +1 to +366
#!/usr/bin/env python3
"""
Format raw issue text into the AGENT_ISSUE_TEMPLATE structure.

Run with:

    python scripts/langchain/issue_formatter.py \
        --input-file issue.md --output-file formatted.md
"""

from __future__ import annotations

import argparse
import json
import os
import re
import sys
from pathlib import Path
from typing import Any

ISSUE_FORMATTER_PROMPT = """
You are a formatting assistant. Convert the raw GitHub issue body into the
AGENT_ISSUE_TEMPLATE format with the exact section headers in order:

## Why
## Scope
## Non-Goals
## Tasks
## Acceptance Criteria
## Implementation Notes

Rules:
- Use bullet points ONLY in Tasks and Acceptance Criteria.
- Every task/criterion must be specific, verifiable, and sized for ~10 minutes.
- Use unchecked checkboxes: "- [ ]".
- Preserve file paths and concrete details when mentioned.
- If a section lacks content, use "_Not provided._" (or "- [ ] _Not provided._"
  for Tasks/Acceptance).
- Output ONLY the formatted markdown with these sections (no extra commentary).

Raw issue body:
{issue_body}
""".strip()

PROMPT_PATH = Path(__file__).resolve().parent / "prompts" / "format_issue.md"
FEEDBACK_PROMPT_PATH = Path(__file__).resolve().parent / "prompts" / "format_issue_feedback.md"

SECTION_ALIASES = {
    "why": ["why", "motivation", "summary", "goals"],
    "scope": ["scope", "background", "context", "overview"],
    "non_goals": ["non-goals", "nongoals", "out of scope", "constraints", "exclusions"],
    "tasks": ["tasks", "task list", "tasklist", "todo", "to do", "implementation"],
    "acceptance": [
        "acceptance criteria",
        "acceptance",
        "definition of done",
        "done criteria",
        "success criteria",
    ],
    "implementation": [
        "implementation notes",
        "implementation note",
        "notes",
        "details",
        "technical notes",
    ],
}

SECTION_TITLES = {
    "why": "Why",
    "scope": "Scope",
    "non_goals": "Non-Goals",
    "tasks": "Tasks",
    "acceptance": "Acceptance Criteria",
    "implementation": "Implementation Notes",
}

LIST_ITEM_REGEX = re.compile(r"^(\s*)([-*+]|\d+[.)])\s+(.*)$")
CHECKBOX_REGEX = re.compile(r"^\[([ xX])\]\s*(.*)$")


def _load_prompt() -> str:
    if PROMPT_PATH.is_file():
        base_prompt = PROMPT_PATH.read_text(encoding="utf-8").strip()
    else:
        base_prompt = ISSUE_FORMATTER_PROMPT

    if FEEDBACK_PROMPT_PATH.is_file():
        feedback = FEEDBACK_PROMPT_PATH.read_text(encoding="utf-8").strip()
        if feedback:
            return f"{base_prompt}\n\n{feedback}\n"
    return base_prompt


def _get_llm_client() -> tuple[object, str] | None:
    try:
        from langchain_openai import ChatOpenAI
    except ImportError:
        return None

    github_token = os.environ.get("GITHUB_TOKEN")
    openai_token = os.environ.get("OPENAI_API_KEY")
    if not github_token and not openai_token:
        return None

    from tools.llm_provider import DEFAULT_MODEL, GITHUB_MODELS_BASE_URL

    if github_token:
        return (
            ChatOpenAI(
                model=DEFAULT_MODEL,
                base_url=GITHUB_MODELS_BASE_URL,
                api_key=github_token,
                temperature=0.1,
            ),
            "github-models",
        )
    return (
        ChatOpenAI(
            model=DEFAULT_MODEL,
            api_key=openai_token,
            temperature=0.1,
        ),
        "openai",
    )


def _normalize_heading(text: str) -> str:
    cleaned = re.sub(r"[#*_:]+", " ", text).strip().lower()
    cleaned = re.sub(r"\s+", " ", cleaned)
    return cleaned


def _resolve_section(label: str) -> str | None:
    normalized = _normalize_heading(label)
    for key, aliases in SECTION_ALIASES.items():
        for alias in aliases:
            if normalized == _normalize_heading(alias):
                return key
    return None


def _strip_list_marker(line: str) -> str:
    match = LIST_ITEM_REGEX.match(line)
    if not match:
        return line
    return match.group(3).strip()


def _normalize_non_action_lines(lines: list[str]) -> list[str]:
    cleaned: list[str] = []
    in_fence = False
    for raw in lines:
        stripped = raw.strip()
        if stripped.startswith("```"):
            in_fence = not in_fence
            cleaned.append(raw)
            continue
        if in_fence:
            cleaned.append(raw)
            continue
        if not stripped:
            cleaned.append("")
            continue
        cleaned.append(_strip_list_marker(raw))
    return cleaned


def _normalize_checklist_lines(lines: list[str]) -> list[str]:
    cleaned: list[str] = []
    in_fence = False
    for raw in lines:
        stripped = raw.strip()
        if stripped.startswith("```"):
            in_fence = not in_fence
            cleaned.append(raw)
            continue
        if in_fence:
            cleaned.append(raw)
            continue
        if not stripped:
            continue
        match = LIST_ITEM_REGEX.match(raw)
        if match:
            indent, _, remainder = match.groups()
            checkbox = CHECKBOX_REGEX.match(remainder.strip())
            if checkbox:
                mark = "x" if checkbox.group(1).lower() == "x" else " "
                text = checkbox.group(2).strip()
                if text:
                    cleaned.append(f"{indent}- [{mark}] {text}")
                continue
            cleaned.append(f"{indent}- [ ] {remainder.strip()}")
        else:
            cleaned.append(f"- [ ] {stripped}")
    return cleaned


def _parse_sections(body: str) -> tuple[dict[str, list[str]], list[str]]:
    sections: dict[str, list[str]] = {key: [] for key in SECTION_TITLES}
    preamble: list[str] = []
    current: str | None = None
    for line in body.splitlines():
        heading_match = re.match(r"^\s*#{1,6}\s+(.*)$", line)
        if heading_match:
            section_key = _resolve_section(heading_match.group(1))
            if section_key:
                current = section_key
                continue
        if re.match(r"^\s*(?:\*\*|__)(.+?)(?:\*\*|__)\s*:?\s*$", line):
            inner = re.sub(r"^\s*(?:\*\*|__)(.+?)(?:\*\*|__)\s*:?\s*$", r"\1", line)
            section_key = _resolve_section(inner)
            if section_key:
                current = section_key
                continue
        if re.match(r"^\s*[A-Za-z][A-Za-z0-9\s-]{2,}:\s*$", line):
            label = line.split(":", 1)[0]
            section_key = _resolve_section(label)
            if section_key:
                current = section_key
                continue
        if current:
            sections[current].append(line)
        else:
            preamble.append(line)
    return sections, preamble


def _format_issue_fallback(issue_body: str) -> str:
    body = issue_body.strip()
    sections, preamble = _parse_sections(body)

    if preamble and not sections["scope"]:
        sections["scope"] = preamble

    why_lines = _normalize_non_action_lines(sections["why"])
    scope_lines = _normalize_non_action_lines(sections["scope"])
    non_goals_lines = _normalize_non_action_lines(sections["non_goals"])
    impl_lines = _normalize_non_action_lines(sections["implementation"])

    tasks_lines = _normalize_checklist_lines(sections["tasks"])
    acceptance_lines = _normalize_checklist_lines(sections["acceptance"])

    def join_or_placeholder(lines: list[str], placeholder: str) -> str:
        content = "\n".join(line for line in lines).strip()
        return content if content else placeholder

    why_text = join_or_placeholder(why_lines, "_Not provided._")
    scope_text = join_or_placeholder(scope_lines, "_Not provided._")
    non_goals_text = join_or_placeholder(non_goals_lines, "_Not provided._")
    impl_text = join_or_placeholder(impl_lines, "_Not provided._")
    tasks_text = join_or_placeholder(tasks_lines, "- [ ] _Not provided._")
    acceptance_text = join_or_placeholder(acceptance_lines, "- [ ] _Not provided._")

    parts = [
        "## Why",
        "",
        why_text,
        "",
        "## Scope",
        "",
        scope_text,
        "",
        "## Non-Goals",
        "",
        non_goals_text,
        "",
        "## Tasks",
        "",
        tasks_text,
        "",
        "## Acceptance Criteria",
        "",
        acceptance_text,
        "",
        "## Implementation Notes",
        "",
        impl_text,
    ]
    return "\n".join(parts).strip()


def _formatted_output_valid(text: str) -> bool:
    if not text:
        return False
    required = ["## Tasks", "## Acceptance Criteria"]
    return all(section in text for section in required)


def format_issue_body(issue_body: str, *, use_llm: bool = True) -> dict[str, Any]:
    if not issue_body:
        issue_body = ""

    if use_llm:
        client_info = _get_llm_client()
        if client_info:
            client, provider = client_info
            try:
                from langchain_core.prompts import ChatPromptTemplate
            except ImportError:
                client_info = None
            else:
                prompt = _load_prompt()
                template = ChatPromptTemplate.from_template(prompt)
                chain = template | client
                response = chain.invoke({"issue_body": issue_body})
                content = getattr(response, "content", None) or str(response)
                formatted = content.strip()
                if _formatted_output_valid(formatted):
                    return {
                        "formatted_body": formatted,
                        "provider_used": provider,
                        "used_llm": True,
                    }

    formatted = _format_issue_fallback(issue_body)
    return {
        "formatted_body": formatted,
        "provider_used": None,
        "used_llm": False,
    }


def build_label_transition() -> dict[str, list[str]]:
    return {
        "add": ["agents:formatted"],
        "remove": ["agents:format"],
    }


def _load_input(args: argparse.Namespace) -> str:
    if args.input_file:
        return Path(args.input_file).read_text(encoding="utf-8")
    if args.input_text:
        return args.input_text
    return sys.stdin.read()


def main() -> None:
    parser = argparse.ArgumentParser(description="Format issues into AGENT_ISSUE_TEMPLATE.")
    parser.add_argument("--input-file", help="Path to raw issue text.")
    parser.add_argument("--input-text", help="Raw issue text (inline).")
    parser.add_argument("--output-file", help="Path to write formatted output.")
    parser.add_argument("--json", action="store_true", help="Emit JSON payload to stdout.")
    parser.add_argument("--no-llm", action="store_true", help="Disable LLM usage.")
    args = parser.parse_args()

    raw = _load_input(args)
    result = format_issue_body(raw, use_llm=not args.no_llm)

    if args.output_file:
        Path(args.output_file).write_text(result["formatted_body"], encoding="utf-8")

    if args.json:
        payload = {
            "formatted_body": result["formatted_body"],
            "provider_used": result.get("provider_used"),
            "used_llm": result.get("used_llm", False),
            "labels": build_label_transition(),
        }
        print(json.dumps(payload, ensure_ascii=True))
    else:
        print(result["formatted_body"])


if __name__ == "__main__":
    main()

Copilot AI Jan 5, 2026


This new script lacks test coverage. Since the repository has comprehensive test coverage for other scripts (e.g., tests/test_board_sync.py), this script should also have tests. Consider adding tests for the key functions: format_issue_body(), _format_issue_fallback(), _parse_sections(), and the normalization functions to ensure the formatting logic works correctly with various input formats.
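A test along the lines Copilot suggests could assert that the fallback formatter always emits the six template sections in order. The helper below is a self-contained sketch of that assertion logic; a real test would import `_format_issue_fallback` from scripts/langchain/issue_formatter.py and run `check_template` on its output (that import path is assumed from the PR's file list).

```python
# Sketch of the core assertion a fallback-formatter test would make:
# all six AGENT_ISSUE_TEMPLATE headers present, in the documented order.
REQUIRED_HEADERS = [
    "## Why",
    "## Scope",
    "## Non-Goals",
    "## Tasks",
    "## Acceptance Criteria",
    "## Implementation Notes",
]


def check_template(formatted: str) -> bool:
    """True if every required header appears, each after the previous one."""
    pos = -1
    for header in REQUIRED_HEADERS:
        nxt = formatted.find(header, pos + 1)
        if nxt < 0:
            return False
        pos = nxt
    return True
```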

            try:
                from langchain_core.prompts import ChatPromptTemplate
            except ImportError:
                client_info = None

Copilot AI Jan 5, 2026


Variable client_info is not used.

Suggested change
- client_info = None
+ pass

@stranske stranske closed this Jan 5, 2026
@stranske stranske deleted the sync/workflows-e11fa23a80b6 branch January 17, 2026 17:47

Labels

  • autofix: Triggers autofix on PR
  • automated: Automated sync from Workflows
  • sync: Automated sync from Workflows


2 participants