Merged
2 changes: 1 addition & 1 deletion .github/workflows/maint-76-claude-code-review.yml
@@ -188,7 +188,7 @@ jobs:
      - name: Run Claude Code Review
        id: claude
        continue-on-error: true
-        uses: anthropics/claude-code-action@220272d38887a1caed373da96a9ffdb0919c26cc
+        uses: anthropics/claude-code-action@220272d38887a1caed373da96a9ffdb0919c26cc # v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          allowed_bots: '*'
38 changes: 37 additions & 1 deletion scripts/langchain/followup_issue_generator.py
@@ -531,6 +531,7 @@ def extract_verification_data(comment_body: str) -> VerificationData:
    # Extract provider verdicts (from comparison reports)
    lines = comment_body.splitlines()
    in_provider_table = False
+    provider_summary_concerns: list[str] = []
    for line in lines:
        if re.search(
            r"\|\s*Provider\s*\|\s*Model\s*\|\s*Verdict\s*\|\s*Confidence",
@@ -556,11 +557,17 @@ def extract_verification_data(comment_body: str) -> VerificationData:
            verdict = cols[2]
            confidence_text = cols[3]
            confidence = _parse_confidence_value(confidence_text)
-            data.provider_verdicts[provider] = {
+            summary_text = cols[4].strip() if len(cols) >= 5 else ""
+            entry = {
                "model": model,
                "verdict": verdict.strip(),
                "confidence": confidence,
            }
+            if summary_text:
+                entry["summary"] = summary_text
+            if verdict.strip().upper() != "PASS":
+                provider_summary_concerns.append(summary_text)
Comment on lines +568 to +569

Copilot AI — Mar 24, 2026

When extracting the Provider Summary table, summary_text can be the placeholder "N/A" (see pr_verifier's table generation). As written, non-PASS rows will add "N/A" to provider_summary_concerns, which then becomes a top-level concern and can generate noisy follow-up tasks. Consider skipping placeholder or empty summaries (e.g., "N/A") and applying a minimum-length or similar sanity filter, as the other concern-extraction paths do, before appending.

Suggested change:
-            if verdict.strip().upper() != "PASS":
-                provider_summary_concerns.append(summary_text)
+            # Only treat non-placeholder, substantive summaries as concerns.
+            if verdict.strip().upper() != "PASS":
+                normalized_summary = summary_text.strip()
+                is_placeholder = normalized_summary.upper() in {
+                    "N/A",
+                    "NA",
+                    "NONE",
+                    "NO SUMMARY",
+                }
+                is_too_short = len(normalized_summary) < 8
+                if not is_placeholder and not is_too_short:
+                    provider_summary_concerns.append(summary_text)
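The suggested filter can also be factored into a standalone predicate, which makes the placeholder set and length threshold easy to test in isolation. This is a hedged sketch, not code from the PR: the helper name `is_substantive_summary` and the `min_length` default are illustrative, and the placeholder set mirrors the suggestion above.

```python
# Placeholder strings that the comparison table may emit instead of a
# real summary; these should never surface as top-level concerns.
PLACEHOLDER_SUMMARIES = {"N/A", "NA", "NONE", "NO SUMMARY"}


def is_substantive_summary(summary_text: str, min_length: int = 8) -> bool:
    """Return True only for summaries worth turning into concerns."""
    normalized = summary_text.strip()
    if normalized.upper() in PLACEHOLDER_SUMMARIES:
        return False
    # Very short strings are almost certainly truncation artifacts.
    return len(normalized) >= min_length
```

With this helper, the appending site collapses to a single guarded call, and new placeholder variants only require extending the set.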
+            data.provider_verdicts[provider] = entry

    # Extract verdicts from provider detail sections as a fallback.
    current_provider = None
@@ -665,6 +672,8 @@ def extract_verification_data(comment_body: str) -> VerificationData:
        if concern and len(concern) > 15:
            all_concerns.append(concern)

+    all_concerns.extend(provider_summary_concerns)

    # Deduplicate while preserving order, and filter out spurious entries
    seen: set[str] = set()
    data.concerns = []
@@ -1475,6 +1484,33 @@ def _generate_without_llm(
        body_parts.append(f"- {concern}")
    body_parts.extend(["", "</details>"])

+    body_parts.extend(
+        [
+            "",
+            "## verify:compare Analysis",
+            "",
+            f"- Resolved verdict: {verdict}",
+        ]
+    )
+    for concern in blocking_concerns[:10]:
+        body_parts.append(f"- Concern: {concern}")
+    for concern in advisory_concerns[:10]:
+        body_parts.append(f"- Advisory: {concern}")
+
+    body_parts.extend(
+        [
+            "",
+            "## verify:compare Evidence",
+            "",
+        ]
+    )
Comment on lines +1487 to +1506

Copilot AI — Mar 24, 2026

The new sections are titled "## verify:compare Analysis" / "## verify:compare Evidence", but _generate_without_llm is used for any no-LLM path (including single-provider verify:evaluate cases). This makes the generated issue misleading. Consider renaming these headings to something verification-generic, or emitting the compare-specific headings only when multiple providers are present.
+    for provider, data in verification_data.provider_verdicts.items():
+        evidence = f"- {provider}: {data.get('verdict', 'Unknown')} @ {data.get('confidence', 0)}%"
+        summary = data.get("summary")
+        if summary:
+            evidence += f" ({summary})"
+        body_parts.append(evidence)
Comment on lines +1507 to +1512

Copilot AI — Mar 24, 2026

In the evidence loop, the loop variable name data shadows the surrounding verification_data identifier and is also reused elsewhere in the file for different payload shapes. Renaming it (e.g., provider_payload) would make this section easier to read and reduce confusion during future edits.
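The rename is mechanical; a self-contained sketch of the loop after applying it is below. The `SimpleNamespace` stand-in for the real VerificationData object and the single hard-coded verdict are test scaffolding, not code from the PR; `provider_payload` is the illustrative name from the comment above.

```python
from types import SimpleNamespace

# Hypothetical stand-in for the VerificationData instance used in the PR.
verification_data = SimpleNamespace(
    provider_verdicts={"openai": {"verdict": "PASS", "confidence": 95, "summary": "ok"}}
)
body_parts = []

# After the rename, "data" no longer shadows verification_data or
# collides with the other payload shapes bound to "data" in this file.
for provider, provider_payload in verification_data.provider_verdicts.items():
    evidence = f"- {provider}: {provider_payload.get('verdict', 'Unknown')} @ {provider_payload.get('confidence', 0)}%"
    summary = provider_payload.get("summary")
    if summary:
        evidence += f" ({summary})"
    body_parts.append(evidence)
```

Behavior is unchanged; only the binding name differs, so no tests should need updating.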

    # Add background context in collapsible section
    body_parts.extend(
        [