.claude/commands/update-pull-request.md

If you need to accept edits during execution:
- Choose "accept edits and continue" (NOT "clear context")
- Or wait until the "Commit and Push Changes" step to accept all edits at once

## Instructions

Follow these steps:
### 1. Fetch Pull Request Data

- Accept the pull request ID from ${ARGUMENTS}; error if no argument is provided with a clear message that a pull request number is required.
- **IMPORTANT: Environment variable persistence** - When using the Bash tool, environment variables do not persist between separate tool invocations. You must explicitly re-declare `SCRATCHPAD`, `OWNER`, and `REPO` at the top of each subsequent bash block that references them.
- Set up the scratchpad directory, determine the repository owner and name, and fetch PR data in a single bash execution:

```bash
# Set up scratchpad directory with session isolation
: "${TMPDIR:=/tmp}"

_old_umask=$(umask)
umask 077
SCRATCHPAD="$(mktemp -d "${TMPDIR%/}/claude-scratchpad.XXXXXX")" || {
echo "Error: Failed to create scratchpad directory under ${TMPDIR}"
exit 1
}
umask "${_old_umask}"
echo "Using scratchpad: ${SCRATCHPAD}"
# Determine repository owner and name
_remote_url=$(git remote get-url origin)
OWNER=$(echo "${_remote_url}" | sed -E 's|.*[:/]([^/]+)/([^/]+)(\.git)?$|\1|')
REPO=$(echo "${_remote_url}" | sed -E 's|.*[:/]([^/]+)/([^/]+)(\.git)?$|\2|' | sed 's/\.git$//')

if [ -z "${OWNER}" ] || [ -z "${REPO}" ]; then
echo "Error: Failed to determine repository owner and name from git remote \"origin\""
exit 1
fi

echo "Repository: ${OWNER}/${REPO}"

# Fetch comprehensive PR data via a single GraphQL query; save to a file to avoid token limit errors
gh api graphql -f query='
query($owner: String!, $repo: String!, $number: Int!) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $number) {
# ... (selected pull request fields elided in this view) ...
}
}
}
' -f owner="${OWNER}" -f repo="${REPO}" -F number=${ARGUMENTS} > "${SCRATCHPAD}/pr_data.json"

echo "PR data fetched to ${SCRATCHPAD}/pr_data.json"
```
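Because variables do not survive between Bash tool invocations, each later block should begin by re-establishing them. A minimal sketch (the values shown are illustrative placeholders; reuse the actual values echoed during setup):

```shell
# Re-declare the environment at the top of every subsequent bash block.
# These defaults are hypothetical; substitute the real values printed earlier.
SCRATCHPAD="${SCRATCHPAD:-/tmp/claude-scratchpad.example}"
OWNER="${OWNER:-example-owner}"
REPO="${REPO:-example-repo}"
echo "Re-declared: ${OWNER}/${REPO} (scratchpad: ${SCRATCHPAD})"
```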

- Validate the fetched data:

```bash
# Check jq is installed
if ! command -v jq >/dev/null 2>&1; then
echo "Error: jq is required to parse pull request data; install jq and retry"
exit 1
fi

# Check file exists and contains valid JSON
if [ ! -f "${SCRATCHPAD}/pr_data.json" ] || ! jq empty "${SCRATCHPAD}/pr_data.json" 2>/dev/null; then
echo "Error: Failed to fetch valid pull request data. Check that pull request #${ARGUMENTS} exists, run 'gh auth status' to verify authentication, then retry."
exit 1
fi

# Validate GraphQL response structure
if ! jq -e '.data.repository.pullRequest.id and (.errors | not)' "${SCRATCHPAD}/pr_data.json" >/dev/null 2>&1; then
echo "Error: Pull request data missing expected fields or contains GraphQL errors. Check pull request #${ARGUMENTS} exists and you have access."
jq -r '.errors[]?.message // "No specific error message available"' "${SCRATCHPAD}/pr_data.json"
exit 1
fi
```
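As a sanity check, the same `jq -e` validation can be exercised against a sample error payload (the file name and payload below are illustrative, not part of the workflow):

```shell
# Hypothetical failing payload: a GraphQL error response with no data field
printf '{"errors":[{"message":"Could not resolve to a PullRequest"}]}' > /tmp/pr_data_sample.json

# jq -e exits non-zero when the filter evaluates to false or null,
# so the validation correctly rejects this payload
if ! jq -e '.data.repository.pullRequest.id and (.errors | not)' /tmp/pr_data_sample.json >/dev/null 2>&1; then
  echo "Validation correctly rejected the sample payload"
fi
```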

- **Critical**: The pull request data file will be too large to read directly with the Read tool. Extract structured data once into smaller files:

```bash
echo "Extracting structured data from PR response..."
# ... (elided in this view: jq commands extracting review_threads.json, outdated_threads.json, pr_comments.json, and check_failures.json, plus a summary echo for each file) ...
echo " - check_failures.json ($(jq 'length' "${SCRATCHPAD}/check_failures.json") failed checks)"
```

- These smaller structured files can be read with the Read tool if needed, and eliminate redundant jq parsing throughout the command.

### 2. Analyze PR State

- Review all extracted data to build a complete picture of what needs to be addressed:

```bash
echo "=== Check Failures ==="
jq -r '.[] | "[\(.conclusion)] \(.name) - \(.detailsUrl)"' "${SCRATCHPAD}/check_failures.json"
echo ""
echo "=== Unresolved Review Threads ==="
jq -r '.[] | "\(.threadId) | \(.comments[0].path // "N/A") | \(.comments[0].author)"' "${SCRATCHPAD}/review_threads.json"

echo ""
echo "=== Outdated Threads (require manual review) ==="
jq -r '.[] | "\(.threadId) | \(.comments[0].path // "N/A") | \(.comments[0].author) | \(.comments[0].body[:80])"' "${SCRATCHPAD}/outdated_threads.json"

echo ""
echo "=== PR-level Comments ==="
jq -r '.[] | "\(.databaseId) | \(.author) | \(.body[:80])"' "${SCRATCHPAD}/pr_comments.json"
```
- For detailed review of specific threads, use:

```bash
# Get full details of a specific thread
jq '.[] | select(.threadId == "PRRT_xxx")' "${SCRATCHPAD}/review_threads.json"
```

- The structured files contain all necessary metadata:
- `review_threads.json`: Thread ID, comment IDs (both node and database), body, author, file path/position
- `outdated_threads.json`: Thread ID and comment metadata (body, author, file path); "outdated" means the code was modified, not that the feedback is irrelevant - review each to determine if it still applies
- `pr_comments.json`: Comment IDs, body, author
- `check_failures.json`: Check name, conclusion, details URL

- Note that check-runs and workflow runs are distinct; to fetch logs, first obtain the workflow run ID, then use `gh api repos/${OWNER}/${REPO}/actions/runs/{run_id}/logs`. If logs are inaccessible via API, run `mask development python all` or `mask development rust all` locally to replicate the errors.
- Group all feedback (check failures, review threads, outdated threads, PR-level comments) using judgement: by file, by theme, by type of change, or whatever makes most sense for the specific pull request; ensure each group maintains the full metadata for all items it contains.
- Analyze dependencies between feedback groups to determine which are independent (can be worked in parallel) and which are interdependent (must be handled sequentially).
- For each piece of feedback, evaluate whether to address it (make code changes) or reject it (explain why the feedback doesn't apply); provide clear reasoning for each decision.
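As one illustrative starting point for the grouping step above (a sketch, not a required command), unresolved threads can be bucketed by file path:

```shell
# Sketch: bucket unresolved review threads by file path as a first-pass grouping;
# each bucket keeps the thread IDs so metadata is preserved per group
jq 'group_by(.comments[0].path)
    | map({path: .[0].comments[0].path, threads: map(.threadId)})' \
  "${SCRATCHPAD}/review_threads.json"
```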

### 3. Enter Plan Mode

- Enter plan mode to organize the work.
- Present a high-level overview showing: (1) total number of feedback groups identified, (2) brief description of each group, (3) which groups are independent vs interdependent, (4) check failures that need fixes.
- Wait for user acknowledgment of the overview before proceeding to detailed planning.

### 4. Present Plans Consecutively

**Default to consecutive presentation to provide clear, reviewable chunks** - even for superficial fixes. Present each logical group separately and wait for approval before proceeding.

- Note: "The following groups are independent and will be implemented in parallel using subagents to keep main context clean."
- Wait for user approval before spawning parallel subagents.

### 5. Implement Fixes

- After plan approval for a group (or parallel groups):
- For independent groups, spawn parallel subagents to implement solutions simultaneously, keeping main context clean.
- For interdependent groups, implement sequentially.
- Implement all fixes for feedback being addressed and for check failures in this group.

### 6. Verify Changes

- After implementing each group's fixes, run verification checks locally:
- Run `mask development python all` if any Python files were modified.
- Run `mask development rust all` if any Rust files were modified.
- Skip redundant checks if the next group will touch the same language files (batch them), but always run comprehensive checks at the end.
- **Note**: Local verification confirms fixes work in the development environment. Remote continuous integration will re-run after changes are pushed in the "Commit and Push Changes" step.
- If checks fail, resolve issues and re-run until passing before moving to the next group.
- Do not proceed to the next feedback group until current group's changes pass verification.
- Repeat steps 4-6 for each feedback group until all have been addressed.
- Always run final comprehensive verification using both `mask development python all` and `mask development rust all` before proceeding.

### 7. Commit and Push Changes

- After all fixes are verified, confirm with the user that changes are ready to commit and push.
- Stage all modified files and create a commit:

```bash
git add file1.py file2.md
git commit -m "$(cat <<'EOF'
Address pull request #<number> feedback: <brief summary>

<Detailed explanation of changes made>

Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```

- Push to the remote branch:

```bash
git push
```

- Confirm the push succeeded before proceeding. Pushing is required before responding to and resolving comments, so that thread resolutions reflect changes that are live on the branch.
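A lightweight way to confirm the push (a sketch; it assumes the branch has an upstream configured) is to compare the local HEAD against its upstream after fetching:

```shell
# Sketch: after pushing, verify local HEAD matches the upstream branch
git fetch origin
if [ "$(git rev-parse HEAD)" = "$(git rev-parse '@{u}' 2>/dev/null)" ]; then
  echo "Push confirmed: local branch matches its upstream"
else
  echo "Warning: local branch differs from upstream; verify the push before continuing" >&2
fi
```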

### 8. Respond to and Resolve Comments

- For each piece of feedback (both addressed and rejected), draft a response comment explaining what was done or why it was rejected, using the commenter name for personalization.
- Post all response comments to their respective threads:
Example to get thread ID:

```bash
THREAD_ID=$(jq -r '.[] | select(.comments[0].body | contains("some text")) | .threadId' "${SCRATCHPAD}/review_threads.json")
```

**Response formatting guidelines**:
- Keep responses concise and single-line when possible
- Avoid embedding code blocks or complex markdown in mutation strings
- Use simple sentences: "Fixed in the latest commit" or "Updated to use GraphQL approach"
- For longer responses, reference line numbers or file paths instead of quoting code

- For issue comments (pull request-level), use REST API:
- For each response posted, capture the returned comment ID for verification.
- Auto-resolve all comment threads after posting responses:
- For review comment threads:
- Use the thread ID (format: `PRRT_*`) from `review_threads.json`.
- Resolve thread using GraphQL mutation:

```bash
gh api graphql -f query='
mutation($threadId: ID!) {
resolveReviewThread(input: {threadId: $threadId}) {
thread {
id
isResolved
}
}
}
' -f threadId="<thread_id>"
```

- Map each comment back to its parent thread using the structured files from step 1 (particularly `review_threads.json`).
- Resolve both addressed and rejected feedback threads (explanation provided in response).

- For issue comments (pull request-level):
- No resolution mechanism (issue comments don't have thread states).
- Only post response; no resolution step needed.


### 9. Final Summary

- Provide a comprehensive summary showing:
- Total feedback items processed (with count of addressed vs rejected), including any outdated threads that were reviewed.
- Which checks were fixed.
- Confirmation that all comments have been responded to and resolved.
- Final verification status (all local checks passing; remote continuous integration is now running against the pushed changes).
- For check failures that were fixed, note that no comments were posted - the fixes will be reflected in re-run checks which are now in progress.