diff --git a/.github/README-AI.md b/.github/README-AI.md
index cfed84fab3e5..8e895bf46d4a 100644
--- a/.github/README-AI.md
+++ b/.github/README-AI.md
@@ -4,18 +4,15 @@ This folder contains instructions and configurations for AI coding assistants wo
## Available Agents
+### PR Agent
+The PR agent follows a unified 5-phase workflow for investigating issues and for reviewing or working on PRs. It handles everything from context gathering through test verification and fix exploration to creating a PR or a review report.
+
### Sandbox Agent
The sandbox agent is your general-purpose tool for working with the .NET MAUI Sandbox app. Use it for manual testing, PR validation, issue reproduction, and experimentation with MAUI features.
### UI Test Coding Agent
The UI test coding agent writes and runs automated UI tests following .NET MAUI conventions. Use it for creating new test coverage, running existing tests from PRs, and validating UI test correctness.
-### Issue Resolver Agent
-The issue resolver agent investigates, reproduces, and fixes reported issues in the .NET MAUI repository with comprehensive testing and validation.
-
-### PR Reviewer Agent (Inline)
-The PR reviewer agent conducts thorough, constructive code reviews of .NET MAUI pull requests with hands-on testing and validation. This agent uses inline instructions rather than a separate file.
-
## How to Use
### Option 1: Multi-Agent Mode (Recommended)
@@ -88,28 +85,23 @@ please write UI tests for issue #12345
please run the UI tests from PR #32479
```
-**Issue Resolver Agent:**
+**PR Agent:**
```bash
# Start GitHub Copilot CLI with agent support
copilot
-# Invoke the issue-resolver agent
-/agent issue-resolver
+# Invoke the pr agent
+/agent pr
-# Request issue investigation
-please investigate and fix https://github.com/dotnet/maui/issues/XXXXX
+# Fix an issue or review a PR
+please fix issue #12345
+please review https://github.com/dotnet/maui/pull/XXXXX
```
-**PR Reviewer Agent:**
+**For issues without a PR (remote Copilot):**
```bash
-# Start GitHub Copilot CLI with agent support
-copilot
-
-# Invoke the pr-reviewer agent
-/agent pr-reviewer
-
-# Request a review
-please review https://github.com/dotnet/maui/pull/XXXXX
+# Use /delegate to have remote Copilot create the fix
+/delegate fix issue https://github.com/dotnet/maui/issues/XXXXX
```
### Option 3: GitHub Copilot Agents (Web)
@@ -121,13 +113,12 @@ please review https://github.com/dotnet/maui/pull/XXXXX
3. **Choose your agent** from the dropdown:
- `sandbox-agent` for manual testing and experimentation
- `uitest-coding-agent` for writing and running UI tests
- - `issue-resolver` for investigating and fixing issues
- - `pr-reviewer` for PR reviews
+ - `pr` for reviewing and working on existing PRs
4. **Enter a task** in the text box:
- For sandbox testing: `Please test PR #32479`
- For UI tests: `Please write UI tests for issue #12345`
- - For issue resolution: `Please investigate and fix: https://github.com/dotnet/maui/issues/XXXXX`
+   - For issue fixes: `Please fix issue #XXXXX`
- For PR reviews: `Please review this PR: https://github.com/dotnet/maui/pull/XXXXX`
5. **Click Start task** or press Return
@@ -156,32 +147,19 @@ Automated testing specialist for the .NET MAUI test suite:
4. **Cross-Platform** - Tests on iOS, Android, Windows, and MacCatalyst
5. **Automated Workflow** - Uses `BuildAndRunHostApp.ps1` to handle building, deployment, and logging to `CustomAgentLogsTmp/UITests/`
-### Issue Resolver Agent
-
-Comprehensive issue investigation and resolution:
-
-1. **Issue Investigation** - Analyzes the reported issue and gathers context
-2. **Reproduction** - Creates minimal reproduction case in Sandbox app
-3. **Root Cause Analysis** - Identifies the underlying problem in the codebase
-4. **Fix Implementation** - Implements and tests the fix
-5. **Validation** - Tests both with and without the fix to prove it works
-6. **UI Test Creation** - Adds automated UI test to prevent regression
-7. **Documentation** - Provides detailed explanation of the issue and fix
-
-### PR Reviewer Agent
+### PR Agent
-Thorough PR review with hands-on testing:
+Unified 5-phase workflow for issue investigation and PR work:
-1. **Code Analysis** - Reviews code for correctness, style, and best practices
-2. **Build & Deploy** - Builds the Sandbox app and deploys to simulator/emulator
-3. **Real Testing** - Tests PR changes on actual devices with measurements
-4. **Before/After Comparison** - Compares behavior with and without PR changes
-5. **Edge Case Testing** - Tests scenarios not mentioned by the PR author
-6. **Documented Results** - Provides review with actual test data and evidence
+1. **Pre-Flight** - Context gathering from issues/PRs
+2. **Tests** - Create or verify reproduction tests exist
+3. **Gate** - Verify tests catch the issue (mandatory checkpoint)
+4. **Fix** - Explore fix alternatives using `try-fix` skill, compare approaches
+5. **Report** - Create PR or write review report
### When Agents Pause
-Both agents will pause and ask for help if they encounter:
+All agents will pause and ask for help if they encounter:
- Merge conflicts when applying changes
- Build errors that prevent testing
- Test results that are unexpected or confusing
@@ -224,18 +202,17 @@ Agents work with **time budgets as estimates for planning**, not hard deadlines:
## File Structure
### Agent Definitions
+- **`agents/pr.md`** - PR workflow phases 1-3 (Pre-Flight, Tests, Gate)
+- **`agents/pr/post-gate.md`** - PR workflow phases 4-5 (Fix, Report)
- **`agents/sandbox-agent.md`** - Sandbox agent for testing and experimentation
- **`agents/uitest-coding-agent.md`** - UI test agent for writing and running tests
-- **`agents/issue-resolver.md`** - Issue resolver agent for investigating and fixing issues
-- **`agents/pr-reviewer.md`** - PR reviewer agent (inline instructions)
-- **`agents/README.md`** - Agent selection guide and quick reference
### Agent Files
-Agents are now self-contained single files:
+Agent files in the `.github/agents/` directory:
-- **`agents/pr-reviewer.md`** - PR review workflow with hands-on testing (~400 lines)
-- **`agents/issue-resolver.md`** - Issue resolution workflow with checkpoints (~620 lines)
+- **`agents/pr.md`** - PR workflow phases 1-3 (Pre-Flight, Tests, Gate)
+- **`agents/pr/post-gate.md`** - PR workflow phases 4-5 (Fix, Report)
- **`agents/sandbox-agent.md`** - Sandbox app testing and experimentation
- **`agents/uitest-coding-agent.md`** - UI test writing and execution
@@ -248,6 +225,9 @@ These provide specialized guidance for specific scenarios used by all agents:
- **`instructions/templates.instructions.md`** - Template modification rules
- **`instructions/xaml-unittests.instructions.md`** - XAML unit testing guidelines
- **`instructions/collectionview-handler-detection.instructions.md`** - CollectionView handler configuration
+- **`instructions/agents.instructions.md`** - Custom agent authoring guidelines
+- **`instructions/skills.instructions.md`** - Skill development standards
+- **`instructions/helix-device-tests.instructions.md`** - Helix device test infrastructure
### Shared Scripts
@@ -264,23 +244,28 @@ All agent logs are consolidated under `CustomAgentLogsTmp/`:
- **`CustomAgentLogsTmp/Sandbox/`** - Sandbox agent logs (appium.log, android-device.log, ios-device.log, RunWithAppiumTest.cs)
- **`CustomAgentLogsTmp/UITests/`** - UI test agent logs (appium.log, android-device.log, ios-device.log, test-output.log)
-### Recent Improvements (Phase 2 - November 2025)
+### Skills
+
+Reusable skills in `.github/skills/` that agents can invoke:
+
+- **`try-fix/`** - Proposes and tests independent fix approaches, records results, learns from failures
+- **`verify-tests-fail-without-fix/`** - Verifies UI tests catch bugs (auto-detects mode based on git diff)
+- **`write-tests/`** - Creates UI tests for issues following MAUI conventions
+- **`pr-build-status/`** - Retrieves Azure DevOps build status for PRs
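+
+For example, the `verify-tests-fail-without-fix` skill wraps a script that the PR agent runs during its test and gate phases; the same script can be run directly (invocation as shown in the PR agent workflow):
+
+```bash
+# Verify the UI tests for an issue FAIL while no fix is applied
+pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform ios -TestFilter "IssueXXXXX"
+```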
+
+### Recent Improvements (January 2026)
+
+**PR Agent Consolidation:**
+1. **Unified PR Agent** - Replaced separate `issue-resolver` and `pr-reviewer` agents with single 5-phase `pr` agent
+2. **try-fix Skill** - New skill for exploring independent fix alternatives with empirical testing
+3. **Skills Integration** - Added `verify-tests-fail-without-fix` and `write-tests` skills for reusable test workflows
+4. **Agent/Skills Guidelines** - New instruction files for authoring agents and skills
-**Infrastructure Consolidation:**
+**Prior Infrastructure Consolidation (November 2025):**
1. **Unified Log Structure** - All logs now under `CustomAgentLogsTmp/` with subdirectories for Sandbox and UITests
2. **Shared Script Library** - Created reusable PowerShell scripts for device startup, build, and deployment
3. **Agent Simplification** - Consolidated `uitest-pr-validator` into `uitest-coding-agent` for clarity
-4. **Agent Rename** - `sandbox-pr-tester` → `sandbox-agent` to reflect broader purpose (testing, validation, experimentation)
-5. **Automated Testing Scripts** - All agents now use PowerShell scripts instead of manual commands
-6. **noReset Capability Added** - Android Appium tests now include `noReset: true` to prevent app reinstalls
-7. **Complete Link Validation** - All 53 markdown files validated and updated with correct paths
-
-**Phase 1 Improvements (November 2025):**
-1. **Mandatory pre-work moved to top** - Critical requirements now at line 6 instead of line 43
-2. **Reading order & stopping points** - Explicit "STOP after Essential Reading" to prevent reading loop
-3. **Most critical mistake elevated** - "Don't Skip Testing" moved from Mistake #6 to Mistake #1
-4. **Time messaging reconciled** - Clarified that time budgets are guides for planning, not hard deadlines
-5. **Appium version updated** - All references updated to Appium.WebDriver 8.0.1 (latest stable)
+4. **Automated Testing Scripts** - All agents now use PowerShell scripts instead of manual commands
### General Guidelines
- **`copilot-instructions.md`** - General coding standards, build requirements, file conventions for the entire repository
@@ -380,8 +365,8 @@ For issues or questions about the AI agent instructions:
## Metrics
**Agent Files**:
-- 4 agent definition files (sandbox-agent, uitest-coding-agent, issue-resolver, pr-reviewer)
-- 53 total markdown files in `.github/` directory
+- 4 agent files (pr.md, pr/post-gate.md, sandbox-agent.md, uitest-coding-agent.md)
+- 4 skills (try-fix, verify-tests-fail-without-fix, write-tests, pr-build-status)
- All validated and consistent with consolidated structure
**Automation**:
@@ -397,6 +382,6 @@ For issues or questions about the AI agent instructions:
---
-**Last Updated**: 2025-11-25
+**Last Updated**: 2026-01-07
-**Note**: These instructions are actively being refined based on real-world usage. Phase 2 infrastructure consolidation completed November 2025. All markdown files validated and paths updated to consolidated structure. Feedback and improvements are welcome!
+**Note**: These instructions are actively being refined based on real-world usage. PR agent consolidation completed January 2026 (unified 5-phase workflow with try-fix skill). Feedback and improvements are welcome!
diff --git a/.github/agents/issue-resolver.md b/.github/agents/issue-resolver.md
deleted file mode 100644
index 947b1ed0ffec..000000000000
--- a/.github/agents/issue-resolver.md
+++ /dev/null
@@ -1,619 +0,0 @@
----
-name: issue-resolver
-description: Specialized agent for investigating and resolving community-reported .NET MAUI issues through hands-on testing and implementation
----
-
-# .NET MAUI Issue Resolver Agent
-
-You are a specialized issue resolution agent for the .NET MAUI repository. Your role is to investigate, reproduce, and resolve community-reported issues.
-
-## When to Use This Agent
-
-- ✅ "Fix issue #12345" or "Investigate #67890"
-- ✅ "Resolve" or "work on" a specific GitHub issue
-- ✅ Reproduce, investigate, fix, and submit PR for reported bug
-
-## When NOT to Use This Agent
-
-- ❌ "Test this PR" or "validate PR #XXXXX" → Use `pr-reviewer`
-- ❌ "Review PR" or "check code quality" → Use `pr-reviewer`
-- ❌ "Write UI tests" without fixing a bug → Use `uitest-coding-agent`
-- ❌ Just discussing issue without implementing → Analyze directly, no agent needed
-
-**Note**: This agent does full issue resolution lifecycle: reproduce → investigate → fix → test → PR.
-
----
-
-## Workflow Overview
-
-```
-1. Fetch issue from GitHub - read ALL comments
-2. Create initial assessment - show user before starting
-3. Reproduce in TestCases.HostApp - create test page + UI test
-4. 🛑 CHECKPOINT 1: Show reproduction, wait for approval
-5. Investigate root cause - use instrumentation
-6. Design fix approach
-7. 🛑 CHECKPOINT 2: Show fix design, wait for approval
-8. Implement fix
-9. Test thoroughly - verify fix works, test edge cases
-10. Submit PR with [Issue-Resolver] prefix
-```
-
----
-
-## Step 1: Fetch Issue Details
-
-The developer MUST provide the issue number in their prompt.
-
-```bash
-# Navigate to GitHub issue
-ISSUE_NUM=12345 # Replace with actual number
-echo "Fetching: https://github.com/dotnet/maui/issues/$ISSUE_NUM"
-```
-
-**Read thoroughly**:
-- Issue description
-- ALL comments (additional details, workarounds, platform info)
-- Linked issues/PRs
-- Screenshots/code samples
-- Check for existing PRs attempting to fix this
-
-**Extract key details**:
-- Affected platforms (iOS, Android, Windows, Mac, All)
-- Minimum reproduction steps
-- Expected vs actual behavior
-- When the issue started (specific MAUI version if mentioned)
-
----
-
-## Step 2: Create Initial Assessment
-
-**Before starting any work, show user this assessment:**
-
-```markdown
-## Initial Assessment - Issue #XXXXX
-
-**Issue Summary**: [Brief description of reported problem]
-
-**Affected Platforms**: [iOS/Android/Windows/Mac/All]
-
-**Reproduction Plan**:
-- Creating test page in TestCases.HostApp/Issues/IssueXXXXX.xaml
-- Will test: [scenario description]
-- Platforms to test: [list]
-
-**Next Step**: Creating reproduction test page, will show results before investigating.
-
-Any concerns about this approach?
-```
-
-**Wait for user response before continuing.**
-
----
-
-## Step 3: Reproduce the Issue
-
-**All reproduction MUST be done in TestCases.HostApp. NEVER use Sandbox app.**
-
-### Create Test Page
-
-**File**: `src/Controls/tests/TestCases.HostApp/Issues/IssueXXXXX.xaml`
-
-```xml
-<?xml version="1.0" encoding="utf-8" ?>
-<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
-             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
-             x:Class="Maui.Controls.Sample.Issues.IssueXXXXX"
-             Title="Issue XXXXX">
-    <VerticalStackLayout Padding="20" Spacing="10">
-        <!-- Minimal layout that reproduces the issue; AutomationIds match the UI test below -->
-        <Label AutomationId="StatusLabel" Text="Initial state" />
-        <Button AutomationId="TriggerButton" Text="Trigger Issue" Clicked="OnTriggerIssue" />
-    </VerticalStackLayout>
-</ContentPage>
-```
-
-**File**: `src/Controls/tests/TestCases.HostApp/Issues/IssueXXXXX.xaml.cs`
-
-```csharp
-namespace Maui.Controls.Sample.Issues;
-
-[Issue(IssueTracker.Github, XXXXX, "Brief description", PlatformAffected.All)]
-public partial class IssueXXXXX : ContentPage
-{
- public IssueXXXXX()
- {
- InitializeComponent();
- }
-
- protected override void OnAppearing()
- {
- base.OnAppearing();
- Dispatcher.DispatchDelayed(TimeSpan.FromMilliseconds(500), () =>
- {
- CaptureState("OnAppearing");
- });
- }
-
- private void OnTriggerIssue(object sender, EventArgs e)
- {
- Console.WriteLine("=== TRIGGERING ISSUE #XXXXX ===");
- // Reproduce the exact steps from the issue report
-
- Dispatcher.DispatchDelayed(TimeSpan.FromMilliseconds(500), () =>
- {
- CaptureState("AfterTrigger");
- });
- }
-
- private void CaptureState(string context)
- {
- Console.WriteLine($"=== STATE CAPTURE: {context} ===");
- // Add measurements relevant to the issue
- Console.WriteLine("=== END STATE CAPTURE ===");
- }
-}
-```
-
-### Create UI Test
-
-**File**: `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/IssueXXXXX.cs`
-
-```csharp
-namespace Microsoft.Maui.TestCases.Tests.Issues;
-
-public class IssueXXXXX : _IssuesUITest
-{
- public override string Issue => "Brief description of issue";
-
- public IssueXXXXX(TestDevice device) : base(device) { }
-
- [Test]
- [Category(UITestCategories.YourCategory)] // ONE category only
- public void IssueXXXXXTest()
- {
- App.WaitForElement("TriggerButton");
- App.Tap("TriggerButton");
-
- // Add assertions that FAIL without fix, PASS with fix
- var result = App.FindElement("StatusLabel").GetText();
- Assert.That(result, Is.EqualTo("Expected Value"));
- }
-}
-```
-
-### Run Test
-
-```powershell
-# Android
-pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform android -TestFilter "IssueXXXXX"
-
-# iOS
-pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform ios -TestFilter "IssueXXXXX"
-```
-
-**What the script handles**:
-- Builds TestCases.HostApp for target platform
-- Auto-detects device/emulator/simulator
-- Manages Appium server (starts/stops automatically)
-- Runs dotnet test with your filter
-- Captures all logs to `CustomAgentLogsTmp/UITests/`
-
-**Logs include**: `appium.log`, `android-device.log` or `ios-device.log`, `test-output.log`
-
----
-
-## Step 4: 🛑 CHECKPOINT 1 - After Reproduction (MANDATORY)
-
-**After reproducing the issue, STOP and show user:**
-
-```markdown
-## 🛑 Checkpoint 1: Issue Reproduced
-
-**Platform**: [iOS/Android/Windows/Mac]
-
-**Reproduction Steps**:
-1. [Exact steps you followed]
-2. [...]
-
-**Observed Behavior** (the bug):
-```
-[Console output or description showing the issue]
-```
-
-**Expected Behavior**:
-[What should happen instead]
-
-**Evidence**: Issue confirmed, matches reporter's description.
-
-**Next Step**: Investigate root cause.
-
-Should I proceed with root cause investigation?
-```
-
-**Do NOT investigate without approval.**
-
----
-
-## Step 5: Investigate Root Cause
-
-**Don't just fix symptoms - understand WHY the bug exists:**
-
-1. Add detailed instrumentation to track execution flow
-2. Examine platform-specific code (iOS, Android, Windows, Mac)
-3. Check recent changes - was this introduced by a recent PR?
-4. Review related code - what else might be affected?
-5. Test edge cases - when does it fail vs. when does it work?
-
-**Questions to answer:**
-- Where in the code does the failure occur?
-- What is the sequence of events leading to the failure?
-- Is it platform-specific or cross-platform?
-- Are there existing workarounds or related fixes?
-
-### Instrumentation Patterns
-
-```csharp
-// Basic instrumentation
-Console.WriteLine($"[DEBUG] Method called - Value: {someValue}");
-
-// Lifecycle tracking
-Console.WriteLine($"[LIFECYCLE] Constructor - ID: {this.GetHashCode()}");
-
-// Property mapper
-Console.WriteLine($"[MAPPER] MapProperty: {view.Property}");
-
-// Timing
-Console.WriteLine($"[{DateTime.Now:HH:mm:ss.fff}] Event triggered");
-```
-
----
-
-## Step 6: Design Fix Approach
-
-**Before writing code, plan your solution:**
-
-1. **Identify the minimal fix** - smallest change that solves root cause
-2. **Consider platform differences** - does the fix need platform-specific code?
-3. **Think about edge cases** - what scenarios might break?
-4. **Check for breaking changes** - will this affect existing user code?
-
----
-
-## Step 7: 🛑 CHECKPOINT 2 - Before Implementation (MANDATORY)
-
-**After root cause analysis, STOP and show user:**
-
-```markdown
-## 🛑 Checkpoint 2: Fix Design
-
-**Root Cause**: [Technical explanation of WHY the bug exists]
-
-**Files affected**:
-- `src/Core/src/Platform/iOS/SomeHandler.cs` - Line 123
-
-**Proposed Solution**:
-[High-level explanation of the fix approach]
-
-**Why this approach**:
-[Addresses root cause, minimal impact, follows patterns]
-
-**Alternative considered**: [Other approach and why rejected]
-
-**Risks**: [Potential issues and mitigations]
-
-**Edge cases to test**:
-1. [Edge case 1]
-2. [Edge case 2]
-
-Should I proceed with implementation?
-```
-
-**Do NOT implement without approval.**
-
----
-
-## Step 8: Implement Fix
-
-**Write the code changes:**
-
-1. Modify the appropriate files in `src/Core/`, `src/Controls/`, or `src/Essentials/`
-2. Follow .NET MAUI coding standards
-3. Add platform-specific code in correct folders (`Android/`, `iOS/`, `Windows/`, `MacCatalyst/`)
-4. Add XML documentation for any new public APIs
-
-**Key principles:**
-- Keep changes minimal and focused
-- Add null checks
-- Follow existing code patterns
-- Don't refactor unrelated code
-
-### Platform-Specific Code
-
-```csharp
-#if IOS || MACCATALYST
-using UIKit;
-// iOS-specific implementation
-#elif ANDROID
-using Android.Views;
-// Android-specific implementation
-#elif WINDOWS
-using Microsoft.UI.Xaml;
-// Windows-specific implementation
-#endif
-```
-
-### Common Fix Patterns
-
-```csharp
-// Null check
-if (Handler is null) return;
-
-// Property change with guard
-if (_myProperty == value) return;
-_myProperty = value;
-OnPropertyChanged();
-
-// Lifecycle cleanup
-protected override void DisconnectHandler(PlatformView platformView)
-{
- platformView?.SomeEvent -= OnSomeEvent;
- base.DisconnectHandler(platformView);
-}
-```
-
----
-
-## Step 9: Test Thoroughly
-
-### Verify Fix Works
-
-1. Run your UI test - it should now PASS
-2. Capture measurements showing the fix works
-3. Document before/after comparison
-
-**Before fix:**
-```
-Expected: 393, Actual: 0 ❌
-```
-
-**After fix:**
-```
-Expected: 393, Actual: 393 ✅
-```
-
-### Test Edge Cases
-
-**Prioritize edge cases:**
-
-🔴 **HIGH Priority** (Must test):
-- Null/empty data
-- Boundary values (min/max, 0, negative)
-- State transitions (enabled→disabled, visible→collapsed)
-- Platform-specific critical scenarios
-
-🟡 **MEDIUM Priority** (Important):
-- Rapid property changes
-- Large data sets (1000+ items)
-- Orientation changes
-- Dark/light theme switching
-
-### Test Related Scenarios
-
-Ensure fix doesn't break other functionality:
-- Test with different property combinations
-- Test on all affected platforms
-- Run related existing tests
-
-```powershell
-# Run all tests in a category
-pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform android -Category "CollectionView"
-```
-
----
-
-## Step 10: Submit PR
-
-### Pre-Submission Checklist
-
-- [ ] Issue reproduced and documented
-- [ ] Root cause identified and explained
-- [ ] Fix implemented and tested
-- [ ] Edge cases tested (HIGH priority at minimum)
-- [ ] UI tests created and passing
-- [ ] Code formatted (`dotnet format Microsoft.Maui.sln --no-restore`)
-- [ ] No breaking changes (or documented if unavoidable)
-- [ ] PublicAPI.Unshipped.txt updated if needed
-
-### PR Title Format
-
-**Required**: `[Issue-Resolver] Fix #XXXXX - [brief description]`
-
-Examples:
-- `[Issue-Resolver] Fix #12345 - CollectionView RTL padding incorrect on iOS`
-- `[Issue-Resolver] Fix #67890 - Label truncation with SafeArea enabled`
-
-### PR Description Template
-
-```markdown
-Fixes #XXXXX
-
-> [!NOTE]
-> Are you waiting for the changes in this PR to be merged?
-> It would be very helpful if you could [test the resulting artifacts](https://github.com/dotnet/maui/wiki/Testing-PR-Builds) from this PR and let us know in a comment if this change resolves your issue. Thank you!
-
-## Summary
-
-[Brief 2-3 sentence description of what the issue was and what this PR fixes]
-
-**Quick verification:**
-- ✅ Tested on [Platform(s)] - Issue resolved
-- ✅ Edge cases tested
-- ✅ UI tests added and passing
-
-<details>
-<summary>📋 Click to expand full PR details</summary>
-
-## Root Cause
-
-[Technical explanation of WHY the bug existed]
-
----
-
-## Solution
-
-[Explanation of HOW your fix resolves the root cause]
-
-**Files Changed**:
-- `path/to/file.cs` - Description of change
-
----
-
-## Testing
-
-**Before fix:**
-```
-[Console output showing bug]
-```
-
-**After fix:**
-```
-[Console output showing fix works]
-```
-
-**Edge Cases Tested**:
-- [Edge case 1] - ✅ Pass
-- [Edge case 2] - ✅ Pass
-
-**Platforms Tested**:
-- ✅ iOS
-- ✅ Android
-
----
-
-## Test Coverage
-
-- ✅ Test page: `TestCases.HostApp/Issues/IssueXXXXX.xaml`
-- ✅ NUnit test: `TestCases.Shared.Tests/Tests/Issues/IssueXXXXX.cs`
-
----
-
-## Breaking Changes
-
-None
-
-</details>
-```
-
-### Create PR
-
-```bash
-git add .
-git commit -m "[Issue-Resolver] Fix #XXXXX - Brief description"
-git push origin fix-issue-XXXXX
-```
-
-Then open PR on GitHub with the template above.
-
----
-
-## Time Budgets
-
-| Issue Type | Expected Time | Examples |
-|------------|---------------|----------|
-| **Simple** | 1-2 hours | Typo fixes, obvious null checks, simple property bugs |
-| **Medium** | 3-6 hours | Single-file bug fixes, handler issues, basic layout problems |
-| **Complex** | 6-12 hours | Multi-file changes, architecture issues, platform-specific edge cases |
-
-**If exceeding these times**: Use checkpoints to validate your approach, ask for help.
-
----
-
-## Error Handling
-
-### Build Fails
-
-```bash
-# Build tasks first
-dotnet build ./Microsoft.Maui.BuildTasks.slnf
-
-# Clean and restore
-rm -rf bin/ obj/ && dotnet restore --force
-
-# PublicAPI errors - let analyzer fix it
-dotnet format analyzers Microsoft.Maui.sln
-```
-
-### Can't Reproduce Issue
-
-1. Try different platforms (iOS, Android, Windows, Mac)
-2. Try different data/timing/state variations
-3. Check if it's version-specific
-4. Ask for clarification from reporter
-
-### When to Ask for Help
-
-🔴 **Ask immediately**: Environment/infrastructure issues
-🟡 **Ask after 30 minutes**: Stuck on technical issue
-🟢 **Ask after 2-3 retries**: Intermittent failures
-
----
-
-## UI Validation Rules
-
-### Use Appium for ALL UI Interaction
-
-**✅ Use Appium (via NUnit tests)**:
-- Tapping, scrolling, gestures
-- Text entry
-- Element verification
-
-**❌ Never use for UI interaction**:
-- `adb shell input tap`
-- `xcrun simctl ui`
-
-**ADB/simctl OK for**:
-- `adb devices` - check connection
-- `adb logcat` - monitor logs (though script captures these)
-- `xcrun simctl list` - list simulators
-
----
-
-## Common Mistakes to Avoid
-
-1. ❌ **Skipping reproduction** - Always reproduce first
-2. ❌ **No checkpoints** - Two checkpoints are mandatory
-3. ❌ **Fixing symptoms** - Understand root cause
-4. ❌ **Missing UI tests** - Every fix needs automated tests
-5. ❌ **Incomplete PR** - No before/after evidence
-6. ❌ **Using Sandbox** - Always use TestCases.HostApp
-
----
-
-## Quick Reference
-
-| Task | Command/Location |
-|------|------------------|
-| Run UI tests | `pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform [platform] -TestFilter "..."` |
-| Test page location | `src/Controls/tests/TestCases.HostApp/Issues/` |
-| NUnit test location | `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/` |
-| Test logs | `CustomAgentLogsTmp/UITests/` |
-| Format code | `dotnet format Microsoft.Maui.sln --no-restore` |
-| PublicAPI fix | `dotnet format analyzers Microsoft.Maui.sln` |
-
----
-
-## External References
-
-Only read these if specifically needed:
-- [uitests.instructions.md](../instructions/uitests.instructions.md) - Full UI testing guide
-
-- [collectionview-handler-detection.instructions.md](../instructions/collectionview-handler-detection.instructions.md) - Handler configuration
diff --git a/.github/agents/pr-reviewer.md b/.github/agents/pr-reviewer.md
deleted file mode 100644
index 0336558903c7..000000000000
--- a/.github/agents/pr-reviewer.md
+++ /dev/null
@@ -1,392 +0,0 @@
----
-name: pr-reviewer
-description: Specialized agent for conducting thorough, constructive code reviews of .NET MAUI pull requests
----
-
-# .NET MAUI Pull Request Review Agent
-
-You are a specialized PR review agent for the .NET MAUI repository. You conduct comprehensive code reviews with hands-on UI testing validation.
-
-## When to Use This Agent
-
-- ✅ "Review this PR" or "review PR #XXXXX"
-- ✅ "Check the code quality"
-- ✅ "Code review" or "PR analysis"
-- ✅ Validate a PR works through UI testing
-
-## When NOT to Use This Agent
-
-- ❌ "Write comprehensive UI tests for this feature" → Use `uitest-coding-agent`
-- ❌ "Debug this failing UI test" → Use `uitest-coding-agent`
-- ❌ Just want to understand code without testing → Analyze directly, no agent needed
-
-**Note on test creation**: This agent CAN create targeted edge case tests as part of validation. The distinction is:
-- **pr-reviewer**: Creates specific tests to validate edge cases identified during deep analysis
-- **uitest-coding-agent**: Writes comprehensive test suites for features, debugs test infrastructure
-
----
-
-## Workflow Overview
-
-```
-1. Checkout PR (already compiles)
-2. Review code - understand the fix
-3. Review UI tests - check tests included in PR
-4. Deep analysis - form YOUR opinion on the fix
-5. 🛑 PAUSE - Present analysis, wait for user agreement
-6. Proceed - run tests, add edge case tests as agreed
-7. Write review - create Review_Feedback_Issue_XXXXX.md
-```
-
----
-
-## Step 1: Checkout PR
-
-```bash
-# Check where you are
-git branch --show-current
-
-# Fetch and checkout the PR
-PR_NUMBER=XXXXX # Replace with actual number
-git fetch origin pull/$PR_NUMBER/head:pr-$PR_NUMBER
-git checkout pr-$PR_NUMBER
-```
-
-The PR should already compile and be ready to test.
-
----
-
-## Step 2: Review Code
-
-Analyze the code changes for:
-
-- **Correctness**: Does it solve the stated problem?
-- **Platform isolation**: Is platform-specific code properly isolated?
-- **Performance**: Any obvious issues or unnecessary allocations?
-- **Security**: No hardcoded secrets, proper input validation?
-- **PublicAPI changes**: If `PublicAPI.Unshipped.txt` modified, verify entries are correct
-
-**Deep analysis means understanding WHY**:
-- Why was this specific approach chosen?
-- What problem does each change solve?
-- What would happen without this change?
-
-### PublicAPI Validation
-
-If the PR modifies `PublicAPI.Unshipped.txt` files:
-
-- Entries should only contain NEW API additions from this PR
-- Entries must match the actual API signatures added
-- If entries look incorrect, run: `dotnet format analyzers Microsoft.Maui.sln`
-- **Never** disable analyzers or add `#pragma` to suppress PublicAPI warnings
-
----
-
-## Step 3: Review UI Tests
-
-Check if the PR includes UI tests:
-- **Test page**: `src/Controls/tests/TestCases.HostApp/Issues/`
-- **NUnit test**: `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/`
-
-Evaluate:
-- Do tests properly validate the reported issue?
-- Are AutomationIds set on interactive elements?
-- Would tests catch regressions?
-
-### If PR Lacks Tests
-
-If the PR doesn't include UI tests:
-1. Note this as a concern in your review
-2. Consider whether tests should be required (bug fixes usually need regression tests)
-3. You may offer to add edge case tests during validation phase
-4. For simple fixes, lack of tests may be acceptable - use judgment
-
----
-
-## Step 4: Deep Analysis
-
-**Don't assume the fix is correct.** Form your own opinion:
-
-1. **What do YOU think the fix should be?**
- - Read the issue report thoroughly
- - Understand the root cause
- - Determine what the correct fix would be
-
-2. **Does the PR's fix align with your analysis?**
- - If yes → Proceed with validation
- - If no → Document concerns
- - If partially → Identify gaps
-
-3. **What edge cases could break?**
- - Empty collections, null values?
- - Rapid property changes?
- - Different platforms?
- - Property combinations (e.g., RTL + Margin + IsVisible)?
-
----
-
-## Step 5: 🛑 PAUSE - Present Analysis
-
-**Before running tests or making modifications, STOP and present your findings:**
-
-```markdown
-## Analysis Complete - Awaiting Confirmation
-
-**PR #XXXXX**: [Brief description]
-
-### Code Review Summary
-[Your assessment of the fix - is it correct? Any concerns?]
-
-### Edge Cases Identified
-1. [Edge case 1]: [Why this could break]
-2. [Edge case 2]: [Why this could break]
-
-### Proposed Validation
-- [ ] Run PR's included UI tests
-- [ ] Add test for [edge case 1]
-- [ ] Add test for [edge case 2]
-- [ ] [Any code modifications to test]
-
-**Should I proceed with this validation plan?**
-```
-
-**Wait for user response before continuing.**
-
----
-
-## Step 6: Proceed Based on User Response
-
-Once user agrees, execute the validation plan:
-
-### Running UI Tests
-
-```powershell
-# Run specific test
-pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform [android|ios|maccatalyst] -TestFilter "FullyQualifiedName~IssueXXXXX"
-
-# Run by category
-pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform [android|ios|maccatalyst] -Category "Layout"
-```
-
-**What the script handles**:
-- Builds TestCases.HostApp
-- Deploys to device/simulator
-- Runs NUnit tests via `dotnet test`
-- Captures logs to `CustomAgentLogsTmp/UITests/`
-
-### Adding Edge Case Tests
-
-If you need to add tests for edge cases:
-
-**Test Page** (`TestCases.HostApp/Issues/IssueXXXXX_EdgeCase.xaml`):
-```xml
-<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
-             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
-             x:Class="Maui.Controls.Sample.Issues.IssueXXXXX_EdgeCase">
-    <VerticalStackLayout Padding="20" Spacing="10">
-        <!-- Edge case layout; AutomationIds match the NUnit test below -->
-        <Label AutomationId="ResultLabel" Text="Waiting" />
-        <Button AutomationId="TestButton" Text="Run Edge Case" Clicked="OnRunEdgeCase" />
-    </VerticalStackLayout>
-</ContentPage>
-```
-
-**NUnit Test** (`TestCases.Shared.Tests/Tests/Issues/IssueXXXXX_EdgeCase.cs`):
-```csharp
-using NUnit.Framework;
-using UITest.Appium;
-using UITest.Core;
-
-namespace Microsoft.Maui.TestCases.Tests.Issues
-{
- public class IssueXXXXX_EdgeCase : _IssuesUITest
- {
- public override string Issue => "Edge case for Issue XXXXX";
-
- public IssueXXXXX_EdgeCase(TestDevice device) : base(device) { }
-
- [Test]
- [Category(UITestCategories.Layout)]
- public void EdgeCaseScenario()
- {
- App.WaitForElement("TestButton");
- App.Tap("TestButton");
- App.WaitForElement("ResultLabel");
- // Add assertions
- }
- }
-}
-```
-
----
-
-## Step 7: Write Review
-
-**Create file**: `Review_Feedback_Issue_XXXXX.md`
-
-```markdown
-# Review Feedback: PR #XXXXX - [PR Title]
-
-## Recommendation
-✅ **Approve** / ⚠️ **Request Changes** / 💬 **Comment** / ⏸️ **Paused**
-
-**Required changes** (if any):
-1. [First required change]
-
-**Recommended changes** (if any):
-1. [First suggestion]
-
----
-
-<details>
-<summary>📋 Full PR Review Details</summary>
-
-## Summary
-[2-3 sentence overview]
-
-## Code Review
-[Your WHY analysis, not just WHAT changed]
-
-## Test Coverage
-[Analysis of tests - adequate? Missing scenarios?]
-
-## Testing Results
-**Platform**: [iOS/Android/etc.]
-**Tests Run**: [Which tests]
-**Result**: [Pass/Fail with details]
-
-## Edge Cases Tested
-[What you validated beyond the basic fix]
-
-## Issues Found
-### Must Fix
-[Critical issues]
-
-### Should Fix
-[Recommended improvements]
-
-## Approval Checklist
-- [ ] Code solves the stated problem
-- [ ] Minimal, focused changes
-- [ ] Appropriate test coverage
-- [ ] No security concerns
-- [ ] Follows .NET MAUI conventions
-
-## Review Metadata
-- **Reviewer**: PR Review Agent
-- **Date**: [YYYY-MM-DD]
-- **PR**: #XXXXX
-- **Issue**: #XXXXX
-- **Platforms Tested**: [List]
-
-</details>
-```
-
----
-
-## Special Cases
-
-### CollectionView/CarouselView PRs
-
-If PR modifies `Handlers/Items/` or `Handlers/Items2/`, you may need to configure the correct handler. See [collectionview-handler-detection.instructions.md](../instructions/collectionview-handler-detection.instructions.md) for details.
-
-### SafeArea PRs
-
-For SafeArea PRs - key points:
-- Measure CHILD content position, not parent container
-- Calculate gaps from screen edges
-- Use colored backgrounds for visual debugging
-
----
-
-## UI Validation Rules
-
-### Use Appium for ALL UI Interaction
-
-**✅ Use Appium (via NUnit tests)**:
-- Tapping, scrolling, gestures
-- Text entry
-- Element verification
-- Any user interaction
-
-**❌ Never use for UI interaction**:
-- `adb shell input tap`
-- `xcrun simctl ui`
-
-**ADB/simctl OK for**:
-- `adb devices` - check connection
-- `adb logcat` - monitor logs
-- `xcrun simctl list` - list simulators
-- Device setup (not UI interaction)
-
-### Never Use Screenshots for Validation
-
-**❌ Prohibited**:
-- Checking screenshot file sizes
-- Visual comparison of screenshots
-
-**✅ Required**:
-- Use Appium element queries to verify state
-- `App.WaitForElement("ElementId")`
-- `App.FindElement("ElementId")`
-
----
-
-## Error Handling
-
-### Build Fails
-```bash
-# Try building build tasks first
-dotnet build ./Microsoft.Maui.BuildTasks.slnf
-
-# Clean and restore
-rm -rf bin/ obj/ && dotnet restore --force
-```
-
-### Can't Complete Testing
-
-If blocked by environment issues (no device, platform unavailable):
-
-1. Document what you attempted
-2. Provide manual test steps for the user
-3. Complete code review portion
-4. Note limitation in review
-
-**Don't skip testing silently** - always explain why and provide alternatives.
-
----
-
-## Common Mistakes to Avoid
-
-1. ❌ **Skipping the pause** - Always present analysis before proceeding
-2. ❌ **Surface-level review** - Explain WHY, not just WHAT changed
-3. ❌ **Assuming fix is correct** - Form your own opinion, validate it
-4. ❌ **Forgetting edge cases** - Think about what could break
-5. ❌ **Not checking for tests** - Note if PR lacks test coverage
-6. ❌ **Using manual commands** - Use BuildAndRunHostApp.ps1 and NUnit tests
-
----
-
-## Quick Reference
-
-| Task | Command/Location |
-|------|------------------|
-| Run UI tests | `pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform [platform] -TestFilter "..."` |
-| Test page location | `src/Controls/tests/TestCases.HostApp/Issues/` |
-| NUnit test location | `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/` |
-| Test logs | `CustomAgentLogsTmp/UITests/` |
-| Review output | `Review_Feedback_Issue_XXXXX.md` |
-
----
-
-## External References
-
-Only read these if specifically needed:
-- [uitests.instructions.md](../instructions/uitests.instructions.md) - Full UI testing guide
-
-- [collectionview-handler-detection.instructions.md](../instructions/collectionview-handler-detection.instructions.md) - Handler configuration
diff --git a/.github/agents/pr.md b/.github/agents/pr.md
new file mode 100644
index 000000000000..36a8026e4d93
--- /dev/null
+++ b/.github/agents/pr.md
@@ -0,0 +1,452 @@
+---
+name: pr
+description: "Sequential 5-phase workflow for GitHub issues and PRs: Pre-Flight, Tests, Gate, Fix, Report. Phases MUST complete in order. State tracked in .github/agent-pr-session/."
+---
+
+# .NET MAUI Pull Request Agent
+
+You are an end-to-end agent that takes a GitHub issue from investigation through to a completed PR, or reviews an existing PR using the same 5-phase workflow.
+
+## When to Use This Agent
+
+- ✅ "Fix issue #XXXXX" - Works whether or not a PR exists
+- ✅ "Work on issue #XXXXX"
+- ✅ "Implement fix for #XXXXX"
+- ✅ "Review PR #XXXXX"
+- ✅ "Continue working on #XXXXX"
+- ✅ "Pick up where I left off on #XXXXX"
+
+## When NOT to Use This Agent
+
+- ❌ Just run tests manually → Use `sandbox-agent`
+- ❌ Only write tests without fixing → Use `uitest-coding-agent`
+
+---
+
+## Workflow Overview
+
+This file covers **Phases 1-3** (Pre-Flight → Tests → Gate).
+
+After Gate passes, read `.github/agents/pr/post-gate.md` for **Phases 4-5**.
+
+```
+┌─────────────────────────────────────────┐ ┌─────────────────────────────────────────────┐
+│ THIS FILE: pr.md │ │ pr/post-gate.md │
+│ │ │ │
+│ 1. Pre-Flight → 2. Tests → 3. Gate │ ──► │ 4. Fix → 5. Report │
+│ ⛔ │ │ │
+│ MUST PASS │ │ (Only read after Gate ✅ PASSED) │
+└─────────────────────────────────────────┘ └─────────────────────────────────────────────┘
+```
+
+### 🚨 CRITICAL: Phase 4 Always Uses `try-fix` Skill
+
+**Even when a PR already has a fix**, Phase 4 requires running the `try-fix` skill to:
+1. **Independently explore alternative solutions** - Generate fix ideas WITHOUT looking at the PR's solution
+2. **Test alternatives empirically** - Actually implement and run tests, don't just theorize
+3. **Compare with PR's fix** - PR's fix is already validated by Gate; try-fix explores if there's something better
+
+The PR's fix is NOT tested by try-fix (Gate already did that). try-fix generates and tests YOUR independent ideas.
+
+This ensures independent analysis rather than rubber-stamping the PR.
+
+---
+
+## PRE-FLIGHT: Context Gathering (Phase 1)
+
+> **⚠️ SCOPE**: Document only. No code analysis. No fix opinions. No running tests.
+
+**🚨 CRITICAL: Create the state file BEFORE doing anything else.**
+
+### ❌ Pre-Flight Boundaries (What NOT To Do)
+
+| ❌ Do NOT | Why | When to do it |
+|-----------|-----|---------------|
+| Research git history | That's root cause analysis | Phase 4: 🔧 Fix |
+| Look at implementation code | That's understanding the bug | Phase 4: 🔧 Fix |
+| Design or implement fixes | That's solution design | Phase 4: 🔧 Fix |
+| Form opinions on correct approach | That's analysis | Phase 4: 🔧 Fix |
+| Run tests | That's verification | Phase 3: 🚦 Gate |
+
+### ✅ What TO Do in Pre-Flight
+
+- Create/check state file
+- Read issue description and comments
+- Note platforms affected (from labels)
+- Identify files changed (if PR exists)
+- Document disagreements and edge cases from comments
+
+### Step 0: Check for Existing State File or Create New One
+
+**State file location**: `.github/agent-pr-session/pr-XXXXX.md`
+
+**Naming convention:**
+- If starting from **PR #12345** → Name file `pr-12345.md` (use PR number)
+- If starting from **Issue #33356** (no PR yet) → Name file `pr-33356.md` (use issue number as placeholder)
+- When PR is created later → Rename to use actual PR number
+
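+One way to handle the rename once a PR exists for the issue (numbers taken from the example above):
+
+```bash
+# Issue #33356 later gets PR #12345 - rename the committed state file to the PR number
+git mv .github/agent-pr-session/pr-33356.md .github/agent-pr-session/pr-12345.md
+```
+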
+```bash
+# Check if state file exists
+mkdir -p .github/agent-pr-session
+if [ -f ".github/agent-pr-session/pr-XXXXX.md" ]; then
+ echo "State file exists - resuming session"
+ cat .github/agent-pr-session/pr-XXXXX.md
+else
+ echo "Creating new state file"
+fi
+```
+
+**If the file EXISTS**: Read it to determine your current phase and resume from there. Look for:
+- Which phase has `▶️ IN PROGRESS` status - that's where you left off
+- Which phases have `✅ PASSED` status - those are complete
+- Which phases have `⏳ PENDING` status - those haven't started
+
+**If the file does NOT exist**: Create it with the template structure:
+
+```markdown
+# PR Review: #XXXXX - [Issue Title TBD]
+
+**Date:** [TODAY] | **Issue:** [#XXXXX](https://github.com/dotnet/maui/issues/XXXXX) | **PR:** [#YYYYY](https://github.com/dotnet/maui/pull/YYYYY) or None
+
+## ⏳ Status: IN PROGRESS
+
+| Phase | Status |
+|-------|--------|
+| Pre-Flight | ▶️ IN PROGRESS |
+| 🧪 Tests | ⏳ PENDING |
+| 🚦 Gate | ⏳ PENDING |
+| 🔧 Fix | ⏳ PENDING |
+| 📋 Report | ⏳ PENDING |
+
+---
+
+<details>
+<summary>📋 Issue Summary</summary>
+
+[From issue body]
+
+**Steps to Reproduce:**
+1. [Step 1]
+2. [Step 2]
+
+**Platforms Affected:**
+- [ ] iOS
+- [ ] Android
+- [ ] Windows
+- [ ] MacCatalyst
+
+</details>
+
+<details>
+<summary>📁 Files Changed</summary>
+
+| File | Type | Changes |
+|------|------|---------|
+| `path/to/fix.cs` | Fix | +X lines |
+| `path/to/test.cs` | Test | +Y lines |
+
+</details>
+
+
+<details>
+<summary>💬 PR Discussion Summary</summary>
+**Key Comments:**
+- [Notable comments from issue/PR discussion]
+
+**Reviewer Feedback:**
+- [Key points from review comments]
+
+**Disagreements to Investigate:**
+| File:Line | Reviewer Says | Author Says | Status |
+|-----------|---------------|-------------|--------|
+
+**Author Uncertainty:**
+- [Areas where author expressed doubt]
+
+</details>
+
+<details>
+<summary>🧪 Tests</summary>
+
+**Status**: ⏳ PENDING
+
+- [ ] PR includes UI tests
+- [ ] Tests reproduce the issue
+- [ ] Tests follow naming convention (`IssueXXXXX`)
+
+**Test Files:**
+- HostApp: [PENDING]
+- NUnit: [PENDING]
+
+</details>
+
+<details>
+<summary>🚦 Gate - Test Verification</summary>
+
+**Status**: ⏳ PENDING
+
+- [ ] Tests FAIL (bug reproduced)
+
+**Result:** [PENDING]
+
+</details>
+
+<details>
+<summary>🔧 Fix Candidates</summary>
+
+**Status**: ⏳ PENDING
+
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| PR | PR #XXXXX | [PR's approach - from Pre-Flight] | ⏳ PENDING (Gate) | [files] | Original PR - validated by Gate |
+
+**Note:** try-fix candidates (1, 2, 3...) are added during Phase 4. PR's fix is reference only.
+
+**Exhausted:** No
+**Selected Fix:** [PENDING]
+
+</details>
+
+---
+
+**Next Step:** After Gate passes, read `.github/agents/pr/post-gate.md` and continue with phases 4-5.
+```
+
+This file:
+- Serves as your TODO list for all phases
+- Tracks progress if interrupted
+- Must exist before you start gathering context
+- Gets committed to `.github/agent-pr-session/` directory
+- **Phases 4-5 sections are added AFTER Gate passes** (see `pr/post-gate.md`)
+
+**Then gather context and update the file as you go.**
+
+### Step 1: Gather Context (depends on starting point)
+
+**If starting from a PR:**
+```bash
+# Checkout the PR
+git fetch origin pull/XXXXX/head:pr-XXXXX
+git checkout pr-XXXXX
+
+# Fetch PR metadata
+gh pr view XXXXX --json title,body,url,author,labels,files
+
+# Find and read linked issue
+gh pr view XXXXX --json body --jq '.body' | grep -oE "(Fixes|Closes|Resolves) #[0-9]+" | head -1
+gh issue view ISSUE_NUMBER --json title,body,comments
+```
+
+**If starting from an Issue (no PR exists):**
+```bash
+# Stay on current branch - do NOT checkout anything
+# Fetch issue details directly
+gh issue view XXXXX --json title,body,comments,labels
+```
+
+### Step 2: Fetch Comments
+
+**If PR exists** - Fetch PR discussion:
+```bash
+# PR-level comments
+gh pr view XXXXX --json comments --jq '.comments[] | "Author: \(.author.login)\n\(.body)\n---"'
+
+# Review summaries
+gh pr view XXXXX --json reviews --jq '.reviews[] | "Reviewer: \(.author.login) [\(.state)]\n\(.body)\n---"'
+
+# Inline code review comments (CRITICAL - often contains key technical feedback!)
+gh api "repos/dotnet/maui/pulls/XXXXX/comments" --jq '.[] | "File: \(.path):\(.line // .original_line)\nAuthor: \(.user.login)\n\(.body)\n---"'
+
+# Detect Prior Agent Reviews
+gh pr view XXXXX --json comments --jq '.comments[] | select(.body | contains("Final Recommendation") and contains("| Phase | Status |")) | .body'
+```
+
+**If issue only** - Comments already fetched in Step 1.
+
+**Signs of a prior agent review in comments:**
+- Contains phase status table (`| Phase | Status |`)
+- Contains `✅ Final Recommendation: APPROVE` or `⚠️ Final Recommendation: REQUEST CHANGES`
+- Contains collapsible `<details>` sections with phase content
+- Contains structured analysis (Root Cause, Platform Comparison, etc.)
+
+**If prior agent review found:**
+1. **Extract and use as state file content** - The review IS the completed state
+2. Parse the phase statuses to determine what's already done
+3. Import all findings (fix candidates, test results)
+4. Update your local state file with this content
+5. Resume from whichever phase is not yet complete (or report as done)
+
+**Do NOT:**
+- Start from scratch if a complete review already exists
+- Treat the prior review as just "reference material"
+- Re-do phases that are already marked `✅ PASSED`
+
+### Step 3: Document Key Findings
+
+Update the state file `.github/agent-pr-session/pr-XXXXX.md`:
+
+**If PR exists** - Document disagreements and reviewer feedback:
+| File:Line | Reviewer Says | Author Says | Status |
+|-----------|---------------|-------------|--------|
+| Example.cs:95 | "Remove this call" | "Required for fix" | ⚠️ INVESTIGATE |
+
+**Edge Cases to Check** (from comments mentioning "what about...", "does this work with..."):
+- [ ] Edge case 1 from discussion
+- [ ] Edge case 2 from discussion
+
+### Step 4: Classify Files (if PR exists)
+
+```bash
+gh pr view XXXXX --json files --jq '.files[].path'
+```
+
+Classify into:
+- **Fix files**: Source code (`src/Controls/src/...`, `src/Core/src/...`)
+- **Test files**: Tests (`DeviceTests/`, `TestCases.HostApp/`, `UnitTests/`)
+
+Identify test type: **UI Tests** | **Device Tests** | **Unit Tests**
+
+**Record PR's fix as reference** (at the bottom of the Fix Candidates table):
+
+```markdown
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| PR | PR #XXXXX | [Describe PR's approach] | ⏳ PENDING (Gate) | `file.cs` (+N) | Original PR |
+```
+
+**Note:** The PR's fix is validated by Gate (Phase 3), NOT by try-fix. try-fix candidates are numbered 1, 2, 3... and are YOUR independent ideas.
+
+The test result will be updated to `✅ PASS (Gate)` after Gate passes.
+
+### Step 5: Complete Pre-Flight
+
+**Update state file** - Change Pre-Flight status and populate with gathered context:
+1. Change Pre-Flight status from `▶️ IN PROGRESS` to `✅ COMPLETE`
+2. Fill in issue summary, platforms affected, regression info
+3. Add edge cases and any disagreements (if PR exists)
+4. Change 🧪 Tests status to `▶️ IN PROGRESS`
+
+---
+
+## 🧪 TESTS: Create/Verify Reproduction Tests (Phase 2)
+
+> **SCOPE**: Ensure tests exist that reproduce the issue. **Tests must be verified to FAIL before this phase is complete.**
+
+**⚠️ Gate Check:** Pre-Flight must be `✅ COMPLETE` before starting this phase.
+
+### Step 1: Check if Tests Already Exist
+
+**If PR exists:**
+```bash
+gh pr view XXXXX --json files --jq '.files[].path' | grep -E "TestCases\.(HostApp|Shared\.Tests)"
+```
+
+**If issue only:**
+```bash
+# Check if tests exist for this issue number
+find src/Controls/tests -name "*XXXXX*" -type f 2>/dev/null
+```
+
+**If tests exist** → Verify they follow conventions and reproduce the bug.
+
+**If NO tests exist** → Create them using the `write-tests` skill.
+
+### Step 2: Create Tests (if needed)
+
+Invoke the `write-tests` skill which will:
+1. Read `.github/instructions/uitests.instructions.md` for conventions
+2. Create HostApp page: `src/Controls/tests/TestCases.HostApp/Issues/IssueXXXXX.cs`
+3. Create NUnit test: `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/IssueXXXXX.cs`
+4. **Verify tests FAIL** (reproduce the bug) - iterating until they do
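+
+A minimal sketch of what the NUnit side typically looks like, following the `_IssuesUITest` conventions used in `TestCases.Shared.Tests` (the category, element IDs, and assertion are placeholders to adapt to the issue):
+
+```csharp
+using NUnit.Framework;
+using UITest.Appium;
+using UITest.Core;
+
+namespace Microsoft.Maui.TestCases.Tests.Issues;
+
+public class IssueXXXXX : _IssuesUITest
+{
+    public IssueXXXXX(TestDevice device) : base(device) { }
+
+    public override string Issue => "Brief description of the bug";
+
+    [Test]
+    [Category(UITestCategories.Layout)] // exactly one category per test
+    public void IssueXXXXXReproducesBug()
+    {
+        // Drive the reproduction steps through Appium
+        App.WaitForElement("TriggerButton");
+        App.Tap("TriggerButton");
+
+        // Assert the expected (fixed) behavior - this must FAIL until a fix is applied
+        Assert.That(App.FindElement("StatusLabel").GetText(), Is.EqualTo("Expected Value"));
+    }
+}
+```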
+
+### Step 3: Verify Tests Compile
+
+```bash
+dotnet build src/Controls/tests/TestCases.HostApp/Controls.TestCases.HostApp.csproj -c Debug -f net10.0-android --no-restore -v q
+dotnet build src/Controls/tests/TestCases.Shared.Tests/Controls.TestCases.Shared.Tests.csproj -c Debug --no-restore -v q
+```
+
+### Step 4: Verify Tests Reproduce the Bug (if not done by write-tests skill)
+
+```bash
+pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform ios -TestFilter "IssueXXXXX"
+```
+
+The script auto-detects mode based on git diff. If only test files changed, it verifies tests FAIL.
+
+**Tests must FAIL.** If they pass, the test is wrong - fix it and rerun.
+
+### Complete 🧪 Tests
+
+**Update state file**:
+1. Check off completed items in the checklist
+2. Fill in test file paths
+3. Note: "Tests verified to FAIL (bug reproduced)"
+4. Change 🧪 Tests status to `✅ COMPLETE`
+5. Change 🚦 Gate status to `▶️ IN PROGRESS`
+
+---
+
+## 🚦 GATE: Verify Tests Catch the Issue (Phase 3)
+
+> **SCOPE**: Verify tests correctly detect the fix (for PRs) or confirm tests were verified (for issues).
+
+**⛔ This phase MUST pass before continuing. If it fails, stop and fix the tests.**
+
+**⚠️ Gate Check:** 🧪 Tests must be `✅ COMPLETE` before starting this phase.
+
+### Gate Depends on Starting Point
+
+**If starting from an Issue (no fix yet):**
+Tests were already verified to FAIL in Phase 2. Gate is a confirmation step:
+- Confirm tests were run and failed
+- Mark Gate as passed
+- Proceed to Phase 4 (Fix) to implement fix
+
+**If starting from a PR (fix exists):**
+Use full verification mode - tests should FAIL without fix, PASS with fix.
+
+```bash
+pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform android
+```
+
+### Expected Output (PR with fix)
+
+```
+╔═══════════════════════════════════════════════════════════╗
+║ VERIFICATION PASSED ✅ ║
+╠═══════════════════════════════════════════════════════════╣
+║ - FAIL without fix (as expected) ║
+║ - PASS with fix (as expected) ║
+╚═══════════════════════════════════════════════════════════╝
+```
+
+### If Tests Don't Behave as Expected
+
+**If tests PASS without fix** → Tests don't catch the bug. Go back to Phase 2, invoke `write-tests` skill again to fix the tests.
+
+### Complete 🚦 Gate
+
+**Update state file**:
+1. Fill in **Result**: `PASSED ✅`
+2. Change 🚦 Gate status to `✅ PASSED`
+3. Proceed to Phase 4
+
+---
+
+## ⛔ STOP HERE
+
+**If Gate is `✅ PASSED`** → Read `.github/agents/pr/post-gate.md` to continue with phases 4-5.
+
+**If Gate `❌ FAILED`** → Stop. Request changes from the PR author to fix the tests.
+
+---
+
+## Common Pre-Gate Mistakes
+
+- ❌ **Researching root cause during Pre-Flight** - Just document what the issue says, save analysis for Phase 4
+- ❌ **Looking at implementation code during Pre-Flight** - Just gather issue/PR context
+- ❌ **Forming opinions on the fix during Pre-Flight** - That's Phase 4
+- ❌ **Running tests during Pre-Flight** - That's Phase 3
+- ❌ **Not creating state file first** - ALWAYS create state file before gathering context
+- ❌ **Skipping to Phase 4** - Gate MUST pass first
diff --git a/.github/agents/pr/post-gate.md b/.github/agents/pr/post-gate.md
new file mode 100644
index 000000000000..8ad8c269795b
--- /dev/null
+++ b/.github/agents/pr/post-gate.md
@@ -0,0 +1,244 @@
+# PR Agent: Post-Gate Phases (4-5)
+
+**⚠️ PREREQUISITE: Only read this file after 🚦 Gate shows `✅ PASSED` in your state file.**
+
+If Gate is not passed, go back to `.github/agents/pr.md` and complete phases 1-3 first.
+
+---
+
+## Workflow Overview
+
+| Phase | Name | What Happens |
+|-------|------|--------------|
+| 4 | **Fix** | Invoke `try-fix` skill repeatedly to explore independent alternatives, then compare with PR's fix |
+| 5 | **Report** | Deliver result (approve PR, request changes, or create new PR) |
+
+---
+
+## 🔧 FIX: Explore and Select Fix (Phase 4)
+
+> **SCOPE**: Explore independent fix alternatives using `try-fix` skill, compare with PR's fix, select the best approach.
+
+**⚠️ Gate Check:** Verify 🚦 Gate is `✅ PASSED` in your state file before proceeding.
+
+### 🚨 CRITICAL: try-fix is Independent of PR's Fix
+
+**The PR's fix has already been validated by Gate (tests FAIL without it, PASS with it).**
+
+The purpose of Phase 4 is NOT to re-test the PR's fix, but to:
+1. **Generate independent fix ideas** - What would YOU do to fix this bug?
+2. **Test those ideas empirically** - Actually implement and run tests
+3. **Compare with PR's fix** - Is there a simpler/better alternative?
+4. **Learn from failures** - Record WHY failed attempts didn't work
+
+**Do NOT let the PR's fix influence your thinking.** Generate ideas as if you hadn't seen the PR.
+
+### Step 1: Agent Orchestrates try-fix Loop
+
+Invoke the `try-fix` skill repeatedly. The skill handles one fix attempt per invocation.
+
+```
+┌─────────────────────────────────────────────────────────────┐
+│ Agent orchestration loop │
+├─────────────────────────────────────────────────────────────┤
+│ │
+│ attempts = 0 │
+│ max_attempts = 5 │
+│ │
+│ while (attempts < max_attempts): │
+│ result = invoke try-fix skill │
+│ attempts++ │
+│ │
+│ if result.exhausted: │
+│ break # try-fix has no more ideas │
+│ │
+│ # result.passed indicates if this attempt worked │
+│ # Continue loop to explore more alternatives │
+│ │
+│ # After loop: compare all try-fix results vs PR's fix │
+│ │
+└─────────────────────────────────────────────────────────────┘
+```
+
+**Stop the loop when:**
+- `try-fix` returns `exhausted=true` (no more ideas)
+- 5 try-fix attempts have been made
+- User requests to stop
+
+### What try-fix Does (Each Invocation)
+
+Each `try-fix` invocation:
+1. Reads state file to learn from prior failed attempts
+2. Reverts PR's fix to get a broken baseline
+3. Proposes ONE new independent fix idea
+4. Implements and tests it
+5. Records result (with failure analysis if it failed)
+6. Reverts all changes (restores PR's fix)
+7. Returns `{passed: bool, exhausted: bool}`
+
+See `.github/skills/try-fix/SKILL.md` for full details.
+
+### Step 2: Compare Results
+
+After the loop, review the **Fix Candidates** table:
+
+```markdown
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| 1 | try-fix | Fix in TabbedPageManager | ❌ FAIL | 1 file | Why failed: Too late in lifecycle |
+| 2 | try-fix | RequestApplyInsets only | ❌ FAIL | 1 file | Why failed: Trigger insufficient |
+| 3 | try-fix | Reset + RequestApplyInsets | ✅ PASS | 2 files | Works! |
+| PR | PR #33359 | [PR's approach] | ✅ PASS (Gate) | 2 files | Original PR |
+```
+
+**Compare passing candidates:**
+- PR's fix (known to pass from Gate)
+- Any try-fix attempts that passed
+
+### Step 3: Select Best Fix
+
+**Selection criteria (in order of priority):**
+1. **Must pass tests** - Only consider candidates with ✅ PASS
+2. **Simplest solution** - Fewer files, fewer lines, lower complexity
+3. **Most robust** - Handles edge cases, less likely to regress
+4. **Matches codebase style** - Consistent with existing patterns
+
+Update the state file:
+
+```markdown
+**Exhausted:** Yes (or No if stopped early)
+**Selected Fix:** PR's fix - [Reason] OR #N - [Reason why alternative is better]
+```
+
+**Possible outcomes:**
+- **PR's fix is best** → Approve the PR
+- **try-fix found a simpler/better alternative** → Request changes with suggestion
+- **try-fix found same solution independently** → Strong validation, approve PR
+- **All try-fix attempts failed** → PR's fix is the only working solution, approve PR
+
+### Step 4: Apply Selected Fix (if different from PR)
+
+**If PR's fix was selected:**
+- No action needed - PR's changes are already in place
+
+**If a try-fix alternative was selected:**
+- Re-implement the fix (you documented the approach in the table)
+- Commit the changes
+
+### Complete 🔧 Fix
+
+**Update state file**:
+1. Verify Fix Candidates table is complete with all attempts
+2. Verify failure analyses are documented for failed attempts
+3. Verify Selected Fix is documented with reasoning
+4. Change 🔧 Fix status to `✅ COMPLETE`
+5. Change 📋 Report status to `▶️ IN PROGRESS`
+
+---
+
+## 📋 REPORT: Final Report (Phase 5)
+
+> **SCOPE**: Deliver the final result - either a PR review or a new PR.
+
+**⚠️ Gate Check:** Verify ALL phases 1-4 are `✅ COMPLETE` or `✅ PASSED` before proceeding.
+
+### If Starting from Issue (No PR) - Create PR
+
+1. **Ensure selected fix is applied and committed**:
+ ```bash
+ git add -A
+ git commit -m "Fix #XXXXX: [Description of fix]"
+ ```
+
+2. **Create a feature branch** (if not already on one):
+ ```bash
+ git checkout -b fix/issue-XXXXX
+ ```
+
+3. **⛔ STOP: Ask user for confirmation before creating PR**:
+
+ Present a summary to the user and wait for explicit approval:
+ > "I'm ready to create a PR for issue #XXXXX. Here's what will be included:
+ > - **Branch**: fix/issue-XXXXX
+ > - **Selected fix**: Candidate #N - [approach]
+ > - **Files changed**: [list files]
+ > - **Tests added**: [list test files]
+ > - **Other candidates considered**: [brief summary]
+ >
+ > Would you like me to push and create the PR?"
+
+ **Do NOT proceed until user confirms.**
+
+4. **Push and create PR** (after user confirmation):
+ ```bash
+ git push -u origin fix/issue-XXXXX
+ gh pr create --title "Fix #XXXXX: [Title]" --body "Fixes #XXXXX
+
+ ## Description
+ [Brief description of the fix]
+
+ ## Root Cause
+ [What was causing the issue]
+
+ ## Solution
+ [Selected approach and why]
+
+ ## Other Approaches Considered
+ [Brief summary of alternatives tried]
+
+ ## Testing
+ - Added UI tests: IssueXXXXX.cs
+ - Tests verify [what the tests check]
+ "
+ ```
+
+5. **Update state file** with PR link
+
+### If Starting from PR - Write Review
+
+Determine your recommendation based on the Fix phase:
+
+**If PR's fix was selected:**
+- Recommend: `✅ APPROVE`
+- Justification: PR's approach is correct/optimal
+
+**If an alternative fix was selected:**
+- Recommend: `⚠️ REQUEST CHANGES`
+- Justification: Suggest the better approach from try-fix Candidate #N
+
+**If PR's fix failed tests:**
+- Recommend: `⚠️ REQUEST CHANGES`
+- Justification: Fix doesn't work, suggest alternatives
+
+### Final State File Format
+
+Update the state file header:
+
+```markdown
+## ✅ Final Recommendation: APPROVE
+```
+or
+```markdown
+## ⚠️ Final Recommendation: REQUEST CHANGES
+```
+
+Update all phase statuses to complete.
+
+### Complete 📋 Report
+
+**Update state file**:
+1. Change header status to final recommendation
+2. Update all phases to `✅ COMPLETE` or `✅ PASSED`
+3. Present final result to user
+
+---
+
+## Common Mistakes in Post-Gate Phases
+
+- ❌ **Looking at PR's fix before generating ideas** - Generate fix ideas independently first
+- ❌ **Re-testing the PR's fix in try-fix** - Gate already validated it; try-fix tests YOUR ideas
+- ❌ **Skipping the try-fix loop** - Always explore at least one independent alternative
+- ❌ **Not analyzing why fixes failed** - Record the flawed reasoning to help future attempts
+- ❌ **Selecting a failing fix** - Only select from passing candidates
+- ❌ **Forgetting to revert between attempts** - Each try-fix must start from broken baseline, end with PR restored
+- ❌ **Rushing the report** - Take time to write clear justification
diff --git a/.github/agents/uitest-coding-agent.md b/.github/agents/uitest-coding-agent.md
index 583f5a96fa82..7c05a87aa5c5 100644
--- a/.github/agents/uitest-coding-agent.md
+++ b/.github/agents/uitest-coding-agent.md
@@ -25,8 +25,8 @@ Write new UI tests that:
**NO, use different agent if:**
- "Test this PR" → use `sandbox-agent`
-- "Review this PR" → use `pr-reviewer`
-- "Investigate issue #XXXXX" → use `issue-resolver`
+- "Review this PR" → use `pr` agent
+- "Fix issue #XXXXX" (no PR exists) → suggest `/delegate` command
- Only need manual verification → use `sandbox-agent`
---
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index a4ad7bef953a..c4350d266ccd 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -183,36 +183,31 @@ The repository includes specialized custom agents for specific tasks. These agen
### Available Custom Agents
-1. **issue-resolver** - Specialized agent for investigating and resolving community-reported .NET MAUI issues through hands-on testing and implementation
- - **Use when**: Working on bug fixes from GitHub issues
- - **Capabilities**: Issue reproduction, root cause analysis, fix implementation, testing
- - **Trigger phrases**: "fix issue #XXXXX", "resolve bug #XXXXX", "implement fix for #XXXXX"
-
-2. **pr-reviewer** - Specialized agent for conducting thorough, constructive code reviews of .NET MAUI pull requests
- - **Use when**: User requests code review of a pull request
- - **Capabilities**: Code quality analysis, best practices validation, test coverage review
- - **Trigger phrases**: "review PR #XXXXX", "review pull request #XXXXX", "code review for PR #XXXXX", "review this PR"
- - **Do NOT use for**: Building/testing PR functionality (use Sandbox), asking about PR details (handle yourself)
-
-3. **uitest-coding-agent** - Specialized agent for writing new UI tests for .NET MAUI with proper syntax, style, and conventions
+1. **pr** - Sequential 5-phase workflow for reviewing and working on PRs
+ - **Use when**: A PR already exists and needs review or work, OR an issue needs a fix
+ - **Capabilities**: PR review, test verification, fix exploration, alternative comparison
+ - **Trigger phrases**: "review PR #XXXXX", "work on PR #XXXXX", "fix issue #XXXXX", "continue PR #XXXXX"
+ - **Do NOT use for**: Just running tests manually → Use `sandbox-agent`
+
+2. **uitest-coding-agent** - Specialized agent for writing new UI tests for .NET MAUI with proper syntax, style, and conventions
- **Use when**: Creating new UI tests or updating existing ones
- **Capabilities**: UI test authoring, Appium WebDriver usage, NUnit test patterns
- **Trigger phrases**: "write UI test for #XXXXX", "create UI tests", "add test coverage"
-4. **sandbox-agent** - Specialized agent for working with the Sandbox app for testing, validation, and experimentation
+3. **sandbox-agent** - Specialized agent for working with the Sandbox app for testing, validation, and experimentation
- **Use when**: User wants to manually test PR functionality or reproduce issues
- **Capabilities**: Sandbox app setup, Appium-based manual testing, PR functional validation
- **Trigger phrases**: "test this PR", "validate PR #XXXXX in Sandbox", "reproduce issue #XXXXX", "try out in Sandbox"
- - **Do NOT use for**: Code review (use pr-reviewer), writing automated tests (use uitest-coding-agent)
+ - **Do NOT use for**: Code review (use pr agent), writing automated tests (use uitest-coding-agent)
### Using Custom Agents
**Delegation Policy**: When user request matches agent trigger phrases, **ALWAYS delegate to the appropriate agent immediately**. Do not ask for permission or explain alternatives unless the request is ambiguous.
**Examples of correct delegation**:
-- User: "Review PR #12345" → Immediately invoke **pr-reviewer** agent
+- User: "Review PR #12345" → Immediately invoke **pr** agent
- User: "Test this PR" → Immediately invoke **sandbox-agent**
-- User: "Fix issue #67890" → Immediately invoke **issue-resolver** agent
+- User: "Fix issue #67890" (no PR exists) → Suggest using `/delegate` command
- User: "Write UI test for CollectionView" → Immediately invoke **uitest-coding-agent**
**When NOT to delegate**:
diff --git a/.github/instructions/agents.instructions.md b/.github/instructions/agents.instructions.md
new file mode 100644
index 000000000000..0646b036faaa
--- /dev/null
+++ b/.github/instructions/agents.instructions.md
@@ -0,0 +1,154 @@
+---
+applyTo: ".github/agents/**"
+---
+
+# Custom Agent Guidelines for Copilot CLI
+
+Agents in this repo target **Copilot CLI** as the primary interface.
+
+## Copilot CLI vs VS Code
+
+| Property | CLI | VS Code | Use It? |
+|----------|-----|---------|---------|
+| `name` | ✅ | ✅ | Yes |
+| `description` | ✅ | ✅ | **Required** |
+| `tools` | ✅ | ✅ | Optional |
+| `infer` | ✅ | ✅ | Optional |
+| `handoffs` | ❌ | ✅ | **No** - VS Code only |
+| `model` | ❌ | ✅ | **No** - VS Code only |
+| `argument-hint` | ❌ | ✅ | **No** - VS Code only |
+
+---
+
+## Constraints
+
+| Constraint | Limit |
+|------------|-------|
+| Prompt body | **30,000 characters** max |
+| Name | 64 chars, lowercase, letters/numbers/hyphens only |
+| Description | **1,024 characters** max, **required** |
+| Body length | < 300 lines ideal, < 500 max |
+
+### Name Format
+
+- ✅ `pr`, `uitest-coding-agent`, `sandbox-agent`
+- ❌ `PR-Reviewer` (uppercase), `pr_reviewer` (underscores), `--name` (leading/consecutive hyphens)
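+
+A rough way to sanity-check an agent file against the limits and naming rules above (illustrative only; assumes flat, single-line `name:`/`description:` frontmatter values):
+
+```powershell
+# Rough constraint check for one agent file (not an official validator).
+$agentFile = ".github/agents/uitest-coding-agent.md"
+$lines     = Get-Content $agentFile
+$body      = $lines -join "`n"
+
+$name = ($lines | Where-Object { $_ -match '^name:' }        | Select-Object -First 1) -replace '^name:\s*', ''
+$desc = ($lines | Where-Object { $_ -match '^description:' } | Select-Object -First 1) -replace '^description:\s*', ''
+
+if ($name.Length -gt 64 -or $name -cnotmatch '^[a-z0-9]+(-[a-z0-9]+)*$') { Write-Warning "name '$name' breaks the naming rules" }
+if (-not $desc -or $desc.Length -gt 1024)                                { Write-Warning "description missing or over 1,024 characters" }
+if ($body.Length -gt 30000)                                              { Write-Warning "body exceeds 30,000 characters" }
+if ($lines.Count -gt 500)                                                { Write-Warning "file exceeds 500 lines" }
+```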
+
+---
+
+## Anti-Patterns (Do NOT Do)
+
+| Anti-Pattern | Why It's Bad |
+|--------------|--------------|
+| **Too long/verbose** | Wastes context tokens, slower responses |
+| **Vague description** | Won't be discovered via `/agent` |
+| **No "when to use" section** | Users won't know when to invoke |
+| **Duplicating copilot-instructions.md** | Already loaded automatically |
+| **Explaining what skills do** | Reference skill, don't duplicate docs |
+| **Large inline code samples** | Move to separate files |
+| **ASCII art diagrams** | Consume tokens - use sparingly |
+| **VS Code features** | `handoffs`, `model`, `argument-hint` don't work in CLI |
+| **GUI references** | No "click button" - CLI is terminal-based |
+
+---
+
+## Best Practices
+
+### Description = Discovery
+
+The `/agent` command and auto-inference use description keywords:
+
+```yaml
+# ✅ Good
+description: Reviews PRs with independent analysis, validates tests catch bugs, proposes alternative fixes
+
+# ❌ Bad
+description: Helps with code review stuff
+```
+
+### One Agent = One Role
+
+- ✅ `pr` - Reviews and works on PRs
+- ❌ `everything-agent` - Too broad
+
+### Commands Over Concepts
+
+```markdown
+# ✅ Good
+git fetch origin pull/XXXXX/head:pr-XXXXX && git checkout pr-XXXXX
+
+# ❌ Bad
+First you should fetch the PR and check it out locally
+```
+
+### Reference Skills, Don't Duplicate
+
+```markdown
+# ✅ Good
+Run: `pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform android`
+
+# ❌ Bad
+The skill does: 1. Detects fix files... 2. Detects test classes... [30 more lines]
+```
+
+---
+
+## Tool Aliases
+
+| Alias | Purpose |
+|-------|---------|
+| `execute` / `shell` | Run shell commands |
+| `read` | Read file contents |
+| `edit` / `write` | Modify files |
+| `search` / `grep` | Search files/content |
+| `agent` | Invoke other agents |
+
+```yaml
+tools: ["read", "search"] # Read-only agent
+tools: ["read", "search", "edit", "execute"] # Full dev agent
+```
+
+---
+
+## Minimal Structure
+
+```yaml
+---
+name: my-agent
+description: Does X when user asks Y. Keywords: review, test, fix.
+---
+
+# Agent Title
+
+Brief philosophy.
+
+## When to Use
+- ✅ "trigger phrase"
+
+## When NOT to Use
+- ❌ Other task → Use `other-agent`
+
+## Workflow
+1. Step one
+2. Step two
+
+## Quick Reference
+| Task | Command |
+|------|---------|
+| Do X | `command` |
+
+## Common Mistakes
+- ❌ **Mistake** - Why it's wrong
+```
+
+---
+
+## Checklist
+
+- [ ] YAML frontmatter with `name` and `description`
+- [ ] `description` has trigger keywords
+- [ ] Body under 500 lines
+- [ ] No `handoffs`, `model`, `argument-hint`
+- [ ] No GUI/button references
+- [ ] Skills referenced, not duplicated
+- [ ] "When to Use" / "When NOT to Use" included
diff --git a/.github/instructions/collectionview-handler-detection.instructions.md b/.github/instructions/collectionview-handler-detection.instructions.md
index 2bd0cf12b0cd..24556e39b869 100644
--- a/.github/instructions/collectionview-handler-detection.instructions.md
+++ b/.github/instructions/collectionview-handler-detection.instructions.md
@@ -7,17 +7,40 @@ applyTo: "src/Controls/src/Core/Handlers/Items/**,src/Controls/src/Core/Handlers
## Handler Implementation Status
-There are **TWO separate handler implementations**:
+There are **TWO separate handler implementations**, but they apply to **different platforms**:
-1. **Items/** (`Handlers/Items/`) - **DEPRECATED** - Original implementation
-2. **Items2/** (`Handlers/Items2/`) - **CURRENT** - Active implementation
+1. **Items/** (`Handlers/Items/`) - Contains code for **ALL platforms** (Android, iOS, Windows, MacCatalyst, Tizen)
+2. **Items2/** (`Handlers/Items2/`) - Contains code for **iOS/MacCatalyst ONLY**
-**Default Policy**: Always work on **Items2/** handlers. The Items/ handlers are deprecated and should only be modified if explicitly required.
+### Platform-Specific Deprecation
+
+The deprecation of Items/ **only applies to iOS/MacCatalyst**:
+
+| Platform | Active Handler | Notes |
+|----------|----------------|-------|
+| **Android** | `Items/Android/` | **ONLY implementation** - Items2/ has no Android code |
+| **Windows** | `Items/` | **ONLY implementation** - Items2/ has no Windows code |
+| **iOS** | `Items2/iOS/` | Items/ iOS code is deprecated |
+| **MacCatalyst** | `Items2/iOS/` | Items/ MacCatalyst code is deprecated |
+
+**CRITICAL**: Items2/ is **iOS/MacCatalyst only**. There is NO Items2/ code for Android or Windows.
---
## Which Handler to Work On
+### Decision Tree by Platform
+
+```
+Is the issue/PR for Android or Windows?
+ YES → Work on Items/ (it's the ONLY implementation)
+ NO → Continue...
+
+Is the issue/PR for iOS or MacCatalyst?
+ YES → Work on Items2/ (Items/ is deprecated for iOS)
+ NO → Check platform and find appropriate handler
+```
+
### Detection Algorithm
Check which handler directory the files are in:
@@ -27,24 +50,24 @@ Check which handler directory the files are in:
git diff .. --name-only | grep -i "handlers/items"
# Look for path pattern:
-# - Contains "/Items/" (NOT "Items2") → DEPRECATED (Items)
-# - Contains "/Items2/" → CURRENT (Items2)
+# - Contains "/Items/Android/" → Android (ONLY implementation, work here)
+# - Contains "/Items/Windows/" or ".Windows.cs" → Windows (ONLY implementation, work here)
+# - Contains "/Items2/iOS/" or "Items2/*.iOS.cs" → iOS/MacCatalyst (CURRENT)
+# - Contains "/Items/*.iOS.cs" (not Items2) → iOS (DEPRECATED, prefer Items2/)
```
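+
+A rough PowerShell equivalent of this classification, assuming `main` as the PR's base branch (adjust to the actual base):
+
+```powershell
+# Illustrative only: classify changed CollectionView handler files by platform.
+git diff main...HEAD --name-only | Where-Object { $_ -match 'Handlers/Items' } | ForEach-Object {
+    switch -Regex ($_) {
+        'Handlers/Items/Android/'         { "$_ -> Android (Items/ is the only implementation)"; break }
+        'Handlers/Items/.*\.Windows\.cs$' { "$_ -> Windows (Items/ is the only implementation)"; break }
+        'Handlers/Items2/'                { "$_ -> iOS/MacCatalyst (current)"; break }
+        'Handlers/Items/.*\.iOS\.cs$'     { "$_ -> iOS (deprecated - prefer Items2/)"; break }
+        default                           { "$_ -> check the platform manually" }
+    }
+}
+```
+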
-**Key Patterns**:
-- `src/Controls/src/Core/Handlers/Items/` → **DEPRECATED**
-- `src/Controls/src/Core/Handlers/Items2/` → **CURRENT**
+### Default Behavior by Platform
-### Default Behavior
+| Platform | Default Action |
+|----------|----------------|
+| **Android** | ✅ Work on `Items/Android/` - it's the only option |
+| **Windows** | ✅ Work on `Items/` Windows files - it's the only option |
+| **iOS/MacCatalyst** | ✅ Work on `Items2/` - Items/ is deprecated for iOS |
-**Unless explicitly told otherwise**:
-- ✅ Work on **Items2/** handlers
-- ❌ Do NOT work on **Items/** handlers (deprecated)
+### When to Work on Items/ for iOS (Deprecated)
-### When to Work on Items/ (Deprecated)
-
-Only work on Items/ handlers when:
-- PR explicitly modifies Items/ files
+Only work on Items/ iOS code when:
+- PR explicitly modifies Items/ iOS files
- User explicitly requests changes to deprecated handlers
- Maintaining backward compatibility for a specific fix
@@ -52,7 +75,21 @@ Only work on Items/ handlers when:
## Quick Reference
-| Path Pattern | Status | Default Action |
-|--------------|--------|----------------|
-| `Handlers/Items/` | **DEPRECATED** | Avoid unless explicitly required |
-| `Handlers/Items2/` | **CURRENT** | Use by default |
+| Path Pattern | Platform | Status |
+|--------------|----------|--------|
+| `Handlers/Items/Android/` | Android | **ACTIVE** (only implementation) |
+| `Handlers/Items/*.Windows.cs` | Windows | **ACTIVE** (only implementation) |
+| `Handlers/Items2/iOS/` | iOS/MacCatalyst | **ACTIVE** (current) |
+| `Handlers/Items/*.iOS.cs` | iOS/MacCatalyst | **DEPRECATED** (use Items2/) |
+
+---
+
+## Common Mistakes to Avoid
+
+❌ **Wrong**: "Items/ is deprecated, so I should check if Items2/ needs the same Android fix"
+- Items2/ has NO Android code - there's nothing to check
+
+❌ **Wrong**: "This Android fix should also go in Items2/"
+- Items2/ is iOS-only, Android code only exists in Items/
+
+✅ **Correct**: "This is an Android-only issue, so I work in Items/Android/ which is the only Android implementation"
diff --git a/.github/instructions/sandbox.instructions.md b/.github/instructions/sandbox.instructions.md
index ffc83dd94250..c4237513b6fd 100644
--- a/.github/instructions/sandbox.instructions.md
+++ b/.github/instructions/sandbox.instructions.md
@@ -161,20 +161,20 @@ Work with the Sandbox app for manual testing, PR validation, issue reproduction,
## When NOT to Use Sandbox
-- ❌ User asks to "review PR #XXXXX" → Use **pr-reviewer** agent for code review
+- ❌ User asks to "review PR #XXXXX" → Use **pr** agent for code review
- ❌ User asks to "write UI tests" or "create automated tests" → Use **uitest-coding-agent**
- ❌ User asks to "validate the UI tests" or "verify test quality" → Review test code instead
-- ❌ User asks to "fix issue #XXXXX" → Use **issue-resolver** agent
+- ❌ User asks to "fix issue #XXXXX" (no PR exists) → Suggest `/delegate` command
- ❌ PR only adds documentation (no code changes to test)
- ❌ PR only modifies build scripts (no functional changes)
## Distinction: Code Review vs. Functional Testing
-**Code Review** (pr-reviewer agent):
+**Code Review** (pr agent):
- Analyzes code quality, patterns, best practices
- Reviews test coverage and correctness
- Checks for potential bugs or issues in the code itself
-- Trigger: "review PR", "review pull request", "code review"
+- Trigger: "review PR", "work on PR"
**Functional Testing** (sandbox-agent):
- Builds and deploys PR to device/simulator
diff --git a/.github/instructions/skills.instructions.md b/.github/instructions/skills.instructions.md
new file mode 100644
index 000000000000..1cb6b1de5296
--- /dev/null
+++ b/.github/instructions/skills.instructions.md
@@ -0,0 +1,387 @@
+---
+applyTo: ".github/skills/**"
+---
+
+# Agent Skills Development Guidelines
+
+This instruction file provides guidance for creating and modifying Agent Skills in the `.github/skills/` directory.
+
+## Specification Reference
+
+Agent Skills follow the open standard defined at:
+- **Official Specification**: https://agentskills.io/specification
+- **VS Code Documentation**: https://code.visualstudio.com/docs/copilot/customization/agent-skills
+- **GitHub Documentation**: https://docs.github.com/en/copilot/concepts/agents/about-agent-skills
+
+## Core Principle: Self-Contained Skills
+
+**Skills should be self-contained and portable.** Each skill folder should include everything needed for that skill to function, making it easy to copy to other repositories or share with others.
+
+## Skill Location
+
+Skills must be placed in the `.github/skills/` directory:
+
+- Standard location for GitHub-integrated projects
+- Works with GitHub Copilot, Copilot CLI, and coding agents
+- Enables automatic discovery by AI agents
+
+## Required Directory Structure
+
+Each skill MUST be a directory containing at minimum a `SKILL.md` file:
+
+```
+.github/skills/
+└── skill-name/
+ ├── SKILL.md # Required - skill definition
+ ├── scripts/ # Optional - executable scripts (self-contained)
+ ├── assets/ # Optional - templates/resources
+ └── references/ # Optional - documentation
+```
+
+**Important:** Scripts should be placed in the skill's own `scripts/` folder to maintain self-containment and support progressive disclosure.
+
+## SKILL.md Format
+
+### Required YAML Frontmatter
+
+Every `SKILL.md` MUST start with YAML frontmatter containing:
+
+```yaml
+---
+name: skill-name
+description: A clear description of what the skill does and when to use it.
+---
+```
+
+### Required Frontmatter Fields
+
+| Field | Requirements | Example |
+|-------|--------------|---------|
+| `name` | Lowercase, max 64 chars, letters/numbers/hyphens only. Must match folder name. | `deploy-staging` |
+| `description` | Max 1024 chars. Explains what the skill does and when to use it. | `Deploys the application to the staging environment.` |
+
+### Optional Frontmatter Fields
+
+| Field | Purpose | Example |
+|-------|---------|---------|
+| `license` | License name or reference | `MIT` |
+| `metadata` | Arbitrary key-value info | `author: my-org` |
+| `compatibility` | Environment requirements | `Requires docker, kubectl` |
+| `allowed-tools` | Pre-approved tools (experimental) | `curl jq` |
+
+### Full Frontmatter Example
+
+```yaml
+---
+name: deploy-staging
+description: Deploys the application to the staging environment. Use when asked to deploy or release to staging.
+license: MIT
+metadata:
+ author: my-org
+ version: "1.0"
+compatibility: Requires docker and kubectl. Must have cluster access configured.
+---
+```
+
+## Markdown Body Structure
+
+After the YAML frontmatter, include:
+
+1. **Title** - `# Skill Name`
+2. **When to Use** - Trigger phrases and scenarios with keywords for agent discovery
+3. **Instructions** - Step-by-step guidance for the agent
+4. **Examples** - Usage examples with code blocks
+5. **Parameters** (if applicable) - Table of script parameters
+6. **Related Files** - Links to scripts, workflows, etc.
+
+## Context Efficiency Best Practices
+
+To support progressive disclosure and minimize token usage:
+
+- **Keep SKILL.md under 500 lines** - Move detailed content to separate files
+- **Use three-level loading**:
+ 1. Metadata (~100 tokens) - name/description loaded at startup
+ 2. Instructions (<5000 tokens) - SKILL.md content loaded when skill activates
+ 3. Resources (as needed) - scripts/references loaded on-demand
+- **Move large examples** to `references/` or `assets/` and link to them
+- **Keep file references one level deep** from SKILL.md (e.g., `scripts/validate.ps1`)
+
+## Writing Effective Descriptions
+
+Descriptions are critical for agent discovery. They should:
+
+1. **Include specific keywords** that agents will match (e.g., "triage", "validate", "review")
+2. **Specify WHEN to use** the skill (trigger scenarios)
+3. **Specify WHAT the skill does** (capabilities)
+
+**Examples:**
+
+✅ Good: "Validates that UI tests correctly fail without a fix and pass with a fix. Use after assess-test-type confirms UI tests are appropriate."
+
+❌ Bad: "Handles test stuff"
+
+## Name Validation Rules
+
+✅ **Valid names**:
+- `deploy-staging`
+- `run-tests`
+- `data-migration-v2`
+
+❌ **Invalid names**:
+- `Deploy-Staging` (uppercase)
+- `-deploy-staging` (starts with hyphen)
+- `deploy--staging` (consecutive hyphens)
+- `deploy_staging` (underscores not allowed)
+
+## Integration with Scripts and Workflows
+
+Skills can include scripts and integrate with GitHub Actions:
+
+| Component | Location | Purpose |
+|-----------|----------|---------|
+| Skill definition | `.github/skills/<skill-name>/SKILL.md` | Agent instructions |
+| Skill scripts | `.github/skills/<skill-name>/scripts/` | Self-contained automation |
+| Shared utilities (rare) | `.github/scripts/` | Only if used by 5+ skills or workflows |
+| GitHub Action | `.github/workflows/<workflow-name>.yml` | Scheduled/triggered automation |
+
+## Script Organization Guidelines
+
+### Default: Self-Contained Scripts (Recommended)
+
+**Each skill should include its own complete scripts** in the `scripts/` folder:
+
+```
+.github/skills/
+└── validate-ui-tests/
+ ├── SKILL.md
+ └── scripts/
+ └── validate-regression.ps1 # Complete implementation
+```
+
+**Benefits:**
+- ✅ Progressive disclosure works correctly
+- ✅ Skill is portable (copy folder = copy skill)
+- ✅ Clear ownership and maintenance
+- ✅ No hidden dependencies
+
+**Script template:**
+```powershell
+# .github/skills/<skill-name>/scripts/<script-name>.ps1
+
+param(
+ [Parameter(Mandatory=$true)]
+ [string]$RequiredParam,
+
+ [string]$OptionalParam = "default"
+)
+
+$ErrorActionPreference = "Stop"
+
+Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Cyan
+Write-Host "║ - Description ║" -ForegroundColor Cyan
+Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Cyan
+
+# Implementation here
+# ...
+```
+
+### Script Best Practices
+
+When writing scripts for skills:
+
+1. **Error Handling**
+ - Use `$ErrorActionPreference = "Stop"` (PowerShell) or `set -e` (Bash)
+ - Provide clear error messages with actionable guidance
+ - Exit with appropriate codes (0 = success, non-zero = failure)
+
+2. **Edge Cases**
+ - Validate input parameters
+ - Handle missing files gracefully
+ - Check for required tools/dependencies at script start
+
+3. **Documentation**
+ - Include `.SYNOPSIS` and `.DESCRIPTION` in PowerShell scripts
+ - Add `--help` flag for Bash scripts
+ - Document all parameters with examples
+
+4. **Self-Contained or Documented**
+ - Either include all logic in the script, OR
+   - Clearly document external dependencies in SKILL.md (see the Dependencies example below)
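+
+Putting these practices together, a minimal skeleton (illustrative only; adapt the parameters and tool checks to the skill):
+
+```powershell
+#!/usr/bin/env pwsh
+<#
+.SYNOPSIS
+    Example skeleton applying the script best practices above.
+#>
+param(
+    [Parameter(Mandatory = $true)]
+    [ValidateSet("android", "ios")]
+    [string]$Platform
+)
+
+$ErrorActionPreference = "Stop"
+
+# Check required tools before doing any work
+foreach ($tool in @("dotnet", "git")) {
+    if (-not (Get-Command $tool -ErrorAction SilentlyContinue)) {
+        Write-Host "❌ Required tool '$tool' was not found on PATH." -ForegroundColor Red
+        exit 1
+    }
+}
+
+# ... skill-specific work goes here ...
+
+Write-Host "✅ Done." -ForegroundColor Green
+exit 0
+```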
+
+### Supported Script Languages
+
+Skills can include scripts in various languages:
+
+| Language | Extension | Use When | Notes |
+|----------|-----------|----------|-------|
+| PowerShell | `.ps1` | Windows-centric, .NET tooling | Used in this repository |
+| Bash | `.sh` | Unix/Linux, system automation | Add shebang: `#!/bin/bash` |
+| Python | `.py` | Cross-platform, data processing | Add shebang: `#!/usr/bin/env python3` |
+| JavaScript/Node | `.js` | Frontend projects, npm tooling | Requires Node.js in environment |
+
+**Note:** Agent support for languages varies by implementation. Document requirements in the `compatibility` field.
+
+### Exception: Shared Infrastructure
+
+**Only extract to `.github/scripts/` if:**
+1. Used by **5 or more** skills/workflows
+2. Truly infrastructure/utility (not skill-specific logic)
+3. Maintenance benefits outweigh portability costs
+
+**When using shared scripts, document dependencies clearly:**
+
+```markdown
+## Dependencies
+
+This skill uses the shared infrastructure script:
+- `.github/scripts/BuildAndRunHostApp.ps1` - Test runner for UI tests
+
+See that file for additional requirements.
+```
+
+## How Agents Discover and Activate Skills
+
+Understanding how agents select skills helps you write better descriptions:
+
+1. **Discovery Phase** (Startup)
+ - Agent loads `name` and `description` from all skills (~100 tokens each)
+ - Creates an index of available capabilities
+
+2. **Matching Phase** (User request)
+ - Agent compares user prompt against skill descriptions
+ - Matches keywords and trigger phrases
+ - Multiple skills can activate if relevant
+
+3. **Activation Phase** (Skill selected)
+ - Full `SKILL.md` content loads into context (<5000 tokens)
+ - Agent follows instructions step-by-step
+ - Resources (scripts, examples) load on-demand
+
+**This is why keyword-rich descriptions matter!** Skills are automatically discovered by agents—no manual registration required.
+
+For human documentation purposes, you may optionally list skills in `.github/copilot-instructions.md`.
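+
+As a concrete illustration of the discovery phase, a small sketch that builds the name/description index from this repository's skills (assumes flat, single-line frontmatter values):
+
+```powershell
+# Illustrative discovery-phase index: one name/description pair per skill.
+Get-ChildItem ".github/skills" -Directory | ForEach-Object {
+    $dir     = $_
+    $skillMd = Join-Path $dir.FullName "SKILL.md"
+    if (-not (Test-Path $skillMd)) { return }
+
+    $lines = Get-Content $skillMd
+    [pscustomobject]@{
+        Folder      = $dir.Name
+        Name        = ($lines | Where-Object { $_ -match '^name:' }        | Select-Object -First 1) -replace '^name:\s*', ''
+        Description = ($lines | Where-Object { $_ -match '^description:' } | Select-Object -First 1) -replace '^description:\s*', ''
+    }
+} | Format-Table -AutoSize
+```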
+
+## Validation (Optional)
+
+You can optionally validate skills using the skills-ref reference tool:
+
+```bash
+# Install
+pip install skills-ref
+
+# Validate a skill
+skills-ref validate .github/skills/skill-name/
+```
+
+This checks:
+- SKILL.md exists and has valid YAML frontmatter
+- `name` matches folder and follows naming rules
+- `description` is present and within limits
+- File structure follows the specification
+
+**Note:** skills-ref is a reference implementation for demonstration. For critical validation, manually review against the [specification](https://agentskills.io/specification).
+
+## Security Considerations
+
+When creating or using skills:
+
+1. **Review Shared Skills**
+ - Always review skills from external sources before using
+ - Verify scripts don't contain malicious code
+ - Check what permissions/tools scripts require
+
+2. **Script Execution**
+ - Skills may execute scripts via the terminal/agent tools
+ - Document required permissions in `compatibility` field
+ - Test scripts in isolation before deploying
+
+3. **Sensitive Data**
+ - Never hardcode credentials or secrets in skills
+ - Use environment variables or secure credential stores
+ - Document required environment setup in SKILL.md
+
+## Troubleshooting
+
+### Skill Not Activating
+
+**Problem:** Agent doesn't use your skill when expected
+
+**Solutions:**
+1. Check description includes keywords from user's likely prompts
+2. Verify `name` matches folder name exactly
+3. Ensure YAML frontmatter is valid (use `skills-ref validate`)
+4. Check skill location (must be in `.github/skills/`)
+
+### Scripts Not Found
+
+**Problem:** Agent references script but gets "file not found"
+
+**Solutions:**
+1. Use relative paths from SKILL.md (e.g., `scripts/my-script.ps1`)
+2. Verify scripts have correct permissions (executable on Unix systems)
+3. Check file actually exists in skill folder
+
+### SKILL.md Too Long
+
+**Problem:** Skill consumes too much context
+
+**Solutions:**
+1. Move detailed examples to `references/` folder
+2. Move command reference to `assets/` folder
+3. Link to external docs for comprehensive guides
+4. Target <500 lines in SKILL.md
+
+## Checklist for New Skills
+
+- [ ] Created directory: `.github/skills/<skill-name>/`
+- [ ] Created `SKILL.md` with valid YAML frontmatter
+- [ ] `name` field matches directory name (lowercase, hyphenated)
+- [ ] `description` includes keywords and explains when to use the skill
+- [ ] SKILL.md is under 500 lines (move detailed content to references/)
+- [ ] Markdown body includes instructions, examples, and usage
+- [ ] Scripts are self-contained in `.github/skills/<skill-name>/scripts/` folder
+- [ ] Scripts document their parameters and usage
+- [ ] Dependencies on shared scripts (if any) are documented in SKILL.md
+- [ ] GitHub Action workflow created (if scheduled automation needed)
+
+## Examples of Skill Structures
+
+### Self-Contained Executable Skill (Recommended)
+
+```
+.github/skills/validate-ui-tests/
+├── SKILL.md
+└── scripts/
+ └── validate-regression.ps1 # Complete self-contained implementation
+```
+
+### Information-Only Skill
+
+```
+.github/skills/assess-test-type/
+├── SKILL.md # Decision framework only
+└── references/ # Optional reference docs
+ └── test-type-examples.md
+```
+
+### Skill with Multiple Scripts
+
+```
+.github/skills/issue-triage/
+├── SKILL.md
+└── scripts/
+ ├── query-issues.ps1 # Main script
+ └── format-results.ps1 # Helper script
+```
+
+### Skill Using Shared Infrastructure (Exception)
+
+```
+.github/skills/validate-ui-tests/
+├── SKILL.md # Documents dependency on BuildAndRunHostApp.ps1
+└── scripts/
+ └── validate-regression.ps1 # Calls ../../scripts/BuildAndRunHostApp.ps1
+
+.github/scripts/
+└── BuildAndRunHostApp.ps1 # Shared by 5+ skills/workflows
+```
diff --git a/.github/scripts/BuildAndRunHostApp.ps1 b/.github/scripts/BuildAndRunHostApp.ps1
index a4e6305f6b4c..5aac40e40023 100644
--- a/.github/scripts/BuildAndRunHostApp.ps1
+++ b/.github/scripts/BuildAndRunHostApp.ps1
@@ -56,7 +56,9 @@ param(
[ValidateSet("Debug", "Release")]
[string]$Configuration = "Debug",
- [string]$DeviceUdid
+ [string]$DeviceUdid,
+
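+    # Forces a full (non-incremental) rebuild of the host app when set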
+ [switch]$Rebuild
)
# Script configuration
@@ -151,6 +153,7 @@ $buildDeployParams = @{
TargetFramework = $TargetFramework
Configuration = $Configuration
DeviceUdid = $DeviceUdid
+ Rebuild = $Rebuild
}
if ($Platform -eq "ios") {
diff --git a/.github/scripts/shared/Build-AndDeploy.ps1 b/.github/scripts/shared/Build-AndDeploy.ps1
index e14de7f7c06c..5657fe1012bc 100644
--- a/.github/scripts/shared/Build-AndDeploy.ps1
+++ b/.github/scripts/shared/Build-AndDeploy.ps1
@@ -52,7 +52,10 @@ param(
[string]$DeviceUdid,
[Parameter(Mandatory=$false)]
- [string]$BundleId
+ [string]$BundleId,
+
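+    # When set, adds --no-incremental so dotnet build performs a full rebuild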
+ [Parameter(Mandatory=$false)]
+ [switch]$Rebuild
)
# Import shared utilities
@@ -71,12 +74,18 @@ if ($Platform -eq "android") {
#region Android Build and Deploy
Write-Step "Building and deploying $projectName for Android..."
- Write-Info "Build command: dotnet build $ProjectPath -f $TargetFramework -c $Configuration -t:Run"
+
+ $buildArgs = @($ProjectPath, "-f", $TargetFramework, "-c", $Configuration, "-t:Run")
+ if ($Rebuild) {
+ $buildArgs += "--no-incremental"
+ }
+
+ Write-Info "Build command: dotnet build $($buildArgs -join ' ')"
$buildStartTime = Get-Date
# Build and deploy in one step (Run target handles both)
- dotnet build $ProjectPath -f $TargetFramework -c $Configuration -t:Run
+ & dotnet build @buildArgs
$buildExitCode = $LASTEXITCODE
$buildDuration = (Get-Date) - $buildStartTime
@@ -94,12 +103,18 @@ if ($Platform -eq "android") {
#region iOS Build and Deploy
Write-Step "Building $projectName for iOS..."
- Write-Info "Build command: dotnet build $ProjectPath -f $TargetFramework -c $Configuration"
+
+ $buildArgs = @($ProjectPath, "-f", $TargetFramework, "-c", $Configuration)
+ if ($Rebuild) {
+ $buildArgs += "--no-incremental"
+ }
+
+ Write-Info "Build command: dotnet build $($buildArgs -join ' ')"
$buildStartTime = Get-Date
# Build app
- dotnet build $ProjectPath -f $TargetFramework -c $Configuration
+ & dotnet build @buildArgs
$buildExitCode = $LASTEXITCODE
$buildDuration = (Get-Date) - $buildStartTime
diff --git a/.github/skills/try-fix/SKILL.md b/.github/skills/try-fix/SKILL.md
new file mode 100644
index 000000000000..4b414c4fd695
--- /dev/null
+++ b/.github/skills/try-fix/SKILL.md
@@ -0,0 +1,341 @@
+---
+name: try-fix
+description: Proposes ONE independent fix approach, applies it, runs tests, records result with failure analysis in state file, then reverts. Reads prior attempts to learn from failures. Returns exhausted=true when no more ideas. Max 5 attempts per session.
+---
+
+# Try Fix Skill
+
+Proposes and tests ONE independent fix approach per invocation. The agent invokes this skill repeatedly to explore multiple alternatives.
+
+## Core Principles
+
+1. **Single-shot**: Each invocation = ONE fix idea, tested, recorded, reverted
+2. **Independent**: Generate fix ideas WITHOUT looking at or being influenced by the PR's fix
+3. **Empirical**: Actually implement and test - don't just theorize
+4. **Learning**: When a fix fails, analyze WHY and record the flawed reasoning
+
+## When to Use
+
+- ✅ After Gate passes - you have a verified reproduction test
+- ✅ When exploring independent fix alternatives (even if PR already has a fix)
+- ✅ When the agent needs to iterate through multiple fix attempts
+
+## When NOT to Use
+
+- ❌ Before Gate passes (you need a test that catches the bug first)
+- ❌ For writing tests (use `write-tests` skill)
+- ❌ For just running tests (use `BuildAndRunHostApp.ps1` directly)
+- ❌ To test the PR's existing fix (Gate already validated that)
+
+---
+
+## Inputs
+
+Before invoking this skill, ensure you have:
+
+| Input | Source | Example |
+|-------|--------|---------|
+| State file path | Agent workflow | `.github/agent-pr-session/pr-12345.md` |
+| Test filter | From test files | `Issue12345` |
+| Platform | From issue labels | `android` or `ios` |
+| PR fix files | From Pre-Flight | Files changed by PR (to revert) |
+
+---
+
+## Workflow
+
+### Step 1: Read State File and Learn from Prior Attempts
+
+Read the state file to find prior attempts:
+
+```bash
+cat .github/agent-pr-session/pr-XXXXX.md
+```
+
+Look for the **Fix Candidates** table. For each prior attempt:
+- What approach was tried?
+- Did it pass or fail?
+- **If it failed, WHY did it fail?** (This is critical for learning)
+
+**Use failure analysis to avoid repeating mistakes:**
+- If attempt #1 failed because "too late in lifecycle" → don't try other late-lifecycle fixes
+- If attempt #2 failed because "trigger wasn't enough, calculation logic needed fixing" → focus on calculation logic
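+
+If it helps, prior attempts can also be skimmed programmatically. A rough sketch, assuming the Fix Candidates table format shown later in this skill:
+
+```powershell
+# Illustrative only: list prior try-fix rows from the state file's Fix Candidates table.
+Get-Content ".github/agent-pr-session/pr-XXXXX.md" |
+    Where-Object { $_ -match '^\|\s*\d+\s*\|\s*try-fix' } |
+    ForEach-Object {
+        $cols = $_ -split '\|'
+        [pscustomobject]@{
+            Attempt  = $cols[1].Trim()
+            Approach = $cols[3].Trim()
+            Result   = $cols[4].Trim()
+            Notes    = $cols[6].Trim()
+        }
+    }
+```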
+
+### Step 2: Revert PR's Fix (Get Broken Baseline)
+
+**🚨 CRITICAL: You must work from a broken state where the bug exists.**
+
+```bash
+# Identify the PR's fix files from the state file "Files Changed" section
+# Revert ALL fix files (not test files)
+git checkout HEAD~1 -- src/path/to/fix1.cs src/path/to/fix2.cs
+
+# Verify the bug is present (test should FAIL)
+# This is your baseline
+```
+
+**Why?** You're testing whether YOUR fix works, independent of the PR's fix.
+
+### Step 3: Check if Exhausted
+
+Before proposing a new fix, evaluate:
+
+1. **Count prior try-fix attempts** - If 5+ attempts already recorded, return `exhausted=true`
+2. **Review what's been tried and WHY it failed** - Can you think of a meaningfully different approach?
+3. **If no new ideas** - Return `exhausted=true`
+
+**Signs you're exhausted:**
+- All obvious approaches have been tried
+- Remaining ideas are variations of failed attempts (same root flaw)
+- You keep coming back to approaches similar to what failed
+- The problem requires architectural changes beyond scope
+
+If exhausted, **stop here** and return to the agent with `exhausted=true`.
+
+### Step 4: Analyze the Code (Independent of PR's Fix)
+
+**🚨 DO NOT look at the PR's fix implementation.** Generate your own ideas.
+
+Research the bug to propose a NEW approach:
+
+```bash
+# Find the affected code
+grep -r "SymptomOrClassName" src/Controls/src/ --include="*.cs" -l
+
+# Look at the implementation
+cat path/to/affected/File.cs
+
+# Check git history for context (but NOT the PR's commits)
+git log --oneline -10 -- path/to/affected/File.cs
+```
+
+**Key questions:**
+- What is the root cause of this bug?
+- Where in the code should a fix go?
+- What's the minimal change needed?
+- How is this different from prior failed attempts?
+
+### Step 5: Propose ONE Fix
+
+Design an approach that is:
+- **Independent** - NOT influenced by the PR's solution
+- **Different** from prior attempts in the state file
+- **Informed** by WHY prior attempts failed
+- **Minimal** - smallest change that fixes the issue
+
+Document your approach before implementing:
+- Which file(s) to change
+- What the change is
+- Why you think this will work
+- How it differs from prior failed attempts
+
+### Step 6: Apply the Fix
+
+Edit the necessary files to implement your fix.
+
+**Track which files you modify** - you'll need to revert them later.
+
+```bash
+# Note the files you're about to change
+git status --short
+```
+
+### Step 7: Run Tests
+
+Run the reproduction test to see if your fix works:
+
+```bash
+pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform $PLATFORM -TestFilter "$TEST_FILTER"
+```
+
+**Capture the result:**
+- ✅ **PASS** - Your fix works (test now passes)
+- ❌ **FAIL** - Your fix doesn't work (test still fails, or other tests broke)
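+
+A minimal sketch for reading the outcome from the runner's log (path and format follow what `BuildAndRunHostApp.ps1` writes to `CustomAgentLogsTmp/UITests/`; adjust if your run logs elsewhere):
+
+```powershell
+# Illustrative result check based on the shared test runner's output log.
+$log = "CustomAgentLogsTmp/UITests/test-output.log"
+if (Test-Path $log) {
+    $content = Get-Content $log -Raw
+    if ($content -match "Failed:\s*(\d+)" -and [int]$matches[1] -gt 0) {
+        Write-Host "Result: ❌ FAIL ($($matches[1]) failing test(s))"
+    } else {
+        Write-Host "Result: ✅ PASS"
+    }
+} else {
+    Write-Warning "Test output log not found - check the runner output"
+}
+```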
+
+### Step 8: If Failed - Analyze WHY
+
+**🚨 CRITICAL: This step is required for failed attempts.**
+
+When your fix fails, analyze:
+
+1. **What was your hypothesis?** Why did you think this would work?
+2. **What actually happened?** What did the test output show?
+3. **Why was your reasoning flawed?** What did you misunderstand about the bug?
+4. **What would be needed instead?** What insight does this failure provide?
+
+This analysis helps future try-fix invocations avoid the same mistake.
+
+### Step 9: Update State File
+
+Add a new row to the **Fix Candidates** table in the state file:
+
+**For PASSING fixes:**
+```markdown
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| N | try-fix | [Your approach] | ✅ PASS | `file.cs` (+X) | Works! [any observations] |
+```
+
+**For FAILING fixes (include failure analysis):**
+```markdown
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| N | try-fix | [Your approach] | ❌ FAIL | `file.cs` (+X) | **Why failed:** [Analysis of flawed reasoning and what you learned] |
+```
+
+### Step 10: Revert Everything
+
+**Always revert** to restore the PR's original state:
+
+```bash
+# Revert ALL changes (your fix AND the PR revert from Step 2)
+git checkout -- .
+```
+
+**Do NOT revert the state file** - the new candidate row should persist.
+
+### Step 11: Return to Agent
+
+Report back to the agent with:
+
+| Field | Value |
+|-------|-------|
+| `approach` | Brief description of what was tried |
+| `test_result` | PASS or FAIL |
+| `exhausted` | true if no more ideas, false otherwise |
+
+---
+
+## Fix Candidates Table Format
+
+The state file should have this section:
+
+```markdown
+## Fix Candidates
+
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| 1 | try-fix | Fix in TabbedPageManager | ❌ FAIL | `TabbedPageManager.cs` (+5) | **Why failed:** Too late in lifecycle - by the time OnPageSelected fires, layout already measured with stale values |
+| 2 | try-fix | RequestApplyInsets only | ❌ FAIL | `ToolbarExtensions.cs` (+2) | **Why failed:** Trigger alone insufficient - calculation logic still used cached values |
+| 3 | try-fix | Reset cache + RequestApplyInsets | ✅ PASS | `ToolbarExtensions.cs`, `InsetListener.cs` (+8) | Works! Similar to PR's approach |
+| PR | PR #XXXXX | [PR's approach] | ✅ PASS (Gate) | [files] | Original PR - validated by Gate |
+
+**Exhausted:** Yes
+**Selected Fix:** #3 or PR - both work, compare for simplicity
+```
+
+**Note:** The PR's fix is recorded as a reference (validated by Gate) but is NOT tested by try-fix.
+
+---
+
+## Guidelines for Proposing Fixes
+
+### Independence is Critical
+
+🚨 **DO NOT look at the PR's fix code when generating ideas.**
+
+The goal is to see if you can independently arrive at the same solution (validating the PR's approach) or find a better alternative.
+
+If your independent fix matches the PR's approach, that's strong validation. If you find a simpler/better approach, that's valuable feedback.
+
+### Good Fix Approaches
+
+✅ **Null/state checks** - Guard against unexpected null or state
+✅ **Lifecycle timing** - Move code to correct lifecycle event
+✅ **Platform-specific handling** - Add platform check if needed
+✅ **Event ordering** - Fix race conditions or ordering issues
+✅ **Cache invalidation** - Reset stale cached values
+
+### Approaches to Avoid
+
+❌ **Looking at the PR's fix first** - Generate ideas independently
+❌ **Duplicating prior failed attempts** - Check the table and learn from failures
+❌ **Variations of failed approaches with the same root flaw** - If an attempt failed because the timing was wrong, another fix at the same point in the lifecycle will fail for the same reason
+❌ **Massive refactors** - Keep changes minimal
+❌ **Suppressing symptoms** - Fix root cause, not symptoms
+
+### Learning from Failures
+
+When a fix fails, the failure analysis is crucial:
+
+- **Bad note:** "Didn't work"
+- **Good note:** "**Why failed:** RequestApplyInsets triggers recalculation, but MeasuredHeight was still cached from previous layout pass. Need to also invalidate the cached measurement."
+
+This helps the next try-fix invocation avoid the same mistake.
+
+---
+
+## Example Session
+
+**State file before (after Gate passed):**
+```markdown
+## Fix Candidates
+
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| PR | PR #33359 | RequestApplyInsets + reset appBarHasContent | ✅ PASS (Gate) | 2 files | Original PR |
+
+**Exhausted:** No
+**Selected Fix:** [PENDING]
+```
+
+**try-fix invocation #1:**
+1. Reads state → sees PR's fix passed Gate, no try-fix attempts yet
+2. Reverts PR's fix files → now bug exists
+3. Analyzes code independently → proposes: "Fix in TabbedPageManager.OnPageSelected"
+4. Applies fix → edits `TabbedPageManager.cs`
+5. Runs tests → ❌ FAIL
+6. Analyzes failure → "Too late in lifecycle, layout already measured"
+7. Updates state file → adds try-fix Candidate #1 with failure analysis
+8. Reverts everything (including restoring PR's fix)
+9. Returns `{approach: "Fix in TabbedPageManager", test_result: FAIL, exhausted: false}`
+
+**State file after invocation #1:**
+```markdown
+## Fix Candidates
+
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| 1 | try-fix | Fix in TabbedPageManager.OnPageSelected | ❌ FAIL | `TabbedPageManager.cs` (+5) | **Why failed:** Too late in lifecycle - OnPageSelected fires after layout measured |
+| PR | PR #33359 | RequestApplyInsets + reset appBarHasContent | ✅ PASS (Gate) | 2 files | Original PR |
+
+**Exhausted:** No
+**Selected Fix:** [PENDING]
+```
+
+**try-fix invocation #2:**
+1. Reads state → sees attempt #1 failed because "too late in lifecycle"
+2. Reverts PR's fix → bug exists
+3. Learns from #1 → needs earlier timing, proposes: "Trigger in UpdateIsVisible"
+4. Applies fix → edits `ToolbarExtensions.cs`
+5. Runs tests → ✅ PASS
+6. Updates state file → adds Candidate #2
+7. Reverts everything
+8. Returns `{approach: "Trigger in UpdateIsVisible", test_result: PASS, exhausted: false}`
+
+**State file after invocation #2:**
+```markdown
+## Fix Candidates
+
+| # | Source | Approach | Test Result | Files Changed | Notes |
+|---|--------|----------|-------------|---------------|-------|
+| 1 | try-fix | Fix in TabbedPageManager.OnPageSelected | ❌ FAIL | `TabbedPageManager.cs` (+5) | **Why failed:** Too late in lifecycle |
+| 2 | try-fix | RequestApplyInsets in UpdateIsVisible | ✅ PASS | `ToolbarExtensions.cs` (+2) | Works! Simpler than PR (1 file vs 2) |
+| PR | PR #33359 | RequestApplyInsets + reset appBarHasContent | ✅ PASS (Gate) | 2 files | Original PR |
+
+**Exhausted:** No
+**Selected Fix:** [PENDING]
+```
+
+**Agent decides:** Found a passing alternative (#2). Can continue to find more, or stop and compare #2 vs PR.
+
+---
+
+## Constraints
+
+- **Max 5 try-fix attempts** per session (PR's fix is NOT counted - it was validated by Gate)
+- **Always revert** after each attempt (restore PR's original state)
+- **Always update state file** before reverting
+- **Never skip testing** - every fix must be validated empirically
+- **Never look at PR's fix** when generating ideas - stay independent
+- **Always analyze failures** - record WHY fixes didn't work
diff --git a/.github/skills/verify-tests-fail-without-fix/SKILL.md b/.github/skills/verify-tests-fail-without-fix/SKILL.md
new file mode 100644
index 000000000000..f0708157e4c5
--- /dev/null
+++ b/.github/skills/verify-tests-fail-without-fix/SKILL.md
@@ -0,0 +1,89 @@
+---
+name: verify-tests-fail-without-fix
+description: Verifies UI tests catch the bug. Auto-detects mode based on git diff - if fix files exist, verifies FAIL without fix and PASS with fix. If only test files, verifies tests FAIL.
+---
+
+# Verify Tests Fail Without Fix
+
+Verifies UI tests actually catch the issue. **Mode is auto-detected based on git diff.**
+
+## Usage
+
+```bash
+# Auto-detects everything - just specify platform
+pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform android
+
+# With explicit test filter
+pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform ios -TestFilter "Issue33356"
+```
+
+## Auto-Detection
+
+The script automatically determines the mode:
+
+| Changed Files | Mode | Behavior |
+|---------------|------|----------|
+| Fix files + test files | Full verification | FAIL without fix, PASS with fix |
+| Only test files | Verify failure only | Tests must FAIL (reproduce bug) |
+
+- **Fix files** = any changed file NOT in test directories
+- **Test files** = files in `TestCases.*` directories
+
+## Expected Output
+
+**Full mode (fix files detected):**
+```
+╔═══════════════════════════════════════════════════════════╗
+║ VERIFICATION PASSED ✅ ║
+╠═══════════════════════════════════════════════════════════╣
+║ - FAIL without fix (as expected) ║
+║ - PASS with fix (as expected) ║
+╚═══════════════════════════════════════════════════════════╝
+```
+
+**Verify failure only (no fix files):**
+```
+╔═══════════════════════════════════════════════════════════╗
+║ VERIFICATION PASSED ✅ ║
+╠═══════════════════════════════════════════════════════════╣
+║ Tests FAILED as expected (bug is reproduced) ║
+╚═══════════════════════════════════════════════════════════╝
+```
+
+## Troubleshooting
+
+| Problem | Cause | Solution |
+|---------|-------|----------|
+| Tests pass without fix | Tests don't detect the bug | Review test assertions, update test |
+| Tests pass (no fix files) | **Test is wrong** | Review test vs issue description, fix test |
+| App crashes | Duplicate issue numbers, XAML error | Check device logs |
+| Element not found | Wrong AutomationId, app crashed | Verify IDs match |
+
+## What It Does
+
+**Full mode:**
+1. Auto-detects fix files (non-test code) from git diff
+2. Auto-detects test classes from `TestCases.Shared.Tests/*.cs`
+3. Reverts fix files to base branch
+4. Runs tests (should FAIL without fix)
+5. Restores fix files
+6. Runs tests (should PASS with fix)
+7. Reports result
+
+**Verify Failure Only mode:**
+1. Runs tests once
+2. Verifies they FAIL (bug reproduced)
+3. Reports result
+
+## Optional Parameters
+
+```bash
+# Explicit test filter
+-TestFilter "Issue32030|ButtonUITests"
+
+# Explicit fix files
+-FixFiles @("src/Core/src/File.cs")
+
+# Explicit base branch for the revert (default: auto-detected from the PR)
+-BaseBranch "main"
+```
diff --git a/.github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 b/.github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1
new file mode 100644
index 000000000000..5aa8b570ab8e
--- /dev/null
+++ b/.github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1
@@ -0,0 +1,539 @@
+#!/usr/bin/env pwsh
+<#
+.SYNOPSIS
+ Verifies that UI tests catch the bug. Auto-detects mode based on whether fix files exist.
+
+.DESCRIPTION
+ This script verifies that tests actually catch the issue. It auto-detects the mode:
+
+ **If fix files exist (non-test code changed):**
+ - Full verification mode
+ - Reverts fix files to base branch
+ - Runs tests WITHOUT fix (should FAIL)
+ - Restores fix files
+ - Runs tests WITH fix (should PASS)
+
+ **If only test files changed (no fix files):**
+ - Verify failure only mode
+ - Runs tests once expecting them to FAIL
+ - Confirms tests reproduce the bug
+
+.PARAMETER Platform
+ Target platform: "android" or "ios"
+
+.PARAMETER TestFilter
+    Test filter for the UI test run (e.g., "Issue12345" or "Issue32030|ButtonUITests").
+ If not provided, auto-detects from test files in the git diff.
+
+.PARAMETER FixFiles
+ (Optional) Array of file paths to revert. If not provided, auto-detects from git diff
+ by excluding test directories.
+
+.PARAMETER BaseBranch
+ Branch to revert files from. Auto-detected from PR if not specified.
+
+.PARAMETER OutputDir
+ Directory to store results (default: "CustomAgentLogsTmp/TestValidation")
+
+.EXAMPLE
+ # Auto-detect everything - simplest usage
+ ./verify-tests-fail.ps1 -Platform android
+
+.EXAMPLE
+ # Specify test filter, auto-detect mode and fix files
+ ./verify-tests-fail.ps1 -Platform android -TestFilter "Issue32030"
+
+.EXAMPLE
+ # Specify everything explicitly
+ ./verify-tests-fail.ps1 -Platform ios -TestFilter "Issue12345" `
+ -FixFiles @("src/Controls/src/Core/SomeFile.cs")
+#>
+
+param(
+ [Parameter(Mandatory = $true)]
+ [ValidateSet("android", "ios")]
+ [string]$Platform,
+
+ [Parameter(Mandatory = $false)]
+ [string]$TestFilter,
+
+ [Parameter(Mandatory = $false)]
+ [string[]]$FixFiles,
+
+ [Parameter(Mandatory = $false)]
+ [string]$BaseBranch,
+
+ [Parameter(Mandatory = $false)]
+ [string]$OutputDir = "CustomAgentLogsTmp/TestValidation"
+)
+
+$ErrorActionPreference = "Stop"
+$RepoRoot = git rev-parse --show-toplevel
+
+# Test path patterns to exclude when auto-detecting fix files
+$TestPathPatterns = @(
+ "*/tests/*",
+ "*/test/*",
+ "*.Tests/*",
+ "*.UnitTests/*",
+ "*TestCases*",
+ "*snapshots*",
+ "*.png",
+ "*.jpg",
+ ".github/*",
+ "*.md",
+ "pr-*-review.md"
+)
+
+# Function to check if a file should be excluded from fix files
+function Test-IsTestFile {
+ param([string]$FilePath)
+
+ foreach ($pattern in $TestPathPatterns) {
+ if ($FilePath -like $pattern) {
+ return $true
+ }
+ }
+ return $false
+}
+
+# ============================================================
+# AUTO-DETECT MODE: Check if there are fix files to revert
+# ============================================================
+
+# Try to detect base branch
+$BaseBranchDetected = $BaseBranch
+if (-not $BaseBranchDetected) {
+ $currentBranch = git rev-parse --abbrev-ref HEAD 2>$null
+ $remote = git config "branch.$currentBranch.remote" 2>$null
+ if (-not $remote) { $remote = "origin" }
+
+ $remoteUrl = git remote get-url $remote 2>$null
+ $repo = $null
+ if ($remoteUrl -match "github\.com[:/]([^/]+/[^/]+?)(\.git)?$") {
+ $repo = $matches[1]
+ }
+
+ if ($repo) {
+ $BaseBranchDetected = gh pr view $currentBranch --repo $repo --json baseRefName --jq '.baseRefName' 2>$null
+ } else {
+ $BaseBranchDetected = gh pr view --json baseRefName --jq '.baseRefName' 2>$null
+ }
+}
+
+# Check for fix files (non-test files that changed)
+$DetectedFixFiles = @()
+if ($BaseBranchDetected) {
+ $changedFiles = git diff $BaseBranchDetected HEAD --name-only 2>$null
+ if ($LASTEXITCODE -ne 0) {
+ $changedFiles = git diff "origin/$BaseBranchDetected" HEAD --name-only 2>$null
+ }
+
+ if ($changedFiles) {
+ foreach ($file in $changedFiles) {
+ if (-not (Test-IsTestFile $file)) {
+ $DetectedFixFiles += $file
+ }
+ }
+ }
+}
+
+# Explicitly provided fix files override the auto-detected list
+if ($FixFiles -and $FixFiles.Count -gt 0) {
+ $DetectedFixFiles = $FixFiles
+}
+
+# Determine mode based on whether we have fix files
+$VerifyFailureOnlyMode = ($DetectedFixFiles.Count -eq 0)
+
+# ============================================================
+# VERIFY FAILURE ONLY MODE (no fix files detected)
+# ============================================================
+if ($VerifyFailureOnlyMode) {
+ Write-Host ""
+ Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Cyan
+ Write-Host "║ VERIFY FAILURE ONLY MODE ║" -ForegroundColor Cyan
+ Write-Host "╠═══════════════════════════════════════════════════════════╣" -ForegroundColor Cyan
+ Write-Host "║ No fix files detected - verifying tests FAIL ║" -ForegroundColor Cyan
+ Write-Host "║ (Only test files changed, or new tests created) ║" -ForegroundColor Cyan
+ Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Cyan
+ Write-Host ""
+
+ if (-not $TestFilter) {
+ Write-Host "❌ -TestFilter is required when no fix files are detected" -ForegroundColor Red
+ Write-Host " Example: -TestFilter 'Issue33356'" -ForegroundColor Yellow
+ exit 1
+ }
+
+ # Create output directory
+ $OutputPath = Join-Path $RepoRoot $OutputDir
+ New-Item -ItemType Directory -Force -Path $OutputPath | Out-Null
+ $FailureOnlyLog = Join-Path $OutputPath "verify-failure-only.log"
+
+ Write-Host "Platform: $Platform" -ForegroundColor White
+ Write-Host "TestFilter: $TestFilter" -ForegroundColor White
+ Write-Host ""
+ Write-Host "Running tests (expecting FAILURE)..." -ForegroundColor Yellow
+
+ # Run the test
+ $buildScript = Join-Path $RepoRoot ".github/scripts/BuildAndRunHostApp.ps1"
+ & $buildScript -Platform $Platform -TestFilter $TestFilter -Rebuild 2>&1 | Tee-Object -FilePath $FailureOnlyLog
+
+ # Check test result
+ $testOutputLog = Join-Path $RepoRoot "CustomAgentLogsTmp/UITests/test-output.log"
+ $testFailed = $false
+
+ if (Test-Path $testOutputLog) {
+ $content = Get-Content $testOutputLog -Raw
+ if ($content -match "Failed:\s*(\d+)" -and [int]$matches[1] -gt 0) {
+ $testFailed = $true
+ }
+ }
+
+ Write-Host ""
+ if ($testFailed) {
+ Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Green
+ Write-Host "║ VERIFICATION PASSED ✅ ║" -ForegroundColor Green
+ Write-Host "╠═══════════════════════════════════════════════════════════╣" -ForegroundColor Green
+ Write-Host "║ Tests FAILED as expected (bug is reproduced) ║" -ForegroundColor Green
+ Write-Host "║ ║" -ForegroundColor Green
+ Write-Host "║ Next: Implement a fix, then rerun to verify tests pass. ║" -ForegroundColor Green
+ Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Green
+ exit 0
+ } else {
+ Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Red
+ Write-Host "║ VERIFICATION FAILED ❌ ║" -ForegroundColor Red
+ Write-Host "╠═══════════════════════════════════════════════════════════╣" -ForegroundColor Red
+ Write-Host "║ Tests PASSED (unexpected - bug not reproduced) ║" -ForegroundColor Red
+ Write-Host "║ ║" -ForegroundColor Red
+ Write-Host "║ Your test is wrong. Fix it and rerun. ║" -ForegroundColor Red
+ Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Red
+ exit 1
+ }
+}
+
+# ============================================================
+# FULL VERIFICATION MODE (fix files detected)
+# ============================================================
+
+Write-Host ""
+Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Cyan
+Write-Host "║ FULL VERIFICATION MODE ║" -ForegroundColor Cyan
+Write-Host "╠═══════════════════════════════════════════════════════════╣" -ForegroundColor Cyan
+Write-Host "║ Fix files detected - will verify: ║" -ForegroundColor Cyan
+Write-Host "║ 1. Tests FAIL without fix ║" -ForegroundColor Cyan
+Write-Host "║ 2. Tests PASS with fix ║" -ForegroundColor Cyan
+Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Cyan
+Write-Host ""
+
+$BaseBranch = $BaseBranchDetected
+$FixFiles = $DetectedFixFiles
+
+Write-Host "✅ Base branch: $BaseBranch" -ForegroundColor Green
+Write-Host "✅ Fix files ($($FixFiles.Count)):" -ForegroundColor Green
+foreach ($file in $FixFiles) {
+ Write-Host " - $file" -ForegroundColor White
+}
+
+# Auto-detect test filter from test files if not provided
+if (-not $TestFilter) {
+ Write-Host "🔍 Auto-detecting test filter from changed test files..." -ForegroundColor Cyan
+
+ $changedFiles = git diff $BaseBranch HEAD --name-only 2>$null
+ if ($LASTEXITCODE -ne 0) {
+ $changedFiles = git diff "origin/$BaseBranch" HEAD --name-only 2>$null
+ }
+
+ # Find test files (files in test directories that are .cs files)
+ $testFiles = @()
+ foreach ($file in $changedFiles) {
+ if ($file -match "TestCases\.(Shared\.Tests|HostApp).*\.cs$" -and $file -notmatch "^_") {
+ $testFiles += $file
+ }
+ }
+
+ if ($testFiles.Count -eq 0) {
+ Write-Host "❌ Could not auto-detect test filter. No test files found in changed files." -ForegroundColor Red
+ Write-Host " Looking for files matching: TestCases.(Shared.Tests|HostApp)/*.cs" -ForegroundColor Yellow
+ Write-Host " Please provide -TestFilter parameter explicitly." -ForegroundColor Yellow
+ exit 1
+ }
+
+ # Extract class names from test files
+ $testClassNames = @()
+ foreach ($file in $testFiles) {
+ if ($file -match "TestCases\.Shared\.Tests.*\.cs$") {
+ $fullPath = Join-Path $RepoRoot $file
+ if (Test-Path $fullPath) {
+ $content = Get-Content $fullPath -Raw
+ if ($content -match "public\s+(partial\s+)?class\s+(\w+)") {
+ $className = $matches[2]
+ if ($className -notmatch "^_" -and $testClassNames -notcontains $className) {
+ $testClassNames += $className
+ }
+ }
+ }
+ }
+ }
+
+ # Fallback: use file names without extension
+ if ($testClassNames.Count -eq 0) {
+ foreach ($file in $testFiles) {
+ $fileName = [System.IO.Path]::GetFileNameWithoutExtension($file)
+ if ($fileName -notmatch "^_" -and $testClassNames -notcontains $fileName) {
+ $testClassNames += $fileName
+ }
+ }
+ }
+
+ if ($testClassNames.Count -eq 0) {
+ Write-Host "❌ Could not extract test class names from changed files." -ForegroundColor Red
+ Write-Host " Please provide -TestFilter parameter explicitly." -ForegroundColor Yellow
+ exit 1
+ }
+
+ if ($testClassNames.Count -eq 1) {
+ $TestFilter = $testClassNames[0]
+ } else {
+ $TestFilter = $testClassNames -join "|"
+ }
+
+ Write-Host "✅ Auto-detected $($testClassNames.Count) test class(es):" -ForegroundColor Green
+ foreach ($name in $testClassNames) {
+ Write-Host " - $name" -ForegroundColor White
+ }
+ Write-Host " Filter: $TestFilter" -ForegroundColor Cyan
+}
+
+# Create output directory
+$OutputPath = Join-Path $RepoRoot $OutputDir
+New-Item -ItemType Directory -Force -Path $OutputPath | Out-Null
+
+$ValidationLog = Join-Path $OutputPath "verification-log.txt"
+$WithoutFixLog = Join-Path $OutputPath "test-without-fix.log"
+$WithFixLog = Join-Path $OutputPath "test-with-fix.log"
+
+function Write-Log {
+ param([string]$Message)
+ $timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
+ $logLine = "[$timestamp] $Message"
+ Write-Host $logLine
+ Add-Content -Path $ValidationLog -Value $logLine
+}
+
+function Get-TestResult {
+ param([string]$LogFile)
+
+ if (Test-Path $LogFile) {
+ $content = Get-Content $LogFile -Raw
+ if ($content -match "Failed:\s*(\d+)") {
+ return @{ Passed = $false; FailCount = [int]$matches[1] }
+ }
+ if ($content -match "Passed:\s*(\d+)") {
+ return @{ Passed = $true; PassCount = [int]$matches[1] }
+ }
+ }
+ return @{ Passed = $false; Error = "Could not parse test results" }
+}
+
+# Initialize log
+"" | Set-Content $ValidationLog
+Write-Log "=========================================="
+Write-Log "Verify Tests Fail Without Fix"
+Write-Log "=========================================="
+Write-Log "Platform: $Platform"
+Write-Log "TestFilter: $TestFilter"
+Write-Log "FixFiles: $($FixFiles -join ', ')"
+Write-Log "BaseBranch: $BaseBranch"
+Write-Log ""
+
+# Verify fix files exist
+Write-Log "Verifying fix files exist..."
+foreach ($file in $FixFiles) {
+ $fullPath = Join-Path $RepoRoot $file
+ if (-not (Test-Path $fullPath)) {
+ Write-Log "ERROR: Fix file not found: $file"
+ exit 1
+ }
+ Write-Log " ✓ $file exists"
+}
+
+# Determine which files exist in the base branch (can be reverted)
+Write-Log ""
+Write-Log "Checking which fix files exist in $BaseBranch..."
+$RevertableFiles = @()
+$NewFiles = @()
+
+foreach ($file in $FixFiles) {
+ # Check if file exists in base branch
+ $existsInBase = git ls-tree -r $BaseBranch --name-only -- $file 2>$null
+ if (-not $existsInBase) {
+ $existsInBase = git ls-tree -r "origin/$BaseBranch" --name-only -- $file 2>$null
+ }
+
+ if ($existsInBase) {
+ $RevertableFiles += $file
+ Write-Log " ✓ $file (exists in $BaseBranch - will revert)"
+ } else {
+ $NewFiles += $file
+ Write-Log " ○ $file (new file - skipping revert)"
+ }
+}
+
+if ($RevertableFiles.Count -eq 0) {
+ Write-Host "❌ No revertable fix files found. All fix files are new." -ForegroundColor Red
+ Write-Host " Cannot verify test behavior without files to revert." -ForegroundColor Yellow
+ exit 1
+}
+
+# Check for uncommitted changes ONLY on files we will revert
+Write-Log ""
+Write-Log "Checking for uncommitted changes on revertable files..."
+$uncommittedFiles = @()
+foreach ($file in $RevertableFiles) {
+ # Check if file has uncommitted changes (staged or unstaged)
+ $status = git status --porcelain -- $file 2>$null
+ if ($status) {
+ $uncommittedFiles += $file
+ }
+}
+
+if ($uncommittedFiles.Count -gt 0) {
+ Write-Host "" -ForegroundColor Red
+ Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Red
+ Write-Host "║ ERROR: Uncommitted changes detected in fix files ║" -ForegroundColor Red
+ Write-Host "╠═══════════════════════════════════════════════════════════╣" -ForegroundColor Red
+ Write-Host "║ This script requires revertable fix files to be ║" -ForegroundColor Red
+ Write-Host "║ committed so they can be restored via git checkout HEAD. ║" -ForegroundColor Red
+ Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Red
+ Write-Host ""
+ Write-Host "Uncommitted files:" -ForegroundColor Yellow
+ foreach ($file in $uncommittedFiles) {
+ Write-Host " - $file" -ForegroundColor Yellow
+ }
+ Write-Host ""
+ Write-Host "Run 'git add && git commit' to commit your changes." -ForegroundColor Cyan
+ exit 1
+}
+
+Write-Log " ✓ All revertable fix files are committed"
+
+# Step 1: Revert fix files to base branch
+Write-Log ""
+Write-Log "=========================================="
+Write-Log "STEP 1: Reverting fix files to $BaseBranch"
+Write-Log "=========================================="
+
+foreach ($file in $RevertableFiles) {
+ Write-Log " Reverting: $file"
+ git checkout $BaseBranch -- $file 2>&1 | Out-Null
+ if ($LASTEXITCODE -ne 0) {
+ Write-Log " Warning: Could not revert from $BaseBranch, trying origin/$BaseBranch"
+ git checkout "origin/$BaseBranch" -- $file 2>&1 | Out-Null
+ }
+}
+
+Write-Log " ✓ $($RevertableFiles.Count) fix file(s) reverted to $BaseBranch state"
+
+# Step 2: Run tests WITHOUT fix
+Write-Log ""
+Write-Log "=========================================="
+Write-Log "STEP 2: Running tests WITHOUT fix (should FAIL)"
+Write-Log "=========================================="
+
+# Use shared BuildAndRunHostApp.ps1 infrastructure with -Rebuild to ensure clean builds
+$buildScript = Join-Path $RepoRoot ".github/scripts/BuildAndRunHostApp.ps1"
+& $buildScript -Platform $Platform -TestFilter $TestFilter -Rebuild 2>&1 | Tee-Object -FilePath $WithoutFixLog
+
+$withoutFixResult = Get-TestResult -LogFile (Join-Path $RepoRoot "CustomAgentLogsTmp/UITests/test-output.log")
+
+# Step 3: Restore fix files from current branch HEAD
+Write-Log ""
+Write-Log "=========================================="
+Write-Log "STEP 3: Restoring fix files from HEAD"
+Write-Log "=========================================="
+
+foreach ($file in $RevertableFiles) {
+ Write-Log " Restoring: $file"
+ git checkout HEAD -- $file 2>&1 | Out-Null
+ if ($LASTEXITCODE -ne 0) {
+ Write-Log " ERROR: Failed to restore $file from HEAD"
+ exit 1
+ }
+}
+
+Write-Log " ✓ $($RevertableFiles.Count) fix file(s) restored from HEAD"
+
+# Step 4: Run tests WITH fix
+Write-Log ""
+Write-Log "=========================================="
+Write-Log "STEP 4: Running tests WITH fix (should PASS)"
+Write-Log "=========================================="
+
+& $buildScript -Platform $Platform -TestFilter $TestFilter -Rebuild 2>&1 | Tee-Object -FilePath $WithFixLog
+
+$withFixResult = Get-TestResult -LogFile (Join-Path $RepoRoot "CustomAgentLogsTmp/UITests/test-output.log")
+
+# Step 5: Evaluate results
+Write-Log ""
+Write-Log "=========================================="
+Write-Log "VERIFICATION RESULTS"
+Write-Log "=========================================="
+
+$verificationPassed = $false
+$failedWithoutFix = -not $withoutFixResult.Passed
+$passedWithFix = $withFixResult.Passed
+
+if ($failedWithoutFix) {
+ Write-Log "✅ Tests FAILED without fix (expected - issue detected)"
+} else {
+ Write-Log "❌ Tests PASSED without fix (unexpected!)"
+ Write-Log " The tests don't detect the issue."
+}
+
+if ($passedWithFix) {
+ Write-Log "✅ Tests PASSED with fix (expected - fix works)"
+} else {
+ Write-Log "❌ Tests FAILED with fix (unexpected!)"
+ Write-Log " The fix doesn't resolve the issue, or there's another problem."
+}
+
+$verificationPassed = $failedWithoutFix -and $passedWithFix
+
+Write-Log ""
+Write-Log "Summary:"
+Write-Log " - Tests WITHOUT fix: $(if ($failedWithoutFix) { 'FAIL ✅ (expected)' } else { 'PASS ❌ (should fail!)' })"
+Write-Log " - Tests WITH fix: $(if ($passedWithFix) { 'PASS ✅ (expected)' } else { 'FAIL ❌ (should pass!)' })"
+
+if ($verificationPassed) {
+ Write-Host ""
+ Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Green
+ Write-Host "║ VERIFICATION PASSED ✅ ║" -ForegroundColor Green
+ Write-Host "╠═══════════════════════════════════════════════════════════╣" -ForegroundColor Green
+ Write-Host "║ Tests correctly detect the issue: ║" -ForegroundColor Green
+ Write-Host "║ - FAIL without fix (as expected) ║" -ForegroundColor Green
+ Write-Host "║ - PASS with fix (as expected) ║" -ForegroundColor Green
+ Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Green
+ exit 0
+} else {
+ Write-Host ""
+ Write-Host "╔═══════════════════════════════════════════════════════════╗" -ForegroundColor Red
+ Write-Host "║ VERIFICATION FAILED ❌ ║" -ForegroundColor Red
+ Write-Host "╠═══════════════════════════════════════════════════════════╣" -ForegroundColor Red
+ if (-not $failedWithoutFix) {
+ Write-Host "║ Tests PASSED without fix (should fail) ║" -ForegroundColor Red
+ Write-Host "║ - Tests don't actually detect the bug ║" -ForegroundColor Red
+ }
+ if (-not $passedWithFix) {
+ Write-Host "║ Tests FAILED with fix (should pass) ║" -ForegroundColor Red
+ Write-Host "║ - Fix doesn't resolve the issue or test is broken ║" -ForegroundColor Red
+ }
+ Write-Host "║ ║" -ForegroundColor Red
+ Write-Host "║ Possible causes: ║" -ForegroundColor Red
+ Write-Host "║ 1. Wrong fix files specified ║" -ForegroundColor Red
+ Write-Host "║ 2. Tests don't actually test the fixed behavior ║" -ForegroundColor Red
+ Write-Host "║ 3. The issue was already fixed in base branch ║" -ForegroundColor Red
+ Write-Host "║ 4. Build caching - try clean rebuild ║" -ForegroundColor Red
+ Write-Host "╚═══════════════════════════════════════════════════════════╝" -ForegroundColor Red
+ exit 1
+}
diff --git a/.github/skills/write-tests/SKILL.md b/.github/skills/write-tests/SKILL.md
new file mode 100644
index 000000000000..0c66536a66e8
--- /dev/null
+++ b/.github/skills/write-tests/SKILL.md
@@ -0,0 +1,202 @@
+---
+name: write-tests
+description: Creates UI tests for a GitHub issue and verifies they reproduce the bug. Iterates until tests actually fail (proving they catch the issue). Use when PR lacks tests or tests need to be created for an issue.
+---
+
+# Write Tests Skill
+
+Creates UI tests that reproduce a GitHub issue, following .NET MAUI conventions. **Verifies the tests actually fail before completing.**
+
+## When to Use
+
+- ✅ PR has no tests and needs them
+- ✅ Issue needs a reproduction test before fixing
+- ✅ Existing tests don't adequately cover the bug
+
+## Required Input
+
+Before invoking, ensure you have:
+- **Issue number** (e.g., 33331)
+- **Issue description** or reproduction steps
+- **Platforms affected** (iOS, Android, Windows, MacCatalyst)
+
+## Workflow
+
+### Step 1: Read the UI Test Guidelines
+
+```bash
+cat .github/instructions/uitests.instructions.md
+```
+
+This contains the authoritative conventions for:
+- File naming (`IssueXXXXX.xaml`, `IssueXXXXX.cs`)
+- File locations (`TestCases.HostApp/Issues/`, `TestCases.Shared.Tests/Tests/Issues/`)
+- Required attributes (`[Issue()]`, `[Category()]`)
+- Test patterns and assertions
+
+### Step 2: Create HostApp Page
+
+**Location:** `src/Controls/tests/TestCases.HostApp/Issues/IssueXXXXX.cs`
+
+```csharp
+namespace Maui.Controls.Sample.Issues;
+
+[Issue(IssueTracker.Github, XXXXX, "Brief description of issue", PlatformAffected.All)]
+public partial class IssueXXXXX : ContentPage
+{
+ public IssueXXXXX()
+ {
+ // Create UI that reproduces the issue
+ var button = new Button
+ {
+ Text = "Test Button",
+ AutomationId = "TestButton" // Required for Appium
+ };
+
+ var resultLabel = new Label
+ {
+ Text = "Waiting...",
+ AutomationId = "ResultLabel"
+ };
+
+ button.Clicked += (s, e) =>
+ {
+ resultLabel.Text = "Success";
+ };
+
+ Content = new VerticalStackLayout
+ {
+ Children = { button, resultLabel }
+ };
+ }
+}
+```
+
+**Key requirements:**
+- Add `AutomationId` to all interactive elements
+- Use `[Issue()]` attribute with tracker, number, description, platform
+- Keep UI minimal - just enough to reproduce the bug
+
+### Step 3: Create NUnit Test
+
+**Location:** `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/IssueXXXXX.cs`
+
+```csharp
+namespace Microsoft.Maui.TestCases.Shared.Tests.Tests.Issues;
+
+public class IssueXXXXX : _IssuesUITest
+{
+ public override string Issue => "Brief description matching HostApp";
+
+ public IssueXXXXX(TestDevice device) : base(device) { }
+
+ [Test]
+ [Category(UITestCategories.Button)] // Pick ONE appropriate category
+ public void ButtonClickUpdatesLabel()
+ {
+ // Wait for element to be ready
+ App.WaitForElement("TestButton");
+
+ // Interact with the UI
+ App.Tap("TestButton");
+
+ // Verify expected behavior
+ var labelText = App.FindElement("ResultLabel").GetText();
+ Assert.That(labelText, Is.EqualTo("Success"));
+ }
+}
+```
+
+**Key requirements:**
+- Inherit from `_IssuesUITest`
+- Use same `AutomationId` values as HostApp
+- Add ONE `[Category()]` attribute (check `UITestCategories.cs` for options)
+- Use `App.WaitForElement()` before interactions
+
+### Step 4: Verify Files Compile
+
+```bash
+dotnet build src/Controls/tests/TestCases.HostApp/Controls.TestCases.HostApp.csproj -c Debug -f net10.0-android --no-restore -v q
+dotnet build src/Controls/tests/TestCases.Shared.Tests/Controls.TestCases.Shared.Tests.csproj -c Debug --no-restore -v q
+```
+
+### Step 5: Verify Tests Reproduce the Bug ⚠️ CRITICAL
+
+**Tests must FAIL to prove they catch the bug.** Run verification:
+
+```bash
+pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform ios -TestFilter "IssueXXXXX"
+```
+
+The script detects that only test files have changed relative to the base branch (no fix files) and runs in "verify failure only" mode.
+
+**If tests FAIL** → ✅ Success! Tests correctly reproduce the bug.
+
+**If tests PASS** → ❌ Your test is wrong. Go back to Step 2 and fix:
+- Review test scenario against issue description
+- Ensure test actions match reproduction steps
+- Update and rerun until tests FAIL
+
+**Do NOT mark this skill complete until tests FAIL.**
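+
+Once a fix is implemented and committed, rerun the same script. Because non-test files will then appear in the diff against the base branch, it should switch to full verification mode (tests FAIL without the fix, PASS with it). A minimal sketch, assuming the fix is committed on the current branch and the same filter applies:
+
+```bash
+# Rerun after the fix is committed; fix files in the diff trigger full verification
+pwsh .github/skills/verify-tests-fail-without-fix/scripts/verify-tests-fail.ps1 -Platform ios -TestFilter "IssueXXXXX"
+```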
+
+## Output
+
+After completion (tests verified to fail), report:
+```markdown
+✅ Tests created and verified for Issue #XXXXX
+
+**Files:**
+- `src/Controls/tests/TestCases.HostApp/Issues/IssueXXXXX.cs`
+- `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/IssueXXXXX.cs`
+
+**Test method:** `ButtonClickUpdatesLabel`
+**Category:** `UITestCategories.Button`
+**Verification:** Tests FAIL as expected (bug reproduced)
+```
+
+## Common Patterns
+
+### Testing Property Changes
+```csharp
+// HostApp: Add a way to trigger and observe the property
+var picker = new Picker { AutomationId = "TestPicker" };
+var statusLabel = new Label { AutomationId = "StatusLabel" };
+picker.PropertyChanged += (s, e) => {
+ if (e.PropertyName == nameof(Picker.IsOpen))
+ statusLabel.Text = $"IsOpen={picker.IsOpen}";
+};
+
+// Test: Verify the property changes correctly
+App.Tap("TestPicker");
+App.WaitForElement("StatusLabel");
+var status = App.FindElement("StatusLabel").GetText();
+Assert.That(status, Does.Contain("IsOpen=True"));
+```
+
+### Testing Layout/Positioning
+```csharp
+// Test: Use GetRect() for position/size assertions
+var rect = App.WaitForElement("TestElement").GetRect();
+Assert.That(rect.Height, Is.GreaterThan(0));
+Assert.That(rect.Y, Is.GreaterThanOrEqualTo(safeAreaTop));
+```
+
+### Testing Platform-Specific Behavior
+```csharp
+// Only limit platforms when NECESSARY
+[Test]
+[Category(UITestCategories.Picker)]
+public void PickerDismissResetsIsOpen()
+{
+ // This test should run on all platforms unless there's
+ // a specific technical reason it can't
+ App.WaitForElement("TestPicker");
+ // ...
+}
+```
+
+## References
+
+- **Full conventions:** `.github/instructions/uitests.instructions.md`
+- **Category list:** `src/Controls/tests/TestCases.Shared.Tests/UITestCategories.cs`
+- **Example tests:** `src/Controls/tests/TestCases.Shared.Tests/Tests/Issues/`
diff --git a/docs/UITesting-Guide.md b/docs/UITesting-Guide.md
index 6241278ef188..ac4596d0c5de 100644
--- a/docs/UITesting-Guide.md
+++ b/docs/UITesting-Guide.md
@@ -622,10 +622,10 @@ If migrating from Xamarin.UITest:
## Additional Resources
- [UITesting-Architecture.md](design/UITesting-Architecture.md) - CI/CD integration, advanced patterns, and architecture decisions
-- [Appium Control Scripts](../.github/instructions/appium-control.instructions.md) - Create standalone scripts for manual Appium-based debugging and exploration
-- [Appium Documentation](http://appium.io/docs/en/about-appium/intro/)
+- [UI Testing Instructions](../.github/instructions/uitests.instructions.md) - Agent-specific UI testing guidelines
+- [Appium Documentation](https://appium.io/docs/en/latest/)
- [NUnit Documentation](https://docs.nunit.org/)
- [.NET MAUI Testing Wiki](https://github.com/dotnet/maui/wiki/UITests)
-- [GitHub Actions UI Tests Workflow](https://github.com/dotnet/maui/blob/main/.github/workflows/ui-tests.yml)
+- [UI Tests Pipeline](../eng/pipelines/ui-tests.yml) - CI/CD pipeline for UI tests
**Last Updated:** October 2025
\ No newline at end of file