An AI-powered GitHub Action that analyzes, evaluates, and refactors GitHub issues into high-quality user stories. Transforms vague bug reports and feature requests into actionable, well-structured issues with consistent formatting, clear acceptance criteria, and proper context.
- AI-Powered Analysis: Leverages Microsoft Agent Framework to understand issue intent and context
- Automatic Enhancement: Converts rough ideas into structured user stories with acceptance criteria
- Enterprise Ready: Supports GitHub Enterprise Server via custom API endpoints
- Fast Startup: < 1 second cold start with .NET 8 AOT compilation
- Secure: Minimal permissions required (`issues: write`)
- Observable: Structured logging with performance metrics and AI reasoning traces
Issue Agent automatically:
- Retrieves full issue context (description, comments, labels, references)
- Analyzes the content using AI to understand intent and requirements
- Evaluates clarity, completeness, and actionability
- Refactors (optional) into well-structured user stories with:
  - Clear problem statement
  - Acceptance criteria
  - Technical context
  - Related issues and dependencies
  - Suggested labels and milestones
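As a rough illustration (hypothetical content and headings, not a guaranteed output format), a vague report like "login sometimes breaks" might be refactored into something like:

```markdown
## User Story
As a user, I want failed logins to show a clear error message so that I know whether to retry or reset my password.

## Problem Statement
Login intermittently fails with a blank page and no error, so users cannot tell what went wrong.

## Acceptance Criteria
- [ ] Failed logins display a specific, actionable error message
- [ ] Each failure is logged with enough context to reproduce it
- [ ] A regression test covers the failure path

## Technical Context
Likely involves the authentication flow and error-handling middleware.

## Related Issues and Dependencies
- None identified yet

Suggested labels: `bug`, `authentication`
Suggested milestone: next patch release
```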
Add this workflow to your repository:
```yaml
name: Issue Context

on:
  issues:
    types: [opened, reopened]
  issue_comment:
    types: [created]

jobs:
  analyze-issue:
    runs-on: ubuntu-latest
    permissions:
      issues: read
    steps:
      - name: Retrieve Issue Context
        uses: mattdot/issueagent@v1
        with:
          github_token: ${{ github.token }}
```

| Name | Required | Default | Description |
|---|---|---|---|
| `github_token` | No | `${{ github.token }}` | Token for GitHub API authentication with `issues: read` permission. |
| `azure_ai_foundry_endpoint` | No | - | Azure AI Foundry project endpoint URL (format: `https://<resource>.services.ai.azure.com/api/projects/<project>`). Falls back to the `AZURE_AI_FOUNDRY_ENDPOINT` environment variable. |
| `azure_client_id` | No | - | Azure service principal client ID for OIDC authentication. Falls back to the `AZURE_CLIENT_ID` environment variable. |
| `azure_tenant_id` | No | - | Azure tenant ID for OIDC authentication. Falls back to the `AZURE_TENANT_ID` environment variable. |
| `azure_ai_foundry_model_deployment` | No | `gpt-5-mini` | Model deployment name in the Azure AI Foundry project. Falls back to the `AZURE_AI_FOUNDRY_MODEL_DEPLOYMENT` environment variable. |
| `azure_ai_foundry_api_version` | No | `2025-04-01-preview` | Azure AI Foundry API version. Falls back to the `AZURE_AI_FOUNDRY_API_VERSION` environment variable. |
| `enable_verbose_logging` | No | `false` | Enable verbose logging for troubleshooting. When enabled, logs detailed information about configuration, connections, and authentication at the Debug level. |
Issue Agent uses Azure AI Foundry for AI-powered issue analysis, authenticating via OIDC with an Azure service principal. To enable AI features:
- Go to the Azure AI Foundry portal
- Create a new project or use an existing one
- Note the project endpoint URL (Settings → Overview)
- Note your resource group name and subscription ID
Create a service principal for GitHub Actions OIDC authentication:
```bash
# Replace with your values
SUBSCRIPTION_ID="your-subscription-id"
RESOURCE_GROUP="your-resource-group"
APP_NAME="github-issueagent-sp"

# Create the service principal
az ad app create --display-name $APP_NAME

# Get the app ID (client ID)
CLIENT_ID=$(az ad app list --display-name $APP_NAME --query "[0].appId" -o tsv)

# Create federated credential for GitHub OIDC
az ad app federated-credential create \
  --id $CLIENT_ID \
  --parameters '{
    "name": "github-actions-federated",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:YOUR_ORG/YOUR_REPO:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"]
  }'

# Get your tenant ID
TENANT_ID=$(az account show --query tenantId -o tsv)

# Assign Cognitive Services User role to the service principal
az role assignment create \
  --role "Cognitive Services User" \
  --assignee $CLIENT_ID \
  --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP
```

Required Azure Permissions:
- Cognitive Services User role on the Azure AI Foundry resource group
- This allows the service principal to call Azure AI services endpoints
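To double-check the role assignment (optional; reuses the variables defined in the script above):

```bash
# List the roles granted to the service principal on the resource group;
# the output should include "Cognitive Services User"
az role assignment list \
  --assignee $CLIENT_ID \
  --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP \
  --query "[].roleDefinitionName" -o tsv
```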
Add these secrets to your repository (Settings → Secrets and variables → Actions):
- `AZURE_AI_FOUNDRY_ENDPOINT`: Your project endpoint URL (e.g., `https://my-project.services.ai.azure.com/api/projects/my-project`)
- `AZURE_CLIENT_ID`: Service principal client ID (from step 2)
- `AZURE_TENANT_ID`: Azure tenant ID (from step 2)
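Alternatively, the same secrets can be set from the command line with the GitHub CLI (values below are placeholders):

```bash
# Add the repository secrets with the GitHub CLI (run from the repository root)
gh secret set AZURE_AI_FOUNDRY_ENDPOINT --body "https://my-project.services.ai.azure.com/api/projects/my-project"
gh secret set AZURE_CLIENT_ID --body "<client-id-from-step-2>"
gh secret set AZURE_TENANT_ID --body "<your-tenant-id>"
```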
Then reference the secrets in your workflow:

```yaml
name: Issue Context

on:
  issues:
    types: [opened, reopened]

jobs:
  analyze-issue:
    runs-on: ubuntu-latest
    permissions:
      issues: read
      id-token: write # Required for OIDC authentication
    steps:
      - name: Analyze Issue
        uses: mattdot/issueagent@v1
        with:
          github_token: ${{ github.token }}
          azure_ai_foundry_endpoint: ${{ secrets.AZURE_AI_FOUNDRY_ENDPOINT }}
          azure_client_id: ${{ secrets.AZURE_CLIENT_ID }}
          azure_tenant_id: ${{ secrets.AZURE_TENANT_ID }}
          azure_ai_foundry_model_deployment: gpt-4o-mini # Optional: specify your model
```

Important: The `id-token: write` permission is required for GitHub Actions to generate OIDC tokens.
The action passes inputs to the Docker container as environment variables. You have two options:
Option 1: Use inputs (recommended)
```yaml
steps:
  - name: Analyze Issue
    uses: mattdot/issueagent@v1
    with:
      github_token: ${{ github.token }}
      azure_ai_foundry_endpoint: ${{ secrets.AZURE_AI_FOUNDRY_ENDPOINT }}
      azure_client_id: ${{ secrets.AZURE_CLIENT_ID }}
      azure_tenant_id: ${{ secrets.AZURE_TENANT_ID }}
```

Option 2: Mix inputs and environment variables

```yaml
steps:
  - name: Analyze Issue
    uses: mattdot/issueagent@v1
    with:
      github_token: ${{ github.token }}
      # Inputs take precedence; if not provided, falls back to env vars
      azure_ai_foundry_endpoint: ${{ secrets.AZURE_AI_FOUNDRY_ENDPOINT }}
```

The action internally checks inputs first, then falls back to `AZURE_*` environment variables if inputs are not provided.
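For example, a step could rely entirely on the documented fallback variables instead of inputs (a sketch; step-level environment variables are passed through to the action's container):

```yaml
steps:
  - name: Analyze Issue
    uses: mattdot/issueagent@v1
    with:
      github_token: ${{ github.token }}
    env:
      # No Azure inputs are set, so the action reads these fallback variables instead
      AZURE_AI_FOUNDRY_ENDPOINT: ${{ secrets.AZURE_AI_FOUNDRY_ENDPOINT }}
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
```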
The action validates the Azure AI Foundry connection during startup:
- Validates endpoint URL format
- Checks API key length (minimum 32 characters)
- Establishes connection within 30 seconds
- Logs connection success with duration metrics
Failed connections will cause the action to fail with a descriptive error message.
If you're experiencing issues with the action (such as authentication failures or connection problems), enable verbose logging to get detailed diagnostic information:
```yaml
steps:
  - name: Analyze Issue
    uses: mattdot/issueagent@v1
    with:
      github_token: ${{ github.token }}
      azure_ai_foundry_endpoint: ${{ secrets.AZURE_AI_FOUNDRY_ENDPOINT }}
      azure_ai_foundry_api_key: ${{ secrets.AZURE_AI_FOUNDRY_API_KEY }}
      enable_verbose_logging: true
```

When verbose logging is enabled, the action will output:
- Configuration loading details (which environment variables are set)
- Azure AI Foundry connection attempts and responses
- Authentication provider initialization
- API request/response status codes
- Detailed error messages for troubleshooting
Security Note: Verbose logs do NOT contain API keys or tokens - these are always redacted for security.
Authentication Error (401):
- Verify your API key is correct and has not expired
- Check that the API key has access to the specified endpoint
- Ensure the endpoint URL is correct (format: `https://<resource>.services.ai.azure.com/api/projects/<project>`)
Connection Timeout:
- Check network connectivity to Azure AI Foundry
- Verify the endpoint URL is accessible from GitHub Actions runners
- Consider firewall or proxy settings in your organization
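As a quick connectivity check (optional; substitute your own endpoint host), you can probe the endpoint from a machine on your network or a temporary debug step; an HTTP 401/403 response still confirms the host is reachable:

```bash
# Probe the Azure AI Foundry endpoint; 401/403 means reachable but unauthenticated
curl -sS -o /dev/null -w "%{http_code}\n" \
  "https://<resource>.services.ai.azure.com/api/projects/<project>"
```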
Model Not Found (404):
- Verify the model deployment name matches your Azure AI Foundry configuration
- Check that the model is deployed and available in your project
Currently Implemented:
- Core issue context retrieval via GitHub GraphQL API
- Fast, AOT-compiled Docker action
- GitHub Enterprise Server support
- Comprehensive test suite (24 tests including Docker integration)
In Development:
- Microsoft Agent Framework integration
- AI-powered analysis and evaluation
- Issue enhancement and refactoring
- Quality scoring system
Planned Features:
- Support for custom issue templates
- Multi-language issue analysis
- Configurable AI prompts and templates
- Epic and milestone suggestions
- Code context integration (analyze referenced files)
- Team-specific conventions and formatting
Optimized for GitHub Actions environments:
- Cold Start: < 1 second (AOT compilation)
- Issue Retrieval: < 2 seconds
- AI Analysis: 2-5 seconds (when implemented)
- Total Execution: < 10 seconds (typical)
- Binary Size: ~15MB (AOT-compiled, trimmed)
Performance metrics are logged including startup time and (future) AI reasoning time.
- Data Handling: Issue content is sent to the configured AI endpoint (Azure OpenAI or OpenAI)
- Token Security: GitHub tokens are redacted from all logs
- Minimal Permissions: Only requires `issues: read` for evaluation, `issues: write` for updates
- Enterprise Deployment: Can be fully air-gapped with Azure OpenAI in your tenant
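As a quick reference, the workflow permission maps directly to how you use the action (a sketch; adjust to your workflow):

```yaml
permissions:
  # `issues: read` is enough when the action only evaluates the issue;
  # switch to `issues: write` if you want it to update the issue with refactored content.
  issues: write
```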
- Documentation: See `docs/` for detailed guides
- Bug Reports: Open an issue
- Discussions: GitHub Discussions
- Troubleshooting: Check workflow logs for detailed error messages
We welcome contributions! See CONTRIBUTING.md for:
- Development environment setup
- Testing guidelines (including Docker integration tests)
- Code standards and architecture
- Submission process
MIT License - See LICENSE file for details.
Note: This action is currently under active development. The foundation (fast issue context retrieval with GitHub Enterprise support) is complete and tested. AI analysis and enhancement features are being added in upcoming releases. Watch this repository for updates!