AI-powered GitHub Action that automatically labels low-risk pull requests for skipping human code review.
Code review is essential, but not all changes carry the same risk. Typo fixes, i18n updates, and formatting changes don't need the same scrutiny as business logic changes.
The Problem:
- Reviewers waste cognitive energy on trivial PRs
- Simple fixes get stuck waiting for review
- Context switching between complex and trivial PRs reduces focus
The Solution:
- Automatically identify low-risk PRs with AI analysis
- Reduce review noise by ~20% (based on real-world usage)
- Enable zero-latency deployment for trivial changes
- Let reviewers focus on high-impact, complex changes
Key features:

- AI-Powered Analysis - Uses OpenAI (or compatible APIs) to analyze PR changes
- Conservative by Default - Only labels PRs with high confidence scores
- Transparent Decisions - Adds explanatory comments to labeled PRs
- Configurable - Customize model, threshold, label name, and more
- Skip-Review Categories - Detects typos, i18n updates, UI tweaks, formatting, unused-code cleanup, and safe dependency bumps
How it works:

```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  PR Opened/     │────▶│  Fetch Diff &   │────▶│  AI Analyzes    │
│  Synchronized   │     │  File Changes   │     │  Changes        │
└─────────────────┘     └─────────────────┘     └────────┬────────┘
                                                         │
                                                    Eligible &
                                                High Confidence?
                                                         │
┌─────────────────┐                                      │
│  Add Label &    │◀──────────── Yes ────────────────────┤
│  Comment        │                                      │
└─────────────────┘                                      │
                                                         │
┌─────────────────┐                                      │
│  Skip (No       │◀──────────── No ─────────────────────┘
│  Action)        │
└─────────────────┘
```
Add your OpenAI API key as a repository secret:
- Go to Settings > Secrets and variables > Actions
- Click New repository secret
- Name: `OPENAI_API_KEY`, Value: your OpenAI API key
Create `.github/workflows/skip-review-labeler.yml`:

```yaml
name: AI Skip-Review Labeler
on:
  pull_request:
    types: [opened, synchronize, reopened]
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
  cancel-in-progress: true
jobs:
  analyze:
    runs-on: ubuntu-latest
    # Skip if already labeled or opened by a bot account
    if: |
      !contains(github.event.pull_request.labels.*.name, 'skip-review') &&
      github.event.pull_request.user.type == 'User'
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: chatbotgang/skip-review-labeler@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}
```

Inputs:

| Input | Description | Required | Default |
|---|---|---|---|
| `github_token` | GitHub token for API access | Yes | - |
| `openai_api_key` | OpenAI API key (or compatible) | Yes | - |
| `model` | AI model to use | No | `gpt-5-mini` |
| `confidence_threshold` | Minimum confidence to apply label (0-100) | No | `80` |
| `label_name` | Label to apply when eligible | No | `skip-review` |
| `max_diff_size` | Maximum diff size in characters | No | `50000` |
| `add_comment` | Add explanatory comment to PR | No | `true` |
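To get the zero-latency deployment benefit mentioned above, the label can drive a follow-up workflow that enables auto-merge. This is a sketch, not part of the action itself; it assumes auto-merge is enabled for the repository, branch protection is configured, and the default `skip-review` label name is in use:

```yaml
# Sketch: enable auto-merge when the skip-review label is applied.
# Assumes repository auto-merge and branch protection are configured.
name: Auto-merge skip-review PRs
on:
  pull_request:
    types: [labeled]
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    if: github.event.label.name == 'skip-review'
    runs-on: ubuntu-latest
    steps:
      # gh pr merge --auto queues the merge until required checks pass
      - run: gh pr merge --auto --squash "$PR_URL"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_URL: ${{ github.event.pull_request.html_url }}
```

Because `--auto` defers the merge until all required status checks pass, this does not bypass CI, only the human review step.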
With custom configuration:

```yaml
- uses: chatbotgang/skip-review-labeler@v1
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    openai_api_key: ${{ secrets.OPENAI_API_KEY }}
    model: gpt-4o
    confidence_threshold: 90
    label_name: auto-merge-eligible
    max_diff_size: 100000
    add_comment: true
```

The action identifies the following low-risk change types:
Typo Fixes: spelling or grammar corrections in comments, documentation, or string literals that don't affect functionality.

```diff
- // Calcualte the total price
+ // Calculate the total price
```

i18n Updates: changes to internationalization files, translation strings, or translation key references.

```diff
- "campaigns.title": "Campaigns"
+ "campaigns.title": "Marketing Campaigns"
```

UI Tweaks: visual-only changes to CSS, styled-components, or inline styles without logic changes.

```diff
- padding: 12px 16px;
+ padding: 16px 24px;
```

Formatting: automated formatting changes from tools like Prettier or ESLint.

```diff
- const sum=a+b;
+ const sum = a + b;
```

Unused-Code Cleanup: removing deprecated endpoints, feature flags, or configs that have zero remaining callers.

```diff
-// Temporary shim until everyone hits v2
-app.use('/api/v1/members', legacyMembersRouter);
 app.use('/api/v2/members', membersRouter);
```

Safe Dependency Bumps: patch or minor version updates that only touch dependency manifests or lockfiles.

```diff
- "typescript": "5.5.4"
+ "typescript": "5.5.5"
```

Outputs:

| Output | Description |
|---|---|
| `eligible` | Whether the PR is eligible for skip-review (`true`/`false`) |
| `confidence` | AI confidence score (0-100) |
| `category` | Detected category, or `none` |
| `reasoning` | AI reasoning for the decision |
The action also automatically writes a summary to `GITHUB_STEP_SUMMARY`.
To use the outputs in later steps:

```yaml
- uses: chatbotgang/skip-review-labeler@v1
  id: labeler
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    openai_api_key: ${{ secrets.OPENAI_API_KEY }}
- name: Check result
  run: |
    echo "Eligible: ${{ steps.labeler.outputs.eligible }}"
    echo "Confidence: ${{ steps.labeler.outputs.confidence }}%"
    echo "Category: ${{ steps.labeler.outputs.category }}"
    echo "Reasoning: ${{ steps.labeler.outputs.reasoning }}"
```

Best practices:

- Never commit API keys: always use repository secrets
- Review the label: the `skip-review` label is a suggestion, not a mandate
- Conservative defaults: the 80% confidence threshold ensures high accuracy
- Audit trail: comments explain why each PR was labeled
- Human override: remove the label manually if you disagree
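The outputs can also gate later steps directly, without relying on the label. A sketch under stated assumptions: the `e2e` job name and `npm test` command below are placeholders for your own expensive checks, skipped when the analyzer marks a PR eligible:

```yaml
jobs:
  analyze:
    runs-on: ubuntu-latest
    # Re-export the step output so downstream jobs can read it
    outputs:
      eligible: ${{ steps.labeler.outputs.eligible }}
    steps:
      - uses: chatbotgang/skip-review-labeler@v1
        id: labeler
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}
  e2e:
    needs: analyze
    # Only run the expensive suite when the PR was not marked low-risk
    if: needs.analyze.outputs.eligible != 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test   # placeholder for your e2e suite
```

Note that job outputs are strings, so the comparison is against `'true'`, not a boolean.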
The AI is instructed to reject these change types:
- Any logic or functional changes
- API endpoint modifications
- Configuration file changes
- Dependency updates beyond safe patch/minor bumps
- Test file changes
- Security-related code
- Database queries or migrations
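For illustration, even a one-line diff is rejected when it touches logic. This hypothetical change is tiny but alters behavior, so it would never receive the label:

```diff
- if (user.isActive) {
+ if (user.isActive && user.hasSubscription) {
```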
Ensure your workflow has the required permissions:

```yaml
permissions:
  contents: read
  pull-requests: write
```

If no label is applied, check the workflow run logs. Common reasons:

- Confidence too low: the AI wasn't confident enough (score below the threshold)
- Not eligible: the changes include logic modifications
- Already labeled: the PR already has the `skip-review` label
- Diff too large: the diff exceeds `max_diff_size`
If hitting OpenAI rate limits:
- Use a model with higher rate limits
- Add concurrency controls to your workflow
- Consider using an Azure OpenAI endpoint
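For the concurrency tip, the quick-start workflow's concurrency block already does this: runs superseded by a newer push to the same PR are cancelled, so only the latest commit's diff is sent to the API:

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
  cancel-in-progress: true
```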
The action supports Azure OpenAI by configuring the API endpoint:

```yaml
- uses: chatbotgang/skip-review-labeler@v1
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    openai_api_key: ${{ secrets.AZURE_OPENAI_KEY }}
    openai_base_url: https://your-resource.openai.azure.com
    model: your-deployment-name
```

Estimated cost using gpt-5-mini (the default):

- Average PR diff: ~2,000 input tokens, ~200 output tokens
- Cost per PR: ~$0.0003
- 1,000 PRs: ~$0.30
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
Apache License 2.0 - see LICENSE for details.