diff --git a/.ai/commands/README.md b/.ai/commands/README.md new file mode 100644 index 0000000..f0ad51f --- /dev/null +++ b/.ai/commands/README.md @@ -0,0 +1,108 @@ +# AI Commands for Agentic Workflow Template + +This directory contains helpful commands for working with the agentic workflow template. These commands provide +quick access to common tasks and workflows and can be used by any AI assistant. + +## Available Commands + +### Setup & Configuration + +- **[setup-project.md](setup-project.md)** - Initial project setup and configuration +- **[label-management.md](label-management.md)** - GitHub label setup and management +- **[update-template.md](update-template.md)** - Sync with latest template updates + +### Development + +- **[create-issue.md](create-issue.md)** - Create GitHub issues with optional AI automation +- **[create-ai-task.md](create-ai-task.md)** - Execute AI tasks locally or via GitHub +- **[development-workflow.md](development-workflow.md)** - Standard development commands +- **[run-tests.md](run-tests.md)** - Testing commands and strategies + +### Operations + +- **[workflow-management.md](workflow-management.md)** - GitHub Actions workflow management +- **[security-checks.md](security-checks.md)** - Security scanning and validation +- **[cost-monitoring.md](cost-monitoring.md)** - Monitor and control AI API costs + +### Documentation & Analysis + +- **[roadmap.sh](../../shared-commands/commands/roadmap.sh)** - View and generate project roadmaps. +- **[create-spec-issue.sh](../../shared-commands/commands/create-spec-issue.sh)** - Create detailed technical + specifications. +- **[analyze-issue.sh](../../shared-commands/commands/analyze-issue.sh)** - Analyze GitHub issues for + requirements and complexity. + +### Support + +- **[troubleshooting.md](troubleshooting.md)** - Common issues and solutions + +## Quick Start + +1. **Initial Setup** + + ```bash + # See setup-project.md + ./install.sh && source venv/bin/activate + ``` + +2. **Create Issues & AI Tasks** + + ```bash + # Create issue (see create-issue.md) + npx @stillrivercode/shared-commands create-issue -t "Fix login bug" -b "Description here" + + # Create issue with AI automation (explicit opt-in) + npx @stillrivercode/shared-commands create-issue -t "Add feature" --ai-task -b "Details here" + + # Execute AI task for existing issue (see create-ai-task.md) + ./scripts/execute-ai-task.sh ISSUE_NUMBER + ``` + +3. **Generate Documentation** (using unified shared commands) + + ```bash + # Create technical specification + npx @stillrivercode/shared-commands create-spec-issue --title "Authentication Architecture" + + # Analyze existing issue for requirements and complexity + npx @stillrivercode/shared-commands analyze-issue --issue 25 + ``` + +4. **Run Tests** + + ```bash + # See run-tests.md + pytest + ``` + +## Shared Commands + +The core commands for this workflow are located in the `shared-commands/` +directory and distributed as an NPM package for consistency across different AI assistants (Claude, Gemini, etc.). + +### Using Shared Commands + +```bash +# Use commands directly via npx +npx @stillrivercode/shared-commands roadmap +npx @stillrivercode/shared-commands create-spec-issue --title "Technical Design" +npx @stillrivercode/shared-commands analyze-issue --issue 25 +``` + +## Command Usage + +Each command file contains: + +- Purpose and description +- Actual commands to run +- Examples and use cases +- Related documentation + +## Contributing + +When adding new commands: + +1. 
Create a descriptive `.md` file +2. Include clear examples +3. Link to relevant documentation +4. Update this README diff --git a/.ai/commands/cost-monitoring.md b/.ai/commands/cost-monitoring.md new file mode 100644 index 0000000..445afdb --- /dev/null +++ b/.ai/commands/cost-monitoring.md @@ -0,0 +1,47 @@ +# Cost Monitoring + +Monitor and analyze AI API costs. + +## Cost Analysis + +```bash +# View cost analysis documentation +cat docs/cost-analysis.md + +# Check estimated costs for different usage patterns +grep -A10 "Estimated Costs" README.md +``` + +## Cost Tracking Templates + +```bash +# Copy cost monitoring template +cp docs/templates/ai-cost-monitoring-template.csv my-costs.csv + +# View Google Sheets template guide +cat docs/templates/cost-monitoring-guide.md +``` + +## Usage Patterns + +```bash +# Light usage (10-20 tasks/month): $20-50 +# Moderate usage (50-100 tasks/month): $75-200 +# Heavy usage (100+ tasks/month): $150-500 + +# Cost factors: +# - Issue complexity +# - Code generation amount +# - Number of iterations +# - Model selection (Claude Sonnet) +``` + +## Cost Reduction Tips + +```bash +# 1. Use specific, clear issue descriptions +# 2. Break large features into smaller tasks +# 3. Set spending limits in workflow config +# 4. Use ai-task-small label for simple tasks +# 5. Monitor logs: gh run list --workflow=ai-task.yml +``` diff --git a/.ai/commands/create-ai-task.md b/.ai/commands/create-ai-task.md new file mode 100644 index 0000000..200e30c --- /dev/null +++ b/.ai/commands/create-ai-task.md @@ -0,0 +1,30 @@ +# Create AI Task + +Execute an AI task locally (useful for testing before GitHub workflow). + +## Command + +```bash +# Execute AI task for a specific issue +./scripts/execute-ai-task.sh ISSUE_NUMBER + +# Example with real issue number +./scripts/execute-ai-task.sh 42 +``` + +## Manual Process + +```bash +# 1. Create feature branch +git checkout -b feature/ai-task-ISSUE_NUMBER + +# 2. Run Claude with issue context +claude "Implement the requirements from GitHub issue #ISSUE_NUMBER" + +# 3. Create PR when done +./scripts/create-pr.sh +``` + +## GitHub Workflow + +Alternatively, add `ai-task` label to any GitHub issue to trigger automated implementation. diff --git a/.ai/commands/create-issue.md b/.ai/commands/create-issue.md new file mode 100644 index 0000000..cc38664 --- /dev/null +++ b/.ai/commands/create-issue.md @@ -0,0 +1,200 @@ +# Create Issue Command + +Create GitHub issues with optional labels and automation triggers. The `ai-task` label is **NOT** automatically added. 
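+
+For quick orientation before the detailed reference below, the minimal sketch here shows the default behaviour; the issue number in the second command is only an illustration, and every flag used is documented under Command Options:
+
+```bash
+# Create a plain issue -- no ai-task label is attached unless you opt in
+npx @stillrivercode/shared-commands create-issue -t "Fix login bug" -b "Users cannot log in"
+
+# Confirm the labels on the new issue (replace 42 with the number reported above)
+gh issue view 42
+```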
+ +## Features + +- **Manual Label Control**: No automatic `ai-task` label addition +- **Interactive Mode**: Guided issue creation with prompts +- **Label Validation**: Warns about non-existent labels +- **Dry Run Mode**: Preview before creation +- **Prerequisite Validation**: Checks GitHub CLI and authentication +- **AI Integration**: Optional explicit opt-in for AI workflows + +## Usage Examples + +```bash +# Basic issue creation +npx @stillrivercode/shared-commands create-issue -t "Fix login bug" -b "Users cannot log in with valid credentials" + +# Issue with labels (manual ai-task) +npx @stillrivercode/shared-commands create-issue -t "Add user auth" -l "feature,backend" -b "Implement JWT authentication" + +# Explicit AI task (opt-in required) +npx @stillrivercode/shared-commands create-issue -t "Refactor database layer" --ai-task -b "Clean up database connections" + +# Interactive mode for guided creation +npx @stillrivercode/shared-commands create-issue -i + +# Dry run to preview without creating +npx @stillrivercode/shared-commands create-issue -t "Test issue" --dry-run + +# Full example with all options +npx @stillrivercode/shared-commands create-issue \ + -t "Implement user dashboard" \ + -b "Create responsive user dashboard with analytics" \ + -l "feature,frontend,priority-high" \ + -a "developer-username" \ + --ai-task +``` + +## Command Options + +### Required + +- `-t, --title TITLE` - Issue title (required) + +### Optional + +- `-b, --body BODY` - Issue description/body +- `-l, --labels LABELS` - Comma-separated labels +- `-a, --assignee USER` - Assign to GitHub user +- `-m, --milestone NUMBER` - Milestone number +- `--ai-task` - **Explicit opt-in** to add ai-task label +- `--dry-run` - Preview without creating +- `-i, --interactive` - Interactive guided mode +- `-h, --help` - Show help information + +## Label Categories + +### Standard Labels + +- `feature`, `bug`, `enhancement`, `documentation`, `question` +- `good-first-issue`, `help-wanted` + +### Priority Labels + +- `priority-high`, `priority-medium`, `priority-low` + +### AI Workflow Labels (Manual Opt-in) + +- `ai-task` - General AI development tasks +- `ai-bug-fix` - AI-assisted bug fixes +- `ai-refactor` - Code refactoring requests +- `ai-test` - Test generation +- `ai-docs` - Documentation updates + +### Technical Area Labels + +- `backend`, `frontend`, `api`, `database`, `security` + +## AI Task Integration + +### Manual Opt-in Required + +The script **does not** automatically add the `ai-task` label. Users must: + +1. **Explicitly use `--ai-task` flag** +2. **Include `ai-task` in `--labels` list** +3. **Choose to add it in interactive mode** + +### AI Workflow Trigger + +When `ai-task` label is explicitly added: + +- Triggers automated AI implementation workflow +- Creates feature branch automatically +- Generates pull request with implementation +- Estimated timeline: 5-15 minutes + +## Interactive Mode + +The interactive mode provides guided issue creation: + +```bash +./scripts/create-issue.sh -i +``` + +Interactive flow: + +1. **Title prompt** - Required field +2. **Description input** - Multi-line or editor support +3. **Label selection** - Suggestions provided +4. **AI task confirmation** - Explicit yes/no for ai-task label +5. **Assignee selection** - Optional +6. 
**Preview and confirmation** - Review before creation + +## Prerequisites + +- **GitHub CLI (gh)** - Must be installed and authenticated +- **Git repository** - Must be run from within a git repo +- **Issues enabled** - Repository must have Issues feature enabled + +## Validation and Safety + +### Input Validation + +- Title is required and validated +- GitHub CLI authentication checked +- Repository Issues feature verified +- Label existence warnings (but doesn't block) + +### Security Features + +- No automatic label addition +- Explicit user consent for AI workflows +- Dry run mode for safe testing +- Clear warnings about AI trigger implications + +## Output Examples + +### Successful Creation + +```text +āœ… Issue created successfully! + URL: https://github.com/user/repo/issues/42 + Issue #: 42 + +šŸ¤– AI Task Workflow: + The ai-task label will trigger automated AI implementation + Monitor progress in the Actions tab of your repository + Expected timeline: 5-15 minutes for implementation + +🌐 Open issue in browser? (Y/n): +``` + +### Dry Run Preview + +```text +šŸ” DRY RUN - Would execute: + gh issue create --title "Test Feature" --label "feature ai-task" + +šŸ“‹ Issue Details: + Title: Test Feature + Labels: feature,ai-task + Assignee: none + Milestone: none + + Body: + ----- + Implement new test feature with proper error handling +``` + +## Best Practices + +1. **Use descriptive titles** - Clear, specific issue titles +2. **Provide detailed descriptions** - Include requirements and context +3. **Choose appropriate labels** - Use existing label conventions +4. **Consider AI implications** - Only use `--ai-task` when you want automated implementation +5. **Test with dry run** - Preview complex issues before creation +6. **Use interactive mode** - For comprehensive issue creation + +## Integration with Existing Workflows + +This command integrates with: + +- **AI task workflows** - When ai-task label is explicitly added +- **Project management** - Milestone and assignee support +- **Label management** - Respects existing repository labels +- **User story tracking** - Can create issues for user story implementation + +## Error Handling + +Common errors and solutions: + +- **GitHub CLI not authenticated** - Run `gh auth login` +- **Issues disabled** - Enable Issues in repository settings +- **Invalid assignee** - Check GitHub username spelling +- **Network issues** - Check internet connection and GitHub status + +This command provides safe, controlled issue creation with explicit opt-in for AI automation workflows. diff --git a/.ai/commands/development-workflow.md b/.ai/commands/development-workflow.md new file mode 100644 index 0000000..7348d54 --- /dev/null +++ b/.ai/commands/development-workflow.md @@ -0,0 +1,66 @@ +# Development Workflow + +Standard development workflow commands. 
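+
+Taken together, a typical pass through this workflow looks like the sketch below; the branch name and PR title are placeholders, and each step is covered in detail in the sections that follow:
+
+```bash
+# 1. Start from a fresh feature branch
+git checkout -b feature/your-feature
+
+# 2. Run the full quality gate locally before committing
+pre-commit run --all-files
+
+# 3. Publish the branch and open a pull request
+git push -u origin HEAD
+gh pr create --title "Your PR title" --body "Description"
+```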
+ +## Pre-commit Setup + +```bash +# Install pre-commit hooks +pre-commit install + +# Run pre-commit on all files +pre-commit run --all-files + +# Update pre-commit hooks +pre-commit autoupdate +``` + +## Code Quality + +```bash +# Python formatting +black scripts/ tests/ + +# Import sorting +isort scripts/ tests/ + +# Linting +ruff check scripts/ tests/ + +# Type checking +mypy scripts/ + +# All quality checks +pre-commit run --all-files +``` + +## Branch Management + +```bash +# Create feature branch +git checkout -b feature/your-feature + +# Create AI task branch +git checkout -b feature/ai-task-ISSUE_NUMBER + +# Create fix branch +git checkout -b fix/your-fix + +# Push branch +git push -u origin HEAD +``` + +## PR Creation + +```bash +# Create PR with commit changes +./scripts/commit-changes.sh +./scripts/push-changes.sh +./scripts/create-pr.sh + +# Or use GitHub CLI +gh pr create --title "Your PR title" --body "Description" + +# Create draft PR +gh pr create --draft +``` diff --git a/.ai/commands/label-management.md b/.ai/commands/label-management.md new file mode 100644 index 0000000..25b4dba --- /dev/null +++ b/.ai/commands/label-management.md @@ -0,0 +1,56 @@ +# Label Management + +Manage GitHub labels for AI workflows. + +## Setup Labels + +```bash +# Create all required labels +./scripts/setup-labels.sh + +# This creates: +# - ai-task (triggers AI implementation) +# - ai-bug-fix (AI bug fixes) +# - ai-refactor (AI refactoring) +# - ai-test (AI test generation) +# - ai-docs (AI documentation) +# - ai-completed (task completed) +# - ai-error (task failed) +# - ai-review-needed (needs human review) +``` + +## Label Usage + +```bash +# Add label to issue (requires gh CLI) +gh issue edit ISSUE_NUMBER --add-label "ai-task" + +# Remove label +gh issue edit ISSUE_NUMBER --remove-label "ai-task" + +# List issues with specific label +gh issue list --label "ai-task" + +# View all labels +gh label list +``` + +## Custom Labels + +```bash +# Create custom AI label +gh label create "ai-security" --description "AI security analysis" --color "FF0000" + +# Update label +gh label edit "ai-task" --color "00FF00" --description "Updated description" + +# Delete label +gh label delete "old-label" +``` + +## Label Workflow Triggers + +- `ai-task` → Triggers AI implementation workflow +- `ai-fix-lint` → Triggers lint fix workflow +- `ai-fix-security` → Triggers security fix workflow +- `ai-fix-tests` → Triggers test fix workflow diff --git a/.ai/commands/run-tests.md b/.ai/commands/run-tests.md new file mode 100644 index 0000000..976b998 --- /dev/null +++ b/.ai/commands/run-tests.md @@ -0,0 +1,48 @@ +# Run Tests + +Run the test suite for the agentic workflow template. 
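+
+As a quick pre-push check, the options from the sections below can be combined into a single run; a minimal sketch, assuming the virtual environment is already active:
+
+```bash
+# Skip slow tests, report coverage, and show verbose output
+pytest -m "not slow" --cov=scripts --cov=tests -v
+```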
+ +## All Tests + +```bash +# Activate virtual environment first +source venv/bin/activate + +# Run all tests +pytest + +# Run with coverage +pytest --cov=scripts --cov=tests + +# Run specific test file +pytest tests/test_workflow_syntax.py + +# Run specific test +pytest tests/test_workflow_syntax.py::TestWorkflowSyntax::test_ai_task_workflow_structure +``` + +## Test Categories + +```bash +# Unit tests only +pytest -m "not integration" + +# Integration tests +pytest -m integration + +# Fast tests (exclude slow) +pytest -m "not slow" +``` + +## Workflow Tests + +```bash +# Test workflow syntax +pytest tests/test_workflow_syntax.py -v + +# Test workflow integration +pytest tests/test_workflow_integration.py -v + +# Test security +pytest tests/test_pat_security.py -v +``` diff --git a/.ai/commands/security-checks.md b/.ai/commands/security-checks.md new file mode 100644 index 0000000..3e592f7 --- /dev/null +++ b/.ai/commands/security-checks.md @@ -0,0 +1,53 @@ +# Security Checks + +Run security scans and validations. + +## Run Security Scan + +```bash +# Full security scan +./scripts/run-security-scan.sh + +# AI-powered security review +./scripts/ai-security-review.sh +``` + +## Security Tools + +```bash +# Bandit for Python security +bandit -r scripts/ -f json + +# Check for secrets +pre-commit run detect-secrets --all-files + +# Dependency check +pip-audit + +# YAML security +yamllint .github/workflows/ +``` + +## Token Security + +```bash +# Test PAT security +pytest tests/test_pat_security.py -v + +# Check for exposed tokens +grep -r "ghp_" . --exclude-dir=venv --exclude-dir=.git +grep -r "github_pat" . --exclude-dir=venv --exclude-dir=.git +``` + +## Permissions Audit + +```bash +# Check workflow permissions +python -c " +import yaml +import glob +for f in glob.glob('.github/workflows/*.yml'): + with open(f) as file: + data = yaml.safe_load(file) + print(f'{f}: {data.get(\"permissions\", \"No explicit permissions\")}')" +``` diff --git a/.ai/commands/setup-project.md b/.ai/commands/setup-project.md new file mode 100644 index 0000000..52477d4 --- /dev/null +++ b/.ai/commands/setup-project.md @@ -0,0 +1,32 @@ +# Setup Project + +Set up the agentic workflow template in your repository. + +## Command + +```bash +# Install the template +./install.sh + +# Activate virtual environment +source venv/bin/activate + +# Install development dependencies +pip install -e ".[dev]" + +# Setup GitHub labels +./scripts/setup-labels.sh + +# Run tests to verify setup +pytest +``` + +## Required GitHub Secrets + +- `ANTHROPIC_API_KEY` - Your Claude API key +- `GH_PAT` - GitHub Personal Access Token (repo, workflow, write:packages scopes) + +## Next Steps + +- Configure workflow settings in `.github/workflows/` +- Test AI workflows with `ai-task` labeled issues diff --git a/.ai/commands/troubleshooting.md b/.ai/commands/troubleshooting.md new file mode 100644 index 0000000..546e6e6 --- /dev/null +++ b/.ai/commands/troubleshooting.md @@ -0,0 +1,74 @@ +# Troubleshooting + +Common issues and solutions for the agentic workflow template. 
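+
+When the cause of a failure is not obvious, a quick triage pass over the basics often narrows it down; each command is explained in the sections below:
+
+```bash
+# Quick triage: authentication, secrets, recent runs, local CLI
+gh auth status
+gh secret list
+gh run list --workflow=ai-task.yml --limit 10
+claude --version
+```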
+ +## Workflow Not Triggering + +```bash +# Check if labels are correctly applied +gh issue view ISSUE_NUMBER + +# Verify GitHub secrets are set +gh secret list + +# Check Actions tab for errors +gh run list --workflow=ai-task.yml + +# View workflow logs +gh run view RUN_ID --log +``` + +## AI Task Failures + +```bash +# Check Claude CLI installation +./dev-scripts/install-claude.sh + +# Test Claude CLI +claude --version + +# Check API key +echo $ANTHROPIC_API_KEY | head -c 10 + +# Manual execution for debugging +./scripts/execute-ai-task.sh ISSUE_NUMBER +``` + +## Permission Errors + +```bash +# Check PAT permissions +gh auth status + +# Verify PAT has required scopes: +# - repo (full control) +# - workflow (update workflows) +# - write:packages (if using packages) + +# Test PAT +curl -H "Authorization: token $GH_PAT" https://api.github.com/user +``` + +## Cost Issues + +```bash +# Check recent workflow runs +gh run list --workflow=ai-task.yml --limit 10 + +# Calculate approximate costs +# Each run ā‰ˆ $0.10-$5.00 depending on complexity + +# Set spending limits in workflow +# Edit .github/workflows/ai-task.yml timeout-minutes +``` + +## Debug Mode + +```bash +# Enable debug logging +export ACTIONS_STEP_DEBUG=true +export ACTIONS_RUNNER_DEBUG=true + +# Run with verbose output +./scripts/execute-ai-task.sh ISSUE_NUMBER -v +``` diff --git a/.ai/commands/update-template.md b/.ai/commands/update-template.md new file mode 100644 index 0000000..24b1934 --- /dev/null +++ b/.ai/commands/update-template.md @@ -0,0 +1,62 @@ +# Update Template + +Keep your repository synchronized with the latest template updates. + +## Update from Template + +```bash +# Pull latest changes from template repository +./dev-scripts/update-from-template.sh + +# This will: +# 1. Add template as remote if not exists +# 2. Fetch latest changes +# 3. Merge template updates +# 4. Show conflicts if any +``` + +## Manual Update Process + +```bash +# Add template remote +git remote add template https://github.com/stillrivercode/agentic-workflow-template.git + +# Fetch template changes +git fetch template + +# Merge specific version +git merge template/v1.0.0 --allow-unrelated-histories + +# Or cherry-pick specific commits +git cherry-pick COMMIT_HASH +``` + +## Check for Updates + +```bash +# View template releases +gh release list --repo stillrivercode/agentic-workflow-template + +# Compare with template +git fetch template +git log --oneline HEAD..template/main + +# View update documentation +cat docs/template-updates.md +``` + +## Conflict Resolution + +```bash +# Common conflicts occur in: +# - .github/workflows/ +# - CLAUDE.md +# - README.md + +# Strategy: Keep your customizations, incorporate new features +git status +git diff +# Manually resolve conflicts +git add . +git commit +``` diff --git a/.ai/commands/workflow-management.md b/.ai/commands/workflow-management.md new file mode 100644 index 0000000..9dc1588 --- /dev/null +++ b/.ai/commands/workflow-management.md @@ -0,0 +1,53 @@ +# Workflow Management + +Commands for managing GitHub Actions workflows. 
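+
+Before digging into individual checks, a combined validation pass catches most problems early; a minimal sketch, assuming `yamllint` is available (it is already used in security-checks.md):
+
+```bash
+# Lint workflow YAML and review declared permissions in one pass
+yamllint .github/workflows/
+grep -A5 "permissions:" .github/workflows/*.yml
+```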
+ +## Workflow Status + +```bash +# Check workflow syntax +python -c "import yaml; yaml.safe_load(open('.github/workflows/ai-task.yml'))" + +# Validate all workflows +for f in .github/workflows/*.yml; do + echo "Validating $f" + python -c "import yaml; yaml.safe_load(open('$f'))" +done +``` + +## Update Workflow Diagram + +When modifying workflows, update the diagram: + +```bash +# Edit the workflow diagram +$EDITOR docs/workflow-diagram.md + +# The diagram uses Mermaid syntax +# Update it to reflect any workflow changes +``` + +## Workflow Permissions + +```bash +# Check current permissions in workflows +grep -A5 "permissions:" .github/workflows/*.yml + +# Validate permissions are minimal +cat docs/workflow-permissions.md +``` + +## Test Workflow Locally + +```bash +# Install act (GitHub Actions local runner) +brew install act + +# List available workflows +act -l + +# Run specific workflow (dry run) +act -n -j ai-task + +# Note: Some features like GitHub secrets won't work locally +``` diff --git a/.bandit b/.bandit new file mode 100644 index 0000000..31c047d --- /dev/null +++ b/.bandit @@ -0,0 +1,12 @@ +# Bandit Configuration File +# This configuration helps reduce false positives in security scanning + +[bandit] +# Exclude test directories and virtual environments from security scanning +# Test code often uses patterns that would be unsafe in production +# Virtual environments contain third-party code we don't control +exclude: /tests/,/test/,*test*.py,/venv/,/env/,/.venv/,/.env/ + +# Skip specific tests that generate too many false positives +# B101: Use of assert detected - Assert statements are acceptable in test code +skips: B101 diff --git a/.github/workflows/release-generator.yml b/.github/workflows/release-generator.yml deleted file mode 100644 index 3300598..0000000 --- a/.github/workflows/release-generator.yml +++ /dev/null @@ -1,1358 +0,0 @@ -name: šŸš€ Automated Release Generator - -on: - workflow_dispatch: - inputs: - release-type: - description: 'Type of release to create' - required: false - default: 'auto' - type: choice - options: - - auto - - major - - minor - - patch - - prerelease - enable-ai-enhancement: - description: 'Use AI to enhance release descriptions' - required: false - default: false - type: boolean - publish-npm: - description: 'Publish to npm registry' - required: false - default: false - type: boolean - push: - tags: - - 'v*' - pull_request: - types: [closed] - branches: - - main - schedule: - # Weekly release notes generation (Sundays at midnight UTC) - - cron: '0 0 * * 0' - -# Centralized timeout and security settings -# These timeouts prevent runaway processes and ensure resource efficiency -env: - AI_EXECUTION_TIMEOUT_MINUTES: ${{ vars.AI_EXECUTION_TIMEOUT_MINUTES || '10' }} - WORKFLOW_TIMEOUT: 30 # Maximum runtime for entire workflow (safety limit) - AI_OPERATION_TIMEOUT: 15 # Timeout for AI enhancement operations (cost control) - API_OPERATION_TIMEOUT: 10 # Timeout for GitHub API calls (reliability) - SETUP_TIMEOUT: 5 # Timeout for environment setup (fast-fail on setup issues) - -# Minimal required permissions -permissions: - contents: read - issues: write - pull-requests: read - actions: read - -# Prevent concurrent runs for the same workflow -concurrency: - group: release-notes-${{ github.ref }} - cancel-in-progress: false - -jobs: - # Emergency controls check - validates system state before proceeding - # Integrates with repository-wide emergency controls for coordinated workflow management - emergency-check: - runs-on: ubuntu-latest - 
timeout-minutes: 5 - outputs: - can-proceed: ${{ steps.check.outputs.can-proceed }} - emergency-reason: ${{ steps.check.outputs.emergency-reason }} - steps: - - name: Check Emergency Controls - id: check - run: | - echo "šŸ” Checking emergency controls..." - - # EMERGENCY_STOP: Repository variable that immediately halts all AI workflows - # Used for critical incidents, budget overruns, or system-wide issues - if [[ "${{ vars.EMERGENCY_STOP }}" == "true" ]]; then - echo "🚨 Release notes generation blocked by emergency stop" - echo "can-proceed=false" >> $GITHUB_OUTPUT - echo "emergency-reason=Emergency stop activated" >> $GITHUB_OUTPUT - # MAINTENANCE_MODE: Repository variable that pauses workflows during system updates - # Prevents workflow conflicts during infrastructure changes - elif [[ "${{ vars.MAINTENANCE_MODE }}" == "true" ]]; then - echo "šŸ”§ Release notes generation paused for maintenance" - echo "can-proceed=false" >> $GITHUB_OUTPUT - echo "emergency-reason=Maintenance mode active" >> $GITHUB_OUTPUT - else - echo "āœ… Release notes generation can proceed" - echo "can-proceed=true" >> $GITHUB_OUTPUT - echo "emergency-reason=" >> $GITHUB_OUTPUT - fi - - # Check if template files changed - check-template-changes: - needs: emergency-check - if: | - needs.emergency-check.outputs.can-proceed == 'true' && - (github.event_name != 'pull_request' || github.event.pull_request.merged == true) - runs-on: ubuntu-latest - timeout-minutes: 5 - outputs: - template-files-changed: ${{ steps.check.outputs.template-files-changed }} - changed-files: ${{ steps.check.outputs.changed-files }} - all-changed-files: ${{ steps.check.outputs.all-changed-files }} - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 0 # Full history for file change analysis - fetch-tags: true # Ensure tags are fetched - - - name: Check template file changes - id: check - run: | - echo "šŸ” Checking for template file changes..." - - # Validate prerequisites - if ! command -v node >/dev/null 2>&1; then - echo "āŒ Error: node is not installed" - exit 1 - fi - - if ! command -v jq >/dev/null 2>&1; then - echo "āŒ Error: jq is not installed" - exit 1 - fi - - # Validate package.json exists - if [[ ! -f "package.json" ]]; then - echo "āŒ Error: package.json not found" - exit 1 - fi - - # Get template files from package.json with error handling - TEMPLATE_FILES=$(node -p "JSON.stringify(require('./package.json').files)" 2>/dev/null | jq -r '.[]' 2>/dev/null) - if [[ -z "$TEMPLATE_FILES" ]]; then - echo "āŒ Error: Failed to read files array from package.json" - exit 1 - fi - - echo "Template files to check:" - echo "$TEMPLATE_FILES" - - # Get base reference (latest tag or beginning of history) - BASE_REF="" - if git describe --tags --abbrev=0 >/dev/null 2>&1; then - BASE_REF=$(git describe --tags --abbrev=0) - # Validate git tag format to prevent injection - if [[ ! "$BASE_REF" =~ ^[a-zA-Z0-9._/-]+$ ]]; then - echo "āŒ Error: Invalid git tag format: $BASE_REF" - exit 1 - fi - echo "Using latest tag as base: $BASE_REF" - else - echo "No tags found, checking all files" - BASE_REF="" - fi - - # Check for changes in template files - TEMPLATE_CHANGED=false - CHANGED_FILES="" - - if [[ -n "$BASE_REF" ]]; then - # Get changed files since last tag with error handling - if ! 
CHANGED_FILES_RAW=$(git diff --name-only "$BASE_REF"..HEAD 2>&1); then - echo "āŒ Error: Failed to get changed files: $CHANGED_FILES_RAW" - exit 1 - fi - else - # For first release, consider all template files as changed - CHANGED_FILES_RAW=$(echo "$TEMPLATE_FILES") - TEMPLATE_CHANGED=true - fi - - echo "Files changed since $BASE_REF:" - echo "$CHANGED_FILES_RAW" - - # Check if any changed file matches template files - while IFS= read -r template_file; do - [[ -z "$template_file" ]] && continue - - # Escape special regex characters to prevent injection - escaped_template=$(printf '%s\n' "$template_file" | sed 's/[[\.*^$()+?{|]/\\&/g') - - # Handle directory patterns (e.g., "cli/", ".github/workflows/") - if [[ "$template_file" =~ /$ ]]; then - # Directory pattern - check if any files in this directory changed - # Use grep -F for fixed string matching to prevent regex injection - dir_prefix="${template_file%/}/" - if echo "$CHANGED_FILES_RAW" | grep -F "$dir_prefix" >/dev/null; then - echo "āœ“ Template directory changed: $template_file" - TEMPLATE_CHANGED=true - CHANGED_FILES="${CHANGED_FILES} \"${template_file}\"" - fi - else - # Exact file pattern - use grep -Fx for exact fixed string matching - if echo "$CHANGED_FILES_RAW" | grep -Fx "$template_file" >/dev/null; then - echo "āœ“ Template file changed: $template_file" - TEMPLATE_CHANGED=true - CHANGED_FILES="${CHANGED_FILES} \"${template_file}\"" - fi - fi - done <<< "$TEMPLATE_FILES" - - # Make the raw changed files list available for reporting, converting newlines to spaces - ALL_CHANGED_FILES=$(echo "$CHANGED_FILES_RAW" | tr '\n' ' ') - - echo "template-files-changed=$TEMPLATE_CHANGED" >> $GITHUB_OUTPUT - echo "changed-files=$CHANGED_FILES" >> $GITHUB_OUTPUT - echo "all-changed-files=$ALL_CHANGED_FILES" >> $GITHUB_OUTPUT - - if [[ "$TEMPLATE_CHANGED" == "true" ]]; then - echo "āœ… Template files have changed - version bump needed" - else - echo "ā„¹ļø No template files changed - skipping version bump" - fi - - # Analyze commits and calculate version - analyze-commits: - needs: [emergency-check, check-template-changes] - if: | - needs.emergency-check.outputs.can-proceed == 'true' && - needs.check-template-changes.outputs.template-files-changed == 'true' && - (github.event_name != 'pull_request' || github.event.pull_request.merged == true) - runs-on: ubuntu-latest - timeout-minutes: 10 - outputs: - analysis-available: ${{ steps.analyze.outputs.analysis-available }} - current-version: ${{ steps.analyze.outputs.current-version }} - next-version: ${{ steps.analyze.outputs.next-version }} - bump-type: ${{ steps.analyze.outputs.bump-type }} - has-changes: ${{ steps.analyze.outputs.has-changes }} - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 0 # Full history for commit analysis - fetch-tags: true # Ensure tags are fetched for version detection - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: '3.12' - - - name: Install dependencies - timeout-minutes: ${{ fromJSON(env.SETUP_TIMEOUT) }} - run: | - python -m pip install --upgrade pip - pip install -e dev-scripts/version-management/ - - # Verify installation - python -c "import yaml, json, subprocess, sys; print('Dependencies OK')" - - # Test analyze-commits.py can be imported - python -c "import sys; sys.path.insert(0, 'dev-scripts/version-management'); " \ - "exec(open('dev-scripts/version-management/analyze-commits.py').read())" --help || true - - - name: Cache git analysis results - uses: actions/cache@v4 - with: 
- path: .git-analysis-cache - key: git-analysis-${{ runner.os }}-${{ github.sha }}-${{ hashFiles('.github/semver-config.yml') }} - restore-keys: | - git-analysis-${{ runner.os }}-${{ github.sha }}- - git-analysis-${{ runner.os }}- - - - name: Analyze commits - id: analyze - run: | - echo "šŸ“Š Analyzing commits for version calculation..." - - # Determine base reference (always use latest tag) - BASE_REF="" - if [[ -z "$BASE_REF" ]]; then - # Get latest tag or use beginning of history - if git describe --tags --abbrev=0 >/dev/null 2>&1; then - BASE_REF=$(git describe --tags --abbrev=0) - echo "Using latest tag as base: $BASE_REF" - else - echo "No tags found, analyzing all commits" - BASE_REF="" - fi - fi - - # Run commit analysis - ANALYSIS_FILE="commit-analysis.json" - ERROR_FILE="analysis-error.log" - - if [[ -n "$BASE_REF" ]]; then - echo "Running analyze-commits.py with base ref: $BASE_REF" - if ! timeout 120 python dev-scripts/version-management/analyze-commits.py \ - --since "$BASE_REF" \ - --output json \ - --verbose > "$ANALYSIS_FILE" 2>"$ERROR_FILE"; then - echo "āŒ Error running analyze-commits.py with base ref $BASE_REF (exit code: $?):" - cat "$ERROR_FILE" - echo "--- Analysis file content (if any) ---" - cat "$ANALYSIS_FILE" || echo "No analysis file created" - exit 1 - fi - else - echo "Running analyze-commits.py for all commits (with 2-minute timeout)" - if ! timeout 120 python dev-scripts/version-management/analyze-commits.py \ - --output json \ - --verbose > "$ANALYSIS_FILE" 2>"$ERROR_FILE"; then - echo "āŒ Error running analyze-commits.py (analyzing commits, exit code: $?):" - cat "$ERROR_FILE" - echo "--- Analysis file content (if any) ---" - cat "$ANALYSIS_FILE" || echo "No analysis file created" - exit 1 - fi - fi - - # Verify the file was created and is not empty - if [[ ! -f "$ANALYSIS_FILE" || ! -s "$ANALYSIS_FILE" ]]; then - echo "āŒ Error: Analysis file is empty or missing" - exit 1 - fi - - # Validate JSON format - if ! jq empty "$ANALYSIS_FILE" 2>/dev/null; then - echo "āŒ Error: Invalid JSON in analysis file" - echo "File content:" - head -20 "$ANALYSIS_FILE" - exit 1 - fi - - # Extract key information - CURRENT_VERSION=$(jq -r '.current_version' "$ANALYSIS_FILE") - NEXT_VERSION=$(jq -r '.next_version' "$ANALYSIS_FILE") - BUMP_TYPE=$(jq -r '.recommended_bump' "$ANALYSIS_FILE") - TOTAL_COMMITS=$(jq -r '.total_commits' "$ANALYSIS_FILE") - - echo "current-version=$CURRENT_VERSION" >> $GITHUB_OUTPUT - echo "next-version=$NEXT_VERSION" >> $GITHUB_OUTPUT - echo "bump-type=$BUMP_TYPE" >> $GITHUB_OUTPUT - echo "has-changes=$([ $TOTAL_COMMITS -gt 0 ] && echo 'true' || echo 'false')" >> $GITHUB_OUTPUT - - # Debug: Show analysis file content - echo "šŸ“„ Analysis file content:" - cat "$ANALYSIS_FILE" - echo "" - - # Save analysis result using artifacts to avoid secret detection - # GitHub Actions can detect large JSON content as potential secrets - echo "šŸ’¾ Saving analysis result as artifact..." 
- - # Create artifacts directory - mkdir -p workflow-artifacts - cp "$ANALYSIS_FILE" workflow-artifacts/analysis-result.json - - # Save just the key metadata as outputs (safe, no large JSON content) - echo "analysis-available=true" >> $GITHUB_OUTPUT - - echo "šŸ“‹ Analysis complete:" - echo " Current version: $CURRENT_VERSION" - echo " Next version: $NEXT_VERSION" - echo " Bump type: $BUMP_TYPE" - echo " Total commits: $TOTAL_COMMITS" - - - name: Upload analysis artifacts - uses: actions/upload-artifact@v4 - with: - name: commit-analysis-${{ github.run_id }} - path: workflow-artifacts/ - retention-days: 1 - - # Calculate final version with validation - calculate-version: - needs: [check-template-changes, analyze-commits] - if: | - needs.check-template-changes.outputs.template-files-changed == 'true' && - needs.analyze-commits.outputs.has-changes == 'true' - runs-on: ubuntu-latest - timeout-minutes: 5 - outputs: - final-version: ${{ steps.calculate.outputs.final-version }} - version-tag: ${{ steps.calculate.outputs.version-tag }} - is-prerelease: ${{ steps.calculate.outputs.is-prerelease }} - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-tags: true # Ensure tags are fetched - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: '3.12' - - - name: Install dependencies - timeout-minutes: ${{ fromJSON(env.SETUP_TIMEOUT) }} - run: | - python -m pip install --upgrade pip - pip install -e dev-scripts/version-management/ - - - name: Download analysis artifacts - uses: actions/download-artifact@v4 - with: - name: commit-analysis-${{ github.run_id }} - path: workflow-artifacts/ - - - name: Calculate version - id: calculate - env: - RELEASE_TYPE: ${{ github.event.inputs.release-type || 'auto' }} - run: | - echo "šŸ”¢ Calculating final version..." - echo "Release type: $RELEASE_TYPE" - - # Use analysis result from artifact - if [[ -f "workflow-artifacts/analysis-result.json" ]]; then - echo "Using analysis result from artifact" - cp workflow-artifacts/analysis-result.json analysis.json - else - echo "āŒ Error: Analysis result artifact not found" - exit 1 - fi - - # Debug: Validate JSON - if ! jq . analysis.json > /dev/null 2>&1; then - echo "āŒ Error: Invalid JSON in ANALYSIS_RESULT" - echo "Content:" - cat analysis.json - exit 1 - fi - - # Determine release parameters based on release type and branch - CURRENT_BRANCH="${{ github.ref_name }}" - EVENT_NAME="${{ github.event_name }}" - echo "Current branch: $CURRENT_BRANCH" - echo "Event name: $EVENT_NAME" - - PRERELEASE_ID="" - OVERRIDE_BUMP="" - PROMOTE_FROM_PRERELEASE=false - - # Function to validate version format - validate_version() { - local version=$1 - if [[ ! 
$version =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.-]+)?$ ]]; then - echo "::error::Invalid version format: $version" - return 1 - fi - return 0 - } - - if [[ "$RELEASE_TYPE" == "prerelease" ]]; then - # Explicit prerelease requested - PRERELEASE_ID="beta" - OVERRIDE_BUMP="patch" - echo "šŸ”€ Explicit pre-release requested - creating beta pre-release" - elif [[ "$CURRENT_BRANCH" == feature/* ]]; then - # Feature branch - always create pre-release - PRERELEASE_ID="beta" - OVERRIDE_BUMP="patch" - echo "šŸ”€ Feature branch detected - creating pre-release with beta suffix" - elif [[ "$CURRENT_BRANCH" == develop* || "$CURRENT_BRANCH" == dev* ]]; then - # Development branch - always create pre-release - PRERELEASE_ID="alpha" - OVERRIDE_BUMP="patch" - echo "šŸ”€ Development branch detected - creating pre-release with alpha suffix" - elif [[ "$CURRENT_BRANCH" == "main" && "$EVENT_NAME" == "pull_request" ]]; then - # PR merged to main - promote from pre-release to stable release - PROMOTE_FROM_PRERELEASE=true - echo "šŸ“¦ PR merged to main - promoting from pre-release to stable release" - - # Get the latest tag to check for pre-release - LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "") - if [[ "$LATEST_TAG" =~ v?[0-9]+\.[0-9]+\.[0-9]+-.+ ]]; then - # This logic handles the promotion of a pre-release to a stable release. - # When a pull request is merged into the main branch, this workflow - # checks for the latest tag. If the latest tag is a pre-release version - # (e.g., v1.2.3-alpha.1), it strips the pre-release suffix to determine - # the new stable version (e.g., 1.2.3). This stable version is then - # used for the new release, effectively "promoting" the pre-release. - echo "šŸ”„ Found pre-release version to promote: $LATEST_TAG" - # Remove pre-release suffix (everything after the dash) to get stable version - STABLE_VERSION=$(echo "$LATEST_TAG" | sed 's/v//' | sed 's/-.*$//') - - # Validate the extracted stable version - if ! validate_version "$STABLE_VERSION"; then - echo "::error::Failed to promote version: Invalid stable version format." 
- exit 1 - fi - - jq --arg version "$STABLE_VERSION" '.next_version = $version' analysis.json > analysis-temp.json && mv analysis-temp.json analysis.json - echo "šŸŽÆ Promoting to stable version: $STABLE_VERSION" - else - echo "ā„¹ļø No pre-release version found, using normal version calculation" - fi - elif [[ "$RELEASE_TYPE" != "auto" ]]; then - # Manual override for main branch - OVERRIDE_BUMP="$RELEASE_TYPE" - echo "šŸ“¦ Manual release type specified: $RELEASE_TYPE" - fi - - # Apply overrides to analysis - if [[ -n "$OVERRIDE_BUMP" ]]; then - jq --arg bump "$OVERRIDE_BUMP" '.recommended_bump = $bump' analysis.json > analysis-temp.json && mv analysis-temp.json analysis.json - echo "Overriding bump type to: $OVERRIDE_BUMP" - fi - - # Calculate version - CALC_ARGS="--analysis analysis.json --output json" - - if [[ -n "$PRERELEASE_ID" ]]; then - CALC_ARGS="$CALC_ARGS --prerelease $PRERELEASE_ID" - echo "šŸ“¦ Using pre-release identifier: $PRERELEASE_ID" - fi - - # Run calculation - eval "python dev-scripts/version-management/calculate-version.py $CALC_ARGS" > version-calc.json - - # Extract results - FINAL_VERSION=$(cat version-calc.json | jq -r '.next_version') - VERSION_TAG=$(cat version-calc.json | jq -r '.suggested_tag') - IS_PRERELEASE=$(cat version-calc.json | jq -r '.is_prerelease') - - echo "final-version=$FINAL_VERSION" >> $GITHUB_OUTPUT - echo "version-tag=$VERSION_TAG" >> $GITHUB_OUTPUT - echo "is-prerelease=$IS_PRERELEASE" >> $GITHUB_OUTPUT - - echo "āœ… Version calculation complete:" - echo " Final version: $FINAL_VERSION" - echo " Git tag: $VERSION_TAG" - echo " Is pre-release: $IS_PRERELEASE" - - # Update package version files to match calculated version - update-version: - needs: [check-template-changes, analyze-commits, calculate-version] - if: | - needs.check-template-changes.outputs.template-files-changed == 'true' && - needs.analyze-commits.outputs.has-changes == 'true' - runs-on: ubuntu-latest - timeout-minutes: 5 - permissions: - contents: write - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - token: ${{ github.token }} - fetch-tags: true # Ensure tags are fetched - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: '3.12' - - - name: Install dependencies - timeout-minutes: ${{ fromJSON(env.SETUP_TIMEOUT) }} - run: | - python -m pip install --upgrade pip - pip install -e dev-scripts/version-management/ - - - name: Update package.json version - run: | - echo "šŸ“ Updating package.json to version ${{ needs.calculate-version.outputs.final-version }}..." 
- - # Update the version in package.json - NEW_VERSION="${{ needs.calculate-version.outputs.final-version }}" - - # Use npm version to update (without creating a git tag) - npm version "$NEW_VERSION" --no-git-tag-version - - # Verify the change - echo "Updated version in package.json:" - grep '"version"' package.json - - # Check if there are any changes to commit - if git diff --quiet package.json; then - echo "No changes needed - version already up to date" - else - echo "āœ… Version updated successfully" - fi - - - name: Commit version update - run: | - # Configure git - git config user.name "github-actions[bot]" - git config user.email "github-actions[bot]@users.noreply.github.com" - - # Check if there are changes to commit - if git diff --quiet package.json; then - echo "No version changes to commit" - else - # Add and commit the version update - git add package.json - git commit -m "$(cat <<'EOF' - chore: update version to ${{ needs.calculate-version.outputs.final-version }} for release - - Automatically updated by release-notes-generator workflow. - - šŸ¤– Generated with [Claude Code](https://claude.ai/code) - - Co-Authored-By: Claude - EOF - )" - - # Push the changes - git push - echo "āœ… Version update committed and pushed" - fi - - - name: Validate version - run: | - echo "šŸ” Validating version progression..." - python dev-scripts/version-management/validate-version.py \ - --version ${{ needs.calculate-version.outputs.final-version }} \ - --output text - - # Generate release notes - generate-notes: - needs: - [ - check-template-changes, - analyze-commits, - calculate-version, - update-version, - ] - if: | - needs.check-template-changes.outputs.template-files-changed == 'true' && - needs.analyze-commits.outputs.has-changes == 'true' - runs-on: ubuntu-latest - timeout-minutes: 10 - outputs: - release-notes-md: ${{ steps.generate.outputs.release-notes-md }} - release-notes-json: ${{ steps.generate.outputs.release-notes-json }} - release-notes-generated: ${{ steps.generate.outputs.release-notes-generated }} - changelog-entry: ${{ steps.generate.outputs.changelog-entry }} - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 0 - fetch-tags: true # Ensure tags are fetched - - - name: Set up Python - uses: actions/setup-python@v4 - with: - python-version: '3.12' - - - name: Install dependencies - timeout-minutes: ${{ fromJSON(env.SETUP_TIMEOUT) }} - run: | - python -m pip install --upgrade pip - pip install -e dev-scripts/version-management/ Jinja2 - - - name: Download analysis artifacts - uses: actions/download-artifact@v4 - with: - name: commit-analysis-${{ github.run_id }} - path: workflow-artifacts/ - - - name: Generate release notes - id: generate - run: | - echo "šŸ“ Generating release notes..." 
- - # Create release notes generation script - cat > generate_release_notes.py << 'EOF' - #!/usr/bin/env python3 - import json - import yaml - from datetime import datetime - from pathlib import Path - - # Load analysis data - with open('analysis.json', 'r') as f: - analysis = json.load(f) - - # Load configuration - config_path = Path('.github/semver-config.yml') - if config_path.exists(): - with open(config_path, 'r') as f: - config = yaml.safe_load(f) - else: - config = {} - - # Get release notes config - notes_config = config.get('release_notes', {}) - categories = notes_config.get('categories', []) - - # Organize commits by category - categorized_commits = {} - for category in categories: - categorized_commits[category['name']] = [] - - # Default category for uncategorized commits - categorized_commits['šŸ”§ Other Changes'] = [] - - # Categorize commits - for commit in analysis.get('commits', []): - categorized = False - commit_type = commit.get('type', '') - - for category in categories: - if commit_type in category.get('commit_types', []): - categorized_commits[category['name']].append(commit) - categorized = True - break - - if not categorized: - categorized_commits['šŸ”§ Other Changes'].append(commit) - - # Generate markdown - version = "${{ needs.calculate-version.outputs.final-version }}" - date = datetime.now().strftime('%Y-%m-%d') - - md_lines = [ - f"# Release {version}", - f"", - f"**Release Date:** {date}", - f"**Version Type:** {analysis.get('recommended_bump', 'patch').title()}", - f"", - ] - - # Add summary - total_commits = analysis.get('total_commits', 0) - feature_count = analysis.get('feature_count', 0) - fix_count = analysis.get('fix_count', 0) - breaking_count = len(analysis.get('breaking_changes', [])) - - md_lines.extend([ - "## šŸ“Š Summary", - "", - f"- **Total Changes:** {total_commits} commits", - f"- **New Features:** {feature_count}", - f"- **Bug Fixes:** {fix_count}", - f"- **Breaking Changes:** {breaking_count}", - "", - ]) - - # Add breaking changes first if any - if breaking_count > 0: - md_lines.extend([ - "## šŸ’„ Breaking Changes", - "", - ]) - for change in analysis.get('breaking_changes', []): - md_lines.append(f"- {change}") - md_lines.append("") - - # Add categorized changes - for category_name, commits in categorized_commits.items(): - if commits: - md_lines.extend([ - f"## {category_name}", - "", - ]) - for commit in commits: - desc = commit.get('description', commit.get('message', '').split('\n')[0]) - hash_short = commit.get('hash', '')[:7] - md_lines.append(f"- {desc} ({hash_short})") - md_lines.append("") - - # Write markdown file - with open('release-notes.md', 'w') as f: - f.write('\n'.join(md_lines)) - - # Generate JSON format - json_data = { - "version": version, - "date": date, - "bump_type": analysis.get('recommended_bump', 'patch'), - "summary": { - "total_commits": total_commits, - "feature_count": feature_count, - "fix_count": fix_count, - "breaking_count": breaking_count - }, - "breaking_changes": analysis.get('breaking_changes', []), - "categories": {name: commits for name, commits in categorized_commits.items() if commits}, - "commits": analysis.get('commits', []) - } - - with open('release-notes.json', 'w') as f: - json.dump(json_data, f, indent=2) - - # Generate changelog entry - changelog_lines = [ - f"## [{version}] - {date}", - "", - ] - - if breaking_count > 0: - changelog_lines.extend([ - "### Breaking Changes", - "", - ]) - for change in analysis.get('breaking_changes', []): - changelog_lines.append(f"- {change}") - 
changelog_lines.append("") - - for category_name, commits in categorized_commits.items(): - if commits and category_name != 'šŸ”§ Other Changes': - # Convert emoji categories to standard changelog sections - section_name = category_name.split(' ', 1)[1] if ' ' in category_name else category_name - changelog_lines.extend([ - f"### {section_name}", - "", - ]) - for commit in commits: - desc = commit.get('description', commit.get('message', '').split('\n')[0]) - changelog_lines.append(f"- {desc}") - changelog_lines.append("") - - with open('changelog-entry.md', 'w') as f: - f.write('\n'.join(changelog_lines)) - - print("āœ… Release notes generated successfully") - EOF - - # Use analysis result from artifact - if [[ -f "workflow-artifacts/analysis-result.json" ]]; then - echo "Using analysis result from artifact" - cp workflow-artifacts/analysis-result.json analysis.json - else - echo "āŒ Error: Analysis result artifact not found in generate-notes job" - exit 1 - fi - - # Debug: Validate JSON - if ! jq . analysis.json > /dev/null 2>&1; then - echo "āŒ Error: Invalid JSON in ANALYSIS_RESULT" - echo "Content:" - cat analysis.json - exit 1 - fi - - # Generate release notes - python generate_release_notes.py - - # Set outputs using file-based approach for reliability - # This avoids delimiter issues entirely by saving to files - - # For markdown - use simple EOF since it's unlikely to appear - { - echo "release-notes-md<> $GITHUB_OUTPUT - - # For JSON - use single-line output to avoid delimiter issues - # This reads the entire JSON file into a single line - JSON_CONTENT=$(jq -c . release-notes.json) - echo "release-notes-json=${JSON_CONTENT}" >> $GITHUB_OUTPUT - - # Also save file reference and success indicator - echo "release-notes-json-file=release-notes.json" >> $GITHUB_OUTPUT - echo "release-notes-generated=true" >> $GITHUB_OUTPUT - - # For changelog - use simple EOF - { - echo "changelog-entry<> $GITHUB_OUTPUT - - - name: Upload release notes artifacts - uses: actions/upload-artifact@v4 - with: - name: release-notes-${{ needs.calculate-version.outputs.final-version }} - path: | - release-notes.md - release-notes.json - changelog-entry.md - - # Optional AI enhancement - uses Claude to improve release note quality - # Only runs when explicitly enabled and emergency controls allow it - ai-enhance-notes: - needs: - [ - check-template-changes, - analyze-commits, - calculate-version, - update-version, - generate-notes, - ] - # Conditional execution with multiple safety checks: - # - template-files-changed: Only enhance if template files changed - # - has-changes: Only enhance if there are actual changes to document - # - enable-ai-enhancement: Manual workflow input flag (cost control) - # - EMERGENCY_STOP: Respects emergency halt (safety override) - # - MAINTENANCE_MODE: Avoids AI operations during maintenance - if: | - needs.check-template-changes.outputs.template-files-changed == 'true' && - needs.analyze-commits.outputs.has-changes == 'true' && - github.event.inputs.enable-ai-enhancement == 'true' && - vars.EMERGENCY_STOP != 'true' && - vars.MAINTENANCE_MODE != 'true' - runs-on: ubuntu-latest - timeout-minutes: 10 # AI execution timeout (secrets cannot be used in timeout-minutes context) - steps: - - name: Checkout repository - uses: actions/checkout@v4 - - - name: Setup Python - uses: actions/setup-python@v5 - with: - python-version: '3.11' - - - name: Install OpenRouter dependencies - run: | - pip install openai==1.54.3 httpx==0.27.0 - - - name: Enhance release notes with AI - env: - # 
OPENROUTER_API_KEY: Repository secret containing OpenRouter API credentials - # Required for AI enhancement functionality - costs are controlled via timeouts and input gating - OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }} - AI_MODEL: ${{ vars.AI_MODEL || 'anthropic/claude-sonnet-4' }} - run: | - echo "šŸ¤– Enhancing release notes with AI..." - - # Prepare release notes for enhancement - echo '${{ needs.generate-notes.outputs.release-notes-md }}' > raw-notes.md - - # Use AI via OpenRouter to enhance the release notes - # Enhancement focuses on clarity and user understanding while preserving structure - PROMPT="Please enhance these release notes by improving descriptions " - PROMPT+="and adding helpful context while maintaining the same structure " - PROMPT+="and format. Focus on making the changes more understandable for users:" - - # Create prompt file with raw notes - echo "$PROMPT" > enhancement-prompt.md - echo "" >> enhancement-prompt.md - cat raw-notes.md >> enhancement-prompt.md - - echo "šŸ¤– Using $AI_MODEL via OpenRouter to enhance release notes..." - python ./scripts/openrouter-ai-helper.py \ - --prompt-file enhancement-prompt.md \ - --output-file enhanced-notes.md \ - --model "$AI_MODEL" \ - --title "Release Notes Enhancement" - - echo "āœ… Release notes enhanced successfully" - - # Upload enhanced notes as artifact - mkdir -p enhanced-release-notes - cp enhanced-notes.md enhanced-release-notes/ - - - name: Upload enhanced notes - uses: actions/upload-artifact@v4 - with: - name: enhanced-release-notes-${{ needs.calculate-version.outputs.final-version }} - path: enhanced-release-notes/ - - # Create GitHub release - create-release: - needs: - [ - check-template-changes, - calculate-version, - update-version, - generate-notes, - ai-enhance-notes, - ] - if: | - always() && - needs.check-template-changes.outputs.template-files-changed == 'true' && - needs.calculate-version.result == 'success' && - needs.update-version.result == 'success' && - needs.generate-notes.result == 'success' && - (needs.ai-enhance-notes.result == 'success' || needs.ai-enhance-notes.result == 'skipped') - runs-on: ubuntu-latest - timeout-minutes: 10 - permissions: - contents: write - steps: - - name: Checkout repository - uses: actions/checkout@v4 - - - name: Create git tag - run: | - set -e - VERSION_TAG="${{ needs.calculate-version.outputs.version-tag }}" - # Remove any stray quotes - VERSION_TAG=$(echo "$VERSION_TAG" | sed 's/"//g') - echo "šŸ·ļø Creating git tag: $VERSION_TAG" - - git config user.name "github-actions[bot]" - git config user.email "github-actions[bot]@users.noreply.github.com" - - # Check if tag already exists - if git rev-parse "$VERSION_TAG" >/dev/null 2>&1; then - echo "Tag $VERSION_TAG already exists" - exit 0 - fi - - git tag -a "$VERSION_TAG" -m "Release $VERSION_TAG" - git push origin "$VERSION_TAG" - - - name: Create GitHub release - env: - GH_TOKEN: ${{ github.token }} - run: | - set -e - VERSION_TAG="${{ needs.calculate-version.outputs.version-tag }}" - # Remove any stray quotes - VERSION_TAG=$(echo "$VERSION_TAG" | sed 's/"//g') - IS_PRERELEASE="${{ needs.calculate-version.outputs.is-prerelease }}" - - # Fetch tags to ensure we have the latest - git fetch --tags - - # Prepare release notes - echo '${{ needs.generate-notes.outputs.release-notes-md }}' > release-notes.md - - # Create release - use array to properly handle arguments - RELEASE_ARGS=(--title "Release $VERSION_TAG" --notes-file release-notes.md) - - if [[ "$IS_PRERELEASE" == "true" ]]; then - 
RELEASE_ARGS+=(--prerelease) - fi - - echo "šŸš€ Creating GitHub release: $VERSION_TAG" - gh release create "$VERSION_TAG" "${RELEASE_ARGS[@]}" - - # Publish to npm - publish-npm: - needs: [check-template-changes, calculate-version, update-version] - if: | - needs.check-template-changes.outputs.template-files-changed == 'true' && - needs.calculate-version.result == 'success' && - needs.update-version.result == 'success' && - ( - (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')) || - (github.event_name == 'workflow_dispatch' && github.event.inputs.publish-npm == 'true') || - ( - github.event_name == 'pull_request' && - github.event.pull_request.merged == true && - github.event.pull_request.base.ref == 'main' - ) - ) - runs-on: ubuntu-latest - timeout-minutes: 10 - permissions: - contents: read - id-token: write # For npm provenance - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 0 - - - name: Get latest changes - run: | - echo "šŸ“„ Fetching latest changes including version update..." - git fetch origin ${{ github.ref_name }} - git checkout origin/${{ github.ref_name }} - echo "āœ… Now on latest commit with version updates" - - - name: Setup Node.js - uses: actions/setup-node@v4 - with: - node-version: '18' - registry-url: 'https://registry.npmjs.org' - - - name: Verify version - run: | - echo "šŸ“¦ Preparing to publish version ${{ needs.calculate-version.outputs.final-version }}" - - # Verify package.json version matches - PACKAGE_VERSION=$(node -p "require('./package.json').version") - EXPECTED_VERSION="${{ needs.calculate-version.outputs.final-version }}" - - if [[ "$PACKAGE_VERSION" != "$EXPECTED_VERSION" ]]; then - echo "āŒ Version mismatch!" - echo " Package.json: $PACKAGE_VERSION" - echo " Expected: $EXPECTED_VERSION" - exit 1 - fi - - echo "āœ… Version verified: $PACKAGE_VERSION" - - - name: Check npm access - run: | - # Check if package exists and we have access - PACKAGE_NAME=$(node -p "require('./package.json').name") - - if npm view "$PACKAGE_NAME" version 2>/dev/null; then - echo "šŸ“¦ Package exists on npm: $PACKAGE_NAME" - LATEST_VERSION=$(npm view "$PACKAGE_NAME" version) - echo " Latest published version: $LATEST_VERSION" - else - echo "šŸ“¦ This will be the first publish of: $PACKAGE_NAME" - fi - - - name: Publish to npm - env: - NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }} - run: | - echo "šŸš€ Publishing to npm..." - - # Validate npm token is available - if [[ -z "$NODE_AUTH_TOKEN" ]]; then - echo "āŒ Error: NPM_TOKEN secret is not set" - echo "Please add NPM_TOKEN to your repository secrets" - exit 1 - fi - - # Publish with error handling and retry logic - RETRY_COUNT=0 - MAX_RETRIES=3 - - while [[ $RETRY_COUNT -lt $MAX_RETRIES ]]; do - if npm publish --access public; then - echo "āœ… Successfully published to npm!" - break - else - RETRY_COUNT=$((RETRY_COUNT + 1)) - if [[ $RETRY_COUNT -lt $MAX_RETRIES ]]; then - echo "āš ļø Publish failed, retrying in 10 seconds (attempt $RETRY_COUNT/$MAX_RETRIES)..." 
- sleep 10 - else - echo "āŒ Failed to publish after $MAX_RETRIES attempts" - echo "This may be due to:" - echo " - Invalid npm token" - echo " - Package name already exists with different owner" - echo " - Version already published" - echo " - Network connectivity issues" - exit 1 - fi - fi - done - - - name: Verify publication - run: | - # Wait a moment for npm to update - sleep 5 - - PACKAGE_NAME=$(node -p "require('./package.json').name") - PUBLISHED_VERSION=$(npm view "$PACKAGE_NAME" version) - EXPECTED_VERSION="${{ needs.calculate-version.outputs.final-version }}" - - echo "šŸ“¦ Published version: $PUBLISHED_VERSION" - - if [[ "$PUBLISHED_VERSION" == "$EXPECTED_VERSION" ]]; then - echo "āœ… Version $EXPECTED_VERSION successfully published to npm!" - else - echo "āš ļø Published version may not be updated yet" - echo " Expected: $EXPECTED_VERSION" - echo " Current on npm: $PUBLISHED_VERSION" - fi - - # Update changelog - update-changelog: - needs: - [ - check-template-changes, - calculate-version, - update-version, - generate-notes, - create-release, - ] - if: | - always() && - needs.check-template-changes.outputs.template-files-changed == 'true' && - needs.create-release.result == 'success' && - github.event_name == 'workflow_dispatch' - runs-on: ubuntu-latest - timeout-minutes: 5 - permissions: - contents: write - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - token: ${{ github.token }} - - - name: Update CHANGELOG.md - run: | - echo "šŸ“ Updating CHANGELOG.md" - - # Create CHANGELOG.md if it doesn't exist - if [[ ! -f CHANGELOG.md ]]; then - cat > CHANGELOG.md << 'EOF' - # Changelog - - All notable changes to this project will be documented in this file. - - The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 
- - EOF - fi - - # Prepare new changelog entry - echo '${{ needs.generate-notes.outputs.changelog-entry }}' > new-entry.md - - # Insert new entry after the header - { - head -n 6 CHANGELOG.md - echo "" - cat new-entry.md - echo "" - tail -n +7 CHANGELOG.md - } > CHANGELOG.new.md - - mv CHANGELOG.new.md CHANGELOG.md - - # Commit changes - git config user.name "github-actions[bot]" - git config user.email "github-actions[bot]@users.noreply.github.com" - - if git diff --quiet CHANGELOG.md; then - echo "No changes to CHANGELOG.md" - else - git add CHANGELOG.md - git commit -m "Update CHANGELOG.md for ${{ needs.calculate-version.outputs.version-tag }}" - git push - fi - - # Final status report - report-status: - needs: - [ - emergency-check, - check-template-changes, - analyze-commits, - calculate-version, - update-version, - generate-notes, - create-release, - publish-npm, - ] - if: always() - runs-on: ubuntu-latest - timeout-minutes: 5 - steps: - - name: Report final status - run: | - echo "šŸŽÆ Release Notes Generation Summary" - echo "==================================" - - if [[ "${{ needs.emergency-check.outputs.can-proceed }}" != "true" ]]; then - echo "āŒ Blocked by emergency controls: ${{ needs.emergency-check.outputs.emergency-reason }}" - exit 0 - fi - - if [[ "${{ needs.check-template-changes.outputs.template-files-changed }}" != "true" ]]; then - echo "ā„¹ļø No template files changed - version bump and release skipped" - echo "šŸ“‹ Changed files (non-template): ${{ needs.check-template-changes.outputs.all-changed-files }}" - exit 0 - fi - - if [[ "${{ needs.analyze-commits.outputs.has-changes }}" != "true" ]]; then - echo "ā„¹ļø No commits found since last release - no release notes generated" - exit 0 - fi - - echo "šŸ“Š Analysis Results:" - echo " - Current version: ${{ needs.analyze-commits.outputs.current-version }}" - echo " - Next version: ${{ needs.calculate-version.outputs.final-version }}" - echo " - Bump type: ${{ needs.analyze-commits.outputs.bump-type }}" - echo " - Pre-release: ${{ needs.calculate-version.outputs.is-prerelease }}" - - echo "" - echo "šŸŽÆ Workflow Results:" - echo " - Emergency check: ${{ needs.emergency-check.result }}" - echo " - Template file check: ${{ needs.check-template-changes.result }}" - echo " - Commit analysis: ${{ needs.analyze-commits.result }}" - echo " - Version calculation: ${{ needs.calculate-version.result }}" - echo " - Version update: ${{ needs.update-version.result }}" - echo " - Notes generation: ${{ needs.generate-notes.result }}" - echo " - Release creation: ${{ needs.create-release.result }}" - echo " - NPM publishing: ${{ needs.publish-npm.result }}" - - echo "" - echo "šŸŽÆ Results Summary:" - if [[ "${{ needs.publish-npm.result }}" == "success" ]]; then - echo "āœ… NPM package published successfully!" - echo "šŸ“¦ Version ${{ needs.calculate-version.outputs.final-version }} is now available on npm" - elif [[ "${{ needs.publish-npm.result }}" == "skipped" ]]; then - echo "šŸ“¦ NPM publishing skipped (not triggered)" - else - echo "āŒ NPM publishing failed" - fi - - if [[ "${{ needs.create-release.result }}" == "success" ]]; then - echo "āœ… GitHub release ${{ needs.calculate-version.outputs.version-tag }} created successfully!" 
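          # Keep the release URL below in a single echo argument; splitting it across
          # two quoted strings (echo ".../releases/tag/" "$TAG") would print a stray
          # space inside the link. Illustrative shape of the line that follows:
          #   echo "šŸ”— View release: <server>/<owner>/<repo>/releases/tag/<tag>"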
- echo "šŸ”— View release: ${{ github.server_url }}/${{ github.repository }}/releases/tag/" \ - "${{ needs.calculate-version.outputs.version-tag }}" - elif [[ "${{ needs.create-release.result }}" == "skipped" ]]; then - echo "šŸ“‹ GitHub release skipped" - else - echo "āŒ GitHub release creation failed" - fi - - - name: Collect workflow metrics - if: always() - run: | - echo "šŸ“Š Collecting workflow performance metrics..." - - # Calculate workflow duration - WORKFLOW_START=$(date -u +"%Y-%m-%dT%H:%M:%SZ") - WORKFLOW_END=$(date -u +"%Y-%m-%dT%H:%M:%SZ") - - # Job statuses - EMERGENCY_CHECK_STATUS="${{ needs.emergency-check.result }}" - TEMPLATE_CHECK_STATUS="${{ needs.check-template-changes.result }}" - ANALYZE_COMMITS_STATUS="${{ needs.analyze-commits.result }}" - CALCULATE_VERSION_STATUS="${{ needs.calculate-version.result }}" - UPDATE_VERSION_STATUS="${{ needs.update-version.result }}" - GENERATE_NOTES_STATUS="${{ needs.generate-notes.result }}" - AI_ENHANCE_STATUS="${{ needs.ai-enhance-notes.result }}" - CREATE_RELEASE_STATUS="${{ needs.create-release.result }}" - - # Generate metrics report - cat > workflow-metrics.json << EOF - { - "workflow_run_id": "${{ github.run_id }}", - "workflow_run_number": "${{ github.run_number }}", - "repository": "${{ github.repository }}", - "ref": "${{ github.ref }}", - "sha": "${{ github.sha }}", - "actor": "${{ github.actor }}", - "event_name": "${{ github.event_name }}", - "started_at": "${WORKFLOW_START}", - "completed_at": "${WORKFLOW_END}", - "job_statuses": { - "emergency_check": "${EMERGENCY_CHECK_STATUS}", - "template_check": "${TEMPLATE_CHECK_STATUS}", - "analyze_commits": "${ANALYZE_COMMITS_STATUS}", - "calculate_version": "${CALCULATE_VERSION_STATUS}", - "update_version": "${UPDATE_VERSION_STATUS}", - "generate_notes": "${GENERATE_NOTES_STATUS}", - "ai_enhance": "${AI_ENHANCE_STATUS}", - "create_release": "${CREATE_RELEASE_STATUS}" - }, - "version_info": { - "current_version": "${{ needs.analyze-commits.outputs.current-version }}", - "next_version": "${{ needs.calculate-version.outputs.final-version }}", - "bump_type": "${{ needs.analyze-commits.outputs.bump-type }}", - "is_prerelease": "${{ needs.calculate-version.outputs.is-prerelease }}", - "has_changes": "${{ needs.analyze-commits.outputs.has-changes }}", - "template_files_changed": "${{ needs.check-template-changes.outputs.template-files-changed }}", - "changed_template_files": "${{ needs.check-template-changes.outputs.changed-files }}" - }, - "success": $([ "${CREATE_RELEASE_STATUS}" == "success" ] && echo "true" || echo "false") - } - EOF - - echo "šŸ“ˆ Workflow metrics collected" - cat workflow-metrics.json - - # Set output for downstream tools - echo "workflow_status=$([ \"${CREATE_RELEASE_STATUS}\" == \"success\" ] && " \ - "echo \"success\" || echo \"failure\")" >> $GITHUB_ENV - - - name: Upload metrics artifact - if: always() - uses: actions/upload-artifact@v4 - with: - name: workflow-metrics-${{ github.run_id }} - path: workflow-metrics.json - retention-days: 30 diff --git a/.markdownlint.yml b/.markdownlint.yml new file mode 100644 index 0000000..92b755a --- /dev/null +++ b/.markdownlint.yml @@ -0,0 +1,23 @@ +# Markdownlint configuration +default: true + +# Exclude virtual environment directories +exclude: + - venv/ + - .venv/ + +# Allow HTML +MD033: false + +# Allow inline HTML +MD010: false + +# Line length +MD013: + line_length: 100 + code_blocks: false + tables: false + +# Allow duplicate headers +MD024: + siblings_only: true diff --git a/.yamllint.yml b/.yamllint.yml new 
file mode 100644 index 0000000..2941243 --- /dev/null +++ b/.yamllint.yml @@ -0,0 +1,12 @@ +extends: default + +rules: + line-length: + max: 120 + level: warning + comments: + min-spaces-from-content: 1 + document-start: disable + truthy: + allowed-values: ['true', 'false'] + check-keys: false diff --git a/CLAUDE.md b/CLAUDE.md index f6039d9..fb4aebf 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -457,6 +457,31 @@ functionality across different AI assistants (Claude, Gemini, etc.). ### Available Shared Commands +#### roadmap [OPTIONS] + +Displays the latest project roadmap or generates a new one from a template. + +Usage: + +```bash +# Display the latest roadmap +./shared-commands/commands/roadmap.sh + +# Generate a new roadmap with custom title +./shared-commands/commands/roadmap.sh --title "Q4 2024 Development Roadmap" + +# Generate a new roadmap from input +./shared-commands/commands/roadmap.sh --generate --input "Feature A, Bug Fix B, Refactor C" +``` + +This command: + +1. Finds and displays the most recent roadmap document +2. Generates new roadmaps from templates with title-based naming +3. Supports custom titles and automated filename generation +4. Includes collision avoidance for duplicate names +5. Creates comprehensive roadmap structure from templates + #### create-user-story --title "TITLE" [OPTIONS] Creates a GitHub issue and comprehensive user story document in a unified workflow. diff --git a/README.md b/README.md index 067420a..9281d7e 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # Smart Workplace Dashboard Workshop -A streamlined workshop teaching Document Driven Development (DDD) through building a Smart Workplace Dashboard. Participants create three focused widgets that help HR teams and office managers monitor workplace conditions and optimize employee experience using API Ninjas and mock data. +A streamlined workshop teaching Document Driven Development (DDD) through building a Smart Workplace Dashboard. Participants create three focused widgets that help HR teams and office managers monitor workplace conditions and optimize employee experience using api.stillriver.info and mock data. ## Workshop Overview @@ -8,7 +8,7 @@ A streamlined workshop teaching Document Driven Development (DDD) through buildi By the end of this 2-hour workshop, participants will: - Write effective DDD specifications for dashboard components - Generate React components using AI agents -- Integrate multiple API Ninjas endpoints +- Integrate multiple API endpoints via api.stillriver.info - Handle asynchronous data and error states - Measure productivity improvements with AI-driven development @@ -20,15 +20,14 @@ A focused dashboard with three core widgets: ### Technical Requirements - Node.js and npm installed -- API Ninjas free account +- Internet access (uses Stillriver API proxy) - Code editor (VS Code) - Chrome browser - Access to Claude -## šŸš€ Workshop Setup - -### Prerequisites Setup +## šŸš€ Quick Start for Workshop Participants +### Step 1: Get the Code ```bash # Clone the workshop repository git clone https://github.com/stillrivercode/smart-workplace.git @@ -36,21 +35,20 @@ cd smart-workplace # Run the installation script ./install.sh +``` -# Copy environment templates -cp .env.example .env -cp app/.env.example app/.env +### Step 2: Start Development +```bash +# Start the development server +cd app +npm install +npm start ``` -### API Ninjas Setup +Your workshop environment is now ready! The app will open at `http://localhost:3000`. -1. 
Create a free account at [API Ninjas](https://api.api-ninjas.com/) -2. Get your API key from the dashboard -3. Add it to your React app's `.env` file: - ```bash - # In app/.env - REACT_APP_API_NINJAS_KEY=your-api-key-here - ``` +### API Setup - Already Done! āœ… +This workshop uses **Stillriver API** (api.stillriver.info) as a proxy for API Ninjas endpoints. **No API keys or registration required** - everything works out of the box for workshop participants. ## šŸŽÆ Workshop Structure @@ -78,7 +76,7 @@ cp app/.env.example app/.env 1. **Specification Writing**: Create detailed component specifications 2. **AI Implementation**: Use Claude to generate React components -3. **API Integration**: Connect to API Ninjas endpoints +3. **API Integration**: Connect to Stillriver API proxy endpoints 4. **Testing & Refinement**: Validate functionality and user experience 5. **Performance Review**: Measure AI-assisted development efficiency @@ -112,9 +110,9 @@ The project includes pre-configured linting and formatting tools: ## šŸ“š Resources -### API Documentation -- [API Ninjas Weather](https://api.api-ninjas.com/v1/weather) -- [API Ninjas World Time](https://api.api-ninjas.com/v1/worldtime) +### API Information +- **Stillriver API**: api.stillriver.info - Workshop proxy for API Ninjas endpoints (no auth required) +- **Available Endpoints**: `/v1/weather`, `/v1/timezone`, `/v1/airquality` - [React Documentation](https://react.dev/) ### Workshop Materials @@ -133,7 +131,7 @@ The project includes pre-configured linting and formatting tools: | Issue | Solution | |-------|----------| -| API key not working | Verify key at [API Ninjas](https://api.api-ninjas.com/) | +| API connection issues | Verify internet connection to api.stillriver.info | | CORS errors | Use development proxy in package.json | | Component not rendering | Check browser console for errors | diff --git a/app/package.json b/app/package.json index 6b06411..8336f5d 100644 --- a/app/package.json +++ b/app/package.json @@ -4,6 +4,7 @@ "version": "0.0.0", "type": "module", "scripts": { + "start": "vite", "dev": "vite", "build": "tsc -b && vite build", "lint": "eslint .", diff --git a/cli/configuration-manager.js b/cli/configuration-manager.js deleted file mode 100644 index 74b9c0f..0000000 --- a/cli/configuration-manager.js +++ /dev/null @@ -1,169 +0,0 @@ -const chalk = require('chalk'); -const inquirer = require('inquirer'); -const { validateGitHubOrgIfProvided } = require('./project-validation'); - -/** - * Builds configuration from CLI options for non-interactive mode - * @param {Object} config - Base configuration object - * @param {Object} options - CLI options - * @returns {void} - */ -function buildNonInteractiveConfig(config, options) { - console.log( - chalk.gray('Using non-interactive mode with provided options...') - ); - - // Set defaults from CLI options or use sensible defaults - config.template = options.template || 'default'; - config.githubOrg = options.githubOrg || 'your-org'; - config.repositoryName = options.repoName || config.projectName; - config.description = - options.description || 'AI-powered workflow automation project'; - - // Parse features from comma-separated string or use defaults - if (options.features) { - config.features = options.features.split(',').map((f) => f.trim()); - } else { - config.features = [ - 'ai-tasks', - 'ai-pr-review', - 'cost-monitoring', - 'security', - ]; - } - - console.log(chalk.gray(` Template: ${config.template}`)); - console.log(chalk.gray(` GitHub Org: ${config.githubOrg}`)); - 
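    // Illustrative values (the 'acme' org and the 'ai-tasks, security' string are
    // assumed examples, not defaults of this template): given
    //   { githubOrg: 'acme', features: 'ai-tasks, security' }
    // the feature string above is split and trimmed to ['ai-tasks', 'security'],
    // while omitting options.features keeps the four defaults listed earlier.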
console.log(chalk.gray(` Repository: ${config.repositoryName}`)); - console.log(chalk.gray(` Features: ${config.features.join(', ')}`)); -} - -/** - * Creates questions for interactive configuration - * @param {Object} config - Base configuration object - * @param {Object} options - CLI options - * @returns {Array} Array of inquirer questions - */ -function createConfigurationQuestions(config, options) { - const questions = [ - { - type: 'input', - name: 'githubOrg', - message: 'GitHub organization/username:', - default: options.githubOrg, - validate: (input) => - input.trim().length > 0 || 'Organization is required', - filter: (input) => input.trim(), - when: () => !options.githubOrg, - }, - { - type: 'input', - name: 'repositoryName', - message: 'Repository name:', - default: options.repoName || config.projectName, - validate: (input) => - input.trim().length > 0 || 'Repository name is required', - filter: (input) => input.trim(), - when: () => !options.repoName, - }, - { - type: 'input', - name: 'description', - message: 'Project description:', - default: options.description || 'AI-powered workflow automation project', - when: () => !options.description, - }, - { - type: 'checkbox', - name: 'features', - message: 'Select features to enable:', - choices: [ - { name: 'AI Task Automation', value: 'ai-tasks', checked: true }, - { name: 'AI PR Review', value: 'ai-pr-review', checked: true }, - { name: 'Cost Monitoring', value: 'cost-monitoring', checked: true }, - { name: 'Security Scanning', value: 'security', checked: true }, - ], - when: () => !options.features, - }, - ]; - - // Add template question if not specified - if (!options.template || options.template === 'default') { - questions.unshift({ - type: 'list', - name: 'template', - message: 'Choose project template:', - choices: [ - { name: 'Default (Recommended)', value: 'default' }, - { name: 'Minimal (Basic workflows only)', value: 'minimal' }, - { name: 'Enterprise (Advanced features)', value: 'enterprise' }, - ], - default: 'default', - }); - } - - return questions; -} - -/** - * Merges CLI options with interactive answers - * @param {Object} config - Base configuration object - * @param {Object} options - CLI options - * @param {Object} answers - Inquirer answers - * @returns {void} - */ -function mergeConfigurationAnswers(config, options, answers) { - Object.assign(config, { - template: options.template || answers.template || 'default', - githubOrg: options.githubOrg || answers.githubOrg, - repositoryName: options.repoName || answers.repositoryName, - description: options.description || answers.description, - features: options.features - ? 
options.features.split(',').map((f) => f.trim()) - : answers.features, - }); -} - -/** - * Determines if non-interactive mode should be used - * @param {Object} options - CLI options - * @returns {boolean} - */ -function shouldUseNonInteractiveMode(options) { - return options.nonInteractive || (options.githubOrg && options.repoName); -} - -/** - * Main configuration collection function - * @param {Object} config - Base configuration object - * @param {Object} options - CLI options - * @returns {Promise} - */ -async function collectConfiguration(config, options) { - console.log(chalk.blue('\nšŸ“‹ Project Configuration')); - - // Use non-interactive mode if appropriate - if (shouldUseNonInteractiveMode(options)) { - buildNonInteractiveConfig(config, options); - return; - } - - // Interactive mode - const questions = createConfigurationQuestions(config, options); - const answers = await inquirer.prompt(questions); - - mergeConfigurationAnswers(config, options, answers); - - // Validate GitHub organization for interactive flow - if (config.githubOrg && !options.githubOrg) { - validateGitHubOrgIfProvided(config.githubOrg); - } -} - -module.exports = { - collectConfiguration, - buildNonInteractiveConfig, - createConfigurationQuestions, - mergeConfigurationAnswers, - shouldUseNonInteractiveMode, -}; diff --git a/cli/create-project.js b/cli/create-project.js deleted file mode 100644 index 0c53386..0000000 --- a/cli/create-project.js +++ /dev/null @@ -1,350 +0,0 @@ -const fs = require('fs-extra'); -const path = require('path'); -const chalk = require('chalk'); -const { spawn } = require('child_process'); -const { - validateAndGetProjectName, - handleDirectoryConflict, - validateTemplate, - validateNonInteractiveOptions, - validateGitHubOrgIfProvided, -} = require('./project-validation'); -const { collectConfiguration } = require('./configuration-manager'); -// Removed setupSecrets - replaced with sample env creation -const { - copyTemplateFiles, - getDistributionStats, -} = require('./file-distribution'); - -// Sanitize path to prevent directory traversal -function sanitizePath(userPath) { - // Remove any path traversal attempts - const normalized = path.normalize(userPath).replace(/^(\.\.(\/|\\|$))+/, ''); - // Ensure the path doesn't contain any remaining traversal patterns - if (normalized.includes('..')) { - throw new Error('Invalid path: Path traversal detected'); - } - return normalized; -} - -// Find package root directory (works for both local dev and npx) -function findPackageRoot(startDir) { - let currentDir = startDir; - while (currentDir !== path.dirname(currentDir)) { - const packageJsonPath = path.join(currentDir, 'package.json'); - if (fs.existsSync(packageJsonPath)) { - try { - const pkg = fs.readJsonSync(packageJsonPath); - // Verify this is the correct package - if (pkg.name === '@stillrivercode/agentic-workflow-template') { - // Validate essential template files exist - const essentialFiles = ['cli/', 'scripts/', 'package.json']; - const missingFiles = essentialFiles.filter( - (file) => - !fs.existsSync(path.join(currentDir, file)) - ); - - if (missingFiles.length === 0) { - return currentDir; - } - // Continue searching if essential files are missing - } - } catch { - // Continue searching if can't read package.json - } - } - currentDir = path.dirname(currentDir); - } - - // Fallback: try to resolve via require (works in npm global installs) - try { - const packageJsonPath = require.resolve( - '@stillrivercode/agentic-workflow-template/package.json' - ); - const packageDir = 
path.dirname(packageJsonPath); - - // Validate essential template files exist in resolved package - const essentialFiles = ['cli/', 'scripts/', 'package.json']; - const missingFiles = essentialFiles.filter( - (file) => - !fs.existsSync(path.join(packageDir, file)) - ); - - if (missingFiles.length === 0) { - return packageDir; - } - } catch { - // Fall through to final fallback - } - - // If all else fails, return the parent directory (original behavior) - return path.join(startDir, '..'); -} - -/** - * Creates a new project with AI-powered workflow automation - * @param {string} projectName - Name of the project to create - * @param {Object} options - Configuration options - * @param {boolean} options.force - Force overwrite existing directory - * @param {boolean} options.nonInteractive - Run without prompts - * @param {string} options.githubOrg - GitHub organization - * @param {string} options.repoName - Repository name - * @param {string} options.description - Project description - * @param {string} options.template - Template type - * @param {string} options.features - Comma-separated features - * @param {boolean} options.gitInit - Initialize git repository - * @returns {Promise} - */ -async function createProject(projectName, options = {}) { - // Validate inputs - validateTemplate(options.template); - validateNonInteractiveOptions(options); - validateGitHubOrgIfProvided(options.githubOrg); - - // Get and validate project name - const validatedProjectName = await validateAndGetProjectName( - projectName, - options - ); - - const config = { - projectName: validatedProjectName, - projectPath: path.resolve( - process.cwd(), - sanitizePath(validatedProjectName) - ), - }; - - // Handle directory conflicts - await handleDirectoryConflict( - config.projectPath, - validatedProjectName, - options - ); - - // Collect configuration - await collectConfiguration(config, options); - - // Create project structure - await createProjectStructure(config, options); - - // Secrets setup removed - users configure manually - - // Initialize git if requested - if (options.gitInit) { - await initializeGit(config); - } - - console.log(chalk.green(`\nšŸ“ Project created at: ${config.projectPath}`)); -} - -// Configuration collection is now handled by configuration-manager.js - -async function createProjectStructure(config, _options) { - console.log(chalk.blue('\nšŸ—ļø Creating project structure...')); - - // Find the package root directory (works for both local dev and npx) - const templateDir = findPackageRoot(__dirname); - if (!templateDir) { - throw new Error( - 'Unable to locate template files. Please ensure the package is installed correctly.' 
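      // Note: findPackageRoot() ultimately falls back to the parent directory, so
      // this branch is mainly defensive. When template files cannot be located at
      // copy time, re-running with DEBUG set (see the process.env.DEBUG check
      // further down) prints the template and target directories that were used,
      // e.g. an assumed invocation:
      //   DEBUG=1 npx @stillrivercode/agentic-workflow-template my-project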
- ); - } - - const targetDir = config.projectPath; - - // Create target directory - await fs.ensureDir(targetDir); - - // Show distribution statistics - const stats = getDistributionStats(config.template); - console.log( - chalk.gray( - ` šŸ“Š Distributing ${stats.totalFiles} files for ${stats.templateType} template` - ) - ); - - // Debug logging for troubleshooting npx issues - if (process.env.DEBUG) { - console.log(chalk.gray(` šŸ“ Template directory: ${templateDir}`)); - console.log(chalk.gray(` šŸ“ Target directory: ${targetDir}`)); - } - - // Copy template files using centralized configuration - const result = await copyTemplateFiles( - templateDir, - targetDir, - config.template - ); - - if (!result.success) { - console.error(chalk.red('\nāŒ Failed to copy template files')); - console.error(chalk.red(`Template directory: ${templateDir}`)); - console.error(chalk.red(`Errors: ${result.errors.join(', ')}`)); - throw new Error( - `Failed to create project structure: ${result.errors.join(', ')}` - ); - } - - // Log copy results - if (result.copied.length > 0) { - console.log( - chalk.green(` āœ… Copied ${result.copied.length} files successfully`) - ); - } - - if (result.skipped.length > 0) { - console.log( - chalk.yellow(` āš ļø Skipped ${result.skipped.length} missing files`) - ); - } - - if (result.errors.length > 0) { - console.log(chalk.red(` āŒ Errors: ${result.errors.join(', ')}`)); - } - - // Generate configuration files - await generateConfigFiles(config); - - console.log(chalk.green(' āœ… Project structure created')); -} - -async function generateConfigFiles(config) { - const configDir = path.join(sanitizePath(config.projectPath), '.github'); - await fs.ensureDir(configDir); - - // Generate repository configuration - const repoConfig = { - name: config.repositoryName, - description: config.description, - owner: config.githubOrg, - features: config.features, - template: config.template, - created: new Date().toISOString(), - }; - - await fs.writeJson(path.join(configDir, 'repo-config.json'), repoConfig, { - spaces: 2, - }); - - // Update README with project-specific information (only if it was copied) - await updateReadme(config); -} - -async function updateReadme(config) { - const readmePath = path.join(sanitizePath(config.projectPath), 'README.md'); - - // Check if README.md exists before trying to read it - if (!(await fs.pathExists(readmePath))) { - console.log( - chalk.yellow( - 'āš ļø README.md template not found. Creating default README.md...' - ) - ); - - // Create a basic README.md as fallback - const defaultReadme = generateDefaultReadme(config); - await fs.writeFile(readmePath, defaultReadme); - console.log(chalk.green(' āœ… Created default README.md')); - return; - } - - let readme = await fs.readFile(readmePath, 'utf8'); - - // Replace template placeholders - readme = readme - .replace(/agentic-workflow-template/g, config.repositoryName) - .replace(/YOUR_ORG/g, config.githubOrg) - .replace(/YOUR_REPO/g, config.repositoryName) - .replace(/{{PROJECT_DESCRIPTION}}/g, config.description) - .replace(/{{PROJECT_NAME}}/g, config.repositoryName); - - await fs.writeFile(readmePath, readme); - console.log(chalk.green(' āœ… Updated README.md with project details')); -} - -function generateDefaultReadme(config) { - // Validate and normalize config properties - const projectName = config.repositoryName || 'New Project'; - const description = - config.description || 'AI-powered workflow automation project'; - const features = Array.isArray(config.features) ? 
config.features : []; - - return `# ${projectName} - -${description} - -## šŸš€ Quick Start - -This project uses AI-powered GitHub workflow automation. - -### Setup - -1. Configure your GitHub secrets: - \`\`\`bash - gh secret set OPENROUTER_API_KEY - \`\`\` - -2. Create your first AI task: - \`\`\`bash - gh issue create --title "Your task description" --label "ai-task" - \`\`\` - -3. Watch as AI automatically implements your requirements! - -## šŸ”§ Available Features - -${features.includes('ai-tasks') ? '- āœ… AI Task Automation' : ''} -${features.includes('ai-pr-review') ? '- āœ… AI PR Review' : ''} -${features.includes('cost-monitoring') ? '- āœ… Cost Monitoring' : ''} -${features.includes('security') ? '- āœ… Security Scanning' : ''} - -## šŸ·ļø Available Labels - -- \`ai-task\` - General AI development tasks -- \`ai-bug-fix\` - AI-assisted bug fixes -- \`ai-refactor\` - Code refactoring requests -- \`ai-test\` - Test generation -- \`ai-docs\` - Documentation updates - -## šŸ“š Documentation - -- [Getting Started Guide](docs/simplified-architecture.md) -- [AI Workflow Guide](docs/ai-workflows.md) -- [Security Guidelines](docs/security.md) - -## šŸ†˜ Support - -Need help? Create an issue with the \`help\` label. - ---- - -*Generated by [Agentic Workflow Template](https://github.com/stillrivercode/agentic-workflow-template)* -`; -} - -async function initializeGit(config) { - console.log(chalk.blue('\nšŸ”§ Initializing git repository...')); - - return new Promise((resolve, reject) => { - const git = spawn('git', ['init'], { - cwd: config.projectPath, - stdio: 'inherit', - }); - - git.on('close', (code) => { - if (code === 0) { - console.log(chalk.green(' āœ… Git repository initialized')); - resolve(); - } else { - reject(new Error(`Git init failed with code ${code}`)); - } - }); - - git.on('error', (error) => { - reject(new Error(`Failed to initialize git: ${error.message}`)); - }); - }); -} - -module.exports = { createProject, findPackageRoot }; diff --git a/cli/file-distribution-config.json b/cli/file-distribution-config.json deleted file mode 100644 index 6168287..0000000 --- a/cli/file-distribution-config.json +++ /dev/null @@ -1,55 +0,0 @@ -{ - "distributedFiles": { - "base": [ - "scripts/", - ".github/", - "README.md", - "LICENSE", - ".gitignore", - ".gitattributes", - "install.sh", - "shared-commands/", - "package.json" - ], - "codeQuality": [ - ".pre-commit-config.yaml", - ".eslintrc.js", - ".yamllint.yaml", - ".secrets.baseline" - ], - "documentation": ["docs/"], - "enterprise": ["monitoring/", "advanced-scripts/"] - }, - "packageJsonFiles": [ - "cli/", - "scripts/", - ".github/", - "docs/", - "README.md", - "LICENSE", - "install.sh", - "shared-commands/", - "package.json", - ".pre-commit-config.yaml", - ".eslintrc.js", - ".yamllint.yaml", - ".secrets.baseline" - ], - "exclusionFilters": [ - "dev-docs/", - "tests/", - "venv/", - ".pytest_cache/", - "node_modules/", - ".git/" - ], - "validation": { - "requiredFiles": [ - ".pre-commit-config.yaml", - "package.json", - "README.md", - "LICENSE" - ], - "optionalFiles": [".eslintrc.js", ".yamllint.yaml", ".secrets.baseline"] - } -} diff --git a/cli/file-distribution.js b/cli/file-distribution.js deleted file mode 100644 index c83be64..0000000 --- a/cli/file-distribution.js +++ /dev/null @@ -1,270 +0,0 @@ -const fs = require('fs-extra'); -const path = require('path'); -const chalk = require('chalk'); - -/** - * Validate and sanitize file path to prevent path traversal attacks - * @param {string} filePath - The file path to validate - 
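 * @example
 * // Illustrative: a safe relative path is normalized
 * sanitizePath('docs//readme.md'); // => 'docs/readme.md'
 * @example
 * // Illustrative: traversal and absolute paths are rejected
 * sanitizePath('../secret'); // throws Error('Invalid file path: ../secret')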
* @returns {string} Sanitized file path - * @throws {Error} If path contains traversal attempts - */ -function sanitizePath(filePath) { - // Remove any leading/trailing whitespace and null bytes - const cleanPath = filePath.replace(/[\0\r\n]/g, '').trim(); - - // Check for path traversal attempts - if ( - cleanPath.includes('..') || - path.isAbsolute(cleanPath) || - cleanPath.includes('\0') || - cleanPath.startsWith('/') || - cleanPath.startsWith('\\') - ) { - throw new Error(`Invalid file path: ${filePath}`); - } - - // Validate path components don't contain dangerous characters - const pathComponents = cleanPath.split(path.sep); - for (const component of pathComponents) { - if (component === '.' || component === '..') { - throw new Error(`Invalid path component: ${component} in ${filePath}`); - } - // Skip empty components (can happen with trailing slashes) - if (component === '') { - continue; - } - } - - // Return the normalized path (safe since we've validated no traversal) - return path.normalize(cleanPath); -} - -// Load file list directly from package.json to ensure consistency -const packageJson = require('../package.json'); - -/** - * Get files to distribute based on template type - * Uses package.json files array as the single source of truth - * @param {string} templateType - The template type (default, minimal, enterprise) - * @returns {string[]} Array of file paths to copy - */ -function getFilesToDistribute(templateType = 'default') { - let filesToCopy = [...packageJson.files]; - - // For minimal template, exclude docs - if (templateType === 'minimal') { - filesToCopy = filesToCopy.filter((file) => !file.includes('docs/')); - } - - // For enterprise template, add enterprise-specific files if they exist - if (templateType === 'enterprise') { - const enterpriseFiles = ['monitoring/', 'advanced-scripts/']; - filesToCopy.push(...enterpriseFiles); - } - - return filesToCopy; -} - -/** - * Get package.json files list (used for npm packaging) - * @returns {string[]} Array of files to include in npm package - */ -function getPackageFiles() { - return [...packageJson.files]; -} - -/** - * Get exclusion filters for file copying - * @returns {string[]} Array of patterns to exclude - */ -function getExclusionFilters() { - return [ - 'dev-docs/', - 'tests/', - 'venv/', - '.pytest_cache/', - 'node_modules/', - '.git/', - ]; -} - -/** - * Validate that required files exist in the template - * @param {string} templateDir - Path to template directory - * @returns {Promise<{valid: boolean, missing: string[], warnings: string[]}>} - */ -async function validateTemplateFiles(templateDir) { - const result = { - valid: true, - missing: [], - warnings: [], - }; - - // Check required files from package.json - const requiredFiles = [ - '.pre-commit-config.yaml', - 'package.json', - 'README.md', - 'LICENSE', - ]; - - for (const file of requiredFiles) { - try { - const sanitizedFile = sanitizePath(file); - // nosemgrep: javascript.lang.security.audit.path-traversal.path-join-resolve-traversal.path-join-resolve-traversal - const filePath = path.join(templateDir, sanitizedFile); - if (!(await fs.pathExists(filePath))) { - result.valid = false; - result.missing.push(file); - } - } catch (error) { - result.valid = false; - result.missing.push(file); - console.error(chalk.red(`Invalid file path: ${file} - ${error.message}`)); - } - } - - // Check optional files (warnings only) - const optionalFiles = ['.eslintrc.js', '.yamllint.yaml', '.secrets.baseline']; - - for (const file of optionalFiles) { - try { - 
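      // sanitizePath() throws for absolute paths or any '..' component, so a
      // malformed optional entry (e.g. an assumed '../outside.yaml') falls through
      // to the catch below and is only recorded as a warning rather than being
      // resolved outside templateDir.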
const sanitizedFile = sanitizePath(file); - // nosemgrep: javascript.lang.security.audit.path-traversal.path-join-resolve-traversal.path-join-resolve-traversal - const filePath = path.join(templateDir, sanitizedFile); - if (!(await fs.pathExists(filePath))) { - result.warnings.push(file); - } - } catch (error) { - result.warnings.push(file); - console.error( - chalk.yellow(`Invalid optional file path: ${file} - ${error.message}`) - ); - } - } - - return result; -} - -/** - * Copy files with validation and proper error handling - * @param {string} templateDir - Source template directory - * @param {string} targetDir - Target project directory - * @param {string} templateType - Template type - * @returns {Promise<{success: boolean, copied: string[], skipped: string[], errors: string[]}>} - */ -async function copyTemplateFiles( - templateDir, - targetDir, - templateType = 'default' -) { - const result = { - success: true, - copied: [], - skipped: [], - errors: [], - }; - - // Validate template files first - const validation = await validateTemplateFiles(templateDir); - if (!validation.valid) { - result.success = false; - result.errors.push( - `Missing required files: ${validation.missing.join(', ')}` - ); - return result; - } - - // Log warnings for missing optional files - if (validation.warnings.length > 0) { - console.log( - chalk.yellow( - ` āš ļø Optional files not found: ${validation.warnings.join(', ')}` - ) - ); - } - - const filesToCopy = getFilesToDistribute(templateType); - const exclusionFilters = getExclusionFilters(); - - for (const file of filesToCopy) { - try { - const sanitizedFile = sanitizePath(file); - // nosemgrep: javascript.lang.security.audit.path-traversal.path-join-resolve-traversal.path-join-resolve-traversal - const srcPath = path.join(templateDir, sanitizedFile); - // nosemgrep: javascript.lang.security.audit.path-traversal.path-join-resolve-traversal.path-join-resolve-traversal - const destPath = path.join(targetDir, sanitizedFile); - - if (await fs.pathExists(srcPath)) { - await fs.copy(srcPath, destPath, { - filter: (src) => { - // Apply exclusion filters - return !exclusionFilters.some((filter) => src.includes(filter)); - }, - }); - result.copied.push(file); - } else { - result.skipped.push(file); - console.log(chalk.yellow(` āš ļø Template file not found: ${file}`)); - } - } catch (error) { - result.success = false; - if (error.message.includes('Invalid file path')) { - result.errors.push(`Security error - invalid file path: ${file}`); - } else { - result.errors.push(`Failed to copy ${file}: ${error.message}`); - } - } - } - - return result; -} - -/** - * Update package.json files list to match current configuration - * @param {string} packageJsonPath - Path to package.json file - * @returns {Promise} Success status - */ -async function updatePackageJsonFiles(packageJsonPath) { - try { - const packageJson = await fs.readJson(packageJsonPath); - packageJson.files = getPackageFiles(); - await fs.writeJson(packageJsonPath, packageJson, { spaces: 2 }); - return true; - } catch (error) { - console.error( - chalk.red(`Failed to update package.json files: ${error.message}`) - ); - return false; - } -} - -/** - * Get file distribution statistics - * @param {string} templateType - Template type - * @returns {object} Statistics about file distribution - */ -function getDistributionStats(templateType = 'default') { - const filesToCopy = getFilesToDistribute(templateType); - const baseFiles = packageJson.files.length; - - const stats = { - totalFiles: 
filesToCopy.length, - baseFiles: baseFiles, - templateType, - sourceFiles: packageJson.files, // Show which files come from package.json - }; - - return stats; -} - -module.exports = { - getFilesToDistribute, - getPackageFiles, - getExclusionFilters, - validateTemplateFiles, - copyTemplateFiles, - updatePackageJsonFiles, - getDistributionStats, - sanitizePath, -}; diff --git a/cli/index.js b/cli/index.js deleted file mode 100755 index fc0bf6f..0000000 --- a/cli/index.js +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env node - -const { Command } = require('commander'); -const { createProject } = require('./create-project'); -const { validateSystem } = require('./validators'); -const chalk = require('chalk'); - -const program = new Command(); - -program - .name('agentic-workflow-template') - .description('Create AI-powered GitHub workflow automation projects') - .version(require('../package.json').version); - -// Main command - create project -program - .argument('[project-name]', 'name of the project to create') - .option('-f, --force', 'overwrite existing directory') - .option('--skip-install', 'skip npm install') - .option('--template