Add Jira quarterly-initiative-report skill to pf-workflow plugin #48

janwright73 wants to merge 3 commits into patternfly:main from
Conversation
This skill generates comprehensive quarterly Jira status reports with:

- Progress tracking across epics with completion metrics
- RAG (Red/Amber/Green) status assessment
- Cross-project duplicate link analysis (critical for multi-team initiatives)
- Blocker identification and risk assessment
- Q+1 priority recommendations based on incomplete work
- Complete epic reference table with clickable Jira links

Key Features:

- Hybrid MCP/REST API support for maximum compatibility
- Handles cross-project work via duplicate links (AAP, MTV, CONSOLE, SAT, etc.)
- Prevents "invisible work" problem by checking ALL epics for linked work
- Tool-agnostic: works in Claude Code, Cursor, and future AI tools
- Uses standard tools: curl and jq (no special dependencies)

Tested with:

- PatternFly Q1 2026 initiative (35 epics, 549 issues)
- Cross-project work spanning 6 different Jira projects
- Both direct children and linked epic scenarios

Benefits:

- Automates tedious manual report generation
- Ensures complete visibility of cross-project work
- Provides data-driven status assessments
- Saves hours per quarterly report

File: plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
Lines: 320 (concise, under 500-line guideline)
Standards: Meets all ai-helpers repository requirements
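The completion metrics and RAG (Red/Amber/Green) assessment described above boil down to simple per-epic arithmetic. A minimal Python sketch follows; the skill itself drives curl and jq, so this is only an illustration, and the RAG thresholds here are assumptions rather than the skill's documented cutoffs:

```python
def completion_pct(done: int, total: int) -> int:
    """Percent of child issues done, rounded to a whole number."""
    return round(100 * done / total) if total else 0


def rag_status(pct: int, blocked_count: int) -> str:
    """Illustrative RAG rules: any blocker -> Red, under 70% -> Amber, else Green."""
    if blocked_count > 0:
        return "Red"
    return "Green" if pct >= 70 else "Amber"


# An epic with 55 of 63 child issues done and no blockers
pct = completion_pct(55, 63)
print(pct, rag_status(pct, 0))  # 87 Green
```

In practice the real thresholds would come from the skill's own status rules; the point is that the status assessment is data-driven rather than judged by hand.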
@coderabbitai full review

✅ Actions performed

Full review triggered.
📝 Walkthrough

Adds documentation for a new quarterly-initiative-report skill and its end-to-end test results: a skill workflow for fetching Jira epics, computing child-issue metrics, detecting duplicate-linked work, assigning RAG status, and rendering a structured markdown report. Reviewed against CONTRIBUTING.md, CONTRIBUTING-SKILLS.md and skill-creator plugin guidelines.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
🧹 Nitpick comments (1)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (1)
30-30: Replace tool-specific setup wording with tool-agnostic phrasing.

Line 30 calls out specific products ("Claude Code", "Cursor"). Reword this to generic assistant/tool settings language to stay fully tool-agnostic.

As per coding guidelines in CONTRIBUTING-SKILLS.md:116-129, skills must be tool-agnostic and avoid referencing a specific tool.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` at line 30, update the wording in SKILL.md under the "Option 1: In AI tool settings" section to remove product names ("Claude Code", "Cursor") and replace them with tool-agnostic phrasing such as "your assistant or tool settings (e.g., settings.json or config file)"; locate the header "Option 1: In AI tool settings" and the sentence that currently lists "Claude Code settings.json, Cursor config" and reword it to a generic instruction about adjusting assistant/tool settings per CONTRIBUTING-SKILLS.md guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@SKILL-TEST-RESULTS.md`:
- Line 4: Replace the personal local path shown in the "Skill Location:" entry
(the string starting with "/Users/jawright/...") with a generic placeholder
(e.g., "/path/to/skill/skill.md" or "{SKILL_PATH}") and update the related
compliance assertion around the compliance claim (the statement referenced near
line 154) so it accurately reflects that only generic placeholders are used;
ensure the "Skill Location" header and the compliance line both use the same
non-identifying placeholder format.
---
Nitpick comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Line 30: Update the wording in SKILL.md under the "Option 1: In AI tool
settings" section to remove product names ("Claude Code", "Cursor") and replace
them with tool-agnostic phrasing such as "your assistant or tool settings (e.g.,
settings.json or config file)"; locate the header "Option 1: In AI tool
settings" and the sentence that currently lists “Claude Code settings.json,
Cursor config” and reword it to a generic instruction about adjusting
assistant/tool settings per CONTRIBUTING-SKILLS.md guidelines.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c905ec7b-302e-4dcd-a76e-021a8c109e16
📒 Files selected for processing (2)
SKILL-TEST-RESULTS.md
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
SKILL-TEST-RESULTS.md (Outdated)

# Quarterly Initiative Report Skill - Test Results

**Test Date:** March 28, 2026
**Skill Location:** `/Users/jawright/.claude/skills/quarterly-initiative-report-test/skill.md`
Remove personal path details or update the compliance claim.
Line 4 includes a user-identifying local path (/Users/jawright/...), which conflicts with the Line 154 assertion that only generic placeholders are used and no personal information is present.
As per coding guidelines in CONTRIBUTING-SKILLS.md:116-129, skills/docs should meet repository standards, including safe and generic documentation patterns.
Also applies to: 154-154
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@SKILL-TEST-RESULTS.md` at line 4, Replace the personal local path shown in
the "Skill Location:" entry (the string starting with "/Users/jawright/...")
with a generic placeholder (e.g., "/path/to/skill/skill.md" or "{SKILL_PATH}")
and update the related compliance assertion around the compliance claim (the
statement referenced near line 154) so it accurately reflects that only generic
placeholders are used; ensure the "Skill Location" header and the compliance
line both use the same non-identifying placeholder format.
- Remove specific shell rc file references (~/.zshrc, ~/.bashrc)
- Replace personal skill path with generic skill name
- Addresses feedback from PR review
Actionable comments posted: 3
🧹 Nitpick comments (2)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (1)
30-31: Use tool-neutral wording in setup instructions.

Calling out "Claude Code settings.json, Cursor config" is tool-specific. Prefer neutral phrasing so the skill text remains portable.

Suggested fix:

-**Option 1: In AI tool settings** (Claude Code settings.json, Cursor config):
+**Option 1: In your AI assistant environment configuration**:

As per coding guidelines in CONTRIBUTING-SKILLS.md, "Skills must be tool-agnostic … Avoid referencing a specific tool."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines 30 - 31, replace the tool-specific phrase "Claude Code settings.json, Cursor config" under the "Option 1: In AI tool settings" section in SKILL.md with a tool-neutral instruction (e.g., "your AI/code tool's settings or configuration file") so the guidance is portable; update the surrounding text to avoid naming any specific products and ensure it follows the CONTRIBUTING-SKILLS.md rule that "Skills must be tool-agnostic."

SKILL-TEST-RESULTS.md (1)
156-156: Use "Markdown" capitalization for consistency.

Consider updating "Proper markdown formatting" → "Proper Markdown formatting".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@SKILL-TEST-RESULTS.md` at line 156, Update the table cell text that currently reads "Proper markdown formatting" to use consistent capitalization "Proper Markdown formatting"; locate the string "Proper markdown formatting" in the SKILL-TEST-RESULTS.md table row and change only the word "markdown" to "Markdown" so the header/content uses Title-case for "Markdown".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 284-290: Update the two fenced code blocks that currently lack
language tags by adding a language identifier (use "text") to the opening fence
for the snippets containing "Epic PF-3227: Ansible Nexus Migration (Closed)" and
"Epic PF-3408: Ansible Q1 Features (In Progress)"; make the same change for the
other occurrence mentioned (lines 293-301) so all three code fences start with
```text to satisfy MD040 and improve rendering.
- Around line 146-147: Update the JQL used in the curl call in SKILL.md so the
query is scoped to the target project by adding a project filter (e.g., include
project=PROJECT) alongside the existing label and status/priority filters;
locate the JQL string containing labels="LABEL" AND (status=Blocked OR
priority=Highest) in the SKILL.md example and modify it to include
project=PROJECT so the exported query only returns issues for the selected
project.
In `@SKILL-TEST-RESULTS.md`:
- Line 150: The "Tool-agnostic language" compliance claim is too strong for the
quarterly-initiative-report skill; either update the SKILL-TEST-RESULTS.md table
row for "Tool-agnostic language" to reflect the current status (e.g., change ✅
PASS to ❌ FAIL or add a note) or remove tool-specific references from the
quarterly-initiative-report skill text so it truly is tool-agnostic; locate the
skill by name "quarterly-initiative-report" and edit the SKILL.md content to
eliminate any Claude/Cursor-specific wording, or update the
SKILL-TEST-RESULTS.md row to accurately describe the existing wording.
---
Nitpick comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 30-31: Replace the tool-specific phrase "Claude Code
settings.json, Cursor config" under the "Option 1: In AI tool settings" section
in SKILL.md with a tool-neutral instruction (e.g., "your AI/code tool's settings
or configuration file") so the guidance is portable; update the surrounding text
to avoid naming any specific products and ensure it follows the
CONTRIBUTING-SKILLS.md rule that "Skills must be tool-agnostic."
In `@SKILL-TEST-RESULTS.md`:
- Line 156: Update the table cell text that currently reads "Proper markdown
formatting" to use consistent capitalization "Proper Markdown formatting";
locate the string "Proper markdown formatting" in the SKILL-TEST-RESULTS.md
table row and change only the word "markdown" to "Markdown" so the
header/content uses Title-case for "Markdown".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e057f4c1-f305-418a-90a6-651086a706d9
📒 Files selected for processing (2)
SKILL-TEST-RESULTS.md
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
  -d '{"jql":"labels=\"LABEL\" AND (status=Blocked OR priority=Highest)","fields":["key","summary","status","priority","assignee"],"maxResults":100}' \
  "$JIRA_SITE_URL/rest/api/3/search/jql"
Scope blocker query to the selected project key.
The blocker JQL omits project=PROJECT, so it can pull unrelated issues that share the label and distort risk reporting.
Suggested fix:

- -d '{"jql":"labels=\"LABEL\" AND (status=Blocked OR priority=Highest)","fields":["key","summary","status","priority","assignee"],"maxResults":100}' \
+ -d '{"jql":"project=PROJECT AND labels=\"LABEL\" AND (status=Blocked OR priority=Highest)","fields":["key","summary","status","priority","assignee"],"maxResults":100}' \

As per coding guidelines in CONTRIBUTING-SKILLS.md, "Describe outcomes, not implementation" and keep behavior reliable for the intended skill scope; project-scoped data retrieval is required for correct report outcomes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
146 - 147, Update the JQL used in the curl call in SKILL.md so the query is
scoped to the target project by adding a project filter (e.g., include
project=PROJECT) alongside the existing label and status/priority filters;
locate the JQL string containing labels="LABEL" AND (status=Blocked OR
priority=Highest) in the SKILL.md example and modify it to include
project=PROJECT so the exported query only returns issues for the selected
project.
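The scoping fix above amounts to prefixing the JQL with a project clause. A small query builder makes that explicit; the helper name and example arguments are illustrative, not part of the skill:

```python
def blocker_jql(project: str, label: str) -> str:
    """Build a project-scoped JQL query for blocked or highest-priority issues."""
    return (
        f'project={project} AND labels="{label}" '
        'AND (status=Blocked OR priority=Highest)'
    )


# With placeholders matching the SKILL.md example
print(blocker_jql("PROJECT", "LABEL"))
# project=PROJECT AND labels="LABEL" AND (status=Blocked OR priority=Highest)
```

Building the clause once and reusing it keeps every export scoped to the selected project, which is the behavior the review comment asks for.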
```
Epic PF-3227: Ansible Nexus Migration (Closed)
Direct children: 0 issues
Linked via duplicates:
- AAP-58793: 16 issues (16 done, 100%)
Combined: 16 issues, 100% complete ✅
```
Add language identifiers to fenced code blocks.
Two fenced examples are missing language tags, which triggers MD040 and reduces readability in renderers.
Suggested fix:

-```
+```text
Epic PF-3227: Ansible Nexus Migration (Closed)
Direct children: 0 issues
Linked via duplicates:
- AAP-58793: 16 issues (16 done, 100%)
Combined: 16 issues, 100% complete ✅

-```
+```text
Epic PF-3408: Ansible Q1 Features (In Progress)
Direct children: 0 issues
Linked via duplicates:
- AAP-60038: 63 issues (55 done, 87%)
- AAP-57961: 18 issues (18 done, 100%)
- AAP-59349: 56 issues (22 done, 39%)
Combined: 137 issues, 69% complete
Also applies to: 293-301
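The combined numbers in these examples follow from merging an epic's direct children with the counts from its duplicate-linked epics. A hedged sketch of that arithmetic (the tuple-based data shape is an assumption for illustration, not the skill's actual representation):

```python
def combined_metrics(direct, linked):
    """direct: (done, total) for the epic's own children;
    linked: list of (done, total) pairs from duplicate-linked epics."""
    done = direct[0] + sum(d for d, _ in linked)
    total = direct[1] + sum(t for _, t in linked)
    pct = round(100 * done / total) if total else 0
    return done, total, pct


# PF-3408: no direct children, three duplicate-linked AAP epics
print(combined_metrics((0, 0), [(55, 63), (18, 18), (22, 56)]))  # (95, 137, 69)
```

This is the "invisible work" check in miniature: an epic with zero direct children still reports 137 issues once duplicate links are counted.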
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 284-284: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
284 - 290, Update the two fenced code blocks that currently lack language tags
by adding a language identifier (use "text") to the opening fence for the
snippets containing "Epic PF-3227: Ansible Nexus Migration (Closed)" and "Epic
PF-3408: Ansible Q1 Features (In Progress)"; make the same change for the other
occurrence mentioned (lines 293-301) so all three code fences start with ```text
to satisfy MD040 and improve rendering.
SKILL-TEST-RESULTS.md (Outdated)

|------------|--------|-------|
| **Frontmatter present** | ✅ PASS | name, description, disable-model-invocation |
| **Name matches directory** | ✅ PASS | quarterly-initiative-report |
| **Tool-agnostic language** | ✅ PASS | No Claude/Cursor-specific references |
Compliance claim is currently too strong.

"No Claude/Cursor-specific references" does not match the current skill text (see plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md, Line 30). Please adjust this row or update the skill wording.
No Claude/Cursor-specific references does not match the current skill text (see plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md, Line 30). Please adjust this row or update the skill wording.
As per coding guidelines in CONTRIBUTING-SKILLS.md, “Skills must be tool-agnostic … Avoid referencing a specific tool.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@SKILL-TEST-RESULTS.md` at line 150, The "Tool-agnostic language" compliance
claim is too strong for the quarterly-initiative-report skill; either update the
SKILL-TEST-RESULTS.md table row for "Tool-agnostic language" to reflect the
current status (e.g., change ✅ PASS to ❌ FAIL or add a note) or remove
tool-specific references from the quarterly-initiative-report skill text so it
truly is tool-agnostic; locate the skill by name "quarterly-initiative-report"
and edit the SKILL.md content to eliminate any Claude/Cursor-specific wording,
or update the SKILL-TEST-RESULTS.md row to accurately describe the existing
wording.
SKILL-TEST-RESULTS.md (Outdated)

@@ -0,0 +1,256 @@
# Quarterly Initiative Report Skill - Test Results
This test results file would get committed to the repo root. Probably worth removing from the PR since it's specific to your test run.
Test results file is specific to local test run and should not be committed to repository root. Addresses PR review feedback.