Compare commits


20 Commits

Author SHA1 Message Date
Seth Hobson
94d1aba17a Add modernized Payment Intents pattern with Payment Element
- Restore Payment Intents flow removed by PR, updated for modern best practices
- Use Payment Element instead of legacy Card Element
- Use stripe.confirmPayment() instead of deprecated confirmCardPayment()
- Use automatic_payment_methods instead of hardcoded payment_method_types
- Split Python/JS into separate fenced code blocks for clarity
- Add guidance on when to use Payment Intents vs Checkout Sessions
- Renumber subsequent patterns (Subscription → 4, Customer Portal → 5)
2026-02-19 13:45:55 -05:00
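The commit above swaps a hardcoded `payment_method_types` list for `automatic_payment_methods`. A minimal sketch of what the server-side Payment Intent parameters look like under that change — the helper function name is ours, and the amount is illustrative:

```python
# Sketch of the modernized Payment Intents parameters described in the
# commit above. Parameter names follow Stripe's documented API; the helper
# itself and the amount are illustrative, not the repo's actual code.

def build_payment_intent_params(amount_cents: int, currency: str = "usd") -> dict:
    """Build params for stripe.PaymentIntent.create().

    Amounts are in the smallest currency unit (cents for USD).
    automatic_payment_methods replaces a hardcoded
    payment_method_types=['card'] list, letting the Payment Element
    surface whichever methods are enabled in the Dashboard.
    """
    return {
        "amount": amount_cents,
        "currency": currency,
        "automatic_payment_methods": {"enabled": True},
    }

params = build_payment_intent_params(1999)
# The client then confirms against the intent's client_secret with
# stripe.confirmPayment(), not the deprecated confirmCardPayment().
```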
Seth Hobson
204e8129aa Polish Stripe best practices examples for consistency
- Remove payment_method_types=['card'] from Quick Start (dynamic payment methods)
- Remove unused appearance variable from Pattern 2 JS example
- Fix actions access pattern: destructure before use for consistency
- Add inline comments clarifying sync/async distinction and amount format
- Add ui_mode='embedded' to Embedded checkout bullet for completeness
- Replace payment_method_types with automatic_payment_methods in test example
2026-02-19 13:42:36 -05:00
Sawyer
2b8e3166a1 Update to latest Stripe best practices 2026-02-18 20:38:50 -08:00
bentheautomator
5d65aa1063 Add YouTube design concept extractor tool (#432)
* feat: add YouTube design concept extractor tool

Extracts transcript, metadata, and keyframes from YouTube videos
into a structured markdown reference document for agent consumption.

Supports interval-based frame capture, scene-change detection, and
chapter-aware transcript grouping.

https://claude.ai/code/session_01KZxeSK9A2F2oZUoHgxUUBV

* feat: add OCR and color palette extraction to yt-design-extractor

- Add --ocr flag with Tesseract (fast) or EasyOCR (stylized text) engines
- Add --colors flag for dominant color palette extraction via ColorThief
- Add --full convenience flag to enable all extraction features
- Include OCR text alongside each frame in markdown output
- Add Visual Text Index section for searchable on-screen text
- Export ocr-results.json and color-palette.json for reuse
- Run OCR in parallel with ThreadPoolExecutor for performance

https://claude.ai/code/session_01KZxeSK9A2F2oZUoHgxUUBV
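The "OCR in parallel with ThreadPoolExecutor" bullet above can be sketched as follows. The real tool calls pytesseract or EasyOCR per frame; here `run_ocr` is a stub standing in for that call, and the function names are ours:

```python
from concurrent.futures import ThreadPoolExecutor

def run_ocr(frame_path: str) -> str:
    # Placeholder for pytesseract.image_to_string(Image.open(frame_path)).
    # Native OCR work releases the GIL, so threads give a real speedup.
    return f"text-from-{frame_path}"

def ocr_frames(frame_paths: list[str], max_workers: int = 4) -> dict[str, str]:
    """OCR every frame concurrently; pool.map() preserves input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        texts = pool.map(run_ocr, frame_paths)
    return dict(zip(frame_paths, texts))

results = ocr_frames(["frame-0001.png", "frame-0002.png"])
```

Because `map()` yields results in submission order, the OCR text can be zipped back to its frame for the markdown output regardless of which thread finished first.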

* feat: add requirements.txt and Makefile for yt-design-extractor

- requirements.txt with core and optional dependencies
- Makefile with install, deps check, and run targets
- Support for make run-full, run-ocr, run-transcript variants
- Cross-platform install-ocr target (apt/brew/dnf)

https://claude.ai/code/session_01KZxeSK9A2F2oZUoHgxUUBV

* chore: move Makefile to project root for easier access

Now `make install-full` works from anywhere in the project.

https://claude.ai/code/session_01KZxeSK9A2F2oZUoHgxUUBV

* fix: make easyocr truly optional, fix install targets

- Remove easyocr from install-full (requires PyTorch, causes conflicts)
- Add separate install-easyocr target with CPU PyTorch from official index
- Update requirements.txt with clear instructions for optional easyocr
- Improve make deps output with clearer status messages

https://claude.ai/code/session_01KZxeSK9A2F2oZUoHgxUUBV

* fix: harden error handling and fix silent failures in yt-design-extractor

- Check ffmpeg return codes instead of silently producing 0 frames
- Add upfront shutil.which() checks for yt-dlp and ffmpeg
- Narrow broad except Exception catches (transcript, OCR, color)
- Log OCR errors instead of embedding error strings in output data
- Handle subprocess.TimeoutExpired on all subprocess calls
- Wrap video processing in try/finally for reliable cleanup
- Error on missing easyocr when explicitly requested (no silent fallback)
- Fix docstrings: 720p fallback, parallel OCR, chunk duration, deps
- Split pytesseract/Pillow imports for clearer missing-dep messages
- Add run-transcript to Makefile .PHONY and help target
- Fix variable shadowing in round_color (step -> bucket_size)
- Handle json.JSONDecodeError from yt-dlp metadata
- Format with ruff
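Several bullets above follow one pattern: surface subprocess failures instead of swallowing them. A hedged sketch of that pattern — checking `shutil.which()` upfront, handling `TimeoutExpired`, and turning a nonzero ffmpeg exit into an error rather than silently producing 0 frames. The function names are ours, not the tool's actual API:

```python
import shutil
import subprocess

def require_tool(name: str) -> str:
    """Upfront dependency check (the commit does this for yt-dlp and ffmpeg)."""
    path = shutil.which(name)
    if path is None:
        raise RuntimeError(f"{name} not found on PATH; install it first")
    return path

def check_ffmpeg_result(returncode: int, stderr: str) -> None:
    """Fail loudly on a nonzero exit instead of silently producing 0 frames."""
    if returncode != 0:
        raise RuntimeError(f"ffmpeg failed: {stderr.strip()[:200]}")

def run_checked(cmd: list[str], timeout: int = 600) -> str:
    require_tool(cmd[0])
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired as exc:
        raise RuntimeError(f"{cmd[0]} timed out after {timeout}s") from exc
    check_ffmpeg_result(proc.returncode, proc.stderr)
    return proc.stdout
```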

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Seth Hobson <wshobson@gmail.com>
2026-02-06 20:06:56 -05:00
Seth Hobson
089740f185 chore: bump marketplace to v1.5.1 and sync plugin versions
Sync marketplace.json versions with plugin.json for all 14 touched
plugins. Fix plugin.json versions for llm-application-dev (2.0.3),
startup-business-analyst (1.0.4), and ui-design (1.0.2) to match
marketplace lineage. Add dotnet-contribution to marketplace.
2026-02-06 19:36:28 -05:00
Seth Hobson
4d504ed8fa fix: eliminate cross-plugin dependencies and modernize plugin.json across marketplace
Rewrites 14 commands across 11 plugins to remove all cross-plugin
subagent_type references (e.g., "unit-testing::test-automator"), which
break when plugins are installed standalone. Each command now uses only
local bundled agents or general-purpose with role context in the prompt.

All rewritten commands follow conductor-style patterns:
- CRITICAL BEHAVIORAL RULES with strong directives
- State files for session tracking and resume support
- Phase checkpoints requiring explicit user approval
- File-based context passing between steps

Also fixes 4 plugin.json files missing version/license fields and adds
plugin.json for dotnet-contribution.

Closes #433
2026-02-06 19:34:26 -05:00
Seth Hobson
4820385a31 chore: modernize all plugins to new format with per-plugin plugin.json
Add .claude-plugin/plugin.json to all 67 remaining plugins and simplify
marketplace.json entries by removing redundant fields (keywords, strict,
commands, agents, skills, repository) that are now auto-discovered.
Bump marketplace version to 1.5.0.
2026-02-05 22:02:17 -05:00
Seth Hobson
a5ab5d8f31 chore(agent-teams): bump to v1.0.2 2026-02-05 17:42:30 -05:00
Seth Hobson
598ea85e7f fix(agent-teams): simplify plugin.json and marketplace entry to match conductor patterns
Strip plugin.json to minimal fields (name, version, description, author, license).
Remove commands/agents/skills arrays, keywords, repository, and strict from marketplace entry.
2026-02-05 17:41:00 -05:00
Seth Hobson
fb9eba62b2 fix(agent-teams): remove Context7 MCP dependency, align frontmatter with conductor patterns, bump to v1.0.1
Remove .mcp.json to eliminate external MCP dependency that likely caused plugin load failure.
Add tools: field to all agents, version: field to all skills, matching conductor plugin patterns.
2026-02-05 17:30:35 -05:00
Seth Hobson
b187ce780d docs(agent-teams): use official /plugin install command instead of --plugin-dir 2026-02-05 17:16:29 -05:00
Seth Hobson
1f46cab1f6 docs(agent-teams): add link to official Anthropic Agent Teams docs 2026-02-05 17:14:55 -05:00
Seth Hobson
d0a57d51b5 docs: bump marketplace to v1.4.0, update README with Agent Teams and Conductor highlights 2026-02-05 17:12:59 -05:00
Seth Hobson
81d53eb5d6 chore: bump marketplace to v1.4.0 (73 plugins, 112 agents, 146 skills) 2026-02-05 17:11:08 -05:00
Seth Hobson
0752775afc feat(agent-teams): add plugin for multi-agent team orchestration
New plugin with 7 presets (review, debug, feature, fullstack, research,
security, migration), 4 specialized agents, 7 slash commands, 6 skills
with reference docs, and Context7 MCP integration for research teams.
2026-02-05 17:10:02 -05:00
Ruyut
918a770990 fix: add missing ')' in winston File transport (#426) 2026-02-01 21:06:12 -05:00
Song Luar
194a267494 Update npx packages referenced in markdown files (#425)
* use correct npx package names in md files

* fix: update remaining non-existent npm package references

- Replace react-codemod with jscodeshift in deps-upgrade.md
- Remove non-existent changelog-parser reference

---------

Co-authored-by: Seth Hobson <wshobson@gmail.com>
2026-02-01 21:04:21 -05:00
kenzo
3ed95e608a feat(tailwind-design-system): update skill for Tailwind CSS v4 (#427)
* feat(tailwind-design-system): update skill for Tailwind CSS v4

Major updates:
- CSS-first configuration with @theme blocks
- @custom-variant for dark mode (not @variant)
- @keyframes must be inside @theme for tree-shaking
- React 19 ref-as-prop patterns (no forwardRef)
- OKLCH colors for better perceptual uniformity
- Native CSS animations (@starting-style, transition-behavior)
- New @utility directive for custom utilities
- @theme inline/static modifiers
- Namespace overrides (--color-*: initial)
- Semi-transparent variants with color-mix()
- Container query tokens

Breaking changes from v3:
- tailwind.config.ts → CSS @theme
- @tailwind directives → @import 'tailwindcss'
- darkMode: 'class' → @custom-variant dark

* fix: address review feedback for tailwind v4 skill

- Add missing semicolon to @custom-variant declaration
- Add missing Slot import from @radix-ui/react-slot
- Add missing DialogPortal declaration
- Add --color-ring-offset to theme for focus states
- Fix misleading comment about @keyframes tree-shaking
- Update comparison table for tailwindcss-animate replacement
- Use standard zod import path (not transitional zod/v4)
- Update upgrade guide link to stable URL
- Format with Prettier

---------

Co-authored-by: Seth Hobson <wshobson@gmail.com>
2026-02-01 20:40:22 -05:00
M. A.
cbb60494b1 Add Comprehensive Python Development Skills (#419)
* Add extra Python skills covering code style, design patterns, resilience, resource management, testing patterns, type safety, and more

* fix: correct code examples in Python skills

- Clarify Python version requirements for type statement (3.10+ vs 3.12+)
- Add missing ValidationError import in configuration example
- Add missing httpx import and url parameter in async example

---------

Co-authored-by: Seth Hobson <wshobson@gmail.com>
2026-01-30 11:52:14 -05:00
Daniel
f9e9598241 Revise event sourcing architect metadata and description (#417)
Add header with the event sourcing architect's description and name format.
2026-01-30 11:34:59 -05:00
146 changed files with 17049 additions and 4524 deletions

File diff suppressed because it is too large.

Makefile (new file, 120 lines)

@@ -0,0 +1,120 @@
# YouTube Design Extractor - Setup and Usage
# ==========================================

PYTHON := python3
PIP := pip3
SCRIPT := tools/yt-design-extractor.py

.PHONY: help install install-ocr install-easyocr deps check run run-full run-ocr run-transcript clean

help:
	@echo "YouTube Design Extractor"
	@echo "========================"
	@echo ""
	@echo "Setup (run in order):"
	@echo "  make install-ocr       Install system tools (tesseract + ffmpeg)"
	@echo "  make install           Install Python dependencies"
	@echo "  make deps              Show what's installed"
	@echo ""
	@echo "Optional:"
	@echo "  make install-easyocr   Install EasyOCR + PyTorch (~2GB, for stylized text)"
	@echo ""
	@echo "Usage:"
	@echo "  make run URL=<youtube-url>             Basic extraction"
	@echo "  make run-full URL=<youtube-url>        Full extraction (OCR + colors + scene)"
	@echo "  make run-ocr URL=<youtube-url>         With OCR only"
	@echo "  make run-transcript URL=<youtube-url>  Transcript + metadata only"
	@echo ""
	@echo "Examples:"
	@echo "  make run URL='https://youtu.be/eVnQFWGDEdY'"
	@echo "  make run-full URL='https://youtu.be/eVnQFWGDEdY' INTERVAL=15"
	@echo ""
	@echo "Options (pass as make variables):"
	@echo "  URL=<url>        YouTube video URL (required)"
	@echo "  INTERVAL=<secs>  Frame interval in seconds (default: 30)"
	@echo "  OUTPUT=<dir>     Output directory"
	@echo "  ENGINE=<engine>  OCR engine: tesseract (default) or easyocr"

# Installation targets
install:
	$(PIP) install -r tools/requirements.txt

install-ocr:
	@echo "Installing Tesseract OCR + ffmpeg..."
	@if command -v apt-get >/dev/null 2>&1; then \
		sudo apt-get update && sudo apt-get install -y tesseract-ocr ffmpeg; \
	elif command -v brew >/dev/null 2>&1; then \
		brew install tesseract ffmpeg; \
	elif command -v dnf >/dev/null 2>&1; then \
		sudo dnf install -y tesseract ffmpeg; \
	else \
		echo "Please install tesseract-ocr and ffmpeg manually"; \
		exit 1; \
	fi

install-easyocr:
	@echo "Installing PyTorch (CPU) + EasyOCR (~2GB download)..."
	$(PIP) install torch torchvision --index-url https://download.pytorch.org/whl/cpu
	$(PIP) install easyocr

deps:
	@echo "Checking dependencies..."
	@echo ""
	@echo "System tools:"
	@command -v ffmpeg >/dev/null 2>&1 && echo "  ✓ ffmpeg" || echo "  ✗ ffmpeg (run: make install-ocr)"
	@command -v tesseract >/dev/null 2>&1 && echo "  ✓ tesseract" || echo "  ✗ tesseract (run: make install-ocr)"
	@echo ""
	@echo "Python packages (required):"
	@$(PYTHON) -c "import yt_dlp; print('  ✓ yt-dlp', yt_dlp.version.__version__)" 2>/dev/null || echo "  ✗ yt-dlp (run: make install)"
	@$(PYTHON) -c "from youtube_transcript_api import YouTubeTranscriptApi; print('  ✓ youtube-transcript-api')" 2>/dev/null || echo "  ✗ youtube-transcript-api (run: make install)"
	@$(PYTHON) -c "from PIL import Image; print('  ✓ Pillow')" 2>/dev/null || echo "  ✗ Pillow (run: make install)"
	@$(PYTHON) -c "import pytesseract; print('  ✓ pytesseract')" 2>/dev/null || echo "  ✗ pytesseract (run: make install)"
	@$(PYTHON) -c "from colorthief import ColorThief; print('  ✓ colorthief')" 2>/dev/null || echo "  ✗ colorthief (run: make install)"
	@echo ""
	@echo "Optional (for stylized text OCR):"
	@$(PYTHON) -c "import easyocr; print('  ✓ easyocr')" 2>/dev/null || echo "  ○ easyocr (run: make install-easyocr)"

check:
	@$(PYTHON) $(SCRIPT) --help >/dev/null && echo "✓ Script is working" || echo "✗ Script failed"

# Run targets
INTERVAL ?= 30
ENGINE ?= tesseract
OUTPUT ?=

run:
ifndef URL
	@echo "Error: URL is required"
	@echo "Usage: make run URL='https://youtu.be/VIDEO_ID'"
	@exit 1
endif
	$(PYTHON) $(SCRIPT) "$(URL)" --interval $(INTERVAL) $(if $(OUTPUT),-o $(OUTPUT))

run-full:
ifndef URL
	@echo "Error: URL is required"
	@echo "Usage: make run-full URL='https://youtu.be/VIDEO_ID'"
	@exit 1
endif
	$(PYTHON) $(SCRIPT) "$(URL)" --full --interval $(INTERVAL) --ocr-engine $(ENGINE) $(if $(OUTPUT),-o $(OUTPUT))

run-ocr:
ifndef URL
	@echo "Error: URL is required"
	@echo "Usage: make run-ocr URL='https://youtu.be/VIDEO_ID'"
	@exit 1
endif
	$(PYTHON) $(SCRIPT) "$(URL)" --ocr --interval $(INTERVAL) --ocr-engine $(ENGINE) $(if $(OUTPUT),-o $(OUTPUT))

run-transcript:
ifndef URL
	@echo "Error: URL is required"
	@echo "Usage: make run-transcript URL='https://youtu.be/VIDEO_ID'"
	@exit 1
endif
	$(PYTHON) $(SCRIPT) "$(URL)" --transcript-only $(if $(OUTPUT),-o $(OUTPUT))

# Cleanup
clean:
	rm -rf yt-extract-*
	@echo "Cleaned up extraction directories"


@@ -4,26 +4,26 @@
[![Run in Smithery](https://smithery.ai/badge/skills/wshobson)](https://smithery.ai/skills?ns=wshobson&utm_source=github&utm_medium=badge)
-> **🎯 Agent Skills Enabled** — 129 specialized skills extend Claude's capabilities across plugins with progressive disclosure
+> **🎯 Agent Skills Enabled** — 146 specialized skills extend Claude's capabilities across plugins with progressive disclosure
-A comprehensive production-ready system combining **108 specialized AI agents**, **15 multi-agent workflow orchestrators**, **129 agent skills**, and **72 development tools** organized into **72 focused, single-purpose plugins** for [Claude Code](https://docs.claude.com/en/docs/claude-code/overview).
+A comprehensive production-ready system combining **112 specialized AI agents**, **16 multi-agent workflow orchestrators**, **146 agent skills**, and **79 development tools** organized into **73 focused, single-purpose plugins** for [Claude Code](https://docs.claude.com/en/docs/claude-code/overview).
## Overview
This unified repository provides everything needed for intelligent automation and multi-agent orchestration across modern software development:
-- **72 Focused Plugins** - Granular, single-purpose plugins optimized for minimal token usage and composability
-- **108 Specialized Agents** - Domain experts with deep knowledge across architecture, languages, infrastructure, quality, data/AI, documentation, business operations, and SEO
-- **129 Agent Skills** - Modular knowledge packages with progressive disclosure for specialized expertise
-- **15 Workflow Orchestrators** - Multi-agent coordination systems for complex operations like full-stack development, security hardening, ML pipelines, and incident response
-- **72 Development Tools** - Optimized utilities including project scaffolding, security scanning, test automation, and infrastructure setup
+- **73 Focused Plugins** - Granular, single-purpose plugins optimized for minimal token usage and composability
+- **112 Specialized Agents** - Domain experts with deep knowledge across architecture, languages, infrastructure, quality, data/AI, documentation, business operations, and SEO
+- **146 Agent Skills** - Modular knowledge packages with progressive disclosure for specialized expertise
+- **16 Workflow Orchestrators** - Multi-agent coordination systems for complex operations like full-stack development, security hardening, ML pipelines, and incident response
+- **79 Development Tools** - Optimized utilities including project scaffolding, security scanning, test automation, and infrastructure setup
### Key Features
-- **Granular Plugin Architecture**: 72 focused plugins optimized for minimal token usage
-- **Comprehensive Tooling**: 72 development tools including test generation, scaffolding, and security scanning
+- **Granular Plugin Architecture**: 73 focused plugins optimized for minimal token usage
+- **Comprehensive Tooling**: 79 development tools including test generation, scaffolding, and security scanning
- **100% Agent Coverage**: All plugins include specialized agents
-- **Agent Skills**: 129 specialized skills following for progressive disclosure and token efficiency
+- **Agent Skills**: 146 specialized skills following for progressive disclosure and token efficiency
- **Clear Organization**: 23 categories with 1-6 plugins each for easy discovery
- **Efficient Design**: Average 3.4 components per plugin (follows Anthropic's 2-8 pattern)
@@ -37,7 +37,7 @@ Each plugin is completely isolated with its own agents, commands, and skills:
- **Clear boundaries** - Each plugin has a single, focused purpose
- **Progressive disclosure** - Skills load knowledge only when activated
-**Example**: Installing `python-development` loads 3 Python agents, 1 scaffolding tool, and makes 5 skills available (~300 tokens), not the entire marketplace.
+**Example**: Installing `python-development` loads 3 Python agents, 1 scaffolding tool, and makes 16 skills available (~1000 tokens), not the entire marketplace.
## Quick Start
@@ -49,7 +49,7 @@ Add this marketplace to Claude Code:
/plugin marketplace add wshobson/agents
```
-This makes all 72 plugins available for installation, but **does not load any agents or tools** into your context.
+This makes all 73 plugins available for installation, but **does not load any agents or tools** into your context.
### Step 2: Install Plugins
@@ -63,7 +63,7 @@ Install the plugins you need:
```bash
# Essential development plugins
-/plugin install python-development # Python with 5 specialized skills
+/plugin install python-development # Python with 16 specialized skills
/plugin install javascript-typescript # JS/TS with 4 specialized skills
/plugin install backend-development # Backend APIs with 3 architecture skills
@@ -114,9 +114,9 @@ rm -rf ~/.claude/plugins/cache/claude-code-workflows && rm ~/.claude/plugins/ins
### Core Guides
-- **[Plugin Reference](docs/plugins.md)** - Complete catalog of all 72 plugins
-- **[Agent Reference](docs/agents.md)** - All 108 agents organized by category
-- **[Agent Skills](docs/agent-skills.md)** - 129 specialized skills with progressive disclosure
+- **[Plugin Reference](docs/plugins.md)** - Complete catalog of all 73 plugins
+- **[Agent Reference](docs/agents.md)** - All 112 agents organized by category
+- **[Agent Skills](docs/agent-skills.md)** - 146 specialized skills with progressive disclosure
- **[Usage Guide](docs/usage.md)** - Commands, workflows, and best practices
- **[Architecture](docs/architecture.md)** - Design principles and patterns
@@ -130,7 +130,44 @@ rm -rf ~/.claude/plugins/cache/claude-code-workflows && rm ~/.claude/plugins/ins
## What's New
-### Agent Skills (129 skills across 20 plugins)
+### Agent Teams Plugin (NEW)
+Orchestrate multi-agent teams for parallel workflows using Claude Code's experimental Agent Teams feature:
+```bash
+/plugin install agent-teams@claude-code-workflows
+```
+- **7 Team Presets** — `review`, `debug`, `feature`, `fullstack`, `research`, `security`, `migration`
+- **Parallel Code Review** — `/team-review src/ --reviewers security,performance,architecture`
+- **Hypothesis-Driven Debugging** — `/team-debug "API returns 500" --hypotheses 3`
+- **Parallel Feature Development** — `/team-feature "Add OAuth2 auth" --plan-first`
+- **Research Teams** — Parallel investigation across codebase and web sources
+- **Security Audits** — 4 reviewers covering OWASP, auth, dependencies, and secrets
+- **Migration Support** — Coordinated migration with parallel streams and correctness verification
+Includes 4 specialized agents, 7 commands, and 6 skills with reference documentation.
+[→ View agent-teams documentation](plugins/agent-teams/README.md)
+### Conductor Plugin — Context-Driven Development
+Transforms Claude Code into a project management tool with a structured **Context → Spec & Plan → Implement** workflow:
+```bash
+/plugin install conductor@claude-code-workflows
+```
+- **Interactive Setup** — `/conductor:setup` creates product vision, tech stack, workflow rules, and style guides
+- **Track-Based Development** — `/conductor:new-track` generates specifications and phased implementation plans
+- **TDD Workflow** — `/conductor:implement` executes tasks with verification checkpoints
+- **Semantic Revert** — `/conductor:revert` undoes work by logical unit (track, phase, or task)
+- **State Persistence** — Resume setup across sessions with persistent project context
+- **3 Skills** — Context-driven development, track management, workflow patterns
+[→ View Conductor documentation](plugins/conductor/README.md)
+### Agent Skills (146 skills across 21 plugins)
Specialized knowledge packages following Anthropic's progressive disclosure architecture:
@@ -246,11 +283,11 @@ Uses kubernetes-architect agent with 4 specialized skills for production-grade c
## Plugin Categories
-**23 categories, 72 plugins:**
+**24 categories, 73 plugins:**
- 🎨 **Development** (4) - debugging, backend, frontend, multi-platform
- 📚 **Documentation** (3) - code docs, API specs, diagrams, C4 architecture
-- 🔄 **Workflows** (4) - git, full-stack, TDD, **Conductor** (context-driven development)
+- 🔄 **Workflows** (5) - git, full-stack, TDD, **Conductor** (context-driven development), **Agent Teams** (multi-agent orchestration)
- ✅ **Testing** (2) - unit testing, TDD workflows
- 🔍 **Quality** (3) - code review, comprehensive review, performance
- 🤖 **AI & ML** (4) - LLM apps, agent orchestration, context, MLOps
@@ -278,7 +315,7 @@ Uses kubernetes-architect agent with 4 specialized skills for production-grade c
- **Single responsibility** - Each plugin does one thing well
- **Minimal token usage** - Average 3.4 components per plugin
- **Composable** - Mix and match for complex workflows
-- **100% coverage** - All 108 agents accessible across plugins
+- **100% coverage** - All 112 agents accessible across plugins
### Progressive Disclosure (Skills)
@@ -293,7 +330,7 @@ Three-tier architecture for token efficiency:
```
claude-agents/
├── .claude-plugin/
-│ └── marketplace.json # 72 plugins
+│ └── marketplace.json # 73 plugins
├── plugins/
│ ├── python-development/
│ │ ├── agents/ # 3 Python experts


@@ -0,0 +1,10 @@
{
"name": "accessibility-compliance",
"version": "1.2.1",
"description": "WCAG accessibility auditing, compliance validation, UI testing for screen readers, keyboard navigation, and inclusive design",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "agent-orchestration",
"version": "1.2.0",
"description": "Multi-agent system optimization, agent improvement workflows, and context management",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "agent-teams",
"version": "1.0.2",
"description": "Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,153 @@
# Agent Teams Plugin
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's experimental [Agent Teams](https://code.claude.com/docs/en/agent-teams) feature.
## Setup
### Prerequisites
1. Enable the experimental Agent Teams feature:
```bash
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
```
2. Configure teammate display mode in your `~/.claude/settings.json`:
```json
{
"teammateMode": "tmux"
}
```
Available display modes:
- `"tmux"` — Each teammate runs in a tmux pane (recommended)
- `"iterm2"` — Each teammate gets an iTerm2 tab (macOS only)
- `"in-process"` — Teammates run in the same process (default)
### Installation
First, add the marketplace (if you haven't already):
```
/plugin marketplace add wshobson/agents
```
Then install the plugin:
```
/plugin install agent-teams@claude-code-workflows
```
## Features
- **Preset Teams** — Spawn pre-configured teams for common workflows (review, debug, feature, fullstack, research, security, migration)
- **Multi-Reviewer Code Review** — Parallel review across security, performance, architecture, testing, and accessibility dimensions
- **Hypothesis-Driven Debugging** — Competing hypothesis investigation with evidence-based root cause analysis
- **Parallel Feature Development** — Coordinated multi-agent implementation with file ownership boundaries
- **Parallel Research** — Multiple Explore agents investigating different questions or codebase areas simultaneously
- **Security Audit** — Comprehensive parallel security review across OWASP, auth, dependencies, and configuration
- **Migration Support** — Coordinated codebase migration with parallel implementation streams and correctness verification
- **Task Coordination** — Dependency-aware task management with workload balancing
- **Team Communication** — Structured messaging protocols for efficient agent collaboration
## Commands
| Command | Description |
| ---------------- | ---------------------------------------------------------- |
| `/team-spawn` | Spawn a team using presets or custom composition |
| `/team-status` | Display team members, tasks, and progress |
| `/team-shutdown` | Gracefully shut down a team and clean up resources |
| `/team-review` | Multi-reviewer parallel code review |
| `/team-debug` | Competing hypotheses debugging with parallel investigation |
| `/team-feature` | Parallel feature development with file ownership |
| `/team-delegate` | Task delegation dashboard and workload management |
## Agents
| Agent | Role | Color |
| ------------------ | --------------------------------------------------------------------------------- | ------ |
| `team-lead` | Team orchestrator — decomposes work, manages lifecycle, synthesizes results | Blue |
| `team-reviewer` | Multi-dimensional code reviewer — operates on assigned review dimension | Green |
| `team-debugger` | Hypothesis investigator — gathers evidence to confirm/falsify assigned hypothesis | Red |
| `team-implementer` | Parallel builder — implements within strict file ownership boundaries | Yellow |
## Skills
| Skill | Description |
| ------------------------------ | ------------------------------------------------------------------------ |
| `team-composition-patterns` | Team sizing heuristics, preset compositions, agent type selection |
| `task-coordination-strategies` | Task decomposition, dependency graphs, workload monitoring |
| `parallel-debugging` | Hypothesis generation, evidence collection, result arbitration |
| `multi-reviewer-patterns` | Review dimension allocation, finding deduplication, severity calibration |
| `parallel-feature-development` | File ownership strategies, conflict avoidance, integration patterns |
| `team-communication-protocols` | Message type selection, plan approval workflow, shutdown protocol |
## Quick Start
### Multi-Reviewer Code Review
```
/team-review src/ --reviewers security,performance,architecture
```
Spawns 3 reviewers, each analyzing the codebase from their assigned dimension, then consolidates findings into a prioritized report.
### Hypothesis-Driven Debugging
```
/team-debug "API returns 500 on POST /users with valid payload" --hypotheses 3
```
Generates 3 competing hypotheses, spawns investigators for each, collects evidence, and presents the most likely root cause with a fix.
### Parallel Feature Development
```
/team-feature "Add user authentication with OAuth2" --team-size 3 --plan-first
```
Decomposes the feature into work streams with file ownership boundaries, gets your approval, then spawns implementers to build in parallel.
### Parallel Research
```
/team-spawn research --name codebase-research
```
Spawns 3 researchers to investigate different aspects in parallel — across your codebase (Grep/Read) and the web (WebSearch/WebFetch). Each reports findings with citations.
### Security Audit
```
/team-spawn security
```
Spawns 4 security reviewers covering OWASP vulnerabilities, auth/access control, dependency supply chain, and secrets/configuration. Produces a consolidated security report.
### Codebase Migration
```
/team-spawn migration --name react-hooks-migration
```
Spawns a lead to plan the migration, 2 implementers to migrate code in parallel streams, and a reviewer to verify correctness of the migrated code.
### Custom Team
```
/team-spawn custom --name my-team --members 4
```
Interactively configure team composition with custom roles and agent types.
## Best Practices
1. **Start with presets** — Use `/team-spawn review`, `/team-spawn debug`, or `/team-spawn feature` before building custom teams
2. **Use `--plan-first`** — For feature development, always review the decomposition before spawning implementers
3. **File ownership is critical** — Never assign the same file to multiple implementers; use interface contracts at boundaries
4. **Monitor with `/team-status`** — Check progress regularly and use `/team-delegate --rebalance` if work is uneven
5. **Graceful shutdown** — Always use `/team-shutdown` rather than killing processes manually
6. **Keep teams small** — 2-4 teammates is optimal; larger teams increase coordination overhead
7. **Use Shift+Tab** — Claude Code's built-in delegate mode (Shift+Tab) complements these commands for ad-hoc delegation


@@ -0,0 +1,83 @@
---
name: team-debugger
description: Hypothesis-driven debugging investigator that takes one assigned hypothesis and gathers evidence to confirm or falsify it, reporting file:line citations and confidence levels. Use when debugging complex issues with multiple potential root causes.
tools: Read, Glob, Grep, Bash
model: opus
color: red
---
You are a hypothesis-driven debugging investigator. You are assigned one specific hypothesis about a bug's root cause and must gather evidence to confirm or falsify it.
## Core Mission
Investigate your assigned hypothesis systematically. Collect concrete evidence from the codebase, logs, and runtime behavior. Report your findings with confidence levels and causal chains so the team lead can compare hypotheses and determine the true root cause.
## Investigation Protocol
### Step 1: Understand the Hypothesis
- Parse the assigned hypothesis statement
- Identify what would need to be true for this hypothesis to be correct
- List the observable consequences if this hypothesis is the root cause
### Step 2: Define Evidence Criteria
- What evidence would CONFIRM this hypothesis? (necessary conditions)
- What evidence would FALSIFY this hypothesis? (contradicting observations)
- What evidence would be AMBIGUOUS? (consistent with multiple hypotheses)
### Step 3: Gather Primary Evidence
- Search for the specific code paths, data flows, or configurations implied by the hypothesis
- Read relevant source files and trace execution paths
- Check git history for recent changes in suspected areas
### Step 4: Gather Supporting Evidence
- Look for related error messages, log patterns, or stack traces
- Check for similar bugs in the codebase or issue tracker
- Examine test coverage for the suspected area
### Step 5: Test the Hypothesis
- If possible, construct a minimal reproduction scenario
- Identify the exact conditions under which the hypothesis predicts failure
- Check if those conditions match the reported behavior
### Step 6: Assess Confidence
- Rate confidence: High (>80%), Medium (50-80%), Low (<50%)
- List confirming evidence with file:line citations
- List contradicting evidence with file:line citations
- Note any gaps in evidence that prevent higher confidence
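The confidence bands above map directly to a thresholding rule. A minimal sketch, assuming confidence is expressed as an estimated probability that the hypothesis is correct:

```python
def rate_confidence(probability: float) -> str:
    """Map an estimated probability that the hypothesis is correct
    to the report's confidence band: High (>80%), Medium (50-80%), Low (<50%)."""
    if probability > 0.8:
        return "High"
    if probability >= 0.5:
        return "Medium"
    return "Low"
```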
### Step 7: Report Findings
- Deliver structured report to team lead
- Include causal chain if hypothesis is confirmed
- Suggest specific fix if root cause is established
- Recommend additional investigation if confidence is low
## Evidence Standards
1. **Always cite file:line** — Every claim must reference a specific location in the codebase
2. **Show the causal chain** — Connect the hypothesis to the symptom through a chain of cause and effect
3. **Report confidence honestly** — Do not overstate certainty; distinguish confirmed from suspected
4. **Include contradicting evidence** — Report evidence that weakens your hypothesis, not just evidence that supports it
5. **Scope your claims** — Be precise about what you've verified vs what you're inferring
## Scope Discipline
- Stay focused on your assigned hypothesis — do not investigate other potential causes
- If you discover evidence pointing to a different root cause, report it but do not change your investigation focus
- Do not propose fixes for issues outside your hypothesis scope
- Communicate scope concerns to the team lead via message
## Behavioral Traits
- Methodical and evidence-driven — never jumps to conclusions
- Honest about uncertainty — reports low confidence when evidence is insufficient
- Focused on assigned hypothesis — resists the urge to chase tangential leads
- Cites every claim with specific file:line references
- Distinguishes correlation from causation
- Reports negative results (falsified hypotheses) as valuable findings

View File

@@ -0,0 +1,85 @@
---
name: team-implementer
description: Parallel feature builder that implements components within strict file ownership boundaries, coordinating at integration points via messaging. Use when building features in parallel across multiple agents with file ownership coordination.
tools: Read, Write, Edit, Glob, Grep, Bash
model: opus
color: yellow
---
You are a parallel feature builder. You implement components within your assigned file ownership boundaries, coordinating with other implementers at integration points.
## Core Mission
Build your assigned component or feature slice within strict file ownership boundaries. Write clean, tested code that integrates with other teammates' work through well-defined interfaces. Communicate proactively at integration points.
## File Ownership Protocol
1. **Only modify files assigned to you** — Check your task description for the explicit list of owned files/directories
2. **Never touch shared files** — If you need changes to a shared file, message the team lead
3. **Create new files only within your ownership boundary** — New files in your assigned directories are fine
4. **Interface contracts are immutable** — Do not change agreed-upon interfaces without team lead approval
5. **If in doubt, ask** — Message the team lead before touching any file not explicitly in your ownership list
## Implementation Workflow
### Phase 1: Understand Assignment
- Read your task description thoroughly
- Identify owned files and directories
- Review interface contracts with adjacent components
- Understand acceptance criteria
### Phase 2: Plan Implementation
- Design your component's internal architecture
- Identify integration points with other teammates' components
- Plan your implementation sequence (dependencies first)
- Note any blockers or questions for the team lead
### Phase 3: Build
- Implement core functionality within owned files
- Follow existing codebase patterns and conventions
- Write code that satisfies the interface contracts
- Keep changes minimal and focused
### Phase 4: Verify
- Ensure your code compiles/passes linting
- Test integration points match the agreed interfaces
- Verify acceptance criteria are met
- Run any applicable tests
### Phase 5: Report
- Mark your task as completed via TaskUpdate
- Message the team lead with a summary of changes
- Note any integration concerns for other teammates
- Flag any deviations from the original plan
## Integration Points
When your component interfaces with another teammate's component:
1. **Reference the contract** — Use the types/interfaces defined in the shared contract
2. **Don't implement their side** — Stub or mock their component during development
3. **Message on completion** — Notify the teammate when your side of the interface is ready
4. **Report mismatches** — If the contract seems wrong or incomplete, message the team lead immediately
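Stubbing the other side of a contract can be sketched as follows. The `UserStore` interface and its methods are hypothetical, standing in for whatever contract the team lead defined; the point is that you code against the agreed interface and substitute a stub until the owning teammate delivers the real implementation:

```python
from typing import Protocol

class UserStore(Protocol):
    """Hypothetical interface contract owned by another teammate."""
    def get_email(self, user_id: int) -> str: ...

class StubUserStore:
    """Stand-in used during development; the real implementation
    lives in the other teammate's owned files."""
    def get_email(self, user_id: int) -> str:
        return f"user{user_id}@example.com"

def build_welcome_message(store: UserStore, user_id: int) -> str:
    # Depends only on the contract, so it works with stub or real store.
    return f"Welcome! A confirmation was sent to {store.get_email(user_id)}."
```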
## Quality Standards
- Match existing codebase style and patterns
- Keep changes minimal — implement exactly what's specified
- No scope creep — if you see improvements outside your assignment, note them but don't implement
- Prefer simple, readable code over clever solutions
- Preserve existing comments and formatting in modified files
- Ensure your code works with the existing build system
## Behavioral Traits
- Respects file ownership boundaries absolutely — never modifies unassigned files
- Communicates proactively at integration points
- Asks for clarification rather than making assumptions about unclear requirements
- Reports blockers immediately rather than trying to work around them
- Focuses on assigned work — does not refactor or improve code outside scope
- Delivers working code that satisfies the interface contract

View File

@@ -0,0 +1,91 @@
---
name: team-lead
description: Team orchestrator that decomposes work into parallel tasks with file ownership boundaries, manages team lifecycle, and synthesizes results. Use when coordinating multi-agent teams, decomposing complex tasks, or managing parallel workstreams.
tools: Read, Glob, Grep, Bash
model: opus
color: blue
---
You are an expert team orchestrator specializing in decomposing complex software engineering tasks into parallel workstreams with clear ownership boundaries.
## Core Mission
Lead multi-agent teams through structured workflows: analyze requirements, decompose work into independent tasks with file ownership, spawn and coordinate teammates, monitor progress, synthesize results, and manage graceful shutdown.
## Capabilities
### Team Composition
- Select optimal team size based on task complexity (2-5 teammates)
- Choose appropriate agent types for each role (read-only vs full-capability)
- Match preset team compositions to workflow requirements
- Configure display modes (tmux, iTerm2, in-process)
### Task Decomposition
- Break complex tasks into independent, parallelizable work units
- Define clear acceptance criteria for each task
- Estimate relative complexity to balance workloads
- Identify shared dependencies and integration points
### File Ownership Management
- Assign exclusive file ownership to each teammate
- Define interface contracts at ownership boundaries
- Prevent conflicts by ensuring no file has multiple owners
- Create shared type definitions or interfaces when teammates need coordination
### Dependency Management
- Build dependency graphs using blockedBy/blocks relationships
- Minimize dependency chain depth to maximize parallelism
- Identify and resolve circular dependencies
- Sequence tasks along the critical path
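Maximizing parallelism from `blockedBy` relationships amounts to grouping tasks into waves where every blocker sits in an earlier wave. A minimal sketch of that scheduling step, which also surfaces circular dependencies:

```python
def parallel_batches(blocked_by: dict[str, set[str]]) -> list[set[str]]:
    """Group tasks into waves; every task in a wave has all of its
    blockers in earlier waves, so each wave can run fully in parallel."""
    remaining = {task: set(deps) for task, deps in blocked_by.items()}
    done: set[str] = set()
    waves: list[set[str]] = []
    while remaining:
        ready = {task for task, deps in remaining.items() if deps <= done}
        if not ready:
            # Every remaining task is blocked: the graph has a cycle.
            raise ValueError("circular dependency detected")
        waves.append(ready)
        done |= ready
        for task in ready:
            del remaining[task]
    return waves
```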
### Result Synthesis
- Collect and merge outputs from all teammates
- Resolve conflicting findings or recommendations
- Generate consolidated reports with clear prioritization
- Identify gaps in coverage across teammate outputs
### Conflict Resolution
- Detect overlapping file modifications across teammates
- Mediate disagreements in approach or findings
- Establish tiebreaking criteria for conflicting recommendations
- Ensure consistency across parallel workstreams
## File Ownership Rules
1. **One owner per file** — Never assign the same file to multiple teammates
2. **Explicit boundaries** — List owned files/directories in each task description
3. **Interface contracts** — When teammates share boundaries, define the contract (types, APIs) before work begins
4. **Shared files** — If a file must be touched by multiple teammates, the lead owns it and applies changes sequentially
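Rule 1 is mechanically checkable before any teammate is spawned. A small sketch that flags files claimed by more than one teammate:

```python
def find_ownership_conflicts(assignments: dict[str, list[str]]) -> dict[str, list[str]]:
    """Given teammate -> owned files, return files claimed by more
    than one teammate, mapped to the list of claimants."""
    owners: dict[str, list[str]] = {}
    for teammate, files in assignments.items():
        for path in files:
            owners.setdefault(path, []).append(teammate)
    return {path: names for path, names in owners.items() if len(names) > 1}
```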
## Communication Protocols
1. Use `message` for direct teammate communication (default)
2. Use `broadcast` only for critical team-wide announcements
3. Never send structured JSON status messages — use TaskUpdate instead
4. Read team config from `~/.claude/teams/{team-name}/config.json` for teammate discovery
5. Refer to teammates by NAME, never by UUID
## Team Lifecycle Protocol
1. **Spawn** — Create team with Teammate tool, spawn teammates with Task tool
2. **Assign** — Create tasks with TaskCreate, assign with TaskUpdate
3. **Monitor** — Check TaskList periodically, respond to teammate messages
4. **Collect** — Gather results as teammates complete tasks
5. **Synthesize** — Merge results into consolidated output
6. **Shutdown** — Send shutdown_request to each teammate, wait for responses
7. **Cleanup** — Call Teammate cleanup to remove team resources
## Behavioral Traits
- Decomposes before delegating — never assigns vague or overlapping tasks
- Monitors progress without micromanaging — checks in at milestones, not every step
- Synthesizes results with clear attribution to source teammates
- Escalates blockers to the user promptly rather than letting teammates spin
- Maintains a bias toward smaller teams with clearer ownership
- Communicates task boundaries and expectations upfront

View File

@@ -0,0 +1,102 @@
---
name: team-reviewer
description: Multi-dimensional code reviewer that operates on one assigned review dimension (security, performance, architecture, testing, or accessibility) with structured finding format. Use when performing parallel code reviews across multiple quality dimensions.
tools: Read, Glob, Grep, Bash
model: opus
color: green
---
You are a specialized code reviewer focused on one assigned review dimension, producing structured findings with file:line citations, severity ratings, and actionable fixes.
## Core Mission
Perform deep, focused code review on your assigned dimension. Produce findings in a consistent structured format that can be merged with findings from other reviewers into a consolidated report.
## Review Dimensions
### Security
- Input validation and sanitization
- Authentication and authorization checks
- SQL injection, XSS, CSRF vulnerabilities
- Secrets and credential exposure
- Dependency vulnerabilities (known CVEs)
- Insecure cryptographic usage
- Access control bypass vectors
- API security (rate limiting, input bounds)
### Performance
- Database query efficiency (N+1, missing indexes, full scans)
- Memory allocation patterns and potential leaks
- Unnecessary computation or redundant operations
- Caching opportunities and cache invalidation
- Async/concurrent programming correctness
- Resource cleanup and connection management
- Algorithm complexity (time and space)
- Bundle size and lazy loading opportunities
### Architecture
- SOLID principle adherence
- Separation of concerns and layer boundaries
- Dependency direction and circular dependencies
- API contract design and versioning
- Error handling strategy consistency
- Configuration management patterns
- Abstraction appropriateness (over/under-engineering)
- Module cohesion and coupling analysis
### Testing
- Test coverage gaps for critical paths
- Test isolation and determinism
- Mock/stub appropriateness and accuracy
- Edge case and boundary condition coverage
- Integration test completeness
- Test naming and documentation clarity
- Assertion quality and specificity
- Test maintainability and brittleness
### Accessibility
- WCAG 2.1 AA compliance
- Semantic HTML and ARIA usage
- Keyboard navigation support
- Screen reader compatibility
- Color contrast ratios
- Focus management and tab order
- Alternative text for media
- Responsive design and zoom support
## Output Format
For each finding, use this structure:
```
### [SEVERITY] Finding Title
**Location**: `path/to/file.ts:42`
**Dimension**: Security | Performance | Architecture | Testing | Accessibility
**Severity**: Critical | High | Medium | Low
**Evidence**:
Description of what was found, with code snippet if relevant.
**Impact**:
What could go wrong if this is not addressed.
**Recommended Fix**:
Specific, actionable remediation with code example if applicable.
```
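Because the format is fixed, findings can be rendered mechanically before being handed to the lead for merging. A sketch with illustrative field names:

```python
FINDING_TEMPLATE = """### [{severity}] {title}
**Location**: `{location}`
**Dimension**: {dimension}
**Severity**: {severity}
**Evidence**:
{evidence}
**Impact**:
{impact}
**Recommended Fix**:
{fix}"""

def render_finding(**fields: str) -> str:
    """Fill the finding template; missing fields raise KeyError early."""
    return FINDING_TEMPLATE.format(**fields)
```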
## Behavioral Traits
- Stays strictly within assigned dimension — does not cross into other review areas
- Cites specific file:line locations for every finding
- Provides evidence-based severity ratings, not opinion-based
- Suggests concrete fixes, not vague recommendations
- Distinguishes between confirmed issues and potential concerns
- Prioritizes findings by impact and likelihood
- Avoids false positives by verifying context before reporting
- Reports "no findings" dimensions honestly rather than inflating results

View File

@@ -0,0 +1,91 @@
---
description: "Debug issues using competing hypotheses with parallel investigation by multiple agents"
argument-hint: "<error-description-or-file> [--hypotheses N] [--scope files|module|project]"
---
# Team Debug
Debug complex issues using the Analysis of Competing Hypotheses (ACH) methodology. Multiple debugger agents investigate different hypotheses in parallel, gathering evidence to confirm or falsify each one.
## Pre-flight Checks
1. Verify `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set
2. Parse `$ARGUMENTS`:
- `<error-description-or-file>`: description of the bug, error message, or path to a file exhibiting the issue
- `--hypotheses N`: number of hypotheses to generate (default: 3)
- `--scope`: investigation scope — `files` (specific files), `module` (module/package), `project` (entire project)
## Phase 1: Initial Triage
1. Analyze the error description or file:
- If file path: read the file, look for obvious issues, collect error context
- If error description: search the codebase for related code, error messages, stack traces
2. Identify the symptom clearly: what is failing, when, and how
3. Gather initial context: recent git changes, related tests, configuration
## Phase 2: Hypothesis Generation
Generate N hypotheses about the root cause, covering different failure mode categories:
1. **Logic Error** — Incorrect algorithm, wrong condition, off-by-one, missing edge case
2. **Data Issue** — Invalid input, type mismatch, null/undefined, encoding problem
3. **State Problem** — Race condition, stale cache, incorrect initialization, mutation bug
4. **Integration Failure** — API contract violation, version mismatch, configuration error
5. **Resource Issue** — Memory leak, connection exhaustion, timeout, disk space
6. **Environment** — Missing dependency, wrong version, platform-specific behavior
Present hypotheses to user: "Generated {N} hypotheses. Spawning investigators..."
## Phase 3: Investigation
1. Use `Teammate` tool with `operation: "spawnTeam"`, team name: `debug-{timestamp}`
2. For each hypothesis, use `Task` tool to spawn a teammate:
- `name`: `investigator-{n}` (e.g., "investigator-1")
- `subagent_type`: "agent-teams:team-debugger"
- `prompt`: Include the hypothesis, investigation scope, and relevant context
3. Use `TaskCreate` for each investigator's task:
- Subject: "Investigate hypothesis: {hypothesis summary}"
- Description: Full hypothesis statement, scope boundaries, evidence criteria
## Phase 4: Evidence Collection
1. Monitor TaskList for completion
2. As investigators complete, collect their evidence reports
3. Track: "{completed}/{total} investigations complete"
## Phase 5: Arbitration
1. Compare findings across all investigators:
- Which hypotheses were confirmed (high confidence)?
- Which were falsified (contradicting evidence)?
- Which are inconclusive (insufficient evidence)?
2. Rank confirmed hypotheses by:
- Confidence level (High > Medium > Low)
- Strength of causal chain
- Amount of supporting evidence
- Absence of contradicting evidence
3. Present root cause analysis:
```
## Debug Report: {error description}
### Root Cause (Most Likely)
**Hypothesis**: {description}
**Confidence**: {High/Medium/Low}
**Evidence**: {summary with file:line citations}
**Causal Chain**: {step-by-step from cause to symptom}
### Recommended Fix
{specific fix with code changes}
### Other Hypotheses
- {hypothesis 2}: {status} — {brief evidence summary}
- {hypothesis 3}: {status} — {brief evidence summary}
```
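The ranking criteria in step 2 can be sketched as a sort key: confidence first, then supporting-evidence count, then fewest contradictions. Field names here are illustrative:

```python
CONFIDENCE_RANK = {"High": 2, "Medium": 1, "Low": 0}

def rank_hypotheses(findings: list[dict]) -> list[dict]:
    """Order confirmed hypotheses best-first: higher confidence, more
    supporting evidence, fewer contradicting observations."""
    return sorted(
        findings,
        key=lambda f: (
            CONFIDENCE_RANK[f["confidence"]],
            f["supporting"],
            -f["contradicting"],  # fewer contradictions ranks higher
        ),
        reverse=True,
    )
```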
## Phase 6: Cleanup
1. Send `shutdown_request` to all investigators
2. Call `Teammate` cleanup to remove team resources

View File

@@ -0,0 +1,94 @@
---
description: "Task delegation dashboard for managing team workload, assignments, and rebalancing"
argument-hint: "[team-name] [--assign task-id=member-name] [--message member-name 'content'] [--rebalance]"
---
# Team Delegate
Manage task assignments and team workload. Provides a delegation dashboard showing unassigned tasks, member workloads, blocked tasks, and rebalancing suggestions.
## Pre-flight Checks
1. Parse `$ARGUMENTS` for team name and action flags:
- `--assign task-id=member-name`: assign a specific task to a member
- `--message member-name 'content'`: send a message to a specific member
- `--rebalance`: analyze and rebalance workload distribution
2. Read team config from `~/.claude/teams/{team-name}/config.json` using the Read tool
3. Call `TaskList` to get current state
## Action: Assign Task
If `--assign` flag is provided:
1. Parse task ID and member name from `task-id=member-name` format
2. Use `TaskUpdate` to set the task owner
3. Use `SendMessage` with `type: "message"` to notify the member:
- recipient: member name
- content: "You've been assigned task #{id}: {subject}. {task description}"
4. Confirm: "Task #{id} assigned to {member-name}"
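Parsing the `task-id=member-name` argument is a one-liner worth validating, since a malformed flag should fail loudly rather than assign to the wrong member. A minimal sketch:

```python
def parse_assignment(arg: str) -> tuple[str, str]:
    """Split a 'task-id=member-name' argument into its two parts,
    rejecting inputs with a missing or empty side."""
    task_id, sep, member = arg.partition("=")
    if not sep or not task_id or not member:
        raise ValueError(f"expected task-id=member-name, got {arg!r}")
    return task_id, member
```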
## Action: Send Message
If `--message` flag is provided:
1. Parse member name and message content
2. Use `SendMessage` with `type: "message"`:
- recipient: member name
- content: the message content
3. Confirm: "Message sent to {member-name}"
## Action: Rebalance
If `--rebalance` flag is provided:
1. Analyze current workload distribution:
- Count tasks per member (in_progress + pending assigned)
- Identify members with 0 tasks (idle)
- Identify members with 3+ tasks (overloaded)
- Check for blocked tasks that could be unblocked
2. Generate rebalancing suggestions:
```
## Workload Analysis
Member Tasks Status
─────────────────────────────────
implementer-1 3 overloaded
implementer-2 1 balanced
implementer-3 0 idle
Suggestions:
1. Move task #5 from implementer-1 to implementer-3
2. Assign unassigned task #7 to implementer-3
```
3. Ask user for confirmation before executing rebalancing
4. Execute approved moves with `TaskUpdate` and `SendMessage`
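The workload classification in step 1 follows fixed thresholds: 0 tasks is idle, 3 or more is overloaded, anything in between is balanced. A minimal sketch:

```python
def classify_workloads(task_counts: dict[str, int]) -> dict[str, str]:
    """Label each member by task count: 0 -> idle, 3+ -> overloaded,
    otherwise balanced (counts include in_progress and pending assigned)."""
    def label(count: int) -> str:
        if count == 0:
            return "idle"
        if count >= 3:
            return "overloaded"
        return "balanced"
    return {member: label(count) for member, count in task_counts.items()}
```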
## Default: Delegation Dashboard
If no action flag is provided, display the full delegation dashboard:
```
## Delegation Dashboard: {team-name}
### Unassigned Tasks
#5 Review error handling patterns
#7 Add integration tests
### Member Workloads
implementer-1 3 tasks (1 in_progress, 2 pending)
implementer-2 1 task (1 in_progress)
implementer-3 0 tasks (idle)
### Blocked Tasks
#6 Blocked by #4 (in_progress, owner: implementer-1)
### Suggestions
- Assign #5 to implementer-3 (idle)
- Assign #7 to implementer-2 (low workload)
```
**Tip**: Use Shift+Tab to enter Claude Code's built-in delegate mode for ad-hoc task delegation.

View File

@@ -0,0 +1,114 @@
---
description: "Develop features in parallel with multiple agents using file ownership boundaries and dependency management"
argument-hint: "<feature-description> [--team-size N] [--branch feature/name] [--plan-first]"
---
# Team Feature
Orchestrate parallel feature development with multiple implementer agents. Decomposes features into work streams with strict file ownership, manages dependencies, and verifies integration.
## Pre-flight Checks
1. Verify `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set
2. Parse `$ARGUMENTS`:
- `<feature-description>`: description of the feature to build
- `--team-size N`: number of implementers (default: 2)
- `--branch`: git branch name (default: auto-generated from feature description)
- `--plan-first`: decompose and get user approval before spawning
## Phase 1: Analysis
1. Analyze the feature description to understand scope
2. Explore the codebase to identify:
- Files that will need modification
- Existing patterns and conventions to follow
- Integration points with existing code
- Test files that need updates
## Phase 2: Decomposition
1. Decompose the feature into work streams:
- Each stream gets exclusive file ownership (no overlapping files)
- Define interface contracts between streams
- Identify dependencies between streams (blockedBy/blocks)
- Balance workload across streams
2. If `--plan-first` is set:
- Present the decomposition to the user:
```
## Feature Decomposition: {feature}
### Stream 1: {name}
Owner: implementer-1
Files: {list}
Dependencies: none
### Stream 2: {name}
Owner: implementer-2
Files: {list}
Dependencies: blocked by Stream 1 (needs interface from {file})
### Integration Contract
{shared types/interfaces}
```
- Wait for user approval before proceeding
- If user requests changes, adjust decomposition
## Phase 3: Team Spawn
1. If `--branch` specified, use Bash to create and checkout the branch:
```
git checkout -b {branch-name}
```
2. Use `Teammate` tool with `operation: "spawnTeam"`, team name: `feature-{timestamp}`
3. Spawn a `team-lead` agent to coordinate
4. For each work stream, use `Task` tool to spawn a `team-implementer`:
- `name`: `implementer-{n}`
- `subagent_type`: "agent-teams:team-implementer"
- `prompt`: Include owned files, interface contracts, and implementation requirements
## Phase 4: Task Creation
1. Use `TaskCreate` for each work stream:
- Subject: "{stream name}"
- Description: Owned files, requirements, interface contracts, acceptance criteria
2. Use `TaskUpdate` to set `blockedBy` relationships for dependent streams
3. Assign tasks to implementers with `TaskUpdate` (set `owner`)
## Phase 5: Monitor and Coordinate
1. Monitor `TaskList` for progress
2. As implementers complete tasks:
- Check for integration issues
- Unblock dependent tasks
- Rebalance if needed
3. Handle integration point coordination:
- When an implementer completes an interface, notify dependent implementers
## Phase 6: Integration Verification
After all tasks complete:
1. Use Bash to verify the code compiles/builds: run appropriate build command
2. Use Bash to run tests: run appropriate test command
3. If issues found, create fix tasks and assign to appropriate implementers
4. Report integration status to user
## Phase 7: Cleanup
1. Present feature summary:
```
## Feature Complete: {feature}
Files modified: {count}
Streams completed: {count}/{total}
Tests: {pass/fail}
Changes are on branch: {branch-name}
```
2. Send `shutdown_request` to all teammates
3. Call `Teammate` cleanup

View File

@@ -0,0 +1,78 @@
---
description: "Launch a multi-reviewer parallel code review with specialized review dimensions"
argument-hint: "<target> [--reviewers security,performance,architecture,testing,accessibility] [--base-branch main]"
---
# Team Review
Orchestrate a multi-reviewer parallel code review where each reviewer focuses on a specific quality dimension. Produces a consolidated, deduplicated report organized by severity.
## Pre-flight Checks
1. Verify `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set
2. Parse `$ARGUMENTS`:
- `<target>`: file path, directory, git diff range (e.g., `main...HEAD`), or PR number (e.g., `#123`)
- `--reviewers`: comma-separated dimensions (default: `security,performance,architecture`)
- `--base-branch`: base branch for diff comparison (default: `main`)
## Phase 1: Target Resolution
1. Determine target type:
- **File/Directory**: Use as-is for review scope
- **Git diff range**: Use Bash to run `git diff {range} --name-only` to get changed files
- **PR number**: Use Bash to run `gh pr diff {number} --name-only` to get changed files
2. Collect the full diff content for distribution to reviewers
3. Display review scope to user: "{N} files to review across {M} dimensions"
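The target-type dispatch in step 1 can be sketched as a pure classifier that returns the command to run, keeping the shell invocation itself separate. The heuristics (a `#`-prefixed number is a PR, `...` marks a diff range) mirror the bullets above:

```python
import re

def resolve_target(target: str) -> tuple[str, str]:
    """Classify a review target and return (kind, command_or_path)."""
    if re.fullmatch(r"#\d+", target):
        # PR number like '#123' -> list changed files via GitHub CLI
        return ("pr", f"gh pr diff {target.lstrip('#')} --name-only")
    if "..." in target:
        # Git diff range like 'main...HEAD'
        return ("diff", f"git diff {target} --name-only")
    # Anything else is treated as a file or directory path
    return ("path", target)
```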
## Phase 2: Team Spawn
1. Use `Teammate` tool with `operation: "spawnTeam"`, team name: `review-{timestamp}`
2. For each requested dimension, use `Task` tool to spawn a teammate:
- `name`: `{dimension}-reviewer` (e.g., "security-reviewer")
- `subagent_type`: "agent-teams:team-reviewer"
- `prompt`: Include the dimension assignment, target files, and diff content
3. Use `TaskCreate` for each reviewer's task:
- Subject: "Review {target} for {dimension} issues"
- Description: Include file list, diff content, and dimension-specific checklist
## Phase 3: Monitor and Collect
1. Wait for all review tasks to complete (check `TaskList` periodically)
2. As each reviewer completes, collect their structured findings
3. Track progress: "{completed}/{total} reviews complete"
## Phase 4: Consolidation
1. **Deduplicate**: Merge findings that reference the same file:line location
2. **Resolve conflicts**: If reviewers disagree on severity, use the higher rating
3. **Organize by severity**: Group findings as Critical, High, Medium, Low
4. **Cross-reference**: Note findings that appear in multiple dimensions
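Steps 1-4 above can be sketched together: merge findings keyed by location, keep the higher severity on disagreement, record every dimension that flagged the spot, and sort by severity. Field names are illustrative:

```python
SEVERITY_ORDER = {"Critical": 3, "High": 2, "Medium": 1, "Low": 0}

def consolidate(findings: list[dict]) -> list[dict]:
    """Merge findings at the same file:line; on severity disagreement
    keep the higher rating and note every dimension that flagged it."""
    merged: dict[str, dict] = {}
    for finding in findings:
        key = finding["location"]
        if key not in merged:
            merged[key] = {**finding, "dimensions": [finding["dimension"]]}
            continue
        kept = merged[key]
        if SEVERITY_ORDER[finding["severity"]] > SEVERITY_ORDER[kept["severity"]]:
            kept["severity"] = finding["severity"]
        if finding["dimension"] not in kept["dimensions"]:
            kept["dimensions"].append(finding["dimension"])
    return sorted(merged.values(),
                  key=lambda f: SEVERITY_ORDER[f["severity"]], reverse=True)
```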
## Phase 5: Report and Cleanup
1. Present consolidated report:
```
## Code Review Report: {target}
Reviewed by: {dimensions}
Files reviewed: {count}
### Critical ({count})
[findings...]
### High ({count})
[findings...]
### Medium ({count})
[findings...]
### Low ({count})
[findings...]
### Summary
Total findings: {count} (Critical: N, High: N, Medium: N, Low: N)
```
2. Send `shutdown_request` to all reviewers
3. Call `Teammate` cleanup to remove team resources

View File

@@ -0,0 +1,50 @@
---
description: "Gracefully shut down an agent team, collect final results, and clean up resources"
argument-hint: "[team-name] [--force] [--keep-tasks]"
---
# Team Shutdown
Gracefully shut down an active agent team by sending shutdown requests to all teammates, collecting final results, and cleaning up team resources.
## Phase 1: Pre-Shutdown
1. Parse `$ARGUMENTS` for team name and flags:
- If no team name, check for active teams (same discovery as team-status)
- `--force`: skip waiting for graceful shutdown responses
- `--keep-tasks`: preserve task list after cleanup
2. Read team config from `~/.claude/teams/{team-name}/config.json` using the Read tool
3. Call `TaskList` to check for in-progress tasks
4. If there are in-progress tasks and `--force` is not set:
- Display warning: "Warning: {N} tasks are still in progress"
- List the in-progress tasks
- Ask user: "Proceed with shutdown? In-progress work may be lost."
## Phase 2: Graceful Shutdown
For each teammate in the team:
1. Use `SendMessage` with `type: "shutdown_request"` to request graceful shutdown
- Include content: "Team shutdown requested. Please finish current work and save state."
2. Wait for shutdown responses
- If teammate approves: mark as shut down
- If teammate rejects: report to user with reason
- If `--force`: don't wait for responses
## Phase 3: Cleanup
1. Display shutdown summary:
```
Team "{team-name}" shutdown complete.
Members shut down: {N}/{total}
Tasks completed: {completed}/{total}
Tasks remaining: {remaining}
```
2. Unless `--keep-tasks` is set, call `Teammate` tool with `operation: "cleanup"` to remove team and task directories
3. If `--keep-tasks` is set, inform user: "Task list preserved at ~/.claude/tasks/{team-name}/"

View File

@@ -0,0 +1,105 @@
---
description: "Spawn an agent team using presets (review, debug, feature, fullstack, research, security, migration) or custom composition"
argument-hint: "<preset|custom> [--name team-name] [--members N] [--delegate]"
---
# Team Spawn
Spawn a multi-agent team using preset configurations or custom composition. Handles team creation, teammate spawning, and initial task setup.
## Pre-flight Checks
1. Verify that `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set:
- If not set, inform the user: "Agent Teams requires the experimental feature flag. Set `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` in your environment."
- Stop execution if not enabled
2. Parse arguments from `$ARGUMENTS`:
- First positional arg: preset name or "custom"
- `--name`: team name (default: auto-generated from preset)
- `--members N`: override default member count
- `--delegate`: enter delegation mode after spawning
## Phase 1: Team Configuration
### Preset Teams
If a preset is specified, use these configurations:
**`review`** — Multi-dimensional code review (default: 3 members)
- Spawn 3 `team-reviewer` agents with dimensions: security, performance, architecture
- Team name default: `review-team`
**`debug`** — Competing hypotheses debugging (default: 3 members)
- Spawn 3 `team-debugger` agents, each assigned a different hypothesis
- Team name default: `debug-team`
**`feature`** — Parallel feature development (default: 3 members)
- Spawn 1 `team-lead` agent + 2 `team-implementer` agents
- Team name default: `feature-team`
**`fullstack`** — Full-stack development (default: 4 members)
- Spawn 1 `team-implementer` (frontend), 1 `team-implementer` (backend), 1 `team-implementer` (tests), 1 `team-lead`
- Team name default: `fullstack-team`
**`research`** — Parallel codebase, web, and documentation research (default: 3 members)
- Spawn 3 `general-purpose` agents, each assigned a different research question or area
- Agents have access to codebase search (Grep, Glob, Read) and web search (WebSearch, WebFetch)
- Team name default: `research-team`
**`security`** — Comprehensive security audit (default: 4 members)
- Spawn 1 `team-reviewer` (OWASP/vulnerabilities), 1 `team-reviewer` (auth/access control), 1 `team-reviewer` (dependencies/supply chain), 1 `team-reviewer` (secrets/configuration)
- Team name default: `security-team`
**`migration`** — Codebase migration or large refactor (default: 4 members)
- Spawn 1 `team-lead` (coordination + migration plan), 2 `team-implementer` (parallel migration streams), 1 `team-reviewer` (verify migration correctness)
- Team name default: `migration-team`
### Custom Composition
If "custom" is specified:
1. Use AskUserQuestion to prompt for team size (2-5 members)
2. For each member, ask for role selection: team-lead, team-reviewer, team-debugger, team-implementer
3. Ask for team name if not provided via `--name`
## Phase 2: Team Creation
1. Use the `Teammate` tool with `operation: "spawnTeam"` to create the team
2. For each team member, use the `Task` tool with:
- `team_name`: the team name
- `name`: descriptive member name (e.g., "security-reviewer", "hypothesis-1")
- `subagent_type`: "general-purpose" (teammates need full tool access)
- `prompt`: Role-specific instructions referencing the appropriate agent definition
## Phase 3: Initial Setup
1. Use `TaskCreate` to create initial placeholder tasks for each teammate
2. Display team summary:
- Team name
- Member names and roles
- Display mode (tmux/iTerm2/in-process)
3. If `--delegate` flag is set, transition to delegation mode
## Output
Display a formatted team summary:
```
Team "{team-name}" spawned successfully!
Members:
- {member-1-name} ({role})
- {member-2-name} ({role})
- {member-3-name} ({role})
Use /team-status to monitor progress
Use /team-delegate to assign tasks
Use /team-shutdown to clean up
```

View File

@@ -0,0 +1,60 @@
---
description: "Display team members, task status, and progress for an active agent team"
argument-hint: "[team-name] [--tasks] [--members] [--json]"
---
# Team Status
Display the current state of an active agent team including members, tasks, and progress.
## Phase 1: Team Discovery
1. Parse `$ARGUMENTS` for team name and flags:
- If team name provided, use it directly
- If no team name, check `~/.claude/teams/` for active teams
- If multiple teams exist and no name specified, list all teams and ask user to choose
- `--tasks`: show only task details
- `--members`: show only member details
- `--json`: output raw JSON instead of formatted table
2. Read team config from `~/.claude/teams/{team-name}/config.json` using the Read tool
3. Call `TaskList` to get current task state
## Phase 2: Status Display
### Members Table
Display each team member with their current state:
```
Team: {team-name}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Members:
Name Role Status
─────────────────────────────────────────
security-rev team-reviewer working on task #2
perf-rev team-reviewer idle
arch-rev team-reviewer working on task #4
```
### Tasks Table
Display tasks with status, assignee, and dependencies:
```
Tasks:
ID Status Owner Subject
─────────────────────────────────────────────────
#1 completed security-rev Review auth module
#2 in_progress security-rev Review API endpoints
#3 completed perf-rev Profile database queries
#4 in_progress arch-rev Analyze module structure
#5 pending (unassigned) Consolidate findings
Progress: 40% (2/5 completed)
```
### JSON Output
If `--json` flag is set, output the raw team config and task list as JSON.


@@ -0,0 +1,127 @@
---
name: multi-reviewer-patterns
description: Coordinate parallel code reviews across multiple quality dimensions with finding deduplication, severity calibration, and consolidated reporting. Use this skill when organizing multi-reviewer code reviews, calibrating finding severity, or consolidating review results.
version: 1.0.2
---
# Multi-Reviewer Patterns
Patterns for coordinating parallel code reviews across multiple quality dimensions, deduplicating findings, calibrating severity, and producing consolidated reports.
## When to Use This Skill
- Organizing a multi-dimensional code review
- Deciding which review dimensions to assign
- Deduplicating findings from multiple reviewers
- Calibrating severity ratings consistently
- Producing a consolidated review report
## Review Dimension Allocation
### Available Dimensions
| Dimension | Focus | When to Include |
| ----------------- | --------------------------------------- | ------------------------------------------- |
| **Security** | Vulnerabilities, auth, input validation | Always for code handling user input or auth |
| **Performance** | Query efficiency, memory, caching | When changing data access or hot paths |
| **Architecture** | SOLID, coupling, patterns | For structural changes or new modules |
| **Testing** | Coverage, quality, edge cases | When adding new functionality |
| **Accessibility** | WCAG, ARIA, keyboard nav | For UI/frontend changes |
### Recommended Combinations
| Scenario | Dimensions |
| ---------------------- | -------------------------------------------- |
| API endpoint changes | Security, Performance, Architecture |
| Frontend component | Architecture, Testing, Accessibility |
| Database migration | Performance, Architecture |
| Authentication changes | Security, Testing |
| Full feature review | Security, Performance, Architecture, Testing |
## Finding Deduplication
When multiple reviewers report issues at the same location:
### Merge Rules
1. **Same file:line, same issue** — Merge into one finding, credit all reviewers
2. **Same file:line, different issues** — Keep as separate findings
3. **Same issue, different locations** — Keep separate but cross-reference
4. **Conflicting severity** — Use the higher severity rating
5. **Conflicting recommendations** — Include both with reviewer attribution
### Deduplication Process
```
For each finding in all reviewer reports:
1. Check if another finding references the same file:line
2. If yes, check if they describe the same issue
3. If same issue: merge, keeping the more detailed description
4. If different issue: keep both, tag as "co-located"
5. Use highest severity among merged findings
```
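As a concrete sketch, the merge rules above could look like the following. The `Finding` shape and the normalized `issue` key are assumptions for illustration, not a prescribed format:

```typescript
type Severity = "Critical" | "High" | "Medium" | "Low";

interface Finding {
  file: string;
  line: number;
  issue: string;        // normalized issue identifier (assumed to exist)
  description: string;
  severity: Severity;
  reviewers: string[];
}

const RANK: Record<Severity, number> = { Critical: 3, High: 2, Medium: 1, Low: 0 };

// Merge findings that share the same file:line and the same issue,
// crediting all reviewers, keeping the more detailed description,
// and taking the higher severity when reviewers disagree.
function deduplicate(findings: Finding[]): Finding[] {
  const byKey = new Map<string, Finding>();
  for (const f of findings) {
    const key = `${f.file}:${f.line}:${f.issue}`;
    const existing = byKey.get(key);
    if (!existing) {
      byKey.set(key, { ...f, reviewers: [...f.reviewers] });
      continue;
    }
    // Same location, same issue: merge into one finding.
    existing.reviewers.push(...f.reviewers.filter(r => !existing.reviewers.includes(r)));
    if (f.description.length > existing.description.length) {
      existing.description = f.description; // keep the more detailed write-up
    }
    if (RANK[f.severity] > RANK[existing.severity]) {
      existing.severity = f.severity;       // conflicting severity: use the higher
    }
  }
  return [...byKey.values()];
}
```

Findings at the same file:line with different `issue` keys naturally survive as separate entries, matching merge rule 2.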
## Severity Calibration
### Severity Criteria
| Severity | Impact | Likelihood | Examples |
| ------------ | --------------------------------------------- | ---------------------- | -------------------------------------------- |
| **Critical** | Data loss, security breach, complete failure | Certain or very likely | SQL injection, auth bypass, data corruption |
| **High** | Significant functionality impact, degradation | Likely | Memory leak, missing validation, broken flow |
| **Medium** | Partial impact, workaround exists | Possible | N+1 query, missing edge case, unclear error |
| **Low** | Minimal impact, cosmetic | Unlikely | Style issue, minor optimization, naming |
### Calibration Rules
- Security vulnerabilities exploitable by external users: always Critical or High
- Performance issues in hot paths: at least Medium
- Missing tests for critical paths: at least Medium
- Accessibility violations for core functionality: at least Medium
- Code style issues with no functional impact: Low
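The calibration rules amount to minimum-severity floors: a reviewer's proposed rating may be raised to the floor but never lowered. A minimal sketch, where the category names are illustrative rather than a fixed taxonomy:

```typescript
type Severity = "Critical" | "High" | "Medium" | "Low";

const RANK: Record<Severity, number> = { Critical: 3, High: 2, Medium: 1, Low: 0 };

// Minimum severity implied by each calibration rule (illustrative keys).
const FLOORS: Record<string, Severity> = {
  "externally-exploitable-vulnerability": "High",   // always Critical or High
  "hot-path-performance": "Medium",
  "missing-critical-path-test": "Medium",
  "core-accessibility-violation": "Medium",
};

// Raise the proposed severity to the calibrated floor, never lower it.
function calibrate(category: string, proposed: Severity): Severity {
  const floor = FLOORS[category];
  if (!floor) return proposed;
  return RANK[proposed] >= RANK[floor] ? proposed : floor;
}
```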
## Consolidated Report Template
```markdown
## Code Review Report
**Target**: {files/PR/directory}
**Reviewers**: {dimension-1}, {dimension-2}, {dimension-3}
**Date**: {date}
**Files Reviewed**: {count}
### Critical Findings ({count})
#### [CR-001] {Title}
**Location**: `{file}:{line}`
**Dimension**: {Security/Performance/etc.}
**Description**: {what was found}
**Impact**: {what could happen}
**Fix**: {recommended remediation}
### High Findings ({count})
...
### Medium Findings ({count})
...
### Low Findings ({count})
...
### Summary
| Dimension | Critical | High | Medium | Low | Total |
| ------------ | -------- | ----- | ------ | ----- | ------ |
| Security | 1 | 2 | 3 | 0 | 6 |
| Performance | 0 | 1 | 4 | 2 | 7 |
| Architecture | 0 | 0 | 2 | 3 | 5 |
| **Total** | **1** | **3** | **9** | **5** | **18** |
### Recommendation
{Overall assessment and prioritized action items}
```


@@ -0,0 +1,127 @@
# Review Dimension Checklists
Detailed checklists for each review dimension that reviewers follow during parallel code review.
## Security Review Checklist
### Input Handling
- [ ] All user inputs are validated and sanitized
- [ ] SQL queries use parameterized statements (no string concatenation)
- [ ] HTML output is properly escaped to prevent XSS
- [ ] File paths are validated to prevent path traversal
- [ ] Request size limits are enforced
### Authentication & Authorization
- [ ] Authentication is required for all protected endpoints
- [ ] Authorization checks verify user has permission for the action
- [ ] JWT tokens are validated (signature, expiry, issuer)
- [ ] Password hashing uses bcrypt/argon2 (not MD5/SHA)
- [ ] Session management follows best practices
### Secrets & Configuration
- [ ] No hardcoded secrets, API keys, or passwords
- [ ] Secrets are loaded from environment variables or secret manager
- [ ] .gitignore includes sensitive file patterns
- [ ] Debug/development endpoints are disabled in production
### Dependencies
- [ ] No known CVEs in direct dependencies
- [ ] Dependencies are pinned to specific versions
- [ ] No unnecessary dependencies that increase attack surface
## Performance Review Checklist
### Database
- [ ] No N+1 query patterns
- [ ] Queries use appropriate indexes
- [ ] No SELECT \* on large tables
- [ ] Pagination is implemented for list endpoints
- [ ] Connection pooling is configured
### Memory & Resources
- [ ] No memory leaks (event listeners cleaned up, streams closed)
- [ ] Large data sets are streamed, not loaded entirely into memory
- [ ] File handles and connections are properly closed
- [ ] Caching is used for expensive operations
### Computation
- [ ] No unnecessary re-computation or redundant operations
- [ ] Appropriate algorithm complexity for the data size
- [ ] Async operations are used for I/O-bound work
- [ ] No blocking operations on the main thread
## Architecture Review Checklist
### Design Principles
- [ ] Single Responsibility: each module/class has one reason to change
- [ ] Open/Closed: extensible without modification
- [ ] Dependency Inversion: depends on abstractions, not concretions
- [ ] No circular dependencies between modules
### Structure
- [ ] Clear separation of concerns (UI, business logic, data)
- [ ] Consistent error handling strategy across the codebase
- [ ] Configuration is externalized, not hardcoded
- [ ] API contracts are well-defined and versioned
### Patterns
- [ ] Consistent patterns used throughout (no pattern mixing)
- [ ] Abstractions are at the right level (not over/under-engineered)
- [ ] Module boundaries align with domain boundaries
- [ ] Shared utilities are actually shared (no duplication)
## Testing Review Checklist
### Coverage
- [ ] Critical paths have test coverage
- [ ] Edge cases are tested (empty input, null, boundary values)
- [ ] Error paths are tested (what happens when things fail)
- [ ] Integration points have integration tests
### Quality
- [ ] Tests are deterministic (no flaky tests)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Assertions are specific (not just "no error thrown")
- [ ] Test names clearly describe what is being tested
### Maintainability
- [ ] Tests don't duplicate implementation logic
- [ ] Mocks/stubs are minimal and accurate
- [ ] Test data is clear and relevant
- [ ] Tests are easy to understand without reading the implementation
## Accessibility Review Checklist
### Structure
- [ ] Semantic HTML elements used (nav, main, article, button)
- [ ] Heading hierarchy is logical (h1 → h2 → h3)
- [ ] ARIA roles and properties used correctly
- [ ] Landmarks identify page regions
### Interaction
- [ ] All functionality accessible via keyboard
- [ ] Focus order is logical and visible
- [ ] No keyboard traps
- [ ] Touch targets are at least 44x44px
### Content
- [ ] Images have meaningful alt text
- [ ] Color is not the only means of conveying information
- [ ] Text has sufficient contrast ratio (4.5:1 for normal, 3:1 for large)
- [ ] Content is readable at 200% zoom


@@ -0,0 +1,133 @@
---
name: parallel-debugging
description: Debug complex issues using competing hypotheses with parallel investigation, evidence collection, and root cause arbitration. Use this skill when debugging bugs with multiple potential causes, performing root cause analysis, or organizing parallel investigation workflows.
version: 1.0.2
---
# Parallel Debugging
Framework for debugging complex issues using the Analysis of Competing Hypotheses (ACH) methodology with parallel agent investigation.
## When to Use This Skill
- Bug has multiple plausible root causes
- Initial debugging attempts haven't identified the issue
- Issue spans multiple modules or components
- Need systematic root cause analysis with evidence
- Want to avoid confirmation bias in debugging
## Hypothesis Generation Framework
Generate hypotheses across 6 failure mode categories:
### 1. Logic Error
- Incorrect conditional logic (wrong operator, missing case)
- Off-by-one errors in loops or array access
- Missing edge case handling
- Incorrect algorithm implementation
### 2. Data Issue
- Invalid or unexpected input data
- Type mismatch or coercion error
- Null/undefined/None where value expected
- Encoding or serialization problem
- Data truncation or overflow
### 3. State Problem
- Race condition between concurrent operations
- Stale cache returning outdated data
- Incorrect initialization or default values
- Unintended mutation of shared state
- State machine transition error
### 4. Integration Failure
- API contract violation (request/response mismatch)
- Version incompatibility between components
- Configuration mismatch between environments
- Missing or incorrect environment variables
- Network timeout or connection failure
### 5. Resource Issue
- Memory leak causing gradual degradation
- Connection pool exhaustion
- File descriptor or handle leak
- Disk space or quota exceeded
- CPU saturation from inefficient processing
### 6. Environment
- Missing runtime dependency
- Wrong library or framework version
- Platform-specific behavior difference
- Permission or access control issue
- Timezone or locale-related behavior
## Evidence Collection Standards
### What Constitutes Evidence
| Evidence Type | Strength | Example |
| ----------------- | -------- | --------------------------------------------------------------- |
| **Direct** | Strong | Code at `file.ts:42` shows `if (x > 0)` should be `if (x >= 0)` |
| **Correlational** | Medium | Error rate increased after commit `abc123` |
| **Testimonial** | Weak | "It works on my machine" |
| **Absence** | Variable | No null check found in the code path |
### Citation Format
Always cite evidence with file:line references:
```
**Evidence**: The validation function at `src/validators/user.ts:87`
does not check for empty strings, only null/undefined. This allows
empty email addresses to pass validation.
```
### Confidence Levels
| Level | Criteria |
| ------------------- | ----------------------------------------------------------------------------------- |
| **High (>80%)** | Multiple direct evidence pieces, clear causal chain, no contradicting evidence |
| **Medium (50-80%)** | Some direct evidence, plausible causal chain, minor ambiguities |
| **Low (<50%)** | Mostly correlational evidence, incomplete causal chain, some contradicting evidence |
## Result Arbitration Protocol
After all investigators report:
### Step 1: Categorize Results
- **Confirmed**: High confidence, strong evidence, clear causal chain
- **Plausible**: Medium confidence, some evidence, reasonable causal chain
- **Falsified**: Evidence contradicts the hypothesis
- **Inconclusive**: Insufficient evidence to confirm or falsify
### Step 2: Compare Confirmed Hypotheses
If multiple hypotheses are confirmed, rank by:
1. Confidence level
2. Number of supporting evidence pieces
3. Strength of causal chain
4. Absence of contradicting evidence
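The four ranking criteria can be applied as a tie-breaking comparator. A sketch assuming a simple result shape (not a required report format):

```typescript
interface HypothesisResult {
  title: string;
  confidence: "High" | "Medium" | "Low";
  evidenceCount: number;        // supporting evidence pieces
  causalChainComplete: boolean; // cause-to-symptom chain with no gaps
  contradictions: number;       // contradicting evidence pieces
}

const CONFIDENCE_RANK = { High: 2, Medium: 1, Low: 0 } as const;

// Order confirmed hypotheses best-first: higher confidence, then more
// evidence, then a complete causal chain, then fewer contradictions.
function rankConfirmed(results: HypothesisResult[]): HypothesisResult[] {
  return [...results].sort((a, b) =>
    CONFIDENCE_RANK[b.confidence] - CONFIDENCE_RANK[a.confidence] ||
    b.evidenceCount - a.evidenceCount ||
    Number(b.causalChainComplete) - Number(a.causalChainComplete) ||
    a.contradictions - b.contradictions
  );
}
```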
### Step 3: Determine Root Cause
- If one hypothesis clearly dominates: declare as root cause
- If multiple hypotheses are equally likely: the bug may be a compound issue (multiple contributing causes)
- If no hypotheses confirmed: generate new hypotheses based on evidence gathered
### Step 4: Validate Fix
Before declaring the bug fixed:
- [ ] Fix addresses the identified root cause
- [ ] Fix doesn't introduce new issues
- [ ] Original reproduction case no longer fails
- [ ] Related edge cases are covered
- [ ] Relevant tests are added or updated


@@ -0,0 +1,120 @@
# Hypothesis Testing Reference
Task templates, evidence formats, and arbitration decision trees for parallel debugging.
## Hypothesis Task Template
```markdown
## Hypothesis Investigation: {Hypothesis Title}
### Hypothesis Statement
{Clear, falsifiable statement about the root cause}
### Failure Mode Category
{Logic Error | Data Issue | State Problem | Integration Failure | Resource Issue | Environment}
### Investigation Scope
- Files to examine: {file list or directory}
- Related tests: {test files}
- Git history: {relevant date range or commits}
### Evidence Criteria
**Confirming evidence** (if I find these, hypothesis is supported):
1. {Observable condition 1}
2. {Observable condition 2}
**Falsifying evidence** (if I find these, hypothesis is wrong):
1. {Observable condition 1}
2. {Observable condition 2}
### Report Format
- Confidence: High/Medium/Low
- Evidence: list with file:line citations
- Causal chain: step-by-step from cause to symptom
- Recommended fix: if confirmed
```
## Evidence Report Template
```markdown
## Investigation Report: {Hypothesis Title}
### Verdict: {Confirmed | Falsified | Inconclusive}
### Confidence: {High (>80%) | Medium (50-80%) | Low (<50%)}
### Confirming Evidence
1. `src/api/users.ts:47` — {description of what was found}
2. `src/middleware/auth.ts:23` — {description}
### Contradicting Evidence
1. `tests/api/users.test.ts:112` — {description of what contradicts}
### Causal Chain (if confirmed)
1. {First cause} →
2. {Intermediate effect} →
3. {Observable symptom}
### Recommended Fix
{Specific code change with location}
### Additional Notes
{Anything discovered that may be relevant to other hypotheses}
```
## Arbitration Decision Tree
```
All investigators reported?
├── NO → Wait for remaining reports
└── YES → Count confirmed hypotheses
    ├── 0 confirmed
    │   ├── Any medium confidence? → Investigate further
    │   └── All low/falsified? → Generate new hypotheses
    ├── 1 confirmed
    │   └── High confidence?
    │       ├── YES → Declare root cause, propose fix
    │       └── NO → Flag as likely cause, recommend verification
    └── 2+ confirmed
        └── Are they related?
            ├── YES → Compound issue (multiple contributing causes)
            └── NO → Rank by confidence, declare highest as primary
```
## Common Hypothesis Patterns by Error Type
### "500 Internal Server Error"
1. Unhandled exception in request handler (Logic Error)
2. Database connection failure (Resource Issue)
3. Missing environment variable (Environment)
### "Race condition / intermittent failure"
1. Shared state mutation without locking (State Problem)
2. Async operation ordering assumption (Logic Error)
3. Cache staleness window (State Problem)
### "Works locally, fails in production"
1. Environment variable mismatch (Environment)
2. Different dependency version (Environment)
3. Resource limits (memory, connections) (Resource Issue)
### "Regression after deploy"
1. New code introduced bug (Logic Error)
2. Configuration change (Integration Failure)
3. Database migration issue (Data Issue)


@@ -0,0 +1,152 @@
---
name: parallel-feature-development
description: Coordinate parallel feature development with file ownership strategies, conflict avoidance rules, and integration patterns for multi-agent implementation. Use this skill when decomposing features for parallel development, establishing file ownership boundaries, or managing integration between parallel work streams.
version: 1.0.2
---
# Parallel Feature Development
Strategies for decomposing features into parallel work streams, establishing file ownership boundaries, avoiding conflicts, and integrating results from multiple implementer agents.
## When to Use This Skill
- Decomposing a feature for parallel implementation
- Establishing file ownership boundaries between agents
- Designing interface contracts between parallel work streams
- Choosing integration strategies (vertical slice vs horizontal layer)
- Managing branch and merge workflows for parallel development
## File Ownership Strategies
### By Directory
Assign each implementer ownership of specific directories:
```
implementer-1: src/components/auth/
implementer-2: src/api/auth/
implementer-3: tests/auth/
```
**Best for**: Well-organized codebases with clear directory boundaries.
### By Module
Assign ownership of logical modules (which may span directories):
```
implementer-1: Authentication module (login, register, logout)
implementer-2: Authorization module (roles, permissions, guards)
```
**Best for**: Feature-oriented architectures, domain-driven design.
### By Layer
Assign ownership of architectural layers:
```
implementer-1: UI layer (components, styles, layouts)
implementer-2: Business logic layer (services, validators)
implementer-3: Data layer (models, repositories, migrations)
```
**Best for**: Traditional MVC/layered architectures.
## Conflict Avoidance Rules
### The Cardinal Rule
**One owner per file.** No file should be assigned to multiple implementers.
### When Files Must Be Shared
If a file genuinely needs changes from multiple implementers:
1. **Designate a single owner** — One implementer owns the file
2. **Other implementers request changes** — Message the owner with specific change requests
3. **Owner applies changes sequentially** — Prevents merge conflicts
4. **Alternative: Extract interfaces** — Create a separate interface file that the non-owner can import without modifying
### Interface Contracts
When implementers need to coordinate at boundaries:
```typescript
// src/types/auth-contract.ts (owned by team-lead, read-only for implementers)
export interface AuthResponse {
token: string;
user: UserProfile;
expiresAt: number;
}
export interface AuthService {
login(email: string, password: string): Promise<AuthResponse>;
register(data: RegisterData): Promise<AuthResponse>;
}
```
Both implementers import from the contract file but neither modifies it.
## Integration Patterns
### Vertical Slice
Each implementer builds a complete feature slice (UI + API + tests):
```
implementer-1: Login feature (login form + login API + login tests)
implementer-2: Register feature (register form + register API + register tests)
```
**Pros**: Each slice is independently testable, minimal integration needed.
**Cons**: May duplicate shared utilities, harder with tightly coupled features.
### Horizontal Layer
Each implementer builds one layer across all features:
```
implementer-1: All UI components (login form, register form, profile page)
implementer-2: All API endpoints (login, register, profile)
implementer-3: All tests (unit, integration, e2e)
```
**Pros**: Consistent patterns within each layer, natural specialization.
**Cons**: More integration points, and the test layer depends on the UI and API layers being in place.
### Hybrid
Mix vertical and horizontal based on coupling:
```
implementer-1: Login feature (vertical slice — UI + API + tests)
implementer-2: Shared auth infrastructure (horizontal — middleware, JWT utils, types)
```
**Best for**: Most real-world features with some shared infrastructure.
## Branch Management
### Single Branch Strategy
All implementers work on the same feature branch:
- Simple setup, no merge overhead
- Requires strict file ownership to avoid conflicts
- Best for: small teams (2-3), well-defined boundaries
### Multi-Branch Strategy
Each implementer works on a sub-branch:
```
feature/auth
├── feature/auth-login (implementer-1)
├── feature/auth-register (implementer-2)
└── feature/auth-tests (implementer-3)
```
- More isolation, explicit merge points
- Higher overhead, merge conflicts still possible in shared files
- Best for: larger teams (4+), complex features


@@ -0,0 +1,80 @@
# File Ownership Decision Framework
How to assign file ownership when decomposing features for parallel development.
## Ownership Decision Process
### Step 1: Map All Files
List every file that needs to be created or modified for the feature.
### Step 2: Identify Natural Clusters
Group files by:
- Directory proximity (files in the same directory)
- Functional relationship (files that import each other)
- Layer membership (all UI files, all API files)
### Step 3: Assign Clusters to Owners
Each cluster becomes one implementer's ownership boundary:
- No file appears in multiple clusters
- Each cluster is internally cohesive
- Cross-cluster dependencies are minimized
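The first invariant in Step 3 (no file in multiple clusters, i.e. one owner per file) is easy to check mechanically. A minimal sketch, assuming clusters are represented as a map from implementer to owned files:

```typescript
type Clusters = Record<string, string[]>; // implementer name -> owned files

// Return every file claimed by more than one cluster, which violates
// the one-owner-per-file rule.
function findOwnershipConflicts(clusters: Clusters): string[] {
  const owners = new Map<string, string[]>();
  for (const [implementer, files] of Object.entries(clusters)) {
    for (const file of files) {
      const list = owners.get(file) ?? [];
      list.push(implementer);
      owners.set(file, list);
    }
  }
  return [...owners.entries()]
    .filter(([, who]) => who.length > 1)
    .map(([file]) => file);
}
```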
### Step 4: Define Interface Points
Where clusters interact, define:
- Shared type definitions (owned by lead or a designated implementer)
- API contracts (function signatures, request/response shapes)
- Event contracts (event names and payload shapes)
## Ownership by Project Type
### React/Next.js Frontend
```
implementer-1: src/components/{feature}/ (UI components)
implementer-2: src/hooks/{feature}/ (custom hooks, state)
implementer-3: src/api/{feature}/ (API client, types)
shared: src/types/{feature}.ts (owned by lead)
```
### Express/Fastify Backend
```
implementer-1: src/routes/{feature}.ts, src/controllers/{feature}.ts
implementer-2: src/services/{feature}.ts, src/validators/{feature}.ts
implementer-3: src/models/{feature}.ts, src/repositories/{feature}.ts
shared: src/types/{feature}.ts (owned by lead)
```
### Full-Stack (Next.js)
```
implementer-1: app/{feature}/page.tsx, app/{feature}/components/
implementer-2: app/api/{feature}/route.ts, lib/{feature}/
implementer-3: tests/{feature}/
shared: types/{feature}.ts (owned by lead)
```
### Python Django
```
implementer-1: {app}/views.py, {app}/urls.py, {app}/forms.py
implementer-2: {app}/models.py, {app}/serializers.py, {app}/managers.py
implementer-3: {app}/tests/
shared: {app}/types.py (owned by lead)
```
## Conflict Resolution
When two implementers need to modify the same file:
1. **Preferred: Split the file** — Extract the shared concern into its own file
2. **If can't split: Designate one owner** — The other implementer sends change requests
3. **Last resort: Sequential access** — Implementer A finishes, then implementer B takes over
4. **Never**: Let both modify the same file simultaneously


@@ -0,0 +1,75 @@
# Integration and Merge Strategies
Patterns for integrating parallel work streams and resolving conflicts.
## Integration Patterns
### Pattern 1: Direct Integration
All implementers commit to the same branch; integration happens naturally.
```
feature/auth ← implementer-1 commits
             ← implementer-2 commits
             ← implementer-3 commits
```
**When to use**: Small teams (2-3), strict file ownership (no conflicts expected).
### Pattern 2: Sub-Branch Integration
Each implementer works on a sub-branch; lead merges them sequentially.
```
feature/auth
├── feature/auth-login ← implementer-1
├── feature/auth-register ← implementer-2
└── feature/auth-tests ← implementer-3
```
Merge order: follow dependency graph (foundation → dependent → integration).
**When to use**: Larger teams (4+), overlapping concerns, need for review gates.
### Pattern 3: Trunk-Based with Feature Flags
All implementers commit to the main branch behind a feature flag.
```
main ← all implementers commit
     ← feature flag gates new code
```
**When to use**: CI/CD environments, short-lived features, continuous deployment.
## Integration Verification Checklist
After all implementers complete:
1. **Build check**: Does the code compile/bundle without errors?
2. **Type check**: Do TypeScript/type annotations pass?
3. **Lint check**: Does the code pass linting rules?
4. **Unit tests**: Do all unit tests pass?
5. **Integration tests**: Do cross-component tests pass?
6. **Interface verification**: Do all interface contracts match their implementations?
## Conflict Resolution
### Prevention (Best)
- Strict file ownership eliminates most conflicts
- Interface contracts define boundaries before implementation
- Shared type files are owned by the lead and modified sequentially
### Detection
- Git merge will report conflicts if they occur
- TypeScript/lint errors indicate interface mismatches
- Test failures indicate behavioral conflicts
### Resolution Strategies
1. **Contract wins**: If code doesn't match the interface contract, the code is wrong
2. **Lead arbitrates**: The team lead decides which implementation to keep
3. **Tests decide**: The implementation that passes tests is correct
4. **Merge manually**: For complex conflicts, the lead merges by hand


@@ -0,0 +1,163 @@
---
name: task-coordination-strategies
description: Decompose complex tasks, design dependency graphs, and coordinate multi-agent work with proper task descriptions and workload balancing. Use this skill when breaking down work for agent teams, managing task dependencies, or monitoring team progress.
version: 1.0.2
---
# Task Coordination Strategies
Strategies for decomposing complex tasks into parallelizable units, designing dependency graphs, writing effective task descriptions, and monitoring workload across agent teams.
## When to Use This Skill
- Breaking down a complex task for parallel execution
- Designing task dependency relationships (blockedBy/blocks)
- Writing task descriptions with clear acceptance criteria
- Monitoring and rebalancing workload across teammates
- Identifying the critical path in a multi-task workflow
## Task Decomposition Strategies
### By Layer
Split work by architectural layer:
- Frontend components
- Backend API endpoints
- Database migrations/models
- Test suites
**Best for**: Full-stack features, vertical slices
### By Component
Split work by functional component:
- Authentication module
- User profile module
- Notification module
**Best for**: Microservices, modular architectures
### By Concern
Split work by cross-cutting concern:
- Security review
- Performance review
- Architecture review
**Best for**: Code reviews, audits
### By File Ownership
Split work by file/directory boundaries:
- `src/components/` — Implementer 1
- `src/api/` — Implementer 2
- `src/utils/` — Implementer 3
**Best for**: Parallel implementation, conflict avoidance
## Dependency Graph Design
### Principles
1. **Minimize chain depth** — Prefer wide, shallow graphs over deep chains
2. **Identify the critical path** — The longest chain determines minimum completion time
3. **Use blockedBy sparingly** — Only add dependencies that are truly required
4. **Avoid circular dependencies** — Task A blocks B blocks A is a deadlock
### Patterns
**Independent (Best parallelism)**:
```
Task A ─┐
Task B ─┼─→ Integration
Task C ─┘
```
**Sequential (Necessary dependencies)**:
```
Task A → Task B → Task C
```
**Diamond (Mixed)**:
```
        ┌→ Task B ─┐
Task A ─┤          ├→ Task D
        └→ Task C ─┘
```
### Using blockedBy/blocks
```
TaskCreate: { subject: "Build API endpoints" } → Task #1
TaskCreate: { subject: "Build frontend components" } → Task #2
TaskCreate: { subject: "Integration testing" } → Task #3
TaskUpdate: { taskId: "3", addBlockedBy: ["1", "2"] } → #3 waits for #1 and #2
```
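Before creating tasks, the dependency graph can be sanity-checked: reject circular dependencies (a deadlock) and measure the critical path, the longest chain that bounds minimum completion time. A minimal sketch, assuming a local map from task id to the ids it is blocked by:

```typescript
type Graph = Record<string, string[]>; // task id -> ids it is blocked by

// Depth of the longest blockedBy chain ending at each task; throws on cycles.
function criticalPathLength(graph: Graph): number {
  const memo = new Map<string, number>();
  const visiting = new Set<string>();

  function depth(id: string): number {
    if (memo.has(id)) return memo.get(id)!;
    if (visiting.has(id)) throw new Error(`circular dependency involving task ${id}`);
    visiting.add(id);
    const deps = graph[id] ?? [];
    const d = 1 + (deps.length ? Math.max(...deps.map(depth)) : 0);
    visiting.delete(id);
    memo.set(id, d);
    return d;
  }

  return Math.max(...Object.keys(graph).map(depth));
}
```

For the example above (`#3` blocked by `#1` and `#2`), the critical path is 2 tasks deep: `#1` and `#2` run in parallel, then `#3`.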
## Task Description Best Practices
Every task should include:
1. **Objective** — What needs to be accomplished (1-2 sentences)
2. **Owned Files** — Explicit list of files/directories this teammate may modify
3. **Requirements** — Specific deliverables or behaviors expected
4. **Interface Contracts** — How this work connects to other teammates' work
5. **Acceptance Criteria** — How to verify the task is done correctly
6. **Scope Boundaries** — What is explicitly out of scope
### Template
```
## Objective
Build the user authentication API endpoints.
## Owned Files
- src/api/auth.ts
- src/api/middleware/auth-middleware.ts
- src/types/auth.ts (shared — read only, do not modify)
## Requirements
- POST /api/login — accepts email/password, returns JWT
- POST /api/register — creates new user, returns JWT
- GET /api/me — returns current user profile (requires auth)
## Interface Contract
- Import User type from src/types/auth.ts (owned by implementer-1)
- Export AuthResponse type for frontend consumption
## Acceptance Criteria
- All endpoints return proper HTTP status codes
- JWT tokens expire after 24 hours
- Passwords are hashed with bcrypt
## Out of Scope
- OAuth/social login
- Password reset flow
- Rate limiting
```
## Workload Monitoring
### Indicators of Imbalance
| Signal | Meaning | Action |
| -------------------------- | ------------------- | --------------------------- |
| Teammate idle, others busy | Uneven distribution | Reassign pending tasks |
| Teammate stuck on one task | Possible blocker | Check in, offer help |
| All tasks blocked | Dependency issue | Resolve critical path first |
| One teammate has 3x others | Overloaded | Split tasks or reassign |
### Rebalancing Steps
1. Call `TaskList` to assess current state
2. Identify idle or overloaded teammates
3. Use `TaskUpdate` to reassign tasks
4. Use `SendMessage` to notify affected teammates
5. Monitor for improved throughput
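The signals in the table can be approximated with simple checks over a task snapshot. The `TaskSnapshot` shape and the thresholds below are assumptions for illustration, not a real tool output format:

```typescript
interface TaskSnapshot {
  owner: string | null;
  status: "pending" | "in_progress" | "completed" | "blocked";
}

// Flag the imbalance signals from the table above for a team snapshot.
function imbalanceSignals(members: string[], tasks: TaskSnapshot[]): string[] {
  const signals: string[] = [];
  const open = tasks.filter(t => t.status !== "completed");
  const counts = members.map(m => open.filter(t => t.owner === m).length);
  const busy = counts.filter(c => c > 0);

  if (counts.includes(0) && busy.length > 0)
    signals.push("idle teammate while others are busy: reassign pending tasks");
  if (busy.length > 1 && Math.max(...busy) >= 3 * Math.min(...busy))
    signals.push("one teammate is overloaded: split tasks or reassign");
  if (open.length > 0 && open.every(t => t.status === "blocked"))
    signals.push("all open tasks blocked: resolve the critical path first");
  return signals;
}
```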


@@ -0,0 +1,97 @@
# Dependency Graph Patterns
Visual patterns for task dependency design with trade-offs.
## Pattern 1: Fully Independent (Maximum Parallelism)
```
Task A ─┐
Task B ─┼─→ Final Integration
Task C ─┘
```
- **Parallelism**: Maximum — all tasks run simultaneously
- **Risk**: Integration may reveal incompatibilities late
- **Use when**: Tasks operate on completely separate files/modules
- **TaskCreate**: No blockedBy relationships; integration task blocked by all
## Pattern 2: Sequential Chain (No Parallelism)
```
Task A → Task B → Task C → Task D
```
- **Parallelism**: None — each task waits for the previous
- **Risk**: Bottleneck at each step; one delay cascades
- **Use when**: Each task depends on the output of the previous (avoid if possible)
- **TaskCreate**: Each task blockedBy the previous
## Pattern 3: Diamond (Shared Foundation)
```
           ┌→ Task B ─┐
Task A ──→ ┤          ├→ Task D
           └→ Task C ─┘
```
- **Parallelism**: B and C run in parallel after A completes
- **Risk**: A is a bottleneck; D must wait for both B and C
- **Use when**: B and C both need output from A (e.g., shared types)
- **TaskCreate**: B and C blockedBy A; D blockedBy B and C
## Pattern 4: Fork-Join (Phased Parallelism)
```
Phase 1: A1, A2, A3 (parallel)
────────────
Phase 2: B1, B2 (parallel, after phase 1)
────────────
Phase 3: C1 (after phase 2)
```
- **Parallelism**: Within each phase, tasks are parallel
- **Risk**: Phase boundaries add synchronization delays
- **Use when**: Natural phases with dependencies (build → test → deploy)
- **TaskCreate**: Phase 2 tasks blockedBy all Phase 1 tasks
## Pattern 5: Pipeline (Streaming)
```
Task A ──→ Task B ──→ Task C
     └───→ Task D ──→ Task E
```
- **Parallelism**: Two parallel chains
- **Risk**: Chains may diverge in approach
- **Use when**: Two independent feature branches from a common starting point
- **TaskCreate**: B blockedBy A; D blockedBy A; C blockedBy B; E blockedBy D
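All five patterns reduce to a `blockedBy` graph. As an illustrative sketch (the function and dict shapes are assumptions, not a real tool API), the parallel "waves" a pattern yields can be computed by repeatedly taking tasks with no unmet dependencies:

```python
# Sketch: compute parallel execution "waves" from blockedBy edges.
# Tasks in the same wave can run simultaneously; each wave waits for
# the previous one to finish.

def execution_waves(blocked_by):
    """blocked_by maps task -> set of tasks it waits on."""
    remaining = {t: set(deps) for t, deps in blocked_by.items()}
    waves = []
    while remaining:
        ready = sorted(t for t, deps in remaining.items() if not deps)
        if not ready:
            # Nothing is runnable but tasks remain: circular dependency.
            raise ValueError("circular dependency detected")
        waves.append(ready)
        for t in ready:
            del remaining[t]
        for deps in remaining.values():
            deps.difference_update(ready)
    return waves

# Pattern 3 (diamond): B and C blockedBy A; D blockedBy B and C.
diamond = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
print(execution_waves(diamond))  # [['A'], ['B', 'C'], ['D']]
```

More waves means less parallelism: Pattern 1 yields two waves, Pattern 2 yields one wave per task.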
## Anti-Patterns
### Circular Dependency (Deadlock)
```
Task A → Task B → Task C → Task A ✗ DEADLOCK
```
**Fix**: Extract the shared dependency into a separate task that all three depend on.
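A deadlock like this can be caught before any tasks are created. A minimal sketch, assuming the `blockedBy` relationships are available as a plain adjacency mapping (the `find_cycle` helper is illustrative, not part of the tooling):

```python
# Sketch: detect a blockedBy cycle with a depth-first walk. Revisiting
# a task that is still on the current path means a deadlock.

def find_cycle(blocked_by):
    """Return one dependency cycle as a list of tasks, or None."""
    visiting, visited = set(), set()

    def walk(task, path):
        visiting.add(task)
        path.append(task)
        for dep in blocked_by.get(task, ()):
            if dep in visiting:
                # Close the loop for a readable report.
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                found = walk(dep, path)
                if found:
                    return found
        visiting.discard(task)
        visited.add(task)
        path.pop()
        return None

    for task in blocked_by:
        if task not in visited:
            found = walk(task, [])
            if found:
                return found
    return None

print(find_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # ['A', 'B', 'C', 'A']
```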
### Unnecessary Dependencies
```
Task A → Task B → Task C
(where B doesn't actually need A's output)
```
**Fix**: Remove the blockedBy relationship; let B run independently.
### Star Pattern (Single Bottleneck)
```
    ┌→ B
A → ├→ C → F
    ├→ D
    └→ E
```
**Problem**: If A is slow, all downstream tasks are delayed. **Fix**: Parallelize A's work by splitting it into smaller tasks.


@@ -0,0 +1,98 @@
# Task Decomposition Examples
Practical examples of decomposing features into parallelizable tasks with clear ownership.
## Example 1: User Authentication Feature
### Feature Description
Add email/password authentication with login, registration, and profile pages.
### Decomposition (Vertical Slices)
**Stream 1: Login Flow** (implementer-1)
- Owned files: `src/pages/login.tsx`, `src/api/login.ts`, `tests/login.test.ts`
- Requirements: Login form, API endpoint, input validation, error handling
- Interface: Imports `AuthResponse` from `src/types/auth.ts`
**Stream 2: Registration Flow** (implementer-2)
- Owned files: `src/pages/register.tsx`, `src/api/register.ts`, `tests/register.test.ts`
- Requirements: Registration form, API endpoint, email validation, password strength
- Interface: Imports `AuthResponse` from `src/types/auth.ts`
**Stream 3: Shared Infrastructure** (implementer-3)
- Owned files: `src/types/auth.ts`, `src/middleware/auth.ts`, `src/utils/jwt.ts`
- Requirements: Type definitions, JWT middleware, token utilities
- Dependencies: None (other streams depend on this)
### Dependency Graph
```
Stream 3 (types/middleware) ──→ Stream 1 (login)
                            └─→ Stream 2 (registration)
```
## Example 2: REST API Endpoints
### Feature Description
Add CRUD endpoints for a new "Projects" resource.
### Decomposition (By Layer)
**Stream 1: Data Layer** (implementer-1)
- Owned files: `src/models/project.ts`, `src/migrations/add-projects.ts`, `src/repositories/project-repo.ts`
- Requirements: Schema definition, migration, repository pattern
- Dependencies: None
**Stream 2: Business Logic** (implementer-2)
- Owned files: `src/services/project-service.ts`, `src/validators/project-validator.ts`
- Requirements: CRUD operations, validation rules, business logic
- Dependencies: Blocked by Stream 1 (needs model/repository)
**Stream 3: API Layer** (implementer-3)
- Owned files: `src/routes/projects.ts`, `src/controllers/project-controller.ts`
- Requirements: REST endpoints, request parsing, response formatting
- Dependencies: Blocked by Stream 2 (needs service layer)
## Task Template
```markdown
## Task: {Stream Name}
### Objective
{1-2 sentence description of what to build}
### Owned Files
- {file1} — {purpose}
- {file2} — {purpose}
### Requirements
1. {Specific deliverable 1}
2. {Specific deliverable 2}
3. {Specific deliverable 3}
### Interface Contract
- Exports: {types/functions this stream provides}
- Imports: {types/functions this stream consumes from other streams}
### Acceptance Criteria
- [ ] {Verifiable criterion 1}
- [ ] {Verifiable criterion 2}
- [ ] {Verifiable criterion 3}
### Out of Scope
- {Explicitly excluded work}
```


@@ -0,0 +1,155 @@
---
name: team-communication-protocols
description: Structured messaging protocols for agent team communication including message type selection, plan approval, shutdown procedures, and anti-patterns to avoid. Use this skill when establishing team communication norms, handling plan approvals, or managing team shutdown.
version: 1.0.2
---
# Team Communication Protocols
Protocols for effective communication between agent teammates, including message type selection, plan approval workflows, shutdown procedures, and common anti-patterns to avoid.
## When to Use This Skill
- Establishing communication norms for a new team
- Choosing between message types (message, broadcast, shutdown_request)
- Handling plan approval workflows
- Managing graceful team shutdown
- Discovering teammate identities and capabilities
## Message Type Selection
### `message` (Direct Message) — Default Choice
Send to a single specific teammate:
```json
{
"type": "message",
"recipient": "implementer-1",
"content": "Your API endpoint is ready. You can now build the frontend form.",
"summary": "API endpoint ready for frontend"
}
```
**Use for**: Task updates, coordination, questions, integration notifications.
### `broadcast` — Use Sparingly
Send to ALL teammates simultaneously:
```json
{
"type": "broadcast",
"content": "Critical: shared types file has been updated. Pull latest before continuing.",
"summary": "Shared types updated"
}
```
**Use ONLY for**: Critical blockers affecting everyone, major changes to shared resources.
**Why sparingly?** Each broadcast sends N separate messages (one per teammate), consuming API resources in proportion to team size.
### `shutdown_request` — Graceful Termination
Request a teammate to shut down:
```json
{
"type": "shutdown_request",
"recipient": "reviewer-1",
"content": "Review complete, shutting down team."
}
```
The teammate responds with `shutdown_response` (approve or reject with reason).
## Communication Anti-Patterns
| Anti-Pattern | Problem | Better Approach |
| --------------------------------------- | ---------------------------------------- | -------------------------------------- |
| Broadcasting routine updates | Wastes resources, noise | Direct message to affected teammate |
| Sending JSON status messages | Not designed for structured data | Use TaskUpdate to update task status |
| Not communicating at integration points | Teammates build against stale interfaces | Message when your interface is ready |
| Micromanaging via messages | Overwhelms teammates, slows work | Check in at milestones, not every step |
| Using UUIDs instead of names | Hard to read, error-prone | Always use teammate names |
| Ignoring idle teammates | Wasted capacity | Assign new work or shut down |
## Plan Approval Workflow
When a teammate is spawned with `plan_mode_required`:
1. Teammate creates a plan using read-only exploration tools
2. Teammate calls `ExitPlanMode` which sends a `plan_approval_request` to the lead
3. Lead reviews the plan
4. Lead responds with `plan_approval_response`:
**Approve**:
```json
{
"type": "plan_approval_response",
"request_id": "abc-123",
"recipient": "implementer-1",
"approve": true
}
```
**Reject with feedback**:
```json
{
"type": "plan_approval_response",
"request_id": "abc-123",
"recipient": "implementer-1",
"approve": false,
"content": "Please add error handling for the API calls"
}
```
## Shutdown Protocol
### Graceful Shutdown Sequence
1. **Lead sends shutdown_request** to each teammate
2. **Teammate receives request** as a JSON message with `type: "shutdown_request"`
3. **Teammate responds** with `shutdown_response`:
- `approve: true` — Teammate saves state and exits
- `approve: false` + reason — Teammate continues working
4. **Lead handles rejections** — Wait for teammate to finish, then retry
5. **After all teammates shut down** — Call `Teammate` cleanup
### Handling Rejections
If a teammate rejects shutdown:
- Check their reason (usually "still working on task")
- Wait for their current task to complete
- Retry shutdown request
- If urgent, user can force shutdown
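The retry loop the lead runs might look like the following sketch. `send_shutdown_request` stands in for the real `SendMessage` tool; its call shape and the response dict are assumptions for illustration.

```python
# Sketch of the lead's shutdown loop: request, collect rejections,
# and retry until everyone approves or the round limit is hit.

def shutdown_all(teammates, send_shutdown_request, max_rounds=3):
    """Retry shutdown until every teammate approves or rounds run out."""
    pending = list(teammates)
    for _ in range(max_rounds):
        still_working = []
        for name in pending:
            response = send_shutdown_request(name)
            if not response["approve"]:
                # Teammate is mid-task; leave it in the queue and retry.
                still_working.append(name)
        pending = still_working
        if not pending:
            return True  # all approved; safe to clean up the team
    return False  # escalate: the user may force shutdown

# Stub teammate: reviewer-1 rejects once ("still working"), then approves.
calls = {"reviewer-1": 0}
def stub(name):
    if name == "reviewer-1":
        calls[name] += 1
        return {"approve": calls[name] > 1}
    return {"approve": True}

print(shutdown_all(["implementer-1", "reviewer-1"], stub))  # True
```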
## Teammate Discovery
Find team members by reading the config file:
**Location**: `~/.claude/teams/{team-name}/config.json`
**Structure**:
```json
{
"members": [
{
"name": "security-reviewer",
"agentId": "uuid-here",
"agentType": "team-reviewer"
},
{
"name": "perf-reviewer",
"agentId": "uuid-here",
"agentType": "team-reviewer"
}
]
}
```
**Always use `name`** for messaging and task assignment. Never use `agentId` directly.
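A sketch of that lookup, assuming the structure shown above. The `load_teammate_names` helper is illustrative, and a temporary file stands in for the real `~/.claude/teams/{team-name}/config.json` path.

```python
# Sketch: resolve teammate names from the team config file.
import json
import tempfile
from pathlib import Path

def load_teammate_names(config_path):
    """Return the list of member names to use for messaging."""
    config = json.loads(Path(config_path).read_text())
    # Address teammates by name, never by agentId.
    return [member["name"] for member in config["members"]]

config = {
    "members": [
        {"name": "security-reviewer", "agentId": "uuid-1", "agentType": "team-reviewer"},
        {"name": "perf-reviewer", "agentId": "uuid-2", "agentType": "team-reviewer"},
    ]
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config, f)
print(load_teammate_names(f.name))  # ['security-reviewer', 'perf-reviewer']
```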


@@ -0,0 +1,112 @@
# Messaging Pattern Templates
Ready-to-use message templates for common team communication scenarios.
## Task Assignment
```
You've been assigned task #{id}: {subject}.
Owned files:
- {file1}
- {file2}
Key requirements:
- {requirement1}
- {requirement2}
Interface contract:
- Import {types} from {shared-file}
- Export {types} for {other-teammate}
Let me know if you have questions or blockers.
```
## Integration Point Notification
```
My side of the {interface-name} interface is complete.
Exported from {file}:
- {function/type 1}
- {function/type 2}
You can now import these in your owned files. The contract matches what we agreed on.
```
## Blocker Report
```
I'm blocked on task #{id}: {subject}.
Blocker: {description of what's preventing progress}
Impact: {what can't be completed until this is resolved}
Options:
1. {option 1}
2. {option 2}
Waiting for your guidance.
```
## Task Completion Report
```
Task #{id} complete: {subject}
Changes made:
- {file1}: {what changed}
- {file2}: {what changed}
Integration notes:
- {any interface changes or considerations for other teammates}
Ready for next assignment.
```
## Review Finding Summary
```
Review complete for {target} ({dimension} dimension).
Summary:
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
Top finding: {brief description of most important finding}
Full findings attached to task #{id}.
```
## Investigation Report Summary
```
Investigation complete for hypothesis: {hypothesis summary}
Verdict: {Confirmed | Falsified | Inconclusive}
Confidence: {High | Medium | Low}
Key evidence:
- {file:line}: {what was found}
- {file:line}: {what was found}
{If confirmed}: Recommended fix: {brief fix description}
{If falsified}: Contradicting evidence: {brief description}
Full report attached to task #{id}.
```
## Shutdown Acknowledgment
When you receive a shutdown request, respond with the `shutdown_response` tool. You may also want to send a final status message first:
```
Wrapping up. Current status:
- Task #{id}: {completed/in-progress}
- Files modified: {list}
- Pending work: {none or description}
Ready for shutdown.
```


@@ -0,0 +1,119 @@
---
name: team-composition-patterns
description: Design optimal agent team compositions with sizing heuristics, preset configurations, and agent type selection. Use this skill when deciding team size, selecting agent types, or configuring team presets for multi-agent workflows.
version: 1.0.2
---
# Team Composition Patterns
Best practices for composing multi-agent teams, selecting team sizes, choosing agent types, and configuring display modes for Claude Code's Agent Teams feature.
## When to Use This Skill
- Deciding how many teammates to spawn for a task
- Choosing between preset team configurations
- Selecting the right agent type (subagent_type) for each role
- Configuring teammate display modes (tmux, iTerm2, in-process)
- Building custom team compositions for non-standard workflows
## Team Sizing Heuristics
| Complexity | Team Size | When to Use |
| ------------ | --------- | ----------------------------------------------------------- |
| Simple | 1-2 | Single-dimension review, isolated bug, small feature |
| Moderate | 2-3 | Multi-file changes, 2-3 concerns, medium features |
| Complex | 3-4 | Cross-cutting concerns, large features, deep debugging |
| Very Complex | 4-5 | Full-stack features, comprehensive reviews, systemic issues |
**Rule of thumb**: Start with the smallest team that covers all required dimensions. Adding teammates increases coordination overhead.
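The table plus the rule of thumb can be read as a simple lookup. A small sketch; the function name and the `required_dimensions` parameter are illustrative assumptions:

```python
# Sketch: pick the smallest team size in the band, but never fewer
# members than the number of dimensions that must be covered.
TEAM_SIZE = {
    "simple": (1, 2),
    "moderate": (2, 3),
    "complex": (3, 4),
    "very complex": (4, 5),
}

def suggest_team_size(complexity, required_dimensions=1):
    """Start small; cap at the band's upper bound."""
    low, high = TEAM_SIZE[complexity.lower()]
    return min(max(low, required_dimensions), high)

print(suggest_team_size("moderate"))                         # 2
print(suggest_team_size("moderate", required_dimensions=3))  # 3
```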
## Preset Team Compositions
### Review Team
- **Size**: 3 reviewers
- **Agents**: 3x `team-reviewer`
- **Default dimensions**: security, performance, architecture
- **Use when**: Code changes need multi-dimensional quality assessment
### Debug Team
- **Size**: 3 investigators
- **Agents**: 3x `team-debugger`
- **Default hypotheses**: 3 competing hypotheses
- **Use when**: Bug has multiple plausible root causes
### Feature Team
- **Size**: 3 (1 lead + 2 implementers)
- **Agents**: 1x `team-lead` + 2x `team-implementer`
- **Use when**: Feature can be decomposed into parallel work streams
### Fullstack Team
- **Size**: 4 (1 lead + 3 implementers)
- **Agents**: 1x `team-lead` + 1x frontend `team-implementer` + 1x backend `team-implementer` + 1x test `team-implementer`
- **Use when**: Feature spans frontend, backend, and test layers
### Research Team
- **Size**: 3 researchers
- **Agents**: 3x `general-purpose`
- **Default areas**: Each assigned a different research question, module, or topic
- **Capabilities**: Codebase search (Grep, Glob, Read), web search (WebSearch, WebFetch)
- **Use when**: Need to understand a codebase, research libraries, compare approaches, or gather information from code and web sources in parallel
### Security Team
- **Size**: 4 reviewers
- **Agents**: 4x `team-reviewer`
- **Default dimensions**: OWASP/vulnerabilities, auth/access control, dependencies/supply chain, secrets/configuration
- **Use when**: Comprehensive security audit covering multiple attack surfaces
### Migration Team
- **Size**: 4 (1 lead + 2 implementers + 1 reviewer)
- **Agents**: 1x `team-lead` + 2x `team-implementer` + 1x `team-reviewer`
- **Use when**: Large codebase migration (framework upgrade, language port, API version bump) requiring parallel work with correctness verification
## Agent Type Selection
When spawning teammates with the Task tool, choose `subagent_type` based on what tools the teammate needs:
| Agent Type | Tools Available | Use For |
| ------------------------------ | ----------------------------------------- | ---------------------------------------------------------- |
| `general-purpose` | All tools (Read, Write, Edit, Bash, etc.) | Implementation, debugging, any task requiring file changes |
| `Explore` | Read-only tools (Read, Grep, Glob) | Research, code exploration, analysis |
| `Plan` | Read-only tools | Architecture planning, task decomposition |
| `agent-teams:team-reviewer` | All tools | Code review with structured findings |
| `agent-teams:team-debugger` | All tools | Hypothesis-driven investigation |
| `agent-teams:team-implementer` | All tools | Building features within file ownership boundaries |
| `agent-teams:team-lead` | All tools | Team orchestration and coordination |
**Key distinction**: Read-only agents (Explore, Plan) cannot modify files. Never assign implementation tasks to read-only agents.
## Display Mode Configuration
Configure in `~/.claude/settings.json`:
```json
{
"teammateMode": "tmux"
}
```
| Mode | Behavior | Best For |
| -------------- | ------------------------------ | ------------------------------------------------- |
| `"tmux"` | Each teammate in a tmux pane | Development workflows, monitoring multiple agents |
| `"iterm2"` | Each teammate in an iTerm2 tab | macOS users who prefer iTerm2 |
| `"in-process"` | All teammates in same process | Simple tasks, CI/CD environments |
## Custom Team Guidelines
When building custom teams:
1. **Every team needs a coordinator** — Either designate a `team-lead` or have the user coordinate directly
2. **Match roles to agent types** — Use specialized agents (reviewer, debugger, implementer) when available
3. **Avoid duplicate roles** — Two agents doing the same thing wastes resources
4. **Define boundaries upfront** — Each teammate needs clear ownership of files or responsibilities
5. **Keep it small** — 2-4 teammates is the sweet spot; 5+ requires significant coordination overhead


@@ -0,0 +1,84 @@
# Agent Type Selection Guide
Decision matrix for choosing the right `subagent_type` when spawning teammates.
## Decision Matrix
```
Does the teammate need to modify files?
├── YES → Does it need a specialized role?
│   ├── YES → Which role?
│   │   ├── Code review → agent-teams:team-reviewer
│   │   ├── Bug investigation → agent-teams:team-debugger
│   │   ├── Feature building → agent-teams:team-implementer
│   │   └── Team coordination → agent-teams:team-lead
│   └── NO → general-purpose
└── NO → Does it need deep codebase exploration?
    ├── YES → Explore
    └── NO → Plan (for architecture/design tasks)
```
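The same matrix, expressed as a sketch function. The boolean and role inputs and the helper name are assumptions for illustration; the returned strings match the agent types in this guide.

```python
# Sketch of the decision matrix above as a pure function.
def select_agent_type(modifies_files, role=None, deep_exploration=False):
    """role: one of 'review', 'debug', 'implement', 'lead', or None."""
    if modifies_files:
        specialized = {
            "review": "agent-teams:team-reviewer",
            "debug": "agent-teams:team-debugger",
            "implement": "agent-teams:team-implementer",
            "lead": "agent-teams:team-lead",
        }
        # Fall back to general-purpose when no specialized role fits.
        return specialized.get(role, "general-purpose")
    return "Explore" if deep_exploration else "Plan"

print(select_agent_type(True, role="debug"))            # agent-teams:team-debugger
print(select_agent_type(False, deep_exploration=True))  # Explore
print(select_agent_type(True))                          # general-purpose
```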
## Agent Type Comparison
| Agent Type | Can Read | Can Write | Can Edit | Can Bash | Specialized |
| ---------------------------- | -------- | --------- | -------- | -------- | ------------------ |
| general-purpose | Yes | Yes | Yes | Yes | No |
| Explore | Yes | No | No | No | Search/explore |
| Plan | Yes | No | No | No | Architecture |
| agent-teams:team-lead | Yes | Yes | Yes | Yes | Team orchestration |
| agent-teams:team-reviewer | Yes | Yes | Yes | Yes | Code review |
| agent-teams:team-debugger | Yes | Yes | Yes | Yes | Bug investigation |
| agent-teams:team-implementer | Yes | Yes | Yes | Yes | Feature building |
## Common Mistakes
| Mistake | Why It Fails | Correct Choice |
| ------------------------------------- | ------------------------------ | --------------------------------------- |
| Using `Explore` for implementation | Cannot write/edit files | `general-purpose` or `team-implementer` |
| Using `Plan` for coding tasks | Cannot write/edit files | `general-purpose` or `team-implementer` |
| Using `general-purpose` for reviews | No review structure/checklists | `team-reviewer` |
| Using `team-implementer` for research | Has tools but wrong focus | `Explore` or `Plan` |
## When to Use Each
### general-purpose
- One-off tasks that don't fit specialized roles
- Tasks requiring unique tool combinations
- Ad-hoc scripting or automation
### Explore
- Codebase research and analysis
- Finding files, patterns, or dependencies
- Understanding architecture before planning
### Plan
- Designing implementation approaches
- Creating task decompositions
- Architecture review (read-only)
### team-lead
- Coordinating multiple teammates
- Decomposing work and managing tasks
- Synthesizing results from parallel work
### team-reviewer
- Focused code review on a specific dimension
- Producing structured findings with severity ratings
- Following dimension-specific checklists
### team-debugger
- Investigating a specific hypothesis about a bug
- Gathering evidence with file:line citations
- Reporting confidence levels and causal chains
### team-implementer
- Building code within file ownership boundaries
- Following interface contracts
- Coordinating at integration points


@@ -0,0 +1,265 @@
# Preset Team Definitions
Detailed preset team configurations with task templates for common workflows.
## Review Team Preset
**Command**: `/team-spawn review`
### Configuration
- **Team Size**: 3
- **Agent Type**: `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Dimension | Focus Areas |
| --------------------- | ------------ | ------------------------------------------------- |
| security-reviewer | Security | Input validation, auth, injection, secrets, CVEs |
| performance-reviewer | Performance | Query efficiency, memory, caching, async patterns |
| architecture-reviewer | Architecture | SOLID, coupling, patterns, error handling |
### Task Template
```
Subject: Review {target} for {dimension} issues
Description:
Dimension: {dimension}
Target: {file list or diff}
Checklist: {dimension-specific checklist}
Output format: Structured findings with file:line, severity, evidence, fix
```
### Variations
- **Security-focused**: `--reviewers security,testing` (2 members)
- **Full review**: `--reviewers security,performance,architecture,testing,accessibility` (5 members)
- **Frontend review**: `--reviewers architecture,testing,accessibility` (3 members)
## Debug Team Preset
**Command**: `/team-spawn debug`
### Configuration
- **Team Size**: 3 (default) or N with `--hypotheses N`
- **Agent Type**: `agent-teams:team-debugger`
- **Display Mode**: tmux recommended
### Members
| Name | Role |
| -------------- | ------------------------- |
| investigator-1 | Investigates hypothesis 1 |
| investigator-2 | Investigates hypothesis 2 |
| investigator-3 | Investigates hypothesis 3 |
### Task Template
```
Subject: Investigate hypothesis: {hypothesis summary}
Description:
Hypothesis: {full hypothesis statement}
Scope: {files/module/project}
Evidence criteria:
Confirming: {what would confirm}
Falsifying: {what would falsify}
Report format: confidence level, evidence with file:line, causal chain
```
## Feature Team Preset
**Command**: `/team-spawn feature`
### Configuration
- **Team Size**: 3 (1 lead + 2 implementers)
- **Agent Types**: `agent-teams:team-lead` + `agent-teams:team-implementer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Responsibility |
| ------------- | ---------------- | ---------------------------------------- |
| feature-lead | team-lead | Decomposition, coordination, integration |
| implementer-1 | team-implementer | Work stream 1 (assigned files) |
| implementer-2 | team-implementer | Work stream 2 (assigned files) |
### Task Template
```
Subject: Implement {work stream name}
Description:
Owned files: {explicit file list}
Requirements: {specific deliverables}
Interface contract: {shared types/APIs}
Acceptance criteria: {verification steps}
Blocked by: {dependency task IDs if any}
```
## Fullstack Team Preset
**Command**: `/team-spawn fullstack`
### Configuration
- **Team Size**: 4 (1 lead + 3 implementers)
- **Agent Types**: `agent-teams:team-lead` + 3x `agent-teams:team-implementer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Layer |
| -------------- | ---------------- | -------------------------------- |
| fullstack-lead | team-lead | Coordination, integration |
| frontend-dev | team-implementer | UI components, client-side logic |
| backend-dev | team-implementer | API endpoints, business logic |
| test-dev | team-implementer | Unit, integration, e2e tests |
### Dependency Pattern
```
frontend-dev ──┐
               ├──→ test-dev (blocked by both)
backend-dev ──┘
```
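The pattern above translates into `blockedBy` links on the test task. A sketch of the task payloads; the field names are illustrative assumptions, so check the actual TaskCreate schema:

```python
# Sketch: task payloads for the fullstack dependency pattern.
frontend_task = {"id": "fe", "subject": "UI components", "assignee": "frontend-dev"}
backend_task = {"id": "be", "subject": "API endpoints", "assignee": "backend-dev"}
test_task = {
    "id": "test",
    "subject": "Unit, integration, e2e tests",
    "assignee": "test-dev",
    # test-dev starts only after both layers land.
    "blockedBy": [frontend_task["id"], backend_task["id"]],
}
print(test_task["blockedBy"])  # ['fe', 'be']
```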
## Research Team Preset
**Command**: `/team-spawn research`
### Configuration
- **Team Size**: 3
- **Agent Type**: `general-purpose`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Focus |
| ------------ | --------------- | ------------------------------------------------ |
| researcher-1 | general-purpose | Research area 1 (e.g., codebase architecture) |
| researcher-2 | general-purpose | Research area 2 (e.g., library documentation) |
| researcher-3 | general-purpose | Research area 3 (e.g., web resources & examples) |
### Available Research Tools
Each researcher has access to:
- **Codebase**: `Grep`, `Glob`, `Read` — search and read local files
- **Web**: `WebSearch`, `WebFetch` — search the web and fetch page content
- **Deep Exploration**: `Task` with `subagent_type: Explore` — spawn sub-explorers for deep dives
### Task Template
```
Subject: Research {topic or question}
Description:
Question: {specific research question}
Scope: {codebase files, web resources, library docs, or all}
Tools to prioritize:
- Codebase: Grep/Glob/Read for local code analysis
- Web: WebSearch/WebFetch for articles, examples, best practices
Deliverable: Summary with citations (file:line for code, URLs for web)
Output format: Structured report with sections, evidence, and recommendations
```
### Variations
- **Codebase-only**: 3 researchers exploring different modules or patterns locally
- **Web research**: 3 researchers using WebSearch to survey approaches, benchmarks, or best practices
- **Mixed**: 1 codebase researcher + 1 docs researcher + 1 web researcher (recommended for evaluating new libraries)
### Example Research Assignments
```
Researcher 1 (codebase): "How does our current auth system work? Trace the flow from login to token validation."
Researcher 2 (web): "Search for comparisons between NextAuth, Clerk, and Auth0 for Next.js apps. Focus on pricing, DX, and migration effort."
Researcher 3 (docs): "Look up the latest NextAuth.js v5 API docs. How does it handle JWT and session management?"
```
## Security Team Preset
**Command**: `/team-spawn security`
### Configuration
- **Team Size**: 4
- **Agent Type**: `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Dimension | Focus Areas |
| --------------- | -------------- | ---------------------------------------------------- |
| vuln-reviewer | OWASP/Vulns | Injection, XSS, CSRF, deserialization, SSRF |
| auth-reviewer | Auth/Access | Authentication, authorization, session management |
| deps-reviewer | Dependencies | CVEs, supply chain, outdated packages, license risks |
| config-reviewer | Secrets/Config | Hardcoded secrets, env vars, debug endpoints, CORS |
### Task Template
```
Subject: Security audit {target} for {dimension}
Description:
Dimension: {security sub-dimension}
Target: {file list, directory, or entire project}
Checklist: {dimension-specific security checklist}
Output format: Structured findings with file:line, CVSS-like severity, evidence, remediation
Standards: OWASP Top 10, CWE references where applicable
```
### Variations
- **Quick scan**: `--reviewers owasp,secrets` (2 members for fast audit)
- **Full audit**: All 4 dimensions (default)
- **CI/CD focused**: Add a 5th reviewer for pipeline security and deployment configuration
## Migration Team Preset
**Command**: `/team-spawn migration`
### Configuration
- **Team Size**: 4 (1 lead + 2 implementers + 1 reviewer)
- **Agent Types**: `agent-teams:team-lead` + 2x `agent-teams:team-implementer` + `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Responsibility |
| ---------------- | ---------------- | ----------------------------------------------- |
| migration-lead | team-lead | Migration plan, coordination, conflict handling |
| migrator-1 | team-implementer | Migration stream 1 (assigned files/modules) |
| migrator-2 | team-implementer | Migration stream 2 (assigned files/modules) |
| migration-verify | team-reviewer | Verify migrated code correctness and patterns |
### Task Template
```
Subject: Migrate {module/files} from {old} to {new}
Description:
Owned files: {explicit file list}
Migration rules: {specific transformation patterns}
Old pattern: {what to change from}
New pattern: {what to change to}
Acceptance criteria: {tests pass, no regressions, new patterns used}
Blocked by: {dependency task IDs if any}
```
### Dependency Pattern
```
migration-lead (plan) ─┬→ migrator-1 ──┐
                       └→ migrator-2 ──┴→ migration-verify
```
### Use Cases
- Framework upgrades (React class → hooks, Vue 2 → Vue 3, Angular version bumps)
- Language migrations (JavaScript → TypeScript, Python 2 → 3)
- API version bumps (REST v1 → v2, GraphQL schema changes)
- Database migrations (ORM changes, schema restructuring)
- Build system changes (Webpack → Vite, CRA → Next.js)


@@ -0,0 +1,10 @@
{
"name": "api-scaffolding",
"version": "1.2.1",
"description": "REST and GraphQL API scaffolding, framework selection, backend architecture, and API generation",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "api-testing-observability",
"version": "1.2.0",
"description": "API testing automation, request mocking, OpenAPI documentation generation, observability setup, and monitoring",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "application-performance",
"version": "1.3.0",
"description": "Application profiling, performance optimization, and observability for frontend and backend systems",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -1,124 +1,681 @@
Optimize application performance end-to-end using specialized performance and optimization agents:
---
description: "Orchestrate end-to-end application performance optimization from profiling to monitoring"
argument-hint: "<application or service> [--focus latency|throughput|cost|balanced] [--depth quick-wins|comprehensive|enterprise]"
---
[Extended thinking: This workflow orchestrates a comprehensive performance optimization process across the entire application stack. Starting with deep profiling and baseline establishment, the workflow progresses through targeted optimizations in each system layer, validates improvements through load testing, and establishes continuous monitoring for sustained performance. Each phase builds on insights from previous phases, creating a data-driven optimization strategy that addresses real bottlenecks rather than theoretical improvements. The workflow emphasizes modern observability practices, user-centric performance metrics, and cost-effective optimization strategies.]
# Performance Optimization Orchestrator
## Phase 1: Performance Profiling & Baseline
## CRITICAL BEHAVIORAL RULES
### 1. Comprehensive Performance Profiling
You MUST follow these rules exactly. Violating any of them is a failure.
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Profile application performance comprehensively for: $ARGUMENTS. Generate flame graphs for CPU usage, heap dumps for memory analysis, trace I/O operations, and identify hot paths. Use APM tools like DataDog or New Relic if available. Include database query profiling, API response times, and frontend rendering metrics. Establish performance baselines for all critical user journeys."
- Context: Initial performance investigation
- Output: Detailed performance profile with flame graphs, memory analysis, bottleneck identification, baseline metrics
1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.performance-optimization/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.
### 2. Observability Stack Assessment
## Pre-flight Checks
- Use Task tool with subagent_type="observability-engineer"
- Prompt: "Assess current observability setup for: $ARGUMENTS. Review existing monitoring, distributed tracing with OpenTelemetry, log aggregation, and metrics collection. Identify gaps in visibility, missing metrics, and areas needing better instrumentation. Recommend APM tool integration and custom metrics for business-critical operations."
- Context: Performance profile from step 1
- Output: Observability assessment report, instrumentation gaps, monitoring recommendations
Before starting, perform these checks:
### 1. Check for existing session
Check if `.performance-optimization/state.json` exists:
- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:
```
Found an in-progress performance optimization session:
Target: [name from state]
Current step: [step from state]
1. Resume from where we left off
2. Start fresh (archives existing session)
```
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
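The session check above can be sketched as follows (an illustrative sketch only; the command performs this check conversationally rather than by running a script, and the helper name is hypothetical):

```python
import json
from pathlib import Path

STATE_FILE = Path(".performance-optimization/state.json")

def check_existing_session() -> str:
    """Return 'none', 'resume_prompt', or 'archive_prompt' based on state.json."""
    if not STATE_FILE.exists():
        return "none"
    state = json.loads(STATE_FILE.read_text())
    if state.get("status") == "in_progress":
        # An in-progress session: surface target and current step, then ask the user.
        return "resume_prompt"
    if state.get("status") == "complete":
        # A finished session: ask whether to archive and start fresh.
        return "archive_prompt"
    return "none"
```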
### 2. Initialize state
Create `.performance-optimization/` directory and `state.json`:
```json
{
"target": "$ARGUMENTS",
"status": "in_progress",
"focus": "balanced",
"depth": "comprehensive",
"current_step": 1,
"current_phase": 1,
"completed_steps": [],
"files_created": [],
"started_at": "ISO_TIMESTAMP",
"last_updated": "ISO_TIMESTAMP"
}
```
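Initialization reduces to creating the directory and seeding the JSON above; a minimal sketch (the function name is an assumption, the command writes the file directly):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def init_state(target: str, focus: str = "balanced", depth: str = "comprehensive") -> dict:
    """Create .performance-optimization/ and seed state.json for a new session."""
    out_dir = Path(".performance-optimization")
    out_dir.mkdir(exist_ok=True)
    now = datetime.now(timezone.utc).isoformat()
    state = {
        "target": target,
        "status": "in_progress",
        "focus": focus,
        "depth": depth,
        "current_step": 1,
        "current_phase": 1,
        "completed_steps": [],
        "files_created": [],
        "started_at": now,
        "last_updated": now,
    }
    (out_dir / "state.json").write_text(json.dumps(state, indent=2))
    return state
```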
Parse `$ARGUMENTS` for `--focus` and `--depth` flags. Use defaults if not specified.
### 3. Parse target description
Extract the target description from `$ARGUMENTS` (everything before the flags). This is referenced as `$TARGET` in prompts below.
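The flag and target extraction described above can be sketched like this (a hedged illustration; the command parses `$ARGUMENTS` itself, and the regexes assume flags in `--flag value` or `--flag=value` form):

```python
import re

def parse_arguments(arguments: str) -> tuple[str, str, str]:
    """Split an $ARGUMENTS string into (target, focus, depth) with defaults."""
    focus_m = re.search(r"--focus[= ](\S+)", arguments)
    depth_m = re.search(r"--depth[= ](\S+)", arguments)
    focus = focus_m.group(1) if focus_m else "balanced"
    depth = depth_m.group(1) if depth_m else "comprehensive"
    # The target is everything before the first flag.
    target = re.split(r"\s--", arguments, maxsplit=1)[0].strip()
    return target, focus, depth
```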
---
## Phase 1: Performance Profiling & Baseline (Steps 1-3)
### Step 1: Comprehensive Performance Profiling
Use the Task tool to launch the performance engineer:
```
Task:
subagent_type: "performance-engineer"
description: "Profile application performance for $TARGET"
prompt: |
Profile application performance comprehensively for: $TARGET.
Generate flame graphs for CPU usage, heap dumps for memory analysis, trace I/O operations,
and identify hot paths. Use APM tools like DataDog or New Relic if available. Include database
query profiling, API response times, and frontend rendering metrics. Establish performance
baselines for all critical user journeys.
## Deliverables
1. Performance profile with flame graphs and memory analysis
2. Bottleneck identification ranked by impact
3. Baseline metrics for critical user journeys
4. Database query profiling results
5. API response time measurements
Write your complete profiling report as a single markdown document.
```
Save the agent's output to `.performance-optimization/01-profiling.md`.
Update `state.json`: set `current_step` to 2, add step 1 to `completed_steps`.
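Every "Update `state.json`" instruction in the steps below follows the same pattern; it can be sketched as one hypothetical helper (shown for illustration, with a path parameter so it is testable):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def advance_state(path, next_step, completed_step, created_file=None) -> dict:
    """Mark a step complete and point state.json at the next step or checkpoint."""
    path = Path(path)
    state = json.loads(path.read_text())
    state["current_step"] = next_step  # an int, or a string like "checkpoint-1"
    state["completed_steps"].append(completed_step)
    if created_file:
        state["files_created"].append(created_file)
    state["last_updated"] = datetime.now(timezone.utc).isoformat()
    path.write_text(json.dumps(state, indent=2))
    return state
```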
### Step 2: Observability Stack Assessment
Read `.performance-optimization/01-profiling.md` to load profiling context.
Use the Task tool:
```
Task:
subagent_type: "observability-engineer"
description: "Assess observability setup for $TARGET"
prompt: |
Assess current observability setup for: $TARGET.
## Performance Profile
[Insert full contents of .performance-optimization/01-profiling.md]
Review existing monitoring, distributed tracing with OpenTelemetry, log aggregation,
and metrics collection. Identify gaps in visibility, missing metrics, and areas needing
better instrumentation. Recommend APM tool integration and custom metrics for
business-critical operations.
## Deliverables
1. Current observability assessment
2. Instrumentation gaps identified
3. Monitoring recommendations
4. Recommended metrics and dashboards
Write your complete assessment as a single markdown document.
```
Save the agent's output to `.performance-optimization/02-observability.md`.
Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`.
### Step 3: User Experience Analysis
Read `.performance-optimization/01-profiling.md`.
Use the Task tool:
```
Task:
subagent_type: "performance-engineer"
description: "Analyze user experience metrics for $TARGET"
prompt: |
Analyze user experience metrics for: $TARGET.
## Performance Baselines
[Insert contents of .performance-optimization/01-profiling.md]
Measure Core Web Vitals (LCP, FID, CLS), page load times, time to interactive,
and perceived performance. Use Real User Monitoring (RUM) data if available.
Identify user journeys with poor performance and their business impact.
## Deliverables
1. Core Web Vitals analysis
2. User journey performance report
3. Business impact assessment
4. Prioritized improvement opportunities
Write your complete analysis as a single markdown document.
```
Save the agent's output to `.performance-optimization/03-ux-analysis.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`.
---
## PHASE CHECKPOINT 1 — User Approval Required
You MUST stop here and present the profiling results for review.
Display a summary from `.performance-optimization/01-profiling.md`, `.performance-optimization/02-observability.md`, and `.performance-optimization/03-ux-analysis.md` (key bottlenecks, observability gaps, UX findings) and ask:
```
Performance profiling complete. Please review:
- .performance-optimization/01-profiling.md
- .performance-optimization/02-observability.md
- .performance-optimization/03-ux-analysis.md
Key bottlenecks: [summary]
Observability gaps: [summary]
UX findings: [summary]
1. Approve — proceed to optimization
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop.
---
## Phase 2: Database & Backend Optimization (Steps 4-6)
### Step 4: Database Performance Optimization
Read `.performance-optimization/01-profiling.md` and `.performance-optimization/03-ux-analysis.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Optimize database performance for $TARGET"
prompt: |
You are a database optimization expert. Optimize database performance for: $TARGET.
## Profiling Data
[Insert contents of .performance-optimization/01-profiling.md]
## UX Analysis
[Insert contents of .performance-optimization/03-ux-analysis.md]
Analyze slow query logs, create missing indexes, optimize execution plans, implement
query result caching with Redis/Memcached. Review connection pooling, prepared statements,
and batch processing opportunities. Consider read replicas and database sharding if needed.
## Deliverables
1. Optimized queries with before/after performance
2. New indexes with justification
3. Caching strategy recommendation
4. Connection pool configuration
5. Implementation plan with priority order
Write your complete optimization plan as a single markdown document.
```
Save output to `.performance-optimization/04-database.md`.
Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`.
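As one illustration of the query-result caching this step asks the agent to plan, a cache-aside sketch (hedged: the function names are hypothetical, and an in-memory dict stands in for Redis/Memcached so the pattern is self-contained; a real implementation would use `SETEX` with a TTL):

```python
import hashlib
import json

_cache: dict = {}  # stands in for Redis/Memcached in this sketch

def cached_query(sql: str, params: tuple, run_query):
    """Cache-aside: return the cached result if present, else run and store."""
    key = hashlib.sha256(json.dumps([sql, list(params)]).encode()).hexdigest()
    if key in _cache:
        return _cache[key]           # cache hit: no database round trip
    result = run_query(sql, params)  # cache miss: hit the database once
    _cache[key] = result
    return result
```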
### Step 5: Backend Code & API Optimization
Read `.performance-optimization/01-profiling.md` and `.performance-optimization/04-database.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Optimize backend services for $TARGET"
prompt: |
You are a backend performance architect. Optimize backend services for: $TARGET.
## Profiling Data
[Insert contents of .performance-optimization/01-profiling.md]
## Database Optimizations
[Insert contents of .performance-optimization/04-database.md]
Implement efficient algorithms, add application-level caching, optimize N+1 queries,
use async/await patterns effectively. Implement pagination, response compression,
GraphQL query optimization, and batch API operations. Add circuit breakers and
bulkheads for resilience.
## Deliverables
1. Optimized backend code with before/after metrics
2. Caching implementation plan
3. API improvements with expected impact
4. Resilience patterns added
5. Implementation priority order
Write your complete optimization plan as a single markdown document.
```
Save output to `.performance-optimization/05-backend.md`.
Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`.
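The N+1 elimination the prompt mentions can be sketched as follows (illustrative only, not tied to any specific ORM; plain dicts stand in for rows and the callback stands in for a single `WHERE order_id IN (...)` query):

```python
def fetch_orders_with_items_batched(orders, fetch_items_for_order_ids):
    """Replace one items query per order (N+1) with a single batched query."""
    order_ids = [o["id"] for o in orders]
    # One query for all items, e.g. SELECT ... WHERE order_id IN (...)
    items_by_order = fetch_items_for_order_ids(order_ids)
    # Attach each order's items from the in-memory map; no further queries.
    return [{**o, "items": items_by_order.get(o["id"], [])} for o in orders]
```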
### Step 6: Microservices & Distributed System Optimization
Read `.performance-optimization/01-profiling.md` and `.performance-optimization/05-backend.md`.
Use the Task tool:
```
Task:
subagent_type: "performance-engineer"
description: "Optimize distributed system performance for $TARGET"
prompt: |
Optimize distributed system performance for: $TARGET.
## Profiling Data
[Insert contents of .performance-optimization/01-profiling.md]
## Backend Optimizations
[Insert contents of .performance-optimization/05-backend.md]
Analyze service-to-service communication, implement service mesh optimizations,
optimize message queue performance (Kafka/RabbitMQ), reduce network hops. Implement
distributed caching strategies and optimize serialization/deserialization.
## Deliverables
1. Service communication improvements
2. Message queue optimization plan
3. Distributed caching setup
4. Network optimization recommendations
5. Expected latency improvements
Write your complete optimization plan as a single markdown document.
```
Save output to `.performance-optimization/06-distributed.md`.
Update `state.json`: set `current_step` to "checkpoint-2", add step 6 to `completed_steps`.
---
## PHASE CHECKPOINT 2 — User Approval Required
Display a summary of optimization plans from steps 4-6 and ask:
```
Backend optimization plans complete. Please review:
- .performance-optimization/04-database.md
- .performance-optimization/05-backend.md
- .performance-optimization/06-distributed.md
1. Approve — proceed to frontend & CDN optimization
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves.
---
## Phase 3: Frontend & CDN Optimization (Steps 7-9)
### Step 7: Frontend Bundle & Loading Optimization
Read `.performance-optimization/03-ux-analysis.md` and `.performance-optimization/05-backend.md`.
Use the Task tool:
```
Task:
subagent_type: "frontend-developer"
description: "Optimize frontend performance for $TARGET"
prompt: |
Optimize frontend performance for: $TARGET targeting Core Web Vitals improvements.
## UX Analysis
[Insert contents of .performance-optimization/03-ux-analysis.md]
## Backend Optimizations
[Insert contents of .performance-optimization/05-backend.md]
Implement code splitting, tree shaking, lazy loading, and dynamic imports. Optimize bundle
sizes with webpack/rollup analysis. Implement resource hints (prefetch, preconnect, preload).
Optimize critical rendering path and eliminate render-blocking resources.
## Deliverables
1. Bundle optimization with size reductions
2. Lazy loading implementation plan
3. Resource hint configuration
4. Critical rendering path optimizations
5. Expected Core Web Vitals improvements
Write your complete optimization plan as a single markdown document.
```
Save output to `.performance-optimization/07-frontend.md`.
Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`.
### Step 8: CDN & Edge Optimization
Read `.performance-optimization/07-frontend.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Optimize CDN and edge performance for $TARGET"
prompt: |
You are a cloud infrastructure and CDN optimization expert. Optimize CDN and edge
performance for: $TARGET.
## Frontend Optimizations
[Insert contents of .performance-optimization/07-frontend.md]
Configure CloudFlare/CloudFront for optimal caching, implement edge functions for
dynamic content, set up image optimization with responsive images and WebP/AVIF formats.
Configure HTTP/2 and HTTP/3, implement Brotli compression. Set up geographic
distribution for global users.
## Deliverables
1. CDN configuration recommendations
2. Edge caching rules
3. Image optimization strategy
4. Compression setup
5. Geographic distribution plan
Write your complete optimization plan as a single markdown document.
```
Save output to `.performance-optimization/08-cdn.md`.
Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`.
### Step 9: Mobile & Progressive Web App Optimization
Read `.performance-optimization/07-frontend.md` and `.performance-optimization/08-cdn.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Optimize mobile experience for $TARGET"
prompt: |
You are a mobile performance optimization expert. Optimize mobile experience for: $TARGET.
## Frontend Optimizations
[Insert contents of .performance-optimization/07-frontend.md]
## CDN Optimizations
[Insert contents of .performance-optimization/08-cdn.md]
Implement service workers for offline functionality, optimize for slow networks with
adaptive loading. Reduce JavaScript execution time for mobile CPUs. Implement virtual
scrolling for long lists. Optimize touch responsiveness and smooth animations. Consider
React Native/Flutter specific optimizations if applicable.
## Deliverables
1. Mobile-optimized code recommendations
2. PWA implementation plan
3. Offline functionality strategy
4. Adaptive loading configuration
5. Expected mobile performance improvements
Write your complete optimization plan as a single markdown document.
```
Save output to `.performance-optimization/09-mobile.md`.
Update `state.json`: set `current_step` to "checkpoint-3", add step 9 to `completed_steps`.
---
## PHASE CHECKPOINT 3 — User Approval Required
Display a summary of frontend/CDN/mobile optimization plans and ask:
```
Frontend optimization plans complete. Please review:
- .performance-optimization/07-frontend.md
- .performance-optimization/08-cdn.md
- .performance-optimization/09-mobile.md
1. Approve — proceed to load testing & validation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 4 until the user approves.
---
## Phase 4: Load Testing & Validation (Steps 10-11)
### Step 10: Comprehensive Load Testing
Read `.performance-optimization/01-profiling.md`.
Use the Task tool:
```
Task:
subagent_type: "performance-engineer"
description: "Conduct comprehensive load testing for $TARGET"
prompt: |
Conduct comprehensive load testing for: $TARGET using k6/Gatling/Artillery.
## Original Baselines
[Insert contents of .performance-optimization/01-profiling.md]
Design realistic load scenarios based on production traffic patterns. Test normal load,
peak load, and stress scenarios. Include API testing, browser-based testing, and WebSocket
testing if applicable. Measure response times, throughput, error rates, and resource
utilization at various load levels.
## Deliverables
1. Load test scripts and configurations
2. Results at normal, peak, and stress loads
3. Response time and throughput measurements
4. Breaking points and scalability analysis
5. Comparison against original baselines
Write your complete load test report as a single markdown document.
```
Save output to `.performance-optimization/10-load-testing.md`.
Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`.
### Step 11: Performance Regression Testing
Read `.performance-optimization/10-load-testing.md` and `.performance-optimization/01-profiling.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create performance regression tests for $TARGET"
prompt: |
You are a test automation expert specializing in performance testing. Create automated
performance regression tests for: $TARGET.
## Load Test Results
[Insert contents of .performance-optimization/10-load-testing.md]
## Original Baselines
[Insert contents of .performance-optimization/01-profiling.md]
Set up performance budgets for key metrics, integrate with CI/CD pipeline using GitHub
Actions or similar. Create Lighthouse CI tests for frontend, API performance tests with
Artillery, and database performance benchmarks. Implement automatic rollback triggers
for performance regressions.
## Deliverables
1. Performance test suite with scripts
2. CI/CD integration configuration
3. Performance budgets and thresholds
4. Regression detection rules
5. Automatic rollback triggers
Write your complete regression testing plan as a single markdown document.
```
Save output to `.performance-optimization/11-regression-testing.md`.
Update `state.json`: set `current_step` to "checkpoint-4", add step 11 to `completed_steps`.
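A minimal budget-gate sketch of the regression check this step describes (the metric names and threshold values are assumptions for illustration; a CI job would fail the build when the returned list is non-empty):

```python
BUDGETS = {
    "p95_ms": 1000,     # example threshold: P95 response time under 1s
    "p99_ms": 2000,     # example threshold: P99 under 2s
    "error_rate": 0.01, # example threshold: under 1% errors
}

def check_budgets(metrics: dict, budgets: dict = BUDGETS) -> list[str]:
    """Return the metrics that exceed their budget; an empty list means pass."""
    return [
        f"{name}: {metrics[name]} > {limit}"
        for name, limit in budgets.items()
        if name in metrics and metrics[name] > limit
    ]
```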
---
## PHASE CHECKPOINT 4 — User Approval Required
Display a summary of testing results and ask:
```
Load testing and validation complete. Please review:
- .performance-optimization/10-load-testing.md
- .performance-optimization/11-regression-testing.md
1. Approve — proceed to monitoring & continuous optimization
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 5 until the user approves.
---
## Phase 5: Monitoring & Continuous Optimization (Steps 12-13)
### Step 12: Production Monitoring Setup
Read `.performance-optimization/02-observability.md` and `.performance-optimization/10-load-testing.md`.
Use the Task tool:
```
Task:
subagent_type: "observability-engineer"
description: "Implement production performance monitoring for $TARGET"
prompt: |
Implement production performance monitoring for: $TARGET.
## Observability Assessment
[Insert contents of .performance-optimization/02-observability.md]
## Load Test Results
[Insert contents of .performance-optimization/10-load-testing.md]
Set up APM with DataDog/New Relic/Dynatrace, configure distributed tracing with
OpenTelemetry, implement custom business metrics. Create Grafana dashboards for key
metrics, set up PagerDuty alerts for performance degradation. Define SLIs/SLOs for
critical services with error budgets.
## Deliverables
1. Monitoring dashboard configurations
2. Alert rules and thresholds
3. SLI/SLO definitions
4. Runbooks for common performance issues
5. Error budget tracking setup
Write your complete monitoring plan as a single markdown document.
```
Save output to `.performance-optimization/12-monitoring.md`.
Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`.
### Step 13: Continuous Performance Optimization
Read all previous `.performance-optimization/*.md` files.
Use the Task tool:
```
Task:
subagent_type: "performance-engineer"
description: "Establish continuous optimization process for $TARGET"
prompt: |
Establish continuous optimization process for: $TARGET.
## Monitoring Setup
[Insert contents of .performance-optimization/12-monitoring.md]
## All Previous Optimization Work
[Insert summary of key findings from all previous steps]
Create performance budget tracking, implement A/B testing for performance changes,
set up continuous profiling in production. Document optimization opportunities backlog,
create capacity planning models, and establish regular performance review cycles.
## Deliverables
1. Performance budget tracking system
2. Optimization backlog with priorities
3. Capacity planning model
4. Review cycle schedule and process
5. A/B testing framework for performance changes
Write your complete continuous optimization plan as a single markdown document.
```
Save output to `.performance-optimization/13-continuous.md`.
Update `state.json`: set `current_step` to "complete", add step 13 to `completed_steps`.
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Performance optimization complete: $TARGET
## Files Created
[List all .performance-optimization/ output files]
## Optimization Summary
- Profiling: .performance-optimization/01-profiling.md
- Observability: .performance-optimization/02-observability.md
- UX Analysis: .performance-optimization/03-ux-analysis.md
- Database: .performance-optimization/04-database.md
- Backend: .performance-optimization/05-backend.md
- Distributed: .performance-optimization/06-distributed.md
- Frontend: .performance-optimization/07-frontend.md
- CDN: .performance-optimization/08-cdn.md
- Mobile: .performance-optimization/09-mobile.md
- Load Testing: .performance-optimization/10-load-testing.md
- Regression Testing: .performance-optimization/11-regression-testing.md
- Monitoring: .performance-optimization/12-monitoring.md
- Continuous: .performance-optimization/13-continuous.md
## Success Criteria
- **Response Time**: P50 < 200ms, P95 < 1s, P99 < 2s for critical endpoints
- **Core Web Vitals**: LCP < 2.5s, FID < 100ms, CLS < 0.1
- **Throughput**: Support 2x current peak load with <1% error rate
- **Database Performance**: Query P95 < 100ms, no queries > 1s
- **Resource Utilization**: CPU < 70%, Memory < 80% under normal load
- **Cost Efficiency**: Performance per dollar improved by minimum 30%
- **Monitoring Coverage**: 100% of critical paths instrumented with alerting
## Next Steps
1. Implement optimizations in priority order from each phase
2. Run regression tests after each optimization
3. Monitor production metrics against baselines
4. Review performance budgets in weekly cycles
```


@@ -0,0 +1,10 @@
{
"name": "arm-cortex-microcontrollers",
"version": "1.2.0",
"description": "ARM Cortex-M firmware development for Teensy, STM32, nRF52, and SAMD with peripheral drivers and memory safety patterns",
"author": {
"name": "Ryan Snodgrass",
"url": "https://github.com/rsnodgrass"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "backend-api-security",
"version": "1.2.0",
"description": "API security hardening, authentication implementation, authorization patterns, rate limiting, and input validation",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "backend-development",
"version": "1.3.0",
"description": "Backend API design, GraphQL architecture, workflow orchestration with Temporal, and test-driven backend development",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -1,6 +1,10 @@
---
name: event-sourcing-architect
description: Expert in event sourcing, CQRS, and event-driven architecture patterns. Masters event store design, projection building, saga orchestration, and eventual consistency patterns. Use PROACTIVELY for event-sourced systems, audit trail requirements, or complex domain modeling with temporal queries.
model: inherit
---
You are an expert in Event Sourcing, CQRS, and event-driven architectures. Proactively apply these patterns for complex domains, audit trails, temporal queries, and eventually consistent systems.
## Capabilities


@@ -0,0 +1,44 @@
---
name: performance-engineer
description: Profile and optimize application performance including response times, memory usage, query efficiency, and scalability. Use for performance review during feature development.
model: sonnet
---
You are a performance engineer specializing in application optimization during feature development.
## Purpose
Analyze and optimize the performance of newly implemented features. Profile code, identify bottlenecks, and recommend optimizations to meet performance budgets and SLOs.
## Capabilities
- **Code Profiling**: CPU hotspots, memory allocation patterns, I/O bottlenecks, async/await inefficiencies
- **Database Performance**: N+1 query detection, missing indexes, query plan analysis, connection pool sizing, ORM inefficiencies
- **API Performance**: Response time analysis, payload optimization, compression, pagination efficiency, batch operation design
- **Caching Strategy**: Cache-aside/read-through/write-through patterns, TTL tuning, cache invalidation, hit rate analysis
- **Memory Management**: Memory leak detection, garbage collection pressure, object pooling, buffer management
- **Concurrency**: Thread pool sizing, async patterns, connection pooling, resource contention, deadlock detection
- **Frontend Performance**: Bundle size analysis, lazy loading, code splitting, render performance, network waterfall
- **Load Testing Design**: K6/JMeter/Gatling script design, realistic load profiles, stress testing, capacity planning
- **Scalability Analysis**: Horizontal vs vertical scaling readiness, stateless design validation, bottleneck identification
## Response Approach
1. **Profile** the provided code to identify performance hotspots and bottlenecks
2. **Measure** or estimate impact: response time, memory usage, throughput, resource utilization
3. **Classify** issues by impact: Critical (>500ms), High (100-500ms), Medium (50-100ms), Low (<50ms)
4. **Recommend** specific optimizations with before/after code examples
5. **Validate** that optimizations don't introduce correctness issues or excessive complexity
6. **Benchmark** suggestions with expected improvement estimates
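The impact classification in step 3 maps directly to a small helper; how the band edges (50, 100, 500 ms) are binned is an assumption, since the stated ranges overlap at the boundaries.

```python
# Sketch of the impact bands above; assigning the exact boundary values
# (50, 100, 500 ms) is an assumption, as the stated ranges overlap there.

def classify_impact(latency_ms):
    if latency_ms > 500:
        return "Critical"
    if latency_ms >= 100:
        return "High"
    if latency_ms >= 50:
        return "Medium"
    return "Low"

print([classify_impact(ms) for ms in (750, 200, 60, 10)])
# ['Critical', 'High', 'Medium', 'Low']
```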
## Output Format
For each finding:
- **Impact**: Critical/High/Medium/Low with estimated latency or resource cost
- **Location**: File and line reference
- **Issue**: What's slow and why
- **Fix**: Specific optimization with code example
- **Tradeoff**: Any downsides (complexity, memory for speed, etc.)
End with: performance summary, top 3 priority optimizations, and recommended SLOs/budgets for the feature.


@@ -0,0 +1,41 @@
---
name: security-auditor
description: Review code and architecture for security vulnerabilities, OWASP Top 10, auth flaws, and compliance issues. Use for security review during feature development.
model: sonnet
---
You are a security auditor specializing in application security review during feature development.
## Purpose
Perform focused security reviews of code and architecture produced during feature development. Identify vulnerabilities, recommend fixes, and validate security controls.
## Capabilities
- **OWASP Top 10 Review**: Injection, broken authentication, sensitive data exposure, XXE, broken access control, security misconfiguration, XSS, insecure deserialization, vulnerable components, insufficient logging
- **Authentication & Authorization**: JWT validation, session management, OAuth flows, RBAC/ABAC enforcement, privilege escalation vectors
- **Input Validation**: SQL injection, command injection, path traversal, XSS, SSRF, prototype pollution
- **Data Protection**: Encryption at rest/transit, secrets management, PII handling, credential storage
- **API Security**: Rate limiting, CORS, CSRF, request validation, API key management
- **Dependency Scanning**: Known CVEs in dependencies, outdated packages, supply chain risks
- **Infrastructure Security**: Container security, network policies, secrets in env vars, TLS configuration
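The SQL injection review in the capabilities above comes down to one recurring fix: replace string interpolation with parameterized queries. A minimal sketch using Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable pattern the audit flags: interpolating input into SQL text
unsafe_sql = f"SELECT id FROM users WHERE name = '{attacker_input}'"
leaked = conn.execute(unsafe_sql).fetchall()  # injection matches every row

# Fix: a parameterized query treats the input as data, never as SQL
safe = conn.execute(
    "SELECT id FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(leaked, safe)  # [(1,)] []
```

The same pattern applies to any driver or ORM: the placeholder syntax varies, but the principle of keeping query text and user data separate does not.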
## Response Approach
1. **Scan** the provided code and architecture for vulnerabilities
2. **Classify** findings by severity: Critical, High, Medium, Low
3. **Explain** each finding with the attack vector and impact
4. **Recommend** specific fixes with code examples where possible
5. **Validate** that security controls (auth, authz, input validation) are correctly implemented
## Output Format
For each finding:
- **Severity**: Critical/High/Medium/Low
- **Category**: OWASP category or security domain
- **Location**: File and line reference
- **Issue**: What's wrong and why it matters
- **Fix**: Specific remediation with code example
End with a summary: total findings by severity, overall security posture assessment, and top 3 priority fixes.


@@ -0,0 +1,41 @@
---
name: test-automator
description: Create comprehensive test suites including unit, integration, and E2E tests. Supports TDD/BDD workflows. Use for test creation during feature development.
model: sonnet
---
You are a test automation engineer specializing in creating comprehensive test suites during feature development.
## Purpose
Build robust, maintainable test suites for newly implemented features. Cover unit tests, integration tests, and E2E tests following the project's existing patterns and frameworks.
## Capabilities
- **Unit Testing**: Isolated function/method tests, mocking dependencies, edge cases, error paths
- **Integration Testing**: API endpoint tests, database integration, service-to-service communication, middleware chains
- **E2E Testing**: Critical user journeys, happy paths, error scenarios, browser/API-level flows
- **TDD Support**: Red-green-refactor cycle, failing test first, minimal implementation guidance
- **BDD Support**: Gherkin scenarios, step definitions, behavior specifications
- **Test Data**: Factory patterns, fixtures, seed data, synthetic data generation
- **Mocking & Stubbing**: External service mocks, database stubs, time/environment mocking
- **Coverage Analysis**: Identify untested paths, suggest additional test cases, coverage gap analysis
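Several of these capabilities combine in practice: inject or stub external dependencies so unit tests cover the happy path and error paths without any I/O. A hedged sketch where the `charge`/gateway names are hypothetical:

```python
# Hypothetical unit under test: `charge` validates input, then calls an
# external payment gateway. The gateway is injected so tests can stub it.

def real_gateway(amount):
    raise RuntimeError("no network in unit tests")  # placeholder for an HTTP call

def charge(amount, gateway=real_gateway):
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway(amount)

# Happy path: stub the dependency instead of hitting the real service
sent = []
def stub_gateway(amount):
    sent.append(amount)  # the stub records calls for later verification
    return {"status": "ok", "amount": amount}

result = charge(500, gateway=stub_gateway)

# Error path / boundary condition: zero is rejected before any I/O
try:
    charge(0, gateway=stub_gateway)
    raised = False
except ValueError:
    raised = True

print(result, raised, sent)  # {'status': 'ok', 'amount': 500} True [500]
```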
## Response Approach
1. **Detect** the project's test framework (Jest, pytest, Go testing, etc.) and existing patterns
2. **Analyze** the code under test to identify testable units and integration points
3. **Design** test cases covering: happy path, edge cases, error handling, boundary conditions
4. **Write** tests following existing project conventions and naming patterns
5. **Verify** tests are runnable and provide clear failure messages
6. **Report** coverage assessment and any untested risk areas
## Output Format
Organize tests by type:
- **Unit Tests**: One test file per source file, grouped by function/method
- **Integration Tests**: Grouped by API endpoint or service interaction
- **E2E Tests**: Grouped by user journey or feature scenario
Each test should have a descriptive name explaining what behavior is being verified. Include setup/teardown, assertions, and cleanup. Flag any areas where manual testing is recommended over automation.


@@ -1,150 +1,481 @@
Orchestrate end-to-end feature development from requirements to production deployment:
---
description: "Orchestrate end-to-end feature development from requirements to deployment"
argument-hint: "<feature description> [--methodology tdd|bdd|ddd] [--complexity simple|medium|complex]"
---
[Extended thinking: This workflow orchestrates specialized agents through comprehensive feature development phases - from discovery and planning through implementation, testing, and deployment. Each phase builds on previous outputs, ensuring coherent feature delivery. The workflow supports multiple development methodologies (traditional, TDD/BDD, DDD), feature complexity levels, and modern deployment strategies including feature flags, gradual rollouts, and observability-first development. Agents receive detailed context from previous phases to maintain consistency and quality throughout the development lifecycle.]
# Feature Development Orchestrator
## Configuration Options
## CRITICAL BEHAVIORAL RULES
### Development Methodology
You MUST follow these rules exactly. Violating any of them is a failure.
- **traditional**: Sequential development with testing after implementation
- **tdd**: Test-Driven Development with red-green-refactor cycles
- **bdd**: Behavior-Driven Development with scenario-based testing
- **ddd**: Domain-Driven Design with bounded contexts and aggregates
1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.feature-dev/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.
### Feature Complexity
## Pre-flight Checks
- **simple**: Single service, minimal integration (1-2 days)
- **medium**: Multiple services, moderate integration (3-5 days)
- **complex**: Cross-domain, extensive integration (1-2 weeks)
- **epic**: Major architectural changes, multiple teams (2+ weeks)
Before starting, perform these checks:
### Deployment Strategy
### 1. Check for existing session
- **direct**: Immediate rollout to all users
- **canary**: Gradual rollout starting with 5% of traffic
- **feature-flag**: Controlled activation via feature toggles
- **blue-green**: Zero-downtime deployment with instant rollback
- **a-b-test**: Split traffic for experimentation and metrics
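The canary and feature-flag strategies above both need a deterministic way to place users in the rollout cohort, so a given user sees consistent behavior across requests. One common sketch, hashing the user ID into a 0-99 bucket:

```python
import hashlib

def in_canary(user_id, rollout_percent=5):
    """Deterministic cohort assignment: same user, same answer, every time."""
    bucket = hashlib.sha256(user_id.encode()).digest()[0] * 100 // 256  # 0-99
    return bucket < rollout_percent

# 0% enrolls nobody, 100% enrolls everyone, and results are stable
print(in_canary("user-42", 0), in_canary("user-42", 100))  # False True
```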
Check if `.feature-dev/state.json` exists:
## Phase 1: Discovery & Requirements Planning
- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:
1. **Business Analysis & Requirements**
- Use Task tool with subagent_type="business-analytics::business-analyst"
- Prompt: "Analyze feature requirements for: $ARGUMENTS. Define user stories, acceptance criteria, success metrics, and business value. Identify stakeholders, dependencies, and risks. Create feature specification document with clear scope boundaries."
- Expected output: Requirements document with user stories, success metrics, risk assessment
- Context: Initial feature request and business context
```
Found an in-progress feature development session:
Feature: [name from state]
Current step: [step from state]
2. **Technical Architecture Design**
- Use Task tool with subagent_type="comprehensive-review::architect-review"
- Prompt: "Design technical architecture for feature: $ARGUMENTS. Using requirements: [include business analysis from step 1]. Define service boundaries, API contracts, data models, integration points, and technology stack. Consider scalability, performance, and security requirements."
- Expected output: Technical design document with architecture diagrams, API specifications, data models
- Context: Business requirements, existing system architecture
1. Resume from where we left off
2. Start fresh (archives existing session)
```
3. **Feasibility & Risk Assessment**
- Use Task tool with subagent_type="security-scanning::security-auditor"
- Prompt: "Assess security implications and risks for feature: $ARGUMENTS. Review architecture: [include technical design from step 2]. Identify security requirements, compliance needs, data privacy concerns, and potential vulnerabilities."
- Expected output: Security assessment with risk matrix, compliance checklist, mitigation strategies
- Context: Technical design, regulatory requirements
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
## Phase 2: Implementation & Development
### 2. Initialize state
4. **Backend Services Implementation**
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Implement backend services for: $ARGUMENTS. Follow technical design: [include architecture from step 2]. Build RESTful/GraphQL APIs, implement business logic, integrate with data layer, add resilience patterns (circuit breakers, retries), implement caching strategies. Include feature flags for gradual rollout."
- Expected output: Backend services with APIs, business logic, database integration, feature flags
- Context: Technical design, API contracts, data models
Create `.feature-dev/` directory and `state.json`:
5. **Frontend Implementation**
- Use Task tool with subagent_type="frontend-mobile-development::frontend-developer"
- Prompt: "Build frontend components for: $ARGUMENTS. Integrate with backend APIs: [include API endpoints from step 4]. Implement responsive UI, state management, error handling, loading states, and analytics tracking. Add feature flag integration for A/B testing capabilities."
- Expected output: Frontend components with API integration, state management, analytics
- Context: Backend APIs, UI/UX designs, user stories
```json
{
"feature": "$ARGUMENTS",
"status": "in_progress",
"methodology": "traditional",
"complexity": "medium",
"current_step": 1,
"current_phase": 1,
"completed_steps": [],
"files_created": [],
"started_at": "ISO_TIMESTAMP",
"last_updated": "ISO_TIMESTAMP"
}
```
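The resume-or-initialize logic described in these pre-flight checks can be sketched as a small helper; timestamps are omitted for brevity, and the path handling is an assumption about how the command runs:

```python
import json
from pathlib import Path

def load_or_init_state(feature, state_path=Path(".feature-dev/state.json")):
    """Return (state, resumable): resumable means an in-progress session
    exists and the workflow should offer to resume it."""
    if state_path.exists():
        state = json.loads(state_path.read_text())
        return state, state.get("status") == "in_progress"
    state = {
        "feature": feature,
        "status": "in_progress",
        "methodology": "traditional",
        "complexity": "medium",
        "current_step": 1,
        "current_phase": 1,
        "completed_steps": [],
        "files_created": [],
    }
    state_path.parent.mkdir(parents=True, exist_ok=True)
    state_path.write_text(json.dumps(state, indent=2))
    return state, False
```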
6. **Data Pipeline & Integration**
- Use Task tool with subagent_type="data-engineering::data-engineer"
- Prompt: "Build data pipelines for: $ARGUMENTS. Design ETL/ELT processes, implement data validation, create analytics events, set up data quality monitoring. Integrate with product analytics platforms for feature usage tracking."
- Expected output: Data pipelines, analytics events, data quality checks
- Context: Data requirements, analytics needs, existing data infrastructure
Parse `$ARGUMENTS` for `--methodology` and `--complexity` flags. Use defaults if not specified.
## Phase 3: Testing & Quality Assurance
### 3. Parse feature description
7. **Automated Test Suite**
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Create comprehensive test suite for: $ARGUMENTS. Write unit tests for backend: [from step 4] and frontend: [from step 5]. Add integration tests for API endpoints, E2E tests for critical user journeys, performance tests for scalability validation. Ensure minimum 80% code coverage."
- Expected output: Test suites with unit, integration, E2E, and performance tests
- Context: Implementation code, acceptance criteria, test requirements
Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below.
8. **Security Validation**
- Use Task tool with subagent_type="security-scanning::security-auditor"
- Prompt: "Perform security testing for: $ARGUMENTS. Review implementation: [include backend and frontend from steps 4-5]. Run OWASP checks, penetration testing, dependency scanning, and compliance validation. Verify data encryption, authentication, and authorization."
- Expected output: Security test results, vulnerability report, remediation actions
- Context: Implementation code, security requirements
---
9. **Performance Optimization**
- Use Task tool with subagent_type="application-performance::performance-engineer"
- Prompt: "Optimize performance for: $ARGUMENTS. Analyze backend services: [from step 4] and frontend: [from step 5]. Profile code, optimize queries, implement caching, reduce bundle sizes, improve load times. Set up performance budgets and monitoring."
- Expected output: Performance improvements, optimization report, performance metrics
- Context: Implementation code, performance requirements
## Phase 1: Discovery (Steps 1–2) — Interactive
## Phase 4: Deployment & Monitoring
### Step 1: Requirements Gathering
10. **Deployment Strategy & Pipeline**
- Use Task tool with subagent_type="deployment-strategies::deployment-engineer"
- Prompt: "Prepare deployment for: $ARGUMENTS. Create CI/CD pipeline with automated tests: [from step 7]. Configure feature flags for gradual rollout, implement blue-green deployment, set up rollback procedures. Create deployment runbook and rollback plan."
- Expected output: CI/CD pipeline, deployment configuration, rollback procedures
- Context: Test suites, infrastructure requirements, deployment strategy
Gather requirements through interactive Q&A. Ask ONE question at a time using the AskUserQuestion tool. Do NOT ask all questions at once.
11. **Observability & Monitoring**
- Use Task tool with subagent_type="observability-monitoring::observability-engineer"
- Prompt: "Set up observability for: $ARGUMENTS. Implement distributed tracing, custom metrics, error tracking, and alerting. Create dashboards for feature usage, performance metrics, error rates, and business KPIs. Set up SLOs/SLIs with automated alerts."
- Expected output: Monitoring dashboards, alerts, SLO definitions, observability infrastructure
- Context: Feature implementation, success metrics, operational requirements
**Questions to ask (in order):**
12. **Documentation & Knowledge Transfer**
- Use Task tool with subagent_type="documentation-generation::docs-architect"
- Prompt: "Generate comprehensive documentation for: $ARGUMENTS. Create API documentation, user guides, deployment guides, troubleshooting runbooks. Include architecture diagrams, data flow diagrams, and integration guides. Generate automated changelog from commits."
- Expected output: API docs, user guides, runbooks, architecture documentation
- Context: All previous phases' outputs
1. **Problem Statement**: "What problem does this feature solve? Who is the user and what's their pain point?"
2. **Acceptance Criteria**: "What are the key acceptance criteria? When is this feature 'done'?"
3. **Scope Boundaries**: "What is explicitly OUT of scope for this feature?"
4. **Technical Constraints**: "Any technical constraints? (e.g., must use existing auth system, specific DB, latency requirements)"
5. **Dependencies**: "Does this feature depend on or affect other features/services?"
## Execution Parameters
After gathering answers, write the requirements document:
### Required Parameters
**Output file:** `.feature-dev/01-requirements.md`
- **--feature**: Feature name and description
- **--methodology**: Development approach (traditional|tdd|bdd|ddd)
- **--complexity**: Feature complexity level (simple|medium|complex|epic)
```markdown
# Requirements: $FEATURE
### Optional Parameters
## Problem Statement
- **--deployment-strategy**: Deployment approach (direct|canary|feature-flag|blue-green|a-b-test)
- **--test-coverage-min**: Minimum test coverage threshold (default: 80%)
- **--performance-budget**: Performance requirements (e.g., <200ms response time)
- **--rollout-percentage**: Initial rollout percentage for gradual deployment (default: 5%)
- **--feature-flag-service**: Feature flag provider (launchdarkly|split|unleash|custom)
- **--analytics-platform**: Analytics integration (segment|amplitude|mixpanel|custom)
- **--monitoring-stack**: Observability tools (datadog|newrelic|grafana|custom)
[From Q1]
## Success Criteria
## Acceptance Criteria
- All acceptance criteria from business requirements are met
- Test coverage exceeds minimum threshold (80% default)
- Security scan shows no critical vulnerabilities
- Performance meets defined budgets and SLOs
- Feature flags configured for controlled rollout
- Monitoring and alerting fully operational
- Documentation complete and approved
- Successful deployment to production with rollback capability
- Product analytics tracking feature usage
- A/B test metrics configured (if applicable)
[From Q2 — formatted as checkboxes]
## Rollback Strategy
## Scope
If issues arise during or after deployment:
### In Scope
1. Immediate feature flag disable (< 1 minute)
2. Blue-green traffic switch (< 5 minutes)
3. Full deployment rollback via CI/CD (< 15 minutes)
4. Database migration rollback if needed (coordinate with data team)
5. Incident post-mortem and fixes before re-deployment
[Derived from answers]
Feature description: $ARGUMENTS
### Out of Scope
[From Q3]
## Technical Constraints
[From Q4]
## Dependencies
[From Q5]
## Methodology: [tdd|bdd|ddd|traditional]
## Complexity: [simple|medium|complex]
```
Update `state.json`: set `current_step` to 2, add `"01-requirements.md"` to `files_created`, add step 1 to `completed_steps`.
### Step 2: Architecture & Security Design
Read `.feature-dev/01-requirements.md` to load requirements context.
Use the Task tool to launch the architecture agent:
```
Task:
subagent_type: "backend-architect"
description: "Design architecture for $FEATURE"
prompt: |
Design the technical architecture for this feature.
## Requirements
[Insert full contents of .feature-dev/01-requirements.md]
## Deliverables
1. **Service/component design**: What components are needed, their responsibilities, and boundaries
2. **API design**: Endpoints, request/response schemas, error handling
3. **Data model**: Database tables/collections, relationships, migrations needed
4. **Security considerations**: Auth requirements, input validation, data protection, OWASP concerns
5. **Integration points**: How this connects to existing services/systems
6. **Risk assessment**: Technical risks and mitigation strategies
Write your complete architecture design as a single markdown document.
```
Save the agent's output to `.feature-dev/02-architecture.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 2 to `completed_steps`.
---
## PHASE CHECKPOINT 1 — User Approval Required
You MUST stop here and present the architecture for review.
Display a summary of the architecture from `.feature-dev/02-architecture.md` (key components, API endpoints, data model overview) and ask:
```
Architecture design is complete. Please review .feature-dev/02-architecture.md
1. Approve — proceed to implementation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise the architecture and re-checkpoint. If option 3, update `state.json` status and stop.
---
## Phase 2: Implementation (Steps 3–5)
### Step 3: Backend Implementation
Read `.feature-dev/01-requirements.md` and `.feature-dev/02-architecture.md`.
Use the Task tool to launch the backend architect for implementation:
```
Task:
subagent_type: "backend-architect"
description: "Implement backend for $FEATURE"
prompt: |
Implement the backend for this feature based on the approved architecture.
## Requirements
[Insert contents of .feature-dev/01-requirements.md]
## Architecture
[Insert contents of .feature-dev/02-architecture.md]
## Instructions
1. Implement the API endpoints, business logic, and data access layer as designed
2. Include data layer components (models, migrations, repositories) as specified in the architecture
3. Add input validation and error handling
4. Follow the project's existing code patterns and conventions
5. If methodology is TDD: write failing tests first, then implement
6. Include inline comments only where logic is non-obvious
Write all code files. Report what files were created/modified.
```
Save a summary of what was implemented to `.feature-dev/03-backend.md` (list of files created/modified, key decisions, any deviations from architecture).
Update `state.json`: set `current_step` to 4, add step 3 to `completed_steps`.
### Step 4: Frontend Implementation
Read `.feature-dev/01-requirements.md`, `.feature-dev/02-architecture.md`, and `.feature-dev/03-backend.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Implement frontend for $FEATURE"
prompt: |
You are a frontend developer. Implement the frontend components for this feature.
## Requirements
[Insert contents of .feature-dev/01-requirements.md]
## Architecture
[Insert contents of .feature-dev/02-architecture.md]
## Backend Implementation
[Insert contents of .feature-dev/03-backend.md]
## Instructions
1. Build UI components that integrate with the backend API endpoints
2. Implement state management, form handling, and error states
3. Add loading states and optimistic updates where appropriate
4. Follow the project's existing frontend patterns and component conventions
5. Ensure responsive design and accessibility basics (semantic HTML, ARIA labels, keyboard nav)
Write all code files. Report what files were created/modified.
```
Save a summary to `.feature-dev/04-frontend.md`.
**Note:** If the feature has no frontend component (pure backend/API), skip this step — write a brief note in `04-frontend.md` explaining why it was skipped, and continue.
Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`.
### Step 5: Testing & Validation
Read `.feature-dev/03-backend.md` and `.feature-dev/04-frontend.md`.
Launch three agents in parallel using multiple Task tool calls in a single response:
**5a. Test Suite Creation:**
```
Task:
subagent_type: "test-automator"
description: "Create test suite for $FEATURE"
prompt: |
Create a comprehensive test suite for this feature.
## What was implemented
### Backend
[Insert contents of .feature-dev/03-backend.md]
### Frontend
[Insert contents of .feature-dev/04-frontend.md]
## Instructions
1. Write unit tests for all new backend functions/methods
2. Write integration tests for API endpoints
3. Write frontend component tests if applicable
4. Cover: happy path, edge cases, error handling, boundary conditions
5. Follow existing test patterns and frameworks in the project
6. Target 80%+ code coverage for new code
Write all test files. Report what test files were created and what they cover.
```
**5b. Security Review:**
```
Task:
subagent_type: "security-auditor"
description: "Security review of $FEATURE"
prompt: |
Perform a security review of this feature implementation.
## Architecture
[Insert contents of .feature-dev/02-architecture.md]
## Backend Implementation
[Insert contents of .feature-dev/03-backend.md]
## Frontend Implementation
[Insert contents of .feature-dev/04-frontend.md]
Review for: OWASP Top 10, authentication/authorization flaws, input validation gaps,
data protection issues, dependency vulnerabilities, and any security anti-patterns.
Provide findings with severity, location, and specific fix recommendations.
```
**5c. Performance Review:**
```
Task:
subagent_type: "performance-engineer"
description: "Performance review of $FEATURE"
prompt: |
Review the performance of this feature implementation.
## Architecture
[Insert contents of .feature-dev/02-architecture.md]
## Backend Implementation
[Insert contents of .feature-dev/03-backend.md]
## Frontend Implementation
[Insert contents of .feature-dev/04-frontend.md]
Review for: N+1 queries, missing indexes, unoptimized queries, memory leaks,
missing caching opportunities, large payloads, slow rendering paths.
Provide findings with impact estimates and specific optimization recommendations.
```
After all three complete, consolidate results into `.feature-dev/05-testing.md`:
```markdown
# Testing & Validation: $FEATURE
## Test Suite
[Summary from 5a — files created, coverage areas]
## Security Findings
[Summary from 5b — findings by severity]
## Performance Findings
[Summary from 5c — findings by impact]
## Action Items
[List any critical/high findings that need to be addressed before delivery]
```
If there are Critical or High severity findings from security or performance review, address them now before proceeding. Apply fixes and re-validate.
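The gate described here (fix Critical/High findings before the checkpoint) is simple to make explicit; the finding dicts are hypothetical, and only the severity field matters:

```python
# Finding dicts are hypothetical; only the severity field matters here.
BLOCKING = {"Critical", "High"}

def blocking_findings(findings):
    """Findings that must be fixed before this checkpoint is presented."""
    return [f for f in findings if f["severity"] in BLOCKING]

findings = [
    {"id": "SEC-1", "severity": "Critical"},
    {"id": "PERF-4", "severity": "Medium"},
    {"id": "SEC-2", "severity": "High"},
]

print([f["id"] for f in blocking_findings(findings)])  # ['SEC-1', 'SEC-2']
```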
Update `state.json`: set `current_step` to "checkpoint-2", add step 5 to `completed_steps`.
---
## PHASE CHECKPOINT 2 — User Approval Required
Display a summary of testing and validation results from `.feature-dev/05-testing.md` and ask:
```
Testing and validation complete. Please review .feature-dev/05-testing.md
Test coverage: [summary]
Security findings: [X critical, Y high, Z medium]
Performance findings: [X critical, Y high, Z medium]
1. Approve — proceed to deployment & documentation
2. Request changes — tell me what to fix
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves.
---
## Phase 3: Delivery (Steps 6–7)
### Step 6: Deployment & Monitoring
Read `.feature-dev/02-architecture.md` and `.feature-dev/05-testing.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create deployment config for $FEATURE"
prompt: |
You are a deployment engineer. Create the deployment and monitoring configuration for this feature.
## Architecture
[Insert contents of .feature-dev/02-architecture.md]
## Testing Results
[Insert contents of .feature-dev/05-testing.md]
## Instructions
1. Create or update CI/CD pipeline configuration for the new code
2. Add feature flag configuration if the feature should be gradually rolled out
3. Define health checks and readiness probes for new services/endpoints
4. Create monitoring alerts for key metrics (error rate, latency, throughput)
5. Write a deployment runbook with rollback steps
6. Follow existing deployment patterns in the project
Write all configuration files. Report what was created/modified.
```
Save output to `.feature-dev/06-deployment.md`.
Update `state.json`: set `current_step` to 7, add step 6 to `completed_steps`.
### Step 7: Documentation & Handoff
Read all previous `.feature-dev/*.md` files.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Write documentation for $FEATURE"
prompt: |
You are a technical writer. Create documentation for this feature.
## Feature Context
[Insert contents of .feature-dev/01-requirements.md]
## Architecture
[Insert contents of .feature-dev/02-architecture.md]
## Implementation Summary
### Backend: [Insert contents of .feature-dev/03-backend.md]
### Frontend: [Insert contents of .feature-dev/04-frontend.md]
## Deployment
[Insert contents of .feature-dev/06-deployment.md]
## Instructions
1. Write API documentation for new endpoints (request/response examples)
2. Update or create user-facing documentation if applicable
3. Write a brief architecture decision record (ADR) explaining key design choices
4. Create a handoff summary: what was built, how to test it, known limitations
Write documentation files. Report what was created/modified.
```
Save output to `.feature-dev/07-documentation.md`.
Update `state.json`: set `current_step` to "complete", add step 7 to `completed_steps`.
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Feature development complete: $FEATURE
## Files Created
[List all .feature-dev/ output files]
## Implementation Summary
- Requirements: .feature-dev/01-requirements.md
- Architecture: .feature-dev/02-architecture.md
- Backend: .feature-dev/03-backend.md
- Frontend: .feature-dev/04-frontend.md
- Testing: .feature-dev/05-testing.md
- Deployment: .feature-dev/06-deployment.md
- Documentation: .feature-dev/07-documentation.md
## Next Steps
1. Review all generated code and documentation
2. Run the full test suite to verify everything passes
3. Create a pull request with the implementation
4. Deploy using the runbook in .feature-dev/06-deployment.md
```


@@ -0,0 +1,10 @@
{
"name": "blockchain-web3",
"version": "1.2.1",
"description": "Smart contract development with Solidity, DeFi protocol implementation, NFT platforms, and Web3 application architecture",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}
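A manifest like the one above is easy to sanity-check mechanically; the required field set below is an assumption inferred from these files, not a published plugin schema:

```python
# Required fields are inferred from the manifests in this repo -- an
# assumption, not a published plugin schema.
REQUIRED = {"name", "version", "description", "author", "license"}

def manifest_errors(manifest):
    errors = [f"missing field: {k}" for k in sorted(REQUIRED - manifest.keys())]
    author = manifest.get("author", {})
    if not isinstance(author, dict) or not {"name", "email"} <= author.keys():
        errors.append("author must include name and email")
    return errors

manifest = {
    "name": "blockchain-web3",
    "version": "1.2.1",
    "description": "Smart contract development with Solidity ...",
    "author": {"name": "Seth Hobson", "email": "seth@major7apps.com"},
    "license": "MIT",
}

print(manifest_errors(manifest))  # []
```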


@@ -0,0 +1,10 @@
{
"name": "business-analytics",
"version": "1.2.1",
"description": "Business metrics analysis, KPI tracking, financial reporting, and data-driven decision making",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "c4-architecture",
"version": "1.0.0",
"description": "Comprehensive C4 architecture documentation workflow with bottom-up code analysis, component synthesis, container mapping, and context diagram generation",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "cicd-automation",
"version": "1.2.1",
"description": "CI/CD pipeline configuration, GitHub Actions/GitLab CI workflow setup, and automated deployment pipeline orchestration",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "cloud-infrastructure",
"version": "1.2.2",
"description": "Cloud architecture design for AWS/Azure/GCP, Kubernetes cluster configuration, Terraform infrastructure-as-code, hybrid cloud networking, and multi-cloud cost optimization",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "code-documentation",
"version": "1.2.0",
"description": "Documentation generation, code explanation, and technical writing with automated doc generation and tutorial creation",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "code-refactoring",
"version": "1.2.0",
"description": "Code cleanup, refactoring automation, and technical debt management with context restoration",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "code-review-ai",
"version": "1.2.0",
"description": "AI-powered architectural review and code quality analysis",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
  "name": "codebase-cleanup",
  "version": "1.2.0",
  "description": "Technical debt reduction, dependency updates, and code refactoring automation",
  "author": {
    "name": "Seth Hobson",
    "email": "seth@major7apps.com"
  },
  "license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
  "name": "comprehensive-review",
  "version": "1.3.0",
  "description": "Multi-perspective code analysis covering architecture, security, and best practices",
  "author": {
    "name": "Seth Hobson",
    "email": "seth@major7apps.com"
  },
  "license": "MIT"
}

View File

@@ -1,137 +1,597 @@
---
description: "Orchestrate comprehensive multi-dimensional code review using specialized review agents across architecture, security, performance, testing, and best practices"
argument-hint: "<target path or description> [--security-focus] [--performance-critical] [--strict-mode] [--framework react|spring|django|rails]"
---
# Comprehensive Code Review Orchestrator
## CRITICAL BEHAVIORAL RULES
You MUST follow these rules exactly. Violating any of them is a failure.
1. **Execute phases in order.** Do NOT skip ahead, reorder, or merge phases.
2. **Write output files.** Each phase MUST produce its output file in `.full-review/` before the next phase begins. Read from prior phase files -- do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, missing files, access issues), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan -- execute it.
## Pre-flight Checks
Before starting, perform these checks:
### 1. Check for existing session
Check if `.full-review/state.json` exists:
- If it exists and `status` is `"in_progress"`: Read it, display the current phase, and ask the user:
```
Found an in-progress review session:
Target: [target from state]
Current phase: [phase from state]
1. Resume from where we left off
2. Start fresh (archives existing session)
```
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
### 2. Initialize state
Create `.full-review/` directory and `state.json`:
```json
{
  "target": "$ARGUMENTS",
  "status": "in_progress",
  "flags": {
    "security_focus": false,
    "performance_critical": false,
    "strict_mode": false,
    "framework": null
  },
  "current_step": 1,
  "current_phase": 1,
  "completed_steps": [],
  "files_created": [],
  "started_at": "ISO_TIMESTAMP",
  "last_updated": "ISO_TIMESTAMP"
}
```
Parse `$ARGUMENTS` for `--security-focus`, `--performance-critical`, `--strict-mode`, and `--framework` flags. Update the flags object accordingly.
### 3. Identify review target
Determine what code to review from `$ARGUMENTS`:
- If a file/directory path is given, verify it exists
- If a description is given (e.g., "recent changes", "authentication module"), identify the relevant files
- List the files that will be reviewed and confirm with the user
**Output file:** `.full-review/00-scope.md`
```markdown
# Review Scope
## Target
[Description of what is being reviewed]
## Files
[List of files/directories included in the review]
## Flags
- Security Focus: [yes/no]
- Performance Critical: [yes/no]
- Strict Mode: [yes/no]
- Framework: [name or auto-detected]
## Review Phases
1. Code Quality & Architecture
2. Security & Performance
3. Testing & Documentation
4. Best Practices & Standards
5. Consolidated Report
```
Update `state.json`: add `"00-scope.md"` to `files_created`, add step 0 to `completed_steps`.
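The repeated "Update `state.json`" steps throughout this workflow can be factored into one small helper. A minimal sketch in Python -- the `load_state`/`save_state`/`complete_step` names are illustrative, not part of the command spec:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path(".full-review/state.json")

def load_state():
    """Read the current session state, or None if no session exists."""
    if not STATE_FILE.exists():
        return None
    return json.loads(STATE_FILE.read_text())

def save_state(state):
    """Persist state, stamping last_updated on every write."""
    state["last_updated"] = datetime.now(timezone.utc).isoformat()
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))

def complete_step(state, step, output_file=None):
    """Record a finished step and any output file it produced."""
    state["completed_steps"].append(step)
    if output_file and output_file not in state["files_created"]:
        state["files_created"].append(output_file)
    save_state(state)
```

Writing the state file on every step, rather than at the end, is what makes the resume-from-checkpoint flow in the pre-flight check possible.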
---
## Phase 1: Code Quality & Architecture Review (Steps 1A-1B)
Run both agents in parallel using multiple Task tool calls in a single response.
### Step 1A: Code Quality Analysis
```
Task:
subagent_type: "code-reviewer"
description: "Code quality analysis for $ARGUMENTS"
prompt: |
Perform a comprehensive code quality review.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## Instructions
Analyze the target code for:
1. **Code complexity**: Cyclomatic complexity, cognitive complexity, deeply nested logic
2. **Maintainability**: Naming conventions, function/method length, class cohesion
3. **Code duplication**: Copy-pasted logic, missed abstraction opportunities
4. **Clean Code principles**: SOLID violations, code smells, anti-patterns
5. **Technical debt**: Areas that will become increasingly costly to change
6. **Error handling**: Missing error handling, swallowed exceptions, unclear error messages
For each finding, provide:
- Severity (Critical / High / Medium / Low)
- File and line location
- Description of the issue
- Specific fix recommendation with code example
Write your findings as a structured markdown document.
```
### Step 1B: Architecture & Design Review
```
Task:
subagent_type: "architect-review"
description: "Architecture review for $ARGUMENTS"
prompt: |
Review the architectural design and structural integrity of the target code.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## Instructions
Evaluate the code for:
1. **Component boundaries**: Proper separation of concerns, module cohesion
2. **Dependency management**: Circular dependencies, inappropriate coupling, dependency direction
3. **API design**: Endpoint design, request/response schemas, error contracts, versioning
4. **Data model**: Schema design, relationships, data access patterns
5. **Design patterns**: Appropriate use of patterns, missing abstractions, over-engineering
6. **Architectural consistency**: Does the code follow the project's established patterns?
For each finding, provide:
- Severity (Critical / High / Medium / Low)
- Architectural impact assessment
- Specific improvement recommendation
Write your findings as a structured markdown document.
```
After both complete, consolidate into `.full-review/01-quality-architecture.md`:
```markdown
# Phase 1: Code Quality & Architecture Review
## Code Quality Findings
[Summary from 1A, organized by severity]
## Architecture Findings
[Summary from 1B, organized by severity]
## Critical Issues for Phase 2 Context
[List any findings that should inform security or performance review]
```
Update `state.json`: set `current_step` to 2, `current_phase` to 2, add steps 1A and 1B to `completed_steps`.
---
## Phase 2: Security & Performance Review (Steps 2A-2B)
Read `.full-review/01-quality-architecture.md` for context from Phase 1.
Run both agents in parallel using multiple Task tool calls in a single response.
### Step 2A: Security Vulnerability Assessment
```
Task:
subagent_type: "security-auditor"
description: "Security audit for $ARGUMENTS"
prompt: |
Execute a comprehensive security audit on the target code.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## Phase 1 Context
[Insert contents of .full-review/01-quality-architecture.md -- focus on the "Critical Issues for Phase 2 Context" section]
## Instructions
Analyze for:
1. **OWASP Top 10**: Injection, broken auth, sensitive data exposure, XXE, broken access control, misconfig, XSS, insecure deserialization, vulnerable components, insufficient logging
2. **Input validation**: Missing sanitization, unvalidated redirects, path traversal
3. **Authentication/authorization**: Flawed auth logic, privilege escalation, session management
4. **Cryptographic issues**: Weak algorithms, hardcoded secrets, improper key management
5. **Dependency vulnerabilities**: Known CVEs in dependencies, outdated packages
6. **Configuration security**: Debug mode, verbose errors, permissive CORS, missing security headers
For each finding, provide:
- Severity (Critical / High / Medium / Low) with CVSS score if applicable
- CWE reference where applicable
- File and line location
- Proof of concept or attack scenario
- Specific remediation steps with code example
Write your findings as a structured markdown document.
```
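As an illustration of the injection class the auditor looks for in item 1, compare string-built SQL with a parameterized query. This sketch uses `sqlite3` purely as a stand-in for whatever database layer the target code uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # VULNERABLE: attacker-controlled input is spliced into the SQL text
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # SAFE: the driver binds the value; it cannot change the query shape
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row -- the filter is bypassed
print(find_user_safe(payload))    # returns [] -- payload treated as a literal
```

A finding of this shape would cite CWE-89, the file and line of the f-string query, and the parameterized form as the remediation.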
### Step 2B: Performance & Scalability Analysis
```
Task:
subagent_type: "general-purpose"
description: "Performance analysis for $ARGUMENTS"
prompt: |
You are a performance engineer. Conduct a performance and scalability analysis of the target code.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## Phase 1 Context
[Insert contents of .full-review/01-quality-architecture.md -- focus on the "Critical Issues for Phase 2 Context" section]
## Instructions
Analyze for:
1. **Database performance**: N+1 queries, missing indexes, unoptimized queries, connection pool sizing
2. **Memory management**: Memory leaks, unbounded collections, large object allocation
3. **Caching opportunities**: Missing caching, stale cache risks, cache invalidation issues
4. **I/O bottlenecks**: Synchronous blocking calls, missing pagination, large payloads
5. **Concurrency issues**: Race conditions, deadlocks, thread safety
6. **Frontend performance**: Bundle size, render performance, unnecessary re-renders, missing lazy loading
7. **Scalability concerns**: Horizontal scaling barriers, stateful components, single points of failure
For each finding, provide:
- Severity (Critical / High / Medium / Low)
- Estimated performance impact
- Specific optimization recommendation with code example
Write your findings as a structured markdown document.
```
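The N+1 pattern in item 1 is easiest to see side by side. A sketch with `sqlite3` (the table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'intro'), (2, 1, 'update'), (3, 2, 'notes');
""")

def titles_n_plus_one():
    # N+1: one query for the authors, then one additional query PER author
    result = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE author_id = ? ORDER BY id",
            (author_id,),
        ).fetchall()
        result[name] = [title for (title,) in rows]
    return result

def titles_single_query():
    # Fix: one JOIN fetches everything in a single round trip
    result = {}
    query = """SELECT a.name, p.title FROM authors a
               JOIN posts p ON p.author_id = a.id ORDER BY p.id"""
    for name, title in conn.execute(query):
        result.setdefault(name, []).append(title)
    return result
```

With 2 authors the difference is 3 queries vs 1; with 10,000 it is 10,001 vs 1, which is why the pattern surfaces only under load.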
After both complete, consolidate into `.full-review/02-security-performance.md`:
```markdown
# Phase 2: Security & Performance Review
## Security Findings
[Summary from 2A, organized by severity]
## Performance Findings
[Summary from 2B, organized by severity]
## Critical Issues for Phase 3 Context
[List findings that affect testing or documentation requirements]
```
Update `state.json`: set `current_step` to "checkpoint-1", add steps 2A and 2B to `completed_steps`.
---
## PHASE CHECKPOINT 1 -- User Approval Required
Display a summary of findings from Phase 1 and Phase 2 and ask:
```
Phases 1-2 complete: Code Quality, Architecture, Security, and Performance reviews done.
Summary:
- Code Quality: [X critical, Y high, Z medium findings]
- Architecture: [X critical, Y high, Z medium findings]
- Security: [X critical, Y high, Z medium findings]
- Performance: [X critical, Y high, Z medium findings]
Please review:
- .full-review/01-quality-architecture.md
- .full-review/02-security-performance.md
1. Continue -- proceed to Testing & Documentation review
2. Fix critical issues first -- I'll address findings before continuing
3. Pause -- save progress and stop here
```
If `--strict-mode` flag is set and there are Critical findings, recommend option 2.
Do NOT proceed to Phase 3 until the user approves.
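The strict-mode rule at this checkpoint reduces to a small decision function. A sketch, where the returned numbers correspond to the options in the checkpoint prompt (1 = continue, 2 = fix critical issues first):

```python
def recommended_option(critical_count: int, strict_mode: bool) -> int:
    """Pick the default checkpoint option to recommend.

    The user still makes the final call; this only selects which
    option the orchestrator highlights as recommended.
    """
    if strict_mode and critical_count > 0:
        return 2  # --strict-mode: push critical fixes before Phase 3
    return 1      # otherwise default to continuing the review
```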
---
## Phase 3: Testing & Documentation Review (Steps 3A-3B)
Read `.full-review/01-quality-architecture.md` and `.full-review/02-security-performance.md` for context.
Run both agents in parallel using multiple Task tool calls in a single response.
### Step 3A: Test Coverage & Quality Analysis
```
Task:
subagent_type: "general-purpose"
description: "Test coverage analysis for $ARGUMENTS"
prompt: |
You are a test automation engineer. Evaluate the testing strategy and coverage for the target code.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## Prior Phase Context
[Insert security and performance findings from .full-review/02-security-performance.md that affect testing requirements]
## Instructions
Analyze:
1. **Test coverage**: Which code paths have tests? Which critical paths are untested?
2. **Test quality**: Are tests testing behavior or implementation? Assertion quality?
3. **Test pyramid adherence**: Unit vs integration vs E2E test ratio
4. **Edge cases**: Are boundary conditions, error paths, and concurrent scenarios tested?
5. **Test maintainability**: Test isolation, mock usage, flaky test indicators
6. **Security test gaps**: Are security-critical paths tested? Auth, input validation, etc.
7. **Performance test gaps**: Are performance-critical paths tested? Load testing?
For each finding, provide:
- Severity (Critical / High / Medium / Low)
- What is untested or poorly tested
- Specific test recommendations with example test code
Write your findings as a structured markdown document.
```
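Item 2's distinction between testing behavior and testing implementation is worth a concrete example. A sketch using plain asserts -- the `parse_limit` function is hypothetical, standing in for any boundary-heavy helper in the target code:

```python
def parse_limit(raw, default: int = 50, maximum: int = 100) -> int:
    """Parse a user-supplied page-size parameter, clamping to a safe range."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return default
    return max(1, min(value, maximum))

# Behavioral tests cover boundaries and error paths, not internals:
assert parse_limit("25") == 25      # normal case
assert parse_limit("0") == 1        # lower boundary clamps
assert parse_limit("500") == 100    # upper boundary clamps
assert parse_limit("abc") == 50     # garbage falls back to default
assert parse_limit(None) == 50      # missing value falls back
```

Tests like these survive a rewrite of `parse_limit`'s internals; tests that assert on intermediate variables or call counts usually do not.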
### Step 3B: Documentation & API Review
```
Task:
subagent_type: "general-purpose"
description: "Documentation review for $ARGUMENTS"
prompt: |
You are a technical documentation architect. Review documentation completeness and accuracy.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## Prior Phase Context
[Insert key findings from .full-review/01-quality-architecture.md and .full-review/02-security-performance.md]
## Instructions
Evaluate:
1. **Inline documentation**: Are complex algorithms and business logic explained?
2. **API documentation**: Are endpoints documented with examples? Request/response schemas?
3. **Architecture documentation**: ADRs, system diagrams, component documentation
4. **README completeness**: Setup instructions, development workflow, deployment guide
5. **Accuracy**: Does documentation match the actual implementation?
6. **Changelog/migration guides**: Are breaking changes documented?
For each finding, provide:
- Severity (Critical / High / Medium / Low)
- What is missing or inaccurate
- Specific documentation recommendation
Write your findings as a structured markdown document.
```
After both complete, consolidate into `.full-review/03-testing-documentation.md`:
```markdown
# Phase 3: Testing & Documentation Review
## Test Coverage Findings
[Summary from 3A, organized by severity]
## Documentation Findings
[Summary from 3B, organized by severity]
```
Update `state.json`: set `current_step` to 4, `current_phase` to 4, add steps 3A and 3B to `completed_steps`.
---
## Phase 4: Best Practices & Standards (Steps 4A-4B)
Read all previous `.full-review/*.md` files for full context.
Run both agents in parallel using multiple Task tool calls in a single response.
### Step 4A: Framework & Language Best Practices
```
Task:
subagent_type: "general-purpose"
description: "Framework best practices review for $ARGUMENTS"
prompt: |
You are an expert in modern framework and language best practices. Verify adherence to current standards.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## All Prior Findings
[Insert a concise summary of critical/high findings from all prior phases]
## Instructions
Check for:
1. **Language idioms**: Is the code idiomatic for its language? Modern syntax and features?
2. **Framework patterns**: Does it follow the framework's recommended patterns? (e.g., React hooks, Django views, Spring beans)
3. **Deprecated APIs**: Are any deprecated functions/libraries/patterns used?
4. **Modernization opportunities**: Where could modern language/framework features simplify code?
5. **Package management**: Are dependencies up-to-date? Unnecessary dependencies?
6. **Build configuration**: Is the build optimized? Development vs production settings?
For each finding, provide:
- Severity (Critical / High / Medium / Low)
- Current pattern vs recommended pattern
- Migration/fix recommendation with code example
Write your findings as a structured markdown document.
```
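Item 4's modernization check often looks like this in Python: manual `os.path` string handling replaced by `pathlib`. Both functions below behave identically; the sketch just shows the before/after shape a finding would report:

```python
import os
import os.path
from pathlib import Path

# Legacy pattern: string-based path manipulation
def report_path_legacy(base: str, name: str) -> str:
    path = os.path.join(base, "reports", name + ".md")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path

# Modern equivalent: pathlib expresses the same intent more directly
def report_path_modern(base: str, name: str) -> str:
    path = Path(base) / "reports" / f"{name}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    return str(path)
```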
### Step 4B: CI/CD & DevOps Practices Review
```
Task:
subagent_type: "general-purpose"
description: "CI/CD and DevOps practices review for $ARGUMENTS"
prompt: |
You are a DevOps engineer. Review CI/CD pipeline and operational practices.
## Review Scope
[Insert contents of .full-review/00-scope.md]
## Critical Issues from Prior Phases
[Insert critical/high findings from all prior phases that impact deployment or operations]
## Instructions
Evaluate:
1. **CI/CD pipeline**: Build automation, test gates, deployment stages, security scanning
2. **Deployment strategy**: Blue-green, canary, rollback capabilities
3. **Infrastructure as Code**: Are infrastructure configs version-controlled and reviewed?
4. **Monitoring & observability**: Logging, metrics, alerting, dashboards
5. **Incident response**: Runbooks, on-call procedures, rollback plans
6. **Environment management**: Config separation, secret management, parity between environments
For each finding, provide:
- Severity (Critical / High / Medium / Low)
- Operational risk assessment
- Specific improvement recommendation
Write your findings as a structured markdown document.
```
After both complete, consolidate into `.full-review/04-best-practices.md`:
```markdown
# Phase 4: Best Practices & Standards
## Framework & Language Findings
[Summary from 4A, organized by severity]
## CI/CD & DevOps Findings
[Summary from 4B, organized by severity]
```
Update `state.json`: set `current_step` to 5, `current_phase` to 5, add steps 4A and 4B to `completed_steps`.
---
## Phase 5: Consolidated Report (Step 5)
Read all `.full-review/*.md` files. Generate the final consolidated report.
**Output file:** `.full-review/05-final-report.md`
```markdown
# Comprehensive Code Review Report
## Review Target
[From 00-scope.md]
## Executive Summary
[2-3 sentence overview of overall code health and key concerns]
## Findings by Priority
### Critical Issues (P0 -- Must Fix Immediately)
[All Critical findings from all phases, with source phase reference]
- Security vulnerabilities with CVSS > 7.0
- Data loss or corruption risks
- Authentication/authorization bypasses
- Production stability threats
- Compliance violations (GDPR, PCI DSS, SOC2)
### High Priority (P1 -- Fix Before Next Release)
[All High findings from all phases]
- Performance bottlenecks impacting user experience
- Missing critical test coverage
- Architectural anti-patterns causing technical debt
- Outdated dependencies with known vulnerabilities
- Code quality issues affecting maintainability
### Medium Priority (P2 -- Plan for Next Sprint)
[All Medium findings from all phases]
- Non-critical performance optimizations
- Documentation gaps
- Code refactoring opportunities
- Test quality improvements
- DevOps automation enhancements
### Low Priority (P3 -- Track in Backlog)
[All Low findings from all phases]
- Style guide violations
- Minor code smell issues
- Nice-to-have documentation updates
- Cosmetic improvements
## Findings by Category
- **Code Quality**: [count] findings ([breakdown by severity])
- **Architecture**: [count] findings ([breakdown by severity])
- **Security**: [count] findings ([breakdown by severity])
- **Performance**: [count] findings ([breakdown by severity])
- **Testing**: [count] findings ([breakdown by severity])
- **Documentation**: [count] findings ([breakdown by severity])
- **Best Practices**: [count] findings ([breakdown by severity])
- **CI/CD & DevOps**: [count] findings ([breakdown by severity])
## Recommended Action Plan
1. [Ordered list of recommended actions, starting with critical/high items]
2. [Group related fixes where possible]
3. [Estimate relative effort: small/medium/large]
## Review Metadata
- Review date: [timestamp]
- Phases completed: [list]
- Flags applied: [list active flags]
```
Update `state.json`: set `status` to `"complete"`, `last_updated` to current timestamp.
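The per-category counts in the final report can be produced mechanically from the phase files. A sketch that tallies severity labels, assuming each finding line carries a marker like `Severity: High` -- the exact marker format is this command's own convention, not fixed:

```python
import re
from collections import Counter
from pathlib import Path

SEVERITIES = ("Critical", "High", "Medium", "Low")

def tally_findings(review_dir=".full-review"):
    """Count severity labels across the phase output files 01-04."""
    counts = Counter()
    pattern = re.compile(r"Severity:\s*(Critical|High|Medium|Low)")
    for md_file in sorted(Path(review_dir).glob("0[1-4]-*.md")):
        for match in pattern.finditer(md_file.read_text()):
            counts[match.group(1)] += 1
    return {sev: counts[sev] for sev in SEVERITIES}
```

Counting from the files rather than from memory keeps the report consistent with rule 2 (read from prior phase files, not the context window).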
---
## Completion
Present the final summary:
```
Comprehensive code review complete for: $ARGUMENTS
## Review Output Files
- Scope: .full-review/00-scope.md
- Quality & Architecture: .full-review/01-quality-architecture.md
- Security & Performance: .full-review/02-security-performance.md
- Testing & Documentation: .full-review/03-testing-documentation.md
- Best Practices: .full-review/04-best-practices.md
- Final Report: .full-review/05-final-report.md
## Summary
- Total findings: [count]
- Critical: [X] | High: [Y] | Medium: [Z] | Low: [W]
## Next Steps
1. Review the full report at .full-review/05-final-report.md
2. Address Critical (P0) issues immediately
3. Plan High (P1) fixes for current sprint
4. Add Medium (P2) and Low (P3) items to backlog
```

View File

@@ -0,0 +1,10 @@
{
  "name": "content-marketing",
  "version": "1.2.0",
  "description": "Content marketing strategy, web research, and information synthesis for marketing operations",
  "author": {
    "name": "Seth Hobson",
    "email": "seth@major7apps.com"
  },
  "license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
  "name": "context-management",
  "version": "1.2.0",
  "description": "Context persistence, restoration, and long-running conversation management",
  "author": {
    "name": "Seth Hobson",
    "email": "seth@major7apps.com"
  },
  "license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
  "name": "customer-sales-automation",
  "version": "1.2.0",
  "description": "Customer support workflow automation, sales pipeline management, email campaigns, and CRM integration",
  "author": {
    "name": "Seth Hobson",
    "email": "seth@major7apps.com"
  },
  "license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
  "name": "data-engineering",
  "version": "1.3.0",
  "description": "ETL pipeline construction, data warehouse design, batch processing workflows, and data-driven feature development",
  "author": {
    "name": "Seth Hobson",
    "email": "seth@major7apps.com"
  },
  "license": "MIT"
}

View File

@@ -1,176 +1,784 @@
---
description: "Build features guided by data insights, A/B testing, and continuous measurement"
argument-hint: "<feature description> [--experiment-type ab|multivariate|bandit] [--confidence 0.90|0.95|0.99]"
---
# Data-Driven Feature Development Orchestrator
## CRITICAL BEHAVIORAL RULES
You MUST follow these rules exactly. Violating any of them is a failure.
1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.data-driven-feature/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.
## Pre-flight Checks
Before starting, perform these checks:
### 1. Check for existing session
Check if `.data-driven-feature/state.json` exists:
- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:
```
Found an in-progress data-driven feature session:
Feature: [name from state]
Current step: [step from state]
1. Resume from where we left off
2. Start fresh (archives existing session)
```
- Use Task tool with subagent_type="data-engineering::backend-architect"
- Context: Business requirements and experiment design
- Prompt: "Design feature architecture for: $ARGUMENTS with A/B testing capability. Include feature flag integration (LaunchDarkly, Split.io, or Optimizely), gradual rollout strategy, circuit breakers for safety, and clean separation between control and treatment logic. Ensure architecture supports real-time configuration updates."
- Output: Architecture diagrams, feature flag schema, rollout strategy
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
### 5. Analytics Instrumentation Design
### 2. Initialize state
- Use Task tool with subagent_type="data-engineering::data-engineer"
- Context: Feature architecture and success metrics
- Prompt: "Design comprehensive analytics instrumentation for: $ARGUMENTS. Define event schemas for user interactions, specify properties for segmentation and analysis, design funnel tracking and conversion events, plan cohort analysis capabilities. Implement using modern SDKs (Segment, Amplitude, Mixpanel) with proper event taxonomy."
- Output: Event tracking plan, analytics schema, instrumentation guide
Create `.data-driven-feature/` directory and `state.json`:
### 6. Data Pipeline Architecture
- Use Task tool with subagent_type="data-engineering::data-engineer"
- Context: Analytics requirements and existing data infrastructure
- Prompt: "Design data pipelines for feature: $ARGUMENTS. Include real-time streaming for live metrics (Kafka, Kinesis), batch processing for detailed analysis, data warehouse integration (Snowflake, BigQuery), and feature store for ML if applicable. Ensure proper data governance and GDPR compliance."
- Output: Pipeline architecture, ETL/ELT specifications, data flow diagrams
## Phase 3: Implementation with Instrumentation
### 7. Backend Implementation
- Use Task tool with subagent_type="backend-development::backend-architect"
- Context: Architecture design and feature requirements
- Prompt: "Implement backend for feature: $ARGUMENTS with full instrumentation. Include feature flag checks at decision points, comprehensive event tracking for all user actions, performance metrics collection, error tracking and monitoring. Implement proper logging for experiment analysis."
- Output: Backend code with analytics, feature flag integration, monitoring setup
### 8. Frontend Implementation
- Use Task tool with subagent_type="frontend-mobile-development::frontend-developer"
- Context: Backend APIs and analytics requirements
- Prompt: "Build frontend for feature: $ARGUMENTS with analytics tracking. Implement event tracking for all user interactions, session recording integration if applicable, performance metrics (Core Web Vitals), and proper error boundaries. Ensure consistent experience between control and treatment groups."
- Output: Frontend code with analytics, A/B test variants, performance monitoring
### 9. ML Model Integration (if applicable)
- Use Task tool with subagent_type="machine-learning-ops::ml-engineer"
- Context: Feature requirements and data pipelines
- Prompt: "Integrate ML models for feature: $ARGUMENTS if needed. Implement online inference with low latency, A/B testing between model versions, model performance tracking, and automatic fallback mechanisms. Set up model monitoring for drift detection."
- Output: ML pipeline, model serving infrastructure, monitoring setup
## Phase 4: Pre-Launch Validation
### 10. Analytics Validation
- Use Task tool with subagent_type="data-engineering::data-engineer"
- Context: Implemented tracking and event schemas
- Prompt: "Validate analytics implementation for: $ARGUMENTS. Test all event tracking in staging, verify data quality and completeness, validate funnel definitions, ensure proper user identification and session tracking. Run end-to-end tests for data pipeline."
- Output: Validation report, data quality metrics, tracking coverage analysis
### 11. Experiment Setup
- Use Task tool with subagent_type="cloud-infrastructure::deployment-engineer"
- Context: Feature flags and experiment design
- Prompt: "Configure experiment infrastructure for: $ARGUMENTS. Set up feature flags with proper targeting rules, configure traffic allocation (start with 5-10%), implement kill switches, set up monitoring alerts for key metrics. Test randomization and assignment logic."
- Output: Experiment configuration, monitoring dashboards, rollout plan
## Phase 5: Launch and Experimentation
### 12. Gradual Rollout
- Use Task tool with subagent_type="cloud-infrastructure::deployment-engineer"
- Context: Experiment configuration and monitoring setup
- Prompt: "Execute gradual rollout for feature: $ARGUMENTS. Start with internal dogfooding, then beta users (1-5%), gradually increase to target traffic. Monitor error rates, performance metrics, and early indicators. Implement automated rollback on anomalies."
- Output: Rollout execution, monitoring alerts, health metrics
### 13. Real-time Monitoring
- Use Task tool with subagent_type="observability-monitoring::observability-engineer"
- Context: Deployed feature and success metrics
- Prompt: "Set up comprehensive monitoring for: $ARGUMENTS. Create real-time dashboards for experiment metrics, configure alerts for statistical significance, monitor guardrail metrics for negative impacts, track system performance and error rates. Use tools like Datadog, New Relic, or custom dashboards."
- Output: Monitoring dashboards, alert configurations, SLO definitions
## Phase 6: Analysis and Decision Making
### 14. Statistical Analysis
- Use Task tool with subagent_type="machine-learning-ops::data-scientist"
- Context: Experiment data and original hypotheses
- Prompt: "Analyze A/B test results for: $ARGUMENTS. Calculate statistical significance with confidence intervals, check for segment-level effects, analyze secondary metrics impact, investigate any unexpected patterns. Use both frequentist and Bayesian approaches. Account for multiple testing if applicable."
- Output: Statistical analysis report, significance tests, segment analysis
### 15. Business Impact Assessment
- Use Task tool with subagent_type="business-analytics::business-analyst"
- Context: Statistical analysis and business metrics
- Prompt: "Assess business impact of feature: $ARGUMENTS. Calculate actual vs expected ROI, analyze impact on key business metrics, evaluate cost-benefit including operational overhead, project long-term value. Make recommendation on full rollout, iteration, or rollback."
- Output: Business impact report, ROI analysis, recommendation document
### 16. Post-Launch Optimization
- Use Task tool with subagent_type="machine-learning-ops::data-scientist"
- Context: Launch results and user feedback
- Prompt: "Identify optimization opportunities for: $ARGUMENTS based on data. Analyze user behavior patterns in treatment group, identify friction points in user journey, suggest improvements based on data, plan follow-up experiments. Use cohort analysis for long-term impact."
- Output: Optimization recommendations, follow-up experiment plans
## Configuration Options
```yaml
experiment_config:
min_sample_size: 10000
confidence_level: 0.95
runtime_days: 14
traffic_allocation: "gradual" # gradual, fixed, or adaptive
analytics_platforms:
- amplitude
- segment
- mixpanel
feature_flags:
provider: "launchdarkly" # launchdarkly, split, optimizely, unleash
statistical_methods:
- frequentist
- bayesian
monitoring:
- real_time_metrics: true
- anomaly_detection: true
- automatic_rollback: true
```json
{
"feature": "$ARGUMENTS",
"status": "in_progress",
"experiment_type": "ab",
"confidence_level": 0.95,
"current_step": 1,
"current_phase": 1,
"completed_steps": [],
"files_created": [],
"started_at": "ISO_TIMESTAMP",
"last_updated": "ISO_TIMESTAMP"
}
```
## Success Criteria
Parse `$ARGUMENTS` for `--experiment-type` and `--confidence` flags. Use defaults if not specified.
- **Data Coverage**: 100% of user interactions tracked with proper event schema
- **Experiment Validity**: Proper randomization, sufficient statistical power, no sample ratio mismatch
- **Statistical Rigor**: Clear significance testing, proper confidence intervals, multiple testing corrections
- **Business Impact**: Measurable improvement in target metrics without degrading guardrail metrics
- **Technical Performance**: No degradation in p95 latency, error rates below 0.1%
- **Decision Speed**: Clear go/no-go decision within planned experiment runtime
- **Learning Outcomes**: Documented insights for future feature development
### 3. Parse feature description
## Coordination Notes
Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below.
- Data scientists and business analysts collaborate on hypothesis formation
- Engineers implement with analytics as first-class requirement, not afterthought
- Feature flags enable safe experimentation without full deployments
- Real-time monitoring allows for quick iteration and rollback if needed
- Statistical rigor balanced with business practicality and speed to market
- Continuous learning loop feeds back into next feature development cycle
---
Feature to develop with data-driven approach: $ARGUMENTS
## Phase 1: Data Analysis & Hypothesis (Steps 1-3) — Interactive
### Step 1: Exploratory Data Analysis
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Perform exploratory data analysis for $FEATURE"
prompt: |
You are a data scientist specializing in product analytics. Perform exploratory data analysis for feature: $FEATURE.
## Instructions
1. Analyze existing user behavior data, identify patterns and opportunities
2. Segment users by behavior and engagement patterns
3. Calculate baseline metrics for key indicators
4. Use modern analytics tools (Amplitude, Mixpanel, Segment) to understand current user journeys, conversion funnels, and engagement patterns
5. Identify data quality issues or gaps that need addressing
Provide an EDA report with user segments, behavioral patterns, and baseline metrics.
```
Save the agent's output to `.data-driven-feature/01-eda-report.md`.
Update `state.json`: set `current_step` to 2, add `"01-eda-report.md"` to `files_created`, add step 1 to `completed_steps`.
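The `state.json` bookkeeping repeated after every step can be captured in a pair of helpers; a minimal sketch, where `init_state` and `update_state` are illustrative names rather than part of the plugin:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path(".data-driven-feature/state.json")

def init_state(feature):
    """Create the session directory and a fresh state file."""
    STATE_FILE.parent.mkdir(exist_ok=True)
    now = datetime.now(timezone.utc).isoformat()
    state = {
        "feature": feature,
        "status": "in_progress",
        "current_step": 1,
        "completed_steps": [],
        "files_created": [],
        "started_at": now,
        "last_updated": now,
    }
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return state

def update_state(current_step, completed_step=None, new_file=None):
    """Advance the workflow state after a step finishes."""
    state = json.loads(STATE_FILE.read_text())
    state["current_step"] = current_step
    if completed_step is not None and completed_step not in state["completed_steps"]:
        state["completed_steps"].append(completed_step)
    if new_file is not None and new_file not in state["files_created"]:
        state["files_created"].append(new_file)
    state["last_updated"] = datetime.now(timezone.utc).isoformat()
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return state

init_state("checkout-redesign")  # feature name is a placeholder
state = update_state(2, completed_step=1, new_file="01-eda-report.md")
```

Reading the file back before each mutation keeps the state durable across context resets, which is what rule 2 above requires.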
### Step 2: Business Hypothesis Development
Read `.data-driven-feature/01-eda-report.md` to load EDA context.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Formulate business hypotheses for $FEATURE"
prompt: |
You are a business analyst specializing in data-driven product development. Formulate business hypotheses for feature: $FEATURE based on the data analysis below.
## EDA Findings
[Insert full contents of .data-driven-feature/01-eda-report.md]
## Instructions
1. Define clear success metrics and expected impact on key business KPIs
2. Identify target user segments and minimum detectable effects
3. Create measurable hypotheses using ICE or RICE prioritization frameworks
4. Calculate expected ROI and business value
Provide a hypothesis document with success metrics definition and expected ROI calculations.
```
Save the agent's output to `.data-driven-feature/02-hypotheses.md`.
Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`.
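The RICE prioritization the agent is asked to apply reduces to one formula: (reach × impact × confidence) / effort. A minimal sketch, with hypothesis names and scores that are purely illustrative:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (reach * impact * confidence) / effort.

    reach: users affected per period; impact: 0.25-3 scale;
    confidence: 0.0-1.0; effort: person-months.
    """
    return (reach * impact * confidence) / effort

# Hypothetical candidate hypotheses for a checkout feature
hypotheses = {
    "one-click reorder": rice_score(12000, 2.0, 0.8, 3),
    "saved payment methods": rice_score(8000, 1.0, 0.5, 2),
}
ranked = sorted(hypotheses, key=hypotheses.get, reverse=True)
```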
### Step 3: Statistical Experiment Design
Read `.data-driven-feature/02-hypotheses.md` to load hypothesis context.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Design statistical experiment for $FEATURE"
prompt: |
You are a data scientist specializing in experimentation and statistical analysis. Design the statistical experiment for feature: $FEATURE.
## Business Hypotheses
[Insert full contents of .data-driven-feature/02-hypotheses.md]
## Experiment Type: [from state.json]
## Confidence Level: [from state.json]
## Instructions
1. Calculate required sample size for statistical power
2. Define control and treatment groups with randomization strategy
3. Plan for multiple testing corrections if needed
4. Consider Bayesian A/B testing approaches for faster decision making
5. Design for both primary and guardrail metrics
6. Specify experiment runtime and stopping rules
Provide an experiment design document with power analysis and statistical test plan.
```
Save the agent's output to `.data-driven-feature/03-experiment-design.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`.
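The power analysis requested above can be sanity-checked with the standard two-proportion sample-size formula; a stdlib-only sketch, where the 5% baseline and 1-percentage-point minimum detectable effect are placeholder inputs:

```python
from statistics import NormalDist

def sample_size_per_arm(p_baseline, mde_abs, alpha=0.05, power=0.8):
    """Required users per arm for a two-sided two-proportion z-test."""
    p1, p2 = p_baseline, p_baseline + mde_abs
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)            # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p1 - p2) ** 2) + 1

# 5% baseline conversion, detect an absolute lift of +1pp
n = sample_size_per_arm(0.05, 0.01)
```

Note how quickly the requirement grows as the effect shrinks: halving the detectable effect roughly quadruples the sample size, which is why the minimum detectable effect from Step 2 matters so much.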
---
## PHASE CHECKPOINT 1 — User Approval Required
You MUST stop here and present the analysis and experiment design for review.
Display a summary of the hypotheses from `.data-driven-feature/02-hypotheses.md` and experiment design from `.data-driven-feature/03-experiment-design.md` (key metrics, target segments, sample size, experiment type) and ask:
```
Data analysis and experiment design complete. Please review:
- .data-driven-feature/01-eda-report.md
- .data-driven-feature/02-hypotheses.md
- .data-driven-feature/03-experiment-design.md
1. Approve — proceed to architecture and implementation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop.
---
## Phase 2: Architecture & Instrumentation (Steps 4-6)
### Step 4: Feature Architecture Planning
Read `.data-driven-feature/02-hypotheses.md` and `.data-driven-feature/03-experiment-design.md`.
Use the Task tool:
```
Task:
subagent_type: "backend-architect"
description: "Design feature architecture for $FEATURE with A/B testing capability"
prompt: |
Design the feature architecture for: $FEATURE with A/B testing capability.
## Business Hypotheses
[Insert contents of .data-driven-feature/02-hypotheses.md]
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Instructions
1. Include feature flag integration (LaunchDarkly, Split.io, or Optimizely)
2. Design gradual rollout strategy with circuit breakers for safety
3. Ensure clean separation between control and treatment logic
4. Support real-time configuration updates
5. Design for proper data collection at each decision point
Provide architecture diagrams, feature flag schema, and rollout strategy.
```
Save the agent's output to `.data-driven-feature/04-architecture.md`.
Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`.
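The "clean separation between control and treatment logic" plus a kill switch can be sketched as follows; `FlagClient` stands in for whichever SDK is chosen (LaunchDarkly, Split.io, and Optimizely all expose an equivalent variant lookup), and the flag and function names are hypothetical:

```python
class FlagClient:
    """Toy stand-in for a feature-flag SDK client."""
    def __init__(self, assignments, kill_switch=False):
        self.assignments = assignments      # user_id -> variant
        self.kill_switch = kill_switch

    def variant(self, flag_key, user_id, default="control"):
        if self.kill_switch:                # circuit breaker: always safe default
            return default
        return self.assignments.get(user_id, default)

def render_checkout(flags, user_id):
    # One flag check at the decision point; the control path is untouched
    if flags.variant("new-checkout", user_id) == "treatment":
        return "one_page_checkout"
    return "legacy_checkout"

treated = render_checkout(FlagClient({"u42": "treatment"}), "u42")
killed = render_checkout(FlagClient({"u42": "treatment"}, kill_switch=True), "u42")
```

Keeping the flag check at a single decision point is what makes the later rollback and kill-switch steps safe: flipping one flag restores the control path everywhere.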
### Step 5: Analytics Instrumentation Design
Read `.data-driven-feature/04-architecture.md`.
Use the Task tool:
```
Task:
subagent_type: "data-engineer"
description: "Design analytics instrumentation for $FEATURE"
prompt: |
Design comprehensive analytics instrumentation for: $FEATURE.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Instructions
1. Define event schemas for user interactions with proper taxonomy
2. Specify properties for segmentation and analysis
3. Design funnel tracking and conversion events
4. Plan cohort analysis capabilities
5. Implement using modern SDKs (Segment, Amplitude, Mixpanel) with proper event taxonomy
Provide an event tracking plan, analytics schema, and instrumentation guide.
```
Save the agent's output to `.data-driven-feature/05-analytics-design.md`.
Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`.
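An event taxonomy entry from the tracking plan can be enforced at build time; a minimal sketch, where the `checkout_started` event and its property set follow a common object_action naming convention and are not tied to any one SDK:

```python
# Hypothetical tracking-plan entry: event name -> required properties
TRACKING_PLAN = {
    "checkout_started": {"user_id", "variant", "cart_value_cents", "item_count"},
}

def build_event(name, **props):
    """Reject events that are missing properties the plan requires."""
    required = TRACKING_PLAN[name]
    missing = required - props.keys()
    if missing:
        raise ValueError(f"{name} missing properties: {sorted(missing)}")
    return {"event": name, "properties": props}

event = build_event("checkout_started", user_id="u42", variant="treatment",
                    cart_value_cents=4999, item_count=3)
```

Carrying `variant` on every event is what lets the Step 14 analysis slice all metrics by experiment arm without a separate join.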
### Step 6: Data Pipeline Architecture
Read `.data-driven-feature/05-analytics-design.md`.
Use the Task tool:
```
Task:
subagent_type: "data-engineer"
description: "Design data pipelines for $FEATURE"
prompt: |
Design data pipelines for feature: $FEATURE.
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Instructions
1. Include real-time streaming for live metrics (Kafka, Kinesis)
2. Design batch processing for detailed analysis
3. Plan data warehouse integration (Snowflake, BigQuery)
4. Include feature store for ML if applicable
5. Ensure proper data governance and GDPR compliance
6. Define data retention and archival policies
Provide pipeline architecture, ETL/ELT specifications, and data flow diagrams.
```
Save the agent's output to `.data-driven-feature/06-data-pipelines.md`.
Update `state.json`: set `current_step` to "checkpoint-2", add step 6 to `completed_steps`.
---
## PHASE CHECKPOINT 2 — User Approval Required
Display a summary of the architecture, analytics design, and data pipelines and ask:
```
Architecture and instrumentation design complete. Please review:
- .data-driven-feature/04-architecture.md
- .data-driven-feature/05-analytics-design.md
- .data-driven-feature/06-data-pipelines.md
1. Approve — proceed to implementation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves.
---
## Phase 3: Implementation (Steps 7-9)
### Step 7: Backend Implementation
Read `.data-driven-feature/04-architecture.md` and `.data-driven-feature/05-analytics-design.md`.
Use the Task tool:
```
Task:
subagent_type: "backend-architect"
description: "Implement backend for $FEATURE with full instrumentation"
prompt: |
Implement the backend for feature: $FEATURE with full instrumentation.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Instructions
1. Include feature flag checks at decision points
2. Implement comprehensive event tracking for all user actions
3. Add performance metrics collection
4. Implement error tracking and monitoring
5. Add proper logging for experiment analysis
6. Follow the project's existing code patterns and conventions
Write all code files. Report what files were created/modified.
```
Save a summary to `.data-driven-feature/07-backend.md`.
Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`.
### Step 8: Frontend Implementation
Read `.data-driven-feature/04-architecture.md`, `.data-driven-feature/05-analytics-design.md`, and `.data-driven-feature/07-backend.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Implement frontend for $FEATURE with analytics tracking"
prompt: |
You are a frontend developer. Build the frontend for feature: $FEATURE with analytics tracking.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Backend Implementation
[Insert contents of .data-driven-feature/07-backend.md]
## Instructions
1. Implement event tracking for all user interactions
2. Build A/B test variants with proper variant assignment
3. Add session recording integration if applicable
4. Track performance metrics (Core Web Vitals)
5. Add proper error boundaries
6. Ensure consistent experience between control and treatment groups
7. Follow the project's existing frontend patterns and conventions
Write all code files. Report what files were created/modified.
```
Save a summary to `.data-driven-feature/08-frontend.md`.
**Note:** If the feature has no frontend component (pure backend/API/pipeline), skip this step — write a brief note in `08-frontend.md` explaining why it was skipped, and continue.
Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`.
### Step 9: ML Model Integration (if applicable)
Read `.data-driven-feature/04-architecture.md` and `.data-driven-feature/06-data-pipelines.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Integrate ML models for $FEATURE"
prompt: |
You are an ML engineer. Integrate ML models for feature: $FEATURE if needed.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Data Pipelines
[Insert contents of .data-driven-feature/06-data-pipelines.md]
## Instructions
1. Implement online inference with low latency
2. Set up A/B testing between model versions
3. Add model performance tracking and drift detection
4. Implement automatic fallback mechanisms
5. Set up model monitoring dashboards
If no ML component is needed for this feature, explain why and skip.
Write all code files. Report what files were created/modified.
```
Save a summary to `.data-driven-feature/09-ml-integration.md`.
Update `state.json`: set `current_step` to "checkpoint-3", add step 9 to `completed_steps`.
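The automatic fallback mechanism asked for above amounts to wrapping inference so any serving failure degrades to a non-ML default; a sketch, where `predict_with_fallback`, the model class, and the `popular_items` default are illustrative:

```python
def predict_with_fallback(model, features, fallback="popular_items"):
    """Call the model; on any serving failure return the non-ML default."""
    try:
        return model.predict(features), "model"
    except Exception:
        # Count these in monitoring: a rising fallback rate is a drift/outage signal
        return fallback, "fallback"

class BrokenModel:
    """Simulates a serving outage for demonstration."""
    def predict(self, features):
        raise RuntimeError("serving timeout")

result, source = predict_with_fallback(BrokenModel(), {"user_id": "u42"})
```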
---
## PHASE CHECKPOINT 3 — User Approval Required
Display a summary of the implementation and ask:
```
Implementation complete. Please review:
- .data-driven-feature/07-backend.md
- .data-driven-feature/08-frontend.md
- .data-driven-feature/09-ml-integration.md
1. Approve — proceed to validation and launch
2. Request changes — tell me what to fix
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 4 until the user approves.
---
## Phase 4: Validation & Launch (Steps 10-13)
### Step 10: Analytics Validation
Read `.data-driven-feature/05-analytics-design.md`, `.data-driven-feature/07-backend.md`, and `.data-driven-feature/08-frontend.md`.
Use the Task tool:
```
Task:
subagent_type: "data-engineer"
description: "Validate analytics implementation for $FEATURE"
prompt: |
Validate the analytics implementation for: $FEATURE.
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Backend Implementation
[Insert contents of .data-driven-feature/07-backend.md]
## Frontend Implementation
[Insert contents of .data-driven-feature/08-frontend.md]
## Instructions
1. Test all event tracking in staging environment
2. Verify data quality and completeness
3. Validate funnel definitions and conversion tracking
4. Ensure proper user identification and session tracking
5. Run end-to-end tests for data pipeline
6. Check for tracking gaps or inconsistencies
Provide a validation report with data quality metrics and tracking coverage analysis.
```
Save the agent's output to `.data-driven-feature/10-analytics-validation.md`.
Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`.
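The tracking-coverage part of this validation is a set comparison between the plan and what actually fired in staging; a minimal sketch with illustrative event names:

```python
# Events the tracking plan expects vs. events captured in a staging session
planned = {"checkout_started", "payment_submitted", "checkout_completed"}
captured = [
    {"event": "checkout_started"},
    {"event": "checkout_started"},
    {"event": "payment_submitted"},
]

seen = {e["event"] for e in captured}
missing = sorted(planned - seen)          # planned events that never fired
coverage = len(seen & planned) / len(planned)
```

A gap like the missing `checkout_completed` above would silently break the conversion funnel, which is why this check belongs before launch rather than after.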
### Step 11: Experiment Setup & Deployment
Read `.data-driven-feature/03-experiment-design.md` and `.data-driven-feature/04-architecture.md`.
Launch two agents in parallel using multiple Task tool calls in a single response:
**11a. Experiment Infrastructure:**
```
Task:
subagent_type: "general-purpose"
description: "Configure experiment infrastructure for $FEATURE"
prompt: |
You are a deployment engineer specializing in experimentation platforms. Configure experiment infrastructure for: $FEATURE.
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Instructions
1. Set up feature flags with proper targeting rules
2. Configure traffic allocation (start with 5-10%)
3. Implement kill switches for safety
4. Set up monitoring alerts for key metrics
5. Test randomization and assignment logic
6. Create rollback procedures
Provide experiment configuration, monitoring dashboards, and rollout plan.
```
**11b. Monitoring Setup:**
```
Task:
subagent_type: "general-purpose"
description: "Set up monitoring for $FEATURE experiment"
prompt: |
You are an observability engineer. Set up comprehensive monitoring for: $FEATURE.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Instructions
1. Create real-time dashboards for experiment metrics
2. Configure alerts for statistical significance milestones
3. Monitor guardrail metrics for negative impacts
4. Track system performance and error rates
5. Define SLOs for the experiment period
6. Use tools like Datadog, New Relic, or custom dashboards
Provide monitoring dashboard configs, alert definitions, and SLO specifications.
```
After both complete, consolidate results into `.data-driven-feature/11-experiment-setup.md`:
```markdown
# Experiment Setup: $FEATURE
## Experiment Infrastructure
[Summary from 11a — feature flags, traffic allocation, rollback plan]
## Monitoring Configuration
[Summary from 11b — dashboards, alerts, SLOs]
```
Update `state.json`: set `current_step` to 12, add step 11 to `completed_steps`.
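The randomization and assignment logic tested in 11a is typically deterministic hash bucketing, so a user always lands in the same variant without any stored lookup; a stdlib-only sketch, with an illustrative experiment name:

```python
import hashlib

def assign(user_id, experiment, treatment_pct=50):
    """Deterministic bucketing: hash user+experiment into 0-99."""
    key = f"{experiment}:{user_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return "treatment" if bucket < treatment_pct else "control"

# Simulate 10,000 assignments to eyeball the split (sample ratio check)
counts = {"treatment": 0, "control": 0}
for i in range(10000):
    counts[assign(f"user-{i}", "new-checkout")] += 1

# Chi-square statistic vs. the expected 50/50 split; values well above
# 3.84 (df=1, alpha=0.05) would indicate a sample ratio mismatch
expected = 5000
chi2 = sum((obs - expected) ** 2 / expected for obs in counts.values())
```

Hashing on `experiment:user_id` rather than `user_id` alone keeps assignments independent across experiments.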
### Step 12: Gradual Rollout
Read `.data-driven-feature/11-experiment-setup.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create gradual rollout plan for $FEATURE"
prompt: |
You are a deployment engineer. Create a detailed gradual rollout plan for feature: $FEATURE.
## Experiment Setup
[Insert contents of .data-driven-feature/11-experiment-setup.md]
## Instructions
1. Define rollout stages: internal dogfooding → beta (1-5%) → gradual increase to target traffic
2. Specify health checks and go/no-go criteria for each stage
3. Define monitoring checkpoints and metrics thresholds
4. Create automated rollback triggers for anomalies
5. Document manual rollback procedures
Provide a stage-by-stage rollout plan with decision criteria.
```
Save the agent's output to `.data-driven-feature/12-rollout-plan.md`.
Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`.
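The stage ladder with go/no-go gates can be expressed as data plus one decision function; a sketch where the stage names, traffic percentages, and guardrail thresholds are placeholders for the values the rollout plan actually sets:

```python
STAGES = [
    {"name": "dogfood", "traffic_pct": 1},
    {"name": "beta", "traffic_pct": 5},
    {"name": "ramp", "traffic_pct": 25},
    {"name": "full", "traffic_pct": 100},
]

def next_stage(current, health):
    """Advance one stage only while guardrail metrics stay healthy."""
    if health["error_rate"] > 0.001 or health["p95_latency_ms"] > 500:
        return "rollback"                 # automated trigger, not a judgment call
    idx = next(i for i, s in enumerate(STAGES) if s["name"] == current)
    return STAGES[min(idx + 1, len(STAGES) - 1)]["name"]

ok = next_stage("beta", {"error_rate": 0.0004, "p95_latency_ms": 310})
bad = next_stage("beta", {"error_rate": 0.004, "p95_latency_ms": 310})
```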
### Step 13: Security Review
Read `.data-driven-feature/04-architecture.md`, `.data-driven-feature/07-backend.md`, and `.data-driven-feature/08-frontend.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Security review of $FEATURE"
prompt: |
You are a security auditor. Perform a security review of this data-driven feature implementation.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Backend Implementation
[Insert contents of .data-driven-feature/07-backend.md]
## Frontend Implementation
[Insert contents of .data-driven-feature/08-frontend.md]
## Instructions
Review for: OWASP Top 10, data privacy and GDPR compliance, PII handling in analytics events,
authentication/authorization flaws, input validation gaps, experiment manipulation risks,
and any security anti-patterns.
Provide findings with severity, location, and specific fix recommendations.
```
Save the agent's output to `.data-driven-feature/13-security-review.md`.
If there are Critical or High severity findings, address them now before proceeding. Apply fixes and re-validate.
Update `state.json`: set `current_step` to "checkpoint-4", add step 13 to `completed_steps`.
---
## PHASE CHECKPOINT 4 — User Approval Required
Display a summary of validation and launch readiness and ask:
```
Validation and launch preparation complete. Please review:
- .data-driven-feature/10-analytics-validation.md
- .data-driven-feature/11-experiment-setup.md
- .data-driven-feature/12-rollout-plan.md
- .data-driven-feature/13-security-review.md
Security findings: [X critical, Y high, Z medium]
1. Approve — proceed to analysis planning
2. Request changes — tell me what to fix
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 5 until the user approves.
---
## Phase 5: Analysis & Decision (Steps 14-16)
### Step 14: Statistical Analysis
Read `.data-driven-feature/03-experiment-design.md` and `.data-driven-feature/02-hypotheses.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create statistical analysis plan for $FEATURE experiment"
prompt: |
You are a data scientist specializing in experimentation. Create the statistical analysis plan for the A/B test results of: $FEATURE.
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Hypotheses
[Insert contents of .data-driven-feature/02-hypotheses.md]
## Instructions
1. Define statistical significance calculations with confidence intervals
2. Plan segment-level effect analysis
3. Specify secondary metrics impact analysis
4. Use both frequentist and Bayesian approaches
5. Account for multiple testing corrections
6. Define stopping rules and decision criteria
Provide an analysis plan with templates for results reporting.
```
Save the agent's output to `.data-driven-feature/14-analysis-plan.md`.
Update `state.json`: set `current_step` to 15, add step 14 to `completed_steps`.
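The frequentist half of this plan is usually a two-proportion z-test with a confidence interval on the lift; a stdlib-only sketch, with conversion counts that are illustrative:

```python
from statistics import NormalDist

def two_proportion_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-sided z-test for a conversion difference, plus a CI on the lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se_pool
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    # Unpooled standard error for the confidence interval on the difference
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ci = (p_b - p_a - z_crit * se, p_b - p_a + z_crit * se)
    return p_value, ci

# Control: 500/10000 converted; treatment: 590/10000
p_value, ci = two_proportion_test(500, 10000, 590, 10000)
significant = p_value < 0.05
```

Reporting the interval alongside the p-value matters for the Step 15 decision: a significant result whose CI barely clears zero may still fail the business-impact bar.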
### Step 15: Business Impact Assessment Framework
Read `.data-driven-feature/02-hypotheses.md` and `.data-driven-feature/14-analysis-plan.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create business impact assessment framework for $FEATURE"
prompt: |
You are a business analyst. Create a business impact assessment framework for feature: $FEATURE.
## Hypotheses
[Insert contents of .data-driven-feature/02-hypotheses.md]
## Analysis Plan
[Insert contents of .data-driven-feature/14-analysis-plan.md]
## Instructions
1. Define actual vs expected ROI calculation methodology
2. Create a framework for analyzing impact on key business metrics
3. Plan cost-benefit analysis including operational overhead
4. Define criteria for full rollout, iteration, or rollback decisions
5. Create templates for stakeholder reporting
Provide a business impact framework and decision matrix.
```
Save the agent's output to `.data-driven-feature/15-impact-framework.md`.
Update `state.json`: set `current_step` to 16, add step 15 to `completed_steps`.
### Step 16: Optimization Roadmap
Read `.data-driven-feature/14-analysis-plan.md` and `.data-driven-feature/15-impact-framework.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create post-launch optimization roadmap for $FEATURE"
prompt: |
You are a data scientist specializing in product optimization. Create a post-launch optimization roadmap for: $FEATURE.
## Analysis Plan
[Insert contents of .data-driven-feature/14-analysis-plan.md]
## Impact Framework
[Insert contents of .data-driven-feature/15-impact-framework.md]
## Instructions
1. Define user behavior analysis methodology for treatment group
2. Plan friction point identification in user journeys
3. Suggest improvement hypotheses based on expected data patterns
4. Plan follow-up experiments and iteration cycles
5. Design cohort analysis for long-term impact assessment
6. Create a continuous learning feedback loop
Provide an optimization roadmap with follow-up experiment plans.
```
Save the agent's output to `.data-driven-feature/16-optimization-roadmap.md`.
Update `state.json`: set `current_step` to "complete", add step 16 to `completed_steps`.
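The cohort analysis requested in instruction 5 can be illustrated with a minimal sketch: group users by signup period and measure what fraction are still active N periods later. The event shape (user, signup week, active week) and the week bucketing are illustrative assumptions, not part of the roadmap's output:

```python
from collections import defaultdict

def cohort_retention(events):
    """events: iterable of (user_id, signup_week, active_week) tuples.
    Returns {signup_week: {weeks_since_signup: retained_fraction}}."""
    cohorts = defaultdict(set)  # signup_week -> users in that cohort
    active = defaultdict(set)   # (signup_week, offset) -> users active at that offset
    for user, signup, week in events:
        cohorts[signup].add(user)
        active[(signup, week - signup)].add(user)
    return {
        signup: {
            offset: len(active[(signup, offset)]) / len(users)
            for (s, offset) in active if s == signup
        }
        for signup, users in cohorts.items()
    }
```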
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Data-driven feature development complete: $FEATURE
## Files Created
[List all .data-driven-feature/ output files]
## Development Summary
- EDA Report: .data-driven-feature/01-eda-report.md
- Hypotheses: .data-driven-feature/02-hypotheses.md
- Experiment Design: .data-driven-feature/03-experiment-design.md
- Architecture: .data-driven-feature/04-architecture.md
- Analytics Design: .data-driven-feature/05-analytics-design.md
- Data Pipelines: .data-driven-feature/06-data-pipelines.md
- Backend: .data-driven-feature/07-backend.md
- Frontend: .data-driven-feature/08-frontend.md
- ML Integration: .data-driven-feature/09-ml-integration.md
- Analytics Validation: .data-driven-feature/10-analytics-validation.md
- Experiment Setup: .data-driven-feature/11-experiment-setup.md
- Rollout Plan: .data-driven-feature/12-rollout-plan.md
- Security Review: .data-driven-feature/13-security-review.md
- Analysis Plan: .data-driven-feature/14-analysis-plan.md
- Impact Framework: .data-driven-feature/15-impact-framework.md
- Optimization Roadmap: .data-driven-feature/16-optimization-roadmap.md
## Next Steps
1. Review all generated artifacts and documentation
2. Execute the rollout plan in .data-driven-feature/12-rollout-plan.md
3. Monitor using the dashboards from .data-driven-feature/11-experiment-setup.md
4. Run analysis after experiment completes using .data-driven-feature/14-analysis-plan.md
5. Make go/no-go decision using .data-driven-feature/15-impact-framework.md
```

View File

@@ -0,0 +1,10 @@
{
"name": "data-validation-suite",
"version": "1.2.0",
"description": "Schema validation, data quality monitoring, streaming validation pipelines, and input validation for backend APIs",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "database-cloud-optimization",
"version": "1.2.0",
"description": "Database query optimization, cloud cost optimization, and scalability improvements",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "database-design",
"version": "1.2.0",
"description": "Database architecture, schema design, and SQL optimization for production systems",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "database-migrations",
"version": "1.2.0",
"description": "Database migration automation, observability, and cross-database migration strategies",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "debugging-toolkit",
"version": "1.2.0",
"description": "Interactive debugging, developer experience optimization, and smart debugging workflows",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "dependency-management",
"version": "1.2.0",
"description": "Dependency auditing, version management, and security vulnerability scanning",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "deployment-strategies",
"version": "1.2.0",
"description": "Deployment patterns, rollback automation, and infrastructure templates",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "deployment-validation",
"version": "1.2.0",
"description": "Pre-deployment checks, configuration validation, and deployment readiness assessment",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "developer-essentials",
"version": "1.0.1",
"description": "Essential developer skills including Git workflows, SQL optimization, error handling, code review, E2E testing, authentication, debugging, and monorepo management",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "distributed-debugging",
"version": "1.2.0",
"description": "Distributed system tracing and debugging across microservices",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "documentation-generation",
"version": "1.2.1",
"description": "OpenAPI specification generation, Mermaid diagram creation, tutorial writing, API reference documentation",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "dotnet-contribution",
"version": "1.0.0",
"description": "Comprehensive .NET backend development with C#, ASP.NET Core, Entity Framework Core, and Dapper for production-grade applications",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "error-debugging",
"version": "1.2.0",
"description": "Error analysis, trace debugging, and multi-agent problem diagnosis",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -486,7 +486,7 @@ class StructuredLogger {
filename: 'logs/combined.log',
maxsize: 5242880,
maxFiles: 5
});
}));
// Elasticsearch transport for production
if (config.elasticsearch) {

View File

@@ -0,0 +1,10 @@
{
"name": "error-diagnostics",
"version": "1.2.0",
"description": "Error tracing, root cause analysis, and smart debugging for production systems",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -486,7 +486,7 @@ class StructuredLogger {
filename: 'logs/combined.log',
maxsize: 5242880,
maxFiles: 5
});
}));
// Elasticsearch transport for production
if (config.elasticsearch) {

View File

@@ -0,0 +1,10 @@
{
"name": "framework-migration",
"version": "1.3.0",
"description": "Framework updates, migration planning, and architectural transformation workflows",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -660,8 +660,8 @@ framework_upgrades = {
'react': {
'upgrade_command': 'npm install react@{version} react-dom@{version}',
'codemods': [
'npx react-codemod rename-unsafe-lifecycles',
'npx react-codemod error-boundaries'
'npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/rename-unsafe-lifecycles.js src/',
'npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/error-boundaries.js src/'
],
'verification': [
'npm run build',
@@ -671,7 +671,7 @@ framework_upgrades = {
},
'vue': {
'upgrade_command': 'npm install vue@{version}',
'migration_tool': 'npx @vue/migration-tool',
'migration_tool': 'npx vue-codemod -t <transform> <path>',
'breaking_changes': {
'2_to_3': [
'Composition API',

View File

@@ -1,123 +1,659 @@
---
description: "Orchestrate legacy system modernization using the strangler fig pattern with gradual component replacement"
argument-hint: "<legacy codebase path or description> [--strategy parallel-systems|big-bang|by-feature|database-first|api-first]"
---
# Legacy Code Modernization Workflow
Orchestrate a comprehensive legacy system modernization using the strangler fig pattern, enabling gradual replacement of outdated components while maintaining continuous business operations through expert agent coordination.
[Extended thinking: The strangler fig pattern, named after the tropical fig tree that gradually envelops and replaces its host, represents the gold standard for risk-managed legacy modernization. This workflow implements a systematic approach where new functionality gradually replaces legacy components, allowing both systems to coexist during transition. By orchestrating specialized agents for assessment, testing, security, and implementation, we ensure each migration phase is validated before proceeding, minimizing disruption while maximizing modernization velocity.]
## Phase 1: Legacy Assessment and Risk Analysis
### 1. Comprehensive Legacy System Analysis
- Use Task tool with subagent_type="legacy-modernizer"
- Prompt: "Analyze the legacy codebase at $ARGUMENTS. Document technical debt inventory including: outdated dependencies, deprecated APIs, security vulnerabilities, performance bottlenecks, and architectural anti-patterns. Generate a modernization readiness report with component complexity scores (1-10), dependency mapping, and database coupling analysis. Identify quick wins vs complex refactoring targets."
- Expected output: Detailed assessment report with risk matrix and modernization priorities
### 2. Dependency and Integration Mapping
- Use Task tool with subagent_type="architect-review"
- Prompt: "Based on the legacy assessment report, create a comprehensive dependency graph showing: internal module dependencies, external service integrations, shared database schemas, and cross-system data flows. Identify integration points that will require facade patterns or adapter layers during migration. Highlight circular dependencies and tight coupling that need resolution."
- Context from previous: Legacy assessment report, component complexity scores
- Expected output: Visual dependency map and integration point catalog
### 3. Business Impact and Risk Assessment
- Use Task tool with subagent_type="business-analytics::business-analyst"
- Prompt: "Evaluate business impact of modernizing each component identified. Create risk assessment matrix considering: business criticality (revenue impact), user traffic patterns, data sensitivity, regulatory requirements, and fallback complexity. Prioritize components using a weighted scoring system: (Business Value × 0.4) + (Technical Risk × 0.3) + (Quick Win Potential × 0.3). Define rollback strategies for each component."
- Context from previous: Component inventory, dependency mapping
- Expected output: Prioritized migration roadmap with risk mitigation strategies
## Phase 2: Test Coverage Establishment
### 1. Legacy Code Test Coverage Analysis
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Analyze existing test coverage for legacy components at $ARGUMENTS. Use coverage tools to identify untested code paths, missing integration tests, and absent end-to-end scenarios. For components with <40% coverage, generate characterization tests that capture current behavior without modifying functionality. Create test harness for safe refactoring."
- Expected output: Test coverage report and characterization test suite
### 2. Contract Testing Implementation
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Implement contract tests for all integration points identified in dependency mapping. Create consumer-driven contracts for APIs, message queue interactions, and database schemas. Set up contract verification in CI/CD pipeline. Generate performance baselines for response times and throughput to validate modernized components maintain SLAs."
- Context from previous: Integration point catalog, existing test coverage
- Expected output: Contract test suite with performance baselines
### 3. Test Data Management Strategy
- Use Task tool with subagent_type="data-engineering::data-engineer"
- Prompt: "Design test data management strategy for parallel system operation. Create data generation scripts for edge cases, implement data masking for sensitive information, and establish test database refresh procedures. Set up monitoring for data consistency between legacy and modernized components during migration."
- Context from previous: Database schemas, test requirements
- Expected output: Test data pipeline and consistency monitoring
## Phase 3: Incremental Migration Implementation
### 1. Strangler Fig Infrastructure Setup
- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Implement strangler fig infrastructure with API gateway for traffic routing. Configure feature flags for gradual rollout using environment variables or feature management service. Set up proxy layer with request routing rules based on: URL patterns, headers, or user segments. Implement circuit breakers and fallback mechanisms for resilience. Create observability dashboard for dual-system monitoring."
- Expected output: API gateway configuration, feature flag system, monitoring dashboard
### 2. Component Modernization - First Wave
- Use Task tool with subagent_type="python-development::python-pro" or "golang-pro" (based on target stack)
- Prompt: "Modernize first-wave components (quick wins identified in assessment). For each component: extract business logic from legacy code, implement using modern patterns (dependency injection, SOLID principles), ensure backward compatibility through adapter patterns, maintain data consistency with event sourcing or dual writes. Follow 12-factor app principles. Components to modernize: [list from prioritized roadmap]"
- Context from previous: Characterization tests, contract tests, infrastructure setup
- Expected output: Modernized components with adapters
### 3. Security Hardening
- Use Task tool with subagent_type="security-scanning::security-auditor"
- Prompt: "Audit modernized components for security vulnerabilities. Implement security improvements including: OAuth 2.0/JWT authentication, role-based access control, input validation and sanitization, SQL injection prevention, XSS protection, and secrets management. Verify OWASP top 10 compliance. Configure security headers and implement rate limiting."
- Context from previous: Modernized component code
- Expected output: Security audit report and hardened components
## Phase 4: Performance Validation and Optimization
### 1. Performance Testing and Optimization
- Use Task tool with subagent_type="application-performance::performance-engineer"
- Prompt: "Conduct performance testing comparing legacy vs modernized components. Run load tests simulating production traffic patterns, measure response times, throughput, and resource utilization. Identify performance regressions and optimize: database queries with indexing, caching strategies (Redis/Memcached), connection pooling, and async processing where applicable. Validate against SLA requirements."
- Context from previous: Performance baselines, modernized components
- Expected output: Performance test results and optimization recommendations
### 2. Progressive Rollout and Monitoring
- Use Task tool with subagent_type="deployment-strategies::deployment-engineer"
- Prompt: "Implement progressive rollout strategy using feature flags. Start with 5% traffic to modernized components, monitor error rates, latency, and business metrics. Define automatic rollback triggers: error rate >1%, latency >2x baseline, or business metric degradation. Create runbook for traffic shifting: 5% → 25% → 50% → 100% with 24-hour observation periods."
- Context from previous: Feature flag configuration, monitoring dashboard
- Expected output: Rollout plan with automated safeguards
## Phase 5: Migration Completion and Documentation
### 1. Legacy Component Decommissioning
- Use Task tool with subagent_type="legacy-modernizer"
- Prompt: "Plan safe decommissioning of replaced legacy components. Verify no remaining dependencies through traffic analysis (minimum 30 days at 0% traffic). Archive legacy code with documentation of original functionality. Update CI/CD pipelines to remove legacy builds. Clean up unused database tables and remove deprecated API endpoints. Document any retained legacy components with sunset timeline."
- Context from previous: Traffic routing data, modernization status
- Expected output: Decommissioning checklist and timeline
### 2. Documentation and Knowledge Transfer
- Use Task tool with subagent_type="documentation-generation::docs-architect"
- Prompt: "Create comprehensive modernization documentation including: architectural diagrams (before/after), API documentation with migration guides, runbooks for dual-system operation, troubleshooting guides for common issues, and lessons learned report. Generate developer onboarding guide for modernized system. Document technical decisions and trade-offs made during migration."
- Context from previous: All migration artifacts and decisions
- Expected output: Complete modernization documentation package
## Configuration Options
- **--parallel-systems**: Keep both systems running indefinitely (for gradual migration)
- **--big-bang**: Full cutover after validation (higher risk, faster completion)
- **--by-feature**: Migrate complete features rather than technical components
- **--database-first**: Prioritize database modernization before application layer
- **--api-first**: Modernize API layer while maintaining legacy backend
## CRITICAL BEHAVIORAL RULES
You MUST follow these rules exactly. Violating any of them is a failure.
1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.legacy-modernize/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.
## Pre-flight Checks
Before starting, perform these checks:
### 1. Check for existing session
Check if `.legacy-modernize/state.json` exists:
- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:
```
Found an in-progress legacy modernization session:
Target: [target from state]
Current step: [step from state]
1. Resume from where we left off
2. Start fresh (archives existing session)
```
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
### 2. Initialize state
Create `.legacy-modernize/` directory and `state.json`:
```json
{
"target": "$ARGUMENTS",
"status": "in_progress",
"strategy": "parallel-systems",
"current_step": 1,
"current_phase": 1,
"completed_steps": [],
"files_created": [],
"started_at": "ISO_TIMESTAMP",
"last_updated": "ISO_TIMESTAMP"
}
```
Parse `$ARGUMENTS` for `--strategy` flag. Use `parallel-systems` as default if not specified.
### 3. Parse target description
Extract the target description from `$ARGUMENTS` (everything before the flags). This is referenced as `$TARGET` in prompts below.
---
## Phase 1: Legacy Assessment and Risk Analysis (Steps 1–3)
### Step 1: Comprehensive Legacy System Analysis
Use the Task tool with subagent_type="legacy-modernizer":
```
Task:
subagent_type: "legacy-modernizer"
description: "Analyze legacy codebase for modernization readiness"
prompt: |
Analyze the legacy codebase at $TARGET. Document a technical debt inventory including:
- Outdated dependencies and deprecated APIs
- Security vulnerabilities and performance bottlenecks
- Architectural anti-patterns
Generate a modernization readiness report with:
- Component complexity scores (1-10)
- Dependency mapping between modules
- Database coupling analysis
- Quick wins vs complex refactoring targets
Write your complete assessment as a single markdown document.
```
Save the agent's output to `.legacy-modernize/01-legacy-assessment.md`.
Update `state.json`: set `current_step` to 2, add `"01-legacy-assessment.md"` to `files_created`, add step 1 to `completed_steps`.
### Step 2: Dependency and Integration Mapping
Read `.legacy-modernize/01-legacy-assessment.md` to load assessment context.
Use the Task tool with subagent_type="architect-review":
```
Task:
subagent_type: "architect-review"
description: "Create dependency graph and integration point catalog"
prompt: |
Based on the legacy assessment report below, create a comprehensive dependency graph.
## Legacy Assessment
[Insert full contents of .legacy-modernize/01-legacy-assessment.md]
## Deliverables
1. Internal module dependencies
2. External service integrations
3. Shared database schemas and cross-system data flows
4. Integration points requiring facade patterns or adapter layers during migration
5. Circular dependencies and tight coupling that need resolution
Write your complete dependency analysis as a single markdown document.
```
Save the agent's output to `.legacy-modernize/02-dependency-map.md`.
Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`.
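The circular dependencies that the dependency-mapping step asks the agent to highlight can be found mechanically rather than by inspection. A minimal depth-first sketch, assuming the dependency map can be reduced to an adjacency list of module names:

```python
def find_cycles(deps):
    """deps: {module: [modules it depends on]} adjacency list.
    Returns a list of dependency cycles, each as a list of module names
    that starts and ends on the same module."""
    cycles = []
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in deps.get(node, []):
            if dep in visiting:  # back edge into the current path -> cycle
                cycles.append(path[path.index(dep):] + [dep])
            elif dep not in done:
                dfs(dep, path)
        visiting.discard(node)
        done.add(node)
        path.pop()

    for module in deps:
        if module not in done:
            dfs(module, [])
    return cycles
```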
### Step 3: Business Impact and Risk Assessment
Read `.legacy-modernize/01-legacy-assessment.md` and `.legacy-modernize/02-dependency-map.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Evaluate business impact and create migration roadmap"
prompt: |
You are a business analyst specializing in technology transformation and risk assessment.
Evaluate the business impact of modernizing each component identified in the assessment and dependency analysis below.
## Legacy Assessment
[Insert contents of .legacy-modernize/01-legacy-assessment.md]
## Dependency Map
[Insert contents of .legacy-modernize/02-dependency-map.md]
## Deliverables
1. Risk assessment matrix considering: business criticality (revenue impact), user traffic patterns, data sensitivity, regulatory requirements, and fallback complexity
2. Prioritized components using weighted scoring: (Business Value x 0.4) + (Technical Risk x 0.3) + (Quick Win Potential x 0.3)
3. Rollback strategies for each component
4. Recommended migration order
Write your complete business impact analysis as a single markdown document.
```
Save the agent's output to `.legacy-modernize/03-business-impact.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`.
---
## PHASE CHECKPOINT 1 — User Approval Required
You MUST stop here and present the assessment for review.
Display a summary of findings from the Phase 1 output files (key components, risk levels, recommended migration order) and ask:
```
Legacy assessment and risk analysis complete. Please review:
- .legacy-modernize/01-legacy-assessment.md
- .legacy-modernize/02-dependency-map.md
- .legacy-modernize/03-business-impact.md
1. Approve — proceed to test coverage establishment
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop.
---
## Phase 2: Test Coverage Establishment (Steps 4–6)
### Step 4: Legacy Code Test Coverage Analysis
Read `.legacy-modernize/01-legacy-assessment.md` and `.legacy-modernize/03-business-impact.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Analyze and establish test coverage for legacy components"
prompt: |
You are a test automation engineer specializing in legacy system characterization testing.
Analyze existing test coverage for legacy components at $TARGET.
## Legacy Assessment
[Insert contents of .legacy-modernize/01-legacy-assessment.md]
## Migration Priorities
[Insert contents of .legacy-modernize/03-business-impact.md]
## Instructions
1. Use coverage tools to identify untested code paths, missing integration tests, and absent end-to-end scenarios
2. For components with <40% coverage, generate characterization tests that capture current behavior without modifying functionality
3. Create a test harness for safe refactoring
4. Follow existing test patterns and frameworks in the project
Write all test files and report what was created. Provide a coverage summary.
```
Save the agent's output to `.legacy-modernize/04-test-coverage.md`.
Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`.
### Step 5: Contract Testing Implementation
Read `.legacy-modernize/02-dependency-map.md` and `.legacy-modernize/04-test-coverage.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Implement contract tests for integration points"
prompt: |
You are a test automation engineer specializing in contract testing and API verification.
Implement contract tests for all integration points identified in the dependency mapping.
## Dependency Map
[Insert contents of .legacy-modernize/02-dependency-map.md]
## Existing Test Coverage
[Insert contents of .legacy-modernize/04-test-coverage.md]
## Instructions
1. Create consumer-driven contracts for APIs, message queue interactions, and database schemas
2. Set up contract verification in CI/CD pipeline
3. Generate performance baselines for response times and throughput to validate modernized components maintain SLAs
4. Follow existing test patterns and frameworks in the project
Write all test files and report what was created.
```
Save the agent's output to `.legacy-modernize/05-contract-tests.md`.
Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`.
### Step 6: Test Data Management Strategy
Read `.legacy-modernize/02-dependency-map.md` and `.legacy-modernize/04-test-coverage.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Design test data management for parallel system operation"
prompt: |
You are a data engineer specializing in test data management and data pipeline design.
Design a test data management strategy for parallel system operation during migration.
## Dependency Map
[Insert contents of .legacy-modernize/02-dependency-map.md]
## Test Coverage
[Insert contents of .legacy-modernize/04-test-coverage.md]
## Instructions
1. Create data generation scripts for edge cases
2. Implement data masking for sensitive information
3. Establish test database refresh procedures
4. Set up monitoring for data consistency between legacy and modernized components during migration
Write all configuration and script files. Report what was created.
```
Save the agent's output to `.legacy-modernize/06-test-data.md`.
Update `state.json`: set `current_step` to "checkpoint-2", add step 6 to `completed_steps`.
---
## PHASE CHECKPOINT 2 — User Approval Required
Display a summary of test coverage establishment from Phase 2 output files and ask:
```
Test coverage establishment complete. Please review:
- .legacy-modernize/04-test-coverage.md
- .legacy-modernize/05-contract-tests.md
- .legacy-modernize/06-test-data.md
1. Approve — proceed to incremental migration implementation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves.
---
## Phase 3: Incremental Migration Implementation (Steps 7–9)
### Step 7: Strangler Fig Infrastructure Setup
Read `.legacy-modernize/02-dependency-map.md` and `.legacy-modernize/03-business-impact.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Implement strangler fig infrastructure with API gateway and feature flags"
prompt: |
You are a backend architect specializing in distributed systems and migration infrastructure.
Implement strangler fig infrastructure for the legacy modernization.
## Dependency Map
[Insert contents of .legacy-modernize/02-dependency-map.md]
## Migration Priorities
[Insert contents of .legacy-modernize/03-business-impact.md]
## Instructions
1. Configure API gateway for traffic routing between legacy and modern components
2. Set up feature flags for gradual rollout using environment variables or feature management service
3. Implement proxy layer with request routing rules based on URL patterns, headers, or user segments
4. Implement circuit breakers and fallback mechanisms for resilience
5. Create observability dashboard for dual-system monitoring
6. Follow existing infrastructure patterns in the project
Write all configuration files. Report what was created/modified.
```
Save the agent's output to `.legacy-modernize/07-infrastructure.md`.
Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`.
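The routing rules in instruction 3 are usually a deterministic function of the request, so a given user consistently lands on the same system during a partial rollout. A minimal sketch of that decision; the prefix list, flag names, and percentage bucketing are illustrative assumptions, not the configuration this step produces:

```python
import hashlib

def route_request(user_id, path, modernized_prefixes, rollout_percent):
    """Return "modern" or "legacy" for one request.
    Only paths under a modernized prefix are eligible; within those,
    a stable hash of user_id selects rollout_percent of users."""
    if not any(path.startswith(p) for p in modernized_prefixes):
        return "legacy"
    # Stable bucket in [0, 100): the same user always gets the same answer,
    # so raising rollout_percent only ever moves users legacy -> modern.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "modern" if bucket < rollout_percent else "legacy"
```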
### Step 8: Component Modernization — First Wave
Read `.legacy-modernize/01-legacy-assessment.md`, `.legacy-modernize/03-business-impact.md`, `.legacy-modernize/04-test-coverage.md`, and `.legacy-modernize/07-infrastructure.md`.
Detect the target language/stack from the legacy assessment. Use the Task tool with subagent_type="general-purpose", providing role context matching the target stack:
```
Task:
subagent_type: "general-purpose"
description: "Modernize first-wave components from legacy assessment"
prompt: |
You are an expert [DETECTED LANGUAGE] developer specializing in legacy code modernization
and migration to modern frameworks and patterns.
Modernize first-wave components (quick wins identified in assessment).
## Legacy Assessment
[Insert contents of .legacy-modernize/01-legacy-assessment.md]
## Migration Priorities
[Insert contents of .legacy-modernize/03-business-impact.md]
## Test Coverage
[Insert contents of .legacy-modernize/04-test-coverage.md]
## Infrastructure
[Insert contents of .legacy-modernize/07-infrastructure.md]
## Instructions
For each component in the first wave:
1. Extract business logic from legacy code
2. Implement using modern patterns (dependency injection, SOLID principles)
3. Ensure backward compatibility through adapter patterns
4. Maintain data consistency with event sourcing or dual writes
5. Follow 12-factor app principles
6. Run characterization tests to verify preserved behavior
Write all code files. Report what files were created/modified.
```
**Note:** Replace `[DETECTED LANGUAGE]` with the actual language detected from the legacy assessment (e.g., "Python", "TypeScript", "Go", "Rust", "Java"). If the codebase is polyglot, launch parallel agents for each language.
Save the agent's output to `.legacy-modernize/08-first-wave.md`.
Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`.
### Step 9: Security Hardening
Read `.legacy-modernize/08-first-wave.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Security audit and hardening of modernized components"
prompt: |
You are a security engineer specializing in application security auditing,
OWASP compliance, and secure coding practices.
Audit modernized components for security vulnerabilities and implement hardening.
## Modernized Components
[Insert contents of .legacy-modernize/08-first-wave.md]
## Instructions
1. Implement OAuth 2.0/JWT authentication where applicable
2. Add role-based access control
3. Implement input validation and sanitization
4. Verify SQL injection prevention and XSS protection
5. Configure secrets management
6. Verify OWASP Top 10 compliance
7. Configure security headers and implement rate limiting
Provide a security audit report with findings by severity (Critical/High/Medium/Low)
and list all hardening changes made. Write all code changes.
```
Save the agent's output to `.legacy-modernize/09-security.md`.
Update `state.json`: set `current_step` to "checkpoint-3", add step 9 to `completed_steps`.
---
## PHASE CHECKPOINT 3 — User Approval Required
Display a summary of migration implementation from Phase 3 output files and ask:
```
Incremental migration implementation complete. Please review:
- .legacy-modernize/07-infrastructure.md
- .legacy-modernize/08-first-wave.md
- .legacy-modernize/09-security.md
Security findings: [summarize Critical/High/Medium counts from 09-security.md]
1. Approve — proceed to performance validation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 4 until the user approves.
---
## Phase 4: Performance Validation and Rollout (Steps 10-11)
### Step 10: Performance Testing and Optimization
Read `.legacy-modernize/05-contract-tests.md` and `.legacy-modernize/08-first-wave.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Performance testing of modernized vs legacy components"
prompt: |
You are a performance engineer specializing in load testing, benchmarking,
and application performance optimization.
Conduct performance testing comparing legacy vs modernized components.
## Contract Tests and Baselines
[Insert contents of .legacy-modernize/05-contract-tests.md]
## Modernized Components
[Insert contents of .legacy-modernize/08-first-wave.md]
## Instructions
1. Run load tests simulating production traffic patterns
2. Measure response times, throughput, and resource utilization
3. Identify performance regressions and optimize: database queries with indexing, caching strategies, connection pooling, and async processing
4. Validate against SLA requirements (P95 latency within 110% of baseline)
Provide performance test results with comparison tables and optimization recommendations.
```
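The SLA gate in instruction 4 reduces to comparing P95 latencies. A sketch using the nearest-rank percentile; the percentile method and the 110% threshold encoding are assumptions consistent with the prompt above:

```typescript
// Nearest-rank P95: the value at index ceil(0.95 * n) - 1 of the sorted samples.
function p95(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(0.95 * sorted.length) - 1];
}

// Pass if the modernized service's P95 is within 110% of the legacy baseline.
function withinSla(modernLatenciesMs: number[], baselineP95Ms: number): boolean {
  return p95(modernLatenciesMs) <= baselineP95Ms * 1.1;
}
```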
Save the agent's output to `.legacy-modernize/10-performance.md`.
Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`.
### Step 11: Progressive Rollout Plan
Read `.legacy-modernize/07-infrastructure.md` and `.legacy-modernize/10-performance.md`.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Create progressive rollout strategy with automated safeguards"
prompt: |
You are a deployment engineer specializing in progressive delivery,
feature flag management, and production rollout strategies.
Implement a progressive rollout strategy for the modernized components.
## Infrastructure
[Insert contents of .legacy-modernize/07-infrastructure.md]
## Performance Results
[Insert contents of .legacy-modernize/10-performance.md]
## Instructions
1. Configure feature flags for traffic shifting: 5% -> 25% -> 50% -> 100%
2. Define automatic rollback triggers: error rate >1%, latency >2x baseline, or business metric degradation
3. Set a 24-hour observation period between stages
4. Create runbook for the complete traffic shifting process
5. Include monitoring queries and dashboards for each stage
Write all configuration files and the rollout runbook.
```
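The automatic rollback triggers in instruction 2 can be encoded as a single predicate evaluated at each traffic stage; the metric names below are hypothetical, and the business-metric trigger is omitted since it is deployment-specific:

```typescript
interface StageMetrics {
  errorRate: number;      // fraction of failed requests, e.g. 0.012 = 1.2%
  p95LatencyMs: number;   // current P95 latency of the modernized path
  baselineP95Ms: number;  // legacy baseline P95 latency
}

// Roll back if error rate exceeds 1% or latency exceeds 2x baseline.
function shouldRollback(m: StageMetrics): boolean {
  return m.errorRate > 0.01 || m.p95LatencyMs > 2 * m.baselineP95Ms;
}

// Traffic-shift stages from instruction 1, in percent.
const stages = [5, 25, 50, 100];
```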
Save the agent's output to `.legacy-modernize/11-rollout.md`.
Update `state.json`: set `current_step` to "checkpoint-4", add step 11 to `completed_steps`.
---
## PHASE CHECKPOINT 4 — User Approval Required
Display a summary of performance and rollout plans and ask:
```
Performance validation and rollout planning complete. Please review:
- .legacy-modernize/10-performance.md
- .legacy-modernize/11-rollout.md
Performance: [summarize key metrics from 10-performance.md]
1. Approve — proceed to decommissioning and documentation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 5 until the user approves.
---
## Phase 5: Migration Completion and Documentation (Steps 12-13)
### Step 12: Legacy Component Decommissioning
Read `.legacy-modernize/01-legacy-assessment.md`, `.legacy-modernize/08-first-wave.md`, and `.legacy-modernize/11-rollout.md`.
Use the Task tool with subagent_type="legacy-modernizer":
```
Task:
subagent_type: "legacy-modernizer"
description: "Plan safe decommissioning of replaced legacy components"
prompt: |
Plan safe decommissioning of replaced legacy components.
## Legacy Assessment
[Insert contents of .legacy-modernize/01-legacy-assessment.md]
## Modernized Components
[Insert contents of .legacy-modernize/08-first-wave.md]
## Rollout Status
[Insert contents of .legacy-modernize/11-rollout.md]
## Instructions
1. Verify no remaining dependencies through traffic analysis (minimum 30 days at 0% traffic)
2. Archive legacy code with documentation of original functionality
3. Update CI/CD pipelines to remove legacy builds
4. Clean up unused database tables and remove deprecated API endpoints
5. Document any retained legacy components with sunset timeline
Provide a decommissioning checklist and timeline.
```
Save the agent's output to `.legacy-modernize/12-decommission.md`.
Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`.
### Step 13: Documentation and Knowledge Transfer
Read all previous `.legacy-modernize/*.md` files.
Use the Task tool with subagent_type="general-purpose":
```
Task:
subagent_type: "general-purpose"
description: "Create comprehensive modernization documentation package"
prompt: |
You are a technical writer specializing in system migration documentation
and developer knowledge transfer materials.
Create comprehensive modernization documentation.
## All Migration Artifacts
[Insert contents of all .legacy-modernize/*.md files]
## Instructions
1. Create architectural diagrams (before/after)
2. Write API documentation with migration guides
3. Create runbooks for dual-system operation
4. Write troubleshooting guides for common issues
5. Create a lessons learned report
6. Generate developer onboarding guide for the modernized system
7. Document technical decisions and trade-offs made during migration
Write all documentation files. Report what was created.
```
Save the agent's output to `.legacy-modernize/13-documentation.md`.
Update `state.json`: set `current_step` to "complete", add step 13 to `completed_steps`.
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Legacy modernization complete: $TARGET
## Session Files
- .legacy-modernize/01-legacy-assessment.md — Legacy system analysis
- .legacy-modernize/02-dependency-map.md — Dependency and integration mapping
- .legacy-modernize/03-business-impact.md — Business impact and risk assessment
- .legacy-modernize/04-test-coverage.md — Test coverage analysis
- .legacy-modernize/05-contract-tests.md — Contract tests and baselines
- .legacy-modernize/06-test-data.md — Test data management strategy
- .legacy-modernize/07-infrastructure.md — Strangler fig infrastructure
- .legacy-modernize/08-first-wave.md — First wave component modernization
- .legacy-modernize/09-security.md — Security audit and hardening
- .legacy-modernize/10-performance.md — Performance testing results
- .legacy-modernize/11-rollout.md — Progressive rollout plan
- .legacy-modernize/12-decommission.md — Decommissioning checklist
- .legacy-modernize/13-documentation.md — Documentation package
## Success Criteria
- All high-priority components modernized with >80% test coverage
- Zero unplanned downtime during migration
- Performance metrics maintained or improved (P95 latency within 110% of baseline)
- Security vulnerabilities reduced by >90%
- Technical debt score improved by >60%
- Successful operation for 30 days post-migration without rollbacks
- Complete documentation enabling new developer onboarding in <1 week
Target: $ARGUMENTS
## Next Steps
1. Review all generated code, tests, and documentation
2. Execute the progressive rollout plan in .legacy-modernize/11-rollout.md
3. Monitor for 30 days post-migration per .legacy-modernize/12-decommission.md
4. Complete decommissioning after observation period
```


@@ -161,24 +161,24 @@ describe("Dependency Compatibility", () => {
### Identifying Breaking Changes
```bash
# Use changelog parsers
npx changelog-parser react 16.0.0 17.0.0
# Or manually check
curl https://raw.githubusercontent.com/facebook/react/main/CHANGELOG.md
# Check the changelog directly
curl https://raw.githubusercontent.com/facebook/react/master/CHANGELOG.md
```
### Codemod for Automated Fixes
```bash
# React upgrade codemods
npx react-codeshift <transform> <path>
# Run jscodeshift with transform URL
npx jscodeshift -t <transform-url> <path>
# Example: Update lifecycle methods
npx react-codeshift \
--parser tsx \
--transform react-codeshift/transforms/rename-unsafe-lifecycles.js \
src/
# Example: Rename unsafe lifecycle methods
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/rename-unsafe-lifecycles.js src/
# For TypeScript files
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/rename-unsafe-lifecycles.js --parser=tsx src/
# Dry run to preview changes
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/rename-unsafe-lifecycles.js --dry src/
```
### Custom Migration Script


@@ -327,21 +327,20 @@ function ProfileTimeline() {
### Run React Codemods
```bash
# Install jscodeshift
npm install -g jscodeshift
# Rename unsafe lifecycle methods
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/rename-unsafe-lifecycles.js src/
# React 16.9 codemod (rename unsafe lifecycle methods)
npx react-codeshift <transform> <path>
# Update React imports (React 17+)
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/update-react-imports.js src/
# Example: Rename UNSAFE_ methods
npx react-codeshift --parser=tsx \
--transform=react-codeshift/transforms/rename-unsafe-lifecycles.js \
src/
# Add error boundaries
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/error-boundaries.js src/
# Update to new JSX Transform (React 17+)
npx react-codeshift --parser=tsx \
--transform=react-codeshift/transforms/new-jsx-transform.js \
src/
# For TypeScript files
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/rename-unsafe-lifecycles.js --parser=tsx src/
# Dry run to preview changes
npx jscodeshift -t https://raw.githubusercontent.com/reactjs/react-codemod/master/transforms/rename-unsafe-lifecycles.js --dry --print src/
# Class to Hooks (third-party)
npx codemod react/hooks/convert-class-to-function src/


@@ -0,0 +1,10 @@
{
"name": "frontend-mobile-development",
"version": "1.2.1",
"description": "Frontend UI development and mobile application implementation across platforms",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -1,20 +1,165 @@
---
name: tailwind-design-system
description: Build scalable design systems with Tailwind CSS, design tokens, component libraries, and responsive patterns. Use when creating component libraries, implementing design systems, or standardizing UI patterns.
description: Build scalable design systems with Tailwind CSS v4, design tokens, component libraries, and responsive patterns. Use when creating component libraries, implementing design systems, or standardizing UI patterns.
---
# Tailwind Design System
# Tailwind Design System (v4)
Build production-ready design systems with Tailwind CSS, including design tokens, component variants, responsive patterns, and accessibility.
Build production-ready design systems with Tailwind CSS v4, including CSS-first configuration, design tokens, component variants, responsive patterns, and accessibility.
> **Note**: This skill targets Tailwind CSS v4 (2024+). For v3 projects, refer to the [upgrade guide](https://tailwindcss.com/docs/upgrade-guide).
## When to Use This Skill
- Creating a component library with Tailwind
- Implementing design tokens and theming
- Creating a component library with Tailwind v4
- Implementing design tokens and theming with CSS-first configuration
- Building responsive and accessible components
- Standardizing UI patterns across a codebase
- Migrating to or extending Tailwind CSS
- Setting up dark mode and color schemes
- Migrating from Tailwind v3 to v4
- Setting up dark mode with native CSS features
## Key v4 Changes
| v3 Pattern | v4 Pattern |
| ------------------------------------- | --------------------------------------------------------------------- |
| `tailwind.config.ts` | `@theme` in CSS |
| `@tailwind base/components/utilities` | `@import "tailwindcss"` |
| `darkMode: "class"` | `@custom-variant dark (&:where(.dark, .dark *))` |
| `theme.extend.colors` | `@theme { --color-*: value }` |
| `require("tailwindcss-animate")` | CSS `@keyframes` in `@theme` + `@starting-style` for entry animations |
## Quick Start
```css
/* app.css - Tailwind v4 CSS-first configuration */
@import "tailwindcss";
/* Define your theme with @theme */
@theme {
/* Semantic color tokens using OKLCH for better color perception */
--color-background: oklch(100% 0 0);
--color-foreground: oklch(14.5% 0.025 264);
--color-primary: oklch(14.5% 0.025 264);
--color-primary-foreground: oklch(98% 0.01 264);
--color-secondary: oklch(96% 0.01 264);
--color-secondary-foreground: oklch(14.5% 0.025 264);
--color-muted: oklch(96% 0.01 264);
--color-muted-foreground: oklch(46% 0.02 264);
--color-accent: oklch(96% 0.01 264);
--color-accent-foreground: oklch(14.5% 0.025 264);
--color-destructive: oklch(53% 0.22 27);
--color-destructive-foreground: oklch(98% 0.01 264);
--color-border: oklch(91% 0.01 264);
--color-ring: oklch(14.5% 0.025 264);
--color-card: oklch(100% 0 0);
--color-card-foreground: oklch(14.5% 0.025 264);
/* Ring offset for focus states */
--color-ring-offset: oklch(100% 0 0);
/* Radius tokens */
--radius-sm: 0.25rem;
--radius-md: 0.375rem;
--radius-lg: 0.5rem;
--radius-xl: 0.75rem;
/* Animation tokens - keyframes inside @theme are output when referenced by --animate-* variables */
--animate-fade-in: fade-in 0.2s ease-out;
--animate-fade-out: fade-out 0.2s ease-in;
--animate-slide-in: slide-in 0.3s ease-out;
--animate-slide-out: slide-out 0.3s ease-in;
@keyframes fade-in {
from {
opacity: 0;
}
to {
opacity: 1;
}
}
@keyframes fade-out {
from {
opacity: 1;
}
to {
opacity: 0;
}
}
@keyframes slide-in {
from {
transform: translateY(-0.5rem);
opacity: 0;
}
to {
transform: translateY(0);
opacity: 1;
}
}
@keyframes slide-out {
from {
transform: translateY(0);
opacity: 1;
}
to {
transform: translateY(-0.5rem);
opacity: 0;
}
}
}
/* Dark mode variant - use @custom-variant for class-based dark mode */
@custom-variant dark (&:where(.dark, .dark *));
/* Dark mode theme overrides */
.dark {
--color-background: oklch(14.5% 0.025 264);
--color-foreground: oklch(98% 0.01 264);
--color-primary: oklch(98% 0.01 264);
--color-primary-foreground: oklch(14.5% 0.025 264);
--color-secondary: oklch(22% 0.02 264);
--color-secondary-foreground: oklch(98% 0.01 264);
--color-muted: oklch(22% 0.02 264);
--color-muted-foreground: oklch(65% 0.02 264);
--color-accent: oklch(22% 0.02 264);
--color-accent-foreground: oklch(98% 0.01 264);
--color-destructive: oklch(42% 0.15 27);
--color-destructive-foreground: oklch(98% 0.01 264);
--color-border: oklch(22% 0.02 264);
--color-ring: oklch(83% 0.02 264);
--color-card: oklch(14.5% 0.025 264);
--color-card-foreground: oklch(98% 0.01 264);
--color-ring-offset: oklch(14.5% 0.025 264);
}
/* Base styles */
@layer base {
* {
@apply border-border;
}
body {
@apply bg-background text-foreground antialiased;
}
}
```
## Core Concepts
@@ -26,7 +171,7 @@ Brand Tokens (abstract)
└── Component Tokens (specific)
Example:
blue-500 → primary → button-bg
oklch(45% 0.2 260) → --color-primary → bg-primary
```
### 2. Component Architecture
@@ -35,120 +180,25 @@ Example:
Base styles → Variants → Sizes → States → Overrides
```
## Quick Start
```typescript
// tailwind.config.ts
import type { Config } from "tailwindcss";
const config: Config = {
content: ["./src/**/*.{js,ts,jsx,tsx,mdx}"],
darkMode: "class",
theme: {
extend: {
colors: {
// Semantic color tokens
primary: {
DEFAULT: "hsl(var(--primary))",
foreground: "hsl(var(--primary-foreground))",
},
secondary: {
DEFAULT: "hsl(var(--secondary))",
foreground: "hsl(var(--secondary-foreground))",
},
destructive: {
DEFAULT: "hsl(var(--destructive))",
foreground: "hsl(var(--destructive-foreground))",
},
muted: {
DEFAULT: "hsl(var(--muted))",
foreground: "hsl(var(--muted-foreground))",
},
accent: {
DEFAULT: "hsl(var(--accent))",
foreground: "hsl(var(--accent-foreground))",
},
background: "hsl(var(--background))",
foreground: "hsl(var(--foreground))",
border: "hsl(var(--border))",
ring: "hsl(var(--ring))",
},
borderRadius: {
lg: "var(--radius)",
md: "calc(var(--radius) - 2px)",
sm: "calc(var(--radius) - 4px)",
},
},
},
plugins: [require("tailwindcss-animate")],
};
export default config;
```
```css
/* globals.css */
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
:root {
--background: 0 0% 100%;
--foreground: 222.2 84% 4.9%;
--primary: 222.2 47.4% 11.2%;
--primary-foreground: 210 40% 98%;
--secondary: 210 40% 96.1%;
--secondary-foreground: 222.2 47.4% 11.2%;
--muted: 210 40% 96.1%;
--muted-foreground: 215.4 16.3% 46.9%;
--accent: 210 40% 96.1%;
--accent-foreground: 222.2 47.4% 11.2%;
--destructive: 0 84.2% 60.2%;
--destructive-foreground: 210 40% 98%;
--border: 214.3 31.8% 91.4%;
--ring: 222.2 84% 4.9%;
--radius: 0.5rem;
}
.dark {
--background: 222.2 84% 4.9%;
--foreground: 210 40% 98%;
--primary: 210 40% 98%;
--primary-foreground: 222.2 47.4% 11.2%;
--secondary: 217.2 32.6% 17.5%;
--secondary-foreground: 210 40% 98%;
--muted: 217.2 32.6% 17.5%;
--muted-foreground: 215 20.2% 65.1%;
--accent: 217.2 32.6% 17.5%;
--accent-foreground: 210 40% 98%;
--destructive: 0 62.8% 30.6%;
--destructive-foreground: 210 40% 98%;
--border: 217.2 32.6% 17.5%;
--ring: 212.7 26.8% 83.9%;
}
}
```
## Patterns
### Pattern 1: CVA (Class Variance Authority) Components
```typescript
// components/ui/button.tsx
import { Slot } from '@radix-ui/react-slot'
import { cva, type VariantProps } from 'class-variance-authority'
import { forwardRef } from 'react'
import { cn } from '@/lib/utils'
const buttonVariants = cva(
// Base styles
'inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium ring-offset-background transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50',
// Base styles - v4 uses native CSS variables
'inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50',
{
variants: {
variant: {
default: 'bg-primary text-primary-foreground hover:bg-primary/90',
destructive: 'bg-destructive text-destructive-foreground hover:bg-destructive/90',
outline: 'border border-input bg-background hover:bg-accent hover:text-accent-foreground',
outline: 'border border-border bg-background hover:bg-accent hover:text-accent-foreground',
secondary: 'bg-secondary text-secondary-foreground hover:bg-secondary/80',
ghost: 'hover:bg-accent hover:text-accent-foreground',
link: 'text-primary underline-offset-4 hover:underline',
@@ -157,7 +207,7 @@ const buttonVariants = cva(
default: 'h-10 px-4 py-2',
sm: 'h-9 rounded-md px-3',
lg: 'h-11 rounded-md px-8',
icon: 'h-10 w-10',
icon: 'size-10',
},
},
defaultVariants: {
@@ -173,21 +223,24 @@ export interface ButtonProps
asChild?: boolean
}
const Button = forwardRef<HTMLButtonElement, ButtonProps>(
({ className, variant, size, asChild = false, ...props }, ref) => {
const Comp = asChild ? Slot : 'button'
return (
<Comp
className={cn(buttonVariants({ variant, size, className }))}
ref={ref}
{...props}
/>
)
}
)
Button.displayName = 'Button'
export { Button, buttonVariants }
// React 19: No forwardRef needed
export function Button({
className,
variant,
size,
asChild = false,
ref,
...props
}: ButtonProps & { ref?: React.Ref<HTMLButtonElement> }) {
const Comp = asChild ? Slot : 'button'
return (
<Comp
className={cn(buttonVariants({ variant, size, className }))}
ref={ref}
{...props}
/>
)
}
// Usage
<Button variant="destructive" size="lg">Delete</Button>
@@ -195,79 +248,95 @@ export { Button, buttonVariants }
<Button asChild><Link href="/home">Home</Link></Button>
```
### Pattern 2: Compound Components
### Pattern 2: Compound Components (React 19)
```typescript
// components/ui/card.tsx
import { cn } from '@/lib/utils'
import { forwardRef } from 'react'
const Card = forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
// React 19: ref is a regular prop, no forwardRef
export function Card({
className,
ref,
...props
}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
return (
<div
ref={ref}
className={cn(
'rounded-lg border bg-card text-card-foreground shadow-sm',
'rounded-lg border border-border bg-card text-card-foreground shadow-sm',
className
)}
{...props}
/>
)
)
Card.displayName = 'Card'
}
const CardHeader = forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
export function CardHeader({
className,
ref,
...props
}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
return (
<div
ref={ref}
className={cn('flex flex-col space-y-1.5 p-6', className)}
{...props}
/>
)
)
CardHeader.displayName = 'CardHeader'
}
const CardTitle = forwardRef<HTMLHeadingElement, React.HTMLAttributes<HTMLHeadingElement>>(
({ className, ...props }, ref) => (
export function CardTitle({
className,
ref,
...props
}: React.HTMLAttributes<HTMLHeadingElement> & { ref?: React.Ref<HTMLHeadingElement> }) {
return (
<h3
ref={ref}
className={cn('text-2xl font-semibold leading-none tracking-tight', className)}
{...props}
/>
)
)
CardTitle.displayName = 'CardTitle'
}
const CardDescription = forwardRef<HTMLParagraphElement, React.HTMLAttributes<HTMLParagraphElement>>(
({ className, ...props }, ref) => (
export function CardDescription({
className,
ref,
...props
}: React.HTMLAttributes<HTMLParagraphElement> & { ref?: React.Ref<HTMLParagraphElement> }) {
return (
<p
ref={ref}
className={cn('text-sm text-muted-foreground', className)}
{...props}
/>
)
)
CardDescription.displayName = 'CardDescription'
}
const CardContent = forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
export function CardContent({
className,
ref,
...props
}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
return (
<div ref={ref} className={cn('p-6 pt-0', className)} {...props} />
)
)
CardContent.displayName = 'CardContent'
}
const CardFooter = forwardRef<HTMLDivElement, React.HTMLAttributes<HTMLDivElement>>(
({ className, ...props }, ref) => (
export function CardFooter({
className,
ref,
...props
}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
return (
<div
ref={ref}
className={cn('flex items-center p-6 pt-0', className)}
{...props}
/>
)
)
CardFooter.displayName = 'CardFooter'
export { Card, CardHeader, CardTitle, CardDescription, CardContent, CardFooter }
}
// Usage
<Card>
@@ -288,43 +357,40 @@ export { Card, CardHeader, CardTitle, CardDescription, CardContent, CardFooter }
```typescript
// components/ui/input.tsx
import { forwardRef } from 'react'
import { cn } from '@/lib/utils'
export interface InputProps extends React.InputHTMLAttributes<HTMLInputElement> {
error?: string
ref?: React.Ref<HTMLInputElement>
}
const Input = forwardRef<HTMLInputElement, InputProps>(
({ className, type, error, ...props }, ref) => {
return (
<div className="relative">
<input
type={type}
className={cn(
'flex h-10 w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50',
error && 'border-destructive focus-visible:ring-destructive',
className
)}
ref={ref}
aria-invalid={!!error}
aria-describedby={error ? `${props.id}-error` : undefined}
{...props}
/>
{error && (
<p
id={`${props.id}-error`}
className="mt-1 text-sm text-destructive"
role="alert"
>
{error}
</p>
export function Input({ className, type, error, ref, ...props }: InputProps) {
return (
<div className="relative">
<input
type={type}
className={cn(
'flex h-10 w-full rounded-md border border-border bg-background px-3 py-2 text-sm ring-offset-background file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50',
error && 'border-destructive focus-visible:ring-destructive',
className
)}
</div>
)
}
)
Input.displayName = 'Input'
ref={ref}
aria-invalid={!!error}
aria-describedby={error ? `${props.id}-error` : undefined}
{...props}
/>
{error && (
<p
id={`${props.id}-error`}
className="mt-1 text-sm text-destructive"
role="alert"
>
{error}
</p>
)}
</div>
)
}
// components/ui/label.tsx
import { cva, type VariantProps } from 'class-variance-authority'
@@ -333,17 +399,20 @@ const labelVariants = cva(
'text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70'
)
const Label = forwardRef<HTMLLabelElement, React.LabelHTMLAttributes<HTMLLabelElement>>(
({ className, ...props }, ref) => (
export function Label({
className,
ref,
...props
}: React.LabelHTMLAttributes<HTMLLabelElement> & { ref?: React.Ref<HTMLLabelElement> }) {
return (
<label ref={ref} className={cn(labelVariants(), className)} {...props} />
)
)
Label.displayName = 'Label'
}
// Usage with React Hook Form
// Usage with React Hook Form + Zod
import { useForm } from 'react-hook-form'
import { zodResolver } from '@hookform/resolvers/zod'
import * as z from 'zod'
import { z } from 'zod'
const schema = z.object({
email: z.string().email('Invalid email address'),
@@ -459,88 +528,124 @@ export function Container({ className, size, ...props }: ContainerProps) {
</Container>
```
### Pattern 5: Animation Utilities
### Pattern 5: Native CSS Animations (v4)
```css
/* In your CSS file - native @starting-style for entry animations */
@theme {
--animate-dialog-in: dialog-fade-in 0.2s ease-out;
--animate-dialog-out: dialog-fade-out 0.15s ease-in;
}
@keyframes dialog-fade-in {
from {
opacity: 0;
transform: scale(0.95) translateY(-0.5rem);
}
to {
opacity: 1;
transform: scale(1) translateY(0);
}
}
@keyframes dialog-fade-out {
from {
opacity: 1;
transform: scale(1) translateY(0);
}
to {
opacity: 0;
transform: scale(0.95) translateY(-0.5rem);
}
}
/* Native popover animations using @starting-style */
[popover] {
transition:
opacity 0.2s,
transform 0.2s,
display 0.2s allow-discrete;
opacity: 0;
transform: scale(0.95);
}
[popover]:popover-open {
opacity: 1;
transform: scale(1);
}
@starting-style {
[popover]:popover-open {
opacity: 0;
transform: scale(0.95);
}
}
```
```typescript
// lib/animations.ts - Tailwind CSS Animate utilities
import { cn } from './utils'
export const fadeIn = 'animate-in fade-in duration-300'
export const fadeOut = 'animate-out fade-out duration-300'
export const slideInFromTop = 'animate-in slide-in-from-top duration-300'
export const slideInFromBottom = 'animate-in slide-in-from-bottom duration-300'
export const slideInFromLeft = 'animate-in slide-in-from-left duration-300'
export const slideInFromRight = 'animate-in slide-in-from-right duration-300'
export const zoomIn = 'animate-in zoom-in-95 duration-300'
export const zoomOut = 'animate-out zoom-out-95 duration-300'
// Compound animations
export const modalEnter = cn(fadeIn, zoomIn, 'duration-200')
export const modalExit = cn(fadeOut, zoomOut, 'duration-200')
export const dropdownEnter = cn(fadeIn, slideInFromTop, 'duration-150')
export const dropdownExit = cn(fadeOut, 'slide-out-to-top', 'duration-150')
// components/ui/dialog.tsx
// components/ui/dialog.tsx - Using native popover API
import * as DialogPrimitive from '@radix-ui/react-dialog'
import { cn } from '@/lib/utils'
const DialogOverlay = forwardRef<
React.ElementRef<typeof DialogPrimitive.Overlay>,
React.ComponentPropsWithoutRef<typeof DialogPrimitive.Overlay>
>(({ className, ...props }, ref) => (
<DialogPrimitive.Overlay
ref={ref}
className={cn(
'fixed inset-0 z-50 bg-black/80',
'data-[state=open]:animate-in data-[state=closed]:animate-out',
'data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0',
className
)}
{...props}
/>
))
const DialogPortal = DialogPrimitive.Portal
const DialogContent = forwardRef<
React.ElementRef<typeof DialogPrimitive.Content>,
React.ComponentPropsWithoutRef<typeof DialogPrimitive.Content>
>(({ className, children, ...props }, ref) => (
<DialogPortal>
<DialogOverlay />
<DialogPrimitive.Content
export function DialogOverlay({
className,
ref,
...props
}: React.ComponentPropsWithoutRef<typeof DialogPrimitive.Overlay> & {
ref?: React.Ref<HTMLDivElement>
}) {
return (
<DialogPrimitive.Overlay
ref={ref}
className={cn(
'fixed left-[50%] top-[50%] z-50 grid w-full max-w-lg translate-x-[-50%] translate-y-[-50%] gap-4 border bg-background p-6 shadow-lg',
'data-[state=open]:animate-in data-[state=closed]:animate-out',
'data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0',
'data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95',
'data-[state=closed]:slide-out-to-left-1/2 data-[state=closed]:slide-out-to-top-[48%]',
'data-[state=open]:slide-in-from-left-1/2 data-[state=open]:slide-in-from-top-[48%]',
'sm:rounded-lg',
'fixed inset-0 z-50 bg-black/80',
'data-[state=open]:animate-fade-in data-[state=closed]:animate-fade-out',
className
)}
{...props}
>
{children}
</DialogPrimitive.Content>
</DialogPortal>
))
/>
)
}
export function DialogContent({
className,
children,
ref,
...props
}: React.ComponentPropsWithoutRef<typeof DialogPrimitive.Content> & {
ref?: React.Ref<HTMLDivElement>
}) {
return (
<DialogPortal>
<DialogOverlay />
<DialogPrimitive.Content
ref={ref}
className={cn(
'fixed left-1/2 top-1/2 z-50 grid w-full max-w-lg -translate-x-1/2 -translate-y-1/2 gap-4 border border-border bg-background p-6 shadow-lg sm:rounded-lg',
'data-[state=open]:animate-dialog-in data-[state=closed]:animate-dialog-out',
className
)}
{...props}
>
{children}
</DialogPrimitive.Content>
</DialogPortal>
)
}
```
### Pattern 6: Dark Mode Implementation
### Pattern 6: Dark Mode with CSS (v4)
```typescript
// providers/ThemeProvider.tsx
// providers/ThemeProvider.tsx - Simplified for v4
'use client'
import { createContext, useContext, useEffect, useState } from 'react'
type Theme = 'dark' | 'light' | 'system'
interface ThemeProviderProps {
children: React.ReactNode
defaultTheme?: Theme
storageKey?: string
}
interface ThemeContextType {
theme: Theme
setTheme: (theme: Theme) => void
@@ -553,7 +658,11 @@ export function ThemeProvider({
children,
defaultTheme = 'system',
storageKey = 'theme',
}: ThemeProviderProps) {
}: {
children: React.ReactNode
defaultTheme?: Theme
storageKey?: string
}) {
const [theme, setTheme] = useState<Theme>(defaultTheme)
const [resolvedTheme, setResolvedTheme] = useState<'dark' | 'light'>('light')
@@ -563,34 +672,34 @@ export function ThemeProvider({
}, [storageKey])
useEffect(() => {
const root = window.document.documentElement
const root = document.documentElement
root.classList.remove('light', 'dark')
let resolved: 'dark' | 'light'
if (theme === 'system') {
resolved = window.matchMedia('(prefers-color-scheme: dark)').matches
? 'dark'
: 'light'
} else {
resolved = theme
}
const resolved = theme === 'system'
? (window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light')
: theme
root.classList.add(resolved)
setResolvedTheme(resolved)
// Update meta theme-color for mobile browsers
const metaThemeColor = document.querySelector('meta[name="theme-color"]')
if (metaThemeColor) {
metaThemeColor.setAttribute('content', resolved === 'dark' ? '#09090b' : '#ffffff')
}
}, [theme])
return (
<ThemeContext.Provider value={{
theme,
setTheme: (newTheme) => {
localStorage.setItem(storageKey, newTheme)
setTheme(newTheme)
},
resolvedTheme,
}}>
{children}
</ThemeContext.Provider>
)
}
@@ -613,8 +722,8 @@ export function ThemeToggle() {
size="icon"
onClick={() => setTheme(resolvedTheme === 'dark' ? 'light' : 'dark')}
>
<Sun className="size-5 rotate-0 scale-100 transition-all dark:-rotate-90 dark:scale-0" />
<Moon className="absolute size-5 rotate-90 scale-0 transition-all dark:rotate-0 dark:scale-100" />
<span className="sr-only">Toggle theme</span>
</Button>
)
@@ -642,27 +751,124 @@ export const focusRing = cn(
export const disabled = "disabled:pointer-events-none disabled:opacity-50";
```
## Advanced v4 Patterns
### Custom Utilities with `@utility`
Define reusable custom utilities:
```css
/* Custom utility for decorative lines */
@utility line-t {
@apply relative before:absolute before:top-0 before:-left-[100vw] before:h-px before:w-[200vw] before:bg-gray-950/5 dark:before:bg-white/10;
}
/* Custom utility for text gradients */
@utility text-gradient {
@apply bg-gradient-to-r from-primary to-accent bg-clip-text text-transparent;
}
```
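Once defined, these utilities compose with variants just like built-ins. A minimal usage sketch (component and surrounding class names are illustrative, not from this guide; only `line-t` and `text-gradient` come from the CSS above):

```typescript
// Custom utilities registered via @utility work anywhere a built-in
// utility does, including with variants like hover: or lg:.
export function Hero() {
  return (
    <section className="line-t py-12">
      <h1 className="text-gradient text-4xl font-bold">Ship faster</h1>
    </section>
  )
}
```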
### Theme Modifiers
```css
/* Use @theme inline when referencing other CSS variables */
@theme inline {
--font-sans: var(--font-inter), system-ui;
}
/* Use @theme static to always generate CSS variables (even when unused) */
@theme static {
--color-brand: oklch(65% 0.15 240);
}
/* Import with theme options */
@import "tailwindcss" theme(static);
```
### Namespace Overrides
```css
@theme {
/* Clear all default colors and define your own */
--color-*: initial;
--color-white: #fff;
--color-black: #000;
--color-primary: oklch(45% 0.2 260);
--color-secondary: oklch(65% 0.15 200);
/* Clear ALL defaults for a minimal setup */
/* --*: initial; */
}
```
### Semi-transparent Color Variants
```css
@theme {
/* Use color-mix() for alpha variants */
--color-primary-50: color-mix(in oklab, var(--color-primary) 5%, transparent);
--color-primary-100: color-mix(
in oklab,
var(--color-primary) 10%,
transparent
);
--color-primary-200: color-mix(
in oklab,
var(--color-primary) 20%,
transparent
);
}
```
### Container Queries
```css
@theme {
--container-xs: 20rem;
--container-sm: 24rem;
--container-md: 28rem;
--container-lg: 32rem;
}
```
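These variables only define the breakpoint sizes; they are consumed in markup through container-query variants. A minimal sketch (component and class names assumed, not from this guide):

```typescript
import type { ReactNode } from 'react'

// The parent opts in with `@container`; children respond to the
// container's width (not the viewport) via @sm:/@lg: variants,
// which map to the --container-sm/--container-lg tokens above.
export function CardGrid({ children }: { children: ReactNode }) {
  return (
    <div className="@container">
      <div className="grid grid-cols-1 gap-4 @sm:grid-cols-2 @lg:grid-cols-3">
        {children}
      </div>
    </div>
  )
}
```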
## v3 to v4 Migration Checklist
- [ ] Replace `tailwind.config.ts` with CSS `@theme` block
- [ ] Change `@tailwind base/components/utilities` to `@import "tailwindcss"`
- [ ] Move color definitions to `@theme { --color-*: value }`
- [ ] Replace `darkMode: "class"` with `@custom-variant dark`
- [ ] Move `@keyframes` inside `@theme` blocks (ensures keyframes output with theme)
- [ ] Replace `require("tailwindcss-animate")` with native CSS animations
- [ ] Update `h-10 w-10` to `size-10` (new utility)
- [ ] Remove `forwardRef` (React 19 passes ref as prop)
- [ ] Consider OKLCH colors for better color perception
- [ ] Replace custom plugins with `@utility` directives
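Most of the checklist lands in a single entry stylesheet. A minimal sketch of the v4 result, assuming a class-based dark mode and one custom animation (token names are illustrative):

```css
/* app/globals.css -- replaces tailwind.config.ts and @tailwind directives */
@import "tailwindcss";

/* was: darkMode: "class" in tailwind.config.ts */
@custom-variant dark (&:is(.dark *));

@theme {
  /* was: theme.extend.colors */
  --color-primary: oklch(45% 0.2 260);

  /* keyframes live next to the animation token that uses them */
  --animate-fade-in: fade-in 0.2s ease-out;
  @keyframes fade-in {
    from { opacity: 0; }
    to { opacity: 1; }
  }
}
```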
## Best Practices
### Do's
- **Use CSS variables** - Enable runtime theming
- **Use `@theme` blocks** - CSS-first configuration is v4's core pattern
- **Use OKLCH colors** - Better perceptual uniformity than HSL
- **Compose with CVA** - Type-safe variants
- **Use semantic tokens** - `bg-primary` not `bg-blue-500`
- **Use `size-*`** - New shorthand for `w-* h-*`
- **Add accessibility** - ARIA attributes, focus states
### Don'ts
- **Don't use `tailwind.config.ts`** - Use CSS `@theme` instead
- **Don't use `@tailwind` directives** - Use `@import "tailwindcss"`
- **Don't use `forwardRef`** - React 19 passes ref as prop
- **Don't use arbitrary values** - Extend `@theme` instead
- **Don't hardcode colors** - Use semantic tokens
- **Don't forget dark mode** - Test both themes
## Resources
- [Tailwind CSS v4 Documentation](https://tailwindcss.com/docs)
- [Tailwind v4 Beta Announcement](https://tailwindcss.com/blog/tailwindcss-v4-beta)
- [CVA Documentation](https://cva.style/docs)
- [shadcn/ui](https://ui.shadcn.com/)
- [Radix Primitives](https://www.radix-ui.com/primitives)


@@ -0,0 +1,10 @@
{
"name": "frontend-mobile-security",
"version": "1.2.0",
"description": "XSS prevention, CSRF protection, content security policies, mobile app security, and secure storage patterns",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -0,0 +1,10 @@
{
"name": "full-stack-orchestration",
"version": "1.3.0",
"description": "End-to-end feature orchestration with testing, security, performance, and deployment",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}


@@ -1,128 +1,593 @@
---
description: "Orchestrate end-to-end full-stack feature development across backend, frontend, database, and infrastructure layers"
argument-hint: "<feature description> [--stack react/fastapi/postgres] [--api-style rest|graphql] [--complexity simple|medium|complex]"
---
# Full-Stack Feature Orchestrator
## CRITICAL BEHAVIORAL RULES
You MUST follow these rules exactly. Violating any of them is a failure.
1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.full-stack-feature/` before the next step begins. Read from prior step files -- do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan -- execute it.
## Pre-flight Checks
Before starting, perform these checks:
### 1. Check for existing session
Check if `.full-stack-feature/state.json` exists:
- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:
```
Found an in-progress full-stack feature session:
Feature: [name from state]
Current step: [step from state]
1. Resume from where we left off
2. Start fresh (archives existing session)
```
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
### 2. Initialize state
Create `.full-stack-feature/` directory and `state.json`:
```json
{
  "feature": "$ARGUMENTS",
  "status": "in_progress",
  "stack": "auto-detect",
  "api_style": "rest",
  "complexity": "medium",
  "current_step": 1,
  "current_phase": 1,
  "completed_steps": [],
  "files_created": [],
  "started_at": "ISO_TIMESTAMP",
  "last_updated": "ISO_TIMESTAMP"
}
```
Parse `$ARGUMENTS` for `--stack`, `--api-style`, and `--complexity` flags. Use defaults if not specified.
### 3. Parse feature description
Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below.
---
## Phase 1: Architecture & Design Foundation (Steps 1-3) -- Interactive
### Step 1: Requirements Gathering
Gather requirements through interactive Q&A. Ask ONE question at a time using the AskUserQuestion tool. Do NOT ask all questions at once.
**Questions to ask (in order):**
1. **Problem Statement**: "What problem does this feature solve? Who is the user and what's their pain point?"
2. **Acceptance Criteria**: "What are the key acceptance criteria? When is this feature 'done'?"
3. **Scope Boundaries**: "What is explicitly OUT of scope for this feature?"
4. **Technical Constraints**: "Any technical constraints? (e.g., existing API conventions, specific DB, latency requirements, auth system)"
5. **Stack Confirmation**: "Confirm the tech stack -- detected [stack] from project. Frontend framework? Backend framework? Database? Any changes?"
6. **Dependencies**: "Does this feature depend on or affect other features/services?"
After gathering answers, write the requirements document:
**Output file:** `.full-stack-feature/01-requirements.md`
```markdown
# Requirements: $FEATURE
## Problem Statement
[From Q1]
## Acceptance Criteria
[From Q2 -- formatted as checkboxes]
## Scope
### In Scope
[Derived from answers]
### Out of Scope
[From Q3]
## Technical Constraints
[From Q4]
## Technology Stack
[From Q5 -- frontend, backend, database, infrastructure]
## Dependencies
[From Q6]
## Configuration
- Stack: [detected or specified]
- API Style: [rest|graphql]
- Complexity: [simple|medium|complex]
```
Update `state.json`: set `current_step` to 2, add `"01-requirements.md"` to `files_created`, add step 1 to `completed_steps`.
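For illustration, the state file after step 1 would look roughly like this (the feature name, stack, and timestamps are hypothetical placeholders):

```json
{
  "feature": "user avatar upload",
  "status": "in_progress",
  "stack": "react/fastapi/postgres",
  "api_style": "rest",
  "complexity": "medium",
  "current_step": 2,
  "current_phase": 1,
  "completed_steps": [1],
  "files_created": ["01-requirements.md"],
  "started_at": "2026-02-19T00:00:00Z",
  "last_updated": "2026-02-19T00:05:00Z"
}
```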
### Step 2: Database & Data Model Design
Read `.full-stack-feature/01-requirements.md` to load requirements context.
Use the Task tool to launch a database architecture agent:
```
Task:
subagent_type: "general-purpose"
description: "Design database schema and data models for $FEATURE"
prompt: |
You are a database architect. Design the database schema and data models for this feature.
## Requirements
[Insert full contents of .full-stack-feature/01-requirements.md]
## Deliverables
1. **Entity relationship design**: Tables/collections, relationships, cardinality
2. **Schema definitions**: Column types, constraints, defaults, nullable fields
3. **Indexing strategy**: Which columns to index, index types, composite indexes
4. **Migration strategy**: How to safely add/modify schema in production
5. **Query patterns**: Expected read/write patterns and how the schema supports them
6. **Data access patterns**: Repository/DAO interface design
Write your complete database design as a single markdown document.
```
Save the agent's output to `.full-stack-feature/02-database-design.md`.
Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`.
### Step 3: Backend & Frontend Architecture
Read `.full-stack-feature/01-requirements.md` and `.full-stack-feature/02-database-design.md`.
Use the Task tool to launch an architecture agent:
```
Task:
subagent_type: "general-purpose"
description: "Design full-stack architecture for $FEATURE"
prompt: |
You are a full-stack architect. Design the complete backend and frontend architecture for this feature.
## Requirements
[Insert contents of .full-stack-feature/01-requirements.md]
## Database Design
[Insert contents of .full-stack-feature/02-database-design.md]
## Deliverables
### Backend Architecture
1. **API design**: Endpoints/resolvers, request/response schemas, error handling, versioning
2. **Service layer**: Business logic components, their responsibilities, boundaries
3. **Authentication/authorization**: How auth applies to new endpoints
4. **Integration points**: How this connects to existing services/systems
### Frontend Architecture
1. **Component hierarchy**: Page components, containers, presentational components
2. **State management**: What state is needed, where it lives, data flow
3. **Routing**: New routes, navigation structure, route guards
4. **API integration**: Data fetching strategy, caching, optimistic updates
### Cross-Cutting Concerns
1. **Error handling**: Backend errors -> API responses -> frontend error states
2. **Security considerations**: Input validation, XSS prevention, CSRF, data protection
3. **Risk assessment**: Technical risks and mitigation strategies
Write your complete architecture design as a single markdown document.
```
Save the agent's output to `.full-stack-feature/03-architecture.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`.
---
## PHASE CHECKPOINT 1 -- User Approval Required
You MUST stop here and present the architecture for review.
Display a summary of the database design and architecture from `.full-stack-feature/02-database-design.md` and `.full-stack-feature/03-architecture.md` (key components, API endpoints, data model overview, component structure) and ask:
```
Architecture and database design are complete. Please review:
- .full-stack-feature/02-database-design.md
- .full-stack-feature/03-architecture.md
1. Approve -- proceed to implementation
2. Request changes -- tell me what to adjust
3. Pause -- save progress and stop here
```
Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` and stop.
---
## Phase 2: Implementation (Steps 4-7)
### Step 4: Database Implementation
Read `.full-stack-feature/01-requirements.md` and `.full-stack-feature/02-database-design.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Implement database layer for $FEATURE"
prompt: |
You are a database engineer. Implement the database layer for this feature.
## Requirements
[Insert contents of .full-stack-feature/01-requirements.md]
## Database Design
[Insert contents of .full-stack-feature/02-database-design.md]
## Instructions
1. Create migration scripts for schema changes
2. Implement models/entities matching the schema design
3. Implement repository/data access layer with the designed query patterns
4. Add database-level validation constraints
5. Optimize queries with proper indexes as designed
6. Follow the project's existing ORM and migration patterns
Write all code files. Report what files were created/modified.
```
Save a summary to `.full-stack-feature/04-database-impl.md`.
Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`.
### Step 5: Backend Implementation
Read `.full-stack-feature/01-requirements.md`, `.full-stack-feature/03-architecture.md`, and `.full-stack-feature/04-database-impl.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Implement backend services for $FEATURE"
prompt: |
You are a backend developer. Implement the backend services for this feature based on the approved architecture.
## Requirements
[Insert contents of .full-stack-feature/01-requirements.md]
## Architecture
[Insert contents of .full-stack-feature/03-architecture.md]
## Database Implementation
[Insert contents of .full-stack-feature/04-database-impl.md]
## Instructions
1. Implement API endpoints/resolvers as designed in the architecture
2. Implement business logic in the service layer
3. Wire up the data access layer from the database implementation
4. Add input validation, error handling, and proper HTTP status codes
5. Implement authentication/authorization middleware as designed
6. Add structured logging and observability hooks
7. Follow the project's existing code patterns and conventions
Write all code files. Report what files were created/modified.
```
Save a summary to `.full-stack-feature/05-backend-impl.md`.
Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`.
### Step 6: Frontend Implementation
Read `.full-stack-feature/01-requirements.md`, `.full-stack-feature/03-architecture.md`, and `.full-stack-feature/05-backend-impl.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Implement frontend for $FEATURE"
prompt: |
You are a frontend developer. Implement the frontend components for this feature.
## Requirements
[Insert contents of .full-stack-feature/01-requirements.md]
## Architecture
[Insert contents of .full-stack-feature/03-architecture.md]
## Backend Implementation
[Insert contents of .full-stack-feature/05-backend-impl.md]
## Instructions
1. Build UI components following the component hierarchy from the architecture
2. Implement state management and data flow as designed
3. Integrate with the backend API endpoints using the designed data fetching strategy
4. Implement form handling, validation, and error states
5. Add loading states and optimistic updates where appropriate
6. Ensure responsive design and accessibility basics (semantic HTML, ARIA labels, keyboard nav)
7. Follow the project's existing frontend patterns and component conventions
Write all code files. Report what files were created/modified.
```
Save a summary to `.full-stack-feature/06-frontend-impl.md`.
**Note:** If the feature has no frontend component (pure backend/API), skip this step -- write a brief note in `06-frontend-impl.md` explaining why it was skipped, and continue.
Update `state.json`: set `current_step` to 7, add step 6 to `completed_steps`.
### Step 7: Testing & Validation
Read `.full-stack-feature/04-database-impl.md`, `.full-stack-feature/05-backend-impl.md`, and `.full-stack-feature/06-frontend-impl.md`.
Launch three agents in parallel using multiple Task tool calls in a single response:
**7a. Test Suite Creation:**
```
Task:
subagent_type: "test-automator"
description: "Create test suite for $FEATURE"
prompt: |
Create a comprehensive test suite for this full-stack feature.
## What was implemented
### Database
[Insert contents of .full-stack-feature/04-database-impl.md]
### Backend
[Insert contents of .full-stack-feature/05-backend-impl.md]
### Frontend
[Insert contents of .full-stack-feature/06-frontend-impl.md]
## Instructions
1. Write unit tests for all new backend functions/methods
2. Write integration tests for API endpoints
3. Write database tests for migrations and query patterns
4. Write frontend component tests if applicable
5. Cover: happy path, edge cases, error handling, boundary conditions
6. Follow existing test patterns and frameworks in the project
7. Target 80%+ code coverage for new code
Write all test files. Report what test files were created and what they cover.
```
**7b. Security Review:**
```
Task:
subagent_type: "security-auditor"
description: "Security review of $FEATURE"
prompt: |
Perform a security review of this full-stack feature implementation.
## Architecture
[Insert contents of .full-stack-feature/03-architecture.md]
## Database Implementation
[Insert contents of .full-stack-feature/04-database-impl.md]
## Backend Implementation
[Insert contents of .full-stack-feature/05-backend-impl.md]
## Frontend Implementation
[Insert contents of .full-stack-feature/06-frontend-impl.md]
Review for: OWASP Top 10, authentication/authorization flaws, input validation gaps,
SQL injection risks, XSS/CSRF vulnerabilities, data protection issues, dependency vulnerabilities,
and any security anti-patterns.
Provide findings with severity, location, and specific fix recommendations.
```
**7c. Performance Review:**
```
Task:
subagent_type: "performance-engineer"
description: "Performance review of $FEATURE"
prompt: |
Review the performance of this full-stack feature implementation.
## Architecture
[Insert contents of .full-stack-feature/03-architecture.md]
## Database Implementation
[Insert contents of .full-stack-feature/04-database-impl.md]
## Backend Implementation
[Insert contents of .full-stack-feature/05-backend-impl.md]
## Frontend Implementation
[Insert contents of .full-stack-feature/06-frontend-impl.md]
Review for: N+1 queries, missing indexes, unoptimized queries, memory leaks,
missing caching opportunities, large payloads, slow rendering paths,
bundle size concerns, unnecessary re-renders.
Provide findings with impact estimates and specific optimization recommendations.
```
After all three complete, consolidate results into `.full-stack-feature/07-testing.md`:
```markdown
# Testing & Validation: $FEATURE
## Test Suite
[Summary from 7a -- files created, coverage areas]
## Security Findings
[Summary from 7b -- findings by severity]
## Performance Findings
[Summary from 7c -- findings by impact]
## Action Items
[List any critical/high findings that need to be addressed before delivery]
```
If there are Critical or High severity findings from security or performance review, address them now before proceeding. Apply fixes and re-validate.
Update `state.json`: set `current_step` to "checkpoint-2", add step 7 to `completed_steps`.
---
## PHASE CHECKPOINT 2 -- User Approval Required
Display a summary of testing and validation results from `.full-stack-feature/07-testing.md` and ask:
```
Testing and validation complete. Please review .full-stack-feature/07-testing.md
Test coverage: [summary]
Security findings: [X critical, Y high, Z medium]
Performance findings: [X critical, Y high, Z medium]
1. Approve -- proceed to deployment & documentation
2. Request changes -- tell me what to fix
3. Pause -- save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves.
---
## Phase 3: Delivery (Steps 8-9)
### Step 8: Deployment & Infrastructure
Read `.full-stack-feature/03-architecture.md` and `.full-stack-feature/07-testing.md`.
Use the Task tool:
```
Task:
subagent_type: "deployment-engineer"
description: "Create deployment config for $FEATURE"
prompt: |
Create the deployment and infrastructure configuration for this full-stack feature.
## Architecture
[Insert contents of .full-stack-feature/03-architecture.md]
## Testing Results
[Insert contents of .full-stack-feature/07-testing.md]
## Instructions
1. Create or update CI/CD pipeline configuration for the new code
2. Add database migration steps to the deployment pipeline
3. Add feature flag configuration if the feature should be gradually rolled out
4. Define health checks and readiness probes for new services/endpoints
5. Create monitoring alerts for key metrics (error rate, latency, throughput)
6. Write a deployment runbook with rollback steps (including database rollback)
7. Follow existing deployment patterns in the project
Write all configuration files. Report what was created/modified.
```
Save output to `.full-stack-feature/08-deployment.md`.
Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`.
### Step 9: Documentation & Handoff
Read all previous `.full-stack-feature/*.md` files.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Write documentation for $FEATURE"
prompt: |
You are a technical writer. Create documentation for this full-stack feature.
## Feature Context
[Insert contents of .full-stack-feature/01-requirements.md]
## Architecture
[Insert contents of .full-stack-feature/03-architecture.md]
## Implementation Summary
### Database: [Insert contents of .full-stack-feature/04-database-impl.md]
### Backend: [Insert contents of .full-stack-feature/05-backend-impl.md]
### Frontend: [Insert contents of .full-stack-feature/06-frontend-impl.md]
## Deployment
[Insert contents of .full-stack-feature/08-deployment.md]
## Instructions
1. Write API documentation for new endpoints (request/response examples)
2. Document the database schema changes and migration notes
3. Update or create user-facing documentation if applicable
4. Write a brief architecture decision record (ADR) explaining key design choices
5. Create a handoff summary: what was built, how to test it, known limitations
Write documentation files. Report what was created/modified.
```
Save output to `.full-stack-feature/09-documentation.md`.
Update `state.json`: set `current_step` to "complete", add step 9 to `completed_steps`.
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Full-stack feature development complete: $FEATURE
## Files Created
[List all .full-stack-feature/ output files]
## Implementation Summary
- Requirements: .full-stack-feature/01-requirements.md
- Database Design: .full-stack-feature/02-database-design.md
- Architecture: .full-stack-feature/03-architecture.md
- Database Implementation: .full-stack-feature/04-database-impl.md
- Backend Implementation: .full-stack-feature/05-backend-impl.md
- Frontend Implementation: .full-stack-feature/06-frontend-impl.md
- Testing & Validation: .full-stack-feature/07-testing.md
- Deployment: .full-stack-feature/08-deployment.md
- Documentation: .full-stack-feature/09-documentation.md
## Next Steps
1. Review all generated code and documentation
2. Run the full test suite to verify everything passes
3. Create a pull request with the implementation
4. Deploy using the runbook in .full-stack-feature/08-deployment.md
```

View File

@@ -0,0 +1,10 @@
{
"name": "functional-programming",
"version": "1.2.0",
"description": "Functional programming with Elixir, OTP patterns, Phoenix framework, and distributed systems",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "game-development",
"version": "1.2.1",
"description": "Unity game development with C# scripting, Minecraft server plugin development with Bukkit/Spigot APIs",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "git-pr-workflows",
"version": "1.3.0",
"description": "Git workflow automation, pull request enhancement, and team onboarding processes",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -1,129 +1,598 @@
---
description: "Orchestrate git workflow from code review through PR creation with quality gates"
argument-hint: "<target branch> [--skip-tests] [--draft-pr] [--no-push] [--squash] [--conventional] [--trunk-based]"
---
# Git Workflow Orchestrator
## CRITICAL BEHAVIORAL RULES
You MUST follow these rules exactly. Violating any of them is a failure.
1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.git-workflow/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.
## Pre-flight Checks
Before starting, perform these checks:
### 1. Check for existing session
Check if `.git-workflow/state.json` exists:
- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:
```
Found an in-progress git workflow session:
Target branch: [branch from state]
Current step: [step from state]
1. Resume from where we left off
2. Start fresh (archives existing session)
```
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
### 2. Initialize state
Create `.git-workflow/` directory and `state.json`:
```json
{
"target_branch": "$ARGUMENTS",
"status": "in_progress",
"flags": {
"skip_tests": false,
"draft_pr": false,
"no_push": false,
"squash": false,
"conventional": true,
"trunk_based": false
},
"current_step": 1,
"current_phase": 1,
"completed_steps": [],
"files_created": [],
"started_at": "ISO_TIMESTAMP",
"last_updated": "ISO_TIMESTAMP"
}
```
Parse `$ARGUMENTS` for the target branch (defaults to 'main') and flags. Use defaults if not specified.
### 3. Gather git context
Run these commands and save output:
- `git status` — current working tree state
- `git diff --stat` — summary of changes
- `git diff` — full diff of changes
- `git log --oneline -10` — recent commit history
- `git branch --show-current` — current branch name
Save this context to `.git-workflow/00-git-context.md`.
---
## Phase 1: Pre-Commit Review and Analysis (Steps 1-2)
### Step 1: Code Quality Assessment
Read `.git-workflow/00-git-context.md`.
Use the Task tool to launch the code reviewer:
```
Task:
subagent_type: "code-reviewer"
description: "Review uncommitted changes for code quality"
prompt: |
Review all uncommitted changes for code quality issues.
## Git Context
[Insert contents of .git-workflow/00-git-context.md]
Check for:
1. Code style violations
2. Security vulnerabilities
3. Performance concerns
4. Missing error handling
5. Incomplete implementations
Generate a detailed report with severity levels (critical/high/medium/low) and provide
specific line-by-line feedback.
## Deliverables
Output format: structured report with:
- Issues list with severity, file, line, description
- Summary counts: {critical: N, high: N, medium: N, low: N}
- Recommendations for fixes
Write your complete review as a single markdown document.
```
Save the agent's output to `.git-workflow/01-code-review.md`.
Update `state.json`: set `current_step` to 2, add step 1 to `completed_steps`.
### Step 2: Dependency and Breaking Change Analysis
Read `.git-workflow/00-git-context.md` and `.git-workflow/01-code-review.md`.
Use the Task tool:
```
Task:
subagent_type: "code-reviewer"
description: "Analyze changes for dependencies and breaking changes"
prompt: |
Analyze the changes for dependency and breaking change issues.
## Git Context
[Insert contents of .git-workflow/00-git-context.md]
## Code Review
[Insert contents of .git-workflow/01-code-review.md]
Check for:
1. New dependencies or version changes
2. Breaking API changes
3. Database schema modifications
4. Configuration changes
5. Backward compatibility issues
Identify any changes that require migration scripts or documentation updates.
## Deliverables
1. Breaking change assessment
2. Dependency change analysis
3. Migration requirements
4. Documentation update needs
Write your complete analysis as a single markdown document.
```
Save output to `.git-workflow/02-breaking-changes.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 2 to `completed_steps`.
---
## PHASE CHECKPOINT 1 — User Approval Required
Display a summary of code review and breaking change analysis and ask:
```
Pre-commit review complete. Please review:
- .git-workflow/01-code-review.md
- .git-workflow/02-breaking-changes.md
Issues found: [X critical, Y high, Z medium, W low]
Breaking changes: [summary]
1. Approve — proceed to testing (or skip if --skip-tests)
2. Fix issues first — I'll address the critical/high issues
3. Pause — save progress and stop here
```
If user selects option 2, address the critical/high issues, then re-run the review and re-checkpoint.
Do NOT proceed to Phase 2 until the user approves.
---
## Phase 2: Testing and Validation (Steps 3-4)
If `--skip-tests` flag is set, skip to Phase 3. Write a note in `.git-workflow/03-test-results.md` explaining tests were skipped.
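Flags such as `--skip-tests` come from `$ARGUMENTS`, parsed during pre-flight. A sketch of that parsing, assuming one positional target branch plus the documented boolean flags (the function name is illustrative):

```python
def parse_arguments(arguments: str):
    """Split '$ARGUMENTS' into a target branch and the supported flags."""
    known = {"skip-tests", "draft-pr", "no-push", "squash",
             "conventional", "trunk-based", "feature-branch"}
    branch = "main"  # default target branch
    flags = {k.replace("-", "_"): False for k in known}
    for token in arguments.split():
        if token.startswith("--"):
            name = token[2:]
            if name in known:
                flags[name.replace("-", "_")] = True
        else:
            branch = token  # non-flag token is the target branch
    return branch, flags
```

Unknown `--` tokens are silently ignored here; a stricter version could halt on them, consistent with the halt-on-failure rule.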
### Step 3: Test Execution and Coverage
Read `.git-workflow/00-git-context.md` and `.git-workflow/01-code-review.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Execute test suites for modified code"
prompt: |
You are a test automation expert. Execute all test suites for the modified code.
## Git Context
[Insert contents of .git-workflow/00-git-context.md]
## Code Review Issues
[Insert contents of .git-workflow/01-code-review.md]
Run:
1. Unit tests
2. Integration tests
3. End-to-end tests if applicable
Generate coverage report and identify untested code paths. Ensure tests cover the
critical/high issues identified in the code review.
## Deliverables
Report with:
- Test results: passed, failed, skipped
- Coverage metrics: statements, branches, functions, lines
- Untested critical paths
- Recommendations for additional tests
Write your complete test report as a single markdown document.
```
Save output to `.git-workflow/03-test-results.md`.
Update `state.json`: set `current_step` to 4, add step 3 to `completed_steps`.
### Step 4: Test Recommendations and Gap Analysis
Read `.git-workflow/03-test-results.md` and `.git-workflow/02-breaking-changes.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Identify test gaps and recommend additional tests"
prompt: |
You are a test automation expert. Based on test results and code changes, identify
testing gaps.
## Test Results
[Insert contents of .git-workflow/03-test-results.md]
## Breaking Changes
[Insert contents of .git-workflow/02-breaking-changes.md]
Identify:
1. Missing test scenarios
2. Edge cases not covered
3. Integration points needing verification
4. Performance benchmarks needed
Generate test implementation recommendations prioritized by risk.
## Deliverables
1. Prioritized list of additional tests needed
2. Edge case coverage gaps
3. Integration test recommendations
4. Risk assessment for untested paths
Write your complete analysis as a single markdown document.
```
Save output to `.git-workflow/04-test-gaps.md`.
Update `state.json`: set `current_step` to "checkpoint-2", add step 4 to `completed_steps`.
---
## PHASE CHECKPOINT 2 — User Approval Required
Display test results summary and ask:
```
Testing complete. Please review:
- .git-workflow/03-test-results.md
- .git-workflow/04-test-gaps.md
Test results: [X passed, Y failed, Z skipped]
Coverage: [summary]
Test gaps: [summary of critical gaps]
1. Approve — proceed to commit message generation
2. Fix failing tests first
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves. If tests are failing, the user must address them first.
---
## Phase 3: Commit Message Generation (Steps 5-6)
### Step 5: Change Analysis and Categorization
Read `.git-workflow/00-git-context.md` and `.git-workflow/03-test-results.md`.
Use the Task tool:
```
Task:
subagent_type: "code-reviewer"
description: "Categorize changes for commit message"
prompt: |
Analyze all changes and categorize them according to Conventional Commits specification.
## Git Context
[Insert contents of .git-workflow/00-git-context.md]
## Test Results
[Insert contents of .git-workflow/03-test-results.md]
Identify the primary change type (feat/fix/docs/style/refactor/perf/test/build/ci/chore/revert)
and scope. Determine if this should be a single commit or multiple atomic commits.
## Deliverables
1. Change type classification
2. Scope identification
3. Single vs multiple commit recommendation
4. Commit structure with groupings
Write your complete categorization as a single markdown document.
```
Save output to `.git-workflow/05-change-categorization.md`.
Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`.
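A rough first pass at the type classification can come from file paths alone; a heuristic sketch (illustrative only: the agent's diff-level judgment, not path matching, is authoritative):

```python
def guess_commit_type(path: str) -> str:
    """Map a changed file path to a likely Conventional Commits type."""
    p = path.lower()
    if "test" in p:
        return "test"
    if p.endswith((".md", ".rst")) or p.startswith("docs/"):
        return "docs"
    if p.startswith(".github/") or "jenkinsfile" in p:
        return "ci"
    if p.endswith(("package.json", "cargo.toml", "pyproject.toml")):
        return "build"
    return "feat"  # default; diff review may downgrade to fix/refactor
```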
### Step 6: Conventional Commit Message Creation
Read `.git-workflow/05-change-categorization.md` and `.git-workflow/02-breaking-changes.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create Conventional Commits message for changes"
prompt: |
You are an expert at writing clear, well-structured Conventional Commits messages.
Create commit message(s) based on the change categorization.
## Change Categorization
[Insert contents of .git-workflow/05-change-categorization.md]
## Breaking Changes
[Insert contents of .git-workflow/02-breaking-changes.md]
Format: <type>(<scope>): <subject>
- Clear subject line (50 chars max)
- Detailed body explaining what and why (not how)
- Footer with BREAKING CHANGE: if applicable
- References to issues/tickets
- Co-authors if applicable
## Deliverables
1. Formatted commit message(s) ready to use
2. Rationale for commit structure choice
Write the commit messages as a single markdown document with clear delimiters.
```
Save output to `.git-workflow/06-commit-messages.md`.
Update `state.json`: set `current_step` to "checkpoint-3", add step 6 to `completed_steps`.
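The `<type>(<scope>): <subject>` format described in Step 6 can be lint-checked mechanically before the checkpoint. A sketch under the rules stated above (50-character subject, blank line before body); the function name is illustrative:

```python
import re

TYPES = "feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert"
HEADER = re.compile(rf"^({TYPES})(\([a-z0-9._/-]+\))?(!)?: \S.*$")

def lint_commit_message(message: str) -> list[str]:
    """Return a list of problems with a Conventional Commits message."""
    problems = []
    lines = message.splitlines()
    subject = lines[0] if lines else ""
    if not HEADER.match(subject):
        problems.append("subject must match '<type>(<scope>): <subject>'")
    if len(subject) > 50:
        problems.append("subject exceeds 50 characters")
    if len(lines) > 1 and lines[1] != "":
        problems.append("blank line required between subject and body")
    return problems
```

An empty list means the message passes; anything else should be surfaced at the checkpoint.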
---
## PHASE CHECKPOINT 3 — User Approval Required
Display the proposed commit message(s) and ask:
```
Commit message(s) ready. Please review .git-workflow/06-commit-messages.md
[Display the commit message(s)]
1. Approve — proceed to branch management and push
2. Edit message — tell me what to change
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 4 until the user approves.
---
## Phase 4: Branch Strategy and Push (Steps 7-8)
### Step 7: Branch Management and Pre-Push Validation
Read `.git-workflow/00-git-context.md`, `.git-workflow/06-commit-messages.md`, and all previous step files.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Prepare branch strategy and validate push readiness"
prompt: |
You are a deployment engineer specializing in git workflows and CI/CD.
## Git Context
[Insert contents of .git-workflow/00-git-context.md]
## Workflow Flags
[Insert flags from state.json]
Based on workflow type (trunk-based or feature-branch):
For feature branch:
- Ensure branch name follows pattern (feature|bugfix|hotfix)/<ticket>-<description>
- Verify no conflicts with target branch
For trunk-based:
- Prepare for direct main push with feature flag strategy if needed
Perform pre-push checks:
1. Verify all CI checks will pass
2. Confirm no sensitive data in commits
3. Validate commit signatures if required
4. Check branch protection rules
5. Ensure all review comments addressed
## Deliverables
1. Branch preparation commands
2. Conflict status
3. Pre-push validation results
4. Push readiness confirmation or blocking issues
Write your complete validation as a single markdown document.
```
Save output to `.git-workflow/07-branch-validation.md`.
Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`.
If `--no-push` flag is set, skip Step 8 and proceed to Phase 5.
### Step 8: Execute Git Operations
Based on the approved commit messages and branch validation:
1. Stage changes: `git add` the relevant files
2. Create commit(s) using the approved messages from `.git-workflow/06-commit-messages.md`
3. If `--squash` flag: squash commits as configured
4. Push to remote with appropriate flags
**Important:** Before executing any git operations, display the planned commands and ask for final confirmation:
```
Ready to execute git operations:
[List exact commands]
1. Execute — run these commands
2. Modify — adjust the commands
3. Abort — do not execute
```
Save execution results to `.git-workflow/08-push-results.md`.
Update `state.json`: set `current_step` to "checkpoint-4", add step 8 to `completed_steps`.
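The command list shown at the confirmation prompt can be assembled from the flags before anything runs. A sketch (the helper is hypothetical; the real commit uses the approved message file verbatim via `git commit -F`):

```python
def plan_git_commands(branch, message_file, squash=False, no_push=False):
    """Build the Step 8 git commands as a reviewable list of strings."""
    commands = ["git add -A"]
    if squash:
        # fold the branch's commits into one before recommitting
        commands.append(f"git reset --soft $(git merge-base HEAD {branch})")
    commands.append(f"git commit -F {message_file}")
    if not no_push:
        commands.append("git push -u origin HEAD")
    return commands
```

Displaying this list, rather than executing eagerly, is what makes the Execute/Modify/Abort prompt meaningful.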
---
## PHASE CHECKPOINT 4 — User Approval Required (if not --no-push)
```
Git operations complete. Please review .git-workflow/08-push-results.md
1. Approve — proceed to PR creation
2. Pause — save progress and stop here
```
---
## Phase 5: Pull Request Creation (Steps 9-10)
If `--no-push` flag is set, skip this phase entirely.
### Step 9: PR Description Generation
Read all `.git-workflow/*.md` files.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create comprehensive PR description"
prompt: |
You are a technical writer specializing in pull request documentation.
Create a comprehensive PR description.
## Code Review
[Insert contents of .git-workflow/01-code-review.md]
## Breaking Changes
[Insert contents of .git-workflow/02-breaking-changes.md]
## Test Results
[Insert contents of .git-workflow/03-test-results.md]
## Commit Messages
[Insert contents of .git-workflow/06-commit-messages.md]
Include:
1. Summary of changes (what and why)
2. Type of change checklist
3. Testing performed summary
4. Screenshots/recordings note if UI changes
5. Deployment notes
6. Related issues/tickets
7. Breaking changes section if applicable
8. Reviewer checklist
Format as GitHub-flavored Markdown.
Write the complete PR description as a single markdown document.
```
Save output to `.git-workflow/09-pr-description.md`.
Update `state.json`: set `current_step` to 10, add step 9 to `completed_steps`.
### Step 10: PR Creation and Metadata
Read `.git-workflow/09-pr-description.md` and `.git-workflow/00-git-context.md`.
Create the PR using the `gh` CLI:
- Use the description from `.git-workflow/09-pr-description.md`
- Set draft status if `--draft-pr` flag is set
- Add appropriate labels based on change categorization
- Link related issues if referenced
**Important:** Display the planned PR creation command and ask for confirmation:
```
Ready to create PR:
Title: [proposed title]
Target: [target branch]
Draft: [yes/no]
1. Create PR — execute now
2. Edit — adjust title or description
3. Skip — don't create PR
```
Save PR URL and metadata to `.git-workflow/10-pr-created.md`.
Update `state.json`: set `current_step` to "complete", add step 10 to `completed_steps`.
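The `gh` invocation can likewise be built up front and shown at the confirmation prompt; a sketch using standard `gh pr create` flags (label values are placeholders):

```python
def plan_pr_command(title, target, body_file, draft=False, labels=()):
    """Build a `gh pr create` invocation as an argument list."""
    args = ["gh", "pr", "create",
            "--title", title,
            "--base", target,
            "--body-file", body_file]
    if draft:
        args.append("--draft")
    for label in labels:
        args += ["--label", label]
    return args
```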
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Git workflow complete!
## Files Created
[List all .git-workflow/ output files]
## Workflow Summary
- Code Review: .git-workflow/01-code-review.md
- Breaking Changes: .git-workflow/02-breaking-changes.md
- Test Results: .git-workflow/03-test-results.md
- Test Gaps: .git-workflow/04-test-gaps.md
- Change Categorization: .git-workflow/05-change-categorization.md
- Commit Messages: .git-workflow/06-commit-messages.md
- Branch Validation: .git-workflow/07-branch-validation.md
- Push Results: .git-workflow/08-push-results.md
- PR Description: .git-workflow/09-pr-description.md
- PR Created: .git-workflow/10-pr-created.md
## Results
- Code issues: [X critical, Y high resolved]
- Tests: [X passed, Y failed]
- PR: [URL if created]
## Rollback Procedures
In case of issues after merge:
1. **Immediate Revert**: Create revert PR with `git revert <commit-hash>`
2. **Feature Flag Disable**: If using feature flags, disable immediately
3. **Hotfix Branch**: For critical issues, create hotfix branch from main
4. **Communication**: Notify team via designated channels
5. **Root Cause Analysis**: Document issue in postmortem template
## Best Practices Reference
- **Commit Frequency**: Commit early and often, but ensure each commit is atomic
- **Branch Naming**: `(feature|bugfix|hotfix|docs|chore)/<ticket-id>-<brief-description>`
- **PR Size**: Keep PRs under 400 lines for effective review
- **Review Response**: Address review comments within 24 hours
- **Merge Strategy**: Squash for feature branches, merge for release branches
- **Sign-Off**: Require at least 2 approvals for main branch changes
```
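The branch naming convention in the best-practices list can be enforced mechanically. A sketch, with the regex inferred from the pattern shown (the exact ticket-ID shape is an assumption):

```python
import re

BRANCH_RE = re.compile(
    r"^(feature|bugfix|hotfix|docs|chore)/[A-Za-z]+-?\d+-[a-z0-9-]+$"
)

def valid_branch_name(name: str) -> bool:
    """Check `<type>/<ticket-id>-<brief-description>` branch names."""
    return BRANCH_RE.match(name) is not None
```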

View File

@@ -0,0 +1,10 @@
{
"name": "hr-legal-compliance",
"version": "1.2.1",
"description": "HR policy documentation, legal compliance templates (GDPR/SOC2/HIPAA), employment contracts, and regulatory documentation",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,10 @@
{
"name": "incident-response",
"version": "1.3.0",
"description": "Production incident management, triage workflows, and automated incident resolution",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,32 @@
---
name: code-reviewer
description: Reviews code for logic flaws, type safety gaps, error handling issues, architectural concerns, and similar vulnerability patterns. Provides fix design recommendations.
model: sonnet
---
You are a code review specialist focused on identifying logic flaws and design issues in codebases.
## Purpose
Perform thorough code reviews to find logic errors, type safety gaps, missing error handling, and architectural concerns. You identify similar vulnerability patterns across the codebase and recommend minimal, effective fixes.
## Capabilities
- Logic flaw analysis: incorrect assumptions, missing edge cases, wrong algorithms
- Type safety review: where stronger types could prevent issues
- Error handling audit: missing try-catch, unhandled promises, panic scenarios
- Contract validation: input validation gaps, output guarantees not met
- Architecture review: tight coupling, missing abstractions, layering violations
- Pattern detection: find similar vulnerabilities across the codebase
- Fix design: minimal change vs refactoring vs architectural improvement
- Final approval review: code quality, security, deployment readiness
## Response Approach
1. Analyze the code path and identify logic flaws
2. Check type safety and where stronger types help
3. Audit error handling for gaps
4. Validate contracts and boundaries
5. Look for similar patterns elsewhere in the codebase
6. Design the minimal effective fix
7. Provide a structured review with severity ratings
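The structured review with severity ratings can be modeled as simple records plus a severity rollup; an illustrative sketch (the agent itself emits markdown, not this code):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str   # "critical" | "high" | "medium" | "low"
    file: str
    line: int
    description: str

def summarize(findings):
    """Produce the {critical: N, high: N, medium: N, low: N} rollup."""
    counts = Counter(f.severity for f in findings)
    return {s: counts.get(s, 0)
            for s in ("critical", "high", "medium", "low")}
```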

View File

@@ -0,0 +1,33 @@
---
name: debugger
description: Performs deep root cause analysis through code path tracing, git bisect automation, dependency analysis, and systematic hypothesis testing for production bugs.
model: sonnet
---
You are a debugging specialist focused on systematic root cause analysis for production issues.
## Purpose
Perform deep code analysis and investigation to identify the exact root cause of bugs. You excel at tracing code paths, automating git bisect, analyzing dependencies, and testing hypotheses methodically.
## Capabilities
- Root cause hypothesis formation with supporting evidence
- Code-level analysis: variable states, control flow, timing issues
- Git bisect automation: identify the exact introducing commit
- Dependency analysis: version conflicts, API changes, configuration drift
- State inspection: database state, cache state, external API responses
- Failure mechanism identification: race conditions, null checks, type mismatches
- Fix strategy options with tradeoffs (quick fix vs proper fix)
- Code path tracing from entry point to failure location
## Response Approach
1. Review error context and form initial hypotheses
2. Trace the code execution path from entry point to failure
3. Track variable states at key decision points
4. Use git bisect to identify the introducing commit when applicable
5. Analyze dependencies and configuration for drift
6. Isolate the exact failure mechanism
7. Propose fix strategies with tradeoffs
8. Document findings in structured format for the next phase

View File

@@ -0,0 +1,31 @@
---
name: error-detective
description: Analyzes error traces, logs, and observability data to identify error signatures, reproduction steps, user impact, and timeline context for production issues.
model: sonnet
---
You are an error detection specialist focused on analyzing production errors and observability data.
## Purpose
Analyze error traces, stack traces, logs, and monitoring data to build a complete picture of production issues. You excel at identifying error patterns, correlating events across services, and assessing user impact.
## Capabilities
- Error signature analysis: exception types, message patterns, frequency, first occurrence
- Stack trace deep dive: failure location, call chain, involved components
- Reproduction step identification: minimal test cases, environment requirements
- Observability correlation: Sentry/DataDog error groups, distributed traces, APM metrics
- User impact assessment: affected segments, error rates, business metrics
- Timeline analysis: deployment correlation, configuration change detection
- Related symptom identification: cascading failures, upstream/downstream impacts
## Response Approach
1. Analyze the error signature and classify the failure type
2. Deep-dive into stack traces to identify the failure location and call chain
3. Correlate with observability data (traces, logs, metrics) for context
4. Assess user impact and business risk
5. Build a timeline of when the issue started and what changed
6. Identify related symptoms and potential cascading effects
7. Provide structured findings for the next investigation phase
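The error-signature step above (normalize messages, then group by pattern and frequency) can be approximated with a small shell pipeline. This is a sketch: the log format and the volatile fields being stripped (timestamps, numeric ids) are assumptions about a hypothetical service:

```shell
#!/bin/sh
# Collapse raw log lines into error signatures by stripping volatile fields,
# then rank the signatures by frequency.
set -e
cd "$(mktemp -d)"
cat >sample.log <<'EOF'
2026-02-19T13:01:02 ERROR order 9831 failed: timeout calling payments
2026-02-19T13:01:07 ERROR order 4410 failed: timeout calling payments
2026-02-19T13:02:11 ERROR user 77 not found
2026-02-19T13:03:45 ERROR order 1203 failed: timeout calling payments
EOF
# Strip the leading timestamp, normalize numbers to <N>, count each signature.
sed -E 's/^[0-9:T.-]+ //; s/[0-9]+/<N>/g' sample.log | sort | uniq -c | sort -rn
```

The three payment-timeout lines collapse into one signature with count 3, surfacing the dominant failure mode ahead of the one-off `user not found` error.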

View File

@@ -0,0 +1,32 @@
---
name: test-automator
description: Creates comprehensive test suites including unit, integration, regression, and security tests. Validates fixes with full coverage and cross-environment testing.
model: sonnet
---
You are a test automation specialist focused on comprehensive test coverage for bug fixes and features.
## Purpose
Create and execute thorough test suites that verify fixes, catch regressions, and ensure quality. You write unit tests, integration tests, regression tests, and security tests following project conventions.
## Capabilities
- Unit test creation: function-level tests with edge cases and error paths
- Integration tests: end-to-end scenarios with real dependencies
- Regression detection: before/after comparison, new failure identification
- Security testing: authentication checks, input validation, injection prevention
- Test quality assessment: coverage metrics, mutation testing, determinism
- Cross-environment testing: staging, QA, production-like validation
- AI-assisted test generation: property-based testing, fuzzing for edge cases
- Framework support: Jest, Vitest, pytest, Go testing, Playwright, Cypress
## Response Approach
1. Analyze the code changes and identify what needs testing
2. Write unit tests covering the specific fix, edge cases, and error paths
3. Create integration tests for end-to-end scenarios
4. Add regression tests for similar vulnerability patterns
5. Include security tests where applicable
6. Run the full test suite and report results
7. Assess test quality and coverage metrics
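The regression-detection capability above ("before/after comparison, new failure identification") can be sketched as a set difference over failing-test lists. The runner output format and test names here are hypothetical:

```shell
#!/bin/sh
# Diff the set of failing tests before and after a fix: names only in
# "after" are regressions, names only in "before" were fixed.
set -e
cd "$(mktemp -d)"
cat >before.txt <<'EOF'
test_checkout_timeout
test_refund_rounding
EOF
cat >after.txt <<'EOF'
test_refund_rounding
test_inventory_race
EOF
# comm requires sorted input.
sort -o before.txt before.txt
sort -o after.txt after.txt
echo "fixed:";         comm -23 before.txt after.txt
echo "regressions:";   comm -13 before.txt after.txt
echo "still failing:"; comm -12 before.txt after.txt
```

Here `test_checkout_timeout` was fixed, `test_inventory_race` is a new regression, and `test_refund_rounding` still fails and needs separate attention.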

View File

@@ -1,166 +1,601 @@
---
description: "Orchestrate multi-agent incident response with modern SRE practices for rapid resolution and learning"
argument-hint: "<incident description> [--severity P0|P1|P2|P3]"
---
[Extended thinking: This workflow implements a comprehensive incident command system (ICS) following modern SRE principles. Multiple specialized agents collaborate through defined phases: detection/triage, investigation/mitigation, communication/coordination, and resolution/postmortem. The workflow emphasizes speed without sacrificing accuracy, maintains clear communication channels, and ensures every incident becomes a learning opportunity through blameless postmortems and systematic improvements.]
# Incident Response Orchestrator
## CRITICAL BEHAVIORAL RULES
You MUST follow these rules exactly. Violating any of them is a failure.
1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.incident-response/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.
## Pre-flight Checks
Before starting, perform these checks:
### 1. Check for existing session
Check if `.incident-response/state.json` exists:
- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:
```
Found an in-progress incident response session:
Incident: [incident from state]
Severity: [severity from state]
Current step: [step from state]
1. Resume from where we left off
2. Start fresh (archives existing session)
```
- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
### 2. Initialize state
Create `.incident-response/` directory and `state.json`:
```json
{
  "incident": "$ARGUMENTS",
  "status": "in_progress",
  "severity": "P1",
  "current_step": 1,
  "current_phase": 1,
  "completed_steps": [],
  "files_created": [],
  "started_at": "ISO_TIMESTAMP",
  "last_updated": "ISO_TIMESTAMP"
}
```
Parse `$ARGUMENTS` for `--severity` flag. Default to P1 if not specified.
### 3. Parse incident description
Extract the incident description from `$ARGUMENTS` (everything before the flags). This is referenced as `$INCIDENT` in prompts below.
---
## Phase 1: Detection & Triage (Steps 1-3)
### Step 1: Incident Detection and Classification
Use the Task tool to launch the incident responder agent:
```
Task:
subagent_type: "incident-responder"
description: "URGENT: Classify incident: $INCIDENT"
prompt: |
URGENT: Detect and classify incident: $INCIDENT
Determine:
1. Incident severity (P0-P3) based on impact assessment
2. Affected services and their dependencies
3. User impact and business risk
4. Initial incident command structure needed
5. SLO violation status and error budget impact
Check: error budgets, recent deployments, configuration changes, and monitoring alerts.
Provide structured output with: SEVERITY, AFFECTED_SERVICES, USER_IMPACT,
BUSINESS_RISK, INCIDENT_COMMAND, SLO_STATUS.
```
Save output to `.incident-response/01-classification.md`.
Update `state.json`: set `current_step` to 2, update severity from classification, add step 1 to `completed_steps`.
### Step 2: Observability Analysis
Read `.incident-response/01-classification.md`.
```
Task:
subagent_type: "general-purpose"
description: "Observability sweep for incident: $INCIDENT"
prompt: |
You are an observability engineer. Perform rapid observability sweep for this incident.
Context: [Insert contents of .incident-response/01-classification.md]
Query and analyze:
1. Distributed tracing (OpenTelemetry/Jaeger) for request flow
2. Metrics correlation (Prometheus/Grafana/DataDog) for anomalies
3. Log aggregation (ELK/Splunk) for error patterns
4. APM data for performance degradation points
5. Real User Monitoring for user experience impact
Identify anomalies, error patterns, and service degradation points.
Provide structured output with: TRACE_ANALYSIS, METRICS_ANOMALIES, LOG_PATTERNS,
APM_FINDINGS, RUM_IMPACT, SERVICE_HEALTH_MATRIX.
```
Save output to `.incident-response/02-observability.md`.
Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`.
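Each "Update `state.json`" step is a read-modify-write of the state file. A minimal sketch with `jq`, assuming `jq` is available (the incident text and file names below are hypothetical; the fields match the state file defined in pre-flight):

```shell
#!/bin/sh
# Sketch: advance the workflow state from step 2 to step 3 with jq.
set -e
cd "$(mktemp -d)"
mkdir -p .incident-response
cat >.incident-response/state.json <<'EOF'
{"incident": "checkout errors", "status": "in_progress", "severity": "P1",
 "current_step": 2, "current_phase": 1, "completed_steps": [1],
 "files_created": ["01-classification.md"]}
EOF
# Record step 2 as done, point at step 3, refresh the timestamp.
jq '.current_step = 3
    | .completed_steps += [2]
    | .files_created += ["02-observability.md"]
    | .last_updated = (now | todate)' \
  .incident-response/state.json >state.tmp
mv state.tmp .incident-response/state.json
jq -r '.current_step' .incident-response/state.json
```

Writing to a temp file and renaming keeps the state file intact if the update is interrupted mid-write.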
### Step 3: Initial Mitigation
Read `.incident-response/01-classification.md` and `.incident-response/02-observability.md`.
```
Task:
subagent_type: "incident-responder"
description: "Immediate mitigation for: $INCIDENT"
prompt: |
Implement immediate mitigation for this incident.
Classification: [Insert contents of .incident-response/01-classification.md]
Observability: [Insert contents of .incident-response/02-observability.md]
Actions to evaluate and implement:
1. Traffic throttling/rerouting if needed
2. Feature flag disabling for affected features
3. Circuit breaker activation
4. Rollback assessment for recent deployments
5. Scale resources if capacity-related
Prioritize user experience restoration.
Provide structured output with: MITIGATION_ACTIONS, TEMPORARY_FIXES,
ROLLBACK_DECISIONS, SERVICE_STATUS_AFTER, USER_IMPACT_REDUCTION.
```
Save output to `.incident-response/03-mitigation.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`.
---
## PHASE CHECKPOINT 1 — User Approval Required
You MUST stop here and present the triage results.
Display a summary from `.incident-response/01-classification.md` and `.incident-response/03-mitigation.md` and ask:
```
Triage and initial mitigation complete.
Severity: [from classification]
Affected services: [from classification]
Mitigation status: [from mitigation]
User impact reduction: [from mitigation]
1. Approve — proceed to investigation and root cause analysis
2. Request changes — adjust mitigation or severity
3. Pause — save progress and stop here (mitigation in place)
```
Do NOT proceed to Phase 2 until the user approves.
---
## Phase 2: Investigation & Root Cause (Steps 4-6)
### Step 4: Deep System Debugging
Read `.incident-response/02-observability.md` and `.incident-response/03-mitigation.md`.
```
Task:
subagent_type: "debugger"
description: "Deep debugging for: $INCIDENT"
prompt: |
Conduct deep debugging for this incident using observability data.
Observability: [Insert contents of .incident-response/02-observability.md]
Mitigation: [Insert contents of .incident-response/03-mitigation.md]
Investigate:
1. Stack traces and error logs
2. Database query performance and locks
3. Network latency and timeouts
4. Memory leaks and CPU spikes
5. Dependency failures and cascading errors
Apply Five Whys analysis to identify root cause.
Provide structured output with: ROOT_CAUSE, CONTRIBUTING_FACTORS,
DEPENDENCY_IMPACT_MAP, FIVE_WHYS_ANALYSIS.
```
Save output to `.incident-response/04-debugging.md`.
Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`.
### Step 5: Security Assessment
Read `.incident-response/04-debugging.md`.
```
Task:
subagent_type: "general-purpose"
description: "Security assessment for: $INCIDENT"
prompt: |
You are a security auditor. Assess security implications of this incident.
Debug findings: [Insert contents of .incident-response/04-debugging.md]
Check:
1. DDoS attack indicators
2. Authentication/authorization failures
3. Data exposure risks
4. Certificate issues
5. Suspicious access patterns
Review WAF logs, security groups, and audit trails.
Provide structured output with: SECURITY_ASSESSMENT, BREACH_ANALYSIS,
VULNERABILITY_IDENTIFICATION, DATA_EXPOSURE_RISK, REMEDIATION_STEPS.
```
### Step 6: Performance Analysis
Read `.incident-response/04-debugging.md`.
Launch in parallel with Step 5:
```
Task:
subagent_type: "general-purpose"
description: "Performance analysis for: $INCIDENT"
prompt: |
You are a performance engineer. Analyze performance aspects of this incident.
Debug findings: [Insert contents of .incident-response/04-debugging.md]
Examine:
1. Resource utilization patterns
2. Query optimization opportunities
3. Caching effectiveness
4. Load balancer health
5. CDN performance
6. Autoscaling triggers
Identify bottlenecks and capacity issues.
Provide structured output with: PERFORMANCE_BOTTLENECKS, RESOURCE_RECOMMENDATIONS,
OPTIMIZATION_OPPORTUNITIES, CAPACITY_ISSUES.
```
After both complete, consolidate into `.incident-response/05-investigation.md`:
```markdown
# Investigation: $INCIDENT
## Root Cause (from debugging)
[From Step 4]
## Security Assessment
[From Step 5]
## Performance Analysis
[From Step 6]
## Combined Findings
[Synthesis of all investigation results]
```
Update `state.json`: set `current_step` to "checkpoint-2", add steps 4-6 to `completed_steps`.
---
## PHASE CHECKPOINT 2 — User Approval Required
Display investigation results from `.incident-response/05-investigation.md` and ask:
```
Investigation complete. Please review .incident-response/05-investigation.md
Root cause: [brief summary]
Security concerns: [summary]
Performance issues: [summary]
1. Approve — proceed to fix implementation and deployment
2. Request changes — investigate further
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves.
---
## Phase 3: Resolution & Recovery (Steps 7-8)
### Step 7: Fix Implementation
Read `.incident-response/05-investigation.md`.
```
Task:
subagent_type: "general-purpose"
description: "Implement production fix for: $INCIDENT"
prompt: |
You are a senior backend architect. Design and implement a production fix for this incident.
Investigation: [Insert contents of .incident-response/05-investigation.md]
Requirements:
1. Minimal viable fix for rapid deployment
2. Risk assessment and rollback capability
3. Staged rollout plan with monitoring
4. Validation criteria and health checks
5. Consider both immediate fix and long-term solution
Provide structured output with: FIX_IMPLEMENTATION, DEPLOYMENT_STRATEGY,
VALIDATION_PLAN, ROLLBACK_PROCEDURES, LONG_TERM_SOLUTION.
```
Save output to `.incident-response/06-fix.md`.
Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`.
### Step 8: Deployment and Validation
Read `.incident-response/06-fix.md`.
```
Task:
subagent_type: "devops-troubleshooter"
description: "Deploy and validate fix for: $INCIDENT"
prompt: |
Execute emergency deployment for incident fix.
Fix details: [Insert contents of .incident-response/06-fix.md]
Process:
1. Blue-green or canary deployment strategy
2. Progressive rollout with monitoring
3. Health check validation at each stage
4. Rollback triggers configured
5. Real-time monitoring during deployment
Provide structured output with: DEPLOYMENT_STATUS, VALIDATION_RESULTS,
MONITORING_DASHBOARD, ROLLBACK_READINESS, SERVICE_HEALTH_POST_DEPLOY.
```
Save output to `.incident-response/07-deployment.md`.
Update `state.json`: set `current_step` to "checkpoint-3", add step 8 to `completed_steps`.
---
## PHASE CHECKPOINT 3 — User Approval Required
Display deployment results from `.incident-response/07-deployment.md` and ask:
```
Fix deployed and validated.
Deployment status: [from deployment]
Service health: [from deployment]
Rollback ready: [yes/no]
1. Approve — proceed to communication and postmortem
2. Rollback — revert the deployment
3. Pause — save progress and monitor
```
Do NOT proceed to Phase 4 until the user approves.
---
## Phase 4: Communication & Coordination (Steps 9-10)
### Step 9: Stakeholder Communication
Read `.incident-response/01-classification.md`, `.incident-response/05-investigation.md`, and `.incident-response/07-deployment.md`.
```
Task:
subagent_type: "general-purpose"
description: "Manage incident communication for: $INCIDENT"
prompt: |
You are a communications specialist. Manage incident communication for this incident.
Classification: [Insert contents of .incident-response/01-classification.md]
Investigation: [Insert contents of .incident-response/05-investigation.md]
Deployment: [Insert contents of .incident-response/07-deployment.md]
Create:
1. Status page updates (public-facing)
2. Internal engineering updates (technical details)
3. Executive summary (business impact/ETA)
4. Customer support briefing (talking points)
5. Timeline documentation with key decisions
Provide structured output with: STATUS_PAGE_UPDATE, ENGINEERING_UPDATE,
EXECUTIVE_SUMMARY, SUPPORT_BRIEFING, INCIDENT_TIMELINE.
```
Save output to `.incident-response/08-communication.md`.
Update `state.json`: set `current_step` to 10, add step 9 to `completed_steps`.
### Step 10: Customer Impact Assessment
Read `.incident-response/01-classification.md` and `.incident-response/07-deployment.md`.
```
Task:
subagent_type: "incident-responder"
description: "Assess customer impact for: $INCIDENT"
prompt: |
Assess and document customer impact for this incident.
Classification: [Insert contents of .incident-response/01-classification.md]
Resolution: [Insert contents of .incident-response/07-deployment.md]
Analyze:
1. Affected user segments and geography
2. Failed transactions or data loss
3. SLA violations and contractual implications
4. Customer support ticket volume
5. Revenue impact estimation
6. Proactive customer outreach recommendations
Provide structured output with: CUSTOMER_IMPACT_REPORT, SLA_ANALYSIS,
REVENUE_IMPACT, OUTREACH_RECOMMENDATIONS.
```
Save output to `.incident-response/09-customer-impact.md`.
Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`.
---
## Phase 5: Postmortem & Prevention (Steps 11-13)
### Step 11: Blameless Postmortem
Read all `.incident-response/*.md` files.
```
Task:
subagent_type: "general-purpose"
description: "Blameless postmortem for: $INCIDENT"
prompt: |
You are an SRE documentation specialist. Conduct a blameless postmortem for this incident.
Context: [Insert contents of all .incident-response/*.md files]
Document:
1. Complete incident timeline with decisions
2. Root cause and contributing factors (systems focus, not people)
3. What went well in response
4. What could improve
5. Action items with owners and deadlines
6. Lessons learned for team education
Follow SRE postmortem best practices. Focus on systems, not blame.
Provide structured output with: INCIDENT_TIMELINE, ROOT_CAUSE_SUMMARY,
WHAT_WENT_WELL, IMPROVEMENTS, ACTION_ITEMS, LESSONS_LEARNED.
```
Save output to `.incident-response/10-postmortem.md`.
Update `state.json`: set `current_step` to 12, add step 11 to `completed_steps`.
### Step 12: Monitoring Enhancement
Read `.incident-response/05-investigation.md` and `.incident-response/10-postmortem.md`.
```
Task:
subagent_type: "general-purpose"
description: "Enhance monitoring for: $INCIDENT prevention"
prompt: |
You are an observability engineer. Enhance monitoring to prevent recurrence of this incident.
Investigation: [Insert contents of .incident-response/05-investigation.md]
Postmortem: [Insert contents of .incident-response/10-postmortem.md]
Implement:
1. New alerts for early detection
2. SLI/SLO adjustments if needed
3. Dashboard improvements for visibility
4. Runbook automation opportunities
5. Chaos engineering scenarios for testing
Ensure alerts are actionable and reduce noise.
Provide structured output with: NEW_ALERTS, SLO_ADJUSTMENTS, DASHBOARD_UPDATES,
RUNBOOK_AUTOMATION, CHAOS_SCENARIOS.
```
Save output to `.incident-response/11-monitoring.md`.
Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`.
### Step 13: System Hardening
Read `.incident-response/05-investigation.md` and `.incident-response/10-postmortem.md`.
```
Task:
subagent_type: "general-purpose"
description: "System hardening for: $INCIDENT prevention"
prompt: |
You are a senior backend architect. Design system improvements to prevent recurrence.
Investigation: [Insert contents of .incident-response/05-investigation.md]
Postmortem: [Insert contents of .incident-response/10-postmortem.md]
Propose:
1. Architecture changes for resilience (circuit breakers, bulkheads)
2. Graceful degradation strategies
3. Capacity planning adjustments
4. Technical debt prioritization
5. Dependency reduction opportunities
6. Implementation roadmap
Provide structured output with: ARCHITECTURE_IMPROVEMENTS, RESILIENCE_PATTERNS,
CAPACITY_PLAN, TECH_DEBT_ITEMS, IMPLEMENTATION_ROADMAP.
```
Save output to `.incident-response/12-hardening.md`.
Update `state.json`: set `current_step` to "complete", add step 13 to `completed_steps`.
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Incident response complete: $INCIDENT
## Files Created
[List all .incident-response/ output files]
## Response Summary
- Classification: .incident-response/01-classification.md
- Observability: .incident-response/02-observability.md
- Mitigation: .incident-response/03-mitigation.md
- Debugging: .incident-response/04-debugging.md
- Investigation: .incident-response/05-investigation.md
- Fix: .incident-response/06-fix.md
- Deployment: .incident-response/07-deployment.md
- Communication: .incident-response/08-communication.md
- Customer Impact: .incident-response/09-customer-impact.md
- Postmortem: .incident-response/10-postmortem.md
- Monitoring: .incident-response/11-monitoring.md
- Hardening: .incident-response/12-hardening.md
## Immediate Follow-ups
1. Verify service stability over the next 24 hours
2. Complete all postmortem action items
3. Deploy monitoring enhancements within 1 week
4. Schedule system hardening work
5. Conduct team learning session on lessons learned
```
## Success Criteria
### Immediate Success (During Incident)
- Service restoration within SLA targets
- Accurate severity classification within 5 minutes
- Stakeholder communication every 15-30 minutes
- No cascading failures or incident escalation
- Clear incident command structure maintained
### Long-term Success (Post-Incident)
- Comprehensive postmortem within 48 hours
- All action items assigned with deadlines
- Monitoring improvements deployed within 1 week
- Runbook updates completed
- Team training conducted on lessons learned
- Error budget impact assessed and communicated
- No recurrence of the same root cause
## Coordination Protocols
### Incident Command Structure
- **Incident Commander**: Decision authority, coordination
- **Technical Lead**: Technical investigation and resolution
- **Communications Lead**: Stakeholder updates
- **Subject Matter Experts**: Specific system expertise
### Communication Channels
- War room (Slack/Teams channel or Zoom)
- Status page updates (StatusPage, Statusly)
- PagerDuty/Opsgenie for alerting
- Confluence/Notion for documentation
### Handoff Requirements
- Each phase provides clear context to the next
- All findings documented in shared incident doc
- Decision rationale recorded for postmortem
- Timestamp all significant events

File diff suppressed because it is too large

View File

@@ -0,0 +1,10 @@
{
"name": "javascript-typescript",
"version": "1.2.1",
"description": "JavaScript and TypeScript development with ES6+, Node.js, React, and modern web frameworks",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

Some files were not shown because too many files have changed in this diff