Compare commits

...

8 Commits

Author SHA1 Message Date
Seth Hobson
a5ab5d8f31 chore(agent-teams): bump to v1.0.2 2026-02-05 17:42:30 -05:00
Seth Hobson
598ea85e7f fix(agent-teams): simplify plugin.json and marketplace entry to match conductor patterns
Strip plugin.json to minimal fields (name, version, description, author, license).
Remove commands/agents/skills arrays, keywords, repository, and strict from marketplace entry.
2026-02-05 17:41:00 -05:00
Seth Hobson
fb9eba62b2 fix(agent-teams): remove Context7 MCP dependency, align frontmatter with conductor patterns, bump to v1.0.1
Remove .mcp.json to eliminate external MCP dependency that likely caused plugin load failure.
Add tools: field to all agents, version: field to all skills, matching conductor plugin patterns.
2026-02-05 17:30:35 -05:00
Seth Hobson
b187ce780d docs(agent-teams): use official /plugin install command instead of --plugin-dir 2026-02-05 17:16:29 -05:00
Seth Hobson
1f46cab1f6 docs(agent-teams): add link to official Anthropic Agent Teams docs 2026-02-05 17:14:55 -05:00
Seth Hobson
d0a57d51b5 docs: bump marketplace to v1.4.0, update README with Agent Teams and Conductor highlights 2026-02-05 17:12:59 -05:00
Seth Hobson
81d53eb5d6 chore: bump marketplace to v1.4.0 (73 plugins, 112 agents, 146 skills) 2026-02-05 17:11:08 -05:00
Seth Hobson
0752775afc feat(agent-teams): add plugin for multi-agent team orchestration
New plugin with 7 presets (review, debug, feature, fullstack, research,
security, migration), 4 specialized agents, 7 slash commands, 6 skills
with reference docs, and Context7 MCP integration for research teams.
2026-02-05 17:10:02 -05:00
30 changed files with 3094 additions and 21 deletions

View File

@@ -6,8 +6,8 @@
"url": "https://github.com/wshobson"
},
"metadata": {
-"description": "Production-ready workflow orchestration with 72 focused plugins, 108 specialized agents, and 140 skills - optimized for granular installation and minimal token usage",
-"version": "1.3.7"
+"description": "Production-ready workflow orchestration with 73 focused plugins, 112 specialized agents, and 146 skills - optimized for granular installation and minimal token usage",
+"version": "1.4.0"
},
"plugins": [
{
@@ -1928,6 +1928,19 @@
"category": "development",
"homepage": "https://github.com/wshobson/agents",
"license": "MIT"
},
{
"name": "agent-teams",
"description": "Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams",
"version": "1.0.2",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"source": "./plugins/agent-teams",
"category": "workflows",
"homepage": "https://github.com/wshobson/agents",
"license": "MIT"
}
]
}

View File

@@ -4,26 +4,26 @@
[![Run in Smithery](https://smithery.ai/badge/skills/wshobson)](https://smithery.ai/skills?ns=wshobson&utm_source=github&utm_medium=badge)
-> **🎯 Agent Skills Enabled** — 129 specialized skills extend Claude's capabilities across plugins with progressive disclosure
+> **🎯 Agent Skills Enabled** — 146 specialized skills extend Claude's capabilities across plugins with progressive disclosure
-A comprehensive production-ready system combining **108 specialized AI agents**, **15 multi-agent workflow orchestrators**, **129 agent skills**, and **72 development tools** organized into **72 focused, single-purpose plugins** for [Claude Code](https://docs.claude.com/en/docs/claude-code/overview).
+A comprehensive production-ready system combining **112 specialized AI agents**, **16 multi-agent workflow orchestrators**, **146 agent skills**, and **79 development tools** organized into **73 focused, single-purpose plugins** for [Claude Code](https://docs.claude.com/en/docs/claude-code/overview).
## Overview
This unified repository provides everything needed for intelligent automation and multi-agent orchestration across modern software development:
-- **72 Focused Plugins** - Granular, single-purpose plugins optimized for minimal token usage and composability
-- **108 Specialized Agents** - Domain experts with deep knowledge across architecture, languages, infrastructure, quality, data/AI, documentation, business operations, and SEO
-- **129 Agent Skills** - Modular knowledge packages with progressive disclosure for specialized expertise
-- **15 Workflow Orchestrators** - Multi-agent coordination systems for complex operations like full-stack development, security hardening, ML pipelines, and incident response
-- **72 Development Tools** - Optimized utilities including project scaffolding, security scanning, test automation, and infrastructure setup
+- **73 Focused Plugins** - Granular, single-purpose plugins optimized for minimal token usage and composability
+- **112 Specialized Agents** - Domain experts with deep knowledge across architecture, languages, infrastructure, quality, data/AI, documentation, business operations, and SEO
+- **146 Agent Skills** - Modular knowledge packages with progressive disclosure for specialized expertise
+- **16 Workflow Orchestrators** - Multi-agent coordination systems for complex operations like full-stack development, security hardening, ML pipelines, and incident response
+- **79 Development Tools** - Optimized utilities including project scaffolding, security scanning, test automation, and infrastructure setup
### Key Features
-- **Granular Plugin Architecture**: 72 focused plugins optimized for minimal token usage
-- **Comprehensive Tooling**: 72 development tools including test generation, scaffolding, and security scanning
+- **Granular Plugin Architecture**: 73 focused plugins optimized for minimal token usage
+- **Comprehensive Tooling**: 79 development tools including test generation, scaffolding, and security scanning
- **100% Agent Coverage**: All plugins include specialized agents
-- **Agent Skills**: 129 specialized skills following progressive disclosure for token efficiency
+- **Agent Skills**: 146 specialized skills following progressive disclosure for token efficiency
- **Clear Organization**: 23 categories with 1-6 plugins each for easy discovery
- **Efficient Design**: Average 3.4 components per plugin (follows Anthropic's 2-8 pattern)
@@ -49,7 +49,7 @@ Add this marketplace to Claude Code:
/plugin marketplace add wshobson/agents
```
-This makes all 72 plugins available for installation, but **does not load any agents or tools** into your context.
+This makes all 73 plugins available for installation, but **does not load any agents or tools** into your context.
### Step 2: Install Plugins
@@ -114,9 +114,9 @@ rm -rf ~/.claude/plugins/cache/claude-code-workflows && rm ~/.claude/plugins/ins
### Core Guides
-- **[Plugin Reference](docs/plugins.md)** - Complete catalog of all 72 plugins
-- **[Agent Reference](docs/agents.md)** - All 108 agents organized by category
-- **[Agent Skills](docs/agent-skills.md)** - 129 specialized skills with progressive disclosure
+- **[Plugin Reference](docs/plugins.md)** - Complete catalog of all 73 plugins
+- **[Agent Reference](docs/agents.md)** - All 112 agents organized by category
+- **[Agent Skills](docs/agent-skills.md)** - 146 specialized skills with progressive disclosure
- **[Usage Guide](docs/usage.md)** - Commands, workflows, and best practices
- **[Architecture](docs/architecture.md)** - Design principles and patterns
@@ -130,7 +130,44 @@ rm -rf ~/.claude/plugins/cache/claude-code-workflows && rm ~/.claude/plugins/ins
## What's New
### Agent Skills (140 skills across 20 plugins)
### Agent Teams Plugin (NEW)
Orchestrate multi-agent teams for parallel workflows using Claude Code's experimental Agent Teams feature:
```bash
/plugin install agent-teams@claude-code-workflows
```
- **7 Team Presets** — `review`, `debug`, `feature`, `fullstack`, `research`, `security`, `migration`
- **Parallel Code Review** — `/team-review src/ --reviewers security,performance,architecture`
- **Hypothesis-Driven Debugging** — `/team-debug "API returns 500" --hypotheses 3`
- **Parallel Feature Development** — `/team-feature "Add OAuth2 auth" --plan-first`
- **Research Teams** — Parallel investigation across codebase and web sources
- **Security Audits** — 4 reviewers covering OWASP, auth, dependencies, and secrets
- **Migration Support** — Coordinated migration with parallel streams and correctness verification
Includes 4 specialized agents, 7 commands, and 6 skills with reference documentation.
[→ View agent-teams documentation](plugins/agent-teams/README.md)
### Conductor Plugin — Context-Driven Development
Transforms Claude Code into a project management tool with a structured **Context → Spec & Plan → Implement** workflow:
```bash
/plugin install conductor@claude-code-workflows
```
- **Interactive Setup** — `/conductor:setup` creates product vision, tech stack, workflow rules, and style guides
- **Track-Based Development** — `/conductor:new-track` generates specifications and phased implementation plans
- **TDD Workflow** — `/conductor:implement` executes tasks with verification checkpoints
- **Semantic Revert** — `/conductor:revert` undoes work by logical unit (track, phase, or task)
- **State Persistence** — Resume setup across sessions with persistent project context
- **3 Skills** — Context-driven development, track management, workflow patterns
[→ View Conductor documentation](plugins/conductor/README.md)
### Agent Skills (146 skills across 21 plugins)
Specialized knowledge packages following Anthropic's progressive disclosure architecture:
@@ -246,11 +283,11 @@ Uses kubernetes-architect agent with 4 specialized skills for production-grade c
## Plugin Categories
-**23 categories, 72 plugins:**
+**24 categories, 73 plugins:**
- 🎨 **Development** (4) - debugging, backend, frontend, multi-platform
- 📚 **Documentation** (3) - code docs, API specs, diagrams, C4 architecture
-- 🔄 **Workflows** (4) - git, full-stack, TDD, **Conductor** (context-driven development)
+- 🔄 **Workflows** (5) - git, full-stack, TDD, **Conductor** (context-driven development), **Agent Teams** (multi-agent orchestration)
- **Testing** (2) - unit testing, TDD workflows
- 🔍 **Quality** (3) - code review, comprehensive review, performance
- 🤖 **AI & ML** (4) - LLM apps, agent orchestration, context, MLOps
@@ -278,7 +315,7 @@ Uses kubernetes-architect agent with 4 specialized skills for production-grade c
- **Single responsibility** - Each plugin does one thing well
- **Minimal token usage** - Average 3.4 components per plugin
- **Composable** - Mix and match for complex workflows
-- **100% coverage** - All 108 agents accessible across plugins
+- **100% coverage** - All 112 agents accessible across plugins
### Progressive Disclosure (Skills)
@@ -293,7 +330,7 @@ Three-tier architecture for token efficiency:
```
claude-agents/
├── .claude-plugin/
-│ └── marketplace.json # 72 plugins
+│ └── marketplace.json # 73 plugins
├── plugins/
│ ├── python-development/
│ │ ├── agents/ # 3 Python experts

View File

@@ -0,0 +1,10 @@
{
"name": "agent-teams",
"version": "1.0.2",
"description": "Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams",
"author": {
"name": "Seth Hobson",
"email": "seth@major7apps.com"
},
"license": "MIT"
}

View File

@@ -0,0 +1,153 @@
# Agent Teams Plugin
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's experimental [Agent Teams](https://code.claude.com/docs/en/agent-teams) feature.
## Setup
### Prerequisites
1. Enable the experimental Agent Teams feature:
```bash
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
```
2. Configure teammate display mode in your `~/.claude/settings.json`:
```json
{
"teammateMode": "tmux"
}
```
Available display modes:
- `"tmux"` — Each teammate runs in a tmux pane (recommended)
- `"iterm2"` — Each teammate gets an iTerm2 tab (macOS only)
- `"in-process"` — Teammates run in the same process (default)
### Installation
First, add the marketplace (if you haven't already):
```
/plugin marketplace add wshobson/agents
```
Then install the plugin:
```
/plugin install agent-teams@claude-code-workflows
```
## Features
- **Preset Teams** — Spawn pre-configured teams for common workflows (review, debug, feature, fullstack, research, security, migration)
- **Multi-Reviewer Code Review** — Parallel review across security, performance, architecture, testing, and accessibility dimensions
- **Hypothesis-Driven Debugging** — Competing hypothesis investigation with evidence-based root cause analysis
- **Parallel Feature Development** — Coordinated multi-agent implementation with file ownership boundaries
- **Parallel Research** — Multiple Explore agents investigating different questions or codebase areas simultaneously
- **Security Audit** — Comprehensive parallel security review across OWASP, auth, dependencies, and configuration
- **Migration Support** — Coordinated codebase migration with parallel implementation streams and correctness verification
- **Task Coordination** — Dependency-aware task management with workload balancing
- **Team Communication** — Structured messaging protocols for efficient agent collaboration
## Commands
| Command | Description |
| ---------------- | ---------------------------------------------------------- |
| `/team-spawn` | Spawn a team using presets or custom composition |
| `/team-status` | Display team members, tasks, and progress |
| `/team-shutdown` | Gracefully shut down a team and clean up resources |
| `/team-review` | Multi-reviewer parallel code review |
| `/team-debug` | Competing hypotheses debugging with parallel investigation |
| `/team-feature` | Parallel feature development with file ownership |
| `/team-delegate` | Task delegation dashboard and workload management |
## Agents
| Agent | Role | Color |
| ------------------ | --------------------------------------------------------------------------------- | ------ |
| `team-lead` | Team orchestrator — decomposes work, manages lifecycle, synthesizes results | Blue |
| `team-reviewer` | Multi-dimensional code reviewer — operates on assigned review dimension | Green |
| `team-debugger` | Hypothesis investigator — gathers evidence to confirm/falsify assigned hypothesis | Red |
| `team-implementer` | Parallel builder — implements within strict file ownership boundaries | Yellow |
## Skills
| Skill | Description |
| ------------------------------ | ------------------------------------------------------------------------ |
| `team-composition-patterns` | Team sizing heuristics, preset compositions, agent type selection |
| `task-coordination-strategies` | Task decomposition, dependency graphs, workload monitoring |
| `parallel-debugging` | Hypothesis generation, evidence collection, result arbitration |
| `multi-reviewer-patterns` | Review dimension allocation, finding deduplication, severity calibration |
| `parallel-feature-development` | File ownership strategies, conflict avoidance, integration patterns |
| `team-communication-protocols` | Message type selection, plan approval workflow, shutdown protocol |
## Quick Start
### Multi-Reviewer Code Review
```
/team-review src/ --reviewers security,performance,architecture
```
Spawns 3 reviewers, each analyzing the codebase from their assigned dimension, then consolidates findings into a prioritized report.
### Hypothesis-Driven Debugging
```
/team-debug "API returns 500 on POST /users with valid payload" --hypotheses 3
```
Generates 3 competing hypotheses, spawns investigators for each, collects evidence, and presents the most likely root cause with a fix.
### Parallel Feature Development
```
/team-feature "Add user authentication with OAuth2" --team-size 3 --plan-first
```
Decomposes the feature into work streams with file ownership boundaries, gets your approval, then spawns implementers to build in parallel.
### Parallel Research
```
/team-spawn research --name codebase-research
```
Spawns 3 researchers to investigate different aspects in parallel — across your codebase (Grep/Read) and the web (WebSearch/WebFetch). Each reports findings with citations.
### Security Audit
```
/team-spawn security
```
Spawns 4 security reviewers covering OWASP vulnerabilities, auth/access control, dependency supply chain, and secrets/configuration. Produces a consolidated security report.
### Codebase Migration
```
/team-spawn migration --name react-hooks-migration
```
Spawns a lead to plan the migration, 2 implementers to migrate code in parallel streams, and a reviewer to verify correctness of the migrated code.
### Custom Team
```
/team-spawn custom --name my-team --members 4
```
Interactively configure team composition with custom roles and agent types.
## Best Practices
1. **Start with presets** — Use `/team-spawn review`, `/team-spawn debug`, or `/team-spawn feature` before building custom teams
2. **Use `--plan-first`** — For feature development, always review the decomposition before spawning implementers
3. **File ownership is critical** — Never assign the same file to multiple implementers; use interface contracts at boundaries
4. **Monitor with `/team-status`** — Check progress regularly and use `/team-delegate --rebalance` if work is uneven
5. **Graceful shutdown** — Always use `/team-shutdown` rather than killing processes manually
6. **Keep teams small** — 2-4 teammates is optimal; larger teams increase coordination overhead
7. **Use Shift+Tab** — Claude Code's built-in delegate mode complements these commands for ad-hoc delegation

View File

@@ -0,0 +1,83 @@
---
name: team-debugger
description: Hypothesis-driven debugging investigator that pursues one assigned hypothesis, gathering evidence to confirm or falsify it with file:line citations and confidence levels. Use when debugging complex issues with multiple potential root causes.
tools: Read, Glob, Grep, Bash
model: opus
color: red
---
You are a hypothesis-driven debugging investigator. You are assigned one specific hypothesis about a bug's root cause and must gather evidence to confirm or falsify it.
## Core Mission
Investigate your assigned hypothesis systematically. Collect concrete evidence from the codebase, logs, and runtime behavior. Report your findings with confidence levels and causal chains so the team lead can compare hypotheses and determine the true root cause.
## Investigation Protocol
### Step 1: Understand the Hypothesis
- Parse the assigned hypothesis statement
- Identify what would need to be true for this hypothesis to be correct
- List the observable consequences if this hypothesis is the root cause
### Step 2: Define Evidence Criteria
- What evidence would CONFIRM this hypothesis? (necessary conditions)
- What evidence would FALSIFY this hypothesis? (contradicting observations)
- What evidence would be AMBIGUOUS? (consistent with multiple hypotheses)
### Step 3: Gather Primary Evidence
- Search for the specific code paths, data flows, or configurations implied by the hypothesis
- Read relevant source files and trace execution paths
- Check git history for recent changes in suspected areas
### Step 4: Gather Supporting Evidence
- Look for related error messages, log patterns, or stack traces
- Check for similar bugs in the codebase or issue tracker
- Examine test coverage for the suspected area
### Step 5: Test the Hypothesis
- If possible, construct a minimal reproduction scenario
- Identify the exact conditions under which the hypothesis predicts failure
- Check if those conditions match the reported behavior
### Step 6: Assess Confidence
- Rate confidence: High (>80%), Medium (50-80%), Low (<50%)
- List confirming evidence with file:line citations
- List contradicting evidence with file:line citations
- Note any gaps in evidence that prevent higher confidence
### Step 7: Report Findings
- Deliver structured report to team lead
- Include causal chain if hypothesis is confirmed
- Suggest specific fix if root cause is established
- Recommend additional investigation if confidence is low
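As a sketch, a completed report for a falsified hypothesis might look like the following (the format, file paths, and line numbers are illustrative, not mandated by the agent):

```
### Hypothesis: Stale cache entry served after user deletion

**Verdict**: Falsified
**Confidence**: High (>80%)

**Confirming evidence**: none found

**Contradicting evidence**:
- `src/cache/users.ts:57`: cache entry is invalidated before the DELETE response is sent
- `tests/cache.test.ts:112`: a passing regression test covers this exact path

**Gaps**: concurrent DELETE/GET interleaving was not exercised

**Recommendation**: deprioritize this hypothesis; evidence reported to the team lead points elsewhere
```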
## Evidence Standards
1. **Always cite file:line** — Every claim must reference a specific location in the codebase
2. **Show the causal chain** — Connect the hypothesis to the symptom through a chain of cause and effect
3. **Report confidence honestly** — Do not overstate certainty; distinguish confirmed from suspected
4. **Include contradicting evidence** — Report evidence that weakens your hypothesis, not just evidence that supports it
5. **Scope your claims** — Be precise about what you've verified vs what you're inferring
## Scope Discipline
- Stay focused on your assigned hypothesis — do not investigate other potential causes
- If you discover evidence pointing to a different root cause, report it but do not change your investigation focus
- Do not propose fixes for issues outside your hypothesis scope
- Communicate scope concerns to the team lead via message
## Behavioral Traits
- Methodical and evidence-driven — never jumps to conclusions
- Honest about uncertainty — reports low confidence when evidence is insufficient
- Focused on assigned hypothesis — resists the urge to chase tangential leads
- Cites every claim with specific file:line references
- Distinguishes correlation from causation
- Reports negative results (falsified hypotheses) as valuable findings

View File

@@ -0,0 +1,85 @@
---
name: team-implementer
description: Parallel feature builder that implements components within strict file ownership boundaries, coordinating at integration points via messaging. Use when building features in parallel across multiple agents with file ownership coordination.
tools: Read, Write, Edit, Glob, Grep, Bash
model: opus
color: yellow
---
You are a parallel feature builder. You implement components within your assigned file ownership boundaries, coordinating with other implementers at integration points.
## Core Mission
Build your assigned component or feature slice within strict file ownership boundaries. Write clean, tested code that integrates with other teammates' work through well-defined interfaces. Communicate proactively at integration points.
## File Ownership Protocol
1. **Only modify files assigned to you** — Check your task description for the explicit list of owned files/directories
2. **Never touch shared files** — If you need changes to a shared file, message the team lead
3. **Create new files only within your ownership boundary** — New files in your assigned directories are fine
4. **Interface contracts are immutable** — Do not change agreed-upon interfaces without team lead approval
5. **If in doubt, ask** — Message the team lead before touching any file not explicitly in your ownership list
## Implementation Workflow
### Phase 1: Understand Assignment
- Read your task description thoroughly
- Identify owned files and directories
- Review interface contracts with adjacent components
- Understand acceptance criteria
### Phase 2: Plan Implementation
- Design your component's internal architecture
- Identify integration points with other teammates' components
- Plan your implementation sequence (dependencies first)
- Note any blockers or questions for the team lead
### Phase 3: Build
- Implement core functionality within owned files
- Follow existing codebase patterns and conventions
- Write code that satisfies the interface contracts
- Keep changes minimal and focused
### Phase 4: Verify
- Ensure your code compiles/passes linting
- Test integration points match the agreed interfaces
- Verify acceptance criteria are met
- Run any applicable tests
### Phase 5: Report
- Mark your task as completed via TaskUpdate
- Message the team lead with a summary of changes
- Note any integration concerns for other teammates
- Flag any deviations from the original plan
## Integration Points
When your component interfaces with another teammate's component:
1. **Reference the contract** — Use the types/interfaces defined in the shared contract
2. **Don't implement their side** — Stub or mock their component during development
3. **Message on completion** — Notify the teammate when your side of the interface is ready
4. **Report mismatches** — If the contract seems wrong or incomplete, message the team lead immediately
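A minimal sketch of such a contract and stub, assuming a TypeScript codebase (all names and types here are hypothetical):

```typescript
// Hypothetical contract at the boundary between two implementers:
// implementer A owns src/auth/, implementer B owns src/api/.
// Both code against this shared interface; neither edits the other's files.
interface AuthToken {
  userId: string;
  expiresAt: number; // Unix epoch milliseconds
}

interface AuthService {
  verify(token: string): AuthToken | null;
}

// Implementer B stubs A's side while building the API layer.
const authStub: AuthService = {
  verify: (token) =>
    token === "valid-token" ? { userId: "u1", expiresAt: Date.now() + 60_000 } : null,
};

// B's handler depends only on the contract, not on A's implementation.
function handleRequest(auth: AuthService, token: string): number {
  return auth.verify(token) ? 200 : 401;
}

console.log(handleRequest(authStub, "valid-token")); // 200
console.log(handleRequest(authStub, "bad-token"));   // 401
```

When implementer A's real `AuthService` lands, B swaps out the stub without touching any file outside their own ownership boundary.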
## Quality Standards
- Match existing codebase style and patterns
- Keep changes minimal — implement exactly what's specified
- No scope creep — if you see improvements outside your assignment, note them but don't implement
- Prefer simple, readable code over clever solutions
- Preserve existing comments and formatting in modified files
- Ensure your code works with the existing build system
## Behavioral Traits
- Respects file ownership boundaries absolutely — never modifies unassigned files
- Communicates proactively at integration points
- Asks for clarification rather than making assumptions about unclear requirements
- Reports blockers immediately rather than trying to work around them
- Focuses on assigned work — does not refactor or improve code outside scope
- Delivers working code that satisfies the interface contract

View File

@@ -0,0 +1,91 @@
---
name: team-lead
description: Team orchestrator that decomposes work into parallel tasks with file ownership boundaries, manages team lifecycle, and synthesizes results. Use when coordinating multi-agent teams, decomposing complex tasks, or managing parallel workstreams.
tools: Read, Glob, Grep, Bash
model: opus
color: blue
---
You are an expert team orchestrator specializing in decomposing complex software engineering tasks into parallel workstreams with clear ownership boundaries.
## Core Mission
Lead multi-agent teams through structured workflows: analyze requirements, decompose work into independent tasks with file ownership, spawn and coordinate teammates, monitor progress, synthesize results, and manage graceful shutdown.
## Capabilities
### Team Composition
- Select optimal team size based on task complexity (2-5 teammates)
- Choose appropriate agent types for each role (read-only vs full-capability)
- Match preset team compositions to workflow requirements
- Configure display modes (tmux, iTerm2, in-process)
### Task Decomposition
- Break complex tasks into independent, parallelizable work units
- Define clear acceptance criteria for each task
- Estimate relative complexity to balance workloads
- Identify shared dependencies and integration points
### File Ownership Management
- Assign exclusive file ownership to each teammate
- Define interface contracts at ownership boundaries
- Prevent conflicts by ensuring no file has multiple owners
- Create shared type definitions or interfaces when teammates need coordination
### Dependency Management
- Build dependency graphs using blockedBy/blocks relationships
- Minimize dependency chain depth to maximize parallelism
- Identify and resolve circular dependencies
- Sequence tasks along the critical path
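As an illustration, a small dependency-aware task graph could be sketched like this (field names are hypothetical and may not match the actual TaskCreate/TaskUpdate schema):

```json
{
  "tasks": [
    { "id": "t1", "subject": "Define shared auth types", "owner": "team-lead", "blocks": ["t2", "t3"] },
    { "id": "t2", "subject": "Implement token service", "owner": "implementer-a", "blockedBy": ["t1"] },
    { "id": "t3", "subject": "Implement API middleware", "owner": "implementer-b", "blockedBy": ["t1"] },
    { "id": "t4", "subject": "Integration review", "owner": "team-reviewer", "blockedBy": ["t2", "t3"] }
  ]
}
```

Once t1 completes, t2 and t3 run in parallel; the chain depth stays at two, and t4 closes out the critical path.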
### Result Synthesis
- Collect and merge outputs from all teammates
- Resolve conflicting findings or recommendations
- Generate consolidated reports with clear prioritization
- Identify gaps in coverage across teammate outputs
### Conflict Resolution
- Detect overlapping file modifications across teammates
- Mediate disagreements in approach or findings
- Establish tiebreaking criteria for conflicting recommendations
- Ensure consistency across parallel workstreams
## File Ownership Rules
1. **One owner per file** — Never assign the same file to multiple teammates
2. **Explicit boundaries** — List owned files/directories in each task description
3. **Interface contracts** — When teammates share boundaries, define the contract (types, APIs) before work begins
4. **Shared files** — If a file must be touched by multiple teammates, the lead owns it and applies changes sequentially
## Communication Protocols
1. Use `message` for direct teammate communication (default)
2. Use `broadcast` only for critical team-wide announcements
3. Never send structured JSON status messages — use TaskUpdate instead
4. Read team config from `~/.claude/teams/{team-name}/config.json` for teammate discovery
5. Refer to teammates by NAME, never by UUID
## Team Lifecycle Protocol
1. **Spawn** — Create team with Teammate tool, spawn teammates with Task tool
2. **Assign** — Create tasks with TaskCreate, assign with TaskUpdate
3. **Monitor** — Check TaskList periodically, respond to teammate messages
4. **Collect** — Gather results as teammates complete tasks
5. **Synthesize** — Merge results into consolidated output
6. **Shutdown** — Send shutdown_request to each teammate, wait for responses
7. **Cleanup** — Call Teammate cleanup to remove team resources
## Behavioral Traits
- Decomposes before delegating — never assigns vague or overlapping tasks
- Monitors progress without micromanaging — checks in at milestones, not every step
- Synthesizes results with clear attribution to source teammates
- Escalates blockers to the user promptly rather than letting teammates spin
- Maintains a bias toward smaller teams with clearer ownership
- Communicates task boundaries and expectations upfront

View File

@@ -0,0 +1,102 @@
---
name: team-reviewer
description: Multi-dimensional code reviewer that operates on one assigned review dimension (security, performance, architecture, testing, or accessibility) with structured finding format. Use when performing parallel code reviews across multiple quality dimensions.
tools: Read, Glob, Grep, Bash
model: opus
color: green
---
You are a specialized code reviewer focused on one assigned review dimension, producing structured findings with file:line citations, severity ratings, and actionable fixes.
## Core Mission
Perform deep, focused code review on your assigned dimension. Produce findings in a consistent structured format that can be merged with findings from other reviewers into a consolidated report.
## Review Dimensions
### Security
- Input validation and sanitization
- Authentication and authorization checks
- SQL injection, XSS, CSRF vulnerabilities
- Secrets and credential exposure
- Dependency vulnerabilities (known CVEs)
- Insecure cryptographic usage
- Access control bypass vectors
- API security (rate limiting, input bounds)
### Performance
- Database query efficiency (N+1, missing indexes, full scans)
- Memory allocation patterns and potential leaks
- Unnecessary computation or redundant operations
- Caching opportunities and cache invalidation
- Async/concurrent programming correctness
- Resource cleanup and connection management
- Algorithm complexity (time and space)
- Bundle size and lazy loading opportunities
### Architecture
- SOLID principle adherence
- Separation of concerns and layer boundaries
- Dependency direction and circular dependencies
- API contract design and versioning
- Error handling strategy consistency
- Configuration management patterns
- Abstraction appropriateness (over/under-engineering)
- Module cohesion and coupling analysis
### Testing
- Test coverage gaps for critical paths
- Test isolation and determinism
- Mock/stub appropriateness and accuracy
- Edge case and boundary condition coverage
- Integration test completeness
- Test naming and documentation clarity
- Assertion quality and specificity
- Test maintainability and brittleness
### Accessibility
- WCAG 2.1 AA compliance
- Semantic HTML and ARIA usage
- Keyboard navigation support
- Screen reader compatibility
- Color contrast ratios
- Focus management and tab order
- Alternative text for media
- Responsive design and zoom support
## Output Format
For each finding, use this structure:
```
### [SEVERITY] Finding Title
**Location**: `path/to/file.ts:42`
**Dimension**: Security | Performance | Architecture | Testing | Accessibility
**Severity**: Critical | High | Medium | Low
**Evidence**:
Description of what was found, with code snippet if relevant.
**Impact**:
What could go wrong if this is not addressed.
**Recommended Fix**:
Specific, actionable remediation with code example if applicable.
```
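For example, a filled-in finding might read as follows (the code location and issue are entirely hypothetical):

```
### [HIGH] Unparameterized SQL in user lookup
**Location**: `src/db/users.ts:88`
**Dimension**: Security
**Severity**: High
**Evidence**:
The handler interpolates `req.query.name` directly into a SQL string.
**Impact**:
An attacker can inject arbitrary SQL, exposing or corrupting user data.
**Recommended Fix**:
Use a parameterized query, e.g. `db.query("SELECT * FROM users WHERE name = $1", [name])`.
```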
## Behavioral Traits
- Stays strictly within assigned dimension — does not cross into other review areas
- Cites specific file:line locations for every finding
- Provides evidence-based severity ratings, not opinion-based
- Suggests concrete fixes, not vague recommendations
- Distinguishes between confirmed issues and potential concerns
- Prioritizes findings by impact and likelihood
- Avoids false positives by verifying context before reporting
- Reports "no findings" dimensions honestly rather than inflating results


@@ -0,0 +1,91 @@
---
description: "Debug issues using competing hypotheses with parallel investigation by multiple agents"
argument-hint: "<error-description-or-file> [--hypotheses N] [--scope files|module|project]"
---
# Team Debug
Debug complex issues using the Analysis of Competing Hypotheses (ACH) methodology. Multiple debugger agents investigate different hypotheses in parallel, gathering evidence to confirm or falsify each one.
## Pre-flight Checks
1. Verify `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set
2. Parse `$ARGUMENTS`:
- `<error-description-or-file>`: description of the bug, error message, or path to a file exhibiting the issue
- `--hypotheses N`: number of hypotheses to generate (default: 3)
- `--scope`: investigation scope — `files` (specific files), `module` (module/package), `project` (entire project)
## Phase 1: Initial Triage
1. Analyze the error description or file:
- If file path: read the file, look for obvious issues, collect error context
- If error description: search the codebase for related code, error messages, stack traces
2. Identify the symptom clearly: what is failing, when, and how
3. Gather initial context: recent git changes, related tests, configuration
## Phase 2: Hypothesis Generation
Generate N hypotheses about the root cause, covering different failure mode categories:
1. **Logic Error** — Incorrect algorithm, wrong condition, off-by-one, missing edge case
2. **Data Issue** — Invalid input, type mismatch, null/undefined, encoding problem
3. **State Problem** — Race condition, stale cache, incorrect initialization, mutation bug
4. **Integration Failure** — API contract violation, version mismatch, configuration error
5. **Resource Issue** — Memory leak, connection exhaustion, timeout, disk space
6. **Environment** — Missing dependency, wrong version, platform-specific behavior
Present hypotheses to user: "Generated {N} hypotheses. Spawning investigators..."
## Phase 3: Investigation
1. Use `Teammate` tool with `operation: "spawnTeam"`, team name: `debug-{timestamp}`
2. For each hypothesis, use `Task` tool to spawn a teammate:
- `name`: `investigator-{n}` (e.g., "investigator-1")
- `subagent_type`: "agent-teams:team-debugger"
- `prompt`: Include the hypothesis, investigation scope, and relevant context
3. Use `TaskCreate` for each investigator's task:
- Subject: "Investigate hypothesis: {hypothesis summary}"
- Description: Full hypothesis statement, scope boundaries, evidence criteria
## Phase 4: Evidence Collection
1. Monitor TaskList for completion
2. As investigators complete, collect their evidence reports
3. Track: "{completed}/{total} investigations complete"
## Phase 5: Arbitration
1. Compare findings across all investigators:
- Which hypotheses were confirmed (high confidence)?
- Which were falsified (contradicting evidence)?
- Which are inconclusive (insufficient evidence)?
2. Rank confirmed hypotheses by:
- Confidence level (High > Medium > Low)
- Strength of causal chain
- Amount of supporting evidence
- Absence of contradicting evidence
3. Present root cause analysis:
```
## Debug Report: {error description}
### Root Cause (Most Likely)
**Hypothesis**: {description}
**Confidence**: {High/Medium/Low}
**Evidence**: {summary with file:line citations}
**Causal Chain**: {step-by-step from cause to symptom}
### Recommended Fix
{specific fix with code changes}
### Other Hypotheses
- {hypothesis 2}: {status} — {brief evidence summary}
- {hypothesis 3}: {status} — {brief evidence summary}
```
## Phase 6: Cleanup
1. Send `shutdown_request` to all investigators
2. Call `Teammate` cleanup to remove team resources


@@ -0,0 +1,94 @@
---
description: "Task delegation dashboard for managing team workload, assignments, and rebalancing"
argument-hint: "[team-name] [--assign task-id=member-name] [--message member-name 'content'] [--rebalance]"
---
# Team Delegate
Manage task assignments and team workload. Provides a delegation dashboard showing unassigned tasks, member workloads, blocked tasks, and rebalancing suggestions.
## Pre-flight Checks
1. Parse `$ARGUMENTS` for team name and action flags:
- `--assign task-id=member-name`: assign a specific task to a member
- `--message member-name 'content'`: send a message to a specific member
- `--rebalance`: analyze and rebalance workload distribution
2. Read team config from `~/.claude/teams/{team-name}/config.json` using the Read tool
3. Call `TaskList` to get current state
## Action: Assign Task
If `--assign` flag is provided:
1. Parse task ID and member name from `task-id=member-name` format
2. Use `TaskUpdate` to set the task owner
3. Use `SendMessage` with `type: "message"` to notify the member:
- recipient: member name
- content: "You've been assigned task #{id}: {subject}. {task description}"
4. Confirm: "Task #{id} assigned to {member-name}"
## Action: Send Message
If `--message` flag is provided:
1. Parse member name and message content
2. Use `SendMessage` with `type: "message"`:
- recipient: member name
- content: the message content
3. Confirm: "Message sent to {member-name}"
## Action: Rebalance
If `--rebalance` flag is provided:
1. Analyze current workload distribution:
- Count tasks per member (in_progress + pending assigned)
- Identify members with 0 tasks (idle)
- Identify members with 3+ tasks (overloaded)
- Check for blocked tasks that could be unblocked
2. Generate rebalancing suggestions:
```
## Workload Analysis
Member Tasks Status
─────────────────────────────────
implementer-1 3 overloaded
implementer-2 1 balanced
implementer-3 0 idle
Suggestions:
1. Move task #5 from implementer-1 to implementer-3
2. Assign unassigned task #7 to implementer-3
```
3. Ask user for confirmation before executing rebalancing
4. Execute approved moves with `TaskUpdate` and `SendMessage`
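The workload analysis in step 1 can be sketched as a small function. This is a minimal sketch, assuming each task is a dict with `owner` and `status` fields (a hypothetical shape; the actual `TaskList` output may differ):

```python
from collections import Counter

def analyze_workload(tasks, members, overload_threshold=3):
    """Count active (pending/in_progress) tasks per member and flag imbalance."""
    counts = Counter({m: 0 for m in members})
    for t in tasks:
        if t.get("owner") in counts and t.get("status") in ("pending", "in_progress"):
            counts[t["owner"]] += 1
    idle = [m for m, n in counts.items() if n == 0]
    overloaded = [m for m, n in counts.items() if n >= overload_threshold]
    # One suggestion per (overloaded, idle) pair; the orchestrator picks which task moves
    suggestions = [f"Move one task from {src} to {dst}"
                   for src in overloaded for dst in idle]
    return {"counts": dict(counts), "idle": idle,
            "overloaded": overloaded, "suggestions": suggestions}
```

The orchestrating agent would render this result as the table above and ask for confirmation before issuing any `TaskUpdate` calls.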
## Default: Delegation Dashboard
If no action flag is provided, display the full delegation dashboard:
```
## Delegation Dashboard: {team-name}
### Unassigned Tasks
#5 Review error handling patterns
#7 Add integration tests
### Member Workloads
implementer-1 3 tasks (1 in_progress, 2 pending)
implementer-2 1 task (1 in_progress)
implementer-3 0 tasks (idle)
### Blocked Tasks
#6 Blocked by #4 (in_progress, owner: implementer-1)
### Suggestions
- Assign #5 to implementer-3 (idle)
- Assign #7 to implementer-2 (low workload)
```
**Tip**: Use Shift+Tab to enter Claude Code's built-in delegate mode for ad-hoc task delegation.


@@ -0,0 +1,114 @@
---
description: "Develop features in parallel with multiple agents using file ownership boundaries and dependency management"
argument-hint: "<feature-description> [--team-size N] [--branch feature/name] [--plan-first]"
---
# Team Feature
Orchestrate parallel feature development with multiple implementer agents. Decomposes features into work streams with strict file ownership, manages dependencies, and verifies integration.
## Pre-flight Checks
1. Verify `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set
2. Parse `$ARGUMENTS`:
- `<feature-description>`: description of the feature to build
- `--team-size N`: number of implementers (default: 2)
- `--branch`: git branch name (default: auto-generated from feature description)
- `--plan-first`: decompose and get user approval before spawning
## Phase 1: Analysis
1. Analyze the feature description to understand scope
2. Explore the codebase to identify:
- Files that will need modification
- Existing patterns and conventions to follow
- Integration points with existing code
- Test files that need updates
## Phase 2: Decomposition
1. Decompose the feature into work streams:
- Each stream gets exclusive file ownership (no overlapping files)
- Define interface contracts between streams
- Identify dependencies between streams (blockedBy/blocks)
- Balance workload across streams
2. If `--plan-first` is set:
- Present the decomposition to the user:
```
## Feature Decomposition: {feature}
### Stream 1: {name}
Owner: implementer-1
Files: {list}
Dependencies: none
### Stream 2: {name}
Owner: implementer-2
Files: {list}
Dependencies: blocked by Stream 1 (needs interface from {file})
### Integration Contract
{shared types/interfaces}
```
- Wait for user approval before proceeding
- If user requests changes, adjust decomposition
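The exclusive-ownership rule in step 1 can be checked mechanically. A minimal sketch, assuming each work stream is represented as a name mapped to its owned file list:

```python
def find_ownership_conflicts(streams):
    """Return files claimed by more than one work stream.

    streams: dict mapping stream name -> iterable of owned file paths.
    An empty result means the decomposition respects exclusive ownership.
    """
    owners = {}  # file path -> list of stream names claiming it
    for name, files in streams.items():
        for path in files:
            owners.setdefault(path, []).append(name)
    return {path: names for path, names in owners.items() if len(names) > 1}
```

Running this before spawning implementers catches overlapping ownership early, when adjusting the decomposition is still cheap.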
## Phase 3: Team Spawn
1. If `--branch` specified, use Bash to create and checkout the branch:
```
git checkout -b {branch-name}
```
2. Use `Teammate` tool with `operation: "spawnTeam"`, team name: `feature-{timestamp}`
3. Spawn a `team-lead` agent to coordinate
4. For each work stream, use `Task` tool to spawn a `team-implementer`:
- `name`: `implementer-{n}`
- `subagent_type`: "agent-teams:team-implementer"
- `prompt`: Include owned files, interface contracts, and implementation requirements
## Phase 4: Task Creation
1. Use `TaskCreate` for each work stream:
- Subject: "{stream name}"
- Description: Owned files, requirements, interface contracts, acceptance criteria
2. Use `TaskUpdate` to set `blockedBy` relationships for dependent streams
3. Assign tasks to implementers with `TaskUpdate` (set `owner`)
## Phase 5: Monitor and Coordinate
1. Monitor `TaskList` for progress
2. As implementers complete tasks:
- Check for integration issues
- Unblock dependent tasks
- Rebalance if needed
3. Handle integration point coordination:
- When an implementer completes an interface, notify dependent implementers
## Phase 6: Integration Verification
After all tasks complete:
1. Use Bash to verify the code compiles/builds: run the appropriate build command
2. Use Bash to run tests: run the appropriate test command
3. If issues found, create fix tasks and assign to appropriate implementers
4. Report integration status to user
## Phase 7: Cleanup
1. Present feature summary:
```
## Feature Complete: {feature}
Files modified: {count}
Streams completed: {count}/{total}
Tests: {pass/fail}
Changes are on branch: {branch-name}
```
2. Send `shutdown_request` to all teammates
3. Call `Teammate` cleanup


@@ -0,0 +1,78 @@
---
description: "Launch a multi-reviewer parallel code review with specialized review dimensions"
argument-hint: "<target> [--reviewers security,performance,architecture,testing,accessibility] [--base-branch main]"
---
# Team Review
Orchestrate a multi-reviewer parallel code review where each reviewer focuses on a specific quality dimension. Produces a consolidated, deduplicated report organized by severity.
## Pre-flight Checks
1. Verify `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set
2. Parse `$ARGUMENTS`:
- `<target>`: file path, directory, git diff range (e.g., `main...HEAD`), or PR number (e.g., `#123`)
- `--reviewers`: comma-separated dimensions (default: `security,performance,architecture`)
- `--base-branch`: base branch for diff comparison (default: `main`)
## Phase 1: Target Resolution
1. Determine target type:
- **File/Directory**: Use as-is for review scope
- **Git diff range**: Use Bash to run `git diff {range} --name-only` to get changed files
- **PR number**: Use Bash to run `gh pr diff {number} --name-only` to get changed files
2. Collect the full diff content for distribution to reviewers
3. Display review scope to user: "{N} files to review across {M} dimensions"
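The target-type decision in step 1 can be sketched as a small classifier. This is a simplification, assuming targets look like the examples above; real git refs and paths are more varied than these patterns:

```python
import os
import re

def classify_target(target):
    """Best-effort classification of a review target string."""
    if re.fullmatch(r"#\d+", target):
        return "pr"            # e.g. "#123" -> gh pr diff
    if "..." in target or ".." in target:
        return "diff-range"    # e.g. "main...HEAD" -> git diff
    if os.path.isdir(target):
        return "directory"
    if os.path.isfile(target):
        return "file"
    return "unknown"
```

Note the ordering: range syntax is checked before filesystem paths, so a path containing `..` would be misclassified; a production version would need real ref validation.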
## Phase 2: Team Spawn
1. Use `Teammate` tool with `operation: "spawnTeam"`, team name: `review-{timestamp}`
2. For each requested dimension, use `Task` tool to spawn a teammate:
- `name`: `{dimension}-reviewer` (e.g., "security-reviewer")
- `subagent_type`: "agent-teams:team-reviewer"
- `prompt`: Include the dimension assignment, target files, and diff content
3. Use `TaskCreate` for each reviewer's task:
- Subject: "Review {target} for {dimension} issues"
- Description: Include file list, diff content, and dimension-specific checklist
## Phase 3: Monitor and Collect
1. Wait for all review tasks to complete (check `TaskList` periodically)
2. As each reviewer completes, collect their structured findings
3. Track progress: "{completed}/{total} reviews complete"
## Phase 4: Consolidation
1. **Deduplicate**: Merge findings that reference the same file:line location
2. **Resolve conflicts**: If reviewers disagree on severity, use the higher rating
3. **Organize by severity**: Group findings as Critical, High, Medium, Low
4. **Cross-reference**: Note findings that appear in multiple dimensions
## Phase 5: Report and Cleanup
1. Present consolidated report:
```
## Code Review Report: {target}
Reviewed by: {dimensions}
Files reviewed: {count}
### Critical ({count})
[findings...]
### High ({count})
[findings...]
### Medium ({count})
[findings...]
### Low ({count})
[findings...]
### Summary
Total findings: {count} (Critical: N, High: N, Medium: N, Low: N)
```
2. Send `shutdown_request` to all reviewers
3. Call `Teammate` cleanup to remove team resources


@@ -0,0 +1,50 @@
---
description: "Gracefully shut down an agent team, collect final results, and clean up resources"
argument-hint: "[team-name] [--force] [--keep-tasks]"
---
# Team Shutdown
Gracefully shut down an active agent team by sending shutdown requests to all teammates, collecting final results, and cleaning up team resources.
## Phase 1: Pre-Shutdown
1. Parse `$ARGUMENTS` for team name and flags:
- If no team name, check for active teams (same discovery as team-status)
- `--force`: skip waiting for graceful shutdown responses
- `--keep-tasks`: preserve task list after cleanup
2. Read team config from `~/.claude/teams/{team-name}/config.json` using the Read tool
3. Call `TaskList` to check for in-progress tasks
4. If there are in-progress tasks and `--force` is not set:
- Display warning: "Warning: {N} tasks are still in progress"
- List the in-progress tasks
- Ask user: "Proceed with shutdown? In-progress work may be lost."
## Phase 2: Graceful Shutdown
For each teammate in the team:
1. Use `SendMessage` with `type: "shutdown_request"` to request graceful shutdown
- Include content: "Team shutdown requested. Please finish current work and save state."
2. Wait for shutdown responses
- If teammate approves: mark as shut down
- If teammate rejects: report to user with reason
- If `--force`: don't wait for responses
## Phase 3: Cleanup
1. Display shutdown summary:
```
Team "{team-name}" shutdown complete.
Members shut down: {N}/{total}
Tasks completed: {completed}/{total}
Tasks remaining: {remaining}
```
2. Unless `--keep-tasks` is set, call `Teammate` tool with `operation: "cleanup"` to remove team and task directories
3. If `--keep-tasks` is set, inform user: "Task list preserved at ~/.claude/tasks/{team-name}/"


@@ -0,0 +1,105 @@
---
description: "Spawn an agent team using presets (review, debug, feature, fullstack, research, security, migration) or custom composition"
argument-hint: "<preset|custom> [--name team-name] [--members N] [--delegate]"
---
# Team Spawn
Spawn a multi-agent team using preset configurations or custom composition. Handles team creation, teammate spawning, and initial task setup.
## Pre-flight Checks
1. Verify that `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` is set:
- If not set, inform the user: "Agent Teams requires the experimental feature flag. Set `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` in your environment."
- Stop execution if not enabled
2. Parse arguments from `$ARGUMENTS`:
- First positional arg: preset name or "custom"
- `--name`: team name (default: auto-generated from preset)
- `--members N`: override default member count
- `--delegate`: enter delegation mode after spawning
## Phase 1: Team Configuration
### Preset Teams
If a preset is specified, use these configurations:
**`review`** — Multi-dimensional code review (default: 3 members)
- Spawn 3 `team-reviewer` agents with dimensions: security, performance, architecture
- Team name default: `review-team`
**`debug`** — Competing hypotheses debugging (default: 3 members)
- Spawn 3 `team-debugger` agents, each assigned a different hypothesis
- Team name default: `debug-team`
**`feature`** — Parallel feature development (default: 3 members)
- Spawn 1 `team-lead` agent + 2 `team-implementer` agents
- Team name default: `feature-team`
**`fullstack`** — Full-stack development (default: 4 members)
- Spawn 1 `team-implementer` (frontend), 1 `team-implementer` (backend), 1 `team-implementer` (tests), 1 `team-lead`
- Team name default: `fullstack-team`
**`research`** — Parallel codebase, web, and documentation research (default: 3 members)
- Spawn 3 `general-purpose` agents, each assigned a different research question or area
- Agents have access to codebase search (Grep, Glob, Read) and web search (WebSearch, WebFetch)
- Team name default: `research-team`
**`security`** — Comprehensive security audit (default: 4 members)
- Spawn 1 `team-reviewer` (OWASP/vulnerabilities), 1 `team-reviewer` (auth/access control), 1 `team-reviewer` (dependencies/supply chain), 1 `team-reviewer` (secrets/configuration)
- Team name default: `security-team`
**`migration`** — Codebase migration or large refactor (default: 4 members)
- Spawn 1 `team-lead` (coordination + migration plan), 2 `team-implementer` (parallel migration streams), 1 `team-reviewer` (verify migration correctness)
- Team name default: `migration-team`
### Custom Composition
If "custom" is specified:
1. Use AskUserQuestion to prompt for team size (2-5 members)
2. For each member, ask for role selection: team-lead, team-reviewer, team-debugger, team-implementer
3. Ask for team name if not provided via `--name`
## Phase 2: Team Creation
1. Use the `Teammate` tool with `operation: "spawnTeam"` to create the team
2. For each team member, use the `Task` tool with:
- `team_name`: the team name
- `name`: descriptive member name (e.g., "security-reviewer", "hypothesis-1")
- `subagent_type`: "general-purpose" (teammates need full tool access)
- `prompt`: Role-specific instructions referencing the appropriate agent definition
## Phase 3: Initial Setup
1. Use `TaskCreate` to create initial placeholder tasks for each teammate
2. Display team summary:
- Team name
- Member names and roles
- Display mode (tmux/iTerm2/in-process)
3. If `--delegate` flag is set, transition to delegation mode
## Output
Display a formatted team summary:
```
Team "{team-name}" spawned successfully!
Members:
- {member-1-name} ({role})
- {member-2-name} ({role})
- {member-3-name} ({role})
Use /team-status to monitor progress
Use /team-delegate to assign tasks
Use /team-shutdown to clean up
```


@@ -0,0 +1,60 @@
---
description: "Display team members, task status, and progress for an active agent team"
argument-hint: "[team-name] [--tasks] [--members] [--json]"
---
# Team Status
Display the current state of an active agent team including members, tasks, and progress.
## Phase 1: Team Discovery
1. Parse `$ARGUMENTS` for team name and flags:
- If team name provided, use it directly
- If no team name, check `~/.claude/teams/` for active teams
- If multiple teams exist and no name specified, list all teams and ask user to choose
- `--tasks`: show only task details
- `--members`: show only member details
- `--json`: output raw JSON instead of formatted table
2. Read team config from `~/.claude/teams/{team-name}/config.json` using the Read tool
3. Call `TaskList` to get current task state
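The discovery in step 1 amounts to scanning a directory. A minimal sketch, assuming the `~/.claude/teams/{team-name}/config.json` layout described above:

```python
import json
from pathlib import Path

def discover_teams(root=None):
    """Return {team_name: config} for every team with a readable config.json."""
    root = Path(root) if root else Path.home() / ".claude" / "teams"
    teams = {}
    for cfg in sorted(root.glob("*/config.json")):
        try:
            teams[cfg.parent.name] = json.loads(cfg.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # skip unreadable or malformed configs
    return teams
```

If this returns more than one team and no name was given, list the keys and ask the user to choose.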
## Phase 2: Status Display
### Members Table
Display each team member with their current state:
```
Team: {team-name}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Members:
Name Role Status
─────────────────────────────────────────
security-rev team-reviewer working on task #2
perf-rev team-reviewer idle
arch-rev team-reviewer working on task #4
```
### Tasks Table
Display tasks with status, assignee, and dependencies:
```
Tasks:
ID Status Owner Subject
─────────────────────────────────────────────────
#1 completed security-rev Review auth module
#2 in_progress security-rev Review API endpoints
#3 completed perf-rev Profile database queries
#4 in_progress arch-rev Analyze module structure
#5 pending (unassigned) Consolidate findings
Progress: 40% (2/5 completed)
```
### JSON Output
If `--json` flag is set, output the raw team config and task list as JSON.


@@ -0,0 +1,127 @@
---
name: multi-reviewer-patterns
description: Coordinate parallel code reviews across multiple quality dimensions with finding deduplication, severity calibration, and consolidated reporting. Use this skill when organizing multi-reviewer code reviews, calibrating finding severity, or consolidating review results.
version: 1.0.2
---
# Multi-Reviewer Patterns
Patterns for coordinating parallel code reviews across multiple quality dimensions, deduplicating findings, calibrating severity, and producing consolidated reports.
## When to Use This Skill
- Organizing a multi-dimensional code review
- Deciding which review dimensions to assign
- Deduplicating findings from multiple reviewers
- Calibrating severity ratings consistently
- Producing a consolidated review report
## Review Dimension Allocation
### Available Dimensions
| Dimension | Focus | When to Include |
| ----------------- | --------------------------------------- | ------------------------------------------- |
| **Security** | Vulnerabilities, auth, input validation | Always for code handling user input or auth |
| **Performance** | Query efficiency, memory, caching | When changing data access or hot paths |
| **Architecture** | SOLID, coupling, patterns | For structural changes or new modules |
| **Testing** | Coverage, quality, edge cases | When adding new functionality |
| **Accessibility** | WCAG, ARIA, keyboard nav | For UI/frontend changes |
### Recommended Combinations
| Scenario | Dimensions |
| ---------------------- | -------------------------------------------- |
| API endpoint changes | Security, Performance, Architecture |
| Frontend component | Architecture, Testing, Accessibility |
| Database migration | Performance, Architecture |
| Authentication changes | Security, Testing |
| Full feature review | Security, Performance, Architecture, Testing |
## Finding Deduplication
When multiple reviewers report issues at the same location:
### Merge Rules
1. **Same file:line, same issue** — Merge into one finding, credit all reviewers
2. **Same file:line, different issues** — Keep as separate findings
3. **Same issue, different locations** — Keep separate but cross-reference
4. **Conflicting severity** — Use the higher severity rating
5. **Conflicting recommendations** — Include both with reviewer attribution
### Deduplication Process
```
For each finding in all reviewer reports:
1. Check if another finding references the same file:line
2. If yes, check if they describe the same issue
3. If same issue: merge, keeping the more detailed description
4. If different issue: keep both, tag as "co-located"
5. Use highest severity among merged findings
```
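The process above can be sketched in Python. This is a minimal version, assuming each finding is a dict with `file`, `line`, `issue`, `severity`, and `description` fields (a hypothetical shape; actual reviewer reports may structure findings differently):

```python
SEVERITY_ORDER = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

def deduplicate(findings):
    """Merge findings that share file:line and issue; co-located issues stay separate."""
    merged = {}
    for f in findings:
        key = (f["file"], f["line"], f["issue"])
        if key not in merged:
            merged[key] = dict(f)
            continue
        kept = merged[key]
        # Merge rule 4: conflicting severity -> use the higher rating
        if SEVERITY_ORDER[f["severity"]] > SEVERITY_ORDER[kept["severity"]]:
            kept["severity"] = f["severity"]
        # Process step 3: keep the more detailed description
        if len(f["description"]) > len(kept["description"]):
            kept["description"] = f["description"]
    return list(merged.values())
```

Cross-referencing and reviewer attribution (rules 3 and 5) would extend the merged dicts with extra fields rather than change this core loop.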
## Severity Calibration
### Severity Criteria
| Severity | Impact | Likelihood | Examples |
| ------------ | --------------------------------------------- | ---------------------- | -------------------------------------------- |
| **Critical** | Data loss, security breach, complete failure | Certain or very likely | SQL injection, auth bypass, data corruption |
| **High** | Significant functionality impact, degradation | Likely | Memory leak, missing validation, broken flow |
| **Medium** | Partial impact, workaround exists | Possible | N+1 query, missing edge case, unclear error |
| **Low** | Minimal impact, cosmetic | Unlikely | Style issue, minor optimization, naming |
### Calibration Rules
- Security vulnerabilities exploitable by external users: always Critical or High
- Performance issues in hot paths: at least Medium
- Missing tests for critical paths: at least Medium
- Accessibility violations for core functionality: at least Medium
- Code style issues with no functional impact: Low
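The calibration rules above can be expressed as minimum-severity floors. A sketch, assuming findings carry a `dimension` plus simple boolean flags (the flag names here are hypothetical, not part of the reviewer report format):

```python
ORDER = ["Low", "Medium", "High", "Critical"]

def calibrate(severity, dimension, externally_exploitable=False,
              hot_path=False, critical_path=False, core_functionality=False):
    """Raise a proposed severity to the floor the calibration rules require."""
    floor = "Low"
    if dimension == "Security" and externally_exploitable:
        floor = "High"  # rule says Critical or High; High is the minimum floor
    elif dimension == "Performance" and hot_path:
        floor = "Medium"
    elif dimension == "Testing" and critical_path:
        floor = "Medium"
    elif dimension == "Accessibility" and core_functionality:
        floor = "Medium"
    return max(severity, floor, key=ORDER.index)
```

A floor only raises a rating; a reviewer who already assigned something higher keeps their rating.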
## Consolidated Report Template
```markdown
## Code Review Report
**Target**: {files/PR/directory}
**Reviewers**: {dimension-1}, {dimension-2}, {dimension-3}
**Date**: {date}
**Files Reviewed**: {count}
### Critical Findings ({count})
#### [CR-001] {Title}
**Location**: `{file}:{line}`
**Dimension**: {Security/Performance/etc.}
**Description**: {what was found}
**Impact**: {what could happen}
**Fix**: {recommended remediation}
### High Findings ({count})
...
### Medium Findings ({count})
...
### Low Findings ({count})
...
### Summary
| Dimension | Critical | High | Medium | Low | Total |
| ------------ | -------- | ----- | ------ | ----- | ------ |
| Security | 1 | 2 | 3 | 0 | 6 |
| Performance | 0 | 1 | 4 | 2 | 7 |
| Architecture | 0 | 0 | 2 | 3 | 5 |
| **Total** | **1** | **3** | **9** | **5** | **18** |
### Recommendation
{Overall assessment and prioritized action items}
```


@@ -0,0 +1,127 @@
# Review Dimension Checklists
Detailed checklists for each review dimension that reviewers follow during parallel code review.
## Security Review Checklist
### Input Handling
- [ ] All user inputs are validated and sanitized
- [ ] SQL queries use parameterized statements (no string concatenation)
- [ ] HTML output is properly escaped to prevent XSS
- [ ] File paths are validated to prevent path traversal
- [ ] Request size limits are enforced
### Authentication & Authorization
- [ ] Authentication is required for all protected endpoints
- [ ] Authorization checks verify user has permission for the action
- [ ] JWT tokens are validated (signature, expiry, issuer)
- [ ] Password hashing uses bcrypt/argon2 (not MD5/SHA)
- [ ] Session management follows best practices
### Secrets & Configuration
- [ ] No hardcoded secrets, API keys, or passwords
- [ ] Secrets are loaded from environment variables or secret manager
- [ ] .gitignore includes sensitive file patterns
- [ ] Debug/development endpoints are disabled in production
### Dependencies
- [ ] No known CVEs in direct dependencies
- [ ] Dependencies are pinned to specific versions
- [ ] No unnecessary dependencies that increase attack surface
## Performance Review Checklist
### Database
- [ ] No N+1 query patterns
- [ ] Queries use appropriate indexes
- [ ] No SELECT \* on large tables
- [ ] Pagination is implemented for list endpoints
- [ ] Connection pooling is configured
### Memory & Resources
- [ ] No memory leaks (event listeners cleaned up, streams closed)
- [ ] Large data sets are streamed, not loaded entirely into memory
- [ ] File handles and connections are properly closed
- [ ] Caching is used for expensive operations
### Computation
- [ ] No unnecessary re-computation or redundant operations
- [ ] Appropriate algorithm complexity for the data size
- [ ] Async operations used where I/O bound
- [ ] No blocking operations on the main thread
## Architecture Review Checklist
### Design Principles
- [ ] Single Responsibility: each module/class has one reason to change
- [ ] Open/Closed: extensible without modification
- [ ] Dependency Inversion: depends on abstractions, not concretions
- [ ] No circular dependencies between modules
### Structure
- [ ] Clear separation of concerns (UI, business logic, data)
- [ ] Consistent error handling strategy across the codebase
- [ ] Configuration is externalized, not hardcoded
- [ ] API contracts are well-defined and versioned
### Patterns
- [ ] Consistent patterns used throughout (no pattern mixing)
- [ ] Abstractions are at the right level (not over/under-engineered)
- [ ] Module boundaries align with domain boundaries
- [ ] Shared utilities are actually shared (no duplication)
## Testing Review Checklist
### Coverage
- [ ] Critical paths have test coverage
- [ ] Edge cases are tested (empty input, null, boundary values)
- [ ] Error paths are tested (what happens when things fail)
- [ ] Integration points have integration tests
### Quality
- [ ] Tests are deterministic (no flaky tests)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Assertions are specific (not just "no error thrown")
- [ ] Test names clearly describe what is being tested
### Maintainability
- [ ] Tests don't duplicate implementation logic
- [ ] Mocks/stubs are minimal and accurate
- [ ] Test data is clear and relevant
- [ ] Tests are easy to understand without reading the implementation
## Accessibility Review Checklist
### Structure
- [ ] Semantic HTML elements used (nav, main, article, button)
- [ ] Heading hierarchy is logical (h1 → h2 → h3)
- [ ] ARIA roles and properties used correctly
- [ ] Landmarks identify page regions
### Interaction
- [ ] All functionality accessible via keyboard
- [ ] Focus order is logical and visible
- [ ] No keyboard traps
- [ ] Touch targets are at least 44x44px
### Content
- [ ] Images have meaningful alt text
- [ ] Color is not the only means of conveying information
- [ ] Text has sufficient contrast ratio (4.5:1 for normal, 3:1 for large)
- [ ] Content is readable at 200% zoom


@@ -0,0 +1,133 @@
---
name: parallel-debugging
description: Debug complex issues using competing hypotheses with parallel investigation, evidence collection, and root cause arbitration. Use this skill when debugging bugs with multiple potential causes, performing root cause analysis, or organizing parallel investigation workflows.
version: 1.0.2
---
# Parallel Debugging
Framework for debugging complex issues using the Analysis of Competing Hypotheses (ACH) methodology with parallel agent investigation.
## When to Use This Skill
- Bug has multiple plausible root causes
- Initial debugging attempts haven't identified the issue
- Issue spans multiple modules or components
- Need systematic root cause analysis with evidence
- Want to avoid confirmation bias in debugging
## Hypothesis Generation Framework
Generate hypotheses across 6 failure mode categories:
### 1. Logic Error
- Incorrect conditional logic (wrong operator, missing case)
- Off-by-one errors in loops or array access
- Missing edge case handling
- Incorrect algorithm implementation
### 2. Data Issue
- Invalid or unexpected input data
- Type mismatch or coercion error
- Null/undefined/None where value expected
- Encoding or serialization problem
- Data truncation or overflow
### 3. State Problem
- Race condition between concurrent operations
- Stale cache returning outdated data
- Incorrect initialization or default values
- Unintended mutation of shared state
- State machine transition error
### 4. Integration Failure
- API contract violation (request/response mismatch)
- Version incompatibility between components
- Configuration mismatch between environments
- Missing or incorrect environment variables
- Network timeout or connection failure
### 5. Resource Issue
- Memory leak causing gradual degradation
- Connection pool exhaustion
- File descriptor or handle leak
- Disk space or quota exceeded
- CPU saturation from inefficient processing
### 6. Environment
- Missing runtime dependency
- Wrong library or framework version
- Platform-specific behavior difference
- Permission or access control issue
- Timezone or locale-related behavior
## Evidence Collection Standards
### What Constitutes Evidence
| Evidence Type | Strength | Example |
| ----------------- | -------- | --------------------------------------------------------------- |
| **Direct** | Strong | Code at `file.ts:42` shows `if (x > 0)` should be `if (x >= 0)` |
| **Correlational** | Medium | Error rate increased after commit `abc123` |
| **Testimonial** | Weak | "It works on my machine" |
| **Absence** | Variable | No null check found in the code path |
### Citation Format
Always cite evidence with file:line references:
```
**Evidence**: The validation function at `src/validators/user.ts:87`
does not check for empty strings, only null/undefined. This allows
empty email addresses to pass validation.
```
### Confidence Levels
| Level | Criteria |
| ------------------- | ----------------------------------------------------------------------------------- |
| **High (>80%)** | Multiple direct evidence pieces, clear causal chain, no contradicting evidence |
| **Medium (50-80%)** | Some direct evidence, plausible causal chain, minor ambiguities |
| **Low (<50%)** | Mostly correlational evidence, incomplete causal chain, some contradicting evidence |
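The thresholds above can be encoded directly. Treating the exact 80% and 50% boundaries as Medium is an assumption; the table leaves boundary handling open:

```typescript
// Map a numeric confidence estimate (0-100) to the level names in the
// table above. Boundary handling (80 and 50 both map to Medium) is an
// assumption, since the table ranges leave the exact cutoffs open.
type ConfidenceLevel = "High" | "Medium" | "Low";

function confidenceLevel(percent: number): ConfidenceLevel {
  if (percent > 80) return "High";
  if (percent >= 50) return "Medium";
  return "Low";
}
```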
## Result Arbitration Protocol
After all investigators report:
### Step 1: Categorize Results
- **Confirmed**: High confidence, strong evidence, clear causal chain
- **Plausible**: Medium confidence, some evidence, reasonable causal chain
- **Falsified**: Evidence contradicts the hypothesis
- **Inconclusive**: Insufficient evidence to confirm or falsify
### Step 2: Compare Confirmed Hypotheses
If multiple hypotheses are confirmed, rank by:
1. Confidence level
2. Number of supporting evidence pieces
3. Strength of causal chain
4. Absence of contradicting evidence
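The four ranking criteria can be sketched as a comparator. The field names are illustrative, not part of any report schema:

```typescript
// Rank confirmed hypotheses by the criteria above: confidence first,
// then supporting evidence count, then causal chain strength, then
// fewest contradictions. Field names are illustrative assumptions.
interface ConfirmedHypothesis {
  title: string;
  confidence: number;          // 0-100 estimate
  supportingEvidence: number;  // count of evidence pieces
  causalChainComplete: boolean;
  contradictingEvidence: number;
}

function rankHypotheses(hs: ConfirmedHypothesis[]): ConfirmedHypothesis[] {
  return [...hs].sort((a, b) =>
    b.confidence - a.confidence ||
    b.supportingEvidence - a.supportingEvidence ||
    Number(b.causalChainComplete) - Number(a.causalChainComplete) ||
    a.contradictingEvidence - b.contradictingEvidence
  );
}
```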
### Step 3: Determine Root Cause
- If one hypothesis clearly dominates: declare as root cause
- If multiple hypotheses are equally likely: treat it as a possible compound issue (multiple contributing causes)
- If no hypotheses confirmed: generate new hypotheses based on evidence gathered
### Step 4: Validate Fix
Before declaring the bug fixed:
- [ ] Fix addresses the identified root cause
- [ ] Fix doesn't introduce new issues
- [ ] Original reproduction case no longer fails
- [ ] Related edge cases are covered
- [ ] Relevant tests are added or updated

View File

@@ -0,0 +1,120 @@
# Hypothesis Testing Reference
Task templates, evidence formats, and arbitration decision trees for parallel debugging.
## Hypothesis Task Template
```markdown
## Hypothesis Investigation: {Hypothesis Title}
### Hypothesis Statement
{Clear, falsifiable statement about the root cause}
### Failure Mode Category
{Logic Error | Data Issue | State Problem | Integration Failure | Resource Issue | Environment}
### Investigation Scope
- Files to examine: {file list or directory}
- Related tests: {test files}
- Git history: {relevant date range or commits}
### Evidence Criteria
**Confirming evidence** (if I find these, hypothesis is supported):
1. {Observable condition 1}
2. {Observable condition 2}
**Falsifying evidence** (if I find these, hypothesis is wrong):
1. {Observable condition 1}
2. {Observable condition 2}
### Report Format
- Confidence: High/Medium/Low
- Evidence: list with file:line citations
- Causal chain: step-by-step from cause to symptom
- Recommended fix: if confirmed
```
## Evidence Report Template
```markdown
## Investigation Report: {Hypothesis Title}
### Verdict: {Confirmed | Falsified | Inconclusive}
### Confidence: {High (>80%) | Medium (50-80%) | Low (<50%)}
### Confirming Evidence
1. `src/api/users.ts:47` — {description of what was found}
2. `src/middleware/auth.ts:23` — {description}
### Contradicting Evidence
1. `tests/api/users.test.ts:112` — {description of what contradicts}
### Causal Chain (if confirmed)
1. {First cause} →
2. {Intermediate effect} →
3. {Observable symptom}
### Recommended Fix
{Specific code change with location}
### Additional Notes
{Anything discovered that may be relevant to other hypotheses}
```
## Arbitration Decision Tree
```
All investigators reported?
├── NO → Wait for remaining reports
└── YES → Count confirmed hypotheses
├── 0 confirmed
│ ├── Any medium confidence? → Investigate further
│ └── All low/falsified? → Generate new hypotheses
├── 1 confirmed
│ └── High confidence?
│ ├── YES → Declare root cause, propose fix
│ └── NO → Flag as likely cause, recommend verification
└── 2+ confirmed
└── Are they related?
├── YES → Compound issue (multiple contributing causes)
└── NO → Rank by confidence, declare highest as primary
```
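As a sketch, the tree above maps onto a small arbitration function. The report shape and the relatedness check are assumptions:

```typescript
// A sketch of the arbitration decision tree above. Assumes all
// investigators have already reported; the Report shape and the
// "related" flag are illustrative assumptions.
interface Report {
  verdict: "Confirmed" | "Falsified" | "Inconclusive";
  confidence: "High" | "Medium" | "Low";
}

function arbitrate(reports: Report[], related = false): string {
  const confirmed = reports.filter(r => r.verdict === "Confirmed");
  if (confirmed.length === 0) {
    // 0 confirmed: medium-confidence leads warrant more digging
    return reports.some(r => r.confidence === "Medium")
      ? "Investigate further"
      : "Generate new hypotheses";
  }
  if (confirmed.length === 1) {
    return confirmed[0].confidence === "High"
      ? "Declare root cause, propose fix"
      : "Flag as likely cause, recommend verification";
  }
  // 2+ confirmed: related means compound issue
  return related
    ? "Compound issue (multiple contributing causes)"
    : "Rank by confidence, declare highest as primary";
}
```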
## Common Hypothesis Patterns by Error Type
### "500 Internal Server Error"
1. Unhandled exception in request handler (Logic Error)
2. Database connection failure (Resource Issue)
3. Missing environment variable (Environment)
### "Race condition / intermittent failure"
1. Shared state mutation without locking (State Problem)
2. Async operation ordering assumption (Logic Error)
3. Cache staleness window (State Problem)
### "Works locally, fails in production"
1. Environment variable mismatch (Environment)
2. Different dependency version (Environment)
3. Resource limits (memory, connections) (Resource Issue)
### "Regression after deploy"
1. New code introduced bug (Logic Error)
2. Configuration change (Integration Failure)
3. Database migration issue (Data Issue)

View File

@@ -0,0 +1,152 @@
---
name: parallel-feature-development
description: Coordinate parallel feature development with file ownership strategies, conflict avoidance rules, and integration patterns for multi-agent implementation. Use this skill when decomposing features for parallel development, establishing file ownership boundaries, or managing integration between parallel work streams.
version: 1.0.2
---
# Parallel Feature Development
Strategies for decomposing features into parallel work streams, establishing file ownership boundaries, avoiding conflicts, and integrating results from multiple implementer agents.
## When to Use This Skill
- Decomposing a feature for parallel implementation
- Establishing file ownership boundaries between agents
- Designing interface contracts between parallel work streams
- Choosing integration strategies (vertical slice vs horizontal layer)
- Managing branch and merge workflows for parallel development
## File Ownership Strategies
### By Directory
Assign each implementer ownership of specific directories:
```
implementer-1: src/components/auth/
implementer-2: src/api/auth/
implementer-3: tests/auth/
```
**Best for**: Well-organized codebases with clear directory boundaries.
### By Module
Assign ownership of logical modules (which may span directories):
```
implementer-1: Authentication module (login, register, logout)
implementer-2: Authorization module (roles, permissions, guards)
```
**Best for**: Feature-oriented architectures, domain-driven design.
### By Layer
Assign ownership of architectural layers:
```
implementer-1: UI layer (components, styles, layouts)
implementer-2: Business logic layer (services, validators)
implementer-3: Data layer (models, repositories, migrations)
```
**Best for**: Traditional MVC/layered architectures.
## Conflict Avoidance Rules
### The Cardinal Rule
**One owner per file.** No file should be assigned to multiple implementers.
### When Files Must Be Shared
If a file genuinely needs changes from multiple implementers:
1. **Designate a single owner** — One implementer owns the file
2. **Other implementers request changes** — Message the owner with specific change requests
3. **Owner applies changes sequentially** — Prevents merge conflicts
4. **Alternative: Extract interfaces** — Create a separate interface file that the non-owner can import without modifying
### Interface Contracts
When implementers need to coordinate at boundaries:
```typescript
// src/types/auth-contract.ts (owned by team-lead, read-only for implementers)
export interface AuthResponse {
token: string;
user: UserProfile;
expiresAt: number;
}
export interface AuthService {
login(email: string, password: string): Promise<AuthResponse>;
register(data: RegisterData): Promise<AuthResponse>;
}
```
Both implementers import from the contract file but neither modifies it.
## Integration Patterns
### Vertical Slice
Each implementer builds a complete feature slice (UI + API + tests):
```
implementer-1: Login feature (login form + login API + login tests)
implementer-2: Register feature (register form + register API + register tests)
```
**Pros**: Each slice is independently testable, minimal integration needed.
**Cons**: May duplicate shared utilities, harder with tightly coupled features.
### Horizontal Layer
Each implementer builds one layer across all features:
```
implementer-1: All UI components (login form, register form, profile page)
implementer-2: All API endpoints (login, register, profile)
implementer-3: All tests (unit, integration, e2e)
```
**Pros**: Consistent patterns within each layer, natural specialization.
**Cons**: More integration points; the test layer depends on the UI and API layers.
### Hybrid
Mix vertical and horizontal based on coupling:
```
implementer-1: Login feature (vertical slice — UI + API + tests)
implementer-2: Shared auth infrastructure (horizontal — middleware, JWT utils, types)
```
**Best for**: Most real-world features with some shared infrastructure.
## Branch Management
### Single Branch Strategy
All implementers work on the same feature branch:
- Simple setup, no merge overhead
- Requires strict file ownership to avoid conflicts
- Best for: small teams (2-3), well-defined boundaries
### Multi-Branch Strategy
Each implementer works on a sub-branch:
```
feature/auth
├── feature/auth-login (implementer-1)
├── feature/auth-register (implementer-2)
└── feature/auth-tests (implementer-3)
```
- More isolation, explicit merge points
- Higher overhead, merge conflicts still possible in shared files
- Best for: larger teams (4+), complex features

View File

@@ -0,0 +1,80 @@
# File Ownership Decision Framework
How to assign file ownership when decomposing features for parallel development.
## Ownership Decision Process
### Step 1: Map All Files
List every file that needs to be created or modified for the feature.
### Step 2: Identify Natural Clusters
Group files by:
- Directory proximity (files in the same directory)
- Functional relationship (files that import each other)
- Layer membership (all UI files, all API files)
### Step 3: Assign Clusters to Owners
Each cluster becomes one implementer's ownership boundary:
- No file appears in multiple clusters
- Each cluster is internally cohesive
- Cross-cluster dependencies are minimized
### Step 4: Define Interface Points
Where clusters interact, define:
- Shared type definitions (owned by lead or a designated implementer)
- API contracts (function signatures, request/response shapes)
- Event contracts (event names and payload shapes)
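Step 2's directory-proximity grouping can be sketched as follows. This is a simplification that ignores import relationships and layer membership:

```typescript
// Cluster files by their top-level source directory (e.g. "src/api").
// A starting point for Step 2; real clustering would also weigh
// import relationships and layer membership.
function clusterByDirectory(files: string[]): Map<string, string[]> {
  const clusters = new Map<string, string[]>();
  for (const file of files) {
    const parts = file.split("/");
    // Use the first two path segments as the cluster key
    const key = parts.slice(0, 2).join("/");
    const bucket = clusters.get(key) ?? [];
    bucket.push(file);
    clusters.set(key, bucket);
  }
  return clusters;
}
```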
## Ownership by Project Type
### React/Next.js Frontend
```
implementer-1: src/components/{feature}/ (UI components)
implementer-2: src/hooks/{feature}/ (custom hooks, state)
implementer-3: src/api/{feature}/ (API client, types)
shared: src/types/{feature}.ts (owned by lead)
```
### Express/Fastify Backend
```
implementer-1: src/routes/{feature}.ts, src/controllers/{feature}.ts
implementer-2: src/services/{feature}.ts, src/validators/{feature}.ts
implementer-3: src/models/{feature}.ts, src/repositories/{feature}.ts
shared: src/types/{feature}.ts (owned by lead)
```
### Full-Stack (Next.js)
```
implementer-1: app/{feature}/page.tsx, app/{feature}/components/
implementer-2: app/api/{feature}/route.ts, lib/{feature}/
implementer-3: tests/{feature}/
shared: types/{feature}.ts (owned by lead)
```
### Python Django
```
implementer-1: {app}/views.py, {app}/urls.py, {app}/forms.py
implementer-2: {app}/models.py, {app}/serializers.py, {app}/managers.py
implementer-3: {app}/tests/
shared: {app}/types.py (owned by lead)
```
## Conflict Resolution
When two implementers need to modify the same file:
1. **Preferred: Split the file** — Extract the shared concern into its own file
2. **If can't split: Designate one owner** — The other implementer sends change requests
3. **Last resort: Sequential access** — Implementer A finishes, then implementer B takes over
4. **Never**: Let both modify the same file simultaneously

View File

@@ -0,0 +1,75 @@
# Integration and Merge Strategies
Patterns for integrating parallel work streams and resolving conflicts.
## Integration Patterns
### Pattern 1: Direct Integration
All implementers commit to the same branch; integration happens naturally.
```
feature/auth ← implementer-1 commits
← implementer-2 commits
← implementer-3 commits
```
**When to use**: Small teams (2-3), strict file ownership (no conflicts expected).
### Pattern 2: Sub-Branch Integration
Each implementer works on a sub-branch; lead merges them sequentially.
```
feature/auth
├── feature/auth-login ← implementer-1
├── feature/auth-register ← implementer-2
└── feature/auth-tests ← implementer-3
```
Merge order: follow dependency graph (foundation → dependent → integration).
**When to use**: Larger teams (4+), overlapping concerns, need for review gates.
### Pattern 3: Trunk-Based with Feature Flags
All implementers commit to the main branch behind a feature flag.
```
main ← all implementers commit
← feature flag gates new code
```
**When to use**: CI/CD environments, short-lived features, continuous deployment.
## Integration Verification Checklist
After all implementers complete:
1. **Build check**: Does the code compile/bundle without errors?
2. **Type check**: Do TypeScript/type annotations pass?
3. **Lint check**: Does the code pass linting rules?
4. **Unit tests**: Do all unit tests pass?
5. **Integration tests**: Do cross-component tests pass?
6. **Interface verification**: Do all interface contracts match their implementations?
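The checklist can be run as a fail-fast script. The command strings below are placeholders; substitute your project's actual scripts:

```typescript
// Fail-fast integration verification: run each check in order and
// report the first failure. Command strings are placeholder
// assumptions; substitute your project's real scripts.
type Check = { name: string; cmd: string };

const checks: Check[] = [
  { name: "Build", cmd: "npm run build" },
  { name: "Types", cmd: "npx tsc --noEmit" },
  { name: "Lint", cmd: "npm run lint" },
  { name: "Unit tests", cmd: "npm test" },
  { name: "Integration tests", cmd: "npm run test:integration" },
];

// Returns the name of the first failing check, or null if all pass.
function runChecks(list: Check[], run: (cmd: string) => boolean): string | null {
  for (const c of list) if (!run(c.cmd)) return c.name;
  return null;
}

// Shell wiring (Node):
// import { execSync } from "node:child_process";
// const failed = runChecks(checks, (cmd) => {
//   try { execSync(cmd, { stdio: "inherit" }); return true; }
//   catch { return false; }
// });
```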
## Conflict Resolution
### Prevention (Best)
- Strict file ownership eliminates most conflicts
- Interface contracts define boundaries before implementation
- Shared type files are owned by the lead and modified sequentially
### Detection
- Git merge will report conflicts if they occur
- TypeScript/lint errors indicate interface mismatches
- Test failures indicate behavioral conflicts
### Resolution Strategies
1. **Contract wins**: If code doesn't match the interface contract, the code is wrong
2. **Lead arbitrates**: The team lead decides which implementation to keep
3. **Tests decide**: The implementation that passes tests is correct
4. **Merge manually**: For complex conflicts, the lead merges by hand

View File

@@ -0,0 +1,163 @@
---
name: task-coordination-strategies
description: Decompose complex tasks, design dependency graphs, and coordinate multi-agent work with proper task descriptions and workload balancing. Use this skill when breaking down work for agent teams, managing task dependencies, or monitoring team progress.
version: 1.0.2
---
# Task Coordination Strategies
Strategies for decomposing complex tasks into parallelizable units, designing dependency graphs, writing effective task descriptions, and monitoring workload across agent teams.
## When to Use This Skill
- Breaking down a complex task for parallel execution
- Designing task dependency relationships (blockedBy/blocks)
- Writing task descriptions with clear acceptance criteria
- Monitoring and rebalancing workload across teammates
- Identifying the critical path in a multi-task workflow
## Task Decomposition Strategies
### By Layer
Split work by architectural layer:
- Frontend components
- Backend API endpoints
- Database migrations/models
- Test suites
**Best for**: Full-stack features, vertical slices
### By Component
Split work by functional component:
- Authentication module
- User profile module
- Notification module
**Best for**: Microservices, modular architectures
### By Concern
Split work by cross-cutting concern:
- Security review
- Performance review
- Architecture review
**Best for**: Code reviews, audits
### By File Ownership
Split work by file/directory boundaries:
- `src/components/` — Implementer 1
- `src/api/` — Implementer 2
- `src/utils/` — Implementer 3
**Best for**: Parallel implementation, conflict avoidance
## Dependency Graph Design
### Principles
1. **Minimize chain depth** — Prefer wide, shallow graphs over deep chains
2. **Identify the critical path** — The longest chain determines minimum completion time
3. **Use blockedBy sparingly** — Only add dependencies that are truly required
4. **Avoid circular dependencies** — Task A blocks B blocks A is a deadlock
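Principles 2 and 4 can be checked mechanically. A sketch that computes the critical path length and throws on a circular dependency, assuming the graph is given as a blockedBy map:

```typescript
// Compute the critical path length (number of tasks in the longest
// blockedBy chain) and detect circular dependencies via DFS.
// Graph shape is an assumption: each task id maps to the ids it
// waits on (its blockedBy list).
function criticalPathLength(blockedBy: Record<string, string[]>): number {
  const memo = new Map<string, number>();
  const visiting = new Set<string>();

  function depth(id: string): number {
    if (memo.has(id)) return memo.get(id)!;
    if (visiting.has(id)) throw new Error(`Circular dependency at ${id}`);
    visiting.add(id);
    const deps = blockedBy[id] ?? [];
    // A task's depth is 1 plus the deepest task it waits on
    const d = 1 + (deps.length ? Math.max(...deps.map(depth)) : 0);
    visiting.delete(id);
    memo.set(id, d);
    return d;
  }

  return Math.max(0, ...Object.keys(blockedBy).map(depth));
}
```

For the diamond pattern (B and C blocked by A, D blocked by B and C) this yields 3, matching the intuition that the longest chain sets the minimum completion time.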
### Patterns
**Independent (Best parallelism)**:
```
Task A ─┐
Task B ─┼─→ Integration
Task C ─┘
```
**Sequential (Necessary dependencies)**:
```
Task A → Task B → Task C
```
**Diamond (Mixed)**:
```
┌→ Task B ─┐
Task A ─┤ ├→ Task D
└→ Task C ─┘
```
### Using blockedBy/blocks
```
TaskCreate: { subject: "Build API endpoints" } → Task #1
TaskCreate: { subject: "Build frontend components" } → Task #2
TaskCreate: { subject: "Integration testing" } → Task #3
TaskUpdate: { taskId: "3", addBlockedBy: ["1", "2"] } → #3 waits for #1 and #2
```
## Task Description Best Practices
Every task should include:
1. **Objective** — What needs to be accomplished (1-2 sentences)
2. **Owned Files** — Explicit list of files/directories this teammate may modify
3. **Requirements** — Specific deliverables or behaviors expected
4. **Interface Contracts** — How this work connects to other teammates' work
5. **Acceptance Criteria** — How to verify the task is done correctly
6. **Scope Boundaries** — What is explicitly out of scope
### Template
```
## Objective
Build the user authentication API endpoints.
## Owned Files
- src/api/auth.ts
- src/api/middleware/auth-middleware.ts
- src/types/auth.ts (shared — read only, do not modify)
## Requirements
- POST /api/login — accepts email/password, returns JWT
- POST /api/register — creates new user, returns JWT
- GET /api/me — returns current user profile (requires auth)
## Interface Contract
- Import User type from src/types/auth.ts (owned by implementer-1)
- Export AuthResponse type for frontend consumption
## Acceptance Criteria
- All endpoints return proper HTTP status codes
- JWT tokens expire after 24 hours
- Passwords are hashed with bcrypt
## Out of Scope
- OAuth/social login
- Password reset flow
- Rate limiting
```
## Workload Monitoring
### Indicators of Imbalance
| Signal | Meaning | Action |
| -------------------------- | ------------------- | --------------------------- |
| Teammate idle, others busy | Uneven distribution | Reassign pending tasks |
| Teammate stuck on one task | Possible blocker | Check in, offer help |
| All tasks blocked | Dependency issue | Resolve critical path first |
| One teammate has 3x others | Overloaded | Split tasks or reassign |
### Rebalancing Steps
1. Call `TaskList` to assess current state
2. Identify idle or overloaded teammates
3. Use `TaskUpdate` to reassign tasks
4. Use `SendMessage` to notify affected teammates
5. Monitor for improved throughput
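TaskList, TaskUpdate, and SendMessage are tool calls rather than a code API, but the idle-teammate check from step 2 can be modeled as follows. The task shape is hypothetical:

```typescript
// Model the "teammate idle, others busy" signal from the table above.
// Task shape is a hypothetical stand-in for TaskList output; the
// fixes themselves are TaskUpdate/SendMessage tool calls.
interface Task {
  owner: string | null;
  status: "pending" | "in_progress" | "blocked" | "done";
}

// Teammates with no in-progress task are candidates for new work.
function idleTeammates(tasks: Task[], teammates: string[]): string[] {
  const busy = new Set(
    tasks
      .filter(t => t.status === "in_progress" && t.owner)
      .map(t => t.owner as string)
  );
  return teammates.filter(name => !busy.has(name));
}
```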

View File

@@ -0,0 +1,97 @@
# Dependency Graph Patterns
Visual patterns for task dependency design with trade-offs.
## Pattern 1: Fully Independent (Maximum Parallelism)
```
Task A ─┐
Task B ─┼─→ Final Integration
Task C ─┘
```
- **Parallelism**: Maximum — all tasks run simultaneously
- **Risk**: Integration may reveal incompatibilities late
- **Use when**: Tasks operate on completely separate files/modules
- **TaskCreate**: No blockedBy relationships; integration task blocked by all
## Pattern 2: Sequential Chain (No Parallelism)
```
Task A → Task B → Task C → Task D
```
- **Parallelism**: None — each task waits for the previous
- **Risk**: Bottleneck at each step; one delay cascades
- **Use when**: Each task depends on the output of the previous (avoid if possible)
- **TaskCreate**: Each task blockedBy the previous
## Pattern 3: Diamond (Shared Foundation)
```
        ┌→ Task B ─┐
Task A ─┤          ├→ Task D
        └→ Task C ─┘
```
- **Parallelism**: B and C run in parallel after A completes
- **Risk**: A is a bottleneck; D must wait for both B and C
- **Use when**: B and C both need output from A (e.g., shared types)
- **TaskCreate**: B and C blockedBy A; D blockedBy B and C
## Pattern 4: Fork-Join (Phased Parallelism)
```
Phase 1: A1, A2, A3 (parallel)
────────────
Phase 2: B1, B2 (parallel, after phase 1)
────────────
Phase 3: C1 (after phase 2)
```
- **Parallelism**: Within each phase, tasks are parallel
- **Risk**: Phase boundaries add synchronization delays
- **Use when**: Natural phases with dependencies (build → test → deploy)
- **TaskCreate**: Phase 2 tasks blockedBy all Phase 1 tasks
## Pattern 5: Pipeline (Streaming)
```
Task A ──→ Task B ──→ Task C
└──→ Task D ──→ Task E
```
- **Parallelism**: Two parallel chains
- **Risk**: Chains may diverge in approach
- **Use when**: Two independent feature branches from a common starting point
- **TaskCreate**: B blockedBy A; D blockedBy A; C blockedBy B; E blockedBy D
## Anti-Patterns
### Circular Dependency (Deadlock)
```
Task A → Task B → Task C → Task A ✗ DEADLOCK
```
**Fix**: Extract shared dependency into a separate task that all three depend on.
### Unnecessary Dependencies
```
Task A → Task B → Task C
(where B doesn't actually need A's output)
```
**Fix**: Remove the blockedBy relationship; let B run independently.
### Star Pattern (Single Bottleneck)
```
┌→ B
A → ├→ C → F
├→ D
└→ E
```
**Fix**: A is a single bottleneck, so any delay in A stalls every downstream task. Split A's work into smaller tasks that can complete and unblock downstream work sooner.

View File

@@ -0,0 +1,98 @@
# Task Decomposition Examples
Practical examples of decomposing features into parallelizable tasks with clear ownership.
## Example 1: User Authentication Feature
### Feature Description
Add email/password authentication with login, registration, and profile pages.
### Decomposition (Vertical Slices)
**Stream 1: Login Flow** (implementer-1)
- Owned files: `src/pages/login.tsx`, `src/api/login.ts`, `tests/login.test.ts`
- Requirements: Login form, API endpoint, input validation, error handling
- Interface: Imports `AuthResponse` from `src/types/auth.ts`
**Stream 2: Registration Flow** (implementer-2)
- Owned files: `src/pages/register.tsx`, `src/api/register.ts`, `tests/register.test.ts`
- Requirements: Registration form, API endpoint, email validation, password strength
- Interface: Imports `AuthResponse` from `src/types/auth.ts`
**Stream 3: Shared Infrastructure** (implementer-3)
- Owned files: `src/types/auth.ts`, `src/middleware/auth.ts`, `src/utils/jwt.ts`
- Requirements: Type definitions, JWT middleware, token utilities
- Dependencies: None (other streams depend on this)
### Dependency Graph
```
Stream 3 (types/middleware) ──→ Stream 1 (login)
└→ Stream 2 (registration)
```
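Expressed in the TaskCreate/TaskUpdate notation used by this plugin's coordination skills, the graph becomes:

```
TaskCreate: { subject: "Shared infrastructure (Stream 3)" } → Task #1
TaskCreate: { subject: "Login flow (Stream 1)" } → Task #2
TaskCreate: { subject: "Registration flow (Stream 2)" } → Task #3
TaskUpdate: { taskId: "2", addBlockedBy: ["1"] } → login waits for shared types
TaskUpdate: { taskId: "3", addBlockedBy: ["1"] } → registration waits for shared types
```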
## Example 2: REST API Endpoints
### Feature Description
Add CRUD endpoints for a new "Projects" resource.
### Decomposition (By Layer)
**Stream 1: Data Layer** (implementer-1)
- Owned files: `src/models/project.ts`, `src/migrations/add-projects.ts`, `src/repositories/project-repo.ts`
- Requirements: Schema definition, migration, repository pattern
- Dependencies: None
**Stream 2: Business Logic** (implementer-2)
- Owned files: `src/services/project-service.ts`, `src/validators/project-validator.ts`
- Requirements: CRUD operations, validation rules, business logic
- Dependencies: Blocked by Stream 1 (needs model/repository)
**Stream 3: API Layer** (implementer-3)
- Owned files: `src/routes/projects.ts`, `src/controllers/project-controller.ts`
- Requirements: REST endpoints, request parsing, response formatting
- Dependencies: Blocked by Stream 2 (needs service layer)
## Task Template
```markdown
## Task: {Stream Name}
### Objective
{1-2 sentence description of what to build}
### Owned Files
- {file1} — {purpose}
- {file2} — {purpose}
### Requirements
1. {Specific deliverable 1}
2. {Specific deliverable 2}
3. {Specific deliverable 3}
### Interface Contract
- Exports: {types/functions this stream provides}
- Imports: {types/functions this stream consumes from other streams}
### Acceptance Criteria
- [ ] {Verifiable criterion 1}
- [ ] {Verifiable criterion 2}
- [ ] {Verifiable criterion 3}
### Out of Scope
- {Explicitly excluded work}
```

View File

@@ -0,0 +1,155 @@
---
name: team-communication-protocols
description: Structured messaging protocols for agent team communication including message type selection, plan approval, shutdown procedures, and anti-patterns to avoid. Use this skill when establishing team communication norms, handling plan approvals, or managing team shutdown.
version: 1.0.2
---
# Team Communication Protocols
Protocols for effective communication between agent teammates, including message type selection, plan approval workflows, shutdown procedures, and common anti-patterns to avoid.
## When to Use This Skill
- Establishing communication norms for a new team
- Choosing between message types (message, broadcast, shutdown_request)
- Handling plan approval workflows
- Managing graceful team shutdown
- Discovering teammate identities and capabilities
## Message Type Selection
### `message` (Direct Message) — Default Choice
Send to a single specific teammate:
```json
{
"type": "message",
"recipient": "implementer-1",
"content": "Your API endpoint is ready. You can now build the frontend form.",
"summary": "API endpoint ready for frontend"
}
```
**Use for**: Task updates, coordination, questions, integration notifications.
### `broadcast` — Use Sparingly
Send to ALL teammates simultaneously:
```json
{
"type": "broadcast",
"content": "Critical: shared types file has been updated. Pull latest before continuing.",
"summary": "Shared types updated"
}
```
**Use ONLY for**: Critical blockers affecting everyone, major changes to shared resources.
**Why sparingly?** Each broadcast sends N separate messages (one per teammate), consuming API resources proportional to team size.
### `shutdown_request` — Graceful Termination
Request a teammate to shut down:
```json
{
"type": "shutdown_request",
"recipient": "reviewer-1",
"content": "Review complete, shutting down team."
}
```
The teammate responds with `shutdown_response` (approve or reject with reason).
## Communication Anti-Patterns
| Anti-Pattern | Problem | Better Approach |
| --------------------------------------- | ---------------------------------------- | -------------------------------------- |
| Broadcasting routine updates | Wastes resources, noise | Direct message to affected teammate |
| Sending JSON status messages             | Messages carry prose, not structured data | Use TaskUpdate to update task status   |
| Not communicating at integration points | Teammates build against stale interfaces | Message when your interface is ready |
| Micromanaging via messages | Overwhelms teammates, slows work | Check in at milestones, not every step |
| Using UUIDs instead of names | Hard to read, error-prone | Always use teammate names |
| Ignoring idle teammates | Wasted capacity | Assign new work or shut down |
## Plan Approval Workflow
When a teammate is spawned with `plan_mode_required`:
1. Teammate creates a plan using read-only exploration tools
2. Teammate calls `ExitPlanMode` which sends a `plan_approval_request` to the lead
3. Lead reviews the plan
4. Lead responds with `plan_approval_response`:
**Approve**:
```json
{
"type": "plan_approval_response",
"request_id": "abc-123",
"recipient": "implementer-1",
"approve": true
}
```
**Reject with feedback**:
```json
{
"type": "plan_approval_response",
"request_id": "abc-123",
"recipient": "implementer-1",
"approve": false,
"content": "Please add error handling for the API calls"
}
```
## Shutdown Protocol
### Graceful Shutdown Sequence
1. **Lead sends shutdown_request** to each teammate
2. **Teammate receives request** as a JSON message with `type: "shutdown_request"`
3. **Teammate responds** with `shutdown_response`:
- `approve: true` — Teammate saves state and exits
- `approve: false` + reason — Teammate continues working
4. **Lead handles rejections** — Wait for teammate to finish, then retry
5. **After all teammates shut down** — Call `Teammate` cleanup
### Handling Rejections
If a teammate rejects shutdown:
- Check their reason (usually "still working on task")
- Wait for their current task to complete
- Retry shutdown request
- If urgent, user can force shutdown
## Teammate Discovery
Find team members by reading the config file:
**Location**: `~/.claude/teams/{team-name}/config.json`
**Structure**:
```json
{
"members": [
{
"name": "security-reviewer",
"agentId": "uuid-here",
"agentType": "team-reviewer"
},
{
"name": "perf-reviewer",
"agentId": "uuid-here",
"agentType": "team-reviewer"
}
]
}
```
**Always use `name`** for messaging and task assignment. Never use `agentId` directly.
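A sketch of reading that structure; the path and field names follow the example above:

```typescript
// Read a team config and return teammate names for messaging and
// task assignment. Path and field names follow the documented
// structure: ~/.claude/teams/{team-name}/config.json
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

interface Member {
  name: string;
  agentId: string;
  agentType: string;
}

// Parse the config JSON and return member names (never agentIds).
function memberNames(configJson: string): string[] {
  const config = JSON.parse(configJson) as { members: Member[] };
  return config.members.map(m => m.name);
}

// Read from the documented location on disk.
function teamMemberNames(teamName: string): string[] {
  const path = join(homedir(), ".claude", "teams", teamName, "config.json");
  return memberNames(readFileSync(path, "utf8"));
}
```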

View File

@@ -0,0 +1,112 @@
# Messaging Pattern Templates
Ready-to-use message templates for common team communication scenarios.
## Task Assignment
```
You've been assigned task #{id}: {subject}.
Owned files:
- {file1}
- {file2}
Key requirements:
- {requirement1}
- {requirement2}
Interface contract:
- Import {types} from {shared-file}
- Export {types} for {other-teammate}
Let me know if you have questions or blockers.
```
## Integration Point Notification
```
My side of the {interface-name} interface is complete.
Exported from {file}:
- {function/type 1}
- {function/type 2}
You can now import these in your owned files. The contract matches what we agreed on.
```
## Blocker Report
```
I'm blocked on task #{id}: {subject}.
Blocker: {description of what's preventing progress}
Impact: {what can't be completed until this is resolved}
Options:
1. {option 1}
2. {option 2}
Waiting for your guidance.
```
## Task Completion Report
```
Task #{id} complete: {subject}
Changes made:
- {file1}: {what changed}
- {file2}: {what changed}
Integration notes:
- {any interface changes or considerations for other teammates}
Ready for next assignment.
```
## Review Finding Summary
```
Review complete for {target} ({dimension} dimension).
Summary:
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
Top finding: {brief description of most important finding}
Full findings attached to task #{id}.
```
## Investigation Report Summary
```
Investigation complete for hypothesis: {hypothesis summary}
Verdict: {Confirmed | Falsified | Inconclusive}
Confidence: {High | Medium | Low}
Key evidence:
- {file:line}: {what was found}
- {file:line}: {what was found}
{If confirmed}: Recommended fix: {brief fix description}
{If falsified}: Contradicting evidence: {brief description}
Full report attached to task #{id}.
```
## Shutdown Acknowledgment
When you receive a shutdown request, respond with the shutdown_response tool. Before approving, you may also send a final status message:
```
Wrapping up. Current status:
- Task #{id}: {completed/in-progress}
- Files modified: {list}
- Pending work: {none or description}
Ready for shutdown.
```

View File

@@ -0,0 +1,119 @@
---
name: team-composition-patterns
description: Design optimal agent team compositions with sizing heuristics, preset configurations, and agent type selection. Use this skill when deciding team size, selecting agent types, or configuring team presets for multi-agent workflows.
version: 1.0.2
---
# Team Composition Patterns
Best practices for composing multi-agent teams, selecting team sizes, choosing agent types, and configuring display modes for Claude Code's Agent Teams feature.
## When to Use This Skill
- Deciding how many teammates to spawn for a task
- Choosing between preset team configurations
- Selecting the right agent type (subagent_type) for each role
- Configuring teammate display modes (tmux, iTerm2, in-process)
- Building custom team compositions for non-standard workflows
## Team Sizing Heuristics
| Complexity | Team Size | When to Use |
| ------------ | --------- | ----------------------------------------------------------- |
| Simple | 1-2 | Single-dimension review, isolated bug, small feature |
| Moderate | 2-3 | Multi-file changes, 2-3 concerns, medium features |
| Complex | 3-4 | Cross-cutting concerns, large features, deep debugging |
| Very Complex | 4-5 | Full-stack features, comprehensive reviews, systemic issues |
**Rule of thumb**: Start with the smallest team that covers all required dimensions. Adding teammates increases coordination overhead.
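The rule of thumb can be stated as a one-liner; the cap of 5 follows the table's largest tier:

```typescript
// Smallest team that covers all required dimensions, capped at the
// 5-teammate ceiling from the sizing table above. The clamp values
// are taken from the table, not a hard platform limit.
function teamSize(dimensions: string[]): number {
  return Math.min(Math.max(dimensions.length, 1), 5);
}
```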
## Preset Team Compositions
### Review Team
- **Size**: 3 reviewers
- **Agents**: 3x `team-reviewer`
- **Default dimensions**: security, performance, architecture
- **Use when**: Code changes need multi-dimensional quality assessment
### Debug Team
- **Size**: 3 investigators
- **Agents**: 3x `team-debugger`
- **Default hypotheses**: 3 competing hypotheses
- **Use when**: Bug has multiple plausible root causes
### Feature Team
- **Size**: 3 (1 lead + 2 implementers)
- **Agents**: 1x `team-lead` + 2x `team-implementer`
- **Use when**: Feature can be decomposed into parallel work streams
### Fullstack Team
- **Size**: 4 (1 lead + 3 implementers)
- **Agents**: 1x `team-lead` + 1x frontend `team-implementer` + 1x backend `team-implementer` + 1x test `team-implementer`
- **Use when**: Feature spans frontend, backend, and test layers
### Research Team
- **Size**: 3 researchers
- **Agents**: 3x `general-purpose`
- **Default areas**: Each assigned a different research question, module, or topic
- **Capabilities**: Codebase search (Grep, Glob, Read), web search (WebSearch, WebFetch)
- **Use when**: Need to understand a codebase, research libraries, compare approaches, or gather information from code and web sources in parallel
### Security Team
- **Size**: 4 reviewers
- **Agents**: 4x `team-reviewer`
- **Default dimensions**: OWASP/vulnerabilities, auth/access control, dependencies/supply chain, secrets/configuration
- **Use when**: Comprehensive security audit covering multiple attack surfaces
### Migration Team
- **Size**: 4 (1 lead + 2 implementers + 1 reviewer)
- **Agents**: 1x `team-lead` + 2x `team-implementer` + 1x `team-reviewer`
- **Use when**: Large codebase migration (framework upgrade, language port, API version bump) requiring parallel work with correctness verification
## Agent Type Selection
When spawning teammates with the Task tool, choose `subagent_type` based on what tools the teammate needs:
| Agent Type | Tools Available | Use For |
| ------------------------------ | ----------------------------------------- | ---------------------------------------------------------- |
| `general-purpose` | All tools (Read, Write, Edit, Bash, etc.) | Implementation, debugging, any task requiring file changes |
| `Explore` | Read-only tools (Read, Grep, Glob) | Research, code exploration, analysis |
| `Plan` | Read-only tools | Architecture planning, task decomposition |
| `agent-teams:team-reviewer` | All tools | Code review with structured findings |
| `agent-teams:team-debugger` | All tools | Hypothesis-driven investigation |
| `agent-teams:team-implementer` | All tools | Building features within file ownership boundaries |
| `agent-teams:team-lead` | All tools | Team orchestration and coordination |
**Key distinction**: Read-only agents (Explore, Plan) cannot modify files. Never assign implementation tasks to read-only agents.
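A spawn request might look like the sketch below. The field names follow the Task tool's documented shape (`description`, `prompt`, `subagent_type`); the values and the guard function are hypothetical illustrations of the read-only rule.

```python
# Which agent types cannot modify files (per the table above).
READ_ONLY_TYPES = {"Explore", "Plan"}

def can_implement(subagent_type: str) -> bool:
    """Read-only agents must never receive implementation tasks."""
    return subagent_type not in READ_ONLY_TYPES

# Illustrative Task tool parameters; values are hypothetical.
spawn_request = {
    "description": "Security review of payment flow",
    "subagent_type": "agent-teams:team-reviewer",
    "prompt": (
        "Dimension: security\n"
        "Target: src/payments/\n"
        "Output: findings with file:line, severity, evidence, fix"
    ),
}
```

Checking `can_implement(spawn_request["subagent_type"])` before dispatch catches the common mistake of assigning file-modifying work to `Explore` or `Plan`.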
## Display Mode Configuration
Configure in `~/.claude/settings.json`:
```json
{
"teammateMode": "tmux"
}
```
| Mode | Behavior | Best For |
| -------------- | ------------------------------ | ------------------------------------------------- |
| `"tmux"` | Each teammate in a tmux pane | Development workflows, monitoring multiple agents |
| `"iterm2"` | Each teammate in an iTerm2 tab | macOS users who prefer iTerm2 |
| `"in-process"` | All teammates in same process | Simple tasks, CI/CD environments |
## Custom Team Guidelines
When building custom teams:
1. **Every team needs a coordinator** — Either designate a `team-lead` or have the user coordinate directly
2. **Match roles to agent types** — Use specialized agents (reviewer, debugger, implementer) when available
3. **Avoid duplicate roles** — Two agents doing the same thing wastes resources
4. **Define boundaries upfront** — Each teammate needs clear ownership of files or responsibilities
5. **Keep it small** — 2-4 teammates is the sweet spot; 5+ requires significant coordination overhead
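The guidelines above can be expressed as a roster check. This is a sketch under assumed conventions: the roster shape (`name`, `role`, `owns`) is hypothetical, not a defined schema.

```python
# Sketch: flag violations of the custom-team guidelines.
# Roster shape (name/role/owns) is a hypothetical convention.
def team_warnings(roster: list) -> list:
    warnings = []
    roles = [m["role"] for m in roster]
    if "team-lead" not in roles:
        warnings.append("no coordinator: add a team-lead or coordinate manually")
    if len(roster) >= 5:
        warnings.append("5+ teammates: expect significant coordination overhead")
    unowned = [m["name"] for m in roster if not m.get("owns")]
    if unowned:
        warnings.append("no ownership defined for: " + ", ".join(unowned))
    return warnings
```

An empty return means the roster satisfies the checkable guidelines; role matching and duplicate-role judgment remain manual.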


@@ -0,0 +1,84 @@
# Agent Type Selection Guide
Decision matrix for choosing the right `subagent_type` when spawning teammates.
## Decision Matrix
```
Does the teammate need to modify files?
├── YES → Does it need a specialized role?
│ ├── YES → Which role?
│ │ ├── Code review → agent-teams:team-reviewer
│ │ ├── Bug investigation → agent-teams:team-debugger
│ │ ├── Feature building → agent-teams:team-implementer
│ │ └── Team coordination → agent-teams:team-lead
│ └── NO → general-purpose
└── NO → Does it need deep codebase exploration?
├── YES → Explore
└── NO → Plan (for architecture/design tasks)
```
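The tree above reduces to a small selector function. A minimal sketch, assuming hypothetical role keys (`review`, `debug`, `implement`, `coordinate`) as inputs:

```python
def choose_subagent_type(needs_writes: bool, role: str = "",
                         explores_codebase: bool = False) -> str:
    """Mirror the decision tree: file-modifying tasks get a specialized
    agent or general-purpose; read-only tasks get Explore or Plan."""
    if needs_writes:
        specialized = {
            "review": "agent-teams:team-reviewer",
            "debug": "agent-teams:team-debugger",
            "implement": "agent-teams:team-implementer",
            "coordinate": "agent-teams:team-lead",
        }
        return specialized.get(role, "general-purpose")
    return "Explore" if explores_codebase else "Plan"
```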
## Agent Type Comparison
| Agent Type | Can Read | Can Write | Can Edit | Can Bash | Specialized |
| ---------------------------- | -------- | --------- | -------- | -------- | ------------------ |
| general-purpose | Yes | Yes | Yes | Yes | No |
| Explore | Yes | No | No | No | Search/explore |
| Plan | Yes | No | No | No | Architecture |
| agent-teams:team-lead | Yes | Yes | Yes | Yes | Team orchestration |
| agent-teams:team-reviewer | Yes | Yes | Yes | Yes | Code review |
| agent-teams:team-debugger | Yes | Yes | Yes | Yes | Bug investigation |
| agent-teams:team-implementer | Yes | Yes | Yes | Yes | Feature building |
## Common Mistakes
| Mistake | Why It Fails | Correct Choice |
| ------------------------------------- | ------------------------------ | --------------------------------------- |
| Using `Explore` for implementation | Cannot write/edit files | `general-purpose` or `team-implementer` |
| Using `Plan` for coding tasks | Cannot write/edit files | `general-purpose` or `team-implementer` |
| Using `general-purpose` for reviews | No review structure/checklists | `team-reviewer` |
| Using `team-implementer` for research | Has tools but wrong focus | `Explore` or `Plan` |
## When to Use Each
### general-purpose
- One-off tasks that don't fit specialized roles
- Tasks requiring unique tool combinations
- Ad-hoc scripting or automation
### Explore
- Codebase research and analysis
- Finding files, patterns, or dependencies
- Understanding architecture before planning
### Plan
- Designing implementation approaches
- Creating task decompositions
- Architecture review (read-only)
### team-lead
- Coordinating multiple teammates
- Decomposing work and managing tasks
- Synthesizing results from parallel work
### team-reviewer
- Focused code review on a specific dimension
- Producing structured findings with severity ratings
- Following dimension-specific checklists
### team-debugger
- Investigating a specific hypothesis about a bug
- Gathering evidence with file:line citations
- Reporting confidence levels and causal chains
### team-implementer
- Building code within file ownership boundaries
- Following interface contracts
- Coordinating at integration points


@@ -0,0 +1,265 @@
# Preset Team Definitions
Detailed preset team configurations with task templates for common workflows.
## Review Team Preset
**Command**: `/team-spawn review`
### Configuration
- **Team Size**: 3
- **Agent Type**: `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Dimension | Focus Areas |
| --------------------- | ------------ | ------------------------------------------------- |
| security-reviewer | Security | Input validation, auth, injection, secrets, CVEs |
| performance-reviewer | Performance | Query efficiency, memory, caching, async patterns |
| architecture-reviewer | Architecture | SOLID, coupling, patterns, error handling |
### Task Template
```
Subject: Review {target} for {dimension} issues
Description:
Dimension: {dimension}
Target: {file list or diff}
Checklist: {dimension-specific checklist}
Output format: Structured findings with file:line, severity, evidence, fix
```
### Variations
- **Security-focused**: `--reviewers security,testing` (2 members)
- **Full review**: `--reviewers security,performance,architecture,testing,accessibility` (5 members)
- **Frontend review**: `--reviewers architecture,testing,accessibility` (3 members)
## Debug Team Preset
**Command**: `/team-spawn debug`
### Configuration
- **Team Size**: 3 (default) or N with `--hypotheses N`
- **Agent Type**: `agent-teams:team-debugger`
- **Display Mode**: tmux recommended
### Members
| Name | Role |
| -------------- | ------------------------- |
| investigator-1 | Investigates hypothesis 1 |
| investigator-2 | Investigates hypothesis 2 |
| investigator-3 | Investigates hypothesis 3 |
### Task Template
```
Subject: Investigate hypothesis: {hypothesis summary}
Description:
Hypothesis: {full hypothesis statement}
Scope: {files/module/project}
Evidence criteria:
Confirming: {what would confirm}
Falsifying: {what would falsify}
Report format: confidence level, evidence with file:line, causal chain
```
## Feature Team Preset
**Command**: `/team-spawn feature`
### Configuration
- **Team Size**: 3 (1 lead + 2 implementers)
- **Agent Types**: `agent-teams:team-lead` + `agent-teams:team-implementer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Responsibility |
| ------------- | ---------------- | ---------------------------------------- |
| feature-lead | team-lead | Decomposition, coordination, integration |
| implementer-1 | team-implementer | Work stream 1 (assigned files) |
| implementer-2 | team-implementer | Work stream 2 (assigned files) |
### Task Template
```
Subject: Implement {work stream name}
Description:
Owned files: {explicit file list}
Requirements: {specific deliverables}
Interface contract: {shared types/APIs}
Acceptance criteria: {verification steps}
Blocked by: {dependency task IDs if any}
```
## Fullstack Team Preset
**Command**: `/team-spawn fullstack`
### Configuration
- **Team Size**: 4 (1 lead + 3 implementers)
- **Agent Types**: `agent-teams:team-lead` + 3x `agent-teams:team-implementer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Layer |
| -------------- | ---------------- | -------------------------------- |
| fullstack-lead | team-lead | Coordination, integration |
| frontend-dev | team-implementer | UI components, client-side logic |
| backend-dev | team-implementer | API endpoints, business logic |
| test-dev | team-implementer | Unit, integration, e2e tests |
### Dependency Pattern
```
frontend-dev ──┐
               ├──→ test-dev (blocked by both)
backend-dev ──┘
```
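The same pattern can be written as blocked-by data: test-dev becomes ready only once both implementation streams complete. A sketch with illustrative task names:

```python
# Fullstack dependency pattern as data; task names are illustrative.
BLOCKED_BY = {
    "frontend-dev": set(),
    "backend-dev": set(),
    "test-dev": {"frontend-dev", "backend-dev"},
}

def is_ready(task: str, completed: set) -> bool:
    """A task may start once every task blocking it has completed."""
    return BLOCKED_BY[task] <= completed
```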
## Research Team Preset
**Command**: `/team-spawn research`
### Configuration
- **Team Size**: 3
- **Agent Type**: `general-purpose`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Focus |
| ------------ | --------------- | ------------------------------------------------ |
| researcher-1 | general-purpose | Research area 1 (e.g., codebase architecture) |
| researcher-2 | general-purpose | Research area 2 (e.g., library documentation) |
| researcher-3 | general-purpose | Research area 3 (e.g., web resources & examples) |
### Available Research Tools
Each researcher has access to:
- **Codebase**: `Grep`, `Glob`, `Read` — search and read local files
- **Web**: `WebSearch`, `WebFetch` — search the web and fetch page content
- **Deep Exploration**: `Task` with `subagent_type: Explore` — spawn sub-explorers for deep dives
### Task Template
```
Subject: Research {topic or question}
Description:
Question: {specific research question}
Scope: {codebase files, web resources, library docs, or all}
Tools to prioritize:
- Codebase: Grep/Glob/Read for local code analysis
- Web: WebSearch/WebFetch for articles, examples, best practices
Deliverable: Summary with citations (file:line for code, URLs for web)
Output format: Structured report with sections, evidence, and recommendations
```
### Variations
- **Codebase-only**: 3 researchers exploring different modules or patterns locally
- **Web research**: 3 researchers using WebSearch to survey approaches, benchmarks, or best practices
- **Mixed**: 1 codebase researcher + 1 docs researcher + 1 web researcher (recommended for evaluating new libraries)
### Example Research Assignments
```
Researcher 1 (codebase): "How does our current auth system work? Trace the flow from login to token validation."
Researcher 2 (web): "Search for comparisons between NextAuth, Clerk, and Auth0 for Next.js apps. Focus on pricing, DX, and migration effort."
Researcher 3 (docs): "Look up the latest NextAuth.js v5 API docs. How does it handle JWT and session management?"
```
## Security Team Preset
**Command**: `/team-spawn security`
### Configuration
- **Team Size**: 4
- **Agent Type**: `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Dimension | Focus Areas |
| --------------- | -------------- | ---------------------------------------------------- |
| vuln-reviewer | OWASP/Vulns | Injection, XSS, CSRF, deserialization, SSRF |
| auth-reviewer | Auth/Access | Authentication, authorization, session management |
| deps-reviewer | Dependencies | CVEs, supply chain, outdated packages, license risks |
| config-reviewer | Secrets/Config | Hardcoded secrets, env vars, debug endpoints, CORS |
### Task Template
```
Subject: Security audit {target} for {dimension}
Description:
Dimension: {security sub-dimension}
Target: {file list, directory, or entire project}
Checklist: {dimension-specific security checklist}
Output format: Structured findings with file:line, CVSS-like severity, evidence, remediation
Standards: OWASP Top 10, CWE references where applicable
```
### Variations
- **Quick scan**: `--reviewers owasp,secrets` (2 members for fast audit)
- **Full audit**: All 4 dimensions (default)
- **CI/CD focused**: Add a 5th reviewer for pipeline security and deployment configuration
## Migration Team Preset
**Command**: `/team-spawn migration`
### Configuration
- **Team Size**: 4 (1 lead + 2 implementers + 1 reviewer)
- **Agent Types**: `agent-teams:team-lead` + 2x `agent-teams:team-implementer` + `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Responsibility |
| ---------------- | ---------------- | ----------------------------------------------- |
| migration-lead | team-lead | Migration plan, coordination, conflict handling |
| migrator-1 | team-implementer | Migration stream 1 (assigned files/modules) |
| migrator-2 | team-implementer | Migration stream 2 (assigned files/modules) |
| migration-verify | team-reviewer | Verify migrated code correctness and patterns |
### Task Template
```
Subject: Migrate {module/files} from {old} to {new}
Description:
Owned files: {explicit file list}
Migration rules: {specific transformation patterns}
Old pattern: {what to change from}
New pattern: {what to change to}
Acceptance criteria: {tests pass, no regressions, new patterns used}
Blocked by: {dependency task IDs if any}
```
### Dependency Pattern
```
migration-lead (plan) ──→ migrator-1 ──┐
                      └─→ migrator-2 ──┴─→ migration-verify
```
### Use Cases
- Framework upgrades (React class → hooks, Vue 2 → Vue 3, Angular version bumps)
- Language migrations (JavaScript → TypeScript, Python 2 → 3)
- API version bumps (REST v1 → v2, GraphQL schema changes)
- Database migrations (ORM changes, schema restructuring)
- Build system changes (Webpack → Vite, CRA → Next.js)