feat(agent-teams): add plugin for multi-agent team orchestration

New plugin with 7 presets (review, debug, feature, fullstack, research,
security, migration), 4 specialized agents, 7 slash commands, 6 skills
with reference docs, and Context7 MCP integration for research teams.
Seth Hobson
2026-02-05 17:10:02 -05:00
parent 918a770990
commit 0752775afc
30 changed files with 3080 additions and 0 deletions

@@ -0,0 +1,126 @@
---
name: multi-reviewer-patterns
description: Coordinate parallel code reviews across multiple quality dimensions with finding deduplication, severity calibration, and consolidated reporting. Use this skill when organizing multi-reviewer code reviews, calibrating finding severity, or consolidating review results.
---
# Multi-Reviewer Patterns
Patterns for coordinating parallel code reviews across multiple quality dimensions, deduplicating findings, calibrating severity, and producing consolidated reports.
## When to Use This Skill
- Organizing a multi-dimensional code review
- Deciding which review dimensions to assign
- Deduplicating findings from multiple reviewers
- Calibrating severity ratings consistently
- Producing a consolidated review report
## Review Dimension Allocation
### Available Dimensions
| Dimension | Focus | When to Include |
| ----------------- | --------------------------------------- | ------------------------------------------- |
| **Security** | Vulnerabilities, auth, input validation | Always for code handling user input or auth |
| **Performance** | Query efficiency, memory, caching | When changing data access or hot paths |
| **Architecture** | SOLID, coupling, patterns | For structural changes or new modules |
| **Testing** | Coverage, quality, edge cases | When adding new functionality |
| **Accessibility** | WCAG, ARIA, keyboard nav | For UI/frontend changes |
### Recommended Combinations
| Scenario | Dimensions |
| ---------------------- | -------------------------------------------- |
| API endpoint changes | Security, Performance, Architecture |
| Frontend component | Architecture, Testing, Accessibility |
| Database migration | Performance, Architecture |
| Authentication changes | Security, Testing |
| Full feature review | Security, Performance, Architecture, Testing |
## Finding Deduplication
When multiple reviewers report issues at the same location:
### Merge Rules
1. **Same file:line, same issue** — Merge into one finding, credit all reviewers
2. **Same file:line, different issues** — Keep as separate findings
3. **Same issue, different locations** — Keep separate but cross-reference
4. **Conflicting severity** — Use the higher severity rating
5. **Conflicting recommendations** — Include both with reviewer attribution
### Deduplication Process
```
For each finding in all reviewer reports:
1. Check if another finding references the same file:line
2. If yes, check if they describe the same issue
3. If same issue: merge, keeping the more detailed description
4. If different issue: keep both, tag as "co-located"
5. Use highest severity among merged findings
```
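The merge rules above can be sketched as a small reducer. This is an illustrative sketch only: the `Finding` shape, the severity ordering, and the idea of normalizing issues to an identifier are assumptions, not part of the plugin.

```typescript
// Illustrative sketch of the deduplication rules. The Finding shape and
// severity ranking are assumptions for this example.
type Severity = "Critical" | "High" | "Medium" | "Low";

interface Finding {
  file: string;
  line: number;
  issue: string; // normalized issue identifier (assumed)
  description: string;
  severity: Severity;
  reviewers: string[];
  coLocated?: boolean;
}

const RANK: Record<Severity, number> = { Critical: 3, High: 2, Medium: 1, Low: 0 };

function dedupe(findings: Finding[]): Finding[] {
  const byLocation = new Map<string, Finding[]>();
  for (const f of findings) {
    const key = `${f.file}:${f.line}`;
    const bucket = byLocation.get(key) ?? [];
    const match = bucket.find((g) => g.issue === f.issue);
    if (match) {
      // Same file:line, same issue: merge, credit all reviewers, keep the
      // more detailed description, use the highest severity.
      match.reviewers = [...new Set([...match.reviewers, ...f.reviewers])];
      if (f.description.length > match.description.length) match.description = f.description;
      if (RANK[f.severity] > RANK[match.severity]) match.severity = f.severity;
    } else {
      // Same file:line, different issue: keep both, tag as co-located.
      const copy = { ...f };
      if (bucket.length > 0) {
        copy.coLocated = true;
        for (const g of bucket) g.coLocated = true;
      }
      bucket.push(copy);
      byLocation.set(key, bucket);
    }
  }
  return [...byLocation.values()].flat();
}
```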
## Severity Calibration
### Severity Criteria
| Severity | Impact | Likelihood | Examples |
| ------------ | --------------------------------------------- | ---------------------- | -------------------------------------------- |
| **Critical** | Data loss, security breach, complete failure | Certain or very likely | SQL injection, auth bypass, data corruption |
| **High** | Significant functionality impact, degradation | Likely | Memory leak, missing validation, broken flow |
| **Medium** | Partial impact, workaround exists | Possible | N+1 query, missing edge case, unclear error |
| **Low** | Minimal impact, cosmetic | Unlikely | Style issue, minor optimization, naming |
### Calibration Rules
- Security vulnerabilities exploitable by external users: always Critical or High
- Performance issues in hot paths: at least Medium
- Missing tests for critical paths: at least Medium
- Accessibility violations for core functionality: at least Medium
- Code style issues with no functional impact: Low
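These rules act as severity floors on whatever a reviewer proposed. A minimal sketch, assuming a context object with flags for each rule (the flag names are illustrative):

```typescript
// Sketch of the calibration rules as severity floors. The FindingContext
// flags are illustrative assumptions, not part of the plugin.
type Severity = "Critical" | "High" | "Medium" | "Low";
const RANK: Record<Severity, number> = { Critical: 3, High: 2, Medium: 1, Low: 0 };

interface FindingContext {
  externallyExploitable?: boolean; // security issue reachable by external users
  hotPath?: boolean;               // performance issue on a hot path
  criticalPathUntested?: boolean;  // missing tests for a critical path
  coreA11yViolation?: boolean;     // accessibility violation in core functionality
  styleOnly?: boolean;             // no functional impact
}

function calibrate(proposed: Severity, ctx: FindingContext): Severity {
  if (ctx.styleOnly) return "Low";
  let floor: Severity = "Low";
  if (ctx.externallyExploitable) floor = "High"; // always Critical or High
  else if (ctx.hotPath || ctx.criticalPathUntested || ctx.coreA11yViolation) floor = "Medium";
  return RANK[proposed] >= RANK[floor] ? proposed : floor;
}
```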
## Consolidated Report Template
```markdown
## Code Review Report
**Target**: {files/PR/directory}
**Reviewers**: {dimension-1}, {dimension-2}, {dimension-3}
**Date**: {date}
**Files Reviewed**: {count}
### Critical Findings ({count})
#### [CR-001] {Title}
**Location**: `{file}:{line}`
**Dimension**: {Security/Performance/etc.}
**Description**: {what was found}
**Impact**: {what could happen}
**Fix**: {recommended remediation}
### High Findings ({count})
...
### Medium Findings ({count})
...
### Low Findings ({count})
...
### Summary
| Dimension | Critical | High | Medium | Low | Total |
| ------------ | -------- | ----- | ------ | ----- | ------ |
| Security | 1 | 2 | 3 | 0 | 6 |
| Performance | 0 | 1 | 4 | 2 | 7 |
| Architecture | 0 | 0 | 2 | 3 | 5 |
| **Total** | **1** | **3** | **9** | **5** | **18** |
### Recommendation
{Overall assessment and prioritized action items}
```

@@ -0,0 +1,127 @@
# Review Dimension Checklists
Detailed checklists for each review dimension that reviewers follow during parallel code review.
## Security Review Checklist
### Input Handling
- [ ] All user inputs are validated and sanitized
- [ ] SQL queries use parameterized statements (no string concatenation)
- [ ] HTML output is properly escaped to prevent XSS
- [ ] File paths are validated to prevent path traversal
- [ ] Request size limits are enforced
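As one concrete instance of these checks, the path-traversal item can be sketched as a guard that resolves the requested path against an allowed root and rejects anything that escapes it. The function name and root directory are illustrative:

```typescript
import path from "node:path";

// Sketch of a path-traversal guard: resolve against an allowed root and
// reject anything outside it. Names here are illustrative.
function safeJoin(root: string, requested: string): string | null {
  const normalizedRoot = path.resolve(root);
  const resolved = path.resolve(normalizedRoot, requested);
  return resolved === normalizedRoot || resolved.startsWith(normalizedRoot + path.sep)
    ? resolved
    : null; // escapes the root: reject
}
```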
### Authentication & Authorization
- [ ] Authentication is required for all protected endpoints
- [ ] Authorization checks verify user has permission for the action
- [ ] JWT tokens are validated (signature, expiry, issuer)
- [ ] Password hashing uses bcrypt/argon2 (not MD5/SHA)
- [ ] Session management follows best practices
### Secrets & Configuration
- [ ] No hardcoded secrets, API keys, or passwords
- [ ] Secrets are loaded from environment variables or secret manager
- [ ] .gitignore includes sensitive file patterns
- [ ] Debug/development endpoints are disabled in production
### Dependencies
- [ ] No known CVEs in direct dependencies
- [ ] Dependencies are pinned to specific versions
- [ ] No unnecessary dependencies that increase attack surface
## Performance Review Checklist
### Database
- [ ] No N+1 query patterns
- [ ] Queries use appropriate indexes
- [ ] No SELECT \* on large tables
- [ ] Pagination is implemented for list endpoints
- [ ] Connection pooling is configured
### Memory & Resources
- [ ] No memory leaks (event listeners cleaned up, streams closed)
- [ ] Large data sets are streamed, not loaded entirely into memory
- [ ] File handles and connections are properly closed
- [ ] Caching is used for expensive operations
### Computation
- [ ] No unnecessary re-computation or redundant operations
- [ ] Appropriate algorithm complexity for the data size
- [ ] Async operations used where I/O bound
- [ ] No blocking operations on the main thread
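To make the N+1 item concrete: fetch all distinct ids in one batched lookup instead of one lookup per row. The in-memory map below stands in for a database, and all names are illustrative:

```typescript
// Sketch of replacing an N+1 pattern with one batched lookup. The
// in-memory store stands in for a database; names are illustrative.
interface Post { id: number; authorId: number }
interface Author { id: number; name: string }

const authors = new Map<number, Author>([
  [1, { id: 1, name: "Ada" }],
  [2, { id: 2, name: "Grace" }],
]);

let queryCount = 0; // counts simulated round trips

// N+1: one lookup per post.
function namesNPlusOne(posts: Post[]): string[] {
  return posts.map((p) => { queryCount++; return authors.get(p.authorId)!.name; });
}

// Batched: one lookup for all distinct author ids.
function namesBatched(posts: Post[]): string[] {
  queryCount++;
  const ids = [...new Set(posts.map((p) => p.authorId))];
  const byId = new Map(ids.map((id) => [id, authors.get(id)!]));
  return posts.map((p) => byId.get(p.authorId)!.name);
}
```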
## Architecture Review Checklist
### Design Principles
- [ ] Single Responsibility: each module/class has one reason to change
- [ ] Open/Closed: extensible without modification
- [ ] Dependency Inversion: depends on abstractions, not concretions
- [ ] No circular dependencies between modules
### Structure
- [ ] Clear separation of concerns (UI, business logic, data)
- [ ] Consistent error handling strategy across the codebase
- [ ] Configuration is externalized, not hardcoded
- [ ] API contracts are well-defined and versioned
### Patterns
- [ ] Consistent patterns used throughout (no pattern mixing)
- [ ] Abstractions are at the right level (not over/under-engineered)
- [ ] Module boundaries align with domain boundaries
- [ ] Shared utilities are actually shared (no duplication)
## Testing Review Checklist
### Coverage
- [ ] Critical paths have test coverage
- [ ] Edge cases are tested (empty input, null, boundary values)
- [ ] Error paths are tested (what happens when things fail)
- [ ] Integration points have integration tests
### Quality
- [ ] Tests are deterministic (no flaky tests)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Assertions are specific (not just "no error thrown")
- [ ] Test names clearly describe what is being tested
### Maintainability
- [ ] Tests don't duplicate implementation logic
- [ ] Mocks/stubs are minimal and accurate
- [ ] Test data is clear and relevant
- [ ] Tests are easy to understand without reading the implementation
## Accessibility Review Checklist
### Structure
- [ ] Semantic HTML elements used (nav, main, article, button)
- [ ] Heading hierarchy is logical (h1 → h2 → h3)
- [ ] ARIA roles and properties used correctly
- [ ] Landmarks identify page regions
### Interaction
- [ ] All functionality accessible via keyboard
- [ ] Focus order is logical and visible
- [ ] No keyboard traps
- [ ] Touch targets are at least 44x44px
### Content
- [ ] Images have meaningful alt text
- [ ] Color is not the only means of conveying information
- [ ] Text has sufficient contrast ratio (4.5:1 for normal, 3:1 for large)
- [ ] Content is readable at 200% zoom
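The contrast item can be checked mechanically using the relative-luminance formula from WCAG 2.x. A sketch (function names are illustrative):

```typescript
// Sketch of the WCAG contrast-ratio check, using the relative-luminance
// formula from WCAG 2.x. Colors are 0xRRGGBB numbers.
function luminance(hex: number): number {
  const channel = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  const r = channel((hex >> 16) & 0xff);
  const g = channel((hex >> 8) & 0xff);
  const b = channel(hex & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(a: number, b: number): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// AA thresholds from the checklist: 4.5:1 normal text, 3:1 large text.
function passesAA(fg: number, bg: number, largeText = false): boolean {
  return contrastRatio(fg, bg) >= (largeText ? 3 : 4.5);
}
```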

@@ -0,0 +1,132 @@
---
name: parallel-debugging
description: Debug complex issues using competing hypotheses with parallel investigation, evidence collection, and root cause arbitration. Use this skill when debugging bugs with multiple potential causes, performing root cause analysis, or organizing parallel investigation workflows.
---
# Parallel Debugging
Framework for debugging complex issues using the Analysis of Competing Hypotheses (ACH) methodology with parallel agent investigation.
## When to Use This Skill
- Bug has multiple plausible root causes
- Initial debugging attempts haven't identified the issue
- Issue spans multiple modules or components
- Need systematic root cause analysis with evidence
- Want to avoid confirmation bias in debugging
## Hypothesis Generation Framework
Generate hypotheses across 6 failure mode categories:
### 1. Logic Error
- Incorrect conditional logic (wrong operator, missing case)
- Off-by-one errors in loops or array access
- Missing edge case handling
- Incorrect algorithm implementation
### 2. Data Issue
- Invalid or unexpected input data
- Type mismatch or coercion error
- Null/undefined/None where value expected
- Encoding or serialization problem
- Data truncation or overflow
### 3. State Problem
- Race condition between concurrent operations
- Stale cache returning outdated data
- Incorrect initialization or default values
- Unintended mutation of shared state
- State machine transition error
### 4. Integration Failure
- API contract violation (request/response mismatch)
- Version incompatibility between components
- Configuration mismatch between environments
- Missing or incorrect environment variables
- Network timeout or connection failure
### 5. Resource Issue
- Memory leak causing gradual degradation
- Connection pool exhaustion
- File descriptor or handle leak
- Disk space or quota exceeded
- CPU saturation from inefficient processing
### 6. Environment
- Missing runtime dependency
- Wrong library or framework version
- Platform-specific behavior difference
- Permission or access control issue
- Timezone or locale-related behavior
## Evidence Collection Standards
### What Constitutes Evidence
| Evidence Type | Strength | Example |
| ----------------- | -------- | --------------------------------------------------------------- |
| **Direct** | Strong | Code at `file.ts:42` shows `if (x > 0)` should be `if (x >= 0)` |
| **Correlational** | Medium | Error rate increased after commit `abc123` |
| **Testimonial** | Weak | "It works on my machine" |
| **Absence** | Variable | No null check found in the code path |
### Citation Format
Always cite evidence with file:line references:
```
**Evidence**: The validation function at `src/validators/user.ts:87`
does not check for empty strings, only null/undefined. This allows
empty email addresses to pass validation.
```
### Confidence Levels
| Level | Criteria |
| ------------------- | ----------------------------------------------------------------------------------- |
| **High (>80%)** | Multiple direct evidence pieces, clear causal chain, no contradicting evidence |
| **Medium (50-80%)** | Some direct evidence, plausible causal chain, minor ambiguities |
| **Low (<50%)** | Mostly correlational evidence, incomplete causal chain, some contradicting evidence |
## Result Arbitration Protocol
After all investigators report:
### Step 1: Categorize Results
- **Confirmed**: High confidence, strong evidence, clear causal chain
- **Plausible**: Medium confidence, some evidence, reasonable causal chain
- **Falsified**: Evidence contradicts the hypothesis
- **Inconclusive**: Insufficient evidence to confirm or falsify
### Step 2: Compare Confirmed Hypotheses
If multiple hypotheses are confirmed, rank by:
1. Confidence level
2. Number of supporting evidence pieces
3. Strength of causal chain
4. Absence of contradicting evidence
### Step 3: Determine Root Cause
- If one hypothesis clearly dominates: declare it the root cause
- If multiple hypotheses are equally likely: treat it as a compound issue (multiple contributing causes)
- If no hypothesis is confirmed: generate new hypotheses based on the evidence gathered
### Step 4: Validate Fix
Before declaring the bug fixed:
- [ ] Fix addresses the identified root cause
- [ ] Fix doesn't introduce new issues
- [ ] Original reproduction case no longer fails
- [ ] Related edge cases are covered
- [ ] Relevant tests are added or updated

@@ -0,0 +1,120 @@
# Hypothesis Testing Reference
Task templates, evidence formats, and arbitration decision trees for parallel debugging.
## Hypothesis Task Template
```markdown
## Hypothesis Investigation: {Hypothesis Title}
### Hypothesis Statement
{Clear, falsifiable statement about the root cause}
### Failure Mode Category
{Logic Error | Data Issue | State Problem | Integration Failure | Resource Issue | Environment}
### Investigation Scope
- Files to examine: {file list or directory}
- Related tests: {test files}
- Git history: {relevant date range or commits}
### Evidence Criteria
**Confirming evidence** (if I find these, hypothesis is supported):
1. {Observable condition 1}
2. {Observable condition 2}
**Falsifying evidence** (if I find these, hypothesis is wrong):
1. {Observable condition 1}
2. {Observable condition 2}
### Report Format
- Confidence: High/Medium/Low
- Evidence: list with file:line citations
- Causal chain: step-by-step from cause to symptom
- Recommended fix: if confirmed
```
## Evidence Report Template
```markdown
## Investigation Report: {Hypothesis Title}
### Verdict: {Confirmed | Falsified | Inconclusive}
### Confidence: {High (>80%) | Medium (50-80%) | Low (<50%)}
### Confirming Evidence
1. `src/api/users.ts:47` — {description of what was found}
2. `src/middleware/auth.ts:23` — {description}
### Contradicting Evidence
1. `tests/api/users.test.ts:112` — {description of what contradicts}
### Causal Chain (if confirmed)
1. {First cause} →
2. {Intermediate effect} →
3. {Observable symptom}
### Recommended Fix
{Specific code change with location}
### Additional Notes
{Anything discovered that may be relevant to other hypotheses}
```
## Arbitration Decision Tree
```
All investigators reported?
├── NO → Wait for remaining reports
└── YES → Count confirmed hypotheses
├── 0 confirmed
│ ├── Any medium confidence? → Investigate further
│ └── All low/falsified? → Generate new hypotheses
├── 1 confirmed
│ └── High confidence?
│ ├── YES → Declare root cause, propose fix
│ └── NO → Flag as likely cause, recommend verification
└── 2+ confirmed
└── Are they related?
├── YES → Compound issue (multiple contributing causes)
└── NO → Rank by confidence, declare highest as primary
```
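The decision tree above can be sketched as a single arbitration function. The `Report` shape follows the report template; the returned outcome strings and the `related` field are illustrative assumptions:

```typescript
// Sketch of the arbitration decision tree. Outcome strings and the
// `related` field are illustrative assumptions.
type Verdict = "Confirmed" | "Falsified" | "Inconclusive";
type Confidence = "High" | "Medium" | "Low";

interface Report {
  hypothesis: string;
  verdict: Verdict;
  confidence: Confidence;
  related?: string[]; // hypotheses this one is related to
}

function arbitrate(reports: Report[]): string {
  const confirmed = reports.filter((r) => r.verdict === "Confirmed");
  if (confirmed.length === 0) {
    return reports.some((r) => r.confidence === "Medium")
      ? "investigate-further"
      : "generate-new-hypotheses";
  }
  if (confirmed.length === 1) {
    return confirmed[0].confidence === "High" ? "root-cause" : "likely-cause-verify";
  }
  const related = confirmed.some((r) =>
    confirmed.some((o) => o !== r && (r.related ?? []).includes(o.hypothesis)));
  if (related) return "compound-issue";
  const order: Confidence[] = ["High", "Medium", "Low"];
  confirmed.sort((a, b) => order.indexOf(a.confidence) - order.indexOf(b.confidence));
  return `primary:${confirmed[0].hypothesis}`;
}
```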
## Common Hypothesis Patterns by Error Type
### "500 Internal Server Error"
1. Unhandled exception in request handler (Logic Error)
2. Database connection failure (Resource Issue)
3. Missing environment variable (Environment)
### "Race condition / intermittent failure"
1. Shared state mutation without locking (State Problem)
2. Async operation ordering assumption (Logic Error)
3. Cache staleness window (State Problem)
### "Works locally, fails in production"
1. Environment variable mismatch (Environment)
2. Different dependency version (Environment)
3. Resource limits (memory, connections) (Resource Issue)
### "Regression after deploy"
1. New code introduced bug (Logic Error)
2. Configuration change (Integration Failure)
3. Database migration issue (Data Issue)

@@ -0,0 +1,151 @@
---
name: parallel-feature-development
description: Coordinate parallel feature development with file ownership strategies, conflict avoidance rules, and integration patterns for multi-agent implementation. Use this skill when decomposing features for parallel development, establishing file ownership boundaries, or managing integration between parallel work streams.
---
# Parallel Feature Development
Strategies for decomposing features into parallel work streams, establishing file ownership boundaries, avoiding conflicts, and integrating results from multiple implementer agents.
## When to Use This Skill
- Decomposing a feature for parallel implementation
- Establishing file ownership boundaries between agents
- Designing interface contracts between parallel work streams
- Choosing integration strategies (vertical slice vs horizontal layer)
- Managing branch and merge workflows for parallel development
## File Ownership Strategies
### By Directory
Assign each implementer ownership of specific directories:
```
implementer-1: src/components/auth/
implementer-2: src/api/auth/
implementer-3: tests/auth/
```
**Best for**: Well-organized codebases with clear directory boundaries.
### By Module
Assign ownership of logical modules (which may span directories):
```
implementer-1: Authentication module (login, register, logout)
implementer-2: Authorization module (roles, permissions, guards)
```
**Best for**: Feature-oriented architectures, domain-driven design.
### By Layer
Assign ownership of architectural layers:
```
implementer-1: UI layer (components, styles, layouts)
implementer-2: Business logic layer (services, validators)
implementer-3: Data layer (models, repositories, migrations)
```
**Best for**: Traditional MVC/layered architectures.
## Conflict Avoidance Rules
### The Cardinal Rule
**One owner per file.** No file should be assigned to multiple implementers.
### When Files Must Be Shared
If a file genuinely needs changes from multiple implementers:
1. **Designate a single owner** — One implementer owns the file
2. **Other implementers request changes** — Message the owner with specific change requests
3. **Owner applies changes sequentially** — Prevents merge conflicts
4. **Alternative: Extract interfaces** — Create a separate interface file that the non-owner can import without modifying
### Interface Contracts
When implementers need to coordinate at boundaries:
```typescript
// src/types/auth-contract.ts (owned by team-lead, read-only for implementers)
export interface AuthResponse {
token: string;
user: UserProfile;
expiresAt: number;
}
export interface AuthService {
login(email: string, password: string): Promise<AuthResponse>;
register(data: RegisterData): Promise<AuthResponse>;
}
```
Both implementers import from the contract file but neither modifies it.
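A sketch of the consuming side: an implementer satisfies the contract without modifying it. The contract types are inlined here so the snippet stands alone; `JwtAuthService` and its stub bodies are illustrative assumptions.

```typescript
// Contract types (normally imported from src/types/auth-contract.ts,
// inlined here so the sketch is self-contained).
interface UserProfile { id: string; email: string }
interface RegisterData { email: string; password: string }
interface AuthResponse { token: string; user: UserProfile; expiresAt: number }

interface AuthService {
  login(email: string, password: string): Promise<AuthResponse>;
  register(data: RegisterData): Promise<AuthResponse>;
}

// implementer-2's file: owns the implementation, never the contract.
class JwtAuthService implements AuthService {
  async login(email: string, _password: string): Promise<AuthResponse> {
    // A real implementation would verify credentials and sign a JWT;
    // this stub only shows the contract being honored.
    return { token: "stub", user: { id: "1", email }, expiresAt: Date.now() + 86_400_000 };
  }
  async register(data: RegisterData): Promise<AuthResponse> {
    return this.login(data.email, data.password);
  }
}
```

If either implementer drifts from the contract, the compiler flags the mismatch at the `implements` clause rather than at integration time.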
## Integration Patterns
### Vertical Slice
Each implementer builds a complete feature slice (UI + API + tests):
```
implementer-1: Login feature (login form + login API + login tests)
implementer-2: Register feature (register form + register API + register tests)
```
**Pros**: Each slice is independently testable, minimal integration needed.
**Cons**: May duplicate shared utilities, harder with tightly coupled features.
### Horizontal Layer
Each implementer builds one layer across all features:
```
implementer-1: All UI components (login form, register form, profile page)
implementer-2: All API endpoints (login, register, profile)
implementer-3: All tests (unit, integration, e2e)
```
**Pros**: Consistent patterns within each layer, natural specialization.
**Cons**: More integration points; the test layer depends on the UI and API layers.
### Hybrid
Mix vertical and horizontal based on coupling:
```
implementer-1: Login feature (vertical slice — UI + API + tests)
implementer-2: Shared auth infrastructure (horizontal — middleware, JWT utils, types)
```
**Best for**: Most real-world features with some shared infrastructure.
## Branch Management
### Single Branch Strategy
All implementers work on the same feature branch:
- Simple setup, no merge overhead
- Requires strict file ownership to avoid conflicts
- Best for: small teams (2-3), well-defined boundaries
### Multi-Branch Strategy
Each implementer works on a sub-branch:
```
feature/auth
├── feature/auth-login (implementer-1)
├── feature/auth-register (implementer-2)
└── feature/auth-tests (implementer-3)
```
- More isolation, explicit merge points
- Higher overhead, merge conflicts still possible in shared files
- Best for: larger teams (4+), complex features

@@ -0,0 +1,80 @@
# File Ownership Decision Framework
How to assign file ownership when decomposing features for parallel development.
## Ownership Decision Process
### Step 1: Map All Files
List every file that needs to be created or modified for the feature.
### Step 2: Identify Natural Clusters
Group files by:
- Directory proximity (files in the same directory)
- Functional relationship (files that import each other)
- Layer membership (all UI files, all API files)
### Step 3: Assign Clusters to Owners
Each cluster becomes one implementer's ownership boundary:
- No file appears in multiple clusters
- Each cluster is internally cohesive
- Cross-cluster dependencies are minimized
### Step 4: Define Interface Points
Where clusters interact, define:
- Shared type definitions (owned by lead or a designated implementer)
- API contracts (function signatures, request/response shapes)
- Event contracts (event names and payload shapes)
## Ownership by Project Type
### React/Next.js Frontend
```
implementer-1: src/components/{feature}/ (UI components)
implementer-2: src/hooks/{feature}/ (custom hooks, state)
implementer-3: src/api/{feature}/ (API client, types)
shared: src/types/{feature}.ts (owned by lead)
```
### Express/Fastify Backend
```
implementer-1: src/routes/{feature}.ts, src/controllers/{feature}.ts
implementer-2: src/services/{feature}.ts, src/validators/{feature}.ts
implementer-3: src/models/{feature}.ts, src/repositories/{feature}.ts
shared: src/types/{feature}.ts (owned by lead)
```
### Full-Stack (Next.js)
```
implementer-1: app/{feature}/page.tsx, app/{feature}/components/
implementer-2: app/api/{feature}/route.ts, lib/{feature}/
implementer-3: tests/{feature}/
shared: types/{feature}.ts (owned by lead)
```
### Python Django
```
implementer-1: {app}/views.py, {app}/urls.py, {app}/forms.py
implementer-2: {app}/models.py, {app}/serializers.py, {app}/managers.py
implementer-3: {app}/tests/
shared: {app}/types.py (owned by lead)
```
## Conflict Resolution
When two implementers need to modify the same file:
1. **Preferred: Split the file** — Extract the shared concern into its own file
2. **If can't split: Designate one owner** — The other implementer sends change requests
3. **Last resort: Sequential access** — Implementer A finishes, then implementer B takes over
4. **Never**: Let both modify the same file simultaneously

@@ -0,0 +1,75 @@
# Integration and Merge Strategies
Patterns for integrating parallel work streams and resolving conflicts.
## Integration Patterns
### Pattern 1: Direct Integration
All implementers commit to the same branch; integration happens naturally.
```
feature/auth ← implementer-1 commits
← implementer-2 commits
← implementer-3 commits
```
**When to use**: Small teams (2-3), strict file ownership (no conflicts expected).
### Pattern 2: Sub-Branch Integration
Each implementer works on a sub-branch; lead merges them sequentially.
```
feature/auth
├── feature/auth-login ← implementer-1
├── feature/auth-register ← implementer-2
└── feature/auth-tests ← implementer-3
```
Merge order: follow dependency graph (foundation → dependent → integration).
**When to use**: Larger teams (4+), overlapping concerns, need for review gates.
### Pattern 3: Trunk-Based with Feature Flags
All implementers commit to the main branch behind a feature flag.
```
main ← all implementers commit
← feature flag gates new code
```
**When to use**: CI/CD environments, short-lived features, continuous deployment.
## Integration Verification Checklist
After all implementers complete:
1. **Build check**: Does the code compile/bundle without errors?
2. **Type check**: Do TypeScript/type annotations pass?
3. **Lint check**: Does the code pass linting rules?
4. **Unit tests**: Do all unit tests pass?
5. **Integration tests**: Do cross-component tests pass?
6. **Interface verification**: Do all interface contracts match their implementations?
## Conflict Resolution
### Prevention (Best)
- Strict file ownership eliminates most conflicts
- Interface contracts define boundaries before implementation
- Shared type files are owned by the lead and modified sequentially
### Detection
- Git merge will report conflicts if they occur
- TypeScript/lint errors indicate interface mismatches
- Test failures indicate behavioral conflicts
### Resolution Strategies
1. **Contract wins**: If code doesn't match the interface contract, the code is wrong
2. **Lead arbitrates**: The team lead decides which implementation to keep
3. **Tests decide**: The implementation that passes tests is correct
4. **Merge manually**: For complex conflicts, the lead merges by hand

@@ -0,0 +1,162 @@
---
name: task-coordination-strategies
description: Decompose complex tasks, design dependency graphs, and coordinate multi-agent work with proper task descriptions and workload balancing. Use this skill when breaking down work for agent teams, managing task dependencies, or monitoring team progress.
---
# Task Coordination Strategies
Strategies for decomposing complex tasks into parallelizable units, designing dependency graphs, writing effective task descriptions, and monitoring workload across agent teams.
## When to Use This Skill
- Breaking down a complex task for parallel execution
- Designing task dependency relationships (blockedBy/blocks)
- Writing task descriptions with clear acceptance criteria
- Monitoring and rebalancing workload across teammates
- Identifying the critical path in a multi-task workflow
## Task Decomposition Strategies
### By Layer
Split work by architectural layer:
- Frontend components
- Backend API endpoints
- Database migrations/models
- Test suites
**Best for**: Full-stack features, vertical slices
### By Component
Split work by functional component:
- Authentication module
- User profile module
- Notification module
**Best for**: Microservices, modular architectures
### By Concern
Split work by cross-cutting concern:
- Security review
- Performance review
- Architecture review
**Best for**: Code reviews, audits
### By File Ownership
Split work by file/directory boundaries:
- `src/components/` — Implementer 1
- `src/api/` — Implementer 2
- `src/utils/` — Implementer 3
**Best for**: Parallel implementation, conflict avoidance
## Dependency Graph Design
### Principles
1. **Minimize chain depth** — Prefer wide, shallow graphs over deep chains
2. **Identify the critical path** — The longest chain determines minimum completion time
3. **Use blockedBy sparingly** — Only add dependencies that are truly required
4. **Avoid circular dependencies** — Task A blocks B blocks A is a deadlock
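These principles can be checked mechanically: walk the `blockedBy` graph to detect cycles (deadlocks) and measure the critical path as the deepest chain. A sketch with an illustrative graph representation:

```typescript
// Sketch: detect circular dependencies and measure the critical path
// in a blockedBy graph. The Record representation is illustrative.
type Graph = Record<string, string[]>; // taskId -> ids it is blockedBy

function criticalPathLength(graph: Graph): number {
  const depth = new Map<string, number>();
  const visiting = new Set<string>();
  const walk = (id: string): number => {
    if (visiting.has(id)) throw new Error(`circular dependency at ${id}`);
    const cached = depth.get(id);
    if (cached !== undefined) return cached;
    visiting.add(id);
    // Depth = this task plus the deepest chain it waits on.
    const d = 1 + Math.max(0, ...(graph[id] ?? []).map(walk));
    visiting.delete(id);
    depth.set(id, d);
    return d;
  };
  return Math.max(0, ...Object.keys(graph).map(walk));
}
```

For the diamond pattern below (B and C blocked by A, D blocked by both), the critical path is 3 tasks long no matter how many run in parallel.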
### Patterns
**Independent (Best parallelism)**:
```
Task A ─┐
Task B ─┼─→ Integration
Task C ─┘
```
**Sequential (Necessary dependencies)**:
```
Task A → Task B → Task C
```
**Diamond (Mixed)**:
```
┌→ Task B ─┐
Task A ─┤ ├→ Task D
└→ Task C ─┘
```
### Using blockedBy/blocks
```
TaskCreate: { subject: "Build API endpoints" } → Task #1
TaskCreate: { subject: "Build frontend components" } → Task #2
TaskCreate: { subject: "Integration testing" } → Task #3
TaskUpdate: { taskId: "3", addBlockedBy: ["1", "2"] } → #3 waits for #1 and #2
```
## Task Description Best Practices
Every task should include:
1. **Objective** — What needs to be accomplished (1-2 sentences)
2. **Owned Files** — Explicit list of files/directories this teammate may modify
3. **Requirements** — Specific deliverables or behaviors expected
4. **Interface Contracts** — How this work connects to other teammates' work
5. **Acceptance Criteria** — How to verify the task is done correctly
6. **Scope Boundaries** — What is explicitly out of scope
### Template
```
## Objective
Build the user authentication API endpoints.
## Owned Files
- src/api/auth.ts
- src/api/middleware/auth-middleware.ts
- src/types/auth.ts (shared — read only, do not modify)
## Requirements
- POST /api/login — accepts email/password, returns JWT
- POST /api/register — creates new user, returns JWT
- GET /api/me — returns current user profile (requires auth)
## Interface Contract
- Import User type from src/types/auth.ts (owned by implementer-1)
- Export AuthResponse type for frontend consumption
## Acceptance Criteria
- All endpoints return proper HTTP status codes
- JWT tokens expire after 24 hours
- Passwords are hashed with bcrypt
## Out of Scope
- OAuth/social login
- Password reset flow
- Rate limiting
```
## Workload Monitoring
### Indicators of Imbalance
| Signal | Meaning | Action |
| -------------------------- | ------------------- | --------------------------- |
| Teammate idle, others busy | Uneven distribution | Reassign pending tasks |
| Teammate stuck on one task | Possible blocker | Check in, offer help |
| All tasks blocked | Dependency issue | Resolve critical path first |
| One teammate has 3x others | Overloaded | Split tasks or reassign |
### Rebalancing Steps
1. Call `TaskList` to assess current state
2. Identify idle or overloaded teammates
3. Use `TaskUpdate` to reassign tasks
4. Use `SendMessage` to notify affected teammates
5. Monitor for improved throughput

@@ -0,0 +1,97 @@
# Dependency Graph Patterns
Visual patterns for task dependency design with trade-offs.
## Pattern 1: Fully Independent (Maximum Parallelism)
```
Task A ─┐
Task B ─┼─→ Final Integration
Task C ─┘
```
- **Parallelism**: Maximum — all tasks run simultaneously
- **Risk**: Integration may reveal incompatibilities late
- **Use when**: Tasks operate on completely separate files/modules
- **TaskCreate**: No blockedBy relationships; integration task blocked by all
## Pattern 2: Sequential Chain (No Parallelism)
```
Task A → Task B → Task C → Task D
```
- **Parallelism**: None — each task waits for the previous
- **Risk**: Bottleneck at each step; one delay cascades
- **Use when**: Each task depends on the output of the previous (avoid if possible)
- **TaskCreate**: Each task blockedBy the previous
## Pattern 3: Diamond (Shared Foundation)
```
        ┌→ Task B ─┐
Task A ─┤          ├→ Task D
        └→ Task C ─┘
```
- **Parallelism**: B and C run in parallel after A completes
- **Risk**: A is a bottleneck; D must wait for both B and C
- **Use when**: B and C both need output from A (e.g., shared types)
- **TaskCreate**: B and C blockedBy A; D blockedBy B and C
## Pattern 4: Fork-Join (Phased Parallelism)
```
Phase 1: A1, A2, A3 (parallel)
────────────
Phase 2: B1, B2 (parallel, after phase 1)
────────────
Phase 3: C1 (after phase 2)
```
- **Parallelism**: Within each phase, tasks are parallel
- **Risk**: Phase boundaries add synchronization delays
- **Use when**: Natural phases with dependencies (build → test → deploy)
- **TaskCreate**: Phase 2 tasks blockedBy all Phase 1 tasks
## Pattern 5: Pipeline (Streaming)
```
Task A ──→ Task B ──→ Task C
└──→ Task D ──→ Task E
```
- **Parallelism**: Two parallel chains
- **Risk**: Chains may diverge in approach
- **Use when**: Two independent feature branches from a common starting point
- **TaskCreate**: B blockedBy A; D blockedBy A; C blockedBy B; E blockedBy D
## Anti-Patterns
### Circular Dependency (Deadlock)
```
Task A → Task B → Task C → Task A ✗ DEADLOCK
```
**Fix**: Extract shared dependency into a separate task that all three depend on.
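Cycles like this are cheap to detect before any `TaskCreate` call. A sketch in plain Python (the `blockedBy` map is an illustrative model, not a real API; every task must appear as a key):

```python
from collections import deque

def find_deadlock(blocked_by: dict[str, list[str]]) -> list[str]:
    """Return the tasks stuck in a dependency cycle (empty list if none)."""
    remaining = {task: set(deps) for task, deps in blocked_by.items()}
    ready = deque(task for task, deps in remaining.items() if not deps)
    done = set()
    while ready:
        task = ready.popleft()
        done.add(task)
        # Unblock anything that was waiting only on this task
        for other, deps in remaining.items():
            if task in deps:
                deps.discard(task)
                if not deps and other not in done:
                    ready.append(other)
    return sorted(set(remaining) - done)

print(find_deadlock({"A": ["C"], "B": ["A"], "C": ["B"]}))  # ['A', 'B', 'C']
print(find_deadlock({"A": [], "B": ["A"], "C": ["B"]}))     # []
```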
### Unnecessary Dependencies
```
Task A → Task B → Task C
(where B doesn't actually need A's output)
```
**Fix**: Remove the blockedBy relationship; let B run independently.
### Star Pattern (Single Bottleneck)
```
    ┌→ B
A ──┼→ C → F
    ├→ D
    └→ E
```
**Problem**: A is a single bottleneck; if A is slow, every downstream task is delayed.
**Fix**: Split A's work into smaller tasks that can run in parallel.

View File

@@ -0,0 +1,98 @@
# Task Decomposition Examples
Practical examples of decomposing features into parallelizable tasks with clear ownership.
## Example 1: User Authentication Feature
### Feature Description
Add email/password authentication with login, registration, and profile pages.
### Decomposition (Vertical Slices)
**Stream 1: Login Flow** (implementer-1)
- Owned files: `src/pages/login.tsx`, `src/api/login.ts`, `tests/login.test.ts`
- Requirements: Login form, API endpoint, input validation, error handling
- Interface: Imports `AuthResponse` from `src/types/auth.ts`
**Stream 2: Registration Flow** (implementer-2)
- Owned files: `src/pages/register.tsx`, `src/api/register.ts`, `tests/register.test.ts`
- Requirements: Registration form, API endpoint, email validation, password strength
- Interface: Imports `AuthResponse` from `src/types/auth.ts`
**Stream 3: Shared Infrastructure** (implementer-3)
- Owned files: `src/types/auth.ts`, `src/middleware/auth.ts`, `src/utils/jwt.ts`
- Requirements: Type definitions, JWT middleware, token utilities
- Dependencies: None (other streams depend on this)
### Dependency Graph
```
Stream 3 (types/middleware) ──→ Stream 1 (login)
                            └─→ Stream 2 (registration)
```
## Example 2: REST API Endpoints
### Feature Description
Add CRUD endpoints for a new "Projects" resource.
### Decomposition (By Layer)
**Stream 1: Data Layer** (implementer-1)
- Owned files: `src/models/project.ts`, `src/migrations/add-projects.ts`, `src/repositories/project-repo.ts`
- Requirements: Schema definition, migration, repository pattern
- Dependencies: None
**Stream 2: Business Logic** (implementer-2)
- Owned files: `src/services/project-service.ts`, `src/validators/project-validator.ts`
- Requirements: CRUD operations, validation rules, business logic
- Dependencies: Blocked by Stream 1 (needs model/repository)
**Stream 3: API Layer** (implementer-3)
- Owned files: `src/routes/projects.ts`, `src/controllers/project-controller.ts`
- Requirements: REST endpoints, request parsing, response formatting
- Dependencies: Blocked by Stream 2 (needs service layer)
## Task Template
```markdown
## Task: {Stream Name}
### Objective
{1-2 sentence description of what to build}
### Owned Files
- {file1} — {purpose}
- {file2} — {purpose}
### Requirements
1. {Specific deliverable 1}
2. {Specific deliverable 2}
3. {Specific deliverable 3}
### Interface Contract
- Exports: {types/functions this stream provides}
- Imports: {types/functions this stream consumes from other streams}
### Acceptance Criteria
- [ ] {Verifiable criterion 1}
- [ ] {Verifiable criterion 2}
- [ ] {Verifiable criterion 3}
### Out of Scope
- {Explicitly excluded work}
```

View File

@@ -0,0 +1,154 @@
---
name: team-communication-protocols
description: Structured messaging protocols for agent team communication including message type selection, plan approval, shutdown procedures, and anti-patterns to avoid. Use this skill when establishing team communication norms, handling plan approvals, or managing team shutdown.
---
# Team Communication Protocols
Protocols for effective communication between agent teammates, including message type selection, plan approval workflows, shutdown procedures, and common anti-patterns to avoid.
## When to Use This Skill
- Establishing communication norms for a new team
- Choosing between message types (message, broadcast, shutdown_request)
- Handling plan approval workflows
- Managing graceful team shutdown
- Discovering teammate identities and capabilities
## Message Type Selection
### `message` (Direct Message) — Default Choice
Send to a single specific teammate:
```json
{
"type": "message",
"recipient": "implementer-1",
"content": "Your API endpoint is ready. You can now build the frontend form.",
"summary": "API endpoint ready for frontend"
}
```
**Use for**: Task updates, coordination, questions, integration notifications.
### `broadcast` — Use Sparingly
Send to ALL teammates simultaneously:
```json
{
"type": "broadcast",
"content": "Critical: shared types file has been updated. Pull latest before continuing.",
"summary": "Shared types updated"
}
```
**Use ONLY for**: Critical blockers affecting everyone, major changes to shared resources.
**Why sparingly?** Each broadcast sends N separate messages (one per teammate), consuming API resources proportional to team size.
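The cost asymmetry is easy to quantify. A throwaway sketch (illustrative arithmetic only, not part of the messaging API):

```python
def message_cost(updates: int, team_size: int, broadcast: bool) -> int:
    """Messages actually delivered for a batch of status updates."""
    return updates * (team_size if broadcast else 1)

# 20 routine updates on a 4-person team:
print(message_cost(20, 4, broadcast=True))   # 80 messages
print(message_cost(20, 4, broadcast=False))  # 20 messages
```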
### `shutdown_request` — Graceful Termination
Request a teammate to shut down:
```json
{
"type": "shutdown_request",
"recipient": "reviewer-1",
"content": "Review complete, shutting down team."
}
```
The teammate responds with `shutdown_response` (approve or reject with reason).
## Communication Anti-Patterns
| Anti-Pattern | Problem | Better Approach |
| --------------------------------------- | ---------------------------------------- | -------------------------------------- |
| Broadcasting routine updates | Wastes resources, noise | Direct message to affected teammate |
| Sending JSON status messages | Not designed for structured data | Use TaskUpdate to update task status |
| Not communicating at integration points | Teammates build against stale interfaces | Message when your interface is ready |
| Micromanaging via messages | Overwhelms teammates, slows work | Check in at milestones, not every step |
| Using UUIDs instead of names | Hard to read, error-prone | Always use teammate names |
| Ignoring idle teammates | Wasted capacity | Assign new work or shut down |
## Plan Approval Workflow
When a teammate is spawned with `plan_mode_required`:
1. Teammate creates a plan using read-only exploration tools
2. Teammate calls `ExitPlanMode` which sends a `plan_approval_request` to the lead
3. Lead reviews the plan
4. Lead responds with `plan_approval_response`:
**Approve**:
```json
{
"type": "plan_approval_response",
"request_id": "abc-123",
"recipient": "implementer-1",
"approve": true
}
```
**Reject with feedback**:
```json
{
"type": "plan_approval_response",
"request_id": "abc-123",
"recipient": "implementer-1",
"approve": false,
"content": "Please add error handling for the API calls"
}
```
## Shutdown Protocol
### Graceful Shutdown Sequence
1. **Lead sends shutdown_request** to each teammate
2. **Teammate receives request** as a JSON message with `type: "shutdown_request"`
3. **Teammate responds** with `shutdown_response`:
- `approve: true` — Teammate saves state and exits
- `approve: false` + reason — Teammate continues working
4. **Lead handles rejections** — Wait for teammate to finish, then retry
5. **After all teammates shut down** — Call `Teammate` cleanup
### Handling Rejections
If a teammate rejects shutdown:
- Check their reason (usually "still working on task")
- Wait for their current task to complete
- Retry shutdown request
- If urgent, user can force shutdown
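The retry loop above can be sketched in plain Python. `send_shutdown_request` here is a hypothetical stand-in for the real messaging tool, not an actual API:

```python
import time

def shutdown_team(teammates, send_shutdown_request, max_retries=3, wait_seconds=0):
    """Request shutdown from each teammate, retrying rejections; return holdouts."""
    still_running = list(teammates)
    for _attempt in range(max_retries):
        rejected = []
        for name in still_running:
            response = send_shutdown_request(name)  # -> {"approve": bool, "reason": str}
            if not response["approve"]:
                rejected.append(name)  # reason is usually "still working on task"
        if not rejected:
            return []
        time.sleep(wait_seconds)  # wait for current tasks to finish, then retry
        still_running = rejected
    return still_running  # escalate to the user for a forced shutdown

# Stub transport: reviewer-1 rejects once, then approves.
pending = {"reviewer-1": 1}
def stub(name):
    if pending.get(name, 0) > 0:
        pending[name] -= 1
        return {"approve": False, "reason": "still working on task"}
    return {"approve": True, "reason": ""}

print(shutdown_team(["implementer-1", "reviewer-1"], stub))  # []
```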
## Teammate Discovery
Find team members by reading the config file:
**Location**: `~/.claude/teams/{team-name}/config.json`
**Structure**:
```json
{
"members": [
{
"name": "security-reviewer",
"agentId": "uuid-here",
"agentType": "team-reviewer"
},
{
"name": "perf-reviewer",
"agentId": "uuid-here",
"agentType": "team-reviewer"
}
]
}
```
**Always use `name`** for messaging and task assignment. Never use `agentId` directly.
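A minimal sketch of loading that roster (the path and field names mirror the structure shown above; this is illustrative, not an official API):

```python
import json
from pathlib import Path

def load_roster(team_name: str,
                base: Path = Path.home() / ".claude" / "teams") -> dict[str, str]:
    """Map teammate name -> agentType from the team config file."""
    config = json.loads((base / team_name / "config.json").read_text())
    return {member["name"]: member["agentType"] for member in config["members"]}
```

Downstream messaging and task assignment should then key off the returned names, never the `agentId` values.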

View File

@@ -0,0 +1,112 @@
# Messaging Pattern Templates
Ready-to-use message templates for common team communication scenarios.
## Task Assignment
```
You've been assigned task #{id}: {subject}.
Owned files:
- {file1}
- {file2}
Key requirements:
- {requirement1}
- {requirement2}
Interface contract:
- Import {types} from {shared-file}
- Export {types} for {other-teammate}
Let me know if you have questions or blockers.
```
## Integration Point Notification
```
My side of the {interface-name} interface is complete.
Exported from {file}:
- {function/type 1}
- {function/type 2}
You can now import these in your owned files. The contract matches what we agreed on.
```
## Blocker Report
```
I'm blocked on task #{id}: {subject}.
Blocker: {description of what's preventing progress}
Impact: {what can't be completed until this is resolved}
Options:
1. {option 1}
2. {option 2}
Waiting for your guidance.
```
## Task Completion Report
```
Task #{id} complete: {subject}
Changes made:
- {file1}: {what changed}
- {file2}: {what changed}
Integration notes:
- {any interface changes or considerations for other teammates}
Ready for next assignment.
```
## Review Finding Summary
```
Review complete for {target} ({dimension} dimension).
Summary:
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
Top finding: {brief description of most important finding}
Full findings attached to task #{id}.
```
## Investigation Report Summary
```
Investigation complete for hypothesis: {hypothesis summary}
Verdict: {Confirmed | Falsified | Inconclusive}
Confidence: {High | Medium | Low}
Key evidence:
- {file:line}: {what was found}
- {file:line}: {what was found}
{If confirmed}: Recommended fix: {brief fix description}
{If falsified}: Contradicting evidence: {brief description}
Full report attached to task #{id}.
```
## Shutdown Acknowledgment
When you receive a shutdown request, respond with the `shutdown_response` tool; you may also want to send a final status message first:
```
Wrapping up. Current status:
- Task #{id}: {completed/in-progress}
- Files modified: {list}
- Pending work: {none or description}
Ready for shutdown.
```

View File

@@ -0,0 +1,118 @@
---
name: team-composition-patterns
description: Design optimal agent team compositions with sizing heuristics, preset configurations, and agent type selection. Use this skill when deciding team size, selecting agent types, or configuring team presets for multi-agent workflows.
---
# Team Composition Patterns
Best practices for composing multi-agent teams, selecting team sizes, choosing agent types, and configuring display modes for Claude Code's Agent Teams feature.
## When to Use This Skill
- Deciding how many teammates to spawn for a task
- Choosing between preset team configurations
- Selecting the right agent type (subagent_type) for each role
- Configuring teammate display modes (tmux, iTerm2, in-process)
- Building custom team compositions for non-standard workflows
## Team Sizing Heuristics
| Complexity | Team Size | When to Use |
| ------------ | --------- | ----------------------------------------------------------- |
| Simple | 1-2 | Single-dimension review, isolated bug, small feature |
| Moderate | 2-3 | Multi-file changes, 2-3 concerns, medium features |
| Complex | 3-4 | Cross-cutting concerns, large features, deep debugging |
| Very Complex | 4-5 | Full-stack features, comprehensive reviews, systemic issues |
**Rule of thumb**: Start with the smallest team that covers all required dimensions. Adding teammates increases coordination overhead.
## Preset Team Compositions
### Review Team
- **Size**: 3 reviewers
- **Agents**: 3x `team-reviewer`
- **Default dimensions**: security, performance, architecture
- **Use when**: Code changes need multi-dimensional quality assessment
### Debug Team
- **Size**: 3 investigators
- **Agents**: 3x `team-debugger`
- **Default hypotheses**: 3 competing hypotheses
- **Use when**: Bug has multiple plausible root causes
### Feature Team
- **Size**: 3 (1 lead + 2 implementers)
- **Agents**: 1x `team-lead` + 2x `team-implementer`
- **Use when**: Feature can be decomposed into parallel work streams
### Fullstack Team
- **Size**: 4 (1 lead + 3 implementers)
- **Agents**: 1x `team-lead` + 1x frontend `team-implementer` + 1x backend `team-implementer` + 1x test `team-implementer`
- **Use when**: Feature spans frontend, backend, and test layers
### Research Team
- **Size**: 3 researchers
- **Agents**: 3x `general-purpose`
- **Default areas**: Each assigned a different research question, module, or topic
- **Capabilities**: Codebase search (Grep, Glob, Read), web search (WebSearch, WebFetch), library documentation (Context7 MCP)
- **Use when**: Need to understand a codebase, research libraries, compare approaches, or gather information from code and web sources in parallel
### Security Team
- **Size**: 4 reviewers
- **Agents**: 4x `team-reviewer`
- **Default dimensions**: OWASP/vulnerabilities, auth/access control, dependencies/supply chain, secrets/configuration
- **Use when**: Comprehensive security audit covering multiple attack surfaces
### Migration Team
- **Size**: 4 (1 lead + 2 implementers + 1 reviewer)
- **Agents**: 1x `team-lead` + 2x `team-implementer` + 1x `team-reviewer`
- **Use when**: Large codebase migration (framework upgrade, language port, API version bump) requiring parallel work with correctness verification
## Agent Type Selection
When spawning teammates with the Task tool, choose `subagent_type` based on what tools the teammate needs:
| Agent Type | Tools Available | Use For |
| ------------------------------ | ----------------------------------------- | ---------------------------------------------------------- |
| `general-purpose` | All tools (Read, Write, Edit, Bash, etc.) | Implementation, debugging, any task requiring file changes |
| `Explore` | Read-only tools (Read, Grep, Glob) | Research, code exploration, analysis |
| `Plan` | Read-only tools | Architecture planning, task decomposition |
| `agent-teams:team-reviewer` | All tools | Code review with structured findings |
| `agent-teams:team-debugger` | All tools | Hypothesis-driven investigation |
| `agent-teams:team-implementer` | All tools | Building features within file ownership boundaries |
| `agent-teams:team-lead` | All tools | Team orchestration and coordination |
**Key distinction**: Read-only agents (Explore, Plan) cannot modify files. Never assign implementation tasks to read-only agents.
## Display Mode Configuration
Configure in `~/.claude/settings.json`:
```json
{
"teammateMode": "tmux"
}
```
| Mode | Behavior | Best For |
| -------------- | ------------------------------ | ------------------------------------------------- |
| `"tmux"` | Each teammate in a tmux pane | Development workflows, monitoring multiple agents |
| `"iterm2"` | Each teammate in an iTerm2 tab | macOS users who prefer iTerm2 |
| `"in-process"` | All teammates in same process | Simple tasks, CI/CD environments |
## Custom Team Guidelines
When building custom teams:
1. **Every team needs a coordinator** — Either designate a `team-lead` or have the user coordinate directly
2. **Match roles to agent types** — Use specialized agents (reviewer, debugger, implementer) when available
3. **Avoid duplicate roles** — Two agents doing the same thing wastes resources
4. **Define boundaries upfront** — Each teammate needs clear ownership of files or responsibilities
5. **Keep it small** — 2-4 teammates is the sweet spot; 5+ requires significant coordination overhead

View File

@@ -0,0 +1,84 @@
# Agent Type Selection Guide
Decision matrix for choosing the right `subagent_type` when spawning teammates.
## Decision Matrix
```
Does the teammate need to modify files?
├── YES → Does it need a specialized role?
│   ├── YES → Which role?
│   │   ├── Code review → agent-teams:team-reviewer
│   │   ├── Bug investigation → agent-teams:team-debugger
│   │   ├── Feature building → agent-teams:team-implementer
│   │   └── Team coordination → agent-teams:team-lead
│   └── NO → general-purpose
└── NO → Does it need deep codebase exploration?
    ├── YES → Explore
    └── NO → Plan (for architecture/design tasks)
```
## Agent Type Comparison
| Agent Type | Can Read | Can Write | Can Edit | Can Bash | Specialized |
| ---------------------------- | -------- | --------- | -------- | -------- | ------------------ |
| general-purpose | Yes | Yes | Yes | Yes | No |
| Explore | Yes | No | No | No | Search/explore |
| Plan | Yes | No | No | No | Architecture |
| agent-teams:team-lead | Yes | Yes | Yes | Yes | Team orchestration |
| agent-teams:team-reviewer | Yes | Yes | Yes | Yes | Code review |
| agent-teams:team-debugger | Yes | Yes | Yes | Yes | Bug investigation |
| agent-teams:team-implementer | Yes | Yes | Yes | Yes | Feature building |
## Common Mistakes
| Mistake | Why It Fails | Correct Choice |
| ------------------------------------- | ------------------------------ | --------------------------------------- |
| Using `Explore` for implementation | Cannot write/edit files | `general-purpose` or `team-implementer` |
| Using `Plan` for coding tasks | Cannot write/edit files | `general-purpose` or `team-implementer` |
| Using `general-purpose` for reviews | No review structure/checklists | `team-reviewer` |
| Using `team-implementer` for research | Has tools but wrong focus | `Explore` or `Plan` |
## When to Use Each
### general-purpose
- One-off tasks that don't fit specialized roles
- Tasks requiring unique tool combinations
- Ad-hoc scripting or automation
### Explore
- Codebase research and analysis
- Finding files, patterns, or dependencies
- Understanding architecture before planning
### Plan
- Designing implementation approaches
- Creating task decompositions
- Architecture review (read-only)
### team-lead
- Coordinating multiple teammates
- Decomposing work and managing tasks
- Synthesizing results from parallel work
### team-reviewer
- Focused code review on a specific dimension
- Producing structured findings with severity ratings
- Following dimension-specific checklists
### team-debugger
- Investigating a specific hypothesis about a bug
- Gathering evidence with file:line citations
- Reporting confidence levels and causal chains
### team-implementer
- Building code within file ownership boundaries
- Following interface contracts
- Coordinating at integration points

View File

@@ -0,0 +1,268 @@
# Preset Team Definitions
Detailed preset team configurations with task templates for common workflows.
## Review Team Preset
**Command**: `/team-spawn review`
### Configuration
- **Team Size**: 3
- **Agent Type**: `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Dimension | Focus Areas |
| --------------------- | ------------ | ------------------------------------------------- |
| security-reviewer | Security | Input validation, auth, injection, secrets, CVEs |
| performance-reviewer | Performance | Query efficiency, memory, caching, async patterns |
| architecture-reviewer | Architecture | SOLID, coupling, patterns, error handling |
### Task Template
```
Subject: Review {target} for {dimension} issues
Description:
Dimension: {dimension}
Target: {file list or diff}
Checklist: {dimension-specific checklist}
Output format: Structured findings with file:line, severity, evidence, fix
```
### Variations
- **Security-focused**: `--reviewers security,testing` (2 members)
- **Full review**: `--reviewers security,performance,architecture,testing,accessibility` (5 members)
- **Frontend review**: `--reviewers architecture,testing,accessibility` (3 members)
## Debug Team Preset
**Command**: `/team-spawn debug`
### Configuration
- **Team Size**: 3 (default) or N with `--hypotheses N`
- **Agent Type**: `agent-teams:team-debugger`
- **Display Mode**: tmux recommended
### Members
| Name | Role |
| -------------- | ------------------------- |
| investigator-1 | Investigates hypothesis 1 |
| investigator-2 | Investigates hypothesis 2 |
| investigator-3 | Investigates hypothesis 3 |
### Task Template
```
Subject: Investigate hypothesis: {hypothesis summary}
Description:
Hypothesis: {full hypothesis statement}
Scope: {files/module/project}
Evidence criteria:
Confirming: {what would confirm}
Falsifying: {what would falsify}
Report format: confidence level, evidence with file:line, causal chain
```
## Feature Team Preset
**Command**: `/team-spawn feature`
### Configuration
- **Team Size**: 3 (1 lead + 2 implementers)
- **Agent Types**: `agent-teams:team-lead` + `agent-teams:team-implementer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Responsibility |
| ------------- | ---------------- | ---------------------------------------- |
| feature-lead | team-lead | Decomposition, coordination, integration |
| implementer-1 | team-implementer | Work stream 1 (assigned files) |
| implementer-2 | team-implementer | Work stream 2 (assigned files) |
### Task Template
```
Subject: Implement {work stream name}
Description:
Owned files: {explicit file list}
Requirements: {specific deliverables}
Interface contract: {shared types/APIs}
Acceptance criteria: {verification steps}
Blocked by: {dependency task IDs if any}
```
## Fullstack Team Preset
**Command**: `/team-spawn fullstack`
### Configuration
- **Team Size**: 4 (1 lead + 3 implementers)
- **Agent Types**: `agent-teams:team-lead` + 3x `agent-teams:team-implementer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Layer |
| -------------- | ---------------- | -------------------------------- |
| fullstack-lead | team-lead | Coordination, integration |
| frontend-dev | team-implementer | UI components, client-side logic |
| backend-dev | team-implementer | API endpoints, business logic |
| test-dev | team-implementer | Unit, integration, e2e tests |
### Dependency Pattern
```
frontend-dev ──┐
               ├──→ test-dev (blocked by both)
backend-dev ───┘
```
## Research Team Preset
**Command**: `/team-spawn research`
### Configuration
- **Team Size**: 3
- **Agent Type**: `general-purpose`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Focus |
| ------------ | --------------- | ------------------------------------------------ |
| researcher-1 | general-purpose | Research area 1 (e.g., codebase architecture) |
| researcher-2 | general-purpose | Research area 2 (e.g., library documentation) |
| researcher-3 | general-purpose | Research area 3 (e.g., web resources & examples) |
### Available Research Tools
Each researcher has access to:
- **Codebase**: `Grep`, `Glob`, `Read` — search and read local files
- **Web**: `WebSearch`, `WebFetch` — search the web and fetch page content
- **Library Docs**: Context7 MCP (`resolve-library-id`, `query-docs`) — look up current documentation for any library
- **Deep Exploration**: `Task` with `subagent_type: Explore` — spawn sub-explorers for deep dives
### Task Template
```
Subject: Research {topic or question}
Description:
Question: {specific research question}
Scope: {codebase files, web resources, library docs, or all}
Tools to prioritize:
- Codebase: Grep/Glob/Read for local code analysis
- Web: WebSearch/WebFetch for articles, examples, best practices
- Docs: Context7 MCP for up-to-date library documentation
Deliverable: Summary with citations (file:line for code, URLs for web)
Output format: Structured report with sections, evidence, and recommendations
```
### Variations
- **Codebase-only**: 3 researchers exploring different modules or patterns locally
- **Documentation**: 3 researchers using Context7 to compare library APIs and patterns
- **Web research**: 3 researchers using WebSearch to survey approaches, benchmarks, or best practices
- **Mixed**: 1 codebase researcher + 1 docs researcher + 1 web researcher (recommended for evaluating new libraries)
### Example Research Assignments
```
Researcher 1 (codebase): "How does our current auth system work? Trace the flow from login to token validation."
Researcher 2 (docs): "Use Context7 to look up the latest NextAuth.js v5 API. How does it handle JWT and session management?"
Researcher 3 (web): "Search for comparisons between NextAuth, Clerk, and Auth0 for Next.js apps. Focus on pricing, DX, and migration effort."
```
## Security Team Preset
**Command**: `/team-spawn security`
### Configuration
- **Team Size**: 4
- **Agent Type**: `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Dimension | Focus Areas |
| --------------- | -------------- | ---------------------------------------------------- |
| vuln-reviewer | OWASP/Vulns | Injection, XSS, CSRF, deserialization, SSRF |
| auth-reviewer | Auth/Access | Authentication, authorization, session management |
| deps-reviewer | Dependencies | CVEs, supply chain, outdated packages, license risks |
| config-reviewer | Secrets/Config | Hardcoded secrets, env vars, debug endpoints, CORS |
### Task Template
```
Subject: Security audit {target} for {dimension}
Description:
Dimension: {security sub-dimension}
Target: {file list, directory, or entire project}
Checklist: {dimension-specific security checklist}
Output format: Structured findings with file:line, CVSS-like severity, evidence, remediation
Standards: OWASP Top 10, CWE references where applicable
```
### Variations
- **Quick scan**: `--reviewers owasp,secrets` (2 members for fast audit)
- **Full audit**: All 4 dimensions (default)
- **CI/CD focused**: Add a 5th reviewer for pipeline security and deployment configuration
## Migration Team Preset
**Command**: `/team-spawn migration`
### Configuration
- **Team Size**: 4 (1 lead + 2 implementers + 1 reviewer)
- **Agent Types**: `agent-teams:team-lead` + 2x `agent-teams:team-implementer` + `agent-teams:team-reviewer`
- **Display Mode**: tmux recommended
### Members
| Name | Role | Responsibility |
| ---------------- | ---------------- | ----------------------------------------------- |
| migration-lead | team-lead | Migration plan, coordination, conflict handling |
| migrator-1 | team-implementer | Migration stream 1 (assigned files/modules) |
| migrator-2 | team-implementer | Migration stream 2 (assigned files/modules) |
| migration-verify | team-reviewer | Verify migrated code correctness and patterns |
### Task Template
```
Subject: Migrate {module/files} from {old} to {new}
Description:
Owned files: {explicit file list}
Migration rules: {specific transformation patterns}
Old pattern: {what to change from}
New pattern: {what to change to}
Acceptance criteria: {tests pass, no regressions, new patterns used}
Blocked by: {dependency task IDs if any}
```
### Dependency Pattern
```
migration-lead (plan) ──→ migrator-1 ──┐
                      └─→ migrator-2 ──┴→ migration-verify
```
### Use Cases
- Framework upgrades (React class → hooks, Vue 2 → Vue 3, Angular version bumps)
- Language migrations (JavaScript → TypeScript, Python 2 → 3)
- API version bumps (REST v1 → v2, GraphQL schema changes)
- Database migrations (ORM changes, schema restructuring)
- Build system changes (Webpack → Vite, CRA → Next.js)