diff --git a/plugins/application-performance/.claude-plugin/plugin.json b/plugins/application-performance/.claude-plugin/plugin.json index 9aa33c0..2ec1db7 100644 --- a/plugins/application-performance/.claude-plugin/plugin.json +++ b/plugins/application-performance/.claude-plugin/plugin.json @@ -1,6 +1,6 @@ { "name": "application-performance", - "version": "1.2.1", + "version": "1.3.0", "description": "Application profiling, performance optimization, and observability for frontend and backend systems", "author": { "name": "Seth Hobson", diff --git a/plugins/application-performance/commands/performance-optimization.md b/plugins/application-performance/commands/performance-optimization.md index 2a516af..def469f 100644 --- a/plugins/application-performance/commands/performance-optimization.md +++ b/plugins/application-performance/commands/performance-optimization.md @@ -1,124 +1,681 @@ -Optimize application performance end-to-end using specialized performance and optimization agents: +--- +description: "Orchestrate end-to-end application performance optimization from profiling to monitoring" +argument-hint: "<target> [--focus latency|throughput|cost|balanced] [--depth quick-wins|comprehensive|enterprise]" +--- -[Extended thinking: This workflow orchestrates a comprehensive performance optimization process across the entire application stack. Starting with deep profiling and baseline establishment, the workflow progresses through targeted optimizations in each system layer, validates improvements through load testing, and establishes continuous monitoring for sustained performance. Each phase builds on insights from previous phases, creating a data-driven optimization strategy that addresses real bottlenecks rather than theoretical improvements. The workflow emphasizes modern observability practices, user-centric performance metrics, and cost-effective optimization strategies.]
+# Performance Optimization Orchestrator -## Phase 1: Performance Profiling & Baseline +## CRITICAL BEHAVIORAL RULES -### 1. Comprehensive Performance Profiling +You MUST follow these rules exactly. Violating any of them is a failure. -- Use Task tool with subagent_type="performance-engineer" -- Prompt: "Profile application performance comprehensively for: $ARGUMENTS. Generate flame graphs for CPU usage, heap dumps for memory analysis, trace I/O operations, and identify hot paths. Use APM tools like DataDog or New Relic if available. Include database query profiling, API response times, and frontend rendering metrics. Establish performance baselines for all critical user journeys." -- Context: Initial performance investigation -- Output: Detailed performance profile with flame graphs, memory analysis, bottleneck identification, baseline metrics +1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps. +2. **Write output files.** Each step MUST produce its output file in `.performance-optimization/` before the next step begins. Read from prior step files — do NOT rely on context window memory. +3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options. +4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue. +5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies. +6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it. -### 2. Observability Stack Assessment +## Pre-flight Checks -- Use Task tool with subagent_type="observability-engineer" -- Prompt: "Assess current observability setup for: $ARGUMENTS. 
Review existing monitoring, distributed tracing with OpenTelemetry, log aggregation, and metrics collection. Identify gaps in visibility, missing metrics, and areas needing better instrumentation. Recommend APM tool integration and custom metrics for business-critical operations." -- Context: Performance profile from step 1 -- Output: Observability assessment report, instrumentation gaps, monitoring recommendations +Before starting, perform these checks: -### 3. User Experience Analysis +### 1. Check for existing session -- Use Task tool with subagent_type="performance-engineer" -- Prompt: "Analyze user experience metrics for: $ARGUMENTS. Measure Core Web Vitals (LCP, FID, CLS), page load times, time to interactive, and perceived performance. Use Real User Monitoring (RUM) data if available. Identify user journeys with poor performance and their business impact." -- Context: Performance baselines from step 1 -- Output: UX performance report, Core Web Vitals analysis, user impact assessment +Check if `.performance-optimization/state.json` exists: -## Phase 2: Database & Backend Optimization +- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user: -### 4. Database Performance Optimization + ``` + Found an in-progress performance optimization session: + Target: [name from state] + Current step: [step from state] -- Use Task tool with subagent_type="database-cloud-optimization::database-optimizer" -- Prompt: "Optimize database performance for: $ARGUMENTS based on profiling data: {context_from_phase_1}. Analyze slow query logs, create missing indexes, optimize execution plans, implement query result caching with Redis/Memcached. Review connection pooling, prepared statements, and batch processing opportunities. Consider read replicas and database sharding if needed." -- Context: Performance bottlenecks from phase 1 -- Output: Optimized queries, new indexes, caching strategy, connection pool configuration + 1. 
Resume from where we left off + 2. Start fresh (archives existing session) + ``` -### 5. Backend Code & API Optimization +- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh. -- Use Task tool with subagent_type="backend-development::backend-architect" -- Prompt: "Optimize backend services for: $ARGUMENTS targeting bottlenecks: {context_from_phase_1}. Implement efficient algorithms, add application-level caching, optimize N+1 queries, use async/await patterns effectively. Implement pagination, response compression, GraphQL query optimization, and batch API operations. Add circuit breakers and bulkheads for resilience." -- Context: Database optimizations from step 4, profiling data from phase 1 -- Output: Optimized backend code, caching implementation, API improvements, resilience patterns +### 2. Initialize state -### 6. Microservices & Distributed System Optimization +Create `.performance-optimization/` directory and `state.json`: -- Use Task tool with subagent_type="performance-engineer" -- Prompt: "Optimize distributed system performance for: $ARGUMENTS. Analyze service-to-service communication, implement service mesh optimizations, optimize message queue performance (Kafka/RabbitMQ), reduce network hops. Implement distributed caching strategies and optimize serialization/deserialization." -- Context: Backend optimizations from step 5 -- Output: Service communication improvements, message queue optimization, distributed caching setup +```json +{ + "target": "$ARGUMENTS", + "status": "in_progress", + "focus": "balanced", + "depth": "comprehensive", + "current_step": 1, + "current_phase": 1, + "completed_steps": [], + "files_created": [], + "started_at": "ISO_TIMESTAMP", + "last_updated": "ISO_TIMESTAMP" +} +``` -## Phase 3: Frontend & CDN Optimization +Parse `$ARGUMENTS` for `--focus` and `--depth` flags. Use defaults if not specified. -### 7. Frontend Bundle & Loading Optimization +### 3. 
Parse target description -- Use Task tool with subagent_type="frontend-developer" -- Prompt: "Optimize frontend performance for: $ARGUMENTS targeting Core Web Vitals: {context_from_phase_1}. Implement code splitting, tree shaking, lazy loading, and dynamic imports. Optimize bundle sizes with webpack/rollup analysis. Implement resource hints (prefetch, preconnect, preload). Optimize critical rendering path and eliminate render-blocking resources." -- Context: UX analysis from phase 1, backend optimizations from phase 2 -- Output: Optimized bundles, lazy loading implementation, improved Core Web Vitals +Extract the target description from `$ARGUMENTS` (everything before the flags). This is referenced as `$TARGET` in prompts below. -### 8. CDN & Edge Optimization +--- -- Use Task tool with subagent_type="cloud-infrastructure::cloud-architect" -- Prompt: "Optimize CDN and edge performance for: $ARGUMENTS. Configure CloudFlare/CloudFront for optimal caching, implement edge functions for dynamic content, set up image optimization with responsive images and WebP/AVIF formats. Configure HTTP/2 and HTTP/3, implement Brotli compression. Set up geographic distribution for global users." -- Context: Frontend optimizations from step 7 -- Output: CDN configuration, edge caching rules, compression setup, geographic optimization +## Phase 1: Performance Profiling & Baseline (Steps 1–3) -### 9. Mobile & Progressive Web App Optimization +### Step 1: Comprehensive Performance Profiling -- Use Task tool with subagent_type="frontend-mobile-development::mobile-developer" -- Prompt: "Optimize mobile experience for: $ARGUMENTS. Implement service workers for offline functionality, optimize for slow networks with adaptive loading. Reduce JavaScript execution time for mobile CPUs. Implement virtual scrolling for long lists. Optimize touch responsiveness and smooth animations. Consider React Native/Flutter specific optimizations if applicable." 
-- Context: Frontend optimizations from steps 7-8 -- Output: Mobile-optimized code, PWA implementation, offline functionality +Use the Task tool to launch the performance engineer: -## Phase 4: Load Testing & Validation +``` +Task: + subagent_type: "performance-engineer" + description: "Profile application performance for $TARGET" + prompt: | + Profile application performance comprehensively for: $TARGET. -### 10. Comprehensive Load Testing + Generate flame graphs for CPU usage, heap dumps for memory analysis, trace I/O operations, + and identify hot paths. Use APM tools like DataDog or New Relic if available. Include database + query profiling, API response times, and frontend rendering metrics. Establish performance + baselines for all critical user journeys. -- Use Task tool with subagent_type="performance-engineer" -- Prompt: "Conduct comprehensive load testing for: $ARGUMENTS using k6/Gatling/Artillery. Design realistic load scenarios based on production traffic patterns. Test normal load, peak load, and stress scenarios. Include API testing, browser-based testing, and WebSocket testing if applicable. Measure response times, throughput, error rates, and resource utilization at various load levels." -- Context: All optimizations from phases 1-3 -- Output: Load test results, performance under load, breaking points, scalability analysis + ## Deliverables + 1. Performance profile with flame graphs and memory analysis + 2. Bottleneck identification ranked by impact + 3. Baseline metrics for critical user journeys + 4. Database query profiling results + 5. API response time measurements -### 11. Performance Regression Testing + Write your complete profiling report as a single markdown document. +``` -- Use Task tool with subagent_type="performance-testing-review::test-automator" -- Prompt: "Create automated performance regression tests for: $ARGUMENTS. Set up performance budgets for key metrics, integrate with CI/CD pipeline using GitHub Actions or similar. 
Create Lighthouse CI tests for frontend, API performance tests with Artillery, and database performance benchmarks. Implement automatic rollback triggers for performance regressions." -- Context: Load test results from step 10, baseline metrics from phase 1 -- Output: Performance test suite, CI/CD integration, regression prevention system +Save the agent's output to `.performance-optimization/01-profiling.md`. -## Phase 5: Monitoring & Continuous Optimization +Update `state.json`: set `current_step` to 2, add step 1 to `completed_steps`. -### 12. Production Monitoring Setup +### Step 2: Observability Stack Assessment -- Use Task tool with subagent_type="observability-engineer" -- Prompt: "Implement production performance monitoring for: $ARGUMENTS. Set up APM with DataDog/New Relic/Dynatrace, configure distributed tracing with OpenTelemetry, implement custom business metrics. Create Grafana dashboards for key metrics, set up PagerDuty alerts for performance degradation. Define SLIs/SLOs for critical services with error budgets." -- Context: Performance improvements from all previous phases -- Output: Monitoring dashboards, alert rules, SLI/SLO definitions, runbooks +Read `.performance-optimization/01-profiling.md` to load profiling context. -### 13. Continuous Performance Optimization +Use the Task tool: -- Use Task tool with subagent_type="performance-engineer" -- Prompt: "Establish continuous optimization process for: $ARGUMENTS. Create performance budget tracking, implement A/B testing for performance changes, set up continuous profiling in production. Document optimization opportunities backlog, create capacity planning models, and establish regular performance review cycles." 
-- Context: Monitoring setup from step 12, all previous optimization work -- Output: Performance budget tracking, optimization backlog, capacity planning, review process +``` +Task: + subagent_type: "observability-engineer" + description: "Assess observability setup for $TARGET" + prompt: | + Assess current observability setup for: $TARGET. -## Configuration Options + ## Performance Profile + [Insert full contents of .performance-optimization/01-profiling.md] -- **performance_focus**: "latency" | "throughput" | "cost" | "balanced" (default: "balanced") -- **optimization_depth**: "quick-wins" | "comprehensive" | "enterprise" (default: "comprehensive") -- **tools_available**: ["datadog", "newrelic", "prometheus", "grafana", "k6", "gatling"] -- **budget_constraints**: Set maximum acceptable costs for infrastructure changes -- **user_impact_tolerance**: "zero-downtime" | "maintenance-window" | "gradual-rollout" + Review existing monitoring, distributed tracing with OpenTelemetry, log aggregation, + and metrics collection. Identify gaps in visibility, missing metrics, and areas needing + better instrumentation. Recommend APM tool integration and custom metrics for + business-critical operations. + + ## Deliverables + 1. Current observability assessment + 2. Instrumentation gaps identified + 3. Monitoring recommendations + 4. Recommended metrics and dashboards + + Write your complete assessment as a single markdown document. +``` + +Save the agent's output to `.performance-optimization/02-observability.md`. + +Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`. + +### Step 3: User Experience Analysis + +Read `.performance-optimization/01-profiling.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "performance-engineer" + description: "Analyze user experience metrics for $TARGET" + prompt: | + Analyze user experience metrics for: $TARGET. 
+ + ## Performance Baselines + [Insert contents of .performance-optimization/01-profiling.md] + + Measure Core Web Vitals (LCP, FID, CLS), page load times, time to interactive, + and perceived performance. Use Real User Monitoring (RUM) data if available. + Identify user journeys with poor performance and their business impact. + + ## Deliverables + 1. Core Web Vitals analysis + 2. User journey performance report + 3. Business impact assessment + 4. Prioritized improvement opportunities + + Write your complete analysis as a single markdown document. +``` + +Save the agent's output to `.performance-optimization/03-ux-analysis.md`. + +Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 1 — User Approval Required + +You MUST stop here and present the profiling results for review. + +Display a summary from `.performance-optimization/01-profiling.md`, `.performance-optimization/02-observability.md`, and `.performance-optimization/03-ux-analysis.md` (key bottlenecks, observability gaps, UX findings) and ask: + +``` +Performance profiling complete. Please review: +- .performance-optimization/01-profiling.md +- .performance-optimization/02-observability.md +- .performance-optimization/03-ux-analysis.md + +Key bottlenecks: [summary] +Observability gaps: [summary] +UX findings: [summary] + +1. Approve — proceed to optimization +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop. + +--- + +## Phase 2: Database & Backend Optimization (Steps 4–6) + +### Step 4: Database Performance Optimization + +Read `.performance-optimization/01-profiling.md` and `.performance-optimization/03-ux-analysis.md`. 
+ +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Optimize database performance for $TARGET" + prompt: | + You are a database optimization expert. Optimize database performance for: $TARGET. + + ## Profiling Data + [Insert contents of .performance-optimization/01-profiling.md] + + ## UX Analysis + [Insert contents of .performance-optimization/03-ux-analysis.md] + + Analyze slow query logs, create missing indexes, optimize execution plans, implement + query result caching with Redis/Memcached. Review connection pooling, prepared statements, + and batch processing opportunities. Consider read replicas and database sharding if needed. + + ## Deliverables + 1. Optimized queries with before/after performance + 2. New indexes with justification + 3. Caching strategy recommendation + 4. Connection pool configuration + 5. Implementation plan with priority order + + Write your complete optimization plan as a single markdown document. +``` + +Save output to `.performance-optimization/04-database.md`. + +Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`. + +### Step 5: Backend Code & API Optimization + +Read `.performance-optimization/01-profiling.md` and `.performance-optimization/04-database.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Optimize backend services for $TARGET" + prompt: | + You are a backend performance architect. Optimize backend services for: $TARGET. + + ## Profiling Data + [Insert contents of .performance-optimization/01-profiling.md] + + ## Database Optimizations + [Insert contents of .performance-optimization/04-database.md] + + Implement efficient algorithms, add application-level caching, optimize N+1 queries, + use async/await patterns effectively. Implement pagination, response compression, + GraphQL query optimization, and batch API operations. Add circuit breakers and + bulkheads for resilience. + + ## Deliverables + 1. 
Optimized backend code with before/after metrics + 2. Caching implementation plan + 3. API improvements with expected impact + 4. Resilience patterns added + 5. Implementation priority order + + Write your complete optimization plan as a single markdown document. +``` + +Save output to `.performance-optimization/05-backend.md`. + +Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`. + +### Step 6: Microservices & Distributed System Optimization + +Read `.performance-optimization/01-profiling.md` and `.performance-optimization/05-backend.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "performance-engineer" + description: "Optimize distributed system performance for $TARGET" + prompt: | + Optimize distributed system performance for: $TARGET. + + ## Profiling Data + [Insert contents of .performance-optimization/01-profiling.md] + + ## Backend Optimizations + [Insert contents of .performance-optimization/05-backend.md] + + Analyze service-to-service communication, implement service mesh optimizations, + optimize message queue performance (Kafka/RabbitMQ), reduce network hops. Implement + distributed caching strategies and optimize serialization/deserialization. + + ## Deliverables + 1. Service communication improvements + 2. Message queue optimization plan + 3. Distributed caching setup + 4. Network optimization recommendations + 5. Expected latency improvements + + Write your complete optimization plan as a single markdown document. +``` + +Save output to `.performance-optimization/06-distributed.md`. + +Update `state.json`: set `current_step` to "checkpoint-2", add step 6 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 2 — User Approval Required + +Display a summary of optimization plans from steps 4-6 and ask: + +``` +Backend optimization plans complete. Please review: +- .performance-optimization/04-database.md +- .performance-optimization/05-backend.md +- .performance-optimization/06-distributed.md + +1. 
Approve — proceed to frontend & CDN optimization +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 3 until the user approves. + +--- + +## Phase 3: Frontend & CDN Optimization (Steps 7–9) + +### Step 7: Frontend Bundle & Loading Optimization + +Read `.performance-optimization/03-ux-analysis.md` and `.performance-optimization/05-backend.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "frontend-developer" + description: "Optimize frontend performance for $TARGET" + prompt: | + Optimize frontend performance for: $TARGET targeting Core Web Vitals improvements. + + ## UX Analysis + [Insert contents of .performance-optimization/03-ux-analysis.md] + + ## Backend Optimizations + [Insert contents of .performance-optimization/05-backend.md] + + Implement code splitting, tree shaking, lazy loading, and dynamic imports. Optimize bundle + sizes with webpack/rollup analysis. Implement resource hints (prefetch, preconnect, preload). + Optimize critical rendering path and eliminate render-blocking resources. + + ## Deliverables + 1. Bundle optimization with size reductions + 2. Lazy loading implementation plan + 3. Resource hint configuration + 4. Critical rendering path optimizations + 5. Expected Core Web Vitals improvements + + Write your complete optimization plan as a single markdown document. +``` + +Save output to `.performance-optimization/07-frontend.md`. + +Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`. + +### Step 8: CDN & Edge Optimization + +Read `.performance-optimization/07-frontend.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Optimize CDN and edge performance for $TARGET" + prompt: | + You are a cloud infrastructure and CDN optimization expert. Optimize CDN and edge + performance for: $TARGET. 
+ + ## Frontend Optimizations + [Insert contents of .performance-optimization/07-frontend.md] + + Configure Cloudflare/CloudFront for optimal caching, implement edge functions for + dynamic content, set up image optimization with responsive images and WebP/AVIF formats. + Configure HTTP/2 and HTTP/3, implement Brotli compression. Set up geographic + distribution for global users. + + ## Deliverables + 1. CDN configuration recommendations + 2. Edge caching rules + 3. Image optimization strategy + 4. Compression setup + 5. Geographic distribution plan + + Write your complete optimization plan as a single markdown document. +``` + +Save output to `.performance-optimization/08-cdn.md`. + +Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`. + +### Step 9: Mobile & Progressive Web App Optimization + +Read `.performance-optimization/07-frontend.md` and `.performance-optimization/08-cdn.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Optimize mobile experience for $TARGET" + prompt: | + You are a mobile performance optimization expert. Optimize mobile experience for: $TARGET. + + ## Frontend Optimizations + [Insert contents of .performance-optimization/07-frontend.md] + + ## CDN Optimizations + [Insert contents of .performance-optimization/08-cdn.md] + + Implement service workers for offline functionality, optimize for slow networks with + adaptive loading. Reduce JavaScript execution time for mobile CPUs. Implement virtual + scrolling for long lists. Optimize touch responsiveness and smooth animations. Consider + React Native/Flutter specific optimizations if applicable. + + ## Deliverables + 1. Mobile-optimized code recommendations + 2. PWA implementation plan + 3. Offline functionality strategy + 4. Adaptive loading configuration + 5. Expected mobile performance improvements + + Write your complete optimization plan as a single markdown document.
+``` + +Save output to `.performance-optimization/09-mobile.md`. + +Update `state.json`: set `current_step` to "checkpoint-3", add step 9 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 3 — User Approval Required + +Display a summary of frontend/CDN/mobile optimization plans and ask: + +``` +Frontend optimization plans complete. Please review: +- .performance-optimization/07-frontend.md +- .performance-optimization/08-cdn.md +- .performance-optimization/09-mobile.md + +1. Approve — proceed to load testing & validation +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 4 until the user approves. + +--- + +## Phase 4: Load Testing & Validation (Steps 10–11) + +### Step 10: Comprehensive Load Testing + +Read `.performance-optimization/01-profiling.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "performance-engineer" + description: "Conduct comprehensive load testing for $TARGET" + prompt: | + Conduct comprehensive load testing for: $TARGET using k6/Gatling/Artillery. + + ## Original Baselines + [Insert contents of .performance-optimization/01-profiling.md] + + Design realistic load scenarios based on production traffic patterns. Test normal load, + peak load, and stress scenarios. Include API testing, browser-based testing, and WebSocket + testing if applicable. Measure response times, throughput, error rates, and resource + utilization at various load levels. + + ## Deliverables + 1. Load test scripts and configurations + 2. Results at normal, peak, and stress loads + 3. Response time and throughput measurements + 4. Breaking points and scalability analysis + 5. Comparison against original baselines + + Write your complete load test report as a single markdown document. +``` + +Save output to `.performance-optimization/10-load-testing.md`. + +Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`. 
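
The `state.json` bookkeeping repeated after every step reduces to one small update. A minimal sketch of that update logic, assuming the state layout shown in the pre-flight checks (the `update_state` helper name is illustrative, not part of the plugin):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE = Path(".performance-optimization/state.json")

def update_state(current_step, completed_step=None):
    """Advance the session state after a step finishes (sketch)."""
    state = json.loads(STATE.read_text())
    state["current_step"] = current_step  # an int, "checkpoint-N", or "complete"
    if completed_step is not None and completed_step not in state["completed_steps"]:
        state["completed_steps"].append(completed_step)
    state["last_updated"] = datetime.now(timezone.utc).isoformat()
    STATE.write_text(json.dumps(state, indent=2))
```

After Step 10, for example, the equivalent call would be `update_state(11, completed_step=10)`. Re-reading the file on every update (rather than caching it) is what makes resume-after-interruption possible.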
+ +### Step 11: Performance Regression Testing + +Read `.performance-optimization/10-load-testing.md` and `.performance-optimization/01-profiling.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Create performance regression tests for $TARGET" + prompt: | + You are a test automation expert specializing in performance testing. Create automated + performance regression tests for: $TARGET. + + ## Load Test Results + [Insert contents of .performance-optimization/10-load-testing.md] + + ## Original Baselines + [Insert contents of .performance-optimization/01-profiling.md] + + Set up performance budgets for key metrics, integrate with CI/CD pipeline using GitHub + Actions or similar. Create Lighthouse CI tests for frontend, API performance tests with + Artillery, and database performance benchmarks. Implement automatic rollback triggers + for performance regressions. + + ## Deliverables + 1. Performance test suite with scripts + 2. CI/CD integration configuration + 3. Performance budgets and thresholds + 4. Regression detection rules + 5. Automatic rollback triggers + + Write your complete regression testing plan as a single markdown document. +``` + +Save output to `.performance-optimization/11-regression-testing.md`. + +Update `state.json`: set `current_step` to "checkpoint-4", add step 11 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 4 — User Approval Required + +Display a summary of testing results and ask: + +``` +Load testing and validation complete. Please review: +- .performance-optimization/10-load-testing.md +- .performance-optimization/11-regression-testing.md + +1. Approve — proceed to monitoring & continuous optimization +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 5 until the user approves. 
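
The regression detection rules produced in Step 11 ultimately reduce to comparing measured metrics against fixed budgets. A minimal sketch of that gate, assuming illustrative metric names and thresholds (mirroring the success criteria, but not prescribed by this command):

```python
# Performance budgets: any measured metric above its limit is a regression.
BUDGETS = {
    "p95_ms": 1000,     # API P95 latency budget (illustrative)
    "lcp_ms": 2500,     # Largest Contentful Paint budget (illustrative)
    "error_rate": 0.01, # max acceptable error rate under load (illustrative)
}

def find_regressions(measured: dict) -> list[str]:
    """Return the names of metrics that breach their budget."""
    return [name for name, limit in BUDGETS.items()
            if measured.get(name, 0) > limit]
```

A CI gate built on this would fail the build, and trigger the rollback path, whenever the returned list is non-empty.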
+ +--- + +## Phase 5: Monitoring & Continuous Optimization (Steps 12–13) + +### Step 12: Production Monitoring Setup + +Read `.performance-optimization/02-observability.md` and `.performance-optimization/10-load-testing.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "observability-engineer" + description: "Implement production performance monitoring for $TARGET" + prompt: | + Implement production performance monitoring for: $TARGET. + + ## Observability Assessment + [Insert contents of .performance-optimization/02-observability.md] + + ## Load Test Results + [Insert contents of .performance-optimization/10-load-testing.md] + + Set up APM with DataDog/New Relic/Dynatrace, configure distributed tracing with + OpenTelemetry, implement custom business metrics. Create Grafana dashboards for key + metrics, set up PagerDuty alerts for performance degradation. Define SLIs/SLOs for + critical services with error budgets. + + ## Deliverables + 1. Monitoring dashboard configurations + 2. Alert rules and thresholds + 3. SLI/SLO definitions + 4. Runbooks for common performance issues + 5. Error budget tracking setup + + Write your complete monitoring plan as a single markdown document. +``` + +Save output to `.performance-optimization/12-monitoring.md`. + +Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`. + +### Step 13: Continuous Performance Optimization + +Read all previous `.performance-optimization/*.md` files. + +Use the Task tool: + +``` +Task: + subagent_type: "performance-engineer" + description: "Establish continuous optimization process for $TARGET" + prompt: | + Establish continuous optimization process for: $TARGET. 
+ + ## Monitoring Setup + [Insert contents of .performance-optimization/12-monitoring.md] + + ## All Previous Optimization Work + [Insert summary of key findings from all previous steps] + + Create performance budget tracking, implement A/B testing for performance changes, + set up continuous profiling in production. Document optimization opportunities backlog, + create capacity planning models, and establish regular performance review cycles. + + ## Deliverables + 1. Performance budget tracking system + 2. Optimization backlog with priorities + 3. Capacity planning model + 4. Review cycle schedule and process + 5. A/B testing framework for performance changes + + Write your complete continuous optimization plan as a single markdown document. +``` + +Save output to `.performance-optimization/13-continuous.md`. + +Update `state.json`: set `current_step` to "complete", add step 13 to `completed_steps`. + +--- + +## Completion + +Update `state.json`: + +- Set `status` to `"complete"` +- Set `last_updated` to current timestamp + +Present the final summary: + +``` +Performance optimization complete: $TARGET + +## Files Created +[List all .performance-optimization/ output files] + +## Optimization Summary +- Profiling: .performance-optimization/01-profiling.md +- Observability: .performance-optimization/02-observability.md +- UX Analysis: .performance-optimization/03-ux-analysis.md +- Database: .performance-optimization/04-database.md +- Backend: .performance-optimization/05-backend.md +- Distributed: .performance-optimization/06-distributed.md +- Frontend: .performance-optimization/07-frontend.md +- CDN: .performance-optimization/08-cdn.md +- Mobile: .performance-optimization/09-mobile.md +- Load Testing: .performance-optimization/10-load-testing.md +- Regression Testing: .performance-optimization/11-regression-testing.md +- Monitoring: .performance-optimization/12-monitoring.md +- Continuous: .performance-optimization/13-continuous.md ## Success Criteria +- Response 
Time: P50 < 200ms, P95 < 1s, P99 < 2s for critical endpoints +- Core Web Vitals: LCP < 2.5s, INP < 200ms, CLS < 0.1 +- Throughput: Support 2x current peak load with <1% error rate +- Database Performance: Query P95 < 100ms, no queries > 1s +- Resource Utilization: CPU < 70%, Memory < 80% under normal load +- Cost Efficiency: Performance per dollar improved by at least 30% +- Monitoring Coverage: 100% of critical paths instrumented with alerting

-- **Response Time**: P50 < 200ms, P95 < 1s, P99 < 2s for critical endpoints
-- **Core Web Vitals**: LCP < 2.5s, FID < 100ms, CLS < 0.1
-- **Throughput**: Support 2x current peak load with <1% error rate
-- **Database Performance**: Query P95 < 100ms, no queries > 1s
-- **Resource Utilization**: CPU < 70%, Memory < 80% under normal load
-- **Cost Efficiency**: Performance per dollar improved by minimum 30%
-- **Monitoring Coverage**: 100% of critical paths instrumented with alerting
-
-Performance optimization target: $ARGUMENTS

+## Next Steps
+1. Implement optimizations in priority order from each phase
+2. Run regression tests after each optimization
+3. Monitor production metrics against baselines
+4. 
Review performance budgets in weekly cycles +``` diff --git a/plugins/backend-development/.claude-plugin/plugin.json b/plugins/backend-development/.claude-plugin/plugin.json index e4295d7..f9648d0 100644 --- a/plugins/backend-development/.claude-plugin/plugin.json +++ b/plugins/backend-development/.claude-plugin/plugin.json @@ -1,6 +1,6 @@ { "name": "backend-development", - "version": "1.2.4", + "version": "1.3.0", "description": "Backend API design, GraphQL architecture, workflow orchestration with Temporal, and test-driven backend development", "author": { "name": "Seth Hobson", diff --git a/plugins/backend-development/agents/performance-engineer.md b/plugins/backend-development/agents/performance-engineer.md new file mode 100644 index 0000000..4666b20 --- /dev/null +++ b/plugins/backend-development/agents/performance-engineer.md @@ -0,0 +1,44 @@ +--- +name: performance-engineer +description: Profile and optimize application performance including response times, memory usage, query efficiency, and scalability. Use for performance review during feature development. +model: sonnet +--- + +You are a performance engineer specializing in application optimization during feature development. + +## Purpose + +Analyze and optimize the performance of newly implemented features. Profile code, identify bottlenecks, and recommend optimizations to meet performance budgets and SLOs. 
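As a minimal sketch of the profiling step this agent performs, assuming a Python service (`handle_request` is a hypothetical stand-in for a real endpoint, not code from any repository):

```python
import cProfile
import io
import pstats


def handle_request(n: int = 200) -> int:
    """Hypothetical hot path standing in for a real endpoint handler."""
    return sum(i * i for i in range(n) for _ in range(n))


profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Rank functions by cumulative time to surface the hot path
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
report = stream.getvalue()
print(report.strip().splitlines()[0])
```

The same ranking approach carries over to language-specific profilers (py-spy, pprof, async-profiler); the key is sorting by cumulative cost before reading individual frames.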
+ +## Capabilities + +- **Code Profiling**: CPU hotspots, memory allocation patterns, I/O bottlenecks, async/await inefficiencies +- **Database Performance**: N+1 query detection, missing indexes, query plan analysis, connection pool sizing, ORM inefficiencies +- **API Performance**: Response time analysis, payload optimization, compression, pagination efficiency, batch operation design +- **Caching Strategy**: Cache-aside/read-through/write-through patterns, TTL tuning, cache invalidation, hit rate analysis +- **Memory Management**: Memory leak detection, garbage collection pressure, object pooling, buffer management +- **Concurrency**: Thread pool sizing, async patterns, connection pooling, resource contention, deadlock detection +- **Frontend Performance**: Bundle size analysis, lazy loading, code splitting, render performance, network waterfall +- **Load Testing Design**: K6/JMeter/Gatling script design, realistic load profiles, stress testing, capacity planning +- **Scalability Analysis**: Horizontal vs vertical scaling readiness, stateless design validation, bottleneck identification + +## Response Approach + +1. **Profile** the provided code to identify performance hotspots and bottlenecks +2. **Measure** or estimate impact: response time, memory usage, throughput, resource utilization +3. **Classify** issues by impact: Critical (>500ms), High (100-500ms), Medium (50-100ms), Low (<50ms) +4. **Recommend** specific optimizations with before/after code examples +5. **Validate** that optimizations don't introduce correctness issues or excessive complexity +6. **Benchmark** suggestions with expected improvement estimates + +## Output Format + +For each finding: + +- **Impact**: Critical/High/Medium/Low with estimated latency or resource cost +- **Location**: File and line reference +- **Issue**: What's slow and why +- **Fix**: Specific optimization with code example +- **Tradeoff**: Any downsides (complexity, memory for speed, etc.) 
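For instance, the "Fix" for an N+1 finding might be sketched like this, using plain Python with in-memory dicts standing in for a real ORM and database (all names here are hypothetical):

```python
# In-memory stand-ins for database tables; a real finding would cite ORM code
USERS = [{"id": i, "name": f"user{i}"} for i in range(3)]
ORDERS = {0: ["a"], 1: ["b", "c"], 2: []}


def fetch_orders(user_id: int) -> list[str]:
    """Pretend single-row query; each call would be one DB round trip."""
    return ORDERS[user_id]


# Issue: N+1 pattern, one query per user (N round trips after the user query)
def orders_per_user_n_plus_one() -> dict[int, list[str]]:
    return {u["id"]: fetch_orders(u["id"]) for u in USERS}


def fetch_orders_bulk(user_ids: list[int]) -> dict[int, list[str]]:
    """Fix: pretend one IN (...) query returning all rows in a single trip."""
    return {uid: ORDERS[uid] for uid in user_ids}


def orders_per_user_batched() -> dict[int, list[str]]:
    return fetch_orders_bulk([u["id"] for u in USERS])


# Both shapes return identical results; only the query count differs
assert orders_per_user_n_plus_one() == orders_per_user_batched()
```

The "Tradeoff" line for this finding would note that the batched variant materializes every row at once, trading memory for round trips.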
+ +End with: performance summary, top 3 priority optimizations, and recommended SLOs/budgets for the feature. diff --git a/plugins/backend-development/agents/security-auditor.md b/plugins/backend-development/agents/security-auditor.md new file mode 100644 index 0000000..9052adf --- /dev/null +++ b/plugins/backend-development/agents/security-auditor.md @@ -0,0 +1,41 @@ +--- +name: security-auditor +description: Review code and architecture for security vulnerabilities, OWASP Top 10, auth flaws, and compliance issues. Use for security review during feature development. +model: sonnet +--- + +You are a security auditor specializing in application security review during feature development. + +## Purpose + +Perform focused security reviews of code and architecture produced during feature development. Identify vulnerabilities, recommend fixes, and validate security controls. + +## Capabilities + +- **OWASP Top 10 Review**: Injection, broken auth, sensitive data exposure, XXE, broken access control, misconfig, XSS, insecure deserialization, vulnerable components, insufficient logging +- **Authentication & Authorization**: JWT validation, session management, OAuth flows, RBAC/ABAC enforcement, privilege escalation vectors +- **Input Validation**: SQL injection, command injection, path traversal, XSS, SSRF, prototype pollution +- **Data Protection**: Encryption at rest/transit, secrets management, PII handling, credential storage +- **API Security**: Rate limiting, CORS, CSRF, request validation, API key management +- **Dependency Scanning**: Known CVEs in dependencies, outdated packages, supply chain risks +- **Infrastructure Security**: Container security, network policies, secrets in env vars, TLS configuration + +## Response Approach + +1. **Scan** the provided code and architecture for vulnerabilities +2. **Classify** findings by severity: Critical, High, Medium, Low +3. **Explain** each finding with the attack vector and impact +4. 
**Recommend** specific fixes with code examples where possible +5. **Validate** that security controls (auth, authz, input validation) are correctly implemented + +## Output Format + +For each finding: + +- **Severity**: Critical/High/Medium/Low +- **Category**: OWASP category or security domain +- **Location**: File and line reference +- **Issue**: What's wrong and why it matters +- **Fix**: Specific remediation with code example + +End with a summary: total findings by severity, overall security posture assessment, and top 3 priority fixes. diff --git a/plugins/backend-development/agents/test-automator.md b/plugins/backend-development/agents/test-automator.md new file mode 100644 index 0000000..180180e --- /dev/null +++ b/plugins/backend-development/agents/test-automator.md @@ -0,0 +1,41 @@ +--- +name: test-automator +description: Create comprehensive test suites including unit, integration, and E2E tests. Supports TDD/BDD workflows. Use for test creation during feature development. +model: sonnet +--- + +You are a test automation engineer specializing in creating comprehensive test suites during feature development. + +## Purpose + +Build robust, maintainable test suites for newly implemented features. Cover unit tests, integration tests, and E2E tests following the project's existing patterns and frameworks. 
+ +## Capabilities + +- **Unit Testing**: Isolated function/method tests, mocking dependencies, edge cases, error paths +- **Integration Testing**: API endpoint tests, database integration, service-to-service communication, middleware chains +- **E2E Testing**: Critical user journeys, happy paths, error scenarios, browser/API-level flows +- **TDD Support**: Red-green-refactor cycle, failing test first, minimal implementation guidance +- **BDD Support**: Gherkin scenarios, step definitions, behavior specifications +- **Test Data**: Factory patterns, fixtures, seed data, synthetic data generation +- **Mocking & Stubbing**: External service mocks, database stubs, time/environment mocking +- **Coverage Analysis**: Identify untested paths, suggest additional test cases, coverage gap analysis + +## Response Approach + +1. **Detect** the project's test framework (Jest, pytest, Go testing, etc.) and existing patterns +2. **Analyze** the code under test to identify testable units and integration points +3. **Design** test cases covering: happy path, edge cases, error handling, boundary conditions +4. **Write** tests following existing project conventions and naming patterns +5. **Verify** tests are runnable and provide clear failure messages +6. **Report** coverage assessment and any untested risk areas + +## Output Format + +Organize tests by type: + +- **Unit Tests**: One test file per source file, grouped by function/method +- **Integration Tests**: Grouped by API endpoint or service interaction +- **E2E Tests**: Grouped by user journey or feature scenario + +Each test should have a descriptive name explaining what behavior is being verified. Include setup/teardown, assertions, and cleanup. Flag any areas where manual testing is recommended over automation. 
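A unit test written to this spec might look like the following sketch (pytest conventions assumed; `slugify` is a hypothetical unit under test, not code from this repository):

```python
# Hypothetical unit under test: a slug helper the feature might introduce
def slugify(title: str) -> str:
    return "-".join(title.lower().split())


# Descriptive names state the behavior being verified (pytest-style)
def test_slugify_collapses_whitespace_to_single_hyphens():
    assert slugify("Hello  Brave World") == "hello-brave-world"


def test_slugify_empty_string_yields_empty_slug():
    # Boundary condition: no words at all
    assert slugify("") == ""
```

Each test name doubles as the failure message's headline, which is what "descriptive name explaining what behavior is being verified" buys in practice.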
diff --git a/plugins/backend-development/commands/feature-development.md b/plugins/backend-development/commands/feature-development.md index bf03ae5..6a0afb0 100644 --- a/plugins/backend-development/commands/feature-development.md +++ b/plugins/backend-development/commands/feature-development.md @@ -1,150 +1,481 @@ -Orchestrate end-to-end feature development from requirements to production deployment: +--- +description: "Orchestrate end-to-end feature development from requirements to deployment" +argument-hint: " [--methodology tdd|bdd|ddd] [--complexity simple|medium|complex]" +--- -[Extended thinking: This workflow orchestrates specialized agents through comprehensive feature development phases - from discovery and planning through implementation, testing, and deployment. Each phase builds on previous outputs, ensuring coherent feature delivery. The workflow supports multiple development methodologies (traditional, TDD/BDD, DDD), feature complexity levels, and modern deployment strategies including feature flags, gradual rollouts, and observability-first development. Agents receive detailed context from previous phases to maintain consistency and quality throughout the development lifecycle.] +# Feature Development Orchestrator -## Configuration Options +## CRITICAL BEHAVIORAL RULES -### Development Methodology +You MUST follow these rules exactly. Violating any of them is a failure. -- **traditional**: Sequential development with testing after implementation -- **tdd**: Test-Driven Development with red-green-refactor cycles -- **bdd**: Behavior-Driven Development with scenario-based testing -- **ddd**: Domain-Driven Design with bounded contexts and aggregates +1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps. +2. **Write output files.** Each step MUST produce its output file in `.feature-dev/` before the next step begins. Read from prior step files — do NOT rely on context window memory. +3. 
**Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options. +4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue. +5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies. +6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it. -### Feature Complexity +## Pre-flight Checks -- **simple**: Single service, minimal integration (1-2 days) -- **medium**: Multiple services, moderate integration (3-5 days) -- **complex**: Cross-domain, extensive integration (1-2 weeks) -- **epic**: Major architectural changes, multiple teams (2+ weeks) +Before starting, perform these checks: -### Deployment Strategy +### 1. Check for existing session -- **direct**: Immediate rollout to all users -- **canary**: Gradual rollout starting with 5% of traffic -- **feature-flag**: Controlled activation via feature toggles -- **blue-green**: Zero-downtime deployment with instant rollback -- **a-b-test**: Split traffic for experimentation and metrics +Check if `.feature-dev/state.json` exists: -## Phase 1: Discovery & Requirements Planning +- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user: -1. **Business Analysis & Requirements** - - Use Task tool with subagent_type="business-analytics::business-analyst" - - Prompt: "Analyze feature requirements for: $ARGUMENTS. Define user stories, acceptance criteria, success metrics, and business value. Identify stakeholders, dependencies, and risks. Create feature specification document with clear scope boundaries." 
- - Expected output: Requirements document with user stories, success metrics, risk assessment - - Context: Initial feature request and business context + ``` + Found an in-progress feature development session: + Feature: [name from state] + Current step: [step from state] -2. **Technical Architecture Design** - - Use Task tool with subagent_type="comprehensive-review::architect-review" - - Prompt: "Design technical architecture for feature: $ARGUMENTS. Using requirements: [include business analysis from step 1]. Define service boundaries, API contracts, data models, integration points, and technology stack. Consider scalability, performance, and security requirements." - - Expected output: Technical design document with architecture diagrams, API specifications, data models - - Context: Business requirements, existing system architecture + 1. Resume from where we left off + 2. Start fresh (archives existing session) + ``` -3. **Feasibility & Risk Assessment** - - Use Task tool with subagent_type="security-scanning::security-auditor" - - Prompt: "Assess security implications and risks for feature: $ARGUMENTS. Review architecture: [include technical design from step 2]. Identify security requirements, compliance needs, data privacy concerns, and potential vulnerabilities." - - Expected output: Security assessment with risk matrix, compliance checklist, mitigation strategies - - Context: Technical design, regulatory requirements +- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh. -## Phase 2: Implementation & Development +### 2. Initialize state -4. **Backend Services Implementation** - - Use Task tool with subagent_type="backend-architect" - - Prompt: "Implement backend services for: $ARGUMENTS. Follow technical design: [include architecture from step 2]. Build RESTful/GraphQL APIs, implement business logic, integrate with data layer, add resilience patterns (circuit breakers, retries), implement caching strategies. 
Include feature flags for gradual rollout." - - Expected output: Backend services with APIs, business logic, database integration, feature flags - - Context: Technical design, API contracts, data models +Create `.feature-dev/` directory and `state.json`: -5. **Frontend Implementation** - - Use Task tool with subagent_type="frontend-mobile-development::frontend-developer" - - Prompt: "Build frontend components for: $ARGUMENTS. Integrate with backend APIs: [include API endpoints from step 4]. Implement responsive UI, state management, error handling, loading states, and analytics tracking. Add feature flag integration for A/B testing capabilities." - - Expected output: Frontend components with API integration, state management, analytics - - Context: Backend APIs, UI/UX designs, user stories +```json +{ + "feature": "$ARGUMENTS", + "status": "in_progress", + "methodology": "traditional", + "complexity": "medium", + "current_step": 1, + "current_phase": 1, + "completed_steps": [], + "files_created": [], + "started_at": "ISO_TIMESTAMP", + "last_updated": "ISO_TIMESTAMP" +} +``` -6. **Data Pipeline & Integration** - - Use Task tool with subagent_type="data-engineering::data-engineer" - - Prompt: "Build data pipelines for: $ARGUMENTS. Design ETL/ELT processes, implement data validation, create analytics events, set up data quality monitoring. Integrate with product analytics platforms for feature usage tracking." - - Expected output: Data pipelines, analytics events, data quality checks - - Context: Data requirements, analytics needs, existing data infrastructure +Parse `$ARGUMENTS` for `--methodology` and `--complexity` flags. Use defaults if not specified. -## Phase 3: Testing & Quality Assurance +### 3. Parse feature description -7. **Automated Test Suite** - - Use Task tool with subagent_type="unit-testing::test-automator" - - Prompt: "Create comprehensive test suite for: $ARGUMENTS. Write unit tests for backend: [from step 4] and frontend: [from step 5]. 
Add integration tests for API endpoints, E2E tests for critical user journeys, performance tests for scalability validation. Ensure minimum 80% code coverage." - - Expected output: Test suites with unit, integration, E2E, and performance tests - - Context: Implementation code, acceptance criteria, test requirements +Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below. -8. **Security Validation** - - Use Task tool with subagent_type="security-scanning::security-auditor" - - Prompt: "Perform security testing for: $ARGUMENTS. Review implementation: [include backend and frontend from steps 4-5]. Run OWASP checks, penetration testing, dependency scanning, and compliance validation. Verify data encryption, authentication, and authorization." - - Expected output: Security test results, vulnerability report, remediation actions - - Context: Implementation code, security requirements +--- -9. **Performance Optimization** - - Use Task tool with subagent_type="application-performance::performance-engineer" - - Prompt: "Optimize performance for: $ARGUMENTS. Analyze backend services: [from step 4] and frontend: [from step 5]. Profile code, optimize queries, implement caching, reduce bundle sizes, improve load times. Set up performance budgets and monitoring." - - Expected output: Performance improvements, optimization report, performance metrics - - Context: Implementation code, performance requirements +## Phase 1: Discovery (Steps 1–2) — Interactive -## Phase 4: Deployment & Monitoring +### Step 1: Requirements Gathering -10. **Deployment Strategy & Pipeline** - - Use Task tool with subagent_type="deployment-strategies::deployment-engineer" - - Prompt: "Prepare deployment for: $ARGUMENTS. Create CI/CD pipeline with automated tests: [from step 7]. Configure feature flags for gradual rollout, implement blue-green deployment, set up rollback procedures. Create deployment runbook and rollback plan." 
- - Expected output: CI/CD pipeline, deployment configuration, rollback procedures - - Context: Test suites, infrastructure requirements, deployment strategy +Gather requirements through interactive Q&A. Ask ONE question at a time using the AskUserQuestion tool. Do NOT ask all questions at once. -11. **Observability & Monitoring** - - Use Task tool with subagent_type="observability-monitoring::observability-engineer" - - Prompt: "Set up observability for: $ARGUMENTS. Implement distributed tracing, custom metrics, error tracking, and alerting. Create dashboards for feature usage, performance metrics, error rates, and business KPIs. Set up SLOs/SLIs with automated alerts." - - Expected output: Monitoring dashboards, alerts, SLO definitions, observability infrastructure - - Context: Feature implementation, success metrics, operational requirements +**Questions to ask (in order):** -12. **Documentation & Knowledge Transfer** - - Use Task tool with subagent_type="documentation-generation::docs-architect" - - Prompt: "Generate comprehensive documentation for: $ARGUMENTS. Create API documentation, user guides, deployment guides, troubleshooting runbooks. Include architecture diagrams, data flow diagrams, and integration guides. Generate automated changelog from commits." - - Expected output: API docs, user guides, runbooks, architecture documentation - - Context: All previous phases' outputs +1. **Problem Statement**: "What problem does this feature solve? Who is the user and what's their pain point?" +2. **Acceptance Criteria**: "What are the key acceptance criteria? When is this feature 'done'?" +3. **Scope Boundaries**: "What is explicitly OUT of scope for this feature?" +4. **Technical Constraints**: "Any technical constraints? (e.g., must use existing auth system, specific DB, latency requirements)" +5. **Dependencies**: "Does this feature depend on or affect other features/services?" 
-## Execution Parameters +After gathering answers, write the requirements document: -### Required Parameters +**Output file:** `.feature-dev/01-requirements.md` -- **--feature**: Feature name and description -- **--methodology**: Development approach (traditional|tdd|bdd|ddd) -- **--complexity**: Feature complexity level (simple|medium|complex|epic) +```markdown +# Requirements: $FEATURE -### Optional Parameters +## Problem Statement -- **--deployment-strategy**: Deployment approach (direct|canary|feature-flag|blue-green|a-b-test) -- **--test-coverage-min**: Minimum test coverage threshold (default: 80%) -- **--performance-budget**: Performance requirements (e.g., <200ms response time) -- **--rollout-percentage**: Initial rollout percentage for gradual deployment (default: 5%) -- **--feature-flag-service**: Feature flag provider (launchdarkly|split|unleash|custom) -- **--analytics-platform**: Analytics integration (segment|amplitude|mixpanel|custom) -- **--monitoring-stack**: Observability tools (datadog|newrelic|grafana|custom) +[From Q1] -## Success Criteria +## Acceptance Criteria -- All acceptance criteria from business requirements are met -- Test coverage exceeds minimum threshold (80% default) -- Security scan shows no critical vulnerabilities -- Performance meets defined budgets and SLOs -- Feature flags configured for controlled rollout -- Monitoring and alerting fully operational -- Documentation complete and approved -- Successful deployment to production with rollback capability -- Product analytics tracking feature usage -- A/B test metrics configured (if applicable) +[From Q2 — formatted as checkboxes] -## Rollback Strategy +## Scope -If issues arise during or after deployment: +### In Scope -1. Immediate feature flag disable (< 1 minute) -2. Blue-green traffic switch (< 5 minutes) -3. Full deployment rollback via CI/CD (< 15 minutes) -4. Database migration rollback if needed (coordinate with data team) -5. 
Incident post-mortem and fixes before re-deployment +[Derived from answers] -Feature description: $ARGUMENTS +### Out of Scope + +[From Q3] + +## Technical Constraints + +[From Q4] + +## Dependencies + +[From Q5] + +## Methodology: [tdd|bdd|ddd|traditional] + +## Complexity: [simple|medium|complex] +``` + +Update `state.json`: set `current_step` to 2, add `"01-requirements.md"` to `files_created`, add step 1 to `completed_steps`. + +### Step 2: Architecture & Security Design + +Read `.feature-dev/01-requirements.md` to load requirements context. + +Use the Task tool to launch the architecture agent: + +``` +Task: + subagent_type: "backend-architect" + description: "Design architecture for $FEATURE" + prompt: | + Design the technical architecture for this feature. + + ## Requirements + [Insert full contents of .feature-dev/01-requirements.md] + + ## Deliverables + 1. **Service/component design**: What components are needed, their responsibilities, and boundaries + 2. **API design**: Endpoints, request/response schemas, error handling + 3. **Data model**: Database tables/collections, relationships, migrations needed + 4. **Security considerations**: Auth requirements, input validation, data protection, OWASP concerns + 5. **Integration points**: How this connects to existing services/systems + 6. **Risk assessment**: Technical risks and mitigation strategies + + Write your complete architecture design as a single markdown document. +``` + +Save the agent's output to `.feature-dev/02-architecture.md`. + +Update `state.json`: set `current_step` to "checkpoint-1", add step 2 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 1 — User Approval Required + +You MUST stop here and present the architecture for review. + +Display a summary of the architecture from `.feature-dev/02-architecture.md` (key components, API endpoints, data model overview) and ask: + +``` +Architecture design is complete. Please review .feature-dev/02-architecture.md + +1. 
Approve — proceed to implementation +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise the architecture and re-checkpoint. If option 3, update `state.json` status and stop. + +--- + +## Phase 2: Implementation (Steps 3–5) + +### Step 3: Backend Implementation + +Read `.feature-dev/01-requirements.md` and `.feature-dev/02-architecture.md`. + +Use the Task tool to launch the backend architect for implementation: + +``` +Task: + subagent_type: "backend-architect" + description: "Implement backend for $FEATURE" + prompt: | + Implement the backend for this feature based on the approved architecture. + + ## Requirements + [Insert contents of .feature-dev/01-requirements.md] + + ## Architecture + [Insert contents of .feature-dev/02-architecture.md] + + ## Instructions + 1. Implement the API endpoints, business logic, and data access layer as designed + 2. Include data layer components (models, migrations, repositories) as specified in the architecture + 3. Add input validation and error handling + 4. Follow the project's existing code patterns and conventions + 5. If methodology is TDD: write failing tests first, then implement + 6. Include inline comments only where logic is non-obvious + + Write all code files. Report what files were created/modified. +``` + +Save a summary of what was implemented to `.feature-dev/03-backend.md` (list of files created/modified, key decisions, any deviations from architecture). + +Update `state.json`: set `current_step` to 4, add step 3 to `completed_steps`. + +### Step 4: Frontend Implementation + +Read `.feature-dev/01-requirements.md`, `.feature-dev/02-architecture.md`, and `.feature-dev/03-backend.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Implement frontend for $FEATURE" + prompt: | + You are a frontend developer. 
Implement the frontend components for this feature. + + ## Requirements + [Insert contents of .feature-dev/01-requirements.md] + + ## Architecture + [Insert contents of .feature-dev/02-architecture.md] + + ## Backend Implementation + [Insert contents of .feature-dev/03-backend.md] + + ## Instructions + 1. Build UI components that integrate with the backend API endpoints + 2. Implement state management, form handling, and error states + 3. Add loading states and optimistic updates where appropriate + 4. Follow the project's existing frontend patterns and component conventions + 5. Ensure responsive design and accessibility basics (semantic HTML, ARIA labels, keyboard nav) + + Write all code files. Report what files were created/modified. +``` + +Save a summary to `.feature-dev/04-frontend.md`. + +**Note:** If the feature has no frontend component (pure backend/API), skip this step — write a brief note in `04-frontend.md` explaining why it was skipped, and continue. + +Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`. + +### Step 5: Testing & Validation + +Read `.feature-dev/03-backend.md` and `.feature-dev/04-frontend.md`. + +Launch three agents in parallel using multiple Task tool calls in a single response: + +**5a. Test Suite Creation:** + +``` +Task: + subagent_type: "test-automator" + description: "Create test suite for $FEATURE" + prompt: | + Create a comprehensive test suite for this feature. + + ## What was implemented + ### Backend + [Insert contents of .feature-dev/03-backend.md] + + ### Frontend + [Insert contents of .feature-dev/04-frontend.md] + + ## Instructions + 1. Write unit tests for all new backend functions/methods + 2. Write integration tests for API endpoints + 3. Write frontend component tests if applicable + 4. Cover: happy path, edge cases, error handling, boundary conditions + 5. Follow existing test patterns and frameworks in the project + 6. Target 80%+ code coverage for new code + + Write all test files. 
Report what test files were created and what they cover. +``` + +**5b. Security Review:** + +``` +Task: + subagent_type: "security-auditor" + description: "Security review of $FEATURE" + prompt: | + Perform a security review of this feature implementation. + + ## Architecture + [Insert contents of .feature-dev/02-architecture.md] + + ## Backend Implementation + [Insert contents of .feature-dev/03-backend.md] + + ## Frontend Implementation + [Insert contents of .feature-dev/04-frontend.md] + + Review for: OWASP Top 10, authentication/authorization flaws, input validation gaps, + data protection issues, dependency vulnerabilities, and any security anti-patterns. + + Provide findings with severity, location, and specific fix recommendations. +``` + +**5c. Performance Review:** + +``` +Task: + subagent_type: "performance-engineer" + description: "Performance review of $FEATURE" + prompt: | + Review the performance of this feature implementation. + + ## Architecture + [Insert contents of .feature-dev/02-architecture.md] + + ## Backend Implementation + [Insert contents of .feature-dev/03-backend.md] + + ## Frontend Implementation + [Insert contents of .feature-dev/04-frontend.md] + + Review for: N+1 queries, missing indexes, unoptimized queries, memory leaks, + missing caching opportunities, large payloads, slow rendering paths. + + Provide findings with impact estimates and specific optimization recommendations. 
+```
+
+After all three complete, consolidate results into `.feature-dev/05-testing.md`:
+
+```markdown
+# Testing & Validation: $FEATURE
+
+## Test Suite
+
+[Summary from 5a — files created, coverage areas]
+
+## Security Findings
+
+[Summary from 5b — findings by severity]
+
+## Performance Findings
+
+[Summary from 5c — findings by impact]
+
+## Action Items
+
+[List any critical/high findings that need to be addressed before delivery]
+```
+
+If there are Critical or High severity findings from security or performance review, address them now before proceeding. Apply fixes and re-validate.
+
+Update `state.json`: set `current_step` to "checkpoint-2", add step 5 to `completed_steps`.
+
+---
+
+## PHASE CHECKPOINT 2 — User Approval Required
+
+Display a summary of testing and validation results from `.feature-dev/05-testing.md` and ask:
+
+```
+Testing and validation complete. Please review .feature-dev/05-testing.md
+
+Test coverage: [summary]
+Security findings: [X critical, Y high, Z medium]
+Performance findings: [X critical, Y high, Z medium]
+
+1. Approve — proceed to deployment & documentation
+2. Request changes — tell me what to fix
+3. Pause — save progress and stop here
+```
+
+Do NOT proceed to Phase 3 until the user approves.
+
+---
+
+## Phase 3: Delivery (Steps 6–7)
+
+### Step 6: Deployment & Monitoring
+
+Read `.feature-dev/02-architecture.md` and `.feature-dev/05-testing.md`.
+
+Use the Task tool:
+
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "Create deployment config for $FEATURE"
+  prompt: |
+    You are a deployment engineer. Create the deployment and monitoring configuration for this feature.
+
+    ## Architecture
+    [Insert contents of .feature-dev/02-architecture.md]
+
+    ## Testing Results
+    [Insert contents of .feature-dev/05-testing.md]
+
+    ## Instructions
+    1. Create or update CI/CD pipeline configuration for the new code
+    2. Add feature flag configuration if the feature should be gradually rolled out
+    3. Define health checks and readiness probes for new services/endpoints
+    4. Create monitoring alerts for key metrics (error rate, latency, throughput)
+    5. Write a deployment runbook with rollback steps
+    6. Follow existing deployment patterns in the project
+
+    Write all configuration files. Report what was created/modified.
+```
+
+Save output to `.feature-dev/06-deployment.md`.
+
+Update `state.json`: set `current_step` to 7, add step 6 to `completed_steps`.
+
+### Step 7: Documentation & Handoff
+
+Read all previous `.feature-dev/*.md` files.
+
+Use the Task tool:
+
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "Write documentation for $FEATURE"
+  prompt: |
+    You are a technical writer. Create documentation for this feature.
+
+    ## Feature Context
+    [Insert contents of .feature-dev/01-requirements.md]
+
+    ## Architecture
+    [Insert contents of .feature-dev/02-architecture.md]
+
+    ## Implementation Summary
+    ### Backend: [Insert contents of .feature-dev/03-backend.md]
+    ### Frontend: [Insert contents of .feature-dev/04-frontend.md]
+
+    ## Deployment
+    [Insert contents of .feature-dev/06-deployment.md]
+
+    ## Instructions
+    1. Write API documentation for new endpoints (request/response examples)
+    2. Update or create user-facing documentation if applicable
+    3. Write a brief architecture decision record (ADR) explaining key design choices
+    4. Create a handoff summary: what was built, how to test it, known limitations
+
+    Write documentation files. Report what was created/modified.
+```
+
+Save output to `.feature-dev/07-documentation.md`.
+
+Update `state.json`: set `current_step` to "complete", add step 7 to `completed_steps`.
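The `state.json` bookkeeping that steps 5–7 above repeat ("set `current_step` to …, add step … to `completed_steps`") can be sketched as a small helper. The field names (`current_step`, `completed_steps`, `last_updated`) come from the state schema this command defines; the `update_state` function itself is an illustrative assumption, not part of the plugin:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path(".feature-dev/state.json")

def update_state(current_step, completed_step=None):
    """Read state.json, advance the step pointers, and write it back."""
    state = json.loads(STATE_FILE.read_text())
    state["current_step"] = current_step
    # Record the finished step exactly once, preserving order
    if completed_step is not None and completed_step not in state["completed_steps"]:
        state["completed_steps"].append(completed_step)
    state["last_updated"] = datetime.now(timezone.utc).isoformat()
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return state
```

A step handler would call, for example, `update_state(7, completed_step=6)` immediately after saving its output file, so a resumed session can pick up from the persisted pointer rather than from context memory.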
+
+---
+
+## Completion
+
+Update `state.json`:
+
+- Set `status` to `"complete"`
+- Set `last_updated` to current timestamp
+
+Present the final summary:
+
+```
+Feature development complete: $FEATURE
+
+## Files Created
+[List all .feature-dev/ output files]
+
+## Implementation Summary
+- Requirements: .feature-dev/01-requirements.md
+- Architecture: .feature-dev/02-architecture.md
+- Backend: .feature-dev/03-backend.md
+- Frontend: .feature-dev/04-frontend.md
+- Testing: .feature-dev/05-testing.md
+- Deployment: .feature-dev/06-deployment.md
+- Documentation: .feature-dev/07-documentation.md
+
+## Next Steps
+1. Review all generated code and documentation
+2. Run the full test suite to verify everything passes
+3. Create a pull request with the implementation
+4. Deploy using the runbook in .feature-dev/06-deployment.md
+```
diff --git a/plugins/comprehensive-review/.claude-plugin/plugin.json b/plugins/comprehensive-review/.claude-plugin/plugin.json
index e30d053..0655fe1 100644
--- a/plugins/comprehensive-review/.claude-plugin/plugin.json
+++ b/plugins/comprehensive-review/.claude-plugin/plugin.json
@@ -1,6 +1,6 @@
 {
   "name": "comprehensive-review",
-  "version": "1.2.1",
+  "version": "1.3.0",
   "description": "Multi-perspective code analysis covering architecture, security, and best practices",
   "author": {
     "name": "Seth Hobson",
diff --git a/plugins/comprehensive-review/commands/full-review.md b/plugins/comprehensive-review/commands/full-review.md
index f980b68..7af0282 100644
--- a/plugins/comprehensive-review/commands/full-review.md
+++ b/plugins/comprehensive-review/commands/full-review.md
@@ -1,137 +1,597 @@
-Orchestrate comprehensive multi-dimensional code review using specialized review agents
+---
+description: "Orchestrate comprehensive multi-dimensional code review using specialized review agents across architecture, security, performance, testing, and best practices"
+argument-hint: "<target> [--security-focus] [--performance-critical] [--strict-mode] [--framework react|spring|django|rails]"
+---
-[Extended thinking: This workflow performs an exhaustive code review by orchestrating multiple specialized agents in sequential phases. Each phase builds upon previous findings to create a comprehensive review that covers code quality, security, performance, testing, documentation, and best practices. The workflow integrates modern AI-assisted review tools, static analysis, security scanning, and automated quality metrics. Results are consolidated into actionable feedback with clear prioritization and remediation guidance. The phased approach ensures thorough coverage while maintaining efficiency through parallel agent execution where appropriate.]
+# Comprehensive Code Review Orchestrator
-## Review Configuration Options
+## CRITICAL BEHAVIORAL RULES
-- **--security-focus**: Prioritize security vulnerabilities and OWASP compliance
-- **--performance-critical**: Emphasize performance bottlenecks and scalability issues
-- **--tdd-review**: Include TDD compliance and test-first verification
-- **--ai-assisted**: Enable AI-powered review tools (Copilot, Codium, Bito)
-- **--strict-mode**: Fail review on any critical issues found
-- **--metrics-report**: Generate detailed quality metrics dashboard
-- **--framework [name]**: Apply framework-specific best practices (React, Spring, Django, etc.)
+You MUST follow these rules exactly. Violating any of them is a failure.
-## Phase 1: Code Quality & Architecture Review
+1. **Execute phases in order.** Do NOT skip ahead, reorder, or merge phases.
+2. **Write output files.** Each phase MUST produce its output file in `.full-review/` before the next phase begins. Read from prior phase files -- do NOT rely on context window memory.
+3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
+4. **Halt on failure.** If any step fails (agent error, missing files, access issues), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
+5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
+6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan -- execute it.
-Use Task tool to orchestrate quality and architecture agents in parallel:
+## Pre-flight Checks
-### 1A. Code Quality Analysis
+Before starting, perform these checks:
-- Use Task tool with subagent_type="code-reviewer"
-- Prompt: "Perform comprehensive code quality review for: $ARGUMENTS. Analyze code complexity, maintainability index, technical debt, code duplication, naming conventions, and adherence to Clean Code principles. Integrate with SonarQube, CodeQL, and Semgrep for static analysis. Check for code smells, anti-patterns, and violations of SOLID principles. Generate cyclomatic complexity metrics and identify refactoring opportunities."
-- Expected output: Quality metrics, code smell inventory, refactoring recommendations
-- Context: Initial codebase analysis, no dependencies on other phases
+### 1. Check for existing session
-### 1B. Architecture & Design Review
+Check if `.full-review/state.json` exists:
-- Use Task tool with subagent_type="architect-review"
-- Prompt: "Review architectural design patterns and structural integrity in: $ARGUMENTS. Evaluate microservices boundaries, API design, database schema, dependency management, and adherence to Domain-Driven Design principles. Check for circular dependencies, inappropriate coupling, missing abstractions, and architectural drift. Verify compliance with enterprise architecture standards and cloud-native patterns."
-- Expected output: Architecture assessment, design pattern analysis, structural recommendations
-- Context: Runs parallel with code quality analysis
+- If it exists and `status` is `"in_progress"`: Read it, display the current phase, and ask the user:
-## Phase 2: Security & Performance Review
+  ```
+  Found an in-progress review session:
+  Target: [target from state]
+  Current phase: [phase from state]
-Use Task tool with security and performance agents, incorporating Phase 1 findings:
+  1. Resume from where we left off
+  2. Start fresh (archives existing session)
+  ```
-### 2A. Security Vulnerability Assessment
+- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
-- Use Task tool with subagent_type="security-auditor"
-- Prompt: "Execute comprehensive security audit on: $ARGUMENTS. Perform OWASP Top 10 analysis, dependency vulnerability scanning with Snyk/Trivy, secrets detection with GitLeaks, input validation review, authentication/authorization assessment, and cryptographic implementation review. Include findings from Phase 1 architecture review: {phase1_architecture_context}. Check for SQL injection, XSS, CSRF, insecure deserialization, and configuration security issues."
-- Expected output: Vulnerability report, CVE list, security risk matrix, remediation steps
-- Context: Incorporates architectural vulnerabilities identified in Phase 1B
+### 2. Initialize state
-### 2B. Performance & Scalability Analysis
+Create `.full-review/` directory and `state.json`:
-- Use Task tool with subagent_type="application-performance::performance-engineer"
-- Prompt: "Conduct performance analysis and scalability assessment for: $ARGUMENTS. Profile code for CPU/memory hotspots, analyze database query performance, review caching strategies, identify N+1 problems, assess connection pooling, and evaluate asynchronous processing patterns. Consider architectural findings from Phase 1: {phase1_architecture_context}. Check for memory leaks, resource contention, and bottlenecks under load."
-- Expected output: Performance metrics, bottleneck analysis, optimization recommendations
-- Context: Uses architecture insights to identify systemic performance issues
+```json
+{
+  "target": "$ARGUMENTS",
+  "status": "in_progress",
+  "flags": {
+    "security_focus": false,
+    "performance_critical": false,
+    "strict_mode": false,
+    "framework": null
+  },
+  "current_step": 1,
+  "current_phase": 1,
+  "completed_steps": [],
+  "files_created": [],
+  "started_at": "ISO_TIMESTAMP",
+  "last_updated": "ISO_TIMESTAMP"
+}
+```
-## Phase 3: Testing & Documentation Review
+Parse `$ARGUMENTS` for `--security-focus`, `--performance-critical`, `--strict-mode`, and `--framework` flags. Update the flags object accordingly.
-Use Task tool for test and documentation quality assessment:
+### 3. Identify review target
-### 3A. Test Coverage & Quality Analysis
+Determine what code to review from `$ARGUMENTS`:
-- Use Task tool with subagent_type="unit-testing::test-automator"
-- Prompt: "Evaluate testing strategy and implementation for: $ARGUMENTS. Analyze unit test coverage, integration test completeness, end-to-end test scenarios, test pyramid adherence, and test maintainability. Review test quality metrics including assertion density, test isolation, mock usage, and flakiness. Consider security and performance test requirements from Phase 2: {phase2_security_context}, {phase2_performance_context}. Verify TDD practices if --tdd-review flag is set."
-- Expected output: Coverage report, test quality metrics, testing gap analysis
-- Context: Incorporates security and performance testing requirements from Phase 2
+- If a file/directory path is given, verify it exists
+- If a description is given (e.g., "recent changes", "authentication module"), identify the relevant files
+- List the files that will be reviewed and confirm with the user
-### 3B. Documentation & API Specification Review
+**Output file:** `.full-review/00-scope.md`
-- Use Task tool with subagent_type="code-documentation::docs-architect"
-- Prompt: "Review documentation completeness and quality for: $ARGUMENTS. Assess inline code documentation, API documentation (OpenAPI/Swagger), architecture decision records (ADRs), README completeness, deployment guides, and runbooks. Verify documentation reflects actual implementation based on all previous phase findings: {phase1_context}, {phase2_context}. Check for outdated documentation, missing examples, and unclear explanations."
-- Expected output: Documentation coverage report, inconsistency list, improvement recommendations
-- Context: Cross-references all previous findings to ensure documentation accuracy
+```markdown
+# Review Scope
-## Phase 4: Best Practices & Standards Compliance
+## Target
-Use Task tool to verify framework-specific and industry best practices:
+[Description of what is being reviewed]
-### 4A. Framework & Language Best Practices
+## Files
-- Use Task tool with subagent_type="framework-migration::legacy-modernizer"
-- Prompt: "Verify adherence to framework and language best practices for: $ARGUMENTS. Check modern JavaScript/TypeScript patterns, React hooks best practices, Python PEP compliance, Java enterprise patterns, Go idiomatic code, or framework-specific conventions (based on --framework flag). Review package management, build configuration, environment handling, and deployment practices. Include all quality issues from previous phases: {all_previous_contexts}."
-- Expected output: Best practices compliance report, modernization recommendations
-- Context: Synthesizes all previous findings for framework-specific guidance
+[List of files/directories included in the review]
-### 4B. CI/CD & DevOps Practices Review
+## Flags
-- Use Task tool with subagent_type="cicd-automation::deployment-engineer"
-- Prompt: "Review CI/CD pipeline and DevOps practices for: $ARGUMENTS. Evaluate build automation, test automation integration, deployment strategies (blue-green, canary), infrastructure as code, monitoring/observability setup, and incident response procedures. Assess pipeline security, artifact management, and rollback capabilities. Consider all issues identified in previous phases that impact deployment: {all_critical_issues}."
-- Expected output: Pipeline assessment, DevOps maturity evaluation, automation recommendations
-- Context: Focuses on operationalizing fixes for all identified issues
+- Security Focus: [yes/no]
+- Performance Critical: [yes/no]
+- Strict Mode: [yes/no]
+- Framework: [name or auto-detected]
-## Consolidated Report Generation
+## Review Phases
-Compile all phase outputs into comprehensive review report:
+1. Code Quality & Architecture
+2. Security & Performance
+3. Testing & Documentation
+4. Best Practices & Standards
+5. Consolidated Report
+```
-### Critical Issues (P0 - Must Fix Immediately)
+Update `state.json`: add `"00-scope.md"` to `files_created`, add step 0 to `completed_steps`.
+
+---
+
+## Phase 1: Code Quality & Architecture Review (Steps 1A-1B)
+
+Run both agents in parallel using multiple Task tool calls in a single response.
+
+### Step 1A: Code Quality Analysis
+
+```
+Task:
+  subagent_type: "code-reviewer"
+  description: "Code quality analysis for $ARGUMENTS"
+  prompt: |
+    Perform a comprehensive code quality review.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## Instructions
+    Analyze the target code for:
+    1. **Code complexity**: Cyclomatic complexity, cognitive complexity, deeply nested logic
+    2. **Maintainability**: Naming conventions, function/method length, class cohesion
+    3. **Code duplication**: Copy-pasted logic, missed abstraction opportunities
+    4. **Clean Code principles**: SOLID violations, code smells, anti-patterns
+    5. **Technical debt**: Areas that will become increasingly costly to change
+    6. **Error handling**: Missing error handling, swallowed exceptions, unclear error messages
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low)
+    - File and line location
+    - Description of the issue
+    - Specific fix recommendation with code example
+
+    Write your findings as a structured markdown document.
+```
+
+### Step 1B: Architecture & Design Review
+
+```
+Task:
+  subagent_type: "architect-review"
+  description: "Architecture review for $ARGUMENTS"
+  prompt: |
+    Review the architectural design and structural integrity of the target code.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## Instructions
+    Evaluate the code for:
+    1. **Component boundaries**: Proper separation of concerns, module cohesion
+    2. **Dependency management**: Circular dependencies, inappropriate coupling, dependency direction
+    3. **API design**: Endpoint design, request/response schemas, error contracts, versioning
+    4. **Data model**: Schema design, relationships, data access patterns
+    5. **Design patterns**: Appropriate use of patterns, missing abstractions, over-engineering
+    6. **Architectural consistency**: Does the code follow the project's established patterns?
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low)
+    - Architectural impact assessment
+    - Specific improvement recommendation
+
+    Write your findings as a structured markdown document.
+```
+
+After both complete, consolidate into `.full-review/01-quality-architecture.md`:
+
+```markdown
+# Phase 1: Code Quality & Architecture Review
+
+## Code Quality Findings
+
+[Summary from 1A, organized by severity]
+
+## Architecture Findings
+
+[Summary from 1B, organized by severity]
+
+## Critical Issues for Phase 2 Context
+
+[List any findings that should inform security or performance review]
+```
+
+Update `state.json`: set `current_step` to 2, `current_phase` to 2, add steps 1A and 1B to `completed_steps`.
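The pre-flight instruction "Parse `$ARGUMENTS` for `--security-focus`, `--performance-critical`, `--strict-mode`, and `--framework` flags" amounts to splitting the raw argument string into a review target plus a flags object. A rough sketch of that parsing, with the function name and return shape assumed purely for illustration:

```python
def parse_review_flags(arguments: str) -> dict:
    """Split $ARGUMENTS into a review target and the optional flags."""
    tokens = arguments.split()
    flags = {
        "security_focus": False,
        "performance_critical": False,
        "strict_mode": False,
        "framework": None,
    }
    target_parts = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "--security-focus":
            flags["security_focus"] = True
        elif tok == "--performance-critical":
            flags["performance_critical"] = True
        elif tok == "--strict-mode":
            flags["strict_mode"] = True
        elif tok == "--framework" and i + 1 < len(tokens):
            flags["framework"] = tokens[i + 1]
            i += 1  # consume the framework value
        else:
            target_parts.append(tok)  # anything else belongs to the target
        i += 1
    return {"target": " ".join(target_parts), "flags": flags}
```

The returned dictionary mirrors the `target` and `flags` fields of the `state.json` schema shown above, so it could be merged directly into the initialized state.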
+
+---
+
+## Phase 2: Security & Performance Review (Steps 2A-2B)
+
+Read `.full-review/01-quality-architecture.md` for context from Phase 1.
+
+Run both agents in parallel using multiple Task tool calls in a single response.
+
+### Step 2A: Security Vulnerability Assessment
+
+```
+Task:
+  subagent_type: "security-auditor"
+  description: "Security audit for $ARGUMENTS"
+  prompt: |
+    Execute a comprehensive security audit on the target code.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## Phase 1 Context
+    [Insert contents of .full-review/01-quality-architecture.md -- focus on the "Critical Issues for Phase 2 Context" section]
+
+    ## Instructions
+    Analyze for:
+    1. **OWASP Top 10**: Injection, broken auth, sensitive data exposure, XXE, broken access control, misconfig, XSS, insecure deserialization, vulnerable components, insufficient logging
+    2. **Input validation**: Missing sanitization, unvalidated redirects, path traversal
+    3. **Authentication/authorization**: Flawed auth logic, privilege escalation, session management
+    4. **Cryptographic issues**: Weak algorithms, hardcoded secrets, improper key management
+    5. **Dependency vulnerabilities**: Known CVEs in dependencies, outdated packages
+    6. **Configuration security**: Debug mode, verbose errors, permissive CORS, missing security headers
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low) with CVSS score if applicable
+    - CWE reference where applicable
+    - File and line location
+    - Proof of concept or attack scenario
+    - Specific remediation steps with code example
+
+    Write your findings as a structured markdown document.
+```
+
+### Step 2B: Performance & Scalability Analysis
+
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "Performance analysis for $ARGUMENTS"
+  prompt: |
+    You are a performance engineer. Conduct a performance and scalability analysis of the target code.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## Phase 1 Context
+    [Insert contents of .full-review/01-quality-architecture.md -- focus on the "Critical Issues for Phase 2 Context" section]
+
+    ## Instructions
+    Analyze for:
+    1. **Database performance**: N+1 queries, missing indexes, unoptimized queries, connection pool sizing
+    2. **Memory management**: Memory leaks, unbounded collections, large object allocation
+    3. **Caching opportunities**: Missing caching, stale cache risks, cache invalidation issues
+    4. **I/O bottlenecks**: Synchronous blocking calls, missing pagination, large payloads
+    5. **Concurrency issues**: Race conditions, deadlocks, thread safety
+    6. **Frontend performance**: Bundle size, render performance, unnecessary re-renders, missing lazy loading
+    7. **Scalability concerns**: Horizontal scaling barriers, stateful components, single points of failure
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low)
+    - Estimated performance impact
+    - Specific optimization recommendation with code example
+
+    Write your findings as a structured markdown document.
+```
+
+After both complete, consolidate into `.full-review/02-security-performance.md`:
+
+```markdown
+# Phase 2: Security & Performance Review
+
+## Security Findings
+
+[Summary from 2A, organized by severity]
+
+## Performance Findings
+
+[Summary from 2B, organized by severity]
+
+## Critical Issues for Phase 3 Context
+
+[List findings that affect testing or documentation requirements]
+```
+
+Update `state.json`: set `current_step` to "checkpoint-1", add steps 2A and 2B to `completed_steps`.
+
+---
+
+## PHASE CHECKPOINT 1 -- User Approval Required
+
+Display a summary of findings from Phase 1 and Phase 2 and ask:
+
+```
+Phases 1-2 complete: Code Quality, Architecture, Security, and Performance reviews done.
+
+Summary:
+- Code Quality: [X critical, Y high, Z medium findings]
+- Architecture: [X critical, Y high, Z medium findings]
+- Security: [X critical, Y high, Z medium findings]
+- Performance: [X critical, Y high, Z medium findings]
+
+Please review:
+- .full-review/01-quality-architecture.md
+- .full-review/02-security-performance.md
+
+1. Continue -- proceed to Testing & Documentation review
+2. Fix critical issues first -- I'll address findings before continuing
+3. Pause -- save progress and stop here
+```
+
+If `--strict-mode` flag is set and there are Critical findings, recommend option 2.
+
+Do NOT proceed to Phase 3 until the user approves.
+
+---
+
+## Phase 3: Testing & Documentation Review (Steps 3A-3B)
+
+Read `.full-review/01-quality-architecture.md` and `.full-review/02-security-performance.md` for context.
+
+Run both agents in parallel using multiple Task tool calls in a single response.
+
+### Step 3A: Test Coverage & Quality Analysis
+
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "Test coverage analysis for $ARGUMENTS"
+  prompt: |
+    You are a test automation engineer. Evaluate the testing strategy and coverage for the target code.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## Prior Phase Context
+    [Insert security and performance findings from .full-review/02-security-performance.md that affect testing requirements]
+
+    ## Instructions
+    Analyze:
+    1. **Test coverage**: Which code paths have tests? Which critical paths are untested?
+    2. **Test quality**: Are tests testing behavior or implementation? Assertion quality?
+    3. **Test pyramid adherence**: Unit vs integration vs E2E test ratio
+    4. **Edge cases**: Are boundary conditions, error paths, and concurrent scenarios tested?
+    5. **Test maintainability**: Test isolation, mock usage, flaky test indicators
+    6. **Security test gaps**: Are security-critical paths tested? Auth, input validation, etc.
+    7. **Performance test gaps**: Are performance-critical paths tested? Load testing?
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low)
+    - What is untested or poorly tested
+    - Specific test recommendations with example test code
+
+    Write your findings as a structured markdown document.
+```
+
+### Step 3B: Documentation & API Review
+
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "Documentation review for $ARGUMENTS"
+  prompt: |
+    You are a technical documentation architect. Review documentation completeness and accuracy.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## Prior Phase Context
+    [Insert key findings from .full-review/01-quality-architecture.md and .full-review/02-security-performance.md]
+
+    ## Instructions
+    Evaluate:
+    1. **Inline documentation**: Are complex algorithms and business logic explained?
+    2. **API documentation**: Are endpoints documented with examples? Request/response schemas?
+    3. **Architecture documentation**: ADRs, system diagrams, component documentation
+    4. **README completeness**: Setup instructions, development workflow, deployment guide
+    5. **Accuracy**: Does documentation match the actual implementation?
+    6. **Changelog/migration guides**: Are breaking changes documented?
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low)
+    - What is missing or inaccurate
+    - Specific documentation recommendation
+
+    Write your findings as a structured markdown document.
+```
+
+After both complete, consolidate into `.full-review/03-testing-documentation.md`:
+
+```markdown
+# Phase 3: Testing & Documentation Review
+
+## Test Coverage Findings
+
+[Summary from 3A, organized by severity]
+
+## Documentation Findings
+
+[Summary from 3B, organized by severity]
+```
+
+Update `state.json`: set `current_step` to 4, `current_phase` to 4, add steps 3A and 3B to `completed_steps`.
+
+---
+
+## Phase 4: Best Practices & Standards (Steps 4A-4B)
+
+Read all previous `.full-review/*.md` files for full context.
+
+Run both agents in parallel using multiple Task tool calls in a single response.
+
+### Step 4A: Framework & Language Best Practices
+
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "Framework best practices review for $ARGUMENTS"
+  prompt: |
+    You are an expert in modern framework and language best practices. Verify adherence to current standards.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## All Prior Findings
+    [Insert a concise summary of critical/high findings from all prior phases]
+
+    ## Instructions
+    Check for:
+    1. **Language idioms**: Is the code idiomatic for its language? Modern syntax and features?
+    2. **Framework patterns**: Does it follow the framework's recommended patterns? (e.g., React hooks, Django views, Spring beans)
+    3. **Deprecated APIs**: Are any deprecated functions/libraries/patterns used?
+    4. **Modernization opportunities**: Where could modern language/framework features simplify code?
+    5. **Package management**: Are dependencies up-to-date? Unnecessary dependencies?
+    6. **Build configuration**: Is the build optimized? Development vs production settings?
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low)
+    - Current pattern vs recommended pattern
+    - Migration/fix recommendation with code example
+
+    Write your findings as a structured markdown document.
+```
+
+### Step 4B: CI/CD & DevOps Practices Review
+
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "CI/CD and DevOps practices review for $ARGUMENTS"
+  prompt: |
+    You are a DevOps engineer. Review CI/CD pipeline and operational practices.
+
+    ## Review Scope
+    [Insert contents of .full-review/00-scope.md]
+
+    ## Critical Issues from Prior Phases
+    [Insert critical/high findings from all prior phases that impact deployment or operations]
+
+    ## Instructions
+    Evaluate:
+    1. **CI/CD pipeline**: Build automation, test gates, deployment stages, security scanning
+    2. **Deployment strategy**: Blue-green, canary, rollback capabilities
+    3. **Infrastructure as Code**: Are infrastructure configs version-controlled and reviewed?
+    4. **Monitoring & observability**: Logging, metrics, alerting, dashboards
+    5. **Incident response**: Runbooks, on-call procedures, rollback plans
+    6. **Environment management**: Config separation, secret management, parity between environments
+
+    For each finding, provide:
+    - Severity (Critical / High / Medium / Low)
+    - Operational risk assessment
+    - Specific improvement recommendation
+
+    Write your findings as a structured markdown document.
+```
+
+After both complete, consolidate into `.full-review/04-best-practices.md`:
+
+```markdown
+# Phase 4: Best Practices & Standards
+
+## Framework & Language Findings
+
+[Summary from 4A, organized by severity]
+
+## CI/CD & DevOps Findings
+
+[Summary from 4B, organized by severity]
+```
+
+Update `state.json`: set `current_step` to 5, `current_phase` to 5, add steps 4A and 4B to `completed_steps`.
+
+---
+
+## Phase 5: Consolidated Report (Step 5)
+
+Read all `.full-review/*.md` files. Generate the final consolidated report.
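The per-category severity counts that the consolidated report asks for ("Findings by Category", "Critical: [X] | High: [Y] | ...") can be tallied mechanically from the consolidated phase files. Assuming each finding carries a `Severity (Critical / High / Medium / Low)` label, as the prompts above request, a sketch might look like this (the regex, helper name, and file layout are assumptions for illustration):

```python
import re
from collections import Counter

def tally_severities(markdown_text: str) -> Counter:
    """Count severity labels in a consolidated findings document.

    Matches forms like "Severity: High" and "Severity (Critical)"; a real
    implementation would want a stricter findings format to avoid counting
    template text.
    """
    pattern = re.compile(r"\bSeverity\s*[:(]?\s*(Critical|High|Medium|Low)\b")
    return Counter(m.group(1) for m in pattern.finditer(markdown_text))
```

Summing the counters across `01-` through `04-` phase files would yield the totals line for the final summary.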
+
+**Output file:** `.full-review/05-final-report.md`
+
+```markdown
+# Comprehensive Code Review Report
+
+## Review Target
+
+[From 00-scope.md]
+
+## Executive Summary
+
+[2-3 sentence overview of overall code health and key concerns]
+
+## Findings by Priority
+
+### Critical Issues (P0 -- Must Fix Immediately)
+
+[All Critical findings from all phases, with source phase reference]
 - Security vulnerabilities with CVSS > 7.0
 - Data loss or corruption risks
 - Authentication/authorization bypasses
 - Production stability threats
-- Compliance violations (GDPR, PCI DSS, SOC2)
-### High Priority (P1 - Fix Before Next Release)
+### High Priority (P1 -- Fix Before Next Release)
+
+[All High findings from all phases]
 - Performance bottlenecks impacting user experience
 - Missing critical test coverage
 - Architectural anti-patterns causing technical debt
 - Outdated dependencies with known vulnerabilities
-- Code quality issues affecting maintainability
-### Medium Priority (P2 - Plan for Next Sprint)
+### Medium Priority (P2 -- Plan for Next Sprint)
+
+[All Medium findings from all phases]
 - Non-critical performance optimizations
-- Documentation gaps and inconsistencies
+- Documentation gaps
 - Code refactoring opportunities
 - Test quality improvements
-- DevOps automation enhancements
-### Low Priority (P3 - Track in Backlog)
+### Low Priority (P3 -- Track in Backlog)
+
+[All Low findings from all phases]
 - Style guide violations
 - Minor code smell issues
-- Nice-to-have documentation updates
-- Cosmetic improvements
+- Nice-to-have improvements
-## Success Criteria
+## Findings by Category
-Review is considered successful when:
+- **Code Quality**: [count] findings ([breakdown by severity])
+- **Architecture**: [count] findings ([breakdown by severity])
+- **Security**: [count] findings ([breakdown by severity])
+- **Performance**: [count] findings ([breakdown by severity])
+- **Testing**: [count] findings ([breakdown by severity])
+- **Documentation**: [count] findings ([breakdown by severity])
+- **Best Practices**: [count] findings ([breakdown by severity])
+- **CI/CD & DevOps**: [count] findings ([breakdown by severity])
-- All critical security vulnerabilities are identified and documented
-- Performance bottlenecks are profiled with remediation paths
-- Test coverage gaps are mapped with priority recommendations
-- Architecture risks are assessed with mitigation strategies
-- Documentation reflects actual implementation state
-- Framework best practices compliance is verified
-- CI/CD pipeline supports safe deployment of reviewed code
-- Clear, actionable feedback is provided for all findings
-- Metrics dashboard shows improvement trends
-- Team has clear prioritized action plan for remediation
+## Recommended Action Plan
-Target: $ARGUMENTS
+1. [Ordered list of recommended actions, starting with critical/high items]
+2. [Group related fixes where possible]
+3. [Estimate relative effort: small/medium/large]
+
+## Review Metadata
+
+- Review date: [timestamp]
+- Phases completed: [list]
+- Flags applied: [list active flags]
+```
+
+Update `state.json`: set `status` to `"complete"`, `last_updated` to current timestamp.
+
+---
+
+## Completion
+
+Present the final summary:
+
+```
+Comprehensive code review complete for: $ARGUMENTS
+
+## Review Output Files
+- Scope: .full-review/00-scope.md
+- Quality & Architecture: .full-review/01-quality-architecture.md
+- Security & Performance: .full-review/02-security-performance.md
+- Testing & Documentation: .full-review/03-testing-documentation.md
+- Best Practices: .full-review/04-best-practices.md
+- Final Report: .full-review/05-final-report.md
+
+## Summary
+- Total findings: [count]
+- Critical: [X] | High: [Y] | Medium: [Z] | Low: [W]
+
+## Next Steps
+1. Review the full report at .full-review/05-final-report.md
+2. Address Critical (P0) issues immediately
+3. Plan High (P1) fixes for current sprint
+4. Add Medium (P2) and Low (P3) items to backlog
+```
diff --git a/plugins/data-engineering/.claude-plugin/plugin.json b/plugins/data-engineering/.claude-plugin/plugin.json
index 4e7a312..5a4c953 100644
--- a/plugins/data-engineering/.claude-plugin/plugin.json
+++ b/plugins/data-engineering/.claude-plugin/plugin.json
@@ -1,6 +1,6 @@
 {
   "name": "data-engineering",
-  "version": "1.2.2",
+  "version": "1.3.0",
   "description": "ETL pipeline construction, data warehouse design, batch processing workflows, and data-driven feature development",
   "author": {
     "name": "Seth Hobson",
diff --git a/plugins/data-engineering/commands/data-driven-feature.md b/plugins/data-engineering/commands/data-driven-feature.md
index 56a4fb9..ddf26c6 100644
--- a/plugins/data-engineering/commands/data-driven-feature.md
+++ b/plugins/data-engineering/commands/data-driven-feature.md
@@ -1,176 +1,784 @@
-# Data-Driven Feature Development
+---
+description: "Build features guided by data insights, A/B testing, and continuous measurement"
+argument-hint: "<feature-description> [--experiment-type ab|multivariate|bandit] [--confidence 0.90|0.95|0.99]"
+---
-Build features guided by data insights, A/B testing, and continuous measurement using specialized agents for analysis, implementation, and experimentation.
+# Data-Driven Feature Development Orchestrator
-[Extended thinking: This workflow orchestrates a comprehensive data-driven development process from initial data analysis and hypothesis formulation through feature implementation with integrated analytics, A/B testing infrastructure, and post-launch analysis. Each phase leverages specialized agents to ensure features are built based on data insights, properly instrumented for measurement, and validated through controlled experiments. The workflow emphasizes modern product analytics practices, statistical rigor in testing, and continuous learning from user behavior.]
+## CRITICAL BEHAVIORAL RULES -## Phase 1: Data Analysis and Hypothesis Formation +You MUST follow these rules exactly. Violating any of them is a failure. -### 1. Exploratory Data Analysis +1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps. +2. **Write output files.** Each step MUST produce its output file in `.data-driven-feature/` before the next step begins. Read from prior step files — do NOT rely on context window memory. +3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options. +4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue. +5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies. +6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it. -- Use Task tool with subagent_type="machine-learning-ops::data-scientist" -- Prompt: "Perform exploratory data analysis for feature: $ARGUMENTS. Analyze existing user behavior data, identify patterns and opportunities, segment users by behavior, and calculate baseline metrics. Use modern analytics tools (Amplitude, Mixpanel, Segment) to understand current user journeys, conversion funnels, and engagement patterns." -- Output: EDA report with visualizations, user segments, behavioral patterns, baseline metrics +## Pre-flight Checks -### 2. Business Hypothesis Development +Before starting, perform these checks: -- Use Task tool with subagent_type="business-analytics::business-analyst" -- Context: Data scientist's EDA findings and behavioral patterns -- Prompt: "Formulate business hypotheses for feature: $ARGUMENTS based on data analysis. 
Define clear success metrics, expected impact on key business KPIs, target user segments, and minimum detectable effects. Create measurable hypotheses using frameworks like ICE scoring or RICE prioritization." -- Output: Hypothesis document, success metrics definition, expected ROI calculations +### 1. Check for existing session -### 3. Statistical Experiment Design +Check if `.data-driven-feature/state.json` exists: -- Use Task tool with subagent_type="machine-learning-ops::data-scientist" -- Context: Business hypotheses and success metrics -- Prompt: "Design statistical experiment for feature: $ARGUMENTS. Calculate required sample size for statistical power, define control and treatment groups, specify randomization strategy, and plan for multiple testing corrections. Consider Bayesian A/B testing approaches for faster decision making. Design for both primary and guardrail metrics." -- Output: Experiment design document, power analysis, statistical test plan +- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user: -## Phase 2: Feature Architecture and Analytics Design + ``` + Found an in-progress data-driven feature session: + Feature: [name from state] + Current step: [step from state] -### 4. Feature Architecture Planning + 1. Resume from where we left off + 2. Start fresh (archives existing session) + ``` -- Use Task tool with subagent_type="data-engineering::backend-architect" -- Context: Business requirements and experiment design -- Prompt: "Design feature architecture for: $ARGUMENTS with A/B testing capability. Include feature flag integration (LaunchDarkly, Split.io, or Optimizely), gradual rollout strategy, circuit breakers for safety, and clean separation between control and treatment logic. Ensure architecture supports real-time configuration updates." -- Output: Architecture diagrams, feature flag schema, rollout strategy +- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh. 
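The resume/fresh decision above can be sketched in Python (illustrative only: the assistant performs this check directly rather than running a script, and the `status` values match the `state.json` schema defined in this command):

```python
import json
from pathlib import Path

STATE = Path(".data-driven-feature/state.json")

def check_session():
    """Classify any prior session: no file, in progress, or complete."""
    if not STATE.exists():
        return "fresh"        # no prior session: initialize a new one
    state = json.loads(STATE.read_text())
    if state.get("status") == "in_progress":
        return "resume"       # offer to resume from state["current_step"]
    return "archive"          # finished session: offer to archive it first
```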
-### 5. Analytics Instrumentation Design +### 2. Initialize state -- Use Task tool with subagent_type="data-engineering::data-engineer" -- Context: Feature architecture and success metrics -- Prompt: "Design comprehensive analytics instrumentation for: $ARGUMENTS. Define event schemas for user interactions, specify properties for segmentation and analysis, design funnel tracking and conversion events, plan cohort analysis capabilities. Implement using modern SDKs (Segment, Amplitude, Mixpanel) with proper event taxonomy." -- Output: Event tracking plan, analytics schema, instrumentation guide +Create `.data-driven-feature/` directory and `state.json`: -### 6. Data Pipeline Architecture - -- Use Task tool with subagent_type="data-engineering::data-engineer" -- Context: Analytics requirements and existing data infrastructure -- Prompt: "Design data pipelines for feature: $ARGUMENTS. Include real-time streaming for live metrics (Kafka, Kinesis), batch processing for detailed analysis, data warehouse integration (Snowflake, BigQuery), and feature store for ML if applicable. Ensure proper data governance and GDPR compliance." -- Output: Pipeline architecture, ETL/ELT specifications, data flow diagrams - -## Phase 3: Implementation with Instrumentation - -### 7. Backend Implementation - -- Use Task tool with subagent_type="backend-development::backend-architect" -- Context: Architecture design and feature requirements -- Prompt: "Implement backend for feature: $ARGUMENTS with full instrumentation. Include feature flag checks at decision points, comprehensive event tracking for all user actions, performance metrics collection, error tracking and monitoring. Implement proper logging for experiment analysis." -- Output: Backend code with analytics, feature flag integration, monitoring setup - -### 8. 
Frontend Implementation - -- Use Task tool with subagent_type="frontend-mobile-development::frontend-developer" -- Context: Backend APIs and analytics requirements -- Prompt: "Build frontend for feature: $ARGUMENTS with analytics tracking. Implement event tracking for all user interactions, session recording integration if applicable, performance metrics (Core Web Vitals), and proper error boundaries. Ensure consistent experience between control and treatment groups." -- Output: Frontend code with analytics, A/B test variants, performance monitoring - -### 9. ML Model Integration (if applicable) - -- Use Task tool with subagent_type="machine-learning-ops::ml-engineer" -- Context: Feature requirements and data pipelines -- Prompt: "Integrate ML models for feature: $ARGUMENTS if needed. Implement online inference with low latency, A/B testing between model versions, model performance tracking, and automatic fallback mechanisms. Set up model monitoring for drift detection." -- Output: ML pipeline, model serving infrastructure, monitoring setup - -## Phase 4: Pre-Launch Validation - -### 10. Analytics Validation - -- Use Task tool with subagent_type="data-engineering::data-engineer" -- Context: Implemented tracking and event schemas -- Prompt: "Validate analytics implementation for: $ARGUMENTS. Test all event tracking in staging, verify data quality and completeness, validate funnel definitions, ensure proper user identification and session tracking. Run end-to-end tests for data pipeline." -- Output: Validation report, data quality metrics, tracking coverage analysis - -### 11. Experiment Setup - -- Use Task tool with subagent_type="cloud-infrastructure::deployment-engineer" -- Context: Feature flags and experiment design -- Prompt: "Configure experiment infrastructure for: $ARGUMENTS. Set up feature flags with proper targeting rules, configure traffic allocation (start with 5-10%), implement kill switches, set up monitoring alerts for key metrics. 
Test randomization and assignment logic." -- Output: Experiment configuration, monitoring dashboards, rollout plan - -## Phase 5: Launch and Experimentation - -### 12. Gradual Rollout - -- Use Task tool with subagent_type="cloud-infrastructure::deployment-engineer" -- Context: Experiment configuration and monitoring setup -- Prompt: "Execute gradual rollout for feature: $ARGUMENTS. Start with internal dogfooding, then beta users (1-5%), gradually increase to target traffic. Monitor error rates, performance metrics, and early indicators. Implement automated rollback on anomalies." -- Output: Rollout execution, monitoring alerts, health metrics - -### 13. Real-time Monitoring - -- Use Task tool with subagent_type="observability-monitoring::observability-engineer" -- Context: Deployed feature and success metrics -- Prompt: "Set up comprehensive monitoring for: $ARGUMENTS. Create real-time dashboards for experiment metrics, configure alerts for statistical significance, monitor guardrail metrics for negative impacts, track system performance and error rates. Use tools like Datadog, New Relic, or custom dashboards." -- Output: Monitoring dashboards, alert configurations, SLO definitions - -## Phase 6: Analysis and Decision Making - -### 14. Statistical Analysis - -- Use Task tool with subagent_type="machine-learning-ops::data-scientist" -- Context: Experiment data and original hypotheses -- Prompt: "Analyze A/B test results for: $ARGUMENTS. Calculate statistical significance with confidence intervals, check for segment-level effects, analyze secondary metrics impact, investigate any unexpected patterns. Use both frequentist and Bayesian approaches. Account for multiple testing if applicable." -- Output: Statistical analysis report, significance tests, segment analysis - -### 15. 
Business Impact Assessment - -- Use Task tool with subagent_type="business-analytics::business-analyst" -- Context: Statistical analysis and business metrics -- Prompt: "Assess business impact of feature: $ARGUMENTS. Calculate actual vs expected ROI, analyze impact on key business metrics, evaluate cost-benefit including operational overhead, project long-term value. Make recommendation on full rollout, iteration, or rollback." -- Output: Business impact report, ROI analysis, recommendation document - -### 16. Post-Launch Optimization - -- Use Task tool with subagent_type="machine-learning-ops::data-scientist" -- Context: Launch results and user feedback -- Prompt: "Identify optimization opportunities for: $ARGUMENTS based on data. Analyze user behavior patterns in treatment group, identify friction points in user journey, suggest improvements based on data, plan follow-up experiments. Use cohort analysis for long-term impact." -- Output: Optimization recommendations, follow-up experiment plans - -## Configuration Options - -```yaml -experiment_config: - min_sample_size: 10000 - confidence_level: 0.95 - runtime_days: 14 - traffic_allocation: "gradual" # gradual, fixed, or adaptive - -analytics_platforms: - - amplitude - - segment - - mixpanel - -feature_flags: - provider: "launchdarkly" # launchdarkly, split, optimizely, unleash - -statistical_methods: - - frequentist - - bayesian - -monitoring: - - real_time_metrics: true - - anomaly_detection: true - - automatic_rollback: true +```json +{ + "feature": "$ARGUMENTS", + "status": "in_progress", + "experiment_type": "ab", + "confidence_level": 0.95, + "current_step": 1, + "current_phase": 1, + "completed_steps": [], + "files_created": [], + "started_at": "ISO_TIMESTAMP", + "last_updated": "ISO_TIMESTAMP" +} ``` -## Success Criteria +Parse `$ARGUMENTS` for `--experiment-type` and `--confidence` flags. Use defaults if not specified. 
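The flag parsing described above can be sketched as follows (an illustrative helper, not part of the command itself; the assistant parses `$ARGUMENTS` inline):

```python
def parse_flags(arguments: str):
    """Split $ARGUMENTS into the feature description and the optional flags."""
    tokens = arguments.split()
    experiment_type, confidence = "ab", 0.95   # defaults per the state.json schema
    feature_words = []
    i = 0
    while i < len(tokens):
        if tokens[i] == "--experiment-type" and i + 1 < len(tokens):
            experiment_type = tokens[i + 1]
            i += 2
        elif tokens[i] == "--confidence" and i + 1 < len(tokens):
            confidence = float(tokens[i + 1])
            i += 2
        else:
            feature_words.append(tokens[i])   # everything else is the description
            i += 1
    return " ".join(feature_words), experiment_type, confidence
```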
-- **Data Coverage**: 100% of user interactions tracked with proper event schema -- **Experiment Validity**: Proper randomization, sufficient statistical power, no sample ratio mismatch -- **Statistical Rigor**: Clear significance testing, proper confidence intervals, multiple testing corrections -- **Business Impact**: Measurable improvement in target metrics without degrading guardrail metrics -- **Technical Performance**: No degradation in p95 latency, error rates below 0.1% -- **Decision Speed**: Clear go/no-go decision within planned experiment runtime -- **Learning Outcomes**: Documented insights for future feature development +### 3. Parse feature description -## Coordination Notes +Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below. -- Data scientists and business analysts collaborate on hypothesis formation -- Engineers implement with analytics as first-class requirement, not afterthought -- Feature flags enable safe experimentation without full deployments -- Real-time monitoring allows for quick iteration and rollback if needed -- Statistical rigor balanced with business practicality and speed to market -- Continuous learning loop feeds back into next feature development cycle +--- -Feature to develop with data-driven approach: $ARGUMENTS +## Phase 1: Data Analysis & Hypothesis (Steps 1–3) — Interactive + +### Step 1: Exploratory Data Analysis + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Perform exploratory data analysis for $FEATURE" + prompt: | + You are a data scientist specializing in product analytics. Perform exploratory data analysis for feature: $FEATURE. + + ## Instructions + 1. Analyze existing user behavior data, identify patterns and opportunities + 2. Segment users by behavior and engagement patterns + 3. Calculate baseline metrics for key indicators + 4. 
Use modern analytics tools (Amplitude, Mixpanel, Segment) to understand current user journeys, conversion funnels, and engagement patterns + 5. Identify data quality issues or gaps that need addressing + + Provide an EDA report with user segments, behavioral patterns, and baseline metrics. +``` + +Save the agent's output to `.data-driven-feature/01-eda-report.md`. + +Update `state.json`: set `current_step` to 2, add `"01-eda-report.md"` to `files_created`, add step 1 to `completed_steps`. + +### Step 2: Business Hypothesis Development + +Read `.data-driven-feature/01-eda-report.md` to load EDA context. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Formulate business hypotheses for $FEATURE" + prompt: | + You are a business analyst specializing in data-driven product development. Formulate business hypotheses for feature: $FEATURE based on the data analysis below. + + ## EDA Findings + [Insert full contents of .data-driven-feature/01-eda-report.md] + + ## Instructions + 1. Define clear success metrics and expected impact on key business KPIs + 2. Identify target user segments and minimum detectable effects + 3. Create measurable hypotheses using ICE or RICE prioritization frameworks + 4. Calculate expected ROI and business value + + Provide a hypothesis document with success metrics definition and expected ROI calculations. +``` + +Save the agent's output to `.data-driven-feature/02-hypotheses.md`. + +Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`. + +### Step 3: Statistical Experiment Design + +Read `.data-driven-feature/02-hypotheses.md` to load hypothesis context. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Design statistical experiment for $FEATURE" + prompt: | + You are a data scientist specializing in experimentation and statistical analysis. Design the statistical experiment for feature: $FEATURE. 
+ + ## Business Hypotheses + [Insert full contents of .data-driven-feature/02-hypotheses.md] + + ## Experiment Type: [from state.json] + ## Confidence Level: [from state.json] + + ## Instructions + 1. Calculate required sample size for statistical power + 2. Define control and treatment groups with randomization strategy + 3. Plan for multiple testing corrections if needed + 4. Consider Bayesian A/B testing approaches for faster decision making + 5. Design for both primary and guardrail metrics + 6. Specify experiment runtime and stopping rules + + Provide an experiment design document with power analysis and statistical test plan. +``` + +Save the agent's output to `.data-driven-feature/03-experiment-design.md`. + +Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 1 — User Approval Required + +You MUST stop here and present the analysis and experiment design for review. + +Display a summary of the hypotheses from `.data-driven-feature/02-hypotheses.md` and experiment design from `.data-driven-feature/03-experiment-design.md` (key metrics, target segments, sample size, experiment type) and ask: + +``` +Data analysis and experiment design complete. Please review: +- .data-driven-feature/01-eda-report.md +- .data-driven-feature/02-hypotheses.md +- .data-driven-feature/03-experiment-design.md + +1. Approve — proceed to architecture and implementation +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop. + +--- + +## Phase 2: Architecture & Instrumentation (Steps 4–6) + +### Step 4: Feature Architecture Planning + +Read `.data-driven-feature/02-hypotheses.md` and `.data-driven-feature/03-experiment-design.md`. 
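For reference while working from the experiment design, the sample-size arithmetic that the Step 3 power analysis rests on can be sketched with the standard two-proportion formula (a simplification: the design agent may instead use sequential or Bayesian methods, and `mde` here is an absolute effect):

```python
from statistics import NormalDist

def sample_size_per_group(p_baseline, mde, alpha=0.05, power=0.8):
    """Approximate n per group for a two-sided two-proportion z-test.

    p_baseline: baseline conversion rate; mde: absolute minimum detectable effect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p2 = p_baseline + mde
    p_bar = (p_baseline + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p_baseline * (1 - p_baseline) + p2 * (1 - p2)) ** 0.5) ** 2) / mde ** 2
    return int(n) + 1
```

For example, detecting a 2-point absolute lift on a 10% baseline at the default alpha and power requires a few thousand users per group.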
+ +Use the Task tool: + +``` +Task: + subagent_type: "backend-architect" + description: "Design feature architecture for $FEATURE with A/B testing capability" + prompt: | + Design the feature architecture for: $FEATURE with A/B testing capability. + + ## Business Hypotheses + [Insert contents of .data-driven-feature/02-hypotheses.md] + + ## Experiment Design + [Insert contents of .data-driven-feature/03-experiment-design.md] + + ## Instructions + 1. Include feature flag integration (LaunchDarkly, Split.io, or Optimizely) + 2. Design gradual rollout strategy with circuit breakers for safety + 3. Ensure clean separation between control and treatment logic + 4. Support real-time configuration updates + 5. Design for proper data collection at each decision point + + Provide architecture diagrams, feature flag schema, and rollout strategy. +``` + +Save the agent's output to `.data-driven-feature/04-architecture.md`. + +Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`. + +### Step 5: Analytics Instrumentation Design + +Read `.data-driven-feature/04-architecture.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "data-engineer" + description: "Design analytics instrumentation for $FEATURE" + prompt: | + Design comprehensive analytics instrumentation for: $FEATURE. + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Experiment Design + [Insert contents of .data-driven-feature/03-experiment-design.md] + + ## Instructions + 1. Define event schemas for user interactions with proper taxonomy + 2. Specify properties for segmentation and analysis + 3. Design funnel tracking and conversion events + 4. Plan cohort analysis capabilities + 5. Implement using modern SDKs (Segment, Amplitude, Mixpanel) with proper event taxonomy + + Provide an event tracking plan, analytics schema, and instrumentation guide. +``` + +Save the agent's output to `.data-driven-feature/05-analytics-design.md`. 
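One shape the event taxonomy and its validation could take (the event names and required properties here are hypothetical examples, not prescribed by the analytics design):

```python
EVENT_SCHEMA = {
    "feature_exposure": {"required": {"user_id", "variant", "timestamp"}},
    "feature_conversion": {"required": {"user_id", "variant", "timestamp", "value"}},
}

def validate_event(name, props):
    """Check one analytics event against the (hypothetical) schema above."""
    schema = EVENT_SCHEMA.get(name)
    if schema is None:
        return [f"unknown event: {name}"]
    missing = schema["required"] - set(props)
    return [f"missing property: {p}" for p in sorted(missing)]
```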
+ +Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`. + +### Step 6: Data Pipeline Architecture + +Read `.data-driven-feature/05-analytics-design.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "data-engineer" + description: "Design data pipelines for $FEATURE" + prompt: | + Design data pipelines for feature: $FEATURE. + + ## Analytics Design + [Insert contents of .data-driven-feature/05-analytics-design.md] + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Instructions + 1. Include real-time streaming for live metrics (Kafka, Kinesis) + 2. Design batch processing for detailed analysis + 3. Plan data warehouse integration (Snowflake, BigQuery) + 4. Include feature store for ML if applicable + 5. Ensure proper data governance and GDPR compliance + 6. Define data retention and archival policies + + Provide pipeline architecture, ETL/ELT specifications, and data flow diagrams. +``` + +Save the agent's output to `.data-driven-feature/06-data-pipelines.md`. + +Update `state.json`: set `current_step` to "checkpoint-2", add step 6 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 2 — User Approval Required + +Display a summary of the architecture, analytics design, and data pipelines and ask: + +``` +Architecture and instrumentation design complete. Please review: +- .data-driven-feature/04-architecture.md +- .data-driven-feature/05-analytics-design.md +- .data-driven-feature/06-data-pipelines.md + +1. Approve — proceed to implementation +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 3 until the user approves. + +--- + +## Phase 3: Implementation (Steps 7–9) + +### Step 7: Backend Implementation + +Read `.data-driven-feature/04-architecture.md` and `.data-driven-feature/05-analytics-design.md`. 
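A minimal sketch of the flag-gated decision point with exposure tracking that this step asks the agent to implement (the `Experiment` class stands in for a real feature-flag client such as LaunchDarkly plus an analytics SDK; names are illustrative):

```python
class Experiment:
    """Toy stand-in for a feature-flag client and an analytics tracker."""
    def __init__(self, assignments, events):
        self.assignments = assignments   # user_id -> "control" | "treatment"
        self.events = events             # collected tracking events

    def variant(self, user_id):
        return self.assignments.get(user_id, "control")   # safe default: control

    def track(self, user_id, event, **props):
        self.events.append({"user": user_id, "event": event, **props})

def handle_request(exp, user_id):
    v = exp.variant(user_id)
    exp.track(user_id, "feature_exposure", variant=v)     # log exposure for later analysis
    if v == "treatment":
        return "new_flow"
    return "old_flow"
```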
+ +Use the Task tool: + +``` +Task: + subagent_type: "backend-architect" + description: "Implement backend for $FEATURE with full instrumentation" + prompt: | + Implement the backend for feature: $FEATURE with full instrumentation. + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Analytics Design + [Insert contents of .data-driven-feature/05-analytics-design.md] + + ## Instructions + 1. Include feature flag checks at decision points + 2. Implement comprehensive event tracking for all user actions + 3. Add performance metrics collection + 4. Implement error tracking and monitoring + 5. Add proper logging for experiment analysis + 6. Follow the project's existing code patterns and conventions + + Write all code files. Report what files were created/modified. +``` + +Save a summary to `.data-driven-feature/07-backend.md`. + +Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`. + +### Step 8: Frontend Implementation + +Read `.data-driven-feature/04-architecture.md`, `.data-driven-feature/05-analytics-design.md`, and `.data-driven-feature/07-backend.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Implement frontend for $FEATURE with analytics tracking" + prompt: | + You are a frontend developer. Build the frontend for feature: $FEATURE with analytics tracking. + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Analytics Design + [Insert contents of .data-driven-feature/05-analytics-design.md] + + ## Backend Implementation + [Insert contents of .data-driven-feature/07-backend.md] + + ## Instructions + 1. Implement event tracking for all user interactions + 2. Build A/B test variants with proper variant assignment + 3. Add session recording integration if applicable + 4. Track performance metrics (Core Web Vitals) + 5. Add proper error boundaries + 6. Ensure consistent experience between control and treatment groups + 7. 
Follow the project's existing frontend patterns and conventions + + Write all code files. Report what files were created/modified. +``` + +Save a summary to `.data-driven-feature/08-frontend.md`. + +**Note:** If the feature has no frontend component (pure backend/API/pipeline), skip this step — write a brief note in `08-frontend.md` explaining why it was skipped, and continue. + +Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`. + +### Step 9: ML Model Integration (if applicable) + +Read `.data-driven-feature/04-architecture.md` and `.data-driven-feature/06-data-pipelines.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Integrate ML models for $FEATURE" + prompt: | + You are an ML engineer. Integrate ML models for feature: $FEATURE if needed. + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Data Pipelines + [Insert contents of .data-driven-feature/06-data-pipelines.md] + + ## Instructions + 1. Implement online inference with low latency + 2. Set up A/B testing between model versions + 3. Add model performance tracking and drift detection + 4. Implement automatic fallback mechanisms + 5. Set up model monitoring dashboards + + If no ML component is needed for this feature, explain why and skip. + Write all code files. Report what files were created/modified. +``` + +Save a summary to `.data-driven-feature/09-ml-integration.md`. + +Update `state.json`: set `current_step` to "checkpoint-3", add step 9 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 3 — User Approval Required + +Display a summary of the implementation and ask: + +``` +Implementation complete. Please review: +- .data-driven-feature/07-backend.md +- .data-driven-feature/08-frontend.md +- .data-driven-feature/09-ml-integration.md + +1. Approve — proceed to validation and launch +2. Request changes — tell me what to fix +3. 
Pause — save progress and stop here +``` + +Do NOT proceed to Phase 4 until the user approves. + +--- + +## Phase 4: Validation & Launch (Steps 10–13) + +### Step 10: Analytics Validation + +Read `.data-driven-feature/05-analytics-design.md`, `.data-driven-feature/07-backend.md`, and `.data-driven-feature/08-frontend.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "data-engineer" + description: "Validate analytics implementation for $FEATURE" + prompt: | + Validate the analytics implementation for: $FEATURE. + + ## Analytics Design + [Insert contents of .data-driven-feature/05-analytics-design.md] + + ## Backend Implementation + [Insert contents of .data-driven-feature/07-backend.md] + + ## Frontend Implementation + [Insert contents of .data-driven-feature/08-frontend.md] + + ## Instructions + 1. Test all event tracking in staging environment + 2. Verify data quality and completeness + 3. Validate funnel definitions and conversion tracking + 4. Ensure proper user identification and session tracking + 5. Run end-to-end tests for data pipeline + 6. Check for tracking gaps or inconsistencies + + Provide a validation report with data quality metrics and tracking coverage analysis. +``` + +Save the agent's output to `.data-driven-feature/10-analytics-validation.md`. + +Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`. + +### Step 11: Experiment Setup & Deployment + +Read `.data-driven-feature/03-experiment-design.md` and `.data-driven-feature/04-architecture.md`. + +Launch two agents in parallel using multiple Task tool calls in a single response: + +**11a. Experiment Infrastructure:** + +``` +Task: + subagent_type: "general-purpose" + description: "Configure experiment infrastructure for $FEATURE" + prompt: | + You are a deployment engineer specializing in experimentation platforms. Configure experiment infrastructure for: $FEATURE. 
+ + ## Experiment Design + [Insert contents of .data-driven-feature/03-experiment-design.md] + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Instructions + 1. Set up feature flags with proper targeting rules + 2. Configure traffic allocation (start with 5-10%) + 3. Implement kill switches for safety + 4. Set up monitoring alerts for key metrics + 5. Test randomization and assignment logic + 6. Create rollback procedures + + Provide experiment configuration, monitoring dashboards, and rollout plan. +``` + +**11b. Monitoring Setup:** + +``` +Task: + subagent_type: "general-purpose" + description: "Set up monitoring for $FEATURE experiment" + prompt: | + You are an observability engineer. Set up comprehensive monitoring for: $FEATURE. + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Experiment Design + [Insert contents of .data-driven-feature/03-experiment-design.md] + + ## Analytics Design + [Insert contents of .data-driven-feature/05-analytics-design.md] + + ## Instructions + 1. Create real-time dashboards for experiment metrics + 2. Configure alerts for statistical significance milestones + 3. Monitor guardrail metrics for negative impacts + 4. Track system performance and error rates + 5. Define SLOs for the experiment period + 6. Use tools like Datadog, New Relic, or custom dashboards + + Provide monitoring dashboard configs, alert definitions, and SLO specifications. +``` + +After both complete, consolidate results into `.data-driven-feature/11-experiment-setup.md`: + +```markdown +# Experiment Setup: $FEATURE + +## Experiment Infrastructure + +[Summary from 11a — feature flags, traffic allocation, rollback plan] + +## Monitoring Configuration + +[Summary from 11b — dashboards, alerts, SLOs] +``` + +Update `state.json`: set `current_step` to 12, add step 11 to `completed_steps`. + +### Step 12: Gradual Rollout + +Read `.data-driven-feature/11-experiment-setup.md`. 
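The stage-gating logic that a rollout plan of this shape encodes can be sketched as follows (stage names, allocations, and the error-rate threshold are hypothetical examples, not requirements):

```python
STAGES = [
    ("dogfood", 0.01),
    ("beta", 0.05),
    ("ramp", 0.25),
    ("full", 1.00),
]

def next_allocation(current_stage, error_rate, guardrail_ok, max_error_rate=0.001):
    """Advance, hold, or roll back traffic based on stage health checks."""
    if error_rate > max_error_rate or not guardrail_ok:
        return ("rollback", 0.0)           # kill switch: drop treatment to 0%
    idx = [name for name, _ in STAGES].index(current_stage)
    if idx + 1 < len(STAGES):
        return STAGES[idx + 1]             # healthy: advance to the next stage
    return STAGES[idx]                     # already at full traffic: hold
```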
+ +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Create gradual rollout plan for $FEATURE" + prompt: | + You are a deployment engineer. Create a detailed gradual rollout plan for feature: $FEATURE. + + ## Experiment Setup + [Insert contents of .data-driven-feature/11-experiment-setup.md] + + ## Instructions + 1. Define rollout stages: internal dogfooding → beta (1-5%) → gradual increase to target traffic + 2. Specify health checks and go/no-go criteria for each stage + 3. Define monitoring checkpoints and metrics thresholds + 4. Create automated rollback triggers for anomalies + 5. Document manual rollback procedures + + Provide a stage-by-stage rollout plan with decision criteria. +``` + +Save the agent's output to `.data-driven-feature/12-rollout-plan.md`. + +Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`. + +### Step 13: Security Review + +Read `.data-driven-feature/04-architecture.md`, `.data-driven-feature/07-backend.md`, and `.data-driven-feature/08-frontend.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Security review of $FEATURE" + prompt: | + You are a security auditor. Perform a security review of this data-driven feature implementation. + + ## Architecture + [Insert contents of .data-driven-feature/04-architecture.md] + + ## Backend Implementation + [Insert contents of .data-driven-feature/07-backend.md] + + ## Frontend Implementation + [Insert contents of .data-driven-feature/08-frontend.md] + + ## Instructions + Review for: OWASP Top 10, data privacy and GDPR compliance, PII handling in analytics events, + authentication/authorization flaws, input validation gaps, experiment manipulation risks, + and any security anti-patterns. + + Provide findings with severity, location, and specific fix recommendations. +``` + +Save the agent's output to `.data-driven-feature/13-security-review.md`. 
+ +If there are Critical or High severity findings, address them now before proceeding. Apply fixes and re-validate. + +Update `state.json`: set `current_step` to "checkpoint-4", add step 13 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 4 — User Approval Required + +Display a summary of validation and launch readiness and ask: + +``` +Validation and launch preparation complete. Please review: +- .data-driven-feature/10-analytics-validation.md +- .data-driven-feature/11-experiment-setup.md +- .data-driven-feature/12-rollout-plan.md +- .data-driven-feature/13-security-review.md + +Security findings: [X critical, Y high, Z medium] + +1. Approve — proceed to analysis planning +2. Request changes — tell me what to fix +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 5 until the user approves. + +--- + +## Phase 5: Analysis & Decision (Steps 14–16) + +### Step 14: Statistical Analysis + +Read `.data-driven-feature/03-experiment-design.md` and `.data-driven-feature/02-hypotheses.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Create statistical analysis plan for $FEATURE experiment" + prompt: | + You are a data scientist specializing in experimentation. Create the statistical analysis plan for the A/B test results of: $FEATURE. + + ## Experiment Design + [Insert contents of .data-driven-feature/03-experiment-design.md] + + ## Hypotheses + [Insert contents of .data-driven-feature/02-hypotheses.md] + + ## Instructions + 1. Define statistical significance calculations with confidence intervals + 2. Plan segment-level effect analysis + 3. Specify secondary metrics impact analysis + 4. Use both frequentist and Bayesian approaches + 5. Account for multiple testing corrections + 6. Define stopping rules and decision criteria + + Provide an analysis plan with templates for results reporting. +``` + +Save the agent's output to `.data-driven-feature/14-analysis-plan.md`. 
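The frequentist half of the analysis plan typically reduces to a two-proportion z-test with a multiple-testing correction. A self-contained sketch (the sample counts are invented; the Bonferroni divisor is one common choice, not mandated by this workflow):

```python
from math import sqrt, erfc

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p from the normal tail
    return z, p_value

def significant(p_value, k_metrics, alpha=0.05):
    """Bonferroni correction: family-wise alpha split across k tested metrics."""
    return p_value < alpha / k_metrics

# Hypothetical experiment results: 4.8% control vs 5.6% treatment conversion.
z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}, significant across 3 metrics: {significant(p, 3)}")
```

The Bayesian side (posterior probability of improvement) and the stopping rules remain the agent's responsibility per the instructions above.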
+ +Update `state.json`: set `current_step` to 15, add step 14 to `completed_steps`. + +### Step 15: Business Impact Assessment Framework + +Read `.data-driven-feature/02-hypotheses.md` and `.data-driven-feature/14-analysis-plan.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Create business impact assessment framework for $FEATURE" + prompt: | + You are a business analyst. Create a business impact assessment framework for feature: $FEATURE. + + ## Hypotheses + [Insert contents of .data-driven-feature/02-hypotheses.md] + + ## Analysis Plan + [Insert contents of .data-driven-feature/14-analysis-plan.md] + + ## Instructions + 1. Define actual vs expected ROI calculation methodology + 2. Create a framework for analyzing impact on key business metrics + 3. Plan cost-benefit analysis including operational overhead + 4. Define criteria for full rollout, iteration, or rollback decisions + 5. Create templates for stakeholder reporting + + Provide a business impact framework and decision matrix. +``` + +Save the agent's output to `.data-driven-feature/15-impact-framework.md`. + +Update `state.json`: set `current_step` to 16, add step 15 to `completed_steps`. + +### Step 16: Optimization Roadmap + +Read `.data-driven-feature/14-analysis-plan.md` and `.data-driven-feature/15-impact-framework.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Create post-launch optimization roadmap for $FEATURE" + prompt: | + You are a data scientist specializing in product optimization. Create a post-launch optimization roadmap for: $FEATURE. + + ## Analysis Plan + [Insert contents of .data-driven-feature/14-analysis-plan.md] + + ## Impact Framework + [Insert contents of .data-driven-feature/15-impact-framework.md] + + ## Instructions + 1. Define user behavior analysis methodology for treatment group + 2. Plan friction point identification in user journeys + 3. 
Suggest improvement hypotheses based on expected data patterns + 4. Plan follow-up experiments and iteration cycles + 5. Design cohort analysis for long-term impact assessment + 6. Create a continuous learning feedback loop + + Provide an optimization roadmap with follow-up experiment plans. +``` + +Save the agent's output to `.data-driven-feature/16-optimization-roadmap.md`. + +Update `state.json`: set `current_step` to "complete", add step 16 to `completed_steps`. + +--- + +## Completion + +Update `state.json`: + +- Set `status` to `"complete"` +- Set `last_updated` to current timestamp + +Present the final summary: + +``` +Data-driven feature development complete: $FEATURE + +## Files Created +[List all .data-driven-feature/ output files] + +## Development Summary +- EDA Report: .data-driven-feature/01-eda-report.md +- Hypotheses: .data-driven-feature/02-hypotheses.md +- Experiment Design: .data-driven-feature/03-experiment-design.md +- Architecture: .data-driven-feature/04-architecture.md +- Analytics Design: .data-driven-feature/05-analytics-design.md +- Data Pipelines: .data-driven-feature/06-data-pipelines.md +- Backend: .data-driven-feature/07-backend.md +- Frontend: .data-driven-feature/08-frontend.md +- ML Integration: .data-driven-feature/09-ml-integration.md +- Analytics Validation: .data-driven-feature/10-analytics-validation.md +- Experiment Setup: .data-driven-feature/11-experiment-setup.md +- Rollout Plan: .data-driven-feature/12-rollout-plan.md +- Security Review: .data-driven-feature/13-security-review.md +- Analysis Plan: .data-driven-feature/14-analysis-plan.md +- Impact Framework: .data-driven-feature/15-impact-framework.md +- Optimization Roadmap: .data-driven-feature/16-optimization-roadmap.md + +## Next Steps +1. Review all generated artifacts and documentation +2. Execute the rollout plan in .data-driven-feature/12-rollout-plan.md +3. Monitor using the dashboards from .data-driven-feature/11-experiment-setup.md +4. 
Run analysis after experiment completes using .data-driven-feature/14-analysis-plan.md +5. Make go/no-go decision using .data-driven-feature/15-impact-framework.md +``` diff --git a/plugins/dotnet-contribution/.claude-plugin/plugin.json b/plugins/dotnet-contribution/.claude-plugin/plugin.json new file mode 100644 index 0000000..7ea3534 --- /dev/null +++ b/plugins/dotnet-contribution/.claude-plugin/plugin.json @@ -0,0 +1,10 @@ +{ + "name": "dotnet-contribution", + "version": "1.0.0", + "description": "Comprehensive .NET backend development with C#, ASP.NET Core, Entity Framework Core, and Dapper for production-grade applications", + "author": { + "name": "Seth Hobson", + "email": "seth@major7apps.com" + }, + "license": "MIT" +} diff --git a/plugins/framework-migration/.claude-plugin/plugin.json b/plugins/framework-migration/.claude-plugin/plugin.json index c57ee90..0138da7 100644 --- a/plugins/framework-migration/.claude-plugin/plugin.json +++ b/plugins/framework-migration/.claude-plugin/plugin.json @@ -1,6 +1,6 @@ { "name": "framework-migration", - "version": "1.2.2", + "version": "1.3.0", "description": "Framework updates, migration planning, and architectural transformation workflows", "author": { "name": "Seth Hobson", diff --git a/plugins/framework-migration/commands/legacy-modernize.md b/plugins/framework-migration/commands/legacy-modernize.md index 060bff6..ac57e74 100644 --- a/plugins/framework-migration/commands/legacy-modernize.md +++ b/plugins/framework-migration/commands/legacy-modernize.md @@ -1,123 +1,659 @@ +--- +description: "Orchestrate legacy system modernization using the strangler fig pattern with gradual component replacement" +argument-hint: " [--strategy parallel-systems|big-bang|by-feature|database-first|api-first]" +--- + # Legacy Code Modernization Workflow -Orchestrate a comprehensive legacy system modernization using the strangler fig pattern, enabling gradual replacement of outdated components while maintaining continuous business 
operations through expert agent coordination. +## CRITICAL BEHAVIORAL RULES -[Extended thinking: The strangler fig pattern, named after the tropical fig tree that gradually envelops and replaces its host, represents the gold standard for risk-managed legacy modernization. This workflow implements a systematic approach where new functionality gradually replaces legacy components, allowing both systems to coexist during transition. By orchestrating specialized agents for assessment, testing, security, and implementation, we ensure each migration phase is validated before proceeding, minimizing disruption while maximizing modernization velocity.] +You MUST follow these rules exactly. Violating any of them is a failure. -## Phase 1: Legacy Assessment and Risk Analysis +1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps. +2. **Write output files.** Each step MUST produce its output file in `.legacy-modernize/` before the next step begins. Read from prior step files — do NOT rely on context window memory. +3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options. +4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue. +5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies. +6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it. -### 1. Comprehensive Legacy System Analysis +## Pre-flight Checks -- Use Task tool with subagent_type="legacy-modernizer" -- Prompt: "Analyze the legacy codebase at $ARGUMENTS. 
Document technical debt inventory including: outdated dependencies, deprecated APIs, security vulnerabilities, performance bottlenecks, and architectural anti-patterns. Generate a modernization readiness report with component complexity scores (1-10), dependency mapping, and database coupling analysis. Identify quick wins vs complex refactoring targets." -- Expected output: Detailed assessment report with risk matrix and modernization priorities +Before starting, perform these checks: -### 2. Dependency and Integration Mapping +### 1. Check for existing session -- Use Task tool with subagent_type="architect-review" -- Prompt: "Based on the legacy assessment report, create a comprehensive dependency graph showing: internal module dependencies, external service integrations, shared database schemas, and cross-system data flows. Identify integration points that will require facade patterns or adapter layers during migration. Highlight circular dependencies and tight coupling that need resolution." -- Context from previous: Legacy assessment report, component complexity scores -- Expected output: Visual dependency map and integration point catalog +Check if `.legacy-modernize/state.json` exists: -### 3. Business Impact and Risk Assessment +- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user: -- Use Task tool with subagent_type="business-analytics::business-analyst" -- Prompt: "Evaluate business impact of modernizing each component identified. Create risk assessment matrix considering: business criticality (revenue impact), user traffic patterns, data sensitivity, regulatory requirements, and fallback complexity. Prioritize components using a weighted scoring system: (Business Value × 0.4) + (Technical Risk × 0.3) + (Quick Win Potential × 0.3). Define rollback strategies for each component." 
-- Context from previous: Component inventory, dependency mapping -- Expected output: Prioritized migration roadmap with risk mitigation strategies + ``` + Found an in-progress legacy modernization session: + Target: [target from state] + Current step: [step from state] -## Phase 2: Test Coverage Establishment + 1. Resume from where we left off + 2. Start fresh (archives existing session) + ``` -### 1. Legacy Code Test Coverage Analysis +- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh. -- Use Task tool with subagent_type="unit-testing::test-automator" -- Prompt: "Analyze existing test coverage for legacy components at $ARGUMENTS. Use coverage tools to identify untested code paths, missing integration tests, and absent end-to-end scenarios. For components with <40% coverage, generate characterization tests that capture current behavior without modifying functionality. Create test harness for safe refactoring." -- Expected output: Test coverage report and characterization test suite +### 2. Initialize state -### 2. Contract Testing Implementation +Create `.legacy-modernize/` directory and `state.json`: -- Use Task tool with subagent_type="unit-testing::test-automator" -- Prompt: "Implement contract tests for all integration points identified in dependency mapping. Create consumer-driven contracts for APIs, message queue interactions, and database schemas. Set up contract verification in CI/CD pipeline. Generate performance baselines for response times and throughput to validate modernized components maintain SLAs." -- Context from previous: Integration point catalog, existing test coverage -- Expected output: Contract test suite with performance baselines +```json +{ + "target": "$ARGUMENTS", + "status": "in_progress", + "strategy": "parallel-systems", + "current_step": 1, + "current_phase": 1, + "completed_steps": [], + "files_created": [], + "started_at": "ISO_TIMESTAMP", + "last_updated": "ISO_TIMESTAMP" +} +``` -### 3. 
Test Data Management Strategy +Parse `$ARGUMENTS` for `--strategy` flag. Use `parallel-systems` as default if not specified. -- Use Task tool with subagent_type="data-engineering::data-engineer" -- Prompt: "Design test data management strategy for parallel system operation. Create data generation scripts for edge cases, implement data masking for sensitive information, and establish test database refresh procedures. Set up monitoring for data consistency between legacy and modernized components during migration." -- Context from previous: Database schemas, test requirements -- Expected output: Test data pipeline and consistency monitoring +### 3. Parse target description -## Phase 3: Incremental Migration Implementation +Extract the target description from `$ARGUMENTS` (everything before the flags). This is referenced as `$TARGET` in prompts below. -### 1. Strangler Fig Infrastructure Setup +--- -- Use Task tool with subagent_type="backend-development::backend-architect" -- Prompt: "Implement strangler fig infrastructure with API gateway for traffic routing. Configure feature flags for gradual rollout using environment variables or feature management service. Set up proxy layer with request routing rules based on: URL patterns, headers, or user segments. Implement circuit breakers and fallback mechanisms for resilience. Create observability dashboard for dual-system monitoring." -- Expected output: API gateway configuration, feature flag system, monitoring dashboard +## Phase 1: Legacy Assessment and Risk Analysis (Steps 1–3) -### 2. Component Modernization - First Wave +### Step 1: Comprehensive Legacy System Analysis -- Use Task tool with subagent_type="python-development::python-pro" or "golang-pro" (based on target stack) -- Prompt: "Modernize first-wave components (quick wins identified in assessment). 
For each component: extract business logic from legacy code, implement using modern patterns (dependency injection, SOLID principles), ensure backward compatibility through adapter patterns, maintain data consistency with event sourcing or dual writes. Follow 12-factor app principles. Components to modernize: [list from prioritized roadmap]" -- Context from previous: Characterization tests, contract tests, infrastructure setup -- Expected output: Modernized components with adapters +Use the Task tool with subagent_type="legacy-modernizer": -### 3. Security Hardening +``` +Task: + subagent_type: "legacy-modernizer" + description: "Analyze legacy codebase for modernization readiness" + prompt: | + Analyze the legacy codebase at $TARGET. Document a technical debt inventory including: + - Outdated dependencies and deprecated APIs + - Security vulnerabilities and performance bottlenecks + - Architectural anti-patterns -- Use Task tool with subagent_type="security-scanning::security-auditor" -- Prompt: "Audit modernized components for security vulnerabilities. Implement security improvements including: OAuth 2.0/JWT authentication, role-based access control, input validation and sanitization, SQL injection prevention, XSS protection, and secrets management. Verify OWASP top 10 compliance. Configure security headers and implement rate limiting." -- Context from previous: Modernized component code -- Expected output: Security audit report and hardened components + Generate a modernization readiness report with: + - Component complexity scores (1-10) + - Dependency mapping between modules + - Database coupling analysis + - Quick wins vs complex refactoring targets -## Phase 4: Performance Validation and Optimization + Write your complete assessment as a single markdown document. +``` -### 1. Performance Testing and Optimization +Save the agent's output to `.legacy-modernize/01-legacy-assessment.md`. 
-- Use Task tool with subagent_type="application-performance::performance-engineer" -- Prompt: "Conduct performance testing comparing legacy vs modernized components. Run load tests simulating production traffic patterns, measure response times, throughput, and resource utilization. Identify performance regressions and optimize: database queries with indexing, caching strategies (Redis/Memcached), connection pooling, and async processing where applicable. Validate against SLA requirements." -- Context from previous: Performance baselines, modernized components -- Expected output: Performance test results and optimization recommendations +Update `state.json`: set `current_step` to 2, add `"01-legacy-assessment.md"` to `files_created`, add step 1 to `completed_steps`. -### 2. Progressive Rollout and Monitoring +### Step 2: Dependency and Integration Mapping -- Use Task tool with subagent_type="deployment-strategies::deployment-engineer" -- Prompt: "Implement progressive rollout strategy using feature flags. Start with 5% traffic to modernized components, monitor error rates, latency, and business metrics. Define automatic rollback triggers: error rate >1%, latency >2x baseline, or business metric degradation. Create runbook for traffic shifting: 5% → 25% → 50% → 100% with 24-hour observation periods." -- Context from previous: Feature flag configuration, monitoring dashboard -- Expected output: Rollout plan with automated safeguards +Read `.legacy-modernize/01-legacy-assessment.md` to load assessment context. -## Phase 5: Migration Completion and Documentation +Use the Task tool with subagent_type="architect-review": -### 1. Legacy Component Decommissioning +``` +Task: + subagent_type: "architect-review" + description: "Create dependency graph and integration point catalog" + prompt: | + Based on the legacy assessment report below, create a comprehensive dependency graph. 
-- Use Task tool with subagent_type="legacy-modernizer" -- Prompt: "Plan safe decommissioning of replaced legacy components. Verify no remaining dependencies through traffic analysis (minimum 30 days at 0% traffic). Archive legacy code with documentation of original functionality. Update CI/CD pipelines to remove legacy builds. Clean up unused database tables and remove deprecated API endpoints. Document any retained legacy components with sunset timeline." -- Context from previous: Traffic routing data, modernization status -- Expected output: Decommissioning checklist and timeline + ## Legacy Assessment + [Insert full contents of .legacy-modernize/01-legacy-assessment.md] -### 2. Documentation and Knowledge Transfer + ## Deliverables + 1. Internal module dependencies + 2. External service integrations + 3. Shared database schemas and cross-system data flows + 4. Integration points requiring facade patterns or adapter layers during migration + 5. Circular dependencies and tight coupling that need resolution -- Use Task tool with subagent_type="documentation-generation::docs-architect" -- Prompt: "Create comprehensive modernization documentation including: architectural diagrams (before/after), API documentation with migration guides, runbooks for dual-system operation, troubleshooting guides for common issues, and lessons learned report. Generate developer onboarding guide for modernized system. Document technical decisions and trade-offs made during migration." -- Context from previous: All migration artifacts and decisions -- Expected output: Complete modernization documentation package + Write your complete dependency analysis as a single markdown document. +``` -## Configuration Options +Save the agent's output to `.legacy-modernize/02-dependency-map.md`. 
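Deliverable 5 above (circular dependencies) is mechanical enough to sketch: a DFS with a recursion stack finds back edges in the module graph. The example graph is illustrative; in practice the edges come from the dependency map the agent produces:

```python
# Illustrative module graph -- real edges come from 02-dependency-map.md.
deps = {
    "billing": ["orders", "auth"],
    "orders": ["inventory"],
    "inventory": ["billing"],  # cycle: billing -> orders -> inventory -> billing
    "auth": [],
}

def find_cycles(graph):
    """Return cycles found via DFS using a recursion-stack (visiting) set."""
    cycles, visiting, done = [], set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, []):
            if dep in visiting:  # back edge onto the current path -> cycle
                cycles.append(path[path.index(dep):] + [dep])
            elif dep not in done:
                dfs(dep, path)
        path.pop()
        visiting.discard(node)
        done.add(node)

    for node in graph:
        if node not in done:
            dfs(node, [])
    return cycles

print(find_cycles(deps))  # [['billing', 'orders', 'inventory', 'billing']]
```

Each reported cycle is a coupling that must be broken (usually with an interface or adapter) before its members can migrate independently.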
-- **--parallel-systems**: Keep both systems running indefinitely (for gradual migration) -- **--big-bang**: Full cutover after validation (higher risk, faster completion) -- **--by-feature**: Migrate complete features rather than technical components -- **--database-first**: Prioritize database modernization before application layer -- **--api-first**: Modernize API layer while maintaining legacy backend +Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`. + +### Step 3: Business Impact and Risk Assessment + +Read `.legacy-modernize/01-legacy-assessment.md` and `.legacy-modernize/02-dependency-map.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Evaluate business impact and create migration roadmap" + prompt: | + You are a business analyst specializing in technology transformation and risk assessment. + + Evaluate the business impact of modernizing each component identified in the assessment and dependency analysis below. + + ## Legacy Assessment + [Insert contents of .legacy-modernize/01-legacy-assessment.md] + + ## Dependency Map + [Insert contents of .legacy-modernize/02-dependency-map.md] + + ## Deliverables + 1. Risk assessment matrix considering: business criticality (revenue impact), user traffic patterns, data sensitivity, regulatory requirements, and fallback complexity + 2. Prioritized components using weighted scoring: (Business Value x 0.4) + (Technical Risk x 0.3) + (Quick Win Potential x 0.3) + 3. Rollback strategies for each component + 4. Recommended migration order + + Write your complete business impact analysis as a single markdown document. +``` + +Save the agent's output to `.legacy-modernize/03-business-impact.md`. + +Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 1 — User Approval Required + +You MUST stop here and present the assessment for review. 
+ +Display a summary of findings from the Phase 1 output files (key components, risk levels, recommended migration order) and ask: + +``` +Legacy assessment and risk analysis complete. Please review: +- .legacy-modernize/01-legacy-assessment.md +- .legacy-modernize/02-dependency-map.md +- .legacy-modernize/03-business-impact.md + +1. Approve — proceed to test coverage establishment +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop. + +--- + +## Phase 2: Test Coverage Establishment (Steps 4–6) + +### Step 4: Legacy Code Test Coverage Analysis + +Read `.legacy-modernize/01-legacy-assessment.md` and `.legacy-modernize/03-business-impact.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Analyze and establish test coverage for legacy components" + prompt: | + You are a test automation engineer specializing in legacy system characterization testing. + + Analyze existing test coverage for legacy components at $TARGET. + + ## Legacy Assessment + [Insert contents of .legacy-modernize/01-legacy-assessment.md] + + ## Migration Priorities + [Insert contents of .legacy-modernize/03-business-impact.md] + + ## Instructions + 1. Use coverage tools to identify untested code paths, missing integration tests, and absent end-to-end scenarios + 2. For components with <40% coverage, generate characterization tests that capture current behavior without modifying functionality + 3. Create a test harness for safe refactoring + 4. Follow existing test patterns and frameworks in the project + + Write all test files and report what was created. Provide a coverage summary. +``` + +Save the agent's output to `.legacy-modernize/04-test-coverage.md`. 
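The characterization tests in instruction 2 above follow the golden-master pattern: record the legacy system's current outputs once, then fail on any divergence. A sketch, where `legacy_price` and the golden-file name are hypothetical stand-ins for whatever behavior the agent pins down:

```python
import json
import pathlib

def legacy_price(quantity, tier):
    """Hypothetical legacy function whose behavior we must preserve as-is."""
    base = 9.99 * quantity
    return round(base * (0.9 if tier == "gold" else 1.0), 2)

GOLDEN = pathlib.Path("golden_price.json")

def record_or_verify(cases):
    """First run records current behavior; later runs fail on any divergence."""
    actual = {f"{q},{t}": legacy_price(q, t) for q, t in cases}
    if not GOLDEN.exists():
        GOLDEN.write_text(json.dumps(actual, indent=2))
        return "recorded"
    expected = json.loads(GOLDEN.read_text())
    mismatches = {k: (expected[k], v) for k, v in actual.items() if expected.get(k) != v}
    assert not mismatches, f"behavior changed: {mismatches}"
    return "verified"

cases = [(1, "gold"), (3, "standard"), (10, "gold")]
print(record_or_verify(cases))  # "recorded" on first run, "verified" thereafter
```

Crucially, the golden file captures behavior *as it is* (bugs included) so refactoring can be verified against it; intentional behavior changes get a deliberate re-record.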
+ +Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`. + +### Step 5: Contract Testing Implementation + +Read `.legacy-modernize/02-dependency-map.md` and `.legacy-modernize/04-test-coverage.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Implement contract tests for integration points" + prompt: | + You are a test automation engineer specializing in contract testing and API verification. + + Implement contract tests for all integration points identified in the dependency mapping. + + ## Dependency Map + [Insert contents of .legacy-modernize/02-dependency-map.md] + + ## Existing Test Coverage + [Insert contents of .legacy-modernize/04-test-coverage.md] + + ## Instructions + 1. Create consumer-driven contracts for APIs, message queue interactions, and database schemas + 2. Set up contract verification in CI/CD pipeline + 3. Generate performance baselines for response times and throughput to validate modernized components maintain SLAs + 4. Follow existing test patterns and frameworks in the project + + Write all test files and report what was created. +``` + +Save the agent's output to `.legacy-modernize/05-contract-tests.md`. + +Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`. + +### Step 6: Test Data Management Strategy + +Read `.legacy-modernize/02-dependency-map.md` and `.legacy-modernize/04-test-coverage.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Design test data management for parallel system operation" + prompt: | + You are a data engineer specializing in test data management and data pipeline design. + + Design a test data management strategy for parallel system operation during migration. 
+ + ## Dependency Map + [Insert contents of .legacy-modernize/02-dependency-map.md] + + ## Test Coverage + [Insert contents of .legacy-modernize/04-test-coverage.md] + + ## Instructions + 1. Create data generation scripts for edge cases + 2. Implement data masking for sensitive information + 3. Establish test database refresh procedures + 4. Set up monitoring for data consistency between legacy and modernized components during migration + + Write all configuration and script files. Report what was created. +``` + +Save the agent's output to `.legacy-modernize/06-test-data.md`. + +Update `state.json`: set `current_step` to "checkpoint-2", add step 6 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 2 — User Approval Required + +Display a summary of test coverage establishment from Phase 2 output files and ask: + +``` +Test coverage establishment complete. Please review: +- .legacy-modernize/04-test-coverage.md +- .legacy-modernize/05-contract-tests.md +- .legacy-modernize/06-test-data.md + +1. Approve — proceed to incremental migration implementation +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 3 until the user approves. + +--- + +## Phase 3: Incremental Migration Implementation (Steps 7–9) + +### Step 7: Strangler Fig Infrastructure Setup + +Read `.legacy-modernize/02-dependency-map.md` and `.legacy-modernize/03-business-impact.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Implement strangler fig infrastructure with API gateway and feature flags" + prompt: | + You are a backend architect specializing in distributed systems and migration infrastructure. + + Implement strangler fig infrastructure for the legacy modernization. 
+ + ## Dependency Map + [Insert contents of .legacy-modernize/02-dependency-map.md] + + ## Migration Priorities + [Insert contents of .legacy-modernize/03-business-impact.md] + + ## Instructions + 1. Configure API gateway for traffic routing between legacy and modern components + 2. Set up feature flags for gradual rollout using environment variables or feature management service + 3. Implement proxy layer with request routing rules based on URL patterns, headers, or user segments + 4. Implement circuit breakers and fallback mechanisms for resilience + 5. Create observability dashboard for dual-system monitoring + 6. Follow existing infrastructure patterns in the project + + Write all configuration files. Report what was created/modified. +``` + +Save the agent's output to `.legacy-modernize/07-infrastructure.md`. + +Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`. + +### Step 8: Component Modernization — First Wave + +Read `.legacy-modernize/01-legacy-assessment.md`, `.legacy-modernize/03-business-impact.md`, `.legacy-modernize/04-test-coverage.md`, and `.legacy-modernize/07-infrastructure.md`. + +Detect the target language/stack from the legacy assessment. Use the Task tool with subagent_type="general-purpose", providing role context matching the target stack: + +``` +Task: + subagent_type: "general-purpose" + description: "Modernize first-wave components from legacy assessment" + prompt: | + You are an expert [DETECTED LANGUAGE] developer specializing in legacy code modernization + and migration to modern frameworks and patterns. + + Modernize first-wave components (quick wins identified in assessment). 
+ + ## Legacy Assessment + [Insert contents of .legacy-modernize/01-legacy-assessment.md] + + ## Migration Priorities + [Insert contents of .legacy-modernize/03-business-impact.md] + + ## Test Coverage + [Insert contents of .legacy-modernize/04-test-coverage.md] + + ## Infrastructure + [Insert contents of .legacy-modernize/07-infrastructure.md] + + ## Instructions + For each component in the first wave: + 1. Extract business logic from legacy code + 2. Implement using modern patterns (dependency injection, SOLID principles) + 3. Ensure backward compatibility through adapter patterns + 4. Maintain data consistency with event sourcing or dual writes + 5. Follow 12-factor app principles + 6. Run characterization tests to verify preserved behavior + + Write all code files. Report what files were created/modified. +``` + +**Note:** Replace `[DETECTED LANGUAGE]` with the actual language detected from the legacy assessment (e.g., "Python", "TypeScript", "Go", "Rust", "Java"). If the codebase is polyglot, launch parallel agents for each language. + +Save the agent's output to `.legacy-modernize/08-first-wave.md`. + +Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`. + +### Step 9: Security Hardening + +Read `.legacy-modernize/08-first-wave.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Security audit and hardening of modernized components" + prompt: | + You are a security engineer specializing in application security auditing, + OWASP compliance, and secure coding practices. + + Audit modernized components for security vulnerabilities and implement hardening. + + ## Modernized Components + [Insert contents of .legacy-modernize/08-first-wave.md] + + ## Instructions + 1. Implement OAuth 2.0/JWT authentication where applicable + 2. Add role-based access control + 3. Implement input validation and sanitization + 4. 
Verify SQL injection prevention and XSS protection + 5. Configure secrets management + 6. Verify OWASP Top 10 compliance + 7. Configure security headers and implement rate limiting + + Provide a security audit report with findings by severity (Critical/High/Medium/Low) + and list all hardening changes made. Write all code changes. +``` + +Save the agent's output to `.legacy-modernize/09-security.md`. + +Update `state.json`: set `current_step` to "checkpoint-3", add step 9 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 3 — User Approval Required + +Display a summary of migration implementation from Phase 3 output files and ask: + +``` +Incremental migration implementation complete. Please review: +- .legacy-modernize/07-infrastructure.md +- .legacy-modernize/08-first-wave.md +- .legacy-modernize/09-security.md + +Security findings: [summarize Critical/High/Medium counts from 09-security.md] + +1. Approve — proceed to performance validation +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 4 until the user approves. + +--- + +## Phase 4: Performance Validation and Rollout (Steps 10–11) + +### Step 10: Performance Testing and Optimization + +Read `.legacy-modernize/05-contract-tests.md` and `.legacy-modernize/08-first-wave.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Performance testing of modernized vs legacy components" + prompt: | + You are a performance engineer specializing in load testing, benchmarking, + and application performance optimization. + + Conduct performance testing comparing legacy vs modernized components. + + ## Contract Tests and Baselines + [Insert contents of .legacy-modernize/05-contract-tests.md] + + ## Modernized Components + [Insert contents of .legacy-modernize/08-first-wave.md] + + ## Instructions + 1. Run load tests simulating production traffic patterns + 2. 
Measure response times, throughput, and resource utilization + 3. Identify performance regressions and optimize: database queries with indexing, caching strategies, connection pooling, and async processing + 4. Validate against SLA requirements (P95 latency within 110% of baseline) + + Provide performance test results with comparison tables and optimization recommendations. +``` + +Save the agent's output to `.legacy-modernize/10-performance.md`. + +Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`. + +### Step 11: Progressive Rollout Plan + +Read `.legacy-modernize/07-infrastructure.md` and `.legacy-modernize/10-performance.md`. + +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Create progressive rollout strategy with automated safeguards" + prompt: | + You are a deployment engineer specializing in progressive delivery, + feature flag management, and production rollout strategies. + + Implement a progressive rollout strategy for the modernized components. + + ## Infrastructure + [Insert contents of .legacy-modernize/07-infrastructure.md] + + ## Performance Results + [Insert contents of .legacy-modernize/10-performance.md] + + ## Instructions + 1. Configure feature flags for traffic shifting: 5% -> 25% -> 50% -> 100% + 2. Define automatic rollback triggers: error rate >1%, latency >2x baseline, or business metric degradation + 3. Set 24-hour observation periods between each stage + 4. Create runbook for the complete traffic shifting process + 5. Include monitoring queries and dashboards for each stage + + Write all configuration files and the rollout runbook. +``` + +Save the agent's output to `.legacy-modernize/11-rollout.md`. + +Update `state.json`: set `current_step` to "checkpoint-4", add step 11 to `completed_steps`. 
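+The automatic rollback triggers defined in step 11 (error rate >1%, latency >2x baseline) can be sketched as a simple threshold check. This is an illustrative sketch only: the `StageMetrics` type and its field names are assumptions, not part of any real monitoring API or of this plugin.
+
+```python
+# Hypothetical sketch of the rollout plan's automatic rollback triggers:
+# roll back if error rate exceeds 1% or P95 latency exceeds 2x the
+# legacy baseline. Metric names and StageMetrics are illustrative.
+from dataclasses import dataclass
+
+@dataclass
+class StageMetrics:
+    error_rate: float        # fraction of failed requests, e.g. 0.004 = 0.4%
+    p95_latency_ms: float    # observed P95 latency at this rollout stage
+    baseline_p95_ms: float   # legacy-system baseline P95 latency
+
+def should_rollback(m: StageMetrics) -> bool:
+    """Return True if any automatic rollback trigger fires."""
+    if m.error_rate > 0.01:                       # error rate > 1%
+        return True
+    if m.p95_latency_ms > 2 * m.baseline_p95_ms:  # latency > 2x baseline
+        return True
+    return False
+
+# A healthy 25% stage passes; a degraded stage trips a trigger.
+healthy = StageMetrics(error_rate=0.004, p95_latency_ms=180, baseline_p95_ms=150)
+degraded = StageMetrics(error_rate=0.030, p95_latency_ms=160, baseline_p95_ms=150)
+```
+
+In a real rollout these thresholds would be evaluated continuously by the monitoring stack configured in step 7, not by ad-hoc code; the sketch only shows the decision logic the runbook should encode.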
+ +--- + +## PHASE CHECKPOINT 4 — User Approval Required + +Display a summary of performance and rollout plans and ask: + +``` +Performance validation and rollout planning complete. Please review: +- .legacy-modernize/10-performance.md +- .legacy-modernize/11-rollout.md + +Performance: [summarize key metrics from 10-performance.md] + +1. Approve — proceed to decommissioning and documentation +2. Request changes — tell me what to adjust +3. Pause — save progress and stop here +``` + +Do NOT proceed to Phase 5 until the user approves. + +--- + +## Phase 5: Migration Completion and Documentation (Steps 12–13) + +### Step 12: Legacy Component Decommissioning + +Read `.legacy-modernize/01-legacy-assessment.md`, `.legacy-modernize/08-first-wave.md`, and `.legacy-modernize/11-rollout.md`. + +Use the Task tool with subagent_type="legacy-modernizer": + +``` +Task: + subagent_type: "legacy-modernizer" + description: "Plan safe decommissioning of replaced legacy components" + prompt: | + Plan safe decommissioning of replaced legacy components. + + ## Legacy Assessment + [Insert contents of .legacy-modernize/01-legacy-assessment.md] + + ## Modernized Components + [Insert contents of .legacy-modernize/08-first-wave.md] + + ## Rollout Status + [Insert contents of .legacy-modernize/11-rollout.md] + + ## Instructions + 1. Verify no remaining dependencies through traffic analysis (minimum 30 days at 0% traffic) + 2. Archive legacy code with documentation of original functionality + 3. Update CI/CD pipelines to remove legacy builds + 4. Clean up unused database tables and remove deprecated API endpoints + 5. Document any retained legacy components with sunset timeline + + Provide a decommissioning checklist and timeline. +``` + +Save the agent's output to `.legacy-modernize/12-decommission.md`. + +Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`. + +### Step 13: Documentation and Knowledge Transfer + +Read all previous `.legacy-modernize/*.md` files. 
+ +Use the Task tool with subagent_type="general-purpose": + +``` +Task: + subagent_type: "general-purpose" + description: "Create comprehensive modernization documentation package" + prompt: | + You are a technical writer specializing in system migration documentation + and developer knowledge transfer materials. + + Create comprehensive modernization documentation. + + ## All Migration Artifacts + [Insert contents of all .legacy-modernize/*.md files] + + ## Instructions + 1. Create architectural diagrams (before/after) + 2. Write API documentation with migration guides + 3. Create runbooks for dual-system operation + 4. Write troubleshooting guides for common issues + 5. Create a lessons learned report + 6. Generate developer onboarding guide for the modernized system + 7. Document technical decisions and trade-offs made during migration + + Write all documentation files. Report what was created. +``` + +Save the agent's output to `.legacy-modernize/13-documentation.md`. + +Update `state.json`: set `current_step` to "complete", add step 13 to `completed_steps`. 
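+The recurring "Update `state.json`" instruction in the steps above can be sketched as a single read-modify-write helper. The helper itself is an illustrative assumption, not shipped with the plugin; the field names follow the session state used throughout this command.
+
+```python
+# Hypothetical helper for the recurring "update state.json" steps:
+# read the session state, record the completed step, advance the
+# current step, and stamp the update time. The advance_step function
+# and its signature are illustrative assumptions.
+import json
+from datetime import datetime, timezone
+from pathlib import Path
+
+def advance_step(state_path: str, next_step, completed_step) -> dict:
+    """Mark `completed_step` done and move `current_step` to `next_step`.
+
+    `next_step` may be an int or a string like "checkpoint-3" / "complete".
+    """
+    path = Path(state_path)
+    state = json.loads(path.read_text())
+    if completed_step not in state["completed_steps"]:
+        state["completed_steps"].append(completed_step)
+    state["current_step"] = next_step
+    state["last_updated"] = datetime.now(timezone.utc).isoformat()
+    path.write_text(json.dumps(state, indent=2))
+    return state
+```
+
+Writing state to disk after every step is what makes the resume path in the pre-flight checks possible: a fresh session can read `current_step` and continue without relying on context-window memory.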
+ +--- + +## Completion + +Update `state.json`: + +- Set `status` to `"complete"` +- Set `last_updated` to current timestamp + +Present the final summary: + +``` +Legacy modernization complete: $TARGET + +## Session Files +- .legacy-modernize/01-legacy-assessment.md — Legacy system analysis +- .legacy-modernize/02-dependency-map.md — Dependency and integration mapping +- .legacy-modernize/03-business-impact.md — Business impact and risk assessment +- .legacy-modernize/04-test-coverage.md — Test coverage analysis +- .legacy-modernize/05-contract-tests.md — Contract tests and baselines +- .legacy-modernize/06-test-data.md — Test data management strategy +- .legacy-modernize/07-infrastructure.md — Strangler fig infrastructure +- .legacy-modernize/08-first-wave.md — First wave component modernization +- .legacy-modernize/09-security.md — Security audit and hardening +- .legacy-modernize/10-performance.md — Performance testing results +- .legacy-modernize/11-rollout.md — Progressive rollout plan +- .legacy-modernize/12-decommission.md — Decommissioning checklist +- .legacy-modernize/13-documentation.md — Documentation package ## Success Criteria - - All high-priority components modernized with >80% test coverage - Zero unplanned downtime during migration -- Performance metrics maintained or improved (P95 latency within 110% of baseline) +- Performance metrics maintained (P95 latency within 110% of baseline) - Security vulnerabilities reduced by >90% - Technical debt score improved by >60% -- Successful operation for 30 days post-migration without rollbacks -- Complete documentation enabling new developer onboarding in <1 week -Target: $ARGUMENTS +## Next Steps +1. Review all generated code, tests, and documentation +2. Execute the progressive rollout plan in .legacy-modernize/11-rollout.md +3. Monitor for 30 days post-migration per .legacy-modernize/12-decommission.md +4. 
Complete decommissioning after observation period +``` diff --git a/plugins/full-stack-orchestration/.claude-plugin/plugin.json b/plugins/full-stack-orchestration/.claude-plugin/plugin.json index 51028b4..4ad00d5 100644 --- a/plugins/full-stack-orchestration/.claude-plugin/plugin.json +++ b/plugins/full-stack-orchestration/.claude-plugin/plugin.json @@ -1,6 +1,6 @@ { "name": "full-stack-orchestration", - "version": "1.2.1", + "version": "1.3.0", "description": "End-to-end feature orchestration with testing, security, performance, and deployment", "author": { "name": "Seth Hobson", diff --git a/plugins/full-stack-orchestration/commands/full-stack-feature.md b/plugins/full-stack-orchestration/commands/full-stack-feature.md index 6b6129b..8f3c865 100644 --- a/plugins/full-stack-orchestration/commands/full-stack-feature.md +++ b/plugins/full-stack-orchestration/commands/full-stack-feature.md @@ -1,128 +1,593 @@ -Orchestrate full-stack feature development across backend, frontend, and infrastructure layers with modern API-first approach: +--- +description: "Orchestrate end-to-end full-stack feature development across backend, frontend, database, and infrastructure layers" +argument-hint: " [--stack react/fastapi/postgres] [--api-style rest|graphql] [--complexity simple|medium|complex]" +--- -[Extended thinking: This workflow coordinates multiple specialized agents to deliver a complete full-stack feature from architecture through deployment. It follows API-first development principles, ensuring contract-driven development where the API specification drives both backend implementation and frontend consumption. Each phase builds upon previous outputs, creating a cohesive system with proper separation of concerns, comprehensive testing, and production-ready deployment. The workflow emphasizes modern practices like component-driven UI development, feature flags, observability, and progressive rollout strategies.] 
+# Full-Stack Feature Orchestrator -## Phase 1: Architecture & Design Foundation +## CRITICAL BEHAVIORAL RULES -### 1. Database Architecture Design +You MUST follow these rules exactly. Violating any of them is a failure. -- Use Task tool with subagent_type="database-design::database-architect" -- Prompt: "Design database schema and data models for: $ARGUMENTS. Consider scalability, query patterns, indexing strategy, and data consistency requirements. Include migration strategy if modifying existing schema. Provide both logical and physical data models." -- Expected output: Entity relationship diagrams, table schemas, indexing strategy, migration scripts, data access patterns -- Context: Initial requirements and business domain model +1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps. +2. **Write output files.** Each step MUST produce its output file in `.full-stack-feature/` before the next step begins. Read from prior step files -- do NOT rely on context window memory. +3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options. +4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue. +5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies. +6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan -- execute it. -### 2. Backend Service Architecture +## Pre-flight Checks -- Use Task tool with subagent_type="backend-development::backend-architect" -- Prompt: "Design backend service architecture for: $ARGUMENTS. 
Using the database design from previous step, create service boundaries, define API contracts (OpenAPI/GraphQL), design authentication/authorization strategy, and specify inter-service communication patterns. Include resilience patterns (circuit breakers, retries) and caching strategy." -- Expected output: Service architecture diagram, OpenAPI specifications, authentication flows, caching architecture, message queue design (if applicable) -- Context: Database schema from step 1, non-functional requirements +Before starting, perform these checks: -### 3. Frontend Component Architecture +### 1. Check for existing session -- Use Task tool with subagent_type="frontend-mobile-development::frontend-developer" -- Prompt: "Design frontend architecture and component structure for: $ARGUMENTS. Based on the API contracts from previous step, design component hierarchy, state management approach (Redux/Zustand/Context), routing structure, and data fetching patterns. Include accessibility requirements and responsive design strategy. Plan for Storybook component documentation." -- Expected output: Component tree diagram, state management design, routing configuration, design system integration plan, accessibility checklist -- Context: API specifications from step 2, UI/UX requirements +Check if `.full-stack-feature/state.json` exists: -## Phase 2: Parallel Implementation +- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user: -### 4. Backend Service Implementation + ``` + Found an in-progress full-stack feature session: + Feature: [name from state] + Current step: [step from state] -- Use Task tool with subagent_type="python-development::python-pro" (or "golang-pro"/"nodejs-expert" based on stack) -- Prompt: "Implement backend services for: $ARGUMENTS. Using the architecture and API specs from Phase 1, build RESTful/GraphQL endpoints with proper validation, error handling, and logging. 
Implement business logic, data access layer, authentication middleware, and integration with external services. Include observability (structured logging, metrics, tracing)." -- Expected output: Backend service code, API endpoints, middleware, background jobs, unit tests, integration tests -- Context: Architecture designs from Phase 1, database schema + 1. Resume from where we left off + 2. Start fresh (archives existing session) + ``` -### 5. Frontend Implementation +- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh. -- Use Task tool with subagent_type="frontend-mobile-development::frontend-developer" -- Prompt: "Implement frontend application for: $ARGUMENTS. Build React/Next.js components using the component architecture from Phase 1. Implement state management, API integration with proper error handling and loading states, form validation, and responsive layouts. Create Storybook stories for components. Ensure accessibility (WCAG 2.1 AA compliance)." -- Expected output: React components, state management implementation, API client code, Storybook stories, responsive styles, accessibility implementations -- Context: Component architecture from step 3, API contracts +### 2. Initialize state -### 6. Database Implementation & Optimization +Create `.full-stack-feature/` directory and `state.json`: -- Use Task tool with subagent_type="database-design::sql-pro" -- Prompt: "Implement and optimize database layer for: $ARGUMENTS. Create migration scripts, stored procedures (if needed), optimize queries identified by backend implementation, set up proper indexes, and implement data validation constraints. Include database-level security measures and backup strategies." 
-- Expected output: Migration scripts, optimized queries, stored procedures, index definitions, database security configuration -- Context: Database design from step 1, query patterns from backend implementation +```json +{ + "feature": "$ARGUMENTS", + "status": "in_progress", + "stack": "auto-detect", + "api_style": "rest", + "complexity": "medium", + "current_step": 1, + "current_phase": 1, + "completed_steps": [], + "files_created": [], + "started_at": "ISO_TIMESTAMP", + "last_updated": "ISO_TIMESTAMP" +} +``` -## Phase 3: Integration & Testing +Parse `$ARGUMENTS` for `--stack`, `--api-style`, and `--complexity` flags. Use defaults if not specified. -### 7. API Contract Testing +### 3. Parse feature description -- Use Task tool with subagent_type="test-automator" -- Prompt: "Create contract tests for: $ARGUMENTS. Implement Pact/Dredd tests to validate API contracts between backend and frontend. Create integration tests for all API endpoints, test authentication flows, validate error responses, and ensure proper CORS configuration. Include load testing scenarios." -- Expected output: Contract test suites, integration tests, load test scenarios, API documentation validation -- Context: API implementations from Phase 2 +Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below. -### 8. End-to-End Testing +--- -- Use Task tool with subagent_type="test-automator" -- Prompt: "Implement E2E tests for: $ARGUMENTS. Create Playwright/Cypress tests covering critical user journeys, cross-browser compatibility, mobile responsiveness, and error scenarios. Test feature flags integration, analytics tracking, and performance metrics. Include visual regression tests." 
-- Expected output: E2E test suites, visual regression baselines, performance benchmarks, test reports -- Context: Frontend and backend implementations from Phase 2 +## Phase 1: Architecture & Design Foundation (Steps 1-3) -- Interactive -### 9. Security Audit & Hardening +### Step 1: Requirements Gathering -- Use Task tool with subagent_type="security-auditor" -- Prompt: "Perform security audit for: $ARGUMENTS. Review API security (authentication, authorization, rate limiting), check for OWASP Top 10 vulnerabilities, audit frontend for XSS/CSRF risks, validate input sanitization, and review secrets management. Provide penetration testing results and remediation steps." -- Expected output: Security audit report, vulnerability assessment, remediation recommendations, security headers configuration -- Context: All implementations from Phase 2 +Gather requirements through interactive Q&A. Ask ONE question at a time using the AskUserQuestion tool. Do NOT ask all questions at once. -## Phase 4: Deployment & Operations +**Questions to ask (in order):** -### 10. Infrastructure & CI/CD Setup +1. **Problem Statement**: "What problem does this feature solve? Who is the user and what's their pain point?" +2. **Acceptance Criteria**: "What are the key acceptance criteria? When is this feature 'done'?" +3. **Scope Boundaries**: "What is explicitly OUT of scope for this feature?" +4. **Technical Constraints**: "Any technical constraints? (e.g., existing API conventions, specific DB, latency requirements, auth system)" +5. **Stack Confirmation**: "Confirm the tech stack -- detected [stack] from project. Frontend framework? Backend framework? Database? Any changes?" +6. **Dependencies**: "Does this feature depend on or affect other features/services?" -- Use Task tool with subagent_type="deployment-engineer" -- Prompt: "Setup deployment infrastructure for: $ARGUMENTS. 
Create Docker containers, Kubernetes manifests (or cloud-specific configs), implement CI/CD pipelines with automated testing gates, setup feature flags (LaunchDarkly/Unleash), and configure monitoring/alerting. Include blue-green deployment strategy and rollback procedures." -- Expected output: Dockerfiles, K8s manifests, CI/CD pipeline configs, feature flag setup, IaC templates (Terraform/CloudFormation) -- Context: All implementations and tests from previous phases +After gathering answers, write the requirements document: -### 11. Observability & Monitoring +**Output file:** `.full-stack-feature/01-requirements.md` -- Use Task tool with subagent_type="deployment-engineer" -- Prompt: "Implement observability stack for: $ARGUMENTS. Setup distributed tracing (OpenTelemetry), configure application metrics (Prometheus/DataDog), implement centralized logging (ELK/Splunk), create dashboards for key metrics, and define SLIs/SLOs. Include alerting rules and on-call procedures." -- Expected output: Observability configuration, dashboard definitions, alert rules, runbooks, SLI/SLO definitions -- Context: Infrastructure setup from step 10 +```markdown +# Requirements: $FEATURE -### 12. Performance Optimization +## Problem Statement -- Use Task tool with subagent_type="performance-engineer" -- Prompt: "Optimize performance across stack for: $ARGUMENTS. Analyze and optimize database queries, implement caching strategies (Redis/CDN), optimize frontend bundle size and loading performance, setup lazy loading and code splitting, and tune backend service performance. Include before/after metrics." 
-- Expected output: Performance improvements, caching configuration, CDN setup, optimized bundles, performance metrics report -- Context: Monitoring data from step 11, load test results +[From Q1] -## Configuration Options +## Acceptance Criteria -- `stack`: Specify technology stack (e.g., "React/FastAPI/PostgreSQL", "Next.js/Django/MongoDB") -- `deployment_target`: Cloud platform (AWS/GCP/Azure) or on-premises -- `feature_flags`: Enable/disable feature flag integration -- `api_style`: REST or GraphQL -- `testing_depth`: Comprehensive or essential -- `compliance`: Specific compliance requirements (GDPR, HIPAA, SOC2) +[From Q2 -- formatted as checkboxes] -## Success Criteria +## Scope -- All API contracts validated through contract tests -- Frontend and backend integration tests passing -- E2E tests covering critical user journeys -- Security audit passed with no critical vulnerabilities -- Performance metrics meeting defined SLOs -- Observability stack capturing all key metrics -- Feature flags configured for progressive rollout -- Documentation complete for all components -- CI/CD pipeline with automated quality gates -- Zero-downtime deployment capability verified +### In Scope -## Coordination Notes +[Derived from answers] -- Each phase builds upon outputs from previous phases -- Parallel tasks in Phase 2 can run simultaneously but must converge for Phase 3 -- Maintain traceability between requirements and implementations -- Use correlation IDs across all services for distributed tracing -- Document all architectural decisions in ADRs -- Ensure consistent error handling and API responses across services +### Out of Scope -Feature to implement: $ARGUMENTS +[From Q3] + +## Technical Constraints + +[From Q4] + +## Technology Stack + +[From Q5 -- frontend, backend, database, infrastructure] + +## Dependencies + +[From Q6] + +## Configuration + +- Stack: [detected or specified] +- API Style: [rest|graphql] +- Complexity: [simple|medium|complex] +``` + +Update 
`state.json`: set `current_step` to 2, add `"01-requirements.md"` to `files_created`, add step 1 to `completed_steps`. + +### Step 2: Database & Data Model Design + +Read `.full-stack-feature/01-requirements.md` to load requirements context. + +Use the Task tool to launch a database architecture agent: + +``` +Task: + subagent_type: "general-purpose" + description: "Design database schema and data models for $FEATURE" + prompt: | + You are a database architect. Design the database schema and data models for this feature. + + ## Requirements + [Insert full contents of .full-stack-feature/01-requirements.md] + + ## Deliverables + 1. **Entity relationship design**: Tables/collections, relationships, cardinality + 2. **Schema definitions**: Column types, constraints, defaults, nullable fields + 3. **Indexing strategy**: Which columns to index, index types, composite indexes + 4. **Migration strategy**: How to safely add/modify schema in production + 5. **Query patterns**: Expected read/write patterns and how the schema supports them + 6. **Data access patterns**: Repository/DAO interface design + + Write your complete database design as a single markdown document. +``` + +Save the agent's output to `.full-stack-feature/02-database-design.md`. + +Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`. + +### Step 3: Backend & Frontend Architecture + +Read `.full-stack-feature/01-requirements.md` and `.full-stack-feature/02-database-design.md`. + +Use the Task tool to launch an architecture agent: + +``` +Task: + subagent_type: "general-purpose" + description: "Design full-stack architecture for $FEATURE" + prompt: | + You are a full-stack architect. Design the complete backend and frontend architecture for this feature. + + ## Requirements + [Insert contents of .full-stack-feature/01-requirements.md] + + ## Database Design + [Insert contents of .full-stack-feature/02-database-design.md] + + ## Deliverables + + ### Backend Architecture + 1. 
**API design**: Endpoints/resolvers, request/response schemas, error handling, versioning + 2. **Service layer**: Business logic components, their responsibilities, boundaries + 3. **Authentication/authorization**: How auth applies to new endpoints + 4. **Integration points**: How this connects to existing services/systems + + ### Frontend Architecture + 1. **Component hierarchy**: Page components, containers, presentational components + 2. **State management**: What state is needed, where it lives, data flow + 3. **Routing**: New routes, navigation structure, route guards + 4. **API integration**: Data fetching strategy, caching, optimistic updates + + ### Cross-Cutting Concerns + 1. **Error handling**: Backend errors -> API responses -> frontend error states + 2. **Security considerations**: Input validation, XSS prevention, CSRF, data protection + 3. **Risk assessment**: Technical risks and mitigation strategies + + Write your complete architecture design as a single markdown document. +``` + +Save the agent's output to `.full-stack-feature/03-architecture.md`. + +Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 1 -- User Approval Required + +You MUST stop here and present the architecture for review. + +Display a summary of the database design and architecture from `.full-stack-feature/02-database-design.md` and `.full-stack-feature/03-architecture.md` (key components, API endpoints, data model overview, component structure) and ask: + +``` +Architecture and database design are complete. Please review: +- .full-stack-feature/02-database-design.md +- .full-stack-feature/03-architecture.md + +1. Approve -- proceed to implementation +2. Request changes -- tell me what to adjust +3. Pause -- save progress and stop here +``` + +Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` and stop. 
+ +--- + +## Phase 2: Implementation (Steps 4-7) + +### Step 4: Database Implementation + +Read `.full-stack-feature/01-requirements.md` and `.full-stack-feature/02-database-design.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Implement database layer for $FEATURE" + prompt: | + You are a database engineer. Implement the database layer for this feature. + + ## Requirements + [Insert contents of .full-stack-feature/01-requirements.md] + + ## Database Design + [Insert contents of .full-stack-feature/02-database-design.md] + + ## Instructions + 1. Create migration scripts for schema changes + 2. Implement models/entities matching the schema design + 3. Implement repository/data access layer with the designed query patterns + 4. Add database-level validation constraints + 5. Optimize queries with proper indexes as designed + 6. Follow the project's existing ORM and migration patterns + + Write all code files. Report what files were created/modified. +``` + +Save a summary to `.full-stack-feature/04-database-impl.md`. + +Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`. + +### Step 5: Backend Implementation + +Read `.full-stack-feature/01-requirements.md`, `.full-stack-feature/03-architecture.md`, and `.full-stack-feature/04-database-impl.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Implement backend services for $FEATURE" + prompt: | + You are a backend developer. Implement the backend services for this feature based on the approved architecture. + + ## Requirements + [Insert contents of .full-stack-feature/01-requirements.md] + + ## Architecture + [Insert contents of .full-stack-feature/03-architecture.md] + + ## Database Implementation + [Insert contents of .full-stack-feature/04-database-impl.md] + + ## Instructions + 1. Implement API endpoints/resolvers as designed in the architecture + 2. Implement business logic in the service layer + 3. 
Wire up the data access layer from the database implementation + 4. Add input validation, error handling, and proper HTTP status codes + 5. Implement authentication/authorization middleware as designed + 6. Add structured logging and observability hooks + 7. Follow the project's existing code patterns and conventions + + Write all code files. Report what files were created/modified. +``` + +Save a summary to `.full-stack-feature/05-backend-impl.md`. + +Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`. + +### Step 6: Frontend Implementation + +Read `.full-stack-feature/01-requirements.md`, `.full-stack-feature/03-architecture.md`, and `.full-stack-feature/05-backend-impl.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Implement frontend for $FEATURE" + prompt: | + You are a frontend developer. Implement the frontend components for this feature. + + ## Requirements + [Insert contents of .full-stack-feature/01-requirements.md] + + ## Architecture + [Insert contents of .full-stack-feature/03-architecture.md] + + ## Backend Implementation + [Insert contents of .full-stack-feature/05-backend-impl.md] + + ## Instructions + 1. Build UI components following the component hierarchy from the architecture + 2. Implement state management and data flow as designed + 3. Integrate with the backend API endpoints using the designed data fetching strategy + 4. Implement form handling, validation, and error states + 5. Add loading states and optimistic updates where appropriate + 6. Ensure responsive design and accessibility basics (semantic HTML, ARIA labels, keyboard nav) + 7. Follow the project's existing frontend patterns and component conventions + + Write all code files. Report what files were created/modified. +``` + +Save a summary to `.full-stack-feature/06-frontend-impl.md`. 
+ +**Note:** If the feature has no frontend component (pure backend/API), skip this step -- write a brief note in `06-frontend-impl.md` explaining why it was skipped, and continue. + +Update `state.json`: set `current_step` to 7, add step 6 to `completed_steps`. + +### Step 7: Testing & Validation + +Read `.full-stack-feature/04-database-impl.md`, `.full-stack-feature/05-backend-impl.md`, and `.full-stack-feature/06-frontend-impl.md`. + +Launch three agents in parallel using multiple Task tool calls in a single response: + +**7a. Test Suite Creation:** + +``` +Task: + subagent_type: "test-automator" + description: "Create test suite for $FEATURE" + prompt: | + Create a comprehensive test suite for this full-stack feature. + + ## What was implemented + ### Database + [Insert contents of .full-stack-feature/04-database-impl.md] + + ### Backend + [Insert contents of .full-stack-feature/05-backend-impl.md] + + ### Frontend + [Insert contents of .full-stack-feature/06-frontend-impl.md] + + ## Instructions + 1. Write unit tests for all new backend functions/methods + 2. Write integration tests for API endpoints + 3. Write database tests for migrations and query patterns + 4. Write frontend component tests if applicable + 5. Cover: happy path, edge cases, error handling, boundary conditions + 6. Follow existing test patterns and frameworks in the project + 7. Target 80%+ code coverage for new code + + Write all test files. Report what test files were created and what they cover. +``` + +**7b. Security Review:** + +``` +Task: + subagent_type: "security-auditor" + description: "Security review of $FEATURE" + prompt: | + Perform a security review of this full-stack feature implementation. 
+ + ## Architecture + [Insert contents of .full-stack-feature/03-architecture.md] + + ## Database Implementation + [Insert contents of .full-stack-feature/04-database-impl.md] + + ## Backend Implementation + [Insert contents of .full-stack-feature/05-backend-impl.md] + + ## Frontend Implementation + [Insert contents of .full-stack-feature/06-frontend-impl.md] + + Review for: OWASP Top 10, authentication/authorization flaws, input validation gaps, + SQL injection risks, XSS/CSRF vulnerabilities, data protection issues, dependency vulnerabilities, + and any security anti-patterns. + + Provide findings with severity, location, and specific fix recommendations. +``` + +**7c. Performance Review:** + +``` +Task: + subagent_type: "performance-engineer" + description: "Performance review of $FEATURE" + prompt: | + Review the performance of this full-stack feature implementation. + + ## Architecture + [Insert contents of .full-stack-feature/03-architecture.md] + + ## Database Implementation + [Insert contents of .full-stack-feature/04-database-impl.md] + + ## Backend Implementation + [Insert contents of .full-stack-feature/05-backend-impl.md] + + ## Frontend Implementation + [Insert contents of .full-stack-feature/06-frontend-impl.md] + + Review for: N+1 queries, missing indexes, unoptimized queries, memory leaks, + missing caching opportunities, large payloads, slow rendering paths, + bundle size concerns, unnecessary re-renders. + + Provide findings with impact estimates and specific optimization recommendations. 
+``` + +After all three complete, consolidate results into `.full-stack-feature/07-testing.md`: + +```markdown +# Testing & Validation: $FEATURE + +## Test Suite + +[Summary from 7a -- files created, coverage areas] + +## Security Findings + +[Summary from 7b -- findings by severity] + +## Performance Findings + +[Summary from 7c -- findings by impact] + +## Action Items + +[List any critical/high findings that need to be addressed before delivery] +``` + +If there are Critical or High severity findings from security or performance review, address them now before proceeding. Apply fixes and re-validate. + +Update `state.json`: set `current_step` to "checkpoint-2", add step 7 to `completed_steps`. + +--- + +## PHASE CHECKPOINT 2 -- User Approval Required + +Display a summary of testing and validation results from `.full-stack-feature/07-testing.md` and ask: + +``` +Testing and validation complete. Please review .full-stack-feature/07-testing.md + +Test coverage: [summary] +Security findings: [X critical, Y high, Z medium] +Performance findings: [X critical, Y high, Z medium] + +1. Approve -- proceed to deployment & documentation +2. Request changes -- tell me what to fix +3. Pause -- save progress and stop here +``` + +Do NOT proceed to Phase 3 until the user approves. + +--- + +## Phase 3: Delivery (Steps 8-9) + +### Step 8: Deployment & Infrastructure + +Read `.full-stack-feature/03-architecture.md` and `.full-stack-feature/07-testing.md`. + +Use the Task tool: + +``` +Task: + subagent_type: "deployment-engineer" + description: "Create deployment config for $FEATURE" + prompt: | + Create the deployment and infrastructure configuration for this full-stack feature. + + ## Architecture + [Insert contents of .full-stack-feature/03-architecture.md] + + ## Testing Results + [Insert contents of .full-stack-feature/07-testing.md] + + ## Instructions + 1. Create or update CI/CD pipeline configuration for the new code + 2. 
Add database migration steps to the deployment pipeline + 3. Add feature flag configuration if the feature should be gradually rolled out + 4. Define health checks and readiness probes for new services/endpoints + 5. Create monitoring alerts for key metrics (error rate, latency, throughput) + 6. Write a deployment runbook with rollback steps (including database rollback) + 7. Follow existing deployment patterns in the project + + Write all configuration files. Report what was created/modified. +``` + +Save output to `.full-stack-feature/08-deployment.md`. + +Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`. + +### Step 9: Documentation & Handoff + +Read all previous `.full-stack-feature/*.md` files. + +Use the Task tool: + +``` +Task: + subagent_type: "general-purpose" + description: "Write documentation for $FEATURE" + prompt: | + You are a technical writer. Create documentation for this full-stack feature. + + ## Feature Context + [Insert contents of .full-stack-feature/01-requirements.md] + + ## Architecture + [Insert contents of .full-stack-feature/03-architecture.md] + + ## Implementation Summary + ### Database: [Insert contents of .full-stack-feature/04-database-impl.md] + ### Backend: [Insert contents of .full-stack-feature/05-backend-impl.md] + ### Frontend: [Insert contents of .full-stack-feature/06-frontend-impl.md] + + ## Deployment + [Insert contents of .full-stack-feature/08-deployment.md] + + ## Instructions + 1. Write API documentation for new endpoints (request/response examples) + 2. Document the database schema changes and migration notes + 3. Update or create user-facing documentation if applicable + 4. Write a brief architecture decision record (ADR) explaining key design choices + 5. Create a handoff summary: what was built, how to test it, known limitations + + Write documentation files. Report what was created/modified. +``` + +Save output to `.full-stack-feature/09-documentation.md`. 
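Before the session is marked complete, a quick sanity check that every per-step artifact was actually written can be sketched as follows. The helper is hypothetical; the file names match the outputs this workflow produces in Steps 1 through 9:

```python
from pathlib import Path

# Per-step output files this workflow is expected to produce
ARTIFACTS = [
    "01-requirements.md", "02-database-design.md", "03-architecture.md",
    "04-database-impl.md", "05-backend-impl.md", "06-frontend-impl.md",
    "07-testing.md", "08-deployment.md", "09-documentation.md",
]

def missing_artifacts(base=".full-stack-feature"):
    """Return the artifact files that do not yet exist under `base`."""
    return [name for name in ARTIFACTS if not (Path(base) / name).exists()]
```

If `missing_artifacts()` returns a non-empty list, revisit the corresponding steps before setting `status` to `"complete"`.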
+ +Update `state.json`: set `current_step` to "complete", add step 9 to `completed_steps`. + +--- + +## Completion + +Update `state.json`: + +- Set `status` to `"complete"` +- Set `last_updated` to current timestamp + +Present the final summary: + +``` +Full-stack feature development complete: $FEATURE + +## Files Created +[List all .full-stack-feature/ output files] + +## Implementation Summary +- Requirements: .full-stack-feature/01-requirements.md +- Database Design: .full-stack-feature/02-database-design.md +- Architecture: .full-stack-feature/03-architecture.md +- Database Implementation: .full-stack-feature/04-database-impl.md +- Backend Implementation: .full-stack-feature/05-backend-impl.md +- Frontend Implementation: .full-stack-feature/06-frontend-impl.md +- Testing & Validation: .full-stack-feature/07-testing.md +- Deployment: .full-stack-feature/08-deployment.md +- Documentation: .full-stack-feature/09-documentation.md + +## Next Steps +1. Review all generated code and documentation +2. Run the full test suite to verify everything passes +3. Create a pull request with the implementation +4. 
Deploy using the runbook in .full-stack-feature/08-deployment.md +``` diff --git a/plugins/git-pr-workflows/.claude-plugin/plugin.json b/plugins/git-pr-workflows/.claude-plugin/plugin.json index 8954407..dddcdac 100644 --- a/plugins/git-pr-workflows/.claude-plugin/plugin.json +++ b/plugins/git-pr-workflows/.claude-plugin/plugin.json @@ -1,6 +1,6 @@ { "name": "git-pr-workflows", - "version": "1.2.1", + "version": "1.3.0", "description": "Git workflow automation, pull request enhancement, and team onboarding processes", "author": { "name": "Seth Hobson", diff --git a/plugins/git-pr-workflows/commands/git-workflow.md b/plugins/git-pr-workflows/commands/git-workflow.md index b2d6f39..bf46cd1 100644 --- a/plugins/git-pr-workflows/commands/git-workflow.md +++ b/plugins/git-pr-workflows/commands/git-workflow.md @@ -1,129 +1,598 @@ -# Complete Git Workflow with Multi-Agent Orchestration +--- +description: "Orchestrate git workflow from code review through PR creation with quality gates" +argument-hint: " [--skip-tests] [--draft-pr] [--no-push] [--squash] [--conventional] [--trunk-based]" +--- -Orchestrate a comprehensive git workflow from code review through PR creation, leveraging specialized agents for quality assurance, testing, and deployment readiness. This workflow implements modern git best practices including Conventional Commits, automated testing, and structured PR creation. +# Git Workflow Orchestrator -[Extended thinking: This workflow coordinates multiple specialized agents to ensure code quality before commits are made. The code-reviewer agent performs initial quality checks, test-automator ensures all tests pass, and deployment-engineer verifies production readiness. By orchestrating these agents sequentially with context passing, we prevent broken code from entering the repository while maintaining high velocity. The workflow supports both trunk-based and feature-branch strategies with configurable options for different team needs.] 
+## CRITICAL BEHAVIORAL RULES -## Configuration +You MUST follow these rules exactly. Violating any of them is a failure. -**Target branch**: $ARGUMENTS (defaults to 'main' if not specified) +1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps. +2. **Write output files.** Each step MUST produce its output file in `.git-workflow/` before the next step begins. Read from prior step files — do NOT rely on context window memory. +3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options. +4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue. +5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies. +6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it. -**Supported flags**: +## Pre-flight Checks -- `--skip-tests`: Skip automated test execution (use with caution) -- `--draft-pr`: Create PR as draft for work-in-progress -- `--no-push`: Perform all checks but don't push to remote -- `--squash`: Squash commits before pushing -- `--conventional`: Enforce Conventional Commits format strictly -- `--trunk-based`: Use trunk-based development workflow -- `--feature-branch`: Use feature branch workflow (default) +Before starting, perform these checks: -## Phase 1: Pre-Commit Review and Analysis +### 1. Check for existing session -### 1. Code Quality Assessment +Check if `.git-workflow/state.json` exists: -- Use Task tool with subagent_type="code-reviewer" -- Prompt: "Review all uncommitted changes for code quality issues. 
Check for: 1) Code style violations, 2) Security vulnerabilities, 3) Performance concerns, 4) Missing error handling, 5) Incomplete implementations. Generate a detailed report with severity levels (critical/high/medium/low) and provide specific line-by-line feedback. Output format: JSON with {issues: [], summary: {critical: 0, high: 0, medium: 0, low: 0}, recommendations: []}" -- Expected output: Structured code review report for next phase +- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user: -### 2. Dependency and Breaking Change Analysis + ``` + Found an in-progress git workflow session: + Target branch: [branch from state] + Current step: [step from state] -- Use Task tool with subagent_type="code-reviewer" -- Prompt: "Analyze the changes for: 1) New dependencies or version changes, 2) Breaking API changes, 3) Database schema modifications, 4) Configuration changes, 5) Backward compatibility issues. Context from previous review: [insert issues summary]. Identify any changes that require migration scripts or documentation updates." -- Context from previous: Code quality issues that might indicate breaking changes -- Expected output: Breaking change assessment and migration requirements + 1. Resume from where we left off + 2. Start fresh (archives existing session) + ``` -## Phase 2: Testing and Validation +- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh. -### 1. Test Execution and Coverage +### 2. Initialize state -- Use Task tool with subagent_type="unit-testing::test-automator" -- Prompt: "Execute all test suites for the modified code. Run: 1) Unit tests, 2) Integration tests, 3) End-to-end tests if applicable. Generate coverage report and identify any untested code paths. Based on review issues: [insert critical/high issues], ensure tests cover the problem areas. 
Provide test results in format: {passed: [], failed: [], skipped: [], coverage: {statements: %, branches: %, functions: %, lines: %}, untested_critical_paths: []}" -- Context from previous: Critical code review issues that need test coverage -- Expected output: Complete test results and coverage metrics +Create `.git-workflow/` directory and `state.json`: -### 2. Test Recommendations and Gap Analysis +```json +{ + "target_branch": "$ARGUMENTS", + "status": "in_progress", + "flags": { + "skip_tests": false, + "draft_pr": false, + "no_push": false, + "squash": false, + "conventional": true, + "trunk_based": false + }, + "current_step": 1, + "current_phase": 1, + "completed_steps": [], + "files_created": [], + "started_at": "ISO_TIMESTAMP", + "last_updated": "ISO_TIMESTAMP" +} +``` -- Use Task tool with subagent_type="unit-testing::test-automator" -- Prompt: "Based on test results [insert summary] and code changes, identify: 1) Missing test scenarios, 2) Edge cases not covered, 3) Integration points needing verification, 4) Performance benchmarks needed. Generate test implementation recommendations prioritized by risk. Consider the breaking changes identified: [insert breaking changes]." -- Context from previous: Test results, breaking changes, untested paths -- Expected output: Prioritized list of additional tests needed +Parse `$ARGUMENTS` for the target branch (defaults to 'main') and flags. Use defaults if not specified. -## Phase 3: Commit Message Generation +### 3. Gather git context -### 1. Change Analysis and Categorization +Run these commands and save output: -- Use Task tool with subagent_type="code-reviewer" -- Prompt: "Analyze all changes and categorize them according to Conventional Commits specification. Identify the primary change type (feat/fix/docs/style/refactor/perf/test/build/ci/chore/revert) and scope. For changes: [insert file list and summary], determine if this should be a single commit or multiple atomic commits. 
Consider test results: [insert test summary]."
-- Context from previous: Test results, code review summary
-- Expected output: Commit structure recommendation
+- `git status` — current working tree state
+- `git diff --stat` — summary of changes
+- `git diff` — full diff of changes
+- `git log --oneline -10` — recent commit history
+- `git branch --show-current` — current branch name
-### 2. Conventional Commit Message Creation
+Save this context to `.git-workflow/00-git-context.md`.
-- Use Task tool with subagent_type="llm-application-dev::prompt-engineer"
-- Prompt: "Create Conventional Commits format message(s) based on categorization: [insert categorization]. Format: `<type>(<scope>): <subject>` with blank line then `<body>` explaining what and why (not how), then