fix: eliminate cross-plugin dependencies and modernize plugin.json across marketplace

Rewrites 14 commands across 11 plugins to remove all cross-plugin
subagent_type references (e.g., "unit-testing::test-automator"), which
break when plugins are installed standalone. Each command now uses only
local bundled agents or general-purpose with role context in the prompt.

All rewritten commands follow conductor-style patterns:
- CRITICAL BEHAVIORAL RULES with strong directives
- State files for session tracking and resume support
- Phase checkpoints requiring explicit user approval
- File-based context passing between steps

Also fixes 4 plugin.json files missing version/license fields and adds
plugin.json for dotnet-contribution.

Closes #433
Seth Hobson
2026-02-06 19:34:26 -05:00
parent 4820385a31
commit 4d504ed8fa
36 changed files with 7235 additions and 2980 deletions

View File

@@ -1,6 +1,6 @@
 {
   "name": "data-engineering",
-  "version": "1.2.2",
+  "version": "1.3.0",
   "description": "ETL pipeline construction, data warehouse design, batch processing workflows, and data-driven feature development",
   "author": {
     "name": "Seth Hobson",

View File

@@ -1,176 +1,784 @@
---
description: "Build features guided by data insights, A/B testing, and continuous measurement"
argument-hint: "<feature description> [--experiment-type ab|multivariate|bandit] [--confidence 0.90|0.95|0.99]"
---

# Data-Driven Feature Development Orchestrator

[Extended thinking: This workflow orchestrates a comprehensive data-driven development process from initial data analysis and hypothesis formulation through feature implementation with integrated analytics, A/B testing infrastructure, and post-launch analysis. Each phase leverages specialized agents to ensure features are built based on data insights, properly instrumented for measurement, and validated through controlled experiments. The workflow emphasizes modern product analytics practices, statistical rigor in testing, and continuous learning from user behavior.]

## CRITICAL BEHAVIORAL RULES

You MUST follow these rules exactly. Violating any of them is a failure.

1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.data-driven-feature/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.

## Pre-flight Checks

Before starting, perform these checks:

### 1. Check for existing session

Check if `.data-driven-feature/state.json` exists:

- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:

```
Found an in-progress data-driven feature session:
Feature: [name from state]
Current step: [step from state]

1. Resume from where we left off
2. Start fresh (archives existing session)
```

- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.

### 2. Initialize state

Create `.data-driven-feature/` directory and `state.json`:

```json
{
  "feature": "$ARGUMENTS",
  "status": "in_progress",
  "experiment_type": "ab",
  "confidence_level": 0.95,
  "current_step": 1,
  "current_phase": 1,
  "completed_steps": [],
  "files_created": [],
  "started_at": "ISO_TIMESTAMP",
  "last_updated": "ISO_TIMESTAMP"
}
```

Parse `$ARGUMENTS` for `--experiment-type` and `--confidence` flags. Use defaults if not specified.

### 3. Parse feature description

Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below.

---
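For reference, the pre-flight flag parsing and state initialization can be sketched in Python. This is illustrative only; the helper names and defaults are assumptions, not part of the command.

```python
import json
import re
from datetime import datetime, timezone
from pathlib import Path

DEFAULTS = {"experiment_type": "ab", "confidence_level": 0.95}

def parse_arguments(arguments: str) -> dict:
    """Split $ARGUMENTS into a feature description plus optional flags."""
    feature = arguments.split(" --", 1)[0].strip()
    exp = re.search(r"--experiment-type\s+(ab|multivariate|bandit)", arguments)
    conf = re.search(r"--confidence\s+([0-9.]+)", arguments)
    return {
        "feature": feature,
        "experiment_type": exp.group(1) if exp else DEFAULTS["experiment_type"],
        "confidence_level": float(conf.group(1)) if conf else DEFAULTS["confidence_level"],
    }

def init_state(parsed: dict, root: Path = Path(".data-driven-feature")) -> dict:
    """Create the session directory and write the initial state.json."""
    root.mkdir(exist_ok=True, parents=True)
    now = datetime.now(timezone.utc).isoformat()
    state = {**parsed, "status": "in_progress",
             "current_step": 1, "current_phase": 1,
             "completed_steps": [], "files_created": [],
             "started_at": now, "last_updated": now}
    (root / "state.json").write_text(json.dumps(state, indent=2))
    return state
```

The state keys mirror the `state.json` schema from the pre-flight section, so resume logic can read the file back directly.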
## Phase 1: Data Analysis & Hypothesis (Steps 1-3) — Interactive
### Step 1: Exploratory Data Analysis
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Perform exploratory data analysis for $FEATURE"
prompt: |
You are a data scientist specializing in product analytics. Perform exploratory data analysis for feature: $FEATURE.
## Instructions
1. Analyze existing user behavior data, identify patterns and opportunities
2. Segment users by behavior and engagement patterns
3. Calculate baseline metrics for key indicators
4. Use modern analytics tools (Amplitude, Mixpanel, Segment) to understand current user journeys, conversion funnels, and engagement patterns
5. Identify data quality issues or gaps that need addressing
Provide an EDA report with user segments, behavioral patterns, and baseline metrics.
```
Save the agent's output to `.data-driven-feature/01-eda-report.md`.
Update `state.json`: set `current_step` to 2, add `"01-eda-report.md"` to `files_created`, add step 1 to `completed_steps`.
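As a reference for the kind of baseline metric Step 1 asks for, a minimal funnel-conversion computation over raw events might look like this. The event tuples are hypothetical; real data would come from an Amplitude or Mixpanel export.

```python
def funnel_conversion(events, steps):
    """Count users completing each step of an ordered funnel.

    events: (user_id, event_name) tuples, assumed time-ordered per user.
    steps: ordered list of funnel event names.
    Returns one count per step: users who have reached at least that step.
    """
    progress = {}  # user_id -> index of the next expected step
    for user, name in events:
        idx = progress.get(user, 0)
        if idx < len(steps) and name == steps[idx]:
            progress[user] = idx + 1
    return [sum(1 for p in progress.values() if p > i) for i in range(len(steps))]
```

Dividing each count by the first gives the baseline conversion rate at every stage, which is what the EDA report should record.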
### Step 2: Business Hypothesis Development
Read `.data-driven-feature/01-eda-report.md` to load EDA context.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Formulate business hypotheses for $FEATURE"
prompt: |
You are a business analyst specializing in data-driven product development. Formulate business hypotheses for feature: $FEATURE based on the data analysis below.
## EDA Findings
[Insert full contents of .data-driven-feature/01-eda-report.md]
## Instructions
1. Define clear success metrics and expected impact on key business KPIs
2. Identify target user segments and minimum detectable effects
3. Create measurable hypotheses using ICE or RICE prioritization frameworks
4. Calculate expected ROI and business value
Provide a hypothesis document with success metrics definition and expected ROI calculations.
```
Save the agent's output to `.data-driven-feature/02-hypotheses.md`.
Update `state.json`: set `current_step` to 3, add step 2 to `completed_steps`.
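Step 2's ICE prioritization is easy to make concrete. A sketch, assuming each component is scored on a 1-10 scale (the hypothesis names below are hypothetical):

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE = Impact x Confidence x Ease, each component scored 1-10."""
    for v in (impact, confidence, ease):
        if not 1 <= v <= 10:
            raise ValueError("ICE components are scored on a 1-10 scale")
    return impact * confidence * ease

def prioritize(hypotheses: dict) -> list:
    """Rank hypotheses by descending ICE score."""
    scored = [(name, ice_score(*parts)) for name, parts in hypotheses.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)
```

RICE works the same way with a Reach multiplier and Effort as a divisor; either framework gives the hypothesis document an explicit ordering.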
### Step 3: Statistical Experiment Design
Read `.data-driven-feature/02-hypotheses.md` to load hypothesis context.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Design statistical experiment for $FEATURE"
prompt: |
You are a data scientist specializing in experimentation and statistical analysis. Design the statistical experiment for feature: $FEATURE.
## Business Hypotheses
[Insert full contents of .data-driven-feature/02-hypotheses.md]
## Experiment Type: [from state.json]
## Confidence Level: [from state.json]
## Instructions
1. Calculate required sample size for statistical power
2. Define control and treatment groups with randomization strategy
3. Plan for multiple testing corrections if needed
4. Consider Bayesian A/B testing approaches for faster decision making
5. Design for both primary and guardrail metrics
6. Specify experiment runtime and stopping rules
Provide an experiment design document with power analysis and statistical test plan.
```
Save the agent's output to `.data-driven-feature/03-experiment-design.md`.
Update `state.json`: set `current_step` to "checkpoint-1", add step 3 to `completed_steps`.
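The power analysis in Step 3 typically reduces to the standard two-proportion sample-size formula. A standard-library sketch (the defaults are assumptions, not requirements of the command):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p_baseline, mde_abs, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-proportion z-test.

    p_baseline: control conversion rate.
    mde_abs: minimum detectable absolute lift (0.01 = one percentage point).
    """
    p1, p2 = p_baseline, p_baseline + mde_abs
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)
```

For a 10% baseline and a 2-point absolute MDE this lands around 3,800-3,900 users per arm, which is the kind of figure the experiment design document should state explicitly.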
---
## PHASE CHECKPOINT 1 — User Approval Required
You MUST stop here and present the analysis and experiment design for review.
Display a summary of the hypotheses from `.data-driven-feature/02-hypotheses.md` and experiment design from `.data-driven-feature/03-experiment-design.md` (key metrics, target segments, sample size, experiment type) and ask:
```
Data analysis and experiment design complete. Please review:
- .data-driven-feature/01-eda-report.md
- .data-driven-feature/02-hypotheses.md
- .data-driven-feature/03-experiment-design.md
1. Approve — proceed to architecture and implementation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop.
---
## Phase 2: Architecture & Instrumentation (Steps 4-6)
### Step 4: Feature Architecture Planning
Read `.data-driven-feature/02-hypotheses.md` and `.data-driven-feature/03-experiment-design.md`.
Use the Task tool:
```
Task:
subagent_type: "backend-architect"
description: "Design feature architecture for $FEATURE with A/B testing capability"
prompt: |
Design the feature architecture for: $FEATURE with A/B testing capability.
## Business Hypotheses
[Insert contents of .data-driven-feature/02-hypotheses.md]
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Instructions
1. Include feature flag integration (LaunchDarkly, Split.io, or Optimizely)
2. Design gradual rollout strategy with circuit breakers for safety
3. Ensure clean separation between control and treatment logic
4. Support real-time configuration updates
5. Design for proper data collection at each decision point
Provide architecture diagrams, feature flag schema, and rollout strategy.
```
Save the agent's output to `.data-driven-feature/04-architecture.md`.
Update `state.json`: set `current_step` to 5, add step 4 to `completed_steps`.
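One way to get the clean control/treatment separation Step 4 calls for is deterministic hash bucketing, which flag providers such as LaunchDarkly and Split.io implement internally. A minimal sketch (the function and its 50/50 split are illustrative, not any provider's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, traffic_pct: float = 100.0) -> str:
    """Deterministically bucket a user: same input always yields the same variant.

    Users outside the traffic allocation get "off"; the rest split 50/50.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10000  # uniform-ish in 0..9999
    if bucket >= traffic_pct * 100:
        return "off"
    return "treatment" if bucket % 2 else "control"
```

Determinism is the point: assignment survives restarts and real-time configuration updates without a stored mapping, and raising `traffic_pct` never reassigns existing users.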
### Step 5: Analytics Instrumentation Design
Read `.data-driven-feature/04-architecture.md`.
Use the Task tool:
```
Task:
subagent_type: "data-engineer"
description: "Design analytics instrumentation for $FEATURE"
prompt: |
Design comprehensive analytics instrumentation for: $FEATURE.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Instructions
1. Define event schemas for user interactions with proper taxonomy
2. Specify properties for segmentation and analysis
3. Design funnel tracking and conversion events
4. Plan cohort analysis capabilities
5. Implement using modern SDKs (Segment, Amplitude, Mixpanel) with proper event taxonomy
Provide an event tracking plan, analytics schema, and instrumentation guide.
```
Save the agent's output to `.data-driven-feature/05-analytics-design.md`.
Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`.
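To make Step 5's "proper event taxonomy" concrete, a tracking plan can be expressed as data and checked in code. The schemas below are hypothetical examples, not a prescribed taxonomy:

```python
EVENT_SCHEMAS = {
    # Hypothetical object_action taxonomy with snake_case properties.
    "checkout_started": {"required": {"user_id", "cart_value", "variant"}},
    "checkout_completed": {"required": {"user_id", "order_id", "variant"}},
}

def validate_event(name: str, properties: dict) -> list:
    """Return a list of problems; an empty list means the event is valid."""
    schema = EVENT_SCHEMAS.get(name)
    if schema is None:
        return [f"unknown event: {name}"]
    missing = schema["required"] - properties.keys()
    return [f"missing property: {p}" for p in sorted(missing)]
```

Requiring `variant` on every experiment event is what later makes the Step 14 analysis a simple group-by rather than a join against assignment logs.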
### Step 6: Data Pipeline Architecture
Read `.data-driven-feature/05-analytics-design.md`.
Use the Task tool:
```
Task:
subagent_type: "data-engineer"
description: "Design data pipelines for $FEATURE"
prompt: |
Design data pipelines for feature: $FEATURE.
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Instructions
1. Include real-time streaming for live metrics (Kafka, Kinesis)
2. Design batch processing for detailed analysis
3. Plan data warehouse integration (Snowflake, BigQuery)
4. Include feature store for ML if applicable
5. Ensure proper data governance and GDPR compliance
6. Define data retention and archival policies
Provide pipeline architecture, ETL/ELT specifications, and data flow diagrams.
```
Save the agent's output to `.data-driven-feature/06-data-pipelines.md`.
Update `state.json`: set `current_step` to "checkpoint-2", add step 6 to `completed_steps`.
---
## PHASE CHECKPOINT 2 — User Approval Required
Display a summary of the architecture, analytics design, and data pipelines and ask:
```
Architecture and instrumentation design complete. Please review:
- .data-driven-feature/04-architecture.md
- .data-driven-feature/05-analytics-design.md
- .data-driven-feature/06-data-pipelines.md
1. Approve — proceed to implementation
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 3 until the user approves.
---
## Phase 3: Implementation (Steps 7-9)
### Step 7: Backend Implementation
Read `.data-driven-feature/04-architecture.md` and `.data-driven-feature/05-analytics-design.md`.
Use the Task tool:
```
Task:
subagent_type: "backend-architect"
description: "Implement backend for $FEATURE with full instrumentation"
prompt: |
Implement the backend for feature: $FEATURE with full instrumentation.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Instructions
1. Include feature flag checks at decision points
2. Implement comprehensive event tracking for all user actions
3. Add performance metrics collection
4. Implement error tracking and monitoring
5. Add proper logging for experiment analysis
6. Follow the project's existing code patterns and conventions
Write all code files. Report what files were created/modified.
```
Save a summary to `.data-driven-feature/07-backend.md`.
Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`.
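The shape Step 7 asks for, flag check plus identical tracking on both paths, can be sketched as follows. `flags` and `analytics` stand in for real SDK clients (e.g. LaunchDarkly, Segment); only the call shape is illustrated, not any vendor's API:

```python
import logging
import time

log = logging.getLogger("experiment")

def handle_request(user_id, flags, analytics, legacy_handler, new_handler):
    """Route a request by variant, tracking outcome and latency either way."""
    variant = flags(user_id)  # -> "control" or "treatment"
    handler = new_handler if variant == "treatment" else legacy_handler
    start = time.perf_counter()
    try:
        result = handler(user_id)
        analytics(user_id, "feature_served", {
            "variant": variant,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
            "success": True,
        })
        return result
    except Exception:
        analytics(user_id, "feature_served", {"variant": variant, "success": False})
        log.exception("feature failed for variant %s", variant)
        raise
```

Emitting the same event name with the same properties on success and failure is what keeps the experiment analysis free of survivorship bias.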
### Step 8: Frontend Implementation
Read `.data-driven-feature/04-architecture.md`, `.data-driven-feature/05-analytics-design.md`, and `.data-driven-feature/07-backend.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Implement frontend for $FEATURE with analytics tracking"
prompt: |
You are a frontend developer. Build the frontend for feature: $FEATURE with analytics tracking.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Backend Implementation
[Insert contents of .data-driven-feature/07-backend.md]
## Instructions
1. Implement event tracking for all user interactions
2. Build A/B test variants with proper variant assignment
3. Add session recording integration if applicable
4. Track performance metrics (Core Web Vitals)
5. Add proper error boundaries
6. Ensure consistent experience between control and treatment groups
7. Follow the project's existing frontend patterns and conventions
Write all code files. Report what files were created/modified.
```
Save a summary to `.data-driven-feature/08-frontend.md`.
**Note:** If the feature has no frontend component (pure backend/API/pipeline), skip this step — write a brief note in `08-frontend.md` explaining why it was skipped, and continue.
Update `state.json`: set `current_step` to 9, add step 8 to `completed_steps`.
### Step 9: ML Model Integration (if applicable)
Read `.data-driven-feature/04-architecture.md` and `.data-driven-feature/06-data-pipelines.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Integrate ML models for $FEATURE"
prompt: |
You are an ML engineer. Integrate ML models for feature: $FEATURE if needed.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Data Pipelines
[Insert contents of .data-driven-feature/06-data-pipelines.md]
## Instructions
1. Implement online inference with low latency
2. Set up A/B testing between model versions
3. Add model performance tracking and drift detection
4. Implement automatic fallback mechanisms
5. Set up model monitoring dashboards
If no ML component is needed for this feature, explain why and skip.
Write all code files. Report what files were created/modified.
```
Save a summary to `.data-driven-feature/09-ml-integration.md`.
Update `state.json`: set `current_step` to "checkpoint-3", add step 9 to `completed_steps`.
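The fallback mechanism in Step 9 can be sketched as a wrapper around any predictor. This is a simplification: a real deployment would enforce the latency budget with a timeout rather than measure it after the fact, and `record` stands in for a real metrics client:

```python
import time

def predict_with_fallback(model, fallback, features, budget_ms=50.0, record=print):
    """Serve a model prediction, falling back to a heuristic on error.

    model and fallback are any callables over a feature dict; record
    receives simple metric events for the monitoring dashboard.
    """
    start = time.perf_counter()
    try:
        prediction = model(features)
        elapsed = (time.perf_counter() - start) * 1000
        if elapsed > budget_ms:
            record({"event": "latency_budget_exceeded", "ms": round(elapsed, 1)})
        record({"event": "model_served", "fallback": False})
        return prediction
    except Exception:
        record({"event": "model_served", "fallback": True})
        return fallback(features)
```

Tracking the `fallback` flag per request also doubles as a drift signal: a rising fallback rate is often the first symptom of an unhealthy model.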
---
## PHASE CHECKPOINT 3 — User Approval Required
Display a summary of the implementation and ask:
```
Implementation complete. Please review:
- .data-driven-feature/07-backend.md
- .data-driven-feature/08-frontend.md
- .data-driven-feature/09-ml-integration.md
1. Approve — proceed to validation and launch
2. Request changes — tell me what to fix
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 4 until the user approves.
---
## Phase 4: Validation & Launch (Steps 10-13)
### Step 10: Analytics Validation
Read `.data-driven-feature/05-analytics-design.md`, `.data-driven-feature/07-backend.md`, and `.data-driven-feature/08-frontend.md`.
Use the Task tool:
```
Task:
subagent_type: "data-engineer"
description: "Validate analytics implementation for $FEATURE"
prompt: |
Validate the analytics implementation for: $FEATURE.
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Backend Implementation
[Insert contents of .data-driven-feature/07-backend.md]
## Frontend Implementation
[Insert contents of .data-driven-feature/08-frontend.md]
## Instructions
1. Test all event tracking in staging environment
2. Verify data quality and completeness
3. Validate funnel definitions and conversion tracking
4. Ensure proper user identification and session tracking
5. Run end-to-end tests for data pipeline
6. Check for tracking gaps or inconsistencies
Provide a validation report with data quality metrics and tracking coverage analysis.
```
Save the agent's output to `.data-driven-feature/10-analytics-validation.md`.
Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`.
### Step 11: Experiment Setup & Deployment
Read `.data-driven-feature/03-experiment-design.md` and `.data-driven-feature/04-architecture.md`.
Launch two agents in parallel using multiple Task tool calls in a single response:
**11a. Experiment Infrastructure:**
```
Task:
subagent_type: "general-purpose"
description: "Configure experiment infrastructure for $FEATURE"
prompt: |
You are a deployment engineer specializing in experimentation platforms. Configure experiment infrastructure for: $FEATURE.
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Instructions
1. Set up feature flags with proper targeting rules
2. Configure traffic allocation (start with 5-10%)
3. Implement kill switches for safety
4. Set up monitoring alerts for key metrics
5. Test randomization and assignment logic
6. Create rollback procedures
Provide experiment configuration, monitoring dashboards, and rollout plan.
```
**11b. Monitoring Setup:**
```
Task:
subagent_type: "general-purpose"
description: "Set up monitoring for $FEATURE experiment"
prompt: |
You are an observability engineer. Set up comprehensive monitoring for: $FEATURE.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Analytics Design
[Insert contents of .data-driven-feature/05-analytics-design.md]
## Instructions
1. Create real-time dashboards for experiment metrics
2. Configure alerts for statistical significance milestones
3. Monitor guardrail metrics for negative impacts
4. Track system performance and error rates
5. Define SLOs for the experiment period
6. Use tools like Datadog, New Relic, or custom dashboards
Provide monitoring dashboard configs, alert definitions, and SLO specifications.
```
After both complete, consolidate results into `.data-driven-feature/11-experiment-setup.md`:
```markdown
# Experiment Setup: $FEATURE
## Experiment Infrastructure
[Summary from 11a — feature flags, traffic allocation, rollback plan]
## Monitoring Configuration
[Summary from 11b — dashboards, alerts, SLOs]
```
Update `state.json`: set `current_step` to 12, add step 11 to `completed_steps`.
### Step 12: Gradual Rollout
Read `.data-driven-feature/11-experiment-setup.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create gradual rollout plan for $FEATURE"
prompt: |
You are a deployment engineer. Create a detailed gradual rollout plan for feature: $FEATURE.
## Experiment Setup
[Insert contents of .data-driven-feature/11-experiment-setup.md]
## Instructions
1. Define rollout stages: internal dogfooding → beta (1-5%) → gradual increase to target traffic
2. Specify health checks and go/no-go criteria for each stage
3. Define monitoring checkpoints and metrics thresholds
4. Create automated rollback triggers for anomalies
5. Document manual rollback procedures
Provide a stage-by-stage rollout plan with decision criteria.
```
Save the agent's output to `.data-driven-feature/12-rollout-plan.md`.
Update `state.json`: set `current_step` to 13, add step 12 to `completed_steps`.
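The staged rollout Step 12 describes can be expressed as data plus a go/no-go check. The stage names and thresholds below are hypothetical placeholders for what the rollout plan should pin down:

```python
ROLLOUT_STAGES = [
    # (name, traffic_pct, max_error_rate, max_p95_latency_ms)
    ("dogfood", 1, 0.005, 400),
    ("beta", 5, 0.002, 350),
    ("half", 50, 0.001, 300),
    ("full", 100, 0.001, 300),
]

def next_stage(current: str, metrics: dict) -> str:
    """Advance only while the current stage's guardrails hold; else roll back."""
    names = [s[0] for s in ROLLOUT_STAGES]
    i = names.index(current)
    _, _, max_err, max_p95 = ROLLOUT_STAGES[i]
    if metrics["error_rate"] > max_err or metrics["p95_ms"] > max_p95:
        return "rollback"
    return names[min(i + 1, len(names) - 1)]
```

Encoding the thresholds as data keeps the decision criteria reviewable at the checkpoint instead of buried in deployment scripts.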
### Step 13: Security Review
Read `.data-driven-feature/04-architecture.md`, `.data-driven-feature/07-backend.md`, and `.data-driven-feature/08-frontend.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Security review of $FEATURE"
prompt: |
You are a security auditor. Perform a security review of this data-driven feature implementation.
## Architecture
[Insert contents of .data-driven-feature/04-architecture.md]
## Backend Implementation
[Insert contents of .data-driven-feature/07-backend.md]
## Frontend Implementation
[Insert contents of .data-driven-feature/08-frontend.md]
## Instructions
Review for: OWASP Top 10, data privacy and GDPR compliance, PII handling in analytics events,
authentication/authorization flaws, input validation gaps, experiment manipulation risks,
and any security anti-patterns.
Provide findings with severity, location, and specific fix recommendations.
```
Save the agent's output to `.data-driven-feature/13-security-review.md`.
If there are Critical or High severity findings, address them now before proceeding. Apply fixes and re-validate.
Update `state.json`: set `current_step` to "checkpoint-4", add step 13 to `completed_steps`.
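One recurring finding in this kind of review is PII leaking through analytics events. A common mitigation, sketched here with a hypothetical allowlist (the real one would come from the event schema in `05-analytics-design.md`), is to scrub events down to approved fields before they leave the client:

```python
# Illustrative PII scrub for analytics events; the allowlist is a placeholder.
ALLOWED_FIELDS = {"event_name", "variant", "timestamp", "session_id"}

def scrub_event(event: dict) -> dict:
    """Drop any property not on the allowlist (emails, names, free text)."""
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}

raw = {
    "event_name": "feature_clicked",
    "variant": "treatment",
    "timestamp": 1717000000,
    "session_id": "abc123",
    "email": "user@example.com",       # PII: must never reach analytics
    "free_text": "my address is ...",  # unbounded user input, also PII risk
}
clean = scrub_event(raw)
print(sorted(clean))  # -> ['event_name', 'session_id', 'timestamp', 'variant']
```

An allowlist fails closed: new fields are dropped until explicitly approved, which is safer than a denylist that must anticipate every PII shape.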
---
## PHASE CHECKPOINT 4 — User Approval Required
Display a summary of validation and launch readiness and ask:
```
Validation and launch preparation complete. Please review:
- .data-driven-feature/10-analytics-validation.md
- .data-driven-feature/11-experiment-setup.md
- .data-driven-feature/12-rollout-plan.md
- .data-driven-feature/13-security-review.md
Security findings: [X critical, Y high, Z medium]
1. Approve — proceed to analysis planning
2. Request changes — tell me what to fix
3. Pause — save progress and stop here
```
Do NOT proceed to Phase 5 until the user approves.
---
## Phase 5: Analysis & Decision (Steps 14-16)
### Step 14: Statistical Analysis
Read `.data-driven-feature/03-experiment-design.md` and `.data-driven-feature/02-hypotheses.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create statistical analysis plan for $FEATURE experiment"
prompt: |
You are a data scientist specializing in experimentation. Create the statistical analysis plan for the A/B test results of: $FEATURE.
## Experiment Design
[Insert contents of .data-driven-feature/03-experiment-design.md]
## Hypotheses
[Insert contents of .data-driven-feature/02-hypotheses.md]
## Instructions
1. Define statistical significance calculations with confidence intervals
2. Plan segment-level effect analysis
3. Specify secondary metrics impact analysis
4. Use both frequentist and Bayesian approaches
5. Account for multiple testing corrections
6. Define stopping rules and decision criteria
Provide an analysis plan with templates for results reporting.
```
Save the agent's output to `.data-driven-feature/14-analysis-plan.md`.
Update `state.json`: set `current_step` to 15, add step 14 to `completed_steps`.
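As a point of reference for instruction 1, the frequentist core of the analysis plan often reduces to a two-proportion z-test with a confidence interval on the lift. The counts below are synthetic; real inputs come from the experiment's assignment and conversion data.

```python
# Minimal two-proportion z-test sketch; counts are illustrative only.
import math

def two_proportion_test(conv_c, n_c, conv_t, n_t, z_crit=1.96):
    """Return (diff, ci_low, ci_high, z) for treatment minus control."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    diff = p_t - p_c
    # Pooled standard error for the z statistic (null: equal rates)
    p_pool = (conv_c + conv_t) / (n_c + n_t)
    se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = diff / se_pool
    # Unpooled standard error for the confidence interval on the difference
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    return diff, diff - z_crit * se, diff + z_crit * se, z

diff, lo, hi, z = two_proportion_test(conv_c=480, n_c=10_000,
                                      conv_t=560, n_t=10_000)
print(f"lift {diff:.4f}, 95% CI [{lo:.4f}, {hi:.4f}], z={z:.2f}")
# Significant at the 95% level when the CI excludes 0 (|z| > 1.96)
```

The Bayesian side of the plan (instruction 4) would complement this with a posterior probability that treatment beats control, and instruction 5 would widen `z_crit` when several metrics are tested at once.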
### Step 15: Business Impact Assessment Framework
Read `.data-driven-feature/02-hypotheses.md` and `.data-driven-feature/14-analysis-plan.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create business impact assessment framework for $FEATURE"
prompt: |
You are a business analyst. Create a business impact assessment framework for feature: $FEATURE.
## Hypotheses
[Insert contents of .data-driven-feature/02-hypotheses.md]
## Analysis Plan
[Insert contents of .data-driven-feature/14-analysis-plan.md]
## Instructions
1. Define actual vs expected ROI calculation methodology
2. Create a framework for analyzing impact on key business metrics
3. Plan cost-benefit analysis including operational overhead
4. Define criteria for full rollout, iteration, or rollback decisions
5. Create templates for stakeholder reporting
Provide a business impact framework and decision matrix.
```
Save the agent's output to `.data-driven-feature/15-impact-framework.md`.
Update `state.json`: set `current_step` to 16, add step 15 to `completed_steps`.
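To make instruction 1 concrete, a first-year ROI comparison can be as simple as annualizing the measured lift and netting out run costs. Every number below is a placeholder for illustration, not a real estimate:

```python
# Toy ROI calculation; all inputs are placeholders for illustration.
def simple_roi(lift, users_per_year, value_per_conversion,
               build_cost, annual_run_cost):
    """First-year ROI multiple on the build cost, given a measured lift."""
    incremental_conversions = lift * users_per_year
    incremental_revenue = incremental_conversions * value_per_conversion
    net = incremental_revenue - annual_run_cost
    return net / build_cost

roi = simple_roi(lift=0.008, users_per_year=1_000_000,
                 value_per_conversion=30.0, build_cost=120_000.0,
                 annual_run_cost=40_000.0)
print(f"{roi:.2f}x")  # compare against the agreed rollout threshold
```

The decision matrix then maps ROI bands (and guardrail results) to the three outcomes in instruction 4: full rollout, iterate, or roll back.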
### Step 16: Optimization Roadmap
Read `.data-driven-feature/14-analysis-plan.md` and `.data-driven-feature/15-impact-framework.md`.
Use the Task tool:
```
Task:
subagent_type: "general-purpose"
description: "Create post-launch optimization roadmap for $FEATURE"
prompt: |
You are a data scientist specializing in product optimization. Create a post-launch optimization roadmap for: $FEATURE.
## Analysis Plan
[Insert contents of .data-driven-feature/14-analysis-plan.md]
## Impact Framework
[Insert contents of .data-driven-feature/15-impact-framework.md]
## Instructions
1. Define user behavior analysis methodology for treatment group
2. Plan friction point identification in user journeys
3. Suggest improvement hypotheses based on expected data patterns
4. Plan follow-up experiments and iteration cycles
5. Design cohort analysis for long-term impact assessment
6. Create a continuous learning feedback loop
Provide an optimization roadmap with follow-up experiment plans.
```
Save the agent's output to `.data-driven-feature/16-optimization-roadmap.md`.
Update `state.json`: set `current_step` to "complete", add step 16 to `completed_steps`.
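The cohort analysis in instruction 5 boils down to grouping users by signup period and measuring what share is still active N periods later. A toy version over synthetic rows (stand-ins for warehouse data) looks like this:

```python
# Toy weekly retention-cohort calculation; event rows are synthetic.
from collections import defaultdict

# (user_id, signup_week, active_week) tuples
events = [
    ("u1", 0, 0), ("u1", 0, 1), ("u1", 0, 2),
    ("u2", 0, 0), ("u2", 0, 1),
    ("u3", 1, 1), ("u3", 1, 2),
    ("u4", 1, 1),
]

def retention(events):
    cohort_users = defaultdict(set)   # signup week -> users in the cohort
    active = defaultdict(set)         # (signup week, week offset) -> users
    for user, signup, week in events:
        cohort_users[signup].add(user)
        active[(signup, week - signup)].add(user)
    return {
        (signup, offset): len(users) / len(cohort_users[signup])
        for (signup, offset), users in active.items()
    }

r = retention(events)
print(r[(0, 1)])  # week-0 cohort active one week later -> 1.0
print(r[(1, 1)])  # week-1 cohort active one week later -> 0.5
```

Running this separately for treatment and control cohorts is what surfaces long-term effects that a short experiment window would miss.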
---
## Completion
Update `state.json`:
- Set `status` to `"complete"`
- Set `last_updated` to current timestamp
Present the final summary:
```
Data-driven feature development complete: $FEATURE
## Files Created
[List all .data-driven-feature/ output files]
## Development Summary
- EDA Report: .data-driven-feature/01-eda-report.md
- Hypotheses: .data-driven-feature/02-hypotheses.md
- Experiment Design: .data-driven-feature/03-experiment-design.md
- Architecture: .data-driven-feature/04-architecture.md
- Analytics Design: .data-driven-feature/05-analytics-design.md
- Data Pipelines: .data-driven-feature/06-data-pipelines.md
- Backend: .data-driven-feature/07-backend.md
- Frontend: .data-driven-feature/08-frontend.md
- ML Integration: .data-driven-feature/09-ml-integration.md
- Analytics Validation: .data-driven-feature/10-analytics-validation.md
- Experiment Setup: .data-driven-feature/11-experiment-setup.md
- Rollout Plan: .data-driven-feature/12-rollout-plan.md
- Security Review: .data-driven-feature/13-security-review.md
- Analysis Plan: .data-driven-feature/14-analysis-plan.md
- Impact Framework: .data-driven-feature/15-impact-framework.md
- Optimization Roadmap: .data-driven-feature/16-optimization-roadmap.md
## Next Steps
1. Review all generated artifacts and documentation
2. Execute the rollout plan in .data-driven-feature/12-rollout-plan.md
3. Monitor using the dashboards from .data-driven-feature/11-experiment-setup.md
4. Run analysis after experiment completes using .data-driven-feature/14-analysis-plan.md
5. Make go/no-go decision using .data-driven-feature/15-impact-framework.md
```