mirror of
https://github.com/wshobson/agents.git
synced 2026-03-18 09:37:15 +00:00
fix: eliminate cross-plugin dependencies and modernize plugin.json across marketplace
Rewrites 14 commands across 11 plugins to remove all cross-plugin subagent_type references (e.g., "unit-testing::test-automator"), which break when plugins are installed standalone. Each command now uses only local bundled agents or general-purpose with role context in the prompt.

All rewritten commands follow conductor-style patterns:
- CRITICAL BEHAVIORAL RULES with strong directives
- State files for session tracking and resume support
- Phase checkpoints requiring explicit user approval
- File-based context passing between steps

Also fixes 4 plugin.json files missing version/license fields and adds plugin.json for dotnet-contribution.

Closes #433
@@ -1,6 +1,6 @@
{
  "name": "tdd-workflows",
  "version": "1.2.1",
  "version": "1.3.0",
  "description": "Test-driven development methodology with red-green-refactor cycles and code review",
  "author": {
    "name": "Seth Hobson",
@@ -1,12 +1,74 @@
Execute a comprehensive Test-Driven Development (TDD) workflow with strict red-green-refactor discipline:
---
description: "Execute a comprehensive TDD workflow with strict red-green-refactor discipline"
argument-hint: "<feature or module to implement> [--incremental|--suite] [--coverage 80]"
---

[Extended thinking: This workflow enforces test-first development through coordinated agent orchestration. Each phase of the TDD cycle is strictly enforced with fail-first verification, incremental implementation, and continuous refactoring. The workflow supports both single test and test suite approaches with configurable coverage thresholds.]
# TDD Cycle Orchestrator

## CRITICAL BEHAVIORAL RULES

You MUST follow these rules exactly. Violating any of them is a failure.

1. **Execute steps in order.** Do NOT skip ahead, reorder, or merge steps.
2. **Write output files.** Each step MUST produce its output file in `.tdd-cycle/` before the next step begins. Read from prior step files — do NOT rely on context window memory.
3. **Stop at checkpoints.** When you reach a `PHASE CHECKPOINT`, you MUST stop and wait for explicit user approval before continuing. Use the AskUserQuestion tool with clear options.
4. **Halt on failure.** If any step fails (agent error, test failure, missing dependency), STOP immediately. Present the error and ask the user how to proceed. Do NOT silently continue.
5. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
6. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. This command IS the plan — execute it.

## Pre-flight Checks

Before starting, perform these checks:

### 1. Check for existing session

Check if `.tdd-cycle/state.json` exists:

- If it exists and `status` is `"in_progress"`: Read it, display the current step, and ask the user:

```
Found an in-progress TDD cycle session:
Feature: [name from state]
Current step: [step from state]

1. Resume from where we left off
2. Start fresh (archives existing session)
```

- If it exists and `status` is `"complete"`: Ask whether to archive and start fresh.
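
The session check above follows a simple read-and-branch pattern. As an illustration only (the helper name and return shape are not part of the command spec), it might look like:

```python
import json
from pathlib import Path

def check_existing_session(state_dir=".tdd-cycle"):
    """Return ('resume_prompt' | 'archive_prompt' | 'fresh', state) per the pre-flight rules."""
    state_file = Path(state_dir) / "state.json"
    if not state_file.exists():
        return "fresh", None
    state = json.loads(state_file.read_text())
    if state.get("status") == "in_progress":
        # Caller should surface feature name and current step so the user can decide.
        return "resume_prompt", state
    if state.get("status") == "complete":
        return "archive_prompt", state
    return "fresh", state
```
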

### 2. Initialize state

Create `.tdd-cycle/` directory and `state.json`:

```json
{
  "feature": "$ARGUMENTS",
  "status": "in_progress",
  "mode": "suite",
  "coverage_target": 80,
  "current_step": 1,
  "current_phase": 1,
  "completed_steps": [],
  "files_created": [],
  "started_at": "ISO_TIMESTAMP",
  "last_updated": "ISO_TIMESTAMP"
}
```

Parse `$ARGUMENTS` for `--incremental`, `--suite`, and `--coverage` flags. Use defaults if not specified (mode: suite, coverage: 80).
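
The flag parsing can be sketched in Python; the function name is hypothetical, but the defaults (suite mode, 80% coverage) match the spec:

```python
import re

def parse_tdd_args(arguments: str):
    """Split $ARGUMENTS into (feature description, mode, coverage target)."""
    mode = "suite"  # default per spec; --suite is therefore a no-op
    coverage = 80   # default per spec
    if "--incremental" in arguments:
        mode = "incremental"
    m = re.search(r"--coverage\s+(\d+)", arguments)
    if m:
        coverage = int(m.group(1))
    # The feature description is everything before the first flag.
    feature = re.split(r"\s--", arguments, maxsplit=1)[0].strip()
    return feature, mode, coverage
```
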

### 3. Parse feature description

Extract the feature description from `$ARGUMENTS` (everything before the flags). This is referenced as `$FEATURE` in prompts below.

---

## Configuration

### Coverage Thresholds

- Minimum line coverage: 80%
- Minimum line coverage: parsed from `--coverage` flag (default 80%)
- Minimum branch coverage: 75%
- Critical path coverage: 100%
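
Once a coverage report has been parsed into percentages, the thresholds above reduce to a simple comparison. A sketch, with the metric keys assumed for illustration:

```python
def coverage_ok(metrics: dict, line_min=80, branch_min=75, critical_min=100):
    """Check parsed coverage percentages against the workflow's thresholds."""
    failures = []
    if metrics.get("line", 0) < line_min:
        failures.append(f"line coverage {metrics.get('line', 0)}% < {line_min}%")
    if metrics.get("branch", 0) < branch_min:
        failures.append(f"branch coverage {metrics.get('branch', 0)}% < {branch_min}%")
    if metrics.get("critical_path", 0) < critical_min:
        failures.append(f"critical path coverage {metrics.get('critical_path', 0)}% < {critical_min}%")
    return len(failures) == 0, failures
```
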

@@ -17,125 +79,543 @@ Execute a comprehensive Test-Driven Development (TDD) workflow with strict red-g
- Class length > 200 lines
- Duplicate code blocks > 3 lines
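
Triggers like these can be detected mechanically. A stdlib-only sketch covering just the method-length trigger (the function name is illustrative):

```python
import ast

def long_functions(source: str, max_lines: int = 20):
    """Return names of functions exceeding the method-length refactoring trigger."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on parsed nodes in Python 3.8+.
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append(node.name)
    return offenders
```
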

## Phase 1: Test Specification and Design

### 1. Requirements Analysis

- Use Task tool with subagent_type="comprehensive-review::architect-review"
- Prompt: "Analyze requirements for: $ARGUMENTS. Define acceptance criteria, identify edge cases, and create test scenarios. Output a comprehensive test specification."
- Output: Test specification, acceptance criteria, edge case matrix
- Validation: Ensure all requirements have corresponding test scenarios

### 2. Test Architecture Design

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Design test architecture for: $ARGUMENTS based on test specification. Define test structure, fixtures, mocks, and test data strategy. Ensure testability and maintainability."
- Output: Test architecture, fixture design, mock strategy
- Validation: Architecture supports isolated, fast, reliable tests

## Phase 2: RED - Write Failing Tests

### 3. Write Unit Tests (Failing)

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Write FAILING unit tests for: $ARGUMENTS. Tests must fail initially. Include edge cases, error scenarios, and happy paths. DO NOT implement production code."
- Output: Failing unit tests, test documentation
- **CRITICAL**: Verify all tests fail with expected error messages

### 4. Verify Test Failure

- Use Task tool with subagent_type="tdd-workflows::code-reviewer"
- Prompt: "Verify that all tests for: $ARGUMENTS are failing correctly. Ensure failures are for the right reasons (missing implementation, not test errors). Confirm no false positives."
- Output: Test failure verification report
- **GATE**: Do not proceed until all tests fail appropriately

## Phase 3: GREEN - Make Tests Pass

### 5. Minimal Implementation

- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Implement MINIMAL code to make tests pass for: $ARGUMENTS. Focus only on making tests green. Do not add extra features or optimizations. Keep it simple."
- Output: Minimal working implementation
- Constraint: No code beyond what's needed to pass tests

### 6. Verify Test Success

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Run all tests for: $ARGUMENTS and verify they pass. Check test coverage metrics. Ensure no tests were accidentally broken."
- Output: Test execution report, coverage metrics
- **GATE**: All tests must pass before proceeding

## Phase 4: REFACTOR - Improve Code Quality

### 7. Code Refactoring

- Use Task tool with subagent_type="tdd-workflows::code-reviewer"
- Prompt: "Refactor implementation for: $ARGUMENTS while keeping tests green. Apply SOLID principles, remove duplication, improve naming, and optimize performance. Run tests after each refactoring."
- Output: Refactored code, refactoring report
- Constraint: Tests must remain green throughout

### 8. Test Refactoring

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Refactor tests for: $ARGUMENTS. Remove test duplication, improve test names, extract common fixtures, and enhance test readability. Ensure tests still provide same coverage."
- Output: Refactored tests, improved test structure
- Validation: Coverage metrics unchanged or improved

## Phase 5: Integration and System Tests

### 9. Write Integration Tests (Failing First)

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Write FAILING integration tests for: $ARGUMENTS. Test component interactions, API contracts, and data flow. Tests must fail initially."
- Output: Failing integration tests
- Validation: Tests fail due to missing integration logic

### 10. Implement Integration

- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Implement integration code for: $ARGUMENTS to make integration tests pass. Focus on component interaction and data flow."
- Output: Integration implementation
- Validation: All integration tests pass

## Phase 6: Continuous Improvement Cycle

### 11. Performance and Edge Case Tests

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Add performance tests and additional edge case tests for: $ARGUMENTS. Include stress tests, boundary tests, and error recovery tests."
- Output: Extended test suite
- Metric: Increased test coverage and scenario coverage

### 12. Final Code Review

- Use Task tool with subagent_type="comprehensive-review::architect-review"
- Prompt: "Perform comprehensive review of: $ARGUMENTS. Verify TDD process was followed, check code quality, test quality, and coverage. Suggest improvements."
- Output: Review report, improvement suggestions
- Action: Implement critical suggestions while maintaining green tests

---

## Phase 1: Test Specification and Design (Steps 1-2)

### Step 1: Requirements Analysis

Use the Task tool to analyze requirements:

```
Task:
  subagent_type: "general-purpose"
  description: "Analyze requirements for TDD: $FEATURE"
  prompt: |
    You are a software architect specializing in test-driven development.

    Analyze requirements for: $FEATURE

    ## Deliverables
    1. Define acceptance criteria with clear pass/fail conditions
    2. Identify edge cases (null/empty, boundary values, error states, concurrent access)
    3. Create a comprehensive test scenario matrix mapping requirements to test cases
    4. Categorize tests: unit, integration, contract, property-based
    5. Identify external dependencies that will need mocking

    Write your complete analysis as a single markdown document.
```

Save the agent's output to `.tdd-cycle/01-requirements.md`.

Update `state.json`: set `current_step` to 2, add `"01-requirements.md"` to `files_created`, add step 1 to `completed_steps`.

### Step 2: Test Architecture Design

Read `.tdd-cycle/01-requirements.md` to load requirements context.

Use the Task tool to design test architecture:

```
Task:
  subagent_type: "general-purpose"
  description: "Design test architecture for $FEATURE"
  prompt: |
    You are a test automation expert specializing in test architecture and TDD workflows.

    Design test architecture for: $FEATURE

    ## Requirements
    [Insert full contents of .tdd-cycle/01-requirements.md]

    ## Deliverables
    1. Test structure and organization (directory layout, naming conventions)
    2. Fixture design (shared setup, teardown, test data factories)
    3. Mock/stub strategy (what to mock, what to use real implementations for)
    4. Test data strategy (generators, factories, edge case data sets)
    5. Test execution order and parallelization plan
    6. Framework-specific configuration (matching project's existing test framework)

    Ensure architecture supports isolated, fast, reliable tests.
    Write your complete design as a single markdown document.
```

Save the agent's output to `.tdd-cycle/02-test-architecture.md`.

Update `state.json`: set `current_step` to "checkpoint-1", add step 2 to `completed_steps`.

---

## PHASE CHECKPOINT 1 — User Approval Required

You MUST stop here and present the test specification and architecture for review.

Display a summary of the requirements analysis from `.tdd-cycle/01-requirements.md` and test architecture from `.tdd-cycle/02-test-architecture.md` (key test scenarios, architecture decisions, mock strategy) and ask:

```
Test specification and architecture complete. Please review:
- .tdd-cycle/01-requirements.md
- .tdd-cycle/02-test-architecture.md

1. Approve — proceed to RED phase (write failing tests)
2. Request changes — tell me what to adjust
3. Pause — save progress and stop here
```

Do NOT proceed to Phase 2 until the user selects option 1. If they select option 2, revise and re-checkpoint. If option 3, update `state.json` status and stop.

---

## Phase 2: RED — Write Failing Tests (Steps 3-4)

### Step 3: Write Unit Tests (Failing)

Read `.tdd-cycle/01-requirements.md` and `.tdd-cycle/02-test-architecture.md`.

Use the Task tool:

```
Task:
  subagent_type: "general-purpose"
  description: "Write failing unit tests for $FEATURE"
  prompt: |
    You are a test automation expert specializing in TDD red phase.

    Write FAILING unit tests for: $FEATURE

    ## Requirements
    [Insert contents of .tdd-cycle/01-requirements.md]

    ## Test Architecture
    [Insert contents of .tdd-cycle/02-test-architecture.md]

    ## Instructions
    1. Tests must fail initially — DO NOT implement production code
    2. Include edge cases, error scenarios, and happy paths
    3. Use the project's existing test framework and conventions
    4. Follow Arrange-Act-Assert pattern
    5. Use descriptive test names (should_X_when_Y)
    6. Ensure failures are for the RIGHT reasons (missing implementation, not syntax errors)

    Write all test files. Report what test files were created and what they cover.
```

Save a summary to `.tdd-cycle/03-failing-tests.md` (list of test files, test count, coverage areas).

Update `state.json`: set `current_step` to 4, add step 3 to `completed_steps`.
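
Every "Update `state.json`" instruction in this workflow is the same read-modify-write pattern; a sketch of a helper that captures it (the helper itself is illustrative, not part of the command):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def advance_state(state_dir, next_step, completed_step, new_files=()):
    """Advance the TDD session state after a step completes."""
    state_file = Path(state_dir) / "state.json"
    state = json.loads(state_file.read_text())
    state["current_step"] = next_step          # int step number or "checkpoint-N"
    state["completed_steps"].append(completed_step)
    state["files_created"].extend(new_files)
    state["last_updated"] = datetime.now(timezone.utc).isoformat()
    state_file.write_text(json.dumps(state, indent=2))
    return state
```
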

### Step 4: Verify Test Failure

Use the Task tool with the local code-reviewer agent:

```
Task:
  subagent_type: "code-reviewer"
  description: "Verify tests fail correctly for $FEATURE"
  prompt: |
    Verify that all tests for: $FEATURE are failing correctly.

    ## Failing Tests
    [Insert contents of .tdd-cycle/03-failing-tests.md]

    ## Instructions
    1. Run the test suite and confirm all new tests fail
    2. Ensure failures are for the right reasons (missing implementation, not test errors)
    3. Confirm no false positives (tests that accidentally pass)
    4. Verify no existing tests were broken
    5. Check test quality: meaningful names, proper assertions, good error messages

    Report your findings. This is a GATE — do not approve if tests pass or fail for wrong reasons.
```

Save output to `.tdd-cycle/04-failure-verification.md`.

**GATE**: Do not proceed to Phase 3 unless all tests fail appropriately. If verification fails, fix tests and re-verify.

Update `state.json`: set `current_step` to "checkpoint-2", add step 4 to `completed_steps`.

---

## PHASE CHECKPOINT 2 — User Approval Required

Display a summary of the failing tests from `.tdd-cycle/03-failing-tests.md` and verification from `.tdd-cycle/04-failure-verification.md` and ask:

```
RED phase complete. All tests are failing as expected.

Test count: [number]
Coverage areas: [summary]
Verification: [pass/fail summary]

1. Approve — proceed to GREEN phase (make tests pass)
2. Request changes — adjust tests before implementing
3. Pause — save progress and stop here
```

---

## Phase 3: GREEN — Make Tests Pass (Steps 5-6)

### Step 5: Minimal Implementation

Read `.tdd-cycle/01-requirements.md`, `.tdd-cycle/02-test-architecture.md`, and `.tdd-cycle/03-failing-tests.md`.

Use the Task tool:

```
Task:
  subagent_type: "general-purpose"
  description: "Implement minimal code to pass tests for $FEATURE"
  prompt: |
    You are a backend architect implementing the GREEN phase of TDD.

    Implement MINIMAL code to make tests pass for: $FEATURE

    ## Requirements
    [Insert contents of .tdd-cycle/01-requirements.md]

    ## Test Architecture
    [Insert contents of .tdd-cycle/02-test-architecture.md]

    ## Failing Tests
    [Insert contents of .tdd-cycle/03-failing-tests.md]

    ## Instructions
    1. Focus ONLY on making tests green — no extra features or optimizations
    2. Use the simplest implementation that passes each test
    3. Follow the project's existing code patterns and conventions
    4. Keep methods/functions small and focused
    5. Don't add error handling unless tests require it
    6. Document shortcuts taken for the refactor phase

    Write all code files. Report what files were created/modified and any technical debt noted.
```

Save a summary to `.tdd-cycle/05-implementation.md`.

Update `state.json`: set `current_step` to 6, add step 5 to `completed_steps`.

### Step 6: Verify Test Success

Use the Task tool:

```
Task:
  subagent_type: "general-purpose"
  description: "Verify all tests pass for $FEATURE"
  prompt: |
    You are a test automation expert verifying TDD green phase completion.

    Run all tests for: $FEATURE and verify they pass.

    ## Implementation
    [Insert contents of .tdd-cycle/05-implementation.md]

    ## Instructions
    1. Run the full test suite
    2. Verify ALL new tests pass (green)
    3. Verify no existing tests were broken
    4. Check test coverage metrics against targets
    5. Confirm implementation is truly minimal (no gold plating)

    Report test execution results, coverage metrics, and any issues found.
```

Save output to `.tdd-cycle/06-green-verification.md`.

**GATE**: All tests must pass before proceeding. If tests fail, return to Step 5 and fix.

Update `state.json`: set `current_step` to "checkpoint-3", add step 6 to `completed_steps`.

---

## PHASE CHECKPOINT 3 — User Approval Required

Display results from `.tdd-cycle/06-green-verification.md` and ask:

```
GREEN phase complete. All tests passing.

Test results: [pass/fail counts]
Coverage: [metrics]

1. Approve — proceed to REFACTOR phase
2. Request changes — adjust implementation
3. Pause — save progress and stop here
```

---

## Phase 4: REFACTOR — Improve Code Quality (Steps 7-8)

### Step 7: Code Refactoring

Read `.tdd-cycle/05-implementation.md` and `.tdd-cycle/06-green-verification.md`.

Use the Task tool with the local code-reviewer agent:

```
Task:
  subagent_type: "code-reviewer"
  description: "Refactor implementation for $FEATURE"
  prompt: |
    Refactor the implementation for: $FEATURE while keeping all tests green.

    ## Implementation
    [Insert contents of .tdd-cycle/05-implementation.md]

    ## Green Verification
    [Insert contents of .tdd-cycle/06-green-verification.md]

    ## Instructions
    1. Apply SOLID principles where appropriate
    2. Remove code duplication
    3. Improve naming for clarity
    4. Optimize performance where tests support it
    5. Run tests after each refactoring step — tests MUST remain green
    6. Apply refactoring triggers: complexity > 10, method > 20 lines, class > 200 lines, duplication > 3 lines

    Report all refactoring changes made and confirm tests still pass.
```

Save output to `.tdd-cycle/07-refactored-code.md`.

Update `state.json`: set `current_step` to 8, add step 7 to `completed_steps`.

### Step 8: Test Refactoring

Use the Task tool:

```
Task:
  subagent_type: "general-purpose"
  description: "Refactor tests for $FEATURE"
  prompt: |
    You are a test automation expert refactoring tests for clarity and maintainability.

    Refactor tests for: $FEATURE

    ## Current Tests
    [Insert contents of .tdd-cycle/03-failing-tests.md]

    ## Refactored Code
    [Insert contents of .tdd-cycle/07-refactored-code.md]

    ## Instructions
    1. Remove test duplication — extract common fixtures
    2. Improve test names for clarity and documentation value
    3. Ensure tests still provide the same coverage
    4. Optimize test execution speed where possible
    5. Verify coverage metrics unchanged or improved

    Report all test refactoring changes and confirm coverage is maintained.
```

Save output to `.tdd-cycle/08-refactored-tests.md`.

Update `state.json`: set `current_step` to "checkpoint-4", add step 8 to `completed_steps`.

---

## PHASE CHECKPOINT 4 — User Approval Required

Display refactoring summary from `.tdd-cycle/07-refactored-code.md` and `.tdd-cycle/08-refactored-tests.md` and ask:

```
REFACTOR phase complete.

Code changes: [summary of refactoring]
Test changes: [summary of test improvements]
Coverage: [maintained/improved]

1. Approve — proceed to integration testing
2. Request changes — adjust refactoring
3. Pause — save progress and stop here
```

---

## Phase 5: Integration and Extended Testing (Steps 9-11)

### Step 9: Write Integration Tests (Failing First)

Read `.tdd-cycle/07-refactored-code.md`.

Use the Task tool:

```
Task:
  subagent_type: "general-purpose"
  description: "Write failing integration tests for $FEATURE"
  prompt: |
    You are a test automation expert writing integration tests in TDD style.

    Write FAILING integration tests for: $FEATURE

    ## Refactored Implementation
    [Insert contents of .tdd-cycle/07-refactored-code.md]

    ## Instructions
    1. Test component interactions, API contracts, and data flow
    2. Tests must fail initially (follow red-green-refactor)
    3. Focus on integration points identified in the architecture
    4. Include contract tests for API boundaries
    5. Follow existing project test patterns

    Write test files and report what they cover.
```

Save output to `.tdd-cycle/09-integration-tests.md`.

Update `state.json`: set `current_step` to 10, add step 9 to `completed_steps`.

### Step 10: Implement Integration

Use the Task tool:

```
Task:
  subagent_type: "general-purpose"
  description: "Implement integration code for $FEATURE"
  prompt: |
    You are a backend architect implementing integration code.

    Implement integration code for: $FEATURE to make integration tests pass.

    ## Integration Tests
    [Insert contents of .tdd-cycle/09-integration-tests.md]

    ## Instructions
    1. Focus on component interaction and data flow
    2. Implement only what's needed to pass integration tests
    3. Follow existing project patterns for integration code

    Write code and report what was created/modified.
```

Save output to `.tdd-cycle/10-integration-impl.md`.

Update `state.json`: set `current_step` to 11, add step 10 to `completed_steps`.

### Step 11: Performance and Edge Case Tests

Use the Task tool:

```
Task:
  subagent_type: "general-purpose"
  description: "Add performance and edge case tests for $FEATURE"
  prompt: |
    You are a test automation expert adding extended test coverage.

    Add performance tests and additional edge case tests for: $FEATURE

    ## Current Implementation
    [Insert contents of .tdd-cycle/10-integration-impl.md]

    ## Instructions
    1. Add stress tests and boundary tests
    2. Add error recovery tests
    3. Include performance benchmarks where appropriate
    4. Ensure all new tests pass

    Write test files and report coverage improvements.
```

Save output to `.tdd-cycle/11-extended-tests.md`.

Update `state.json`: set `current_step` to 12, add step 11 to `completed_steps`.

---

## Phase 6: Final Review (Step 12)

### Step 12: Final Code Review

Read all `.tdd-cycle/*.md` files.

Use the Task tool with the local code-reviewer agent:

```
Task:
  subagent_type: "code-reviewer"
  description: "Final TDD review of $FEATURE"
  prompt: |
    Perform comprehensive final review of: $FEATURE

    ## All Artifacts
    [Insert contents of all .tdd-cycle/*.md files]

    ## Instructions
    1. Verify TDD process was followed (red-green-refactor discipline)
    2. Check code quality and SOLID principle adherence
    3. Assess test quality and coverage completeness
    4. Verify no anti-patterns (test-after, skipped refactoring, etc.)
    5. Suggest any remaining improvements

    Provide a final review report with findings and recommendations.
```

Save output to `.tdd-cycle/12-final-review.md`.

Update `state.json`: set `current_step` to "complete", add step 12 to `completed_steps`.

---

## Completion

Update `state.json`:

- Set `status` to `"complete"`
- Set `last_updated` to current timestamp

Present the final summary:

```
TDD cycle complete: $FEATURE

## Files Created
[List all .tdd-cycle/ output files]

## TDD Metrics
- Test count: [total tests written]
- Coverage: [line/branch/function coverage]
- Phases completed: Specification > RED > GREEN > REFACTOR > Integration > Review
- Mode: [incremental|suite]

## Artifacts
- Requirements: .tdd-cycle/01-requirements.md
- Test Architecture: .tdd-cycle/02-test-architecture.md
- Failing Tests: .tdd-cycle/03-failing-tests.md
- Failure Verification: .tdd-cycle/04-failure-verification.md
- Implementation: .tdd-cycle/05-implementation.md
- Green Verification: .tdd-cycle/06-green-verification.md
- Refactored Code: .tdd-cycle/07-refactored-code.md
- Refactored Tests: .tdd-cycle/08-refactored-tests.md
- Integration Tests: .tdd-cycle/09-integration-tests.md
- Integration Impl: .tdd-cycle/10-integration-impl.md
- Extended Tests: .tdd-cycle/11-extended-tests.md
- Final Review: .tdd-cycle/12-final-review.md

## Next Steps
1. Review all generated code and test files
2. Run the full test suite to verify everything passes
3. Create a pull request with the implementation
4. Monitor coverage metrics in CI
```

## Incremental Development Mode

For test-by-test development:
When `--incremental` flag is present:

1. Write ONE failing test
2. Make ONLY that test pass
3. Refactor if needed
4. Repeat for next test

Use this approach by adding `--incremental` flag to focus on one test at a time.
The orchestrator adjusts the RED-GREEN-REFACTOR phases to operate on a single test at a time rather than full test suites.
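
The per-test loop described above can be sketched schematically; all callbacks here are assumed to be supplied by the orchestrator, nothing in this sketch is part of the command itself:

```python
def incremental_cycle(tests, write_test, implement, refactor, run_tests):
    """One red-green-refactor pass per test case (schematic)."""
    for case in tests:
        write_test(case)
        # RED: the new test must fail before any production code is written.
        assert not run_tests(case), "RED: new test must fail first"
        implement(case)
        # GREEN: minimal implementation makes exactly this test pass.
        assert run_tests(case), "GREEN: minimal code must make it pass"
        refactor(case)
        # REFACTOR: behavior unchanged, so the test stays green.
        assert run_tests(case), "REFACTOR: tests stay green"
```
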

## Test Suite Mode

For comprehensive test suite development:

1. Write ALL tests for a feature/module (failing)
2. Implement code to pass ALL tests
3. Refactor entire module
4. Add integration tests

Use this approach by adding `--suite` flag for batch test development.

## Validation Checkpoints
## Validation Checklists

### RED Phase Validation

@@ -159,35 +639,6 @@ Use this approach by adding `--suite` flag for batch test development.
- [ ] Performance improved or maintained
- [ ] Test readability improved

## Coverage Reports

Generate coverage reports after each phase:

- Line coverage
- Branch coverage
- Function coverage
- Statement coverage

## Failure Recovery

If TDD discipline is broken:

1. **STOP** immediately
2. Identify which phase was violated
3. Rollback to last valid state
4. Resume from correct phase
5. Document lesson learned

## TDD Metrics Tracking

Track and report:

- Time in each phase (Red/Green/Refactor)
- Number of test-implementation cycles
- Coverage progression
- Refactoring frequency
- Defect escape rate

## Anti-Patterns to Avoid

- Writing implementation before tests
@@ -198,24 +649,12 @@ Track and report:
- Ignoring failing tests
- Writing tests after implementation

## Success Criteria

- 100% of code written test-first
- All tests pass continuously
- Coverage exceeds thresholds
- Code complexity within limits
- Zero defects in covered code
- Clear test documentation
- Fast test execution (< 5 seconds for unit tests)

## Notes

- Enforce strict RED-GREEN-REFACTOR discipline
- Each phase must be completed before moving to next
- Tests are the specification
- If a test is hard to write, the design needs improvement
- Refactoring is NOT optional
- Keep test execution fast
- Tests should be independent and isolated

TDD implementation for: $ARGUMENTS

## Failure Recovery

If TDD discipline is broken:

1. **STOP** immediately
2. Identify which phase was violated
3. Rollback to last valid state
4. Resume from correct phase
5. Document lesson learned
@@ -1,98 +1,79 @@
|
||||
Implement minimal code to make failing tests pass in TDD green phase:
---
description: "Implement minimal code to make failing tests pass in TDD green phase"
argument-hint: "<description of failing tests or test file paths>"
---

[Extended thinking: This tool uses the test-automator agent to implement the minimal code necessary to make tests pass. It focuses on simplicity, avoiding over-engineering while ensuring all tests become green.]
# TDD Green Phase

## CRITICAL BEHAVIORAL RULES

You MUST follow these rules exactly. Violating any of them is a failure.

1. **Implement only what tests require.** Do NOT add features, optimizations, or error handling beyond what failing tests demand.
2. **Run tests after each change.** Verify progress incrementally — do not batch implement and hope it works.
3. **Halt on failure.** If tests remain red after implementation or existing tests break, STOP and present the error to the user.
4. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
5. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. Execute directly.

## Implementation Process

Use Task tool with subagent_type="unit-testing::test-automator" to implement minimal passing code.
Use the Task tool to implement minimal passing code:
Prompt: "Implement MINIMAL code to make these failing tests pass: $ARGUMENTS. Follow TDD green phase principles:
```
Task:
subagent_type: "general-purpose"
description: "Implement minimal code to pass failing tests"
prompt: |
You are a test automation expert implementing the GREEN phase of TDD.

1. **Pre-Implementation Analysis**
   - Review all failing tests and their error messages
   - Identify the simplest path to make tests pass
   - Map test requirements to minimal implementation needs
   - Avoid premature optimization or over-engineering
   - Focus only on making tests green, not perfect code
Implement MINIMAL code to make these failing tests pass: $ARGUMENTS

2. **Implementation Strategy**
   - **Fake It**: Return hard-coded values when appropriate
   - **Obvious Implementation**: When solution is trivial and clear
   - **Triangulation**: Generalize only when multiple tests require it
   - Start with the simplest test and work incrementally
   - One test at a time - don't try to pass all at once
Follow TDD green phase principles:

3. **Code Structure Guidelines**
   - Write the minimal code that could possibly work
   - Avoid adding functionality not required by tests
   - Use simple data structures initially
   - Defer architectural decisions until refactor phase
   - Keep methods/functions small and focused
   - Don't add error handling unless tests require it
1. **Pre-Implementation Analysis**
   - Review all failing tests and their error messages
   - Identify the simplest path to make tests pass
   - Map test requirements to minimal implementation needs
   - Avoid premature optimization or over-engineering
   - Focus only on making tests green, not perfect code
4. **Language-Specific Patterns**
   - **JavaScript/TypeScript**: Simple functions, avoid classes initially
   - **Python**: Functions before classes, simple returns
   - **Java**: Minimal class structure, no patterns yet
   - **C#**: Basic implementations, no interfaces yet
   - **Go**: Simple functions, defer goroutines/channels
   - **Ruby**: Procedural before object-oriented when possible
2. **Implementation Strategy**
   - **Fake It**: Return hard-coded values when appropriate
   - **Obvious Implementation**: When solution is trivial and clear
   - **Triangulation**: Generalize only when multiple tests require it
   - Start with the simplest test and work incrementally
   - One test at a time — don't try to pass all at once

5. **Progressive Implementation**
   - Make first test pass with simplest possible code
   - Run tests after each change to verify progress
   - Add just enough code for next failing test
   - Resist urge to implement beyond test requirements
   - Keep track of technical debt for refactor phase
   - Document assumptions and shortcuts taken
3. **Code Structure Guidelines**
   - Write the minimal code that could possibly work
   - Avoid adding functionality not required by tests
   - Use simple data structures initially
   - Defer architectural decisions until refactor phase
   - Keep methods/functions small and focused
   - Don't add error handling unless tests require it

6. **Common Green Phase Techniques**
   - Hard-coded returns for initial tests
   - Simple if/else for limited test cases
   - Basic loops only when iteration tests require
   - Minimal data structures (arrays before complex objects)
   - In-memory storage before database integration
   - Synchronous before asynchronous implementation
4. **Progressive Implementation**
   - Make first test pass with simplest possible code
   - Run tests after each change to verify progress
   - Add just enough code for next failing test
   - Resist urge to implement beyond test requirements
   - Keep track of technical debt for refactor phase
   - Document assumptions and shortcuts taken
7. **Success Criteria**
   ✓ All tests pass (green)
   ✓ No extra functionality beyond test requirements
   ✓ Code is readable even if not optimal
   ✓ No broken existing functionality
   ✓ Implementation time is minimized
   ✓ Clear path to refactoring identified
5. **Success Criteria**
   - All tests pass (green)
   - No extra functionality beyond test requirements
   - Code is readable even if not optimal
   - No broken existing functionality
   - Clear path to refactoring identified

8. **Anti-Patterns to Avoid**
   - Gold plating or adding unrequested features
   - Implementing design patterns prematurely
   - Complex abstractions without test justification
   - Performance optimizations without metrics
   - Adding tests during green phase
   - Refactoring during implementation
   - Ignoring test failures to move forward

9. **Implementation Metrics**
   - Time to green: Track implementation duration
   - Lines of code: Measure implementation size
   - Cyclomatic complexity: Keep it low initially
   - Test pass rate: Must reach 100%
   - Code coverage: Verify all paths tested

10. **Validation Steps**
    - Run all tests and confirm they pass
    - Verify no regression in existing tests
    - Check that implementation is truly minimal
    - Document any technical debt created
    - Prepare notes for refactoring phase
Output should include:

- Complete implementation code
- Test execution results showing all green
- List of shortcuts taken for later refactoring
- Implementation time metrics
- Technical debt documentation
- Readiness assessment for refactor phase"
Output should include:
- Complete implementation code
- Test execution results showing all green
- List of shortcuts taken for later refactoring
- Technical debt documentation
- Readiness assessment for refactor phase
```

## Post-Implementation Checks

@@ -116,788 +97,8 @@ If tests still fail:

## Integration Points

- Follows from tdd-red.md test creation
- Prepares for tdd-refactor.md improvements
- Follows from tdd-red test creation
- Prepares for tdd-refactor improvements
- Updates test coverage metrics
- Triggers CI/CD pipeline verification
- Documents technical debt for tracking

## Best Practices

- Embrace "good enough" for this phase
- Speed over perfection (perfection comes in refactor)
- Make it work, then make it right, then make it fast
- Trust that refactoring phase will improve code
- Keep changes small and incremental
- Celebrate reaching green state!
## Complete Implementation Examples

### Example 1: Minimal → Production-Ready (User Service)

**Test Requirements:**

```typescript
describe("UserService", () => {
  it("should create a new user", async () => {
    const user = await userService.create({
      email: "test@example.com",
      name: "Test",
    });
    expect(user.id).toBeDefined();
    expect(user.email).toBe("test@example.com");
  });

  it("should find user by email", async () => {
    await userService.create({ email: "test@example.com", name: "Test" });
    const user = await userService.findByEmail("test@example.com");
    expect(user).toBeDefined();
  });
});
```

**Stage 1: Fake It (Minimal)**

```typescript
class UserService {
  create(data: { email: string; name: string }) {
    return { id: "123", email: data.email, name: data.name };
  }

  findByEmail(email: string) {
    return { id: "123", email: email, name: "Test" };
  }
}
```

_Tests pass. Implementation is obviously fake but validates test structure._
**Stage 2: Simple Real Implementation**

```typescript
class UserService {
  private users: Map<string, User> = new Map();
  private nextId = 1;

  create(data: { email: string; name: string }) {
    const user = { id: String(this.nextId++), ...data };
    this.users.set(user.email, user);
    return user;
  }

  findByEmail(email: string) {
    return this.users.get(email) || null;
  }
}
```

_In-memory storage. Tests pass. Good enough for green phase._

**Stage 3: Production-Ready (Refactor Phase)**

```typescript
class UserService {
  constructor(private db: Database) {}

  async create(data: { email: string; name: string }) {
    const existing = await this.db.query(
      "SELECT * FROM users WHERE email = ?",
      [data.email],
    );
    if (existing) throw new Error("User exists");

    const id = await this.db.insert("users", data);
    return { id, ...data };
  }

  async findByEmail(email: string) {
    return this.db.queryOne("SELECT * FROM users WHERE email = ?", [email]);
  }
}
```

_Database integration, error handling, validation - saved for refactor phase._
### Example 2: API-First Implementation (Express)

**Test Requirements:**

```javascript
describe("POST /api/tasks", () => {
  it("should create task and return 201", async () => {
    const res = await request(app)
      .post("/api/tasks")
      .send({ title: "Test Task" });

    expect(res.status).toBe(201);
    expect(res.body.id).toBeDefined();
    expect(res.body.title).toBe("Test Task");
  });
});
```

**Stage 1: Hardcoded Response**

```javascript
app.post("/api/tasks", (req, res) => {
  res.status(201).json({ id: "1", title: req.body.title });
});
```

_Tests pass immediately. No logic needed yet._

**Stage 2: Simple Logic**

```javascript
let tasks = [];
let nextId = 1;

app.post("/api/tasks", (req, res) => {
  const task = { id: String(nextId++), title: req.body.title };
  tasks.push(task);
  res.status(201).json(task);
});
```

_Minimal state management. Ready for more tests._
**Stage 3: Layered Architecture (Refactor)**

```javascript
// Controller
app.post('/api/tasks', async (req, res) => {
  try {
    const task = await taskService.create(req.body);
    res.status(201).json(task);
  } catch (error) {
    res.status(400).json({ error: error.message });
  }
});

// Service layer
class TaskService {
  constructor(private repository: TaskRepository) {}

  async create(data: CreateTaskDto): Promise<Task> {
    this.validate(data);
    return this.repository.save(data);
  }
}
```

_Proper separation of concerns added during refactor phase._

### Example 3: Database Integration (Django)

**Test Requirements:**

```python
def test_product_creation():
    product = Product.objects.create(name="Widget", price=9.99)
    assert product.id is not None
    assert product.name == "Widget"

def test_product_price_validation():
    with pytest.raises(ValidationError):
        Product.objects.create(name="Widget", price=-1)
```
**Stage 1: Model Only**

```python
class Product(models.Model):
    name = models.CharField(max_length=200)
    price = models.DecimalField(max_digits=10, decimal_places=2)
```

_First test passes. Second test fails - validation not implemented._

**Stage 2: Add Validation**

```python
class Product(models.Model):
    name = models.CharField(max_length=200)
    price = models.DecimalField(max_digits=10, decimal_places=2)

    def clean(self):
        if self.price < 0:
            raise ValidationError("Price cannot be negative")

    def save(self, *args, **kwargs):
        self.clean()
        super().save(*args, **kwargs)
```

_All tests pass. Minimal validation logic added._
**Stage 3: Rich Domain Model (Refactor)**

```python
class Product(models.Model):
    name = models.CharField(max_length=200)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        indexes = [models.Index(fields=['category', '-created_at'])]

    def clean(self):
        if self.price < 0:
            raise ValidationError("Price cannot be negative")
        if self.price > 10000:
            raise ValidationError("Price exceeds maximum")

    def apply_discount(self, percentage: float) -> Decimal:
        return self.price * (1 - percentage / 100)
```

_Additional features, indexes, business logic added when needed._
### Example 4: React Component Implementation

**Test Requirements:**

```typescript
describe('UserProfile', () => {
  it('should display user name', () => {
    render(<UserProfile user={{ name: 'John', email: 'john@test.com' }} />);
    expect(screen.getByText('John')).toBeInTheDocument();
  });

  it('should display email', () => {
    render(<UserProfile user={{ name: 'John', email: 'john@test.com' }} />);
    expect(screen.getByText('john@test.com')).toBeInTheDocument();
  });
});
```

**Stage 1: Minimal JSX**

```typescript
interface UserProfileProps {
  user: { name: string; email: string };
}

const UserProfile: React.FC<UserProfileProps> = ({ user }) => (
  <div>
    <div>{user.name}</div>
    <div>{user.email}</div>
  </div>
);
```

_Tests pass. No styling, no structure._
**Stage 2: Basic Structure**

```typescript
const UserProfile: React.FC<UserProfileProps> = ({ user }) => (
  <div className="user-profile">
    <h2>{user.name}</h2>
    <p>{user.email}</p>
  </div>
);
```

_Added semantic HTML, className for styling hook._

**Stage 3: Production Component (Refactor)**

```typescript
const UserProfile: React.FC<UserProfileProps> = ({ user }) => {
  const [isEditing, setIsEditing] = useState(false);

  return (
    <div className="user-profile" role="article" aria-label="User profile">
      <header>
        <h2>{user.name}</h2>
        <button onClick={() => setIsEditing(true)} aria-label="Edit profile">
          Edit
        </button>
      </header>
      <section>
        <p>{user.email}</p>
        {user.bio && <p>{user.bio}</p>}
      </section>
    </div>
  );
};
```

_Accessibility, interaction, additional features added incrementally._
## Decision Frameworks

### Framework 1: Fake vs. Real Implementation

**When to Fake It:**

- First test for a new feature
- Complex external dependencies (payment gateways, APIs)
- Implementation approach is still uncertain
- Need to validate test structure first
- Time pressure to see all tests green

**When to Go Real:**

- Second or third test reveals pattern
- Implementation is obvious and simple
- Faking would be more complex than real code
- Need to test integration points
- Tests explicitly require real behavior

**Decision Matrix:**

```
Complexity     Low  |  High
                ↓   |   ↓
Simple   →    REAL  |  FAKE first, real later
Complex  →    REAL  |  FAKE, evaluate alternatives
```
### Framework 2: Complexity Trade-off Analysis

**Simplicity Score Calculation:**

```
Score = (Lines of Code) + (Cyclomatic Complexity × 2) + (Dependencies × 3)

< 20   → Simple enough, implement directly
20-50  → Consider simpler alternative
> 50   → Defer complexity to refactor phase
```
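As a sketch, the scoring rule can be computed directly. The `CodeStats` shape and the `simplicityScore`/`verdict` names below are illustrative assumptions; gathering the inputs is left to whatever metrics tooling the project already uses. The middle band treats 50 as inclusive, matching the "20-50" range above:

```typescript
// Hypothetical helper implementing the simplicity score formula above.
interface CodeStats {
  linesOfCode: number;
  cyclomaticComplexity: number;
  dependencies: number;
}

function simplicityScore(stats: CodeStats): number {
  // Score = LOC + (complexity × 2) + (dependencies × 3)
  return stats.linesOfCode + stats.cyclomaticComplexity * 2 + stats.dependencies * 3;
}

function verdict(score: number): string {
  if (score < 20) return "implement directly";
  if (score <= 50) return "consider simpler alternative";
  return "defer complexity to refactor phase";
}
```

For example, a 10-line function with complexity 2 and one dependency scores 17, landing in the "implement directly" band.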

**Example Evaluation:**

```typescript
// Option A: Direct implementation (Score: 45)
function calculateShipping(
  weight: number,
  distance: number,
  express: boolean,
): number {
  let base = weight * 0.5 + distance * 0.1;
  if (express) base *= 2;
  if (weight > 50) base += 10;
  if (distance > 1000) base += 20;
  return base;
}

// Option B: Simplest for green phase (Score: 15)
function calculateShipping(
  weight: number,
  distance: number,
  express: boolean,
): number {
  return express ? 50 : 25; // Fake it until more tests drive real logic
}
```

_Choose Option B for green phase, evolve to Option A as tests require._
### Framework 3: Performance Consideration Timing

**Green Phase: Focus on Correctness**

```
❌ Avoid:
- Caching strategies
- Database query optimization
- Algorithmic complexity improvements
- Premature memory optimization

✓ Accept:
- O(n²) if it makes code simpler
- Multiple database queries
- Synchronous operations
- Inefficient but clear algorithms
```

**When Performance Matters in Green Phase:**

1. Performance is explicit test requirement
2. Implementation would cause timeout in test suite
3. Memory leak would crash tests
4. Resource exhaustion prevents testing

**Performance Testing Integration:**

```typescript
// Add performance test AFTER functional tests pass
describe("Performance", () => {
  it("should handle 1000 users within 100ms", () => {
    const start = Date.now();
    for (let i = 0; i < 1000; i++) {
      userService.create({ email: `user${i}@test.com`, name: `User ${i}` });
    }
    expect(Date.now() - start).toBeLessThan(100);
  });
});
```
## Framework-Specific Patterns

### React Patterns

**Simple Component → Hooks → Context:**

```typescript
// Green Phase: Props only
const Counter = ({ count, onIncrement }) => (
  <button onClick={onIncrement}>{count}</button>
);

// Refactor: Add hooks
const Counter = () => {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(c => c + 1)}>{count}</button>;
};

// Refactor: Extract to context
const Counter = () => {
  const { count, increment } = useCounter();
  return <button onClick={increment}>{count}</button>;
};
```

### Django Patterns

**Function View → Class View → Generic View:**

```python
# Green Phase: Simple function
def product_list(request):
    products = Product.objects.all()
    return JsonResponse({'products': list(products.values())})

# Refactor: Class-based view
class ProductListView(View):
    def get(self, request):
        products = Product.objects.all()
        return JsonResponse({'products': list(products.values())})

# Refactor: Generic view
class ProductListView(ListView):
    model = Product
    context_object_name = 'products'
```
### Express Patterns

**Inline → Middleware → Service Layer:**

```javascript
// Green Phase: Inline logic
app.post("/api/users", (req, res) => {
  const user = { id: Date.now(), ...req.body };
  users.push(user);
  res.json(user);
});

// Refactor: Extract middleware
app.post("/api/users", validateUser, (req, res) => {
  const user = userService.create(req.body);
  res.json(user);
});

// Refactor: Full layering
app.post("/api/users", validateUser, asyncHandler(userController.create));
```
## Refactoring Resistance Patterns

### Pattern 1: Test Anchor Points

Keep tests green during refactoring by maintaining interface contracts:

```typescript
// Original implementation (tests green)
function calculateTotal(items: Item[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// Refactoring: Add tax calculation (keep interface)
function calculateTotal(items: Item[]): number {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  const tax = subtotal * 0.1;
  return subtotal + tax;
}

// Tests still green because return type/behavior unchanged
```
### Pattern 2: Parallel Implementation

Run old and new implementations side by side:

```python
def process_order(order):
    # Old implementation (tests depend on this)
    result_old = legacy_process(order)

    # New implementation (testing in parallel)
    result_new = new_process(order)

    # Verify they match
    assert result_old == result_new, "Implementation mismatch"

    return result_old  # Keep tests green
```

### Pattern 3: Feature Flags for Refactoring

```javascript
class PaymentService {
  processPayment(amount) {
    if (config.USE_NEW_PAYMENT_PROCESSOR) {
      return this.newPaymentProcessor(amount);
    }
    return this.legacyPaymentProcessor(amount);
  }
}
```
## Performance-First Green Phase Strategies

### Strategy 1: Type-Driven Development

Use types to guide minimal implementation:

```typescript
// Types define contract
interface UserRepository {
  findById(id: string): Promise<User | null>;
  save(user: User): Promise<void>;
}

// Green phase: In-memory implementation
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();

  async findById(id: string) {
    return this.users.get(id) || null;
  }

  async save(user: User) {
    this.users.set(user.id, user);
  }
}

// Refactor: Database implementation (same interface)
class DatabaseUserRepository implements UserRepository {
  constructor(private db: Database) {}

  async findById(id: string) {
    return this.db.query("SELECT * FROM users WHERE id = ?", [id]);
  }

  async save(user: User) {
    await this.db.insert("users", user);
  }
}
```
### Strategy 2: Contract Testing Integration

```typescript
// Define contract
const userServiceContract = {
  create: {
    input: { email: "string", name: "string" },
    output: { id: "string", email: "string", name: "string" },
  },
};

// Green phase: Implementation matches contract
class UserService {
  create(data: { email: string; name: string }) {
    return { id: "123", ...data }; // Minimal but contract-compliant
  }
}

// Contract test ensures compliance
describe("UserService Contract", () => {
  it("should match create contract", () => {
    const result = userService.create({ email: "test@test.com", name: "Test" });
    expect(typeof result.id).toBe("string");
    expect(typeof result.email).toBe("string");
    expect(typeof result.name).toBe("string");
  });
});
```
### Strategy 3: Continuous Refactoring Workflow

**Micro-Refactoring During Green Phase:**

```python
# Test passes with this
def calculate_discount(price, customer_type):
    if customer_type == 'premium':
        return price * 0.8
    return price

# Immediate micro-refactor (tests still green)
DISCOUNT_RATES = {
    'premium': 0.8,
    'standard': 1.0
}

def calculate_discount(price, customer_type):
    rate = DISCOUNT_RATES.get(customer_type, 1.0)
    return price * rate
```

**Safe Refactoring Checklist:**

- ✓ Tests green before refactoring
- ✓ Change one thing at a time
- ✓ Run tests after each change
- ✓ Commit after each successful refactor
- ✓ No behavior changes, only structure
## Modern Development Practices (2024/2025)

### Type-Driven Development

**Python Type Hints:**

```python
from typing import Optional, List
from dataclasses import dataclass

@dataclass
class User:
    id: str
    email: str
    name: str

class UserService:
    def create(self, email: str, name: str) -> User:
        return User(id="123", email=email, name=name)

    def find_by_email(self, email: str) -> Optional[User]:
        return None  # Minimal implementation
```

**TypeScript Strict Mode:**

```typescript
// Enable strict mode in tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true
  }
}

// Implementation guided by types
interface CreateUserDto {
  email: string;
  name: string;
}

class UserService {
  create(data: CreateUserDto): User {
    // Type system enforces contract
    return { id: '123', email: data.email, name: data.name };
  }
}
```
### AI-Assisted Green Phase

**Using Copilot/AI Tools:**

1. Write test first (human-driven)
2. Let AI suggest minimal implementation
3. Verify suggestion passes tests
4. Accept if truly minimal, reject if over-engineered
5. Iterate with AI for refactoring phase

**AI Prompt Pattern:**

```
Given these failing tests:
[paste tests]

Provide the MINIMAL implementation that makes tests pass.
Do not add error handling, validation, or features beyond test requirements.
Focus on simplicity over completeness.
```
### Cloud-Native Patterns

**Local → Container → Cloud:**

```javascript
// Green Phase: Local implementation
class CacheService {
  private cache = new Map();

  get(key) { return this.cache.get(key); }
  set(key, value) { this.cache.set(key, value); }
}

// Refactor: Redis-compatible interface
class CacheService {
  constructor(private redis) {}

  async get(key) { return this.redis.get(key); }
  async set(key, value) { return this.redis.set(key, value); }
}

// Production: Distributed cache with fallback
class CacheService {
  constructor(private redis, private fallback) {}

  async get(key) {
    try {
      return await this.redis.get(key);
    } catch {
      return this.fallback.get(key);
    }
  }
}
```
### Observability-Driven Development

**Add observability hooks during green phase:**

```typescript
class OrderService {
  async createOrder(data: CreateOrderDto): Promise<Order> {
    console.log("[OrderService] Creating order", { data }); // Simple logging

    const order = { id: "123", ...data };

    console.log("[OrderService] Order created", { orderId: order.id }); // Success log

    return order;
  }
}

// Refactor: Structured logging
class OrderService {
  constructor(private logger: Logger) {}

  async createOrder(data: CreateOrderDto): Promise<Order> {
    const start = Date.now(); // capture start time for the duration metric below
    this.logger.info("order.create.start", { data });

    const order = await this.repository.save(data);

    this.logger.info("order.create.success", {
      orderId: order.id,
      duration: Date.now() - start,
    });

    return order;
  }
}
```

Tests to make pass: $ARGUMENTS
@@ -1,82 +1,92 @@
Write comprehensive failing tests following TDD red phase principles.
---
description: "Write comprehensive failing tests following TDD red phase principles"
argument-hint: "<feature or component to write tests for>"
---

[Extended thinking: Generates failing tests that properly define expected behavior using test-automator agent.]
# TDD Red Phase

## Role
## CRITICAL BEHAVIORAL RULES

Generate failing tests using Task tool with subagent_type="unit-testing::test-automator".
You MUST follow these rules exactly. Violating any of them is a failure.

## Prompt Template
1. **Write tests only — no production code.** Do NOT implement any production code during this phase.
2. **Verify tests fail.** All generated tests MUST fail when run. If any test passes, investigate and fix.
3. **Halt on error.** If test generation fails (syntax errors, import issues), STOP and present the error to the user.
4. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
5. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. Execute directly.

"Generate comprehensive FAILING tests for: $ARGUMENTS
## Test Generation Process

## Core Requirements
Use the Task tool to generate failing tests:

1. **Test Structure**
- Framework-appropriate setup (Jest/pytest/JUnit/Go/RSpec)
- Arrange-Act-Assert pattern
- should_X_when_Y naming convention
- Isolated fixtures with no interdependencies
```
Task:
subagent_type: "general-purpose"
description: "Generate comprehensive failing tests for TDD red phase"
prompt: |
You are a test automation expert specializing in TDD red phase test generation.

2. **Behavior Coverage**
- Happy path scenarios
- Edge cases (empty, null, boundary values)
- Error handling and exceptions
- Concurrent access (if applicable)
Generate comprehensive FAILING tests for: $ARGUMENTS

3. **Failure Verification**
- Tests MUST fail when run
- Failures for RIGHT reasons (not syntax/import errors)
- Meaningful diagnostic error messages
- No cascading failures
## Core Requirements

4. **Test Categories**
- Unit: Isolated component behavior
- Integration: Component interaction
- Contract: API/interface contracts
- Property: Mathematical invariants
1. **Test Structure**
- Framework-appropriate setup (Jest/pytest/JUnit/Go/RSpec — match project conventions)
- Arrange-Act-Assert pattern
- should_X_when_Y naming convention
- Isolated fixtures with no interdependencies

## Framework Patterns
2. **Behavior Coverage**
- Happy path scenarios
- Edge cases (empty, null, boundary values)
- Error handling and exceptions
- Concurrent access (if applicable)

**JavaScript/TypeScript (Jest/Vitest)**
3. **Failure Verification**
- Tests MUST fail when run
- Failures for RIGHT reasons (not syntax/import errors)
- Meaningful diagnostic error messages
- No cascading failures

- Mock dependencies with `vi.fn()` or `jest.fn()`
- Use `@testing-library` for React components
- Property tests with `fast-check`
4. **Test Categories**
- Unit: Isolated component behavior
- Integration: Component interaction
- Contract: API/interface contracts
- Property: Mathematical invariants (if applicable)

**Python (pytest)**
## Quality Checklist

- Fixtures with appropriate scopes
- Parametrize for multiple test cases
- Hypothesis for property-based tests
- Readable test names documenting intent
- One behavior per test
- No implementation leakage
- Meaningful test data (not 'foo'/'bar')
- Tests serve as living documentation

**Go**
## Anti-Patterns to Avoid

- Table-driven tests with subtests
- `t.Parallel()` for parallel execution
- Use `testify/assert` for cleaner assertions
- Tests passing immediately
- Testing implementation vs behavior
- Complex setup code
- Multiple responsibilities per test
- Brittle tests tied to specifics

**Ruby (RSpec)**
## Output Requirements

- `let` for lazy loading, `let!` for eager
- Contexts for different scenarios
- Shared examples for common behavior
- Complete test files with imports
- Documentation of test purpose
- Commands to run and verify failures
- Metrics: test count, coverage areas
- Next steps for green phase
```

## Quality Checklist
## Validation

- Readable test names documenting intent
- One behavior per test
- No implementation leakage
- Meaningful test data (not 'foo'/'bar')
- Tests serve as living documentation
After generation:

## Anti-Patterns to Avoid

- Tests passing immediately
- Testing implementation vs behavior
- Complex setup code
- Multiple responsibilities per test
- Brittle tests tied to specifics
1. Run tests — confirm they fail
2. Verify helpful failure messages
3. Check test independence
4. Ensure comprehensive coverage

## Edge Case Categories

@@ -85,56 +95,3 @@ Generate failing tests using Task tool with subagent_type="unit-testing::test-au
- **Special Cases**: Unicode, whitespace, special characters
- **State**: Invalid transitions, concurrent modifications
- **Errors**: Network failures, timeouts, permissions

## Output Requirements

- Complete test files with imports
- Documentation of test purpose
- Commands to run and verify failures
- Metrics: test count, coverage areas
- Next steps for green phase"

## Validation

After generation:

1. Run tests - confirm they fail
2. Verify helpful failure messages
3. Check test independence
4. Ensure comprehensive coverage

## Example (Minimal)

```typescript
// auth.service.test.ts
describe("AuthService", () => {
  let authService: AuthService;
  let mockUserRepo: jest.Mocked<UserRepository>;

  beforeEach(() => {
    mockUserRepo = { findByEmail: jest.fn() } as any;
    authService = new AuthService(mockUserRepo);
  });

  it("should_return_token_when_valid_credentials", async () => {
    const user = { id: "1", email: "test@example.com", passwordHash: "hashed" };
    mockUserRepo.findByEmail.mockResolvedValue(user);

    const result = await authService.authenticate("test@example.com", "pass");

    expect(result.success).toBe(true);
    expect(result.token).toBeDefined();
  });

  it("should_fail_when_user_not_found", async () => {
    mockUserRepo.findByEmail.mockResolvedValue(null);

    const result = await authService.authenticate("none@example.com", "pass");

    expect(result.success).toBe(false);
    expect(result.error).toBe("INVALID_CREDENTIALS");
  });
});
```

Test requirements: $ARGUMENTS