Mirror of https://github.com/wshobson/agents.git, synced 2026-03-18 09:37:15 +00:00
Rewrites 14 commands across 11 plugins to remove all cross-plugin `subagent_type` references (e.g., "unit-testing::test-automator"), which break when plugins are installed standalone. Each command now uses only local bundled agents or `general-purpose` with role context in the prompt. All rewritten commands follow conductor-style patterns:

- CRITICAL BEHAVIORAL RULES with strong directives
- State files for session tracking and resume support
- Phase checkpoints requiring explicit user approval
- File-based context passing between steps

Also fixes 4 plugin.json files missing version/license fields and adds plugin.json for dotnet-contribution.

Closes #433
3.9 KiB
| description | argument-hint |
|---|---|
| Implement minimal code to make failing tests pass in TDD green phase | <description of failing tests or test file paths> |
# TDD Green Phase

## CRITICAL BEHAVIORAL RULES
You MUST follow these rules exactly. Violating any of them is a failure.
- Implement only what tests require. Do NOT add features, optimizations, or error handling beyond what failing tests demand.
- Run tests after each change. Verify progress incrementally — do not batch implement and hope it works.
- Halt on failure. If tests remain red after implementation or existing tests break, STOP and present the error to the user.
- Use only local agents. All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
- Never enter plan mode autonomously. Do NOT use EnterPlanMode. Execute directly.
## Implementation Process
Use the Task tool to implement minimal passing code:

```
Task:
  subagent_type: "general-purpose"
  description: "Implement minimal code to pass failing tests"
  prompt: |
    You are a test automation expert implementing the GREEN phase of TDD.

    Implement MINIMAL code to make these failing tests pass: $ARGUMENTS

    Follow TDD green phase principles:

    1. **Pre-Implementation Analysis**
       - Review all failing tests and their error messages
       - Identify the simplest path to make tests pass
       - Map test requirements to minimal implementation needs
       - Avoid premature optimization or over-engineering
       - Focus only on making tests green, not perfect code

    2. **Implementation Strategy**
       - **Fake It**: Return hard-coded values when appropriate
       - **Obvious Implementation**: When the solution is trivial and clear
       - **Triangulation**: Generalize only when multiple tests require it
       - Start with the simplest test and work incrementally
       - One test at a time — don't try to pass all at once

    3. **Code Structure Guidelines**
       - Write the minimal code that could possibly work
       - Avoid adding functionality not required by tests
       - Use simple data structures initially
       - Defer architectural decisions until the refactor phase
       - Keep methods/functions small and focused
       - Don't add error handling unless tests require it

    4. **Progressive Implementation**
       - Make the first test pass with the simplest possible code
       - Run tests after each change to verify progress
       - Add just enough code for the next failing test
       - Resist the urge to implement beyond test requirements
       - Keep track of technical debt for the refactor phase
       - Document assumptions and shortcuts taken

    5. **Success Criteria**
       - All tests pass (green)
       - No extra functionality beyond test requirements
       - Code is readable even if not optimal
       - No broken existing functionality
       - Clear path to refactoring identified

    Output should include:
    - Complete implementation code
    - Test execution results showing all green
    - List of shortcuts taken for later refactoring
    - Technical debt documentation
    - Readiness assessment for refactor phase
```
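The strategy the prompt describes (Fake It, then Triangulation) can be sketched in a few lines. This is a hypothetical, minimal illustration; `price_v1` and `price_v2` are invented names standing in for successive green-phase versions of one function:

```python
import unittest

# "Fake It": one failing test exists, so a hard-coded return is enough
# to go green. No generalization yet.
def price_v1(quantity):
    return 10  # hard-coded value satisfying the single test

# "Triangulation": a second test with different data forces the minimal
# general rule both tests demand, and nothing more.
def price_v2(quantity):
    return 5 * quantity

class TestPrice(unittest.TestCase):
    def test_two_items(self):
        self.assertEqual(price_v2(2), 10)

    def test_three_items(self):
        # This second example is what justified generalizing.
        self.assertEqual(price_v2(3), 15)
```

Note that `price_v1` would fail `test_three_items`; that new red test is the trigger to generalize, not a reason to add speculative logic up front.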
## Post-Implementation Checks
After implementation:
- Run full test suite to confirm all tests pass
- Verify no existing tests were broken
- Document areas needing refactoring
- Check implementation is truly minimal
- Record implementation time for metrics
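The "run full test suite" gate can be scripted. A minimal sketch using Python's standard `unittest` runner; the inline test case is a hypothetical stand-in for a project's real suite, which would normally be found with `unittest.defaultTestLoader.discover("tests")`:

```python
import unittest

# Hypothetical stand-in for existing tests that must stay green.
class TestExisting(unittest.TestCase):
    def test_untouched_behavior(self):
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

def run_full_suite():
    # Load and run every test; TestResult.wasSuccessful() is the
    # green-phase gate: halt if anything is red.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExisting)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

Running this after each change, rather than once at the end, is what makes the incremental verification rule above enforceable.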
## Recovery Process
If tests still fail:
- Review test requirements carefully
- Check for misunderstood assertions
- Add minimal code to address specific failures
- Avoid the temptation to rewrite from scratch
- Consider if tests themselves need adjustment
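Checking for misunderstood assertions often means re-reading the expected value before writing more code. A hypothetical sketch (the `merge_tags_*` names are invented for illustration): the test expects a sorted, de-duplicated list, and the fix is the minimal code addressing that specific failure, not a rewrite:

```python
# First attempt: the assertion was misread as simple concatenation.
def merge_tags_wrong(a, b):
    return a + b

# Minimal fix once the expected value is re-read: sorted, de-duplicated.
def merge_tags_fixed(a, b):
    return sorted(set(a + b))

try:
    # The failing comparison itself documents the real requirement.
    assert merge_tags_wrong(["b"], ["a", "b"]) == ["a", "b"]
except AssertionError:
    pass  # expected: this is the signal to re-read, not to rewrite
```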
## Integration Points
- Follows from tdd-red test creation
- Prepares for tdd-refactor improvements
- Updates test coverage metrics
- Triggers CI/CD pipeline verification
- Documents technical debt for tracking