| model |
|---|
| sonnet |
Write comprehensive failing tests following TDD red phase principles:
[Extended thinking: This tool uses the test-automator agent to generate comprehensive failing tests that properly define expected behavior. It ensures tests fail for the right reasons and establishes a solid foundation for implementation.]
Test Generation Process
Use the Task tool with `subagent_type="test-automator"` to generate the failing tests.
Prompt: "Generate comprehensive FAILING tests for: $ARGUMENTS. Follow TDD red phase principles:
Test Structure Setup
- Choose appropriate testing framework for the language/stack
- Set up test fixtures and necessary imports
- Configure test runners and assertion libraries
- Establish test naming conventions (should_X_when_Y format)
Behavior Definition
- Define clear expected behaviors from requirements
- Cover happy path scenarios thoroughly
- Include edge cases and boundary conditions
- Add error handling and exception scenarios
- Consider null/undefined/empty input cases
Test Implementation
- Write descriptive test names that document intent
- Keep tests focused on single behaviors (one assertion per test when possible)
- Use Arrange-Act-Assert (AAA) pattern consistently
- Implement test data builders for complex objects
- Avoid test interdependencies - each test must be isolated
Failure Verification
- Ensure tests actually fail when run
- Verify failure messages are meaningful and diagnostic
- Confirm tests fail for the RIGHT reasons (not syntax/import errors)
- Check that error messages guide implementation
- Validate test isolation - no cascading failures
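One way to sketch this check: run a test and classify why it failed, so an `AssertionError` (or deliberate `NotImplementedError`) counts as a valid red while `ImportError`/`NameError` flags a broken test file (the helper and test names are hypothetical):

```python
# Distinguish "fails for the RIGHT reason" from infrastructure breakage.

def classify_red_phase_failure(test_fn):
    try:
        test_fn()
    except (AssertionError, NotImplementedError) as exc:
        return ("valid-red", str(exc))        # behavior not implemented yet: good
    except (ImportError, NameError, SyntaxError) as exc:
        return ("broken-test", str(exc))      # fix the test file before proceeding
    except Exception as exc:
        return ("unexpected", str(exc))
    return ("unexpected-pass", "test passed before implementation")

def parse_version(s):
    raise NotImplementedError("red phase")

def good_red_test():
    assert parse_version("1.2.3") == (1, 2, 3)

def broken_test():
    undefined_helper()  # NameError: an infrastructure problem, not a valid red

print(classify_red_phase_failure(good_red_test)[0])   # valid-red
print(classify_red_phase_failure(broken_test)[0])     # broken-test
```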
Test Categories
- Unit Tests: Isolated component behavior
- Integration Tests: Component interaction scenarios
- Contract Tests: API and interface contracts
- Property Tests: Invariants and mathematical properties
- Acceptance Tests: User story validation
Framework-Specific Patterns
- JavaScript/TypeScript: Jest, Mocha, Vitest patterns
- Python: pytest fixtures and parameterization
- Java: JUnit5 annotations and assertions
- C#: NUnit/xUnit attributes and theory data
- Go: Table-driven tests and subtests
- Ruby: RSpec expectations and contexts
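A framework-agnostic sketch of the parameterized/table-driven pattern these frameworks share (mirroring `pytest.mark.parametrize` and Go table tests with plain stdlib code; `roman_numeral` is hypothetical and stubbed):

```python
# Each (input, expected) row documents one required behavior; all rows
# fail against the stub, which is exactly the red-phase starting point.

def roman_numeral(n):
    raise NotImplementedError("red phase")

CASES = [
    (1, "I"),
    (4, "IV"),
    (9, "IX"),
    (42, "XLII"),
]

def run_table():
    results = []
    for n, expected in CASES:
        try:
            assert roman_numeral(n) == expected
            results.append((n, "pass"))
        except (AssertionError, NotImplementedError):
            results.append((n, "fail"))
    return results

print(run_table())  # every row fails until the green phase implements roman_numeral
```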
Test Quality Checklist
- ✓ Tests are readable and self-documenting
- ✓ Failure messages clearly indicate what went wrong
- ✓ Tests follow the DRY principle with appropriate abstractions
- ✓ Coverage includes positive, negative, and edge cases
- ✓ Tests can serve as living documentation
- ✓ No implementation details leaked into tests
- ✓ Tests use meaningful test data, not 'foo' and 'bar'
Common Anti-Patterns to Avoid
- Writing tests that pass immediately
- Testing implementation instead of behavior
- Overly complex test setup
- Brittle tests tied to specific implementations
- Tests with multiple responsibilities
- Ignored or commented-out tests
- Tests without clear assertions
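To illustrate the "testing implementation instead of behavior" anti-pattern, a small contrast (the `Stack` class is an inline example, not from this command):

```python
# A minimal stack whose storage is a private implementation detail.

class Stack:
    def __init__(self):
        self._items = []          # private: tests must not reach for this
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

# ANTI-PATTERN: coupled to the internal list; breaks if storage changes
# (e.g. to a linked list) even though behavior is unchanged.
def brittle_test_push():
    s = Stack()
    s.push(7)
    assert s._items == [7]

# BETTER: observes behavior only -- what goes in comes back out.
def behavioral_test_push_then_pop():
    s = Stack()
    s.push(7)
    assert s.pop() == 7

behavioral_test_push_then_pop()
```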
Output should include:
- Complete test file(s) with all necessary imports
- Clear documentation of what each test validates
- Verification commands to run tests and see failures
- Metrics: number of tests, coverage areas, test categories
- Next steps for moving to green phase"
Validation Steps
After test generation:
- Run tests to confirm they fail
- Verify failure messages are helpful
- Check test independence and isolation
- Ensure comprehensive coverage
- Document any assumptions made
Recovery Process
If tests don't fail properly:
- Debug import/syntax issues first
- Ensure test framework is properly configured
- Verify assertions are actually checking behavior
- Add more specific assertions if needed
- Consider missing test categories
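A sketch of the first recovery step: confirm the failure is not an import/configuration problem before trusting the red (module names below are illustrative):

```python
# Check importability separately from test outcomes: an unimportable module
# means the test file is broken, not that the tests are validly red.
import importlib

def can_import(module_name):
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False

print(can_import("json"))            # True: stdlib module resolves fine
print(can_import("no_such_module"))  # False: fix the dependency/config first
```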
Integration Points
- Links to tdd-green.md for implementation phase
- Coordinates with tdd-refactor.md for improvement phase
- Integrates with CI/CD for automated verification
- Connects to test coverage reporting tools
Best Practices
- Start with the simplest failing test
- One behavior change at a time
- Tests should tell a story of the feature
- Prefer many small tests over few large ones
- Use test naming as documentation
- Keep test code as clean as production code
Test requirements: $ARGUMENTS