fix: eliminate cross-plugin dependencies and modernize plugin.json across marketplace

Rewrites 14 commands across 11 plugins to remove all cross-plugin
subagent_type references (e.g., "unit-testing::test-automator"), which
break when plugins are installed standalone. Each command now uses only
local bundled agents or general-purpose with role context in the prompt.
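
As a sketch, the replacement pattern looks like this (the "after" shape is drawn from the rewritten TDD red-phase command in this diff; comments are illustrative):

```yaml
# Before: depends on another plugin being installed
#   subagent_type: "unit-testing::test-automator"
#
# After: local-only, with the role context moved into the prompt
Task:
  subagent_type: "general-purpose"
  description: "Generate comprehensive failing tests for TDD red phase"
  prompt: |
    You are a test automation expert specializing in TDD red phase test generation.
```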

All rewritten commands follow conductor-style patterns:
- CRITICAL BEHAVIORAL RULES with strong directives
- State files for session tracking and resume support
- Phase checkpoints requiring explicit user approval
- File-based context passing between steps
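
A state file for session tracking might look like the following (a hypothetical shape for illustration only; each command defines its own fields and paths):

```json
{
  "session_id": "tdd-red-2026-02-06",
  "current_phase": 2,
  "phase_status": "awaiting_user_approval",
  "context_files": [".claude/state/red-phase-tests.md"],
  "resumable": true
}
```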

Also fixes 4 plugin.json files missing version/license fields and adds
plugin.json for dotnet-contribution.
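
For reference, a manifest carrying the previously missing fields might look like this (a minimal sketch; the version, description, and license values are placeholders, not the actual manifest contents):

```json
{
  "name": "dotnet-contribution",
  "version": "0.1.0",
  "description": "Commands for contributing to .NET projects",
  "license": "MIT"
}
```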

Closes #433
commit 4d504ed8fa
parent 4820385a31
Author: Seth Hobson
Date: 2026-02-06 19:34:26 -05:00

36 changed files with 7235 additions and 2980 deletions


@@ -1,82 +1,92 @@
-Write comprehensive failing tests following TDD red phase principles.
+---
+description: "Write comprehensive failing tests following TDD red phase principles"
+argument-hint: "<feature or component to write tests for>"
+---
-[Extended thinking: Generates failing tests that properly define expected behavior using test-automator agent.]
+# TDD Red Phase
-## Role
+## CRITICAL BEHAVIORAL RULES
-Generate failing tests using Task tool with subagent_type="unit-testing::test-automator".
+You MUST follow these rules exactly. Violating any of them is a failure.
-## Prompt Template
+1. **Write tests only — no production code.** Do NOT implement any production code during this phase.
+2. **Verify tests fail.** All generated tests MUST fail when run. If any test passes, investigate and fix.
+3. **Halt on error.** If test generation fails (syntax errors, import issues), STOP and present the error to the user.
+4. **Use only local agents.** All `subagent_type` references use agents bundled with this plugin or `general-purpose`. No cross-plugin dependencies.
+5. **Never enter plan mode autonomously.** Do NOT use EnterPlanMode. Execute directly.
-"Generate comprehensive FAILING tests for: $ARGUMENTS
+## Test Generation Process
-## Core Requirements
+Use the Task tool to generate failing tests:
-1. **Test Structure**
-   - Framework-appropriate setup (Jest/pytest/JUnit/Go/RSpec)
-   - Arrange-Act-Assert pattern
-   - should_X_when_Y naming convention
-   - Isolated fixtures with no interdependencies
+```
+Task:
+  subagent_type: "general-purpose"
+  description: "Generate comprehensive failing tests for TDD red phase"
+  prompt: |
+    You are a test automation expert specializing in TDD red phase test generation.
-2. **Behavior Coverage**
-   - Happy path scenarios
-   - Edge cases (empty, null, boundary values)
-   - Error handling and exceptions
-   - Concurrent access (if applicable)
+    Generate comprehensive FAILING tests for: $ARGUMENTS
-3. **Failure Verification**
-   - Tests MUST fail when run
-   - Failures for RIGHT reasons (not syntax/import errors)
-   - Meaningful diagnostic error messages
-   - No cascading failures
+    ## Core Requirements
-4. **Test Categories**
-   - Unit: Isolated component behavior
-   - Integration: Component interaction
-   - Contract: API/interface contracts
-   - Property: Mathematical invariants
+    1. **Test Structure**
+       - Framework-appropriate setup (Jest/pytest/JUnit/Go/RSpec — match project conventions)
+       - Arrange-Act-Assert pattern
+       - should_X_when_Y naming convention
+       - Isolated fixtures with no interdependencies
-## Framework Patterns
+    2. **Behavior Coverage**
+       - Happy path scenarios
+       - Edge cases (empty, null, boundary values)
+       - Error handling and exceptions
+       - Concurrent access (if applicable)
-**JavaScript/TypeScript (Jest/Vitest)**
+    3. **Failure Verification**
+       - Tests MUST fail when run
+       - Failures for RIGHT reasons (not syntax/import errors)
+       - Meaningful diagnostic error messages
+       - No cascading failures
-- Mock dependencies with `vi.fn()` or `jest.fn()`
-- Use `@testing-library` for React components
-- Property tests with `fast-check`
+    4. **Test Categories**
+       - Unit: Isolated component behavior
+       - Integration: Component interaction
+       - Contract: API/interface contracts
+       - Property: Mathematical invariants (if applicable)
-**Python (pytest)**
+    ## Quality Checklist
-- Fixtures with appropriate scopes
-- Parametrize for multiple test cases
-- Hypothesis for property-based tests
+    - Readable test names documenting intent
+    - One behavior per test
+    - No implementation leakage
+    - Meaningful test data (not 'foo'/'bar')
+    - Tests serve as living documentation
-**Go**
+    ## Anti-Patterns to Avoid
-- Table-driven tests with subtests
-- `t.Parallel()` for parallel execution
-- Use `testify/assert` for cleaner assertions
+    - Tests passing immediately
+    - Testing implementation vs behavior
+    - Complex setup code
+    - Multiple responsibilities per test
+    - Brittle tests tied to specifics
-**Ruby (RSpec)**
+    ## Output Requirements
-- `let` for lazy loading, `let!` for eager
-- Contexts for different scenarios
-- Shared examples for common behavior
+    - Complete test files with imports
+    - Documentation of test purpose
+    - Commands to run and verify failures
+    - Metrics: test count, coverage areas
+    - Next steps for green phase
+```
-## Quality Checklist
+## Validation
-- Readable test names documenting intent
-- One behavior per test
-- No implementation leakage
-- Meaningful test data (not 'foo'/'bar')
-- Tests serve as living documentation
+After generation:
-## Anti-Patterns to Avoid
-- Tests passing immediately
-- Testing implementation vs behavior
-- Complex setup code
-- Multiple responsibilities per test
-- Brittle tests tied to specifics
+1. Run tests — confirm they fail
+2. Verify helpful failure messages
+3. Check test independence
+4. Ensure comprehensive coverage
 ## Edge Case Categories
@@ -85,56 +95,3 @@ Generate failing tests using Task tool with subagent_type="unit-testing::test-au
 - **Special Cases**: Unicode, whitespace, special characters
 - **State**: Invalid transitions, concurrent modifications
 - **Errors**: Network failures, timeouts, permissions
-## Output Requirements
-- Complete test files with imports
-- Documentation of test purpose
-- Commands to run and verify failures
-- Metrics: test count, coverage areas
-- Next steps for green phase"
-## Validation
-After generation:
-1. Run tests - confirm they fail
-2. Verify helpful failure messages
-3. Check test independence
-4. Ensure comprehensive coverage
-## Example (Minimal)
-```typescript
-// auth.service.test.ts
-describe("AuthService", () => {
-  let authService: AuthService;
-  let mockUserRepo: jest.Mocked<UserRepository>;
-
-  beforeEach(() => {
-    mockUserRepo = { findByEmail: jest.fn() } as any;
-    authService = new AuthService(mockUserRepo);
-  });
-
-  it("should_return_token_when_valid_credentials", async () => {
-    const user = { id: "1", email: "test@example.com", passwordHash: "hashed" };
-    mockUserRepo.findByEmail.mockResolvedValue(user);
-    const result = await authService.authenticate("test@example.com", "pass");
-    expect(result.success).toBe(true);
-    expect(result.token).toBeDefined();
-  });
-
-  it("should_fail_when_user_not_found", async () => {
-    mockUserRepo.findByEmail.mockResolvedValue(null);
-    const result = await authService.authenticate("none@example.com", "pass");
-    expect(result.success).toBe(false);
-    expect(result.error).toBe("INVALID_CREDENTIALS");
-  });
-});
-```
-Test requirements: $ARGUMENTS