mirror of
https://github.com/wshobson/agents.git
synced 2026-03-18 17:47:16 +00:00
Consolidate workflows and tools from commands repository
Repository Restructure:
- Move all 83 agent .md files to agents/ subdirectory
- Add 15 workflow orchestrators from commands repo to workflows/
- Add 42 development tools from commands repo to tools/
- Update README for unified repository structure

The commands repository functionality is now fully integrated, providing complete workflow orchestration and development tooling alongside agents.

Directory Structure:
- agents/ - 83 specialized AI agents
- workflows/ - 15 multi-agent orchestration commands
- tools/ - 42 focused development utilities

No breaking changes to agent functionality - all agents remain accessible with same names and behavior. Adds workflow and tool commands for enhanced multi-agent coordination capabilities.
75
workflows/data-driven-feature.md
Normal file
@@ -0,0 +1,75 @@
---
model: claude-opus-4-1
---

Build data-driven features with integrated pipelines and ML capabilities using specialized agents:

[Extended thinking: This workflow orchestrates data scientists, data engineers, backend architects, and AI engineers to build features that leverage data pipelines, analytics, and machine learning. Each agent contributes their expertise to create a complete data-driven solution.]

## Phase 1: Data Analysis and Design

### 1. Data Requirements Analysis
- Use Task tool with subagent_type="data-scientist"
- Prompt: "Analyze data requirements for: $ARGUMENTS. Identify data sources, required transformations, analytics needs, and potential ML opportunities."
- Output: Data analysis report, feature engineering requirements, ML feasibility

### 2. Data Pipeline Architecture
- Use Task tool with subagent_type="data-engineer"
- Prompt: "Design data pipeline architecture for: $ARGUMENTS. Include ETL/ELT processes, data storage, streaming requirements, and integration with existing systems based on data scientist's analysis."
- Output: Pipeline architecture, technology stack, data flow diagrams

## Phase 2: Backend Integration

### 3. API and Service Design
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Design backend services to support data-driven feature: $ARGUMENTS. Include APIs for data ingestion, analytics endpoints, and ML model serving based on pipeline architecture."
- Output: Service architecture, API contracts, integration patterns

### 4. Database and Storage Design
- Use Task tool with subagent_type="database-optimizer"
- Prompt: "Design optimal database schema and storage strategy for: $ARGUMENTS. Consider both transactional and analytical workloads, time-series data, and ML feature stores."
- Output: Database schemas, indexing strategies, storage recommendations

## Phase 3: ML and AI Implementation

### 5. ML Pipeline Development
- Use Task tool with subagent_type="ml-engineer"
- Prompt: "Implement ML pipeline for: $ARGUMENTS. Include feature engineering, model training, validation, and deployment based on data scientist's requirements."
- Output: ML pipeline code, model artifacts, deployment strategy

### 6. AI Integration
- Use Task tool with subagent_type="ai-engineer"
- Prompt: "Build AI-powered features for: $ARGUMENTS. Integrate LLMs, implement RAG if needed, and create intelligent automation based on ML engineer's models."
- Output: AI integration code, prompt engineering, RAG implementation

## Phase 4: Implementation and Optimization

### 7. Data Pipeline Implementation
- Use Task tool with subagent_type="data-engineer"
- Prompt: "Implement production data pipelines for: $ARGUMENTS. Include real-time streaming, batch processing, and data quality monitoring based on all previous designs."
- Output: Pipeline implementation, monitoring setup, data quality checks

### 8. Performance Optimization
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Optimize data processing and model serving performance for: $ARGUMENTS. Focus on query optimization, caching strategies, and model inference speed."
- Output: Performance improvements, caching layers, optimization report

## Phase 5: Testing and Deployment

### 9. Comprehensive Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create test suites for data pipelines and ML components: $ARGUMENTS. Include data validation tests, model performance tests, and integration tests."
- Output: Test suites, data quality tests, ML monitoring tests

### 10. Production Deployment
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Deploy data-driven feature to production: $ARGUMENTS. Include pipeline orchestration, model deployment, monitoring, and rollback strategies."
- Output: Deployment configurations, monitoring dashboards, operational runbooks

## Coordination Notes
- Data flow and requirements cascade from data scientists to engineers
- ML models must integrate seamlessly with backend services
- Performance considerations apply to both data processing and model serving
- Maintain data lineage and versioning throughout the pipeline

Data-driven feature to build: $ARGUMENTS
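The sequential cascade above can be sketched programmatically. Note that `invoke` is a hypothetical stand-in for the actual Task tool call, not part of this repository:

```python
# Sketch only: `invoke(subagent_type, prompt)` is a hypothetical callable
# standing in for the real Task tool; phase names mirror the workflow above.

def run_workflow(phases, feature, invoke):
    """Run phases in order, feeding each agent the accumulated context."""
    context = []
    for name, subagent_type, prompt in phases:
        full_prompt = prompt.replace("$ARGUMENTS", feature)
        if context:  # later agents see every earlier output
            full_prompt += "\n\nPrior outputs:\n" + "\n".join(context)
        output = invoke(subagent_type, full_prompt)
        context.append(f"[{name}] {output}")
    return context

phases = [
    ("Data Requirements Analysis", "data-scientist",
     "Analyze data requirements for: $ARGUMENTS."),
    ("Data Pipeline Architecture", "data-engineer",
     "Design data pipeline architecture for: $ARGUMENTS."),
]
```

Passing the accumulated context forward is what implements the "requirements cascade from data scientists to engineers" coordination note.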
88
workflows/feature-development.md
Normal file
@@ -0,0 +1,88 @@
---
model: claude-opus-4-1
---

Implement a new feature using specialized agents with explicit Task tool invocations:

[Extended thinking: This workflow orchestrates multiple specialized agents to implement a complete feature from design to deployment. Each agent receives context from previous agents to ensure coherent implementation. Supports both traditional and TDD-driven development approaches.]

## Development Mode Selection

Choose your development approach:

### Option A: Traditional Development (Default)
Use the Task tool to delegate to specialized agents in sequence, as described under Traditional Development Steps below.

### Option B: TDD-Driven Development
For test-first development, use the tdd-orchestrator agent:
- Use Task tool with subagent_type="tdd-orchestrator"
- Prompt: "Implement feature using TDD methodology: $ARGUMENTS. Follow red-green-refactor cycle strictly."
- Alternative: Use the dedicated tdd-cycle workflow for granular TDD control

When TDD mode is selected, the workflow follows this pattern:
1. Write failing tests first (Red phase)
2. Implement minimum code to pass tests (Green phase)
3. Refactor while keeping tests green (Refactor phase)
4. Repeat cycle for each feature component
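As a concrete (purely illustrative) instance of one cycle, using a hypothetical `slugify` helper:

```python
# Red: the test is written first, against a function that does not exist yet,
# so the suite fails until `slugify` is implemented.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaced   Out ") == "spaced-out"

# Green: the minimum implementation makes the test pass.
# Refactor: collapse runs of whitespace with a regex while tests stay green.
import re

def slugify(title: str) -> str:
    return re.sub(r"\s+", "-", title.strip().lower())

test_slugify()
```

Each component of the feature gets its own pass through this loop before moving on.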
## Traditional Development Steps

1. **Backend Architecture Design**
   - Use Task tool with subagent_type="backend-architect"
   - Prompt: "Design RESTful API and data model for: $ARGUMENTS. Include endpoint definitions, database schema, and service boundaries."
   - Save the API design and schema for next agents

2. **Frontend Implementation**
   - Use Task tool with subagent_type="frontend-developer"
   - Prompt: "Create UI components for: $ARGUMENTS. Use the API design from backend-architect: [include API endpoints and data models from step 1]"
   - Ensure UI matches the backend API contract

3. **Test Coverage**
   - Use Task tool with subagent_type="test-automator"
   - Prompt: "Write comprehensive tests for: $ARGUMENTS. Cover both backend API endpoints: [from step 1] and frontend components: [from step 2]"
   - Include unit, integration, and e2e tests

4. **Production Deployment**
   - Use Task tool with subagent_type="deployment-engineer"
   - Prompt: "Prepare production deployment for: $ARGUMENTS. Include CI/CD pipeline, containerization, and monitoring for the implemented feature."
   - Ensure all components from previous steps are deployment-ready

## TDD Development Steps

When using TDD mode, the sequence changes to:

1. **Test-First Backend Design**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Design and write failing tests for backend API: $ARGUMENTS. Define test cases before implementation."
   - Create comprehensive test suite for API endpoints

2. **Test-First Frontend Design**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Write failing tests for frontend components: $ARGUMENTS. Include unit and integration tests."
   - Define expected UI behavior through tests

3. **Incremental Implementation**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Implement features to pass tests for: $ARGUMENTS. Follow strict red-green-refactor cycles."
   - Build features incrementally, guided by tests

4. **Refactoring & Optimization**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Refactor implementation while maintaining green tests: $ARGUMENTS. Optimize for maintainability."
   - Improve code quality with test safety net

5. **Production Deployment**
   - Use Task tool with subagent_type="deployment-engineer"
   - Prompt: "Deploy TDD-developed feature: $ARGUMENTS. Verify all tests pass in CI/CD pipeline."
   - Ensure test suite runs in deployment pipeline

## Execution Parameters

- **--tdd**: Enable TDD mode (uses tdd-orchestrator agent)
- **--strict-tdd**: Enforce strict red-green-refactor cycles
- **--test-coverage-min**: Set minimum test coverage threshold (default: 80%)
- **--tdd-cycle**: Use dedicated tdd-cycle workflow for granular control

Aggregate results from all agents and present a unified implementation plan.

Feature description: $ARGUMENTS
80
workflows/full-review.md
Normal file
@@ -0,0 +1,80 @@
---
model: claude-opus-4-1
---

Perform a comprehensive review using multiple specialized agents with explicit Task tool invocations:

[Extended thinking: This workflow performs a thorough multi-perspective review by orchestrating specialized review agents. Each agent examines different aspects and the results are consolidated into a unified action plan. Includes TDD compliance verification when enabled.]

## Review Configuration

- **Standard Review**: Traditional comprehensive review (default)
- **TDD-Enhanced Review**: Includes TDD compliance and test-first verification
  - Enable with **--tdd-review** flag
  - Verifies red-green-refactor cycle adherence
  - Checks test-first implementation patterns

Execute parallel reviews using Task tool with specialized agents:

## 1. Code Quality Review
- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Review code quality and maintainability for: $ARGUMENTS. Check for code smells, readability, documentation, and adherence to best practices."
- Focus: Clean code principles, SOLID, DRY, naming conventions

## 2. Security Audit
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Perform security audit on: $ARGUMENTS. Check for vulnerabilities, OWASP compliance, authentication issues, and data protection."
- Focus: Injection risks, authentication, authorization, data encryption

## 3. Architecture Review
- Use Task tool with subagent_type="architect-reviewer"
- Prompt: "Review architectural design and patterns in: $ARGUMENTS. Evaluate scalability, maintainability, and adherence to architectural principles."
- Focus: Service boundaries, coupling, cohesion, design patterns

## 4. Performance Analysis
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Analyze performance characteristics of: $ARGUMENTS. Identify bottlenecks, resource usage, and optimization opportunities."
- Focus: Response times, memory usage, database queries, caching

## 5. Test Coverage Assessment
- Use Task tool with subagent_type="test-automator"
- Prompt: "Evaluate test coverage and quality for: $ARGUMENTS. Assess unit tests, integration tests, and identify gaps in test coverage."
- Focus: Coverage metrics, test quality, edge cases, test maintainability

## 6. TDD Compliance Review (When --tdd-review is enabled)
- Use Task tool with subagent_type="tdd-orchestrator"
- Prompt: "Verify TDD compliance for: $ARGUMENTS. Check for test-first development patterns, red-green-refactor cycles, and test-driven design."
- Focus on TDD metrics:
  - **Test-First Verification**: Were tests written before implementation?
  - **Red-Green-Refactor Cycles**: Evidence of proper TDD cycles
  - **Test Coverage Trends**: Coverage growth patterns during development
  - **Test Granularity**: Appropriate test size and scope
  - **Refactoring Evidence**: Code improvements with test safety net
  - **Test Quality**: Tests that drive design, not just verify behavior

## Consolidated Report Structure
Compile all feedback into a unified report:
- **Critical Issues** (must fix): Security vulnerabilities, broken functionality, architectural flaws
- **Recommendations** (should fix): Performance bottlenecks, code quality issues, missing tests
- **Suggestions** (nice to have): Refactoring opportunities, documentation improvements
- **Positive Feedback** (what's done well): Good practices to maintain and replicate

### TDD-Specific Metrics (When --tdd-review is enabled)
Additional TDD compliance report section:
- **TDD Adherence Score**: Percentage of code developed using TDD methodology
- **Test-First Evidence**: Commits showing tests before implementation
- **Cycle Completeness**: Percentage of complete red-green-refactor cycles
- **Test Design Quality**: How well tests drive the design
- **Coverage Delta Analysis**: Coverage changes correlated with feature additions
- **Refactoring Frequency**: Evidence of continuous improvement
- **Test Execution Time**: Performance of test suite
- **Test Stability**: Flakiness and reliability metrics

## Review Options

- **--tdd-review**: Enable TDD compliance checking
- **--strict-tdd**: Fail review if TDD practices not followed
- **--tdd-metrics**: Generate detailed TDD metrics report
- **--test-first-only**: Only review code with test-first evidence

Target: $ARGUMENTS
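The fan-out/consolidate shape of this review can be sketched as follows. Here `invoke` is a hypothetical stand-in for the Task tool, each review is assumed to return a list of `{"severity", "summary"}` findings, and the buckets mirror the report structure above:

```python
from concurrent.futures import ThreadPoolExecutor

REVIEWERS = ["code-reviewer", "security-auditor", "architect-reviewer",
             "performance-engineer", "test-automator"]

def run_parallel_reviews(target, invoke):
    """Fan out one review per agent, then bucket findings by severity."""
    with ThreadPoolExecutor(max_workers=len(REVIEWERS)) as pool:
        futures = {agent: pool.submit(invoke, agent, f"Review: {target}")
                   for agent in REVIEWERS}
        findings = {agent: fut.result() for agent, fut in futures.items()}

    report = {"critical": [], "recommendations": [], "suggestions": []}
    for agent, issues in findings.items():
        for issue in issues:  # assumed shape: {"severity": ..., "summary": ...}
            report[issue["severity"]].append((agent, issue["summary"]))
    return report
```

Because the five reviews are independent, running them concurrently rather than in sequence is what makes a "parallel review" pass practical.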
63
workflows/full-stack-feature.md
Normal file
@@ -0,0 +1,63 @@
---
model: claude-opus-4-1
---

Implement a full-stack feature across multiple platforms with coordinated agent orchestration:

[Extended thinking: This workflow orchestrates a comprehensive feature implementation across backend, frontend, mobile, and API layers. Each agent builds upon the work of previous agents to create a cohesive multi-platform solution.]

## Phase 1: Architecture and API Design

### 1. Backend Architecture
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Design backend architecture for: $ARGUMENTS. Include service boundaries, data models, and technology recommendations."
- Output: Service architecture, database schema, API structure

### 2. GraphQL API Design (if applicable)
- Use Task tool with subagent_type="graphql-architect"
- Prompt: "Design GraphQL schema and resolvers for: $ARGUMENTS. Build on the backend architecture from previous step. Include types, queries, mutations, and subscriptions."
- Output: GraphQL schema, resolver structure, federation strategy

## Phase 2: Implementation

### 3. Frontend Development
- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Implement web frontend for: $ARGUMENTS. Use the API design from previous steps. Include responsive UI, state management, and API integration."
- Output: React/Vue/Angular components, state management, API client

### 4. Mobile Development
- Use Task tool with subagent_type="mobile-developer"
- Prompt: "Implement mobile app features for: $ARGUMENTS. Ensure consistency with web frontend and use the same API. Include offline support and native integrations."
- Output: React Native/Flutter implementation, offline sync, push notifications

## Phase 3: Quality Assurance

### 5. Comprehensive Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create test suites for: $ARGUMENTS. Cover backend APIs, frontend components, mobile app features, and integration tests across all platforms."
- Output: Unit tests, integration tests, e2e tests, test documentation

### 6. Security Review
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Audit security across all implementations for: $ARGUMENTS. Check API security, frontend vulnerabilities, and mobile app security."
- Output: Security report, remediation steps

## Phase 4: Optimization and Deployment

### 7. Performance Optimization
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Optimize performance across all platforms for: $ARGUMENTS. Focus on API response times, frontend bundle size, and mobile app performance."
- Output: Performance improvements, caching strategies, optimization report

### 8. Deployment Preparation
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Prepare deployment for all components of: $ARGUMENTS. Include CI/CD pipelines, containerization, and monitoring setup."
- Output: Deployment configurations, monitoring setup, rollout strategy

## Coordination Notes
- Each agent receives outputs from previous agents
- Maintain consistency across all platforms
- Ensure API contracts are honored by all clients
- Document integration points between components

Feature to implement: $ARGUMENTS
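One way to make "API contracts are honored by all clients" mechanically checkable is a tiny shared-contract validator that web and mobile test suites both run; the field names here are hypothetical:

```python
# Hypothetical shared contract: every client validates API responses
# against the same field-to-type map.
TASK_CONTRACT = {"id": int, "title": str, "done": bool}

def violations(payload: dict, contract: dict = TASK_CONTRACT) -> list:
    """Return a list of contract violations; an empty list means conformance."""
    problems = [f"missing field: {k}" for k in contract if k not in payload]
    problems += [f"wrong type for {k}: expected {t.__name__}"
                 for k, t in contract.items()
                 if k in payload and not isinstance(payload[k], t)]
    return problems
```

Sharing one contract definition between platforms keeps the clients from drifting apart silently.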
13
workflows/git-workflow.md
Normal file
@@ -0,0 +1,13 @@
---
model: claude-opus-4-1
---

Complete Git workflow using specialized agents:

1. code-reviewer: Review uncommitted changes
2. test-automator: Ensure tests pass
3. deployment-engineer: Verify deployment readiness
4. Create commit message following conventions
5. Push and create PR with proper description

Target branch: $ARGUMENTS
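Step 4's "commit message following conventions" is not specified further here; assuming it means a Conventional Commits-style header, a minimal formatter might look like:

```python
def conventional_commit(kind: str, scope: str, subject: str, body: str = "") -> str:
    """Format a Conventional Commits-style message, e.g. 'feat(auth): add login'."""
    allowed = {"feat", "fix", "docs", "refactor", "test", "chore"}
    if kind not in allowed:
        raise ValueError(f"unknown commit type: {kind}")
    header = f"{kind}({scope}): {subject}" if scope else f"{kind}: {subject}"
    return f"{header}\n\n{body}" if body else header
```

The allowed-types set is an assumption; adjust it to whatever convention the target repository actually enforces.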
17
workflows/improve-agent.md
Normal file
@@ -0,0 +1,17 @@
---
model: claude-opus-4-1
---

Improve an existing agent based on recent performance:

1. Analyze recent uses of: $ARGUMENTS
2. Identify patterns in:
   - Failed tasks
   - User corrections
   - Suboptimal outputs
3. Update the agent's prompt with:
   - New examples
   - Clarified instructions
   - Additional constraints
4. Test on recent scenarios
5. Save improved version
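Step 2's pattern analysis can start as a simple tally of failure modes over recent run logs; the record shape here is an assumption, not a format defined by this repository:

```python
from collections import Counter

def failure_patterns(runs):
    """Rank error categories among failed runs, most common first.

    Each run is assumed to look like {"status": ..., "error": ...}.
    """
    failed = (r["error"] for r in runs if r["status"] == "failed")
    return Counter(failed).most_common()
```

The most frequent categories point at which new examples or constraints to add to the agent's prompt in step 3.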
85
workflows/incident-response.md
Normal file
@@ -0,0 +1,85 @@
---
model: claude-opus-4-1
---

Respond to production incidents with coordinated agent expertise for rapid resolution:

[Extended thinking: This workflow handles production incidents with urgency and precision. Multiple specialized agents work together to identify root causes, implement fixes, and prevent recurrence.]

## Phase 1: Immediate Response

### 1. Incident Assessment
- Use Task tool with subagent_type="incident-responder"
- Prompt: "URGENT: Assess production incident: $ARGUMENTS. Determine severity, impact, and immediate mitigation steps. Time is critical."
- Output: Incident severity, impact assessment, immediate actions

### 2. Initial Troubleshooting
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Investigate production issue: $ARGUMENTS. Check logs, metrics, recent deployments, and system health. Identify potential root causes."
- Output: Initial findings, suspicious patterns, potential causes

## Phase 2: Root Cause Analysis

### 3. Deep Debugging
- Use Task tool with subagent_type="debugger"
- Prompt: "Debug production issue: $ARGUMENTS using findings from initial investigation. Analyze stack traces, reproduce issue if possible, identify exact root cause."
- Output: Root cause identification, reproduction steps, debug analysis

### 4. Performance Analysis (if applicable)
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Analyze performance aspects of incident: $ARGUMENTS. Check for resource exhaustion, bottlenecks, or performance degradation."
- Output: Performance metrics, resource analysis, bottleneck identification

### 5. Database Investigation (if applicable)
- Use Task tool with subagent_type="database-optimizer"
- Prompt: "Investigate database-related aspects of incident: $ARGUMENTS. Check for locks, slow queries, connection issues, or data corruption."
- Output: Database health report, query analysis, data integrity check

## Phase 3: Resolution Implementation

### 6. Fix Development
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Design and implement fix for incident: $ARGUMENTS based on root cause analysis. Ensure fix is safe for immediate production deployment."
- Output: Fix implementation, safety analysis, rollout strategy

### 7. Emergency Deployment
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Deploy emergency fix for incident: $ARGUMENTS. Implement with minimal risk, include rollback plan, and monitor deployment closely."
- Output: Deployment execution, rollback procedures, monitoring setup

## Phase 4: Stabilization and Prevention

### 8. System Stabilization
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Stabilize system after incident fix: $ARGUMENTS. Monitor system health, clear any backlogs, and ensure full recovery."
- Output: System health report, recovery metrics, stability confirmation

### 9. Security Review (if applicable)
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Review security implications of incident: $ARGUMENTS. Check for any security breaches, data exposure, or vulnerabilities exploited."
- Output: Security assessment, breach analysis, hardening recommendations

## Phase 5: Post-Incident Activities

### 10. Monitoring Enhancement
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Enhance monitoring to prevent recurrence of: $ARGUMENTS. Add alerts, improve observability, and set up early warning systems."
- Output: New monitoring rules, alert configurations, observability improvements

### 11. Test Coverage
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create tests to prevent regression of incident: $ARGUMENTS. Include unit tests, integration tests, and chaos engineering scenarios."
- Output: Test implementations, regression prevention, chaos tests

### 12. Documentation
- Use Task tool with subagent_type="incident-responder"
- Prompt: "Document incident postmortem for: $ARGUMENTS. Include timeline, root cause, impact, resolution, and lessons learned. No blame, focus on improvement."
- Output: Postmortem document, action items, process improvements

## Coordination Notes
- Speed is critical in early phases - parallel agent execution where possible
- Communication between agents must be clear and rapid
- All changes must be safe and reversible
- Document everything for postmortem analysis

Production incident: $ARGUMENTS
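The coordination note "document everything for postmortem analysis" is easiest to honor if the timeline is captured as the incident unfolds rather than reconstructed afterward; a minimal sketch of such a log:

```python
import time

class IncidentLog:
    """Append-only, timestamped event log for the postmortem timeline."""

    def __init__(self):
        self.events = []

    def record(self, phase: str, note: str) -> None:
        self.events.append((time.time(), phase, note))

    def timeline(self) -> list:
        # Chronological order, timestamps dropped for the readable summary.
        return [f"{phase}: {note}" for _, phase, note in sorted(self.events)]
```

Every agent output from phases 1 through 4 would be `record()`ed, so step 12's postmortem starts from a complete timeline.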
14
workflows/legacy-modernize.md
Normal file
@@ -0,0 +1,14 @@
---
model: claude-opus-4-1
---

Modernize legacy code using expert agents:

1. legacy-modernizer: Analyze and plan modernization
2. test-automator: Create tests for legacy code
3. code-reviewer: Review modernization plan
4. python-pro/golang-pro: Implement modernization
5. security-auditor: Verify security improvements
6. performance-engineer: Validate performance

Target: $ARGUMENTS
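Step 2 (tests for legacy code before changing it) is typically done with characterization tests: snapshot the current behavior, then require the modernized code to reproduce it. A sketch, with a hypothetical legacy pricing function:

```python
def golden_master(fn, inputs):
    """Record the function's current outputs as a characterization snapshot."""
    return {repr(i): fn(i) for i in inputs}

def legacy_price(qty):  # hypothetical legacy function under modernization
    return qty * 100 - (10 if qty > 5 else 0)

snapshot = golden_master(legacy_price, [1, 5, 6, 10])

def modernized_price(qty):  # the rewritten version
    discount = 10 if qty > 5 else 0
    return qty * 100 - discount

# The modernized version must reproduce the snapshot exactly.
assert golden_master(modernized_price, [1, 5, 6, 10]) == snapshot
```

This pins behavior before the rewrite starts, so steps 4 through 6 can refactor aggressively without silently changing outputs.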
47
workflows/ml-pipeline.md
Normal file
@@ -0,0 +1,47 @@
---
model: claude-opus-4-1
---

# Machine Learning Pipeline

Design and implement a complete ML pipeline for: $ARGUMENTS

Create a production-ready pipeline including:

1. **Data Ingestion**:
   - Multiple data source connectors
   - Schema validation with Pydantic
   - Data versioning strategy
   - Incremental loading capabilities

2. **Feature Engineering**:
   - Feature transformation pipeline
   - Feature store integration
   - Statistical validation
   - Handling missing data and outliers

3. **Model Training**:
   - Experiment tracking (MLflow/W&B)
   - Hyperparameter optimization
   - Cross-validation strategy
   - Model versioning

4. **Model Evaluation**:
   - Comprehensive metrics
   - A/B testing framework
   - Bias detection
   - Performance monitoring

5. **Deployment**:
   - Model serving API
   - Batch/stream prediction
   - Model registry
   - Rollback capabilities

6. **Monitoring**:
   - Data drift detection
   - Model performance tracking
   - Alert system
   - Retraining triggers

Include error handling and logging, and make the pipeline cloud-agnostic. Use modern tools like DVC, MLflow, or similar. Ensure reproducibility and scalability.
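A deliberately simple sketch of the monitoring stage's drift detection and retraining trigger (a normalized mean-shift test; production pipelines would more likely use PSI or a Kolmogorov-Smirnov test):

```python
from statistics import mean, stdev

def drift_score(reference, current):
    """Mean shift of `current` vs `reference`, in reference standard deviations."""
    spread = stdev(reference) or 1e-9  # guard against a zero-variance reference
    return abs(mean(current) - mean(reference)) / spread

def needs_retraining(reference, current, threshold=2.0):
    """Trigger retraining when the feature has drifted past the threshold."""
    return drift_score(reference, current) > threshold
```

`reference` is the feature distribution the model was trained on; `current` is a recent production window. The 2.0-standard-deviation threshold is an illustrative default, not a recommendation.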
14
workflows/multi-platform.md
Normal file
@@ -0,0 +1,14 @@
---
model: claude-opus-4-1
---

Build the same feature across multiple platforms:

Run in parallel:
- frontend-developer: Web implementation
- mobile-developer: Mobile app implementation
- api-documenter: API documentation

Ensure consistency across all platforms.

Feature specification: $ARGUMENTS
75
workflows/performance-optimization.md
Normal file
@@ -0,0 +1,75 @@
|
||||
---
|
||||
model: claude-opus-4-1
|
||||
---
|
||||
|
||||
Optimize application performance end-to-end using specialized performance and optimization agents:
|
||||
|
||||
[Extended thinking: This workflow coordinates multiple agents to identify and fix performance bottlenecks across the entire stack. From database queries to frontend rendering, each agent contributes their expertise to create a highly optimized application.]
|
||||
|
||||
## Phase 1: Performance Analysis
|
||||
|
||||
### 1. Application Profiling
|
||||
- Use Task tool with subagent_type="performance-engineer"
|
||||
- Prompt: "Profile application performance for: $ARGUMENTS. Identify CPU, memory, and I/O bottlenecks. Include flame graphs, memory profiles, and resource utilization metrics."
|
||||
- Output: Performance profile, bottleneck analysis, optimization priorities
|
||||
|
||||
### 2. Database Performance Analysis
|
||||
- Use Task tool with subagent_type="database-optimizer"
|
||||
- Prompt: "Analyze database performance for: $ARGUMENTS. Review query execution plans, identify slow queries, check indexing, and analyze connection pooling."
|
||||
- Output: Query optimization report, index recommendations, schema improvements
|
||||
|
||||
## Phase 2: Backend Optimization
|
||||
|
||||
### 3. Backend Code Optimization
|
||||
- Use Task tool with subagent_type="performance-engineer"
|
||||
- Prompt: "Optimize backend code for: $ARGUMENTS based on profiling results. Focus on algorithm efficiency, caching strategies, and async operations."
|
||||
- Output: Optimized code, caching implementation, performance improvements
|
||||
|
||||
### 4. API Optimization
|
||||
- Use Task tool with subagent_type="backend-architect"
|
||||
- Prompt: "Optimize API design and implementation for: $ARGUMENTS. Consider pagination, response compression, field filtering, and batch operations."
|
||||
- Output: Optimized API endpoints, GraphQL query optimization, response time improvements

## Phase 3: Frontend Optimization

### 5. Frontend Performance
- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Optimize frontend performance for: $ARGUMENTS. Focus on bundle size, lazy loading, code splitting, and rendering performance. Implement Core Web Vitals improvements."
- Output: Optimized bundles, lazy loading implementation, performance metrics

### 6. Mobile App Optimization
- Use Task tool with subagent_type="mobile-developer"
- Prompt: "Optimize mobile app performance for: $ARGUMENTS. Focus on startup time, memory usage, battery efficiency, and offline performance."
- Output: Optimized mobile code, reduced app size, improved battery life

## Phase 4: Infrastructure Optimization

### 7. Cloud Infrastructure Optimization
- Use Task tool with subagent_type="cloud-architect"
- Prompt: "Optimize cloud infrastructure for: $ARGUMENTS. Review auto-scaling, instance types, CDN usage, and geographic distribution."
- Output: Infrastructure improvements, cost optimization, scaling strategy

### 8. Deployment Optimization
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Optimize deployment and build processes for: $ARGUMENTS. Improve CI/CD performance, implement caching, and optimize container images."
- Output: Faster builds, optimized containers, improved deployment times

## Phase 5: Monitoring and Validation

### 9. Performance Monitoring Setup
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Set up comprehensive performance monitoring for: $ARGUMENTS. Include APM, real user monitoring, and custom performance metrics."
- Output: Monitoring dashboards, alert thresholds, SLO definitions

### 10. Performance Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create performance test suites for: $ARGUMENTS. Include load tests, stress tests, and performance regression tests."
- Output: Performance test suite, benchmark results, regression prevention
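
The load and regression tests this step asks for can be sketched as a small latency harness. This is a minimal illustration, not part of the workflow itself: the request count, worker count, and 500 ms p95 budget are assumed values, and the `lambda` stands in for a real call to the system under test.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def measure_latencies(target, requests=200, workers=20):
    """Invoke `target` concurrently and return sorted per-call latencies in ms."""
    def timed_call(_):
        start = time.perf_counter()
        target()
        return (time.perf_counter() - start) * 1000
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sorted(pool.map(timed_call, range(requests)))

def p95(latencies):
    """95th-percentile latency, the usual regression-gate metric."""
    return statistics.quantiles(latencies, n=100)[94]

# A stand-in for a real HTTP call to the system under test.
latencies = measure_latencies(lambda: time.sleep(0.005))
assert p95(latencies) < 500, f"p95 latency {p95(latencies):.1f} ms exceeds budget"
```

Recording the p95 value per build and failing the pipeline when it exceeds the budget is what turns this from a benchmark into a regression test.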

## Coordination Notes
- Performance metrics guide optimization priorities
- Each optimization must be validated with measurements
- Consider trade-offs between different performance aspects
- Document all optimizations and their impact

Performance optimization target: $ARGUMENTS
68
workflows/security-hardening.md
Normal file
@@ -0,0 +1,68 @@
---
model: claude-opus-4-1
---

Implement security-first architecture and hardening measures with coordinated agent orchestration:

[Extended thinking: This workflow prioritizes security at every layer of the application stack. Multiple agents work together to identify vulnerabilities, implement secure patterns, and ensure compliance with security best practices.]

## Phase 1: Security Assessment

### 1. Initial Security Audit
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Perform comprehensive security audit on: $ARGUMENTS. Identify vulnerabilities, compliance gaps, and security risks across all components."
- Output: Vulnerability report, risk assessment, compliance gaps

### 2. Architecture Security Review
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Review and redesign architecture for security: $ARGUMENTS. Focus on secure service boundaries, data isolation, and defense in depth. Use findings from security audit."
- Output: Secure architecture design, service isolation strategy, data flow diagrams

## Phase 2: Security Implementation

### 3. Backend Security Hardening
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Implement backend security measures for: $ARGUMENTS. Include authentication, authorization, input validation, and secure data handling based on security audit findings."
- Output: Secure API implementations, auth middleware, validation layers
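
A minimal sketch of the kind of validation layer this step produces. The field names, length rule, and whitelist approach are illustrative assumptions, not requirements of the workflow:

```python
import re

# Hypothetical validation layer: whitelist-style checks run before any handler.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(payload: dict) -> list[str]:
    """Return a list of validation errors (empty means the payload is acceptable)."""
    errors = []
    email = payload.get("email", "")
    if not EMAIL_RE.match(email):
        errors.append("email: invalid format")
    password = payload.get("password", "")
    if len(password) < 12:
        errors.append("password: must be at least 12 characters")
    unexpected = set(payload) - {"email", "password"}
    if unexpected:  # reject unknown fields rather than silently ignoring them
        errors.append(f"unexpected fields: {sorted(unexpected)}")
    return errors
```

Rejecting unknown fields outright is the defensive default here; schemas that must tolerate forward-compatible clients would relax that check deliberately.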

### 4. Infrastructure Security
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Implement infrastructure security for: $ARGUMENTS. Configure firewalls, secure secrets management, implement least privilege access, and set up security monitoring."
- Output: Infrastructure security configs, secrets management, monitoring setup

### 5. Frontend Security
- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Implement frontend security measures for: $ARGUMENTS. Include CSP headers, XSS prevention, secure authentication flows, and sensitive data handling."
- Output: Secure frontend code, CSP policies, auth integration
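
The CSP and related headers this step covers might be expressed server-side as a simple header map. The policy values below are a restrictive illustrative starting point, not a drop-in configuration for every application:

```python
# Illustrative response-header set; tighten or relax per application.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'; object-src 'none'; base-uri 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "no-referrer",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge the defaults into an outgoing response; explicit values win."""
    return {**SECURITY_HEADERS, **response_headers}
```

Letting explicit per-response values override the defaults keeps the policy centralized while still allowing pages (e.g. an embeddable widget) to opt out deliberately.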

## Phase 3: Compliance and Testing

### 6. Compliance Verification
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Verify compliance with security standards for: $ARGUMENTS. Check OWASP Top 10, GDPR, SOC2, or other relevant standards. Validate all security implementations."
- Output: Compliance report, remediation requirements

### 7. Security Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create security test suites for: $ARGUMENTS. Include penetration tests, security regression tests, and automated vulnerability scanning."
- Output: Security test suite, penetration test results, CI/CD integration

## Phase 4: Deployment and Monitoring

### 8. Secure Deployment
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Implement secure deployment pipeline for: $ARGUMENTS. Include security gates, vulnerability scanning in CI/CD, and secure configuration management."
- Output: Secure CI/CD pipeline, deployment security checks, rollback procedures

### 9. Security Monitoring Setup
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Set up security monitoring and incident response for: $ARGUMENTS. Include intrusion detection, log analysis, and automated alerting."
- Output: Security monitoring dashboards, alert rules, incident response procedures

## Coordination Notes
- Security findings from each phase inform subsequent implementations
- All agents must prioritize security in their recommendations
- Regular security reviews between phases ensure nothing is missed
- Document all security decisions and trade-offs

Security hardening target: $ARGUMENTS
48
workflows/smart-fix.md
Normal file
@@ -0,0 +1,48 @@
---
model: claude-opus-4-1
---

Intelligently fix the issue using automatic agent selection with explicit Task tool invocations:

[Extended thinking: This workflow analyzes the issue and automatically routes to the most appropriate specialist agent(s). Complex issues may require multiple agents working together.]

First, analyze the issue to categorize it, then use Task tool with the appropriate agent:

## Analysis Phase
Examine the issue: "$ARGUMENTS" to determine the problem domain.

## Agent Selection and Execution

### For Deployment/Infrastructure Issues
If the issue involves deployment failures, infrastructure problems, or DevOps concerns:
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Debug and fix this deployment/infrastructure issue: $ARGUMENTS"

### For Code Errors and Bugs
If the issue involves application errors, exceptions, or functional bugs:
- Use Task tool with subagent_type="debugger"
- Prompt: "Analyze and fix this code error: $ARGUMENTS. Provide root cause analysis and solution."

### For Database Performance
If the issue involves slow queries, database bottlenecks, or data access patterns:
- Use Task tool with subagent_type="database-optimizer"
- Prompt: "Optimize database performance for: $ARGUMENTS. Include query analysis, indexing strategies, and schema improvements."
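
The query-plan analysis this prompt asks for can be sketched with SQLite's `EXPLAIN QUERY PLAN`, used here purely for illustration (the schema is hypothetical; other engines expose the same information through their own `EXPLAIN` output):

```python
import sqlite3

# Illustration: confirm that adding an index changes a full scan into an
# index search. Schema and query are hypothetical examples.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

def plan_for(query: str) -> str:
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(str(row) for row in rows)

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan_for(query)  # full table scan: no index on customer_id yet
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan_for(query)   # now resolved via idx_orders_customer
assert "SCAN" in before and "idx_orders_customer" in after
```

Comparing plans before and after a schema change, rather than relying on wall-clock timings alone, makes the optimization verifiable and repeatable.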

### For Application Performance
If the issue involves slow response times, high resource usage, or performance degradation:
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Profile and optimize application performance issue: $ARGUMENTS. Identify bottlenecks and provide optimization strategies."

### For Legacy Code Issues
If the issue involves outdated code, deprecated patterns, or technical debt:
- Use Task tool with subagent_type="legacy-modernizer"
- Prompt: "Modernize and fix legacy code issue: $ARGUMENTS. Provide migration path and updated implementation."

## Multi-Domain Coordination
For complex issues spanning multiple domains:
1. Use primary agent based on main symptom
2. Use secondary agents for related aspects
3. Coordinate fixes across all affected areas
4. Verify integration between different fixes
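
The selection logic above can be expressed as a routing table. This is one possible sketch; the keyword lists are illustrative examples, not an exhaustive taxonomy of each agent's domain:

```python
# Hypothetical routing table mapping symptom keywords to specialist agents.
AGENT_ROUTES = {
    "devops-troubleshooter": ["deploy", "pipeline", "kubernetes", "infrastructure"],
    "debugger": ["exception", "traceback", "crash", "bug"],
    "database-optimizer": ["slow query", "deadlock", "missing index"],
    "performance-engineer": ["latency", "cpu", "memory leak", "throughput"],
    "legacy-modernizer": ["deprecated", "legacy", "migration"],
}

def select_agents(issue: str) -> list[str]:
    """Return every agent whose keywords appear in the issue description."""
    text = issue.lower()
    matches = [agent for agent, keywords in AGENT_ROUTES.items()
               if any(keyword in text for keyword in keywords)]
    return matches or ["debugger"]  # sensible default when nothing matches
```

Returning every matching agent, not just the first, mirrors the primary/secondary coordination described above: the first match drives the fix and the rest cover related aspects.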

Issue: $ARGUMENTS
203
workflows/tdd-cycle.md
Normal file
@@ -0,0 +1,203 @@
---
model: claude-opus-4-1
---

Execute a comprehensive Test-Driven Development (TDD) workflow with strict red-green-refactor discipline:

[Extended thinking: This workflow enforces test-first development through coordinated agent orchestration. Each phase of the TDD cycle is strictly enforced with fail-first verification, incremental implementation, and continuous refactoring. The workflow supports both single test and test suite approaches with configurable coverage thresholds.]

## Configuration

### Coverage Thresholds
- Minimum line coverage: 80%
- Minimum branch coverage: 75%
- Critical path coverage: 100%

### Refactoring Triggers
- Cyclomatic complexity > 10
- Method length > 20 lines
- Class length > 200 lines
- Duplicate code blocks > 3 lines
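
The complexity trigger above can be checked mechanically. This is a rough sketch using a branching-node count as a proxy for cyclomatic complexity (real tools count more constructs; the sample function is illustrative):

```python
import ast

def branch_complexity(source: str) -> int:
    """Rough cyclomatic-complexity proxy: 1 plus the number of branching nodes."""
    tree = ast.parse(source)
    branching = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)
    return 1 + sum(isinstance(node, branching) for node in ast.walk(tree))

SAMPLE = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
# Enforce the refactoring trigger from the configuration above.
assert branch_complexity(SAMPLE) <= 10, "refactoring trigger: complexity > 10"
```

Wiring a check like this into CI turns the refactoring triggers from guidelines into enforced gates.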

## Phase 1: Test Specification and Design

### 1. Requirements Analysis
- Use Task tool with subagent_type="architect-review"
- Prompt: "Analyze requirements for: $ARGUMENTS. Define acceptance criteria, identify edge cases, and create test scenarios. Output a comprehensive test specification."
- Output: Test specification, acceptance criteria, edge case matrix
- Validation: Ensure all requirements have corresponding test scenarios

### 2. Test Architecture Design
- Use Task tool with subagent_type="test-automator"
- Prompt: "Design test architecture for: $ARGUMENTS based on test specification. Define test structure, fixtures, mocks, and test data strategy. Ensure testability and maintainability."
- Output: Test architecture, fixture design, mock strategy
- Validation: Architecture supports isolated, fast, reliable tests

## Phase 2: RED - Write Failing Tests

### 3. Write Unit Tests (Failing)
- Use Task tool with subagent_type="test-automator"
- Prompt: "Write FAILING unit tests for: $ARGUMENTS. Tests must fail initially. Include edge cases, error scenarios, and happy paths. DO NOT implement production code."
- Output: Failing unit tests, test documentation
- **CRITICAL**: Verify all tests fail with expected error messages

### 4. Verify Test Failure
- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Verify that all tests for: $ARGUMENTS are failing correctly. Ensure failures are for the right reasons (missing implementation, not test errors). Confirm no false positives."
- Output: Test failure verification report
- **GATE**: Do not proceed until all tests fail appropriately

## Phase 3: GREEN - Make Tests Pass

### 5. Minimal Implementation
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Implement MINIMAL code to make tests pass for: $ARGUMENTS. Focus only on making tests green. Do not add extra features or optimizations. Keep it simple."
- Output: Minimal working implementation
- Constraint: No code beyond what's needed to pass tests

### 6. Verify Test Success
- Use Task tool with subagent_type="test-automator"
- Prompt: "Run all tests for: $ARGUMENTS and verify they pass. Check test coverage metrics. Ensure no tests were accidentally broken."
- Output: Test execution report, coverage metrics
- **GATE**: All tests must pass before proceeding

## Phase 4: REFACTOR - Improve Code Quality

### 7. Code Refactoring
- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Refactor implementation for: $ARGUMENTS while keeping tests green. Apply SOLID principles, remove duplication, improve naming, and optimize performance. Run tests after each refactoring."
- Output: Refactored code, refactoring report
- Constraint: Tests must remain green throughout

### 8. Test Refactoring
- Use Task tool with subagent_type="test-automator"
- Prompt: "Refactor tests for: $ARGUMENTS. Remove test duplication, improve test names, extract common fixtures, and enhance test readability. Ensure tests still provide same coverage."
- Output: Refactored tests, improved test structure
- Validation: Coverage metrics unchanged or improved

## Phase 5: Integration and System Tests

### 9. Write Integration Tests (Failing First)
- Use Task tool with subagent_type="test-automator"
- Prompt: "Write FAILING integration tests for: $ARGUMENTS. Test component interactions, API contracts, and data flow. Tests must fail initially."
- Output: Failing integration tests
- Validation: Tests fail due to missing integration logic

### 10. Implement Integration
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Implement integration code for: $ARGUMENTS to make integration tests pass. Focus on component interaction and data flow."
- Output: Integration implementation
- Validation: All integration tests pass

## Phase 6: Continuous Improvement Cycle

### 11. Performance and Edge Case Tests
- Use Task tool with subagent_type="test-automator"
- Prompt: "Add performance tests and additional edge case tests for: $ARGUMENTS. Include stress tests, boundary tests, and error recovery tests."
- Output: Extended test suite
- Metric: Increased test coverage and scenario coverage

### 12. Final Code Review
- Use Task tool with subagent_type="architect-review"
- Prompt: "Perform comprehensive review of: $ARGUMENTS. Verify TDD process was followed, check code quality, test quality, and coverage. Suggest improvements."
- Output: Review report, improvement suggestions
- Action: Implement critical suggestions while maintaining green tests

## Incremental Development Mode

For test-by-test development:
1. Write ONE failing test
2. Make ONLY that test pass
3. Refactor if needed
4. Repeat for next test

Use this approach by adding `--incremental` flag to focus on one test at a time.
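
One turn of that loop might look like the following sketch, using plain asserts as a stand-in for a real test runner (the `slugify` function is an illustrative example, not part of this workflow):

```python
# Step 1 (RED): the test exists before the implementation does.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Running test_slugify() at this point would fail: slugify is not defined yet.

# Step 2 (GREEN): the minimal implementation that makes the test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

test_slugify()  # now passes

# Step 3 (REFACTOR): improve while the test stays green, e.g. collapse runs
# of whitespace, and re-run the test after the change.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify()  # still passes
```

The discipline lives in the ordering: the test is written and observed to fail before any implementation exists, and it is re-run after every refactoring step.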

## Test Suite Mode

For comprehensive test suite development:
1. Write ALL tests for a feature/module (failing)
2. Implement code to pass ALL tests
3. Refactor entire module
4. Add integration tests

Use this approach by adding `--suite` flag for batch test development.

## Validation Checkpoints

### RED Phase Validation
- [ ] All tests written before implementation
- [ ] All tests fail with meaningful error messages
- [ ] Test failures are due to missing implementation
- [ ] No test passes accidentally

### GREEN Phase Validation
- [ ] All tests pass
- [ ] No extra code beyond test requirements
- [ ] Coverage meets minimum thresholds
- [ ] No test was modified to make it pass

### REFACTOR Phase Validation
- [ ] All tests still pass after refactoring
- [ ] Code complexity reduced
- [ ] Duplication eliminated
- [ ] Performance improved or maintained
- [ ] Test readability improved

## Coverage Reports

Generate coverage reports after each phase:
- Line coverage
- Branch coverage
- Function coverage
- Statement coverage

## Failure Recovery

If TDD discipline is broken:
1. **STOP** immediately
2. Identify which phase was violated
3. Rollback to last valid state
4. Resume from correct phase
5. Document lesson learned

## TDD Metrics Tracking

Track and report:
- Time in each phase (Red/Green/Refactor)
- Number of test-implementation cycles
- Coverage progression
- Refactoring frequency
- Defect escape rate

## Anti-Patterns to Avoid

- Writing implementation before tests
- Writing tests that already pass
- Skipping the refactor phase
- Writing multiple features without tests
- Modifying tests to make them pass
- Ignoring failing tests
- Writing tests after implementation

## Success Criteria

- 100% of code written test-first
- All tests pass continuously
- Coverage exceeds thresholds
- Code complexity within limits
- Zero defects in covered code
- Clear test documentation
- Fast test execution (< 5 seconds for unit tests)

## Notes

- Enforce strict RED-GREEN-REFACTOR discipline
- Each phase must be completed before moving to next
- Tests are the specification
- If a test is hard to write, the design needs improvement
- Refactoring is NOT optional
- Keep test execution fast
- Tests should be independent and isolated

TDD implementation for: $ARGUMENTS
1343
workflows/workflow-automate.md
Normal file
File diff suppressed because it is too large