Mirror of https://github.com/wshobson/agents.git, synced 2026-03-18 09:37:15 +00:00
style: format all files with prettier
@@ -17,12 +17,14 @@ Orchestrate comprehensive multi-dimensional code review using specialized review

Use Task tool to orchestrate quality and architecture agents in parallel:

### 1A. Code Quality Analysis

- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Perform comprehensive code quality review for: $ARGUMENTS. Analyze code complexity, maintainability index, technical debt, code duplication, naming conventions, and adherence to Clean Code principles. Integrate with SonarQube, CodeQL, and Semgrep for static analysis. Check for code smells, anti-patterns, and violations of SOLID principles. Generate cyclomatic complexity metrics and identify refactoring opportunities."
- Expected output: Quality metrics, code smell inventory, refactoring recommendations
- Context: Initial codebase analysis, no dependencies on other phases
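As an illustration of the cyclomatic-complexity metric this phase asks for, here is a minimal stdlib-only sketch; a real review would lean on SonarQube or radon, and the branch counting here is a deliberate simplification:

```python
import ast

# Node types that add a branch point (simplified McCabe counting).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.With, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict[str, int]:
    """Approximate McCabe complexity per function: 1 + branch points."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

snippet = """
def risky(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x -= 1
    return x
"""
print(cyclomatic_complexity(snippet))  # {'risky': 4}
```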
### 1B. Architecture & Design Review

- Use Task tool with subagent_type="architect-review"
- Prompt: "Review architectural design patterns and structural integrity in: $ARGUMENTS. Evaluate microservices boundaries, API design, database schema, dependency management, and adherence to Domain-Driven Design principles. Check for circular dependencies, inappropriate coupling, missing abstractions, and architectural drift. Verify compliance with enterprise architecture standards and cloud-native patterns."
- Expected output: Architecture assessment, design pattern analysis, structural recommendations
@@ -33,12 +35,14 @@ Use Task tool to orchestrate quality and architecture agents in parallel:

Use Task tool with security and performance agents, incorporating Phase 1 findings:

### 2A. Security Vulnerability Assessment

- Use Task tool with subagent_type="security-auditor"
- Prompt: "Execute comprehensive security audit on: $ARGUMENTS. Perform OWASP Top 10 analysis, dependency vulnerability scanning with Snyk/Trivy, secrets detection with GitLeaks, input validation review, authentication/authorization assessment, and cryptographic implementation review. Include findings from Phase 1 architecture review: {phase1_architecture_context}. Check for SQL injection, XSS, CSRF, insecure deserialization, and configuration security issues."
- Expected output: Vulnerability report, CVE list, security risk matrix, remediation steps
- Context: Incorporates architectural vulnerabilities identified in Phase 1B
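A minimal sketch of the secrets-detection step, in the spirit of GitLeaks rules; the two patterns below are illustrative assumptions, and production rulesets are far larger:

```python
import re

# Illustrative rules only; real GitLeaks configs cover many more providers.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for each suspected hardcoded secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

config = 'db_host = "localhost"\napi_key = "Zk9fQ3vT81xLmNp2RqS7"\n'
print(scan_for_secrets(config))  # [(2, 'generic_api_key')]
```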
### 2B. Performance & Scalability Analysis

- Use Task tool with subagent_type="application-performance::performance-engineer"
- Prompt: "Conduct performance analysis and scalability assessment for: $ARGUMENTS. Profile code for CPU/memory hotspots, analyze database query performance, review caching strategies, identify N+1 problems, assess connection pooling, and evaluate asynchronous processing patterns. Consider architectural findings from Phase 1: {phase1_architecture_context}. Check for memory leaks, resource contention, and bottlenecks under load."
- Expected output: Performance metrics, bottleneck analysis, optimization recommendations
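One way to surface the N+1 problems mentioned above is to normalize a request's query log and flag repeated query shapes; this sketch assumes the log is available as a plain list of SQL strings:

```python
import re
from collections import Counter

def normalize(sql: str) -> str:
    """Collapse literals so structurally identical queries group together."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> ?
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> ?
    return sql.strip().lower()

def n_plus_one_suspects(query_log: list[str], threshold: int = 3) -> list[str]:
    """Query shapes repeated >= threshold times in one request: likely N+1."""
    counts = Counter(normalize(q) for q in query_log)
    return [q for q, n in counts.items() if n >= threshold]

# Hypothetical per-request query log: one parent fetch, then one query per child row.
log = [
    "SELECT * FROM orders WHERE user_id = 7",
    "SELECT * FROM items WHERE order_id = 1",
    "SELECT * FROM items WHERE order_id = 2",
    "SELECT * FROM items WHERE order_id = 3",
]
print(n_plus_one_suspects(log))  # ['select * from items where order_id = ?']
```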
@@ -49,12 +53,14 @@ Use Task tool with security and performance agents, incorporating Phase 1 findin

Use Task tool for test and documentation quality assessment:

### 3A. Test Coverage & Quality Analysis

- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Evaluate testing strategy and implementation for: $ARGUMENTS. Analyze unit test coverage, integration test completeness, end-to-end test scenarios, test pyramid adherence, and test maintainability. Review test quality metrics including assertion density, test isolation, mock usage, and flakiness. Consider security and performance test requirements from Phase 2: {phase2_security_context}, {phase2_performance_context}. Verify TDD practices if --tdd-review flag is set."
- Expected output: Coverage report, test quality metrics, testing gap analysis
- Context: Incorporates security and performance testing requirements from Phase 2
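The assertion-density metric named in the prompt can be approximated with the stdlib `ast` module; a rough sketch that counts assert statements per statement in each `test_*` function:

```python
import ast

def assertion_density(test_source: str) -> dict[str, float]:
    """Assertions per statement for each test function (higher = denser checks)."""
    tree = ast.parse(test_source)
    out = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            stmts = [n for n in ast.walk(node) if isinstance(n, ast.stmt)]
            asserts = [n for n in stmts if isinstance(n, ast.Assert)]
            # Subtract 1 to exclude the function definition itself.
            out[node.name] = len(asserts) / max(len(stmts) - 1, 1)
    return out

tests = """
def test_add():
    result = 1 + 1
    assert result == 2
"""
print(assertion_density(tests))  # {'test_add': 0.5}
```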
### 3B. Documentation & API Specification Review

- Use Task tool with subagent_type="code-documentation::docs-architect"
- Prompt: "Review documentation completeness and quality for: $ARGUMENTS. Assess inline code documentation, API documentation (OpenAPI/Swagger), architecture decision records (ADRs), README completeness, deployment guides, and runbooks. Verify documentation reflects actual implementation based on all previous phase findings: {phase1_context}, {phase2_context}. Check for outdated documentation, missing examples, and unclear explanations."
- Expected output: Documentation coverage report, inconsistency list, improvement recommendations
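Inline-documentation coverage can be approximated the same way; a sketch that reports the fraction of public functions and classes carrying a docstring:

```python
import ast

def docstring_coverage(source: str) -> float:
    """Fraction of public functions/classes that carry a docstring."""
    tree = ast.parse(source)
    nodes = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
             and not n.name.startswith("_")]
    if not nodes:
        return 1.0  # nothing public to document
    documented = sum(ast.get_docstring(n) is not None for n in nodes)
    return documented / len(nodes)

code = '''
def charge(amount):
    """Charge the customer."""
    return amount

def refund(amount):
    return -amount
'''
print(docstring_coverage(code))  # 0.5
```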
@@ -65,12 +71,14 @@ Use Task tool for test and documentation quality assessment:

Use Task tool to verify framework-specific and industry best practices:

### 4A. Framework & Language Best Practices

- Use Task tool with subagent_type="framework-migration::legacy-modernizer"
- Prompt: "Verify adherence to framework and language best practices for: $ARGUMENTS. Check modern JavaScript/TypeScript patterns, React hooks best practices, Python PEP compliance, Java enterprise patterns, Go idiomatic code, or framework-specific conventions (based on --framework flag). Review package management, build configuration, environment handling, and deployment practices. Include all quality issues from previous phases: {all_previous_contexts}."
- Expected output: Best practices compliance report, modernization recommendations
- Context: Synthesizes all previous findings for framework-specific guidance
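As one concrete example of a language-level check this phase might run (here for Python code specifically), a sketch that flags the mutable-default-argument anti-pattern:

```python
import ast

def mutable_defaults(source: str) -> list[tuple[str, int]]:
    """Flag functions whose default argument values are mutable literals."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults + node.args.kw_defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    hits.append((node.name, node.lineno))
    return hits

code = "def append_item(item, bucket=[]):\n    bucket.append(item)\n    return bucket\n"
print(mutable_defaults(code))  # [('append_item', 1)]
```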
### 4B. CI/CD & DevOps Practices Review

- Use Task tool with subagent_type="cicd-automation::deployment-engineer"
- Prompt: "Review CI/CD pipeline and DevOps practices for: $ARGUMENTS. Evaluate build automation, test automation integration, deployment strategies (blue-green, canary), infrastructure as code, monitoring/observability setup, and incident response procedures. Assess pipeline security, artifact management, and rollback capabilities. Consider all issues identified in previous phases that impact deployment: {all_critical_issues}."
- Expected output: Pipeline assessment, DevOps maturity evaluation, automation recommendations
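A sketch of the kind of pipeline audit this phase performs, run against a hypothetical already-parsed CI config; the required stage names and the `rollback` key are assumptions for illustration, not a real CI schema:

```python
# Hypothetical policy: which stages a pipeline must declare.
REQUIRED_STAGES = ["build", "test", "security-scan", "deploy"]

def validate_pipeline(pipeline: dict) -> list[str]:
    """Return review findings for missing stages or absent rollback support."""
    findings = []
    stages = [s["name"] for s in pipeline.get("stages", [])]
    for required in REQUIRED_STAGES:
        if required not in stages:
            findings.append(f"missing required stage: {required}")
    if not pipeline.get("rollback", {}).get("enabled", False):
        findings.append("no automated rollback configured")
    return findings

pipeline = {
    "stages": [{"name": "build"}, {"name": "test"}, {"name": "deploy"}],
    "rollback": {"enabled": False},
}
print(validate_pipeline(pipeline))
# ['missing required stage: security-scan', 'no automated rollback configured']
```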
@@ -81,6 +89,7 @@ Use Task tool to verify framework-specific and industry best practices:

Compile all phase outputs into comprehensive review report:

### Critical Issues (P0 - Must Fix Immediately)

- Security vulnerabilities with CVSS > 7.0
- Data loss or corruption risks
- Authentication/authorization bypasses
@@ -88,6 +97,7 @@ Compile all phase outputs into comprehensive review report:
- Compliance violations (GDPR, PCI DSS, SOC2)

### High Priority (P1 - Fix Before Next Release)

- Performance bottlenecks impacting user experience
- Missing critical test coverage
- Architectural anti-patterns causing technical debt
@@ -95,6 +105,7 @@ Compile all phase outputs into comprehensive review report:
- Code quality issues affecting maintainability

### Medium Priority (P2 - Plan for Next Sprint)

- Non-critical performance optimizations
- Documentation gaps and inconsistencies
- Code refactoring opportunities
@@ -102,6 +113,7 @@ Compile all phase outputs into comprehensive review report:
- DevOps automation enhancements

### Low Priority (P3 - Track in Backlog)

- Style guide violations
- Minor code smell issues
- Nice-to-have documentation updates
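The priority buckets above can be expressed as a small triage function. Only the CVSS > 7.0 threshold comes from the report spec itself; the category labels are illustrative assumptions:

```python
def triage(finding: dict) -> str:
    """Map a review finding to the P0-P3 buckets (category names are hypothetical)."""
    if finding.get("cvss", 0) > 7.0 or finding.get("category") in {
        "data-loss", "auth-bypass", "compliance"
    }:
        return "P0"  # must fix immediately
    if finding.get("category") in {
        "performance-bottleneck", "missing-critical-tests", "architecture-antipattern"
    }:
        return "P1"  # fix before next release
    if finding.get("category") in {
        "documentation-gap", "refactoring", "minor-optimization"
    }:
        return "P2"  # plan for next sprint
    return "P3"      # track in backlog

print(triage({"cvss": 9.8, "category": "sql-injection"}))  # P0
print(triage({"category": "documentation-gap"}))           # P2
print(triage({"category": "style-violation"}))             # P3
```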
@@ -110,6 +122,7 @@ Compile all phase outputs into comprehensive review report:

## Success Criteria

Review is considered successful when:

- All critical security vulnerabilities are identified and documented
- Performance bottlenecks are profiled with remediation paths
- Test coverage gaps are mapped with priority recommendations
@@ -121,4 +134,4 @@ Review is considered successful when:

- Metrics dashboard shows improvement trends
- Team has clear prioritized action plan for remediation

Target: $ARGUMENTS