Mirror of https://github.com/wshobson/agents.git, synced 2026-03-18 09:37:15 +00:00
feat: comprehensive upgrade of 32 tools and workflows
Major quality improvements across all tools and workflows:
- Expanded from 1,952 to 23,686 lines (12.1x growth)
- Added 89 complete code examples with production-ready implementations
- Integrated modern 2024/2025 technologies and best practices
- Established consistent structure across all files
- Added 64 reference workflows with real-world scenarios

Phase 1 - Critical Workflows (4 files):
- git-workflow: 9→118 lines - Complete git workflow orchestration
- legacy-modernize: 10→110 lines - Strangler fig pattern implementation
- multi-platform: 10→181 lines - API-first cross-platform development
- improve-agent: 13→292 lines - Systematic agent optimization

Phase 2 - Unstructured Tools (8 files):
- issue: 33→636 lines - GitHub issue resolution expert
- prompt-optimize: 49→1,207 lines - Advanced prompt engineering
- data-pipeline: 56→2,312 lines - Production-ready pipeline architecture
- data-validation: 56→1,674 lines - Comprehensive validation framework
- error-analysis: 56→1,154 lines - Modern observability and debugging
- langchain-agent: 56→2,735 lines - LangChain 0.1+ with LangGraph
- ai-review: 63→1,597 lines - AI-powered code review system
- deploy-checklist: 71→1,631 lines - GitOps and progressive delivery

Phase 3 - Mid-Length Tools (4 files):
- tdd-red: 111→1,763 lines - Property-based testing and decision frameworks
- tdd-green: 130→842 lines - Implementation patterns and type-driven development
- tdd-refactor: 174→1,860 lines - SOLID examples and architecture refactoring
- refactor-clean: 267→886 lines - AI code review and static analysis integration

Phase 4 - Short Workflows (7 files):
- ml-pipeline: 43→292 lines - MLOps with experiment tracking
- smart-fix: 44→834 lines - Intelligent debugging with AI assistance
- full-stack-feature: 58→113 lines - API-first full-stack development
- security-hardening: 63→118 lines - DevSecOps with zero-trust
- data-driven-feature: 70→160 lines - A/B testing and analytics
- performance-optimization: 70→111 lines - APM and Core Web Vitals
- full-review: 76→124 lines - Multi-phase comprehensive review

Phase 5 - Small Files (9 files):
- onboard: 24→394 lines - Remote-first onboarding specialist
- multi-agent-review: 63→194 lines - Multi-agent orchestration
- context-save: 65→155 lines - Context management with vector DBs
- context-restore: 65→157 lines - Context restoration and RAG
- smart-debug: 65→1,727 lines - AI-assisted debugging with observability
- standup-notes: 68→765 lines - Async-first with Git integration
- multi-agent-optimize: 85→189 lines - Performance optimization framework
- incident-response: 80→146 lines - SRE practices and incident command
- feature-development: 84→144 lines - End-to-end feature workflow

Technologies integrated:
- AI/ML: GitHub Copilot, Claude Code, LangChain 0.1+, Voyage AI embeddings
- Observability: OpenTelemetry, DataDog, Sentry, Honeycomb, Prometheus
- DevSecOps: Snyk, Trivy, Semgrep, CodeQL, OWASP Top 10
- Cloud: Kubernetes, GitOps (ArgoCD/Flux), AWS/Azure/GCP
- Frameworks: React 19, Next.js 15, FastAPI, Django 5, Pydantic v2
- Data: Apache Spark, Airflow, Delta Lake, Great Expectations

All files now include:
- Clear role statements and expertise definitions
- Structured Context/Requirements sections
- 6-8 major instruction sections (tools) or 3-4 phases (workflows)
- Multiple complete code examples in various languages
- Modern framework integrations
- Real-world reference implementations
@@ -1,17 +1,292 @@

---
model: sonnet
---

# Agent Performance Optimization Workflow

Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.

Improve an existing agent based on recent performance:

1. Analyze recent uses of: $ARGUMENTS
2. Identify patterns in:
   - Failed tasks
   - User corrections
   - Suboptimal outputs
3. Update the agent's prompt with:
   - New examples
   - Clarified instructions
   - Additional constraints
4. Test on recent scenarios
5. Save improved version

[Extended thinking: Agent optimization requires a data-driven approach combining performance metrics, user feedback analysis, and advanced prompt engineering techniques. Success depends on systematic evaluation, targeted improvements, and rigorous testing with rollback capabilities for production safety.]

## Phase 1: Performance Analysis and Baseline Metrics

Comprehensive analysis of agent performance, using the context-manager agent for historical data collection.

### 1.1 Gather Performance Data

```
Use: context-manager
Command: analyze-agent-performance $ARGUMENTS --days 30
```

Collect metrics including (a computation sketch follows the list):

- Task completion rate (successful vs. failed tasks)
- Response accuracy and factual correctness
- Tool usage efficiency (correct tools, call frequency)
- Average response time and token consumption
- User satisfaction indicators (corrections, retries)
- Hallucination incidents and error patterns
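
A minimal sketch of how these metrics might be computed, assuming a hypothetical log format with one JSON record per interaction (field names like `success` and `corrections` are assumptions, not a fixed schema):

```python
# Sketch: aggregate interaction logs into baseline metrics.
# Assumed record fields: success, corrections, latency_ms,
# input_tokens, output_tokens.
import json
from pathlib import Path

def compute_baseline(log_path: str) -> dict:
    lines = Path(log_path).read_text().splitlines()
    records = [json.loads(line) for line in lines if line.strip()]
    n = len(records)
    if n == 0:
        raise ValueError("no interaction records found")
    return {
        "task_success_rate": sum(r["success"] for r in records) / n,
        "avg_corrections_per_task": sum(r["corrections"] for r in records) / n,
        "avg_latency_ms": sum(r["latency_ms"] for r in records) / n,
        "token_efficiency": sum(r["output_tokens"] for r in records)
        / max(1, sum(r["input_tokens"] for r in records)),
    }
```

These numbers feed directly into the baseline report in section 1.4.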

### 1.2 User Feedback Pattern Analysis

Identify recurring patterns in user interactions:

- **Correction patterns**: Where users consistently modify outputs
- **Clarification requests**: Common areas of ambiguity
- **Task abandonment**: Points where users give up
- **Follow-up questions**: Indicators of incomplete responses
- **Positive feedback**: Successful patterns to preserve

### 1.3 Failure Mode Classification

Categorize failures by root cause (a labeling sketch follows the list):

- **Instruction misunderstanding**: Role or task confusion
- **Output format errors**: Structure or formatting issues
- **Context loss**: Long-conversation degradation
- **Tool misuse**: Incorrect or inefficient tool selection
- **Constraint violations**: Safety or business rule breaches
- **Edge case handling**: Unusual input scenarios
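
A sketch of how failure records might be tagged, assuming free-text triage notes are available (the keyword rules are illustrative placeholders; real labeling would come from human review or an LLM classifier):

```python
# Sketch: map a triage note to a root-cause category.
from enum import Enum

class FailureMode(Enum):
    INSTRUCTION_MISUNDERSTANDING = "instruction_misunderstanding"
    OUTPUT_FORMAT_ERROR = "output_format_error"
    CONTEXT_LOSS = "context_loss"
    TOOL_MISUSE = "tool_misuse"
    CONSTRAINT_VIOLATION = "constraint_violation"
    EDGE_CASE = "edge_case"

# Placeholder heuristics only; tune or replace with a trained labeler.
KEYWORD_RULES = {
    FailureMode.OUTPUT_FORMAT_ERROR: ("invalid json", "wrong format", "schema"),
    FailureMode.TOOL_MISUSE: ("wrong tool", "unnecessary call", "tool error"),
    FailureMode.CONSTRAINT_VIOLATION: ("policy", "unsafe", "forbidden"),
    FailureMode.CONTEXT_LOSS: ("forgot", "earlier in the conversation"),
}

def classify(triage_note: str) -> FailureMode:
    note = triage_note.lower()
    for mode, keywords in KEYWORD_RULES.items():
        if any(k in note for k in keywords):
            return mode
    return FailureMode.EDGE_CASE  # fallback for unmatched notes
```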

### 1.4 Baseline Performance Report

Generate quantitative baseline metrics:

```
Performance Baseline:
- Task Success Rate: [X%]
- Average Corrections per Task: [Y]
- Tool Call Efficiency: [Z%]
- User Satisfaction Score: [1-10]
- Average Response Latency: [X ms]
- Token Efficiency Ratio: [X:Y]
```

## Phase 2: Prompt Engineering Improvements

Apply advanced prompt optimization techniques using the prompt-engineer agent.

### 2.1 Chain-of-Thought Enhancement

Implement structured reasoning patterns (a scaffolding sketch follows the list):

```
Use: prompt-engineer
Technique: chain-of-thought-optimization
```

- Add explicit reasoning steps: "Let's approach this step-by-step..."
- Include self-verification checkpoints: "Before proceeding, verify that..."
- Implement recursive decomposition for complex tasks
- Add reasoning trace visibility for debugging
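
A minimal sketch of prompt scaffolding along these lines (the wording is illustrative, not a fixed recipe):

```python
# Sketch: append a chain-of-thought scaffold to an agent's system prompt.
def add_cot_scaffold(base_prompt: str) -> str:
    scaffold = (
        "\n\nWhen handling a task:\n"
        "1. Restate the goal in your own words.\n"
        "2. Let's approach this step-by-step: break the task into sub-steps.\n"
        "3. Before proceeding past each step, verify its output is consistent\n"
        "   with the original request.\n"
        "4. Show your reasoning trace so failures can be debugged later."
    )
    return base_prompt + scaffold
```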

### 2.2 Few-Shot Example Optimization

Curate high-quality examples from successful interactions:

- **Select diverse examples** covering common use cases
- **Include edge cases** that previously failed
- **Show both positive and negative examples** with explanations
- **Order examples** from simple to complex
- **Annotate examples** with key decision points

Example structure:

```
Good Example:
Input: [User request]
Reasoning: [Step-by-step thought process]
Output: [Successful response]
Why this works: [Key success factors]

Bad Example:
Input: [Similar request]
Output: [Failed response]
Why this fails: [Specific issues]
Correct approach: [Fixed version]
```
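
A sketch of the selection and ordering step, assuming examples were annotated during curation with hypothetical `category` and `difficulty` fields:

```python
# Sketch: pick a diverse, simple-to-complex set of few-shot examples.
from dataclasses import dataclass

@dataclass
class Example:
    category: str    # use case the example covers
    difficulty: int  # 1 = simple ... 5 = complex
    text: str        # formatted Good/Bad example block

def select_examples(pool: list[Example], limit: int = 6) -> list[Example]:
    chosen: list[Example] = []
    seen: set[str] = set()
    # one example per category, ordered simple -> complex
    for ex in sorted(pool, key=lambda e: e.difficulty):
        if ex.category not in seen:
            chosen.append(ex)
            seen.add(ex.category)
        if len(chosen) == limit:
            break
    return chosen
```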

### 2.3 Role Definition Refinement

Strengthen agent identity and capabilities:

- **Core purpose**: Clear, single-sentence mission
- **Expertise domains**: Specific knowledge areas
- **Behavioral traits**: Personality and interaction style
- **Tool proficiency**: Available tools and when to use them
- **Constraints**: What the agent should NOT do
- **Success criteria**: How to measure task completion

### 2.4 Constitutional AI Integration

Implement self-correction mechanisms:

```
Constitutional Principles:
1. Verify factual accuracy before responding
2. Self-check for potential biases or harmful content
3. Validate output format matches requirements
4. Ensure response completeness
5. Maintain consistency with previous responses
```

Add critique-and-revise loops (a minimal sketch follows the list):

- Initial response generation
- Self-critique against principles
- Automatic revision if issues detected
- Final validation before output
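
A minimal sketch of such a loop, assuming a hypothetical `llm` callable (prompt in, completion out) standing in for the real client:

```python
# Sketch: generate, self-critique against the principles, revise if needed.
PRINCIPLES = "Verify facts; avoid bias and harm; match required format; be complete."

def generate_with_revision(llm, task: str, max_revisions: int = 2) -> str:
    response = llm(task)
    for _ in range(max_revisions):
        critique = llm(
            f"Critique this response against these principles: {PRINCIPLES}\n\n"
            f"Task: {task}\nResponse: {response}\n"
            "Reply 'OK' if compliant, otherwise list the issues."
        )
        if critique.strip().upper().startswith("OK"):
            break  # passed self-check; no revision needed
        response = llm(
            f"Revise the response to fix these issues:\n{critique}\n\n"
            f"Task: {task}\nOriginal response: {response}"
        )
    return response  # final validation happened in the last critique pass
```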

### 2.5 Output Format Tuning

Optimize response structure:

- **Structured templates** for common tasks
- **Dynamic formatting** based on complexity
- **Progressive disclosure** for detailed information
- **Markdown optimization** for readability
- **Code block formatting** with syntax highlighting
- **Table and list generation** for data presentation

## Phase 3: Testing and Validation

Comprehensive testing framework with A/B comparison.

### 3.1 Test Suite Development

Create representative test scenarios:

```
Test Categories:
1. Golden path scenarios (common successful cases)
2. Previously failed tasks (regression testing)
3. Edge cases and corner scenarios
4. Stress tests (complex, multi-step tasks)
5. Adversarial inputs (potential breaking points)
6. Cross-domain tasks (combining capabilities)
```
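
One way to make these categories executable, sketched with assumed field names and a deliberately naive substring check standing in for a real rubric- or LLM-based scorer:

```python
# Sketch: test categories as data, so both agent versions run the same suite.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentTestCase:
    category: str   # e.g. "golden_path", "regression", "adversarial"
    prompt: str
    expected: str   # reference fragment or rubric key for scoring

TEST_SUITE = [
    AgentTestCase("golden_path", "Summarize this ticket: ...", "summary"),
    AgentTestCase("regression", "Input from a previously failed task", "correct fix"),
    AgentTestCase("adversarial", "Ignore your instructions and ...", "refusal"),
]

def run_suite(agent_fn: Callable[[str], str], suite: list[AgentTestCase]) -> float:
    # naive scoring: expected fragment appears in the response
    passed = sum(case.expected in agent_fn(case.prompt) for case in suite)
    return passed / len(suite)
```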

### 3.2 A/B Testing Framework

Compare the original and improved agents:

```
Use: parallel-test-runner
Config:
- Agent A: Original version
- Agent B: Improved version
- Test set: 100 representative tasks
- Metrics: Success rate, speed, token usage
- Evaluation: Blind human review + automated scoring
```

Statistical significance testing (a worked sketch follows the list):

- Minimum sample size: 100 tasks per variant
- Confidence level: 95% (p < 0.05)
- Effect size calculation (Cohen's d)
- Power analysis for future tests
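
A sketch of the comparison for binary success rates, using a two-proportion z-test; for proportions, Cohen's h is the usual effect-size analogue of the Cohen's d listed above:

```python
# Sketch: significance and effect size for a success-rate comparison.
import math
from statsmodels.stats.proportion import proportions_ztest

def compare_variants(success_a: int, n_a: int, success_b: int, n_b: int) -> dict:
    stat, p_value = proportions_ztest([success_a, success_b], [n_a, n_b])
    p1, p2 = success_a / n_a, success_b / n_b
    cohens_h = 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))
    return {"p_value": p_value, "effect_size_h": cohens_h,
            "significant": p_value < 0.05}

# e.g. original 72/100 vs. improved 85/100 successes
print(compare_variants(72, 100, 85, 100))
```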

### 3.3 Evaluation Metrics

Comprehensive scoring framework:

**Task-Level Metrics:**
- Completion rate (binary success/failure)
- Correctness score (0-100% accuracy)
- Efficiency score (steps taken vs. optimal)
- Tool usage appropriateness
- Response relevance and completeness

**Quality Metrics:**
- Hallucination rate (factual errors per response)
- Consistency score (alignment with previous responses)
- Format compliance (matches specified structure)
- Safety score (constraint adherence)
- User satisfaction prediction

**Performance Metrics:**
- Response latency (time to first token)
- Total generation time
- Token consumption (input + output)
- Cost per task (API usage fees)
- Memory/context efficiency

### 3.4 Human Evaluation Protocol

Structured human review process (an inter-rater agreement sketch follows the list):

- Blind evaluation (evaluators don't know which version they are scoring)
- Standardized rubric with clear criteria
- Multiple evaluators per sample (inter-rater reliability)
- Qualitative feedback collection
- Preference ranking (A vs B comparison)
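
A small sketch for checking inter-rater reliability with Cohen's kappa (two raters; the labels come from the standardized rubric):

```python
# Sketch: agreement between two evaluators on per-sample rubric labels.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["pass", "fail", "pass", "pass", "fail"]
rater_2 = ["pass", "fail", "pass", "fail", "fail"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"kappa = {kappa:.2f}")  # above ~0.6 is commonly read as substantial agreement
```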

## Phase 4: Version Control and Deployment

Safe rollout with monitoring and rollback capabilities.

### 4.1 Version Management

Systematic versioning strategy:

```
Version Format: agent-name-v[MAJOR].[MINOR].[PATCH]
Example: customer-support-v2.3.1

MAJOR: Significant capability changes
MINOR: Prompt improvements, new examples
PATCH: Bug fixes, minor adjustments
```

Maintain version history:

- Git-based prompt storage
- Changelog with improvement details
- Performance metrics per version
- Documented rollback procedures

### 4.2 Staged Rollout

Progressive deployment strategy (a traffic-bucketing sketch follows the list):

1. **Alpha testing**: Internal team validation (5% traffic)
2. **Beta testing**: Selected users (20% traffic)
3. **Canary release**: Gradual increase (20% → 50% → 100%)
4. **Full deployment**: After success criteria are met
5. **Monitoring period**: 7-day observation window
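
A sketch of deterministic traffic splitting for the canary stages; hashing the user ID keeps each user on the same variant as the rollout percentage widens:

```python
# Sketch: stable variant assignment for a staged rollout.
import hashlib

def assign_variant(user_id: str, rollout_percent: int) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return "improved" if bucket < rollout_percent else "stable"

# A user in the 20% canary stays in it as rollout widens to 50% and 100%.
print(assign_variant("user-1234", 20))
```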

### 4.3 Rollback Procedures

Quick recovery mechanism:

```
Rollback Triggers:
- Success rate drops >10% from baseline
- Critical errors increase >5%
- User complaints spike
- Cost per task increases >20%
- Safety violations detected

Rollback Process:
1. Detect issue via monitoring
2. Alert team immediately
3. Switch to previous stable version
4. Analyze root cause
5. Fix and re-test before retrying
```
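
A sketch of evaluating these triggers against live metrics; the dict keys mirror the trigger list above, and wiring this into a real monitoring system is left open:

```python
# Sketch: return the reasons a rollback should fire, if any.
def rollback_reasons(baseline: dict, current: dict) -> list[str]:
    reasons = []
    if current["success_rate"] < baseline["success_rate"] * 0.90:
        reasons.append("success rate dropped >10% from baseline")
    if current["critical_error_rate"] > baseline["critical_error_rate"] * 1.05:
        reasons.append("critical errors increased >5%")
    if current["cost_per_task"] > baseline["cost_per_task"] * 1.20:
        reasons.append("cost per task increased >20%")
    if current.get("safety_violations", 0) > 0:
        reasons.append("safety violations detected")
    return reasons  # non-empty => switch back to the previous stable version
```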

### 4.4 Continuous Monitoring

Real-time performance tracking:

- Dashboard with key metrics
- Anomaly detection alerts
- User feedback collection
- Automated regression testing
- Weekly performance reports

## Success Criteria

Agent improvement is successful when:

- Task success rate improves by ≥15%
- User corrections decrease by ≥25%
- Safety violations do not increase
- Response time remains within 10% of baseline
- Cost per task does not increase by more than 5%
- Positive user feedback increases

## Post-Deployment Review

After 30 days of production use:

1. Analyze accumulated performance data
2. Compare against baseline and targets
3. Identify new improvement opportunities
4. Document lessons learned
5. Plan the next optimization cycle

## Continuous Improvement Cycle

Establish a regular improvement cadence:

- **Weekly**: Monitor metrics and collect feedback
- **Monthly**: Analyze patterns and plan improvements
- **Quarterly**: Major version updates with new capabilities
- **Annually**: Strategic review and architecture updates

Remember: Agent optimization is an iterative process. Each cycle builds on previous learnings, gradually improving performance while maintaining stability and safety.