Mirror of https://github.com/wshobson/agents.git, synced 2026-03-18 09:37:15 +00:00
Plugin Scope Improvements:
- Remove language-specialists plugin (not task-focused)
- Split specialized-domains into 5 focused plugins:
  * blockchain-web3 - Smart contract development only
  * quantitative-trading - Financial modeling and trading only
  * payment-processing - Payment gateway integration only
  * game-development - Unity and Minecraft only
  * accessibility-compliance - WCAG auditing only
- Split business-operations into 3 focused plugins:
  * business-analytics - Metrics and reporting only
  * hr-legal-compliance - HR and legal docs only
  * customer-sales-automation - Support and sales workflows only
- Fix infrastructure-devops scope:
  * Remove database concerns (db-migrate, database-admin)
  * Remove observability concerns (observability-engineer)
  * Move slo-implement to incident-response
  * Focus purely on container orchestration (K8s, Docker, Terraform)
- Fix customer-sales-automation scope:
  * Remove content-marketer (unrelated to customer/sales workflows)

Marketplace Statistics:
- Total plugins: 27 (was 22)
- Tool coverage: 100% (42/42 tools referenced)
- Fat plugins removed: 3 (language-specialists, specialized-domains, business-operations)
- All plugins now have clear, focused tasks

Model Migration:
- Migrate all 42 tools from claude-sonnet-4-0/opus-4-1 to model: sonnet
- Migrate all 15 workflows from claude-opus-4-1 to model: sonnet
- Use short model syntax consistent with agent files

Documentation Updates:
- Update README.md with refined plugin structure
- Update plugin descriptions to be task-focused
- Remove anthropomorphic and marketing language
- Improve category organization (now 16 distinct categories)

Ready for October 9, 2025 @ 9am PST launch
| model |
|---|
| sonnet |
Build data-driven features with integrated pipelines and ML capabilities using specialized agents:
[Extended thinking: This workflow orchestrates data scientists, data engineers, backend architects, and AI engineers to build features that leverage data pipelines, analytics, and machine learning. Each agent contributes their expertise to create a complete data-driven solution.]
Phase 1: Data Analysis and Design
1. Data Requirements Analysis
- Use Task tool with subagent_type="data-scientist"
- Prompt: "Analyze data requirements for: $ARGUMENTS. Identify data sources, required transformations, analytics needs, and potential ML opportunities."
- Output: Data analysis report, feature engineering requirements, ML feasibility
2. Data Pipeline Architecture
- Use Task tool with subagent_type="data-engineer"
- Prompt: "Design data pipeline architecture for: $ARGUMENTS. Include ETL/ELT processes, data storage, streaming requirements, and integration with existing systems based on data scientist's analysis."
- Output: Pipeline architecture, technology stack, data flow diagrams (a minimal pipeline sketch follows this phase)
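For illustration, the pipeline architecture from step 2 might be captured as a declarative spec along these lines. This is a minimal sketch assuming a hypothetical recommendation feature; every stage, source, and sink name is a placeholder that would come from the data scientist's analysis.

```python
from dataclasses import dataclass, field


@dataclass
class Stage:
    """One step in the data flow: where data comes from, how it is shaped, where it lands."""
    name: str
    source: str                      # e.g. an events topic, an OLTP table, a file drop
    transform: str                   # description of the ETL/ELT step applied
    sink: str                        # where the result is written
    schedule: str = "batch:hourly"   # or "streaming" for real-time paths


@dataclass
class PipelineSpec:
    feature: str
    stages: list[Stage] = field(default_factory=list)

    def describe(self) -> str:
        lines = [f"Pipeline for: {self.feature}"]
        for s in self.stages:
            lines.append(f"  {s.name}: {s.source} -> [{s.transform}] -> {s.sink} ({s.schedule})")
        return "\n".join(lines)


# Hypothetical example for a recommendation feature; all names are placeholders.
spec = PipelineSpec(
    feature="personalized recommendations",
    stages=[
        Stage("ingest_events", "clickstream topic", "dedupe + sessionize", "raw_events table", "streaming"),
        Stage("build_features", "raw_events table", "aggregate per user/day", "feature_store.user_daily"),
    ],
)
print(spec.describe())
```

Keeping the spec as plain data makes it easy to diff and review before any infrastructure is provisioned.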
Phase 2: Backend Integration
3. API and Service Design
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Design backend services to support data-driven feature: $ARGUMENTS. Include APIs for data ingestion, analytics endpoints, and ML model serving based on pipeline architecture."
- Output: Service architecture, API contracts, integration patterns
4. Database and Storage Design
- Use Task tool with subagent_type="database-optimizer"
- Prompt: "Design optimal database schema and storage strategy for: $ARGUMENTS. Consider both transactional and analytical workloads, time-series data, and ML feature stores."
- Output: Database schemas, indexing strategies, storage recommendations (a schema sketch follows this phase)
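To make the transactional-versus-analytical split from step 4 concrete, here is a minimal sketch that uses SQLite purely as a stand-in so it runs anywhere; the table and column names are assumptions, and the real design would target whichever stores the database-optimizer recommends.

```python
import sqlite3

# SQLite stands in for the recommended OLTP and analytical stores.
conn = sqlite3.connect(":memory:")

# Transactional side: normalized, row-oriented, written by the application.
conn.execute("""
    CREATE TABLE orders (
        order_id   INTEGER PRIMARY KEY,
        user_id    INTEGER NOT NULL,
        amount     REAL NOT NULL,
        created_at TEXT NOT NULL
    )
""")
conn.execute("CREATE INDEX idx_orders_user_created ON orders (user_id, created_at)")

# Analytical / feature-store side: denormalized aggregates keyed for fast reads
# at training and serving time (hypothetical feature set).
conn.execute("""
    CREATE TABLE user_features (
        user_id        INTEGER NOT NULL,
        as_of_date     TEXT NOT NULL,
        order_count_7d INTEGER,
        spend_30d      REAL,
        PRIMARY KEY (user_id, as_of_date)
    )
""")

conn.commit()
print([r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])
```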
Phase 3: ML and AI Implementation
5. ML Pipeline Development
- Use Task tool with subagent_type="ml-engineer"
- Prompt: "Implement ML pipeline for: $ARGUMENTS. Include feature engineering, model training, validation, and deployment based on data scientist's requirements."
- Output: ML pipeline code, model artifacts, deployment strategy (a minimal training sketch follows this phase)
6. AI Integration
- Use Task tool with subagent_type="ai-engineer"
- Prompt: "Build AI-powered features for: $ARGUMENTS. Integrate LLMs, implement RAG if needed, and create intelligent automation based on ML engineer's models."
- Output: AI integration code, prompt engineering, RAG implementation
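As a rough illustration of step 5, the sketch below trains, validates, and persists a scikit-learn pipeline on synthetic data; the model choice, validation threshold, and artifact path are placeholder assumptions, not the actual design.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the engineered features produced by the data pipeline.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=42)

# Feature scaling + model in one pipeline so training and serving stay consistent.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1_000)),
])
model.fit(X_train, y_train)

# Validation gate before the artifact is allowed to ship (illustrative threshold).
auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"validation AUC: {auc:.3f}")
assert auc > 0.7, "model fails the validation threshold; do not deploy"

# Persist the trained pipeline as the deployable artifact (hypothetical path).
joblib.dump(model, "model_artifact.joblib")
```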
Phase 4: Implementation and Optimization
7. Data Pipeline Implementation
- Use Task tool with subagent_type="data-engineer"
- Prompt: "Implement production data pipelines for: $ARGUMENTS. Include real-time streaming, batch processing, and data quality monitoring based on all previous designs."
- Output: Pipeline implementation, monitoring setup, data quality checks
8. Performance Optimization
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Optimize data processing and model serving performance for: $ARGUMENTS. Focus on query optimization, caching strategies, and model inference speed."
- Output: Performance improvements, caching layers, optimization report (a caching sketch follows this phase)
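One possible shape for the caching strategy in step 8 is a small read-through cache in front of model inference. The in-process dict, TTL, and key scheme below are illustrative assumptions standing in for whatever cache the performance-engineer actually selects.

```python
import hashlib
import json
import time
from typing import Callable, Sequence


def make_cached_predictor(predict: Callable[[Sequence[float]], float], ttl_seconds: float = 60.0):
    """Wrap a prediction function with a read-through cache keyed by the feature vector."""
    cache: dict[str, tuple[float, float]] = {}  # key -> (expires_at, prediction)

    def cached_predict(features: Sequence[float]) -> float:
        key = hashlib.sha256(json.dumps(list(features)).encode()).hexdigest()
        hit = cache.get(key)
        now = time.monotonic()
        if hit is not None and hit[0] > now:
            return hit[1]               # cache hit: skip model inference entirely
        value = predict(features)       # cache miss: run the (expensive) model
        cache[key] = (now + ttl_seconds, value)
        return value

    return cached_predict


# Hypothetical usage with a trivially cheap "model" standing in for real inference.
score = make_cached_predictor(lambda f: sum(f) / len(f))
print(score([0.2, 0.4, 0.9]))   # miss: computed
print(score([0.2, 0.4, 0.9]))   # hit: served from cache
```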
Phase 5: Testing and Deployment
9. Comprehensive Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create test suites for data pipelines and ML components: $ARGUMENTS. Include data validation tests, model performance tests, and integration tests."
- Output: Test suites, data quality tests, ML monitoring tests (an example test suite follows this phase)
10. Production Deployment
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Deploy data-driven feature to production: $ARGUMENTS. Include pipeline orchestration, model deployment, monitoring, and rollback strategies."
- Output: Deployment configurations, monitoring dashboards, operational runbooks
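Step 9's suites could include data validation checks and a model performance gate along these lines; the column names, thresholds, and fixture data are hypothetical stand-ins for what the real pipeline exposes.

```python
# test_data_driven_feature.py -- run with: pytest test_data_driven_feature.py
import math

import pytest

REQUIRED_COLUMNS = {"user_id", "as_of_date", "order_count_7d", "spend_30d"}


@pytest.fixture
def feature_rows():
    # Stand-in for rows read from the feature store.
    return [
        {"user_id": 1, "as_of_date": "2025-10-01", "order_count_7d": 3, "spend_30d": 42.5},
        {"user_id": 2, "as_of_date": "2025-10-01", "order_count_7d": 0, "spend_30d": 0.0},
    ]


def test_feature_rows_have_required_columns(feature_rows):
    for row in feature_rows:
        assert REQUIRED_COLUMNS.issubset(row), f"missing columns in {row}"


def test_feature_values_are_valid(feature_rows):
    for row in feature_rows:
        assert row["order_count_7d"] >= 0
        assert not math.isnan(row["spend_30d"])


def test_model_meets_performance_threshold():
    # Stand-in metric; in the real suite this would come from a held-out evaluation.
    validation_auc = 0.82
    assert validation_auc >= 0.75, "model below the agreed performance threshold"
```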
Coordination Notes
- Data flow and requirements cascade from data scientists to engineers
- ML models must integrate seamlessly with backend services
- Performance considerations apply to both data processing and model serving
- Maintain data lineage and versioning throughout the pipeline, as sketched below
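A minimal sketch of the lineage note above: fingerprint each dataset and append a record of which inputs and code version produced which output. The record shape, helper names, and log file are assumptions; a real setup would more likely use a dedicated lineage or experiment-tracking tool.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def dataset_fingerprint(path: Path) -> str:
    """Content hash of a dataset file, used as its version identifier."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:16]


def record_lineage(output: Path, inputs: list[Path], code_version: str,
                   log: Path = Path("lineage.jsonl")) -> dict:
    """Append one lineage record: which inputs and code produced which output, and when."""
    entry = {
        "output": str(output),
        "output_version": dataset_fingerprint(output),
        "inputs": {str(p): dataset_fingerprint(p) for p in inputs},
        "code_version": code_version,
        "produced_at": datetime.now(timezone.utc).isoformat(),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


# Hypothetical usage: raw events in, user features out, tagged with a git SHA.
# record_lineage(Path("user_features.parquet"), [Path("raw_events.parquet")], code_version="abc1234")
```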
Data-driven feature to build: $ARGUMENTS