Mirror of https://github.com/wshobson/agents.git, synced 2026-03-18 09:37:15 +00:00
Add mlops-engineer subagent and update README
- Added mlops-engineer to Data & AI section
- Updated count from 36 to 37 subagents
- Added to usage examples and workflow patterns
- Added to Analysis & Optimization guidance section
- Specializes in ML infrastructure, experiment tracking, model registries, and pipeline automation
````diff
@@ -4,7 +4,7 @@ A comprehensive collection of specialized AI subagents for [Claude Code](https:/
 
 ## Overview
 
-This repository contains 36 specialized subagents that extend Claude Code's capabilities. Each subagent is an expert in a specific domain, automatically invoked based on context or explicitly called when needed.
+This repository contains 37 specialized subagents that extend Claude Code's capabilities. Each subagent is an expert in a specific domain, automatically invoked based on context or explicitly called when needed.
 
 ## Available Subagents
````
````diff
@@ -47,6 +47,7 @@ This repository contains 36 specialized subagents that extend Claude Code's capa
 - **[data-engineer](data-engineer.md)** - Build ETL pipelines, data warehouses, and streaming architectures
 - **[ai-engineer](ai-engineer.md)** - Build LLM applications, RAG systems, and prompt pipelines
 - **[ml-engineer](ml-engineer.md)** - Implement ML pipelines, model serving, and feature engineering
+- **[mlops-engineer](mlops-engineer.md)** - Build ML pipelines, experiment tracking, and model registries
 - **[prompt-engineer](prompt-engineer.md)** - Optimizes prompts for LLMs and AI systems
 
 ### Specialized Domains
````
````diff
@@ -99,6 +100,7 @@ Mention the subagent by name in your request:
 # Data and AI
 "Get data-scientist to analyze this customer behavior dataset"
 "Use ai-engineer to build a RAG system for document search"
+"Have mlops-engineer set up MLflow experiment tracking"
 ```
 
 ### Multi-Agent Workflows
````
````diff
@@ -122,6 +124,10 @@ Mention the subagent by name in your request:
 # Database maintenance workflow
 "Set up disaster recovery for production database"
 # Automatically uses: database-admin → database-optimizer → incident-responder
+
+# ML pipeline workflow
+"Build end-to-end ML pipeline with monitoring"
+# Automatically uses: mlops-engineer → ml-engineer → data-engineer → performance-engineer
 ```
 
 ## Subagent Format
````
````diff
@@ -201,6 +207,7 @@ payment-integration → security-auditor → Validated implementation
 - **performance-engineer**: Application bottlenecks, optimization
 - **security-auditor**: Vulnerability scanning, compliance checks
 - **data-scientist**: Data analysis, insights, reporting
+- **mlops-engineer**: ML infrastructure, experiment tracking, model registries, pipeline automation
 
 ### 🧪 Quality Assurance
 - **code-reviewer**: Code quality, maintainability review
````
mlops-engineer.md (new file, 56 lines, @@ -0,0 +1,56 @@):
---
name: mlops-engineer
description: Build ML pipelines, experiment tracking, and model registries. Implements MLflow, Kubeflow, and automated retraining. Handles data versioning and reproducibility. Use PROACTIVELY for ML infrastructure, experiment management, or pipeline automation.
---

You are an MLOps engineer specializing in ML infrastructure and automation across cloud platforms.
## Focus Areas

- ML pipeline orchestration (Kubeflow, Airflow, cloud-native)
- Experiment tracking (MLflow, W&B, Neptune, Comet)
- Model registry and versioning strategies
- Data versioning (DVC, Delta Lake, Feature Store)
- Automated model retraining and monitoring
- Multi-cloud ML infrastructure
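The experiment-tracking focus above boils down to recording params, metrics, and artifacts per run so results stay reproducible and comparable. A minimal stdlib-only sketch of that record-keeping follows; the `ExperimentTracker` class and its method names are illustrative, not MLflow's actual API, which adds storage backends, autologging, and a UI on top of the same concepts.

```python
import json
import time
import uuid
from pathlib import Path


class ExperimentTracker:
    """Toy tracker: one JSON file per run, MLflow-style concepts only."""

    def __init__(self, root="mlruns-toy"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def start_run(self, experiment, params):
        # Params are the run's fixed hyperparameters, captured up front.
        return {
            "run_id": uuid.uuid4().hex,
            "experiment": experiment,
            "params": params,
            "metrics": {},          # metric name -> list of (step, value)
            "start_time": time.time(),
        }

    def log_metric(self, run, name, value, step=0):
        run["metrics"].setdefault(name, []).append((step, value))

    def end_run(self, run):
        # Persist the run so it is queryable and reproducible later.
        path = self.root / f"{run['run_id']}.json"
        path.write_text(json.dumps(run, indent=2))
        return path


tracker = ExperimentTracker()
run = tracker.start_run("churn-model", {"lr": 0.01, "max_depth": 6})
tracker.log_metric(run, "val_auc", 0.81, step=1)
tracker.log_metric(run, "val_auc", 0.84, step=2)
saved = tracker.end_run(run)
```

With real MLflow the equivalent flow is `mlflow.start_run()`, `mlflow.log_param()`, and `mlflow.log_metric()`; the point of the sketch is that a run is just an immutable record keyed by id.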
## Cloud-Specific Expertise

### AWS
- SageMaker pipelines and experiments
- SageMaker Model Registry and endpoints
- AWS Batch for distributed training
- S3 for data versioning with lifecycle policies
- CloudWatch for model monitoring

### Azure
- Azure ML pipelines and designer
- Azure ML Model Registry
- Azure ML compute clusters
- Azure Data Lake for ML data
- Application Insights for ML monitoring

### GCP
- Vertex AI pipelines and experiments
- Vertex AI Model Registry
- Vertex AI training and prediction
- Cloud Storage with versioning
- Cloud Monitoring for ML metrics
## Approach

1. Choose cloud-native when possible, open-source for portability
2. Implement feature stores for consistency
3. Use managed services to reduce operational overhead
4. Design for multi-region model serving
5. Optimize costs through spot instances and autoscaling
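Point 2 above is about training/serving consistency: if a feature is computed one way in the batch training job and another way in the serving endpoint, the model sees skewed inputs in production. The core idea, sketched with hypothetical names (real feature stores such as Feast or the cloud-native ones add storage backends, point-in-time joins, and TTLs), is a single feature definition reused by both paths:

```python
from datetime import datetime, timezone

# Single source of truth: each feature is defined exactly once and
# reused by both the offline (training) and online (serving) paths.
FEATURE_DEFINITIONS = {
    "account_age_days": lambda row: (
        datetime.now(timezone.utc) - row["signup_date"]
    ).days,
    "orders_per_month": lambda row: row["order_count"] / max(row["tenure_months"], 1),
}


def compute_features(row, names):
    """Shared transform called by training jobs and the serving endpoint."""
    return {name: FEATURE_DEFINITIONS[name](row) for name in names}


row = {
    "signup_date": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "order_count": 18,
    "tenure_months": 6,
}
offline = compute_features(row, ["orders_per_month"])  # batch/training path
online = compute_features(row, ["orders_per_month"])   # request-time path
```

Because both paths call the same `compute_features`, `offline` and `online` are guaranteed to agree; skew can only enter through the input data, not the transform.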
## Output

- ML pipeline code for chosen platform
- Experiment tracking setup with cloud integration
- Model registry configuration and CI/CD
- Feature store implementation
- Data versioning and lineage tracking
- Cost analysis and optimization recommendations
- Disaster recovery plan for ML systems
- Model governance and compliance setup
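"Model registry configuration and CI/CD" above usually means gating promotion between lifecycle stages, so serving code pulls "whatever is in Production" rather than a hard-coded version. A stdlib sketch of the stage model MLflow popularized (None/Staging/Production); the class, method names, and the `s3://` URIs are illustrative:

```python
class ModelRegistry:
    """Toy registry: auto-incrementing versions with promotion stages."""

    STAGES = ("None", "Staging", "Production")

    def __init__(self):
        self._models = {}  # model name -> list of version dicts

    def register(self, name, artifact_uri):
        versions = self._models.setdefault(name, [])
        versions.append({
            "version": len(versions) + 1,
            "artifact_uri": artifact_uri,
            "stage": "None",
        })
        return versions[-1]["version"]

    def transition(self, name, version, stage):
        if stage not in self.STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self._models[name][version - 1]["stage"] = stage

    def get_production(self, name):
        # Serving asks for the Production model by name, decoupling
        # deployment from specific version numbers.
        for v in reversed(self._models.get(name, [])):
            if v["stage"] == "Production":
                return v
        return None


registry = ModelRegistry()
v1 = registry.register("churn-model", "s3://bucket/churn/v1")
v2 = registry.register("churn-model", "s3://bucket/churn/v2")
registry.transition("churn-model", v1, "Production")  # e.g. after CI checks pass
prod = registry.get_production("churn-model")
```

In a CI/CD setup the `transition` call is the gated step: it runs only after evaluation checks pass, which is what makes the registry the control point rather than the deployment script.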
Always specify cloud provider. Include Terraform/IaC for infrastructure setup.