style: format all files with prettier

This commit is contained in:
Seth Hobson
2026-01-19 17:07:03 -05:00
parent 8d37048deb
commit 56848874a2
355 changed files with 15215 additions and 10241 deletions

File diff suppressed because it is too large


@@ -24,6 +24,7 @@ Build sophisticated AI agent system for: $ARGUMENTS
## Essential Architecture
### LangGraph State Management
```python
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import create_react_agent
@@ -35,6 +36,7 @@ class AgentState(TypedDict):
```
### Model & Embeddings
- **Primary LLM**: Claude Sonnet 4.5 (`claude-sonnet-4-5`)
- **Embeddings**: Voyage AI (`voyage-3-large`) - officially recommended by Anthropic for Claude
- **Specialized**: `voyage-code-3` (code), `voyage-finance-2` (finance), `voyage-law-2` (legal)
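The specialized models above lend themselves to simple domain routing. A minimal sketch — the helper and its fallback behavior are illustrative assumptions, not an official API:

```python
# Illustrative sketch: route a document domain to one of the Voyage
# models listed above. The mapping and fallback are assumptions.
SPECIALIZED_EMBEDDING_MODELS = {
    "code": "voyage-code-3",
    "finance": "voyage-finance-2",
    "legal": "voyage-law-2",
}

def select_embedding_model(domain: str, default: str = "voyage-3-large") -> str:
    """Return the specialized model for a known domain, else the general default."""
    return SPECIALIZED_EMBEDDING_MODELS.get(domain, default)

print(select_embedding_model("code"))     # voyage-code-3
print(select_embedding_model("medical"))  # voyage-3-large
```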
@@ -84,6 +86,7 @@ base_retriever = vectorstore.as_retriever(
```
### Advanced RAG Patterns
- **HyDE**: Generate hypothetical documents for better retrieval
- **RAG Fusion**: Multiple query perspectives for comprehensive results
- **Reranking**: Use Cohere Rerank for relevance optimization
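RAG Fusion combines rankings produced by multiple query rewrites, conventionally via reciprocal rank fusion. A stdlib-only sketch over toy document IDs:

```python
# Reciprocal rank fusion (RRF): score each document by summing
# 1 / (k + rank) across the ranked lists; k=60 is the conventional constant.
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three query perspectives retrieving from the same corpus (toy IDs):
fused = reciprocal_rank_fusion([
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc a".replace(" ", "_")],
])
print(fused[0])  # doc_b — near the top of every list
```

Documents that appear high in several lists outrank a document that tops only one, which is the point of fusing query perspectives.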
@@ -117,6 +120,7 @@ tool = StructuredTool.from_function(
## Production Deployment
### FastAPI Server with Streaming
```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
@@ -132,12 +136,14 @@ async def invoke_agent(request: AgentRequest):
```
### Monitoring & Observability
- **LangSmith**: Trace all agent executions
- **Prometheus**: Track metrics (requests, latency, errors)
- **Structured Logging**: Use `structlog` for consistent logs
- **Health Checks**: Validate LLM, tools, memory, and external services
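The health-check bullet can be sketched as a composite probe runner; the probe names and status strings below are illustrative assumptions:

```python
from typing import Callable

# Run one probe per dependency (LLM, tools, memory, external services)
# and aggregate into an overall status.
def run_health_checks(checks: dict[str, Callable[[], bool]]) -> dict:
    results = {}
    for name, probe in checks.items():
        try:
            results[name] = "ok" if probe() else "degraded"
        except Exception:
            results[name] = "down"
    overall = "ok" if all(v == "ok" for v in results.values()) else "degraded"
    return {"status": overall, "checks": results}

report = run_health_checks({
    "llm": lambda: True,        # e.g. a cheap model ping
    "vector_db": lambda: True,  # e.g. a collection stat call
    "redis": lambda: False,     # simulate a failing cache
})
print(report["status"])  # degraded
```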
### Optimization Strategies
- **Caching**: Redis for response caching with TTL
- **Connection Pooling**: Reuse vector DB connections
- **Load Balancing**: Multiple agent workers with round-robin routing
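Redis-backed response caching cuts repeated LLM calls; the TTL semantics can be sketched in-process (stdlib only — in production the same get/set-with-expiry would be Redis `GET`/`SETEX`):

```python
import time

# In-process sketch of TTL response caching. Keys would typically be a
# hash of the prompt; values the cached completion.
class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, str]] = {}

    def set(self, key: str, value: str) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("prompt:abc", "cached response")
print(cache.get("prompt:abc"))  # cached response
time.sleep(0.06)
print(cache.get("prompt:abc"))  # None (expired)
```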
@@ -165,6 +171,7 @@ results = await evaluate(
## Key Patterns
### State Graph Pattern
```python
builder = StateGraph(MessagesState)
builder.add_node("node1", node1_func)
@@ -176,6 +183,7 @@ agent = builder.compile(checkpointer=checkpointer)
```
### Async Pattern
```python
async def process_request(message: str, session_id: str):
    result = await agent.ainvoke(
@@ -186,6 +194,7 @@ async def process_request(message: str, session_id: str):
```
### Error Handling Pattern
```python
from tenacity import retry, stop_after_attempt, wait_exponential


@@ -22,12 +22,14 @@ $ARGUMENTS
Evaluate the prompt across key dimensions:
**Assessment Framework**
- Clarity score (1-10) and ambiguity points
- Structure: logical flow and section boundaries
- Model alignment: capability utilization and token efficiency
- Performance: success rate, failure modes, edge case handling
**Decomposition**
- Core objective and constraints
- Output format requirements
- Explicit vs implicit expectations
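The assessment dimensions above can be captured in a small container; the field names and the rework threshold are illustrative assumptions, not a published schema:

```python
from dataclasses import dataclass, field

# Illustrative record for one prompt assessment; fields mirror the
# framework's dimensions (clarity, ambiguity, structure, failure modes).
@dataclass
class PromptAssessment:
    clarity_score: int                                   # 1-10
    ambiguity_points: list[str] = field(default_factory=list)
    structure_notes: str = ""
    failure_modes: list[str] = field(default_factory=list)

    def needs_rework(self) -> bool:
        """Assumed threshold: low clarity or several ambiguities triggers rework."""
        return self.clarity_score < 6 or len(self.ambiguity_points) >= 3

a = PromptAssessment(clarity_score=4, ambiguity_points=["undefined audience"])
print(a.needs_rework())  # True
```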
@@ -36,6 +38,7 @@ Evaluate the prompt across key dimensions:
### 2. Apply Chain-of-Thought Enhancement
**Standard CoT Pattern**
```python
# Before: Simple instruction
prompt = "Analyze this customer feedback and determine sentiment"
@@ -56,11 +59,13 @@ Step 1 - Key emotional phrases:
```
**Zero-Shot CoT**
```python
enhanced = original + "\n\nLet's approach this step-by-step, breaking down the problem into smaller components and reasoning through each carefully."
```
**Tree-of-Thoughts**
```python
tot_prompt = """
Explore multiple solution paths:
@@ -79,6 +84,7 @@ Select best approach and implement.
### 3. Implement Few-Shot Learning
**Strategic Example Selection**
```python
few_shot = """
Example 1 (Simple case):
@@ -100,6 +106,7 @@ Now apply to: {actual_input}
### 4. Apply Constitutional AI Patterns
**Self-Critique Loop**
```python
constitutional = """
{initial_instruction}
@@ -119,7 +126,8 @@ Final Response: [Refined]
### 5. Model-Specific Optimization
**GPT-5.2**
````python
gpt5_optimized = """
##CONTEXT##
{structured_context}
@@ -134,12 +142,13 @@ gpt5_optimized = """
##OUTPUT FORMAT##
```json
{"structured": "response"}
```
##EXAMPLES##
{few_shot_examples}
"""
````
**Claude 4.5/4**
```python
@@ -162,9 +171,10 @@ claude_optimized = """
{xml_structured_response}
</output_format>
"""
````
**Gemini Pro/Ultra**
```python
gemini_optimized = """
**System Context:** {background}
@@ -188,6 +198,7 @@ gemini_optimized = """
### 6. RAG Integration
**RAG-Optimized Prompt**
```python
rag_prompt = """
## Context Documents
@@ -210,6 +221,7 @@ Example: "Based on [Source 1], {answer}. [Source 3] corroborates: {detail}. No i
### 7. Evaluation Framework
**Testing Protocol**
```python
evaluation = """
## Test Cases (20 total)
@@ -227,6 +239,7 @@ evaluation = """
```
**LLM-as-Judge**
```python
judge_prompt = """
Evaluate AI response quality.
@@ -252,6 +265,7 @@ Recommendation: Accept/Revise/Reject
### 8. Production Deployment
**Prompt Versioning**
```python
class PromptVersion:
    def __init__(self, base_prompt):
@@ -270,6 +284,7 @@ class PromptVersion:
```
**Error Handling**
```python
robust_prompt = """
{main_instruction}
@@ -291,15 +306,18 @@ Provide partial solution with boundaries and next steps if full task cannot be c
### Example 1: Customer Support
**Before**
```
Answer customer questions about our product.
```
**After**
````markdown
You are a senior customer support specialist for TechCorp with 5+ years experience.
## Context
- Product: {product_name}
- Customer Tier: {tier}
- Issue Category: {category}
@@ -307,9 +325,11 @@ You are a senior customer support specialist for TechCorp with 5+ years experien
## Framework
### 1. Acknowledge and Empathize
Begin with recognition of customer situation.
### 2. Diagnostic Reasoning
<thinking>
1. Identify core issue
2. Consider common causes
@@ -318,23 +338,27 @@ Begin with recognition of customer situation.
</thinking>
### 3. Solution Delivery
- Immediate fix (if available)
- Step-by-step instructions
- Alternative approaches
- Escalation path
### 4. Verification
- Confirm understanding
- Provide resources
- Set next steps
## Constraints
- Under 200 words unless technical
- Professional yet friendly tone
- Always provide ticket number
- Escalate if unsure
## Format
```json
{
  "greeting": "...",
@@ -343,14 +367,18 @@ Begin with recognition of customer situation.
  "follow_up": "..."
}
```
````
### Example 2: Data Analysis
**Before**
```
Analyze this sales data and provide insights.
```
**After**
````python
@@ -404,16 +432,20 @@ recommendations:
  immediate: []
  short_term: []
  long_term: []
```
"""
````
### Example 3: Code Generation
**Before**
```
Write a Python function to process user data.
```
**After**
````python
@@ -473,15 +505,17 @@ def process_user_data(raw_data: Dict[str, Any]) -> Union[ProcessedUser, Dict[str
        name=sanitize_string(raw_data['name'], 100),
        metadata={k: v for k, v in raw_data.items() if k not in required}
    )
```
### Self-Review
✓ Input validation and sanitization
✓ Injection prevention
✓ Error handling
✓ Performance: O(n) complexity
"""
````
### Example 4: Meta-Prompt Generator
@@ -530,18 +564,20 @@ ELSE: APPLY hybrid
Overall: []/50
Recommendation: use_as_is | iterate | redesign
"""
````
## Output Format
Deliver a comprehensive optimization report:
### Optimized Prompt
```markdown
[Complete production-ready prompt with all enhancements]
```
### Optimization Report
```yaml
analysis:
original_assessment:
@@ -583,6 +619,7 @@ next_steps:
```
### Usage Guidelines
1. **Implementation**: Use optimized prompt exactly
2. **Parameters**: Apply recommended settings
3. **Testing**: Run test cases before production