fix(conductor): move plugin to plugins/ directory for proper discovery

The Conductor plugin was at the repository root instead of the plugins/
directory, so its slash commands were not recognized by Claude Code.
Author: Seth Hobson
Date: 2026-01-15 20:34:57 -05:00
parent efb75ac1fc
commit 1408671cb7
28 changed files with 0 additions and 0 deletions


@@ -0,0 +1,20 @@
{
"name": "conductor",
"version": "1.0.1",
"description": "Context-Driven Development plugin that transforms Claude Code into a project management tool. Implements structured workflow: Context → Spec & Plan → Implement with full TDD support, track-based work units, and semantic git reversion.",
"author": {
"name": "Seth Hobson",
"url": "https://github.com/wshobson"
},
"homepage": "https://github.com/wshobson/agents",
"repository": "https://github.com/wshobson/agents",
"license": "MIT",
"keywords": [
"project-management",
"context-driven-development",
"tdd",
"planning",
"specifications",
"workflow"
]
}

plugins/conductor/README.md

@@ -0,0 +1,109 @@
# Conductor - Context-Driven Development Plugin for Claude Code
Conductor transforms Claude Code into a project management tool by implementing **Context-Driven Development**. It enforces a structured workflow: **Context → Spec & Plan → Implement**.
## Philosophy
By treating context as a managed artifact alongside code, teams establish a persistent, project-aware foundation for all AI interactions. The system maintains:
- **Product vision** as living documentation
- **Technical decisions** as structured artifacts
- **Work units (tracks)** with specifications and phased plans
- **TDD workflow** with verification checkpoints
## Features
- **Specification & Planning**: Generate detailed specs and actionable task plans before implementation
- **Context Management**: Maintain style guides, tech stack preferences, and product goals
- **Safe Iteration**: Review plans before code generation, keeping humans in control
- **Team Collaboration**: Project-level context documents become shared foundations
- **Project Intelligence**: Handles both greenfield (new) and brownfield (existing) projects
- **Semantic Reversion**: Git-aware revert by logical work units (tracks, phases, tasks)
- **State Persistence**: Resume setup across multiple sessions
## Commands
| Command | Description |
| ---------------------- | ---------------------------------------------------------------------------------- |
| `/conductor:setup` | Initialize project with product definition, tech stack, workflow, and style guides |
| `/conductor:new-track` | Create a feature or bug track with spec.md and plan.md |
| `/conductor:implement` | Execute tasks from the plan following workflow rules |
| `/conductor:status` | Display project progress overview |
| `/conductor:revert` | Git-aware undo by track, phase, or task |
## Generated Artifacts
```
conductor/
├── index.md # Navigation hub
├── product.md # Product vision & goals
├── product-guidelines.md # Standards & messaging
├── tech-stack.md # Technology preferences
├── workflow.md # Development practices (TDD, commits)
├── tracks.md # Master track registry
├── setup_state.json # Resumable setup state
├── code_styleguides/ # Language-specific conventions
└── tracks/
└── <track-id>/
├── spec.md # Requirements specification
├── plan.md # Phased task breakdown
├── metadata.json # Track metadata
└── index.md # Track navigation
```
## Workflow
### 1. Setup (`/conductor:setup`)
Interactive initialization that creates foundational project documentation:
- Detects greenfield vs brownfield projects
- Asks sequential questions about product, tech stack, workflow preferences
- Generates style guides for selected languages
- Creates tracks registry
### 2. Create Track (`/conductor:new-track`)
Start a new feature or bug fix:
- Interactive Q&A to gather requirements
- Generates detailed specification (spec.md)
- Creates phased implementation plan (plan.md)
- Registers track in tracks.md
### 3. Implement (`/conductor:implement`)
Execute the plan systematically:
- Follows TDD red-green-refactor cycle
- Updates task status markers
- Includes manual verification checkpoints
- Synchronizes documentation on completion
### 4. Monitor (`/conductor:status`)
View project progress:
- Current phase and task
- Completion percentage
- Identified blockers
### 5. Revert (`/conductor:revert`)
Undo work by logical unit:
- Select track, phase, or task to revert
- Git-aware: finds all associated commits
- Requires confirmation before execution
## Installation
```bash
claude --plugin-dir /path/to/conductor
```
Or copy to your project's `.claude-plugin/` directory.
## License
MIT


@@ -0,0 +1,268 @@
---
name: conductor-validator
description: |
Validates Conductor project artifacts for completeness, consistency, and correctness. Use after setup, when diagnosing issues, or before implementation to verify project context.
<example>
Context: User just ran /conductor:setup
User: "Can you verify conductor is set up correctly?"
Assistant: Uses conductor-validator agent to check the setup
</example>
<example>
Context: User getting errors with conductor commands
User: "Why isn't /conductor:new-track working?"
Assistant: Uses conductor-validator agent to diagnose the issue
</example>
<example>
Context: Before starting implementation
User: "Is my project ready for /conductor:implement?"
Assistant: Uses conductor-validator agent to verify prerequisites
</example>
model: opus
color: cyan
tools:
- Read
- Glob
- Grep
- Bash
---
You are an expert validator for Conductor project artifacts. Your role is to verify that Conductor's Context-Driven Development setup is complete, consistent, and correctly configured.
## When to Use This Agent
- After `/conductor:setup` completes to verify all artifacts were created correctly
- When a user reports issues with Conductor commands not working
- Before starting implementation to verify project context is complete
- When synchronizing documentation after track completion
## Validation Categories
### A. Setup Validation
Verify the foundational Conductor structure exists and is properly configured.
**Directory Check:**
- `conductor/` directory exists at project root
**Required Files:**
- `conductor/index.md` - Navigation hub
- `conductor/product.md` - Product vision and goals
- `conductor/product-guidelines.md` - Standards and messaging
- `conductor/tech-stack.md` - Technology preferences
- `conductor/workflow.md` - Development practices
- `conductor/tracks.md` - Master track registry
**File Integrity:**
- All required files exist
- Files are not empty (have meaningful content)
- Markdown structure is valid (proper headings, lists)
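The existence and non-emptiness checks above could be sketched as a short script. This is illustrative only; the `validate_setup` helper is not part of the plugin, though the file names match the required list:

```python
from pathlib import Path

REQUIRED_FILES = [
    "index.md", "product.md", "product-guidelines.md",
    "tech-stack.md", "workflow.md", "tracks.md",
]

def validate_setup(root="conductor"):
    """Return a list of issues; an empty list means the setup check passes."""
    issues = []
    base = Path(root)
    if not base.is_dir():
        return [f"CRITICAL: {root}/ directory not found"]
    for name in REQUIRED_FILES:
        path = base / name
        if not path.is_file():
            issues.append(f"CRITICAL: missing {path}")
        elif not path.read_text().strip():
            issues.append(f"WARNING: {path} is empty")
    return issues
```

Markdown structure checks (headings, lists) would layer on top of this with the Read and Grep tools.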
### B. Content Validation
Verify required sections exist within each artifact.
**product.md Required Sections:**
- Overview or Introduction
- Problem Statement
- Target Users
- Value Proposition
**tech-stack.md Required Elements:**
- Technology decisions documented
- At least one language/framework specified
- Rationale for choices (preferred)
**workflow.md Required Elements:**
- Task lifecycle defined
- TDD workflow (if applicable)
- Commit message conventions
- Review/verification checkpoints
**tracks.md Required Format:**
- Status legend present ([ ], [~], [x] markers)
- Separator line usage (`----`)
- Track listing section
### C. Track Validation
When tracks exist, verify each track is properly configured.
**Track Registry Consistency:**
- Each track listed in `tracks.md` has a corresponding directory in `conductor/tracks/`
- Track directories contain required files:
- `spec.md` - Requirements specification
- `plan.md` - Phased task breakdown
- `metadata.json` - Track metadata
**Status Marker Validation:**
- Status markers in `tracks.md` match actual track states
- `[ ]` = not started (no tasks marked in progress or complete)
- `[~]` = in progress (has tasks marked `[~]` in plan.md)
- `[x]` = complete (all tasks marked `[x]` in plan.md)
**Plan Task Markers:**
- Tasks use proper markers: `[ ]` (pending), `[~]` (in progress), `[x]` (complete)
- Phases are properly numbered and structured
- At most one task should be `[~]` at a time
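The status-marker rules above can be expressed as a small derivation. A sketch only; the `derive_track_status` helper is hypothetical, but the marker semantics follow the three rules listed:

```python
import re

def derive_track_status(plan_md):
    """Derive the tracks.md status marker from task markers in plan.md."""
    # Match task lines like "- [x] Task 1.2: ..." and capture the marker.
    markers = re.findall(r"^- \[([ ~x])\] Task \d+\.\d+", plan_md, re.MULTILINE)
    if not markers:
        return "[ ]"                      # no tasks yet: not started
    if all(m == "x" for m in markers):
        return "[x]"                      # every task complete
    if any(m in ("~", "x") for m in markers):
        return "[~]"                      # some work in progress or done
    return "[ ]"
```

A status mismatch is then simply `derive_track_status(plan) != marker_in_tracks_md`.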
### D. Consistency Validation
Verify cross-artifact consistency.
**Track ID Uniqueness:**
- All track IDs are unique
- Track IDs follow naming convention (e.g., `feature_name_YYYYMMDD`)
**Reference Resolution:**
- All track references in `tracks.md` resolve to existing directories
- Cross-references between documents are valid
**Metadata Consistency:**
- `metadata.json` in each track is valid JSON
- Metadata reflects actual track state (status, dates, etc.)
### E. State Validation
Verify state files are valid.
**setup_state.json (if exists):**
- Valid JSON structure
- State reflects actual file system state
- No orphaned or inconsistent state entries
## Validation Process
1. **Use Glob** to find all relevant files and directories
2. **Use Read** to check file contents and structure
3. **Use Grep** to search for specific patterns and markers
4. **Use Bash** only for directory existence checks (e.g., `ls -la`)
## Output Format
Always produce a structured validation report:
```
## Conductor Validation Report
### Summary
- Status: PASS | FAIL | WARNINGS
- Files checked: X
- Issues found: Y
### Setup Validation
- [x] conductor/ directory exists
- [x] index.md exists and valid
- [x] product.md exists and valid
- [x] product-guidelines.md exists and valid
- [x] tech-stack.md exists and valid
- [x] workflow.md exists and valid
- [x] tracks.md exists and valid
### Content Validation
- [x] product.md has required sections
- [ ] tech-stack.md missing "Backend" section
- [x] workflow.md has task lifecycle
### Track Validation (if tracks exist)
- Track: auth_20250115
- [x] Directory exists
- [x] spec.md present
- [x] plan.md present
- [x] metadata.json valid
- [ ] Status mismatch: tracks.md shows [~] but no tasks in progress
### Issues
1. [CRITICAL] tech-stack.md: Missing "Backend" section
2. [WARNING] Track "auth_20250115": Status is [~] but no tasks in progress in plan.md
3. [INFO] product.md: Consider adding more detail to Value Proposition
### Recommendations
1. Add Backend section to tech-stack.md with your server-side technology choices
2. Update track status in tracks.md to reflect actual progress
3. Expand Value Proposition in product.md (optional)
```
## Issue Severity Levels
**CRITICAL** - Validation failure that will break Conductor commands:
- Missing required files
- Invalid JSON in metadata files
- Missing required sections that commands depend on
**WARNING** - Inconsistencies that may cause confusion:
- Status markers don't match actual state
- Track references don't resolve
- Empty sections that should have content
**INFO** - Suggestions for improvement:
- Missing optional sections
- Best practice recommendations
- Documentation quality suggestions
## Key Rules
1. **Be thorough** - Check all files and cross-references
2. **Be concise** - Report findings clearly without excessive verbosity
3. **Be actionable** - Provide specific recommendations for each issue
4. **Read-only** - Never modify files; only validate and report
5. **Report all issues** - Don't stop at the first error; find everything
6. **Prioritize** - List CRITICAL issues first, then WARNING, then INFO
## Example Validation Commands
```bash
# Check if conductor directory exists
ls -la conductor/
# Find all track directories
ls -la conductor/tracks/
# Check for required files
ls conductor/index.md conductor/product.md conductor/tech-stack.md conductor/workflow.md conductor/tracks.md
```
## Pattern Matching
**Status markers in tracks.md:**
```
- [ ] Track Name # Not started
- [~] Track Name # In progress
- [x] Track Name # Complete
```
**Task markers in plan.md:**
```
- [ ] Task description # Pending
- [~] Task description # In progress
- [x] Task description # Complete
```
**Track ID pattern:**
```
<type>_<name>_<YYYYMMDD>
Example: feature_user_auth_20250115
```
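A regular expression for the full ID pattern might look like the following. This is an assumption-laden sketch: it matches only the `<type>_<name>_<YYYYMMDD>` form documented here, while other examples in this repository use a shorter `{shortname}_{YYYYMMDD}` form that would need a looser expression:

```python
import re

# Hypothetical checker for the <type>_<name>_<YYYYMMDD> pattern above.
TRACK_ID_RE = re.compile(r"^[a-z]+_[a-z0-9_]+_\d{8}$")

def is_valid_track_id(track_id):
    return TRACK_ID_RE.fullmatch(track_id) is not None
```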


@@ -0,0 +1,369 @@
---
name: implement
description: Execute tasks from a track's implementation plan following workflow rules
model: opus
argument-hint: "[track-id]"
---
Execute tasks from a track's implementation plan, following the workflow rules defined in `conductor/workflow.md`.
## Pre-flight Checks
1. Verify Conductor is initialized:
- Check `conductor/product.md` exists
- Check `conductor/workflow.md` exists
- Check `conductor/tracks.md` exists
- If missing: Display error and suggest running `/conductor:setup` first
2. Load workflow configuration:
- Read `conductor/workflow.md`
- Parse TDD strictness level
- Parse commit strategy
- Parse verification checkpoint rules
## Track Selection
### If argument provided:
- Validate track exists: `conductor/tracks/{argument}/plan.md`
- If not found: Search for partial matches, suggest corrections
### If no argument:
1. Read `conductor/tracks.md`
2. Parse for incomplete tracks (status `[ ]` or `[~]`)
3. Display selection menu:
```
Select a track to implement:
In Progress:
1. [~] auth_20250115 - User Authentication (Phase 2, Task 3)
Pending:
2. [ ] nav-fix_20250114 - Navigation Bug Fix
3. [ ] dashboard_20250113 - Dashboard Feature
Enter number or track ID:
```
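Parsing the registry for incomplete tracks could be sketched as below. Illustrative only; the row layout is assumed from the `| [ ] | {trackId} | {title} | {created} | {updated} |` format that `/conductor:new-track` registers, and the `incomplete_tracks` helper is hypothetical:

```python
import re

def incomplete_tracks(tracks_md):
    """Parse tracks.md table rows; return (marker, track_id, title)
    tuples for tracks whose status is [ ] or [~]."""
    row = re.compile(r"^\|\s*\[([ ~x])\]\s*\|\s*([^|]+?)\s*\|\s*([^|]+?)\s*\|")
    results = []
    for line in tracks_md.splitlines():
        m = row.match(line)
        if m and m.group(1) in (" ", "~"):
            results.append((f"[{m.group(1)}]", m.group(2), m.group(3)))
    return results
```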
## Context Loading
Load all relevant context for implementation:
1. Track documents:
- `conductor/tracks/{trackId}/spec.md` - Requirements
- `conductor/tracks/{trackId}/plan.md` - Task list
- `conductor/tracks/{trackId}/metadata.json` - Progress state
2. Project context:
- `conductor/product.md` - Product understanding
- `conductor/tech-stack.md` - Technical constraints
- `conductor/workflow.md` - Process rules
3. Code style (if exists):
- `conductor/code_styleguides/{language}.md`
## Track Status Update
Update track to in-progress:
1. In `conductor/tracks.md`:
- Change `[ ]` to `[~]` for this track
2. In `conductor/tracks/{trackId}/metadata.json`:
- Set `status: "in_progress"`
- Update `updated` timestamp
## Task Execution Loop
For each incomplete task in plan.md (marked with `[ ]`):
### 1. Task Identification
Parse plan.md to find next incomplete task:
- Look for lines matching `- [ ] Task X.Y: {description}`
- Track current phase from structure
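The lookup described above could be sketched as a single top-to-bottom scan. A sketch under stated assumptions: the `next_pending_task` helper is hypothetical, and phase headings are assumed to follow the `## Phase {N}: {Name}` template from the plan structure:

```python
import re

def next_pending_task(plan_md):
    """Scan plan.md top to bottom; return (phase, task_id, description)
    for the first task still marked [ ], or None if none remain."""
    phase = None
    for line in plan_md.splitlines():
        heading = re.match(r"^## Phase (\d+)", line)
        if heading:
            phase = int(heading.group(1))  # remember the enclosing phase
        task = re.match(r"^- \[ \] Task (\d+\.\d+): (.+)$", line)
        if task:
            return phase, task.group(1), task.group(2)
    return None
```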
### 2. Task Start
Mark task as in-progress:
- Update plan.md: Change `[ ]` to `[~]` for current task
- Announce: "Starting Task X.Y: {description}"
### 3. TDD Workflow (if TDD enabled in workflow.md)
**Red Phase - Write Failing Test:**
```
Following TDD workflow for Task X.Y...
Step 1: Writing failing test
```
- Create test file if needed
- Write test(s) for the task functionality
- Run tests to confirm they fail
- If tests pass unexpectedly: HALT, investigate
**Green Phase - Implement:**
```
Step 2: Implementing minimal code to pass test
```
- Write minimum code to make test pass
- Run tests to confirm they pass
- If tests fail: Debug and fix
**Refactor Phase:**
```
Step 3: Refactoring while keeping tests green
```
- Clean up code
- Run tests to ensure still passing
### 4. Non-TDD Workflow (if TDD not strict)
- Implement the task directly
- Run any existing tests
- Manual verification as needed
### 5. Task Completion
**Commit changes** (following commit strategy from workflow.md):
```bash
git add -A
git commit -m "{commit_prefix}: {task description} ({trackId})"
```
**Update plan.md:**
- Change `[~]` to `[x]` for completed task
- Commit plan update:
```bash
git add conductor/tracks/{trackId}/plan.md
git commit -m "chore: mark task X.Y complete ({trackId})"
```
**Update metadata.json:**
- Increment `tasks.completed`
- Update `updated` timestamp
### 6. Phase Completion Check
After each task, check if phase is complete:
- Parse plan.md for phase structure
- If all tasks in current phase are `[x]`:
**Run phase verification:**
```
Phase {N} complete. Running verification...
```
- Execute verification tasks listed for the phase
- Run full test suite: `npm test` / `pytest` / etc.
**Report and wait for approval:**
```
Phase {N} Verification Results:
- All phase tasks: Complete
- Tests: {passing/failing}
- Verification: {pass/fail}
Approve to continue to Phase {N+1}?
1. Yes, continue
2. No, there are issues to fix
3. Pause implementation
```
**CRITICAL: Wait for explicit user approval before proceeding to next phase.**
## Error Handling During Implementation
### On Tool Failure
```
ERROR: {tool} failed with: {error message}
Options:
1. Retry the operation
2. Skip this task and continue
3. Pause implementation
4. Revert current task changes
```
- HALT and present options
- Do NOT automatically continue
### On Test Failure
```
TESTS FAILING after Task X.Y
Failed tests:
- {test name}: {failure reason}
Options:
1. Attempt to fix
2. Rollback task changes
3. Pause for manual intervention
```
### On Git Failure
```
GIT ERROR: {error message}
This may indicate:
- Uncommitted changes from outside Conductor
- Merge conflicts
- Permission issues
Options:
1. Show git status
2. Attempt to resolve
3. Pause for manual intervention
```
## Track Completion
When all phases and tasks are complete:
### 1. Final Verification
```
All tasks complete. Running final verification...
```
- Run full test suite
- Check all acceptance criteria from spec.md
- Generate verification report
### 2. Update Track Status
In `conductor/tracks.md`:
- Change `[~]` to `[x]` for this track
- Update the "Updated" column
In `conductor/tracks/{trackId}/metadata.json`:
- Set `status: "complete"`
- Set `phases.completed` to total
- Set `tasks.completed` to total
- Update `updated` timestamp
In `conductor/tracks/{trackId}/plan.md`:
- Update header status to `[x] Complete`
### 3. Documentation Sync Offer
```
Track complete! Would you like to sync documentation?
This will update:
- conductor/product.md (if new features added)
- conductor/tech-stack.md (if new dependencies added)
- README.md (if applicable)
1. Yes, sync documentation
2. No, skip
```
### 4. Cleanup Offer
```
Track {trackId} is complete.
Cleanup options:
1. Archive - Move to conductor/tracks/_archive/
2. Delete - Remove track directory
3. Keep - Leave as-is
```
### 5. Completion Summary
```
Track Complete: {track title}
Summary:
- Track ID: {trackId}
- Phases completed: {N}/{N}
- Tasks completed: {M}/{M}
- Commits created: {count}
- Tests: All passing
Next steps:
- Run /conductor:status to see project progress
- Run /conductor:new-track for next feature
```
## Progress Tracking
Maintain progress in `metadata.json` throughout:
```json
{
"id": "auth_20250115",
"title": "User Authentication",
"type": "feature",
"status": "in_progress",
"created": "2025-01-15T10:00:00Z",
"updated": "2025-01-15T14:30:00Z",
"current_phase": 2,
"current_task": "2.3",
"phases": {
"total": 3,
"completed": 1
},
"tasks": {
"total": 12,
"completed": 7
},
"commits": [
"abc1234: feat: add login form (auth_20250115)",
"def5678: feat: add password validation (auth_20250115)"
]
}
```
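A task-completion update against this file could be sketched as follows. Illustrative only; the `mark_task_complete` helper is hypothetical, though the field names match the example metadata above:

```python
import json
import time
from pathlib import Path

def mark_task_complete(metadata_path, task_id):
    """Increment tasks.completed and refresh the updated timestamp
    after a task finishes; returns the updated metadata dict."""
    path = Path(metadata_path)
    meta = json.loads(path.read_text())
    meta["tasks"]["completed"] += 1
    meta["current_task"] = task_id
    meta["updated"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    path.write_text(json.dumps(meta, indent=2) + "\n")
    return meta
```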
## Resumption
If implementation is paused and resumed:
1. Load `metadata.json` for current state
2. Find current task from `current_task` field
3. Check if task is `[~]` in plan.md
4. Ask user:
```
Resuming track: {title}
Last task in progress: Task {X.Y}: {description}
Options:
1. Continue from where we left off
2. Restart current task
3. Show progress summary first
```
## Critical Rules
1. **NEVER skip verification checkpoints** - Always wait for user approval between phases
2. **STOP on any failure** - Do not attempt to continue past errors
3. **Follow workflow.md strictly** - TDD, commit strategy, and verification rules are mandatory
4. **Keep plan.md updated** - Task status must reflect actual progress
5. **Commit frequently** - Each task completion should be committed
6. **Track all commits** - Record commit hashes in metadata.json for potential revert


@@ -0,0 +1,414 @@
---
name: new-track
description: Create a new feature or bug track with specification and phased implementation plan
model: opus
argument-hint: "[track description]"
---
Create a new track (feature, bug fix, chore, or refactor) with a detailed specification and phased implementation plan.
## Pre-flight Checks
1. Verify Conductor is initialized:
- Check `conductor/product.md` exists
- Check `conductor/tech-stack.md` exists
- Check `conductor/workflow.md` exists
- If missing: Display error and suggest running `/conductor:setup` first
2. Load context files:
- Read `conductor/product.md` for product context
- Read `conductor/tech-stack.md` for technical context
- Read `conductor/workflow.md` for TDD/commit preferences
## Track Classification
Determine track type based on description or ask user:
```
What type of track is this?
1. Feature - New functionality
2. Bug - Fix for existing issue
3. Chore - Maintenance, dependencies, config
4. Refactor - Code improvement without behavior change
```
## Interactive Specification Gathering
**CRITICAL RULES:**
- Ask ONE question per turn
- Wait for user response before proceeding
- Tailor questions based on track type
- Maximum 6 questions total
### For Feature Tracks
**Q1: Feature Summary**
```
Describe the feature in 1-2 sentences.
[If argument provided, confirm: "You want to: {argument}. Is this correct?"]
```
**Q2: User Story**
```
Who benefits and how?
Format: As a [user type], I want to [action] so that [benefit].
```
**Q3: Acceptance Criteria**
```
What must be true for this feature to be complete?
List 3-5 acceptance criteria (one per line):
```
**Q4: Dependencies**
```
Does this depend on any existing code, APIs, or other tracks?
1. No dependencies
2. Depends on existing code (specify)
3. Depends on incomplete track (specify)
```
**Q5: Scope Boundaries**
```
What is explicitly OUT of scope for this track?
(Helps prevent scope creep)
```
**Q6: Technical Considerations (optional)**
```
Any specific technical approach or constraints?
(Press enter to skip)
```
### For Bug Tracks
**Q1: Bug Summary**
```
What is broken?
[If argument provided, confirm]
```
**Q2: Steps to Reproduce**
```
How can this bug be reproduced?
List steps:
```
**Q3: Expected vs Actual Behavior**
```
What should happen vs what actually happens?
```
**Q4: Affected Areas**
```
What parts of the system are affected?
```
**Q5: Root Cause Hypothesis (optional)**
```
Any hypothesis about the cause?
(Press enter to skip)
```
### For Chore/Refactor Tracks
**Q1: Task Summary**
```
What needs to be done?
[If argument provided, confirm]
```
**Q2: Motivation**
```
Why is this work needed?
```
**Q3: Success Criteria**
```
How will we know this is complete?
```
**Q4: Risk Assessment**
```
What could go wrong? Any risky changes?
```
## Track ID Generation
Generate track ID in format: `{shortname}_{YYYYMMDD}`
- Extract shortname from feature/bug summary (2-3 words, lowercase, hyphenated)
- Use current date
- Example: `user-auth_20250115`, `nav-bug_20250115`
Validate uniqueness:
- Check `conductor/tracks.md` for existing IDs
- If collision, append counter: `user-auth_20250115_2`
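The generation and collision handling above could be sketched like this. A sketch only; the `generate_track_id` helper is hypothetical, and the word-selection rule (first two or three words) is an assumption about how the shortname is extracted:

```python
import re
from datetime import date

def generate_track_id(summary, existing_ids, today=None):
    """Derive {shortname}_{YYYYMMDD} from a summary; append a counter on collision."""
    today = today or date.today()
    # Short name: first 2-3 words, lowercased and hyphenated.
    words = re.findall(r"[a-z0-9]+", summary.lower())[:3]
    base = f"{'-'.join(words)}_{today.strftime('%Y%m%d')}"
    track_id, counter = base, 2
    while track_id in existing_ids:
        track_id = f"{base}_{counter}"
        counter += 1
    return track_id
```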
## Specification Generation
Create `conductor/tracks/{trackId}/spec.md`:
```markdown
# Specification: {Track Title}
**Track ID:** {trackId}
**Type:** {Feature|Bug|Chore|Refactor}
**Created:** {YYYY-MM-DD}
**Status:** Draft
## Summary
{1-2 sentence summary}
## Context
{Product context from product.md relevant to this track}
## User Story (for features)
As a {user}, I want to {action} so that {benefit}.
## Problem Description (for bugs)
{Bug description, steps to reproduce}
## Acceptance Criteria
- [ ] {Criterion 1}
- [ ] {Criterion 2}
- [ ] {Criterion 3}
## Dependencies
{List dependencies or "None"}
## Out of Scope
{Explicit exclusions}
## Technical Notes
{Technical considerations or "None specified"}
---
_Generated by Conductor. Review and edit as needed._
```
## User Review of Spec
Display the generated spec and ask:
```
Here is the specification I've generated:
{spec content}
Is this specification correct?
1. Yes, proceed to plan generation
2. No, let me edit (opens for inline edits)
3. Start over with different inputs
```
## Plan Generation
After spec approval, generate `conductor/tracks/{trackId}/plan.md`:
### Plan Structure
```markdown
# Implementation Plan: {Track Title}
**Track ID:** {trackId}
**Spec:** [spec.md](./spec.md)
**Created:** {YYYY-MM-DD}
**Status:** [ ] Not Started
## Overview
{Brief summary of implementation approach}
## Phase 1: {Phase Name}
{Phase description}
### Tasks
- [ ] Task 1.1: {Description}
- [ ] Task 1.2: {Description}
- [ ] Task 1.3: {Description}
### Verification
- [ ] {Verification step for phase 1}
## Phase 2: {Phase Name}
{Phase description}
### Tasks
- [ ] Task 2.1: {Description}
- [ ] Task 2.2: {Description}
### Verification
- [ ] {Verification step for phase 2}
## Phase 3: {Phase Name} (if needed)
...
## Final Verification
- [ ] All acceptance criteria met
- [ ] Tests passing
- [ ] Documentation updated (if applicable)
- [ ] Ready for review
---
_Generated by Conductor. Tasks will be marked [~] in progress and [x] complete._
```
### Phase Guidelines
- Group related tasks into logical phases
- Each phase should be independently verifiable
- Include verification task after each phase
- TDD tracks: Include test writing tasks before implementation tasks
- Typical structure:
1. **Setup/Foundation** - Initial scaffolding, interfaces
2. **Core Implementation** - Main functionality
3. **Integration** - Connect with existing system
4. **Polish** - Error handling, edge cases, docs
## User Review of Plan
Display the generated plan and ask:
```
Here is the implementation plan:
{plan content}
Is this plan correct?
1. Yes, create the track
2. No, let me edit (opens for inline edits)
3. Add more phases/tasks
4. Start over
```
## Track Creation
After plan approval:
1. Create directory structure:
```
conductor/tracks/{trackId}/
├── spec.md
├── plan.md
├── metadata.json
└── index.md
```
2. Create `metadata.json`:
```json
{
"id": "{trackId}",
"title": "{Track Title}",
"type": "feature|bug|chore|refactor",
"status": "pending",
"created": "ISO_TIMESTAMP",
"updated": "ISO_TIMESTAMP",
"phases": {
"total": N,
"completed": 0
},
"tasks": {
"total": M,
"completed": 0
}
}
```
3. Create `index.md`:
```markdown
# Track: {Track Title}
**ID:** {trackId}
**Status:** Pending
## Documents
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
## Progress
- Phases: 0/{N} complete
- Tasks: 0/{M} complete
## Quick Links
- [Back to Tracks](../../tracks.md)
- [Product Context](../../product.md)
```
4. Register in `conductor/tracks.md`:
- Add row to tracks table
- Format: `| [ ] | {trackId} | {title} | {created} | {created} |`
5. Update `conductor/index.md`:
- Add track to "Active Tracks" section
## Completion Message
```
Track created successfully!
Track ID: {trackId}
Location: conductor/tracks/{trackId}/
Files created:
- spec.md - Requirements specification
- plan.md - Phased implementation plan
- metadata.json - Track metadata
- index.md - Track navigation
Next steps:
1. Review spec.md and plan.md, make any edits
2. Run /conductor:implement {trackId} to start implementation
3. Run /conductor:status to see project progress
```
## Error Handling
- If directory creation fails: Halt and report, do not register in tracks.md
- If any file write fails: Clean up partial track, report error
- If tracks.md update fails: Warn user to manually register track


@@ -0,0 +1,361 @@
---
name: revert
description: Git-aware undo by logical work unit (track, phase, or task)
model: opus
allowed-tools:
- Read
- Write
- Edit
- Bash
- Glob
- Grep
- AskUserQuestion
argument-hint: "[track-id | track-id:phase | track-id:task]"
---
Revert changes by logical work unit with full git awareness. Supports reverting entire tracks, specific phases, or individual tasks.
## Pre-flight Checks
1. Verify Conductor is initialized:
- Check `conductor/tracks.md` exists
- If missing: Display error and suggest running `/conductor:setup` first
2. Verify git repository:
- Run `git status` to confirm git repo
- Check for uncommitted changes
- If uncommitted changes exist:
```
WARNING: Uncommitted changes detected
Files with changes:
{list of files}
Options:
1. Stash changes and continue
2. Commit changes first
3. Cancel revert
```
3. Verify git is clean enough to revert:
- No merge in progress
- No rebase in progress
- If issues found: Halt and explain resolution steps
## Target Selection
### If argument provided:
Parse the argument format:
**Full track:** `{trackId}`
- Example: `auth_20250115`
- Reverts all commits for the entire track
**Specific phase:** `{trackId}:phase{N}`
- Example: `auth_20250115:phase2`
- Reverts commits for phase N and all subsequent phases
**Specific task:** `{trackId}:task{X.Y}`
- Example: `auth_20250115:task2.3`
- Reverts commits for task X.Y only
### If no argument:
Display guided selection menu:
```
What would you like to revert?
Currently In Progress:
1. [~] Task 2.3 in dashboard_20250112 (most recent)
Recently Completed:
2. [x] Task 2.2 in dashboard_20250112 (1 hour ago)
3. [x] Phase 1 in dashboard_20250112 (3 hours ago)
4. [x] Full track: auth_20250115 (yesterday)
Options:
5. Enter specific reference (track:phase or track:task)
6. Cancel
Select option:
```
## Commit Discovery
### For Task Revert
1. Search git log for task-specific commits:
```bash
git log --oneline --grep="{trackId}" --grep="Task {X.Y}" --all-match
```
2. Also find the plan.md update commit:
```bash
git log --oneline --grep="mark task {X.Y} complete" --grep="{trackId}" --all-match
```
3. Collect all matching commit SHAs
### For Phase Revert
1. Determine task range for the phase by reading plan.md
2. Search for all task commits in that phase:
```bash
git log --oneline --grep="{trackId}" | grep -E "Task {N}\.[0-9]"
```
3. Find phase verification commit if exists
4. Find all plan.md update commits for phase tasks
5. Collect all matching commit SHAs in chronological order
### For Full Track Revert
1. Find ALL commits mentioning the track:
```bash
git log --oneline --grep="{trackId}"
```
2. Find track creation commits:
```bash
git log --oneline -- "conductor/tracks/{trackId}/"
```
3. Collect all matching commit SHAs in chronological order
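The full-track discovery steps above could be sketched as follows. A sketch, not the command's implementation: the `track_commits` helper is hypothetical, and it simply concatenates message-grep hits before path-based hits rather than merging by commit date:

```python
import subprocess

def dedupe_ordered(*sha_lists):
    """Merge SHA lists, keeping only the first occurrence of each SHA."""
    seen, ordered = set(), []
    for shas in sha_lists:
        for sha in shas:
            if sha not in seen:
                seen.add(sha)
                ordered.append(sha)
    return ordered

def track_commits(track_id):
    """Collect SHAs for a full-track revert: message greps plus
    path-based discovery under the track directory, oldest first."""
    def log(*args):
        result = subprocess.run(
            ["git", "log", "--reverse", "--format=%H", *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.split()
    return dedupe_ordered(log(f"--grep={track_id}"),
                          log("--", f"conductor/tracks/{track_id}/"))
```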
## Execution Plan Display
Before any revert operations, display full plan:
```
================================================================================
REVERT EXECUTION PLAN
================================================================================
Target: {description of what's being reverted}
Commits to revert (in reverse chronological order):
1. abc1234 - feat: add chart rendering (dashboard_20250112)
2. def5678 - chore: mark task 2.3 complete (dashboard_20250112)
3. ghi9012 - feat: add data hooks (dashboard_20250112)
4. jkl3456 - chore: mark task 2.2 complete (dashboard_20250112)
Files that will be affected:
- src/components/Dashboard.tsx (modified)
- src/hooks/useData.ts (will be deleted - was created in these commits)
- conductor/tracks/dashboard_20250112/plan.md (modified)
Plan updates:
- Task 2.2: [x] -> [ ]
- Task 2.3: [~] -> [ ]
================================================================================
!! WARNING !!
================================================================================
This operation will:
- Create {N} revert commits
- Modify {M} files
- Reset {P} tasks to pending status
This CANNOT be easily undone without manual intervention.
================================================================================
Type 'YES' to proceed, or anything else to cancel:
```
**CRITICAL: Require explicit 'YES' confirmation. Do not proceed on 'y', 'yes', or enter.**
## Revert Execution
Execute reverts in reverse chronological order (newest first):
```
Executing revert plan...
[1/4] Reverting abc1234...
git revert --no-edit abc1234
✓ Success
[2/4] Reverting def5678...
git revert --no-edit def5678
✓ Success
[3/4] Reverting ghi9012...
git revert --no-edit ghi9012
✓ Success
[4/4] Reverting jkl3456...
git revert --no-edit jkl3456
✓ Success
```
### On Merge Conflict
If any revert produces a merge conflict:
```
================================================================================
MERGE CONFLICT DETECTED
================================================================================
Conflict occurred while reverting: {sha} - {message}
Conflicted files:
- src/components/Dashboard.tsx
Options:
1. Show conflict details
2. Abort revert sequence (keeps completed reverts)
3. Open manual resolution guide
IMPORTANT: Reverts 1-{N} have been completed. You may need to manually
resolve this conflict before continuing or fully undo the revert sequence.
Select option:
```
**HALT immediately on any conflict. Do not attempt automatic resolution.**
## Plan.md Updates
After successful git reverts, update plan.md:
1. Read current plan.md
2. For each reverted task, change marker:
- `[x]` -> `[ ]`
- `[~]` -> `[ ]`
3. Write updated plan.md
4. Update metadata.json:
- Decrement `tasks.completed`
- Update `status` if needed
- Update `updated` timestamp
**Do NOT commit plan.md changes** - they are part of the revert operation
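A minimal sketch of steps 1-3, assuming the `- [x] Task N.M:` line format used throughout this document (the sample plan text and helper name are illustrative):

```python
import re

def reset_tasks(plan_text: str, task_ids: list[str]) -> str:
    """Flip [x] or [~] back to [ ] for each reverted task, leaving others alone."""
    out = []
    for line in plan_text.splitlines():
        if any(f"Task {tid}:" in line for tid in task_ids):
            line = re.sub(r"^(\s*- )\[[x~]\]", r"\1[ ]", line)
        out.append(line)
    return "\n".join(out)

plan = (
    "- [x] Task 2.2: Add data fetching hooks\n"
    "- [~] Task 2.3: Implement chart rendering\n"
    "- [x] Task 2.1: Create dashboard layout"
)
print(reset_tasks(plan, ["2.2", "2.3"]))
```

Only the marker changes; task descriptions and ordering are untouched, which keeps the plan diff minimal.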
## Track Status Updates
### If reverting entire track:
- In tracks.md: Change `[x]` or `[~]` to `[ ]`
- Consider offering to delete the track directory entirely
### If reverting to incomplete state:
- In tracks.md: Ensure the track is marked `[~]` if partially complete, `[ ]` if fully reverted
## Verification
After revert completion:
```
================================================================================
REVERT COMPLETE
================================================================================
Summary:
- Reverted {N} commits
- Reset {P} tasks to pending
- {M} files affected
Git log now shows:
{recent commit history}
Plan.md status:
- Task 2.2: [ ] Pending
- Task 2.3: [ ] Pending
================================================================================
Verify the revert was successful:
1. Run tests: {test command}
2. Check application: {relevant check}
If issues are found, you may need to:
- Fix conflicts manually
- Re-implement the reverted tasks
- Use 'git revert HEAD~{N}..HEAD' to undo the reverts
================================================================================
```
## Safety Rules
1. **NEVER use `git reset --hard`** - Only use `git revert`
2. **NEVER use `git push --force`** - Only safe push operations
3. **NEVER auto-resolve conflicts** - Always halt for human intervention
4. **ALWAYS show full plan** - User must see exactly what will happen
5. **REQUIRE explicit 'YES'** - Not 'y', not enter, only 'YES'
6. **HALT on ANY error** - Do not attempt to continue past failures
7. **PRESERVE history** - Revert commits are preferred over history rewriting
## Edge Cases
### Track Never Committed
```
No commits found for track: {trackId}
The track exists but has no associated commits. This may mean:
- Implementation never started
- Commits used different format
Options:
1. Delete track directory only
2. Cancel
```
### Commits Already Reverted
```
Some commits appear to already be reverted:
- abc1234 was reverted by xyz9876
Options:
1. Skip already-reverted commits
2. Cancel and investigate
```
### Remote Already Pushed
```
WARNING: Some commits have been pushed to remote
Commits on remote:
- abc1234 (origin/main)
- def5678 (origin/main)
Reverting will create new revert commits that you'll need to push.
This is the safe approach (no force push required).
Continue with revert? (YES/no):
```
## Undo the Revert
If user needs to undo the revert itself:
```
To undo this revert operation:
git revert HEAD~{N}..HEAD
This will create new commits that restore the reverted changes.
Alternatively, if not yet pushed:
git reset --soft HEAD~{N}
git restore --source=HEAD --staged --worktree .
(Use with caution - this discards the revert commits and restores the
pre-revert file contents)
```
@@ -0,0 +1,406 @@
---
name: setup
description: Initialize project with Conductor artifacts (product definition, tech stack, workflow, style guides)
model: opus
argument-hint: "[--resume]"
---
Initialize or resume Conductor project setup. This command creates foundational project documentation through interactive Q&A.
## Pre-flight Checks
1. Check if `conductor/` directory already exists in the project root:
- If `conductor/product.md` exists: Ask user whether to resume setup or reinitialize
- If `conductor/setup_state.json` exists with incomplete status: Offer to resume from last step
2. Detect project type by checking for existing indicators:
- **Greenfield (new project)**: No .git, no package.json, no requirements.txt, no go.mod, no src/ directory
- **Brownfield (existing project)**: Any of the above exist
3. Load or create `conductor/setup_state.json`:
```json
{
"status": "in_progress",
"project_type": "greenfield|brownfield",
"current_section": "product|guidelines|tech_stack|workflow|styleguides",
"current_question": 1,
"completed_sections": [],
"answers": {},
"files_created": [],
"started_at": "ISO_TIMESTAMP",
"last_updated": "ISO_TIMESTAMP"
}
```
## Interactive Q&A Protocol
**CRITICAL RULES:**
- Ask ONE question per turn
- Wait for user response before proceeding
- Offer 2-3 suggested answers plus "Type your own" option
- Maximum 5 questions per section
- Update `setup_state.json` after each successful step
- Validate file writes succeeded before continuing
### Section 1: Product Definition (max 5 questions)
**Q1: Project Name**
```
What is your project name?
Suggested:
1. [Infer from directory name]
2. [Infer from package.json/go.mod if brownfield]
3. Type your own
```
**Q2: Project Description**
```
Describe your project in one sentence.
Suggested:
1. A web application that [does X]
2. A CLI tool for [doing Y]
3. Type your own
```
**Q3: Problem Statement**
```
What problem does this project solve?
Suggested:
1. Users struggle to [pain point]
2. There's no good way to [need]
3. Type your own
```
**Q4: Target Users**
```
Who are the primary users?
Suggested:
1. Developers building [X]
2. End users who need [Y]
3. Internal teams managing [Z]
4. Type your own
```
**Q5: Key Goals (optional)**
```
What are 2-3 key goals for this project? (Press enter to skip)
```
### Section 2: Product Guidelines (max 3 questions)
**Q1: Voice and Tone**
```
What voice/tone should documentation and UI text use?
Suggested:
1. Professional and technical
2. Friendly and approachable
3. Concise and direct
4. Type your own
```
**Q2: Design Principles**
```
What design principles guide this project?
Suggested:
1. Simplicity over features
2. Performance first
3. Developer experience focused
4. User safety and reliability
5. Type your own (comma-separated)
```
### Section 3: Tech Stack (max 5 questions)
For **brownfield projects**, first analyze existing code:
- Run `Glob` to find package.json, requirements.txt, go.mod, Cargo.toml, etc.
- Parse detected files to pre-populate tech stack
- Present findings and ask for confirmation/additions
**Q1: Primary Language(s)**
```
What primary language(s) does this project use?
[For brownfield: "I detected: Python 3.11, JavaScript. Is this correct?"]
Suggested:
1. TypeScript
2. Python
3. Go
4. Rust
5. Type your own (comma-separated)
```
**Q2: Frontend Framework (if applicable)**
```
What frontend framework (if any)?
Suggested:
1. React
2. Vue
3. Next.js
4. None / CLI only
5. Type your own
```
**Q3: Backend Framework (if applicable)**
```
What backend framework (if any)?
Suggested:
1. Express / Fastify
2. Django / FastAPI
3. Go standard library
4. None / Frontend only
5. Type your own
```
**Q4: Database (if applicable)**
```
What database (if any)?
Suggested:
1. PostgreSQL
2. MongoDB
3. SQLite
4. None / Stateless
5. Type your own
```
**Q5: Infrastructure**
```
Where will this be deployed?
Suggested:
1. AWS (Lambda, ECS, etc.)
2. Vercel / Netlify
3. Self-hosted / Docker
4. Not decided yet
5. Type your own
```
### Section 4: Workflow Preferences (max 4 questions)
**Q1: TDD Strictness**
```
How strictly should TDD be enforced?
Suggested:
1. Strict - tests required before implementation
2. Moderate - tests encouraged, not blocked
3. Flexible - tests recommended for complex logic
```
**Q2: Commit Strategy**
```
What commit strategy should be followed?
Suggested:
1. Conventional Commits (feat:, fix:, etc.)
2. Descriptive messages, no format required
3. Squash commits per task
```
**Q3: Code Review Requirements**
```
What code review policy?
Suggested:
1. Required for all changes
2. Required for non-trivial changes
3. Optional / self-review OK
```
**Q4: Verification Checkpoints**
```
When should manual verification be required?
Suggested:
1. After each phase completion
2. After each task completion
3. Only at track completion
```
### Section 5: Code Style Guides (max 2 questions)
**Q1: Languages to Include**
```
Which language style guides should be generated?
[Based on detected languages, pre-select]
Options:
1. TypeScript/JavaScript
2. Python
3. Go
4. Rust
5. All detected languages
6. Skip style guides
```
**Q2: Existing Conventions**
```
Do you have existing linting/formatting configs to incorporate?
[For brownfield: "I found .eslintrc, .prettierrc. Should I incorporate these?"]
Suggested:
1. Yes, use existing configs
2. No, generate fresh guides
3. Skip this step
```
## Artifact Generation
After completing Q&A, generate the following files:
### 1. conductor/index.md
```markdown
# Conductor - [Project Name]
Navigation hub for project context.
## Quick Links
- [Product Definition](./product.md)
- [Product Guidelines](./product-guidelines.md)
- [Tech Stack](./tech-stack.md)
- [Workflow](./workflow.md)
- [Tracks](./tracks.md)
## Active Tracks
<!-- Auto-populated by /conductor:new-track -->
## Getting Started
Run `/conductor:new-track` to create your first feature track.
```
### 2. conductor/product.md
Template populated with Q&A answers for:
- Project name and description
- Problem statement
- Target users
- Key goals
### 3. conductor/product-guidelines.md
Template populated with:
- Voice and tone
- Design principles
- Any additional standards
### 4. conductor/tech-stack.md
Template populated with:
- Languages (with versions if detected)
- Frameworks (frontend, backend)
- Database
- Infrastructure
- Key dependencies (for brownfield, from package files)
### 5. conductor/workflow.md
Template populated with:
- TDD policy and strictness level
- Commit strategy and conventions
- Code review requirements
- Verification checkpoint rules
- Task lifecycle definition
### 6. conductor/tracks.md
```markdown
# Tracks Registry
| Status | Track ID | Title | Created | Updated |
| ------ | -------- | ----- | ------- | ------- |
<!-- Tracks registered by /conductor:new-track -->
```
### 7. conductor/code_styleguides/
Generate selected style guides from `$CLAUDE_PLUGIN_ROOT/templates/code_styleguides/`
## State Management
After each successful file creation:
1. Update `setup_state.json`:
- Add filename to `files_created` array
- Update `last_updated` timestamp
- If section complete, add to `completed_sections`
2. Verify file exists with `Read` tool
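The update cycle above might be sketched as follows; the helper name and the atomic temp-file rename are assumptions, not part of the Conductor spec:

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def record_created_file(state_path: Path, filename: str) -> dict:
    """Append a created file to setup_state.json and refresh the timestamp."""
    state = json.loads(state_path.read_text())
    if filename not in state["files_created"]:
        state["files_created"].append(filename)
    state["last_updated"] = datetime.now(timezone.utc).isoformat()
    tmp = state_path.with_name(state_path.name + ".tmp")
    tmp.write_text(json.dumps(state, indent=2))
    tmp.replace(state_path)  # atomic on POSIX; avoids a half-written state file
    return state

# Illustrative round trip against a throwaway state file.
state_file = Path(tempfile.mkdtemp()) / "setup_state.json"
state_file.write_text(json.dumps({"status": "in_progress", "files_created": []}))
updated = record_created_file(state_file, "conductor/product.md")
print(updated["files_created"])
```

Writing to a temp file and renaming means an interrupted setup never leaves a truncated `setup_state.json` behind, which keeps the resume path reliable.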
## Completion
When all files are created:
1. Set `setup_state.json` status to "complete"
2. Display summary:
```
Conductor setup complete!
Created artifacts:
- conductor/index.md
- conductor/product.md
- conductor/product-guidelines.md
- conductor/tech-stack.md
- conductor/workflow.md
- conductor/tracks.md
- conductor/code_styleguides/[languages]
Next steps:
1. Review generated files and customize as needed
2. Run /conductor:new-track to create your first track
```
## Resume Handling
If `--resume` argument or resuming from state:
1. Load `setup_state.json`
2. Skip completed sections
3. Resume from `current_section` and `current_question`
4. Verify previously created files still exist
5. If files missing, offer to regenerate
## Error Handling
- If file write fails: Halt and report error, do not update state
- If user cancels: Save current state for future resume
- If state file corrupted: Offer to start fresh or attempt recovery
@@ -0,0 +1,323 @@
---
name: status
description: Display project progress overview including tracks, phases, and tasks
model: opus
allowed-tools:
- Read
- Glob
- Grep
argument-hint: "[track-id]"
---
Display the current status of the Conductor project, including overall progress, active tracks, and next actions.
## Pre-flight Checks
1. Verify Conductor is initialized:
- Check `conductor/product.md` exists
- Check `conductor/tracks.md` exists
- If missing: Display error and suggest running `/conductor:setup` first
2. Check for any tracks:
- Read `conductor/tracks.md`
- If no tracks registered: Display setup complete message with suggestion to create first track
## Data Collection
### 1. Project Information
Read `conductor/product.md` and extract:
- Project name
- Project description
### 2. Tracks Overview
Read `conductor/tracks.md` and parse:
- Total tracks count
- Completed tracks (marked `[x]`)
- In-progress tracks (marked `[~]`)
- Pending tracks (marked `[ ]`)
### 3. Detailed Track Analysis
For each track in `conductor/tracks/`:
Read `conductor/tracks/{trackId}/plan.md`:
- Count total tasks (lines matching `- [x]`, `- [~]`, `- [ ]` with Task prefix)
- Count completed tasks (`[x]`)
- Count in-progress tasks (`[~]`)
- Count pending tasks (`[ ]`)
- Identify current phase (first phase with incomplete tasks)
- Identify next pending task
Read `conductor/tracks/{trackId}/metadata.json`:
- Track type (feature, bug, chore, refactor)
- Created date
- Last updated date
- Status
Read `conductor/tracks/{trackId}/spec.md`:
- Check for any noted blockers or dependencies
### 4. Blocker Detection
Scan for potential blockers:
- Tasks marked with `BLOCKED:` prefix
- Dependencies on incomplete tracks
- Failed verification tasks
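A sketch of the first kind of scan, assuming blockers appear as a `BLOCKED:` note inside task lines (the sample plan text is illustrative):

```python
import re

def find_blockers(plan_text: str) -> list[str]:
    """Return task lines carrying a BLOCKED: note."""
    return [
        line.strip()
        for line in plan_text.splitlines()
        if re.search(r"^- \[.\] .*BLOCKED:", line)
    ]

plan = (
    "- [x] Task 3.0: Define API contract\n"
    "- [ ] Task 3.1: BLOCKED: depends on api_20250111 (incomplete)\n"
)
print(find_blockers(plan))
```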
## Output Format
### Full Project Status (no argument)
```
================================================================================
PROJECT STATUS: {Project Name}
================================================================================
Last Updated: {current timestamp}
--------------------------------------------------------------------------------
OVERALL PROGRESS
--------------------------------------------------------------------------------
Tracks: {completed}/{total} completed ({percentage}%)
Tasks: {completed}/{total} completed ({percentage}%)
Progress: [##########..........] {percentage}%
--------------------------------------------------------------------------------
TRACK SUMMARY
--------------------------------------------------------------------------------
| Status | Track ID           | Type    | Tasks        | Last Updated |
|--------|--------------------|---------|--------------|--------------|
| [x]    | auth_20250110      | feature | 12/12 (100%) | 2025-01-12   |
| [~]    | dashboard_20250112 | feature | 7/15 (47%)   | 2025-01-15   |
| [ ]    | nav-fix_20250114   | bug     | 0/4 (0%)     | 2025-01-14   |
--------------------------------------------------------------------------------
CURRENT FOCUS
--------------------------------------------------------------------------------
Active Track: dashboard_20250112 - Dashboard Feature
Current Phase: Phase 2: Core Components
Current Task: [~] Task 2.3: Implement chart rendering
Progress in Phase:
- [x] Task 2.1: Create dashboard layout
- [x] Task 2.2: Add data fetching hooks
- [~] Task 2.3: Implement chart rendering
- [ ] Task 2.4: Add filter controls
--------------------------------------------------------------------------------
NEXT ACTIONS
--------------------------------------------------------------------------------
1. Complete: Task 2.3 - Implement chart rendering (dashboard_20250112)
2. Then: Task 2.4 - Add filter controls (dashboard_20250112)
3. After Phase 2: Phase verification checkpoint
--------------------------------------------------------------------------------
BLOCKERS
--------------------------------------------------------------------------------
{If blockers found:}
! BLOCKED: Task 3.1 in dashboard_20250112 depends on api_20250111 (incomplete)
{If no blockers:}
No blockers identified.
================================================================================
Commands: /conductor:implement {trackId} | /conductor:new-track | /conductor:revert
================================================================================
```
### Single Track Status (with track-id argument)
```
================================================================================
TRACK STATUS: {Track Title}
================================================================================
Track ID: {trackId}
Type: {feature|bug|chore|refactor}
Status: {Pending|In Progress|Complete}
Created: {date}
Updated: {date}
--------------------------------------------------------------------------------
SPECIFICATION
--------------------------------------------------------------------------------
Summary: {brief summary from spec.md}
Acceptance Criteria:
- [x] {Criterion 1}
- [ ] {Criterion 2}
- [ ] {Criterion 3}
--------------------------------------------------------------------------------
IMPLEMENTATION
--------------------------------------------------------------------------------
Overall: {completed}/{total} tasks ({percentage}%)
Progress: [##########..........] {percentage}%
## Phase 1: {Phase Name} [COMPLETE]
- [x] Task 1.1: {description}
- [x] Task 1.2: {description}
- [x] Verification: {description}
## Phase 2: {Phase Name} [IN PROGRESS]
- [x] Task 2.1: {description}
- [~] Task 2.2: {description} <-- CURRENT
- [ ] Task 2.3: {description}
- [ ] Verification: {description}
## Phase 3: {Phase Name} [PENDING]
- [ ] Task 3.1: {description}
- [ ] Task 3.2: {description}
- [ ] Verification: {description}
--------------------------------------------------------------------------------
GIT HISTORY
--------------------------------------------------------------------------------
Related Commits:
abc1234 - feat: add login form ({trackId})
def5678 - feat: add password validation ({trackId})
ghi9012 - chore: mark task 1.2 complete ({trackId})
--------------------------------------------------------------------------------
NEXT STEPS
--------------------------------------------------------------------------------
1. Current: Task 2.2 - {description}
2. Next: Task 2.3 - {description}
3. Phase 2 verification pending
================================================================================
Commands: /conductor:implement {trackId} | /conductor:revert {trackId}
================================================================================
```
## Status Markers Legend
Display at bottom if helpful:
```
Legend:
[x] = Complete
[~] = In Progress
[ ] = Pending
[!] = Blocked
```
## Error States
### No Tracks Found
```
================================================================================
PROJECT STATUS: {Project Name}
================================================================================
Conductor is set up but no tracks have been created yet.
To get started:
/conductor:new-track "your feature description"
================================================================================
```
### Conductor Not Initialized
```
ERROR: Conductor not initialized
Could not find conductor/product.md
Run /conductor:setup to initialize Conductor for this project.
```
### Track Not Found (with argument)
```
ERROR: Track not found: {argument}
Available tracks:
- auth_20250110
- dashboard_20250112
- nav-fix_20250114
Usage: /conductor:status [track-id]
```
## Calculation Logic
### Task Counting
```
For each plan.md:
- Complete: count lines matching /^- \[x\] Task/
- In Progress: count lines matching /^- \[~\] Task/
- Pending: count lines matching /^- \[ \] Task/
- Total: Complete + In Progress + Pending
```
### Phase Detection
```
Current phase = first phase header followed by any incomplete task ([ ] or [~])
```
### Progress Bar
```
filled = floor((completed / total) * 20)
empty = 20 - filled
bar = "[" + "#".repeat(filled) + ".".repeat(empty) + "]"
```
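The three rules above, expressed as a small Python sketch (the sample plan text is illustrative):

```python
import math
import re

def count_tasks(plan_text: str) -> tuple[int, int, int]:
    """Return (complete, in_progress, pending) counts for a plan.md body."""
    complete = len(re.findall(r"^- \[x\] Task", plan_text, re.MULTILINE))
    in_progress = len(re.findall(r"^- \[~\] Task", plan_text, re.MULTILINE))
    pending = len(re.findall(r"^- \[ \] Task", plan_text, re.MULTILINE))
    return complete, in_progress, pending

def progress_bar(completed: int, total: int, width: int = 20) -> str:
    filled = math.floor(completed / total * width) if total else 0
    return "[" + "#" * filled + "." * (width - filled) + "]"

plan = "- [x] Task 1.1: a\n- [~] Task 1.2: b\n- [ ] Task 1.3: c\n"
done, active, todo = count_tasks(plan)
print(done, active, todo)                       # 1 1 1
print(progress_bar(done, done + active + todo)) # [######..............]
```

Note that `total == 0` (a track with no tasks yet) is guarded so the bar renders empty rather than dividing by zero.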
## Quick Mode
If invoked with `--quick` or `-q`:
```
{Project Name}: {completed}/{total} tasks ({percentage}%)
Active: {trackId} - Task {X.Y}
```
## JSON Output
If invoked with `--json`:
```json
{
"project": "{name}",
"timestamp": "ISO_TIMESTAMP",
"tracks": {
"total": N,
"completed": X,
"in_progress": Y,
"pending": Z
},
"tasks": {
"total": M,
"completed": A,
"in_progress": B,
"pending": C
},
"current": {
"track": "{trackId}",
"phase": N,
"task": "{X.Y}"
},
"blockers": []
}
```
@@ -0,0 +1,385 @@
---
name: context-driven-development
description: Use this skill when working with Conductor's context-driven development methodology, managing project context artifacts, or understanding the relationship between product.md, tech-stack.md, and workflow.md files.
version: 1.0.0
---
# Context-Driven Development
Guide for implementing and maintaining context as a managed artifact alongside code, enabling consistent AI interactions and team alignment through structured project documentation.
## When to Use This Skill
- Setting up new projects with Conductor
- Understanding the relationship between context artifacts
- Maintaining consistency across AI-assisted development sessions
- Onboarding team members to an existing Conductor project
- Deciding when to update context documents
- Managing greenfield vs brownfield project contexts
## Core Philosophy
Context-Driven Development treats project context as a first-class artifact managed alongside code. Instead of relying on ad-hoc prompts or scattered documentation, establish a persistent, structured foundation that informs all AI interactions.
Key principles:
1. **Context precedes code**: Define what you're building and how before implementation
2. **Living documentation**: Context artifacts evolve with the project
3. **Single source of truth**: One canonical location for each type of information
4. **AI alignment**: Consistent context produces consistent AI behavior
## The Workflow
Follow the **Context → Spec & Plan → Implement** workflow:
1. **Context Phase**: Establish or verify project context artifacts exist and are current
2. **Specification Phase**: Define requirements and acceptance criteria for work units
3. **Planning Phase**: Break specifications into phased, actionable tasks
4. **Implementation Phase**: Execute tasks following established workflow patterns
## Artifact Relationships
### product.md - Defines WHAT and WHY
Purpose: Captures product vision, goals, target users, and business context.
Contents:
- Product name and one-line description
- Problem statement and solution approach
- Target user personas
- Core features and capabilities
- Success metrics and KPIs
- Product roadmap (high-level)
Update when:
- Product vision or goals change
- New major features are planned
- Target audience shifts
- Business priorities evolve
### product-guidelines.md - Defines HOW to Communicate
Purpose: Establishes brand voice, messaging standards, and communication patterns.
Contents:
- Brand voice and tone guidelines
- Terminology and glossary
- Error message conventions
- User-facing copy standards
- Documentation style
Update when:
- Brand guidelines change
- New terminology is introduced
- Communication patterns need refinement
### tech-stack.md - Defines WITH WHAT
Purpose: Documents technology choices, dependencies, and architectural decisions.
Contents:
- Primary languages and frameworks
- Key dependencies with versions
- Infrastructure and deployment targets
- Development tools and environment
- Testing frameworks
- Code quality tools
Update when:
- Adding new dependencies
- Upgrading major versions
- Changing infrastructure
- Adopting new tools or patterns
### workflow.md - Defines HOW to Work
Purpose: Establishes development practices, quality gates, and team workflows.
Contents:
- Development methodology (TDD, etc.)
- Git workflow and commit conventions
- Code review requirements
- Testing requirements and coverage targets
- Quality assurance gates
- Deployment procedures
Update when:
- Team practices evolve
- Quality standards change
- New workflow patterns are adopted
### tracks.md - Tracks WHAT'S HAPPENING
Purpose: Registry of all work units with status and metadata.
Contents:
- Active tracks with current status
- Completed tracks with completion dates
- Track metadata (type, priority, assignee)
- Links to individual track directories
Update when:
- New tracks are created
- Track status changes
- Tracks are completed or archived
## Context Maintenance Principles
### Keep Artifacts Synchronized
Ensure changes in one artifact reflect in related documents:
- New feature in product.md → Update tech-stack.md if new dependencies needed
- Completed track → Update product.md to reflect new capabilities
- Workflow change → Update all affected track plans
### Update tech-stack.md When Adding Dependencies
Before adding any new dependency:
1. Check if existing dependencies solve the need
2. Document the rationale for new dependencies
3. Add version constraints
4. Note any configuration requirements
### Update product.md When Features Complete
After completing a feature track:
1. Move feature from "planned" to "implemented" in product.md
2. Update any affected success metrics
3. Document any scope changes from original plan
### Verify Context Before Implementation
Before starting any track:
1. Read all context artifacts
2. Flag any outdated information
3. Propose updates before proceeding
4. Confirm context accuracy with stakeholders
## Greenfield vs Brownfield Handling
### Greenfield Projects (New)
For new projects:
1. Run `/conductor:setup` to create all artifacts interactively
2. Answer questions about product vision, tech preferences, and workflow
3. Generate initial style guides for chosen languages
4. Create empty tracks registry
Characteristics:
- Full control over context structure
- Define standards before code exists
- Establish patterns early
### Brownfield Projects (Existing)
For existing codebases:
1. Run `/conductor:setup` with existing codebase detection
2. System analyzes existing code, configs, and documentation
3. Pre-populate artifacts based on discovered patterns
4. Review and refine generated context
Characteristics:
- Extract implicit context from existing code
- Reconcile existing patterns with desired patterns
- Document technical debt and modernization plans
- Preserve working patterns while establishing standards
## Benefits
### Team Alignment
- New team members onboard faster with explicit context
- Consistent terminology and conventions across the team
- Shared understanding of product goals and technical decisions
### AI Consistency
- AI assistants produce aligned outputs across sessions
- Reduced need to re-explain context in each interaction
- Predictable behavior based on documented standards
### Institutional Memory
- Decisions and rationale are preserved
- Context survives team changes
- Historical context informs future decisions
### Quality Assurance
- Standards are explicit and verifiable
- Deviations from context are detectable
- Quality gates are documented and enforceable
## Directory Structure
```
conductor/
├── index.md               # Navigation hub linking all artifacts
├── product.md             # Product vision and goals
├── product-guidelines.md  # Communication standards
├── tech-stack.md          # Technology preferences
├── workflow.md            # Development practices
├── tracks.md              # Work unit registry
├── setup_state.json       # Resumable setup state
├── code_styleguides/      # Language-specific conventions
│   ├── python.md
│   ├── typescript.md
│   └── ...
└── tracks/
    └── <track-id>/
        ├── spec.md
        ├── plan.md
        ├── metadata.json
        └── index.md
```
## Context Lifecycle
1. **Creation**: Initial setup via `/conductor:setup`
2. **Validation**: Verify before each track
3. **Evolution**: Update as project grows
4. **Synchronization**: Keep artifacts aligned
5. **Archival**: Document historical decisions
## Context Validation Checklist
Before starting implementation on any track, validate context:
### Product Context
- [ ] product.md reflects current product vision
- [ ] Target users are accurately described
- [ ] Feature list is up to date
- [ ] Success metrics are defined
### Technical Context
- [ ] tech-stack.md lists all current dependencies
- [ ] Version numbers are accurate
- [ ] Infrastructure targets are correct
- [ ] Development tools are documented
### Workflow Context
- [ ] workflow.md describes current practices
- [ ] Quality gates are defined
- [ ] Coverage targets are specified
- [ ] Commit conventions are documented
### Track Context
- [ ] tracks.md shows all active work
- [ ] No stale or abandoned tracks
- [ ] Dependencies between tracks are noted
## Common Anti-Patterns
Avoid these context management mistakes:
### Stale Context
Problem: Context documents become outdated and misleading.
Solution: Update context as part of each track's completion process.
### Context Sprawl
Problem: Information scattered across multiple locations.
Solution: Use the defined artifact structure; resist creating new document types.
### Implicit Context
Problem: Relying on knowledge not captured in artifacts.
Solution: If you reference something repeatedly, add it to the appropriate artifact.
### Context Hoarding
Problem: One person maintains context without team input.
Solution: Review context artifacts in pull requests; make updates collaborative.
### Over-Specification
Problem: Context becomes so detailed it's impossible to maintain.
Solution: Keep artifacts focused on decisions that affect AI behavior and team alignment.
## Integration with Development Tools
### IDE Integration
Configure your IDE to display context files prominently:
- Pin conductor/product.md for quick reference
- Add tech-stack.md to project notes
- Create snippets for common patterns from style guides
### Git Hooks
Consider pre-commit hooks that:
- Warn when dependencies change without tech-stack.md update
- Remind to update product.md when feature branches merge
- Validate context artifact syntax
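One way to sketch the first of those warnings; the manifest name (`package.json`) and the function are illustrative, and in a real hook the staged list would come from `git diff --cached --name-only`:

```shell
#!/bin/sh
# Warn when a dependency manifest is staged without a matching
# conductor/tech-stack.md update. The staged list is a parameter here so
# the check can be exercised outside an actual hook.
check_tech_stack_sync() {
  staged="$1"
  if echo "$staged" | grep -q '^package\.json$' &&
     ! echo "$staged" | grep -q '^conductor/tech-stack\.md$'; then
    echo "warning: package.json changed without a tech-stack.md update"
  fi
}

check_tech_stack_sync "package.json
src/app.ts"
```

Keeping the hook warn-only (rather than failing the commit) matches the advisory tone of these reminders.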
### CI/CD Integration
Include context validation in pipelines:
- Check tech-stack.md matches actual dependencies
- Verify links in context documents resolve
- Ensure tracks.md status matches git branch state
## Session Continuity
Conductor supports multi-session development through context persistence:
### Starting a New Session
1. Read index.md to orient yourself
2. Check tracks.md for active work
3. Review relevant track's plan.md for current task
4. Verify context artifacts are current
### Ending a Session
1. Update plan.md with current progress
2. Note any blockers or decisions made
3. Commit in-progress work with clear status
4. Update tracks.md if status changed
### Handling Interruptions
If interrupted mid-task:
1. Mark task as `[~]` with note about stopping point
2. Commit work-in-progress to feature branch
3. Document any uncommitted decisions in plan.md
## Best Practices
1. **Read context first**: Always read relevant artifacts before starting work
2. **Small updates**: Make incremental context changes, not massive rewrites
3. **Link decisions**: Reference context when making implementation choices
4. **Version context**: Commit context changes alongside code changes
5. **Review context**: Include context artifact reviews in code reviews
6. **Validate regularly**: Run context validation checklist before major work
7. **Communicate changes**: Notify team when context artifacts change significantly
8. **Preserve history**: Use git to track context evolution over time
9. **Question staleness**: If context feels wrong, investigate and update
10. **Keep it actionable**: Every context item should inform a decision or behavior
@@ -0,0 +1,593 @@
---
name: track-management
description: Use this skill when creating, managing, or working with Conductor tracks - the logical work units for features, bugs, and refactors. Applies to spec.md, plan.md, and track lifecycle operations.
version: 1.0.0
---
# Track Management
Guide for creating, managing, and completing Conductor tracks - the logical work units that organize features, bugs, and refactors through specification, planning, and implementation phases.
## When to Use This Skill
- Creating new feature, bug, or refactor tracks
- Writing or reviewing spec.md files
- Creating or updating plan.md files
- Managing track lifecycle from creation to completion
- Understanding track status markers and conventions
- Working with the tracks.md registry
- Interpreting or updating track metadata
## Track Concept
A track is a logical work unit that encapsulates a complete piece of work. Each track has:
- A unique identifier
- A specification defining requirements
- A phased plan breaking work into tasks
- Metadata tracking status and progress
Tracks provide semantic organization for work, enabling:
- Clear scope boundaries
- Progress tracking
- Git-aware operations (revert by track)
- Team coordination
## Track Types
### feature
New functionality or capabilities. Use for:
- New user-facing features
- New API endpoints
- New integrations
- Significant enhancements
### bug
Defect fixes. Use for:
- Incorrect behavior
- Error conditions
- Performance regressions
- Security vulnerabilities
### chore
Maintenance and housekeeping. Use for:
- Dependency updates
- Configuration changes
- Documentation updates
- Cleanup tasks
### refactor
Code improvement without behavior change. Use for:
- Code restructuring
- Pattern adoption
- Technical debt reduction
- Performance optimization (same behavior, better performance)
## Track ID Format
Track IDs follow the pattern: `{shortname}_{YYYYMMDD}`
- **shortname**: 2-4 word kebab-case description (e.g., `user-auth`, `api-rate-limit`)
- **YYYYMMDD**: Creation date in ISO format
Examples:
- `user-auth_20250115`
- `fix-login-error_20250115`
- `upgrade-deps_20250115`
- `refactor-api-client_20250115`
## Track Lifecycle
### 1. Creation (newTrack)
**Define Requirements**
1. Gather requirements through interactive Q&A
2. Identify acceptance criteria
3. Determine scope boundaries
4. Identify dependencies
**Generate Specification**
1. Create `spec.md` with structured requirements
2. Document functional and non-functional requirements
3. Define acceptance criteria
4. List dependencies and constraints
**Generate Plan**
1. Create `plan.md` with phased task breakdown
2. Organize tasks into logical phases
3. Add verification tasks after phases
4. Estimate effort and complexity
**Register Track**
1. Add entry to `tracks.md` registry
2. Create track directory structure
3. Generate `metadata.json`
4. Create track `index.md`
### 2. Implementation
**Execute Tasks**
1. Select next pending task from plan
2. Mark task as in-progress
3. Implement following workflow (TDD)
4. Mark task complete with commit SHA
**Update Status**
1. Update task markers in plan.md
2. Record commit SHAs for traceability
3. Update phase progress
4. Update track status in tracks.md
**Verify Progress**
1. Complete verification tasks
2. Wait for checkpoint approval
3. Record checkpoint commits
### 3. Completion
**Sync Documentation**
1. Update product.md if features added
2. Update tech-stack.md if dependencies changed
3. Verify all acceptance criteria met
**Archive or Delete**
1. Mark track as completed in tracks.md
2. Record completion date
3. Archive or retain track directory
## Specification (spec.md) Structure
```markdown
# {Track Title}
## Overview
Brief description of what this track accomplishes and why.
## Functional Requirements
### FR-1: {Requirement Name}
Description of the functional requirement.
- Acceptance: How to verify this requirement is met
### FR-2: {Requirement Name}
...
## Non-Functional Requirements
### NFR-1: {Requirement Name}
Description of the non-functional requirement (performance, security, etc.)
- Target: Specific measurable target
- Verification: How to test
## Acceptance Criteria
- [ ] Criterion 1: Specific, testable condition
- [ ] Criterion 2: Specific, testable condition
- [ ] Criterion 3: Specific, testable condition
## Scope
### In Scope
- Explicitly included items
- Features to implement
- Components to modify
### Out of Scope
- Explicitly excluded items
- Future considerations
- Related but separate work
## Dependencies
### Internal
- Other tracks or components this depends on
- Required context artifacts
### External
- Third-party services or APIs
- External dependencies
## Risks and Mitigations
| Risk | Impact | Mitigation |
| ---------------- | --------------- | ------------------- |
| Risk description | High/Medium/Low | Mitigation strategy |
## Open Questions
- [ ] Question that needs resolution
- [x] Resolved question - Answer
```
## Plan (plan.md) Structure
```markdown
# Implementation Plan: {Track Title}
Track ID: `{track-id}`
Created: YYYY-MM-DD
Status: pending | in-progress | completed
## Overview
Brief description of implementation approach.
## Phase 1: {Phase Name}
### Tasks
- [ ] **Task 1.1**: Task description
- Sub-task or detail
- Sub-task or detail
- [ ] **Task 1.2**: Task description
- [ ] **Task 1.3**: Task description
### Verification
- [ ] **Verify 1.1**: Verification step for phase
## Phase 2: {Phase Name}
### Tasks
- [ ] **Task 2.1**: Task description
- [ ] **Task 2.2**: Task description
### Verification
- [ ] **Verify 2.1**: Verification step for phase
## Phase 3: Finalization
### Tasks
- [ ] **Task 3.1**: Update documentation
- [ ] **Task 3.2**: Final integration test
### Verification
- [ ] **Verify 3.1**: All acceptance criteria met
## Checkpoints
| Phase | Checkpoint SHA | Date | Status |
| ------- | -------------- | ---- | ------- |
| Phase 1 | | | pending |
| Phase 2 | | | pending |
| Phase 3 | | | pending |
```
## Status Marker Conventions
Use consistent markers in plan.md:
| Marker | Meaning | Usage |
| ------ | ----------- | --------------------------- |
| `[ ]` | Pending | Task not started |
| `[~]`  | In Progress | Currently being worked on   |
| `[x]` | Complete | Task finished (include SHA) |
| `[-]` | Skipped | Intentionally not done |
| `[!]` | Blocked | Waiting on dependency |
Example:
```markdown
- [x] **Task 1.1**: Set up database schema `abc1234`
- [~] **Task 1.2**: Implement user model
- [ ] **Task 1.3**: Add validation logic
- [!] **Task 1.4**: Integrate auth service (blocked: waiting for API key)
- [-] **Task 1.5**: Legacy migration (skipped: not needed)
```
## Track Registry (tracks.md) Format
```markdown
# Track Registry
## Active Tracks
| Track ID | Type | Status | Phase | Started | Assignee |
| ------------------------------------------------ | ------- | ----------- | ----- | ---------- | ---------- |
| [user-auth_20250115](tracks/user-auth_20250115/) | feature | in-progress | 2/3 | 2025-01-15 | @developer |
| [fix-login_20250114](tracks/fix-login_20250114/) | bug | pending | 0/2 | 2025-01-14 | - |
## Completed Tracks
| Track ID | Type | Completed | Duration |
| ---------------------------------------------- | ----- | ---------- | -------- |
| [setup-ci_20250110](tracks/setup-ci_20250110/) | chore | 2025-01-12 | 2 days |
## Archived Tracks
| Track ID | Reason | Archived |
| ---------------------------------------------------- | ---------- | ---------- |
| [old-feature_20241201](tracks/old-feature_20241201/) | Superseded | 2025-01-05 |
```
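For tooling that consumes the registry, the linked track IDs can be pulled out of these tables with a short parser. This is a minimal sketch (function and pattern names are assumptions, not part of Conductor):

```python
import re

# Matches table rows whose first cell is a markdown link, e.g.
# | [user-auth_20250115](tracks/user-auth_20250115/) | feature | ...
ROW = re.compile(r"^\|\s*\[(?P<id>[^\]]+)\]\([^)]*\)\s*\|")

def linked_track_ids(registry_text: str) -> list[str]:
    """Extract track IDs from linked rows of a tracks.md registry."""
    return [m["id"] for line in registry_text.splitlines()
            if (m := ROW.match(line.strip()))]
```

Header and separator rows have no link in the first cell, so only track rows match.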
## Metadata (metadata.json) Fields
```json
{
"id": "user-auth_20250115",
"title": "User Authentication System",
"type": "feature",
"status": "in-progress",
"priority": "high",
"created": "2025-01-15T10:30:00Z",
"updated": "2025-01-15T14:45:00Z",
"started": "2025-01-15T11:00:00Z",
"completed": null,
"assignee": "@developer",
"phases": {
"total": 3,
"current": 2,
"completed": 1
},
"tasks": {
"total": 12,
"completed": 5,
"in_progress": 1,
"pending": 6
},
"checkpoints": [
{
"phase": 1,
"sha": "abc1234",
"date": "2025-01-15T13:00:00Z"
}
],
"dependencies": [],
"tags": ["auth", "security"]
}
```
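The `tasks` block above can be recomputed from plan.md rather than maintained by hand. A hedged sketch (names are illustrative; assumes the status marker conventions described in this skill):

```python
import re
from collections import Counter

# Status markers from plan.md, per this skill's marker conventions.
MARKER_STATUS = {" ": "pending", "~": "in_progress", "x": "completed",
                 "-": "skipped", "!": "blocked"}
TASK_LINE = re.compile(r"^- \[([ ~x!-])\] \*\*Task")

def count_tasks(plan_text: str) -> dict:
    """Tally plan.md task markers into the metadata.json 'tasks' shape."""
    counts = Counter(
        MARKER_STATUS[m.group(1)]
        for line in plan_text.splitlines()
        if (m := TASK_LINE.match(line.strip()))
    )
    return {"total": sum(counts.values()), **counts}
```

Sub-task bullets carry no `[ ]` marker, so only top-level tasks are counted.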
## Track Operations
### Creating a Track
1. Run `/conductor:new-track`
2. Answer interactive questions
3. Review generated spec.md
4. Review generated plan.md
5. Confirm track creation
### Starting Implementation
1. Read spec.md and plan.md
2. Verify context artifacts are current
3. Mark first task as `[~]`
4. Begin TDD workflow
### Completing a Phase
1. Ensure all phase tasks are `[x]`
2. Complete verification tasks
3. Wait for checkpoint approval
4. Record checkpoint SHA
5. Proceed to next phase
### Completing a Track
1. Verify all phases complete
2. Verify all acceptance criteria met
3. Update product.md if needed
4. Mark track completed in tracks.md
5. Update metadata.json
### Reverting a Track
1. Run `/conductor:revert`
2. Select track to revert
3. Choose granularity (track/phase/task)
4. Confirm revert operation
5. Update status markers
## Handling Track Dependencies
### Identifying Dependencies
During track creation, identify:
- **Hard dependencies**: Must complete before this track can start
- **Soft dependencies**: Can proceed in parallel but may affect integration
- **External dependencies**: Third-party services, APIs, or team decisions
### Documenting Dependencies
In spec.md, list dependencies with:
- Dependency type (hard/soft/external)
- Current status (available/pending/blocked)
- Resolution path (what needs to happen)
### Managing Blocked Tracks
When a track is blocked:
1. Mark blocked tasks with `[!]` and reason
2. Update tracks.md status
3. Document blocker in metadata.json
4. Consider creating dependency track if needed
## Track Sizing Guidelines
### Right-Sized Tracks
Aim for tracks that:
- Complete in 1-5 days of work
- Have 2-4 phases
- Contain 8-20 tasks total
- Deliver a coherent, testable unit
### Too Large
Signs a track is too large:
- More than 5 phases
- More than 25 tasks
- Multiple unrelated features
- Estimated duration > 1 week
Solution: Split into multiple tracks with clear boundaries.
### Too Small
Signs a track is too small:
- Single phase with 1-2 tasks
- No meaningful verification needed
- Could be a sub-task of another track
- Less than a few hours of work
Solution: Combine with related work or handle as part of existing track.
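The sizing heuristics above can be sketched as a simple check (thresholds taken from this section; the helper itself is illustrative, not part of Conductor):

```python
def track_size_warnings(phase_count: int, task_count: int) -> list[str]:
    """Flag tracks falling outside the sizing guidelines (2-4 phases, 8-20 tasks)."""
    warnings = []
    if phase_count > 5 or task_count > 25:
        warnings.append("too large: split into multiple tracks with clear boundaries")
    if phase_count <= 1 and task_count <= 2:
        warnings.append("too small: combine with related work or an existing track")
    return warnings
```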
## Specification Quality Checklist
Before finalizing spec.md, verify:
### Requirements Quality
- [ ] Each requirement has clear acceptance criteria
- [ ] Requirements are testable
- [ ] Requirements are independent (can verify separately)
- [ ] No ambiguous language ("should be fast" → "response < 200ms")
### Scope Clarity
- [ ] In-scope items are specific
- [ ] Out-of-scope items prevent scope creep
- [ ] Boundaries are clear to implementer
### Dependencies Identified
- [ ] All internal dependencies listed
- [ ] External dependencies have owners/contacts
- [ ] Dependency status is current
### Risks Addressed
- [ ] Major risks identified
- [ ] Impact assessment realistic
- [ ] Mitigations are actionable
## Plan Quality Checklist
Before starting implementation, verify plan.md:
### Task Quality
- [ ] Tasks are atomic (one logical action)
- [ ] Tasks are independently verifiable
- [ ] Task descriptions are clear
- [ ] Sub-tasks provide helpful detail
### Phase Organization
- [ ] Phases group related tasks
- [ ] Each phase delivers something testable
- [ ] Verification tasks after each phase
- [ ] Phases build on each other logically
### Completeness
- [ ] All spec requirements have corresponding tasks
- [ ] Documentation tasks included
- [ ] Testing tasks included
- [ ] Integration tasks included
## Common Track Patterns
### Feature Track Pattern
```
Phase 1: Foundation
- Data models
- Database migrations
- Basic API structure
Phase 2: Core Logic
- Business logic implementation
- Input validation
- Error handling
Phase 3: Integration
- UI integration
- API documentation
- End-to-end tests
```
### Bug Fix Track Pattern
```
Phase 1: Reproduction
- Write failing test capturing bug
- Document reproduction steps
Phase 2: Fix
- Implement fix
- Verify test passes
- Check for regressions
Phase 3: Verification
- Manual verification
- Update documentation if needed
```
### Refactor Track Pattern
```
Phase 1: Preparation
- Add characterization tests
- Document current behavior
Phase 2: Refactoring
- Apply changes incrementally
- Maintain green tests throughout
Phase 3: Cleanup
- Remove dead code
- Update documentation
```
## Best Practices
1. **One track, one concern**: Keep tracks focused on a single logical change
2. **Small phases**: Break work into phases of 3-5 tasks maximum
3. **Verification after phases**: Always include verification tasks
4. **Update markers immediately**: Mark task status as you work
5. **Record SHAs**: Always note commit SHAs for completed tasks
6. **Review specs before planning**: Ensure spec is complete before creating plan
7. **Link dependencies**: Explicitly note track dependencies
8. **Archive, don't delete**: Preserve completed tracks for reference
9. **Size appropriately**: Keep tracks between 1-5 days of work
10. **Clear acceptance criteria**: Every requirement must be testable

---
name: workflow-patterns
description: Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding the verification protocol.
version: 1.0.0
---
# Workflow Patterns
Guide for implementing tasks using Conductor's TDD workflow, managing phase checkpoints, handling git commits, and executing the verification protocol that ensures quality throughout implementation.
## When to Use This Skill
- Implementing tasks from a track's plan.md
- Following TDD red-green-refactor cycle
- Completing phase checkpoints
- Managing git commits and notes
- Understanding quality assurance gates
- Handling verification protocols
- Recording progress in plan files
## TDD Task Lifecycle
Follow these 11 steps for each task:
### Step 1: Select Next Task
Read plan.md and identify the next pending `[ ]` task. Select tasks in order within the current phase. Do not skip ahead to later phases.
### Step 2: Mark as In Progress
Update plan.md to mark the task as `[~]`:
```markdown
- [~] **Task 2.1**: Implement user validation
```
Commit this status change separately from implementation.
### Step 3: RED - Write Failing Tests
Write tests that define the expected behavior before writing implementation:
- Create test file if needed
- Write test cases covering happy path
- Write test cases covering edge cases
- Write test cases covering error conditions
- Run tests - they should FAIL
Example:
```python
def test_validate_user_email_valid():
user = User(email="test@example.com")
assert user.validate_email() is True
def test_validate_user_email_invalid():
user = User(email="invalid")
assert user.validate_email() is False
```
### Step 4: GREEN - Implement Minimum Code
Write the minimum code necessary to make tests pass:
- Focus on making tests green, not perfection
- Avoid premature optimization
- Keep implementation simple
- Run tests - they should PASS
### Step 5: REFACTOR - Improve Clarity
With green tests, improve the code:
- Extract common patterns
- Improve naming
- Remove duplication
- Simplify logic
- Run tests after each change - they should remain GREEN
### Step 6: Verify Coverage
Check test coverage meets the 80% target:
```bash
pytest --cov=module --cov-report=term-missing
```
If coverage is below 80%:
- Identify uncovered lines
- Add tests for missing paths
- Re-run coverage check
### Step 7: Document Deviations
If implementation deviated from plan or introduced new dependencies:
- Update tech-stack.md with new dependencies
- Note deviations in plan.md task comments
- Update spec.md if requirements changed
### Step 8: Commit Implementation
Create a focused commit for the task:
```bash
git add -A
git commit -m "feat(user): implement email validation
- Add validate_email method to User class
- Handle empty and malformed emails
- Add comprehensive test coverage
Task: 2.1
Track: user-auth_20250115"
```
Commit message format:
- Type: feat, fix, refactor, test, docs, chore
- Scope: affected module or component
- Summary: imperative, present tense
- Body: bullet points of changes
- Footer: task and track references
### Step 9: Attach Git Notes
Add rich task summary as git note:
```bash
git notes add -m "Task 2.1: Implement user validation
Summary:
- Added email validation using regex pattern
- Handles edge cases: empty, no @, no domain
- Coverage: 94% on validation module
Files changed:
- src/models/user.py (modified)
- tests/test_user.py (modified)
Decisions:
- Used simple regex over email-validator library
- Reason: No external dependency for basic validation"
```
### Step 10: Update Plan with SHA
Update plan.md to mark task complete with commit SHA:
```markdown
- [x] **Task 2.1**: Implement user validation `abc1234`
```
### Step 11: Commit Plan Update
Commit the plan status update:
```bash
git add conductor/tracks/*/plan.md
git commit -m "docs: update plan - task 2.1 complete
Track: user-auth_20250115"
```
## Phase Completion Protocol
When all tasks in a phase are complete, execute the verification protocol:
### Identify Changed Files
List all files modified since the last checkpoint:
```bash
git diff --name-only <last-checkpoint-sha>..HEAD
```
### Ensure Test Coverage
For each modified file:
1. Identify corresponding test file
2. Verify tests exist for new/changed code
3. Run coverage for modified modules
4. Add tests if coverage < 80%
### Run Full Test Suite
Execute complete test suite:
```bash
pytest -v --tb=short
```
All tests must pass before proceeding.
### Generate Manual Verification Steps
Create checklist of manual verifications:
```markdown
## Phase 1 Verification Checklist
- [ ] User can register with valid email
- [ ] Invalid email shows appropriate error
- [ ] Database stores user correctly
- [ ] API returns expected response codes
```
### WAIT for User Approval
Present verification checklist to user:
```
Phase 1 complete. Please verify:
1. [ ] Test suite passes (automated)
2. [ ] Coverage meets target (automated)
3. [ ] Manual verification items (requires human)
Respond with 'approved' to continue, or note issues.
```
Do NOT proceed without explicit approval.
### Create Checkpoint Commit
After approval, create checkpoint commit:
```bash
git add -A
git commit -m "checkpoint: phase 1 complete - user-auth_20250115
Verified:
- All tests passing
- Coverage: 87%
- Manual verification approved
Phase 1 tasks:
- [x] Task 1.1: Setup database schema
- [x] Task 1.2: Implement user model
- [x] Task 1.3: Add validation logic"
```
### Record Checkpoint SHA
Update plan.md checkpoints table:
```markdown
## Checkpoints
| Phase | Checkpoint SHA | Date | Status |
| ------- | -------------- | ---------- | -------- |
| Phase 1 | def5678 | 2025-01-15 | verified |
| Phase 2 | | | pending |
```
## Quality Assurance Gates
Before marking any task complete, verify these gates:
### Passing Tests
- All existing tests pass
- New tests pass
- No test regressions
### Coverage >= 80%
- New code has 80%+ coverage
- Overall project coverage maintained
- Critical paths fully covered
### Style Compliance
- Code follows style guides
- Linting passes
- Formatting correct
### Documentation
- Public APIs documented
- Complex logic explained
- README updated if needed
### Type Safety
- Type hints present (if applicable)
- Type checker passes
- No `type: ignore` without a reason
### No Linting Errors
- Zero linter errors
- Warnings addressed or justified
- Static analysis clean
### Mobile Compatibility
If applicable:
- Responsive design verified
- Touch interactions work
- Performance acceptable
### Security Audit
- No secrets in code
- Input validation present
- Authentication/authorization correct
- Dependencies vulnerability-free
## Git Integration
### Commit Message Format
```
<type>(<scope>): <subject>
<body>
<footer>
```
Types:
- `feat`: New feature
- `fix`: Bug fix
- `refactor`: Code change without feature/fix
- `test`: Adding tests
- `docs`: Documentation
- `chore`: Maintenance
### Git Notes for Rich Summaries
Attach detailed notes to commits:
```bash
git notes add -m "<detailed summary>"
```
View notes:
```bash
git log --show-notes
```
Benefits:
- Preserves context without cluttering commit message
- Enables semantic queries across commits
- Supports track-based operations
### SHA Recording in plan.md
Always record the commit SHA when completing tasks:
```markdown
- [x] **Task 1.1**: Setup schema `abc1234`
- [x] **Task 1.2**: Add model `def5678`
```
This enables:
- Traceability from plan to code
- Semantic revert operations
- Progress auditing
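With SHAs recorded this way, the task-to-commit mapping can be recovered mechanically. A minimal sketch (names are assumptions, not Conductor internals):

```python
import re

# Matches completed lines like: - [x] **Task 1.1**: Setup schema `abc1234`
COMPLETED_TASK = re.compile(
    r"- \[x\] \*\*Task (?P<task>[\d.]+)\*\*:.*?`(?P<sha>[0-9a-f]{7,40})`"
)

def task_shas(plan_text: str) -> dict[str, str]:
    """Map completed task numbers in plan.md to their recorded commit SHAs."""
    return {m["task"]: m["sha"] for m in COMPLETED_TASK.finditer(plan_text)}
```

The resulting SHAs can then be fed to `git revert` for task-level semantic reversion.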
## Verification Checkpoints
### Why Checkpoints Matter
Checkpoints create restore points for semantic reversion:
- Revert to end of any phase
- Maintain logical code state
- Enable safe experimentation
### When to Create Checkpoints
Create checkpoint after:
- All phase tasks complete
- All phase verifications pass
- User approval received
### Checkpoint Commit Content
Include in checkpoint commit:
- All uncommitted changes
- Updated plan.md
- Updated metadata.json
- Any documentation updates
### How to Use Checkpoints
For reverting:
```bash
# Revert to end of Phase 1
git revert --no-commit <phase-2-commits>...
git commit -m "revert: rollback to phase 1 checkpoint"
```
For review:
```bash
# See what changed in Phase 2
git diff <phase-1-sha>..<phase-2-sha>
```
## Handling Deviations
During implementation, deviations from the plan may occur. Handle them systematically:
### Types of Deviations
**Scope Addition**
Discovered requirement not in original spec.
- Document in spec.md as new requirement
- Add tasks to plan.md
- Note addition in task comments
**Scope Reduction**
Feature deemed unnecessary during implementation.
- Mark tasks as `[-]` (skipped) with reason
- Update spec.md scope section
- Document decision rationale
**Technical Deviation**
Different implementation approach than planned.
- Note deviation in task completion comment
- Update tech-stack.md if dependencies changed
- Document why original approach was unsuitable
**Requirement Change**
Understanding of requirement changes during work.
- Update spec.md with corrected requirement
- Adjust plan.md tasks if needed
- Re-verify acceptance criteria
### Deviation Documentation Format
When completing a task with deviation:
```markdown
- [x] **Task 2.1**: Implement validation `abc1234`
- DEVIATION: Used library instead of custom code
- Reason: Better edge case handling
- Impact: Added email-validator to dependencies
```
## Error Recovery
### Failed Tests After GREEN
If tests fail after reaching GREEN:
1. Do NOT proceed to REFACTOR
2. Identify which test started failing
3. Check if refactoring broke something
4. Revert to last known GREEN state
5. Re-approach the implementation
### Checkpoint Rejection
If user rejects a checkpoint:
1. Note rejection reason in plan.md
2. Create tasks to address issues
3. Complete remediation tasks
4. Request checkpoint approval again
### Blocked by Dependency
If task cannot proceed:
1. Mark task as `[!]` with blocker description
2. Check if other tasks can proceed
3. Document expected resolution timeline
4. Consider creating dependency resolution track
## TDD Variations by Task Type
### Data Model Tasks
```
RED: Write test for model creation and validation
GREEN: Implement model class with fields
REFACTOR: Add computed properties, improve types
```
### API Endpoint Tasks
```
RED: Write test for request/response contract
GREEN: Implement endpoint handler
REFACTOR: Extract validation, improve error handling
```
### Integration Tasks
```
RED: Write test for component interaction
GREEN: Wire components together
REFACTOR: Improve error propagation, add logging
```
### Refactoring Tasks
```
RED: Add characterization tests for current behavior
GREEN: Apply refactoring (tests should stay green)
REFACTOR: Clean up any introduced complexity
```
## Working with Existing Tests
When modifying code with existing tests:
### Extend, Don't Replace
- Keep existing tests passing
- Add new tests for new behavior
- Update tests only when requirements change
### Test Migration
When refactoring changes test structure:
1. Run existing tests (should pass)
2. Add new tests for refactored code
3. Migrate test cases to new structure
4. Remove old tests only after new tests pass
### Regression Prevention
After any change:
1. Run full test suite
2. Check for unexpected failures
3. Investigate any new failures
4. Fix regressions before proceeding
## Checkpoint Verification Details
### Automated Verification
Run before requesting approval:
```bash
# Test suite
pytest -v --tb=short
# Coverage
pytest --cov=src --cov-report=term-missing
# Linting
ruff check src/ tests/
# Type checking (if applicable)
mypy src/
```
### Manual Verification Guidance
For manual items, provide specific instructions:
```markdown
## Manual Verification Steps
### User Registration
1. Navigate to /register
2. Enter valid email: test@example.com
3. Enter password meeting requirements
4. Click Submit
5. Verify success message appears
6. Verify user appears in database
### Error Handling
1. Enter invalid email: "notanemail"
2. Verify error message shows
3. Verify form retains other entered data
```
## Performance Considerations
### Test Suite Performance
Keep test suite fast:
- Use fixtures to avoid redundant setup
- Mock slow external calls
- Run a subset during development and the full suite at checkpoints
### Commit Performance
Keep commits atomic:
- One logical change per commit
- Complete thought, not work-in-progress
- Tests should pass after every commit
## Best Practices
1. **Never skip RED**: Always write failing tests first
2. **Small commits**: One logical change per commit
3. **Immediate updates**: Update plan.md right after task completion
4. **Wait for approval**: Never skip checkpoint verification
5. **Rich git notes**: Include context that helps future understanding
6. **Coverage discipline**: Don't accept coverage below target
7. **Quality gates**: Check all gates before marking complete
8. **Sequential phases**: Complete phases in order
9. **Document deviations**: Note any changes from original plan
10. **Clean state**: Each commit should leave code in working state
11. **Fast feedback**: Run relevant tests frequently during development
12. **Clear blockers**: Address blockers promptly, don't work around them

# C# Style Guide
C# conventions and best practices for .NET development.
## Naming Conventions
### General Rules
```csharp
// PascalCase for public members, types, namespaces
public class UserService { }
public void ProcessOrder() { }
public string FirstName { get; set; }
// camelCase for private fields, parameters, locals
private readonly ILogger _logger;
private int _itemCount;
public void DoWork(string inputValue) { }
// Prefix interfaces with I
public interface IUserRepository { }
public interface INotificationService { }
// Suffix async methods with Async
public async Task<User> GetUserAsync(int id) { }
public async Task ProcessOrderAsync(Order order) { }
// Constants: PascalCase (not SCREAMING_CASE)
public const int MaxRetryCount = 3;
public const string DefaultCurrency = "USD";
```
### Field and Property Naming
```csharp
public class Order
{
// Private fields: underscore prefix + camelCase
private readonly IOrderRepository _repository;
private int _itemCount;
// Public properties: PascalCase
public int Id { get; set; }
public string CustomerName { get; set; }
public DateTime CreatedAt { get; init; }
// Boolean properties: Is/Has/Can prefix
public bool IsActive { get; set; }
public bool HasDiscount { get; set; }
public bool CanEdit { get; }
}
```
## Async/Await Patterns
### Basic Async Usage
```csharp
// Always use async/await for I/O operations
public async Task<User> GetUserAsync(int id)
{
var user = await _repository.FindAsync(id);
if (user == null)
{
throw new NotFoundException($"User {id} not found");
}
return user;
}
// Don't block on async code
// Bad
var user = GetUserAsync(id).Result;
// Good
var user = await GetUserAsync(id);
```
### Async Best Practices
```csharp
// Use ConfigureAwait(false) in library code
public async Task<Data> FetchDataAsync()
{
var response = await _httpClient.GetAsync(url)
.ConfigureAwait(false);
return await response.Content.ReadAsAsync<Data>()
.ConfigureAwait(false);
}
// Avoid async void except for event handlers
// Bad
public async void ProcessOrder() { }
// Good
public async Task ProcessOrderAsync() { }
// Event handler exception
private async void Button_Click(object sender, EventArgs e)
{
try
{
await ProcessOrderAsync();
}
catch (Exception ex)
{
HandleError(ex);
}
}
```
### Parallel Async Operations
```csharp
// Execute independent operations in parallel
public async Task<DashboardData> LoadDashboardAsync()
{
var usersTask = _userService.GetActiveUsersAsync();
var ordersTask = _orderService.GetRecentOrdersAsync();
var statsTask = _statsService.GetDailyStatsAsync();
await Task.WhenAll(usersTask, ordersTask, statsTask);
return new DashboardData
{
Users = await usersTask,
Orders = await ordersTask,
Stats = await statsTask
};
}
// Use SemaphoreSlim for throttling
public async Task ProcessItemsAsync(IEnumerable<Item> items)
{
using var semaphore = new SemaphoreSlim(10); // Max 10 concurrent
var tasks = items.Select(async item =>
{
await semaphore.WaitAsync();
try
{
await ProcessItemAsync(item);
}
finally
{
semaphore.Release();
}
});
await Task.WhenAll(tasks);
}
```
## LINQ
### Query Syntax vs Method Syntax
```csharp
// Method syntax (preferred for simple queries)
var activeUsers = users
.Where(u => u.IsActive)
.OrderBy(u => u.Name)
.ToList();
// Query syntax (for complex queries with joins)
var orderSummary =
from order in orders
join customer in customers on order.CustomerId equals customer.Id
where order.Total > 100
group order by customer.Name into g
select new { Customer = g.Key, Total = g.Sum(o => o.Total) };
```
### LINQ Best Practices
```csharp
// Use the appropriate method for the question being asked
var hasItems = items.Any(); // Not: items.Count() > 0
var first = items.FirstOrDefault(); // Not: items.First(), which throws when empty
var count = list.Count; // Count property on List<T>, not the Count() extension method
// Avoid multiple enumerations
// Bad
if (items.Any())
{
foreach (var item in items) { }
}
// Good
var itemList = items.ToList();
if (itemList.Count > 0)
{
foreach (var item in itemList) { }
}
// Project early to reduce memory
var names = users
.Where(u => u.IsActive)
.Select(u => u.Name) // Select only what you need
.ToList();
```
### Common LINQ Operations
```csharp
// Filtering
var adults = people.Where(p => p.Age >= 18);
// Transformation
var names = people.Select(p => $"{p.FirstName} {p.LastName}");
// Aggregation
var total = orders.Sum(o => o.Amount);
var average = scores.Average();
var max = values.Max();
// Grouping
var byDepartment = employees
.GroupBy(e => e.Department)
.Select(g => new { Department = g.Key, Count = g.Count() });
// Joining
var result = orders
.Join(customers,
o => o.CustomerId,
c => c.Id,
(o, c) => new { Order = o, Customer = c });
// Flattening
var allOrders = customers.SelectMany(c => c.Orders);
```
## Dependency Injection
### Service Registration
```csharp
// In Program.cs or Startup.cs
public void ConfigureServices(IServiceCollection services)
{
// Transient: new instance each time
services.AddTransient<IEmailService, EmailService>();
// Scoped: one instance per request
services.AddScoped<IUserRepository, UserRepository>();
// Singleton: one instance for app lifetime
services.AddSingleton<ICacheService, MemoryCacheService>();
// Factory registration
services.AddScoped<IDbConnection>(sp =>
{
var config = sp.GetRequiredService<IConfiguration>();
return new SqlConnection(config.GetConnectionString("Default"));
});
}
```
### Constructor Injection
```csharp
public class OrderService : IOrderService
{
private readonly IOrderRepository _repository;
private readonly ILogger<OrderService> _logger;
private readonly IEmailService _emailService;
public OrderService(
IOrderRepository repository,
ILogger<OrderService> logger,
IEmailService emailService)
{
_repository = repository ?? throw new ArgumentNullException(nameof(repository));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_emailService = emailService ?? throw new ArgumentNullException(nameof(emailService));
}
public async Task<Order> CreateOrderAsync(OrderRequest request)
{
_logger.LogInformation("Creating order for customer {CustomerId}", request.CustomerId);
var order = new Order(request);
await _repository.SaveAsync(order);
await _emailService.SendOrderConfirmationAsync(order);
return order;
}
}
```
### Options Pattern
```csharp
// Configuration class
public class EmailSettings
{
public string SmtpServer { get; set; }
public int Port { get; set; }
public string FromAddress { get; set; }
}
// Registration
services.Configure<EmailSettings>(
configuration.GetSection("Email"));
// Usage
public class EmailService
{
private readonly EmailSettings _settings;
public EmailService(IOptions<EmailSettings> options)
{
_settings = options.Value;
}
}
```
## Testing
### xUnit Basics
```csharp
public class CalculatorTests
{
[Fact]
public void Add_TwoPositiveNumbers_ReturnsSum()
{
// Arrange
var calculator = new Calculator();
// Act
var result = calculator.Add(2, 3);
// Assert
Assert.Equal(5, result);
}
[Theory]
[InlineData(1, 1, 2)]
[InlineData(0, 0, 0)]
[InlineData(-1, 1, 0)]
public void Add_VariousNumbers_ReturnsCorrectSum(int a, int b, int expected)
{
var calculator = new Calculator();
Assert.Equal(expected, calculator.Add(a, b));
}
}
```
### Mocking with Moq
```csharp
public class OrderServiceTests
{
    private readonly Mock<IOrderRepository> _mockRepository;
    private readonly Mock<ILogger<OrderService>> _mockLogger;
    private readonly Mock<IEmailService> _mockEmailService;
    private readonly OrderService _service;
    public OrderServiceTests()
    {
        _mockRepository = new Mock<IOrderRepository>();
        _mockLogger = new Mock<ILogger<OrderService>>();
        _mockEmailService = new Mock<IEmailService>();
        _service = new OrderService(
            _mockRepository.Object, _mockLogger.Object, _mockEmailService.Object);
}
[Fact]
public async Task GetOrderAsync_ExistingOrder_ReturnsOrder()
{
// Arrange
var expectedOrder = new Order { Id = 1, Total = 100m };
_mockRepository
.Setup(r => r.FindAsync(1))
.ReturnsAsync(expectedOrder);
// Act
var result = await _service.GetOrderAsync(1);
// Assert
Assert.Equal(expectedOrder.Id, result.Id);
_mockRepository.Verify(r => r.FindAsync(1), Times.Once);
}
[Fact]
public async Task GetOrderAsync_NonExistingOrder_ThrowsNotFoundException()
{
// Arrange
_mockRepository
.Setup(r => r.FindAsync(999))
.ReturnsAsync((Order)null);
// Act & Assert
await Assert.ThrowsAsync<NotFoundException>(
() => _service.GetOrderAsync(999));
}
}
```
### Integration Testing
```csharp
public class ApiIntegrationTests : IClassFixture<WebApplicationFactory<Program>>
{
private readonly HttpClient _client;
public ApiIntegrationTests(WebApplicationFactory<Program> factory)
{
_client = factory.CreateClient();
}
[Fact]
public async Task GetUsers_ReturnsSuccessAndCorrectContentType()
{
// Act
var response = await _client.GetAsync("/api/users");
// Assert
response.EnsureSuccessStatusCode();
Assert.Equal("application/json; charset=utf-8",
response.Content.Headers.ContentType.ToString());
}
}
```
## Common Patterns
### Null Handling
```csharp
// Null-conditional operators
var length = customer?.Address?.Street?.Length;
var name = user?.Name ?? "Unknown";
// Null-coalescing assignment
list ??= new List<Item>();
// Pattern matching for null checks
if (user is not null)
{
ProcessUser(user);
}
// Guard clauses
public void ProcessOrder(Order order)
{
ArgumentNullException.ThrowIfNull(order);
if (order.Items.Count == 0)
{
throw new ArgumentException("Order must have items", nameof(order));
}
// Process...
}
```
### Records and Init-Only Properties
```csharp
// Record for immutable data
public record User(int Id, string Name, string Email);
// Record with additional members
public record Order
{
public int Id { get; init; }
public string CustomerName { get; init; }
public decimal Total { get; init; }
public bool IsHighValue => Total > 1000;
}
// Record mutation via with expression
var updatedUser = user with { Name = "New Name" };
```
### Pattern Matching
```csharp
// Type patterns
public decimal CalculateDiscount(object customer) => customer switch
{
PremiumCustomer p => p.PurchaseTotal * 0.2m,
RegularCustomer r when r.YearsActive > 5 => r.PurchaseTotal * 0.1m,
RegularCustomer r => r.PurchaseTotal * 0.05m,
null => 0m,
_ => throw new ArgumentException("Unknown customer type")
};
// Property patterns
public string GetShippingOption(Order order) => order switch
{
{ Total: > 100, IsPriority: true } => "Express",
{ Total: > 100 } => "Standard",
{ IsPriority: true } => "Priority",
_ => "Economy"
};
// List patterns (C# 11)
public bool IsValidSequence(int[] numbers) => numbers switch
{
[1, 2, 3] => true,
[1, .., 3] => true,
[_, _, ..] => numbers.Length >= 2,
_ => false
};
```
### Disposable Pattern
```csharp
public class ResourceManager : IDisposable
{
private bool _disposed;
private readonly FileStream _stream;
public ResourceManager(string path)
{
_stream = File.OpenRead(path);
}
public void DoWork()
{
ObjectDisposedException.ThrowIf(_disposed, this);
// Work with _stream
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (_disposed) return;
if (disposing)
{
_stream?.Dispose();
}
_disposed = true;
}
}
// Using statement
using var manager = new ResourceManager("file.txt");
manager.DoWork();
```
## Code Organization
### File Structure
```csharp
// One type per file (generally)
// Filename matches type name: UserService.cs
// Order of members
public class UserService
{
// 1. Constants
private const int MaxRetries = 3;
// 2. Static fields
private static readonly object _lock = new();
// 3. Instance fields
private readonly IUserRepository _repository;
// 4. Constructors
public UserService(IUserRepository repository)
{
_repository = repository;
}
// 5. Properties
public int TotalUsers { get; private set; }
// 6. Public methods
public async Task<User> GetUserAsync(int id) { }
// 7. Private methods
private void ValidateUser(User user) { }
}
```
### Project Structure
```
Solution/
├── src/
│ ├── MyApp.Api/ # Web API project
│ ├── MyApp.Core/ # Domain/business logic
│ ├── MyApp.Infrastructure/ # Data access, external services
│ └── MyApp.Shared/ # Shared utilities
├── tests/
│ ├── MyApp.UnitTests/
│ └── MyApp.IntegrationTests/
└── MyApp.sln
```

# Dart/Flutter Style Guide
Dart language conventions and Flutter-specific patterns.
## Null Safety
### Enable Sound Null Safety
```yaml
# pubspec.yaml
environment:
  sdk: '>=3.0.0 <4.0.0'
```

```dart
// All types are non-nullable by default
String name = 'John'; // Cannot be null
String? nickname; // Can be null
// Late initialization
late final Database database;
```
### Null-Aware Operators
```dart
// Null-aware access
final length = user?.name?.length;
// Null-aware assignment
nickname ??= 'Anonymous';
// Null assertion (use sparingly)
final definitelyNotNull = maybeNull!;
// Null-aware cascade
user
?..name = 'John'
..email = 'john@example.com';
// Null coalescing
final displayName = user.nickname ?? user.name ?? 'Unknown';
```
### Null Handling Patterns
```dart
// Guard clause with null check
void processUser(User? user) {
if (user == null) {
throw ArgumentError('User cannot be null');
}
// user is promoted to non-nullable here
print(user.name);
}
// Pattern matching (Dart 3)
void handleResult(Result? result) {
switch (result) {
case Success(data: final data):
handleSuccess(data);
case Error(message: final message):
handleError(message);
case null:
handleNull();
}
}
```
## Async/Await
### Future Basics
```dart
// Async function
Future<User> fetchUser(int id) async {
final response = await http.get(Uri.parse('/users/$id'));
if (response.statusCode != 200) {
throw HttpException('Failed to fetch user');
}
return User.fromJson(jsonDecode(response.body));
}
// Error handling
Future<User?> safeFetchUser(int id) async {
try {
return await fetchUser(id);
} on HttpException catch (e) {
logger.error('HTTP error: ${e.message}');
return null;
} catch (e) {
logger.error('Unexpected error: $e');
return null;
}
}
```
### Parallel Execution
```dart
// Wait for all futures
Future<Dashboard> loadDashboard() async {
final results = await Future.wait([
fetchUsers(),
fetchOrders(),
fetchStats(),
]);
return Dashboard(
users: results[0] as List<User>,
orders: results[1] as List<Order>,
stats: results[2] as Stats,
);
}
// With typed results
Future<(List<User>, List<Order>)> loadData() async {
final (users, orders) = await (
fetchUsers(),
fetchOrders(),
).wait;
return (users, orders);
}
```
### Streams
```dart
// Stream creation
Stream<int> countStream(int max) async* {
for (var i = 0; i < max; i++) {
await Future.delayed(const Duration(seconds: 1));
yield i;
}
}
// Stream transformation
Stream<String> userNames(Stream<User> users) {
return users.map((user) => user.name);
}
// Stream consumption
void listenToUsers() {
userStream.listen(
(user) => print('New user: ${user.name}'),
onError: (error) => print('Error: $error'),
onDone: () => print('Stream closed'),
);
}
```
## Widgets
### Stateless Widgets
```dart
class UserCard extends StatelessWidget {
const UserCard({
super.key,
required this.user,
this.onTap,
});
final User user;
final VoidCallback? onTap;
@override
Widget build(BuildContext context) {
return Card(
child: ListTile(
leading: CircleAvatar(
backgroundImage: NetworkImage(user.avatarUrl),
),
title: Text(user.name),
subtitle: Text(user.email),
onTap: onTap,
),
);
}
}
```
### Stateful Widgets
```dart
class Counter extends StatefulWidget {
const Counter({super.key, this.initialValue = 0});
final int initialValue;
@override
State<Counter> createState() => _CounterState();
}
class _CounterState extends State<Counter> {
late int _count;
@override
void initState() {
super.initState();
_count = widget.initialValue;
}
void _increment() {
setState(() {
_count++;
});
}
@override
Widget build(BuildContext context) {
return Column(
children: [
Text('Count: $_count'),
ElevatedButton(
onPressed: _increment,
child: const Text('Increment'),
),
],
);
}
}
```
### Widget Best Practices
```dart
// Use const constructors
class MyWidget extends StatelessWidget {
const MyWidget({super.key}); // const constructor
@override
Widget build(BuildContext context) {
return const Column(
children: [
Text('Hello'), // const widget
SizedBox(height: 8), // const widget
],
);
}
}
// Extract widgets for reusability
class PrimaryButton extends StatelessWidget {
const PrimaryButton({
super.key,
required this.label,
required this.onPressed,
this.isLoading = false,
});
final String label;
final VoidCallback? onPressed;
final bool isLoading;
@override
Widget build(BuildContext context) {
return ElevatedButton(
onPressed: isLoading ? null : onPressed,
child: isLoading
? const SizedBox(
width: 20,
height: 20,
child: CircularProgressIndicator(strokeWidth: 2),
)
: Text(label),
);
}
}
```
## State Management
### Provider Pattern
```dart
// Model with ChangeNotifier
class CartModel extends ChangeNotifier {
final List<Item> _items = [];
List<Item> get items => List.unmodifiable(_items);
double get totalPrice => _items.fold(0, (sum, item) => sum + item.price);
void addItem(Item item) {
_items.add(item);
notifyListeners();
}
void removeItem(Item item) {
_items.remove(item);
notifyListeners();
}
}
// Provider setup
void main() {
runApp(
ChangeNotifierProvider(
create: (_) => CartModel(),
child: const MyApp(),
),
);
}
// Consuming provider
class CartPage extends StatelessWidget {
const CartPage({super.key});
@override
Widget build(BuildContext context) {
return Consumer<CartModel>(
builder: (context, cart, child) {
return ListView.builder(
itemCount: cart.items.length,
itemBuilder: (context, index) {
return ListTile(
title: Text(cart.items[index].name),
);
},
);
},
);
}
}
```
### Riverpod Pattern
```dart
// Provider definition
final userProvider = FutureProvider<User>((ref) async {
final repository = ref.read(userRepositoryProvider);
return repository.fetchCurrentUser();
});
final counterProvider = StateNotifierProvider<CounterNotifier, int>((ref) {
return CounterNotifier();
});
class CounterNotifier extends StateNotifier<int> {
CounterNotifier() : super(0);
void increment() => state++;
void decrement() => state--;
}
// Consumer widget
class UserProfile extends ConsumerWidget {
const UserProfile({super.key});
@override
Widget build(BuildContext context, WidgetRef ref) {
final userAsync = ref.watch(userProvider);
return userAsync.when(
data: (user) => Text('Hello, ${user.name}'),
loading: () => const CircularProgressIndicator(),
error: (error, stack) => Text('Error: $error'),
);
}
}
```
### BLoC Pattern
```dart
// Events
abstract class CounterEvent {}
class IncrementEvent extends CounterEvent {}
class DecrementEvent extends CounterEvent {}
// State
class CounterState {
final int count;
const CounterState(this.count);
}
// BLoC
class CounterBloc extends Bloc<CounterEvent, CounterState> {
CounterBloc() : super(const CounterState(0)) {
on<IncrementEvent>((event, emit) {
emit(CounterState(state.count + 1));
});
on<DecrementEvent>((event, emit) {
emit(CounterState(state.count - 1));
});
}
}
// Usage
class CounterPage extends StatelessWidget {
const CounterPage({super.key});
@override
Widget build(BuildContext context) {
return BlocBuilder<CounterBloc, CounterState>(
builder: (context, state) {
return Text('Count: ${state.count}');
},
);
}
}
```
## Testing
### Unit Tests
```dart
import 'package:test/test.dart';
void main() {
group('Calculator', () {
late Calculator calculator;
setUp(() {
calculator = Calculator();
});
test('adds two positive numbers', () {
expect(calculator.add(2, 3), equals(5));
});
test('handles negative numbers', () {
expect(calculator.add(-1, 1), equals(0));
});
});
}
```
### Widget Tests
```dart
import 'package:flutter_test/flutter_test.dart';
void main() {
testWidgets('Counter increments', (WidgetTester tester) async {
// Build widget
await tester.pumpWidget(const MaterialApp(home: Counter()));
// Verify initial state
expect(find.text('Count: 0'), findsOneWidget);
// Tap increment button
await tester.tap(find.byIcon(Icons.add));
await tester.pump();
// Verify incremented state
expect(find.text('Count: 1'), findsOneWidget);
});
testWidgets('shows loading indicator', (WidgetTester tester) async {
await tester.pumpWidget(
const MaterialApp(
home: UserProfile(isLoading: true),
),
);
expect(find.byType(CircularProgressIndicator), findsOneWidget);
});
}
```
### Mocking
```dart
import 'package:mockito/mockito.dart';
import 'package:mockito/annotations.dart';
@GenerateMocks([UserRepository])
void main() {
late MockUserRepository mockRepository;
late UserService service;
setUp(() {
mockRepository = MockUserRepository();
service = UserService(mockRepository);
});
test('fetches user by id', () async {
final user = User(id: 1, name: 'John');
when(mockRepository.findById(1)).thenAnswer((_) async => user);
final result = await service.getUser(1);
expect(result, equals(user));
verify(mockRepository.findById(1)).called(1);
});
}
```
## Common Patterns
### Factory Constructors
```dart
class User {
final int id;
final String name;
final String email;
const User({
required this.id,
required this.name,
required this.email,
});
// Factory from JSON
factory User.fromJson(Map<String, dynamic> json) {
return User(
id: json['id'] as int,
name: json['name'] as String,
email: json['email'] as String,
);
}
// Factory for default user
factory User.guest() {
return const User(
id: 0,
name: 'Guest',
email: 'guest@example.com',
);
}
Map<String, dynamic> toJson() {
return {
'id': id,
'name': name,
'email': email,
};
}
}
```
### Extension Methods
```dart
extension StringExtensions on String {
String capitalize() {
if (isEmpty) return this;
return '${this[0].toUpperCase()}${substring(1)}';
}
bool get isValidEmail {
return RegExp(r'^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$').hasMatch(this);
}
}
extension DateTimeExtensions on DateTime {
String get formatted => '${day.toString().padLeft(2, '0')}/'
'${month.toString().padLeft(2, '0')}/$year';
bool get isToday {
final now = DateTime.now();
return year == now.year && month == now.month && day == now.day;
}
}
// Usage
final name = 'john'.capitalize(); // 'John'
final isValid = 'test@example.com'.isValidEmail; // true
```
### Sealed Classes (Dart 3)
```dart
sealed class Result<T> {}
class Success<T> extends Result<T> {
final T data;
Success(this.data);
}
class Error<T> extends Result<T> {
final String message;
Error(this.message);
}
class Loading<T> extends Result<T> {}
// Usage with exhaustive pattern matching
Widget buildResult(Result<User> result) {
return switch (result) {
Success(data: final user) => Text(user.name),
Error(message: final msg) => Text('Error: $msg'),
Loading() => const CircularProgressIndicator(),
};
}
```
### Freezed for Immutable Data
```dart
import 'package:freezed_annotation/freezed_annotation.dart';
part 'user.freezed.dart';
part 'user.g.dart';
@freezed
class User with _$User {
const factory User({
required int id,
required String name,
required String email,
@Default(false) bool isActive,
}) = _User;
factory User.fromJson(Map<String, dynamic> json) => _$UserFromJson(json);
}
// Usage
final user = User(id: 1, name: 'John', email: 'john@example.com');
final updatedUser = user.copyWith(name: 'Jane');
```
## Project Structure
### Feature-Based Organization
```
lib/
├── main.dart
├── app.dart
├── core/
│ ├── constants/
│ ├── extensions/
│ ├── utils/
│ └── widgets/
├── features/
│ ├── auth/
│ │ ├── data/
│ │ ├── domain/
│ │ └── presentation/
│ ├── home/
│ │ ├── data/
│ │ ├── domain/
│ │ └── presentation/
│ └── profile/
└── shared/
├── models/
├── services/
└── widgets/
```
### Naming Conventions
```dart
// Files: snake_case
// user_repository.dart
// home_screen.dart
// Classes: PascalCase
class UserRepository {}
class HomeScreen extends StatelessWidget {}
// Variables and functions: camelCase
final userName = 'John';
void fetchUserData() {}
// Constants: lowerCamelCase (Effective Dart discourages SCREAMING_SNAKE_CASE)
const defaultPadding = 16.0;
const apiBaseUrl = 'https://api.example.com';
// Private: underscore prefix
class _HomeScreenState extends State<HomeScreen> {}
final _internalCache = <String, dynamic>{};
```

# General Code Style Guide
Universal coding principles that apply across all languages and frameworks.
## Readability
### Code is Read More Than Written
- Write code for humans first, computers second
- Favor clarity over cleverness
- If code needs a comment to explain what it does, consider rewriting it
### Formatting
- Consistent indentation (use project standard)
- Reasonable line length (80-120 characters)
- Logical grouping with whitespace
- One statement per line
### Structure
- Keep functions/methods short (ideally < 20 lines)
- One level of abstraction per function
- Early returns to reduce nesting
- Group related code together
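The early-return guideline above can be sketched in Python (the `order` dict and its fields are hypothetical):

```python
# Nested version: every check adds a level of indentation.
def process_nested(order):
    if order is not None:
        if order["items"]:
            if order["total"] > 0:
                return f"processed {len(order['items'])} items"
    return "rejected"

# Early-return version: edge cases exit first, the happy path stays flat.
def process_flat(order):
    if order is None:
        return "rejected"
    if not order["items"]:
        return "rejected"
    if order["total"] <= 0:
        return "rejected"
    return f"processed {len(order['items'])} items"
```

Both behave identically; the flat version is easier to scan and to extend with new checks.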
## Naming Conventions
### General Principles
- Names should reveal intent
- Avoid abbreviations (except universally understood ones)
- Be consistent within codebase
- Length proportional to scope
### Variables
```
# Bad
d = 86400 # What is this?
temp = getUserData() # Temp what?
# Good
secondsPerDay = 86400
userData = getUserData()
```
### Functions/Methods
- Use verbs for actions: `calculateTotal()`, `validateInput()`
- Use `is/has/can` for booleans: `isValid()`, `hasPermission()`
- Be specific: `sendEmailNotification()` not `send()`
### Constants
- Use SCREAMING_SNAKE_CASE or language convention
- Group related constants
- Document magic numbers
### Classes/Types
- Use nouns: `User`, `OrderProcessor`, `ValidationResult`
- Avoid generic names: `Manager`, `Handler`, `Data`
## Comments
### When to Comment
- WHY, not WHAT (code shows what, comments explain why)
- Complex algorithms or business logic
- Non-obvious workarounds with references
- Public API documentation
### When NOT to Comment
- Obvious code
- Commented-out code (delete it)
- Change history (use git)
- TODOs without tickets (create tickets instead)
### Comment Quality
```
# Bad
i += 1 # Increment i
# Good
# Retry limit based on SLA requirements (see JIRA-1234)
maxRetries = 3
```
## Error Handling
### Principles
- Fail fast and explicitly
- Handle errors at appropriate level
- Preserve error context
- Log for debugging, throw for callers
### Patterns
```
# Bad: Silent failure
try:
result = riskyOperation()
except:
pass
# Good: Explicit handling
try:
result = riskyOperation()
except SpecificError as e:
logger.error(f"Operation failed: {e}")
raise OperationFailed("Unable to complete operation") from e
```
### Error Messages
- Be specific about what failed
- Include relevant context
- Suggest remediation when possible
- Avoid exposing internal details to users
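A quick Python sketch of these message guidelines (`PaymentError` and the validation rule are hypothetical):

```python
class PaymentError(Exception):
    pass

def charge(amount, currency="USD"):
    if amount <= 0:
        # States what failed, includes the offending values, and suggests
        # a fix -- without leaking internal implementation details.
        raise PaymentError(
            f"Payment rejected: amount must be positive, got {amount} {currency}. "
            "Verify the cart total before retrying."
        )
    return {"amount": amount, "currency": currency}
```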
## Functions and Methods
### Single Responsibility
- One function = one task
- If you need "and" to describe it, split it
- Extract helper functions for clarity
### Parameters
- Limit parameters (ideally ≤ 3)
- Use objects/structs for many parameters
- Avoid boolean parameters (use named options)
- Order: required first, optional last
### Return Values
- Return early for edge cases
- Consistent return types
- Avoid returning null/nil when possible
- Consider Result/Option types for failures
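A minimal Result-type sketch in Python, illustrating the last two points (all names are hypothetical):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    message: str

Result = Union[Ok[T], Err]

def find_user(user_id: int, users: dict) -> "Result":
    # Early return for the failure case; callers must check which
    # variant they received instead of guarding against None.
    if user_id not in users:
        return Err(f"user {user_id} not found")
    return Ok(users[user_id])
```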
## Code Organization
### File Structure
- One primary concept per file
- Related helpers in same file or nearby
- Consistent file naming
- Logical directory structure
### Import/Dependency Order
1. Standard library
2. External dependencies
3. Internal dependencies
4. Local/relative imports
### Coupling and Cohesion
- High cohesion within modules
- Low coupling between modules
- Depend on abstractions, not implementations
- Avoid circular dependencies
## Testing Considerations
### Testable Code
- Pure functions where possible
- Dependency injection
- Avoid global state
- Small, focused functions
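Dependency injection in miniature, as a Python sketch (the weekend rule is just an example):

```python
import datetime

# Hard to test: reads global state (the real clock) directly.
def is_weekend_today():
    return datetime.date.today().weekday() >= 5

# Testable: the date is injected, so tests can pin it.
def is_weekend(day: datetime.date) -> bool:
    return day.weekday() >= 5  # Saturday = 5, Sunday = 6
```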
### Test Naming
```
# Describe behavior, not implementation
test_user_can_login_with_valid_credentials()
test_order_total_includes_tax_and_shipping()
```
## Security Basics
### Input Validation
- Validate all external input
- Sanitize before use
- Whitelist over blacklist
- Fail closed (deny by default)
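Whitelisting and failing closed, sketched in Python (the field list is hypothetical):

```python
ALLOWED_SORT_FIELDS = {"name", "created_at", "price"}  # explicit whitelist

def parse_sort_field(raw: str) -> str:
    # Deny by default: anything not explicitly allowed is rejected,
    # instead of trying to enumerate every dangerous input.
    if raw not in ALLOWED_SORT_FIELDS:
        raise ValueError(f"unsupported sort field: {raw!r}")
    return raw
```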
### Secrets
- Never hardcode secrets
- Use environment variables or secret managers
- Don't log sensitive data
- Rotate credentials regularly
### Data Handling
- Minimize data collection
- Encrypt sensitive data
- Secure data in transit and at rest
- Follow principle of least privilege
## Performance Mindset
### Premature Optimization
- Make it work, then make it fast
- Measure before optimizing
- Optimize bottlenecks, not everything
- Document performance-critical code
### Common Pitfalls
- N+1 queries
- Unnecessary allocations in loops
- Missing indexes
- Synchronous operations that could be async
## Code Review Checklist
- [ ] Does it work correctly?
- [ ] Is it readable and maintainable?
- [ ] Are edge cases handled?
- [ ] Is error handling appropriate?
- [ ] Are there security concerns?
- [ ] Is it tested adequately?
- [ ] Does it follow project conventions?
- [ ] Is there unnecessary complexity?

# Go Style Guide
Go idioms and conventions for clean, maintainable code.
## gofmt and Standard Formatting
### Always Use gofmt
```bash
# Format a single file
gofmt -w file.go
# Format entire project
gofmt -w .
# Use goimports for imports management
goimports -w .
```
### Formatting Rules (Enforced by gofmt)
- Tabs for indentation
- No trailing whitespace
- Consistent brace placement
- Standardized spacing
## Error Handling
### Explicit Error Checking
```go
// Always check errors explicitly
file, err := os.Open(filename)
if err != nil {
return fmt.Errorf("opening file %s: %w", filename, err)
}
defer file.Close()
// Don't ignore errors with _
// Bad
data, _ := json.Marshal(obj)
// Good
data, err := json.Marshal(obj)
if err != nil {
return nil, fmt.Errorf("marshaling object: %w", err)
}
```
### Error Wrapping
```go
// Use %w to wrap errors for unwrapping later
func processFile(path string) error {
data, err := os.ReadFile(path)
if err != nil {
return fmt.Errorf("reading file %s: %w", path, err)
}
if err := validate(data); err != nil {
return fmt.Errorf("validating data: %w", err)
}
return nil
}
// Check wrapped errors
if errors.Is(err, os.ErrNotExist) {
// Handle file not found
}
var validationErr *ValidationError
if errors.As(err, &validationErr) {
// Handle validation error
}
```
### Custom Error Types
```go
// Sentinel errors for expected conditions
var (
ErrNotFound = errors.New("resource not found")
ErrUnauthorized = errors.New("unauthorized access")
ErrInvalidInput = errors.New("invalid input")
)
// Custom error type with additional context
type ValidationError struct {
Field string
Message string
}
func (e *ValidationError) Error() string {
return fmt.Sprintf("validation error on %s: %s", e.Field, e.Message)
}
// Error constructor
func NewValidationError(field, message string) error {
return &ValidationError{Field: field, Message: message}
}
```
## Interfaces
### Small, Focused Interfaces
```go
// Good: Single-method interface
type Reader interface {
Read(p []byte) (n int, err error)
}
type Writer interface {
Write(p []byte) (n int, err error)
}
// Compose interfaces
type ReadWriter interface {
Reader
Writer
}
// Bad: Large interfaces
type Repository interface {
Find(id string) (*User, error)
FindAll() ([]*User, error)
Create(user *User) error
Update(user *User) error
Delete(id string) error
FindByEmail(email string) (*User, error)
// Too many methods - hard to implement and test
}
```
### Accept Interfaces, Return Structs
```go
// Good: Accept interface, return concrete type
func NewUserService(repo UserRepository) *UserService {
return &UserService{repo: repo}
}
// Interface defined by consumer
type UserRepository interface {
Find(ctx context.Context, id string) (*User, error)
Save(ctx context.Context, user *User) error
}
// Concrete implementation
type PostgresUserRepo struct {
db *sql.DB
}
func (r *PostgresUserRepo) Find(ctx context.Context, id string) (*User, error) {
// Implementation
}
```
### Interface Naming
```go
// Single-method interfaces: method name + "er"
type Reader interface { Read(p []byte) (n int, err error) }
type Writer interface { Write(p []byte) (n int, err error) }
type Closer interface { Close() error }
type Stringer interface { String() string }
// Multi-method interfaces: descriptive name
type UserStore interface {
Get(ctx context.Context, id string) (*User, error)
Put(ctx context.Context, user *User) error
}
```
## Package Structure
### Standard Layout
```
myproject/
├── cmd/
│ └── myapp/
│ └── main.go # Application entry point
├── internal/
│ ├── auth/
│ │ ├── auth.go
│ │ └── auth_test.go
│ ├── user/
│ │ ├── user.go
│ │ ├── repository.go
│ │ └── service.go
│ └── config/
│ └── config.go
├── pkg/ # Public packages (optional)
│ └── api/
│ └── client.go
├── go.mod
├── go.sum
└── README.md
```
### Package Guidelines
```go
// Package names: short, lowercase, no underscores
package user // Good
package userService // Bad
package user_service // Bad
// Package comment at top of primary file
// Package user provides user management functionality.
package user
// Group imports: stdlib, external, internal
import (
"context"
"fmt"
"github.com/google/uuid"
"github.com/lib/pq"
"myproject/internal/config"
)
```
### Internal Packages
```go
// internal/ packages cannot be imported from outside the module
// Use for implementation details you don't want to expose
// myproject/internal/cache/cache.go
package cache
// This can only be imported by code in myproject/
```
## Testing
### Test File Organization
```go
// user_test.go - same package
package user
import (
"testing"
)
func TestUserValidation(t *testing.T) {
// Test implementation details
}
// user_integration_test.go - external test package
package user_test
import (
"testing"
"myproject/internal/user"
)
func TestUserService(t *testing.T) {
// Test public API
}
```
### Table-Driven Tests
```go
func TestAdd(t *testing.T) {
tests := []struct {
name string
a, b int
expected int
}{
{"positive numbers", 2, 3, 5},
{"negative numbers", -1, -1, -2},
{"mixed numbers", -1, 5, 4},
{"zeros", 0, 0, 0},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := Add(tt.a, tt.b)
if result != tt.expected {
t.Errorf("Add(%d, %d) = %d; want %d",
tt.a, tt.b, result, tt.expected)
}
})
}
}
```
### Test Helpers
```go
// Helper functions should call t.Helper()
func newTestUser(t *testing.T) *User {
t.Helper()
return &User{
ID: uuid.New().String(),
Name: "Test User",
Email: "test@example.com",
}
}
func assertNoError(t *testing.T, err error) {
t.Helper()
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func assertEqual[T comparable](t *testing.T, got, want T) {
t.Helper()
if got != want {
t.Errorf("got %v; want %v", got, want)
}
}
```
### Mocking with Interfaces
```go
// Define interface for dependency
type UserRepository interface {
Find(ctx context.Context, id string) (*User, error)
Save(ctx context.Context, user *User) error
}
// Mock implementation for testing
type mockUserRepo struct {
users map[string]*User
}
func newMockUserRepo() *mockUserRepo {
return &mockUserRepo{users: make(map[string]*User)}
}
func (m *mockUserRepo) Find(ctx context.Context, id string) (*User, error) {
user, ok := m.users[id]
if !ok {
return nil, ErrNotFound
}
return user, nil
}
func (m *mockUserRepo) Save(ctx context.Context, user *User) error {
m.users[user.ID] = user
return nil
}
// Test using mock
func TestUserService_GetUser(t *testing.T) {
repo := newMockUserRepo()
repo.users["123"] = &User{ID: "123", Name: "Test"}
service := NewUserService(repo)
user, err := service.GetUser(context.Background(), "123")
assertNoError(t, err)
assertEqual(t, user.Name, "Test")
}
```
## Common Patterns
### Options Pattern
```go
// Option function type
type ServerOption func(*Server)
// Option functions
func WithPort(port int) ServerOption {
return func(s *Server) {
s.port = port
}
}
func WithTimeout(timeout time.Duration) ServerOption {
return func(s *Server) {
s.timeout = timeout
}
}
func WithLogger(logger *slog.Logger) ServerOption {
return func(s *Server) {
s.logger = logger
}
}
// Constructor using options
func NewServer(opts ...ServerOption) *Server {
s := &Server{
port: 8080, // defaults
timeout: 30 * time.Second,
logger: slog.Default(),
}
for _, opt := range opts {
opt(s)
}
return s
}
// Usage
server := NewServer(
WithPort(9000),
WithTimeout(time.Minute),
)
```
### Context Usage
```go
// Always pass context as first parameter
func (s *Service) ProcessRequest(ctx context.Context, req *Request) (*Response, error) {
// Check for cancellation
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
// Use context for timeouts
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result, err := s.repo.Find(ctx, req.ID)
if err != nil {
return nil, fmt.Errorf("finding item: %w", err)
}
return &Response{Data: result}, nil
}
```
### Defer for Cleanup
```go
func processFile(path string) error {
file, err := os.Open(path)
if err != nil {
return err
}
defer file.Close() // Always executed on return
// Process file...
return nil
}
// Multiple defers execute in LIFO order
func transaction(db *sql.DB) error {
tx, err := db.Begin()
if err != nil {
return err
}
defer tx.Rollback() // Safe: no-op if committed
// Do work...
return tx.Commit()
}
```
### Concurrency Patterns
```go
// Worker pool
func processItems(items []Item, workers int) []Result {
jobs := make(chan Item, len(items))
results := make(chan Result, len(items))
// Start workers
var wg sync.WaitGroup
for i := 0; i < workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for item := range jobs {
results <- process(item)
}
}()
}
// Send jobs
for _, item := range items {
jobs <- item
}
close(jobs)
// Wait and collect
go func() {
wg.Wait()
close(results)
}()
var out []Result
for r := range results {
out = append(out, r)
}
return out
}
```
## Code Quality
### Linting with golangci-lint
```yaml
# .golangci.yml
linters:
enable:
- errcheck
- govet
- ineffassign
- staticcheck
- unused
- gosimple
- gocritic
- gofmt
- goimports
linters-settings:
govet:
check-shadowing: true
errcheck:
check-type-assertions: true
issues:
exclude-rules:
- path: _test\.go
linters:
- errcheck
```
### Common Commands
```bash
# Format code
go fmt ./...
# Run linter
golangci-lint run
# Run tests
go test ./...
# Run tests with coverage
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
# Check for race conditions
go test -race ./...
# Build
go build ./...
```

# HTML & CSS Style Guide
Web standards for semantic markup, maintainable styling, and accessibility.
## Semantic HTML
### Document Structure
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="description" content="Page description for SEO" />
<title>Page Title | Site Name</title>
<link rel="stylesheet" href="styles.css" />
</head>
<body>
<header>
<nav aria-label="Main navigation">
<!-- Navigation -->
</nav>
</header>
<main>
<article>
<!-- Primary content -->
</article>
<aside>
<!-- Supplementary content -->
</aside>
</main>
<footer>
<!-- Footer content -->
</footer>
</body>
</html>
```
### Semantic Elements
```html
<!-- Use appropriate semantic elements -->
<!-- Navigation -->
<nav aria-label="Main navigation">
<ul>
<li><a href="/">Home</a></li>
<li><a href="/about">About</a></li>
</ul>
</nav>
<!-- Article with header and footer -->
<article>
<header>
<h1>Article Title</h1>
<time datetime="2024-01-15">January 15, 2024</time>
</header>
<p>Article content...</p>
<footer>
<address>Written by Author Name</address>
</footer>
</article>
<!-- Sections with headings -->
<section aria-labelledby="features-heading">
<h2 id="features-heading">Features</h2>
<p>Section content...</p>
</section>
<!-- Figures with captions -->
<figure>
<img src="chart.png" alt="Sales data showing 20% growth" />
<figcaption>Q4 2024 Sales Performance</figcaption>
</figure>
<!-- Definition lists -->
<dl>
<dt>HTML</dt>
<dd>HyperText Markup Language</dd>
<dt>CSS</dt>
<dd>Cascading Style Sheets</dd>
</dl>
```
### Form Elements
```html
<form action="/submit" method="POST">
<!-- Text input with label -->
<div class="form-group">
<label for="email">Email Address</label>
<input
type="email"
id="email"
name="email"
required
autocomplete="email"
aria-describedby="email-hint"
/>
<span id="email-hint" class="hint">We'll never share your email.</span>
</div>
<!-- Select with label -->
<div class="form-group">
<label for="country">Country</label>
<select id="country" name="country" required>
<option value="">Select a country</option>
<option value="us">United States</option>
<option value="uk">United Kingdom</option>
</select>
</div>
<!-- Radio group with fieldset -->
<fieldset>
<legend>Preferred Contact Method</legend>
<div>
<input type="radio" id="contact-email" name="contact" value="email" />
<label for="contact-email">Email</label>
</div>
<div>
<input type="radio" id="contact-phone" name="contact" value="phone" />
<label for="contact-phone">Phone</label>
</div>
</fieldset>
<!-- Submit button -->
<button type="submit">Submit</button>
</form>
```
## BEM Naming Convention
### Block, Element, Modifier
```css
/* Block: Standalone component */
.card {
}
/* Element: Part of block (double underscore) */
.card__header {
}
.card__body {
}
.card__footer {
}
/* Modifier: Variation (double hyphen) */
.card--featured {
}
.card--compact {
}
.card__header--centered {
}
```
### BEM Examples
```html
<!-- Card component -->
<article class="card card--featured">
<header class="card__header">
<h2 class="card__title">Card Title</h2>
</header>
<div class="card__body">
<p class="card__text">Card content goes here.</p>
</div>
<footer class="card__footer">
<button class="card__button card__button--primary">Action</button>
</footer>
</article>
<!-- Navigation component -->
<nav class="nav nav--horizontal">
<ul class="nav__list">
<li class="nav__item nav__item--active">
<a class="nav__link" href="/">Home</a>
</li>
<li class="nav__item">
<a class="nav__link" href="/about">About</a>
</li>
</ul>
</nav>
```
### BEM Best Practices
```css
/* Avoid deep nesting */
/* Bad */
.card__header__title__icon {
}
/* Good - flatten structure */
.card__title-icon {
}
/* Avoid styling elements without class */
/* Bad */
.card h2 {
}
/* Good */
.card__title {
}
/* Modifiers extend base styles */
.button {
padding: 8px 16px;
border-radius: 4px;
}
.button--large {
padding: 12px 24px;
}
.button--primary {
background: blue;
color: white;
}
```
## Accessibility
### ARIA Attributes
```html
<!-- Live regions for dynamic content -->
<div aria-live="polite" aria-atomic="true">Status updates appear here</div>
<!-- Landmarks -->
<nav aria-label="Main navigation"></nav>
<nav aria-label="Footer navigation"></nav>
<!-- Current page in navigation -->
<a href="/about" aria-current="page">About</a>
<!-- Expanded/collapsed state -->
<button aria-expanded="false" aria-controls="menu">Toggle Menu</button>
<div id="menu" hidden>Menu content</div>
<!-- Disabled vs aria-disabled -->
<button disabled>Can't click (removed from tab order)</button>
<button aria-disabled="true">Can't click (stays in tab order)</button>
<!-- Loading states -->
<button aria-busy="true">
<span aria-hidden="true">Loading...</span>
<span class="visually-hidden">Please wait</span>
</button>
```
### Keyboard Navigation
```html
<!-- Skip link -->
<a href="#main-content" class="skip-link">Skip to main content</a>
<!-- Focusable elements should be obvious -->
<style>
:focus-visible {
outline: 2px solid blue;
outline-offset: 2px;
}
</style>
<!-- Tabindex usage -->
<!-- tabindex="0": Add to tab order -->
<div tabindex="0" role="button">Custom button</div>
<!-- tabindex="-1": Programmatically focusable only -->
<div id="modal" tabindex="-1">Modal content</div>
<!-- Never use tabindex > 0 -->
```
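The `aria-expanded` button shown earlier only declares state; script has to keep the attribute and the panel's visibility in sync. A minimal sketch (the selectors in the wiring comment assume the markup from the ARIA section above):

```javascript
// Toggle a disclosure widget: flip aria-expanded on the trigger and
// show/hide the controlled panel to match.
function toggleDisclosure(button, panel) {
  const expanded = button.getAttribute("aria-expanded") === "true";
  button.setAttribute("aria-expanded", String(!expanded));
  panel.hidden = expanded; // hide when it was open, show otherwise
}

// Wiring (assumes <button aria-controls="menu"> and <div id="menu" hidden>):
// const button = document.querySelector('[aria-controls="menu"]');
// const panel = document.getElementById("menu");
// button.addEventListener("click", () => toggleDisclosure(button, panel));
```

Writing the function against the attribute API keeps it reusable for any disclosure widget, not one specific menu.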
### Screen Reader Support
```css
/* Visually hidden but accessible */
.visually-hidden {
position: absolute;
width: 1px;
height: 1px;
padding: 0;
margin: -1px;
overflow: hidden;
clip: rect(0, 0, 0, 0);
white-space: nowrap;
border: 0;
}
/* Hide from screen readers */
[aria-hidden="true"] {
/* Decorative content */
}
```
```html
<!-- Icon buttons need accessible names -->
<button aria-label="Close dialog">
<svg aria-hidden="true"><!-- icon --></svg>
</button>
<!-- Decorative images -->
<img src="decoration.png" alt="" role="presentation" />
<!-- Informative images -->
<img src="chart.png" alt="Sales increased 20% in Q4 2024" />
<!-- Complex images -->
<figure>
<img
src="flowchart.png"
alt="User registration process"
aria-describedby="flowchart-desc"
/>
<figcaption id="flowchart-desc">
Step 1: Enter email. Step 2: Verify email. Step 3: Create password.
</figcaption>
</figure>
```
## Responsive Design
### Mobile-First Approach
```css
/* Base styles for mobile */
.container {
padding: 16px;
}
.grid {
display: grid;
gap: 16px;
grid-template-columns: 1fr;
}
/* Tablet and up */
@media (min-width: 768px) {
.container {
padding: 24px;
}
.grid {
grid-template-columns: repeat(2, 1fr);
}
}
/* Desktop and up */
@media (min-width: 1024px) {
.container {
padding: 32px;
max-width: 1200px;
margin: 0 auto;
}
.grid {
grid-template-columns: repeat(3, 1fr);
}
}
```
### Flexible Units
```css
/* Use relative units */
body {
font-size: 16px; /* Base size */
}
h1 {
font-size: 2rem; /* Relative to root */
margin-bottom: 1em; /* Relative to element */
}
.container {
max-width: 75ch; /* Character width for readability */
padding: 1rem;
}
/* Fluid typography */
h1 {
font-size: clamp(1.5rem, 4vw, 3rem);
}
/* Fluid spacing */
.section {
padding: clamp(2rem, 5vw, 4rem);
}
```
### Responsive Images
```html
<!-- Responsive image with srcset -->
<img
src="image-800.jpg"
srcset="image-400.jpg 400w, image-800.jpg 800w, image-1200.jpg 1200w"
sizes="(max-width: 600px) 100vw, 50vw"
alt="Description"
loading="lazy"
/>
<!-- Art direction with picture -->
<picture>
<source media="(min-width: 1024px)" srcset="hero-desktop.jpg" />
<source media="(min-width: 768px)" srcset="hero-tablet.jpg" />
<img src="hero-mobile.jpg" alt="Hero image" />
</picture>
```
## CSS Best Practices
### Custom Properties (CSS Variables)
```css
:root {
/* Colors */
--color-primary: #0066cc;
--color-primary-dark: #004c99;
--color-secondary: #6c757d;
--color-success: #28a745;
--color-error: #dc3545;
/* Typography */
--font-family-base: system-ui, sans-serif;
--font-family-mono: ui-monospace, monospace;
--font-size-sm: 0.875rem;
--font-size-base: 1rem;
--font-size-lg: 1.25rem;
/* Spacing */
--spacing-xs: 0.25rem;
--spacing-sm: 0.5rem;
--spacing-md: 1rem;
--spacing-lg: 1.5rem;
--spacing-xl: 2rem;
/* Borders */
--border-radius: 4px;
--border-color: #dee2e6;
/* Shadows */
--shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.1);
--shadow-md: 0 4px 6px rgba(0, 0, 0, 0.1);
}
/* Dark mode */
@media (prefers-color-scheme: dark) {
:root {
--color-primary: #4da6ff;
--color-background: #1a1a1a;
--color-text: #ffffff;
}
}
/* Usage */
.button {
background: var(--color-primary);
padding: var(--spacing-sm) var(--spacing-md);
border-radius: var(--border-radius);
}
```
### Modern Layout
```css
/* Flexbox for 1D layouts */
.navbar {
display: flex;
justify-content: space-between;
align-items: center;
gap: var(--spacing-md);
}
/* Grid for 2D layouts */
.page-layout {
display: grid;
grid-template-areas:
"header header"
"sidebar main"
"footer footer";
grid-template-columns: 250px 1fr;
grid-template-rows: auto 1fr auto;
min-height: 100vh;
}
.header {
grid-area: header;
}
.sidebar {
grid-area: sidebar;
}
.main {
grid-area: main;
}
.footer {
grid-area: footer;
}
/* Auto-fit grid */
.card-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
gap: var(--spacing-lg);
}
```
### Performance
```css
/* Avoid expensive properties in animations */
/* Bad - triggers layout */
.animate-bad {
animation: move 1s;
}
@keyframes move {
to {
left: 100px;
top: 100px;
}
}
/* Good - uses transform */
.animate-good {
animation: move-optimized 1s;
}
@keyframes move-optimized {
to {
transform: translate(100px, 100px);
}
}
/* Use will-change sparingly */
.will-animate {
will-change: transform;
}
/* Contain for layout isolation */
.card {
contain: layout style;
}
/* Content-visibility for off-screen content */
.below-fold {
content-visibility: auto;
contain-intrinsic-size: 500px;
}
```
## HTML Best Practices
### Validation and Attributes
```html
<!-- Use proper input types -->
<input type="email" autocomplete="email" />
<input type="tel" autocomplete="tel" />
<input type="url" />
<input type="number" min="0" max="100" step="1" />
<input type="date" min="2024-01-01" />
<!-- Required and validation -->
<input type="text" required minlength="2" maxlength="50" pattern="[A-Za-z]+" />
<!-- Autocomplete for better UX -->
<input type="text" name="name" autocomplete="name" />
<input type="text" name="address" autocomplete="street-address" />
<input type="text" name="cc-number" autocomplete="cc-number" />
```
### Performance Attributes
```html
<!-- Lazy loading -->
<img src="image.jpg" loading="lazy" alt="Description" />
<iframe src="video.html" loading="lazy"></iframe>
<!-- Preload critical resources -->
<link rel="preload" href="critical.css" as="style" />
<link rel="preload" href="hero.jpg" as="image" />
<link rel="preload" href="font.woff2" as="font" crossorigin />
<!-- Preconnect to origins -->
<link rel="preconnect" href="https://api.example.com" />
<link rel="dns-prefetch" href="https://analytics.example.com" />
<!-- Async/defer scripts -->
<script src="analytics.js" async></script>
<script src="app.js" defer></script>
```
### Microdata and SEO
```html
<!-- Schema.org markup -->
<article itemscope itemtype="https://schema.org/Article">
<h1 itemprop="headline">Article Title</h1>
<time itemprop="datePublished" datetime="2024-01-15"> January 15, 2024 </time>
<div itemprop="author" itemscope itemtype="https://schema.org/Person">
<span itemprop="name">Author Name</span>
</div>
<div itemprop="articleBody">Article content...</div>
</article>
<!-- Open Graph for social sharing -->
<meta property="og:title" content="Page Title" />
<meta property="og:description" content="Page description" />
<meta property="og:image" content="https://example.com/image.jpg" />
<meta property="og:url" content="https://example.com/page" />
```

# JavaScript Style Guide
Modern JavaScript (ES6+) best practices and conventions.
## ES6+ Features
### Use Modern Syntax
```javascript
// Prefer const and let over var
const immutableValue = "fixed";
let mutableValue = "can change";
// Never use var
// var outdated = 'avoid this';
// Template literals over concatenation
const greeting = `Hello, ${name}!`;
// Destructuring
const { id, name, email } = user;
const [first, second, ...rest] = items;
// Spread operator
const merged = { ...defaults, ...options };
const combined = [...array1, ...array2];
// Arrow functions for short callbacks
const doubled = numbers.map((n) => n * 2);
```
### Object Shorthand
```javascript
// Property shorthand
const name = "John";
const age = 30;
const user = { name, age };
// Method shorthand
const calculator = {
add(a, b) {
return a + b;
},
subtract(a, b) {
return a - b;
},
};
// Computed property names
const key = "dynamic";
const obj = {
[key]: "value",
[`${key}Method`]() {
return "result";
},
};
```
### Default Parameters and Rest
```javascript
// Default parameters
function greet(name = "Guest", greeting = "Hello") {
return `${greeting}, ${name}!`;
}
// Rest parameters
function sum(...numbers) {
return numbers.reduce((total, n) => total + n, 0);
}
// Named parameters via destructuring
function createUser({ name, email, role = "user" }) {
return { name, email, role, createdAt: new Date() };
}
```
## Async/Await
### Prefer async/await Over Promises
```javascript
// Bad: Promise chains
function fetchUserPosts(userId) {
return fetch(`/users/${userId}`)
.then((res) => res.json())
.then((user) => fetch(`/posts?userId=${user.id}`))
.then((res) => res.json());
}
// Good: async/await
async function fetchUserPosts(userId) {
const userRes = await fetch(`/users/${userId}`);
const user = await userRes.json();
const postsRes = await fetch(`/posts?userId=${user.id}`);
return postsRes.json();
}
```
### Parallel Execution
```javascript
// Sequential (slow)
async function loadDataSequentially() {
const users = await fetchUsers();
const posts = await fetchPosts();
const comments = await fetchComments();
return { users, posts, comments };
}
// Parallel (fast)
async function loadDataParallel() {
const [users, posts, comments] = await Promise.all([
fetchUsers(),
fetchPosts(),
fetchComments(),
]);
return { users, posts, comments };
}
```
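`Promise.all` rejects as soon as any input rejects. When partial failure is acceptable, `Promise.allSettled` (ES2020) collects every outcome instead; a sketch of splitting the results:

```javascript
// Collect successes and failures separately instead of failing fast.
async function loadAllSettled(promises) {
  const settled = await Promise.allSettled(promises);
  return {
    values: settled
      .filter((s) => s.status === "fulfilled")
      .map((s) => s.value),
    errors: settled
      .filter((s) => s.status === "rejected")
      .map((s) => s.reason),
  };
}
```

This suits dashboards and batch jobs where one failed request should not discard the results that did arrive.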
### Error Handling
```javascript
// try/catch with async/await
async function fetchData(url) {
try {
const response = await fetch(url);
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
return await response.json();
} catch (error) {
console.error("Fetch failed:", error.message);
throw error;
}
}
// Error handling utility
async function safeAsync(promise) {
try {
const result = await promise;
return [result, null];
} catch (error) {
return [null, error];
}
}
// Usage
const [data, error] = await safeAsync(fetchData("/api/users"));
if (error) {
handleError(error);
}
```
## Error Handling
### Custom Errors
```javascript
class AppError extends Error {
constructor(message, code, statusCode = 500) {
super(message);
this.name = "AppError";
this.code = code;
this.statusCode = statusCode;
    if (Error.captureStackTrace) {
      Error.captureStackTrace(this, this.constructor); // V8-only (Node, Chrome)
    }
}
}
class ValidationError extends AppError {
constructor(message, field) {
super(message, "VALIDATION_ERROR", 400);
this.name = "ValidationError";
this.field = field;
}
}
class NotFoundError extends AppError {
constructor(resource, id) {
super(`${resource} with id ${id} not found`, "NOT_FOUND", 404);
this.name = "NotFoundError";
this.resource = resource;
this.resourceId = id;
}
}
```
### Error Handling Patterns
```javascript
// Centralized error handler
function handleError(error) {
if (error instanceof ValidationError) {
showFieldError(error.field, error.message);
} else if (error instanceof NotFoundError) {
showNotFound(error.resource);
} else {
showGenericError("Something went wrong");
reportError(error);
}
}
// Error boundary pattern (for React)
function withErrorBoundary(Component) {
return class extends React.Component {
state = { hasError: false };
static getDerivedStateFromError() {
return { hasError: true };
}
componentDidCatch(error, info) {
reportError(error, info);
}
render() {
if (this.state.hasError) {
return <ErrorFallback />;
}
return <Component {...this.props} />;
}
};
}
```
## Module Patterns
### ES Modules
```javascript
// Named exports
export const API_URL = "/api";
export function fetchData(endpoint) {
/* ... */
}
export class ApiClient {
/* ... */
}
// Re-exports
export { User, Post } from "./types.js";
export * as utils from "./utils.js";
// Imports
import { fetchData, API_URL } from "./api.js";
import * as api from "./api.js";
import defaultExport from "./module.js";
```
### Module Organization
```javascript
// Feature-based organization
// features/user/
// index.js - Public exports
// api.js - API calls
// utils.js - Helper functions
// constants.js - Feature constants
// index.js - Barrel export
export { UserService } from "./service.js";
export { validateUser } from "./utils.js";
export { USER_ROLES } from "./constants.js";
```
### Dependency Injection
```javascript
// Constructor injection
class UserService {
constructor(apiClient, logger) {
this.api = apiClient;
this.logger = logger;
}
async getUser(id) {
this.logger.info(`Fetching user ${id}`);
return this.api.get(`/users/${id}`);
}
}
// Factory function
function createUserService(config = {}) {
const api = config.apiClient || new ApiClient();
const logger = config.logger || console;
return new UserService(api, logger);
}
```
## Functional Patterns
### Pure Functions
```javascript
// Impure: Modifies external state
let count = 0;
function incrementCount() {
count++;
return count;
}
// Pure: No side effects
function increment(value) {
return value + 1;
}
// Pure: Same input = same output
function calculateTotal(items) {
return items.reduce((sum, item) => sum + item.price, 0);
}
```
### Array Methods
```javascript
const users = [
{ id: 1, name: "Alice", active: true },
{ id: 2, name: "Bob", active: false },
{ id: 3, name: "Charlie", active: true },
];
// map - transform
const names = users.map((user) => user.name);
// filter - select
const activeUsers = users.filter((user) => user.active);
// find - first match
const user = users.find((user) => user.id === 2);
// some/every - boolean check
const hasActive = users.some((user) => user.active);
const allActive = users.every((user) => user.active);
// reduce - accumulate
const userMap = users.reduce((map, user) => {
map[user.id] = user;
return map;
}, {});
// Chaining
const activeNames = users
.filter((user) => user.active)
.map((user) => user.name)
.sort();
```
### Composition
```javascript
// Compose functions
const compose =
(...fns) =>
(x) =>
fns.reduceRight((acc, fn) => fn(acc), x);
const pipe =
(...fns) =>
(x) =>
fns.reduce((acc, fn) => fn(acc), x);
// Usage
const processUser = pipe(validateUser, normalizeUser, enrichUser);
const result = processUser(rawUserData);
```
## Classes
### Modern Class Syntax
```javascript
class User {
// Private fields
#password;
// Static properties
static ROLES = ["admin", "user", "guest"];
constructor(name, email) {
this.name = name;
this.email = email;
this.#password = null;
}
// Getter
get displayName() {
return `${this.name} <${this.email}>`;
}
// Setter
set password(value) {
if (value.length < 8) {
throw new Error("Password too short");
}
this.#password = hashPassword(value);
}
// Instance method
toJSON() {
return { name: this.name, email: this.email };
}
// Static method
static fromJSON(json) {
return new User(json.name, json.email);
}
}
```
### Inheritance
```javascript
class Entity {
constructor(id) {
this.id = id;
this.createdAt = new Date();
}
equals(other) {
return other instanceof Entity && this.id === other.id;
}
}
class User extends Entity {
constructor(id, name, email) {
super(id);
this.name = name;
this.email = email;
}
toJSON() {
return {
id: this.id,
name: this.name,
email: this.email,
createdAt: this.createdAt.toISOString(),
};
}
}
```
## Common Patterns
### Null Safety
```javascript
// Optional chaining
const city = user?.address?.city;
const firstItem = items?.[0];
const result = obj?.method?.();
// Nullish coalescing
const name = user.name ?? "Anonymous";
const count = value ?? 0;
// Combining both
const displayName = user?.profile?.name ?? "Unknown";
```
### Debounce and Throttle
```javascript
function debounce(fn, delay) {
let timeoutId;
return function (...args) {
clearTimeout(timeoutId);
timeoutId = setTimeout(() => fn.apply(this, args), delay);
};
}
function throttle(fn, limit) {
let inThrottle;
return function (...args) {
if (!inThrottle) {
fn.apply(this, args);
inThrottle = true;
setTimeout(() => (inThrottle = false), limit);
}
};
}
```
### Memoization
```javascript
function memoize(fn) {
const cache = new Map();
return function (...args) {
const key = JSON.stringify(args);
if (cache.has(key)) {
return cache.get(key);
}
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
// Usage
const expensiveCalculation = memoize((n) => {
// Complex computation
return fibonacci(n);
});
```
## Best Practices
### Avoid Common Pitfalls
```javascript
// Prefer strict equality
// Bad
if (value == "5") {
}
// Good
if (value === "5") {
}
// Exception: == null is idiomatic shorthand for null or undefined
if (value == null) {
}
// Avoid implicit type coercion
// Bad
if (items.length) {
}
// Good
if (items.length > 0) {
}
// Avoid modifying function arguments
// Bad
function process(options) {
options.processed = true;
return options;
}
// Good
function process(options) {
return { ...options, processed: true };
}
```
### Performance Tips
```javascript
// Avoid creating functions in loops
// Bad
items.forEach(function (item) {
item.addEventListener("click", function () {});
});
// Good
function handleClick(event) {}
items.forEach((item) => {
item.addEventListener("click", handleClick);
});
// Use appropriate data structures
// For frequent lookups, use Map/Set instead of Array
const userMap = new Map(users.map((u) => [u.id, u]));
const userIds = new Set(users.map((u) => u.id));
```

# Python Style Guide
Python conventions following PEP 8 and modern best practices.
## PEP 8 Fundamentals
### Naming Conventions
```python
# Variables and functions: snake_case
user_name = "John"
def calculate_total(items):
pass
# Constants: SCREAMING_SNAKE_CASE
MAX_CONNECTIONS = 100
DEFAULT_TIMEOUT = 30
# Classes: PascalCase
class UserAccount:
pass
# Private: single underscore prefix
class User:
def __init__(self):
self._internal_state = {}
# Name mangling: double underscore prefix
class Base:
def __init__(self):
self.__private = "truly private"
# Module-level "private": single underscore
_module_cache = {}
```
### Indentation and Line Length
```python
# 4 spaces per indentation level
def function():
if condition:
do_something()
# Line length: 88 characters (Black) or 79 (PEP 8)
# Break long lines appropriately
result = some_function(
argument_one,
argument_two,
argument_three,
)
# Implicit line continuation in brackets
users = [
"alice",
"bob",
"charlie",
]
```
### Imports
```python
# Standard library
import os
import sys
from pathlib import Path
from typing import Optional, List
# Third-party
import requests
from pydantic import BaseModel
# Local application
from myapp.models import User
from myapp.utils import format_date
# Avoid wildcard imports
# Bad: from module import *
# Good: from module import specific_item
```
## Type Hints
### Basic Type Annotations
```python
from typing import Optional, List, Dict, Tuple, Union, Any
# Variables
name: str = "John"
age: int = 30
active: bool = True
scores: List[int] = [90, 85, 92]
# Functions
def greet(name: str) -> str:
return f"Hello, {name}!"
def find_user(user_id: int) -> Optional[User]:
"""Returns User or None if not found."""
pass
def process_items(items: List[str]) -> Dict[str, int]:
"""Returns count of each item."""
pass
```
### Advanced Type Hints
```python
from typing import (
TypeVar, Generic, Protocol, Callable,
Literal, TypedDict, Final
)
# TypeVar for generics
T = TypeVar('T')
def first(items: List[T]) -> Optional[T]:
return items[0] if items else None
# Protocol for structural typing
class Renderable(Protocol):
def render(self) -> str: ...
def display(obj: Renderable) -> None:
print(obj.render())
# Literal for specific values
Status = Literal["pending", "active", "completed"]
def set_status(status: Status) -> None:
pass
# TypedDict for dictionary shapes
class UserDict(TypedDict):
id: int
name: str
email: Optional[str]
# Final for constants
MAX_SIZE: Final = 100
```
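To make structural typing concrete, a minimal standalone sketch of the `Renderable` protocol above: any object with a matching `render()` satisfies it, no inheritance required (the `Badge` class is invented for illustration).

```python
from typing import Protocol


class Renderable(Protocol):
    def render(self) -> str: ...


class Badge:  # note: does not subclass Renderable
    def render(self) -> str:
        return "<badge>"


def display(obj: Renderable) -> str:
    return obj.render()


print(display(Badge()))  # <badge>
```

Static checkers such as mypy accept `Badge` here purely because its shape matches the protocol.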
### Type Hints in Classes
```python
from dataclasses import dataclass
from typing import ClassVar, Dict, Self
@dataclass
class User:
id: int
name: str
email: str
active: bool = True
# Class variable
_instances: ClassVar[Dict[int, 'User']] = {}
def deactivate(self) -> Self:
self.active = False
return self
class Builder:
def __init__(self) -> None:
self._value: str = ""
def append(self, text: str) -> Self:
self._value += text
return self
```
## Docstrings
### Function Docstrings
```python
def calculate_discount(
price: float,
discount_percent: float,
min_price: float = 0.0
) -> float:
"""Calculate the discounted price.
Args:
price: Original price of the item.
discount_percent: Discount percentage (0-100).
min_price: Minimum price floor. Defaults to 0.0.
Returns:
The discounted price, not less than min_price.
Raises:
ValueError: If discount_percent is not between 0 and 100.
Example:
>>> calculate_discount(100.0, 20.0)
80.0
"""
if not 0 <= discount_percent <= 100:
raise ValueError("Discount must be between 0 and 100")
discounted = price * (1 - discount_percent / 100)
return max(discounted, min_price)
```
### Class Docstrings
```python
class UserService:
"""Service for managing user operations.
This service handles user CRUD operations and authentication.
It requires a database connection and optional cache.
Attributes:
db: Database connection instance.
cache: Optional cache for user lookups.
Example:
>>> service = UserService(db_connection)
>>> user = service.get_user(123)
"""
def __init__(
self,
db: DatabaseConnection,
cache: Optional[Cache] = None
) -> None:
"""Initialize the UserService.
Args:
db: Active database connection.
cache: Optional cache instance for performance.
"""
self.db = db
self.cache = cache
```
## Virtual Environments
### Setup Commands
```bash
# Create virtual environment
python -m venv .venv
# Activate (Unix/macOS)
source .venv/bin/activate
# Activate (Windows)
.venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Freeze dependencies
pip freeze > requirements.txt
```
### Modern Tools
```bash
# Using uv (recommended)
uv venv
uv pip install -r requirements.txt
# Using poetry
poetry init
poetry add requests
poetry install
# Using pipenv
pipenv install
pipenv install requests
```
### Project Structure
```
project/
├── .venv/ # Virtual environment (gitignored)
├── src/
│ └── myapp/
│ ├── __init__.py
│ ├── main.py
│ └── utils.py
├── tests/
│ ├── __init__.py
│ └── test_main.py
├── pyproject.toml # Modern project config
├── requirements.txt # Pinned dependencies
└── README.md
```
## Testing
### pytest Basics
```python
import pytest
from myapp.calculator import add, divide
def test_add_positive_numbers():
assert add(2, 3) == 5
def test_add_negative_numbers():
assert add(-1, -1) == -2
def test_divide_by_zero_raises():
with pytest.raises(ZeroDivisionError):
divide(10, 0)
# Parametrized tests
@pytest.mark.parametrize("a,b,expected", [
(1, 1, 2),
(0, 0, 0),
(-1, 1, 0),
])
def test_add_parametrized(a, b, expected):
assert add(a, b) == expected
```
### Fixtures
```python
import pytest
from myapp.database import Database
from myapp.models import User
@pytest.fixture
def db():
"""Provide a clean database for each test."""
database = Database(":memory:")
database.create_tables()
yield database
database.close()
@pytest.fixture
def sample_user(db):
"""Create a sample user in the database."""
user = User(name="Test User", email="test@example.com")
db.save(user)
return user
def test_user_creation(db, sample_user):
found = db.find_user(sample_user.id)
assert found.name == "Test User"
```
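Beyond custom fixtures, pytest ships built-ins; `tmp_path`, a `pathlib.Path` pointing at a fresh directory per test, is a common one for filesystem tests. A minimal sketch:

```python
# tmp_path is injected by pytest: a unique directory per test, cleaned up
# automatically, so tests never touch the real filesystem layout.
def test_write_and_read(tmp_path):
    target = tmp_path / "notes.txt"
    target.write_text("hello")
    assert target.read_text() == "hello"
```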
### Mocking
```python
from unittest.mock import Mock, patch, MagicMock
import pytest
def test_api_client_with_mock():
# Create mock
mock_response = Mock()
mock_response.json.return_value = {"id": 1, "name": "Test"}
mock_response.status_code = 200
with patch('requests.get', return_value=mock_response) as mock_get:
result = fetch_user(1)
mock_get.assert_called_once_with('/users/1')
assert result['name'] == "Test"
@patch('myapp.service.external_api')
def test_with_patch_decorator(mock_api):
mock_api.get_data.return_value = {"status": "ok"}
result = process_data()
assert result["status"] == "ok"
```
## Error Handling
### Exception Patterns
```python
# Define custom exceptions
from typing import Any
class AppError(Exception):
"""Base exception for application errors."""
pass
class ValidationError(AppError):
"""Raised when validation fails."""
def __init__(self, field: str, message: str):
self.field = field
self.message = message
super().__init__(f"{field}: {message}")
class NotFoundError(AppError):
"""Raised when a resource is not found."""
def __init__(self, resource: str, identifier: Any):
self.resource = resource
self.identifier = identifier
super().__init__(f"{resource} '{identifier}' not found")
```
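The payoff of the hierarchy above is that callers can catch by the shared base class; a standalone sketch (the classes are restated minimally so it runs on its own, and `validate_email` is an invented example):

```python
class AppError(Exception):
    """Base exception for application errors."""


class ValidationError(AppError):
    def __init__(self, field: str, message: str):
        self.field = field
        super().__init__(f"{field}: {message}")


def validate_email(email: str) -> None:
    if "@" not in email:
        raise ValidationError("email", "must contain @")


try:
    validate_email("invalid")
except AppError as exc:  # catches ValidationError via the base class
    print(exc)  # email: must contain @
```

A handler written once against `AppError` automatically covers every subclass added later.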
### Exception Handling
```python
def get_user(user_id: int) -> User:
try:
user = db.find_user(user_id)
if user is None:
raise NotFoundError("User", user_id)
return user
except DatabaseError as e:
logger.error(f"Database error: {e}")
raise AppError("Unable to fetch user") from e
# Context managers for cleanup
from contextlib import contextmanager
@contextmanager
def database_transaction(db):
try:
yield db
db.commit()
except Exception:
db.rollback()
raise
```
## Common Patterns
### Dataclasses
```python
from dataclasses import dataclass, field
from typing import List, Optional
from datetime import datetime
@dataclass
class User:
id: int
name: str
email: str
active: bool = True
created_at: datetime = field(default_factory=datetime.now)
tags: List[str] = field(default_factory=list)
def __post_init__(self):
self.email = self.email.lower()
@dataclass(frozen=True)
class Point:
"""Immutable point."""
x: float
y: float
def distance_to(self, other: 'Point') -> float:
return ((self.x - other.x)**2 + (self.y - other.y)**2) ** 0.5
```
### Context Managers
```python
from contextlib import contextmanager
from typing import Generator
@contextmanager
def timer(name: str) -> Generator[None, None, None]:
"""Time a block of code."""
import time
start = time.perf_counter()
try:
yield
finally:
elapsed = time.perf_counter() - start
print(f"{name}: {elapsed:.3f}s")
# Usage
with timer("data processing"):
process_large_dataset()
# Class-based context manager
class DatabaseConnection:
def __init__(self, connection_string: str):
self.connection_string = connection_string
self.connection = None
def __enter__(self):
self.connection = connect(self.connection_string)
return self.connection
def __exit__(self, exc_type, exc_val, exc_tb):
if self.connection:
self.connection.close()
return False # Don't suppress exceptions
```
### Decorators
```python
from functools import wraps
from typing import Callable, TypeVar, ParamSpec
import time
P = ParamSpec('P')
R = TypeVar('R')
def retry(max_attempts: int = 3, delay: float = 1.0):
"""Retry decorator with exponential backoff."""
def decorator(func: Callable[P, R]) -> Callable[P, R]:
@wraps(func)
def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
last_exception = None
for attempt in range(max_attempts):
try:
return func(*args, **kwargs)
except Exception as e:
last_exception = e
if attempt < max_attempts - 1:
time.sleep(delay * (2 ** attempt))
raise last_exception
return wrapper
return decorator
@retry(max_attempts=3, delay=0.5)
def fetch_data(url: str) -> dict:
response = requests.get(url)
response.raise_for_status()
return response.json()
```
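For pure functions, memoization does not need a hand-rolled decorator like `retry`: the standard library's `functools.lru_cache` does it in one line.

```python
from functools import lru_cache


@lru_cache(maxsize=None)  # unbounded cache; use a maxsize for long-lived processes
def fibonacci(n: int) -> int:
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)


print(fibonacci(30))  # 832040
```

Arguments must be hashable, since they form the cache key.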
## Code Quality Tools
### Ruff Configuration
```toml
# pyproject.toml
[tool.ruff]
line-length = 88
target-version = "py311"
[tool.ruff.lint]
select = [
"E", # pycodestyle errors
"W", # pycodestyle warnings
"F", # Pyflakes
"I", # isort
"B", # flake8-bugbear
"C4", # flake8-comprehensions
"UP", # pyupgrade
]
ignore = ["E501"] # Line too long (handled by formatter)
[tool.ruff.lint.isort]
known-first-party = ["myapp"]
```
### Type Checking with mypy
```toml
# pyproject.toml
[tool.mypy]
python_version = "3.11"
strict = true
warn_return_any = true
warn_unused_configs = true
ignore_missing_imports = true
```


@@ -0,0 +1,451 @@
# TypeScript Style Guide
TypeScript-specific conventions and best practices for type-safe development.
## Strict Mode
### Enable Strict Configuration
```json
{
"compilerOptions": {
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"noImplicitThis": true,
"alwaysStrict": true,
"noUncheckedIndexedAccess": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true
}
}
```
### Benefits
- Catches errors at compile time
- Better IDE support and autocomplete
- Self-documenting code
- Easier refactoring
## Type Safety
### Avoid `any`
```typescript
// Bad
function processData(data: any): any {
return data.value;
}
// Good
interface DataItem {
value: string;
count: number;
}
function processData(data: DataItem): string {
return data.value;
}
```
### Use `unknown` for Unknown Types
```typescript
// When type is truly unknown
function parseJSON(json: string): unknown {
return JSON.parse(json);
}
// Then narrow with type guards
function isUser(obj: unknown): obj is User {
return (
typeof obj === "object" && obj !== null && "id" in obj && "name" in obj
);
}
```
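The two pieces above compose naturally: parse to `unknown`, then narrow with a guard before use. A minimal sketch (the `User` shape here is hypothetical):

```typescript
// Hypothetical User shape for illustration
interface User {
  id: string;
  name: string;
}

function parseJSON(json: string): unknown {
  return JSON.parse(json);
}

function isUser(obj: unknown): obj is User {
  return (
    typeof obj === "object" && obj !== null && "id" in obj && "name" in obj
  );
}

const raw = parseJSON('{"id": "u1", "name": "Ada"}');
if (isUser(raw)) {
  // Inside this branch, raw is narrowed from unknown to User
  console.log(raw.name);
}
```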
### Prefer Explicit Types
```typescript
// Bad: Implicit any
const items = [];
// Good: Explicit type
const items: Item[] = [];
// Also good: Type inference when obvious
const count = 0; // number inferred
const name = "John"; // string inferred
```
## Interfaces vs Types
### Use Interfaces for Object Shapes
```typescript
// Preferred for objects
interface User {
id: string;
name: string;
email: string;
}
// Interfaces can be extended
interface AdminUser extends User {
permissions: string[];
}
// Interfaces can be augmented (declaration merging)
interface User {
avatar?: string;
}
```
### Use Types for Unions, Primitives, and Computed Types
```typescript
// Union types
type Status = "pending" | "active" | "completed";
// Primitive aliases
type UserId = string;
// Computed/mapped types
type Readonly<T> = {
readonly [P in keyof T]: T[P];
};
// Tuple types
type Coordinate = [number, number];
```
### Decision Guide
| Use Case | Recommendation |
| ----------------------- | -------------- |
| Object shape | `interface` |
| Union type | `type` |
| Function signature | `type` |
| Class implementation | `interface` |
| Mapped/conditional type | `type` |
| Library public API | `interface` |
## Async Patterns
### Prefer async/await
```typescript
// Bad: Callback hell
function fetchUserData(id: string, callback: (user: User) => void) {
fetch(`/users/${id}`)
.then((res) => res.json())
.then((user) => callback(user));
}
// Good: async/await
async function fetchUserData(id: string): Promise<User> {
const response = await fetch(`/users/${id}`);
return response.json();
}
```
### Error Handling in Async Code
```typescript
// Explicit error handling
async function fetchUser(id: string): Promise<User> {
try {
const response = await fetch(`/users/${id}`);
if (!response.ok) {
throw new ApiError(`Failed to fetch user: ${response.status}`);
}
return response.json();
} catch (error) {
if (error instanceof ApiError) {
throw error;
}
throw new NetworkError("Network request failed", { cause: error });
}
}
```
### Promise Types
```typescript
// Return type annotation for clarity
async function loadData(): Promise<Data[]> {
// ...
}
// Use Promise.all for parallel operations
async function loadAllData(): Promise<[Users, Posts]> {
return Promise.all([fetchUsers(), fetchPosts()]);
}
```
## Module Structure
### File Organization
```
src/
├── types/ # Shared type definitions
│ ├── user.ts
│ └── api.ts
├── utils/ # Pure utility functions
│ ├── validation.ts
│ └── formatting.ts
├── services/ # Business logic
│ ├── userService.ts
│ └── authService.ts
├── components/ # UI components (if applicable)
└── index.ts # Public API exports
```
### Export Patterns
```typescript
// Named exports (preferred)
export interface User { ... }
export function createUser(data: UserInput): User { ... }
export const DEFAULT_USER: User = { ... };
// Re-exports for public API
// index.ts
export { type User, createUser } from './user';
export { type Config } from './config';
// Avoid default exports (harder to refactor)
// Bad
export default class UserService { ... }
// Good
export class UserService { ... }
```
### Import Organization
```typescript
// 1. External dependencies
import { useState, useEffect } from "react";
import { z } from "zod";
// 2. Internal absolute imports
import { ApiClient } from "@/services/api";
import { User } from "@/types";
// 3. Relative imports
import { formatDate } from "./utils";
import { UserCard } from "./UserCard";
```
## Utility Types
### Built-in Utility Types
```typescript
// Partial - all properties optional
type UpdateUser = Partial<User>;
// Required - all properties required
type CompleteUser = Required<User>;
// Pick - select properties
type UserPreview = Pick<User, "id" | "name">;
// Omit - exclude properties
type UserWithoutPassword = Omit<User, "password">;
// Record - dictionary type
type UserRoles = Record<string, Role>;
// ReturnType - extract return type
type ApiResponse = ReturnType<typeof fetchData>;
// Parameters - extract parameter types
type FetchParams = Parameters<typeof fetch>;
```
### Custom Utility Types
```typescript
// Make specific properties optional
type PartialBy<T, K extends keyof T> = Omit<T, K> & Partial<Pick<T, K>>;
// Make specific properties required
type RequiredBy<T, K extends keyof T> = Omit<T, K> & Required<Pick<T, K>>;
// Deep readonly
type DeepReadonly<T> = {
readonly [P in keyof T]: T[P] extends object ? DeepReadonly<T[P]> : T[P];
};
```
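A usage sketch for `PartialBy`, assuming a hypothetical `Article` shape:

```typescript
type PartialBy<T, K extends keyof T> = Omit<T, K> & Partial<Pick<T, K>>;

interface Article {
  id: string;
  title: string;
  body: string;
}

// While drafting, the id is assigned later, so make it optional
type DraftArticle = PartialBy<Article, "id">;

const draft: DraftArticle = { title: "Untitled", body: "" }; // OK: no id yet
const saved: Article = { ...draft, id: "a1" }; // full shape once persisted
```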
## Enums and Constants
### Prefer const Objects Over Enums
```typescript
// Enums have runtime overhead
enum Status {
Pending = "pending",
Active = "active",
}
// Prefer const objects
const Status = {
Pending: "pending",
Active: "active",
} as const;
type Status = (typeof Status)[keyof typeof Status];
```
### When to Use Enums
```typescript
// Numeric enums for bit flags
enum Permissions {
None = 0,
Read = 1 << 0,
Write = 1 << 1,
Execute = 1 << 2,
All = Read | Write | Execute,
}
```
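A sketch of how such flags are combined and checked at runtime (the `hasPermission` helper is illustrative, not a standard API):

```typescript
enum Permissions {
  None = 0,
  Read = 1 << 0,
  Write = 1 << 1,
  Execute = 1 << 2,
  All = Read | Write | Execute,
}

// A flag is present when masking with it returns the flag itself
function hasPermission(granted: Permissions, required: Permissions): boolean {
  return (granted & required) === required;
}

const editor = Permissions.Read | Permissions.Write;
console.log(hasPermission(editor, Permissions.Read)); // true
console.log(hasPermission(editor, Permissions.Execute)); // false
```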
## Generics
### Basic Generic Usage
```typescript
// Generic function
function first<T>(items: T[]): T | undefined {
return items[0];
}
// Generic interface
interface Repository<T> {
find(id: string): Promise<T | null>;
save(item: T): Promise<T>;
delete(id: string): Promise<void>;
}
```
### Constraining Generics
```typescript
// Constrain to objects with id
function findById<T extends { id: string }>(
items: T[],
id: string,
): T | undefined {
return items.find((item) => item.id === id);
}
// Multiple constraints
function merge<T extends object, U extends object>(a: T, b: U): T & U {
return { ...a, ...b };
}
```
## Error Types
### Custom Error Classes
```typescript
class AppError extends Error {
constructor(
message: string,
public readonly code: string,
public readonly statusCode: number = 500,
) {
super(message);
this.name = "AppError";
}
}
class ValidationError extends AppError {
constructor(
message: string,
public readonly field: string,
) {
super(message, "VALIDATION_ERROR", 400);
this.name = "ValidationError";
}
}
```
### Type Guards for Errors
```typescript
function isAppError(error: unknown): error is AppError {
return error instanceof AppError;
}
function handleError(error: unknown): void {
if (isAppError(error)) {
console.error(`[${error.code}] ${error.message}`);
} else if (error instanceof Error) {
console.error(`Unexpected error: ${error.message}`);
} else {
console.error("Unknown error occurred");
}
}
```
## Testing Types
### Type Testing
```typescript
// Use type assertions for compile-time checks
type Assert<T, U extends T> = U;
// Test that types work as expected
type _TestUserHasId = Assert<{ id: string }, User>;
// Expect error (compile-time check)
// @ts-expect-error - User should require id
const invalidUser: User = { name: "John" };
```
## Common Patterns
### Builder Pattern
```typescript
class QueryBuilder<T> {
private filters: Array<(item: T) => boolean> = [];
where(predicate: (item: T) => boolean): this {
this.filters.push(predicate);
return this;
}
execute(items: T[]): T[] {
return items.filter((item) => this.filters.every((filter) => filter(item)));
}
}
```
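A usage sketch of the builder above, with a hypothetical `Product` shape — each `where` call returns `this`, so filters chain fluently:

```typescript
class QueryBuilder<T> {
  private filters: Array<(item: T) => boolean> = [];
  where(predicate: (item: T) => boolean): this {
    this.filters.push(predicate);
    return this;
  }
  execute(items: T[]): T[] {
    return items.filter((item) => this.filters.every((filter) => filter(item)));
  }
}

interface Product {
  name: string;
  price: number;
  inStock: boolean;
}

const products: Product[] = [
  { name: "Keyboard", price: 80, inStock: true },
  { name: "Monitor", price: 300, inStock: false },
  { name: "Mouse", price: 40, inStock: true },
];

// Chain predicates; an item must satisfy all of them
const affordable = new QueryBuilder<Product>()
  .where((p) => p.inStock)
  .where((p) => p.price < 100)
  .execute(products); // Keyboard and Mouse
```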
### Result Type
```typescript
type Result<T, E = Error> =
| { success: true; data: T }
| { success: false; error: E };
function divide(a: number, b: number): Result<number> {
if (b === 0) {
return { success: false, error: new Error("Division by zero") };
}
return { success: true, data: a / b };
}
```


@@ -0,0 +1,90 @@
# Conductor Hub
## Project: {{PROJECT_NAME}}
Central navigation for all Conductor artifacts and development tracks.
## Quick Links
### Core Documents
| Document | Description | Status |
| --------------------------------------------- | -------------------------- | ---------- |
| [Product Vision](./product.md) | Product overview and goals | {{STATUS}} |
| [Product Guidelines](./product-guidelines.md) | Voice, tone, and standards | {{STATUS}} |
| [Tech Stack](./tech-stack.md) | Technology decisions | {{STATUS}} |
| [Workflow](./workflow.md) | Development process | {{STATUS}} |
### Track Management
| Document | Description |
| ------------------------------- | ---------------------- |
| [Track Registry](./tracks.md) | All development tracks |
| [Active Tracks](#active-tracks) | Currently in progress |
### Style Guides
| Guide | Language/Domain |
| ---------------------------------------------- | ------------------------- |
| [General](./code_styleguides/general.md) | Universal principles |
| [TypeScript](./code_styleguides/typescript.md) | TypeScript conventions |
| [JavaScript](./code_styleguides/javascript.md) | JavaScript best practices |
| [Python](./code_styleguides/python.md) | Python standards |
| [Go](./code_styleguides/go.md) | Go idioms |
| [C#](./code_styleguides/csharp.md) | C# conventions |
| [Dart](./code_styleguides/dart.md) | Dart/Flutter patterns |
| [HTML/CSS](./code_styleguides/html-css.md) | Web standards |
## Active Tracks
| Track | Status | Priority | Spec | Plan |
| -------------- | ---------- | ------------ | ------------------------------------- | ------------------------------------- |
| {{TRACK_NAME}} | {{STATUS}} | {{PRIORITY}} | [spec](./tracks/{{TRACK_ID}}/spec.md) | [plan](./tracks/{{TRACK_ID}}/plan.md) |
## Recent Activity
| Date | Track | Action |
| -------- | --------- | ---------- |
| {{DATE}} | {{TRACK}} | {{ACTION}} |
## Project Status
**Current Phase:** {{CURRENT_PHASE}}
**Overall Progress:** {{PROGRESS_PERCENTAGE}}%
### Milestone Tracker
| Milestone | Target Date | Status |
| --------------- | ----------- | ------------ |
| {{MILESTONE_1}} | {{DATE_1}} | {{STATUS_1}} |
| {{MILESTONE_2}} | {{DATE_2}} | {{STATUS_2}} |
| {{MILESTONE_3}} | {{DATE_3}} | {{STATUS_3}} |
## Getting Started
1. Review [Product Vision](./product.md) for project context
2. Check [Tech Stack](./tech-stack.md) for technology decisions
3. Read [Workflow](./workflow.md) for development process
4. Find your track in [Track Registry](./tracks.md)
5. Follow track spec and plan
## Commands Reference
```bash
# Setup
{{SETUP_COMMAND}}
# Development
{{DEV_COMMAND}}
# Testing
{{TEST_COMMAND}}
# Build
{{BUILD_COMMAND}}
```
---
**Last Updated:** {{LAST_UPDATED}}
**Maintained By:** {{MAINTAINER}}


@@ -0,0 +1,196 @@
# Product Guidelines
## Voice & Tone
### Brand Voice
{{BRAND_VOICE_DESCRIPTION}}
### Voice Attributes
- **{{ATTRIBUTE_1}}:** {{ATTRIBUTE_1_DESCRIPTION}}
- **{{ATTRIBUTE_2}}:** {{ATTRIBUTE_2_DESCRIPTION}}
- **{{ATTRIBUTE_3}}:** {{ATTRIBUTE_3_DESCRIPTION}}
### Tone Variations by Context
| Context | Tone | Example |
| -------------- | -------------------- | ----------------------- |
| Success states | {{SUCCESS_TONE}} | {{SUCCESS_EXAMPLE}} |
| Error states | {{ERROR_TONE}} | {{ERROR_EXAMPLE}} |
| Onboarding | {{ONBOARDING_TONE}} | {{ONBOARDING_EXAMPLE}} |
| Empty states | {{EMPTY_STATE_TONE}} | {{EMPTY_STATE_EXAMPLE}} |
### Words We Use
- {{PREFERRED_WORD_1}}
- {{PREFERRED_WORD_2}}
- {{PREFERRED_WORD_3}}
### Words We Avoid
- {{AVOIDED_WORD_1}}
- {{AVOIDED_WORD_2}}
- {{AVOIDED_WORD_3}}
## Messaging Guidelines
### Core Messages
**Primary Message:**
> {{PRIMARY_MESSAGE}}
**Supporting Messages:**
1. {{SUPPORTING_MESSAGE_1}}
2. {{SUPPORTING_MESSAGE_2}}
3. {{SUPPORTING_MESSAGE_3}}
### Message Hierarchy
1. **Must Communicate:** {{MUST_COMMUNICATE}}
2. **Should Communicate:** {{SHOULD_COMMUNICATE}}
3. **Could Communicate:** {{COULD_COMMUNICATE}}
### Audience-Specific Messaging
| Audience | Key Message | Proof Points |
| -------------- | ------------- | ------------ |
| {{AUDIENCE_1}} | {{MESSAGE_1}} | {{PROOF_1}} |
| {{AUDIENCE_2}} | {{MESSAGE_2}} | {{PROOF_2}} |
## Design Principles
### Principle 1: {{PRINCIPLE_1_NAME}}
{{PRINCIPLE_1_DESCRIPTION}}
**Do:**
- {{PRINCIPLE_1_DO_1}}
- {{PRINCIPLE_1_DO_2}}
**Don't:**
- {{PRINCIPLE_1_DONT_1}}
- {{PRINCIPLE_1_DONT_2}}
### Principle 2: {{PRINCIPLE_2_NAME}}
{{PRINCIPLE_2_DESCRIPTION}}
**Do:**
- {{PRINCIPLE_2_DO_1}}
- {{PRINCIPLE_2_DO_2}}
**Don't:**
- {{PRINCIPLE_2_DONT_1}}
- {{PRINCIPLE_2_DONT_2}}
### Principle 3: {{PRINCIPLE_3_NAME}}
{{PRINCIPLE_3_DESCRIPTION}}
**Do:**
- {{PRINCIPLE_3_DO_1}}
- {{PRINCIPLE_3_DO_2}}
**Don't:**
- {{PRINCIPLE_3_DONT_1}}
- {{PRINCIPLE_3_DONT_2}}
## Accessibility Standards
### Compliance Target
{{ACCESSIBILITY_STANDARD}} (e.g., WCAG 2.1 AA)
### Core Requirements
#### Perceivable
- All images have meaningful alt text
- Color is not the only means of conveying information
- Text has minimum contrast ratio of 4.5:1
- Content is readable at 200% zoom
#### Operable
- All functionality available via keyboard
- No content flashes more than 3 times per second
- Skip navigation links provided
- Focus indicators clearly visible
#### Understandable
- Language is clear and simple
- Navigation is consistent
- Error messages are descriptive and helpful
- Labels and instructions are clear
#### Robust
- Valid HTML markup
- ARIA labels used appropriately
- Compatible with assistive technologies
- Progressive enhancement approach
### Testing Requirements
- Screen reader testing with {{SCREEN_READER}}
- Keyboard-only navigation testing
- Color contrast verification
- Automated accessibility scans
## Error Handling Philosophy
### Error Prevention
- Validate input early and often
- Provide clear constraints and requirements upfront
- Use inline validation where appropriate
- Confirm destructive actions
### Error Communication
#### Principles
1. **Be specific:** Tell users exactly what went wrong
2. **Be helpful:** Explain how to fix the problem
3. **Be human:** Use friendly, non-technical language
4. **Be timely:** Show errors as soon as they're detected
#### Error Message Structure
```
[What happened] + [Why it happened (if relevant)] + [How to fix it]
```
#### Examples
| Bad | Good |
| --------------- | ---------------------------------------------------- |
| "Invalid input" | "Email address must include @ symbol" |
| "Error 500" | "We couldn't save your changes. Please try again." |
| "Failed" | "Unable to connect. Check your internet connection." |
### Error States
| Severity | Visual Treatment | User Action Required |
| -------- | ---------------------- | -------------------- |
| Info | {{INFO_TREATMENT}} | Optional |
| Warning | {{WARNING_TREATMENT}} | Recommended |
| Error | {{ERROR_TREATMENT}} | Required |
| Critical | {{CRITICAL_TREATMENT}} | Immediate |
### Recovery Patterns
- Auto-save user progress where possible
- Provide clear "try again" actions
- Offer alternative paths when primary fails
- Preserve user input on errors


@@ -0,0 +1,102 @@
# Product Vision
## Product Overview
**Name:** {{PRODUCT_NAME}}
**Tagline:** {{ONE_LINE_DESCRIPTION}}
**Description:**
{{DETAILED_DESCRIPTION}}
## Problem Statement
### The Problem
{{PROBLEM_DESCRIPTION}}
### Current Solutions
{{EXISTING_SOLUTIONS}}
### Why They Fall Short
{{SOLUTION_GAPS}}
## Target Users
### Primary Users
{{PRIMARY_USER_PERSONA}}
- **Who:** {{USER_DESCRIPTION}}
- **Goals:** {{USER_GOALS}}
- **Pain Points:** {{USER_PAIN_POINTS}}
- **Technical Proficiency:** {{TECHNICAL_LEVEL}}
### Secondary Users
{{SECONDARY_USER_PERSONA}}
- **Who:** {{USER_DESCRIPTION}}
- **Goals:** {{USER_GOALS}}
- **Relationship to Primary:** {{RELATIONSHIP}}
## Core Value Proposition
### Key Benefits
1. {{BENEFIT_1}}
2. {{BENEFIT_2}}
3. {{BENEFIT_3}}
### Differentiators
- {{DIFFERENTIATOR_1}}
- {{DIFFERENTIATOR_2}}
### Value Statement
> {{VALUE_STATEMENT}}
## Success Metrics
### Key Performance Indicators
| Metric | Target | Measurement Method |
| ------------ | ------------ | ------------------ |
| {{METRIC_1}} | {{TARGET_1}} | {{METHOD_1}} |
| {{METRIC_2}} | {{TARGET_2}} | {{METHOD_2}} |
| {{METRIC_3}} | {{TARGET_3}} | {{METHOD_3}} |
### North Star Metric
{{NORTH_STAR_METRIC}}
### Leading Indicators
- {{LEADING_INDICATOR_1}}
- {{LEADING_INDICATOR_2}}
### Lagging Indicators
- {{LAGGING_INDICATOR_1}}
- {{LAGGING_INDICATOR_2}}
## Out of Scope
### Explicitly Not Included
- {{OUT_OF_SCOPE_1}}
- {{OUT_OF_SCOPE_2}}
- {{OUT_OF_SCOPE_3}}
### Future Considerations
- {{FUTURE_CONSIDERATION_1}}
- {{FUTURE_CONSIDERATION_2}}
### Non-Goals
- {{NON_GOAL_1}}
- {{NON_GOAL_2}}


@@ -0,0 +1,204 @@
# Technology Stack
## Frontend
### Framework
**Choice:** {{FRONTEND_FRAMEWORK}}
**Version:** {{FRONTEND_VERSION}}
**Rationale:**
{{FRONTEND_RATIONALE}}
### State Management
**Choice:** {{STATE_MANAGEMENT}}
**Version:** {{STATE_VERSION}}
**Rationale:**
{{STATE_RATIONALE}}
### Styling
**Choice:** {{STYLING_SOLUTION}}
**Version:** {{STYLING_VERSION}}
**Rationale:**
{{STYLING_RATIONALE}}
### Additional Frontend Libraries
| Library | Purpose | Version |
| ------------ | -------------------- | -------------------- |
| {{FE_LIB_1}} | {{FE_LIB_1_PURPOSE}} | {{FE_LIB_1_VERSION}} |
| {{FE_LIB_2}} | {{FE_LIB_2_PURPOSE}} | {{FE_LIB_2_VERSION}} |
| {{FE_LIB_3}} | {{FE_LIB_3_PURPOSE}} | {{FE_LIB_3_VERSION}} |
## Backend
### Language
**Choice:** {{BACKEND_LANGUAGE}}
**Version:** {{BACKEND_LANGUAGE_VERSION}}
**Rationale:**
{{BACKEND_LANGUAGE_RATIONALE}}
### Framework
**Choice:** {{BACKEND_FRAMEWORK}}
**Version:** {{BACKEND_FRAMEWORK_VERSION}}
**Rationale:**
{{BACKEND_FRAMEWORK_RATIONALE}}
### Database
#### Primary Database
**Choice:** {{PRIMARY_DATABASE}}
**Version:** {{PRIMARY_DB_VERSION}}
**Rationale:**
{{PRIMARY_DB_RATIONALE}}
#### Secondary Database (if applicable)
**Choice:** {{SECONDARY_DATABASE}}
**Purpose:** {{SECONDARY_DB_PURPOSE}}
### Additional Backend Libraries
| Library | Purpose | Version |
| ------------ | -------------------- | -------------------- |
| {{BE_LIB_1}} | {{BE_LIB_1_PURPOSE}} | {{BE_LIB_1_VERSION}} |
| {{BE_LIB_2}} | {{BE_LIB_2_PURPOSE}} | {{BE_LIB_2_VERSION}} |
| {{BE_LIB_3}} | {{BE_LIB_3_PURPOSE}} | {{BE_LIB_3_VERSION}} |
## Infrastructure
### Hosting
**Provider:** {{HOSTING_PROVIDER}}
**Environment:** {{HOSTING_ENVIRONMENT}}
**Services Used:**
- {{HOSTING_SERVICE_1}}
- {{HOSTING_SERVICE_2}}
- {{HOSTING_SERVICE_3}}
### CI/CD
**Platform:** {{CICD_PLATFORM}}
**Pipeline Stages:**
1. {{PIPELINE_STAGE_1}}
2. {{PIPELINE_STAGE_2}}
3. {{PIPELINE_STAGE_3}}
4. {{PIPELINE_STAGE_4}}
### Monitoring
**APM:** {{APM_TOOL}}
**Logging:** {{LOGGING_TOOL}}
**Alerting:** {{ALERTING_TOOL}}
### Additional Infrastructure
| Service | Purpose | Provider |
| ----------- | ------------------- | -------------------- |
| {{INFRA_1}} | {{INFRA_1_PURPOSE}} | {{INFRA_1_PROVIDER}} |
| {{INFRA_2}} | {{INFRA_2_PURPOSE}} | {{INFRA_2_PROVIDER}} |
## Development Tools
### Package Manager
**Choice:** {{PACKAGE_MANAGER}}
**Version:** {{PACKAGE_MANAGER_VERSION}}
### Testing
| Type | Tool | Coverage Target |
| ----------- | ------------------------- | ------------------------- |
| Unit | {{UNIT_TEST_TOOL}} | {{UNIT_COVERAGE}}% |
| Integration | {{INTEGRATION_TEST_TOOL}} | {{INTEGRATION_COVERAGE}}% |
| E2E | {{E2E_TEST_TOOL}} | Critical paths |
### Linting & Formatting
**Linter:** {{LINTER}}
**Formatter:** {{FORMATTER}}
**Config:** {{LINT_CONFIG}}
### Additional Dev Tools
| Tool | Purpose |
| -------------- | ---------------------- |
| {{DEV_TOOL_1}} | {{DEV_TOOL_1_PURPOSE}} |
| {{DEV_TOOL_2}} | {{DEV_TOOL_2_PURPOSE}} |
| {{DEV_TOOL_3}} | {{DEV_TOOL_3_PURPOSE}} |
## Decision Log
### {{DECISION_1_TITLE}}
**Date:** {{DECISION_1_DATE}}
**Status:** {{DECISION_1_STATUS}}
**Context:**
{{DECISION_1_CONTEXT}}
**Decision:**
{{DECISION_1_DECISION}}
**Consequences:**
- {{DECISION_1_CONSEQUENCE_1}}
- {{DECISION_1_CONSEQUENCE_2}}
---
### {{DECISION_2_TITLE}}
**Date:** {{DECISION_2_DATE}}
**Status:** {{DECISION_2_STATUS}}
**Context:**
{{DECISION_2_CONTEXT}}
**Decision:**
{{DECISION_2_DECISION}}
**Consequences:**
- {{DECISION_2_CONSEQUENCE_1}}
- {{DECISION_2_CONSEQUENCE_2}}
---
### {{DECISION_3_TITLE}}
**Date:** {{DECISION_3_DATE}}
**Status:** {{DECISION_3_STATUS}}
**Context:**
{{DECISION_3_CONTEXT}}
**Decision:**
{{DECISION_3_DECISION}}
**Consequences:**
- {{DECISION_3_CONSEQUENCE_1}}
- {{DECISION_3_CONSEQUENCE_2}}
## Version Compatibility Matrix
| Component | Min Version | Max Version | Notes |
| --------------- | ----------- | ----------- | ----------- |
| {{COMPONENT_1}} | {{MIN_1}} | {{MAX_1}} | {{NOTES_1}} |
| {{COMPONENT_2}} | {{MIN_2}} | {{MAX_2}} | {{NOTES_2}} |
| {{COMPONENT_3}} | {{MIN_3}} | {{MAX_3}} | {{NOTES_3}} |


@@ -0,0 +1,10 @@
{
"id": "",
"type": "feature|bug|chore|refactor",
"status": "pending|in_progress|completed",
"created_at": "",
"updated_at": "",
"description": "",
"spec_path": "",
"plan_path": ""
}


@@ -0,0 +1,198 @@
# Implementation Plan: {{TRACK_NAME}}
## Overview
**Track ID:** {{TRACK_ID}}
**Spec:** [spec.md](./spec.md)
**Estimated Effort:** {{EFFORT_ESTIMATE}}
**Target Completion:** {{TARGET_DATE}}
## Progress Summary
| Phase | Status | Progress |
| ------------------------- | ---------- | ------------- |
| Phase 1: {{PHASE_1_NAME}} | {{STATUS}} | {{PROGRESS}}% |
| Phase 2: {{PHASE_2_NAME}} | {{STATUS}} | {{PROGRESS}}% |
| Phase 3: {{PHASE_3_NAME}} | {{STATUS}} | {{PROGRESS}}% |
| Phase 4: {{PHASE_4_NAME}} | {{STATUS}} | {{PROGRESS}}% |
## Phase 1: {{PHASE_1_NAME}}
**Objective:** {{PHASE_1_OBJECTIVE}}
**Estimated Duration:** {{PHASE_1_DURATION}}
### Tasks
- [ ] **1.1 {{TASK_1_1_TITLE}}**
- [ ] {{SUBTASK_1_1_1}}
- [ ] {{SUBTASK_1_1_2}}
- [ ] {{SUBTASK_1_1_3}}
- [ ] **1.2 {{TASK_1_2_TITLE}}**
- [ ] {{SUBTASK_1_2_1}}
- [ ] {{SUBTASK_1_2_2}}
- [ ] **1.3 {{TASK_1_3_TITLE}}**
- [ ] {{SUBTASK_1_3_1}}
- [ ] {{SUBTASK_1_3_2}}
### Verification
- [ ] All Phase 1 tests passing
- [ ] Code coverage meets threshold
- [ ] Code review approved
### Checkpoint
```
Commit: [track-id] checkpoint: phase 1 complete
```
---
## Phase 2: {{PHASE_2_NAME}}
**Objective:** {{PHASE_2_OBJECTIVE}}
**Estimated Duration:** {{PHASE_2_DURATION}}
**Dependencies:** Phase 1 complete
### Tasks
- [ ] **2.1 {{TASK_2_1_TITLE}}**
- [ ] {{SUBTASK_2_1_1}}
- [ ] {{SUBTASK_2_1_2}}
- [ ] {{SUBTASK_2_1_3}}
- [ ] **2.2 {{TASK_2_2_TITLE}}**
- [ ] {{SUBTASK_2_2_1}}
- [ ] {{SUBTASK_2_2_2}}
- [ ] **2.3 {{TASK_2_3_TITLE}}**
- [ ] {{SUBTASK_2_3_1}}
- [ ] {{SUBTASK_2_3_2}}
### Verification
- [ ] All Phase 2 tests passing
- [ ] Integration tests passing
- [ ] Code review approved
### Checkpoint
```
Commit: [track-id] checkpoint: phase 2 complete
```
---
## Phase 3: {{PHASE_3_NAME}}
**Objective:** {{PHASE_3_OBJECTIVE}}
**Estimated Duration:** {{PHASE_3_DURATION}}
**Dependencies:** Phase 2 complete
### Tasks
- [ ] **3.1 {{TASK_3_1_TITLE}}**
- [ ] {{SUBTASK_3_1_1}}
- [ ] {{SUBTASK_3_1_2}}
- [ ] **3.2 {{TASK_3_2_TITLE}}**
- [ ] {{SUBTASK_3_2_1}}
- [ ] {{SUBTASK_3_2_2}}
- [ ] **3.3 {{TASK_3_3_TITLE}}**
- [ ] {{SUBTASK_3_3_1}}
- [ ] {{SUBTASK_3_3_2}}
### Verification
- [ ] All Phase 3 tests passing
- [ ] End-to-end tests passing
- [ ] Code review approved
### Checkpoint
```
Commit: [track-id] checkpoint: phase 3 complete
```
---
## Phase 4: {{PHASE_4_NAME}}
**Objective:** {{PHASE_4_OBJECTIVE}}
**Estimated Duration:** {{PHASE_4_DURATION}}
**Dependencies:** Phase 3 complete
### Tasks
- [ ] **4.1 {{TASK_4_1_TITLE}}**
- [ ] {{SUBTASK_4_1_1}}
- [ ] {{SUBTASK_4_1_2}}
- [ ] **4.2 {{TASK_4_2_TITLE}}**
- [ ] {{SUBTASK_4_2_1}}
- [ ] {{SUBTASK_4_2_2}}
- [ ] **4.3 {{TASK_4_3_TITLE}}**
- [ ] {{SUBTASK_4_3_1}}
- [ ] {{SUBTASK_4_3_2}}
### Verification
- [ ] All tests passing
- [ ] Coverage ≥ 80%
- [ ] Performance benchmarks met
- [ ] Documentation complete
- [ ] Code review approved
### Checkpoint
```
Commit: [track-id] checkpoint: phase 4 complete (track done)
```
---
## Final Verification
### Quality Gates
- [ ] All unit tests passing
- [ ] All integration tests passing
- [ ] All E2E tests passing
- [ ] Code coverage ≥ 80%
- [ ] No critical linting errors
- [ ] Security scan passed
- [ ] Performance requirements met
- [ ] Accessibility requirements met
### Documentation
- [ ] API documentation updated
- [ ] README updated (if applicable)
- [ ] Changelog entry added
### Deployment
- [ ] Staging deployment successful
- [ ] Smoke tests passed
- [ ] Production deployment approved
---
## Deviations Log
| Date | Task | Deviation | Reason | Resolution |
| -------- | -------- | ------------- | ---------- | -------------- |
| {{DATE}} | {{TASK}} | {{DEVIATION}} | {{REASON}} | {{RESOLUTION}} |
## Notes
{{IMPLEMENTATION_NOTES}}
---
**Plan Created:** {{CREATED_DATE}}
**Last Updated:** {{UPDATED_DATE}}


@@ -0,0 +1,169 @@
# Track Specification: {{TRACK_NAME}}
## Overview
**Track ID:** {{TRACK_ID}}
**Type:** {{TRACK_TYPE}} (feature | bug | chore | refactor)
**Priority:** {{PRIORITY}} (critical | high | medium | low)
**Created:** {{CREATED_DATE}}
**Author:** {{AUTHOR}}
### Description
{{TRACK_DESCRIPTION}}
### Background
{{BACKGROUND_CONTEXT}}
## Functional Requirements
### FR-1: {{REQUIREMENT_1_TITLE}}
{{REQUIREMENT_1_DESCRIPTION}}
**Acceptance Criteria:**
- [ ] {{FR1_CRITERIA_1}}
- [ ] {{FR1_CRITERIA_2}}
- [ ] {{FR1_CRITERIA_3}}
### FR-2: {{REQUIREMENT_2_TITLE}}
{{REQUIREMENT_2_DESCRIPTION}}
**Acceptance Criteria:**
- [ ] {{FR2_CRITERIA_1}}
- [ ] {{FR2_CRITERIA_2}}
- [ ] {{FR2_CRITERIA_3}}
### FR-3: {{REQUIREMENT_3_TITLE}}
{{REQUIREMENT_3_DESCRIPTION}}
**Acceptance Criteria:**
- [ ] {{FR3_CRITERIA_1}}
- [ ] {{FR3_CRITERIA_2}}
- [ ] {{FR3_CRITERIA_3}}
## Non-Functional Requirements
### Performance
- {{PERFORMANCE_REQUIREMENT_1}}
- {{PERFORMANCE_REQUIREMENT_2}}
### Security
- {{SECURITY_REQUIREMENT_1}}
- {{SECURITY_REQUIREMENT_2}}
### Scalability
- {{SCALABILITY_REQUIREMENT_1}}
### Accessibility
- {{ACCESSIBILITY_REQUIREMENT_1}}
### Compatibility
- {{COMPATIBILITY_REQUIREMENT_1}}
## Acceptance Criteria
### Must Have (P0)
- [ ] {{P0_CRITERIA_1}}
- [ ] {{P0_CRITERIA_2}}
- [ ] {{P0_CRITERIA_3}}
### Should Have (P1)
- [ ] {{P1_CRITERIA_1}}
- [ ] {{P1_CRITERIA_2}}
### Nice to Have (P2)
- [ ] {{P2_CRITERIA_1}}
- [ ] {{P2_CRITERIA_2}}
## Scope
### In Scope
- {{IN_SCOPE_1}}
- {{IN_SCOPE_2}}
- {{IN_SCOPE_3}}
- {{IN_SCOPE_4}}
### Out of Scope
- {{OUT_OF_SCOPE_1}}
- {{OUT_OF_SCOPE_2}}
- {{OUT_OF_SCOPE_3}}
### Future Considerations
- {{FUTURE_1}}
- {{FUTURE_2}}
## Dependencies
### Upstream Dependencies
| Dependency | Type | Status | Notes |
| ---------- | ---------- | ------------ | ----------- |
| {{DEP_1}} | {{TYPE_1}} | {{STATUS_1}} | {{NOTES_1}} |
| {{DEP_2}} | {{TYPE_2}} | {{STATUS_2}} | {{NOTES_2}} |
### Downstream Impacts
| Component | Impact | Mitigation |
| --------------- | ------------ | ---------------- |
| {{COMPONENT_1}} | {{IMPACT_1}} | {{MITIGATION_1}} |
| {{COMPONENT_2}} | {{IMPACT_2}} | {{MITIGATION_2}} |
### External Dependencies
- {{EXTERNAL_DEP_1}}
- {{EXTERNAL_DEP_2}}
## Risks
### Technical Risks
| Risk | Probability | Impact | Mitigation |
| --------------- | ----------- | ------------ | ---------------- |
| {{TECH_RISK_1}} | {{PROB_1}} | {{IMPACT_1}} | {{MITIGATION_1}} |
| {{TECH_RISK_2}} | {{PROB_2}} | {{IMPACT_2}} | {{MITIGATION_2}} |
### Business Risks
| Risk | Probability | Impact | Mitigation |
| -------------- | ----------- | ------------ | ---------------- |
| {{BIZ_RISK_1}} | {{PROB_1}} | {{IMPACT_1}} | {{MITIGATION_1}} |
### Unknowns
- {{UNKNOWN_1}}
- {{UNKNOWN_2}}
## Open Questions
- [ ] {{QUESTION_1}}
- [ ] {{QUESTION_2}}
- [ ] {{QUESTION_3}}
## References
- {{REFERENCE_1}}
- {{REFERENCE_2}}
- {{REFERENCE_3}}
---
**Approved By:** {{APPROVER}}
**Approval Date:** {{APPROVAL_DATE}}


@@ -0,0 +1,53 @@
# Track Registry
This file maintains the registry of all development tracks for the project. Each track represents a distinct body of work with its own spec and implementation plan.
## Status Legend
| Symbol | Status | Description |
| ------ | ----------- | ------------------------- |
| `[ ]` | Pending | Not yet started |
| `[~]` | In Progress | Currently being worked on |
| `[x]` | Completed | Finished and verified |
## Active Tracks
### [ ] {{TRACK_ID}}: {{TRACK_NAME}}
**Description:** {{TRACK_DESCRIPTION}}
**Priority:** {{PRIORITY}}
**Folder:** [./tracks/{{TRACK_ID}}/](./tracks/{{TRACK_ID}}/)
---
### [ ] {{TRACK_ID}}: {{TRACK_NAME}}
**Description:** {{TRACK_DESCRIPTION}}
**Priority:** {{PRIORITY}}
**Folder:** [./tracks/{{TRACK_ID}}/](./tracks/{{TRACK_ID}}/)
---
## Completed Tracks
<!-- Move completed tracks here -->
---
## Track Creation Checklist
When creating a new track:
1. [ ] Add entry to this registry
2. [ ] Create track folder: `./tracks/{{track-id}}/`
3. [ ] Create spec.md from template
4. [ ] Create plan.md from template
5. [ ] Create metadata.json from template
6. [ ] Update index.md with new track reference
## Notes
- Track IDs should be lowercase with hyphens (e.g., `user-auth`, `api-v2`)
- Keep descriptions concise (one line)
- Prioritize tracks as: critical, high, medium, low
- Archive completed tracks quarterly


@@ -0,0 +1,192 @@
# Development Workflow
## Core Principles
1. **plan.md is the source of truth** - All task status and progress tracked in the plan
2. **Test-Driven Development** - Red → Green → Refactor cycle with 80% coverage target
3. **CI/CD Compatibility** - All changes must pass automated pipelines before merge
4. **Incremental Progress** - Small, verifiable commits with clear purpose
## Task Lifecycle
### Step 1: Task Selection
- Review plan.md for next pending task
- Verify dependencies are complete
- Confirm understanding of acceptance criteria
### Step 2: Progress Marking
- Update task status in plan.md from `[ ]` to `[~]`
- Note start time if tracking velocity
### Step 3: Red Phase (Write Failing Tests)
- Write test(s) that define expected behavior
- Verify test fails for the right reason
- Keep tests focused and minimal
### Step 4: Green Phase (Make Tests Pass)
- Write minimum code to pass tests
- Avoid premature optimization
- Focus on correctness over elegance
### Step 5: Refactor Phase
- Improve code structure without changing behavior
- Apply relevant style guide conventions
- Remove duplication and clarify intent
### Step 6: Coverage Verification
- Run coverage report
- Ensure new code meets 80% threshold
- Add edge case tests if coverage gaps exist
### Step 7: Deviation Documentation
- If implementation differs from spec, document why
- Update spec if change is permanent
- Flag for review if uncertain
### Step 8: Code Commit
- Stage related changes only
- Write clear commit message referencing task
- Format: `[track-id] task: description`
### Step 9: Git Notes (Optional)
- Add implementation notes for complex changes
- Reference relevant decisions or trade-offs
### Step 10: Plan Update
- Mark task as `[x]` completed in plan.md
- Update any affected downstream tasks
- Note blockers or follow-up items
### Step 11: Plan Commit
- Commit plan.md changes separately
- Format: `[track-id] plan: mark task X complete`
## Phase Completion Protocol
### Checkpoint Commits
At the end of each phase:
1. Ensure all phase tasks are `[x]` complete
2. Run full test suite
3. Verify coverage meets threshold
4. Create checkpoint commit: `[track-id] checkpoint: phase N complete`
### Test Verification
```bash
{{TEST_COMMAND}}
{{COVERAGE_COMMAND}}
```
### Manual Approval Gates
Phases requiring approval before proceeding:
- Architecture changes
- API contract modifications
- Database schema changes
- Security-sensitive implementations
## Quality Assurance Gates
All code must pass these criteria before merge:
| Gate | Requirement | Command |
| ----------- | ------------------------ | ------------------------ |
| 1. Tests | All tests passing | `{{TEST_COMMAND}}` |
| 2. Coverage | Minimum 80% | `{{COVERAGE_COMMAND}}` |
| 3. Style | Follows style guide | `{{LINT_COMMAND}}` |
| 4. Docs | Public APIs documented | Manual review |
| 5. Types | No type errors | `{{TYPE_CHECK_COMMAND}}` |
| 6. Linting | No lint errors | `{{LINT_COMMAND}}` |
| 7. Mobile | Responsive if applicable | Manual review |
| 8. Security | No known vulnerabilities | `{{SECURITY_COMMAND}}` |
## Development Commands
### Environment Setup
```bash
{{SETUP_COMMAND}}
```
### Development Server
```bash
{{DEV_COMMAND}}
```
### Pre-Commit Checks
```bash
{{PRE_COMMIT_COMMAND}}
```
### Full Validation
```bash
{{VALIDATE_COMMAND}}
```
## Workflow Diagram
```
┌─────────────┐
│ Select Task │
└──────┬──────┘
       ▼
┌─────────────┐
│  Mark [~]   │
└──────┬──────┘
       ▼
┌─────────────┐
│ RED: Write  │
│ Failing Test│
└──────┬──────┘
       ▼
┌─────────────┐
│ GREEN: Make │
│  Test Pass  │
└──────┬──────┘
       ▼
┌─────────────┐
│  REFACTOR   │
└──────┬──────┘
       ▼
┌─────────────┐
│   Verify    │
│  Coverage   │
└──────┬──────┘
       ▼
┌─────────────┐
│ Commit Code │
└──────┬──────┘
       ▼
┌─────────────┐
│  Mark [x]   │
└──────┬──────┘
       ▼
┌─────────────┐
│ Commit Plan │
└─────────────┘
```