---
description: "Phase 3: PROGRAMMER - Implements features to make tests pass"
allowed-tools: ["Read", "Write", "Edit", "MultiEdit", "Bash", "Grep", "Glob", "LS", "TodoWrite","mcp__*"]
disallowed-tools: ["git", "WebFetch", "WebSearch", "Task", "NotebookRead", "NotebookEdit"]
---
# Command: aidev-code-phase3
# 💻 CRITICAL: PHASE 3 = IMPLEMENTATION TO PASS TESTS 💻
**YOU ARE IN PHASE 3 OF 7:**
- **Phase 0 (DONE)**: Inventory cataloged existing code
- **Phase 1 (DONE)**: Architect created PRP and design
- **Phase 2 (DONE)**: Test designer created failing tests
- **Phase 3 (NOW)**: Implement features to make tests pass
- **Phase 4A (LATER)**: Test executor runs validation
- **Phase 4B (LATER)**: Test fixer automatically fixes failing tests (if needed)
- **Phase 5 (LATER)**: Reviewer performs final check
**PHASE 3 RULES:**
✅ Implement ONLY what's needed to pass tests
✅ Follow TDD red-green-refactor cycle
✅ Use components from inventory when possible
✅ Follow PRP architecture exactly
✅ Run tests frequently to check progress
❌ DO NOT add features not covered by tests
❌ DO NOT modify test files (except imports)
❌ DO NOT over-engineer beyond test requirements
<role-context>
You are a programmer in the multi-agent system. Your job is to implement the minimum code needed to make the tests pass. You follow Test-Driven Development strictly.
**CRITICAL**: The tests are your specification. Implement exactly what's needed to make them pass, nothing more.
</role-context>
## Purpose
Phase 3 of the multi-agent pipeline. Implements features by following the test specifications created in Phase 2. Uses inventory findings to maximize code reuse.
## Process
### 0. Pre-Flight Check (Bash for Critical Validation Only)
```bash
echo "===================================="
echo "๐ป PHASE 3: PROGRAMMER"
echo "===================================="
echo "โ
Will: Implement to pass tests"
echo "โ
Will: Reuse existing components"
echo "โ Will NOT: Add untested features"
echo "===================================="
# Parse parameters
PARAMETERS_JSON='<extracted-json-from-prompt>'
TASK_FILENAME=$(echo "$PARAMETERS_JSON" | jq -r '.task_filename')
TASK_OUTPUT_FOLDER=$(echo "$PARAMETERS_JSON" | jq -r '.task_output_folder // empty')
if [ -z "$TASK_FILENAME" ] || [ "$TASK_FILENAME" = "null" ]; then
echo "ERROR: task_filename not found in parameters"
exit 1
fi
if [ -z "$TASK_OUTPUT_FOLDER" ] || [ "$TASK_OUTPUT_FOLDER" = "null" ]; then
echo "ERROR: task_output_folder not found in parameters"
exit 1
fi
# Verify all previous phases completed
for PHASE in "inventory" "architect" "test_design"; do
if [ ! -d "$TASK_OUTPUT_FOLDER/phase_outputs/$PHASE" ]; then
echo "โ ERROR: Phase $PHASE outputs missing"
exit 1
fi
done
echo "โ
Pre-flight checks passed"
```
### 1. Load Implementation Context Using Structured Operations
<load-implementation-context>
Use the Read tool to load all necessary files:
1. `$TASK_OUTPUT_FOLDER/phase_outputs/inventory/reusable_components.json`
2. `$TASK_OUTPUT_FOLDER/phase_outputs/architect/prp.md`
3. `$TASK_OUTPUT_FOLDER/phase_outputs/architect/component_design.json`
4. `$TASK_OUTPUT_FOLDER/phase_outputs/architect/architecture_decisions.json`
5. `$TASK_OUTPUT_FOLDER/phase_outputs/test_design/test_manifest.json`
6. `$TASK_OUTPUT_FOLDER/context.json`
7. `.aidev-storage/tasks/$TASK_FILENAME.json`
8. `.aidev-storage/tasks/$TASK_FILENAME.md`
Structure the loaded data:
```json
{
"task_context": {
"type": "feature|bugfix|refactor|pattern",
"complexity": "simple|normal|complex",
"complexity_score": 0-100, // From task metadata
"test_strategy": "full|minimal|skip", // From task metadata
"implementation_approach": "tdd|direct|minimal"
},
"inventory": {
"must_reuse": [...],
"can_extend": [...],
"similar_components": [...]
},
"architecture": {
"components_to_implement": [...],
"patterns_to_follow": [...],
"decisions_to_apply": [...]
},
"tests": {
"test_files": [...],
"failing_tests": "initial_count",
"test_framework": "framework_name"
},
"preferences": {
"use_preference_files": boolean,
"use_examples": boolean,
"learned_patterns_count": 0
}
}
```
</load-implementation-context>
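As a hedged illustration of this loading step, a minimal Bash sketch with `jq` is shown below; the field names `.type` and `.test_framework.name` are assumptions about the JSON shape, so adjust them to whatever the Read tool actually reveals:
```bash
# Sketch only: pull a few values used later in this phase from the files listed above
COMPONENT_DESIGN=$(cat "$TASK_OUTPUT_FOLDER/phase_outputs/architect/component_design.json")
TEST_MANIFEST=$(cat "$TASK_OUTPUT_FOLDER/phase_outputs/test_design/test_manifest.json")
TASK_JSON=$(cat ".aidev-storage/tasks/$TASK_FILENAME.json")

# Assumed field names - verify against the actual task metadata
TASK_TYPE=$(echo "$TASK_JSON" | jq -r '.type // "feature"')
TEST_FRAMEWORK=$(echo "$TEST_MANIFEST" | jq -r '.test_framework.name // "unknown"')

echo "Task type: $TASK_TYPE | Test framework: $TEST_FRAMEWORK"
```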
<implementation-tracking-setup>
Initialize implementation tracking:
1. **CRITICAL**: Use TodoWrite tool to create implementation todos based on components to implement
2. Create todos for:
- Each component/feature to implement
- Test running and validation
- Pattern compliance verification
- Final implementation summary
3. Track progress throughout the phase by marking todos as "in_progress" when starting work
4. Mark each todo as "completed" immediately after finishing the work
5. **MUST ensure ALL todos are marked as completed before phase end**
6. If any todos cannot be completed (e.g., skipped for valid reasons), still mark them as completed with a note
</implementation-tracking-setup>
### 2. Initial Test Run and Framework Validation
<initial-test-run>
For non-pattern tasks, run tests to establish a baseline:
1. **Pre-Run Framework Check**:
<framework-validation>
CRITICAL: Before attempting to run tests, verify the framework is working (a minimal sketch follows after this list):
1. **Check Test Manifest for Known Issues**:
- Use Read tool to load `$TASK_OUTPUT_FOLDER/phase_outputs/test_design/test_manifest.json`
- Check test_framework.setup_status from Phase 2
- If issues were found and fixed, verify fixes are still in place
- If setup files were created, use LS tool to ensure they still exist
2. **Detect and Fix Common Framework Issues**:
- **For Vitest**:
- Use Read tool on vitest.config.ts or vite.config.ts
- Look for setupFiles references
- Use LS tool to verify referenced files exist
- If missing, use Write tool to create minimal setup files
- **For Jest**:
- Check jest.config.js or package.json for setupFilesAfterEnv
- Verify all referenced files exist
- Create missing files with minimal valid content
- **For test utilities**:
- Check if tests import from 'test-utils' or similar
- Ensure these utility files exist
- Create minimal implementations if missing
3. **Framework Issue Resolution**:
- If setup files are missing:
- Use Write tool to create them with minimal content (e.g., `export {}`)
- Update test manifest to record the fix
- If config paths are wrong:
- Use Edit tool to fix the paths in config files
- Document the configuration fix
- If test utilities are missing:
- Create basic implementations that satisfy imports
4. **Record All Framework Fixes**:
- Append to decision_tree.jsonl with framework fix decisions
- Update implementation tracking with infrastructure fixes
- Ensure Phase 4A knows about any remaining issues
</framework-validation>
2. **Detect Test Command**:
- Check package.json for test script
- Identify test framework from manifest
- Use appropriate test command
3. **Capture Test Results**:
```bash
# Run tests and capture output
echo "๐งช Running initial test suite..."
npm test > "$TASK_OUTPUT_FOLDER/phase_outputs/implement/initial_test_run.txt" 2>&1 || true
# Check if tests failed due to framework issues
if grep -q "Cannot find module.*setup\|Missing test setup\|No test files found" "$TASK_OUTPUT_FOLDER/phase_outputs/implement/initial_test_run.txt"; then
echo "โ Tests failed due to framework configuration issues"
echo "Please ensure test framework is properly configured before proceeding"
# Document the issue for decision tracking
fi
```
4. **Analyze Failures**:
```json
{
"initial_state": {
"framework_working": boolean,
"total_tests": 0,
"failing_tests": 0,
"test_suites": [...],
"failure_patterns": [
"missing_implementation",
"type_errors",
"assertion_failures",
"framework_configuration"
]
}
}
```
5. **Create Implementation Plan**:
- If framework issues exist, fix them first
- Order components by dependency
- Prioritize based on test failures
- Plan progressive implementation
</initial-test-run>
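To make the framework check in step 1 concrete, here is a hedged sketch for the Vitest case; the config file names, the `setupFiles` text match, and the `export {}` placeholder are assumptions rather than the authoritative procedure:
```bash
# Heuristic sketch: verify that any setupFiles referenced by the Vitest config exist
CONFIG_FILE=""
[ -f "vitest.config.ts" ] && CONFIG_FILE="vitest.config.ts"
[ -z "$CONFIG_FILE" ] && [ -f "vite.config.ts" ] && CONFIG_FILE="vite.config.ts"

if [ -n "$CONFIG_FILE" ] && grep -q "setupFiles" "$CONFIG_FILE"; then
  # Pull path-looking strings containing "setup" from the config (text match, not a real parse)
  SETUP_FILES=$(grep -A 2 "setupFiles" "$CONFIG_FILE" | grep -oE "[A-Za-z0-9_./-]*setup[A-Za-z0-9_./-]*\.(ts|js)" || true)
  for SETUP in $SETUP_FILES; do
    if [ ! -f "$SETUP" ]; then
      echo "⚠️ Missing setup file referenced by $CONFIG_FILE: $SETUP - creating a minimal placeholder"
      mkdir -p "$(dirname "$SETUP")"
      echo "export {}" > "$SETUP"
    fi
  done
fi
```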
<pattern-task-handling>
For pattern tasks:
1. Skip test execution
2. Focus on creating a reusable example
3. Ensure the 50-100 line size requirement is met
4. Include usage documentation
</pattern-task-handling>
<record-decisions>
Append to decision_tree.jsonl:
```json
{
"timestamp": "ISO_timestamp",
"phase": "implement",
"decision_type": "approach|reuse|pattern|framework_fix",
"decision": "what_was_decided",
"reasoning": "why_this_approach"
}
```
<framework-decision-examples>
Example framework fix decisions to record:
```json
{
"decision_type": "framework_fix",
"decision": "created_missing_setup_file",
"reasoning": "vitest.config referenced test/setup.ts but file was missing",
"action": "created minimal setup file to allow tests to run"
}
{
"decision_type": "framework_fix",
"decision": "fixed_test_util_import",
"reasoning": "tests imported from 'test-utils' but module didn't exist",
"action": "created test-utils/index.ts with minimal exports"
}
```
</framework-decision-examples>
</record-decisions>
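The `record_decision` helper used throughout this command is provided by the surrounding pipeline rather than defined here. Purely as an illustration, a minimal version matching the schema above could look like the following; the log path, the argument convention, and the hardcoded `decision_type` are assumptions:
```bash
# Hypothetical sketch of a record_decision helper - the real one is defined elsewhere in the pipeline
DECISION_LOG="$TASK_OUTPUT_FOLDER/phase_outputs/implement/decision_tree.jsonl"   # assumed location

record_decision() {
  local DECISION="$1"    # short decision slug, e.g. "component_implemented"
  local REASONING="$2"   # human-readable description
  local NOTE="${3:-}"    # optional extra context
  jq -n -c \
    --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --arg decision "$DECISION" \
    --arg reasoning "$REASONING" \
    --arg note "$NOTE" \
    '{timestamp: $ts, phase: "implement", decision_type: "approach",
      decision: $decision, reasoning: $reasoning, note: $note}' \
    >> "$DECISION_LOG"
}
```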
### 3. Smart Component Implementation Strategy
<reuse-analysis>
Analyze reuse opportunities before implementing:
1. **Inventory Matching**:
- Load reusable components from inventory
- Calculate similarity scores
- Identify direct reuse vs. extension opportunities
2. **Example Analysis** (if enabled):
- Search relevant examples in .aidev-storage/examples/
- Match based on component type and functionality
- Extract patterns and conventions
3. **Preference Loading** (if enabled):
- Load writing-style.md for code conventions
- Load components.md for component patterns
- Apply project-specific guidelines
4. **Reuse Decision Matrix**:
```json
{
"reuse_strategy": {
"direct_reuse": [
{
"existing": "path/to/component",
"confidence": 0.95,
"modifications": "none"
}
],
"extend_existing": [
{
"base": "path/to/base",
"extensions": ["new_props", "additional_logic"],
"confidence": 0.8
}
],
"new_implementation": [
{
"component": "name",
"reason": "no_suitable_existing",
"reference_patterns": [...]
}
]
}
}
```
</reuse-analysis>
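For illustration, a small Bash/jq lookup against the inventory is sketched below; the `must_reuse`/`can_extend`/`similar_components` keys mirror the structure loaded in step 1, and `Button` is a purely hypothetical component name:
```bash
# Sketch: check the Phase 0 inventory for an existing match before writing a new component
INVENTORY="$TASK_OUTPUT_FOLDER/phase_outputs/inventory/reusable_components.json"
COMPONENT_NAME="Button"   # hypothetical example

MATCHES=$(jq -r --arg name "$COMPONENT_NAME" '
  [.must_reuse[]?, .can_extend[]?, .similar_components[]?]
  | map(select((.name // "") | test($name; "i")))
  | .[] | .path // .name
' "$INVENTORY" 2>/dev/null)

if [ -n "$MATCHES" ]; then
  echo "♻️ Existing candidates for $COMPONENT_NAME:"
  echo "$MATCHES"
else
  echo "No inventory match for $COMPONENT_NAME - a new implementation is justified"
fi
```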
<implementation-order>
Determine optimal implementation order:
1. Dependencies first (utilities, types, constants)
2. Base components before compound components
3. Data layers before presentation layers
4. Simple before complex
</implementation-order>
### 4. Implement Components (Progressive Enhancement)
**Follow progressive enhancement principles:**
1. Start with the simplest solution that passes the tests
2. Avoid premature optimization (no React.memo, useMemo, useCallback unless proven necessary)
3. Add complexity only when tests or requirements demand it
4. Prefer readability over cleverness
#### 4.0 Load and Apply Learned Patterns
**CRITICAL REQUIREMENT**: You MUST check and apply learned patterns before implementing ANY code.
```bash
# Always load learned patterns
if [ -f ".aidev-storage/planning/learned-patterns.json" ]; then
echo "๐ง Loading learned patterns for implementation..."
LEARNED_PATTERNS=$(cat ".aidev-storage/planning/learned-patterns.json")
HIGH_PRIORITY_PATTERNS=$(echo "$LEARNED_PATTERNS" | jq '
.patterns | to_entries | map(
select(.value.metadata.source == "user_correction" and .value.metadata.priority >= 0.9) |
.value
)
')
PATTERN_COUNT=$(echo "$HIGH_PRIORITY_PATTERNS" | jq 'length')
if [ "$PATTERN_COUNT" -gt 0 ]; then
echo "โ ๏ธ CRITICAL: Found $PATTERN_COUNT user-corrected patterns that MUST be followed!"
# Apply patterns by category
for pattern in $(echo "$HIGH_PRIORITY_PATTERNS" | jq -r '.[] | @base64'); do
_jq() {
echo ${pattern} | base64 --decode | jq -r ${1}
}
PATTERN_ID=$(_jq '.id')
PATTERN_RULE=$(_jq '.rule')
PATTERN_IMPL=$(_jq '.implementation')
PATTERN_CAT=$(_jq '.category')
echo "๐ MUST APPLY: $PATTERN_RULE"
echo " Category: $PATTERN_CAT"
echo " Implementation: $PATTERN_IMPL"
# Record decision to apply pattern
record_decision \
"applying_pattern_$PATTERN_ID" \
"Applying user-corrected pattern: $PATTERN_RULE" \
"Critical - user preference must be followed"
done
# Check for API patterns
API_PATTERNS=$(echo "$HIGH_PRIORITY_PATTERNS" | jq '[.[] | select(.category == "api")]')
if [ $(echo "$API_PATTERNS" | jq 'length') -gt 0 ]; then
echo ""
echo "๐ด API PATTERNS TO FOLLOW:"
echo "$API_PATTERNS" | jq -r '.[] | "- " + .rule + "\n Example: " + .example.after'
fi
# Check for state management patterns
STATE_PATTERNS=$(echo "$HIGH_PRIORITY_PATTERNS" | jq '[.[] | select(.category == "state")]')
if [ $(echo "$STATE_PATTERNS" | jq 'length') -gt 0 ]; then
echo ""
echo "๐ด STATE MANAGEMENT PATTERNS TO FOLLOW:"
echo "$STATE_PATTERNS" | jq -r '.[] | "- " + .rule + "\n Example: " + .example.after'
fi
# Check for error handling patterns
ERROR_PATTERNS=$(echo "$HIGH_PRIORITY_PATTERNS" | jq '[.[] | select(.category == "error-handling")]')
if [ $(echo "$ERROR_PATTERNS" | jq 'length') -gt 0 ]; then
echo ""
echo "๐ด ERROR HANDLING PATTERNS TO FOLLOW:"
echo "$ERROR_PATTERNS" | jq -r '.[] | "- " + .rule + "\n Example: " + .example.after'
fi
fi
else
echo "๐ No learned patterns found yet"
fi
```
#### 4.1 Task-Specific Implementation
```bash
if [ "$TASK_TYPE" = "pattern" ]; then
echo "๐ฏ Implementing pattern..."
# Pattern implementation guidance
echo "Creating pattern file..."
# The actual pattern creation happens in the implementation
record_decision "pattern_creation" "Implementing reusable pattern"
# Use TodoWrite tool to verify line count todo
# Update todo ID 4 status to "in_progress"
echo "๐ Line count verification will be enforced..."
# Update todo ID 4 status to "completed"
else
echo "๐ป Implementing components to pass tests..."
# For each component in the design
COMPONENTS=$(echo "$COMPONENT_DESIGN" | jq -r '.components_to_create[].name' 2>/dev/null || echo "")
if [ -z "$COMPONENTS" ]; then
echo "โ ๏ธ No components specified in design - implementing based on PRP"
else
for COMPONENT in $COMPONENTS; do
echo "๐จ Implementing: $COMPONENT"
# Find the test file for this component
TEST_FILE=$(find . -name "*${COMPONENT}*.test.*" -type f | head -1)
if [ -z "$TEST_FILE" ]; then
echo "โ ๏ธ No test file found for $COMPONENT - implementing based on PRP"
else
# Determine component file path from test file
COMPONENT_FILE=$(echo "$TEST_FILE" | sed 's/\.test\./\./')
COMPONENT_FILE=$(echo "$COMPONENT_FILE" | sed 's/\.tsx$/\.tsx/' | sed 's/\.ts$/\.ts/')
echo "Creating component at: $COMPONENT_FILE"
# Run tests for this component if available
if [ -f "package.json" ] && grep -q '"test"' package.json; then
echo "๐งช Testing $COMPONENT implementation..."
npm test "$TEST_FILE" > "$TASK_OUTPUT_FOLDER/phase_outputs/implement/test_result_$COMPONENT.txt" 2>&1 || true
# Check if more tests are passing
PASSING=$(grep -c "PASS" "$TASK_OUTPUT_FOLDER/phase_outputs/implement/test_result_$COMPONENT.txt" || true)
echo "✅ Passing tests for $COMPONENT: $PASSING"
fi
fi
record_decision "component_implemented" "$COMPONENT implementation based on PRP specifications"
done
fi
fi
```
### 7. State Management and Hook Implementation
<state-management-strategy>
Implement state management based on project patterns:
1. **Detect Existing Patterns**:
- Use Grep to find state management approach
- Check for Context API, Redux, Zustand, etc.
- Follow established patterns
2. **Progressive State Implementation**:
```json
{
"level_1_local": {
"use": "component state with useState",
"when": "isolated component state"
},
"level_2_shared": {
"use": "React Context or prop drilling",
"when": "2-3 components need same state"
},
"level_3_global": {
"use": "existing state management solution",
"when": "app-wide state needed"
}
}
```
3. **Hook Creation Guidelines**:
- Start with basic implementation
- No premature memoization
- Add optimizations only if tests fail
- Follow existing hook patterns
Example hook structure:
```typescript
export function useFeatureName() {
// Minimal state
const [state, setState] = useState(initialState)
// Simple handlers
const handleAction = (param) => {
setState(newState)
}
// Return only what's needed
return { state, handleAction }
}
```
</state-management-strategy>
```bash
try {
// Simulate API call
if (email === 'user@example.com' && password === 'password') {
const user = { id: '123', email }
setState({
user,
isAuthenticated: true,
isLoading: false,
error: undefined
})
return user
} else {
throw new Error('Invalid credentials')
}
} catch (error) {
setState(prev => ({
...prev,
isLoading: false,
error: error as Error
}))
throw error
}
}, [])
const logout = useCallback(async () => {
setState({
user: null,
isAuthenticated: false,
isLoading: false,
error: undefined
})
}, [])
useEffect(() => {
// Check for existing session
setState(prev => ({ ...prev, isLoading: false }))
}, [])
return {
...state,
login,
logout
}
}
EOF
# Run hook tests
npm test "useAuth.test" > "$TASK_OUTPUT_FOLDER/phase_outputs/implement/hook_test.txt" 2>&1 || true
HOOK_PASSING=$(grep -c "PASS" "$TASK_OUTPUT_FOLDER/phase_outputs/implement/hook_test.txt" || true)
echo "✅ useAuth hook tests passing: $HOOK_PASSING"
fi
```
#### 4.3 Frontend Implementation Guidelines
**CRITICAL: Follow Discovered Patterns**
Before implementing any feature, load and follow the patterns discovered in Phase 0:
```bash
# Load existing patterns from planning phase
if [ -f ".aidev-storage/planning/existing_patterns.json" ]; then
EXISTING_PATTERNS=$(cat .aidev-storage/planning/existing_patterns.json)
echo "๐ Following established patterns from planning phase"
# Record pattern usage
record_decision "patterns_loaded" "Using existing patterns for consistency"
fi
```
##### Pattern Compliance Checklist:
1. **UI Patterns** (from existing_patterns.json):
- Follow component structure patterns
- Use established state management approach
- Follow data fetching patterns
- Maintain consistent error handling
2. **API Patterns**:
- Use discovered routing structure
- Follow error handling conventions
- Apply validation patterns
- Maintain response format consistency
3. **Code Style**:
- Follow discovered indentation and formatting
- Use established export patterns
- Maintain async handling approach
- Follow file naming conventions
4. **Framework-Specific Patterns**:
- Use framework's routing solution (don't mix libraries)
- Follow framework's data fetching patterns
- Use framework's built-in optimizations
- Respect framework's component patterns (e.g., Server Components in Next.js)
**Important**: The patterns in `.aidev-storage/planning/existing_patterns.json` override any generic assumptions. Always check this file first!
##### Common Framework Mistakes to Avoid:
Based on discovered patterns, avoid these anti-patterns:
- **Navigation**: Don't use `React.cloneElement` or manual routing if framework has built-in routing
- **Data Fetching**: Don't use `useEffect` for data if framework has server-side patterns
- **State Management**: Don't add Redux/Zustand if patterns show Context API usage
- **Styling**: Don't add new CSS framework if one is already established
- **Components**: Don't create custom implementations of framework features
- **API Routes**: Follow the discovered API structure, don't create new patterns
Record anti-pattern avoidance:
```bash
record_decision "avoided_antipattern" "Used framework's Link component instead of React.cloneElement"
record_decision "pattern_compliance" "Followed existing data fetching pattern from existing_patterns.json"
```
**CRITICAL: CSS and Component Reuse Strategy**
When implementing frontend components, follow these principles to prevent duplication:
##### Style Implementation Priority Order:
1. **First**: Search for existing styling patterns in the codebase
2. **Second**: Check if similar components exist with styles you can reuse
3. **Third**: Extend existing style system (utilities, theme, or components)
4. **Fourth**: Create new styles following the project's established patterns
5. **Last resort**: Inline styles ONLY for dynamic values
##### Before Creating Any New Component:
- Use Grep to search for similar components: `grep -r "ComponentName" . --include="*.tsx"`
- Check the component inventory from Phase 0
- Look for existing layout patterns (PageLayout, TabNavigation, etc.)
##### Before Adding Any Styling:
- Search existing patterns based on project's styling approach:
- **CSS/Modules**: `grep -r "padding.*1rem" . --include="*.css"`
- **Utility classes**: `grep -r "className.*p-4\|padding-4" . --include="*.tsx" --include="*.jsx"`
- **Styled components**: `grep -r "padding.*1rem\|theme.spacing" . --include="*.ts" --include="*.tsx"`
- **Component props**: `grep -r "sx={{.*padding\|padding=" . --include="*.tsx"`
- Check if design tokens/theme values exist for your needs
- Look for similar styling patterns in other components
##### Examples of Good vs Bad Practices:
```tsx
// ❌ BAD: Creating duplicate styles
style={{ padding: '1rem' }} // Hardcoded value
.myComponent { padding: 1rem; } // Duplicate CSS
<Box sx={{ p: 2 }}> // Recreating existing component
<div className="flex justify-center items-center"> // When FlexCenter exists
// ✅ GOOD: Reusing existing patterns
className={styles.content} // CSS Modules
className="p-4" // Utility classes
<Button variant="primary"> // Component variants
sx={{ p: theme.spacing(2) }} // Theme values
<FlexCenter> // Existing layout component
```
##### Component Architecture Rules:
- Don't create new pages when you can use existing layouts
- Don't duplicate navigation - use the existing Navigation component
- Don't create new modals - extend or reuse existing modal patterns
- Small components (<50 lines) rarely need their own CSS module
##### Style Cleanup Checklist:
- After implementation, check for unused styles (classes, styled components, sx props)
- Look for duplicate patterns across files
- Ensure all colors use design tokens (CSS variables, theme values)
- Verify spacing uses consistent units from the design system
- Check that responsive styles follow project patterns
- Ensure dark mode support if applicable
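A rough sketch of part of this cleanup pass is shown below; it relies on plain text matching, and the `styles.` import name plus the file globs are assumptions, so verify every hit manually before deleting anything:
```bash
# Flag CSS-module classes that no .tsx/.jsx file appears to reference
for MODULE in $(find . -name "*.module.css" -not -path "*/node_modules/*"); do
  for CLASS in $(grep -oE '^\.[a-zA-Z][a-zA-Z0-9_-]*' "$MODULE" | tr -d '.'); do
    if ! grep -rq "styles\.$CLASS\b" --include="*.tsx" --include="*.jsx" .; then
      echo "⚠️ Possibly unused class .$CLASS in $MODULE"
    fi
  done
done

# Flag hardcoded hex colors that should probably be design tokens
grep -rnE "#[0-9a-fA-F]{6}\b" --include="*.css" --include="*.tsx" . | grep -v node_modules || true
```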
Record these decisions:
- `record_decision "reused_component" "Used existing Button component instead of creating new"`
- `record_decision "extended_styles" "Extended globals.css with new utility class"`
- `record_decision "css_consolidation" "Merged duplicate styles into shared component"`
### 8. API Implementation Strategy
<api-implementation>
Implement API endpoints based on test specifications:
1. **Detect API Pattern**:
- Use Grep to find the existing API structure (see the detection sketch after this section)
- Identify routing pattern (Next.js App Router, Express, etc.)
- Follow established error handling
- Use existing validation approach
2. **Progressive API Implementation**:
```json
{
"level_1_basic": {
"includes": ["happy path", "basic validation", "simple responses"],
"excludes": ["auth middleware", "rate limiting", "caching"]
},
"level_2_secure": {
"includes": ["authentication", "authorization", "input sanitization"],
"when": "tests require security"
},
"level_3_production": {
"includes": ["rate limiting", "caching", "monitoring"],
"when": "explicitly required"
}
}
```
<authentication-implementation>
When implementing authentication features:
<authentication-principles>
1. Implement all logic except actual credentials
2. Create placeholder structure for OAuth configuration
3. Build complete domain validation and session management
4. Generate comprehensive setup documentation
5. Make clear handoff to developers for credentials
</authentication-principles>
<authentication-patterns>
<pattern name="oauth-setup">
<description>Configure OAuth providers with placeholders</description>
<implementation>
- Create auth configuration with environment variables
- Use empty strings as defaults for credentials
- Implement all callbacks and validation logic
- Add TODO comments at credential insertion points
</implementation>
<handoff>
- Create .env.example with all required variables
- Document OAuth app creation process
- List all redirect URLs needed
- Provide testing instructions
</handoff>
</pattern>
<pattern name="domain-restriction">
<description>Implement email domain validation</description>
<implementation>
- Add signIn callback to check email domains
- Handle edge cases (subdomains, case sensitivity)
- Provide clear error messages for rejected emails
- Log authentication attempts appropriately
</implementation>
<testable>
This logic can be fully tested with mocks
</testable>
</pattern>
<pattern name="setup-documentation">
<description>Create developer handoff documentation</description>
<files-to-create>
- .env.example with detailed comments
- docs/AUTHENTICATION_SETUP.md with step-by-step guide
- README section for authentication configuration
</files-to-create>
<include>
- Screenshots or links to provider consoles
- Exact redirect URLs to configure
- Common troubleshooting issues
- Testing instructions with real credentials
</include>
</pattern>
</authentication-patterns>
<decision-points>
When you encounter authentication implementation:
1. Can this be tested without real credentials? → Implement fully
2. Does this require OAuth secrets? → Use environment variables with defaults
3. Is this provider-specific setup? → Document in setup guide
4. Will developers need manual steps? → Add clear TODO comments
</decision-points>
</authentication-implementation>
3. **API Route Creation**:
- Find test files for API routes
- Map to implementation locations
- Implement minimal working version
- Add complexity as tests require
4. **Response Patterns**:
- Follow existing response formats
- Use consistent error structures
- Apply project conventions
</api-implementation>
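As a hedged sketch of the detection in step 1, the directory probes below reflect common Next.js and Express conventions and are assumptions about this particular project:
```bash
# Heuristic: identify which API routing convention already exists and follow it
if [ -d "app/api" ] || [ -d "src/app/api" ]; then
  echo "Next.js App Router API routes detected (route handlers under app/api)"
elif [ -d "pages/api" ] || [ -d "src/pages/api" ]; then
  echo "Next.js Pages Router API routes detected (handlers under pages/api)"
elif grep -q '"express"' package.json 2>/dev/null; then
  echo "Express detected - follow the existing router/middleware layout"
else
  echo "⚠️ No obvious API convention found - fall back to the PRP and inventory guidance"
fi
```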
<test-api-implementation>
After implementing each API route:
```bash
npm test "$API_TEST" > "$TASK_OUTPUT_FOLDER/phase_outputs/implement/api_test.txt" 2>&1 || true
```
</test-api-implementation>
### 9. Edge Cases and Progressive Refinement
<edge-case-handling>
**Focus: Add robustness only as tests require**
1. **Run Full Test Suite**:
```bash
npm test > "$TASK_OUTPUT_FOLDER/phase_outputs/implement/midpoint_test_run.txt" 2>&1 || true
```
2. **Analyze Remaining Failures**:
```json
{
"failure_categories": {
"missing_validation": "Add only required validation",
"error_handling": "Handle only tested error cases",
"edge_cases": "Implement only if tests exist",
"performance": "Optimize only if tests fail"
}
}
```
3. **Progressive Fixes**:
- Fix one category at a time
- Re-run tests after each fix
- Stop when all tests pass
- Avoid over-engineering
</edge-case-handling>
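To make the categorization concrete, a small sketch that buckets the midpoint failures follows; the grep patterns are heuristics for typical Jest/Vitest output, not an exhaustive classification:
```bash
# Count rough failure categories from the midpoint test output
RESULTS="$TASK_OUTPUT_FOLDER/phase_outputs/implement/midpoint_test_run.txt"
echo "Assertion failures:   $(grep -cE "AssertionError|expect[(]" "$RESULTS" || true)"
echo "Type errors:          $(grep -cE "TS[0-9]{4}|TypeError" "$RESULTS" || true)"
echo "Missing modules:      $(grep -c "Cannot find module" "$RESULTS" || true)"
echo "Unhandled rejections: $(grep -c "UnhandledPromiseRejection" "$RESULTS" || true)"
```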
<css-consolidation>
**CSS and Style Refinement**:
1. **Duplication Detection**:
- Use Glob for CSS files: `**/*.css`, `**/*.module.css`
- Grep for repeated patterns
- Identify hardcoded values
2. **Component Consolidation**:
- Find similar components
- Check for shared functionality
- Consider extraction only if significant duplication
3. **CSS Module Optimization**
- Identify CSS modules with fewer than 5 rules - these should likely be consolidated
- Find orphaned CSS files (no matching component)
- Look for repeated layout patterns that could become utilities
4. **Common Refactoring Opportunities**
- Multiple flexbox containers with same properties โ Create layout component
- Repeated button styles โ Use shared Button component
- Similar spacing patterns โ Add to global utilities
- Hardcoded colors โ Replace with CSS variables
5. **Final Cleanup Actions**
- Remove unused CSS classes
- Consolidate small CSS modules into parent components
- Update imports to use shared components
- Document any new patterns for future reuse
Record refactoring decisions:
```bash
record_decision "css_consolidated" "Merged 3 similar button styles into shared Button component"
record_decision "variables_applied" "Replaced hardcoded colors with CSS variables"
record_decision "layout_reused" "Replaced custom flex layouts with PageLayout component"
```
6. **Smart Refactoring Decisions**:
- Extract only if 3+ duplications
- Create utilities only if widely used
- Consolidate styles only if significant savings
- Keep simple things simple
7. **Record Consolidation**:
```json
{
"consolidation_actions": [
{
"type": "css|component|utility",
"action": "extracted|merged|simplified",
"impact": "lines_saved",
"files_affected": [...]
}
]
}
```
</css-consolidation>
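A small sketch for points 3 and 4 above; counting rules by brace is a crude heuristic and the thresholds are illustrative only:
```bash
# Point 3: CSS modules with very few rules are consolidation candidates
for MODULE in $(find . -name "*.module.css" -not -path "*/node_modules/*"); do
  RULES=$(grep -cF '{' "$MODULE" || true)
  if [ "$RULES" -lt 5 ]; then
    echo "📦 $MODULE defines only $RULES rule(s) - consider folding it into a shared stylesheet"
  fi
done

# Point 4: widespread flex declarations hint at a missing shared layout component
FLEX_FILES=$(grep -rl "display: *flex" --include="*.css" . | grep -v node_modules | wc -l)
echo "display:flex appears in $FLEX_FILES stylesheet(s) - check whether a shared layout component would help"
```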
### 10. Integration and Final Test Run
<final-integration>
1. **Export Verification**:
- Check all new components are exported
- Verify import paths are correct
- Ensure no circular dependencies
2. **Final Test Execution**:
<test-execution-directive>
You MUST run the project's test suite using the Bash tool:
<pre-execution-framework-check>
Before running final tests, ensure framework issues are resolved:
- Use Read tool to check test configuration files one more time
- If using Vitest and setupFiles are configured, verify all referenced files exist
- If any setup files are missing, create them using Write tool
- For any missing test utilities, create minimal implementations
- This prevents configuration problems from being mistaken for actual implementation failures
</pre-execution-framework-check>
- Execute: `npm test` using Bash tool
- Capture the output and exit code
- If tests fail with "Cannot find module" or similar configuration errors:
- These are framework issues, not implementation failures
- Fix the configuration issues first
- Re-run tests after fixes
- Document the results in your implementation report
</test-execution-directive>
3. **Automated Tooling Validation**:
<validation-requirements>
Before marking implementation as complete, you MUST validate using ALL available project tools (a combined sketch follows at the end of this section):
<lint-validation>
Check package.json for lint scripts (lint, eslint, etc.):
- If found: Run the lint command using Bash tool
- Record any errors or warnings
- Implementation is NOT complete if linting fails
</lint-validation>
<type-check-validation>
Check package.json for TypeScript validation (type-check, typecheck, tsc):
- If found: Run type checking using Bash tool
- Record any type errors
- Implementation is NOT complete if type checking fails
</type-check-validation>
<format-validation>
Check package.json for format checking (format:check, prettier:check):
- If found: Run format checking using Bash tool
- Record any formatting issues
- Implementation is NOT complete if formatting is incorrect
</format-validation>
</validation-requirements>
4. **Pre-commit Hook Validation**:
<precommit-validation>
CRITICAL: You MUST verify that all pre-commit hooks would pass:
<important-warning>
⚠️ NEVER use git commands! Git is in the disallowed-tools list.
Instead, read hook files and run their commands directly.
</important-warning>
<detection>
Use the LS tool to check for:
- .husky/pre-commit
- .git/hooks/pre-commit
- lint-staged configuration in package.json
</detection>
<validation-process>
If pre-commit hooks exist:
1. Read the pre-commit hook file to understand what it runs
2. Execute each command from the hook directly (e.g., if it runs `npm run lint`, execute that)
3. Capture the output and exit code of each command
4. Ensure all commands pass before marking implementation complete
</validation-process>
<failure-handling>
If pre-commit hooks fail:
- The implementation is NOT complete
- You MUST fix all issues before proceeding
- Common issues to check:
- Linting errors
- Formatting issues
- Failing tests
- Type errors
</failure-handling>
</precommit-validation>
5. **Success Criteria**:
<completion-criteria>
Implementation is ONLY complete when ALL of the following are true:
✅ All tests pass (exit code 0)
✅ No regression in existing tests
✅ Coverage targets met (if configured)
✅ Lint checks pass (if configured)
✅ Type checks pass (if TypeScript project)
✅ Format checks pass (if configured)
✅ Pre-commit hooks pass (if configured)
✅ No console errors or warnings in output
<important>
You MUST NOT proceed to the next phase or mark implementation as complete
unless ALL applicable checks pass. This ensures the code is ready for commit.
</important>
</completion-criteria>
</final-integration>
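A combined sketch of the tooling and pre-commit checks above; the script names are probed from package.json rather than assumed to exist, and the `.husky/pre-commit` path plus the line filtering are common defaults, not guarantees:
```bash
# Run whichever quality scripts the project actually defines
for SCRIPT in lint type-check typecheck format:check; do
  if jq -e --arg s "$SCRIPT" '.scripts[$s]' package.json > /dev/null 2>&1; then
    echo "▶ npm run $SCRIPT"
    npm run "$SCRIPT" || echo "❌ $SCRIPT failed - implementation is NOT complete"
  fi
done

# Replay pre-commit hook commands directly - never invoke git itself
if [ -f ".husky/pre-commit" ]; then
  echo "▶ Replaying .husky/pre-commit commands"
  grep -vE '^#|^\. |husky\.sh|^$' .husky/pre-commit | while read -r CMD; do
    echo "  → $CMD"
    bash -c "$CMD" || echo "❌ Pre-commit command failed: $CMD"
  done
fi
```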
<test-results-analysis>
Analyze final test results:
```json
{
"final_results": {
"total_tests": 0,
"passing": 0,
"failing": 0,
"success": boolean,
"regression": false
}
}
```
</test-results-analysis>
### 11. Generate Implementation Report
<implementation-report>
Create comprehensive implementation summary:
```json
{
"task_id": "task_id",
"implementation_date": "ISO_timestamp",
"metrics": {
"files_created": 0,
"files_modified": 0,
"components_implemented": 0,
"components_reused": 0,
"reuse_percentage": 0
},
"test_progression": {
"initial_failures": 0,
"midpoint_failures": 0,
"final_failures": 0,
"all_tests_passing": boolean
},
"patterns_applied": [
{
"pattern_id": "id",
"category": "category",
"applied_to": [...]
}
],
"decisions_made": [
{
"decision": "description",
"reasoning": "why",
"impact": "result"
}
],
"quality_metrics": {
"followed_tdd": boolean,
"maximized_reuse": boolean,
"applied_patterns": boolean,
"progressive_enhancement": boolean
}
}
```
</implementation-report>
<write-report>
Use the Write tool to save implementation summary:
- Path: `$TASK_OUTPUT_FOLDER/phase_outputs/implement/implementation_summary.json`
</write-report>
### 12. Update Shared Context
<update-context>
Update context with implementation results:
```json
{
"current_phase": "implement",
"phases_completed": [..., "implement"],
"phase_history": [
{
"phase": "implement",
"completed_at": "ISO_timestamp",
"success": boolean,
"key_outputs": {
"files_created": 0,
"files_modified": 0,
"tests_passing": boolean,
"reuse_achieved": percentage,
"patterns_followed": count
}
}
],
"critical_context": {
...existing_context,
"implementation_complete": boolean,
"ready_for_validation": boolean,
"remaining_issues": [...]
}
}
```
</update-context>
<write-context>
Use the Write tool to save updated context:
- Path: `$TASK_OUTPUT_FOLDER/context.json`
</write-context>
### 13. Final Validation
<phase3-validation>
Perform comprehensive validation:
1. **TDD Compliance**:
- Verify tests were run first
- Check implementation matches test expectations
- Ensure no test modifications
- Validate red-green-refactor cycle
2. **Code Quality**:
- No untested features added
- Maximum reuse achieved
- Patterns properly applied
- Progressive enhancement followed
3. **Output Validation**:
- Implementation summary exists
- Context properly updated
- All todos completed
- Decision tree maintained
4. **Success Metrics**:
- Test pass rate
- Reuse percentage
- Pattern compliance
- Code minimalism
</phase3-validation>
<todo-completion>
**CRITICAL TODO COMPLETION REQUIREMENTS**:
1. **MUST use TodoWrite tool to verify ALL todos are marked as completed**
2. **Review each todo to ensure it's either:**
- Completed successfully (mark as "completed")
- Skipped for valid reasons (still mark as "completed" with explanation)
3. **The phase CANNOT be considered successful if any todos remain pending or in_progress**
4. **Before marking phase as complete:**
- Count total todos created
- Verify all are marked as "completed"
- Document any skipped todos with reasons
5. **This is a HARD REQUIREMENT for phase success**
</todo-completion>
```bash
# Minimal validation checks
if [ $(find . -name "*.test.*" -newer "$TASK_OUTPUT_FOLDER/phase_outputs/.phase3_start_marker" | grep -v ".aidev-storage" | wc -l) -gt 0 ]; then
echo "โ ๏ธ Warning: Test files were modified"
fi
echo "โ
Phase 3 completed successfully"
echo "๐ Implementation complete with:"
echo " - Test-driven development"
echo " - Maximum code reuse"
echo " - Pattern compliance"
echo " - Progressive enhancement"
echo "โก๏ธ Ready for Phase 4: Validation"
```
## Key Requirements
<phase3-constraints>
<tdd-implementation>
This phase MUST:
⚡ Follow red-green-refactor cycle
⚡ Implement ONLY to pass tests
⚡ Reuse components from inventory
⚡ Run tests frequently
⚡ Track progress with todos
This phase MUST NOT:
⚡ Add untested features
⚡ Modify test files
⚡ Over-engineer solutions
⚡ Skip failing tests
</tdd-implementation>
<code-reuse>
Maximize reuse by:
⚡ Checking inventory first
⚡ Extending existing components
⚡ Following established patterns
⚡ Avoiding duplication
</code-reuse>
</phase3-constraints>
## Success Criteria
<phase3-success-requirements>
Phase 3 is successful ONLY when ALL of the following are true:
<mandatory-validations>
✅ All tests are passing (or no tests for simple tasks)
✅ All automated tooling passes:
- Linting (if configured in package.json)
- Type checking (if TypeScript project)
- Format checking (if configured)
✅ Pre-commit hooks pass (if configured)
✅ Minimal code implemented
✅ Maximum code reuse achieved
✅ No untested features added
✅ Implementation matches PRP design
</mandatory-validations>
<todo-completion-requirement>
✅ **ALL TODOS MARKED AS COMPLETED** (This is MANDATORY - use TodoWrite tool to verify)
- External tooling depends on this for phase completion tracking
- You MUST use TodoWrite to verify all todos are marked "completed"
- Phase CANNOT be considered successful with any pending or in_progress todos
</todo-completion-requirement>
<critical-requirement>
You MUST NOT mark this phase as complete if:
- ANY automated checks fail (lint, type-check, format, pre-commit)
- ANY todos remain incomplete
- The code is not in a committable state
The implementation must be in a state where all pre-commit validation would pass.
</critical-requirement>
</phase3-success-requirements>