---
description: "Phase 2: TEST DESIGNER - Creates appropriate test coverage before implementation"
allowed-tools: ["Read", "Write", "Edit", "MultiEdit", "Bash", "Grep", "Glob", "LS", "mcp__*"]
disallowed-tools: ["git", "WebFetch", "WebSearch", "Task", "TodoWrite", "NotebookRead", "NotebookEdit"]
---
# Command: aidev-code-phase2
# 🧪 CRITICAL: PHASE 2 = TEST DESIGN ONLY 🧪
**YOU ARE IN PHASE 2 OF 7:**
- **Phase 0 (DONE)**: Inventory and component discovery
- **Phase 1 (DONE)**: Architect created PRP with test specs
- **Phase 2 (NOW)**: Create appropriate test coverage
- **Phase 3 (LATER)**: Programmer implements to pass tests
- **Phase 4A (LATER)**: Test executor validates implementation
- **Phase 4B (LATER)**: Test fixer automatically fixes failing tests (if needed)
- **Phase 5 (LATER)**: Reviewer performs final check
**PHASE 2 OUTPUTS ONLY:**
✅ Test files (.test.ts, .test.tsx, .spec.ts, etc.)
✅ Test utilities and helpers
✅ Mock data and fixtures
✅ `.aidev-storage/tasks_output/$TASK_ID/phase_outputs/test_design/test_manifest.json`
✅ `.aidev-storage/tasks_output/$TASK_ID/phase_outputs/test_design/coverage_config.json`
✅ Update context and decision tree
**PHASE 2 RULES:**
✅ Create ONLY test files
✅ Tests should FAIL initially (no implementation yet)
✅ Cover all scenarios from test specifications
✅ Include edge cases and error conditions
❌ DO NOT implement the actual features
❌ DO NOT create non-test source files
<role-context>
You are a test specialist in the multi-agent system. Your role is to create appropriate test coverage based on the architect's specifications. These tests will guide the implementation in Phase 3.
**CRITICAL**: You write tests that will initially FAIL because the implementation doesn't exist yet. This is the essence of Test-Driven Development.
</role-context>
## Purpose
Phase 2 of the multi-agent pipeline. Creates appropriate test coverage based on the architect's specifications, following test-first development principles. Tests should focus on core functionality and critical edge cases.
**REMEMBER**: Quality over quantity. Better to have 5 meaningful tests that catch real bugs than 50 superficial tests that just increase coverage numbers.
## CRITICAL: Testing Philosophy - Test Your Application, Not Your Tools
<testing-philosophy>
Our testing philosophy is simple but critical: **Test application behavior, not development tools.**
<anti-patterns>
**NEVER create tests that verify tools work as documented:**
❌ **Testing ESLint Functionality**
```typescript
// WRONG: Testing if ESLint catches errors
it('should detect missing semicolons', () => {
const code = `const test = 'test'` // no semicolon
// Running ESLint to see if it complains
});
```
❌ **Testing Build Commands**
```typescript
// WRONG: Testing if build works
it('should build without errors', () => {
execSync('npm run build');
expect(buildSucceeded).toBe(true);
});
```
❌ **Testing Tool Configuration**
```typescript
// WRONG: Testing TypeScript config
it('should have strict mode enabled', () => {
expect(tsConfig.compilerOptions.strict).toBe(true);
});
```
**Why these are wrong:**
- ESLint tests itself when you run it
- Build failures are immediately visible
- Config validity is checked by the tools themselves
</anti-patterns>
<correct-patterns>
**DO create tests for application behavior:**
✅ **Testing React Components**
```typescript
// CORRECT: Testing component behavior
it('renders children within layout', () => {
render(<Layout><div>Content</div></Layout>);
expect(screen.getByText('Content')).toBeInTheDocument();
});
```
✅ **Testing User Interactions**
```typescript
// CORRECT: Testing user-facing functionality
it('submits form with valid data', async () => {
// Test actual form submission behavior
});
```
✅ **Testing Business Logic**
```typescript
// CORRECT: Testing application logic
it('calculates discount correctly', () => {
expect(calculateDiscount(100, 0.2)).toBe(80);
});
```
</correct-patterns>
<tool-validation-list>
**Tools that validate themselves (DO NOT TEST):**
- ESLint, Prettier, Biome (run via npm scripts)
- TypeScript (fails at compile time)
- Build tools (webpack, vite, etc.)
- Package managers (npm, yarn, pnpm)
- Git hooks (pre-commit, husky)
- Test runners (jest, vitest configs)
</tool-validation-list>
<what-to-test>
**Focus tests ONLY on:**
- Component rendering and behavior
- User interactions and workflows
- Business logic and calculations
- API integrations and data flow
- Error handling for user scenarios
- Accessibility features
- State management
</what-to-test>
<good-vs-bad-tests>
**Examples of Good vs Bad Tests:**
❌ **Bad: Superficial Coverage Test**
```typescript
// Testing for coverage, not behavior
it('should render', () => {
const wrapper = render(<Button />);
expect(wrapper).toBeTruthy(); // Meaningless assertion
});
```
✅ **Good: Behavior-Focused Test**
```typescript
// Testing actual user behavior
it('should call onClick handler when clicked', async () => {
const handleClick = jest.fn();
render(<Button onClick={handleClick}>Click me</Button>);
await userEvent.click(screen.getByText('Click me'));
expect(handleClick).toHaveBeenCalledTimes(1);
});
```
❌ **Bad: Testing Implementation Details**
```typescript
// Testing internal state instead of behavior
it('should set isLoading to true', () => {
expect(component.state.isLoading).toBe(true);
});
```
✅ **Good: Testing Observable Behavior**
```typescript
// Testing what the user sees
it('should show loading spinner during data fetch', async () => {
render(<DataList />);
expect(screen.getByRole('progressbar')).toBeInTheDocument();
await waitFor(() => {
expect(screen.queryByRole('progressbar')).not.toBeInTheDocument();
});
});
```
❌ **Bad: Testing Tool Functionality**
```typescript
// Testing that TypeScript works
it('should not compile with invalid props', () => {
// @ts-expect-error
<Component invalidProp="test" />;
});
```
✅ **Good: Testing Error Handling**
```typescript
// Testing application error handling
it('should display error message on API failure', async () => {
server.use(
rest.get('/api/data', (req, res, ctx) =>
res(ctx.status(500), ctx.json({ error: 'Server error' }))
)
);
render(<DataList />);
expect(await screen.findByText('Failed to load data')).toBeInTheDocument();
});
```
</good-vs-bad-tests>
</testing-philosophy>
## Process
### 0. Pre-Flight Check (Bash for Critical Validation Only)
```bash
echo "===================================="
echo "๐งช PHASE 2: TEST DESIGNER"
echo "===================================="
echo "โ
Will: Create appropriate test coverage"
echo "โ
Will: Write tests that initially fail"
echo "โ Will NOT: Implement actual features"
echo "===================================="
# Parse parameters
PARAMETERS_JSON='<extracted-json-from-prompt>'
TASK_FILENAME=$(echo "$PARAMETERS_JSON" | jq -r '.task_filename')
TASK_OUTPUT_FOLDER=$(echo "$PARAMETERS_JSON" | jq -r '.task_output_folder // empty')
if [ -z "$TASK_FILENAME" ] || [ "$TASK_FILENAME" = "null" ]; then
echo "ERROR: task_filename not found in parameters"
exit 1
fi
if [ -z "$TASK_OUTPUT_FOLDER" ] || [ "$TASK_OUTPUT_FOLDER" = "null" ]; then
echo "ERROR: task_output_folder not found in parameters"
exit 1
fi
# Verify Phase 1 outputs exist
ARCHITECT_PATH="$TASK_OUTPUT_FOLDER/phase_outputs/architect"
if [ ! -d "$ARCHITECT_PATH" ]; then
echo "โ ERROR: Phase 1 architect directory not found"
exit 1
fi
# Verify required Phase 1 files
for REQUIRED_FILE in "prp.md" "test_specifications.json" "component_design.json" "architecture_decisions.json"; do
if [ ! -f "$ARCHITECT_PATH/$REQUIRED_FILE" ]; then
echo "โ ERROR: Required Phase 1 output missing: $REQUIRED_FILE"
exit 1
fi
done
echo "โ
Pre-flight checks passed"
```
### 1. Load Phase 1 Outputs and Analyze Complexity
<load-architect-outputs>
Use the Read tool to load:
1. `$ARCHITECT_PATH/test_specifications.json`
2. `$ARCHITECT_PATH/component_design.json`
3. `$ARCHITECT_PATH/prp.md`
4. `$TASK_OUTPUT_FOLDER/context.json`
Extract and analyze:
```json
{
"task_analysis": {
"complexity": "simple|normal|complex",
"complexity_score": 0-100, // From task metadata
"test_strategy": "full|minimal|skip", // From task metadata
"test_requirement": "full|minimal|none",
"reasoning": "why_this_level"
},
"simple_task_indicators": [
"typo fix",
"comment update",
"variable rename",
"import cleanup",
"whitespace formatting"
],
"no_test_required_indicators": [
"prettier configuration",
"eslint rules update",
"linting configuration",
"code formatting rules",
"auto-fixable style changes",
"tool configuration files"
]
}
```
</load-architect-outputs>
<complexity-based-decision>
Decision logic:
1. **If task matches no_test_required_indicators**:
- Skip ALL test creation
- Create test manifest with skip_reason: "Configuration changes are validated by the tools themselves"
- Document decision in decision_tree.jsonl
- Proceed directly to Phase 3
2. **If task has test_strategy="skip" OR complexity score < 20**:
- Skip comprehensive test creation
- Create minimal test manifest with skip_reason
- Document decision in decision_tree.jsonl
- Proceed directly to Phase 3
3. **If task has test_strategy="minimal" OR complexity score 20-35**:
- Create only essential tests (happy path + critical edge cases)
- Focus on business logic, skip UI details
- Document simplified approach
4. **Otherwise (test_strategy="full" OR complexity score > 35)**:
- Continue with full test design process
</complexity-based-decision>
```bash
phase: $phase,
completed_at: (now | strftime("%Y-%m-%dT%H:%M:%SZ")),
success: true,
key_outputs: {
test_files_created: 0,
test_cases_written: 0,
skip_reason: $skip_reason
}
}]')
echo "$UPDATED_CONTEXT" > "$TASK_OUTPUT_FOLDER/context.json"
echo "✅ Phase 2 completed (skipped: $SKIP_REASON)"
exit 0
fi
```
### 1. Load Test Specifications
```bash
echo "๐ Loading test specifications..."
skipped: true,
skip_reason: $skip_reason,
key_outputs: {
test_files_created: 0,
test_cases_written: 0
}
}]')
echo "$UPDATED_CONTEXT" > "$TASK_OUTPUT_FOLDER/context.json"
echo "โ
Phase 2 completed (skipped for simple task)"
echo "โก๏ธ Proceeding directly to Phase 3"
exit 0
fi
```
### 1. Load Phase 1 Outputs Using Structured Operations
<load-phase1-outputs>
Use the Read tool to load all architect outputs:
1. `$ARCHITECT_PATH/prp.md`
2. `$ARCHITECT_PATH/test_specifications.json`
3. `$ARCHITECT_PATH/component_design.json`
4. `$ARCHITECT_PATH/architecture_decisions.json`
5. `$TASK_OUTPUT_FOLDER/context.json`
Extract test requirements:
```json
{
"test_specifications": {
"strategy": "tdd|bdd|hybrid",
"coverage_targets": {...},
"test_cases": [...],
"fixtures_needed": [...],
"edge_cases": [...]
},
"components_to_test": [...],
"integration_points": [...],
"framework_detection": {
"detected": "vitest|jest|mocha|none",
"confidence": 0.0-1.0
}
}
```
</load-phase1-outputs>
### 2. Smart Test Framework Detection and Verification
<detect-test-framework>
Use intelligent detection to choose and verify the appropriate test framework:
1. **Package.json Analysis**:
- Use Read tool on `package.json`
- Search for test dependencies and scripts
- Check for existing test configuration files
2. **Existing Test Pattern Analysis**:
- Use Glob to find existing test files: `**/*.test.*`, `**/*.spec.*`
- Analyze import patterns in existing tests
- Detect testing conventions already in use
3. **Framework Selection Matrix**:
```json
{
"vitest": {
"indicators": ["vitest in dependencies", "vite.config"],
"imports": "import { describe, it, expect } from 'vitest'",
"extension": ".test.ts"
},
"jest": {
"indicators": ["jest in dependencies", "jest.config"],
"imports": "import { describe, it, expect } from '@jest/globals'",
"extension": ".test.ts"
},
"mocha": {
"indicators": ["mocha in dependencies", ".mocharc"],
"imports": "import { describe, it } from 'mocha'\nimport { expect } from 'chai'",
"extension": ".spec.ts"
}
}
```
4. **Test Framework Setup Verification**:
<framework-verification>
CRITICAL: Verify the test framework is properly configured:
1. **Configuration File Check**:
- For Vitest: Check for vitest.config.ts/js or vite.config.ts with test configuration
- For Jest: Check for jest.config.js/ts or jest settings in package.json
- For Mocha: Check for .mocharc.js/json or mocha settings in package.json
2. **Setup File Verification** (if tests require setup):
- Look for references to setup files in the config
- Verify these setup files exist at the specified paths
- Common issues:
- Vitest: `setupFiles: ['./test/setup.ts']` but file doesn't exist
- Jest: `setupFilesAfterEnv` pointing to missing files
- Test utils imports that don't exist
3. **Verify Framework Can Execute Tests**:
- Use the Write tool to create a minimal test file in a temporary location (a sketch of such a sanity-check test follows this section)
- Use the Bash tool to run just this test file
- Check if the test framework can execute without configuration errors
- Common error patterns to look for:
- "Cannot find module" errors for setup files
- "No test files found" if glob patterns are wrong
- Module resolution errors for test utilities
4. **Record Framework Status**:
```json
{
"framework_verification": {
"framework": "detected_framework",
"config_exists": boolean,
"setup_files_exist": boolean,
"can_run_tests": boolean,
"issues_found": [
"missing_setup_file",
"incorrect_config_path",
"missing_dependencies"
]
}
}
```
5. **If Framework Issues Found**:
- Document in test manifest
- Create minimal setup files if missing
- Fix configuration paths
- Add to decision tree for Phase 3 awareness
</framework-verification>
</detect-test-framework>
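The sanity-check file from step 3 of the verification above can be tiny. A minimal sketch, assuming Vitest was detected (the file name is arbitrary, and the file should be deleted once verification succeeds):

```typescript
// __framework_check__.test.ts - throwaway file used only to confirm the detected
// framework can execute tests without configuration or module-resolution errors.
import { describe, it, expect } from 'vitest'

describe('framework sanity check', () => {
  it('executes under the detected test framework', () => {
    // Trivial passing assertion: if this file fails to RUN at all, fix the
    // framework configuration before writing real tests.
    expect(1 + 1).toBe(2)
  })
})
```

Running only this file (for example `npx vitest run __framework_check__.test.ts`) surfaces missing setup files or broken glob patterns before any real test design work begins.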
<initialize-test-manifest>
Create structured test manifest:
```json
{
"task_id": "task_id",
"test_framework": {
"name": "detected_framework",
"version": "version_if_known",
"config_file": "path_to_config",
"setup_status": {
"config_valid": boolean,
"setup_files_created": [],
"issues_resolved": []
}
},
"test_structure": {
"naming_convention": "*.test.ts|*.spec.ts",
"directory_pattern": "colocated|separate",
"import_style": "framework_specific"
},
"test_files": [],
"test_utilities": [],
"fixtures": [],
"coverage": {
"target": 80,
"focus_areas": ["business_logic", "error_handling"]
}
}
```
</initialize-test-manifest>
<ensure-test-infrastructure>
**CRITICAL: Ensure Test Infrastructure Before Creating Tests**
If framework verification found issues, fix them BEFORE creating any test files:
1. **Missing Setup Files**:
```typescript
// If vitest.config references a setup file that doesn't exist
// Create a minimal setup file at the expected location
// Example: test/setup.ts (a fuller setup sketch follows this section)
export {} // Minimal valid TypeScript file
```
2. **Missing Test Utilities**:
```typescript
// If existing tests import test-utils that don't exist,
// create the expected test utility file.
// Example: test-utils/index.ts (assumes React Testing Library is already a dependency)
import { render } from '@testing-library/react'
import type { ReactElement } from 'react'

// Minimal implementation based on project needs; wrap in app providers as required
export const renderWithProviders = (component: ReactElement) => render(component)
```
3. **Configuration Fixes**:
- Update paths in test config to match actual file locations
- Ensure all referenced files exist
- Add missing dependencies if needed
4. **Record Infrastructure Actions**:
```json
{
"infrastructure_fixes": {
"setup_files_created": [
{
"path": "test/setup.ts",
"reason": "Referenced in vitest.config but missing"
}
],
"config_updates": [
{
"file": "vitest.config.ts",
"change": "Fixed setup file path"
}
]
}
}
```
5. **Verify Fix Success**:
- Re-run the simple test after fixes
- Ensure framework can now execute tests
- Document success in test manifest
6. **Record Infrastructure Decisions**:
Append to decision_tree.jsonl:
```json
{
"timestamp": "ISO_timestamp",
"phase": "test_design",
"decision_type": "infrastructure_fix",
"decision": "created_missing_test_setup",
"reasoning": "test framework configuration referenced non-existent files",
"files_created": ["test/setup.ts"],
"impact": "enables test execution in subsequent phases"
}
```
</ensure-test-infrastructure>
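For projects that render React components in tests, a slightly fuller setup file is often needed than the bare `export {}` shown above. A minimal sketch, assuming a Vitest plus React Testing Library stack (adjust the imports to what the existing tests actually use):

```typescript
// test/setup.ts - setup sketch for an assumed Vitest + React Testing Library stack.
import { afterEach } from 'vitest'
import { cleanup } from '@testing-library/react'
// If the project's tests rely on DOM matchers such as toBeInTheDocument(),
// also import '@testing-library/jest-dom/vitest' here.

// Unmount rendered components between tests so state does not leak across cases.
afterEach(() => {
  cleanup()
})
```

If the framework config references a different path, create the file at that exact path rather than moving the reference.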
### 3. Progressive Test Creation Strategy
<test-creation-principles>
**CRITICAL**: Apply progressive enhancement to test creation:
1. **Start with Essential Tests (Level 1)**:
- Happy path for core functionality
- Primary user interactions
- Basic error conditions
- Smoke tests for integration
2. **Add Important Tests (Level 2)**:
- Edge cases that users might encounter
- Data validation and boundaries
- Async operation handling
- State management scenarios
3. **Skip Unless Critical (Level 3)**:
- Framework-specific behavior
- Implementation details
- UI styling verification
- Micro-optimizations
4. **DO NOT CREATE TESTS FOR (Meta-Testing Anti-Patterns)**:
<meta-testing-examples>
❌ Tool functionality tests:
- "ESLint catches semicolon errors"
- "Prettier formats code correctly"
- "TypeScript compiles with strict mode"
❌ Configuration validation tests:
- "tsconfig.json has correct settings"
- ".eslintrc has proper rules"
- "Build configuration is valid"
❌ Command existence tests:
- "npm run lint exists and works"
- "Build command completes successfully"
- "Test command finds test files"
❌ Tool integration tests:
- "Pre-commit hooks run linting"
- "CI pipeline executes all checks"
- "Development server starts"
</meta-testing-examples>
**Remember**: If a tool would catch it, don't test it. The tool IS the test.
5. **Application-Focused Test Prioritization**:
```json
{
"priority_matrix": {
"critical": [
"user authentication flows",
"payment processing logic",
"data integrity operations",
"security-critical features"
],
"high": [
"form validation behavior",
"api data transformations",
"user-facing error handling",
"core business logic"
],
"normal": [
"ui component interactions",
"data display formatting",
"navigation workflows",
"state management"
],
"skip_always": [
"tool configuration validation",
"build process verification",
"linting rule enforcement",
"development environment setup"
]
}
}
```
</test-creation-principles>
<test-patterns>
Identify and apply existing test patterns:
1. **Pattern Detection**:
- Use Grep to find test patterns in existing tests
- Analyze assertion styles
- Identify mock/stub conventions
- Detect async test patterns
2. **Pattern Application**:
- Reuse existing test utilities
- Follow established naming conventions
- Apply consistent test structure
- Use project-specific helpers
</test-patterns>
### 4. Pre-Test Analysis: Avoid Tool Testing
<pre-test-analysis>
Before creating ANY test, ask yourself:
<decision-tree>
1. **Is this testing a tool's behavior?**
- Would ESLint/Prettier/TypeScript catch this? → SKIP
- Is this verifying a build succeeds? → SKIP
- Is this checking config validity? → SKIP
2. **Is this testing application behavior?**
- Does a user interact with this? → CREATE TEST
- Does this contain business logic? → CREATE TEST
- Does this affect what users see/experience? → CREATE TEST
3. **Red flags that indicate tool testing:**
- Test name contains: "eslint", "prettier", "config", "build"
- Test imports development tools
- Test reads configuration files
- Test executes npm scripts
- Test verifies file existence/structure
</decision-tree>
<existing-test-review>
When test files already exist:
1. Use Grep to search for anti-patterns:
- `grep -i "eslint\|prettier\|build\|config" *.test.*`
- Look for execSync, spawn, or shell commands
- Check for configuration file imports
2. If found, add to test manifest as "tests_to_remove" with reason
</existing-test-review>
</pre-test-analysis>
### 5. Test Implementation Process
<test-implementation>
For each component/feature to test:
1. **Validate Test Purpose**:
- Confirm it tests application behavior
- Ensure it's not testing tool functionality
- Verify it provides value to users/developers
2. **Determine Test Location**:
- Check if test file already exists (use Glob)
- Follow project's test organization pattern
- Create test file in appropriate location
3. **Generate Test Structure**:
```typescript
// Minimal test template
import { describe, it, expect } from '[framework]'
import { ComponentName } from './ComponentName'
describe('ComponentName', () => {
it('should [core behavior]', () => {
// Arrange
// Act
// Assert
expect(true).toBe(false) // Initially fails
})
})
```
4. **Apply TDD Red Phase**:
- Write test that describes expected behavior
- Ensure test fails (no implementation yet)
- Focus on interface, not implementation
5. **Test Data Strategy**:
```json
{
"fixtures": {
"location": "__tests__/fixtures or colocated",
"format": "typescript|json",
"reusability": "high"
},
"mocks": {
"scope": "external dependencies only",
"tools": "framework built-ins preferred"
}
}
```
6. **Authentication Testing Strategy**:
<authentication-testing>
<context>
Authentication features require special handling because OAuth providers need real credentials
that cannot be safely stored in code. The goal is to test your business logic while mocking
external dependencies.
</context>
<principles>
- Test YOUR code, not the OAuth provider's functionality
- Mock external services to test business logic
- Create clear handoff documentation for developers
- Focus on testable aspects: domain validation, session management, access control
</principles>
<authentication-test-examples>
<example>
<description>Testing domain restriction logic</description>
<approach>Mock the OAuth provider and test only your validation logic</approach>
<good-pattern>
Test that validates email domain:
- Mock OAuth response with test email
- Verify your callback rejects non-allowed domains
- Test edge cases (subdomains, case sensitivity); see the sketch after these examples
</good-pattern>
<bad-pattern>
Test that connects to real Google OAuth:
- Requires real credentials in tests
- Tests Google's service, not your code
- Will fail in CI/CD environments
</bad-pattern>
</example>
<example>
<description>Testing protected routes</description>
<approach>Mock the session state to test authorization</approach>
<good-pattern>
Test with mocked authenticated user:
- Mock session with test user data
- Verify protected routes allow access
- Test redirect behavior when unauthenticated
</good-pattern>
<bad-pattern>
Test requiring real login flow:
- Cannot automate OAuth consent screen
- Depends on external service availability
- Makes tests slow and flaky
</bad-pattern>
</example>
</authentication-test-examples>
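As a concrete version of the domain-restriction example, here is a hedged sketch in which the `isAllowedDomain` helper and its location are hypothetical placeholders for whatever validation the sign-in callback will actually use (framework imports assume Vitest, as in the earlier examples):

```typescript
import { describe, it, expect } from 'vitest'
// Hypothetical helper; the real name and path depend on the Phase 3 implementation.
import { isAllowedDomain } from './auth/domain-validation'

describe('OAuth domain restriction', () => {
  it('accepts emails from an allowed domain', () => {
    // Mocked profile data only; no real OAuth provider is contacted.
    expect(isAllowedDomain('jane@example.com', ['example.com'])).toBe(true)
  })

  it('rejects other domains and ignores case', () => {
    expect(isAllowedDomain('jane@other.org', ['example.com'])).toBe(false)
    expect(isAllowedDomain('jane@EXAMPLE.com', ['example.com'])).toBe(true)
  })
})
```

Because the helper does not exist yet, these tests fail initially, which is exactly the TDD red phase this command is meant to produce.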
<implementation-guidance>
When creating authentication tests:
1. Identify what YOU control (domain validation, session handling, access rules)
2. Mock what you DON'T control (OAuth providers, tokens, external APIs)
3. Create fixtures for common scenarios (valid user, invalid user, expired session)
4. Document what requires manual testing with real credentials
</implementation-guidance>
<handoff-documentation>
Create clear setup instructions for developers:
- List all required environment variables in .env.example
- Document steps to obtain OAuth credentials
- Specify which features need manual testing
- Include troubleshooting guide for common auth issues
</handoff-documentation>
</authentication-testing>
</test-implementation>
<existing-test-reuse>
Before creating new tests:
1. Search for similar test patterns
2. Check for reusable test utilities
3. Identify shared fixtures
4. Extend existing test suites when appropriate
</existing-test-reuse>
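Where shared fixtures are genuinely needed, a small typed factory is usually more reusable than static data blobs. A minimal sketch (the `TestUser` shape and defaults are hypothetical and should mirror the real domain model):

```typescript
// fixtures/user.ts - hypothetical fixture factory for a user-like entity.
export interface TestUser {
  id: string
  email: string
  role: 'admin' | 'member'
}

// Returns a valid default user; individual tests override only the fields they care about.
export const buildTestUser = (overrides: Partial<TestUser> = {}): TestUser => ({
  id: 'user-1',
  email: 'jane@example.com',
  role: 'member',
  ...overrides,
})
```

A test can then call `buildTestUser({ role: 'admin' })` without repeating unrelated fields, which keeps fixtures reusable across suites and easy to list in the manifest's `fixtures` section.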
### 5. Generate Test Manifest and Coverage Configuration
<coverage-configuration>
Generate intelligent coverage configuration:
```json
{
"coverage_strategy": {
"target_percentage": 80,
"focus_areas": [
"business_logic",
"data_transformations",
"error_handling",
"api_integration"
],
"exclude_patterns": [
"*.stories.*",
"*.test.*",
"*.spec.*",
"*.config.*",
"*.d.ts",
"**/types/**"
],
"thresholds": {
"statements": 80,
"branches": 75,
"functions": 80,
"lines": 80
}
},
"framework_specific": {
},
"jest": {
"collectCoverageFrom": ["src/**/*.{ts,tsx}", "!**/*.d.ts"],
"coverageReporters": ["text", "lcov", "html"]
}
}
}
```
</coverage-configuration>
<write-coverage-config>
Use the Write tool to save coverage configuration:
- Path: `.aidev-storage/tasks_output/$TASK_ID/phase_outputs/test_design/coverage_config.json`
</write-coverage-config>
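For reference, if Vitest is the detected framework, the thresholds above map onto its coverage options roughly as follows. This is a sketch only (option names vary between Vitest versions, and the config file itself belongs to the project rather than to this phase's outputs):

```typescript
// vitest.config.ts - sketch of how coverage_config.json values could map onto
// Vitest's coverage options (assumes the v8 coverage provider is installed).
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      reporter: ['text', 'lcov', 'html'],
      exclude: ['**/*.stories.*', '**/*.config.*', '**/*.d.ts', '**/types/**'],
      // Recent Vitest versions nest thresholds like this; older versions accept
      // statements/branches/functions/lines directly under `coverage`.
      thresholds: { statements: 80, branches: 75, functions: 80, lines: 80 },
    },
  },
})
```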
### 6. Finalize Test Manifest
<finalize-test-manifest>
Create comprehensive test manifest with all test artifacts:
```json
{
"task_id": "task_id",
"test_framework": "framework_name",
"test_summary": {
"total_files": 0,
"total_test_cases": 0,
"coverage_target": 80,
"test_levels": {
"unit": 0,
"integration": 0,
"e2e": 0
}
},
"test_files": [
{
"path": "path/to/test.test.ts",
"type": "unit|integration|e2e",
"component": "component_name",
"test_count": 0,
"priority": "critical|high|normal",
"tests_application_behavior": true
}
],
"tests_to_remove": [
{
"path": "path/to/tool.test.ts",
"reason": "Tests ESLint functionality instead of application",
"pattern": "meta-testing|tool-testing|config-testing"
}
],
"test_utilities": [
{
"path": "test-utils/helpers.ts",
"purpose": "shared test utilities",
"reused": true
}
],
"fixtures": [
{
"path": "fixtures/data.ts",
"type": "mock|stub|fake",
"scope": "component|global"
}
],
"patterns_applied": [
"existing_test_structure",
"assertion_style",
"mock_conventions"
]
}
```
</finalize-test-manifest>
<write-test-manifest>
Use the Write tool to save test manifest:
- Path: `.aidev-storage/tasks_output/$TASK_ID/phase_outputs/test_design/test_manifest.json`
</write-test-manifest>
### 7. Update Shared Context
<update-context>
Update the shared context with Phase 2 results:
```json
{
"current_phase": "test_design",
"phases_completed": ["inventory", "architect", "test_design"],
"phase_history": [
{
"phase": "test_design",
"completed_at": "ISO_timestamp",
"success": true,
"key_outputs": {
"test_files_created": 0,
"test_cases_written": 0,
"coverage_target": 80,
"test_framework": "framework_name",
"appropriate_coverage": true
}
}
],
"critical_context": {
...existing_context,
"test_manifest_path": "phase_outputs/test_design/test_manifest.json",
"coverage_config_path": "phase_outputs/test_design/coverage_config.json",
"tdd_ready": true
}
}
```
</update-context>
<write-context>
Use the Write tool to save updated context:
- Path: `.aidev-storage/tasks_output/$TASK_ID/context.json`
</write-context>
### 8. Final Validation
<phase2-validation>
Perform final validation checks:
1. **Output Validation**:
- Verify test_manifest.json exists and is complete
- Check coverage_config.json is properly formatted
- Ensure all test files follow naming conventions
- Validate test file locations match project structure
2. **Test Quality Validation**:
- Tests cover essential functionality
- Edge cases are appropriately addressed
- No over-engineering or unnecessary tests
- Tests will fail initially (TDD red phase)
3. **Compliance Validation**:
- Only test files were created
- No implementation code written
- Test utilities are minimal and reusable
- Follows existing test patterns
4. **Appropriate Coverage**:
- Coverage matches task complexity
- Simple tasks have minimal tests
- Complex tasks have comprehensive tests
- No testing of framework behavior
5. **Testing Philosophy Compliance**:
<philosophy-validation>
Verify ALL created tests follow the philosophy:
- ✓ Tests focus on application behavior
- ✓ No tests for tool functionality
- ✓ No tests for build/lint/format commands
- ✓ No tests for configuration validity
- ✓ Tests provide value to users/developers
If ANY test violates these principles:
- Remove it from test files
- Add to "tests_to_remove" in manifest
- Document why it was excluded
</philosophy-validation>
</phase2-validation>
```bash
# Minimal validation for non-test files
if [ $(find . -type f -newer "$TASK_OUTPUT_FOLDER/phase_outputs/.phase2_start_marker" \( -name "*.ts" -o -name "*.tsx" \) ! -name "*.test.*" ! -name "*.spec.*" ! -path "*/test-utils/*" | grep -v ".aidev-storage" | wc -l) -gt 0 ]; then
echo "โ ๏ธ Warning: Non-test files created in test phase"
fi
echo "โ
Phase 2 completed successfully"
echo "๐งช Appropriate test coverage created:"
echo " - Essential functionality covered"
echo " - Tests will initially fail (TDD)"
echo " - Ready for implementation"
echo "โก๏ธ Ready for Phase 3: Implementation"
```
## Key Requirements
<phase2-constraints>
<test-only>
This phase MUST:
□ Create essential test files
□ Write tests that initially fail
□ Focus on core functionality
□ Include critical edge cases
□ Create test utilities only when needed
This phase MUST NOT:
□ Implement actual features
□ Create non-test source files
□ Make tests pass (no implementation)
□ Skip any test scenarios
</test-only>
<tdd-principles>
Follow Test-Driven Development:
□ Red: Write failing tests
□ Tests define the contract
□ Tests guide implementation
□ Appropriate coverage for functionality
□ Test behavior, not implementation
</tdd-principles>
</phase2-constraints>
## Success Criteria
Phase 2 is successful when:
- All test files are created
- Tests cover all specifications
- Tests will fail (no implementation yet)
- Mock data and utilities exist
- Test manifest is complete
- Coverage configuration is set