# Citty Test Utils - Comprehensive Test Suite

A complete testing framework for the GitVan CLI with unit, integration, and BDD tests.

## 🧪 Test Structure

```
tests/
├── unit/                          # Unit tests for individual components
│   ├── assertions.test.mjs        # Fluent assertion API tests
│   ├── scenario-dsl.test.mjs      # Scenario DSL and test utils tests
│   └── local-runner.test.mjs      # Local runner component tests
├── integration/                   # Integration tests for component interactions
│   └── full-integration.test.mjs  # Cross-component integration tests
└── bdd/                           # BDD tests with scenario-based testing
    └── gitvan-cli-bdd.test.mjs    # Behavior-driven development tests
```

## 🚀 Quick Start

### Run All Tests

```bash
pnpm test:run
```

### Run Specific Test Types

```bash
# Unit tests only
pnpm test:unit

# Integration tests only
pnpm test:integration

# BDD tests only
pnpm test:bdd

# With coverage
pnpm test:coverage
```

### Interactive Testing

```bash
# Watch mode
pnpm test:watch

# UI mode
pnpm test:ui
```

### Comprehensive Test Runner

```bash
node run-tests.mjs
```

## 📋 Test Categories

### Unit Tests (`tests/unit/`)

**Purpose**: Test individual components in isolation.

**Coverage**:
- ✅ Fluent assertion API (`assertions.test.mjs`)
- ✅ Scenario DSL and test utilities (`scenario-dsl.test.mjs`)
- ✅ Local runner component (`local-runner.test.mjs`)

**Key Features**:
- Mocked dependencies for isolated testing
- Comprehensive assertion method testing
- Error handling validation
- Method chaining verification

### Integration Tests (`tests/integration/`)

**Purpose**: Test component interactions and real CLI execution.

**Coverage**:
- ✅ Local runner with the actual GitVan CLI
- ✅ Cleanroom runner with Docker containers
- ✅ Scenario DSL execution
- ✅ Test utilities integration
- ✅ Cross-component workflows

**Key Features**:
- Real CLI command execution
- Docker container management
- File system operations
- Multi-step workflows

### BDD Tests (`tests/bdd/`)

**Purpose**: Behavior-driven development with user-focused scenarios.

**Coverage**:
- ✅ CLI help system scenarios
- ✅ Error handling scenarios
- ✅ Command-specific help scenarios
- ✅ Docker cleanroom scenarios
- ✅ Complex workflow scenarios
- ✅ Test utilities scenarios
- ✅ Cross-environment testing

**Key Features**:
- Given-When-Then structure
- User-focused test descriptions
- Scenario-based testing
- Real-world usage patterns

## 🛠️ Test Configuration

### Vitest Configuration (`vitest.config.mjs`)

```javascript
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    testTimeout: 60000, // 60 seconds for Docker operations

    // Coverage configuration
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      thresholds: {
        branches: 80,
        functions: 80,
        lines: 80,
        statements: 80
      }
    }
  }
})
```

### Coverage Thresholds

- **Branches**: 80%
- **Functions**: 80%
- **Lines**: 80%
- **Statements**: 80%

## 📊 Test Reports

### Coverage Reports

- **HTML**: `coverage/lcov-report/index.html`
- **JSON**: `coverage/coverage-final.json`
- **LCOV**: `coverage/lcov.info`

### Test Results

- **JSON**: `test-results.json`
- **HTML**: `test-results.html`

## 🎯 Test Scenarios

### Unit Test Scenarios

#### Assertions API

- ✅ Exit code validation
- ✅ Output content matching (string/regex)
- ✅ Stderr validation
- ✅ JSON output handling
- ✅ Success/failure expectations
- ✅ Output length validation
- ✅ Method chaining (see the sketch below)
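These checks are designed to chain on a single result object. The following is a minimal sketch: `expectSuccess()` is the method shown in the Scenario DSL example later in this README, while `runLocalCitty` and `expectOutput` are assumed names inferred from the capability list above — check `assertions.test.mjs` and the package exports for the real API.

```javascript
// Illustrative only: chaining the assertion capabilities listed above.
// `runLocalCitty` and `expectOutput` are assumptions; only expectSuccess()
// appears verbatim elsewhere in this README.
import { it } from 'vitest'
import { runLocalCitty } from './index.js' // assumed export name

it('chains fluent assertions on one result', async () => {
  const result = await runLocalCitty(['--help'])

  result
    .expectSuccess()         // exit-code validation (exit code 0)
    .expectOutput('USAGE')   // string output matching
    .expectOutput(/gitvan/)  // regex output matching
})
```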
#### Scenario DSL

- ✅ Scenario builder creation
- ✅ Step definition and execution
- ✅ Expectation validation
- ✅ Error handling
- ✅ Test utilities (waitFor, retry, temp files)

#### Local Runner

- ✅ Project root detection
- ✅ Process spawning
- ✅ Error handling
- ✅ Timeout management
- ✅ JSON parsing
- ✅ Environment variables

### Integration Test Scenarios

#### Local Runner Integration

- ✅ GitVan CLI command execution
- ✅ Version command handling
- ✅ Invalid command handling
- ✅ Fluent assertions integration
- ✅ JSON output parsing
- ✅ Timeout handling

#### Cleanroom Runner Integration

- ✅ Docker container execution
- ✅ Multiple commands in the same container
- ✅ Invalid command handling
- ✅ Fluent assertions in the container
- ✅ Container lifecycle management

#### Scenario DSL Integration

- ✅ Complex multi-step scenarios
- ✅ Scenario failure handling
- ✅ Cross-runner compatibility

#### Test Utils Integration

- ✅ Temporary file creation/cleanup
- ✅ Retry logic for flaky operations
- ✅ Wait for conditions
- ✅ Complex workflow integration

### BDD Test Scenarios

#### CLI Help System

- ✅ User requests help
- ✅ Help includes all commands
- ✅ Help is well-formatted
- ✅ User requests version
- ✅ Version is a valid semantic version

#### Error Handling

- ✅ Invalid command handling
- ✅ Helpful error messages
- ✅ Graceful failure (no crashes)

#### Command-Specific Help

- ✅ Help for specific commands
- ✅ Relevant help content
- ✅ Multiple command support

#### Docker Cleanroom Testing

- ✅ Isolated environment execution
- ✅ Consistent results
- ✅ Multiple commands in the same container

#### Complex Workflows

- ✅ Multi-step GitVan workflows
- ✅ Reproducible workflows
- ✅ Scenario DSL integration

#### Test Utilities

- ✅ Temporary file management
- ✅ Retry logic for flaky operations
- ✅ Wait for conditions
- ✅ Cross-environment testing

## 🔧 Test Utilities

### Available Utilities

```javascript
import { testUtils } from './index.js'

// Wait for conditions
await testUtils.waitFor(() => condition, timeout, interval)

// Retry operations
await testUtils.retry(operation, maxAttempts, delay)

// Temporary files
const tempFile = await testUtils.createTempFile(content, extension)
await testUtils.cleanupTempFiles([tempFile])
```

### Scenario DSL

```javascript
import { scenario } from './index.js'

const testScenario = scenario("Test Name")
  .step("Description")
  .run(args, options)
  .expect(result => result.expectSuccess())

const results = await testScenario.execute(runner)
```
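These utilities compose naturally: a test can create a temp file, retry a flaky operation against it, and wait for a condition before asserting. The sketch below uses only the signatures shown above and assumes `createTempFile` resolves to the new file's path.

```javascript
// Sketch: composing the utilities above in one workflow.
// Assumption: createTempFile(content, extension) resolves to the file's path.
import { promises as fs } from 'node:fs'
import { testUtils } from './index.js'

const tempFile = await testUtils.createTempFile('{"ready": true}', '.json')

try {
  // Retry a possibly-flaky read: up to 3 attempts, 100 ms apart.
  const raw = await testUtils.retry(() => fs.readFile(tempFile, 'utf8'), 3, 100)

  // Poll every 50 ms (up to 2 s) until the parsed flag is set.
  const data = JSON.parse(raw)
  await testUtils.waitFor(() => data.ready === true, 2000, 50)
} finally {
  await testUtils.cleanupTempFiles([tempFile])
}
```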
## 🐳 Docker Requirements

For cleanroom tests, ensure Docker is running:

```bash
# Check Docker status
docker --version
docker ps

# Start Docker if needed
sudo systemctl start docker   # Linux
# or start Docker Desktop     # macOS/Windows
```

## 📈 Performance

### Test Execution Times

- **Unit Tests**: ~2-5 seconds
- **Integration Tests**: ~10-30 seconds
- **BDD Tests**: ~30-60 seconds
- **Full Suite**: ~60-120 seconds

### Optimization Tips

- Use `pnpm test:unit` for fast feedback during development
- Use `pnpm test:watch` for continuous testing
- Use `pnpm test:coverage` for coverage analysis
- Use `pnpm test:ui` for interactive debugging

## 🚨 Troubleshooting

### Common Issues

#### Docker Not Running

```bash
# Error: Docker container failed to start
# Solution: Start the Docker service
sudo systemctl start docker
```

#### Permission Issues

```bash
# Error: Permission denied
# Solution: Check file permissions
chmod +x run-tests.mjs
```

#### Timeout Issues

```javascript
// Error: Test timeout
// Solution: Increase the timeout in vitest.config.mjs
testTimeout: 120000 // 2 minutes
```

#### Coverage Issues

```bash
# Error: Coverage threshold not met
# Solution: Add more tests or adjust the thresholds
```

## 📚 Best Practices

### Writing Tests

1. **Unit Tests**: Mock external dependencies
2. **Integration Tests**: Use real components
3. **BDD Tests**: Focus on user scenarios
4. **Coverage**: Aim for 80%+ coverage
5. **Naming**: Use descriptive test names

### Test Organization

1. **Group related tests** in describe blocks
2. **Use beforeEach/afterEach** for setup/cleanup
3. **Keep tests independent** and isolated
4. **Use meaningful assertions** with clear error messages

A skeleton that combines these practices appears at the end of this README.

### Performance

1. **Run unit tests frequently** during development
2. **Run integration tests** before commits
3. **Run BDD tests** for feature validation
4. **Use watch mode** for continuous testing

## 🎉 Success Criteria

A successful test run should show:

- ✅ All unit tests passing
- ✅ All integration tests passing
- ✅ All BDD tests passing
- ✅ Coverage thresholds met (80%+)
- ✅ No flaky tests
- ✅ Clean test output

## 📞 Support

For issues with the test suite:

1. Check the troubleshooting section
2. Review the test output for specific errors
3. Ensure all dependencies are installed
4. Verify Docker is running for cleanroom tests
5. Check file permissions and paths
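Putting the organization practices together, a test file might be laid out as follows. This is a sketch, not package code: `scenario` and `testUtils` are the utilities documented above, while `localRunner` is a placeholder for whichever runner your suite passes to `execute()`.

```javascript
// Sketch: a test file applying the Test Organization practices above.
// `localRunner` is a placeholder — substitute the runner your suite uses.
import { describe, it, beforeEach, afterEach } from 'vitest'
import { scenario, testUtils } from './index.js'
import { localRunner } from './index.js' // placeholder export name

describe('gitvan --help', () => {
  let tempFiles

  beforeEach(() => {
    // Fresh state per test keeps tests independent and isolated.
    tempFiles = []
  })

  afterEach(async () => {
    // Centralized cleanup so no test leaks temporary files.
    await testUtils.cleanupTempFiles(tempFiles)
  })

  it('prints usage and exits successfully', async () => {
    const help = scenario('User requests help')
      .step('Run gitvan --help')
      .run(['--help'], {})
      .expect(result => result.expectSuccess())

    await help.execute(localRunner)
  })
})
```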