# Validate Functional & End-to-End Testing (Auto-Execution)
## 🤖 AUTO-EXECUTION MODE (Claude Code SubAgent Default)
**Mission**: Automatically execute and validate functional, integration, and end-to-end tests across application layers
### Immediate Action Protocol:
1. **Auto-Detect Test Frameworks**: Identify functional testing tools and configurations
2. **Auto-Execute Test Suites**: Run appropriate functional test commands
3. **Auto-Analyze Results**: Parse test output for failures and performance metrics
4. **Auto-Validate Coverage**: Ensure adequate test coverage of functionality
5. **Auto-Report**: Document functional testing status in QA Results
### Non-Negotiable Requirements:
- ✅ MUST support multiple testing frameworks (Jest, Cypress, Selenium, etc.)
- ✅ MUST execute tests in proper environment configurations
- ✅ MUST validate both success criteria and performance benchmarks
- ✅ MUST analyze test coverage and identify gaps
- ✅ MUST integrate with existing QA review workflow
### Auto-Halt Conditions:
- ❌ Critical test failures detected → Report functional issues, halt
- ❌ Test framework not detected → Report configuration issues, halt
- ❌ Coverage gaps in critical functionality → Report coverage issues, halt
- ❌ Performance benchmarks not met → Report performance issues, halt
## 🎯 AUTOMATED FUNCTIONAL TESTING ENGINE
### Test Framework Auto-Detection:
```yaml
# Comprehensive test framework detection
test_framework_detection:
  javascript_typescript:
    - Jest + React Testing Library (component testing)
    - Cypress (end-to-end testing)
    - Playwright (cross-browser testing)
    - Supertest (API testing)
  java:
    - JUnit + Mockito (unit testing)
    - Selenium (browser automation)
    - RestAssured (API testing)
    - TestNG (advanced testing)
  python:
    - pytest (comprehensive testing)
    - unittest (standard library)
    - Selenium (browser testing)
    - Requests (API testing)
  go:
    - testing package (standard library)
    - testify (assertions)
    - ginkgo/gomega (BDD testing)
  api_focused:
    - Postman collections
    - Insomnia tests
    - OpenAPI contract tests
    - GraphQL testing tools
  mobile:
    - Appium (cross-platform mobile)
    - Espresso (Android)
    - XCTest (iOS)
    - Detox (React Native)
```
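A minimal detection sketch in Python, assuming detection is driven by well-known marker files and `package.json` dev dependencies; the marker list below is illustrative rather than exhaustive, and the file names are the frameworks' usual conventions, not anything Orchestrix mandates:
```python
import json
from pathlib import Path

# Marker files that conventionally indicate a test framework (illustrative, not exhaustive).
FRAMEWORK_MARKERS = {
    "jest": ["jest.config.js", "jest.config.ts"],
    "cypress": ["cypress.config.js", "cypress.config.ts"],
    "playwright": ["playwright.config.js", "playwright.config.ts"],
    "pytest": ["pytest.ini", "conftest.py"],
    "go test": ["go.mod"],
    "junit": ["pom.xml", "build.gradle"],
}

def detect_frameworks(project_root: str) -> list[str]:
    """Return frameworks whose marker files exist under the project root."""
    root = Path(project_root)
    detected = [
        framework
        for framework, markers in FRAMEWORK_MARKERS.items()
        if any((root / marker).exists() for marker in markers)
    ]

    # package.json devDependencies are another common signal for JS/TS projects.
    package_json = root / "package.json"
    if package_json.exists():
        dev_deps = json.loads(package_json.read_text()).get("devDependencies", {})
        for name in ("jest", "cypress", "playwright", "supertest"):
            if name in dev_deps and name not in detected:
                detected.append(name)
    return detected

if __name__ == "__main__":
    print(detect_frameworks("."))
```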
### Functional Test Execution Auto-Process:
```yaml
# Systematic functional test execution
test_execution:
  test_discovery:
    - Identify test files and test directories
    - Detect test configuration files
    - Map tests to functionality and user stories
    - Determine test execution order and dependencies
  environment_setup:
    - Configure test databases and mock services
    - Set up test users and authentication
    - Prepare test data and fixtures
    - Initialize test environment variables
  test_execution_strategy:
    - Execute unit tests first for quick feedback
    - Run integration tests for component interactions
    - Execute end-to-end tests for user workflows
    - Perform performance and load testing
  result_analysis:
    - Parse test output for failures and errors
    - Analyze test duration and performance trends
    - Calculate test coverage metrics
    - Identify flaky tests and reliability issues
```
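A sketch of the execution step, assuming each detected framework maps to a shell command and that a non-zero exit code signals failure; the command table and the 30-minute timeout are placeholders to be replaced by project-specific scripts:
```python
import subprocess
from dataclasses import dataclass

# Placeholder commands per framework; real projects often wrap these in npm/make scripts.
TEST_COMMANDS = {
    "pytest": ["pytest", "-q", "--maxfail=50"],
    "jest": ["npx", "jest", "--ci"],
    "go test": ["go", "test", "./..."],
}

@dataclass
class SuiteResult:
    framework: str
    passed: bool
    output: str

def run_suite(framework: str, timeout_seconds: int = 1800) -> SuiteResult:
    """Run one framework's suite and capture its output for later parsing."""
    completed = subprocess.run(
        TEST_COMMANDS[framework],
        capture_output=True,
        text=True,
        timeout=timeout_seconds,  # halt runaway suites instead of blocking the pipeline
    )
    return SuiteResult(
        framework=framework,
        passed=completed.returncode == 0,  # most runners exit 0 only when all tests pass
        output=completed.stdout + completed.stderr,
    )
```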
### Test Coverage Auto-Validation:
```yaml
# Comprehensive test coverage analysis
coverage_validation:
  code_coverage_analysis:
    - Line coverage percentage and gaps
    - Branch coverage analysis
    - Function/method coverage
    - Conditional coverage validation
  functionality_coverage:
    - Map tests to user story acceptance criteria
    - Verify all critical paths are tested
    - Identify untested edge cases and error conditions
    - Validate business logic coverage
  integration_coverage:
    - Test API endpoint coverage
    - Validate database interaction testing
    - Check external service integration tests
    - Verify UI component interaction testing
  user_journey_coverage:
    - End-to-end user workflow validation
    - Cross-browser and cross-platform testing
    - Accessibility and usability testing
    - Localization and internationalization testing
```
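One way to enforce the coverage gate, sketched against the JSON report that `coverage.py` emits via `coverage json` (the `totals` and per-file `summary.percent_covered` keys are part of that report format); the 80%/60% thresholds are illustrative and should come from project policy:
```python
import json
from pathlib import Path

LINE_COVERAGE_THRESHOLD = 80.0  # assumed project-wide gate
PER_FILE_THRESHOLD = 60.0       # assumed per-file gap threshold

def validate_coverage(report_path: str = "coverage.json") -> dict:
    """Check a coverage.py JSON report against line-coverage thresholds."""
    report = json.loads(Path(report_path).read_text())
    total_percent = report["totals"]["percent_covered"]

    # Files below the per-file threshold are surfaced as explicit coverage gaps.
    gaps = {
        filename: data["summary"]["percent_covered"]
        for filename, data in report["files"].items()
        if data["summary"]["percent_covered"] < PER_FILE_THRESHOLD
    }
    return {
        "total_percent": total_percent,
        "meets_threshold": total_percent >= LINE_COVERAGE_THRESHOLD,
        "gaps": gaps,
    }
```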
### Performance & Reliability Testing:
```yaml
# Performance and reliability assessment
performance_validation:
  response_time_validation:
    - API response time benchmarks
    - Page load time measurements
    - Transaction processing times
    - Database query performance
  load_testing:
    - Concurrent user simulation
    - Requests per second capacity
    - System resource utilization under load
    - Autoscaling and elasticity testing
  reliability_testing:
    - Error rate and failure analysis
    - Recovery time objective validation
    - Data consistency under failure
    - Rollback and recovery procedures
  scalability_testing:
    - Horizontal scaling validation
    - Database scaling performance
    - Cache effectiveness measurement
    - Content delivery network performance
```
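A small sketch of the response-time benchmark check: timing repeated requests against an endpoint and comparing the 95th percentile to a budget. The URL, sample count, and 500 ms budget are assumptions for illustration, not measured targets:
```python
import statistics
import time
import urllib.request

def p95_latency_ms(url: str, samples: int = 50) -> float:
    """Time repeated GET requests and return the 95th-percentile latency in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        latencies.append((time.perf_counter() - start) * 1000)
    # quantiles with n=20 gives 5% steps; index 18 is the 95th-percentile cut point.
    return statistics.quantiles(latencies, n=20)[18]

if __name__ == "__main__":
    budget_ms = 500  # assumed budget; align with the story's performance criteria
    observed = p95_latency_ms("http://localhost:8080/health")
    print(f"API P95: {observed:.1f} ms ({'OK' if observed <= budget_ms else 'OVER BUDGET'})")
```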
## 🔧 EXECUTION LOGIC
### Functional Test Validation Auto-Methodology:
```yaml
# Systematic functional testing process
validation_execution:
  test_prioritization:
    - Critical path tests first
    - High-risk functionality priority
    - Frequently used features emphasis
    - Recent changes regression testing
  execution_environment:
    - Development environment for quick iteration
    - Staging environment for integration testing
    - Production-like environment for performance tests
    - Isolated environment for reliability testing
  result_categorization:
    - Success: All tests pass, coverage adequate
    - Warning: Tests pass but coverage gaps exist
    - Failure: Test failures preventing release
    - Flaky: Intermittent test failures requiring investigation
  automated_remediation:
    - Auto-fix common test configuration issues
    - Suggest test data generation techniques
    - Provide test isolation strategies
    - Generate missing test skeletons
```
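The four result categories above reduce to a small decision function; a sketch assuming failure, flakiness, and coverage data have already been parsed, and that a "flaky" test is one that fails and then passes on an identical re-run:
```python
def categorize_run(failed: int, flaky: int, coverage_ok: bool) -> str:
    """Map parsed run data onto the result categories used by this validation."""
    if failed > 0:
        return "FAILURE"  # hard failures always block release
    if flaky > 0:
        return "FLAKY"    # intermittent failures need investigation before sign-off
    if not coverage_ok:
        return "WARNING"  # tests pass but coverage gaps exist
    return "SUCCESS"

# Example: no hard failures, two flaky tests, coverage gate met -> "FLAKY"
print(categorize_run(failed=0, flaky=2, coverage_ok=True))
```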
### Test Quality & Maintenance Validation:
```yaml
# Test suite quality assessment
test_quality_validation:
  test_design_quality:
    - Meaningful test names and descriptions
    - Proper test isolation and independence
    - Clear assertions and expectations
    - Appropriate test granularity
  maintainability_assessment:
    - Test code duplication analysis
    - Test data management quality
    - Mock and stub effectiveness
    - Test configuration simplicity
  reliability_analysis:
    - Flaky test detection and reporting
    - Test execution time consistency
    - Environment dependency analysis
    - Test ordering dependency issues
  documentation_quality:
    - Test purpose documentation
    - Test data requirements clarity
    - Environment setup instructions
    - Failure investigation guidance
```
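Flaky-test detection usually means re-running the failing subset and checking whether outcomes change; a pytest-oriented sketch, assuming the node IDs of failed tests are available from the first run (the re-run command is an assumption, not an Orchestrix-defined interface):
```python
import subprocess

def find_flaky_tests(failed_test_ids: list[str], reruns: int = 3) -> list[str]:
    """Re-run each failed test in isolation; mixed or all-passing outcomes flag it for review."""
    flagged = []
    for test_id in failed_test_ids:
        outcomes = []
        for _ in range(reruns):
            completed = subprocess.run(
                ["pytest", test_id, "-q"],  # re-run a single test node ID
                capture_output=True,
                text=True,
            )
            outcomes.append(completed.returncode == 0)
        if any(outcomes) and not all(outcomes):
            flagged.append(test_id)  # mixed results across identical re-runs: flaky
        elif all(outcomes):
            flagged.append(test_id)  # fails in the suite, passes alone: likely isolation/ordering issue
    return flagged
```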
### Integration with QA Workflow:
```yaml
# Seamless integration with existing QA process
qa_integration:
  comprehensive_validation:
    - Execute after compilation and container validation
    - Provide final functional quality assurance
    - Block release if critical functionality broken
    - Ensure user experience quality
  results_integration:
    - Include functional testing status in QA Results
    - Document test coverage gaps and recommendations
    - Provide performance benchmark results
    - Track test reliability metrics over time
  continuous_improvement:
    - Identify frequently failing functionality
    - Suggest test automation opportunities
    - Recommend test environment improvements
    - Provide test data management strategies
```
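A sketch of the results-integration step, assuming QA Results live in a markdown story file that this command can append to; the file path, heading, and field names are placeholders:
```python
from datetime import date
from pathlib import Path

def append_qa_results(story_file: str, status: str, pass_rate: float, coverage: float) -> None:
    """Append a functional-testing summary to the story's QA Results section."""
    entry = (
        f"\n### Functional Testing Results ({date.today().isoformat()})\n"
        f"- **Status**: {status}\n"
        f"- **Pass Rate**: {pass_rate:.1f}%\n"
        f"- **Code Coverage**: {coverage:.1f}%\n"
    )
    with Path(story_file).open("a", encoding="utf-8") as handle:
        handle.write(entry)

# Example with a placeholder story path:
# append_qa_results("docs/stories/1.2.story.md", "SUCCESS", 100.0, 87.5)
```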
## ⚡ AUTO-VALIDATION CHECKPOINTS
### Pre-Test Validation:
```bash
✓ Test frameworks correctly identified
✓ Test environment properly configured
✓ Test data and fixtures prepared
✓ Dependent services and databases available
✓ Test execution plan validated
```
### Test Execution Validation:
```bash
✓ Test suites execute successfully
✓ No test framework configuration errors
✓ Test isolation properly maintained
✓ Resource usage during testing monitored
✓ Test execution time within acceptable limits
```
### Result Analysis Validation:
```bash
✓ Test results properly parsed and categorized
✓ Coverage analysis completed accurately
✓ Performance metrics collected and analyzed
✓ Failure root cause analysis performed
✓ QA Results section updated comprehensively
```
### Post-Test Validation:
```bash
✓ Test environment properly cleaned up
✓ Test artifacts and reports generated
✓ Flaky tests identified and documented
✓ Coverage gaps clearly communicated
✓ Release readiness determination made
```
## 📊 AUTOMATED FUNCTIONAL TESTING REPORT
### Functional Testing Report Auto-Template:
```markdown
## Functional Testing Results
### Validation Date: {{current_date}}
### Test Framework: {{primary_test_framework}}
### Agent Model: {{ai_model_version}}
### Test Execution Summary
**Overall Test Status**: {{SUCCESS/WARNING/FAILURE}}
**Test Duration**: {{total_test_time}} seconds
**Tests Executed**: {{test_count}} tests
**Test Pass Rate**: {{pass_percentage}}%
### Test Results Breakdown
**Passed**: {{passed_tests}} tests
**Failed**: {{failed_tests}} tests
**Skipped**: {{skipped_tests}} tests
**Flaky**: {{flaky_tests}} tests (intermittent failures)
### Failure Analysis
{{if_test_failures}}
❌ **Test Failures Detected**
**Critical Failures**:
{{critical_failures}}
**Failure Categories**:
- **Functional Defects**: {{functional_defect_count}}
- **Environment Issues**: {{environment_issue_count}}
- **Test Framework Issues**: {{framework_issue_count}}
- **Data Issues**: {{data_issue_count}}
**Root Cause Analysis**:
{{failure_root_causes}}
{{end_if}}
{{if_no_failures}}
✅ **All Tests Passed** - No functional defects detected
**Test Reliability**: {{reliability_score}}/10
{{end_if}}
### Test Coverage Analysis
**Code Coverage**: {{code_coverage}}%
- **Line Coverage**: {{line_coverage}}%
- **Branch Coverage**: {{branch_coverage}}%
- **Function Coverage**: {{function_coverage}}%
**Functional Coverage**:
- **Acceptance Criteria**: {{ac_coverage}}% covered
- **User Stories**: {{story_coverage}}% covered
- **Critical Paths**: {{critical_path_coverage}}% covered
**Coverage Gaps**:
{{coverage_gap_analysis}}
### Performance Metrics
**Response Times**:
- **API P95**: {{api_p95}} ms
- **Page Load**: {{page_load}} ms
- **DB Queries**: {{db_query}} ms
**Load Testing**:
- **Max Users**: {{max_users}} concurrent
- **RPS Capacity**: {{requests_per_second}} RPS
- **Error Rate**: {{error_rate}}%
**Resource Usage**:
- **CPU Utilization**: {{cpu_usage}}%
- **Memory Usage**: {{memory_usage}} MB
- **Network I/O**: {{network_io}} MB/s
### Test Quality Assessment
- **Test Design Quality**: {{quality_score}}/10
- **Maintainability**: {{maintainability_score}}/10
- **Reliability**: {{reliability_score}}/10
- **Documentation**: {{documentation_score}}/10
### Recommendations
{{test_improvement_recommendations}}
### Next Steps
{{functional_testing_next_steps}}
```
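The `{{placeholder}}` fields in the template can be filled from parsed results with a simple substitution; a minimal sketch that leaves unknown fields visible for manual review and assumes conditional blocks (`{{if_...}}`/`{{end_if}}`) are resolved in a separate pass:
```python
import re

def render_report(template: str, values: dict[str, str]) -> str:
    """Fill {{placeholder}} fields with parsed results; unresolved fields stay visible."""
    def substitute(match: re.Match) -> str:
        return values.get(match.group(1), match.group(0))
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

# Example with one field from the template above:
print(render_report("**Test Pass Rate**: {{pass_percentage}}%", {"pass_percentage": "98.5"}))
```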
## 🛠️ ERROR HANDLING & RECOVERY
### Test Issue Management:
```yaml
error_handling:
  test_failures:
    functional_defects: Document specific bug details and reproduction steps
    environment_issues: Fix test environment configuration
    data_problems: Regenerate or fix test data
    timing_issues: Adjust test timeouts and waits
  framework_issues:
    configuration_errors: Fix test framework configuration
    dependency_problems: Resolve test dependency issues
    version_conflicts: Align test framework versions
  coverage_gaps:
    missing_tests: Generate test skeletons for uncovered code
    inadequate_testing: Suggest additional test scenarios
    edge_case_missing: Recommend edge case testing
  performance_issues:
    slow_tests: Optimize test execution time
    resource_leaks: Fix test resource management
    scalability_problems: Address performance bottlenecks
```
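For the `timing_issues` case, the usual automated remediation is a bounded retry with a growing wait before declaring a hard failure; a generic sketch (the attempt count and backoff factors are assumptions, not framework defaults):
```python
import time

def with_retry(action, attempts: int = 3, base_wait_seconds: float = 2.0):
    """Run a timing-sensitive step with exponential backoff between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except TimeoutError:
            if attempt == attempts:
                raise  # out of retries: surface the timeout as a real failure
            time.sleep(base_wait_seconds * 2 ** (attempt - 1))  # let a slow environment settle
```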
### Quality Assurance Fallback:
```yaml
fallback_strategies:
  complex_test_issues:
    - Generate detailed test failure analysis reports
    - Provide manual test reproduction instructions
    - Suggest alternative testing approaches
    - Escalate to development team for defect fixing
  environment_specific_issues:
    - Document test environment requirements
    - Provide containerized test environment solutions
    - Suggest test environment automation strategies
    - Recommend environment parity between dev/test/prod
  test_optimization:
    - Suggest test parallelization strategies
    - Recommend test execution order optimization
    - Provide test data management improvements
    - Suggest test framework migration if beneficial
```
## 🎯 SUCCESS CRITERIA
### Functional Testing Success Indicators:
- ✅ All test suites execute successfully
- ✅ Zero test failures in critical functionality
- ✅ Adequate test coverage achieved
- ✅ Performance benchmarks met
- ✅ QA Results section comprehensively populated
### Quality Assurance Gates:
- **Test Reliability**: High test pass rate, low flakiness
- **Coverage Quality**: Comprehensive functional coverage
- **Performance**: Meets response time and capacity requirements
- **Maintainability**: Well-structured, maintainable test code
- **Documentation**: Clear test purpose and documentation
### Approval Criteria:
- **✅ Approved - Functionally Valid**: All tests pass, coverage adequate
- **⚠️ Warning - Review Needed**: Tests pass but coverage gaps exist
- **❌ Failed - Fix Required**: Test failures preventing release
**Integration Note**: This is the final validation step before story completion; it should execute after all other technical validations have passed.