# Build Performance Optimization Task
This task guides the systematic optimization of Salesforce build and deployment
performance to reduce cycle times and improve developer productivity.
## Purpose
Enable build engineers to:
- Analyze and optimize build pipeline performance
- Reduce deployment and testing cycle times
- Improve resource utilization efficiency
- Minimize build failures and retries
- Enhance developer experience and productivity
## Prerequisites
- Access to CI/CD pipeline configuration
- Build and deployment metrics history
- Understanding of Salesforce deployment mechanisms
- Performance monitoring tools access
- Build infrastructure administration rights
## Build Performance Framework
### 1. Performance Analysis Baseline
**Build Metrics Collection**
```yaml
Performance_Metrics:
  Pipeline_Duration:
    Total_Build_Time: 'End-to-end pipeline execution'
    Stage_Breakdown:
      - Source_Checkout: 'Code retrieval from repository'
      - Dependency_Resolution: 'Package and library downloads'
      - Compilation: 'Code compilation and validation'
      - Testing: 'Unit and integration test execution'
      - Packaging: 'Deployment package creation'
      - Deployment: 'Target environment deployment'
      - Validation: 'Post-deployment verification'
  Resource_Utilization:
    CPU_Usage: 'Build agent processor utilization'
    Memory_Consumption: 'RAM usage during build'
    Network_Bandwidth: 'Data transfer rates'
    Storage_IO: 'Disk read/write operations'
  Quality_Metrics:
    Success_Rate: 'Percentage of successful builds'
    Failure_Rate: 'Build failure frequency'
    Retry_Rate: 'Number of retries required'
    Test_Coverage: 'Code coverage percentage'
```
**Performance Benchmarking**
```
Algorithm: Build Performance Baseline Establishment
INPUT: metrics_collector, historical_days (default: 30)
PROCESS:
1. GET historical_builds from metrics for specified days
2. CALCULATE baseline metrics:
   - average_build_duration from historical builds
   - percentile_durations (50th, 75th, 95th percentiles)
   - success_rate from historical builds
   - failure_patterns analysis
   - resource_utilization analysis
3. STORE baseline_metrics for future comparisons
4. RETURN complete baseline
OUTPUT: performance_baseline_metrics
```
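The same calculation can be expressed directly in code. The sketch below is a minimal Python version, assuming each historical build record is a dict with hypothetical `duration_seconds` and `status` fields pulled from your CI system's metrics store, and that the window contains at least one build.
```python
from statistics import mean

def percentile(sorted_values, pct):
    """Nearest-rank percentile over a pre-sorted list."""
    index = max(0, int(round(pct / 100 * len(sorted_values))) - 1)
    return sorted_values[index]

def establish_baseline(historical_builds):
    # Assumes a non-empty window of build records.
    durations = sorted(b["duration_seconds"] for b in historical_builds)
    successes = sum(1 for b in historical_builds if b["status"] == "succeeded")
    return {
        "average_build_duration": mean(durations),
        "percentile_durations": {
            "p50": percentile(durations, 50),
            "p75": percentile(durations, 75),
            "p95": percentile(durations, 95),
        },
        "success_rate": successes / len(historical_builds),
        "sample_size": len(historical_builds),
    }
```
Store the returned dict alongside the collection date so later builds can be compared against a fixed reference rather than a moving average.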
```
Algorithm: Performance Bottleneck Identification
INPUT: build_data
PROCESS:
1. INITIALIZE bottlenecks = empty list
2. ANALYZE stage performance:
   FOR each stage, metrics in stage_analysis:
     IF metrics.duration > (metrics.expected_duration * 1.5) THEN
       CREATE bottleneck_entry:
         - type: "stage_performance"
         - stage, current_duration, expected_duration
         - impact: "high" if duration > 300 seconds, else "medium"
       ADD to bottlenecks
3. ANALYZE resource constraints:
   FOR each resource, constraint in resource_analysis:
     IF constraint.utilization > 0.9 THEN
       CREATE resource_bottleneck:
         - type: "resource_constraint"
         - resource, utilization
         - impact: "high"
       ADD to bottlenecks
4. RETURN identified bottlenecks
OUTPUT: performance_bottleneck_list
```
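A minimal Python sketch of these two checks, assuming `stage_analysis` maps stage names to dicts with hypothetical `duration` and `expected_duration` values in seconds, and `resource_analysis` maps resource names to a utilization ratio between 0 and 1:
```python
def identify_bottlenecks(stage_analysis, resource_analysis):
    bottlenecks = []
    # Flag stages running more than 50% over their expected duration.
    for stage, metrics in stage_analysis.items():
        if metrics["duration"] > metrics["expected_duration"] * 1.5:
            bottlenecks.append({
                "type": "stage_performance",
                "stage": stage,
                "current_duration": metrics["duration"],
                "expected_duration": metrics["expected_duration"],
                "impact": "high" if metrics["duration"] > 300 else "medium",
            })
    # Flag resources that are effectively saturated.
    for resource, utilization in resource_analysis.items():
        if utilization > 0.9:
            bottlenecks.append({
                "type": "resource_constraint",
                "resource": resource,
                "utilization": utilization,
                "impact": "high",
            })
    return bottlenecks
```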
### 2. Optimization Strategies
**Parallel Execution Optimization**
```yaml
Parallelization_Strategy:
  Test_Execution:
    Parallel_Test_Classes: 'Run test classes in parallel'
    Test_Partitioning: 'Divide tests by execution time'
    Resource_Allocation: 'Optimize test runner resources'
  Deployment_Components:
    Metadata_Chunking: 'Deploy metadata in parallel chunks'
    Component_Grouping: 'Group related components'
    Dependency_Management: 'Resolve dependencies efficiently'
  Multi_Environment:
    Concurrent_Deployments: 'Deploy to multiple envs simultaneously'
    Environment_Isolation: 'Isolate environment-specific operations'
    Resource_Sharing: 'Share common resources across environments'
```
**Build Caching Implementation**
```
Algorithm: Dependency Caching Strategy Implementation
INPUT: cache_configuration
PROCESS:
1. DEFINE cache_layers = ["dependency", "compilation", "test_results"]
2. CREATE cache_strategy with components:
   a. package_dependencies:
      - cache_key: "package-lock-hash"
      - cache_paths: ["node_modules/", ".sfdx/"]
      - invalidation: "on_package_change"
      - compression: true
   b. salesforce_metadata:
      - cache_key: "metadata-hash"
      - cache_paths: ["force-app/", "sfdx-project.json"]
      - invalidation: "on_metadata_change"
      - compression: true
   c. test_data:
      - cache_key: "test-data-hash"
      - cache_paths: ["test-data/", "test-config/"]
      - invalidation: "on_test_change"
      - compression: false
3. RETURN complete cache_strategy
OUTPUT: dependency_caching_strategy
```
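Content-hashed cache keys are the core of this strategy: a key that changes only when the hashed files change gives you the `on_package_change` and `on_metadata_change` invalidation behavior for free. A minimal Python sketch; the hashed file lists are illustrative assumptions, not a fixed convention:
```python
import hashlib
from pathlib import Path

def cache_key(prefix, paths):
    """Build a cache key that changes whenever any of the hashed files change."""
    digest = hashlib.sha256()
    for path in sorted(paths):
        p = Path(path)
        if p.is_file():
            digest.update(p.read_bytes())
    return f"{prefix}-{digest.hexdigest()[:16]}"

# Example keys for the dependency and metadata layers described above.
dependency_key = cache_key("deps", ["package-lock.json"])
metadata_key = cache_key("metadata", ["sfdx-project.json"])
```
The key strings then plug into whatever cache primitive your CI platform provides for restoring and saving the corresponding paths.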
```
Algorithm: Cache Utilization Optimization
INPUT: build_context
PROCESS:
1. INITIALIZE optimization_actions = empty list
2. ANALYZE cache_performance for all layers
3. FOR each layer, stats in cache_stats:
   a. IF hit_rate < 0.7 (less than 70%) THEN
      ADD optimization_action:
        - action: "improve_cache_key_strategy"
        - layer, current_hit_rate
        - recommendation from analysis
   b. IF cache size_mb > 1000 THEN
      ADD optimization_action:
        - action: "optimize_cache_size"
        - layer, current_size
        - recommendation: "Implement cache pruning strategy"
4. RETURN optimization_actions
OUTPUT: cache_optimization_recommendations
```
**Incremental Build Implementation**
```
Algorithm: Incremental Change Detection
INPUT: base_commit, target_commit, source_control
PROCESS:
1. GET changes from source_control between base_commit and target_commit
2. INITIALIZE categorized_changes with categories:
   - apex_classes, lightning_components, flows
   - metadata, test_classes, static_resources
3. FOR each change in changes:
   a. CATEGORIZE change based on file path and type
   b. IF category identified THEN
      ADD change to categorized_changes[category]
4. RETURN categorized_changes
OUTPUT: categorized_incremental_changes
```
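A minimal Python sketch of the detection step, assuming the change list comes from `git diff --name-only` between the two commits; the path patterns reflect a typical SFDX project layout and should be adapted to your repository structure:
```python
import subprocess

# First matching rule wins, so test classes are checked before generic Apex.
CATEGORY_RULES = [
    ("test_classes", lambda p: p.endswith("Test.cls")),
    ("apex_classes", lambda p: p.endswith(".cls") or p.endswith(".trigger")),
    ("lightning_components", lambda p: "/lwc/" in p or "/aura/" in p),
    ("flows", lambda p: p.endswith(".flow-meta.xml")),
    ("static_resources", lambda p: "/staticresources/" in p),
    ("metadata", lambda p: p.endswith("-meta.xml")),
]

def categorize_changes(base_commit, target_commit):
    output = subprocess.run(
        ["git", "diff", "--name-only", f"{base_commit}...{target_commit}"],
        capture_output=True, text=True, check=True,
    ).stdout
    categorized = {name: [] for name, _ in CATEGORY_RULES}
    for path in output.splitlines():
        for name, matches in CATEGORY_RULES:
            if matches(path):
                categorized[name].append(path)
                break
    return categorized
```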
```
Algorithm: Incremental Deployment Package Creation
INPUT: categorized_changes
PROCESS:
1. INITIALIZE package_components = empty list
2. FOR each category, changed_files in changes:
   a. IF changed_files not empty THEN
      RESOLVE dependencies for changed_files in category
      ADD dependencies to package_components
3. OPTIMIZE package structure:
   - REMOVE duplicates
   - ORDER components by deployment priority
4. DETERMINE test_scope from changes
5. GET incremental_deployment_options based on changes
6. RETURN deployment_package with:
   - optimized components
   - test_classes scope
   - deployment options
OUTPUT: incremental_deployment_package
```
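Deriving the test scope is the step with the biggest payoff, because it lets the deployment run with the `RunSpecifiedTests` test level instead of all local tests. A minimal sketch, assuming the common (but not universal) convention that `FooBar.cls` is covered by `FooBarTest.cls`:
```python
from pathlib import Path

def determine_test_scope(categorized_changes, all_test_classes):
    # Changed test classes always run.
    scope = {Path(p).stem for p in categorized_changes.get("test_classes", [])}
    # For each changed Apex class, add its conventionally named test class if it exists.
    for path in categorized_changes.get("apex_classes", []):
        candidate = f"{Path(path).stem}Test"
        if candidate in all_test_classes:
            scope.add(candidate)
    return sorted(scope)
```
Classes whose test coverage does not follow the naming convention still need an explicit mapping, so treat this as a first pass rather than a complete scoping mechanism.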
## Implementation Steps
### Step 1: Build Pipeline Analysis
**Performance Profiling Setup**
```
Algorithm: Build Performance Profiler Script
INPUT: build_id, performance_tracking_configuration
PROCESS:
1. CONFIGURE script execution:
   - SET error_exit_mode = true
   - ENABLE detailed timing logs
   - RECORD build_start_time
2. LOG profiler initialization with build_id and start_time
3. DEFINE stage_timing_logger function:
   - INPUT: stage_name, start_time
   - CALCULATE duration = current_time - start_time
   - LOG stage completion with duration
   - APPEND metrics to build_metrics.csv
4. EXECUTE build stages with timing:
   a. SOURCE_CHECKOUT:
      - RECORD stage_start_time
      - RUN git fetch with shallow clone
      - LOG timing for "source_checkout"
   b. DEPENDENCY_RESOLUTION:
      - RECORD stage_start_time
      - RUN npm ci for production dependencies
      - INSTALL sfdx scanner plugin
      - LOG timing for "dependency_resolution"
   c. CODE_QUALITY_ANALYSIS:
      - RECORD stage_start_time
      - RUN sfdx scanner on force-app
      - OUTPUT results to quality-results.json
      - LOG timing for "code_quality"
   d. UNIT_TESTS:
      - RECORD stage_start_time
      - RUN apex tests with code coverage
      - OUTPUT results to test-results directory
      - LOG timing for "unit_tests"
   e. PACKAGE_CREATION:
      - RECORD stage_start_time
      - CONVERT source to mdapi format
      - LOG timing for "package_creation"
   f. DEPLOYMENT_VALIDATION:
      - RECORD stage_start_time
      - RUN check-only deployment with local tests
      - LOG timing for "deployment_validation"
5. CALCULATE total_duration = build_end_time - build_start_time
6. LOG build completion with total duration and end time
7. GENERATE performance report from build_metrics.csv
OUTPUT: build_performance_profile_with_metrics
```
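A minimal Python sketch of the stage-timing wrapper; the stage commands shown are placeholders for whatever your pipeline actually runs at each step, and the CSV layout is an assumption rather than a required format:
```python
import csv
import subprocess
import time

def run_stage(name, command, metrics_path="build_metrics.csv"):
    """Run one build stage, append its timing to the metrics CSV, return exit code."""
    start = time.monotonic()
    result = subprocess.run(command, shell=True)
    duration = time.monotonic() - start
    with open(metrics_path, "a", newline="") as f:
        csv.writer(f).writerow([name, f"{duration:.1f}", result.returncode])
    print(f"{name} finished in {duration:.1f}s (exit {result.returncode})")
    return result.returncode

# Illustrative stage list; extend with quality analysis, tests, packaging, validation.
stages = [
    ("source_checkout", "git fetch --depth 1 origin"),
    ("dependency_resolution", "npm ci"),
]
for stage_name, cmd in stages:
    if run_stage(stage_name, cmd) != 0:
        raise SystemExit(f"Stage {stage_name} failed")
```
The accumulated CSV is what the later report-generation and dashboard steps read from.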
**Resource Monitoring**
```
Algorithm: Resource Monitoring Initialization
INPUT: sampling_interval (default: 5 seconds)
PROCESS:
1. SET sampling_interval for metric collection frequency
2. INITIALIZE metrics = empty list for storing snapshots
3. SET monitoring = false (initial state)
4. RETURN configured resource monitor
OUTPUT: resource_monitoring_system
```
```
Algorithm: Background Resource Monitoring
INPUT: resource_monitor
PROCESS:
1. SET monitoring = true to start collection
2. WHILE monitoring is active:
   a. CREATE metrics_snapshot with current timestamp:
      - cpu_percent (1-second interval measurement)
      - memory_percent and memory_used_gb
      - disk_io_counters statistics
      - network_io_counters statistics
      - active_process_count
      - system_load_average
   b. ADD metrics_snapshot to metrics collection
   c. WAIT for sampling_interval seconds
3. CONTINUE until monitoring stopped
OUTPUT: continuous_resource_metrics_collection
```
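The snapshot fields above map closely to what the third-party `psutil` package exposes. A minimal sketch of the background sampler, assuming `psutil` is installed on the build agent:
```python
import threading
import time

import psutil

class ResourceMonitor:
    def __init__(self, sampling_interval=5):
        self.sampling_interval = sampling_interval
        self.metrics = []
        self._stop = threading.Event()

    def _sample_loop(self):
        while not self._stop.is_set():
            self.metrics.append({
                "timestamp": time.time(),
                "cpu_percent": psutil.cpu_percent(interval=1),
                "memory_percent": psutil.virtual_memory().percent,
                "memory_used_gb": psutil.virtual_memory().used / 1024**3,
                "process_count": len(psutil.pids()),
            })
            self._stop.wait(self.sampling_interval)

    def start(self):
        # Daemon thread so the monitor never blocks build shutdown.
        threading.Thread(target=self._sample_loop, daemon=True).start()

    def stop(self):
        self._stop.set()
```
Start the monitor before the first stage, stop it after the last one, and feed the collected `metrics` list into the report-generation step below.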
```
Algorithm: Resource Monitoring Report Generation
INPUT: collected_metrics
PROCESS:
1. IF no metrics collected THEN
   RETURN error: "No metrics collected"
2. EXTRACT cpu_values and memory_values from all metrics
3. CALCULATE monitoring_duration = sample_count * sampling_interval
4. COMPUTE cpu_statistics:
   - average, maximum, minimum CPU usage
5. COMPUTE memory_statistics:
   - average, maximum, minimum memory usage
   - peak_usage_gb from all samples
6. GENERATE optimization_recommendations based on patterns
7. COMPILE comprehensive report with:
   - monitoring duration and sample count
   - cpu and memory statistics
   - optimization recommendations
8. RETURN complete resource_usage_report
OUTPUT: comprehensive_resource_analysis
```
### Step 2: Performance Optimization Implementation
**Parallel Test Execution**
```yaml
# parallel-test-config.yml
parallel_test_configuration:
  max_parallel_classes: 5
  test_partitioning_strategy: 'by_execution_time'
  test_groups:
    fast_tests:
      max_execution_time: 30  # seconds
      parallel_instances: 3
      classes:
        - AccountTriggerTest
        - ContactTriggerTest
        - OpportunityTriggerTest
    medium_tests:
      max_execution_time: 120  # seconds
      parallel_instances: 2
      classes:
        - DataMigrationTest
        - IntegrationTest
    slow_tests:
      max_execution_time: 300  # seconds
      parallel_instances: 1
      classes:
        - PerformanceTest
        - EndToEndTest
```
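The `by_execution_time` strategy amounts to bin packing: assign the slowest classes first, always to the currently least-loaded runner. A minimal Python sketch, assuming historical per-class runtimes are available as a dict of seconds:
```python
def partition_tests(class_runtimes, parallel_instances):
    """Greedy longest-first assignment of test classes to parallel runners."""
    buckets = [{"classes": [], "total_seconds": 0.0} for _ in range(parallel_instances)]
    for cls, seconds in sorted(class_runtimes.items(), key=lambda kv: kv[1], reverse=True):
        target = min(buckets, key=lambda b: b["total_seconds"])
        target["classes"].append(cls)
        target["total_seconds"] += seconds
    return buckets

# Example: split the "fast" group across its 3 configured instances.
fast_group = {"AccountTriggerTest": 12, "ContactTriggerTest": 8, "OpportunityTriggerTest": 25}
print(partition_tests(fast_group, 3))
```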
**Smart Deployment Packaging**
```
Algorithm: Optimized Package Creation
INPUT: changed_components, metadata_analyzer
PROCESS:
1. DEFINE optimization_strategies:
   - minimize_metadata_size
   - optimize_component_order
   - bundle_related_components
   - exclude_unnecessary_metadata
2. INITIALIZE optimized_package = copy of changed_components
3. FOR each strategy in optimization_strategies:
   APPLY strategy to optimized_package
   UPDATE optimized_package with strategy results
4. GENERATE optimization_report comparing original and optimized packages
5. RETURN package_result with:
   - optimized package
   - optimization report
OUTPUT: optimized_deployment_package
```
```
Algorithm: Metadata Size Minimization
INPUT: deployment_package
PROCESS:
1. INITIALIZE size_optimizations = empty list
2. INITIALIZE optimized_package = empty dictionary
3. FOR each component_type, components in package:
   a. IF component_type = "CustomObject" THEN
      INITIALIZE optimized_components = empty list
      FOR each component in components:
        STRIP non-essential metadata (help text, descriptions for non-prod)
        ADD optimized_component to optimized_components
      SET optimized_package[component_type] = optimized_components
   b. ELSE
      SET optimized_package[component_type] = components (unchanged)
4. RETURN optimized_package
OUTPUT: size_optimized_package
```
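A minimal Python sketch of the stripping step for a single `.object` file; treating `description` and `inlineHelpText` as safe to drop in non-production deployments is an assumption to validate against your own org's requirements:
```python
import xml.etree.ElementTree as ET

# Keep the default Metadata API namespace so the file round-trips cleanly.
ET.register_namespace("", "http://soap.sforce.com/2006/04/metadata")

STRIP_TAGS = {"description", "inlineHelpText"}

def minimize_object_metadata(path):
    """Remove non-essential text elements from a CustomObject file in place."""
    tree = ET.parse(path)
    root = tree.getroot()
    removals = [(parent, child)
                for parent in root.iter()
                for child in parent
                if child.tag.split("}")[-1] in STRIP_TAGS]
    for parent, child in removals:
        parent.remove(child)
    tree.write(path, encoding="UTF-8", xml_declaration=True)
    return len(removals)
```
Because the descriptions are removed rather than deployed, keep this pass out of any pipeline that targets production.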
### Step 3: Advanced Performance Techniques
**Dynamic Resource Scaling**
```
Algorithm: Dynamic Build Resource Scaling
INPUT: build_requirements, container_orchestrator
PROCESS:
1. LOAD scaling_policies from configuration
2. CALCULATE required_resources from build_requirements
3. GET current_resource_allocation from orchestrator
4. MAKE scaling_decision:
   - scale_up: required_cpu > (current_cpu * 0.8)
   - scale_down: required_cpu < (current_cpu * 0.3)
   - target_resources: calculated requirements
5. IF scale_up needed THEN
   RETURN scale_up_resources with target_resources
6. ELSE IF scale_down needed THEN
   RETURN scale_down_resources with target_resources
7. ELSE
   RETURN no_scaling_required with current_resources
OUTPUT: resource_scaling_action
```
```
Algorithm: Build Resource Requirements Calculation
INPUT: build_requirements
PROCESS:
1. INITIALIZE base_requirements:
   - cpu: 2, memory_gb: 4, storage_gb: 20
2. ADJUST based on build characteristics:
   a. IF test_count > 100 THEN
      ADD 2 to cpu, ADD 4 to memory_gb
   b. IF package_size_mb > 100 THEN
      ADD 2 to memory_gb, ADD 10 to storage_gb
   c. IF parallel_jobs > 1 THEN
      MULTIPLY cpu by parallel_jobs count
3. RETURN adjusted base_requirements
OUTPUT: calculated_resource_requirements
```
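Both algorithms reduce to a handful of arithmetic rules. A minimal Python sketch that mirrors the numbers above; the base values and thresholds are starting points to tune, not recommendations:
```python
def calculate_requirements(test_count, package_size_mb, parallel_jobs):
    """Adjust a base resource profile for the characteristics of this build."""
    req = {"cpu": 2, "memory_gb": 4, "storage_gb": 20}
    if test_count > 100:
        req["cpu"] += 2
        req["memory_gb"] += 4
    if package_size_mb > 100:
        req["memory_gb"] += 2
        req["storage_gb"] += 10
    if parallel_jobs > 1:
        req["cpu"] *= parallel_jobs
    return req

def scaling_decision(required_cpu, current_cpu):
    """Decide whether the orchestrator should resize the build agent pool."""
    if required_cpu > current_cpu * 0.8:
        return "scale_up"
    if required_cpu < current_cpu * 0.3:
        return "scale_down"
    return "no_scaling_required"
```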
**Build Artifact Optimization**
```
Algorithm: Build Artifact Storage Optimization
INPUT: build_artifacts, artifact_store
PROCESS:
1. INITIALIZE optimization_results = empty list
2. FOR each artifact in build_artifacts:
   a. IF artifact size_mb > 50 THEN
      COMPRESS artifact using compression algorithm
      ADD optimization_result:
        - artifact name, action: "compressed"
        - original_size, compressed_size
        - compression_ratio calculation
   b. IF artifact checksum exists in store THEN
      ADD optimization_result:
        - artifact name, action: "deduplicated"
        - savings_mb: original artifact size
   c. IF artifact age_days > 30 THEN
      ARCHIVE artifact to cold storage
      ADD optimization_result:
        - artifact name, action: "archived"
        - storage_tier: "cold_storage"
3. RETURN complete optimization_results
OUTPUT: artifact_optimization_report
```
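A minimal Python sketch of the compression and deduplication passes, assuming artifacts are plain files on disk and the artifact store is represented by a set of checksums already seen (archival to cold storage is omitted because it is entirely store-specific):
```python
import gzip
import hashlib
import shutil
from pathlib import Path

def optimize_artifact(path, known_checksums, size_threshold_mb=50):
    artifact = Path(path)
    # Deduplicate by content hash before doing any other work.
    checksum = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if checksum in known_checksums:
        return {"artifact": artifact.name, "action": "deduplicated"}
    known_checksums.add(checksum)
    # Compress anything over the size threshold.
    size_mb = artifact.stat().st_size / (1024 * 1024)
    if size_mb > size_threshold_mb:
        compressed = artifact.with_suffix(artifact.suffix + ".gz")
        with artifact.open("rb") as src, gzip.open(compressed, "wb") as dst:
            shutil.copyfileobj(src, dst)
        return {
            "artifact": artifact.name,
            "action": "compressed",
            "original_size_mb": round(size_mb, 1),
            "compressed_size_mb": round(compressed.stat().st_size / (1024 * 1024), 1),
        }
    return {"artifact": artifact.name, "action": "unchanged"}
```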
## Monitoring and Continuous Improvement
### Performance Metrics Dashboard
```
Algorithm: Real-time Performance Dashboard Generation
INPUT: metrics_database
PROCESS:
1. COMPILE dashboard_data components:
   - current_builds: active build status
   - recent_performance: performance trends
   - optimization_opportunities: identified improvements
   - resource_utilization: system resource summary
   - success_rates: build success statistics
   - performance_alerts: active performance warnings
2. RETURN complete dashboard_data
OUTPUT: real_time_performance_dashboard
```
```
Algorithm: Optimization Opportunities Identification
INPUT: metrics_database
PROCESS:
1. INITIALIZE opportunities = empty list
2. GET recent_builds from last 7 days
3. ANALYZE build duration trends:
   IF duration_regression detected (>20% increase) THEN
     ADD opportunity:
       - type: "duration_regression", priority: "high"
       - description: duration increase details
       - recommended_actions: ["Review recent changes", "Check resource constraints"]
4. ANALYZE test execution trends:
   GET test_duration_trend from recent builds
   IF test duration increasing THEN
     ADD opportunity:
       - type: "test_performance", priority: "medium"
       - description: test execution time increase percentage
       - recommended_actions: ["Implement parallel test execution", "Review slow test classes"]
5. RETURN identified opportunities
OUTPUT: build_optimization_opportunities
```
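A minimal Python sketch of the duration-regression check, comparing the last seven days of builds against the stored baseline average; the 20% threshold comes from the pseudocode above, and build records are assumed to carry a `duration_seconds` field:
```python
from statistics import mean

def find_duration_regression(recent_builds, baseline_avg_seconds, threshold=0.20):
    """Return a high-priority opportunity if average duration regressed past the threshold."""
    recent_avg = mean(b["duration_seconds"] for b in recent_builds)
    increase = (recent_avg - baseline_avg_seconds) / baseline_avg_seconds
    if increase > threshold:
        return {
            "type": "duration_regression",
            "priority": "high",
            "description": f"Average build duration up {increase:.0%} vs. baseline",
            "recommended_actions": ["Review recent changes", "Check resource constraints"],
        }
    return None
```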
## Success Criteria
- ✅ Build performance baseline established
- ✅ Bottlenecks identified and prioritized
- ✅ Optimization strategies implemented
- ✅ Parallel execution configured
- ✅ Caching mechanisms active
- ✅ Resource utilization optimized
- ✅ Performance monitoring dashboard operational
- ✅ Continuous improvement process established