# Dashboard Usage Guide

The Context Forge Dashboard provides comprehensive tracking and visualization of your development workflow, helping you monitor progress, identify patterns, and optimize your AI-assisted development process.

## Overview

The dashboard tracks all Context Forge operations and provides:

- **Real-time progress monitoring** for active operations
- **Historical analysis** of completed operations
- **Performance metrics** and success rates
- **Project health status** and insights
- **Error tracking** and recovery suggestions

## Getting Started

### Basic Dashboard

View the main dashboard:

```bash
context-forge dashboard
```

This shows:

- Current operations (if any)
- Summary statistics
- Recent activity (last 7 days)
- Project status

### Dashboard Options

```bash
# Real-time watch mode (updates every 2 seconds)
context-forge dashboard --watch

# Full operation history
context-forge dashboard --history

# Summary statistics only
context-forge dashboard --summary

# Clean up old operations (default: 30 days)
context-forge dashboard --clear-old
context-forge dashboard --clear-old 7   # Keep only last 7 days
```

## Dashboard Sections

### šŸ”„ Current Operations

Shows operations currently in progress with:

- **Operation type**: init, enhance, migrate, analyze
- **Current step**: Which phase the operation is in
- **Start time**: When the operation began
- **Step progress**: Individual steps within the operation

**Example:**

```
šŸ”„ Current Operation
āœ… init - Project initialization
Started: 2024-01-15 10:30:15 • Duration: ongoing
IDEs: claude, cursor • AI: enabled

Steps:
  āœ“ Interactive setup wizard
  ā— Generate documentation
  ā—‹ Complete setup
```
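When you need to wait for an operation from a script (for example in CI, where the interactive watch mode described later isn't practical), the presence of this block can be polled. A minimal sketch, assuming the dashboard prints the `Current Operation` heading shown above only while something is actually running:

```bash
#!/usr/bin/env bash
# Hedged sketch: block until the current operation finishes.
# Assumes the dashboard prints a "Current Operation" heading only while an
# operation is in progress, as in the example output above.

while context-forge dashboard | grep -q "Current Operation"; do
  echo "Operation still in progress; checking again in 2s..."
  sleep 2   # mirrors the 2-second refresh interval used by --watch
done

echo "No active operation; safe to continue."
```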
### šŸ“ˆ Summary Statistics

Key performance metrics:

- **Total Operations**: All operations ever run
- **Success Rate**: Percentage of successful completions
- **Average Duration**: Mean time for operations to complete
- **Current Status**: Active, completed, and failed operations

**Visual Indicators:**

- 🟢 **90%+ success rate**: Excellent
- 🟔 **70-89% success rate**: Good
- šŸ”“ **<70% success rate**: Needs attention

### šŸ“… Recent Activity

Last 7 days of operations showing:

- **Operation timeline**: When operations were run
- **Success/failure status**: Visual indicators for outcomes
- **Metadata**: Files generated, features implemented, etc.
- **Duration tracking**: How long operations took

### šŸ—ļø Project Status

Current project health indicators:

- **Project type**: Web, API, full-stack, etc.
- **Technology stack**: Detected frameworks and tools
- **File statistics**: Components, routes, tests
- **Documentation status**: Available docs and gaps

## Watch Mode

Real-time monitoring for active development:

```bash
context-forge dashboard --watch
```

**Features:**

- **Live updates**: Refreshes every 2 seconds
- **Current progress**: See operations update in real-time
- **Step tracking**: Watch individual steps complete
- **Error monitoring**: Immediate notification of failures

**Usage Scenarios:**

- Monitoring long-running migrations
- Tracking complex enhancement implementations
- Real-time debugging of setup issues
- Team coordination during development sprints

## Operation Details

### Status Icons

- āœ… **Completed**: Operation finished successfully
- āŒ **Failed**: Operation encountered errors
- šŸ”„ **In Progress**: Currently running
- āš ļø **Cancelled**: User-cancelled operation
- ā³ **Pending**: Queued or waiting

### Metadata Tracking

The dashboard captures rich metadata:

**For `init` operations:**

- Target IDEs selected
- AI enhancement usage
- Files generated
- Configuration choices

**For `enhance` operations:**

- Number of features planned
- Implementation phases
- Complexity assessment
- Dependencies identified

**For `migrate` operations:**

- Source and target frameworks
- Breaking changes detected
- Migration phases
- Risk assessment

**For `analyze` operations:**

- Project complexity score
- Recommendations generated
- Issues identified
- Optimization suggestions

## Performance Analysis

### Duration Tracking

Operations are timed to help you understand:

- **Bottlenecks**: Which steps take longest
- **Trends**: Whether operations are getting faster or slower
- **Optimization opportunities**: Where to focus improvements

**Duration Display:**

- `1.2s` - Under 60 seconds
- `3.5m` - Between 1 and 60 minutes
- `1.2h` - Over 60 minutes

### Success Rate Analysis

Track your development workflow efficiency:

- **Overall success rate**: Across all operations
- **Command-specific rates**: Success by operation type
- **Trend analysis**: Improving or declining performance
- **Error patterns**: Common failure points

## Error Tracking

### Error Categories

The dashboard categorizes errors to help identify patterns:

- **Configuration errors**: Setup and config issues
- **Permission errors**: File access problems
- **Dependency errors**: Missing packages or tools
- **Network errors**: AI or external service issues
- **User errors**: Cancelled or invalid operations

### Error Recovery

Failed operations show:

- **Error details**: What went wrong
- **Recovery suggestions**: How to fix issues
- **Related operations**: Similar successful operations
- **Documentation links**: Relevant troubleshooting guides

## Team Usage

### Shared Insights

When working in teams, dashboard data helps with:

- **Workflow standardization**: See what works best
- **Onboarding**: New team members can see patterns
- **Process optimization**: Identify team-wide bottlenecks
- **Knowledge sharing**: Learn from successful operations

### Project Health Monitoring

Track project health across team members:

- **Consistency**: Are setups similar across developers?
- **Best practices**: Which configurations work best?
- **Tool adoption**: How well are AI features being used?
- **Quality trends**: Are projects improving over time?

## Data Management

### Storage Location

Dashboard data is stored locally in:

```
.context-forge/progress.json
```

**Characteristics:**

- **Local only**: Never shared externally
- **Project-specific**: Each project has its own tracking
- **Portable**: Can be backed up or moved with the project
- **Privacy-focused**: No external data collection

### Data Cleanup

Automatic and manual cleanup options:

```bash
# Clean operations older than 30 days (default)
context-forge dashboard --clear-old

# Keep only the last 7 days
context-forge dashboard --clear-old 7

# Keep only the last day
context-forge dashboard --clear-old 1
```

**Auto-cleanup rules:**

- Keeps all in-progress operations
- Preserves recent successful operations
- Removes old failed operations first
- Maintains summary statistics

## Advanced Usage

### Scripting Integration

Parse dashboard data programmatically:

```bash
# Get the success rate from the summary output
context-forge dashboard --summary | grep "Success Rate"

# Check for active operations
context-forge dashboard | grep -q "Current Operation"
```

### CI/CD Integration

Use dashboard data in automation:

- **Pre-deployment checks**: Verify no failed operations
- **Performance monitoring**: Track operation duration trends
- **Quality gates**: Ensure success rates meet thresholds
- **Automated cleanup**: Regular maintenance of old data

For one way to combine these checks into a pipeline step, see the sketch at the end of this guide.

### Custom Analytics

The progress.json file can be analyzed with custom tools:

- **Time series analysis**: Operation frequency over time
- **Pattern detection**: Identify successful workflows
- **Performance optimization**: Find and fix slow operations
- **Predictive analysis**: Anticipate potential issues
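A minimal starting point for such analysis is a couple of `jq` queries over the file. The progress.json schema is not documented in this guide, so the paths below (a top-level `operations` array whose entries carry a `status` field) are assumptions to verify against your actual file:

```bash
#!/usr/bin/env bash
# Hedged sketch of custom analytics over the local dashboard data.
# The schema of progress.json is not documented in this guide, so the paths
# below (a top-level "operations" array with "status" entries) are
# assumptions — inspect the file first and adjust the jq filters to match.
# Requires jq.

FILE=".context-forge/progress.json"

if [ ! -f "$FILE" ]; then
  echo "No dashboard data at $FILE — run any context-forge command to start tracking."
  exit 1
fi

# Total number of recorded operations
jq '.operations | length' "$FILE"

# Operation counts grouped by status (e.g. completed vs. failed)
jq '[.operations[].status] | group_by(.) | map({status: .[0], count: length})' "$FILE"
```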
## Troubleshooting

### Common Issues

**Dashboard shows no data**

- Run any Context Forge command to start tracking
- Check if the `.context-forge/` directory exists
- Verify write permissions in the project directory

**Operations not updating in watch mode**

- Press Ctrl+C to exit watch mode, then restart it
- Check whether the operation is actually running
- Verify the operation was started with tracking enabled

**Performance issues with large history**

- Run `--clear-old` to remove old operations
- Consider shorter retention periods for active projects
- Monitor the progress.json file size

### Best Practices

1. **Regular monitoring**: Check the dashboard weekly for patterns
2. **Clean up regularly**: Remove old data to maintain performance
3. **Use watch mode**: For real-time operation monitoring
4. **Track trends**: Look for improving or declining patterns
5. **Share insights**: Discuss patterns with team members

## Integration with Other Tools

### Development Workflow

Integrate dashboard monitoring with:

- **IDE extensions**: Show operation status in the editor
- **Terminal prompts**: Display the current operation in your shell
- **Notification systems**: Alert on operation completion
- **Project management**: Link operations to tasks/stories

### Monitoring Tools

Export dashboard data to:

- **Time series databases**: InfluxDB, Prometheus
- **Analytics platforms**: Custom dashboards and alerts
- **Team tools**: Slack notifications, email reports
- **Documentation**: Include performance data in project docs

---

The dashboard is designed to be a central hub for understanding and optimizing your AI-assisted development workflow. Use it regularly to identify patterns, improve processes, and keep your development process as efficient as possible.
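To make the scripting and CI/CD ideas above concrete, here is one possible pre-deployment gate, sketched only from the commands documented in this guide. It assumes the dashboard output contains the strings shown earlier (`Current Operation`, `Success Rate`) and that the success rate is printed as a percentage; adjust the patterns and threshold to fit your pipeline:

```bash
#!/usr/bin/env bash
# Hedged sketch of a CI quality gate built from the commands in this guide.
# The grep patterns assume the output strings shown earlier ("Current
# Operation", "Success Rate") and may need adjusting for your version.

THRESHOLD=90   # example quality-gate threshold (matches the 🟢 band above)

# 1. Block deployment while an operation is still running
if context-forge dashboard | grep -q "Current Operation"; then
  echo "A Context Forge operation is still in progress; aborting." >&2
  exit 1
fi

# 2. Enforce a minimum success rate from the summary output
rate=$(context-forge dashboard --summary | grep "Success Rate" | grep -Eo '[0-9]+' | head -n 1)
if [ -z "$rate" ] || [ "$rate" -lt "$THRESHOLD" ]; then
  echo "Success rate ${rate:-unknown}% is below ${THRESHOLD}%; aborting." >&2
  exit 1
fi

# 3. Routine maintenance: drop operations older than the 30-day default
context-forge dashboard --clear-old

echo "Dashboard checks passed (success rate: ${rate}%)."
```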