# MCO Hackathon Project - Integration Documentation

## Project Overview

This project implements a real-world demonstration of the MCO (Model Configuration Orchestration) Protocol as the missing orchestration layer for agent frameworks. It features:

1. **Real AutoGPT-like Agent**: Built with the Modal API for genuine LLM inference and tool execution
2. **Genuine MCO Orchestration**: Using the actual MCO MCP server (not simulated)
3. **Single-Page UI**: Showing Claude's thinking process and MCO orchestration logs
4. **Visual SNLP Generator**: With a value/NLP editing toggle and proper syntax

## Architecture

The implementation consists of three main components:

1. **MCO MCP Server**: The orchestration layer that manages workflow state and progressive revelation
2. **Modal Agent**: An AutoGPT-like agent that can be orchestrated by MCO
3. **Gradio UI**: A web interface for demonstrating and interacting with the system

### Integration Flow

1. User inputs task requirements in the Gradio UI
2. Gradio sends the request to a Modal endpoint
3. The Modal agent starts and connects to the MCO MCP server
4. MCO orchestrates the agent through the workflow
5. Agent thinking and MCO logs are streamed back to the UI
6. Results are displayed to the user

## Components

### 1. MCO MCP Server

The MCO MCP server uses the official MCP SDK with stdio transport, ensuring compatibility with MCP Inspector and other MCP-enabled tools. Key improvements include:

- **Enhanced SNLP Parser**: Better cross-platform path handling and error reporting
- **Robust Error Handling**: Clear error messages and graceful failure modes
- **Proper Initialization**: Reliable startup and shutdown sequences

### 2. Modal Agent

The Modal implementation provides a real AutoGPT-like agent with:

- **LLM Interface**: Claude API for reasoning and planning
- **Tool System**: Code interpreter, file operations, web access
- **MCP Client**: Integration with the MCO MCP server
- **Specialized Code Review**: Analysis, suggestions, and test generation

### 3. Gradio UI

The single-page Gradio UI features:

- **Agent Demo**: Run the agent with the real Modal API and MCO orchestration
- **Thinking Visualization**: See Claude's thinking process in real time
- **MCO Logbook**: Track orchestration events and progress
- **SNLP Generator**: Create and edit MCO workflow files with a toggle for simplified editing

## Implementation Details

### Modal Implementation

The Modal implementation (`modal_implementation.py`) defines:

- **AutoGPTAgent**: Base agent class with MCO integration
- **CodeReviewAgent**: Specialized agent for code review tasks
- **MCPClient**: Client for interacting with the MCO MCP server
- **Modal Functions**: Remote endpoints for running the agent

### Gradio UI

The Gradio UI (`gradio_ui.py`) provides:

- **Agent Demo Tab**: Run the agent and view results
- **SNLP Generator Tab**: Create and edit MCO workflow files
- **About Tab**: Information about the MCO Protocol

### SNLP Files

The system generates four SNLP files:

1. **mco.core**: Core workflow configuration
2. **mco.sc**: Success criteria
3. **mco.features**: Feature specifications
4. **mco.styles**: Style guidelines

## Usage Instructions

### Running the Demo

1. Start the Gradio app:
   ```
   python gradio_ui.py
   ```
2. Navigate to the Agent Demo tab
3. Enter a task description, then select a review type and language focus
4. Click "Run Agent" to start the agent with MCO orchestration
5. Watch the agent's thinking process and the MCO logs in real time
6. View the results when the agent completes

### Creating SNLP Files

1. Navigate to the SNLP Generator tab
2. Select a review type and language focus
3. Click "Generate SNLP Files" to create the initial files
4. Toggle between "Values Only" and "Full Edit" modes
5. Edit the values and NLP content as needed
6. Download individual files or all files as a zip

### Using MCO with Your Own Agent

1. Install the MCO package:
   ```
   npm install @paradiselabs/mco-protocol
   ```
2. Add MCO to your MCP config:
   ```json
   {
     "mcpServers": {
       "mco-orchestration": {
         "command": "node",
         "args": ["path/to/mco-mcp-server.js"],
         "env": {
           "MCO_CONFIG_DIR": "path/to/config"
         }
       }
     }
   }
   ```
3. Create SNLP files using the generator
4. Run your agent with MCO orchestration (a minimal client sketch follows this list)
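The JSON config above only registers the server; your agent still has to call it. The snippet below is a minimal sketch of that client side, assuming the official MCP Python SDK (`pip install mcp`) rather than the project's own `MCPClient`. The tool names `get_next_directive` and `complete_step` are placeholders, not documented MCO names: list the server's tools first and substitute whatever your installed version actually exposes.

```python
# Minimal sketch (not the project's MCPClient): let the MCO MCP server drive an
# agent loop over stdio, using the official MCP Python SDK (`pip install mcp`).
# The tool names "get_next_directive" / "complete_step" are placeholders --
# confirm the real names via list_tools() for your installed MCO server.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(
    command="node",
    args=["path/to/mco-mcp-server.js"],        # same entry point as the MCP config above
    env={"MCO_CONFIG_DIR": "path/to/config"},  # directory containing mco.core, mco.sc, ...
)


async def run_with_mco(agent_step) -> None:
    """Repeatedly ask MCO for the next directive, run the agent on it, report back."""
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # See which orchestration tools this MCO server actually exposes.
            tools = await session.list_tools()
            print("MCO tools:", [tool.name for tool in tools.tools])

            while True:
                # Placeholder tool name -- substitute the real one from the list above.
                step = await session.call_tool("get_next_directive", {})
                directive = getattr(step.content[0], "text", "") if step.content else ""
                if not directive or "workflow_complete" in directive:
                    break
                result = agent_step(directive)   # your agent does the actual work here
                await session.call_tool("complete_step", {"result": result})


if __name__ == "__main__":
    asyncio.run(run_with_mco(lambda directive: f"handled: {directive}"))
```

Reusing the same `command`, `args`, and `MCO_CONFIG_DIR` as the MCP config keeps the orchestration identical whether the server is launched by an MCP host or directly by your agent.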
## Technical Notes

### Real vs. Simulated Components

This implementation uses:

- **Real Modal API**: For genuine LLM inference with Claude
- **Real MCO MCP Server**: Using the official MCP SDK
- **Real Tool Execution**: Code interpreter, file operations, etc.
- **Real-Time UI Updates**: Streaming the thinking process and logs

### Cross-Platform Compatibility

The implementation ensures compatibility across:

- **Windows**: Proper path handling and normalization
- **Mac/Linux**: Standard path handling
- **Different Browsers**: Responsive UI design

### Error Handling

The system includes robust error handling:

- **MCO Server Errors**: Clear error messages and recovery
- **Modal API Errors**: Graceful fallbacks
- **UI Errors**: User-friendly error messages

## Future Improvements

1. **Enhanced Tool System**: Add more specialized tools for different tasks
2. **Multi-Agent Orchestration**: Coordinate multiple agents with MCO
3. **Custom SNLP Templates**: More domain-specific templates
4. **Performance Optimization**: Faster response times and lower resource usage
5. **Advanced Visualization**: More detailed orchestration visualization