lamplighter-mcp

An intelligent context engine for AI-assisted software development

# Lamplighter-MCP End-to-End Testing Plan

This document outlines the comprehensive testing plan for Lamplighter-MCP to ensure all components work correctly together and fulfill the requirements specified in the PRD.

## Test Environment Setup

1. **Local Development Environment**
   - Fresh clone of the repository
   - Node.js 18.x or higher
   - All dependencies installed via `npm install`
   - Properly configured `.env` file with test credentials
   - Local Cursor installation with the `.cursor` directory from the project
2. **Production-like Environment** (for final verification)
   - Deployed instance following the deployment plan
   - Same configuration as development but with production settings

## Test Scenarios

### 1. Basic Server Functionality

| Test Case | Steps | Expected Outcome |
|-----------|-------|------------------|
| Server starts successfully | Run `npm start` | Server starts without errors, listening on port 3001 |
| API endpoints available | `curl http://localhost:3001/sse` | Receives a valid SSE connection response |
| Error handling | Send malformed requests to endpoints | Server returns appropriate error responses, doesn't crash |

### 2. Codebase Analysis Tool Testing

| Test Case | Steps | Expected Outcome |
|-----------|-------|------------------|
| Initial analysis | 1. Start server<br>2. Verify automatic analysis runs | `codebase_summary.md` is created with accurate content |
| Manual analysis | 1. Open Cursor<br>2. Request codebase analysis | Analysis completes, updates summary file with fresh data |
| Validation | Check `codebase_summary.md` content | File correctly identifies project structure and key technologies |

### 3. Confluence Integration Testing

| Test Case | Steps | Expected Outcome |
|-----------|-------|------------------|
| Fetch spec content | 1. Open Cursor<br>2. Request processing of a valid Confluence URL | Server successfully retrieves the content from Confluence |
| Process specification | Continue from previous test | Server processes content, creates a task breakdown in Markdown |
| Invalid URL handling | Request processing with an invalid Confluence URL | Server returns an appropriate error message, doesn't crash |

### 4. Task Management Testing

| Test Case | Steps | Expected Outcome |
|-----------|-------|------------------|
| Create tasks | Process a Confluence spec | Tasks are created in `feature_tasks/feature_XYZ_tasks.md` |
| Update task status | 1. Open Cursor<br>2. Request to mark a task as "Done" | Task status is updated in the file |
| Get next task | Request the next task for a feature | Returns the first "ToDo" task |
| Task not found | Request to update a non-existent task | Appropriate error message without crashing |

### 5. Context Retrieval Testing

| Test Case | Steps | Expected Outcome |
|-----------|-------|------------------|
| Get codebase summary | Request codebase summary via Cursor | Summary content is returned correctly |
| Get feature tasks | Request tasks for a specific feature | Task list is returned correctly |
| Get history log | Request history log | Log content shows recent actions |
| Missing files | Request content before it exists | Appropriate error handling/generation |

### 6. End-to-End Workflow Testing

| Test Case | Steps | Expected Outcome |
|-----------|-------|------------------|
| Feature implementation workflow | 1. Process Confluence spec<br>2. Get next task<br>3. Update task status<br>4. Repeat until all tasks complete | All interactions work correctly, task statuses update, history is logged |
| "AI Suggestion + User Confirmation" | 1. Complete a task implementation<br>2. Have AI suggest marking the task as complete<br>3. Confirm the action | Task is marked complete only after explicit confirmation |

### 7. Error Handling and Recovery

| Test Case | Steps | Expected Outcome |
|-----------|-------|------------------|
| Server restart | 1. Stop server<br>2. Start server<br>3. Test functionality | All functionality works after restart |
| Invalid inputs | Provide invalid inputs to all tools | Appropriate error messages without crashing |
| Concurrent connections | Connect multiple clients simultaneously | All connections handled correctly |

## Testing Tools

1. **MCP Inspector**
   - Use [MCP Inspector](https://github.com/modelcontextprotocol/inspector) for direct tool testing
   - Test each tool independently before testing in Cursor
2. **Cursor Integration Testing**
   - Configure Cursor with the local `.cursor/` directory
   - Test natural language interactions with the AI
   - Verify tools are suggested and executed correctly
3. **Automated Testing Script**
   - Create a Node.js script to automate API calls to all endpoints
   - Verify responses match expected formats
   - Simulate error conditions

## Test Execution and Reporting

1. **Test Execution Steps**
   - Begin with basic server functionality tests
   - Proceed to individual tool tests using MCP Inspector
   - Conduct full workflow tests with Cursor
   - Run all tests in the production-like environment
2. **Test Results Documentation**
   - Document all test outcomes
   - Record any failures or unexpected behavior
   - Include screenshots of key interactions
3. **Performance Metrics**
   - Record response times for different operations
   - Monitor memory usage during extended operations
   - Test with different sizes of codebases

## Acceptance Criteria

The deployment will be considered successful if:

1. All MCP tools function as specified in the PRD
2. Context files are correctly created and maintained
3. Cursor can interact with the server and execute all tools
4. The system handles error conditions gracefully
5. All end-to-end workflows complete successfully

## Post-Deployment Monitoring

1. **Ongoing Checks**
   - Regular health checks on the deployed instance
   - Monitor logs for unexpected errors
   - Check for any authentication/permission issues
2. **Feedback Collection**
   - Gather user feedback on tool functionality
   - Record any issues encountered in real usage
   - Prioritize improvements based on feedback
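The "API endpoints available" check above expects a valid SSE connection response. A minimal sketch of the frame parsing an automated test script could use to validate the stream; the `event:` and `data:` field names come from the SSE specification, but the sample payload is illustrative, not the server's actual output:

```javascript
// Parse a raw SSE chunk into events. Per the SSE spec, events are
// separated by a blank line and default to the event name "message".
function parseSseChunk(chunk) {
  const events = [];
  for (const block of chunk.split("\n\n")) {
    const event = { event: "message", data: [] };
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event.event = line.slice(6).trim();
      else if (line.startsWith("data:")) event.data.push(line.slice(5).trim());
    }
    // Skip blocks with no data field (e.g. trailing whitespace, comments).
    if (event.data.length > 0) {
      events.push({ ...event, data: event.data.join("\n") });
    }
  }
  return events;
}

// Illustrative frame only; the real server's first event may differ.
const raw = "event: endpoint\ndata: /message?sessionId=abc\n\n";
console.log(parseSseChunk(raw)); // logs the single parsed event
```

A test script would feed chunks read from `http://localhost:3001/sse` into this parser and fail the "valid SSE connection response" case if no well-formed event arrives.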
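For the "Invalid inputs" cases, MCP messages are framed as JSON-RPC 2.0, so a test script can at least assert the envelope shape of an error reply. This is a sketch that validates only the generic JSON-RPC error structure; the specific error codes and messages Lamplighter-MCP returns are not specified here:

```javascript
// Returns true if `raw` is a well-formed JSON-RPC 2.0 error response:
// jsonrpc version marker, an id, and an error object with an integer
// code and a string message.
function isJsonRpcError(raw) {
  let msg;
  try {
    msg = JSON.parse(raw);
  } catch {
    return false; // not JSON at all
  }
  return (
    msg !== null &&
    typeof msg === "object" &&
    msg.jsonrpc === "2.0" &&
    "id" in msg &&
    typeof msg.error === "object" &&
    msg.error !== null &&
    Number.isInteger(msg.error.code) &&
    typeof msg.error.message === "string"
  );
}

console.log(
  isJsonRpcError('{"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid params"}}')
); // true
console.log(isJsonRpcError('{"ok":false}')); // false
```

This supports the acceptance criterion that "the system handles error conditions gracefully": a crash or an unstructured reply fails the check even when the expected error code is unknown.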
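The task-management checks ("Get next task" returns the first "ToDo" task; "Task not found" errors without crashing) can be exercised against the `feature_tasks/feature_XYZ_tasks.md` files directly. The sketch below assumes a hypothetical `- [Status] Title` line format; the real format is whatever the server's task breakdown emits, so adjust the regex accordingly:

```javascript
// Assumed task-line format (hypothetical): "- [ToDo] Implement endpoint"
const TASK_LINE = /^- \[(ToDo|InProgress|Done)\] (.+)$/;

// Extract { status, title } pairs from a task file's markdown.
function parseTasks(markdown) {
  return markdown
    .split("\n")
    .map((line) => line.trim().match(TASK_LINE))
    .filter(Boolean)
    .map(([, status, title]) => ({ status, title }));
}

// Mirrors "Get next task": the first task still marked ToDo, or null.
function getNextTask(markdown) {
  return parseTasks(markdown).find((t) => t.status === "ToDo") ?? null;
}

// Mirrors "Update task status": rewrite one line, error if the task
// does not exist (the "Task not found" case).
function updateTaskStatus(markdown, title, newStatus) {
  if (!parseTasks(markdown).some((t) => t.title === title)) {
    throw new Error(`Task not found: ${title}`);
  }
  return markdown.replace(
    new RegExp(`^(\\s*- )\\[\\w+\\] ${title}$`, "m"),
    `$1[${newStatus}] ${title}`
  );
}

const sample = [
  "- [Done] Scaffold feature module",
  "- [ToDo] Implement endpoint",
  "- [ToDo] Write tests",
].join("\n");

console.log(getNextTask(sample).title); // "Implement endpoint"
```

A workflow test can then loop: read the file, call the server's update tool, re-read, and assert the next "ToDo" task advanced exactly as in the End-to-End Workflow Testing table.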