# Engineering Requirements Document: Lamplighter-MCP
**Version:** 1.0
**Date:** 2025-04-13
**Related PRD:** Version 1.0
## 1. Introduction
This document provides the technical specifications for implementing Lamplighter-MCP, a Node.js/TypeScript-based backend service acting as a project context engine for AI coding assistants like Cursor. It leverages the `@modelcontextprotocol/sdk` to expose functionality via MCP. This ERD translates the functional and non-functional requirements from the PRD into actionable implementation details for the engineering team.
## 2. System Architecture
Lamplighter-MCP will follow a modular architecture:
* **Core Logic Modules:** TypeScript modules responsible for specific business logic (code analysis, Confluence interaction, AI processing, task management, history logging). These modules are independent of the MCP transport layer.
* **MCP Server Interface:** A dedicated module initializing and configuring an `McpServer` instance from `@modelcontextprotocol/sdk`. This layer defines MCP tools that wrap calls to the Core Logic Modules.
* **Transport Layer:** Utilizes the `@modelcontextprotocol/sdk` transport mechanisms (specifically HTTP/SSE) to handle communication with MCP clients (Cursor).
* **Configuration:** Uses environment variables (`dotenv` package) for managing sensitive information (API keys) and service settings.
* **Data Store:** Relies on the filesystem, specifically structured Markdown files within a designated project directory (`lamplighter_context/`), managed via Node.js `fs` module.
```mermaid
graph LR
subgraph "Client Side"
Client[MCP Client]
end
subgraph "Server Side (Lamplighter-MCP)"
Transport[Transport Layer]
MCPInterface[MCP Server Interface]
CoreModules[Core Logic Modules]
FileSystem[Filesystem]
subgraph "Core Logic Modules"
direction TB
Analyzer[CodebaseAnalyzer]
Confluence[ConfluenceReader]
AIService[AIService]
Processor[FeatureSpecProcessor]
TaskManager[TaskManager]
Logger[HistoryLogger]
end
end
subgraph "External Services"
ConfluenceAPI[Confluence API]
LLM_API[LLM API]
end
Client -- MCP Requests/Responses --> Transport
Transport -- MCP Messages --> MCPInterface
MCPInterface -- Calls Methods --> CoreModules
CoreModules -- Read/Write --> FileSystem
Confluence -- Fetches --> ConfluenceAPI
AIService -- Calls --> LLM_API
```
## 3. Technology Stack
* **Language:** TypeScript
* **Runtime:** Node.js (LTS version recommended)
* **MCP Framework:** `@modelcontextprotocol/sdk`
* **Schema Validation:** `zod` (as used by the MCP SDK)
* **Configuration:** `dotenv`
* **Filesystem Access:** Node.js `fs` module (standard library)
* **HTTP Server (for SSE):** `express` (recommended)
* **External APIs:**
* Confluence REST API client (e.g., `axios` or a dedicated library)
* LLM Client SDKs (e.g., `openai`, `@google-ai/generativelanguage`)
## 4. Core Logic Module Specifications
* **`CodebaseAnalyzer` (`src/modules/codebaseAnalyzer.ts`)**
* **Purpose:** Analyzes codebase structure.
* **Methods:**
* `analyze(projectRoot: string): Promise<void>`: Scans the directory, identifies structure/modules/tech, generates Markdown content, writes to `codebase_summary.md`.
* **Dependencies:** `fs`; optionally libraries like `tree-sitter` or `ts-morph` for deeper analysis (an initial implementation can use simpler directory traversal and regex matching). `AIService` (optional, for generated summaries).
* **`ConfluenceReader` (`src/modules/confluenceReader.ts`)**
* **Purpose:** Fetches content from Confluence.
* **Methods:**
* `fetchPageContent(pageUrlOrId: string): Promise<string>`: Authenticates (via env vars), fetches page content, returns main text content.
* **Dependencies:** HTTP client (`axios`), `dotenv`. Configuration for Confluence URL and API token.
* **`AIService` (`src/services/aiService.ts`)**
* **Purpose:** Abstract LLM interactions.
* **Interface/Base Class:** Define `IAIService` or `BaseAIService`.
* **Methods:**
* `generateText(prompt: string, provider: string, config: AIConfig): Promise<string>`: Selects provider implementation based on input, makes API call, returns text response.
* **Concrete Implementations:** `OpenAIService.ts`, `GoogleGeminiService.ts`, etc., implementing the interface.
* **Dependencies:** LLM SDKs, `dotenv` for API keys.
* **`FeatureSpecProcessor` (`src/modules/featureSpecProcessor.ts`)**
* **Purpose:** Processes specs into Markdown tasks.
* **Methods:**
* `processSpecification(specText: string, codebaseSummary: string, featureIdentifier: string): Promise<string>`: Constructs prompt, calls `AIService.generateText`, parses response, formats Markdown, writes to `feature_tasks/feature_XYZ_tasks.md`. Returns path to file.
* **Dependencies:** `AIService`, `fs`, `path`.
* **`TaskManager` (`src/modules/taskManager.ts`)**
* **Purpose:** Manages tasks within Markdown files.
* **Methods:**
* `updateTaskStatus(featureIdentifier: string, taskIdentifier: string, newStatus: 'ToDo' | 'InProgress' | 'Done'): Promise<void>`: Reads file, parses checklist, finds task (by exact text match initially), updates the checkbox marker (`Done` maps to `[x]`; `ToDo` and `InProgress` map to `[ ]`, as GFM checklists have only two states), rewrites file.
* `suggestNextTask(featureIdentifier: string): Promise<string | null>`: Reads file, parses checklist, finds first task line starting with `- [ ]`, returns the task text or null.
* **Dependencies:** `fs`, `path`. Requires robust Markdown checklist parsing (regex or library).
* **`HistoryLogger` (`src/modules/historyLogger.ts`)**
* **Purpose:** Appends events to the history log.
* **Methods:**
* `log(message: string): Promise<void>`: Gets timestamp, formats log entry, appends to `history_log.md` using `fs.appendFile`.
* **Dependencies:** `fs`, `path`.
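The checklist handling `TaskManager` relies on can be sketched with plain string and regex operations; the function names below are illustrative, not a fixed API:

```typescript
type TaskStatus = 'ToDo' | 'InProgress' | 'Done';

// Return the text of the first unchecked task, or null if none remain.
function suggestNextTask(markdown: string): string | null {
  for (const line of markdown.split('\n')) {
    const match = line.match(/^\s*- \[ \] (.+)$/);
    if (match) return match[1].trim();
  }
  return null;
}

// Rewrite the checkbox marker for the task whose text matches exactly.
// 'Done' maps to [x]; 'ToDo' and 'InProgress' both map to [ ].
function setTaskStatus(markdown: string, taskText: string, status: TaskStatus): string {
  const marker = status === 'Done' ? 'x' : ' ';
  return markdown
    .split('\n')
    .map((line) => {
      const match = line.match(/^(\s*- \[)[ x](\] )(.+)$/);
      return match && match[3].trim() === taskText
        ? `${match[1]}${marker}${match[2]}${match[3]}`
        : line;
    })
    .join('\n');
}
```

A library such as `marked` or `remark` could replace the regexes once task formatting grows beyond simple one-line checklist items.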
## 5. MCP Server Implementation (`src/server.ts`)
* **Initialization:**
* Instantiate `McpServer` with `name: 'Lamplighter-MCP'` and current `version`.
* Instantiate core logic modules.
* **Transport:** Implement using HTTP with SSE (Server-Sent Events) via `SSEServerTransport` and `express`.
* Set up an Express app with an `/sse` endpoint for client connections and a `/messages` endpoint for receiving client messages via POST.
* Manage multiple client connections using a sessionId-to-transport map as shown in the SDK documentation.
* **MCP Tool Definitions:** Define tools using `server.tool()`:
* **`analyze_codebase`**
* `name`: `"analyze_codebase"`
* `argumentSchema`: `z.object({})` (No arguments)
* `handler`:
```typescript
async () => {
await codebaseAnalyzer.analyze(projectRoot);
await historyLogger.log('Codebase analysis complete.');
return { content: [{ type: 'text', text: 'Codebase analysis initiated and completed.' }] };
}
```
* **`process_confluence_spec`**
* `name`: `"process_confluence_spec"`
* `argumentSchema`: `z.object({ confluence_url: z.string().url() })`
* `handler`:
```typescript
async ({ confluence_url }) => {
// TODO: Add robust error handling (try/catch)
const specText = await confluenceReader.fetchPageContent(confluence_url);
const codebaseSummary = await fs.promises.readFile(codebaseSummaryPath, 'utf-8');
const featureId = deriveFeatureId(confluence_url); // Implement this helper
const filePath = await featureSpecProcessor.processSpecification(specText, codebaseSummary, featureId);
await historyLogger.log(`Processed spec "${featureId}" from ${confluence_url}`);
return { content: [{ type: 'text', text: `Specification processed. Tasks available at ${filePath}` }] };
}
```
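The `deriveFeatureId` helper above is left to implement; one plausible sketch, assuming the identifier is derived from the last path segment of the Confluence URL, is:

```typescript
// Hypothetical implementation: slugify the final URL path segment
// into a filesystem-safe feature identifier.
function deriveFeatureId(confluenceUrl: string): string {
  const segments = new URL(confluenceUrl).pathname.split('/').filter(Boolean);
  const last = segments[segments.length - 1] ?? 'untitled';
  return decodeURIComponent(last)
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '_')   // non-alphanumerics become underscores
    .replace(/^_+|_+$/g, '');      // trim leading/trailing underscores
}
```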
* **`update_task_status`**
* `name`: `"update_task_status"`
* `argumentSchema`: `z.object({ feature_identifier: z.string(), task_identifier: z.string(), new_status: z.enum(['ToDo', 'InProgress', 'Done']) })`
* `handler`:
```typescript
async ({ feature_identifier, task_identifier, new_status }) => {
// TODO: Add robust error handling (try/catch)
await taskManager.updateTaskStatus(feature_identifier, task_identifier, new_status);
await historyLogger.log(`Updated task "${task_identifier}" in feature "${feature_identifier}" to ${new_status}`);
return { content: [{ type: 'text', text: 'Task status updated.' }] };
}
```
* **`suggest_next_task`**
* `name`: `"suggest_next_task"`
* `argumentSchema`: `z.object({ feature_identifier: z.string() })`
* `handler`:
```typescript
async ({ feature_identifier }) => {
// TODO: Add robust error handling (try/catch)
const nextTask = await taskManager.suggestNextTask(feature_identifier);
return { content: [{ type: 'text', text: nextTask ? `Next task: ${nextTask}` : 'No pending tasks found.' }] };
}
```
* **`get_codebase_summary`**
* `name`: `"get_codebase_summary"`
* `argumentSchema`: `z.object({})`
* `handler`:
```typescript
async () => {
// TODO: Add robust error handling (try/catch, check file exists)
const content = await fs.promises.readFile(codebaseSummaryPath, 'utf-8');
return { content: [{ type: 'text', text: content }] };
}
```
* **`get_history_log`**
* `name`: `"get_history_log"`
* `argumentSchema`: `z.object({})`
* `handler`:
```typescript
async () => {
// TODO: Add robust error handling (try/catch, check file exists)
const content = await fs.promises.readFile(historyLogPath, 'utf-8');
return { content: [{ type: 'text', text: content }] };
}
```
* **`get_feature_tasks`**
* `name`: `"get_feature_tasks"`
* `argumentSchema`: `z.object({ feature_identifier: z.string() })`
* `handler`:
```typescript
async ({ feature_identifier }) => {
// TODO: Add robust error handling (try/catch, check file exists)
const filePath = path.join(featureTasksDir, `feature_${feature_identifier}_tasks.md`);
const content = await fs.promises.readFile(filePath, 'utf-8');
return { content: [{ type: 'text', text: content }] };
}
```
* **Connection:** Start the Express server and call `await server.connect(transport)` within the `/sse` endpoint handler for each connecting client.
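The transport wiring described above can be sketched as follows. This follows the SSE pattern from the SDK documentation; exact import paths and the `SSEServerTransport` surface may differ across SDK versions, and the port is illustrative:

```typescript
import express from 'express';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { SSEServerTransport } from '@modelcontextprotocol/sdk/server/sse.js';

const server = new McpServer({ name: 'Lamplighter-MCP', version: '1.0.0' });
// ... server.tool(...) definitions from above ...

const app = express();
const transports: Record<string, SSEServerTransport> = {};

// Long-lived SSE stream: one transport per connected client.
app.get('/sse', async (_req, res) => {
  const transport = new SSEServerTransport('/messages', res);
  transports[transport.sessionId] = transport;
  res.on('close', () => delete transports[transport.sessionId]);
  await server.connect(transport);
});

// Client-to-server messages arrive as POSTs tagged with a sessionId.
app.post('/messages', async (req, res) => {
  const transport = transports[req.query.sessionId as string];
  if (!transport) {
    res.status(400).send('Unknown sessionId');
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3001);
```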
## 6. Data Management
* **Context Directory:** A configurable root directory (e.g., `lamplighter_context/`, default path relative to project root). Path provided via environment variable `LAMPLIGHTER_CONTEXT_DIR`.
* **File Structure:** Enforce the structure defined in PRD-DM2.
* **Markdown Format:**
* Tasks within `feature_tasks/*.md` MUST use GitHub Flavored Markdown checklist syntax (e.g., `- [ ] Task Description`, `- [x] Completed Task`).
* `codebase_summary.md` should use standard Markdown headings and lists for structure.
* `history_log.md` format: `[YYYY-MM-DD HH:MM:SS] - Log Message\n`
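The `history_log.md` entry format can be produced with a small helper; `formatLogEntry` is a hypothetical name for what `HistoryLogger.log` would use internally:

```typescript
// Format a log entry as "[YYYY-MM-DD HH:MM:SS] - Log Message\n".
function formatLogEntry(message: string, date: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, '0');
  const stamp =
    `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())} ` +
    `${pad(date.getHours())}:${pad(date.getMinutes())}:${pad(date.getSeconds())}`;
  return `[${stamp}] - ${message}\n`;
}
```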
## 7. AI Service Design
* Implement `AIService` as described in Section 4.
* Use environment variables for selecting the provider (`AI_PROVIDER=openai` or `google`) and model (`AI_MODEL=gpt-4` or `gemini-pro`), and for API keys (`OPENAI_API_KEY`, `GOOGLE_API_KEY`).
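Resolving those environment variables might look like the sketch below; the helper name and default models are illustrative assumptions, not part of the spec:

```typescript
interface ResolvedAIConfig {
  provider: 'openai' | 'google';
  model: string;
  apiKey: string;
}

// Hypothetical resolver: pick provider, model, and key from the environment,
// failing fast when the matching API key is absent.
function resolveAIConfig(env: Record<string, string | undefined>): ResolvedAIConfig {
  const provider = (env.AI_PROVIDER ?? 'openai') as 'openai' | 'google';
  const model = env.AI_MODEL ?? (provider === 'openai' ? 'gpt-4' : 'gemini-pro');
  const apiKey = provider === 'openai' ? env.OPENAI_API_KEY : env.GOOGLE_API_KEY;
  if (!apiKey) {
    throw new Error(`Missing API key for AI provider "${provider}"`);
  }
  return { provider, model, apiKey };
}
```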
## 8. Cursor Integration (`.cursor/`)
* **`mcp.json`:**
* Must contain the URL of the running Lamplighter-MCP HTTP server (e.g., `http://localhost:3001/sse` if running locally).
* Must list all defined MCP tools (`analyze_codebase`, `process_confluence_spec`, etc.).
* **`rules/*.mdc`:**
* Create rule files as specified in PRD-IF3, providing clear instructions for Cursor on system purpose, file structure, tool usage (arguments, purpose), and the "AI Suggestion + User Confirmation" workflow for `update_task_status`.
## 9. Error Handling & Logging
* Wrap core logic calls within MCP tool handlers in `try...catch` blocks.
* Log detailed errors internally (e.g., using `console.error` or a dedicated logging library).
* For errors occurring during MCP tool execution, return an appropriate MCP response with `isError: true` and a user-friendly error message in the `content`, e.g., `{ content: [{ type: 'text', text: 'Error fetching from Confluence: ...' }], isError: true }`.
* Use `HistoryLogger` for tracking successful high-level operations.
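The pattern above can be centralized in a generic wrapper so each tool handler stays small; `withErrorHandling` is an illustrative name, and `ToolResult` mirrors the MCP content shape used in Section 5:

```typescript
type ToolResult = { content: { type: 'text'; text: string }[]; isError?: boolean };

// Run a tool handler; convert any thrown error into an MCP error response.
async function withErrorHandling(fn: () => Promise<ToolResult>): Promise<ToolResult> {
  try {
    return await fn();
  } catch (err) {
    console.error('Tool execution failed:', err); // detailed internal log
    const message = err instanceof Error ? err.message : String(err);
    // User-friendly error surfaced to the MCP client
    return { content: [{ type: 'text', text: `Error: ${message}` }], isError: true };
  }
}
```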
## 10. Security Considerations
* Strictly use environment variables (`dotenv`) for all API keys (Confluence, LLMs) and secrets. Never commit secrets to version control. Include a `.env.example` file.
* Validate inputs using `zod` schemas in MCP tool definitions.
* Be mindful of injection risks when user-provided input reaches file paths or external API calls: `zod` validates argument types and formats, but does not by itself prevent path traversal, so sanitize values such as `feature_identifier` before interpolating them into paths.
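A minimal guard against path traversal in identifiers used to build file paths (e.g. in `get_feature_tasks`) could look like this; the allow-listed character set is an assumption:

```typescript
// Reject any feature identifier that could escape the tasks directory
// (e.g. "../../etc") by allow-listing safe characters.
function safeFeatureId(id: string): string {
  if (!/^[A-Za-z0-9_-]+$/.test(id)) {
    throw new Error(`Invalid feature identifier: ${id}`);
  }
  return id;
}
```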
## 11. Deployment & Running
* The server will be run using `node dist/server.js` (after TypeScript compilation).
* Requires environment variables to be set correctly.
* The HTTP/SSE transport is chosen for flexibility in running the server locally or potentially on a shared development server accessible via HTTP.
## 12. Testing Strategy
* **Unit Tests:** Use a framework like `jest` to test individual Core Logic Modules in isolation. Mock dependencies like `fs`, external APIs (`AIService`, `ConfluenceReader`).
* **Integration Tests:** Test the MCP tool handlers by mocking the core logic modules they call, ensuring correct argument parsing and response formatting.
* **End-to-End (Manual):** Test the full flow using Cursor connected to a running Lamplighter-MCP server and the MCP Inspector tool.
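The integration-test idea can be sketched framework-free as below; in practice the same assertion would live in a `jest` test file with `TaskManager` mocked via `jest.mock`. The stub and handler names here are illustrative:

```typescript
type ToolResult = { content: { type: 'text'; text: string }[] };

// Stub standing in for the real TaskManager module.
const taskManagerStub = {
  suggestNextTask: async (_feature: string): Promise<string | null> => 'Write unit tests',
};

// Handler under test, mirroring suggest_next_task from Section 5,
// with the stub injected in place of the real module.
async function suggestNextTaskHandler(args: { feature_identifier: string }): Promise<ToolResult> {
  const nextTask = await taskManagerStub.suggestNextTask(args.feature_identifier);
  return {
    content: [{ type: 'text', text: nextTask ? `Next task: ${nextTask}` : 'No pending tasks found.' }],
  };
}
```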
---