# Context Optimizer MCP Server
   
A Model Context Protocol (MCP) server that provides context optimization tools for AI coding assistants including GitHub Copilot, Cursor AI, Claude Desktop, and other MCP-compatible assistants.
## Overview
This server provides context optimization functionality similar to the VS Code Copilot Context Optimizer extension, but with compatibility across MCP-supporting applications. It enables AI assistants to extract targeted information rather than processing large files and command outputs in their entirety.
## Features
- **File Analysis Tool** (`askAboutFile`) - Extract specific information from files without loading entire contents
- **Terminal Execution Tool** (`runAndExtract`) - Execute commands and extract relevant information using LLM analysis
- **Follow-up Questions Tool** (`askFollowUp`) - Continue conversations about previous terminal executions
- **Research Tools** (`researchTopic`, `deepResearch`) - Conduct web research using Exa.ai's API
- **Security Controls** - Path validation, command filtering, and session management
- **Multi-LLM Support** - Works with Google Gemini, Claude (Anthropic), and OpenAI
- **Environment Variable Configuration** - API key management through system environment variables
- **Simple Configuration** - Environment variables only, no config files to manage
- **Comprehensive Testing** - Unit tests, integration tests, and security validation
## Quick Start
```bash
# 1. Install globally
npm install -g context-optimizer-mcp-server

# 2. Set environment variables (see docs/guides/usage.md for OS-specific instructions)
export CONTEXT_OPT_LLM_PROVIDER="gemini"
export CONTEXT_OPT_GEMINI_KEY="your-gemini-api-key"
export CONTEXT_OPT_ALLOWED_PATHS="/path/to/your/projects"

# 3. Configure your AI assistant (VS Code, Claude Desktop, etc.)
# See docs/guides/usage.md for detailed setup instructions
```
For complete setup instructions including OS-specific environment variable configuration and AI assistant setup, see **[docs/guides/usage.md](docs/guides/usage.md)**.
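If you want to confirm the server starts and exposes its tools before wiring it into an assistant, a small MCP client built with the official TypeScript SDK can serve as a smoke test. This is a hedged sketch, not part of the package: it assumes the global install provides a `context-optimizer-mcp-server` executable (substitute `npx context-optimizer-mcp-server` if it does not), and the environment values are placeholders.

```typescript
// smoke-test.ts - connect to the server over stdio and list its tools.
// Assumption: @modelcontextprotocol/sdk is installed and the global install of
// context-optimizer-mcp-server exposes a binary of the same name.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "context-optimizer-mcp-server",
  // Environment is passed explicitly; replace the placeholders with real values.
  env: {
    CONTEXT_OPT_LLM_PROVIDER: "gemini",
    CONTEXT_OPT_GEMINI_KEY: process.env.CONTEXT_OPT_GEMINI_KEY ?? "",
    CONTEXT_OPT_ALLOWED_PATHS: "/path/to/your/projects",
  },
});

const client = new Client({ name: "smoke-test", version: "0.0.1" });
await client.connect(transport);

// The printed names should match the tools listed below (askAboutFile, runAndExtract, ...).
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

await client.close();
```

Running this (for example with `npx tsx smoke-test.ts`) should print the server's tool names if the environment variables are valid.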
## Available Tools
- **`askAboutFile`** - Extract specific information from files without loading entire contents into chat context. Perfect for checking if files contain specific functions, extracting import/export statements, or understanding file purpose without reading the full content.
- **`runAndExtract`** - Execute terminal commands and intelligently extract relevant information using LLM analysis. Supports non-interactive commands with security validation, timeouts, and session management for follow-up questions.
- **`askFollowUp`** - Continue conversations about previous terminal executions without re-running commands. Access complete context from previous `runAndExtract` calls including full command output and execution details.
- **`researchTopic`** - Conduct quick, focused web research on software development topics using Exa.ai's research capabilities. Get current best practices, implementation guidance, and up-to-date information on evolving technologies.
- **`deepResearch`** - Comprehensive research and analysis using Exa.ai's exhaustive capabilities for critical decision-making and complex architectural planning. Ideal for strategic technology decisions, architecture planning, and long-term roadmap development.
For detailed tool documentation and examples, see **[docs/tools.md](docs/tools.md)** and **[docs/guides/usage.md](docs/guides/usage.md)**.
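To make the tool contracts above concrete, the sketch below shows how an MCP client might invoke `askAboutFile` and `runAndExtract` through the TypeScript SDK. The argument names used here (`filePath`, `question`, `command`, `extractionPrompt`) are illustrative assumptions, not the published schemas; the authoritative input shapes are in docs/tools.md.

```typescript
// tool-calls.ts - illustrative only: the argument objects below are assumed,
// not copied from the published tool schemas (see docs/tools.md).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "context-optimizer-mcp-server",
  // Pass the CONTEXT_OPT_* variables explicitly, as in the smoke test above.
  env: {
    CONTEXT_OPT_LLM_PROVIDER: "gemini",
    CONTEXT_OPT_GEMINI_KEY: process.env.CONTEXT_OPT_GEMINI_KEY ?? "",
    CONTEXT_OPT_ALLOWED_PATHS: "/path/to/your/projects",
  },
});

const client = new Client({ name: "tool-demo", version: "0.0.1" });
await client.connect(transport);

// Ask a targeted question about a file instead of loading its full contents.
const fileAnswer = await client.callTool({
  name: "askAboutFile",
  arguments: {
    filePath: "/path/to/your/projects/src/index.ts", // hypothetical parameter name
    question: "Which functions does this module export?",
  },
});
console.log(fileAnswer);

// Run a command and extract only the relevant part of its output; a later
// askFollowUp call can reuse the stored session instead of re-running it.
const testSummary = await client.callTool({
  name: "runAndExtract",
  arguments: {
    command: "npm test",                         // hypothetical parameter name
    extractionPrompt: "List any failing tests.", // hypothetical parameter name
  },
});
console.log(testSummary);

await client.close();
```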
## Documentation
All documentation is organized under the `docs/` directory:
| Topic | Location | Description |
|-------|----------|-------------|
| **Architecture** | `docs/architecture.md` | System design and component overview |
| **Tools Reference** | `docs/tools.md` | Complete tool documentation and examples |
| **Usage Guide** | `docs/guides/usage.md` | Complete setup and configuration |
| **VS Code Setup** | `docs/guides/vs-code-setup.md` | VS Code specific configuration |
| **Troubleshooting** | `docs/guides/troubleshooting.md` | Common issues and solutions |
| **API Keys** | `docs/reference/api-keys.md` | API key management |
| **Testing** | `docs/reference/testing.md` | Testing framework and procedures |
| **Changelog** | `docs/reference/changelog.md` | Version history |
| **Contributing** | `docs/reference/contributing.md` | Development guidelines |
| **Security** | `docs/reference/security.md` | Security policy |
| **Code of Conduct** | `docs/reference/code-of-conduct.md` | Community guidelines |
### Quick Links
- **Get Started**: See `docs/guides/usage.md` for complete setup instructions
- **Tools Reference**: Check `docs/tools.md` for detailed tool documentation
- **Troubleshooting**: Check `docs/guides/troubleshooting.md` for common issues
- **VS Code Setup**: Follow `docs/guides/vs-code-setup.md` for VS Code configuration
## Testing
```bash
# Run all tests (skips LLM integration tests without API keys)
npm test

# Run tests with API keys for full integration testing
# Set environment variables first:
export CONTEXT_OPT_LLM_PROVIDER="gemini"
export CONTEXT_OPT_GEMINI_KEY="your-gemini-key"
export CONTEXT_OPT_EXA_KEY="your-exa-key"
npm test # Now runs all tests including LLM integration

# Run in watch mode
npm run test:watch
```
For detailed testing setup, see **[docs/reference/testing.md](docs/reference/testing.md)**.
## Contributing
Contributions are welcome! Please read **[docs/reference/contributing.md](docs/reference/contributing.md)** for guidelines on development workflow, coding standards, testing, and submitting pull requests.
## Community
- **Code of Conduct**: See **[docs/reference/code-of-conduct.md](docs/reference/code-of-conduct.md)**
- **Security Reports**: Follow **[docs/reference/security.md](docs/reference/security.md)** for responsible disclosure
- **Issues**: Use GitHub Issues for bugs & feature requests
- **Pull Requests**: Ensure tests pass and docs are updated
- **Discussions**: Use GitHub Discussions (if enabled) for open-ended questions and ideas
## License
MIT License - see LICENSE file for details.
## Related Projects
- VS Code Copilot Context Optimizer – Original VS Code extension (companion project)