# MCP Prompt Optimizer v2.2.3
**Professional cloud-based MCP server** for AI-powered prompt optimization with intelligent context detection, template management, team collaboration, enterprise-grade features, and **optional personal model configuration**. Starting at $2.99/month.
## Key Features
- **AI Context Detection** - Automatically detects and optimizes for image generation, LLM interaction, code generation, and technical automation
- **Template Management** - Auto-save high-confidence optimizations, search and reuse patterns
- **Team Collaboration** - Shared quotas, team templates, role-based access
- **Real-time Analytics** - Confidence scoring, usage tracking, optimization insights (Note: advanced features such as Bayesian optimization and AG-UI are configurable and may return mock data if disabled in the backend)
- **Cloud Processing** - Always up-to-date AI models, no local setup required
- **Personal Model Choice** - Use your own OpenRouter models via WebUI configuration
- **Universal MCP** - Works with Claude Desktop, Cursor, Windsurf, Cline, VS Code, Zed, and Replit
## Quick Start
**1. Install the MCP server:**
```bash
npm install -g mcp-prompt-optimizer
```
**2. Get your API key:**
Visit [https://promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing) for your free-tier key (`sk-local-*`, 5 optimizations per day).
**3. Configure Claude Desktop:**
Add to your `~/.claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-local-your-key-here"
      }
    }
  }
}
```
Use an `sk-local-*` key for the free tier or an `sk-opt-*` key for paid subscriptions.
**4. Restart Claude Desktop** and start optimizing with AI context awareness!
**5. (Optional) Configure custom models** - See [Advanced Model Configuration](#advanced-model-configuration-optional) below
## Advanced Model Configuration (Optional)
### WebUI Model Selection & Personal OpenRouter Keys
**Want to use your own AI models?** Configure them in the WebUI first, then the NPM package automatically uses your settings!
#### **Step 1: Configure in WebUI**
1. **Visit Dashboard:** [https://promptoptimizer-blog.vercel.app/dashboard](https://promptoptimizer-blog.vercel.app/dashboard)
2. **Go to Settings** → User Settings
3. **Add OpenRouter API Key:** Get one from [OpenRouter.ai](https://openrouter.ai)
4. **Select Your Models:**
   - **Optimization Model:** e.g., `anthropic/claude-3-5-sonnet` (for prompt optimization)
   - **Evaluation Model:** e.g., `google/gemini-pro-1.5` (for quality assessment)
#### **Step 2: Use NPM Package**
Your configured models are **automatically used** by the MCP server - no additional setup needed!
```json
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
      }
    }
  }
}
```
`OPTIMIZER_API_KEY` is your service API key; your personal OpenRouter key stays in the WebUI.
### **Model Selection Priority**
```
1. Your WebUI-configured models (highest priority)
2. Request-specific model (if specified)
3. System defaults (fallback)
```
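The priority order above can be sketched as a simple resolver. This is illustrative only; `resolveModel`, `SYSTEM_DEFAULT`, and the parameter names are hypothetical, not the server's actual internals:

```javascript
// Illustrative sketch of the model-selection priority; all names here
// are hypothetical, not the server's actual implementation.
const SYSTEM_DEFAULT = "system-default-model"; // placeholder fallback

function resolveModel(webuiModel, requestModel) {
  if (webuiModel) return webuiModel;     // 1. WebUI-configured model wins
  if (requestModel) return requestModel; // 2. then a request-specific model
  return SYSTEM_DEFAULT;                 // 3. otherwise the system default
}
```

The key point is that a model chosen in the WebUI silently overrides anything else, so updating your dashboard settings changes behavior everywhere without touching the NPM package.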
### **Benefits of Personal Model Configuration**
- **Cost Control** - Pay for your own OpenRouter usage
- **Model Choice** - Access 100+ models (Claude, GPT-4, Gemini, Llama, etc.)
- **Performance** - Choose faster or more capable models
- **Consistency** - Same models across WebUI and MCP tools
- **Privacy** - Your data goes through your OpenRouter account
### **Example Model Recommendations**
**For Creative/Complex Prompts:**
- Optimization: `anthropic/claude-3-5-sonnet`
- Evaluation: `google/gemini-pro-1.5`
**For Fast/Simple Optimizations:**
- Optimization: `openai/gpt-4o-mini`
- Evaluation: `openai/gpt-3.5-turbo`
**For Technical/Code Prompts:**
- Optimization: `anthropic/claude-3-5-sonnet`
- Evaluation: `anthropic/claude-3-haiku`
### **Important Notes**
**Two Different API Keys:**
- **Service API Key** (`sk-opt-*`): For the MCP service subscription
- **OpenRouter API Key**: For your personal model usage (configured in WebUI)

**Cost Structure:**
- **Service subscription**: Monthly fee for optimization features
- **OpenRouter usage**: Pay-per-token for your chosen models

**No NPM Package Changes Needed:**
When you update models in the WebUI, the NPM package automatically uses the new settings.
---
## Cloud Subscription Plans
> **All plans include the same sophisticated AI optimization quality**
### Explorer - $2.99/month
- **5,000 optimizations** per month
- **Individual use** (1 user, 1 API key)
- **Full AI features** - context detection, template management, insights
- **Personal model configuration** via WebUI
- **Community support**
### Creator - $25.99/month *Popular*
- **18,000 optimizations** per month
- **Team features** (2 members, 3 API keys)
- **Full AI features** - context detection, template management, insights
- **Personal model configuration** via WebUI
- **Priority processing** + email support
### Innovator - $69.99/month
- **75,000 optimizations** per month
- **Large teams** (5 members, 10 API keys)
- **Full AI features** - context detection, template management, insights
- **Personal model configuration** via WebUI
- **Advanced analytics** + priority support + dedicated support channel
**Free Trial:** 5 optimizations with full feature access
## AI Context Detection & Enhancement
The server automatically detects your prompt type and enhances optimization goals:
### Image Generation Context
**Detected patterns:** `--ar`, `--v`, `midjourney`, `dall-e`, `photorealistic`, `4k`
```
Input: "A beautiful landscape --ar 16:9 --v 6"
→ Enhanced goals: parameter_preservation, keyword_density, technical_precision
→ Preserves technical parameters (--ar, --v, etc.)
→ Optimizes quality keywords and visual descriptors
```
### LLM Interaction Context
**Detected patterns:** `analyze`, `explain`, `evaluate`, `summary`, `research`, `paper`, `analysis`, `interpret`, `discussion`, `assessment`, `compare`, `contrast`
```
Input: "Analyze the pros and cons of this research paper and provide a comprehensive evaluation"
→ Enhanced goals: context_specificity, token_efficiency, actionability
→ Improves role clarity and instruction precision
→ Optimizes for better AI understanding
```
### Code Generation Context
**Detected patterns:** `def`, `function`, `code`, `python`, `javascript`, `java`, `c++`, `return`, `import`, `class`, `for`, `while`, `if`, `else`, `elif`
```
Input: "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)"
→ Enhanced goals: technical_accuracy, parameter_preservation, precision
→ Protects code elements and technical syntax
→ Enhances technical precision and clarity
```
### Technical Automation Context
**Detected patterns:** `automate`, `script`, `api`
```
Input: "Create a script to automate deployment process"
→ Enhanced goals: technical_accuracy, parameter_preservation, precision
→ Protects code elements and technical syntax
→ Enhances technical precision and clarity
```
### Human Communication Context (Default)
**All other prompts** get standard optimization for human readability and clarity.
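The pattern lists above can be sketched as a simple keyword matcher. This is illustrative only: the real detector runs in the backend and is more sophisticated than a pattern scan, and the priority ordering shown is an assumption:

```javascript
// Illustrative keyword matcher over the documented pattern lists.
// Contexts are checked in an assumed priority order; anything unmatched
// falls back to human_communication. Not the server's actual implementation.
const CONTEXT_PATTERNS = [
  ["image_generation", [/--ar\b/, /--v\b/, /midjourney/i, /dall-e/i, /photorealistic/i, /\b4k\b/i]],
  ["code_generation", [/\bdef\b/, /\bfunction\b/, /\bclass\b/, /\bimport\b/, /\breturn\b/]],
  ["technical_automation", [/\bautomate\b/i, /\bscript\b/i, /\bapi\b/i]],
  ["llm_interaction", [/\banalyze\b/i, /\bexplain\b/i, /\bevaluate\b/i, /\bsummary\b/i, /\bcompare\b/i]],
];

function detectContext(prompt) {
  for (const [context, patterns] of CONTEXT_PATTERNS) {
    if (patterns.some((p) => p.test(prompt))) return context;
  }
  return "human_communication"; // default context
}
```

For example, `detectContext("A beautiful landscape --ar 16:9 --v 6")` matches the image-generation patterns, while a prompt with none of the listed keywords gets the default human-communication treatment.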
## Enhanced Optimization Features
### Professional Optimization (All Users)
```
Optimized Prompt

Create a comprehensive technical blog post about artificial intelligence that systematically explores current real-world applications, evidence-based benefits, existing limitations and challenges, and data-driven future implications for businesses and society.

Confidence: 87.3%
Plan: Creator
AI Context: Human Communication
Goals Enhanced: Yes (clarity → clarity, specificity, actionability)

AI Context Benefits Applied
- Standard optimization rules applied
- Human communication optimized

Auto-saved as template (ID: tmp_abc123)
*High-confidence optimization automatically saved for future use*

Similar Templates Found
1. AI Article Writing Template (92.1% similarity)
2. Technical Blog Post Structure (85.6% similarity)
*Use `search_templates` tool to explore your template library*

Optimization Insights
Performance Analysis:
- Clarity improvement: +21.9%
- Specificity boost: +17.3%
- Length optimization: +15.2%

Prompt Analysis:
- Complexity level: intermediate
- Optimization confidence: 87.3%

AI Recommendations:
- Optimization achieved 87.3% confidence
- Template automatically saved for future reference
- Prompt optimized from 15 to 23 words

*Professional analytics and improvement recommendations*

---
*Professional cloud-based AI optimization with context awareness*

Manage account & configure models: https://promptoptimizer-blog.vercel.app/dashboard
Check quota: Use `get_quota_status` tool
Search templates: Use `search_templates` tool
```
## Universal MCP Client Support
### Claude Desktop
```json
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
      }
    }
  }
}
```
### Cursor IDE
Add to `~/.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["mcp-prompt-optimizer"],
      "env": {
        "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
      }
    }
  }
}
```
### Windsurf
Configure in IDE settings or add to MCP configuration file.
### Other MCP Clients
- **Cline:** Standard MCP configuration
- **VS Code:** MCP extension setup
- **Zed:** MCP server configuration
- **Replit:** Environment variable setup
- **JetBrains IDEs:** MCP plugin configuration
- **Emacs/Vim/Neovim:** MCP client setup
## Available MCP Tools (for AI Agents & MCP Clients)
These tools are exposed via the Model Context Protocol (MCP) server and are intended for use by AI agents, MCP-compatible clients (like Claude Desktop, Cursor IDE), or custom scripts that interact with the server via stdin/stdout.
### `optimize_prompt`
**Professional AI optimization with context detection, auto-save, and insights.**
```javascript
{
  "prompt": "Your prompt text",
  "goals": ["clarity", "specificity"], // Optional: e.g., "clarity", "conciseness", "creativity", "technical_accuracy"
  "ai_context": "llm_interaction",     // Optional: auto-detected if not specified; e.g., "code_generation", "image_generation"
  "enable_bayesian": true              // Optional: enable Bayesian optimization features (if available in backend)
}
```
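From an MCP client's side, a call to this tool travels as a standard JSON-RPC 2.0 `tools/call` request over stdio. A minimal sketch of constructing that request (the argument values are illustrative, and `buildToolCall` is a hypothetical helper, not part of this package):

```javascript
// Build a JSON-RPC 2.0 "tools/call" request as an MCP client would send
// it over stdio. buildToolCall is a hypothetical helper for illustration.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const request = buildToolCall(1, "optimize_prompt", {
  prompt: "Analyze the pros and cons of this research paper",
  goals: ["clarity", "specificity"],
});
```

MCP clients such as Claude Desktop and Cursor construct and send these requests for you; the sketch only shows what crosses the wire.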
### `detect_ai_context`
**Detects the AI context for a given prompt using advanced backend analysis.**
```javascript
{
  "prompt": "The prompt text for which to detect the AI context"
}
```
### `create_template`
**Create a new optimization template.**
```javascript
{
  "title": "Title of the template",
  "description": "Description of the template", // Optional
  "original_prompt": "The original prompt text",
  "optimized_prompt": "The optimized prompt text",
  "optimization_goals": ["clarity"],            // Optional: e.g., ["clarity", "conciseness"]
  "confidence_score": 0.9,                      // (0.0-1.0)
  "model_used": "openai/gpt-4o-mini",           // Optional
  "optimization_tier": "llm",                   // Optional: e.g., "rules", "llm", "hybrid"
  "ai_context_detected": "llm_interaction",     // Optional: e.g., "code_generation", "image_generation"
  "is_public": false,                           // Optional: whether the template is public
  "tags": ["marketing", "email"]                // Optional
}
```
### `get_template`
**Retrieve a specific template by its ID.**
```javascript
{
  "template_id": "the-template-id"
}
```
### `update_template`
**Update an existing optimization template.**
```javascript
{
  "template_id": "the-template-id",
  "title": "New title for the template",             // Optional
  "description": "New description for the template", // Optional
  "is_public": true                                  // Optional: update public status
  // Other fields from `create_template` can also be updated
}
```
### `search_templates`
**Search your saved template library with AI-aware filtering.**
```javascript
{
  "query": "blog post",                  // Optional: search term to filter templates by content or title
  "ai_context": "human_communication",   // Optional: filter templates by AI context type
  "sophistication_level": "advanced",    // Optional: filter by template sophistication level
  "complexity_level": "complex",         // Optional: filter by template complexity level
  "optimization_strategy": "rules_only", // Optional: filter by optimization strategy used
  "limit": 5,                            // Optional: number of templates to return (1-20)
  "sort_by": "confidence_score",         // Optional: e.g., "created_at", "usage_count", "title"
  "sort_order": "desc"                   // Optional: "asc" or "desc"
}
```
### `get_quota_status`
**Check subscription status, quota usage, and account information.**
```javascript
// No parameters needed
```
### `get_optimization_insights` (Conditional)
**Get advanced Bayesian optimization insights, performance analytics, and parameter tuning recommendations.**
*Note: This tool provides mock data if Bayesian optimization is disabled in the backend.*
```javascript
{
  "analysis_depth": "detailed",   // Optional: "basic", "detailed", "comprehensive"
  "include_recommendations": true // Optional: include optimization recommendations
}
```
### `get_real_time_status` (Conditional)
**Get real-time optimization status and AG-UI capabilities.**
*Note: This tool provides mock data if AG-UI features are disabled in the backend.*
```javascript
// No parameters needed
```
---
## Professional CLI Commands (Direct Execution)
These are direct command-line tools provided by the `mcp-prompt-optimizer` executable for administrative and diagnostic purposes.
```bash
# Check API key and quota status
mcp-prompt-optimizer check-status
# Validate API key with backend
mcp-prompt-optimizer validate-key
# Test backend integration
mcp-prompt-optimizer test
# Run comprehensive diagnostic
mcp-prompt-optimizer diagnose
# Clear validation cache
mcp-prompt-optimizer clear-cache
# Show help and setup instructions
mcp-prompt-optimizer help
# Show version information
mcp-prompt-optimizer version
```
## Team Collaboration Features
### Team API Keys (`sk-team-*`)
- **Shared quotas** across team members
- **Centralized billing** and management
- **Team template libraries** for consistency
- **Role-based access** control
- **Team usage analytics**
### Individual API Keys (`sk-opt-*`)
- **Personal quotas** and billing
- **Individual template libraries**
- **Personal usage tracking**
- **Account self-management**
## Security & Privacy
- **Enterprise-grade security** with encrypted data transmission
- **API key validation** with secure backend authentication
- **Quota enforcement** with real-time usage tracking
- **Professional uptime** with 99.9% availability SLA
- **GDPR compliant** data handling and processing
- **No data retention** - prompts are processed immediately and never stored
## Advanced Features
### Automatic Template Management
- **Auto-save** high-confidence optimizations (>70% confidence)
- **Intelligent categorization** by AI context and content type
- **Similarity search** to find related templates
- **Template analytics** with usage patterns and effectiveness
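The >70% auto-save rule above can be sketched as a simple threshold check. This is illustrative: the threshold constant, function name, and template shape are assumptions, not the server's actual schema:

```javascript
// Illustrative auto-save rule: keep optimizations whose confidence
// exceeds the documented 70% threshold. All names are hypothetical.
const AUTO_SAVE_THRESHOLD = 0.7;

function autoSaveTemplate(result) {
  if (result.confidence <= AUTO_SAVE_THRESHOLD) return null; // below cutoff: skip
  return {
    original_prompt: result.original,
    optimized_prompt: result.optimized,
    confidence_score: result.confidence,
    ai_context_detected: result.context,
  };
}
```

An 87.3% confidence result like the sample output above would clear the cutoff and be saved; a 50% result would not.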
### Real-time Optimization Insights
- **Performance metrics** - clarity, specificity, length improvements
- **Confidence scoring** with detailed analysis
- **AI-powered recommendations** for continuous improvement
- **Usage analytics** and optimization patterns
*Note: Advanced features like Bayesian Optimization and AG-UI Real-time Features are configurable and may provide mock data if disabled in the backend.*
### Intelligent Context Routing
- **Automatic detection** of prompt context and intent
- **Goal enhancement** based on detected context
- **Parameter preservation** for technical prompts
- **Context-specific optimizations** for better results
## Getting Started
### Fast Start (System Defaults)
1. **Sign up** at [promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)
2. **Install** the MCP server: `npm install -g mcp-prompt-optimizer`
3. **Configure** your MCP client with your API key
4. **Start optimizing** with intelligent AI context detection!
### Advanced Start (Custom Models)
1. **Sign up** at [promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)
2. **Configure WebUI** at [dashboard](https://promptoptimizer-blog.vercel.app/dashboard) with your OpenRouter key & models
3. **Install** the MCP server: `npm install -g mcp-prompt-optimizer`
4. **Configure** your MCP client with your API key
5. **Enjoy enhanced optimization** with your chosen models!
## Support & Resources
- **Documentation:** https://promptoptimizer-blog.vercel.app/docs
- **Community Support:** GitHub Discussions
- **Email Support:** support@promptoptimizer.help (Creator/Innovator)
- **Enterprise:** enterprise@promptoptimizer.help
- **Dashboard & Model Config:** https://promptoptimizer-blog.vercel.app/dashboard
- **Troubleshooting:** https://promptoptimizer-blog.vercel.app/docs/troubleshooting
## Why Choose MCP Prompt Optimizer?
- **Professional Quality** - Enterprise-grade optimization with consistent results
- **Universal Compatibility** - Works with 10+ MCP clients out of the box
- **AI Context Awareness** - Intelligent optimization based on prompt type
- **Personal Model Choice** - Use your own OpenRouter models & pay per use
- **Template Management** - Build and reuse optimization patterns
- **Team Collaboration** - Shared resources and centralized management
- **Real-time Analytics** - Track performance and improvement over time
- **Startup Validation** - Comprehensive error handling and troubleshooting
- **Professional Support** - From community to enterprise-level assistance
---
**Professional MCP Server** - Built for serious AI development with intelligent context detection, comprehensive template management, personal model configuration, and enterprise-grade reliability.
*Get started with 5 free optimizations at [promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)*