# CF Memory MCP

[![npm version](https://badge.fury.io/js/cf-memory-mcp.svg)](https://badge.fury.io/js/cf-memory-mcp) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

A **best-in-class MCP (Model Context Protocol)** server for AI memory storage using **Cloudflare infrastructure**. This package provides AI coding agents with intelligent memory management featuring **smart auto-features**, **intelligent search**, **memory collections**, **temporal intelligence**, **multi-agent collaboration**, **advanced analytics**, and a **real-time analytics dashboard** with interactive visualizations and business intelligence.

## 🎯 Current Version: v2.12.1

## 📊 Real-time Analytics Dashboard

**NEW: Beautiful, high-performance analytics dashboard with interactive visualizations and business intelligence**

🌐 **Live Dashboard**: [https://55a2aea1.cf-memory-dashboard-vue.pages.dev](https://55a2aea1.cf-memory-dashboard-vue.pages.dev)

### Key Features

- **🔄 Real-time Updates** - Live data streaming with Server-Sent Events (SSE)
- **📈 Interactive Charts** - Quality heatmaps, learning velocity gauges, performance radar charts
- **🕸️ Network Visualization** - Memory relationship graphs with clustering and filtering
- **📱 Mobile Responsive** - Optimized for desktop, tablet, and mobile devices
- **🌙 Dark/Light Themes** - Automatic theme switching with user preferences
- **📊 Export & Reports** - JSON/CSV export for business intelligence and presentations
- **⚡ <2s Loading** - Enterprise-grade performance with global CDN
- **🧪 Built-in Testing** - Comprehensive performance and functionality testing suite

### Business Value

- **Quality Tracking** - Monitor AI learning progress from 27% to 60%+ quality scores
- **Performance Monitoring** - Real-time system health and optimization insights
- **Decision Support** - Data-driven insights for strategic planning and resource allocation
- **ROI Measurement** - Quantifiable metrics for AI investment returns

### Quick Start

```bash
# Deploy dashboard (requires Cloudflare account)
cd dashboard-vue
npm run deploy:production

# Or access the live demo
open https://55a2aea1.cf-memory-dashboard-vue.pages.dev
```

📖 **Documentation**: [Dashboard README](./dashboard-vue/README.md) | [Executive Summary](./docs/dashboard-executive-summary.md)

**🚀 NEW: Enhanced JSON + Cloudflare Vectorize Integration (v2.12.1) - Next-Level Semantic Search:**

- 🎯 **Entity-Level Vectorization** - Individual JSON entities get their own vectors for granular semantic search
- 🔍 **Multi-Level Search Architecture** - Search at memory level AND entity level simultaneously
- 🤖 **Automatic Relationship Discovery** - AI-powered similarity-based relationship suggestions
- 📊 **85-95% Search Accuracy** - Enterprise-grade semantic understanding of complex data structures
- ⚡ **50-70% Faster Discovery** - Optimized performance with Cloudflare's edge infrastructure
- 🔗 **Cross-Memory Entity Linking** - Connect similar entities across different JSON memories
- 📈 **Entity Analytics** - Importance scoring and pattern analysis for JSON structures

**🔥 Enhanced JSON Processing & Temporal Relationship Tracking (v2.12.0) - Graphiti-Inspired Features:**

- 📊 **Enhanced JSON Processing** - Automatic entity extraction from structured JSON data with JSONPath tracking
- 🕒 **Temporal Relationship Tracking** - Relationship versioning, validity status, and evolution history
- 🔗 **Relationship Evolution** - Track how connections between memories change over time
- 📝 **Source Type Support** - Handle text, JSON, and message format data with automatic processing
- 🎯 **Entity Relationship Mapping** - Automatic relationship generation between JSON entities
- 📈 **Relationship Analytics** - Evolution summaries and temporal pattern analysis
- 🔧 **New MCP Tools** - update_memory_relationship, search_relationships_temporal, get_relationship_evolution
- 🗄️ **Database Extensions** - Enhanced schema with memory_entities table and temporal indexes

**🧠 Priority 4 - Context-Aware + Temporal Intelligence (v2.11.0) - AI-Enhanced Features:**

- 🎯 **AI-Enhanced Contextual Suggestions** - Smart suggestions using semantic search and AI-powered relevance scoring
- 🕒 **Advanced Temporal Intelligence** - Enhanced time-aware search with sophisticated temporal scoring algorithms
- 🔄 **Context-Switching Optimization** - Automatic project detection and intelligent context switching
- 📊 **Temporal Pattern Analytics** - Advanced pattern recognition with ML-powered predictions
- 🤖 **AI-Powered Suggestion Text** - Intelligent suggestion generation using Cloudflare AI (Llama 3.1 8B)
- 📈 **Enhanced Temporal Relevance** - Context-aware scoring with access patterns and importance weighting
- 🧠 **Smart Context Detection** - AI-powered context extraction from conversation history
- ⚡ **Semantic Context Matching** - Vector-based project context discovery with 95%+ confidence

**🧠 AI/ML Intelligence Engine (v2.9.0) - Production AI Features:**

- 🤖 **AI-Powered Content Expansion** - Real content enrichment using Llama 3.1 8B (replaces static text appending)
- 🏷️ **Semantic Tag Generation** - Intelligent tagging with Cloudflare AI classification models
- 📊 **Real Performance Monitoring** - Actual metrics from database analytics (replaces mock data)
- ⚡ **Enhanced Analytics Dashboard** - Database-driven performance tracking and system health
- 🎯 **Production AI Models** - BGE embeddings, DistilBERT sentiment, Llama classification
- 🔧 **Improved Quality Scoring** - AI-powered analysis with >95% prediction confidence
- 📈 **Performance Tracking** - Real-time operation monitoring with automatic metric collection

**🚀 Cloudflare Vectorize Integration (v2.8.1) - Paid Tier Enhancement:**

- 🎯 **Advanced Vector Search** - Cloudflare Vectorize for lightning-fast semantic search (50M queries/month)
- 📊 **Vector Storage** - Dedicated vector database with 10M stored dimensions/month
- 🔍 **Enhanced Similarity** - Superior semantic search performance vs D1-based embeddings
- 🧩 **Memory Clustering** - AI-powered clustering analysis using vector similarity
- 📈 **Paid Tier Optimization** - 33x more KV writes, 10x larger batches, 6x faster learning cycles
- ⚡ **Performance Boost** - 50-70% response time reduction through optimized caching

**⚡ KV Optimization Engine (v2.8.0) - Performance & Reliability:**

- 🎯 **Intelligent Caching** - Optimized cache service with conditional writes and longer TTL values
- 📊 **KV Usage Monitoring** - Real-time tracking to prevent daily limit breaches (1,000 writes/day)
- 🗄️ **D1 Database Fallback** - Analytics data stored in D1 to reduce KV write frequency
- 🔄 **Batched Operations** - Write queue batching to minimize KV operations
- 📈 **Usage Analytics** - Trends, recommendations, and optimization insights
- 🛡️ **Limit Protection** - Automatic prevention of KV limit exceeded errors

**🧠 Memory Intelligence Engine (v2.7.0) - Autonomous Optimization:**

- 🤖 **Automated Learning Loops** - Self-improving algorithms with A/B testing framework
- 🎯 **Adaptive Thresholds** - Dynamic parameter optimization based on performance data
- 🧪 **Learning Experiments** - Scientific approach to testing optimization strategies
- 📊 **A/B Testing Framework** - Rigorous experimentation with statistical analysis
- 🔄 **Autonomous Optimization** - System continuously improves itself without manual intervention

**Previous Features (Phase 2 Enhancements):**

- 🚀 **Quality Auto-Improvement Engine** - AI-powered memory enhancement to boost quality scores from 27% to 60%+
- 🔧 **Content Expansion** - Intelligent AI analysis to expand short memories with relevant context
- 🏷️ **Smart Tag Enhancement** - Automatic tag suggestions and improvements for better organization
- ⚖️ **Importance Recalculation** - Dynamic importance scoring based on content analysis and usage patterns

**Previous Features (Phase 1 Enhancements):**

- 📊 **Memory Analytics Dashboard** - Real-time statistics and performance insights
- 🔍 **Advanced Search Filters** - Date range, importance, size, and boolean search
- 🏥 **Memory Health Monitoring** - Orphan detection and quality scoring
- 📈 **Performance Metrics** - Response time tracking and cache efficiency analysis
- 📤 **Rich Export/Import** - Multiple formats including graph visualization

**Total Tools Available: 50+** spanning memory management, relationships, temporal intelligence, collaboration, autonomous optimization, KV performance monitoring, and advanced vector search.

## 🎯 Agent Tool Selection Solutions (v2.9.1)

**NEW: Comprehensive guidance for AI agents to efficiently select from 31+ available MCP tools**

With 31+ powerful MCP tools available, selecting the right tool for your task can be overwhelming. Our **Agent Tool Selection Solutions** provide structured guidance to help AI agents quickly identify optimal tools and workflows.

### 📚 Documentation Suite

- **[Intent-Based Tool Selection Guide](docs/AGENT_TOOL_SELECTION_GUIDE.md)** - Clear mappings from user intents to appropriate tools
- **[Common Workflow Patterns](docs/AGENT_WORKFLOW_PATTERNS.md)** - 5 proven workflow templates for common agent tasks
- **[Tool Categories & Organization](docs/TOOL_CATEGORIES.md)** - 31+ tools organized into 8 logical categories
- **[Performance Tips & Best Practices](docs/PERFORMANCE_TIPS.md)** - Optimization guidelines for maximum efficiency

### 🔧 Tool Categories (8 Categories, 31+ Tools)

| Category | Tools | Best For |
|----------|-------|----------|
| **🔧 CORE** | 5 tools | Daily operations, simple tasks |
| **📦 BATCH** | 3 tools | Bulk operations (>5 items) |
| **🕸️ GRAPH** | 6 tools | Exploring connections, relationships |
| **🧠 INTELLIGENCE** | 6 tools | AI-powered automation, quality improvement |
| **🎯 CONTEXT** | 6 tools | Project management, relevant suggestions |
| **🤝 COLLABORATION** | 6 tools | Team projects, multi-agent workflows |
| **📊 ANALYTICS** | 7 tools | System monitoring, performance analysis |
| **⏰ LIFECYCLE** | 7 tools | Data maintenance, system optimization |

### ⚡ Quick Selection Guide

```
Need basic operations?       → CORE tools
Working with many items?     → BATCH tools
Exploring connections?       → GRAPH tools
Want AI assistance?          → INTELLIGENCE tools
Working on projects?         → CONTEXT tools
Collaborating with others?   → COLLABORATION tools
Monitoring system?           → ANALYTICS tools
Managing data lifecycle?     → LIFECYCLE tools
```

### 🔄 Common Workflow Patterns

1. **New Project Setup**: `create_project_context` → `project_onboarding` → `store_multiple_memories` → `build_automatic_relationships`
2. **Research & Discovery**: `intelligent_search` → `get_related_memories` → `traverse_memory_graph` → `get_contextual_suggestions`
3. **Quality Improvement**: `memory_health_check` → `improve_memory_quality` → `repair_and_enhance_tags` → `detect_duplicates`
4. **Analytics & Insights**: `get_memory_stats` → `get_usage_analytics` → `analyze_temporal_relationships`
5. **Collaboration Setup**: `register_agent` → `create_memory_space` → `grant_space_permission` → `add_memory_to_space`
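
For instance, the Research & Discovery pattern can be chained by feeding each tool's result into the next call. The sketch below uses the same generic `callTool` helper as the `recommend_tools` example in the next section; the result field names (`memories`, `id`) and the `get_contextual_suggestions` parameters are assumptions, not part of the documented API.

```javascript
// Workflow 2: Research & Discovery (illustrative sketch).
// `callTool` stands in for your MCP client's tool-invocation helper; result
// shapes such as `memories` and `id` are assumed, not documented.

// 1. Start broad with an AI-powered search.
const found = await callTool('intelligent_search', {
  query: 'React performance optimization',
  auto_expand: true,
  include_related: 1
});

// 2. Pull in memories directly related to the best hit.
const topHit = found.memories?.[0];
const related = await callTool('get_related_memories', {
  memory_id: topHit.id,
  limit: 5
});

// 3. Walk the graph outward for more distant context.
const graph = await callTool('traverse_memory_graph', {
  start_memory_id: topHit.id,
  max_depth: 2,
  direction: 'both'
});

// 4. Ask for next-step suggestions (parameters assumed; not documented here).
const suggestions = await callTool('get_contextual_suggestions', {
  current_context: 'React performance investigation'
});
```
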
### 🤖 Smart Tool Recommendation (NEW!)

**Get intelligent tool recommendations based on your intent:**

```javascript
// Example: Finding information
await callTool('recommend_tools', {
  user_intent: 'I want to find information about React performance optimization',
  current_context: 'Working on a React project',
  task_description: 'Need to improve the performance of my React application'
});
// Returns:
// - Intent: "search_data" (66% confidence)
// - Top tools: intelligent_search, store_memory, retrieve_memory
// - Workflows: Quality Improvement, Analytics & Insights

// Example: Storing project data
await callTool('recommend_tools', {
  user_intent: 'I want to store multiple memories about my new project',
  current_context: 'Starting a new e-commerce project',
  task_description: 'Need to save project requirements, team info, and technical decisions'
});
// Returns:
// - Intent: "store_data" (95% confidence)
// - Top tools: store_memory, retrieve_memory, unified_search
// - Workflows: New Project Setup, Collaboration Setup
```

### 💡 Performance Tips

- **Use batch tools for >5 operations** (10x performance improvement)
- **Enable `semantic: true`** for AI-powered search capabilities
- **Set project context** for better relevance and accuracy
- **Use `get_contextual_suggestions`** when unsure what to do next
- **Use `recommend_tools`** for intelligent tool selection guidance
- **Leverage AI features** for automation and quality improvement

## 🚀 Quick Start

```bash
# Run directly with npx (no installation required)
npx cf-memory-mcp

# Or install globally
npm install -g cf-memory-mcp
cf-memory-mcp
```

## ✨ Features

### Core Features

- **🌐 Completely Portable** - No local setup required, connects to deployed Cloudflare Worker
- **⚡ Production Ready** - Uses Cloudflare D1 database and KV storage for reliability
- **🔧 Zero Configuration** - Works out of the box with any MCP client
- **🌍 Cross Platform** - Supports Windows, macOS, and Linux
- **📦 NPX Compatible** - Run instantly without installation
- **🔒 Secure** - Built on Cloudflare's secure infrastructure
- **🚄 Fast** - Global edge deployment with KV caching

### 🤖 Smart Auto-Features (v2.0.0)

- **🔗 Auto-Relationship Detection** - Automatically suggests relationships between memories
- **🔍 Duplicate Detection** - Identifies potential duplicates with merge strategies
- **🏷️ Smart Tagging** - AI-powered tag suggestions based on content analysis
- **⭐ Auto-Importance Scoring** - ML-based importance prediction with detailed reasoning

### 🧠 Intelligent Search & Collections (v2.0.0)

- **🎯 Intelligent Search** - Combines semantic + keyword + graph traversal with query expansion
- **📚 Memory Collections** - Organize memories with auto-include criteria and sharing
- **🚀 Project Onboarding** - Automated workflows for project setup and knowledge extraction
- **🔄 Query Expansion** - Automatically includes synonyms and related terms

### ⏰ Context-Aware & Temporal Intelligence (v2.2.0)

- **🧠 Conversation Context** - Track and manage conversation-specific memory contexts
- **⏰ Temporal Relevance** - Time-based memory scoring and decay management
- **🔄 Memory Evolution** - Version control and evolution tracking for memories
- **📊 Temporal Analytics** - Analyze how memories and relationships change over time
- **🎯 Context Activation** - Smart memory activation based on conversation context
- **📈 Predictive Relevance** - ML-powered predictions for memory importance over time

### 🤝 Multi-Agent Collaboration (v2.3.0)

- **👥 Agent Management** - Register and authenticate multiple AI agents
- **🏠 Collaborative Spaces** - Shared memory workspaces with permission control
- **🔐 Access Control** - Fine-grained permissions (read/write/admin) for agents
- **🔄 Memory Synchronization** - Real-time sync between different instances
- **⚡ Conflict Resolution** - Smart merge strategies for concurrent edits
- **📊 Collaboration Analytics** - Track agent interactions and collaboration patterns

### 🧠 Memory Intelligence Engine (v2.7.0)

- **🤖 Automated Learning Loops** - Self-improving algorithms that continuously optimize system performance
- **🎯 Adaptive Thresholds** - Dynamic parameter adjustment based on real-time performance data
- **🧪 Learning Experiments** - Create and manage A/B tests for optimization strategies
- **📊 A/B Testing Framework** - Scientific experimentation with statistical analysis and confidence scoring
- **🔄 Improvement Cycles** - Autonomous optimization cycles that identify and apply performance enhancements
- **📈 Predictive Analytics** - ML-powered predictions with >95% confidence targeting
- **🎛️ Threshold Management** - Initialize and manage quality, relevance, importance, and relationship thresholds
- **📋 Experiment Analysis** - Automated analysis of test results with optimization recommendations

### 📤 Advanced Export/Import (v2.3.0)

- **📋 Multi-Format Export** - JSON, XML, Markdown, CSV, GraphML formats
- **🔄 Batch Operations** - Asynchronous export/import job processing
- **🕸️ Graph Visualization** - Export memory networks for analysis tools
- **📦 Rich Metadata** - Full preservation of relationships and collaboration data
- **🔀 Conflict Handling** - Smart import strategies for existing memories

### 📊 Phase 1 Enhancements (v2.5.0)

- **📈 Memory Analytics Dashboard** - Real-time statistics, usage patterns, and performance metrics
- **🔍 Advanced Search Filters** - Date range, importance score, content size, and boolean search operators
- **🏥 Memory Health Monitoring** - Orphan detection, stale memory identification, and quality scoring
- **📊 Performance Insights** - Response time tracking, cache efficiency, and database performance
- **🎯 Quality Analysis** - Multi-factor quality scoring with improvement recommendations

### Advanced Features

- **🧠 Semantic Search** - AI-powered vector search using Cloudflare AI Workers
- **🕸️ Knowledge Graph** - Store and traverse relationships between memories
- **📦 Batch Operations** - Efficiently process multiple memories at once
- **🔍 Graph Traversal** - Find paths and connections between related memories
- **🎯 Smart Filtering** - Advanced search with tags, importance, and similarity

## 🛠️ Usage

### With MCP Clients

Add to your MCP client configuration:

```json
{
  "mcpServers": {
    "cf-memory": {
      "command": "npx",
      "args": ["cf-memory-mcp"]
    }
  }
}
```

### With Augment

Add to your `augment-config.json`:

```json
{
  "mcpServers": {
    "cf-memory": {
      "command": "npx",
      "args": ["cf-memory-mcp"]
    }
  }
}
```

### With Claude Desktop

Add to your Claude Desktop MCP configuration:

```json
{
  "mcpServers": {
    "cf-memory": {
      "command": "npx",
      "args": ["cf-memory-mcp"]
    }
  }
}
```

## 🔧 Available Tools

The CF Memory MCP server provides comprehensive memory management tools:

### Core Memory Operations

#### `store_memory`

Store a new memory with optional metadata and tags.

**Parameters:**
- `content` (string, required) - The memory content
- `tags` (array, optional) - Tags for categorization
- `importance_score` (number, optional) - Importance score 0-10
- `metadata` (object, optional) - Additional metadata

#### `unified_search`

Unified search interface that consolidates all search modes: basic, intelligent, temporal, and vectorize.

**Parameters:**
- `query` (string, optional) - Full-text or semantic search query
- `tags` (array, optional) - Filter by specific tags
- `limit` (number, optional) - Maximum results (default: 10)
- `offset` (number, optional) - Results offset (default: 0)
- `min_importance` (number, optional) - Minimum importance score
- `semantic` (boolean, optional) - Use AI-powered semantic search
- `similarity_threshold` (number, optional) - Minimum similarity for semantic search

#### `retrieve_memory`

Retrieve a specific memory by ID.

**Parameters:**
- `id` (string, required) - The unique memory ID
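
As a quick illustration of the three core tools above, a store → search → retrieve round trip might look like the following sketch. `callTool` stands in for whatever tool-invocation helper your MCP client provides, and result field names such as `id` are assumptions about the response shape.

```javascript
// Store a memory, find it again with unified_search, then fetch it by ID.
// Illustrative sketch only; result field names (e.g. `id`) are assumed.

const stored = await callTool('store_memory', {
  content: 'We chose Cloudflare D1 over Postgres for the memory store.',
  tags: ['architecture', 'decision'],
  importance_score: 8,
  metadata: { project: 'cf-memory' }
});

const results = await callTool('unified_search', {
  query: 'why did we pick D1?',
  semantic: true,              // AI-powered semantic search
  similarity_threshold: 0.7,
  limit: 5
});

const memory = await callTool('retrieve_memory', { id: stored.id });
```
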
### Batch Operations

#### `store_multiple_memories`

Store multiple memories in a single batch operation.

**Parameters:**
- `memories` (array, required) - Array of memory objects to store

#### `update_multiple_memories`

Update multiple memories in a single batch operation.

**Parameters:**
- `updates` (array, required) - Array of memory updates with ID and data

#### `search_and_update`

Search for memories and update them in one operation.

**Parameters:**
- `search` (object, required) - Search criteria
- `update` (object, required) - Update data to apply
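
A hedged sketch of how the batch tools might be combined; the exact shapes of the memory objects and of the `search`/`update` objects beyond the documented parameters are assumptions.

```javascript
// Store several memories in one call, then retag dashboard-related memories
// in a single search-and-update pass (illustrative sketch).
await callTool('store_multiple_memories', {
  memories: [
    { content: 'API rate limit is 100 req/min', tags: ['api'], importance_score: 6 },
    { content: 'Use BGE-base-en-v1.5 embeddings', tags: ['ai', 'embeddings'], importance_score: 7 },
    { content: 'Deploy dashboard via Cloudflare Pages', tags: ['dashboard'], importance_score: 5 }
  ]
});

await callTool('search_and_update', {
  search: { query: 'dashboard', tags: ['dashboard'] },   // assumed search criteria shape
  update: { tags: ['dashboard', 'frontend'] }            // assumed update payload shape
});
```
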
### Graph & Relationship Operations

#### `traverse_memory_graph`

Traverse the memory graph from a starting point to find connected memories.

**Parameters:**
- `start_memory_id` (string, required) - Starting memory ID
- `relationship_types` (array, optional) - Filter by relationship types
- `max_depth` (number, optional) - Maximum traversal depth (default: 3)
- `direction` (string, optional) - Direction: 'outgoing', 'incoming', or 'both'
- `min_strength` (number, optional) - Minimum relationship strength

#### `find_memory_path`

Find a path between two memories through relationships.

**Parameters:**
- `start_memory_id` (string, required) - Starting memory ID
- `end_memory_id` (string, required) - Target memory ID
- `relationship_types` (array, optional) - Filter by relationship types
- `max_depth` (number, optional) - Maximum path length (default: 5)
- `min_strength` (number, optional) - Minimum relationship strength

#### `get_related_memories`

Get memories related to a specific memory with various options.

**Parameters:**
- `memory_id` (string, required) - Memory ID to find related memories for
- `relationship_types` (array, optional) - Filter by relationship types
- `min_strength` (number, optional) - Minimum relationship strength
- `limit` (number, optional) - Maximum results (default: 10)
- `include_indirect` (boolean, optional) - Include indirectly related memories
- `max_hops` (number, optional) - Maximum hops for indirect relationships
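
The graph tools above might be used together like this; the memory IDs are placeholders, and the sketch assumes the same `callTool` helper as the earlier examples.

```javascript
// Explore the neighborhood of a memory, find a path to another memory, and
// pull both direct and indirect relations (illustrative sketch).
const neighborhood = await callTool('traverse_memory_graph', {
  start_memory_id: 'mem_123',   // placeholder ID
  max_depth: 2,
  direction: 'both',
  min_strength: 0.5
});

const path = await callTool('find_memory_path', {
  start_memory_id: 'mem_123',
  end_memory_id: 'mem_456',     // placeholder ID
  max_depth: 5
});

const related = await callTool('get_related_memories', {
  memory_id: 'mem_123',
  include_indirect: true,
  max_hops: 2,
  limit: 10
});
```
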
## 🤖 Smart Auto-Features (v2.0.0)

### `suggest_relationships`

Get intelligent relationship suggestions for a memory without automatically creating them.

**Parameters:**
- `memory_id` (string, required) - Memory ID to suggest relationships for

**Returns:** Array of potential relationships with confidence scores and suggested actions.

### `detect_duplicates`

Detect potential duplicate memories with similarity analysis and merge strategies.

**Parameters:**
- `memory_id` (string, optional) - Specific memory to check for duplicates

**Returns:** Array of potential duplicates with similarity scores and merge suggestions.

### `suggest_tags`

Get AI-powered tag suggestions based on content analysis and existing patterns.

**Parameters:**
- `content` (string, required) - Content to analyze for tag suggestions
- `existing_tags` (array, optional) - Existing tags to exclude from suggestions

**Returns:** Suggested tags with confidence scores and reasoning.

### `calculate_auto_importance`

Calculate automatic importance score based on multiple factors.

**Parameters:**
- `memory_id` (string, required) - Memory ID to calculate importance for

**Returns:** Importance score with detailed factor analysis and reasoning.

### `improve_memory_quality`

**Quality Auto-Improvement Engine** - Enhance memory quality using AI to boost quality scores from 27% to 60%+.

**Parameters:**
- `memory_id` (string, optional) - Specific memory ID to improve. If not provided, improves a batch of low-quality memories
- `batch_size` (number, optional) - Number of memories to process per batch (default: 20)
- `target_quality_threshold` (number, optional) - Target quality threshold; memories above this score are skipped (default: 60)
- `improvement_types` (array, optional) - Types of improvements to apply: content_expansion, importance_recalculation, tag_enhancement, relationship_building
- `dry_run` (boolean, optional) - If true, only analyze and suggest improvements without applying them

**Returns:** Detailed improvement report with before/after quality scores, applied changes, and quality statistics.
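
A common pattern is to preview improvements with `dry_run` before applying them. The sketch below is illustrative only, using the documented parameters with placeholder values.

```javascript
// Preview quality improvements for a batch of low-quality memories, then
// apply the same settings once the suggestions look reasonable.
const report = await callTool('improve_memory_quality', {
  batch_size: 20,
  target_quality_threshold: 60,
  improvement_types: ['content_expansion', 'tag_enhancement'],
  dry_run: true                // analyze and suggest only, change nothing
});

await callTool('improve_memory_quality', {
  batch_size: 20,
  target_quality_threshold: 60,
  improvement_types: ['content_expansion', 'tag_enhancement']
});
```
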
## 🧠 Intelligent Search & Collections (v2.0.0)

### `intelligent_search`

Advanced search combining semantic, keyword, and graph traversal with query expansion.

**Parameters:**
- `query` (string, required) - Natural language search query
- `auto_expand` (boolean, optional) - Automatically expand query with synonyms
- `include_related` (number, optional) - Include related memories (number of hops)
- `context_aware` (boolean, optional) - Apply context-aware ranking
- `project_context` (string, optional) - Project context for ranking

**Returns:** Search results with metadata about methods used and query expansion.

### `create_collection`

Create a memory collection with optional auto-include criteria.

**Parameters:**
- `name` (string, required) - Collection name
- `description` (string, optional) - Collection description
- `auto_include_criteria` (object, optional) - Criteria for auto-populating the collection
- `sharing_permissions` (object, optional) - Sharing and access permissions

### `project_onboarding`

Smart workflow for automated project onboarding with knowledge extraction.

**Parameters:**
- `project_name` (string, required) - Name of the project
- `project_description` (string, optional) - Project description
- `technologies` (array, optional) - Technologies used in the project
- `team_members` (array, optional) - Team members
- `goals` (array, optional) - Project goals and objectives

**Returns:** Complete onboarding results with key concepts, relationship map, knowledge gaps, and documentation suggestions.
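
For example, a project could be onboarded once and then queried with context-aware ranking; the values below are placeholders and the sketch is not a definitive usage recipe.

```javascript
// Onboard a project, then run a context-aware intelligent search against it
// (illustrative sketch; all values are placeholders).
await callTool('project_onboarding', {
  project_name: 'storefront',
  project_description: 'E-commerce storefront rebuild',
  technologies: ['React', 'Cloudflare Workers'],
  goals: ['Ship MVP by Q3']
});

const results = await callTool('intelligent_search', {
  query: 'open questions about checkout flow',
  auto_expand: true,
  context_aware: true,
  project_context: 'storefront'
});
```
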
## ⏰ Context-Aware & Temporal Intelligence (v2.2.0)

### `create_conversation_context`

Create a new conversation context for tracking related memories.

**Parameters:**
- `context_name` (string, required) - Name for the conversation context
- `description` (string, optional) - Description of the context
- `metadata` (object, optional) - Additional context metadata

### `activate_memory_in_context`

Activate a memory within a specific conversation context.

**Parameters:**
- `memory_id` (string, required) - Memory ID to activate
- `context_id` (string, required) - Context ID to activate memory in
- `activation_strength` (number, optional) - Strength of activation (0-1)

### `get_context_memories`

Get all memories associated with a conversation context.

**Parameters:**
- `context_id` (string, required) - Context ID to get memories for
- `include_inactive` (boolean, optional) - Include inactive memories
- `sort_by_relevance` (boolean, optional) - Sort by temporal relevance

### `evolve_memory`

Create a new version of a memory with evolution tracking.

**Parameters:**
- `memory_id` (string, required) - Original memory ID
- `new_content` (string, required) - Updated content
- `evolution_type` (string, required) - Type of evolution (refinement, expansion, correction)
- `evolution_summary` (string, optional) - Summary of changes

### `analyze_memory_decay`

Analyze temporal decay patterns for memories.

**Parameters:**
- `memory_id` (string, optional) - Specific memory to analyze
- `time_range_days` (number, optional) - Time range for analysis (default: 30)
- `include_predictions` (boolean, optional) - Include future decay predictions

### `analyze_temporal_relationships`

Analyze how relationships evolve over time.

**Parameters:**
- `relationship_id` (string, optional) - Specific relationship to analyze
- `memory_id` (string, optional) - Memory ID to analyze relationships for
- `time_range_days` (number, optional) - Time range in days (default: 30)
- `include_predictions` (boolean, optional) - Include future predictions
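
A sketch of tracking a working session as a conversation context; the `context_id` field read off the creation result is an assumption about the response shape, and the memory ID is a placeholder.

```javascript
// Create a context, activate a memory inside it, then list the context's
// memories by temporal relevance (illustrative sketch).
const ctx = await callTool('create_conversation_context', {
  context_name: 'checkout-bug-hunt',
  description: 'Debugging the failing checkout flow'
});

await callTool('activate_memory_in_context', {
  memory_id: 'mem_123',            // placeholder ID
  context_id: ctx.context_id,      // assumed result field name
  activation_strength: 0.9
});

const active = await callTool('get_context_memories', {
  context_id: ctx.context_id,
  sort_by_relevance: true
});
```
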
## 🤝 Multi-Agent Collaboration (v2.3.0)

### `register_agent`

Register a new agent in the system for collaboration.

**Parameters:**
- `name` (string, required) - Agent name
- `type` (string, required) - Agent type: 'ai_agent', 'human_user', or 'system'
- `description` (string, optional) - Agent description
- `capabilities` (array, optional) - Agent capabilities
- `metadata` (object, optional) - Agent metadata

### `create_memory_space`

Create a collaborative memory space for multi-agent sharing.

**Parameters:**
- `name` (string, required) - Memory space name
- `description` (string, optional) - Space description
- `owner_agent_id` (string, required) - Agent ID who owns this space
- `space_type` (string, optional) - Type: 'private', 'collaborative', or 'public'
- `access_policy` (string, optional) - Policy: 'open', 'invite_only', or 'restricted'

### `grant_space_permission`

Grant permission to an agent for a memory space.

**Parameters:**
- `space_id` (string, required) - Memory space ID
- `agent_id` (string, required) - Agent ID to grant permission to
- `permission_level` (string, required) - Level: 'read', 'write', or 'admin'
- `granted_by` (string, required) - Agent ID granting the permission

### `add_memory_to_space`

Add a memory to a collaborative space.

**Parameters:**
- `memory_id` (string, required) - Memory ID to add
- `space_id` (string, required) - Space ID to add memory to
- `added_by` (string, required) - Agent ID adding the memory
- `access_level` (string, optional) - Access level for this memory

### `get_agent_spaces`

Get all memory spaces accessible to an agent.

**Parameters:**
- `agent_id` (string, required) - Agent ID to get spaces for

### `get_space_memories`

Get all memories in a space (requires permission).

**Parameters:**
- `space_id` (string, required) - Space ID to get memories from
- `agent_id` (string, required) - Agent ID requesting access
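
Putting the collaboration tools together mirrors the Collaboration Setup workflow pattern from earlier. This is a sketch using documented parameters; the returned `agent_id` and `space_id` field names and the placeholder IDs are assumptions.

```javascript
// Register an agent, create a shared space, grant access, and add a memory
// (illustrative sketch; result field names are assumed).
const agent = await callTool('register_agent', {
  name: 'docs-bot',
  type: 'ai_agent',
  capabilities: ['summarization']
});

const space = await callTool('create_memory_space', {
  name: 'storefront-team',
  owner_agent_id: agent.agent_id,
  space_type: 'collaborative',
  access_policy: 'invite_only'
});

await callTool('grant_space_permission', {
  space_id: space.space_id,
  agent_id: 'agent_reviewer',      // placeholder collaborator ID
  permission_level: 'read',
  granted_by: agent.agent_id
});

await callTool('add_memory_to_space', {
  memory_id: 'mem_123',            // placeholder memory ID
  space_id: space.space_id,
  added_by: agent.agent_id
});
```
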
## 🔄 Memory Synchronization (v2.3.0)

### `sync_memory`

Synchronize a memory with another instance.

**Parameters:**
- `memory_id` (string, required) - Memory ID to synchronize
- `target_instance` (string, required) - Target instance identifier
- `force_sync` (boolean, optional) - Force sync even if already synced
- `conflict_resolution` (string, optional) - Strategy: 'manual', 'auto_merge', 'source_wins', 'target_wins'

### `resolve_sync_conflict`

Resolve a synchronization conflict.

**Parameters:**
- `conflict_id` (string, required) - Conflict ID to resolve
- `resolution_method` (string, required) - Resolution method
- `resolved_by` (string, required) - Agent ID resolving the conflict
- `resolved_version` (object, optional) - Manually resolved version

### `get_sync_status`

Get synchronization status for a memory.

**Parameters:**
- `memory_id` (string, required) - Memory ID to check sync status for

## 📤 Export/Import Operations (v2.3.0)

### `create_export_job`

Create an export job for memories.

**Parameters:**
- `format` (string, required) - Export format: 'json', 'xml', 'markdown', 'csv', 'graphml'
- `memory_ids` (array, optional) - Specific memory IDs to export
- `space_ids` (array, optional) - Memory space IDs to export
- `include_relationships` (boolean, optional) - Include memory relationships
- `include_metadata` (boolean, optional) - Include full metadata
- `initiated_by` (string, required) - Agent ID initiating export

### `get_export_job`

Get export job status and download information.

**Parameters:**
- `job_id` (string, required) - Export job ID

### `create_import_job`

Create an import job for memories.

**Parameters:**
- `format` (string, required) - Import format
- `file_content` (string, required) - File content to import
- `target_space_id` (string, optional) - Target space to import into
- `conflict_resolution` (string, optional) - How to handle existing memories
- `initiated_by` (string, required) - Agent ID initiating import
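
An illustrative export/import round trip. The `job_id` field name on the result, the placeholder IDs, and the import `conflict_resolution` value (borrowed from the sync strategies above) are assumptions.

```javascript
// Export selected memories as GraphML for visualization, check the job, then
// import a JSON payload into a shared space (illustrative sketch).
const job = await callTool('create_export_job', {
  format: 'graphml',
  memory_ids: ['mem_123', 'mem_456'],      // placeholder IDs
  include_relationships: true,
  include_metadata: true,
  initiated_by: 'agent_docs_bot'
});

const status = await callTool('get_export_job', { job_id: job.job_id });

await callTool('create_import_job', {
  format: 'json',
  file_content: '{"memories": []}',        // placeholder payload
  target_space_id: 'space_789',            // placeholder space ID
  conflict_resolution: 'auto_merge',       // assumed value
  initiated_by: 'agent_docs_bot'
});
```
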
## 📊 Analytics & Monitoring (v2.3.0)

### ⚡ KV Optimization & Monitoring (v2.8.0)

#### `get_kv_usage_stats`

Get current KV storage usage statistics and daily limits.

**Returns:** Current daily usage, remaining writes, usage percentage, and warnings.

#### `get_kv_usage_trends`

Get KV usage trends over the past week.

**Returns:** Daily usage trends with writes, reads, deletes, and total operations.

#### `get_cache_optimization_recommendations`

Get recommendations for optimizing KV cache usage.

**Returns:** Personalized optimization recommendations based on usage patterns.

#### `migrate_analytics_to_d1`

Migrate existing analytics data from KV to the D1 database to reduce KV writes.

**Returns:** Migration results with migrated keys and any errors.

#### `flush_cache_queue`

Manually flush the optimized cache write queue to KV storage.

**Returns:** Cache statistics including in-memory entries and queue size.

### `track_memory_analytics`

Track a memory analytics event.

**Parameters:**
- `memory_id` (string, required) - Memory ID
- `agent_id` (string, required) - Agent ID performing the action
- `action_type` (string, required) - Action: 'create', 'read', 'update', 'delete', 'search', 'relate'
- `session_id` (string, optional) - Session identifier
- `context_data` (object, optional) - Context data about the action
- `performance_metrics` (object, optional) - Performance metrics

### `get_memory_analytics`

Get memory usage analytics.

**Parameters:**
- `memory_id` (string, optional) - Specific memory ID
- `agent_id` (string, optional) - Specific agent ID
- `action_type` (string, optional) - Specific action type
- `start_date` (string, optional) - Start date for analytics
- `end_date` (string, optional) - End date for analytics
- `limit` (number, optional) - Maximum number of results

### `get_collaboration_analytics`

Get collaboration event analytics.

**Parameters:**
- `space_id` (string, optional) - Specific space ID
- `agent_id` (string, optional) - Specific agent ID
- `event_type` (string, optional) - Specific event type
- `start_date` (string, optional) - Start date for analytics
- `end_date` (string, optional) - End date for analytics
- `limit` (number, optional) - Maximum number of results
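
A sketch combining the KV monitoring and analytics tools; the IDs, dates, date format, and the keys inside `performance_metrics` are placeholders and assumptions, not documented values.

```javascript
// Check the KV write budget, record an analytics event, and pull a week of
// usage for one agent (illustrative sketch).
const kv = await callTool('get_kv_usage_stats', {});

await callTool('track_memory_analytics', {
  memory_id: 'mem_123',                     // placeholder ID
  agent_id: 'agent_docs_bot',               // placeholder ID
  action_type: 'read',
  performance_metrics: { response_ms: 42 }  // assumed metrics shape
});

const usage = await callTool('get_memory_analytics', {
  agent_id: 'agent_docs_bot',
  start_date: '2025-01-01',                 // assumed date format
  end_date: '2025-01-07',
  limit: 100
});
```
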
## 🧠 Memory Intelligence Engine (v2.7.0)

### `initialize_adaptive_thresholds`

Initialize adaptive thresholds for automated learning optimization.

**Parameters:**
- `threshold_types` (array, optional) - Types of thresholds to initialize (quality, relevance, importance, relationship_strength)
- `baseline_values` (object, optional) - Optional baseline values for thresholds

**Returns:** Number of thresholds initialized and their current values.

### `create_learning_experiment`

Create a new learning experiment for A/B testing and optimization.

**Parameters:**
- `experiment_name` (string, required) - Name of the experiment
- `experiment_type` (string, required) - Type of experiment (quality_improvement, relationship_discovery, tag_enhancement, content_expansion)
- `hypothesis` (string, required) - Hypothesis being tested
- `success_criteria` (object, required) - Success criteria for the experiment
- `control_group_size` (number, optional) - Size of control group (default: 100)
- `test_group_size` (number, optional) - Size of test group (default: 100)
- `confidence_threshold` (number, optional) - Statistical confidence threshold (default: 0.95)
- `created_by` (string, optional) - Creator of the experiment

**Returns:** Experiment ID and creation confirmation.

### `run_ab_test`

Run an A/B test for a specific learning experiment.

**Parameters:**
- `experiment_id` (string, required) - ID of the experiment to run
- `memory_ids` (array, required) - Memory IDs to include in the test
- `test_strategy` (string, optional) - Strategy for splitting test groups (random_split, importance_based, content_length_based)

**Returns:** Control and test group assignments with group sizes.

### `analyze_experiment_results`

Analyze results from a learning experiment and make threshold adjustments.

**Parameters:**
- `experiment_id` (string, required) - ID of the experiment to analyze
- `include_recommendations` (boolean, optional) - Include optimization recommendations (default: true)

**Returns:** Number of adjustments made and optimization recommendations.

### `run_improvement_cycle`

Run a complete self-improvement cycle with automated optimizations.

**Parameters:**
- `cycle_type` (string, optional) - Type of improvement cycle (full, quality_focused, relationship_focused, performance_focused)
- `max_improvements` (number, optional) - Maximum number of improvements to apply (default: 5)

**Returns:** Number of improvements applied, performance gain percentage, and next cycle scheduling.
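
A possible end-to-end experiment lifecycle using the tools above. The `experiment_id` result field, the memory IDs, and the shape of `success_criteria` are assumptions; only the parameter names come from the documentation.

```javascript
// Initialize thresholds, create an experiment, run the A/B test, analyze it,
// then let the engine run an autonomous cycle (illustrative sketch).
await callTool('initialize_adaptive_thresholds', {
  threshold_types: ['quality', 'relevance']
});

const exp = await callTool('create_learning_experiment', {
  experiment_name: 'expand-short-memories',
  experiment_type: 'content_expansion',
  hypothesis: 'Expanding memories under 100 characters raises quality scores',
  success_criteria: { min_quality_gain: 10 },   // assumed criteria shape
  confidence_threshold: 0.95
});

await callTool('run_ab_test', {
  experiment_id: exp.experiment_id,             // assumed result field name
  memory_ids: ['mem_1', 'mem_2', 'mem_3', 'mem_4'],
  test_strategy: 'random_split'
});

const analysis = await callTool('analyze_experiment_results', {
  experiment_id: exp.experiment_id,
  include_recommendations: true
});

// Or let the system pick and apply its own optimizations.
await callTool('run_improvement_cycle', {
  cycle_type: 'quality_focused',
  max_improvements: 5
});
```
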
## 🎯 Cloudflare Vectorize Integration (v2.8.1) - Paid Tier

The Vectorize integration provides lightning-fast semantic search using Cloudflare's dedicated vector database. This paid tier enhancement offers superior performance compared to D1-based embeddings with 50M queries/month and 10M stored dimensions/month.

### Setup Instructions

For paid tier users, enable Vectorize with:

```bash
# Setup Vectorize index and configuration
npm run setup-vectorize

# Deploy with Vectorize enabled
npm run setup-paid-tier
```

This creates the `cf-memory-embeddings` Vectorize index with 768 dimensions (BGE-base-en-v1.5 compatible) and cosine similarity metric.

### Hybrid D1+Vectorize Architecture

The system uses a hybrid approach combining both databases:

- **D1 Database**: Stores all memory metadata, content, tags, relationships, and serves as fallback for semantic search
- **Vectorize**: Stores only vector embeddings for ultra-fast semantic similarity search
- **Hybrid Search Flow**: Vectorize finds similar vectors → D1 enriches with full memory data → ranked results returned
- **Fallback Mechanism**: If Vectorize fails, the system automatically uses D1-based semantic search
- **Data Consistency**: Both databases stay synchronized when memories are created/updated/deleted

### `vectorize_semantic_search`

Perform advanced semantic search using Cloudflare Vectorize for superior speed and accuracy.

**Parameters:**
- `query` (string, required) - Search query for semantic similarity
- `limit` (number, optional) - Maximum number of results (default: 10)
- `filter` (object, optional) - Metadata filters to apply
- `return_vectors` (boolean, optional) - Include vector data in results (default: false)

**Returns:** Array of search results with similarity scores, metadata, and optional vector data.

### `vectorize_find_similar`

Find memories similar to a specific memory using vector similarity.

**Parameters:**
- `memory_id` (string, required) - Memory ID to find similar memories for
- `limit` (number, optional) - Maximum number of results (default: 10)
- `similarity_threshold` (number, optional) - Minimum similarity score (default: 0.7)
- `exclude_self` (boolean, optional) - Exclude the source memory from results (default: true)

**Returns:** Array of similar memories with similarity scores and metadata.

### `vectorize_cluster_memories`

Perform AI-powered clustering analysis using vector similarity to group related memories.

**Parameters:**
- `memory_ids` (array, required) - Array of memory IDs to cluster
- `cluster_count` (number, optional) - Number of clusters to create (default: 5)

**Returns:** Array of clusters with cluster IDs, memory IDs in each cluster, and centroid similarity scores.

### `vectorize_index_stats`

Get statistics and information about the Vectorize index.

**Returns:** Index statistics including dimensions, vector count, and configuration details.

### Paid Tier Benefits

- **50M Queries/Month**: Massive query capacity for high-volume applications
- **10M Stored Dimensions/Month**: Store millions of memory vectors
- **33x More KV Writes**: Increased from 1,000 to 33,333 daily KV operations
- **10x Larger Batches**: Process up to 500 memories per batch operation
- **6x Faster Learning**: Learning cycles run every 5 minutes instead of 30 minutes
- **50-70% Performance Boost**: Significantly faster response times through optimized caching
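
Putting the four Vectorize tools together might look like the sketch below. The memory IDs are placeholders, and the keys inside `filter` are an assumption since the metadata filter schema is not documented here.

```javascript
// Vectorize-backed semantic search, similarity lookup, clustering, and index
// stats (paid tier; illustrative sketch).
const hits = await callTool('vectorize_semantic_search', {
  query: 'edge caching strategy for embeddings',
  limit: 10,
  filter: { tags: 'architecture' }     // assumed filter key
});

const similar = await callTool('vectorize_find_similar', {
  memory_id: 'mem_123',                // placeholder ID
  similarity_threshold: 0.75
});

const clusters = await callTool('vectorize_cluster_memories', {
  memory_ids: ['mem_1', 'mem_2', 'mem_3', 'mem_4', 'mem_5'],
  cluster_count: 2
});

const stats = await callTool('vectorize_index_stats', {});
```
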
## 🌐 Architecture

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────────┐
│   MCP Client    │    │  cf-memory-mcp   │    │  Cloudflare Worker  │
│  (Augment,      │◄──►│  (npm package)   │◄──►│  (Production API)   │
│   Claude, etc.) │    │                  │    │                     │
└─────────────────┘    └──────────────────┘    └──────────┬──────────┘
                                                          │
                                                          ▼
                                               ┌─────────────────────┐
                                               │  Cloudflare D1 DB   │
                                               │  + KV Storage       │
                                               │  + Vectorize (Paid) │
                                               │  + AI Workers       │
                                               └─────────────────────┘
```

### Hybrid D1+Vectorize Architecture

The system uses a sophisticated hybrid approach:

- **D1 Database**: Primary storage for all memory content, metadata, relationships, and tags
- **Vectorize**: High-performance vector similarity search with 50M queries/month capacity
- **Hybrid Search**: Vectorize finds similar vectors → D1 enriches with full memory data
- **Fallback System**: Automatic fallback to D1-based search if Vectorize is unavailable
- **Data Sync**: Both databases stay synchronized for all memory operations

📖 **[Detailed Architecture Documentation](docs/vectorize-architecture.md)** - Complete technical overview with diagrams, data flows, and performance characteristics.

## 🔧 Command Line Options

```bash
# Start the MCP server
npx cf-memory-mcp

# Show version
npx cf-memory-mcp --version

# Show help
npx cf-memory-mcp --help

# Enable debug logging
DEBUG=1 npx cf-memory-mcp
```

## 🌍 Environment Variables

- `DEBUG=1` - Enable debug logging
- `MCP_DEBUG=1` - Enable MCP-specific debug logging

## 📋 Requirements

- **Node.js** 16.0.0 or higher
- **Internet connection** (connects to Cloudflare Worker)
- **MCP client** (Augment, Claude Desktop, etc.)

## 🚀 Why CF Memory MCP?

### Traditional Approach ❌

- Clone repository
- Set up local database
- Configure environment variables
- Manage local server process
- Handle updates manually

### CF Memory MCP ✅

- Run `npx cf-memory-mcp`
- That's it! 🎉

## 🔒 Privacy & Security

- **No local data storage** - All data stored securely in Cloudflare D1
- **HTTPS encryption** - All communication encrypted in transit
- **Edge deployment** - Data replicated globally for reliability
- **No API keys required** - Public read/write access for simplicity

## 🤝 Contributing

Contributions are welcome! Please see the [GitHub repository](https://github.com/johnlam90/cf-memory-mcp) for more information.

## 📄 License

MIT License - see [LICENSE](LICENSE) file for details.

## 🔗 Links

- **GitHub Repository**: https://github.com/johnlam90/cf-memory-mcp
- **npm Package**: https://www.npmjs.com/package/cf-memory-mcp
- **Issues**: https://github.com/johnlam90/cf-memory-mcp/issues
- **MCP Specification**: https://modelcontextprotocol.io/

---

Made with ❤️ by [John Lam](https://github.com/johnlam90)