@jeelidev/personal-neo4j-memory-server
Personal MCP Memory Server with Neo4j backend - enhanced Cloudflare Access support with robust tunnel management and removal functionality
JavaScript
// Error helper from the shared bundle chunk (used when rethrowing below).
import { j as wrapError } from "./chunk-ULWSXDW6.mjs";
import { z } from "zod";

// Registers the guided-workflow prompts on the MCP server instance.
function registerPrompts(server) {
  server.prompt(
    "explore-memory-graph",
    { starting_point: z.string(), depth: z.string().optional() },
    async ({ starting_point, depth }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `I need to explore the memory graph starting from "${starting_point}".
Please help me:
1. First, search for memories related to "${starting_point}" using memory_find
2. For each found memory, use memory_find with traverseFrom to explore connections${depth ? ` up to ${depth} levels deep` : ""}
3. Identify key relationship patterns and clusters
4. Highlight any surprising connections or isolated subgraphs
5. Suggest potential missing relationships based on the data
Focus on revealing the knowledge structure and finding insights in the connections.`,
        },
      }],
    })
  );

  server.prompt(
    "create-project-knowledge",
    { project_name: z.string(), domain: z.string().optional() },
    async ({ project_name, domain }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Let's document the "${project_name}" project systematically.
Please guide me through creating a comprehensive memory structure:
1. First, check if project memories already exist using memory_find
2. Create the main project memory with rich observations about:
- Project purpose and goals
- Technical stack and architecture
- Key decisions and trade-offs
- Current status and next steps
3. Create related memories for:
- Major components/modules (with CONTAINS relationships)
- Key decisions (with INFLUENCES relationships)
- Technical patterns used (with IMPLEMENTS relationships)
- Known issues (with AFFECTS relationships)
4. Ensure each memory has contextual observations following the pattern:
"During [when], [what happened] because [why]. This resulted in [impact] and means [significance]."
5. Create a rich relationship network - no orphaned nodes
Domain context: ${domain || "general software development"}`,
        },
      }],
    })
  );

  server.prompt(
    "debug-orphaned-memories",
    { memory_types: z.string().optional() },
    async ({ memory_types }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Help me find and fix orphaned memories${memory_types ? ` of types: ${memory_types}` : ""}.
Analysis workflow:
1. Use memory_find with "${memory_types || "*"}" to list all relevant memories
2. For each memory, check its relationships using includeContext: "relations-only"
3. Identify memories with 0 or very few connections
4. For orphaned memories, suggest logical connections based on:
- Similar names or types
- Overlapping metadata (project, tags, dates)
- Semantic similarity of observations
5. Create the suggested relationships using memory_modify with meaningful metadata explaining the connection
Goal: Every memory should have at least 2 meaningful relationships.`,
        },
      }],
    })
  );

  server.prompt(
    "document-decision",
    { decision_title: z.string(), area: z.string().optional() },
    async ({ decision_title, area }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Document the decision: "${decision_title}"
Please capture this decision comprehensively:
1. Search for related existing decisions and context
2. Create a decision memory with observations covering:
- Context: What problem were we solving?
- Constraints: What limitations did we face?
- Alternatives: What other approaches were considered?
- Rationale: Why was this approach chosen?
- Trade-offs: What did we gain/lose?
- Consequences: What does this mean for the future?
3. Link to:
- Affected components (INFLUENCES)
- Previous related decisions (EXTENDS or CONFLICTS_WITH)
- Implementation memories (GUIDES)
4. Use rich metadata:
- decision_date
- stakeholders
- reversibility
- confidence_level
Area: ${area || "general architecture"}`,
        },
      }],
    })
  );

  server.prompt(
    "analyze-memory-quality",
    { scope: z.string().optional() },
    async ({ scope }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Analyze memory quality for scope: ${scope || "all memories"}.
Quality assessment steps:
1. Retrieve memories using memory_find with appropriate filters
2. For each memory, evaluate:
- Observation quality: Are they self-contained context units?
- Observation count: Multiple perspectives captured?
- Relationship density: Well-connected or isolated?
- Metadata completeness: Properly classified?
- Temporal coverage: Regular updates or stale?
3. Identify patterns:
- Which memory types have the best/worst quality?
- Common issues (fragmented observations, missing context)
- Relationship gaps in the knowledge graph
4. Generate specific recommendations:
- Memories needing richer observations
- Missing relationships to create
- Metadata to standardize
5. Provide quality metrics summary with actionable next steps`,
        },
      }],
    })
  );

  server.prompt(
    "text-to-knowledge-graph",
    { text_source: z.string(), domain: z.string().optional(), granularity: z.string().optional() },
    async ({ text_source, domain, granularity }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Transform the ${text_source} into a structured knowledge graph.
PHASE 1: Entity Extraction & Classification
- Extract entities: Characters/People, Locations, Objects, Concepts, Events
- For each entity, capture:
* Name and aliases
* Type classification
* First appearance context
* Key characteristics (metadata)
* Initial observations (full contextual narrative)
PHASE 2: Search Before Create
For EVERY entity:
1. Search existing memories using memory_find (by name and aliases)
2. If found: prepare update with new observations
3. If not found: prepare for creation
4. Log decisions to prevent duplicates
PHASE 3: Contextual Observation Creation
Each observation must be a self-contained narrative unit answering:
- WHERE did this occur? (location/scene)
- WHEN did this happen? (temporal context)
- WHAT specifically happened? (actions/events)
- WHO was involved? (actors/participants)
- WHY did it happen? (motives/causes)
- HOW was it accomplished? (methods/tools)
- WHAT changed? (consequences/impact)
- WHY does it matter? (significance)
Example: "During the midnight confrontation in the abandoned warehouse (Chapter 3, Scene 2), Detective Morgan discovered the missing evidence hidden inside a false bottom drawer, using ultraviolet light to reveal fingerprints. This discovery, motivated by a cryptic note from the victim, proved the suspect's presence at the scene and shifted suspicion from the partner to the CEO, fundamentally altering the investigation's direction."
PHASE 4: Memory Creation/Updates
- Use memory_store for new entities with localIds for cross-references
- Use memory_modify to add observations to existing entities
- Ensure rich metadata (domain-specific properties)
- Granularity level: ${granularity || "medium"}
PHASE 5: Relationship Network Creation
Create typed relationships between entities:
- Character ↔ Character: INTERACTS_WITH, SUSPECTS, OPPOSES, ALLIES_WITH
- Character → Location: VISITS, LIVES_IN, DISCOVERS, ESCAPES_FROM
- Character → Object: POSSESSES, DISCOVERS, USES, DESTROYS
- Character → Event: PARTICIPATES_IN, WITNESSES, CAUSES
- Object → Location: LOCATED_IN, HIDDEN_IN, MOVED_TO
- Concept → Entity: DRIVES, MANIFESTS_IN, SYMBOLIZED_BY
Each relationship needs:
- Type (semantic meaning)
- Strength (0.1-1.0 based on importance)
- Metadata (context, evidence, chapter/scene reference)
PHASE 6: Structural Hierarchy
If processing book/document chapters:
- Create Chapter/Section memories
- Link entities to their appearance scenes
- Maintain CONTAINS hierarchies
- Track narrative flow with FOLLOWS relationships
PHASE 7: Quality Validation
- Verify no orphaned nodes (minimum 2 relationships each)
- Check observation completeness
- Validate relationship logic
- Ensure temporal consistency
Domain: ${domain || "general narrative"}
Target relationship density: 3-5 connections per entity
Memory type naming convention: Use singular forms (Character, Location, Concept)`,
        },
      }],
    })
  );

  server.prompt(
    "vibe-code-with-memory",
    { project_description: z.string(), session_type: z.string().optional(), vibe_level: z.string().optional() },
    async ({ project_description, session_type, vibe_level }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Let's vibe-code ${project_description} with memory tracking.
SESSION TYPE: ${session_type || "continue"} | VIBE LEVEL: ${vibe_level || "moderate"}
WORKFLOW:
1. **Memory Context**: Search existing project memories
2. **Vibe Alignment**: Feel out what needs building next
3. **Implementation**: Code with flow, document decisions as we go
4. **Memory Capture**: Store what we learn, why we chose this path
5. **Relationship Mapping**: Connect new decisions to existing context
MEMORY STRATEGY:
- Capture implementation insights as observations
- Link architectural decisions with INFLUENCES relationships
- Document pain points and solutions
- Track technology choices and their rationale
- Build searchable knowledge for future sessions
Ready to flow with the code while building permanent knowledge?`,
        },
      }],
    })
  );

  server.prompt(
    "refactor-with-rationale",
    { target_code: z.string(), pain_points: z.string(), constraints: z.string().optional() },
    async ({ target_code, pain_points, constraints }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Refactor ${target_code} while documenting every decision.
PAIN POINTS TO SOLVE:
${pain_points}
CONSTRAINTS (MUST NOT CHANGE):
${constraints || "None specified - proceed with caution"}
REFACTORING WITH MEMORY WORKFLOW:
1. **Before State**: Document current architecture in memory
2. **Analysis**: Capture what's wrong and why
3. **Options**: Record alternative approaches considered
4. **Incremental Changes**: Small steps with rationale
5. **Decision Trail**: Every change gets documented
6. **After State**: Final architecture with improvements
7. **Lessons Learned**: What we'd do differently next time
MEMORY STRUCTURE:
- Original architecture memory
- Refactoring decision memory
- Individual change memories
- Final state memory
- Connected with EVOLVES_FROM, FIXES, IMPROVES relationships
Let's refactor systematically while building institutional memory.`,
        },
      }],
    })
  );

  server.prompt(
    "evolve-architecture",
    { current_state: z.string(), desired_outcome: z.string(), iteration_size: z.string().optional() },
    async ({ current_state, desired_outcome, iteration_size }) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Evolve architecture from current state to desired outcome.
CURRENT STATE:
${current_state}
DESIRED OUTCOME:
${desired_outcome}
ITERATION SIZE: ${iteration_size || "small"}
EVOLUTION STRATEGY:
1. **Gap Analysis**: Document what needs to change
2. **Incremental Path**: Plan small, safe steps
3. **Risk Assessment**: What could go wrong?
4. **Parallel Tracking**: Old and new systems coexist
5. **Memory Trail**: Every architectural decision documented
6. **Validation**: Each step improves without breaking
MEMORY ARCHITECTURE:
- Current state snapshot
- Desired outcome vision
- Migration plan memories
- Individual step memories
- Risk and mitigation memories
- Connected evolution chain with LEADS_TO relationships
Ready to evolve architecture with full memory tracking?`,
        },
      }],
    })
  );
}
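/*
 * Usage sketch (hypothetical, not part of this file): an MCP client invokes one
 * of the prompts registered above via a standard `prompts/get` request. The
 * argument values here are illustrative.
 *
 *   {
 *     "method": "prompts/get",
 *     "params": {
 *       "name": "explore-memory-graph",
 *       "arguments": { "starting_point": "OAuth2 Implementation", "depth": "3" }
 *     }
 *   }
 */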
// Tool descriptions surfaced to the MCP client.
const toolDescriptions = {
  memory_store: "Create memories with observations and relationships. **Pattern**: Search→Create→Connect. **Observations**: Self-contained context units (what/when/where/why/impact). **LocalIds**: Cross-references within THIS request only. **Limits**: 50 memories, 200 relations. **Quality**: Each observation = complete detective notes answering setting/action/actors/evidence/impact/significance.",
  memory_find: 'Unified search/retrieval. **Query**: text, IDs array, or "*". **Context**: minimal (lists), full (everything), relations-only (graph). **Temporal**: createdAfter "7d"/"2024-01-15". **Graph**: traverseFrom + relations + depth. **Always search before creating**. Updates access timestamps for analytics.',
  memory_modify: "Update/delete memories, manage observations/relations. **Operations**: update (properties), delete (cascade), add-observations (append insights), create-relations (link existing). **Atomic**: All succeed or all fail. **Quality**: One substantial observation per session - complete context stories, not fragments.",
  database_switch: "Switch active database context (creates if missing). ALL subsequent operations use this DB. Call once per session/project. Like 'cd' for memories. **Session-scoped**: Establishes context for entire workflow, not per-operation.",
};

// Per-parameter descriptions shared across the tool schemas.
const paramDescriptions = {
  memories: "Array of memories to create. **Always search first** to avoid duplicates. Each memory = one focused concept.",
  "memories.name": "Human-readable identifier. Be specific: 'OAuth2 Implementation' not 'Auth'. Include searchable keywords.",
  "memories.memoryType": "Category: knowledge (facts), decision (choices), issue (problems), implementation (code), architecture (structure), pattern (recurring solutions), insight (discoveries).",
  "memories.observations": "Context-rich narratives. Each = complete story with setting/action/actors/evidence/impact/significance. **One insight per session** - don't fragment thoughts.",
  "memories.metadata": "Static properties (JSON). Use for: project, language, status, tags, dates, version. **Narrative content goes in observations**.",
  "memories.localId": "Temporary ID for relations within THIS request. Not reusable across operations. Format: short descriptive names.",
  relations: "Connect memories: from/to (localId or memoryId), type (semantic meaning), strength (0.1-1.0 importance).",
  query: "Search text, array of memory IDs, or '*' for all. **Semantic search**: finds meaning, not just keywords.",
  includeContext: "Detail level: **minimal** (id/name/type only - for lists), **full** (everything - default work mode), **relations-only** (graph analysis only).",
  limit: "Max results (default: 10). **Increase for comprehensive searches** - use 50+ for full exploration.",
  threshold: "Semantic match minimum (0.1-1.0). **Lower = more results**. 0.1 = permissive, 0.8 = strict matching.",
  memoryTypes: "Filter by type array. **Leave empty for all types**. Common: ['knowledge', 'decision', 'implementation'].",
  createdAfter: "Date filter. **ISO** ('2024-01-15') or **relative** ('7d', '30d', '3m', '1y'). Finds recent additions.",
  traverseFrom: "Memory ID to start graph exploration. **Discovers connected knowledge** through relationships.",
  traverseRelations: "Relation types to follow. **Empty = all types**. Common: ['INFLUENCES', 'DEPENDS_ON', 'IMPLEMENTS'].",
  maxDepth: "Graph traversal depth (1-5, default: 2). **Higher = broader discovery**, but slower performance.",
  operation: "Action type: **update** (properties), **delete** (cascade), **add-observations** (append), **create-relations** (connect existing).",
  target: "Single memory ID to modify. **Use 'targets' for batch operations** to maintain atomicity.",
  changes: "For update: new name/type/metadata. **Preserves existing observations** - use add-observations to append.",
  observations: "For add-observations: new insights to append. **One substantial observation per session** - complete context stories.",
  "observations.contents": "For add: new observation text(s) - **typically one per session**. For delete: observation IDs to remove.",
  databaseName: "Target database name. **Will be created if doesn't exist**. Use project names for isolation.",
};

// Registers the four memory tools; `getHandlers` lazily loads the backend handlers.
function registerTools(server, getHandlers) {
  server.tool(
    "memory_store",
    toolDescriptions.memory_store,
    {
      memories: z.array(z.object({
        name: z.string().describe(paramDescriptions["memories.name"]),
        memoryType: z.string().describe(paramDescriptions["memories.memoryType"]),
        localId: z.string().optional().describe(paramDescriptions["memories.localId"]),
        observations: z.array(z.string()).describe(paramDescriptions["memories.observations"]),
        metadata: z.record(z.any()).optional().describe(paramDescriptions["memories.metadata"]),
      })).describe(paramDescriptions.memories),
      relations: z.array(z.object({
        from: z.string().describe("Source localId or existing memoryId"),
        to: z.string().describe("Target localId or existing memoryId"),
        type: z.string().describe("Relationship type: INFLUENCES, DEPENDS_ON, EXTENDS, IMPLEMENTS, CONTAINS, etc."),
        strength: z.number().min(0.1).max(1).optional().describe("0.1-1.0, defaults to 0.5"),
        source: z.enum(["agent", "user", "system"]).optional().describe("defaults to 'agent'"),
      })).optional().describe(paramDescriptions.relations),
      options: z.object({
        validateReferences: z.boolean().optional().describe("Check all target IDs exist (default: true)"),
        allowDuplicateRelations: z.boolean().optional().describe("Skip/error on duplicates (default: false)"),
        transactional: z.boolean().optional().describe("All-or-nothing behavior (default: true)"),
        maxMemories: z.number().optional().describe("Batch size limit per request (default: 50)"),
        maxRelations: z.number().optional().describe("Relations limit per request (default: 200)"),
      }).optional().describe("Store options"),
    },
    async (args) => {
      try {
        const { unifiedStoreHandler } = await getHandlers();
        const result = await unifiedStoreHandler.handleMemoryStore(args);
        return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
      } catch (err) {
        throw wrapError(err);
      }
    }
  );
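  /*
   * Example `memory_store` arguments (illustrative values only), following the
   * Search→Create→Connect pattern from the description above. The names, IDs,
   * and metadata are hypothetical; `localId` references resolve only within
   * this single request.
   *
   *   {
   *     "memories": [
   *       { "name": "OAuth2 Implementation", "memoryType": "implementation", "localId": "oauth2",
   *         "observations": ["During the Q3 auth rework, we adopted OAuth2 with PKCE because the SPA could not hold a client secret. This removed the custom token endpoint and means all clients share one login flow."],
   *         "metadata": { "project": "api-gateway", "status": "active" } },
   *       { "name": "Use PKCE for public clients", "memoryType": "decision", "localId": "pkce-decision",
   *         "observations": ["Chosen over the implicit flow after a security review flagged token leakage via redirect URIs."] }
   *     ],
   *     "relations": [
   *       { "from": "pkce-decision", "to": "oauth2", "type": "INFLUENCES", "strength": 0.9 }
   *     ]
   *   }
   */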
  server.tool(
    "memory_find",
    toolDescriptions.memory_find,
    {
      query: z.union([z.string(), z.array(z.string())]).describe(paramDescriptions.query),
      limit: z.number().optional().describe(paramDescriptions.limit),
      memoryTypes: z.array(z.string()).optional().describe(paramDescriptions.memoryTypes),
      includeContext: z.enum(["minimal", "full", "relations-only"]).optional().describe(paramDescriptions.includeContext),
      threshold: z.number().min(0.01).max(1).optional().describe(paramDescriptions.threshold),
      orderBy: z.enum(["relevance", "created", "modified", "accessed"]).optional().describe("Sort order (default: 'relevance')"),
      createdAfter: z.string().optional().describe(paramDescriptions.createdAfter),
      createdBefore: z.string().optional().describe("ISO date or relative"),
      modifiedSince: z.string().optional().describe("ISO date or relative"),
      accessedSince: z.string().optional().describe("ISO date or relative"),
      traverseFrom: z.string().optional().describe(paramDescriptions.traverseFrom),
      traverseRelations: z.array(z.string()).optional().describe(paramDescriptions.traverseRelations),
      maxDepth: z.number().min(1).max(5).optional().describe(paramDescriptions.maxDepth),
      traverseDirection: z.enum(["outbound", "inbound", "both"]).optional().describe("Traversal direction (default: 'both')"),
    },
    async (args) => {
      try {
        const { unifiedFindHandler } = await getHandlers();
        const result = await unifiedFindHandler.handleMemoryFind(args);
        return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
      } catch (err) {
        throw wrapError(err);
      }
    }
  );

  server.tool(
    "memory_modify",
    toolDescriptions.memory_modify,
    {
      operation: z.enum(["update", "delete", "batch-delete", "add-observations", "delete-observations", "create-relations", "update-relations", "delete-relations"]).describe(paramDescriptions.operation),
      target: z.string().optional().describe(paramDescriptions.target),
      targets: z.array(z.string()).optional().describe("Multiple IDs for batch operations"),
      changes: z.object({
        name: z.string().optional().describe("New memory name"),
        memoryType: z.string().optional().describe("New memory type"),
        metadata: z.record(z.any()).optional().describe("New metadata (replaces existing)"),
      }).optional().describe(paramDescriptions.changes),
      observations: z.array(z.object({
        memoryId: z.string().describe("Target memory ID"),
        contents: z.array(z.string()).describe(paramDescriptions["observations.contents"]),
      })).optional().describe(paramDescriptions.observations),
      relations: z.array(z.object({
        from: z.string().describe("Source memory ID"),
        to: z.string().describe("Target memory ID"),
        type: z.string().describe("Relationship type: INFLUENCES, DEPENDS_ON, EXTENDS, IMPLEMENTS, CONTAINS, etc."),
        strength: z.number().min(0.1).max(1).optional().describe("For create/update operations (0.1-1.0)"),
        source: z.enum(["agent", "user", "system"]).optional().describe("For create operations"),
      })).optional().describe("Relationships to create/update/delete between existing memories."),
      options: z.object({
        cascadeDelete: z.boolean().optional().describe("Delete related observations/relations (default: true)"),
        validateObservationIds: z.boolean().optional().describe("Validate observation IDs for delete (default: true)"),
        createIfNotExists: z.boolean().optional().describe("For database operations"),
      }).optional().describe("Modify options"),
    },
    async (args) => {
      try {
        const { unifiedModifyHandler } = await getHandlers();
        const result = await unifiedModifyHandler.handleMemoryModify(args);
        return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
      } catch (err) {
        throw wrapError(err);
      }
    }
  );

  server.tool(
    "database_switch",
    toolDescriptions.database_switch,
    { databaseName: z.string().describe(paramDescriptions.databaseName) },
    async (args) => {
      try {
        const { databaseHandler } = await getHandlers();
        const result = await databaseHandler.handleDatabaseSwitch(args.databaseName);
        return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
      } catch (err) {
        throw wrapError(err);
      }
    }
  );
}

// Export under the bundle's original minified names.
export { registerPrompts as a, registerTools as b };
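/*
 * Find/modify sketch (illustrative IDs and values): a graph-traversal
 * `memory_find` call followed by an `add-observations` `memory_modify` call.
 *
 *   { "query": "OAuth2", "includeContext": "relations-only",
 *     "traverseFrom": "mem-123", "traverseRelations": ["INFLUENCES"], "maxDepth": 3 }
 *
 *   { "operation": "add-observations",
 *     "observations": [{ "memoryId": "mem-123",
 *       "contents": ["During load testing on 2024-02-01, token refresh spiked p99 latency because refresh tokens were validated against Neo4j on every call. Caching validation results fixed it and means the cache must be invalidated on revocation."] }] }
 *
 * Wiring sketch, assuming the TypeScript MCP SDK's `McpServer` (whose
 * `server.prompt`/`server.tool` call shapes match the registrations above);
 * the chunk filename, version, and handler loader are hypothetical:
 *
 *   import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
 *   import { a as registerPrompts, b as registerTools } from "./chunk-XXXXXXXX.mjs";
 *
 *   const server = new McpServer({ name: "personal-neo4j-memory-server", version: "1.0.0" });
 *   registerPrompts(server);
 *   // loader must resolve to { unifiedStoreHandler, unifiedFindHandler, unifiedModifyHandler, databaseHandler }
 *   registerTools(server, async () => import("./handlers.mjs"));
 */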