# Ruflo v3: Enterprise AI Orchestration Platform
<div align="center">

# **Production-ready multi-agent AI orchestration for Claude Code**
*Deploy 60+ specialized agents in coordinated swarms with self-learning capabilities, fault-tolerant consensus, and enterprise-grade security.*
</div>
> **Why Ruflo?** Claude Flow is now Ruflo, named by Ruv, who loves Rust, flow states, and building things that feel inevitable. The "Ru" is the Ruv. The "flo" is the flow. Underneath, WASM kernels written in Rust power the policy engine, embeddings, and proof system. 5,800 commits later, the alpha is over. This is v3.5.
## Getting into the Flow
Ruflo is a comprehensive AI agent orchestration framework that transforms Claude Code into a powerful multi-agent development platform. It enables teams to deploy, coordinate, and optimize specialized AI agents working together on complex software engineering tasks.
### Self-Learning/Self-Optimizing Agent Architecture
```
User → Ruflo (CLI/MCP) → Router → Swarm → Agents → Memory → LLM Providers
  ↑                                                              │
  └───────────────────── Learning Loop ──────────────────────────┘
```
<details>
<summary><strong>Expanded Architecture</strong> – Full system diagram with RuVector intelligence</summary>
```mermaid
flowchart TB
subgraph USER["User Layer"]
U[User]
end
subgraph ENTRY["Entry Layer"]
CLI[CLI / MCP Server]
AID[AIDefence Security]
end
subgraph ROUTING["Routing Layer"]
QL[Q-Learning Router]
MOE[MoE - 8 Experts]
SK[Skills - 42+]
HK[Hooks - 17]
end
subgraph SWARM["Swarm Coordination"]
TOPO[Topologies<br/>mesh/hier/ring/star]
CONS[Consensus<br/>Raft/BFT/Gossip/CRDT]
CLM[Claims<br/>Human-Agent Coord]
end
subgraph AGENTS["60+ Agents"]
AG1[coder]
AG2[tester]
AG3[reviewer]
AG4[architect]
AG5[security]
AG6[...]
end
subgraph RESOURCES["Resources"]
MEM[(Memory<br/>AgentDB)]
PROV[Providers<br/>Claude/GPT/Gemini/Ollama]
WORK[Workers - 12<br/>ultralearn/audit/optimize]
end
subgraph RUVECTOR["RuVector Intelligence Layer"]
direction TB
subgraph ROW1[" "]
SONA[SONA<br/>Self-Optimize<br/><0.05ms]
EWC[EWC++<br/>No Forgetting]
FLASH[Flash Attention<br/>2.49-7.47x]
end
subgraph ROW2[" "]
HNSW[HNSW<br/>150x-12,500x faster]
RB[ReasoningBank<br/>Pattern Store]
HYP[Hyperbolic<br/>Poincaré]
end
subgraph ROW3[" "]
LORA[LoRA/Micro<br/>128x compress]
QUANT[Int8 Quant<br/>3.92x memory]
RL[9 RL Algos<br/>Q/SARSA/PPO/DQN]
end
end
subgraph LEARNING["Learning Loop"]
L1[RETRIEVE] --> L2[JUDGE] --> L3[DISTILL] --> L4[CONSOLIDATE] --> L5[ROUTE]
end
U --> CLI
CLI --> AID
AID --> QL & MOE & SK & HK
QL & MOE & SK & HK --> TOPO & CONS & CLM
TOPO & CONS & CLM --> AG1 & AG2 & AG3 & AG4 & AG5 & AG6
AG1 & AG2 & AG3 & AG4 & AG5 & AG6 --> MEM & PROV & WORK
MEM --> SONA & EWC & FLASH
SONA & EWC & FLASH --> HNSW & RB & HYP
HNSW & RB & HYP --> LORA & QUANT & RL
LORA & QUANT & RL --> L1
L5 -.->|loops back| QL
style RUVECTOR fill:#1a1a2e,stroke:#e94560,stroke-width:2px
style LEARNING fill:#0f3460,stroke:#e94560,stroke-width:2px
style USER fill:#16213e,stroke:#0f3460
style ENTRY fill:#1a1a2e,stroke:#0f3460
style ROUTING fill:#1a1a2e,stroke:#0f3460
style SWARM fill:#1a1a2e,stroke:#0f3460
style AGENTS fill:#1a1a2e,stroke:#0f3460
style RESOURCES fill:#1a1a2e,stroke:#0f3460
```
**RuVector Components** (included with Ruflo):
| Component | Purpose | Performance |
|-----------|---------|-------------|
| **SONA** | Self-Optimizing Neural Architecture - learns optimal routing | Fast adaptation |
| **EWC++** | Elastic Weight Consolidation - prevents catastrophic forgetting | Preserves learned patterns |
| **Flash Attention** | Optimized attention computation | 2-7x speedup |
| **HNSW** | Hierarchical Navigable Small World vector search | Sub-millisecond retrieval |
| **ReasoningBank** | Pattern storage with trajectory learning | RETRIEVE→JUDGE→DISTILL |
| **Hyperbolic** | Poincaré ball embeddings for hierarchical data | Better code relationships |
| **LoRA/MicroLoRA** | Low-Rank Adaptation for efficient fine-tuning | Lightweight adaptation |
| **Int8 Quantization** | Memory-efficient weight storage | ~4x memory reduction |
| **SemanticRouter** | Semantic task routing with cosine similarity | Fast intent routing |
| **9 RL Algorithms** | Q-Learning, SARSA, A2C, PPO, DQN, Decision Transformer, etc. | Task-specific learning |
```bash
# Use RuVector via Ruflo
npx ruflo@alpha hooks intelligence --status
```
</details>
### Get Started Fast
```bash
# One-line install (recommended)
curl -fsSL https://cdn.jsdelivr.net/gh/ruvnet/claude-flow@main/scripts/install.sh | bash
# Or full setup with MCP + diagnostics
curl -fsSL https://cdn.jsdelivr.net/gh/ruvnet/claude-flow@main/scripts/install.sh | bash -s -- --full
# Or via npx
npx ruflo@alpha init --wizard
```
---
### Key Capabilities
**60+ Specialized Agents** - Ready-to-use AI agents for coding, code review, testing, security audits, documentation, and DevOps. Each agent is optimized for its specific role.

**Coordinated Agent Teams** - Run unlimited agents simultaneously in organized swarms. Agents spawn sub-workers, communicate, share context, and divide work automatically using hierarchical (queen/workers) or mesh (peer-to-peer) patterns.

**Learns From Your Workflow** - The system remembers what works. Successful patterns are stored and reused, routing similar tasks to the best-performing agents. It gets smarter over time.

**Works With Any LLM** - Switch between Claude, GPT, Gemini, Cohere, or local models like Llama. Automatic failover if one provider is unavailable. Smart routing picks the cheapest option that meets quality requirements.

**Plugs Into Claude Code** - Native integration via MCP (Model Context Protocol). Use ruflo commands directly in your Claude Code sessions with full tool access.

**Production-Ready Security** - Built-in protection against prompt injection, plus input validation, path traversal prevention, command injection blocking, and safe credential handling.

**Extensible Plugin System** - Add custom capabilities with the plugin SDK. Create workers, hooks, providers, and security modules. Share plugins via the decentralized IPFS marketplace.
---
### A Multi-Purpose Agent Toolkit
<details>
<summary><strong>Core Flow</strong> – How requests move through the system</summary>
Every request flows through four layers: from your CLI or Claude Code interface, through intelligent routing, to specialized agents, and finally to LLM providers for reasoning.
| Layer | Components | What It Does |
|-------|------------|--------------|
| User | Claude Code, CLI | Your interface to control and run commands |
| Orchestration | MCP Server, Router, Hooks | Routes requests to the right agents |
| Agents | 60+ types | Specialized workers (coder, tester, reviewer...) |
| Providers | Anthropic, OpenAI, Google, Ollama | AI models that power reasoning |
</details>
<details>
<summary><strong>Swarm Coordination</strong> – How agents work together</summary>
Agents organize into swarms led by queens that coordinate work, prevent drift, and reach consensus on decisions, even when some agents fail.
| Layer | Components | What It Does |
|-------|------------|--------------|
| Coordination | Queen, Swarm, Consensus | Manages agent teams (Raft, Byzantine, Gossip) |
| Drift Control | Hierarchical topology, Checkpoints | Prevents agents from going off-task |
| Hive Mind | Queen-led hierarchy, Collective memory | Strategic/tactical/adaptive queens coordinate workers |
| Consensus | Byzantine, Weighted, Majority | Fault-tolerant decisions (2/3 majority for BFT) |
**Hive Mind Capabilities:**
- **Queen Types**: Strategic (planning), Tactical (execution), Adaptive (optimization)
- **8 Worker Types**: Researcher, Coder, Analyst, Tester, Architect, Reviewer, Optimizer, Documenter
- **3 Consensus Algorithms**: Majority, Weighted (Queen 3x), Byzantine (f < n/3)
- **Collective Memory**: Shared knowledge, LRU cache, SQLite persistence with WAL
- **Performance**: Fast batch spawning with parallel agent coordination
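As a rough illustration of the voting rules above (a toy sketch, not Ruflo's implementation — `weightedMajority` and `maxByzantineFaults` are hypothetical names):

```javascript
// Weighted majority: the queen's vote counts 3x, as described above.
function weightedMajority(votes) {
  // votes: [{ role: "queen" | "worker", choice: string }]
  const tally = {};
  for (const { role, choice } of votes) {
    const weight = role === "queen" ? 3 : 1;
    tally[choice] = (tally[choice] || 0) + weight;
  }
  // Return the choice with the highest total weight.
  return Object.entries(tally).sort((a, b) => b[1] - a[1])[0][0];
}

// Byzantine fault tolerance: with n agents, consensus survives at most
// f faulty agents where f < n/3 (so 3 agents tolerate 0 faults, 7 tolerate 2).
function maxByzantineFaults(n) {
  return Math.ceil(n / 3) - 1; // largest integer f with f < n/3
}
```

For example, a queen voting "merge" outweighs two workers voting "revise" (3 vs. 2), which is why weighted consensus keeps the hierarchy decisive without ignoring workers entirely.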
</details>
<details>
<summary><strong>Intelligence &amp; Memory</strong> – How the system learns and remembers</summary>
The system stores successful patterns in vector memory, builds a knowledge graph for structural understanding, learns from outcomes via neural networks, and adapts routing based on what works best.
| Layer | Components | What It Does |
|-------|------------|--------------|
| Memory | HNSW, AgentDB, Cache | Stores and retrieves patterns with fast HNSW search |
| Knowledge Graph | MemoryGraph, PageRank, Communities | Identifies influential insights, detects clusters (ADR-049) |
| Self-Learning | LearningBridge, SONA, ReasoningBank | Triggers learning from insights, confidence lifecycle (ADR-049) |
| Agent Scopes | AgentMemoryScope, 3-scope dirs | Per-agent isolation + cross-agent knowledge transfer (ADR-049) |
| Embeddings | ONNX Runtime, MiniLM | Local vectors without API calls (75x faster) |
| Learning | SONA, MoE, ReasoningBank | Self-improves from results (<0.05ms adaptation) |
| Fine-tuning | MicroLoRA, EWC++ | Lightweight adaptation without full retraining |
</details>
<details>
<summary><strong>Optimization</strong> – How to reduce cost and latency</summary>
Skip expensive LLM calls for simple tasks using WebAssembly transforms, and compress tokens to reduce API costs by 30-50%.
| Layer | Components | What It Does |
|-------|------------|--------------|
| Agent Booster | WASM, AST analysis | Skips LLM for simple edits (<1ms) |
| Token Optimizer | Compression, Caching | Reduces token usage 30-50% |
</details>
<details>
<summary><strong>Operations</strong> – Background services and integrations</summary>
Background daemons handle security audits, performance optimization, and session persistence automatically while you work.
| Layer | Components | What It Does |
|-------|------------|--------------|
| Background | Daemon, 12 Workers | Auto-runs audits, optimization, learning |
| Security | AIDefence, Validation | Blocks injection, detects threats |
| Sessions | Persist, Restore, Export | Saves context across conversations |
| GitHub | PR, Issues, Workflows | Manages repos and code reviews |
| Analytics | Metrics, Benchmarks | Monitors performance, finds bottlenecks |
</details>
<details>
<summary><strong>Task Routing</strong> – Extend your Claude Code subscription by 250%</summary>
Smart routing skips expensive LLM calls when possible. Simple edits use WASM (free), medium tasks use cheaper models. This can extend your Claude Code usage by 250% or save significantly on direct API costs.
| Complexity | Handler | Speed |
|------------|---------|-------|
| Simple | Agent Booster (WASM) | <1ms |
| Medium | Haiku/Sonnet | ~500ms |
| Complex | Opus + Swarm | 2-5s |
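The tier table can be read as a simple lookup. The sketch below is illustrative only; the handler labels are placeholders, not Ruflo's internal identifiers:

```javascript
// Map a task's complexity class to the cheapest adequate handler,
// mirroring the routing-tier table above (labels are hypothetical).
function routeByComplexity(complexity) {
  switch (complexity) {
    case "simple":  return { handler: "agent-booster-wasm", latency: "<1ms" };
    case "medium":  return { handler: "haiku-or-sonnet",    latency: "~500ms" };
    case "complex": return { handler: "opus-plus-swarm",    latency: "2-5s" };
    default: throw new Error(`unknown complexity: ${complexity}`);
  }
}
```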
</details>
<details>
<summary><strong>Agent Booster (WASM)</strong> – Skip the LLM for simple code transforms</summary>
Agent Booster uses WebAssembly to handle simple code transformations without calling the LLM at all. When the hooks system detects a simple task, it routes directly to Agent Booster for instant results.
**Supported Transform Intents:**
| Intent | What It Does | Example |
|--------|--------------|---------|
| `var-to-const` | Convert var/let to const | `var x = 1` → `const x = 1` |
| `add-types` | Add TypeScript type annotations | `function foo(x)` → `function foo(x: string)` |
| `add-error-handling` | Wrap in try/catch | Adds proper error handling |
| `async-await` | Convert promises to async/await | `.then()` chains → `await` |
| `add-logging` | Add console.log statements | Adds debug logging |
| `remove-console` | Strip console.* calls | Removes all console statements |
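For intuition, here is a toy `var-to-const` transform. The real Agent Booster performs WASM-compiled AST analysis; this regex sketch only illustrates the intent and skips details like checking for reassignment:

```javascript
// Naive var-to-const sketch (illustration only, NOT the Agent Booster).
// Only rewrites declarations initialized on the same line, since
// `const` requires an initializer; a real transform would also verify
// the binding is never reassigned before promoting it to const.
function varToConst(source) {
  return source.replace(/\b(var|let)\b(?=\s+\w+\s*=)/g, "const");
}
```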
**Hook Signals:**
When you see these in hook output, the system is telling you how to optimize:
```bash
# Agent Booster available - skip LLM entirely
[AGENT_BOOSTER_AVAILABLE] Intent: var-to-const
→ Use Edit tool directly, 352x faster than LLM
# Model recommendation for Task tool
[TASK_MODEL_RECOMMENDATION] Use model="haiku"
→ Pass model="haiku" to Task tool for cost savings
```
**Performance:**
| Metric | Agent Booster | LLM Call |
|--------|---------------|----------|
| Latency | <1ms | 2-5s |
| Cost | $0 | $0.0002-$0.015 |
| Speedup | **352x faster** | baseline |
</details>
<details>
<summary><strong>Token Optimizer</strong> – 30-50% token reduction</summary>
The Token Optimizer integrates agentic-flow optimizations to reduce API costs by compressing context and caching results.
**Savings Breakdown:**
| Optimization | Token Savings | How It Works |
|--------------|---------------|--------------|
| ReasoningBank retrieval | -32% | Fetches relevant patterns instead of full context |
| Agent Booster edits | -15% | Simple edits skip LLM entirely |
| Cache (95% hit rate) | -10% | Reuses embeddings and patterns |
| Optimal batch size | -20% | Groups related operations |
| **Combined** | **30-50%** | Stacks multiplicatively |
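Independent savings rates combine as `1 - (1-s1)(1-s2)...`. The sketch below shows the arithmetic; fully stacked, the four rates above would approach 58%, so the quoted 30-50% presumably reflects that each optimization applies only to a subset of requests:

```javascript
// Multiplicative stacking of independent savings rates (illustration).
// Each optimization leaves (1 - rate) of the tokens; the combined
// savings is one minus the product of what remains.
function combinedSavings(rates) {
  const remaining = rates.reduce((acc, r) => acc * (1 - r), 1);
  return 1 - remaining;
}
```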
**Usage:**
```typescript
import { getTokenOptimizer } from '@claude-flow/integration';
const optimizer = await getTokenOptimizer();
// Get compact context (32% fewer tokens)
const ctx = await optimizer.getCompactContext("auth patterns");
// Optimized edit (352x faster for simple transforms)
await optimizer.optimizedEdit(file, oldStr, newStr, "typescript");
// Optimal config for swarm (100% success rate)
const config = optimizer.getOptimalConfig(agentCount);
```
</details>
<details>
<summary><strong>Anti-Drift Swarm Configuration</strong> – Prevent goal drift in multi-agent work</summary>
Complex swarms can drift from their original goals. Ruflo V3 includes anti-drift defaults that prevent agents from going off-task.
**Recommended Configuration:**
```javascript
// Anti-drift defaults (ALWAYS use for coding tasks)
swarm_init({
topology: "hierarchical", // Single coordinator enforces alignment
maxAgents: 8, // Smaller team = less drift surface
strategy: "specialized" // Clear roles reduce ambiguity
})
```
**Why This Prevents Drift:**
| Setting | Anti-Drift Benefit |
|---------|-------------------|
| `hierarchical` | Coordinator validates each output against goal, catches divergence early |
| `maxAgents: 6-8` | Fewer agents = less coordination overhead, easier alignment |
| `specialized` | Clear boundaries - each agent knows exactly what to do, no overlap |
| `raft` consensus | Leader maintains authoritative state, no conflicting decisions |
**Additional Anti-Drift Measures:**
- Frequent checkpoints via `post-task` hooks
- Shared memory namespace for all agents
- Short task cycles with verification gates
- Hierarchical coordinator reviews all outputs
**Task → Agent Routing (Anti-Drift):**
| Code | Task Type | Recommended Agents |
|------|-----------|-------------------|
| 1 | Bug Fix | coordinator, researcher, coder, tester |
| 3 | Feature | coordinator, architect, coder, tester, reviewer |
| 5 | Refactor | coordinator, architect, coder, reviewer |
| 7 | Performance | coordinator, perf-engineer, coder |
| 9 | Security | coordinator, security-architect, auditor |
| 11 | Memory | coordinator, memory-specialist, perf-engineer |
</details>
### Claude Code: With vs Without Ruflo
| Capability | Claude Code Alone | Claude Code + Ruflo |
|------------|-------------------|---------------------------|
| **Agent Collaboration** | Agents work in isolation, no shared context | Agents collaborate via swarms with shared memory and consensus |
| **Coordination** | Manual orchestration between tasks | Queen-led hierarchy with 5 consensus algorithms (Raft, Byzantine, Gossip) |
| **Hive Mind** | ❌ Not available | Queen-led swarms with collective intelligence, 3 queen types, 8 worker types |
| **Consensus** | ❌ No multi-agent decisions | Byzantine fault-tolerant voting (f < n/3), weighted, majority |
| **Memory** | Session-only, no persistence | HNSW vector memory with sub-ms retrieval + knowledge graph |
| **Vector Database** | ❌ No native support | RuVector PostgreSQL with 77+ SQL functions, ~61µs search, 16,400 QPS |
| **Knowledge Graph** | ❌ Flat insight lists | PageRank + community detection identifies influential insights (ADR-049) |
| **Collective Memory** | ❌ No shared knowledge | Shared knowledge base with LRU cache, SQLite persistence, 8 memory types |
| **Learning** | Static behavior, no adaptation | SONA self-learning with <0.05ms adaptation, LearningBridge for insights |
| **Agent Scoping** | Single project scope | 3-scope agent memory (project/local/user) with cross-agent transfer |
| **Task Routing** | You decide which agent to use | Intelligent routing based on learned patterns (89% accuracy) |
| **Complex Tasks** | Manual breakdown required | Automatic decomposition across domains (Security, Core, Integration, Support) |
| **Background Workers** | Nothing runs automatically | 12 context-triggered workers auto-dispatch on file changes, patterns, sessions |
| **LLM Provider** | Anthropic only | 6 providers with automatic failover and cost-based routing (85% savings) |
| **Security** | Standard protections | CVE-hardened with bcrypt, input validation, path traversal prevention |
| **Performance** | Baseline | Faster tasks via parallel swarm spawning and intelligent routing |
## Quick Start
### Prerequisites
- **Node.js 20+** (required)
- **npm 9+** / **pnpm** / **bun** package manager
**IMPORTANT**: Claude Code must be installed first:
```bash
# 1. Install Claude Code globally
npm install -g @anthropic-ai/claude-code
# 2. (Optional) Skip permissions check for faster setup
claude --dangerously-skip-permissions
```
### Installation
#### One-Line Install (Recommended)
```bash
# curl-style installer with progress display
curl -fsSL https://cdn.jsdelivr.net/gh/ruvnet/claude-flow@main/scripts/install.sh | bash
# Full setup (global + MCP + diagnostics)
curl -fsSL https://cdn.jsdelivr.net/gh/ruvnet/claude-flow@main/scripts/install.sh | bash -s -- --full
```
<details>
<summary><b>Install Options</b></summary>
| Option | Description |
|--------|-------------|
| `--global`, `-g` | Install globally (`npm install -g`) |
| `--minimal`, `-m` | Skip optional deps (faster, ~15s) |
| `--setup-mcp` | Auto-configure MCP server for Claude Code |
| `--doctor`, `-d` | Run diagnostics after install |
| `--no-init` | Skip project initialization (init runs by default) |
| `--full`, `-f` | Full setup: global + MCP + doctor |
| `--version=X.X.X` | Install specific version |
**Examples:**
```bash
# Minimal global install (fastest)
curl ... | bash -s -- --global --minimal
# With MCP auto-setup
curl ... | bash -s -- --global --setup-mcp
# Full setup with diagnostics
curl ... | bash -s -- --full
```
**Speed:**
| Mode | Time |
|------|------|
| npx (cached) | ~3s |
| npx (fresh) | ~20s |
| global | ~35s |
| --minimal | ~15s |
</details>
#### npm/npx Install
```bash
# Quick start (no install needed)
npx ruflo@alpha init
# Or install globally
npm install -g ruflo@alpha
ruflo init
# With Bun (faster)
bunx ruflo@alpha init
```
#### Install Profiles
| Profile | Size | Use Case |
|---------|------|----------|
| `--omit=optional` | ~45MB | Core CLI only (fastest) |
| Default | ~340MB | Full install with ML/embeddings |
```bash
# Minimal install (skip ML/embeddings)
npm install -g ruflo@alpha --omit=optional
```
<details>
<summary><strong>OpenAI Codex CLI Support</strong> – Full Codex integration with self-learning</summary>
Ruflo supports both **Claude Code** and **OpenAI Codex CLI** via the [@claude-flow/codex](https://www.npmjs.com/package/@claude-flow/codex) package, following the [Agentics Foundation](https://agentics.org) standard.
### Quick Start for Codex
```bash
# Initialize for Codex CLI (creates AGENTS.md instead of CLAUDE.md)
npx ruflo@alpha init --codex
# Full Codex setup with all 137+ skills
npx ruflo@alpha init --codex --full
# Initialize for both platforms (dual mode)
npx ruflo@alpha init --dual
```
### Platform Comparison
| Feature | Claude Code | OpenAI Codex |
|---------|-------------|--------------|
| Config File | `CLAUDE.md` | `AGENTS.md` |
| Skills Dir | `.claude/skills/` | `.agents/skills/` |
| Skill Syntax | `/skill-name` | `$skill-name` |
| Settings | `settings.json` | `config.toml` |
| MCP | Native | Via `codex mcp add` |
| Default Model | claude-sonnet | gpt-5.3 |
### Key Concept: Execution Model
```
┌───────────────────────────────────────────────────────────────────┐
│ CLAUDE-FLOW = ORCHESTRATOR (tracks state, stores memory)          │
│ CODEX       = EXECUTOR (writes code, runs commands, implements)   │
└───────────────────────────────────────────────────────────────────┘
```
**Codex does the work. Claude-flow coordinates and learns.**
### Dual-Mode Integration (Claude Code + Codex)
Run Claude Code for interactive development and spawn headless Codex workers for parallel background tasks:
```
┌───────────────────────────────────────────────────────────────────┐
│ CLAUDE CODE (interactive)     ↔   CODEX WORKERS (headless)        │
│ - Main conversation               - Parallel background execution │
│ - Complex reasoning               - Bulk code generation          │
│ - Architecture decisions          - Test execution                │
│ - Final integration               - File processing               │
└───────────────────────────────────────────────────────────────────┘
```
```bash
# Spawn parallel Codex workers from Claude Code
claude -p "Analyze src/auth/ for security issues" --session-id "task-1" &
claude -p "Write unit tests for src/api/" --session-id "task-2" &
claude -p "Optimize database queries in src/db/" --session-id "task-3" &
wait # Wait for all to complete
```
| Dual-Mode Feature | Benefit |
|-------------------|---------|
| Parallel Execution | 4-8x faster for bulk tasks |
| Cost Optimization | Route simple tasks to cheaper workers |
| Context Preservation | Shared memory across platforms |
| Best of Both | Interactive + batch processing |
### Dual-Mode CLI Commands (NEW)
```bash
# List collaboration templates
npx @claude-flow/codex dual templates
# Run feature development swarm (architect → coder → tester → reviewer)
npx @claude-flow/codex dual run --template feature --task "Add user auth"
# Run security audit swarm (scanner → analyzer → fixer)
npx @claude-flow/codex dual run --template security --task "src/auth/"
# Run refactoring swarm (analyzer → planner → refactorer → validator)
npx @claude-flow/codex dual run --template refactor --task "src/legacy/"
```
### Pre-Built Collaboration Templates
| Template | Pipeline | Platforms |
|----------|----------|-----------|
| **feature** | architect → coder → tester → reviewer | Claude + Codex |
| **security** | scanner → analyzer → fixer | Codex + Claude |
| **refactor** | analyzer → planner → refactorer → validator | Claude + Codex |
### MCP Integration for Codex
When you run `init --codex`, the MCP server is automatically registered:
```bash
# Verify MCP is registered
codex mcp list
# If not present, add manually:
codex mcp add ruflo -- npx ruflo mcp start
```
### Self-Learning Workflow
```
1. LEARN: memory_search(query="task keywords") → Find similar patterns
2. COORD: swarm_init(topology="hierarchical") → Set up coordination
3. EXECUTE: YOU write code, run commands → Codex does the real work
4. REMEMBER: memory_store(key, value, namespace="patterns") → Save for future tasks
```
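The cycle can be pictured with a toy in-memory store. The `memoryStore`/`memorySearch` functions below are naive stand-ins for the real `memory_store`/`memory_search` MCP tools, which use 384-dimensional vector search rather than substring matching:

```javascript
// Toy pattern store illustrating the LEARN → EXECUTE → REMEMBER cycle.
// Not Ruflo's API — a stand-in to show the shape of the workflow.
const patterns = new Map();

// REMEMBER: save a successful pattern under a namespaced key.
function memoryStore(key, value, namespace = "patterns") {
  patterns.set(`${namespace}:${key}`, value);
}

// LEARN: look up prior patterns before starting a new task.
function memorySearch(query, namespace = "patterns") {
  const hits = [];
  for (const [key, value] of patterns) {
    if (key.startsWith(`${namespace}:`) && key.includes(query)) {
      hits.push(value);
    }
  }
  return hits;
}
```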
The **Intelligence Loop** (ADR-050) automates this cycle through hooks. Each session automatically:
- Builds a knowledge graph from memory entries (PageRank + Jaccard similarity)
- Injects ranked context into every route decision
- Tracks edit patterns and generates new insights
- Boosts confidence for useful patterns, decays unused ones
- Saves snapshots so you can track improvement with `node .claude/helpers/hook-handler.cjs stats`
### MCP Tools for Learning
| Tool | Purpose | When to Use |
|------|---------|-------------|
| `memory_search` | Semantic vector search | BEFORE starting any task |
| `memory_store` | Save patterns with embeddings | AFTER completing successfully |
| `swarm_init` | Initialize coordination | Start of complex tasks |
| `agent_spawn` | Register agent roles | Multi-agent workflows |
| `neural_train` | Train on patterns | Periodic improvement |
### 137+ Skills Available
| Category | Examples |
|----------|----------|
| **V3 Core** | `$v3-security-overhaul`, `$v3-memory-unification`, `$v3-performance-optimization` |
| **AgentDB** | `$agentdb-vector-search`, `$agentdb-optimization`, `$agentdb-learning` |
| **Swarm** | `$swarm-orchestration`, `$swarm-advanced`, `$hive-mind-advanced` |
| **GitHub** | `$github-code-review`, `$github-workflow-automation`, `$github-multi-repo` |
| **SPARC** | `$sparc-methodology`, `$sparc:architect`, `$sparc:coder`, `$sparc:tester` |
| **Flow Nexus** | `$flow-nexus-neural`, `$flow-nexus-swarm`, `$flow-nexus:workflow` |
| **Dual-Mode** | `$dual-spawn`, `$dual-coordinate`, `$dual-collect` |
### Vector Search Details
- **Embedding Dimensions**: 384
- **Search Algorithm**: HNSW (sub-millisecond)
- **Similarity Scoring**: 0-1 (higher = better)
- Score > 0.7: Strong match, use pattern
- Score 0.5-0.7: Partial match, adapt
- Score < 0.5: Weak match, create new
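Applying those thresholds is mechanical once you have a similarity score. A toy sketch (the vectors here are tiny; real embeddings are 384-dimensional):

```javascript
// Cosine similarity between two equal-length vectors: dot product
// divided by the product of the vector norms, giving a score in [-1, 1].
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Map a score onto the thresholds listed above.
function matchAction(score) {
  if (score > 0.7) return "use pattern";
  if (score >= 0.5) return "adapt";
  return "create new";
}
```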
</details>
### Basic Usage
```bash
# Initialize project
npx ruflo@alpha init
# Start MCP server for Claude Code integration
npx ruflo@alpha mcp start
# Run a task with agents
npx ruflo@alpha --agent coder --task "Implement user authentication"
# List available agents
npx ruflo@alpha --list
```
### Upgrading
```bash
# Update helpers and statusline (preserves your data)
npx ruflo@v3alpha init upgrade
# Update AND add any missing skills/agents/commands
npx ruflo@v3alpha init upgrade --add-missing
```
The `--add-missing` flag automatically detects and installs new skills, agents, and commands that were added in newer versions, without overwriting your existing customizations.
### Claude Code MCP Integration
Add ruflo as an MCP server for seamless integration:
```bash
# Add ruflo MCP server to Claude Code
claude mcp add ruflo -- npx -y ruflo@latest mcp start
# Verify installation
claude mcp list
```
Once added, Claude Code can use all 175+ ruflo MCP tools directly:
- `swarm_init` - Initialize agent swarms
- `agent_spawn` - Spawn specialized agents
- `memory_search` - Search patterns with HNSW vector search
- `hooks_route` - Intelligent task routing
- And 170+ more tools...
---
## What is it, exactly? Agents that learn, build, and work perpetually.
<details>
<summary><strong>Why Ruflo v3?</strong></summary>
Ruflo v3 introduces **self-learning neural capabilities** that no other agent orchestration framework offers. While competitors require manual agent configuration and static routing, Ruflo learns from every task execution, prevents catastrophic forgetting of successful patterns, and intelligently routes work to specialized experts.
#### Neural & Learning
| Feature | Ruflo v3 | CrewAI | LangGraph | AutoGen | Manus |
|---------|----------|--------|-----------|---------|-------|
| **Self-Learning** | ✅ SONA + EWC++ | ❌ | ❌ | ❌ | ❌ |
| **Prevents Forgetting** | ✅ EWC++ consolidation | ❌ | ❌ | ❌ | ❌ |
| **Pattern Learning** | ✅ From trajectories | ❌ | ❌ | ❌ | ❌ |
| **Expert Routing** | ✅ MoE (8 experts) | Manual | Graph edges | ❌ | Fixed |
| **Attention Optimization** | ✅ Flash Attention | ❌ | ❌ | ❌ | ❌ |
| **Low-Rank Adaptation** | ✅ LoRA (128x compress) | ❌ | ❌ | ❌ | ❌ |
#### Memory & Embeddings
| Feature | Ruflo v3 | CrewAI | LangGraph | AutoGen | Manus |
|---------|----------|--------|-----------|---------|-------|
| **Vector Memory** | ✅ HNSW (sub-ms search) | ❌ | Via plugins | ❌ | ❌ |
| **Knowledge Graph** | ✅ PageRank + communities | ❌ | ❌ | ❌ | ❌ |
| **Self-Learning Memory** | ✅ LearningBridge (SONA) | ❌ | ❌ | ❌ | ❌ |
| **Agent-Scoped Memory** | ✅ 3-scope (project/local/user) | ❌ | ❌ | ❌ | ❌ |
| **PostgreSQL Vector DB** | ✅ RuVector (77+ SQL functions) | ❌ | pgvector only | ❌ | ❌ |
| **Hyperbolic Embeddings** | ✅ Poincaré ball (native + SQL) | ❌ | ❌ | ❌ | ❌ |
| **Quantization** | ✅ Int8 (~4x savings) | ❌ | ❌ | ❌ | ❌ |
| **Persistent Memory** | ✅ SQLite + AgentDB + PostgreSQL | ❌ | ❌ | ❌ | Limited |
| **Cross-Session Context** | ✅ Full restoration | ❌ | ❌ | ❌ | ❌ |
| **GNN/Attention in SQL** | ✅ 39 attention mechanisms | ❌ | ❌ | ❌ | ❌ |
#### Swarm & Coordination
| Feature | Ruflo v3 | CrewAI | LangGraph | AutoGen | Manus |
|---------|----------|--------|-----------|---------|-------|
| **Swarm Topologies** | ✅ 4 types | 1 | 1 | 1 | 1 |
| **Consensus Protocols** | ✅ 5 (Raft, BFT, etc.) | ❌ | ❌ | ❌ | ❌ |
| **Work Ownership** | ✅ Claims system | ❌ | ❌ | ❌ | ❌ |
| **Background Workers** | ✅ 12 auto-triggered | ❌ | ❌ | ❌ | ❌ |
| **Multi-Provider LLM** | ✅ 6 with failover | 2 | 3 | 2 | 1 |
#### Developer Experience
| Feature | Ruflo v3 | CrewAI | LangGraph | AutoGen | Manus |
|---------|----------|--------|-----------|---------|-------|
| **MCP Integration** | ✅ Native (170+ tools) | ❌ | ❌ | ❌ | ❌ |
| **Skills System** | ✅ 42+ pre-built | ❌ | ❌ | ❌ | Limited |
| **Stream Pipelines** | ✅ JSON chains | ❌ | Via code | ❌ | ❌ |
| **Pair Programming** | ✅ Driver/Navigator | ❌ | ❌ | ❌ | ❌ |
| **Auto-Updates** | ✅ With rollback | ❌ | ❌ | ❌ | ❌ |
#### Security & Platform
| Feature | Ruflo v3 | CrewAI | LangGraph | AutoGen | Manus |
|---------|----------|--------|-----------|---------|-------|
| **Threat Detection** | ✅ AIDefence (<10ms) | ❌ | ❌ | ❌ | ❌ |
| **Cloud Platform** | ✅ Flow Nexus | ❌ | ❌ | ❌ | ❌ |
| **Code Transforms** | ✅ Agent Booster (WASM) | ❌ | ❌ | ❌ | ❌ |
| **Input Validation** | ✅ Zod + Path security | ❌ | ❌ | ❌ | ❌ |
<sub>*Comparison updated February 2026. Feature availability based on public documentation.*</sub>
</details>
<details>
<summary><strong>Key Differentiators</strong> – Self-learning, memory optimization, fault tolerance</summary>
What makes Ruflo different from other agent frameworks? These 10 capabilities work together to create a system that learns from experience, runs efficiently on any hardware, and keeps working even when things go wrong.
| Feature | What It Does | Technical Details |
|---------|--------------|-------------------|
| **SONA** | Learns which agents perform best for each task type and routes work accordingly | Self-Optimizing Neural Architecture |
| **EWC++** | Preserves learned patterns when training on new ones, with no forgetting | Elastic Weight Consolidation prevents catastrophic forgetting |
| **MoE** | Routes tasks through 8 specialized expert networks based on task type | Mixture of 8 Experts with dynamic gating |
| **Flash Attention** | Accelerates attention computation for faster agent responses | Optimized attention via @ruvector/attention |
| **Hyperbolic Embeddings** | Represents hierarchical code relationships in compact vector space | Poincaré ball model for hierarchical data |
| **LoRA** | Lightweight model adaptation so agents fit in limited memory | Low-Rank Adaptation via @ruvector/sona |
| **Int8 Quantization** | Converts 32-bit weights to 8-bit with minimal accuracy loss | ~4x memory reduction with calibrated integers |
| **Claims System** | Manages task ownership between humans and agents with handoff support | Work ownership with claim/release/handoff protocols |
| **Byzantine Consensus** | Coordinates agents even when some fail or return bad results | Fault-tolerant, handles up to 1/3 failing agents |
| **RuVector PostgreSQL** | Enterprise-grade vector database with 77+ SQL functions for AI operations | Fast vector search with GNN/attention in SQL |
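To make the Int8 Quantization row concrete, here is a generic symmetric int8 quantization sketch: float weights become 8-bit integers plus a single scale factor, roughly 4x smaller. This illustrates the technique in general, not Ruflo's actual code path:

```javascript
// Symmetric int8 quantization: map floats in [-maxAbs, maxAbs]
// onto [-127, 127] with one shared scale factor.
function quantizeInt8(weights) {
  const maxAbs = Math.max(...weights.map(Math.abs)) || 1;
  const scale = maxAbs / 127;
  const q = Int8Array.from(weights, (w) => Math.round(w / scale));
  return { q, scale }; // int8 values + float scale (~4x smaller overall)
}

// Recover approximate floats; error per weight is at most scale / 2.
function dequantize({ q, scale }) {
  return Array.from(q, (v) => v * scale);
}
```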
</details>
<details>
<summary>π° <strong>Intelligent 3-Tier Model Routing</strong> β Save 75% on API costs, extend Claude Max 2.5x</summary>
Not every task needs the most powerful (and expensive) model. Ruflo analyzes each request and automatically routes it to the cheapest handler that can do the job well. Simple code transforms skip the LLM entirely using WebAssembly. Medium tasks use faster, cheaper models. Only complex architecture decisions use Opus.
**Cost & Usage Benefits:**
| Benefit | Impact |
|---------|--------|
| π΅ **API Cost Reduction** | 75% lower costs by using right-sized models |
| β±οΈ **Claude Max Extension** | 2.5x more tasks within your quota limits |
| π **Faster Simple Tasks** | <1ms for transforms vs 2-5s with LLM |
| π― **Zero Wasted Tokens** | Simple edits use 0 tokens (WASM handles them) |
**Routing Tiers:**
| Tier | Handler | Latency | Cost | Use Cases |
|------|---------|---------|------|-----------|
| **1** | Agent Booster (WASM) | <1ms | $0 | Simple transforms: var→const, add-types, remove-console |
| **2** | Haiku/Sonnet | 500ms-2s | $0.0002-$0.003 | Bug fixes, refactoring, feature implementation |
| **3** | Opus | 2-5s | $0.015 | Architecture, security design, distributed systems |
**Benchmark Results:** 100% routing accuracy, 0.57ms avg routing decision latency
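The tier decision above can be sketched as a simple classifier. Everything here is hypothetical (transform names come from the table; the keyword thresholds are invented for illustration and are not Ruflo's actual routing logic):

```typescript
// Hypothetical 3-tier router sketch. Real routing uses learned signals;
// this only illustrates the cheapest-capable-handler principle.
type Tier = "wasm-booster" | "haiku-sonnet" | "opus";

function routeTask(task: string): Tier {
  // Tier 1: known mechanical transforms never need an LLM (0 tokens).
  const transforms = ["var->const", "add-types", "remove-console"];
  if (transforms.some((t) => task.includes(t))) return "wasm-booster";
  // Tier 3: architecture-level work escalates to the strongest model.
  const complex = ["architecture", "security design", "distributed"];
  if (complex.some((k) => task.toLowerCase().includes(k))) return "opus";
  // Tier 2: everything else (bug fixes, refactoring, features).
  return "haiku-sonnet";
}

console.log(routeTask("apply var->const across src/"));    // "wasm-booster"
console.log(routeTask("fix null check in login handler")); // "haiku-sonnet"
console.log(routeTask("design distributed consensus"));    // "opus"
```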
</details>
<details>
<summary>π <strong>Spec-Driven Development</strong> β Build complete specs, implement without drift</summary>
Complex projects fail when implementation drifts from the original plan. Ruflo solves this with a spec-first approach: define your architecture through ADRs (Architecture Decision Records), organize code into DDD bounded contexts, and let the system enforce compliance as agents work. The result is implementations that match specifications β even across multi-agent swarms working in parallel.
**How It Prevents Drift:**
| Capability | What It Does |
|------------|--------------|
| π― **Spec-First Planning** | Agents generate ADRs before writing code, capturing requirements and decisions |
| π **Real-Time Compliance** | Statusline shows ADR compliance %, catches deviations immediately |
| π§ **Bounded Contexts** | Each domain (Security, Memory, etc.) has clear boundaries agents can't cross |
| ✅ **Validation Gates** | `hooks progress` blocks merges that violate specifications |
| π **Living Documentation** | ADRs update automatically as requirements evolve |
**Specification Features:**
| Feature | Description |
|---------|-------------|
| **Architecture Decision Records** | 10 ADRs defining system behavior, integration patterns, and security requirements |
| **Domain-Driven Design** | 5 bounded contexts with clean interfaces preventing cross-domain pollution |
| **Automated Spec Generation** | Agents create specs from requirements using SPARC methodology |
| **Drift Detection** | Continuous monitoring flags when code diverges from spec |
| **Hierarchical Coordination** | Queen agent enforces spec compliance across all worker agents |
**DDD Bounded Contexts:**
```
┌─────────────┐  ┌─────────────┐  ┌─────────────┐
│    Core     │  │   Memory    │  │  Security   │
│  Agents,    │  │  AgentDB,   │  │ AIDefence,  │
│  Swarms,    │  │  HNSW,      │  │ Validation  │
│  Tasks      │  │  Cache      │  │ CVE Fixes   │
└─────────────┘  └─────────────┘  └─────────────┘
     ┌─────────────┐  ┌─────────────┐
     │ Integration │  │Coordination │
     │  agentic-   │  │ Consensus,  │
     │  flow,MCP   │  │ Hive-Mind   │
     └─────────────┘  └─────────────┘
```
**Key ADRs:**
- **ADR-001**: agentic-flow@alpha as foundation (eliminates 10,000+ duplicate lines)
- **ADR-006**: Unified Memory Service with AgentDB
- **ADR-008**: Vitest testing framework (10x faster than Jest)
- **ADR-009**: Hybrid Memory Backend (SQLite + HNSW)
- **ADR-026**: Intelligent 3-tier model routing
- **ADR-048**: Auto Memory Bridge (Claude Code ↔ AgentDB bidirectional sync)
- **ADR-049**: Self-Learning Memory with GNN (LearningBridge, MemoryGraph, AgentMemoryScope)
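The validation-gate idea can be sketched as a compliance check over ADRs. This is a hypothetical shape (the names and the all-or-nothing threshold are invented for illustration, not the actual `hooks progress` implementation):

```typescript
// Hypothetical spec-compliance gate: merge is blocked unless every
// checked ADR is satisfied. Illustrative only.
interface AdrCheck {
  adr: string;
  compliant: boolean;
}

function gateMerge(
  checks: AdrCheck[],
  threshold = 1.0
): { pass: boolean; score: number } {
  const score =
    checks.filter((c) => c.compliant).length / Math.max(checks.length, 1);
  return { pass: score >= threshold, score };
}

const result = gateMerge([
  { adr: "ADR-006", compliant: true },
  { adr: "ADR-009", compliant: false },
]);
console.log(result); // one violation out of two checks blocks the merge
```

The score doubles as the "ADR compliance %" surfaced in the statusline, so deviations are visible long before the gate fires.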
</details>
---
### ποΈ Architecture Diagrams
<details>
<summary>π <strong>System Overview</strong> β High-level architecture</summary>
```mermaid
flowchart TB
subgraph User["π€ User Layer"]
CC[Claude Code]
CLI[CLI Commands]
end
subgraph Orchestration["π― Orchestration Layer"]
MCP[MCP Server]
Router[Intelligent Router]
Hooks[Self-Learning Hooks]
end
subgraph Agents["π€ Agent Layer"]
Queen[Queen Coordinator]
Workers[60+ Specialized Agents]
Swarm[Swarm Manager]
end
subgraph Intelligence["π§ Intelligence Layer"]
SONA[SONA Learning]
MoE[Mixture of Experts]
HNSW[HNSW Vector Search]
end
subgraph Providers["βοΈ Provider Layer"]
Anthropic[Anthropic]
OpenAI[OpenAI]
Google[Google]
Ollama[Ollama]
end
CC --> MCP
CLI --> MCP
MCP --> Router
Router --> Hooks
Hooks --> Queen
Queen --> Workers
Queen --> Swarm
Workers --> Intelligence
Intelligence --> Providers
```
</details>
<details>
<summary>π <strong>Request Flow</strong> β How tasks are processed</summary>
```mermaid
sequenceDiagram
participant U as User
participant R as Router
participant H as Hooks
participant A as Agent Pool
participant M as Memory
participant P as Provider
U->>R: Submit Task
R->>H: pre-task hook
H->>H: Analyze complexity
alt Simple Task
H->>A: Agent Booster (WASM)
A-->>U: Result (<1ms)
else Medium Task
H->>A: Spawn Haiku Agent
A->>M: Check patterns
M-->>A: Cached context
A->>P: LLM Call
P-->>A: Response
A->>H: post-task hook
H->>M: Store patterns
A-->>U: Result
else Complex Task
H->>A: Spawn Swarm
A->>A: Coordinate agents
A->>P: Multiple LLM calls
P-->>A: Responses
A->>H: post-task hook
A-->>U: Result
end
```
</details>
<details>
<summary>π§ <strong>Memory Architecture</strong> β How knowledge is stored, learned, and retrieved</summary>
```mermaid
flowchart LR
subgraph Input["π₯ Input"]
Query[Query/Pattern]
Insight[New Insight]
end
subgraph Processing["βοΈ Processing"]
Embed[ONNX Embeddings]
Normalize[Normalization]
Learn[LearningBridge<br/>SONA + ReasoningBank]
end
subgraph Storage["πΎ Storage"]
HNSW[(HNSW Index<br/>150x faster)]
SQLite[(SQLite Cache)]
AgentDB[(AgentDB)]
Graph[MemoryGraph<br/>PageRank + Communities]
end
subgraph Retrieval["π Retrieval"]
Vector[Vector Search]
Semantic[Semantic Match]
Rank[Graph-Aware Ranking]
Results[Top-K Results]
end
Query --> Embed
Embed --> Normalize
Normalize --> HNSW
Normalize --> SQLite
Insight --> Learn
Learn --> AgentDB
AgentDB --> Graph
HNSW --> Vector
SQLite --> Vector
AgentDB --> Semantic
Vector --> Rank
Semantic --> Rank
Graph --> Rank
Rank --> Results
```
**Self-Learning Memory (ADR-049):**
| Component | Purpose | Performance |
|-----------|---------|-------------|
| **LearningBridge** | Connects insights to SONA/ReasoningBank neural pipeline | 0.12 ms/insight |
| **MemoryGraph** | PageRank + label propagation knowledge graph | 2.78 ms build (1k nodes) |
| **AgentMemoryScope** | 3-scope agent memory (project/local/user) with cross-agent transfer | 1.25 ms transfer |
| **AutoMemoryBridge** | Bidirectional sync: Claude Code auto memory files ↔ AgentDB | ADR-048 |
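The retrieval side of this diagram reduces to ranked similarity search. Below is a brute-force cosine top-K sketch; the real system uses an HNSW index (the "150x faster" box) rather than a linear scan, so this illustrates only the ranking, not the index structure:

```typescript
// Brute-force cosine top-K retrieval sketch (illustrative; real retrieval
// goes through an HNSW index instead of scanning every vector).
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}

function topK(
  query: number[],
  docs: { key: string; vec: number[] }[],
  k: number
) {
  return docs
    .map((d) => ({ key: d.key, score: cosine(query, d.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const hits = topK(
  [1, 0],
  [
    { key: "auth-pattern", vec: [0.9, 0.1] },
    { key: "css-tip", vec: [0.1, 0.9] },
  ],
  1
);
console.log(hits[0].key); // "auth-pattern"
```

Graph-aware ranking then blends these similarity scores with signals from MemoryGraph (PageRank, community structure) before returning the final top-K.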
</details>
<details>
<summary>π§ <strong>AgentDB v3 Controllers</strong> β 20+ intelligent memory controllers</summary>
Ruflo v3 integrates AgentDB v3 (3.0.0-alpha.10), providing 20+ memory controllers accessible via MCP tools and the CLI.
**Core Memory:**
| Controller | MCP Tool | Description |
|-----------|----------|-------------|
| HierarchicalMemory | `agentdb_hierarchical-store/recall` | Working → episodic → semantic memory tiers with Ebbinghaus forgetting curves and spaced repetition |
| MemoryConsolidation | `agentdb_consolidate` | Automatic clustering and merging of related memories into semantic summaries |
| BatchOperations | `agentdb_batch` | Bulk insert/update/delete operations for high-throughput memory management |
| ReasoningBank | `agentdb_pattern-store/search` | Pattern storage with BM25+semantic hybrid search |
**Intelligence:**
| Controller | MCP Tool | Description |
|-----------|----------|-------------|
| SemanticRouter | `agentdb_semantic-route` | Route tasks to agents using vector similarity instead of manual rules |
| ContextSynthesizer | `agentdb_context-synthesize` | Auto-generate context summaries from memory entries |
| GNNService | — | Graph neural network for intent classification and skill recommendation |
| SonaTrajectoryService | — | Record and predict learning trajectories for agents |
| GraphTransformerService | — | Sublinear attention, causal attention, Granger causality extraction |
**Causal & Explainable:**
| Controller | MCP Tool | Description |
|-----------|----------|-------------|
| CausalRecall | `agentdb_causal-edge` | Recall with causal re-ranking and utility scoring |
| ExplainableRecall | — | Certificates proving *why* a memory was recalled |
| CausalMemoryGraph | — | Directed causal relationships between memory entries |
| MMRDiversityRanker | — | Maximal Marginal Relevance for diverse search results |
**Security & Integrity:**
| Controller | MCP Tool | Description |
|-----------|----------|-------------|
| GuardedVectorBackend | — | Cryptographic proof-of-work before vector insert/search |
| MutationGuard | — | Token-validated mutations with cryptographic proofs |
| AttestationLog | — | Immutable audit trail of all memory operations |
**Optimization:**
| Controller | MCP Tool | Description |
|-----------|----------|-------------|
| RVFOptimizer | — | 4-bit adaptive quantization and progressive compression |
**MCP Tool Examples:**
```bash
# Store to hierarchical memory
agentdb_hierarchical-store --key "auth-pattern" --value "JWT refresh" --tier "semantic"
# Recall from memory tiers
agentdb_hierarchical-recall --query "authentication" --topK 5
# Run memory consolidation
agentdb_consolidate
# Batch insert
agentdb_batch --operation insert --entries '[{"key":"k1","value":"v1"}]'
# Synthesize context
agentdb_context-synthesize --query "error handling patterns"
# Semantic routing
agentdb_semantic-route --input "fix auth bug in login"
```
**Hierarchical Memory Tiers:**
```
┌─────────────────────────────────────────────┐
│ Working Memory                              │ ← Active context, fast access
│ Size-based eviction (1MB limit)             │
├─────────────────────────────────────────────┤
│ Episodic Memory                             │ ← Recent patterns, moderate retention
│ Importance × retention score ranking        │
├─────────────────────────────────────────────┤
│ Semantic Memory                             │ ← Consolidated knowledge, persistent
│ Promoted from episodic via consolidation    │
└─────────────────────────────────────────────┘
```
</details>
<details>
<summary>π <strong>Swarm Topology</strong> β Multi-agent coordination patterns</summary>
```mermaid
flowchart TB
subgraph Hierarchical["π Hierarchical (Default)"]
Q1[Queen] --> W1[Worker 1]
Q1 --> W2[Worker 2]
Q1 --> W3[Worker 3]
end
subgraph Mesh["πΈοΈ Mesh"]
M1[Agent] <--> M2[Agent]
M2 <--> M3[Agent]
M3 <--> M1
end
subgraph Ring["π Ring"]
R1[Agent] --> R2[Agent]
R2 --> R3[Agent]
R3 --> R1
end
subgraph Star["β Star"]
S1[Hub] --> S2[Agent]
S1 --> S3[Agent]
S1 --> S4[Agent]
end
```
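As a rough sketch, the four topologies differ only in how agent edges are wired. The edge-list generator below is purely illustrative (not Ruflo's swarm manager); note that hierarchical and star produce the same edge shape but differ in roles, since the queen coordinates while a star hub merely relays:

```typescript
// Illustrative edge-list generator for the four swarm topologies.
type Edge = [number, number];

function topology(
  kind: "hierarchical" | "mesh" | "ring" | "star",
  n: number
): Edge[] {
  const edges: Edge[] = [];
  switch (kind) {
    case "hierarchical": // queen (node 0) connected to each worker
    case "star":         // hub (node 0) connected to each spoke
      for (let i = 1; i < n; i++) edges.push([0, i]);
      break;
    case "ring":         // each agent passes to the next, wrapping around
      for (let i = 0; i < n; i++) edges.push([i, (i + 1) % n]);
      break;
    case "mesh":         // every pair of agents directly connected
      for (let i = 0; i < n; i++)
        for (let j = i + 1; j < n; j++) edges.push([i, j]);
      break;
  }
  return edges;
}

console.log(topology("ring", 4).length); // 4 edges
console.log(topology("mesh", 4).length); // 6 edges (n*(n-1)/2)
```

Mesh scales quadratically in coordination links, which is why hierarchical is the default for larger swarms.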
</details>
<details>
<summary>π <strong>Security Layer</strong> β Threat detection and prevention</summary>
```mermaid
flowchart TB
subgraph Input["π₯ Input Validation"]
Req[Request] --> Scan[AIDefence Scan]
Scan --> PII[PII Detection]
Scan --> Inject[Injection Check]
Scan --> Jailbreak[Jailbreak Detection]
end
subgraph Decision["βοΈ Decision"]
PII --> Risk{Risk Level}
Inject --> Risk
Jailbreak --> Risk
end
subgraph Action["π¬ Action"]
Risk -->|Safe| Allow[✅ Allow]
Risk -->|Warning| Sanitize[π§Ή Sanitize]
Risk -->|Threat| Block[β Block]
end
subgraph Learn["π Learning"]
Allow --> Log[Log Pattern]
Sanitize --> Log
Block --> Log
Log --> Update[Update Model]
end
```
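The decision step in this pipeline maps detector scores to one of three actions. The sketch below is hypothetical (the score scale and thresholds are invented for illustration; AIDefence's actual risk model is not shown here):

```typescript
// Hypothetical scan -> decide -> act sketch. Detector scores in [0, 1];
// thresholds are illustrative, not AIDefence's real configuration.
type Verdict = "allow" | "sanitize" | "block";

function decide(scores: {
  pii: number;
  injection: number;
  jailbreak: number;
}): Verdict {
  // Overall risk is driven by the worst individual signal.
  const risk = Math.max(scores.pii, scores.injection, scores.jailbreak);
  if (risk >= 0.8) return "block";    // clear threat: reject the request
  if (risk >= 0.4) return "sanitize"; // warning: redact/strip and continue
  return "allow";                     // safe: pass through unchanged
}

console.log(decide({ pii: 0.1, injection: 0.0, jailbreak: 0.0 }));  // "allow"
console.log(decide({ pii: 0.5, injection: 0.2, jailbreak: 0.1 }));  // "sanitize"
console.log(decide({ pii: 0.1, injection: 0.95, jailbreak: 0.3 })); // "block"
```

Whatever the verdict, the pattern is logged and fed back into the model, which is the learning loop shown in the diagram.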
</details>
---
## π Setup & Configuration
Connect Ruflo to your development environment.
<details>
<summary>π <strong>MCP Setup</strong> β Connect Ruflo to Any AI Environment</summary>
Ruflo runs as an MCP (Model Context Protocol) server, allowing you to connect it to any MCP-compatible AI client. This means you can use Ruflo's 60+ agents, swarm coordination, and self-learning capabilities from Claude Desktop, VS Code, Cursor, Windsurf, ChatGPT, and more.
### Quick Add Command
```bash
# Start Ruflo MCP server in any environment
npx ruflo@v3alpha mcp start
```
<details open>
<summary>π₯οΈ <strong>Claude Desktop</strong></summary>
**Config Location:**
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
**Access:** Claude β Settings β Developers β Edit Config
```json
{
"mcpServers": {
"ruflo": {
"command": "npx",
"args": ["ruflo@v3alpha", "mcp", "start"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-..."
}
}
}
}
```
Restart Claude Desktop after saving. Look for the MCP indicator (hammer icon) in the input box.
*Sources: [Claude Help Center](https://support.claude.com/en/articles/10949351-getting-started-with-local-mcp-servers-on-claude-desktop), [Anthropic Desktop Extensions](https://www.anthropic.com/engineering/desktop-extensions)*
</details>
<details>
<summary>β¨οΈ <strong>Claude Code (CLI)</strong></summary>
```bash
# Add via CLI (recommended)
claude mcp add ruflo -- npx ruflo@v3alpha mcp start
# Or add with environment variables
claude mcp add ruflo \
--env ANTHROPIC_API_KEY=sk-ant-... \
-- npx ruflo@v3alpha mcp start
# Verify installation
claude mcp list
```
*Sources: [Claude Code MCP Docs](https://code.claude.com/docs/en/mcp)*
</details>
<details>
<summary>π» <strong>VS Code</strong></summary>
**Requires:** VS Code 1.102+ (MCP support is GA)
**Method 1: Command Palette**
1. Press `Cmd+Shift+P` (Mac) / `Ctrl+Shift+P` (Windows)
2. Run `MCP: Add Server`
3. Enter server details
**Method 2: Workspace Config**
Create `.vscode/mcp.json` in your project:
```json
{
"mcpServers": {
"ruflo": {
"command": "npx",
"args": ["ruflo@v3alpha", "mcp", "start"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-..."
}
}
}
}
```
*Sources: [VS Code MCP Docs](https://code.visualstudio.com/docs/copilot/customization/mcp-servers), [MCP Integration Guides](https://mcpez.com/integrations)*
</details>
<details>
<summary>π― <strong>Cursor IDE</strong></summary>
**Method 1: One-Click** (if available in Cursor MCP marketplace)
**Method 2: Manual Config**
Create `.cursor/mcp.json` in your project (or global config):
```json
{
"mcpServers": {
"ruflo": {
"command": "npx",
"args": ["ruflo@v3alpha", "mcp", "start"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-..."
}
}
}
}
```
**Important:** Cursor must be in **Agent Mode** (not Ask Mode) to access MCP tools. Cursor supports up to 40 MCP tools.
*Sources: [Cursor MCP Docs](https://docs.cursor.com/context/